Gunning Fog Index: What It Is and How to Calculate It
- Gunning Fog Index
- readability
- editing
- plain language
- grade level
A full guide to the Gunning Fog Index: history, the Grade Level = 0.4 × (ASL + PHW) formula, worked examples, score bands, limits, and how to lower your Fog score—plus when to pair it with Flesch-Kincaid.
The Gunning Fog Index (often shortened to Fog or Gunning Fog) is a readability formula that estimates how many years of formal U.S. schooling someone might need to understand a passage on a first read. It is built from two simple inputs—how long your sentences are, on average, and what share of words count as “hard” (usually three or more syllables). This guide explains where Fog came from, how to calculate it by hand, how to interpret the score, and how it compares to Flesch–Kincaid. When you are ready to measure drafts at scale, run them through SynthRead (also reachable as the readability tool) so Fog sits alongside other formulas and sentence-level highlights.
Who invented the Gunning Fog Index?
American businessman and writing consultant Robert Gunning introduced the Fog Index in 1952 in his book The Technique of Clear Writing. Gunning was reacting to what he saw as needlessly foggy business and public prose: long sentences, abstract nouns, and polysyllabic words stacked in ways that exhausted readers. His metric was not meant to judge ideas or nuance; it was a practical signal for editors and communicators who wanted a consistent, repeatable check on density.
Fog belongs to the same family of postwar readability tools as Flesch–Kincaid and SMOG: each uses surface features of text—length and syllable load—as proxies for difficulty. None of them “understands” content; they approximate cognitive load from form.
What is the Gunning Fog formula?
The classic presentation is:
Grade Level ≈ 0.4 × (ASL + PHW)
Where:
- ASL = average sentence length = total words ÷ total sentences
- PHW = percentage of hard words = (hard words ÷ total words) × 100
In Gunning’s tradition, a hard word is usually defined as any word with three or more syllables, subject to counting rules (see below). Some tools also exclude certain proper nouns or apply suffix adjustments—always compare like with like when tracking edits.
If you prefer to think in counts rather than percentages, you can expand PHW:
PHW = 100 × (hard words ÷ total words)
Then plug ASL and PHW into 0.4 × (ASL + PHW). The result is expressed as a grade-like number (for example, 10–12 for many general-audience magazines, higher for dense policy text).
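If you prefer to see the arithmetic as code, here is a minimal sketch of the formula from raw counts. The function name and signature are illustrative choices, not taken from any particular tool; the sample numbers come from the bureaucratic worked example later in this guide.

```python
def gunning_fog(total_words: int, total_sentences: int, hard_words: int) -> float:
    """Gunning Fog grade from raw counts: 0.4 * (ASL + PHW)."""
    asl = total_words / total_sentences       # average sentence length
    phw = 100 * hard_words / total_words      # percentage of hard words
    return 0.4 * (asl + phw)

# One 9-word sentence containing 7 hard (3+ syllable) words:
print(round(gunning_fog(9, 1, 7), 1))  # 34.7
```

Note that the counting is the hard part; once you have words, sentences, and hard words, the formula itself is three lines.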
Why 0.4?
The multiplier scales sentence length and hard-word percentage onto a curve that roughly aligns with U.S. grade-level expectations in Gunning’s calibration samples. It is an empirical fit, not a law of nature—treat outputs as guidelines, especially outside general nonfiction.
Three worked examples (different text types)
The examples below use the same counting rules throughout so you can see how genre shifts the score. (Your software may round syllables slightly differently; for hand checks, pick one syllabifier and stay consistent.)
Example 1 — Plain, instructional copy
Text: “Wash your hands. Use soap. Rinse well.”
- Sentences: 3
- Words: 7
- ASL: 7 ÷ 3 ≈ 2.33
- Hard words (3+ syllables): none → PHW = 0%
- Fog: 0.4 × (2.33 + 0) ≈ 0.9
This is extremely easy on the Fog scale—short sentences and no polysyllabic pile-up.
Example 2 — News-style paragraph
Text: “City leaders approved funding for new libraries after months of debate.”
- Sentences: 1
- Words: 11
- ASL: 11
- Hard words: “libraries” (three syllables: li-brar-ies) → 1 of 11 → PHW ≈ 9.09%
- Fog: 0.4 × (11 + 9.09) ≈ 8.0
One long sentence pulls ASL up; a single three-syllable word is enough to register. Splitting into two sentences and trimming length would move Fog quickly—see long sentences: how to split.
Example 3 — Policy / bureaucratic tone
Text: “The implementation of multiagency coordination mechanisms necessitated additional appropriations.”
- Sentences: 1
- Words: 9
- ASL: 9
- Hard words (illustrative): “implementation,” “multiagency,” “coordination,” “mechanisms,” “necessitated,” “additional,” “appropriations” → 7 of 9 → PHW ≈ 77.8%
- Fog: 0.4 × (9 + 77.8) ≈ 34.7
This is extreme on purpose: stacked nominalizations and one sentence balloon both parts of the formula. Editing would swap nouns for verbs, split the sentence, and replace abstract bundles with concrete actors and actions—patterns that also help B2B clarity.
What counts as a “hard word”?
Most implementations use a three-syllable rule: words with three or more syllables are “complex” or “hard” for Fog-style scoring. Shorter words can still be unfamiliar, but Fog does not capture semantics—only length.
Common exceptions and edge cases
Exact rules vary by tool, but editors routinely watch for:
- Proper nouns and trademarks: Some calculators exclude them; others count syllables anyway. If your tool includes them, brand-heavy copy scores higher even when readers find the names easy.
- Hyphenated words: Often counted as one token; syllables may follow either part (“state-of-the-art” can add multiple beats).
- Compound words: “Healthcare” vs “health care” can change tokenization and syllable counts.
- Suffixes and inflections: Some syllable counters treat endings consistently; others split “running” into two syllables. Small differences change PHW at the margin.
- Numbers, citations, and acronyms: May be skipped or counted depending on tokenizer rules—another reason to use one analyzer for before/after comparisons.
If two tools disagree, sentence boundaries and syllable maps are the usual culprits, not the Fog arithmetic itself.
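A tiny illustration of why tokenization matters: the regex below (one arbitrary choice among many, not how any specific calculator works) treats a hyphenated compound as a single token, so the same phrase can yield one "word" in one tool and four in another.

```python
import re

def tokens(text: str) -> list[str]:
    # Keep hyphens inside a token, so hyphenated compounds count once.
    return re.findall(r"[A-Za-z]+(?:-[A-Za-z]+)*", text)

print(tokens("state-of-the-art"))      # ['state-of-the-art']  -> 1 word
print(tokens("state of the art"))      # ['state', 'of', 'the', 'art']  -> 4 words
```

Swap the regex for a plain whitespace split and both phrases tokenize differently again, which is exactly why before/after comparisons belong in a single analyzer.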
Hand calculation checklist
When you score a passage manually:
- Pick a contiguous sample (many editors use 100+ words for stability).
- Count sentences using the same punctuation rules each time (questions and exclamations count; some tools treat colons or semicolons differently—match your software).
- Count words as tokens separated by whitespace, then decide how you will treat numbers, URLs, and dashes before you start.
- Mark hard words with a consistent syllable rule; re-check edge cases like everyone (three syllables) vs business (two).
- Compute ASL and PHW, then apply 0.4 × (ASL + PHW).
If your hand result diverges from SynthRead by more than a point or two, re-audit sentence splits first—those change ASL dramatically.
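To see where those divergences come from, here is a deliberately naive end-to-end scorer. The sentence splitter, the word regex, and the vowel-group syllable heuristic are all illustrative simplifications, and each is a place where real analyzers differ; dictionary-based syllabifiers will disagree with this heuristic at the margins.

```python
import re

def count_syllables(word: str) -> int:
    """Very rough syllable estimate: count vowel groups, trim a silent final e."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith("le") and n > 1:
        n -= 1
    return max(n, 1)

def gunning_fog(text: str) -> float:
    """Score a passage with naive sentence, word, and syllable rules."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z'-]+", text)
    hard = [w for w in words if count_syllables(w) >= 3]
    asl = len(words) / len(sentences)
    phw = 100 * len(hard) / len(words)
    return 0.4 * (asl + phw)

policy = ("The implementation of multiagency coordination mechanisms "
          "necessitated additional appropriations.")
print(round(gunning_fog(policy), 1))  # 34.7
```

Change the sentence regex to also split on semicolons and the score moves; that is the ASL sensitivity the checklist warns about.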
How do you interpret a Gunning Fog score?
Fog is calibrated so typical ranges read like this (approximate bands):
| Fog score (approx.) | How readers often experience it |
|---------------------|---------------------------------|
| ~6 | Fairly easy general reading; close to many plain-language targets for broad audiences |
| ~12 | Roughly high-school density; common for magazines and opinion pieces |
| ~17+ | College-level thickness; specialist briefings, legal memos, and dense technical summaries often land here |
Scores are not judgments of quality. A high Fog number can be appropriate for expert readers if precision demands longer terms—pair Fog with audience analysis, not vanity metrics. For public health, civic, or web copy aimed at everyone, teams often aim lower; see writing for grade 8 for how “grade level” goals translate to real edits.
Fog is not the same as “reading grade” everywhere
Grade-level metaphors come from U.S. schooling norms; international audiences may not map “grade 12” to the same life experience. Treat Fog as a relative score inside your workflow: compare draft v3 to draft v1, or your page to a competitor’s explainer, rather than chasing a universal constant.
Also remember that grade estimates are not age estimates. A Fog of 10 does not literally mean “tenth graders only”—it means the surface features resemble calibrated samples at that band.
Gunning Fog vs Flesch–Kincaid: when to use which
Flesch–Kincaid (Reading Ease and Grade Level) leans on syllables per word and words per sentence with a different weighting curve; it does not use a discrete “hard word” percentage the way classic Fog framing does. Gunning Fog explicitly foregrounds polysyllabic density alongside sentence length, which makes it sensitive to bureaucratic stacks (implementation of…) even when average syllables-per-word looks moderate.
Practical guidance:
- Use Flesch–Kincaid when you want a widely cited grade benchmark and a 0–100 Reading Ease dial for stakeholder reports.
- Use Fog when you want long-word load surfaced clearly—useful for enterprise landing pages, policy drafts, and anywhere nominalizations creep in.
- Use both (plus SMOG for health-style polysyllable focus) and fix sentences that move multiple metrics together.
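The structural difference is easy to demonstrate with counts. In the sketch below, both samples are hypothetical 100-word, 5-sentence passages with the same total syllable count, but one concentrates its syllables into a handful of long words: Fog moves, Flesch–Kincaid Grade does not. (The counts are invented for illustration; the Flesch–Kincaid weights are the standard published ones.)

```python
def fog_grade(words: int, sentences: int, hard_words: int) -> float:
    """Gunning Fog: sentence length plus polysyllable share."""
    return 0.4 * (words / sentences + 100 * hard_words / words)

def fk_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: uses average syllables per word,
    with no discrete 'hard word' cutoff."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Sample A: 100 words, 5 sentences, 150 syllables, 2 hard words
# Sample B: same totals, but the syllables pile into 10 long words
print(round(fog_grade(100, 5, hard_words=2), 1))    # 8.8
print(round(fog_grade(100, 5, hard_words=10), 1))   # 12.0
print(round(fk_grade(100, 5, syllables=150), 1))    # 9.9 either way
```

This is the "bureaucratic stack" sensitivity in action: average syllables per word stays moderate while polysyllable density climbs.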
SynthRead computes several formulas in one pass—open the readability analyzer and keep a single source of truth for your editorial baseline.
Where is the Gunning Fog Index used?
- Journalism and editorial: Desk editors use Fog alongside word counts to catch paragraphs that will fatigue readers on mobile. A breaking-news lede may intentionally run short and punchy; a Sunday explainer may tolerate higher Fog if context and narrative carry the load—metrics help teams argue for trims where space is tight.
- Government and civic communication: Plain-language initiatives pair targets for sentence length and word choice with readability checks before publication. Fog is useful when multiple authors contribute to one PDF: it surfaces outlier sections that read heavier than the rest, even when tone matches on a skim.
- Healthcare: Patient education teams balance clinical accuracy with Fog-style density—often next to SMOG for polysyllable-heavy medical vocabulary. The goal is not to remove necessary terms, but to budget them: define once, surround with short sentences, and avoid stacking three rare words in the same line.
- Legal and compliance: Long sentences and Latinized terms inflate Fog; teams use it to justify plain-English revisions while preserving necessary precision. Contracts still need exact terms; Fog helps identify where definitions and examples should sit so the obligation is understandable, not just enforceable.
Across these fields, the pattern is the same: Fog is a QA lens, not a substitute for legal review or clinical validation.
Limitations you should not ignore
Fog is fast and transparent, but it has blind spots:
- Jargon and domain terms: Short acronyms may score “easy” while remaining opaque; long technical words may score “hard” while being second nature to specialists.
- Semantics and structure: It ignores discourse, argument order, headings, and visuals—all of which affect real comprehension.
- Audience motivation: A motivated reader tolerates higher Fog; a distracted reader does not.
- Translation and multilingual text: Syllable heuristics differ by language; Fog is rooted in English conventions.
For SEO and engagement, combine numeric targets with substance—see readability and SEO—and edit for claims, evidence, and scannability, not just syllables.
Dialogue, poetry, and UI microcopy
Fog is a poor fit for transcripts, fiction dialogue, and one-line UI strings—sentence boundaries and intentional fragments skew ASL. Use human judgment and task-based testing there. For microcopy, prefer clarity checks (“Did someone succeed on first try?”) over chasing a Fog number on three words.
How to lower your Gunning Fog Index (specific techniques)
- Split overloaded sentences. Aim for one main idea per sentence; break chains linked by semicolons or “which” stacks.
- Swap nominalizations for verbs. Conduct an analysis → analyze; provide an indication → show.
- Replace abstract bundles with actors. The implementation of the policy → We implemented the policy (when voice allows).
- Define long terms once, then use short forms. First mention: electrocardiogram (ECG); later: ECG.
- Use lists for procedures instead of paragraph-long enumerations.
- Cut filler intensifiers that lengthen clauses without adding meaning.
- Read aloud the worst-scoring sentences; if you stumble, simplify structure before arguing over individual words.
- Front-load the human subject when possible: The committee approved… beats Approval was given by the committee…—often shorter in words and clearer to scan.
- Collapse double helpers: In order to → to; due to the fact that → because (when accuracy allows).
What not to do: Do not “dumb down” meaning. If a long word is the only precise term, keep it and pay down Fog elsewhere—split neighboring sentences, add a plain-language gloss, or move detail into a caption or footnote so the main line breathes.
After each pass, re-run SynthRead so you can compare before/after Fog with the same tokenizer—especially helpful alongside average sentence length work.
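The payoff of these edits shows up directly in the two inputs. With hypothetical before/after counts (invented purely for illustration), splitting one overloaded 27-word sentence into three short ones and trading three nominalizations for verbs moves both ASL and PHW at once:

```python
def fog(words: int, sentences: int, hard_words: int) -> float:
    return 0.4 * (words / sentences + 100 * hard_words / words)

# Before: one 27-word sentence with 6 three-plus-syllable words (hypothetical)
before = fog(27, 1, 6)
# After: the same idea as three short sentences, 24 words, 3 hard words
after = fog(24, 3, 3)
print(round(before, 1), round(after, 1))  # 19.7 8.2
```

Roughly a ten-grade drop, most of it from the sentence split; that is why re-auditing sentence boundaries comes first.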
Quick reference
Formula: Grade Level ≈ 0.4 × (ASL + PHW), with ASL = words ÷ sentences and PHW = percentage of hard (typically 3+ syllable) words.
Origin: Robert Gunning, 1952, The Technique of Clear Writing.
Benchmarks (approx.): ~6 easier general reading; ~12 high-school density; ~17+ college-thick.
Tools: Use the readability tool (SynthRead) to score Fog next to Flesch–Kincaid, SMOG, and other checks in one workflow.
Related reading
- Flesch–Kincaid complete guide — Grade level vs Reading Ease.
- SMOG readability index — Polysyllable-heavy texts.
- Passive voice: why it matters — Structure edits that often reduce Fog.
- LinkedIn post readability checklist — Short-form density patterns.
Itamar Haim
SEO & GEO Lead, SynthQuery
Founder of SynthQuery and SEO/GEO lead. He helps teams ship content that reads well to humans and holds up under AI-assisted search and detection workflows.
He has led organic growth and content strategy engagements with companies including Elementor, Yotpo, and Imagen AI, combining technical SEO with editorial quality.
He writes SynthQuery's public guides on E-E-A-T, AI detection limits, and readability so editorial teams can align practice with how search and generative systems evaluate content.