NPS = % promoters minus % detractors. Passives shape the denominator only. All processing runs in your browser.
Benchmark comparison
Illustrative industry averages for discussion only—your segment, sample, and methodology matter more than a single reference line. SaaS ~+41, Ecommerce ~+45, B2B ~+25.
Net Promoter Score, usually abbreviated NPS, is one of the most widely recognized customer experience metrics in modern marketing and product organizations. At its core, it classifies survey respondents into three buckets based on a single eleven-point likelihood-to-recommend question, typically scored from zero to ten. People who answer nine or ten are called promoters—often interpreted as enthusiastic advocates. Scores of seven or eight are labeled passives: satisfied enough in many contexts, but not strongly enthusiastic. Scores from zero through six are detractors, representing varying degrees of dissatisfaction or risk to reputation. The headline NPS number is not an average of those raw scores; it is the percentage of promoters minus the percentage of detractors, expressed on a scale that runs from negative one hundred to positive one hundred. Passives matter because they sit in the denominator and dilute both promoter and detractor shares, which keeps teams honest about the full surveyed population rather than cherry-picking happy customers only.
Why has NPS become a de facto standard alongside operational metrics? Because it compresses a complex attitude into a single comparable index that boards, investors, and cross-functional teams can track over time. It is easy to explain in a quarterly business review, pairs naturally with follow-up qualitative work, and supports segmentation when you break scores by cohort, geography, or product line. Critics correctly note that one number cannot capture every nuance of customer health, that cultural differences affect how people use rating scales, and that sampling bias can distort trends if you only survey post-purchase euphoria or only angry ticket submitters. Used with clear methodology and humility, however, NPS remains a practical anchor for customer satisfaction measurement and for prioritizing experience investments.
This free NPS Calculator from SynthQuery helps you compute the headline score and full distribution from either manual bucket counts or a pasted list of raw zero-to-ten responses. You see color-coded NPS, a gauge visualization, a donut breakdown of promoters versus passives versus detractors, and an illustrative benchmark chart with reference points commonly cited for SaaS, ecommerce, and B2B contexts. A reset control clears inputs; copy summary exports a text block for slides or tickets. Everything executes in your browser so survey rows you paste are not uploaded to SynthQuery servers—ideal when working with customer identifiers or sensitive feedback tabulations.
What this tool does
Dual input modes reflect real-world messiness. CX analysts sometimes receive only aggregated buckets from a locked-down dashboard; growth teams sometimes dump CSV snippets into scratchpads. Supporting both paths reduces friction and keeps the arithmetic transparent. The color coding on the headline NPS uses conservative bands: negative scores read as destructive-toned warnings, low positives as caution, mid-high positives as neutral foreground, and strong positives as success-toned emphasis. Colors are aids, not verdicts—your industry and competitive set still define whether a thirty-five is celebration or crisis.
The semicircular gauge and the linear bar both map the same score from negative one hundred to positive one hundred. The arc offers an at-a-glance emotional anchor for stakeholders who dislike tables; the bar reinforces the same scale for accessibility and screen-reader-friendly summaries. The donut chart encodes the three population shares, reminding viewers that passives influence NPS indirectly by shrinking promoter and detractor percentages. Without passives in the denominator, a small promoter group could look artificially inflated; with them included, the metric stays grounded in everyone who answered the recommend question.
Benchmark bars plot illustrative industry averages—SaaS near plus forty-one, ecommerce near plus forty-five, B2B near plus twenty-five—drawn from commonly quoted reference bands rather than live syndicated data feeds. When your calculated NPS is available, it appears beside those references for conversational context, not for automatic grading. The approximate ninety-five percent margin of error uses a multinomial variance for the difference between promoter and detractor proportions, scaled to the NPS line. It widens dramatically in small samples; treat it as a teaching aid and consult a statistician for high-stakes decisions.
Reset returns all fields to empty defaults without touching your browser history elsewhere. Copy results builds a monospace-friendly block listing NPS, totals, percentages, margin line, and a short benchmark reminder—suitable for email, Notion, or ticketing systems. The implementation stays entirely client-side, consistent with other SynthQuery marketing calculators.
Technical details
Let P be the count of promoters, A the count of passives, and D the count of detractors, with total respondents N = P + A + D > 0. The promoter percentage is 100 × P / N, the detractor percentage is 100 × D / N, and NPS is the promoter percentage minus the detractor percentage. Equivalently, NPS = 100 × (P − D) / N. Passives appear only through N in the denominator; they do not enter the numerator difference. The score ranges from −100 when everyone is a detractor to +100 when everyone is a promoter. If only passives exist, NPS is zero because promoter and detractor shares both equal zero percent.
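The arithmetic above can be sketched in a few lines. This is an illustrative Python version, not the page's actual implementation; the function name is an assumption.

```python
def nps(promoters: int, passives: int, detractors: int) -> float:
    """Headline NPS: promoter % minus detractor %, on a -100..+100 scale."""
    total = promoters + passives + detractors
    if total <= 0:
        raise ValueError("at least one respondent is required")
    # Passives enter only through the denominator (total), not the numerator.
    return 100.0 * (promoters - detractors) / total

# 50 promoters, 30 passives, 20 detractors out of 100 respondents
print(round(nps(50, 30, 20), 1))  # -> 30.0
# Only passives: promoter and detractor shares are both zero
print(nps(0, 10, 0))  # -> 0.0
```

Note how the all-passives case lands at exactly zero, and how adding passives to a fixed promoter/detractor mix pulls the score toward zero by inflating the denominator.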
Sampling variability matters. Treating each respondent as an independent draw from a multinomial distribution over three categories, the difference between observed promoter share p and detractor share q has standard error sqrt((p + q − (p − q)²) / N). Scaling by 100 produces an approximate standard error for NPS on the −100 to +100 line; multiplying by 1.96 gives a rough 95% margin of error. Small N inflates that band; enterprise populations with tens of thousands of responses narrow it. This page does not adjust for weighting, stratification, or clustered sampling; if your survey vendor applies post-stratification weights, use their official confidence intervals for compliance reporting.
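A hedged sketch of that approximation, assuming Python (the function name is illustrative and the formula carries no design effects, matching the caveats above):

```python
import math

def nps_margin_of_error(promoters: int, passives: int, detractors: int,
                        z: float = 1.96) -> float:
    """Approximate 95% margin of error for NPS on the -100..+100 scale,
    using the multinomial variance of (promoter share - detractor share)."""
    n = promoters + passives + detractors
    if n <= 0:
        raise ValueError("at least one respondent is required")
    p = promoters / n   # observed promoter share
    q = detractors / n  # observed detractor share
    variance = (p + q - (p - q) ** 2) / n
    return 100.0 * z * math.sqrt(variance)

# Same bucket mix, ten times the sample: the band narrows sharply
print(round(nps_margin_of_error(50, 30, 20), 1))     # -> 15.3
print(round(nps_margin_of_error(500, 300, 200), 1))  # -> 4.8
```

The usage example makes the teaching point concrete: an NPS of +30 from 100 respondents carries a band of roughly ±15 points, so a quarter-over-quarter move of 10 points could be pure noise at that sample size.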
NPS is a percentage-point construct, not a probability strictly between zero and one. When comparing periods, look at both the point estimate and the margin band so noise is not over-interpreted. For A/B tests on survey invitation design or question wording, changes in measured NPS may reflect methodology rather than true experience gains—hold instruments constant when possible.
Use cases
Customer satisfaction tracking teams use NPS as a heartbeat metric on executive dashboards. When leadership asks whether last quarter’s support backlog dented loyalty, analysts compare promoter and detractor shares before and after operational changes, not only the single index. This calculator helps reconcile survey exports with board slides when someone questions arithmetic mid-meeting—retyping counts takes seconds and rebuilds trust in the chain of evidence.
Product feedback workflows often combine NPS with verbatims. A spike in detractors after a pricing change might be obvious in text; the calculator quantifies how many respondents moved buckets so prioritization debates start from shared numbers. For roadmap conversations, segmenting promoters can highlight features worth doubling down on, while detractor concentration can justify quality sprints.
Quarterly reporting packages frequently require both the headline NPS and the underlying distribution. Finance may ask how satisfaction interacts with churn assumptions; marketing may ask how it interacts with referral programs. Exporting a consistent text summary from this tool keeps version control simple when appendices are assembled from multiple authors.
Ecommerce brands running post-delivery surveys sometimes see seasonal swings tied to shipping carriers rather than product quality. Running separate calculations per carrier or region surfaces whether the headline average masks a solvable logistics story. B2B firms with long sales cycles may survey only implemented customers; passives in that population can signal “okay but not expansion-ready” accounts that customer success should nurture. SaaS teams pairing NPS with product usage data often discover that scores alone miss power users who forgot to answer—this calculator does not fix sampling bias, but it makes the math on completed responses explicit.
How SynthQuery compares
NPS is one of several popular customer metrics. Understanding how it differs from CSAT and CES helps teams pick the right question for the decision at hand.
NPS vs CSAT: NPS uses a zero-to-ten recommend scale and a promoter-minus-detractor formula; CSAT typically measures satisfaction on a shorter scale (often 1–5) for a specific interaction or product, without the same global benchmarking story. CSAT excels for transactional touchpoints (tickets resolved, onboarding steps); NPS excels for holistic loyalty narratives and executive summaries.
NPS vs CES: Customer Effort Score focuses on how easy it was to get help or complete a task. Effort is a leading indicator of repeat behavior, but it does not replace recommend intent. Use CES to diagnose friction in support and onboarding flows; use NPS for broader relationship tracking.
Scale sensitivity: This calculator assumes the standard NPS cutpoints (9–10 promoters, 7–8 passives, 0–6 detractors). Altering cutpoints would break comparability with public benchmarks; academic or regional studies sometimes experiment with different thresholds, so document any deviation.
Relationship to revenue: Neither NPS nor CSAT directly equals revenue; linking scores to cohort spend requires separate modeling. Pair satisfaction metrics with CLV, churn, and conversion analytics for commercial interpretation.
How to use this tool effectively
Begin by deciding which input mode matches the data you already have. If your survey tool exports counts of promoters, passives, and detractors—or if you have already tallied those buckets in a spreadsheet—choose Manual counts. Enter the three nonnegative integers, then press Calculate. The tool validates that every field is a whole number at least zero, sums them for total respondents, derives each percentage, and applies the standard NPS formula. If all three counts are zero, calculation stops with a clear message because the denominator would be undefined.
If instead you have a raw column of scores, switch to Bulk scores. Paste integers from zero to ten separated by spaces, commas, or line breaks. The parser ignores blank tokens, skips values outside the valid range, and skips non-integers so a stray header row is less likely to break the run—though you should still clean exports when possible. After a successful bulk run, the calculator repopulates the manual fields with the implied promoter, passive, and detractor counts so you can verify the categorization matches your survey vendor’s logic. Promoters are nine and ten, passives seven and eight, detractors zero through six, which is the classic NPS segmentation.
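The parsing rules described above (split on spaces, commas, or line breaks; skip blanks, non-integers, and out-of-range values; then apply the classic cutpoints) can be sketched as follows. This is an illustrative Python version, not the page's actual client-side code:

```python
import re

def bucket_scores(raw: str) -> dict:
    """Parse a pasted blob of 0-10 scores into classic NPS buckets,
    skipping blank tokens, non-integers, and out-of-range values."""
    counts = {"promoters": 0, "passives": 0, "detractors": 0}
    for token in re.split(r"[\s,]+", raw.strip()):
        if not token:
            continue
        try:
            score = int(token)
        except ValueError:
            continue  # e.g. a stray header cell such as "score"
        if not 0 <= score <= 10:
            continue  # outside the valid 0-10 range
        if score >= 9:
            counts["promoters"] += 1    # 9-10
        elif score >= 7:
            counts["passives"] += 1     # 7-8
        else:
            counts["detractors"] += 1   # 0-6
    return counts

print(bucket_scores("10, 9, 8, 7, 6, 0, score, 11"))
# -> {'promoters': 2, 'passives': 2, 'detractors': 2}
```

The stray "score" header and the out-of-range 11 are silently dropped, which mirrors the tolerance described above; for an exact census, clean the export first so nothing real gets skipped.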
For quarterly reporting, align the survey window with the business narrative: trailing ninety days for product squads, fiscal quarter for investor slides, or campaign-specific windows for launch postmortems. Copy results into your documentation package after you calculate so assumptions travel with the number. Use Reset between unrelated datasets—different brands, regions, or experiments—to avoid mixing populations. When you need financial context alongside NPS, pair this page with the CLV Calculator or Conversion Rate Calculator from the SynthQuery free tools collection, and use the PPC Budget Calculator when acquisition spend must be justified alongside satisfaction trends.
Limitations and best practices
Benchmarks on this page are static teaching references, not real-time market data. Vertical, geography, sampling channel, and question wording all move observed NPS. Do not treat plus forty as a universal pass line.
The margin-of-error line is an asymptotic multinomial approximation without design effects. Complex surveys need vendor-reported confidence intervals.
Churn and retention rate calculators are complementary to NPS but are not yet first-class SynthQuery routes—use the free tools hub to discover what ships next, and export NPS outputs into your BI stack for cohort churn models. For experiment planning and readouts, pair this page with the Sample Size Calculator for study design and the A/B Test Significance Calculator for conversion-based winner calls; neither replaces representative NPS sampling.
For quick access, bookmark the Free tools hub, the PPC Budget Calculator for paid media planning, the Conversion Rate Calculator for funnel diagnostics, and the CLV Calculator when loyalty economics must accompany satisfaction trends.
Power and minimum detectable effect (MDE) planning for A/B tests lives in the Sample Size Calculator; use it before fielding surveys or UX tests whose NPS impact you will later measure.
Frequently asked questions
What is Net Promoter Score and how is it calculated?
Net Promoter Score is a customer loyalty index built from one question, usually how likely someone is to recommend your company or product on a scale from zero to ten. Respondents who pick nine or ten are promoters, seven or eight are passives, and zero through six are detractors. You calculate NPS as the percentage of promoters minus the percentage of detractors, which produces a number between negative one hundred and positive one hundred. Passives count toward the total respondents but do not add or subtract directly in the numerator, which is why a sea of neutral scores can drag down an otherwise decent promoter count.
What counts as a good NPS?
Context determines whether a score is good. Positive NPS means you have more promoters than detractors as a share of respondents, which is healthier than negative NPS. Public benchmark studies often cite different averages by industry; the illustrative references on this page place SaaS near plus forty-one, ecommerce near plus forty-five, and B2B near plus twenty-five, but your competitive set, sampling method, and geography can shift fair comparisons. The strongest practice is to benchmark against your own prior periods with consistent methodology, then layer qualitative research to explain movement. A forty-five that drops from sixty-five in one quarter deserves investigation even if it still beats a generic industry average.
How can we improve our NPS?
Start by closing the loop with detractors when policy allows; fast, empathetic responses convert some critics into neutrals or promoters. Mine verbatims for recurring themes: shipping delays, billing confusion, missing features, or support tone. Prioritize fixes that affect high-volume journeys, then re-survey after changes mature. Train teams so every customer-facing role understands how small friction compounds. Remember that gaming the metric by only surveying happy cohorts destroys trust; representative sampling and honest reporting beat vanity scores. Product, support, and marketing must share ownership rather than treating NPS as a single department's vanity KPI.
How reliable are industry NPS benchmarks?
Syndicated benchmark reports vary by year, region, and methodology, which is why this calculator shows rounded teaching values instead of live feeds. SaaS and subscription businesses often see wide dispersion between product-led growth brands with enthusiastic users and niche vertical tools with smaller bases. Ecommerce reflects seasonality and delivery experience as much as merchandising. B2B organizations may survey only accounts that successfully implemented, which skews upward compared to surveying entire prospect lists. Use benchmarks to sanity-check extremes, not to label a business good or bad on one number.
How often should we measure NPS?
Cadence depends on decision speed and survey fatigue. Many SaaS companies run quarterly relationship NPS plus transactional surveys after key milestones like onboarding or support resolution. High-velocity ecommerce might sample continuously with monthly rollups. Measuring too rarely hides regressions until they hurt revenue; measuring too often with the same users annoys respondents and biases samples toward the easily reachable. Align survey waves with release cycles and executive reviews, and avoid changing question wording or channel mix without noting a methodology break.
Why don't passives count toward the score?
The classic definition measures the gap between enthusiastic advocates and explicit critics. Passives are counted as neither promoters nor detractors in the numerator because they represent middling sentiment—often satisfied in the short term but vulnerable to competitors. They still matter mathematically because they increase the denominator, lowering promoter and detractor percentages when they dominate the sample. Some teams track passive share as its own diagnostic when they worry about complacent customers who would not actively recommend.
Can I paste raw survey scores instead of counts?
Yes. Use Bulk scores mode to paste integers from zero to ten separated by spaces, commas, or new lines. The tool categorizes each value, totals the three buckets, and computes NPS. Invalid tokens are skipped; ensure your paste truly contains only survey responses if you need an exact census. After calculation, manual count fields update to reflect the derived buckets so you can cross-check against your survey platform.
How is the margin of error calculated?
The displayed band is a rough ninety-five percent margin using a multinomial variance estimate for the difference between promoter and detractor proportions, scaled to the minus-one-hundred to plus-one-hundred NPS line. It widens for small sample sizes and narrows for large ones. Weighted enterprise surveys, clustered samples, and stratified designs need more sophisticated intervals from your analytics vendor. Treat this approximation as educational, not audit-grade.
How does NPS differ from CSAT and CES?
CSAT usually targets satisfaction with a specific interaction on a short scale, making it sensitive to recent touchpoints. Customer Effort Score asks how easy it was to resolve an issue—powerful for support design. NPS targets broader recommend intent, which leadership often uses for longitudinal tracking. None of these metrics alone proves financial outcomes; combine them with retention, expansion revenue, and acquisition efficiency for a full story.
Is my survey data uploaded anywhere?
No. This calculator runs entirely in your browser like other SynthQuery free utilities. Counts and pasted scores stay on your device unless you copy them elsewhere. Follow your organization's data-handling policies when working with personally identifiable feedback, and avoid pasting confidential customer quotes into shared tools without review.