Paste the generated script into a <script type="application/ld+json"> block, or validate it in the Rich Results Test. Eligibility depends on page quality, policy compliance, and Google's algorithms; markup alone does not guarantee stars in search.
Review schema markup is structured data—usually JSON-LD—that tells search engines a page contains a first-party or editorial review of a specific thing: a product, local business, book, movie, course, app, recipe, or another Schema.org type. When Google trusts the page and the markup matches visible content, review snippets can enhance organic listings with star ratings, vote counts, or short textual cues drawn from your data. That extra visual signal competes for attention against neighboring results, which is why ecommerce merchandisers, publishers, SaaS marketers, and local SEO teams all care about getting the vocabulary right.
Stars in Google search results do not come from wishful thinking: they emerge from a combination of eligible structured data, policy-compliant implementation, and Google’s own quality thresholds. A generator cannot guarantee rich results, but it can help you avoid the most common JSON mistakes—missing authors, unnamed items, or aggregate blocks without review counts—that cause validation failures or silent disqualification. SynthQuery’s Review Schema Generator runs entirely in your browser: choose Single Review mode for one or more Review objects, or Aggregate Rating mode for a Product-style summary with aggregateRating, then copy or download the JSON-LD and test it in Google’s Rich Results Test.
Who benefits? Review and affiliate sites publishing long-form critiques need accurate Review markup tied to the same text users read. Online stores often pair visible reviews with aggregateRating on product pages. Local businesses must be especially careful about self-serving reviews, yet legitimate third-party editorial coverage still uses Review types when the content truly is a review. SaaS companies summarizing G2-style testimonials on a landing page may use Review where the testimonial meets review criteria, or they may rely on other schema patterns—always align with Google’s documentation for your situation. This tool keeps you focused on the mechanics: itemReviewed typing, rating scales, and multi-review @graph bundles you can paste beside your HTML.
What this tool does
Two authoring modes share one mental model. Single Review emits Schema.org Review objects with nested itemReviewed, author, reviewRating (typed Rating), optional datePublished, reviewBody, and publisher. Multiple reviews collapse into a single JSON-LD document using @graph so crawlers see separate nodes without repeating @context. Aggregate Rating mode instead wraps aggregateRating on the root entity, mirroring how merchants expose average scores alongside product names.
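To make the @graph pattern concrete, here is a minimal sketch of what a two-review Single Review export might look like. The product name, authors, dates, and review text are illustrative placeholders, not actual tool output:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Review",
      "itemReviewed": { "@type": "Product", "name": "Acme Widget" },
      "author": { "@type": "Person", "name": "Jane Doe" },
      "reviewRating": { "@type": "Rating", "ratingValue": 4, "bestRating": 5, "worstRating": 1 },
      "datePublished": "2024-05-01",
      "reviewBody": "Solid build quality and easy setup."
    },
    {
      "@type": "Review",
      "itemReviewed": { "@type": "Product", "name": "Acme Widget" },
      "author": { "@type": "Person", "name": "John Smith" },
      "reviewRating": { "@type": "Rating", "ratingValue": 5, "bestRating": 5, "worstRating": 1 },
      "datePublished": "2024-05-03",
      "reviewBody": "Best widget I have owned."
    }
  ]
}
</script>
```

Note that @context appears once at the root, while each Review node in the @graph array is a complete, self-describing object.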
The itemReviewed selector spans the types marketers use daily—Product, LocalBusiness, Restaurant, Hotel, Movie, Book, Recipe, SoftwareApplication, Course, Event, Organization, Service, Game, TVSeries, CreativeWork, Brand, and WebSite—so you are not stuck pasting manual @type strings for common cases. Real-time preview re-stringifies JSON whenever a field changes, then pipes the output through syntax highlighting so keys, strings, and numbers remain readable during long editing sessions.
Validation encodes practical Google review-snippet expectations: required names, rating values, and—for aggregates—at least one of reviewCount or ratingCount. Warnings call out empty review bodies, missing dates, unclear scales, and the self-serving review pitfalls Google documents for local entities. Copy and Download actions keep deployment simple whether you paste into a CMS snippet field or attach the file to a ticket for engineering.
An outbound shortcut opens the Rich Results Test in a new tab so you can paste a URL or code sample immediately after generating markup. Together, preview plus validation plus Google’s tester forms a tight loop: draft locally, fix structural issues before deploy, then confirm what Google’s parser sees in staging or production.
Technical details
Schema.org distinguishes Review—a single authored opinion with text—from AggregateRating, a statistical summary of many ratings. Google’s review snippets documentation lists required and recommended properties for each pattern; breaking them reduces eligibility. Stars in SERPs appear only when Google chooses to show them; the Rich Results Test confirms parsing, not placement.
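For contrast with the Review pattern above, a minimal AggregateRating sketch nests the summary under the rated entity itself. Values here are illustrative:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Acme Widget",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": 4.7,
    "bestRating": 5,
    "reviewCount": 215
  }
}
</script>
```

The root @type is the item being rated (here Product), not AggregateRating; the summary hangs off it as a property.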
Google explicitly warns against self-serving reviews that misrepresent who produced the rating—especially for LocalBusiness entities controlled by the business owner. The vocabulary still includes reviewRating, itemReviewed, author, and publisher because legitimate publishers and marketplaces use them millions of times daily. JSON-LD should mirror human-visible content: if users cannot read the review text or see the score, do not inject hidden schema.
Schema.org’s review vocabulary extends to nested Rating objects, aggregateRating on Things, and relationship properties such as itemReviewed. This generator stays conservative: Rating nodes always include @type, and aggregates nest under the same root @type you assign to the item. For advanced scenarios—cross-linking @id graphs, combining offers with reviews, or wiring sameAs URLs—extend the downloaded JSON manually after export.
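One such manual extension is @id cross-linking, where a Review references an entity defined elsewhere in the graph instead of repeating it inline. A hedged sketch, with hypothetical example.com URLs:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Product",
      "@id": "https://example.com/widget#product",
      "name": "Acme Widget",
      "sameAs": "https://www.wikidata.org/wiki/Q1569900"
    },
    {
      "@type": "Review",
      "itemReviewed": { "@id": "https://example.com/widget#product" },
      "author": { "@type": "Person", "name": "Jane Doe" },
      "reviewRating": { "@type": "Rating", "ratingValue": 4, "bestRating": 5 }
    }
  ]
}
</script>
```

Because itemReviewed resolves by @id, the Product is defined once and any number of Review nodes can point at it without duplication.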
Use cases
Product review templates on affiliate blogs often list pros, cons, and a verdict paragraph—pair that visible copy with Review markup where each author and rating is genuine. Ecommerce PDPs may surface user-generated reviews in the DOM while exposing aggregateRating in JSON-LD; keep the numbers synchronized with your review provider to avoid discrepancies.
Local SEO landing pages that summarize third-party sentiment can reference LocalBusiness or Restaurant types when the article truly reviews the venue; your own location’s marketing site should not fabricate critic reviews. Course platforms publishing instructor-led evaluations use Course as itemReviewed, while software directories rate SoftwareApplication entities with version-agnostic names unless the review targets a specific release.
Restaurant critics and food bloggers combine Restaurant type with Person authors and narrative reviewBody fields so rich results align with long-form content. Book and movie sites map to Book, Movie, or TVSeries, especially when star ratings accompany syndicated blurbs. SaaS comparison hubs mixing brand summaries can still emit Review nodes per product when each section includes independent scoring and signed authorship.
Whenever you migrate CMS templates, export JSON from this generator first, diff against legacy markup, and regression-test a sample URL after launch. Pair the workflow with SynthQuery’s SERP Preview tool to ensure titles and snippets still read well next to any future star treatment.
How SynthQuery compares
Many schema generators output a single template with fixed @type values, skip validation, or hide aggregate rules until after you pay. Spreadsheet macros and AI drafts often hallucinate property names or omit reviewCount entirely, which fails silently in production. SynthQuery focuses on review-specific flows: dual modes, multiple Review nodes, broader item typing, and instant feedback tied to Google’s published expectations.
Unlike opaque “SEO boxes” that only prettify JSON, this page explains warnings in context—missing reviewBody, ambiguous scales, or policy nudges for local entities—so editors learn while they build. Everything executes client-side without sending your draft copy to a server, which matters when reviews include embargoed product details. You still need engineering rigor for caching, CSP, and CMS-specific script placement, but the markup foundation is transparent and testable.
| Aspect | SynthQuery | Typical alternatives |
| --- | --- | --- |
| Review vs aggregate workflows | Dedicated Single Review and Aggregate Rating modes with distinct field sets and shared item context. | One-size templates that mix patterns or omit aggregate counts. |
| Item coverage | Selector spans products, local entities, media, software, courses, services, and more. | Product-only forms or free-text @type without validation hints. |
| Multi-review output | @graph bundles multiple Review nodes for editorial roundups. | Manual copying or duplicate script tags per review. |
| Guidance | Inline errors/warnings aligned with common review snippet requirements. | Syntax-only JSON validation without SEO context. |
| Privacy | Runs locally in the browser; no server-side storage of your copy. | Hosted tools that POST content to unknown backends. |
How to use this tool effectively
1) Pick Single Review when each rating belongs to a distinct author and narrative—think editorial roundups, multi-critic articles, or a testimonials grid where every card is a full Review. Pick Aggregate Rating when you want one average score and a total number of reviews for the item as a whole, typical on product detail pages.
2) Select the itemReviewed @type that matches what you are reviewing in plain language. Products map to Product; restaurants to Restaurant or LocalBusiness; software landing pages to SoftwareApplication; online classes to Course. The type informs how Google clusters the entity, so avoid guessing “CreativeWork” unless nothing else fits.
3) Enter the visible name of the item exactly as it appears on the page—consistency between JSON-LD, headings, and body copy reduces mismatch penalties. If you switch modes, the name and type carry over so you do not retype shared context.
4) In Single Review mode, fill author type (Person vs Organization) and the author’s display name. Add ratingValue plus best and worst values so the scale is explicit—five-star systems usually use 1 and 5. Set datePublished from the real publication date of that review, and write reviewBody text that actually shows on the page; thin or hidden text is a policy risk.
5) Optional publisher helps when a review is syndicated or clearly attributed to an editorial brand; the generator wraps a simple Organization name for you. Click Add another review when your article contains multiple distinct critiques of the same item—the output becomes an @graph array of Review nodes sharing one itemReviewed definition.
6) In Aggregate Rating mode, enter the average ratingValue, then supply reviewCount, ratingCount, or both as your reporting allows. Google expects honest totals drawn from real reviews users can find; fabricated counts invite manual action. Keep bestRating and worstRating aligned with your UI stars.
7) Watch the validation panel: red items block minimum Google-style completeness; yellow items are strong recommendations such as missing reviewBody or policy reminders for LocalBusiness-style entities.
8) Copy JSON-LD or download the .jsonld file, embed it in a script tag with type application/ld+json, and open Google’s Rich Results Test from the in-app link. Fix any tool-reported gaps, then monitor Search Console for enhancement status on high-value URLs.
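Following the steps above end to end, a single editorial review embedded in a page might look like this sketch. The restaurant, critic, publication, and review text are hypothetical:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Review",
  "itemReviewed": { "@type": "Restaurant", "name": "Trattoria Example" },
  "author": { "@type": "Person", "name": "Alex Critic" },
  "publisher": { "@type": "Organization", "name": "Example Food Journal" },
  "reviewRating": { "@type": "Rating", "ratingValue": 4, "bestRating": 5, "worstRating": 1 },
  "datePublished": "2024-06-12",
  "reviewBody": "Fresh pasta, attentive service, and a modest but well-chosen wine list."
}
</script>
```

Every value here should match what visitors can read on the page: the same name in the heading, the same score in the star UI, the same text in the article body.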
Limitations and best practices
Generators model structure, not business facts. Never inflate review counts, borrow ratings from other sites, or mark up testimonials that are not visible. Keep rating scales consistent with on-page UI, update aggregates when new reviews arrive, and avoid duplicate conflicting JSON-LD blocks on the same URL. For medical, financial, or YMYL topics, follow industry advertising rules in addition to schema policies. When Google deprecates a field, revisit this tool’s output against the latest Search Central docs.
Full catalog at https://synthquery.com/tools — AI detection, readability, plagiarism, paraphrasing, and more.
Frequently asked questions
What is review schema markup?
Review schema is JSON-LD (or microdata/RDFa) that uses Schema.org types such as Review, Rating, and AggregateRating to describe evaluations of a thing named in itemReviewed. Search engines may use that data to understand authorship, numeric scores, review text, and publication dates. It does not replace visible content: Google expects users to see the same information you encode. Review markup is common on editorial critique pages, product detail pages with user ratings, and marketplace listings, but eligibility for enhanced SERP features still depends on quality signals, crawl access, and policies specific to your niche.
What is the difference between Review and AggregateRating?
A Review node represents one author’s opinion, usually with reviewBody text, a publication date, and a reviewRating object. AggregateRating summarizes many contributors into averages and counts—think “4.7 stars from 215 reviews.” Use Review when each voice matters on the page; use AggregateRating when you display a combined score UI. They can coexist on complex templates, but duplicating the same rating in conflicting ways confuses parsers. Google’s review snippet guidelines spell out which properties are required for each presentation; this generator splits the workflows so you do not accidentally merge incompatible fields.
When is it appropriate to add review schema?
You can add schema whenever the visible page contains a genuine review, but Google discourages manipulative or self-serving first-party review markup—especially for LocalBusiness entities you control—when it looks like the business wrote its own five-star praise. Legitimate scenarios include embedding verified third-party reviews with permission, syndicating critic content with clear attribution, or using aggregate data sourced from an independent platform that users can verify. Always read the latest Google structured data general guidelines plus niche rules (shopping, jobs, events). When unsure, prioritize honest textual content and consult legal or compliance teams before publishing markup.
Which item types can be reviewed?
Schema.org allows reviews on many Thing subtypes: Product, Book, Movie, Recipe, SoftwareApplication, Course, Event, Game, CreativeWork, Service, Organization, and more. Google’s rich result eligibility varies by vertical—some types trigger product-centric previews, others informational panels. Pick the type that best matches the primary subject of the page, keep the name aligned with visible headings, and avoid mismatched types (for example, labeling a blog post as Product just to chase stars). When your subject spans multiple entities, choose the main item users believe they are reading about.
When do star ratings appear in search results?
Stars render when Google’s systems decide to show review enhancements for a URL, the structured data parses cleanly, and the page meets quality thresholds. The Rich Results Test confirms technical parsing but not guaranteed display. Factors include mobile usability, relevance to the query, consistency between markup and DOM text, absence of spam signals, and ongoing experiments in Google’s UI. Aggregates may show review counts next to stars; single reviews might surface textual snippets. Monitor Search Console enhancements reports after deployment to see whether Google recognizes your markup and whether warnings appear.
Does valid markup guarantee rich results?
No. Valid JSON-LD is necessary but not sufficient. Google may ignore markup that conflicts with on-page content, violates policies, or competes with stronger listings. Algorithm updates can also change which SERP features appear for a given query even when your code stays static. Treat structured data as a best-practice signal: implement it accurately, test with Google’s tools, then measure clicks and impressions in Search Console rather than assuming permanent stars. SynthQuery surfaces validation hints to reduce technical rejection, not to promise placement.
How many reviews do I need before stars show?
Google does not publish a universal integer that works for every site. Documentation stresses truthful counts tied to real ratings users can access. Some verticals may see stars only after substantial trust history; others never show them despite valid data. Focus on supplying accurate reviewCount or ratingCount fields, keeping averages mathematically consistent with visible reviews, and avoiding sudden spikes that look synthetic. If a marketplace feeds your data, mirror their totals rather than inventing marketing-friendly numbers.
Can I mark up testimonials as reviews?
Testimonials qualify only when they meet review content requirements: identifiable author or publisher, a clear rating if you show stars, and full text available to visitors. Generic praise without attribution is a poor fit. If a testimonial is really a quote snippet, consider other structured patterns or present it as plain text. Never mark up testimonials that are hidden, rotated via inaccessible widgets, or sourced from people who did not use the product. Transparency wins both legally and algorithmically.
What happens if my markup violates Google’s policies?
Consequences range from Google ignoring the structured data to manual spam actions that suppress rich results or demote the site. Google’s spam policies cover misleading structured data alongside link schemes and cloaking. Recovering trust can require removing the offending markup, documenting authentic review sources, and filing reconsideration requests where appropriate. Teams should log who approved schema deployments and audit third-party plugins that auto-inject JSON-LD without editorial oversight.
How do I add review schema in WordPress?
After generating JSON-LD here, paste it into a custom HTML block, hook it through your theme’s wp_head action, or use a vetted SEO plugin that allows custom schema snippets. Avoid duplicate JSON-LD from multiple plugins—only one coherent graph should describe the same review. Purge page caches and CDN layers so crawlers see the new script tag quickly. Test a permalink in the Rich Results Test, then monitor Search Console. For WooCommerce or review plugins, prefer native integrations that sync stars with database totals instead of hand-maintaining aggregates.