A Look-Up Table (LUT) is a compact way to describe how input colors should become output colors. In video and photo pipelines, three-dimensional LUTs map every combination of red, green, and blue (sampled on a cube, commonly 33×33×33) to a graded result, so editors can reuse a colorist’s decisions across shots, cameras, and deliverables without manually re-drawing curves on every clip. SynthQuery’s LUT Generator from Image targets creators who already have a reference still—a graded frame, a mood board extract, or a hero social post—and want a portable .cube file they can load in finishing software or hand to a collaborator.
Video editors, colorists, and photographers use LUTs to keep looks consistent between on-set monitoring, offline edits, and final color. Photographers may export a LUT that echoes a favorite grade for batch work in another app. Agencies use them to align UGC, stock, and studio captures with a single brand palette. Generating a custom LUT from a reference image is valuable when you do not have time to rebuild every control from scratch, when you want a starting point that already resembles a proven still, or when you need a lightweight artifact you can version in Git beside your project files.
This page builds a 33³ table in your browser, exports Adobe-compatible .cube text, optionally previews the LUT on a second “before” image with a draggable comparison, and blends strength from zero to one hundred percent—all without uploading pixels to SynthQuery. When your project mixes imagery with copy that may involve AI assistance, pair visual work here with the AI Detector and Humanizer, and browse the full catalog at https://synthquery.com/tools for detection, readability, and schema utilities beyond imaging.
What this tool does
The hero workflow is deliberately linear: establish the look with a reference still, generate the LUT, optionally validate on a different frame, then export. Feature depth stays focused on portability and honesty about limits—this is a browser utility, not a full ACES-managed color pipeline—while still delivering artifacts professionals recognize.
Real-time feedback matters when you are judging whether a reference still is representative. If the reference is extremely flat or a single solid color, the statistics collapse and the LUT may approach a gentle global balance rather than a cinematic curve; the UI surfaces decoding errors instead of failing silently. Keyboard-accessible comparison controls mirror the Photo Sepia Filter pattern so muscle memory transfers across SynthQuery imaging tools.
Format flexibility on download reduces friction for CMS uploads, Slack reviews, and NLE import round-trips. The optional preview path is strictly optional: many users only need the .cube for Resolve or Premiere Pro. When you do load a preview, the comparison strip clarifies shadow tint, skin-line behavior, and saturation shifts that numeric sliders alone might obscure.
Automatic LUT extraction from a reference still
Upload one raster reference in JPG, PNG, WebP, BMP, or TIFF. The tool downsamples safely for stability, analyzes color statistics in a perceptual Lab space, and fills a 33³ lattice so each input RGB triplet (normalized to zero–one) maps to an output triplet that reflects the reference grade. You see a loading state while the table builds; nothing is sent to a server for that computation.
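The lattice walk itself is straightforward. A minimal JavaScript sketch is below; the `transfer` function here is a hypothetical identity placeholder standing in for the real extraction step, which this page derives from the reference image's statistics:

```javascript
// Sketch: fill a 33^3 lattice where each entry starts as its own
// normalized input coordinate, then passes through a transfer function.
const SIZE = 33;
const lut = new Float32Array(SIZE * SIZE * SIZE * 3);

// Hypothetical placeholder: identity (the real tool remaps toward the
// reference grade here).
function transfer(r, g, b) { return [r, g, b]; }

let i = 0;
// .cube convention: red varies fastest, then green, then blue.
for (let b = 0; b < SIZE; b++) {
  for (let g = 0; g < SIZE; g++) {
    for (let r = 0; r < SIZE; r++) {
      const [or, og, ob] = transfer(r / (SIZE - 1), g / (SIZE - 1), b / (SIZE - 1));
      lut[i++] = or;
      lut[i++] = og;
      lut[i++] = ob;
    }
  }
}
```

With an identity transfer, the first lattice entry is pure black (0, 0, 0) and the last is pure white (1, 1, 1).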
.cube export for professional apps
Download a text .cube file with TITLE, LUT_3D_SIZE 33, DOMAIN_MIN/MAX spanning zero–one, and triplets ordered for broad compatibility with DaVinci Resolve, Adobe Premiere Pro, Photoshop’s LUT loaders, Final Cut Pro workflows that accept cubes, and many open-source tools. The file is small enough to email or drop into shared drives.
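Serializing a lattice of this shape takes only a few lines. This sketch assumes a `Float32Array` in red-fastest order; it illustrates the file layout rather than reproducing the page's actual export code:

```javascript
// Sketch: serialize a size^3 lattice (red varies fastest) to .cube text.
function toCubeText(lut, size, title) {
  const lines = [
    `TITLE "${title}"`,
    `LUT_3D_SIZE ${size}`,
    'DOMAIN_MIN 0.0 0.0 0.0',
    'DOMAIN_MAX 1.0 1.0 1.0',
  ];
  for (let i = 0; i < lut.length; i += 3) {
    // One RGB triplet per line, six decimal places for broad compatibility.
    lines.push(
      `${lut[i].toFixed(6)} ${lut[i + 1].toFixed(6)} ${lut[i + 2].toFixed(6)}`
    );
  }
  return lines.join('\n') + '\n';
}
```

A 33³ table written this way yields 35,937 data lines plus four metadata lines, still only a few megabytes of plain text.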
Preview on any optional “before” image
Add a second image that represents footage or a photo before grading. The interface renders the LUT through trilinear interpolation on Canvas, updates when you move the intensity slider, and shows a vertical divider you can drag or nudge with the keyboard for an accessible before/after review.
Intensity control and consistent exports
The intensity slider blends between the identity transform and the full extracted grade. The same percentage drives the on-screen preview, the graded raster download, and the exported .cube table (each lattice entry is blended toward its input coordinate), so what you see matches what you import elsewhere at that strength.
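The blend described above is a per-entry linear interpolation toward the identity coordinate. A minimal sketch, assuming the same red-fastest lattice layout:

```javascript
// Sketch: blend each lattice entry toward its own input coordinate.
// strength = 0 yields an effectively neutral table; strength = 1 keeps
// the full extracted grade.
function blendTowardIdentity(lut, size, strength) {
  const out = new Float32Array(lut.length);
  let i = 0;
  for (let b = 0; b < size; b++) {
    for (let g = 0; g < size; g++) {
      for (let r = 0; r < size; r++) {
        const id = [r / (size - 1), g / (size - 1), b / (size - 1)];
        for (let c = 0; c < 3; c++, i++) {
          out[i] = id[c] + (lut[i] - id[c]) * strength;
        }
      }
    }
  }
  return out;
}
```

Because the same percentage is applied to the table itself, a cube exported at 60% behaves identically in Resolve or Premiere to the 60% preview on screen.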
Multiple raster formats and client-side privacy
Reference and preview images honor the same format set as other SynthQuery Canvas tools. Choose match, JPEG, PNG, or WebP for the graded preview download when a preview image is present. Decoding and encoding follow browser capabilities; BMP and TIFF exports fall back to PNG where encoders require it. Because processing stays local, sensitive plates and unreleased marketing frames never traverse our network for LUT generation.
Technical details
Extraction uses global color statistics in CIE Lab space: the tool compares the reference image’s mean and spread in L*, a*, and b* against a neutral prior typical of general sRGB imagery, then maps lattice points through that transfer before converting back to gamma-encoded RGB. This Reinhard-style global transfer is fast, deterministic, and understandable, but it cannot recover information that never existed in the reference (for example, true film halation or lens flare physics).
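The core of a Reinhard-style transfer is shifting a value by the source distribution's mean, rescaling by the ratio of spreads, and re-centering on the target. The sketch below shows that core on a single Lab channel; the surrounding sRGB-to-Lab conversions and the actual neutral-prior values used by this page are not shown, and the numbers in the usage note are hypothetical:

```javascript
// Sketch of the Reinhard-style statistic transfer on one Lab channel.
// `src` describes the distribution the value currently belongs to (here,
// a neutral prior for typical sRGB imagery); `ref` describes the
// reference image's distribution. Each has { mean, std }.
function reinhardTransfer(value, src, ref) {
  // Center on the source mean, rescale the spread, re-center on the reference.
  return (value - src.mean) * (ref.std / src.std) + ref.mean;
}
```

For example, with a hypothetical neutral prior of mean 50 and spread 10 on L*, and a reference measuring mean 60 and spread 20, an input of 60 maps to 80: it sits one standard deviation above the prior's mean, so it lands one (wider) standard deviation above the reference's mean.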
The implementation walks the 33³ cube in JavaScript, stores results in a typed array, and applies the same table to preview pixels with trilinear sampling. Alpha channels on preview images are preserved where the pipeline reads RGBA buffers. EXIF metadata is generally stripped on re-encode, so keep masters archived separately when legal metadata matters.
How 3D LUTs and interpolation work
A 3D LUT stores output RGB samples on a regular grid in input RGB space. Software evaluates colors that fall between lattice points using trilinear interpolation—linear blends along red, green, and blue axes—so arbitrary pixels receive smooth results without storing infinite entries. Larger sizes (33 vs 17) reduce banding but increase file size; 33 is a common broadcast default.
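Trilinear evaluation of one pixel can be sketched as follows, again assuming the red-fastest lattice layout used by .cube files; this is an illustrative implementation, not the page's exact code:

```javascript
// Sketch: trilinear sampling of a size^3 LUT (red-fastest layout)
// for one pixel with channels r, g, b normalized to 0..1.
function sampleLut(lut, size, r, g, b) {
  const n = size - 1;
  const idx = (ri, gi, bi, c) => ((bi * size + gi) * size + ri) * 3 + c;
  const lerp = (a, b2, t) => a + (b2 - a) * t;
  const fr = r * n, fg = g * n, fb = b * n;
  // Clamp the lower cell corner so the +1 neighbor stays inside the lattice.
  const r0 = Math.min(Math.floor(fr), n - 1);
  const g0 = Math.min(Math.floor(fg), n - 1);
  const b0 = Math.min(Math.floor(fb), n - 1);
  const tr = fr - r0, tg = fg - g0, tb = fb - b0;
  const out = [0, 0, 0];
  for (let c = 0; c < 3; c++) {
    // Blend along red at each of the four (green, blue) corners...
    const c00 = lerp(lut[idx(r0, g0, b0, c)],         lut[idx(r0 + 1, g0, b0, c)],         tr);
    const c10 = lerp(lut[idx(r0, g0 + 1, b0, c)],     lut[idx(r0 + 1, g0 + 1, b0, c)],     tr);
    const c01 = lerp(lut[idx(r0, g0, b0 + 1, c)],     lut[idx(r0 + 1, g0, b0 + 1, c)],     tr);
    const c11 = lerp(lut[idx(r0, g0 + 1, b0 + 1, c)], lut[idx(r0 + 1, g0 + 1, b0 + 1, c)], tr);
    // ...then along green, then along blue.
    out[c] = lerp(lerp(c00, c10, tg), lerp(c01, c11, tg), tb);
  }
  return out;
}
```

Sampling an identity LUT with this function returns the input color unchanged, which is a convenient sanity check for any LUT pipeline.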
.cube layout and compatibility notes
The exported file lists one RGB triplet per line after metadata. DOMAIN_MIN and DOMAIN_MAX declare the input range (here 0–1 per channel). Most hosts assume sRGB-like display-referred values when no ICC chain is present; for strict scene-linear workflows you should verify compatibility inside your color-management settings. This generator does not embed ICC profiles or ASC CDL sidecars.
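If a host app rejects a cube, a quick structural check often isolates the problem. This minimal validator assumes the layout described above (keyword lines followed by one triplet per line) and is a debugging aid, not a full parser:

```javascript
// Sketch: minimal .cube sanity check. Confirms a LUT_3D_SIZE keyword
// exists and that the number of data triplets matches size^3.
function validateCube(text) {
  const lines = text.split('\n').map(l => l.trim()).filter(l => l && !l.startsWith('#'));
  const sizeLine = lines.find(l => l.startsWith('LUT_3D_SIZE'));
  if (!sizeLine) return { ok: false, reason: 'missing LUT_3D_SIZE' };
  const size = parseInt(sizeLine.split(/\s+/)[1], 10);
  // Data lines start with a digit, minus sign, or dot and hold 3 numbers;
  // keyword lines (TITLE, DOMAIN_MIN, DOMAIN_MAX) start with letters.
  const data = lines.filter(l => /^[-\d.]/.test(l) && l.split(/\s+/).length === 3);
  return { ok: data.length === size ** 3, size, entries: data.length };
}
```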
Color space and reference quality
Analysis assumes browser sRGB decoding for uploaded rasters, consistent with other Canvas-based SynthQuery tools. Wide-gamut masters may shift slightly when constrained through eight-bit encoders. High-resolution references are downscaled for analysis to protect mobile GPUs; the color statistics remain representative for typical grades. For best results, pick a reference that already reflects the contrast and white balance you want, not a severely underexposed thumbnail.
Use cases
Film and video color grading teams can snapshot a hero grade from a graded still and share a starting LUT with junior artists or remote editors who need a consistent baseline before shot-specific tweaks. Documentary pipelines sometimes match archival scans to newly shot interviews; a reference still from the archival pass can seed a cube for first-pass alignment.
Social teams chasing Instagram-style presets can extract a look from an approved reference frame, preview it on raw phone captures, and iterate intensity before committing captions in the same browser session. Brand guidelines that specify warm highlights and muted teal shadows translate faster into a distributable LUT than verbal briefs alone—though legal should still approve any trademarked “look” claims.
Video production coordinators juggling B-roll from multiple cameras use cubes as a lingua franca between DIT, offline, and finishing. Photographers who batch in Lightroom or Capture One can still generate a cube here for one-off comps, teaser videos, or partner agencies that live entirely inside video tools. Content creators on tight deadlines benefit when the only “grade master” they have is a PNG from a mood board: this page turns that still into something loadable, not just inspirational wallpaper.
SynthQuery also serves writers and marketers; when LUT work sits beside AI-assisted scripts, run those drafts through the AI Detector and Humanizer so disclosure and tone stay aligned with platform rules.
How SynthQuery compares
Professional suites such as DaVinci Resolve and Adobe Photoshop ship deep toolsets: node graphs, masks, HDR scopes, and print-ready soft proofing. They are the right home for final mastering. SynthQuery’s LUT Generator targets a different moment in the workflow—when you need a credible cube in seconds, without launching a project file, signing into a cloud renderer, or training new teammates on a full grading interface.
Resolve and Photoshop can also generate or manipulate LUTs, but they assume you already understand their color management model, sometimes require paid licenses, and may route media through heavy GPU paths. This page stays free, instant, and readable: one reference upload, optional preview, two downloads (.cube and graded still), and intensity that stays synchronized across outputs. Use desktop suites for shot-final color science; use SynthQuery when portability and zero upload latency matter more than per-pixel masking.
Aspect: Where pixels are processed
SynthQuery: Reference decoding, LUT construction, preview rendering, and downloads execute in your browser tab.
Typical alternatives: Cloud LUT services may upload frames to GPUs you do not control; desktop apps process locally but require installation.

Aspect: Learning curve
SynthQuery: Single reference image plus optional preview—no node tree, timeline, or layer stack required.
Typical alternatives: Resolve and Photoshop excel for experts but carry extensive UI surface area and training cost.

Aspect: Output artifacts
SynthQuery: Standard .cube text (33³) plus optional graded raster in common web formats.
Typical alternatives: Pro tools may emit many formats (CLF, CSP, AML) and embed them inside project bundles.

Aspect: Privacy posture
SynthQuery: Designed for confidential stills: no server-side image upload for LUT generation on this page.
Typical alternatives: Collaboration-first SaaS may require accounts and cloud storage policies you must audit per vendor.
How to use this tool effectively
1. Prepare a reference still you have rights to use. Choose a frame that already shows the contrast, white balance, and saturation you want others to mimic—flat log stills work, but the resulting LUT reflects that flatness until paired with a proper transform elsewhere.
2. Open the LUT Generator page and upload the reference via drag-and-drop or the browse control. Wait for the busy indicator to finish while the 33³ table builds. If decoding fails, re-export from your editing app as eight-bit sRGB PNG or JPEG and try again.
3. Click Download .cube to save a Resolve- and Premiere-friendly text LUT. The filename includes your source name for easy versioning. Import it into your NLE’s LUT slot or Photoshop’s adjustment pipeline and verify under your display profile.
4. Optionally upload a second “before” image that represents ungraded footage or a neutral photo. The tool renders the LUT at your chosen intensity and shows a draggable comparison: original on the left, graded preview on the right. Use arrow keys on the focused divider for precise alignment checks.
5. Adjust the intensity slider to tame aggressive shifts or to blend subtly with the source photography. Remember that the exported .cube matches the slider, so set strength here before handing files to collaborators if you want WYSIWYG behavior.
6. When satisfied, download the graded preview image in your preferred format (match, JPEG, PNG, or WebP). Archive the untouched originals separately because re-encoded files lose generational fidelity.
7. For more free utilities, visit /free-tools. When copy workflows intersect with imaging, use the AI Detector and Humanizer from the main navigation, and keep https://synthquery.com/tools bookmarked for the full SynthQuery catalog.
Limitations and best practices
This extractor models a global color mapping, not shot-specific power windows, spatial keys, or temporal noise reduction. It will not replace a colorist’s eye on skin tones in every lighting condition. RAW files must be developed to raster first. Animated GIFs, HDR floating-point buffers, and CMYK print separations are out of scope. Very small references (a few hundred pixels) produce noisy statistics; prefer at least HD-sized stills when possible. Always soft-proof on calibrated hardware before broadcast or theatrical delivery.
Remember that captions, scripts, and metadata accompanying graded deliverables may also fall under platform AI disclosure policies; review them alongside the imagery.
Frequently asked questions
A LUT (Look-Up Table) is a recipe that says “when you see this input color, replace it with that output color.” One-dimensional LUTs might only adjust brightness curves; three-dimensional LUTs capture interactions between red, green, and blue so shifts feel like real grades. Editors load them as presets, stack them under masks, or convert between color spaces in controlled pipelines. SynthQuery builds a 33³ 3D LUT you can save as .cube and reuse anywhere that format is accepted.
DaVinci Resolve, Premiere Pro, and Photoshop all accept .cube files for display-referred creative LUTs in most default workflows. Import paths differ—Resolve uses LUT slots on nodes or clips; Premiere maps through Lumetri; Photoshop loads 3D LUT files through adjustment layers or Camera Raw depending on version. Always test inside your specific color-managed timeline because scene-linear vs display-referred tagging changes how strong the LUT appears. If a host rejects the file, re-export from this page and confirm the extension is literally .cube with UTF-8 text inside.
Accuracy depends on how representative the still is. A well-exposed frame from the same camera and lighting as your target footage produces more believable results than a random stock photo with different gamut and gamma. This tool uses global Lab statistics, so it cannot invent local adjustments like sky-only desats. Treat the cube as a creative starting point or matching hint, not a guarantee of shot-final science. For mission-critical deliverables, validate vectorscopes and skin-line plots in your finishing environment.
The file format is often identical—many photo apps import the same .cube used in video. The difference is context: video pipelines worry about broadcast legal ranges, log encoding, and temporal consistency across frames, while photo pipelines worry about print profiles and still resolution. SynthQuery outputs a generic creative cube; you remain responsible for placing it after the correct technical transforms (log-to-display, ACES transforms, etc.) in video, or beneath soft proofing in print photo workflows.
Browser Canvas reads your uploaded rasters as eight-bit RGBA with sRGB-ish display assumptions, consistent with other SynthQuery imaging utilities. The exported cube maps normalized zero–one RGB triplets in that domain. Wide-gamut displays may preview slightly differently than Rec.709 monitoring until you manage ICC profiles downstream. If you need strict ACES or P3 mastering, consult your color scientist—this page does not embed ICC tags or IDTs.
Very low-resolution references can make statistics noisy; very large files are downscaled internally so browsers remain responsive. Aim for a still that is at least around HD resolution with meaningful color variation. Extreme panoramas or tiny social thumbnails may skew means and standard deviations. If two references disagree, trust the one shot under production lighting conditions closer to your final footage.
SynthQuery blends each lattice entry toward identity by the same percentage shown on screen so previews, graded still exports, and imported cubes stay aligned. At zero percent the table becomes effectively neutral; at one hundred percent you get the full extracted mapping. Some other tools keep cubes at full strength only—here the choice is deliberate for predictable handoffs.
With log footage you can preview a creative intent, but log encodings require a technical transform before a display-referred creative LUT behaves as expected. Apply log-to-rec709 (or your show LUT) first in the NLE, then layer the SynthQuery cube as a creative pass—or bake the log transform into your reference still before uploading. HDR mastering needs explicit luminance metadata this page does not author; treat outputs as SDR creative hints unless you verify headroom externally.
No images are uploaded: LUT generation, preview rendering, and downloads run entirely in your browser using Canvas and typed arrays, mirroring the privacy model of the Photo Sepia Filter and Image Resizer. Normal website analytics may still record that you visited the page, similar to any public site, but the image bytes are not transmitted to SynthQuery for this tool's processing. Clear downloads on shared machines if previews contained confidential talent or locations.
Start at /free-tools for the utilities grid linking resizers, converters, the Photo Sepia Filter, and more. Companion workflows such as white balance correction, duotone, hue shift, saturation and contrast controls, palette extraction, and histogram inspection may appear in that hub over time as SynthQuery expands imaging coverage. Until each dedicated route exists, chain this LUT with the Image Resizer, WebP Converter, and PNG Compressor linked from that hub, and browse https://synthquery.com/tools for AI detection, humanization, plagiarism scanning, and readability products when your project mixes visuals with text.
LUT Generator from Image - Free Online Image Editing Tool
Reference → 3D LUT (.cube) · optional before image · intensity 0–100% · draggable compare · client-side Canvas only