Why does Joker feel so unsettlingly green and yellow?
Why is Blade Runner soaked in teal and orange?
Why does film make skin look soft and warm?
It’s not magic. It’s color.
And whoever controls color controls the emotional weight of the frame. Color grading is cinema's visual language: a way to shape atmosphere, guide the viewer's attention, and set the tone of a story. It can be subtle and almost invisible, or bold and stylized, as in the work of Wong Kar-Wai, Fincher, Nolan, or Villeneuve. But to speak this language fluently, you first need to understand its grammar.
Where does color work actually begin? How do we know what the “right” color even is? And how do you translate creative intent into technical action?
Spoiler: it doesn’t start with LUTs or filters. It starts with color targets, calibration, and carefully chosen palettes.
In this series, we’ll explore:
- what color grading is on the most fundamental level;
- why limiting your palette is a feature, not a bug;
- how color charts work (and why they matter);
- and how to build your own grading tool in the browser — with React and some simple math.
We won’t just explain it. We’ll show it. And you’ll be able to experiment, tinker, and maybe for the first time actually see what color grading is made of.
What is Color Grading, and Why Start with Color Targets?
Color grading is the process of adjusting and stylizing an image to achieve a specific visual tone, emotional effect, or technical standard. Whether in film, photography, or digital production, grading usually includes:
- balancing exposure and white point;
- correcting technical color shifts;
- crafting a visual style with contrast, tones, and color curves.
But grading isn’t just about “making it pretty.” It’s about control — of the mood, of the viewer’s focus, of the visual language of the entire narrative.
Before you apply any creative look, you need to bring the image to a neutral state, which means:
- correcting unwanted color casts (color correction);
- normalizing the footage to a standard color space (like Rec.709 or sRGB);
- matching material from different cameras into a consistent baseline.
And for that, colorists use color targets like the **ColorChecker**, **ChromaDuMonde**, or other reference charts.
A color target is a chart of color patches with precisely measured values. These values aren’t arbitrary — they’re obtained through spectrophotometric measurements in controlled environments using professional equipment like X-Rite or Konica Minolta spectrophotometers.
One of the first widely adopted targets was the Kodak Gray Scale — a strip of neutral gray tones used for exposure control. Later came more advanced charts with full-color patches — like the Macbeth ColorChecker, introduced in 1976 (now known as the X-Rite ColorChecker). It features 24 color swatches designed to represent common real-world colors: human skin, blue sky, green foliage, and more.
With the rise of digital photography and digital cinema, color targets became even more critical. They are now essential tools for calibrating not just cameras, but also monitors, printers, scanners — and any device that handles color. They’re used in color matching, profiling, and neutral balancing workflows — from film production to scientific imaging.
Take X-Rite’s ColorChecker, for example. Each patch is measured under standardized lighting (usually D65 or D50), with results recorded in CIE XYZ coordinates — a device-independent color model. Those coordinates are then converted into RGB values, depending on your working color space (like sRGB, Rec.709, or AdobeRGB).
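To make that conversion concrete, here's a minimal sketch of the standard XYZ-to-sRGB path. The matrix is the published sRGB/D65 transform; it assumes XYZ values normalized so that reference white has Y = 1, and exact normalization details vary by workflow:

const xyzToSrgb = (x, y, z) => {
  // Linear transform: CIE XYZ → linear RGB (IEC 61966-2-1 sRGB matrix, D65)
  const r =  3.2406 * x - 1.5372 * y - 0.4986 * z;
  const g = -0.9689 * x + 1.8758 * y + 0.0415 * z;
  const b =  0.0557 * x - 0.2040 * y + 1.0570 * z;
  // sRGB gamma encoding (linear → display-referred)
  const encode = (c) =>
    c <= 0.0031308 ? 12.92 * c : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
  // Clamp and scale to 8-bit channel values
  const to8bit = (c) => Math.round(Math.min(1, Math.max(0, encode(c))) * 255);
  return [to8bit(r), to8bit(g), to8bit(b)];
};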
So the RGB arrays we use in our app aren’t guesswork — they’re precise digital representations of standardized, physically measured patches.
If the skin tone patch in the ColorChecker Classic is defined as [194, 150, 130] in RGB, that’s how it should look under correct conditions. If your footage shows something different, that’s a sign of a color cast — and a starting point for correction.
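As an illustration only (real pipelines work in linear light and fit a correction across many patches, not one), a naive per-channel fix might look like this; the measured value here is hypothetical:

const reference = [194, 150, 130];  // the chart's published patch value
const measured  = [210, 148, 112];  // hypothetical value sampled from footage
const gains = reference.map((c, i) => c / measured[i]);
// Apply the same gains to every pixel to neutralize the cast
const correctPixel = ([r, g, b]) =>
  [r, g, b].map((c, i) => Math.min(255, Math.round(c * gains[i])));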
The Catch: Color Charts Are Just the Beginning
Color targets are essential for calibration — but that’s all they are. A beginning. They don’t account for:
- how colors behave in highlights or shadows;
- the unique characteristics of film stock or lenses;
- or the creative intent behind a particular look.
In professional tools like DaVinci Resolve or Dehancer, color charts are just step one in a long pipeline. From there, you move into advanced processes like film emulation, tone mapping, grain, halation, bloom, and other stylistic transformations. So it’s critical to understand: a chart is a calibration tool — not a style.
CinePalette: A Minimal Tool for Exploring Color Palettes
To show how choosing a palette affects an image, we built CinePalette — a simple web app that visualizes what happens when you restrict your color space (a process known as palette reduction).
What You Can Do with CinePalette:
- upload any image;
- pick a palette (ColorChecker, Portra, Sepia, etc.);
- remap every pixel to the closest color in that palette;
- compare before & after with an interactive slider;
- save the result;
- or build your own palette from scratch.
How It Works in Code
Our app runs entirely in the browser using React and the Canvas API. The project — called CinePalette — will be open-sourced and available on GitHub (link at the end of the series).
We start with a set of predefined palettes, but users can also build and save their own. Palettes are defined as arrays of RGB values — for example, here’s what the Kodak Portra 400 palette looks like:
"Portra 400": [
[75, 60, 50], // shadows
[160, 130, 110], // skin tones
[220, 200, 180], // highlights
[60, 100, 80], // foliage
[180, 150, 100] // neutral
],
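With presets stored this way, the active palette can be looked up by name; selectedName here is an assumed piece of state holding the user's choice in the UI:

const palette = palettes[selectedName]; // e.g. "Portra 400"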
The selected palette defines which colors are “allowed” to appear in the final image. These become the visual language of the frame — the base tones that set its mood and style.
When a user uploads an image and chooses a palette, here’s what happens under the hood:
- The image is rendered to a hidden <canvas>; this gives us pixel-level access to its data.
- We extract the ImageData object, which holds a flat array where each pixel is represented by four values: [R, G, B, A] (these first two steps are sketched just below).
- We loop through every pixel and read its RGB color.
- For each pixel, we find the closest matching color from the selected palette using Euclidean distance in RGB space, and replace it.
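Here's roughly what that setup looks like with the Canvas API; the img variable is assumed to be an HTMLImageElement created from the uploaded file:

const canvas = document.createElement("canvas"); // hidden, never attached to the DOM
canvas.width = img.width;
canvas.height = img.height;
const ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0);
// Flat Uint8ClampedArray: [R, G, B, A, R, G, B, A, ...]
const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
const data = imageData.data;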
Let's load up a Shirley card (a classic skin-tone reference image once used to calibrate film printing) and try applying different palettes; you'll see immediately how the palette choice shapes the image.
The core of the magic lies in a function that analyzes each individual pixel and finds the closest matching color from the selected palette:
const findClosestColor = (r, g, b) => {
  let minDist = Infinity;
  let closest = [r, g, b]; // fall back to the original color
  for (const [pr, pg, pb] of palette) {
    // Euclidean distance between the pixel and this palette entry
    const dist = Math.sqrt((r - pr) ** 2 + (g - pg) ** 2 + (b - pb) ** 2);
    if (dist < minDist) {
      minDist = dist;
      closest = [pr, pg, pb];
    }
  }
  return closest;
};
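A small design note: Math.sqrt is monotonic, so comparing squared distances selects the same palette entry while saving a square root per comparison, a common micro-optimization:

const dist = (r - pr) ** 2 + (g - pg) ** 2 + (b - pb) ** 2; // same ordering, no sqrt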
Then, we replace the pixel’s original color in the ImageData with the closest match from the palette. And we repeat this — for every single pixel in the image.
for (let i = 0; i < data.length; i += 4) { // step over RGBA quadruples
  const [r, g, b] = [data[i], data[i + 1], data[i + 2]];
  const [nr, ng, nb] = findClosestColor(r, g, b);
  data[i] = nr;
  data[i + 1] = ng;
  data[i + 2] = nb;
  // data[i + 3] (alpha) is left untouched
}
Once all pixels have been processed, we render the result back onto the <canvas> and convert it to an image using .toDataURL(). This allows the user to see the result instantly in the browser — and download the filtered image with a single click.
ctx.putImageData(imageData, 0, 0);
setFilteredImage(canvas.toDataURL());
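The download itself can be as simple as a programmatically clicked link; this is an assumed helper, not necessarily how CinePalette wires it up:

const downloadImage = (dataUrl) => {
  const link = document.createElement("a");
  link.href = dataUrl;               // the canvas.toDataURL() result
  link.download = "cinepalette.png"; // assumed filename
  link.click();                      // triggers the browser download
};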
Here, we use Euclidean distance in RGB space — a classic method to measure how “close” two colors are:
const dist = Math.sqrt((r - pr) ** 2 + (g - pg) ** 2 + (b - pb) ** 2);
In this formula, (r, g, b) is the color of the current pixel, and (pr, pg, pb) is one of the colors in the palette. Out of all the distances calculated, we choose the smallest one: the closest visual match within the selected palette.
This approach is intuitive and easy to implement, but it has limitations: RGB space doesn’t account for how humans actually perceive color — for instance, we’re more sensitive to green than to blue, and brightness differences can be misleading.
We use this approach in CinePalette as a simple and accessible way to demonstrate the basic principle of color mapping. However, even in its current form, you might notice that some colors get replaced in ways that feel unexpected or “off.”
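Short of a full CIELAB conversion, one cheap improvement would be a weighted distance such as the well-known "redmean" approximation, which scales the red and blue differences by how red the two colors are:

const redmeanDist = (r1, g1, b1, r2, g2, b2) => {
  const rMean = (r1 + r2) / 2;
  const dr = r1 - r2;
  const dg = g1 - g2;
  const db = b1 - b2;
  // Weights approximate human sensitivity better than plain Euclidean distance
  return Math.sqrt(
    (2 + rMean / 256) * dr * dr +
    4 * dg * dg +
    (2 + (255 - rMean) / 256) * db * db
  );
};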
In future versions, we plan to add a toggle between RGB and CIELAB color spaces — allowing users to compare how different models affect the accuracy of color matching.
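For the curious, here's a sketch of what that conversion involves, using the standard sRGB matrix and the D65 reference white (constants from the CIE definitions; production code would lean on a vetted color library):

const srgbToLab = (r, g, b) => {
  // 1. Undo sRGB gamma (8-bit → linear 0..1)
  const lin = (c) => {
    c /= 255;
    return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  };
  const [lr, lg, lb] = [lin(r), lin(g), lin(b)];
  // 2. Linear RGB → CIE XYZ (sRGB matrix, D65)
  const x = 0.4124 * lr + 0.3576 * lg + 0.1805 * lb;
  const y = 0.2126 * lr + 0.7152 * lg + 0.0722 * lb;
  const z = 0.0193 * lr + 0.1192 * lg + 0.9505 * lb;
  // 3. XYZ → LAB, normalized to the D65 reference white
  const f = (t) => (t > 0.008856 ? Math.cbrt(t) : 7.787 * t + 16 / 116);
  const [fx, fy, fz] = [f(x / 0.95047), f(y / 1.0), f(z / 1.08883)];
  return [116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)];
};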
Why Does This Matter?
CinePalette showcases a basic but fundamental step in color grading: palette restriction. This is where every visual style begins, with the question "What if we only used these colors?"
A Portra palette brings warm, nostalgic tones. Pro 400 feels cool and subdued. Teal & Orange delivers high-contrast cinematic punch. Unlike tools like Dehancer or Resolve, CinePalette doesn’t simulate the physics of film. But it captures the essence: color is a tool for style and storytelling.
What’s Next?
This is just the beginning. In the next parts of the series:
- we’ll expand CinePalette with the ability to pick a palette from a reference image;
- add automatic extraction of color schemes from any frame or photo;
- introduce a toggle between RGB and LAB for more perceptually accurate matching;
- and break down how color harmony works — and how you can use it in real-world grading.
Stay tuned — and get ready to not just learn color, but truly see it.