Digital color is a mess
Design · Written on 28 Jan 2026

Color feels objective until you open the same file on two screens and it shifts.

Color is not a property of light; it is perception. Light in the ~400–700 nm range hits your retina, cone responses kick in (peaking roughly at 420 nm, 530 nm, and 560 nm for the S, M, and L cones), and your brain constructs what it calls "red" or "blue." That process is deeply non-linear: we distinguish differences between greens more easily than between blues, and brightness perception is closer to logarithmic than linear.

[Image: the electromagnetic spectrum]

From Indexed Color to RGB

Early displays used indexed color: small integers pointing to a palette table. When memory got cheaper, we switched to full RGB per pixel. That helped, but it did not solve consistency. (255, 0, 0) on one display could still look different on another because hardware gamuts differ.
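The indexed scheme is easy to sketch: pixels store small palette indices instead of full channel values. A toy decoder, with a made-up four-entry palette:

```python
# Indexed color: each pixel is an index into a shared palette of RGB
# triples. This palette and image are invented for illustration.
PALETTE = [
    (0, 0, 0),        # 0: black
    (255, 0, 0),      # 1: red
    (0, 255, 0),      # 2: green
    (255, 255, 255),  # 3: white
]

def decode_indexed(pixels):
    """Expand palette indices into per-pixel RGB triples."""
    return [PALETTE[i] for i in pixels]

# A 2x2 image stored as 4 index bytes instead of 12 channel bytes.
print(decode_indexed([1, 2, 0, 3]))
```

The memory trade-off is visible here: the image costs one byte per pixel plus a fixed palette, at the price of only ever showing palette colors.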

The International Color Consortium (1993) introduced ICC profiles to describe device color behavior. Then Microsoft and HP shipped sRGB in 1996, modeled on typical CRT phosphors. It became the default web color space, and it still is.

What Makes a Color Space

A color space is not just "RGB." A practical color space has three core parts:

  • Primary chromaticities (in CIE coordinates)
  • A white point (usually D65 / 6500K for digital work)
  • A transfer function (often gamma around 2.2)

Primaries define the gamut boundary, the white point defines the neutral reference, and the transfer function shapes how code values map to light, which also decides where precision is spent: more code values go to the shadows, where our eyes are more sensitive to differences.
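The transfer-function piece can be made concrete with sRGB's own curve, which is not a pure gamma 2.2 but a short linear toe joined to a power segment:

```python
def srgb_encode(linear):
    """Linear light (0..1) -> sRGB-encoded value (0..1)."""
    if linear <= 0.0031308:
        return 12.92 * linear            # linear toe near black
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded):
    """sRGB-encoded value (0..1) -> linear light (0..1)."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

# 18% linear gray encodes to roughly the middle of the code range,
# which is the precision-in-the-shadows effect in action.
print(srgb_encode(0.18))  # ~0.46
```

Blending or resizing images in encoded values instead of linear light is a classic source of too-dark gradients, which is why the decode step matters.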

Gamut

Display P3 covers roughly 25% more color volume than sRGB, with most of the expansion in the reds and greens. ProPhoto RGB is wider still.

When wide-gamut content lands on a narrower display, gamut mapping has to happen. You either clip out-of-gamut values (fast, but detail dies at the edges) or compress the mapping (better relationships, more computation). In practice, most pipelines clip.
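Per-channel clipping is a one-liner, which is part of why pipelines default to it; a sketch:

```python
def clip_to_gamut(rgb):
    """Per-channel clip: fast, but collapses distinct out-of-gamut colors."""
    return tuple(min(1.0, max(0.0, c)) for c in rgb)

# Two different out-of-gamut reds clip to the same displayable color --
# this is the "detail dies at the edges" failure mode.
print(clip_to_gamut((1.2, -0.05, 0.0)))
print(clip_to_gamut((1.4, 0.0, 0.0)))
```

A compressive mapping would instead rescale values so those two reds stay distinguishable, at the cost of shifting in-gamut colors slightly.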

[Image: color gamut comparison]

Models vs. Spaces

RGB is one coordinate system. HSL and HSV are alternative coordinate systems over the same underlying colors. The distinction matters for gradients: interpolating in RGB takes a straight path through the cube, which often produces muddy midpoints. Polar models can travel around the hue wheel instead of cutting through the interior.
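A quick illustration with Python's standard colorsys module: the RGB midpoint of yellow and blue is pure gray, while an HSV midpoint stays fully saturated. (Naive hue averaging here for brevity; real implementations interpolate hue along the shorter arc of the wheel.)

```python
import colorsys

def lerp(a, b, t):
    """Component-wise linear interpolation between two tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

yellow, blue = (1.0, 1.0, 0.0), (0.0, 0.0, 1.0)

# Straight line through the RGB cube: the midpoint is pure gray.
print(lerp(yellow, blue, 0.5))  # (0.5, 0.5, 0.5)

# Interpolating in HSV moves around the hue wheel at full saturation,
# so the midpoint remains a vivid color instead of collapsing to gray.
h1, s1, v1 = colorsys.rgb_to_hsv(*yellow)
h2, s2, v2 = colorsys.rgb_to_hsv(*blue)
mid = colorsys.hsv_to_rgb((h1 + h2) / 2, (s1 + s2) / 2, (v1 + v2) / 2)
print(mid)
```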

Perceptual Uniformity

CIELAB (1976) aimed for perceptual uniformity: equal numeric steps should feel like equal visual steps. It models opponent channels: L* for lightness, a* for the red–green axis, and b* for the yellow–blue axis.

Oklab (2020) improved hue behavior and made computation friendlier for modern use. Oklch is its cylindrical form, and you see it more in CSS now. You still need gamut constraints, or you will ask displays to show colors they physically cannot.
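A sketch of the linear-sRGB-to-Oklab conversion, using the matrices from Björn Ottosson's published reference; note the cube root, which plays the perceptual-compression role that the transfer function plays elsewhere:

```python
def linear_srgb_to_oklab(r, g, b):
    """Linear-light sRGB -> Oklab, per Ottosson's reference matrices.
    Inputs must be linear (decode sRGB gamma first)."""
    # Linear sRGB -> approximate cone (LMS) responses.
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    # Cube-root non-linearity makes numeric steps perceptually even.
    l_, m_, s_ = l ** (1 / 3), m ** (1 / 3), s ** (1 / 3)
    return (
        0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_,
        1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_,
        0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_,
    )

# White sits on the neutral axis: L ~= 1 with a and b ~= 0.
print(linear_srgb_to_oklab(1.0, 1.0, 1.0))
```

Oklch is then just the polar form of the (a, b) plane: chroma is the radius, hue is the angle.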

Bit Depth

"16.7 million colors" comes from 8-bit channels: 256^3. At 10-bit, you get over a billion possible RGB combinations. HDR workflows generally need 10-bit or higher, and professional pipelines often use 16-bit for editing headroom.

More depth reduces banding, but it increases storage and processing cost. JPEG's 8-bit ceiling is a big reason heavy edits fall apart: you're repeatedly quantizing already-quantized data.
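The precision difference is easy to demonstrate by snapping the same value to 8-bit versus 10-bit codes:

```python
def quantize(x, bits):
    """Snap a 0..1 value to the nearest representable code at a bit depth."""
    levels = (1 << bits) - 1          # 255 at 8-bit, 1023 at 10-bit
    return round(x * levels) / levels

x = 0.5004
err8 = abs(quantize(x, 8) - x)
err10 = abs(quantize(x, 10) - x)
# 10-bit codes are ~4x finer, so the snap error shrinks accordingly.
print(err8, err10)
```

Run enough values through repeated edit-then-quantize cycles and the 8-bit errors accumulate into visible banding, which is the JPEG failure described above.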

Color Management

Your OS is constantly compositing content from different spaces. Apps attach ICC profiles, and the color management module converts through a connection space (usually CIE XYZ) into your display profile.
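The connection-space hop can be sketched with the standard linear-sRGB-to-XYZ matrix (D65 white point); a real CMM does this per profile, in both directions:

```python
def linear_srgb_to_xyz(r, g, b):
    """Linear-light sRGB -> CIE XYZ (D65), the usual profile
    connection space. Matrix values are the standard sRGB ones."""
    x = 0.4124564 * r + 0.3575761 * g + 0.1804375 * b
    y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
    z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * b
    return (x, y, z)

# White lands on the D65 white point: roughly (0.9505, 1.0, 1.0888).
print(linear_srgb_to_xyz(1.0, 1.0, 1.0))
```

Converting to a display profile then means applying that display's inverse matrix and transfer function, which is why a missing or wrong profile skews every color downstream.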

When this works, color is stable. When it breaks, usually because profiles are missing or untagged assets get silently assumed to be sRGB, things drift in ways that feel random.

Know your target gamut. Pick sane bit depth. Tag profiles correctly. The rest is plumbing.