Three wavelengths aren’t enough for realistic CG color

A bout of insomnia this morning allowed me to rehash an idea that I’ve long meant to revisit. It arises from work I began in the summer of 1994 and refined in 1996, and I think it has implications that could improve the realism of computer graphics rendering. Put simply, I believe color should be represented with many more wavelength samples than the three of RGB. I have not searched extensively, but I don’t think this has ever been implemented.

My project in 1994 (at the NSF Geometry Center in Minneapolis) centered on a whimsical topic: soap bubbles, or more generally, “thin films”. I wanted to find a way to render them, in particular their swirling iridescent colors, which took me on a foray into their physics. As part of my summer project there, I created a rudimentary ray tracer (more of a proof of concept than anything; I didn’t know anything about ray tracers when I started) just for the purpose of rendering thin films with some degree of realism. Unfortunately the Geometry Center and any records of my project are long gone now, though I still have a poster and a video I created, perhaps some code somewhere, and a vague memory of the physics.

The reason thin films are colored has to do with interference; the film is so thin (on the order of the wavelengths of visible light) that the reflections of a light ray from the film’s front and back surfaces interfere with each other, making each wavelength come out brighter or darker depending on the exact thickness of the film relative to that wavelength. The physics are explained reasonably well at Wikipedia. The variations in color across the film are due to slight variations in thickness (resulting from gravity, evaporation, wind, viscosity, etc.), which cause different wavelengths to interfere more or less.
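To make the physics concrete, here’s a minimal sketch of the two-beam approximation for a soap film in air, in Python. The function name and defaults are mine, and it ignores multiple internal reflections and transmission losses, so it’s illustrative rather than exact. A nice sanity check: as the thickness goes to zero, the reflectance vanishes at every wavelength, which is why the thinnest region of a draining soap film looks black.

```python
import math

def thin_film_reflectance(wavelength_nm, thickness_nm, n_film=1.33, cos_theta=1.0):
    """Reflectance of a thin film in air, two-beam approximation.

    The rays reflected off the film's front and back surfaces differ in
    phase by delta = 4*pi*n*d*cos(theta)/lambda, and the front-surface
    reflection also picks up a pi phase flip (air -> denser medium).
    """
    # Fresnel amplitude reflection coefficient near normal incidence;
    # the negative sign encodes the pi phase flip at the front surface.
    r = (1.0 - n_film) / (1.0 + n_film)
    delta = 4.0 * math.pi * n_film * thickness_nm * cos_theta / wavelength_nm
    # |r_front + r_back * e^{i*delta}|^2 with r_back = -r_front simplifies to:
    return 4.0 * r * r * math.sin(delta / 2.0) ** 2
```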

The point is that this is happening at the wavelength level. Now imagine a naive computer graphics approach to rendering this: you would calculate the effect on red, green, and blue light – three wavelengths – and that would be your color. But three wavelengths hardly represent the whole spectrum, especially when the effects occur at the level of individual wavelengths. And indeed, the results were pretty unconvincing. For a better approximation I added three more wavelengths; I can’t remember now exactly how that worked, but it was easy to fit into a color model, though it may have been wrong-headed (see Magenta Ain’t A Colour for reference). The color bands did look better. I wish I still had the pictures from then to show here.

In 1996, as part of a computer graphics project course at NCSU, I revisited this problem with the idea of using the full spectrum. In an ideal world, we would model a ray of light as a continuous spectral function I(λ) – intensity at each wavelength – and interactions with objects (such as reflection) as functions Rₙ(λ) that scale the light’s intensity at each wavelength, the result being simply the product of the original ray I and all of the interaction functions Rₙ. The problem is, what exactly does this look like? We don’t perceive a spectrum of light; we perceive a single color. How do I translate a function of wavelength into an RGB value to display on a computer screen?
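In practice the continuous functions get discretized. A minimal sketch of the representation I have in mind, assuming a fixed grid of sample wavelengths (the sample count, range, and names here are arbitrary choices of mine):

```python
N_SAMPLES = 40
LAMBDA_MIN, LAMBDA_MAX = 380.0, 730.0  # visible range, in nm

STEP = (LAMBDA_MAX - LAMBDA_MIN) / (N_SAMPLES - 1)
WAVELENGTHS = [LAMBDA_MIN + i * STEP for i in range(N_SAMPLES)]

def sample_spectrum(func):
    """Discretize a continuous spectral function I(lambda) onto the grid."""
    return [func(lam) for lam in WAVELENGTHS]

def interact(intensity, *reflectances):
    """I(lambda) * R1(lambda) * R2(lambda) * ..., sample by sample."""
    out = list(intensity)
    for refl in reflectances:
        out = [a * b for a, b in zip(out, refl)]
    return out
```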

In 1996, the web did not extend (as far as I could find – actually it may have) to a deep practical treatment of how to do this. There may have been articles like this one now on Wikipedia explaining the concept fairly clearly, but for the actual empirical numbers I had to dig up an obscure book in the library. To summarize, our eyes have color receptors of three different varieties, each stimulated by a different range of visible wavelengths – one mostly by longer wavelengths, one mostly by shorter wavelengths, and one mostly by the mid-range. In fact each can be modeled by a stimulation function S(λ), which, multiplied by a spectral function I(λ) and integrated over the visible spectrum, gives that receptor’s stimulation by a particular ray of light (see that Wikipedia article).
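Numerically, that multiply-and-integrate step is just a weighted sum over the samples. A sketch, assuming the sensitivity curve arrives as samples on the same wavelength grid as the light (in real code the curves would come from tabulated CIE data):

```python
def receptor_stimulation(wavelengths, intensity, sensitivity):
    """Trapezoidal approximation of the integral of
    S(lambda) * I(lambda) over the visible spectrum."""
    total = 0.0
    for i in range(len(wavelengths) - 1):
        dw = wavelengths[i + 1] - wavelengths[i]
        f0 = sensitivity[i] * intensity[i]
        f1 = sensitivity[i + 1] * intensity[i + 1]
        total += 0.5 * (f0 + f1) * dw
    return total
```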

These functions are my key to mapping a spectral intensity function into values I can display. In the 1930s, a set of experiments (incredibly tedious ones, I can only imagine) mapped typical observers’ color responses as a function of wavelength; from those responses a three-dimensional space was defined, called (somewhat arbitrarily) the CIE XYZ color space. Red (long), blue (short), and green (midrange) light can be used to stimulate the individual receptors almost in isolation, which is why monitors only need to emit three narrow bands of wavelengths to cover most of our range of color perception.
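Getting from XYZ to values a monitor can display is a standard linear transform. For sRGB (today’s default; the matrix and transfer-function constants below are the standard published ones):

```python
def xyz_to_linear_srgb(x, y, z):
    """CIE XYZ (D65 white point) to linear sRGB, standard matrix."""
    r =  3.2406 * x - 1.5372 * y - 0.4986 * z
    g = -0.9689 * x + 1.8758 * y + 0.0415 * z
    b =  0.0557 * x - 0.2040 * y + 1.0570 * z
    # Crude clamp for out-of-gamut spectra; a real renderer would gamut-map.
    return tuple(min(max(c, 0.0), 1.0) for c in (r, g, b))

def srgb_gamma(c):
    """sRGB transfer function (linear -> display-encoded)."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1.0 / 2.4) - 0.055
```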

But that doesn’t mean those three wavelengths are representative of interactions with the entire spectrum, and thin films are a clear example of the inadequacy. Using the forty or so wavelengths sampled in my sources, I got very convincing renderings indeed. My film thickness models were trivial – just linearly decreasing from bottom to top – but the bands of color looked very realistic.
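Putting the pieces above together, here is a sketch of what the inner loop of such a renderer might do for a film of a given thickness. One big caveat: the color-matching curves below are crude Gaussian stand-ins of mine, peaked roughly where the real CIE 1931 curves peak, not actual data; and the sketch assumes the helpers from the earlier snippets are in scope.

```python
import math

def gaussian(lam, mu, sigma):
    return math.exp(-0.5 * ((lam - mu) / sigma) ** 2)

# Crude stand-ins for the CIE 1931 color-matching functions -- purely
# illustrative; a real implementation would use the tabulated data.
def xbar(lam): return 1.06 * gaussian(lam, 598.0, 38.0) + 0.36 * gaussian(lam, 442.0, 22.0)
def ybar(lam): return gaussian(lam, 556.0, 45.0)
def zbar(lam): return 1.78 * gaussian(lam, 446.0, 20.0)

def film_color(thickness_nm):
    """Displayable color of a film of given thickness under flat white light."""
    refl = [thin_film_reflectance(lam, thickness_nm) for lam in WAVELENGTHS]
    white = [1.0] * N_SAMPLES
    norm = receptor_stimulation(WAVELENGTHS, white, sample_spectrum(ybar))
    x = receptor_stimulation(WAVELENGTHS, refl, sample_spectrum(xbar)) / norm
    y = receptor_stimulation(WAVELENGTHS, refl, sample_spectrum(ybar)) / norm
    z = receptor_stimulation(WAVELENGTHS, refl, sample_spectrum(zbar)) / norm
    return tuple(srgb_gamma(c) for c in xyz_to_linear_srgb(x, y, z))

# Thickness varying linearly, as in my trivial film model:
for d in range(0, 1001, 100):
    print(d, film_color(float(d)))
```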

Everyone’s familiar with light sources having different spectra. This is why everything looks amazing under department store lighting, and why your digital camera likely has different settings for incandescent, fluorescent, sunny, and cloudy light. We can easily see these effects, yet the standard RGB representation of color contains no hint that more than three wavelengths are involved. We know that color interactions actually occur at the surface at the microscopic (indeed, molecular) level, where different wavelengths must each be affected differently. So why not treat them that way in computer graphics?

My hope is to integrate my full-spectrum model into an existing renderer, not only allowing me to render films alongside normal scenes, but also to apply texture maps to films as thickness maps, thereby modeling the variations seen in actual films (how cool would it be to write your name on a bubble this way?). An open-source project like POV-Ray would be perfect. When I looked at that program in particular, I was hoping that color would be represented by a class I could simply modify. Unfortunately the implementation was more deeply integrated – macro-based, as I recall – probably for performance reasons. Perhaps a different program would be more amenable. But in any case, I would run into the problem of representation: no one is used to specifying colors as samples at 40+ wavelengths. Everyone’s scene definitions use RGB colors, and there is no unique mapping from an RGB coordinate to such a representation – in fact there are infinitely many spectra for any given RGB value. (That ambiguity, when you think about it, is the whole point here: two surfaces may appear identical under one light source yet reflect very different colors under another, because their interaction functions differ.) To specify realistic colors as full spectra, we’d have to go around measuring real-world materials with a spectrometer – though one could at least synthesize a plausible smooth spectrum from an RGB color, as sketched below.
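For illustration, one of those infinitely many inverse mappings might look like this (reusing the gaussian helper from above): spread each RGB channel over a smooth lobe centered near a typical primary. The lobe positions and widths are arbitrary choices of mine; Brian Smits’s 1999 paper “An RGB-to-Spectrum Conversion for Reflectances” gives a principled method.

```python
def naive_rgb_to_spectrum(r, g, b):
    """One of infinitely many spectra displaying as (roughly) the given
    RGB color: a mixture of three Gaussian lobes centered near typical
    R/G/B primaries. Illustrative only, not a principled conversion."""
    def spectrum(lam):
        return (r * gaussian(lam, 610.0, 50.0)
                + g * gaussian(lam, 545.0, 45.0)
                + b * gaussian(lam, 465.0, 40.0))
    return spectrum
```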

Still, this is an intriguing idea to me, and I’d really like a chance to see how it works out one day.
