Three wavelengths aren’t enough for realistic CG color

A bout of insomnia this morning allowed me to rehash an idea that I’ve long meant to revisit. It arises from work I began in the summer of 1994 and refined in 1996, and I think it has implications that could improve the realism of computer graphics rendering. Put simply, I believe color should be represented with a much larger sampling of wavelengths than RGB. I haven’t searched extensively, but I don’t think this has ever been implemented.

My project in 1994 (at the NSF Geometry Center in Minneapolis) centered around a whimsical topic, soap bubbles (or more generally, “thin films”). I wanted to find a way to render them, in particular their swirling iridescent colors, which took me on a foray into understanding their physics. As part of my summer project there, I created a rudimentary ray tracer (more of a proof of concept than anything; I didn’t know anything about ray tracers when I started) just for the purpose of rendering thin films with some degree of realism. Unfortunately the Geometry Center and any records of my project are all long gone now, though I still have a poster and a video I created, perhaps some code somewhere, and a vague memory of the physics.

The reason thin films are colored has to do with interference effects: the film is so thin (on the order of the wavelengths of visible light) that light reflected from the film’s two surfaces interferes with itself, making a given wavelength come out brighter or darker depending on the film’s exact thickness relative to that wavelength. The physics is explained reasonably well at Wikipedia. The variations in color across the film are due to slight variations in thickness (resulting from gravity, evaporation, wind, viscosity, etc.), which cause different wavelengths to interfere more or less.
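To make that concrete, here’s a rough sketch (mine, written for this post – not the code from the original project) of the kind of per-wavelength calculation involved. It’s a crude two-beam model that ignores multiple internal reflections and the wavelength dependence of the Fresnel coefficients, keeping only the phase relationship described above:

    import numpy as np

    def thin_film_reflectance(wavelength_nm, thickness_nm, n_film=1.33, cos_theta_t=1.0):
        """Crude two-beam interference factor for a soap film in air.

        Returns a value in [0, 1] that scales the reflected intensity at one
        wavelength.  n_film ~ 1.33 for soapy water; cos_theta_t = 1 means
        normal incidence.
        """
        # Optical path difference between the rays reflected from the two
        # surfaces, plus the half-wave phase shift from reflecting off the
        # denser medium at the front surface.
        path_diff = 2.0 * n_film * thickness_nm * cos_theta_t
        delta = 2.0 * np.pi * path_diff / wavelength_nm + np.pi
        # Equal-amplitude two-beam interference, normalized to [0, 1]:
        # 1 when the condition for constructive interference is met, 0 when
        # the reflections cancel.
        return 0.5 * (1.0 + np.cos(delta))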

The point is that this is happening at the wavelength level. Now imagine a naive computer graphics approach to rendering this: you would calculate the effect on red, green, and blue light – three wavelengths – and that would be your color. But it should be obvious that three wavelengths hardly represent the whole spectrum, at least when the effects occur at the level of individual wavelengths. And indeed, the results were pretty unconvincing. For a better approximation I added three more wavelengths – I can’t remember now exactly how that worked, and it may have been wrong-headed (see for reference Magenta Ain’t A Colour) – but at least it was easy to work into a color model. The color bands did look better. I wish I still had the pictures from then to show here.

In 1996, as part of a computer graphics project course at NCSU, I revisited this problem with the idea of using the full spectrum. In an ideal world, we would model a ray of light as a continuous spectral function I(lambda) – intensity at each wavelength – and interactions with objects (such as reflection) similarly as functions Rn(lambda) that scale the light’s intensity at each wavelength, the result being simply the product of the original ray I and all of the interaction functions Rn. The problem is: what exactly does this look like? We don’t perceive a spectrum of light; we perceive a single color. How do I translate a spectral function into an RGB value to display on a computer screen?
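In practice “continuous” really means densely sampled. Something like the following sketch captures the representation (the names and the 10 nm spacing are just illustrative – this isn’t the code I actually wrote):

    import numpy as np

    # Sample the visible spectrum at regular intervals -- roughly the "forty
    # or so" wavelengths mentioned later in this post.
    WAVELENGTHS_NM = np.arange(380.0, 780.0, 10.0)   # 40 samples

    def white_light():
        """A flat spectrum I(lambda): equal intensity at every sample."""
        return np.ones_like(WAVELENGTHS_NM)

    def apply_interactions(spectrum, *interactions):
        """Multiply a sampled spectrum by any number of sampled interaction
        functions R_n(lambda): reflections, transmissions, filters, etc."""
        result = spectrum.copy()
        for r in interactions:
            result = result * r
        return result

    # For example, the light reflected by a 500 nm-thick film under white
    # light (using the thin_film_reflectance sketch above) would be:
    #   film = np.array([thin_film_reflectance(w, 500.0) for w in WAVELENGTHS_NM])
    #   reflected = apply_interactions(white_light(), film)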

In 1996, the web did not extend (as far as I could find – actually, it may have) to a deep practical treatment of how to do this. There may have been articles like this one now on Wikipedia that explain the concept fairly clearly, but for the actual empirical numbers I had to dig up an obscure book in the library. To summarize: our eyes have color receptors of three different varieties, each stimulated by a different range of visible wavelengths – one mostly by longer wavelengths, one mostly by shorter wavelengths, and one mostly by the mid-range. In fact, each can be modeled by a stimulation function Sreceptor(lambda), which can be multiplied by a spectral function I(lambda) and integrated over the visible spectrum to determine that receptor’s stimulation by a particular ray of light (see that Wikipedia article).
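In sampled form that integral becomes a simple weighted sum. A sketch, again with names of my own choosing (the sensitivity curves themselves would come from tabulated data such as the CIE color-matching functions, not reproduced here):

    import numpy as np

    def receptor_stimulation(spectrum, sensitivity, wavelengths_nm):
        """Approximate the integral of S_receptor(lambda) * I(lambda) over
        the visible range, given both functions sampled at the same
        wavelengths (trapezoidal numerical integration)."""
        return np.trapz(spectrum * sensitivity, wavelengths_nm)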

These functions are my key to mapping a spectral intensity function into values I can display. In the 1930s, a set of experiments (which I can only imagine were incredibly tedious) mapped typical receptor responses as a function of wavelength. The responses of the three receptors define a three-dimensional space called (somewhat arbitrarily) the CIE XYZ color space. Red (long), blue (short), and green (mid-range) light can be used to stimulate the individual receptors almost in isolation, which is why monitors only need to emit three primaries to cover most of our range of color perception.
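Once a spectrum has been reduced to X, Y, Z tristimulus values that way, getting to a displayable color is a standard linear transform plus gamma. Roughly, using the published sRGB matrix and simply clipping out-of-gamut values (a sketch, not production code):

    import numpy as np

    # Standard linear transform from CIE XYZ to linear sRGB.
    XYZ_TO_SRGB = np.array([
        [ 3.2406, -1.5372, -0.4986],
        [-0.9689,  1.8758,  0.0415],
        [ 0.0557, -0.2040,  1.0570],
    ])

    def xyz_to_srgb(xyz):
        """Convert an XYZ triple to display sRGB in [0, 1].  Out-of-gamut
        values are clipped, then the sRGB gamma curve is applied."""
        rgb = np.clip(XYZ_TO_SRGB @ np.asarray(xyz, dtype=float), 0.0, 1.0)
        return np.where(rgb <= 0.0031308,
                        12.92 * rgb,
                        1.055 * rgb ** (1.0 / 2.4) - 0.055)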

But that doesn’t mean three wavelengths are representative of interactions with the entire spectrum, and thin films are a clear example of the inadequacy. Using the forty or so wavelengths sampled in my sources, I got very convincing renderings indeed. My film-thickness model was trivial – thickness simply decreased linearly from bottom to top – but the bands of color looked very realistic.
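Putting the sketches above together, a toy version of that experiment might look like the following. It reuses the helper functions from the earlier snippets and assumes cmf_x, cmf_y, and cmf_z hold the tabulated CIE 1931 color-matching functions sampled at the same wavelengths – again, illustrative, not my original code:

    import numpy as np

    def film_color(thickness_nm, cmf_x, cmf_y, cmf_z):
        """sRGB color of a film of the given thickness under flat white
        light.  cmf_x, cmf_y, cmf_z are assumed to be the CIE 1931
        color-matching functions sampled at WAVELENGTHS_NM."""
        film = np.array([thin_film_reflectance(w, thickness_nm)
                         for w in WAVELENGTHS_NM])
        spectrum = apply_interactions(white_light(), film)
        xyz = [receptor_stimulation(spectrum, cmf, WAVELENGTHS_NM)
               for cmf in (cmf_x, cmf_y, cmf_z)]
        return xyz_to_srgb(xyz)

    # One column of the rendering: thickness decreasing linearly from
    # bottom to top, e.g.
    #   colors = [film_color(t, cmf_x, cmf_y, cmf_z)
    #             for t in np.linspace(1000.0, 100.0, 256)]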

Everyone’s familiar with light sources having different spectra. This is why everything looks amazing under department store lights, and why your digital camera likely has different settings for incandescent, fluorescent, sunny, cloudy, and so on. We can easily see these effects, yet the standard RGB representation of color contains no hint that more than three wavelengths are involved. We know that color interactions actually occur at the microscopic (indeed, molecular) level of the surface, where different wavelengths must be affected differently. So why not treat them that way in computer graphics?

My hope is to integrate my full-spectrum model into an existing renderer, not only allowing me to render films alongside normal scenes, but also letting me apply texture maps to films as thickness maps, thereby modeling the variations seen in actual films (how cool would it be to write your name on a bubble this way?). An open-source project like POV-Ray would be perfect. When I looked at that program in particular, I was hoping color would be represented by a class I could simply modify. Unfortunately the implementation was more integrated – macro-based, as I recall – probably for performance reasons. Perhaps a different program would be more amenable. But in any case, I would run into the problem of representation: no one is used to specifying colors as samples at 40+ wavelengths. Everyone’s scene definitions use RGB colors, and there is no unique mapping from an RGB coordinate into such a representation – in fact there are infinitely many – which, when you think about it, is the whole point here: two surfaces may appear the same under one light source yet reflect very different colors under another, because their interaction functions differ. To specify realistic colors as full spectra, we’d have to go around measuring real-world materials with a spectrometer.

Still, this is an intriguing idea to me, and I’d really like a chance to see how it works out one day.

Android design for CafePress tshirts

I spent most of my developer time yesterday futzing with GIMP. GIMP is great, but like every other image editor it seems to be designed to frustrate. I just wanted to take one of the stock android images, blow it up, fiddle with it a bit, and add a “toast” at the bottom with some text – for a tshirt design. Someone with experience could probably do this in a few minutes. Took me hours.

First, turning the android.ps file Google provides into an image. When I imported it, I could choose the size of the image to produce, but that wasn’t the size it actually produced. I think it had something to do with calculating DPI instead of pixels, but whatever it was about, it’s pretty confusing. Once I got it large enough, I wanted to just move the android’s arm. Easy, right? Well, not as easy as I remembered. I selected the arm and then went to move it… and the whole picture moved. WTF? There’s a box in the move tool for moving the selection vs. the layer, so I clicked that… and it moved the selection box instead of the contents. Believe it or not, it seemed easier to make two separate layers, delete content from each so that one held the arm and the other held everything else, and then move/rotate the arm layer; even then I had a time of it, because the arm layer’s boundaries would clip it when I rotated. Just a nightmare.

Well, of course there had to be a better way, and while I was constructing the toast I consulted Google and found others complaining about the same behavior (evidently this is new in GIMP 2.6 – I recall a simple move doing the job before). I forget where I read it, but the best suggestion was to cut (Ctrl-X) and then paste (Ctrl-V). This creates a floating selection (essentially a temporary layer) – you can move/rotate/transform it as desired, then anchor it back to the layer it came from. Good thing I have at least an inkling of what layers are about. There’s probably some better way to do this, but it’s not too bad.

I had fun fiddling with the color tools. Somewhere along the way I accidentally changed the green of the android (I didn’t notice until I’d uploaded to the store and had everything set up – I can’t decide now which I like better). I learned that when you export to a PNG, the “Flatten layers” option turns transparency into your background color, while “Merge layers” leaves transparency intact. This is especially important for black CafePress tshirts – evidently black ink on a black shirt is unsightly, so it’s best to leave the background transparent for that medium.

So now I have a store front with my design. I really like the idea of the android plus the toast. And unlike everyone else who’s doing this, I actually put the CC attribution in the tiny text at the bottom (we’ll see if it’s readable on the shirt). I bought myself a black and a white, so we’ll soon see how I like it in the physical world.