In my renderer, I'm having to ignore two out of three RGB components. I do one wavelength per ray, and randomly sample across all wavelengths. When the final wavelength is integrated into the rendered image, I do the normal RGB weighting. What is surprising (perhaps) is that good color fidelity is achieved rather quickly, certainly faster than is needed to get a good (unbiased) rendering to begin with.
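The accumulation step described above can be sketched as follows. This is a minimal, hypothetical illustration (names are mine, not from any renderer): each ray carries one uniformly sampled wavelength, and its radiance is splatted into XYZ using the CIE matching functions. The Gaussians here are crude single-lobe stand-ins for the real CIE 1931 curves, which a real renderer would take from tables.

```cpp
#include <cassert>
#include <cmath>

// Crude single-Gaussian stand-ins for the CIE 1931 matching functions.
// A real renderer would use the tabulated curves; these only illustrate
// the shape of the accumulation step.
static double gauss(double x, double mu, double sigma) {
    double t = (x - mu) / sigma;
    return std::exp(-0.5 * t * t);
}
static double xbar(double l) { return 1.06 * gauss(l, 599.0, 38.0) + 0.37 * gauss(l, 442.0, 25.0); }
static double ybar(double l) { return gauss(l, 556.0, 47.0); }
static double zbar(double l) { return 1.78 * gauss(l, 449.0, 23.0); }

struct XYZ { double x = 0, y = 0, z = 0; };

// One sample: a single wavelength was traced and came back with radiance
// `radiance`; `pdf` is the density the wavelength was drawn with
// (1 / range for uniform sampling). Divide XYZ by the sample count at the
// end, then convert XYZ to RGB as usual.
void accumulate(XYZ &px, double lambda_nm, double radiance, double pdf) {
    double w = radiance / pdf;
    px.x += w * xbar(lambda_nm);
    px.y += w * ybar(lambda_nm);
    px.z += w * zbar(lambda_nm);
}
```

Because the matching functions act as the "RGB weighting" at integration time, only one wavelength per ray is ever needed, which matches the workflow described above.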
Single-wavelength rendering also makes certain kinds of effects trivial, makes it easier to model physical lighting exactly, and is, IMO, conceptually easier to work with than RGB.
It would be great if OSL could have a mode that was optimized for this. :-)
Also, I'm willing to work on this issue. I just wanted to get discussion started on it and perhaps an early indication of whether this is something that is a candidate for inclusion in OSL to begin with.
I am personally using this in the context of PBRT (v2), which does support spectral rendering.
The simplest place to deal with this is on the integrator side. Basically when you get back a list of closures with RGB weights, you sample them via your favorite method to get their intensity at the wavelength(s) you care about. This also lets you change your mind about how colors should be represented without affecting the OSL runtime.
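One crude way to do that sampling, sketched below under my own naming: treat the R, G, and B closure weights as box-shaped basis functions over three wavelength bands. A real integrator would prefer a smoother RGB-to-spectrum reconstruction (e.g. Smits' method), but the point is that this choice lives entirely on the integrator side.

```cpp
#include <cassert>

struct RGB { double r, g, b; };

// Evaluate an RGB closure weight at a single wavelength by treating the
// three channels as box functions over ~100 nm bands. Crude, but it keeps
// the spectral interpretation out of the OSL runtime entirely.
double rgb_at_wavelength(const RGB &c, double lambda_nm) {
    if (lambda_nm < 500.0) return c.b;   // ~400-500 nm band
    if (lambda_nm < 600.0) return c.g;   // ~500-600 nm band
    return c.r;                          // ~600-700 nm band
}
```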
For example, note that many spectral renderers (like LuxRender or Indigo) track multiple wavelengths together until the first dispersive event. This reduces color noise quite a bit. It also means the situation isn't quite as simple as you describe. Of course the single-wavelength approach has certain advantages, but different spectral renderers will have different goals.
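The multi-wavelength ("hero wavelength") scheme mentioned above can be sketched like this (illustrative names, not any renderer's actual API): one uniform sample is rotated around the visible range to produce N correlated wavelengths, which are traced together until a dispersive event forces a collapse to a single one.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

static const double kLambdaMin = 380.0, kLambdaMax = 700.0;

// Generate N wavelengths from one uniform sample u in [0,1) by rotating
// it around the visible range. The wavelengths are evenly spread, which
// is what reduces color noise versus N independent samples.
std::vector<double> hero_wavelengths(double u, int n) {
    std::vector<double> lams(n);
    for (int i = 0; i < n; ++i) {
        double t = std::fmod(u + double(i) / n, 1.0);
        lams[i] = kLambdaMin + t * (kLambdaMax - kLambdaMin);
    }
    return lams;
}

// At the first dispersive event, only the hero wavelength (index 0)
// survives; the estimator must be reweighted by n to stay unbiased,
// since the other n-1 spectral paths are dropped.
double collapse_to_hero(const std::vector<double> &lams) { return lams[0]; }
```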
Right, but what about emissive closures (for lights)? There is no RGB triplet for an arbitrary physical light, at least not one that's actually correct. Fluorescent and sodium lights have unique spectral emissions that cannot be described by an RGB triplet. But since an OSL closure is used to define the "light", and the link between closures is RGB color values, it seems like we're stuck.
The hack I'm looking at is to just use a wavelength/intensity pair and make all of my closures/shaders understand this convention (and obviously the integrator in the renderer as well). Does that sound reasonable?
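Concretely, the convention would look something like this (a sketch of the hack, not a recommendation): an OSL color triplet is reinterpreted as a (wavelength, intensity) pair, and every shader plus the integrator must agree on that meaning.

```cpp
#include <cassert>

// An OSL-style color triplet, reinterpreted: (wavelength, intensity, unused)
// instead of (R, G, B). The danger is exactly that nothing in the type
// system distinguishes the two meanings.
struct Color3 { double x, y, z; };

Color3 pack_spectral(double lambda_nm, double intensity) {
    return Color3{lambda_nm, intensity, 0.0};
}

void unpack_spectral(const Color3 &c, double &lambda_nm, double &intensity) {
    lambda_nm = c.x;
    intensity = c.y;
}
```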
If I understand correctly, you want to switch the meaning of color from RGB triplet to wavelength/intensity? I think that might lead to confusing behavior with respect to things like texture lookups. You would either end up having to implement the spectral conversion in your shader, or be really careful not to mix the two meanings.
A cleaner solution might be to make your own emissive closure (or extend the builtin one with custom arguments) and describe the spectrum you want as parameters of the closure.
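On the integrator side, handling such a closure might look like the sketch below. This is entirely hypothetical (the table layout and names are mine): the closure carries a uniformly sampled emission table as a parameter, and the integrator interpolates it at whatever wavelength it is tracing.

```cpp
#include <cassert>
#include <vector>

// Hypothetical emission-spectrum parameter attached to a custom emissive
// closure: a uniformly sampled table over wavelength.
struct SpectrumTable {
    double lambda_min;            // wavelength of values[0], in nm
    double lambda_step;           // spacing between table entries, in nm
    std::vector<double> values;   // emitted power at each sample point
};

// Linearly interpolate the table at the wavelength being traced,
// clamping at the ends of the tabulated range.
double eval_emission(const SpectrumTable &s, double lambda_nm) {
    double t = (lambda_nm - s.lambda_min) / s.lambda_step;
    if (t <= 0.0) return s.values.front();
    int i = int(t);
    if (i + 1 >= (int)s.values.size()) return s.values.back();
    double f = t - i;
    return (1.0 - f) * s.values[i] + f * s.values[i + 1];
}
```

A sodium or fluorescent light then just becomes a table with sharp peaks, and the RGB-vs-spectral question never touches the closure weights.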
This goes for things like refraction() too. Presumably if you have a spectral renderer you are interested in things like dispersion. Rather than code the wavelength dependence of IOR into the shader, I would just extend refraction() to take an optional argument specifying the Cauchy coefficients (or however you want to parametrize the dispersion).
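For reference, the Cauchy model mentioned above is just n(λ) = A + B/λ², with λ in micrometers; the integrator would evaluate it per wavelength when sampling the dispersive closure:

```cpp
#include <cassert>

// Cauchy dispersion model: n(lambda) = A + B / lambda^2, lambda in um.
// For BK7 glass, A ~= 1.5046 and B ~= 0.0042 um^2 are commonly quoted.
double cauchy_ior(double A, double B, double lambda_um) {
    return A + B / (lambda_um * lambda_um);
}
```

With B > 0 the IOR rises toward short wavelengths, so blue bends more than red, which is exactly the dispersion behavior a spectral renderer is after.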