10-bit DAC resolution?

A 10-bit DAC allows you to select from 1024 levels of gray/luminance per color gun, of which only 256 can be displayed at any given time (assuming an 8-bit frame buffer).

Related: FAQ: 10 bit Framebuffers (i.e., 1024 simultaneous shades of gray/luminance per color gun).

Q: Is it possible to get 10-bit DAC resolution with PTB-3 under Windows?

A: Yes, with unfortunate limits.

Any ATI video card made since early 2000 has 10-bit DACs, and rumor has it that recent NVIDIA cards do as well. However, Windows places strange limits on what counts as a legal set of LUT DAC values, so at best you can use the LUT to linearize the luminance of the display; LUT animation or similar techniques are pretty much hopeless.

The constraint is roughly that LUT values must be monotonically increasing. But not quite: if you use sufficiently small increments early in the LUT, Windows will also reject it. This can be a real problem if your goal was to use the 10-bit resolution to gain very fine contrast control over a very limited range of luminance, and those luminances are quite dark. If you want fine resolution near mean gray or above, Windows will generally accept that kind of LUT. If you have an NVIDIA card, there is a DLL provided by them that allows you to set an arbitrary LUT, bypassing Windows' constraints, but I (Alan Robinson) have never heard of anybody trying this. Another option is to use 10-bit frame buffers, which does work under Windows with the proper hardware.
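
To illustrate the one thing Windows does reliably accept, here is a minimal sketch of loading a monotonically increasing LUT that linearizes the display. The gamma value of 2.2 is an assumed placeholder; in practice you would fit it to your own photometer measurements.

    % Build a monotonically increasing inverse-gamma LUT (256 slots, RGB).
    % displayGamma = 2.2 is only an assumed example value.
    displayGamma = 2.2;
    linearizingLUT = repmat(((0:255)' ./ 255) .^ (1 / displayGamma), 1, 3);

    windowPtr = Screen('OpenWindow', max(Screen('Screens')));
    Screen('LoadNormalizedGammaTable', windowPtr, linearizingLUT);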

Q: Is it possible to get 10-bit DAC resolution with PTB-3 under OS X?

A: Psychtoolbox on OS X (beta flavor as of 22.1.2007) allows you to query the operating system about the DAC output resolution:

[lut, dac] = Screen('ReadNormalizedGammaTable', screen);

will return the DAC size for the given output display screen in the return argument dac. Typical return values are 8 for 8-bit DACs, e.g., Intel graphics, and 10 for 10-bit DACs on modern ATI or NVidia graphics cards. These values are in accordance with the specifications of ATI and NVidia, so one can assume they are correct. The best way to be certain is, of course, verification with a photometer. This has been done for the Radeon X800, as reported below.
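
For example, a quick sanity check at the start of an experiment could look like the following sketch (the variable names and warning text are just illustrative):

    % Query the DAC resolution the OS reports for the chosen screen and
    % warn if it is less than 10 bits. Verify with a photometer if the
    % value really matters for your experiment.
    screenNumber = max(Screen('Screens'));
    [gammaTable, dacbits] = Screen('ReadNormalizedGammaTable', screenNumber);
    if dacbits < 10
        warning('This display path reports only %d-bit DACs.', dacbits);
    end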

Background

At issue is whether 10-bit color resolution can be achieved with PTB-3 and video cards sporting 10-bit DACs. Even if the hardware supports 10-bit depth, it has remained unclear whether the software pipeline (video card drivers, OpenGL, PTB, etc.) would truncate double-precision Matlab pixel values at the 8-bit (256 colors) or 10-bit (1024 colors) level. Although 8-bit artifacts can sometimes be determined by visually inspecting the display, the best test is to take a photometer to the screen and make measurements.

Hardware

Power Mac G5 Dual 2.7 GHz, 4 GB DDR SDRAM
ATI Radeon® X800 XT Mac Edition (has 10-bit DACs)
Iiyama HM204DT "Vision Master 514" 22" color monitor
Minolta LS-110 hand-held photometer + tripod

Software

OS X 10.4.5
Matlab 7.0.4 for Mac
PTB-3

Image-matrix (frame buffer) values vs. color lookup table values

With most hardware that provides 10-bit DACs, 10-bit values can only be obtained by manipulating values inside the color look-up table (LUT, also known as the gamma table); the frame buffer itself does not provide 10 bits per RGB channel per pixel. To illustrate this point, suppose you want to display a 50%-contrast grating. The wrong way to do it is to define a 50%-contrast image matrix in Matlab: 127.5 * (1 + 0.5 * grating) and display that image using a normalized LUT that spans the full 0.0 to 1.0 range of values. This inevitably results in less-than-10-bit quantization. The right way to do it, if you want to take full advantage of the 10-bit DACs, is to produce a nominally full-contrast grating: 127.5 * (1 + 1.0 * grating) in the Matlab image matrix, but display the image using a normalized LUT whose values only span the 0.25 to 0.75 range. Unfortunately, most video cards that provide 10-bit DACs implement this as pseudo 10-bit color: the lookup table has 256 slots, each allowing 10-bit precision per RGB channel, so you must choose 256 values out of the 1024 available. This limitation can be problematic in cases where you need to display, say, a very low contrast pattern next to a very high contrast one.
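
Here is a minimal sketch of the right-way approach, assuming an arbitrary example grating and an otherwise default window setup (none of this is from the original measurement code):

    % Draw a nominally full-contrast grating into the 8-bit frame buffer...
    x = meshgrid(1:256, 1:256);
    grating = sin(2 * pi * x / 32);                   % arbitrary spatial frequency
    fullContrastImage = 127.5 * (1 + 1.0 * grating);

    % ...and squeeze the displayed range with the LUT instead. The 256 LUT
    % slots span only 0.25 to 0.75, so they pick out 10-bit DAC levels
    % inside that half-range, yielding a 50%-contrast grating with finer
    % luminance steps than an 8-bit frame-buffer image could provide.
    halfRangeLUT = repmat(linspace(0.25, 0.75, 256)', 1, 3);

    windowPtr = Screen('OpenWindow', max(Screen('Screens')));
    Screen('PutImage', windowPtr, fullContrastImage);
    Screen('LoadNormalizedGammaTable', windowPtr, halfRangeLUT);
    Screen('Flip', windowPtr);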

Matlab Program

Here's the Matlab script I've used to measure luminance:

function Test10Bits

    % Basic setup: pick the last attached screen and skip the sync tests,
    % since frame timing is irrelevant for static photometer measurements.
    p.screenNumber = max(Screen('Screens'));
    AssertOpenGL;
    Screen('Preference', 'SkipSyncTests', 1);

    % A flat gamma table: every one of the 256 slots maps to full output.
    masterGammaTable = ones(256, 3);

    % Open a 32-bit, double-buffered window and load the flat table.
    windowPtr = Screen('OpenWindow', p.screenNumber, 255, [], 32, 2);
    Screen('LoadNormalizedGammaTable', windowPtr, masterGammaTable);

    % Fill the screen with pixel value 0, so the whole display is driven
    % by the first LUT entry and its luminance follows the loaded table.
    Screen('FillRect', windowPtr, 0);
    Screen('Flip', windowPtr);

    % Scale the whole table by the requested normalized voltage and reload
    % it, taking a photometer reading at each requested level. An empty
    % entry (just pressing return) ends the loop.
    while true
        volts = input('Normalized Voltage Value (0-1, empty to quit): ');
        if isempty(volts)
            break;
        end
        thisGammaTable = volts * masterGammaTable;
        Screen('LoadNormalizedGammaTable', windowPtr, thisGammaTable);
    end

    Screen('CloseAll');

end

Valid entries in a normalized LUT are bounded between 0.0 and 1.0 inclusive. In an 8-bit system, the 0.0 to 1.0 range is divided into 256 possible values, each value corresponding to a 0.00392 increment over the previous value. In a 10-bit system, the 0.0 to 1.0 range is divided into 1024 values, each value corresponding to a 0.00098 increment over the previous value. Therefore, if you step through the 0.0 to 1.0 range in increments of 0.001, a 10-bit system will increase luminance on every step, whereas an 8-bit system will increase luminance only on every fourth (sometimes third) step.
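
Rather than typing values by hand, the sweep can be automated. The following sketch (the 0.001 step size and keypress pacing are arbitrary choices) steps a flat LUT through the full range, waiting for a keypress between steps so a photometer reading can be taken:

    % Sweep a flat LUT from 0 to 1 in 0.001 steps. On a true 10-bit path
    % the measured luminance should change on every step; on an 8-bit path
    % it should change only on every third or fourth step.
    windowPtr = Screen('OpenWindow', max(Screen('Screens')), 255);
    Screen('FillRect', windowPtr, 0);
    Screen('Flip', windowPtr);
    for volts = 0 : 0.001 : 1
        Screen('LoadNormalizedGammaTable', windowPtr, volts * ones(256, 3));
        KbStrokeWait;    % take a photometer reading, then press any key
    end
    Screen('CloseAll');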

Photometer Measurements

Measurement precision varies depending on the photometer setup. In my case, it was important to put the photometer on a tripod (rather than holding it by hand) and set it to a long integration period to average out the noise and get reliable readings. Photometer precision also varies with the range of luminances being measured (e.g., my photometer drops from two decimal places to one above 100 cd/m²). Long story short: continuous photometer readings jumped noticeably on every 0.001 LUT increment I tested, which indicates that my display has 10-bit precision.
