
Collecting Image Samples


The vvm and hvdvm classes contain tools that automate the processing of images for the analysis and characterization of camera sensor noise. To use those tools, image samples from photographs taken with the camera under study are required. Those photographs must be recorded in the camera raw file format and taken with a careful setup to get meaningful results from the vvm and hvdvm classes. Here we will see the setup and considerations for taking the photographs, and also the steps to crop the required image samples.




Taking the Photographs

The goal in each photograph we take is to expose the camera sensor photosites to --ideally-- exactly the same flux of photons. This way, the variance in the photosite readings that vvm and hvdvm will compute and process is caused only by the camera sensor photon collection and processing, which is what we want to measure, and not by darker or lighter areas in the photo target scene.

Another goal in the batch of photographs we will take is to allow the camera sensor photosites to collect different amounts of photons along the whole range available in the camera raw image file format: from very close to zero up to the maximum, but without highlight or shadow clipping.

Considering those goals, we will photograph a plain flat surface (e.g. a white cardboard) completely filling the camera frame. The surface will be as uniformly lit as possible with a diffused light source, and we will choose a setup with angles between the camera, the light source and the target surface that avoids highlights or uneven reflections from the target surface.

We will take all the shots with the same aperture, camera position and target scene setup. The camera lens will be slightly out of focus, just enough to avoid recording light variance coming from the target surface microstructure.

We will take shots with varying exposure times in order to get mean photosite readings along the whole range the raw image file can record. If required, we will decrease the light intensity to get shots with mean values close to zero, or increase it to get shots with mean values close to the top limit while keeping exposure times under one second. Whenever we change the lighting, we will take notes identifying the shots at each lighting level.

How Many Shots We Should Take

The vvm and hvdvm classes will fit a linear model with the mean photosite value as the independent (or predictor) variable and the variance of the photosite values as the dependent (or predicted) variable. Each corresponding pair of variance and mean photosite values is called an observation.
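
In code terms, an observation is just the (mean, variance) pair computed from one sample crop. A minimal sketch in Python (the sample array is a stand-in; imgnoiser itself is an R package and computes this per channel):

import numpy as np

# Stand-in for a 40x32-pixel sample crop; Poisson values mimic photon noise,
# so the variance will come out close to the mean.
sample = np.random.default_rng(1).poisson(5000, (32, 40)).astype(np.float64)
observation = (sample.mean(), sample.var(ddof=1))  # (mean, sample variance)
print(observation)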

Unfortunately, for the observations we will collect, the variance of the photosite values is heteroscedastic with respect to their mean. In particular, the photosite variance increases linearly with the mean. In other words, the higher the mean, the greater the uncertainty about where the fitted model should pass in the variance/mean space.

As with any model fitting, more observations are better for the fitting accuracy. In particular, we want observations very close to a mean of zero, to get a more precise value of the model intercept; and to counteract the heteroscedasticity, we need a greater number of photographs at higher mean values.
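
For intuition, here is a minimal sketch of such a fit in Python (this is not the imgnoiser implementation, which is an R package; the observation values and the weighting scheme are illustrative assumptions):

import numpy as np

# Hypothetical observations: mean photosite value and variance, one pair per shot.
mean = np.array([120.0, 850.0, 3200.0, 7900.0, 12100.0, 15000.0])
var = np.array([95.0, 510.0, 1790.0, 4300.0, 6600.0, 8100.0])

# Ordinary least squares would let the noisy high-mean observations dominate.
# One common remedy is weighted least squares; np.polyfit expects weights of
# the form 1/sigma, and here we assume sigma grows with the mean.
slope, intercept = np.polyfit(mean, var, deg=1, w=1.0 / mean)
print(f"intercept: {intercept:.1f} ADU^2  (related to the read noise)")
print(f"slope:     {slope:.4f}  (related to the photon noise)")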

Notice that when we say smaller mean photosite values, we mean shorter exposure times or lower lighting, and vice versa for higher mean values. As a rule of thumb, we can take around 6 to 8 shots when the mean photosite values are in the top quarter of their range, 5 to 6 shots in the second and third quarters, and 4 shots in the first one.

Better than taking a lot of photos under identical conditions is to take more shots with slight, but perceptible, variations in lighting. As consecutive camera exposure times have a geometric effect on the photosite mean values (while all other things are held constant), the shots with mean values in the top third of the range are much more spread out than those with lower mean values. Taking shots with varying light intensity in the top third of the mean values will balance this increasing spread and also the heteroscedasticity.

For example, take a 14-bit raw file format, where 16,383 ADU is the practical top mean limit. If we manage to take shots with a mean value of, let's say, around 15,000 ADU, diminishing the camera exposure time by just one notch will bring photographs with a mean value around 12,000 ADU (assuming the camera, as usual, has exposure times in steps of 1/3 stop). That is a large gap of 3,000 ADU! But if we have a shot two stops below the top limit, at around 4,000 ADU, the next smaller exposure time will bring photos with mean values around 3,200 ADU, a spread of just 800 ADU. Notice the great difference.

As we cannot change the exposure time in steps smaller than 1/3 stop, we must manage to change the intensity of the light over the target by around 1/6 stop. That way, the shots varying by 1/3 stop of exposure time, as usual, will land in the middle of the gaps left by the shots taken before the change of light intensity.
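
A quick back-of-the-envelope check of these numbers, as a Python sketch using the 15,000 ADU starting point from the example above:

# Each 1/3-stop notch scales the mean photosite value by 2**(1/3).
step = 2 ** (1 / 3)
means = [15000 / step**k for k in range(6)]
print([round(m) for m in means])
# -> [15000, 11906, 9449, 7500, 5953, 4725]: the gaps shrink from ~3,094 ADU
#    at the top down to ~1,228 ADU two stops below.

# Dimming the light by ~1/6 stop yields a second, interleaved series whose
# shots land near the middle of those gaps.
shifted = [m / 2 ** (1 / 6) for m in means]
print([round(m) for m in shifted])
# -> [13363, 10607, 8418, 6682, 5303, 4209]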

Cropping the Image Samples with Iris

Having equally exposed camera sensor photosites, hit by exactly the same flux of photons, is almost impossible to achieve without professional, specialized devices. Take into consideration not only the target scene illumination, but also the light falloff from the center to the image borders caused by lens vignetting. Specialized devices for measuring sensor characteristics expose the sensor directly to collimated light.

Without such devices, we will just find the most evenly lit sample area in the images, containing approximately 1,000 to 2,000 pixels, and crop that same area from all the image files, processing those crops instead of the whole image files.

We can find the most evenly lit image area using the "Process > Filters > Variance..." command in the ImageJ software; then we can crop that sample area from all the photo files, in just one step, using the "Digital photo > Decode RAW files" command in the Iris software.
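
If you prefer a scriptable alternative to ImageJ, a sliding-window variance map can be computed directly. A sketch in Python (the window size and the stand-in image are assumptions):

import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, size=41):
    # Sliding-window variance, E[x^2] - E[x]^2 over a size x size window:
    # the same idea as ImageJ's "Process > Filters > Variance..." command.
    img = img.astype(np.float64)
    mean = uniform_filter(img, size)
    mean_of_sq = uniform_filter(img * img, size)
    return mean_of_sq - mean * mean

# Stand-in for a decoded raw frame (in practice, load the image instead).
img = np.random.default_rng(0).normal(1000.0, 30.0, (400, 600))
var_map = local_variance(img)
row, col = np.unravel_index(np.argmin(var_map), var_map.shape)
print(f"most even area is centered near x={col}, y={row}")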

The resulting files from Iris will have a common prefix and a sequence number starting from 1. They will also have the .fit name extension, which is expected by the classes in the imgnoiser package. For example, if the common prefix is crop_ and we fed Iris 245 photos, we will get sample files named "crop_1.fit", "crop_2.fit", up to "crop_245.fit". Please take into account that the imgnoiser classes expect this sequence to have no gaps.
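
A small script can verify the numbering before feeding the crops to imgnoiser. A sketch in Python, assuming the crop_ prefix used above:

import re
from pathlib import Path

# Collect the sequence numbers of the crop_<n>.fit files in the current folder.
numbers = sorted(
    int(m.group(1))
    for p in Path('.').glob('crop_*.fit')
    if (m := re.fullmatch(r'crop_(\d+)\.fit', p.name))
)
if numbers:
    missing = sorted(set(range(1, max(numbers) + 1)) - set(numbers))
    print('missing numbers:', missing if missing else 'none')
else:
    print('no crop_<n>.fit files found')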

The Iris software saves the crops with the .fit name extension, and those files have the FITS format. Other software, however, saves FITS files with the .fits name extension. Both extensions (i.e. .fit and .fits) are supported, as is the PGM format with the .pgm name extension. The case of the file name extensions is irrelevant.

Preparing the samples with Dcraw and ImageMagick

Another possible way to prepare the samples is with Dcraw and ImageMagick. Both are free software.

The raw camera files can be processed with Dcraw, whose output will be a .pgm image file. In order to get the raw data from the camera raw image file, and not the usual RGB color image, we can use the -D interpolation option (show the raw data as a grayscale image with no interpolation and with the original unscaled pixel values) and the -4 output option to get unprocessed, linear 16-bit pixel values, as they come from the photosites.

dcraw.exe -D -4 "MyPhoto.nef"

Dcraw can process an impressively long list of camera brands and models, including most of the well-known and popular ones nowadays, such as Canon, Fuji, Nikon, Olympus, Panasonic, Pentax, Samsung and Sony cameras. That way, in the example above, instead of the .nef Nikon format you can use your own camera raw file format.

After processing the raw photo file with Dcraw, you will get a big .pgm file with all the raw image data (e.g. 31.7 MB for a Nikon D7000). For the example above we will get a file named MyPhoto.pgm.

To extract the sample area of interest, getting rid of the big .pgm file, we can use the ImageMagick -extract option with the mogrify program:

mogrify -extract 40x32+20+30 MyPhoto.pgm

Using this command we will cut, for example, an area 40 pixels wide and 32 pixels high, starting from the (20,30) image position. The coordinate system has its origin at the image pixel in the top left corner, which is the (0,0) position.

[Figure: the area cut by mogrify -extract 40x32+20+30, illustrated over the image]

After executing this command, instead of the big file from Dcraw, we will have one with the same name but a much smaller size (around 3 KB), containing only our area of interest.
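
For a whole batch of photographs, the two steps can be chained in a small script. A sketch in Python, assuming dcraw and mogrify are on the PATH and the raw files sit in a hypothetical shots folder:

import subprocess
from pathlib import Path

geometry = '40x32+20+30'  # the same crop area for every file

for raw in sorted(Path('shots').glob('*.nef')):
    # Decode the raw data to a 16-bit grayscale .pgm file...
    subprocess.run(['dcraw', '-D', '-4', str(raw)], check=True)
    # ...then crop it in place to the sample area of interest.
    subprocess.run(['mogrify', '-extract', geometry, str(raw.with_suffix('.pgm'))],
                   check=True)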

Now we have a sample file which can be processed by the imgnoiser classes.

Sample Borders and Channel Identification

Most digital camera (DSLR) sensors contain a 2x2 pixel block (Bayer filter) repeated along their surface. That is the kind of sensor expected by the imgnoiser package. In generic terms, those blocks are said to be composed of channels. However, starting from the image top left corner, in most cases (e.g. most Nikon and Canon cameras) the layout of this block looks the way shown below, and is called the RGGB pattern for obvious reasons.

+-------+-------+---
| Red   | Green |
+-------+-------+---
| Green | Blue  |
+-------+-------+---
|

From now on, we will assume this is also the pattern in your camera. If you don't know for sure whether that is the case, you can google it, or you can shoot a white surface and inspect the resulting raw image, for example using Dcraw as described above. The two pixels with the highest values will correspond to the green photosites, the next one will correspond to the blue photosite, and the last one, with the lowest value, will correspond to the red photosite. This is a heuristic based on the fact that almost all DSLR camera sensors have the green photosites as the most sensitive, followed by the blue, with the red as the least sensitive. You can check that information for your camera, for example, on the DxO Mark site. Following this link you will find, in the Relative sensitivities section, how that holds for the Nikon D7000 camera; you can search for the corresponding information for your camera.
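
The heuristic can be automated. A sketch in Python that reads the .pgm file produced by dcraw -D -4 from a white-surface shot (the minimal PGM reader below assumes a comment-free header, as dcraw writes it):

import numpy as np

def read_pgm16(path):
    # Minimal reader for the binary 16-bit PGM (P5) files written by dcraw -D -4.
    data = open(path, 'rb').read()
    tokens, pos = [], 0
    while len(tokens) < 4:  # magic, width, height, maxval
        while data[pos:pos + 1].isspace():
            pos += 1
        start = pos
        while not data[pos:pos + 1].isspace():
            pos += 1
        tokens.append(data[start:pos])
    pos += 1  # a single whitespace after maxval, then the binary pixel data
    assert tokens[0] == b'P5'
    width, height = int(tokens[1]), int(tokens[2])
    img = np.frombuffer(data, dtype='>u2', offset=pos, count=width * height)
    return img.reshape(height, width)

img = read_pgm16('MyPhoto.pgm')  # a white-surface shot
# Mean value of each of the four Bayer positions, from the top left corner.
means = {(row, col): img[row::2, col::2].mean() for row in (0, 1) for col in (0, 1)}
for pos_rc, m in sorted(means.items(), key=lambda kv: -kv[1]):
    print(pos_rc, round(float(m), 1))
# The two highest means are the greens, the next is blue, the lowest is red.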

Depending on how many pixels you leave above and to the left of your samples, this pattern may or may not repeat from the top left corner of your samples. If you leave an even number of pixels above and to the left of your samples, the pattern will repeat exactly inside your samples, and that is the practice we recommend, to avoid confusion. Be aware of the origin of the pixel coordinates in the software you use: if it is (0,0), even coordinate values will align the sample with the image pattern, but if it is (1,1) you will need to use odd coordinate values.

Also considering the 2x2 Bayer pattern block, your samples must have even dimensions to yield samples of the same size for each color type of photosite (channel). If the dimensions are not even, imgnoiser will round them down to the nearest even number. For example, if the sample is 43x20 pixels, imgnoiser will use only a 42x20-pixel area of the sample.
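
For example, the trimming and the per-channel split can be expressed as below (a Python sketch with a stand-in 43x20-pixel sample; the rounding mirrors what is described above):

import numpy as np

sample = np.arange(20 * 43, dtype=np.uint16).reshape(20, 43)  # 43x20-pixel stand-in

# Trim to even dimensions: 43x20 -> 42x20.
h, w = sample.shape
sample = sample[:h - h % 2, :w - w % 2]

# Split into the four Bayer positions; each channel is 21x10 pixels,
# i.e. numpy shape (10, 21).
channels = {1 + 2 * r + c: sample[r::2, c::2] for r in (0, 1) for c in (0, 1)}
for number, ch in channels.items():
    print('channel', number, ch.shape)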

Without your specification, imgnoiser cannot know the Bayer pattern layout of your camera sensor, nor whether your samples are aligned with it. For that reason, by default, imgnoiser refers to that pattern as shown below.

+-----------+-----------+---
| Channel 1 | Channel 2 |
+-----------+-----------+---
| Channel 3 | Channel 4 |
+-----------+-----------+---
|       

There is more than one way to tell imgnoiser which channel should be labeled as green, red or blue. You will see that in the documentation of the package functions and in the usage vignette.

Sometimes it is useful to distinguish one green channel from the other. It is customary to refer to them as the green channel in the red row or the green channel in the blue row, for example as Green R and Green B. It can also be useful to get the average values considering both green channels; in such cases, that synthetic green average channel is labeled avg green by default, but you will also have the chance to name it the way you want.
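
As a tiny illustration, under the RGGB layout above the two green channels would be channels 2 and 3, and the synthetic average channel is just their element-wise mean (the pixel values here are made up):

import numpy as np

green_r = np.array([[812., 820.], [805., 815.]])  # green photosites in the red rows
green_b = np.array([[809., 818.], [807., 812.]])  # green photosites in the blue rows
avg_green = (green_r + green_b) / 2  # the channel labeled "avg green" by default
print(avg_green)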
