Replies: 11 comments
-
Hi again,

You can do a quick subsample with `subsample`, and offset an image quickly with `wrap`. I've fixed bad pixels by making a pixel map, then for each image computing a median and using `ifthenelse` to flip between the median image and the pixels.

When I've calibrated cameras, I've also modelled gain and offset for each sensor site. There can be quite a degree of variation (several percent) due to inconsistencies in the manufacturing process. I would take a lens-cap black for the offset calibration. For gain, take a shot with the lens off and the bare sensor exposed, correct it for offset, then compute a per-site gain from that.

Gain and offset will vary with sensor temperature, so be cautious. You'll also need to correct for lens vignetting, aperture, etc.
-
Oh, the median ifthenelse thing will only work if your bad pixels are fairly spread out. A large cluster of them will cause problems. But I imagine bad pixels are pretty rare in the type of sensor you are using.
-
Sorry, one more thing: a little C for this first stage of calibration is easy and quite a bit quicker, if speed is important. You can certainly prototype in pyvips.
-
Thanks for directing me to those operations. I fully agree with your calibration pipeline; it is exactly what I'm doing now: dark calibration (depending on gain, temperature and exposure time) and "flat field" calibration to account for vignetting, sensor pixel sensitivity variation, etc. Moreover, I need to ensure a linear sensor response and, of course, linear data processing. In fact, I'm interested in extended, very low surface brightness objects, and the quality of the calibrations is crucial. I do have a working pipeline already (without libvips), but I would like to replace some slow processing steps with much faster libvips/pyvips operations.
-
You could use `zoom` and `case`. Something like (untested):

```python
# scale each plane back up 2x2, then pick the right plane at each
# sensor site with a replicated 2x2 index image
images = [image.zoom(2, 2) for image in [red, green1, blue, green2]]
index = pyvips.Image.new_from_array([[0, 1], [2, 3]]).replicate(green1.width, green1.height)
bayered = index.case(images)
```
-
Your pipeline sounds good. I would be tempted to do both a no-lens flatfield and a lens flatfield to separate pixel gain correction from lens vignetting, but if you seldom change objectives, perhaps it's more sensible to combine them.
-
Some sensors have a row of masked pixels down one edge (the chip is manufactured with a metal mask over those sites); it'd be worth checking if you have them available. They are very handy for correcting for temperature and exposure time in one go.
-
Great info on how to combine the corrected sub-images and build up the corrected bayered image!

Yes, the 'overscan area' present in some sensor image data is a first approximation of the dark level. It is evidently used internally in many cameras for this reason, even if it is not delivered to the user in the raw data file.

The problem in astrophotography is the usually long exposure time, which on most sensors creates significant amp glow from heat generated by the readout electronics. This is more pronounced towards the edges or corners, and the only way to account for it is by taking full-frame dark images.

I wonder if it would be of more general interest to implement a new function which does e.g. mean and/or median filtering based on an arbitrary kernel image (a bi-level mask), somewhat similar to `rank`.
-
Yes, that's a nice idea. You could just add an optional mask argument to `rank`.
-
Shall I open a new issue tagged as 'feature request' for this extension of the rank function?
-
By the way, this is the code I ended up with. Knowing the right functions, it was pretty easy (just a few corrections to your hints).
-
Large, cooled one-shot-color CMOS cameras are gaining increased attention in astrophotography. The light passes through a Bayer matrix filter before reaching the sensor, so the pixels of the resulting grey image represent different colors.
Several Python wrappers for libraw already exist (like pyraw), but they are designed to work with DSLR camera images and, as far as I know, rely heavily on their metadata. Images created by the CMOS cameras I mentioned are not supported; those are typically (single-band) FITS images.
What I am looking for at the moment is some way of using libvips functionality for preprocessing of the bayered image, in particular extracting the single-color sub-images, correcting them, and replacing them in the mosaic.
Would you have any recommendation about how to achieve the extraction and replacement steps?
(BTW: I am currently using GraphicsMagick's `convert` for the extraction, like

```
gm convert <bayered_img> -roll +0+0 -filter point -resize 50% <single_color_img>
```

but I'd much prefer a pyvips solution.)