- Signal and noise
- Image calibration
- Aligning images
- Analyzing image quality
- Stacking images
- Post processing
Lynkeos uses some signal processing concepts which are explained hereunder.
Signal and noise
The image recorded by the detector is not perfect (otherwise you wouldn't use this software). It can be described as a perfect image deteriorated by some processes (turbulence, for example). In signal processing, the perfect image is called the "signal" and the deteriorations, the "noise". The term noise usually refers to a statistically random deterioration; for a systematic one we use the term "bias". When many images are added, the signal and the biases add linearly (they grow proportionally with the number of images), while the random noises add as a quadratic sum: the resulting noise is the square root of the sum of the squares of the noise in each image (remember the Pythagorean theorem?). In short, when stacking, the signal and the bias grow much faster than the random noise: stack 4 images and the signal and bias are multiplied by 4 while the random noise is only multiplied by 2, so the signal is twice as strong relative to the random noise in the resulting image. As the biases accumulate with the signal, they must be corrected before stacking.
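This square-root law is easy to check numerically. A small sketch using NumPy, with illustrative values (the variable names and numbers are mine, not Lynkeos'):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0                    # the "perfect" pixel value
sigma = 10.0                      # random noise in a single image
n_images = 4

# 100 000 trials of stacking 4 noisy measurements of the same pixel
frames = signal + rng.normal(0.0, sigma, size=(n_images, 100_000))
stack = frames.sum(axis=0)

print(round(stack.mean()))        # ~400 : the signal grew 4 times
print(round(stack.std()))         # ~20  : the noise grew only sqrt(4) = 2 times
```

The stacked signal is 4 times stronger, but the random noise only 2 times, doubling the signal-to-noise ratio.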
The identified noises and biases our images suffer from are:
- Turbulence: the atmospheric turbulence causes the image to randomly move, be distorted, and be blurred.
- Uneven transmission/sensitivity: as the optical setup is not perfect (nothing in this world is!), the light may not be transmitted evenly across the detector. The best known case is vignetting, i.e. darkening in the corners. The detector pixels may also not all have the same sensitivity; the effect on the recorded image is the same as above.
- The photon noise: as proposed by A. Einstein in 1905, light interacts with our detector in a discrete manner; the discrete light quantum was later called a "photon". Under the same illumination, the number of photons hitting two photosites (pixels) differs, because the arrival of a photon is (as far as we know) random.
- The thermal bias and thermal noise: the photon hits are converted into electrons accumulated in a photosite potential well. The problem is that electrons are not only produced by photon hits; they are also produced by thermal agitation in the detector substrate, and these electrons accumulate in the photosites too. Their number depends on the detector temperature, the substrate properties in the photosite region, and the integration time (the time since the photosite was emptied). This is the thermal bias. But as thermal electron generation is random, there is also a thermal random noise analogous to the photon noise.
- The transfer/readout noise: the electrons of a photosite are moved from potential well to potential well until they end up in a last well where they are "read". Some electrons are lost during this transfer, and the read is subject to electronic noise. The transfer loss is not significant in a webcam, since the other noises are far more powerful.
- The quantification noise: the number of electrons is, after some silicon black magic, converted into a number by an analog/digital converter. The step of this converter is not 1 electron; it is far higher. Hence two different electron counts can be converted to the same value. This is a noise, and it is not random in nature. But if the random noises are higher than the converter step, it can be treated as if it were random. Depending on the camera and the subject, this can be an unknown bias or a random-like noise.
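The randomness of photon arrival follows a Poisson distribution, whose standard deviation is the square root of its mean. A quick numerical check (the photon count is an illustrative value):

```python
import numpy as np

rng = np.random.default_rng(1)
mean_photons = 400                 # average illumination of one photosite

# photon arrival is random: counts follow a Poisson distribution
counts = rng.poisson(mean_photons, size=100_000)

print(round(counts.mean()))        # ~400
print(round(counts.std()))         # ~20 = sqrt(400): shot noise grows as sqrt(signal)
```

This is why brighter exposures have a better signal-to-noise ratio: the signal grows faster than its own shot noise.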
Several functionalities of this software each aim to reduce one kind of noise:
- The thermal noise is corrected by subtracting a dark frame from each image (this nulls out the thermal bias), and by stacking many images (to even out the remaining random thermal noise).
- The uneven illumination is corrected by dividing each image by a flat field frame.
- The turbulence induced random moves of the image are corrected by aligning all images.
- The photon, readout and quantification noises are lowered by stacking many images.
- The blur caused by the turbulence and by imperfections in the optical setup can be corrected by a deconvolution filter.
Image calibration
Prior to any processing, the source images must be calibrated, i.e. the biases must be removed. To create the calibration frames described hereafter, all the webcam settings shall (except where otherwise noted) be the same as for the recording of the images to be processed.
The dark frame is a stack of many frames recorded with the telescope shut, at the same temperature as for the recording of the images to be processed. It is a measure of the thermal bias. It is subtracted from each image before processing.
The flat field is a stack of many images recorded with the telescope looking at an evenly illuminated target, with the same optical setup as the images to be processed, at a shutter speed giving a bright enough, but non-saturated image. It is a measure of the uneven transmission/sensitivity. Each image is divided by the flat field before processing.
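The two corrections combine into a short calibration routine. This is a sketch of the standard dark/flat arithmetic; the function and variable names are mine, not Lynkeos':

```python
import numpy as np

def calibrate(raw, dark, flat):
    """Remove the thermal bias, then divide out the uneven
    transmission/sensitivity measured by the flat field."""
    corrected = raw - dark            # subtract the dark frame
    gain = flat - dark                # flat field, its own thermal bias removed
    gain = gain / gain.mean()         # normalize so the division keeps brightness
    return corrected / gain

# toy example: a uniform scene seen through a vignetted setup
vignette = np.linspace(1.0, 0.8, 4)       # 20 % darkening toward one edge
dark = np.full((4, 4), 5.0)               # thermal bias
raw = 100.0 * vignette + dark             # what the detector records
flat = 200.0 * vignette + dark            # evenly lit target, same setup
out = calibrate(raw, dark, flat)
print(out.std() < 1e-9)                   # True: the vignetting is gone
```

After calibration the toy image is perfectly flat again, as the original scene was.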
Aligning images
To align two images, Lynkeos uses cross-correlation. To speed up the computation, it uses a fast Fourier transform (FFT); this is why the alignment box is a square with a power of two as its side. It then searches for a peak in the correlation result, which gives the alignment offset between the images.
Some user preferences affect this processing:
- The align frequency cutoff defines a frequency limit above which the spectra are cleared. This is used to avoid spurious alignment on some systematic noise, such as interlacing.
- The align precision threshold is the maximum value the standard deviation of the correlation peak can have for the alignment to be declared successful.
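The correlation step can be sketched with NumPy's FFT. A minimal version finding integer offsets only (Lynkeos refines the peak position further; the function name is mine):

```python
import numpy as np

def align_offset(ref, img):
    """Integer (dy, dx) shift of img relative to ref, found as the
    peak of the cross-correlation computed through the FFT."""
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    # peaks past the midpoint wrap around to negative offsets
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

# alignment box: a square with a power-of-two side, as in Lynkeos
ref = np.zeros((64, 64))
ref[28:36, 28:36] = 1.0                       # a small bright feature
img = np.roll(ref, (3, -5), axis=(0, 1))      # turbulence moved the image
print(align_offset(ref, img))                 # (3, -5)
```

Multiplying one spectrum by the conjugate of the other is equivalent to correlating the images, but costs only O(N log N) instead of O(N²).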
Analyzing image quality
To compute an image quality level, Lynkeos uses one of two methods, based on:
1. Image entropy
The entropy is a measure of the image's "lack of information", i.e. a "flat" image, with all pixels at the same value, has the highest entropy and the least information in it (it can be described with just one pixel value). The quality value is derived from the entropy; the best images are supposed to be those with the highest quality value. This method is rather efficient on planetary images.
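Lynkeos' exact formula is not given here, but the idea can be illustrated with the Shannon entropy of the pixel-value histogram (my own sketch). Note that in this common convention a flat frame scores lowest, the opposite sign of the "lack of information" wording above; either way, the ranking favors the detailed frame:

```python
import numpy as np

def histogram_entropy(img, bins=256):
    """Shannon entropy of the pixel-value histogram, in bits.
    One convention among several; Lynkeos' formula may differ."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum() + 0.0)

flat = np.full((64, 64), 0.5)              # one pixel value: no information
rng = np.random.default_rng(2)
detailed = rng.random((64, 64))            # many different pixel values
print(histogram_entropy(flat) < histogram_entropy(detailed))   # True
```

A sharp planetary frame spreads its pixel values over many levels, while a turbulence-smeared one bunches them together.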
2. Power spectrum
Once again the FFT is used for speed's sake; that is why the analyzed image part is a square with a power of two as its side. The quality level is the mean value of the power spectrum between two frequencies, which can be modified in the user preferences:
- Analysis lower frequency cutoff, which defines the threshold below which frequencies are supposed to be "blur".
- Analysis upper frequency cutoff, which defines the threshold above which frequencies are supposed to be noise.
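A sketch of this band-limited metric; the cutoff values are illustrative (Lynkeos reads them from the preferences), and the function name is mine:

```python
import numpy as np

def spectrum_quality(img, f_low=0.1, f_high=0.4):
    """Mean power between two normalized spatial frequencies:
    below f_low is treated as blur, above f_high as noise."""
    power = np.abs(np.fft.fft2(img)) ** 2
    fy, fx = np.meshgrid(np.fft.fftfreq(img.shape[0]),
                         np.fft.fftfreq(img.shape[1]), indexing="ij")
    radius = np.hypot(fy, fx)
    band = (radius >= f_low) & (radius <= f_high)
    return power[band].mean()

rng = np.random.default_rng(3)
sharp = rng.random((64, 64))
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)
           + np.roll(sharp, (1, 1), (0, 1))) / 4.0       # 2x2 box blur
print(spectrum_quality(sharp) > spectrum_quality(blurred))   # True
```

Blur attenuates precisely the mid-band frequencies the metric averages, so a blurred frame scores lower.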
This method has not proved very effective but has been kept for historical reasons: as it was the only method in early releases of Lynkeos, it may still be of use to some users.
No analysis method is perfect, so far. Therefore, if you know a killer analysis algorithm, let me know.
Stacking the images
To stack the images, Lynkeos translates each image by its alignment offset, down to fractions of a pixel. Each pixel is split in 4 according to the pixel fraction and accumulated into the pixels of the resulting image.
The stacking is done with floating-point numbers, so there is no risk of "brightness overflow" if you stack many bright images.
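The pixel-splitting step amounts to bilinear accumulation into a floating-point stack. A toy sketch (edges wrap here for brevity; the names are mine):

```python
import numpy as np

def add_shifted(stack, img, dy, dx):
    """Accumulate img into the float stack translated by the alignment
    offset (dy, dx): each pixel is split over its 4 neighbours."""
    fy, fx = dy - np.floor(dy), dx - np.floor(dx)
    iy, ix = int(np.floor(dy)), int(np.floor(dx))
    for oy, ox, w in [(0, 0, (1 - fy) * (1 - fx)), (0, 1, (1 - fy) * fx),
                      (1, 0, fy * (1 - fx)),       (1, 1, fy * fx)]:
        stack += w * np.roll(img, (iy + oy, ix + ox), axis=(0, 1))

stack = np.zeros((8, 8))                  # floating point: no overflow risk
img = np.zeros((8, 8))
img[4, 4] = 1.0                           # one bright pixel
add_shifted(stack, img, 0.25, 0.75)       # sub-pixel alignment offset
print(stack[4:6, 4:6])                    # 4 weights summing to 1
```

The four weights always sum to 1, so the pixel's total flux is preserved while its position is shifted by a fraction of a pixel.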
Post processing
The final processing is done by applying image processing filters. The application contains the following filters:
- The deconvolution filter is exactly what it says (a kind of Wiener deconvolution filter, to be precise). It works by dividing the spectrum of the image by the spectrum of the presumed convolver, a Gaussian in this case. The threshold value is the convolver spectrum modulus below which the division is not performed. This avoids amplifying the highest frequencies too much (experiment with a low threshold to see what I mean).
- The Lucy-Richardson deconvolution is an iterative algorithm which also performs a deconvolution of the image. It is more difficult to manage but can give better results. The convolver can be a Gaussian curve, a sample taken from the image being processed, or read from an external image file.
- The wavelet filter is a processing where the image is split into "frequency" layers (isolated by a wavelet, hence the name), and each layer is separately amplified or attenuated.
- The unsharp masking filter is the digital translation of a darkroom technique. It subtracts a blurred copy of the image from the original and amplifies the difference. This amplifies small scale details and attenuates large scale brightness fluctuations.
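The unsharp masking recipe (blur, subtract, amplify) fits in a few lines. This sketch uses a crude box blur as the blurring step; the real filter's blur radius and gain are adjustable:

```python
import numpy as np

def unsharp_mask(img, amount=1.5):
    """out = img + amount * (img - blurred): small-scale details are
    amplified, large-scale fluctuations are left mostly unchanged."""
    blur = img
    for axis in (0, 1):                    # separable 3-tap box blur
        blur = (np.roll(blur, 1, axis) + blur + np.roll(blur, -1, axis)) / 3.0
    return img + amount * (img - blur)

img = np.zeros((9, 9))
img[4, 4] = 1.0                            # a small-scale detail
out = unsharp_mask(img)
print(out[4, 4] > img[4, 4])               # True: the detail is amplified
```

Because the blurred copy carries the same total brightness as the original, the subtraction leaves large-scale levels untouched while boosting fine detail.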
Other kinds of image processing can be added to the application as plugins.
The filtered image, which is a floating-point image, is then brought back to an 8-bit integer RGB image for display on the screen. This conversion uses the black and white levels provided by you. Once again, there is no risk of brightness underflow or overflow, whatever processing you do (but don't forget to adjust the levels).
The filtered image is converted the same way for exporting into an image file.