When it comes to measuring anything, whether that be recording sounds, taking images, or making scientific measurements in a laboratory, noise is an inescapable part of that measurement. While nature adheres quite closely to the laws of physics, nature is also messy and chock full of uncertainty.
It is impossible to measure something with infinite accuracy - the exact point will always be a little bit spread out. In addition, some level of background noise is always there. In daytime photography, your signal will usually vastly overwhelm your noise, so the noise is not noticeable - think about the background sound of a ticking watch while in a crowded cafeteria. But in astrophotography, our signal is very weak, oftentimes just barely above the noise, especially if you are shooting from a light polluted location or with a camera operating at ambient temperature.
I have often had people show me some attempts they've made at astrophotography, and how disappointed they have felt when their single images are noisy and dark. The key is, you can't stop at a single image - you've got to get statistics to work
for you rather than against you. Imagine drawing conclusions about how wind affects gas mileage when you have only driven one mile! You have to increase your sample size in order to increase what's known as the
signal-to-noise ratio, or SNR for short - how far above the noise your signal is, or how much more prominent the deep sky object (DSO) is over the background light and noise of your image. When it comes to a good astrophoto, it's all about SNR.
Sources of Noise
There are several sources of noise:
Dark current - The basic function of a camera sensor is to convert photons into electrons. The electrons generated over the length of a single exposure are held in each pixel, and then the accumulated charge is converted to a digital value of intensity, which is stored in the image file. Cameras are not only sensitive to visible light, however - ambient heat and heat generated in the battery and circuitry can also create electrons in the pixel, generating a false signal. This is known as dark current. If you put the lens cap on your camera, you can record that dark current. It is random, however, since who can say whether a given pixel will be the one to record a bit of that heat? In general, the intensity of the dark current will increase with the exposure time (so doubling the exposure time will double the intensity of the recorded dark current), and it will also double for about every 13 degrees Fahrenheit (7 degrees Celsius) of warming. Some pixels will also be more sensitive to this noise than others. Below is an example dark frame, a 5-minute exposure on my Nikon D5300 at ISO-3200 and 72 degrees F. The top image is the raw frame, but it can be hard to see (especially when it's converted to jpeg as it is here), so on the bottom is a brightened version.
An example dark frame from my DSLR (top), and a brightened version of the same frame (bottom).
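If you like seeing the idea in numbers, here's a minimal Python sketch of how accumulated dark signal scales with exposure time and temperature. The base dark current rate and reference temperature are made-up placeholders, not measurements from my camera:

```python
import numpy as np

def dark_signal(exposure_s, temp_c, base_rate_e_per_s=0.5,
                base_temp_c=20.0, doubling_c=7.0):
    """Rough accumulated dark signal (electrons per pixel).

    Scales linearly with exposure time and roughly doubles for every
    `doubling_c` degrees C of warming (the ~13 F / 7 C rule of thumb).
    The base rate and reference temperature are placeholders, not
    measured values for any particular camera.
    """
    rate = base_rate_e_per_s * 2 ** ((temp_c - base_temp_c) / doubling_c)
    return rate * exposure_s

# A 5-minute exposure at room temperature vs. on a cooled sensor
print(dark_signal(300, 22))    # ~183 e-
print(dark_signal(300, -10))   # ~7.7 e-
```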
Read noise - Wherever there is electronic circuitry, there is electronic noise. Some of it shows up as a random pattern, since noise is an inherently random process. Some of it will show up as a fixed pattern, since variations in the manufacturing process will cause some pixels to run "hotter" than others, or have some low level of baseline charge (known as offset or bias). Some of it may show up as horizontal or vertical lines in your image, especially if you are using a CCD camera, since those are read off row-by-row instead of pixel-by-pixel.
Quantization error - The charge that is held in each pixel is an analog signal. In order to create a digital image, however, it must be digitized and turned into a number - it must be measured. This is done using an analog-to-digital converter (ADC). Different cameras record images at different bit depths. My new ZWO ASI1600MM Pro will record images at 12 bits. This means that 0 is black (no signal was collected), and the maximum value of 2^12 - 1, or 4,095, is white (the pixel was saturated, meaning it held as much charge as it can store). This means that the images saved out from the camera can have 4,096 shades of intensity. Now, my camera has a well depth of 20,000 electrons, meaning it can hold 20,000 electrons before it saturates and can't hold any more. This means that every level of intensity is a difference of about 5 electrons. If we pretend the camera has perfect quantum efficiency (how well it converts photons to electrons), this means you cannot differentiate between parts of an image that differ by fewer than 5 photons, since they will appear the same brightness. This loses you that subtle detail in the knots and Bok globules of a nebula, for instance. Now, you could decrease the gain (or ISO) on your camera until the pixel could only hold 4,096 electrons, but then you lose dynamic range, since the separation between the brightest and darkest parts of your image is much smaller. So it's a tradeoff.
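Here's a quick back-of-the-envelope Python sketch of that quantization math, using the 12-bit / 20,000-electron numbers from above (the function name and structure are just for illustration):

```python
# Quantization step for a 12-bit ADC and a 20,000 e- full well
# (the ASI1600 numbers quoted above).
bit_depth = 12
full_well_e = 20_000

levels = 2 ** bit_depth              # 4096 possible output values
e_per_adu = full_well_e / levels     # ~4.9 electrons per digital step

def digitize(electrons):
    """Convert an electron count to a 12-bit value, clipped at saturation."""
    return min(int(electrons / e_per_adu), levels - 1)

# Two patches that differ by only 3 electrons come out at the same value
print(digitize(10_000), digitize(10_003))   # 2048 2048
```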
Shot noise - Shot noise has less to do with the camera and more to do with the uncertainty that exists at the most fundamental levels of physics. Due to how dim and distant the things we image as astrophotographers are, the photon flux, or rate at which photons arrive at your camera, is quite low. Imagine you are standing in a rainstorm; the raindrops are coming frequently enough that you can't tell the difference in how many raindrops are hitting you per second. But if it's just beginning to rain, you have no idea when the next raindrop is going to arrive - a half second later, five seconds later, etc. The same goes for photons - at these low rates, you don't know when they're going to arrive. In one frame, 5 photons might hit your camera; in the next, it could be 12, or 3. It may surprise you to know that the shot noise actually increases with intensity - but as the square root. If you have 10 photons get absorbed by your camera in one frame, and then 100 in another, your signal has increased 10 times, but your noise has only increased by sqrt(10), or 3.2. So even though the noise is higher, the signal is much higher.
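You can see this square-root behavior with a quick simulation - photon arrivals follow a Poisson distribution, whose standard deviation is the square root of its mean:

```python
import numpy as np

rng = np.random.default_rng(0)

# Photon arrivals follow a Poisson distribution, so the standard
# deviation (the shot noise) is the square root of the mean count.
for mean_photons in (10, 100, 1000):
    counts = rng.poisson(mean_photons, size=100_000)
    print(mean_photons, round(counts.std(), 1), round(np.sqrt(mean_photons), 1))
# Signal goes up 10x at each step, but the noise only goes up ~3.2x,
# so the signal-to-noise ratio keeps improving.
```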
Quantum and Transmission Efficiency - Quantum efficiency is essentially how good the sensor is at converting photons to electrons. There is no guarantee that, just because a photon strikes the detector, it will get converted to an electron. If it doesn't, then it is lost. The fewer signal photons you collect, the harder it is to distinguish your signal from the noise.
Transmission efficiency is how much light makes it through all of the optics and filters between your camera and the sky. Generally, telescope optics are high quality and have high transmission. Color filters, however, can lessen your signal. This is especially true in DSLRs, where the Bayer matrix (the array of red, green, and blue filters laid over the top of the sensor so that you can image all three colors at once) can have relatively low transmission. I borrowed the chart below from one of Craig Stark's presentations on astrophotography (found here, and it is a great resource!).
The top graph compares a monochrome CCD camera, the QSI 540, to its color version, the 540c. The bottom chart compares the monochrome QSI 540 to a Canon 40D/50D DSLR. As far as the filters themselves go, independent of the camera sensor (the charts above include the camera's responsiveness to those wavelengths), they can have nearly 100% transmission, such as with the Astronomik LRGB Type 2c filters I have (and love).
Again, fewer photons means it's harder to distinguish the signal from the noise.
Light pollution - Light pollution isn't quite the same as the other sources of noise, but it can certainly result in a lot of problems. Light from a nearby town reflects off of particles in the atmosphere, and the camera will capture that light as well. Think about trying to see a dim cell phone screen out in full daylight versus in a dark room - it's a lot harder to get much contrast in the galaxy you're trying to image when the background light is just as bright! In order to get more signal, people will usually turn up the gain or ISO on their cameras, which increases the camera's sensitivity not only to light, but also to heat and read noise. In addition, remember shot noise? Light pollution is also a source of shot noise, except it doesn't contribute to your signal (the object you're imaging), so it's only adding additional noise.
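To put a rough number on it, here's a simplified, shot-noise-only sketch of SNR for an object sitting on top of a sky background (dark current and read noise are ignored, and the electron counts are invented just for illustration):

```python
import numpy as np

def snr_with_sky(object_e, sky_e):
    """Shot-noise-only SNR of an object sitting on a sky background.

    The sky adds to the noise (through its own shot noise) but
    contributes nothing to the signal; dark current and read noise
    are ignored to keep the point simple.
    """
    return object_e / np.sqrt(object_e + sky_e)

print(snr_with_sky(500, 50))     # dark site:  ~21
print(snr_with_sky(500, 5000))   # bright sky: ~6.7
```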
Take a look at the two images below - the top one was taken in a suburban/rural transition sky (5 or yellow on the Bortle Scale), and the bottom in the dark skies of west Texas. Note that the top image used a light pollution filter - the useful thing to look at here is the contrast between the object and the background.
5-minute frame, ISO-1600, from a light-polluted location
6-minute frame, ISO-1600, from a much darker location
Okay, so the bottom frame is a one-minute-longer exposure, but you get the picture.
How can we ever hear the beautiful music over all of this noise??
It sounds like a daunting task! Don't fret, however - we have the power of digital processing at our fingertips. Stacking, calibration, and post-processing are extraordinarily powerful tools that let us turn noisy messes into beautiful recreations of the universe's many wonders.
I've got oodles of examples of subframes that look noisy and terrible, and processed frames that look way more awesome. As far as distinguishing the target from light pollution, though, this one takes the cake.
I have wondered for some time now, is it possible for me to quantify how much better the images get with stacking and processing? The answer is yes, of course, if you don't mind a little math!
Stacking
The whole point of stacking (see this post for a tutorial on how to stack astro images) is to increase the certainty that the light in a given pixel is "real" and not noise or light pollution. If you have one picture, and you look at a given pixel, you may not be able to tell. But if you have 20 pictures, and in 19 of them the pixel is just about the same color and brightness, you can be pretty sure that that is the real value of the pixel. Stacking is a statistical process that increases that certainty, and the bottom line is you get an increase in the SNR. In general, your SNR increases by the square root of the number of frames you stack, but you can see much larger gains by applying calibration frames and doing some post-processing, as you will see evidence for in a moment.
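If you want to convince yourself of that square-root law, here's a little simulation sketch that averages simulated noisy frames and measures the SNR of the result (the signal and noise numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

true_signal = 20.0    # "real" brightness of a patch of sky
noise_sigma = 5.0     # per-frame noise, so a single-frame SNR of 4

def snr_of_stack(n_frames):
    # Average n noisy frames of the same 10,000-pixel patch, then
    # measure SNR the same way as later in this post: mean / std.
    frames = true_signal + rng.normal(0, noise_sigma, size=(n_frames, 10_000))
    stacked = frames.mean(axis=0)
    return stacked.mean() / stacked.std()

for n in (1, 10, 44, 88):
    predicted = (true_signal / noise_sigma) * np.sqrt(n)
    print(n, round(snr_of_stack(n), 1), round(predicted, 1))
# The measured SNR tracks the sqrt(N) prediction in the last column.
```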
Calibration
There are two kinds of calibration images that will help you reduce noise - darks and biases. (Flats reduce vignetting, or the darkening of the corners of your image, so I'm not including those here). Darks record your dark current, and biases record your read noise and fixed pattern noise. It is important to note that your dark frames will also have read/fixed pattern noise, but apps like DeepSkyStacker handle the subtraction so that the biases don't get subtracted twice. For more on how to capture calibration frames, see this post. Now, remember how I said noise is random - you can't take a dark and a bias frame and just subtract them. The distribution of noise moves around like static on an empty analog broadcast TV channel. Here again we get some help from statistics. You take several dark and bias frames, and then DeepSkyStacker or your other favorite stacking app will average them and subtract a "master" from your stacked image (where the noise has also been averaged). In Gaussian statistics, which most noise sources in nature follow (the classic bell-shaped curve), the average value approaches the truth. If you have a noisy camera (like a DSLR), you're going to want more dark frames. I find that DeepSkyStacker struggles with more than about 60 (and that's if you've undergone the process of expanding its RAM-using capability from 2 GB to 4 GB - see this website for how to do it (it does require Microsoft Visual Studio, but the Community Edition (the free one) will do it)). (Wow, nested parentheses!) I usually use 20. I wouldn't go fewer than 10. (Again the square root law applies here, the mainstay of Gaussian statistics.)
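To illustrate what the stacking software is doing under the hood, here's a toy sketch of dark calibration with fake data - a fixed hot-pixel pattern plus random noise. This is only a conceptual stand-in for what DeepSkyStacker actually does:

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (100, 100)

# Fake data: 20 dark frames and one light frame that share a fixed
# hot-pixel pattern, each with its own random noise on top.
fixed_pattern = rng.uniform(0, 30, size=shape)
darks = [fixed_pattern + rng.normal(0, 5, size=shape) for _ in range(20)]
light = 50.0 + fixed_pattern + rng.normal(0, 5, size=shape)

master_dark = np.mean(darks, axis=0)   # averaging beats down the random part
calibrated = light - master_dark       # removes the repeatable dark/pattern signal

print(round(light.std(), 1), round(calibrated.std(), 1))
# The pixel-to-pixel spread drops sharply once the fixed pattern is gone.
```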
Other considerations
Having a cooled camera sensor makes a world of difference, as I have already begun to see in the first images from my ZWO ASI1600MM Pro. Your dark current, a huge source of noise, diminishes dramatically. A cooled sensor, longer subframes, more subframes, a lower-read-noise chip, and much higher quantum and transmission efficiency all combined to drastically decrease the noise between these two images of the Orion Nebula.
Nikon D5300, 12x60s frames at ISO-1600, taken on an Orion ST-80 achromatic refractor, ambient temperature = 36 F (2 C)
ZWO ASI1600MM using Astronomik LRGB filters, total 113x60s frames, gain=unity (139), taken on a 140mm Vixen neo-achromat refractor, sensor temperature = -30C (-22F)
Light pollution
Darker skies will give you greater contrast, making it much easier to pick up dim details (see the Whirlpool Galaxy images above) and enabling you to distinguish very dim signal from the noise and background light. They will also decrease the extra shot noise added by light pollution.
All right, show me the numbers!
For my experiment, I chose a dataset where I actually had enough subframes to measure the difference in SNR when stacking greater numbers of subframes - usually I am rather impatient and don't gather more than about 25 subframes. I picked M8-M20 #2, an image of both the Lagoon and Trifid Nebulae (M8 and M20) captured on the back side of Casper Mountain on August 17th, 2017 while I was there for the solar eclipse. It was taken with my Nikon D5300 attached to my Borg 76mm apochromatic refractor, using a Hotech SCA field flattener. The telescope was attached to my Celestron NexStar SE mount (chosen for this trip so I wouldn't have to polar align it before dawn the morning of the eclipse, since this mount is an alt-az mount). The subframes are 30 seconds long (the NexStar has some serious periodic tracking error) and ISO-1600. The temperature was between 53 and 55 F over the course of the acquisition of that dataset. All stacks were done in DeepSkyStacker, and with the exception of the image I stacked without calibration frames to measure the difference in SNR, I used 20 darks, 20 biases, and no flats (I didn't have any for that scope yet; it was fairly new to me). I stacked 10 frames, 44, and 88, and then stacked 88 without the calibration (dark and bias) frames (all using the auto-adaptive weighted average stacking option, my preference as of late). I saved out the raw 16-bit TIFFs with changes embedded, not applied, and didn't do any adjustments in DSS. In Photoshop, I stretched the histograms of the stacked images; for the 88-frame stack with calibration files, I used my post-processed, completed image. I didn't make any additional adjustments to the 10 or 44 stacks, or the 88 stack without calibration.
The method of calculating SNR is quite simple: take a sample of the image over a flat area (on your DSO, since that's what you care about the most), so either in a nebulous region that isn't changing brightness much and doesn't include any stars, or between spirals of a galaxy, or something like that. Grab the mean and standard deviation of that area. SNR = mean / standard deviation - that's it! It's a unitless value, although if you are into radio or other kinds of signal transmission and love your decibels, you can convert SNR to decibels using dB = 20*log10(SNR). (log10 = log base 10, if that wasn't clear).
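In code form, the measurement is just a few lines. This sketch assumes you've already loaded the image as a 2-D luminance array (the function and the synthetic data are just for demonstration):

```python
import numpy as np

def region_snr(image, x, y, size=25):
    """SNR of a flat patch: mean over standard deviation of its pixels.

    `image` is assumed to be a 2-D luminance array, and (x, y) is the
    upper-left corner of the sample box, matching how I record the
    coordinates below.
    """
    patch = image[y:y + size, x:x + size]
    snr = patch.mean() / patch.std()
    return snr, 20 * np.log10(snr)   # also report it in decibels

# Synthetic stand-in for a stacked frame: mean 40, noise sigma 2.5
rng = np.random.default_rng(3)
fake = rng.normal(40, 2.5, size=(500, 500))
print(region_snr(fake, 200, 200))    # roughly (16, 24 dB)
```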
Now, you need all of the images you're comparing to be positioned the same way so that you are sure to grab the exact same area of each image, since the SNR over the region you are sampling will change depending on where you are sampling it. I did this by opening up each of the images I compared in Photoshop, cropping them to be the same size, and then copying them over one of the images all as layers.
The Layers pane in Photoshop (if you don't already see it in the lower right corner: Window -> Layers)
The order these are in does not matter. I turned off viewing all of the layers except for the base layer (my finished image) and the image I was currently aligning by clicking the eyeball box to the left of the layer thumbnail, clicked on the image I wanted to align (in the screencap above, the "single frame" layer), and turned the opacity down to about 50-60% (the "opacity" selector box). Then I hit Ctrl + A for Select All, and clicked on the Move Tool on the left panel. Then I used the arrow keys to nudge the image until it was in line with the base image by looking at the stars. Get it as close as you can, knowing that there is probably some sub-pixel shift and you won't be able to get it exactly.
"Single frame" not aligned with "Complete"
The two images are now aligned (or are at least close)
Repeat for any other versions of the image you want to compare - different numbers of stacked frames, calibrated vs not calibrated, using different stacking methods, using a different number of darks and biases, etc. Hit Ctrl + D when done to deselect.
Next, zoom in on your DSO, and try to find a flat area without stars or much change in brightness. The larger your sample, the better, but if it's too large you'll get true variation in the DSO, which will skew your measurement; I used a 25x25 pixel box. This is 625 pixels total; the square root of that is 25, which is 4% of 625, so I'll have an inherent 4% statistical error in my SNR measurements (less than 10% is safe, by rule of thumb). You can set the size of the selection box by clicking the Rectangular Marquee Tool (the dashed-box-shaped icon on the left panel), changing the "Style" to "Fixed Size," and setting your width and height. Click where you want to place the selection box. I use the brightest image of my comparison set so I can tell what I'm looking at while choosing the area - in this case, the completed image (again by clicking the eyeballs to toggle viewing the layers).
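For reference, here's that rule-of-thumb sampling error (1 over the square root of the number of pixels in the box) worked out for a few box sizes:

```python
import numpy as np

# Rule-of-thumb statistical error for an N-pixel sample box: 1 / sqrt(N)
for box in (10, 25, 50):
    n = box * box
    print(f"{box}x{box} box: {100 / np.sqrt(n):.1f}% error")
# 10x10 -> 10.0%, 25x25 -> 4.0%, 50x50 -> 2.0%
```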
If you go to Window -> Info, you can have a panel open that shows your coordinates (in inches by default, although you can click the options button (the three lines in the corner of the panel) and change it to pixels, centimeters, what have you). I'd record these wherever you are writing down your measurements for future reference, and I use the upper left corner of the selected area.
Finally, in the Histogram panel, click the options button (three lines) and select Expanded View and Show Statistics. This will show you the statistics you need.
In Source, select "Selected Layer" so that you're only measuring the image layer you have selected. In Channel, select Luminosity. This is so you aren't taking color variances into account. The human eye notices differences in luminance far more than chrominance anyway.
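If you'd rather do this measurement outside of Photoshop, you can collapse an RGB image to a single luminance channel yourself. The Rec. 601 luma weights below are a common approximation; Photoshop's Luminosity channel is computed in a similar spirit but may not use exactly these weights:

```python
import numpy as np

def luminance(rgb):
    """Collapse an RGB image (H x W x 3, values 0-1) to one luminance channel.

    Uses the common Rec. 601 luma weights; an image editor's "Luminosity"
    channel is similar in spirit but may be weighted slightly differently.
    """
    return rgb @ np.array([0.299, 0.587, 0.114])

rng = np.random.default_rng(4)
fake_rgb = rng.random((100, 100, 3))
print(luminance(fake_rgb).shape)   # (100, 100)
```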
Now we are ready to roll!
First, here is a raw, single frame (converted to jpeg of course).
It is quite dark. If you zoom in, you will see the noise.
Single raw frame. You will also see how good the tracking is on my NexStar mount. (sarcasm)
I recorded the mean of that square to be 20.15, and the standard deviation to be 5.07. Dividing those two, we get a SNR of 3.97. This means that your signal doesn't rise very far above your noise, a fact that is easy to see in the image above.
Next, I selected the layer that is my stack of 10 frames, and recorded a mean of 43.56 and standard deviation of 2.72, which yields SNR = 16.01. Just by stacking 10 frames and doing dark and bias subtraction, we have quadrupled our signal to noise ratio!
Stack of 10 frames, dark and bias subtracted.
Now, statistics says a stack of 10 will only get us a SNR increase of sqrt(10) = 3.16, and 3.97 x 3.16 = 12.5. But again, dark and bias subtraction help us out.
Next, I measured a stack of 44 frames (half of the maximum), and recorded a mean of 35.99 and standard deviation of 1.58. This means a SNR of 22.77. This is a 1.4x increase. You can see our diminishing returns happening already, but it's still a solid increase.
Stack of 44x30s frames, dark and bias subtracted.
Finally, I measured my stack of 88 that I post-processed - stretched the histogram, adjusted the light curves, adjusted the color balance (not important for noise), clipped the left end of the histogram (important for background and dim noise reduction, although you also lose real signal that's lost in the mix), and ran a denoising (blurring) algorithm. I recorded a mean of 99.07 and standard deviation of 2.45, which gives us a SNR of 40.44. Fantastic!
By stacking, calibrating, and post-processing, we have increased the signal-to-noise ratio by 10 times! And it shows - compare the single raw frame to the final product.
As with all good research, let's summarize the results in a nice Excel table.
"Increase from single sub" here is the factor of the increase of the SNR - SNR of the frame / SNR of the single frame
I am going to look into some image quality metric tools as well, but I wanted to do this as a warm-up. What a fun exercise!
Bottom line: Signal-to-noise ratio matters a lot! Cameras are noisy, but we can beat that down with the power of statistics and get some really nice-looking space images, even though the odds can seem to be against us.
Whew! That was a long post.