What is LRGB?
LRGB imaging refers to using luminance, red, green, and blue filters on a monochrome camera in order to create color images.
DSLRs and other "one-shot color" (OSC) cameras have built-in filters and do all of the color combination work for you. They have what is called a Bayer filter overlaid on the sensor: a repeating 2x2 block of filters, with one red, two green, and one blue. Thus, it takes four pixels to make a single color pixel on an OSC camera. Green is the one that got two because our eyes are most sensitive in that part of the spectrum - largely because the sun's visible output peaks in the green.
Bayer filter on a color camera
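If it helps to see the idea as code, here's a minimal sketch in Python/NumPy (the function and fake data are just for illustration) of the crudest possible "superpixel" demosaic, where each 2x2 RGGB block collapses into one color pixel. Real cameras interpolate instead, so they keep full resolution:

```python
import numpy as np

def superpixel_demosaic(raw):
    """Collapse an RGGB Bayer mosaic into one RGB pixel per 2x2 block."""
    r  = raw[0::2, 0::2]             # red-filtered pixels
    g1 = raw[0::2, 1::2]             # first green pixel in each block
    g2 = raw[1::2, 0::2]             # second green pixel
    b  = raw[1::2, 1::2]             # blue-filtered pixels
    return np.dstack([r, (g1 + g2) / 2.0, b])   # average the two greens

mosaic = np.random.randint(0, 4096, size=(8, 8))  # fake 12-bit sensor data
print(superpixel_demosaic(mosaic).shape)          # (4, 4, 3)
```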
Monochrome cameras do not have these filters on top of the sensor, so every image you acquire on a monochrome camera will appear black-and-white. To get color, we place a filter in front of the camera - in this case, either an L, R, G, or B filter. Luminance filters pass all visible wavelengths, and L images are usually your sharpest, highest-contrast, highest signal-to-noise-ratio images, so they are used to bring out the detail in the color frames. They also block IR (infrared) and UV (ultraviolet) light, which camera sensors are sensitive to (less so than to visible wavelengths, but still). IR and UV light won't focus at the same point as visible light in your telescope (in refractors, at least), which will make your images appear blurry if you don't block those wavelengths.
Why LRGB?
While getting all of your colors in only one frame is convenient, there are several drawbacks. One big one is sensitivity. Imaging deep sky objects already puts us in a low-photon regime - very little light per second strikes our telescope and gets absorbed by the camera's detector. Having a filter of any kind reduces the amount of light that hits your camera, since some of that energy will be absorbed or reflected. The dye used on Bayer filters makes this situation even worse, but it's necessary in order to make such itty-bitty filters. Each pixel on your camera is only a few microns across (a micron is 0.000001 meters, or about 40-millionths of an inch). The graph below shows the quantum efficiency of the red, green, and blue pixels on a Canon 40D and Canon 50D, both standard DSLR cameras. Quantum efficiency is basically the percentage of the light hitting the sensor that actually becomes electrons inside the chip - the electrons that ultimately make your image.
Divide by 10 to convert Å (angstroms) to nm: 4000 Å = 400 nm, etc.
As you can see, the highest percentage of light that makes it through the system is only 35%. So you are losing a good amount of what little light you already had! This not only forces you to take longer exposures or bump up the ISO (which adds noise), it also decreases the signal-to-noise ratio of your images: you are collecting fewer "real" photons, but the noise from your camera sensor has not decreased.
In contrast, the peak quantum efficiency of the ZWO ASI1600MM Pro is 60% - the graph below is normalized, so the 1.0 peak corresponds to 60%.
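To put toy numbers on that difference (these are made-up illustrative values, not measurements from either camera), here's the shot-noise arithmetic in a few lines of Python:

```python
import math

photons = 1000      # photons landing on one pixel during an exposure
read_noise = 5.0    # sensor read noise in electrons RMS (illustrative)

for qe in (0.35, 0.60):                          # DSLR-ish vs mono-ish QE
    signal = qe * photons                        # electrons recorded
    noise = math.sqrt(signal + read_noise ** 2)  # shot noise + read noise
    print(f"QE {qe:.0%}: SNR = {signal / noise:.1f}")
# QE 35%: SNR = 18.1
# QE 60%: SNR = 24.0
```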
Color filters that you use for monochrome cameras are much larger than Bayer matrix filters, so they can be made with different materials that are much more highly transmissive. The graph below shows the transmission efficiency of the LRGB filters I use, Astronomik Type 2c.
The colored lines represent each filter: red, green, and blue. The orange line represents the luminance filter, which passes visible wavelengths but blocks IR and UV.
As you can see, they all transmit at least 90% of the incident light, so you lose very little in the color filters. Between the quantum efficiency of the sensor and the transmission efficiency of the color filters, you have an overall quantum efficiency between 31% at 400 nm at the worst and 57% at 520 nm at the best, all of which is better than a DSLR with a Bayer matrix.
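Put another way, the overall efficiency is just the product of the two curves. A quick sanity check in Python, with approximate values eyeballed off the graphs above (check your own sensor and filter curves):

```python
sensor_qe = {400: 0.35, 520: 0.60}   # ZWO ASI1600MM Pro, approximate
filter_tx = {400: 0.90, 520: 0.95}   # Astronomik Type 2c, approximate

for nm in (400, 520):
    print(f"{nm} nm: {sensor_qe[nm] * filter_tx[nm]:.1%} overall")
# 400 nm: 31.5% overall   (close to the ~31% worst case above)
# 520 nm: 57.0% overall   (the ~57% best case above)
```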
Not to mention narrowband imaging!
But enough about that. It's tutorial time.
For more on LRGB vs one-shot color imaging, and on how you should divvy up time for luminance vs RGB, check out this great slide deck by Craig Stark.
Stacking
For this tutorial, we're going to stack and process the M51 Whirlpool Galaxy image I recently took at the National Youth Science Camp near Green Bank, WV.
This set was taken over two nights. Here are the deets:
Date: 8 July 2018, 9 July 2018
Location: National Youth Science Camp, WV
Object: M51 Whirlpool Galaxy
Camera: ZWO ASI1600MM Pro
Telescope: Borg 76ED
Accessories: Hotech field flattener, Astronomik LRGB Type 2c 1.25" filters
Mount: Celestron AVX
Guide scope: Orion 50mm mini-guider
Guide camera: QHY5
Subframes: L: 17x180s (51m) (8 July 2018)
R: 8x180s (24m) (9 July 2018)
G: 6x180s (18m) (9 July 2018)
B: 7x180s (21m) (9 July 2018)
Total: 38x180s, 1h54m
Gain/ISO: 139
Stacking program: DeepSkyStacker 3.3.2
Stacking method (lights): Median kappa sigma clipping (2,5), because satellites
Darks: 20
Biases: 30
Flats: 0
Temperature: -25C (chip)
File format and pre-processing
When you image with a monochrome astro camera, a good format to save your data in is FITS. FITS (Flexible Image Transport System) is the most commonly used format for astronomy data. It has a flexible metadata system in which you can store a variety of information: everything from specs on your camera and telescope to coordinate system data, such as the celestial coordinate positions of stars in the field of your image, and much more. It's not just for image data, either; it can store arrays of arbitrary dimension (higher than the 2D used for image files). It's a raw format that you need extra software to view; I use AvisFV, a lightweight FITS file reader that also lets you see the histogram, stretch the histogram, and see the header data. Because of its raw-ness, the images will look really weird at first, so it's a little harder to "see" into the image and get a feel for whether it's going to be good or not (whether you need a longer exposure, a shorter exposure, higher gain, etc.). For example, here's a screenshot of one of the red images, as seen in AvisFV.
Single raw 3-minute frame on M51 through the red filter.
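If you'd rather poke at a FITS file programmatically, here's a minimal sketch using Python's astropy package (the filename is made up - point it at one of your own subframes):

```python
from astropy.io import fits

with fits.open("M51_R_180s_001.fits") as hdul:
    hdul.info()                          # list the HDUs (header/data units)
    header = hdul[0].header
    data = hdul[0].data                  # the image itself, as a NumPy array
    print(header.get("EXPTIME"), "s")    # EXPTIME is a common header keyword
    print(data.shape, data.dtype, data.min(), data.max())
```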
Now, you might look at this and think, "This looks like crap!" It certainly looks super noisy, and the DSO looks blown out. Fear not, however; this is a very raw and linear representation of the image. The final product will look great, don't worry!
I collected 17 luminance frames, 8 red, 6 green, and 7 blue, each 3 minutes long at unity gain. I had 8 of each of the RGB frames at first, but as M51 sank toward the horizon, the atmosphere was mushier there, so the guiding wasn't as good. This led to streaky or bloated stars, so I deleted those frames. Be sure to always go through all of your image frames and delete the ones with streaky or bloated stars, or ones where clouds rolled through, your telescope fogged up, etc. Bad data will make for a less-than-satisfactory image at the end. Garbage in, garbage out!
Stacking in DeepSkyStacker
Registration
DeepSkyStacker is not equipped to do automated stacking of LRGB data - it's really meant more for OSC data. With a little extra legwork, though, it can be put to use for stacking LRGB data. Luckily, FITS files are great, and they stack way faster than .NEF or .CR2 raw DSLR camera files. Basically, what we're going to do is register all of the frames to a common reference frame, stack the L's, R's, G's, and B's separately, and save out those individual files for combination in Photoshop.
First, import all of your lights (all LRGB), darks, flats (if you have them) and biases, just like you would for OSC camera images.
Click "Open picture files..." and be sure to select the FITS file type. Click one of the images, and then use the Ctrl+A keyboard shortcut to select all, and hit Open.
Do the same for darks, flats, and bias/offset by clicking those text-buttons. If you have different exposure times on your frames, don't worry - import all of your lights and darks anyway.
Next, hit the "Check All" text-button over on the left-hand side - all light frames are toggled off by default, since DSS expects you to use its viewer to check your files for streaky stars and other problems. I find that it loads images very slowly and that you have to stretch them yourself to see anything, so I don't use its viewer for pre-processing.
Next, click the "Register checked pictures" button on the left. Un-check "Stack after registering," and make sure "Register already registered pictures" is checked. Hit the Advanced tab and check your star detection threshold - I usually use 3-5%. Click OK.
It's reminding me to add flats - I didn't get a set of flats with this dataset. I find I don't usually need flats on refractors so much.
If you've got a lot of images, go grab yourself a snack or a cup of coffee.
Once that's done, scroll through the file list. You'll see that every light frame has a score - an image quality metric. If you click the "Score" column, it will sort by value. Scroll all the way to the bottom to see the frame with the highest score, which will most likely be a luminance frame. Right-click it and click "Use as reference frame."
You'll see that the score gains an asterisk.
Now, click "Register checked pictures..." again, and again be sure that "Register already registered pictures" is checked, and "Stack after registering" is unchecked. Click OK. Go pet your cat or browse Facebook, or start planning your next star party.
Stacking
All right, now that registration is all done, time to stack!
First, put your files back in name-order by clicking the "File" column. Next, un-check your RGB files, leaving the L files checked. If you have different darks, biases, and flats that correspond with these (if you have different temperatures or different exposure times, for example), leave only the files checked that go with your L's. You can select multiple files at once using the CTRL key to click individual ones, or CTRL+SHIFT to select several sequentially. Then right-click somewhere on your block of selected files and click "Uncheck."
Now only your L files and their corresponding darks, flats, and biases should be checked.
Next, click the "Stack checked pictures..." button. Go through the Stacking Parameters options and make your parameter picks. See this post for a rundown on options. I used the "Median kappa sigma clipping" stacking method for this dataset, since several satellites went through, and this method is good at rejecting them from the final image. Finally, hit OK.
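If you're curious what that rejection actually does, here's a conceptual sketch in Python/NumPy - not DSS's actual implementation - assuming the "(2,5)" setting means kappa = 2 over 5 iterations:

```python
import numpy as np

def kappa_sigma_stack(frames, kappa=2.0, iterations=5):
    """Per-pixel median with iterative kappa-sigma rejection.
    frames: (N, H, W) array of registered light frames. Values more
    than kappa standard deviations from the median get thrown out -
    which is why a satellite trail in a single frame disappears."""
    stack = np.asarray(frames, dtype=np.float64)
    for _ in range(iterations):
        med = np.nanmedian(stack, axis=0)           # per-pixel median
        sigma = np.nanstd(stack, axis=0)            # per-pixel spread
        outliers = np.abs(stack - med) > kappa * sigma
        stack = np.where(outliers, np.nan, stack)   # reject the outliers
    return np.nanmedian(stack, axis=0)              # median of survivors
```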
Go catch up on the latest astro-news, or shop around astronomy websites to lust after expensive equipment.
Once your image is done stacking, don't touch any adjustment buttons, and click the "Save picture to file..." button on the left. I've been saving out images as 32-bit integer TIFFs lately. Not sure if it helps, but why not, it only adds a few extra steps in Photoshop. Be sure to hit the "Embed adjustments, but do not apply" radio button. Plop that file into a folder (I make a folder called LRGB inside of my Finals folder), pick a name (see the "organize your files" section of this post for some advice), and then hit Save. I recommend using "_L," "_R," etc at the end of your filenames so that you know which is which - remember, they're all black-and-white.
Now, rinse & repeat with your R's, G's, and B's. Un-check your L's and their corresponding calibration frames (if different from your RGB frames), and then check your reds, stack, uncheck, greens, stack, uncheck, etc.
At the end, you should have 4 TIFF files: luminance, red, green, and blue.
A couple of things to mention here. First, the Windows photo viewer does not support 32-bit images, so they're going to look super weird if you open them there. Second, in DeepSkyStacker or Photoshop, they'll look really dark. Don't worry - we're about to fix that.
Post-Processing in Photoshop
Stretching
I have a subscription to Photoshop Creative Cloud, so right now I'm using Photoshop CC 2018 for this process.
Go to the folder where you saved out your four TIFFs from DSS, and click and drag them all into Photoshop. I like to re-arrange the tabs to be in LRGB order (Photoshop will open them in alphabetical order - just click and drag the tab across to where you want it).
These images are all unstretched, meaning that the data are all sitting in an itty-bitty peak way down on the left side of the histogram. This is because, as I mentioned, we're in a low-photon regime, so all of our images are going to be quite dark. What we want to do is widen the peak where the data are located so that they span a much wider range of the available brightness values. Go ahead and hit CTRL+L as a shortcut to Levels, or if you don't like shortcuts, go to Image -> Adjustments -> Levels up at the top.
The only reason you can see the galaxy so well here is because I imaged from a particularly dark location, so there is almost no background light to obscure the DSO.
Photoshop won't show the histogram for 32-bit images, since it doesn't really support 32-bit (notice the big fat 0 in the histogram window on the right), but you can see it in the Levels tool, which is what we're going to use to stretch. All of the image data are in that little gray peak way down on the left (black) end of the histogram. In these histograms, black, or low-intensity data, is on the left, and white, or high-intensity data, is on the right. We want to widen that peak to cover more area. We'll do this by adjusting the black and gray sliders - the leftmost slider and the middle slider. Leave the rightmost slider (white point) alone - it'll oversaturate your stars, which we don't want.
Start by pulling the gray point (middle slider) in toward the peak. The image will get really bright. Don't worry about how bright it is at this point.
Hit OK to commit, and then open Levels again. This time, you'll see the histogram has moved. Move the black point slider in toward the left edge of the peak (leave a little space though), and move the gray point slider toward the right edge of the peak (again leaving some room).
Hit OK. Play it again, Sam.
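Numerically, each Levels pass is doing something like this - a rough model of the sliders, not Photoshop's exact math:

```python
import numpy as np

def levels(img, black=0.0, white=1.0, gamma=1.0):
    """Remap the black/white points of a 0..1 image, then apply gamma.
    Dragging the middle (gray) slider toward the peak corresponds to
    gamma > 1, which brightens the faint stuff without clipping."""
    x = np.clip((img - black) / (white - black), 0.0, 1.0)
    return x ** (1.0 / gamma)

img = np.linspace(0.0, 0.2, 5)     # stand-in for dark, unstretched data
print(levels(levels(img, gamma=4.0), black=0.05, gamma=2.0))
```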
Hit OK. Now, go to Image -> Mode, and select "16 bits/Channel," and choose "Exposure and Gamma" as the Method. Click OK.
Now that we're in 16-bit, you'll see the histogram in the Histogram box on the right side of the screen. (If it's not there, you can go to Window -> Histogram). We still have some more stretching to do, so open up Levels again, and you'll see that converting it to 16-bit moved the peak to the center. Go ahead and move that black point slider up toward the left end of the peak, and if needed, move the gray slider in toward the right edge of the peak.
That's looking pretty good to me. If needed, you can do Levels on the 16-bit image a few more times. You'll do more tweaking later, so don't go too crazy.
Flattening
If you didn't use flats, or your flats didn't work very well, or you have a light pollution gradient, you may want to make a synthetic flat. I've found this process to work rather well. Other tutorials I've seen have you do the flattening after combination - I've found it works a lot better if you do it before. Now, synthetic flats only really work on images with small DSOs. If you have a big huge nebula that dominates your image, this method won't work. Luckily, the Whirlpool Galaxy is a pretty small component of this image set.
First, click the Rectangular Marquee tool in the left tool panel (the one with the dashed-line box), then hit CTRL+A to select the entire image, and then hit CTRL+C for Copy. Then hit CTRL+N for New, and hit Create. Then paste your image in with CTRL+V. Now you have a copy of your image.
Next, long-press the tool with the picture of the band-aid on the left tool panel, and select "Healing Brush."
In the top panel, click the down arrow next to the brush size icon, and set the "Hardness" to 0.
You can re-size the brush with "Size," or you can use the [ and ] keys to change the size. 500 is a good size for this image.
Now, press the ALT key and click an area a short distance away from your DSO, then paint over your DSO. You may need to select multiple reference areas so that you don't drag the reference area over your DSO, and so you get a nice flat star field where your DSO used to be. Do the same for bright stars. Be sure to choose reference points that are near the area you're healing, especially if your gradients or vignetting are severe. The point is to capture just the gradient of the image.
Bye-bye deep sky objects, and bright stars
Once that is done, go to Filter -> Noise -> Dust & Scratches. Set the radius high enough so that you don't see any stars, and you have just the gradient. Click OK.
Now, go back to your original image, and click Image -> Apply Image. Select the "Untitled-#" as the source file, and choose Subtract as the blending mode. Choose an offset between 30 and 50, and you can decrease the opacity if you want. You'll see a preview of the result. Once you're satisfied, click OK.
Now you've got a nice flat image.
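For the curious, the whole recipe boils down to "estimate a smooth background, subtract it, add an offset so nothing clips to pure black." Here's a rough sketch in Python, with SciPy's median filter standing in for the Healing Brush and Dust & Scratches steps (parameter values are illustrative; a proper version would heal out the DSO and bright stars first, like we just did by hand):

```python
import numpy as np
from scipy.ndimage import median_filter

def synthetic_flat_subtract(img, radius=50, offset=0.15):
    """img: float array normalized to 0..1. The Photoshop offset of
    30-50 on a 0-255 scale is roughly 0.12-0.2 here."""
    background = median_filter(img, size=2 * radius + 1)  # smooth gradient
    return np.clip(img - background + offset, 0.0, 1.0)
```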
Now, rinse & repeat with your R, G, and B images. Whew! Lots of work.
For your RGB frames (but not your L), also do some denoising. Carboni's Tools has an action called "Deep Space Noise Reduction" - run it on your RGB frames. Sometimes I've even used the noise reduction tool in Camera Raw Filter, which applies a slight blur to the images. It really helps reduce the color noise that is bound to show up in the combined image, and sharpness comes from luminance anyway, not from chrominance. You can blur the RGB frames quite significantly and barely be able to tell in the final image.
Noisy stacked red image
Less noisy (noise has been blurred so that it's less obvious)
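In code terms, the stand-in for all of that is just a blur on the stacked color frames - never the luminance, which is where the detail lives (sigma here is an arbitrary illustrative value):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_color_frame(frame, sigma=2.0):
    """Crude Gaussian-blur denoising for a stacked R, G, or B frame.
    The real Carboni action is smarter (it masks the background), but
    the principle is the same: chrominance can take heavy smoothing."""
    return gaussian_filter(np.asarray(frame, dtype=np.float64), sigma=sigma)
```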
One other thing I want to mention real quick is that if you image over multiple nights and your camera ends up in not quite the same orientation, that's fine - your stacked images will still be all lined up thanks to the registration process we went through earlier, although you'll have to crop your final image. It will also cause a thin spike at 0 in your histogram, which will impede your ability to cut out the background light. But you can take care of that after you crop - we'll get there later. Leave some background light in for now anyway.
Couldn't quite get my camera in the same orientation on the second night as the first...
Combination
Once you've stretched, flattened, and de-noised your images, it's finally time to combine them! I recommend saving out a copy of your LRGB images as 16-bit TIFFs into another folder so that you have a starting point later if you want to re-work anything. I make another folder inside my LRGB folder called "Stretched & flattened."
Now, go back to your L frame and click the Rectangular Marquee tool again, and hit CTRL+A to select the whole image, and then CTRL+N to make a new image. Don't copy and paste your L image yet; it comes last. Make sure to choose "RGB Color" as the Color Mode, and hit Create. The canvas should be the same size as your L image, since you used it to create the new image. I usually choose a black background, since I've had weird stuff happen before when I've done transparent.
In the Layers box down in the lower right corner (go to Window -> Layers if it's not there), click the Channels tab (again, Window -> Channels if you don't see it). Select the Red channel. Then go to your red image, CTRL+A to select all, CTRL+C to copy, and then back over in your new image, CTRL+V to paste.
Now do the same for the green and the blue, making sure to select the green and the blue channels, respectively. When you're done, click the RGB channel, and you can see the image with all three channels together.
Since we registered all of the frames to one common reference frame, the images will already be perfectly aligned on top of one another - more perfectly than you could do yourself, since DSS can do sub-pixel adjustments.
Now, you'll see that it looks kind of fuzzy - a result of our denoising. And it's not very bright, the colors are weird, etc. We'll get to that.
Now it's time to add the luminance channel. Click the Layers tab to go back to the layers view. Then go back over to your L image, select all with CTRL+A, CTRL+C for copy, and then in your new RGB image, CTRL+V for paste. This will add the L image as a layer above your "Background" RGB image. In order to put them together, click the layer you just added (it'll be called Layer 1), and in the Layers box, change the type from "Normal" to "Luminosity" in the drop-down menu.
In order to keep track of which layer is which, double-click each layer to change its name. I name Background "rgb" and Layer 1 "lum." Note that for the Background layer, a window will pop up to change the name, whereas with any other layer, you can change it right there in the Layers box.
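Outside of Photoshop, the same combine can be sketched in a few lines; here's one common approach using scikit-image and Lab color space, which is similar in spirit (though not identical) to the Luminosity blend mode:

```python
import numpy as np
from skimage import color

def lrgb_combine(lum, r, g, b):
    """Build the RGB composite, swap in the stretched luminance as the
    lightness channel, and convert back. Inputs are 2D float arrays
    scaled 0..1; Lab lightness runs 0..100, hence the factor of 100."""
    lab = color.rgb2lab(np.dstack([r, g, b]))
    lab[..., 0] = lum * 100.0
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)
```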
Finally, let's crop out the areas that don't exist in all four images. Use the Rectangular Marquee tool. If you don't like the box you drew, you can move it around, or you can press CTRL+D to deselect and start over. Once you're satisfied, go to Image and click Crop. You can crop multiple times.
All right, now we're ready to get to work!
Color Correction
My Astronomik LRGB filters advertise that they are already white balanced in sRGB space, which is the most commonly used color space for computers and printing. As you can see in the above image, it does look pretty good. Sometimes, though, I'll still get an undesirable shade of something, especially if I imaged with one filter in an area of the sky with more light pollution than with another filter later, or if it was lower on the horizon later, or if I stretched one more than the others, etc.
There are many ways to color balance images, and this is where a lot of the art comes into astrophotography. While we can do photometry on DSOs all day long and say precisely how much light is emitted at which wavelengths, it's hard to say what it would really look like if we were close enough to it to get enough light hitting our eyes to get color information like we do in the daytime. And different camera sensors respond differently to different wavelengths, no matter how well-balanced the filters are - just check out the quantum efficiency graph on my camera up above.
One way is to use an app like eXcalibrator, which finds a G2V star in the image - the same spectral type as our sun - by plate-solving. This star is then used as a white reference. I read about eXcalibrator in a recent Sky and Telescope issue, but my attempts to use it today were ultimately unsuccessful. No matter how "nice" the data I gave it - stretched, cropped, etc - it could not plate-solve, no matter what solving method or dataset I used. (I also tried untouched stacked frames out of DSS, but it was still a no-go.) I couldn't find any help online, so I gave up. In PixInsight, you can use a whole galaxy as a white reference, since theoretically, all of the light in a given galaxy will average out to be white.
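The idea behind the G2V method is simple enough to sketch, though. With hypothetical flux measurements for one sun-like star in each stacked channel (in practice you'd sum the pixel values in a small aperture):

```python
# Made-up fluxes for a G2V star measured in the R, G, and B stacks:
r_flux, g_flux, b_flux = 1200.0, 1000.0, 850.0

# A sun-like star should come out white, so scale R and B to match G:
r_scale, b_scale = g_flux / r_flux, g_flux / b_flux
print(f"multiply R by {r_scale:.2f} and B by {b_scale:.2f}")
# multiply R by 0.83 and B by 1.18
```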
And finally, you can use other internet images. You have to do that wisely, though, keeping in mind that many images of things like nebulae are in narrowband instead of wideband LRGB color, many people like to take artistic freedom with their images, and many images from ESA or NASA have added IR, X-ray, or UV data in them as additional colors. I usually "just wing it" in this manner.
Select the RGB layer, then click the "Create new adjustment layer" button at the bottom of the Layers box, which looks like a circle that is half black, half white. Select Levels, and a small box will appear on the right side of the image with the Levels controls. Now, keep in mind, this is only acting on the RGB layer, not the luminance layer, and changes are not cumulative - that is to say, if you add additional adjustment layers, things you did in another adjustment layer won't show up in the histogram in the new layer; essentially, they're independent. You can move the black slider a bit into the peak, since we don't really need color data in the background of the image. Don't go too far though, or you'll lose color information in the parts of the image you care about.
Add another adjustment layer to the RGB layer of Curves. This is where I do most of my color correction. Mine's pretty close, so I only make minor adjustments. Remember that red + green = yellow, so you'll need to bump up both of those if you want more yellow.
Click the RGB layer and add another adjustment layer for Hue/Saturation, and turn up your saturation some. This can be done before Curves as well to help see the color adjustments better.
Additional Adjustments
Now it's time to play with the luminance channel. Add a Curves layer to the luminance layer, and cut out some of the background by moving the black point in. Don't move it in too far though, or you'll lose the dimmer parts of the image, such as the disruption cloud around NGC 5195 here. You also don't want to make the background pure black - it will look unnatural.
Feel free to play around with the many other types of adjustments that Photoshop has to offer. These are the main ones I use.
Now, I am only so good with layers. I'm still getting them all figured out, but this is about where my knowledge in that realm ends. But there are still a few tweaks I'd like to make, particularly to the noise, and I want to drop the background a bit more. There are some good Carboni tools to use, plus Camera Raw, but I need a single layer to operate most of that stuff on. So I save out a PSD file here, a Photoshop file, so I can have all the layers for editing later. Then, I go to Layer -> Flatten Image, which collapses all of the layers down.
First, I go to Camera Raw by hitting CTRL+Shift+A, and I adjust the Basic settings to my liking. I've found that if I'm close to being color balanced but not quite there, I can adjust the color temperature and tint and hit right where I want to be. Dehaze is another particularly powerful tool, boosting sharpness. I also go over to the Detail tab (the icon with the little mountains) and do some denoising with the color and luminance sliders. However, running the Deep Space Noise Reduction tool in the Carboni Tools is a little smarter - it creates a layer mask so that it only denoises the background, since usually the DSO has enough signal-to-noise that it's not so noisy. This way, you don't lose detail in the DSO.
I see that it's a little pink still, so I hit OK on Camera Raw and hit CTRL+M to open Curves to adjust it a bit more.
Finally, when I image on my club's achromatic refractor, I get blue halos around my stars. Carboni's tools have a blue halo reducer, which works better with my DSLR images than with my astro camera images, but it helps some. The halos aren't in this image because I used my apochromatic refractor. I might also run a few other tools, like Enhance DSO/Reduce Stars or Increase Star Color, just to see what they do. Sometimes they don't work very well, and I have to back up - you can't just undo one of the tools, because there are multiple steps involved in each action, so you have to click the History button just to the left of the histogram and back up there. (Again, Window -> History if it's not there.)
Once I'm mostly satisfied, I save out a TIFF and a JPG. I say mostly because art is never finished, only abandoned.
Conclusion
Here's a new version of the finished product (which I use Lightroom to watermark, since Photoshop astoundingly doesn't have a watermarking tool).
So we've come a long way from the messy-looking black-and-white raw frame! And this only scratches the surface of what you can do with your images. I'm sure I will be re-visiting this one once I get a better handle on PixInsight. But I'd still say it looks great! You can see that my synthetic flats have a few issues, however - nothing really beats a good set of actual flats taken on your telescope, especially when you don't have light pollution to contend with, like in this dataset.
Feel free to ask questions in the comment section, or you can message me on Facebook on my AstronoMolly Images page.
Have fun processing!