Sunday, July 29, 2018

Processing LRGB Images with DeepSkyStacker and Photoshop

Since getting my first monochrome camera, the ZWO ASI1600MM Pro, back in February, I think I've finally got the hang of processing monochrome LRGB images, at least in DeepSkyStacker and Photoshop.  Getting PixInsight figured out is my next step, but I'll share with you my current process.

What is LRGB?


LRGB imaging refers to using luminance, red, green, and blue filters on a monochrome camera in order to create color images.  

DSLRs and other "one-shot color" (OSC) cameras have built-in filters and do all of the color combination work for you.  They have what is called a Bayer filter overlaid on the sensor, arranged in 2x2 groups of four filters: one red, two green, and one blue.  Thus, it takes four pixels to make a single color pixel on an OSC camera.  Green is the one that gets two because our eyes are most sensitive in that part of the spectrum, which in turn is largely because green is near the peak of the sun's visible spectrum.

Bayer filter on a color camera
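If it helps to picture it, here's a tiny numpy sketch of how a 2x2 RGGB block of monochrome sensor pixels becomes one color pixel.  This is a simplified "superpixel" illustration of the idea - real debayering algorithms interpolate so you keep the full resolution.

import numpy as np

# A tiny 4x4 monochrome sensor readout (arbitrary illustrative values)
raw = np.array([
    [10, 20, 12, 22],
    [30, 40, 32, 42],
    [11, 21, 13, 23],
    [31, 41, 33, 43],
], dtype=float)

# RGGB Bayer layout: within each 2x2 block,
#   top-left = red, top-right = green, bottom-left = green, bottom-right = blue
r = raw[0::2, 0::2]                              # red-filtered pixels
g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0    # average of the two green pixels
b = raw[1::2, 1::2]                              # blue-filtered pixels

# One color pixel per 2x2 block -> four sensor pixels per color pixel
color = np.dstack([r, g, b])
print(color.shape)   # (2, 2, 3)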

Monochrome cameras do not have these filters on top of the sensor, so every image you acquire on a monochrome camera will appear black-and-white.  To get color, we place a filter in front of the camera - in this case, either an L, R, G, or B filter.  Luminance filters pass all visible wavelengths, and L images are usually your highest signal-to-noise, sharpest, and highest-contrast images, so they are used to bring out the detail in the color frames.  Luminance filters also block IR (infrared) and UV (ultraviolet) light, which camera sensors are sensitive to (less so than visible wavelengths, but still).  IR and UV light won't come to focus at the same point as visible light in your telescope (in refractors, at least), which will make your images appear blurry if you don't block those wavelengths.

Why LRGB?


While getting all of your colors in only one frame is convenient, there are several drawbacks.  One big one is sensitivity.  Imaging deep sky objects already puts us in a low-photon regime - very little light per second strikes our telescope and gets absorbed by the camera's detector.  Having a filter of any kind reduces the amount of light that hits your camera, since some of that energy will be absorbed or reflected.  The dye that is used on Bayer filters makes this situation even worse, but is necessary in order to make such itty-bitty filters.  Each pixel on your camera is only a few microns across (a micron is 0.000001 meters, or about 40-millionths of an inch).  The graph below shows the quantum efficiency of the red, green, and blue pixels on a Canon 40D and Canon 50D, both standard DSLR cameras.  Quantum efficiency is basically the percentage of the light hitting the sensor that becomes electrons inside the chip - the electrons that ultimately make up your image.
Divide by 10 to convert A (angstroms) to nm - 4000 A = 400 nm, etc

As you can see, the highest percentage of light that makes it through the system is only 35%.  So you are losing a good amount of what little light you already had!  This not only forces you to take longer exposures or bump up the ISO (which adds noise), it also decreases the signal-to-noise ratio of your images: you are collecting fewer "real" photons, but the amount of noise in your camera sensor has not decreased.

In contrast, the peak quantum efficiency of the ZWO ASI1600MM Pro is 60%, which corresponds to the 1.0 on the normalized graph below.

Color filters that you use for monochrome cameras are much larger than Bayer matrix filters, so they can be made with different materials that are much more highly transmissive.  The graph below shows the transmission efficiency of the LRGB filters I use, Astronomik Type 2c.

The colored lines represent each filter: red, green, and blue.  The orange line represents the luminance filter, which passes visible wavelengths but blocks IR and UV.

As you can see, they all transmit at least 90% of the incident light, so you lose very little in the color filters.  Between the quantum efficiency of the sensor and the transmission efficiency of the color filters, you have an overall quantum efficiency between 31% at 400 nm at the worst and 57% at 520 nm at the best, all of which is better than a DSLR with a Bayer matrix.
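If you want to put rough numbers on that, the combined efficiency is just the sensor's quantum efficiency times the filter's transmission at a given wavelength.  Here's a quick back-of-the-envelope check; the values below are approximate numbers read off the two graphs, so treat them as illustrative:

# Rough numbers read off the two graphs above (approximate, for illustration only)
sensor_qe = {400: 0.34, 520: 0.60}             # ZWO ASI1600MM Pro quantum efficiency
filter_transmission = {400: 0.90, 520: 0.95}   # Astronomik Type 2c filter transmission

for wavelength_nm in sorted(sensor_qe):
    overall = sensor_qe[wavelength_nm] * filter_transmission[wavelength_nm]
    print(f"{wavelength_nm} nm: {overall:.0%} of incident light becomes signal")
# 400 nm: 31% ...  520 nm: 57%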

Not to mention narrowband imaging!

But enough about that.  It's tutorial time.

For more on LRGB vs one-shot color imaging, and on how you should divvy up time for luminance vs RGB, check out this great slide deck by Craig Stark.  

Stacking


For this tutorial, we're going to stack and process the M51 Whirlpool Galaxy image I recently took at the National Youth Science Camp near Green Bank, WV.

This set was taken over two nights.  Here are the deets:
Date: 8 July 2018, 9 July 2018
Location: National Youth Science Camp, WV
Object: M51 Whirlpool Galaxy
Camera: ZWO ASI1600MM Pro
Telescope: Borg 76ED
Accessories: Hotech field flattener, Astronomik LRGB Type 2c 1.25" filters
Mount: Celestron AVX
Guide scope: Orion 50mm mini-guider
Guide camera: QHY5
Subframes: L: 17x180s (51m) (8 July 2018)
   R: 8x180s (24m) (9 July 2018)
   G: 6x180s (18m) (9 July 2018)
   B: 7x180s (21m) (9 July 2018)
   Total: 38x180s, 1h54m
Gain/ISO: 139
Stacking program: DeepSkyStacker 3.3.2
Stacking method (lights): Median kappa sigma clipping (2,5), because satellites
Darks: 20
Biases: 30
Flats: 0
Temperature: -25C (chip)

File format and pre-processing


When you image with a monochrome astro camera, a good format to save your data in is FITS.  FITS (Flexible Image Transport System) is the most commonly used format for astronomy data.  It has a flexible metadata system in which you can store a variety of information: everything from specs on your camera and telescope to coordinate system data, such as the celestial coordinates of stars in the field of your image, and much more.  It's not just for image data, either; it can store arrays of arbitrary dimension, not just the 2D arrays used for image files.

It's a raw format that you need extra software to view.  I use AvisFV, a lightweight FITS file reader that also lets you see the histogram, stretch the histogram, and read the header data.  Because of its raw-ness, the images will look really weird at first, so it's a little harder to "see" into the image and get a feel for whether it's going to be good or not (whether you need a longer exposure, a shorter exposure, higher gain, etc).  For example, here's a screenshot of one of the red images, as seen in AvisFV.

Single raw 3-minute frame on M51 through the red filter.

Now, you might look at this and think, "This looks like crap!"  It certainly looks super noisy, and the DSO looks blown out.  Fear not, however; this is a very raw and linear representation of the image.  The final product will look great, don't worry!
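By the way, if you ever want to poke at a FITS file programmatically instead of in a viewer like AvisFV, the astropy Python library reads them easily.  Here's a tiny sketch - the filename and header keywords are just examples, and what's actually in the header depends on your capture software:

from astropy.io import fits
import numpy as np

# Open one of the raw subframes (filename is just an example)
with fits.open("M51_R_180s_001.fits") as hdul:
    header = hdul[0].header   # the flexible metadata: camera, exposure, coordinates, ...
    data = hdul[0].data       # the raw pixel array (2D for a single image)

# Keyword names vary with your capture software, so .get() may return None
print(header.get("EXPTIME"), header.get("INSTRUME"), header.get("FILTER"))
print(data.shape, data.dtype)
print("min / median / max:", data.min(), np.median(data), data.max())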

I collected 17 luminance frames, 8 red, 6 green, and 7 blue, each 3 minutes long at unity gain.  I had 8 of each of the RGB frames at first, but as M51 sank toward the horizon, the atmosphere was mushier there, so the guiding wasn't as good.  This led to streaky or bloated stars, so I deleted those frames.  Be sure to always go through all of your image frames and delete the ones with streaky or bloated stars, or ones where clouds rolled through, your telescope fogged up, etc.  Bad data will make for a less-than-satisfactory image at the end.  Garbage in, garbage out!

Stacking in DeepSkyStacker


Registration


DeepSkyStacker is not equipped to do automated stacking of LRGB data - it's really meant for OSC data.  However, with a little extra legwork, it can be put to use for stacking LRGB data.  Luckily, FITS files stack way faster than .NEF or .CR2 raw DSLR camera files.  Basically, what we're going to do is register all of the frames to a common reference frame, then stack the L's, R's, G's, and B's separately, and save out those individual files for combination in Photoshop.

First, import all of your lights (all LRGB), darks, flats (if you have them) and biases, just like you would for OSC camera images.

Click "Open picture files..." and be sure to select the FITS file type.  Click one of the images, and then use the Ctrl+A keyboard shortcut to select all, and hit Open.


Do the same for darks, flats, and bias/offset by clicking those text-buttons.  If you have different exposure times on your frames, don't worry - import all of your lights and darks anyway.

Next, hit the "Check All" text-button over on the left-hand side - all light frames are toggled off by default, since DSS expects you to use its viewer to check your files for streaky stars and other problems.  I find that it loads images very slowly and you have to stretch it yourself to see it, so I don't use its viewer to pre-process.

Next, click the "Register checked pictures" button on the left.  Un-check "Stack after registering," and make sure "Register already registered pictures" is checked.  Hit the Advanced tab and check your star detection threshold - I usually use 3-5%.  Click OK.
It's reminding me to add flats - I didn't get a set of flats with this dataset.  I find I don't usually need flats on refractors so much.

If you've got a lot of images, go grab yourself a snack or a cup of coffee.


Once that's done, scroll through the file list.  You'll see that every light frame has a score.  This is an image quality metric.  If you click the "Score" column header, it will sort by value.  Scroll all the way to the bottom to see the frame with the highest score, which will most likely be a luminance frame.  Right-click it and click "Use as reference frame." 


You'll see that the score gains an asterisk.  

Now, click "Register checked pictures..." again, and again be sure that "Register already registered pictures" is checked, and "Stack after registering" is unchecked.  Click OK.  Go pet your cat or browse Facebook, or start planning your next star party.

Stacking


All right, now that registration is all done, time to stack!

First, put your files back in name-order by clicking the "File" column.  Next, un-check your RGB files, leaving the L files checked.  If you have different darks, biases, and flats that correspond with these (if you have different temperatures or different exposure times, for example), leave only the files checked that go with your L's.  You can select multiple files at once using the CTRL key to click individual ones, or CTRL+SHIFT to select several sequentially.  Then right-click somewhere on your block of selected files and click "Uncheck."  


Now only your L files and their corresponding darks, flats, and biases should be checked.  

Next, click the "Stack checked pictures..." button.  Go through the Stacking Parameters options and make your parameters picks.  See this post for a rundown on options.  I used the "Median kappa sigma clipping" stacking method for this dataset, since there were several satellites that went through, and this is a good method for rejecting them from the final image.  Finally, hit OK.


Go catch up on the latest astro-news, or shop around astronomy websites to lust after expensive equipment.

Once your image is done stacking, don't touch any adjustment buttons, and click the "Save picture to file..." button on the left.  I've been saving out images as 32-bit integer TIFFs lately.  Not sure if it helps, but why not, it only adds a few extra steps in Photoshop.  Be sure to hit the "Embed adjustments, but do not apply" radio button.  Plop that file into a folder (I make a folder called LRGB inside of my Finals folder), pick a name (see the "organize your files" section of this post for some advice), and then hit Save.  I recommend using "_L," "_R," etc at the end of your filenames so that you know which is which - remember, they're all black-and-white.

Now, rinse & repeat with your R's, G's, and B's.  Un-check your L's and their corresponding calibration frames (if different from your RGB frames), and then check your reds, stack, uncheck, greens, stack, uncheck, etc.

At the end, you should have 4 TIFF files: luminance, red, green, and blue.

Now, I'll mention here a few things.  First is that the Windows photo viewer does not support 32-bit images, so they're going to look super weird if you open them.  Second, looking at them in DeepSkyStacker, or in Photoshop, they'll look really dark.  Don't worry - we're about to fix that.

Post-Processing in Photoshop


Stretching


I have a subscription to Photoshop Creative Cloud, so right now I'm using Photoshop CC 2018 for this process.  

Go to the folder where you saved out your four TIFFs from DSS, and click and drag them all into Photoshop.  I like to re-arrange the tabs to be in LRGB order (Photoshop will open them in alphabetical order - just click and drag the tab across to where you want it).

These images are all unstretched, meaning that the data are all sitting in an itty bitty peak way down at the left side of the histogram.  This is because, like I mentioned, we're in a low-photon regime, so all of our images are going to be quite dark.  What we want to do is widen the peak where the data are located so that they span a much larger portion of the available brightness range.  Go ahead and hit CTRL+L as a shortcut to Levels, or if you don't like shortcuts, go to Image -> Adjustments -> Levels up at the top.

The only reason you can see the galaxy so well here is because I imaged from a particularly dark location, so there is almost no background light to obscure the DSO.

Photoshop won't show the histogram for 32-bit images, since it doesn't fully support 32-bit (notice the big fat 0 in the Histogram window on the right), but you can see it in the Levels tool, which is what we're going to use to stretch.  All of the image data are in that little gray peak way down on the left (black) end of the histogram.  In these histograms, black, or low-intensity data, are on the left, and white, or high-intensity data, are on the right.  We want to widen that peak to cover more area.  We'll do this by adjusting the black and gray sliders - the leftmost slider and the middle slider.  Leave the rightmost slider (the white point) alone - moving it will oversaturate your stars, which we don't want.  
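If you like to see the math behind those sliders, here's a rough numpy sketch of what a Levels adjustment does - my approximation, since Photoshop's exact implementation may differ.  The black and white points clip and rescale the input range, and the gray (midtone) slider is a gamma curve applied on top.

import numpy as np

def levels_stretch(image, black=0.0, white=1.0, gamma=1.0):
    # Approximation of a Photoshop Levels adjustment on data scaled 0..1.
    # black / white set the input clipping points; gamma > 1 brightens the
    # midtones, equivalent to dragging the gray slider toward the left.
    img = np.clip((image - black) / (white - black), 0.0, 1.0)
    return img ** (1.0 / gamma)

# Unstretched astro data: almost everything is crammed into a narrow peak near zero
data = np.clip(np.random.default_rng(0).normal(loc=0.02, scale=0.005, size=(100, 100)), 0, 1)
stretched = levels_stretch(data, black=0.005, gamma=6.0)
print(data.mean(), stretched.mean())   # the peak moves well up the brightness range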

Start by pulling the gray point (middle slider) in toward the peak.  The image will get really bright.  Don't worry about how bright it is at this point.


Hit OK to commit, and then open Levels again.  This time, you'll see the histogram has moved.  Move the black point slider in toward the left edge of the peak (leave a little space though), and move the gray point slider toward the right edge of the peak (again leaving some room).


Hit OK.  Play it again, Sam.


Hit OK.  Now, go to Image -> Mode, and select "16 bits/Channel," and choose "Exposure and Gamma" as the Method.  Click OK.


Now that we're in 16-bit, you'll see the histogram in the Histogram box on the right side of the screen.  (If it's not there, you can go to Window -> Histogram).  We still have some more stretching to do, so open up Levels again, and you'll see that converting it to 16-bit moved the peak to the center.  Go ahead and move that black point slider up toward the left end of the peak, and if needed, move the gray slider in toward the right edge of the peak.  


That's looking pretty good to me.  If needed, you can do Levels on the 16-bit image a few more times.  You'll do more tweaking later, so don't go too crazy.

Flattening

If you didn't use flats, or your flats didn't work very well, or you have a light pollution gradient, you may want to make a synthetic flat.  I've found this process to work rather well.  Other tutorials I've seen have you do the flattening after combination - I've found it works a lot better if you do it before.  Now, synthetic flats only really work on images with small DSOs.  If you have a big huge nebula that dominates your image, this method won't work.  Luckily, the Whirlpool Galaxy is a pretty small component of this image set.

First, click the Rectangular Marquee tool in the left tool panel (the one with the dashed-line box),  then hit CTRL+A to select the entire image, and then hit CTRL+C for Copy.  Then hit CTRL+N for New, and hit Create.  Then paste your image in with CTRL+V.  Now you have a copy of your image.

Next, long-press the tool with the picture of the band-aid on the left tool panel, and select "Healing Brush."




In the top panel, click the down arrow next to the brush size icon, and set the "Hardness" to 0.


You can re-size the brush here with "Size," or you can use the [ and ] keys to change the size.  500 is a good size for this image.

Now, press the ALT key, and click an area a short distance away from your DSO.  Then paint over your DSO.  You may need to select multiple reference areas so that you don't drag the reference area over your DSO, and you get a nice flat star field where your DSO used to be.  Do the same for bright stars.  Be sure to choose reference points that are near the area you're painting over, especially if your gradients or vignetting are severe.  The point is to capture just the gradient of the image.

Bye-bye deep sky objects, and bright stars

Once that is done, go to Filter -> Noise -> Dust & Scratches.  Set the radius high enough so that you don't see any stars, and you have just the gradient.  Click OK.


Now, go back to your original image, and click Image -> Apply Image.  Select the "Untitled-#" image as the source file, and choose Subtract as the blending mode.  Choose an offset between 30 and 50, and you can decrease the opacity if you want.  You'll see a preview of the result.  Once you're satisfied, click OK.


Now you've got a nice flat image.
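Incidentally, the Dust & Scratches filter is essentially a big median filter, so the whole synthetic-flat trick boils down to "model the background with a heavy blur, then subtract it with a small offset."  Here's a simplified numpy/scipy sketch of that idea - it skips the healing-brush step of painting out the DSO and bright stars, so it's illustrative only:

import numpy as np
from scipy.ndimage import median_filter

def synthetic_flat_subtract(image, filter_size=51, offset=0.1):
    # Model the smooth background with a big median filter (the code equivalent
    # of Dust & Scratches), then subtract it, adding back a small pedestal so
    # the background doesn't clip to pure black.
    background = median_filter(image, size=filter_size)
    return np.clip(image - background + offset, 0.0, 1.0), background

# Fake frame: a left-to-right gradient plus a small, faint "galaxy" in the middle
y, x = np.mgrid[0:200, 0:200]
gradient = 0.2 + 0.3 * x / 200.0
galaxy = 0.3 * np.exp(-((x - 100) ** 2 + (y - 100) ** 2) / (2 * 10.0 ** 2))
frame = gradient + galaxy

flattened, background = synthetic_flat_subtract(frame)
print(frame[:, 0].mean(), frame[:, -1].mean())          # left and right edges differ
print(flattened[:, 0].mean(), flattened[:, -1].mean())  # now roughly equal
print(flattened[100, 100] > flattened[0, 0])            # the galaxy survives the subtraction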

Now, rinse & repeat with your R, G, and B images.  Whew!  Lots of work.

On your RGB frames (but not your L), also do some denoising.  Carboni's Tools has one called "Deep Space Noise Reduction" - run this on your RGB frames.  Sometimes I've even used the noise reduction tool in Camera Raw Filter, which applies a slight blur to the images.  It really helps reduce the color noise that is bound to show up in the combined image, and sharpness comes from luminance anyway, not from chrominance - you can blur the RGB frames quite significantly and barely be able to tell in the final image.

Noisy stacked red image

Less noisy (noise has been blurred so that it's less obvious)

One other thing I want to mention real quick is that if you image over multiple nights and your camera ends up in not quite the same orientation, that's fine - your stacked images will still be all lined up thanks to the registration process we went through earlier, although you'll have to crop your final image.  It will also cause a thin spike at 0 in your histogram, which will impede your ability to cut out the background light.  But you can take care of that after you crop - we'll get there later.  Leave some background light in for now anyway.

Couldn't quite get my camera in the same orientation on the second night as the first...

Combination

Once you've stretched, flattened, and de-noised your images, it's finally time to combine them!  I recommend saving out a copy of your LRGB images as 16-bit TIFFs into another folder so that you have a starting point later if you want to re-work anything.  I make another folder inside my LRGB folder called "Stretched & flattened."  

Now, go back to your L frame and click the Rectangular Marquee tool again, and hit CTRL+A to select the whole image, and then CTRL+N to make a new image.  Don't copy and paste your L image yet; it comes last.  Make sure to choose "RGB Color" as the Color Mode, and hit Create.  The canvas should be the same size as your L image, since you used it to create the new image.  I usually choose a black background, since I've had weird stuff happen before when I've done transparent.

In the Layers box down in the lower right corner (go to Window -> Layers if it's not there), click the Channels tab (again, Window -> Channels if you don't see it).  Select the Red channel.  Then go to your red image, CTRL+A to select all, CTRL+C to copy, and then back over in your new image, CTRL+V to paste.


Now do the same for the green and the blue, making sure to select the green and the blue channels, respectively.  When you're done, click the RGB channel, and you can see the image with all three channels together.



Since we registered all of the frames to one common reference frame, the images will already be perfectly aligned on top of one another - more perfectly than you could do yourself, since DSS can do sub-pixel adjustments.  

Now, you'll see that it looks kind of fuzzy - a result of our denoising.  And it's not very bright, the colors are weird, etc.  We'll get to that.

Now it's time to add the luminance channel.  Click the Layers tab to go back to the layers view.  Then go back over to your L image, select all with CTRL+A, CTRL+C for copy, and then in your new RGB image, CTRL+V for paste.  This will add the L image as a layer above your "Background" RGB image.  In order to put them together, click the layer you just added (it'll be called Layer 1), and in the Layers box, change the type from "Normal" to "Luminosity" in the drop-down menu.
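If it helps to think about what that Luminosity blend is doing, it's roughly "keep the color from the RGB channels, take the brightness from the L frame."  Here's a rough equivalent in Python using a Lab-space swap - an illustration of the idea, not Photoshop's exact blend math, and it assumes you have scikit-image installed:

import numpy as np
from skimage.color import rgb2lab, lab2rgb

def lrgb_combine(rgb, lum):
    # rgb: float array (H, W, 3) in 0..1, the stacked/stretched color combine
    # lum: float array (H, W) in 0..1, the stacked/stretched luminance frame
    lab = rgb2lab(rgb)
    lab[..., 0] = lum * 100.0      # L* runs 0..100 in Lab space
    return np.clip(lab2rgb(lab), 0.0, 1.0)

# Toy example with random data, just to show the shapes involved
rng = np.random.default_rng(1)
rgb = rng.random((64, 64, 3)) * 0.2
lum = rng.random((64, 64)) * 0.8
combined = lrgb_combine(rgb, lum)
print(combined.shape)   # (64, 64, 3)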


In order to keep track of which layer is which, double-click each layer to change its name.  I name Background "rgb" and Layer 1 "lum."  Note that for the Background layer, a window will pop up to change the name, whereas with any other layer, you can change it right there in the Layers box.

Finally, let's crop out the areas that don't exist in all four images.  Use the Rectangular Marquee tool.  If you don't like the box you drew, you can move it around, or you can press CTRL+D to deselect and start over.  Once you're satisfied, go to Image and click Crop.  You can crop multiple times.

All right, now we're ready to get to work!

Color Correction

My Astronomik LRGB filters advertise that they are already white balanced in sRGB space, which is the most commonly used color space for computers and printing.  As you can see in the above image, it does look pretty good.  Sometimes, though, I'll still get an undesirable shade of something, especially if I imaged with one filter in an area of the sky with more light pollution than with another filter later, or if it was lower on the horizon later, or if I stretched one more than the others, etc.  

There are many ways to color balance images, and this is where a lot of the art comes into astrophotography.  While we can do photometry on DSOs all day long and say precisely how much light is emitted at which wavelengths, it's hard to say what it would really look like if we were close enough to it to get enough light hitting our eyes to get color information like we do in the daytime.  And different camera sensors respond differently to different wavelengths, no matter how well-balanced the filters are - just check out the quantum efficiency graph on my camera up above.  

One way is to use an app like eXcalibrator, which plate-solves the image and finds a G2V star in it - the same spectral type as our sun - to use as a white reference.  I read about eXcalibrator in a recent Sky and Telescope issue, but my attempts to use it today were ultimately unsuccessful.  No matter how "nice" of data I gave it - stretched, cropped, etc - it could not plate solve, no matter what solving method or dataset I used.  (I also tried untouched stacked frames out of DSS, but it was still a no-go.)  I couldn't find any help online, so I gave up.  In PixInsight, you can use a whole galaxy as the white reference, since theoretically all of the starlight in a given galaxy will average out to white.

And finally, you can compare against other images on the internet.  You have to do that wisely, though, keeping in mind that many images of things like nebulae are taken in narrowband instead of wideband LRGB color, many people like to take artistic freedom with their images, and many images from ESA or NASA have IR, X-ray, or UV data added in as additional colors.  I usually "just wing it" in this manner.

Select the RGB layer, then click the "Create new adjustment layer" button at the bottom of the Layers box, which looks like a circle that is half black, half white.  Select Levels, and a small box will appear on the right side of the image with the Levels controls.  Now, keep in mind, this is only acting on the RGB layer, not the luminance layer, and changes are not cumulative - that is to say, if you add additional adjustment layers, things you did in another adjustment layer won't show up in the histogram in the new layer; essentially, they're independent.  You can move the black slider a bit into the peak, since we don't really need color data in the background of the image.  Don't go too far though, or you'll lose color information in the parts of the image you care about.

Add another adjustment layer to the RGB layer of Curves.  This is where I do most of my color correction.  Mine's pretty close, so I only make minor adjustments.  Remember that red + green = yellow, so you'll need to bump up both of those if you want more yellow.


Click the RGB layer and add another adjustment layer for Hue/Saturation, and turn up your saturation some.  This can be done before Curves as well to help see the color adjustments better.

Additional Adjustments


Now it's time to play with the luminance channel.  Add a Curves layer to the luminance layer, and cut out some of the background by moving the black point in.  Don't move it in too far though, or you'll lose the dimmer parts of the image, such as the disruption cloud around NGC 5195 here.  You also don't want to make the background pure black - it will look unnatural.  

Feel free to play around with the many other types of adjustments that Photoshop has to offer.  These are the main ones I use.

Now, I am only so good with layers.  I'm still getting them all figured out, but this is about where my knowledge in that realm ends.  But there are still a few tweaks I'd like to make, particularly to the noise, and I want to drop the background a bit more.  There are some good Carboni tools to use, plus Camera Raw, but I need a single layer to operate most of that stuff on.  So I save out a PSD file here, a Photoshop file, so I can have all the layers for editing later.  Then, I go to Layer -> Flatten Image, which collapses all of the layers down.  

First, I go to Camera Raw by hitting CTRL+Shift+A, and I adjust the Basic settings to my liking.  I've found if I'm so close to being color balanced, but not quite there, I adjust the color temperature and tint, and I can hit right where I want to be.  Dehaze is another particularly powerful tool, boosting sharpness.  I also go over to the Detail tab (the icon with the little mountains) and do some denoising on the color and luminance sliders.  However, running the Deep Space Noise Reduction tool in the Carboni Tools is a little smarter - it creates a layer mask so that it only denoises the background, since usually the DSO has enough signal-to-noise that it's not so noisy.  This way, you don't lose detail in the DSO.


I see that it's a little pink still, so I hit OK on Camera Raw and hit CTRL+M to open Curves to adjust it a bit more.  

Finally, when I image on my club's achromatic refractor, I get blue halos around my stars.  Carboni's tools have a blue halo reducer, which works better with my DSLR images than with my astro camera images, but it helps some.  The halos aren't in this image because I used my apochromatic refractor.  And I might run a few other tools, like Enhance DSO/Reduce Stars or Increase Star Color, just to see what they do.  Sometimes they don't work very well, and I have to back up - you can't just undo one of the tools, because there are multiple steps involved in each action, so you have to click the History button just to the left of the histogram and back up there.  (Again, Window -> History if it's not there.)

Once I'm mostly satisfied, I save out a TIFF and a JPG.  I say mostly because art is never finished, only abandoned.

Conclusion


Here's a new version of the finished product (which I use Lightroom to watermark, since Photoshop astoundingly doesn't have a watermarking tool).



So we've come a long way from the messy-looking black-and-white raw frame!  And this only scratches the surface of what you can do with your images.  I'm sure I will be re-visiting this one once I get a better handle on PixInsight.  But I'd still say it looks great!  You can see that my synthetic flats have a few issues, however - nothing really beats a good set of actual flats taken on your telescope, especially when you don't have light pollution to contend with, like in this dataset.

Feel free to ask questions in the comment section, or you can message me on Facebook on my AstronoMolly Images page.



Thursday, July 26, 2018

My First Date with PixInsight

There are a variety of image processing tools out there - some specific to astrophotography, some with a wider audience that are nonetheless useful.  One program that seems to be the gravity well into which all astrophotographers are eventually sucked is PixInsight.

I've been planning on getting into it for a while, but wanted to exhaust Photoshop first, or at least start hankering for some more powerful tools.  I've been able to do quite well with Photoshop.  But I recently joined the cast of The Astro Imaging Channel after I gave a talk on it, and the other members have been egging me on to take the plunge.  I've been very busy lately (as evidenced by the slew of log entries from the week before last), but I finally have a bit of time on my hands to check out the trial version.

I'm not entirely sure why I'm bothering with the trial version - there's almost no question that I'm going to be buying it anyway - but something just didn't feel right about sinking $270 into a piece of software without at least trying its fully-featured 45-day trial first.

PixInsight is only a few steps above straight-up code.  Even the tool names are probably their actual C++ class names, given their structure - DynamicBackgroundExtraction (camel case, no spaces), the lack of dashes, symbols, and numbers, etc.  I code in C, C++ (a bit), Python, Fortran (somewhat), Matlab, and Mathematica (if you can call Mathematica coding), so I recognize it when I see it!  You can even dive into the source code of the instance of the tool you're using if you want to tweak it at a very base level.

But it does have a very nice GUI (graphical user interface).  It's just not quite as "fully automatic" as Photoshop.  But this lets you have a lot more control over what happens to your images, and opens up some very powerful functionality.

This post isn't meant to be a tutorial from me on how to use PixInsight, but I wanted to record my first interaction with PixInsight - and hopefully, people who have used it before will get a kick out of seeing me make the same mistakes they did (or worse!), and people who haven't used it yet will laugh anyway from watching the mayhem.

I decided to follow this LRGB processing example video on PixInsight's own website.  Next, I needed to choose a dataset to be my first to process.  Immediately my Rosette Nebula came to mind, but it's so good that I wasn't sure how much PixInsight would improve on it in my first go, so I decided against it.  I tried to think of datasets where the data were good but the final image turned out less than I'd wanted - M31 Andromeda Galaxy #13 fit the bill.  But since the video covers LRGB data (as opposed to one-shot color, like from my DSLR), I decided my first dataset would be from my ZWO camera, because I wanted my first image to be awesome and to follow the same process as the video.  But which to choose?  I haven't done a whole lot with that camera yet.  I wanted it to be a set where I'd already produced 32-bit TIFFs from DeepSkyStacker, as opposed to 16-bit TIFFs (I usually do 32-bit nowadays, but sometimes I don't).  I was originally thinking that my "first light" image with my ZWO camera, M42 Orion Nebula, would be a good choice - the image I got out of DSS and Photoshop for it turned out great - but it has the added complication of a set of 5-second luminance frames I took to properly expose the bright inner core, which gets saturated out in the longer 60-second luminance images I took.  I finally settled on M81-82 #5, Bode's Galaxy and the Cigar Galaxy, since it came out really sharp and I think my raw data is good, but I couldn't quite get the colors to work, and I had to really beat the background light into submission.  Plus, the video features processing a galaxy, so it made more sense.

What I could do with DeepSkyStacker and Photoshop.  The colors aren't right, and I think I can pull out the sharpness that's there in the data, but ends up getting kind of obscured.  Maybe I can even get more of the red jets coming out of M82 to appear.
Details:
Date: 10 & 15 March 2018
Object: M81 & M82
Camera: ZWO ASI1600MM Pro
Telescope: Vixen na140ssf
Accessories: Astronomik LRGB Type 2c 1.25" filters
Mount: Losmandy Gemini II
Guide scope: Celestron 102mm
Guide camera: QHY5
Subframes: L: 18x180s
   R: 11x180s
   G: 18x180s
   B: 14x180s
   Total time: 3h3m
Gain/ISO: Unity (139)
Stacking program: DeepSkyStacker
Stacking method (lights): Auto-Adaptive Weighted Average (x5)
Darks: 20
Biases: 20
Flats: 0
Temperature: -30C (chip), 29F & 35F (ambient)

1:45 into the video, I immediately hit my first little roadblock: opening a tool.  I single-clicked the LRGBCombination tool, and nothing happened.  I tried clicking and dragging, but got a weird mini-box instead, not the one showing on the screen.  Finally I tried double-clicking, and got the box with the options.  Phew!  Okay.  Got this.

It first has me combine just the RGB.  L comes later, I guess.

Next, I get acquainted with the ScreenTransferFunction tool, which stretches the visualization of the image on the screen so you can see it without applying anything to the actual image.  This is done with the little radioactive symbol button (hee hee).
And now I have a nice green image.  But so does the video, so I'm not freaked out yet.
This is because the color channels are not calibrated.

After unlinking the RGB channels in the screen transfer function and re-automatically stretching the image, I see that I have some pretty severe gradients.  This comes as zero surprise to me since I image through a 1.25" barrel (because I was not ready to buy 2-inch filters and a 2-inch filter wheel earlier this year).  The red and green are on one side and the blue on the other because I did a meridian flip.

Luckily, the galaxy image in the video tutorial also has gradients (albeit barely visible ones), so I am about to be introduced to the infamous DynamicBackgroundExtraction tool.  Very excited.  In Photoshop, I've had to make my own synthetic flats, which works for galaxies all right but less well for nebulae.  There's a tool for Photoshop to handle gradients, but it costs $50 (called Gradient Xterminator).  Let's see how this one goes.

Mmmmmkay, so I plotted my own points because the tutorial video loaded a point model they'd already created (thanks guys...super helpful...), aaaaaand it didn't really do much besides add more gradients.  Based on a presentation I saw on The Astro Imaging Channel this past Sunday, I think there's a way to auto-generate a grid of the points, and then you just adjust them to avoid having them sit on stars.  Let me go watch that part of the video real quick...

Ah, okay.  Just needed to click on the image.  Then the previously grayed-out boxes in the Sample Generation dropdown allowed me to edit them.  Then I could click Generate.  Woot!  It generates several sample points, avoiding my two galaxies and the center of the image.  Now I go check all of the points to make sure they're not on top of stars.  There's a helpful zoom box in the tool window.

Easy-peasy.

All right.  Points are checked.  Now to execute.  Some commands flash on the terminal screen aaaand...

...gradients still suck.  This process may need some refinement.  But like, the gradients in my image are way worse than the ones in the tutorial video, so it may need a heavier hand.  

So I go to the Googly machines and quickly find a DBE tutorial on another severely-gradient-ed image, but from a DSLR.  The process seems to work better this time.

Just the gradients, stretched for viewing (original is too dark to see)

Auto-stretched image with DBE applied

So it's better, but now my other problem is evident: some rotation between the frames.  I was indeed using a German equatorial mount, but the data were taken over two nights, and I don't have a marked "home" position for the camera on this telescope, since it's the club's 5-inch Vixen refractor and I don't want to put my own marks on it.  I thought about cropping it, but then my luminance image, which I haven't applied yet, won't line up.  All right, let me go crop these to the same size in Photoshop, and then I'll start over.  Be right back...

Fun fact, layering 32-bit images (so I can see where the overlaps are) is really hard on RAM!  Good thing I have 16 GB...
The rest of that 82% is a zillion Chrome tabs.


I used a crop action I created (follow this video for a quick guide) on each of the frames so that they were each cropped in the exact same place.  The one downside is that Photoshop can't save out 32-bit images, so my result images are 16-bit.  I guess I'll just have to live with that for now.

And so I start over with the LRGB combination and the dynamic background extraction.  Good practice, right?

All right let's do thiiiiiis

So it's better, but still pretty splotchy and vignetted.  But I think it's called "Dynamic" for a reason.
Just the gradients

 Splotchification

Earlier, following the Light Vortex tutorial, I had saved a new instance of that DBE process with all of my adjusted points to the workspace, and I apply that to this new image and double-check the point locations.  I decrease the tolerance from 2.5 to 0.5, and then apply.

The splotchy background

 The new image

Still pretty splotchy.  I begin to wonder whether it might be more beneficial to apply DBE to each channel individually.  But let me press on and see what happens first - remember we have a Screen Transfer Function applied, basically auto-stretching what the image looks like (without actually changing it), and as you can see, it's over-doing it.  So these background gradients may not show up much in the final image.

Well I went to work and came home, and then read further in the Light Vortex tutorial - turns out that using Division instead of Subtraction mode is better for gradients caused by vignetting.  So I went back to the RGB combined image, chose Division instead of Subtraction, and ran it again.  Then I put that point model onto the new image, and ran it again, but with Subtraction, since the vignetting was mostly gone but it was still splotchy.  Now it's less splotchy and less vignetted.  This might be good enough.  We shall see.
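The reason Division helps here makes sense when you think about what vignetting is: the optics attenuate a percentage of the light, so it's a multiplicative effect, while sky glow just adds counts on top.  Here's a little arithmetic sketch of the difference - my own illustration of the idea, not how PixInsight implements it:

import numpy as np

sky = 100.0              # uniform sky brightness
star = 1000.0            # a star's brightness above the sky
vignette_corner = 0.7    # corners only receive 70% of the light (multiplicative)

# What the camera records for a star in the corner vs. the center
corner_pixel = (sky + star) * vignette_corner    # 770
center_pixel = (sky + star) * 1.0                # 1100

# The background model samples only the sky, so it sees the vignetted sky
bg_corner, bg_center = sky * vignette_corner, sky  # 70 and 100

# Subtraction removes the sky but leaves the vignetting imprint on the star
sub_corner = corner_pixel - bg_corner + sky        # 800  (star looks dimmer in the corner)
sub_center = center_pixel - bg_center + sky        # 1100

# Division undoes the multiplicative falloff, like a flat field does
div_corner = corner_pixel / bg_corner * sky        # 1100
div_center = center_pixel / bg_center * sky        # 1100

print(sub_corner, sub_center)
print(div_corner, div_center)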


Next in the YouTube video from PixInsight is background calibration and color calibration.  Again I'm not doing a tutorial here, so I'll just say I did it, and here's the result.


It doesn't look that much different - this is probably because the Astronomik filters I have advertise that they are already calibrated in sRGB space, which is the most frequently-used color space on today's computers.  But it's good to know.  Now onto Part 2 of the video tutorial.

Looks like it's time to deal with the luminance.  Here's pre-DBE (screen stretch applied):

And I can use the DBE process icon I saved from the RGB process to have the points already where they need to be.

The ScreenTransferFunction went a little crazy on the result, so I turned it down a bit.  You can see some vignetting still, but it seems less severe.  What's interesting is that the area to the right of M82 looks darker, which is the same area where the green is shining through in the RGB image.  I wonder what's happening there.  It doesn't look remotely like a dust mote (you can see a few here, and they are much more perfectly round).  I applied DBE a second time with Subtraction instead, but don't see much improvement, and it might be splotchier.  Pressing on, for now...


So recall, we've been viewing all of these images with a screen stretch - just modifying how the image looks so we can see its data.  It's really all still linear data compressed into an itty bitty peak far on the left, which you can see in the lower histogram in the image above.  

After that is done for both the L and RGB images, the video tutorial has you run an HDRWaveletTransform to help mitigate saturation.  The video is a little out-of-date; the function is now called HDRMultiscaleTransform, and has a few extra options.  Its job is also to help bring out contrast in the bright areas.  I choose 5 for the number of layers, and turn on the Lightness Mask option, which prevents the contrast enhancement from dimming the very dim areas of the image.

WHOA.


I knew this data was unusually sharp, but holy cow this just brings that right out!  And that's the RGB image.  The luminance is likewise awesome.

All righty, time to combine these puppies!


All right!  Now we're getting somewhere.  It's kind of dark, so hopefully there's a step in a bit to play with the histogram some more.  Plus, I'll need to increase saturation.

Hey look at that, saturation is next in the video.
I learned quickly here that steps are cumulative.  When I adjusted the Saturation slider in the LRGBCombination window, it didn't do much, so I pulled it down further (smaller numbers mean more saturation here).  I did this a couple times, and then it became way over-saturated!  When I pulled the slider back to the right, it got worse!  So I had to undo the last few steps.  It's guess-and-check, I suppose.


Hmm, colors are starting to get a little weird.  I'll need to adjust those in a bit.
I'll note here real quick the blue halos - this is a result of the fact that my club's 5-inch Vixen refractor is achromatic (well, "neo-achromatic," which I suppose is better?), which means that the colors are not fully corrected in the optics.  In apochromatic refractors, additional lenses are used to place the focal points of all three color components - red, green, and blue - in the exact same spot, or almost.  In achromatic refractors, usually only the red and the green have been aligned, with the blue still left slightly out-of-focus.  Hence the blue halos.  Carboni's tools for Photoshop has a blue halo remover, which works better on my DSLR images than my astro camera images, but helps.  Hopefully PixInsight has something like that (I'm sure it does).

Ah good, next is additional stretching in the histogram tool.  I seize my opportunity to make color adjustments by using the histogram.  In this tool I can't just grab the curve anywhere and bend it to my will, but between moving the black and gray points, I get something acceptable.  Of course, the splotchy background makes a rude appearance, but hopefully I'll have another chance to deal with that.  

All right.  Getting closer.  The colors probably aren't right, but it looks acceptable to me.  Even managed to get a bit of the red cloud in M82, which I haven't been able to before!  And the detail is awesome.

Next is noise reduction.  The tutorial has me using the SCNR tool on the problematic color channels.  I do green first, and some of the splotchiness reduces!  Next I try blue, and this actually removes the blue halos.  The two galaxies lose blue too, though, so I go back to the histogram and bump it up.  Now the colors look better and the blue halos are gone, but for some reason, the outer portions of M81 look kind of weird.  And I still have the red splotch left of M81.  I couldn't get rid of it with the SCNR tool because it pulled all the red out of the galaxies too.
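For reference, as I understand it, the "average neutral" flavor of SCNR essentially clamps the offending channel so that it can never exceed the average of the other two - which is also why running it on red drains red out of the galaxies.  Here's a rough sketch of that idea in Python (my reading of the method, not PixInsight's actual code):

import numpy as np

def scnr_average_neutral(rgb, channel=1, amount=1.0):
    # Clamp one channel (default green) so it never exceeds the average of the
    # other two channels; amount < 1 applies the correction only partially.
    rgb = rgb.astype(float).copy()
    others = [c for c in range(3) if c != channel]
    neutral = rgb[..., others].mean(axis=-1)
    limited = np.minimum(rgb[..., channel], neutral)
    rgb[..., channel] = (1 - amount) * rgb[..., channel] + amount * limited
    return rgb

# A "green splotch" pixel: lots of green, little red/blue
pixel = np.array([[[0.20, 0.60, 0.25]]])
print(scnr_average_neutral(pixel)[0, 0])   # green pulled down to (0.20 + 0.25) / 2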

And at some point, M82 got a little oversaturated in luminance.  

This is where the video tutorial ends.  Their data was obviously taken in a less light polluted area.  But I definitely see the potential of all of these awesome tools, once I figure out how to refine them.  On to more YouTube videos!

I'm going to end this one here for now.  But my efforts will continue!