I've been very busy lately and haven't had much time to keep messing with it, so it's been a while since I gave it another shot. But since it's been nearly a month since I've been able to do any deep-sky imaging, I'm itching for more space awesomeness, and decided to dedicate my Sunday afternoon to stacking and processing another dataset.
First, keeping track of all my image data.
As of yesterday, I finally completed a project I've been working on for a while: a master spreadsheet of all of my images. I have so many datasets now (208 deep sky and 99 lunar & planetary, to be exact!) that I can't keep track anymore! Each of my datasets has its own text file with all of the documentation about that dataset, but unless I'm sitting at my home computer or have my backup hard drive with me, I can't quickly access that info. I frequently find myself talking to the public or other astrophotographers and trying to remember when, where, and with what gear and settings I took a given image; or I'm out in the field trying to remember which targets need more data if my primary target for the evening isn't going to work out; or, like today, I want to process something in PixInsight but can't remember offhand which datasets might benefit from it. This spreadsheet solves all of those problems, and is easily accessible from my phone and my telescope-control tablet.
Green means "you should re-process this," and yellow means "need more data."
Scores are 1-6, where 1 is "Useless" and 6 is "I'd sell prints of this."
I'll note here that it took some real fancy equation-ing and a hidden dummy column to get Excel to sort the Messier-numbered items in natural sort order (M1, M8, M13, etc.) rather than lexicographic sort order (M1, M100, M101, etc.). At first, I tried an equation I found online that would let you grab certain characters in the text and put those in the cell. Using the MID equation, starting on character 2 and ending at the space, it would grab the number and put it in the cell. But then when I sorted, it would put all of the M objects at the top, followed by everything else in alphabetical order. The only way around it was to sort just the M objects based on that dummy column of numbers after sorting the whole sheet alphabetically. Too much work. After consulting with my dad and my sister, both accountants and thus Excel whizzes, I managed to cobble together a super-equation of equations (a nested IF statement) that handled all cases - it put a 0 in the dummy column for all objects starting with A-L, the Messier number for M-plus-a-digit objects, and a 1000 for Ma-Z. Now all I have to do is sort by the dummy column and then by object name, and it's perfect!
Basically, it's a big IF statement that first checks whether the first character in the cell is less than M (so A-L), and if true, makes the value of the cell 0. If false, it checks whether the second character is a letter (the ISERR part), which would be false for the Messier objects. If true (and the first letter still isn't A-L), it puts a 1000 in the cell. If false, then the second character is a number, and it grabs said number and puts it in the cell (the MID equation). That was a fun exercise! I only start with the catalog name for Messier objects, since there are too many NGCs to use those for sorting (although I do put the NGC numbers in parentheses after the target name for reference).
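For anyone who'd rather script it than wrestle with Excel, here's a little Python sketch of the same dummy-column logic (my reconstruction of the idea, not the literal spreadsheet formula):

```python
def sort_key(name):
    """Dummy-column value mirroring the spreadsheet's nested IF:
    0 for objects starting A-L, the Messier number for 'M<digits> ...',
    and 1000 for everything else in Ma-Z."""
    if name[0].upper() < "M":
        return 0
    if name[0].upper() == "M" and len(name) > 1 and name[1].isdigit():
        # Grab the digits up to the first space (the MID-equation part)
        return int(name[1:].split()[0])
    return 1000

targets = ["M100 (NGC 4321)", "M8 Lagoon", "M1 Crab", "NGC 891", "Andromeda"]
targets.sort(key=lambda t: (sort_key(t), t))
# Messier objects now sort numerically (M1, M8, M100),
# with A-L names before them and Ma-Z names after
```

Sorting by the (dummy value, name) pair reproduces the two-level sort-by-dummy-column-then-by-object-name trick from the spreadsheet.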
All right, now let's pick a dataset.
I scrolled through my sheet looking for all the green-highlighted items, which are ones I feel are possible candidates for re-processing. I tagged these as I went through updating the documentation files (another big project I only recently finished) and as I filled in this spreadsheet, since I give a score to each dataset. When I'm deciding what's worth re-processing, I look at the final image I was able to get out of Photoshop or DeepSkyStacker (however long ago it was I last processed it), and then I go look at the raw data to see if there are issues like clouds and bad tracking that make the raw data not likely to yield a much better result.
I decided to go with M20 Trifid Nebula #6, a dataset I took back in July when I was trying to get my Celestron 8-inch Schmidt-Cassegrain to work on my Celestron AVX mount (I'm still trying to make it work!). You can follow that particular adventure here (July 12th) and here (July 13th). Guiding wasn't good enough for much longer than 3-minute frames, primarily because I had the added weight of my Borg 76ED on top of the C8, but I was using my rather sensitive ZWO ASI1600MM Pro camera, and figured I could get something good out of this reasonably bright nebula. I stacked it in DeepSkyStacker and processed in Photoshop, but got a somewhat disappointing result for my 2 hours worth of data. I've done better with less on my DSLR.
Date: 12 July 2018 (RGB), 13 July 2018 (L)
Object: M20 Trifid Nebula
Camera: ZWO ASI1600MM Pro
Telescope: Celestron C8
Accessories: f/6.3 focal reducer, Astronomik LRGB Type 2c 1.25" filters
Mount: Celestron AVX
Guide scope: Borg 76ED
Guide camera: QHY5
Subframes: L: 17x180s (51m)
R: 6x180s (18m)
G: 8x180s (24m)
B: 9x180s (27m)
Total: 40x180s, 2h
Stacking program: DeepSkyStacker 3.3.2
Stacking method (lights): Auto-Adaptive Weighted Average
Post-processing program: Photoshop CC 2018
Temperature: -20C (chip)
I think I could do better in PixInsight. Let's find out!
DeepSkyStacker will create master darks, flats, and biases for you, apply them, and register and stack all of your images in a single step (at least, for one color channel at a time, or for a one-shot color camera). While this is handy, you can get a better result if you do some of the process yourself so that you can tweak settings along the way. I'm going to follow this wonderful Light Vortex Astronomy tutorial that I used recently on one of my images of the Andromeda Galaxy.
For a rundown on what the different kinds of calibrations are, see this post.
First, I made a new PixInsight folder inside my M20 Trifid #6 folder for holding the files I'll create along the way. I already have my images split up into their "lights," "darks," and "biases" folders (I don't have a set of flats yet on my C8 with my ZWO camera because I haven't quite settled on the orientation I'll use so I can mark it with silver sharpie), so inside the PixInsight folder, I created four more subfolders as directed by the tutorial - "lights_cal," "lights_cal_cc," "lights_cal_cc_reg," and "masters." Cal is for calibrated, cal_cc is for cosmetically corrected, cal_cc_reg is for registered, and masters is for the master darks and biases I'll create along the way. I already have a master dark and bias from DSS, but maybe PixInsight's will be better.
So rather than repeat back to you all of the steps that the tutorial leads me through, I'll just make some notes along the way.
First I made a master superbias and master dark using the ImageIntegration process. I have 20 bias frames (I really ought to do more), so I picked the "Winsorized Sigma Clipping" rejection algorithm, as recommended by the tutorial. After setting the other recommended settings, PixInsight chewed through my biases. I applied a screen stretch (by clicking the screen + radioactive symbol icon in the upper right, "STF AutoStretch") so you can see what the master bias looks like.
You can see that on my ZWO ASI1600MM Pro, it's a little hotter in the middle than the top and bottom of the chip. Interesting.
As you can see, there is still noise in this master bias frame, and since noise is random, it won't help us do what we want to do, which is to remove the baseline variation in pixel response levels from the light images. This is where making the master superbias comes in - according to the tutorial, it makes it more as if I had stacked thousands of bias frames, averaging out all of the noise. This is done with the Superbias process.
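The reason averaging helps is that random read noise shrinks roughly with the square root of the number of frames stacked. A quick toy simulation (made-up bias level and noise numbers, nothing from my actual camera) shows the effect:

```python
import random
import statistics

random.seed(42)
N_FRAMES, N_PIXELS = 64, 1000
BIAS_LEVEL, READ_NOISE = 100.0, 5.0  # hypothetical ADU values

# Simulate N bias frames: a fixed offset plus random read noise per pixel
frames = [[random.gauss(BIAS_LEVEL, READ_NOISE) for _ in range(N_PIXELS)]
          for _ in range(N_FRAMES)]

# Master bias = per-pixel average across all frames
master = [statistics.mean(f[i] for f in frames) for i in range(N_PIXELS)]

single_noise = statistics.stdev(frames[0])  # noise in one frame, ~5 ADU
master_noise = statistics.stdev(master)     # ~5 / sqrt(64), much smaller
```

Stacking 64 frames cuts the random noise by about a factor of 8, which is why the tutorial's "as if you stacked thousands" superbias model gives such a clean result.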
The result looked weird when I screen stretched it at first, but then the tutorial mentioned stretching it in 24-bit mode (the STF icon with the 24 on it instead of the nuclear symbol in the upper right corner). Remember that applying a screen stretch doesn't change anything about the image, it just allows us to see the data that is actually compressed into a very narrow portion of the histogram. The image would look black without it. I saved this master superbias in 32-bit FIT format, and then closed all the images and processes.
Now that the master superbias is created, I need to use it to calibrate the darks before turning them into a master dark. This removes the bias signal from the dark frames, since it's present not only in the lights, but in the darks as well. This is done with the ImageCalibration process. Since I'm keeping my PixInsight processing files separate from the originals for now, I added a "darks_cal" folder to the PixInsight folder I made earlier to hold the calibrated dark frames.
With each of my 20 darks calibrated with the master superbias, now I can make a master dark with the ImageIntegration process. On the recommendation of the Light Vortex tutorial, all settings remain the same as for making the master bias, except for the Rejection Algorithm type if you have a different number of darks than biases. In my case, I also only have 20 darks at the temperature, gain, exposure, and binning settings I used for this dataset, so I leave it at Winsorized Sigma Clipping.
I applied a screen stretch on the result, and wowza there's a lot of dark current! It also looks like the pixel rejection rejected a lot of high pixels (the leftmost image).
It was bad enough that I went and looked at a single dark frame (in AvisFV) to make sure I had the cooler turned on! I did. Maybe it's just like super-duper stretched. You can see the hot corners in a single frame too. This camera is supposed to have almost no amp glow, but alas, it's there when you dig deep enough.
Anyway, pressing on. Saved out the master dark as a 32-bit FIT.
Next in the tutorial is making a master flat, but since I don't have flats with this set, I'm skipping that step.
Now it's time to calibrate the lights - the actual image data.
With my master superbias and master dark in hand, I use the ImageCalibration process with the settings recommended by the tutorial. I do this on all of my LRGB data files at once, and save them in the "lights_cal" folder from earlier.
The next step is to remove hot and cold pixels using the CosmeticCorrection process. I zoomed in on one of the luminance channel frames and scanned around for hot pixels, but I didn't really see many, so I left the Sigma value somewhere in the middle. I left the cold pixel setting off.
Upon zooming in on a pre-cosmetic corrected and post-cosmetic corrected image, there's not much difference. I don't really get many hot pixels with this CMOS camera.
Previously, I haven't used the SubframeSelector process for picking out the best frames to stack (something else DSS does automatically). I'm going to give it a try here, but only with the luminance frames, since I don't really have enough of the RGB frames to be getting rid of any that aren't blatantly bad from tracking (a step I completed right after I originally downloaded the data).
Rather than being a process, SubframeSelector is actually a script, and can be found in the Scripts menu instead of the Processes menu, under Batch Processing. I add just the luminance frames from among all of the frames I've calibrated and cosmetic corrected.
In the SystemParameters section, I calculate the arcsec/pixel resolution using the super-handy Astronomy Tools FOV calculator, using the Imaging Mode. It has a lot of telescopes, cameras, and reducers/Barlows in the system already, but you can also define your own parameters. For my Celestron C8, ZWO ASI1600MM Pro, and 0.63x focal reducer, my resolution is 0.61 arcsec/px.
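If you'd rather skip the calculator, the standard plate-scale formula gives the same answer: resolution = 206.265 x pixel size (in microns) / focal length (in mm). A quick check with my numbers:

```python
def pixel_scale(pixel_um, focal_mm):
    """Plate scale in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

# C8 (2032 mm) with the 0.63x reducer, ASI1600MM Pro pixels are 3.8 um
focal = 2032 * 0.63  # ~1280 mm
print(round(pixel_scale(3.8, focal), 2))  # 0.61
```

That matches the FOV calculator's 0.61 arcsec/px.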
For the camera gain, I opened up the FITS data for one of my light frames in AvisFV (using Ctrl+F), and went down to the "EGAIN" value, which is electrons per ADU. For me, it's 1.0011. This is basically the amount of signal (electrons, which convert to voltage when read out) per digitized brightness value. My camera is 12-bit, so the digitized brightness comes out on a scale of 0 to 2^12 - 1, or 4,095. Since I ran the camera at gain 139, which is unity for this camera according to the camera driver, my electrons per ADU should be very nearly 1, which it is. Unfortunately, the calculation is not simple for DSLRs, and I've had a hard time finding that data anywhere. I don't know which ISO value is unity gain for my camera, although there is a way to find this out by experimentation. For the bit depth, even though my camera takes images as 12-bit, they're saved out as 16-bit, so I leave that set at 16-bit (this is common).
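As I understand it, the 12-bit data gets scaled up by a factor of 16 when it's written into the 16-bit files (I believe ZWO's driver does a simple left shift, but treat that as my assumption). So converting a stored pixel value back to electrons looks roughly like this sketch:

```python
def electrons_from_adu16(adu16, egain=1.0011, bit_scale=16):
    """Convert a pixel value stored in a 16-bit FITS back to electrons.
    Assumes the 12-bit camera data was scaled by 16 when saved as 16-bit
    (my assumption), and that EGAIN is electrons per native 12-bit ADU."""
    return (adu16 / bit_scale) * egain

# A pixel reading 16000 in the 16-bit file is 1000 native ADU, ~1001 e-
print(round(electrons_from_adu16(16000), 1))  # 1001.1
```

At unity gain the native ADU and electron counts are nearly identical, which is exactly what the EGAIN of 1.0011 says.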
After entering these values, I click Measure, and let PixInsight chew through my 17 luminance frames. It calculated about 1,000-2,000 stars in each image, which sounds about right.
Now we're gonna apply some SCIENCE!!
As far as SNRWeight is concerned, higher is better. SNR is signal-to-noise ratio, and SNRWeight values are relative to the images you're measuring. For FWHM (full-width half-max, a measure of star size), smaller is better. Now, I do have some serious coma going on, which I'm going to crop out anyway, but these FWHM values actually aren't bad. Sticking to the adage "garbage in, garbage out," I decided to sacrifice a few frames for a better end-product, hopefully. I removed the worst three in SNRWeight, and two more with poor FWHM scores by clicking on those points on the graph. Clicking on the frames in one graph removes them for all (the little x's).
Full-width at half-maximum star size plot
There are several other measures you can exclude frames with, but I'm choosing to care most about signal-to-noise ratio and full-width half-max of stars. You'll notice that two of the X's in the SNRWeight plot actually have good SNRWeights, but they had bad FWHMs. As it happened, the points with bad SNRWeights also had mediocre FWHMs. It just depends.
If you have a lot of exposures and don't want to click every dot to exclude them, you can enter approval expressions like "FWHM < 6 && Eccentricity <= 0.7" to keep only frames with FWHMs below 6 and eccentricities of 0.7 or less. Since I have only a few frames, I just did it by hand.
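The approval expression is just a per-frame boolean test; its effect is something like this little Python sketch (with made-up measurements):

```python
frames = [
    {"name": "L01", "FWHM": 4.1, "Eccentricity": 0.55},
    {"name": "L02", "FWHM": 6.8, "Eccentricity": 0.50},  # fat stars: rejected
    {"name": "L03", "FWHM": 4.4, "Eccentricity": 0.81},  # elongated: rejected
]

# Equivalent of the approval expression "FWHM < 6 && Eccentricity <= 0.7"
approved = [f for f in frames if f["FWHM"] < 6 and f["Eccentricity"] <= 0.7]
print([f["name"] for f in approved])  # ['L01']
```

Any frame failing either condition gets excluded, which is why one bad metric is enough to toss a frame.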
Finally, once frames are excluded, you can weight the better frames over the worse frames that you're keeping. The tutorial has an expression pre-built that you can use, and you just have to change some of the values to be the minima and maxima of your particular dataset. You can sort the table to get these values quickly by using the "Sort table by" drop-down to the right of the table. I'll give it a shot. The expression for me is:
(10*(1-(FWHM-3.747)/(4.531-3.747)) + 10*(1-(Eccentricity-0.4382)/(0.6410-0.4382)) + 30*(SNRWeight-10.15)/(11.35-10.15))+50
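To sanity-check the expression, here it is as a Python function with my dataset's minima and maxima plugged in - the best possible frame scores 100 and the worst scores 50:

```python
def weight(fwhm, ecc, snr,
           fwhm_min=3.747, fwhm_max=4.531,
           ecc_min=0.4382, ecc_max=0.6410,
           snr_min=10.15, snr_max=11.35):
    """Subframe weight from the tutorial's expression, using my dataset's
    minima/maxima. FWHM and eccentricity each contribute up to 10 points
    (lower is better), SNRWeight up to 30 (higher is better), plus a base
    of 50 so no kept frame is weighted near zero."""
    return (10 * (1 - (fwhm - fwhm_min) / (fwhm_max - fwhm_min))
            + 10 * (1 - (ecc - ecc_min) / (ecc_max - ecc_min))
            + 30 * (snr - snr_min) / (snr_max - snr_min)) + 50

print(weight(3.747, 0.4382, 11.35))  # best frame -> 100.0
print(weight(4.531, 0.6410, 10.15))  # worst frame -> 50.0
```

The 10/10/30 split is what makes SNR dominate the ranking while star shape still nudges things around.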
The tutorial instructed me how to set up the output settings so that we can use the weights during integration.
You can see even with just the Subframe Selector how much more power and control you have over the stacking process than something like DSS. It's a lot of extra legwork, but can prove very helpful, or so I'm hoping. Someday, when I can spend much more time acquiring subframes, this will be even more helpful.
Be sure to remember which frame had the highest score (weight) - you'll want to use that as your reference frame for registration. I forgot, so I had to re-measure my luminance frames (typically your highest-scoring frame will be luminance).
Looking ahead to Registration, in order to utilize the weights for the L frames, I think I'm going to need them for the RGB frames too, so I'm re-running the SubframeSelector script on just the RGB frames now, but I'm not going to de-select any. I'll just give them weights.
Registration and Stacking
Okay, now I can register (align all the images to each other) and integrate (stack) them. The tutorial mentions some special things to do with one-shot color camera images and debayering, but since I have monochrome LRGB images, I get to skip those steps here. (I'll need to do them when using my DSLR images, though.)
First, I register and align all the frames using the Star Alignment process. I used my highest-weighted luminance frame from SubframeSelector as my reference frame, and I enabled Generate Drizzle Data because I want to try drizzling these for finer detail and higher resolution.
With the alignment complete, the tutorial recommends applying a new LocalNormalization process, which helps to increase SNR and make for cleaner-looking stacked images. I kept the default settings, used the same reference frame as for registration (luminance frame #23), and let it rip.
This is a very CPU-heavy process, using 80-100% of my CPU time throughout.
I'm sure you're curious to see what the image looks like now - I'm sorry to disappoint, but it still looks about the same. The fun part is coming later!
Time to stack now using the ImageIntegration process. Now, since I have LRGB data, I will run this process 4 times - once for each channel, LRGB. I added in my calibrated, cosmetically corrected, weighted, and registered light frames, as well as the files PixInsight generated for drizzle and local normalization.
I did the blue frames first because they're listed first alphabetically. The stacking process went quickly, and I got this result:
Wow uh...okay. That doesn't look quite right. I went and checked one of my blue subframes to make sure nothing went wrong in an earlier step, but it looks fine.
I double-checked that each of my settings matched the tutorial's, and I opened up each blue frame to make sure they were all OK. Things all looked fine. I tried "Median" for the combination mode in the Image Integration section, but it came out the same. Then I wondered if maybe something was wrong with the local normalization files? So I changed the "Normalization" parameter in the Image Integration section to "Additive," which is what the tutorial says one should do if they didn't do local normalization.
All right, that seems to have fixed it. Weird. So here's the stacked blue frame:
Now, it's here that the tutorial instructs you to look carefully through the image for remaining cosmic rays, satellite trails, etc, and set the sigma values in Pixel Rejection (2) to eliminate them. I did find one cosmic ray, but it's at the edge, which I'm going to crop anyway.
Now, each run of ImageIntegration updates the drizzle files, so now that I have settings that work, I'm actually going to close all of the ImageIntegration stuff, including the master light for the blue filter I just made, and run DrizzleIntegration instead, as laid out by the tutorial. But first, I need to do this same process on my red, green, and luminance data as well. Be right back!
The local normalization issue with the weird fuzzing of the image appeared for the green and red channels as well, but not the luminance. My guess is that I need to run that process on each of the channels separately. They'll have different SNR levels, so that makes sense!
All right, so now moving over to DrizzleIntegration. This one was also a little CPU-intensive. After I ran each channel through, I saved out a 32-bit FITS. Since I applied 2x drizzle, they're now all twice the resolution they were before.
I applied an auto-screen-stretch to each of these in PixInsight to view them. You can see they all have some nasty vignetting and backgrounds that will need to be taken care of.
Now I have finally completed pre-processing!! ...I've been at this since 2 PM, and it's now almost 7 PM. Of course, this included the time it takes for me to type stuff in here and make all of these pretty screenshots, and occasional breaks to pet my napping kitties. I'm going to break for dinner, and then launch into processing!
Mmm...munching on the creamy chicken pesto pasta I made last night...okay back to processing! I can multi-task!
I've really liked Light Vortex Astronomy's tutorials, so I'm going to stick with them. They have a whole list of PixInsight tutorials here. I'm going to follow several in a row. But first, I'm going to follow a tutorial on "Preparing Monochrome Images for Colour-Combination and Further Post-Processing."
Since my color frames are already aligned, I'm skipping the first portion of the tutorial and going down to #2, "Cropping the Black Edges with DynamicCrop." I've got my three red, green, and blue images open in PixInsight, each with a screen stretch so I can see what I'm doing. I opened the luminance image too so that it would be cropped the same as the others when I combine them later. I opened a DynamicCrop process, decided where I wanted the crop to be using the red image (which had the largest rotation, and thus the worst black edges), and I actually brought the box all the way in to where I was going to crop it later anyway, to get rid of the vignetting and bad coma stars. Then I dragged the "New Instance" triangle icon over to a blank area of the workspace, which created a process that I renamed "Crop." After closing DynamicCrop, I clicked and dragged the Crop process I just created onto each of the other images, and it applied the exact same crop to each of them. I'm starting to get a feel for how PixInsight works, woo hoo! It's very handy being able to apply the same process instance to any number of images - any process can do this.
Dealing with Background Gradients
All right, now it's time for PixInsight's "secret sauce" - DynamicBackgroundExtraction! From what I hear, it's basically everyone's favorite tool. Or one of them, at least. I'm still following the same tutorial as in the previous step. Since I image in a fairly light-polluted area, I always have gradients. They become even more difficult to deal with when you're doing LRGB imaging, and half of your images are on the other side of the meridian, and are thus 180-degree flipped. The problem is even worse for me since I'm using 1.25" filters on a 4/3" sensor on my ZWO ASI1600MM Pro camera, so I always have vignetting, and haven't been able to get much in the way of flats. DBE is a pretty awesome and smart tool that also has a ton of ability to tweak to perfection. I'll start with luminance.
I started off with the tolerance value at 0.500 (under the Sample Generation tab), but it didn't give me enough in the corners, so I raised it to 1.000. I also followed the advice of the tutorial and raised the Default Sample Radius to 15 and the Samples Per Row to 15, and hit "Generate." This automatically generated sample points. Then I went through and checked each one to make sure it wasn't on a star. You do that by clicking on each point (you can zoom in the image to help with clicking), and looking at the super-zoomed-in view in the DBE box. It's inverted so you can see the stars more easily. Simply move the point to get it off a star.
The black semi-circle in the DBE box means that the selected point is on a star - move the point away from the star.
You also don't want points on your target - you can tell because you'll see lots of stuff in the zoomed view in DBE. For example:
If you accidentally create a point, or want to delete one, just select it (it'll turn green) and hit the Delete key on your keyboard.
After making a New Instance of the DBE process to save for later, I executed it on the image, and got this result:
Wow, what an improvement in contrast! There are some weird dark areas near the nebula, though, that I'm not sure are real, and of course there's still some leftover vignetting from my crop.
The gradient it calculated, if you're curious, looks like this:
That looks about right to me, compared to the original luminance frame. Again, screen stretches on all.
I closed the background image (it's more for interest), and saved the new image, to which PixInsight had helpfully already added a _DBE postfix.
I closed the Luminance frame and the DBE process, and opened the Red, and double-clicked the process I'd put in the workspace to open it on this new image. Since the images are aligned, all the points should already be perfectly placed. But since the background is different in the red frame, I don't just want to apply the same process from the L by clicking and dragging the process onto the R, so I actually open up the process and have it re-calculate by hitting Execute. Again, it looks pretty nice! I save out the image and do the same for the green and blue.
To help fix the varying levels of background light between color channels, PixInsight has a tool called LinearFit. I close the pre-background subtraction images I had open, and open the ones I just saved out from doing background extraction.
The tutorial instructs me to choose a reference image for the LinearFit process based on which one is the brightest, which can be determined from the histograms of the images. A good way to see those histograms is the HistogramTransformation process. Once open, select one of your images using the drop-down menu, and then zoom in by entering a number like 30 into the top left box between the two plots. Then select each image and see where the peak sits. It's pretty quick to tell which image is the brightest (where the peak is furthest to the right) - for me, it's the luminance image.
After applying the LinearFit to each of the images (besides the reference L image), they looked really bright, but then I re-applied the screen stretch by clicking the radioactive button in the top right corner of PixInsight, and it re-adjusted the stretch.
Now the images match in average background and signal brightness levels.
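Under the hood, LinearFit finds a straight-line relation (reference = a + b x target) between each channel and the reference, then rescales the channel accordingly. Here's a bare-bones least-squares sketch of the idea with toy pixel values (PixInsight's real implementation also does outlier rejection, so this is just illustrative):

```python
def linear_fit(reference, target):
    """Least-squares a, b such that reference ~= a + b * target."""
    n = len(target)
    mx = sum(target) / n
    my = sum(reference) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(target, reference))
         / sum((x - mx) ** 2 for x in target))
    a = my - b * mx
    return a, b

# A dim channel that is exactly half the reference, offset by 0.01
ref  = [0.11, 0.21, 0.31, 0.41]
chan = [0.05, 0.10, 0.15, 0.20]
a, b = linear_fit(ref, chan)
fitted = [a + b * x for x in chan]  # now matches the reference levels
```

After the fit, the channel's background and signal levels sit on the same scale as the reference, which is what makes the later color combination behave.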
All right, now it's time to combine the RGB frames into one color image.
Combining the RGB images
I'm using "Combining R, G, and B Images into a Colour RGB Image and applying Luminance" for this section.
So I've got my 32-bit FITS files of calibrated, cosmetically corrected, weighted, registered, drizzled, stacked, background-extracted, and leveled images of my red, green, and blue filter data.
First, I open the RGB images up in PixInsight, and then use the ColorCombination process. Since I have in the file names _R, _G, and _B, it's a simple matter to assign them.
Now, let me take a second here to show you something interesting. As I was coming up with a logical order of tutorials, I tried combining the RGB images first, but then saw that it's better to do the DynamicBackgroundExtraction on the individual channels, and then do the LinearFit to level them together. Compare the above image, with those things done, to the one below, which is just the RGB images combined before the other steps.
I think we are in a good place with this data. I'll eventually have to do something about the vignetting on the right side, but I'll probably just crop it. Obviously I still have some background-killing and de-noising to do.
Now, I'm going to put off combining the L with the RGB until later - it's better to post-process them separately, and then combine. Plus, the LRGBCombination process requires that the input data be non-linear, meaning actually stretched instead of just this screen stretch on the linear data, which I'm putting off till later.
All right, I think it's time for the next tutorial that deals with the combined color data.
I'll be following "Colour-Calibrating Images" next.
Now that we've done background extraction and linear fitting on the color images and combined them, we can deal with getting the relative levels of those colors right, and further dealing with the background.
First comes the BackgroundNeutralization process. I pick a small preview area devoid of stars, as instructed by the tutorial; this will be the reference image. The tutorial mentions turning on Readout Mode - this is under Image -> Mode -> Readout Mode, and then you click & hold to see the box it's referring to. You can adjust the size and location of your preview box using the icon with the picture of the page and the mouse cursor up at the top of PixInsight, or by using Alt+E. Try to make the box as devoid of stars as you can!
The tutorial says to set the Upper Limit to below the lowest of the three RGB values in this preview box. I moved around and found kind of an average, and set the Upper Limit to be 0.010000 as a result.
If you apply a process and then want to change a parameter, make sure to right-click on the tab for that image and hit Reset before applying the process - applying processes is additive!
It didn't do much, but it brightened it slightly:
I guess that means it was already pretty close, thanks to the linear fit. The colors do look pretty good (in the foreground, at least).
Close the BackgroundNeutralization process, but leave the preview box for the next bit.
Since I have PixInsight 1.8.5, I'm going to skip to the PhotometricColorCalibration section. I don't need that preview box after all. This is a super cool process because it uses the location of the image and a star database to find a G2V star in the image, which is the same type as our Sun, and bases the white balance on that, so that the colors are how our eyes would perceive them.
The process can either take in RA and declination coordinates of the approximate center of your image, or you can do a look-up of the target in the center of your image. I've got M20 right in the middle, so I just did a search for that, and it grabbed the coordinates for me. It also needs to know the pixel scale. Since I did a 2x drizzle, the tutorial says I can simply multiply my focal length by two to get the right pixel scale. The focal length of my C8 is 2,032mm, but since I was using a 0.63x focal reducer, it's actually 1,280mm (2032 x 0.63). Multiply that by 2 and you get 2,560mm. My camera's pixel size is 3.8 microns.
Shoot, I should've unchecked Background Neutralization because I've already done it. Oh well...
Here's the result - WOW!
It took a few minutes to do, but it looks fantastic!
I'm still unhappy with the background, so I did another DynamicBackgroundExtraction. Unfortunately, it didn't help much.
This is really being stubborn. I decreased the tolerance to 0.500 and moved the centroid closer to the left side of the image. There was only a slight improvement. I guess I'll just have to cut the rest later when I stretch the histogram.
All right let's do some noise reduction. Then I'll deal with the luminance.
Following the "Noise Reduction" tutorial now, let's do the MultiscaleLinearTransform.
I followed the steps of the tutorial, and I'm just going to skip ahead because it spells it out pretty nicely. For reference, the instructions for making a mask from color data are in Part 2 of their "Producing Masks" tutorial.
Here's the before (left) and after (right):
All right, let's put that aside for now and deal with the luminance.
Same Stuff for Luminance
I'll apply the background extraction and noise reduction to the luminance image as well first.
All right, it's time to actually stretch the L and RGB data - it looks like most remaining steps are meant for non-linear data.
Stretching - Linear to Non-Linear
All right, I took a break and went to bed, worked today, and now I'm back to finish! Working on this 8 hours yesterday left me bleary-eyed, and I dreamed about image processing and data sorting all night! 😮
I'll be following "Stretching Linear Images to Non-Linear" next, specifically Part 3, "Stretching using HistogramTransformation." I've used this process previously when I've practiced on other images.
First, I remove the screen stretches from both my color and luminance images by clicking the icon near the top right with the picture of the screen and the red X for "reset transfer functions." You can also hit Ctrl+F12. Now we're back to looking at the unstretched, linear data, which look very dark, aside from a few bright stars, and just a hair of the red part of the nebula (all monochrome here for the luminance frame).
After opening HistogramTransformation, I select the image I want to operate on from the dropdown list, and hit the Reset button in the lower right corner of the window. Then I hit the Real-Time Preview button, which opens up another window with the image so I can see what my changes are doing before applying them to the main image. There's a way to apply the screen stretch to the histogram window by using the New Instance button in ScreenTransferFunction, but I want to tweak what it's showing me anyway, so I'll do it myself.
The top histogram shows what the new histogram will look like after you apply the stretch, and the bottom histogram shows the luminance curve you are changing. Bringing the middle slider (gray point) toward the left increases the brightness of the middle brightness range of the data, which is usually where your target lies. I bring the gray point all the way in to the right edge of the main part of the peak (you can zoom in using the leftmost number box between the two histograms). You can also move the leftmost slider (black point) to clip out light pollution in the background (this will appear as empty-ish space between the leftmost edge of the histogram plot and the start of the peak). It appears that DynamicBackgroundExtraction did its job here; my peak is quite close to the left edge.
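For the curious, the gray-point slider drives what PixInsight calls the midtones transfer function (MTF). Here's a minimal Python sketch of a black-point clip followed by the MTF, just to illustrate why dragging the gray point left of the peak brightens the midtones (the function shape is standard; the helper name and defaults are my own):

```python
def stretch(x, black=0.0, midtones=0.5):
    """Apply a black-point clip, then the midtones transfer
    function, to a single pixel value x in [0, 1]."""
    # Black-point clip: rescale so the 'black' level maps to 0.
    x = max(0.0, min(1.0, (x - black) / (1.0 - black)))
    # MTF: maps 'midtones' to exactly 0.5, keeps 0 -> 0 and 1 -> 1.
    if x in (0.0, 1.0):
        return x
    return ((midtones - 1.0) * x) / ((2.0 * midtones - 1.0) * x - midtones)

# Moving the gray point left (midtones < 0.5) brightens the mid-range:
print(stretch(0.25, midtones=0.25))  # -> 0.5
print(stretch(0.5, midtones=0.25))   # -> 0.75
```

With `midtones=0.5` the function is the identity, which is why a freshly Reset HistogramTransformation changes nothing.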
Stretching the histogram happens in multiple steps, so now I hit the Apply button (square in the lower left corner of the HistogramTransformation window). My main image now looks like the preview did, and the preview is now all blown out. This is because it's showing what would happen if you applied the same stretch a second time - remember, PixInsight editing is additive. So I hit the Reset button on the HistogramTransformation window to reset to what is now the main image.
I only tweak it a little bit to get a visual result I like better. Now, since I image from a fairly light polluted location, I still have a lot of what I consider to be background light that I would like gone. But if I move the gray point or the black point to the right, my nebula gets dimmer too. But alas, there's a way around this - masking! A mask will allow me to operate only on part of the image. There's a tutorial on the different ways of doing it at "Producing Masks." Since this is already a monochrome image, the task is very simple. I just right-click on the main image (not the real-time preview), select Duplicate, and minimize it because I don't need to see it for this method. Since my image is already stretched, I don't need to do the stretching to the mask that the tutorial walks you through. So I go up to Mask -> Select Mask and select the cloned image I just made. The main image turns red. Red areas are protected areas - they won't be touched by whatever processing I apply next.
You can also invert the mask and protect the nebula and stars - this is helpful when you are applying noise reduction, since there tends to be more noise in the background than on the target, and you can save the fine details of your target from the smoothing that is a result of noise reduction algorithms. Masks are powerful!
Now I pull my preview image back over, and I see (after hitting the Reset button on HistogramTransformation) that adjustments I make are only applied to the nebula, not the background. Excellent, now I can increase contrast just on the nebula, and leave the background an acceptable level of dark.
Now that is starting to look pretty nice. I apply the change and hit the Reset button again. My background is still kind of high, so I invert the mask to protect the nebula and stars while I clip the background a bunch. Not too much, because the mask doesn't protect everything (there are ways to tweak it so that it will catch more, or just exactly the places you want - I did this with the Andromeda Galaxy image I mentioned earlier to get more of the dim outer edges), but some.
Sweet. I apply the change and remove the mask (Mask -> Remove Mask). All right, now for the color data. I select the color image from the drop-down in HistogramTransformation and hit Reset again.
Hmm, so something weird happened when I went to stretch the color data.
But it still looks just fine with ScreenTransferFunction...
So I try the trick of applying ScreenTransferFunction to the histogram by clicking and dragging the New Instance icon to the bottom bar of HistogramTransformation.
STF? More like STFU!!
Soooooo I don't really know what's up with this. I checked the location of the white point slider, and it's all the way up at the top where it should be. The peaks look fine and not really weird at all. I tried applying it to make sure it wasn't just a weird preview thing, but no, it looked the same.
Let me back up a bit. Unfortunately, I don't have a good place to go back to - the last step I saved out was doing the linear fit on each color channel. But I suppose I could check each of those, combine them again, and do the photometric calibration again, and see where the HistogramTransformation gets weird. I'll report back...
All right, so the individual color channels, R, G, and B, that I saved out from doing LinearFit each look fine when stretched. I didn't apply the stretch; I just wanted to see. Now I'll do ChannelCombination again.
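For anyone wondering what LinearFit actually does: it finds a simple linear function that maps one channel's pixel values onto a reference channel, so the three histograms line up before combination. A minimal least-squares sketch of that idea (my own illustration, not PixInsight's exact algorithm, which also handles outlier rejection):

```python
def linear_fit(reference, target):
    """Find a, b minimizing sum((a + b*t - r)^2) over pixel pairs,
    then return the target channel rescaled to match the reference."""
    n = len(target)
    mt = sum(target) / n
    mr = sum(reference) / n
    cov = sum((t - mt) * (r - mr) for t, r in zip(target, reference))
    var = sum((t - mt) ** 2 for t in target)
    b = cov / var          # slope of the best-fit line
    a = mr - b * mt        # offset of the best-fit line
    return [a + b * t for t in target]

# A green channel that is half as bright as red gets rescaled to match:
red   = [0.2, 0.4, 0.6]
green = [0.1, 0.2, 0.3]
print(linear_fit(red, green))  # approximately [0.2, 0.4, 0.6]
```

After fitting, all three channels sit in the same brightness range, which is why the combined stretch preview stops misbehaving.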
Stretch preview looks fine here.
All right, did BackgroundNeutralization, PhotometricColorCalibration, MultiscaleLinearTransform for noise reduction with a mask, and stretching is still looking good. I suspect now that I have re-done the process that I may have left the mask on the RGB image from applying the MultiscaleLinearTransform previously. Whoopsie! All right, let me stretch the RGB image, and then I can finally combine the luminance and color images.
Stretched color image
All righty, combo time!
Back to "Combining Monochrome R, G, and B Images into a Colour RGB Image and Applying Luminance," Part 3: "Applying a Luminance Image with LRGBCombination."
LRGBCombination requires non-linear (aka stretched) images.
I take the tutorial's advice and decide to mess with saturation etc in CurvesTransformation later.
All right, it took a bit to apply the Luminance to the RGB (not sure why; Photoshop does it in the blink of an eye), but it's done! Here's the difference.
RGB left (from a save I did just before this step), LRGB right
It might not look like much of a difference here, but I think it will come out as a winner once I do a few more processing steps. Basically what's happened is the Lightness parameter of the RGB image (the one I extracted earlier for making the mask) has been replaced with the luminance frame. Since the luminance filter on the camera passes all visible wavelengths, it has a stronger signal-to-noise ratio, and thus better contrast and detail. Astrophotographers will generally spend at least as much time on just the L channel as on the red, green, and blue channels combined. In this case, I actually wound up with fewer subframes in my L than the RGB combined frames, but this was due to tracking errors that made me have to delete a bunch.
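The luminance-replacement idea can be sketched in a few lines. PixInsight does this properly in CIE L*a*b* space; the version below is a deliberately simplified, chroma-preserving approximation using standard Rec. 709 luma weights (helper names are my own), just to show what "replacing lightness" means:

```python
REC709 = (0.2126, 0.7152, 0.0722)  # standard luma weights for R, G, B

def luma(rgb):
    return sum(w * c for w, c in zip(REC709, rgb))

def replace_luminance(rgb, new_l):
    """Scale an RGB pixel so its luminance matches new_l while the
    color ratios (roughly, hue and saturation) stay the same."""
    old_l = luma(rgb)
    if old_l == 0.0:
        return (new_l,) * 3  # black pixel: just take the new luminance
    k = new_l / old_l
    return tuple(min(1.0, c * k) for c in rgb)

pixel  = (0.2, 0.4, 0.6)                 # a bluish nebula pixel
better = replace_luminance(pixel, 0.5)   # brighter value from the L frame
print(round(luma(better), 3))            # -> 0.5
```

The color stays the same shade of blue, but the brightness and detail now come from the higher-SNR luminance data.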
Final Steps - Curves, Enhancing Features, Noise Reduction
First, let's open up CurvesTransformation to tweak the lightness of the different parts of the image and mess with saturation. The "Enhancing Feature Contrast" tutorial goes through how to use this tool.
Actually, I'm going to crop out the bad corners again first, then do Curves.
Well, and just for kicks, because apparently I haven't punished myself enough yet, I decided to do one more run of DynamicBackgroundExtraction to cure some weird gradients I see over on the right-hand side. It didn't really help though. Oh well.
Anyway, curves!! Don't forget to use real-time preview here.
Also, I made a new Lightness mask as done before in the "Producing Masks" tutorial so that I can operate on just the background to kill some leftover colors in the background. It worked great. Highly recommend.
I'm not going to mess with the color of the target at all because the photometric calibration process looks like it hit the nail on the head. Fabulous!
I wanted to reduce the noise a bit more, so I ran the same MultiscaleLinearTransform I created earlier, and applied a new lightness mask again to protect the fine details on the target. You can see the difference.
Pre-denoising on the left, post on the right
There is more I'd like to play with, but I think I've spent enough time on this image for now!
Here's the Light Vortex Astronomy tutorials I went through, in order:
- Pre-Processing (Calibrating and Stacking) Images in PixInsight
- Preparing Monochrome Images for Colour-Combination and Further Post-Processing
- Combining R, G, and B Images into a Colour RGB Image and applying Luminance
- Colour-Calibrating Images
- Noise Reduction (with Producing Masks)
- Stretching Linear Images to Non-Linear
- Combining Monochrome R, G, and B Images into a Colour RGB Image and Applying Luminance (again for applying the luminance)
- Enhancing Feature Contrast
- Started Sharpening Fine Details, but decided it was time for bed
I'll probably come back to it at some point to try out some even more advanced stuff, but I do have to say that with all the time I spent on it, it's definitely an improvement over what I was able to do in Photoshop.
Using DeepSkyStacker and Photoshop
You can see right off the bat that the color is better, and I was able to pull out more nebulosity in the blue reflection nebula region. I think I also got more of the dark nebula region that is around the edges of the main emission and reflection nebulae. When I zoom in, I can see that a little more of the detail in the nebula was preserved during denoising. I'm definitely happier with the PixInsight result, and I think with some practice, I could get it to be even more awesome! Not bad for a scope rig that is arguably too heavy for its mount and less than 2 hours of data :D
I'm starting to get the hang of how PixInsight is used. Can't wait to use it more! And it should become faster as I figure out my workflow.