Thursday, July 26, 2018

My First Date with PixInsight

There are a variety of image processing tools out there - some specific to astrophotography, others with a wider audience that are nonetheless useful.  One program that seems to be the gravity well into which all astrophotographers are eventually sucked is PixInsight.

I've been planning on getting into it for a while, but wanted to exhaust Photoshop first, or at least start hankering for some more powerful tools.  I've been able to do quite well with Photoshop.  But I recently joined the cast of The Astro Imaging Channel after I gave a talk on it, and the other members have been egging me on to take the plunge.  I've been very busy lately (as evidenced by the slew of log entries from the week before last), but I finally have a bit of time on my hands to check out the trial version.

I'm not entirely sure why I'm bothering with the trial version - there's almost no question that I'm going to be buying it anyway - but something just didn't feel right about sinking $270 into a piece of software without at least taking its fully-featured 45-day trial for a spin first.

PixInsight is only a few steps above straight-up code.  Even the tool names are probably their actual C++ class names, given their structure - DynamicBackgroundExtraction (camel case, no spaces), the lack of dashes, symbols, and numbers, etc.  I code in C, C++ (a bit), Python, Fortran (somewhat), Matlab, and Mathematica (if you can call Mathematica coding), so I recognize it when I see it!  You can even dive into the source code of the instance of the tool you're using if you want to tweak it at a very base level.

But it does have a very nice GUI (graphical user interface).  It's just not quite as "fully automatic" as Photoshop, but that lets you have a lot more control over what happens to your images, and opens up some very powerful functionality.

This post isn't meant to be a tutorial from me on how to use PixInsight, but I wanted to record my first interaction with PixInsight - and hopefully, people who have used it before will get a kick out of seeing me make the same mistakes they did (or worse!), and people who haven't used it yet will laugh anyway at the mayhem.

I decided to follow this LRGB processing example video on PixInsight's own website.  Next, I needed to choose a dataset to be my first to process.  Immediately my Rosette Nebula came to mind, but it's so good that I wasn't sure how much using PixInsight would improve on it in my first go, so I decided against it.  I tried to think of datasets with good raw data whose final images turned out less well than I'd wanted - M31 Andromeda Galaxy #13 fit the bill.  But, since they were processing LRGB data in the video (as opposed to one-shot color, like from my DSLR), I decided my first dataset would be from my ZWO camera, both because I wanted my first image to be awesome and so I could follow the same workflow.  But which to choose?  I haven't done a whole lot with that camera yet.  I wanted it to be a set where I'd already produced 32-bit TIFFs from DeepSkyStacker, as opposed to 16-bit TIFFs (I usually do 32-bit nowadays, but sometimes I don't).  I was originally thinking that my "first light" image with my ZWO camera, M42 Orion Nebula, would be a good choice - the image I got out of DSS and Photoshop for it turned out great - but it has the added complication of a set of 5-second luminance frames I took to properly expose the bright inner core, which gets saturated out in the longer 60-second luminance images I took.  I finally settled on M81-82 #5, Bode's Galaxy and the Cigar Galaxy, since it came out really sharp and I think my raw data is good, but I couldn't quite get the colors to work, and I had to really beat the background light into submission.  Plus, the video features processing a galaxy, so it made more sense.

What I could do with DeepSkyStacker and Photoshop.  The colors aren't right, and I think I can pull out the sharpness that's there in the data but ends up getting kind of obscured.  Maybe I can even get more of the red jets coming out of M82 to appear.
Details:
Date: 10 & 15 March 2018
Object: M81 & M82
Camera: ZWO ASI1600MM Pro
Telescope: Vixen NA140SSf
Accessories: Astronomik LRGB Type 2c 1.25" filters
Mount: Losmandy Gemini II
Guide scope: Celestron 102mm
Guide camera: QHY5
Subframes: L: 18x180s
   R: 11x180s
   G: 18x180s
   B: 14x180s
   Total time: 3h3m
Gain/ISO: Unity (139)
Stacking program: DeepSkyStacker
Stacking method (lights): Auto-Adaptive Weighted Average (x5)
Darks: 20
Biases: 20
Flats: 0
Temperature: -30C (chip), 29F & 35F (ambient)

1:45 into the video, I immediately hit my first little roadblock: opening a tool.  I single-clicked the LRGBCombination tool, and nothing happened.  I tried clicking and dragging, but got a weird mini-box instead, not the one showing on the screen.  Finally I tried double-clicking, and got the box with the options.  Phew!  Okay.  Got this.

It first has me combine just the RGB.  L comes later, I guess.

Next, I get acquainted with the ScreenTransferFunction tool, which stretches the visualization of the image on the screen so you can see it without applying anything to the actual image.  This is done with the little radioactive symbol button (hee hee).
And now I have a nice green image.  But so does the video, so I'm not freaked out yet.
This is because the color channels are not calibrated.
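
As an aside for the programmers: the screen stretch is basically a midtones transfer function applied only to the preview.  Here's a minimal Python sketch of the idea - the function names and the target median are my own, not anything from PixInsight - just to show that the data on disk stays linear while the display gets brightened.

import numpy as np

def mtf(x, m):
    # Midtones transfer function: maps 0 -> 0, 1 -> 1, and the midtones balance m -> 0.5
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def screen_stretch(channel, target_median=0.25):
    # Stretch a linear channel for DISPLAY ONLY; the underlying data is untouched
    med = np.median(channel)
    # solve mtf(med, m) = target_median for the midtones balance m
    m = med * (1.0 - target_median) / (target_median + med * (1.0 - 2.0 * target_median))
    return np.clip(mtf(channel, m), 0.0, 1.0)

Applying one midtones balance to all three channels (a linked stretch) leaves the green cast visible on screen; giving each channel its own balance (unlinked) is what evens out the display in the next step.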

After unlinking the RGB channels in the ScreenTransferFunction and auto-stretching the image again, I see that I have some pretty severe gradients.  This comes as zero surprise to me since I image through a 1.25" barrel (because I was not ready to buy 2-inch filters and a 2-inch filter wheel earlier this year).  The red and green are on one side and the blue on the other because I did a meridian flip.

Luckily, the galaxy image in the video tutorial also has gradients (albeit barely visible ones), so I am about to be introduced to the infamous DynamicBackgroundExtraction tool.  Very excited.  In Photoshop, I've had to make my own synthetic flats, which works all right for galaxies but less well for nebulae.  There's a Photoshop plugin for handling gradients (GradientXTerminator), but it costs $50.  Let's see how this one goes.
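
For the record, the idea behind DBE (as I understand it) is pretty simple: sample the sky level at points that avoid stars and the target, fit a smooth surface through those samples, and then remove that surface from the image.  PixInsight interpolates the samples with splines; the little Python sketch below just uses a low-order polynomial, and all of the names in it are made up for illustration.

import numpy as np

def fit_background(img, samples, order=2):
    # img: 2-D array (one channel, linear data); samples: list of (row, col) integer background points
    rows = np.array([r for r, c in samples], dtype=float)
    cols = np.array([c for r, c in samples], dtype=float)
    # sky level at each sample = median of a small box around it
    vals = np.array([np.median(img[r - 5:r + 6, c - 5:c + 6]) for r, c in samples])

    # least-squares fit of a 2-D polynomial (1, x, y, x^2, xy, y^2, ...)
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([rows ** i * cols ** j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, vals, rcond=None)

    # evaluate the model over the whole frame
    rr, cc = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    return sum(c * rr ** i * cc ** j for c, (i, j) in zip(coeffs, terms))

# subtraction mode: corrected = img - model + np.median(model)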

Mmmmmkay, so I plotted my own points because the tutorial video loaded a point model they'd already created (thanks guys...super helpful...), aaaaaand it didn't really do much besides add more gradients.  Based on a presentation I saw on The Astro Imaging Channel this past Sunday, I think there's a way to auto-generate a grid of the points, and then you just adjust them to avoid having them sit on stars.  Let me go watch that part of the video real quick...

Ah, okay.  Just needed to click on the image.  Then the previously grayed-out boxes in the Sample Generation dropdown allowed me to edit them.  Then I could click Generate.  Woot!  It generates several sample points, avoiding my two galaxies and the center of the image.  Now I go check all of the points to make sure they're not on top of stars.  There's a helpful zoom box in the tool window.

Easy-peasy.

All right.  Points are checked.  Now to execute.  Some commands flash on the terminal screen aaaand...

...gradients still suck.  This process may need some refinement.  But like, the gradients in my image are way worse than the ones in the tutorial video, so it may need a heavier hand.  

So I go to the Googly machines and quickly find a DBE tutorial from Light Vortex Astronomy on another severely-gradient-ed image, but from a DSLR.  The process seems to work better this time.

Just the gradients, stretched for viewing (original is too dark to see)

Auto-stretched image with DBE applied

So it's better, but now my other problem is evident: some rotation between the frames.  I was indeed using a German equatorial mount, but the data were taken over two nights, and I don't have a marked "home" position for this camera on the telescope, since it's the club's 5-inch Vixen refractor and I don't want to put my own marks on it.  I thought about cropping it, but then my luminance image, which I haven't applied yet, won't line up.  All right, let me go crop these to the same size in Photoshop, and then I'll start over.  Be right back...

Fun fact, layering 32-bit images (so I can see where the overlaps are) is really hard on RAM!  Good thing I have 16 GB...
The rest of that 82% is a zillion Chrome tabs.


I used a crop action I created (follow this video for a quick guide) on each of the frames so that they were each cropped in the exact same place.  The one downside is that Photoshop can't save out 32-bit images, so my result images are 16-bit.  I guess I'll just have to live with that for now.

And so I start over with the LRGB combination and the dynamic background extraction.  Good practice, right?

All right let's do thiiiiiis

So it's better, but still pretty splotchy and vignetted.  But I think it's called "Dynamic" for a reason.
Just the gradients

 Splotchification

Earlier, following the Light Vortex tutorial, I had saved a new instance of that DBE process with all of my adjusted points to the workspace, and I apply that to this new image and double-check the point locations.  I decrease the tolerance from 2.5 to 0.5, and then apply.

The splotchy background

 The new image

Still pretty splotchy.  I begin to wonder whether it might be more beneficial to apply DBE to each channel individually.  But let me press on and see what happens first - remember we have a Screen Transfer Function applied, basically auto-stretching what the image looks like (without actually changing it), and as you can see, it's over-doing it.  So these background gradients may not show up much in the final image.

Well I went to work and came home, and then read further in the Light Vortex tutorial - turns out that using Division instead of Subtraction mode is better for gradients caused by vignetting.  So I went back to the RGB combined image, chose Division instead of Subtraction, and ran it again.  Then I put that point model onto the new image, and ran it again, but with Subtraction, since the vignetting was mostly gone but it was still splotchy.  Now it's less splotchy and less vignetted.  This might be good enough.  We shall see.
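
In hindsight the distinction makes sense: light pollution adds signal on top of the image, so you subtract its model; vignetting multiplies the signal (less light reaches the corners), so you divide by its model, like a synthetic flat.  A tiny sketch of the difference, with made-up function names:

import numpy as np

def correct_background(img, model, mode="subtraction"):
    if mode == "subtraction":
        # additive gradients (skyglow): remove the model, keep the overall level
        return img - model + np.median(model)
    if mode == "division":
        # multiplicative falloff (vignetting): the model acts like a synthetic flat
        return img / model * np.median(model)
    raise ValueError("mode must be 'subtraction' or 'division'")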


Next in the YouTube video from PixInsight is background neutralization and color calibration.  Again I'm not doing a tutorial here, so I'll just say I did it, and here's the result.


It doesn't look that much different - this is probably because the Astronomik filters I have advertise that they are already calibrated in sRGB space, which is the most frequently-used color space on today's computers.  But it's good to know.  Now on to Part 2 of the video tutorial.
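
Before moving on, here's roughly what those two steps are doing, as I understand them: background neutralization equalizes the sky level across the three channels, and color calibration scales the channels so some chosen "white" reference comes out neutral.  The sketch below is conceptual only - PixInsight's actual implementation is more involved, and the function names and masks are mine.

import numpy as np

def neutralize_background(rgb, bg_mask):
    # rgb: (3, H, W) linear data; bg_mask: boolean (H, W) mask of sky-background pixels
    bg = np.array([np.median(ch[bg_mask]) for ch in rgb])
    return rgb - (bg - bg.mean())[:, None, None]   # shift each channel's sky to a common level

def color_calibrate(rgb, white_mask):
    # scale channels so the chosen white reference (e.g. a whole galaxy) averages out neutral
    w = np.array([np.mean(ch[white_mask]) for ch in rgb])
    return rgb * (w.mean() / w)[:, None, None]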

Looks like it's time to deal with the luminance.  Here's pre-DBE (screen stretch applied):

And I can use the DBE process icon I saved from the RGB process to have the points already where they need to be.

The ScreenTransferFunction went a little crazy on the result, so I turned it down a bit.  You can see some vignetting still, but it seems less severe.  What's interesting is that the area to the right of M82 looks darker, which is the same area where the green is shining through in the RGB image.  I wonder what's happening there.  It doesn't look remotely like a dust mote (you can see a few here, and they are much more perfectly round).  I applied DBE a second time with Subtraction instead, but don't see much improvement, and it might be splotchier.  Pressing on, for now...


So recall, we've been viewing all of these images with a screen stretch - just modifying how the image looks so we can see its data.  It's really all still linear data compressed into an itty bitty peak far on the left, which you can see in the lower histogram in the image above.  

After that is done for both the L and RGB images, the video tutorial has you apply an HDRWaveletTransform to help mitigate saturation.  The video is a little out-of-date, and the function is now called HDRMultiscaleTransform, and has a few extra options.  Its job is also to help bring out contrast in the bright areas.  I choose 5 for the number of layers, and turn on the Lightness Mask option, which prevents the contrast enhancement from dimming the very dim areas of the image.

WHOA.


I knew this data was unusually sharp, but holy cow this just brings that right out!  And that's the RGB image.  The luminance is likewise awesome.
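
For the curious, the rough idea behind this kind of HDR transform is to split the image into detail layers of increasing scale plus a large-scale residual, squash the residual (which carries the overall brightness differences), and put the layers back together so the fine structure stands out.  This little sketch fakes the decomposition with Gaussian blurs instead of the wavelets PixInsight uses, and it skips the lightness mask entirely, so treat it as the concept only.

import numpy as np
from scipy.ndimage import gaussian_filter

def hdr_compress(channel, n_layers=5, strength=0.5):
    # split into detail layers at scales 1, 2, 4, ... pixels plus a smooth residual
    layers = []
    current = channel.astype(float)
    for k in range(n_layers):
        smoothed = gaussian_filter(current, sigma=2.0 ** k)
        layers.append(current - smoothed)   # detail at this scale
        current = smoothed
    residual = current

    # compress the residual's dynamic range (gamma-style), leave the fine detail alone
    compressed = residual ** strength * residual.max() ** (1.0 - strength)
    return np.clip(compressed + sum(layers), 0.0, None)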

All righty, time to combine these puppies!
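
The gist of an LRGB combination, as I understand it, is to keep the color information from the RGB image but take the brightness from the (deeper, sharper) luminance frame.  Something like the sketch below, which leans on scikit-image's Lab conversion - PixInsight does this in its own luminance/chrominance space with extra options like chrominance noise reduction, so this is only the basic idea.

import numpy as np
from skimage.color import rgb2lab, lab2rgb

def lrgb_combine(rgb, lum):
    # rgb: (H, W, 3) in [0, 1]; lum: (H, W) in [0, 1]
    lab = rgb2lab(rgb)
    lab[..., 0] = lum * 100.0          # L* runs 0..100 in CIELAB; swap in the luminance frame
    return np.clip(lab2rgb(lab), 0.0, 1.0)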


All right!  Now we're getting somewhere.  It's kind of dark, so hopefully there's a step in a bit to play with the histogram some more.  Plus, I'll need to increase saturation.

Hey look at that, saturation is next in the video.
I learned quickly here that steps are cumulative.  When I adjusted the Saturation slider in the LRGBCombination window, it didn't do much, so I pulled it down further (smaller numbers mean more saturation here).  I did this a couple of times, and then it became way over-saturated!  When I pulled the slider back to the right, it got worse, because each application stacked on top of the previous ones!  So I had to undo the last few steps.  It's guess-and-check, I suppose.


Hmm, colors are starting to get a little weird.  I'll need to adjust those in a bit.
I'll note here real quick the blue halos - this is a result of the fact that my club's 5-inch Vixen refractor is achromatic (well, "neo-achromatic," which I suppose is better?), which means that the colors are not fully corrected in the optics.  In apochromatic refractors, additional lenses are used to place the focal points of all three color components - red, green, and blue - in the exact same spot, or very nearly.  In achromatic refractors, usually only two wavelengths (typically red and blue) are brought to a common focus, leaving the violet end of the spectrum out of focus.  Hence the blue halos.  Carboni's tools for Photoshop have a blue halo remover, which works better on my DSLR images than my astro camera images, but it helps.  Hopefully PixInsight has something like that (I'm sure it does).

Ah good, next is additional stretching in the histogram tool.  I seize my opportunity to make color adjustments by using the histogram.  In this tool I can't just grab the curve anywhere and bend it to my will, but between moving the black and gray points, I get something acceptable.  Of course, the splotchy background makes a rude appearance, but hopefully I'll have another chance to deal with that.  

All right.  Getting closer.  The colors probably aren't right, but it looks acceptable to me.  Even managed to get a bit of the red cloud in M82, which I haven't been able to before!  And the detail is awesome.

Next is noise reduction.  The tutorial has me using the SCNR tool on the problematic color channels.  I do green first, and some of the splotchiness reduces!  Next I try blue, and this actually removes the blue halos.  The two galaxies lose blue too though, so I go back to the histogram and bump it up.  Now the colors look better and the blue halos are gone, but for some reason, the outer portions of M81 look kind of weird.  And I still have the red splotch left of M81.  I couldn't get rid of it with the SCNR tool because it pulled all the red out of the galaxies too.
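
If I understand the documentation right, SCNR with the usual "average neutral" protection just refuses to let a pixel be greener than the average of its red and blue - something like this (simplified, and the amount blend is my own reading of it):

import numpy as np

def scnr_green(rgb, amount=1.0):
    # rgb: (H, W, 3); clamp green to the average of red and blue, blended by 'amount'
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    g_limit = np.minimum(g, (r + b) / 2.0)
    out = rgb.copy()
    out[..., 1] = g * (1.0 - amount) + g_limit * amount
    return out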

And at some point, M82 got a little oversaturated in luminance.

This is where the video tutorial ends.  Their data was obviously taken in a less light-polluted area.  But I definitely see the potential of all of these awesome tools, once I figure out how to refine them.  On to more YouTube videos!

I'm going to end this one here for now.  But my efforts will continue!

