Tuesday, December 18, 2018

#173 - Tuesday, December 18, 2018 - The Moon and the Library

Over the last year and a half, I have been doing astronomy outreach programs at the local libraries to show patrons how to use the library telescopes.  My astronomy club donated an Orion StarBlast 4.5 to one of the libraries in 2017, and modified it as well to make it easier to use and a little more fool-proof (attaching strings to the caps so they don't get lost, swapping the CR2032 button cell battery on the red dot finder for two AAs, etc).  It was so popular that other libraries in the area purchased their own (they're only about $200) and we modified those for them as well.  Patrons can check them out for a week at a time and take them home, camping, out to the local state park or other dark sites, etc.  My Girl Scout co-leader works at the library that got the first scope, and doing the program was her idea.  It's grown since then, and now I've done programs at seven libraries!

This library in particular had 20 people sign up and a waitlist, but only about 10 actually came over the course of the evening, ranging in age from toddlers to elderly.  It was chilly, but not too bad (low 40s), and the sky was fabulously clear!  Sometimes it's cloudy during the programs, so I show them how to use the telescope indoors, and then bring up the planetarium app Stellarium on a projector to familiarize them with the night sky.  But nothing beats looking through the eyepiece at something yourself!

Before the program started, I was sitting in the audience chairs and chatting with some participants when two middle school-age girls came bounding in.  One of them asked me, "Are you the professional astronomer?" I said "Yes," and she proceeded to give me a giant hug!  She was so excited to meet an astronomer because she loved astronomy so much.  She got a telescope for her birthday one year, but it was accidentally dropped on their first night out, so she never got to look through it!  Needless to say, she was excited to check these out.

First, I went over how the telescope worked and the different parts and pieces.  Then I did a demonstration of how one uses the finderscope to find an object, and then looks through the eyepiece to get it centered and focused.  Finally, we took all of the scopes outside (usually the library hosting the program gathers scopes from around the county so that we can have multiple going at once), and set them on tables in the courtyard.  There were a lot of lights, trees, and tall buildings, but the waxing gibbous moon was high, and Mars was easily visible as well (although very small in the scope these days!).  The StarBlasts are very easy to use -- just plop one on a table, point, and look.  I nearly always recommend them when people are looking for a telescope for their kids, or want something really simple and cheap to get started with.  Despite the price and small size of the StarBlast, they deliver pretty good views for a starter scope - much more satisfying than cheap refractors.  You can see open clusters, some bright large nebulae, and the moon quite nicely with them, as well as the rings of Saturn and the moons of Jupiter.

After practicing with the moon and Mars, we all went back inside for hot cocoa and cookies.  It was a fun evening!  I always love doing these events.

Monday, December 17, 2018

#172 - Sunday, December 16, 2018 - A Rude Fog

So there I was, sitting at my desk still trying to process my comet images from last weekend, when I saw the thick blanket of clouds that had been covering us begin to thin out a bit.  I checked Clear Sky, and it showed that it would clear pretty substantially!  The local forecasts agreed, so I quickly threw together a peanut butter & jelly sandwich for dinner, got into my warm clothes, grabbed my camera bags and related gear, and dashed out the door.  There was still some fog floating around from the rain we had over the past few days, but I figured it would clear once the temperature dropped more.  But boy was I wrong!!

I got out there about 6, and the moon was high and about half full, but I figured I could get more luminance frames and some in-focus green and blue frames on the Crab Nebula to beef up that dataset some, and still have it sit far enough above the moonlight background to be fine.  But since it wasn't going to be up high enough until after 7:30, and there was still some thin fog around, I figured I could first work on further fine-tuning the polar alignment on the new Celestron CGX-L that's in the memorial dome.  But before that, I went ahead and got my Vixen Polarie set up with my Nikon D5300.  Earlier that evening, I got a reminder on my phone that I'd set a while back alerting me that Comet 46P/Wirtanen was passing near the Pleiades - perfect timing!  A clear night!  Hopefully.  I framed the image based on SkySafari, or at least my best guess - the comet had dimmed to mag +8.8, and it was still too foggy (and the moon too bright) to see it in the subframes.

I figured I'd try to get the comet, the Pleiades, and the California Nebula in one shot with my 55-200mm lens set at 55mm!  So I let that go while I got the memorial dome set up for polar alignment.

I started with a regular polar alignment in SharpCap.  I had to crank up the exposure time to 2 seconds though, and have the gain at 300 with the histogram stretched, to get enough stars to plate solve.  The moonlight and the fog were killing me.


When I finally did get enough stars, it came back with a "fair" grade on the current polar alignment - off by less than half a degree, but still requiring some adjustment.  I expected to be closer than that, seeing as how I had just adjusted it the previous weekend to be as close as I could get with our bad seeing.  But I went ahead and adjusted it anyway.  My next goal was to do a drift alignment, which SharpCap doesn't have, but PHD does.  I hadn't tried using my ZWO ASI1600MM Pro with PHD before, but luckily it didn't take much convincing to get those two to talk to each other.  However, I don't think PHD could handle the 16-megapixel chip, and it doesn't have the ability to crop the image, as far as I know.  It was slow at getting frames, and then it kept losing the star, even though I could see at least a few pretty easily.

Finally, I gave up trying to get PHD to work with my ZWO camera, so instead I pulled out my QHY5L-II guide camera and attached it to the guide scope, since I'd need it there anyway once the sky cleared.  But I couldn't remember if the mirror diagonal I usually use with my older QHY5 camera put the QHY5L-II far enough away to focus, and I was having trouble getting the guide scope to see a bright star.  So I went to the moon instead, and after fishing around a bit, finally got the moon to appear in the guide scope.  This was further complicated by the fact that I didn't re-align the scope after the polar alignment adjustment (mainly because the fog was starting to thicken).  Moving the focuser didn't seem to get the moon any closer to being resolved, however, so I swapped back to the QHY5.  I was working on trying to get that re-focused, but by then my fingers were cold, and the fog had suddenly become so thick that I could barely see anything!


Seeing that there was no point in staying, especially since I had work the next morning and I had no idea when the fog might clear, I called it early, packed up, and went home.  What a waste!  The drive home took a while too, since the fog was so thick I had to drive below the speed limit, particularly because I usually need to use my brights on the dark country roads, which is not a good idea in dense fog.


Monday, December 10, 2018

#171 - Monday, December 10, 2018 - Can't Get Lucky Twice

Last night was gorgeous!  And tonight was originally supposed to be clear too...

As of last night, the Clear Sky forecast said it would be clear tonight.  Then this morning, it decided it would actually be pretty cloudy, but the other four forecasts I check all said clear, even the most pessimistic Clear Outside one.  All day they said this!  So I packed up my cameras, made dinner quickly, and got out to the observatory by 6:30.  It was already dark, but the Crab Nebula wouldn't be high enough until 7:30 anyway.

But starting around 5 PM, the "civil" weather forecasts (as opposed to the astronomy ones) kept pushing back the hour that the clouds would clear, hour by hour, until now they say 10 PM (I'm sitting in the warm room at the observatory writing this).  I've got my Nikon D3100 clicking away hoping to catch the clouds clearing, but most likely that timelapse will be boring.  I'm at least taking dark frames on my Nikon D5300, since it's 29 degrees outside, and I have a lot of darks at 30 degrees but not a complete set.

I poked my head outside at 9 PM, but nope!  Still a thick blanket of clouds.  And now the civil weather apps are telling me it won't clear until midnight!  All righty, time to pack it up...Darn!  Hopefully another opportunity comes soon.  Time to go catch up on some sleep.

And since you took the time to read this sad little post, here's a meme.

#170 - Sunday, December 9, 2018 - Back After a Long, Cloudy Hiatus

It's been a month since the last time I was out at the observatory!  It's been cloudy!  But my first night back out was a good one - no moon, pretty good transparency, and the clouds stayed on the horizon.

The new rig my club put inside the memorial dome - a Celestron CGX-L mount with a Meade 127mm f/9 apo - was roughly polar aligned by another club member, but still needed some fine-tuning.


So I hooked up my ZWO ASI1600MM Pro with its filter wheel, ran all the cables and got everything hooked up, and started up SharpCap to run its polar alignment routine.  Polar alignment is important to get right for astrophotography -- you need the mount's polar axis pointed directly at the north celestial pole in order to have accurate tracking.  SharpCap reported that the mount was off by about 43 arcminutes, which is close enough for visual observing, but is certainly what was causing all the drift I saw last time I was out.
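To put a number like 43 arcminutes in perspective, here's a quick back-of-the-envelope sketch in Python of the worst-case drift a polar axis offset causes during one exposure (small-angle, worst-case geometry only; this isn't SharpCap's actual math):

    import math

    SIDEREAL_RATE = 2 * math.pi / 86164.1  # Earth's rotation rate, rad/s

    def worst_case_drift(misalignment_arcmin, exposure_s):
        # Drift goes like theta * sin(omega * t), so the drift rate is at
        # most theta * omega, accumulated over the exposure.
        theta_arcsec = misalignment_arcmin * 60.0
        return theta_arcsec * SIDEREAL_RATE * exposure_s

    print(worst_case_drift(43, 180))    # ~34 arcsec over a 3-minute sub!
    print(worst_case_drift(0.1, 180))   # ~0.08 arcsec after a good alignment

At 43 arcminutes off, a 3-minute sub can drift by tens of arcseconds; after adjustment, it's down in the sub-arcsecond noise.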


SharpCap tells you whether you need to move the mount up or down and left or right, so I adjusted each direction until it reported an error of only 2-6 arcseconds, which varied every frame due to the atmosphere.  Pretty good!


After rebooting the mount so that I could put in new alignment stars since I changed the polar alignment, I had it go to Vega first, and its first guess was fairly close -- it put the star within the camera's field-of-view, at least.  After doing the two alignment stars plus four calibration stars, I told it to slew to M1, the Crab Nebula.  I had a tough time choosing a target -- we're not far enough into winter yet to have Orion and all of its goodies be high enough to image in the first part of the evening, but all of the summertime goodies were off to the west, which I try to avoid because of the light pollution.  I had originally expected polar alignment to take longer, however, so when the mount finished slewing, the telescope was pointing toward the trees!  So I slewed up to a higher star to focus the camera and guide camera, and then I left it alone for a bit while I set up my other rig for the evening: my Nikon D5300 on my Vixen Polarie.

I just bought a new tripod so that I could have a removable head to replace with the Polarie's Fine Adjustment head (on Amazon here).  I was also thinking about my trip to Chile, so I got one that folds up small and is lightweight.  This one is made of carbon fiber, and it has three leg joints to collapse the legs, which fold upward back along the center post so that the whole thing is less than two feet long.  But if you extend the legs and the center post all the way, it goes up to 63 inches tall.  It also has a hook on the center column to attach a sandbag or something to weight it down, and you can remove one of the legs and turn it into a monopod.  The whole thing was $130, which as far as nice tripods go, isn't too bad!  Unfortunately, the bolt that you attach mount heads to was a different size than I thought, so I still couldn't attach the Polarie FA head directly, and had to use my 3/8-to-1/4 bushing and attach the FA head to the ball mount that came with the tripod.  Soon I'll get all the right screws!

I got everything connected and got the Polarie pointed north, or at least as north as I can tell by aiming my eye through the sight hole on the edge of the Polarie.  The FA knobs help a lot to move it more precisely.  I wanted to test out using my guidescope and camera again, but it was too cold to want to mess with it.  My USB thermometer reported 29 degrees F.  There were a lot of moving parts in this rig - first I needed to make the tripod level, but the bubble level is on the ball mount head, so I had to get that approximately upright first.  Then I had the FA head screwed onto that, and the Polarie screwed onto that, and another ball mount head on the Polarie, whose rotating section is also screwed on with thumbscrews, and then finally the DSLR attached to that.  Sometimes I'd go to adjust the ball mount that the DSLR was on, and would instead accidentally turn one of the thousand other rotatable things that I hadn't screwed down tightly enough.

I got a little rail that attaches to the camera shoe (same size as for gun scopes) and attached the red/green dot sight that came with my Oberwerk binoculars to it.  Since it's perfectly aligned with the binoculars, I didn't want to re-adjust it, so its aim is off on the camera for now, but I've got another one on the way just for the camera!  Should make aiming it way easier.

While I was adjusting the polar alignment, I saw a bright flash just above, and a slow-ish-moving green-blue meteor splashed across the sky.  Behind it was a sputtering tail of smoke.  Beautiful sight!  Possibly an early Geminid, since it was moving from east to west.

Finally, I got the camera pointed in the vicinity of the California Nebula (there's a string of three bright stars that make that very easy) and set at 100mm.  After a few focusing and test frames, I remembered that I had a much more important target to image -- Comet 46P/Wirtanen!  How could I forget!  So I looked up its position in SkySafari, and it was nice and high just underneath Cetus, with a bright star nearby for reference.  I found it easily, although it took about 10 test frames before I was able to get it centered exactly where I wanted.  With that done, I took test frames to see how long the Polarie would track: two minutes showed streaks, 1 minute 30 seconds still showed streaks, and even at a minute I was getting small star trails.  So I adjusted the polar alignment a bit, and finally got the trailing acceptably small.  Then I let it run.

By the time this was done, it was a little after 7:30, and the Crab Nebula had cleared the trees up to the 20 degree mark, which is the absolute minimum I'll image at because the atmosphere is too mushy below that.  Since it was still kind of low, I decided to start with the blue filter and work my way backwards, since it's less important that the color channels be really clear, and then it would be nice and high and in the good part of the atmosphere for the luminance frames later.  I calibrated PHD for guiding, and it looked good.  I had set the camera cooler on -40C, which it reached, and everything else looked good to go, so I told it to take 15x180s frames, and I snuck back inside where it was warm.

A little later on, I came out with a pair of handheld binoculars from the equipment room and went hunting for the comet.  It wasn't difficult to find, although it looked more like a splotch than a comet.  If it wasn't so cold, I might have set up a larger pair of binoculars on a tripod, but it was cold!  I hurried back inside, where I had the warm room warmed up to a nice 70 degrees.

I periodically went out to the dome to check on things, and the blue and green filter images looked slightly out of focus.  When I changed to the red filter, I slewed to a nearby star and re-focused - it must have been pushed in a little when I was rotating the filter wheel.  My filter wheel is very stiff, especially when it's cold.  It won't be too long now before I just give up and get an electronic one and 2-inch filters...although I'm going to need to start plugging things into the ZWO's two back USB ports because I'm out of ports on my 4-port USB hub!

Guiding went okay-ish.  The dec axis looked good, but RA was bouncing all over the place.  I thought at first it was the seeing, but since dec looked fine, I wasn't sure.  I zoomed in on the stars, and they were slightly elongated up-down, which I think was along the RA axis.  I wonder if I need to apply some different settings in PHD for the CGX-L mount, since it's belt-driven.  The aggressiveness was already turned down to 70, but maybe that's still too high.

9:30 PM rolled around while I was hanging out in the warm room, and I tried to connect to the weekly Astro Imaging Channel show, but the wifi out here is too slow, and even my LTE cell service is too slow, at least inside the building.  Download is workable, but upload is almost nonexistent!  So I missed the call.  Darn!

I'm writing this while I'm out at the observatory the following night, Monday night, so I haven't had a chance to try processing the Crab Nebula image yet.  I was hoping to get more luminance frames tonight, but the clouds that were originally supposed to clear out by 7 PM are still here, and the forecast now keeps pushing that back hour by hour.  I've got my Nikon D3100 set up now pointed east in hopes of seeing the clouds clear, catching some Geminids, and watching Orion rise, but it's not looking hopeful.  I haven't even set up the memorial dome yet, or the Vixen Polarie.  I'm going to stick it out until 9, and if there's no clearing in sight, then I'll go home and get some sleep.

[Update December 15, 2018]
All right, finally got to process the final Crab Nebula image!  My blue and green channels ended up out-of-focus, like I thought, due to the focuser getting pushed in a bit while I was trying to rotate the filter wheel in those below-freezing temperatures.  But my fix of that for the red and luminance channels helped, and I was able to apply a deconvolution algorithm to the luminance and recover some really nice detail despite the not-fantastic guiding.

These are the steps I followed in PixInsight, with guidance from the Light Vortex Astronomy tutorials:
- Used BatchPreprocess to generate master dark, master bias, calibrate & register light frames (linking the different exposure times of the RGB vs the L)
- Stacked each channel with Light Vortex tutorial recommendations
- Combined RGB channels
- Applied DynamicBackgroundExtraction to the RGB and L images to remove the light pollution background
- Applied PhotometricColorCalibration to properly color-balance the RGB image
- Dust spot removal using the CloneStamp on L and RGB (my attempt to clean the objective of the refractor didn't work very well when it was below freezing - the cleaning fluid didn't want to evaporate, so I had to use a hair dryer!  I think I also need to check my filters for dust)
- Denoising with MultiscaleLinearTransform on L and RGB, with a mask to protect the nebula from being blurred
- Stretched L and RGB images
- Applied the Deconvolution process with a PSF generated from the image (uses the shape of the stars to inform the algorithm)
- Combined LRGB into one image
- Reduced star sizes with MorphologicalTransform with a star mask (needed because of the un-focused G and B channels)
- CurvesTransformation to boost saturation and brightness; I also used it with a star mask to reduce the green halos that resulted from the out-of-focus green channel

And here you go!
Date: 9 December 2018
Object: M1 Crab Nebula
Attempt: 5
Camera: ZWO ASI1600MM Pro
Telescope: Meade 127mm f/9 apo (club's)
Accessories: Astronomik LRGB Type 2c 1.25" filters
Mount: Celestron CGX-L (club's)
Guide scope: Celestron 102mm (club's)
Guide camera: QHY5
Subframes: L: 10x300s
   R: 12x180s
   G: 14x180s
   B: 15x180s
Gain/ISO: 139
Stacking program: PixInsight 1.8.5
Stacking method (lights): Average, winsorized sigma clipping
Post-Processing program: PixInsight 1.8.5
Darks: 30
Biases: 50
Flats: 0
Temperature: -40C (sensor), 29F (ambient)

I'm continually amazed with what I can get out of an image that has sub-standard input data!  This whole thing is less than 3 hours of total integration time, and I managed to get some reasonable SNR (signal-to-noise ratio) and detail on the nebula.  Now just think if I had enough clear nights (and enough patience) for much longer total integration times!

Having awesome software helps too...here's the comparison of the pre- and post-deconvolution images!

Crazy!!

Comet results are coming soon...it takes a long time to process all 164 images for making the movie (calibration, background extraction, noise reduction, color calibration...), and then I'm learning how to use PixInsight's comet alignment process to make a final stacked image.  More coming soon!

[ Update December 30, 2018 ]

Well, I had about another two pages of text and images written in here, but then some dialog box came up that I didn't see when I alt-tabbed back over after checking on something, and I lost all of my work :(  Guess I'll start over...This time I reverted the post to a draft, so that it would save my work...for some reason it doesn't do that if you're just editing an already-published post...but why am I surprised, Blogger is a pretty outdated platform that Google is probably going to kill anyway, along with nearly all the rest of its good ideas (sorry I'm kind of bitter about them killing Inbox!)

I got busy with some work stuff, graduate school applications, and the holidays, so processing the comet image has been slow-going.  I've also had numerous setbacks -- this dataset is proving difficult!  I got the video done the weekend before last, so I'll talk about that first.

Video


I wanted to create the video first since several other people were making cool videos, and I figured it would end up being easier than processing the whole image.  There are a few steps in common between the two processes, though, so I could at least make some headway on both at the same time.  I've previously used Photoshop to turn a sequence of raw frames into something reasonable for a video, but I decided to give PixInsight a try.  It's an obvious choice since you can apply the same process to multiple images using ImageContainer.

First, I needed to calibrate the light frames with their corresponding darks and biases.  I had a master dark and a master bias from DeepSkyStacker at that temperature, ISO, and exposure time from another image I had processed previously, so I just decided to use those.  However, after calibrating, debayering, and registering, I checked some of the images, and they came out kind of crazy weird.  I wish I had a screenshot, but I dumped them all into the Recycle Bin and flushed the toilet.  So I started completely over and used PixInsight to generate a master dark and a master superbias, which is better than just a master bias alone, or so I'm told.  (A superbias smooths and models the master bias so it behaves as if it were built from many more frames, capturing the bias signal without the frame-to-frame noise.)  After re-creating those, I re-calibrated, debayered, and registered all of the frames.  They looked much better.
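For anyone curious what calibration actually does, here's a minimal numpy sketch of the idea (assuming the frames have already been loaded as 2D arrays elsewhere; the names are mine, and a real superbias does more sophisticated modeling than this plain median):

    import numpy as np

    # biases, darks: lists of 2D numpy arrays loaded elsewhere (hypothetical)
    master_bias = np.median(np.stack(biases), axis=0)
    master_dark = np.median(np.stack(darks), axis=0) - master_bias

    def calibrate(light):
        # Remove the fixed-pattern bias and the thermal dark signal; the
        # result is then ready for debayering and registration.
        return light - master_bias - master_dark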

Next, I opened up the first and last image, and made two DynamicCrop processes, which between the two would cover the furthest extents of the black edges left behind by the registration process.  I used ImageContainer to apply the crops to all images in the sequence.

Next, I opened up DynamicBackgroundExtraction, since the raw DSLR images had a pretty strong green background as a result of there being twice as many green pixels as red and blue ones.  But it didn't really come out with the same-looking result for each frame, and I couldn't figure out how to make it do that, so I gave up and switched back over to Photoshop after running all the files through the BatchFormatConversion script to convert them from XISF (a PixInsight format) to TIFF.  It wasn't just the processing problems though -- PixInsight was also generating these massive files, so I kept having to delete the images from the previous steps because my poor 128 GB SSD that I process stuff off of just couldn't handle multiple copies of all 94 of the 283 MB behemoths.

In Photoshop, I opened up one of the images, and recorded an action of stretching, cropping, color balancing (by eye), adjusting curves, and reducing noise, and then I applied that action to all of the images.  Then I added a text layer, made copying that to the image an action, and applied that action to every frame, since I wanted text on the video saying what it was, the date/time, etc.  Finally, I converted all the images to JPG, loaded them into Timelapse Movie Monkey, and generated my comet video.  I posted it to YouTube on December 18th.


Image

With the video done and a 4-day weekend upon me, I could finally take another whack at the complete comet image.  Previously, I used DeepSkyStacker for this, which has a neat comet stacking mode that allows you to do it one of two ways: hold the comet steady and streak the stars, or do two stacks and combine them to get steady comet and stars.  DSS is pretty quick about it and it produces a pretty good image if you choose the right stacking mode, but you have to select the nucleus of the comet in every frame, and who has time for that??  So I decided to see if PixInsight had a better way.  To the Google Machine!  I quickly found a tutorial from the PixInsight folks themselves on the process, located here.  It walks you through how to make a comet image where both the stars and the comet are steady.  First I'll integrate the comet, then the stars, and then combine.

Much to my delight, PixInsight does indeed do it the most awesome and logical way: its CometAlignment routine has you select the comet nucleus in only the first and last frame in your sequence time-wise, and then it registers everything from there.  Beautiful!
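I don't know exactly what PixInsight does under the hood, but the geometry it needs is presumably just a linear interpolation of the comet's position against each frame's timestamp.  Something like this sketch (my own notation, not PixInsight's):

    import numpy as np

    def comet_shift(t, t0, xy0, t1, xy1):
        # Interpolate the comet's pixel position at time t from its measured
        # positions in the first (t0) and last (t1) frames, then return the
        # offset that moves it back to where it sits in the reference frame.
        frac = (t - t0) / (t1 - t0)
        xy = np.asarray(xy0) + frac * (np.asarray(xy1) - np.asarray(xy0))
        return np.asarray(xy0) - xy

Each registered frame then just gets translated by that offset before integration.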

I started with the calibrated, debayered, and registered images I had already generated from making the video.  I pulled them into the CometAlignment process and selected a reference image (the image that has the comet located where you want it to be in the final image).  But when I went to the first image to select the comet nucleus, the first 24 frames were missing!  I found them at the bottom -- somehow the timestamps had gotten messed up.  Back to the drawing board...

I deleted everything and decided to use BatchPreprocessor this time.  I loaded my lights, master dark, and master superbias, but when I hit Run, it came up with a memory read error, and also an error about a CFA issue with the dark frame.  CFA has to do with the debayering, so I wondered if the dark was debayered while the lights still weren't, or something like that.  So instead, I closed out of that script and just did it myself again.  

Calibration - the reduction in noise is easier to see when zoomed in on the raw file, sorry!
Also, it's monochrome because it hasn't been debayered yet.

When I went to debayer the frames with the Debayer process, PixInsight kept outright crashing, like totally shutting down.  I had just installed the new version, 1.8.6, and was regretting it already!  But then I remembered that the Light Vortex tutorials advised DSLR users to change the RAW format setting to "pure raw," so I did that, and it finally was able to process and not crash.  How to do that can be found in part 1 of this Light Vortex Astronomy tutorial.  

Debayered single frame -- now in color!

I used the Blink process to inspect the frames after registration.  I can only load about 35 or so at a time, since they fill up my RAM rather quickly (that and Google Chrome!)

I re-calibrated, re-debayered, and re-registered all 94 frames, and this time the timestamps were correct.  I selected the comet nucleus in the first and last frames, and then it crunched through aligning the frames on the comet.

The frames are now aligned on the comet.

With the images aligned on the comet, it was time for integration (stacking).  I used the ImageIntegration process, which gave me just the comet and not the stars: because we're comet-aligned, the stars change position every frame, and the integration process is designed to reject parts of the image that change from frame to frame (such as noise and satellites).
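That rejection is the same trick that kills satellites and cosmic rays in a normal stack.  Here's a simplified numpy sketch of the idea; per the data tables, I actually used winsorized sigma clipping, which clamps outliers instead of discarding them outright like this plain version does:

    import numpy as np

    def sigma_clip_stack(frames, k=3.0, iters=3):
        # Average registered frames, rejecting per-pixel outliers (moving
        # stars, satellites, hot pixels) more than k sigma from the mean.
        cube = np.stack(frames).astype(np.float64)  # shape: (N, H, W)
        keep = np.ones(cube.shape, dtype=bool)
        for _ in range(iters):
            data = np.where(keep, cube, np.nan)
            mu = np.nanmean(data, axis=0)
            sigma = np.nanstd(data, axis=0)
            keep = np.abs(cube - mu) <= k * sigma
        return np.nanmean(np.where(keep, cube, np.nan), axis=0)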

Comet only.  It still has a high background, which we'll be killing soon.  The two dark spots aren't actually dust -- it's a bug on the lens that changed positions partway through!

Next, I went back to CometAlignment but this time, I used it to subtract the stacked comet image from the single frames to get just the stars.



Next came stacking all of the stars-only images.



After that, I cropped out the darkened edges that were a result of registration, cropping both the stars-only and comet-only images the same using DynamicCrop.  I had both images open, but cropped only one of them (the comet image has deeper edges than the stars one, so it would cover both), then I dragged the New Instance icon into the workspace, applied the process to the comet image, and then clicked and dragged the icon onto the stars image to crop it the same.  

Then it was time to work on the backgrounds.  The PixInsight tutorial leads you through two rounds of DynamicBackgroundExtraction -- the first one to reduce gradients, and the second one to further suppress the dim, streaked stars left over in the background of the comet-only image from the stacking process.  The first DBE was easy, especially with the absence of stars, which meant I didn't have to check every sample point to make sure it wasn't on a star; I just needed to clear out the ones around the comet.  But after I applied it, the image was still green -- greener, actually!


I compared settings with the PixInsight tutorial, and they were the same.  But then I checked with the Light Vortex tutorial, and saw that I actually needed to have "Normalize" un-ticked.  After I did that, the background was finally actually subtracted.  
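Conceptually, DBE builds a smooth model of the sky from those sample points and subtracts it.  The real process fits 2D surface splines, but a low-order polynomial sketch shows the idea (my own toy version, not PixInsight's):

    import numpy as np

    def background_model(xs, ys, values, shape, order=2):
        # xs, ys, values: 1D numpy arrays of sample positions and levels.
        # Fit a low-order 2D polynomial through the background samples,
        # then evaluate it over the whole frame for subtraction.
        terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
        A = np.column_stack([xs**i * ys**j for i, j in terms])
        coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
        yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
        return sum(c * xx**i * yy**j for c, (i, j) in zip(coeffs, terms))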


The next step was to do a high-density background subtraction to eliminate the leftover stars in the comet-only image.  For this, you want a lot of sample points.  The PixInsight tutorial author's computer could only handle 41 samples per row, so he split his image into three parts for processing.  My desktop is pretty beef-tacular, so I figured it could handle the full 200.  It didn't have much trouble plotting those points or rendering them, although it was a tad slow navigating the image.  I zoomed in and deleted samples around the comet nucleus, since we don't want to kill that.


I hit go and waited.  And waited.  And typed up this update on the blog post.  And waited.  Finally after about an hour, the console window appeared, and it started calculating the 2D surface splines on the first color channel.  It got to 92% and then stopped for a half hour!  I finally killed PixInsight and started it again.  This time I'll just wait.  It might need to be overnight, we'll see.  The first part, which I think is when it's analyzing the sample points, only uses one core of my 4-core hyperthreaded Intel i7 processor (4700K...it's time for an update...), but that one core is still at 4.0 GHz.  Next time, when I finally get myself a water cooler, remind me to overclock it!  The next part though -- calculating the splines -- was using almost all of my processing power, which made it hard to use the computer!  

It finally finished, at least two hours later but possibly as long as four, since I stepped away from the desk to make dinner...and I forgot to actually set the correction to Subtraction!  😭😭 I tried subtracting it myself with PixelMath, but it looked terrible.  So I let it run again...

Here's the background:

Wow that is cool!  So much hidden beneath the surface.

[Update December 31, 2018]

I let it run overnight, and here's the result!


Wait that looks the same????  😐 Maybe the stars are deeper in the background now...screen transfer function can be deceptive since it's automatic, so it might be digging deeper into the background.  Guess I'll just keep processing and see what happens!

Next, I went back over to the stars-only image, and realized that I hadn't saved the cropped and DBE'd one I'd made earlier, so I would just have to re-crop the stars image and guess.  Luckily, it doesn't have to be perfect, since the comet is moving anyway.  Hopefully it's close enough.  Then I re-DBE'd the stars-only image.


Then it was time to add the stars and comet images together with PixelMath.  


Well...okay yeah guessing on the crop isn't going to work here either since we still have the ghost stars in the background of the comet image.  *sigh*  All right, time for round three...

I went back and opened up the stars-only and comet-only stacks, cropped them the same with DynamicCrop (and saved those images), and then did a DynamicBackgroundExtraction on the stars-only image (and saved that image as well as the DBE process), and then did the wide DBE on the comet-only image.  For interest's sake, here's the background it extracted from the comet-only image.

Background that was subtracted from the comet-only image.

Now it's time for another high-density DBE to try to kill the leftover stars in the comet image.  Since I still had stars using the parameters in the PixInsight tutorial, I cut out a small preview window from the whole image so that I could mess with the parameters and see what worked.  With the small preview window, it only took a few seconds to generate the DBE'd image.


I played around with Tolerance, Shadows Relaxation, and Smoothing Factor in the Model Parameters (1) section, but they all looked more or less the same.  The backgrounds that were subtracted varied a bit in granularity, but the outcome looked pretty equivalent.  I just got the new edition of Warren Keller's Inside PixInsight, which I'm hoping will offer some insight into the parameters of these processes and what they're really doing.  Now that I'm starting to get a good feel for how they work, I want to actually know what they're doing!  Then I can tune the parameters more smartly (is that a word??)

After doing some comparisons, it looked like the PixInsight tutorial's values would reduce the background stars the best, so I might just have to use other means to reduce them further later on.  I went back to the main image, set the settings, and then went and worked on other stuff.

About three hours later, it was finished.


Then I could add the stars and comet images together with PixelMath!


The tutorial instructs doing a deconvolution next with a star mask, so I made one from the stars-only image.


Normally I do this to increase sharpness on a deep sky object and I make a model star, but I figured I'd roll with it and see where it went.

Wow, it was terrible.


I made the stars bigger in the star mask by applying a MorphologicalTransform twice:


It didn't help.  In fact, I think it made things worse!  So I skipped over that part.


Next was reducing the background light with BackgroundNeutralization.  You need to make a preview window that doesn't have any stars so that PixInsight knows what is truly background.  I used the same preview window for running ColorCalibration as well.


The initial image out of BackgroundNeutralization did appear darker, until I re-applied the ScreenTransferFunction, which made it look the same again.  But if I clone it and undo the process (and don't re-apply the STF), you can see the difference.


Next came the ColorCalibration, using a new preview window (because a) I deleted the old image with the preview without thinking, and b) shouldn't the background reference be on the now-neutralized image?)


The color of the comet became slightly greener, but that was about it -- DBE can sometimes do a pretty good job of color balancing itself.

I decided it would be a good time to get rid of those little bug spots, so I used CloneStamp.  I hadn't re-done that after starting over, since I figured it'd be better to do it on the combined image anyway.

Yayyyyyyyyy

I got tired of looking at the noise, and the next step in the tutorial was stretching, so I went ahead and applied MultiscaleLinearTransform, which is amazing.  Since there's not really any detail with the comet, I didn't bother to make a mask to protect it like I would any DSO I wanted detail on.  I have a copy of the process saved in my astrophotography folder with settings recommended by the Light Vortex Astronomy tutorial so that I don't have to change the settings over and over again -- very handy!


Woo hoo!  Much cleaner.

Since the stars were kind of green (ColorCalibration never seems to work in my favor), I decided to try my favorite color calibration tool -- PhotometricColorCalibration.  I input the RA and Dec coordinates for where the comet was in the reference frame that night, my focal length and pixel size, and let it run.  The result (even after re-applying the STF) was...waaaaaay off.


Un-did that!!

Next, it was time to stretch.  I hoped I could darken the background enough to hide the streaked stars.  I decided to try the MaskedStretch that the PixInsight tutorial does, using the same settings.


Mm, gonna give that one a thumbs-down.  Lemme just do this myself.

I really had to clip it deep in the blacks to hide the streaked stars and the halos around the bug spots.


Ha-rumph.  I decided to roll with it, but had another idea brewing...

Next was fixing the green cast with CurvesTransformation.  I like adjusting curves, in both PixInsight and Photoshop -- it's your one-stop shop for color correction, selective brightening/darkening, and saturation adjustment.


Ugh, this was just not working out.  Time for plan B!

So, the comet is bright, and the stars are bright.  The rest is not.  Let's make a mask!  I backed up a few steps to pre-stretching the image.  Applying a STF made it look terrible, even after the MultiscaleLinearTransform.  This dataset is really a mess.

I went to the RangeSelection process, turned on Real-Time Preview, and set the lower limit to reveal the comet.  It was lower than I thought, and it was a very fine line between revealing the comet and revealing the entire image.  I found a value that looked good and hit Apply.  It came out dimmer than the preview, so I stretched it.  It happened to catch most of the stars too, so I didn't need to also make a star mask to add to it.


Now I could apply the mask either for protecting the background or protecting the stars and comet by inverting it.  

Red areas are protected.


You'll notice that the outer halo of the comet is not protected here - it's too close to the background noise, sorry!

So first I had the mask protect the background, and I stretched just the comet and stars.  Then I inverted the mask and just did a tiny stretch on the background so I could keep it dim but not so dark as to look fake.  Much better result!


I think the left corner got a little left out of the mask, but it's too large to crop out.  I'll just leave it, this image is terrible anyway!

Finally, I fixed the colors using CurvesTransformation, and boosted the saturation.  And then I called it quits!!

Date: 9 December 2018
Object: Comet 46P/Wirtanen
RA/Dec at Reference Frame: 03h 13m 16.4s, -00 deg 02' 58.8"
Attempt: 1
Camera: Nikon D5300
Telescope: Nikon 55-200mm lens @ 100mm, f/4.8
Accessories: N/A
Mount: Vixen Polarie
Guide scope: N/A
Guide camera: N/A
Subframes: 94x60s (2h44m)
Gain/ISO: ISO-1600
Stacking program: PixInsight 1.8.6
Stacking method (lights): Average, Winsorized Sigma Clip
Post-Processing program: PixInsight 1.8.6
Darks: 72 (30F)
Biases: 20 (28F)
Flats: 0
Temperature: 29F

In its final battle cry, the TIFF saved out with some additional stretching or something, and Lightroom only lets me import TIFFs and not JPGs for some reason, so I couldn't watermark it.  I'm not happy with it anyway, so who cares!  Haha.

As miraculous as image processing is, sometimes bad data just can't be helped!!  Worth a shot though!


Tuesday, November 13, 2018

Astronomy From Afar: My First Trip with Remote Imaging

Intro


When I was at the Texas Star Party this year, I met some folks from The Astro Imaging Channel, who were going around interviewing people with astrophotography rigs and asking about their setups for a video they were putting together.  They asked if I would do a presentation on their weekly show, and I had a great time presenting "Astrophotography Joyride: A Newbie's Perspective."  (It has 2,500 hits now!!)  I stayed on as a panel member for the channel, and have gotten to know the other members.  For example, another presenter, Cary Chleborad, president of Optical Structures (which owns JMI, Farpoint, Astrodon, and Lumicon), asked if I would test a new design of a Lumicon off-axis guider (which I still have because I'm still trying to get my AVX to work well enough with my C8 to give it a fair shake!).  (You can read about that adventure here.)  Now, Cary and Tolga Gumusayak of TolgaAstro have collaborated to give me 5 hours of telescope time on a Deep Sky West scope owned by Tolga, with a new FLI camera on loan and some sweet Astrodon filters, and asked me to write about the experience!  Deep Sky West is located in Rowe, New Mexico, under some really dark Bortle 2 skies.

The Gear


The telescope rig in question is the following:
- Mount: Software Bisque Paramount Taurus 400 fork mount with absolute encoders
- Telescope: Planewave CDK14 (14-inch corrected Dall-Kirkham astrograph)
- Camera: Finger Lakes Instrumentation (FLI) Kepler KL4040
- Filters: Astrodon, wideband and narrowband
- Focuser: MoonLite NiteCrawler WR35
I'm told the whole thing is $70k!  And you'll notice the lack of autoguiding gear - with its absolute encoders, this mount doesn't need autoguiding; once it's precisely polar aligned, the tracking is accurate on its own.

Screenshot from the live camera feed from inside the observatory

The Target


After getting the camera specs, I needed to select a target.  With a sensor size of 36x36mm and a focal length of 2563mm, my field-of-view was going to be 48x48 arcmin (or 0.8x0.8 degrees).  So a little bigger than my 11-inch Schmidt-Cassegrain with its focal reducer, but smaller than the 5-inch refractor I use at the observatory.  It sounded like I was going to get the time soon, so I needed a target that was in a good position this time of year.  While I was tempted to do a nebula with narrowband filters, I haven't processed narrowband images before, so I wanted to stick with LRGB or LRGB + Ha (hydrogen alpha).  So I decided that I should do a galaxy.  Some ideas that came to mind were M81, the Fireworks Galaxy, the Silver Dollar Galaxy, and M33.  M74 was also recommended by the resident expert astrophotographer in my club.  I finally settled on M33: because of its large angular size on the sky, it's difficult for me to get a good image of it from my home location, and it has some nice HII nebula regions that I haven't been able to capture satisfactorily.
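The field-of-view number is just small-angle geometry; here's the quick sanity check in Python (the 9-micron pixel size comes from the camera specs mentioned further down):

    import math

    def fov_arcmin(sensor_mm, focal_mm):
        # Small-angle approximation: FOV = sensor size / focal length (radians)
        return math.degrees(sensor_mm / focal_mm) * 60

    print(fov_arcmin(36, 2563))   # ~48.3 arcmin per side
    print(206.265 * 9 / 2563)     # pixel scale: ~0.72 arcsec per 9-micron pixel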

Messier 33 is also known as the Triangulum Galaxy for its location in the small constellation Triangulum, up between Aries and Andromeda.  It's about 2.7 million lightyears from Earth, and while it is the third-largest galaxy in our Local Group at 40% of the Milky Way's size, it is also the Local Group's smallest spiral galaxy.

As far as how to use the 5 hours went, I originally proposed 30x300s L, and 10x300s RGB each.  But then Tolga told me that this camera (like my ZWO ASI1600MM Pro) has very low read noise, but kind of high dark current, and it's also very sensitive, so shorter exposures would be better.  He also told me that the dynamic range was so good on this camera that he shot 5-minute exposures of the Orion Nebula with it, and the core was not blown out!  Even on my ZWO, the core was blown out after only a minute.  So I revised my plan to be 33x180s L, 16x180s RGB each, and I also wanted some Ha, so I asked for 10x300s of that.
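As a sanity check on the plan (my arithmetic, nothing official), the revised request comes in just under the allotted time:

    plan = {"L": (33, 180), "R": (16, 180), "G": (16, 180),
            "B": (16, 180), "Ha": (10, 300)}
    total_s = sum(count * exposure for count, exposure in plan.values())
    print(total_s, total_s / 3600)  # 17580 s, ~4.9 of the 5 allotted hours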

Data Capture


The very next night, November 7, Tolga messaged me saying he was getting the M33 data and asked if I wanted to join him on the VPN!  He had me install TeamViewer, and then sent me the login information for the telescope control computer out at the remote site.  It was a little laggy, but workable.


This was really cool!  We could control the computer as if we were sitting in front of it.  The software, TheSkyX with CCDCommander, lets you automate everything, of course.  The list shown on the screen is the set of actions for the scope to follow, which instead of being time-based are event-based.  The first instruction is "Wait for Sun to set below -10d altitude."  This way, you don't have to figure out all the times yourself every night - it just looks at the sky model for that night at that location.  It turns the camera on, lets it cool, and then goes to the next checked action, which is to run a sublist for imaging M33 in LRGB until M33 sets below 30 degrees altitude.  I thought I grabbed a screenshot of the sublist, but it looks like I didn't.  Darn!  Anyway, it has the exposure times and filter changes and everything else in there.  It also specifies how often to dither - dithering is when you move the scope just a few pixels every frame or two so that hot pixels don't land in the same place in every frame.  I haven't had to do this yet since I've never been perfectly polar aligned enough, or had a scope with good enough gears, for it not to already be drifting around a little bit frame-to-frame on its own.
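A dither schedule is simple enough to sketch; something like this toy version (CCDCommander's actual implementation is its own):

    import random

    def dither_offsets(n_frames, every=1, max_px=3.0):
        # Generate a small random pointing offset (in pixels) every `every`
        # frames, so fixed-pattern defects never track the same sky pixels
        # and get rejected when the frames are stacked.
        offsets, current = [], (0.0, 0.0)
        for i in range(n_frames):
            if i % every == 0:
                current = (random.uniform(-max_px, max_px),
                           random.uniform(-max_px, max_px))
            offsets.append(current)
        return offsets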

Also in the above screenshot is a raw single luminance frame.  To the untrained eye, it looks blown out and noisy as heck, but I know better, having looked at a lot of raw files now - it looks great to me!

He only took some of the luminance frames and red frames - the rest he'd get on another night soon - and then switched to green.  On the second green frame, the stars had jumped!  Tolga thought at first a cable might be getting caught, so he switched to the live camera feed and moved the scope around a bit, but everything looked fine.  He mentioned that it had been hitching in this same spot about a month ago.  It turned out to be a snagged cable, which was eventually fixed.  Anyway, the mount moved past that trouble spot, and the rest of the frames came out fine.  I logged off because it was getting late.

He collected the rest of the frames, and then on November 11, sent me the stacked L, R, G, and B images.  Now it's time to process!

Preparing for Combination

Since I'm still learning PixInsight, I'll once again be following the Light Vortex Astronomy tutorials, starting with "Preparing Monochrome Images for Colour-Combination and Further Post-processing."
First, I open up the stacked frames in PixInsight and apply a screen stretch so I can see them.


Wowee!!!!!

The first processing step I'll do is DynamicBackgroundExtraction to remove background on each of the four stacked images.  It may be very dark out in Rowe, NM, but there is likely still some background light.  Since they're aligned, I can use the same model for each one, so I'll start with the luminance frame, and then apply the process to each of them.

Following the tutorial's advice, I set the "default sample radius" to 15 and "samples per row" to 15 in the Sample Generation tab.  I hit Generate, but there were still a lot of points missing from the corners, so I increased the tolerance (in the Model Parameters (1) tab) to 1.000.  After increasing all the way to 1.5, there were still points missing from the corners, but I decided just to add some by hand.  I also decided there were too many points per row, so I reduced that from 15 to 12.  Then I went through and checked every point, moving it to make sure it was not overlapping a star, and deleting points that were on the galaxy.  You want only background.  



Next, I lower the tolerance until I start getting red points - ones that DBE is rejecting, making sure to hit "Resize All" and not "Generate" so I don't lose all my work!  I stop at 0.500, and all my points are still valid.  I open the "Target Image Correction" tab, select "Subtraction" in the "Correction" dropdown, and then hit Execute.  After I autostretch the result, this is what I have:


Hmm, maybe a little too aggressive - there's some dark areas that I don't think are real.  I back off Tolerance to 1.000 and try again.


The result looks pretty much the same, so I'm going to run with it and see what happens.  I'll save the process to my workspace so I can adjust later if needed (and I also need to apply it to my RGB frames).  This is what the background it extracted looks like, by the way:


I close the background image, minimize the pre-DBE image, and put a New Instance icon for the DBE process in the workspace (by clicking and dragging the New Instance triangle icon on the bottom of the DBE window into the workspace), and then I close the DBE process.  Then I minimize the DBE'd luminance image and open up the red image, and double-click the process I just put into the workspace, which then applies the sample points to the red image.  None are colored red for invalid, so I execute the process, and get the following result:


Lookin' good.  I do the same for the green and blue, and save out all of the DBE'd images for later reference, if needed.  I also save the process to the same folder for possible later use.

Next, I open up the LinearFit process, which levels the LRGB frames with each other to account for differences in background that are a result of imaging on different nights, different times of the night, the different background levels you can get from the different filters, etc.  For this process, you want to select the brightest image as your reference image.  It's probably luminance, but we can check with HistogramTransformation.  


I select L, R, G, and B (the ones I've applied DBE to) and zoom in on the peak (in the lower histogram).  It's so dark at the Deep Sky West observatory that, especially after background extraction, there is essentially no background, and pretty much all the peaks are in the same place.  Even the non-DBE'd original images have basically no background (which would show up as space between the left edge of the peak and the left side of the histogram window).  So I select the luminance image as reference, and then apply the LinearFit process to each of the R, G, and B frames by opening them back up and hitting the Apply button.  I needed to re-auto-stretch the images afterwards.
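Under the hood, LinearFit is doing something like a least-squares match of one image's signal levels to another's; a minimal numpy sketch of the idea (PixInsight adds iterative outlier rejection that I'm leaving out):

    import numpy as np

    def linear_fit(target, reference):
        # Solve reference ~ a*target + b in the least-squares sense, then
        # rescale target so its background and brightness levels match
        # the reference channel.
        a, b = np.polyfit(target.ravel(), reference.ravel(), deg=1)
        return a * target + b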

Combining the RGB Images


Now that their average background and brightness levels are leveled, it's time to combine the LRGB images together.  For that, I go to the "Combining Monochrome R, G and B Images into a Colour RGB Image and applying Luminance" tutorial.  

First, I open the ChannelCombination process, and make sure that "RGB" is selected as the color space.  Then I assign the R, G, and B images that I have background extracted and linearly-fitted to each of those channels, and hit the Apply Global button, which is the circular icon on the bottom of the process window.  


Yay color!  It's showing some noise at the moment, but we'll get there.  Remember, this is just a screen stretch, which tends to be less refined than when I will actually stretch the image.

I'll come back to this tutorial later to combine the luminance image with the RGB; it's a good idea to process them separately and then bring them together, since they bring different features to the table.

Here, I took a quick break to make some Mexican hot chocolate.
Mmmmmmm yes.

Color Calibration

Since I'm dealing with the color image first, I'll go ahead and color calibrate now using PhotometricColorCalibration.  This process uses a Sun-type star that it finds in the image using plate solving as a white reference from which to re-balance the colors.  In order to plate solve, you need to tell it where the camera is looking and what your pixel resolution is.  To tell it where this image is looking, I simply click "Search Coordinates," enter "M33," and it grabs the celestial coordinates (right ascension and declination) for that object.  


After hitting "Get," I enter in the focal length and pixel size.  Focal length on the Planewave CDK14 is 2563mm, and the pixel size on the FLI Kepler KL4040 is a whopping 9 microns!  I enter these values and hit the Apply button, then wait.  A few minutes later, the result appears.


The change is small this time, but other times I've used this, it's made a huge difference.  It looks like these Astrodon filters are already well color-balanced.  My own Astronomik filters are too, but sometimes still require a small bit of tweaking.  My DSLR images benefit the most from color calibration!  Or images where I used other filters besides my Astronomik ones.

Noise Reduction

Time to deal with the background noise!  I'll be following the "Noise Reduction" and "Producing Masks" tutorials.

First, since I want to reduce noise without blurring fine details in the brighter parts of the image, I'm going to use a mask that will protect the brighter parts of the image, where the signal-to-noise ratio is already high, so that I can attack the dark areas more heavily.  Since I have a luminance image that matches the color data, I'm going to use that as my mask first, and then see if I need to make a more selective one.  Now, masks work better when they are bright and non-linear, so I duplicate my luminance image first by clicking and dragging the tab with the name of the image (once I've re-maximized it so I can see it) to the workspace.  Then I turn off the auto screen stretch by hitting the "Reset Screen Transfer Functions: Active Window" button in the button bar at the top of PixInsight, and open up the ScreenTransferFunction process.


Then I hit the little radioactive icon to apply an auto stretch again, and I open the HistogramTransformation process.  I select my cloned luminance image from the dropdown list, hit the "Reset" button in the bottom of that window, and then click and drag the "New Instance" icon (triangle) from the ScreenTransferFunction process to the bottom bar of the HistogramTransformation window.  This applies the same parameters that the auto stretch calculated to the actual histogram of the image.  Now I hit the Reset button on the ScreenTransferFunction window, close it, and hit the Apply button in HistogramTransformation to apply the stretch.  I close HistogramTransformation.
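The math being transferred from the screen stretch into HistogramTransformation centers on the midtones transfer function.  If I have it right, it looks like this (a simplified single-channel version):

    import numpy as np

    def mtf(x, m):
        # Midtones transfer function: maps the midtones balance m to 0.5,
        # while keeping 0 -> 0 and 1 -> 1.
        return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

    def stretch(image, shadows, highlights, midtones):
        # Clip to the [shadows, highlights] range, then remap the midtones -
        # the same shape of transform an auto screen stretch computes.
        x = np.clip((image - shadows) / (highlights - shadows), 0.0, 1.0)
        return mtf(x, midtones)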

To apply it to my color image, I select the color image to make the window active again, I go up to Mask -> Select Mask, and I select my cloned, stretched luminance image.  


Now, the red areas are the areas the mask is protecting, so since I want to apply the noise reduction to the dark areas, I invert the mask by going to Mask -> Invert Mask.


There we go.

I open up MultiscaleLinearTransform for the noise reduction, and I minimize the cloned luminance image.  Since I don't need to see the mask anymore, I go up to Mask -> Show Mask to toggle it off.  Now, don't forget you have it on - a few times I've tried to do a stretch or other processing and it looks really weird or doesn't work, and it's because I left the mask on!

Following the tutorial's recommendation, I set the settings for the four layers, and hit Apply.



If you want to see the effect of changes made to the parameters without having to run this a bunch of times, you can create a small preview window by clicking the "New Preview Mode" button at the top of PixInsight, selecting a portion of the image (I'd pick one with both bright and dark areas), and then hitting the "Real-Time Preview" (open circle) icon at the bottom of the MultiscaleLinearTransform window.  It still takes a bit to apply, but less time, and once you're happy, you can go back to the whole image and apply it there.  I think this worked pretty well here.  I remove the mask before I forget.

While I've got the window open, I apply the same mask I created to the luminance channel now as well, and run the same MultiscaleLinearTransform on it.


Sharpening Fine Details

I'm going to try here a new process I haven't tried yet for bringing out finer details - deconvolution with DynamicPSF.  I'll be following that section of the "Sharpening Fine Details" tutorial on the luminance image.  Deconvolution is awesome because it helps mitigate the blurring effects of the atmosphere, as is easily seen when processing planetary images.  It's magical!

But first, a short break to pet one of my cute kitties.


His name is Orion.  My other cat is Nova.
I might be a little bit obsessed with astronomy.


I open up the DynamicPSF process and select about 85 stars - "not too big, not too little" according to the tutorial.  


I then make the window bigger, sort the list by MAD (mean absolute difference), and scroll through to see where most of the stars cluster.  1.5e-03 to 2.5e-03 seems to be about the range, so I delete the ones outside it.  Next, I re-sort the list by A (amplitude).  The tutorial recommends excluding stars outside the range of 0.25-0.75 amplitude, but the brightest star I have left is only 0.285 in amplitude, so I just cut the ones below 0.1.

Next I sort by r (aspect ratio).  The tutorial recommends keeping stars between 0.6 and 0.8, and all of mine are already tightly within that range, between 0.649 and 0.746, so I keep all 20 that remain.
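
In code terms, the culling boils down to three range filters.  A little sketch with a made-up star list - the dict keys just mirror DynamicPSF's columns:

```python
def select_stars(stars):
    # Keep stars with tight fits (MAD), non-negligible amplitude (A),
    # and a reasonably round shape (aspect ratio r) - the same cuts I
    # made by sorting and deleting rows in DynamicPSF.
    return [s for s in stars
            if 1.5e-3 <= s["MAD"] <= 2.5e-3
            and s["A"] >= 0.1
            and 0.6 <= s["r"] <= 0.8]

print(select_stars([{"MAD": 1.8e-3, "A": 0.28, "r": 0.70},
                    {"MAD": 9.0e-3, "A": 0.50, "r": 0.75}]))  # keeps only the first
```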


Then I hit the "Export" icon (the camera below the list of star data), and a tiny model star appears underneath the window.

Infinite cosmic power!  Ittttty bitty living space!

I had noticed that the stars, even in the center of the image, looked ever-so-slightly stretched.  You can see that here with this model star.

Now I close the DynamicPSF process, but keep the star image open.  First, I need to make another kind of mask, using RangeSelection.  Not gonna lie, I'm a little out of my depth when it comes to masks, but I'm sure if I use them more, I'll get a better feel for them.  For now, I'll just rely on what the tutorial recommends.

I re-open the stretched luminance image I used earlier, open the RangeSelection process (shown in part of this tutorial), and tweak the settings as suggested until the galaxy is selected.
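
The gist of RangeSelection, as far as I understand it, is thresholding with soft edges - something like this numpy sketch, where the linear ramp and the parameter names are my own simplification:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def range_mask(img, low, high, smooth_sigma=2.0):
    # Pixels below `low` -> 0, above `high` -> 1, a linear ramp in
    # between, then a blur playing the role of the smoothness setting.
    m = np.clip((img - low) / (high - low), 0.0, 1.0)
    return gaussian_filter(m, sigma=smooth_sigma)
```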


Next, I need to include a star mask with this as well, so I minimize the range mask for the moment and open the StarMask process, as described in part 5 of that same tutorial.


I stretch it a bit with HistogramTransformation to reveal some dimmer stars.  According to the tutorial, it helps to make the stars a little bigger before combining this with the range mask, so I open up MorphologicalTransformation and follow the tutorial's instructions.
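
The dilation step is simple enough to sketch: grow each bright blob by a few pixels.  scipy's grey_dilation is my stand-in here for MorphologicalTransformation's dilation operator:

```python
from scipy.ndimage import grey_dilation

def grow_stars(star_mask, size=3):
    # Replace each pixel with the max of its size-by-size neighborhood,
    # which fattens the stars a bit before the mask math below.
    return grey_dilation(star_mask, size=(size, size))
```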


To combine the two masks, I open the hallowed PixelMath process; the math portion is simply range_mask - star_mask, with the output going to a new image.


I skip the part of the tutorial that makes the super-bright stars all black, because none of mine are over the nebulous region of the galaxy.  Instead I skip ahead to giving the more pronounced stars over nebulosity some extra protection.


Next comes smoothing the mask using ATrousWaveletTransform; I apply it twice with the recommended settings to blur the mask.
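
Putting the last two steps together - the PixelMath subtraction and the smoothing - here's a minimal sketch; the Gaussian blur passes are just my stand-in for ATrousWaveletTransform's wavelet smoothing:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def combine_masks(range_mask, star_mask, sigma=2.0, passes=2):
    # range_mask - star_mask, clipped to stay in [0, 1], then blurred
    # a couple of times so the mask has no hard edges.
    m = np.clip(range_mask - star_mask, 0.0, 1.0)
    for _ in range(passes):
        m = gaussian_filter(m, sigma=sigma)
    return m
```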


Finally I can apply the mask.


Okay, what was I doing? I can't remember.  *scrolls back up* Oh yeah, deconvolution with DynamicPSF.  Let me go back over to that tutorial.

Looking at the mask, I don't think enough of my stars are protected, but we'll see how it goes.

I open up the Deconvolution process, click on the External PSF tab, and give it the star model I made earlier with DynamicPSF.  I set the other settings recommended by the tutorial, and create a preview so I can play with the number of iterations without waiting forever for each run to complete.  Even all the way up to 50 iterations it hasn't fully converged, so I go ahead and run 50 on the whole image.
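
For a flavor of what's happening under the hood, here's a bare-bones Richardson-Lucy run with scikit-image - the same family of algorithm PixInsight's Deconvolution process offers, minus the regularization and mask support that make the real thing usable:

```python
from skimage.restoration import richardson_lucy

def deconvolve(lum, psf, iters=50):
    # `psf` would be the tiny star model exported from DynamicPSF,
    # normalized to sum to 1; `lum` is the linear luminance in [0, 1].
    psf = psf / psf.sum()
    return richardson_lucy(lum, psf, iters)
```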



The difference isn't enormous for all the work I did to get there, but you can definitely tell that the image is sharper.  Pretty cool!

All right, time to stretch!  **Don't forget to remove the mask!** I almost did.

Stretching

In my Photoshop workflow, the very first thing I do is stretch.  But in PixInsight, a bunch of processes work better on unstretched (linear) data.  When your image comes out of stacking, all of the brightness data are compressed into a very small region of the histogram.

It's tiny!

Stretching spreads the data in that peak across more of the brightness range so that you can actually see it.  All of the data is there; it just appears very dark right now.

I open up HistogramTransformation, turn off the screen stretch, and reset the histogram window.  Now the image is quite dark.  I open up a real-time preview so I can see what I'm doing.


Now I move the gray point slider (the middle one) toward the left, and then I zoom in on the lower histogram.  The upper one shows what the resulting histogram will look like, and the preview window shows what the image will look like.


Stretching is a multi-step process: I hit Apply, then Reset, and then move the gray point slider some more.  The histogram changes each time as the data fill up more of the brightness range.  As you stretch, the peak will move away from the far left edge, and you can kill some extra background there if needed by moving the black point up to the base of the peak.  Don't go too crazy though - astro images with perfectly black backgrounds look kind of "fake."
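
If it helps to see it as math, each apply-and-reset pass is just the midtones transfer function again, optionally with a black point clip first.  A sketch - the specific numbers are illustrative, not what I actually used:

```python
import numpy as np

def mtf(x, m):
    # Midtones transfer function: maps m to 0.5, leaves 0 and 1 fixed.
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def stretch_pass(img, midtones, black=0.0):
    # Clip the black point up to the base of the peak, rescale, then
    # brighten with a midtones value below 0.5 (slider moved left).
    img = np.clip((img - black) / (1.0 - black), 0.0, 1.0)
    return mtf(img, midtones)

# A few gentle passes rather than one violent one:
# img = stretch_pass(stretch_pass(img, 0.1), 0.25)
```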

After a few iterations, my image is now non-linear - no more screen stretch required.


Now I do the same thing for the RGB image.


All right.  With those two killer images, time to combine L + RGB!

Applying L to RGB

Since the luminance filter passes all visible wavelengths, that image tends to have a higher SNR (signal-to-noise ratio), and thus finer detail, because less of it is lost in the noise.  While you can make a good image with just RGB, applying a luminance channel can really make the fine details come out, plus it gives you more flexibility with contrast, since you can act on the L alone without doing weird things to the color.

The application process is simple, and is described in part 3 of the "Combining Monochrome R, G and B Images into a Colour RGB Image and applying Luminance" tutorial.  I open up LRGBCombination and disable the R, G, and B channels, since they're already combined.  I select the L image, leave the channel weights set to 1, and leave the other settings as they are, since I'm going to play with saturation and lightness later in CurvesTransformation instead.  I do tick "Chrominance Noise Reduction."  Then I make sure the RGB image window is active, and hit Apply.
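
The underlying idea is easy to sketch: keep the color from the RGB image and take the lightness from the L frame.  Here's a rough approximation in CIE L*a*b* via scikit-image - LRGBCombination does considerably more than this, so treat it as the concept, not the process:

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def lrgb_combine(rgb, lum):
    # Both inputs stretched (non-linear) and scaled to [0, 1].
    lab = rgb2lab(rgb)
    lab[..., 0] = lum * 100.0   # L* runs 0-100 in scikit-image's Lab
    return np.clip(lab2rgb(lab), 0.0, 1.0)
```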


Yeeeeeeeeeeeaaaaaaaaaaahhhhhhhhhhhhhhh this is getting awesome!!

Final Touches

Almost there folks, almost there...

First, I'll apply HDRMultiscaleTransform to compress the dynamic range and pull detail out of the bright regions, just to see what it does.  It works best with a range selection mask, so I'm just going to re-use the range-minus-star mask I made earlier.


The result:

You know, I'm not a fan.  The core is, like, gone.  Undo!

I tried fewer layers - 4 - but it was even worse.  So I increased to 10, and I think I kind of like it!


Yeah, more definition on the arms!  I think I like it.

And, lastly (I think), CurvesTransformation.  (After removing the mask first, of course...)  I use a preview window to see what I'm doing.  I pull the overall RGB/K curve into more of an S-shape, and then bump up the saturation in the midtones.
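
For the curve-minded, here are the same two moves in numpy terms - a tanh-shaped S-curve (my choice of shape) plus a midrange saturation bump via HSV:

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def s_curve(x, strength=4.0):
    # Darken shadows, brighten highlights; 0, 0.5, and 1 stay fixed.
    return 0.5 + 0.5 * np.tanh(strength * (x - 0.5)) / np.tanh(strength / 2.0)

def final_curves(rgb, strength=4.0, sat_boost=1.2):
    # S-shaped RGB/K curve, then a saturation bump (values are mine).
    hsv = rgb2hsv(s_curve(rgb, strength))
    hsv[..., 1] = np.clip(hsv[..., 1] * sat_boost, 0.0, 1.0)
    return hsv2rgb(hsv)
```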

Drumroll please...

All righty, here it is!


Now that is a damn fine image!  When can I get me one of these telescopes and one of these cameras??

Conclusion

As far as processing goes, this data was easy to work with.  I could have done nothing to it but stretch and combine, and it still would've been awesome.

The colors were very nearly balanced already - I think PhotometricColorCalibration only had to make a tiny correction.

I just can't get enough of the detail in this image!  I've zoomed in and looked all around, and everywhere there is something sharp and marvelous: excellent detail in the HII regions and other nebulous areas, several tiny background galaxies, and so many individual stars resolved in the galaxy!  I showed the image off to my coworkers, and they couldn't get over the level of detail either.  The fact that I can see structure in the nebulae of another galaxy 2.7 million light-years away just leaves my jaw on the floor.

So this was a lot of fun!  I'm starting to get a good feel for PixInsight now, and having some epically awesome data to play with made learning it even more enjoyable.  As cool as this was, though, it's not nearly as satisfying as suffering through setting up gear in the cold, polar aligning, aligning, focusing, warming back up inside while the images are capturing, panicking and freezing some more as I fix problems, trudging back home at 2 AM, and then processing the data and seeing the target rise up out of the light pollution haze and the camera noise.  As much as I pine for being able to stay inside, stay warm, and get images from pristine dark skies while I watch Netflix in town, there's something to be said for a little sweat making the victory sweeter.

Despite that feeling, it is also satisfying to make a truly killer image with some epic (and epically expensive) gear!!

Thanks again to Tolga and Cary for letting me have this data and this experience!