Tuesday, November 13, 2018

Astronomy From Afar: My First Trip with Remote Imaging

Intro


When I was at the Texas Star Party this year, I met some folks from The Astro Imaging Channel, who were going around interviewing people with astrophotography rigs and asking about their setups for a video they were putting together.  They asked if I would do a presentation on their weekly show, and I had a great time presenting "Astrophotography Joyride: A Newbie's Perspective." (It has 2,500 hits now!!)  I stayed on as a panel member for the channel, and have gotten to know the other members.  For example, another presenter, Cary Chleborad, president of Optical Structures (which owns JMI, Farpoint, Astrodon, and Lumicon), asked if I would test a new design of a Lumicon off-axis guider (which I still have, because I'm still trying to get my AVX to work well enough with my C8 to give it a fair shake!).  (You can read about that adventure here.)  Now, Cary and Tolga Gumusayak have collaborated to give me 5 hours of telescope time on a Deep Sky West scope owned by Tolga of TolgaAstro, fitted with a new FLI camera on loan and some sweet Astrodon filters, and they asked me to write about the experience!  Deep Sky West is located in Rowe, New Mexico, under some really dark Bortle 2 skies.

The Gear


The telescope rig in question is the following:
- Mount: Software Bisque Paramount Taurus 400 fork mount with absolute encoders
- Telescope: Planewave CDK14 (14-inch corrected Dall-Kirkham astrograph)
- Camera: Finger Lakes Instrumentation (FLI) Kepler KL4040
- Filters: Astrodon, wideband and narrowband
- Focuser: MoonLite NiteCrawler WR35
I'm told the whole thing is $70k!  And you'll notice the lack of autoguiding gear - you don't need to autoguide this mount; with its absolute encoders, the tracking is already spot-on once it's precisely polar aligned.

Screenshot from the live camera feed from inside the observatory

The Target


After getting the camera specs, I needed to select a target.  With a sensor size of 36x36mm and a focal length of 2563mm, my field-of-view was going to be 48x48 arcmin (or 0.8x0.8 degrees).  So that's a little bigger than what I get with my 11-inch Schmidt-Cassegrain and its focal reducer, but smaller than with the 5-inch refractor I use at the observatory.  It sounded like I was going to get the time soon, so I needed a target that was in a good position this time of year.  While I was tempted to do a nebula with narrowband filters, I haven't processed narrowband images before, so I wanted to stick with LRGB or LRGB + Ha (hydrogen alpha).  So I decided that I should do a galaxy.  Some ideas that came to mind were M81, the Fireworks Galaxy, the Silver Dollar Galaxy, and M33.  M74 was also recommended by the resident expert astrophotographer in my club.  I finally settled on M33: because of its large angular size on the sky, it's difficult for me to get a good image of it from my home location, and it has some nice HII nebula regions that I haven't been able to capture satisfactorily.
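For the curious, here's the quick back-of-the-envelope version of that field-of-view calculation, using only the numbers above:

```python
# Field-of-view check for the CDK14 + KL4040 combination described above.
import math

sensor_mm = 36.0          # sensor side length (36x36 mm)
focal_length_mm = 2563.0  # CDK14 focal length

# Small-angle field of view: 2 * atan(sensor / (2 * focal length))
fov_deg = math.degrees(2 * math.atan(sensor_mm / (2 * focal_length_mm)))
print(f"{fov_deg:.2f} deg per side, or {fov_deg * 60:.0f} arcmin")  # ~0.80 deg / ~48 arcmin
```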

Messier 33 is also known as the Triangulum Galaxy for its location in the small Triangulum constellation, up between the Aries and Andromeda constellations.  It's about 2.7 million lightyears from Earth, and while it is the third largest galaxy in our Local Group at 40% of the Milky Way's size, it is the smallest spiral galaxy in the group.

As far as how to use the 5 hours went, I originally proposed 30x300s L, and 10x300s RGB each.  But then Tolga told me that this camera (like my ZWO ASI1600MM Pro) has very low read noise, but kind of high dark current, and it's also very sensitive, so shorter exposures would be better.  He also told me that the dynamic range was so good on this camera that he shot 5-minute exposures of the Orion Nebula with it, and the core was not blown out!  Even on my ZWO, the core was blown out after only a minute.  So I revised my plan to be 33x180s L, 16x180s RGB each, and I also wanted some Ha, so I asked for 10x300s of that.
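Here's a quick check that the revised plan actually fits in the 5 hours (ignoring filter changes, dithers, and download overhead):

```python
# Total integration time for the revised plan: 33x180s L, 16x180s each of RGB, 10x300s Ha.
plan = {
    "L":  (33, 180),   # (frames, seconds per frame)
    "R":  (16, 180),
    "G":  (16, 180),
    "B":  (16, 180),
    "Ha": (10, 300),
}

total_s = sum(frames * seconds for frames, seconds in plan.values())
print(f"{total_s / 3600:.2f} hours")   # ~4.88 hours, just under the 5-hour allotment
```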

Data Capture


The very next night, November 7, Tolga messaged me saying he was getting the M33 data and asked if I wanted to join him on the VPN!  He had me install TeamViewer, and then sent me the login information for the telescope control computer out at the remote site.  It was a little laggy, but workable.


This was really cool!  We could control the computer as if we were sitting in front of it.  The software, TheSkyX with CCDCommander, lets you automate everything, of course.  The list shown on the screen is the set of actions for the scope to follow, which instead of being time-based are event-based.  The first instruction is "Wait for Sun to set below -10d altitude."  This way, you don't have to figure out all the times yourself every night - it just looks at the sky model for that night at that location.  It turns the camera on, lets it cool, and then goes to the next checked action, which is to run a sublist for imaging M33 in LRGB until M33 sets below 30 degrees altitude.  I thought I grabbed a screenshot of the sublist, but it looks like I didn't.  Darn!  Anyway, it has the exposure times and filter changes and everything else in there.  It also has how often to dither - dithering is when you move the scope just a few pixels every frame or two so that hot pixels don't land in the same place in every frame.  I haven't had to do this yet, since I've never been polar aligned well enough, or had a mount with gears good enough, for my frames not to already drift around a little bit on their own.
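Just to illustrate the event-based idea, here's a purely made-up sketch - CCDCommander's real action lists are built in its GUI, and nothing below is its actual API; it only mimics the logic described above:

```python
# Hypothetical sketch of event-based sequencing with dithering. Everything here
# is invented for illustration.
import random

class FakeSky:
    """Stand-in ephemeris: the target's altitude just ticks down as time passes."""
    def __init__(self, start_alt_deg=70.0):
        self.alt = start_alt_deg
    def altitude(self, target):
        return self.alt
    def advance(self, seconds):
        self.alt -= seconds / 240.0   # crude approximation: ~15 deg of setting per hour

def run_sequence(sky, exposure_s=180, dither_every=2, min_alt_deg=30):
    frames, filters = 0, ["L", "R", "G", "B"]
    while sky.altitude("M33") > min_alt_deg:       # event-based: run until the target sets
        f = filters[frames % len(filters)]
        print(f"expose {exposure_s}s through {f} at altitude {sky.altitude('M33'):.1f} deg")
        sky.advance(exposure_s)
        frames += 1
        if frames % dither_every == 0:             # dither: nudge the pointing a few pixels
            dx, dy = random.uniform(-3, 3), random.uniform(-3, 3)
            print(f"  dither by ({dx:+.1f}, {dy:+.1f}) px")
    return frames

run_sequence(FakeSky())
```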

Also in the above screenshot is a raw single luminance frame.  To the untrained eye, it looks blown out and noisy as heck, but I know better, having looked at a lot of raw files now - it looks great to me!

He only took some of the luminance frames and red frames - the rest he'd get on another night soon - and then switched to green.  On the second green frame, the stars had jumped!  Tolga thought at first a cable might be getting caught, so he switched to the live camera feed and moved the scope around a bit, but everything looked fine.  He mentioned that it had been hitching in this same spot about a month ago.  It turned out to be a snagged cable, which was eventually fixed.  Anyway, the mount moved past the trouble spot, and the rest of the frames came out fine.  I logged off because it was getting late.

He collected the rest of the frames, and then on November 11, sent me the stacked L, R, G, and B images.  Now it's time to process!

Preparing for Combination

Since I'm still learning PixInsight, I'll once again be following the Light Vortex Astronomy tutorials, starting with "Preparing Monochrome Images for Colour-Combination and Further Post-processing."
First, I open up the stacked frames in PixInsight and apply a screen stretch so I can see them.


Wowee!!!!!

The first processing step I'll do is DynamicBackgroundExtraction, to remove the background gradient from each of the four stacked images.  It may be very dark out in Rowe, NM, but there is likely still some background light.  Since the images are aligned, I can use the same set of sample points for each one, so I'll start with the luminance frame and then apply the process to each of the others.
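As an aside, the core idea behind background extraction is pretty simple: fit a smooth surface through hand-picked background samples and subtract it.  Here's a minimal Python sketch of that idea (DBE's real surface model is more sophisticated, and this is not its actual code):

```python
# Toy background extraction: fit a low-order 2D polynomial to background
# sample points and subtract it. Illustrative only - not DBE's algorithm.
import numpy as np

def fit_background(samples_xy, samples_val, shape, order=2):
    """Fit a 2D polynomial surface of the given order to (x, y) background samples."""
    x, y = samples_xy[:, 0].astype(float), samples_xy[:, 1].astype(float)
    terms = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    coeffs, *_ = np.linalg.lstsq(np.column_stack(terms), samples_val, rcond=None)

    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    full_terms = [xx**i * yy**j for i in range(order + 1) for j in range(order + 1 - i)]
    return sum(c * t for c, t in zip(coeffs, full_terms))

# Toy usage: a 100x100 frame with a linear gradient plus noise.
rng = np.random.default_rng(0)
gradient = np.fromfunction(lambda y, x: 0.01 * x + 0.02 * y, (100, 100))
img = gradient + rng.normal(0, 0.01, (100, 100))
samples = rng.integers(0, 100, size=(60, 2))            # (x, y) background sample positions
flattened = img - fit_background(samples, img[samples[:, 1], samples[:, 0]], img.shape)
print(img.std(), flattened.std())                        # the gradient is mostly gone
```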

Following the tutorial's advice, I set the "default sample radius" to 15 and "samples per row" to 15 in the Sample Generation tab.  I hit Generate, but there were still a lot of points missing from the corners, so I increased the tolerance (in the Model Parameters (1) tab) to 1.000.  Even after increasing it all the way to 1.5, there were still points missing from the corners, so I decided just to add some by hand.  I also decided there were too many points per row, so I reduced that from 15 to 12.  Then I went through and checked every point, moving any that overlapped a star and deleting any that fell on the galaxy.  You want only background.



Next, I lower the tolerance until I start getting red points - ones that DBE is rejecting, making sure to hit "Resize All" and not "Generate" so I don't lose all my work!  I stop at 0.500, and all my points are still valid.  I open the "Target Image Correction" tab, select "Subtraction" in the "Correction" dropdown, and then hit Execute.  After I autostretch the result, this is what I have:


Hmm, maybe a little too aggressive - there are some dark areas that I don't think are real.  I back off the Tolerance to 1.000 and try again.


The result looks pretty much the same, so I'm going to run with it and see what happens.  I'll save the process to my workspace so I can adjust later if needed (and I also need to apply it to my RGB frames).  This is what the background it extracted looks like, by the way:


I close the background image, minimize the pre-DBE image, and put a New Instance icon for the DBE process in the workspace (by clicking and dragging the New Instance triangle icon on the bottom of the DBE window into the workspace), and then I close the DBE process.  Then I minimize the DBE'd luminance image and open up the red image, and double-click the process I just put into the workspace, which then applies the sample points to the red image.  None are colored red for invalid, so I execute the process, and get the following result:


Lookin' good.  I do the same for the green and blue, and save out all of the DBE'd images for later reference, if needed.  I also save the process to the same folder for possible later use.

Next, I open up the LinearFit process, which levels the LRGB frames with each other to account for differences in background that are a result of imaging on different nights, different times of the night, the different background levels you can get from the different filters, etc.  For this process, you want to select the brightest image as your reference image.  It's probably luminance, but we can check with HistogramTransformation.  


I select L, R, G, and B (the ones I've applied DBE to) and zoom in on the peak (in the lower histogram).  It's so dark at the Deep Sky West observatory that, especially after background extraction, there's essentially no background, and pretty much all the peaks are in the same place.  Even the non-DBE'd original images have basically no background (which would show up as space between the left edge of the peak and the left side of the histogram window).  So I select the luminance image as the reference, and then apply the LinearFit process to each of the R, G, and B frames by opening them back up and hitting the Apply button.  I needed to re-auto-stretch the images afterwards.
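Under the hood, LinearFit is essentially finding the straight-line relationship between each channel and the reference and rescaling accordingly.  A tiny sketch of the idea (not PixInsight's actual implementation):

```python
# Toy LinearFit: find a, b so that a*target + b best matches the reference,
# then rescale the target channel. Illustrative only.
import numpy as np

def linear_fit(reference, target):
    a, b = np.polyfit(target.ravel(), reference.ravel(), deg=1)
    return a * target + b

rng = np.random.default_rng(1)
L = rng.random((50, 50))
R = 0.7 * L + 0.05 + rng.normal(0, 0.01, L.shape)   # a dimmer, offset "red" channel
print(L.mean(), R.mean(), linear_fit(L, R).mean())  # the fitted R now sits at L's level
```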

Combining the RGB Images


Now that the frames' average background and brightness levels match, it's time to combine the LRGB images together.  For that, I go to the "Combining Monochrome R, G and B Images into a Colour RGB Image and applying Luminance" tutorial.

First, I open the ChannelCombination process, and make sure that "RGB" is selected as the color space.  Then I assign the R, G, and B images that I have background extracted and linearly-fitted to each of those channels, and hit the Apply Global button, which is the circular icon on the bottom of the process window.  


Yay color!  It's showing some noise at the moment, but we'll get there.  Remember, this is just a screen stretch, which tends to be cruder than the stretch I'll actually apply later.

I'll come back to this tutorial later to combine the luminance image with the RGB; it's a good idea to process them separately and then bring them together, since they bring different strengths to the table.

Here, I took a quick break to make some Mexican hot chocolate.
Mmmmmmm yes.

Color Calibration

Since I'm dealing with the color image first, I'll go ahead and color calibrate now using PhotometricColorCalibration.  This process plate solves the image and then uses Sun-type stars it finds in the field as a white reference for re-balancing the colors.  In order to plate solve, you need to tell it where the camera is looking and what your pixel resolution is.  To tell it where this image is looking, I simply click "Search Coordinates," enter "M33," and it grabs the celestial coordinates (right ascension and declination) for that object.


After hitting "Get," I enter in the focal length and pixel size.  Focal length on the Planewave CDK14 is 2563mm, and the pixel size on the FLI Kepler KL4040 is a whopping 9 microns!  I enter these values and hit the Apply button, then wait.  A few minutes later, the result appears.


The change is small this time, but other times I've used this, it's made a huge difference.  It looks like these Astrodon filters are already well color balanced.  My own Astronomik filters are too, but sometimes still require a small bit of tweaking.  My DSLR images benefit the most from color calibration, as do images where I used other filters besides my Astronomik ones.

Noise Reduction

Time to deal with the background noise!  I'll be following the "Noise Reduction" and "Producing Masks" tutorials.

First, since I want to reduce noise without blurring fine details in the brighter parts of the image, I'm going to use a mask that will protect the brighter parts of the image, where the signal-to-noise ratio is already high, so that I can attack the dark areas more heavily.  Since I have a luminance image that matches the color data, I'm going to use that as my mask first, and then see if I need to make a more selective one.  Now, masks work better when they are bright and non-linear, so I duplicate my luminance image first by clicking and dragging the tab with the name of the image (once I've re-maximized it so I can see it) to the workspace.  Then I turn off the auto screen stretch by hitting the "Reset Screen Transfer Functions: Active Window" button in the button bar at the top of PixInsight, and open up the ScreenTransferFunction process.


Then I hit the little radioactive icon to apply an auto stretch again, and I open the HistogramTransformation process.  I select my cloned luminance image from the dropdown list, hit the "Reset" button in the bottom of that window, and then click and drag the "New Instance" icon (triangle) from the ScreenTransferFunction process to the bottom bar of the HistogramTransformation window.  This applies the same parameters that the auto stretch calculated to the actual histogram of the image.  Now I hit the Reset button on the ScreenTransferFunction window, close it, and hit the Apply button in HistogramTransformation to apply the stretch.  I close HistogramTransformation.

To apply it to my color image, I select the color image to make the window active again, I go up to Mask -> Select Mask, and I select my cloned, stretched luminance image.  


Now, the red areas are the areas the mask is protecting, so since I want to apply the noise reduction to the dark areas, I invert the mask by going to Mask -> Invert Mask.


There we go.

I open up MultiscaleLinearTransform for the noise reduction, and I minimize the cloned luminance image.  Since I don't need to see the mask anymore, I go up to Mask -> Show Mask to hide it.  Now, don't forget you have it on - a few times I've tried to do a stretch or other processing and it looks really weird or doesn't work, and it's because I left the mask on!

Following the tutorial's recommendation, I set the settings for the four layers, and hit Apply.



If you want to see the effect of changes made to the parameters without having to run this a bunch of times, you can create a small preview window by clicking the "New Preview Mode" button at the top of PixInsight, selecting a portion of the image (I'd pick one with both bright and dark areas), and then hitting the "Real-Time Preview" (open circle) icon at the bottom of the MultiscaleLinearTransform window.  It still takes a bit to apply, but less time, and once you're happy, you can go back to the whole image and apply it there.  I think this worked pretty well here.  I remove the mask before I forget.

While I've got the window open, I apply the same mask I created to the luminance channel now as well, and run the same MultiscaleLinearTransform on it.
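For a feel of what the wavelet layers are doing, here's a rough sketch of masked multiscale noise reduction - just the concept, not MultiscaleLinearTransform's actual algorithm or settings:

```python
# Toy multiscale noise reduction: split the image into detail layers of
# increasing scale, damp the finest layers where the (inverted) luminance
# mask says "background", and rebuild. Illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_denoise(img, mask, n_layers=4, damping=(0.4, 0.6, 0.8, 1.0)):
    """mask ~ 1 where noise reduction should act (dark background), 0 where protected."""
    layers, residual = [], img.astype(float)
    for i in range(n_layers):
        smooth = gaussian_filter(residual, sigma=2 ** i)
        layers.append(residual - smooth)          # detail at scale ~2^i pixels
        residual = smooth
    out = residual
    for layer, d in zip(layers, damping):
        out += layer * (1 - mask * (1 - d))       # attenuate fine detail in masked areas
    return out

rng = np.random.default_rng(2)
img = rng.normal(0.1, 0.02, (128, 128))           # toy noisy background
mask = np.ones_like(img)                          # fully "background" in this toy example
print(img.std(), multiscale_denoise(img, mask).std())   # the noise goes down
```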


Sharpening Fine Details

I'm going to try a process here that I haven't used before for bringing out finer details - deconvolution with DynamicPSF.  I'll be following that section of the "Sharpening Fine Details" tutorial on the luminance image.  Deconvolution is awesome because it helps mitigate the blurring effects of the atmosphere, as is easily seen when processing planetary images.  It's magical!

But first, a short break to pet one of my cute kitties.


His name is Orion.  My other cat is Nova.
I might be a little bit obsessed with astronomy.


I open up the DynamicPSF process and select about 85 stars - "not too big, not too little" according to the tutorial.  


I then make the window bigger and sort the list by MAD (mean absolute difference), and scroll through to see where most of the stars cluster.  The range seems to be about 1.5e-03 to 2.5e-03, so I delete the ones outside it.  Next, I re-sort the list by A (amplitude).  The tutorial recommends excluding stars outside the range of 0.25-0.75 amplitude, but the brightest star I still have left is 0.285 in amplitude, so I just cut the ones below 0.1.

Next I sort by r (aspect ratio).  The tutorial recommends keeping stars between 0.6-0.8, and all of mine are already pretty tight in that range, between 0.649 and 0.746, so I keep all 20 of them.


Then I hit the "Export" icon (picture of the camera below the list of star data), and a tiny model star appears underneath the window.

Infinite cosmic power!  Ittttty bitty living space!

I had noticed that the stars, even in the center of the image, looked ever-so-slightly stretched.  You can see that here with this model star.

Now I close the DynamicPSF process, but keep the star image open.  First, I need to make another kind of mask, involving RangeSelection.  Not gonna lie, I'm a little out of my depth when it comes to masks, but I'm sure if I use them more, I'll start to get a better feel for them.  For now, I'll just rely on what the tutorial recommends.

I re-open the stretched luminance image I used earlier, then open the RangeSelection process (shown in part of this tutorial) and tweak the settings as suggested in the tutorial until the galaxy is selected.


Next, I needed to include a star mask with this as well, so I minimize the range mask for the moment and open the StarMask process, as described in part 5 of that same tutorial.  


I stretch it a bit with HistogramTransformation to reveal some dimmer stars.  According to the tutorial, it will help to make the stars a little bit bigger before convolving this with the range mask, so I open up MorphologicalTransformation, and copy the tutorial's instructions.


To combine the two masks, I open the hallowed PixelMath process, and the math portion of it is range_mask - star_mask, creating a new image.


I skip the part of the tutorial that makes the super-bright stars all black, because none of mine are over the nebulous region of the galaxy.  I skip ahead to giving the more pronounced stars over nebulosity some extra protection.


Next comes smoothing the mask using ATrousWaveletTransform; I apply it twice with the recommended settings to blur the mask.


Finally I can apply the mask.


Okay, what was I doing? I can't remember.  *scrolls back up* Oh yeah, deconvolution with DynamicPSF.  Let me go back over to that tutorial.

Looking at the mask, I don't think enough of my stars are protected, but we'll see how it goes.

I open up the Deconvolution process, click on the External PSF tab, and give it the star model I made earlier with DynamicPSF.  I set other settings recommended by the tutorial, and create a preview so I can play with the number of iterations without waiting forever for it to complete.  All the way up to 50, it hasn't converged yet, so I go ahead and run 50 on the whole image.
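For anyone curious what those iterations are actually doing, here's a bare-bones Richardson-Lucy sketch - the classic algorithm this kind of deconvolution is built on.  The real Deconvolution process adds regularization, deringing, and the masking from above; this is not its code:

```python
# Minimal Richardson-Lucy deconvolution: iteratively nudge an estimate so that,
# when blurred by the PSF, it matches the observed image. Illustrative only.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iterations=50):
    estimate = np.full(image.shape, image.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)          # avoid divide-by-zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy example: blur a single "star" with a Gaussian PSF, then sharpen it back.
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / 8.0)
psf /= psf.sum()
truth = np.zeros((64, 64))
truth[32, 32] = 1.0
blurred = fftconvolve(truth, psf, mode="same")
sharp = richardson_lucy(blurred, psf, iterations=50)
print(blurred.max(), sharp.max())    # the deconvolved star is much more concentrated
```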



The difference isn't enormous for all the work I did to get there, but you can definitely tell that the image is sharper.  Pretty cool!

All right, time to stretch!  **Don't forget to remove the mask!** I almost did.

Stretching

In my Photoshop workflow, the very first thing I do is stretch.  But in PixInsight, there's a bunch of stuff that works better on unstretched (linear) data.  When your image comes out of stacking, all of the brightness data are compressed into a very small region of the histogram.

It's tiny!

Stretching makes the data in that peak fill up more of the range of brightnesses so that you can actually see it.  All of the data is there; it just appears very dark right now.

I open up HistogramTransformation and turn off the screen stretch, and reset the histogram window.  Now the image is quite dark.  I open up a real-time preview so I can see what I'm doing.


Now I move the gray point slider (the middle one) toward the left, and then I zoom in on the lower histogram.  The upper one shows what the result histogram will look like, and the preview window shows what the image will look like. 


Stretching is a multi-step process; I hit Apply, and then Reset, and then I move the gray point slider some more.  The histogram changes each time as the data fill up more of the brightness range.  As you stretch, the histogram will move off the far left edge, and you can kill some extra background there if needed by moving the black point up to the base of the peak.  Don't go too crazy though - astro images with perfectly black backgrounds look kind of "fake."  
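The stretch itself is built around PixInsight's midtones transfer function: each Apply maps the chosen gray point to middle gray while leaving pure black and white fixed.  A small sketch of that function (my own illustration, not PixInsight's code):

```python
# The midtones transfer function (MTF) behind PixInsight's histogram stretch:
# mtf(m, m) = 0.5, mtf(m, 0) = 0, mtf(m, 1) = 1. Moving the gray point slider
# left means m < 0.5, which lifts the faint end without clipping either end.
import numpy as np

def mtf(m, x):
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

x = np.linspace(0.0, 1.0, 6)
print(np.round(mtf(0.25, x), 3))              # one pass: faint values come up a lot
print(np.round(mtf(0.25, mtf(0.25, x)), 3))   # a second pass, like hitting Apply again
```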

After a few iterations, my image is now non-linear - no more screen stretch required.


Now I do the same thing for the RGB image.


All right.  With those two killer images, time to combine L + RGB!

Applying L to RGB

Since the luminance filter passes all visible wavelengths, that image tends to have a higher SNR (signal-to-noise ratio), and thus finer detail because it's not lost in the noise.  While you can make a good image with just RGB, applying a luminance channel can really make the fine details come out more, plus give you more flexibility with contrast, since you can act on the L alone and not do weird things to the color.  

The application process is simple, and is described in part 3 of the "​Combining Monochrome R, G and B Images into a Colour RGB Image and applying Luminance" tutorial.  I open up LRGBCombination, and disable the R, G, and B channels, since they're already combined.  I select the L image, leave the channel weights set to 1, and leave the other settings as they are, since I'm going to play with saturation and lightness in CurvesTransformation instead later on.  I will tick "Chrominance Noise Reduction."  Then I make sure the RGB image window is active, and hit Apply.
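Conceptually, this is a swap in a luminance/chrominance color space: the RGB image supplies the color, and the processed L supplies the lightness and detail.  A rough sketch of the idea using CIELAB (not LRGBCombination's actual math):

```python
# Toy LRGB combination: convert RGB to CIELAB, replace the lightness channel
# with the processed luminance, and convert back. Illustrative only.
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def lrgb_combine(rgb, lum):
    """rgb and lum are floats in [0, 1]; lum replaces the L* channel."""
    lab = rgb2lab(rgb)
    lab[..., 0] = lum * 100.0        # L* in CIELAB runs from 0 to 100
    return np.clip(lab2rgb(lab), 0.0, 1.0)

rng = np.random.default_rng(3)
rgb = rng.random((32, 32, 3)) * 0.3              # toy, dim color image
lum = np.clip(rgb.mean(axis=-1) * 2.0, 0.0, 1.0) # toy "processed" luminance
print(lrgb_combine(rgb, lum).mean())             # brighter result, hues still from rgb
```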


Yeeeeeeeeeeeaaaaaaaaaaahhhhhhhhhhhhhhh this is getting awesome!!

Final Touches

Almost there folks, almost there...

First, I'll apply HDRMultiscaleTransform, which compresses the brighter areas to bring out detail in them, just to see what it does.  It works best with a range selection mask, and I'm just going to re-use the range_mask - star_mask combination I made earlier.


The result:

You know, I'm not a fan.  The core is like gone.  Undo!

I tried fewer layers - 4 - but it was even worse.  So I increased to 10, and I think I kind of like it!


Yeah, more definition on the arms!  I think I like it.

And, lastly (I think), CurvesTransformation.  (And removing the mask first...) I use a preview window to see what I'm doing.  I move the whole RGB/K curve into more of an S-shape, and then bump up saturation in the middle.

Drumroll please...

All righty, here it is!


Now that is a damn fine image!  When can I get me one of these telescopes and one of these cameras??

Conclusion

As far as processing goes, this data was easy to work with.  I could have done nothing to it but stretch and combine, and it would've been awesome.

The colors were very nearly already balanced - I think the PhotometricColorCalibration only did a tiny bit.

I just can't get enough of the detail in this image!  I've zoomed in and looked all around, and everywhere there is something sharp and marvelous.  Excellent detail in the HII regions and other nebulous regions, several tiny background galaxies, and so many stars that are resolved in the galaxy!  I showed off the image to my coworkers, and they also couldn't get over the level of detail that was revealed.  The fact that I can see structure in the nebulae of another galaxy 2.7 million lightyears away just leaves my jaw on the floor.  

So this was a lot of fun!  I'm starting to get a good feel for PixInsight now, and having some epically-awesome data to play with made using it even more fun.  As cool as this was, though, it's not nearly as satisfying as suffering through setting up gear in the cold, polar aligning, aligning, focusing, warming back up inside while the images are capturing, panicking and freezing some more as I fix problems, trudging back home at 2 AM, and then processing the data and seeing the target rise up out of the light pollution haze and the camera noise.  As much as I pine for being able to stay inside and stay warm and get images from pristine dark skies while I watch Netflix in town, there's something to be said for a little sweat making the victory sweeter.

Despite that feeling, it is also satisfying to make a truly killer image with some epic (and epically expensive) gear!!

Thanks again to Tolga and Cary for letting me have this data and this experience!

