Tuesday, November 13, 2018

Astronomy From Afar: My First Trip with Remote Imaging

Intro


When I was at the Texas Star Party this year, I met some folks from The Astro Imaging Channel, who were going around interviewing people with astrophotography rigs about their setups for a video they were putting together.  They asked if I would do a presentation on their weekly show, and I had a great time presenting "Astrophotography Joyride: A Newbie's Perspective." (It has 2,500 hits now!!)  I stayed on as a panel member for the channel, and have gotten to know the other members.  For example, another presenter, Cary Chleborad, president of Optical Structures (which owns JMI, Farpoint, Astrodon, and Lumicon), asked if I would test a new design of Lumicon off-axis guider.  (I still have it because I'm still trying to get my AVX working well enough with my C8 to give it a fair shake!  You can read about that adventure here.)  Now, Cary and Tolga Gumusayak of TolgaAstro have collaborated to give me 5 hours of telescope time on a Deep Sky West scope owned by Tolga, with a new FLI camera on loan and some sweet Astrodon filters, and they asked me to write about the experience!  Deep Sky West is located in Rowe, New Mexico, under some really dark Bortle 2 skies.

The Gear


The telescope rig in question is the following:
- Mount: Software Bisque Paramount Taurus 400 fork mount with absolute encoders
- Telescope: PlaneWave CDK14 (14-inch corrected Dall-Kirkham astrograph)
- Camera: Finger Lakes Instrumentation (FLI) Kepler KL4040
- Filters: Astrodon, wideband and narrowband
- Focuser: MoonLite NiteCrawler WR35
I'm told the whole thing is $70k!  And you'll notice the lack of autoguiding gear - you don't need to autoguide this mount.  With its absolute encoders, it already tracks accurately enough on its own once it's precisely polar aligned.

Screenshot from the live camera feed from inside the observatory

The Target


After getting the camera specs, I needed to select a target.  With a sensor size of 36x36mm and a focal length of 2563mm, my field of view was going to be 48x48 arcmin (or 0.8x0.8 degrees) - a little bigger than what my 11-inch Schmidt-Cassegrain gives with its focal reducer, but smaller than the 5-inch refractor I use at the observatory.  It sounded like I was going to get the time soon, so I needed a target that was in a good position this time of year.  While I was tempted to do a nebula with narrowband filters, I haven't processed narrowband images before, so I wanted to stick with LRGB or LRGB + Ha (hydrogen alpha).  So I decided that I should do a galaxy.  Some ideas that came to mind were M81, the Fireworks Galaxy, the Silver Dollar Galaxy, and M33.  M74 was also recommended by the resident expert astrophotographer in my club.  I finally settled on M33: its large angular size on the sky makes it difficult for me to image well from my home location, and it has some nice HII nebula regions that I haven't been able to satisfactorily capture.
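Here's that field-of-view estimate as a quick Python sanity check - my own scratchpad math, not output from any tool:

```python
import math

# Back-of-the-envelope field of view: 36x36 mm sensor, 2563 mm focal length.
sensor_mm = 36.0
focal_mm = 2563.0

# Small-angle approximation: FOV (radians) ~= sensor size / focal length
fov_rad = sensor_mm / focal_mm
fov_arcmin = math.degrees(fov_rad) * 60

print(f"{fov_arcmin:.1f} arcmin per side")  # ~48.3 arcmin, i.e. ~0.8 degrees
```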

Messier 33 is also known as the Triangulum Galaxy for its location in the small constellation Triangulum, up between Aries and Andromeda.  It's about 2.7 million light-years from Earth, and while it is the third-largest galaxy in our Local Group at roughly 40% of the Milky Way's size, it is the smallest of the group's spiral galaxies.

As for how to use the 5 hours: I originally proposed 30x300s L and 10x300s each of RGB.  But then Tolga told me that this camera (like my ZWO ASI1600MM Pro) has very low read noise but somewhat high dark current, and it's also very sensitive, so shorter exposures would be better.  He also told me that the dynamic range was so good on this camera that he shot 5-minute exposures of the Orion Nebula with it, and the core was not blown out!  Even on my ZWO, the core was blown out after only a minute.  So I revised my plan to 33x180s L and 16x180s each of RGB, and since I also wanted some Ha, I asked for 10x300s of that.
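For the curious, here's a quick scratchpad check (mine, not anything from the software) that the revised plan fits the 5-hour allotment:

```python
# Does the revised plan fit in the 5-hour budget?
plan = {
    "L":  (33, 180),   # (frame count, exposure in seconds)
    "R":  (16, 180),
    "G":  (16, 180),
    "B":  (16, 180),
    "Ha": (10, 300),
}

total_s = sum(count * exposure for count, exposure in plan.values())
print(f"{total_s / 3600:.2f} hours")  # ~4.88 hours - just under the 5-hour allotment
```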

Data Capture


The very next night, November 7, Tolga messaged me saying he was getting the M33 data and asked if I wanted to join him on the VPN!  He had me install TeamViewer, and then sent me the login information for the telescope control computer out at the remote site.  It was a little laggy, but workable.


This was really cool!  We could control the computer as if we were sitting in front of it.  The software, TheSkyX with CCDCommander, lets you automate everything, of course.  The list shown on the screen is the set of actions for the scope to follow, which instead of being time-based are event-based.  The first instruction is "Wait for Sun to set below -10d altitude."  This way, you don't have to figure out all the times yourself every night - it just looks at the sky model for that night at that location.  It turns the camera on, lets it cool, and then goes to the next checked action, which is to run a sublist for imaging M33 in LRGB until M33 sets below 30 degrees altitude.  I thought I grabbed a screenshot of the sublist, but it looks like I didn't.  Darn!  Anyway, it has the exposure times and filter changes and everything else in there.  It also specifies how often to dither - dithering is when you move the scope just a few pixels every frame or every couple of frames so that hot pixels don't land in the same place in every frame.  I haven't had to do this yet, since I've never been polar aligned well enough - or had a scope with good enough gears - for the frames not to already be drifting around a little bit on their own.
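As an aside, here's roughly how you could compute an event like "Sun below -10 degrees altitude" yourself.  This is just my own astropy sketch with approximate coordinates for Rowe, NM - I have no idea how CCDCommander does it internally:

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import AltAz, EarthLocation, get_sun
from astropy.time import Time

# Approximate site: Rowe, NM (illustrative coordinates, not the actual survey values)
site = EarthLocation(lat=35.5 * u.deg, lon=-105.7 * u.deg, height=2100 * u.m)

# Sample the next 12 hours in one-minute steps
times = Time.now() + np.linspace(0, 12, 721) * u.hour

# Altitude of the Sun at each sample time
sun_alt = get_sun(times).transform_to(AltAz(obstime=times, location=site)).alt

dark = times[sun_alt < -10 * u.deg]
print("Imaging can start at:", dark[0].iso if len(dark) else "not in the next 12 hours")
```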

Also in the above screenshot is a raw single luminance frame.  To the untrained eye, it looks blown out and noisy as heck, but I know better, having looked at a lot of raw files now - it looks great to me!

He only took some of the luminance and red frames - the rest he'd get on another night soon - and then switched to green.  On the second green frame, the stars had jumped!  Tolga thought at first that a cable might be getting caught, so he switched to the live camera feed and moved the scope around a bit, but everything looked fine.  He mentioned that the mount had been hitching in this same spot about a month ago.  It eventually turned out to be a snagged cable, which has since been fixed.  Anyway, the mount moved past the trouble spot, and the rest of the frames came out fine.  I logged off because it was getting late.

He collected the rest of the frames, and then on November 11, sent me the stacked L, R, G, and B images.  Now it's time to process!

Preparing for Combination

Since I'm still learning PixInsight, I'll once again be following the Light Vortex Astronomy tutorials, starting with "Preparing Monochrome Images for Colour-Combination and Further Post-processing."
First, I open up the stacked frames in PixInsight and apply a screen stretch so I can see them.


Wowee!!!!!

The first processing step I'll do is DynamicBackgroundExtraction to remove the residual background from each of the four stacked images.  It may be very dark out in Rowe, NM, but there is likely still some background light.  Since the images are aligned, I can use the same set of sample points for each one, so I'll start with the luminance frame and then apply the process to the others.

Following the tutorial's advice, I set the "default sample radius" to 15 and "samples per row" to 15 in the Sample Generation tab.  I hit Generate, but there were still a lot of points missing from the corners, so I increased the tolerance (in the Model Parameters (1) tab) to 1.000.  After increasing all the way to 1.5, there were still points missing from the corners, but I decided just to add some by hand.  I also decided there were too many points per row, so I reduced that from 15 to 12.  Then I went through and checked every point, moving it to make sure it was not overlapping a star, and deleting points that were on the galaxy.  You want only background.  
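To demystify what DBE does with those samples: it fits a smooth model through the background-only points and subtracts it from the frame.  A minimal sketch of the concept in numpy - the real process uses a fancier surface model and medians a neighborhood around each sample, so this is the idea only:

```python
import numpy as np

def subtract_background(img, sample_xy, degree=2):
    """Fit a low-order 2D polynomial to background samples and subtract it.

    img       : 2D numpy array (one channel, linear data)
    sample_xy : list of (x, y) pixel coordinates chosen on pure background
    """
    xs = np.array([x for x, y in sample_xy], dtype=float)
    ys = np.array([y for x, y in sample_xy], dtype=float)
    vals = img[ys.astype(int), xs.astype(int)]

    # Build polynomial terms x^i * y^j for i + j <= degree.
    terms = [(i, j) for i in range(degree + 1)
                    for j in range(degree + 1) if i + j <= degree]
    A = np.column_stack([xs**i * ys**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, vals, rcond=None)

    # Evaluate the background model over the whole frame and subtract it.
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    model = sum(c * xx**i * yy**j for c, (i, j) in zip(coeffs, terms))
    return img - model
```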



Next, I lower the tolerance until I start getting red points - ones that DBE is rejecting, making sure to hit "Resize All" and not "Generate" so I don't lose all my work!  I stop at 0.500, and all my points are still valid.  I open the "Target Image Correction" tab, select "Subtraction" in the "Correction" dropdown, and then hit Execute.  After I autostretch the result, this is what I have:


Hmm, maybe a little too aggressive - there are some dark areas that I don't think are real.  I back off Tolerance to 1.000 and try again.


The result looks pretty much the same, so I'm going to run with it and see what happens.  I'll save the process to my workspace so I can adjust later if needed (and I also need to apply it to my RGB frames).  This is what the background it extracted looks like, by the way:


I close the background image, minimize the pre-DBE image, and drop a New Instance icon for the DBE process into the workspace (by clicking and dragging the New Instance triangle icon at the bottom of the DBE window), and then I close the DBE process.  Then I minimize the DBE'd luminance image, open up the red image, and double-click the process I just saved to the workspace, which applies the sample points to the red image.  None are colored red for invalid, so I execute the process and get the following result:


Lookin' good.  I do the same for the green and blue, and save out all of the DBE'd images for later reference, if needed.  I also save the process to the same folder for possible later use.

Next, I open up the LinearFit process, which levels the LRGB frames with each other to account for differences in background that are a result of imaging on different nights, different times of the night, the different background levels you can get from the different filters, etc.  For this process, you want to select the brightest image as your reference image.  It's probably luminance, but we can check with HistogramTransformation.  


I select L, R, G, and B (the ones I've applied DBE to) and zoom in on the peak (in the lower histogram).  It's so dark at the Deep Sky West observatory that, especially after background extraction, there is essentially no background, and pretty much all the peaks are in the same place.  Even the original, non-DBE'd images have basically no background (which would show up as space between the left edge of the peak and the left side of the histogram window).  So I select the luminance image as reference, and then apply the LinearFit process to each of the R, G, and B frames by opening them back up and hitting the Apply button.  I needed to re-auto-stretch the images afterwards.
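Under the hood, LinearFit computes a linear function (scale and offset) that maps the target image onto the reference in a least-squares sense - PixInsight's version also rejects outliers, which I'm leaving out.  A minimal numpy sketch of the core idea:

```python
import numpy as np

def linear_fit(target, reference):
    """Scale/offset the target so its levels match the reference (LinearFit's core idea)."""
    x = target.ravel()
    y = reference.ravel()
    # Least-squares solution of y ~= a*x + b
    a, b = np.polyfit(x, y, 1)
    return a * target + b
```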

Combining the RGB Images


Now that the background and brightness levels are matched, it's time to combine the LRGB images.  For that, I go to the "Combining Monochrome R, G and B Images into a Colour RGB Image and applying Luminance" tutorial.

First, I open the ChannelCombination process, and make sure that "RGB" is selected as the color space.  Then I assign the R, G, and B images that I have background extracted and linearly-fitted to each of those channels, and hit the Apply Global button, which is the circular icon on the bottom of the process window.  
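If you're curious what's happening under the hood, the combination itself is nothing exotic - conceptually it's just stacking the three registered monochrome arrays into one three-channel image.  A numpy sketch of the idea, not PixInsight code:

```python
import numpy as np

def combine_rgb(r: np.ndarray, g: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Stack three registered monochrome frames into one color image."""
    return np.dstack([r, g, b])  # shape (height, width, 3)
```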


Yay color!  It's showing some noise at the moment, but we'll get there.  Remember, this is just a screen stretch, which tends to be less refined than the stretch I'll eventually apply for real.

I'll come back to this tutorial later to combine the luminance image with the RGB; it's a good idea to process them separately and then bring them together, since they bring different features to the table.

Here, I took a quick break to make some Mexican hot chocolate.
Mmmmmmm yes.

Color Calibration

Since I'm dealing with the color image first, I'll go ahead and color calibrate now using PhotometricColorCalibration.  This process plate solves the image, measures the stars it finds against catalog photometry, and uses that comparison to re-balance the colors against a white reference.  In order to plate solve, you need to tell it where the camera is looking and what your pixel resolution is.  To tell it where this image is looking, I simply click "Search Coordinates," enter "M33," and it grabs the celestial coordinates (right ascension and declination) for that object.


After hitting "Get," I enter in the focal length and pixel size.  Focal length on the Planewave CDK14 is 2563mm, and the pixel size on the FLI Kepler KL4040 is a whopping 9 microns!  I enter these values and hit the Apply button, then wait.  A few minutes later, the result appears.


The change is small this time, but other times I've used this, it's made a huge difference.  It looks like these Astrodon filters are already well color balanced.  My own Astronomik filters are too, but sometimes they still require a small bit of tweaking.  My DSLR images benefit the most from color calibration - as do images where I used other filters besides my Astronomik ones.

Noise Reduction

Time to deal with the background noise!  I'll be following the "Noise Reduction" and "Producing Masks" tutorials.

First, since I want to reduce noise without blurring fine details in the brighter parts of the image, I'm going to use a mask that will protect the brighter parts of the image, where the signal-to-noise ratio is already high, so that I can attack the dark areas more heavily.  Since I have a luminance image that matches the color data, I'm going to use that as my mask first, and then see if I need to make a more selective one.  Now, masks work better when they are bright and non-linear, so I duplicate my luminance image first by clicking and dragging the tab with the name of the image (once I've re-maximized it so I can see it) to the workspace.  Then I turn off the auto screen stretch by hitting the "Reset Screen Transfer Functions: Active Window" button in the button bar at the top of PixInsight, and open up the ScreenTransferFunction process.


Then I hit the little radioactive icon to apply an auto stretch again, and I open the HistogramTransformation process.  I select my cloned luminance image from the dropdown list, hit the "Reset" button in the bottom of that window, and then click and drag the "New Instance" icon (triangle) from the ScreenTransferFunction process to the bottom bar of the HistogramTransformation window.  This applies the same parameters that the auto stretch calculated to the actual histogram of the image.  Now I hit the Reset button on the ScreenTransferFunction window, close it, and hit the Apply button in HistogramTransformation to apply the stretch.  I close HistogramTransformation.
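What that drag actually transfers: the screen stretch is just a midtones transfer function (MTF) plus shadows/highlights clipping, and dropping the STF instance onto HistogramTransformation bakes those same numbers into the pixel data.  A minimal sketch using the standard MTF formula (function and parameter names are mine):

```python
import numpy as np

def mtf(m, x):
    """Midtones transfer function: maps a pixel of value m to 0.5, with 0 and 1 fixed."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def apply_stretch(img, shadows, highlights, midtones):
    """Bake an STF-style stretch into the data (the drag-the-triangle trick)."""
    clipped = np.clip((img - shadows) / (highlights - shadows), 0.0, 1.0)
    return mtf(midtones, clipped)
```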

To apply it to my color image, I select the color image to make the window active again, I go up to Mask -> Select Mask, and I select my cloned, stretched luminance image.  


Now, the red areas are the areas the mask is protecting, so since I want to apply the noise reduction to the dark areas, I invert the mask by going to Mask -> Invert Mask.


There we go.

I open up MultiscaleLinearTransform for the noise reduction, and I minimize the cloned luminance image.  Since I don't need to see the mask anymore, I go up to Mask -> Show Mask to hide it (it stays applied, just not displayed).  Now, don't forget you have it on - a few times I've tried to do a stretch or other processing and it looks really weird or doesn't work, and it's because I left the mask on!

Following the tutorial's recommendation, I set the settings for the four layers, and hit Apply.



If you want to see the effect of parameter changes without running the whole process a bunch of times, you can create a small preview by clicking the "New Preview Mode" button at the top of PixInsight, selecting a portion of the image (I'd pick one with both bright and dark areas), and then hitting the "Real-Time Preview" (open circle) icon at the bottom of the MultiscaleLinearTransform window.  It still takes a bit to apply, but less time, and once you're happy, you can go back and apply it to the whole image.  I think this worked pretty well here.  I remove the mask before I forget.
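For a peek under the hood: MultiscaleLinearTransform splits the image into wavelet layers by scale and attenuates the smallest scales, where most of the noise lives.  Here's a bare-bones starlet (à trous) sketch of the idea in Python - the real process estimates and thresholds noise per layer rather than using fixed weights like these:

```python
import numpy as np
from scipy.ndimage import convolve

def starlet_layers(img, n_layers=4):
    """Decompose an image into à trous (starlet) wavelet layers plus a residual."""
    kernel_1d = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    layers, current = [], img.astype(float)
    for scale in range(n_layers):
        # Dilate the B3-spline kernel by inserting 2^scale - 1 zeros between taps.
        k = np.zeros(4 * 2**scale + 1)
        k[:: 2**scale] = kernel_1d
        smoothed = convolve(current, np.outer(k, k), mode="nearest")
        layers.append(current - smoothed)  # detail at this scale
        current = smoothed
    layers.append(current)  # large-scale residual
    return layers

def denoise(img, weights=(0.3, 0.6, 0.8, 1.0)):
    """Attenuate the smallest-scale layers (where most noise lives), then resum."""
    layers = starlet_layers(img, n_layers=len(weights))
    return sum(w * d for w, d in zip(weights, layers[:-1])) + layers[-1]
```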

While I've got the window open, I apply the same mask I created to the luminance channel now as well, and run the same MultiscaleLinearTransform on it.


Sharpening Fine Details

Here I'm going to try a process that's new to me for bringing out finer details - deconvolution, with a point-spread function from DynamicPSF.  I'll be following that section of the "Sharpening Fine Details" tutorial on the luminance image.  Deconvolution is awesome because it helps mitigate the blurring effects of the atmosphere, as is easily seen when processing planetary images.  It's magical!

But first, a short break to pet one of my cute kitties.


His name is Orion.  My other cat is Nova.
I might be a little bit obsessed with astronomy.


I open up the DynamicPSF process and select about 85 stars - "not too big, not too little" according to the tutorial.  


I then make the window bigger, sort the list by MAD (mean absolute difference), and scroll through to see where most of the stars cluster.  1.5e-03 to 2.5e-03 seems to be about the range, so I delete the ones outside it.  Next, I re-sort the list by A (amplitude).  The tutorial recommends excluding stars outside the range of 0.25-0.75 amplitude, but the brightest star I have left is 0.285 in amplitude, so I just cut the ones below 0.1.

Next I sort by r (aspect ratio).  The tutorial recommends keeping stars between 0.6-0.8, and all of mine are already pretty tight in that range, between 0.649 and 0.746, so I keep all 20 of them.


Then I hit the "Export" icon (picture of the camera below the list of star data), and a tiny model star appears underneath the window.

Infinite cosmic power!  Ittttty bitty living space!

I had noticed that the stars, even in the center of the image, looked ever-so-slightly stretched.  You can see that here with this model star.

Now I close the DynamicPSF process, but keep the star image open.  First, I need to make another kind of mask, involving RangeSelection.  Not gonna lie, I'm a little out of my depth when it comes to masks, but I'm sure if I use them more, I'll start to get a better feeling.  For now, I'll just rely on what the tutorial recommends.

I re-open the stretched luminance image I used earlier, then open the RangeSelection process (shown in part of this tutorial) and tweak the settings as suggested until the galaxy is selected.


Next, I needed to include a star mask with this as well, so I minimize the range mask for the moment and open the StarMask process, as described in part 5 of that same tutorial.  


I stretch it a bit with HistogramTransformation to reveal some dimmer stars.  According to the tutorial, it will help to make the stars a little bit bigger before convolving this with the range mask, so I open up MorphologicalTransformation, and copy the tutorial's instructions.


To combine the two masks, I open the hallowed PixelMath process, and the math portion of it is range_mask - star_mask, creating a new image.
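In numpy terms, that PixelMath expression amounts to this (the clipping mimics PixelMath's truncation of out-of-range results when rescaling is left off):

```python
import numpy as np

def combine_masks(range_mask: np.ndarray, star_mask: np.ndarray) -> np.ndarray:
    """Equivalent of the PixelMath expression "range_mask - star_mask":
    protect the galaxy, but punch holes where stars sit on top of it."""
    return np.clip(range_mask - star_mask, 0.0, 1.0)
```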


I skip the part of the tutorial that makes the super-bright stars all black, because none of mine are over the nebulous region of the galaxy.  I skip ahead to giving the more pronounced stars over nebulosity extra protection.


Next comes smoothing the mask using ATrousWaveletTransform; I apply it twice with the recommended settings to blur the mask.


Finally I can apply the mask.


Okay, what was I doing? I can't remember.  *scrolls back up* Oh yeah, deconvolution with DynamicPSF.  Let me go back over to that tutorial.

Looking at the mask, I don't think enough of my stars are protected, but we'll see how it goes.

I open up the Deconvolution process, click on the External PSF tab, and give it the star model I made earlier with DynamicPSF.  I set the other settings as recommended by the tutorial, and create a preview so I can play with the number of iterations without waiting forever for each run to complete.  All the way up to 50 iterations, it still hasn't converged, so I go ahead and run 50 on the whole image.
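PixInsight's Deconvolution is a regularized Richardson-Lucy algorithm, so for the curious, here's a bare-bones sketch of the unregularized version using scikit-image - no local support mask, no regularization, just the core iteration (older scikit-image versions call the argument iterations instead of num_iter):

```python
import numpy as np
from skimage import restoration

def deconvolve(lum, psf, iterations=50):
    """Richardson-Lucy deconvolution of a linear luminance image scaled to [0, 1].

    lum : 2D float array (the linear luminance data)
    psf : small 2D array, e.g. the star model exported from DynamicPSF
    """
    psf = psf / psf.sum()  # PSF must be normalized to unit total flux
    return restoration.richardson_lucy(lum, psf, num_iter=iterations, clip=True)
```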



The difference isn't enormous for all the work I did to get there, but you can definitely tell that the image is sharper.  Pretty cool!

All right, time to stretch!  **Don't forget to remove the mask!** I almost did.

Stretching

In my Photoshop workflow, the very first thing I do is stretch.  But in PixInsight, a bunch of steps work better on unstretched (linear) data.  When your image comes out of stacking, all of the brightness data are compressed into a very small region of the histogram.

It's tiny!

Stretching makes the data in that peak fill up more of the range of brightnesses so that you can actually see it.  All of the data is there, it just appears very dark right now.

I open up HistogramTransformation and turn off the screen stretch, and reset the histogram window.  Now the image is quite dark.  I open up a real-time preview so I can see what I'm doing.


Now I move the gray point slider (the middle one) toward the left, and then I zoom in on the lower histogram.  The upper one shows what the result histogram will look like, and the preview window shows what the image will look like. 


Stretching is a multi-step process; I hit Apply, and then Reset, and then I move the gray point slider some more.  The histogram changes each time as the data fill up more of the brightness range.  As you stretch, the histogram will move off the far left edge, and you can kill some extra background there if needed by moving the black point up to the base of the peak.  Don't go too crazy though - astro images with perfectly black backgrounds look kind of "fake."  

After a few iterations, my image is now non-linear - no more screen stretch required.


Now I do the same thing for the RGB image.


All right.  With those two killer images, time to combine L + RGB!

Applying L to RGB

Since the luminance filter passes all visible wavelengths, that image tends to have a higher SNR (signal-to-noise ratio), and thus finer detail because it's not lost in the noise.  While you can make a good image with just RGB, applying a luminance channel can really make the fine details come out more, plus give you more flexibility with contrast, since you can act on the L alone and not do weird things to the color.  

The application process is simple, and is described in part 3 of the "​Combining Monochrome R, G and B Images into a Colour RGB Image and applying Luminance" tutorial.  I open up LRGBCombination, and disable the R, G, and B channels, since they're already combined.  I select the L image, leave the channel weights set to 1, and leave the other settings as they are, since I'm going to play with saturation and lightness in CurvesTransformation instead later on.  I will tick "Chrominance Noise Reduction."  Then I make sure the RGB image window is active, and hit Apply.
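Conceptually, LRGBCombination swaps the lightness of the color image for the processed luminance while keeping the chrominance.  Here's a rough sketch of that idea using scikit-image's CIELAB conversion - an approximation of my own, since PixInsight does its own color-space handling (and the chrominance noise reduction besides):

```python
import numpy as np
from skimage import color

def lrgb_combine(rgb: np.ndarray, lum: np.ndarray) -> np.ndarray:
    """Replace the lightness of a stretched RGB image with a processed L channel.

    rgb : float array (h, w, 3) in [0, 1]
    lum : float array (h, w) in [0, 1]
    """
    lab = color.rgb2lab(rgb)
    lab[..., 0] = lum * 100.0  # CIELAB L* runs 0-100
    return np.clip(color.lab2rgb(lab), 0.0, 1.0)
```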


Yeeeeeeeeeeeaaaaaaaaaaahhhhhhhhhhhhhhh this is getting awesome!!

Final Touches

Almost there folks, almost there...

First, I'll apply HDRMultiscaleTransform, which compresses the dynamic range to reveal detail in the bright areas, just to see what it does.  It works best with a range selection mask, and I'm just going to re-use the range_mask - star_mask mask I made earlier.


The result:

You know, I'm not a fan.  The core is like gone.  Undo!

I tried fewer layers - 4 - but it was even worse.  So I increased to 10, and I think I kind of like it!


Yeah, more definition on the arms!  I think I like it.

And, lastly (I think), CurvesTransformation.  (And removing the mask first...) I use a preview window to see what I'm doing.  I move the whole RGB/K curve into more of an S-shape, and then bump up saturation in the middle.

Drumroll please...

All righty, here it is!


Now that is a damn fine image!  When can I get me one of these telescopes and one of these cameras??

Conclusion

As far as processing goes, this data was easy to work with.  I could have done nothing to it but stretch and combine, and it still would've been awesome.

The colors were very nearly already balanced - I think the PhotometricColorCalibration only did a tiny bit.

I just can't get enough of the detail in this image!  I've zoomed in and looked all around, and everywhere there is something sharp and marvelous.  Excellent detail in the HII regions and other nebulous regions, several tiny background galaxies, and so many resolved stars in the galaxy!  I showed off the image to my coworkers, and they also couldn't get over the level of detail that was revealed.  The fact that I can see structure in the nebulae of another galaxy 2.7 million light-years away just leaves my jaw on the floor.

So this was a lot of fun!  I'm starting to get a good feel for PixInsight now, and having some epically awesome data to play with made using it even more fun.  As cool as this was, though, it's not nearly as satisfying as suffering through setting up gear in the cold, polar aligning, aligning, focusing, warming back up inside while the images are capturing, panicking and freezing some more as I fix problems, trudging back home at 2 AM, and then processing the data and seeing the target rise up out of the light pollution haze and the camera noise.  As much as I pine for being able to stay inside and stay warm and get images from pristine dark skies while I watch Netflix in town, there's something to be said for a little sweat making the victory sweeter.

Despite that feeling, it is also satisfying to make a truly killer image with some epic (and epically expensive) gear!!

Thanks again to Tolga and Cary for letting me have this data and this experience!


Sunday, November 11, 2018

#169 - Saturday, November 10, 2018 - I'm Not Ready for Winterrrrrrr

Saturday was Members Night for the astronomy club out at the observatory, and for how cold it was, it was very well-attended!  Part of the reason it was cold was that it was also clear, which I'm sure was a big part of the draw.  There were quite a few telescopes set up, with people braving the cold to see the sights!  It was in the mid-20s F at the start of the evening, and we had our potluck inside the warm room.

I decided it was going to be too cold to set up my own telescope, so I brought out my camera bags and my Vixen Polarie.  My first plan for the evening was to test a polar alignment rig I cobbled together for the Polarie, which I want to vet before my trip to Chile next summer.

I spent part of the afternoon looking through all of the hardware in my house to see what I could put together for the Polarie.  My master plan is to hack together basically a DIY Polemaster, which is a camera from QHY that helps do computer-based polar alignment.  I've been doing pretty much the same thing with my camera attached to my telescope using SharpCap's polar alignment routine, so rather than spending $300 for the Polemaster, I started looking for ways to get a lens onto my QHY5 and get it attached to the cold shoe on the Polarie.  Then I could just use the polar alignment routine in SharpCap with it.  I bought an M42-to-C-mount converter with the intent of getting like a 35mm or 50mm CCTV lens, but then I figured I could also use my little Orion 50mm guidescope (focal length 162mm).  I had recently bought a dual mount for it so I could have both the guide scope and my red dot finder mounted onto my Borg refractor at the same time instead of having to swap them out, and I noticed that it also had a standard 1/4-20 tap on the bottom.  I also recently purchased a couple of camera shoe-to-1/4-20 converters.  So I attached one of them to the bottom of the dual finderscope mount, and then attached the guide scope to that.  It's not the firmest of connections, and the whole thing is kind of heavy, but for just testing the concept, it'd work.

In addition to the polar camera idea, I also needed a more solid base with fine adjustment knobs, and luckily Vixen makes one especially for the Polarie.  It has three altitude ranges - 0-30, 30-60, and 60-85 degrees - and within each range the altitude can be fine-adjusted +/-15 degrees.  Azimuth can also go +/-15 degrees.  The base can attach directly to a tripod with either a 1/4-20 or 3/8-16 screw, and it comes with both kinds.  I wanted to attach it directly, but I don't think the heads come off of any of my three tripods.  I'm already shopping for a short, lightweight-but-sturdy one that does have a removable head.  For the moment, I attached it like I would a camera to my sturdiest tripod, my Orion Paragon.


They thought of everything with this mount head - it even has a circular attachment that screws into the bottom of the Polarie so you can just pop it in and out with a set screw.  The top where the Polarie attaches is just wide enough for the Polarie, and has raised edges so that it won't rotate by accident.


Next, I replaced the ball head that came with the Polarie with a taller, beefier one I picked up at the Hidden Hollow Star Party from Dave, an acquaintance of mine from area star parties.  Then I attached my dual finder mount to the cold shoe on top of the Polarie.


Finally, I put my DSLR on the ball head and my guide scope and guide camera on the finder mount to act as the polar alignment camera.

My Nikon D5300 with a Nikon 55-200mm lens attached to my Vixen Polarie tracker on the Vixen polar fine adjustment mount head, and my Orion 50mm mini-guidescope with my QHY5 red hockey puck guide camera attached.

Here are the links to all the stuff I bought:

It was still bright outside, so I went inside and ate dinner, and then did a quick training to re-qualify on using the memorial dome since we had the new Celestron CGX-L in it.  I had pulled out the CGEM when I first got there only to realize that they had indeed already mounted the Meade 127mm apo in the memorial dome, so I couldn't get in there to image until everyone was done re-qualifying and looking through the scope.

Once I was done eating and it was darker, I plugged my QHY5 into my tablet and opened up SharpCap.  After swapping the QHY5's nosepiece for the one with the parfocal ring already adjusted for this particular guide scope, I was getting nice stars, and I opened up the polar alignment routine.  Using the sighting hole on the side of the Polarie got me close - Polaris was already in the FOV.

I quickly realized a fatal flaw in my plan, however.  Normally for this polar alignment routine (and the QHY Polemaster), you rotate the mount 90 degrees in RA, and then it calculates how much you were off.  That doesn't work on the Polarie, since the guide scope rides on the cold shoe, which doesn't rotate with the RA axis - unless I were to use the main imaging camera for polar alignment, which wouldn't make sense either because it could be pointing anywhere.  So instead, I just used the plate solving it does in the first step to show me where the north celestial pole is, and I put that in the center of the camera's FOV.


I'm not entirely convinced that this is a scientifically sound way of doing it because the guide scope is not necessarily boresighted with the polar axis of the Polarie, but it may yet be close enough for 100-200mm lenses for 2-5 minutes.  

Once that was done, I carefully removed the guide camera and scope (because it was getting in the way of me seeing through the viewfinder of my DSLR), and then started looking for the Heart & Soul Nebulae, using the bottom star of the W of Cassiopeia as a reference in the viewfinder, and then the Double Cluster as a reference with some 5s exposures.  I got lucky this time and nailed focus dead-on with only a few tries, woot!  I had a hard time finding it, but eventually I got it mostly in there, or at least as far as I could tell using SkySafari and reference stars.  I set up the intervalometer to take 2-minute subframes, but it's been finicky for a while now, and it was acting up again and not working.  So finally I just hooked up the camera via USB to my Surface 3 tablet and used BackyardNikon to trigger exposures.  I was just starting to re-adjust the target position when the last person was finally finished with the memorial dome, so I grabbed my other laptop and cameras and moved in.

Meade 127mm apo ED riding atop the Celestron CGX-L beefcake mount inside of the club's memorial dome.  This is from the end of the night when everything was frosted over.

I hadn't used the laptop in a while, and it has a weird fault where it will run at 100% disk usage for the first 10 minutes or so after it wakes up from sleep or hibernation.  This issue persisted through a hard drive replacement and at least one, possibly two clean Windows installs.  I aligned visually with an eyepiece while I waited for it to work, and the polar alignment still needs a little work - the first star, Vega, was fairly significantly out of the FOV of even the finderscope.  It took a while to find it; we're going to install a Telrad on there too soon, which will help with that situation.  Finally I was able to kind of use the laptop (very slowly while the disk was still at 100%).  I couldn't get my ZWO camera to show up at first, so I tried reinstalling the driver.  It still wasn't working...and then I realized I forgot to plug in the USB cable. -.-

Next, I set to work on focusing the guidescope, but I couldn't get my QHY5L-II to talk to my laptop either for some reason, so I had to swap back to my QHY5.  I remembered that I had to use a mirror diagonal to get enough backfocus to focus it, and this made the short USB cable that plugs into the back of my ZWO camera for help with cable management too short, so I had to run back out to my other bag and grab a longer cable.  Finally I got it focused, did final focus with the Bahtinov mask on the precise goto star for my chosen target, galaxy M74, and then calibrated guiding.  The calibration went quickly, and soon I was off.

Perfect focus with the Bahtinov focusing mask!

At first I was frustrated at how late it must be and how much time I'd wasted, but then I glanced at the computer clock and did a double-take.  I thought I might have it still set for Texas time from the Texas Star Party, but I checked my watch and it also reported that it wasn't 9 PM yet!  One nice thing about wintertime astronomy...way more time!  It was 8:45 when I finally started getting images in.


Above is the guide graph for the Celestron CGX-L we have in there.  Keep in mind that polar alignment probably needs some work, and seeing is never particularly good around here.  The CGX-L is belt-driven, so backlash is nonexistent.  The total RMS error hovered around 1 arcsec, which is pretty good!  I'm going to work on that polar alignment next time I'm out there though, since my 5-minute test frames showed a little elongation in the stars.  For that reason, I decided to stick with 3-minute frames.

By this time, it was 22F outside, and my ZWO ASI1600MM Pro camera was happy to oblige my request that it cool itself to -40C.  Now, I don't even have dark frames at that temperature, which is why I haven't used that setting much, but hopefully I can attempt to get some.  They're hard to get because even when it's in the teens outside, the heat from my apartment leaks out onto the porch and the camera has to run at full tilt to make it that cold.  However, I could try putting it in the fridge or the freezer, especially now that it's drier and condensation hopefully won't form.  It's supposed to be able to do 45C below ambient.  My fingers were getting very cold indeed - I can use a stylus with my tablet and leave my gloves on, but my laptop's touchpad needs fingers.  I was also wrapped up in a blanket sitting on the camp chair in the dome.

While the ZWO was taking luminance frames (with much better-looking stars than Tuesday night, might I add), I went back over to the Polarie to get it rolling.  Frost had formed on the camera's lens already, despite the presence of a toe warmer, so I added a second one that had been in my pocket.  This inevitably messed up the focus, despite how careful I was putting it on!  So I had to re-focus.  This is much easier to do on the camera screen because there's no download time - I just look for when the stars look pixelated while I'm super zoomed in.  After getting it refocused, I plugged it back into my tablet, but my tablet wouldn't recognize it was there (even though the camera told me it was plugged in).  Shortly thereafter, the tablet shut off - it was too cold for the poor thing.  With a dead intervalometer and a dead tablet, I had to resort to the camera's internal timer, which maxes out at 30s.  I knew I wasn't going to be able to get the Heart & Soul Nebulae with only 30s, so I switched to the Pleiades, which were now rather high in the sky.  I saw an image on AstroBin recently with the Pleiades and the California Nebula in the same view - I hadn't realized how close they were!  So I zoomed out to 55mm and got them both in the field (or at least, the star pattern nearby, since I couldn't see the actual nebula in subframes).  While I was getting that target in view, the polar fine adjustment tripod head kept slipping around where I'd attached it to the tripod - I must not have tightened the screw enough - and I lost all of my hard polar alignment work. :(  So I just did it the old-fashioned way and lined up Polaris through the little hole in the side.  Then I just let that roll.  I was so cold and grumpy that I finally stormed inside for a while!

I checked on the memorial dome periodically and rotated the dome so that the telescope could still see through the slit.  Around 11 PM, it was time for a meridian flip - the CGX-L can probably track a ways past the meridian, but the Meade scope is so long (especially with how far back I have to put the camera to focus) that the camera was almost hitting the laptop tray attached to the pier right as it crossed the meridian.  So I told the mount to prefer west so I could get it to flip, but unfortunately all of the precise goto stars were in places that would have made the camera hit the laptop tray.  At one point it did accidentally hit it when I pressed a wrong key, and since the mount was still trying to slew but wasn't actually moving, it lost alignment!  So then I had to spend 5 minutes re-aligning, but luckily I found Deneb quickly this time as the first star, and the second star was in my camera's FOV.  I only did two stars in the west and none in the east because I was only going to be in the west for the rest of the night getting M74.  By the time alignment was done, the precise goto stars were far enough west that the camera wasn't going to hit the laptop tray, so I got M74 back in view and carried on with the luminance frames.

Still need to clean off the dust spot.

Around 1 AM, I flipped to the blue filter, which was really hard because the cold had made the metal in the filter wheel contract.  As a result, M74 ended up closer to the top of the frame (and the dust spot) than before, and I didn't catch it till later.  Sigh...

At 2 AM, I decided to call it a night.  I packed up the memorial dome first, then helped my minion Miqaela pack up her gear, and got my Polarie put away.  Comparing Tuesday night's images to tonight's, there was definitely some kind of seeing or focus issue, so I think I'm going to have to re-take the red and green filter data, although I'll give it a shot at stacking first just to see what happens.

My gear was very frosty by the end of the night!


I was feeling pretty frosty too!  20 degrees right now feels frigid.  At least the wind was dead, otherwise it would have been really miserable.  I had to run the hair dryer on my camera lens a couple times, and whenever I took my glasses off to get close to the viewfinder, they froze over too and took forever to clear!

Since the Polarie wasn't perfectly polar aligned, the DSLR images drifted over the course of the night, with the Pleiades, which were originally on the right side of the frame, moving toward the left. 

I don't have cold-enough darks, but I do have a set of 30s, ISO-1600 darks at 24 degrees, which is close enough.  I keep forgetting that I don't have a bias frame library built up yet, though, so I had to go without for stacking.  Bonus points, though: I got 400 light frames, which should really help my SNR (signal-to-noise ratio)!  That amounts to about 3h20m of total exposure!
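Here's the quick math on why the frame count matters, assuming roughly uncorrelated noise from sub to sub (my own scratchpad):

```python
import math

# Shot-noise-limited SNR grows roughly with the square root of the number of subs.
subs_stacked = 359        # the best 90% that DeepSkyStacker kept
single_sub_snr = 1.0      # whatever one 30s frame gives you, as a baseline
print(f"~{math.sqrt(subs_stacked) * single_sub_snr:.0f}x the SNR of one frame")  # ~19x
```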

I stacked in DeepSkyStacker and had it choose the best 90% (after I deleted images where I shined light on the lens, etc), and I honestly wasn't expecting much.  I stretched it a bit in Photoshop and tried a few things, but wasn't getting anywhere.  But while I was messing around in Photoshop, I kept thinking of all the things I could do in PixInsight instead...so I imported it to PixInsight and got to work.  A few hours later, my jaw was nearly hitting my keyboard - I just couldn't believe it!  

Date: 10 November 2018
Object: M45 Pleiades Cluster
Attempt: 9
Camera: Nikon D5300
Telescope: Nikon 55-200mm lens @ 55mm, f/4.5
Accessories: N/A
Mount: Vixen Polarie
Guide scope: N/A
Guide camera: N/A
Subframes: 359x30s (2h59m30s)
Gain/ISO: ISO-1600
Stacking program: DeepSkyStacker
Stacking method (lights): Auto-adaptive weighted average
Post-Processing program: PixInsight 1.8.5
Darks: 20 (24F)
Biases: 0
Flats: 0
Temperature: 22F
See on AstroBin

I did not think I could ever go this deep from my light-polluted location, much less with 30-second exposures, much less with an unmodified DSLR and no light pollution filter.  Yes, that is dark nebulosity you are seeing throughout the image - part of the Taurus Dark Nebula Complex.  Simply amazing!  Especially when you consider one of the single subframes:
Single 30s subframe

It's all about getting lots of exposures and doing good processing!  I applied the following processes, in order, with guidance from the amazing Light Vortex Astronomy tutorials:
- Stacked in DeepSkyStacker (I'm going to start stacking in PixInsight, but I didn't have bias frames, and I wanted to run this one fairly quickly)
- Cropped out black edges with DynamicCrop
- Nuked background with DynamicBackgroundExtraction
- Cropped again, for vignetting that was revealed
- Applied ColorCalibration to balance colors properly (PhotometricColorCalibration wasn't working for some reason, but the older algorithm still worked great)
- Applied MultiscaleLinearTransform for noise reduction
- Stretched the image (took from linear to non-linear brightness data)
- Applied HDRMultiscaleTransform to increase contrast
- Applied CurvesTransformation to increase contrast and boost saturation a tiny bit
- Did a little more denoising in Photoshop Camera Raw

The reason M45 is a little off-center is that I was trying to also grab the California Nebula, but my not-excellent polar alignment made the camera FOV slowly drift, which cut the nebula out of the frame after not too much time.  I made a second stacked image using "mosaic" mode, so I'm going to run that through PixInsight as well and see if I have enough SNR there to see it.  It would be super cool if it showed up!

Since I don't have any gain-300 dark frames at -30C or any dark frames at -40C for my ZWO ASI1600MM Pro, the M74 galaxy image is going to have to wait until it gets colder at night.  I'll get back to you on that!

All told (including some trouble with frozen gate locks), I got into bed at 4 AM!  I'm in for a loooong cold winter of imaging, but once the CGX-L gets dialed in and we get a good alignment model in there, it will be very quick indeed to get set up night-to-night.  I've got a long list of winter targets yet to image, so get ready!!

[Update December 1, 2018]
I finally got around to processing all the data I had on M74!  It was tough to process, since the red and green data were a little out of focus, the blue and luminance data were drifting due to not-quite-complete polar alignment on the mount, and there were a ton of dust spots on the objective (remind me to bring my multi-coated optics cleaner next time I'm out there), not to mention the fact that I had two different temperatures on the camera, but here it is!
Date: R,G: 6 November 2018, B,L: 10 November 2018
Object: M74
Attempt: 1
Camera: ZWO ASI1600MM Pro
Telescope: Meade 127mm f/9 ED/apo (club's)
Accessories: Astronomik LRGB Type 2c 1.25" filters
Mount: R, G: Celestron CGEM (club's), B,L: Celestron CGX-L (club's)
Guide scope: R, G: Orion 50mm mini-guider, B,L: Celestron 102mm
Guide camera: QHY5
Subframes: L: 32x180s (1h36m)
   R: 13x180s (39m)
   G: 14x180s (42m)
   B: 14x180s (42m)
   Total: 73x180s (3h39m)
Gain/ISO: 300
Stacking program: PixInsight 1.8.5
Stacking method (lights): 
Post-Processing program: PixInsight 1.8.5
Darks: 40 @ -30C, 40 @ -40C
Biases: 50 @ -30C, 50 @ -40C
Flats: 0
Temperature: R,G: -30C (sensor), 44F (ambient); B,L: -40C 

Here's all the stuff I did in PixInsight:
- Generated master bias for both temperatures
- Generated master superbias for both temperatures
- Generated master dark for both temperatures
- Skipped CosmeticCorrection because there aren't really any hot pixels
- B and L frames calibrated with -40C dark and superbias
- R and G frames calibrated with -30C dark and superbias
- Local normalization files generated
- Decided not to use the local normalization because it duplicated dust spots and did some blurring on the RGB channels
- Integrated each channel's subframes, using SubframeSelector to pick the better ones
- Combined RGB channels
- Cropped both RGB and L images
- DynamicBackgroundExtraction on RGB, used same points for L
- Color calibrated RGB with PhotometricColorCalibration, with background neutralization
- Clone-stamped out some enormous dust spots in the RGB
- Denoised RGB with MultiscaleLinearTransform, with mask
- Clone-stamped enormous dust spots out of L
- Denoised L with MultiscaleLinearTransform, with mask
- Denoised RGB again with MultiscaleLinearTransform, with mask
- Stretched L and RGB
- Combined LRGB
- Applied Deconvolution with star model from DynamicPSF, with range_mask - star_mask mask
- Applied MultiscaleLinearTransform for noise reduction
- Tried HDRTransformation, but didn't like it
- Applied CurvesTransformation and HistogramTransformation to boost galaxy, kill background a bit more

You know, I would hear from fellow astrophotographers about taking hours to process a set of data, but I could never understand why - it would only take me about an hour to stack in DeepSkyStacker and do everything I knew how to do in Photoshop.  But now I understand!  There is so much to do in PixInsight, and even just getting the images stacked takes about half the total time.  I would have tried BatchPreprocessor, but I had two different sensor temperatures of data, in addition to having four different filters.  You can label groups of images so that you can link a specific set of flats with a specific filter, etc., but I'm not really sure how to use BatchPreprocessor with multiple channels of monochrome data yet.  Next time!