Sunday, March 3, 2024

Automated Solar Eclipse Sequence in NINA

Everybody always says not to image a solar eclipse but just to witness it instead -- but why have just one when you can have both?

In the past, I successfully automated my eclipse imaging using BackyardNikon with my DSLR, and it worked great. I started the main eclipse sequence exactly 1 minute before 2nd contact; having carefully timed everything and practiced a bunch, I got some great images in 2017 and 2019 without having to look at my camera or computer once during totality.

But now for 2024, I'm ready to use an astro cam instead, which opens up some great possibilities with automation. NINA is my tool of choice since there is so much flexibility in the exact ordering and timing of events.

I've been talking about making this sequence for a while, but between PhD obligations, I'm only just now finding the time -- and I finally got it done and tested! A lot of people have asked for a copy of the sequence, and I'm happy to share.

Files:

Whole eclipse, with time spans instead of clock times for testing
Excel spreadsheet for planning times
README file with the same information in this post
Link to the folder with all of these



IMPORTANT NOTES

You will need to modify this sequence! It is set specifically for timing at the location I will be at, and the exposure times are set for a specific calibration point in Fred Espenak's table (more on that in a second). I'm also including the spreadsheet I used to work out all the timing of each imaging phase. 

Exposure Times

First, you need to calibrate your astro cam to Fred Espenak's exposure table. Scroll down on this page to the "Solar Eclipse Exposure Guide." It's designed for DSLRs, so you'll need to choose a gain on your camera to match an ISO/f-stop value so you know which column to use. 

To do this, set up the telescope and camera in the configuration you're going to use for the eclipse. Put on the same solar filter you're going to use, and find out whether it is ND 4 or ND 5 (how dark it is); my Seymour Solar filter is ND 5. Get the sun in your camera's field of view.

Then, in the live-view app of your choice (SharpCap is perfect for this), pick an exposure time from the ND 4 or ND 5 row of the table, depending on which filter you have, and set that as the exposure time on your camera. For instance, I used the column where the partial phase with an ND 5 filter has an exposure time of 1/250s, or 4 ms. Then I adjusted the gain until the sun looked good: using the Histogram tool, I looked for the combined (white) histogram peak to have a hump around 50% (you'll also have a peak at darker values -- that's the black background). Don't go so high on gain that you sacrifice dynamic range, nor so low that you end up in the last column and the outer-corona exposures get very long. On my ZWO ASI2600MC Pro, I chose 210. These settings put me in the column with a partial-phase ND 5 exposure time of 1/250s, and I can base the exposure times for all the rest of the phenomena on that column.
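If it helps with the calibration bookkeeping, here's a tiny helper (a sketch, not part of the sequence) for converting the table's shutter-speed fractions into the millisecond values that NINA and SharpCap take:

```python
def shutter_to_ms(shutter: str) -> float:
    """Convert a shutter-speed string like '1/250' or '2' (seconds) to milliseconds."""
    if "/" in shutter:
        num, den = shutter.split("/")
        seconds = float(num) / float(den)
    else:
        seconds = float(shutter)
    return seconds * 1000.0

# The ND 5 partial-phase calibration point used above:
print(shutter_to_ms("1/250"))  # → 4.0
```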

For the eclipse, I'm using the whole FOV, not the ROI I'm showing here.


Eclipse Timing

To get all the contact times, I used Xavier Jubier's website, where you can put in the coordinates of where you plan on imaging from, and it will tell you the "local circumstances," including each of the precise contact times, the altitude & azimuth of where the sun will be at that time, and some other numbers. If you have a backup location, I would make a separate sequence for that location if it is different enough in its timing. 
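What the planning spreadsheet does can be sketched in a few lines. The contact times and per-segment offsets below are placeholders for illustration, not my actual site's numbers -- substitute the "local circumstances" values for your own location:

```python
from datetime import datetime, timedelta

# Placeholder contact times (local) -- use your site's values from Jubier's page.
c2 = datetime(2024, 4, 8, 13, 40, 0)   # 2nd contact (start of totality)
c3 = datetime(2024, 4, 8, 13, 44, 20)  # 3rd contact (end of totality)

# Hypothetical per-segment offsets relative to C2/C3:
segments = [
    ("filter-off sliver",  c2 - timedelta(seconds=60)),
    ("Baily's beads (C2)", c2 - timedelta(seconds=15)),
    ("chromosphere",       c2 + timedelta(seconds=15)),
    ("inner/outer corona", c2 + timedelta(seconds=30)),
    ("Baily's beads (C3)", c3 - timedelta(seconds=15)),
    ("filter back on",     c3 + timedelta(seconds=15)),
]

# Each segment's "loop until" time is simply the next segment's start time.
for (name, start), (_, nxt) in zip(segments, segments[1:]):
    print(f"{name}: start {start:%H:%M:%S}, loop until {nxt:%H:%M:%S}")
```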

Organization

There are three Sequential Instruction Sets -- pre-eclipse partial phase, totality, and post-eclipse partial phase. The totality instruction set has several sub-sets for the different eclipse features:
- the sliver while you remove your filter before 2nd contact
- Baily's beads right around 2nd contact
- chromosphere shortly after 2nd contact
- the corona out to multiple solar radii (shorter exposures for the bright inner corona, longer exposures for the dimmer outer corona)
- prominences when the moon is centered at mid-eclipse, plus a "long" exposure for the outermost corona and hopefully earthshine
- another round of corona shots, because there's time (if you're uncertain about your gain setting, you could use a different set of gain/exposure times for this second run-through)
- chromosphere just before 3rd contact
- Baily's beads & diamond ring through 3rd contact
- some short exposures while you put your filter back on
Then the post-eclipse partial phase picks back up.

Each segment loops until the start time of the next feature, so there's no need to measure your capture rate and estimate how many exposures fit the time frame. The segment stops when it hits the "loop until" time and moves on to the next segment automatically. Most of the segments are 15s long. During the partial phases, the sets of bracketed exposures are 5 minutes apart; feel free to change this.

Other Notes


Bracketing

Each phenomenon segment has three exposure times -- the one from Fred's table, plus half that and double that. It's a long eclipse, so there's plenty of time for this.
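In code terms, each segment's exposure list is simply this (a trivial sketch; the 4 ms base is my ND 5 partial-phase calibration value from above):

```python
def bracket(base_ms: float) -> list[float]:
    """Half, base, and double exposures around a table value, in milliseconds."""
    return [base_ms / 2, base_ms, base_ms * 2]

print(bracket(4.0))  # → [2.0, 4.0, 8.0]
```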

Computing Power

While NINA isn't especially resource-heavy, you'll definitely want to test the capture speed if you're using an older machine or a light tablet. It's a lot of image downloads in rapid succession, and the machine might get bogged down.

If your computer has a solid-state drive, I highly recommend using it to maximize frame rate. If you're using an astro cam with USB 3.0, definitely use a computer with a USB 3.0 port. 
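To get a feel for the data rate involved, here's a rough back-of-the-envelope estimate. The resolution is the ASI2600MC Pro's; the 2.5 s/frame cadence is just an example figure in the ballpark of what I see on my setup:

```python
# Sustained-throughput estimate for an APS-C astro cam saving 16-bit FITS.
width, height = 6248, 4176                    # ASI2600MC Pro sensor resolution
bytes_per_frame = width * height * 2          # 16 bits per pixel
mb_per_frame = bytes_per_frame / 1e6
seconds_per_frame = 2.5                       # example capture+download cadence
print(f"{mb_per_frame:.0f} MB/frame, {mb_per_frame / seconds_per_frame:.0f} MB/s sustained")
```

At roughly 50 MB per frame, a spinning disk or a USB 2.0 link can easily become the bottleneck.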

I am using my Microsoft Surface Pro 7 -- less power-hungry than a laptop, but pretty capable, and it has a USB 3.0 Type-A port.

Baily's Beads

On my camera and computer, it takes about 2-3 seconds per frame to capture and download, with star analysis and stretching turned off. Make sure you turn these off! (Toggle the buttons for them at the top of the Image tab in the Imaging view.) That cadence isn't ideal for very quick events like Baily's beads, so I decided to try something -- but only on the 3rd contact beads, just in case: video mode. There's a plugin for NINA called LuckyImaging, which adds a container type called Lucky Target Container, and within that I use the Take Video Roi Exposures instruction. I turned off using an ROI -- I could probably get a faster frame rate with one, but in case the sun ends up not quite centered, I don't want to miss it. You can try an ROI if your tracking is good enough. This lets me take 3 fps at a single exposure time, and it still saves out FITS files. In the # box I've set 100 frames (it will stop when it hits the "loop until" time), and for the exposure time, the Baily's beads value from Fred's table. I should also get the diamond ring here. The other reason I'm not doing this for the 2nd contact beads is so that I can bracket, just in case these exposure times end up a little off.
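A quick sanity check on why video mode matters here, assuming a hypothetical 20-second beads window (the window length is my illustration, not a measured value):

```python
def frames_in_window(window_s: float, seconds_per_frame: float) -> int:
    """How many frames fit in a time window at a given capture cadence."""
    return int(window_s // seconds_per_frame)

window = 20.0  # hypothetical Baily's beads window, in seconds
print(frames_in_window(window, 2.5))   # one-shot mode at ~2.5 s/frame
print(frames_in_window(window, 1 / 3)) # video mode at ~3 fps
```

Roughly 8 frames versus 60 over the same window -- a big difference for an event that changes second to second.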

Safety Notes

As always, never point your scope at the sun without a solar filter firmly in place. Don't take the filter off any earlier than a minute before totality; within 30 seconds of totality you're safe for sure with a refractor, though you may want a shorter margin with a large-aperture reflector. Also make sure you cover or remove any guide scopes or spotting scopes. And never put solar eclipse glasses on the eyepiece side of a telescope or binoculars; you can burn out your eyes and your optics!

Make sure your computer is in the shade! In 2017, my tablet shut off about 40 minutes before totality because it overheated; fortunately I was able to get it cooled and back on before totality. I put it in the shade of my telescope case.

My Equipment

Wondering what I'm planning on using? 
  • Telescope: Astro-Tech 72ED
    •  It has a focal length of 430mm, so a good wide FOV for corona. A bit longer would probably be better, but this scope is a lot lighter and smaller than my Takahashi FSQ-106N.
  • Camera: ZWO ASI2600MC Pro
    • Color camera, APS-C format sensor. You could certainly use an uncooled camera, or one with a smaller sensor if your focal length is short enough. I don't recommend doing mono; it would be a lot of filter changes and focus shifts and basically a lot of dead time to get each color. Note: you don't need an H-alpha filter to see the prominences and chromosphere -- they become visible during totality!
  • Mount: Celestron NexStar SE
    • I chose this mount because it is alt-az instead of equatorial, which means I don't need to polar-align the night before (or with a compass/tilt meter during the day, which is approximate at best). I can just plop it down, put in my coordinates and the time, say "here's the sun," and it will track it pretty well. I may have to nudge it once or twice, but it will definitely keep the sun centered for the duration of totality. As far as tracking stability goes, when I used to use this mount for deep sky back in my early days, I could take images as long as 20 or 30 seconds before too many of them started to streak.
  • Computer: Microsoft Surface Pro 7, a tablet of pretty good computing power but in a compact and not-power-hungry package.
In this configuration, I might not be able to reach the 68-degree altitude the sun will be at for me -- I'm going to add a longer dovetail so I can seat the scope further forward.

Troubleshooting

I was having trouble with my 2600 throwing an error and disconnecting on exposure times longer than 1s; it turns out that even though it can run off USB power alone for shorter exposures, it needs 12V power for longer ones, even with the cooler off. Plugging it in solved the problem. I'm not cooling so that I can run on battery power if I have to (using my Jackery); if I have AC power, I'll probably run the cooler at -5C or -10C, if it's not too hot, to reduce noise in the longer exposures. The max exposure time I'm using is 15s, so not much noise will accumulate, but still. 

The sequence is quite large -- it might take several seconds to open.

If you notice any errors or have questions, you can reach out to me either on Facebook or via email at astronomolly.images at gmail dot com. You can also post a comment below.

Want to see my previous eclipse notes?

Friday, November 19, 2021

#590 - A Frosty Lunar Eclipse

The year 2021, despite its many setbacks, was good for lunar eclipses -- we were treated to two here in the US!  I was fortunate to be able to image both from my house.

A rushed setup

I had just returned to Ohio from a trip to Berkeley, CA to conduct my PhD research experiment at the Lawrence Berkeley National Laboratory's 88-inch Cyclotron, having come back early because the cyclotron broke down and we were unable to run the experiment. A sad thing turned into a good thing, though -- I had planned on imaging the eclipse from the roof of the cyclotron during my graveyard shift, but California ended up being clouded over! Ohio, on the other hand, was only partly cloudy (although the forecast had called for very few clouds). 

I got back to my house from the airport at about 10:15 PM, and after dropping off my suitcases inside and saying hello to my cats, I got busy getting my imaging rig set up. I had brought my Vixen Polarie and carbon-fiber Neewer tripod with me to California, but since I was home, I set up my more-stable Sky-Watcher Star Adventurer on my Celestron AVX tripod instead.  (Same center bolt size!) Atop the Star Adventurer was my trusty Nikon D5300, paired with my Nikon 70-300mm f/4.5-5.6G lens.  The lens is a bit soft and has some chromatic aberration, but it's the longest one I have.  I had wanted to image the eclipse using my Takahashi telescope and ZWO ASI294MC Pro color camera, but unfortunately the Moon would drop below the edge of my roof before totality started. So I set up the Sky-Watcher Star Adventurer in my driveway.

Taken the morning after



To operate the DSLR, I set up my old Microsoft Surface 3 on a folding table and connected the camera via USB to BackyardNikon, an indispensable app for DSLR astrophotography. It was below freezing outside, so I built the rig inside the house, brought it out, and worked quickly to get everything plugged in. After getting the camera pointed at the Moon and setting the Star Adventurer to the lunar tracking rate, I dashed back inside to operate the tablet from my toasty warm desktop computer using Google Chrome Remote Desktop.  For the May 2021 lunar eclipse, I timed my exposures to the different phases of the eclipse, but this time I was running out of time to do all the math, so I decided to just brute-force it instead, especially since this was a very long-duration eclipse; I made a list of several exposure times running from 1/4000s (for the pre-eclipse phase) to 30s (which I probably wouldn't need since this eclipse wasn't total, but just in case).  Then I ran it in Loop mode all night.  From Dayton, the eclipse ran from 1 AM until after sunrise, with maximum eclipse occurring at about 4 AM.
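A ladder like that is easy to generate programmatically. The one-stop spacing below is my assumption for illustration -- not necessarily the exact list I used that night:

```python
# Build a bracketing ladder from 1/4000 s up to 30 s, doubling each step.
def exposure_ladder(shortest: float, longest: float, stops_per_step: int = 1) -> list[float]:
    ladder = []
    t = shortest
    while t <= longest:
        ladder.append(round(t, 6))  # round for readable output
        t *= 2 ** stops_per_step
    return ladder

ladder = exposure_ladder(1 / 4000, 30)
print(len(ladder), ladder[0], ladder[-1])
```

With one-stop steps you get 17 exposures; wider steps (2 or 3 stops) give a shorter list that still covers the full brightness range.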

I finally got to bed at around 1 AM (I had also set up my Takahashi to run a similar loop to get what I could before the Moon set behind the roof).  I woke up by accident at 4 AM, but took the opportunity to dash outside and re-point the camera to be good for the rest of the night.  Polar alignment on my Star Adventurer is never that great; I think I've offset the polar scope alignment image by accident. But it hadn't drifted out of the frame at least, so that was good.  It also gave me a chance to take a look at the eclipsed Moon, which was obviously not in 100% totality like the two other lunar eclipses I've seen, but still pretty cool! Even if it was behind some thin clouds.

Results

Despite the clouds, I got quite a few clear shots!  I downloaded about 15 GB of data in the morning from the tablet and sorted through the images, deleting the over-exposed ones and the clouded-out ones. In the end, I made a few composites I'm quite happy with, as well as a few single exposures that came out really cool.

I love this one! The effect of the clouds is so cool! ISO-200, f/5.6, 1/2s. Pre-eclipse full Moon.

Partial phase with clouds. ISO-200, f/5.6, 3s.

Totality! Or as total as it got -- 97%. ISO-200, f/5.6, 2s.

Trying to be creative!




Since the Moon set not long after totality (and had to go through some trees and my neighbor's house first), I didn't get any good post-totality partial phase photos, but there are lots of ways to make cool composites even without them.  I'm so glad I was able to set up so quickly and get good results!  I've really got the process honed now.

Monday, October 11, 2021

#574 & #575 - October 8 & 9, 2021 - Hidden Hollow Star Party

 After a two-year absence, it was very exciting to return to the Hidden Hollow Star Party!  It's a small star party up at the Warren Rupp Observatory hosted by the Richland Astronomical Society, south of Mansfield, OH.  The weather is usually less than ideal, but it's only a two-hour trip for me, and it's a fun group of people at a nice summer camp location in the woods.  One of the awesome things about Hidden Hollow is their enormous 36-inch Newtonian telescope, "Big Blue."  It's a treat to look through between the clouds!


Because the forecast was not looking promising at all, I decided to only bring one rig.  I recently bought this awesome 3D-printed bracket on Agena Astro for the Rokinon/Samyang 135mm f/2 lens, ZWO EAF focuser, and ZWO ASIAir.  



I mounted it on my Celestron AVX.  Eventually I'd like to get it running on my Sky-Watcher Star Adventurer, but for this trip I wanted slew control, target centering, two-axis guiding, etc.  

What is all on this rig:
- Rokinon 135mm f/2 lens
- Red box: ZWO EAF electronic focuser. It's attached to a belt and notched circle that come with the kit to focus the camera lens.  It works really well actually.
- On top: Orion 50mm guide scope + ZWO ASI120MM-S guide camera, as well as a red dot finder
- On the other side: ZWO ASIair Pro
- On the back: ZWO ASI294MC Pro camera + Starlight Xpress 5-position 2-inch filter wheel.  Inside the filter wheel is an Astronomik L Type 2c luminance filter, Astronomik CLS-CCD light pollution filter, and Optolong L-eXtreme dual-narrowband filter.

Part of my goal for the weekend was to try using the ASIair to control the rig (except for the filter wheel, which is not ZWO and therefore can't be controlled by it).  Unfortunately, while everything connected to the ASIair when I was at home, neither of the cameras wanted to talk to it once I got out to Hidden Hollow.  I had brought along one of my capture laptops (a 2012 Lenovo running Windows 10) just in case, so I used that instead.

Here it is all set up at Hidden Hollow:


Yes, it's a bit of a cable mess, but since this isn't (yet) one of my standard rigs, I don't have a cable harness made for it.


Friday

I had originally planned on going out on Thursday, but the forecast looked not only cloudy but rainy, so I didn't think anyone would be there.  So I drove up Friday instead.  I brought my awesome little camper along with me.  I forgot to get a picture of it at Hidden Hollow, but here's another photo:


I drove my car up by where I planned on setting up and got unloaded and the mount built and balanced.  After chit-chatting and greeting lots of people I'd seen in the past there, I finally snagged a few minutes to heat up a couple of bratwurst on the stove in my camper before it got fully dark.  

The forecast called for lots of clouds, but I was hoping to get enough of a northern view to at least get polar aligned so that I would be ready for any opportunity I had the next night to image.  The sky was clear in segments, but they kept covering up the north star.  Finally I got several minutes of clear sky up north to get polar aligned using SharpCap, woot!  It actually cleared out a decent bit, so I decided to try some imaging and get whatever I could get.  By the time I finally got polar aligned, it was about 10:30 PM, so my main target for the weekend, the California Nebula, was up and ready to roll.  The AVX slewed there no problem being controlled by CPWI, but the plate solver wasn't working -- I couldn't get either PlateSolve2 nor ASTAP to work, they both kept coming back with "invalid solve" after only a little bit of searching, not the usual wide search they'll do.  I uploaded an image to astrometry.net to make extra sure that the pixel scale I put in Sequence Generator Pro was right -- 7.07 arcsec -- and it was.  While I was trying to think of what else to do, I went ahead and got PHD2 calibrated for autoguiding, and then more clouds rolled in.  I decided to give up for the night because the clouds were coming in stronger and I was pretty damp from the high humidity.

Saturday

Saturday started out foggy and cloudy, but cleared out later in the day.  I spent the day attending talks on topics from meteorites to "spooky" celestial objects, as well as running around with my two DSLRs doing timelapse videos.  

By mid-afternoon, the forecast had much improved for the night!  I went around exclaiming the good news.

In the Astrospheric app


I got to give a talk in the late afternoon about my trip to Chile in 2019, which was a lot of fun and I got a lot of compliments on it.  (You can see a version of that presentation here).  

After my talk was the raffle drawing, and they had some fun and nice stuff this year.  I put in for a waterproof case and some green laser pointers among other things, and ended up winning a fun Star Trek ornament instead!

Deep Space Nine version of Worf holding his bat'leth


After having another quick bratwurst and some leftover pasta salad I brought along, it was time to get set up for the night.  During twilight, I tried again to get the plate-solver to work, this time connecting my laptop to my cell phone's wifi to make sure that the plate solve catalogs were still downloaded in my OneDrive.  I also shut everything down and rebooted.  Finally, I apparently Googled the right thing because I had a solution in about thirty seconds: untick the "highest accuracy solution" button in PlateSolve2 when doing widefield stuff.  Whoops! Had forgotten that little tweak.  Finally, plate solving worked!  The California Nebula wouldn't be high enough until 10:30 PM or so, so I did a short run on the Andromeda Galaxy as well.  I hit "go" in Sequence Generator Pro about a half hour before astro-darkness and went off to go gleefully look through other people's telescopes.

Saturn and Jupiter were on full display in the early evening, so I looked through a few scopes at them.  Venus and the crescent Moon also made a brilliant display as they set in the west.



I also looked at a few globular clusters and M82 through people's Dobs.  One guy had a white night vision monocular with a visual hydrogen-alpha filter attached to it.  We took turns looking around the Milky Way -- you could see all of our hydrogen regions!  It was incredible!  All the nebulae of the Cygnus region just jumped out, M8 and others in Sagittarius shone brightly, and you could easily see the Heart, Soul, and Elephant Trunk nebulae up north.  It was so cool to see them at 1x magnification on the sky!  Another night vision monocular was on one of the Dobs, and they were looking at M13 I think when I was over there.  I snagged a few pictures!


I couldn't quite get my smart phone camera (Samsung Galaxy Z Flip 3) to center on the eyepiece, but it was really cool anyway.

Looking through Big Blue

Of course, you can't go to Hidden Hollow and not look through the 36-inch, 9-meter focal length monstrosity that is Big Blue!  In order to reach the eyepiece, they use a scissor lift, which definitely takes some skill to drive around the dome.  They have a computer and monitor up on the lift connected to a system on the telescope so that you can give it one alignment point and then slew the telescope to your target by hand, with a distance counter on the monitor showing you how much farther N-S and E-W you have to go, and it's quite accurate.  They also have a wireless remote up there to rotate the dome.  It's a really awesome setup.  But it is difficult to move the lift around the scope, and there is only a limited portion of the sky you can look at because of the length of the telescope, how far the dome slit opens, whether you can maneuver the lift around, and the eyepiece location -- the scope is equatorially mounted, so the eyepiece rolls around out of reach in some parts of the sky.  Most of the evening was spent looking at Jupiter and Saturn, which displayed an incredible amount of detail at that insane focal length and aperture.  I could count more cloud bands on Jupiter than I think I'd ever seen before, and I could spot a small storm.  Saturn had something like five moons on display for us -- Rhea, Iapetus, Enceladus, Tethys, and Dione, with Hyperion on the edge of the FOV.  

After everyone got their fill of the planets, they tried to move the scope to M27, the Dumbbell Nebula, but it turned out to be blocked by the top of the slit, which I guess normally opens further but wasn't that night for some reason.  So I suggested we try the Ring Nebula instead, but that turned out to be too far west -- the eyepiece rolled over the top, where we couldn't reach it.  I was up on the lift with one of the club members who was operating the scope, so I hurriedly scrolled through my SkySafari list to see what would work, and globular cluster M2 was not far from Jupiter, which we knew we could reach.  Globular clusters are awesome in big ol' telescopes.  I got to slew the scope and rotate the dome to get on target, which was fun!  I found it in the finderscope, and then in the eyepiece, and it was awesome to look at.  The club member (couldn't tell who it was in the dark) maneuvered the scissor lift, which was a tough job!  It took us a while to get in position.  It was a fun time.  All the while, my telescope rig was running on its own, so I didn't have to worry anymore about getting sidetracked and missing a filter change or target change or meridian flip.  Soooo nice :D

Transparency had degraded, and everything was getting soaking wet, so most of the visual observers on the pad were packing up for the night.  I was chit-chatting with a few folks, and felt like it must have been after 1 AM or something.  But no, by the time I got to my camper to get ready for bed, it was only five minutes before midnight!  I went to bed and got a nice long night's sleep.

Death of the D3100

This weekend finally saw the end of my Nikon D3100, my first DSLR, as well as its kit 18-55mm lens.  The lens started its death spiral about a year ago when I accidentally knocked over my other DSLR, my Nikon D5300, during a timelapse in my backyard in California, which knocked the front section of the lens out of alignment; at the time, I was able to get it back into place.  This time around, something must have broken, because I couldn't get it back into place or get it to sit straight.  Almost at the same time, I was looking through the timelapse images I'd just taken with the D3100 (which earlier hadn't turned on, but eventually did), and the shutter was only opening partway for the first couple frames, and then didn't open at all after that.  I tried to lift it with my finger, but only the front plastic part was going up, not the rubbery second layer.  I'm not sure how they're normally attached to each other, but I think there's something wrong with the little hinges in there.  

The D3100 has been a real workhorse.  I bought it in July 2014, almost exactly one year before I got my first telescope.  It's traveled with me on many hikes and a few backpacking trips, always hanging around my neck or shoulder.  It was my first astro camera, capturing my very first astrophotos through my Celestron 8-inch Schmidt-Cassegrain on the NexStar alt-az mount of Saturn and the Lagoon Nebula, and has come along as a timelapse and widefield camera to star parties and astro weekends from Washington and California to Texas and Wyoming to Ohio and West Virginia.  It captured wide shots of both the 2017 solar eclipse in Wyoming and the 2019 solar eclipse in Chile.  Over the last seven years, it has captured nearly 130,000 images, according to the shutter count, which is right around the mean lifetime for the D3100.  My second DSLR, bought in 2016, already has over 300,000 shutter actuations!  

It looks like I might be able to replace the shutter myself, or send it to Nikon for repair.  I might just do that.  My D5300 has been having trouble with exposure metering, so it'd be good to have a DSLR that can still do that.  However, I had already been kicking around the idea of getting a new DSLR to replace the D3100 that has the computer control capability that the D5300 has...we shall see.  

Unfortunately on the lens front, they don't make that 18-55mm kit lens anymore.  There's a VR (vibration reduction) version that technically works with my D5300, but the VR part isn't compatible.  I found one of the same model as my kit lens on the used camera gear retailer KEH, but they just emailed me and said that it's not actually in stock after all. :(  That lens I definitely need to find a replacement for ASAP!  I'm open to suggestions...

Results

The sequence ran until about 4:30 AM, when the clouds and fog started rolling in in earnest, according to an all-night timelapse I had set up.  I got 44x300s images on the California Nebula, and 22x300s on M31.  Not too shabby!  I also got nine timelapse videos that I'll be putting together into a single video with some music.

The California Nebula came out all right.  Using a narrowband filter with a fast f/2 optic is problematic since the light gets shifted a bit off-band by the optics (and stopping down the lens doesn't help since you're not changing the optics), which results in lower transmission.  I was surprised at first to barely be able to see the nebula in subframes (it jumped out using an H-alpha filter with my ZWO ASI1600MM Pro on my C8 last year), but then I remembered this fact.  It came out all right anyway, if a bit noisy!  And since I forgot to stop the lens down to like f/2.8 or so, the coma around the edges was pretty bad.  But I did get some nice color!

Date: 9 October 2021
Location: Hidden Hollow Star Party, OH
Object: California Nebula
Attempt: 2
Camera: ZWO ASI294MC Pro
Telescope: Rokinon 135mm f/2 lens @ f/2
Accessories: ZWO EAF focuser, Optolong L-eXtreme 2" filter
Mount: Celestron AVX
Guide scope: Orion 50mm guidescope
Guide camera: ZWO ASI120MM-S
Subframes: 38x300s (3h10m)
Gain/ISO: 120
Acquisition method: Sequence Generator Pro
Stacking program: PixInsight 1.8.8-8
Post-Processing program: PixInsight 1.8.8-8
Darks: 75
Flats: 25
Temperature: -15C


The M31 image had a few problems -- one was that I rapid-cooled the 294, thinking I had time for the frost spot to go away, but it stuck around for quite a while.  Normally, to keep the frost spot from forming, I cool it by 5 degrees C over 5 minutes at a time.  This works quite well, but I haven't figured out how to script it yet, so I have to sit there and do it.  (And yes, I have recharged the desiccant.)  But here it is anyway:
Date: 9 October 2021
Location: Hidden Hollow Star Party, OH
Object: M31 Andromeda Galaxy
Attempt: 21
Camera: ZWO ASI294MC Pro
Telescope: Rokinon 135mm f/2 lens @ f/2
Accessories: ZWO EAF focuser, Astronomik L Type 2c 2" filter
Mount: Celestron AVX
Guide scope: Orion 50mm guidescope
Guide camera: ZWO ASI120MM-S
Subframes: 22x300s (1h50m)
Gain/ISO: 120
Acquisition method: Sequence Generator Pro
Stacking program: PixInsight 1.8.8-8
Post-Processing program: PixInsight 1.8.8-8
Darks: 75
Flats: 25
Temperature: -15C


Despite the iffy forecast, Saturday night was decently clear and a fun night.  The whole weekend was a really nice time -- getting to see quite a few people I haven't seen in a while and getting to give a talk, and just enjoying some fresh air and camping and stargazing.  It was a nice weekend, and I'm looking forward to next year!

Sunday, April 18, 2021

#510 - Saturday, April 17, 2021 - New Off-Axis Guide Cam!

On long-focal-length telescopes (and especially Schmidt-Cassegrains, with their floppy mirrors), off-axis guiding can provide better guiding than a guide scope.  I've been using an off-axis guider with my C8 since 2018, and despite some troubles, it has still largely been a better solution than when I was using a guide scope.

I initially paired it with my QHY5 (the original red puck), but found it to be not sensitive enough to pick up guide stars, even at f/6.3 (1280mm).  I picked up the more-sensitive QHY5L-II CMOS guide camera not long after, which typically gets just barely enough signal-to-noise ratio to hold onto a guide star on my setup.  Of course, it performs better under dark skies, but any drop in transparency, and I'm barely holding onto a star, especially in the spring when the density of stars around out-of-galactic-plane targets tends to be a lot lower.

A good night of guiding in February...just enough SNR to hold onto the star.

While the QHY5L-II worked more often than not for me, I started growing tired of losing images because of lost guide stars, and losing hours of the night while Sequence Generator Pro made attempts to recover.  So I've been kicking around the idea of getting a Lodestar, which is a tiny CCD camera made by Starlight Xpress with big juicy pixels and high sensitivity.  The current iteration of this camera is the Lodestar Pro, which has 8.6 micron pixels and 77% quantum efficiency.  Unfortunately, it costs almost $600, which is a bit hard for me to justify when my system mostly works. 

However, I recently came across a used original-model Lodestar being sold by an area astronomy club member, so I pounced on it (along with an enormous 315mm electroluminescent flat panel and a network-enabled sky-quality meter).  The older version of the Lodestar has 8.4 micron pixels and 65% quantum efficiency, so not a huge difference technically (several other specs are the same as well), although the read noise is lower in the new version.  

The first thing to do was to make sure it would talk to my laptop.  I installed the ASCOM driver for it, connected it to SharpCap, and it started taking frames right away.  Woot!  Shining light on it showed a response.

Focusing the Guide Camera


The next challenge was getting it onto the off-axis guider.  This is an event.  Getting it focused is not easy: you have the three-fold problem of not being in focus, not having a star in the field bright enough to show up while badly out of focus, and not knowing exactly where the OAG is looking so you can place a star in it.

The first night I tried it, Wednesday, didn't go well at all.  There were low clouds running across the sky, and once I got a bright star in my main camera and started slewing around to see if it would show up as a big unfocused blob in the guide camera, a cloud would inevitably cover the star.  I gave up that night and went to bed.

Saturday night was much better, with better transparency.  Bonus points: the waxing crescent Moon was high enough above my lemon tree to see with my C8.  A super-bright object is a lot easier to land in the guide camera because you can see it coming from off the edge.  I slewed the scope over to the Moon, centered and synced it in the main camera, and then slewed above and below the main camera image to see where it would show up in my guide camera.  Pretty quickly I could start to see the guide camera image lighting up, so I adjusted the exposure time down to see the Moon's surface, and got the guide camera roughly focused.  I also created a field-of-view indicator element in TheSkyX showing where the Moon was in my guide camera compared to the main camera so that I could more easily land a star inside of it.

The center rectangle is the main camera; the smaller one above is my old guide camera; the rectangles above and below are the E and W side of the pier positions of the guide camera, respectively.

Next, I slewed the main camera to Regulus and centered it, synced the mount to it (so that it would be in the correct position with respect to my camera FOVs on the map), and then slewed the mount so that Regulus would show up inside the guide camera box.  And bang, there it was!  Now time to critically focus.

Now, I use two different brands of filters: Astronomik CLS-CCD & RGB, and Chroma narrowband filters.  They have different thicknesses, and thus shift the main camera's exact focus point a bit.  ("A bit" on my PrimaLuce Esatto focuser is still like 20,000 steps).  So to set the guide camera position so that it's mostly in focus for both sets of filters, I took the CLS-CCD filter's in-focus point and the H-alpha filter's focus point and split the difference.  I set the focuser in the middle of the two, then moved the guide camera in and out until the star was as small as it would get.  Now, since we're so far off-axis and this is a Schmidt-Cassegrain, the star shapes are pretttttyyyy yucky, so "in focus" is hard to determine.  But I got it about as small as it would appear in the camera, and called it good.

Unfortunately, that focus point has the Lodestar just barely inside the tube!  I put on a C-mount extension tube, but it has a lip on it that won't let me insert it far enough to get the camera in focus.  So I need to hunt down an extension tube that doesn't have a lip.  We'll see if I can find one.  


Left to right: ZWO ASI1600MM Pro, ZWO electronic filter wheel, 21mm spacer, Lumicon off-axis guider + Lodestar, PrimaLuce Lab Esatto focuser

Time to Test

Next, I needed to re-calibrate guiding in PHD2 for the new camera.  I slewed the scope to somewhere roughly due East and about 50 degrees up the sky and started looping 8s exposures.  Now, I had already discovered during initial connection testing that this Lodestar has a lot of hot pixels.  And because of the way the camera is read out, some kind of interlacing to increase the speed, the hot pixels really look rather star-like.  I did try to take a dark library in PHD, but they never work for me -- they always over-subtract, and I end up with a bright gray image with black specks where the noise pixels used to be.  However, this is one place where the gross shape of the stars off-axis in a Schmidt-Cassegrain actually works out in my favor: they're way bigger than the hot pixels.  After some trial and error, I set the HFD (half-flux diameter) minimum value in the settings (brain icon) to be 3 pixels, and this did the trick.  PHD grabbed what appeared to be dim, blobby stars rather than the star-like noise pixels.

The hot pixels really mess with PHD's auto-stretching algorithm, so I could barely even see the guide star.  However, your ability to see it isn't what matters -- only PHD's ability to see it.  This is given by the SNR (signal-to-noise ratio) reading.  The higher the value, the better (until it says "SAT" which means you're saturated, which diminishes PHD's ability to calculate the centroid of the star).  While I have been used to values between 5-20 on the QHY5L-II, the Lodestar showed values of 50-90, even in star-poor galaxy regions!


This was very exciting and very promising indeed.  So I got the main camera cooled and started running the sequence for the C8 for that night, which consisted of planetary nebula Abell 31, planetary nebula Sh2-313 (Abell 35), galaxy NGC 5248, and M16 Eagle Nebula.  Abell 31 had some initial trouble with the guide star wandering off, but it was almost behind the tree anyway, so I had SGP just advance to the next target.  Despite Sh2-313 being really low in the south for me, the PHD2 screenshot above was taken there, showing good SNR even through muddy and light-polluted skies.

The rest of the night was quite successful -- all of my targets ran and got a lot of subframes, and it looks like the night finished strong with M16, which is in an area with plentiful bright stars to choose from.


0.55 arcsec RMS guiding is pretttty good :D  And so is that SNR and star profile!

Single luminance (CLS-CCD) subframe, 300s, M16 Eagle Nebula, on the C8



Monday, March 8, 2021

How I Organize My Data

Astrophotography generates a lot of data -- what is one to do?  Between different cameras, telescopes, targets, months, how do you keep track?  I've only been doing this hobby for 5-1/2 years, and I already have over 12 TB of data!

Everyone develops their own organization scheme, but I have one that I think is particularly excellent that I'd like to share.  Maybe some parts of it will help you!

First and foremost -- keep a log book!


A logbook is an essential part of any scientist's toolkit.  Take Adam Savage's advice on it.

Whether it's hand-written in a journal or notebook or typed up on a computer, it's important to keep track of some of the basics of every night you observe.  What gear you used, weather conditions, things that went well, things that broke, etc.  To make life easier for myself, I take notes in Google Keep's sticky-note-esque app (available both on smartphones and on any web browser), and then later dump all those notes into a Word document with more details.  I number every night in sequential order, and I've been keeping notes since my very first night of observing!  

Example note in Google Keep

I have semi-permanent setups in my backyard, so my equipment configurations don't change that often, but if yours do, then make sure to write down the gear you used.

Image Organization: In the Morning


Every morning, I bring my data acquisition laptops (DAQs) inside (I don't have a permanent computer housing built yet) and pull all of the images off of them onto a flash drive, and then over to my desktop computer.  I highly recommend copying the data off your laptop rather than cutting it; leave it on your laptop until you are sure it is safely transferred to your image processing computer.  Sometimes storage drives can have weird faults that wipe all your data, or data can be corrupted.  I check all my image files before deleting them from my DAQs.

On my "Stacks" hard drive (an 8 TB drive dedicated to deep sky images), I have a folder called "Backyard - To Process."  Within that folder are sub-folders for all of the targets on which I am currently taking data or haven't yet attempted to process.  One folder at the top is called "_to sort" (the underscore keeps it at the very top of the list).  When I copy the images off my DAQs, they go into a folder there named with that night's date.

The older folders have planetary & lunar data I haven't had time to deal with!

After scanning through all of last night's images using the Blink tool in PixInsight (or if you have DSLR images, you can just open them using Windows Photo Viewer or whatever other image viewer), I shuffle them out to their target folders in the "Backyard - To Process" directory.


The green tick marks are made by right-clicking the folder, clicking Properties, going to the Customize tab, and selecting "Change Icon."  It's an easy way to spot which datasets I have deemed ready to process.

Inside each of those target folders is another set of folders: lights, cal, finals, and PixInsight.  The light frames go into the "lights" folder (separated further by filter, if needed); corresponding master darks and flats go into the "cal" folder (copied over from my dark and flat libraries -- more on that in a minute); "PixInsight" is the folder in which I do my processing; and "finals" is where I keep final copies of the images.


Since I use this template for every dataset, I finally wrote myself a simple batch script to generate these folders and a copy of my metadata text file template (more on that in a bit).  They're very simple to make: create a new text file (right-click an empty place in the folder window, New -> Text Document) and name it "something.bat" (no quotes).  Open it with your preferred text editor (right-click, Open With -> choose text editing program).  Mine looks like this:
rem Create the standard folder template for this dataset
mkdir lights
mkdir cal
mkdir finals
mkdir PixInsight
mkdir PixInsight\processes

rem Copy in the metadata template and rename it to stats.txt
copy "Q:\_Stacks\stats format.txt" finals
ren "finals\stats format.txt" stats.txt

"mkdir" means "create directory;" "copy" means, well, copy (first argument is "copy from" location, second argument is "copy to" location); and "ren" means "rename" (first argument is the file location and name that you want to rename, the second is what you want to rename it to).  

To execute the batch file, copy it into the folder you want to make the folders in and double-click it.  It will run quickly, and then you can delete the copy of the batch file.  If you want to get even fancier and move all existing images into the "lights" folder, you can add:

move *.fit lights

where the * is a wildcard meaning "all files ending in," and .fit is the extension my image files are saved as.

Don't forget, if you have a directory or filename that has spaces in its name, you need to put the whole filepath in quotes (like I did in the "copy" line above). 

Linux and Mac have different commands, but a similar idea.  (If you use Linux, I hope you already know how to do this!)
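If you'd rather have one script that works everywhere, here's a short Python sketch of the same folder-template idea (the template path is the one from my batch file above; substitute your own):

```python
from pathlib import Path
import shutil

# Same metadata template path as in the batch file; substitute your own
TEMPLATE = Path(r"Q:\_Stacks\stats format.txt")

def make_attempt_folders(base, template=TEMPLATE):
    """Create the lights/cal/finals/PixInsight folder template under base."""
    base = Path(base)
    for sub in ("lights", "cal", "finals", "PixInsight/processes"):
        (base / sub).mkdir(parents=True, exist_ok=True)
    # Drop a copy of the metadata template into finals/ as stats.txt
    if template.is_file():
        shutil.copy(template, base / "finals" / "stats.txt")
    # Sweep any loose .fit files into lights/
    for fit in base.glob("*.fit"):
        fit.rename(base / "lights" / fit.name)
```

Same behavior as the batch file (plus the optional sweep of loose .fit files into "lights"), but it runs unchanged on Windows, Mac, or Linux.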

Image Organization: Each Dataset


First, I have a different hard drive for each type of data: deep sky, planetary, timelapse, and miscellaneous (this has nightscapes, images collected for competitions, solar/lunar eclipses, other people's data that I've helped them process, pictures of my telescope setups, and whatever else doesn't have a home).  Having different hard drives is just a result of having too much data to fit on a single drive, so I broke it up by logical categories.

In general, I organize my data in this hierarchy: target, attempt.  Inside each attempt is the same setup as in "Backyard - To Process," with the cal, lights, finals, and PixInsight folders.


An "attempt" on a target can be one night, or many nights, but it's all the data I am going to combine into a single, final image.  Occasionally, I go back and combine multiple datasets; those combinations would go into the most recent attempt folder that is included in that combination.  For example, if I combine data from Lagoon #4 and Lagoon #5, the processing steps and final images would go into the Lagoon #5 folder.

Metadata File


Even if you are young with a keen mind, once you get enough datasets rolling, it becomes easy to forget which gear you used, where you took the images, etc.  The best way to combat that is to write it down and keep it with that dataset.  In the "finals" folder, I make a simple text file called "stats.txt" that holds all that info in a standardized template I developed.  Text files are nice because they are readable on every platform, for free, and will be for a very long time.  My preferred app is Notepad++, but you can even just use the simple Notepad app that's built into Windows, or vim on Linux if you really hate yourself, or whatever text editor you prefer.


In addition to having a text file with each dataset, I also have a summary of all of these text files for easy searching in an Excel spreadsheet.  It's sortable and filterable, so I can quickly do things like find which target attempt uses compatible gear to combine datasets; find example images for creating comparisons between telescopes, cameras, techniques, etc; see when the last time I imaged a target was; see if I need to re-do a target now that I have better skills; all sorts of things.  It's also handy for when I'm at a star party or outreach event and someone asks, "How long was this exposure?" or "What telescope/camera did you use?" and I can quickly go look it up from my phone.

Green highlight means "re-process with more skill;" yellow highlight means "need more data."

Processing Files


Inside the "PixInsight" folder in the attempt folder, I have more folders that contain my processing steps.  I number them sequentially so that it's easier to go back and re-do steps if I don't like the result.


In addition, I keep notes in the metadata file with what processing steps I used and some details about them as needed (what type of pixel rejection I used in stacking, how many iterations of deconvolution I did, which subframe I used as the reference frame for registration, etc).  

Deleting Data


I never delete entire datasets, even if they seem like crap.  For one, they might actually be fine, but I don't have the skill to process them yet.  Or, if they truly are crap, they make useful teaching examples about how to spot clouds and bad tracking, or can even help diagnose problems with your gear, like frost spots or burrs on a gear.  (I do delete bad subframes within a dataset, although sometimes I set them aside for further analysis or for use as examples).  It's also fun to go back and see how bad my images used to be and how far I've come :)

To keep dataset size down, once I'm done processing, I delete all of the pre-processing data: calibrated, approved (from SubframeSelector), debayered, and registered subframes.  But I keep the original stacked data to start re-processing from (I don't often have to go back and re-stack data after I've given it a few attempts), and I keep the matching calibration files (master darks and flats) with the dataset so I can easily re-generate the pre-processing frames if needed later on.  This saves enormously on dataset size, especially now that I gather 20-30 hours of total exposure time per target these days.

File Naming Convention


Subframes


I use Sequence Generator Pro to do my data acquisition, and you can program the file naming convention right in the sequencer.  They've even got a little button that shows what all of the reference key symbols mean, and there are a ton of bits of information you can include in the filename.  My personal preference is a filename like "ngc-7662_30s_-20C_CLS_f202.fit," which has the important pieces of information that change from image to image for my setup: target name, exposure time, camera temperature, filter name, and then the frame number.  (I always use the same gain, offset, and binning, and I don't yet have a rotator to need the angle).  I also like to have the images for a given target be stored in a folder of that target name.  So my filename convention in SGP is this: "%tn\%tn_%el_%ct_%fe_f%04."

Other metadata -- such as RA/dec, gain value, and anything else SGP knows because I've programmed it into the Equipment Profile (pixel scale, focal length, and more) -- is saved in the FITS header (which can be accessed in programs like PixInsight, FitsLiberator, and more).

Final Images

After I'm all done processing an image, it's time to save it out in a few formats: XISF (PixInsight's preferred format), TIFF (for high-quality digital copies and for printing), and JPG (for posting on social media and keeping a copy of on my phone).

The filename I give my finals files leads me straight back to where their original data are stored.  For example, Orion Nebula #15's final is named orion_15_1_5.  The convention goes: target-name_attempt_stack_process.  Each new attempt at imaging a target increments the attempt number.  Each different time I stack it (whether that's in different software, using different stacking settings, or mixing with other data) increments the stack number.  And each post-process (applying different techniques post-stacking) increments the process number.  So orion_15_1_5.jpg is the Orion Nebula, attempt #15, stack #1, process #5.
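Because the convention is so regular, it's also easy to decode by script.  A quick Python sketch (the function name is mine, purely illustrative):

```python
import re

def parse_final_name(stem):
    """Split a final-image stem like 'orion_15_1_5' into
    target name, attempt #, stack #, and process #."""
    m = re.fullmatch(r"(.+)_(\d+)_(\d+)_(\d+)", stem)
    if m is None:
        raise ValueError(f"{stem!r} doesn't match target_attempt_stack_process")
    target, attempt, stack, process = m.groups()
    return {"target": target, "attempt": int(attempt),
            "stack": int(stack), "process": int(process)}
```

So parse_final_name("orion_15_1_5") gives back the target "orion", attempt 15, stack 1, process 5 -- handy if you ever want to auto-index a folder of finals into a spreadsheet.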



This way, when I have just the jpg on my phone, I can immediately know where to go looking for the image acquisition details (like exposure time, camera, telescope, location, etc) either in the metadata file or the Excel spreadsheet.  (This has saved me after AstroBin's massive data loss event -- I name my images on there with their attempt numbers, like 'M42 Orion Nebula #15,' so it was easy to figure out which file I needed to re-upload!)

Calibration Libraries


News flash: You don't have to take new darks and flats every night you image.  You can generate libraries of files that you can re-use, depending on the circumstances.

Darks


With cooled cameras, it's relatively easy to generate dark libraries, since you can set the camera temperature (to within what your camera can cool to, depending on ambient temperature).  To build my dark library, I would set my camera out on the back porch, cover it with a bin and a blanket for greater darkness, run the power and USB cables inside, and then use SGP to run an all-night sequence of various exposure times at my selected temperature and gain.  I've even taken darks in my refrigerator when I needed a set I didn't have and it wasn't cold enough outside to match some recently-acquired data!

For darks, you only need new master darks under the following circumstances:
  • Different camera temperature
  • Different gain/offset
  • Different binning
  • Different exposure time
  • Different camera (even if it's the same model)
  • Periodically, as the electronics and sensor characteristics can change over time (on my ZWO ASI1600MM Pro, darks from three years ago no longer match darks I've taken more recently, so I'm having to re-do them)
In my "Dark Archives" folder on my Stacks drive, I have my dark subframes and master darks organized by camera, then by temperature, then by gain, then by exposure time.  (If I binned, which I do for my CCD camera but not for my CMOS cameras, there would also be a 1x1 or 2x2 set of folders).  Inside of each bottom-level folder (exposure time) is the master dark, as well as the subframes (so I can re-stack if needed).


Thanks to all my effort upfront to build up my darks library, I haven't had to take new darks on my ZWO ASI1600MM Pro in over a year.
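This hierarchy also makes the matching script I keep meaning to write almost trivial.  A hedged Python sketch, assuming folders named exactly by camera/temperature/gain/exposure and master files whose names start with "master" (adjust both to your own scheme):

```python
from pathlib import Path

def find_master_dark(archive, camera, temp, gain, exposure):
    """Return the master dark matching a dataset's settings,
    or None if it's time to shoot new darks."""
    folder = Path(archive) / camera / temp / gain / exposure
    if not folder.is_dir():
        return None
    # Assumes masters are named like 'master_dark_300s.fit'; adjust to taste
    masters = sorted(folder.glob("master*"))
    return masters[0] if masters else None
```

From there it's one shutil.copy away from dropping the right master into a dataset's "cal" folder automatically.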

Flats


Flats are a little more complicated -- at least, if you have a non-permanent setup.  Flats need to be re-taken under the following circumstances:
  • Different gain
  • Different filter
  • Different telescope, reducer, or other optic-train component (even non-optics components can change your flat -- like additional vignetting from a new filter wheel, adapter, or off-axis guider)
  • Different camera (even if it's the same model)
  • Every time you either rotate your camera or remove it from the telescope
The main things that flats address are vignetting and dust bunnies.  If you rotate your camera at all, you need a different set of flats because a) the dust bunnies will be in different places (unless they're on your camera sensor or window itself, of course) and b) the location of the vignetting may change since the camera is unlikely to be smack in the middle of your image circle, and because most sensors are rectangular.  

To deal with this, I organize my flats in the following hierarchy: first by camera, then by optics train (for example, "C8, focal reducer, Esatto focuser, ZWO EFW"), then by date, then by filter.
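Flats add one wrinkle to the library-lookup idea: you usually want the newest set for a given camera + optics train + filter.  A short Python sketch, assuming date folders named so they sort chronologically (e.g. 2021-03-08):

```python
from pathlib import Path

def latest_flats(flat_root, camera, train, filt):
    """Return the most recent flats folder for this camera,
    optics train, and filter, or None if none exists."""
    train_dir = Path(flat_root) / camera / train
    if not train_dir.is_dir():
        return None
    # Newest date first, relying on sortable date folder names
    for date_dir in sorted((d for d in train_dir.iterdir() if d.is_dir()),
                           reverse=True):
        candidate = date_dir / filt
        if candidate.is_dir():
            return candidate
    return None
```

This is why sortable date names matter: "2021-03-08" style folders sort correctly as plain strings, while "3-8-2021" style ones don't.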



Unless your telescope is in a laboratory-grade clean room, then yes, you will need new flats every time you set up and tear down and for each different filter.  And to capture the dust bunnies, you'll need to be in focus -- so I always take my flats the next morning, after I've focused on the stars during the night.

Backups, Backups, Backups!!


Imagine if your astrophotography data hard drive failed tomorrow.  How devastating would that be?  Years of data and many, many hours of hard work, gone.  Backing up your data is vitally important.

Local Backup


For local backup, I have several external hard drives that I back up to using some free software (FreeFileSync for me, but there are plenty out there) about once a month.  Each external hard drive pairs with one of my internal hard drives.  They're also handy to bring to star parties for on-site processing when I only have my laptop.  The rest of the time, they live in a fireproof, waterproof safe to help ensure their survival in case of fire.  It's also important that they're not plugged in continuously so that they're protected from power surges and lightning strikes.

I'm eventually going to set up a NAS -- a set of hard drives in an external enclosure that uses a RAID configuration to mirror data across drives, keeping it safe from hard drive failure.  All hard drives eventually fail, especially spinning-disk drives, which typically only have a lifespan of 3-5 years.

Online Backup


Local backup is still problematic; you can't keep a NAS in a fireproof safe, and you might forget to unplug your machine or NAS during a lightning storm (especially if they're frequent where you live!).  Online backup allows you to store a backup copy of your data offsite somewhere, usually distributed across many servers around the country or world.  They have better data reliability than managing a NAS or external hard drives yourself, and your data is safe if your computer or house is destroyed.  

These services come at a cost, depending on which service you go with and what type of service.  Some services allow free backup, but it costs to download your data; this is known as catastrophic backup.  Some services have different pricing tiers for the amount of data you can store and the maximum file size.  

I personally use Backblaze.  It's $6/month for unlimited file storage, and it will backup continuously if you set it to.  If you lose your data, you can either re-download it (which will take a long time if you have a lot of it, like I do) or pay to have them ship you a drive -- currently $189 for up to 8 TB (but they'll refund you if you ship it back within 30 days). 

The only issue with online backup I've run into so far is butting up against data caps.  Comcast has a data cap of 1.2 TB in most states right now (up from the usual 1 TB due to the pandemic...woo-hoo), which means I can only upload about 800 GB per month because I use about 400 GB in my daily life (video calls, Netflix, YouTube, etc).  You do get one courtesy month where they won't charge you extra ($10/50GB I think), and in that month I did manage to get about 4 TB uploaded, but with 12 TB of astro data + 1-2 TB or something of personal data and images, it's taking a long time to get backed up. I've been working on it since October (it's now March), and I have to keep an eye on my data usage and stop the backup once I get over 800 GB.  It's a pain, but once the initial upload is complete, I should be easily able to stay under the data cap -- I probably only generate about 100 GB a month or so, depending on how much processing I do, whether I do timelapses, or whether I've been to a star party.  :)

Back it up!


It's worth your while to pursue both options.  Imagine your life without your hard-won astro data!

Conclusion


Dealing with so much data can be a challenge, but I totally love the system I've developed and it works great for me.  I know pretty much exactly where to find everything, in short order, and I know everything about how I created that final image.  Put some thought into how you want to organize your data and try it out.  A little planning ahead can go a long way.  I'm eventually going to write more scripts to do more automation for me (like grabbing the matching master flats and darks for my datasets!).  

Best of luck!