I was looking at the forecast in the Astropheric app on Monday, and saw a nice high ISS pass was coming up on Wednesday. Imaging the International Space Station has been on my "astrophotography bucket list" ever since I saw it was possible with an 8-inch reflector. I decided I'd give it a shot -- an experimental run this time, and a better one the next.
I opened up my trusty SkySafari app and got to work planning. The ISS zips through space at 17,000 mph relative to the surface of the Earth, crossing the whole sky in just a few minutes. Timing, and correct positioning of my scope, would be essential. It would be a far easier task in something wide-field like my Takahashi, but since the ISS is quite small sitting up there 250 miles above the surface, I needed a scope with magnification and resolution. My 8-inch Schmidt-Cassegrain was up to the challenge. All 2 meters of its compacted focal length! I turned on the FOV indicator in SkySafari for my ZWO ASI294MC Pro and the C8, and set about finding a good spot from which to image it. The highest point of its pass would be the closest, which for this pass was 71 degrees altitude. However, that would happen in the northeast, where there was a reasonable chance it would be behind my plum tree. So I picked a more easterly position, which put it farther away, but had a better chance of success. I decided on a star that it would pass near, and picked that position as where I'd have Sequence Generator Pro center. Then, I would switch to SharpCap for the actual capture, since it captures way faster than SGP (and does video).
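For the curious, the field-of-view math behind that planning fits in a few lines of Python. The numbers here are my rough assumptions for illustration (C8 focal length, ASI294MC Pro pixel size and sensor dimensions, ISS truss span, and slant range), not measured values:

```python
import math

# Back-of-the-envelope pixel scale and ISS size check.
# Assumed values: C8 focal length ~2032 mm, ASI294MC Pro ~4.63 um pixels
# on a 4144 x 2822 sensor, ISS ~109 m across at a slant range of ~400 km.
focal_mm, pixel_um = 2032.0, 4.63
sensor_px = (4144, 2822)

scale = 206.265 * pixel_um / focal_mm            # arcsec per pixel
fov_arcmin = (sensor_px[0] * scale / 60,          # field of view, arcmin
              sensor_px[1] * scale / 60)

iss_arcsec = math.degrees(109 / 400_000) * 3600   # ISS angular size, arcsec

print(f'{scale:.2f}"/px, FOV {fov_arcmin[0]:.1f}\' x {fov_arcmin[1]:.1f}\'')
print(f'ISS ~{iss_arcsec:.0f}" across, ~{iss_arcsec / scale:.0f} px')
```

At roughly half an arcsecond per pixel, the station would span on the order of a hundred pixels, which is exactly why the long focal length matters here.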
Mid-afternoon, when I needed a break from doing homework, I went outside to take off the Tak and attach the C8. Thanks to the handy cable bundle I've made for it, all I had to do was detach the bundle in one place from the mount, and take off the whole thing. I just left everything plugged in (although I did unplug my other ZWO camera from the USB hub so that my programs wouldn't get confused).
I got started on dinner late, and thus didn't get outside until 7:05 PM, with the pass at 7:20. I had wanted to get out there at 6:50 to make sure the alignment offset between the two scopes was not too bad, and to get focused. I slewed to a star to see what the alignment looked like, but the camera USB cable I had routed through the cable hook on my mount didn't slide like it was supposed to and tugged on the dec axis instead, so the mount didn't make it to the star, and the alignment was thrown off. I didn't realize that's what had happened at first, though, so I just re-added stars to the alignment model, but the model was still not really working.
Time was running short, so I decided just to have SGP plate-solve its way there. However, when the PlateSolve2 module opened up, it had zeros in for the FOV, even though I had input the FOV earlier! This turned out, I think, to be a result of not having also put the parameters into SGP's Control Panel for that sequence. So I quickly pulled up the note on my phone that had them, and had it try again. It didn't crash this time, but it spiraled around for a while trying to find a match for the frame. When the clock hit 7:20, I had to give up -- I only had 45 seconds left until the ISS came into my rough FOV. Time for plan B -- chase it!
So I hurriedly disconnected SGP, opened SharpCap, set the settings (I had decided on a 1 ms exposure time and 139 gain after looking up a few examples from similar scopes), set the video time limit to 3 minutes, and hit Start Capture. I turned on my red dot finder, and saw that the bright dot of the ISS was actually going to pass through about where my scope was pointed. I watched it go through the red dot, and saw a flash on my computer screen -- I got it!! Then I had a thought: why not keep chasing it? So I grabbed the hand controller and, watching how it moved, anticipated where it would be next. I put the red dot there, watched a few frames flash on the screen, and then moved on to the next spot. I got it 5 or 6 times!!
I opened up the AVI to see how it looked (I forgot to switch to SER format!). I had a super screen stretch on the video feed in SharpCap so that I could see it more easily, so I wasn't sure whether 1 ms would actually be bright enough. But I decided to err on the side of too dim rather than too bright, since you can pull dim pixels up, but you can't push saturated pixels back down. I jogged through the video slowly, and it was mostly all black. But then I caught a brief flash -- there it was!!
I was so excited to see how it came out that I unplugged my USB hub just to copy the data off onto my hard drive (I couldn't find my larger flash drive, and the hub is unpowered and couldn't support both the hard drive and the power-hungry cameras at the same time).
While I had the C8 on, I decided to grab some moon and Saturn images as well. Unfortunately, there was a lot of heat coming up off the cement pad that my scope is on, which showed up in the video as large, undulating shimmers instead of the small-scale shimmer more indicative of bad atmosphere. The concrete pad is nice for physical stability, but bad for temperature and localized air distortion.
After that, I took the C8 off and put the Takahashi back on with my ZWO ASI1600MM Pro and hydrogen-alpha filter. I went ahead and just re-aligned the mount. Since it was still well before 8 PM, I decided it was probably time to re-polar-align, since the tripod had undoubtedly shifted around from me moving around the scope, taking the cover on and off, and a small earthquake we had last week. After I adjusted the polar alignment using Celestron's All-Star routine, I went to re-add the alignment stars to fix their positions, but the mount was way further off than it should have been. So I rebooted it, but it was still largely under-slewing in RA. I rebooted it a few more times, but it kept behaving the same way. It also had some erroneous alignment star data -- it said Altair was still in the eastern sky, even though it was well past the meridian by this point. I double-checked my location, time, and date to make sure nothing had gotten scrambled -- everything looked fine. Finally, I tried just aligning anyway, and that seemed to work -- by the time I got through the five stars, it was doing a good job of landing them near the center. Weird.
I paused to admire the stars, and noticed that Cassiopeia was already high enough to clear my neighbor's garage, and further to the right of my plum tree than I had thought. Maybe I could catch M31, the Andromeda Galaxy? I opened up SkySafari, and it looked like yes, it would be high enough as it neared the meridian to clear my plum tree after all! So I added that to my target list (and took off M33, since I had over 130 hydrogen-alpha frames, which would be plenty!). I needed another target before M31 was high enough, however. Since Cassiopeia was more visible than I thought in my partially cut-off sky, I wondered where the Heart Nebula was, which I have been dying to get a good image of. (My last attempt came out pretty bad, and we had cloudy winters in the Midwest.) Much to my delight, it was far enough east of my tree that I figured I could get it for a little while! So I added that too. But I still needed a target to fill a gap in the morning hours before the Rosette Nebula was high enough, so I went a-hunting for something good in H-alpha. I checked my H-alpha observing list in SkySafari, and saw a nebula I had forgotten about -- NGC 2174, the Monkey Head Nebula. It's way far up in Orion, practically in Gemini, and so would be high enough to image before the Rosette cleared my neighbor's garage. It was magnitude 10, and rather large, but I figured I'd give it a shot, despite the large, bright moon.
The next morning, I looked at my new targets -- and was very excited!
5-minute H-alpha frame on the Heart Nebula
5-minute H-alpha frame on the Monkey Head Nebula (NGC 2174)
Cannot wait to stack these!!
I started the sequence and went inside to process my ISS images. First, I brought the AVI video into VirtualDub and broke it up into single frames that would be a lot easier to scroll through and find all the frames that had captured it. I found 69 frames total! I put them into RegiStax, but it didn't like how dim the ISS was, and wouldn't let me pick alignment points. They were quite dim, so I opened one of them up in PixInsight, stretched it, and then applied that process to all of them using ImageContainer.
Of course, these were all JPGs, but the histogram peak was sufficiently narrow that they sort of acted like raw files. I saved them back out, and tried again with RegiStax. It worked, but the image at the end was all black. It probably had a hard time detecting such a small target and didn't align the frames properly. So I brought the frames back into PixInsight, and used its FFTRegistration script to give it a go. It did a good job; however, as I used the Blink tool to scroll through them, I realized that stacking wasn't going to be an option, because the ISS had a different size and rotation in every frame since my frame rate was fairly low. So I took the first frame, where it was the largest and relatively sharp, and processed that by itself in PixInsight, reducing noise, adjusting the curves, and applying some sharpening via the MultiscaleLinearTransform process. It came out pretty great!
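The stretch step is, at heart, a midtones transfer function. Here's a minimal numpy sketch of the idea -- the midtone choice below is illustrative, not my actual setting:

```python
import numpy as np

def mtf(m, x):
    """Midtones transfer function for x in [0, 1].
    m is the midtones balance: mtf(m, m) == 0.5, with 0 and 1 fixed."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

# Stretch a dim frame: setting the midtone near the data's median
# lifts faint pixels (like a dim ISS) toward mid-gray.
frame = np.random.default_rng(0).random((4, 4)) * 0.05  # dim data in [0, 0.05]
stretched = mtf(np.median(frame), frame)
```

Because the function pins 0 to 0 and 1 to 1 and is monotonic in between, it brightens faint detail without clipping saturated pixels.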
Because I broke up the video in VirtualDub, all of the color information was lost, since it doesn't debayer. So I looked for an app online that would debayer my videos (which may also solve my problems with processing raw color camera videos in RegiStax in general). I found one that I came across once before but hadn't explored yet: PIPP (Planetary Image Pre-Processor). It has a whole host of tools that are going to be very useful! I passed the video through it and got debayered TIFF images out the other end, with timestamped filenames and everything.
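One of the simplest debayering schemes (and one PIPP-style tools commonly offer) is the "superpixel" approach, where each 2x2 Bayer cell collapses into one RGB pixel. A minimal numpy sketch, assuming an RGGB pattern -- check your camera's actual pattern before relying on this:

```python
import numpy as np

def debayer_superpixel(raw):
    """Collapse each 2x2 RGGB Bayer cell into one RGB pixel.
    Halves the resolution but needs no interpolation."""
    r  = raw[0::2, 0::2].astype(np.float32)  # top-left of each cell
    g1 = raw[0::2, 1::2].astype(np.float32)  # top-right
    g2 = raw[1::2, 0::2].astype(np.float32)  # bottom-left
    b  = raw[1::2, 1::2].astype(np.float32)  # bottom-right
    g = (g1 + g2) / 2.0                      # average the two greens
    return np.stack([r, g, b], axis=-1)

# Tiny synthetic 4x4 Bayer frame for demonstration
frame = np.arange(16, dtype=np.uint16).reshape(4, 4)
rgb = debayer_superpixel(frame)
print(rgb.shape)  # (2, 2, 3)
```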
While I was going through the images, I saw that a different frame had a sharper, if slightly smaller, image of the ISS, so I went ahead and processed that one instead. Stretch, curves, and then denoising and sharpening with MultiscaleLinearTransform (yes, it's a versatile tool!). Super exciting result!
Date: 9 October 2019
Location: East Bay area, CA
Camera: ZWO ASI294MC Pro
Telescope: Celestron C8
Accessories: Astronomik L Type 2c 2" filter
Mount: Celestron AVX
Guide scope: N/A
Guide camera: N/A
Exposure time: 1 ms
Acquisition method: SharpCap
Stacking program: N/A
Post-Processing program: Planetary Image Pre-Processor, PixInsight
I am extremely thrilled to have gotten such nice detail on my very first attempt! I did not believe it would go this well! I'm excited for more. I've got a satellite-tracking software setup in the works, and I'll also eventually add a 2x Barlow for increased magnification. I'm also going to try skipping the Barlow and cropping the frame instead, so that I can up the frame rate and get multiple frames with the ISS at about the same size and angle, letting me do some stacking. I'll also try my monochrome camera -- no color, but the resolution is a lot higher, due to the smaller pixel size and the fact that it's monochrome.
[ Update: October 22, 2019 ]
I processed the Cocoon Nebula image last weekend, but haven't had a chance to write it up yet. October 9th was the last night I took data on it, which is the date I've decided to use when writing up these multi-night extravaganzas.
The Cocoon Nebula (IC 5146) is a combination reflection, emission, and dark nebula, all in one gorgeous package. It's located up high in the constellation Cygnus, the swan. Lying about 4,000 lightyears away, it spans 15 lightyears (or 12 arcminutes, for you observers out there - a little less than half the width of a full Moon for everyone else!). Besides the gorgeous color and detail, there is an enormous dark nebula that forms a thick line with the Cocoon Nebula, blotting out background stars. I didn't pick it up as much as I would have liked in this image, but that is hard to do from such light-polluted skies!
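The small-angle math behind those numbers is quick to check (the size and distance are the approximate published values):

```python
import math

# Sanity-check: ~15 light-years across at ~4,000 light-years away.
# Small-angle approximation: theta (radians) ~ size / distance.
size_ly, dist_ly = 15.0, 4000.0
theta_arcmin = math.degrees(size_ly / dist_ly) * 60
print(f"{theta_arcmin:.1f} arcmin")  # ~12.9', close to the quoted 12'
```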
The emission part of the nebula is the pink/red area, which is caused by a characteristic red glow emitted from hydrogen gas energized by the young, hot stars in the nebula. The dim blue halo around it is the reflection nebula part, where gas is reflecting the blue starlight. The dark portions are molecular gas that absorbs light.
This image is a combination of RGB (color) images and monochrome hydrogen-alpha images taken with my monochrome camera and a special narrowband filter that cuts out all other wavelengths of light except for a narrow slice right around the deep red color of H-alpha emission (656.3 nm). The H-alpha helps to accentuate the red (which is the bulk of the nebula), as well as refine detail in the nebula, since the deep red light is less disturbed by the atmosphere than the wide-band color.
After five weeks of photon collection, both color and H-alpha...here's the result!
Date: RGB: 1, 7, 8, 9, 19, 20, 22, 23, 24, 25, 28 September 2019
Ha: 30 September, 1, 2, 3, 9 October 2019
Location: East Bay area, CA
Object: IC 5146 Cocoon Nebula
Camera: RGB: ZWO ASI294MC Pro
Ha: ZWO ASI1600MM Pro
Telescope: Takahashi FSQ-106N
Accessories: Starlight Xpress filter wheel, Astronomik CLS 2-inch filter (RGB), Astronomik H-alpha T-thread 12nm T2 filter (Ha)
Mount: Celestron AVX
Guide scope: Orion 50mm mini-guider
Guide camera: QHY5
Subframes: RGB: 141x180s (7h3m)
Ha: 22x300s (1h50m)
Gain/ISO: RGB: 120
Acquisition method: Sequence Generator Pro
Stacking program: PixInsight 1.8.7
Post-Processing program: PixInsight 1.8.7
Darks: RGB: 40
Biases: RGB: 100
Temperature: -15°C (chip)
Whew! My data files get longer and longer!
So this image consists of nearly 9 hours total of color and H-alpha data, as you can see in the description. It's so exciting to be able to get that much data on one target. Now that I don't have to drive out somewhere, set up, stay up late, pack back up, drive back, and then haul all my crap up to my second-story apartment, I am more free to collect a lot of data since it requires little effort on my part (on a nightly basis, at least! It's taken a lot of legwork to get to this point!)
So processing this image was really interesting. I haven't done a whole lot with adding H-alpha to color datasets before, so it was exciting to see what it would do. To compare how big of a difference the H-alpha channel makes, I also processed a RGB-only image. Here's that result:
So you can see a bit more of the blue reflection nebula halo around this one, but the contrast, detail, and saturation in the main part of the nebula is far less than the H-alpha version. Fascinating! What a difference.
Here's the whole process:
- Calibrated lights with master dark and bias
- Unticked "optimize" for darks (in RGB data) to prevent amp glow from showing up
- Added weights and rejected frames in SubframeSelector
- Debayered RGB frames
- Registered with StarAlignment
- Cropped with DynamicCrop
- Stacked with ImageIntegration
- Denoised with MultiscaleLinearTransform, with lum mask
- Corrected color with PhotometricColorCalibration
- Attempted Deconvolution, but it didn't do much (most of the stars have very little eccentricity!)
- Combined Ha and RGB channels with Light Vortex Astronomy method
- Stretched with HistogramTransformation (liked the result better than MaskedStretch)
- Increased contrast with HDRMultiscaleTransform, 9 iterations
- Reduced stars with Light Vortex method
- Tweaked a bit more with CurvesTransformation
- Cropped with DynamicCrop
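For a flavor of what the Ha-into-RGB step does, here's a simplified screen-style blend of Ha into the red channel in numpy. This is an illustration only, not the actual Light Vortex Astronomy recipe I followed, and the weight value is made up:

```python
import numpy as np

# Simplified Ha-to-red combine: a "screen" blend boosts red where
# Ha signal is strong, without ever clipping above 1.0.
rng = np.random.default_rng(1)
r  = rng.random((8, 8)) * 0.4   # red channel, linear, in [0, 1]
ha = rng.random((8, 8)) * 0.6   # Ha channel, linear, in [0, 1]

w = 0.5                          # how strongly Ha feeds the red channel
r_new = 1.0 - (1.0 - r) * (1.0 - w * ha)   # screen blend, stays in [0, 1]
```

The appeal of a screen-type combine is that it only ever brightens: the result is never below the original red channel, and it asymptotically approaches 1.0 instead of saturating.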
I've been largely using DynamicBackgroundExtraction (DBE) after giving AutomaticBackgroundExtraction (ABE) a try once and not liking it, but Warren Keller's book Inside PixInsight gives some suggestions on ABE's settings, and after some tweaking, I found that the result was at least as good as DBE's, but with waaaay less work on my part. So I've added that process icon to my workspace for quick access. It's getting pretty crowded!
Side note -- is anybody else having issues with PixInsight after the 1.8.7 update? Mine sometimes crashes processes randomly, or they'll just disappear, or a few times the entire program has crashed. Was very stable before that update. A patch update just came out the other day -- hopefully that fixes the bugs.