Gear
Mount
Unlike deep sky imaging, you don't need a mount that tracks or guides super well. I use my NexStar SE alt-az mount all the time, which has developed some slop and can't stay on a target for very long. My 20-30 second deep sky images always had periodic error and drift in them. But for planetary imaging, as long as your mount can keep the planet from moving too much over the course of a few minutes, you're good.
Telescope
Aperture does not rule the roost for planets. In fact, a large aperture can work against you - it samples a wider column of turbulent atmosphere, so more of the distorted light enters your telescope. This is part of the reason why my 8-inch always gets better planetary images than my 11-inch.
What does matter, however, is focal length. If you hook up a DSLR or a CCD with a large chip to a 3-inch refractor, the planet in question will appear quite small, and you won't be able to get much detail. So instead, you need something with a long focal length to get a small FOV (field of view) - and spreading the planet's light over more pixels also helps keep your camera from saturating. Different combinations of cameras, apertures, and focal lengths will give you different FOVs - I recommend this awesome tool for checking yours: FOV Calculator.
Alternatively, you can get effectively longer focal lengths using Barlows and eyepiece projection - but be careful, a small aperture will have a resolution limit that you may well exceed. The FOV Calculator can help you determine this too.
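If you'd rather run the numbers yourself, the math behind those calculators is simple. Here's a minimal Python sketch - the gear numbers are just examples, not a recommendation:

```python
# Sketch of the math behind FOV/sampling calculators (example numbers only).
# Image scale: how much sky each pixel sees. Dawes limit: the aperture's resolving power.

def image_scale_arcsec_per_px(pixel_size_um, focal_length_mm):
    # 206.265 converts radians to arcseconds, scaled for the um/mm units
    return 206.265 * pixel_size_um / focal_length_mm

def dawes_limit_arcsec(aperture_mm):
    # Classic empirical Dawes limit: 116 / aperture in mm
    return 116.0 / aperture_mm

# Example: a 3.75 um pixel camera on an 8" SCT (2032 mm focal length, 203 mm aperture)
scale = image_scale_arcsec_per_px(3.75, 2032)   # ~0.38 arcsec/pixel
limit = dawes_limit_arcsec(203)                 # ~0.57 arcsec
print(f"Image scale: {scale:.2f} arcsec/px, Dawes limit: {limit:.2f} arcsec")
# If your image scale is much finer than about half the Dawes limit,
# a Barlow is just magnifying blur rather than adding detail.
```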
Camera
While I have used my DSLR for planetary imaging, I wouldn't necessarily recommend it to get the best images. For one, recording a video on your DSLR can downsample your resolution. For example, my Nikon D5300 has a 6,000x4,000 pixel chip. However, it only shoots video in HD - 1920x1080 pixels. So I'm automatically losing data and detail.
Instead, many cheap CCD cameras that are frequently used for guiding work great as planetary imagers. I borrowed a ZWO ASI120MM from Miqaela, which you can get for $150-$200. It has a resolution of only 1280x960 pixels, but - partly because it is monochrome, and partly for reasons I don't quite understand yet - my planetary pictures suddenly came out much more awesome, even when the seeing was average and not great.
With a monochrome camera, every pixel counts, as opposed to a color camera, where you have to use four pixels in order to get one pixel of image. This is because of Bayer filters. Quoting myself from this post: in short, a Bayer filter is what makes a color camera able to shoot in color. It's a filter that lies over the top of the pixels and has a color pattern, typically RGGB. In other words, it takes four pixels to make one image pixel - one red, two greens, and a blue. (The green is duplicated because it is the wavelength that your eye is most sensitive to.)
So a color camera cannot resolve detail as fine as a monochrome camera can. In order to get color images from a monochrome camera, you'll need a set of RGB filters, and I recommend putting those into a filter wheel for ease of changing them between video captures.
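To make the four-pixels-per-color-pixel point concrete, here's a toy Python sketch of how an RGGB mosaic collapses into color pixels. (Real debayering interpolates instead of binning, but the resolution tradeoff is the same.)

```python
import numpy as np

# Toy illustration of the RGGB Bayer tradeoff: a raw mosaic of H x W photosites
# yields only (H/2) x (W/2) full-color "superpixels" if you don't interpolate.
raw = np.random.randint(0, 256, size=(960, 1280))  # fake raw sensor data

r  = raw[0::2, 0::2]                 # red photosites
g1 = raw[0::2, 1::2]                 # first green
g2 = raw[1::2, 0::2]                 # second green
b  = raw[1::2, 1::2]                 # blue photosites

color = np.dstack([r, (g1 + g2) / 2, b])   # one RGB pixel per 2x2 block
print(raw.shape, "->", color.shape)        # (960, 1280) -> (480, 640, 3)
# A monochrome sensor of the same size keeps the full 960x1280 resolution per filter.
```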
Observing Conditions
When observing or imaging DSOs, you care about having very dark skies, and slightly less about the seeing conditions ("seeing" being a qualitative measure of how "messy" the atmosphere is - humidity, convection currents, etc., not including clouds and dust). Planetary imaging is the opposite - you can image a planet under a bright Moon, or in the middle of a city, and you won't see that light pollution. Atmospheric conditions, however, dictate how well your image is going to come out, so you don't want to shoot through shimmering air currents rising from rooftops, or after a rainstorm, or in other conditions that cause waviness in the air.
Data Acquisition
DSLR
If all you have is a DSLR, you should still give it a try. Instead of taking single frames like you would for a DSO, you will want to shoot a video. This is because the image processing approach we will use - known as "lucky shot imaging" - takes many frames, finds the ones where the air happened to have a clear moment, and prioritizes those in its averaging algorithm. If you shoot at 30 fps, your video only needs to be a few minutes long - typically, you don't need more than about 3,000 frames.
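If you're curious what's happening under the hood, here's a toy Python sketch of the lucky-imaging idea - my own simplification, not RegiStax's actual algorithm. The filename and the Laplacian sharpness metric are just assumptions for illustration:

```python
import cv2
import numpy as np

# Toy lucky imaging: score each frame's sharpness, keep the best 65%, average them.
cap = cv2.VideoCapture("jupiter.avi")  # hypothetical input video
frames, scores = [], []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frames.append(gray.astype(np.float64))
    # Variance of the Laplacian is a common cheap sharpness metric:
    # blurry (bad-seeing) frames have little high-frequency content.
    scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())
cap.release()

order = np.argsort(scores)[::-1]          # best frames first
keep = order[: int(len(frames) * 0.65)]   # same "best 65%" cutoff we'll use in RegiStax
stack = np.mean([frames[i] for i in keep], axis=0)
cv2.imwrite("stacked.png", np.clip(stack, 0, 255).astype(np.uint8))
```

(Real stacking software also aligns the frames before averaging - more on that below.)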
Shooting video also gets rid of another effect - vibration. Every time your shutter closes, it shakes the telescope a little bit, which will immediately blur your images, no matter how good the atmosphere is. On my DSLRs at least, in order to shoot a video, you have to turn on Live View first, which opens up the shutter. Then you can start the video and let go of the telescope; it'll take a few seconds to stabilize, but those frames will just get thrown out anyway. Then you have hundreds of frames with no vibration from your camera shutter.
CCD
I'm pretty new to CCD imaging, so take my advice with caution.
There are many CCD capture programs out there; the one I hear about most often, and the one I now use, is called SharpCap. It's free and has a lot of great features, and the recent SharpCap 3.0 update added some great ones, like plate solving and improved polar alignment routines.
Every camera will have slightly different-looking settings controls. The main one you will want to adjust is exposure time. You want to be able to see the planet, but you don't want any part of the image to saturate. You can turn on the histogram to check this - the curve should be somewhere in the middle third of the histogram for best results. Also, keep your gain at about the middle for best quality (I haven't played with this much yet, but mine defaults to 50%, and planets are so bright that you don't need to push the gain in order to see them at all).
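SharpCap's histogram does this for you, but the check itself is simple arithmetic. Here's a hedged Python sketch of what "don't saturate" means numerically (the bit depth and thresholds are my own rules of thumb):

```python
import numpy as np

def exposure_report(frame, bit_depth=8):
    # frame: a single captured frame as a numpy array (however you grab it)
    full_well = 2**bit_depth - 1
    saturated = np.mean(frame >= full_well)        # fraction of clipped pixels
    peak = np.percentile(frame, 99.9) / full_well  # where the bright end sits
    print(f"Saturated pixels: {saturated:.4%}, "
          f"99.9th percentile at {peak:.0%} of full scale")
    # Aim for zero saturated pixels, with the planet's peak landing around the
    # middle third of the range - then tune exposure rather than gain.
```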
I recommend shooting the video in AVI format, since it's more common. Again, you only need a few minutes - Jupiter, for instance, spins fast, and you can get blurring if you go for too long. But remember that your CCD will probably have a lower frame rate, unless you have a USB 3.0 one plugged into a USB 3.0 port using a USB 3.0 cable. My QHY5, for instance, only shoots 2-3 frames per second on USB 2.0.
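To see why a few minutes is the limit on Jupiter, here's a rough back-of-the-envelope calculation in Python. The rotation period is well known; the apparent diameter is a typical value, not exact ephemeris data:

```python
import math

# Rough estimate of rotational smearing at the center of Jupiter's disk.
rotation_period_s = 9.93 * 3600      # Jupiter rotates in ~9.93 hours
disk_diameter_arcsec = 45.0          # typical apparent diameter near opposition
capture_s = 180                      # a 3-minute video

angle_rad = 2 * math.pi * capture_s / rotation_period_s
smear = (disk_diameter_arcsec / 2) * angle_rad
print(f"Center-of-disk smear over {capture_s}s: {smear:.2f} arcsec")
# ~0.7 arcsec over 3 minutes - already comparable to good seeing, so going
# much longer starts to blur real detail instead of averaging out the air.
```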
Data Pre-Processing
Video Conversion
I have found that RegiStax can be a little finicky about the types of AVI video it will accept. Not every AVI file is created equal, and I haven't yet figured out how to tell the difference.
For my CCD video, I've been using VirtualDub to convert my AVIs into a flavor of AVI that RegiStax likes. It's another piece of freeware, and one I also use to make my timelapse videos (more on that later). You just import the AVI file, then go to File -> Save as AVI, and RegiStax will accept the result no problem. (The original incompatibility might be because both of the CCDs I have used thus far for planetary imaging are kind of old.) So this is an option if RegiStax doesn't like your AVI files for some reason.
If you use a DSLR, you will definitely need to convert the video. Nikons shoot in .MOV, and I believe Canons do too. RegiStax (and a lot of other software) doesn't like this format. But in order to convert it to AVI, I needed some additional codecs: I installed the Standard K-Lite codec pack, and I also needed to install FFMpeg for VirtualDub. You can find instructions here. This didn't work for me at first (it started working only recently, for reasons unknown), so I used to use VirtualDub to break up my .MOV files into thousands of still frames - but those still frames were low-quality JPEG files, which is probably why most of my planetary images from before about a month ago are not very good.
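If you have ffmpeg installed, you could probably also do the conversion directly. Here's a hedged sketch calling it from Python - the uncompressed-video codec choice is my guess at what RegiStax is happy with, not something I've verified against every camera's files:

```python
import subprocess

# Hypothetical alternative to the VirtualDub route: rewrap a DSLR .MOV as an
# uncompressed AVI with ffmpeg. Filenames are placeholders.
subprocess.run([
    "ffmpeg",
    "-i", "jupiter.mov",   # input video from the DSLR
    "-c:v", "rawvideo",    # uncompressed frames - no lossy JPEG-style artifacts
    "-pix_fmt", "yuv420p", # a widely supported pixel format
    "jupiter.avi",
], check=True)
```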
Data Processing in RegiStax
There are probably a number of planetary stacking programs out there, but the two most commonly used that I know of are RegiStax and AutoStakkert. I haven't used AutoStakkert myself yet, so I'm just going to show a RegiStax tutorial instead. I'll do it for RGB CCD video, since DSLR video works the same way, except you only have to do it once (since it's color) instead of three or four times for monochrome RGB/LRGB.
Align
First, import your R video file by going to Select up at the top and choosing your AVI file. It should import fairly quickly. Then, move the slider on the bottom of the screen until you find a frame that caught a moment of good seeing, where the image looks nice and sharp. You can keep pretty much all of the default settings here, although I would change the "Limit Setup" option to "Best Frames" and put 65% in. Your changes will save for the next time you open RegiStax.
Next, click the "Set Alignpoints" button, which will have a green bar underneath it. RegiStax will automatically select some points on which to base its registration algorithm. If the planet is small in your image, you may only get two or three points. If you are processing the Moon, you may get a lot. You can add your own by clicking on the picture, or delete points by right-clicking on the point. This is helpful for if you have specs of dust in your image that RegiStax put an alignpoint on.
Next, click "Align." Now you'll wait while RegiStax registers all of the frames in your video. Since there is most likely some small movements due to tracking errors in your video, this is very important, otherwise your final image would have the planet all of the place! Also, this is the other reason you don't want more than a couple thousand frames - RegiStax will have a very hard time with large numbers of frames.
Limit
Next, click on the "Show Registrationgraph" button near the top. It will bring up a graph. By this point, RegiStax has scored and sorted all of your frames by quality, with highest quality on the left, and lowest on the right. The red line shows the quality of that frame, and the green line shows the difference in position from the reference frame.
If there are any tall lines above or below the rest, click on the "Show Framelist" button, and move the slider on the bottom of the screen until the blue bar on the registrationgraph is sitting almost on top of the outlying point. Uncheck the boxes around that point. (When you click on a frame shown in the box, it will move the blue bar to where that frame sits on the graph - use that as a reference.)
Next, close the Framelist box, and then move the slider until the blue line on the registrationgraph is as far down in quality as you're willing to go. For good data like this, I like to stay above the first or second line down from the top. Then, close the registrationgraph, and click "Limit."
Stack
The next screen has some additional options, none of which I have really messed with. Feel free to mess with them sometime. Click "Stack."
Now we wait for RegiStax to stack the images. RegiStax works by applying some statistics to your frames - if the planet changes from one frame to the next, it's due to atmosphere, so it keeps what is common between the frames, more or less. I haven't yet found a good explanation of exactly how it does this.
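I haven't found a good explanation either, but the general flavor is robust averaging. This toy Python sketch shows one such statistic, a sigma-clipped mean - a stand-in for whatever RegiStax actually does internally, not its real algorithm:

```python
import numpy as np

def sigma_clipped_mean(frames, sigma=2.5):
    # frames: (N, H, W) array of *aligned* frames. Pixel values that deviate
    # strongly from the per-pixel mean (atmospheric excursions, hot pixels)
    # get rejected, so what's common between the frames survives.
    stack = np.asarray(frames, dtype=np.float64)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0) + 1e-9               # avoid dividing by zero
    mask = np.abs(stack - mean) <= sigma * std   # keep only well-behaved samples
    return np.where(mask, stack, 0).sum(axis=0) / mask.sum(axis=0)
```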
Sharpen & Process
Once it's done, you will get a rather fuzzy and unsatisfying image. But we're not done yet! Click on the "Wavelets" tab.
Now, I might be a physicist with a master's degree and a PhD in my future, but wavelets are still magic. So just move the happy sliders - you don't have to think too much about how they work!
I recommend using "Default" instead of "Gaussian" mode, but everyone has their own preferences, and one mode my sometimes work better in some situations than the other. I've had better results with the Default mode so far. You will see some sliders.
In a nutshell, moving a slider from left to right increases the sharpening at that scale, where layer 1 affects the finest details and layer 6 the coarsest. My word of caution: do not go crazy with the sharpening! It can introduce all kinds of weird effects and noise that will ultimately make your image not that great. So go easy, and wiggle the sliders around until you get something you like. You can see the result immediately for a section of your image. If you click on the "Show Processing Area" button near the top, it will show you the box it's doing the sample processing on, and you can click anywhere on the image to move this box. To see how it affects the whole image, click the "Do All" button highlighted in green at the top.
This is generally what my sliders tend to look like:
Results may vary!
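For the terminally curious, wavelet sharpening is loosely related to multi-scale unsharp masking. This toy Python sketch is my hand-wavy stand-in for what the sliders do, not RegiStax's actual wavelet transform:

```python
import cv2
import numpy as np

def multiscale_sharpen(img, gains=(1.2, 1.1, 1.05)):
    # Each "layer" is the detail lost when blurring at a given scale; boosting
    # it sharpens features of roughly that size. Small kernels ~ fine layers,
    # large kernels ~ coarse layers. Gains near 1.0 are gentle; cranking them
    # up produces the same ugly noise the wavelet sliders do.
    result = img.astype(np.float64)
    for gain, ksize in zip(gains, (3, 9, 27)):   # fine -> coarse scales
        blurred = cv2.GaussianBlur(result, (ksize, ksize), 0)
        detail = result - blurred
        result = blurred + gain * detail
    return np.clip(result, 0, 255).astype(np.uint8)  # assumes 8-bit input
```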
Now, there are some other processing options available in RegiStax too, over on the right side of the screen - brightness, contrast, gamma, etc. Go ahead and play with these a bit too if you want.
When you are satisfied, click the "Do All" button, and then the red "Save Image" button. If you are doing RGB channel processing, save your file with "_R" "_G" or "_B" at the end for whichever channel you just processed, and save it in FIT format. You can call it whatever you want, just make sure it ends in _R, _G, or _B. This will let RegiStax know which is which in the final image stacking, and it will color them accordingly.
Wash, Rinse, Repeat for RGB/LRGB
Now, re-do this whole process for your green and blue channel data, and the L channel if you shot that as well. Having a luminance channel can help to sharpen your image, especially if it's done with an IR-pass filter, or so I've read (although I have not yet tried this - I own an IR-pass filter, but my filter wheel only has 3 slots). Save these FIT files out with their proper designations.
Final RGB processing
Once you have all of your _R, _G, and _B FIT files, open a new instance of RegiStax, and import those three files. You may need to change the file type to FIT in the drop-down bar to see them.
RegiStax may give you a few prompts, like "Stretch intensity levels?" (I usually say yes to this one) and "Process in B/W?" (Say no to this one for RGB processing in this last step).
This process goes like doing the video stacks, but now we only have three frames. I'd use red as the reference frame, since it's probably the best quality. Click Set Alignpoints and then Align, and then make sure the slider is all the way over to the right before hitting Limit. Leave the "Colour" box checked, but not the "LRGB" one. Hit Stack.
Sometimes, the alignment won't quite work, and your red, green, and blue images won't quite be on top of each other. Not to worry though - RegiStax has a tool for this (as long as the channels are not rotated with respect to each other). Click on the Wavelets tab, and then click RGB Align. A green box will appear - drag it and change its size to cover the planet images. Then click Estimate, and it will calculate where it needs to move each channel in order for the images to be on top of one another. If this doesn't work, you may need to save out your R, G, and B final stacked images as TIFFs and manually align them in Photoshop instead. I'm not going to do a tutorial on that, but there are other places online that will walk you through it, including assigning color to those channels.
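If you do end up combining the channels by hand, the core operation is small. Here's a sketch using astropy, assuming your FITs are plain 2D images - the filenames and shift values are placeholders you'd find by eye or by cross-correlation:

```python
import numpy as np
from astropy.io import fits

# Manual fallback for combining stacked channels when RGB Align fails.
def load_and_shift(path, dy=0, dx=0):
    data = fits.getdata(path).astype(np.float64)
    # np.roll shifts the whole image by whole pixels in y and x
    return np.roll(np.roll(data, dy, axis=0), dx, axis=1)

r = load_and_shift("jupiter_R.fit")
g = load_and_shift("jupiter_G.fit", dy=1, dx=-2)   # example offsets
b = load_and_shift("jupiter_B.fit", dy=2, dx=1)

rgb = np.dstack([c / c.max() for c in (r, g, b)])  # normalize and stack to HxWx3
# Save as a 16-bit TIFF/PNG with your favorite imaging library for Photoshop.
```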
At the end, you should have a nice image! (The one below is from another data set - the ones I have screenshots for above ended up with a red channel that was rotated with respect to the others, and I was unable to fix it in RegiStax. I might go deal with it in Photoshop later, but my blue channel didn't turn out very well anyway for that one).