Monday, October 23, 2017

A Super-Duper Primer of Astrophotography Part 11 - Timelapse Photography

Along with the many other things you can do with astrophotography without a telescope (see Part 10 of this series for some ideas), you can make super cool timelapse videos!

One of my favorite side-activities when at a star party or even out at the observatory by myself is timelapse.  You can create videos of the Milky Way rising behind an observatory, activity on the observing field at the star party, your telescope tracking across the sky, planes zipping through the night, clouds appearing and disappearing, a thunderstorm rolling through – all kinds of things.  Timelapse is also my go-to for cloudy nights at the observatory so I can at least get something for my drive out there!  It’s also rather easy to do with a DSLR and a tripod.

This method is also great for imaging meteor showers - you have a much higher chance of catching a meteor if you're imaging continuously, and then you get to make a neat video at the end too.  Two for one deal!

Compose Your Shot

Figure out what you want in your foreground and your background.  Clouds are more interesting to watch than an empty blue daylight sky; the Milky Way rising is always pretty neat, or some other constellation; telescopes moving or people moving around the observing field also make great timelapses.  I did one at the Green Bank Star Quest of me and my minion Miqaela taking down our gear at the end of the weekend, with clouds rolling across the sky and the massive 300-ft telescope slewing around (watch it here).  I did another one at the Texas Star Party of the Milky Way rising behind the upper observing field, which was stuffed to the gills with people and telescopes (check that out here).  I’ve also done several of my telescope tracking across the sky and changing targets.  Daytime timelapse is fun too – clouds rolling over hills, thunderstorms, weather fronts, traffic, people, all kinds of stuff.   Again, you are only limited by your imagination!


Focusing in daytime is easy - use auto-focus, and then switch to manual focus for the timelapse sequence (and don't touch the focuser).  At night, however, it's a little trickier, since your auto-focus won't work.  See Part 10 of this series for a section on focusing at night.

Taking Images

Take several test photos to decide on your ISO, shutter speed, and f-ratio.  If you are imaging during sunset or twilight, you may want to start out bright so that your images don’t get too dark too quickly.  Once you have decided, run your camera in manual mode, turn off everything auto, including D-lighting, set your white balance to something other than auto, and set your focus to manual.

My intervalometer only goes up to 399 frames, and my camera’s interval timer goes up to 999.  Depending on your shutter speed, this may or may not be enough.  During the day, when I’m using short exposures, 399 frames will only get me 16 seconds of video at 24 fps, and any interruption in your sequence, like restarting your interval timer, will show up as a jump in your final video.  999 frames will get me 41 seconds at 24 fps, and I’ve found that spending 30 seconds to a minute on a given scene seems to be a good length of time.  For longer timelapses, I’ll usually use digiCamControl on my tablet to take an arbitrary number of photos (usually I’ll set it for something like 3 hours and just stop it when I need to).

For long exposures at night, it depends on how long your exposure is.  The longest I can go without getting star trails is about 15 seconds at 18mm of focal length, but with timelapse, you can get away with 30 seconds, since the images will be blurred together anyway.  However, the longer your exposure time, the fewer images you’ll be getting, so your final video will be shorter.  I imaged the Milky Way rising over the upper observing field at the Texas Star Party for about an hour and a half, and I only got 180 images, which is only 18 seconds at 10 fps.  But I had a nice bright Milky Way.
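Since the frame count, exposure length, and playback fps together determine your final clip length, I find it handy to run the numbers before starting.  Here's a quick sketch of that arithmetic (the helper names are just illustrative, not from any camera software):

```python
# Back-of-the-envelope timelapse planning: how many stills do I need,
# and how long will I be shooting?  Helper names are just illustrative.

def frames_needed(clip_seconds, fps):
    """Stills required for a clip of clip_seconds played back at fps."""
    return int(clip_seconds * fps)

def capture_time_minutes(clip_seconds, fps, exposure_s, gap_s=1):
    """Total shooting time, assuming each frame takes the exposure plus a
    small gap for the camera to write the file."""
    return frames_needed(clip_seconds, fps) * (exposure_s + gap_s) / 60

# 30 seconds of video at 24 fps from 15-second night exposures:
frames = frames_needed(30, 24)              # 720 frames
minutes = capture_time_minutes(30, 24, 15)  # 192 minutes - over 3 hours!
print(frames, minutes)
```

This is why nighttime timelapses end up so short compared to how long you stood out in the cold taking them.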

Also, you can go ahead and take these in JPG instead of raw.

Problems You Might Run Into

Battery life is probably the number one inhibitor of timelapse.  My stock Nikon batteries only last about two hours in my D3100, which has a non-closing screen, and about four hours in my D5300, which has a closing screen.  I’ll plug into AC power when I can, but this limits where I can image.  So when my battery dies, I’ll usually just move to a different spot and start another timelapse, since I don’t like jumpy gaps in my timelapse videos.

Dew is also an issue, at least at night.  On humid nights, I’ll station my camera near my telescope, run my 2-inch dew heater strap over to the camera (I got an extension cord, but it’s a little too long, so it’s very lossy, which means I have to crank the current up quite a bit), and wrap it around the lens.  I have a dew heater controller from Thousand Oaks that can run four heater straps – one for my 11-inch, one for my guide scope, and one for my timelapse camera.  I can’t use a blow dryer on the lens because it will show up in the timelapse.  I can’t even check for dew, because that will show up too.  So the dew heater is a good solution.

Another option that astrophotographer Brent Maynard told me about at the Green Bank Star Quest that I think is very clever is to use those chemical foot warmers you can get at the grocery store or the gas station.  The foot warmers are sticky on one side so they’ll attach to your socks – they’ll also attach to the side of your lens!  He even uses them on his reflector, spacing them out around the ring, and using a stretchy band or extra tape to hold them on.  They work great.  I've also used hand warmers attached to the lens with rubber bands - you only need one since they get quite hot.  Cheap and simple solution!

Creating the Video

I mentioned the app VirtualDub in my post about planetary astrophotography as a way to convert .MOVs or .AVIs to a format of AVI that RegiStax likes.  You can also use it to make videos and animated GIFs.  There is a little pre-processing you’ll need to do to your images first, though.

First, get all of your images in time order.  Nikon does this annoying thing where it rolls over after 999 images and starts numbering again from 001.  This means that when you put them all together in the same folder, you’ll have your first 001, then 001 (1) from when it rolls over, then 001 (2) when it rolls over again…so putting them in name order does not put them in time order.  To sort them by time instead, go to the View tab in the folder, choose Details, and click on the “Date modified” column (twice if needed) so that it’s in ascending order.  Windows, unfortunately, is very slow at re-ordering large numbers of files, so be patient.

Once they are in time order, click on the first image, and then hit Ctrl + A to Select All.  Then either press your F2 key, or right-click on the first image and click “Rename.”  Type whatever name you want, and then hit Enter.  This will name all of the photos the same thing, but with a (1), (2), etc. after it.  This is necessary for VirtualDub to be able to pull them all in to make a video.  The numbers have to be sequential – if there is a gap, VirtualDub will only pull in images up to the gap, and then stop.  The first image has to be (1).  So if you delete some photos from the sequence later, you will need to repeat this step to re-number all the photos.  Again, Windows is slow at this, so give it a few seconds, especially if you have more than 500 or so images.
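If you'd rather not fight Explorer, the same sort-and-rename steps can be scripted.  This is just a sketch (the folder and base name are made up for the demo; run it on a copy of your images, since renames are hard to undo):

```python
# Sketch of the sort-and-rename steps above: order the frames by
# modification time, then rename them to a gap-free "frame (1).jpg",
# "frame (2).jpg", ... sequence that VirtualDub can pull in.
import time
import tempfile
from pathlib import Path

def rename_in_time_order(folder: Path, base: str = "frame") -> list:
    """Rename every JPG in `folder` to 'base (n).jpg' in time order."""
    images = sorted(folder.glob("*.jpg"), key=lambda p: p.stat().st_mtime)
    renamed = []
    for i, img in enumerate(images, start=1):
        target = folder / f"{base} ({i}).jpg"
        img.rename(target)
        renamed.append(target.name)
    return renamed

# Tiny demo with dummy files - Nikon-style rollover, where DSC_0999
# was actually shot *before* DSC_0001:
demo = Path(tempfile.mkdtemp())
(demo / "DSC_0999.jpg").touch()
time.sleep(0.05)                 # make sure the modification times differ
(demo / "DSC_0001.jpg").touch()

result = rename_in_time_order(demo)
print(result)  # time order, not name order
```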

Open VirtualDub, and click File -> Open Video File, and then go to the folder where your images are saved.  Make sure the “Files of Type” box says either “All types” or “Image sequence.”  Click on only the first photo in the list, which should be your (1) photo.  Click Open. This will load in all of the sequentially-numbered images in that folder.

Next, go to Video -> Frame Rate.  Here you have some flexibility – 24 fps is about the minimum for motion to appear smooth to the human eye, although I’ve found that as low as 10 fps looks pretty nice too, at least for timelapse.  You can certainly go faster, but it depends on how fast the things in your images are moving, and how long you want a particular sequence to last when the video is all compiled.  Click the “Change frame rate to” radio button, enter the fps you want, and click OK.

Next, go to Video -> Filters.  Click the “Add…” button.  Now, your DSLR takes some pretty large frames.  Mine are 6000x4000, which is greater than 4k video.  HD video is 1920x1080.  The larger your image frames are, the larger your final video file will be – like, dozens of gigabytes.  These take forever to upload to YouTube.  So, I reduce the frame size by clicking the “2:1 Reduction” filter, and clicking OK.  Later, I’ll compress it further.  Another option is to use a batch resizing program beforehand to bring them down to a smaller size.  I don’t use any of the other filters (besides MSU Deflicker - see below), although you are welcome to give them a try (and let me know if you get any helpful or cool results).  Click OK.
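For the curious, the "2:1 Reduction" step is conceptually just averaging each 2x2 block of pixels into one, halving the width and height.  Here's a toy sketch on a plain list of numbers standing in for a grayscale frame (for real JPGs you'd use a batch resizer or an imaging library, not this):

```python
# What a 2:1 reduction filter is doing conceptually: average each 2x2
# block of pixels into one, halving width and height.  A nested list of
# integers stands in for a grayscale frame here.

def reduce_2to1(frame):
    """Halve a 2D pixel grid by averaging each 2x2 block."""
    h, w = len(frame), len(frame[0])
    return [
        [
            (frame[y][x] + frame[y][x+1] + frame[y+1][x] + frame[y+1][x+1]) // 4
            for x in range(0, w - 1, 2)
        ]
        for y in range(0, h - 1, 2)
    ]

# A 4x4 "frame" becomes 2x2:
small = reduce_2to1([
    [10, 10, 20, 20],
    [10, 10, 20, 20],
    [30, 30, 40, 40],
    [30, 30, 40, 40],
])
print(small)  # [[10, 20], [30, 40]]
```

A 6000x4000 frame reduced this way becomes 3000x2000, which is still bigger than HD, but the file sizes drop fast.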

Now, go to File -> Save as AVI.  Choose where you want the video saved, and then click OK.  A small window will appear showing progress.  Uncheck the “Show input video” and “Show output video” boxes, which should make it run a little faster.  Then sit and wait.  Once it’s done, check out your video!

A word of caution here.  My homebuilt rig has an NVIDIA GTX 1070, 16 GB of RAM, and an Intel i7 processor, and it cannot play the giant video files that come out of VirtualDub smoothly.  It’s glitchy and slow.  So, your next order of business will be to reduce the file size by compressing the video.  I use an app called Any Video Free Converter to do it.  Simply drag the file in, set all of the video and audio settings to “Original” or something reasonable, and then click Convert.  Even if you set everything to Original, I think it converts to a different AVI codec or applies some other compression, and the file size comes out at about one-tenth of what it was – something like a few GB down to even hundreds of MB.  Not only will this play better on your computer, but it will upload to YouTube in an amount of time shorter than the age of the Universe.  I’ve found there isn’t much degradation in quality after the conversion, particularly since my computer screen is only 1920x1080.

Adding Audio

Your timelapse video will be even more interesting with some flowy space music or epic sci-fi anthems in the background.  First, figure out how long your video is going to be.  This is easy – just take the number of frames and divide by the fps you’re using; for instance, 1000 frames at 24 frames per second is a 41.7-second video.  Second, find a song you want to use.  I’ve got a spacey playlist I use at public outreach events that I draw from, and there is a lot of that kind of music out there, especially from sci-fi video games.  Then I use the free software Audacity to cut the song and fade it out.  Just import the song, cut it at the length of your video, select about the last 5 seconds of the clip, and go to Effect -> Fade Out.  Then go to File -> Export to export the song as an mp3.

In VirtualDub, with your image files still open, go to Audio -> Audio from another file, and select your song clip.  I usually select the option of auto-detect bit rate, and it seems to work well.  Then do the same as before – File -> Save as AVI to create your video. The audio will also be glitchy until you compress the video.


Sometimes, your timelapse will seem to flicker.  This is usually due to leaving some auto setting on, like white balance or active D-lighting.  Don’t fret, your hard work isn’t toast – there’s a great little plugin for VirtualDub called MSU Deflicker.  It does some averaging between frames to virtually eliminate this effect.  You can just take its default settings, or play with them if the flickering isn’t resolved with the defaults.


And there you have it!  It takes a bit of legwork, but nothing too technically complicated, to make some kick-ass timelapse videos of the sky.  Enjoy!

#117 - Friday, October 20, 2017 - ...try, try again!

Friday night, I was exhausted from my busy week, but all the forecasts were looking so good, and there was no moon, so I just had to answer the call!  I cleaned my blue filter with some distilled water and a cotton swab, dabbed it with a clean paper towel, and let it air dry, and there were no streaks.  I packed up the CCD camera and headed back out to the observatory, where now not only Jim, but a few others were camping as well.  I didn't get out until after dark because eating dinner is fairly important, so I got set up as quickly as I could.  I've gotten my process down pretty well now on the memorial scope.

I started with the blue filter on M33, but it looked just as bad as usual!  So I threw up my hands and decided just to re-take the luminance frames, in better focus.  I borrowed Jim's Bahtinov mask to help with focusing (see this post for more on what a Bahtinov mask is).   Those looked pretty good.  While they were going, I was deciding what to do next - take L frames on another target, or use my DSLR?  Jim mentioned that he had a hydrogen alpha filter he was trying out with his Mallincam earlier in the week, but he didn't have it pulled out Friday night because he was tired after several nights of observing all week.  I eagerly asked if I could borrow it - I've never tried narrowband imaging before, and I was anxious to try!  And the memorial scope was the perfect platform to test it on, since it usually has the superb tracking and guiding that is necessary for narrowband imaging, where you have to take very long exposures.

Quick aside - Narrowband Imaging

So what is narrowband imaging?
So let's say you have a monochrome CCD camera.  In order to get color, you put red, green, and blue filters in front.  These are considered "wide-band" filters, since they pass a large swath of wavelengths in the red, green, and blue regions.  Now, a lot of stuff in space emits at very specific wavelengths.  Take hydrogen, for example - all of the red you see in nebulae like Orion and Rosette, and those gorgeous red blotches in galaxy M33, comes from a specific energy transition of hydrogen atoms, known to astronomers as hydrogen-alpha, or H-alpha for short.  (Physicists and chemists know it as the first line of the Balmer series.)  For anyone who remembers their high school chemistry, the red light is emitted when an electron excited to the second excited state decays down to the first excited state.  
An illustration of an electron changing energy states and emitting a photon (light).  Taken shamelessly from Starizona.
The energy of the emitted light corresponds to a wavelength of 656.3 nm (nanometers), which is deep red to us humans.  The greenish-blue in many nebulae comes from the oxygen-3, or O-III, transition (500.7 nm, green).  Other important emission lines include sulfur-2, or S-II (672.4 nm, deep red), hydrogen-beta (486.1 nm, blue), and nitrogen-2, or N-II (658.4 nm, red).
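If you want to sanity-check those numbers, the photon energy at 656.3 nm matches the Bohr-model prediction for the n=3 to n=2 hydrogen transition.  A quick back-of-the-envelope calculation:

```python
# Check that the H-alpha line energy matches the n=3 -> n=2 hydrogen
# transition.  The constants are standard physical values.

h = 6.626e-34      # Planck constant, J*s
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electron-volt

wavelength = 656.3e-9                    # H-alpha, in meters
photon_energy = h * c / wavelength / eV  # energy of one photon, in eV

# Bohr-model prediction for n=3 -> n=2 (Balmer-alpha):
bohr = 13.6 * (1/4 - 1/9)                # also in eV

print(round(photon_energy, 2), round(bohr, 2))  # 1.89 1.89
```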
Spectrum of some narrowband filters, with red-green-blue filters in the background for comparison. Also taken shamelessly from Starizona.
After images are acquired at a few of these narrowband wavelengths (usually H-alpha and O-III) and combined with LRGB, you can get exquisite detail on many kinds of deep-sky objects.  Many of the Hubble images you are probably familiar with use narrowband filters - however, in order to more easily identify differences between the different gases in a given object, false colors are assigned to each narrowband channel, which is why Hubble images of objects we usually see as red (like the Bubble Nebula below) look blue instead.
Bubble Nebula with the "Hubble palette" of narrowband color assignments

Bubble Nebula, as seen by my DSLR

One of the huge advantages of narrowband imaging is totally knocking out light pollution.  Since the filters select a very narrow range of color, there is very little light pollution to enter your image.  To make up for it, though, you have to take much longer exposures since you are cutting out most of the incident light - 15 minutes or more, usually.  So it can be very time consuming.
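To make the false-color idea concrete, here's a toy sketch (not any real processing pipeline) of the common "Hubble palette" mapping: the narrowband frames are assigned to RGB channels in wavelength order - S-II to red, H-alpha to green, O-III to blue - regardless of their true colors.  Single pixel values stand in for whole frames:

```python
# Toy sketch of the Hubble-palette channel assignment: three narrowband
# intensities become one false-color RGB pixel, mapped by wavelength
# order rather than by the light's true color.

def hubble_palette(sii, ha, oiii):
    """Map three narrowband intensities (0-255) to an (R, G, B) pixel."""
    clamp = lambda v: max(0, min(255, int(v)))
    return (clamp(sii), clamp(ha), clamp(oiii))

# A pixel that's bright in H-alpha but faint in S-II and O-III ends up
# mostly green in the false-color image, even though the light was red:
pixel = hubble_palette(sii=30, ha=220, oiii=60)
print(pixel)  # (30, 220, 60)
```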

I'll do a more detailed post on narrowband imaging when I get further along in practicing it!

Continuing on...

So my first attempt at narrowband imaging was less than fruitful.  The H-alpha filter specifically was the Meade CCD Interference HA-50 Visible - A/R Coated filter, which is a little wider than some others (so I'm told - it's hard to find info on it, as it's been discontinued).   I thought I had the camera focused pretty well (again using the Bahtinov mask), but the images look kind of soft.  And even with 15-minute subframes, I didn't get very much light from the deep-sky objects I tried.  Orion wasn't quite up yet, so I went with the Veil Nebula.  I left it monochrome since I don't have RGB data on the Western Veil Nebula with the CCD camera yet.  I've seen people use H-alpha as their L frames since you can get much finer detail, so I might give that a go at some point.  Or they'll add H-alpha to their LRGB data.  Lots of things to try.
Western Veil Nebula (NGC 6960), in H-alpha, 8x900s (2 hours)

It might work better on something like the Bubble Nebula, which has more H-alpha, or Orion, which is simply brighter.  I also tried imaging the Flame and Horsehead Nebulae (the Horsehead is more H-beta, but the Flame is mostly H-alpha), but the scope started not tracking well again, and it was nearing 2 AM, which is my make-or-break point for either driving home or staying the night.  I had plans for Saturday, so I left.  But I shall have to experiment with this more.  On top of the other troubles, the sky transparency wasn't very good, which didn't help, of course.

#116 - Wednesday, October 18, 2017 - You Can't Say Uranus In Front of Middle Schoolers

Wednesday night I went to an outreach event at a local middle school, so I brought along my 8-inch SCT (Schmidt-Cassegrain telescope) on my Celestron NexStar alt-az mount.  It's not the best tracking mount (at least, not with an 8-inch, I think I need to tighten up a few screws), but it does the job at outreach events (and to think I used to image on this thing!).  All that aperture is awesome at outreach events to get jaws dropping at Saturn, which is always a crowd-pleaser.  It looks so sharp in my scope, and so large, that people are convinced I have a picture of Saturn taped to the front of the telescope!  Saturn is always the first thing we get pulled up in our scopes at star parties this time of year since you can see it soon after sunset.  Once it got darker, we were able to look at some other things.  I slewed to M31, the Andromeda Galaxy.  Now, in most scopes anywhere with light pollution, Andromeda looks like a fuzzy blob that is not terribly exciting.  But it's all about how you tell the story.
"Who wants to look at another galaxy?!"  Way more interesting than those hum-drum nearby planets you hear about all the time.
"How far away is it?" they'll ask.
"2.5 million lightyears," I'll say.  "Light travels really fast, right?  You see it as soon as you turn it on, even from far away.  But even light has a speed limit.  And as fast as it moves, the next-nearest galaxy to us is so far away that it takes 2.5 million years for that light to reach us."
The middle-school aged kids can think about it for a second, and then they realize, "So that light we're seeing started 2.5 million years ago?" and they are awe-struck.
"Yes," I'll say.  "And that's the closest galaxy to us!"
I also like to talk about how our galaxy and Andromeda will collide - sans any actual collisions, because space is big and things are far apart.  Except for the fact that the black holes will probably merge.  Then I'll show them one of my pictures of M31 so they can see what it really looks like.

After M31, I slewed to Uranus, since it was very near opposition, and thus large and bright.  Well, as large as Uranus gets.
"Who wants to come see Uranus?"
Not that pronouncing it "UR-a-nus" is much better than "ur-ANE-us."  I'd be game for "ur-ANN-us," but it'll never catch on.

Uranus did look wonderful in my 8-inch - it was obvious that it was not a star, since you could resolve the disc shape, and easily see its blue-green color.

I tried for Neptune next after a while, since it is also not too far from opposition, but the goto on that mount is not spectacular (even with Precise Goto), and I couldn't tell which of the several dots in the field-of-view it might be.

After the crowd died down, I decided to try and image it.  I knew my blue filter wasn't doing well, but I thought I'd try anyway.  But since it's quite dim, I had to take rather long frames, 10-30 seconds depending on the filter, but the mount wasn't doing well enough for it to stay put for that long, so I gave up.  One of the other club members let me try on his scope, but I was having trouble getting it centered in the field-of-view.  It was getting late, so I decided to scrap the idea.  Maybe another time, and I'll try the Orion StarShoot Color Imager IV I won at Hidden Hollow last year.  It's not as sensitive as the QHY5, and I have trouble making it talk to one of my CCD capture programs, but we'll see.  I can't use my DSLR because those longer frames mean I can't use video mode, and the shutter will cause too much vibration if I use eyepiece projection to get some magnification on it.

All in all, a fun evening!  I probably had about 50 parents and kids come look in the eyepiece.  I always love doing outreach events!

#115 - Tuesday, October 17, 2017 - If at first you don't succeed...

I haven't been out since the Hidden Hollow Star Party, and it was clear with no moon, so who cares if it was a week night??  The stars were calling...

I brought my camera bag out to the observatory to use the memorial scope, and surprise surprise, the weather was so good this week that one of the other club members, Jim, was out there in his trailer all week.  Yay company!  Of course, it's hard to enjoy company when you are having equipment issues...

I decided I wanted to play more with the CCD camera I'm borrowing (a SBIG ST-8300M for anyone keeping track), with filters this time.  But as I discovered at Hidden Hollow, while my red and green filters look fine, my blue filter is making the stars look...interesting.
M33 with my messed-up blue filter.

This is M33, the Triangulum Galaxy (it is also confusingly referred to as the Pinwheel Galaxy, same as M101, so for clarity I'll call it the Triangulum because it's in the constellation Triangulum) with the blue filter.  You can see how the stars have vague cross-shapes and spikes...for comparison, here's one with the red filter.
M33 with my red filter - much better

The blue filter is the one I tried to clean with a glasses-cleaner moist towelette, as you may recall from my account of the Hidden Hollow Star Party, since it was pretty dirty.  Maybe it left some kind of residue?  I've held it up to lights in my house and it looks fine.  I need to clean it properly with my multi-coated optics cleaner, but I can't find the bottle.

Anyway, I decided to take just red, green, and luminance data of M33 then (180-second frames, about 10 per color channel because I was short on time), but after setting up the L frames to go, I went to chat with Jim, and then came back later to discover that I had forgotten to turn the cooler back on!  As a result, the stars looked kind of soft, like they were out-of-focus.  Probably something to do with thermal expansion of the chip slightly shifting the chip out of the focal plane.  It was getting near time to go, so instead of re-taking the data, I decided just to slew to M42 instead, the Orion Nebula, since I was out late enough to see it clear the trees.  However, guiding was starting to give me issues and tracking was off too (which was breaking the guiding), so I decided it was just time to call it a night, somewhere around 12:30 AM.  I still had work the next day.  So I'll have to either figure out a way to clean the blue filter, or buy a new one, before I can make an image of M33.