Bidirectional Z-scanning with Micro-manager and an ASI Z-stage

Conventional (unidirectional) Z-stack acquisition as compared with bidirectional Z-stack acquisition. In the conventional case, the time for the Z-stage to return to its starting position (the rescan time) limits how fast stacks can be acquired. In the bidirectional case, the stage is continuously moving, first up, then down, allowing continuous image acquisition.

Long-time readers of this blog know that I’ve spent a lot of time working to make acquisition on our systems as fast as possible. Recently, Saul Kato, a new faculty member at UCSF, approached me with a request to go even faster: he wanted to image neuronal activity in C. elegans at > 5 volumes per second.

What limits the acquisition speed of multiple volumes in Micro-manager is that it acquires Z-stacks unidirectionally and then has to return to the start position at the end of each Z-stack. This return time, also known as the rescan time, can add quite a bit of overhead (> 100 ms). In part, this overhead allows time for the piezo to return to its start position (I remember working with a Micro-manager version, many years ago, that didn’t allow enough time for the return, so all Z-stacks after the first were missing the first plane or two). There is also some software overhead in this rescan time.

To eliminate this overhead, Saul and I set up bidirectional Z-scanning in Micro-manager: one Z-stack is acquired ascending, the next descending. Because there are no large stage moves, the camera can acquire continuously during the entire process, so the overall acquisition rate is much faster.

We implemented this by taking advantage of the same trick for talking to the ASI stage that I’ve used before: Micro-manager lets a script communicate directly with the stage over the already-open serial port. Saul’s script loads the ring buffer on the ASI stage with positions for both the ascending and descending Z-stacks, and configures it so that camera triggers advance the stage from one plane to the next. With this setup, you just acquire a time-lapse with as many frames as you want, and the Z-stacks are acquired automatically. You need to post-process the resulting stack to assemble the frames in the right order, but the acquisition itself is very simple.
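Saul’s script itself is a Beanshell script that talks to the stage over the serial port, but the underlying bookkeeping is easy to sketch. Here’s roughly what the ring-buffer position list and the post-acquisition reordering look like in Python (the helper names are my own, not from his script):

```python
def ring_buffer_positions(z_start, z_step, n_slices):
    """Positions for one ascending then one descending Z-stack.

    The descending pass revisits the same planes in reverse order,
    so the stage ends back at z_start, ready for the next cycle.
    """
    up = [z_start + i * z_step for i in range(n_slices)]
    return up + list(reversed(up))

def reorder_frames(frames, n_slices):
    """Re-sort a continuously acquired frame stream into Z-stacks.

    Every second stack was acquired top-down, so its frames must be
    reversed to match the ascending stacks.
    """
    stacks = [frames[i:i + n_slices] for i in range(0, len(frames), n_slices)]
    return [s if k % 2 == 0 else list(reversed(s)) for k, s in enumerate(stacks)]
```

Note that with this scheme the top and bottom planes are each visited twice in a row, once at the end of one pass and once at the start of the next, so every stack is complete.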

The script for doing this is on GitHub, as is one for turning off the bidirectional movement.

GitHub Pages

GitHub Pages is awesome. I’ve been dimly aware of it for some time, but only just tried it. It’s really simple: if you have a GitHub repo that contains a webpage, just tell GitHub to serve it as such, and it becomes a live webpage. For instance, a few mouse clicks made my FPvisualization repository visible as a live webpage. Commits pushed to the repository automatically go live on the web.

Software tools for writing image analysis code

I was recently at a small meeting at UC Berkeley that brought together engineers, computer scientists, and biologists around the theme of computational imaging, and more generally aimed to get the various groups at UCB working on similar problems talking to each other. Aside from hearing about a lot of interesting research, I learned about work on programming languages designed specifically for image analysis. The goal is to decouple the description of the image analysis problem from the details of its high-performance implementation, so that people who are not experts in computation can still write image analysis code that is fast.

I haven’t tried either of these tools yet, but they both look interesting. One is an embedded language for Python called ProxImaL that formulates operations like deblurring and denoising as constrained optimizations. The other is an embedded language for C++ called Halide, designed to make it easy to write high-performance image analysis code that can be compiled to multiple targets (CPU, GPU, etc.).

Both of these are a little beyond my current programming experience but they sound like tools that should be more widely known.

Destriping of Light Sheet data

We’ve been working on a simple, home-built light sheet system in the NIC. It’s designed for imaging cleared organs, and so uses a cylindrical lens to produce a light sheet, about the simplest illumination system you can use for such a microscope (it’s similar to the system described in [1]). Because the illumination traverses the sample, any opaque or scattering region blocks part of the illumination beam, casting shadows through the sample that show up as stripes in the resulting images.

I recently discovered a software tool for removing stripes from these images [2]. It’s not perfect – in particular, it assumes that the noise is additive, when it is really multiplicative – but it does a good job. You can download a Fiji plugin that implements it here, and you can see the results below.
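To see why the additive-vs-multiplicative distinction matters, here is a much cruder destriping approach than the plugin’s variational algorithm [2]: treat the stripes as a stationary multiplicative pattern along one axis and divide it out. This is only an illustration of the idea, not the published method:

```python
import numpy as np

def naive_destripe(img, eps=1e-6):
    """Crude multiplicative destriping: estimate a per-column stripe
    profile by averaging along the stripe direction, normalize it to
    mean 1, and divide it out of the image. NOT the variational
    algorithm of Fehrenbach et al., just a sketch of treating the
    stripes as multiplicative rather than additive."""
    profile = img.mean(axis=0)                   # stripe estimate per column
    profile = profile / (profile.mean() + eps)   # normalize to mean 1
    return img / (profile + eps)
```

A real sample has structure that varies along the stripe direction, which is exactly why the variational formulation does better than this column-average trick.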

Raw image

After destriping


  1. H. Dodt, U. Leischner, A. Schierloh, N. Jährling, C.P. Mauch, K. Deininger, J.M. Deussing, M. Eder, W. Zieglgänsberger, and K. Becker, "Ultramicroscopy: three-dimensional visualization of neuronal networks in the whole mouse brain", Nature Methods, vol. 4, pp. 331-336, 2007.
  2. J. Fehrenbach, P. Weiss, and C. Lorenzo, "Variational Algorithms to Remove Stationary Noise: Applications to Microscopy Imaging", IEEE Transactions on Image Processing, vol. 21, pp. 4420-4430, 2012.

Denoising plugin for ImageJ

There has been a lot of excitement around the use of denoising algorithms to reconstruct microscopy images acquired at very low light levels, allowing fast, long-term timelapse imaging of samples that would otherwise suffer too much photodamage. Much of this work has been done by the Sedat lab and colleagues here, so I hear a lot about it [1][2]. The algorithm they use comes from the work of Jerome Boulanger and Charles Kervrann, and apparently performs very well. However, it’s been hard for me to test because obtaining the software is relatively difficult.

Yesterday, a new ImageJ plugin for denoising was posted on the ImageJ mailing list. It’s called CANDLE-J, and a preprint describing it is here. I haven’t had a chance to try it yet, but the results reported in the preprint look promising, and it is freely available for download. Binaries for Mac and Linux are available, as is the source code. I’m guessing building it on Windows won’t be too hard.

An earlier version that runs in Matlab is also available.


  1. M. Arigovindan, J.C. Fung, D. Elnatan, V. Mennella, Y.M. Chan, M. Pollard, E. Branlund, J.W. Sedat, and D.A. Agard, "High-resolution restoration of 3D structures from widefield images with extreme low signal-to-noise-ratio", Proceedings of the National Academy of Sciences, vol. 110, pp. 17344-17349, 2013.
  2. P.M. Carlton, J. Boulanger, C. Kervrann, J. Sibarita, J. Salamero, S. Gordon-Messer, D. Bressan, J.E. Haber, S. Haase, L. Shao, L. Winoto, A. Matsuda, P. Kner, S. Uzawa, M. Gustafsson, Z. Kam, D.A. Agard, and J.W. Sedat, "Fast live simultaneous multiwavelength four-dimensional optical microscopy", Proceedings of the National Academy of Sciences, vol. 107, pp. 16016-16022, 2010.

A python script for automatically moving and deleting files

On our microscopes equipped with high-speed (100 fps) sCMOS cameras, we’ve generally set up a fast SSD RAID 0 array for streaming data to and a slower magnetic disk RAID 1 array for longer-term data storage. To simplify data management and keep the SSD from filling up, I wrote a script that moves data every night from the SSD array to the magnetic disk array. It also deletes files on the magnetic disk older than 30 days, and benchmarks the write speed of the SSD array so we can detect any slowdown. In case it’s useful to other people, I’ve posted it on GitHub.

If you use it, be careful: it will happily delete whatever directory you tell it to, so you can easily wipe out your OS if you set it up incorrectly.
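The two core operations are simple enough to sketch. This is not the actual script from the repo, just a minimal illustration of the move-then-expire logic (the function names are mine):

```python
import os
import shutil
import time
from pathlib import Path

def move_tree(src, dst):
    """Move everything from the fast SSD array to the storage array."""
    dst = Path(dst)
    dst.mkdir(parents=True, exist_ok=True)
    for item in Path(src).iterdir():
        shutil.move(str(item), str(dst / item.name))

def delete_older_than(root, days):
    """Delete files under root whose modification time is more than
    `days` days old. DANGEROUS: double-check `root` before running."""
    cutoff = time.time() - days * 86400
    for path in sorted(Path(root).rglob('*'), reverse=True):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
```

Mis-pointing `delete_older_than` at the wrong root is exactly the failure mode warned about above, which is why the real script deserves careful configuration.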

How to make easily portable videos

As long as I’ve been doing microscopy, it’s been tricky to make videos that are easily playable cross-platform. A lot of microscopy software packages support only a few video output formats, possibly with codecs that either aren’t available on the machine you want to play videos from or that produce bad compression artifacts.

To avoid these problems I currently use Handbrake to transcode videos to H.264 video in an mp4 container, which appears to be playable on just about any machine. It’s how I’ve produced all the videos on this blog. It also produces pretty good compression and nice-looking movies. In general, I first use the microscopy software to produce an uncompressed AVI file. This is huge, but avoids introducing compression artifacts. I then open this in Handbrake and transcode it to H.264 video. The default settings produce good results and converting a 10 second video only takes a few seconds. You can see some of the results in these posts.
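Handbrake also ships a command-line version, HandBrakeCLI, which is handy if you want to batch-convert from a script. Something like the following could drive it from Python; the flags are my best guess at a reasonable invocation, so check `HandBrakeCLI --help` before relying on them:

```python
import subprocess

def transcode(src, dst, run=False):
    """Build (and optionally run) a HandBrakeCLI command that
    transcodes `src` to H.264 video in an mp4 container.
    The flags are a plausible sketch: -e picks the encoder,
    -q sets constant quality."""
    cmd = ["HandBrakeCLI", "-i", src, "-o", dst, "-e", "x264", "-q", "20"]
    if run:
        subprocess.run(cmd, check=True)
    return cmd
```

Looping this over a directory of uncompressed AVIs would reproduce the workflow above without opening the GUI for each file.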

Matlab to Python conversion

As readers of this blog will have guessed, I’m a big fan of open solutions to problems, including open source software. Despite that, I’ve used Matlab for many years to develop data analysis code, but I’ve always felt a little bad about developing code that requires someone else to buy Matlab if they want to use it.

Today I was looking at Maria Kilfoil’s particle tracking microrheology code, and saw that they’ve reimplemented all their Matlab code in Python. The Python code looks auto-generated from the Matlab code, so I went looking, and indeed there are Matlab-to-Python converters. It doesn’t look like they will generate Python code that will run unmodified as a drop-in replacement for your Matlab code, but if, like me, you’ve been looking to migrate from Matlab to Python, they may well come in handy.
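Even with a converter doing the bulk of the work, the output needs hand-checking for things like Matlab’s 1-based, endpoint-inclusive indexing. A toy example of the correspondence, using NumPy:

```python
import numpy as np

# Matlab:  A = zeros(3,3); A(1,:) = 1:3; s = sum(A(:));
# Python equivalent, minding 1- vs 0-based indexing and
# Matlab's inclusive ranges:
A = np.zeros((3, 3))
A[0, :] = np.arange(1, 4)   # Matlab's 1:3 includes the endpoint; arange stops before 4
s = A.sum()                 # Matlab's sum(A(:)) sums all elements
```

Off-by-one errors from exactly this mismatch are the first thing to look for when converted code gives subtly wrong results.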

CUDA Deconvolution

We’ve recently been testing the graphics card accelerated deconvolution software from the Butte lab [1]. It’s very impressive – we can deconvolve a 1024 x 1024 x 50 slice image stack in about 8 seconds.  The test data we were using has some spherical aberration, so the resulting deconvolved images aren’t that nice and I won’t post them, but I think that’s the fault of our data and not of the software.

The data set size you can deconvolve is limited by the amount of memory on the graphics card, so the 1024 x 1024 x 50 data set fit fully into the graphics card RAM, a 1536 x 1024 x 50 data set required using some CPU RAM in order to deconvolve, and I was unable to process a 2048 x 2048 x 50 data set.
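The memory arithmetic is easy to do yourself. How many working buffers the deconvolution allocates beyond the raw data is implementation-dependent, so treat this as a rough guide only:

```python
def stack_bytes(nx, ny, nz, bytes_per_voxel=4):
    """Raw size of one image stack in bytes, assuming 32-bit voxels."""
    return nx * ny * nz * bytes_per_voxel

# One copy of a 1024 x 1024 x 50 stack at 32 bits is 200 MB; an
# iterative deconvolution also needs buffers for the PSF/OTF, the
# current estimate, and FFT workspace, so a ~2 GB card fills up fast.
mb = stack_bytes(1024, 1024, 50) / 2**20
```

Doubling the field of view in both X and Y quadruples this figure, which is consistent with the 2048 x 2048 x 50 data set not fitting at all.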

We’ve tried two different graphics cards; here is the time required to deconvolve the 1024 x 1024 x 50 data set if you are interested:

Quadro K2000: 12.5 sec
GTX 750Ti: 8.1 sec

I hope to do some more comprehensive testing and comparison of different deconvolution tools, but this one is the fastest of all the ones I’ve seen.


  1. M.A. Bruce, and M.J. Butte, "Real-time GPU-based 3D Deconvolution", Optics Express, vol. 21, p. 4766, 2013.

Position open on the Micro-Manager team

As readers of this blog know, I make extensive use of Micro-Manager to control our microscopes.  It happens that they are looking for a microscopist and programmer. Here’s the job announcement:

A Research Specialist position is open on the Micro-Manager development team. Micro-Manager is Open Source software for microscope control that is used in thousands of laboratories world-wide and has more than 50 code contributors. We are looking for a person with strong programming skills (C++, Java), who understands light microscopy (preferably has extensive experience with microscopes), can write documentation, and enjoys helping/teaching scientists work with microscopes. The Micro-Manager project is part of Ron Vale’s laboratory at UCSF and has about 2 more years of NIH funding. We plan to transition Micro-Manager to an independent (possibly non-profit) organization that will continue the core mission of Micro-Manager (to provide open source software tools for microscopists) but be financed more directly by those who benefit from its existence. The position requires a minimum of a Bachelor’s degree or equivalent (preferably in life sciences, physics, engineering or computer sciences) and demonstrable programming experience in Java and/or C++ as well as preferably one or more years experience with scientific imaging using light microscopes. In addition, the position requires excellent communication skills, great problem-solving abilities, and a detail oriented personality.

To Apply: Please submit cover letter, CV, and contact information for two professional references to

UCSF seeks candidates whose experience, teaching, research, or community service has prepared them to contribute to our commitment to diversity and excellence. UCSF is an Equal Opportunity/Affirmative Action Employer.