Marc Levoy (ex-Stanford; now at Google Research) has made a full course on photography available online. While not all of the material is relevant to microscopy, it has great sections on image formation and camera sensors, and is well worth checking out.
I recently came back from the east coast with an unwanted guest attached to me: a tick, probably a lone star tick, Amblyomma americanum. After removing it, I decided to have some fun with it – I dehydrated it in methanol, cleared it in methyl salicylate, and then imaged it on our spinning disk confocal. The movie below is stitched from four images and covers a volume about 1.8 mm on a side and 1.2 mm thick; the total image size is ~2800 × 2800 × 306 voxels. The fluorescence is endogenous autofluorescence excited at 488 and 561 nm. This is probably a nymphal tick, and it looks like the mouthparts are missing.
I’ve been working with a group at UCSF that studies neurodegenerative diseases in humans. They have access to a large number of postmortem human brains that they would very much like to image to look for markers of neurodegenerative disease. As you might imagine, this is not easy. The sections are on the order of 10 cm by 7 cm – not the sort of thing you usually image on a microscope.
Recently, we got a DS-Ri2 camera from Nikon. This is a 16 megapixel color camera with 7.7 μm pixels. Combined with a 2.5x coupling lens, we get a pixel size of 1.5 μm at the sample, which gives us diffraction-limited imaging with our 2x / 0.1 NA objective. Capturing 16 megapixels at a time makes it much faster to image a sample this large, and we were able to capture the entire image in about 15 minutes. It would have been faster except that the section was wavy, so we needed to use image-based autofocusing to correct the focus every few images. The resulting image is just under 4 gigapixels; acquisition and stitching were done in NIS-Elements, which had no problem with an image this large. The edges of the brain are cut off because we ran into the limits of the stage travel.
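As a quick sanity check, the sampling works out as follows (a sketch; the 550 nm wavelength and the use of the Rayleigh criterion are my assumptions, not from the camera specs):

```python
# Back-of-the-envelope check that the sample-plane pixel size satisfies
# Nyquist sampling for a 2x / 0.1 NA objective (assumed wavelength: 550 nm).

sensor_pixel_um = 7.7      # DS-Ri2 sensor pixel pitch
total_mag = 2.0 * 2.5      # 2x objective times 2.5x coupling lens
sample_pixel_um = sensor_pixel_um / total_mag
print(f"sample-plane pixel: {sample_pixel_um:.2f} um")  # ~1.54 um

wavelength_um = 0.550
na = 0.1
rayleigh_um = 0.61 * wavelength_um / na  # lateral resolution limit, ~3.36 um
nyquist_um = rayleigh_um / 2             # largest pixel that still samples it
print(f"Rayleigh limit: {rayleigh_um:.2f} um, Nyquist pixel: {nyquist_um:.2f} um")
# 1.54 um < 1.68 um, so the sampling is (just barely) fine
```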
I’ve uploaded the image to Gigapan and you can view it below:
I’ve finished my testing of concentrated dye solutions for flat-fielding images. As described previously (1, 2), we’re using concentrated dye solutions to collect shading correction images, following the work of Michael Model. Following his protocol, we use 100 mg/ml fluorescein, rose bengal, and acid blue 9 for correcting the FITC, Cy3, and Cy5 channels, respectively. Additionally, we’ve found that 50 mg/ml 7-diethylamino-4-methylcoumarin is a good dye for collecting shading images for the DAPI channel.
A detailed protocol for collecting the shading images is posted on the NIC wiki, but in brief we first collect a dark image with no light going to the camera, and then collect multiple images of each dye at different positions, and calculate the median of these images to eliminate any spatial nonuniformities (e.g. dust particles) in the dye itself. Example dark and flat-field images are shown below.
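The arithmetic behind this protocol can be sketched in a few lines of numpy (the variable names and the toy vignette are mine, not from the NIC protocol):

```python
import numpy as np

def build_flat(dye_stack, dark):
    """Median-combine dye images taken at different positions so that dust
    or other nonuniformities in the dye layer drop out, then dark-subtract
    and normalize to mean 1."""
    flat = np.median(dye_stack, axis=0) - dark
    return flat / flat.mean()

def correct(raw, dark, flat):
    """Apply the standard flat-field correction."""
    return (raw - dark) / flat

# Toy example: a vignetted field that the correction removes.
y, x = np.mgrid[0:64, 0:64]
shading = 1.0 - 0.4 * ((x - 32)**2 + (y - 32)**2) / (2 * 32**2)  # fake vignette
dark = np.full((64, 64), 100.0)
dye_stack = np.array([1000.0 * shading + dark for _ in range(9)])
flat = build_flat(dye_stack, dark)
sample = 500.0 * shading + dark
flat_corrected = correct(sample, dark, flat)
# flat_corrected is now spatially uniform
```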
Since my previous post on flat-field correction, I’ve become aware of two commercial sources for slides with uniform fluorescent films deposited on them: Valley Scientific and Argolight. These are more expensive than the DIY solution but more convenient. The Argolight slide also includes a number of very small features for measurement of resolution and distortion (this also makes it fairly expensive). I don’t have personal experience with either one, but they may come in handy.
I hope to have a report soon on the testing of all the flat-fielding dyes. We need to do more testing, but we have promising initial results on using acid blue 9 to calibrate the Cy5 channel.
Since my recent post on shading correction of microscopy images, I’ve become aware of two papers by Michael Model describing the use of concentrated dye solutions for shading correction and intensity calibration of microscopes. The first paper describes the testing of the solutions, while the second paper provides recipes for green, red, and far-red calibration solutions. He finds that concentrated solutions (10% w/v fluorescein, for instance) perform best, and also identifies dyes that are highly water soluble and can be prepared at these high concentrations for measurement of shading images.
In particular, he recommends fluorescein for correcting green images, rose bengal or acid fuchsin for red images, and acid blue 9 for far-red (Cy5) images; all are available from Sigma-Aldrich.
I posted previously about automated acquisition and stitching of tiled fluorescence images in Micro-Manager. Today I want to talk about how to properly flat-field correct them. In the previous post I mentioned that I have been developing tools for flat-fielding images with independent correction images in each channel. However, if you look at the linked stitched image from that post you will notice that there is still some uncorrected shading, which manifests itself as a checkerboard pattern in the final stitched image.
I suspected that this was because the correction image was not a good match to the true shading image. Normally, we measure flat-field correction images using 1 mm thick fluorescent plastic slides. Chroma gives these out at conferences, and they’re easy to use, but you might expect that a 1 mm thick fluorescent slide is not a good way to measure the correction image for a 20 μm thick tissue section. To test this, I measured correction images from one of these fluorescent plastic slides and from concentrated and dilute solutions of dye (fluorescein or rhodamine). To image the dye samples, a drop of dye was placed between a coverslip and a slide to produce a thin layer of dye. The dilute dye solutions produced poor correction images due to high variability in intensity from position to position, while the concentrated dye solutions (a spatula-full of dye dissolved in 5 mL of PBS) produced good ones. The correction images were tested by acquiring tiled images and checking the stitched result for uniformity. The results are shown below.
Regular readers of this blog will have noticed that I post about large image acquisition and stitching a lot. Partly this is due to demand from users of our facility who want to acquire images of tissue sections. In particular, there’s a lot of demand for imaging fluorescently-labeled mouse brain sections. There doesn’t seem to be a readily available slide scanner at UCSF, so I’ve been trying to come up with some easy methods for acquiring these images. I’ve been working on developing an easy workflow for doing tiled image acquisition and image stitching in Micro-Manager.
Recently, I’ve found that the Grid/Collection stitching plugin in FIJI works well for stitching images acquired in Micro-Manager. Because Micro-Manager saves its images in the OME-TIFF format, they include the coordinates that each image was acquired at. This makes stitching the images pretty easy, because the stitcher knows exactly where each image came from. Because of this, acquiring tiled images in Micro-Manager and stitching them is seamless – simply use the create grid function in Micro-Manager to acquire the images and then open and stitch them using Grid/Collection stitching. I’ve written up a full description of how to do this on the NIC Wiki, along with some details about other stitching programs we’ve looked at.
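The core idea – paste each tile at the pixel offset derived from its recorded stage position – can be sketched as follows (a toy numpy sketch, not the actual Grid/Collection algorithm, which also refines the positions by cross-correlation and blends the overlaps):

```python
import numpy as np

def mosaic(tiles, positions_px):
    """Paste tiles onto a canvas at their stage-derived pixel offsets.
    tiles: list of 2D arrays; positions_px: list of (y, x) offsets."""
    h, w = tiles[0].shape
    ys = [p[0] for p in positions_px]
    xs = [p[1] for p in positions_px]
    canvas = np.zeros((max(ys) + h, max(xs) + w), dtype=tiles[0].dtype)
    for tile, (y, x) in zip(tiles, positions_px):
        canvas[y:y + h, x:x + w] = tile  # last tile wins in overlap regions
    return canvas

# 2x2 grid of 100x100 tiles with 10 px overlap
tiles = [np.full((100, 100), i, dtype=np.uint16) for i in range(4)]
positions = [(0, 0), (0, 90), (90, 0), (90, 90)]
stitched = mosaic(tiles, positions)
print(stitched.shape)  # (190, 190)
```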
Micro-Manager also includes a plugin for flat-field correcting images, but it doesn’t allow different flat-field images for different channels. On our microscope, there is some variation in shading from channel to channel, so I’ve put together a plugin that allows a different flat-field correction image for each channel. This plugin has been submitted to Micro-Manager and should be available in future releases.
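Conceptually, the per-channel version just keys the dark and flat images by channel name (a Python sketch of the idea only, with hypothetical names; the actual Micro-Manager plugin is written in Java):

```python
import numpy as np

def correct_channel(raw, channel, darks, flats):
    """Look up this channel's own dark image and mean-normalized flat
    image, then apply the standard flat-field correction."""
    return (raw - darks[channel]) / flats[channel]

# Hypothetical per-channel correction images keyed by channel name
darks = {"DAPI": np.full((32, 32), 100.0), "FITC": np.full((32, 32), 100.0)}
flats = {"DAPI": np.ones((32, 32)), "FITC": np.ones((32, 32))}
img = correct_channel(np.full((32, 32), 600.0), "FITC", darks, flats)
# img is 500.0 everywhere
```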
Putting this all together, here’s a fluorescence image of a kidney section, composed of 180 individual 3-color images. It took 3.5 min to acquire using the high speed scanning system I’ve posted about before and 11 min to stitch on a dual quad-core Xeon with 32 GB of RAM. The flat-fielding isn’t perfect, but I hope to improve that soon.
This is stitched from 725 images, which took about ten minutes to acquire. These were overlaid, flat-fielded, and white balanced in Matlab, which took about another ten minutes, and stitched and exported in Microsoft ICE, which took about an hour. I don’t yet understand why there is a color gradient across the image. I’ll have to figure out where that comes from and fix it.
As I’ve discussed previously, I’ve been working on building a system for high speed image stitching, and I’ve described a number of methods I’ve tried for image acquisition. To get high speed mosaic scanning, I’ve tried software and hardware control of the stage, as well as rapid stage scanning and strobe illumination. To date, however, I haven’t tried very hard to speed up the stage itself. I spent a few days last week trying to make the ASI stage move as fast as possible, and it turns out you can get more than a two-fold increase in speed just by configuring the stage correctly.