Qioptiq Optem Fusion

I was googling around today, looking for low magnification lenses to interface with a 35 mm camera sensor, and I came across this interesting-looking modular optical system from Qioptiq. It’s essentially a modular low magnification microscope, with optional motorized focus and zoom, and an NA of up to 0.18.  If you use their 4x / 0.18 NA lower lens combined with a 2.5x tube lens, you get 10x magnification overall and can fill a 35 mm sensor, achieving a field of view of 2.4 x 3.6 mm. Combine that with a camera like the Nikon DS-Qi2, and you get a Nyquist-sampled, 16 megapixel image with 1.7 µm resolution. With a camera with slightly smaller pixels, you could go down to 8x overall magnification and a slightly larger field of view.
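The sampling math above is quick to check. Here is a minimal sketch in Python; the 500 nm wavelength and the Rayleigh criterion are my assumptions for reproducing the numbers, not values quoted by Qioptiq.

```python
def rayleigh_resolution_um(wavelength_um, na):
    """Lateral resolution by the Rayleigh criterion, in microns."""
    return 0.61 * wavelength_um / na

mag = 4 * 2.5                                 # 4x lower lens * 2.5x tube lens = 10x
sensor_mm = (24, 36)                          # 35 mm full-frame sensor
fov_mm = tuple(s / mag for s in sensor_mm)    # field of view at the sample
res_um = rayleigh_resolution_um(0.50, 0.18)   # ~1.7 um at 500 nm
nyquist_pixel_um = res_um / 2 * mag           # largest camera pixel that still Nyquist-samples

print(fov_mm, round(res_um, 2), round(nyquist_pixel_um, 1))
```

The required pixel size comes out around 8.5 µm, which is why the DS-Qi2 (7.3 µm pixels) comfortably Nyquist-samples at 10x, and why slightly smaller pixels would permit 8x.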

For low magnification, large field of view imaging, this system looks pretty interesting, particularly given its modular nature, the motorization options, and the ability to add fluorescence illumination.  I have no idea about pricing, and wonder how it compares to more conventional microscope options. If it’s cost-effective, though, it could be a nice option for certain types of imaging – I’m currently rebuilding a Nikon AZ100 into a low-magnification, low-resolution light sheet system (about which, more later), and we’ve gotten very nice images of cleared mouse brain with a 2x / 0.2 NA lens and an extra few-fold magnification from the zoom system.

If anyone has any experience with this system, or anything similar to it, I’d be very curious to hear about it.

Fluorescence calibration slides

Since my previous post on flat-field correction, I’ve become aware of two commercial sources for slides with uniform fluorescent films deposited on them: Valley Scientific and Argolight. These are more expensive than the DIY solution but more convenient.  The Argolight slide also includes a number of very small features for measurement of resolution and distortion (this also makes it fairly expensive). I don’t have personal experience with either one, but they may come in handy.

I hope to have a report soon on the testing of all the flat-fielding dyes.  We need to do more testing, but we have promising initial results on using Acid Blue 9 to calibrate the Cy5 channel.

How much information does your microscope transmit?

I want to revisit the subject I discussed in the very first post on this blog – how many pixels does your camera need to capture all the information transmitted by the microscope objective?  I’m revisiting this because of a paper published this summer on a clever method for acquiring high resolution, wide field of view images [1]. The method is called Fourier ptychographic microscopy, and essentially amounts to doing image stitching in the Fourier domain to reconstruct a single high resolution image from many low resolution images. This is done by acquiring low resolution transmitted light images from many angles of illumination; the different illumination angles correspond to imaging different regions of frequency space. Reassembling these regions into a single frequency domain image means that much higher resolution is obtained over the full field of view of the microscope.  The net result is they get an image that has the field of view of the 2x lens they use, but has resolution comparable to that of a 0.5 NA lens.
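The resolution gain can be summarized in one relation: illuminating at an oblique angle shifts the objective’s pupil in frequency space by the illumination NA, so tiling all the illumination angles covers frequencies out to roughly the sum of the two NAs. A sketch of that relation; the example numbers are illustrative, not the exact values from the paper.

```python
def synthetic_na(na_objective, na_illumination):
    """Approximate synthetic NA in Fourier ptychography: each
    oblique illumination angle shifts the objective's pupil by
    the illumination NA, so combining all angles extends the
    frequency support to about na_objective + na_illumination."""
    return na_objective + na_illumination

# e.g. a low-NA objective plus steep LED-array illumination
print(synthetic_na(0.08, 0.42))
```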

They quantify the combination of resolution and field of view (FOV) by the space-bandwidth product (SBP), which is a fancy way of measuring the number of pixels required to capture the full area at full resolution. Put another way, this is just the FOV divided by the pixel size required to achieve Nyquist sampling at the resolution of the image.  For example, a Nikon 100x/1.4 NA lens has a field of view of about 250 μm in diameter and a resolution of about 220 nm, requiring pixels 110 nm on a side. The area of a 250 μm diameter circle divided by the area of a square 110 nm on a side is about 4.1 million, so we need 4.1 megapixels to capture the full field of view and full resolution (this assumes a circular camera, so we’d need more if our camera was square). This measure is a nice way of quantifying the amount of information transmitted by a microscopy system; for the Fourier ptychographic microscope above, it’s in the gigapixel range.
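The space-bandwidth product calculation above can be sketched directly; the 100x/1.4 NA numbers are the ones from the text.

```python
import math

def space_bandwidth_product(fov_diameter_um, resolution_um):
    """Number of Nyquist pixels needed to cover a circular field
    of view at the stated resolution (pixel side = resolution / 2)."""
    fov_area = math.pi * (fov_diameter_um / 2) ** 2
    pixel_area = (resolution_um / 2) ** 2
    return fov_area / pixel_area

# Nikon 100x/1.4 NA example: 250 um FOV diameter, 220 nm resolution
sbp = space_bandwidth_product(250, 0.220)
print(f"{sbp / 1e6:.1f} megapixels")  # ~4.1 megapixels
```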


  1. G. Zheng, R. Horstmeyer, and C. Yang, "Wide-field, high-resolution Fourier ptychographic microscopy", Nature Photonics, vol. 7, pp. 739-745, 2013. http://dx.doi.org/10.1038/nphoton.2013.187

Full Field of View Fluorescence Performance

I finally got around to doing something I’ve wanted to do for a while: inspecting the point spread function of our new wide field of view microscope that uses the Andor Zyla camera. If you look back at some of my early posts, you can see that I’ve been wondering for a while what limits the effective field of view we can image through the microscope. Since it’s clear that a much bigger field of view is accessible from the objective than makes it to the camera, why is it so hard to access that larger field of view? One possibility that I’ve suspected is that the image quality is poor at the edges of the field of view.

To test this, I’ve measured point-spread functions (PSFs) for a Nikon Plan Apo VC 100x/1.4 objective using beads distributed across the field of view.  The PSF is an excellent way to see aberrations in your image (a colleague once compared measuring a PSF of your microscope to being naked; both are excellent at spotting imperfections that might otherwise be hidden). These images were recorded on our Andor Zyla camera, which captures nearly the full field of view of the eyepieces.  As this is a new lens, the PSF in the center of the field of view is excellent, aside from some modest spherical aberration (see below).
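A quick way to quantify such bead images is to estimate the lateral width of each spot from its intensity-weighted second moments. This is a hypothetical sketch of that check (not the analysis used for the figures below), demonstrated on a synthetic Gaussian spot:

```python
import numpy as np

def fwhm_from_moments(img):
    """Estimate the lateral FWHM of a bead image from its
    intensity-weighted second moments (a quick sanity check,
    not a full PSF fit)."""
    img = img - img.min()                      # crude background subtraction
    y, x = np.indices(img.shape)
    total = img.sum()
    cx = (img * x).sum() / total               # intensity-weighted centroid
    cy = (img * y).sum() / total
    # radial second moment is 2*sigma^2 for a symmetric 2D Gaussian
    var = (img * ((x - cx) ** 2 + (y - cy) ** 2)).sum() / total / 2
    return np.sqrt(8 * np.log(2)) * np.sqrt(var)   # FWHM = 2.355 * sigma

# synthetic bead: Gaussian spot with sigma = 2 pixels
yy, xx = np.indices((41, 41))
spot = np.exp(-((xx - 20) ** 2 + (yy - 20) ** 2) / (2 * 2.0 ** 2))
print(fwhm_from_moments(spot))  # ~4.7 pixels
```

Computing this per bead across the camera makes field-dependent blurring easy to map before looking at individual PSFs by eye.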


Z-series montage of the point-spread function of a 100 nm bead in the center of the field of view. Sections are spaced 200 nm apart.

If we look at one of the corners of the image, however, the PSF appears very different. Below is the PSF from the upper left corner of the image. Here we can see that as we go out of focus there is a pronounced elongation of the PSF. The PSF is elongated perpendicular to the vector connecting the location of the PSF to the center of the image.


The point-spread function from the upper left corner of the image. Otherwise identical to above.

We see similar aberrations elsewhere in the image – at the edges of the field of view the PSF becomes elongated. Fortunately, the aberration is only pronounced at the very edges of the field so that by reducing our image size modestly, we throw away most of the worst parts of the image. For high-resolution work on this microscope, I’m now recommending using the 2048 x 2048 ROI on the camera so that the worst aberrations are eliminated.

Basic Ray Optics for Microscope Design

As mentioned in the previous post, I’ve been working on designing a microscope to be built on an optical rail. As part of the design, I’ve needed to calculate a bunch of distances and sizes – for instance, the size of the back focal plane – that are not usually provided by the objective manufacturer, but that are easy to calculate. So that you won’t have to hunt down all the necessary formulas (most are in chapter 9 of the Handbook of Biological Confocal Microscopy), I thought I would reproduce them here.
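Two of the most frequently needed of those quantities can be sketched as follows: an infinity-corrected objective’s focal length follows from its nominal magnification and the manufacturer’s tube lens focal length (200 mm for Nikon; 180 mm for Olympus, 165 mm for Zeiss), and the back focal plane diameter follows from the focal length and NA. The 100x/1.4 NA example values are mine, chosen for illustration.

```python
def objective_focal_length_mm(magnification, tube_lens_mm=200.0):
    """Focal length of an infinity-corrected objective. Nominal
    magnification is defined relative to the manufacturer's tube
    lens (200 mm assumed here, as for Nikon)."""
    return tube_lens_mm / magnification

def bfp_diameter_mm(na, focal_length_mm):
    """Diameter of the back focal plane (back aperture) of an
    aplanatic objective: D = 2 * NA * f."""
    return 2 * na * focal_length_mm

# e.g. a Nikon 100x/1.4 NA oil objective:
f = objective_focal_length_mm(100)   # 2 mm
d = bfp_diameter_mm(1.4, f)          # 5.6 mm
print(f, d)
```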

Cameras, Magnification and Field of View, Part 2

I had the pleasure last week of demoing a Yokogawa CSU-W1 spinning disk confocal – this is the new large field-of-view Yokogawa scanhead. Andor and Technical Instruments arranged a demo for us and paired the scanhead with an Andor Zyla camera. I’ll have more to say about the demo later – it’s a pretty cool confocal – but for now I want to focus on some field of view (FOV) issues it raised.
