How much information does your microscope transmit?

I want to revisit the subject I discussed in the very first post on this blog – how many pixels does your camera need to capture all the information transmitted by the microscope objective? I'm revisiting this because of a paper published this summer on a clever method for acquiring high resolution, wide field of view images [1]. The method is called Fourier ptychographic microscopy, and essentially amounts to doing image stitching in the Fourier domain to reconstruct a single high resolution image from many low resolution images. This is done by acquiring low resolution transmitted light images at many angles of illumination; the different illumination angles correspond to imaging different regions of frequency space. Reassembling these regions into a single frequency domain image yields much higher resolution over the full field of view of the microscope. The net result is an image that has the field of view of the 2x lens they use, but resolution comparable to that of a 0.5 NA lens.
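To make the frequency-space picture concrete, here is a minimal numpy sketch of the geometry (my own illustration, not the authors' algorithm): each tilted illumination shifts the object spectrum, so the low-NA pupil passes a different circular patch of frequency space, and stitching the patches together builds a much wider synthetic aperture. In the real experiment only intensity images are measured, so the phase must be recovered iteratively from the overlap between neighboring patches; this sketch cheats by reusing the known complex spectrum.

```python
import numpy as np

# Toy sketch of the Fourier ptychography geometry (not the paper's
# iterative phase-retrieval reconstruction). Tilting the illumination
# shifts the sample's spectrum, so a low-NA pupil passes a different
# circular patch of frequency space at each illumination angle.

n = 256                                    # simulation grid (pixels)
obj = np.random.rand(n, n)                 # stand-in high-resolution object
spectrum = np.fft.fftshift(np.fft.fft2(obj))

fy, fx = np.mgrid[-n // 2:n - n // 2, -n // 2:n - n // 2]

def pupil(kx, ky, r):
    """Circular pupil of radius r pixels, centered at (kx, ky)."""
    return (fx - kx) ** 2 + (fy - ky) ** 2 <= r ** 2

r_low = 20                                 # low-NA pupil radius (pixels)
angles = [(kx, ky) for kx in range(-60, 61, 30)
                   for ky in range(-60, 61, 30)]   # 25 illumination angles

# Stitch the patches back together in frequency space: along each axis the
# synthetic aperture reaches ~80 pixels, four times one low-NA capture.
synthetic = np.zeros_like(spectrum)
for kx, ky in angles:
    capture = spectrum * pupil(kx, ky, r_low)      # one low-NA image
    synthetic = np.where(pupil(kx, ky, r_low), capture, synthetic)

high_res = np.fft.ifft2(np.fft.ifftshift(synthetic)).real
```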

They quantify the combination of resolution and field of view (FOV) by the space-bandwidth product (SBP), which is a fancy way of measuring the number of pixels required to capture the full area at full resolution. Put another way, this is just the FOV area divided by the area of a pixel small enough to achieve Nyquist sampling at the resolution of the image. For example, a Nikon 100x/1.4 NA lens has a field of view of about 250 μm in diameter and a resolution of about 220 nm, requiring pixels 110 nm on a side. The area of a 250 μm diameter circle divided by the area of a square 110 nm on a side is about 4.1 million, so we need 4.1 megapixels to capture the full field of view at full resolution (this assumes a circular camera, so we'd need more if our camera were square). This measure is a nice way of quantifying the amount of information transmitted by a microscopy system; for the Fourier ptychographic microscope above, it's in the gigapixel range.
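The arithmetic is easy to script. Here's a minimal sketch, assuming the Rayleigh criterion (0.61 λ/NA), Nyquist pixels of half the resolution, and a circular FOV of diameter field number / magnification; the same function reproduces the table below.

```python
import math

def space_bandwidth_product(mag, na, wavelength_nm=500, field_number_mm=25):
    """Pixels needed to Nyquist-sample the full field of view."""
    resolution_nm = 0.61 * wavelength_nm / na      # Rayleigh criterion
    pixel_nm = resolution_nm / 2                   # Nyquist sampling
    fov_diameter_nm = field_number_mm * 1e6 / mag  # circular FOV
    fov_area_nm2 = math.pi * (fov_diameter_nm / 2) ** 2
    return fov_area_nm2 / pixel_nm ** 2

print(space_bandwidth_product(100, 1.4) / 1e6)     # ~4.1 megapixels
```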

Here are values of the SBP for a variety of Nikon objectives:

Nikon Objectives, Field of View and Resolution

Objective (Mag/NA)   Resolution for 500 nm light (nm)   Field of view (mm; 25 mm field number)   SBP (megapixels)   Objective family
1x/0.1               3050                               25                                       211                AZ100
2x/0.2               1525                               12.5                                     211                AZ100
5x/0.5               610                                5                                        211                AZ100
16x/0.8              381                                1.56                                     52.7               CFI75
25x/1.1              277                                1                                        40.8               CFI75
2x/0.1               3050                               12.5                                     52.7               CFI60
4x/0.2               1525                               6.25                                     52.7               CFI60
10x/0.45             678                                2.5                                      42.7               CFI60
20x/0.75             407                                1.25                                     29.7               CFI60
40x/1.3              235                                0.63                                     22.3               CFI60
60x/1.4              218                                0.42                                     11.5               CFI60
100x/1.4             218                                0.25                                     4.1                CFI60

A few things jump out – first, low magnification objectives perform much better than high magnification objectives by this measure. In particular, the lenses for the AZ100 transmit enormous amounts of information. The SBP scales as NA²/M², and the magnification goes up faster than the NA for high magnification lenses. Essentially, our eyes (and most of our cameras) don't have the resolution (pixel size) to capture all the information passed by the low magnification objectives, so the extra magnification of high magnification lenses is needed to use the full resolution of a diffraction-limited lens. Second, with current cameras, we don't have nearly enough pixels to make use of the full resolution and field of view of most objectives. It's also unlikely that we will have such cameras anytime soon. For one, the pixels would need to be very small – 1.5–3 μm for the lowest magnification objectives. More problematically, getting the information off the camera would be very challenging. Imagine a 200 megapixel camera that would capture all the information from an AZ100 objective. At two bytes per pixel, each image is 400 MB, and imaging at just 10 Hz would require collecting data at 4 GB/s. As we've seen with current sCMOS cameras, these large data rates are tough to deal with.
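Both claims are quick to check (the 200 megapixel sensor and 10 Hz frame rate are the illustrative numbers from above, not a real camera):

```python
def sensor_pixel_um(mag, na, wavelength_nm=500):
    """Camera pixel size (at the sensor) needed for Nyquist sampling."""
    return 0.61 * wavelength_nm / na / 2 * mag / 1000

print(sensor_pixel_um(1, 0.1))   # ~1.5 um for the AZ100 1x/0.1
print(sensor_pixel_um(2, 0.1))   # ~3 um for the CFI60 2x/0.1

pixels, bytes_per_pixel, frame_rate_hz = 200e6, 2, 10
frame_mb = pixels * bytes_per_pixel / 1e6            # 400 MB per frame
print(frame_mb, frame_mb * frame_rate_hz / 1e3)      # 4 GB/s sustained
```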

What can we conclude from all this? Well, even with current sCMOS sensors, we're still throwing away a lot of information from our objectives. There are ways to get it back, either by stitching low resolution, large FOV images together in Fourier space (Fourier ptychographic microscopy) or by stitching high resolution, small FOV images together in real space (conventional tiling and image stitching). Finally, for very low resolution, very large field of view imaging, consider scanners. Transparency scanners can scan at 4800 or 6400 pixels per inch, corresponding to 8–10 μm resolution. This is equivalent to an NA of only 0.03 or 0.04, but you can capture 2.3 gigapixels at that resolution. For certain applications this is an excellent choice; for instance, it's how the BigBrain data was captured [2].
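The scanner figures follow from the same Nyquist logic run in reverse (the Rayleigh form and 500 nm light are my assumptions):

```python
def scanner_resolution_um(ppi):
    """Resolution at two scan pixels per resolved element (Nyquist)."""
    return 2 * 25400 / ppi          # 25,400 um per inch

def equivalent_na(resolution_um, wavelength_nm=500):
    """NA whose Rayleigh limit matches a given resolution."""
    return 0.61 * wavelength_nm / (resolution_um * 1000)

for ppi in (4800, 6400):
    r_um = scanner_resolution_um(ppi)
    print(ppi, round(r_um, 1), round(equivalent_na(r_um), 3))
# 4800 ppi -> 10.6 um, NA 0.029; 6400 ppi -> 7.9 um, NA 0.038
```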

References

  1. G. Zheng, R. Horstmeyer, and C. Yang, "Wide-field, high-resolution Fourier ptychographic microscopy", Nature Photonics, vol. 7, pp. 739-745, 2013. http://dx.doi.org/10.1038/nphoton.2013.187
  2. K. Amunts, C. Lepage, L. Borgeat, H. Mohlberg, T. Dickscheid, M. Rousseau, S. Bludau, P. Bazin, L.B. Lewis, A. Oros-Peusquens, N.J. Shah, T. Lippert, K. Zilles, and A.C. Evans, "BigBrain: An Ultrahigh-Resolution 3D Human Brain Model", Science, vol. 340, pp. 1472-1475, 2013. http://dx.doi.org/10.1126/science.1235381
