For my inaugural post, I want to talk about something I’ve been thinking about a lot recently – how to capture the maximum amount of information from your microscope. A user came to me recently wanting to maximize the field of view he could acquire at high resolution from the microscope – he was doing an image-based screen and wanted to maximize the number of cells he could capture in one field of view. I immediately realized that our standard 1.4-megapixel, ICX285-based cameras weren’t going to cut it – this was a job for an sCMOS camera. Or so I thought.

Then I started thinking more about the problem. For his application, he didn’t need high resolution, so we were talking about imaging at 10 or 20x. When I started doing the math for the pixel size you need to acquire a diffraction-limited image from a 10x / 0.45 objective, I realized that our standard ICX285 cameras that are diffraction limited with a 100x / 1.4 oil lens aren’t diffraction limited for a 10x / 0.45 objective. Going from a 100x oil lens to a 10x air lens reduces the magnification by 10-fold, but the NA, and hence resolution, only drops by about 3-fold. So you either need a 3X magnifier between your scope and your camera, or you need 3-fold smaller pixels.
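To make that scaling concrete, here’s a quick sketch in Python. I’m assuming the Rayleigh criterion (d = 0.61λ/NA) and a 500 nm emission wavelength; the key point – that the resolution ratio depends only on the NA ratio, not the magnification – holds regardless of those choices:

```python
# Rayleigh resolution at the sample: d = 0.61 * wavelength / NA.
# Magnification does not appear -- only NA sets the resolution.
WAVELENGTH_NM = 500  # assumed green-ish emission

def rayleigh_nm(na, wavelength=WAVELENGTH_NM):
    """Diffraction-limited resolution at the sample, in nm."""
    return 0.61 * wavelength / na

d_oil = rayleigh_nm(1.4)    # 100x/1.4 oil objective
d_air = rayleigh_nm(0.45)   # 10x/0.45 air objective

print(f"100x/1.4: {d_oil:.0f} nm; 10x/0.45: {d_air:.0f} nm")
print(f"Resolution drops {d_air / d_oil:.1f}-fold, not 10-fold")
```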

OK, so all the imaging we’ve done over the years with the 10x objective turns out not to be diffraction limited, and we need a camera with about 3 μm pixels if we want to be diffraction limited. How many do we need? It turns out the side port of a Nikon Ti has a field of view of 18 mm. The eyepieces and the bottom port have a bit larger field of view, 22 mm, but since I’ve only ever seen one Ti with a bottom port, I’ll stick with the side port numbers. If we want to truly maximize the field of view, we will want a camera that’s 18 mm on a side. This will have black spaces in the corners, however, because the field of view is circular. If we want to have a camera that doesn’t have any black spaces, say, for tiled acquisition, we can inscribe a square camera in the 18 mm field of view. This gives a camera that’s 12.73 mm on a side, but we only capture 2/π = ~64% of the field of view.
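The inscribed-square numbers fall straight out of the geometry – a minimal check:

```python
import math

FOV_DIAMETER_MM = 18.0  # Nikon Ti side port field of view

# Largest square inscribed in the circular field: side = diameter / sqrt(2)
side_mm = FOV_DIAMETER_MM / math.sqrt(2)

# Fraction of the circle's area the square captures: exactly 2 / pi
fraction = side_mm**2 / (math.pi * (FOV_DIAMETER_MM / 2) ** 2)

print(f"Inscribed square: {side_mm:.2f} mm on a side")
print(f"Captured fraction: {fraction:.0%} of the field of view")
```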

Putting this all together, for any given objective, I can calculate the pixel size I need to achieve Nyquist sampling and how many of those pixels it will take to fill the field of view (FOV). You can see a spreadsheet with those numbers here. For the 100x/1.4 objective, our standard ICX285 chip does an OK job. It only captures 37% of the FOV captured by a 12.73 x 12.73 mm camera, but the pixels are well under Nyquist, ensuring oversampling. For a 60x lens, the 6.45 μm pixels of the ICX285 are right at Nyquist, but to actually capture the (inscribed square) FOV would require a 3.8 megapixel chip. When you look at lower magnification objectives, the answers are even more surprising. For that 10x / 0.45 objective, we need a camera with 3.4 μm (or smaller) pixels, and for a 12.73 x 12.73 mm camera, you’d need 14 million of them. So now, this doesn’t look like a job for an sCMOS camera – those pixels are too big, and there aren’t nearly enough of them.
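These numbers can be reproduced with a short script. This is my own sketch, not the linked spreadsheet – I’m assuming the Rayleigh criterion, a 500 nm wavelength, and an NA of 0.75 for the 20x lens, which together reproduce the figures quoted above:

```python
import math

WAVELENGTH_NM = 500                 # assumed emission wavelength
SQUARE_SIDE_MM = 18 / math.sqrt(2)  # ~12.73 mm inscribed square

def nyquist_pixel_um(mag, na):
    """Camera pixel size (um) for Nyquist sampling: the Rayleigh
    resolution projected through the objective, divided by two."""
    d_sample_nm = 0.61 * WAVELENGTH_NM / na
    return d_sample_nm * mag / 2 / 1000

def megapixels_to_fill(pixel_um):
    """Pixels needed to tile the inscribed-square field of view."""
    n_per_side = SQUARE_SIDE_MM * 1000 / pixel_um
    return n_per_side**2 / 1e6

for mag, na in [(10, 0.45), (20, 0.75), (60, 1.4), (100, 1.4)]:
    p = nyquist_pixel_um(mag, na)
    print(f"{mag}x/{na}: {p:.2f} um pixels, "
          f"{megapixels_to_fill(p):.1f} megapixels to fill the FOV")
```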

Fortunately, there are some CCDs that, if not quite what we want for this project, are getting close. For example, the new Sony ICX814: it has a 16 mm diagonal and 9.14 million 3.69 μm pixels. The pixels are still a bit big for our 10x objective, but this is very nearly perfect for our 20x objective. As far as I know, no one’s put this into a scientific camera yet, but it looks very promising. However, Raptor Photonics has recently released the Kingfisher V, a camera based on the related ICX694 chip, which promises 6 million 4.5 μm pixels and some other very impressive specs, like 1.5 e- read noise. I’m supposed to get one to try soon, and I’m eagerly awaiting it.

One final thought. If you look at the spreadsheet, you’ll see that the desired pixel sizes cluster around 3–4 μm for the low magnification objectives and around 6–7 μm for the high magnification objectives. This means that a camera with 3 μm pixels, which would give Nyquist sampling for low mag objectives, could be binned 2×2 and would be a pretty good match for the high mag objectives. So a 14 megapixel, 3 μm pixel size camera would pretty much capture all the information there is to capture in the field of view and could be operated with ideal pixel sizes for both low magnification and high magnification objectives. Hopefully someone will make one soon….
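As a sanity check on the binning idea (again my assumptions: Rayleigh criterion, 500 nm):

```python
PIXEL_UM = 3.0            # hypothetical small-pixel camera
binned_um = 2 * PIXEL_UM  # 2x2 binning doubles the effective pitch

# Nyquist-required pixel for a 60x/1.4 objective (Rayleigh, 500 nm):
# 0.61 * 500 nm / 1.4, projected through 60x, halved, converted to um
nyquist_60x_um = 0.61 * 500 / 1.4 * 60 / 2 / 1000

print(f"Binned pixel: {binned_um:.1f} um")
print(f"60x/1.4 Nyquist pixel: {nyquist_60x_um:.2f} um")
```

The 6 μm binned pixel sits just under the ~6.5 μm Nyquist requirement for a 60x/1.4 objective, so 2×2 binning would still slightly oversample rather than undersample.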

Pingback: Camera sizes | Kurt's Microscopy Blog

Pingback: Cameras, Magnification and Field of View, Part 2 | Kurt's Microscopy Blog

Dear Kurt,

thank you for the very useful blog. The new DSQi2 camera using a 2.5× mag C-mount on the side port seems to fulfill the sampling criteria for most lenses.

May I ask your opinion here?

Thank you, Jens

I was not previously aware of that camera but it seems like it would perform well, assuming the 2.5x coupler can fill the large chip size (36 x 24 mm!).

Thanks for explaining things so clearly.

When you’re working out the desired pixel size, wouldn’t it be more accurate to calculate it from the largest square which will fit in the FOV, rather than the largest circle?

It depends on whether you want to capture all the information from the microscope and have black corners on the image, or fill the camera, and throw away information from the microscope. I calculate both on the spreadsheet linked above.

This blog has been very instructive for me trying to learn more about microscopes and cameras.

I am trying to put together a microscope for calcium imaging and bright-field, and I am really trying to fit a 1mm worm into a 20x field of view so I can do automated tracking and centering.

Do you know what the FOVs of different microscopes are? They’re not really easy to find and compare. It looks like the Leica DMi8 advertises a new, larger 19 mm FOV for larger-format sCMOS cameras. You say above that the Nikon is 18 mm, but my understanding is that Zeiss offers 23 mm at all ports (except the tri-noc). Do you think the trend will be toward larger camera chips driving larger FOVs?

You can definitely capture more than an 18 mm FOV on the Nikon scope (see http://nic.ucsf.edu/blog/?p=108) although the image quality at the edges of the FOV may be degraded. I don’t know if there will be a trend towards increasing microscope FOVs – it would be nice, but demands more correction of the objectives to image a large FOV without aberration at the edges.