Shading correction of fluorescence images

I posted previously about automated acquisition and stitching of tiled fluorescence images in Micro-Manager. Today I want to talk about how to properly flat-field correct them. In the previous post I mentioned that I have been developing tools for flat-fielding images with independent correction images in each channel. However, if you look at the stitched image linked from the previous post, you will notice that there is still some uncorrected shading, which manifests as the checkerboard pattern in the final stitched image.

I suspected that this was because the correction image was not a good match to the true shading image. Normally, we measure flat-field correction images using 1 mm thick fluorescent plastic slides. Chroma gives these out at conferences, and they’re easy to use, but you might expect that a 1 mm thick fluorescent slide is not a good way to measure the correction image for a 20 μm thick tissue section. To test this, I measured correction images from one of these fluorescent plastic slides and from concentrated and dilute solutions of dye (fluorescein or rhodamine). To image the dye samples, a drop of dye solution was placed between a coverslip and a slide to produce a thin layer of dye. The dilute dye solutions produced poor correction images due to high variability in intensity from position to position. The concentrated dye solutions (a spatula-full of dye dissolved in 5 mL of PBS) produced good correction images. These were tested by tiled image acquisition, looking for uniformity in the stitched image. The results are shown below.

A 6×4 stitched image of a mouse kidney section with no shading correction applied.

The same mouse kidney section, flat-field corrected using shading images recorded from a 1 mm thick fluorescent plastic slide.

The same mouse kidney section, flat-field corrected using shading images acquired from concentrated solutions of fluorescein and rhodamine.

The entire kidney imaged with the best shading correction and stitched together in Fiji can be seen in this Gigapan.

As you can see, the fluorescent plastic slide performs substantially better than no correction, but the correction using the dye solution is much better. It’s a little bit more work to do the correction with the dye solution, but I’m hopeful that the corrections will be stable over time so that we don’t have to re-measure them very often. Fluorescein and rhodamine work well for acquiring correction images for the green and red channels; I’m hopeful that 7-diethylamino-4-methylcoumarin will work well for the DAPI channel. All of these are available cheaply. For the Cy5 channel I don’t know of any cheap dye, so we’ll probably just use Cy5 or Alexa 647. I didn’t measure the concentration of the dye solutions, but here’s a picture of them so you can get a sense of the concentration.


Concentrated rhodamine (left) and fluorescein (right) solutions.

The shading correction is implemented as (Image – Background) / Shading, where Background is an average image with no light reaching the camera, and Shading is the average image recorded from the dye solution (I average 30-50 images recorded at different positions to average out dust particles and other spatial fluctuations). The Shading image is itself background-subtracted. This is all implemented in a plugin for Micro-Manager, MultiChannelShading, which will be available in nightly builds soon.
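As a sketch of that pipeline in NumPy (the normalization of the shading image to unit mean is my own assumption, made so the corrected image stays on the same intensity scale; the plugin may handle scaling differently):

```python
import numpy as np

def flat_field_correct(image, background, shading_stack):
    """(Image - Background) / Shading, with the shading image built by
    averaging many raw dye images and then background-subtracting."""
    shading = shading_stack.mean(axis=0) - background
    shading = shading / shading.mean()  # unit-mean normalization (assumption)
    return (image - background) / shading
```

With 30-50 raw shading images stacked along the first axis, the average suppresses dust particles and other position-to-position fluctuations before the division.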

15 thoughts on “Shading correction of fluorescence images”

  1. Hi,

    This looks really nice. Would it be possible to make a regular ImageJ/Fiji plugin from it rather than a µmanager plugin? That would allow post-acquisition correction using saved background and flat-field images, regardless of the acquisition software used.

  2. The math is really quite simple – it just does an image subtraction and a division – so that would be trivial to write an ImageJ macro to do. The reason I do it in Micro-manager is so that I can take advantage of the tags to determine what channel the image was acquired in and look up the correct correction image for that channel. I’m not sure how to generalize that to images from other image acquisition software, but possibly you might be able to do something that would work with the OME-TIFF header tags. Unfortunately, I don’t have the time to take this on.

    You can see the source code here if you’re interested:

  3. Hey Kurt,

    I was an early adopter of Mike Model’s concentrated dye slides and I actually put up a detailed protocol on how to make them on the Spectral website:

    One question I’ve often wondered about is why the plastic slides don’t work as well as the concentrated dye solutions, and I suspect it may be due to the refractive index of plastic not being the same as that of water, and to the fact that the plastic slides are so fluorescent that even a little bit of excitation light will cause out-of-focus light to blur the illumination pattern you’re trying to measure. Your thoughts?

    • John,
      That’s a nice protocol.

      In general I’ve assumed that the reason the plastic slides don’t work as well as the dye solutions is that they are so thick that a fair amount of out of focus fluorescence is excited and adds to the in-focus fluorescence signal. We also typically use them without a coverslip present so we are adding spherical aberration as well. I don’t know if they’d perform better with a coverslip or not. The refractive index difference is also a possibility, but I’m not sure how that would affect the flat-field image, other than adding additional aberrations as you image deep into the sample.

      One thing I’m curious to test is whether the ultra-thin optical sections generated by the very concentrated dyes are necessary, or if a lower concentration produces an acceptable flat-field image when sandwiched between a slide and coverslip to make a thin layer.

  4. If the method is just background subtraction, is the image intensity corrected in an area where there is no signal?

    • The correction first subtracts off the background signal and then divides the remaining signal by the flat-field image, so that regions both with and without signal should be appropriately corrected.
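A toy numeric check of that point (all numbers are made up for illustration):

```python
# Toy numbers: 100 counts of camera background, and a normalized
# shading value of 0.8 at this pixel position.
background = 100.0
shading = 0.8

no_signal_pixel = 100.0                  # reads pure background
signal_pixel = 100.0 + 400.0 * shading   # true signal of 400, vignetted

corrected_no_signal = (no_signal_pixel - background) / shading
corrected_signal = (signal_pixel - background) / shading
# corrected_no_signal -> 0.0, corrected_signal -> 400.0
```

The background-only pixel comes out at zero and the signal pixel recovers its true value, so both kinds of region are handled by the same formula.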

  5. I like the idea of correcting uneven intensities using the concentrated dye. But I am still curious about two things:

    1. Could the requirement for dye concentration be lowered by using a higher camera gain or a longer exposure time?

    2. How can one make sure that the cover slip used to sandwich the dye solution when measuring the shading sample is level? I tried to do this by pipetting a droplet (20 µl) onto a glass slide, then laying a cover slip on top of the droplet. Looking from the side, I found that the cover slip was not level: it remained stably afloat on the liquid even when tilted by some 5 degrees. Thus, when imaged with either an inverted or an upright scope, the regions with thicker liquid will yield higher fluorescence, because a normal vector to the glass slide plane has a longer light path in the thicker regions. I am wondering how you correct for that, or whether there’s a special technique you use to lay those cover slips? Thanks!

  6. Xianrui –

    The requirement for high concentration is not for brightness but to get such high light absorption that light penetrates less than a micron into the sample. This is designed to make issues like the thickness of the dye layer irrelevant. So if all the light is absorbed in a layer thinner than the dye thickness, variations in thickness shouldn’t matter. If you were still concerned, you could make a flow cell with double-sticky tape spacers or something similar to make a uniform thickness solution, but I think even without high absorption, the thickness variation over the small field-of-view seen by the objective will be pretty negligible.
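The absorption argument can be made concrete with the Beer-Lambert law. The numbers below are illustrative assumptions, not measurements: a molar extinction coefficient of roughly 8×10⁴ M⁻¹ cm⁻¹ (in the ballpark of fluorescein at its absorption peak) and a dye concentration of 0.1 M:

```python
import math

def penetration_depth_um(epsilon, conc_M, fraction_absorbed=0.9):
    """Depth in microns at which the given fraction of the excitation
    light has been absorbed, from Beer-Lambert: T = 10**(-epsilon*c*l).

    epsilon: molar extinction coefficient in M^-1 cm^-1.
    """
    depth_cm = -math.log10(1.0 - fraction_absorbed) / (epsilon * conc_M)
    return depth_cm * 1e4  # cm -> µm

# With the assumed values, 90% of the light is absorbed within ~1.25 µm.
```

At that depth scale, thickness variations of the dye layer on the tens-of-microns scale should indeed be irrelevant.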

  7. Dear Kurt Thorn,

    I have fluorescent calibration slides from DELTA VISION. The above discussion is useful, but now I am in a dilemma whether to use them or rose bengal for calibration for propidium iodide stained nuclei. Moreover, in a book titled “Image Cytometry: Protocols for 2D and 3D Quantification in Microscopic Images” the authors write the following in the legend of Fig. 3:

    “Shading or vignetting introduces noise during object segmentation, particularly with poorly contrasted objects. Empty, so-called ‘flat field’, image showing uneven illumination with a brighter center and darker corners. Uneven illumination interferes with valid quantification in microscopical images but can be easily corrected. A ‘flat field’ image must be acquired under the same light conditions as the images that are used for quantification (test images). Algorithms can then be applied to obtain a uniform light distribution in the test images that can now be safely used for valid quantification.”

    In this paragraph I did not fully understand what the authors mean by “A ‘flat field’ image must be acquired under the same light conditions as the images that are used for quantification (test images).”

    If that includes the exposure time, my biological sample gives a good dynamic range at about 50-60 ms, while my calibration plastic slide saturates pixels after just 1 ms (Zeiss Axioscope A1 with an MRm 12-bit camera).

    When I follow the shading correction as you describe in this blog and the wiki page, I get weird results. Please help me to move forward.


  8. Are the images used for Background Subtraction and Shading Correction also montage images, or are those individual images?
    You probably have been asked this question before, but could you please make a video showing the process of acquiring the fluorescent standard and then processing the sample using ImageJ or some other software? That would be very helpful.
    Thank you
