We’ve recently been testing the GPU-accelerated deconvolution software from the Butte lab. It’s very impressive – we can deconvolve a 1024 x 1024 x 50 image stack in about 8 seconds. The test data we were using has some spherical aberration, so the resulting deconvolved images aren’t that nice and I won’t post them, but I think that’s the fault of our data and not of the software.
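To give a sense of the kind of computation involved, here is a minimal CPU sketch of Richardson–Lucy deconvolution, one common iterative scheme for this problem (I'm not claiming this is the exact algorithm the Butte lab software uses). The speed of a GPU implementation comes from running this same FFT-heavy loop on the graphics card. The function name and parameters here are my own illustration:

```python
import numpy as np

def richardson_lucy(image, psf, iterations=10):
    """Iterative Richardson-Lucy deconvolution via FFTs.

    A minimal CPU sketch for illustration only; assumes `psf` has the
    same shape as `image` and is centered in the array.
    """
    # Move the PSF center to the origin and precompute its transfer function.
    otf = np.fft.rfftn(np.fft.ifftshift(psf), s=image.shape)
    # Start from a flat estimate with the same mean intensity as the data.
    estimate = np.full(image.shape, image.mean(), dtype=np.float64)
    eps = 1e-12  # guard against division by zero
    for _ in range(iterations):
        # Blur the current estimate with the PSF.
        blurred = np.fft.irfftn(np.fft.rfftn(estimate) * otf, s=image.shape)
        ratio = image / np.maximum(blurred, eps)
        # Correlate the ratio with the PSF (conjugate OTF) and apply
        # the multiplicative update.
        correction = np.fft.irfftn(np.fft.rfftn(ratio) * np.conj(otf),
                                   s=image.shape)
        estimate *= correction
    return estimate
```

Each iteration costs a few full-volume FFTs, which is why iteration counts and volume size dominate the run time.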
The data set size you can deconvolve is limited by the amount of memory on the graphics card, so the 1024 x 1024 x 50 data set fit fully into the graphics card RAM, a 1536 x 1024 x 50 data set required using some CPU RAM in order to deconvolve, and I was unable to process a 2048 x 2048 x 50 data set.
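A back-of-envelope memory estimate makes those limits plausible. The sketch below assumes complex single-precision voxels (8 bytes) and that the algorithm keeps a handful of full-size volumes resident on the card (data, estimate, OTF, scratch) – illustrative assumptions of mine, not figures from the paper:

```python
def deconv_memory_mib(nx, ny, nz, n_buffers=4, bytes_per_voxel=8):
    """Rough GPU memory estimate for FFT-based deconvolution, in MiB.

    Assumes complex64 voxels (8 bytes each) and `n_buffers` full-size
    working volumes -- hypothetical parameters for illustration.
    """
    return nx * ny * nz * bytes_per_voxel * n_buffers / 2**20

# Under these assumptions:
#   1024 x 1024 x 50 -> ~1600 MiB
#   1536 x 1024 x 50 -> ~2400 MiB
#   2048 x 2048 x 50 -> ~6400 MiB
```

That ordering is consistent with what I saw: the first stack fitting in card RAM, the second spilling to CPU RAM, and the third failing outright.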
We’ve tried two different graphics cards; here are the times required to deconvolve the 1024 x 1024 x 50 data set, if you are interested:
I hope to do some more comprehensive testing and comparison of different deconvolution tools, but this one is the fastest of all the ones I’ve seen.
- M.A. Bruce and M.J. Butte, "Real-time GPU-based 3D deconvolution," Optics Express, vol. 21, p. 4766, 2013. http://dx.doi.org/10.1364/OE.21.004766