We’ve been fortunate enough to get our hands on a 10-tap Andor Zyla camera. This is a new sCMOS camera that is capable of reading out its full field of view (5.5 megapixels) at 100 frames per second. At two bytes per pixel, that’s 11 MB per image, for a total data rate of 1.1 gigabytes per second. It turns out that acquiring and storing 1.1 GB/s is not so easy. A solid state drive (SSD) can deliver sustained write speeds of about 520 MB/s, roughly half of what we need. The solution is to use multiple SSDs in RAID 0, so that we can write to them in parallel. Both Hamamatsu (for their Flash4.0) and Andor recommend four SSDs in RAID 0. In principle, with four SSDs at 520 MB/s each, we should be able to get a total data transfer rate of 2.08 GB/s, assuming nothing else is limiting.
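The arithmetic above is easy to check with a few lines of Python (all numbers come from the text; the 520 MB/s per-drive figure is the vendor's sustained-write spec, and real arrays rarely hit the ideal aggregate):

```python
# Back-of-the-envelope data rates for streaming the Zyla to disk.
MEGAPIXELS = 5.5e6       # full field of view, in pixels
BYTES_PER_PIXEL = 2      # 16-bit pixels
FPS = 100                # full-frame readout rate

frame_bytes = MEGAPIXELS * BYTES_PER_PIXEL   # 11 MB per image
camera_rate = frame_bytes * FPS              # 1.1 GB/s from the camera

SSD_WRITE = 520e6        # sustained sequential write of one SSD, in bytes/s
N_DRIVES = 4
raid_rate = SSD_WRITE * N_DRIVES             # 2.08 GB/s, ideal RAID 0 scaling

print(f"Per frame: {frame_bytes / 1e6:.0f} MB")
print(f"Camera:    {camera_rate / 1e9:.2f} GB/s")
print(f"RAID 0:    {raid_rate / 1e9:.2f} GB/s (ideal)")
```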
Getting this data transfer rate in practice, however, is another matter. We just bought such a setup to go with our new Zyla: an Intel RAID card and four 240 GB Intel SSDs, in a host PC with two quad-core Xeons and a 1600 MHz bus clock. In principle it ought to run somewhere close to 2 GB/s. Out of the box, it ran at about 500 MB/s. The problem turned out to be that we had not enabled write caching on the device in Windows; once we did, the write speed jumped to about 950 MB/s. That is still lower than it seems we should be able to get, and not quite the 1100 MB/s we would need to continuously stream data from the camera. Unfortunately, there is a real lack of real-world documentation on how to tune RAID settings for maximum write throughput. Partly this is because sustained write performance is a somewhat unusual requirement – most applications care far more about random read/write performance. So for now I’ll be adjusting settings on the RAID controller to see which combination gives the highest write throughput. If anyone reading this knows how to set up our RAID for maximum speed without my blindly trying settings, please let me know.
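For comparing controller settings, a simple sequential-write benchmark is enough. Here is a minimal sketch (not a vendor tool; the path, chunk size, and chunk count are illustrative – to get a realistic number you’d want to write well more than the controller’s cache, and the `os.fsync` call matters, since otherwise the OS write cache can make the drive look much faster than it is):

```python
import os
import time

def write_throughput(path, chunk_mb=11, n_chunks=100):
    """Write n_chunks blocks of chunk_mb megabytes sequentially
    and return the measured throughput in MB/s."""
    chunk = b"\x00" * (chunk_mb * 1024 * 1024)  # one 11 MB "frame" by default
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n_chunks):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force the data out to the drive itself
    elapsed = time.perf_counter() - start
    os.remove(path)  # clean up the test file
    return (chunk_mb * n_chunks) / elapsed

# Example (point it at the RAID volume, e.g. on Windows):
# print(write_throughput("D:/raid_test.bin"), "MB/s")
```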
It turns out that the PC hardware isn’t the only limiting factor in handling this much data. The Micro-manager team has been busy rewriting the Micro-manager core to minimize overhead and handle these high data rates. Right now we are able to run continuously with 15 msec exposure times, or 66.7 frames per second. Hopefully we’ll be able to get to the full 100 fps rate with a little tweaking. Not surprisingly, you can fill up hard drives pretty fast this way – at 66.7 fps, we’ll fill our entire RAID 0 array in about 20 minutes.
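That fill-time estimate follows directly from the numbers above (the 960 GB figure is the nominal capacity of our four 240 GB drives in RAID 0; formatted capacity will be a bit less, so the real time is a couple of minutes shorter):

```python
# How long until the array fills at the current frame rate?
frame_mb = 11.0                  # one full frame, in MB
fps = 66.7                       # current sustained rate (15 msec exposures)
capacity_gb = 4 * 240            # four 240 GB SSDs in RAID 0, nominal

rate_mb_s = frame_mb * fps                       # ~734 MB/s to disk
minutes = capacity_gb * 1000 / rate_mb_s / 60    # ~22 minutes to fill
print(f"{rate_mb_s:.0f} MB/s -> array full in about {minutes:.0f} minutes")
```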
Assuming we do get our RAID up to speeds above 1.1 GB/s, I’ll post how we did it.