Recently the good folks at Datalight published results of testing they'd done on their Reliance Nitro transactional file system and Microsoft's TexFAT (Transaction-Safe Extended FAT). Both are used in embedded systems like ATMs and mobile devices with critical data reliability needs.
There's no doubt that SSDs have more raw performance than SD cards. But in this case it's not what is used, but how it is used.
Datalight found that creating and writing a file with TexFAT on an SSD took over 8 seconds, while the same operations on an SD card took anywhere from 41 to 101 ms.
When they investigated, they found that the SSD
. . . is actually doing something (flushing data out of a cache internal to the SSD, most likely), whereas on SD - it does no additional work (since SD has no internal cache), and so . . . SD appears "faster".
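You can see this kind of effect for yourself from user space. Here's a minimal Python sketch - not Datalight's test harness, just an illustration - that times a durable write: create a file, write it, and fsync it. The fsync call is where any internal device cache flush gets paid for, so the elapsed time varies dramatically across media.

```python
import os
import time

def timed_durable_write(path, payload):
    """Create a file, write it, and force it to media; return elapsed seconds.

    os.fsync() asks the OS (and, transitively, the device) to persist its
    caches, so the elapsed time includes any internal cache flush the drive
    performs -- the step the Datalight results point to.
    """
    start = time.perf_counter()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        os.write(fd, payload)
        os.fsync(fd)  # block until the data has (in principle) reached media
    finally:
        os.close(fd)
    return time.perf_counter() - start

elapsed = timed_durable_write("/tmp/flush_test.bin", b"x" * 4096)
print(f"durable 4 KiB write: {elapsed * 1000:.1f} ms")
```

Run the same snippet against different devices and the spread in fsync cost - not raw throughput - is what dominates.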
If you worry about corrupting data due to power loss, those 8 seconds can be an eternity.
As Datalight points out:
. . . the path taken by data from application to media can be convoluted, passing through caches and buffers at different layers of the system in an effort to improve or maintain performance.
While the caches and buffers are an architectural strategy to improve performance, Datalight's testing shows that they don't always do so.
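Those layers are visible even from a scripting language. This sketch (an illustration under my assumptions, not anything from Datalight's post) times the three hand-offs a buffered write makes on its way to media; on most systems the fsync step dwarfs the other two, which is exactly where a slow device-cache flush would show up.

```python
import os
import time

def layered_write_timings(path, payload):
    """Time the hand-offs a buffered write makes on its way to media:
    1) write() -> the Python file object's user-space buffer
    2) flush() -> the kernel page cache
    3) fsync() -> the device, including any cache internal to the drive
    """
    timings = {}
    with open(path, "wb") as f:
        t0 = time.perf_counter()
        f.write(payload)            # lands in Python's buffer
        timings["write"] = time.perf_counter() - t0

        t0 = time.perf_counter()
        f.flush()                   # user buffer -> kernel page cache
        timings["flush"] = time.perf_counter() - t0

        t0 = time.perf_counter()
        os.fsync(f.fileno())        # page cache -> media
        timings["fsync"] = time.perf_counter() - t0
    return timings

for step, secs in layered_write_timings("/tmp/layers.bin", b"x" * 4096).items():
    print(f"{step:5s}: {secs * 1e6:8.1f} µs")
```

The first two steps are why benchmarks that skip fsync can make any medium look fast: the data hasn't gone anywhere yet.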
No, you shouldn't replace your SSDs with SD cards, though some SD cards now sport faster-than-disk specs.
Rather, I see Datalight's results as evidence that the old disk-oriented software stacks and file systems, with their many layers of caching, are less and less relevant in a world of high-performance solid state storage. For 50 years systems were engineered to mask the long latency and low IOPS of disks - and with fast SSDs much of that careful engineering is obsolete and counterproductive.
If you are engineering embedded systems, the Datalight blog post is worth a close read. If you are running data systems, you should be more aggressive in understanding how your chosen software works - or not - with SSDs.
Comments welcome, as always.