This is part 2 of a two-part series in “Networking” investigating flash memory, solid state drives (SSDs), and hard disk drives (HDDs). In part 1, we surveyed the fundamentals of flash memory and SSD structure, and how flash memory works. This time, we change perspectives just a bit to focus on the differences between SSDs and HDDs, while adding a few buying tips along the way.
When looking to purchase an SSD, evaluate the data transfer rates of each model under consideration by comparing their input/output operations per second (IOPS) metric. Most of the time, manufacturers report data transfer speeds as maximum sequential read/write (R/W) rates, which can look quite impressive. That measurement assumes constant data transfer sustained over a long time, which is not the way drives are used in real life. By contrast, IOPS is based on 4 KB (page-size) random reads and writes. I think this is closer to reality and sets a better common ground for comparison. (See the sidebar to this article to learn how to calculate IOPS if it is not given in the drive’s specs.) In general, expect to see IOPS ratings around 100,000 for SSDs, versus around 100 IOPS for HDDs!
Converting Data Transfer Rates into IOPS
It’s easy to convert a MB/s rating into an IOPS value. The equation is IOPS = (MB/s ÷ KB per I/O) × 1024. Use 4 KB per random R/W operation, since that is the most common page size.
For example, the SAMSUNG 840 EVO MZ-7TE500BW 2.5-inch 500GB SATA III TLC Internal Solid State Drive boasts a 540 MB/s data transfer rate. The IOPS value will be (540 ÷ 4) × 1024, or about 138,000. That’s not bad; expect to see values around 100,000.
Converting IOPS into Data Transfer Rates
What’s the data transfer rate for an IOPS of 90,000? This is also an easy conversion; just use the equation backward: MB/s = (IOPS × KB per I/O) ÷ 1024. In this case it is (90,000 × 4) ÷ 1024, or about 350 MB/s (rounding off).
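The two sidebar formulas can be wrapped in a pair of helper functions. This is just a sketch of the arithmetic above; the function names are mine:

```python
# Convert between sequential MB/s ratings and IOPS, using the
# article's rule of thumb of 4 KB per random R/W operation.

KB_PER_IO = 4  # most common page size

def mbs_to_iops(mbs, kb_per_io=KB_PER_IO):
    """IOPS = (MB/s / KB per I/O) * 1024."""
    return (mbs / kb_per_io) * 1024

def iops_to_mbs(iops, kb_per_io=KB_PER_IO):
    """MB/s = (IOPS * KB per I/O) / 1024."""
    return (iops * kb_per_io) / 1024

# The Samsung 840 EVO's 540 MB/s rating works out to about 138,000 IOPS:
print(mbs_to_iops(540))    # 138240.0
# And 90,000 IOPS corresponds to roughly 350 MB/s:
print(iops_to_mbs(90000))  # 351.5625
```

Note that the two functions are exact inverses of each other, so a spec sheet that quotes either number lets you estimate the other.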
For comparison, a typical 7200 RPM HDD runs somewhere between 80 and 120 IOPS!9
The Weakest Link
In some cases, data centers may have storage system controllers that are 10 or more years old. SSDs bring data access that is at least 100 times faster than HDDs, so the data center will need newer architecture to take full advantage of that speed; otherwise, the old controller becomes a bottleneck to best system performance. A capital management company in Michigan saw an 89% reduction in data access time to its Microsoft Exchange server when it compared SSDs with its existing 15K RPM Fibre Channel drives. Although switching to solid-state memory ended up costing the company about 10 times the original system cost, it is seeing lower power consumption and utility bills, which will help to defray some of the cost of ownership.1
A critical part of using SSDs, and flash in general, is the controller. Total system memory access rates depend on how fast the controller can move data. Currently, the fastest SSD controller interfaces are SATA III and PCI Express. For example, the SandForce SATA 3.0 (6 Gb/s) controller, rated at 150K/80K IOPS, is a screamer. PCI Express v4.0 can reach 16 GT/s (gigatransfers per second) per lane, or 256 GT/s aggregate across a 16-lane controller. So it appears that we’ll be seeing faster and faster computing devices as the surrounding infrastructure catches up with flash speeds.2,3
NAND flash memory technology also uses a few other utilities to manage the entire process. There are always error-protection algorithms in play to ensure data integrity. Additionally, there are three other utilities to note. The first is called garbage collection, a background operation that makes the best use of SSD memory cell space. Recall that NAND can write at the page level but can only erase at the block level, and a block consists of many pages. Garbage collection gathers loose valid pages together to fill whole blocks, freeing partially used blocks for erasure. Consolidating pages to use up as much of each block as possible helps with overall operation and speed.
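A toy model makes the idea concrete. Here, valid pages scattered across partially stale blocks are repacked into as few full blocks as possible; the block size and page markers are illustrative, not real firmware behavior:

```python
# A sketch of SSD garbage collection: valid pages from partially
# stale blocks are consolidated so the old blocks can be erased whole.
# "X" marks a stale page whose data was rewritten elsewhere.

PAGES_PER_BLOCK = 4  # illustrative; real blocks hold far more pages

def garbage_collect(blocks):
    """Gather valid pages and repack them into full blocks."""
    valid = [p for block in blocks for p in block if p != "X"]
    # Repack the surviving pages into as few blocks as possible.
    return [valid[i:i + PAGES_PER_BLOCK]
            for i in range(0, len(valid), PAGES_PER_BLOCK)]

# Two half-stale blocks collapse into one full block:
dirty = [["a", "X", "b", "X"], ["X", "c", "X", "d"]]
print(garbage_collect(dirty))  # [['a', 'b', 'c', 'd']]
```

After the collection pass, the two source blocks contain nothing worth keeping and can be erased in their entirety, ready for fresh writes.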
The second utility to mention is something called TRIM. You should look to see whether the SSD you’re thinking about buying supports TRIM. TRIM manages file-deletion efficiency somewhat like garbage collection, but on the fly. When you delete a file at the user level, the deletion touches many pages spread across memory blocks. TRIM takes any remaining valid pages from an affected block and temporarily stores them in cache, purges the entire block, and then writes the cached pages back into the block in an orderly way, leaving clean space for the system to efficiently add new pages. It all helps to maintain maximum effectiveness and data transfer speeds.4
Probably the most useful tool is called wear-leveling. Obviously, if you keep erasing and rewriting the same block all the time, it will wear out faster than the other memory blocks. By ensuring all blocks are used at about the same rate, this tool extends the overall life of the SSD.
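The core of the policy can be sketched in a few lines: steer each write to the least-worn block. This is a minimal sketch of the idea, not any vendor's actual algorithm:

```python
# A minimal wear-leveling sketch: writes are steered to the block
# with the lowest erase count so all blocks age at about the same rate.

def pick_block(erase_counts):
    """Return the index of the least-worn block."""
    return min(range(len(erase_counts)), key=lambda i: erase_counts[i])

counts = [12, 7, 7, 30]
target = pick_block(counts)
counts[target] += 1  # erasing/rewriting that block bumps its count
print(target, counts)  # 1 [12, 8, 7, 30]
```

Run repeatedly, this keeps the erase counts clustered together instead of letting one hot block burn through its limited erase cycles.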
Making the Choice
In this two-part column, we have taken a fairly extensive tour of SSD structures. HDDs are still quite useful, and cheap enough for both personal use and enterprise data storage. Currently, HDDs cost around 5¢ per GB, while SSDs run about 42¢ per GB. Also take into account that HDDs have no practical limit on R/W cycles, and they offer greater capacity (up to 10 TB, which Western Digital announced in September of this year5).
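At those per-gigabyte prices, the gap adds up quickly. A quick back-of-the-envelope comparison, using the article's figures and a hypothetical 500 GB drive:

```python
# Rough cost comparison at the article's prices:
# about 5 cents/GB for HDDs versus 42 cents/GB for SSDs.

HDD_CENTS_PER_GB = 5
SSD_CENTS_PER_GB = 42

capacity_gb = 500  # illustrative drive size
hdd_dollars = capacity_gb * HDD_CENTS_PER_GB / 100
ssd_dollars = capacity_gb * SSD_CENTS_PER_GB / 100
print(hdd_dollars, ssd_dollars)  # 25.0 210.0

# The per-GB ratio, regardless of capacity:
print(round(SSD_CENTS_PER_GB / HDD_CENTS_PER_GB, 1))  # 8.4
```

So at these prices a 500 GB HDD runs about $25 where the equivalent SSD runs about $210, a ratio of roughly eight to one.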
Since about 2000, processor speeds have continually increased, but HDD speeds have not kept pace. This makes the HDD a slow link, or bottleneck, in the data-transfer chain: CPUs can process data much faster than the HDD can supply it. Again, this is a case where the entire system benefits from flash memory.
Obviously, HDDs are electromechanical devices. I think we should rename them EMDs—for electro-mechanical drives—due to the advent of solid-state drives. In the same way, in cellular technology, 2G wasn’t called 2G until 3G came out, and in networking, thick-net didn’t get its name until thin-net came out. How does one petition for a change in the vernacular? Via social media?
I think that SSDs win on reliability. An SSD test that wrote 20 GB daily for 5 years produced zero failures. Intel rates its 530-series drive to last 10 years if you write 10 GB per day, 365 days a year.6 HDDs, by contrast, are prone to failure within 3 or 4 years; if they make it past the fourth year, failures continue at a rate of about 12% per year thereafter.7
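It is worth doing the endurance arithmetic on those two figures, because they turn out to describe the same total volume of data written:

```python
# Back-of-the-envelope endurance math for the figures above.

def total_gb_written(gb_per_day, years):
    return gb_per_day * 365 * years

# The 5-year, 20 GB/day test:
print(total_gb_written(20, 5))   # 36500 GB, about 36.5 TB
# Intel's 530-series rating of 10 GB/day for 10 years:
print(total_gb_written(10, 10))  # 36500 GB, the same total
```

Either way you slice it, that is roughly 36.5 TB of writes, which is far more than a typical consumer workload will ever produce.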
In addition, SSD failures usually don’t happen without some warning. They tend to get slow as the dielectrics wither over the years and the drive needs several tries to write to a cell. This becomes more likely as chips move to 7-nm architectures, where everything is smaller and closer together. However, like an automobile tire sensor keeping an eye on tread depth and wear, there will be sensors and techniques to keep an eye on oxide layers and retries.1 By contrast, a mechanical HDD can fail suddenly and sometimes catastrophically. The only thing I have yet to determine is whether data from a limping SSD can be recovered the way data from a crashed HDD can.
The bottom line is this: HDDs win on price, capacity, and availability. SSDs win on speed, ruggedness, power consumption, and noise. If it weren’t for the price and capacity issues, SSDs would be the winner hands down.8
Finally, a few tips on preserving your SSD. Number one: turn off indexing. There is no need for constant surveillance in reading and tracking what’s in memory, since we’re already running lickety-split. Next, ensure that your OS and drive controller are using TRIM.4 Also, if you’re a defragger, resist the urge to defragment your SSD. It’s a useless effort on an SSD, and by defragging you’re using up R/W cycles.
Will SSDs eventually replace all HDDs? I don’t think they will entirely. There will always be a need for the HDD. I think data centers will continue to use them for backup purposes and emergency replacement. Most consumers go for price, especially since a typical SSD is currently eight times the cost of an HDD for the same capacity. As costs even out over time, however, we will see more and more SSDs showing up in all kinds of computing devices. When will you jump into SSD land?
1. Rouse M. (2014, October). SSD (solid-state drive, solid-state disk, solid-state storage drive). Retrieved from Tech Target: http://searchstorage.techtarget.com/definition/solid-state-drive. Accessed December 12, 2014.
2. Wikipedia. (2014, October 29). PCI Express. Retrieved from Wikipedia: https://en.wikipedia.org/wiki/PCI_Express. Accessed December 12, 2014.
3. Wikipedia. (2014, November 1). Serial ATA. Retrieved from Wikipedia: https://en.wikipedia.org/wiki/Serial_ATA. Accessed December 12, 2014.
4. Fitzpatrick J. (2011, March 8). HTG Explains: What’s a solid state drive and what do I need to know? Retrieved from How-To Geek: http://www.howtogeek.com/howto/45359/htg-explains-whats-a-solid-state-drive-and-what-do-i-need-to-know/. Accessed December 12, 2014.
5. Anthony S. (2014, September 10). Western Digital unveils world’s first 10TB hard drive: Helium-filled, shingled recording. Retrieved from Extreme Tech: http://www.extremetech.com/computing/189813-western-digital-unveils-worlds-first-10tb-hard-drive-helium-filled-shingled-recording. Accessed December 12, 2014.
6. Hagedoorn H. (2013, August 15). Intel 530 SSD review – What is SATA 3 (6G)? – 1x nm NAND – MLC. Retrieved from The Guru of 3D: http://www.guru3d.com/articles-pages/intel-530-ssd-benchmark-review-test3.html. Accessed December 12, 2014.
7. Paul I. (2014, January 21). Three-year, 27,000 drive study reveals the most reliable hard drive makers. Retrieved from PC World: http://www.pcworld.com/article/2089464/three-year-27-000-drive-study-reveals-the-most-reliable-hard-drive-makers.html. Accessed December 12, 2014.
8. Domingo JS. (2014, February 20). SSD vs HDD – What’s the difference? Retrieved from PCMAG.COM: http://www.pcmag.com/article2/0,2817,2404258,00.asp. Accessed December 12, 2014.
9. Burke S. (2012, March 22). SSD dictionary: Understanding SSD specs – The basics. Retrieved from Gamers/Nexus: http://www.gamersnexus.net/guides/785-ssd-dictionary-understanding-ssd-specs. Accessed December 12, 2014.