Last month, Drive Rescue Data Recovery attended Embedded World 2020 in Nuremberg, Germany, checking out some of the latest solid state disk technologies. Given the circumstances of the COVID-19 outbreak, the atmosphere was rather funereal, but most exhibitors and attendees seemed to make the most of it. Here are some of the changes and trends happening in the SSD market at the moment.
96-Layer (BiCS4) NAND goes mainstream
As most readers of this blog probably know already, the first generation of flash memory was “planar” or “2D” NAND, meaning that the memory cells are arranged horizontally in a single layer across the die (chip).
3D NAND is now going from 64-layer (BiCS3) to 96-layer (BiCS4). This means NAND cells are getting vertically “stacked” on top of each other, creating 96 different layers – rather like the storeys of a skyscraper. For NAND manufacturers, this means adding additional chemical deposition and etching machines to their production lines.
96-layer sounds great, but as ever, manufacturers are pushing the NAND layer envelope even further by using a process known as “string-stacking”. This involves, for example, stacking a 32-layer die on top of a 64-layer die. Most NAND manufacturers are adapting their production lines to adopt this layering process. However, Samsung (being Samsung…) is expected to continue using a single-stack process known as High Aspect Ratio Contact in their fabs for etching dies up to 200 layers.
And while multi-layered chip lithography greatly enhances SSD storage densities, it also increases cell-to-cell interference (sometimes referred to as “crosstalk”). To mitigate this, a quality SSD controller which uses effective Error Correction Code (ECC) algorithms is needed now more than ever.
Common ECC algorithms like BCH are simply not cutting it anymore. That is why more sophisticated ECC engines powered by LDPC (Low-Density Parity-Check) codes, which can offer pseudo-soft bit correction and read-level tracking, are needed. SSD controller manufacturers are responding to this need. Phison’s new E16 controller, for instance, uses fourth-generation LDPC ECC to complement 3D TLC and even QLC NAND. Silicon Motion, a controller manufacturer exhibiting at Embedded World 2020, have introduced their sixth-generation “NANDXtend” ECC technology. This combines LDPC with RAID protection, which promises end-to-end data integrity along the entire host-to-NAND data pathway. Some of Silicon Motion’s controller line-up even employs machine learning algorithms for enhanced data integrity when the disk operates at high temperatures. Machine learning and artificial intelligence are terms which the storage world is probably going to be hearing a lot more of in the future.
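Real LDPC decoders are far too complex to show here, but the basic idea behind any ECC engine – storing parity bits alongside data so that flipped bits can be detected and corrected – can be illustrated with the classic Hamming(7,4) code. This is a toy sketch for illustration only; production SSD controllers use far stronger codes such as BCH and LDPC.

```python
def hamming74_encode(nibble):
    """Encode 4 data bits into a 7-bit Hamming codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]  # d0..d3
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # codeword layout (positions 1..7): p1 p2 d0 p3 d1 d2 d3
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(bits):
    """Decode a 7-bit codeword, correcting any single flipped bit."""
    b = bits[:]
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]
    s3 = b[3] ^ b[4] ^ b[5] ^ b[6]
    syndrome = s1 + (s2 << 1) + (s3 << 2)
    if syndrome:
        b[syndrome - 1] ^= 1  # the syndrome points at the bad bit
    d = [b[2], b[4], b[5], b[6]]
    return sum(bit << i for i, bit in enumerate(d))
```

Any single bit-flip in the stored codeword – the silicon equivalent of crosstalk corrupting a cell – is silently corrected on read. LDPC works on the same principle, just with vastly longer codewords and probabilistic (soft) decoding.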
From S-ATA to M.2

There is little point in having all of this fast NAND if it’s bottlenecked by the disk’s command interface or connector type. For example, if you buy an S-ATA MLC-based SSD today, the S-ATA interface will most likely be the data throughput bottleneck. (Remember, the maximum bandwidth for S-ATA III is only 600 MB/s.) Moreover, most laptop and tablet manufacturers are finding the S-ATA connector too bulky to fit their ultra-slim devices.
This explains why M.2 is now proving such a popular connector type. It comes in five main “key” types: “A key”, “B key”, “E key”, “M key” and “B&M key”. Modules come in a variety of sizes, such as the popular “2280 B+M” form factor, which is 22mm wide and 80mm long. These variations provide manufacturers, systems integrators and end-users with ample spatial flexibility.
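The four-digit size codes follow a simple convention: the first two digits give the module width in millimetres and the remaining digits its length. A tiny illustrative helper (the function name is my own, not from any M.2 specification):

```python
def m2_dimensions(size_code: str):
    """Decode an M.2 size code such as '2280' into (width_mm, length_mm)."""
    return int(size_code[:2]), int(size_code[2:])

# The '2280' form factor mentioned above is 22mm wide and 80mm long;
# longer codes such as '22110' work the same way.
```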
From AHCI to PCIe
It’s not only the connector type which can slow down an SSD. The command (or data pathway) protocol used is just as important. Currently, for standard S-ATA III disks, AHCI is the standard command protocol. It has one command queue with 32 commands per queue, allowing data transfers of up to 600MB/s. NVMe, by contrast, allows up to 65,535 I/O queues with 65,536 commands per queue. Or, to use a roadway analogy, AHCI is your country boreen for data while NVMe is a super-highway.
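The difference in queueing capacity is easy to put into numbers. A quick back-of-the-envelope calculation using the figures above:

```python
# AHCI: a single command queue, 32 commands deep
ahci_capacity = 1 * 32

# NVMe: up to 65,535 I/O queues, each up to 65,536 commands deep
nvme_capacity = 65_535 * 65_536

print(f"AHCI can have {ahci_capacity} commands in flight")
print(f"NVMe can have {nvme_capacity:,} commands in flight")
print(f"That is roughly {nvme_capacity // ahci_capacity:,}x more")
```

Over four billion commands can theoretically be outstanding on an NVMe device at once, which is precisely what lets an SSD keep all of its NAND dies busy in parallel.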
Enter PCIe 4.0
Solid state disk manufacturers are now beginning to deploy PCIe 4.0 in their drives. PCIe 4.0 doubles the data throughput of an x16 link from 32GB/s to 64GB/s (assuming the underlying hardware supports the new specification). This increase in bandwidth should be very pleasing to those involved in processing, for example, stereoscopic images or the large data sets required by artificial intelligence.
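Those headline figures can be sanity-checked from the per-lane signalling rates. A rough sketch (note that the 32GB/s and 64GB/s figures commonly quoted for x16 links count both directions of the full-duplex link):

```python
def pcie_bandwidth_gbs(generation: int, lanes: int) -> float:
    """Approximate one-directional bandwidth in GB/s for a PCIe link.

    Both PCIe 3.0 and 4.0 use 128b/130b line encoding; 4.0 simply
    doubles the signalling rate from 8 GT/s to 16 GT/s per lane.
    """
    gigatransfers = {3: 8.0, 4: 16.0}[generation]
    encoding_efficiency = 128 / 130
    return gigatransfers * encoding_efficiency * lanes / 8  # bits -> bytes

# An x16 PCIe 4.0 link: ~31.5 GB/s each way, ~63 GB/s in aggregate
```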
Optimising PCIe with NVMe
But how about optimising PCIe so that it works smarter and faster while minimising motherboard compatibility issues? That’s where the Non-Volatile Memory Express (NVMe) standard comes into play. NVMe is a data pathway standard devised by the NVM Express consortium, a group of chip makers, controller designers and solid state disk manufacturers (such as SanDisk, Samsung and Intel). Their mission is to enhance the I/O functionality and performance of solid state disks over PCIe, but also to promote industry-wide interoperability and adoption of the NVMe standard. NVMe is currently on version 1.4.
NVMe already allows SSDs to exploit parallelism, enabling the concurrent processing of I/O requests. One of the biggest changes since NVMe 1.3, however, is a feature known as “I/O determinism”. This allows an SSD to be configured into multiple “sub-drives”, so that different data types (e.g. video, photo or database streams) can be partitioned and don’t interact – a feature which could prove very useful when processing hyperscale data.
Using Namespace Preferred Write Granularity, NVMe 1.4 makes TRIM commands work more efficiently
Not only does NVMe 1.4 allow for better organised data, but it also enables TRIM to work more effectively. Currently, for most SSDs, TRIM operates as a background function to ensure that data blocks or pages which are no longer needed are deleted. While in theory this sounds very efficient, the data ranges subject to deletion are often very small or misaligned. This leads to write amplification (where entire “blocks”, instead of more granular “pages”, of user data and metadata stored on NAND are subject to erase and write cycles). And you really don’t want this process happening too much inside an SSD, because it can lead to early wear-out. However, NVMe 1.4 introduces a feature known as Namespace Preferred Write Granularity (NPWG). NPWG helps SSDs report richer information back to the host, such as page sizes and erase block sizes. In turn, this allows TRIM commands to be executed in a more granular and targeted way. Deallocating at page level rather than block level greatly reduces write amplification and extends the life of the SSD.
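The effect of misaligned deallocation on write amplification can be sketched with some simple arithmetic. The page and block geometry below is assumed for illustration; real figures vary by NAND part:

```python
PAGE_SIZE = 16 * 1024          # assumed NAND page size (bytes)
PAGES_PER_BLOCK = 256          # assumed pages per erase block

def write_amplification(host_pages_written: int, valid_pages_relocated: int) -> float:
    """WAF = total pages physically written / pages the host asked to write."""
    return (host_pages_written + valid_pages_relocated) / host_pages_written

# Worst case: overwriting one page forces the controller to relocate
# the other 255 still-valid pages before erasing the block
waf_misaligned = write_amplification(1, PAGES_PER_BLOCK - 1)

# Best case: TRIM already deallocated the rest of the block, so
# nothing needs relocating
waf_aligned = write_amplification(1, 0)

print(waf_misaligned, waf_aligned)  # 256.0 vs 1.0
```

Every unit of write amplification above 1.0 is NAND wear the host never asked for, which is why aligned, well-targeted TRIM matters.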
The G-List for the SSD era
For years, electro-mechanical disks (HDDs) have used a G-List (short for Growth List) which records bad sectors detected by the disk’s firmware. NVMe 1.4 introduces something similar for SSDs called “Get LBA Status”. This allows an SSD to report LBA ranges (areas of blocks) which are likely to return an error if a read or verify command is executed. This feature should enable solid state disk OEMs, vendors and third-party software developers to build more accurate SSD diagnostic and monitoring tools. Already on GitHub, we’re seeing how the relative openness of the NVMe standard has led independent software developers across the world to produce fairly nifty firmware hacks for NVMe-based SSDs.
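Conceptually, the feature behaves like a queryable list of suspect LBA ranges. The class below is a hypothetical model of that idea (the naming is my own; real tools would issue the Get LBA Status admin command through a driver or a utility such as nvme-cli):

```python
class LbaStatusLog:
    """Toy model of the NVMe 1.4 'Get LBA Status' idea: the drive
    reports LBA ranges likely to fail a read or verify command."""

    def __init__(self):
        self._suspect_ranges = []          # list of (start_lba, length)

    def report_range(self, start_lba: int, length: int):
        """Record a range the drive flags as potentially unreadable."""
        self._suspect_ranges.append((start_lba, length))

    def is_suspect(self, lba: int) -> bool:
        """Would a read of this LBA likely return an error?"""
        return any(start <= lba < start + length
                   for start, length in self._suspect_ranges)
```

A recovery or monitoring tool could consult such a list before imaging a drive, deprioritising suspect regions instead of stalling on them.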
Looking forward: NVMe 2.0 to introduce Zoned Namespaces
Diverse data types dispersed across a disk’s NAND plane can also make write, read and erase cycles very inefficient. Take, for example, a user like a video producer or editor, whose SSD might contain a wide range of file types such as audio, video, music or photos (e.g. MP4, AIFF, RAW). Thanks to the SSD’s controller and a process known as wear-levelling, these files will be written evenly across the SSD’s NAND blocks. This is important because it prevents the same blocks being written to, or erased, continually. (Think of wear-levelling as being like a car park attendant. He tries to prevent drivers all parking their cars near the lifts and encourages them to park more evenly throughout the car park.) Spreading out the writes in a more distributed manner across the NAND cells reduces disk wear-out.

However, there is a problem with this. Each time a read, write or erase command is issued, the controller has a lot more work to do because the data is stored non-sequentially. The latency this introduces is a well-known problem. Some readers might remember how, on operating systems such as Windows 98, ME or XP, non-sequential data stored on the host’s disk would necessitate running inbuilt or third-party disk defragmentation applications to improve disk performance. Almost two decades later, SSDs are facing the same problem which that old spinning Maxtor, Seagate (or even a Quantum Fireball…) disk inside your first PC experienced.

NVMe 2.0 promises to eliminate this problem using Zoned Namespaces (ZNS). This model (already in use by Shingled Magnetic Recording HDDs) divides the disk’s logical address space into fixed-size ranges and enforces sequential write rules. Each zone must be written sequentially. If the host or application violates this rule, an error is generated. ZNS allows the SSD to match the workloads of the host to the natural erase patterns of flash (NAND) memory.
This results in significantly reduced latency, reduced over-provisioning, less garbage collection and less write amplification. And ultimately, it allows for much faster I/O operations.
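The sequential-write rule at the heart of ZNS is simple to model: each zone keeps a write pointer, and any write that doesn’t land exactly on it is rejected. A minimal sketch (simplified; real zones also have states, and capacities distinct from sizes):

```python
class Zone:
    """Minimal model of a ZNS zone enforcing sequential writes."""

    def __init__(self, start_lba: int, size_blocks: int):
        self.start = start_lba
        self.size = size_blocks
        self.write_pointer = start_lba

    def write(self, lba: int, n_blocks: int):
        if lba != self.write_pointer:
            raise ValueError("ZNS rule violated: write must start at the write pointer")
        if self.write_pointer + n_blocks > self.start + self.size:
            raise ValueError("write would overflow the zone")
        self.write_pointer += n_blocks

    def reset(self):
        """Resetting a whole zone maps naturally onto a NAND block erase."""
        self.write_pointer = self.start
```

So `zone.write(0, 128)` followed by `zone.write(128, 64)` succeeds, while an out-of-order `zone.write(512, 8)` raises an error. Because a full zone is only ever reclaimed with a whole-zone reset, the host’s write pattern lines up with the NAND’s natural erase granularity, which is where the garbage-collection and write-amplification savings come from.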
In the context of data recovery, NVMe 1.4 introduces a potentially useful feature known as the “Persistent Event Log”. This basically acts as a black-box flight recorder for NVMe storage devices, allowing the vendor, OEM or data recovery technician to access the event logs of the device. Disk event logging is nothing new; HDD manufacturers such as Seagate and WD have featured it before. However, the logging was usually specific to a hard disk family or model. (And just to complicate things, for more recent HDD models, the logs are sometimes inaccessible due to locked firmware…) The Persistent Event Log feature on NVMe SSDs promises to be completely open (as opposed to locked down or encrypted…) and uses standardised (as opposed to manufacturer-specific) logs. The PEL will record important disk operating telemetry such as health snapshots, NVMe namespace changes, firmware commits, power-on logs, reset logs, hardware errors and logs pertaining to disk format and sanitise commands. This should make the diagnosis of complex SSD firmware or Flash Translation Layer problems less time-consuming and hopefully expedite the data recovery process.
Another interesting trend Drive Rescue noticed at Embedded World 2020 is the return of removable WORM (Write Once Read Many) media for the SSD era. Most of you will already be familiar with WORM media such as CD-R, DVD-R and BD-R. Well, flash storage companies like Silicon Power are making WORM versions of their SD cards. These are tamper-proof SD cards (at least from soft forms of tampering…) which, once written to, cannot be modified. Nor can the write protection be disabled by flicking the “read only” switch on the side like on standard SD cards. The write protection on WORM SD cards is hard-coded. Wondering about the practical applications of such cards, the super-informative Silicon Power team told me that these non-erasable cards have wide applications in areas where data integrity is paramount, such as POS equipment, body cams, electronic voting and medical systems.
According to the Silicon Power team, USB memory sticks are still being sold in large quantities. They explained how their ease-of-use and offline availability, along with fast transfer speeds, still make them a very attractive proposition for lots of computer users. The death of USB-based flash storage has been greatly exaggerated. The company has even introduced USB 3.2 memory sticks, such as the Helios 202, which offers sequential speeds of up to 5Gbps and capacities of up to 256GB. Silicon Power source their NAND from Toshiba (now called Kioxia) or WD (SanDisk), with controllers supplied by Phison, Silicon Motion or Marvell.
USB-C… More than just a connection type
Transcend, another major player in flash memory products, was also at Embedded World 2020. Their team sees USB-C as the next big thing in portable storage. The reason? Well, unlike the jumble of USB 3.1 and USB 3.2 protocol names, the USB-C connector is generally more widely supported by tablets and phones. It offers multi-platform interoperability between Windows, Apple and Android devices, along with theoretical sequential data transfer speeds of 10 Gbps. In fact, one congenial Transcend representative sees USB-C as more than just a connection type, but rather a platform in itself. It can, for instance, be connected directly to external displays. (Monitor manufacturers like LG, BenQ and Dell now support USB-C in some of their high-end displays.) USB-C also supports daisy-chaining – a useful feature which allows multiple external hard disks to be interlinked whilst appearing to the host as independent drives.
RAID hasn’t gone away you know…
USB memory sticks are likely to be hanging around for many years to come, and so too are a lot of other storage technologies borrowed from the world of spinning hard disks (HDDs). As already mentioned, zoned storage is borrowed from Shingled Magnetic Recording HDDs and is now being deployed in some NVMe-based solid state drives. Native Command Queuing (a command protocol developed to alleviate AHCI command queue bottlenecks) is another spinning-disk technology which has crossed the chasm into the SSD world. And let’s not forget RAID: a technology which some predicted would fizzle out has not gone away either. Not only are the principles of RAID virtualisation applied to the NAND arrays found on individual SSDs, but RAID is now extensively used in conjunction with SSDs for redundancy and to increase storage capacity (just as with HDDs). For example, modern ATM and ticketing machines extensively use SSD RAID arrays. Modern NAS devices from manufacturers such as Synology and QNAP now offer compatibility with S-ATA and M.2 SSDs. In fact, Synology have introduced PCIe add-in adaptor cards for some of their high-end devices which support M.2 2280, 2260 and 2242 SSDs. WD has recently introduced their SA500 NAS SATA SSD range, designed to work with NAS devices.
Hardware manufacturers such as Icydock have introduced products such as the ToughArmor 4-port SSD bay for NVMe M.2 disks. Designed for handling intensive I/O workloads such as deep-learning or 4K/8K video, it can offer speeds of 32Gbps over a MiniSAS connector with the host or RAID card managing the array. Rather than solid state storage sunsetting RAID, it has actually given it a new lease of life.
The role of gamers in solid state disk development
Adata, another major manufacturer of solid state disks and flash-based storage devices, was also in attendance. Commonly known for consumer-level SSDs such as the popular SU630, SU650 and SU800 models, the manufacturer also produces solid state disks for the gaming and industrial markets. For the gaming market, they produce an M.2 PCIe range of “XPG”-branded SSDs such as the SX6000, SX7000 and SX8200. For disk manufacturers, this market can be quite challenging because gamers are quite a discerning bunch of customers. They tend to perform extensive pre-purchase research and often demand high-performance, low-latency disks.
Unsurprisingly then, perhaps, gamers have been the progenitors of a substantial number of breakthrough information technology products, such as the sound card and the graphics card. (The gaming industry has also been responsible for bestowing upon the world neon-illuminated keyboards and other PC components which look like props from a Star Wars film…) Gamers using virtual and augmented reality today are not just playing, they are fine-tuning technologies. Once refined, these technologies will most likely trickle into the mainstream in the not-too-distant future. In this regard, the informative Adata representative explained how the consumer and gaming market offers his company valuable feedback and insights into real-life disk performance and reliability. (Learnings which could probably never be gleaned from a synthetic SSD benchmark test.)
These insights can be leveraged to design high-reliability industrial-class solid state disks such as the Adata ISSS332 (an S-ATA III disk using MLC NAND). For the industrial SSD market, not only are criteria such as reliability and power protection important, but so too is the need for disk models to have a fixed bill of materials (BOM). (Unlike the consumer disk market, where a manufacturer or vendor can change key components willy-nilly whilst retaining the same disk model number.) This is important because it ensures reliability, performance and consistency in environments sensitive to component changes. For example, operators of medical diagnostic machinery may require an SSD with a large DRAM cache for their machinery to run smoothly. If the SSD manufacturer or vendor decides to change or remove the DRAM cache in subsequent production runs of the disk, this could easily impact the running of a finely tuned machine. A “fixed BOM” therefore guarantees that all components and specifications pertaining to a specific disk model will remain unchanged during its production run.
Overall, despite the virus threat, Embedded World 2020 exhibitors and speakers provided some very interesting insights into the fast moving world of solid state disks.
Drive Rescue are based in Dublin, Ireland and provide an SSD data recovery service for S-ATA and M.2 drives which are no longer being recognised by your computer. We recover from brands such as Lite-On, Crucial, Samsung, WD and Lenovo. We also recover from USB memory sticks (flash drives). Phone us on 1890 571 571.