Here are some of the top reasons for SSD (S-ATA, PCIe NVMe and mSATA) failure which we’ve come across in the Drive Rescue lab over the last year:
Flash Translation Layer (FTL) corruption
Bad Blocks
Gate-oxide Failure
Failure of solder-joints on printed circuit board
Exhausted NAND
Failure of Power Management IC
Read Disturb Failures
DRAM-chip Failure
Wear-out of System Area containing firmware
Complete NAND Chip Failure
The above list covers failure modes across all brands and interface types of solid-state disk, including Samsung, Micron, SK Hynix, WD, Toshiba, HP, Kingston and Apple models. You can find out more about our SSD data recovery service here.
One of the great drawbacks of electro-mechanical disks is their propensity to develop bad sectors. And unfortunately, SSDs don’t escape this problem.
Bad sectors are a problem for storage devices simply because they can result in inaccessible or lost data. Moreover, bad sectors often have an unhappy knack of developing in the same areas of your disk where your most important data is stored.
Typical Symptoms of Bad Sectors on an SSD include:
Your S-ATA or PCIe SSD (such as Samsung, Micron, SK Hynix etc) is causing your computer to intermittently freeze.
In the Windows Event Viewer, you see evidence of “bad blocks” being reported.
Your S-ATA, PCIe (NVMe) or USB (3.1 / USB-C) SSD is not being recognised by your computer.
You can see your SSD’s folders and files in Finder (MacOS) or Explorer (Windows) but cannot copy them to another medium.
You receive an “access is denied” error message when you try to access your Micron SSD in Windows.
In MacOS, you see error messages like “First Aid found corruption” after running in-built disk repair utilities. Or, you see messages like “The disk you inserted was not readable by this computer”
You’ve tried running a data recovery program like EaseUS or Recuva, but it keeps freezing.
Checkdisk (Chkdsk) freezes at a particular point.
So, why do bad sectors or bad blocks develop on SSDs?
Well, there are a number of reasons. First of all, like with HDDs, SSDs actually leave the factory with some factory-marked bad blocks. This is because the manufacturing process for NAND is not perfect. Imperfections in the NAND wafer, from which NAND dies are cut, are almost inevitable.
As the SSD gets used, grown bad blocks (sometimes known as runtime bad blocks) start to develop. These can occur for a number of reasons including:
Wear and Tear – The insulation layer of the tunnel oxide in NAND cells begins to degrade due to the Fowler-Nordheim tunnelling process which occurs during P/E (Program/Erase) cycles. Although wear-levelling (WL) algorithms are designed to distribute block usage evenly across the volume, WL is not a perfect process. And don’t forget that some types of NAND have lower endurance than others. At one end of the spectrum, you have high-endurance SLC NAND (which is actually rarely used, even in industrial-class SSDs), while at the other end you have QLC NAND, which is considered low-endurance. To put it into perspective, a 1TB TLC SSD would typically have an endurance rating of 1 DWPD (Drive Writes Per Day), while a 1TB QLC SSD would typically have an endurance rating of just 0.1 DWPD (a rough worked example follows this list of causes).
Trapped Charge – Electrons trapped between NAND cells have long been a source of inter-cell interference. After prolonged usage, charge can become trapped in the nitride layer between the NAND cells. This pushes the voltage threshold for program, read or erase operations too high, resulting in unreadable or unerasable sectors. The trapped-charge problem can also be caused by improper shutdowns of the host system or by power supply issues with the SSD.
Prolonged Storage – If flash-based storage devices such as SSDs have been left powered-off for a while, they can lose charge. This retention loss can result in blocks becoming unreadable and being marked bad by the disk’s Status Register. These bad blocks are also added to the Bad Block Table. Some SSD manufacturers include “refresh” algorithms in their controllers which are designed to recharge cells when the device is connected.
Disturb Failure – NAND cells can get “disturbed” when a bit is unintentionally flipped from a “1” to a “0” or vice versa. This occurs when the voltage applied to the cells being programmed creates an electric field which interferes with neighbouring cells.
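To put those DWPD figures into concrete numbers, here is a rough back-of-the-envelope calculation. The five-year warranty period is an assumption chosen purely for illustration; check your own drive’s specification sheet for its actual TBW rating.

```python
# Rough endurance comparison based on the DWPD figures above.
# The five-year warranty period is an illustrative assumption, not a
# specification for any particular drive.

CAPACITY_TB = 1            # 1TB drive
WARRANTY_YEARS = 5         # assumed warranty period
DAYS = WARRANTY_YEARS * 365

for nand_type, dwpd in [("TLC", 1.0), ("QLC", 0.1)]:
    tbw = CAPACITY_TB * dwpd * DAYS   # total terabytes written over the warranty
    print(f"1TB {nand_type} at {dwpd} DWPD = {tbw:,.1f} TB written over {WARRANTY_YEARS} years")

# 1TB TLC at 1.0 DWPD = 1,825.0 TB written over 5 years
# 1TB QLC at 0.1 DWPD = 182.5 TB written over 5 years
```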
Bad blocks or bad sectors can become very problematic when they start to develop in the System Area of an SSD. This can result in unreadable firmware or unreadable boot initialisation code. The latter scenario can result in your SSD failing to be recognised by your computer. Bad blocks occurring in the user-addressable area of the disk can be managed. Most SSDs have a Bad Block Management (BBM) feature which marks blocks as bad (unreadable). BBM then uses spare blocks from the reserved section of the disk to substitute for the bad ones.
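To make the BBM idea more concrete, here is a minimal conceptual sketch. Real controllers implement this in firmware with vendor-specific data structures; the class, block numbers and reserve-pool size below are purely illustrative.

```python
# Conceptual sketch of Bad Block Management (BBM). Real controllers do this in
# firmware; the structures and numbers here are illustrative only.

class BadBlockManager:
    def __init__(self, reserved_blocks):
        self.reserved = list(reserved_blocks)   # spare blocks set aside at the factory
        self.remap = {}                         # bad block -> substitute block
        self.bad_block_table = set()            # factory-marked plus grown bad blocks

    def mark_bad(self, block):
        """Retire a block that failed a program, erase or read operation."""
        if block in self.bad_block_table:
            return self.remap.get(block)
        if not self.reserved:
            raise RuntimeError("Reserve pool exhausted - the drive is failing")
        substitute = self.reserved.pop(0)
        self.bad_block_table.add(block)
        self.remap[block] = substitute
        return substitute

    def resolve(self, block):
        """Redirect accesses to the substitute block if the original is bad."""
        return self.remap.get(block, block)

bbm = BadBlockManager(reserved_blocks=range(10000, 10100))
bbm.mark_bad(42)            # block 42 failed an erase, so retire it
print(bbm.resolve(42))      # -> 10000, the substitute block
print(bbm.resolve(43))      # -> 43, a healthy block needs no redirection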
Fixing Bad Sectors on SSDs
Over the years, commercial products have been patented and developed to cure bad sectors using methods such as hysteresis. But most of these solutions never really resolved the bad sector problem. Just as with HDDs, there is no real way to fix bad sectors on an SSD. However, an experienced data recovery technician can work around bad sectors and try to recover as much of your data as possible using specialised equipment.
Examples of specialised data recovery equipment include:
Slow Sector Reading
Equipment which slow-reads sectors. The read timeout parameters of a standard operating system are configured for healthy disks. Data recovery equipment allows the technician to read the disk using modified read timeout settings. This means that sectors which a standard operating system (macOS, Windows or a Linux-based OS) would report as “unreadable” are actually readable by the equipment.
Smaller Sector Sizes
Equipment which uses variable sector sizes. For example, an Apple macOS system will typically read disks in increments of 4096 bytes. Professional-level data recovery equipment allows the technician to read data in increments as low as 16 bytes. This sort of granularity, along with delayed reads, allows for successful data recovery from bad-sector areas.
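The following sketch illustrates the general idea of such an imaging pass: small read sizes, per-range retries and skipping of unreadable ranges. It is a simplification; real recovery hardware enforces read timeouts at the interface level, which an ordinary OS-level read cannot do, and the device path, chunk size and retry count here are example values only.

```python
# Simplified illustration of a recovery-style imaging pass: small reads,
# per-range retries and padding of unreadable ranges. Device path, size and
# parameters are examples only.

import os

SECTOR = 512
READ_SECTORS = 64            # read in small 32KiB chunks rather than large blocks
RETRIES = 3                  # how many times to retry a failed chunk

def image_disk(source="/dev/sdb", target="disk.img", size_bytes=512 * 10**9):
    src = os.open(source, os.O_RDONLY)
    with open(target, "wb") as out:
        offset, chunk = 0, SECTOR * READ_SECTORS
        while offset < size_bytes:
            data = None
            for attempt in range(RETRIES):
                try:
                    data = os.pread(src, chunk, offset)
                    break
                except OSError:
                    continue                    # bad sector(s) in this range, retry
            if data is None:
                data = b"\x00" * chunk          # give up: pad the image, log the gap
                print(f"unreadable range at offset {offset}")
            out.write(data)
            offset += chunk
    os.close(src)

# Example usage (requires root access to the raw device):
# image_disk("/dev/sdb", "disk.img", size_bytes=512 * 10**9)
```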
Voltage Control and Firmware Emulation
Data recovery companies can use equipment which can change the voltage supplied to an SSD. This means that an S-ATA or PCIe (NVMe) SSD which is unreadable to a standard computer can be successfully read. And if the System Area of your SSD has become damaged due to bad sectors, a firmware emulator can be used by a data recovery company to substitute for the original firmware. This can result in previously inaccessible data being made accessible again.
Data Recovery from a Micron 2300 SSD
Here at Drive Rescue, we recently came across a prime example of how bad sectors can affect a disk. The disk, a Micron 2300 512GB M.2, was taken from a Dell laptop. In the BIOS, the system reported a SMART predictive failure. The disk was being recognised by the BIOS but not by Windows Explorer. It used 96-layer TLC NAND coupled with an in-house Micron controller. Initial diagnostics revealed that several firmware modules could not be read, so we used a firmware emulator to substitute for the damaged firmware. However, the disk was still reporting extensive bad blocks. We set our data recovery equipment to use a read timeout of over 20,000 milliseconds, a sector retry rate of 3 and a read block size of just 64 sectors. These parameters gave substantially healthier disk reads. After almost 24 hours on our recovery bench, the results were very pleasing. The most important files for our project-manager client were .XLSX, .PDF and .MPP (MS Project), and these were all successfully recovered. The only files which were not recovered were some .MOV files, which the client could download again anyway. Case closed, and our project manager could go back to managing projects instead of the painful and time-consuming task of reconstructing files.
Drive Rescue, Dublin, Ireland offer a complete SSD data recovery service for failed Micron SSDs including models such as Micron C300, Micron C400, Micron 1100 256GB, Micron 1100 512GB, Micron 2210, Micron 2200s, Micron 2200v, Micron 2300 NVMe, Micron 5100 Pro M.2, Micron 5200, Micron 5300, Micron M550, Micron mtfdhba512qfd, Micron mtfddav256tbn and Micron mtfddak512tbn. We recover from Micron SSDs that are not being detected or not recognised by your computer. We also recover from Bitlockered Micron SSDs. Excellent success rates and fast service.
The WD My Passport external hard drive is an extremely popular type of external storage device in Ireland. Made by Western Digital Corporation, these portable USB (2.0, 3.0, 3.1, 3.2) drives come in a variety of colours and sizes. Popular capacities include 1TB, 2TB, 4TB and 5TB. However, like any type of storage media, My Passport disks can fail.
Here are the main reasons:
1) Bad Sectors
Your WD My Passport may fail due to bad sectors. These occur when areas of the disk platter become unreadable. While almost all disks have some bad sectors, which can be managed by the disk’s firmware, some bad sectors cannot be remedied by the disk’s firmware. If these sectors contain user data – it can result in the data becoming inaccessible. Or, if bad sectors develop in the System Area of the drive (where firmware modules are stored) or where MFT (Master File Table) information is stored – this can also result in inaccessible data.
The Fix: The bad sector problem can be mostly solved by using specialised data recovery equipment which is designed to read and re-read damaged sectors at an extremely slow speed and in very small sector sizes.
2) Lost in Translation
Like all hard disks, your WD My Passport uses a translation process to convert logical addresses to physical addresses. (Basically, your file system stores data logically, and the drive’s translator tables map these logical sectors to actual physical sectors on your hard drive. Hard drives use this process because it makes file storage more efficient.) However, sometimes, due to underlying disk problems, the translator becomes corrupt, which means your disk can’t find the data. A toy illustration follows below.
The Fix: Any underlying disk problems, such as bad disk-heads or bad sectors, must be resolved before the translator can be read properly.
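Here is a toy illustration of the translation idea described above. In a real drive the translator lives in firmware; the mapping table below is invented purely to show why corrupt translation makes perfectly intact data unreachable.

```python
# Toy illustration of logical-to-physical address translation. In a real drive
# the translator lives in firmware; this table is invented for illustration.

translator = {
    0: ("platter 0", "head 1", "physical sector 2048"),
    1: ("platter 0", "head 1", "physical sector 2049"),
    2: ("platter 1", "head 2", "physical sector 7312"),   # a reallocated sector
}

def read_lba(lba):
    location = translator.get(lba)
    if location is None:
        raise IOError(f"LBA {lba} has no translation - data unreachable")
    return location

print(read_lba(2))          # the file system asks for LBA 2; the drive finds it
del translator[2]           # simulate translator corruption
try:
    read_lba(2)             # the data still exists physically, but cannot be located
except IOError as err:
    print(err)
```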
3 ) Oops…Accidental Deletion
If you’ve accidentally deleted data from your WD My Passport disk, you’re not alone. Every year, scores of computer users in Ireland accidentally delete data from their disks. This is often due to the distractions of multi-tasking. Confusing one disk for another is more common than you think.
The Fix: Assuming you’ve not overwritten the data with fresh data, your data should be recoverable. This is because, as with any HDD, when you delete data from a WD My Passport, it is not actually erased. The area of the disk is simply marked as “free”, and the underlying data remains until new data is written over it.
4) Accidental Drop of your WD My Passport
One of the top reasons why WD My Passport disks fail prematurely is because the user drops them. Even a small drop from a coffee table can result in your drive’s disk-heads incurring damage. In the worst-case scenario, the heads can scrape against the drive platters, causing irreversible damage.
The Fix: In most cases, the only fix for this type of problem is to bring the disk into a clean-room and insert a new head-disk assembly. In a small minority of cases, the disk-heads can be remapped by manipulating the disk’s firmware, but this methodology will not always be successful.
5) Accidental Liquid Spillage on your WD My Passport
You’re having a nice relaxing cup of coffee. As you reach over your desk to pick up yesterday’s unread newspaper, that cup of java decides to capsize, spilling its contents all over your desk and onto your hard disk.
The Fix: Any liquid like coffee, water, beer or tea coming into contact with your disk’s PCB (printed circuit board – the electronic board just inside the plastic casing of your disk) can cause corrosive damage or pre-amplifier failure. This means that the components (such as diodes and resistors) on the disk’s PCB can get corroded by the liquid – a process which sometimes takes weeks. If you’ve been very unlucky, the liquid spill might have caused a power surge inside your disk, causing its pre-amplifier chip to fail. The first problem can be fixed by fitting a new PCB or by component-level repair; a transplant of the EEPROM chip from the old PCB is also needed. If it’s the pre-amplifier chip which has failed, this usually means a new head-disk assembly. Both fixes are usually successful in getting your WD disk working again.
6) Spindle Damage
The spindle motor plays a crucial role in spinning your disk platters at 5400 RPM. Most modern My Passport disks use a Fluid Dynamic Bearing (FDB). This is a highly sophisticated mechanism which has to spin the platters at a constant rate, but also in a way which minimises NRRO (non-repeatable run-out) errors. If the spindle motor is even a nanometre off kilter, it can result in bad reads. Sometimes, after a knock or fall, the spindle motor will seize. This happens either because a) the herringbone-grooved bearing inside the motor seizes, or b) the lubricating oil inside the spindle motor chamber leaks out due to shock damage. The latter is usually invisible to the naked eye.
The Fix: A special hard disk spindle replacement tool has to be used to extract the old spindle and replace it with a new mechanism. This is a delicate procedure which has to be performed in a clean-room. In most cases, it results in complete data recovery of your WD My Passport disk.
Drive Rescue, Dublin, Ireland offer a complete data recovery service for My Passport disks which are not showing up in Windows or Mac, which are appearing as not initialised, which are generating an “access denied” error message, or disks which are not mounting. We recover from all My Passport models including My Passport for Mac, My Passport Ultra, My Passport Slim and WD My Passport Go SSD.
Predicting or detecting SSD failure is much harder than predicting HDD failure. If an HDD is failing, it can become slow, cause a computer to freeze, or trigger a kernel panic or blue screen of death on the host system. And in some cases, the user will hear a clicking, grinding, beeping or chirping noise. A failing SSD, however, does few of these things. In fact, failing flash-based storage can be quieter than the proverbial church mouse.
That is worrying, because a lot of users are not prepared for the sudden-death failure of their disk. At least with an HDD, the user sometimes gets a bit of leeway to perform an emergency backup. Your SSD could fail in the morning without even giving a peep of warning. SSD manufacturers have brought over a legacy technology called SMART (Self-Monitoring, Analysis and Reporting Technology) to monitor disks and help predict failure. Designed by IBM primarily for ATA and SCSI disks, it monitors parameters such as the Read Error Rate, Reallocated Sectors Count, Power-On Hours, Temperature and Uncorrectable Error Count. And for the SSD era, parameters such as flash program fail, wear-level count and wear-out indicator have been added to the SMART attribute set. But even taking these newly bolted-on features into account, SMART is still an old technology designed for electro-mechanical disks.
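For reference, the attributes mentioned above can be pulled from most disks with the smartctl utility from smartmontools. The sketch below assumes smartctl is installed and run with sufficient privileges; the device path and the attribute names it filters on are examples, and (as discussed below) the names and meanings vary from manufacturer to manufacturer.

```python
# Reading the SMART attributes mentioned above with smartmontools' smartctl.
# The device path and attribute names are examples; output differs by vendor.

import subprocess

def smart_attributes(device="/dev/sda"):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    watchlist = ("Reallocated_Sector", "Wear_Leveling", "Power_On_Hours",
                 "Temperature", "Uncorrectable")
    for line in out.splitlines():
        if any(key in line for key in watchlist):
            print(line)

smart_attributes()
```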
How Accurate is SMART?
SSDs are first and foremost electronic devices, and SMART does not take into account failure or impending failure of electronic components. A failing DRAM chip? A problem with write amplification? A problem with LBA mapping tables? SMART, alas, does not have you covered. SMART will continue to merrily push out disk attributes which sometimes have little salience to the operation of a modern SSD.
And while power-up and power-down events are recorded, SMART gives us no information as to whether these power events were clean or dirty. An SSD could fail with its DRAM cache full to the brim just before a data-corrupting power event, but SMART will be blissfully unaware of it.
SMART is a very siloed tool. It takes into account individual disk performance parameters but does not view them holistically.
SMART is not standardised. While the NVM Express working group is endeavouring to change this, SMART has historically been implemented by SSD manufacturers on a non-standardised basis. This means that a sector reallocation event on a Samsung Evo SSD might be defined totally differently on a SanDisk Plus SSD.
And because SMART has been implemented by manufacturers on their own terms, it has invariably been driven by a commercial imperative. Let’s face it, manufacturers do not want a deluge of RMA’ed SSDs being sent back to them at the slightest hint of malfunction. Therefore, most manufacturers have set their SMART failure thresholds high.
Why SMART is a problem for end-users, computer technicians and system administrators
SMART provides a false sense of security to users. They might have an SSD which is on its last legs, but it will still pass a SMART test. Here at Drive Rescue, we’ve seen this sort of scenario play out countless times.
The problem with SMART and third-party SSD diagnostic tools
Most SSD diagnostic tools, such as CrystalDiskInfo, and SNMP monitoring tools like PRTG rely on SMART information to perform their tests. While these tools can be extremely useful, they can also provide inaccurate information. This is because many SSD manufacturers have designed their disks’ firmware so that its telemetry cannot be fully interrogated by third-party tools. These tools sometimes only scratch the surface of what is really going on inside your SSD.
The Solution
Perform regular backups of your important data. Throw away any notions that SSDs don’t fail or that you’re going to get some warning. Sometimes SSDs fail out of the blue. Backup strategies such as performing 3-2-1 backups are as relevant with SSDs as they were even with the creakiest spinning disks.
Try to use manufacturer-based tools for diagnosing SSD problems. For example, Samsung Magician for Samsung SSDs or Crucial Storage Executive for Crucial SSDs. These tools tend to be slightly more accurate because they are typically allowed more privileged access to your disk’s telemetry data.
Unbelievably, some SSD manufacturers still don’t provide diagnostic tools for their disks. If this is the case, you can use an SSD diagnostic tool like Smart Disk Checker. This will not only read the SMART logs of your disk but will also perform a time-sensitive sector analysis of your disk. This can give you a much better picture of your SSD’s health. This tool is also bootable from USB meaning you don’t have to remove the HDD or SSD from the system.
Drive Rescue, Dublin, Ireland offer a complete data recovery service from inaccessible S-ATA and M.2 NVMe SSDs. Common SSDs we recover from include models such as Lenovo MZ-VKV5120, Toshiba THNSFJ256GDNU, THNSN5512GPUK, Samsung MZ-NLN5120, MZ-VLB5120 and MZ-VLB2560, WD SN520, WD SN550 and SanDisk X400.
Heat map of an SSD – notice how the controller gets hottest!
Let’s face it, some SSD models belch out more heat than a small nuclear power station. For some SSD models, running hot is their normal mode of operation. In fact, with some S-ATA-based SSDs, their metal chassis is not only designed to protect the electronics of the disk, but to also act as a passive heat-sink. For a standard computer, a typical temperature for an SSD under load is between 30°C and 50°C (86°F and 122°F) but this can vary a little between manufacturers. It is also normal to have spikes of heat when your SSD goes from being idle to performing an intensive task, such as a large data transfer.
SSDs use NAND flash memory. This type of storage is non-volatile, which means it doesn’t require a continuous power supply to retain data. The floating-gate transistor (FGT, a metal-oxide semiconductor) is a popular type of NAND cell used in SSDs (such as those produced by Intel). Another cell type used in NAND memory is Charge Trap Flash (CTF), but its thermal properties are similar to FGT, so for the purposes of this blog, the impact of heat on FGT-based SSDs will be discussed.
The FGT is basically composed of two gates: the floating gate (FG) and the control gate (CG). Removing the electric charge from the FG is the Erase operation (erasing data), whereas storing charge in it is the Program operation (writing data). These operations require power, and the temperature can increase significantly when the SSD is subjected to large workloads.
The “electron tunnelling” process used during Program/Erase (write/erase) cycles can damage the cell (FGT). The tunnel oxide, one of the layers that make up the FGT, wears out over time when it is exposed to high temperatures. This wear-out results in electron leakage and bit errors.
When an SSD is overheating, the controller can malfunction leading to all sorts of erratic disk behaviour such as:
Your SSD is not recognised by Windows.
Your computer can’t see your SSD.
Your SSD appears as unformatted.
When you try to copy files off your SSD, your computer keeps on freezing.
You cannot copy files off your SSD.
Some files seem to have disappeared off your SSD for no particular reason.
The Catch-22 of SSDs and Heat
Be careful here! Many internet commentators mention that read/write operations in SSDs perform better at higher temperatures. This is correct; NAND programming has always worked optimally at higher temperatures. Put simply, when your SSD is hot, the read, write and erase operations will be quicker and smoother compared to a cooler disk. Degradation of the cell oxide layers is also reduced because the heat causes less stress.
The M.2 Form Factor and Heat
User demand for lighter and thinner devices is not helping the situation. For example, the M.2 “stick of chewing gum” sized form factor has a relatively small surface area coupled with high data densities. A drive in this form factor can draw up to 7 watts, which can push temperatures up to 100°C. (At least S-ATA-based SSDs have a larger surface area for heat dissipation and can use their chassis, which is often metal, as a heat-sink.)
Enter Thermal Throttling to Cool Things a Bit but also Slow Them Down…
Many SSD manufacturers use a function known as Thermal Throttling to prevent their devices from overheating. This monitors the temperature of the SSD via a built-in sensor. When the disk temperature reaches a pre-defined threshold, the thermal management function slows down the SSD’s performance to prevent it exceeding its maximum temperature. This results in fewer bits flipping due to heat and ultimately prevents premature failure. Throttling typically only kicks in above 70°C (158°F), a temperature which is “normal” for an M.2 drive under load. However, to ascertain the normal operating temperature of your SSD, refer to the manufacturer’s specification sheet.
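A simplified sketch of that control loop is shown below. Real controllers implement this in firmware with vendor-specific thresholds and hysteresis; the 70°C/60°C figures and the simulated sensor readings here are illustrative only.

```python
# Simplified sketch of a thermal-throttling control loop. Thresholds and the
# simulated sensor readings are illustrative, not any vendor's specification.

import itertools

THROTTLE_AT = 70   # degrees C - begin reducing performance
RESUME_AT = 60     # degrees C - restore full performance (hysteresis avoids flapping)

# Simulated readings standing in for the SSD's built-in thermal sensor.
sensor = itertools.cycle([55, 65, 72, 78, 74, 66, 58])

def thermal_throttle_step(temp, throttled):
    if not throttled and temp >= THROTTLE_AT:
        return True      # e.g. lower NAND/controller clocks or queue depth
    if throttled and temp <= RESUME_AT:
        return False     # cool enough to restore full performance
    return throttled

throttled = False
for _ in range(7):
    temp = next(sensor)
    throttled = thermal_throttle_step(temp, throttled)
    print(f"{temp}C -> {'throttled' if throttled else 'full speed'}")
```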
Each manufacturer will implement thermal throttling differently. For example, Samsung SSDs use Dynamic Thermal Guard (DTG). If a disk exceeds a threshold temperature, DTG will reduce the power to the NAND and the MCU (controller). This disk self-preservation mechanism usually kicks in at around 75°C. For a lot of their SSD models, such as the 950 Pro, 960 Pro and 970 Pro, thermal throttling can be a fairly common occurrence under sustained workloads, such as heavy video editing or when the disk is being used in a busy VM server.
Small but beautifully formed… Copper heatsinks of just 1.5mm in thickness can be used to cool overheating M.2 NVMe SSDs and have the potential to bring disk temperatures down by as much as 20°C, prolonging the life of your data.
Cool your Jets… Fixing Overheating SSDs
Thermal throttling has the undesirable side-effect of slowing down your SSD. But there must be other ways to cool an overheating SSD, right? Some external cooling options are available if you are dealing with an overheating M.2 NVMe SSD in a laptop. One of the most effective ways to cool an SSD of this type is to use a copper heatsink, space permitting. Pure copper has a thermal conductivity of 401 W/mK and dissipates heat well, lowering your SSD’s temperature by anywhere from 5°C to 20°C. These heatsinks come in thicknesses of just 1.5mm and fit nicely over 2280 and 2260 form-factor SSDs. For the best results, always remove the disk manufacturer’s specification sticker before adding the heatsink. (However, do keep this sticker somewhere safe for future reference.)
In terms of desktop computers, there’s a lot more leeway to implement effective cooling measures.
1) Change your SSD’s PCIe slot, if possible – ensure it is positioned away from other heat-generating components, such as GPUs.
2) Try adding a new case fan, if space permits, and strategically position its airflow towards an overheating SSD to cool it.
3) Finally, you could try using a PCIe riser card. This is a PCIe card which your SSD slots into. It uses a heatsink, fan or both to cool your SSD.
An overheating Intel SSD660p: cool to look at but not so cool to the touch.
Data Recovery from an Intel SSD PCIe 660p M.2 Disk
Last week, we were dealing with an Intel SSD 660p which was proving toasty even after being connected for only ten minutes. This was making sector reads very difficult. We first had to bring the core temperature of the disk down. For this, we used a custom cooling device made for failing SSDs. It uses a heatsink with a very high surface area, which maximises the dissipation of heat, along with a high-velocity fan which cools the disk further. This enabled us to bring the disk’s temperature down from 80 to 52 degrees Celsius. Once the Intel 660p’s temperature had stabilised, we were able to connect it to our PCIe data recovery system. Normal reads were proving impossible, so we had to use a special PCIe disk reader with adjustable read-timeout settings, controller power settings and disk reset functions. At a glacial speed of only 64 sectors per read, the disk took around two days to image. Even after this process, the disk’s NTFS partition needed some repair to its MFT. However, the effort was worth it – most of the client’s files (.DOC, .PDF, .XLSX, .PPTX) were successfully recovered.
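Some rough arithmetic shows why imaging at 64 sectors per read takes days. The 512GB capacity and the ~11ms average time per read used below are assumptions chosen for illustration; the real figures depend on the drive and on how many retries each read needs.

```python
# Back-of-the-envelope arithmetic on why imaging at 64 sectors per read takes
# days. Capacity and average per-read latency are illustrative assumptions.

SECTOR = 512
READ_SECTORS = 64
CAPACITY = 512 * 10**9                          # assumed capacity in bytes

reads_needed = CAPACITY / (SECTOR * READ_SECTORS)
avg_read_ms = 11                                # assumed average, including retries
hours = reads_needed * avg_read_ms / 1000 / 3600

print(f"{reads_needed:,.0f} reads, roughly {hours:.0f} hours")
# -> 15,625,000 reads, roughly 48 hours - in line with the ~two days above
```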
Drive Rescue, Dublin, Ireland offers an advanced data recovery service for failed SSDs such as the Intel 660p, Intel 7600p, Intel H10 M.2, Micron 1100, 1300, 2200, 2300, 5100, WD SN550, SN750 and SK Hynix PC601, HFM256GDJTNG, HFM512GDJTNG. Serving satisfied customers in Dublin since 2007.
The SanDisk Cruzer Blade is a popular model of USB 2.0 memory stick on the Irish market. It uses a monolithic TLC NAND chip (usually TSOP48) and an in-house controller designed by SanDisk. The Cruzer Blade range comes in capacities of 8GB (SDCZ50-008G), 16GB (SDCZ50-016G), 32GB (SDCZ50-032G), 64GB (SDCZ50-064G) and 128GB (SDCZ50-128G).
However, like with any USB memory device, it is liable to corruption and events where your data is rendered inaccessible. For example, when you connect your Cruzer USB disk to your computer, you may receive an error message such as:
“You need to format the disk in drive E: before you can use it”.
“USB device not recognised”
“The parameter is incorrect”
Alternatively, your SanDisk Cruzer memory stick may appear to be totally dead when connected to your laptop or desktop computer.
Reasons why SanDisk Cruzer Blade USB devices fail
There are several reasons why your memory stick may fail to be recognised in Windows or on macOS. These include:
Its bootloader has failed. The bootloader is the microcode needed for your memory stick to initialise. When this fails to load, your disk becomes unrecognisable.
There are two main components in a USB flash drive – the NAND chip (where your data is stored) and the controller chip. The controller chip is like the brain of your memory stick. It controls the read, write and erase processes. It also controls processes such as ECC (Error Correction Code) and wear-levelling. If your controller goes corrupt, the data on your stick may become inaccessible.
The NAND cells on your SanDisk Cruzer Blade may have degraded or developed uncorrectable bit errors.
The file system (FAT32, NTFS, exFAT or HFS) on your Cruzer Blade USB stick may have gone corrupt.
Your SanDisk USB device might have been subject to an over-voltage event. This can occur if a USB port – on your computer, smart TV or NVR, for example – delivered too much voltage to your disk and caused damage to a component such as a diode or resistor.
Recovering Data from your SanDisk Cruzer USB memory stick.
Make sure your Cruzer USB memory stick is assigned a drive letter in Windows. You can check this by going into Disk Management (Control Panel > Administrative Tools > Computer Management > Disk Management)
Try using another computer. It is always possible that a glitch on your Windows or MacOS computer is preventing your Cruzer USB stick from being read.
Connect your memory stick directly to your computer. Do not use a USB hub as an interface between your computer and your USB memory stick. This is because a USB hub can sometimes create device recognition issues.
Mini SanDisk Cruzer Blade Data Recovery Case Study
We recently had a case where an employee of a Dublin-based investment company had a problem with their 8GB SanDisk USB Cruzer drive (SDCZ50C-008G). When they connected it to their Windows computer system, it would not appear in Windows Explorer. They had an extensive collection of research reports (PDF) and financial projections (Excel) stored on it which they badly needed to retrieve. The device was encrypted with McAfee Endpoint Encryption for Removable Media. They had assumed this encryption software was causing the issue. However, their IT support department examined their Cruzer USB disk and discovered that the device was not being recognised by any of their systems. They recommended Drive Rescue.
We connected the inaccessible disk to one of our data recovery systems, designed to read flash-based storage at a very low level. We performed a test read. However, after being connected for less than five minutes, we discovered that the USB drive had already disconnected! This was not looking good. A look at our system’s log files showed that the device had disconnected (virtually) from our system after only 3.49 minutes. We surmised that, even though the disk was being read at a very low level, our recovery system was dropping the disk because of too many read instability issues. In order to circumvent this problem, we would have to use a second tool in our armoury to maintain the connection between our recovery system and the failing Cruzer disk. This specialised USB reader is designed especially for reading data from failing USB devices. It uses an Arm processor which acts as an intermediary between the recovery system and the problem disk. When the disk is no longer interfacing directly with the operating system, we can control read-timeout and disk-reinitialisation parameters. In this particular case, the Cruzer USB had multiple unreadable NAND cells. So, we changed the read timeout to 10,000 milliseconds and then controlled the disk initialisation rate whenever our equipment encountered bad cells. Our data recovery systems were now able to read the data in a much more stable and predictable way.
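Conceptually, the recovery loop has to remember the last good offset, wait for the dropped device to re-appear, and then resume from where it left off. The sketch below shows that idea in software only; the dedicated USB reader described above does this in hardware, with controlled power cycling and re-initialisation, and the device path, capacity and wait time here are example values.

```python
# Conceptual sketch of imaging a USB device that keeps dropping off the bus:
# remember the last good offset, wait for re-enumeration, then resume.
# Device path, capacity and wait time are example values only.

import os, time

CHUNK = 512 * 64            # read in small 32KiB chunks

def image_unstable_usb(device="/dev/sdc", target="cruzer.img",
                       size_bytes=8 * 10**9, reinit_wait=10):
    offset = 0
    with open(target, "wb") as out:
        while offset < size_bytes:
            try:
                fd = os.open(device, os.O_RDONLY)
                os.lseek(fd, offset, os.SEEK_SET)
                while offset < size_bytes:
                    data = os.read(fd, CHUNK)
                    if not data:                 # reached the end of the device
                        os.close(fd)
                        return
                    out.write(data)
                    offset += len(data)
                os.close(fd)
            except OSError:
                # The stick has dropped off the bus: wait for it to come back,
                # then resume from the last offset that was read successfully.
                print(f"device dropped at offset {offset}; waiting {reinit_wait}s")
                time.sleep(reinit_wait)
```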
Successful Recovery: All files recovered.
After about seven hours on our bench, the 8GB Cruzer disk finally imaged to an SSD. Connecting the SSD to a standard Windows 10 workstation system presented us with a dialogue box requesting an encryption key. A very welcome sight! The client provided us with their McAfee Encryption key. This granted us access to the drive’s data immediately. Our client could now be reunited with their data again. The prospect of having to re-do hours and hours of painstaking work was now over!
Samsung’s exit from the electro-mechanical hard disk market in 2011 shocked a lot of people in the data storage world. Among OEMs, professional users and prosumers, their Spinpoint line-up of disks had developed an enviable reputation for performance and reliability. And while Samsung might not have enjoyed the market share of Seagate or Western Digital – their exit showed that nothing is predictable in the land of hard drives.
Samsung would continue to churn out disks, but only of the solid-state variety. Shortly before their exit from the mechanical disk market, the Korean electronics giant launched their 830 series of SSDs, followed by the 840 series a year later. The latter series was trailblazing because it allowed Samsung to prove to the mainstream market that 3-bit MLC NAND could offer reliability, stability and high performance in solid-state disks.
The pioneering spirit of Samsung did not stop with the type of NAND they used. In 2015, they introduced their T1 credit-card-sized external SSDs, making them one of the first large-scale disk manufacturers to offer miniature portable SSD storage. The sleek T1 (using an MGX controller) could be easily slipped into a pocket and proved that not all external disks had to be mechanical, and could even be quite elegant devices.
A Samsung T5 SSD
In 2017, Samsung launched their T5 external disk (models such as MU-PA250B, MU-PA500B, MU-PA1T0B and MU-PA2T0B) in capacities of 250GB, 500GB, 1TB and 2TB. These disks used 64-layer V-NAND, a USB 3.1 Type-C port and a metal casing which doubled as a heat-sink. Not only that, but unusually for an external SSD, it supported TRIM (enabled by a UASP-compatible bridge board). In 2020, we saw the introduction of the T7 portable disk (models such as MU-PC500R, MU-PC1T0R and MU-PC2T0T). This 128-layer 3D TLC NAND disk (using a “lite” version of their Pablo controller) was their first NVMe-based external disk and offered blistering sequential read and write speeds of over 1,000 MB/s.
As innovative as the Samsung T-series external SSDs are, they are not without their issues. Their MGX and Pablo controllers can lock up, their firmware can degrade, their bootloaders can fail and their NAND cells can develop unrecoverable bit errors. And, as with any disk, file systems (exFAT, HFS+) can go corrupt or disappear.
Common symptoms of a failed Samsung T5 or T7 external SSD.
When you connect your Samsung T5 or T7 to a Windows system, you receive a message that “the parameter is incorrect”
You receive a message in Samsung Magician that “No Samsung portable SSD is connected”
Your Samsung T5 or T7 appears as “unformatted” in Windows.
Your Samsung T5 or T7 does not appear in Finder.
Your Samsung T5 or T7 does not appear in Windows Explorer.
The blue light of your T5 or T7 is flashing or blinking, but no data appears.
The light of your T5 or T7 is solid blue, but the disk is not recognised by your computer.
Why your Samsung T5 or T7 is no longer recognised by your computer…
The bootloader in your Samsung SSD might have gone corrupt. The bootloader is a set of instructional microcode used to load firmware when your disk initialises.
Your external disk might have been subject to an over-voltage event. For example, the host computer might have experienced a power surge and your Samsung T5 or T7 got subjected to too much voltage via one of its USB ports. The voltage rating for your Samsung disk is 5V. Any voltage in excess of this can damage it.
The partition table or file system of your disk might have become corrupt. exFAT is the factory-default file system of the T5 and T7, but some users reformat to NTFS, APFS or HFS+. These file systems can go corrupt due to firmware problems or if your disk has been filled to capacity. Such events can result in your drive not being recognised by your computer.
It’s possible that the Flash Translation Layer (or translator) of your T5 or T7 disk has failed. The FTL performs the crucial task of translating the logical sectors on your disk to physical addresses. It acts like the index of a book for your disk; when it fails, your data becomes inaccessible.
A Samsung T7 – James Joyce is quoted as saying “Dublin will be written in my heart”, Samsung can claim Dublin is written on their portable SSDs…
As you can see, there are several possible reasons why your Samsung T5 or T7 portable disk is no longer recognised by Windows 10/11 or macOS.
How to recover data from your Samsung T5 or T7 portable SSD.
Important note: You might see a message in Windows such as “You need to format the disk in drive E: before you can use it. Do you want to format it?”. Under no circumstances should you click on “Format disk”, as this can result in irreversible data loss.
Try a Different Cable
Sometimes cables or their connectors can get damaged. Try using a different USB Type-C to Type-C cable or a USB Type-C to Type-A cable.
Try a Different USB Port
Try using a different USB host port on your computer. Better still, try accessing the data on your T5 or T7 using another computer. It is important to connect your disk directly to your computer; do not use a USB hub, as this adds another layer of abstraction and can sometimes thwart data recovery efforts.
Make sure your T5 or T7 disk has been assigned a drive letter
If you’re a Windows user, check Disk Management (Control Panel>Computer Management>Disk Management) to verify that your disk has been assigned a drive letter. If not, assign a letter to your disk.
Mac users – try running First Aid on your T5 or T7 disk
If your Samsung T5 or T7 SSD does not appear in Finder, try running First Aid on your disk. This feature can be found in Disk Utility on your Mac and can sometimes repair small issues with your disk’s file system. If this does not work, you can try running an “fsck” command via Terminal.
Advanced Samsung T5 and T7 data recovery strategies
If you suspect your T5 or T7 disk has a locked controller, a professional data recovery firm should be able to put your disk into “technological mode” to read its data.
If your disk’s Flash Translation Layer has failed or gone corrupt, a data recovery professional will have to use a firmware emulator to read the disk’s data.
Drive Rescue is based in Dublin, Ireland. We offer a complete data recovery service for Samsung SSDs such as the T5 and T7. We also recover from mechanical Samsung disks such as the Samsung M3, Samsung ST1000LM024 and ST2000LM003.
3 SSDs which we loaded with two data sets. The data sets were then deleted. Which disk would still have its data intact after 24 hours?
Drive Rescue recently gave a guest lecture to a computer science class at a well-known Dublin third-level institution. Their lecturer wanted to give his class some real-world insights into how the world of practitioners sometimes differs from the world of academic theory. So, in the name of science and knowledge enhancement for all, we duly obliged.
The topic we decided to talk about was garbage collection in solid-state disks (SSDs). Garbage collection is a silent (disk-controller) process which runs in the background of most solid-state disks and operates as a sort of clean-up mechanism for data which has recently been subject to the delete command. This makes read, write and erase operations in SSDs more efficient. However, for the forensic investigator, the security analyst, the systems administrator or indeed the data recovery technician, the garbage collection feature has the potential to complicate investigations and recovery cases.
Data Deletion from HDDs
When data is deleted from a traditional electro-mechanical hard disk, the space on the volume is marked as free by the disk, but the actual data is not deleted until new data is written to the same location.
Why Garbage Collection is a problem
File deletion with SSDs works differently. Unlike HDDs, they cannot simply overwrite data in place; SSDs must write to blank pages. Moreover, an SSD cannot erase data at page level – erasure must happen at block level. For this reason, SSDs use TRIM and garbage collection to make sure there are always blank pages available for writing.
Most academic texts discussing data deletion in SSDs invariably discuss the topic of TRIM (a delete notification command sent from the operating system). However, the less discussed and underplayed topic is garbage collection. TRIM can be stopped simply by disconnecting the disk from the host system. But with garbage collection, because the process is initiated by the disk’s own controller (MCU), the disk only has to be powered up for the process to start. This is a massive problem, because as soon as the disk is powered up, deleted data (or evidence of deleted data) starts getting destroyed. It means that the MD5 hash of an SSD can change within minutes, making an SSD forensically unsound.
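The sketch below shows, in very simplified form, why garbage collection is so destructive to deleted data: pages are programmed individually, but erasure happens a block at a time, so when the controller consolidates a block it copies out only the valid pages and erases everything else. The page contents, block size and trigger policy are invented for illustration.

```python
# Conceptual sketch of why garbage collection destroys "deleted" data. Pages
# are programmed individually, but erasure happens a whole block at a time, so
# consolidating a block erases any deleted-but-not-yet-erased pages with it.
# Page contents, block size and trigger policy are illustrative only.

block = [
    {"lba": 100, "data": b"invoice.xlsx", "valid": True},
    {"lba": 101, "data": b"old photo",    "valid": False},  # TRIMmed/deleted page
    {"lba": 102, "data": b"report.pdf",   "valid": True},
    {"lba": 103, "data": b"temp file",    "valid": False},  # deleted page
]

def garbage_collect(block, free_block):
    """Copy valid pages to a free block, then erase the old block wholesale."""
    for page in block:
        if page["valid"]:
            free_block.append(page)          # relocate live data
    block.clear()                            # block erase: deleted data is now gone
    return free_block

free_block = garbage_collect(block, [])
print([p["lba"] for p in free_block])        # -> [100, 102]
print(block)                                 # -> [] - nothing left to recover
```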
In order for the class to understand this process a little better, Drive Rescue set up a small experiment. We got three SSDs all of which were of a similar size.
Crucial MX 500 (500GB) – SM2258H
WD Blue (500GB) – Marvell 88SS1074 (Custom WD)
Kingston A400 (480GB) – Phison S11
We put the same two data sets onto each of them and then, using Windows Explorer, deleted the two data sets from each SSD. But, a little bit of background information first. All the disks were brand new, and the data sets were designed to emulate as closely as possible the file contents of a standard Windows 10 computer. Data Sample 01 (11.9GB) contained Office documents (.docx, .pptx, .xlsx), video files (.avi and .m4v), photos (JPEG), and application and operating system files, while Data Sample 02 (30.8GB) contained .PDF, .PST and application files.
We connected all three solid-state disks to a standard Windows 10 Professional desktop system using three separate disk caddies (all Orico 2.5”), which were in turn connected to the USB 3.0 ports of the host. Fifteen minutes after the delete command was issued, we scanned each of the disks using Forensic Toolkit (AccessData). The Crucial MX 500 and WD Blue still had their data intact. The Kingston A400 had already lost Data Sample 01.
It was now approaching 5pm, so we would leave all the disks connected to the host overnight. In a move which would have incurred the wrath of Greta Thunberg, we disabled all power-saving features of the Windows 10 host system.
At 9am the next morning, we checked the disks again. They still had their data intact. (Obviously, Data Sample 01 on the Kingston was still undetectable.) We checked again at 11am; the result was the same. Finally, at 12pm, we discovered that Data Sample 02 on the Crucial MX was no longer appearing in FTK.
Discussion
It would appear that the Phison S11 controller used by the Kingston A400 has a very aggressive garbage collection algorithm, deleting all evidence of Data Sample 01 in under 15 minutes. We were expecting the Crucial and WD disks to lose their Data Sample 01 in line with the Kingston, but this did not happen. Instead, the Crucial relinquished all evidence of Data Sample 02 some 19 hours later. And under our twenty-four-hour test conditions, all the data on the WD Blue SSD would have been recoverable. This certainly contradicts the wisdom found on internet forums that once data is deleted from an SSD, it’s gone. Our little experiment proved otherwise. It also proved that there is very little uniformity in the way SSDs from different manufacturers, or SSDs using different controllers, handle deleted data.
Mitigating the effects of garbage collection
Data Sample 01 on the Kingston SSD was undetectable after just 15 minutes. Had this been a real-life case, it could have posed a major problem for a forensic investigator, system administrator or data recovery technician. One participant in the class suggested that a write blocker could have been used. However, write blockers are traditionally used to block I/O requests from the operating system and not internal commands from the disk controller.
Other Possible Solutions
One possible solution would be to disconnect the NAND chip from the PCB of the SSD in order to prevent garbage collection from operating. However, this “chip-off” approach is a high-risk procedure, because the controller is needed to read the data. And even when reading the NAND chips using an emulator, the investigator might not have the exact controller microcode for the disk model to upload. Some forensic investigators claim that activating “auto-dismount” on the host system can mitigate the effects of garbage collection, while others claim that using a write blocker can dampen the process. However, none of these researchers have explained specifically how these measures interact with the disk controller to slow or stop the garbage collection process completely. There is also the option of imaging the SSD completely; however, with an unstable SSD, this might not be possible.
Further investigation
Further investigation of this issue will be difficult as garbage collection algorithms used by SSD / controller vendors are usually proprietary and a source of competitive advantage. The test and observe method might prove to be one of the richest sources of information on this topic. For those involved in disk forensics and recovery, it means there are going to be some interesting years ahead.
Yesterday, we recovered photos from this ST1000DM003 hard drive. The disk had over 2500 JPEG files ensconced inside a Photos library. This APFS formatted Seagate S-ATA disk had firmware issues, but also had extensive bad sectors (over 36,000). When the disk was connected to another MacOS system via a USB 3.0 dock, it was not being recognised by Finder. The client even tried Target Disk Mode to recover the photos, but this also proved unfruitful.
Under the hood, the JPEG (Joint Photographic Experts Group) file format is a compressed format and, as file structures go, is actually quite complex. It is comprised of multiple constituent parts, such as metadata and payload. When a disk goes bad or corrupt, it is usually the metadata which gets damaged.
Connection to the disk’s serial port – this enables us to access and make repairs to the brain of the disk: the firmware!
Problem Solved
Firstly, we connected the disk to our recovery system via its S-ATA and power connections. We then connected the disk to our recovery system using its serial port. The serial port on the ST1000DM003 is to the left of the S-ATA data port and can be recognised by its 4 pins. Connecting the disk this way gives us direct access to the disk’s firmware modules, enabling us to repair the corrupt translator module. We then used our specialised data recovery equipment to long-read the damaged sectors of the disk. This equipment is tuned to read data from damaged disks where a standard operating system, such as macOS, Windows or Linux, would just generate multiple I/O errors.
Nothing evokes memories with such power as a photograph…
The Result
We achieved a 96% data recovery rate of the client’s MacOS Photos library which was of extreme sentimental value to them. With their memories restored – they could now treasure and enjoy them for years to come.
TRIM hoovers up accidentally deleted files in the same way that Pac-Man devours the dots…
If you’re using an SSD and accidentally delete a file or folder in Windows (or in macOS), there is a substantial risk that TRIM, along with some other SSD housekeeping functions, will thwart recovery efforts by deleting all data remanence. A simple way to think about how TRIM works is to think of Pac-Man: it operates inside your disk, hoovering up deleted files in the same way that Pac-Man devours the dots. Needless to say, this can be very problematic in terms of recovering data from SSDs.
So, let’s say an SSD user has accidentally deleted an important file or folder. Obviously, they should turn off their computer immediately, but before they do, they should turn off TRIM as soon as possible. This just might help to make their deleted data more recoverable. If you are a Windows user, type “powershell” into the Windows search bar and “PowerShell” should appear in the menu. Right-click it to bring up the option “Run as administrator”. At the prompt, type “fsutil behavior set DisableDeleteNotify 1” to disable TRIM (0 to re-enable). Mac users should go to Terminal and type in “sudo trimforce disable”. TRIM will now be disabled when you restart the computer.
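For anyone who prefers to script it, the same fsutil commands quoted above can be wrapped as below. This is Windows-only, must be run from an elevated (administrator) session, and note the inverted sense of the flag: setting DisableDeleteNotify to 1 disables TRIM.

```python
# Wrapping the fsutil commands quoted above. Windows-only; run from an
# elevated (administrator) session. DisableDeleteNotify = 1 disables TRIM.

import subprocess

def set_trim(enabled: bool):
    value = "0" if enabled else "1"          # note the inverted sense of the flag
    subprocess.run(["fsutil", "behavior", "set", "DisableDeleteNotify", value],
                   check=True)

def trim_status():
    subprocess.run(["fsutil", "behavior", "query", "DisableDeleteNotify"])

set_trim(False)     # disable TRIM before shutting the machine down
trim_status()
```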
Another way of disabling TRIM is to simply disconnect your SSD from your computer’s S-ATA or PCIe connection and access the disk via a USB dock or caddy. (For most disks, TRIM cannot work over USB). However, even with TRIM disabled, there are other background processes running inside your SSD which can also jeopardise the probability of a successful recovery. These will be discussed in another blog post.