A customer recently contacted us saying they did not need data recovery from their 64GB SanDisk SD card; they in fact needed to wipe it. They had already transferred their photos and videos to the computer and verified that the transfer had been successful. With the shops being shut, they wanted to re-use this 64GB card, but formatting or wiping it was proving nigh impossible. They tried using “diskutil” on their MacBook, but to no avail. They even tried “diskpart” on their Windows laptop, but that too proved unsuccessful. Thankfully, there is a handy tool from the SD Association, the SD Memory Card Formatter, which solves this seemingly intractable and common problem. It formats SD cards with ease and removes the need for command-line gymnastics with diskutil or diskpart.
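For the curious, here is roughly what the command-line route looks like. The sketch below wraps the commands in a dry-run guard because they are destructive, and the device identifiers (/dev/disk2, disk 2) are pure assumptions; always verify yours first with `diskutil list` on macOS or `list disk` inside diskpart on Windows.

```python
# Dry-run sketch of the usual SD card wipe commands. DESTRUCTIVE once
# DRY_RUN is set to False, so double-check the device identifier first.
import subprocess

DRY_RUN = True  # flip to False only after confirming the device identifier


def run(cmd):
    """Print the command in dry-run mode; execute it otherwise."""
    print("would run:" if DRY_RUN else "running:", " ".join(cmd))
    if not DRY_RUN:
        subprocess.run(cmd, check=True)


# macOS: full erase as FAT32 ("MS-DOS") with an MBR partition map
run(["diskutil", "eraseDisk", "FAT32", "SDCARD", "MBRFormat", "/dev/disk2"])

# Windows equivalent: the script you would feed to diskpart
for line in ["select disk 2", "clean", "create partition primary",
             "format fs=exfat quick", "assign"]:
    print("diskpart>", line)
```

The SD Memory Card Formatter tool does all of the above (with the correct FAT32/exFAT choice for the card's capacity) without any of the risk of wiping the wrong disk.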
BitLocker is a common full-disk encryption application used on Windows 10 laptops. We recently had a client where, on their Dell laptop running Windows 10 Pro, a BitLocker dialogue box appeared out of the blue requesting a “recovery key”. Without it, it was looking like they would not be able to access their desktop or their files. The box, entitled “BitLocker Recovery”, requested that the user “enter the recovery key for this drive”. (This is normally a 48-digit key which decrypts the Volume Master Key needed for the decryption process to run.) You may also see a request to “enter the PIN to unlock this drive”. This was a complete surprise to our client. They had never enabled BitLocker on the system. In fact, just to be sure, they double-checked their Microsoft online account. There was no evidence of BitLocker ever having been enabled.
Fortunately, this was no big deal. It’s just a little quirk on Dell laptops where the system gives the illusion of being encrypted with BitLocker when it’s actually not. Here are the steps to fix it.
- Access your Dell’s BIOS. You can normally do this by pressing F2 just after you press the power button.
- Look for the section called “Secure Boot”.
- Now navigate down to a section called “Expert Key Management”.
- Select “Restore Settings” followed by “Factory Settings”.
- Click “OK” and exit the BIOS.
When you restart the computer, the BitLocker recovery box should have gone away.
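If you want to confirm that the volume was never actually encrypted, Windows’ built-in (and read-only) `manage-bde -status` command will report the conversion status of each drive. A small sketch that calls it where available; the wrapper function is our own, not part of Windows:

```python
# Query BitLocker status via the built-in Windows manage-bde tool.
# "Conversion Status: Fully Decrypted" / "Protection Off" in the output
# means the volume is not actually BitLocker-encrypted, despite the prompt.
# On non-Windows systems this sketch simply reports the tool is missing.
import shutil
import subprocess


def bitlocker_status():
    if shutil.which("manage-bde") is None:
        return "manage-bde not found (not a Windows system?)"
    result = subprocess.run(["manage-bde", "-status"],
                            capture_output=True, text=True)
    return result.stdout


print(bitlocker_status())
```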
Drive Rescue are based in Dublin, Ireland and offer a full data recovery service for BitLockered drives removed from Dell, Lenovo, HP, Fujitsu and Acer laptops, even if physically damaged. We also offer a data recovery service for Microsoft Surface SSDs which are BitLocker protected. Phone us on 1890 571 571.
The platters are probably one of the most important components of a hard disk. The platter used in a modern hard disk typically comprises three different layers: the lubrication layer, the carbon layer and the magnetic storage layer. In the lubrication layer, a coating of perfluoropolyether is used; the viscosity and inertness of this next-generation lubricant are both perfect for platter surfaces. The carbon layer (or diamond-like carbon layer) is used to prevent moisture seeping through to the magnetic layer. It is also nitrogenised to improve durability. The magnetic storage layer consists of cobalt, chromium and platinum. Cobalt provides the orientation of the magnetic crystals, chromium enhances the signal-to-noise ratio, while platinum helps to stabilise the temperature. In order to reduce crosstalk between the layers, ruthenium is also added to the mix.
Even before they leave the factory, disk platters already exhibit defects in the form of asperities. These microscopic “craters” of alumina on the platter surface are a by-product of the sputtering process (which is used to deposit the metallic layers onto the bare platter substrate). Even though these surface imperfections are minimised by burnishing and polishing, they cannot be totally eliminated from the finished product.
Manufacturers use what is known as a “P-List” to map out these defects so that the firmware knows not to write to these locations. (This list used to be on a piece of paper that accompanied a new disk, but was subsequently put on a 3.5” floppy disk. Today, the P-List is stored on the disk itself.) Run a “V40” terminal command on a brand-new HDD and prepare to be shocked at the thousands of defects that have already been logged! The use of “padding” around defective areas is another counterbalance employed to provide an extra safeguard against bad writes. With this technique, even healthy blocks around the defect areas are marked as “bad”.
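The padding technique is easy to picture in code. Below is a minimal sketch (the sector numbers and guard-band width are made up for illustration; real firmware works quite differently) showing how each P-List defect can be expanded by a guard band before any write is allowed:

```python
# Sketch of P-List defect mapping with "padding": each known defect is
# expanded by a guard band so that nearby, possibly marginal, sectors are
# also treated as bad. All sector numbers here are illustrative.

PADDING = 2  # guard sectors marked bad on each side of a defect


def build_bad_sector_map(p_list, padding=PADDING):
    """Expand each defective sector in the P-List by `padding` sectors."""
    bad = set()
    for sector in p_list:
        for s in range(sector - padding, sector + padding + 1):
            if s >= 0:
                bad.add(s)
    return bad


def safe_to_write(sector, bad_map):
    return sector not in bad_map


p_list = [100, 2048]  # factory-logged defects (made up)
bad_map = build_bad_sector_map(p_list)

print(safe_to_write(99, bad_map))  # padded neighbour of 100 -> False
print(safe_to_write(97, bad_map))  # outside the guard band  -> True
```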
Unfortunately, these seemingly innocuous blemishes on the platter surface can lead to potential data loss situations during the life cycle of the disk.
The above diagrams illustrate the life cycle of a platter asperity. Figure A depicts an AFM (atomic force microscopy) scan of platter asperities on a new disk. As disk usage increases (figure B), the sharpness of the asperity will erode a little. The prime cause of this wear-down effect is the sweeping motion of the disk-heads. In figure C, you can see disk debris now building up around the asperity. Eventually, the debris on the platters accumulates under the disk-heads, resulting in them retaining a higher voltage. This voltage increase leads to a higher “blocking temperature” in the disk-heads, resulting in them being unable to perform read operations on the disk.
Last week, we were dealing with a WD Elements 2TB external USB disk experiencing the flaking platter problem. While it appeared to spin normally, the disk could not be seen by the customer’s computer.
After removing the plastic shell, we opened the internal disk (a 2.5” WD20NMVW with integrated USB connection) in our class-100 clean room and found significant amounts of platter debris which had accumulated under the disk-heads. We delicately cleaned the platters using a pharmaceutical-grade, lint-free polyester swab and cleaned the disk-heads with the same. The Head Disk Assembly (HDA) had to be removed for proper access to the heads. Using the right cleaning methodology is key. Even a “cleaning” chemical such as isopropyl alcohol or acetone can create smearing on the platters, caused by the interactions these chemicals have with the air. Likewise, the swabbing motion used must strike the right balance between debris-particle removal and abrasion avoidance.
Once the cleaning process had finished, we put the HDA back in situ, reassembled the disk and imaged it at a very low speed onto a new USB disk. Once this process had completed, the disk was connected to one of our systems and the NTFS volume appeared in under 5 seconds! The customer could now be reunited with years’ worth of photos, Word, Excel, PDF and DWG files.
Drive Rescue are based in Dublin, Ireland and offer a complete data recovery service for 1TB, 2TB, 3TB, 4TB and 5TB WD Elements external hard drives. Common data loss situations we help with include WD Elements disks which are not being recognised by your computer or which are making a clicking, beeping or knocking noise. We can also help you if you can see your disk’s folders and files but cannot copy them. Popular models we recover from include the WD Elements WDBU6Y0020BBK, WDBUZG0010BBK-01, WD1000EB035-01, WDBUZG0010BBK-03, WDBUZG0010BBK-05, WD5000E035-00, WDBU6Y0020BBK-0B, WDBWLG0040HBK, WDBWLG0030HBK-04 and WDBU6Y0020BBK-05. You can find out more information about our external hard disk recovery service here.
Last month, Drive Rescue Data Recovery attended Embedded World 2020 in Nuremberg, Germany, checking out some of the latest solid state disk technologies. Given the circumstances of the COVID-19 outbreak, the atmosphere was rather funereal, but most exhibitors and attendees seemed to make the most of it. Here are some of the changes and insights from the SSD market at the moment.
96-Layer (BiCS4) NAND goes mainstream
As most readers of this blog probably know already, the first generation of flash memory was “planar” or “2D” NAND, meaning that the memory cells are laid out horizontally across the die (chip) in a single layer.
3D NAND is now going from 64-layer (BiCS3) to 96-layer (BiCS4). This means NAND cells are getting vertically “stacked” on top of each other, creating 96 different layers – rather like the storeys of a skyscraper. For NAND manufacturers, this means adding additional chemical deposition and etching machines to their production lines.
96-layer sounds great but, as ever, manufacturers are pushing the NAND layer envelope even further by using a process known as “string-stacking”. This involves, for example, stacking a 32-layer die on top of a 64-layer die. By adapting their production lines, most NAND manufacturers are adopting this layering process. However, Samsung (being Samsung…) is expected to continue using a single-stack process known as High Aspect Ratio Contact in their fabs for etching dies of up to 200 layers.
And while multi-layered chip lithography greatly enhances SSD storage densities, it also increases cell-to-cell interference (sometimes referred to as “crosstalk”). To reduce this, a quality SSD controller, which uses effective Error Correction Control (ECC) algorithms is needed now more than ever.
Common ECC algorithms like BCH are simply not cutting it anymore. That is why more sophisticated ECC engines powered by LDPC, which can offer pseudo-soft-bit correction and read-level tracking, are needed now more than ever. SSD controller manufacturers are responding to this need. Phison’s new E16 controller, for instance, uses fourth-generation LDPC ECC to complement 3D TLC and even QLC NAND. Silicon Motion, a controller manufacturer exhibiting at Embedded World 2020, have introduced their sixth-generation “NANDXtend” ECC technology. This combines LDPC with RAID protection, which promises end-to-end data integrity along the entire host-to-NAND data pathway. Some of Silicon Motion’s controller line-up even employs machine-learning algorithms for enhanced data integrity when the disk operates at high temperatures. Machine learning and artificial intelligence are terms the storage world is probably going to be hearing a lot more of in the future.
From S-ATA to M.2

There is little point in having all of this fast NAND if it’s bottlenecked by the disk’s command interface or connector type. For example, if you buy an S-ATA MLC-based SSD today, the S-ATA connector will most likely be the data throughput bottleneck. (Remember, the maximum bandwidth for S-ATA III is only 600 MB/s.) Moreover, most laptop and tablet manufacturers are finding the S-ATA connector too bulky for their ultra-slim devices.
This explains why M.2 is now proving such a popular connector type. This connector comes in five main “key” types: “A key”, “B key”, “E key”, “M key” and “B+M key”. Modules come in a variety of sizes, such as the popular “2280 B+M” form factor, which is 80mm in length and 22mm in width. These variations provide manufacturers, systems integrators and end-users with ample spatial flexibility.
From AHCI to PCIe
It’s not only the connector type which can slow down an SSD. The command (or data pathway) protocol used is just as important. Currently, for standard S-ATA III disks, AHCI is the standard command protocol. This has 1 command queue and 32 commands per queue, allowing data transfers of up to 600MB/s. NVMe, by contrast, allows 65,535 queues and 65,536 commands per queue. Or, to use a roadway analogy, AHCI is your country boreen for data while NVMe is a super-highway.
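Multiplying those figures out shows just how wide that super-highway is:

```python
# Outstanding-command capacity: AHCI vs NVMe, using the queue and depth
# figures from the respective specifications.
ahci_queues, ahci_depth = 1, 32
nvme_queues, nvme_depth = 65_535, 65_536

print(ahci_queues * ahci_depth)   # 32 commands in flight
print(nvme_queues * nvme_depth)   # 4,294,901,760 commands in flight
print((nvme_queues * nvme_depth) // (ahci_queues * ahci_depth))
```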
Enter PCIe 4.0
Solid state disk manufacturers are now beginning to deploy PCIe 4.0 in their drives. PCIe 4.0 doubles the data throughput from 32GB/s to 64GB/s, assuming the underlying hardware supports this new specification. This increase in bandwidth should be very pleasing to those involved in processing, for example, stereoscopic images or the large data sets required by artificial intelligence.
Optimising PCIe with NVMe
But how about optimising PCIe so that it works smarter and faster while minimising motherboard compatibility issues? That’s where the Non-Volatile Memory Express (NVMe) standard comes into play. NVMe is a data pathway standard devised by the NVM Express consortium, a group of chip makers, controller designers and solid state disk manufacturers (such as SanDisk, Samsung and Intel). Their mission is to enhance the I/O functionality and performance of solid state disks using NVMe over PCIe, but also to promote industry-wide interoperability and adoption of the NVMe standard. NVMe is currently on version 1.4.
Already NVMe allows SSDs to exploit parallelism, enabling the concurrent processing of I/O requests. One of the biggest changes since NVMe 1.3, however, is a feature known as “I/O determinism”. This feature allows an SSD to be configured into multiple “sub-drives”. Different data types (e.g. video, photo or database streams) can be partitioned so they don’t interact, a feature which could prove very useful when processing hyperscale data.
Using Namespace Preferred Write Granularity, NVMe 1.4 makes TRIM commands work more efficiently
Not only does NVMe 1.4 allow for better-organised data, but it also enables TRIM to work more effectively. Currently, for most SSDs, TRIM operates as a background function to ensure that data blocks or pages which are no longer needed are deleted. While in theory this sounds very efficient, often the data ranges subject to deletion can be very small or misaligned. This leads to write amplification (where entire “blocks”, rather than more granular “pages”, of user data and metadata stored on NAND are subject to erase and write cycles). And you really don’t want this process happening too much inside an SSD because it can lead to early wear-out. However, with NVMe 1.4 a feature known as Namespace Preferred Write Granularity (NPWG) has been introduced. NPWG helps SSDs to report richer information back to the host, such as page sizes and erase-block sizes. In turn, this allows TRIM commands to be executed in a more granular and targeted way. Executing erase commands at page level greatly reduces write amplification and extends the life of the SSD.
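The alignment idea behind NPWG can be sketched as follows; the 16-LBA granularity is an assumed value for illustration, since real drives report their own:

```python
# Sketch: align a deallocation (TRIM) range to the drive's reported
# Namespace Preferred Write Granularity (NPWG). Misaligned head/tail
# fragments are dropped, since deallocating them would force the
# controller to rewrite whole NAND blocks (write amplification).

NPWG = 16  # preferred write granularity in LBAs (assumed value)


def align_trim(start_lba, length, npwg=NPWG):
    """Return the largest npwg-aligned sub-range of [start, start+length)."""
    end_lba = start_lba + length
    aligned_start = -(-start_lba // npwg) * npwg  # round start up
    aligned_end = (end_lba // npwg) * npwg        # round end down
    if aligned_end <= aligned_start:
        return None  # range too small or misaligned to trim efficiently
    return aligned_start, aligned_end - aligned_start


print(align_trim(5, 40))  # -> (16, 16): only LBAs 16-31 get trimmed
print(align_trim(3, 10))  # -> None: nothing aligned to trim
```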
The G-List for the SSD era
For years, electro-mechanical disks (HDDs) have used a G-List (short for Growth List), which records bad sectors detected by the disk’s firmware. NVMe 1.4 introduces something similar for SSDs called “Get LBA Status”. This allows an SSD to report LBA ranges (areas of blocks) which are likely to return a read error if a read or verify command is executed. This feature should enable solid state disk OEMs, vendors and third-party software developers to develop more accurate SSD diagnostic and monitoring tools. Already on GitHub, we’re seeing how the relative openness of the NVMe standard has led independent software developers across the world to produce fairly nifty firmware hacks for NVMe-based SSDs.
Looking forward: NVMe 2.0 to introduce Zoned Namespaces
Diverse data types dispersed across a disk’s NAND plane can also make write, read and erase cycles very inefficient. Take, for example, a user like a video producer or editor, whose SSD might contain a varied range of file types such as audio, video, music or photos (e.g. MP4, AIFF, RAW). Thanks to the SSD’s controller and a process known as wear-levelling, these files will be written evenly across the SSD’s NAND blocks. This is important because it prevents the same blocks from being written to, or erased, continually. (Think of wear-levelling as being like a car park attendant. He tries to prevent drivers from all parking near the lifts and encourages them to park more evenly throughout the car park.) Spreading out the writes in a more distributed manner across the NAND cells reduces disk wear-out.

However, there is a problem with this. Each time a read, write or erase command is issued, the controller has a lot more work to do because the data is stored non-sequentially. The latency this introduces is a well-known problem. Some readers might remember how, on operating systems such as Windows 98, ME or XP, non-sequential data stored on the host’s disk would necessitate the running of inbuilt or third-party disk defragmentation applications to improve disk performance. Almost two decades later, SSDs are facing the same problem which that old spinning Maxtor, Seagate (or even Quantum Fireball…) disk inside your first PC experienced.

NVMe 2.0 promises to eliminate this problem using Zoned Namespaces (ZNS). This protocol (a zoned model already in use by Shingled Magnetic Recording HDDs) divides the disk’s logical address space into fixed-size ranges and enforces sequential write rules. Each zone must be written sequentially. If the host or application violates this rule, an error is generated. ZNS allows the SSD to match the workloads of the host to the natural erase patterns of flash (NAND) memory.
This results in significantly reduced latency, reduced over-provisioning, less garbage collection and less write amplification. And ultimately, it allows for much faster I/O operations.
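The sequential-write rule at the heart of ZNS is simple to model. Here is a toy sketch (zone size and error messages are our own inventions; real drives expose this behaviour via the NVMe ZNS command set):

```python
# Toy model of a ZNS zone: each zone keeps a write pointer and rejects
# any write that does not land exactly on it, mirroring the sequential
# write rule that Zoned Namespaces enforce. Zone size is illustrative.

ZONE_SIZE = 256  # LBAs per zone (assumed)


class Zone:
    def __init__(self, start_lba):
        self.start = start_lba
        self.write_pointer = start_lba

    def write(self, lba, n_blocks):
        if lba != self.write_pointer:
            raise IOError(f"Zone write violation: expected LBA "
                          f"{self.write_pointer}, got {lba}")
        if self.write_pointer + n_blocks > self.start + ZONE_SIZE:
            raise IOError("Write would cross the zone boundary")
        self.write_pointer += n_blocks


zone = Zone(start_lba=0)
zone.write(0, 64)       # OK: sequential from the start of the zone
zone.write(64, 32)      # OK: picks up exactly at the write pointer
try:
    zone.write(200, 8)  # out of order -> error, as the rule requires
except IOError as e:
    print(e)
```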
In the context of data recovery, NVMe 1.4 introduces a potentially useful feature known as the “persistent error log”. This basically acts as a black-box flight recorder for NVMe storage devices, allowing the vendor, OEM or data recovery technician to access the event logs of the device. Disk event logging is nothing new; HDD manufacturers such as Seagate and WD have featured it before. However, the logging was usually specific to a hard disk family or model. (And just to complicate things, for more recent HDD models, the logs are sometimes inaccessible due to locked firmware…) The “persistent error log” feature on NVMe SSDs promises to be completely open (as opposed to locked down or encrypted…) and uses standardised (as opposed to manufacturer-specific) logs. The PEL will record important disk operating telemetry such as health snapshots, NVMe namespace changes, firmware commits, power-on logs, reset logs, hardware errors and logs pertaining to disk format and sanitise commands. This should make the diagnosis of complex SSD firmware or Flash Translation Layer problems less time-consuming and hopefully expedite the data recovery process.
Another interesting trend Drive Rescue noticed at Embedded World 2020 is the return of removable WORM (Write Once Read Many) media for the SSD era. Most of you will already be familiar with WORM media such as CD-R, DVD-R and BD-R. Well, flash storage companies like Silicon Power are now making WORM versions of their SD cards. These are tamper-proof SD cards (at least from soft forms of tampering…) which, once written to, cannot be modified. Nor can the write protection be disabled by flicking the “read only” switch on the side, as on standard SD cards. The write protection on WORM SD cards is hard-coded. Wondering about the practical applications of such cards, we asked the super-informative Silicon Power team, who told us that these non-erasable cards have wide application in areas where data integrity is paramount, such as POS equipment, body cams, electronic voting and medical systems.
According to the Silicon Power team, USB memory sticks are still being sold in large quantities. They explained how their ease-of-use and off-line availability along with fast transfer speeds are still a very attractive proposition to lots of computer users. The death of USB-based flash storage has been greatly exaggerated. The company has even introduced USB 3.2 memory sticks such as the Helios 202 which offers sequential speeds of up to 5Gbps and capacities of up to 256GB. Silicon Power source their NAND from Toshiba (now called Kioxia) or WD (SanDisk) with controllers supplied from Phison, Silicon Motion or Marvell.
USB-C… More than just a connection type
Transcend, another major player in flash memory products, was also at Embedded World 2020. Their team sees USB-C as the next big thing in portable storage. The reason? Well, unlike USB 3.2, USB 3.1 etc., USB-C is generally more widely supported by tablets and phones. It offers multi-platform interoperability between Windows, Apple and Android. It also offers theoretical sequential data transfer speeds of 10 Gbps. In fact, one congenial Transcend representative sees USB-C as more than just a connection type: rather, a platform in itself. It can, for instance, be connected directly to external displays. (Monitor manufacturers like LG, BenQ and Dell now support USB-C in some of their high-end displays.) USB-C also supports daisy-chaining, a useful feature which allows multiple external hard disks to be interlinked whilst appearing to the host as independent drives.
RAID hasn’t gone away you know…
USB memory sticks are likely to be hanging around for many years to come, and so too are a lot of other storage technologies borrowed from the world of spinning hard disks (HDDs). As already mentioned, zoned storage is borrowed from Shingled Magnetic Recording HDDs and is now being deployed in some NVMe-based solid state drives. Native Command Queuing (an adjunct command protocol developed to alleviate AHCI command queue bottlenecks) is another spinning disk technology which has crossed the chasm into the SSD world. And let’s not forget RAID: a technology which some predicted would fizzle out has not gone away either. Not only are the principles of RAID virtualisation applied to the NAND arrays found on individual SSDs, but RAID is now extensively used in conjunction with SSDs for redundancy and to increase storage capacity (just like with HDDs). For example, modern ATM and ticketing machines extensively use SSD RAID arrays. Modern NAS devices from manufacturers such as Synology and QNAP now offer compatibility with S-ATA and M.2 SSDs. In fact, Synology have introduced PCIe add-in adaptor cards for some of their high-end devices which support M.2 2280, 2260 and 2242 SSDs. WD has recently introduced their SA500 NAS SATA SSD range designed to work with NAS devices.
Hardware manufacturers such as Icydock have introduced products such as the ToughArmor 4-port SSD bay for NVMe M.2 disks. Designed for handling intensive I/O workloads such as deep-learning or 4K/8K video, it can offer speeds of 32Gbps over a MiniSAS connector with the host or RAID card managing the array. Rather than solid state storage sunsetting RAID, it has actually given it a new lease of life.
The role of gamers in solid state disk development
Adata, another major manufacturer of solid state disks and flash-based storage devices, was also in attendance. Commonly known for consumer-level SSDs such as the popular SU630, SU650 and SU800 models, this manufacturer also produces solid state disks for the gaming and industrial markets. For the gaming market, they produce an M.2 PCIe range of “XPG”-branded SSDs such as the SX6000, SX7000 and SX8200. For disk manufacturers, this market can be quite challenging because gamers are quite a discerning bunch of customers. They tend to perform extensive pre-purchase research and often demand high-performance, low-latency disks.
Unsurprisingly then, perhaps, gamers have been the progenitors of a substantial number of breakthrough information technology products, such as the sound card and the graphics card. (The gaming industry has also been responsible for bestowing upon the world neon-illuminated keyboards and other PC components which look like props from a Star Wars film…) Gamers using virtual and augmented reality today are not just playing; they are fine-tuning technologies. Once refined, these technologies will most likely trickle into the mainstream in the not-too-distant future. In this regard, the informative Adata representative explained how the consumer and gaming market offers his company valuable feedback and insights into real-life disk performance and reliability. (Learnings which could probably never be gleaned from a synthetic SSD benchmark test.)
These insights can be leveraged to design high-reliability, industrial-class solid state disks such as the Adata ISSS332 (an S-ATA III disk using MLC NAND). For the industrial SSD market, not only are criteria such as reliability and power protection important, but so is the need for disk models to have a fixed bill of materials (BOM). (Unlike the consumer disk market, where a manufacturer or vendor can change key components willy-nilly whilst retaining the same disk model number.) This is important because it ensures reliability, performance and consistency in environments sensitive to component changes. For example, operators of medical diagnostic machinery may require an SSD with a large DRAM cache for their machinery to run smoothly. If the SSD manufacturer or vendor decided to change or remove the DRAM cache in subsequent production runs of the disk, this could easily impact the running of a finely tuned machine. Therefore, a “fixed BOM” guarantees that all disk components and specifications pertaining to a specific disk model will remain unchanged during the production run of the disk.
Overall, despite the virus threat, Embedded World 2020 exhibitors and speakers provided some very interesting insights into the fast moving world of solid state disks.
Drive Rescue are based in Dublin, Ireland and provide an SSD data recovery service for S-ATA and M.2 drives which are no longer being recognised by your computer. We recover from brands such as Lite-On, Crucial, Samsung, WD and Lenovo. We also recover from USB memory sticks (flash drives). Phone us on 1890 571 571.
There is an enduring myth out there that if you swap the PCB (printed circuit board) or controller board on your non-working hard disk with a similar one, it will start working again. Unfortunately, while this might have worked well for disks manufactured in the early 2000s, it no longer works with modern hard disks.
But before we go into specifics, there are multiple reasons why a hard disk won’t work. The disk-heads can fail, the firmware or file system (NTFS, HFS+, APFS, exFAT, EXT4 etc.) can become corrupt, or the spindle can fail. Failure of the PCB is just one of many possible causes. Without thorough diagnostics, deciding that the PCB is the problem is like the drunkard who has lost his keys at night: the first place he will start looking is under the street light, because that seems like the easiest place to find them. For some distressed computer users who’ve just lost data, their disk’s PCB also seems like an easy place to find a quick data recovery solution!
Will swapping my hard disk PCB work?
Unfortunately, when recovering data from modern drives, swapping in a PCB from a similar drive (even one with the same model number) won’t work. This is because of adaptive information which is stored on the ROM chip of your disk. On Western Digital drives, for example, it’s the chip marked U12 or U14. Other times, the adaptive information is integrated into the main controller IC. Some adaptive information is also stored in the Service Area of your disk. Adaptive data is unique to your disk, and your disk will not run properly without it.
Typical adaptive information might include:
- Microcode for disk initialisation
- Voltage levels for each disk-head
- PRML channel amplification information
- Disk head write and read currents
- Head allocation for read/write zones
This is all crucial information needed for your hard disk to run. If you’ve performed complete diagnostics on your disk and you’re absolutely certain that your PCB is the problem, then a ROM chip swap is needed. Here, the ROM or main IC needs to be micro-desoldered and soldered onto an exact-match donor board. This is an extremely intricate job which ideally needs to be performed by someone who has successfully completed it hundreds of times before.
Drive Rescue, data recovery Dublin offer a complete hard disk PCB (controller board) repair service for Seagate S-ATA Barracuda disks (7200.9, 7200.10, 7200.12), WD Passport, WD Blue, WD Green, WD Red and HGST (Hitachi) disks.
The Seagate Backup Plus range is an extremely popular range of external hard disks in Ireland. They are widely available from computer retailers and trade suppliers and come in a variety of colours and capacities. They come in a 2.5” form factor (marketed as the Backup Plus Portable and Backup Plus Slim) and a 3.5” form factor (marketed as the Backup Plus Hub in 3TB, 4TB, 5TB, 6TB and 8TB capacities).
Last week, a customer dropped in to us a 2.5” 4TB Seagate Backup Plus disk which was not being recognised on their Windows PC. They examined the Disk Management section of their computer and the disk was showing up there as “unallocated”. They then tried swapping the USB cable, but the disk was still not recognised by Windows Explorer. When they tried to run data recovery software on the disk, their whole system just froze. When they ran SeaTools diagnostics on the disk, a SMART test fail result was returned in less than a minute.
The disk inside the enclosure was a Barracuda ST4000LM024. Diagnostics revealed over 27,448 bad sectors and a corrupt MFT. The Master File Table stores crucial information about the structure of an NTFS volume.
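A healthy MFT record begins with the ASCII signature “FILE” (records whose multi-sector integrity check failed are stamped “BAAD”), and counting signatures is one quick way to gauge MFT health from a raw image. A simplified sketch over fabricated sample data:

```python
# Sketch: scan raw MFT records (1024 bytes each on typical NTFS volumes)
# and count healthy vs damaged ones by their 4-byte signature.
# The sample records below are fabricated for illustration.

MFT_RECORD_SIZE = 1024


def survey_mft(raw: bytes):
    healthy = damaged = 0
    for off in range(0, len(raw), MFT_RECORD_SIZE):
        sig = raw[off:off + 4]
        if sig == b"FILE":
            healthy += 1
        else:  # b"BAAD", zeroed, or garbage
            damaged += 1
    return healthy, damaged


# Two fake records: one intact, one corrupt
sample = (b"FILE" + b"\x00" * (MFT_RECORD_SIZE - 4)
          + b"BAAD" + b"\x00" * (MFT_RECORD_SIZE - 4))
print(survey_mft(sample))  # -> (1, 1)
```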
After the disk’s media issues were solved, manual editing of the primary MFT was needed before the volume became mountable again. Over 3.78TB of geological surveys, created with the QGIS application, were recovered. The client was over the moon.
Drive Rescue Dublin provide a complete data recovery service for Seagate Backup Plus external USB hard disks which are not being recognised, which are clicking or which are appearing completely dead.
The Samsung Evo range of 2.5” S-ATA and M.2 PCI solid state disks has proved extremely popular in Ireland for their speed, capacities and reliability. Typically, the EVO range (840, 850, 860) uses MLC-NAND coupled with in-house Samsung designed controllers. Capacities include 120GB, 240GB, 500GB, 1TB, 2TB and 4TB.
While most users enjoy a trouble-free experience, there are occasions when data recovery is needed for a Samsung EVO SSD. Take last week, for instance: we had a customer with a Samsung EVO (MZ-7TD120) which was no longer initialising on their Windows 10 system. They removed the disk and connected it to a second Windows 10 computer using a 2.5” USB enclosure. However, it still remained inaccessible. They tried running some EaseUS data recovery software on the drive, but that too proved fruitless. They ran the Samsung Magician software to help them diagnose the problem, but it could not even recognise the drive. When they attempted to open the disk itself, they were thwarted by some Torx screws. They sent the disk to Drive Rescue.
We first removed the metal casing of the SSD using a Pentalobe screwdriver. Then, using a multimeter, we tested the voltages on the PCB. They appeared normal. Using ESD-safe tweezers, we bridged the short points on the disk’s PCB to put it into safe mode. This would allow us to use a Samsung firmware emulator to access the NAND. The terminal read-outs from our Samsung SSD recovery equipment indicated that there was a problem with the FTL (Flash Translation Layer) of the SSD. This layer, found in the firmware (FW), assists the SSD with functions such as wear-levelling and garbage collection. But more importantly, it performs the vital function of logical block mapping. This maps each Logical Block Address (LBA) to physical blocks on the NAND chips. Basically, it acts like an internal roadmap for your SSD. When it becomes corrupt, the data cannot be accessed.
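At its simplest, the FTL is a lookup table from logical block addresses to physical NAND locations. This toy illustration (nothing like Samsung’s actual FTL format) shows why a corrupt table makes data unreachable even though it is still physically present on the NAND:

```python
# Toy FTL: logical block address (LBA) -> physical NAND location.
# When entries are lost or corrupted, reads fail even though the
# underlying NAND still holds the data, which is why FTL corruption
# makes an otherwise healthy SSD inaccessible.

nand = {("chip0", 7): b"user data A", ("chip1", 3): b"user data B"}

ftl = {0: ("chip0", 7), 1: ("chip1", 3)}  # healthy mapping table


def read_lba(lba, mapping):
    loc = mapping.get(lba)
    if loc is None:
        raise IOError(f"LBA {lba}: no FTL mapping (table corrupt?)")
    return nand[loc]


print(read_lba(1, ftl))          # data reachable via the mapping

corrupt_ftl = {0: ("chip0", 7)}  # entry for LBA 1 lost
try:
    read_lba(1, corrupt_ftl)
except IOError as e:
    print(e)                     # data still on NAND, but unreachable
```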
After a couple of hours of firmware repair work, and a few disk re-initialisations, a valid NTFS volume was finally retrieved along with all the client’s data!
Drive Rescue are based in Dublin, Ireland and offer a complete Samsung SSD data recovery service for disks such as the EVO 840, 850 and 860, PM863, PM863a, PM883, SM843T, MZ-7WD240HCFV, MZ-7WD480N and MZ-7WD4800.
Zyxel, a Taiwanese company mainly known for its networking equipment for home and small-business users, also produces a range of NAS devices. These have proved popular in Ireland. However, like most NAS devices, Zyxel devices are liable to data loss events necessitating data recovery.
Their network-attached storage range, which uses the rather unfortunate prefix of NSA, includes models such as the NSA 221, NSA 310, NSA 320, NSA 326 and NSA 542. Most of these models come in two- or four-bay varieties, use S-ATA2 or S-ATA3 connectors and employ either EXT3 or EXT4 as their default file system. Most of these devices are configured in RAID 0 or RAID 1, while some of the 4-bay models can be configured in RAID 5, RAID 6, RAID 10 or JBOD.
Typical data loss scenarios include:
- Data on your NAS device gets accidentally deleted.
- Your Zyxel NAS device gets accidentally knocked over (yes, this happens more than you think…)
- The EXT3 or EXT4 file system on your NAS experiences corruption (usually either journal or superblock corruption).
- Extensive bad sectors developing on one or more of your disks resulting in your Zyxel NAS not appearing in Windows Explorer or Finder.
- The disk-heads in one or more of your drives degrade or fail, resulting in non-responsiveness, freezing or inaccessibility of your NAS.
- The RAID controller on your Zyxel NAS can fail, resulting in inaccessible data.
- Firmware corruption can occur on your NAS, resulting in your device not being seen by Windows Explorer or by Apple’s (macOS) Finder. This can occur, for instance, due to a defective FW update or a power surge.
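In the superblock-corruption scenario above, ext file systems helpfully keep backup copies of the superblock which can often be used for repair. As a minimal illustrative sketch (the offsets 1024 and 56 and the magic value 0xEF53 come from the ext2/3/4 on-disk format; real work would use tools such as dumpe2fs and e2fsck), here is how candidate superblocks can be located in a raw disk image:

```python
import struct

EXT_MAGIC = 0xEF53      # s_magic field of an ext2/3/4 superblock
SB_OFFSET = 1024        # the primary superblock sits 1024 bytes into the volume
MAGIC_OFFSET = 56       # s_magic lives 56 bytes into the superblock

def find_superblocks(image: bytes, step: int = 1024):
    """Scan a raw disk image for ext superblock candidates by checking
    for the 0xEF53 magic at 56 bytes past each 1 KiB boundary.
    (Candidates need further validation -- two bytes can match by chance.)"""
    hits = []
    for off in range(0, len(image) - MAGIC_OFFSET - 2, step):
        (magic,) = struct.unpack_from("<H", image, off + MAGIC_OFFSET)
        if magic == EXT_MAGIC:
            hits.append(off)
    return hits

# Tiny synthetic image: a blank 8 KiB volume with a superblock at offset 1024.
img = bytearray(8192)
struct.pack_into("<H", img, SB_OFFSET + MAGIC_OFFSET, EXT_MAGIC)
print(find_superblocks(bytes(img)))   # [1024]
```

A backup superblock found this way can then be handed to e2fsck with its `-b` option to repair the primary copy.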
Recently, Drive Rescue was performing data recovery from an NSA 320 device which was no longer being recognised by Windows. The user was extremely anxious to recover this NAS as there were over seven years’ worth of photos and videos stored on it. This content was of extreme sentimental value to him.
Of the two S-ATA disks inside (Seagate 250GB ST3250824AS, part of the Barracuda 7200.9 family), our diagnostic tests revealed that Disk 1 had a problem with head #4, failing a simple read test twice. This explained why the RAID 0 volume was not accessible. Remember, RAID is very fussy when it comes to reading data, even on a simple array. (And with bigger disks being used, more read errors are likely, making RAID less relevant in a modern IT environment.)
This model of Seagate disk uses five heads. Using our data recovery equipment, we were able to manipulate the disk’s head map in RAM, substituting head #3 for #4. We then imaged this disk along with Disk 0. With two images in hand, we then had to determine the block size, the offset of the array, the parity pattern, the parity delay pattern, and find any spare blocks on the array. Get any of these parameters wrong and you end up with corrupted data. Thus, a hex editor, an old-school notepad, time, experience (and lots of patience) come in handy.
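For a two-disk RAID 0 like this one, the key parameters are the stripe (block) size and start offset. As a minimal, hypothetical Python sketch (real arrays add parity, parity delay and spare blocks, which RAID 0 does not have), here is how two member-disk images interleave back into a single volume once those parameters are known:

```python
def destripe_raid0(disk_images, stripe_size, start_offset=0):
    """Reassemble a RAID 0 volume from member-disk images.
    Stripes rotate round-robin: stripe 0 on disk 0, stripe 1 on disk 1, ...
    A wrong stripe_size or start_offset yields a scrambled volume."""
    members = [img[start_offset:] for img in disk_images]
    volume = bytearray()
    stripe = 0
    while True:
        disk = stripe % len(members)                     # which member disk
        pos = (stripe // len(members)) * stripe_size     # offset on that disk
        chunk = members[disk][pos:pos + stripe_size]
        if not chunk:                                    # ran off the end
            break
        volume += chunk
        stripe += 1
    return bytes(volume)

# Two toy 'disk images' with a 4-byte stripe size:
d0 = b"AAAACCCC"   # holds stripes 0 and 2
d1 = b"BBBBDDDD"   # holds stripes 1 and 3
print(destripe_raid0([d0, d1], stripe_size=4))   # b'AAAABBBBCCCCDDDD'
```

With real disks, the correct stripe size and offset are deduced by inspecting file-system structures (such as the superblock location) in a hex editor, which is where the notepad and patience come in.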
After several hours, the EXT3 RAID 0 volume was successfully rebuilt. Hundreds of the client’s video files (.m2ts format) from his Panasonic camcorder and .CR2 files created using his Canon camera were all recovered. They were presented to him on a USB external hard disk – memories that he and his family can treasure for years to come.
Drive Rescue are based in Dublin, Ireland. We offer a complete data recovery service for Zyxel NAS devices along with Synology, QNAP, ReadyNAS and Buffalo. Call us on 1890 571 571. We’re here to help.
Many customers ask us if enterprise-class SSDs are more reliable than consumer ones. In a nutshell, yes, enterprise SSDs are more robust. However, even enterprise-class disks such as the Samsung PM963, PM853T, SM843 and SM863 still fail and need data recovery.
That said, there are a number of reasons why enterprise-level SSDs are more reliable than consumer-level disks.
- Most enterprise SSDs tend to use controllers which employ smarter ECC engines. For example, some enterprise SSDs come with 24-bit ECC along with CRC, which is very useful for minimising data corruption.
- Most enterprise SSDs employ over-provisioning, which means the inclusion of extra or “spare” blocks to increase endurance.
- Most enterprise SSDs use an SDRAM cache for more efficient handling of metadata.
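Over-provisioning is easy to quantify: it is the hidden NAND capacity beyond what the user sees, expressed as a percentage of the user-visible capacity. A quick illustrative calculation (the figures below are hypothetical, not taken from any specific drive):

```python
def overprovisioning_pct(raw_nand_gib, user_capacity_gib):
    """Over-provisioning as a percentage of user-visible capacity.
    The spare area gives the controller headroom for wear levelling,
    garbage collection and bad-block replacement."""
    spare = raw_nand_gib - user_capacity_gib
    return 100.0 * spare / user_capacity_gib

# A hypothetical drive with 512 GiB of raw NAND exposing 448 GiB to the user:
print(round(overprovisioning_pct(512, 448), 1))   # 14.3
```

Enterprise drives typically reserve a noticeably larger spare area than consumer drives of the same advertised capacity, which is a big part of their endurance advantage.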
There is another class of SSD which most people forget about, and that’s industrial-class SSDs. These tend to be even more robust than enterprise-class models. For example, in the event of a sudden power loss, some industrial-class SSDs use special capacitors which provide enough energy for the SSD controller to finish any operations that are currently being processed. Some industrial SSD manufacturers, like Innodisk, take power loss protection even further by using a “low-power detector” in their disks, which triggers a recovery algorithm (iData) that assists the drive in shutting down gracefully while also preventing data loss and ensuring data integrity. In the event of corrupt data getting written to the disk, table re-mapping is deployed to delete it. Moreover, most industrial-class SSDs have much more resilience to temperature extremes, which is very useful if you are deploying an SSD in a cold store or smelting plant…
Drive Rescue are based in Dublin, Ireland. We offer a full SSD recovery service for disks such as the Samsung PM963, Intel S3510, S3520 and SanDisk X100, X400 and X600.
LaCie external hard disks have always been extremely popular with Apple Mac users in Ireland. Most of their models of external disk tend to be high capacity and come equipped with Thunderbolt 2 or Thunderbolt 3 ports. Their “Rugged” range of external disks (such as the Rugged Mini, Rugged Thunderbolt and Rugged Triple) uses Thunderbolt or USB-C ports. Disks in the “Rugged” range are swaddled in a distinctive orange rubberised overcoat to protect against shock damage.
LaCie also offers a range of DAS RAID devices, such as their 2Big, 2Big Quadra, 2Big Dock Thunderbolt 3, 4Big, 5Big and 8Big devices, which offer high-capacity local storage without having to use USB or LAN connections.
And LaCie is perhaps the only hard disk manufacturer to have introduced a range of “designer” external hard disks allegedly based on designs by the Porsche design studio in Germany.
While most users have a trouble-free experience, some LaCie owners unfortunately experience data-inaccessibility problems with their disks. These include:
- Accidental deletion of an HFS+ or APFS-formatted LaCie disk
- Accidental deletion of an HFS+, APFS or NTFS partition
- Accidental shock damage (e.g. dropping your LaCie disk)
- Power surge damage to your LaCie disk
- LaCie disk with corrupted firmware
Such issues can manifest themselves in various ways, such as:
- Your LaCie disk is not recognised by macOS when connected to your iMac or MacBook.
- Your LaCie disk no longer spins up.
- Your LaCie disk is making a beeping, ticking or a clicking noise.
Such a case happened only last week to a user in Mayo. They had used a D2 Thunderbolt disk in their media production business for many years without incident. Recently, however, they connected the disk to their Mac and it was not recognised. Their local IT support company ran data recovery software, which could not even recognise the disk, let alone find any data. They sent the disk to us. We opened up the case and found a 4TB S-ATA Seagate IronWolf disk. (This was not really surprising, as Seagate now owns the LaCie brand.) Our diagnostics revealed two weak disk-heads. This explained why the disk was not being recognised: one of these heads was needed to read the Master Boot Record of the disk but couldn’t. In our class-100 cleanroom, we removed the old Head Disk Assembly (HDA) using a customised “head-comb” for Seagate IronWolf disks and replaced it with a new HDA. Due to the architecture of IronWolf disks, inter-head alignment (getting all disk heads aligned with each other) was time-consuming, but very important in order to minimise NRRO (non-repeatable run-out) errors. We then slowly imaged the disk overnight. (Trying to operate a disk which has undergone a “head transplant” at full speed can lead to the new disk-heads getting rejected.) The following morning, we were able to initialise the disk but, to our dismay, still no HFS+ volume was showing. Further diagnostics revealed that the file system just needed some repairing (this can happen occasionally after an HDA swap). After the repairs to the file system had completed, we finally got the volume to mount successfully. All files appeared intact, and the volume even retained its “LACIE” name – always a good sign after a recovery!
Adobe Lightroom, MPEG, AVI files, and FCPX (Apple Final Cut Pro) files were all successfully recovered, saving our customer hours and hours of redoing work. He was now able to get on with his workflow with minimal disruption as if nothing had happened.
Drive Rescue offer a full data recovery service for LaCie D2 disks, LaCie Rugged, LaCie Big (RAID) and LaCie Porsche Design USB disks in Dublin, Ireland. Call us on 1890 571 571.