We had a customer in yesterday with a Dell Latitude laptop running Windows 10. The system was requesting a BitLocker key, even though the user never remembered this full-disk encryption (FDE) application being set up. They were starting to panic because their research on Google informed them that losing your BitLocker key can result in inaccessible data. BitLocker normally uses XTS-AES (128 or 256-bit) encryption, which is very strong. One website even advised our customer that, if they waited a few years, BitLockered disks could be easily cracked once quantum computing becomes more mainstream. But understandably, they were not prepared to wait a few years…
However, this is a problem which Drive Rescue had encountered before. On some Dell laptops, the “Expert Key Management” settings in the system’s BIOS can sometimes go awry, resulting in a BitLocker recovery key request window appearing unexpectedly.
Recovering from BitLocker
The fix for this problem is simple. Enter the BIOS of the Dell system. Navigate to “SecureBoot” and then click to expand the section called “Expert Key Management”. You should now see a “Restore Settings” button, followed by “Factory Settings”. Select this and then click “OK”. When exiting the BIOS, don’t forget to save your changes. Restart your system. The BitLocker key request box should now be gone and all of your files should be fully accessible. No data recovery needed!
Drive Rescue offer a data recovery service in Dublin, Ireland for BitLocker-encrypted disks (S-ATA, PCIe, mSATA), even in cases where a TPM chip is used. We frequently recover from disks removed from laptop systems such as the Dell Latitude, HP EliteBook, Fujitsu LifeBook and Lenovo ThinkPad T and X Series of laptops. Phone us on 1890 571 571.
Electro-mechanical hard disks are designed to spin continuously. For most 3.5” form factor disks, rotational speed is 5400, 7200 or 10,000 revolutions per minute. If the disk is used in a blade or tower server, for example, it will get cooled by the host’s system fan and will hopefully have a steady supply of clean power. Operating in an ambient temperature, such a disk (whether standalone or RAID) can run for several years without interruption.
However, there is one risk factor which a lot of IT admins forget about. As the disk(s) is running, because it uses an “air bearing”, some external air is inducted. This air is filtered by a tiny filter known as a barometric or breather filter. In addition to this, due to the effects of internal component wear and tear, tiny debris from the platters can also start to accumulate inside the disk chamber. For the most part, even with debris accumulating inside the disk, the read-write process can continue as normal. That is, until some poor IT person gets assigned the task of physically moving the server or migrating its data, at which point they can be in for a nasty surprise.
Take last week, for example, when a company in Dublin got into a spot of bother with their old Dell PowerEdge server running Windows Server 2008. Their IT administrator was tasked with decommissioning it. The server was running fine, but was slow and no longer meeting the organisation’s requirements. He turned the system off and carried it back to his basement office with the intention of doing a complete backup. However, back at his office, he switched it on again, only to be greeted with the hue of a Windows Server 2008 “blue screen of death” informing him about an “Unmountable_Boot_Volume”. He removed the disk (a Hitachi HDT721010SLA360) and slaved it onto another PC. No dice. In Computer Management, the disk was showing up as “unformatted”. This was the last thing he wanted. So, if this disk was spinning fine for the last 12 years, why did it pick the most inopportune time to kick the bucket?
Well, when you move an old hard disk which has been in-situ for years, the dust and debris collected by its air filter can get displaced. This can result in particulate matter getting strewn across the platters and collecting under the disk-heads, making the drive unreadable.
Drive Rescue took the disk into our clean-room where we removed the head disk assembly and cleaned the disk platters using a process which merits another blog post. We were able to recover 98% of their data.
Lesson: the benefit of in-situ backups…
Servers can be located in the most uncomfortable places, such as under staircases or in cramped comms rooms. The temptation for the IT admin to move an old server and perform a full disk backup in a more congenial environment can be quite strong. However, before moving the server anywhere or removing its disks, it would be prudent to use a disk replication tool such as Macrium Reflect to copy the server’s volume onto another medium. This should be performed while the server is in-situ. This way, you can prevent any nasty surprises and avoid the need to call a data recovery service!
Drive Rescue are based in Dublin, Ireland. We offer a full server data recovery service. This includes Windows Server 2003, Windows Server 2008, Windows Server 2012, Windows Server 2016 and Windows Server 2019. Our service covers both standalone disks (S-ATA, SAS) and RAID (0, 1, 5, 6, 10).
For years, the firmware of most HDDs was open and made easily accessible by just using a serial connection and the right ATA commands. This enabled data recovery technicians to perform essential pre-recovery housekeeping tasks, such as G-List, P-List and SMART clearing. It also allowed technicians to read and write modules to the ROM. However, with the latest multi-terabyte electro-mechanical disks, manipulation is becoming a little trickier due to manufacturer-locked firmware. This fairly recent trend of locked disk firmware can partly be explained by explosive revelations made by Kaspersky Lab in 2015. They discovered strains of malware dubbed EquationDrug and GrayFish that are capable of dropping a customised installer into an operating system. This enables the installation of modified controller code onto a person’s hard disk that would act as a persistent backdoor, allowing data exfiltration without triggering any alerts in conventional security controls. Given that governments and corporations throughout the world tend to use standardised equipment, this vulnerability was seen by many security and privacy experts as a grave threat to data integrity and confidentiality. In response to this threat, manufacturers such as Seagate have introduced features like their “Locked Diagnostics Port”, which aims to thwart users from accessing or modifying the disk’s firmware. Seagate has also introduced digital signing of firmware modules.
However, there is another, albeit more commercial, reason why disk manufacturers are eager to lock their firmware. Most of the disks’ secret sauce, such as algorithms for error correction, servo-track control and thermal fly-height control, is stored in this area of the disk. Not wanting their extensive R&D efforts to be stolen by competitors reverse engineering their disks, manufacturers increasingly just lock down their firmware modules.
For the data recovery technician, this can be exasperating. You’re about to perform a firmware repair only to be greeted with the “Diagnostic Port Locked” message… argh!
The side-effect of this development is that data recovery technicians sometimes encounter a brick wall when trying to remedy firmware issues. Moreover, developers of professional data recovery equipment who could previously analyse firmware modules and develop sophisticated disk repair tools are now being thwarted by manufacturer-locked firmware. Not in all cases however.
To circumvent locked firmware modules, some wily data recovery tool developers have designed “special extensions” to the ROM code which can be saved via a boot code and written back to the HDD. Once applied, terminal commands magically start working on the disk again.
Last week, we got this Seagate Ultra Slim Portable drive in with some serious firmware issues. The disk inside, a Mobile HDD (ST2000LM007), uses Seagate’s Rosewood firmware and was not even recognisable to the BIOS. This means that under normal circumstances, very little could be done to repair the disk and access the data. However, using the aforementioned tools, we added a modified ROM extension to the disk. This enabled us to repair the disk’s corrupt firmware modules and access the user area of the disk containing .CR2 (Canon raw), .DWG (created with DraftSight) and Microsoft Office files. The customer was happily reunited with all their data again. This proves the truism that everything is indeed hackable…
Drive Rescue are based in Dublin, Ireland. We offer a complete data recovery service for Seagate Ultra Slim Portable and Seagate Mobile HDD drives. We have experience of successfully recovering from models such as the ST500LM034, ST2000LM007, ST1000LM0048, ST1000LM0035 and ST2000LM0015. We can help you if your Seagate Ultra Slim or Mobile HDD disk is no longer recognised by your PC or Mac. Or, if your disk has been accidentally dropped. Call us on 1890 571 571.
You’re about to decommission that old desktop or laptop. However, not knowing when you might need the data on it again, you remove its hard disk or just put the whole system up in the attic.
Unfortunately, this is just about the worst place to store a hard disk. Attics and hard disks are about as compatible as frogs and lawnmowers, or petrol and matches. Your average domestic attic is a place of temperature extremes. Siberian levels of coldness in the winter coupled with Saharan levels of heat in the summer might be fine for storing nostalgic copies of Q Magazine with pictures of Oasis or Blur adorning the cover. Or storing that gym equipment you swore you would use. But it’s not a good place to store a hard disk containing thermally sensitive components. And you don’t need a Harvard degree in Physics or Metallurgy to know that these metallic components will contract when it gets really cold and expand when it gets really hot. That is why, here at Drive Rescue, customers often tell us how the drive they put up in the attic two or three years ago now clicks when it is switched on, or doesn’t spin up at all. They assure us it was working fine when it was first put up there. The type of damage incurred by hard disks which have been exposed to temperature extremes is insidious. You won’t see the disk change shape. And unlike that exercise equipment, you won’t see rust marks either. But inside, the delicate disk-head assembly may well be damaged after months of contracting and expanding.
So do yourself a favour, if you value your data, don’t store old hard disks in the attic. Disks need to be stored in an ambient temperature. They don’t like extremes, so store them in a living room cupboard or a bedroom wardrobe, but for God’s sake, not in the attic.
Have you removed a hard disk from your attic only to discover that it’s now clicking, chirping, ticking or not being recognised by your computer? Drive Rescue offer a Dublin-based complete hard disk recovery service for most hard disk brands, including Hitachi Deskstar, HGST, Seagate Barracuda, Seagate FreeAgent, Western Digital Caviar, Samsung SpinPoint, Iomega External, LaCie and WD Blue. We also offer a data recovery service for iMacs and MacBooks. Phone us on 1890 571 571.
This is Mo. He drives a bus between Dublin and Drogheda. During this pandemic, when most people were inside the comfort of their own homes Mo has been driving the deserted highways and byways of our country.
Last week, his Western Digital MyBook hard disk failed. He badly needed photos of his daughter retrieved. We recovered them free of charge, because frontline workers like Mo are putting their lives on the line every day for us.
To all our frontline workers during these unprecedented times, Drive Rescue salutes you. Thank you.
A customer recently contacted us saying they did not need data recovery from their 64GB SanDisk SD card but in fact needed to wipe it. They had already transferred their photos and videos to the computer and verified that the transfer had been successful. With the shops being shut, they wanted to re-use this 64GB card, but formatting or wiping it was proving nigh impossible. They tried using “diskutil” on their MacBook, but to no avail. They even tried “diskpart” on their Windows laptop, but that too proved unsuccessful. Thankfully, there is a nice tool from the SD Association (SD Memory Card Formatter) which solves this seemingly intractable and common problem. This tool formats SD cards with ease and removes the need to perform command-line gymnastics using Diskutil or Diskpart.
BitLocker is a common full-disk encryption application used on Windows 10 laptops. We recently had a client where, on their Dell laptop running Windows 10 Pro, a BitLocker dialogue box appeared out of the blue requesting a “recovery key”. Without it, it was looking like they would not be able to access their desktop or their files. The box, entitled “BitLocker Recovery”, requested that the user “enter the recovery key for this drive”. (This is normally a 48-digit key which decrypts the Volume Master Key needed for the decryption process to run.) You may also see a request to “enter the PIN to unlock this drive”. This was a complete surprise to our client. They had never enabled BitLocker on the system. In fact, just to be sure, they double-checked their Windows online account. There was no evidence of BitLocker ever having been enabled.
Fortunately, this was no big deal. It’s just a little quirk on Dell laptops where the system gives the illusion of being encrypted with BitLocker when it’s actually not. Here are the steps to fix it.
Access your Dell’s BIOS. You can normally access this by pressing F2, just after you press the power-on button.
Look for the section called “SecureBoot”
Now navigate down to a section called “Expert Key Management”
Select “Restore Settings” followed by “Factory Settings”.
Click “ok” and exit the BIOS.
When you restart the computer, the BitLocker recovery box should have gone away.
Drive Rescue are based in Dublin, Ireland and offer a full data recovery service for BitLockered drives removed from Dell, Lenovo, HP, Fujitsu and Acer laptops, even if physically damaged. We also offer a data recovery service for Microsoft Surface SSDs which are BitLocker protected. Phone us on 1890 571 571.
The platters are probably one of the most important components of a hard disk. The platter used in a modern hard disk is typically composed of three different layers: the lubrication layer, the carbon layer and the magnetic storage layer. In the lubrication layer, a coating of perfluoropolyether is used. The viscosity and inertness of this next-generation lubricant are both perfect for platter surfaces. The carbon layer (or diamond-like carbon layer) is used to prevent moisture seeping through to the magnetic layer. It is also nitrogenised to improve durability. The magnetic storage layer consists of cobalt, chromium and platinum. Cobalt is used to provide the orientation of the magnetic crystals, chromium enhances the signal-to-noise ratio, while platinum helps to stabilise the temperature. In order to reduce crosstalk between the layers, ruthenium is also added to the mix.
Even before they leave the factory, disk platters already exhibit defects in the form of asperities. These microscopic “craters” of alumina on the platter surface are the by-product of the sputtering process (which is used to deposit metallic substrate on the bare metal platters). Even though these surface imperfections are minimised by burnishing and polishing, they cannot be totally eliminated from the finished product.
Manufacturers use what is known as a “P-List” to try and map out these defects so that the firmware knows not to write to these locations. (This list used to be on a piece of paper that accompanied a new disk, but was subsequently put on a 3.5” floppy disk.) Today, the P-List is stored on the disk itself. Run a “V40” terminal command on a brand-new HDD and prepare to be shocked at the thousands of errors that have already been logged! The use of “padding” around defective areas is another counterbalance employed to provide an extra safeguard against bad writes. With this technique, even healthy blocks around the defect areas are marked as “bad”.
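The padding idea can be sketched in a few lines of Python. This is purely a toy illustration of the principle (real firmware stores defect lists in its own proprietary format), where every sector within a guard distance of a known defect is also mapped out:

```python
# Toy illustration of P-List "padding": sectors adjacent to a known
# defect are also mapped out, guarding against marginal writes.
def pad_defects(defects, total_sectors, guard=2):
    """Return the set of sectors the firmware should avoid."""
    avoid = set()
    for d in defects:
        for s in range(max(0, d - guard), min(total_sectors, d + guard + 1)):
            avoid.add(s)
    return avoid

# A disk with defects at sectors 10 and 50, and a guard band of 2 sectors:
print(sorted(pad_defects({10, 50}, 100)))
# Sectors 8-12 and 48-52 are all mapped out, including healthy neighbours.
```

The trade-off is a small loss of usable capacity in exchange for keeping writes well away from marginal areas of the platter.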
Unfortunately, these seemingly innocuous blemishes on the platter surface can lead to potential data loss situations during the life cycle of the disk.
The above diagrams illustrate the life cycle of a platter asperity. Figure A depicts an AFM (atomic force microscopy) scan of platter asperities on a new disk. As disk usage increases (figure B), the sharpness of the asperity will erode a little. The prime cause of this wear-down effect is the sweeping action of the disk-heads. In figure C, you can see disk debris now building up around the asperity. Eventually, the debris on the platters accumulates under the disk-heads, resulting in them retaining a higher voltage. This voltage increase leads to a higher “blocking temperature” in the disk-heads, resulting in them being unable to perform read operations on the disk.
Last week, we were dealing with a WD Elements 2TB external USB disk experiencing the flaking platter problem. While it appeared to spin normally, the disk could not be seen by the customer’s computer.
After removing the plastic shell, we opened the internal disk (a 2.5” WD20NMVW with integrated USB connection) in our class-100 clean room and found significant amounts of platter debris which had accumulated under the disk-heads. We delicately cleaned the platters using a pharmaceutical-grade, lint-free polyester swab and cleaned the disk-heads with the same. The Head Disk Assembly (HDA) had to be removed for proper access to the heads. Using the right cleaning methodology is key. Even using a “cleaning” chemical such as isopropyl alcohol or acetone can create smearing on the platters, caused by the interactions these chemicals have with the air. Likewise, the swabbing motion used must strike the right balance between debris particle removal and abrasion avoidance.
Once the cleaning process had finished, we put the HDA back in-situ and reassembled the disk. We then imaged it at a very low speed onto a new USB disk. Once this process had completed, the disk was connected to one of our systems and the NTFS volume appeared in under 5 seconds! The customer could now be reunited with years’ worth of photos, Word, Excel, PDF and DWG files.
Drive Rescue are based in Dublin, Ireland and offer a complete data recovery service for 1TB, 2TB, 3TB, 4TB and 5TB WD Elements external hard drives. Common data loss situations we help with include WD Elements disks which are not being recognised by your computer or which are making a clicking, beeping or knocking noise. We can also help you if you can see your disk’s folders and files but cannot copy them. Popular models we recover from include the WD Elements WDBU6Y0020BBK, WDBUZG0010BBK-01, WD1000EB035-01, WDBUZG0010BBK-03, WDBUZG0010BBK-05, WD5000E035-00, WDBU6Y0020BBK-0B, WDBWLG0040HBK, WDBWLG0030HBK-04 and WDBU6Y0020BBK-05. You can find out more information about our external hard disk recovery service here.
Last month, Drive Rescue Data Recovery attended Embedded World 2020 in Nuremberg, Germany, checking out some of the latest solid state disk technologies. Given the circumstances of the COVID-19 outbreak, the atmosphere was rather funereal, but most exhibitors and attendees seemed to make the most of it. Here are some of the changes and insights from the SSD market at the moment.
96-Layer (BiCS4) NAND goes mainstream
As most readers of this blog probably know already, the first generation of flash memory was “planar” or “2D” NAND, meaning that the memory cells are layered horizontally across the die (chip). 3D NAND is now going from 64-layer (BiCS3) to 96-layer (BiCS4). This means NAND cells are getting vertically “stacked” on top of each other, creating 96 different layers, rather like the storeys of a skyscraper. For NAND manufacturers, this means adding additional chemical deposition and etching machines to their production lines.
96 layers sounds great, but as ever, manufacturers are pushing the NAND layer envelope even further by using a process known as “string-stacking”. This involves, for example, stacking a 32-layer die on top of a 64-layer die. By adapting their production lines, most NAND manufacturers are adopting this layering process. However, Samsung (being Samsung…) is expected to continue using a single-stack process known as High Aspect Ratio Contact in their fabs for etching dies of up to 200 layers.
And while multi-layered chip lithography greatly enhances SSD storage densities, it also increases cell-to-cell interference (sometimes referred to as “crosstalk”). To reduce this, a quality SSD controller which uses effective Error Correction Code (ECC) algorithms is needed now more than ever.
Common ECC algorithms like BCH are simply not cutting it anymore. That is why more sophisticated ECC engines powered by LDPC, which can offer pseudo-soft bit correction and read-level tracking, are needed. SSD controller manufacturers are responding to this need. Phison’s new E16 controller, for instance, uses fourth-generation LDPC ECC to complement 3D TLC and even QLC NAND. Silicon Motion, a controller manufacturer exhibiting at Embedded World 2020, have introduced their sixth-generation “NANDXtend” ECC technology. This combines LDPC with RAID protection, which promises end-to-end data integrity along the entire host-to-NAND data pathway. Some of Silicon Motion’s controller line-up even employs machine learning algorithms for enhanced data integrity when the disk operates at high temperatures. Machine learning and artificial intelligence are terms the storage world is probably going to be hearing a lot more of in the future.
From S-ATA to M.2
There is little point in having all of this fast NAND if it’s bottlenecked by the disk’s command interface or connector type. For example, if you buy an S-ATA MLC-based SSD today, the S-ATA connector will more likely than not be the data throughput bottleneck. (Remember, the maximum bandwidth throughput for S-ATA III is only 600 MB/s.) Moreover, most laptop and tablet manufacturers are finding the S-ATA connector too bulky to fit their ultra-slim devices.
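To put that 600 MB/s ceiling in perspective, here is a quick back-of-the-envelope calculation of best-case full-disk read times (assuming sustained sequential throughput, which real disks rarely achieve; the NVMe figure is a typical PCIe 3.0 x4 drive rating, used purely for comparison):

```python
# Rough best-case time to read an entire disk at a given interface speed.
def full_read_hours(capacity_gb, throughput_mb_s):
    """capacity in GB, throughput in MB/s -> hours."""
    return capacity_gb * 1000 / throughput_mb_s / 3600

for name, mb_s in [("S-ATA III", 600), ("PCIe 3.0 x4 NVMe", 3500)]:
    print(f"2TB over {name}: {full_read_hours(2000, mb_s):.2f} h")
```

For a data recovery lab imaging disks all day, that interface difference adds up quickly.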
This explains why M.2 is now proving such a popular connector type. This connector comes in 5 main “key” or connector types including “A key”, “B key”, “E key” “M key” and “B&M key”. These come in a variety of sizes such as the popular “2280 B+M” form factor which is 80mm in length and 22mm in width. These variations provide manufacturers, systems integrators or end-users with ample spatial flexibility.
From AHCI to PCIe
It’s not only the connector type which can slow down an SSD. The command (or data pathway) protocol used is just as important. Currently, for standard S-ATA III disks, AHCI is the standard command protocol. This has one command queue and 32 commands per queue, allowing data transfers of up to 600MB/s. NVMe, by contrast, allows 65,535 queues and 65,536 commands per queue. Or, to use a roadway analogy, AHCI is your country boreen for data, while NVMe over PCIe is a multi-lane motorway.
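The queue-depth gap between the two protocols is easy to quantify from the figures above:

```python
# Maximum outstanding commands each protocol can keep in flight.
ahci_outstanding = 1 * 32            # AHCI: 1 queue x 32 commands
nvme_outstanding = 65_535 * 65_536   # NVMe: 65,535 queues x 65,536 commands each

print(ahci_outstanding)                       # 32
print(nvme_outstanding)                       # 4,294,901,760
print(nvme_outstanding // ahci_outstanding)   # over 134 million times more
```

That enormous command capacity is what lets NVMe drives exploit the internal parallelism of NAND in a way AHCI never could.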
Enter PCIe 4.0
Solid state disk manufacturers are now beginning to deploy PCIe 4.0 in their drives. PCIe 4.0 doubles the data throughput from 32GB/s to 64GB/s for an x16 link, that is, assuming the underlying hardware supports this new specification. This increase in bandwidth should be very pleasing to those involved in the processing of, for example, stereoscopic images or the large data sets required by artificial intelligence.
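The 32GB/s and 64GB/s figures can be sanity-checked with approximate per-lane rates. These are rounded duplex figures (counting both directions, as the totals above do), not exact specification values:

```python
# Approximate duplex throughput per PCIe lane (GB/s), both directions combined.
per_lane_duplex = {"PCIe 3.0": 2.0, "PCIe 4.0": 4.0}  # rounded figures

for gen, gbs in per_lane_duplex.items():
    print(f"{gen} x16: ~{gbs * 16:.0f} GB/s")  # ~32 and ~64 GB/s respectively
```

Each PCIe generation roughly doubles the per-lane signalling rate, which is why the x16 totals double too.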
Optimising PCIe with NVMe
But how about optimising PCIe so that it works smarter and faster while minimising motherboard compatibility issues? That’s where the Non-Volatile Memory Express (NVMe) standard comes into play. NVMe is a data pathway standard devised by the NVM Express consortium, a group of chip makers, controller designers and solid state disk manufacturers (such as SanDisk, Samsung and Intel). Their mission is to enhance the I/O functionality and performance of solid state disks using NVMe over PCIe, but also to promote industry-wide interoperability and adoption of the NVMe standard. NVMe is currently on version 1.4.
Already NVMe allows SSDs to exploit parallelism, which allows the concurrent processing of I/O requests. One of the biggest changes from NVMe 1.3, however, is a feature known as “I/O determinism”. This feature allows an SSD to be configured into multiple “sub-drives”. Different data types (e.g. video, photo or database streams) can be partitioned so they don’t interact. This feature could prove very useful when processing hyperscale data.
Using Namespace Preferred Write Granularity, NVMe 1.4 makes TRIM commands work more efficiently
Not only does NVMe 1.4 allow for better organised data, but it also enables TRIM to work more effectively. Currently, for most SSDs, TRIM operates as a background function to ensure that data blocks or pages which are no longer needed are deleted. While in theory this sounds very efficient, often the data ranges subject to deletion can be very small or misaligned. This leads to write amplification (where entire “blocks”, instead of more granular “pages”, of user data and metadata stored on NAND are subject to erase and write cycles). And you really don’t want this process happening too much inside an SSD, because it can lead to early wear-out. However, with NVMe 1.4 a process known as Namespace Preferred Write Granularity has been introduced. NPWG helps SSDs report richer information back to the host, such as page sizes and erase block sizes. In turn, this allows TRIM commands to be executed in a more granular and targeted way. Executing erase commands at page level greatly reduces write amplification and extends the life of the SSD.
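The alignment problem described above can be shown with a toy model. The block and page sizes here are invented for illustration, not taken from any real drive:

```python
# Toy model: an erase block holds 8 pages. Touching any page in a block
# means the whole block must be erased (and its live pages rewritten).
PAGES_PER_BLOCK = 8

def blocks_touched(trim_pages):
    """Return the set of erase blocks affected by a range of TRIMmed pages."""
    return {p // PAGES_PER_BLOCK for p in trim_pages}

# A misaligned 8-page TRIM (pages 4-11) straddles two erase blocks...
misaligned = blocks_touched(range(4, 12))
# ...while an aligned one of the same size (pages 8-15) touches only one.
aligned = blocks_touched(range(8, 16))
print(len(misaligned), len(aligned))  # 2 1
```

By telling the host the drive's preferred write granularity, NPWG lets the filesystem issue TRIMs like the aligned case, halving the erase work in this example.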
The G-List for the SSD era
For years, electro-mechanical disks (HDDs) have used a G-List (short for Growth List), which records bad sectors detected by the disk’s firmware. NVMe 1.4 introduces something similar for SSDs called “Get LBA Status”. This allows an SSD to report LBA ranges (areas of blocks) which are likely to return a read error if a read or verify command is executed. This feature should hopefully enable solid state disk OEMs, vendors and third-party software developers to develop more accurate SSD diagnostic and monitoring tools. Already, on GitHub, we’re seeing how the relative openness of the NVMe standard has led to independent software developers across the world producing fairly nifty firmware hacks for NVMe-based SSDs.
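Conceptually, a host tool consuming “Get LBA Status”-style data just checks reads against the ranges the drive reports. The ranges and helper below are hypothetical, a sketch of the idea rather than the NVMe data structures themselves:

```python
# Hypothetical list of (start_lba, length) ranges the drive has flagged
# as likely to return a read error.
suspect_ranges = [(1_000, 24), (50_000, 8)]

def is_suspect(lba):
    """Would reading this LBA fall inside a drive-reported risky range?"""
    return any(start <= lba < start + length for start, length in suspect_ranges)

print(is_suspect(1_010))   # True: inside the 1,000-1,023 range
print(is_suspect(2_000))   # False: not flagged
```

A diagnostic or imaging tool could use this to skip or deprioritise fragile regions, much as HDD imagers work around G-List entries today.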
Looking forward: NVMe 2.0 to introduce Zoned Namespaces
Diverse data types dispersed across a disk’s NAND plane can also make write, read and erase cycles very inefficient. Take, for example, a user like a video producer or editor, whose SSD might contain a varied range of file types such as audio, video, music or photos (e.g. MP4, AIFF, RAW). Thanks to the SSD’s controller and a process known as wear-levelling, these files will be written evenly across the SSD’s NAND blocks. This is important because it prevents the same blocks being written to, or erased, continually. (Think of wear-levelling as being like a car park attendant: he tries to prevent drivers all parking their cars near the lifts and encourages them to park more evenly throughout the car park.) Spreading out the writes in a more distributed manner across the NAND cells reduces disk wear-out.

However, there is a problem with this. Each time a read, write or erase command is issued, the controller has a lot more work to do because the data is stored non-sequentially. The latency introduced by storing data non-sequentially is a well-known problem. Some readers might remember how, on operating systems such as Windows 98, ME or XP, non-sequential data stored on the host’s disk would necessitate running inbuilt or third-party disk defragmentation applications to improve disk performance. Almost two decades later, SSDs are facing the same problem that old spinning Maxtor, Seagate (or even Quantum Fireball…) disk inside your first PC experienced.

NVMe 2.0 promises to eliminate this problem using Zoned Namespaces (ZNS). This protocol (already in use by Shingled Magnetic Recording HDDs) divides the disk’s logical address space into fixed-size ranges and enforces sequential write rules. Each zone must be written sequentially. If the host or application violates this rule, an error is generated. ZNS allows the SSD to match the workloads of the host to the natural erase patterns of flash (NAND) memory.
This results in significantly reduced latency, reduced over-provisioning, less garbage collection and less write amplification. And ultimately, it allows for much faster I/O operations.
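The sequential-write rule at the heart of ZNS can be sketched in a few lines. This is a minimal toy model of a zone with a write pointer, not the actual NVMe zone state machine:

```python
# Minimal sketch of a ZNS-style zone: writes must land exactly at the
# zone's write pointer, otherwise the device rejects them.
class Zone:
    def __init__(self, start, size):
        self.start = start
        self.size = size
        self.write_pointer = start  # next LBA that may be written

    def write(self, lba, nblocks):
        if lba != self.write_pointer:
            raise ValueError("zone violation: writes must be sequential")
        self.write_pointer += nblocks

zone = Zone(start=0, size=1024)
zone.write(0, 8)        # OK: at the write pointer
zone.write(8, 8)        # OK: immediately follows the previous write
try:
    zone.write(100, 8)  # rejected: out-of-order write
except ValueError as e:
    print(e)
```

Because every zone fills front-to-back, the controller can erase a whole zone in one go instead of garbage-collecting scattered pages.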
In the context of data recovery, NVMe 1.4 introduces a potentially useful feature known as the “persistent error log”. This basically acts as a black-box flight recorder for NVMe storage devices, allowing the vendor, OEM or data recovery technician to access the event logs of the device. Disk event logging is nothing new; HDD manufacturers such as Seagate and WD have featured it before. However, the logging was usually specific to a hard disk family or model. (And just to complicate things, for more recent HDD models, the logs are sometimes inaccessible due to locked firmware…) The “persistent error log” feature on NVMe SSDs promises to be completely open (as opposed to locked down or encrypted…) and uses standardised (as opposed to manufacturer-specific) logs. The PEL will record important disk operating telemetry such as health snapshots, NVMe namespace changes, firmware commits, power-on logs, reset logs, hardware errors and logs pertaining to disk format and sanitise commands. This should make the diagnosis of complex SSD firmware or Flash Translation Layer (FTL) problems less time-consuming and hopefully expedite the data recovery process.
Another interesting trend Drive Rescue noticed at Embedded World 2020 is the return of removable WORM (Write Once Read Many) media for the SSD era. Most of you will already be familiar with WORM media such as CD-R, DVD-R and BD-R. Well, flash storage companies like Silicon Power are making WORM versions of their SD cards. These are tamper-proof SD cards (at least against soft forms of tampering…) which, once written to, cannot be modified. Nor can the write protection be disabled by flicking the “read only” switch on the side, as on standard SD cards. The write protection on WORM SD cards is hard-coded. Wondering about the practical applications of such cards, the super-informative Silicon Power team told me that these non-erasable cards have wide application in areas where data integrity is paramount, such as POS equipment, body cams, electronic voting and medical systems.
According to the Silicon Power team, USB memory sticks are still being sold in large quantities. They explained how their ease-of-use and offline availability, along with fast transfer speeds, are still a very attractive proposition to lots of computer users. The death of USB-based flash storage has been greatly exaggerated. The company has even introduced USB 3.2 memory sticks such as the Helios 202, which offers sequential speeds of up to 5Gbps and capacities of up to 256GB. Silicon Power source their NAND from Toshiba (now called Kioxia) or WD (SanDisk), with controllers supplied by Phison, Silicon Motion or Marvell.
USB-C… More than just a connection type
Transcend, another major player in flash memory products, was also at Embedded World 2020. Their team sees USB-C as the next big thing in portable storage. The reason? Well, unlike USB 3.2, USB 3.1 etc., USB-C is generally more widely supported by tablets and phones. It offers multi-platform interoperability between Windows, Apple and Android. It also offers theoretical sequential data transfer speeds of 10 Gbps. In fact, one congenial Transcend representative sees USB-C as more than just a connection type, but rather a platform in itself. It can, for instance, be connected directly to external displays. (Monitor manufacturers like LG, BenQ and Dell now support USB-C in some of their high-end displays.) USB-C also supports daisy-chaining, a useful feature which allows multiple external hard disks to be interlinked whilst appearing to the host as independent drives.
RAID hasn’t gone away you know…
USB memory sticks are likely to be around for many years to come, and so too are a lot of other storage technologies borrowed from the world of spinning hard disks (HDDs). As already mentioned, Zoned Storage borrows from Shingled Magnetic Recording in HDDs and is now being deployed in some NVMe-based solid state drives. Native Command Queuing (a protocol which allows a drive to re-order queued commands for efficiency) is another spinning-disk technology which has crossed the chasm into the SSD world. And let’s not forget RAID: a technology which some predicted would fizzle out has not gone away either. Not only are the principles of RAID virtualisation applied to the NAND arrays found on individual SSDs, but RAID is now extensively used in conjunction with SSDs for redundancy and to increase storage capacity (just like with HDDs). For example, modern ATM and ticketing machines make extensive use of SSD RAID arrays. Modern NAS devices from manufacturers such as Synology and QNAP now offer compatibility with S-ATA and M.2 SSDs. In fact, Synology have introduced PCIe add-in adaptor cards for some of their high-end devices which support M.2 2280, 2260 and 2242 SSDs. WD has recently introduced its SA500 NAS SATA SSD range, designed to work with NAS devices.
Hardware manufacturers such as Icydock have introduced products such as the ToughArmor 4-port SSD bay for NVMe M.2 disks. Designed for handling intensive I/O workloads such as deep learning or 4K/8K video, it can offer speeds of 32Gbps over a MiniSAS connector, with the host or RAID card managing the array. Rather than sunsetting RAID, solid state storage has actually given it a new lease of life.
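The redundancy principle that RAID carries over from HDDs to SSDs can be shown with a minimal sketch of RAID 5-style XOR parity (illustrative Python only, not how any particular RAID controller is implemented): one parity block lets a stripe survive the loss of any single drive.

```python
from functools import reduce

def xor_bytes(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# One stripe across three data disks, plus a parity block (RAID 5 principle).
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_bytes(data)

# If one disk fails, its block is rebuilt by XOR-ing the survivors with parity.
rebuilt = xor_bytes([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt block:", rebuilt)   # prints: rebuilt block: b'BBBB'
```

Whether the members of the array are spinning disks or NVMe SSDs makes no difference to this arithmetic, which is why RAID has transferred so readily to solid state storage.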
The role of gamers in solid state disk development
Adata, another major manufacturer of solid state disks and flash-based storage devices, was also in attendance. Commonly known for consumer-level SSDs such as its popular SU630, SU650 and SU800 models, the manufacturer also produces solid state disks for the gaming and industrial markets. For the gaming market it produces an M.2 PCIe range of “XPG”-branded SSDs such as the SX6000, SX7000 and SX8200. For disk manufacturers this market can be quite challenging because gamers are a discerning bunch of customers. They tend to perform extensive pre-purchase research and often demand high-performance, low-latency disks.
Unsurprisingly then, perhaps, gamers have driven a substantial number of breakthrough information technology products, such as the sound card and the graphics card. (The gaming industry has also been responsible for bestowing upon the world neon-illuminated keyboards and other PC components which look like props from a Star Wars film…) Gamers using virtual and augmented reality today are not just playing; they are fine-tuning technologies which, once refined, will most likely trickle into the mainstream in the not-too-distant future. In this regard, the informative Adata representative explained how the consumer and gaming market offers his company valuable feedback and insights into real-life disk performance and reliability – learnings which could probably never be gleaned from a synthetic SSD benchmark test.
These insights can be leveraged to design high-reliability industrial-class solid state disks such as the Adata ISSS332 (an S-ATA III disk using MLC NAND). For the industrial SSD market, not only are criteria such as reliability and power protection important, but disk models also need to have a fixed bill of materials (BOM). (Unlike the consumer disk market, where a manufacturer or vendor can change key components willy-nilly whilst retaining the same disk model number.) This matters because it ensures reliability, performance and consistency in environments sensitive to component changes. For example, operators of medical diagnostic machinery may require an SSD with a large DRAM cache for their machinery to run smoothly. If the SSD manufacturer or vendor decided to change or remove the DRAM cache in subsequent production runs, this could easily impact the running of a finely tuned machine. A “fixed BOM” guarantees that all components and specifications pertaining to a specific disk model will remain unchanged for the duration of its production run.
Overall, despite the virus threat, Embedded World 2020 exhibitors and speakers provided some very interesting insights into the fast-moving world of solid state disks.
Drive Rescue are based in Dublin, Ireland and provide an SSD data recovery service for S-ATA and M.2 drives which are no longer being recognised by your computer. We recover from brands such as Lite-On, Crucial, Samsung, WD and Lenovo. We also recover from USB memory sticks (flash drives). Phone us on 1890 571 571.
There is an enduring myth out there that if you swap the PCB (printed circuit board) or controller board on your non-working hard disk with a similar one, it will start working again. Unfortunately, while this might have worked well for disks manufactured in the early 2000s, it no longer works with modern hard disks.
But before we go into specifics, remember that there are multiple reasons why a hard disk won’t work. The disk heads can fail, the firmware or file system (NTFS, HFS+, APFS, exFAT, EXT4 etc.) can become corrupt, or the spindle motor can fail. Failure of the PCB is just one of many possible causes. Without thorough diagnostics, deciding that the PCB is the problem is like the drunkard who has lost his keys at night: the first place he will look is under the street light, because that seems like the easiest place to find them. For some distressed computer users who’ve just lost data, their disk’s PCB also seems like an easy place to find a quick data recovery solution!
Why won’t swapping my hard disk PCB work?
Unfortunately, when recovering data from modern drives, swapping in a PCB from a similar drive (even one with the same model number) won’t work. This is because of adaptive information which is stored on your disk’s ROM chip. On Western Digital drives, for example, it’s the chip marked U12 or U14. Other times, the adaptive information is integrated into the main controller IC. Some adaptive information is also stored in the Service Area of your disk. Adaptive data is unique to your disk, and your disk will not run properly without it.
This adaptive information might include:
Microcode for disk initialisation
Voltage levels for each disk-head
PRML channel amplification information
Disk head write and read currents
Head allocation for read/write zones
This is all crucial information needed for your hard disk to run. If you’ve performed complete diagnostics on your disk and you’re absolutely certain that the PCB is the problem, then a ROM chip swap is needed. Here, the ROM or main IC needs to be micro-desoldered from the original board and soldered onto an exact-match donor board. This is an extremely intricate job which ideally should be performed by someone who has successfully completed it hundreds of times before.
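Why a donor board with different adaptives fails can be illustrated with a hypothetical sketch (the parameter names and values below are invented for illustration; real adaptive data is proprietary and far more extensive): the heads only initialise when the board’s stored calibration matches what this particular head-disk assembly was tuned to at the factory.

```python
# Hypothetical per-head adaptive parameters (values are illustrative only).
original_rom = {"head_0_read_current_mA": 2.31,
                "head_1_read_current_mA": 2.47,
                "prml_channel_gain": 0.82}

donor_rom    = {"head_0_read_current_mA": 2.05,
                "head_1_read_current_mA": 2.60,
                "prml_channel_gain": 0.77}

def heads_calibrated(rom, factory_expected, tolerance=0.01):
    """The head-disk assembly only initialises if the board's adaptive
    values match the heads' factory calibration within tolerance."""
    return all(abs(rom[k] - v) <= tolerance for k, v in factory_expected.items())

factory_calibration = dict(original_rom)
print(heads_calibrated(original_rom, factory_calibration))  # prints: True
print(heads_calibrated(donor_rom, factory_calibration))     # prints: False
```

Swapping the bare PCB is like swapping `donor_rom` in above: even though the board is electrically identical, its adaptives describe a different set of heads, so the drive refuses to come ready. Moving the original ROM chip onto the donor board carries the correct adaptives across, which is why the chip swap works where the bare board swap does not.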
Drive Rescue, data recovery Dublin, offer a complete hard disk PCB (controller board) repair service for Seagate S-ATA Barracuda disks (7200.9, 7200.10, 7200.12), WD Passport, WD Blue, WD Green, WD Red and HGST (Hitachi) disks.