Drive Rescue normally performs data recovery from physical storage devices such as hard disks, SSDs, USB drives, memory cards, servers and NAS devices. But last week, a customer from an island off the west coast of Ireland called us, desperately wanting to know how to recover deleted emails from their Gmail (G Suite for business) account. Thankfully, Gmail stores deleted emails for the last 30 days. Moreover, the data recovery process for this is a cinch.
Recovering deleted emails from Gmail (G Suite edition) is relatively painless:
1) Log in to the account needing recovery at: admin.google.com
2) Click on More
3) Click on Restore Data
4) Select the Date Range (remember, 30 days is the maximum amount of time you can wind back)
5) Select the application. Google Drive is pre-selected as the default, but using the drop-down arrow, you can select Gmail.
6) Click on Restore.
7) Your deleted emails from the last 30 days should now be recovered.
The same process can be followed for recovering data deleted from Google Drive – you simply leave “Drive” selected instead of choosing Gmail. Easy peasy!
Accurate hard disk monitoring and diagnostic tools are an essential part of every IT admin’s toolbox. So, we’ve put together a short list of hard disk monitoring and testing tools along with some best practice tips on their operation.
Hard Disk Monitoring and Diagnostic Tools 2018
Hard Disk Sentinel – a nice, simple monitoring tool which indicates disk health and temperature.
HDDLife – another reasonably accurate hard disk monitoring tool. Also comes in an SSD variety (SSDLife).
CrystalDiskInfo – fantastic monitoring tool which displays SMART attributes such as the reallocated sector count and uncorrectable sector count.
HD Tune – offers a nice graphical representation of disk performance and scans for errors. Supports SSDs from brands such as OCZ and Samsung.
Computer manufacturers like Dell and Lenovo offer their own built-in disk diagnostic tools. The latter offers their Lenovo Solution Center tools while Dell offers their Pre-Boot System Assessment. Both do a reasonable job of testing mechanical hard disks.
Testing a Seagate, WD or HGST hard disk using manufacturer utilities.
The disk manufacturers themselves also offer diagnostic tools, such as SeaTools (Seagate) and WD Lifeguard (Western Digital). HGST offer their Windows Drive Fitness Test (WinDFT). For some reason, Toshiba offer no disk diagnostic tools at all!
Testing an Apple Mac hard disk
There is a real paucity of accurate disk monitoring or testing tools for Apple’s OS X. Many end-users erroneously believe that the “First Aid” utility provided by Disk Utility can check the health of a disk – it does not! It just checks the integrity of the file system.
If you need to test the hard disk in an Apple Mac, we highly recommend SMART Utility from Volitans Software, which really stands out from the pack for its accuracy and reliability.
Solid State Disks are different…
Because SSDs are designed using manufacturer-specific schemas, for best accuracy you really need to download their own diagnostic utilities. Most of these can be found on the relevant manufacturers’ website.
SanDisk – SanDisk SSD Dashboard
Samsung – Magician
Crucial – Crucial Storage Executive
Best Practice Tips on running Accurate Hard Disk Tests
1) If your hard disk diagnostic test halts half-way through and appears to have frozen, this can be indicative of a defective hard disk.
2) It is important to remember that a hard disk diagnostic test will not always detect early-stage failure. This is because most diagnostic utilities only scan a random sample of the disk’s sectors. Even a long diagnostic test might “pass” a disk in the early stages of failure: under most jurisdictions, manufacturers are obliged to offer an RMA (return merchandise authorisation) policy, and most have set quite a high threshold for a disk to be deemed “failed”.
3) Most firmware issues will not be detected by hard disk diagnostic utilities.
4) If you suspect software, malware or OS issues are interfering with the accuracy of the test, slave the disk to another system (using a direct M.2 / NGFF / mSATA / S-ATA / P-ATA connection to the system’s motherboard, or just use a USB dock). Alternatively, you can use a bootable ISO containing hard disk diagnostic utilities.
5) Remember that some bootable diagnostics, such as Seagate’s SeaTools for DOS, will only run if the system’s BIOS/UEFI is in IDE mode (as opposed to AHCI mode).
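Tip 2 above – that diagnostic utilities often only sample sectors – can be worked around by running a full sequential surface read yourself. Below is a minimal Python sketch (the function name and the 512-byte sector size are our own assumptions) which reads a disk image block-by-block and counts unreadable sectors. On a real disk you would point it at the raw device (e.g. /dev/sdb on Linux, with root privileges) and obtain the device size with a seek to the end rather than os.path.getsize.

```python
import os

SECTOR_SIZE = 512          # assumed logical sector size
CHUNK_SECTORS = 2048       # read 1 MiB at a time for speed

def surface_scan(path):
    """Sequentially read every sector of a disk image and return
    (total_sectors_read, bad_sectors), where 'bad' means the read
    raised an I/O error."""
    total = bad = 0
    # NOTE: os.path.getsize() works for image files; for a raw block
    # device you would seek to the end (os.SEEK_END) to get its size.
    size = os.path.getsize(path)
    with open(path, "rb", buffering=0) as disk:
        offset = 0
        while offset < size:
            length = min(CHUNK_SECTORS * SECTOR_SIZE, size - offset)
            try:
                disk.seek(offset)
                disk.read(length)
            except OSError:
                # A big read failed: retry sector-by-sector to
                # isolate and count the individual bad sectors.
                for s in range(offset, offset + length, SECTOR_SIZE):
                    try:
                        disk.seek(s)
                        disk.read(SECTOR_SIZE)
                    except OSError:
                        bad += 1
            offset += length
            total += length // SECTOR_SIZE
    return total, bad
```

A healthy disk should report zero bad sectors; any non-zero count on a sequential pass like this is grounds to start thinking about a replacement.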
We helped a customer last week who believed that his data was being safely backed up to his Apple iCloud drive when in fact his Mac was performing a phantom backup.
Let me explain. Our customer discovered that the 2.5” hard disk (HGST Z5K-500) inside his MacBook Pro had failed. Believing that iCloud had his back covered, he logged into his iCloud Drive and reassuringly saw the whole folder structure of his failed disk. But to his horror, when he clicked on these folders most of them were empty. The folders were there, but the files were not at home… It transpired that iCloud, whilst giving the impression of a complete disk sync had just synced the folder structure of his disk.
Stored on his iCloud account were what can only be described as “Phantom Backups” – empty shell folders created by the iCloud application which merely gave the illusion of backup. And this is a surprisingly common problem.
The Cause of Phantom Backups
There are a number of reasons why phantom backups get created. First of all, from a software development perspective, online syncing or backup from an endpoint device such as a laptop to a remote (cloud) server is actually quite a delicate process. The backup or syncing application must get deep support from the operating system to function. It must then replicate the folder structure of the source disk. Then, it must transfer data using data packets of a reasonable size. Then, the app must correctly decide which files have changed since the last sync or backup. This whole process is dependent on a good quality, stable internet connection with a reasonable upload speed. And for the whole process to complete successfully, it is preferable that the user does not max out their upstream bandwidth by simultaneously uploading boxsets to P2P sites, let their battery run flat or turn off their system mid-sync. So, that’s a lot of boxes to be ticked.
Don’t Get Fooled by Data Mirages
However, even if this process fails, a user can still log in to their iCloud, OneDrive, Amazon Drive etc. account and will probably see what looks like a backup of their files. On the surface, the folder structure will all be there, which looks very reassuring. But, like a parched trekker in the desert discovering that the apparent spring ahead of them is a mirage, discovering empty folders on a cloud server after a data loss event can be just as gut-wrenching.
How do Phantom Backups Happen?
Well, one of the first functions an online sync or backup app will perform is the re-creation of the folder structure as selected by the user. This might be just a few folders; sometimes it can be a whole disk. Once the folder structure is in place, the application will try to populate these folders with data. But here’s the rub: even if something does go askew during this process, the whole folder structure will appear to the user anyway, even if it is just a pile of shell folders.
Preventing Phantom Backups
The importance of backup verification cannot be stressed enough. Dropbox does this very well: a tiny green dot beside the folder on the source device indicates a successful sync. Other backup providers use less slick means, like sending the user an email confirmation of backup – which is nice, but unless it contains a complete file listing, it could be misleading. And of course, there is always the manual “eyeball” verification method, where the user logs in to the cloud server and physically inspects their files. Using this strategy, the user should pick folders at random, drilling down while checking for file completeness and integrity.
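For the “eyeball” method described above, a little scripting can take the tedium out of spot-checking. The Python sketch below (the function names are our own) walks a source folder and its supposed backup copy, flagging missing files, files whose contents differ, and the empty “shell” folders that characterise a phantom backup. It assumes the cloud copy has been mounted or downloaded locally as an ordinary folder.

```python
import hashlib
from pathlib import Path

def file_digest(path):
    """MD5 of a file's contents – fine for integrity spot-checks,
    though not for security purposes."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source_dir, backup_dir):
    """Compare a source folder tree against its backup copy.
    Returns (missing, mismatched, shell_folders), where shell
    folders exist in the backup but are empty despite their
    source counterparts containing files."""
    source, backup = Path(source_dir), Path(backup_dir)
    missing, mismatched, shells = [], [], []
    for src in source.rglob("*"):
        rel = src.relative_to(source)
        dst = backup / rel
        if src.is_dir():
            # a populated source folder whose backup twin is empty
            # is exactly the "phantom backup" symptom
            if dst.is_dir() and any(src.iterdir()) and not any(dst.iterdir()):
                shells.append(rel)
        elif not dst.is_file():
            missing.append(rel)
        elif file_digest(src) != file_digest(dst):
            mismatched.append(rel)
    return missing, mismatched, shells
```

If all three lists come back empty, the backup genuinely mirrors the source; a long list of shell folders is your cue to investigate before a disk fails, not after.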
It could be construed from this blog post that syncing is a form of backup – it’s not. Yes, it’s better than nothing, but if you’re using a syncing application as a sole backup means – it really needs to be complemented with another (preferably) isolated backup medium. Remember syncing applications can be pretty lousy at protecting your data from sabotage, ransomware or accidental deletion. But that topic alone merits another blog post.
Drive Rescue Data Recovery is based in Dublin, Ireland. We offer an Apple data recovery service for MacBook, MacBook Pro, MacBook Air and Mac Mini devices along with iMacs. Find out more at: www.datarecoverydublin.ie
Last week Drive Rescue was in Nuremberg, Germany for Embedded World 2018. The Embedded World exhibition and conference covers a lot of areas including IoT, defence, aviation, automotive and industrial electronics. It is also one of the largest gatherings of global hard disk manufacturers in Europe, where they display their wares and discuss their future roadmaps. It was fascinating to see the latest in the world of storage devices and interesting to speak to the hard disk manufacturers first-hand.
A number of themes emerged at this year’s exhibition and conference, including the focus which disk manufacturers are placing on data security (no doubt spurred by the impending GDPR legislation) and the emergence of SSD technology as the de facto standard for many industrial and consumer-level devices.
Disk Manufacturers taking the GDPR to Heart
Disk manufacturers, especially European-based ones, seemed to have taken the GDPR legislation to heart, with companies such as Integral (UK) and Swissbit (Switzerland) now offering an extensive range of SSDs and USB memory devices equipped with hardware-level AES encryption. For example, Swissbit have now introduced a USB memory stick (PU-50n DP) which uses native AES-256 encryption and which is also capable of storing an audit trail. Its audit-trail functionality is enabled by storing WORM (Write Once Read Many) data on a cleverly located hidden partition. To counter brute-force hacking attempts, the PU-50n DP comes equipped with a hardware retry counter which limits the number of passwords that can be inputted within a set time frame.
Integral also displayed a range of AES-256 encrypted USB memory drives which are FIPS 140-2 approved and employ a dual-password system. So, if the user does forget their password, their IT administrator can regain access using a master password. But pity the poor user who decides to go it alone, because after only six failed access attempts, the data and the encryption keys are all destroyed. Crikey – only a step away from the device spontaneously combusting…
Integral were also displaying their range of AES-256 “Crypto” SSDs. Like their USB drives, they are FIPS 140-2 approved, and if the number of failed password attempts is exceeded, the encryption keys and data get destroyed. A “high strength” 8–16 character alphanumeric password must be set, which presumably precludes users inputting “PaSSwoRd” or “fluffy123” as their password. According to the Integral marketing bumf, their encryption is compatible with most endpoint security applications as it uses a configurable “hardware ID” for compatibility. In the next couple of months, it will be interesting to see whether hardware SSD encryption schemes such as this pose a challenge to the hegemony of McAfee, Symantec, Sophos et al. in the full disk encryption arena.
And speaking of McAfee, Hagiwara Solutions (Japan) have brought out an AES-256 encrypted USB stick which also contains anti-virus detection powered by McAfee software. Virus propagation prevention and encryption all in the same package!
The Last Hurrah of S-ATA
The venerable S-ATA bus interface has served the IT world exceedingly well over the last fifteen years. It liberated many an IT administrator from the tyranny of jumper settings and, compared to its P-ATA predecessor, introduced a seismic leap in faster disk I/O operations. But even though S-ATA has progressed to its third iteration (SATA 3.0), for most solid-state disks this bus standard and its command protocol, AHCI, have now become a data bottleneck. For example, NVMe supports up to 65,536 command queues, each up to 65,536 commands deep, compared with the single queue of just 32 commands offered by AHCI over S-ATA 3.0.
And while S-ATA III and AHCI will probably be around for legacy applications for quite some time yet, its days of being the primary interface for internal storage devices are probably numbered. So, if S-ATA is on the way out, what is going to replace it? Enter PCIe and NVMe.
All aboard the NVMe bus
NVMe (Non-Volatile Memory Express) is a protocol designed by a consortium of hard disk and NAND manufacturers – including Western Digital, Samsung, Toshiba, Intel and Micron – to standardise the interface and interoperability of PCIe storage. In essence, the NVMe standard allows disk manufacturers to fully exploit the parallelism afforded by PCIe when interfacing with host systems. Operating systems need only one driver for NVMe compatibility. This driver has been included in Windows from version 8.1 upwards and in Apple’s OS X from version 10.10.3 (Yosemite).
The rise of the M.2 Form Factor
There were plenty of SSDs at Embedded World 2018 still using variants of the S-ATA connector, such as mSATA (52 pins, split into 16-pin and 36-pin sections) and Slim SATA (a standard 22-pin S-ATA connector). However, the M.2 form factor (pronounced “m dot two”) was by far the most common on display. The first generation of this form factor came in the guise of NGFF (Next Generation Form Factor), which is rarely seen these days. The present-day M.2 form factor uses three socket types: the “B key” edge connector, the “M key” edge connector and the “B and M” key connector. A “B key” uses up to two PCIe lanes, whereas an “M key” can use up to four. A substantial number of SSDs use the “B and M” key connector so they can connect to either socket. Each connector pin is rated for 50V and 0.5A. Using PCIe 3.0 x4, the M.2 standard allows for blisteringly fast data throughput of up to 31.5Gb/s, compared to the 4.8Gb/s of usable throughput offered by SATA 3.0.
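Those headline throughput figures are not arbitrary – they fall out of the line rates and encoding overheads of each bus. A quick back-of-the-envelope check in Python (the 8GT/s, 6Gb/s, 128b/130b and 8b/10b figures are standard interface specifications rather than numbers taken from this article):

```python
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding,
# so usable bandwidth per lane is 8 * 128/130 Gb/s.
pcie3_lane_gbps = 8 * 128 / 130      # ~7.88 Gb/s per lane
m2_x4_gbps = 4 * pcie3_lane_gbps     # four lanes on an "M key" M.2 slot

# SATA 3.0 runs at 6 Gb/s with 8b/10b encoding (20% overhead).
sata3_gbps = 6 * 8 / 10

print(round(m2_x4_gbps, 1))  # 31.5
print(sata3_gbps)            # 4.8
```

In other words, a four-lane M.2 slot offers roughly six and a half times the usable bandwidth of a SATA 3.0 link before the disk itself even enters the equation.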
It might be easy to dismiss the NVMe standard as just a marketing gimmick – after all, it is built on PCIe architecture. But the designers of this protocol appear to have gone to great lengths to eke out every ounce of SSD I/O speed when interfacing with the host. The real-world throughput difference between an M.2 disk using PCIe x4 and MLC NAND with NVMe, versus an M.2 disk of similar specification without NVMe, can be vast.
Goodbye SLC and hello pSLC
Pure SLC NAND was harder to find than hens’ teeth at Embedded World 2018. This is because it is around ten times more expensive to manufacture than MLC, and many producers are unwilling to tie up their expensive fab (semiconductor fabrication) facilities for this niche product. So even industrial-class SSD manufacturers like Innodisk are now using pSLC (pseudo-SLC) as a happy medium between the reliability of SLC and the production costs of MLC. Pseudo-SLC (otherwise known as MLC+) uses MLC NAND, but the memory cells are operated in single-bit mode. Moreover, the voltage threshold is shifted for increased endurance, reduced error rates and better SSD longevity.
From 2D NAND to 3D NAND
Up until now, most SSD manufacturers have been using planar NAND. Typically, MLC (Multi-Level Cell) stores two bits per cell while TLC (Triple-Level Cell) stores three bits per cell. But the quantum physics envelope can only be pushed so far. As die lithographies shrink to 19nm, 15nm and 14nm, issues like cell-to-cell interference and disturbance effects start to kick in. This results in uncorrectable bit errors which even sophisticated ECC algorithms such as BCH or LDPC cannot fix, leading to lost or corrupted data. So NAND developers have responded by stacking their cells on top of each other instead of laying them out horizontally. This is analogous to building a skyscraper instead of a building which extends over a wide surface area. Samsung were one of the first manufacturers to commercialise this type of NAND, naming it “3D NAND”. Other manufacturers like SanDisk/WD, Crucial, Intel and Adata have followed suit. Today, the widespread adoption of 3D NAND has been aided by controller designers like Phison and Silicon Motion. The latter have recently released their SM2262 controller (PCIe 3.1, NVMe 1.3 compatible), designed to work optimally with 64-layer 3D NAND, while Phison have recently released their PS5007-E7 controller optimised for 3D NAND and NVMe.
3D NAND – the floating gate versus charge trap debate
Most SSD manufacturers share the consensus that 3D NAND is the best fit for consumer and enterprise-class disks. Beyond this, however, agreement diverges. 3D NAND can be deployed using floating gate or charge trap transistors. Floating gate transistors use polycrystalline silicon, whilst charge trap transistors use silicon nitride. Floating gate transistors have been around since the 1970s. Now, remember the purpose of the transistor is to “trap” electrons. To put it in very simple terms, silicon nitride holds electrons like cheese, whereas polycrystalline silicon holds them like water. Some NAND designers believe that electron “leakage” is a problem with floating gate technology, whereas with charge trap transistors the electrons are more likely to be held in place. Thus, your data might have more longevity when stored on 3D NAND using charge trap transistors. Intel is placing its bets on floating gates; Samsung, with their V-NAND technology, and Toshiba, with their BiCS technology, are backing charge trap transistors.
Of course, Intel have their critics over this choice of technology – namely, why deploy 1970s technology in 3D NAND solid-state disks when an apparently “better” transistor design exists? Well, Intel say they are using “discretised floating gates”, which they claim are more compartmentalised and better suited to preventing electron leakage. They also claim that floating gate transistors are a “tried and tested” technology, whereas charge trap flash transistors are not. It will be interesting to see the trajectory of 3D NAND over the coming months.
Drive Rescue Data Recovery are based in Dublin, Ireland. We offer a data recovery service for most SSDs, including Samsung (750 Evo, 850 Evo, 860 Evo, MZ7TY256HDHP, MZNTY256HDHP, PM841, PM851), SanDisk (SSD Plus, Ultra II, Ultra III, X110, X400), Apacer (AST 280, AS220), Crucial (MX200, MX300, MX500, M500, M550), Kingston SSD, Toshiba SSD (Q200, Q300), Toshiba Apple SSD and Intel SSD (320, 530, 540S, S3520, S3700, S4500, S4600). For more information: www.datarecoverydublin.ie Phone: 1890 571 571
While better known for their flash memory devices, the Taiwanese storage company Transcend also makes a limited range of 2.5” external hard disks. Typically, they use Samsung Momentus disks – in 500GB (ST500LM012) or 1TB (ST1000LM024) sizes. As mechanical hard disks go, these are fairly robust models. Their head disk assembly is tried-and-tested and they use fairly stable firmware.
So, what could possibly go wrong? Well, there is the perennial problem of users dropping their disks. Often, dropped disks are the result not of carelessness but of unexpected events. Take, for example, a customer who delivered a disk to our office recently. On his commute home, he had his Transcend external hard disk connected to his MacBook Air. He was seated in an aisle seat, and his fellow passenger in the window seat had dozed off. As the train was pulling into one of the stations, his neighbouring traveller suddenly woke up, looked out the window and discovered to his horror that it was his stop. Our customer, being a gentleman, jumped up from his seat, MacBook Air in hand, only to hear the clatter of an object falling onto the aisle of the carriage. It was his external hard disk. His fellow traveller alighted successfully from the train. Our customer gingerly went back to his seat to re-connect the disk, only to hear it clicking.
Luckily, only one disk head was damaged, but the head disk assembly still needed to be replaced in our clean-room. The platters had escaped damage. We were able to achieve a 99.5% recovery rate salvaging all of his PPJ (Adobe Premiere) files.
How to prevent this happening? Well, if you pull out an external disk from an OS X (Apple) system without going through the “eject” procedure, you risk corrupting the “catalog” file of HFS+ (the file system), which can also lead to data loss. Or you could use a “wireless hard disk”, but I dread to think of the security implications of these, especially when used in public places. So, it all goes back to having a robust back-up plan. Or maybe just not sitting in an aisle seat on a train…
Drive Rescue Data Recovery are based in Dublin, Ireland. We recover data from all brands of dropped external hard disk including Transcend, Adata, Toshiba, Seagate and WD. Phone 1890 571 571 www.datarecoverydublin.ie
Western Digital MyCloud NAS devices are popular in Ireland for their ease of setup and user-friendly OS. And unlike most NAS ranges from other manufacturers, the disks come pre-installed. This is probably not surprising, given that WD (unlike Synology, Buffalo, Netgear et al.) manufacture hard disks.
The entry-level models in the WD MyCloud NAS range do not offer any redundancy – they are single-bay only. However, they do come with a USB 3.0 port for external backup, which is better than nothing. Models further up the range, such as the MyCloud Mirror, MyCloud Gen 2, MyCloud EX2 and MyCloud EX4, offer mirroring, with the more advanced models offering RAID 5 and RAID 10 redundancy.
Recently, we helped a customer with an entry-level WD MyCloud which was no longer showing up on his network. He could hear it spinning. But no data appeared when he logged into MyCloud OS. So, he updated the device’s OS to version 3. He removed the 4TB disk from its casing and attached it via USB dock to his Apple Mac. But no volume showed up. He ran some DIY data recovery software on it – but that too proved unfruitful.
He brought the disk (a WD40EFRX NASware 3.0) to us. These “WD Red” disks (as they are commonly known) are generally reliable. A unique feature of them is their head-parking timing: because they are designed for NAS usage, their heads will not retreat to the disk’s parking zone until after 300 seconds of inactivity, compared to 8 seconds for a model from the “WD Green” range.
Our diagnostics revealed that several of the firmware adaptive modules were corrupt. These modules are essential in “tuning” the disk heads to the disk platters and are used for the management of disk errors. In addition, the disk had well over 12,000 bad sectors.
The data recovery process went smoothly without any surprises. Over 3.2TB of ORF (Olympus Raw), PSD (Adobe PhotoShop) and .MOV files were all successfully recovered. Everything requested by the client. Every MyCloud does have a silver lining…
Drive Rescue Data Recovery is based in Dublin, Ireland. We recover data from all of the WD MyCloud range including MyCloud Gen 2, MyCloud Pro, MyCloud EX2, MyCloud EX2 Ultra, MyCloud EX4 and MyCloud Home Duo. Call us on 1890 571 571 or find out more at: www.datarecoverydublin.ie
1) Connect your damaged USB memory stick to your Windows computer.
2) Navigate to the Device Manager. Click on the “plus” sign beside “Disk Drives”.
3) Then right-click on your damaged disk and click on “Properties”.
4) Navigate to the “Driver” tab and click “Disable”.
5) Remove your USB memory stick and insert it back into your Windows system. Go to “Device Manager” again and re-enable the device.
6) The volume containing your data should now appear under “My Computer”.
This is just one fix. Other, more advanced problems, such as a failed NAND controller, can also cause your memory stick not to be recognised by Windows.
Drive Rescue (Dublin, Ireland) offer a complete data recovery service from USB memory stick brands such as PNY, SanDisk, Adata, Sony, Integral, Toshiba and Ativa. We also perform recovery from promotional USB memory sticks.
NAS devices have never been so popular. They consume less power than a PC or server, they support RAID and their compactness means they can be stored in even the most space-deprived homes or offices. While first-generation NAS devices were basically conjoined hard disks with a built-in networking component, modern NAS devices are much more sophisticated. Most come equipped with their own operating system, such as DSM (for Synology) or QTS as used by Qnap. Most NAS devices also support file sharing protocols such as SMB and NFS, which make them ideal for OS-agnostic environments.
Even though most NAS devices support RAID redundancy, it is surprising how many users forsake this safety net in favour of performance by setting up their devices in a RAID 0 configuration.
Recently, we helped a user to recover files from his Synology NAS DS216 configured in RAID 0. Inside the array were two Western Digital Disks – a 1.5TB disk (WD15EARS) and a 2TB disk (WD20EZRX). Our diagnostics revealed that the latter disk had firmware issues. Once these were resolved, we imaged both disks. Using both disk images, we rebuilt the RAID 0 using its original parameters. The file system used was EXT4.
We recovered over 2TB of Final Cut Pro (.fcp) files along with .MOV and .AVI files – everything which the client needed.
In this case, the user made the mistake of using RAID 0 – but can you spot the second mistake in the photo above? He also used two WD Green disks. For NAS devices, this is another common technical faux pas. “Eco-class” disks and many standard “desktop-class” disks do not support the TLER (time-limited error recovery) functionality needed to minimise errors in a RAID environment. “NAS-class” disks, such as WD’s NASware disks or HGST’s Deskstar NAS disks, are recommended. But perhaps the greatest step the user could have taken was to have his data backed up! Apps in Synology’s DSM facilitate this, as do third-party services such as Backblaze B2 and sync tools such as Resilio.
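For the curious, reassembling a RAID 0 from member-disk images is conceptually simple once the stripe (chunk) size and disk order are known – those are the “original parameters” referred to above. The toy Python sketch below interleaves chunks from each image in turn; the function name and the default 64KiB chunk size are illustrative assumptions, as real arrays vary.

```python
def rebuild_raid0(images, chunk_size=64 * 1024):
    """Interleave fixed-size chunks from each member-disk image
    (given in array order) to reconstruct the striped volume."""
    volume = bytearray()
    offsets = [0] * len(images)
    while True:
        progressed = False
        for i, img in enumerate(images):
            chunk = img[offsets[i]:offsets[i] + chunk_size]
            if chunk:
                volume += chunk
                offsets[i] += len(chunk)
                progressed = True
        if not progressed:          # every image exhausted
            return bytes(volume)
```

In practice, the chunk size and disk order are recovered by inspecting on-disk metadata, or by trial and error until the reconstructed volume’s file system mounts cleanly.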
You have a Microsoft Office file (such as Word, Excel, etc.) which you (or your user) have been working on all week. But now, for whatever reason, the file has either become corrupted or been overwritten.
Hours of work wasted? Maybe not. In Windows, the quickest recovery route possible is the much-forgotten and underused feature called Shadow Copy. The steps to recovery are easy. 1) Right-click the overwritten or corrupted file and click ‘Properties’. 2) Select ‘Previous Versions’. 3) If you want to view the old version, click ‘View’. To copy the old version to another location, simply click ‘Copy’ and you’re done. The quickest data recovery ever!
Generally speaking, good quality CD-R, CD-RW, DVD-R and DVD-RW discs are a fairly reliable backup medium. This is assuming, however, that they are kept out of direct sunlight (UV) and free from deep scratches. But occasionally user error can result in data loss – like the gentleman from Kildare we helped last week. He had an Outlook .PST file from his old workplace stored on a Maxell DVD-RW. He inadvertently performed a “quick format” on the disc, erroneously thinking a different disc was in his DVD tray. His heart sank as he thought that 9 years’ worth of emails and contacts were now gone into the ether. He contacted us.
The data recovery process for this type of data loss is cut and dried. Firstly, we configured the read-speed settings on our DVD reader to read at the slowest possible speed and then ran a utility called IsoBuster on his disc. This recovered his .PST file quickly. However, when this file was imported into Outlook, it still would not read – the error “outlook.pst is not an outlook data file” appeared. So, we ran SCANPST on the file and re-tried. (SCANPST is a utility bundled with Microsoft Outlook to repair minor errors in .PST files.) The utility found some errors, which it repaired. The second import of the .PST file worked, with all the client’s contacts and old emails now appearing. A very quick data recovery case! I hope this post helps someone else who has experienced the same problem.
Drive Rescue Data Recovery are based in Dublin, Ireland. We recover data from most brands of hard disk including Seagate, Toshiba, WD, Samsung, Iomega, LaCie, Intenso, Adata and Transcend. Find out more on: www.datarecoverydublin.ie