1TB HGST BitLocker-Protected Disk : Data Recovery Case Study

BitLocker is a common encryption feature available in Windows Server 2008, in the Ultimate and Enterprise editions of Windows Vista and Windows 7, and in the Pro and Enterprise editions of Windows 8. It protects a computer owner from data theft in the event of a lost system or storage device, and protects against outside attacks through a network.

BitLocker uses the Advanced Encryption Standard (AES) algorithm in Cipher Block Chaining (CBC) mode, with or without a diffuser. Most default deployments of BitLocker use AES with a 128-bit or 256-bit key combined with the Elephant diffuser algorithm.

BitLocker uses a Full Volume Encryption Key (FVEK) to protect the data. This key is in turn protected by a Volume Master Key (VMK). Like many encryption applications, BitLocker allows for multi-factor authentication via a Trusted Platform Module (TPM) chip, a PIN and a USB key.

Most deployments of BitLocker are trouble-free. Occasionally, however, due to disk failure or corruption of the encryption application itself, data recovery from a BitLocker-protected disk will be needed.

Last week one of our clients, a user from an Irish Government agency, had a BitLocker-protected disk in a Lenovo laptop whose volume had become inaccessible. Their I.T. department removed the 1TB HGST disk from the laptop and attached it to another Windows 7 Enterprise system with a TPM chip onboard. The disk would not mount and was “invisible” to the system.

Challenge

The data was of critical importance and of a confidential nature. Because of those confidentiality concerns, the user had not been backing up to the department’s server.

Solution

We examined the drive. Using specialised tools, our technicians accessed the Host Protected Area (HPA) of the HGST disk. The G-List and Translator tables were all corrupt. Using our equipment, which can access the HPA directly, our technicians repaired the corrupt firmware.

This made the drive detectable again. This time, when connected to one of our recovery systems, a BitLocker volume, or Full Volume Encryption File System (FVE-FS), could be recognised by the signature “-FVE-FS-” at the start of the volume – always a promising juncture when recovering from an encrypted disk. The client then emailed us their 48-digit BitLocker recovery key in a .txt file.
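
A scan for that signature is, in essence, a search for the “-FVE-FS-” string in the OEM field of a candidate boot sector. The sketch below is a minimal illustration of the idea (the synthetic image and offsets are our own, not from the actual case):

```python
# Sketch: scan a raw disk image for the BitLocker volume header
# signature "-FVE-FS-", which sits in the OEM field (bytes 3-10)
# of the volume's first sector. Synthetic data for illustration.

SIGNATURE = b"-FVE-FS-"

def find_fve_volumes(image: bytes, sector_size: int = 512):
    """Return byte offsets of sectors whose OEM field carries
    the BitLocker signature."""
    hits = []
    for off in range(0, len(image) - sector_size + 1, sector_size):
        if image[off + 3 : off + 11] == SIGNATURE:
            hits.append(off)
    return hits

if __name__ == "__main__":
    # Tiny synthetic image: sector 0 is blank, sector 1 looks like
    # the start of a BitLocker volume.
    sector = bytearray(512)
    sector[3:11] = SIGNATURE
    image = bytes(512) + bytes(sector)
    print(find_fve_volumes(image))  # [512]
```

Finding a hit like this confirms an intact FVE volume header before any unlock attempt is made.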

We then used the following command to unlock the volume:

manage-bde -unlock e: -RecoveryPassword XXX48-digitkeyXXX

where e: was the BitLocker-protected volume.

Once the key had been entered, the volume’s partitions appeared. We invited the client to log in to our systems remotely to view and verify their data.

Result

All their files were recovered – intact. Even though their drive was working again, we extracted a copy of the data onto a USB external drive as a precaution.

Lesson: disk encryption and comprehensive backup policies should be in lockstep with each other.

The main takeaway from this case is that disk encryption and comprehensive backup policies should be in lockstep with each other. Disk encryption applications are not like other PC applications whose actions can be easily reversed. If corruption does occur on a whole-disk encrypted volume, it is not unknown for users to lose access to their data irreversibly. As for users who deliberately refrain from backing up to their company’s or organisation’s server out of confidentiality concerns – alternative, practical backup policies should be drawn up. These could take the form of local backup or backup to a personal Cloud-based service.

Data Recovery from G-Tech NAS (RAID 0, HFS+)

A Dublin-based digital marketing agency were using a G-Tech NAS to store their Photoshop and Final Cut Pro files. There were over two years’ worth of design work backed up onto the device. Last Monday morning, the folder shares for the device were not accessible from any of their computers. Thinking it was only a glitch, they rebooted the device. Still no dice. They called their tech support company, whose technician suspected that the RAID array had failed. Not enamoured with the prospect of having to redo years of graphic design work, they took their tech support company’s recommendation and contacted Drive Rescue.

Our technicians examined their NAS. RAID 0 was being used, and the drives were formatted in HFS+ (the default file system for Apple). We performed diagnostics on the drives (two 4TB Hitachi Ultrastar 7K4000s). One drive (drive 0) passed the diagnostic test with flying colours. However, its counterpart (drive 1) had extensive bad sectors. We imaged both drives using a hardware imager designed for data recovery. Then, using the images of drive 0 and drive 1, our technicians used a hex editor to find the exact parameters of the G-Tech’s RAID array, including the stripe size, disk order and offsets. Once these had been determined, it was possible to start the rebuild of the array. This took nearly 13 hours to complete, but the work was not in vain – all of the data was successfully recovered for a very satisfied client.
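
Once the stripe size and disk order are known, the de-striping step itself is conceptually simple: the logical volume is rebuilt by interleaving fixed-size stripes from each member image in turn. A minimal sketch (parameter values here are illustrative, not the actual G-Tech ones):

```python
# Sketch of RAID 0 de-striping: interleave fixed-size stripes
# from each member-disk image (given in disk order) to rebuild
# the logical volume. Toy-sized data for illustration only.

def destripe_raid0(images, stripe_size):
    """Reassemble a RAID 0 volume from member-disk images."""
    volume = bytearray()
    stripes_per_disk = len(images[0]) // stripe_size
    for row in range(stripes_per_disk):
        for disk in images:
            start = row * stripe_size
            volume += disk[start : start + stripe_size]
    return bytes(volume)

if __name__ == "__main__":
    disk0 = b"AA" + b"CC"   # holds stripes 0 and 2 of the volume
    disk1 = b"BB" + b"DD"   # holds stripes 1 and 3
    print(destripe_raid0([disk0, disk1], stripe_size=2))  # b'AABBCCDD'
```

With real images the same loop runs over gigabytes, which is why getting the stripe size and disk order right beforehand matters – a wrong guess scrambles every stripe boundary.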

Data Recovery from Iomega StorCenter ix2 NAS

The Iomega StorCenter is a common NAS device used in Irish workplaces and homes. It is fairly robust, intuitive to use and can be easily configured to work with any network.

But, like any storage device, it is prone to failure – as one Dublin-based software development company discovered last week.

They were using their StorCenter ix2 as a surrogate server to store everything from PDF files to backups of the C++ and Java files from which they build their software.

Last week, one user could not access the shares and attributed it to a glitch. Then his colleagues discovered they could no longer even see the network shares. Investigating further, they noticed that the warning indicator light on their Iomega StorCenter was flashing. They logged into the device’s management console and discovered that one of the disks, disk 0, was offline. As the ix2 is only a two-bay device, it can only be set up in RAID 0, RAID 1 or JBOD. This device was set up in RAID 0, which meant that half of their data was on a drive which had gone offline. They removed disk 0 and slaved it to a PC system, but it was totally dead. In terms of solutions, they had run out of road.

They delivered the two Seagate ES.2 drives to us. Our diagnostics revealed that disk 1 was in perfect health but disk 0 had a failed PCB (failed inductors).

ES.2 drives use Seagate’s F3 architecture. This means that the ROM on the PCB holds unique adaptive information needed for the operation of the drive.

The drive was brought to our rework station, where our technicians used hot air to carefully remove the drive’s ROM chip. (De-soldering such a delicate chip can be a messy operation.) We had an ES.2 donor board already in stock. The removed ROM chip was carefully micro-soldered onto the donor PCB (whose original ROM chip had been removed). This PCB was then fitted onto the drive.

The drive spun up with a healthy, reassuring sound. Now it was time to image both drives. The imaging process took 3.5 hours to complete.

Using both drive images, our technicians then set about rebuilding the RAID 0 array. Using a hex editor, they determined the exact parameters of the array, such as disk order, block offset and stripe size. The RAID rebuild took a couple of hours to complete.
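
One common trick for determining disk order is to look for a known file system signature at its expected offset: the member image that carries it is the array’s first disk. The sketch below illustrates the idea with an ext-style superblock magic (signature, offsets and images are illustrative, not taken from this case):

```python
# Sketch: identify the first disk of a two-disk RAID 0 by finding
# the file system signature at its expected offset. Illustrative
# values: ext2/3/4 keeps its superblock at byte 1024, with the
# 2-byte magic 0xEF53 (little-endian) at offset 56 within it.

def first_disk(images, magic, offset):
    """Return the index of the image carrying the signature,
    i.e. the array's first disk; None if no image matches."""
    for i, img in enumerate(images):
        if img[offset : offset + len(magic)] == magic:
            return i
    return None

if __name__ == "__main__":
    EXT_MAGIC = b"\x53\xef"        # 0xEF53 stored little-endian
    EXT_MAGIC_OFFSET = 1024 + 56
    diskA = bytes(4096)            # no signature: not the first disk
    diskB = bytearray(4096)
    diskB[EXT_MAGIC_OFFSET : EXT_MAGIC_OFFSET + 2] = EXT_MAGIC
    print(first_disk([diskA, bytes(diskB)], EXT_MAGIC, EXT_MAGIC_OFFSET))  # 1
```

The same approach works with any signature whose on-disk location is documented (NTFS, HFS+, XFS and so on).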

Finally, the volume was mountable again. All of their Java, Python and C++ code libraries and PDFs (which had taken months to compile) were accessible once more. The result: a very happy software development team who did not have to cover old ground re-writing code and PDF manuals.

Irish Internet Association Seminar – Data Protection and Cyber Security

Last week, Ultan O’Carroll of the Data Protection Commissioner’s office gave an excellent presentation on best-practice policies for data protection.

Below is a quick snapshot of some key points.

Knowing your data – “If there is anything you need to know – know what data you have and categorise it in some way – whether it is personal, financial and so on.” He further advised delegates that, apart from categorising your data, “you need to know where your data is – whether it is on tape, on disk, on your production server and so on.”

Access-control your data among your employees – “Not everyone needs to see all the personal data that you hold.” For example, sometimes your admin staff only need access to the address details of your customers. If the data is not within their remit, they need not be privy to it. All of this goes back to “knowing your data”.

Use access logging – find out “who logged in when”, “whether it was local or remote” and “what password they used”. “We often see things go wrong at this level,” said O’Carroll.

Have a plan to deal with data breaches within your organisation – dealing with data breaches in an ad-hoc fashion is not the best way. Data controllers must have a plan in place.

Software patching – you should have a policy in place for the patching of software, and it needs to be enforced. “We often find that top-level security patches get released but they are only applied 3–6 months after that. In that window, hackers will try to do some reconnaissance on your site.”

Passwords – having a robust password policy in your business or organisation is essential. For example, users using the same passwords for their Facebook account and their company database is not secure. Moreover, passwords need to be transmitted and stored securely: emailing or storing passwords in clear text is not good practice.

Use third parties to independently test your security – there are specialists who can independently test the security of your I.T. infrastructure. These often have their own sub-specialisations. For example, one penetration tester might specialise in e-commerce payment gateways whilst another might specialise in network penetration testing. “Test it and test it again” is the advice.

Whilst the above points are just guidelines on data protection best practice, the best data protection systems are often built from the ground up. If you want to find out more about implementing better data protection, an excellent resource is “The Privacy Engineer’s Manifesto” by Dennedy, Fox and Finneran. The authors espouse the view that “privacy will be an integral part of the next wave in the technology revolution and that innovators who are emphasizing privacy as an integral part of the product life cycle are on the right track”.

The ebook version of the book is free to download at:

http://www.apress.com/9781430263555

Bones Break…So Do RAID 5 Arrays – Data Recovery for a Physiotherapy Practice

Last week, we got a call from a Dublin physiotherapy practice. Their Dell PowerEdge server, configured in RAID 5, had failed.

Their I.T. support technician identified the problem immediately. However, for him, data recovery from a RAID 5 server was unknown territory. For this blog post, here is an abridged version of the RAID recovery process we used.

For the recovery, we decided to use mdadm, a powerful Linux utility for managing (and recovering) software RAID arrays. A good knowledge of the Linux command line and in-depth experience of this tool are essential prerequisites for its operation.

The first step in the recovery process was to determine the status of the server’s drives in situ.

We used the following command on every disk in the array:

            mdadm --examine

We were able to determine that drives /dev/sdc1 and /dev/sdd1 had failed (sdc1 being in the worse condition). Mdadm revealed that this RAID 5 had experienced a double-disk failure. We then carefully labelled each drive and removed them from the server. Then, using a specialised hardware disk imager, we imaged the disks. This meant we would be working on copies of the disks rather than the originals. In the unlikely event of the data recovery process being unsuccessful, the original configuration and data, as we received it, would still be intact.

The imaging process completed successfully and we put the imaged drives into the server. With all the prep work completed, it was now time to take the RAID array “offline”, which can be achieved with the “mdadm --stop” command. The last thing we wanted was for the RAID rebuilding process to start using a failed disk in bad condition (e.g. /dev/sdc1). To prevent this from happening, we cleared the superblock of this drive using the command:

            mdadm --zero-superblock /dev/sdc1

Now, using the output we got from “mdadm --examine”, we used the following command to rebuild the array:

            mdadm --verbose --create --metadata=0.90 /dev/md0 --chunk=128 --level=5 --raid-devices=5 /dev/sdd1 /dev/sde1 missing /dev/sda1 /dev/sdb1

We now had to check whether the array was aligned correctly using the command:

            e2fsck -B 4096 -n /dev/md0

With e2fsck, it is always helpful to specify the block size before a scan to get a more accurate status of the array. We also used the -n flag so that, if the array was misaligned, e2fsck would not attempt to fix it. (e2fsck should never be allowed to make repairs on an array that is potentially misaligned.)

E2fsck completed successfully and correctly identified the status and alignment of the array.

It was now safe to proceed with the repair-and-fix command:

            e2fsck -B 4096 /dev/md0

Notice that no “-n” was used this time. The scan took around 5.5 hours to complete. It found over 26 inode errors, hundreds of group errors and some bitmap errors.

Now, it was time to add the first failed drive back into the array. We used the command:

            mdadm --add /dev/md0 /dev/sdc1

The RAID array now began to rebuild. After a couple of hours, the RAID 5 was fully re-created, albeit in degraded mode. But the volume was mountable again and all data was accessible.
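
In degraded mode, a read of the missing member is served by XOR-ing the corresponding blocks of the surviving members with the parity block. A minimal sketch of that reconstruction (toy-sized blocks, purely illustrative):

```python
# Sketch of RAID 5 parity reconstruction: the missing member's
# data equals the XOR of the surviving members and the parity.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

if __name__ == "__main__":
    d0 = b"\x01\x02\x03\x04"
    d1 = b"\x10\x20\x30\x40"
    d2 = b"\xaa\xbb\xcc\xdd"           # this member will go "missing"
    parity = xor_blocks([d0, d1, d2])  # written at array-creation time
    # Reconstruct the missing member from the survivors plus parity:
    print(xor_blocks([d0, d1, parity]) == d2)  # True
```

This is also why RAID 5 tolerates only a single failed disk: with two members gone, the XOR equation has two unknowns and cannot be solved.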

The client had over 4 years of Sage Micropay and Sage 50 accounts files on the server. In addition, they had over 6 years’ worth of data files from PhysioTools, a software package which they used to create customised exercise regimes for their patients. Reconstructing accounts and staff payslips would have been very time-consuming and costly, and re-creating patient exercise regimes would have placed a huge time burden on their staff. Moreover, it would probably have damaged their professional reputation had they been forced to inform patients that their customised exercise regimes had been “lost”.

We advised the client on some best-practice backup strategies so they could prevent data loss in the future. It is deeply satisfying to help a customer like this when the “plan B” option would have been so disruptive for them. They could now get back to helping their patients with minimal downtime to their business.

The mystery of the continually degrading RAID 5 array

A couple of weeks ago, an I.T. support administrator for a Dublin finance company called us. He was in a spot of bother. The week before, the RAID array on their HP ProLiant server had failed. Luckily, they had a complete backup and no data had been lost. The I.T. admin decided to replace the four Western Digital enterprise S-ATA drives (WD2500YS) that had been set up in the RAID 5 configuration, swapping them for four Western Digital Caviar Green disks (WD10EZRX). Using S-ATA to SAS adaptors, he connected them to the HP Smart Array controller card.

He rebuilt the server and re-installed Windows Server 2008. But three days later, the server was down…again. In this short space of time, the RAID array had changed from “normal” to “degraded” status. He ran diagnostics on the disks; all of them passed. He suspected that the Smart Array controller card was at fault. He had a redundant server in his office using the same model of RAID controller card, so he removed it, installed it in the problematic server and, for a second time, rebuilt the array. He tested it for a couple of hours and it worked fine. Then, just as he was about to start the data transfer from the local backup, he rebooted the server. The dreaded “degraded” message appeared on the screen – again.

Being a past customer of Drive Rescue data recovery, the I.T. admin telephoned us for advice about the mysterious problem. From his description, we had a fairly good inkling as to what the cause might be. But inklings and assumptions are dangerous. Most of the great technological failures of mankind (nuclear power plant explosions, aircraft disasters, etc.) can be traced to someone, somewhere making a wrong assumption. The same applies to data recovery: good data recovery methodology is not based – and never has been – on assumptions. We asked him to email us his server event logs, hard drive model numbers and the exact model of his RAID controller. After looking at the server logs, the specs of the controller card and the model of hard disks used, it became clearer what the root cause of the problem might be.

He had the disks delivered to us and we tested them using our own equipment. His Western Digital Green disks were indeed perfectly healthy. The problem with a lot of WD Green disks (and other non-enterprise-class disks) is that when they are used in a server or NAS, the RAID controller can erroneously detect them as faulty. The reason is quite simple: in some RAID setups, if the controller card detects that a disk is taking too long to respond, it simply drops it out of the array. A typical hardware controller will only wait around 8 seconds, but a non-enterprise / non-NAS disk without error recovery control can spend far longer than that in deep error recovery. With error recovery control (ERC) enabled on a disk, error recovery is usually limited to 8 seconds or under. (With Hitachi-branded disks, for example, ERC is limited to 7 seconds.) This means the RAID controller will be far less likely to drop a healthy disk because of an ERC-related false positive.

In this case, the Smart Array RAID controller, used commonly in HP ProLiant servers, was detecting some of these disks as faulty when they were not. The most common type of error recovery control used by Western Digital is TLER (Time-Limited Error Recovery). Most WD Caviar Green drives do not have this function; WD Red (NAS) disks and WD enterprise-class disks do.

RAID controllers (especially dedicated hardware controllers like those from LSI, Dell PERC, etc.) are very sensitive to read/write time delays. When a hard disk does not use error recovery control, a RAID controller will often report false positives about the status of the array or, as happened in this case, simply drop the “defective” disk out of the array.

Enterprise-class disks (such as the WD Caviar RE2 and RE2-GP) and disks made specifically for NAS devices (such as the WD Red) have error recovery control enabled by default.
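
The interaction can be boiled down to two timers: how long the disk spends in error recovery versus how long the controller is prepared to wait. This toy sketch illustrates that race (the 8-second controller deadline and 7-second ERC cap echo the figures above; everything else is illustrative):

```python
# Toy illustration of the TLER/ERC timing race: a RAID controller
# drops any disk that takes longer than its deadline to answer,
# even if the disk is merely busy with deep error recovery.

def controller_verdict(recovery_seconds, controller_timeout=8.0):
    """What a typical hardware RAID controller does with a disk
    that takes `recovery_seconds` to answer a read."""
    if recovery_seconds <= controller_timeout:
        return "ok"
    return "dropped from array"

if __name__ == "__main__":
    # Desktop disk without ERC: deep recovery can run for a minute+.
    print(controller_verdict(60))   # dropped from array
    # Disk with ERC capped at 7 seconds (e.g. Hitachi's limit):
    print(controller_verdict(7))    # ok
```

The cap is the whole point of TLER: the disk gives up on a marginal sector quickly and reports an error the controller can handle, instead of being thrown out of the array while still trying.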

In this case, the I.T. admin replaced the WD Caviar Green disks with four 1TB WD RE SAS drives and then rebuilt the RAID 5 array.

Yesterday, we got a nice email from him. The server has been running smoothly ever since. He has rebooted it a couple of times and the event logs are free from disk errors. He no longer has to worry about the company’s server continually degrading. He can even sleep more soundly at night.

Firmware Failure on Western Digital Blue – 1 TB Drive

Firmware is low-level software stored on the PCB and in the System Area of a hard disk drive. It contains the most basic parameter information needed for the disk’s operation and provides the lowest level of direct hardware control. One can think of the firmware as the equivalent of the disk’s operating system. It holds the manufacturer’s parameters needed to initialise the disk and the servo-adaptive information needed for the drive to operate smoothly. Typically, hard drive firmware contains low-level code, the servo-adaptive information, the disk’s model number, date of production, error logs, and the P-List and G-List (the defect tables). The firmware can usually be found on the ROM chip of the PCB and in the System Area of the drive platters.

In early-model hard drives (such as those produced during the 1990s), firmware was read-only. Nowadays, firmware is stored on EEPROM chips, which means it can be modified by data recovery professionals using specialist firmware emulator equipment with EEPROM read/write functionality.

Fig.1 Hard Disk Layout. The firmware is kept totally separate from the user data.

The System Area (usually located before LBA 0) also stores firmware information – typically the P-List and the G-List. Every hard disk contains a small number of sector errors before it even leaves the factory; these are recorded in the P-List. When the disk is put into everyday use, more errors will develop. These “grown defects” are stored in the G-List (grown defect list). For example, if the disk develops a bad sector, the firmware will add it to the G-List and the sector will subsequently be remapped to the reserved area.

Fig. 2 – P-List Remapping

Fig. 3 – G-List Remapping – bad sectors are remapped to the reserved area. This type of error correction has been so successful for the storage industry that it is even emulated on the most sophisticated solid-state drives.
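
Conceptually, the G-List is just a translation table consulted on every access: if a logical sector appears in it, the read is silently redirected to a spare sector in the reserved area. A minimal sketch (the class name, reserved-area address and sector numbers are our own illustrative choices):

```python
# Sketch of G-List remapping: grown bad sectors are recorded in a
# table mapping the bad LBA to a spare sector in the reserved area;
# unaffected sectors translate to themselves.

class Translator:
    def __init__(self, reserved_start):
        self.g_list = {}              # bad LBA -> reserved-area LBA
        self.next_spare = reserved_start

    def grow_defect(self, bad_lba):
        """Record a newly-grown bad sector in the G-List."""
        self.g_list[bad_lba] = self.next_spare
        self.next_spare += 1

    def physical(self, lba):
        """Translate a logical sector to its physical location."""
        return self.g_list.get(lba, lba)

if __name__ == "__main__":
    t = Translator(reserved_start=1_000_000)
    t.grow_defect(4096)               # sector 4096 went bad in service
    print(t.physical(4096))           # 1000000 (redirected to spares)
    print(t.physical(4097))           # 4097 (unaffected)
```

When this table (or the translator built from it) becomes corrupt, as in the HGST case earlier, whole swathes of perfectly good sectors become unreachable until the firmware is repaired.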

The system of bad-sector mapping and relocation is all very clever. But sometimes, due to physical damage to the EEPROM chip or the System Area, or corruption of the firmware code, the partition table(s) will become invisible to the host system. A very common occurrence of this type of failure was seen in Seagate’s 11th-generation Barracuda (7200.11) series of drives. But it is not only Seagate drives that suffer firmware failure; it is also commonly seen on Western Digital WD5000AAKS-family drives.

Thankfully, most disk firmware failures can be recovered from successfully. Experience, a sound knowledge of EEPROM firmware, specialised firmware repair equipment and a proper methodology usually bring fruitful results. Take, for instance, a case we dealt with last week: a firmware failure on a 1TB Western Digital “Blue” drive. We received the drive from a Dublin architect’s office. One of the partners in the practice had all of the Archicad drawings for his current projects – four months of work – on his four-month-old laptop. He was incredulous when his I.T. support company told him that the hard drive in his relatively new laptop had failed.

We performed a diagnosis on the drive and it was immediately apparent that the firmware in the disk’s System Area had become corrupt. Once the diagnosis was complete, we contacted the client for the formal go-ahead to proceed with the recovery. Once formal approval had been received, we first backed up the firmware modules on the EEPROM chip and in the System Area to an external source.

Once a complete backup had been made, it was safe to manipulate the firmware. We used our specialised firmware recovery equipment to check the G-List and P-List. The P-List was corrupt – this was the root cause of the problem. After some careful editing of the faulty P-List and a regeneration of the translator, we were almost done. We power-cycled the drive so it would initialise itself with the new parameters. Success…the data was accessible again. The recovered data was transferred onto a 1 terabyte external hard drive for delivery to the customer. All the files the client needed (Archicad, Word, Excel and PDF files) were recovered successfully.

Data Recovery from Kingston Data Traveller Flash Drive

The owner of a small logistics business in Athlone recently contacted us. His business distributes freight between Athlone and Dublin, Cork, Galway and Limerick five days a week. Last week, the hard drive in his laptop suffered a catastrophic failure. Fortunately, two weeks previously, he had backed up his “My Documents” folder and his Sage accounts file onto a USB stick. He connected the Kingston USB memory stick to his desktop computer, but it was not recognised; he got the message “Please insert a disk into drive E”. Believing the age of his desktop computer to be the problem, he bought a new laptop the following day. He connected the USB stick to one of its ports but got the same message. He then brought the USB stick to his I.T. support provider for examination. Unfortunately, they were not able to retrieve any data from it and recommended that he contact Drive Rescue.

Once we had received his USB stick, we tried accessing it using our own systems but got the same error message. The first step in the data recovery process for this case was to open up the device in our clean room and physically inspect the inside. The PCB tracks and diodes all appeared to be physically okay; we then tested the diodes with a multi-meter and they all appeared to be working. The device was showing all the symptoms of a failed controller. While the NAND chip stores the actual user data, the controller chip stores a software module which contains the ECC and wear-levelling data of the drive. Error Correction Code (ECC) is an algorithm built into USB memory sticks (and most NAND memory devices) which helps to fix bit errors that occur during the life of the drive. Meanwhile, the wear-levelling function helps data to be evenly distributed throughout the memory cells of the device.
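
The wear-levelling idea is simple at heart: the controller tracks how many times each physical block has been erased and steers new writes towards the least-worn free block, so no single region of the NAND wears out early. A minimal sketch of that selection step (block numbers and counts are invented for illustration):

```python
# Sketch of wear-levelling block selection: among the blocks that
# are currently free, pick the one with the lowest erase count so
# wear spreads evenly across the NAND.

def pick_block(erase_counts, free_blocks):
    """Choose the free physical block with the lowest erase count."""
    return min(free_blocks, key=lambda blk: erase_counts[blk])

if __name__ == "__main__":
    erase_counts = {0: 12, 1: 3, 2: 7, 3: 3}
    free_blocks = [0, 2, 3]
    # Block 1 is the least worn overall but is in use; among the
    # free blocks, block 3 has the lowest erase count.
    print(pick_block(erase_counts, free_blocks))  # 3
```

It is precisely because this logical-to-physical shuffling lives in the controller that a dead controller leaves the raw NAND contents scrambled – which is why the recovery below needs an emulator loaded with the matching controller profile.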

We used a hot-air rework station to remove the NAND memory chip from the PCB. This is an intricate and time-consuming job: the temperature has to be hot enough to melt the solder but not so hot as to damage the memory chip itself. During the process, it is also important to note whether lead-free solder was used to join the chip to the PCB, as lead-free solder usually requires a higher removal temperature than a leaded-solder joint. Different temperatures will also be needed depending on the chip size and the type of PCB. Using our precision tools, experience and a steady pair of hands, we successfully removed the NAND memory chip from its PCB.

In order to read the data from the extracted NAND chip, we needed the exact controller information used on the original USB stick for our emulator. While the controller chips on USB sticks are usually easy to spot, their exact model is sometimes hard to identify, because often you will find only a part number and a manufacturer logo on them. Common controller chips found on USB memory sticks include those from manufacturers such as Alcor, KTC, Silicon Motion, Ramos, Phison, OTI, SMI and SSS. In this case, the controller being used was an SSS (Solid State System) chip, model number 6692.


Once the exact controller module had been uploaded to our emulator, we determined the memory block sizes and erase block sizes of the data. After some manual reconstruction of the FAT32 file system using a hex editor, we were finally able to see the client’s data. We recovered his Sage 50 accounts file, along with all of his Word, Excel and scanned document folders. For the client’s peace of mind, we invited him to log into our systems remotely to view his recovered files. Everything that he needed was there. The delighted customer was sent his recovered data on a new USB memory stick – and this time, he will be backing up his data to three different places instead of just two.

NAS Data Recovery from a RAID 1 Buffalo Linkstation

NAS devices have never been so popular. They are compact, relatively easy to administer and can often perform the same storage functions as a traditional server.

There are numerous brands of NAS widely available in the Irish market to cater for most storage requirements and budgets. Some of these manufacturers include Drobo, LaCie, Buffalo, Iomega, ReadyNAS (from Netgear), Seagate, Western Digital, Qnap, ZyXel, Synology and G-Technology. All of these manufacturers have a wide variety of models available with different capacities, I/O specs, RAID levels and file systems.

One file system commonly used in NAS servers is XFS (developed by Silicon Graphics in 1993). It is a powerful, fast and scalable file system which can handle a whopping 8 exabytes of data. It is not surprising, therefore, that organisations such as the CERN research laboratory and NASA’s supercomputing division use XFS for many of their projects requiring high-capacity data storage.

One of the main reasons for the growing popularity of XFS is its speed. It is significantly faster than EXT3, and for operations which use sequential buffered writes it will be faster than EXT4. How does it do this? XFS cleverly deploys Allocation Groups to allow multiple parallel I/O requests. Moreover, XFS has another trick up its sleeve: a feature known as Direct I/O, which allows data to be transferred from disk straight into application memory, bypassing the cache.

The dexterity of the XFS file system does not end there. XFS does not just offer great IOPS rates; it also allocates space very efficiently. A lot of files contain long runs of zeros, and XFS – hating to see wasted space – records these regions in metadata instead of writing out the zeros; when the file is accessed, it is presented at its original size. In addition, XFS offers online defragmentation: data fragments are converted into contiguous blocks on the fly.

So, taking into account its great storage capacity, speed and efficient space allocation, you might be thinking “if Carlsberg did file systems…” it would be XFS? Maybe, but XFS does have some drawbacks. For example, it does not handle sudden power loss very well. Whilst it is a journalled file system, the journalling is designed to increase performance rather than offer redundancy. If there is a power failure on a NAS or server running XFS, recently written data may be irreversibly lost (hence a UPS is indispensable when using XFS). Moreover, like any file system, XFS can become corrupt.

Take, for instance, a case we dealt with last week. A multinational medical device company from Limerick contacted us: in one of their research laboratories, their Buffalo Linkstation NAS device had become inaccessible. Their IT administrators removed the hard drives (two Seagate Barracuda 2TB drives – model ST2000DM001) and slaved them onto a Linux system. They ran the “xfs_repair” command – the standard repair tool for XFS which, unlike “fsck”, is not invoked automatically on system start-up. However, this operation kept aborting, returning errors about primary superblock corruption. Not to be defeated so easily, they unmounted the volume again and ran “xfs_repair -n”, which checks the file system without modifying it. Alas, this operation aborted as well.
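
When the primary superblock is corrupt, one standard avenue is to scan the image for backup superblocks: every XFS allocation group begins with a superblock carrying the magic number “XFSB”. The sketch below shows the bones of such a scan (the synthetic image and block size are illustrative, not the actual Buffalo geometry):

```python
# Sketch: locate XFS backup superblocks by scanning an image for
# blocks that begin with the XFS magic "XFSB". Each allocation
# group starts with such a superblock, so the hits reveal the
# AG layout even when the primary superblock is damaged.

XFS_MAGIC = b"XFSB"

def find_superblocks(image, block_size=4096):
    """Return offsets of blocks starting with the XFS magic."""
    return [off for off in range(0, len(image), block_size)
            if image[off : off + 4] == XFS_MAGIC]

if __name__ == "__main__":
    # Synthetic image: superblocks at blocks 0 and 4 (two tiny AGs).
    image = bytearray(8 * 4096)
    image[0:4] = XFS_MAGIC
    image[4 * 4096 : 4 * 4096 + 4] = XFS_MAGIC
    print(find_superblocks(bytes(image)))  # [0, 16384]
```

The spacing between hits gives the allocation-group size, which in turn lets a recovery tool reconstruct the file system geometry from a secondary superblock.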

They sent the Buffalo NAS box to our Dublin lab. We removed its drives and performed diagnostics on them. Drive 0 had several thousand bad sectors, which explained why the “xfs_repair” and “xfs_repair -n” commands were unable to complete. As always, in order to maintain the integrity of the data, we imaged the drives using a hardware imager designed for data recovery purposes. This meant we could perform the recovery on a copy of the volume rather than on the original drives themselves.

Once imaging had completed, we examined the inode maps on the volume. We removed all duplicate blocks, cleared the lost+found directory, and rebuilt the volume’s trees and headers. Any disconnected inodes we found were moved to the lost+found folder. The volume was then mounted, giving us a workable volume from which to rebuild the RAID array. From our examination of the volume’s stripe size, RAID header and parity, we determined they were using RAID 1. Their array was rebuilt, the rebuilt volume was mounted, and we saw what looked like a valid folder structure.

We then invited our client to log in to our systems remotely to view and verify their retrieved files. The client was delighted: every file they needed had been successfully recovered. Without this data, their R&D team would have had to replicate months of work. The recovered data was extracted onto a high-capacity external USB hard drive which, along with the Buffalo NAS box and original drives, was dispatched back to Limerick.

The lesson: even the best file systems can become corrupt if the underlying hardware starts to fail. NAS devices are not bulletproof, and the data stored on them should be backed up onto another drive. Better still, many NAS manufacturers like QNAP and Synology have options in their software to back up directly to the Cloud.

Warning: Apple Mavericks OS 10.9 – Mysterious Partition Loss on External Drives

apple mac data recovery ireland

We have been helping a lot of users recently recover data from external and NAS drives. A lot of these cases had two factors in common. 1) There was sudden partition loss and 2) the user had recently upgraded to Apple’s new Mavericks operating system.

We immediately began to surmise that maybe Mavericks was becoming a little too maverick in the way that it managed external drives.

The first instances of this problem began appearing with Apple users who had Western Digital external hard drives connected to their newly installed operating system. For example, one user, after connecting his MyBook to Mavericks, was shocked to discover his HFS+ partition had disappeared overnight. In its place he found one EFI partition and one called MyBook (which, incidentally, was completely blank). Three years of business documents and family photos had disappeared. (We performed data recovery on his drive and successfully restored all his files.)
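The stray EFI partition is a tell-tale sign that the drive's GPT partition table was rewritten rather than physically damaged. Whether a GPT is present at all can be checked from a raw image: the GPT header lives in the second sector and begins with the signature "EFI PART". A minimal sketch, assuming 512-byte logical sectors (the function name is ours):

```python
# Sketch: detect a GPT header on a raw disk image. On a GPT-partitioned
# disk the header sits at LBA 1 (byte offset 512 with 512-byte sectors)
# and starts with the 8-byte signature b"EFI PART".
SECTOR = 512  # assumed logical sector size
GPT_SIGNATURE = b"EFI PART"

def has_gpt_header(image: bytes) -> bool:
    """Return True if the image carries a GPT header at LBA 1."""
    return image[SECTOR:SECTOR + 8] == GPT_SIGNATURE
```

If the header is present but the original HFS+ partition entry is gone, recovery turns to locating the old volume by its own on-disk signatures, which is essentially what our process did for this client.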

We were beginning to suspect that maybe Western Digital’s ancillary drive software (the software which comes free with external hard drives) had an incompatibility with Mavericks which was causing this problem. Western Digital even issued a press release advising users to remove their Drive Manager and SmartWare software.
We have tried the following Mac commands and they work pretty well in uninstalling WD’s Drive Manager service:

sudo launchctl unload /Library/LaunchDaemons/com.wdc.WDDMservice.plist
sudo rm -R /Library/LaunchDaemons/com.wdc.drivemanagerservice.plist
sudo rm -R /var/tmp/com.WD.WDDriveManagerService

A Twist in the Tale

But then an interesting twist developed. Users of LaCie and Buffalo drives began reporting similar mysterious partition loss. For example, the owner of a LaCie Rugged drive in the south-east of Ireland had the primary partition on his device disappear. Meanwhile, the owner of a 16 TB Buffalo RAID 5 NAS device in Dublin had his array turn into individual 4 TB disks. (Our RAID data recovery process restored all his data, mainly Sage Accounts and AutoDesk files.)

It is really surprising that Apple did not pick up on this bug in the beta-testing of its new operating system. I hope this is not a portent of things to come, and that Apple does not go down Microsoft's route of rushing sloppily coded software to market. Mavericks is one of Apple's first "free" operating systems, but most users would rather pay for a quality product than have a free one make their data do a Houdini-like disappearing act.

Our advice is to hold off updating to Mavericks until Apple releases an update which addresses this serious problem.