BlackArmor Data Recovery Part 2
March 10, 2014
Note: This is a continuation of my previous post
Since I don’t really have time to play around with installing Debian on the BA440, I’ve decided to add the missing drive back into the RAID5 array and use my Debian box, with the drives attached in the USB docking ports, as an iSCSI target. Not exactly a tidy desktop solution, but this will go in the server room. If I get some spare time I will look at replacing the firmware on the BA, and if I do I’ll be sure to write about it.
To add my missing drive, the one the BA “recover” process messed up, I first have to identify it.
# cat /proc/partitions
You should see your system partitions (probably /dev/sda and its partitions), three of the four /dev/sd[bcde] array disks (one of the [bcde] set is the missing drive, so it won’t be listed), and an entry for the replacement drive.
The missing or new drive will show up as /dev/sdf
If the drive is new, all that will be there is a bare /dev/sdf with no partitions, and you can skip the next step.
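If it’s hard to tell at a glance which disk is the blank one, a short awk helper over /proc/partitions can count partitions per disk. This is just a sketch: count_parts is a name I made up, and it assumes the usual sd[a-z] device naming.

```shell
# Count partitions per sd* disk from a /proc/partitions-style file;
# the disk that reports 0 is the blank/new drive.
count_parts() {
    awk '$4 ~ /^sd[a-z]$/       {disks[$4] = 0}
         $4 ~ /^sd[a-z][0-9]+$/ {disks[substr($4, 1, 3)]++}
         END {for (d in disks) print d, disks[d]}' "$1"
}

# On the live system:
#   count_parts /proc/partitions
```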
Lots of partitioning tutorials will tell you to use fdisk and/or cfdisk for partition work, but those won’t work with the GPT layout the BA uses; we need parted. GPT is a newer partition format designed for large disks, and parted is the tool built to manage it.
The aborted BA recovery process had created partitions that weren’t correct (the partition sizes didn’t match the others in the array), so I had to delete them.
IMPORTANT: This destroys any data on the disk, so make sure you’ve got the correct disk reference.
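One way to double-check before pulling the trigger: the symlinks under /dev/disk/by-id embed each drive’s model and serial number, which you can match against the sticker on the physical disk. A sketch, with disk_id being my own helper name and the example serial made up:

```shell
# Reads `ls -l` output on stdin and prints the by-id names (which
# include model and serial) that point at the given device.
disk_id() {
    awk -v dev="$1" '$NF ~ ("/" dev "$") {print $(NF-2)}'
}

# On the live system:
#   ls -l /dev/disk/by-id/ | disk_id sdf
# prints something like ata-<model>_<serial> for /dev/sdf.
```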
To do this, run:
# parted /dev/sdf
(parted) rm 1
(parted) rm 2
(parted) rm 3
(parted) quit
rm removes the partitions. If you rerun cat /proc/partitions you will now see just /dev/sdf.
Next we need to get the exact details of the setup of the disks in the array so we run parted on a member of the array. It doesn’t matter which one as they are all the same structure.
# parted /dev/sdd
(parted) print
Number  Start   End     Size    File system  Name     Flags
 1      100MB   1169MB  1069MB  ext3                  raid
 2      1169MB  2239MB  1070MB  linux-swap            raid
 3      2239MB  2773MB  534MB   ext3                  raid
 4      2773MB  1000GB  997GB                primary  raid
(parted) quit
Copy this output down or print it out. Important: Make sure you quit parted otherwise the next steps you make will be on the good disk in the array. You don’t want that.
Now go back and use parted on the new drive /dev/sdf
While in parted we’ll use mkpart [fs-type-or-name] start end. For the first three partitions we’ll give the fs-type, which is either ext3 or linux-swap. For the fourth we’ll give the name “primary” and no fs-type; that one will become our data partition.
# parted /dev/sdf
(parted) mkpart ext3 100MB 1169MB
(parted) mkpart linux-swap 1169MB 2239MB
(parted) mkpart ext3 2239MB 2773MB
(parted) mkpart primary 2773MB 1000GB
(parted) quit
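If you’d rather not type each command interactively, parted also accepts commands on the command line with -s (script mode). Here’s a sketch that just prints the four commands so you can review them before piping to a root shell; make_parts is my own name, and the sizes are the ones from the print output above.

```shell
# Emit the parted script-mode commands to rebuild the BA layout on
# a given disk; review the output, then pipe it to `sh` as root.
make_parts() {
    dev="$1"
    for spec in "ext3 100MB 1169MB" "linux-swap 1169MB 2239MB" \
                "ext3 2239MB 2773MB" "primary 2773MB 1000GB"; do
        echo "parted -s $dev mkpart $spec"
    done
}

# Usage:
#   make_parts /dev/sdf          # inspect the commands
#   make_parts /dev/sdf | sh     # actually run them (as root)
```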
If you rerun cat /proc/partitions you will see that /dev/sdf now has 4 partitions:
/dev/sdf1
/dev/sdf2
/dev/sdf3
/dev/sdf4
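Before touching the array, it doesn’t hurt to confirm the new table really matches an existing member. parted’s -m flag prints a machine-readable, colon-separated table whose first two lines are disk-level headers we can skip; diffing two disks’ output should then show matching start/end/size columns. A sketch (layout is my helper name, and the device names assume my setup):

```shell
# Machine-readable partition table for one disk, minus the two
# disk-level header lines.
layout() {
    parted -m "$1" unit MB print | tail -n +3
}

# On the live system:
#   diff <(layout /dev/sdd) <(layout /dev/sdf)
# No diff output means the partition geometry matches.
```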
Now all we need to do is add /dev/sdf4 to our array. To do this we go back to mdadm. From the last post we remember that /dev/md3 is our RAID array.
# mdadm --manage /dev/md3 --add /dev/sdf4
The disk is now added to the array. You can check its status by running
# mdadm --detail /dev/md3
     Raid Level : raid5
   Raid Devices : 4
  Total Devices : 4
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1
 Rebuild Status : 3% complete

    Number   State
       0     active sync        /dev/sdb4
       2     active sync        /dev/sdd4
       4     spare rebuilding   /dev/sdf4
       3     active sync        /dev/sde4
I’ve edited the output down to the relevant bits. You can see the array is rebuilding. This can take a long time depending on the size of the drives; in the time it took to write this, the array has only moved from 3% to 4% complete. That’s normal, and in 8-10 hours or so we’ll have a rebuilt 4-disk RAID5 array.
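Rather than rerunning mdadm --detail by hand, you can also watch the kernel’s own progress counter in /proc/mdstat. A sketch (rebuild_pct is my own helper; the live commands are shown in the comments):

```shell
# Pull the "recovery = N%" figure out of an mdstat-style file.
rebuild_pct() {
    grep -o 'recovery = *[0-9.]*%' "$1"
}

# On the live system:
#   rebuild_pct /proc/mdstat
# or just leave a terminal refreshing every 60 seconds:
#   watch -n 60 cat /proc/mdstat
```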
Once that is complete, I’ll look at setting up some new iSCSI mount points and moving it into the server room.