
After OOW, my laptop broke down – data rescue scenario


I just got back in the office from a two-week conference + vacation (SFO, WAS, NY). I was finally back in shape to work and do the usual geek stuff again when my Neo laptop suddenly stopped working! (the one I mentioned here, but it's now on Fedora)

It couldn't even boot to the BIOS, certainly a case worse than a BSOD.

So after fiddling with the laptop and systematically ruling out other component failures (power cable, monitor, memory, HD), we decided to bring it to the service center. Yes, it's much like troubleshooting an Oracle database!

But wait! It could take too long to repair the machine, and my precious data (wiki, photos, research, downloads, VMs, scripts) was still on the 2.5-inch hard disk. So I pulled the disk and first bought a SATA enclosure at Park Square so I could plug the disk into another machine (a Linux box) and get my data. The only machine available at that time was my R&D server running RHEL 5.4 64-bit.

But I had a problem… all of my data is on LVM logical volumes in a VG (volume group) named "vgsystem", which has the same name as the VG on my R&D server. So pvscan errors out with a "WARNING: Duplicate VG name".

So I first had to rename the "vgsystem" VG on the R&D server to "vgsystem1" before I could activate the VG on my 2.5-inch HD.

BTW, I had the following LVs (Logical Volumes) on the R&D server, so I needed to unmount them all first to be able to rename the VG to "vgsystem1":

[root@beast dev]# ls -l /dev/vgsystem
total 0
lrwxrwxrwx 1 root root 28 Oct  7 14:21 lvhome -> /dev/mapper/vgsystem-lvhome
lrwxrwxrwx 1 root root 27 Oct  7 14:21 lvtmp -> /dev/mapper/vgsystem-lvtmp
lrwxrwxrwx 1 root root 27 Oct  7 14:21 lvu01 -> /dev/mapper/vgsystem-lvu01
lrwxrwxrwx 1 root root 27 Oct  7 14:21 lvusr -> /dev/mapper/vgsystem-lvusr
lrwxrwxrwx 1 root root 27 Oct  7 14:21 lvvar -> /dev/mapper/vgsystem-lvvar
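
Before rebooting, it also helps to double-check which of these LVs are actually mounted so nothing gets missed when commenting out /etc/fstab. A quick check with standard tools (no output shown, just the commands I'd run):

# grep vgsystem /proc/mounts
# lvs -o lv_name,vg_name,lv_attr,lv_size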

Here are the steps I did to rename "vgsystem" to "vgsystem1":

1) Edit the /etc/fstab entries for the LVs and comment them out so they will not be mounted on boot (a sample fstab edit follows this list). My root is on a separate ext3 partition and I'll be doing the rename in runlevel 1, so that's all good. But here's the catch: since I will be unmounting the /usr filesystem, the binaries used for the VG rename (/usr/sbin/vgchange, /usr/sbin/vgrename) will not be available, so I must use /sbin/lvm.static (part of the LVM tools) for the rename operation
2) Reboot the server and enter runlevel 1
3) Execute the commands
# /sbin/lvm.static help
# /sbin/lvm.static vgchange -an
4) Execute the command
# /sbin/lvm.static vgrename vgsystem vgsystem1
5) Execute the command
# /sbin/lvm.static vgchange -ay
6) Edit /etc/fstab again: uncomment the LVs and change the vgsystem entries to vgsystem1
7) Reboot and enter runlevel 5
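
To make steps 1 and 6 concrete, here's a rough sketch of the /etc/fstab change. The mount points below are only my guesses from the LV names (lvhome, lvusr, and so on), so treat the exact entries as illustrative:

# step 1: commented out before the runlevel 1 reboot
#/dev/vgsystem/lvhome   /home   ext3    defaults        1 2
#/dev/vgsystem/lvusr    /usr    ext3    defaults        1 2

# step 6: uncommented again, now pointing at the renamed VG
/dev/vgsystem1/lvhome   /home   ext3    defaults        1 2
/dev/vgsystem1/lvusr    /usr    ext3    defaults        1 2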

Now let's mount the 2.5-inch HD and get the data!

1) Scan all PVs (physical volumes); notice the vgsystem (from my 2.5-inch HD) and vgsystem1 (from my R&D server)

[root@beast ~]# pvscan
  PV /dev/sdf3    VG vgsystem    lvm2 [146.80 GB / 0    free]
  PV /dev/sda5    VG vgsystem1   lvm2 [99.97 GB / 0    free]
  PV /dev/sda6    VG vgsystem1   lvm2 [99.97 GB / 0    free]
  PV /dev/sda10   VG vgsystem1   lvm2 [95.34 GB / 0    free]
  PV /dev/sda11   VG vgsystem1   lvm2 [95.34 GB / 0    free]
  PV /dev/sda12   VG vgsystem1   lvm2 [95.34 GB / 0    free]
  Total: 6 [632.77 GB] / in use: 6 [632.77 GB] / in no VG: 0 [0   ]

2) Activate the Logical Volume on the 2.5-inch HD

[root@beast ~]# lvchange -ay vgsystem
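
As a side note, vgchange -ay vgsystem does the same thing at the VG level, and lvscan is a quick way to confirm that the LV from the laptop disk is now ACTIVE before going further:

# vgchange -ay vgsystem
# lvscan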

3) Check the PVs; notice that we now have PVs from both vgsystem and vgsystem1

[root@beast ~]# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/sdf3
  VG Name               vgsystem
  PV Size               146.80 GB / not usable 3.31 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              37581
  Free PE               0
  Allocated PE          37581
  PV UUID               Ut6sSm-uCXd-h1Mi-402u-3QoU-IQQQ-wgOljt
   
  --- Physical volume ---
  PV Name               /dev/sda5
  VG Name               vgsystem1
  PV Size               100.00 GB / not usable 30.66 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              3199
  Free PE               0
  Allocated PE          3199
  PV UUID               eMdH4e-POuO-0tDy-LWVs-8lnW-oF0N-ikPxGJ
   
  --- Physical volume ---
  PV Name               /dev/sda6
  VG Name               vgsystem1
  PV Size               100.00 GB / not usable 30.66 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              3199
  Free PE               0
  Allocated PE          3199
  PV UUID               ia3nKZ-Ldyr-3zmz-OD2d-Es9P-lzDm-rfrhuz
   
  --- Physical volume ---
  PV Name               /dev/sda10
  VG Name               vgsystem1
  PV Size               95.37 GB / not usable 28.74 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              3051
  Free PE               0
  Allocated PE          3051
  PV UUID               rgeD2t-TLEh-DRWn-hdiC-5Sio-pczW-st19Yg
   
  --- Physical volume ---
  PV Name               /dev/sda11
  VG Name               vgsystem1
  PV Size               95.37 GB / not usable 28.74 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              3051
  Free PE               0
  Allocated PE          3051
  PV UUID               S2wtrJ-Bw60-VcCo-eE9X-QopS-bac2-3REU9c
   
  --- Physical volume ---
  PV Name               /dev/sda12
  VG Name               vgsystem1
  PV Size               95.37 GB / not usable 28.74 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              3051
  Free PE               0
  Allocated PE          3051
  PV UUID               VeBQCq-06XK-02lI-3gP5-nuRw-D0n1-RSdNO4

4) Check the VGs; notice that we now have both vgsystem and vgsystem1

[root@beast ~]# vgdisplay 
  --- Volume group ---
  VG Name               vgsystem
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               146.80 GB
  PE Size               4.00 MB
  Total PE              37581
  Alloc PE / Size       37581 / 146.80 GB
  Free  PE / Size       0 / 0   
  VG UUID               v4ler4-aC9L-7ugQ-m9yQ-adYK-rrcM-sRLCqD
   
  --- Volume group ---
  VG Name               vgsystem1
  System ID             
  Format                lvm2
  Metadata Areas        5
  Metadata Sequence No  11
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               5
  Max PV                0
  Cur PV                5
  Act PV                5
  VG Size               485.97 GB
  PE Size               32.00 MB
  Total PE              15551
  Alloc PE / Size       15551 / 485.97 GB
  Free  PE / Size       0 / 0   
  VG UUID               kvUydX-Abxb-k3LO-q9wq-jF13-dGLz-vG9JUy

5) Check the LVs; notice that we now have LVs from both vgsystem and vgsystem1

[root@beast ~]# lvdisplay 
  --- Logical volume ---
  LV Name                /dev/vgsystem/lvroot
  VG Name                vgsystem
  LV UUID                COUkrC-4ygb-fBQx-3VLr-3CdP-Hupd-InPAGW
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                146.80 GB
  Current LE             37581
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5
   
  --- Logical volume ---
  LV Name                /dev/vgsystem1/lvusr
  VG Name                vgsystem1
  LV UUID                UNyiOM-SvAZ-8KVl-fGbF-SvsB-h9FO-u3YLyq
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                4.00 GB
  Current LE             128
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Name                /dev/vgsystem1/lvhome
  VG Name                vgsystem1
  LV UUID                odNeAE-Krf6-90yn-fN6Q-BDyC-FeKk-U7QeOa
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                9.75 GB
  Current LE             312
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
   
  --- Logical volume ---
  LV Name                /dev/vgsystem1/lvtmp
  VG Name                vgsystem1
  LV UUID                Uyvudz-OCTR-xJrQ-Iftz-CTa4-62XL-HOEN8o
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.00 GB
  Current LE             32
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2
   
  --- Logical volume ---
  LV Name                /dev/vgsystem1/lvvar
  VG Name                vgsystem1
  LV UUID                7uWs2z-hAzj-b3Aj-6KI8-Qaec-pKqd-15UQgk
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.00 GB
  Current LE             32
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3
   
  --- Logical volume ---
  LV Name                /dev/vgsystem1/lvu01
  VG Name                vgsystem1
  LV UUID                mta43w-NEnF-QaAx-td4P-osHg-gOAt-B2KrC5
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                470.22 GB
  Current LE             15047
  Segments               5
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

6) Now the device /dev/mapper/vgsystem-lvroot is ready to be mounted

[root@beast dev]# ls -l /dev/vgsystem
total 0
lrwxrwxrwx 1 root root 27 Oct  7 14:28 lvroot -> /dev/mapper/vgsystem-lvroot
[root@beast dev]# 
[root@beast dev]# ls -l /dev/vgsystem1
total 0
lrwxrwxrwx 1 root root 28 Oct  7 14:21 lvhome -> /dev/mapper/vgsystem1-lvhome
lrwxrwxrwx 1 root root 27 Oct  7 14:21 lvtmp -> /dev/mapper/vgsystem1-lvtmp
lrwxrwxrwx 1 root root 27 Oct  7 14:21 lvu01 -> /dev/mapper/vgsystem1-lvu01
lrwxrwxrwx 1 root root 27 Oct  7 14:21 lvusr -> /dev/mapper/vgsystem1-lvusr
lrwxrwxrwx 1 root root 27 Oct  7 14:21 lvvar -> /dev/mapper/vgsystem1-lvvar

7) Mount it baby!

[root@beast dev]# mount /dev/mapper/vgsystem-lvroot /mnt/usb/
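
Since this is a data rescue and the disk only needs to be read, mounting it read-only is a slightly safer variant (same device and mount point as above):

# mount -o ro /dev/mapper/vgsystem-lvroot /mnt/usb/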

8) Now I've got my data :)

[root@beast ~]# ls -l /mnt/usb/home/karao/Documents/
total 32
drwx------  8 oracle oracle 4096 Oct  5 08:58 Desktop
drwx------  4 oracle oracle 4096 Sep 17 12:01 KnowledgeFiles
drwxrwxr-x  2 oracle oracle 4096 Jan 10  2010 My Music
drwxrwxr-x  7 oracle oracle 4096 Oct  5 09:03 My Pictures
drwxrwxr-x  3 oracle oracle 4096 Jan 11  2010 My Shapes
drwxrwxr-x  2 oracle oracle 4096 Jan 10  2010 My Videos
drwx------ 11 oracle oracle 4096 Sep  9 19:34 Softwares
drwxrwxrwt  9 oracle oracle 4096 Sep 28 23:29 VirtualMachines

Then I just plugged in my 1TB NTFS drive (I have NTFS-3G on the server) and copied all my files.
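
For reference, that copy boils down to something like the lines below. The NTFS device name and mount point are placeholders (your drive may show up under a different device), and rsync is just my preferred way to copy:

# mount -t ntfs-3g /dev/sdg1 /mnt/ntfs                      # placeholder device and mount point
# rsync -av /mnt/usb/home/karao/ /mnt/ntfs/laptop-backup/   # destination directory is illustrative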

9) After copying, it's just a normal unmount operation, but you also have to deactivate the Logical Volume

[root@beast ~]# umount /mnt/usb
[root@beast ~]# lvchange -an vgsystem

If you missed copying some files, you can just plug the 2.5-inch HD back in and repeat the mounting/unmounting process.
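
Packaged together, that re-attach round trip is just the same handful of commands from the steps above (the copy command in the middle reuses the placeholder destination from the earlier sketch):

# lvchange -ay vgsystem
# mount -o ro /dev/mapper/vgsystem-lvroot /mnt/usb/
# rsync -av /mnt/usb/home/karao/ /mnt/ntfs/laptop-backup/   # or just the files that were missed
# umount /mnt/usb
# lvchange -an vgsystem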

BTW, I'm now using my old 17-inch HP Pavilion laptop, which is already 5+ years old but still in good condition.

Hope I've shared some good stuff with you :)






