Summary

When an agentless full or instant VMDK restore job maps a VMDK snapshot to a virtual machine, the mapped VMDK must be recognized by the destination node.

 

Symptoms

After a full or instant VMDK restore to a Linux node, no disk or file system appears to have been recovered.

 

Resolution

1. Create a full or instant VMDK restore job and restore the VMDK to a destination node. Note that you cannot restore the VMDK to the original location if its file systems are created from logical volumes: in an LVM environment, duplicate volume group names are not allowed, so the restored volume group would conflict with the volume group that already exists on the original node. In non-LVM environments, the destination node can be the original node. (A quick way to check a candidate destination node for a name conflict is shown after the screenshots below.) Below are examples of the VMDK before and after the restore job.

Before the VMDK restore job:

As you see, there are two disks.

After the VMDK restore job:

As you see, there are two RDMs (Raw Device Mappings): “Hard disk 3” and “Hard disk 4”.
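As mentioned in step 1, you can check a candidate destination node for a volume group name conflict before running the restore. The sketch below uses standard LVM2 reporting options; compare its output with the volume group names used inside the VMDK:

# List the volume group names that already exist on the candidate destination node.
# If a name matches a volume group inside the VMDK, choose a different node.
vgs --noheadings -o vg_name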

2. Without further action, the newly mapped RDMs are not discovered on the destination node. For example, fdisk -l on the destination node still shows only the original disks:

[root@wxk-rhel71 ~]# fdisk -l

Disk /dev/sda: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000a8bb5

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *         2048     1026047      512000   83  Linux
/dev/sda2          1026048    87025663    42999808   8e  Linux LVM

Disk /dev/sdb: 375.8 GB, 375809638400 bytes, 734003200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x1140c5e1

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1             2048   734003199   367000576   8e  Linux LVM

3. Run rescan-scsi-bus.sh to discover the newly mapped RDMs. This script rescans the SCSI buses on the destination node. Run rescan-scsi-bus.sh --help for more information about how to use the script. If you cannot locate rescan-scsi-bus.sh on the destination node, install the Linux utilities package sg3_utils or perform the manual steps described under “Additional Information” below.
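For example, on a RHEL/CentOS destination node the step might look like the sketch below (package and script locations can differ on other distributions, so treat this as an illustration rather than exact commands):

yum install -y sg3_utils      # provides rescan-scsi-bus.sh if it is not already installed
rescan-scsi-bus.sh            # rescan all SCSI buses so the newly mapped RDMs are detected
fdisk -l                      # verify that the new disks now appear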

4. Normally, all of the logical volumes are discovered once the physical disks (VMDKs) are recognized. You can verify this by running df -h, vgs, and lvscan. For example, the output of df -h is:

[root@wxk-rhel71 ~]# df -h
Filesystem              Size  Used  Avail Use% Mounted on
/dev/mapper/rhel-root   3.0G   73M   3.0G   3% /
devtmpfs                2.9G     0   2.9G   0% /dev
tmpfs                   2.9G   80K   2.9G   1% /dev/shm
tmpfs                   2.9G   18M   2.9G   1% /run
tmpfs                   2.9G     0   2.9G   0% /sys/fs/cgroup
/dev/mapper/rhel-usr    8.0G  4.1G   4.0G  51% /usr
/dev/mapper/rhel-var     11G  4.6G   6.5G  42% /var
/dev/mapper/rhel-data2  200G  401M   200G   1% /data2
/dev/mapper/rhel-opt    8.0G   33M   8.0G   1% /opt
/dev/sda1               497M  124M   373M  25% /boot
/dev/mapper/rhel-data   100G  6.4G    94G   7% /data
/dev/mapper/rhel-home   8.0G   33M   8.0G   1% /home

Based on the above output, volume group rhel and its volumes are mapped to mount points.

Run vgs and the output is:

[root@wxk-rhel71 ~]# vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  rhel     2   8   0 wz--n- 391.00g 45.00g
  vg_one   2   5   0 wz--n-  39.65g  8.65g

The volume group vg_one and its volumes are not mapped to mount points; vg_one is a volume group from the mapped VMDKs. If a logical volume group is not visible on the destination node, run vgchange -ay.
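For example, a minimal sequence to make a missing volume group visible, assuming the restored volume group is named vg_one as in the output above:

vgscan               # rescan all devices for LVM volume groups
vgchange -ay vg_one  # activate every logical volume in vg_one
lvscan               # confirm the logical volumes are now ACTIVE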

5. Map a volume from the mapped VMDKs to a mount point by running the mount command. For example, if you want to mount / from the mapped VMDKs, first run lvscan to identify the corresponding logical volume. The output is:

[root@wxk-rhel71 ~]# lvscan
  ACTIVE   '/dev/vg_one/lvroot'   [5.00 GiB] inherit
  ACTIVE   '/dev/vg_one/LogVol03' [8.00 GiB] inherit
  ACTIVE   '/dev/vg_one/lvswap'   [4.00 GiB] inherit
  ACTIVE   '/dev/vg_one/lvdata'   [6.00 GiB] inherit
  ACTIVE   '/dev/vg_one/lvusr'    [8.00 GiB] inherit
  ACTIVE   '/dev/rhel/swap'       [8.00 GiB] inherit
  ACTIVE   '/dev/rhel/root'       [3.00 GiB] inherit
  ACTIVE   '/dev/rhel/var'        [11.00 GiB] inherit
  ACTIVE   '/dev/rhel/usr'        [8.00 GiB] inherit
  ACTIVE   '/dev/rhel/opt'        [8.00 GiB] inherit
  ACTIVE   '/dev/rhel/home'       [8.00 GiB] inherit
  ACTIVE   '/dev/rhel/data'       [100.00 GiB] inherit
  ACTIVE   '/dev/rhel/data2'      [200.00 GiB] inherit

Next, run mount /dev/vg_one/lvroot /mymnt, assuming that /mymnt has been created as the mount point.
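Put together, the step might look like this (the mount point /mymnt is only an example; any empty directory will do):

mkdir -p /mymnt                   # create the mount point if it does not already exist
mount /dev/vg_one/lvroot /mymnt   # mount the restored root volume
df -h /mymnt                      # verify the mount succeeded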

6. Once all of the restored volumes have been unmounted, you can use the standard clean-up process for VMDK mapping. For more information about the clean-up process for VMDK mapping, see the DPX User’s Guide.
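Before starting the clean-up, unmount the restored volumes and deactivate the restored volume group, for example (again assuming vg_one and /mymnt from the earlier steps):

umount /mymnt        # unmount the restored file system
vgchange -an vg_one  # deactivate the restored volume group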
