The following scenarios describe real-world failures with tactical solutions. Use the scenarios to develop troubleshooting skills and broaden the foundation of your knowledge.
Scenario 6-3: Drives Scan in the Wrong Order
After adding the driver to scan external storage, the drives scan in the wrong order on boot. The boot drive was once at the beginning of the line, that is, /dev/sda or hda. Now, however, the boot drive falls in line after all the other disks are found.
There are several ways to work around the issue. The simplest way is to modify /etc/modules.conf.
alias eth0 tulip
alias scsi_hostadapter sym53c8xx
alias scsi_hostadapter1 cciss
alias scsi_hostadapter2 lpfcdd
In this example, lpfcdd is the last driver loaded, so any devices attached to it are enumerated after any devices found on the cciss and sym53c8xx drivers.
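The order of the scsi_hostadapter aliases controls the order in which the SCSI drivers load, and therefore the order in which disks are discovered. A hypothetical rearrangement (assuming the boot disk lives on the cciss controller; adjust for your hardware) would look like this:

```
alias eth0 tulip
alias scsi_hostadapter cciss
alias scsi_hostadapter1 sym53c8xx
alias scsi_hostadapter2 lpfcdd
```

After editing /etc/modules.conf, rebuild the initial ramdisk (for example, with mkinitrd) so the new driver order takes effect at boot.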
Scenario 6-4: vgcreate Fails
In this scenario, vgcreate fails when using a 1TB LUN.
# vgcreate /dev/vg01 /dev/sdb1
vgcreate -- INFO: using default physical extent size 4 MB
vgcreate -- INFO: maximum logical volume size is 255.99 Gigabyte
vgcreate -- doing automatic backup of volume group "main"
vgcreate -- volume group "main" successfully created and activated
The command completed but only used 256GB of a 1TB disk. How do we correct this issue?
The default PE (physical extent) size for LVM in Linux is 4MB. To resolve the issue, use the -s option of vgcreate to choose a larger PE size.
Extend the PE size to 32MB, and you can then create a VG supporting the maximum 2TB logical volume size.
# vgcreate -s 32M /dev/vg01 /dev/sdb1
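The 256GB ceiling follows from LVM1's per-volume extent limit: the maximum logical volume size is simply the PE size multiplied by the maximum number of extents. A quick sketch of the arithmetic (the 65,534-extent figure is LVM1's limit, an assumption not stated in the scenario):

```python
# LVM1 caps a logical volume at 65534 extents (assumed LVM1 limit),
# so the maximum LV size is the PE size times that limit.
MAX_EXTENTS = 65534

def max_lv_size_gib(pe_size_mib: int) -> float:
    """Maximum LVM1 logical volume size in GiB for a given PE size in MiB."""
    return pe_size_mib * MAX_EXTENTS / 1024

print(f"{max_lv_size_gib(4):.2f} GiB")   # default 4MB PEs -> 255.99 GiB
print(f"{max_lv_size_gib(32):.2f} GiB")  # 32MB PEs -> 2047.94 GiB, roughly 2TB
```

The first result matches the "255.99 Gigabyte" reported by vgcreate above.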
Scenario 6-5: Not Possible to Put /boot under LVM Control with Linux
With Linux, it's not possible to put /boot under LVM control due to bootloader constraints. However, / can be managed by LVM. For this to be successful, we must separate /boot from /.
Bind /boot to a physical partition with the ext2 filesystem to enable the bootloader(s) to find the kernel in /boot. Afterward, / can be placed within LVM control.
An example follows:
/boot    /dev/sda1
/        /dev/vg00/lvol1
Make sure you have created an lvol (this example uses lvol1), and don't forget to copy the / data to your new lvol. Copying /boot is not needed, but for simplicity it is included in the copy command.
Generic steps to achieve this solution follow this example:
lvcreate -n lvol1 -L 200m /dev/vg00
mke2fs /dev/vg00/lvol1
mkdir /lvol1fs
mount /dev/vg00/lvol1 /lvol1fs
find / -xdev | cpio -pdumv /lvol1fs
Now for the procedure:
Update the bootloader.
lilo -C /lvol1fs/etc/lilo.conf
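For reference, a minimal lilo.conf for this layout might resemble the following sketch; the disk, kernel image path, and label here are assumptions for illustration, not taken from the scenario:

```
boot=/dev/sda                # install LILO to the MBR of the first disk
image=/boot/vmlinuz          # kernel image on the /boot partition
    label=linux
    root=/dev/vg00/lvol1     # root filesystem now under LVM control
    read-only
```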
/dev/sda1 will be mounted on the /boot directory, but all the kernel and initrd images also reside in a directory called boot on the new root filesystem, because it was copied from the original / filesystem. This duplication should be cleaned up at a later date, but it works for now.
Reboot and enjoy your LVM root filesystem.
Scenario 6-6: LUN Limitation
We want to have over 140 LUNs visible from our storage array. Each LUN has an alternate path, which means more than 280 LUNs are visible. The problem is that a default Linux kernel allows only 128 LUNs.
One solution is to increase the size of the LUNs to reduce the count and then to create a large number of logical volumes under LVM. However, another solution is to modify the kernel to expand the total number of allowed LUNs.
By default, the maximum number of SCSI disks (LUNs) supported when the SCSI disk driver is built as a module is 128, although the exact limit depends on the vendor kernel build.
Under "SCSI Support," you could modify CONFIG_SD_EXTRA_DEVS to a number greater than 128 and recompile the kernel.