We are now finished with our LPAR-specific and VM-specific discussions. From here on, the information applies to both types of installations unless specified.
This section describes how SuSE SLES-8 is installed under z/VM 4.3. At this point, you should have Linux IPLed in memory (RAM disk), either from the z/VM reader or in an LPAR.
Installation of Linux on zSeries hardware is quite similar to installation of SuSE SLES-8 on the PC. A major exception is that the zSeries DASD must be manually formatted in the middle of the installation process.
There are many ways to install Linux, and some assumptions are made in the steps that follow. One assumption is that the amount of Domino data will be larger than that which can be stored on a single DASD; therefore, logical volumes are used. Another assumption is that z/VM virtual disks will be used for swap partitions (if you are installing Linux in an LPAR, virtual disks cannot be used, and we discuss that issue a bit later).
Given these assumptions, the following steps are involved in installing and customizing Linux on zSeries:
Answer the networking questions.
Begin the graphical installation process.
Format the DASD from an ssh or telnet session.
Complete the graphical installation process.
Reboot the Linux system from disk and finish the basic install.
Apply the SLES-8 service pack 2 CD.
Install the sys_epoll RPM.
Re-IPL with the new kernel.
Set up the logical volumes.
Set up the virtual disk swap.
Turn off unneeded services.
Make a copy of the root and /opt filesystems (optional).
If you are installing under z/VM, these questions will be asked from a 3270 session. If you are installing in an LPAR, these questions will be asked from the HMC. Either way, you will be presented with a choice of network devices.
Installation is easier if you have all the relevant information handy. A worksheet is provided in Table 6-2 on page 93 for this purpose. If you have not filled that out yet, this would be a good time to do so.
= = ==- Welcome to SuSE Linux Enterprise Server 8 for zSeries -== = =

Please select the type of your network device:
0) no network
1) OSA Token Ring
2) OSA Ethernet
3) OSA-Gigabit Ethernet or OSA-Express Fast Ethernet
4) Channel To Channel
5) Escon
6) IUCV
8) Hipersockets
9) Show subchannels and detected devices
Enter your choice (0-9): 3
The most common choices are 3 for OSA Express cards (or OSA-2s set up in QDIO mode), 4 for (virtual) channel-to-channel devices, or 8 for HiperSockets.
If you answer 3 or 8, you will be asked whether you want to read the IBM network device driver license and whether you agree with it. You must answer yes to both questions or networking will not be installed:
To set up the network, you have to read and confirm the license information
of the network device driver provided by IBM.
Do you want to see the license (Yes/No) ? yes
..
Do you agree with this license (Yes/No) ? yes
Ok, now we can set up the network configuration.
...
Very often, the install program detects the correct addresses based on the network type. The following example shows this case; the default value appears at the end in parentheses.
If the default value is correct, you can just press Enter twice (the first brings you to the VM READ prompt and the second is taken as input) and that value will be used.
Ok, now we can set up the network configuration.
First OSA Express or Gigabit Ethernet Channels that were detected:
Device Addresses CHPID(s)
2c08 2c09 2c0a
...
Enter the device addresses for the qeth module, e.g. '0x2c08,0x2c09,0x2c0a'
(0x2c08,0x2c09,0x2c0a): <Enter><Enter>
You will then be prompted for the port name. On a real OSA card, the first system to specify a port name physically sets that name (up to 8 characters) on the OSA card. After it has been set, any system wanting to share the card must use the same name. If you get the name wrong, you will probably see the error message: qeth: received an IDX TERMINATE on irq 0x0/0x1 with cause code 0x22 -- try another portname.
With Guest LANs, the port name is not critical, but you must specify something; the name of the Guest LAN is recommended. The following is an example of specifying the correct port name:
Please enter the portname(case sensitive) to use(suselin7): OSA2C00
...
qeth: Trying to use card with devnos 0x2C08/0x2C09/0x2C0A
qeth: Device 0x2C08/0x2C09/0x2C0A is an OSD Express card (level: 0330)
      with link type Gigabit Eth (portname: OSA2C00)
Module qeth loaded, with warnings
qeth    153756  0 (unused)
qdio     33652  1 [qeth]
ipv6    246300 -1 [qeth]
eth0 detected!
Answer all the remaining networking questions (using the installation worksheet on page 93). The broadcast address is not included on the worksheet because it is calculated from the IP address and the subnet mask; therefore, you can just take the default value.
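The broadcast calculation mentioned above can be sketched in the shell: each octet of the IP address is ORed with the inverted mask octet, setting every host bit to 1. The address and mask below are hypothetical examples, not values from the worksheet.

```shell
#!/bin/sh
# Hypothetical example values -- substitute your own address and mask.
ip=172.16.1.10
mask=255.255.254.0

# Split both dotted quads into octets.
IFS=. read -r i1 i2 i3 i4 <<EOF
$ip
EOF
IFS=. read -r m1 m2 m3 m4 <<EOF
$mask
EOF

# Broadcast = IP ORed with the inverted mask, octet by octet.
bcast="$(( i1 | (255 - m1) )).$(( i2 | (255 - m2) )).$(( i3 | (255 - m3) )).$(( i4 | (255 - m4) ))"
echo "$bcast"    # prints: 172.16.1.255
```

Note that with the /23 mask (255.255.254.0), the third octet contributes a host bit, which is why the broadcast address is not simply the IP address with a .255 final octet.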
When you are finished, your answers will be summarized as shown in the following example. Check them over and answer: yes. (If you have made a mistake, you can answer: no and the questions will be asked again.)
Configuration for eth0 will be:
Full host name   : linuxa.itso.company.com
IP address       : 126.96.36.199
Net mask         : 255.255.254.0
Broadcast address: 188.8.131.52
Gateway address  : 184.108.40.206
DNS IP address   : 220.127.116.11
DNS search domain: itso.company.com
MTU size         : 1500
Is this correct (Yes/No) ? yes
You will be asked for a temporary root password:
Please enter the temporary installation password: <secret>
You will be asked for the type of installation server; NFS or FTP is recommended. Again, you will be given a summary.
In this example, an NFS server is referenced. Answer: yes when the information is correct.
Please specify the installation Source:
1) NFS
2) SAMBA
3) FTP
0) Abort
Choice:1
...
Is the following correct?
Installation Source: nfs
IP-Address: 18.104.22.168
Directory: /mnt/sles8cd1
Yes/No: yes
You will be asked for the type of terminal. X-Window (1) is recommended. You will be asked for the IP address of the desktop. Be sure there is an X server session started on this desktop system. If it is a Linux PC, one will probably be running. If it is a Windows PC, you will need third-party X server software such as Hummingbird® eXceed. If you need the IP address from a Windows machine, use the ipconfig command from a DOS prompt.
Which terminal do want to use?
1) X-Window
2) VNC (VNC-Client or Java enabled Browser)
3) ssh
Choice:1
Please enter the IP-Number of the host running the X-Server:22.214.171.124
The SuSE SLES-8 installation process now has enough information to begin yast.
A graphical installer uses a great deal of CPU resource and, depending on the network bandwidth between the workstation and the system, it could slow down the install process. This could be a concern if you are installing from a remote location.
Two X Windows should appear on your X server. The one on top should be the SuSE End User License Agreement. Click Accept. You will then be prompted for a language from the main installation window. Choose the relevant language (the default of English (US) is what we accepted), and click Accept again.
Next, you will be prompted for the DASD devices to be used when the DASD device driver module is installed, as shown in Figure 6-2 on page 105. At a minimum, specify all the DASD assigned to the z/VM user ID and click Load Module (but do not click Accept yet; refer to 6.5.4, "Complete the graphical installation process" on page 106 for more information). The specified DASD that exist should now be displayed in the lower portion of the window.
Figure 6-2: DASD Module Parameter Setting panel
You may want to leave some extra "slots" so additional DASD can be added more easily. In the example shown in Figure 6-2 on page 105, we have DASD defined at addresses 200-20f and 210-213, but we specify dasd=200-21f so additional DASD can be added later at addresses 214-21F without having to update the zipl.conf file and run the zipl command.
The SLES-8 installation process cannot format and partition zSeries DASD. Therefore, you must perform this task manually from an ssh or telnet session. In this example, we are installing the root filesystem (/) onto the device /dev/dasda1 and optional software (/opt) onto the device /dev/dasdb1.
Swap space and logical volumes for Domino data will be set up later. Therefore, we want to format the first two DASD (/dev/dasda and /dev/dasdb) and create a single partition out of each (/dev/dasda1 and /dev/dasdb1).
We ssh into our new Linux through PuTTY from a Windows desktop and display the DASD devices with the following command: cat /proc/dasd/devices. We then format the DASD with the dasdfmt command, using the -b flag to set a 4 KB block size and the -f flag to specify the device to be formatted. Next, we use the fdasd command with the -a flag to create a single partition from all available tracks:
# cat /proc/dasd/devices
0200(ECKD) at (94: 0) is dasda : active at blocksize: 4096, 600840 blocks, 2347 MB
0201(ECKD) at (94: 4) is dasdb : active at blocksize: 4096, 600840 blocks, 2347 MB
0202(ECKD) at (94: 8) is dasdc : active at blocksize: 4096, 600840 blocks, 2347 MB
...
# dasdfmt -b 4096 -f /dev/dasda
Drive Geometry: 3338 Cylinders * 15 Heads = 50070 Tracks
I am going to format the device /dev/dasda in the following way:
   Device number of device : 0x200
   Labelling device        : yes
   Disk label              : VOL1
   Disk identifier         : 0X0200
   Extent start (trk no)   : 0
   Extent end (trk no)     : 50069
   Compatible Disk Layout  : yes
   Blocksize               : 4096
--->> ATTENTION! <<---
All data of that device will be lost.
Type "yes" to continue, no will leave the disk untouched: yes
Formatting the device. This may take a while (get yourself a coffee).
Finished formatting the device.
Rereading the partition table... ok
# fdasd -a /dev/dasda
auto-creating one partition for the whole disk...
writing volume label...
writing VTOC...
rereading partition table...
The same two steps are done for /dev/dasdb.
Now that the two DASD onto which Linux will be installed are formatted and partitioned, you can exit the ssh or telnet session and return to the graphical installation process: # exit
If you are installing Linux in an LPAR and not under z/VM, you may want to use the fdasd command without the -a flag to interactively create a partition that will be used for a swap space; refer to 6.6.1, "Set up swap space on an LPAR" on page 120 for more information on this topic.
When you return to the DASD Module Parameter Setting panel shown in Figure 6-2 on page 105, you can now click Accept. You will probably see the warning panel shown in Figure 6-3. This warning appears regardless of whether or not Linux was previously installed. Accept the default of New Installation and click OK.
Figure 6-3: New installation warning panel
You now see the main Installation Settings window that is shown in Figure 6-4. This is the base screen from which partitioning (filesystems), software (packages), and the time zone are customized.
Figure 6-4: Installation settings panel
Click Partitioning; this brings up the Expert Partitioner window shown in Figure 6-5. You will want to select each DASD partition to be assigned to a filesystem and choose how it will be formatted.
Figure 6-5: Expert Partitioner panel
Partition 0 for each DASD represents the disk itself (for example, /dev/dasda). If you used the fdasd -a command, as recommended, to carve each disk into a single partition, each disk should have a corresponding Partition 1 (for example, /dev/dasda1). These are the partitions you will want to work with.
For example, if you select /dev/dasda1 and click Edit, you should see the panel shown in Figure 6-6.
Figure 6-6: Edit partition panel
Always choose Format, even though you have already formatted the DASD with the dasdfmt command. This allows you to choose a filesystem type. A type of ext2 or ext3 is recommended, mainly because you can remount any filesystem that is one of these types as the other type. The ext3 filesystem has a journal which allows for a more rapid and reliable recovery from a hard crash, while the ext2 filesystem has the best performance.
In our installation, we choose an ext3 filesystem type for /dev/dasda1 which is mounted over the root filesystem (/) and for /dev/dasdb1 which is mounted over /opt where Domino will be installed. Logical volumes that will be used for Domino data are addressed in 6.5.9, "Set up logical volumes" on page 116, and swap space is addressed in 6.6, "Set up swap space" on page 120.
When you have assigned all DASD to filesystems for the initial installation, click Next. If you have not assigned swap space, you will get a warning. Click No to the question: Do you want to change this?. Virtual disk swap space will be added later.
You should now be back at the Installation Settings window, where you should see a summary of your partitioning scheme. Click Software, and you should be presented with the window shown in Figure 6-7 on page 109.
Figure 6-7: Software Selection window
You can accept the default system, which seems to be quite adequate for many purposes, or you can customize the packages that will comprise your system. To do this, click Detailed selection. This brings up the window in Figure 6-8.
Figure 6-8: Detailed software selection window
For our installation, we remove the KDE Desktop Environment and Gnome system, which are better suited to Linux on other architectures. We add C/C++ Compiler and Tools and, of course, the IBM Redbooks package group! When you are done customizing the packages that will be installed, click Accept.
This will again bring you back to the Installation Settings window. The last piece to customize is the time zone. Scroll down and select Time Zone. This will bring up the Clock and Time Zone Configuration window. Choose the correct time zone and click Accept.
This will again bring you back to the Installation Settings window. Click Accept and you should be presented with the warning window shown in Figure 6-9.
Figure 6-9: Begin installation warning
This comes up as a warning because YaST2 will now begin writing to your selected DASD partitions. Click Yes, install. The process of loading packages takes some time, usually 20 to 60 minutes, depending on a number of factors.
When YaST2 has completed, you will see the message: Your system will now be shut down. After shutdown reload the system with the load address of your root DASD.
Click OK and return to your 3270 or HMC session. There you will see the message: Restarting system. However, it is still the in-memory Linux system that was IPLed from your reader, so you may very well see the original networking questions awaiting you; do not despair.
On a PC, the device that is booted depends on the boot order. Normally, removing any Linux CD is adequate to ensure the new Linux system that was just copied to disk gets booted. On zSeries hardware, however, the device that is booted (IPLed) is always specified by its address. YaST2 does not decipher this information, so you must manually IPL from your boot (/boot) or root (/) filesystem if you did not specify a separate boot filesystem.
In our example, we did not specify a separate boot filesystem, so we IPL the root filesystem (/dev/dasda) which corresponds to minidisk 200. On a z/VM 3270 session, the #cp prefix allows CP commands to be sent through the Linux command line to z/VM. Therefore, we IPL our new Linux system with the command:
#cp ipl 200
If you are installing on an LPAR, from the HMC you would IPL the root DASD address rather than the Linux install tape address.
When Linux boots from DASD, YaST2 should pick up where it left off. It may continue adding packages from CD 2 and CD 3.
When it is finished, you will be asked to enter the real root password. Enter a strong (non-guessable) password in the two fields supplied and click Next. Remember this password! You will then be presented with the Add a new user window shown in Figure 6-10.
Figure 6-10: Add a new user window
It is recommended that you always have a non-root user. This would be a good time to add the non-root user that Domino requires. In this example, we create a user named domserva. The default group that Domino uses is named notes.
The Additional users/groups button allows creation of a new group; however, it does not appear to allow it to be the new user's primary group. Therefore, we recommend that you add a new group and set it as the primary group later; this step is documented in 7.3, "Pre-installation steps" on page 131. Click Next again.
Now you will see a window entitled Writing the system configuration. The administration command SuSEconfig is run and you see the output.
You will again see the Installation Settings window with the sections Network interfaces and Printers. You should not have to modify either of these. Select Next. You should see a window that says Saving settings that goes away in a few seconds. Your system will reboot again, but this time it knows which DASD to reboot from.
At this time you should be able to start an ssh session to your new Linux image, which is now on DASD.
Prior to June 2003, you would have been done with the SLES-8 installation at this point. However, a service pack 2 (sp2) update CD was subsequently released. This is normally applied with YaST2.
If you are accessing an FTP server rather than an NFS server, an extra step is necessary: you must first manually obtain and install the yast2-online-update RPM. Otherwise, you will not be able to get past the FTP credentials screens.
In this example, it is assumed the SP2 CD is mounted over /mnt/sles8sp2cd1:
# cd /usr/src
# ftp <your.ftp.server>
ftp> Name : <user name>
ftp> Password: <your password>
ftp> cd /mnt/sles8sp2cd1
ftp> cd s390/update/SuSE-SLES/8/rpm/s390
ftp> mget yast2-online*
mget yast2-online-update-2.6.15-9.s390.rpm [anpqy?]? y
ftp> quit
# rpm -Fvh yast2-online-update-2.6.15-9.s390.rpm
Set the DISPLAY environment variable to point to a desktop with an X server running and start yast2 in the background. For example, if your X server is running on a desktop PC with IP address 126.96.36.199, the following commands will invoke yast2:
# export DISPLAY=188.8.131.52:0
# yast2 &
 3108
An X window should appear on your X server with the YaST2 Control Center, as shown in Figure 6-11.
Figure 6-11: YaST2 Control Center
Click Patch CD Update and you should be presented with a new window, as shown in Figure 6-12 on page 113.
Figure 6-12: YaST2 Package Update window
You have a choice of two update modes: manual and automatic. If you choose manual, you must first manually install the yast2-online-update RPM.
We chose to document the automatic mode, so click Automatic Update and then Expert in the Choice of Installation Source area. You will see the small window shown in the middle of Figure 6-12. Choose the type of URL (probably either FTP or NFS) and click OK. When the next window asks for the NFS or FTP credentials, supply them (refer to Table 6-2 on page 93, if you filled it out) and click Next.
If you are accessing an FTP server with a user ID other than anonymous, you will have to uncheck the anonymous check box in the middle of the window shown on the left side of Figure 6-13 on page 114.
Figure 6-13: FTP authorization screens
Enter the user name and password and click OK. You will then be presented with another window shown on the right side of the figure. Again enter the user name and password, even though the prompt is asking for "Code" and password.
Only a single patch will be applied: patch-5185. You will see the message: Please restart the online update to get all available patches. Click OK. You will get another message; click OK again. The SuSEconfig command will run again, and you will eventually be brought back to the control center shown in Figure 6-11.
Again click Patch CD Update and go through the same steps again, pointing to the same NFS or FTP server. This time through you should see many patches being applied, as shown in Figure 6-14.
Figure 6-14: YaST2 Online Update Confirmation window
You will see a number of warning screens appear. The most relevant is the first, which gives the following message:
Attention: Run mkinitrd and zipl after installation!
This process can also be lengthy, on the order of 20 to 60 minutes. When the process is complete, you should see the message: Installation Successful. Click OK to that message and then click Next. The SuSEconfig command will again be run automatically and you will return to the Control Center. Click Close.
Return to the ssh session. Even though you just applied all recommended patches to your SLES-8 system, you still have to manually install the sys_epoll RPM (epoll-1.0-9.s390.rpm) that Domino needs. This can be accomplished with yast, but we describe how to do it from the command line. Therefore, you must access the CD either through FTP or NFS from an ssh session.
The following example uses NFS. Verify that nothing is mounted over the directory /mnt and mount the SP2 CD. The RPM can be directly installed from the NFS-mounted directory /mnt/s390/update/SuSE-SLES/8/rpm/s390 using the rpm command. For example:
# ls /mnt
. ..
# mount 184.108.40.206:/mnt/sles8sp2cd1 /mnt
# cd /mnt/s390/update/SuSE-SLES/8/rpm/s390
# rpm -ivh epoll-1.0-9.s390.rpm
epoll ##################################################
Run the mkinitrd and zipl commands to create a new initial RAMdisk and to update the IPL record:
# mkinitrd
using "/dev/dasda1" as root device (mounted on "/" as "ext3")
Found ECKD dasd, adding dasd eckd discipline!
Note: If you want to add ECKD dasd support for later mkinitrd calls
where possibly no ECKD dasd is found, add dasd_eckd_mod to
INITRD_MODULES in /etc/sysconfig/kernel
...
Run zipl now to update the IPL record!
# zipl
building bootmap : /boot/zipl/bootmap
adding Kernel Image : /boot/kernel/image located at 0x00010000
adding Ramdisk : /boot/initrd located at 0x00800000
adding Parmline : /boot/zipl/parmfile located at 0x00001000
Bootloader for ECKD type devices with z/OS compatible layout installed.
Syncing disks....
...done
Make a note of the kernel level and build date with the uname command before you shut down and reboot with the new kernel:
# uname -a
Linux linux4 2.4.19-3suse-SMP #1 SMP Wed Nov 6 22:34:43 UTC 2002 s390 unknown
# shutdown -r now
After your system reboots, you can get a new ssh session and verify that the kernel has been updated:
# uname -a
Linux linux4 2.4.19-4suse-SMP #1 SMP Thu Jun 5 23:01:37 UTC 2003 s390 unknown
The DASD that will comprise the Domino data logical volumes must first be formatted and partitioned, as with the two DASD onto which Linux was installed. This is done, again, by using the dasdfmt and fdasd commands.
We choose to make one large volume group and create logical volumes from it. Of the 18 packs, two will be used for notesdata, seven for each of mail_1 and mail_2, and the last two for translog. See 4.7, "Placement of other Domino databases" on page 67 for a more detailed discussion.
In order to format multiple DASD simultaneously, we put the dasdfmt jobs in the background. To do this, we found we had to use the /bin/sh shell (shown in the following example with the sh-2.05b# prompt). This time, the -y flag is added to the dasdfmt command, which stacks an answer of "yes", thus bypassing the question: Are you sure?.
In the following example, we format and partition 18 DASD (/dev/dasdc - /dev/dasdt) using a for loop. Note that i is a variable that iterates through the elements in the list and is replaced where $i is encountered. The loop should iterate quickly. Use the pstree command to view the 18 dasdfmt processes that are children of the /bin/sh shell:
# /bin/sh
sh-2.05b# for i in c d e f g h i j k l m n o p q r s t
> do
> dasdfmt -b 4096 -y -f /dev/dasd$i &
> done
 537
 538
...
 554
sh-2.05b# pstree
init-+-atd
     | ...
     |-sshd---sshd---bash---sh-+-18*[dasdfmt]
     |                         `-pstree
     | ...
...
Be sure to wait for the dasdfmt jobs to finish (use the pstree command again if you are not sure if they are finished). They will finish much faster than if they were done sequentially; however, they will still take approximately 10 to 20 minutes. When they are done, you can exit the /bin/sh shell.
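As a sketch of the same pattern, the shell's built-in wait command can replace the pstree polling: it blocks until every background job has exited. Here format_one is a hypothetical stand-in for the dasdfmt invocation, so the flow can be shown without real DASD.

```shell
#!/bin/sh
# format_one is a hypothetical stand-in for: dasdfmt -b 4096 -y -f /dev/dasd$1
format_one() {
    sleep 1                       # simulate a long-running format
    echo "formatted /dev/dasd$1"
}

for i in c d e f g h i j k l m n o p q r s t
do
    format_one $i &               # launch all 18 formats in parallel
done
wait                              # returns only after every background job exits
echo "all dasdfmt jobs finished"
```

With wait, the script cannot accidentally proceed to fdasd while a format is still running; the pstree check remains useful when working interactively.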
Now invoke a similar for loop. The fdasd command completes quickly, so it does not need to be run in the background:
sh-2.05b# exit
# for i in c d e f g h i j k l m n o p q r s t
> do
> fdasd -a /dev/dasd$i
> done
...
Create the LVM configuration files with the vgscan command:
# vgscan
vgscan -- reading all physical volumes (this may take a while...)
vgscan -- "/etc/lvmtab" and "/etc/lvmtab.d" successfully created
vgscan -- This program does not do a VGDA backup of your volume group
Create physical volumes for each DASD with the pvcreate command. The shell wildcard pattern /dev/dasd[c-t]1 can be used to address all the DASD:
# pvcreate /dev/dasd[c-t]1
pvcreate -- physical volume "dasdc1" successfully created
pvcreate -- physical volume "dasdd1" successfully created
...
pvcreate -- physical volume "dasdt1" successfully created
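If you want to see exactly which names a bracket range matches before handing it to pvcreate, you can exercise the same [c-t] pattern on throwaway files. The file names here are hypothetical; the real command targets device nodes under /dev.

```shell
#!/bin/sh
# Create a scratch directory with a few file names in and out of the range.
demo=$(mktemp -d)
cd "$demo"
for i in a b c d s t u
do
    touch dasd${i}1
done

# The shell expands dasd[c-t]1 to only the names whose letter falls in c..t.
echo dasd[c-t]1     # prints: dasdc1 dasdd1 dasds1 dasdt1
```

Note that this is glob expansion done by the shell, not a regular expression: pvcreate itself only ever sees the already-expanded list of names.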
Verify the physical volumes with the pvscan command:
# pvscan
pvscan -- reading all physical volumes (this may take a while...)
pvscan -- inactive PV "/dev/dasdc1" is in no VG [2.29 GB]
pvscan -- inactive PV "/dev/dasdd1" is in no VG [2.29 GB]
...
pvscan -- inactive PV "/dev/dasdt1" is in no VG [2.29 GB]
pvscan -- total: 18 [41.25 GB] / in use: 0 / in no VG: 18 [41.25 GB]
The volume group named domservb is created with the vgcreate command. Using the -s flag, the physical extent size is increased from the default of 4 MB to 16 MB, so that the volume group can grow to about 1 TB, if necessary.
When the command completes, you can see the volume group has been added to the /dev/ directory. The size of about 41 GB is shown with the vgdisplay command:
# vgcreate -s 16M domservb /dev/dasd[c-t]1
vgcreate -- INFO: maximum logical volume size is 1023.97 Gigabyte
vgcreate -- doing automatic backup of volume group "domservb"
vgcreate -- volume group "domservb" successfully created and activated
# ls -ld /dev/domservb/
dr-xr-xr-x 2 root root 4096 Sep 2 14:53 /dev/domservb/
# ls -l /dev/domservb/
crw-r----- 1 root disk 109, 0 Sep 2 14:53 group
# vgdisplay domservb | grep Size
MAX LV Size       1023.97 GB
VG Size           40.78 GB
PE Size           16 MB
Alloc PE / Size   0 / 0
Free PE / Size    2610 / 40.78 GB
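The 1023.97 Gigabyte ceiling reported by vgcreate follows (on the assumption that LVM1's limit of 65534 extents per logical volume applies here) from multiplying that extent count by the 16 MB extent size; the arithmetic can be checked directly:

```shell
#!/bin/sh
# 65534 extents x 16 MB per extent, expressed in GB (integer part).
max_gb=$(( 65534 * 16 / 1024 ))
echo "${max_gb} GB"    # prints: 1023 GB, the integer part of 1023.97
```

With the default 4 MB extent size, the same limit would cap a logical volume at roughly 256 GB, which is why the -s 16M flag is used here.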
Performance can be greatly increased (in theory) with a striped logical volume. The following recommendations have been made for increasing the performance of logical volumes:
Spread the host adapters used across all host adapter bays
Use as many CHPIDs as possible
Establish connections for each disk to all CHPIDs
Use at least one host adapter, maximum two per CHPID (more than one host adapter per CHPID requires a director/switch)
Spread the disks used over all ranks equally
Avoid (re)using the same resources (CHPID, host adapter, rank) as much as possible
For more details on these performance recommendations, see:
However, striped logical volumes cannot be extended with the lvextend command, so be sure to make each volume large enough up front.
We create logical volumes that are striped across all 18 DASD in the volume group. We express the size of the logical volumes in extents rather than in bytes.
# lvcreate --extents 290 --stripes 18 -n domservb /dev/domservb
lvcreate -- INFO: using default stripe size 16 KB
lvcreate -- rounding to stripe boundary size
lvcreate -- doing automatic backup of "domservb"
lvcreate -- logical volume "/dev/domservb/domservb" successfully created
# lvcreate --extents 1015 --stripes 18 -n mail_1 /dev/domservb
lvcreate -- INFO: using default stripe size 16 KB
lvcreate -- rounding to stripe boundary size
lvcreate -- doing automatic backup of "domservb"
lvcreate -- logical volume "/dev/domservb/mail_1" successfully created
# lvcreate --extents 1015 --stripes 18 -n mail_2 /dev/domservb
lvcreate -- INFO: using default stripe size 16 KB
lvcreate -- rounding to stripe boundary size
lvcreate -- doing automatic backup of "domservb"
lvcreate -- logical volume "/dev/domservb/mail_2" successfully created
# lvcreate --extents 252 --stripes 18 -n translog /dev/domservb
lvcreate -- INFO: using default stripe size 16 KB
lvcreate -- doing automatic backup of "domservb"
lvcreate -- logical volume "/dev/domservb/translog" successfully created
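The "rounding to stripe boundary size" messages mean the extent count is rounded up to a multiple of the stripe count. For mail_1 this can be verified by hand:

```shell
#!/bin/sh
# Round 1015 extents up to the next multiple of 18 stripes.
extents=1015
stripes=18
rounded=$(( (extents + stripes - 1) / stripes * stripes ))
echo "$rounded"    # prints: 1026
```

At 16 MB per extent, 1026 extents is 16416 MB, which matches the 16.03 GB LV Size that lvdisplay reports for mail_1 below.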
Parameters and other considerations must be taken into account when striping, to be sure that you are getting optimum performance. We do not necessarily recommend striping all of the volumes.
The lvdisplay command shows you the details of logical volumes. For example, to see the details of mail_1:
# lvdisplay /dev/domservb/mail_1
--- Logical volume ---
LV Name              /dev/domservb/mail_1
VG Name              domservb
LV Write Access      read/write
LV Status            available
LV #                 2
# open               0
LV Size              16.03 GB
Current LE           1026
Allocated LE         1026
Stripes              18
Stripe size (KByte)  16
Allocation           next free
Read ahead sectors   1024
Block device         58:1
The mke2fs command makes an ext2 filesystem on each of the four logical volumes. We choose ext2 for performance knowing that recovery time will be longer than with a journalled filesystem such as ext3. If you want to choose a journalled filesystem for faster recovery time, you would include the -j flag in the mke2fs command.
# mke2fs /dev/domservb/domservb
mke2fs 1.28 (31-Aug-2002)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
627744 inodes, 1253376 blocks
62668 blocks (5.00%) reserved for the super user
First data block=0
39 block groups
32768 blocks per group, 32768 fragments per group
16096 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 32 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
# mke2fs /dev/domservb/mail_1
...
# mke2fs /dev/domservb/mail_2
...
# mke2fs /dev/domservb/translog
...
If we had not included the DASD addresses 202-213 when we installed, we would have had to update the /etc/zipl.conf file and run zipl (remember, we used DASD addresses 200-21F when installing with yast, which still leaves open "slots" 214-21F if more DASD is needed later).
Because all the addresses exist, all the logical volumes will be picked up when Linux is re-IPLed. We verify by inspecting the first nine lines of the zipl.conf file, which contain the parameters for the default IPL:
# head -9 /etc/zipl.conf
# Generated by YaST2
[defaultboot]
default=ipl
[ipl]
target=/boot/zipl
image=/boot/kernel/image
ramdisk=/boot/initrd
parameters="dasd=200-21f root=/dev/dasda1"
We now need to add the logical volumes to the filesystem table in the /etc/fstab file. It is good practice to first modify /etc/fstab and then pass only the target directory to the mount command, so that you can test your changes without an IPL. A mistake in /etc/fstab can easily result in a system that will not IPL, so this method of updating the file is safer.
First create the empty directory that the logical volume domservb will be mounted over. Then back up and modify the /etc/fstab file (if you used an ext3 filesystem, the filesystem type in the third field would be ext3 rather than ext2):
# mkdir /domservb
# cd /etc
# cp fstab fstab.orig
# vi /etc/fstab    # add 4 lines
/dev/dasda1            /                            ext3   defaults        1 1
/dev/dasdb1            /opt                         ext3   defaults        1 2
/dev/domservb/domservb /domservb                    ext2   defaults        1 3
/dev/domservb/mail_1   /domservb/notesdata/mail_1   ext2   defaults        1 3
/dev/domservb/mail_2   /domservb/notesdata/mail_2   ext2   defaults        1 3
/dev/domservb/translog /domservb/notesdata/translog ext2   defaults        1 3
devpts                 /dev/pts                     devpts mode=0620,gid=5 0 0
proc                   /proc                        proc   defaults        0 0
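Before testing the mounts, a quick sanity pass over the edited file can catch field-count mistakes. This is just a sketch (the check_fstab helper is our own, not a standard command); it verifies that every non-comment line has the six fields an fstab entry expects:

```shell
#!/bin/sh
# check_fstab FILE -- flag non-comment lines without exactly six fields.
check_fstab() {
    awk '!/^[[:space:]]*#/ && NF > 0 && NF != 6 {
        bad++
        print "suspect line " NR ": " $0
    }
    END { exit bad ? 1 : 0 }' "$1"
}

# Example: check the edited file before attempting any mounts.
# check_fstab /etc/fstab && echo "fstab field counts look sane"
```

This does not validate device names or mount points, but it cheaply catches a dropped field, which is one of the easier ways to produce a system that will not IPL.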
Now test the mount of the /domservb/ directory. Once it is mounted, cd into it and create the additional directories that will serve as mount points:
# mount /domservb/
# cd /domservb/
# mkdir notesdata
# cd notesdata/
# mkdir mail_1 mail_2 translog
Again the mount command is used with one parameter to test the other three entries in the /etc/fstab file. When it succeeds, issuing the mount command with no parameters shows that the four logical volumes are mounted:
# mount mail_1
# mount mail_2
# mount translog
# mount
/dev/dasda1 on / type ext3 (rw)
proc on /proc type proc (rw)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/dasdb1 on /opt type ext3 (rw)
shmfs on /dev/shm type shm (rw)
/dev/domservb/domservb on /domservb type ext2 (rw)
/dev/domservb/mail_1 on /domservb/notesdata/mail_1 type ext2 (rw)
/dev/domservb/mail_2 on /domservb/notesdata/mail_2 type ext2 (rw)
/dev/domservb/translog on /domservb/notesdata/translog type ext2 (rw)
Test a reboot with the shutdown command. The system should come back up fine. You can monitor progress from the z/VM 3270 console.
# shutdown -r now
# exit
More details on LVM can be found on the SuSE Web site in the paper LVM - Logical Volume Manager at: