Adding and Removing Hardware


The acquisition stage of a new server should include preliminary growth estimates. Capacity planning is an ongoing process that monitors long-term resource consumption. Eventually, either through aging or through the creation of unexpected projects, servers become resource bound.

When possible, a new server with significantly larger capacity is purchased and the functionality transferred to the new machine. In many cases, however, budget or, more typically, time constraints make this impossible.

The most common resource problem is disk space. As users become accustomed to a system or service, they tend to use it more. They copy their local workstation files to the server for proper backups and disaster recovery purposes. In some cases, an individual or department significantly increases its website with new and improved features. Either way, as system manager, you will be required to increase the system's capacity.

The following section examines how additional disk space can be added to a system and targeted to a specific solution. It is understood that additional disk space will have backup and disaster recovery implications. These topics will be covered in Chapter 10, "Data Backup and Disaster Recovery."

Preparations

The most common hardware added to a system is disk space. Other devices you might want to add to your system usually come with vendor-specific instructions for SLES and will not be covered here. Many are peripherals such as scanners, webcams, and audio gear that are not typically incorporated into servers.

For the purpose of this section, we will discuss the concept of a disk as a single physical unit of storage. We will ignore the underlying complexities of how the unit of storage is generated. The "disk" could be simply a single IDE spindle of fixed capacity, or it could be a partition of a larger RAID array managed at the firmware level. In the current discussion, we will treat these as identical in terms of how they are presented to the operating system.

In smaller servers such as a DNS or small web server, it is sometimes simpler to use many of the default install options for SLES. One of the implications of such an installation is that the Logical Volume Manager (LVM) software is not used to configure the environment. LVM allows for the dynamic addition of disk capacity and targets the new disk to specific volume sets on a live system. Though very powerful, such configurations can get very complex and will not be addressed in this section. Additional information on LVM can be found as a series of whitepapers on the SUSE website (http://www.suse.com/en/whitepapers/lvm/lvm1.html).

Before you add a disk device to a system, it is important to know where you are going to target the device. When you're building a system, it is good practice to separate, on different devices or partitions, various portions of the directory structure. If, at any point, your / partition becomes 100% used, your system will not be able to operate.

Segregation of the major branches of the / directory helps mitigate accidental consumption of critical disk space. Typically, the / level directory on your SLES server contains the entries in the following listing:

 Athena:~ # ls -a /
 .    bin   dev   home  lost+found  mnt   proc  sbin  sys  usr
 ..   boot  etc   lib   media       opt   root  srv   tmp  var
 Athena:~ #

When you are building a server (see Chapter 1, "Installing SUSE LINUX Enterprise Server"), you have the opportunity of allocating these directories to different locations. On a simple one-disk device system, the default install splits the volume into a swap and a / partition. A more robust approach would be to further partition the single volume into distinct areas to contain the more volatile directory structures. On servers that allow end-user content, placing the /home (user files) and /srv (web content) directories on their own device will balance disk consumption across multiple volumes. If individual devices are not available, placing /home and /srv in separate partitions is still a good idea. The segregation will prevent consumption on one partition from impacting the other. Though you can minimize the risks of a disk-full event through quota management, making the system failsafe is simply the smart thing to do.
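The disk-full risk described above can be watched for with a short script. The following sketch (the `check_usage` helper name is an assumption for illustration, not a SLES tool) reads `df -P` output from standard input and flags any filesystem above a usage threshold:

```shell
# check_usage: read `df -P` output on stdin and report any filesystem
# whose Use% exceeds the threshold given as $1 (default 90).
check_usage() {
    threshold=${1:-90}
    # Skip the header line; field 5 is Capacity ("95%"), field 6 the
    # mount point. Strip the '%' sign and compare numerically.
    awk -v t="$threshold" 'NR > 1 {
        use = $5; sub(/%/, "", use)
        if (use + 0 > t) print $6 " is " use "% full"
    }'
}
```

Run from cron as `df -P | check_usage 80` to get early warning before a partition reaches 100%.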

Adding a Disk

In our example, a disk will be added to the web server Athena. The web server will be asked to store a large number of corporate documents rather than just the contact information pages it was originally designed for. A suggested methodology could be as follows:

  • A secondary disk is purchased.

  • A valid full backup of your system must be performed.

  • The disk must be physically added to the machine.

  • A valid partition table must be created on the disk.

  • The partition(s) must be formatted.

  • The formatted partitions must be made live.

  • Data must be transferred to the new disk space.

  • Reboot and sanity checks are performed.

  • User access is restored.

The backup of the system is important because a number of steps in this process could lead to significant data loss or an unbootable system. The physical installation of the disk hardware is machine and interface dependent and will not be covered here. Before the system is shut down, it is important to know the configuration of the disk(s) currently in use on the server. You can accomplish this by using the df command or by looking at the /etc/fstab file:

 Athena:~ # df -h
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/sda1             9.4G  2.2G  6.8G  25% /
 tmpfs                  93M  8.0K   93M   1% /dev/shm
 Athena:~ #

Or

 Athena:~ # cat /etc/fstab
 /dev/sda1     /              ext3    acl,user_xattr        1 1
 /dev/sda2     swap           swap    pri=42                0 0
 devpts        /dev/pts       devpts  mode=0620,gid=5       0 0
 proc          /proc          proc    defaults              0 0
 usbfs         /proc/bus/usb  usbfs   noauto                0 0
 sysfs         /sys           sysfs   noauto                0 0
 /dev/dvd      /media/dvd     subfs   fs=cdfss,ro,procuid,nosuid,nodev,exec,iocharset=utf8 0 0
 /dev/fd0      /media/floppy  subfs   fs=floppyfss,procuid,nodev,nosuid,sync 0 0
 Athena:~ #

fstab FORMAT

The fstab file relates physical partitions and information regarding where and how they will be added to a system. Each record in fstab is split up into six distinct sections:

  1. The first field identifies the block device to be mounted. This is usually the name of the partition you want to mount (for example, /dev/sda1).

  2. The second field indicates the name of the mount point the partition will be associated with (for example, /home).

  3. The third field identifies the file system format of the partition. SUSE can recognize a large number of filesystem types such as MS-DOS, ext2, ext3, reiserfs, the CD format iso9660, and many others.

  4. The fourth field contains options for the mount command. You can specify that a partition is mounted read-only (ro), mounted read-write (rw), accepts access control lists (acl), and supports user-based quotas (usrquota). Many more options are available, some of them filesystem dependent. More information on these values can be found in the man pages for the mount command.

  5. The fifth field is a numeric value that is read by the dump utility. The dump command is used to back up the data on the partition. A value of 1 marks the filesystem for inclusion when dump runs; 0 excludes it. This value can be overridden by your actual backup tool.

  6. The sixth and last field controls how the filesystem is checked at boot time. The number represents the order in which partitions are checked. The / partition should be checked first and has a value of 1. Subsequent partitions should have values greater than 1. Because fsck can run checks in parallel, partitions that share the same pass number but reside on different physical devices are verified concurrently.
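The six fields above can be pulled apart with standard text tools. As a sketch (the `list_mounts` helper name is hypothetical), the following prints the device, mount point, and fsck pass number of each block-device entry, skipping comments and pseudo-filesystems such as proc and sysfs:

```shell
# list_mounts: print device, mount point, and fsck pass number for
# every /dev/* entry in an fstab-style file given as $1.
list_mounts() {
    awk '$1 ~ /^\/dev\// { printf "%s %s pass=%s\n", $1, $2, $6 }' "$1"
}
```

Run it against a copy of the file, for example `list_mounts /etc/fstab`, to spot-check pass numbers before a reboot.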


From these listings, you can see that there appears to be only one SCSI disk in the system (sda), and it is split into two partitions, namely / and a swap partition. When the new disk is added, it is given a unique SCSI ID, in this case 1, and will appear as the second SCSI disk in the system, sdb. Had we used an IDE-based system, the disks would appear as hda and hdb, respectively.

The first procedure is to decide on the low-level configuration of the disk. A partition table contains information on how the disk is subdivided. The simplest way to create a partition table is to use the fdisk utility. You invoke this tool at the command line by typing fdisk followed by the target device name, as shown in Listing 2.1. It is crucial to ensure that you point this utility to the proper device. Failure to do so can result in the corruption of the partition table on an existing device. Detailed information on this utility can be found in the man pages. Once fdisk is running, the m command lists its set of internal commands. The possible values available at this time are shown in Table 2.1.

Table 2.1. The fdisk Internal Command Set

Command   Purpose
--------  -----------------------------------------------
a         Toggles a bootable flag
b         Edits bsd disklabel
c         Toggles the DOS compatibility flag
d         Deletes a partition
l         Lists known partition types
m         Prints this menu
n         Adds a new partition
o         Creates a new empty DOS partition table
p         Prints the partition table
q         Quits without saving changes
s         Creates a new empty Sun disklabel
t         Changes a partition's system ID
u         Changes display/entry units
v         Verifies the partition table
w         Writes table to disk and exits
x         Provides extra functionality (experts only)


WARNING

A properly configured and intact partition table is mandatory for a system to function. It is best to be overly paranoid at this stage and triple-check what you are doing. A mistake here can make your server a boat anchor and set you back a considerable amount of time. Ensure that you have a proper disaster recovery plan and have valid backups before going any further.


The following will create a proper partition table for the new disk being added:

Step 1.

Look at the existing partition table.

It is not expected that a new disk will contain a valid partition table. If it does, it may be an indication that you have pointed the utility at the wrong volume or you may be using a disk containing data that could be accidentally destroyed. At the console prompt, type fdisk followed by the name of the new volume (sdb):

Listing 2.1. An Example of an fdisk Tool Session


 Athena:~ # fdisk /dev/sdb
 Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
 Building a new DOS disklabel. Changes will remain in memory only,
 until you decide to write them. After that, of course, the previous
 content won't be recoverable.

 The number of cylinders for this disk is set to 5221.
 There is nothing wrong with that, but this is larger than 1024,
 and could in certain setups cause problems with:
 1) software that runs at boot time (e.g., old versions of LILO)
 2) booting and partitioning software from other OSs
    (e.g., DOS FDISK, OS/2 FDISK)
 Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

 Command (m for help):

Print out a copy of the current partition table for this disk:

 Command (m for help): p

 Disk /dev/sdb: 42.9 GB, 42949672960 bytes
 255 heads, 63 sectors/track, 5221 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start         End      Blocks   Id  System

 Command (m for help):

The print command reveals a device with no preexisting partition table. You can now proceed and subdivide the disk. In this case, a partition will be created to contain the users' home directories (/home) and another partition to hold the root directory for the web content folders (/srv). For this purpose, subdivide the disk into two roughly equal halves.

Step 2.

Create the partitions.

Use the n command in fdisk to create a new primary partition. It is essential to create proper Linux-specific entries in the partition table and define the partition types. Failure to do so may generate a scenario in which the partition table identifies a partition as vfat while the structure is actually formatted as ext3. This can confuse the kernel and possibly cause data loss.

NOTE

The original specification for partition tables allowed a single device to contain up to four partitions. In some instances, such as restricting access to structures, more than four partitions are desirable.

Extended partitions allow for the creation of additional partitions within a pre-existing partition. An extended partition occupies one of the primary slots and can be subdivided into a number of subunits. Each subpartition can then be presented to the operating system and recognized as a real partition.

Adding extra layers of complexity should be discouraged. Large-capacity drives are relatively inexpensive. It is recommended that additional partitions be provided by additional devices instead of using extended partitions.

The software indicates the geometry of the disk. In this case, split it roughly in half:

 Command (m for help): n
 Command action
    e   extended
    p   primary partition (1-4)
 p
 Partition number (1-4): 1
 First cylinder (1-5221, default 1): 1
 Last cylinder or +size or +sizeM or +sizeK (1-5221, default 5221): 2600

 Command (m for help): n
 Command action
    e   extended
    p   primary partition (1-4)
 p
 Partition number (1-4): 2
 First cylinder (2601-5221, default 2601):
 Using default value 2601
 Last cylinder or +size or +sizeM or +sizeK (2601-5221, default 5221):
 Using default value 5221

 Command (m for help):

It is always a good idea to double-check the configuration. This way, you can verify that the type designation for each partition is correct. The partitions are being added to the system to contain standard Linux files; hence, a type ID of 83 (Linux) is correct. Had you been adding RAM to the system and were required to increase the amount of available swap space, you would need to use the t (type) command to change the partition type ID to 82 (Linux Swap). A list of the available partition types can be generated by using the l command shown in Table 2.1.

Though it is possible to place a filesystem of one flavor into a partition marked as a different partition type, doing so is not recommended. By creating the appropriate type of partition, you can verify the nature of a partition before it is mounted into a live system. Mounting a partition with an inappropriate filesystem type can result in data loss and corruption. This is especially true if the mount forces a filesystem check and discrepancies in formatting are interpreted as corruption.

Step 3.

Confirm the selections.

Print out the current in-memory version of the partition table before committing the changes to the physical device:

 Command (m for help): p

 Disk /dev/sdb: 42.9 GB, 42949672960 bytes
 255 heads, 63 sectors/track, 5221 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes

    Device Boot      Start         End      Blocks   Id  System
 /dev/sdb1               1        2600    20884468+  83  Linux
 /dev/sdb2            2601        5221    21053182+  83  Linux

 Command (m for help):

Now that you have the geometry of the disk as you want it, you need to write the information back to the disk.

Step 4.

Commit the new partition information.

The in-memory configuration of the partition table is applied to the physical device through the w (write) command:

 Command (m for help): w
 The partition table has been altered!

 Calling ioctl() to re-read partition table.
 Syncing disks.
 Athena:~ #

The last procedure that is required before the disk can be brought online for content is to prepare the partitions for the operating system. To do this, you must configure the partition to obey certain rules governing file structures and the way files are accessed and written. You perform this task by creating a file system on the disk. In the Windows and DOS world, this is known as "formatting."

Step 5.

Make a file system for the /home directory structure.

For this step, use the mkfs command. A number of different file systems are available for SLES; they can be found in the man pages. Choosing the correct one for your situation depends on individual corporate policy. For the sake of this example, use ext3.

A good practice is to apply a label to the device partition as you apply the file system. This approach has several benefits. In this case, it could be used to confirm that you allocated the appropriate partition to the intended target before restoring any data.

It is also possible to define a number of additional characteristics for your filesystem. One important consideration is the number of files you expect the partition to contain. Each physical file on the disk is referenced through a structure called an inode. When a filesystem is created, the number of inodes created is based on typical average file size and the size of the partition. If, in your situation, you know that there will be a significant number of very small files, you may need to force a specific inode count. More information on specifying the number of inodes can be found in the man pages for your specific filesystem.
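As a back-of-envelope check before running mkfs, the default inode count is roughly the partition size divided by a bytes-per-inode ratio. The sketch below assumes the common ext3 default of 8192 bytes per inode; the actual count reported by mke2fs may differ slightly because inodes are allocated per block group:

```shell
# estimate_inodes: rough default inode count for a partition, given
# its size in 1K blocks ($1) and a bytes-per-inode ratio ($2,
# defaulting to 8192, a common ext3 value -- check your system).
estimate_inodes() {
    size_kb=$1
    bytes_per_inode=${2:-8192}
    echo $(( size_kb * 1024 / bytes_per_inode ))
}
```

For the 20GB /home partition above (20884468 1K blocks), this yields roughly 2.6 million inodes, in line with the mkfs output that follows. If your files will be much smaller than the ratio, pass a lower bytes-per-inode value to mkfs with -i.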

 Athena:~ # mkfs.ext3 -L HOME -v /dev/sdb1
 mke2fs 1.34 (25-Jul-2003)
 Filesystem label=HOME
 OS type: Linux
 Block size=4096 (log=2)
 Fragment size=4096 (log=2)
 2611200 inodes, 5221117 blocks
 261055 blocks (5.00%) reserved for the super user
 First data block=0
 160 block groups
 32768 blocks per group, 32768 fragments per group
 16320 inodes per group
 Superblock backups stored on blocks:
         32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
         4096000
 Writing inode tables: done
 Creating journal (8192 blocks): done
 Writing superblocks and filesystem accounting information: done

 This filesystem will be automatically checked every 39 mounts or
 180 days, whichever comes first.  Use tune2fs -c or -i to override.
 Athena:~ #

Step 6.

Make a file system for the /srv directory structure.

 Athena:~ # mkfs.ext3 -L WEB -v /dev/sdb2
 mke2fs 1.34 (25-Jul-2003)
 Filesystem label=WEB
 OS type: Linux
 Block size=4096 (log=2)
 Fragment size=4096 (log=2)
 2632672 inodes, 5263295 blocks
 263164 blocks (5.00%) reserved for the super user
 First data block=0
 161 block groups
 32768 blocks per group, 32768 fragments per group
 16352 inodes per group
 Superblock backups stored on blocks:
         32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
         4096000
 Writing inode tables: done
 Creating journal (8192 blocks): done
 Writing superblocks and filesystem accounting information: done

 This filesystem will be automatically checked every 29 mounts or
 180 days, whichever comes first.  Use tune2fs -c or -i to override.
 Athena:~ #

The final processes required to incorporate the new device and partitions into the server demand close attention to detail. They also require a significant amount of scheduled downtime. To minimize the service outage, you can perform a number of steps before you take down the system.

First, you can create temporary mount points for the new partitions. They will be renamed to /home and /srv while the system is in single user mode. In addition, you can prepare a new version of fstab to mount the new partitions on their proper mount points. This way, you can test the fstab file while the system is in single user mode and not have any surprises when the system reboots.

Step 7.

Create temporary mount points and check permissions.

 Athena:~ # cd /
 Athena:/ # mkdir /new_home
 Athena:/ # mkdir /new_srv
 Athena:/ # ls -ld *home* *srv*
 drwxr-xr-x  7 root root 4096 Jan 20 08:23 home
 drwxr-xr-x  2 root root 4096 Jan 20 10:40 new_home
 drwxr-xr-x  2 root root 4096 Jan 20 10:40 new_srv
 drwxr-xr-x  4 root root 4096 Jan  5 04:42 srv
 Athena:/ #

The permissions shown here are correct for files in the root of the filesystem. Users will need read access to the directories. Write access will be granted into subdirectories and below. For the /home structures, users will have write access to their $HOME directory. In the /srv structures, users will be granted access based on the websites they maintain.
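If you want to rehearse this step without touching /, the following sketch creates the staging mount points with an explicit 0755 mode (matching the drwxr-xr-x permissions above) inside a scratch directory. The scratch path is a stand-in for / used only for illustration:

```shell
# Rehearse creating the temporary mount points with an explicit 0755
# mode so they match the existing /home and /srv permissions.
# A mktemp scratch directory stands in for the real / here.
staging=$(mktemp -d)
mkdir -m 755 "$staging/new_home" "$staging/new_srv"
ls -ld "$staging"/new_*
```

Using mkdir -m avoids surprises from a restrictive umask in the root shell.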

Step 8.

Clone and add appropriate lines to /etc/fstab.

Clone the fstab file using the cp (copy) command:

 Athena:/ # cp /etc/fstab /etc/new_fstab 

Add the new device partitions, their target mount points, their filesystem types, and some default attributes to the new_fstab file. You can do this in any text editor, such as vi.

 /dev/sdb1   /home      ext3       acl,user_xattr        1 2
 /dev/sdb2   /srv       ext3       acl,user_xattr        1 2

Step 9.

Move to single user mode.

The next step requires that you remove all user access to the file system. This step prevents loss of data in the case of users actively changing content on the server during the switchover. It also has the added benefit of releasing the files used by the web services in the /srv structure. The simplest method for removing all but console access is to bring the system down to single user mode by changing the current runlevel.

Runlevels are covered in more detail in the next chapter. In the current context, a multiuser server is typically at runlevel 3, or runlevel 5 if the X Window System is active. In single user mode, runlevel 1, the system will have only a minimum number of services running and no interactive sessions other than console access.

For maintenance purposes, the server needs to be transitioned to runlevel 1. You can achieve this by using the init command:

 Athena:/ # init 1 

NOTE

Bringing the machine down to single user mode disables all network services on the server. You need physical access to the console environment to continue on from this point.

You can also query the current runlevel of a server by using the runlevel command. The who -r command indicates the current runlevel as well as the previous state.

As an additional precaution, remove the server's network cable from the NIC. If the machine has multiple NICs, ensure that the cables are labeled and associated with the appropriate card before you remove them. When you are ready to bring your machine back online, you will want a few moments for a sanity check on the work performed.

USER MANAGEMENT

You can rest assured that the user community, especially in the case of an Internet-facing web server, will be waiting to pounce on services, even within the downtime window. Any difficulties encountered during the rebuild will generate distracting phone calls from irate users. It is a good idea to take an extra few minutes to check everything first before you reconnect the server to the real world.


Step 10.

Switch directories.

This step must be completed in a systematic fashion to ensure that no information is lost and with a minimum amount of downtime. When the system is in single user mode, you must reenter the root password at the console prompt. When you are logged on, you are ready to do the following:

  1. Rename the current directories to a backup version and move the prepared mount points to the appropriate names:

     Athena:/ # cd /
     Athena:/ # mv /home /old_home
     Athena:/ # mv /srv  /old_srv
     Athena:/ # mv /new_home  /home
     Athena:/ # mv /new_srv  /srv

  2. Back up the active fstab and move the new one into position:

     Athena:/ # mv /etc/fstab /etc/old_fstab
     Athena:/ # mv /etc/new_fstab /etc/fstab

  3. Mount the new disk partitions and attach them to the mount points. For this, you use the mount command with the -a parameter. This forces all partitions in /etc/fstab to be mounted. This emulates the state of the mount points after a clean reboot. The mount command used here should be the following:

     Athena:/ # mount -a 

  4. Check to see everything is mounted properly:

     Athena:/ # df -h
     Filesystem            Size  Used Avail Use% Mounted on
     /dev/sda1             9.4G  2.2G  6.8G  25% /
     tmpfs                  93M  8.0K   93M   1% /dev/shm
     /dev/sdb1              20G   33M   19G   1% /home
     /dev/sdb2              20G   34M   19G   1% /srv

Notice the addition of the /home and /srv entries as individual entities, each providing 20GB of disk space.

Step 11.

Move the data.

Move the data from the old_ directories to the new disk space:

 Athena:/ # cd /old_home
 Athena:/old_home # tar cf - * | ( cd /home ; tar xfp - )
 Athena:/old_home # cd /old_srv
 Athena:/old_srv # tar cf - * | ( cd /srv ; tar xfp - )
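Because this copy is the point of no return before the old trees are retired, it is worth rehearsing the tar pipeline on scratch directories and verifying the result with diff -r first. The following sketch uses temporary stand-ins for /old_home and /home:

```shell
# Rehearse the tar-pipe copy on scratch directories, then verify the
# two trees match before trusting the method on real data.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/user1"
echo "hello" > "$src/user1/file.txt"
chmod 640 "$src/user1/file.txt"

# Same idiom as the migration step; the p flag preserves file modes
# on extraction.
( cd "$src" && tar cf - * ) | ( cd "$dst" && tar xfp - )

# diff -r exits non-zero if any file differs or is missing.
diff -r "$src" "$dst" && echo "copy verified"
```

When run as root, tar also preserves ownership, which matters for the real /home and /srv trees.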

You have completed the migration of the data from a directory structure to individual mount points associated with the original names. You are now ready to reboot.

Step 12.

Reboot and perform sanity checks.

All the work has been completed at this stage. You now need to confirm the behavior of the system after a restart. This step validates that you did not perform a manual task that is not reflected in the system's normal startup procedures. It also provides a clean shutdown and reinstates the machine to its operational runlevel.

At the console prompt, type reboot:

 Athena:/ # reboot 

After your system has rebooted, ensure that the new versions of /srv and /home reflect the new configuration. Because they are now mount points instead of traditional subdirectories of /, a df command should show a value for the amount of available disk space for each mount point. An additional quick check would be to test services that depend on the contents that were migrated:

 Athena:/ # df -h
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/sda1             9.4G  2.2G  6.8G  25% /
 tmpfs                  93M  8.0K   93M   1% /dev/shm
 /dev/sdb1              20G   34M   19G   1% /home
 /dev/sdb2              20G   34M   19G   1% /srv

In one of the preceding steps, you removed the network cables from the server. In some instances, the network environment will not initialize properly without live network cables connected to each NIC. To test, you may need to connect to a test network or simply to a laptop with a crossover cable. This should provide the appropriate signals for the card to initialize properly. You may have to restart the network services before continuing with testing. You accomplish this by rebooting the server or, more gently, by issuing the following command:

 Athena:/ # /etc/init.d/network restart 

In the case of Athena being a web server, checking the default server web page as well as accessing a few user public_html directories should suffice. This would verify that the Apache service found both environments and that the permissions associated with the locations are correct. Secondary checks should include testing the users' publishing access to the server through FTP or Samba shares. At this point, you can place the machine back in service. Users should be able to connect to their environment, and all services should be running.



    SUSE LINUX Enterprise Server 9 Administrator's Handbook
    ISBN: 067232735X
    Year: 2003
    Pages: 134