Signals

   



When you issue the kill command with a process number, you are also sending a signal to that process. We did not specify a signal in our kill example; however, the default signal of 15, or SIGTERM, was used. These signals are used by the system to communicate with processes. The signal of 15 we used to terminate our process is a software termination signal that is usually enough to terminate a user process such as the find we had started. A process that is difficult to kill may require the SIGKILL, or 9, signal, which causes an immediate termination of the process. I use this only as a last resort because processes killed with SIGKILL do not always terminate smoothly. To kill such processes as the shell, however, you sometimes have to use SIGKILL.

You can use either the signal name or number. These signal numbers sometimes vary from system to system, so view the manual page for signal, usually in section 5, to see the list of signals on your system. A list of some of the most frequently used signal numbers and corresponding signals follows:

Signal Number    Signal
      1          SIGHUP
      2          SIGINT
      3          SIGQUIT
      9          SIGKILL
     15          SIGTERM
     24          SIGSTOP
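If the manual page is not at hand, the kill command itself can list the signal names your system supports (the exact output format varies by UNIX variant):

 $ kill -l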

To kill a process with id 234 with SIGKILL, you would issue the following command:

 $ kill -9 234
   |    |   |
   |    |   +----> process id (PID)
   |    +--------> signal number
   +-------------> kill command to terminate the process
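Because SIGKILL gives a process no chance to clean up, a common pattern is to try SIGTERM first and fall back to SIGKILL only if the process is still around. The following is a minimal sketch using the PID 234 from the example above; the five-second wait is arbitrary:

 $ kill -15 234                              # ask politely with SIGTERM
 $ sleep 5                                   # give the process time to exit
 $ kill -0 234 2> /dev/null && kill -9 234   # kill -0 only tests existence; force with SIGKILL if it survived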

Showing Remote Mounts with showmount

showmount is used to show all remote systems (clients) that have mounted a local file system. showmount is useful for determining the file systems that are most often mounted by clients with NFS. The output of showmount is particularly easy to read because it lists the host name and directory that was mounted by the client.

NFS servers often end up serving many NFS clients that were not originally intended to be served. This situation consumes additional UNIX system resources on the NFS server, as well as additional network bandwidth. Keep in mind that any data transferred from an NFS server to an NFS client consumes network bandwidth, and in some cases the amount of bandwidth may be substantial if large files or applications are being transferred from the NFS server to the client. The following example is a partial output of showmount taken from a system. showmount runs on the HP-UX, AIX, and Linux systems I have been using throughout this chapter, but not on the Solaris system:

 # showmount -a
 sys100.ct.mp.com:/applic
 sys101.ct.mp.com:/applic
 sys102.cal.mp.com:/applic
 sys103.cal.mp.com:/applic
 sys104.cal.mp.com:/applic
 sys105.cal.mp.com:/applic
 sys106.cal.mp.com:/applic
 sys107.cal.mp.com:/applic
 sys108.cal.mp.com:/applic
 sys109.cal.mp.com:/applic
 sys200.cal.mp.com:/usr/users
 sys201.cal.mp.com:/usr/users
 sys202.cal.mp.com:/usr/users
 sys203.cal.mp.com:/usr/users
 sys204.cal.mp.com:/usr/users
 sys205.cal.mp.com:/usr/users
 sys206.cal.mp.com:/usr/users
 sys207.cal.mp.com:/usr/users
 sys208.cal.mp.com:/usr/users
 sys209.cal.mp.com:/usr/users
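Because each line of showmount -a output is a client:directory pair, a short pipeline can summarize which file systems are mounted most often. This is only a sketch; the -F: field separator assumes the name:directory format shown above:

 # showmount -a | awk -F: '{print $2}' | sort | uniq -c | sort -rn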

The following three options are available for the showmount command:

-a   Prints output in the format "name:directory," as shown above.

-d   Lists all the local directories that have been remotely mounted by clients.

-e   Prints a list of exported file systems.

The following are examples of showmount -d and showmount -e:

 # showmount -d
 /applic
 /usr/users
 /usr/oracle
 /usr/users/emp.data
 /network/database
 /network/users
 /tmp/working

 # showmount -e
 export list for server101.cal.mp.com
 /applic
 /usr/users
 /cdrom
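showmount can also be pointed at another NFS server by naming the host on the command line, which is handy when you are not logged in on the server itself. For example, to see the export list of server101.cal.mp.com from elsewhere on the network:

 # showmount -e server101.cal.mp.com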

Showing System Swap

If your system has insufficient main memory for all the information it needs to work with, it will move pages of information, or even entire processes, out to your swap area. Pages that were most recently used are kept in main memory, and those not recently used will be the first moved out of main memory.

System administrators spend a lot of time determining the right amount of swap space for their systems. Insufficient swap may prevent a system from starting additional processes, hang applications, or not permit additional users to get access to the system. Having sufficient swap prevents these problems from occurring. System administrators usually go about determining the right amount of swap by considering many important factors, including the following:

  1. How much swap is recommended by the application(s) you run? Use the swap size recommended by your applications. Application vendors tend to be realistic when recommending swap space. There is sometimes competition among application vendors to claim the lowest memory and CPU requirements in order to keep the overall cost of solutions as low as possible, but swap space recommendations are usually realistic.

  2. How many applications will you run simultaneously? If you are running several applications, sum the swap space recommended for each application you plan to run simultaneously. If you have a database application that recommends 200 MBytes of swap and a development tool that recommends 100 MBytes of swap, then configure your system with 300 MBytes of swap, minimum.

  3. Will you be using substantial system resources on peripheral functionality such as NFS? The nature of NFS is to provide access to file systems, some of which may be very large, so this use may have an impact on your swap space requirements.

Swap is listed and manipulated on different UNIX variants with different commands. The following example shows listing the swap area on a Solaris system with swap -l:

 # swap -l
 swapfile             dev  swaplo blocks   free
 /dev/dsk/c0t3d0s1   32,25      8 263080 209504

These values are all in 512-byte blocks. In this case, the free blocks are 209504, which is a significant portion of the overall swap allocated on the system.
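As a quick check, multiplying the block counts by 512 bytes converts them to more familiar units: 209504 x 512 bytes is roughly 102 MBytes of free swap out of the approximately 128 MBytes configured (263080 blocks).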

You can view the amount of swap being consumed on your HP-UX system with swapinfo. The following is an example output of swapinfo:

 # swapinfo
              Kb       Kb       Kb   PCT  START/      Kb
 TYPE      AVAIL     USED     FREE  USED   LIMIT RESERVE   PRI  NAME
 dev       49152    10532    38620   21%       0       -     1  /dev/vg00/lvol2
 dev      868352    10888   759160    1%       0       -     1  /dev/vg00/lvol8
 reserve       -   532360  -532360
 memory   816360   469784   346576   58%

Following is a brief overview of what swapinfo gives you.

In the preceding example, the "TYPE" field indicates whether the swap is "dev" for device swap, "reserve" for paging space on reserve (space reserved by processes but not yet in use), or "memory." Memory is a way to allow programs to reserve more virtual memory than you have hard disk paging space set up for on your system.

"Kb AVAIL" is the total swap space available in 1024-byte blocks. This includes both used and unused swap.

"Kb USED" is the current number of 1024-byte blocks in use.

"Kb FREE" is the difference between "Kb AVAIL" and "Kb USED."

"PCT USED" is "Kb USED" divided by "Kb AVAIL" (a worked check follows this list).

"START/LIMIT" is the block address of the start of the swap area.

"Kb RESERVE" is "-" for device swap or the number of 1024-byte blocks reserved for file system swap.

"PRI" is the priority given to this swap area.

"NAME" is the device name of the swap device.

You can also issue the swapinfo command with a series of options. Here are some of the options you can include:

-m   Displays output of swapinfo in MBytes rather than in 1024-byte blocks.

-d   Prints information related to device swap areas only.

-f   Prints information about file system swap areas only.
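These options can typically be combined. As a sketch, the following would report only device swap areas, with the sizes in MBytes rather than 1024-byte blocks:

 # swapinfo -dm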

sar: The System Activity Reporter

sar is another UNIX command for gathering information about activities on your system. You can gather data over an extended time period with sar and later produce reports based on the data. sar behaves much the same among UNIX variants. The Linux system I was using for the examples did not support sar, but the Solaris, HP-UX, and AIX systems had the same options and produced nearly identical outputs. The following are some useful options to sar, along with examples of reports produced with these options where applicable:

sar -o

Save data in the binary file specified after -o. After the file name, you would usually also enter the time interval between samples and the number of samples. The following example shows saving the binary data in the file /tmp/sar.data at an interval of 60 seconds, 300 times:

 # sar -o /tmp/sar.data 60 300 

The data in /tmp/sar.data can later be extracted from the file.
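Because an interval of 60 seconds repeated 300 times keeps sar running for five hours, one approach (a sketch, not the only way) is to start the collection in the background with nohup so that it continues after you log out:

 # nohup sar -o /tmp/sar.data 60 300 > /dev/null 2>&1 &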

sar -f

Specify a file from which you will extract data.

sar -u

Report CPU utilization with the headings %usr, %sys, %wio (idle with some processes waiting for block I/O), and %idle. This report is similar to the iostat and vmstat CPU reports. You extract the binary data saved in a file to get CPU information, as shown in the following sar -u example:

 # sar -u -f /tmp/sar.data
 Header Information for your system
 12:52:04    %usr    %sys    %wio   %idle
 12:53:04      62       4       5      29
 12:54:04      88       5       3       4
 12:55:04      94       5       1       0
 12:56:04      67       4       4      25
 12:57:04      59       4       4      32
 12:58:04      61       4       3      32
 12:59:04      65       4       3      28
 13:00:04      62       5      16      17
 13:01:04      59       5       9      27
 13:02:04      71       4       3      22
 13:03:04      60       4       4      32
 13:04:04      71       5       4      20
 13:05:04      80       6       8       7
 13:06:04      56       3       3      37
 13:07:04      57       4       4      36
 13:08:04      66       4       4      26
 13:09:04      80      10       2       8
 13:10:04      73      10       2      15
 13:11:04      64       6       3      28
 13:12:04      56       4       3      38
 13:13:04      55       3       3      38
 13:14:04      57       4       3      36
 13:15:04      70       4       5      21
 13:16:04      65       5       9      21
 13:17:04      62       6       2      30
 13:18:04      60       5       3      33
 13:19:04      77       3       4      16
 13:20:04      76       5       3      15
                    .
                    .
                    .
 14:30:04      50       6       6      38
 14:31:04      57      12      19      12
 14:32:04      51       8      20      21
 14:33:04      41       4       9      46
 14:34:04      43       4       9      45
 14:35:04      38       4       6      53
 14:36:04      38       9       7      46
 14:37:04      46       3      11      40
 14:38:04      43       4       7      46
 14:39:04      37       4       5      54
 14:40:04      33       4       5      58
 14:41:04      40       3       3      53
 14:42:04      44       3       3      50
 14:43:04      27       3       7      64
 Average       57       5       8      30
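Rather than scanning a long report by eye, you can filter the text output for the intervals of interest. The following sketch assumes the %idle value is the fifth whitespace-separated field, as in the report above, and prints only the intervals where the CPU was more than 90 percent busy:

 # sar -u -f /tmp/sar.data | awk '$5 ~ /^[0-9]+$/ && $5 < 10'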

sar -b

Report buffer cache activity. A database application such as Oracle would recommend that you use this option to see the effectiveness of buffer cache use. You extract the binary data saved in a file to get buffer cache information, as shown in the following example:

 # sar -b -f /tmp/sar.data
 Header information for your system
 12:52:04 bread/s lread/s %rcache bwrit/s lwrit/s %wcache pread/s pwrit/s
 12:53:04       5     608      99       1      11      95       0       0
 12:54:04       7     759      99       0      14      99       0       0
 12:55:04       2    1733     100       4      24      83       0       0
 12:56:04       1     836     100       1      18      96       0       0
 12:57:04       0     623     100       2      21      92       0       0
 12:58:04       0     779     100       1      16      96       0       0
 12:59:04       0    1125     100       0      14      98       0       0
 13:00:04       2    1144     100       9      89      89       0       0
 13:01:04      10     898      99      11      76      86       0       0
 13:02:04       0    1156     100       0      14      99       0       0
 13:03:04       1     578     100       2      22      88       0       0
 13:04:04       5    1251     100       0      12      99       0       0
 13:05:04       3    1250     100       0      12      97       0       0
 13:06:04       1     588     100       0      12      98       0       0
 13:07:04       1     649     100       2      15      86       0       0
 13:08:04       1     704     100       2      15      86       0       0
 13:09:04       1    1068     100       0      18     100       0       0
 13:10:04       0     737     100       1      44      99       0       0
 13:11:04       0     735     100       1      13      95       0       0
 13:12:04       0     589     100       1      15      93       0       0
 13:13:04       0     573     100       0      16      99       0       0
 13:14:04       1     756     100       1      16      91       0       0
 13:15:04       1    1092     100       9      49      81       0       0
 13:16:04       2     808     100       6      82      93       0       0
 13:17:04       0     712     100       1       9      93       0       0
 13:18:04       1     609     100       0      13      97       0       0
 13:19:04       1     603     100       0      10      99       0       0
 13:20:04       0    1127     100       0      14      98       0       0
                    .
                    .
                    .
 14:30:04       2     542     100       1      22      94       0       0
 14:31:04      10     852      99      12     137      92       0       0
 14:32:04       2     730     100      10     190      95       0       0
 14:33:04       4     568      99       2      26      91       0       0
 14:34:04       4     603      99       1      13      91       0       0
 14:35:04       1     458     100       1      13      89       0       0
 14:36:04      13     640      98       1      24      98       0       0
 14:37:04      21     882      98       1      18      95       0       0
 14:38:04       7     954      99       0      19      98       0       0
 14:39:04       3     620     100       1      11      94       0       0
 14:40:04       3     480      99       2      15      85       0       0
 14:41:04       1     507     100       0       9      98       0       0
 14:42:04       1    1010     100       1      10      91       0       0
 14:43:04       5     547      99       1       9      93       0       0
 Average        3     782     100       3      37      91       0       0
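The %rcache column is essentially the percentage of logical reads satisfied from the buffer cache, roughly (1 - bread/s divided by lread/s) x 100. Checking the first interval above: 1 - 5/608 is about 0.99, or 99 percent, which matches the %rcache value reported.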

sar -d

Report disk activity. You get the device name, the percentage of time the device was busy, the average number of requests outstanding for the device, the number of data transfers per second for the device, and other information. You extract the binary data saved in a file to get disk information, as shown in the following example:

 # sar -d -f /tmp/sar.data
 Header information for your system
 12:52:04   device  %busy   avque   r+w/s  blks/s  avwait  avserv
 12:53:04   c0t6d0    0.95    1.41      1      10   16.76   17.28
            c5t4d0  100.00    1.03     20     320    8.36   18.90
            c4t5d1   10.77    0.50     13     214    5.02   18.44
            c5t4d2    0.38    0.50      0       3    4.61   18.81
 12:54:04   c0t6d0    0.97    1.08      1      11   10.75   14.82
            c5t4d0  100.00    1.28     54     862    9.31   20.06
            c4t5d1   12.43    0.50     15     241    5.21   16.97
            c5t4d2    0.37    0.50      0       3    3.91   18.20
 12:55:04   c0t6d0    1.77    1.42      1      22   13.32   14.16
            c5t4d0  100.00    0.79     26     421    8.33   16.00
            c4t5d1   14.47    0.51     17     270    5.30   13.48
            c5t4d2    0.72    0.50      0       7    4.82   15.69
 12:56:04   c0t6d0    1.07   21.57      1      22   72.94   19.58
            c5t4d0  100.00    0.60     16     251    6.80   13.45
            c4t5d1    8.75    0.50     11     177    5.05   10.61
            c5t4d2    0.62    0.50      0       6    4.79   15.43
 12:57:04   c0t6d0    0.78    1.16      1       9   13.53   14.91
            c5t4d0  100.00    0.66     15     237    7.60   13.69
            c4t5d1    9.48    0.54     13     210    5.39   13.33
            c5t4d2    0.87    0.50      1      10    4.86   14.09
 12:58:04   c0t6d0    1.12    8.29      1      17   54.96   14.35
            c5t4d0  100.00    0.60     11     176    7.91   14.65
            c4t5d1    5.35    0.50      7     111    5.23   10.35
            c5t4d2    0.92    0.50      1      10    4.63   16.08
 12:59:04   c0t6d0    0.67    1.53      1       8   18.03   16.05
            c5t4d0   99.98    0.54     11     174    7.69   14.09
            c4t5d1    3.97    0.50      5      83    4.82    9.54
            c5t4d2    1.05    0.50      1      11    4.69   16.29
 13:00:04   c0t6d0    3.22    0.67      3      39    8.49   16.53
            c5t4d0  100.00    0.60     65    1032    8.46   14.83
            c4t5d1   21.62    0.50     31     504    5.30    8.94
            c5t4d2    6.77    0.50      5      78    4.86   14.09
 13:01:04   c0t6d0    4.45    3.08      5      59   25.83   11.49
            c5t4d0  100.00    0.65     42     676    7.85   14.52
            c4t5d1   21.34    0.55     30     476    5.87   18.49
            c5t4d2    4.37    0.50      3      51    5.32   13.50
                    .
                    .
                    .
 14:42:04   c0t6d0    0.53    0.83      0       7   12.21   16.33
            c5t4d0  100.00    0.56      7     107    6.99   14.65
            c4t5d1    6.38    0.50      7     113    4.97   15.18
            c5t4d2    0.15    0.50      0       2    4.53   16.50
 14:43:04   c0t6d0    0.52    0.92      0       7   11.50   15.86
            c5t4d0   99.98    0.92     17     270    8.28   18.64
            c4t5d1   10.26    0.50      9     150    5.35   16.41
            c5t4d2    0.12    0.50      0       1    5.25   14.45
 Average    c0t6d0    1.43  108.80      2      26    0.00   14.71
 Average    c5t4d0  100.00    0.74     25     398    7.83  -10.31
 Average    c4t5d1   19.11    0.51     25     399    5.26  -13.75
 Average    c5t4d2    1.71    0.53      1      21    5.29   13.46
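When a single device dominates, as c5t4d0 does here with %busy at or near 100, it is often easier to examine that device in isolation. A simple filter using the device name from the report above:

 # sar -d -f /tmp/sar.data | grep c5t4d0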

sar -q

Report average queue length. You may have a problem any time the run queue length is greater than the number of processors on the system:

 # sar -q -f /tmp/sar.data
 Header information for your system
 12:52:04 runq-sz %runocc swpq-sz %swpocc
 12:53:04     1.1      20     0.0       0
 12:54:04     1.4      51     0.0       0
 12:55:04     1.3      71     0.0       0
 12:56:04     1.1      22     0.0       0
 12:57:04     1.3      16     0.0       0
 12:58:04     1.1      14     0.0       0
 12:59:04     1.2      12     0.0       0
 13:00:04     1.2      21     0.0       0
 13:01:04     1.1      18     0.0       0
 13:02:04     1.3      20     0.0       0
 13:03:04     1.2      15     0.0       0
 13:04:04     1.2      20     0.0       0
 13:05:04     1.2      43     0.0       0
 13:06:04     1.1      14     0.0       0
 13:07:04     1.2      15     0.0       0
 13:08:04     1.2      26     0.0       0
 13:09:04     1.5      38     0.0       0
 13:10:04     1.5      30     0.0       0
 13:11:04     1.2      23     0.0       0
 13:12:04     1.3      11     0.0       0
 13:13:04     1.3      12     0.0       0
 13:14:04     1.4      16     0.0       0
 13:15:04     1.4      27     0.0       0
 13:16:04     1.5      20     0.0       0
 13:17:04     1.3      21     0.0       0
 13:18:04     1.1      15     0.0       0
 13:19:04     1.2      19     0.0       0
 13:20:04     1.4      22     0.0       0
                .
                .
                .
 14:30:04     1.5       5     0.0       0
 14:31:04     1.6      12     0.0       0
 14:32:04     1.4       9     0.0       0
 14:33:04     1.1       6     0.0       0
 14:34:04     1.3       3     0.0       0
 14:35:04     1.1       4     0.0       0
 14:36:04     1.2       6     0.0       0
 14:37:04     1.4       5     0.0       0
 14:38:04     1.2      10     0.0       0
 14:39:04     1.3       4     0.0       0
 14:40:04     1.1       3     0.0       0
 14:41:04     1.6       3     0.0       0
 14:42:04     1.1       4     0.0       0
 14:43:04     1.3       1     0.0       0
 Average      1.3      17     1.2       0

sar -w

Report system swapping activity.

 # sar -w -f /tmp/sar.data
 Header information for your system
 12:52:04 swpin/s bswin/s swpot/s bswot/s pswch/s
 12:53:04    1.00     0.0    1.00     0.0     231
 12:54:04    1.00     0.0    1.00     0.0     354
 12:55:04    1.00     0.0    1.00     0.0     348
 12:56:04    1.00     0.0    1.00     0.0     200
 12:57:04    1.00     0.0    1.00     0.0     277
 12:58:04    1.00     0.0    1.00     0.0     235
 12:59:04    1.02     0.0    1.02     0.0     199
 13:00:04    0.78     0.0    0.78     0.0     456
 13:01:04    1.00     0.0    1.00     0.0     435
 13:02:04    1.02     0.0    1.02     0.0     216
 13:03:04    0.98     0.0    0.98     0.0     204
 13:04:04    1.02     0.0    1.02     0.0     239
 13:05:04    1.00     0.0    1.00     0.0     248
 13:06:04    0.97     0.0    0.97     0.0     170
 13:07:04    1.00     0.0    1.00     0.0     166
 13:08:04    1.02     0.0    1.02     0.0     209
 13:09:04    0.98     0.0    0.98     0.0     377
 13:10:04    1.00     0.0    1.00     0.0     200
 13:11:04    1.00     0.0    1.00     0.0     192
 13:12:04    0.87     0.0    0.87     0.0     187
 13:13:04    0.93     0.0    0.93     0.0     172
 13:14:04    1.00     0.0    1.00     0.0     170
 13:15:04    1.00     0.0    1.00     0.0     382
 13:16:04    1.00     0.0    1.00     0.0     513
 13:17:04    1.00     0.0    1.00     0.0     332
 13:18:04    1.00     0.0    1.00     0.0     265
 13:19:04    1.02     0.0    1.02     0.0     184
 13:20:04    0.98     0.0    0.98     0.0     212
                .
                .
                .
 14:30:04    0.00     0.0    0.00     0.0     301
 14:31:04    0.00     0.0    0.00     0.0     566
 14:32:04    0.00     0.0    0.00     0.0     539
 14:33:04    0.00     0.0    0.00     0.0     400
 14:34:04    0.00     0.0    0.00     0.0     242
 14:35:04    0.00     0.0    0.00     0.0     286
 14:36:04    0.00     0.0    0.00     0.0     295
 14:37:04    0.00     0.0    0.00     0.0     249
 14:38:04    0.00     0.0    0.00     0.0     300
 14:39:04    0.00     0.0    0.00     0.0     296
 14:40:04    0.00     0.0    0.00     0.0     419
 14:41:04    0.00     0.0    0.00     0.0     234
 14:42:04    0.00     0.0    0.00     0.0     237
 14:43:04    0.00     0.0    0.00     0.0     208
 Average     0.70     0.0    0.70     0.0     346

Using timex to Analyze a Command

If you have a specific command you want to find out more about, you can use timex, which reports the elapsed time, user time, and system time spent in the execution of any command you specify.

timex is a good command for users because it gives you an idea of the system resources you are consuming when issuing a command. The following two examples show issuing timex with no options to get a short output of the amount of CPU time consumed; the second example shows issuing timex -s to report "total" system activity on a Solaris system:

 martyp $ timex listing
 real        0.02
 user        0.00
 sys         0.02

 martyp $ timex -s listing
 real        0.02
 user        0.00
 sys         0.01

 SunOS 5.7 Generic sun4m    08/21

 07:48:30    %usr    %sys    %wio   %idle
 07:48:31      32      68       0       0

 07:48:30 bread/s lread/s %rcache bwrit/s lwrit/s %wcache pread/s pwrit/s
 07:48:31       0       0     100       0       0     100       0       0
 Average        0       0     100       0       0     100       0       0

 07:48:30   device        %busy   avque   r+w/s   blks/s  avwait  avserv
 07:48:31   fd0               0     0.0       0        0     0.0     0.0
            nfs1              0     0.0       0        0     0.0     0.0
            nfs219            0     0.0       0        0     0.0     0.0
            sd1               0     0.0       0        0     0.0     0.0
            sd1,a             0     0.0       0        0     0.0     0.0
            sd1,b             0     0.0       0        0     0.0     0.0
            sd1,c             0     0.0       0        0     0.0     0.0
            sd1,g             0     0.0       0        0     0.0     0.0
            sd3               0     0.0       0        0     0.0     0.0
            sd3,a             0     0.0       0        0     0.0     0.0
            sd3,b             0     0.0       0        0     0.0     0.0
            sd3,c             0     0.0       0        0     0.0     0.0
            sd6               0     0.0       0        0     0.0     0.0
 Average    fd0               0     0.0       0        0     0.0     0.0
            nfs1              0     0.0       0        0     0.0     0.0
            nfs219            0     0.0       0        0     0.0     0.0
            sd1               0     0.0       0        0     0.0     0.0
            sd1,a             0     0.0       0        0     0.0     0.0
            sd1,b             0     0.0       0        0     0.0     0.0
            sd1,c             0     0.0       0        0     0.0     0.0
            sd1,g             0     0.0       0        0     0.0     0.0
            sd3               0     0.0       0        0     0.0     0.0
            sd3,a             0     0.0       0        0     0.0     0.0
            sd3,b             0     0.0       0        0     0.0     0.0
            sd3,c             0     0.0       0        0     0.0     0.0
            sd6               0     0.0       0        0     0.0     0.0

 07:48:30 rawch/s canch/s outch/s rcvin/s xmtin/s mdmin/s
 07:48:31       0       0     147       0       0       0
 Average        0       0     147       0       0       0

 07:48:30 scall/s sread/s swrit/s  fork/s  exec/s rchar/s wchar/s
 07:48:31    2637       0      95   15.79   15.79       0   19216
 Average     2637       0      95   15.79   15.79       0   19216

 07:48:30 swpin/s bswin/s swpot/s bswot/s pswch/s
 07:48:31    0.00     0.0    0.00     0.0     116
 Average     0.00     0.0    0.00     0.0     116

 07:48:30  iget/s namei/s dirbk/s
 07:48:31       0     195     121
 Average        0     195     121

 07:48:30 runq-sz %runocc swpq-sz %swpocc
 07:48:31     2.0     526
 Average      2.0     526

 07:48:30  proc-sz    ov  inod-sz    ov  file-sz    ov  lock-sz
 07:48:31   45/986     0  973/4508    0  357/357     0   0/0

 07:48:30   msg/s  sema/s
 07:48:31    0.00    0.00
 Average     0.00    0.00

 07:48:30  atch/s  pgin/s ppgin/s  pflt/s  vflt/s slock/s
 07:48:31    0.00    0.00    0.00  505.26 1036.84    0.00
 Average     0.00    0.00    0.00  505.26 1036.84    0.00

 07:48:30  pgout/s ppgout/s pgfree/s pgscan/s %ufs_ipf
 07:48:31     0.00     0.00     0.00     0.00     0.00
 Average      0.00     0.00     0.00     0.00     0.00

 07:48:30 freemem freeswap
 07:48:31   15084  1224421
 Average    15084  1224421

 07:48:30 sml_mem   alloc  fail  lg_mem   alloc  fail  ovsz_alloc  fail
 07:48:31 2617344 1874368     0 17190912 10945416     0    3067904      0
 Average   186953  133883     0 1227922  781815     0     219136      0
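As a rough check of the summary numbers, elapsed (real) time should be close to user time plus system time whenever the command does not spend time waiting on I/O or other processes; in the first example above, 0.00 seconds of user time plus 0.02 seconds of system time accounts for essentially all of the 0.02 seconds of real time.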

       