Tuning Kernel Parameters


Many of the tunable performance items can be configured directly in the kernel. The sysctl command is used to view and adjust the current kernel settings. For example, to display all available parameters in a sorted list, use:

 sudo sysctl -a | sort | more 
Note 

There are a few tunable parameters that can only be accessed by root. Without sudo, you can still view most of the kernel parameters.

Each kernel parameter is displayed in a field = value format. For example, the parameter kernel.threads-max = 16379 sets the maximum number of concurrent processes to 16,379. This is smaller than the maximum number of unique PIDs (65,536). Lowering this value can improve performance on systems with slow CPUs or little RAM, since it reduces the number of simultaneous tasks. On high-performance computers with dual processors, this value can be large. As an example, my 350 MHz iMac is set to 2,048, my dual-processor 200 MHz PC is set to 1,024, and my 2.8 GHz dual-processor PC is set to 16,379.
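
To check this value on your own system, query the single parameter by name (the number shown is just the value from this example system):

 $ sysctl kernel.threads-max
 kernel.threads-max = 16379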

Tip 

The kernel configures the default number of threads based on the available resources. Installing the same Ubuntu version on different hardware may set a different value. If you need identical systems (for testing, a critical deployment, or compatibility-sensitive software), be sure to set this value explicitly.

There are two ways to adjust kernel parameters. The first is on the command line, for example, sudo sysctl -w kernel.threads-max=16000. This change takes effect immediately but is not permanent; if you reboot, the change is lost. The other way is to add the parameter to the /etc/sysctl.conf file. Adding the line kernel.threads-max=16000 makes the change take effect on the next reboot. Usually when tuning, you first use sysctl -w. If you like the change, then you can add it to /etc/sysctl.conf. Using sysctl -w first allows you to test modifications; if everything breaks, you can always reboot to recover before committing the changes to /etc/sysctl.conf.
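
As a minimal sketch of this workflow (using the same threads-max value as above; sysctl -p simply reloads /etc/sysctl.conf so you do not have to wait for a reboot):

 $ sudo sysctl -w kernel.threads-max=16000    # test the change immediately
 kernel.threads-max = 16000
 $ echo "kernel.threads-max=16000" | sudo tee -a /etc/sysctl.conf   # make it permanent
 $ sudo sysctl -p                             # reload /etc/sysctl.conf now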

Computing Swap

The biggest improvement you can make to any computer running Ubuntu is to add RAM. In general, the speed impact from adding RAM is bigger than the impact from a faster processor. For example, increasing the RAM from 128 MB to 256 MB can turn the installation from an all-day job into an hour-long one. Increasing to 1 GB of RAM shrinks the installation to minutes.

There is an old rule of thumb about the amount of swap space. The conventional wisdom says that you should have twice as much swap as RAM: a computer with 256 MB of RAM should start with 512 MB of swap. Although this is a good idea for memory-limited systems, it isn't practical for high-end home user systems. If you have 1 GB of RAM, then you probably will never need swap space, and you are very unlikely to need 2 GB of swap unless you are planning on doing video editing or audio composition.
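
To see how much RAM and swap a system currently has (and how much is actually in use), two standard commands are enough; this is a quick check, not a tuning step:

 $ free -m       # RAM and swap totals and usage, in megabytes
 $ swapon -s     # each active swap partition or file and its size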

There is a limit to the amount of usable RAM. A 32-bit PC architecture, such as the Intel Pentium i686 family, can only address 4 GB of RAM. This is a hardware limitation: even if you could insert 16 GB of RAM into the computer, only the first 4 GB would be addressable. The second limitation comes from the Linux 2.6 kernel used by Ubuntu's Dapper Drake (as well as other Linux variants). The default kernel can only access 1 GB of RAM; any remaining RAM goes unused. For this reason, it is currently not worth investing in more than 1 GB of RAM. Also, since the total virtual memory (RAM + swap) is limited to 4 GB, you should not need to allocate more than 3 GB of swap on a 1 GB RAM system.

Note 

If you compile the kernel from scratch, there is a HIGHMEM flag that can be set to access up to 4 GB of RAM on a 32-bit architecture. Unfortunately, there is a reported performance hit, since most hardware drivers cannot access the high memory. If you set this flag and fill the machine with 4 GB of RAM, then do not bother with any swap space; the total amount of memory (RAM + swap) cannot be larger than 4 GB.

Modifying Shared Memory

Some sections of virtual memory can be earmarked for use by multiple applications. Shared memory allows different programs to communicate quickly and share large volumes of information. Applications such as X-Windows, the Gnome Desktop, Nautilus, the X-session manager, the Gnome Panel, the Trashcan applet, and Firefox all use shared memory. If programs cannot allocate or access the shared memory that they need, then they will fail to start.

The inter-process communication status command, ipcs, displays the current shared memory allocations as well as the PIDs that created and last accessed the memory (see Listing 7-3).

Tip 

There are many other flags for ipcs. For example, ipcs -m -t shows the last time the shared memory was accessed, and ipcs -m -c shows access permissions. In addition, ipcs can show semaphore and message queue allocations.

Listing 7-3: Viewing Shared Memory Allocation

 $ ipcs -m

 ------ Shared Memory Segments --------
 key        shmid      owner      perms      bytes      nattch     status
 0x00000000 327680     nealk      600        393216     2          dest
 0x00000000 360449     nealk      600        393216     2          dest
 0x00000000 196610     nealk      600        393216     2          dest
 0x00000000 229379     nealk      600        393216     2          dest
 0x00000000 262148     nealk      600        393216     2          dest

 $ ipcs -m -p

 ------ Shared Memory Creator/Last-op --------
 shmid      owner      cpid       lpid
 327680     nealk      7267       7307
 360449     nealk      7267       3930
 196610     nealk      7182       7288
 229379     nealk      7265       3930
 262148     nealk      7284       3930
 393221     nealk      7314       3930
 425990     nealk      7280       3930
 458759     nealk      7332       3930

 $ ps -ef | grep -e 3930 -e 7280  # what processes are PID 3930 and 7280?
 root      3930  3910  0 09:24 tty7     00:00:02 /usr/bin/X :0 -br -audit 0 -auth /var/lib/gdm/:0.Xauth -nolisten tcp vt7
 nealk     7280     1  0 15:22 ?        00:00:01 update-notifier

Programs can allocate shared memory in two different ways: temporary and permanent. A temporary allocation means the memory remains shared until all applications release the memory handle. When no applications remain attached (the ipcs nattch field), the memory is freed. In contrast, a permanent allocation can remain even when no programs are currently using it. This allows programs to save state in a shared communication buffer.

Sometimes shared memory becomes abandoned. To forcefully free abandoned memory, use ipcrm. You will need to specify the shared memory ID found from using ipcs (the shmid column).
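
As a brief example (using one of the shmid values from Listing 7-3; substitute the ID of the actual abandoned segment on your system):

 $ ipcs -m              # find the shmid of the abandoned segment
 $ ipcrm -m 327680      # remove the shared memory segment with shmid 327680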

More often than not, you will be more concerned with allocating more shared memory than with freeing abandoned segments. For example, databases and high-performance web servers work better when there is more shared memory available. The sysctl command shows the current shared memory limits:

 $ sysctl kernel | grep shm
 kernel.shmmni = 4096
 kernel.shmall = 2097152
 kernel.shmmax = 33554432

In this example, kernel.shmmax caps a single shared memory segment at 33,554,432 bytes (32 MB), kernel.shmall caps the total amount of shared memory at 2,097,152 pages (8 GB with 4 KB pages), and kernel.shmmni limits the system to 4,096 segments. These sizes are plenty for most day-to-day usage, but if you plan to run a database such as Oracle, MySQL, or PostgreSQL, then you will almost certainly need to increase these values.
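
For example, to raise the maximum segment size to 128 MB (the value 134217728 here is purely illustrative; pick the size your database's documentation recommends):

 $ sudo sysctl -w kernel.shmmax=134217728                          # test immediately
 $ echo "kernel.shmmax=134217728" | sudo tee -a /etc/sysctl.conf   # keep it after reboot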

Changing Per User Settings

Beyond the kernel settings are parameters for configuring users. The ulimit command is built into the bash shell and sets resource limits for the programs run from that shell. Running ulimit -a shows the current settings:

 $ ulimit -a
 core file size          (blocks, -c) 0
 data seg size           (kbytes, -d) unlimited
 max nice                        (-e) 20
 file size               (blocks, -f) unlimited
 pending signals                 (-i) unlimited
 max locked memory       (kbytes, -l) unlimited
 max memory size         (kbytes, -m) unlimited
 open files                      (-n) 1024
 pipe size            (512 bytes, -p) 8
 POSIX message queues     (bytes, -q) unlimited
 max rt priority                 (-r) unlimited
 stack size              (kbytes, -s) 8192
 cpu time               (seconds, -t) unlimited
 max user processes              (-u) unlimited
 virtual memory          (kbytes, -v) unlimited
 file locks                      (-x) unlimited

These settings show, for example, that a single user shell can have a maximum of 1,024 open files and that core dumps are disabled (size 0). Developers will probably want to enable core dumps with ulimit -c 100 (for 100 blocks).
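
As a quick sketch (the limit applies only to the current shell and the programs started from it):

 $ ulimit -c 100    # allow core dumps of up to 100 blocks
 $ ulimit -c        # verify the new setting
 100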

The user cannot change some limits. For example, to increase the number of open files to 2,048 you would use ulimit -n 2048, but only root can raise this value. In contrast, the user can always lower it.

Note 

Some values have an upper limit defined by the kernel. For example, although root can increase the number of open file handles, this cannot be increased beyond the value set by the kernel parameter fs.file-max (sysctl fs.file-max).
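
To compare the two ceilings side by side (the numbers will differ from system to system):

 $ sysctl fs.file-max    # kernel-wide limit on open file handles
 $ ulimit -n             # limit for the current shell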

On a multi-user system, the administrator may want to limit the number of processes any single user can have running. This can be done by adding a ulimit statement to the /etc/bash.bashrc script:

 # Only change the maximum number of processes for users, not root.
 if [ `/usr/bin/id -u` -ne 0 ] ; then
   ulimit -u 2048 2>/dev/null   # ignore errors if user set it lower
 else
   ulimit -u unlimited          # root can run anything
 fi
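
Once this is in place, a new shell opened by a regular user reports the lower cap (2,048 here, matching the value in the script above):

 $ ulimit -u
 2048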


