Virtual Partition Kernel-Related Work

   

HP-UX Virtual Partitions
By Marty Poniatowski

Chapter 4.  Building an HP-UX Kernel


Each Virtual Partition runs its own instance of HP-UX 11i, and each instance has its own HP-UX kernel. It is likely that you'll customize these kernels in a variety of ways to suit the applications running in the respective vPars. The kernel is built for you automatically when you install the vPars software. Several drivers are included in the kernel, and the kernel is configured so that it can be relocated in memory. Relocation is required because you'll have multiple HP-UX kernels in memory at once, and each must be resident at a different location. To get vPars running, the kernel must be reconfigured to include the vPar drivers and made relocatable; this reconfiguration takes place for you on every volume that has HP-UX 11i on it and will run a vPar. Let's take a look at some of the kernel-related work that takes place for you behind the scenes.

Because memory is shared among multiple vPars, the kernel must be relocatable in memory. At the time of this writing, patches are available that allow the kernel to be built as a relocatable kernel. We won't perform any checks related to patches.

The file /sbin/vecheck is a vPars-related file that is required on the system. The following listing is the portion of /usr/conf/gen/config.sys that checks whether /sbin/vecheck has been loaded on the system:

 # Determine whether the linker supports kernel relocation. If it does,
 # link the kernel using the relocation options.
 LOADOPTS_ADDL=` \
         if [ -f /sbin/vecheck ]; then \
                 ${WHAT} ${LD} | \
                 ${AWK} '$$0 ~ /92453-07 linker/ { \
                         split($$7, vers, "."); \
                         if ( vers[1] == "B" && \
                         ( vers[2] == 11 && vers[3] >= 25 ) || vers[2] > 11 ) \
                                 print "${LOADOPTS_RELOC}"; \
                         else print "${LOADOPTS_STATIC}"; }'; \
         else \
                 echo "${LOADOPTS_STATIC}"; \
         fi; \
      `
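The version test above can be tried on its own. Note that config.sys is processed by make, so awk's $0 and $7 appear there as $$0 and $$7; the standalone sketch below uses the unescaped forms. The sample "what" strings fed to the function are illustrative stand-ins, not genuine ld output:

```shell
# Hedged sketch: replay the config.sys linker-version test outside the
# kernel build. The logic is copied from the awk fragment above; the
# sample input strings are invented for illustration.
check_linker() {
  printf '%s\n' "$1" | awk '$0 ~ /92453-07 linker/ {
        split($7, vers, ".");
        if ( vers[1] == "B" && ( vers[2] == 11 && vers[3] >= 25 ) || vers[2] > 11 )
                print "relocatable";
        else print "static"; }'
}

check_linker "ld: 92453-07 linker ld HP-UX tools B.11.41"   # prints: relocatable
check_linker "ld: 92453-07 linker ld HP-UX tools B.11.11"   # prints: static
```

A linker at revision B.11.25 or later (or any B.12+ revision) selects the relocation load options; anything older falls back to a statically located kernel.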

The following is a long listing of /sbin/vecheck that was loaded with the vPar software:

 # ll /sbin/vecheck
 -r-xr-xr-x  1 bin        bin         20533 Mar 5 19:01 /sbin/vecheck
 #

Next, let's take a look at /stand/system to see the vpar driver that has been added to the file:

 # cat /stand/system
 ***************************************
 * Source: /ux/core/kern/filesets.info/CORE-KRN/generic
 * @(#)B.11.11_LR
 *
 ***************************************
 * Additional drivers required in every machine-type to create a complete
 * system file during cold install. This list is every driver that the
 * master.d/ files do not force on the system or is not identifiable by
 * ioscan.
 * Other CPU-type specific files can exist for their special cases.
 * see create_sysfile (1m).
 ***************************************
 *
 * Drivers/Subsystems
 sba
 lba
 c720
 sctl
 sdisk
 asio0
 cdfs
 cxperf
 olar_psm
 olar_psm_if
 dev_olar
 diag0
 diag1
 diag2
 dmem
 dev_config
 iomem
 nfs_core
 nfs_client
 nfs_server
 btlan
 maclan
 dlpi
 token_arp
 inet
 uipc
 tun
 telm
 tels
 netdiag1
 nms
 hpstreams
 clone
 strlog
 sad
 echo
 sc
 timod
 tirdwr
 pipedev
 pipemod
 ffs
 ldterm
 ptem
 pts
 ptm
 pckt
 td
 fddi4
 gelan
 GSCtoPCI
 iop_drv
 bs_osm
 vxfs
 vxportal
 lvm
 lv
 nfsm
 rpcmod
 autofsc
 cachefsc
 cifs
 prm
 vpar                            <--- vpar driver added here
 STRMSGSZ                 65535
 nstrpty                  60
 dump lvol
 maxswapchunks            2048
 #
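A quick way to confirm the driver entry is present (and to add it if it is not) is a whole-line grep against the system file. This sketch works on a temporary copy; the abbreviated driver list is illustrative, and on a live system the target would be /stand/system itself, followed by a kernel rebuild with mk_kernel:

```shell
# Sketch: idempotently ensure the vpar driver line exists in a system
# file. A temporary file stands in for /stand/system for illustration.
SYSFILE=$(mktemp)
printf '%s\n' sba lba c720 vxfs lvm > "$SYSFILE"   # abbreviated driver list

# Append "vpar" only if no whole-line entry already exists, so running
# this twice never produces a duplicate.
grep -qx vpar "$SYSFILE" || echo vpar >> "$SYSFILE"

grep -cx vpar "$SYSFILE"   # prints: 1
```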

The vpar driver is a master driver described in /usr/conf/master.d as shown below:

 # pwd
 /usr/conf/master.d
 # cat vpar
 $CDIO
 vpar       0
 $$$
 $DRIVER_INSTALL
 vcn             -1              209
 vcs             -1              -1
 vpar_driver     -1              -1
 $$$
 $DRIVER_DEPENDENCY
 vcn            vpar
 vcs            vpar
 vpar           vcs vcn vpar_driver
 vpar_driver    vpar
 $$$
 $DRIVER_LIBRARY
 *
 * The driver/library table. This table defines which libraries a given
 * driver depends on. If the driver is included in the dfile, then the
 * libraries that driver depends on will be included on the ld(1) command
 * line. Only optional libraries *need* to be specified in this table,
 * (but required ones can be included, as well).
 *
 * Driver handle    <libraries>
 *
 * subsystems first
 vcn            libvpar-pdk.a
 vcs            libvpar-pdk.a
 vpar           libvpar-pdk.a
 vpar_driver    libvpar-pdk.a
 $$$
 $LIBRARY
 *
 * The library table. Each element in the library table describes
 * one unique library. The flag member is a boolean value, it is
 * initialized to 1 if the library should *always* be included on
 * the ld(1) command line, or 0 if the library is optional (i.e. it
 * is only included when one or more drivers require it). The order
 * of the library table determines the order of the libraries on the
 * ld(1) command line, (i.e. defines an implicit load order). New
 * libraries must be added to this table.
 * Note: libhp-ux.a must be the last entry, do not place
 * anything after it.
 *
 * Library <required>
 *
 libvpar-pdk.a 0
 $$$
 #
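Because each section of a master file is delimited by a $SECTION header and a closing $$$, a single section is easy to extract with awk. The sketch below pulls the $DRIVER_DEPENDENCY table out of a file; the heredoc reproduces a fragment of the vpar master file, and on a real system the input would be /usr/conf/master.d/vpar:

```shell
# Sketch: extract one section from a master.d-style file. The sample
# fragment below is copied from the vpar master file shown above.
MASTER=$(mktemp)
cat > "$MASTER" <<'EOF'
$DRIVER_INSTALL
vcn             -1              209
$$$
$DRIVER_DEPENDENCY
vcn            vpar
vcs            vpar
vpar           vcs vcn vpar_driver
vpar_driver    vpar
$$$
EOF

# Print only the lines between $DRIVER_DEPENDENCY and its closing $$$.
awk '/^\$DRIVER_DEPENDENCY/ { sec = 1; next }
     /^\$\$\$/              { sec = 0 }
     sec' "$MASTER"
```

Run against the fragment above, this prints the four dependency lines, showing that vcn and vcs each depend on vpar, while vpar in turn depends on vcs, vcn, and vpar_driver.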

You can see in this file that there are multiple drivers present. The vcn and vcs drivers are used to support the console in a vPars environment. Since you'll probably only have one physical console for multiple partitions, you need a way to share the physical device.

The following is a description of these two drivers:

vcn

This is the Virtual CoNsole driver for use by vPars. Since only one physical console exists on a system, vcn is used as a virtual console for each vPar. Console information is sent by vcn to vpmon, where it is buffered. The buffered information is then sent to the vcs driver (see the next item), where it can be viewed.

vcs

This is the Virtual Console Slave driver. This driver is the focal point for all Virtual Consoles. Information stored by vpmon is sent to vcs when a connection is established.

Figure 4-1 shows the relationship between vcs, vcn, and vpmon.

Figure 4-1. Virtual Consoles Implemented with vcn and vcs with First vPar Running


If the partition that owns the physical console (the leftmost partition in Figure 4-1) is not running, then vpmon emulates the vcs driver and manages the physical console. This ensures that the physical console will be available even if the vPar that owns it is not running.

We can also use SAM to view all aspects of the kernel and make kernel changes for vPars. Keep in mind that running SAM in different vPars is like running SAM on different systems. Figure 4-2 shows a screen shot of SAM displaying drivers for vPar cable1 (the hostname cvhdcon4 is shown at the top of all of the SAM windows):

Figure 4-2. SAM Showing Virtual Console-Related Drivers


Note in the lower left window of Figure 4-2 that vcn, vcs, and vpar_driver have a Current State of In, indicating that the kernel has these three drivers present in it.

Chapter 11 covers SAM and vPars. Suffice it to say for the purposes of our kernel discussion that all of the work we've been performing on vPars at the command line can be performed in SAM as well.

Although we didn't have to manually build the kernel to include these drivers, we could have done so at the command line with mk_kernel to build the new kernel and kmupdate to move the new kernel-related files into place, as shown in the following listing:

 # mk_kernel
 Generating module: krm...
 Compiling /stand/build/conf.c...
 Loading the kernel...
 Generating kernel symbol table...
 # kmupdate

   Kernel update request is scheduled.

   Default kernel /stand/vmunix will be updated by
   newly built kernel /stand/build/vmunix_test
   at next system shutdown or startup time.
 #

kmupdate moves numerous new kernel-related files into place and saves a copy of your old files. Although kmupdate saves you some typing, it is important to know that it is performing the following moves for you:

 # mv /stand/vmunix /stand/vmunix.prev
 # mv /stand/dlkm /stand/dlkm.prev
 # mv /stand/build/vmunix_test /stand/vmunix
 # mv /stand/build/dlkm.vmunix_test /stand/dlkm
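The same file shuffle can be played out safely in a scratch directory, which makes the save-then-promote pattern easy to see. The scratch tree and file contents below are illustrative; on a real system these paths live under /stand and the moves are performed at shutdown or startup by kmupdate:

```shell
# Sketch of the moves kmupdate performs, in a throwaway directory that
# mirrors the HP-UX /stand layout. Nothing outside the scratch tree is
# touched.
STAND=$(mktemp -d)
mkdir -p "$STAND/build/dlkm.vmunix_test" "$STAND/dlkm"
echo old-kernel > "$STAND/vmunix"
echo new-kernel > "$STAND/build/vmunix_test"

# Save the running kernel and its loadable-module directory...
mv "$STAND/vmunix" "$STAND/vmunix.prev"
mv "$STAND/dlkm"   "$STAND/dlkm.prev"
# ...then promote the freshly built kernel and module directory.
mv "$STAND/build/vmunix_test"      "$STAND/vmunix"
mv "$STAND/build/dlkm.vmunix_test" "$STAND/dlkm"

cat "$STAND/vmunix"   # prints: new-kernel
```

Because the old kernel and dlkm directory survive as the .prev copies, you can boot the previous kernel if the new one misbehaves.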

Keep in mind that this procedure for rebuilding the kernel is performed for you when the vPars software is loaded and configured. It is important to know what is taking place in the background, however, in the event that you have to troubleshoot a problem related to your updated kernel. Kernel creation and modification can also be done in SAM, as described in Chapter 11.

Since you will have more than one kernel to maintain on your system when using vPars, you'll want to become familiar with this process so you can customize the kernel for each vPar and ensure that it is optimized for the application(s) running there.

You may want to review the background portion of this chapter that describes rebuilding the kernel if you don't have much experience doing so.


       
HP-UX Virtual Partitions, ISBN 0130352128, 2002, 181 pages.