Chapter 2. Partitioned Servers: Node Partitions


Chapter Syllabus

2.1 A Basic Hardware Guide to nPars

2.2 The Genesis Partition

2.3 Cell Behavior During the Initial Boot of a Partition

2.4 Partition Manager

2.5 Other Boot-Related Tasks

Partitioning is not a new concept in the computing industry. Many vendors have provided some form of partitioning as a software and/or hardware solution for some years. The basic idea of partitioning is to create a configuration of hardware and software components that supports the running of an independent instance of an operating system. HP currently supports two types of partitioning:

  • nPar or Node Partition

  • vPar or Virtual Partition

This chapter deals with Node Partitions and Chapter 3 deals with Virtual Partitions.

An nPar is a collection of electrically independent components that supports the running of a separate instance of the operating system, completely independent of other partitions. The collection of hardware components that supports Node Partitions is collectively known as a server complex. By using software management tools, we can configure the complex to function either as a large, powerful, single server or as a collection of powerful but smaller, independent servers. HP's foray into Node Partitions started in 2000 with the introduction of the first range of Superdome complexes. HP now provides Node Partitions via a range of complexes running either PA-RISC or Itanium 2 processors (for more details on HP's partitioning continuum initiative, see http://www.hp.com/products1/unix/operating/manageability/partitions/index.html). Node-partitionable machines utilize a cell-based hardware architecture to support the electrical independence of components, which in turn allows the complex to support Node Partitions.

The flexibility in configuration makes partitioning a popular configuration tool. Some key benefits of partitioning include:

  • Better system resource utilization

  • Flexible and dynamic resource management

  • Application isolation

  • Server consolidation

In the future, different partitions will be able to run various versions of HP-UX, Windows, and Linux simultaneously with different processors in each partition within the same complex. This offers significant investment protection as well as configuration flexibility and cost savings with respect to server consolidation within the datacenter.

As you can imagine, trying to cover all permutations of configuration in this chapter would take considerable time. Consequently, during our discussions, we use a PA-RISC Superdome (SD32000) complex to demonstrate some of the techniques for creating and managing nPars. The concepts are the same regardless of the complex you are configuring, and many of the components used in Superdome complexes are also used in the other Node Partitionable machines. I use screenshots and photographs from a real-life Superdome system to explain the theory and practice of the concepts discussed. We start by looking at the partition configuration supplied by HP when your complex is delivered. We then discuss whether, why, and how to change that configuration, including scrapping the entire configuration and starting again, a process known as creating the Genesis Partition. We also discuss the day-to-day management tasks involved with partitioned servers. I would suggest having access to your own system configuration while reading through this chapter, as well as access to the excellent HP documentation: the HP Systems Partitions Guide, available at http://docs.hp.com/hpux/onlinedocs/5187-4534/5187-4534.html. Most of the concepts relating to Node Partitions apply to any of the Node Partitionable complexes supplied by HP. Where a specific feature is unique to a certain operating system release or a particular architecture (PA-RISC or Itanium), I highlight it.



HP-UX CSE(c) Official Study Guide and Desk Reference
Year: 2006
Pages: 434
