A1: This list is not exhaustive but offers some ideas:
- My server does not support nPars but does support vPars.
- I want a finer granularity of control over how much hardware is assigned to each partition. With an nPar, I must include an entire cell and IO chassis in the nPar. With a vPar, I can have as few as one CPU, minimal memory, and a "few" PCI cards. On a Node Partitionable server, you could create a single nPar containing all hardware components and then create the necessary vPars from that nPar. While this is possible, the administrator must be aware of the High Availability and High Performance requirements of such a configuration.
- It is necessary to be able to dynamically reconfigure the number of CPUs in a partition. Cell OLA/R is currently not available; hence, true dynamic cell reconfiguration is not possible in an nPar. In a vPar, we can redefine the number of CPUs by adding and removing unbound CPUs without requiring a reboot.

A2: Once the Virtual Partition software is installed, the kernel (/stand/vmunix) is relocatable to an address controlled by vpmon. The original kernel is not relocatable. If your kernel becomes corrupt, your current non-relocatable kernel cannot be used to boot a Virtual Partition. You would have to take other measures to fix a corrupted kernel in a Virtual Partition, e.g., temporarily turning off vPar functionality and manually fixing the corrupt kernel.

A3: The important task here is to reboot vpmon, which requires that all vPars be in a DOWN state. To accomplish this, we will:
- Shut down/halt all existing vPars.
- Use the REBOOT command to reboot vpmon.
- If we have an nPar, reconfigure the nPar to accommodate the new CPUs by adding additional cells (if necessary).
- Halt the server, or PE the affected nPar cells.
- PE the nPar cells, if appropriate.
- Reboot/boot the server/nPar.
- Ensure that all vPars are booted.
- Ensure that we can see the available CPUs from vPar0.
- Reconfigure vPar0 (use vparmodify/vparmgr) to include the additional CPUs.

A4: It is important to understand the IO requirements of each application housed in a separate vPar, as this will influence the proportion of bound and unbound CPUs in each vPar. A bound CPU can process IO interrupts, while an unbound CPU cannot. If an application/vPar performs lots of IO, a higher number of bound CPUs will be required. However, bound CPUs cannot be relocated to a different vPar without a reboot. This limitation can influence the entire partition configuration as well as the flexibility of reassigning CPUs across the configuration.

A5: Virtual Partitions DO offer isolation from Operating System and application software faults because they run independent instances of the software. If a particular "bug" affects a portion of software not utilized by a particular partition, that partition may never experience the problem or come across the bug; e.g., the bug could be in FDDI software, and a particular vPar may not utilize any FDDI cards. It is the independence of each instance of the Operating System that provides the isolation from individual software faults.
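The steps in A3 can be sketched as a command sequence. This is a sketch under assumptions, not a definitive procedure: the vPar names (vpar0, vpar1), the nPar number, the cell ID, and the CPU count are all hypothetical, and the MON> lines are entered at the vpmon console prompt, not a shell:

```shell
# Inside each running vPar (vpar1, etc. -- names hypothetical), halt it:
shutdown -h 0                # repeat per vPar until all vPars are DOWN

# At the vpmon console prompt, reboot the monitor itself:
# MON> reboot

# If the nPar needs another cell to hold the new CPUs (nPar 0 and cell 2
# are hypothetical; exact cell attribute fields may differ by OS release):
parmodify -p 0 -a 2:base:y:ri

# Halt/PE the affected cells, reboot the nPar, then load the first vPar
# from the vpmon console and boot the rest from a running vPar:
# MON> vparload -p vpar0
vparboot -p vpar1

# From vPar0, confirm the new CPUs appear as available resources:
vparstatus -A

# Assign the additional CPUs to vPar0 (a count of 2 is hypothetical):
vparmodify -p vpar0 -a cpu::2
```

Since the added CPUs are unbound, the final vparmodify does not require a reboot of vPar0.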
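The bound/unbound trade-off described in A4 is expressed when a vPar is created or modified: the minimum CPU count is the bound portion, and CPUs above the minimum are unbound and migratable. The vPar names, counts, memory size, and hardware paths below are hypothetical illustrations only:

```shell
# Create a vPar with 4 CPUs total, of which the minimum of 2 are bound
# (able to service IO interrupts); paths/names are made-up examples:
vparcreate -p dbvpar -a cpu::4 -a cpu:::2 -a mem::2048 \
           -a io:0/0 -a io:0/0/2/0.6.0:boot

# Later, migrate one UNBOUND CPU to another vPar with both vPars running;
# no reboot is needed because only unbound CPUs are involved:
vparmodify -p dbvpar  -d cpu::1
vparmodify -p webvpar -a cpu::1
```

An IO-heavy application would justify a higher minimum (more bound CPUs), at the cost of the migration flexibility shown in the last two commands.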