What are the mainframe attributes that make it valuable from a TCO perspective?
6.2.1 Availability factors
To assess the impact of an unavailable system, and hence what it is worth to you to have the system available, consider the sources and frequency of outages.
Sources of outages generally include software errors and hardware errors. The goal of mainframe hardware development is zero downtime: most changes to I/O and most memory upgrades can now be done on the fly, as can some CPU upgrades.
Statistically, outage-causing errors are found most often in the applications. Just as you decide what hardware to use, you decide what applications to use: whether to buy them, build them yourself, or use open source software. Choosing software carefully helps maintain good availability.
6.2.2 Mainframe availability characteristics
Availability involves reducing the duration and frequency of outages.
A mean time between failures (MTBF) of up to 30 years can be attributed to the mainframe's unique combination of availability features (refer to Chapter 2, "Introducing the Mainframe" for details).
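As a back-of-the-envelope illustration of what such an MTBF means, steady-state availability can be estimated as MTBF / (MTBF + MTTR). The mean time to repair (MTTR) figure below is an assumption chosen for illustration, not a vendor number:

```python
# Availability sketch; the 1-hour MTTR is an illustrative assumption.
HOURS_PER_YEAR = 24 * 365

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Fraction of time the system is expected to be up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A 30-year MTBF with an assumed 1-hour mean time to repair:
mtbf = 30 * HOURS_PER_YEAR
a = availability(mtbf, 1.0)
print(f"availability = {a:.7f}")  # prints "availability = 0.9999962"
```

With these assumed numbers, expected downtime works out to roughly two minutes per year, which is why MTBF figures measured in decades translate into "five nines and beyond" availability.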
Relevant chapters in book: Chapter 11, "Achieving Higher Availability" and Chapter 13, "Availability Management."
6.2.3 Partitioning and virtualization
The value of Linux on the mainframe is in large part due to the possibility of hosting multiple servers on a single hardware machine. On the mainframe, there are two methods for hosting multiple operating systems: logical partitioning (LPAR) and virtualization using z/VM. The mainframe offers the most mature and dynamic partitioning and virtualization technology in the industry.
The zSeries LPAR implementation is unique in comparison to the partitioning implementations available from other hardware vendors. LPAR exploits the PR/SM microcode, which provides flexibility superior to that of a static hardware partitioning solution. Each logical partition has its own allocation of memory, either dedicated or shared processors, and either dedicated or shared channels for I/O operations.
z/VM presents a unique virtualization approach. It provides each end user with an individual working environment known as a virtual machine. The virtual machine simulates the existence of a dedicated real machine, including processor functions, storage, and I/O resources. For example, you can run multiple z/OS and Linux on zSeries images on the same z/VM system that is supporting z/VM applications and end users. As a result, application development, testing, and production environments can share a single physical computer.
Relevant chapter in book: Chapter 7, "The Value of Virtualization."
6.2.4 High utilization rates
The average server farm tends to have low CPU utilization. If you have lightly utilized Linux (or UNIX) servers, you can run them all on one mainframe, which then runs at a higher utilization. Running the Linux guests under z/VM lets you host even more such virtual servers. With z/VM, you can also manage the priority of the guests and give more resources to important guests during a peak situation.
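The priority mechanism can be pictured as proportional sharing. The following is a toy model of that idea, not z/VM's actual scheduler; the guest names and share values are made up for illustration:

```python
# Toy proportional-share model (an illustrative sketch, not z/VM's scheduler):
# when the machine is saturated, each guest receives CPU in proportion
# to its share relative to the total of all shares.

def allocate(shares: dict[str, int], total_cpu: float) -> dict[str, float]:
    """Divide total_cpu among guests in proportion to their shares."""
    total_shares = sum(shares.values())
    return {guest: total_cpu * s / total_shares for guest, s in shares.items()}

# A hypothetical important production guest with a larger share than two test guests,
# competing for 4 processors' worth of CPU:
print(allocate({"prod": 300, "test1": 100, "test2": 100}, total_cpu=4.0))
# prints {'prod': 2.4, 'test1': 0.8, 'test2': 0.8}
```

The point of the sketch is that unused share flows to whoever needs it; the important guest is guaranteed its proportion only when there is contention.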
The CPU has historically been considered the "expensive" part of a (mainframe) computer, and the part to utilize fully. The rest of the system should feed the CPU work so that it can run at nearly 100%. It takes clever design to build a fast CPU and a supporting I/O and memory structure that allow "normal" work to drive it to 100% capacity. When we talk about balance, we mean that memory and I/O keep pace with the CPU.
Thus, in utilizing a mainframe, the question is not so much how high the load can be (it can even be 100%), but whether there is enough workload to keep the machine busy.
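The consolidation arithmetic behind this argument can be sketched as follows; all the numbers are assumptions chosen for illustration, not measurements:

```python
# Hypothetical consolidation sketch (all numbers are illustrative assumptions):
# many lightly utilized servers can fit on one machine driven at high
# utilization, provided their peaks do not all coincide.

n_servers = 40       # distributed Linux/UNIX servers being consolidated
avg_util = 0.10      # assumed average CPU utilization per server (10%)
target_util = 0.90   # utilization we are willing to drive the consolidated box to

# Total average demand, expressed in "server equivalents" of capacity:
total_demand = n_servers * avg_util

# Capacity needed on the consolidated machine to absorb that demand at target_util:
capacity_needed = total_demand / target_util

print(f"average demand: {total_demand:.1f} server equivalents")      # prints 4.0
print(f"capacity needed at {target_util:.0%} utilization: {capacity_needed:.1f}")  # prints 4.4
```

Under these assumptions, forty 10%-utilized servers present the average demand of only about four fully busy ones, which is why the relevant question becomes whether there is enough workload, not whether the load is too high.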
Relevant chapters in book: Chapter 7, "The Value of Virtualization" and Chapter 15, "Performance and Capacity Planning."