Overview of Key Components

You've looked at several of the components covered in this section in some detail already, but not as part of a heterogeneous system in the context of an overall platform architecture.

A WebSphere implementation, from the physical view at the highest level, can be broken into four main components:

  • Web server or Web tier

  • Application server or application/business tier

  • Database server or data tier

  • Network infrastructure

These four parts are the highest level view of a single-site WebSphere implementation. From those four key components, you're able to extend and expand your design both horizontally and vertically to achieve the exact mix of scalability, reliability, and performance that you need. Let's consider a brief example.

Suppose your application server was operating at greater than 70 percent utilization and customers were starting to notice performance degradation. You could scale your application or business tier either vertically (i.e., add more CPUs, memory, disk, network interfaces, etc.) or horizontally (i.e., add more servers). As you'll see in detail in this chapter, both approaches have pros and cons, and although the choice for resolving your performance issues may appear simple, the answer isn't always straightforward.

Not all applications that run under WebSphere can simply be scaled either way. With the advent of technologies such as blade servers, it's a common but incorrect assumption that scaling your application environment can always be achieved simply by throwing more servers, or blades, into your WebSphere or J2EE application server farm.

Yes, this works for smaller applications, which may consist mainly of JSPs, servlets, and static HTML content; serving that kind of content and processing that kind of load places only a minimal burden on each server. If you introduce heavy workload components such as CPU- or memory-bound application modules, or if your application consists of many parts (e.g., many EJB-JARs, WARs [Web Archives], etc.), then you'll quickly run out of memory and CPU cycles on lower-end systems. For this reason, larger systems are needed, and vertical scaling becomes a necessity rather than a choice.

In the following sections, you'll look at vertical and horizontal scaling in a little more detail, especially in the context of WebSphere.

Horizontal Scaling with WebSphere

Horizontal scaling of a system or a system's components essentially means that you're looking to increase, in some way, the capacity of your environment by adding additional servers. For example, you may have an x86-based platform with a memory limit of 4GB, and your application's users may need more memory for session-based data. To extend beyond the 4GB limit and still allow your customers to use the application, you would add an additional node with 4GB of memory. The total session memory available to your application is then 8GB. Although a single process still can't extend beyond 4GB of physical memory, this approach allows you to cater for additional growth.
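To put some illustrative numbers behind this, the following small Java sketch (not part of the WebSphere product; the average session footprint is an assumed figure) shows how adding a second node doubles the aggregate session capacity even though each individual JVM stays within its own limit:

public class SessionCapacity {
    public static void main(String[] args) {
        // Illustrative capacity arithmetic only; the session footprint is an assumed figure.
        long heapPerNodeMb = 4L * 1024;   // 4GB of memory per node, as in the example above
        long sessionSizeKb = 50;          // assumed average HTTP session footprint
        int nodes = 2;                    // the original node plus the added node

        long sessionsPerNode = (heapPerNodeMb * 1024) / sessionSizeKb;
        System.out.println("Sessions per node:    " + sessionsPerNode);
        System.out.println("Sessions across both: " + sessionsPerNode * nodes);
    }
}

In practice a sizable slice of each heap goes to WebSphere and the application itself, so the usable session capacity per node is lower than this, but the proportional gain from adding nodes still holds.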

Figure 5-1 illustrates an example of horizontal scaling.

Figure 5-1: Horizontal scaling example with WebSphere

As you can tell from Figure 5-1, horizontal scaling is a relatively straightforward approach to extending a WebSphere environment, at least for a basic or noncomplex J2EE application running within WebSphere. However, once you need to ensure that users' data is maintained across all nodes, across varying levels of application code, and across even more varied levels of legacy and data-tier integration, horizontal scaling can become a nightmare if it isn't properly designed.

You'll explore the concepts of cells, server groups, and clones in more detail later in this chapter, but for now you should know that these options provide the ability to drop in additional servers and further distribute the WebSphere load. The same "drop in" approach can be used for vertical scaling, and although vertical scaling is similar to horizontal scaling in terms of basic environment implementation, it does introduce several additional headaches for your application and system architects.

Vertical Scaling with WebSphere

Vertical scaling involves scaling "upward" within your existing servers. That is, you add in, or upgrade, additional processing power, memory, disks, and so on within existing servers rather than purchase additional servers (which is what is involved in horizontal scaling). This form of platform scaling is best suited to large applications that require centralized, workhorse-type servers to be able to process large numbers of requests.

Vertical scaling is also the optimal choice for sites that are short on support resources. Generally speaking, the more servers you have, the more support processes and resources you need in place to adequately manage a horizontally scaled environment. More infrastructure can also cost more from a facilities management point of view: more rack space, more power requirements, more networking infrastructure, and so on. Therefore, the decision to vertically scale or horizontally scale may be based on factors other than application architecture requirements.

You'll need to consider vertical scaling in the way you configure your WebSphere server and, depending on your application architecture, in the way you design your application. What this means is that it's all very well to purchase a server with plenty of memory, but you need to specifically tune and configure your WebSphere platform to take advantage of that memory. You need to consider factors such as maximum JVM heap space, JVM thread to kernel thread ratios, and so on. Most 32-bit JVMs have a heap limit of roughly 2GB. Therefore, unless you design the rest of your WebSphere application environment to load balance across more than one servicing J2EE application server (each of which is effectively a JVM), you won't be able to take advantage of large amounts of memory. (You'll look at these types of limitations and rules of thumb in more detail throughout the rest of this chapter.)

The same considerations apply to CPUs. There's no benefit to having a system with ten CPUs if, due to some form of application design limitation, you can operate only a single JVM instance of your application, effectively wasting nine CPUs!
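As a quick sanity check against both of these limits, here's a minimal sketch using only standard Java APIs (nothing WebSphere-specific) that reports how much heap and how many processors a single JVM can actually see; it's a useful starting point before deciding how many application server JVMs to run on one machine:

public class JvmCapacityCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();

        // maxMemory() reflects the configured maximum heap (e.g., the -Xmx setting);
        // on most 32-bit JVMs this can't usefully exceed roughly 2GB, no matter how
        // much physical memory the server has.
        long maxHeapMb = rt.maxMemory() / (1024 * 1024);

        // availableProcessors() shows how many CPUs this one JVM can spread work across.
        int cpus = rt.availableProcessors();

        System.out.println("Maximum heap for this JVM: " + maxHeapMb + " MB");
        System.out.println("CPUs visible to this JVM:  " + cpus);
    }
}

If the reported heap is far smaller than the memory in the box, or your application can't keep all of the reported CPUs busy from one process, that's a sign the extra capacity will sit idle until you run multiple load-balanced JVMs on the server.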

One downside to vertical scaling for the majority of WebSphere implementations is that, with a less distributed environment (i.e., a less horizontally scaled environment), you may be exposed to the risks associated with a less redundant environment. Having fewer physical servers for complex and mission-critical environments can lead to downtime risks in the event of a system or site failure. For this reason, vertical scaling alone isn't a good choice for critical or customer-facing systems. Even more so in today's global climate, disaster recovery and geographically split site configurations are essential for mission-critical systems. For these types of systems to work, you must employ horizontal scaling alongside vertical scaling.

Combined Horizontal and Vertical Scaling with WebSphere

At the end of the day, the best design uses a mixture of horizontal and vertical scaling. This satisfies availability requirements as well as scalability and performance requirements for application environments:

  • Multiple servers support redundancy and availability, and aid in scalability.

  • The servers can be upgraded or vertically scaled to support growing application-processing demands.

A mixture of horizontal and vertical scaling can also provide the best of both worlds from an operational and facilities cost point of view, with fewer, higher-capacity servers keeping the "water and feed" costs down for the majority of application environments and data center housing.

Figure 5-2 highlights a basic vertically and horizontally scaled application environment.

Figure 5-2: Example of a horizontally and vertically scaled environment

Figure 5-2 illustrates how customer traffic and requests are distributed to a two-node WebSphere application server cluster, which uses multiple application-servicing JVMs to provide performance, availability, and redundancy. This type of configuration could be labeled the basic production-ready WebSphere platform architecture. My recommendation is to treat it as the basic building block for any WebSphere environment: anything less in a production environment leaves the door open to myriad problems.

That said, if you're looking to roll out a smaller WebSphere implementation, you should conduct a return on investment (ROI) analysis of the costs associated with the additional infrastructure. In Chapter 2 you looked at some examples of ROI analysis. When lined up against the costs associated with actually having downtime, the ROI model may suggest that it isn't cost-effective to employ additional infrastructure for redundancy when the platform requirements are small.

I know systems managers who are responsible for small WebSphere production environments running on x86 clone machines. Their train of thought is that, if the server dies, all they need to do is run down to the local PC shop, pick up the replacement part(s), install them, and then restore the data from the last backup. The cost of that replacement part may be only a few hundred dollars, and given the generally low complexity of these types of systems, installing the part and bringing the system up again may take only 30 to 90 minutes.

For this reason, having additional servers to cater for redundancy may not be an issue for some sites.
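To tie the ROI argument and the "run down to the PC shop" scenario together, here's a deliberately simple back-of-the-envelope sketch; every figure in it is an assumed, illustrative number rather than a recommendation:

public class RedundancyRoi {
    public static void main(String[] args) {
        // All figures below are assumed and purely illustrative.
        double redundantNodeCostPerYear = 15_000.0; // assumed hardware, licensing, and support
        double expectedOutagesPerYear = 2.0;        // assumed failure rate
        double hoursPerOutage = 1.5;                // e.g., the 30- to 90-minute repair window
        double businessCostPerHourDown = 4_000.0;   // assumed cost of downtime to the business

        double expectedDowntimeCost =
                expectedOutagesPerYear * hoursPerOutage * businessCostPerHourDown;

        System.out.printf("Expected downtime cost per year: $%,.0f%n", expectedDowntimeCost);
        System.out.printf("Redundant node cost per year:    $%,.0f%n", redundantNodeCostPerYear);
        System.out.println(expectedDowntimeCost > redundantNodeCostPerYear
                ? "The redundant node pays for itself."
                : "Accepting the downtime risk is the cheaper option.");
    }
}

With these small-site numbers, expected downtime costs about $12,000 a year against $15,000 for the extra node, which is exactly the kind of result that can make accepting the risk the rational choice for a small platform.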



