Logical Network Architecture


The logical network design is composed of segregated networks that are implemented physically using virtual local area networks (VLANs) defined by network switches. The internal network uses private IP address space (10.0.0.0) for security and portability advantages. FIGURE 7-1 shows a high-level overview of the logical network architecture.

Figure 7-1. Logical Network Architecture Overview


The management network provides centralized data collection and management of all devices. Each device has a separate interface to the management network to avoid contaminating the production network in terms of security and performance. The management network is also used for automating the installation of the software using Solaris JumpStart™ technology.

Although several networks physically reside on a single active core switch, network traffic is segregated and secured using static routes, ACLs, and VLANs. From a practical perspective, this is as secure as separate individual switches.

IP Services

The following subsections provide a description of some emerging IP services that are often an important component in a complete network design for a Sun ONE deployment. The IP services are divided into two categories:

  • Stateful Session Based: This class of IP services requires that the switch maintain session state information so that a particular client's session state is maintained across all packets. This requirement has severe implications for highly available solutions and limits scalability and performance.

  • Stateless Session Based: This class of IP services does not require that the switch maintain any state information associated with a particular flow.

Many functions can be implemented either by network switches and appliances or by the Sun ONE software stack. This section describes how these new IP services work and the benefit they provide. It then discusses availability strategies. Later sections describe similar functions that are included in the Sun ONE integrated stack.

Modern multilayer network switches perform many Layer 3 IP services in addition to vanilla routing. These services are implemented as functions that operate on a packet by modifying the packet headers and controlling the rate at which the packet is forwarded. IP services include functions such as QoS, server load balancing, application redirection, network address translation, and others. This section starts our discussion with an important service for data centers: server load balancing. It then describes adjacent services that can be cascaded.

Stateless Server Load Balancing

The server load balancing (SLB) function maps incoming client requests destined to a virtual IP (VIP) address and port to a real server IP address and port. The target server is selected from a set of identically configured servers based on a predefined algorithm that considers the loads on the servers as criteria for choosing the best server at any instant in time. The purpose of SLB is to provide one layer of indirection to decouple servers from the network service that clients interface with. Thus, the server load balancer can choose the best server to service a client request. Decoupling increases availability because if some servers fail, the service is still available from the remaining functioning servers. Flexibility is increased because servers can be added or removed without impacting the service. Other redirection functions can be cascaded to provide compound functionality.
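As a concrete illustration, the VIP-to-real-server mapping can be sketched in a few lines of Python. This is an illustrative model only: the addresses, the packet-as-dictionary representation, and the least-connections policy are invented assumptions, not any particular switch's implementation.

```python
class LoadBalancer:
    def __init__(self, vip, servers):
        self.vip = vip                         # (ip, port) clients connect to
        self.active = {s: 0 for s in servers}  # real server -> open sessions

    def pick_server(self):
        # Least-connections policy: choose the real server currently
        # handling the fewest sessions.
        return min(self.active, key=self.active.get)

    def forward(self, packet):
        # Rewrite the destination from the VIP to the chosen real server;
        # the source is left untouched so the reply reaches the client.
        if (packet["dst_ip"], packet["dst_port"]) != self.vip:
            return packet                      # not VIP traffic; route normally
        server = self.pick_server()
        self.active[server] += 1
        ip, port = server
        return dict(packet, dst_ip=ip, dst_port=port)

lb = LoadBalancer(("a.b.c.d", 123),
                  [("10.10.0.11", 8080), ("10.10.0.12", 8080)])
pkt = {"src_ip": "1.2.3.4", "src_port": 40000,
       "dst_ip": "a.b.c.d", "dst_port": 123}
out = lb.forward(pkt)
```

Because the session counters feed back into the choice, successive new sessions spread across the pool while the service remains reachable at the single VIP.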

SLB mapping functions differ from other mapping functions such as redirection, which makes mapping decisions based on criteria such as ensuring that a particular client is redirected to the same server to take advantage of caches or cookie persistence. FIGURE 7-2 shows an overview of the various mapping functions and how the IP header is rewritten by various functions.

Figure 7-2. IP Services Switch Functions Operate on Incoming Packets


FIGURE 7-2 shows that a typical client request is destined for an external VIP with IP address a.b.c.d and port 123. Various functions, as shown, can intercept this request and rewrite it according to the provisioned configuration rules. The SLB function eventually intercepts the packet and rewrites the destination IP address to that of the real server chosen by the configured algorithm. The reply is then returned to the client indicated by the source IP address.

Stateless Layer 7 Switching

Stateless Layer 7 switching, which is also called the application redirection function, intercepts a client's HTTP request and redirects the request to another destination, usually a group of cache servers. Application redirection rewrites the IP destination field. This is different from proxy switching, where the socket connection is terminated and a new one is created to the server to fetch the requested Web page.
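A minimal sketch of this rewrite, assuming invented cache addresses and a packet modeled as a dictionary: only the destination field changes, so the cache can answer the client directly.

```python
# Illustrative cache farm; addresses are invented for this sketch.
CACHE_SERVERS = ["10.60.0.21", "10.60.0.22"]

def redirect(packet):
    # Redirect only HTTP (port 80) traffic; hash the client address so the
    # same client keeps landing on the same cache (simple cache affinity).
    if packet["dst_port"] != 80:
        return packet
    idx = hash(packet["src_ip"]) % len(CACHE_SERVERS)
    # Only the destination is rewritten; the source is untouched,
    # so the cache replies straight to the client.
    return dict(packet, dst_ip=CACHE_SERVERS[idx])

pkt = {"src_ip": "198.51.100.7", "src_port": 51000,
       "dst_ip": "a.b.c.d", "dst_port": 80}
out = redirect(pkt)
```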

Application redirection serves the following purposes:

  • Reduces the load on one set of Web servers and redirects it to another set, which is usually cache servers for specific content

  • Intercepts client requests and redirects them to another destination to control certain types of traffic, based on filter criteria

FIGURE 7-3 illustrates the functional model of application redirection, which only rewrites the IP header.

Figure 7-3. Application Redirection Functional Model


Stateful Layer 7 Switching

Stateful Layer 7 switching, which is also called content switching, proxy switching, or URL switching, accepts a client's incoming HTTP request, terminates the socket connection, and creates another socket connection to the target Web server, which is chosen based on a user-defined rule. The difference between this and application redirection is the maintenance of state information. In application redirection, the packet is rewritten and continues on its way. In content switching, state information is required to keep track of client requests and server responses and to make sure they are tied together. The content switching function fetches the requested Web page and returns it to the client.
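The user-defined rules can be pictured as an ordered prefix table; the following Python sketch (rule patterns and group names are invented for illustration) shows how a URL path selects a server group:

```python
# Ordered rule list, as a switch would evaluate it: first match wins.
RULES = [
    ("/images/", "gif-servers"),      # static image content
    ("/app/",    "dynamic-servers"),  # dynamically generated content
]
DEFAULT_GROUP = "web-servers"

def choose_group(url_path):
    # Each group name would map to a VIP fronting that set of servers;
    # the proxy opens a new socket toward the chosen group.
    for prefix, group in RULES:
        if url_path.startswith(prefix):
            return group
    return DEFAULT_GROUP

group = choose_group("/images/logo.gif")
```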

FIGURE 7-4 shows an overview of the functional content switching model.

Figure 7-4. Content Switching Functional Model


Content switching with full NAT serves the following purposes:

  • Isolates internal IP addresses from being exposed to the public Internet.

  • Allows reuse of a single IP address. For example, clients can send their Web requests to www.a.com or www.b.com, where DNS maps both domains to a single IP address. The proxy switch receives this request with the packet containing an HTTP header in the payload that identifies the target domain, for example, a.com or b.com, and decides to which group of servers to redirect the request.

  • Allows parallel fetching of different parts of Web pages from servers optimized and tuned for that type of data. For example, a complex Web page might need GIFs, dynamic content, or cached content. With content switching, one set of Web servers can hold the GIFs and another can hold the dynamic content. The proxy switch can make parallel fetches and retrieve the entire page at a faster rate than would be possible otherwise.

  • Ensures that requests with cookies or SSL session IDs are redirected to the same server to take advantage of persistence.

FIGURE 7-4 shows that the client's socket connection is terminated by the proxy function. The proxy retrieves as much of the URL as it needs to make a routing decision. FIGURE 7-4 shows various URLs mapped to various server groups, which are VIP addresses. The next step is to forward the URL directly or pass it off to the SLB function that is waiting for traffic destined to the server group.

The proxy is configured with a VIP, so the switch forwards all client requests destined to this VIP to the proxy function. The proxy function rewrites the IP header, particularly the source IP and port, so that the server sends back the requested data to the proxy, not the client directly.

Stateful Network Address Translation

Network Address Translation (NAT) is a critical component for security and proper traffic direction. There are two basic types of NAT: half and full. Half NAT rewrites the destination IP and MAC addresses to a redirected location, such as a Web cache, which returns the packet directly to the client because the source IP address is unchanged. With full NAT, the socket connection is terminated by a proxy, so the source IP and MAC addresses are changed to those of the proxy server.

NAT serves the following purposes:

  • Security: Prevents exposing internal private IP addresses to the public.

  • IP Address Conservation: Requires only one valid exposed IP address to fetch Internet traffic on behalf of internal networks that use non-valid (private) IP addresses.

  • Redirection: Intercepts traffic destined to one set of servers and redirects it to another by rewriting the destination IP and MAC addresses. With half NAT, the redirected servers can send the response directly back to the clients because the original source IP address has not been rewritten.

NAT is configured with a set of filters, usually a 5-tuple Layer 3 rule. If the incoming traffic matches a certain filter rule, the packet IP header is rewritten or another socket connection is initiated to the target server, which itself can be changed, depending on the rule.
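A rough model of such 5-tuple filter matching follows; the wildcard syntax, rule set, and addresses are invented assumptions, not a particular vendor's configuration language.

```python
def matches(rule, flow):
    # A field matches if the rule holds a wildcard ("*") or the exact value.
    keys = ("proto", "src_ip", "src_port", "dst_ip", "dst_port")
    return all(rule[k] in ("*", flow[k]) for k in keys)

def apply_nat(rules, flow):
    # First matching rule rewrites the destination (half NAT); flows that
    # match no rule are forwarded unchanged.
    for rule in rules:
        if matches(rule, flow):
            return dict(flow, dst_ip=rule["new_dst"])
    return flow

rules = [{"proto": "tcp", "src_ip": "*", "src_port": "*",
          "dst_ip": "a.b.c.d", "dst_port": 80, "new_dst": "10.10.0.11"}]
flow = {"proto": "tcp", "src_ip": "198.51.100.7", "src_port": 51000,
        "dst_ip": "a.b.c.d", "dst_port": 80}
out = apply_nat(rules, flow)
```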

Stateful Secure Sockets Layer Session ID Persistence

Secure Sockets Layer (SSL) can be implemented in software, hardware, or both. SSL can be terminated at the target server, an intermediate server, an SSL network appliance, or an SSL-capable network switch. An SSL appliance, such as those from NetScaler or Array Networks, tends to be built on a PC board with a PCI-based card that contains the SSL accelerator ASIC. Hence, SSL acceleration is implemented in libraries that offload only the mathematical computations; the rest of the SSL processing is implemented in software, with selective functions being directed to the hardware accelerator. One immediate limitation of this approach is the PCI bus. Newer SSL devices integrate an SSL accelerator in the datapath of the network switch; these advanced products are just emerging from startups such as Wincom Systems. This section discusses the switch and appliance interactions. A later section covers the server SSL implementation.

FIGURE 7-5 shows that once a client makes initial contact with a particular server, which may have been selected by SLB, the switch ensures that subsequent requests are forwarded to the same SSL server based on the SSL session ID that the switch stored during the initial SSL handshake. The switch keeps state information about the client's initial request, based on HTTPS and port 443, which contains a client hello message. This first request is then forwarded to the server selected by the SLB algorithm or by another function. The server responds to the client's hello message with an SSL session ID. The switch intercepts this session ID and stores it in a table. The switch forwards all of the client's subsequent requests to the same server as long as each request contains the SSL session ID in the HTTP header. FIGURE 7-5 shows that several different TCP socket connections may span the same SSL session. State is maintained by the SSL session ID in each HTTP request sent by the same client.
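The persistence table the switch maintains can be modeled roughly as follows. The sketch is deliberately simplified: the session ID is assumed known at forwarding time, round-robin stands in for the SLB algorithm, and the server names are invented.

```python
import itertools

class SSLPersistence:
    def __init__(self, servers):
        self.slb = itertools.cycle(servers)  # stand-in for the SLB choice
        self.table = {}                      # SSL session ID -> server

    def forward(self, session_id):
        # A known session ID sticks to the server recorded in the table;
        # a new one is load balanced and then remembered.
        if session_id in self.table:
            return self.table[session_id]
        server = next(self.slb)
        self.table[session_id] = server
        return server

switch = SSLPersistence(["10.10.0.11", "10.10.0.12"])
first = switch.forward("abc123")   # initial handshake, SLB picks a server
again = switch.forward("abc123")   # new TCP connection, same SSL session
```

Even though the second request arrives on a fresh TCP connection, the shared session ID pins it to the server that holds the SSL state.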

Figure 7-5. Network Switch with Persistence Based on SSL Session ID


An appliance can be added for increased performance in terms of SSL handshakes and bulk encryption throughput. FIGURE 7-6 illustrates how an SSL appliance might be deployed. Client requests first come in on a specific URL with the HTTPS protocol on port 443. The switch recognizes that these requests must be directed to the appliance, which is configured to provide that SSL service. A typical appliance such as a NetScaler can also be configured, in addition to SSL acceleration, to provide content switching and load balancing. The appliance then reads or inserts cookies and resubmits the HTTP request to an appropriate server, which can maintain state based on the cookie in the HTTP header.

Figure 7-6. Tested SSL Accelerator Configuration: RSA Handshake and Bulk Encryption


Figure 7-7. Network Availability Strategies

Stateful Cookie Persistence

The HTTP 1.0 protocol was originally designed to deliver static pages in one transaction. As more complex Web sites evolved, requiring multiple HTTP requests to access the same server, performance was severely limited by the repeated closing and opening of TCP socket connections. This was solved by HTTP 1.1, which allows persistent connections: immediately after a socket connection is established, the client can pipeline multiple requests. However, applications such as the shopping cart required persistence across multiple HTTP 1.1 requests, and proxies and load balancers further complicated matters by interfering with traffic being redirected to the same Web server. Another mechanism was therefore required to maintain state across multiple HTTP 1.1 requests. The solution was the introduction of two new HTTP headers, Set-Cookie and Cookie, defined in RFC 2109. These headers carry the state information between the client and server. Typically, most load-balancing switches have enough intelligence to ensure that a particular client's session with a particular server is maintained based on the cookie inserted by the server and stored by the client.
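A simplified sketch of cookie-based server pinning follows; the cookie name SERVERID and the server mapping are invented for illustration, not a standard convention.

```python
# Hypothetical mapping from server-inserted cookie values to real servers.
SERVER_BY_COOKIE = {"s1": "10.10.0.11", "s2": "10.10.0.12"}
DEFAULT_SERVER = "10.10.0.11"

def route(headers):
    # Scan the Cookie header (RFC 2109 syntax) for the SERVERID cookie and
    # pin the client to that server; otherwise fall back to the default.
    cookie = headers.get("Cookie", "")
    for part in cookie.split(";"):
        name, _, value = part.strip().partition("=")
        if name == "SERVERID" and value in SERVER_BY_COOKIE:
            return SERVER_BY_COOKIE[value]
    return DEFAULT_SERVER

server = route({"Cookie": "lang=en; SERVERID=s2"})
```

On the first response the chosen server would emit Set-Cookie: SERVERID=s2; every later request from that client then carries the cookie and lands on the same server.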

Design Considerations: Availability

FIGURE 7-7 shows a cross section of the tier types and functions that are performed at each tier. Also shown are the availability strategies for the Network and Web tier. External tier availability strategies are outside the scope of this book. We will limit our discussion to the services tiers, which include Web, Application Services, Naming, and so on.

Designing network architectures for optimal availability requires maximizing two orthogonal components:

  • Intra Availability: Refers to maximizing the availability function of the components themselves. Only failures caused by the components are considered in this estimate.
  • Inter Availability: Refers to minimizing the impact of failures caused by factors external to the system, such as single points of failure (SPOFs), power outages, or a technician accidentally pulling out a cable.

It is not sufficient to simply maximize the FAvailability function. The SPOF and environmental factors also must be considered. The networks designed in this chapter describe a highly available architecture that conforms to these design principles and is described in further detail later.
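As a worked example of the intra-availability calculation, per-component availability is commonly estimated as MTBF / (MTBF + MTTR), composed serially when every component is required and in parallel when redundant components back each other up. The MTBF and MTTR figures below are invented for illustration:

```python
def availability(mtbf_hours, mttr_hours):
    # Steady-state availability of one component.
    return mtbf_hours / (mtbf_hours + mttr_hours)

def serial(*avail):
    # Every component must be up: availabilities multiply.
    p = 1.0
    for a in avail:
        p *= a
    return p

def parallel(*avail):
    # Service fails only if all redundant components fail.
    q = 1.0
    for a in avail:
        q *= (1.0 - a)
    return 1.0 - q

switch = availability(100_000, 4)   # one core switch: hypothetical figures
chain  = serial(switch, switch)     # two switches, both required
pair   = parallel(switch, switch)   # redundant switch pair
```

The numbers illustrate the design principle: chaining components lowers availability, while the redundant pair pushes it well above a single unit.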

FIGURE 7-8 is repeated here to simplify a detailed discussion. The diagram shows an overview of the logical network architecture, showing how the tiers map to the different networks, which are also mapped to segregated VLANs. This segregation allows inter-tier traffic to be controlled by filters on the switch or a firewall, which is the only bridge point between VLANs. The following describes each subnetwork:

  • External network: The external-facing network that directly connects to the Internet. All IP addresses must be registered, and the network should be secured with a firewall.

    The following networks are assigned non-routable IP addresses based on RFC 1918, which reserves these address blocks:

    10.0.0.0 - 10.255.255.255 (10/8 prefix)

    172.16.0.0 - 172.31.255.255 (172.16/12 prefix)

    192.168.0.0 - 192.168.255.255 (192.168/16 prefix)

  • Web services network: A dedicated network that contains Web servers. Typical configurations include a load-balancing switch, which can be configured to allow the Web server to return the client's HTTP response directly or to require the load-balancing device to return the response on behalf of the Web server.

  • Naming services network: A dedicated network consisting of servers that provide LDAP, DNS, NIS, and other naming services. The services are for internal use only and should be highly secure; internal infrastructure support services must ensure that requests both originate from and are destined to internal servers. Most requests tend to be read intensive, so caching strategies can increase performance.

  • Management network: A dedicated service network that provides management and configuration of all servers, including JumpStart installation of new systems.

  • Backup network: A dedicated service network that provides backup and restore operations, pivotal to minimizing disturbances to other production service networks during backup and other network bandwidth-intensive operations.

  • Device network: A dedicated network that attaches IP storage and other devices.

  • Application services network: A dedicated network that typically consists of large multi-CPU servers hosting multiple instances of the Sun ONE Application Server software image. These requests tend to require relatively little network bandwidth but may span multiple protocols, including HTTP, CORBA, proprietary TCP, and UDP. The network traffic can become significant when Sun ONE Application Server clustering is enabled: every update to a stateful session bean triggers a multicast update to all servers on this dedicated network so that participating cluster nodes update the appropriate stateful session bean. Network utilization increases in direct proportion to the intensity of session bean updates.

  • Database network: A dedicated network that typically consists of one or two multi-CPU database servers. The network traffic typically consists of Java DataBase Connectivity™ (JDBC) traffic between the application server or Web server and the database.
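Whether a given address falls in the RFC 1918 private space listed earlier can be checked with Python's standard ipaddress module; the sample addresses below are illustrative.

```python
import ipaddress

# The three private blocks reserved by RFC 1918.
RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_private(addr):
    # True if the address falls inside any RFC 1918 block.
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

internal = is_private("10.10.0.11")    # a Web services network address
public   = is_private("198.51.100.7")  # a routable Internet address
```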

Figure 7-8. Logical Network Architecture Design Details


Collapsed Layer 2/Layer 3 Network Design

Each service is deployed in a dedicated Class C network where the first three octets represent the network number. The design represents an innovative approach where separate Layer 2 devices are not required because the functionality is collapsed into the core switch. Decreasing the management and configuration of separate devices while maintaining the same functionality is a major step toward cutting costs and increasing reliability.

FIGURE 7-9 shows how a traditional configuration requires two Layer 2 switches. A specific VLAN spans the six segments that give each interface access to the VLAN on failover.

Figure 7-9. Traditional Availability Network Design Using Separate Layer 2 Switches


The design shown in FIGURE 7-10 results in the same network functionality, but eliminates the need for two Layer 2 devices. This is accomplished using a tagged VLAN interconnect between the two core switches. Collapsing the Layer 2 functionality reduces the number of network devices, providing fewer units that might fail, lower cost, and simpler management.

Figure 7-10. Availability Network Design Using Large Chassis-Based Switches


Multi-Tier Data Center Logical Design

The logical network design for the multi-tier data center (FIGURE 7-11) incorporates server redundant network interfaces and integrated VRRP and IPMP. See "Integrated VRRP and IPMP" on page 280 for more information.

Figure 7-11. Logical Network Architecture with Virtual Routers, VLANs, and Networks


TABLE 7-1 summarizes the eight separate networks and associated VLANs.

Table 7-1. Network and VLAN Design

Name     Network       Default Router   VLAN     Purpose
client   172.16.0.0    172.16.0.1       client   Client load generation
edge     192.16.0.0    192.16.0.1       edge     Connects client network to the data center
web      10.10.0.0     10.10.0.1        web      Web services
ds       10.20.0.0     10.20.0.1        ds       Directory services
db       10.30.0.0     10.30.0.1        db       Database services
app      10.40.0.0     10.40.0.1        app      Application services
dns      10.50.0.0     10.50.0.1        dns      DNS services
mgt      10.100.0.0    10.100.0.1       mgt      Management and administration


The edge network connects to the internal network in a redundant manner. One of the core switches has ownership of the 192.16.0.2 IP address, which means that switch is the master and the other is in slave mode. When a switch is in slave mode, it does not respond to any traffic, including ARPs. The master also assumes ownership of the MAC address that floats along with the virtual IP address 192.16.0.2.

Note

If you have multiple NICs, make sure each NIC uses its unique MAC address.


Each switch is configured with the identical networks and associated VLANs, as shown in TABLE 7-1. An interconnect between the switches extends each VLAN but is tagged to allow traffic from multiple VLANs to share a physical link (this requires a network interface, such as the Sun ge, that supports tagged VLANs). The Sun servers connect to both switches in the appropriate slots, where only one of the two interfaces is active at a time.
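The tagging on the interconnect can be illustrated with a simplified 802.1Q encoding. This sketch shows the tag standing alone rather than spliced into a full Ethernet header, and leaves the priority bits zero:

```python
import struct

TPID = 0x8100  # 802.1Q tag protocol identifier

def add_tag(vlan_id, payload):
    # Valid 12-bit VLAN IDs; 0 and 4095 are reserved.
    assert 0 < vlan_id < 4095
    # Tag control information with priority bits zero, then the payload.
    return struct.pack("!HH", TPID, vlan_id) + payload

def read_tag(frame):
    tpid, tci = struct.unpack("!HH", frame[:4])
    if tpid != TPID:
        return None, frame              # untagged frame
    return tci & 0x0FFF, frame[4:]      # low 12 bits carry the VLAN ID

vlan, data = read_tag(add_tag(10, b"web traffic"))
```

The receiving switch reads the VLAN ID from each frame and delivers it to the matching VLAN, which is what lets one physical link carry all eight networks.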

Although most switches support Routing Information Protocol (RIP and RIPv2), Open Shortest Path First (OSPF), and Border Gateway Protocol 4 (BGP4), static routes provide a more secure environment. A redundancy protocol based on the Virtual Router Redundancy Protocol (VRRP, RFC 2338) runs between the virtual routers. The MAC address of the virtual routers floats among the active virtual routers so that the ARP caches of the servers do not need any updates when a failover occurs.
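The master election that such a redundancy protocol performs can be sketched as follows; the priorities and router names are invented, and real VRRP adds advertisement timers and preemption rules omitted here.

```python
def elect_master(routers):
    # routers: {name: (priority, alive)}; the live router with the highest
    # priority owns the virtual IP and its floating MAC address.
    live = {name: prio for name, (prio, alive) in routers.items() if alive}
    return max(live, key=live.get) if live else None

routers = {"core-A": (200, True), "core-B": (100, True)}
master = elect_master(routers)        # core-A advertises and owns the VIP

routers["core-A"] = (200, False)      # core-A stops advertising (failure)
failover = elect_master(routers)      # core-B takes over the VIP and MAC
```

Because the virtual MAC moves with the mastership, the servers keep sending to the same address and their ARP caches need no update on failover.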

How Data Flows Through the Service Modules

When a client makes a request, it can be handled in one of two ways, depending on the type of request. A Web server might return information to the client directly or it might forward the request to an application server for further processing.

In the case where the client's request is for static content such as images, the request is handled directly by the Web server module. These requests are handled quickly and do not present a heavy load to the client or server.

In the case where the client requests dynamically generated content that requires JavaServer Pages (JSP) or servlet processing, the request is passed to the application services module for processing. This is often the bottleneck for large-scale environments.

The application server runs the core of the application that handles the business logic to service the client request, either directly or indirectly. Over the course of handling the business logic, the application server can use many supporting resources, including directory servers, databases, and perhaps even other Web application services.

FIGURE 7-12 illustrates how the data flows through the various system interfaces during a typical application services request. TABLE 7-2 provides a description of each numbered interaction.

Figure 7-12. Logical Network


Table 7-2. Sequence of Events for FIGURE 7-12

1. Client → Switch (HTTP/HTTPS): Client initiates the Web request. Client communication can be HTTP or HTTPS (HTTP over the Secure Sockets Layer). HTTPS can be terminated at the switch or at the Web server.

2. Switch → Web server (HTTP/HTTPS): Switch redirects the client request to the appropriate Web server.

3. Web server → Application server (application server Web connector over TCP): The Web server redirects the request to the application server for processing. Communication passes through a Web server plug-in over a proprietary TCP-based protocol.

4. Application server → Directory server (LDAP): The Java™ 2 Enterprise Edition (J2EE) application hosted by the application server identifies the requested process as requiring specific authorization. It sends a request to the directory server to verify that the user has valid authorization.

5. Directory server → Application server (LDAP): The directory server verifies the authorization through the user's LDAP role and returns the validated response to the application server, which then processes the business logic represented in the J2EE application.

6. Application server → Database server (JDBC): The business logic requests data from a database as input for processing. The requests may come from servlets, Java™ Data Objects, or Enterprise JavaBeans (EJBs) that in turn use Java DataBase Connectivity (JDBC) to access the database.

7. Database server → Application server (JDBC): The JDBC request can contain any valid SQL statement. The database processes the request natively and returns the appropriate result through JDBC to the application server.

8. Application server → Web server (application server Web connector over TCP): The J2EE application completes the business logic processing, packages the data for display (usually through a JSP that renders HTML), and returns the response to the Web server.

9. Web server → Switch (HTTP/HTTPS): Switch receives the reply from the Web server.

10. Switch → Client (HTTP/HTTPS): Switch rewrites the IP header and returns the response to the client.




    Networking Concepts and Technology: A Designer's Resource
    ISBN: 0131482076
    Year: 2003