DESIGNING A DATA CENTER ACCESS PLATFORM: OVERALL CONSIDERATIONS

Several seemingly disparate factors come into play when designing a server-based computing data center; considered together, they yield a secure, reliable, and cost-effective environment. Some of these factors, such as disaster recovery, are traditional concerns of the mainframe world, but they take on additional facets when considered as part of a computing environment using Citrix and Terminal Services. We examine disaster recovery and business continuity at length in Chapter 19, but we touch on them briefly here because they are such important topics in today's world.

Disaster Recovery and Business Continuity

When initially considering the consolidation of distributed corporate servers, an organization may be concerned about "putting all its eggs in one basket." In most distributed computing environments, a single failed server probably affects only a small group of people. When everyone is connected to the same server, however (even a "virtual" one), its failure could be disastrous. Fortunately, an on-demand access environment running Terminal Services with Citrix Access Suite 4 provides a very flexible and cost-effective approach to building redundancy across multiple geographies, power grids, data access grids, and user access points. Chapter 19 will provide greater detail on why we strongly recommend organizations utilize two data centers (one main data center and one geographically separate data center) and how to technically configure this solution. For the purposes of this chapter, though, we will focus on the requirements of the first data center, with the assumption that additional data centers will be similar, if not identical.

Note 

The on-demand access model is a high-availability solution, not a fail-over solution: any session data held in memory that has not yet been written to disk is lost when a user is moved to another server because of a hardware failure, a server reboot, or a server blue screen.
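To illustrate the distinction, the following is a minimal sketch in Python of a hypothetical session broker, not the actual Citrix or Terminal Services load-balancing logic: when a server fails, the broker simply places the user on another healthy server with a fresh session, so anything held only in the failed server's memory is gone.

# Minimal sketch of high availability vs. fail-over (hypothetical broker,
# not the actual Citrix/Terminal Services implementation).

class Server:
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.sessions = {}                        # user -> in-memory session state

def connect(user, farm):
    """Place the user on the least-loaded healthy server with a fresh session."""
    candidates = [s for s in farm if s.healthy]
    if not candidates:
        raise RuntimeError("no healthy servers available")
    target = min(candidates, key=lambda s: len(s.sessions))
    target.sessions[user] = {"unsaved_work": None}  # new, empty session
    return target

farm = [Server("TS1"), Server("TS2")]

server = connect("jdoe", farm)
server.sessions["jdoe"]["unsaved_work"] = "edits not yet written to disk"

server.healthy = False                            # hardware failure or blue screen
server.sessions.clear()                           # in-memory state is lost with it

server = connect("jdoe", farm)                    # high availability: a new session
print(server.name, server.sessions["jdoe"])       # unsaved_work is None again

The point of the sketch is that the user can log back in almost immediately, but the new session starts clean; preserving in-memory state across a failure would require true fail-over, which this model does not provide.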

Outsourcing

Once a company assesses its ability to host a data center using some of the criteria presented in this chapter, it may find that it does not have adequate facilities or infrastructure in place. It may be too costly to create the proper infrastructure, or it may be undesirable to take on the task for a variety of reasons. In this case, the organization may consider taking on a partner to build and run its data center. Many companies find that even if they can build and run a data center internally, outsourcing is still attractive due to cost, staffing, location, or built-in resilience. Let's look more closely at the advantages and limitations of outsourcing a data center. These are some of the potential advantages of outsourcing:

  • Facilities built specifically for data center hosting already exist, and in fact, most data hosting facilities currently have significant excess capacity. Thus, new construction is rarely necessary.

  • Redundant power sources, such as UPS systems, backup generators with automatic transfer switches (ATSs), redundant power grids, online as well as backup cooling systems, intruder detection systems, secure access, and fire suppression are often already in place.

  • Physical security is usually better than the individual companies' internal security. Guards on duty, biometric authentication, escorted access, and other measures are typical.

  • Hosting facilities are often built very close to the points of presence (POPs) of a local exchange carrier (LEC). In some cases, they are built into the same location as a LEC, which can dramatically decrease WAN communication costs.

  • Managed services that can supplement a company's existing staff are usually available. These services are invariably less expensive than hiring someone to perform routine operations such as exchanging tapes or rebooting frozen servers.

  • Hosting facilities carry their own liability insurance, which could have a significant impact on the cost of business continuity insurance.

  • Many facilities can customize the service-level agreement they offer or bundle hosting services with network telecommunication services.

These are some of the limitations of outsourcing:

  • A company's access to its equipment is usually restricted or monitored. Outsourcing puts further demands on the design to create an operation that can run unattended.

  • WAN connectivity is limited to what the hosting center has available. It can be more difficult to get upgraded bandwidth because the hosting center has to filter such requests through the plans in place for the entire facility.

  • It may be more difficult to get internal approval for the outsourcing expense because hosting services appear as a bottom-line cost, whereas many information technology costs are buried in other areas, such as facilities and telecommunications.

  • If unmanaged space is obtained, it may be difficult or impractical for a company to have one of its own staff onsite at the hosting facility for extended periods of time.

CASE STUDY: Home State Bank Builds Their Own SBC Data Center

Home State Bank (HSB), a regional bank headquartered in Colorado with 180 employees, seven branch banking centers, and assets of $370 million, decided to build a data center to host their server-based computing environment following a consolidation with American Bank, another mid-sized regional bank.

Jim Hansen, Chief Information Officer of HSB, commented on the decision to build a new data center: "Consolidation of the two locally owned and independent community banks forced us to bring two distinct network environments into one. The consolidation also brought about a change in the means of providing end-user connectivity and access to their applications and services. The bank decided to move to publishing applications where applicable through Citrix MetaFrame Presentation Server via Web Interface to help minimize end-user support and keep upgrades to a minimum. We knew we needed to centralize everything from both banks, and there were no large data centers in our region to outsource to, so we decided we needed to build our own."

HSB built their first data center in March of 2003 for $130,000, with plans to replicate it to an off-site data center within one year. HSB's data center currently houses ten Terminal Servers, 15 application servers, the routers and telecommunication equipment for the branch bank WAN, firewalls, Internet banking equipment, and a large tape jukebox backup system.

Some additional details of the data center include

  • The data center was built in a bank clearing house basement next to a bank vault; thus, it was protected on three sides by the bank vault and on the fourth side by ground.

  • 500 square feet of data center space, with 1500 square feet of accompanying office space.

  • A Liebert 16kVA uninterruptible power supply, expandable to 20kVA, capable of maintaining power in the data center for 15 minutes.

  • Water, moisture, fire, and physical security alarm systems.

  • Ceiling-mounted data cable and power racks.

  • HVAC environmental control (ten tons of air-conditioning).

CASE STUDY: ABM Chooses AT&T to House Their Main Data Center

ABM Industries is a Fortune 1000 company that provides outsourced facilities services. ABM has 63,000 employees worldwide. Their SBC infrastructure required a data center that would support over 50 servers and 2500 concurrent users.

Anthony Lackey, Vice President of MIS and Chief Technology Officer for ABM Industries, commented on the decision to outsource the data center in 1999: "The decision to co-locate the data center was a simple one. First, the single biggest vulnerability point for a thin-client solution is the network portal into the data center. Second, the physical connection from one's office to the network provider's central office is typically the most likely failure point. By co-locating our data center facilities with our network provider, we significantly reduced our vulnerability. Besides eliminating the risk of the last mile, we also eliminate a great deal of expense."

ABM saved approximately $25,000 per month on their ATM circuit by locating their data center inside a POP where AT&T maintained a hosting facility. In this case, there was no local exchange carrier (LEC) involved, and the customer could connect directly to the national carrier's backbone on a different floor of the same building. Key features of the AT&T facility that were important to ABM in the evaluation process were the following:

  • Uninterruptible power: four (expandable to six) 375kVA UPS systems (N+1) and dual (N+1) 750kW solar turbine generators (with an 8,000-gallon fuel capacity).

  • Dual power feeds to each cabinet from two different power systems.

  • HVAC environmental control from central plant (150 tons of air-conditioning equipment cooling 60 watts per square foot; a rough sizing check follows this list).

  • Switched and diverse paths for WAN links; redundant OC-3/OC-12/OC-48 connections to multiple network access points.

  • Fully staffed network operations center with trained systems administrators, data center technicians, and network engineers on duty 24 hours a day, seven days a week.

  • Secured cabinets or caged environment with customized space allocation.

  • State-of-the-art VESDA fire detection system (100 times more sensitive than conventional fire detection systems) backed up by a cross-zoned conventional system, so that early detection alone does not trigger an emergency power-off.

  • State-of-the-art Inergen fire suppression system.
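As a rough sizing check (our arithmetic, not figures from ABM or AT&T, and assuming the standard conversion of one ton of cooling to about 3.517 kW of heat removal), the central-plant capacity can be related to the floor space it supports at the quoted power density:

# Rough sizing check (assumed conversion: 1 ton of cooling ~= 3.517 kW).
TON_TO_KW = 3.517

cooling_kw = 150 * TON_TO_KW                               # ~527 kW of heat removal
density_w_per_sqft = 60                                    # cooling density quoted above
supported_sqft = cooling_kw * 1000 / density_w_per_sqft    # ~8,800 sq ft of floor space

ups_kva = 4 * 375                                          # four 375kVA UPS modules (N+1)
print(f"{cooling_kw:.0f} kW of cooling supports about {supported_sqft:,.0f} sq ft "
      f"at {density_w_per_sqft} W/sq ft; installed UPS capacity is {ups_kva} kVA.")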


Outage Mitigation Strategies

Having a good disaster recovery plan in place is small comfort to users if they are experiencing regular interruptions in service. Centralizing computing resources makes it that much more important to incorporate a high degree of resilience into a design. This goes far beyond just making sure the hard drives in the file server are in a RAID configuration. Companies must take a global view of the entire infrastructure and make the following assessments:

  • Identify single points of failure. Even if the file server is clustered, what happens if the WAN connection fails?

  • Implement redundancy in critical system components. If one server is good, two are better. Ideally, the pair should carry balanced loads; at the very least, keep an identical backup server, a "cold spare," ready to bring online if one fails.

  • Establish a regular testing schedule for all redundant systems. Many organizations have backup plans that fail when called upon. Thus, it is important to test and document the backup systems until you are comfortable that they can be relied upon in a time of crisis (a minimal monitoring sketch follows this list).

  • Establish support escalation procedures for all systems before there is an outage. Document the support phone numbers, customer account information, and what needs to be said to get past the first tier of support.

  • Review the vendor service levels for critical components, and assess where the company may need to supplement them or keep spare parts on hand. Is the vendor capable of meeting its established service level? What is the recourse if it fails to perform as promised? Is support available somewhere else? Is the cost of having an extra, preconfigured unit on hand in case of failure justified?

  • Establish a process for management approval of any significant change to the systems. Two heads are always better than one when it comes to managing change. Companies should ensure that both peers and management know about, and approve of, what is happening at the data center.

  • Document any change made to any system. For routine changes, approval may not be necessary, but companies should make sure there is a process to capture what happened anyway. The audit trail can be invaluable for troubleshooting.

  • Develop a healthy intolerance for error. An organization should never let itself say, "Well, it just works that way." It should obtain regular feedback from the user community by establishing a customer survey around items like perceived downtime and system speed, and give that feedback to its vendors and manufacturers. It must keep pushing until things work the way it wants them to work.

  • Build some extra capacity into the solution. Being able to try a new version of an application, a service pack, or a hot fix without risking downtime of the production system is extremely important.
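Even a very small scheduled check helps with the redundancy and testing items above. The sketch below is a hypothetical example; the host names, port, and log file path are placeholders rather than anything from the text. It probes each redundant endpoint over TCP and appends the result to a log, which also doubles as a simple audit trail.

# Hypothetical scheduled check for redundant components; host names, port, and
# log path are placeholders.
import socket
from datetime import datetime

REDUNDANT_ENDPOINTS = [("ts-primary.example.local", 3389),
                       ("ts-spare.example.local", 3389)]
LOG_FILE = "redundancy-check.log"

def port_is_open(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_checks():
    with open(LOG_FILE, "a") as log:
        for host, port in REDUNDANT_ENDPOINTS:
            status = "UP" if port_is_open(host, port) else "DOWN"
            log.write(f"{datetime.now().isoformat()} {host}:{port} {status}\n")

if __name__ == "__main__":
    run_checks()    # schedule it to run regularly and review the log as part of testing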

Chapter 10 has more information on establishing service levels and operational procedures as well as samples for documenting various processes at the data center and throughout your organization.

Organizational Issues

Whether an organization decides to outsource the data center or run it itself, it is crucial not to underestimate the organizational impact of moving toward this sort of unattended operation. Unless such a center is already running, the following needs to be done:

  • Come up with a three-shift staffing plan (or at least three-shift coverage).

  • Decide whether current staff has sufficient training and experience to manage the new environment.

  • Determine whether current staff is culturally ready to deal with the "mainframe mind-set" required to make the server-based computing environment reliable and stable. In other words, can they manage the systems using rigorous change control and testing procedures?

  • Decide which of the existing staff needs to be on-site and when.

  • If outsourcing, determine which services the vendor will be providing and which will be handled internally.

  • If outsourcing, make sure there is a clean division and escalation procedure between internal and external support resources.


