Host Hardware and Bandwidth


There are many issues to consider when determining hardware and bandwidth requirements. The following sections address most, if not all, of them.

How Many Players per Physical Server?

This will depend on the game's design: how the design affects the size of the world and where critical in-game facilities such as banks, training masters, shops, and other player gathering places are located. The tradeoff is that the more players in one place, the better the socialization, up to the point where there are so many in one location that server-side lag becomes a problem.

This points back to the design and doing your best to anticipate population problems before coding begins.

How Many Servers per World Iteration?

The tradeoff: The more physical machines per world iteration, the more people you can simultaneously host per iteration. More machines, however, means higher hardware costs.

Also, you have to consider how many players to host per physical server vs. the size and popularity of the "world" terrain that server machine hosts. If each physical machine is designed to host 500 simultaneous players, but the region of the world is of such interest that more than 500 regularly congregate there, what is that going to do to performance?

Multi-Processor PCs or Suns?

In most cases, the cheaper alternative is multi-processor PCs, if you have a firm handle on how many machines you need per world iteration. The value of dual-processor PC motherboards is in faster traffic-handling at the server end. Both Sun and Intel announced in December 2001 that they were working on multiple processors on one chip, tied together using simultaneous multithreading to allow each processor to handle two or more application threads simultaneously. [1] This should eventually decrease the costs of server farms, as one physical machine will probably be able to handle more traffic and players.

[1] See the full article at CMP's Silicon Strategies web site: www.siliconstrategies.com/story/OEG20011210S0069.

Simply clustering commodity, single-processor PCs isn't advised for an application as intensive as a multiplayer game; they aren't designed for this type of job. The higher cost of multi-processor PCs is generally offset by better performance and less downtime, which result in higher player satisfaction.

How Much RAM?

The easy answer here is "as much as you can shove into the machine." You should have no less than 1GB of system RAM for any type and style of online game, and as much as you can reasonably afford is best; there is no such thing as too much.

Will That Be One Hard Drive or Five?

The view on this among developers we've talked to is split. Some prefer having one large hard drive with a backup in a fault-tolerant configuration; some prefer two or more relatively smaller drives with one fault-tolerant backup drive in place.

Purely for redundancy's sake, it makes sense to split the load off to more than one drive if your technical design allows for it. Regular, multiple backups are mandatory in any case; there hasn't been an online game in existence that hasn't had a catastrophic failure that required backup game data to be loaded onto live production servers.

To Linux or Not to Linux

This one has become almost a religious argument in the community. Windows NT/2000/XP isn't considered as stable as Linux, can't host as many people per server before server-side lag starts setting in, and costs money to license versus the free use of Linux. On the other hand, NT/2000/XP is generally easier for most engineers to work with, and Microsoft and its affiliated training partners have pretty good training programs and materials available.

In general, most shops are going with Linux because it is free, open source, more flexible, and generally more scalable than NT/2000/XP.

Bandwidth: How Big a Pipe Do You Need?

This is one of the key questions in this whole process. Bandwidth is one of the few variable costs you'll have to contend with; it is also one of your biggest expenses because you pay a peak usage rate; that is, the higher your peak usage, the higher your rate. This isn't like a personal account with your ISP, nor is it like having an account with a phone, water, or electricity provider. It's more like a highway: it's going to be as wide as you build it, and you need to build it for your busiest rush-hour traffic, even when nearly everyone is home sleeping. Going over a peak usage cap is expensive. (Think rush-hour traffic that is so congested that you need a helicopter to evacuate a seriously injured motorist. Compare the cost of a helicopter with the cost of an ambulance.)

Controlling bandwidth usage is critical to the profit margins for the game, yet it is rarely a consideration in the design stage of a project. Designers don't want to be shackled to a bit rate target number; they want to shove as much data down the line as they can because that gives them leeway to add more features to the game. They are rarely challenged on this during the design process because, due to inexperience with the process, executive producers and other leaders rarely even think about it. It is generally during the latter stages of Beta testing, when the data transfer figures are run to determine how much bandwidth needs to be ordered for launch, that executives see the potential cost, realize that bandwidth is going to eat up a bunch of margin points, and turn white with fear.

Thus, one of the first ground rules an executive producer has to lay down is a target goal of how much data is going to be transferred back and forth between the player and the game servers. The bit rate isn't a sexy thing to work on, but there simply must be some common-sense goal to shoot for that won't break the maintenance budget in bandwidth costs after launch. This target can be refined as development proceeds in the testing phase, but not having such a target makes it very likely the game's bandwidth costs will be out of sync with the rest of the budget.

In the US and European markets, a good goal to shoot for is 4–6 kilobits per second (kbps) per player or less. You'll find it difficult to stay in that range; several of today's more popular games live in the 8–10kbps range. If you can get the bit rate down to 2kbps, you're "golden." It's hard to see how that can happen, however, without putting dangerous amounts of data directly into the client, which is just asking for trouble from talented cheaters and hackers. The problem with "golden" is that it's the part of the flame between red and blue.
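To see how the per-player bit rate drives the bandwidth bill, here is a back-of-the-envelope sketch in Python. The 20,000-player concurrency figure and the $300-per-Mbps monthly rate are hypothetical illustrations, not numbers from this book:

    # Peak bandwidth and rough monthly cost at a given per-player bit rate.
    # The player count and per-Mbps price are hypothetical examples.

    def peak_mbps(concurrent_players: int, kbps_per_player: float) -> float:
        """Total pipe needed at peak, in megabits per second."""
        return concurrent_players * kbps_per_player / 1000.0

    for rate in (2.0, 6.0, 10.0):   # "golden," target ceiling, typical today
        mbps = peak_mbps(20_000, rate)
        print(f"{rate:4.1f} kbps/player -> {mbps:6.1f} Mbps peak, "
              f"~${mbps * 300:,.0f}/month")

Even with made-up prices, the spread between 6kbps and 10kbps per player is a two-thirds jump in the single largest variable cost, which is why the target needs to be set before coding begins.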

After you have made your code as elegant, streamlined, and compact as possible, the remaining technique for reducing bit rate is to have some parts of it reside on the client side (each player's computer) instead of on the server side. As code is shifted from server to client, players have access to more critical functions. Most players just want to have fun with your game, but some players would just love to "have fun" with your code. Their definition of "fun" will cost you money and time.
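For illustration, here is a minimal sketch of keeping authority on the server even when code moves to the client: the client may predict and render locally, but the server validates every movement request against its own state rather than trusting the client's claimed position. The names and the speed limit here are hypothetical:

    import math

    MAX_SPEED = 5.0  # world units per second the server will allow

    class PlayerState:
        def __init__(self, x: float, y: float):
            self.x, self.y = x, y

    def handle_move_request(state: PlayerState, new_x: float, new_y: float,
                            dt: float) -> bool:
        """Accept a client move only if it is physically possible; never
        trust the client's claimed position outright."""
        if math.hypot(new_x - state.x, new_y - state.y) > MAX_SPEED * dt:
            return False  # reject: the client claims an impossible move
        state.x, state.y = new_x, new_y
        return True

The point is the direction of trust: shifting rendering or prediction to the client saves bandwidth, but validation like this stays server-side so authority doesn't move with it.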

Asian markets generally have more tolerance for bandwidth use because the governments there tend to lay a lot of fiber-optic cable and offer price supports to keep it inexpensive. South Korea is the best example of this, and it shows in how Korean PWs use bandwidth for games; the average seems to be a 30Mb (megabit) connection to support a server that can hold 1,000–3,000 simultaneous players. This is also one reason why few Korean games will be appearing in the US market until massive recoding and optimization are done.

Then there is the consideration of the player's connection to the Internet. Hard-core gamers tend to upgrade hardware much faster than moderate or mass-market gamers, so in general they are seeing better Internet performance right now, especially in the US, where fewer than 8% of households had broadband access as of February 2002. Plenty of myths abound about that access, however.

Bandwidth Will Not Save You

Bandwidth: It's a clarion call, a wizard's chant to create the spell of no-lag. All we need is more of that super-big, mystical stuff, "They" say, and all will be well.

More bandwidth, "They" say, translates to more speed for data. You know the line: big pipes, no waiting, and an end to the nefarious lag monster. Imagine 50–80-millisecond latency rates for everyone! We could play all those Internet action games and flight simulators, and the frame rate might actually match the data transmission rate.

And cable modems and DSL lines, those deity-blessed saviors known collectively as "broadband," will give us that access, "They" say. Why, as soon as everyone is on a cable modem or DSL line, we'll all experience low ping times, and playing a session of Quake III or UO will be a lagless exercise worldwide. Broadband, the experts trumpet, shall save us all.

Understand something up-front: What you hear about broadband these days is marketing fluff, and it's about as honest as marketing fluff ever is. That is to say, it is riddled with misdirection, incomplete information, and lies by omission. All the marketers want you to see is the perfect case; the reality of the situation can wait until after you've plunked your money down on the table.

What "They" want you to see and believe is that broadband in the form of cable and DSL will remarkably improve your Internet performance; what "They" don't want you to see is that bandwidth is only one part of the puzzle and that all parts have to be fixed for broadband to have any lasting effect on lag.

If you believe we're saying that certain cable companies, cable access providers, DSL providers, and content providers (the ubiquitous "They") are fudging the truth about the efficacy of broadband access for their own purposes, score yourself a 10. Let's have a little reality check:

  • Lack of bandwidth alone is not the cause of lag. Yes, lack of adequate bandwidth is a major cause of latency. The US Department of Commerce estimated in 1999 and 2000 that the amount of data sent out over the Internet doubles every 100 days. Compare that to the amount of fiber and copper that is laid on an annual basis and you will find that only about 12.5% of the needed bandwidth is being laid every year.

    However, that is not the whole story. Other major contributors to latency include obsolete routers, obsolete and badly configured servers, badly programmed databases and applications on those servers, and the existence of certain critical data-routing chokepoints on the Internet, such as at the metropolitan area Ethernets (MAEs).

    What this means, my friends, is that you cannot control lag at the end user's home. It doesn't help just to open the broadband spigot into the home; in fact, without fixing the other parts that create lag, it hurts more than helps. All those additional bits and bytes are going to be crowding the lines at those obsolete routers, badly configured servers, and data chokepoints, and that will just make the problem even worse in the short run. By "that," I mean over the next three to five years, overall lag for the majority of Internet users is actually going to get worse, not better.

    If you think lag during multiplayer games is bad now, just wait two years.

  • Massively multiplayer (MMP) games can and will take advantage of broadband to reduce lag. We're not talking about a one-time shot of 20k of data from a web page, which can then be cached and redisplayed. PWs (and retail hybrids such as Tribes and Quake III) are dynamic; the information needed, such as player locations or the effects of combat or magic, changes quickly.

    It can be bad trying to shoot out data for a twitch game such as Quake. With massively multiplayer games (MMGs) such as EQ and Air Warrior, significant lag can be, and usually is, caused by the backend server programming. Look at it this way: If 1,000 simultaneous users are sending in commands to the server, and those commands each affect anywhere from 1 to 50 other players, the amount of data that has to be correlated and sent back out is tremendous (see the sketch following this list). At the risk of making some enemies, most programmers in the MMG arena haven't been doing it very long, and they can be sloppy about how much data needs to be transferred in these situations.

    In other words, this is a technical design issue, not a broadband access issue. Just opening up the bandwidth pipe is less effective than improving the programming on the game's server or PC client. In most MMGs today, much more data than necessary is being transferred. This may be one of the easier problems to solve, although it won't help that much if Internet lag is still bad, which it will be for some time to come. Read on.

  • Cable modems can deliver speeds of up to 100 times standard telephone modems. Well, yes, they can, occasionally, on a perfect day, in a perfect situation, if you are the only person on the line, or if you wave a dead chicken over the cable modem and invoke the correct spirits, and if other factors outside the user's control don't intervene.

    Sarcasm aside, a cable modem is a pretty good deal right now. This will not last. Unlike DSL, in which you lease a certain portion of bandwidth that only you have access to, cable modem users share the bandwidth. The usual configuration is 500 users to a neighborhood "head end," sharing something like 10Mb of bandwidth. Now, if only you and a couple other people are using that bandwidth, which is the case today at many head ends, you can get pretty good performance on downloads. I know; I've used cable modems more than once since 1998 and I love them for pure downloads, especially late at night. I can grab a 30MB file in 5–8 minutes.

    But imagine the situation two or three years from now, when all 500 slots on your head end are filled, with all those people downloading huge movie files and MP3s and 30–200MB game demos. Ten megabits divided by 500 equals 20,000 bits, according to my calculator. That's about 20k in tech-speak, give or take some overhead. When that happens, you'll long for the days when you had 56k worth of bandwidth all to yourself.

    This is a shell game with rapidly diminishing returns. The cable providers know this, and they hope you are not smart enough to realize it. This is one of the reasons why they are so resistant to opening the lines to other ISPs, such as AOL and Earthlink. If all those millions of users clog the cable lines too soon, their best marketing fluff blows away like a dandelion on the wind. This is also why they are already placing governors on their systems, so they can limit how much bandwidth you actually use. They are also quietly altering their user agreements to note that they can limit your bandwidth use at any time, for any reason.

  • DSL is a good alternative to cable modem access. Well, it would be, if the stupidly greedy telephone companies would drop the price today. Right now, it costs about $40 a month for a DSL line that provides about 360k of bandwidth into the home and about 128k of uploading from the home to the Internet. This is not that great a deal. Even though you don't have to share that bandwidth with 500 other subscribers, it's only about six times as much download power as you get from a 56k modem, and you still have to deal with the traffic jams elsewhere on the Internet. As a result, DSL users are starting to see traffic jams at trunks, those nexus points where many telephone lines come together.

    However, two or three (or five) years from now, when the cable lines are clogged like an 80-year-old saturated fat eater's arteries, this could be a great deal, if the stupidly greedy telephone companies don't raise the price after a few hundred thousand users subscribe to it, which is exactly what they did with ISDN broadband lines. This is why ISDN hasn't grown into the millions of users the phone companies predicted five years ago.
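To put rough numbers on the server fan-out problem raised in the second bullet above, here is a quick sketch. The counts are hypothetical, chosen only to show the scale, and the area-of-interest filter is a standard technique offered as one possible mitigation:

    # Outbound message volume: naive broadcast vs. area-of-interest filtering.
    # All counts are hypothetical illustrations.

    players = 1_000          # simultaneous users on one server
    commands_per_sec = 2     # commands each player sends per second
    aoi_recipients = 25      # average players near enough to care

    naive = players * commands_per_sec * (players - 1)
    filtered = players * commands_per_sec * aoi_recipients
    print(f"naive broadcast:   {naive:,} outbound messages/sec")
    print(f"interest-filtered: {filtered:,} outbound messages/sec")

At these made-up numbers, sending events only to nearby players cuts outbound traffic by roughly 97%, which is exactly the kind of server-side programming discipline the bullet is asking for.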

So before you plop down $40 or more a month for broadband access in the expectation of superior gaming, understand this: Broadband will not save you, not for a long while anyway, and not until a lot of routers and servers are replaced with newer equipment, pressure is relieved at the chokepoints of the Internet, and more programmers learn how to code games for more economical data transfer. Yes, you probably will see a performance increase, but it will not be the Nirvana-like experience promised, and it will get worse over time as more people subscribe to broadband outlets.

What does all this mean? It means that programming a game (or web site) to appeal to broadband users is going to actually cost you more in bandwidth. If you are willing to suck up this cost and can afford to pay for it, that's one thing. Just make sure you go in with your eyes open; more data transfer might make for a better online game, but it also might drain your pocketbook faster than you expected.

Co-Locate Your Servers or Set Up a Big Farm at Home?

Where to place your servers is a question you need to consider because it will have an impact on how much physical space you need and how many operations employees you need to hire to launch. If you're planning on being published/hosted by a third-party publisher, you'll want to know how they do it because it will have an impact on their bottom-line expenses.

To "colocate" simply means to place your hardware at someone else's network operations centers (NOCs). The big Internet backbone providers, like Sprint and Exodus, have them all over the country and either own NOCs internationally or have cut deals with firms overseas to provide that capacity.

This one is a toss-up and may depend greatly on whether you're going after an international market right away or sticking with home territory for a while. You can see examples of both methods in the US: EA's UO co-locates servers in each US time zone and in the international territories it services, and Sony Online Entertainment's EQ uses one large farm in San Diego for US players.

Another major tradeoff here is in the potential for player-side lag versus close-at-home control of the hardware. By co-locating hardware at the NOCs of a major backbone provider, you give players the option of reducing the overall number of Internet data transfer hops from home to a game server, which generally reduces lag time.

The big issue is cost. Setting up your own NOC can be expensive, and not just in hardware, software, building a clean room to house the servers, and leasing bandwidth to connect it to the Internet. You also have to have operations people to monitor the NOC 24/7/365 and fix problems as they arise. At a bare minimum, you need at least six people to cover the 21 eight-hour shifts in a standard work week, and that doesn't take into account vacations, sick time, or emergencies like having to take the dog to the vet or picking up ill children from school. Most companies try to slide by this by having the servers page an on-call operator at home if they go down, but this works about as well as you'd expect; in the truly critical incidents, when thousands of customers can't connect to the servers, the Law of the General Perversity of the Universe dictates that the server paging software will fail. This is especially true on holidays, patch days, and for highly anticipated scheduled events in a game.
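The shift arithmetic behind that six-person figure, as a quick sketch (treating one extra operator as the slack for vacations and sick time is our assumption):

    import math

    hours_per_week = 24 * 7                   # 168 hours of 24/7 coverage
    shifts = hours_per_week // 8              # 21 eight-hour shifts per week
    shifts_per_person = 5                     # one standard five-day work week
    base_staff = math.ceil(shifts / shifts_per_person)  # 5 operators, no slack
    minimum_staff = base_staff + 1            # 6, before emergencies bite
    print(shifts, base_staff, minimum_staff)  # 21 5 6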

Co-locating at a backbone NOC can solve many of these problems. It isn't a fail-safe solution, but at least you don't have to build a NOC; you can spend that money on operators to watch the servers and correct problems.


