4.1. Hardware Considerations

Managing a reasonably large network requires an NMS with substantial computing power. Today's networks range from a few nodes to many thousands of nodes, and polling and receiving traps from hundreds or thousands of managed entities can tax even the best hardware. Your NMS vendor can help you determine what kind of hardware is appropriate for managing your network. Most vendors have formulas for estimating how much RAM you will need to achieve the level of performance you want, given the requirements of your network. It usually boils down to the number of devices you want to poll, the amount of information you will request from each device, and the interval at which you want to poll them. The software you want to run is also a consideration. NMS products such as OpenView are large, heavyweight applications; if you want to run your own scripts with Perl, you can get away with a much smaller management platform.
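Although the exact formulas vary from vendor to vendor, the arithmetic behind them is simple enough to sketch yourself. The following Perl fragment is a rough illustration only; the device count, variables per poll, polling interval, and per-variable size are all assumptions to be replaced with figures from your own network:

    #!/usr/bin/perl
    # Back-of-envelope polling load.  Every figure below is an assumption,
    # not a vendor recommendation; substitute your own numbers.
    my $devices       = 500;   # managed devices to poll
    my $oids_per_poll = 20;    # variables requested from each device
    my $interval_secs = 300;   # polling interval (5 minutes)
    my $bytes_per_oid = 50;    # rough size of one variable binding on the wire

    my $vars_per_sec  = $devices * $oids_per_poll / $interval_secs;
    my $bytes_per_sec = $vars_per_sec * $bytes_per_oid;

    printf "%.1f variables/second, about %.1f KB/second of SNMP traffic\n",
           $vars_per_sec, $bytes_per_sec / 1024;

Numbers like these won't replace a vendor's sizing guide, but they give you a feel for how quickly the load grows as you add devices or shorten the polling interval.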

Is it possible to say something more helpful than "ask your vendor"? Yes. First, although we've become accustomed to thinking of NMS software as requiring a midrange workstation or high-end PC, desktop hardware has advanced so much in the past year or two that running this software is within the range of any modern PC. Specifically, surveying the recommendations of a number of vendors, we have found that they suggest a PC with at least a 2 or 3 GHz CPU, 512 MB to 1 GB of memory, and 1-2 GB of disk space. Requirements for Sun SPARC and HP workstations are similar.

Let's look at each of these requirements:


2 or 3 GHz CPU

This is well within the range of any modern desktop system, but you probably can't bring your older equipment out of retirement to use as a management station.


512 MB to 1 GB of memory

You'll probably have to add memory to any off-the-shelf PC; Sun and HP workstations come with more generous memory configurations. Frankly, vendors tend to underestimate memory requirements anyway, so it won't hurt to upgrade to 2 GB. Fortunately, RAM is usually cheap these days, though memory prices fluctuate from day to day.


1-2 GB of disk space

This recommendation is probably based on the amount of space you'll need to store the software, and not on the space you'll need for logfiles, long-term trend data, etc. But again, disk space is cheap these days, and skimping is counterproductive.

Let's think a bit more about how long-term data collection affects your disk requirements. First, you should recognize that some products have only minimal data-collection facilities, while others exist purely for the purpose of collecting data (for example, MRTG). Whether you can do data collection effectively depends to some extent on the NMS product you've selected. Therefore, before deciding on a software product, you should think about your data-collection requirements. Do you want to do long-term trend analysis? If so, that will affect both the software you choose and the hardware on which you run it.

For a starting point, let's say that you have 1,000 nodes, you want to collect data every minute, and you're collecting 1 KB of data per node. That's 1 MB per minute, or about 1.4 GB per day; you'll fill a 40GB disk in about a month. That's bordering on extravagant. But let's look at the assumptions:

  • Collecting data every minute is certainly excessive; every 10 minutes should do. Now your 40GB disk will store almost a year's worth of data.

  • A network with 1,000 nodes isn't that big. But do you really want to store trend data for all your users' PCs? Much of this book is devoted to showing you how to control the amount of data you collect. Instead of 1,000 nodes, let's first count interfaces. And let's forget about desktop systems; we really care about trend data for our network backbone: key servers, routers, switches, etc. Even on a midsize network, we're probably talking about 100 or 200 interfaces.

  • The amount of data you collect per interface depends on many factors, not the least of which is the format of the data. An interface's status may be up or down: that's a single bit, and if it's being stored in a binary data structure, it may be represented by a single bit. But if you're using syslog to store your log data and writing Perl scripts to do trend analysis, your syslog records are going to be 80 bytes or so, even if you are storing only 1 bit of information. Data-storage mechanisms range from syslog to fancy database schemes; you obviously need to understand what you're using, and how it will affect your storage requirements. Furthermore, you need to understand how much information you really want to keep per interface. If you want to track only the number of octets going in and out of each interface and you're storing this data efficiently, your 40GB disk could easily last the better part of a century. (The sketch after this list reruns the arithmetic with these assumptions.)
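Here is the same back-of-envelope arithmetic as a short Perl sketch. The interface count, record size, sampling interval, and retention period are all assumptions; changing any one of them shows how quickly the total swings across orders of magnitude:

    #!/usr/bin/perl
    # Rough disk-space estimate for long-term data collection.
    # Every figure here is an assumption; plug in your own numbers.
    my $interfaces     = 200;   # backbone interfaces worth trending
    my $bytes_per_rec  = 80;    # e.g., one syslog-style line per sample
    my $interval_mins  = 10;    # sample every 10 minutes
    my $retention_days = 365;   # how long to keep the data

    my $samples_per_day = (24 * 60) / $interval_mins;
    my $bytes_per_day   = $interfaces * $bytes_per_rec * $samples_per_day;
    my $total_gb        = $bytes_per_day * $retention_days / (1024 ** 3);

    printf "%.1f MB/day, %.1f GB for %d days\n",
           $bytes_per_day / (1024 ** 2), $total_gb, $retention_days;

With these particular assumptions the answer comes out to roughly 2 MB per day, or under a gigabyte for a full year, which is why the conclusion below is that a gigabyte of log data can go a long way if you collect selectively.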

Seriously, it's hard to estimate your storage requirements when they vary over two or three orders of magnitude. But the lesson is that no vendor can tell you what your storage requirements will be. A gigabyte should be plenty for log data on a moderately large network, if you're storing data only for a reasonable subset of that network, not polling too often, and not saving too much data. But that's a lot of variables, and you're the only one in control of them. Keep in mind, though, that the more data you collect, the more time and CPU power will be required to grind through all that data and produce meaningful results. It doesn't matter whether you're using expensive trend-analysis software or some homegrown scripts; processing lots of data is expensive. At least in terms of long-term data collection, it's probably better to err by keeping too little data around than by keeping too much.



