One Data Model


The data stored and maintained in the managed network must, at some point, be imported (in whole or in part) into the NMS and stored in some type of persistent repository. Normally, this is a relational (or even an object-oriented) database. The schema for this repository represents the NMS data model. Repository data is manipulated by the NMS and, for actions such as provisioning, is written to the network as MIB object instance values. Similarly, traps received from the network must be processed and stored for reporting purposes. The data model is the glue for bringing together the managed NE data and the user's view of the network. In a sense, NEs implement a type of distributed database in their MIBs. It is this database that the NMS tracks and modifies. Extracting data from NEs is achieved using SNMP gets and traps. Similarly, pushing data into NEs is achieved using SNMP sets. These three message types all consume precious NE and network resources. So, maintaining parity between an NMS and its managed network is fundamentally limited by:

  • Network size and bandwidth

  • NE density: the number of managed objects (connections, interfaces, protocols, etc.)

  • NE agent resources (speed, allocated CPU cycles, and I/O)

The NMS must try to maintain data parity and, at the same time, minimize NE access.
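
As an illustration, the following Java sketch shows one simple way a poller can limit agent access: values already read from an NE are cached, and a get is reissued only when the cached copy has aged beyond the polling interval. The SnmpSession interface and the interval chosen are hypothetical placeholders, not part of any particular SNMP stack.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Minimal sketch of a poller that limits NE access: values fetched from the
    // agent are cached with a timestamp, and a get is issued only when the cached
    // copy is older than the polling interval. SnmpSession and its get() method
    // are hypothetical placeholders for whatever SNMP stack the NMS uses.
    public class ThrottledPoller {

        public interface SnmpSession {
            String get(String oid);          // assumed synchronous SNMP GET
        }

        private static final long MAX_AGE_MS = 60_000;   // assumed polling interval

        private final SnmpSession session;
        private final Map<String, CachedValue> cache = new ConcurrentHashMap<>();

        private record CachedValue(String value, long fetchedAt) {}

        public ThrottledPoller(SnmpSession session) {
            this.session = session;
        }

        // Returns the cached value when it is fresh enough; otherwise refreshes it
        // from the agent, so the NE is touched at most once per interval per object.
        public String read(String oid) {
            CachedValue cached = cache.get(oid);
            long now = System.currentTimeMillis();
            if (cached != null && now - cached.fetchedAt() < MAX_AGE_MS) {
                return cached.value();
            }
            String fresh = session.get(oid);
            cache.put(oid, new CachedValue(fresh, now));
            return fresh;
        }
    }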

The database model (or schema) should be a superset of the MIB. All the applications in Figure 4-3 could benefit from the deployment of a single data model. This applies particularly to the bidirectional applications that both read from and write to the network. The single data model allows for flow-through to and from the user if MIBs are simple. This helps to keep the device-access layer thin and fast because all (or almost all, if access is needed to an object like mplsTunnelIndexNext, mentioned in Chapter 3) the required data for NE write operations can be gathered from the database and written to the network. There is then no need for intermediate processing at the device-access layer.
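
A rough sketch of such a thin device-access layer is shown below: the values needed for an NE write are read from the repository and pushed to the agent directly, with no intermediate processing. The Repository and SnmpSession interfaces (and their method names) are assumptions made for the example.

    import java.util.Map;

    // Sketch of a thin device-access layer for flow-through provisioning: all
    // values needed for the write are read from the repository and pushed to the
    // NE as SNMP SETs. Repository, SnmpSession, and their method names are
    // hypothetical.
    public class FlowThroughProvisioner {

        public interface Repository {
            // OID -> value pairs for one provisioning record, keyed by record id
            Map<String, String> objectsFor(String recordId);
        }

        public interface SnmpSession {
            void set(Map<String, String> oidToValue);   // assumed SNMP SET of several varbinds
        }

        private final Repository repository;
        private final SnmpSession session;

        public FlowThroughProvisioner(Repository repository, SnmpSession session) {
            this.repository = repository;
            this.session = session;
        }

        // Because the schema is a superset of the MIB, the row read from the
        // repository can be written to the agent directly.
        public void provision(String recordId) {
            Map<String, String> objects = repository.objectsFor(recordId);
            session.set(objects);
        }
    }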

One issue with the stovepipe structure is that the FCAPS applications are written to share a single data repository hosted on one server. This can give rise to database contention between the applications and possibly even deadlocks where multiple applications update the same tables. Careful code and data design is needed to avoid this. A closely allied problem is host resource contention between the applications. This occurs when the applications are written to run essentially independently of each other on a single host machine. The result can be an unbalanced host machine with high levels of CPU and disk activity. This can be improved either by some scheme for interapplication cooperative multitasking (rather than depending on the host operating system) or by distribution (discussed in the next section).
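
One common defensive pattern is sketched below: every application updates shared tables in the same fixed order inside a transaction and retries if the database aborts the transaction. The table names and the SQLState value tested are illustrative assumptions, not drawn from any particular product.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Sketch of one way the FCAPS applications can share a repository without
    // deadlocking each other: update tables in the same fixed order inside a
    // transaction, and retry when the database aborts it. The table names and
    // the SQLState value checked ("40001", commonly used for serialization
    // failures) are assumptions.
    public class SharedRepositoryWriter {

        private static final int MAX_RETRIES = 3;

        public void recordAlarmAndStatus(Connection conn, long neId, String alarmText)
                throws SQLException {
            for (int attempt = 1; ; attempt++) {
                try {
                    conn.setAutoCommit(false);
                    // Fixed update order (ne_status before alarm_log) in every
                    // application removes the circular-wait condition.
                    try (PreparedStatement s1 = conn.prepareStatement(
                            "UPDATE ne_status SET last_event = ? WHERE ne_id = ?");
                         PreparedStatement s2 = conn.prepareStatement(
                            "INSERT INTO alarm_log (ne_id, text) VALUES (?, ?)")) {
                        s1.setString(1, alarmText);
                        s1.setLong(2, neId);
                        s1.executeUpdate();
                        s2.setLong(1, neId);
                        s2.setString(2, alarmText);
                        s2.executeUpdate();
                    }
                    conn.commit();
                    return;
                } catch (SQLException e) {
                    conn.rollback();
                    boolean deadlock = "40001".equals(e.getSQLState());
                    if (!deadlock || attempt >= MAX_RETRIES) {
                        throw e;
                    }
                    // otherwise fall through and retry the transaction
                }
            }
        }
    }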

Distributed Servers and Clients

NMSs are increasingly large, complex application suites. Rather than using a single server host with multiple distributed clients, more than one server machine can be used. This helps to distribute the processing among a number of host machines. The availability (in standard programming languages) and function of technologies such as RPC, Java RMI, and middleware products based on CORBA considerably ease the task of integrating multiple NMS application hosts. The applications shown in Figure 4-3 could therefore be deployed on different machines; this reflects the fact that computing power is increasingly inexpensive. This approach can help to offload some processing from a single host, but it may increase the interserver network traffic. This tradeoff between saving host resources and consuming network bandwidth is typical of networked applications in general. Clients can also be distributed, accessing the NMS via standard desktop Web browsers.
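
For example, a topology service exported over Java RMI by one NMS host could be looked up and invoked by applications running on other hosts, along the lines of the following sketch (the service name, methods, and registry binding are invented for illustration):

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.util.List;

    // Sketch of how Java RMI lets the FCAPS applications run on separate server
    // hosts: a topology service exported by one host can be looked up and called
    // by applications on another. The service name and methods are illustrative.
    public interface TopologyService extends Remote {

        // Returns the identifiers of the NEs currently known to the topology server.
        List<String> listManagedElements() throws RemoteException;

        // Example client-side lookup from a different NMS host.
        static TopologyService connect(String serverHost) throws Exception {
            Registry registry = LocateRegistry.getRegistry(serverHost, Registry.REGISTRY_PORT);
            return (TopologyService) registry.lookup("TopologyService");
        }
    }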

An NMS can also be operated in redundant mode. This consists of deploying a primary server with one or more backup servers. Failure of the primary results in a switchover (or failover) to a secondary server. This allows the entire NMS to be backed up in a number of configurations:

  • Hot standby : The secondary takes over with no data loss.

  • Warm standby : The secondary takes over with some data loss.

  • Cold standby : The secondary is started up and switched into service.

Hot standby is used for critical systems that require 99.999% (the five 9s) uptime. A good example of this is an SS7 protocol stack used for signaling in a mobile (or fixed) telephony network. Two copies of the SS7 stack run in parallel, but only one of them writes to the database and network. If the primary system fails, then the standby takes over. This primary and secondary configuration often provides a convenient means for applying software upgrades. When the operator wants to upgrade both primary and secondary, the primary is stopped, which causes a changeover to the secondary. The primary system software is then upgraded and started up (to back up the secondary). Then the secondary is stopped, causing a switch back to the original primary. Then the secondary software can be updated. Some DBMS vendors, such as Oracle and Informix, also provide standby support in their products.
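
The sketch below shows, in outline, the standby side of such a pair: the secondary monitors heartbeats from the primary and promotes itself when they stop. The heartbeat transport, the timeout value, and what promotion actually starts are assumptions left to the surrounding system.

    import java.time.Duration;
    import java.time.Instant;

    // Minimal sketch of the secondary side of a primary/secondary pair: the
    // standby watches heartbeats from the primary and promotes itself when they
    // stop. How heartbeats arrive and what "promote" does (start writing to the
    // database and network) are assumptions.
    public class StandbyMonitor {

        private static final Duration TIMEOUT = Duration.ofSeconds(5);   // assumed

        private volatile Instant lastHeartbeat = Instant.now();
        private volatile boolean active = false;   // false = standing by

        // Called whenever a heartbeat message from the primary is received.
        public void heartbeatReceived() {
            lastHeartbeat = Instant.now();
        }

        // Called periodically (e.g., once per second) by a scheduler.
        public void checkPrimary() {
            if (!active && Duration.between(lastHeartbeat, Instant.now()).compareTo(TIMEOUT) > 0) {
                active = true;
                promote();
            }
        }

        private void promote() {
            // Switch this copy from standby to active: begin writing to the
            // repository and the network in place of the failed primary.
            System.out.println("Primary heartbeat lost; standby taking over");
        }
    }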


