12.4 Security

When we monitor executing systems, their patterns of program-module execution readily become apparent. This is not surprising. We saw in Chapter 9 that there is a direct mapping from what a user does to a specific set of program modules: when a user exercises an operation, that operation is implemented by one or more system functionalities, and one or more program modules, in turn, implement those functionalities. The pattern of module activity under each user operation becomes the behavior of the system. Each user will ultimately generate an operational profile that describes how he or she has used the system. At the module level, a rather small set of module profiles will characterize this operational profile. These profiles constitute the system's normal behavior.
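As a minimal sketch of this idea, a module execution profile can be computed from a trace of module invocations as the proportion of total activity attributable to each module. The module names and the trace below are hypothetical, not drawn from the text:

```python
from collections import Counter

def module_profile(trace):
    """Build an execution profile: the fraction of all recorded module
    invocations attributable to each module in an execution trace."""
    counts = Counter(trace)
    total = sum(counts.values())
    return {module: n / total for module, n in counts.items()}

# A hypothetical trace recorded while a user exercises one operation.
trace = ["open_file", "parse", "parse", "render", "parse", "close_file"]
profile = module_profile(trace)
# profile["parse"] == 0.5 -- half the observed activity was in "parse"
```

In practice such a trace would come from instrumentation probes in the running system; the profile for each user operation is simply the normalized frequency distribution over the modules that implement it.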

Most of the computer security methods now in place attempt to characterize the abnormal conditions that occur during each security violation. Signatures of this abnormal activity can be developed so that whenever we recognize a certain pattern of behavior, we know that we are under attack by, say, the Manama virus or the Blorch Buffer Overflow exploit. The basic problem with this approach is that the state space of abnormal behavior is unknowably large, which gives a real advantage to the attacker who would exploit our system. The state space of normal behavior in a typical operating system, or any other large software system, is really quite small. We can know and understand this normal behavior quite well. If we know what a system's normal activity is, we can quite easily determine when the system is no longer operating within the bounds of that activity.
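The anomaly-based alternative can be sketched as follows: compare an observed module execution profile against a baseline profile of normal behavior and flag any sufficiently large departure. The distance measure (total variation), the threshold value, and the module names here are illustrative assumptions, not prescriptions from the text:

```python
def profile_distance(baseline, observed):
    """Total-variation distance between two module execution profiles,
    each a dict mapping module name -> proportion of activity."""
    modules = set(baseline) | set(observed)
    return 0.5 * sum(abs(baseline.get(m, 0.0) - observed.get(m, 0.0))
                     for m in modules)

def is_anomalous(baseline, observed, threshold=0.2):
    """Flag behavior that falls outside the envelope of normal activity.
    The threshold is an assumed tuning parameter."""
    return profile_distance(baseline, observed) > threshold

# Hypothetical baseline (normal) profile and an observed profile in which
# activity has shifted into a module that normal operation never drives.
normal = {"open_file": 0.2, "parse": 0.5, "render": 0.2, "close_file": 0.1}
attack = {"open_file": 0.05, "parse": 0.1, "exec_shell": 0.7, "close_file": 0.15}
# is_anomalous(normal, attack) -> True
```

Note that no signature of the attack is needed: the detector only models normal behavior, so any novel exploit that drives the system away from its normal module profiles is flagged, which is precisely the asymmetry the text argues for.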

The vast majority of security vulnerabilities in software are there for a very simple reason: the software has not been engineered. It has been crafted. It would be difficult to conceive of a nuclear power plant that was operated without process monitoring hardware in place. It would be equally difficult to imagine an oil refinery that could safely operate without extensive process monitoring probes in place. In both of these hardware systems, vital information is monitored in real time for all aspects of plant operation. This information is obtained in a sufficiently timely manner to react to operating conditions that exceed normal parameters. In essence, both of these systems operate under real-time feedback control.

Modern operating systems are far more complex than most oil refineries or nuclear power plants. Modern command and control software is far more complex than the hardware systems controlled by these C4I systems, yet no modern software systems have any form of process control built into them. They are essentially running out of control. It is not surprising, therefore, that even a novice hacker can gain control of these complex systems. No one is watching. No mechanism has been provided to exercise the necessary restraints to prevent unwanted incursions into these systems.

Software systems have been created by developers who have not been schooled in basic engineering principles. Each software system is hand-crafted by software craftsmen. These craftsmen are simply not aware of the concepts of dynamic software measurement or control systems. It is not surprising that the systems that they build are vulnerable and fragile. These handcrafted systems are not constructed according to the most fundamental principles of sound engineering practice.

The solution to the current crisis in software security and reliability lies in the application of the basic engineering principles that have dominated mechanical, electrical, and civil engineering for some time. Software systems simply need to be measured as they are designed and tested. They must have the basic real-time controls built into them that we have come to enjoy in our microwave ovens and automobiles. It would be very difficult to sabotage a military installation physically because of the real-time monitoring that is built in to protect it. It is relatively easy to render this same installation inoperable, however, because the software that forms the nucleus of the entire system is not subject to the same level of engineering discipline.

In essence, we do not have a security problem. We have a control problem. Interested parties can hijack our software because we are not watching its operation. Similarly, we do not have a software reliability problem. We have a monitoring problem. Software is allowed to perform uncertified functions that will certainly increase the risk of its failure. Our current problems with software security can easily be eliminated when we understand the need for building real-time controls into these systems.



Software Engineering Measurement
ISBN: 0849315034
Year: 2003
Pages: 139