A Quick Look at How We Got Where We Are

Most of the formal methods and metrics around today had their start back in the 1970s and 1980s when industry began to use computers. Computer professionals of that time were scientists, usually mathematicians and electrical engineers. Their ideas about how to conduct business were based on older, established industries like manufacturing; civil projects like power plants; and military interests like avionics and ballistics.

The 1980s: Big Blue and Big Iron Ruled

By the 1980s, computers were widely used in industries that required lots of computation and data processing. Software compilers were empowering a new generation of programmers to write machine-specific programs.

In the 1980s computers were mainframes: big iron. Large corporations like IBM and Honeywell ruled the day. These computers were expensive and long-lived. We expected software to last for five years, and we expected hardware to last even longer, at least as long as it took to depreciate the investment. As a result, buying decisions were not made lightly. The investments involved were large ones, and commitments were for the long term, so decisions were made only after careful consideration and multiyear projections.

Fact: Computers in the 1980s: expensive, long-term commitment, lots of technical knowledge required

Normally, a vendor during the 1980s would sell hardware, software, support, education, and consulting. A partnership-style relationship existed between the customer and the vendor. Once a vendor was selected, the company was pretty much stuck with that vendor until the hardware and software were depreciated, a process that could take 10 or more years.

These consumers demanded reliability and quality from their investment. Testing was an integral part of this arrangement and contributed greatly to the quality of the product. Only a few vendors existed, and each had its own proprietary way of doing things. Compared to today's numbers, only a few people were developing and testing software, and most of them held engineering degrees.

For the most part, during this period any given application or operating system only ran in one environment. There were few situations where machines from more than one vendor were expected to exchange information or interact in any way. This fact is very significant, since today's software is expected to run in many different environments and every vendor's hardware is expected to integrate with all sorts of devices in its environment.

The 1990s: PCs Begin to Bring Computing to "Every Desktop"

In the 1990s, the PC became ubiquitous, and with it came cheap software for the public consumer. All through the 1990s, computers kept getting more powerful, faster, and cheaper. The chip makers successfully upheld Moore's law, which states that the number of transistors on a single silicon chip doubles every 18 to 24 months. To put that in perspective, in 1965 the most complex chip had 64 transistors. Intel's Pentium III, launched in October 1999, had 28 million transistors.
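To put those figures side by side, here is a quick back-of-the-envelope check (a sketch in Python, not from the original text) of what a 1965 baseline of 64 transistors projects to under the two doubling rates:

```python
# Rough sanity check of the Moore's law figures quoted above.
# Baseline: 64 transistors in 1965; comparison point: the
# 28-million-transistor Pentium III generation, roughly 1999.

baseline_transistors = 64
baseline_year = 1965
target_year = 1999

for months_per_doubling in (24, 18):
    doublings = (target_year - baseline_year) * 12 / months_per_doubling
    projected = baseline_transistors * 2 ** doublings
    print(f"Doubling every {months_per_doubling} months projects "
          f"about {projected:,.0f} transistors by {target_year}")
```

Doubling every 24 months projects roughly 8 million transistors, while doubling every 18 months projects several hundred million, so the actual figure of 28 million sits between the two bounds.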

Fact: Computers in the 1990s: keep getting cheaper, no commitment involved, almost anybody can play

The price of PCs continued to fall during the 1990s, even though their capabilities expanded geometrically. Software developers were driven to exploit the bigger, better, and faster computers, and consumers were driven to upgrade just to remain competitive in their businesses, or at least that was the perception.

Software makers adopted rapid application development (RAD) techniques so that they could keep up with the hardware and with consumers' demands in this new industry, where being first to market was often the key to success. Development tools made it easier and easier for people to write programs, so a formal degree became less important.

Unlike the 1980s, when the "next" release would be a stronger version of the existing software, in the 1990s a new version of a product was often significantly different from the previous version and frequently contained more serious bugs than its predecessor.

Fact: What we got from rapid application development in the 1990s was a new product, complete with new bugs, every 18 to 24 months.

The demanding delivery schedule left little time for testing the base functionality, let alone for testing the multiple environments where the product might be expected to run, such as computers made by different vendors and different versions of the operating system. So the product was mostly tested by its users, with product support groups plugging the holes.

Who would have dreamt then how developments in software and the Internet would eventually affect the state of software testing? The outcome proves that truth is stranger than fiction. Consider the following note I wrote about software testing methods and metrics in 1995:

In the last couple of years, there has been a marked increase in interest in improved product reliability by several successful shrink-wrap manufacturers. I had wondered for some time what factors would cause a successful shrink-wrap marketing concern to become interested in improving reliability. I used to think that it would be litigation brought on by product failures that would force software makers to pay more attention to reliability. However, the standard arguments for accountability and performance do not seem to have any significant effect on the commercial software industry. It seems that the force driving reliability improvements is simple economics and market maturity.

First, there are economies of scale. The cost of shipping the fix for a bug to several million registered users is prohibitive at the moment. Second, there are the decreasing profit margins brought on by competition. When profit margins become so slim that the profit from selling a copy of the software is eaten up by the first call that a user makes to customer support, the balance point between delivery, features, and reliability must change in order for the company to stay profitable. The entrepreneurial company becomes suddenly interested in efficiency and reliability in order to survive.
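As a rough illustration of the balance point described in that note, consider the arithmetic below; the figures are hypothetical placeholders, not numbers from the text.

```python
# Hypothetical illustration of the support-cost squeeze on shrink-wrap margins.

profit_per_copy = 5.00         # assumed net profit on one shrink-wrap sale
cost_per_support_call = 25.00  # assumed fully loaded cost of one support call

# One support call consumes the profit from several copies.
copies_consumed_per_call = cost_per_support_call / profit_per_copy

# Equivalently, if more than this fraction of buyers ever calls support,
# the product as a whole loses money.
break_even_call_rate = profit_per_copy / cost_per_support_call

print(f"One support call consumes the profit from "
      f"{copies_consumed_per_call:.0f} copies")
print(f"Break-even support-call rate: {break_even_call_rate:.0%} of buyers")
```

With numbers like these, even a modest defect rate pushes the product into the red, which is exactly the economic pressure the note describes.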

At the time, I honestly expected a renaissance in software testing. Unfortunately, this was the year that the Internet began to get serious notice. It was also the year that I spent months speaking at several large corporations telling everyone who would listen that they could radically reduce the cost of customer support if they developed support Web sites that let the customers get the information and fixes they needed for free, anytime, from anywhere, without a long wait to talk to someone. Somebody was listening. I was probably not the only one broadcasting this message.

Enter: The Web

Within months, every major hardware and software vendor had a support presence on the Web. The bug fix process became far more efficient because it was no longer necessary to ship fixes to everyone who purchased the product; only those who noticed the problem came looking for a solution. Thanks to the Internet, the cost of distributing a bug fix fell to almost nothing as more and more users downloaded the fixes from the Web. The customer support Web site provided a single source of information and updates for customers and customer service, and the time required to make a fix available to the users shrank to insignificance.

The cost of implementing these support Web sites was very small and the savings were huge; customer satisfaction and profit margins went up. I got a new job and a great job title: Manager of Internet Technology. Management considered the result a major product quality improvement, but it was not achieved through better test methods. In fact, this process improvement successfully minimized any incentive for shipping cleaner products in the first place. Who knew? But don't despair, because it was only a temporary reprieve.

The most important thing was getting important fixes to the users to keep them happy until the next release. The Internet made it possible to do this. What we got from the Internet was quick relief from the new bugs and a bad case of Pandora's box, spouting would-be entrepreneurs, developers, and experts in unbelievable profusion.

Consumers base their product-buying decisions largely on availability and advertising. They are most likely to buy the first product on the market that offers the features they want, not necessarily the most reliable product. Generally, they have little or no information on software reliability because there is no certification body for software; there is no true equivalent in the software industry to institutions like Underwriters Laboratories (UL) in the United States, which certifies electronics products. Software consumers can only read the reviews, choose the manufacturer, and hope for the best. Consequently, software reliability has been squeezed as priorities have shifted toward delivery dates and appealing functionality, and as the cost of shipping fixes has plummeted, thanks to the Web.

Given this market profile, the PC software market is a fertile environment for entrepreneurs. Competitive pressures are huge, and it is critically important to be the first to capture the market. The decision to ship is generally based on market-driven dates, not on the current reliability of the product. It has become common practice to distribute bug-fix releases (by putting the patches and fixes on the Web site) within a few weeks of the initial release, after the market has been captured. Consequently, reliability metrics are not currently considered crucial to the commercial success of the product. This trend exists to one degree or another throughout the commercial software industry, and we see it in hardware development as well.

The next major contribution of the Web was to make it possible to download this "shrink-wrap" software directly. This type of software typically has a low purchase price, offers a rich, appealing set of functionality, and is fairly volatile, with a new release being offered every 12 to 18 months. Its reliability is low compared to the traditional commercial software of the 1970s and 1980s, but it has been a huge commercial success nonetheless. And the Web has helped keep this status quo in effect by reducing the cost of shipping a bug fix: users with a problem simply download the fix for themselves. And so we coasted through the 1990s.

The Current Financial Climate

In the aftermath of the dot-com failures and the market slump in 2001 and 2002, investors are demanding profitability. I always expected consumers to rebel against buggy software. What happened was that investors rebelled against management gambling with their money. This change is inflicting fiscal responsibility and accountability on management. It is not uncommon today to have the chief financial officer (CFO) in charge of most undertakings of any size.

Fact: Nobody seems to feel lucky right now.

The first task is usually to cut costs, adjust the margins, and calm investors. Along with the CFO come the auditors. It is their job to find out what the information technology (IT) department is, what it does, and whether it is profitable. If it is not profitable, it will either be made profitable or it will be cut. The financial managers are quick to target waste in all its forms.

Slowing Down

The 1990s were a time of rapid growth, experimentation, and great optimism. We were always eager to buy the "next" version every time it became available, without considering whether we really needed it. It was sort of the I-feel-lucky approach to software procurement. We kept expecting "better" products, even though what we got were "different" products. But we kept buying these products, so we perpetuated the cycle. There always seemed to be a justification for buying the next upgrade. A new term, shelfware, was coined to describe software that was purchased but never installed.

Further, even software that did get installed was rarely fully utilized. Studies showed that users rarely used more than 10 percent of the functionality of most common business software. There was obviously feature bloat.

Fat client/server applications were quickly replaced by lightweight, limited-function, browser-based clients. Most users never missed the 90 percent of the functions that were gone, but they appreciated the fast response, anytime, anywhere.

Getting More from What We Have

It seems that there is a limit to how small transistor etchings on a silicon wafer can get. To make microchips, Intel and AMD etch a pattern of transistors onto a silicon wafer, and the more you cram onto a chip, the smaller everything gets. Electrons carry the 0 and 1 information through the transistors that power our current computers' computing capabilities. When the transistors approach atomic dimensions, electrons can no longer flow through them reliably.

Fact: Many prognosticators believe that the dominance of Moore's law is coming to an end.

In addition, the cost of producing "Moore" complex chips is rising: as chips become more complex, they cost more to manufacture, and Intel and AMD now spend billions to build fabrication plants.

With silicon chips nearing the end of their feasibility, scientists and engineers are looking to the future of the microprocessor. Chip makers are now focusing on the next generation of computing. But it is going to be expensive to ramp up new technologies like DNA computers and molecular computers.

DNA computing is a field that aims to create ultra-dense systems that pack megabytes of information into devices the size of a silicon transistor. A single bacterium cell is about the same size as a single silicon transistor, yet it holds more than a megabyte of DNA memory and has all the computational structures it needs to sense and respond to its environment. DNA computers and molecular computers do not use electrons and 0/1 bits, and because they work on many possibilities in parallel, they can solve certain complex problems faster than transistor-based microchips. So, in the meantime, we will probably have the chance to create some new uses for the technology that we have.

Fact: We are not buying.

Microsoft's corporate vision statement was "A PC on every desktop," and the company has come a long way toward achieving this goal. However, the indications are that hardware prices won't fall much lower, and even though the price of some software is going up, sales are falling.

When Microsoft introduced the Windows 2000 operating system, it failed to sell at the rate the company had expected; the climate had begun to change. In the following year, Microsoft Office XP, with its short-sighted and inflexible licensing, also failed to gain acceptance. Most of us decided not to upgrade.

In the 1990s, developers successfully argued for investing in better development tools rather than in a better test process, on the grounds that better tools would build a better product. And because most of the quality improvements of the past 10 years did come from standardization and development process improvements, they usually got what they wanted.

However, the real product failures had to do with products that missed the mark on functionality, applications that simply did not run well in large systems, and systems that were so costly to maintain that they lost money in production. These are failures that development tools cannot fix, but they are failures that testing can identify so that they can be fixed or avoided.

In today's climate, the financial people will not allow that server from last year to be tossed out until it has been fully depreciated. Nor will they approve the purchase of new operating systems or office software without a cost-benefit justification. Customers are not in the mood to go out and spend lots of money upgrading their systems either.

Note: Testers, here is our chance!

When consumers are no longer willing to buy a new product just because it is "new," things are starting to change. When consumers demand reliability over features and cost, the quality balance shifts back from trendy first-to-market toward reliability. The value of using formal methods and metrics becomes the difference between the companies that survive and the ones that fail.

With so many groups competing for budget, the test group must be able to make a compelling argument, or it will become extinct. A test manager who can make a good cost-benefit statement for the financial folks has a chance. The bottom line for testers is that the test effort must add value to the product. Testers must be able to demonstrate that value.

Note: The way to develop a good cost-benefit statement, and to add real credibility to software testing, is to use formal methods and good metrics.
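As a minimal sketch of what such a statement might look like, the calculation below compares the cost of a test cycle with the cost of letting the same defects reach customers. Every number is a hypothetical placeholder; a real statement would use the organization's own defect counts and cost metrics.

```python
# A minimal, hypothetical cost-benefit sketch for a test effort.

test_effort_cost = 120_000          # assumed cost of staff, tools, and lab time
defects_found_in_test = 300         # assumed defects found and fixed before release
cost_to_fix_in_test = 150           # assumed average cost per defect found in test
cost_to_fix_in_production = 2_500   # assumed average cost if a customer finds it

cost_with_testing = test_effort_cost + defects_found_in_test * cost_to_fix_in_test
cost_without_testing = defects_found_in_test * cost_to_fix_in_production

savings = cost_without_testing - cost_with_testing
benefit_to_cost = cost_without_testing / cost_with_testing

print(f"Cost of the test effort and early fixes: ${cost_with_testing:,}")
print(f"Cost if the same defects reached production: ${cost_without_testing:,}")
print(f"Estimated savings: ${savings:,} (benefit-to-cost ratio {benefit_to_cost:.1f}:1)")
```

A real statement would also discount for defects that would never have reached customers and would add the production costs that are hardest to quantify (lost sales, support load, damaged reputation), but even a rough version like this speaks the auditors' language.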

Regardless of the cause, once a software maker has decided to use formal methods, it must address the question of which formal methods and metrics to adopt. Once methods or a course toward methods has been determined, everyone must be educated in the new methods. Moving an established culture from an informal method of doing something to a formal method of doing the same thing takes time, determination, and a good cost-benefit ratio. It amounts to a cultural change, and introducing culture changes is risky business. Once the new methods are established, it still takes a continuing commitment from management to keep them alive and in use.

In ancient times this was accomplished by fiat, an order from the king. If there were any kings in the 1990s, they must have lived in development. Today, however, it is being accomplished by the CFO and the auditors.

Guess What? The Best Methods Haven't Changed

The auditors are paid to ask hard questions. They want to know what things are, what they do, and what they cost, and they are paying attention to the answers. And, since the financial folks use a very stringent set of formal methods in their own work, they expect others to do the same.

What the Auditors Want to Know from the Testers

When a product is tested, the auditors want to know:

  • What does the software or system do?

  • What are you going to do to prove that it works?

  • What are your test results? Did it work in the required environment? Or did you have to tweak it?

Clearly, the test methods used need to answer these questions. Before we try to determine the best methods and metrics to use to ensure that proper, thorough testing takes place, we need to examine the challenges faced by testers today.


