The Challenges a Tester Faces Today

The position and contribution of the tester have been severely eroded since I joined testing in 1987. The average tester today faces quite a few challenges. Many are new to testing, many are experienced testers working with poor budgets, and many face a "testers don't get no respect" climate dominated by ship-date pressures, where testers can easily be seen as a problem rather than as part of the solution. Sadly, this is true both for testers in commercial software development and for testers in the more traditionally formal areas, such as business, safety-critical, and high-reliability software.

No Specification Means No Testing

The first problem in making a convincing case for software testing today is that no one can test without a specification. In software development, the word test is even more misunderstood than the word quality.

Note 

To test means to compare an actual result to a standard.

If there is no standard to compare against, there can be no test. In the survey discussed earlier, only one person in 50 provided the correct definition of the word test. In shops where some form of RAD is in use, people think they are testing. However, since specifications are produced after the software is finished, testing is an impossibility. This is also true for most of the RAD descendants, the Agile methodologies: eXtreme Programming (XP), Lean Development (LD), Adaptive Software Development (ASD), and so on. The one possible exception is the Dynamic Systems Development Method (DSDM). We will discuss the RAD/Agile methodologies, and how to accomplish testing in them, in more detail in the next chapters.[2]

The Institute of Electrical and Electronics Engineers (IEEE) defines test as "a set of one or more test cases." The IEEE defines testing as "the process of analyzing a software item to detect the differences between existing and required conditions [that is, bugs] and to evaluate the features of the software item." This definition invites the tester to go beyond comparing the actualities to a standard (verification) and evaluate the software (validation). Effectively, the definition invites testers to express opinions without giving them guidelines for the formulation of those opinions or tools (metrics) to defend those opinions. The IEEE definition makes testers responsible for both verification and validation without distinction. This practice, when pursued energetically, is more likely to incite riot among developers than it is to lead to quality improvements. To understand what I am getting at, consider the definitions of the words verification and validation.

According to Webster's New World Dictionary, verify means "(1) to prove to be true by demonstration, evidence, or testimony; confirm or substantiate; (2) to test or check the accuracy or correctness of, as by investigation, comparison with a standard or reference to the facts." Verification is basically the same process as testing with a bias toward correctness, as in, "to verify that a thing performs according to specification." Verification answers the question "Does the system do what it's supposed to do?"

Webster's New World Dictionary defines validity as "the state, quality, or fact of being valid (strong, powerful, properly executed) in law or in argument, proof or citation of authority." Validation is the process by which we confirm that a thing is properly executed. Validation requires a subjective judgment on the part of the tester. Such a judgment must be defended by argument, for example, "I think it's a bug because . . . ." Validation answers the question "Is what the system doing correct?" Just because a system was designed to do things a certain way and is doing those things in that way does not mean that the way things are being done is the right way or the best way.

Comparing the system's response to the standard is straightforward when there is a specification that states what the correct system response will be. The fundamental problem with testing in a RAD/Agile environment is that, since there are generally no standards, it is impossible to test. RAD/Agile testers are exploring the software and performing bug finding and validation on the fly. To convince development that something is invalid when there are no standards to quote, one must have a convincing argument and high professional credibility. How much chance does a tester have of convincing development that something is invalid or seriously wrong if they have no metrics and the best argument they can give is "I think it is a bug because I think it is a bug"?

Also, it is virtually impossible to automate testing if there is no standard for the expected response. An automated test program cannot make on-the-fly subjective judgments about the correctness of the outcome. It must have a standard expected response to compare with the actual response in order to make a pass/fail determination.
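To make the point concrete, here is a minimal sketch (in Python; the function and the expected value are hypothetical, invented for illustration) of what an automated check requires: an expected response taken from a standard, against which the actual response is compared to reach a pass/fail verdict.

```python
# Minimal sketch of an automated pass/fail check. The function under
# test and the expected value are hypothetical; the point is that the
# expected result comes from a standard (the specification), not from
# the tester's on-the-fly judgment.

def calculate_order_total(unit_price, quantity, tax_rate):
    """Stand-in for the system under test."""
    return round(unit_price * quantity * (1 + tax_rate), 2)

def test_order_total_matches_specification():
    expected = 107.00  # value stated in the specification
    actual = calculate_order_total(unit_price=50.00, quantity=2, tax_rate=0.07)
    assert actual == expected, f"expected {expected}, got {actual}"

if __name__ == "__main__":
    test_order_total_matches_specification()
    print("PASS: actual response matches the specified standard")
```

Without the `expected` value, the script has nothing to decide with; it can report what the system did, but it cannot say whether that behavior passed or failed.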

Being First to Market: Market/Entrepreneurial Pressures Not to Test

In our entrepreneur-driven first-to-market development environment, managers are eager to cut any costs or activities that don't add to the bottom line. They are also eager to remove any barriers that might negatively impact a ship date. Testing has not demonstrated that it is a requirement for success in the shipped product.

The fact is, it has not been necessary to use formal software test methods or metrics in many parts of commercial software development in order to succeed commercially. The type of software I call commercial software is intended for business and home use, generally on the PC platform; hopefully, it is not used in safety-critical systems. It is software that anyone can buy in a store or over the Internet, such as word processors, graphics programs, and spreadsheets.

Reasons given for not using formal test methods are usually of the form, "We don't need formal methods; we are just a small shop." The general conception seems to be that formal methods have to be written by somebody else and must be specialized and complicated. In fact, formal simply means following a set of prescribed or fixed procedures. The real problem here is the lack of truly productive testing, and it is a cultural problem.

Testing is perceived to be a cost center, not a contributor to the bottom line. So in some shops the perception is that testing doesn't add much value to the product, and if it doesn't add much value, it won't get much funding.

Since most commercial test efforts are typically underfunded and staffed with warm bodies rather than trained testers, mediocre test results are the norm. Over the past several years, I have seen more and more companies disband their software test groups altogether.

The Lack of Trained Testers

One of my first mentors when I started testing software systems had been a tester in a boom-able[3] industry for many years. He explained to me early on how a very good analyst could get promoted to programmer after about five years of reviewing code and writing design specifications; then after about five years in development, the very best programmers could hope for a promotion into the system test group. The first two years in the system test group were spent learning how to test the system.

This situation still exists in some safety-critical shops, but it is not the norm in commercial software shops at all. The simple fact is that few testers or developers have received any training in formal methods, especially test techniques. Dorothy Graham, a noted author in the field of test inspection and tester certification, estimated in the late 1990s that only 10 percent of testers and developers had ever had any training in test techniques. The results of the survey I mentioned earlier support this assertion.

Where do software testers get their training? In the United States, software testers are homegrown, for the most part. The bulk of test training available in North America comes through public and private seminars.

In Europe, a larger percentage of students attending the test seminars have science or engineering degrees than attendees from the United States, but, again, the bulk of software test training is done in public and private seminars. Few metrics are in use even among the better-educated testers.

Few universities offer software testing classes. Even fewer require software testing classes as part of the software engineering curriculum. Unfortunately, this sends the message to business and the development community that software testing is not worthwhile.

Academia is largely uninvolved with the actual business of producing commercial software. Software testing is not the only topic that is missing from the curriculum. Cellular communications, digital video editing, and multimedia development represent other omissions. University instructors are busy teaching well-established subjects and exploring future technologies. Few institutions cover the ground that serves the current needs of industry, such as training the next generation of professional testers.

Traditionally, in the United States, test groups were staffed with computer science graduates looking for entry-level programming positions. But since 1990, we have seen the number of testers with any type of science degree dwindle. People currently being hired to perform testing do not come from a tradition of experimental practice or science or engineering because the entrepreneurs see no need to pay for such people to fill testing positions. This trend is reinforced by the focus on market demands rather than product reliability. Even if the need for these skills were recognized, few formally trained testers would be available.

In the 1990s, finding information on many testing topics was difficult. Few college courses were available on software testing, and only a few conferences were devoted to the subject. Since the advent of the Internet, this situation has changed dramatically. The Internet has made it easy for testers to find a great deal of information on software testing. If you enter "Software+Testing" on your favorite Internet search engine today, you are likely to get hundreds of thousands of matches. But this increased access to information has not improved the overall status of the software tester or the test effort.

I don't think there is one simple answer for this situation; it is the result of several factors. One contributor is that in most companies, testing is not a respected career; it is a phase. Most testers are transients: they are moving through testing to get to something else. For example, it is common for nontechnical personnel or just-out-of-school computer scientists to use a stint in the test group as a bridge into a job in operations or development. So they don't stay in testing.

The poor funding that test groups routinely get today also contributes to it being a phase rather than a career. There aren't enough resources for education (especially the time necessary to go and take a class). Management must consider the questions "Why educate testers if they are just going to move on to other careers?" and "Why spend money on a test effort that probably won't be very good?"

Testing lacks the credibility that it once had. So, as the knowledge level of testers is reduced to intuition and ad hoc methods, the quality of the test effort is reduced. The fact is, the real quality improvements in commercial software are coming about because of the Internet and the international acceptance of standards. Let me explain.

Standards Reduce the Amount of Testing Required

Fact: 

Quality improvements in the 1990s have been driven by standardization, not testing or quality assurance.

I already mentioned how the Web allowed software makers to cut support costs and get bug fixes to users quickly and efficiently, instead of spending more to remove bugs in the first place. Another kind of improvement that has caused testing to be less important is the rapid adoption of standards in our large systems.

When I wrote my first paper on system integration in 1989, I described integrating the system as building a rock wall with my bare hands out of a mismatched combination of oddly shaped stones, wires, and mud. The finished product required operators standing by in the data center, 24/7, ready to stick a thumb or a monkey wrench into any holes that appeared.

Each vendor had its own proprietary thing: link library, transport protocol, data structure, database language, whatever. There were no standards for how various systems would interoperate. In fact, I'm not sure that the term interoperate existed in the early 1990s. For example, when we created online banking at Prodigy, we wanted our IBM system to "talk" to the Tandem at the bank. We had to invent our own headers and write our own black boxes to translate IBM messages to Tandem and vice versa. All the code was new and rightfully untrustworthy. It had to be tested mercilessly.

System modules were written in machine-specific languages; each machine had its own operating system. The modems and routers had their own manufacturer-specific ways of doing things. A message could be broken down and reconstructed a dozen times between the application that built it and the client on the other side of the modem receiving it. Testing a system required that each component be tested with the full knowledge that something as simple as a text string might be handled differently by each successive element in the network.
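For a small illustration of the problem (a sketch using Python's built-in codecs, not the actual systems of that era), the same text string has completely different byte values under ASCII and under an IBM EBCDIC code page, so every hop that re-encoded a message was an opportunity for corruption that had to be tested.

```python
# Sketch: one string, two encodings. IBM mainframes used EBCDIC code
# pages (cp037 here), while most other equipment assumed ASCII, so a
# message translated at each hop could be mangled anywhere along the way.
message = "TRANSFER 100.00"

ascii_bytes = message.encode("ascii")
ebcdic_bytes = message.encode("cp037")  # IBM EBCDIC, US/Canada code page

print(ascii_bytes.hex())   # 5452414e53464552...
print(ebcdic_bytes.hex())  # entirely different byte values

# The tester had to confirm that each element in the chain decoded the
# bytes using the encoding the previous element actually produced.
assert ebcdic_bytes.decode("cp037") == message
```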

Integrating applications into the networks of that day required major-league testing. During my first two years as a systems integrator, my best friend and only tool was my line monitor. I actually got to the point where I could read binary message headers as they came across the modem.

We have come a long way in the intervening years. I am not saying that all manufacturers have suddenly agreed to give up their internal proprietary protocols, structures, and ways of doing things-they have not. But eventually, it all runs down to the sea, or in our case, the Internet, and the Internet is based on standards: IP, HTML, XML, and so on. This means that, sooner or later, everyone has to convert their proprietary "thing" to a standards-based "thing" so that they can do business on the Web. (See the sidebar on standards that have improved software and systems.)

Because of standardization, many of the more technical testing chores, like the ones I performed with my line monitor, are no longer necessary. This has also contributed to management hiring fewer senior technical testers and more entry-level nontechnical testers. The rise of fast-paced RAD/Agile development methods that do not produce a specification the tester can test against has also eliminated many testing chores.

Obviously, there is great room for improvement in the software testing environment. Testing is often insufficient and frequently nonexistent. But valuable software testing can take place, even in the constraints (and seeming chaos) of the present market, and the test effort can and should add value and quality to the product. Our next chapter examines that very topic.

start sidebar
Some of the Standards That Have Improved Software and Systems

Several standards in use today support e-business interoperability. They put information flows into a form that can be processed by the next component in the system, whether that component is a business service, an application, or a legacy system. Open Buying on the Internet (OBI), cXML, and XML/EDI are a few of the most popular business-to-business (B2B) standards in use today. BizTalk, another standardized offering, is a framework of interoperability specifications. BizTalk applications support information flows and workflows between companies, allowing rules to be created that govern how the output of one process is translated, stored, and otherwise manipulated before being sent on to the next component in the flow.

With the adoption of XML, it is now possible to host a Web service on an intranet or the Internet. A Web service is simply an application that lives on the Web and is available to any client that can contract with it. It represents a "standardized" version of an application that can be located, contracted, and utilized (Microsoft calls this "consuming" the Web service) dynamically via the Web.
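As a small illustration (a hedged sketch; the endpoint URL and the XML message format are invented for this example), a client consumes such a service by posting a standards-based request over HTTP and parsing the standards-based reply. Nothing in the exchange depends on either side's proprietary internals.

```python
# Sketch of consuming a hypothetical XML Web service over HTTP.
# The URL and message format are invented for illustration; the point
# is that both sides rely only on shared standards (HTTP and XML).
import urllib.request
import xml.etree.ElementTree as ET

request_xml = b"""<?xml version="1.0"?>
<quoteRequest><symbol>ACME</symbol></quoteRequest>"""

req = urllib.request.Request(
    "http://example.com/services/stockquote",  # hypothetical endpoint
    data=request_xml,
    headers={"Content-Type": "text/xml"},
)

with urllib.request.urlopen(req) as response:
    reply = ET.fromstring(response.read())
    print(reply.findtext("price"))  # any XML-aware client can read this
```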

In the near future we will find that we don't know where our information is coming from as our applications automatically and transparently reach out and query universal description discovery and integration (UDDI) servers anywhere on the planet to locate and contract with Internet-hosted Web services to do X, Y, and Z as part of the application.

Today, bringing up an e-commerce application does not require a line monitor. Nor does it require the exhaustive testing that was required before the Web. Applications have higher reliability from the beginning because they are based on standards. Typically, the availability of a Web-based system is measured in 9s, with 99.999 percent availability being the norm for a commercial system. That translates to roughly five minutes of downtime each year.
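The arithmetic behind the 9s is simple: allowed downtime is the unavailable fraction multiplied by the hours in a year. A short sketch:

```python
# Downtime per year allowed by common availability levels.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

for label, availability in [("three 9s", 0.999),
                            ("four 9s", 0.9999),
                            ("five 9s", 0.99999)]:
    downtime_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"{label} ({availability:.3%}): "
          f"{downtime_hours:.2f} hours/year "
          f"({downtime_hours * 60:.0f} minutes)")
```

Five 9s allows only about five minutes of downtime per year; three 9s allows a little under nine hours.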

What we can do, how we can interoperate, and how reliable our systems are has improved enormously as a result of our adoption of the Internet and its standards.

DEVELOPMENT TOOLS ALSO SUPPORT STANDARDS

Our development tools have gotten a lot better as well. For example, the .NET development environment, Visual Studio .NET, can be set up to enforce design and coding standards and policies on developers through the use of templates. These templates can be customized at the enterprise level. They can impose significant structure on the development process, limit what programmers can do, and require that they do certain things, such as the following:

  • Always use the company-approved name for a specific feature.

  • Always use a certain data structure to hold a certain kind of information.

  • Always use a certain form to gather a certain kind of information.

  • Submit a program module only after every required action has been completed on it, such as providing all the tool tips and help messages.

When this template approach is applied to an enterprise, it eliminates entire classes of bugs from the finished product.
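As a rough sketch of the idea (this is not Visual Studio's actual template mechanism; the rule names and module fields below are invented for illustration), a policy check of this kind simply refuses any module that is missing a required element, so that entire class of omission never reaches the tester.

```python
# Sketch of a policy check in the spirit of enterprise templates:
# a module is rejected unless every required element is present.
# The fields and approved names below are hypothetical.

APPROVED_FEATURE_NAMES = {"OrderEntry", "CustomerLookup"}

def check_module(module):
    """Return a list of policy violations for one program module."""
    violations = []
    if module.get("feature_name") not in APPROVED_FEATURE_NAMES:
        violations.append("feature name is not on the approved list")
    for control in module.get("controls", []):
        if not control.get("tooltip"):
            violations.append(f"control '{control['name']}' has no tool tip")
    return violations

module = {
    "feature_name": "OrderEntry",
    "controls": [{"name": "SubmitButton", "tooltip": ""}],
}
problems = check_module(module)
print(problems or "module accepted")  # -> one violation: missing tool tip
```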

Installation is a matter of copying compiled files to a directory and invoking the executable. Programs do not need to be registered with the system where they are running. The .NET framework contains a standardized application execution manager that controls just-in-time (JIT) compilation and application loading into managed memory. A memory manager ensures that programs run in their own space, and only in their space.

The .NET framework is based on a set of unified libraries that are used by all languages. The result is that all programs use the same set of link libraries, regardless of the language they were written in. Consequently, if a library routine is tested in one module, it can be assumed that it will behave the same way when used by any other module. A string, for example, will always be treated in the same way; each language compiler no longer brings its own set of link libraries, complete with its own different bugs, to the system.

Programmers can write in the language that fits the job at hand and their skill set. The end product will perform the same no matter which language it was written in, because all languages are compiled into a standardized binary file that uses the unified library routines and runs in its own protected memory space.

This architecture is very similar to the one run at Prodigy in 1987 using IBM's Transaction Processing Facility (TPF) operating system and Prodigy's own object-oriented language and common code libraries. It worked very reliably then, and it will probably work very reliably now as well.

end sidebar

[2]I will refer to all the RAD descendants as RAD/Agile efforts for simplicity.

[3]So-called "boom-able" because if something goes wrong, something goes "boom."


