Testing Visual Basic Code

When you're writing a piece of code in any language, it is important to continually ask yourself, "How am I going to test this?" There are several general approaches that you can take.

Partner with Another Developer

One good approach to testing is to partner with another developer with the understanding that you will test each other's code. Then, as you type, you will be asking yourself, "How would I test this if I were looking at this code for the first time? Would I understand it, and would I have all the information that I needed?" Some questions are inevitable, but knowing from the outset that somebody else is going to perform a unit-level test on your code, without sharing your assumptions or shortcuts, is a powerful incentive to write it clearly and carefully. How many times have you spent ages looking through your own code to track down a bug, only to spot it as soon as you start to walk through it with another developer? This is because we often read what we think we have written rather than what we actually have written. It is only in the process of single-stepping through the code for the benefit of another person that our brains finally raise those page faults and read the information from the screen rather than using the cached copy in our heads. If you're looking at somebody else's code, you don't have a cached copy in the first place, so you'll be reading what is actually there. One further benefit of this approach is that it will prompt you to comment your code more conscientiously, which is, of course, highly desirable.

Test as You Go

Testing as you go has been written about elsewhere, but it is something I agree with so strongly that I'm repeating it here. As you produce new code, you should put yourself in a position where you can be as certain as possible of its behavior before you write more code that relies on it. Most developers know from experience that the basic architecture needs to be in place and stable before they add new code. For example, when writing a remote ActiveX server that is responsible for handling the flow of data to and from SQL Server, you will need a certain amount of code to support the actual functionality of the server. The server will need some form of centralized error handler and perhaps some common code to handle database connections and disconnections. If these elements are coded but left untested while development continues on the actual data interfaces, far more things can go wrong the first time you try to run the code. It's common sense, I know, but I've seen this sort of thing happen time and again.
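
To make the point concrete, here is a minimal sketch of the kind of common routine I mean. The names (GetConnection, ERR_SOURCE) are my own inventions for illustration, and the code uses late-bound ADO; treat it as a sketch rather than a definitive implementation:

    ' Hypothetical shared connection routine with a centralized error
    ' handler. Every caller goes through this one point, so it can be
    ' tested thoroughly before any data interface code is built on it.
    Private Const ERR_SOURCE As String = "DataServer.GetConnection"

    Public Function GetConnection(ByVal sConnect As String) As Object
        On Error GoTo ErrorHandler
        Dim oConn As Object
        Set oConn = CreateObject("ADODB.Connection")
        oConn.Open sConnect
        Set GetConnection = oConn
        Exit Function
    ErrorHandler:
        ' Re-raise with a consistent source so that all callers can
        ' log and report connection failures uniformly.
        Err.Raise Err.Number, ERR_SOURCE, Err.Description
    End Function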

The first and most obvious way to test a new piece of code is to run it. By that, I don't mean just calling it to see whether the screen draws itself properly or whether the expected value is returned. I mean single-stepping through the code line by line. If this seems too daunting a task, you've already written more code than you should have without testing it. The benefit of this sort of approach is that you can see, while it's still fresh in your mind, whether the code is actually doing what you think it's doing. This single concept is so important that Steve Maguire devotes an entire chapter to it in his book Writing Solid Code (Microsoft Press, 1993).

TIP
Sometimes you will need to code routines that perform actions that are difficult or impossible to reverse, and when such routines fail, they might leave your application in an unstable state. An example might be a complicated file moving/renaming sequence. Your ability to test such code is limited if you know it might fail for reasons beyond your control. If you can predict that a sequence of operations might fail and that you can't provide an undo facility, it helps the user to have a trace facility. The idea is that each action performed is written to a log window (for example, a text box with its MultiLine property set to True). If the operation fails, the user has a verbose listing of everything that has occurred up to that point and can therefore take remedial action.
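
Here is a sketch of the idea, assuming a form with a text box named txtTrace whose MultiLine property is set to True (the control name and the calling code are illustrative only):

    ' Append a timestamped line to the trace window so that the user
    ' has a record of every action performed so far.
    Public Sub TraceAction(ByVal sAction As String)
        txtTrace.Text = txtTrace.Text & Format$(Now, "hh:nn:ss") & _
            "  " & sAction & vbCrLf
    End Sub

    ' Typical usage during a hard-to-reverse file sequence:
    '     TraceAction "Renaming " & sOldName & " to " & sNewName
    '     Name sOldName As sNewName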

Create Regular Builds

I have always been a fan of regular system builds. They force the development team to keep things tidy. If everybody knows that whatever they are doing is going to have to cooperate with other parts of the system every Friday (for example), it is less likely that horrendously buggy bits of half-completed code will be left in limbo. Code left in this state will sometimes not be returned to for several weeks, by which time the original developer will have forgotten which problems were outstanding and will have to rediscover them from scratch.

If I'm writing a set of remote ActiveX servers, I will generally try to have a new build ready each Monday morning for the other developers to use. If I'm working on a large GUI-based system, I will probably aim for a build every other week. It's hard to be precise, however, because there are always influencing factors and, of course, you need enough staff to do this. If you are part of a team development, I suggest you discuss this among yourselves at the beginning of the coding cycle so that you can reach a group consensus on the best policy for your particular project. It is likely that you will get some slippage, and you might well decide to skip the occasional build while certain issues are resolved, but overall, creating regular builds will allow everybody to get a feel for how the project is shaping up.

If you consider yourself to be a professional developer or part of a development team, you should be using a source code control system (for example, Microsoft Visual SourceSafe). I recommend that you only check in code that will not break a build. This helps maintain the overall quality of the project by keeping an up-to-date, healthy version of the system available at all times.

Write Test Scripts at the Same Time You Code

Having stepped through your code, you need to create a more formal test program that will confirm that things do indeed work. Using a test script allows the same test to be run again in the future, perhaps after some changes have been made. The amount of test code that you write is really a matter of judgment, but what you're trying to prove is that a path of execution works correctly and that any error conditions you would expect to be raised are raised. For critical pieces of code (the mortgage calculation algorithm for a bank, for example), it might be worthwhile to have the code written a second time (preferably by somebody else) and then compare results from the two versions. Of course, there is a 50 percent chance that any discrepancy lies in the test version of the algorithm rather than the "real" version, but this approach does provide a major confidence test. I know of a company that was so concerned about the accuracy of one algorithm that it assigned three different developers to code the same routine. As it happened, each piece of code produced a slightly different answer. This was beneficial because it made the analyst behind the routine realize that he had not nailed down the specification tightly enough. This is a good example of the prototype/test scenario.
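
As a simple illustration, a test script for a mortgage calculation might look like the sketch below. CalcMonthlyPayment and the expected figure are inventions for the example; the point is that the script states its expected result up front and can be rerun unchanged after every modification:

    ' Hypothetical test for a mortgage calculation routine. The
    ' expected value should be worked out independently of the code
    ' under test; that is what gives the comparison its value.
    Public Sub TestCalcMonthlyPayment()
        Const EXPECTED As Currency = 706.78@  ' 100,000 at 7% over 300 months
        Dim cResult As Currency
        cResult = CalcMonthlyPayment(100000, 0.07, 300)
        If Abs(cResult - EXPECTED) > 0.01 Then
            Debug.Print "FAIL: expected " & EXPECTED & ", got " & cResult
        Else
            Debug.Print "PASS: CalcMonthlyPayment"
        End If
    End Sub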

Decide Where to Put Test Code

This might seem like a strange heading, but what we need to consider is whether the nature of the development warrants a dedicated test harness program or whether a bolt-on module to the application itself would be suitable. Let's examine this further.

A major component, such as a remote ActiveX server, has clearly defined public interfaces. We want to test that these interfaces all work correctly and that the expected results are obtained, and we also need to be sure that the appropriate error conditions are raised. Under these circumstances, it is most suitable to write a separate application that connects to the remote server and systematically tests each interface. However, suppose a small, entirely self-contained GUI-based application is being created (no other components are being developed and used at the same time for the same deliverable). In this case, it might be more appropriate to write the test code as part of the application itself but have the test interfaces (for example, a menu item) be visible only if a specific conditional compilation flag is defined.
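
For the second case, Visual Basic's conditional compilation makes this straightforward. In this sketch, TEST_BUILD is a flag you would define in the Conditional Compilation Arguments field of the project properties, and mnuTest is a hypothetical menu control:

    Private Sub Form_Load()
        ' The test menu is made visible only when the project is
        ' built with TEST_BUILD = 1; ordinary builds hide it.
        #If TEST_BUILD Then
            mnuTest.Visible = True
        #Else
            mnuTest.Visible = False
        #End If
    End Sub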

Ensure Source Code Coverage During Testing

A series of test scripts should, of course, exercise every single line of code in your application. Every time you add an If statement or a Select Case statement, the number of possible execution paths multiplies. This is another reason why it's so important to write test code at the same time as the "real" code; it's the only way you'll be able to keep up with every new execution path.
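
To see how quickly the paths multiply, consider three independent branches (the flags and procedure names here are purely illustrative):

    ' Each independent two-way branch doubles the number of possible
    ' execution paths: these three If statements alone give eight
    ' paths, and ten such statements would give 1,024.
    If bNewCustomer Then ApplyIntroDiscount
    If bLargeOrder Then ApplyBulkDiscount
    If bExportOrder Then AddExportSurcharge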

The Visual Basic Code Profiler (VBCP) add-in that shipped with Visual Basic 5 can report the number of times each line of code is executed during a run. Using VBCP while testing your code allows you to see which lines have been executed zero times, so you can quickly identify the execution paths that have no coverage at all.

Understand the Test Data

This is an obvious point, but I mention it for completeness. If you are responsible for testing a system, it is vital that you understand the nature and meaning of whatever test data you are feeding into it. This is one area in which I have noticed that extra effort is required to coax the users into providing the necessary information. They are normally busy people, and once they know that their urgently needed new system is actually being developed, their priorities tend to revert to their everyday duties. Therefore, when you ask for suitable test data for the system, it should be given to you in documented form as a clearly defined set of data to be fed in. This pack of test data should also include the set of results expected from it. The data should be enough to cover the various stages of testing (unit, integration, and system) for which the development team is responsible. You can bet that when the users start user acceptance testing, they will have a set of test data ready for themselves, so why shouldn't they have a set ready for you? Coax them, cajole them, threaten them, raise it in meetings, get it documented, but make sure you get that data. I realize that if you are also the developer (or one of them), you might know enough about the system to be able to create your own test data on the users' behalf, but the testing process should not make allowances for any assumptions. Testing is a checking process; it is there to verify that you have understood the finer points of the design document. If you provide your own test data, the validity of the test might be compromised.

Get the Users Involved

The intended users of a system invariably have opinions while the system is under development and, if given the opportunity to express these opinions, can provide valuable feedback. Once a logical set of requirements starts to become a real-life set of windows, dialog boxes, and charts that the user can manipulate, ideas often start to flow. This effect is the true benefit of prototyping an application because it facilitates early feedback. It is inevitable that further observations will be forthcoming that could benefit the overall usability or efficiency of the finished result. Unless you are working to very tight deadlines, this feedback should be encouraged throughout the first half of the development phase (as long as the recommendations that users make are not so fundamental that the design specification needs to be changed). A good way of providing this allowance for feedback is to make a machine available with the latest system build that is accessible to anybody. This will allow people to come along at any time and play. This is a very unstructured approach, but it can lead to a lot of useful feedback. Not only can design flaws be spotted as the system progresses, but other pairs of eyes become involved in the debugging cycle.

To make this informal approach work, it is necessary to provide a pile of blank user feedback forms that anybody can fill out and leave in a prearranged in-tray for the attention of the development team. A nominated individual should be responsible for maintaining a log of these feedback reports and should coordinate among the development team any actions that arise out of them. I've included a sample feedback form on the accompanying CD (see the list of Word templates at the beginning of this chapter). Of course, a more elegant and up-to-date approach would be to use an intranet-based electronic form that captures all such feedback and bug reports.

Having extolled the virtues of allowing the users to give you continual feedback, I must point out one disadvantage with this approach. If the currently available build is particularly buggy or slow (or both), this could quite possibly cause some anxiety among the users and thus could earn the system a bit of bad publicity before it gets anywhere near to going live. Again, common sense is the order of the day. Some users are familiar with the development cycle and will take early-build system instabilities in their stride, but others won't. Make the most of the users and the knowledge that they can offer, but don't give them a reason to think that the final system will be a dog!

Track Defects

I mentioned earlier the importance of good project management, and now we return to that concept. Depending on the size and structure of your project, the responsibility for keeping a record of defects will rest either with a dedicated test lead or with the project lead (who is therefore also the test lead). Developers will find bugs in their own code and in many cases will fix them there and then. However, some bugs will be too elusive to track down quickly, or there might not be time to fix them, so they should be raised with the test lead. Faults will also be raised by the users during their own testing, as well as by anybody else involved in the test program. Unfortunately, it is quite possible that faults will continue to be found after the application has been released.

A suitable defect-tracking system will allow for the severity of defects to be graded to different levels (from show-stopper to irrelevant), and for each defect to be allocated to a specific member of the development team. Ideally it should also tie in with the local e-mail system. It is important to maintain an efficient means of tracking all defects so that the overall health of the project can be continually monitored. Toward the end of a development of any size there is normally considerable pressure from the business for it to be released. Before this can happen, however, the project lead and the user lead will need to continually review the defect status list until a point is reached when the user lead is satisfied with the correctness of the system. This can only be properly achieved by maintaining a thorough, central log of the current health of the system.
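
Even a homegrown tracking system can support this kind of grading and allocation. As a sketch, the record behind each logged defect might look something like the following (all names are illustrative):

    ' Illustrative severity grading, from show-stopper to irrelevant.
    Public Enum DefectSeverity
        sevShowStopper = 1
        sevSevere = 2
        sevModerate = 3
        sevCosmetic = 4
        sevIrrelevant = 5
    End Enum

    ' One entry in the central defect log.
    Public Type DefectRecord
        lDefectID As Long
        sSummary As String
        eSeverity As DefectSeverity
        sAssignedTo As String    ' Team member the defect is allocated to
        dtRaised As Date
        bClosed As Boolean
    End Type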
