There are many skills involved in the development and delivery of successful Microsoft Access 2002 applications. The database designers need to understand the principles of relational database design, so that they can design the tables that will hold the data and the relationships between those tables. The application developers need to have a feel for graphical user interface (GUI) design, so that the forms they design for users to interact with will be intuitive and easy to use. They will also need to understand both SQL (Structured Query Language) and VBA, so that they can write queries and procedures that not only return the correct data or perform the required task, but also do so quickly and efficiently.
There are other less technical (but no less complex) skills to master. Analysts need to be able to understand the business requirements of the users for whom the application is being designed, and to translate these requirements into a design specification from which the developers can work. Technical documenters need to be able to articulate how the application works, to anticipate the confusion that users might experience, and to express their thoughts clearly in documentation that is both accessible and informative. Test engineers need to be rigorous in their approach, perhaps using formal methodologies to check for errors, and must not take anything for granted when evaluating the application. Last, but certainly not least, project managers need to know how to monitor progress and track resource usage to ensure that the application is delivered on time and within budget.
Sometimes, if the application being developed is large-scale or complex, then there will be many different people involved in the application development lifecycle. Some will be responsible purely for analysis or design, others will work solely on designing queries or developing forms, and yet others will be responsible for other tasks, such as migrating legacy data into the Access database or producing user documentation. But at other times, particularly if the application is less complex, or if resources (such as money or people) are scarcer, then it is not uncommon for a single person to undertake several of these tasks. Indeed, in many situations, a single person can be responsible for the entire analysis and development process.
Irrespective of the number of people involved, or the development methodology employed, the development lifecycle for an Access application will typically involve the following steps:
Analysis → Design → Coding → Testing → Documentation → Acceptance → Review
In practice, however, these steps do not rigidly follow one after another. There can be significant overlaps, and the project can iterate through some of these steps before progressing on to others. It is beyond the scope of this book to enter into a detailed discussion of different project lifecycle plans. However, it is undoubtedly true that the speed with which Access forms and reports can be produced makes Access an excellent tool for use in a more iterative lifecycle model. In such a situation, the project loops repeatedly through the analysis, design, coding, and testing steps before progressing to acceptance and review.
Irrespective of the type of project lifecycle model, the first stage, and one of the most important to get right, is inevitably one of analysis. Without adequate analysis you will not be able to determine what the user wants from the application, the technical infrastructure within which the application will be implemented, and the constraints imposed by the data with which you will be working. Repairing the damage done by inadequate analysis, at a later date, can prove very costly or even kill a project completely.
The starting point for creating any successful Access application is to have a clear understanding of what the users of an application want out of it. You need to know this before you can even start to think about how to design any solution. The sorts of questions you will need to ask in this stage include, among others:
What is the business process we are trying to automate?
What benefit is the new application designed to achieve? How will we measure the benefit?
Do we simply want to automate the existing process, or restructure the process and automate it?
Will the application have to interoperate with other existing or planned systems or processes?
What volume of data will the application be expected to handle?
How many users will be using the system at the same time (concurrently)? How many in total?
What is the anticipated mix of insert and update activity compared to query and reporting activity?
The problem is that the only people who can answer these questions are the customers who will use the finished application, and sometimes it can prove difficult to get answers out of them. It might be that the demands of their current business are so pressing that they have little time to answer questions about some future application. It might be that the sensitivities of internal office politics will make them unwilling to be too helpful in designing an application in which they feel they have too little ownership. Or it may be that they are trying to be helpful, but just don't know the answers to these questions, because it is something they have never thought about. Don't feel bashful about asking questions, however stupid some of them might sound. What might seem illogical and "obviously wrong" to an outsider might turn out to be a vital, but unspoken, business practice that simply must be implemented in order for the application to be acceptable. Once you think you understand a process it is often useful to run it past the client again for confirmation. Use these discussions to prompt further questioning to fill in any gaps. In any case, it is vital to try to approach this phase of the project with as few preconceptions as possible.
Requirements analysis is a skilled art and many organizations fail to appreciate the fact that good developers do not necessarily make good analysts. In fact, in many ways, developers make the worst analysts. By their very nature, good developers are constantly looking for detailed solutions to problems. Someone mentions a business requirement and you can almost hear the cogs whirring in their brains as their eyes glaze over and they start working out how they will produce a solution to that requirement, without stopping to ask what the value of satisfying that requirement is, or even whether the requirement truly exists. That is not what you want from an analyst. You want an analyst to be able to take an objective look at the requirement expressed by the user, to check that they understand it correctly, to ask what the relevance of this requirement is to the business, to determine the metrics (ways of measuring) by which the successful implementation of the requirement can be judged, and to express that requirement in a way that other parties involved in the project will understand.
A variety of tools and methods are available to assist the requirements analysis process. For example, JAD (Joint Application Development) is a technique that assists requirements definition by bringing all of the various parties who are interested in the development together in intense off-site meetings to focus on the business problem to be solved rather than worrying about specific technical issues.
Whether you use such techniques is up to you. What is important is that you value the requirements analysis process. This phase of a project is so important because it is the fundamental mechanism for defining both the scope of the project and the critical success factors that show when the project has achieved its requirements. It forms the basis for the contract between the users and the developers, and is the touchstone to be used when resolving conflict or confusion later on in the project lifecycle.
Ironically, the importance of sound requirements analysis is most clearly seen in its absence. When requirements are not properly defined or documented, one of two consequences almost inevitably follows. Either the requirements remain unmodified, with the result that the application fails to achieve its client's objectives; or the requirements are modified later in the development cycle. Late changes such as these can have a huge impact on project costs as their effects 'ripple' out and affect other areas such as documentation, design, coding, testing, personnel assignments, subcontractor requirements, and so on. Indeed, some studies indicate that such changes can be 50 to 200 times more expensive than they would have been if they had been made at the appropriate time!
Prototypes, pre-development versions, proofs of concept, nailed-up versions: it doesn't matter what name you give them; they are all attempts to further refine the analysis phase of the project. Access used to be seen as being "only" a prototyping tool with the "real" development given over to something more "industrial strength". This is no longer the case, however (if indeed it ever really was). Access is perfectly up to the job of all but the most demanding projects. It still makes a marvellous tool for prototyping though.
A prototype is a scaled-down version of the final application, which can be achieved at low cost and within a short timescale. It may have certain functionality incomplete or missing entirely. It may even implement only a tiny part of the whole project. The whole point of the prototype is to test those areas of the design that are currently uncertain. This may include trying a number of different methods to achieve the desired functionality, alternative GUI designs to see which is easiest to use, or sample queries with test data to determine what kind of hardware platform will be required to attain the desired performance.
One thing you should never do is treat the prototype as v1.0 of the application. This is invariably a recipe for disaster. The temptation is to take the prototype and keep developing it without first going through all the other formal processes described below. You should always see the prototype as part of the analysis phase of the project and not the end of it; it exists to ask questions and to get them answered. If certain parts of the prototype later make their way into the final application (the GUI design would be a good example) then all well and good, but ideally you should plan and cost the work separately.
As well as determining the nature of the solution required by the users of the application, it is also necessary to determine the technical infrastructure that will support this solution. The types of questions posed are ones such as these:
What operating system will the application run on?
Will we need different versions of the application for different clients (for example, one for customers and one for managers)?
What is the specification (that is, in terms of processor, memory, disk space) of the machines that the application will run on?
What type of network will connect the computers? Will lack of available bandwidth prove a problem?
What security policy will the application need to operate under?
What type of fault tolerance or recovery issues will need to be considered?
The purpose of the technical analysis should be to produce an application architecture and implementation framework within which the resultant application will nestle. Again, it may well turn out that developers are not the best people to undertake this type of analysis. Technical analysis requires a good understanding of networking, security, and related infrastructure issues, and these skills may not be present in all of your developers.
By this stage, you should have a contract in place that defines what the application is meant to provide and you will probably have a good idea of the technical infrastructure within which the design will be implemented. The next stage is to analyze the data that you will be working with. Now, I must confess that I frequently find this task less than stimulating, but I know that it is imperative if I am to achieve a sound database design. As tedious as data analysis is, it sure beats the pants off rewriting an application because a fundamental misunderstanding of the underlying data only comes to light two weeks before the project is due to be delivered.
Now this is not a primer on data analysis. Although simple enough in theory, data analysis can be quite complex in practice and if you are new to the subject, you would do well to get some specialized training in this discipline.
One term you are likely to come across again and again is normalization. This is a formal technique for the elimination of dependencies in our data. This, in turn, realizes the twin benefits of reducing redundancy and minimizing the opportunities for inconsistencies to be introduced into our data. Some of the principles of normalization are intuitive and you probably already follow them (like trying not to store a company's address in two different tables), but it can take a high degree of skill and substantial experience to know when and how to apply some of the more detailed rules, and the gains can be elusive. Because of this we will not attempt to cover the subject here. In any case, for smaller projects, formal normalization is rarely useful.
If you are likely to be doing a lot of data analysis or are simply interested in the subject then one of the most authoritative discussions of the theory of normalization can be found in An Introduction to Database Systems by CJ Date (Addison-Wesley, 1995, ISBN 0-201-54329-X). A less theoretical (and thus much more accessible) approach can be found in Database Design for Mere Mortals: A Hands-On Guide to Database Design by Michael J. Hernandez (Addison-Wesley, 1997, ISBN 0-201-69471-9) and Professional SQL Server 2000 Database Design by Louis Davidson (Wrox Press, 2001, ISBN 1-861004-76-1).
The principles behind sound data analysis are straightforward enough:
Identify all of the data entities you will be dealing with
Establish the attributes of these entities
Define the relationships between these entities
Document, document, document
As with the requirements analysis and the technical analysis, a variety of methods and tools can be employed to assist in the task of data analysis. Whichever you choose to employ, I would encourage you to bear the following two principles in mind.
First, when selecting a tool, it is paramount that you choose one that allows you to clearly document the results of the analysis in formats that everyone involved can understand. Remember that you may have to present your designs to a wide range of people, from clients, through developers, and possibly up to management; they will all have different technical abilities and focus, so bear in mind that you will need to be able to easily vary the level of detail included. Your primary audience, however, is technical, so use diagrams to illustrate the relationships between your entities, by all means, but don't forget the fine detail (however boring it might be to gather!). Complex entity relationship diagrams may look very impressive and professional, but if you can't use your documentation to tell you whether Widget Part Codes are 8 or 9 characters long, then you are going to struggle.
Second, I have very seldom come across data analysis that has suffered from being too detailed. Document everything: data types, field lengths, allowable values, calculated values. Get it all down on paper, and do so while it is fresh in your mind. If there is something you are not sure about, don't guess. Go back, check it out, and write it down. The temptation is always there to wrap up the data analysis early and get on with the fun part of the project (design and development). Resist the temptation. You'll thank yourself for it later on.
So then, now that the analysis is out of the way, it's time to get on with coding. Right? Wrong! As tempting as it might be to just plunge in and start coding straight away, you first need to spend some time deciding on an appropriate design for the application. One of the chief aims of the design process is to establish the blueprints from which the application can be built. A few of the issues that you will need to consider when designing the solution are:
Data Storage / Location
Import / Export Mechanisms
But design is not just about establishing an immutable set of blueprints. Successful applications are normally those where the application designers have designed for change.
The concept of "Designing for Change" was first discussed by David Parnas in the early 1970s. It is a principle that recognizes the fact that, however good the analysis has been, a number of influences will frequently arise during the lifetime of a project that necessitate change after the initial design has been completed. It might be a change in the legal or business environment in which the customers operate, a change in available technology, or simply a change in understanding on the part of either the customer or the developer. The purpose of designing for change is to ensure that these changes can be accommodated into the project with the minimum possible disruption or delay.
Three of the most important techniques involved in designing for change are described below:
Some issues are more liable to change than others during a development project. These include business rules, file formats, sequences in which items will be processed, and any number of other difficult design areas. The first step is to identify all such volatile areas and document them.
Once these issues have been listed, you can employ information hiding. The principle here is to wrap up, or encapsulate, these volatile issues in a module or procedure that hides, or partitions off, the complexity or volatility of the processes involved. These modules or procedures should have an interface that can remain the same, irrespective of any changes that may occur within the module or procedure as a result of any of the influences we identified earlier. If a change occurs, it should only affect that module or procedure. Other modules that interact with it should not need to be aware of the fact that anything has changed. This is often called "black box" coding, in that "stuff goes in" and "stuff comes out" but what happens in the box is hidden.
For example, you might be designing an application that is used for sending out pre-renewal reminder notices to customers prior to the expiry of their insurance policies. Perhaps a business rule states that pre-renewal notices are to be sent out 2 months before expiry. This is just the type of rule that could easily change and a good design will account for this. Accordingly, a procedure could be written which encapsulated that rule and which was invoked whenever various parts of the application needed to know when the reminder should be sent. Changes to the business rule would only need to be incorporated in a single procedure; the procedure would still be invoked in the same way and yet the effects of the change would be available throughout the application. We will look at this subject in more detail when we examine the use of classes in Access in Chapter 13.
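As a minimal sketch of such a procedure (the names ReminderDate and REMINDER_LEAD_MONTHS, and the module layout, are our own invention for illustration), the encapsulated rule might look like this:

```vba
' Hypothetical module: the pre-renewal business rule lives in one place.
' If the rule changes (say, to three months), only the constant below
' needs to be edited; every caller of ReminderDate is unaffected.
Private Const REMINDER_LEAD_MONTHS As Integer = 2

Public Function ReminderDate(dtExpiry As Date) As Date
    ' Work back from the policy expiry date by the lead period
    ReminderDate = DateAdd("m", -REMINDER_LEAD_MONTHS, dtExpiry)
End Function
```

Any form or report that needs the date simply calls ReminderDate; the "black box" interface stays the same even when the rule inside it changes.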
As well as information hiding, there are other techniques that can assist in reducing the impact of change, and these should be prescribed in a change plan. For example, the change plan might specify that:
Named constants should be used wherever possible in place of hard-coded values.
If the application is to be multi-lingual, then care must be taken to identify and separate out all the text to be used so that it can be easily localized (translated). This may mean allowing additional screen space to accommodate certain languages or considering the use of pictorial icons in place of text.
Settings and configuration options should be stored in the Registry rather than hard-coded within the application itself.
Generic and widely used processes should be identified and grouped together in modules, separate from code with specialized functionality only called by specific parts of an application.
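By way of illustration, here are hedged VBA sketches of the first and third points in the list above (the names WidgetApp, Connection, Server, and MAX_LOGIN_ATTEMPTS are invented for the example; SaveSetting and GetSetting are the built-in VBA functions for writing and reading Registry settings):

```vba
' A named constant in place of a hard-coded value: if the limit
' changes, it is edited here once rather than hunted down in code.
Public Const MAX_LOGIN_ATTEMPTS As Integer = 3

' Configuration stored in the Registry rather than in the code itself.
' SaveSetting and GetSetting work under the key
' HKEY_CURRENT_USER\Software\VB and VBA Program Settings.
Public Sub StoreServerName(strServer As String)
    SaveSetting "WidgetApp", "Connection", "Server", strServer
End Sub

Public Function FetchServerName() As String
    ' The final argument is the default returned if nothing was saved
    FetchServerName = GetSetting("WidgetApp", "Connection", "Server", "(local)")
End Function
```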
One of the best ways to determine which elements to incorporate into a change plan is to perform post-implementation reviews just after a project has been delivered (or post-mortems if they don't get that far!). Identify what changed during the project lifecycle, what impact that change had, and how the impact of that change could have been lessened. Then put that knowledge into your next change plan and make sure you don't make the same mistake twice!
Once your design is complete, you can start to code. That's the part of the process that we will be examining in most detail throughout the rest of this book. We will start by looking at the specifics of the VBA language and the structure of VBA procedures and modules. Then we will look at the Access object model and how this can be manipulated in code. After a short look at some more advanced programming techniques, we will look at how to handle errors that might occur in our application, how to make the best use of class modules, libraries and add-ins, and how to optimize the performance of our application. We will also look at some of the issues we need to be aware of if our application is being used in a multi-user environment and how we can bring some of the power of the Internet to our Access application. Finally, we will look at the finishing touches we can apply to round out our application and give it a more professional look and feel.
There are a number of quality assurance practices that you can apply to your project, but by far the most basic is testing. This involves unit testing (or component testing) where the developer verifies that the code he or she has written works correctly; system testing where someone checks that the entire application works together as expected; and acceptance testing, where the users of the application check that the results the application produces are those they desire (or, at least, that they are those they asked for in the first place!).
The purpose of testing is to break code, to determine ways of making an application misbehave, to expose flaws in either the design or execution of the development process. For this reason, many developers dislike the testing phase (in the same way that many authors dislike the editing phase). If you have spent endless weeks working late to get a tough reporting module finished, if you have missed the ball game for the last four weeks in a row trying to get that import routine to work, if you couldn't make the Christmas party because you were wrestling with a suite of reports that you had to finish, then it is unlikely that you will approach the testing phase with anything other than fear and loathing.
The problem is that testing has a propensity for delivering bad news at the wrong time. The solution is to allow plenty of time for testing, to test early in the development cycle, and to build in plenty of time for reworking code after testing has been completed. Being told that a routine you have written does not produce the right results is seldom welcome news to any developer, but it is a lot easier to bear if the developer is told this early on and knows that there is plenty of time to correct the offending code. Test early and allow for rewrites!
It also bears mentioning that a proper test plan is essential for both system and user acceptance testing, and the basis for this test plan should be the documentation that was produced during the requirements analysis stage. In particular, the test plan should define not just what is to be tested, but also what results the application should generate in response to that testing.
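As a hypothetical illustration of that principle, a simple VBA test routine can pair each input with the output the requirements say it must produce (NetPrice and its flat 10% discount rule are invented purely for the example; Debug.Assert is the built-in statement that halts in the IDE when an expectation fails):

```vba
' Invented function under test: net price is gross less a flat 10%
Public Function NetPrice(curGross As Currency) As Currency
    NetPrice = curGross * 0.9
End Function

' Each Debug.Assert states an expected result drawn from the test
' plan; execution breaks on the first expectation that is not met.
Public Sub TestNetPrice()
    Debug.Assert NetPrice(100) = 90
    Debug.Assert NetPrice(0) = 0
    Debug.Print "TestNetPrice passed"
End Sub
```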
One particularly effective technique that is growing more and more popular is the use of "Use Cases". These provide a method for describing the behavior of the application from a user's standpoint by identifying actions and reactions. For more information on how to produce Use Cases, you might want to have a look at Jake Sturm's VB6 UML Design and Development, ISBN 1-861002-51-3, from Wrox Press.
Documentation is a bit like ironing. It's one of those things we all know we should do, but find boring, and I have yet to meet anyone who enjoys doing it. It's not surprising. I enjoy playing soccer, but I would soon get bored if I had to write a detailed game report every time I played, explaining what tactics we employed, why we employed them, when we scored, and so on. It's the same with documenting development projects. For most developers, the fun is in creating the solution and putting it into action. Writing it up is major-league boredom.
However, few people who play soccer are called upon to remember what color boots they were wearing at a match 2 years ago, what the coach said before the game started, and how many oranges were on the refreshments table at half-time. Programmers are frequently called upon to fix or update code many months after it was first written. Often they will be in the middle of an entirely different project, maybe even 2 or 3 projects down the line. It might not even be their code! It's at times like these that those little notes you made to yourself when you wrote the original code are worth their weight in gold. A few well chosen words can save days or weeks of making exactly the same mistakes you did the first time around.
Yes, I know it is important. I know that I am as likely to benefit from it as anyone else when I revisit my code later. I know that the users have paid for it! I know it makes the difference between a good application and a great application. That's why I do it and why I make sure that everyone working with me does it and does it well. But I am not going to pretend for a moment that I enjoy it!
In practice, the best approach to this time-consuming chore is a mixture of notes and in-line comments made as you write the code, followed by reports and descriptions when you're done. Then all you have to do is check everything through carefully and keep everything up to date every time anything gets changed at the last minute!
Ah, the bliss! It's all over and the users love the application you have written for them. Great! If you have any sense, you will seize the moment and make sure that three things happen.
First, get the users to sign off the project. If you have drawn up a comprehensive requirements definition and have met all of the success factors identified by the users at the start of the project, this should be a formality. But it is no less an important step for all that.
Second, get the users to tell their colleagues about the new application they are using. Many users have very short memories, and it won't be long before the users forget just how bad the manual processes were that they had to rely on before you wrote this application for them and just what a difference this application makes. Get them to sing your praises while they are still hooked. That's when you will get the best recommendations, whether you are collecting them for your company's marketing brochure or for your own personnel review (and, hopefully, pay rise) in three months' time.
Finally, get the users to start thinking about the next release. Some features might have been axed because there wasn't time to implement them; others might have been identified too late to make it into this release; and others might have always been destined for future releases. Once you are convinced that the users love the product you have given them, remind them about what it doesn't do yet!
The final stage is the post-implementation review. This is the point where you look back at the project and decide what worked and what didn't, what caused problems, and how those problems could have been avoided or their impact minimized. Did you hit all of your deadlines? Did all of the intended functionality make it into the final product? How are relations with the customer at the end of it all? What state are your developers in at the end of it all? Given the opportunity, would you do it all again?
The purpose of the post-implementation review is not just to give everyone a chance to whine and moan about what went wrong. Instead, the purpose is to identify the changes that need to be made to your project methodology and practices to make sure that the same problems don't happen again next time. At the same time, it is an opportunity to identify the successes and to make sure that the benefits of these can be reaped by future projects.
A final benefit of conducting post-implementation reviews is that it gives an appropriate opportunity for recognizing the efforts and contributions of everyone who worked on the project. Sincere praise in response to specific achievements is essential to the self-respect of individual developers and the continued morale of the team as a whole.
OK, that's enough for now on the theory behind designing and delivering software projects. If you want to learn some more about this subject there is ample reading material available, but perhaps one of the most interesting books on this subject is Clouds to Code (Wrox Press, 1998, ISBN 1-861000-95-2), in which Jesse Liberty documents the design and delivery of a real project with no holds barred. But this is where we leave behind the theory. From now on, this book will be a hands-on guide with real code examples for you to try out yourself. As we go through the book, we will rapidly find that we are building up a fully functional Access application.