The creation of a new software product often involves large teams of specialists applying complementary skills with a high overhead for coordination. Especially with application software, achieving a successful product requires close coordination with users and eventual customers (including managers and operators) and the effective management of large teams of programmers. Effective software creation is thus an organizational and management challenge as much as a technical challenge.
Example An individual productivity application like a word processor or spreadsheet may require a team of roughly one hundred members turning out a new version every couple of years. A large infrastructure project like an operating system may require the coordinated efforts of one thousand to a few thousand team members creating an upgrade every couple of years. A typical breakdown of these workers would be 30 percent developers (who actually write software programs), 30 percent testers, 30 percent program managers (who generate the specifications and drive the process forward), and 10 percent specialists covering a variety of functions (usability, accessibility, internationalization, localization) as well as architecture design.
Software projects today have reached an order of size and complexity that warrants careful consideration of this process. A significant differentiator among software suppliers is their development process, which can affect added value (see section 3.2), productivity, costs, and time to market. Success, in the sense of creating software that is viable, deployed, and used, is not even assured. A significant number of large software developments fail, often because the development organization lacks the requisite skills and experience.
Since physical limits such as processing power and storage capacity are decreasing in importance (see section 2.3), the most significant constraints on software creation relate to managing complexity, managing development, meeting windows of opportunity, and limited financial and human resources.
The aspect of creating software most closely associated with technology is software development, which refers to the range of activities (including design, implementation, testing, maintenance, and upgrade) surrounding the creation of working software programs (Pressman 2000). Software development processes warrant complete books of their own, so the goal here is simply to convey the range of options and some of their strengths and weaknesses. Specifically, we now discuss three distinct but overlapping approaches (sequential, iterative, and community-based development), followed by the more recent family of agile development processes.
The most obvious approach to defining a software process is to decompose its distinct functions into individual phases that can be performed sequentially, each phase building on the previous one. The resulting model of software development is called a waterfall model (Royce 1970). While classifications differ, table 4.1 shows a typical list. The waterfall model is useful for identifying distinct development activities, but it is oversimplified: it does not recognize that most software today is built on an existing code base, that the phases must overlap rather than proceed strictly in sequence, that there are numerous stakeholders to satisfy (not simply users), or that requirements change throughout a software life cycle. Thus, it is a useful categorization of required functions that provides little clue as to the actual organization of the development process.
Table 4.1 Typical stages of the waterfall model, with the activity and output of each.

Conceptualization
Activity: Develops a vision for what the software is to accomplish, and why it is a good opportunity.
Output: A story that convinces executives the project is worth pursuing and secures a financial commitment to the next phase.

Analysis
Activity: Qualifies the opportunity in sufficient depth and scope to justify a major investment in the remaining phases. Includes defining in more detail the features and capabilities of the software with the assistance of eventual users.
Output: A business plan, including estimates of development, maintenance, and upgrade resources and costs. For software sold externally, customer willingness to pay, assessment of competition, and description of the market window. A commitment to pursue the development, with a detailed development plan.

Requirements
Activity: Develops a detailed definition of features and capabilities, and other characteristics such as performance and usability.
Output: A requirements document, which guides programmers as they implement individual features and helps coordinate their efforts.

Architecture design
Activity: A "divide and conquer" plan in which the overall software is decomposed into pieces called modules that can be implemented (mostly) independently.
Output: A plan identifying modules that can be assigned to development teams, requirements imposed on each module, and how the modules interact and will later be integrated.

Implementation
Activity: Programs individual modules in accordance with the architectural plan.
Output: Working software programs representing each module.

Integration
Activity: Brings all modules together and makes them work together to realize the overall system functionality. Often involves integration with other software acquired rather than developed.
Output: A working software system in its entirety.

Testing and evaluation
Activity: Runs the software through its paces, establishing that it works in accordance with the requirements document or the needs of end users (depending on whether it is specification- or satisfaction-driven).
Output: A software system ready to be deployed and moved into operation.

Maintenance
Activity: Gathers reports of defects or performance issues from operators and users, and makes repairs and improvements (without changing the features and capabilities).
Output: A service release (replacement for the software code) that can be distributed to operators and users.

Upgrade
Activity: Based in part on feedback from the field, adds significant new features and capabilities that track changes in user needs or maintain competitive advantage.
Output: A version release that can be sold to new or existing customers.
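The distinction between specification-driven and satisfaction-driven evaluation noted above can be made concrete with a minimal sketch. The function and requirement below are illustrative assumptions, not drawn from any real project:

```python
# Hypothetical example: a specification-driven test checks the
# implementation against a stated requirement, independent of whether
# users are ultimately satisfied.

def word_count(text: str) -> int:
    """Count whitespace-separated words (the 'implementation')."""
    return len(text.split())

# Requirement R-12 (hypothetical): word_count returns 0 for empty input
# and treats any run of whitespace as a single separator.
def test_requirement_r12() -> bool:
    return word_count("") == 0 and word_count("a  b\tc") == 3

print(test_requirement_r12())  # True: the implementation meets the spec
```

A satisfaction-driven evaluation, by contrast, would put the same function in front of users and ask whether its behavior matches their expectations, something no assertion against a document can capture.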
As the software industry matures, most software is created from an established repertoire of source code available to and mastered by the development organization. The impracticality of wholesale modification or replacement of existing code must be taken into account throughout development, even (at the extreme) in the setting of requirements. Testing, integration, and user feedback should occur early and throughout the implementation. A given project must compete for resources within a larger organization and be coordinated with other projects competing for those resources. The internal and external (user and competitive) environments change, and the process must be flexible to be effective.
In addition, there is increasing emphasis on software reuse and software components (see chapter 7). The idea is to enhance productivity and quality by using software code from other projects or purchased from the outside, and conversely, each project should attempt to generate code that can be reused elsewhere in the organization. Thus, even new projects can use existing software modules, and all projects should take into account the needs of an entire development organization.
The assumption that requirements are static, defined once and for all at the beginning of the project, has several shortcomings. Users' needs and market conditions evolve over time. If the requirements tend to be satisfaction-driven rather than specification-driven (see section 3.2.11), there is significant advantage to refining these requirements based on real-world experience with users. The best way to do this is to involve users throughout the project, not just at the beginning or end. Testing should be conducted with users and should look for compliance with all the value indicators described in section 3.2, not simply uncover defects or match specifications.
Another problem with the waterfall model is the top-down style of development it fosters. Whenever a development effort needs to take into account existing code or components, it needs to be bottom-up rather than top-down. Bottom-up development proceeds from what is already available and constructs new software step-by-step by building on existing software.
Example A "divide and conquer" approach to software architecture attempts to find modules that cater to requirements known from the beginning. However, it is extremely unlikely that any modules identified this way will coincide precisely with existing code.
The waterfall model can be adjusted to follow a bottom-up approach. However, just as top-down approaches fail to build well on existing code, bottom-up approaches fail to meet user requirements fully. Somehow an intermediate ground must be established.
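One common way to establish such an intermediate ground is to define module interfaces top-down from requirements while wrapping existing code bottom-up behind them, using an adapter. A minimal sketch, with all names hypothetical:

```python
from typing import Optional, Protocol

# Interface designed top-down from requirements (hypothetical).
class SpellChecker(Protocol):
    def check(self, word: str) -> bool: ...

# Pre-existing code the organization already owns (hypothetical),
# whose interface does not match the one designed top-down.
class LegacyDictionary:
    def __init__(self) -> None:
        self._words = {"waterfall", "spiral", "module"}

    def lookup(self, entry: str) -> Optional[str]:
        return entry if entry in self._words else None

# An adapter reconciles the two: top-down interface, bottom-up implementation.
class LegacyDictionaryAdapter:
    def __init__(self, legacy: LegacyDictionary) -> None:
        self._legacy = legacy

    def check(self, word: str) -> bool:
        return self._legacy.lookup(word) is not None

checker: SpellChecker = LegacyDictionaryAdapter(LegacyDictionary())
print(checker.check("module"))  # True
print(checker.check("agile"))   # False
```

The requirements see only the `SpellChecker` interface; the existing code is reused unchanged behind it.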
In rethinking the waterfall model, development processes should emphasize characteristics of the resulting software that matter—productivity, quality, usability, value, and so on. Then iterative processes involving rapid prototyping, experimentation, and refinement can lead to successive refinements of the software by repeating more than once some or all of the functions identified in the waterfall model. (We can think of these as eddies in the waterfall.)
An early attempt at adapting the waterfall process introduced "back edges" between the stages of the waterfall. Instead of emphasizing a downstream flow of results from one stage of the waterfall to the next, an upstream flow of feedback to previous stages was introduced. In other words, problems discovered at a later stage resulted in corrections to a previous stage's outcomes. Taken to the extreme, the back-propagation of severe problems discovered in very late stages (such as integration testing) can result in redoing early stages (such as analysis). While theoretically complete, this refined process does not provide much guidance about practical execution and can lead to dramatic cost and schedule overruns.
Example Suppose that testing during the integration phase uncovers a mistake in the architectural design. The overall decomposition then needs to be redone, and then all subsequent steps may need to be fully redone for affected portions of the architecture. Assuming that such a catastrophic mistake isn't anticipated in the original plan, the project is likely to fail completely because its resources may be exhausted or its market window of opportunity may close.
The spiral model of software development (Boehm 1988; Software Productivity Consortium 1994) was an early attempt at fundamentally modifying the waterfall model to be iterative in the large. The basic idea was to allow the outcomes of the integration and testing stages to flow back to the requirements and analysis stages. An appropriate model is not information flowing upstream, but a spiral that explicitly repeats the stages of the waterfall, adding more specificity and substance in each iteration. Geometrically, this can be considered a spiral that grows outwards, each 360-degree rotation of the spiral corresponding to a complete cycle through the waterfall model. Since the waterfall stages are repeated, they are more appropriately seen as phases. The four major phases of each cycle of the spiral model are the following:
Capture and define objectives, constraints, and alternatives for this cycle. The emphasis is on identifying action alternatives and constraints rather than on achieving full specificity. One goal is to consciously preserve as much flexibility as possible later, while explicitly recognizing practical limitations on that flexibility.
Evaluate the alternatives with respect to the objectives and constraints. Identify and resolve major sources of risk.
Define the product and process. This is the phase where the architecture defined in the previous cycle is refined, and the code from that cycle is refined and appended.
Plan the next cycle and update the life cycle plan, including partitioning of the system into subsystems to be addressed in future cycles. This includes a plan to terminate the project if it is too risky or proves to be infeasible. Obtain management's commitment to implement the plan through the next cycle.
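The four phases above can be caricatured as a loop with an explicit go/no-go decision at the end of each cycle, which is what keeps the spiral from running open-ended. A toy sketch, with all phase functions as hypothetical stand-ins:

```python
# Toy rendering of the spiral model's control flow (illustrative only):
# each cycle runs the four phases and ends with a termination decision.

def spiral(max_cycles, define, evaluate_risks, build, plan_next):
    product = None
    for cycle in range(1, max_cycles + 1):
        objectives = define(cycle)            # phase 1: objectives, constraints, alternatives
        risks = evaluate_risks(objectives)    # phase 2: identify and resolve risks
        product = build(objectives, product)  # phase 3: refine the product and process
        go_on = plan_next(cycle, risks)       # phase 4: plan next cycle, commit or terminate
        if not go_on:
            break
    return product

# Usage with trivial stand-in phases (all hypothetical):
result = spiral(
    max_cycles=4,
    define=lambda c: {"cycle": c},
    evaluate_risks=lambda obj: [],
    build=lambda obj, prev: (prev or []) + [f"increment {obj['cycle']}"],
    plan_next=lambda c, risks: c < 3,  # management terminates after cycle 3
)
print(result)  # ['increment 1', 'increment 2', 'increment 3']
```

Each pass adds specificity and substance to the product, and phase 4 gives management the explicit option of stopping before resources are exhausted.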
In essence, the first phase corresponds to the conceptualization, analysis, and requirements stages of the waterfall model. The third phase corresponds to the remaining waterfall stages, except for the final maintenance phase, which is not included in the spiral model. The second and fourth phases provide risk management, cycle planning, and the assessment of relative success or failure. These are crucial, since the spiral model could easily lead to an open-ended sequence of cycles without converging to a useful product. The spiral model has been applied successfully in numerous projects (Frazier and Bailey 1996; Royce 1990).
The spiral model provides little guidance as to where to begin and how to augment the project in each cycle, or how many full cycles should be expected. Unlike the waterfall model, the spiral model is best described as a philosophy rather than a complete process. Rather than planning and executing the entire project at once, the philosophy is to start with incomplete prototypes, learn as much as possible from the experience, and refine and extend the implementation in subsequent cycles.
The WinWin spiral model (Boehm and Bose 1994) refines the spiral model by offering guidance on the identification of objectives, alternatives, and constraints. This is a critical improvement, needed to establish cycles that are productive and converge on a good outcome. As figure 4.1 illustrates, the WinWin spiral model adds three new phases to the beginning of each cycle:
Identify the system or subsystem's key stakeholders. These include finance, marketing, product management, developers, testers, users, and anybody else who has a stake in the outcomes of the following cycle.
Identify the stakeholders' win conditions—outcomes that will benefit or satisfy those stakeholders—for each system or subsystem. Often these win conditions will conflict—a win for one stakeholder or subsystem may be a loss for another.
Negotiate win-win reconciliation of the stakeholders' win conditions across subsystems, trying to arrive at a set of conditions that satisfies as many stakeholders as possible and justifies any resulting stakeholder losses. There are software tools to help organize this negotiation.
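The negotiation step can be caricatured as a search for the alternative that satisfies the most win conditions. A toy sketch, with stakeholders, conditions, and alternatives all hypothetical:

```python
# Toy win-win reconciliation: score each alternative by how many
# stakeholder win conditions it satisfies (purely illustrative).

stakeholders = {
    "users":      lambda alt: alt["usability"] >= 3,
    "developers": lambda alt: alt["schedule_months"] >= 6,
    "finance":    lambda alt: alt["cost"] <= 100,
}

alternatives = [
    {"name": "A", "usability": 4, "schedule_months": 4, "cost": 80},
    {"name": "B", "usability": 3, "schedule_months": 6, "cost": 90},
    {"name": "C", "usability": 5, "schedule_months": 8, "cost": 120},
]

def wins(alt):
    """Stakeholders whose win conditions this alternative satisfies."""
    return [s for s, cond in stakeholders.items() if cond(alt)]

best = max(alternatives, key=lambda alt: len(wins(alt)))
print(best["name"], wins(best))  # B satisfies all three stakeholders
```

Real negotiation-support tools handle conflicting conditions far more subtly than a simple count, but the sketch conveys the core idea: making win conditions explicit turns reconciliation into a comparison over alternatives.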
Figure 4.1: WinWin spiral model of software development.
While the spiral model and its refinements (such as WinWin) move toward a principled iterative software development process, they leave open many issues such as scheduling and cost estimates for individual phases. While it would take us too far afield to delve more deeply into complete and principled processes, especially since many alternatives have been proposed and used, a couple of examples illustrate some possibilities.
Example The Rational Unified Process (RUP) (Kruchten 2000) aims to be a single universal process for software development. While RUP can be configured and customized to adapt to specific organization and project needs, it focuses on one process. An alternative approach, illustrated by the OPEN process framework (Firesmith and Henderson-Sellers 2001), is to avoid a single (even customizable) process and instead construct a framework and toolkit of process and method ingredients, accompanied by guidelines and recipes that can be used to construct custom processes and methods. Both RUP and OPEN provide process-level guidance: team structure; costing, scheduling, and resourcing; quality assurance; client involvement; and so on. RUP takes a best-practice approach to establishing a single overall process. OPEN identifies many process modules and acknowledges that organizations of different sizes, skills, experience levels, ambitions, and so on, will want to custom create their processes from such modules, guided but not dictated by OPEN rules and principles.
A principled process for a single software project does not address the real problem in most development organizations, which must manage a few or even many projects overlapping in time and sharing skilled personnel. Meeting schedules and achieving overall productivity for the organization as a whole must be balanced against quality and cost-effective outcomes for specific projects.
Principled processes such as RUP and OPEN explicitly allow for the concurrent and overlapping execution of development activities, with clear definitions of work flow.
Example As soon as the architectural design of a subsystem has stabilized, detailed design within that subsystem can proceed. While software architects are still working on other subsystems, implementers can concurrently drive forward their contributions on subsystems with stable architectures.
Keep in mind that different phases require distinct skill sets and different specialists. The waterfall model embodies another development process weakness: if all activities are done sequentially, most specialists have nothing to do most of the time. Controlling top-level system partitioning and enabling concurrent progress in separate phases of the overall process can yield higher organizational productivity.
Keeping all development specialists engaged is a matter of organizational efficiency. One option is to stagger the order of multiple independent projects, so that later phases of one project proceed concurrently with earlier phases of another project but utilize different specialists. However, this leads to inefficiency in another way—a specialist moving from one project to another expends valuable effort in the context switch.
Minimizing the time to delivery is a matter of project efficiency. If an organization runs at maximum efficiency by overlapping projects, the tendency is for deep sequential activity chains to lead to long completion times per project. This creates several problems:
The market window of opportunity can be missed. The longer a project's completion time, the larger the likely variation in completion time and the greater the opportunity to miss deadlines. A window of opportunity is closed when competitors move more quickly, technology becomes obsolete, market conditions change, or user needs or requirements change.
The efficiency of specialists suffers if there is no stable context over extended time. As the project completion time extends, more specialists must enter and leave and spend time reconstructing problems and solutions encountered earlier. Departing team members, even if documentation is performed conscientiously, take tacit knowledge with them, setting back the project. New team members need time to adjust to team culture as well as to understand objectives and project status. Greater completion time increases these inefficiencies.
For most projects and organizations it is best to optimize project rather than organizational efficiency, since software development projects require substantial creativity and highly qualified people, are front-loaded in investments, and thus carry both high sunk costs and risks.
Example It is usually better to assign a specialist to one project (or a small number of projects) rather than to aim for full utilization of his or her time. Creative work is usually characterized by particularly high context-switching cost for individuals, suggesting that it is actually more efficient for project and organization to not aim for full utilization. Productive use can often be made of project dead time, such as professional development or project tool building.
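The trade-off can be made concrete with a toy calculation; all numbers here are illustrative assumptions:

```python
# Toy model of context-switching cost: two projects each needing 40
# hours of one specialist's work, with a fixed overhead per switch
# for re-establishing context (numbers are hypothetical).

work_per_project = 40  # hours of real work per project
switch_cost = 2        # hours lost re-establishing context per switch

def total_hours(num_switches):
    return 2 * work_per_project + num_switches * switch_cost

dedicated = total_hours(1)     # finish project 1, then switch once to project 2
interleaved = total_hours(15)  # fine-grained interleaving for "full utilization"
print(dedicated, interleaved)  # 82 vs. 110 hours for the same output
```

Under these assumptions, chasing full utilization through frequent switching costs roughly a third more total effort, which is why dedicating a specialist to one project is often more efficient for both project and organization.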
As projects get larger, it becomes easier to balance project and organizational efficiency because there are more opportunities for specialists to contribute productively to different aspects of the same project. One important source of organizational efficiency is software reuse (see chapter 7).
A third and distinctively different model for software development is community-based development. In this model, a loosely knit group of software developers collaborate on a software project utilizing the Internet and online collaboration tools. Often these developers do not belong to a common organization or even get paid; they are simply volunteers.
The foundation of community-based development is making source code available to the community; indeed, it is usually made available to anybody who is interested. Most commercial software suppliers do not make source code available to anyone outside the organization—it is a closely held trade secret (see chapter 8). Making source code available is a radical departure from traditional commercial practice.
The term open source describes a particular type of community-based development and associated licensing terms as trademarked by the Open Source Initiative (OSI), which enthusiastically describes open source as follows: "The basic idea behind open source is very simple. When programmers can read, redistribute, and modify the source code for a piece of software, the software evolves. People improve it, people adapt it, and people fix bugs. And this can happen at a speed that, if one is used to the slow pace of conventional software development, seems astonishing." There are many possible licensing terms as well as different motivations for making source code available (see chapter 8).
What benefits can accrue from making source code publicly available? What issues and problems can arise? What would motivate programmers to volunteer their time and effort? Successful community-based development projects can shed light on these questions.
Example The most famous open source software project is Linux. This UNIX-like operating system was originally developed by Linus Torvalds, and he made source code available to the community. Subsequently, a dedicated band of programmers upgraded the software, while Torvalds maintained control. Companies such as Red Hat Software and Caldera Systems distributed versions with support, and vendors such as Dell and IBM offer Linux installed on computers they sell.
Example FreeBSD is another UNIX operating system based on a version of UNIX developed at the University of California at Berkeley in the 1970s and 1980s. Sendmail is a widely used application for handling and forwarding e-mail. Both FreeBSD and sendmail are widely used by Internet service providers. IBM offers another open source e-mail program, Secure Mailer. Apache Web Server is a widely used open source program and has been adopted in IBM's WebSphere product. Apple Computer made the core of its newest operating system, Mac OS X, open source and called it Darwin. Netscape created an open source version of its Web browser and called it Mozilla. This partial list conveys the accurate impression that open source is gaining popularity.
The open source licensing terms (O'Reilly 1999; Open Source Initiative) require that source code be available without charge and redistribution be unrestricted. Although open source software typically has a license that restricts its use (licenses are associated with ownership under the copyright laws; see chapter 8), it must permit modifications and derivative works and must allow derivatives to be redistributed under the same terms as the original work. The essence of these restrictions is to create software that is not only available free (although the user may want to pay for support services) but will always remain free even after modification and incorporation into other works. Other types of licenses are discussed in chapter 8.
Community-based development has not yet proven that it can contribute to the conceptualization through architecture phases of the waterfall model. It may be that these phases inherently require a single individual or close-knit group. Today's community-based development projects have focused on the incremental advancement and maintenance of an existing piece of software. This also addresses the issue of who owns the software and who can place licensing restrictions on it. The answer to date is whoever contributed the first working software to the community. Community-based development can greatly extend the reach of such a person by bringing in a whole community of programmers to extend and maintain his or her original vision.
What organizational structure is imposed on the community to avoid anarchy? One manifestation of anarchy would be n different variations of the software for each of the n participants without any mechanism to merge results. The most successful projects have maintained the discipline of a sequence of versions resulting from a merging of collective efforts, with a governance mechanism to decide which changes are accepted. (See chapter 5 for discussion of how a software project must be managed through multiple releases.) A prime goal of the governance is to preclude "forking" of two or more coexisting variants of the software, which would result in duplication of effort and, through market fragmentation, reduce the network effect component of user value (see section 3.2.3).
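The governance discipline described above, a single maintained line of versions with a gatekeeper deciding which contributions are merged, can be sketched as follows; the project, policy, and contributors are all hypothetical:

```python
# Toy sketch of gatekeeper-style governance: contributions enter the
# single shared version history only if the arbiter accepts them,
# which is what prevents forking into divergent variants.

class Project:
    def __init__(self, arbiter_accepts):
        self.versions = ["1.0"]          # single shared line of versions
        self._accepts = arbiter_accepts  # governance policy

    def submit(self, contributor, change):
        if self._accepts(contributor, change):
            next_version = f"1.{len(self.versions)}"
            self.versions.append(next_version)
            return next_version
        return None  # rejected: the change does not enter the shared history

# A hypothetical policy: accept only changes that come with tests.
project = Project(lambda who, change: change.get("has_tests", False))
print(project.submit("alice", {"has_tests": True}))   # 1.1
print(project.submit("bob",   {"has_tests": False}))  # None
print(project.versions)                               # ['1.0', '1.1']
```

The essential point is that all contributors converge on one version sequence; a rejected contributor who wants the change badly enough must either persuade the arbiter or fork, and the community's incentives (network effects, shared effort) weigh heavily against forking.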
Example The arbiter for Linux is its originator, Torvalds, who has always controlled what went into each successive version. If the software is managed by a commercial firm (like Apple's OS X or IBM's Secure Mailer), it is that firm that arbitrates, merges, and integrates all changes.
What would motivate programmers to donate their time to a project? The commercial interests of large corporations have not proven to be a strong motivator for donated efforts, and as a result, community-based development projects started by companies have been noticeably less successful than projects originated by individuals or originating in academe. A viable community-based development project must have a fairly large community of interested programmers from the perspectives of need (the programmers are themselves users) and interest (they find it intellectually challenging and interesting). Most such projects involve infrastructure software, since the community of users is large and the technical challenges interesting. In fact, this kind of development seems the most appropriate methodology for a certain type of infrastructure that is, by nature, a commodity (see chapter 7). Programmers are motivated by a technical challenge (much like solving mathematical or crossword puzzles, which is usually also an unpaid activity) and by the recognition conferred by the community. The latter should not be underestimated, since peer recognition of technical merit and innovation are powerful motivators.
Community-based development can also be appropriate for developing and maintaining an application that for practical reasons cannot be made commercially available.
Example Many scientific communities maintain a collective body of software for modeling or simulation. In this case, rather than technical challenge or peer recognition, the motivation comprises the division of effort and sharing of scientific results. The software is more than an enabler, it is itself a research outcome. Again, the developer community and the end-users are one and the same.
What would motivate a user to adopt community-based development software? For one thing, it is free, although this is probably a secondary benefit (as discussed in chapter 5, the cost of licensing software is often dwarfed by the operational expenses). Because of the scrutiny that it receives from many programmers, much community-based development software has a reputation for quality. Similarly, security is generally enhanced by scrutiny of the code (although there is also a danger that crackers could insert their own back-door security holes). Some users appreciate software not associated with any commercial vendor, believing it less likely to enhance the interests of that vendor and free from constraints imposed by the vendor.
The most powerful reason for adopting community-based development software is the possibility of modifying it to specific needs (although doing so makes it more difficult to benefit from future enhancements). One such modification is to remove unwanted capabilities, which can save resources and enhance security (Neumann 1999). Modification of commercial software distributed only in object code is at best difficult and usually violates the software license.
An observed weakness is (at least thus far) the lack of a methodology for systematically gathering user needs and requirements, and perpetuating the usability of the software. As a result, successful community-based development projects have a user community that is one and the same as the developer community—participants work most effectively when they are focused on satisfying their personal needs. The methodology also violates some of the basic tenets of modularity, as discussed later in this chapter.
While community-based software development appears to have its limitations, it is an interesting phenomenon, one that teaches us about the software development process and hacker culture. Commercial software suppliers have been struggling to understand and accommodate this movement, both in terms of its potential for competition with their interests and understanding how the movement might benefit their commercial interests.
Example Microsoft's shared source model makes source code available to support learning processes by enabling the study of the source code and its experimental modification. However, shared source excludes the right to redistribute or to derive commercial products. Shared source can allow the community to examine code for security loopholes while precluding the introduction of new loopholes by crackers. However, just as with open source, crackers could also discover existing loopholes and use such knowledge to mount attacks.
Since the ultimate goal of software development is to achieve high satisfaction on the part of all stakeholders—there are many, and their relative importance and number changes over time—development processes should be flexible. Another of the laws of satisfaction-driven systems (see table 3.1) is "satisfaction-driven evolution processes constitute multi-level, multi-loop, multi-agent feedback systems and must be treated as such to achieve significant improvement over any reasonable time" (Lehman et al. 1997). A recent advance is families of lightweight development processes called agile processes (Fowler and Highsmith 2001). Agile processes are really frameworks for the creation and customization of processes, applied in every project cycle and over multiple cycles. Hence the property of agility: a process found lacking is immediately adjusted.
Agile processes emphasize a tight interaction with the user or customer, including operators, at all stages. Rapid prototyping is an important enabler of user interaction, since user feedback can be gathered and assessed early and often. Rather than assuming that requirements can be captured ahead of time (or even at all), agile processes assume that a dialogue with the user is maintained continuously throughout the entire software life cycle.
Agile processes are not really about rapid development, an approach favored in the early 1990s that aimed for very rapid development and delivery of solutions. Rapid development, as practiced then, is now viewed as more of a "hack" than a principled approach: accelerating the stages of production without embedding an appropriately agile process probably cannot deliver much value. Agile processes emphasize speed, but indirectly, by insisting on small incremental steps (much smaller than in traditional incremental processes) that provide feedback and assessment early and often.
Arguably some small and large software companies had adopted forms of agile processes even before they were identified and named.
Example Microsoft introduced the job classification "program manager" in the early days of its Excel spreadsheet application. This professional represents users, since mass-marketed software has no individual user or customer to benchmark. The development process validated and corrected course with feedback from program managers early and often. For instance, a program manager performed competitive analyses to continually understand and size the target market and its complementing and competing players, thus influencing development decisions. The delivery of beta releases and release candidates for testing with representative customers gathered direct input at less frequent intervals.
Significant differences in orientation, culture, and methodology exist between the programming phases of a software project (implementation, testing, repair, and maintenance) and the earlier design phases (requirements, architecture). Historically, much application software has been weak in its ability to provide natural and forgiving interfaces, especially for naive users. This weakness is accentuated as applications become more pervasive, diverse, and sophisticated. For this reason, new professions are emerging (Gerlach and Kuo 1991) that focus on this aspect of software creation, as distinct from programming.
Example These professions are analogous to the relation of building architecture to civil engineering: the architect is primarily concerned with the needs of the building occupant, the aesthetics of the building, how it relates to its environment, and various policy issues (like handicapped access). The civil engineer is primarily concerned with the structural integrity of the building and the functionality of its various subsystems, and with managing the actual construction.
In software at least three largely separate professions already exist or are emerging:
Industrial design is concerned with the physical appearance of physical products and how appearance relates to functionality. It is applicable to many products, including computers and information appliances.
Graphics design is concerned with the artistic representation of concepts and ideas through pictures rendered by computer programs and is most relevant to the graphical user interfaces of programs.
Interaction design addresses the interaction between people and complex systems, analyzing that interaction and finding ways to decompose it to make it more intuitive and functional.
The sophisticated programmers emerging from undergraduate computer science programs are highly technical and, by training and orientation, often less well equipped to deal with these human-centered design issues. Industrial and graphics design benefit from artistic training and are considered aspects of the design arts. Interaction design is an emerging discipline that draws heavily on the liberal arts and social sciences (particularly psychology and cognitive science). All these activities benefit from ethnography, a branch of the social sciences that focuses, from a human perspective, on the interaction between people and a rich environment.
A significant contribution to software functionality can occur after software is licensed to a customer. Many software products specifically offer opportunities to configure, customize, and extend. This is a good way to accommodate the increasing specialization of applications, seeking a reasonable trade-off between the differing and changing needs of users and the benefits of an off-the-shelf application (see chapter 6). In the future we can expect improvements in tools enabling a significantly greater involvement of users in defining the functionality of the software they use. In their more sophisticated forms, end-user activities resemble programming more than merely adjusting a configuration.
The central idea behind computing is to define the functionality of a product after it is in the hands of the customer rather than at the time of manufacture. Software applications that offer no user configurability or programmability essentially move the definition of functionality to a second entity in the supply chain, the software supplier. Leaving increasingly rich and sophisticated configuration and programmability options to the user is in the best computing tradition.
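The spectrum from configuration to end-user programmability can be illustrated with a minimal sketch. The names here are hypothetical, not drawn from any real product: a supplier ships a function whose behavior is adjusted by a static configuration and can be extended by user-authored code, so functionality is defined after the product is in the customer's hands.

```python
# Illustrative sketch (all names hypothetical): one application component whose
# behavior is (1) fixed by the supplier, (2) adjusted by configuration, and
# (3) optionally extended by a user-defined rule -- end-user programming.

def make_formatter(config, user_rule=None):
    """Build a text formatter from supplier code, configuration, and an
    optional user extension."""
    def fmt(text):
        if config.get("uppercase"):   # configuration: choose among built-in options
            text = text.upper()
        if user_rule is not None:     # extension: behavior the supplier never anticipated
            text = user_rule(text)
        return text
    return fmt

# Configuration only:
basic = make_formatter({"uppercase": True})

# Configuration plus a user-defined rule:
custom = make_formatter({"uppercase": False},
                        user_rule=lambda t: t.replace(" ", "_"))

print(basic("hello world"))   # HELLO WORLD
print(custom("hello world"))  # hello_world
```

The supplier defines the frame; the configuration selects among anticipated options; the user-supplied rule adds genuinely new behavior, which is where configuration shades into programming.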
Programming-like activities performed by users can be traced back to the earliest interactive applications.
Example Spreadsheets come to life through models defined by users, models that may include complex mathematical formulas, decision logic, and a customized user interface. Client-side database applications such as FileMaker Pro or Microsoft Access allow complex queries (a form of programming) and the design of custom user interfaces. Word processors support style sheets and templates, which allow users to define document formatting. Most applications support macros, the automatic execution of common sequences of actions recorded or authored by a user.
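The sense in which a spreadsheet model is a user-written program can be sketched in a few lines. This is a toy evaluator, not the engine of any real spreadsheet: cells hold either values or formulas over other cells, and evaluation follows the dependencies the user's model defines.

```python
# Toy sketch (hypothetical, not a real spreadsheet engine): a "sheet" maps cell
# names to values or formulas; formulas are functions of a cell-lookup callback.

def evaluate(sheet):
    """Resolve every cell, computing formula cells from the cells they reference."""
    cache = {}
    def cell(name):
        if name not in cache:
            entry = sheet[name]
            # A callable entry is a formula; evaluate it with access to other cells.
            cache[name] = entry(cell) if callable(entry) else entry
        return cache[name]
    return {name: cell(name) for name in sheet}

# The user's "program": a small revenue model with a formula and decision logic.
model = {
    "A1": 120.0,                                    # units sold
    "A2": 9.5,                                      # unit price
    "A3": lambda c: c("A1") * c("A2"),              # revenue = units * price
    "A4": lambda c: "bonus" if c("A3") > 1000 else "no bonus",  # decision logic
}

print(evaluate(model))  # A3 evaluates to 1140.0, so A4 evaluates to "bonus"
```

The supplier provides only the evaluator; the model, its formulas, and its decision logic are entirely the user's creation, which is what makes spreadsheet authoring a form of programming.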
End-user programming is sometimes taken to a level close to sophisticated programming. Such capabilities are not intended for the ordinary user, but rather enable an aftermarket of independent or captive software developers to add value based on local needs.
Example Visual Basic for Applications (VBA), as integrated into the Microsoft Office suite, is a powerful development environment, rich and full-featured enough that many users quickly reach the limits of their own programming skills. Microsoft Office is also a software development platform, commonly used by enterprise information systems departments to create custom applications. A variant called Office Developer addresses this derivative market.
There are published statistics on the failure rate, but they are not cited here because they mask major gradations of failure (ranging from time and budget overruns to outright cancellation to deployment followed by abandonment) and wide variation in the causes of failure. "Failed" projects can also yield useful outcomes like experience or fragments of code later used in other projects.
Extreme programming (Beck 1999) was probably the first widely acknowledged agile process. At its heart, extreme programming requests a close feedback loop with the customer of the software being developed, development in rapid cycles that maintain a working system throughout, the hand-in-hand development of increments and corresponding tests, and work in pairs to minimize mistakes caused by oversight.
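The hand-in-hand development of increments and tests can be illustrated schematically. This sketch is not from Beck's book; it simply shows the practice in miniature: the test stating the expected behavior is written together with (or before) the small increment that satisfies it, and the whole suite runs on every change so the system stays working throughout.

```python
# Schematic illustration (hypothetical example, not from Beck 1999) of an
# increment and its test developed hand in hand, as in extreme programming.

def test_word_count():
    # The test is written first and specifies the increment's expected behavior.
    assert word_count("") == 0
    assert word_count("one two  three") == 3

def word_count(text):
    # The increment: the simplest implementation that makes the test pass.
    return len(text.split())

test_word_count()  # run on every change; a failure halts the cycle immediately
print("all tests pass")
```

Because each increment is tiny and its test runs immediately, mistakes surface within minutes rather than at a distant integration phase, which is the feedback loop the process is built around.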