Unlike other industries, software development has been around for a relatively short time. On an automobile assembly line, we can easily predict how many vehicles will be produced on any given day. In turn, this predictability gives an automobile company the ability to forecast its profits for any given year and to gauge its progress. Now let's apply these principles to software development. How can you determine how much time it will take to build a feature or release a product? The short answer is that most of us don't even try - the conventional approach is to create artificial time limits and impose them on your development team.
Unfortunately, this approach doesn't work. A specification on a piece of paper may look deceptively simple but may require a great deal of overhead to implement. For example, consider a simple three-word instruction such as "build a car." What exactly does this mean? You have to create a functioning model of a vehicle, engineer or procure thousands of parts (or build them from scratch if they don't exist), walk through the process of assembling the automobile piece by piece, and test it extensively (for safety, emissions, and so on); hopefully, at the end, you'll have a functioning vehicle. The process involves an enormous amount of complexity, yet it appears deceptively simple on paper.
If you look at current statistics around the success and failure rates of software development projects around the world, you'll come to the realization that a huge number of projects are doomed to fail. Will your next project be part of this statistic?
For more information about these statistics, please refer to this revealing 1994 Standish Group report entitled the "CHAOS Report": standishgroup.com/sample_research/chaos_1994_1.php.
So who is to blame for all these failures? The scapegoats are usually the developers who were unable to deliver the software or features within a set timeframe. The chief technology officer (CTO), who has to justify the expenses, might also be blamed (and fired) - it's difficult to provide visibility into how IT spends money. If you are a developer, you may want to blame the decision makers who have no concept of the scope and complexity of the requested software. In truth, the problem can't be attributed to a specific group - it lies in the way we currently manage our software development projects. This culture of blame and distrust fosters heavy documentation and process, and it is something that we, as an industry, need to change. In Japan, for example, more trust is placed in the worker, and there is a strong emphasis on shared ownership of responsibility.
Software companies (like companies in other industries) can't afford this level of uncertainty in the process. That is why the software industry has made inroads in developing techniques to improve predictability and reliability. The Capability Maturity Model Integration (CMMI) is one way an organization can implement process improvement and demonstrate the maturity of a process. One of the primary goals of CMMI is the objective measurement of IT resources: assessing the maturity of an organization by using statistical and scientific approaches for the estimation, control, and objective measurement of software projects. By improving maturity, we also improve productivity and profitability.
Philip Crosby, the former head of a major management consulting firm, wrote Quality is Free and Quality Without Tears, two easy-to-read books on process quality improvement. His writings were influenced by the works of Dr. W. Edwards Deming and Dr. Joseph M. Juran, both pioneers in the field of quality management. Crosby's approach focuses on attaining zero defects. The CMMI levels are strongly based on Crosby's manufacturing maturity model: Levels 2 through 4 work toward the elimination of special cause variation, and Level 5 is the complete attainment of continuous improvement. Most of the training you can get today focuses on a Crosby-esque approach; as such, it tends to be rigid and not very agile. Briefly, CMMI is used to track the maturity of any software design organization, from requirements to validation. CMMI defines six capability levels (Level 0, Incomplete, denotes a process that is not performed or only partially performed); Levels 1 through 5 are outlined in the following table:
Level 1: Performed Process. You have little to no controls in your project. The outcome is unpredictable and reactive, with frequent instances of special cause variation. All the process areas for a performed process have been implemented and work gets done; however, the planning and implementation of process have not yet been completed.

Level 2: Managed Process. You have satisfied all the requirements for the implementation of a managed process. Work is implemented by skilled employees according to policies. Processes are driven according to specific goals such as quality and cost. Planning and review are baked into the process. You are managing your process.

Level 3: Defined Process. You have a set of standard processes (or processes that satisfy a process area) within your organization that can be tailored according to specific needs.

Level 4: Quantitatively Managed Process. All aspects of a project are quantitatively measured and controlled. Both your operational and project processes are within normal control limits.

Level 5: Optimizing Process. CMMI Level 5 focuses on continuous process improvement and the reduction of common cause variation. The project process is under constant improvement.
There are two models for implementing CMMI: the continuous model and the staged model. In the continuous model, elements such as engineering, support, project management, and process management are each composed of a set number of process areas. A process area is a description of activities for building a planned approach for improvement. Using the staged model, the process areas are set up according to the five maturity levels. MSF for CMMI Process Improvement was designed to support the staged model.
In Team System, one of the primary goals of MSF for CMMI Process Improvement is to gather Standard CMMI Appraisal Method for Process Improvement (SCAMPI) evidence to help a company be appraised at CMMI Level 3 process maturity according to the SEI. Another important goal of the MSF for CMMI Process Improvement guidance is to provide a framework for creating a formal process within a software development team - in other words, one that stores information for the purpose of audits, process improvement, and quality assurance (QA). The CMMI specifications are quite detailed (over 700 pages long). Here are the characteristics of CMMI Level 3, boiled down to three main points:
At CMMI Level 3, processes are tailored from the organization's set of standard processes according to the organization's guidelines.
CMMI Level 3 requires a process description that is constantly maintained. This is implemented in Team System using work items and iterations.
CMMI Level 3 must contribute work products, metrics, and other process improvement information to the organization's process assets. Process templates and the project site enable project managers to share metrics and documents with the rest of the team.
One of the differences between the MSF for Agile Software Development and MSF for CMMI Process Improvement frameworks is the nature of the process guidance. The process guidance in MSF for Agile Software Development implies a development process, whereas MSF for CMMI Process Improvement spells out very specific process steps in its guidance. This makes a lot of sense given the complexity and sheer number of CMMI requirements that need to be managed. The process guidance for MSF for CMMI Process Improvement is approximately 150 percent larger than that for MSF for Agile Software Development.
Statistics often fly in the face of what is generally perceived as common sense. For example, it's easy to believe that controlling each line item of a task list will provide more predictive control over the process. This is completely wrong - change and variation are innate parts of the process. As long as the variation or fluctuation is within normal bounds (common cause variation), the project is healthy. CMMI and other process improvement methods seek to deal with issues and situations that fall outside of normal boundaries (special cause, or assignable cause, variation).
Here is the main challenge with MSF for CMMI Process Improvement: CMMI was originally designed for the aerospace and defense industries. How do you take something built around heavy auditing and a lack of trust in the process, make it accessible to hundreds of thousands of Visual Studio users (Microsoft's target audience), and achieve high adoption levels? How is it possible to take something like CMMI and make it smaller, more manageable, and Agile?
MSF for CMMI Process Improvement takes a radically new approach to helping you pass your CMMI appraisals. David J. Anderson (the creator of MSF for CMMI Process Improvement) found a strong correlation between the work of W. Edwards Deming and Agile methodologies. Deming is considered the father of quality management, and his theories are the basis of Philip Crosby's work. MSF for CMMI Process Improvement is based on Deming's statistical process control and quality theories, and it offers a range of work products and reports to help eliminate special cause variation.
To learn more about MSF for CMMI Process Improvement, we would highly recommend you read David J. Anderson's book Agile Management for Software Engineering - Applying the Theory of Constraints for Business Results and visit his blog at agilemanagement.net/Articles/Weblog/blog.html.
Deming popularized statistical process control in business and manufacturing through his theory of profound knowledge. To properly implement process controls, you must be able to identify the difference between common cause variations (CCVs) and special cause variations (SCVs). Common cause variations are the natural fluctuations found in any process. Special cause variations are caused by special occurrences, environmental factors, and problems that affect a process. The challenge for a project manager is to correctly identify an instance of special cause variation and reduce it. Complicating this goal is the fact that these variations usually occur randomly. The reporting component of Team System provides visual metrics to measure factors such as project health. These charts can help you determine whether values stay between the upper control limits (UCLs) and lower control limits (LCLs) - that is, within operational boundaries. Figure 6-2 illustrates the two varieties of variation and the attainment of process improvement.
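To make the UCL/LCL idea concrete, here is a minimal sketch of how control limits separate common cause from special cause variation, using an individuals (XmR) control chart. The sample data and the `flag_special_causes` helper are hypothetical illustrations, not part of Team System, which computes comparable limits in its reporting layer.

```python
def control_limits(samples):
    """Compute the center line and 3-sigma control limits from the
    average moving range (the standard XmR-chart calculation)."""
    mean = sum(samples) / len(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    sigma = avg_mr / 1.128          # d2 constant for subgroups of size 2
    return mean, mean + 3 * sigma, mean - 3 * sigma   # CL, UCL, LCL

def flag_special_causes(samples):
    """Return the points falling outside the control limits --
    candidates for special (assignable) cause investigation."""
    _, ucl, lcl = control_limits(samples)
    return [x for x in samples if x > ucl or x < lcl]

# Work items closed per day during an iteration (hypothetical data).
closed_per_day = [8, 9, 8, 9, 8, 9, 0, 8, 9, 8]
print(control_limits(closed_per_day))
print(flag_special_causes(closed_per_day))   # [0] -- the zero-output day falls below the LCL
```

Points inside the limits are common cause noise and should be left alone; only the flagged points warrant a hunt for an assignable cause.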
Another principle that influenced the development of MSF for CMMI Process Improvement is the theory of constraints (TOC) documented by Dr. Eliyahu M. Goldratt in his novel The Goal. The focus behind the theory is the goal of constant improvement. Constraints (as defined by Goldratt) are bottlenecks preventing you from reaching a specific goal. By identifying physical and nonphysical constraints, you can focus your energies on eliminating (or reducing) the constraints, making your process more effective.
A spreadsheet (MSF CMMI Reference.xls) was devised by David Anderson as a roadmap to create the process guidance. It contains all the SCAMPI requirements for CMMI Level 3 and mappings to what is contained in the process template.
Let's see how W. Edwards Deming's fourteen points from his book Out of the Crisis map to common Agile practices:
Create constancy of purpose toward improvement of product and service, with the aim to become competitive, to stay in business, and to provide jobs. The constant improvement of product (which is in our case, software) can be assured through continuous integration and refactoring, which are centerpieces of the Agile approach.
Adopt the new philosophy. Western management must take on leadership for change. This can be construed as advocating an adaptable approach in developing software (as opposed to having development run with heavy, monolithic processes). In the Agile approach and MSF for CMMI Process Improvement, developers are first-class citizens and advocates for code quality. The team-of-peers principle in MSF makes everyone on the team an equal and a thought leader, which benefits the project.
Cease dependence on inspection to achieve quality. Eliminate the need for inspection on a mass basis by building quality into the product in the first place. This principle is akin to test driven development, where quality is assured early in the process.
End the practice of awarding business based on a price tag. Instead, minimize total cost. Move toward a single supplier for any one item, on a long-term relationship of loyalty and trust. This is akin to the Agile principle of keeping a close connection with the customer to be responsive to change and deliver right on target.
Improve constantly and forever the system of production and service to improve quality and productivity, and thus constantly decrease costs. Constant improvement of code can be achieved using test driven development, continuous integration, and constant refactoring. This closely resembles Deming's continuous improvement cycle that advocates Plan, Do, Check, and Act.
Institute training on the job. Agilists use pair programming and code reviews to improve knowledge sharing and quality of code. Unit testing code provides a solid, documented roadmap of customer requirements, which can then be picked up by any other developer.
Institute leadership. The aim of supervision should be to help people and machines - and gadgets - do a better job. In the Agile approach, the goal is to remove the supervision of people, create a vision to follow, and create an environment for great work.
Drive out fear. Fear can be driven out of a project by providing consistency, visibility, and transparency in the process. By deeply involving the customer, the Agile development shop provides an environment of trust. Team System provides visibility in your project on a continuous basis through metrics in project reports and by providing query views across all work items.
Break down barriers between departments. People in research, design, sales, and production must work as a team to foresee problems in production and use. The team-of-peers concept found in MSF encompasses this principle. A similar analogy in XP is the concept of collective ownership.
Eliminate slogans, exhortations, and targets for the workforce asking for zero defects and new levels of productivity. Reading between the lines, such slogans can be taken as calls to eliminate time-to-task tracking and the need to individually calculate the velocity of your development team. Also implied by this principle is the need for mutual respect, which is reflected in many Agile processes such as Scrum and XP - and is also found in MSF for CMMI Process Improvement.
Eliminate quotas, management by objective, and numerical goals. The Agile approach uses nonnumerical ways of estimating, such as NUTs (nebulous units of time) and ROMs (rough order of magnitude estimates).
Remove barriers that rob workers of their right to pride of workmanship. Abolish merit ratings (and performance appraisals). Agile methodologies espouse joint ownership of work by everyone on the team. If you measure your work velocity using variation analysis rather than time to task or lines of code, you will realistically assess the performance of your team within an accepted baseline rather than worry about variables that are out of your control. In America, there is in some ways too much emphasis on measuring the individual; it is worth noting that in other countries (such as Germany), there are legal restrictions on measuring the performance of an individual.
Institute a vigorous program of education and self-improvement. Using Agile modeling, pair programming, peer reviews, and promiscuous pairing (swapping partners), you can improve the abilities of your programmers.
Put everyone in the company to work to accomplish the transformation. In MSF for CMMI Process Improvement, the concept of the team of peers advocates that everyone on the team is a stakeholder in the project and is allowed to suggest and make changes to the process or product to improve it.
Deming also demonstrates agility through his PDCA Cycle (which stands for Plan, Do, Check, and Act - the concept was originally developed by Walter A. Shewhart), as shown in Figure 6-3.
This cycle closely resembles the process behind the Agile approach of test driven development; however, one is applied to quality improvement, and the other applies to code improvement. Let's now compare these two processes:
Plan - In the context of quality management, the Plan phase denotes the design or revision for process improvement. In testing, unit tests are written to assert business requirements before the implementation code is written. As you continue the process, unit tests are rewritten to improve code quality and fidelity with the requirements.
Do - The Do phase is the implementation of process. The logical analogy in a test context is the implementation of code.
Check - In a process context, Check tests the results of implementation. In a very similar way, unit testing can be automated during the build process to check the implementation of code to see if it breaks any of your coded assertions.
Act - The Act phase is the decision-making process before the next implementation. This is where the process improvement occurs. In a testing context, if during testing your unit tests are not correctly capturing the requirements, they need to be refactored.
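The PDCA-to-TDD analogy above can be condensed into a single test-first pass. The `price_with_tax` function and its 15 percent tax rule are hypothetical examples, not part of Team System or MSF; the comments map each step back to Deming's cycle.

```python
# Plan: write the test first, asserting the business requirement
# (prices are taxed at 15 percent -- a hypothetical rule).
def test_tax_is_applied():
    assert abs(price_with_tax(100.0) - 115.0) < 1e-9

# Do: write just enough implementation to satisfy the assertion.
def price_with_tax(price, rate=0.15):
    return price * (1 + rate)

# Check: run the test (in Team System this would be automated
# as part of the build, per the Check phase).
test_tax_is_applied()

# Act: if a test no longer captures the requirement, refactor the
# test before the next pass through the cycle.
```

Each iteration of the loop improves either the assertion (the plan) or the implementation (the doing), which is exactly the continuous improvement rhythm PDCA describes.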
The framework for MSF for CMMI Process Improvement has been reviewed and accepted by prominent members of the SEI, including the originators of CMMI. MSF for CMMI Process Improvement covers 20 of the 25 process areas. The five missing process areas are:
Supplier agreement management (SAM) - Level 2
Organizational training (OT) - Level 3
Organizational process focus (OPF) - Level 3
Organizational environment for integration (OEI) - Level 3
Integrated supplier management (ISM) - Level 3
David J. Anderson explains the reasoning for the missing process areas on his blog: agilemanagement.net/Articles/MSF/WhyTheMissingFive.html.
Relative to other framework implementations of CMMI, MSF for CMMI Process Improvement is quite lean, characterized by a small number of work products, activities in the hundreds (not the thousands), and roughly 50 queries and reports. Microsoft's goal is to bring CMMI to the market for Visual Studio developers, add agility in a formalized process by incorporating Agile principles in the framework, and provide integrated tools to streamline and automate the adoption and management of processes that promote continuous improvement.
Fundamentally, a project is a collection of iterations. Within MSF, there is a loose, approximate project plan, which encompasses end-to-end scenarios broken out into iterations. In an MSF for CMMI Process Improvement–based project, you have to consider the project on many levels. What activities are you undertaking to gather the SCAMPI evidence? What activities are you doing to incorporate governance in your project? What activities are actually driving the development itself? Here is a detailed breakdown of these activities (and how they overlap in places):
The process of gathering SCAMPI evidence is documented in a work product spreadsheet called MSF CMMI Reference.xls. This spreadsheet is found in your general documents folder at the team project level. The MSF for CMMI Process Improvement process guidance also has a CMMI tab - an appraiser's view that allows you to assess how your process measures up against the CMMI requirements. You'll notice that there isn't a one-to-one predefined list of work items generated against the CMMI reference list to provide you with a paint-by-numbers approach to getting appraised. There is a reason behind this flexibility: you can define your own development process and measure it against the list to see if you are on track.
The governance process is documented very well in the MSF for CMMI Process Improvement process guidance. To get general information about governance, click the link on the left side of the screen on the main page of the process guidance screen. You can also see how governance works within your development cycle by looking at the Tracks view (Views⇨Tracks).
Your development process is outlined in the process documentation but also as part of the work item instance (predefined collection of work items) that ships with MSF for CMMI Process Improvement.
With MSF, you can do postponed iteration planning; in other words, functionality is locked down only at the start of each iteration, planning is scenario based, and scenarios are prioritized according to their value to the customer. Like many things in MSF, estimates are flexible. We recommend that you refer to the Project Management Body of Knowledge (PMBOK) for a detailed rundown of estimation techniques. You can obtain the PMBOK from the Project Management Institute (pmi.org).
MSF for CMMI Process Improvement was designed with seven quality-of-service (QOS) requirements in mind: one customer-related and six software-related. Examples include customer requirements, security requirements, functionality requirements, and interaction requirements. Setting up your QOS requirements in advance allows you to filter activities by quality-of-service requirements and scenario later in the process.
All tasks in MSF for CMMI Process Improvement are structured according to entry criteria, tasks, and exit criteria (ETX). This structure provides consistency and predictability in your process, which will help you track the natural flow of your project.
MSF for CMMI Process Improvement deals with the management of process rather than conformance to plan or specification. The framework not only provides you with documentation and workflow to obtain the SCAMPI evidence, but also includes metrics that allow you to understand the variation within your projects. After all, if variation can't be accounted for, there's no way you can control it!
One of the ways you can determine the capability of your process is by measuring it against Donald J. Wheeler's four-process state diagram (seen in Figure 6-4). Wheeler advocates that a process will never be at a standstill. It constantly shifts and moves from one state to another based on variation.
You'll notice that the top-right quadrant is the ideal state. This is a state where your processes are in control and you can analyze your reports to track down any problems that might come up. In the ideal state, there is a lot of predictability in your process; everything is coming up to specification. This is where you want to be in terms of capability and maturity.
Let's look at the bottom-right quadrant. If you have an unstable process but, for some reason or another, you are still hitting your targets, you are on the brink of chaos. This can occur if you manage a project using a traditional time-to-task approach. What happens is that assignable cause variation creeps into the project; then you start seeing the project fall off course and, worse yet, you won't know when it will happen. An example of this is a software project with complex features and a very tight timeframe. Everything looks fine initially - the project looks like it's on track. Then the inevitable happens - the estimate doesn't correspond to your team's velocity and you are understaffed to deliver on time, developers decide to quit, and so forth - and the project slips out of control. The only way to correct a process that is on the brink of chaos is to identify and remove assignable cause variation. In other words, do a risk analysis at the beginning of the project. Recognize that you may be understaffed. Come up with a plan to mitigate the risk. Understand your velocity using loose or rigorous statistical methods, and you will be back on the right track.
The top-left quadrant of the figure represents the threshold state. In this state, you are generating results, but there is also unpredictability in your process. For example, you are running a project, but code is being produced inconsistently. You may have significant gaps of time where little to no code is generated; then, suddenly, the developers produce code in spurts of productivity. You manage to get through and deliver, but it's on the edge. You can move from the threshold state to the ideal state by improving your process - specifically, by reducing the amount of common cause variation.
In the lower-left quadrant is chaos. Chaos implies a process out of control, where you are constantly putting out fires. Sometimes specialized managers or consultants are brought in to establish order in a process. Unfortunately, unless you have a process improvement strategy in place, the pattern will continue to reoccur.
Statistical Process Control (SPC) is a huge topic and well outside the scope of this book. If you want a pragmatic book to explain how to apply it in real-world situations, we would recommend that you pick up and read Understanding Statistical Process Control by Donald J. Wheeler and David S. Chambers.
In Wheeler's diagram, the Y-axis (north to south) represents on-time, on-budget development measured against the specification limits - the management maturity of your process. The X-axis (left to right) represents how mature your development process is. By moving from the bottom of the chart to the top, you are theoretically moving from CMMI Level 2 to CMMI Level 4.
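Wheeler's two axes boil down to two yes/no questions - is the process statistically in control (predictable), and does its output conform to specification (on time, on budget)? That makes the four states easy to sketch as a tiny, purely illustrative classifier; the state names follow Wheeler's terminology.

```python
def wheeler_state(in_control, conforming):
    """Classify a process into one of Wheeler's four states.
    in_control: the process is statistically predictable.
    conforming: the output meets specification (on time, on budget)."""
    if in_control and conforming:
        return "ideal"            # predictable and on target
    if in_control and not conforming:
        return "threshold"        # predictable, but some output off target
    if not in_control and conforming:
        return "brink of chaos"   # hitting targets, but only by luck
    return "chaos"                # unpredictable and off target

print(wheeler_state(in_control=False, conforming=True))   # brink of chaos
```

The classifier makes the earlier point explicit: a team that is meeting its dates with an out-of-control process is not in the ideal state - it is one assignable cause away from chaos.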
Cumulative flow diagrams are central to evaluating the effectiveness of your process. Such a diagram deemphasizes time-to-task estimation and makes you focus on tracking down your variation. Using an iterative cumulative flow diagram, you can look at your work in process (WIP). The WIP helps you predict lead time - you can use the IterationPlan.xls spreadsheet in the planning stage to work out the details. When the lead time gets too long, you'll notice a dramatic dip in quality; that is because lead time correlates with defects (or bugs).
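The WIP-to-lead-time relationship that a cumulative flow diagram exposes is, in essence, Little's Law: average lead time equals average work in process divided by average throughput. A minimal sketch, with hypothetical numbers:

```python
def average_lead_time(wip, throughput_per_day):
    """Little's Law: lead time = WIP / throughput.
    wip: average number of work items in process.
    throughput_per_day: average number of items completed per day."""
    return wip / throughput_per_day

# 30 scenarios in process, 2 completed per day -> a 15-day lead time.
print(average_lead_time(30, 2))   # 15.0

# Halving WIP halves lead time at the same throughput.
print(average_lead_time(15, 2))   # 7.5
```

This is why a widening band on the diagram is a warning sign: if WIP grows while throughput stays flat, lead time (and with it the defect rate) must rise.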
If you apply statistical analysis to your process, you'll notice a normalization that occurs within your team. In the Agile approach, everyone works collaboratively; this normalization makes your process behave more like a single common-cause system. Of course, this largely depends on the team - it has to execute its process and methods very well.
Using a statistical approach, you can change fear into trust. MSF for CMMI Process Improvement has the resources to help you deal with fear. It provides the statistical visibility into your project. By working and buffering with variation, you can work in the true spirit of continuous improvement. Plan your scenarios according to priority. To prioritize, ask some of the following questions:
Should we evaluate the amount of resources, time, and effort to allocate based on each task?
What if a scenario appears to be more challenging than another?
Should we allocate more time and/or effort if there is more complexity?
The rule of thumb is to use a standard CMMI transaction cost; the underlying value will determine the iterations.
The remaining work report (Figure 6-5) is a classic example of a cumulative flow diagram. It indicates bottlenecks, queues of work and resources, how much work is left to be done, and when it will be done. The middle band represents work queuing for testing. The band on the left represents the difference between the requirements of an iteration and the work that has been started - for example, work queuing for development.
You can measure a work remaining trend against a velocity report to indicate how quickly you are progressing through the process - it's very much like takt time on an assembly line. You can then track variation and averages over time to provide estimates and ask questions such as: What's the degree of variation? Can we normalize it using training?
A velocity chart has nothing to do with conformance to plan or specification. This report (Figure 6-6) will provide metrics on how long code stabilization will take within an iteration.
The triage report (Figure 6-7) represents proposed work queuing for approval. It includes tasks, risks for mitigations, requirements for scope, and more. The triage deals with project management artifacts including issues that are blocking work items. If you see the triage lines growing, it means that your velocity isn't good enough, and your process is slowing down.
The bug rates report (Figure 6-8) contains information similar to a cumulative flow diagram and is a favorite report within Microsoft's internal teams. As long as the cumulative flow remains steady, you shouldn't have to worry about variation.
The quality indicators report (Figure 6-9) provides a powerful view into your project. It indicates the bugs, percentage of code coverage, and code churn against test results.
Here is how you can interpret the results of the unplanned work report. This report is generated for each iteration. The lighter color indicates work items that were generated at the beginning of the iteration. As you go along, new work items are added to the list - these additions are represented using the darker color (in the middle). There are three types of unplanned work: regressive work (in other words, work that was completed but must be done again due to bugs or defects found in the code), change requests, and scope change from a customer (otherwise known as scope creep). You can use the unplanned work report (Figure 6-10) to give you the delta between planned and unplanned work, so you can add a buffer during your iteration planning cycle. This type of report is not commonly found in Agile methodologies.
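The planned-versus-unplanned delta described above can be turned directly into an iteration planning buffer. The sketch below uses hypothetical historical data; the function names and the simple averaging scheme are illustrative, not part of Team System's reports.

```python
# Each tuple is (planned work items, unplanned items added mid-iteration)
# for a past iteration -- hypothetical figures.
history = [(40, 8), (36, 10), (44, 9)]

def unplanned_fraction(iterations):
    """Share of total work that arrived unplanned across past iterations."""
    planned = sum(p for p, _ in iterations)
    unplanned = sum(u for _, u in iterations)
    return unplanned / (planned + unplanned)

def buffered_capacity(capacity, iterations):
    """Plan only the capacity not expected to be consumed by
    unplanned work; the remainder is the buffer."""
    return round(capacity * (1 - unplanned_fraction(iterations)))

print(unplanned_fraction(history))     # ~0.18 of work was unplanned
print(buffered_capacity(40, history))  # 33 -- plan 33 items, hold the rest in reserve
```

In other words, if roughly a fifth of past work arrived unplanned, commit only about four fifths of the next iteration's capacity and let the buffer absorb the rest.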
This is by no means an exhaustive list of CMMI reports, but it provides you with an overview of the techniques used to assess the progress of your project. For more in-depth information about extensibility and other features of SQL Server Reporting Services, Team System report analysis, and Business Intelligence (BI) related information, please refer to Chapter 16.
It is important to measure risk in any CMMI-driven project. Anything with the potential to cause special cause variation can be construed as a risk. The goal of risk analysis is to identify a list of potential risks and analyze each for its likelihood of occurrence. You have to measure the impact on your schedule, budget, resources, and the quality of the work generated within the scope of the project. As with MSF for Agile Software Development, you can track and manage risks using the risk work item. Once you have identified your important risks, you can then propose mitigation schemes as a series of task work items associated with each risk.
You can also correlate issue work items with risk work items by looking at the issue logs, which carry tracking links to the risk logs. Walter A. Shewhart introduced the distinction between assignable cause variation (identifiable special cause variation, which can be anticipated and eliminated) and chance cause variation (the random variation inherent in any process), concepts that Deming embraced and built upon.
The diagram in Figure 6-11 shows the Issues and Blocked Work Items report, which is essentially a cumulative flow diagram used to track work items. It indicates any issues causing blockage in your process. When blocking issues do come up, you should create a work item. When you link the issue work item to the blocked work item, you can then track the issues more effectively. You can then use this report and the issue log to manage the issues and bring them back under control. Typically, you start with zero issues at the beginning and they build up as time goes on. Note that this report is not available in MSF for Agile Software Development, only in MSF for CMMI Process Improvement.
In MSF for CMMI Process Improvement, you can quantify all assignable cause variation using risk work items. Chance cause variation is harder to pin down, of course. You can use the issue work item to document issues that come up during the course of a project. If you are consistent with the way you document all your risks and issues, you will be able to learn and mitigate them in future projects. Using the Issues and Blocked Work Items report, you can use the evidence of impact to escalate issues for prompt resolution. The rate of resolved issues can serve as an indication of process maturity.
If you can't find risk in your project, you are either functioning at an extremely high level of process maturity, or you are having difficulty measuring the variation in your projects. As unpleasant as risk can be, it's important to recognize that you can't avoid it; you must adjust to working with risk. In MSF for CMMI Process Improvement, it's an important way we can empirically document and eliminate special cause variation from the environment.
MSF for CMMI Process Improvement has a defined methodology for dealing with risk. You should propose and devise mitigation schemes at the beginning of the project. An analysis has to be done to estimate risk and identify triggers for mitigation. Not all risks will cause show-stopping results. Using techniques such as the Pareto Principle (the 80/20 rule), you must prioritize which risks are the most important and what their thresholds are for triggering a mitigation. (Working on tasks to mitigate risk will take your team away from the task of developing code - mitigation tasks should be worked on only if there is a real chance that a risk will adversely affect your project.) The prioritization of risk can be done using four factors: impact, cost, likelihood, and mitigation. As all projects will be affected by risk at one point or another, you should create a contingency plan and buffer to recover from risk. Once you have worked out the predictable risk factors, you can enter them as work items (along with their mitigation tasks). For more detail on handling risk, refer to the documentation available in the MSF for CMMI Process Improvement process guidance.
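The four-factor prioritization just described can be sketched as a simple risk score with a Pareto cutoff. The scoring formula (exposure = likelihood × impact) is a common simplification, and all the risk records and helper names below are hypothetical; MSF's risk work items carry similar fields.

```python
# Hypothetical risk records: (name, likelihood 0-1, impact in schedule
# days if the risk occurs, mitigation cost in days).
risks = [
    ("key developer leaves",  0.2, 30, 5),
    ("third-party API slips", 0.5, 10, 2),
    ("scope creep",           0.7, 15, 3),
    ("build server outage",   0.1,  2, 1),
]

def exposure(likelihood, impact):
    """Risk exposure: expected schedule days lost to this risk."""
    return likelihood * impact

def pareto_top(risks, share=0.8):
    """Return the risks accounting for ~80 percent of total exposure,
    highest exposure first (Pareto's 80/20 rule)."""
    ranked = sorted(risks, key=lambda r: exposure(r[1], r[2]), reverse=True)
    total = sum(exposure(l, i) for _, l, i, _ in ranked)
    picked, running = [], 0.0
    for name, likelihood, impact, _cost in ranked:
        if running >= share * total:
            break
        picked.append(name)
        running += exposure(likelihood, impact)
    return picked

print(pareto_top(risks))   # the few risks worth active mitigation tasks
```

Risks below the cutoff still get risk work items for the record, but only the top slice earns mitigation tasks that pull developers away from code.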
MSF for CMMI Process Improvement is a radically new approach to helping companies get appraised at CMMI Level 3. It uses Agile techniques and principles based on Deming's variation model to mitigate the risks and issues associated with your project. It also moves away from traditional definitions of a project to provide scientific and empirical insight into your process.
Team System is quite versatile because it is adaptable to any process and environment. What if you don't want to use the MSF model as the basis of your software development project? Luckily, there are third-party vendors and organizations working on process templates for established Agile and formal processes. Let's look at some of them.