XP and Options


A basic understanding of how options work helps us see how some of the basic value propositions of XP can be justified using its fundamental tenets. In this section, we examine two of these value propositions.

  • Proposition 1: Delaying the implementation of a fuzzy feature creates more value than implementing the feature now.

  • Proposition 2: Small investments and frequent releases create more value than large investments and mega-releases.

We begin with the technical premise of XP and its relation to the first proposition. Then we tackle the second proposition in the context of staged investments and learning. The option pricing models used to analyze each scenario are introduced just in time along the way.

The Technical Premise of XP

The software development community has spent enormous resources in recent decades trying to reduce the cost of change: better languages, better database technology, better programming practices, better environments and tools, new notations. … This is the technical premise of XP. [Beck2000]

XP challenges one of the traditional assumptions of software engineering: that the cost of changing a program rises exponentially over time, as illustrated in Figure 43.6.

Figure 43.6. The traditional assumption about the cost of change

graphics/43fig06.gif

The technical premise of XP is that this pathological cost behavior no longer holds. Better technologies, languages, practices, environments, and tools (objects, database technologies, pair programming, testing, and integrated development environments come to mind) all help keep software pliable. The result is a cost-of-change function that resembles the dampened curve in Figure 43.7.

Figure 43.7. The technical premise of XP

graphics/43fig07.gif

Why is a flattened cost curve important for an options-driven process? A flattened cost curve amplifies the impact of flexibility on value. It does so by creating new options that would not have existed under an exponential cost function and by reducing the exercise cost, and therefore increasing the value, of existing options.

You Aren't Going to Need It: Now or Later?

We are traditionally told to plan for the future … Instead, XP says to do a good job … of solving today's job today … and add complexity in the future where you need it. The economics of software as options favor this approach. [Beck2000]

One of the most widely publicized principles of XP is the "You Aren't Going to Need It (YAGNI)" principle. The YAGNI principle highlights the value of delaying an investment decision in the face of uncertainty about the return on the investment. In the context of XP, this implies delaying the implementation of fuzzy features until uncertainty about their value is resolved. YAGNI is a typical example of an option to delay, an all-too-common type of real option.

Extreme Programming Explained provides an example of the application of options theory to YAGNI.

Suppose you're programming merrily along and you see that you could add a feature that would cost you $10. You figure the return on this feature (its present value) is somewhere around $15. So the net present value of adding this feature [now] is $5. Suppose you knew in your heart that it wasn't clear at all how much this new feature would be worth: it was just your guess, not something you really knew was worth $15 to the customer. In fact, you figure that its value to the customer could vary as much as 100% from your estimate. Suppose further that it would still cost you about $10 to add that feature one year from now. What would be the value of the strategy of just waiting, of not implementing the feature now? … Well, at the usual interest rates of about 5%, the options theory calculator cranks out a value of $7.87. [Beck2000]

The scenario is illustrated in Figure 43.8.

Figure 43.8. YAGNI scenario

graphics/43fig08.gif

The delay option underlying the YAGNI scenario is akin to a financial call option, an option to acquire a risky asset on a future date. We will analyze the scenario using the famed Black-Scholes formula for calculating the value of a call option on an uncertain asset, the same formula used by Beck in Extreme Programming Explained. To understand how the cost of change affects the value proposition underlying the YAGNI scenario, we need to dig a little deeper into option pricing theory.

Option Pricing 101

Three financial economists, Fischer Black, Myron Scholes, and Robert Merton, undertook the groundbreaking work on option pricing in the early '70s. This work earned Scholes and Merton the Nobel Prize in economics in 1997 (Black died in 1995, before the prize was awarded). The equation, published in a seminal 1973 paper on the pricing of options and corporate liabilities, became known as the Black-Scholes formula [Black+1973]. The Black-Scholes formula revolutionized the financial options trading industry. Both the theory and the resulting formula, in various forms, are now widely used.

The Black-Scholes equation is illustrated in Figure 43.9. In the equation, C denotes the value of a call option on a non-dividend-paying asset with a strike price of L. M is the current value of the underlying asset, the asset on which the option is written. The option expires at time t. The risk-free interest rate is denoted by rf, expressed per the same time unit as t. The risk-free rate is the current interest rate on a risk-free asset, such as a short-term Treasury bill or government bond. Its value can simply be looked up in the business section of a daily newspaper. N(.) is the cumulative normal probability distribution function, and "exp" denotes the exponential function.

Figure 43.9. The Black-Scholes formula for the value of a call option

graphics/43fig09.gif

The parameter s denotes the volatility of the underlying asset. Volatility is a measure of total risk, which subsumes both market and private risk. It is given by the standard deviation of the continuous rate of return on the asset's price (value) over time. Usually, this parameter is estimated using historical data. For a stock option, volatility can be estimated by calculating the standard deviation of the stock's past returns over small intervals spanning a representative period, for example, using weekly returns over the past 12 months. For real assets, estimating volatility is much trickier, but sometimes market data can still be used. An example from software development is provided in [Erdogmus2001B].
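To make the estimation procedure concrete, here is a minimal Python sketch; the weekly price series is invented purely for illustration. It computes continuously compounded weekly returns, takes their standard deviation, and annualizes the result by the square root of the number of weeks in a year.

    from math import log, sqrt
    from statistics import stdev

    # Hypothetical weekly closing prices spanning a representative period
    prices = [40.0, 41.2, 39.5, 42.3, 44.0, 43.1, 45.8,
              44.9, 47.2, 46.0, 48.5, 47.9, 50.1]

    # Continuously compounded (log) weekly returns
    weekly_returns = [log(later / earlier)
                      for earlier, later in zip(prices, prices[1:])]

    weekly_volatility = stdev(weekly_returns)          # standard deviation per week
    annual_volatility = weekly_volatility * sqrt(52)   # scale to one year (52 weeks)
    print(f"estimated volatility: {annual_volatility:.0%} per year")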

The parameters M (the current value or price of the underlying asset), s (the volatility of the underlying asset), t (the option's time to expiration), L (the option's strike price), and rf (the risk-free interest rate) correspond to the five standard parameters of option pricing illustrated in Figure 43.4.

How did Black, Scholes, and Merton invent this magic equation? All earlier attempts at solving the option pricing problem involved calculating the net payoff of the option at expiration under the rational exercise assumption and then discounting this payoff back to the present to determine its current value. This approach required identifying the proper discount rate for the uncertain payoff. Essentially, the risk of an option is different from, and often much higher than, the risk of its underlying asset. Even if the discount rate for the underlying asset is known, choosing the proper discount rate for all possible payoffs of the option under different exercise scenarios is inherently problematic. Black, Scholes, and Merton succeeded not by solving the discount rate problem, but by avoiding it. Their solution is based on two key concepts:

  • Replicating portfolio

  • The law of one price, also known as no arbitrage

The first concept, replicating portfolio, states that the behavior of an option can be replicated by a portfolio consisting of a certain amount of the underlying asset and a risk-free loan that partially finances the purchase of the underlying asset. Thus, it is not necessary to buy options: one can create a do-it-yourself, or synthetic, option through a combination of the underlying asset and a loan. The option is then effectively equivalent to a levered position in the underlying asset. Indeed, the idea of financial leveraging has been known for a long time: Buying on margin has been widely practiced, especially during the stock market boom of the '90s.

The second concept, the law of one price, or no arbitrage, states that an efficient market lacks money machines. If one can replicate the behavior of an option exactly with a corresponding portfolio, the portfolio and the option are interchangeable for all practical purposes and thus must be worth the same. The two assets, the option and the replicating portfolio, have exactly the same payoffs under the same conditions and so must have the same price. If the exact composition of the replicating portfolio, and therefore how much it is worth in the present, can be determined, then how much the option is worth in the present will also be known. Option pricing problem solved!

The original derivation of the Black-Scholes equation is based on solving a specific stochastic differential equation in continuous time. Cox, Ross, and Rubinstein provide a much simpler derivation originating from a discrete model [Cox+1979], which we will also take advantage of later in the chapter. In the YAGNI example, we will stick with the Black-Scholes model.

Evaluation of the YAGNI Scenario

Table 43.2 illustrates the application of the Black-Scholes formula to the YAGNI scenario.

The NPV of implementing the feature now is $5 (the $15 present value of expected benefits, minus the $10 cost of implementation). If the implementation is deferred for one year, at a volatility of 100%, the Black-Scholes model yields an option value of $7.87, provided that the cost of implementation stays the same. Because deferring implementation incurs no initial cost, the option value equals the NPV of waiting a year before deciding whether to implement the feature. This value takes into account the possibility that the planned feature may be worthless in one year, which would force its implementation to be forgone, as well as the possibility that the actual benefit of the feature may very well exceed today's estimate of $15 (because of uncertainty), which would make the feature a much more profitable investment. The flexibility of deferring the decision increases the value created, because it helps limit the downside risk of the investment without a symmetric limitation on its upside potential.
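Readers who want to reproduce the $7.87 figure can use the following minimal Python sketch of the Black-Scholes call formula. The function and variable names are ours, not the chapter's, and the continuous-compounding convention is assumed.

    from math import exp, log, sqrt
    from statistics import NormalDist

    N = NormalDist().cdf  # cumulative standard normal distribution

    def bs_call(M, L, rf, t, s):
        """Black-Scholes value of a call option.
        M = current value of the underlying asset, L = strike (exercise cost),
        rf = risk-free rate, t = time to expiration, s = volatility.
        rf, t, and s must use consistent time units."""
        d1 = (log(M / L) + (rf + 0.5 * s ** 2) * t) / (s * sqrt(t))
        d2 = d1 - s * sqrt(t)
        return M * N(d1) - L * exp(-rf * t) * N(d2)

    # YAGNI scenario: $15 expected benefit, $10 cost, 5% rate, 1 year, 100% volatility
    print(round(bs_call(15, 10, 0.05, 1, 1.0), 2))  # prints 7.87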

Table 43.2. Calculation of the Option Value of the YAGNI Scenario

  M = 15.00
    B-S: Current price of the underlying asset
    YAGNI: PV of benefits from the proposed, deferrable feature implementation

  L = 10.00
    B-S: Strike (exercise) price of the call option
    YAGNI: Cost of implementing the proposed, deferrable feature

  rf = 0.05
    B-S: The risk-free rate of return
    YAGNI: The opportunity cost of implementation; the return the implementation cost would earn if invested in a risk-free security

  t = 1.00
    B-S: Years until expiration of the option
    YAGNI: Date on which the feature implementation decision must be made

  s = 1.00
    B-S: Volatility of the underlying asset (the standard deviation of the asset's rate of return)
    YAGNI: Volatility of the feature's benefits (the standard deviation of the rate of return on the feature's benefits)

  C = 7.87
    B-S: Value of the Black-Scholes call option
    YAGNI: Value of waiting one year before implementing the feature

Uncertainty is a key factor in this example. Figure 43.10 illustrates how the value created by waiting in the YAGNI scenario varies in response to the level of uncertainty, everything else being equal. Uncertainty is captured by the volatility of the feature's benefit. As the volatility increases, the option value of waiting also increases.

Figure 43.10. Sensitivity of the value of the YAGNI scenario to uncertainty

graphics/43fig10.gif
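A few lines reusing the bs_call helper from the earlier sketch reproduce the shape of this sensitivity; the volatility grid below is arbitrary.

    # Value of waiting one year, for several volatility levels of the feature's benefit
    for s in (0.25, 0.50, 0.75, 1.00, 1.25, 1.50):
        waiting_value = bs_call(15, 10, 0.05, 1, s)
        print(f"volatility {s:.0%}: value of waiting = {waiting_value:.2f} (implement now = 5.00)")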

A Deeper Look at YAGNI

The YAGNI example discussed in the previous section assumes that the cost of change is constant over time. More insight can be gained through a closer look at the value of the YAGNI delay option under other cost functions. Consider the following two cost curves:

  • A traditional cost curve, where the cost of change exponentially increases over time

  • A flattened cost curve, where the cost of change gradually increases over time at a diminishing rate

An example of each type of cost curve is plotted in Figure 43.11. To see how the shape of the cost curves and waiting time affect the value created, we reevaluate the YAGNI scenario, using these sample curves.

Figure 43.11. Sample cost curves: traditional versus flattened cost of change

graphics/43fig11.gif

Assume that the volatility of the feature's benefit is constant at 100% per year. Because this is per-period volatility, as the waiting time (or the expiration date of the option) increases, the cumulative volatility (the total uncertainty around the benefit) also increases. The longer one waits, the more likely it is for the actual benefit to wander up and down and deviate from its expected PV of $15.

Figure 43.12 shows the result of reevaluating the YAGNI option under the two cost curves. The option value, the value of waiting before implementing the feature, is shown for different waiting times for each curve. The dashed line represents the benchmark NPV of $5, the value of implementing the feature now, without any delay.

Figure 43.12. Option value of waiting under traditional and flattened cost curves

graphics/43fig12.gif

The bottom curve in Figure 43.12 reveals that under the traditional cost curve, waiting does not make much economic sense. Delaying the implementation decision destroys value because the increase in the cost of change overtakes the benefit of the flexibility to make the implementation decision later. As a result, the longer we wait, the less value we create.

Under the flattened cost curve (the top curve in Figure 43.12), however, the behavior is drastically different. If the uncertainty is expected to be resolved within a threshold waiting time, waiting is not profitable because of the initial ramp-up in the cost of change. After this initial, rapid ramp-up, the cost curve flattens, and waiting becomes increasingly profitable. The option value crosses over the $5 benchmark at approximately ten months, the threshold waiting time. Beyond this point, delaying the implementation decision creates more value than the immediate implementation of the feature.
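The two cost curves in Figure 43.11 are given only graphically, so the functions below are illustrative stand-ins (an exponentially growing cost and a cost that ramps up and then levels off), not the chapter's actual data. Reusing the bs_call helper from the earlier sketch, the following lines compare the value of waiting under each curve with the $5 NPV of implementing now; the exact crossover point depends entirely on the curves chosen.

    from math import exp

    def traditional_cost(t):
        # Illustrative only: cost of change grows exponentially with time (t in years)
        return 10 * exp(1.5 * t)

    def flattened_cost(t):
        # Illustrative only: cost ramps up quickly, then levels off near $13
        return 13 - 3 * exp(-3 * t)

    for months in range(1, 13):
        t = months / 12  # waiting time in years
        traditional = bs_call(15, traditional_cost(t), 0.05, t, 1.0)
        flattened = bs_call(15, flattened_cost(t), 0.05, t, 1.0)
        print(f"{months:2d} months: traditional {traditional:5.2f}"
              f"   flattened {flattened:5.2f}   implement now 5.00")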

In summary, option pricing theory confirms that under the traditional cost model of change, decisions about system features should be committed to as soon as possible: Waiting is not desirable in this situation. Under a flattened cost curve, however, the timing of commitment depends on the level of uncertainty and on when uncertainty about the benefits of the features is expected to be resolved. If uncertainty is high or is expected to be resolved only over the long term, decisions about system features should be committed to as late as is feasible; otherwise, they should be committed to now. Finally, under a constant cost function, commitment should always be made later rather than sooner. Figure 43.13 summarizes these conclusions.

Figure 43.13. YAGNI scenario and the cost of change: implement now or implement later

graphics/43fig13.gif

Why Small Investment and Frequent Releases?

Another important principle of XP is to start with a small initial investment. How can XP afford to start a project with few rather than many resources? What is the rationale behind this principle? Consider this statement from the CEO of an international consulting firm, made during a discussion of the strategy of a start-up venture in Silicon Valley.

I'm convinced that successful new ventures, successful new anythings, come from thinking big, but starting small. Most big failures come from thinking big and starting big and getting into trouble financially or strategically because there hasn't been enough learning to translate the big idea into a workable idea before overcommitting the amount of money or how the big idea is implemented. Iridium, the Motorola satellite-based mobile phone venture, comes to mind as an example. Note how [the president of the start-up being discussed] is gradually building up his capital base through a series of small financing rounds rather than a big-bang financing that, had he been successful in getting it, probably would have led to poor use of the money because he hadn't learned enough about how to translate his big idea into a workable one.

K. Favaro, CEO, Marakon Associates

In XP, the rapid feedback supplied by tight iterations resolves uncertainty, whether technical or business-related, and permits the results of the learning process to be incorporated into subsequent iterations. Tight implementation cycles and frequent releases provide decision points where the information that has been revealed can be taken advantage of to modify the course of the project. If the project is going badly, it can always be stopped. If it's going well, there is an option to continue with the next cycle. This process of continuous learning and acting based on the information revealed improves flexibility and minimizes risk. The cost of learning is limited to the small investment required to complete a small cycle, and its impact is therefore proportionately small. Taking proper action after learning increases value if the cost of learning is relatively small.

In the rest of this section, we illustrate exactly how small investments and frequent releases increase the value created.

A Black Hole: Large Investment, No Learning

First consider the complete opposite of small investments and frequent releases: a scenario involving a large initial commitment, but no learning, no decision points. Essentially, this is a single-stage project with a large investment in the beginning and a mega-release at the end.

Figure 43.14 illustrates the scenario. The only decision in the scenario is the go/no-go decision at the beginning. Alas, whether the large investment will pay off is not known a priori. Uncertainty about the success of the project is resolved only once the release has gone out the door, at the end. The probability of the project ending up worthless may be substantial because the course of the project cannot be modified in the face of new information. There are no opportunities to take corrective action.

Figure 43.14. A single-stage project with no intermediate learning

graphics/43fig14.gif

Now, to lighten things up a little, assume that the probability of complete failure is zero. Throwing in a few numbers will make things more concrete.

  • The large investment will cost $110 in present value terms.

  • The total duration of the project is four months.

  • The expected benefit of the whole project, again in present value terms, is $100.

  • The benefit is subject to a monthly volatility of 40%.

Where multiple sources of uncertainty are present, the volatility measure collapses the different factors involved into a single factor. Each of these factors may have both a private and a market component. In this case, as with the YAGNI scenario, let's suppose that changing customer requirements are the sole source of uncertainty; again, the requirements may be affected by both external and internal developments. What does the 40% figure imply? If the product were ready now, the customer would expect an immediate benefit of $100 in PV terms. Think of the 40% volatility as the standard deviation of the monthly percentage change in this expectation, based on past experience.
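To get a feel for the number: a one-standard-deviation move over a single month would take the $100 expectation up by a factor of roughly exp(0.4) ≈ 1.49, to about $149, or down by a factor of roughly exp(-0.4) ≈ 0.67, to about $67. These are the same upward and downward factors that reappear when the binomial lattice is constructed later in the section.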

NPV in the Black Hole

The NPV of the single-stage project is simply the PV of the expected benefit, net of the PV of the large investment. Because all figures are expressed in PV terms, they have already been discounted. Thus, the NPV is calculated as follows:

graphics/43equ04.gif


The result is a negative NPV of -$10! The project does not look attractive. According to the NPV rule, it should not be undertaken, because it is expected to destroy rather than create value.

Remarkably, we did not use the volatility of the benefit in this NPV calculation. This is because the benefit was already specified in expected PV terms; that is, the risk of the benefit is factored into the $100 estimate. Such is not always the case, however. As we will see, volatility plays a crucial role when the project involves decision points in the middle.

Light at the End of the Tunnel: Small Investments with Learning

Having established a benchmark for comparison with the single-stage project, let's now consider the alternative scenario, which is the real focus of the current discussion. This time, the same project is undertaken in multiple stages, each stage requiring a relatively small investment and resulting in a new release. This new scenario is illustrated in Figure 43.15.

Figure 43.15. Staged project with small investments

graphics/43fig15.gif

Here are some characteristics of the new scenario. The releases progress in small increments. The stages can be ordered to implement the higher-value features first so that the PV of the total value realized is maximized (earn early, spend late). Moreover, each stage provides a learning opportunity. The customer can revise the estimates of future benefits and make an informed decision on whether to stop, continue as is, or modify the course of the project. The development team can similarly learn and steer technical choices and manage customer expectations according to the revised estimates. Uncertainty is gradually resolved in multiple steps.

Most remarkably though, each stage effectively creates a real option to undertake a subsequent stage. If the project is abandoned midstream, the value created during previous stages can at least be partially preserved: Only the investment associated with the last release will be completely lost. The additional value created by staging over the benchmarked single-stage scenario may be substantial. The more uncertain the expected benefits are, the higher this difference will be.

A Project with Two Stages

To see why, consider a seemingly small improvement over the simple single-stage scenario discussed in the previous subsection: a two-stage version of the same project with a single, midpoint decision.

Each stage covers half the original scope, takes half the total time, yields half the expected benefit, and incurs half the total cost of the single-stage project. Learning is incorporated into the scenario as follows. At the end of the first stage, the customer will revise the estimate of the remaining benefit (the expected benefit of the second stage) and decide whether to continue. Therefore, the second stage is conditional on the outcome of the first stage. Initially, the project benefits are subject to the same uncertainty as in the benchmarked single-stage project, at a volatility of 40% per month. Unlike in the single-stage example, however, this time the volatility will have a serious effect on value. Table 43.3 summarizes the setup of the two-stage project.

The costs and benefits in each column of Table 43.3 are stated in PV terms relative to the beginning of the period covered by the column. The risk-free rate is assumed to be a constant 5% per year, or 0.41% per month. The overall cost of 110 is the sum of the first-stage cost and the second-stage cost, where the latter is first discounted at the risk-free rate over the two months back from the beginning of the second stage to the start of the project.
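As a quick check using the stated monthly rate, the overall cost works out to 55.2 + 55.2/(1.0041)^2, or roughly 55.2 + 54.8 ≈ 110.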

The correct way to calculate the NPV of this scenario is by viewing the second stage as an option that will be exercised only if its expected benefits (estimated at the end of the first stage) exceed its expected cost of 55.2. This contrasts with the DCF approach, which would view undertaking the second stage as a given.

Table 43.3. Setup of the Two-Stage Project

                               Stage 1          Stage 2            Overall
  Flexibility                  Mandatory        Optional
  Purpose                      Learning         Completion
  Uncertainty                  More uncertain   Less uncertain
  Cost                         55.2             55.2               110
  Benefit                      50               Stage 1 outcome    ?
  Volatility (per month)       40%              ?                  ?
  Duration (months)            2                2                  4
  Risk-free rate (per month)   0.41%            0.41%              0.41%

To value the option underlying the two-stage project, we need a model that is richer and more accommodating than that of Black-Scholes. We will employ a closely related but more general model, of which the Black-Scholes model is a special case. Figure 43.16 demonstrates how to calculate the expanded NPV of the two-stage project (the NPV including the option value) using this model and an accompanying technique called risk-neutral valuation. The details of the calculation are given next.

Figure 43.16. Valuation of the two-stage project

graphics/43fig16.gif

Uncertainty in a Staged Project: The Binomial Model

The first step is to determine how to model uncertainty. The binomial model [Sundaram1997] is frequently used in option pricing to model uncertainty when solving problems with more complex structures than standard option pricing formulas can accommodate.

In the binomial model, the underlying asset of an option is modeled using a two-state, discrete-time random walk process. Starting from an initial value, the asset moves either up or down over a fixed interval. The process is then repeated for successive intervals such that two consecutive opposite moves always take the asset back to its previous value, generating a binomial lattice. The resulting structure represents the possible evolution of the asset's value in discrete time. It is essentially a binary tree with merging upward and downward branches.

In the two-stage scenario, the underlying, uncertain asset is the benefit of the first stage. The value of the overall scenario depends on the behavior of this asset. On the left side of Figure 43.16, a binomial lattice is shown for this asset. The root of the lattice is represented by the value 50, which is the specified expected PV of the first-stage benefit. Recall that the total duration of the first stage is two months. Suppose that at the end of the first month, enough information will exist to revise this estimate. Thus, we divide the duration of the project's first stage into two equal intervals, resulting in an interval size of one month.

The values of the subsequent nodes of the binomial lattice are determined using the volatility estimate of 40% per month. From the volatility, first we calculate an upward factor, u, that is greater than unity and a downward factor, d, that is smaller than unity. Over each interval, the value of the asset either increases by a factor of u or decreases by a factor of d. The upward and downward factors are chosen to be consistent with the volatility estimate, the standard deviation of the rate of percentage change in the asset's value. If the volatility is s, u and d can be chosen as follows [Cox+1979]:

graphics/43equ05.gif


where t is the chosen interval size, expressed in the same time unit as s, and "exp" denotes the exponential function. In the current example, the volatility is 40% per month, and the selected interval size is one month. These choices yield the upward factor u = 1.49 and the downward factor d = 0.67. Before proceeding, we need to verify that one plus the monthly risk-free rate, 1 + 0.41% = 1.0041, is greater than d and smaller than u, a condition that must be satisfied so that we can apply the principles of replicating portfolio and law of one price to the scenario.
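In Python, these quantities can be computed and checked directly; the following small sketch uses the chapter's own figures.

    from math import exp, sqrt

    s, dt = 0.40, 1.0        # volatility of 40% per month, interval size of one month
    rf = 0.0041              # risk-free rate per interval (0.41% per month)

    u = exp(s * sqrt(dt))    # upward factor  -> about 1.49
    d = exp(-s * sqrt(dt))   # downward factor -> about 0.67 (note d = 1/u)

    assert d < 1 + rf < u    # required for the replication / no-arbitrage argument
    print(round(u, 2), round(d, 2))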

Treating Nonstandard Payoffs

As shown in Figure 43.16, the PV of the stage 1 benefit is 50, which constitutes the root node of the binomial lattice. The lattice is rolled out beginning with this initial value and multiplying it repeatedly by the upward and downward factors to cover two intervals, which takes us to the end of the first stage. This process yields three terminal nodes, 111, 50, and 22, each representing a possible stage 1 outcome. For each of these states, the expected stage 2 benefit equals the stage 1 outcome, as was stipulated in Table 43.3. This yields the estimate of the stage 2 benefit, conditional on the actual benefit of stage 1. Stage 2 will be undertaken only if its estimated benefit at the end of stage 1 exceeds its estimated cost of 55.2. Thus, applying the rational exercise assumption at the end of the first stage yields the following for each terminal node of the binomial lattice:

graphics/43equ06.gif


The overall net value, or payoff, at the end of the first stage therefore equals the following:

graphics/43equ07.gif


From top to bottom, the payoffs are calculated as 168, 50, and 22 for the three terminal nodes. Note that stage 2 will be undertaken only for the top node, the one with a payoff of 168. For the remaining nodes, the payoff simply equals the stage 1 outcome because the subsequent option on stage 2 is not exercised in those states.

Now comes the tricky part: recursively folding back the lattice to obtain the PV of the calculated payoffs. We perform this by invoking the same two concepts that underlie the Black-Scholes option pricing model: replicating portfolio and law of one price. Note that the Black-Scholes formula couldn't be used directly here, because the payoff function is not exactly the same as that of a standard call option: It does not simply equal the greater of zero or the maturity value of the asset net of an exercise price. We develop the general technique on the fly using the current example.

Calculating the Present Value of the Payoffs

Consider the top two terminal nodes of the binomial lattice in Figure 43.16, with the corresponding benefits of 111 and 50 and payoffs of 168 and 50. The terminal benefits of 111 and 50 are derived from the benefit at the parent node using the upward and downward factors: 111 = 75u and 50 = 75d. What is the expected discounted payoff at the beginning of the preceding interval? We could always attach probabilities to the upward and downward branches, calculate the expected payoff using these probabilities, and then discount the result back one interval using a proper discount rate. This procedure would work, except that (a) we don't know what those probabilities are, and (b) we don't know what the proper discount rate is. Besides, even if the probabilities were given, we would have to figure out different discount rates for different branches, because the risk of the project changes after the option has been exercised. For large lattices, this procedure is simply impractical.

Instead, we appeal to the concept of replicating portfolio. According to this concept, the payoffs of 168 and 50 at the terminal states can also be realized artificially by forming a portfolio composed of a twin security and a fixed-interest loan. Assume now that there exists such a security, one whose movement parallels that of the benefit. The absolute value of the twin security is not important, but it must be subject to the same upward and downward factors. When the benefit moves up or down, the twin security also moves up or down by the same factor. Assume that the value of the twin security at the beginning of an interval is M.

The replicating portfolio is formed at the beginning of the interval as follows.

  • Buy n units of the twin security. This represents the position of the replicating portfolio in the underlying asset.

  • Take out a loan in the amount of B at the risk-free rate of interest to partly finance this purchase. This represents the position of the replicating portfolio in the risk-free asset.

The worth of the replicating portfolio at the beginning of the interval then equals nM - B. If we can determine the value of n and B, we can calculate the exact value of the replicating portfolio (as we will see, we don't need to know the value of M). This is the right point to apply the law of one price: The value of the replicating portfolio must equal the expected value of the terminal payoff at the beginning of the interval, the price one would have to pay at that time to acquire the option to continue with the second stage at the end of the interval.

Now let's consider the possible values of the portfolio at the end of the interval. After one interval, the loan must be paid back with interest to receive the payoff. Regardless of what happens to the price of the twin security, the amount of the loan will be B(1 + rf), including the principal and the interest accrued. Here rf is the risk-free rate, the total interest rate on the loan over one interval.

On the one hand, if the price of the twin security moves up to uM, the portfolio will then be worth uMn - B(1 + rf). For the portfolio to replicate the payoff, this amount should equal 168, the payoff after the upward movement. On the other hand, if the price of the twin security falls to dM, the portfolio will be worth dMn - B(1 + rf), which must equal 50, the payoff after the downward movement. Thus the law of one price provides us with two equations.

If the price moves up:

graphics/43equ08.gif


If the price moves down:

graphics/43equ09.gif


Because rf, u, and d are all known, we can solve these two equations for B and n as a function of M, and then calculate the portfolio value at the beginning of the interval by plugging the solution into the expression nM - B. Fortunately, the unknown M is eliminated during this process, yielding a value of 104. This amount is precisely how much the stage 1 benefit, together with the option to continue with the second stage, would be worth at the node labeled 75 in the binomial lattice. We can repeat the same procedure for the middle and bottom terminal nodes to obtain a value of 35, and then once again with the two computed values 104 and 35, regarding them as new payoffs, to reach the root of the lattice. In the end, we obtain a final root value of 67. This amount is how much the stage 1 benefit, together with the option to continue with the second stage, would be worth at the beginning of the project.

A Simple Procedure: Risk-Neutral Valuation

The procedure described in the previous subsection may seem somewhat cumbersome. Fortunately, there is an easier way. Solving a system of simultaneous equations to obtain the portfolio value at the beginning of an interval is equivalent to computing the expected value of the payoffs at the end of the interval using an artificial probability measure, and then discounting back this expected value at the risk-free rate by one interval. Figure 43.17 illustrates this simple technique.

Figure 43.17. Risk-neutral valuation in the binomial model

graphics/43fig17.gif

In the middle portion of Figure 43.16, the portfolio values at the intermediary nodes and at the root of the binomial lattice are computed using the simplified procedure as follows. Starting with the terminal payoffs and recursively moving back in time:

graphics/43equ10.gif


where:

graphics/43equ11.gif


The quantities p and 1 - p here and in Figure 43.17 are referred to as risk-adjusted, or risk-neutral, probabilities. They are not the actual probabilities of the upward and downward movements of the underlying asset, yet they are used to compute an expected value (in Figure 43.17, the numerator in the equation on the left). The expected value is simply discounted back at the risk-free rate rf. The artificial probabilities p and 1 - p depend on the spread between u and d, the upward and downward movement factors of the twin security. In a way, then, p and 1 - p capture the variation, or the total risk, of the underlying asset relative to the risk-free asset.

The general, recursive process of computing the present value of an asset based on replication and law of one price (no arbitrage) principles is referred to as risk-neutral valuation.

A number of features are remarkable about this technique. First, the value calculated does not require the actual probability distribution of the underlying price movement. Second, it does not require a discount rate, given the initial value of the underlying asset. Third, the procedure is independent of how the future payoffs are calculated. Because the rules used to calculate the payoffs don't matter, the process is the same for any payoff function.
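To make the recursion concrete, here is a minimal Python sketch of risk-neutral valuation on a binomial lattice. The function and the numbers in the usage example are illustrative (they are not the spreadsheet behind Figure 43.16), but the structure mirrors the procedure described above: build the terminal payoffs from an arbitrary payoff rule, then repeatedly take risk-neutral expectations and discount back one interval at a time.

    from math import exp, sqrt

    def risk_neutral_value(root, s, dt, rf, steps, payoff):
        """Roll a binomial lattice back in time under risk-neutral valuation.
        root: current value of the underlying asset; s: volatility per unit time;
        dt: interval size; rf: risk-free rate per interval; steps: number of
        intervals; payoff: function mapping a terminal asset value to a payoff."""
        u, d = exp(s * sqrt(dt)), exp(-s * sqrt(dt))
        p = (1 + rf - d) / (u - d)                 # risk-neutral probability
        # Terminal asset values, highest first, fed through the payoff rule
        values = [payoff(root * u ** (steps - i) * d ** i) for i in range(steps + 1)]
        for _ in range(steps):                     # fold back one interval at a time
            values = [(p * up + (1 - p) * down) / (1 + rf)
                      for up, down in zip(values, values[1:])]
        return values[0]

    # Illustrative use: a call-style payoff on a hypothetical asset worth 100 today,
    # 30% volatility per interval, 1% risk-free rate per interval, three intervals.
    print(risk_neutral_value(100, 0.30, 1.0, 0.01, 3, lambda v: max(v - 105.0, 0.0)))
    # Any other payoff rule (for example, one encoding a staged project) can be
    # plugged in without changing the rollback procedure.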

Two-Stage Project: NPV with Option Value

The root value of 67 obtained in the previous subsection represents the PV of stage 1 and stage 2 combined, viewing stage 2 as an option on stage 1. This amount, however, does not account for the initial cost (the cost of stage 1, the investment necessary to create the option on stage 2 in the first place). If we subtract this cost of 55.2 (which is already given in PV terms) from the calculated value of 67, we obtain an expanded NPV of 12, as shown on the right side of Figure 43.16. This value is an expanded NPV in the sense that it subsumes the value of the staging option.

Remarkably, the new NPV is not only positive, but also significantly higher than the benchmark NPV of the single-stage project, which was calculated to be -10. The difference of 22 is sizable compared with the total expected benefit of the single-stage project. Although the two projects incur the same cost in PV terms, the two-stage project with learning creates a lot more value at the given level of volatility.

Impact of Uncertainty on the Option Value of Staging

The uncertainty of the expected benefit has a great impact on how much value is created when learning and additional flexibility are incorporated into the scenario. In the previous subsection, we calculated the expanded NPV using a volatility of 40% per month. This volatility captures the uncertainty of the benefit in terms of the variation in percentage changes in the estimate of the benefit from the start to the end of the first stage. What happens when this volatility increases or decreases?

Figure 43.18 plots the expanded NPV of the two-stage project as a function of volatility. As the volatility increases, the project value increases as well: The more uncertainty there is, the more important it is to have flexibility in the project. Remarkably, this effect was not observed in the single-stage project: As long as the PV of the benefit does not change, NPV remains constant. Although uncertainty also exists in the single-stage project, it was not accompanied by a discretionary, midproject action that depended on the uncertainty. Consequently, volatility has no further impact on the project value as long as it has already been accounted for in the PV estimate of the benefit.

Figure 43.18. Effect of uncertainty on the value of the two-stage project

graphics/43fig18.gif


