5.5 Expectation

Although both the distribution and density functions of a random variable provide all of the information necessary to describe its behavior, we often wish to have a single quantity (or a small number of them) that summarizes that behavior. One such measure is the expected value, or expectation, of a random variable; the expected value is also called the mean. The expectation of a discrete random variable X is defined as:

(5.67)  E[X] = \sum_{i} x_i \, p(x_i), \qquad p(x_i) = P(X = x_i)

and for a continuous random variable X with density function f(x) as:

(5.68)  E[X] = \int_{-\infty}^{\infty} x f(x)\, dx
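
As a minimal numerical illustration of equations (5.67) and (5.68), the following Python sketch (the fair six-sided die, the exponential density with rate lam = 2, and the use of NumPy are choices made purely for this example) evaluates the discrete sum directly and approximates the continuous integral by truncating the infinite range:

import numpy as np

# Discrete case, equation (5.67): fair six-sided die
values = np.arange(1, 7)
probs = np.full(6, 1.0 / 6.0)
mean_discrete = np.sum(values * probs)      # E[X] = sum of x * p(x) = 3.5

# Continuous case, equation (5.68): exponential density f(x) = lam * exp(-lam * x)
lam = 2.0
x = np.linspace(0.0, 50.0, 200001)          # truncate the infinite range at x = 50
f = lam * np.exp(-lam * x)
mean_continuous = np.trapz(x * f, x)        # numerically approximates 1/lam = 0.5

print(mean_discrete, mean_continuous)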

Suppose now that we have a function of a random variable X, say g(X). The expectation is given as:

(5.69)  E[g(X)] = \int_{-\infty}^{\infty} g(x) f(x)\, dx

for continuous random variables, and as:

(5.70)  E[g(X)] = \sum_{i} g(x_i)\, p(x_i)

for discrete random variables.

If we have jointly distributed random variables, the expectation is defined for discrete random variables as:

(5.71)  E[g(X,Y)] = \sum_{i} \sum_{j} g(x_i, y_j)\, p(x_i, y_j)

and for continuous random variables as:

(5.72)  E[g(X,Y)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x,y) f(x,y)\, dx\, dy

for the function g(X,Y). These formulations for expected values are valid provided the sums and integrals on the right-hand sides converge to finite values.
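
A similar sketch, with an arbitrarily chosen discrete distribution and a small hypothetical joint probability table, applies equation (5.70) to g(X) = X^2 and equation (5.71) to g(X,Y) = XY:

import numpy as np

# Equation (5.70): E[g(X)] with g(x) = x**2 for a fair die
values = np.arange(1, 7)
probs = np.full(6, 1.0 / 6.0)
e_x_squared = np.sum(values**2 * probs)      # 91/6, approximately 15.17

# Equation (5.71): E[g(X,Y)] with g(x, y) = x * y over a hypothetical joint pmf
x_vals = np.array([0, 1])
y_vals = np.array([1, 2, 3])
p_xy = np.array([[0.10, 0.20, 0.10],         # rows indexed by x, columns by y
                 [0.20, 0.25, 0.15]])        # entries sum to 1
g = np.outer(x_vals, y_vals)                 # g(x, y) = x * y
e_xy = np.sum(g * p_xy)

print(e_x_squared, e_xy)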

There are a few useful laws relating to expectation that we will now discuss. Suppose that we wish to find the following:

(5.73)  E[aX + b] = \int_{-\infty}^{\infty} (ax + b) f(x)\, dx

The expression on the right becomes:

(5.74)  a \int_{-\infty}^{\infty} x f(x)\, dx + b \int_{-\infty}^{\infty} f(x)\, dx

From equations (5.51) and (5.69), equation (5.74) becomes:

(5.75)  E[aX + b] = a E[X] + b

Setting either a or b to zero results in the following:

(5.76)  E[aX] = a E[X]

(5.77)  E[b] = b
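
This linearity property is easy to check numerically. The sketch below, reusing the illustrative die distribution and arbitrary constants a and b, computes E[aX + b] directly and compares it with aE[X] + b:

import numpy as np

values = np.arange(1, 7)
probs = np.full(6, 1.0 / 6.0)
a, b = 3.0, 2.0

e_x = np.sum(values * probs)
lhs = np.sum((a * values + b) * probs)   # E[aX + b] computed directly
rhs = a * e_x + b                        # a * E[X] + b, as in equation (5.75)
print(lhs, rhs)                          # both equal 12.5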

Now suppose that we have the following:

(5.78)  E[g_1(X) + g_2(X)] = \int_{-\infty}^{\infty} [g_1(x) + g_2(x)] f(x)\, dx

The integral becomes:

(5.79)  \int_{-\infty}^{\infty} g_1(x) f(x)\, dx + \int_{-\infty}^{\infty} g_2(x) f(x)\, dx

From equation (5.68), we obtain:

(5.80)  E[g_1(X) + g_2(X)] = E[g_1(X)] + E[g_2(X)]

Similarly, for functions of two random variables, we get:

(5.81)  E[g_1(X,Y) + g_2(X,Y)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} [g_1(x,y) + g_2(x,y)] f(x,y)\, dx\, dy

which becomes:

(5.82)  \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g_1(x,y) f(x,y)\, dx\, dy + \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g_2(x,y) f(x,y)\, dx\, dy

From equation (5.72), we obtain:

(5.83)  E[g_1(X,Y) + g_2(X,Y)] = E[g_1(X,Y)] + E[g_2(X,Y)]

Similarly,

(5.84)  E[X + Y] = E[X] + E[Y]
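
The sketch below, reusing the hypothetical joint table from the earlier example, computes E[X + Y] from the joint distribution and compares it with E[X] + E[Y] computed from the marginals; the two agree, as the sum rule requires:

import numpy as np

x_vals = np.array([0, 1])
y_vals = np.array([1, 2, 3])
p_xy = np.array([[0.10, 0.20, 0.10],
                 [0.20, 0.25, 0.15]])

p_x = p_xy.sum(axis=1)                   # marginal pmf of X
p_y = p_xy.sum(axis=0)                   # marginal pmf of Y
e_x = np.sum(x_vals * p_x)
e_y = np.sum(y_vals * p_y)

# E[X + Y] from the joint pmf, equation (5.71) with g(x, y) = x + y
g = x_vals[:, None] + y_vals[None, :]
e_sum = np.sum(g * p_xy)
print(e_sum, e_x + e_y)                  # both equal 2.55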

Consider the product of two independent random variables, X and Y. By equation (5.72), its expectation is:

(5.85)  E[XY] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x y\, f(x,y)\, dx\, dy

which, because of equation (5.58), becomes:

(5.86)  E[XY] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x y\, f_X(x) f_Y(y)\, dx\, dy

Because the integrand factors into a function of x times a function of y, the double integral separates into a product of two single integrals:

(5.87)  E[XY] = \left[ \int_{-\infty}^{\infty} x f_X(x)\, dx \right] \left[ \int_{-\infty}^{\infty} y f_Y(y)\, dy \right]

From equations (5.69) and (5.85), we get:

(5.88)  E[XY] = E[X]\, E[Y]

for the independent random variables X and Y.
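
A quick Monte Carlo check of this product rule, with arbitrarily chosen independent distributions for X and Y, might look like the following sketch:

import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=1_000_000)   # independent samples of X, E[X] = 2
y = rng.uniform(0.0, 4.0, size=1_000_000)        # independent samples of Y, E[Y] = 2

print(np.mean(x * y))                # sample estimate of E[XY]
print(np.mean(x) * np.mean(y))       # E[X] * E[Y]; the two agree closely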

For one special function of a random variable, g(X) = X^n, the expectation of g(X) is known as the "nth moment" of the random variable X. The first moment, obtained with g(X) = X, is simply the mean of the random variable X. Moments defined in this way are taken about the origin and are thus called "moments about the origin." A more common and useful definition shifts the density function so that its mean lies at the origin; moments defined on the shifted density are called "central moments." Thus, the function of the random variable becomes:

(5.89)  g(X) = (X - \mu)^n

where the mean is given by:

(5.90)  \mu = E[X]

The central moment, or moment about the mean, is therefore defined as:

(5.91)  E[(X - \mu)^n] = \sum_{i} (x_i - \mu)^n\, p(x_i)

for the discrete random variable X, and as:

(5.92)  E[(X - \mu)^n] = \int_{-\infty}^{\infty} (x - \mu)^n f(x)\, dx

for the continuous random variable X.
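
The sketch below, again using the illustrative die distribution, computes the first few central moments by equation (5.91); the first central moment is always zero, and the second is the variance discussed next:

import numpy as np

values = np.arange(1, 7)
probs = np.full(6, 1.0 / 6.0)
mu = np.sum(values * probs)                  # the mean, first moment about the origin

# Central moments E[(X - mu)**n] for n = 1, 2, 3, as in equation (5.91)
central = [np.sum((values - mu) ** n * probs) for n in (1, 2, 3)]
print(central)       # [0.0, 2.9166..., 0.0]; the second entry is the variance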

An important measure of the variability of a random variable about its mean is the "variance." This measure tells us, loosely speaking, how concentrated the values of the random variable are relative to the mean. A small variance indicates that most of the probability mass lies close to the mean, while a large variance indicates that the values are more spread out. The variance of a random variable is its second central moment and is represented as:

(5.93)  Var[X] = \sigma_X^2 = \sigma^2 = E[(X - \mu)^2] = \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\, dx

Note the use of several different notations; all are common. For some density functions f(x), the integral in equation (5.93) may be difficult to evaluate. Fortunately, we can derive an alternative expression for the variance, as follows:

(5.94)  Var[X] = E[X^2 - 2\mu X + \mu^2] = E[X^2] - 2\mu E[X] + \mu^2 = E[X^2] - (E[X])^2

The standard deviation of a random variable is defined as the square root of the variance and is denoted as:

(5.95)  \sigma_X = \sqrt{Var[X]}
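
The two variance formulas and the standard deviation can be compared numerically; the following sketch uses the same illustrative die distribution as before:

import numpy as np

values = np.arange(1, 7)
probs = np.full(6, 1.0 / 6.0)
mu = np.sum(values * probs)

var_def = np.sum((values - mu) ** 2 * probs)     # E[(X - mu)^2], equation (5.93)
var_alt = np.sum(values**2 * probs) - mu**2      # E[X^2] - (E[X])^2, equation (5.94)
sigma = np.sqrt(var_def)                         # standard deviation, equation (5.95)
print(var_def, var_alt, sigma)                   # 35/12, 35/12, about 1.708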

The covariance of two random variables is a measure of the degree of linear dependence, also called "correlation," of the two variables. The covariance is defined as:

(5.96)  Cov[X,Y] = E[(X - E[X])(Y - E[Y])]

If X and Y are independent, the covariance is equal to zero. This results from the following:

(5.97)  Cov[X,Y] = E[(X - \mu_X)(Y - \mu_Y)] = E[XY] - \mu_Y E[X] - \mu_X E[Y] + \mu_X \mu_Y = E[XY] - \mu_X \mu_Y

by equation (5.90):

Cov[X,Y] = E[XY] - E[X]\, E[Y]

by using equation (5.88) we get, for independent X and Y:

Cov[X,Y] = E[X]\, E[Y] - E[X]\, E[Y] = 0

Equation (5.97) gives a more convenient means for calculating covariance. Two random variables are said to be uncorrelated if Cov[X,Y] = 0.
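
As an illustration (the sample distributions below are arbitrary choices), the following sketch estimates Cov[X,Y] = E[XY] - E[X]E[Y] from samples, once for an independent pair and once for a linearly dependent pair:

import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
x = rng.normal(0.0, 1.0, n)
y_indep = rng.normal(0.0, 1.0, n)              # generated independently of x
y_corr = 0.8 * x + rng.normal(0.0, 1.0, n)     # linearly dependent on x

def cov(a, b):
    # Cov[A,B] = E[AB] - E[A]E[B], the convenient form in equation (5.97)
    return np.mean(a * b) - np.mean(a) * np.mean(b)

print(cov(x, y_indep))   # close to 0
print(cov(x, y_corr))    # close to 0.8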

There are several useful properties of the variance, which we discuss next; we then derive a lower bound on the probability that any random variable lies within a given number of standard deviations of its mean.

From equations (5.75), (5.76), (5.77), and (5.94), we can easily show that:

(5.98)  Var[aX + b] = a^2\, Var[X]

by using (5.93):

Var[aX + b] = E[(aX + b - aE[X] - b)^2] = E[a^2 (X - E[X])^2] = a^2\, Var[X]

From equations (5.76) and (5.93):

(5.99)  Var[aX] = a^2\, Var[X]
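
This scaling property can be checked by simulation; the sketch below uses an arbitrary exponential sample and arbitrary constants a and b:

import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=1_000_000)
a, b = 3.0, 5.0

print(np.var(a * x + b))      # sample variance of aX + b
print(a**2 * np.var(x))       # a^2 * Var[X]; the additive constant b has no effect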

For two jointly distributed random variables, X and Y, the variance of their sum is given by:

(5.100)  Var[X + Y] = Var[X] + Var[Y] + 2\, Cov[X,Y]
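
The following sketch checks equation (5.100) on a pair of deliberately correlated samples (the particular construction of Y from X is just an example):

import numpy as np

rng = np.random.default_rng(11)
n = 1_000_000
x = rng.normal(0.0, 1.0, n)
y = 0.5 * x + rng.normal(0.0, 1.0, n)          # correlated with x by construction

cov_xy = np.mean(x * y) - np.mean(x) * np.mean(y)
print(np.var(x + y))                           # direct sample variance of X + Y
print(np.var(x) + np.var(y) + 2.0 * cov_xy)    # Var[X] + Var[Y] + 2 Cov[X,Y]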

Given any random variable, it is possible to derive an expression for the minimum probability that the random variable lies within k standard deviations of its mean. This result is known as Chebyshev's theorem and is stated as follows:

(5.101)  P(|X - \mu| < k\sigma) \geq 1 - \frac{1}{k^2}

Equation (5.101) can be derived as follows. From equation (5.93):

(5.102)  \sigma^2 = E[(X - \mu)^2] = \int_{-\infty}^{\infty} (x - \mu)^2 f(x)\, dx

(5.103)  \sigma^2 = \int_{-\infty}^{\mu - k\sigma} (x - \mu)^2 f(x)\, dx + \int_{\mu - k\sigma}^{\mu + k\sigma} (x - \mu)^2 f(x)\, dx + \int_{\mu + k\sigma}^{\infty} (x - \mu)^2 f(x)\, dx

Because the middle integral is positive or zero, we can remove it from the expression to get:

(5.104)  \sigma^2 \geq \int_{-\infty}^{\mu - k\sigma} (x - \mu)^2 f(x)\, dx + \int_{\mu + k\sigma}^{\infty} (x - \mu)^2 f(x)\, dx

Within the ranges:

(5.105)  x \leq \mu - k\sigma

and

x \geq \mu + k\sigma

we have:

(5.106)  |x - \mu| \geq k\sigma

so that:

(5.107)  (x - \mu)^2 \geq k^2 \sigma^2

Thus, we can substitute into equation (5.104):

(5.108)  \sigma^2 \geq k^2 \sigma^2 \left[ \int_{-\infty}^{\mu - k\sigma} f(x)\, dx + \int_{\mu + k\sigma}^{\infty} f(x)\, dx \right]

and divide both sides by k^2 \sigma^2 to get:

(5.109)  \frac{1}{k^2} \geq \int_{-\infty}^{\mu - k\sigma} f(x)\, dx + \int_{\mu + k\sigma}^{\infty} f(x)\, dx

rewriting equation (5.109) as a single integral over the region |x - \mu| \geq k\sigma:

(5.110)  \int_{|x - \mu| \geq k\sigma} f(x)\, dx \leq \frac{1}{k^2}

and from equation (5.50):

(5.111)  P(|X - \mu| \geq k\sigma) \leq \frac{1}{k^2}

From equation (5.48) we have:

(5.112)  P(|X - \mu| < k\sigma) = 1 - P(|X - \mu| \geq k\sigma)

Thus, equation (5.101) results:

(5.113)  P(|X - \mu| < k\sigma) \geq 1 - \frac{1}{k^2}
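
As a numerical illustration, the sketch below draws samples from an arbitrarily chosen exponential distribution and compares the observed fraction of samples lying within k standard deviations of the sample mean against the Chebyshev lower bound 1 - 1/k^2:

import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(scale=1.0, size=1_000_000)   # any distribution will do
mu, sigma = np.mean(x), np.std(x)

for k in (1.5, 2.0, 3.0):
    frac_within = np.mean(np.abs(x - mu) < k * sigma)
    print(k, frac_within, 1.0 - 1.0 / k**2)      # observed fraction vs. Chebyshev bound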


