10.6 Iterative Improvement

In Chapter 5, we examined root-finding algorithms, such as Newton's algorithm, that converge toward a correct solution. Each iteration step generated an approximate solution, which was then used in the next iteration to generate a better approximation. We stopped when we deemed an approximation to be "close enough."

We can do something similar when we use matrices to solve systems of equations. Suppose we use the matrix operations described in the previous section to solve a system of equations Ax = b, and we compute our first solution x1. Most likely, this solution isn't exact because of roundoff errors, and so we can compute the residual vector

r1 = b - Ax1
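Computing the residual vector can be sketched in plain Python. This is a hypothetical illustration, not code from the text: the matrix A, right-hand side b, and approximate solution x1 below are made-up values.

```python
def residual(A, x, b):
    """Return r = b - A*x for a dense matrix A stored as a list of rows."""
    n = len(b)
    return [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]

# A small made-up 2x2 system and a deliberately rough approximate solution
A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]
x1 = [0.09, 0.63]        # approximate; the exact solution is [1/11, 7/11]

r1 = residual(A, x1, b)  # r1 is approximately [0.01, 0.02]
```

A nonzero residual tells us x1 doesn't satisfy Ax = b exactly; its entries measure how far off each equation is.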
We use r1 to solve the system

Az1 = r1
for z1. The values of z1 represent the discrepancies in the corresponding values of x1 that caused the residual values in r1. In other words, if we compute x2 = x1 + z1, then x2 is the correct solution to the original equation Ax = b. That's assuming z1 was computed correctly, but, of course, it probably wasn't, either. So we iterate again:
We solve

Az2 = r2
for z2, and then x3 = x2 + z2. We continue iterating, and if all goes well, the xi will converge toward the correct solution. We stop when we decide we're "close enough."
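The whole procedure can be sketched as a short loop. This is a minimal, assumption-laden illustration: the helper names `solve` and `refine` are mine, `solve` stands in for whatever linear-system solver the previous section's matrix operations provide (here, plain Gaussian elimination with partial pivoting), and the loop runs a fixed number of steps rather than testing a "close enough" tolerance.

```python
def solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented copy; A, b untouched
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def refine(A, b, x, iterations=3):
    """Iterative improvement: compute r = b - Ax, solve Az = r, add z to x."""
    n = len(b)
    for _ in range(iterations):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        z = solve(A, r)
        x = [xi + zi for xi, zi in zip(x, z)]
    return x

# Made-up example system
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x1 = solve(A, b)       # first (approximate) solution
x = refine(A, b, x1)   # iteratively improved solution
```

In practice each refinement step reuses the already-factored matrix, so the extra solves are cheap; the payoff comes when roundoff in the first solve is significant, since each step cancels part of that error.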