Details


Computational Formulas

The theoretical foundations for the thin-plate smoothing spline are described in Duchon (1976, 1977) and Meinguet (1979). Further results and applications are given in Wahba and Wendelberger (1980), Hutchinson and Bischof (1983), and Seaman and Hutchinson (1985).

Suppose that H_m is a space of functions whose partial derivatives of total order m are in L_2(E^d), where E^d is the domain of x.

Now, consider the data model

$$ y_i = f(x_i) + z_i^T \beta + \epsilon_i, \qquad i = 1, \ldots, n $$

where f ∈ H_m, the z_i are vectors of parametric covariates with coefficients β, and the ε_i are random errors.

Using the notation from the section 'The Penalized Least Squares Estimate,' for a fixed λ, estimate f by minimizing the penalized least squares function

$$ S_\lambda(f, \beta) = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - f(x_i) - z_i^T \beta \right)^2 + \lambda J_m(f) $$

There are several ways to define J_m(f). For the thin-plate smoothing spline, with x of dimension d, define J_m(f) as

$$ J_m(f) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} \sum \frac{m!}{\alpha_1! \cdots \alpha_d!} \left( \frac{\partial^m f}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}} \right)^2 \, dx_1 \cdots dx_d $$

where the sum is over all nonnegative integers α_1, ..., α_d with Σ_i α_i = m.

When d = 2 and m = 2, J_m(f) is as follows:

$$ J_2(f) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left[ \left( \frac{\partial^2 f}{\partial x_1^2} \right)^2 + 2 \left( \frac{\partial^2 f}{\partial x_1 \partial x_2} \right)^2 + \left( \frac{\partial^2 f}{\partial x_2^2} \right)^2 \right] dx_1 \, dx_2 $$

In general, m and d must satisfy the condition 2m − d > 0. For the sake of simplicity, the formulas and equations that follow assume m = 2. Refer to Wahba (1990) and Bates et al. (1987) for more details.

Duchon (1976) showed that f_λ can be represented as

$$ f_\lambda(x_i) = \theta_0 + \sum_{j=1}^{d} \theta_j x_{ij} + \sum_{j=1}^{n} \delta_j E_2(x_i - x_j) $$

where, for d = 2,

$$ E_2(s) = \frac{1}{2^3 \pi} \, \| s \|^2 \ln(\| s \|) $$

If you define K by (K)_ij = E_2(x_i − x_j) and T by (T)_ij = (x_ij), the goal is to find coefficients β, θ, and δ that minimize

$$ S_\lambda(\beta, \theta, \delta) = \frac{1}{n} \left\| y - T\theta - Z\beta - K\delta \right\|^2 + \lambda \, \delta^T K \delta $$

where Z is the matrix whose rows are the covariate vectors z_i.
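To make these matrices concrete, the following PROC IML statements sketch how E_2, K, and T can be formed. The data values, the dimension d = 2, and the absence of parametric Z columns (so that X = T below) are assumptions made purely for illustration; PROC TPSPLINE performs all of these computations internally. The same IML session is continued in the fragments that follow.

   proc iml;
   /* hypothetical data: n = 6 points in d = 2 dimensions (illustration only) */
   x = {0.1 0.2, 0.4 0.9, 0.5 0.3, 0.8 0.7, 0.9 0.1, 0.3 0.6};
   y = {1.0, 2.1, 1.5, 2.8, 1.2, 2.0};
   n = nrow(x);
   pi = constant('pi');

   /* E2(s) = ||s||**2 * ln(||s||) / (2**3 * pi) for d = 2, m = 2 */
   start E2(s) global(pi);
      r2 = ssq(s);                      /* squared Euclidean norm of s */
      if r2 = 0 then return(0);         /* limiting value at s = 0     */
      return( r2 * log(sqrt(r2)) / (8*pi) );
   finish;

   /* (K)_ij = E2(x_i - x_j); T holds the intercept and linear terms */
   K = j(n, n, 0);
   do ii = 1 to n;
      do jj = 1 to n;
         K[ii, jj] = E2(x[ii, ] - x[jj, ]);
      end;
   end;
   T = j(n, 1, 1) || x;      /* columns: 1, x1, x2                   */
   X = T;                    /* no Z covariates in this illustration */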

A unique solution is guaranteed if the matrix T is of full rank and δ^T K δ ≥ 0.

If α = (θ^T, β^T)^T and X = (T : Z), the expression for S_λ becomes

$$ S_\lambda(\alpha, \delta) = \frac{1}{n} \left\| y - X\alpha - K\delta \right\|^2 + \lambda \, \delta^T K \delta $$

The coefficients α and δ can be obtained by solving

$$ (K + n\lambda I_n)\,\delta + X\alpha = y $$
$$ X^T \delta = 0 $$

To compute α and δ, let the QR decomposition of X be

$$ X = (Q_1 : Q_2) \begin{pmatrix} R \\ 0 \end{pmatrix} $$

where (Q_1 : Q_2) is an orthogonal matrix and R is upper triangular, with X^T Q_2 = 0 (Dongarra et al. 1979).

Since X^T δ = 0, δ must be in the column space of Q_2. Therefore, δ can be expressed as δ = Q_2 γ for a vector γ. Substituting δ = Q_2 γ into the preceding equation and multiplying through by Q_2^T gives

$$ Q_2^T (K + n\lambda I_n) Q_2 \, \gamma = Q_2^T y $$

or

$$ \delta = Q_2 \gamma = Q_2 \left[ Q_2^T (K + n\lambda I_n) Q_2 \right]^{-1} Q_2^T y $$

The coefficient α can be obtained by solving

$$ R\alpha = Q_1^T \left[ y - (K + n\lambda I_n)\,\delta \right] $$
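Continuing the IML sketch, the following fragment computes δ and α for a fixed λ (the value 0.01 is assumed for illustration). For brevity it takes Q_2 to be any basis of the null space of X^T, obtained here from the HOMOGEN function rather than from the QR factors, and recovers α directly from Xα = y − (K + nλI_n)δ; the results agree with the QR route above because the expression for δ is invariant to the choice of null-space basis.

   /* continue the session above: solve for delta and alpha at a fixed lambda */
   lambda = 0.01;                        /* assumed illustrative value */
   M  = K + n*lambda*i(n);
   Q2 = homogen(t(X));                   /* basis of null space of X`  */

   gam   = solve(t(Q2)*M*Q2, t(Q2)*y);   /* Q2`(K + n*lambda*I)Q2 * gamma = Q2`y */
   delta = Q2*gam;
   alpha = solve(t(X)*X, t(X)*(y - M*delta));  /* from X*alpha = y - M*delta */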

The influence matrix A(λ) is defined as

$$ \hat{y} = A(\lambda)\, y $$

and has the form

$$ A(\lambda) = I_n - n\lambda\, Q_2 \left[ Q_2^T (K + n\lambda I_n)\, Q_2 \right]^{-1} Q_2^T $$
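In the same IML session, the influence matrix and the fitted values can be formed directly from Q_2; note that A(λ)y reproduces Xα + Kδ.

   /* influence matrix A(lambda) and fitted values */
   A = i(n) - n*lambda*Q2*inv(t(Q2)*M*Q2)*t(Q2);
   yhat = A*y;                           /* equals X*alpha + K*delta */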

Similar to the regression case, if you consider the trace of A(λ) as the degrees of freedom for the information signal and the trace of (I_n − A(λ)) as the degrees of freedom for the noise component, the estimate of σ² can be represented as

$$ \hat{\sigma}^2 = \frac{\mathrm{RSS}(\lambda)}{\mathrm{tr}\left( I_n - A(\lambda) \right)} $$

where RSS(λ) is the residual sum of squares. Theoretical properties of these estimates have not yet been published. However, good numerical results in simulation studies have been described by several authors. For more information, refer to O'Sullivan and Wong (1987), Nychka (1986a, 1986b, and 1988), and Hall and Titterington (1987).
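Continuing the sketch, the variance estimate is simply the residual sum of squares divided by the noise degrees of freedom:

   /* sigma-hat squared = RSS(lambda) / tr(I - A(lambda)) */
   RSS     = ssq(y - yhat);
   dfNoise = trace(i(n) - A);
   sigma2  = RSS / dfNoise;
   print lambda sigma2;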

Confidence Intervals

Viewing the spline model as a Bayesian model, Wahba (1983) proposed Bayesian confidence intervals for smoothing spline estimates as follows:

$$ f_\lambda(x_i) \pm z_{\alpha/2} \sqrt{ \hat{\sigma}^2 \, a_{ii}(\lambda) } $$

where a_ii(λ) is the ith diagonal element of the A(λ) matrix and z_{α/2} is the α/2 point of the normal distribution. The confidence intervals are interpreted as intervals 'across the function,' as opposed to point-wise intervals.
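In the running IML sketch, the limits at the design points can be computed from the diagonal of A(λ); the 95% level is an assumed choice for illustration.

   /* Bayesian confidence limits at the design points (alpha = 0.05) */
   z  = probit(1 - 0.05/2);
   se = sqrt(sigma2 # vecdiag(A));       /* sqrt(sigma2 * a_ii(lambda)) */
   lower = yhat - z # se;
   upper = yhat + z # se;
   print yhat lower upper;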

Suppose that you fit a spline estimate to experimental data that consists of a true function f and a random error term ε_i. In repeated experiments, it is likely that about 100(1 − α)% of the confidence intervals cover the corresponding true values, although some values are covered every time and other values are not covered by the confidence intervals most of the time. This effect is more pronounced when the true surface or function has small regions of particularly rapid change.

Smoothing Parameter

The quantity λ is called the smoothing parameter, which controls the balance between the goodness of fit and the smoothness of the final estimate.

A large λ heavily penalizes the mth derivative of the function, thus forcing f^(m) close to 0. A small λ places less of a penalty on rapid change in f^(m)(x), resulting in an estimate that tends to interpolate the data points.

The smoothing parameter greatly affects the analysis, and it should be selected with care. One method is to perform several analyses with different values for λ and compare the resulting final estimates.
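This effect is easy to see in the running IML sketch: the fragment below refits the toy data under a heavy and a light penalty (both λ values are assumed for illustration) so that the two fits can be compared side by side.

   /* compare fits under a heavy and a light penalty */
   start fitAt(lam) global(K, X, y, n);
      M  = K + n*lam*i(n);
      Q2 = homogen(t(X));
      return( y - n*lam*Q2*solve(t(Q2)*M*Q2, t(Q2)*y) );  /* yhat = y - n*lam*delta */
   finish;

   smooth = fitAt(10);       /* large lambda: close to a linear fit */
   rough  = fitAt(1e-6);     /* small lambda: nearly interpolates y */
   print y smooth rough;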

A more objective way to select the smoothing parameter λ is to use the 'leave-out-one' cross validation function, which is an approximation of the predicted mean squared error. A generalized version of the leave-out-one cross validation function is proposed by Wahba (1990) and is easy to calculate. This Generalized Cross Validation (GCV) function V(λ) is defined as

$$ V(\lambda) = \frac{ \frac{1}{n} \left\| \left( I - A(\lambda) \right) y \right\|^2 }{ \left[ \frac{1}{n} \, \mathrm{tr}\left( I - A(\lambda) \right) \right]^2 } $$
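Closing the running IML sketch, V(λ) can be evaluated on a grid of log10(nλ) values; the grid below is an assumed choice, and PROC TPSPLINE searches for the minimizing λ automatically.

   /* evaluate the GCV function V(lambda) over a grid of log10(n*lambda) */
   start gcv(lam) global(K, X, y, n);
      M   = K + n*lam*i(n);
      Q2  = homogen(t(X));
      A   = i(n) - n*lam*Q2*inv(t(Q2)*M*Q2)*t(Q2);
      num = ssq((i(n) - A)*y) / n;
      den = (trace(i(n) - A) / n)##2;
      return(num / den);
   finish;

   logNL = do(-4, 0, 0.5);                   /* grid of log10(n*lambda) */
   V = j(1, ncol(logNL), .);
   do idx = 1 to ncol(logNL);
      V[idx] = gcv( (10##logNL[idx]) / n );  /* back-transform to lambda */
   end;
   print (t(logNL) || t(V))[colname={'log10(n*lambda)' 'V(lambda)'}];
   quit;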

The justification for using the GCV function to select λ relies on asymptotic theory. Thus, you cannot expect good results for very small sample sizes or when there is not enough information in the data to separate the information signal from the noise component. Simulation studies suggest that for independent and identically distributed Gaussian noise, you can obtain reliable estimates of λ for n greater than 25 or 30. Note that, even for large values of n (say n ≥ 50), in extreme Monte Carlo simulations there may be a small percentage of unwarranted extreme estimates in which λ̂ = 0 or λ̂ = ∞ (Wahba 1983). Generally, if σ² is known to within an order of magnitude, the occasional extreme case can be readily identified. As n gets larger, the effect becomes weaker.

The GCV function is fairly robust against nonhomogeneity of variances and non-Gaussian errors (Villalobos and Wahba 1987). Andrews (1988) has provided favorable theoretical results when variances are unequal. However, this selection method is likely to give unsatisfactory results when the errors are highly correlated.

The GCV value may be suspect when λ is extremely small, because computed values may become indistinguishable from zero. In practice, calculations with λ = 0 or λ near 0 can cause numerical instabilities that result in an unsatisfactory solution. Simulation studies have shown that a λ with log10(nλ) > −8 is small enough that the final estimate based on this λ almost interpolates the data points. A GCV value based on a λ smaller than 10⁻⁸ may not be accurate.

ODS Tables Produced by PROC TPSPLINE

PROC TPSPLINE assigns a name to each table it creates. You can use these names to reference the table when using the Output Delivery System (ODS) to select tables and create output data sets. These names are listed in the following table. For more information on ODS, see Chapter 14, 'Using the Output Delivery System.'

Table 74.1: ODS Tables Produced by PROC TPSPLINE

ODS Table Name   Description                      Statement   Option
DataSummary      Data summary                     PROC        default
FitSummary       Fit parameters and fit summary   PROC        default
FitStatistics    Model fit statistics             PROC        default
GCVFunction      GCV table                        MODEL       LOGNLAMBDA, LAMBDA

By referring to the names of such tables, you can use the ODS OUTPUT statement to place one or more of these tables in output data sets.

For example, the following statements create an output data set named FitStats containing the FitStatistics table, an output data set named DataInfo containing the DataSummary table, an output data set named ModelInfo containing the FitSummary table, and an output data set named GCVFunc containing the GCVFunction table.

   proc tpspline data=Melanoma;
      model Incidences=Year /LOGNLAMBDA=(-4 to 0 by 0.2);
      ods output FitStatistics = FitStats
                 DataSummary   = DataInfo
                 FitSummary    = ModelInfo
                 GCVFunction   = GCVFunc;
   run;


