Details


Automatic Derivatives

Depending on the optimization method you select, analytical first- and second-order derivatives are computed automatically. Derivatives can still be supplied using the DER.parm syntax. These DER.parm derivatives are not verified by the differentiator. If any needed derivatives are not supplied, they are computed and added to the program statements. To view the computed derivatives, use the LISTDER or LIST option.

The following model is solved using Newton's method. Analytical first- and second-order derivatives are automatically computed.

   proc nlin data=Enzyme method=newton list;
      parms x1=4 x2=2;
      model Velocity = x1*exp(x2*Concentration);
   run;
   The NLIN Procedure
   Listing of Compiled Program Code

   Stmt   Line:Col   Statement as Parsed
   1      285:74     MODEL.Velocity = x1 * EXP(x2 * Concentration);
   1      285:74     @MODEL.Velocity/@x1 = EXP(x2 * Concentration);
   1      285:74     @MODEL.Velocity/@x2 = x1 * Concentration * EXP(x2 * Concentration);
   1      285:74     @@MODEL.Velocity/@x1/@x2 = Concentration * EXP(x2 * Concentration);
   1      285:74     @@MODEL.Velocity/@x2/@x1 = Concentration * EXP(x2 * Concentration);
   1      285:74     @@MODEL.Velocity/@x2/@x2 = x1 * Concentration * Concentration * EXP(x2 * Concentration);

Figure 50.5: Model and Derivative Code Output

Note that all the derivatives require the evaluation of EXP(x2 * Concentration). If you specify the LISTCODE option in the PROC NLIN statement, the actual machine-level code produced is as follows.

   The NLIN Procedure
   Code Listing

   1 Stmt MODEL   line 296 column 78.  (1)  arg=MODEL.Velocity  argsave=MODEL.Velocity
   Source Text:   model Velocity = x1 * exp(x2 * Concentration);

   Oper *      at 296:108 (30,0,2).   *     : _temp1 <- x2 Concentration
   Oper EXP    at 296:104 (103,0,1).  EXP   : _temp2 <- _temp1
   Oper *      at 296:98  (30,0,2).   *     : MODEL.Velocity <- x1 _temp2
   Oper eeocf  at 296:98  (18,0,1).   eeocf : _DER_ <- _DER_
   Oper =      at 296:98  (1,0,1).    =     : @MODEL.Velocity/@x1 <- _temp2
   Oper *      at 296:104 (30,0,2).   *     : @1dt1_1 <- Concentration _temp2
   Oper *      at 296:98  (30,0,2).   *     : @MODEL.Velocity/@x2 <- x1 @1dt1_1
   Oper =      at 296:98  (1,0,1).    =     : @@MODEL.Velocity/@x1/@x2 <- @1dt1_1
   Oper =      at 296:98  (1,0,1).    =     : @@MODEL.Velocity/@x2/@x1 <- @1dt1_1
   Oper *      at 296:104 (30,0,2).   *     : @2dt1_1 <- Concentration @1dt1_1
   Oper *      at 296:98  (30,0,2).   *     : @@MODEL.Velocity/@x2/@x2 <- x1 @2dt1_1

Figure 50.6: LISTCODE Output

Note that, in the generated code, only one exponentiation is performed. The generated code reuses previous operations to be more efficient.

Hougaard's Measure of Skewness

A 'close-to-linear' nonlinear regression model, first described by Ratkowsky (1990), is a model that produces parameters having properties similar to those produced by a linear regression model. That is, the least squares estimates of the parameters are close to being unbiased, normally distributed, minimum variance estimators.

A nonlinear regression model sometimes fails to be close to linear due to the properties of a single parameter. When this occurs, bias in the parameters can render inferences using the reported standard errors and confidence limits invalid. You can often fix the problem with reparameterization, replacing the offending parameter by one with better estimation properties.

You can use Hougaard's measure of skewness, g₁ᵢ, to assess whether a parameter is close to linear or whether it contains considerable nonlinearity. Specify the HOUGAARD option in the PROC NLIN statement to compute Hougaard's measure of skewness.

According to Ratkowsky (1990), if g₁ᵢ < 0.1, the estimator of parameter i is very close-to-linear in behavior and, if 0.1 < g₁ᵢ < 0.25, the estimator is reasonably close-to-linear. If g₁ᵢ > 0.25, the skewness is very apparent. For g₁ᵢ > 1, the nonlinear behavior is considerable.
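For example, the following statements, a minimal sketch that reuses the Enzyme example from the beginning of this section, request the skewness measure; the HOUGAARD option adds the measure to the parameter estimates output.

   proc nlin data=Enzyme hougaard;
      parms x1=4 x2=2;
      model Velocity = x1*exp(x2*Concentration);
   run;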

Hougaard's measure is based on the third central moment of the parameter estimator,

   E[β̂ᵢ − E(β̂ᵢ)]³

which is computed as a triple sum over the number of parameters. The terms of the sum involve the Jacobian vector Jₘ and the Hessian matrix Hₘ of the model, evaluated at each observation m. This third moment is normalized using the standard error:

   g₁ᵢ = E[β̂ᵢ − E(β̂ᵢ)]³ / (stderrᵢ)³

Missing Values

If the value of any one of the SAS variables involved in the model is missing from an observation, that observation is omitted from the analysis. If only the value of the dependent variable is missing, that observation has a predicted value calculated for it when you use an OUTPUT statement and specify the PREDICTED= option.

If an observation includes a missing value for one of the independent variables, both the predicted value and the residual value are missing for that observation. If the iterations fail to converge, all the values of all the variables named in the OUTPUT statement are missing values.
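For example, the following sketch (again using the Enzyme example) writes predicted values and residuals to an output data set; an observation with a missing Velocity but a nonmissing Concentration receives a predicted value but a missing residual.

   proc nlin data=Enzyme;
      parms x1=4 x2=2;
      model Velocity = x1*exp(x2*Concentration);
      output out=pred predicted=vhat residual=vres;
   run;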

Special Variables

Several special variables are created automatically and can be used in PROC NLIN program statements.

Special Variables with Values that are Set by PROC NLIN

The values of the following six special variables are set by PROC NLIN and should not be reset to a different value by programming statements:

_ERROR_

is set to 1 if a numerical error or invalid argument to a function occurs during the current execution of the program. It is reset to 0 before each new execution.

_ITER_

represents the current iteration number. The variable _ITER_ is set to −1 during the grid search phase.

_MODEL_

is set to 1 for passes through the data when only the predicted values are needed, not the derivatives. It is 0 when both predicted values and derivatives are needed. If your derivative calculations consume a lot of time, you can save resources by coding

if _model_ then return;

after your MODEL statement but before your derivative calculations; a complete sketch appears at the end of this list. The derivative code that PROC NLIN generates automatically includes this check.

_N_

indicates the number of times the PROC NLIN step has been executed. It is never reset for successive passes through the data set.

_OBS_

indicates the observation number in the data set for the current program execution. It is reset to 1 to start each pass through the data set (unlike the _N_ variable).

_SSE_

has the error sum of squares of the last iteration. During the grid search phase, the _SSE_ variable is set to 0. For iteration 0, the _SSE_ variable is set to the SSE associated with the point chosen from the grid search.
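As a sketch of how these variables can be used together (reusing the Enzyme example), the following program prints one trace line per pass through the data and skips the hand-coded derivatives on model-only passes:

   proc nlin data=Enzyme;
      parms x1=4 x2=2;
      model Velocity = x1*exp(x2*Concentration);
      if _obs_ = 1 then put _iter_= _sse_=;   /* one trace line per pass */
      if _model_ then return;                 /* skip derivatives when only predictions are needed */
      der.x1 = exp(x2*Concentration);
      der.x2 = x1*Concentration*exp(x2*Concentration);
   run;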

Special Variable Used to Determine Convergence Criteria

The special variable _LOSS_ can be used to determine convergence criteria:

_LOSS_

is used to determine the criterion function for convergence and step shortening. PROC NLIN looks for the variable _LOSS_ in the program statements and, if it is defined, uses the (weighted) sum of this value instead of the residual sum of squares to determine the criterion function for convergence and step shortening. This feature is useful in certain types of maximum-likelihood estimation where the residual sum of squares is not the basic criterion.
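For example, the following sketch fits a Poisson regression by minimizing the summed negative log-likelihood kernel rather than the residual sum of squares. The data set counts and its variables y and x are hypothetical.

   proc nlin data=counts;                /* hypothetical data set with y and x */
      parms b0=0 b1=0;
      lambda = exp(b0 + b1*x);
      model y = lambda;                  /* supplies predicted values */
      _loss_ = lambda - y*log(lambda);   /* negative Poisson log-likelihood kernel */
   run;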

Weighted Regression with the Special Variable _WEIGHT_

To get weighted least squares estimates of parameters, the _WEIGHT_ variable can be given a value in an assignment statement:

   _weight_ = expression;

When this statement is included, the expression on the right-hand side of the assignment statement is evaluated for each observation in the data set to be analyzed. The values obtained are taken as inverse elements of the diagonal variance-covariance matrix of the dependent variable.

When a variable name is given after the equal sign, the values of the variable are taken as the inverse elements of the variance-covariance matrix. The larger the _WEIGHT_ value, the more importance the observation is given.
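For example, assuming the variance of Velocity grows in proportion to Concentration in the Enzyme data (an assumption made here only for illustration), you could weight each observation by the reciprocal of Concentration:

   proc nlin data=Enzyme;
      parms x1=4 x2=2;
      model Velocity = x1*exp(x2*Concentration);
      _weight_ = 1/Concentration;   /* inverse of the assumed variance */
   run;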

The _WEIGHT_ variable can be a function of the estimated parameters. For estimation purposes, the derivative of the _WEIGHT_ variable with respect to the parameters is not included in the gradient and the Hessian of the loss function. This is normally the desired approach for iteratively reweighted least squares estimation. When the _WEIGHT_ variable is a function of the parameters, however, the resulting gradient and Hessian can lead to poor convergence or nonconvergence of the requested estimation. To have the derivative of the _WEIGHT_ variable with respect to the parameters included in the gradient and the Hessian of the loss function, do not use the _WEIGHT_ variable. Instead, redefine the model as

   y √(wgt(β)) = f(x, β) √(wgt(β))

where y is the original dependent variable, f(x, β) is the nonlinear model, and wgt(β) is the weight that is a function of the parameters.

If the _WEIGHT_ variable is not defined, a default value of 1 is used, and regular least squares estimates are obtained.

Troubleshooting

This section describes a number of problems that can occur in your analysis with PROC NLIN.

Excessive Time

If you specify a grid of starting values that contains many points, the analysis may take excessive time since the procedure must go through the entire data set for each point on the grid.

The analysis may also take excessive time if your problem takes many iterations to converge since each iteration requires as much time as a linear regression with predicted values and residuals calculated.
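A coarse grid is usually sufficient for locating starting values. The following sketch (reusing the Enzyme example) evaluates a 10 × 5 grid, which costs 50 passes through the data before iteration begins; the grid bounds are illustrative only.

   proc nlin data=Enzyme;
      parms x1=1 to 10 by 1
            x2=0.2 to 1.0 by 0.2;   /* 10 x 5 = 50 grid points */
      model Velocity = x1*exp(x2*Concentration);
   run;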

Dependencies

The matrix of partial derivatives may be singular, possibly indicating an overparameterized model. For example, if b0 starts at zero in the following model, the derivatives for b1 are all zero for the first iteration.

   parms b0=0 b1=.022;
   model pop=b0*exp(b1*(year-1790));
   der.b0=exp(b1*(year-1790));
   der.b1=(year-1790)*b0*exp(b1*(year-1790));

The first iteration changes a subset of the parameters; then the procedure can make progress in succeeding iterations. This singularity problem is local. The next example displays a global problem.

You may have a term b2 in the exponent that is nonidentifiable since it trades roles with b0 .

   parms b0=3.9 b1=.022 b2=0;
   model pop=b0*exp(b1*(year-1790)+b2);
   der.b0=exp(b1*(year-1790)+b2);
   der.b1=(year-1790)*b0*exp(b1*(year-1790)+b2);
   der.b2=b0*exp(b1*(year-1790)+b2);

Unable to Improve

The method may lead to steps that do not improve the estimates even after a series of step halvings. If this happens, the procedure issues a message stating that it is unable to make further progress, but it then displays the warning message

  PROC NLIN failed to converge  

and displays the results. This often means that the procedure has not converged at all. If you provided the derivatives, check them very closely and then check the sum-of-squares error surface before proceeding. If PROC NLIN has not converged, try a different set of starting values, a different METHOD= specification, the G4 option, or a different model.

Divergence

The iterative process may diverge, resulting in overflows in computations. It is also possible that parameters enter a space where arguments to such functions as LOG and SQRT become illegal. For example, consider the following model:

   parms b=0;
   model y=x/b;

Suppose that y happens to be all zero and x is nonzero. There is no least squares estimate for b, since the SSE declines as b approaches infinity or minus infinity. The same model can be parameterized with no problem as y = a*x.

If you have divergence problems, try reparameterizing, selecting different starting values, increasing the maximum allowed number of iterations (the MAXITER= option), specifying an alternative METHOD= option, or including a BOUNDS statement.
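For the example above, reparameterization is the simplest fix; a BOUNDS statement can also keep a parameter away from an illegal region. The following sketch shows both, assuming a hypothetical data set d with variables y and x.

   /* Reparameterize: estimate a = 1/b directly */
   proc nlin data=d;
      parms a=1;
      model y = a*x;
   run;

   /* Or keep b away from zero with a bound */
   proc nlin data=d;
      parms b=1;
      bounds b > 1e-8;
      model y = x/b;
   run;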

Local Minimum

The program may converge to a local rather than a global minimum. For example, consider the following model.

   parms a=1 b=-1;
   model y=(1-a*x)*(1-b*x);

Once a solution is found, an equivalent solution with the same SSE can be obtained by swapping the values of a and b .

Discontinuities

The computational methods assume that the model is a continuous and smooth function of the parameters. If this is not true, the method does not work. For example, the following models do not work:

   model y=a+int(b*x);
   model y=a+b*x+4*(z>c);

Responding to Trouble

PROC NLIN does not necessarily produce a good solution the first time. Much depends on specifying good initial values for the parameters. You can specify a grid of values in the PARMS statement to search for good starting values. While most practical models should give you no trouble, other models may require switching to a different iteration method or an inverse computation method. Specifying the option METHOD=MARQUARDT sometimes works when the default method (Gauss-Newton) does not work.
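For example, for the local-minimum model shown earlier, combining a grid of starting values with METHOD=MARQUARDT is one reasonable strategy (the data set d and the grid bounds here are illustrative only):

   proc nlin data=d method=marquardt;
      parms a=0.5 to 2 by 0.5
            b=-2 to 0 by 0.5;
      model y = (1-a*x)*(1-b*x);
   run;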

Computational Methods

For the system of equations represented by the nonlinear model

   Y = F(Z, β*) + ε

where Z is a matrix of the independent variables, β* is a vector of the parameters, ε is the error vector, and F is a function of the independent variables and the parameters, there are two approaches to solving for the minimum. The first method is to minimize

   SSE(β) = e′e

where e = Y − F(β) and β is an estimate of β*.

The second method is to solve the nonlinear 'normal' equations

   X′F(β) = X′Y

where

   X = ∂F/∂β

In the nonlinear situation, both X and F(β) are functions of β, and a closed-form solution generally does not exist. Thus, PROC NLIN uses an iterative process: a starting value for β is chosen and continually improved until the error sum of squares is minimized.

The iterative techniques that PROC NLIN uses are similar to a series of linear regressions involving the matrix X evaluated for the current values of β and e = Y − F(β), the residuals evaluated for the current values of β.

The iterative process begins at some point β₀. Then X and Y are used to compute a Δ such that

   SSE(β₀ + kΔ) < SSE(β₀)

The four methods differ in how Δ is computed to change the vector of parameters:

   Steepest descent:  Δ = X′e
   Gauss-Newton:      Δ = (X′X)⁻ X′e
   Newton:            Δ = (X′X + Σᵢ eᵢ Hᵢ(β))⁻ X′e
   Marquardt:         Δ = (X′X + λ diag(X′X))⁻ X′e

The default method used to compute (X′X)⁻ is the sweep operator, which produces a reflexive generalized (g₂) inverse. In some cases it would be preferable to use a Moore-Penrose (g₄) inverse. If the G4 option is specified in the PROC NLIN statement, a g₄ inverse is used to calculate Δ on each iteration.

The Gauss-Newton and Marquardt iterative methods regress the residuals onto the partial derivatives of the model with respect to the parameters until the estimates converge. The Newton iterative method regresses the residuals onto a function of the first and second derivatives of the model with respect to the parameters until the estimates converge. Analytical first- and second-order derivatives are automatically computed.

Steepest Descent (Gradient)

The steepest descent method is based on the gradient of the residual sum of squares e′e:

   (1/2) ∂(e′e)/∂β = −X′e

The quantity −X′e is the gradient along which e′e increases. Thus Δ = X′e is the direction of steepest descent.

If the automatic variables _WEIGHT_ and _RESID_ are used, then

   Δ = X′ W_SSE r

is the direction, where

W_SSE

is an n × n diagonal matrix with elements of weights from the _WEIGHT_ variable. Each element contains the value of _WEIGHT_ for the i th observation.

r

is a vector with elements rᵢ from _RESID_. Each element rᵢ contains the value of _RESID_ evaluated for the i th observation.

Using the method of steepest descent, let

   Δ = α X′e

where the scalar α is chosen such that

   SSE(β + Δ) < SSE(β)

Note: The steepest descent method may converge very slowly and is therefore not generally recommended. It is sometimes useful when the initial values are poor.

Newton

The Newton method uses the second derivatives and solves the equation

   Δ = G⁻ X′e

where

   G = X′X + Σᵢ eᵢ Hᵢ(β)

with the sum taken over the n observations, and Hᵢ(β) is the Hessian of eᵢ:

   Hᵢ(β) = [ ∂²eᵢ / ∂βⱼ∂βₖ ]

If the automatic variables _WEIGHT_, _WGTJPJ_, and _RESID_ are used, then

   Δ = G⁻ X′ W_SSE r

is the direction, where

   G = X′ W_XPX X + Σᵢ rᵢ wᵢ Hᵢ(β)

with the sum taken over the n observations, wᵢ being the ith diagonal element of W_XPX, and

W_SSE

is an n × n diagonal matrix with elements of weights from the _WEIGHT_ variable. Each element contains the value of _WEIGHT_ for the i th observation.

W_XPX

is an n × n diagonal matrix with elements of weights from the _WGTJPJ_ variable. Each element contains the value of _WGTJPJ_ for the i th observation.

r

is a vector with elements rᵢ from the _RESID_ variable. Each element rᵢ contains the value of _RESID_ evaluated for the i th observation.

Gauss-Newton

The Gauss-Newton method uses the Taylor series

   F(β) = F(β₀) + X(β − β₀) + ⋯

where X = ∂F/∂β is evaluated at β = β₀.

Substituting the first two terms of this series into the normal equations X′F(β) = X′Y gives

   X′(F(β₀) + XΔ) = X′Y
   (X′X)Δ = X′Y − X′F(β₀) = X′e

and therefore

   Δ = (X′X)⁻ X′e

Caution: If X′X is singular or becomes singular, PROC NLIN computes Δ by using a generalized inverse for the iterations after singularity occurs. If X′X is still singular for the last iteration, the solution should be examined.

Marquardt

The Marquardt updating formula is as follows:

   Δ = (X′X + λ diag(X′X))⁻ X′e

The Marquardt method is a compromise between the Gauss-Newton and steepest descent methods (Marquardt 1963). As λ → 0, the direction Δ approaches the Gauss-Newton direction. As λ → ∞, the direction approaches the steepest descent direction.

Marquardt's studies indicate that the average angle between the Gauss-Newton and steepest descent directions is about 90°. A choice of λ between 0 and infinity produces a compromise direction.

By default, PROC NLIN chooses λ = 10⁻⁷ to start and computes a Δ. If SSE(β + Δ) < SSE(β), then λ = λ/10 for the next iteration. Each time SSE(β + Δ) > SSE(β), λ is increased to 10λ.

Note: If the SSE decreases on each iteration, then λ → 0, and you are essentially using the Gauss-Newton method. If the SSE does not improve, then λ is increased until you are moving in the steepest descent direction.

Marquardt's method is equivalent to performing a series of ridge regressions and is useful when the parameter estimates are highly correlated or the objective function is not well approximated by a quadratic.

Step-Size Search

The default method of finding the step size k is step halving (SMETHOD=HALVE). If SSE(β + Δ) > SSE(β), compute SSE(β + 0.5Δ), SSE(β + 0.25Δ), and so on, until a smaller SSE is found.

If you specify SMETHOD=GOLDEN, the step size k is determined by a golden section search. The TAU= parameter determines the length of the initial interval to be searched, with the interval having length TAU or 2 × TAU, depending on SSE(β + Δ). The RHO= parameter specifies how fine the search is to be. The SSE at each endpoint of the interval is evaluated, and a new subinterval is chosen. The size of the interval is reduced until its length is less than RHO. One pass through the data is required each time the interval is reduced. Hence, if RHO is very small relative to TAU, a large amount of time can be spent determining a step size. For more information on the golden section search, refer to Kennedy and Gentle (1980).

If you specify SMETHOD=CUBIC, the NLIN procedure performs a cubic interpolation to estimate the step size. If the estimated step size does not result in a decrease in SSE, step halving is used.
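For example, the following sketch (reusing the Enzyme example) selects the golden section search with illustrative values for the TAU= and RHO= options:

   proc nlin data=Enzyme smethod=golden tau=0.5 rho=0.01;
      parms x1=4 x2=2;
      model Velocity = x1*exp(x2*Concentration);
   run;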

Output Data Sets

The data set produced by the OUTEST= option in the PROC NLIN statement contains the parameter estimates on each iteration including the grid search.

The variable _ITER_ contains the iteration number. The variable _TYPE_ denotes whether the observation contains iteration parameter estimates ('ITER'), final parameter estimates ('FINAL'), or covariance estimates ('COVB'). The variable _NAME_ contains the parameter name for covariances, and the variable _SSE_ contains the objective function value for the parameter estimates. The variable _STATUS_ indicates whether the estimates have converged.

The data set produced by the OUTPUT statement contains statistics calculated for each observation. In addition, the data set contains all the variables in the input data set and any ID variables that are specified in the ID statement.
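For example, the following sketch saves the iteration history with OUTEST= and then keeps only the final parameter estimates:

   proc nlin data=Enzyme outest=est;
      parms x1=4 x2=2;
      model Velocity = x1*exp(x2*Concentration);
   run;

   /* keep only the final parameter estimates */
   data final;
      set est;
      if _type_ = 'FINAL';
   run;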

Confidence Intervals

Parameter Confidence Intervals

The parameter confidence intervals are computed using the Wald-based formula:

   β̂ᵢ ± stderrᵢ × t(N − P, 0.05/2)

where stderrᵢ is the standard error of the estimate of the ith parameter and t(N − P, 0.05/2) is a t statistic with N − P degrees of freedom, where N is the number of observations and P is the number of parameters. The confidence intervals are only asymptotically valid.

Model Confidence Intervals

Model confidence intervals are output when an OUT= data set is specified and one or more of the options L95M=, L95=, U95M=, or U95= is specified. The values of these terms are

   L95M, U95M:  f(xᵢ, β̂) ∓ t(N − P, 0.05/2) √( mse · xᵢ(X′X)⁻xᵢ′ )
   L95, U95:    f(xᵢ, β̂) ∓ t(N − P, 0.05/2) √( mse · (xᵢ(X′X)⁻xᵢ′ + 1) )

where X = ∂f/∂β and xᵢ is the ith row of X. These results are derived for linear systems. The intervals are approximate for nonlinear models.
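For example, the following sketch writes both sets of limits to an OUT= data set:

   proc nlin data=Enzyme;
      parms x1=4 x2=2;
      model Velocity = x1*exp(x2*Concentration);
      output out=limits predicted=vhat
             l95m=lmean u95m=umean    /* limits for the mean */
             l95=lpred  u95=upred;    /* limits for an individual prediction */
   run;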

Parameter Covariance Matrix

For unconstrained estimates (no active bounds), the parameter covariance matrix is

   (X′X)⁻ mse

for the gradient, Marquardt, and Gauss methods and

   H⁻ mse

for the Newton method, where H is the Hessian (or an approximation to the Hessian) of the objective. The mse is computed as

   mse = SSE / (nused − np)

where nused is the number of non-missing observations and np is the number of estimable parameters. The standard error reported for the parameters is the square root of the corresponding diagonal element of this matrix.

Equality restrictions can be written as a vector function

   h(θ) = 0

Inequality restrictions are either active or inactive. When an inequality restriction is active, it is treated as an equality restriction.

For the following, assume that the vector h(θ) contains all the current active restrictions. The constraint matrix A is

   A(θ̂) = ∂h(θ̂) / ∂θ̂

The covariance matrix for the restricted parameter estimates is computed as

   Z (Z′HZ)⁻ Z′

where H is the Hessian or an approximation to the Hessian, and Z is the last (np − nc) columns of Q, where Q is from an LQ factorization of the constraint matrix, nc is the number of active constraints, and np is the number of parameters. Refer to Gill, Murray, and Wright (1981) for more details on the LQ factorization.

The covariance matrix for the Lagrange multipliers is computed as

   (A H⁻ A′)⁻

Reported Convergence Measures

NLIN computes and reports four convergence measures labeled R, PPC, RPC, and OBJECT.

R

is the primary convergence measure for the parameters. It measures the degree to which the residuals are orthogonal to the Jacobian columns, and it approaches 0 as the gradient of the objective function becomes small. R is defined as

   R = √( e′X(X′X)⁻X′e / mse )

PPC

is the prospective parameter change measure. PPC measures the maximum relative change in the parameters implied by the parameter-change vector computed for the next iteration. At the kth iteration, PPC is the maximum over the parameters of

   |βᵢ^(k+1) − βᵢ^(k)| / |βᵢ^(k)|

where βᵢ^(k) is the current value of the ith parameter and βᵢ^(k+1) is the prospective value of this parameter after adding the change vector computed for the next iteration. These changes are measured before steplength adjustments are made. The parameter with the maximum prospective relative change is displayed with the value of PPC, unless the PPC is nearly 0.

RPC

is the retrospective parameter change measure. RPC measures the maximum relative change in the parameters from the previous iteration. At the kth iteration, RPC is the maximum over i of

   |βᵢ^(k) − βᵢ^(k−1)| / |βᵢ^(k−1)|

where βᵢ^(k) is the current value of the ith parameter and βᵢ^(k−1) is the previous value of this parameter. These changes are measured before steplength adjustments are made. The name of the parameter with the maximum retrospective relative change is displayed with the value of RPC, unless the RPC is nearly 0.

OBJECT

measures the relative change in the objective function value between iterations:

   |O^(k) − O^(k−1)| / |O^(k−1)|

where O^(k−1) is the value of the objective function from the previous iteration and O^(k) is the value from the current iteration. This is the old CONVERGEOBJ= criterion.

Displayed Output

In addition to the output data sets, PROC NLIN also produces the following items:

  • the estimates of the parameters and the residual sums of squares determined in each iteration

  • a list of the residual sums of squares associated with all or some of the combinations of possible starting values of the parameters

  • an analysis-of-variance table that includes as sources of variation Regression, Residual, Uncorrected Total, and Corrected Total, along with an F test

If the convergence criterion is met, PROC NLIN produces

  • Estimation Summary Table

  • Parameter Estimates

  • an asymptotically valid standard error of the estimate, Asymptotic Standard Error.

  • an Asymptotic 95% Confidence Interval for the estimate of the parameter

  • an Asymptotic Correlation Matrix of the parameters

Incompatibilities with 6.11 and Earlier Versions of PROC NLIN

The NLIN procedure now uses a compiler that is different from the DATA step compiler. The compiler was changed so that analytical derivatives could be computed automatically. For the most part, the syntax accepted by the old NLIN procedure can be used in the new NLIN procedure. However, there are several differences that should be noted.

  • You cannot specify a character index variable in the DO statement, and you cannot specify a character test in the IF statement. Thus DO I=1,2,3; is supported, but DO I='ONE','TWO','THREE'; is not. Likewise, IF 'THIS' < 'THAT' THEN ...; is supported, but IF 'THIS' THEN ...; is not.

  • The PUT statement, which is used mostly for program debugging in PROC NLIN, supports only some of the features of the DATA step PUT statement, and it has some new features that the DATA step PUT statement does not.

    • The PUT statement does not support line pointers, factored lists, iteration factors, overprinting, the _INFILE_ option, the ':' format modifier, or the symbol '$'.

    • The PUT statement does support expressions inside of parentheses. For example, PUT (SQRT(X)); produces the square root of X.

    • The PUT statement also supports the option _PDV_ to display a formatted listing of all the variables in the program. The statement PUT _PDV_; prints a much more readable listing of the variables than PUT _ALL_; does (see the sketch following this list).

  • You cannot use the '*' subscript, but you can specify an array name in a PUT statement without subscripts. Thus, ARRAY A ...; PUT A; is acceptable, but PUT A[*]; is not. The statement PUT A; displays all the elements of the array A. The PUT A=; statement displays all the elements of A, with each value labeled by the name of the element variable.

  • You cannot specify any arguments in the ABORT statement.

  • You can specify more than one target statement in the WHEN and OTHERWISE statements. That is, DO/END groups are not necessary for multiple target statements, for example, SELECT; WHEN(exp1); stmt1; stmt2; WHEN(exp2); stmt3; stmt4; END;.

  • You can specify only the options LOG, PRINT, and LIST in the FILE statement.

  • The RETAIN statement retains values only across one pass through the data set. If you need to retain values across iterations, use the CONTROL statement to make a control variable.
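The following sketch (reusing the Enzyme example) exercises several of the PUT features just described; the IF condition limits the trace to a single program execution.

   proc nlin data=Enzyme;
      parms x1=4 x2=2;
      array c[3] (1 2 3);
      model Velocity = x1*exp(x2*Concentration);
      if _obs_ = 1 and _iter_ = 0 then do;
         put (sqrt(x1));   /* an expression inside parentheses */
         put c;            /* all elements of the array c */
         put _pdv_;        /* formatted listing of all program variables */
      end;
   run;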

The ARRAY statement in PROC NLIN is similar to, but not the same as, the ARRAY statement in the SAS DATA step. The ARRAY statement is used to associate a name (of no more than eight characters) with a list of variables and constants. The array name can then be used with subscripts in the program to refer to the items in the list.

The ARRAY statement supported by PROC NLIN does not support all the features of the DATA step ARRAY statement. You cannot specify implicit indexing variables; all array references must have explicit subscript expressions. You can specify simple array dimensions; lower-bound specifications are not supported. A maximum of six dimensions is accepted.

On the other hand, the ARRAY statement supported by PROC NLIN does accept both variables and constants as array elements. (Constant array elements cannot be changed with assignment statements.)

   proc nlin data=nld;
      array b[4] 1 2 3 4;     /* constant array */
      array c[4] (1 2 3 4);   /* numeric array with initial values */
      b[1] = 2;               /* this is an ERROR, b is a constant array */
      c[2] = 7.5;             /* this is allowed */
      ...

Both dimension specification and the list of elements are optional, but at least one must be specified. When the list of elements is not specified, or fewer elements than the size of the array are listed, array variables are created by suffixing element numbers to the array name to complete the element list.

If the array is used as a pure array in the program rather than a list of symbols (the individual symbols of the array are not referenced in the code), the array is converted to a numerical array. A pure array is literally a vector of numbers that are accessed only by index. Using these types of arrays results in faster derivatives and compiled code.

   proc nlin data=nld;
      array c[4] (1 2 3 4);   /* numeric array with initial values */
      c[2] = 7.5;             /* this is C used as a pure array */
      c1 = 92.5;              /* this forces C to be a list of symbols */

ODS Table Names

PROC NLIN assigns a name to each table it creates. You can use these names to reference the table when using the Output Delivery System (ODS) to select tables and create output data sets. These names are listed in the following table. For more information on ODS, see Chapter 14, 'Using the Output Delivery System.'

Table 50.1: ODS Tables Produced in PROC NLIN

   ODS Table Name       Description                                Statement
   ANOVA                Analysis of variance                       default
   CodeDependency       Variable cross reference                   LISTDEP
   CodeList             Listing of program statements              LISTCODE
   ConvergenceStatus    Convergence status                         default
   CorrB                Correlation of the parameters              default
   EstSummary           Summary of the estimation                  default
   FirstDerivatives     First derivative table                     LISTDER
   IterHistory          Iteration output                           default
   MissingValues        Missing values generated by the program    default
   ParameterEstimates   Parameter estimates                        default
   ProgList             Listing of the compiled program            LIST

Convergence Status Table

The ConvergenceStatus table can be used to programmatically check the status of an estimation. The ConvergenceStatus table contains the variable STATUS, which takes one of the values 0, 1, or 3. If STATUS equals 0, the convergence criterion was met. If STATUS equals 1, the convergence criterion was met, but notes that may indicate a problem with the model were written to the log. If STATUS equals 3, the convergence criterion was not met.

The following sample program demonstrates how the ConvergenceStatus table can be used.

   /* Save the ConvergenceStatus     */
   /* table to the data set "status" */
   ods output ConvergenceStatus=status;
   proc nlin data=a;
      parameters a=1 b=1 c=1;
      model wgt=a+x/(b*y+c*z);
   run;

   data _null_;
      set status;
      if status > 0 then put "A problem occurred";
   run;


