
Optimization Using The Conjugate Gradient Method
Syntax conjgrad(function fval, xin, scale, eps, maxit, level)
See Also conjdir , minline , sqp , nlsq

Description
Uses the conjugate gradient method to minimize an objective f(x) with respect to x.

The real or double-precision column vector xin specifies the initial estimate of the value that minimizes the objective f.

The column vector scale has the same type and dimension as xin. The i-th element of scale specifies the maximum absolute change allowed in the i-th element of x between iterations. Every element of scale must be greater than 0.

The scalar eps has the same type as xin and specifies the convergence criterion: convergence is accepted when the absolute change in x(i) is less than eps times scale(i) for all i.

The integer scalar maxit specifies the maximum number of iterations to attempt before giving up on convergence.

The integer scalar level specifies the amount of tracing. If level > 1, the current value of f(x) is printed at each iteration. If level > 2, the current argument value x and the gradient g are printed at each iteration. If level > 3, the step size and the function value are printed at each iteration of the line search.

The (i+1)-th column of the return value is the value of x at the i-th iteration. (The first column contains the initial value xin.) The return value has the same type and row dimension as xin; its column dimension is equal to the number of iterations plus one.
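For instance, assuming an objective routine fval of the form described below, the iteration history can be read off the columns of the return value as in the following sketch (the variable names and argument values here are illustrative only):

xin    = {0., 0.}              # initial estimate of the minimizer
scale  = {1., 1.}              # maximum absolute change per element
x      = conjgrad(function fval, xin, scale, 1e-4, 50, 0)
niter  = coldim(x) - 1         # number of iterations taken
xfirst = x.col(1)              # first column is the initial value xin
xlast  = x.col(coldim(x))      # last column is the final estimate
print xlast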

fval(x, fout)
fval(x, fout, gout)
Computes the value of the objective, and optionally its gradient, at the point specified by x, where x is a column vector with the same type and dimension as xin. The input value of fout has no effect; on output, fout is equal to f(x). If the gout argument is present, its input value has no effect and its output value is the gradient of f(x) evaluated at x, with the same type and dimension as x. The i-th element of gout is the derivative of f(x) with respect to the i-th element of x.
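As a further sketch of this calling convention (the routine name rosen is illustrative and not part of the example below), an objective routine for the Rosenbrock test function f(x) = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2 could use arg(0) to check whether the gradient gout was requested:

function rosen(x, fout, gout) begin
     fout = 100. * (x(2) - x(1)^2.)^2. + (1. - x(1))^2.
     if arg(0) == 3 then begin
          # partial derivatives of the Rosenbrock function
          g1   = -400. * x(1) * (x(2) - x(1)^2.) - 2. * (1. - x(1))
          g2   =  200. * (x(2) - x(1)^2.)
          gout = {g1, g2}
     end
     return
end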

Example
The program below solves the problem
     minimize (x(1) - 1)^2 + (x(2) - 2)^2   with respect to x

The solution to this problem is x = {1, 2}.

clear
#
function fval(x, fout, gout) begin
     fout = (x(1) - 1.)^2. + (x(2) - 2.)^2.
     if arg(0) == 3 then begin
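          # called with three arguments, so the gradient gout is requested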
          f_x1  = 2. * (x(1) - 1.)
          f_x2  = 2. * (x(2) - 2.)
          gout  = {f_x1, f_x2}
     end
     return
end

xini  = {0., 0.}
scale = {3., 3.}
eps   = 1e-4
maxit = 20
level = 0
x     = conjgrad(function fval, xini, scale, eps, maxit, level)
xout  = x.col(coldim(x))
print xout
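
When the iterations converge, the printed value of xout should be close to the true minimizer x = {1, 2}.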