## Linear Systems

IDL offers a variety of methods for the solution of simultaneous linear equations. In order to use these routines successfully, the user should consider both existence and uniqueness criteria and the potential difficulties in finding the solution numerically.

The solution vector x of an n-by-n linear system Ax = b is guaranteed to exist and to be unique if the coefficient array A is invertible. Using a simple algebraic manipulation, it is possible to formulate the solution vector x in terms of the inverse of the coefficient array A and the right-hand side vector b: x = A⁻¹b. Although this relationship provides a concise mathematical representation of the solution, it is never used in practice. Array inversion is computationally expensive (requiring a large number of floating-point operations) and prone to severe round-off errors.
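In practice, a factorization-based routine such as LUDC with LUSOL (both described at the end of this section) is used instead of forming A⁻¹. The following sketch uses a small hypothetical symmetric system to illustrate the pattern:

```
; Solve a small system A x = b by LU decomposition,
; without ever forming the inverse of A:
A = [[2.0, 1.0], $
     [1.0, 3.0]]
B = [3.0, 5.0]
ALUD = A             ; LUDC replaces its argument with the decomposition
LUDC, ALUD, INDEX    ; compute the LU decomposition in place
X = LUSOL(ALUD, INDEX, B)
PRINT, X             ; the solution vector
```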

An alternate way of describing the existence of a solution is to say that the system Ax = b is solvable if and only if the vector b may be expressed as a linear combination of the columns of A. This definition is important when considering the solutions of non-square (over- and under-determined) linear systems.

While the invertibility of the coefficient array A may ensure that a solution exists, it does not help in determining the solution. Some systems can be solved accurately using numerical methods whereas others cannot. In order to better understand the accuracy of a numerical solution, we can classify the condition of the system it solves.

The scalar quantity known as the condition number of a linear system is a measure of a solution's sensitivity to the effects of finite-precision arithmetic. The condition number of an n-by-n linear system Ax = b is computed explicitly as ||A|| ||A⁻¹|| (where || || denotes a Euclidean norm). A linear system whose condition number is small is considered well-conditioned and well suited to numerical computation. A linear system whose condition number is large is considered ill-conditioned and prone to computational errors. To some extent, the solution of an ill-conditioned system may be improved using an extended-precision data type (such as double-precision float). Other situations require an approximate solution to the system using its Singular Value Decomposition.
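IDL's COND function (listed at the end of this section) returns this quantity directly. The two arrays below are hypothetical illustrations, one well-conditioned and one nearly singular:

```
; A well-conditioned array has a small condition number:
A = [[2.0, 1.0], [1.0, 3.0]]
PRINT, COND(A)
; A nearly singular array is ill-conditioned, and its
; condition number is very large:
B = [[1.0, 1.0], [1.0, 1.000001]]
PRINT, COND(B)
```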

The following two examples show how the singular value decomposition may be used to find solutions when a linear system is over- or underdetermined.

### Overdetermined Systems

#### Example

In the case of the overdetermined system (when there are more linear equations than unknowns), the vector b cannot be expressed as a linear combination of the columns of array A. (In other words, b lies outside of the subspace spanned by the columns of A.) Using IDL's SVDC procedure, it is possible to determine a projected solution of the overdetermined system (b is projected onto the subspace spanned by the columns of A and then the system is solved). This type of solution has the property of minimizing the residual error E = b - Ax in a least-squares sense.

Suppose that we wish to solve the following linear system:

    x1 + 2x2 = 4
    x1 + 3x2 = 5
    0x1 + 0x2 = 6

The vector b = (4, 5, 6) does not lie in the two-dimensional subspace spanned by the columns of A (there is no linear combination of the columns of A that yields b), and therefore an exact solution is not possible.

It is possible, however, to find a solution to this system that minimizes the residual error by orthogonally projecting the vector b onto the two-dimensional subspace spanned by the columns of the array A. The projected vector is then used as the right-hand side of the system. The orthogonal projection of b onto the column space of A may be expressed with the array-vector product A(AᵀA)⁻¹Aᵀb, where P = A(AᵀA)⁻¹Aᵀ is known as the projection matrix.

In this example, the array-vector product Pb yields the projected vector (4, 5, 0), and we wish to solve the linear system

    x1 + 2x2 = 4
    x1 + 3x2 = 5
    0x1 + 0x2 = 0

In many cases, the explicit calculation of the projected solution is numerically unstable, resulting in large accumulated round-off errors. For this reason it is best to use singular value decomposition to effect the orthogonal projection of the vector b onto the subspace spanned by the columns of the array A.

The following IDL commands use singular value decomposition to solve the system in a numerically stable manner. Begin with the array A:

```
A = [[1.0, 2.0], $
     [1.0, 3.0], $
     [0.0, 0.0]]
; Define the right-hand side vector B:
B = [4.0, 5.0, 6.0]
; Compute the singular value decomposition of A:
SVDC, A, W, U, V
```

Create a diagonal array WP of reciprocal singular values from the output vector W. To avoid overflow errors when the reciprocal values are calculated, only elements with absolute values greater than or equal to 1.0 × 10⁻⁵ are reciprocated.

```
N = N_ELEMENTS(W)
WP = FLTARR(N, N)
FOR K = 0, N-1 DO $
   IF ABS(W(K)) GE 1.0e-5 THEN WP(K, K) = 1.0/W(K)
```

We can now express the solution to the linear system as an array-vector product. (See Section 2.6 of Numerical Recipes for a derivation of this formula.)

```
X = V ## WP ## TRANSPOSE(U) ## B
; Print the solution:
PRINT, X
```

IDL prints:

```
2.00000
1.00000
```
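The same back-substitution is also available as a single call: IDL's SVSOL function (listed at the end of this section) takes the U, W, and V arrays returned by SVDC and the right-hand side B, and returns the least-squares solution directly:

```
; Solve the same system by back-substitution
; on the output of SVDC:
X = SVSOL(U, W, V, B)
PRINT, X
```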

### Underdetermined Systems

#### Example

In the case of the underdetermined system (when there are fewer linear equations than unknowns), a unique solution is not possible. Using IDL's SVDC procedure it is possible to determine the minimal norm solution: of the infinitely many solutions to the system, the one whose length is smallest with respect to the Euclidean norm.

Suppose that we wish to solve the following linear system.

    x1 + 3x2 + 3x3 + 2x4 = 1
    2x1 + 6x2 + 9x3 + 5x4 = 5
    -x1 - 3x2 + 3x3 = 5

Using elementary row operations it is possible to reduce the system to

    x1 + 3x2 + 3x3 + 2x4 = 1
    3x3 + x4 = 3

It is now possible to express the solution x in terms of x1 and x3:

    x = (x1, (-5 - x1 + 3x3)/3, x3, 3 - 3x3)

The values of x1 and x3 are completely arbitrary. Setting x1 = 0 and x3 = 0 results in one possible solution of this system:

    x = (0, -5/3, 0, 3)

Another possible solution is obtained using singular value decomposition and results in the minimal norm condition. The minimal norm solution for this system is:

    x = (-0.211009, -0.633027, 0.963303, 0.110092)

Note that this vector also satisfies the general solution x as expressed in terms of x1 and x3.

The following IDL commands use singular value decomposition to find the minimal norm solution. Begin with the array A:

```
A = [[ 1.0, 3.0, 3.0, 2.0], $
     [ 2.0, 6.0, 9.0, 5.0], $
     [-1.0, -3.0, 3.0, 0.0]]
; Define the right-hand side vector B:
B = [1.0, 5.0, 5.0]
; Compute the decomposition of A:
SVDC, A, W, U, V
```

Create a diagonal array WP of reciprocal singular values from the output vector W. To avoid overflow errors when the reciprocal values are calculated, only elements with absolute values greater than or equal to 1.0 × 10⁻⁵ are reciprocated.

```
N = N_ELEMENTS(W)
WP = FLTARR(N, N)
FOR K = 0, N-1 DO $
   IF ABS(W(K)) GE 1.0e-5 THEN WP(K, K) = 1.0/W(K)
```

We can now express the solution to the linear system as an array-vector product. (See Section 2.6 of Numerical Recipes for a derivation of this formula.) Of the infinitely many solutions to the system, this product yields the one with minimal norm.

```
X = V ## WP ## TRANSPOSE(U) ## B
; Print the solution:
PRINT, X
```

IDL prints:

```
-0.211009
-0.633027
0.963303
0.110092
```

### Complex Linear Systems

#### Example

We can use IDL's LU_COMPLEX function to compute the solution to a linear system with both real and complex coefficients. Suppose we wish to solve the linear system Az = b defined by the following coefficient array A and right-hand side vector b:

```
; First we define the real part of the complex coefficient array:
re = [[-1, 1, 2, 3], $
      [-2, -1, 0, 3], $
      [ 3, 0, 0, 0], $
      [ 2, 1, 2, 2]]
; Next, we define the imaginary part of the coefficient array:
im = [[0, -3, 0, 3], $
      [0, 3, 1, 1], $
      [0, 4, -1, -3], $
      [0, 1, 1, 1]]
; Combine the real and imaginary parts to form
; a single complex coefficient array:
A = COMPLEX(re, im)
; Define the right-hand side vector B:
B = [COMPLEX(15,-2), COMPLEX(-2,-1), COMPLEX(-20,11), $
     COMPLEX(-10,10)]
; Compute the solution using double-precision complex arithmetic:
Z = LU_COMPLEX(A, B, /DOUBLE)
PRINT, TRANSPOSE(Z), FORMAT = '(f5.2, ",", f5.2, "i")'
```

IDL prints:

```
-4.00, 1.00i
 2.00, 2.00i
 0.00, 3.00i
-0.00,-1.00i
```

We can check the accuracy of the computed solution by computing the residual, Az - b:

```
PRINT, A ## Z - B
```

IDL prints:

```
(      0.00000,      0.00000)
(      0.00000,      0.00000)
(      0.00000,      0.00000)
(      0.00000,      0.00000)
```

### Routines for Solving Simultaneous Linear Equations

Below is a brief description of IDL routines for solving simultaneous linear equations.

| Routine | Description |
|---------|-------------|
| CHOLDC | Constructs the Cholesky decomposition of a matrix. |
| CHOLSOL | Solves a set of linear equations (use with CHOLDC). |
| COND | Computes the condition number of a square matrix. |
| CRAMER | Solves a system of linear equations using Cramer's rule. |
| CROSSP | Computes the vector cross product. |
| DETERM | Computes the determinant of a square matrix. |
| GS_ITER | Solves a linear system using Gauss-Seidel iteration. |
| IDENTITY | Returns an identity array. |
| INVERT | Computes the inverse of a square array. |
| LINBCG | Solves a set of sparse linear equations using the iterative biconjugate gradient method. |
| LU_COMPLEX | Solves a complex linear system using LU decomposition. |
| LUDC | Replaces an array with its LU decomposition. |
| LUMPROVE | Uses LU decomposition to iteratively improve an approximate solution. |
| LUSOL | Solves a set of linear equations (use with LUDC). |
| NORM | Computes the Euclidean norm of a vector or the Infinity norm of an array. |
| SVDC | Computes the Singular Value Decomposition of an array. |
| SVSOL | Solves a set of linear equations using back-substitution. |
| TRACE | Computes the trace of an array. |
| TRISOL | Solves tridiagonal systems of linear equations. |
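As a quick illustration of one of these routines, a small system can be solved in a single call to CRAMER; the 2-by-2 system below is a hypothetical example:

```
; Solve a small system A x = b using Cramer's rule:
A = [[2.0, 1.0], $
     [1.0, 3.0]]
B = [3.0, 5.0]
PRINT, CRAMER(A, B)
```

Cramer's rule is convenient for small, well-conditioned systems; for larger systems the factorization-based routines above are preferred.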