
# The Comparison Of Gaussian Elimination And Cholesky Decomposition Methods To Linear System Of Equations



**ABSTRACT**

This project work is concerned with the study of __the comparison of Gaussian elimination and Cholesky decomposition methods for linear systems of equations__.

In chapter one, we are concerned with linear systems and the various methods of solving them. In chapters two and three, we deal with an in-depth study of the Gaussian elimination method and the Cholesky decomposition method respectively, together with the strengths and weaknesses of each. Chapter four deals with the various areas of application of both methods and their comparison, and finally the conclusion.

**CHAPTER ONE**

**LINEAR SYSTEM OF EQUATIONS**

**INTRODUCTION**

A wide variety of problems lead ultimately to the need to solve a linear system of equations. Linear systems of equations are associated with many problems in engineering and science, as well as with applications of mathematics to the social sciences and the quantitative study of business and economic problems.

According to Atkinson (1985), systems of simultaneous linear equations occur in solving problems in a wide variety of areas, including mathematics, statistics, the physical sciences (examples are temperature, voltage, population management and displacement), the social sciences, engineering and business. They arise directly in solving real-life problems.

The world sometimes reveals itself to us through observable relationships among the relevant variables; what it makes evident are relationships that describe how both the variables and their rates of change affect one another.

Such real-life problems give rise to systems of simultaneous linear equations. In almost every human activity, man seems compelled to uncover the fundamental relationships that exist among the objects he observes. According to Maron (1982), in order to make the relationships that exist between variables explicit, we frequently attempt to build a mathematical model that will accurately reflect the real-life situation. Many mathematical models have the same basic structure, although disparities in symbolic notation may be utilized. They can arise from economics or transportation, where the need may be to make an efficient allocation among several points, or from the growth of a population, in which the units x_{1}, x_{2}, …, x_{n} represent the net flow from one point to another or, in relation to population growth, the number of individuals in a particular age group at a particular time.

There are various methods for solving linear systems of simultaneous equations. In numerical analysis, the techniques and methods for solving systems of linear equations belong to two categories: direct and iterative methods. Direct methods obtain the exact solution (in real arithmetic) in finitely many operations, whereas iterative methods generate a sequence of approximations that converge to the solution only in the limit. The direct methods fall into two classes, namely the Gaussian elimination method and the Cholesky decomposition method; some others are the matrix inverse method, the LU factorization method and Cramer's rule.

The elimination approach reduces the given system of equations to a form from which the solution can be obtained by simple substitution. Since calculators and computers have a limit on the number of digits they can use, this may lead to round-off errors and produce poorer results. Generally, direct methods are best for full or banded matrices, whereas iterative methods are best for very large and sparse matrices. Iterative methods provide an alternative to the direct methods for solving systems of linear equations. These methods involve the assumption of some initial values, which are then refined repeatedly until they reach some accepted range of accuracy. The Jacobi and Gauss-Seidel methods are good examples of iterative methods.

Systems of linear equations may be grouped as follows: a system of linear equations is either consistent or inconsistent.

An inconsistent system is one whose reduced form contains an equation of the form 0 = c, where c is a non-zero number; that is, the system has no solution.

For example, the reduced system

x + 2y − 3z = 4

y + 2z = 5

0 = 3

is inconsistent, since its last equation, 0 = 3, is of the form 0 = c with c ≠ 0 and can never be satisfied.

A consistent system is one whose reduced form contains no equation of the form 0 = c with c ≠ 0; that is, the system has a solution. There are two cases, where r denotes the number of non-zero equations in the reduced form and n the number of unknowns.

**CASE I**

r = n; that is, there are as many non-zero equations as unknowns. Then we can solve uniquely for the unknowns, and there exists a unique solution for the system.

**CASE II**

r < n; that is, there are more unknowns than there are non-zero equations. We may then arbitrarily assign values to n − r of the unknowns and solve uniquely for the remaining unknowns. Accordingly, there exist an infinite number of solutions.

For instance, consider a system whose reduced form contains no equation of the form 0 = c with c ≠ 0; the system is consistent. Furthermore, if there are three unknowns and three non-zero equations, the system has a unique solution.

Also, the system

x + 2y − 3z + w = 4

y + 2z + 3w = 5

is consistent, and since there are more unknowns than non-zero equations, the system has an infinite number of solutions. On the other hand, a linear system can be written in matrix form as follows:

AX = b.

A linear equation in x_{1}, x_{2}, …, x_{n} is obtained by requiring a linear combination of them to assume a prescribed value b, that is,

a_{1}x_{1} + a_{2}x_{2} + … + a_{n}x_{n} = b.

Such equations arise frequently. The general n x n linear system is

a_{11}x_{1} + a_{12}x_{2} + … + a_{1n}x_{n} = b_{1}

a_{21}x_{1} + a_{22}x_{2} + … + a_{2n}x_{n} = b_{2}

…

a_{n1}x_{1} + a_{n2}x_{2} + … + a_{nn}x_{n} = b_{n}

However, it will be necessary to introduce the basic fundamental principles of matrices in linear systems, because matrices make it possible to describe linear systems in a simple way that makes solving an n x n linear system seem like solving an ordinary equation, as follows.

Besides, matrices can be used to show how to generalize the Newton-Raphson method to solve non-linear n x n systems.

**Solving Linear Systems**

The linear system in explicit form is

a_{11}x_{1} + a_{12}x_{2} + … + a_{1n}x_{n} = b_{1}

a_{21}x_{1} + a_{22}x_{2} + … + a_{2n}x_{n} = b_{2}

…

a_{n1}x_{1} + a_{n2}x_{2} + … + a_{nn}x_{n} = b_{n}

where x_{1}, x_{2}, x_{3}, …, x_{n} are the unknowns, and the a_{ij} for i, j = 1, 2, …, n and the b_{i} for i = 1, 2, …, n are given scalars.

According to Maron (1982) and Atkinson (1985), the definition of matrix multiplication makes it possible to write this linear system compactly as a single matrix equation

AX = b ……………………………………………………….(3)

where A is a given n x n matrix, assumed to be non-singular, b is a given column vector and X is the solution vector to be determined, that is,

A = [a_{ij}], X = (x_{1}, x_{2}, …, x_{n})^{T}, b = (b_{1}, b_{2}, …, b_{n})^{T}. ……… (4)

If A is non-singular, then the vector A^{-1}b satisfies equation (3) above, because A(A^{-1}b) = (AA^{-1})b = Ib = b.

Conversely, any vector X satisfying AX = b must, on multiplying both sides by A^{-1}, satisfy X = A^{-1}b.

Thus, if A is non-singular, then for any b the system AX = b has a unique solution, given by X = A^{-1}b.

The formula X = A^{-1}b makes it easy to solve the system AX = b when A^{-1} is known.

However, the easiest way to solve a 2 x 2 linear system AX = b is to use the inverse of

A =
| a b |
| c d |

A is invertible if and only if ad − bc ≠ 0, in which case

A^{-1} = 1/(ad − bc) ×
| d −b |
| −c a |

The above equation can be called the formula for A^{-1} when n = 2.

It can easily be verified that if n > 2 and all that is required is the solution of a single n x n system AX = b, then finding A^{-1} and then multiplying by b is generally not the most efficient way to obtain the solution. However, we shall consistently denote the solution of AX = b by A^{-1}b, even if it is not actually obtained as the product A^{-1}b.
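As a sketch of the 2 x 2 inverse formula above, the following Python snippet is illustrative only (the function name and the sample system are our own), and it assumes ad − bc ≠ 0:

```python
# Solve the 2 x 2 system  ax + by = e,  cx + dy = f
# using A^{-1} = (1/(ad - bc)) [ d -b; -c a ].
def solve_2x2(a, b, c, d, e, f):
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0: A is not invertible")
    # x = A^{-1} (e, f)^T, written out entry by entry
    x = (d * e - b * f) / det
    y = (-c * e + a * f) / det
    return x, y

# Example: x + 2y = 5 and 3x + 4y = 11 have solution x = 1, y = 2
print(solve_2x2(1, 2, 3, 4, 5, 11))  # -> (1.0, 2.0)
```

For n > 2, as noted above, forming the inverse explicitly is not the efficient route; this formula is worth memorizing only for the 2 x 2 case.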

**METHODS OF SOLVING LINEAR SYSTEM OF EQUATION.**

We are interested at this point in discussing methods for solving linear systems, and we shall restrict ourselves to systems of the form AX = b for which a unique solution exists.

In trying to find solutions to systems of linear equations of the type AX = b, many methods can be adopted. In reality there are many methods of solution, but the choice of method varies in computational cost and also depends on efficiency and accuracy.

A large part of numerical analysis is concerned with why and how errors arise in mathematical problems, and hence with the search for methods which minimize the totality of these errors. Such methods include:

1) Cramer’s Rule

2) Triangular Decomposition method

3) Iterative method

4) Direct method

5) Repeated direct method


**CRAMER’S RULE**

We consider the linear system (3). Supposing that A is non-singular, equation (3) can be rewritten as X = A^{-1}b.

If det A ≠ 0, then the unique solution of AX = b is

x_{j} = det A_{j} / det A, j = 1, 2, …, n,

where A_{j} is the matrix obtained by replacing the jth column of A by b.

Finding X by Cramer's rule requires evaluating the determinants of A and of the n additional n x n matrices A_{1}, A_{2}, …, A_{n}. The arrow rule makes Cramer's rule convenient when n = 2 and reasonably easy to use when n = 3. However, for larger n the efficient way to evaluate det A alone is

det A = (−1)^{p} (product of the pivots of the LU factorization),

where p is the number of row interchanges used to get the LU factorization.

Let

a_{11}x_{1} + a_{12}x_{2} + … + a_{1n}x_{n} = b_{1}

…

a_{n1}x_{1} + a_{n2}x_{2} + … + a_{nn}x_{n} = b_{n}

be a linear system of n equations in n unknowns, and let A = [a_{ij}] be the coefficient matrix, so that we can write the given system as AX = b, where b = (b_{1}, b_{2}, …, b_{n})^{T}.

If |A| ≠ 0, the system has a unique solution

x_{i} = |A_{i}| / |A|,

where A_{i} is the matrix obtained from A by replacing the ith column of A by b. If |A| ≠ 0, then A is non-singular, hence

X = A^{-1}b = (1/|A|)(adj A)b.

This means that

x_{i} = (A_{1i}b_{1} + A_{2i}b_{2} + … + A_{ni}b_{n}) / |A|,

where A_{ji} denotes the cofactor of a_{ji} in A.

Now let A_{i} denote, as above, the matrix obtained from A by replacing its ith column by b. If we evaluate |A_{i}| by expanding about the ith column, we find that

|A_{i}| = A_{1i}b_{1} + A_{2i}b_{2} + … + A_{ni}b_{n}.

Hence

x_{i} = |A_{i}| / |A|

for i = 1, 2, …, n. In this expression for x_{i}, the determinant |A_{i}| of A_{i} can also be calculated by any other method.
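The column-replacement rule above can be sketched in Python for n = 3; this is illustrative only (the helper names and the sample system are our own), with the determinant evaluated by cofactor expansion about the first row:

```python
# Cramer's rule for a 3 x 3 system AX = b.
def det3(m):
    # cofactor expansion about the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer3(A, b):
    d = det3(A)
    if d == 0:
        raise ValueError("det A = 0: no unique solution")
    xs = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]          # replace the ith column of A by b
        xs.append(det3(Ai) / d)      # x_i = |A_i| / |A|
    return xs

A = [[2, 1, -1], [1, 3, 2], [1, 0, 0]]
b = [4, 5, 1]
print(cramer3(A, b))  # -> [1.0, 1.6, -0.4]
```

Note that this evaluates four 3 x 3 determinants for one solve, which is why Cramer's rule is not used for large n.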

**THE TRIANGULAR DECOMPOSITION METHOD**

The method of triangular decomposition sets out to decompose a square matrix A into the product of a lower triangular matrix L and an upper triangular matrix U, so that A = LU.

For this project we shall be discussing the method used by Cholesky. There are two variants: that which has a unit lower triangular matrix L, and that which has a unit upper triangular matrix U.

In the case of a 4 x 4 matrix we write A = LU, where L = [L_{ij}] is lower triangular and U = [U_{ij}] is unit upper triangular (U_{ii} = 1).

Multiplying the rows of L by the first column of U, we get L_{11} = a_{11}, L_{21} = a_{21}, L_{31} = a_{31}, L_{41} = a_{41}; the first column of L is the same as the first column of A. We now multiply the first row of L by the columns of U, which gives

L_{11}U_{12} = a_{12}, L_{11}U_{13} = a_{13}, L_{11}U_{14} = a_{14},

so that U_{1j} = a_{1j} / L_{11} for j = 2, 3, 4.

Thus the first row of U is determined. In this method we alternate between getting a column of L and a row of U. We get the equations for the second column of L by multiplying the rows of L by the second column of U, which gives

L_{21}U_{12} + L_{22} = a_{22}, L_{31}U_{12} + L_{32} = a_{32}, L_{41}U_{12} + L_{42} = a_{42},

so that L_{i2} = a_{i2} − L_{i1}U_{12} for i = 2, 3, 4.

Proceeding in the same fashion, the general formulas for the elements of L and U corresponding to the coefficient matrix of n simultaneous equations can be written as

L_{ij} = a_{ij} − Σ_{k=1}^{j−1} L_{ik}U_{kj}, for i ≥ j,

U_{ij} = (a_{ij} − Σ_{k=1}^{i−1} L_{ik}U_{kj}) / L_{ii}, for i < j.

For j = 1, the rule for L reduces to L_{i1} = a_{i1}; for i = 1, the rule for U reduces to U_{1j} = a_{1j} / L_{11}.
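The alternating column-of-L / row-of-U scheme above can be sketched in Python as follows; this is illustrative only (the function name is our own) and it assumes no diagonal element L_{jj} turns out to be zero, since no row interchanges are performed:

```python
# Decompose A = L U with L lower triangular and U unit upper triangular,
# alternating between a column of L and a row of U as described above.
def crout(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        U[i][i] = 1.0                      # unit diagonal of U
    for j in range(n):
        for i in range(j, n):              # jth column of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):          # jth row of U
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

L, U = crout([[4, 2], [2, 3]])
print(L)   # the first column of L equals the first column of A
print(U)
```

Once L and U are known, AX = b is solved by forward substitution on LY = b followed by backward substitution on UX = Y.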

**ITERATIVE METHOD**

In iterative method, we start from an approximation to the true solution and if successful, obtain better approximations from a computational cycle repeated as often as may be necessary for achieving a required accuracy so that the amount of arithmetic depends upon the accuracy required. Iterative methods are used mainly in those problems for which convergence is known to be rapid and for systems of large order but with many zero coefficients.

Generally, these methods have more modest storage requirements than direct methods and may also be faster depending on the iterative method and the problem. They usually also have better vectorization and parallelization properties.

The Jacobi and Gauss-Seidel methods are examples of iterative methods.

**JACOBI ITERATIVE METHOD**

The Jacobi iteration is also known as the method of simultaneous displacements. Consider the system

a_{11}x_{1} + a_{12}x_{2} + … + a_{1n}x_{n} = b_{1}

…

a_{n1}x_{1} + a_{n2}x_{2} + … + a_{nn}x_{n} = b_{n} ………….(1)

By solving the first equation for x_{1} in terms of the remaining unknowns, the second equation for x_{2} in terms of the remaining unknowns, the third equation for x_{3} in terms of the remaining unknowns, and so on, we obtain

x_{i} = (b_{i} − Σ_{j≠i} a_{ij}x_{j}) / a_{ii}, i = 1, 2, …, n. …………… (2)

The system

20x_{1} + x_{2} − x_{3} = 17

x_{1} − 10x_{2} + x_{3} = 13

−x_{1} + x_{2} + 10x_{3} = 18 ………………… (3)

would be rewritten as

x_{1} = (17 − x_{2} + x_{3}) / 20

x_{2} = (x_{1} + x_{3} − 13) / 10

x_{3} = (18 + x_{1} − x_{2}) / 10 ……………………… (4)

To solve the system by Jacobi iteration, make an initial approximation to the solution. When no better choice is available, use x_{i}^{(0)} = 0, i = 1(1)n. Substitute the initial approximation into the right-hand side of equation (2) above and use the values of x_{1}, x_{2}, …, x_{n} that result on the left-hand side as a new approximation to the solution. Thus, as new values are generated they are not immediately used, but rather are retained for the next iteration.

To solve system (3) by the Jacobi method, we would substitute the initial approximation x_{1} = 0, x_{2} = 0, x_{3} = 0 into the right-hand side of system (4) and calculate the new approximation x_{1} = 0.85, x_{2} = −1.3, x_{3} = 1.8.

To improve the approximation, we would repeat the process with the new values.
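The Jacobi iteration on system (3) can be sketched in Python as follows; this is illustrative only (the function name is our own), with the update formulas taken directly from system (4):

```python
# Jacobi iteration for
#   20x1 +   x2 -   x3 = 17
#     x1 - 10x2 +   x3 = 13
#    -x1 +   x2 + 10x3 = 18
def jacobi_step(x1, x2, x3):
    # every component is updated from the PREVIOUS iterate only
    # (simultaneous displacements)
    return ((17 - x2 + x3) / 20,
            (x1 + x3 - 13) / 10,
            (18 + x1 - x2) / 10)

x = (0.0, 0.0, 0.0)          # initial approximation x(0) = 0
for _ in range(25):
    x = jacobi_step(*x)
print(x)                     # converges toward the exact solution (1, -1, 2)
```

The first step reproduces the hand computation above: jacobi_step(0, 0, 0) gives (0.85, −1.3, 1.8). Convergence here is guaranteed because the coefficient matrix is strictly diagonally dominant.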

**GAUSS-SIEDEL ITERATION METHOD**

History has it that this iteration was used by Gauss in his calculations and was independently discovered by Seidel. The striking feature of this method is that each component of X^{(r+1)} is used in the remaining computations as soon as it is computed; for this reason the Gauss-Seidel method is sometimes called the method of successive displacements. Recall the system of linear equations AX = b.

Gauss and Seidel decided to write the ith equation of this system as

x_{i} = (b_{i} − Σ_{j<i} a_{ij}x_{j} − Σ_{j>i} a_{ij}x_{j}) / a_{ii},

which gives the iteration

x_{i}^{(r+1)} = (b_{i} − Σ_{j<i} a_{ij}x_{j}^{(r+1)} − Σ_{j>i} a_{ij}x_{j}^{(r)}) / a_{ii},

provided the diagonal elements a_{ii} are non-zero.
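A companion sketch of the Gauss-Seidel iteration on the same sample system (3) used in the Jacobi section (illustrative only; the function name is our own):

```python
# Gauss-Seidel iteration for
#   20x1 +   x2 -   x3 = 17
#     x1 - 10x2 +   x3 = 13
#    -x1 +   x2 + 10x3 = 18
# Each component is updated using the NEWEST values already available
# in the current sweep (successive displacements).
def gauss_seidel_step(x1, x2, x3):
    x1 = (17 - x2 + x3) / 20    # uses old x2, old x3
    x2 = (x1 + x3 - 13) / 10    # uses new x1, old x3
    x3 = (18 + x1 - x2) / 10    # uses new x1, new x2
    return x1, x2, x3

x = (0.0, 0.0, 0.0)
for _ in range(15):
    x = gauss_seidel_step(*x)
print(x)   # approaches the exact solution (1, -1, 2)
```

Compared with the Jacobi sketch, the only change is that each updated component is fed immediately into the next formula within the same sweep, which typically reduces the number of sweeps needed.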

**DIRECT METHOD**

Direct methods are those which, in the absence of round-off or other errors, yield the exact solution in a finite number of elementary arithmetic operations. In practice, because a computer works with a finite word length, direct methods do not lead to exact solutions. The methods used for exact solutions are the Gaussian elimination method and the Cholesky decomposition method; we shall discuss these methods in chapters two and three of this project work respectively.
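As a preview of chapter two, the direct (elimination) approach can be sketched in Python as follows; this is illustrative only (the function name is our own), and it uses partial pivoting to limit the round-off error mentioned above:

```python
# Gaussian elimination with partial pivoting, then back-substitution.
def gauss_solve(A, b):
    n = len(A)
    A = [row[:] for row in A]            # work on copies
    b = b[:]
    for k in range(n - 1):
        # partial pivoting: bring the largest |entry| into the pivot row
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):        # eliminate below the pivot
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):       # back-substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

print(gauss_solve([[2, 1], [1, 3]], [3, 5]))  # approximately [0.8, 1.4]
```

Note that the answer is obtained in a fixed, finite number of operations, in contrast to the iterative sweeps of the previous sections.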

**REPEATED DIRECT METHOD**

If the system of equations is ill-conditioned, that is, if the determinant of the matrix of coefficients is infinitesimal in comparison with the size of the elements of the matrix, we use the repeated direct method. That is, we interpret the equations of the system as hyper-planes in an n-dimensional co-ordinate system; the elements (a_{i1}, a_{i2}, …, a_{in}) of the ith row represent the direction numbers of the "normal" to the ith hyper-plane. In this context, we can interpret an ill-conditioned system of linear equations as a set of n hyper-planes whose "normals" are very nearly "parallel". If the system of equations is thus ill-conditioned, then the solution obtained from a direct method may be a poor representation of the true solution because of the accumulating effect of round-off error. This round-off error can be limited to some degree by using pivotal condensation.