Tuesday, July 27, 2010

Iterative or Approximate Methods


Iterative methods are similar to the techniques used to obtain the roots of a single equation: they consist of guessing a value and then applying a systematic procedure to obtain a refined estimate of the solution.

Gauss-Seidel
Assume a 3 × 3 set of linear equations:

a11*x1 + a12*x2 + a13*x3 = b1
a21*x1 + a22*x2 + a23*x3 = b2
a31*x1 + a32*x2 + a33*x3 = b3
If the diagonal elements are all nonzero, each equation can be solved for the unknown on its diagonal:

x1 = (b1 - a12*x2 - a13*x3)/a11
x2 = (b2 - a21*x1 - a23*x3)/a22
x3 = (b3 - a31*x1 - a32*x2)/a33
The process begins by guessing values for the x's; the easiest way to obtain initial guesses is to assume they are all zero. The zeros are substituted into the first equation to obtain a new value of x1. That value is then used in the second equation, together with the assumed x3 (again zero), to compute x2; with x1 and x2, x3 can be obtained from the third equation. On subsequent iterations, the new x1, x2 and x3 are substituted back into the equations, and the procedure is repeated until the solution satisfies:

εa,i = |(xi^j - xi^(j-1)) / xi^j| * 100% < εs

for all i, where j and j-1 are the present and previous iterations.

Example:

5X-Y+Z=10
8Y+2X-Z=11
-X+Y+4Z=3



First, solve each equation for the unknown on its diagonal:

x = (10 + y - z)/5
y = (11 - 2x + z)/8
z = (3 + x - y)/4
For the second iteration, take the x, y and z values from the first iteration, substitute them again, and calculate the approximate error to check how the estimates converge on the solution:
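A minimal Python sketch of this iteration for the example system is shown below (the function name, tolerance and iteration limit are my own choices, not part of the original example):

```python
# Gauss-Seidel sketch for: 5x - y + z = 10, 2x + 8y - z = 11, -x + y + 4z = 3
def gauss_seidel(tol=1e-6, max_iter=100):
    x = y = z = 0.0                          # initial guesses set to zero
    for _ in range(max_iter):
        x_new = (10 + y - z) / 5.0           # solve the first equation for x
        y_new = (11 - 2 * x_new + z) / 8.0   # use the fresh x immediately
        z_new = (3 + x_new - y_new) / 4.0
        # approximate relative error of the slowest-converging unknown
        err = max(abs((x_new - x) / x_new) if x_new else 1.0,
                  abs((y_new - y) / y_new) if y_new else 1.0,
                  abs((z_new - z) / z_new) if z_new else 1.0)
        x, y, z = x_new, y_new, z_new
        if err < tol:
            break
    return x, y, z

print(gauss_seidel())   # converges to about x = 2, y = 1, z = 1
```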

The Gauss-Seidel method is similar to the fixed-point iteration technique used to find the root of a single equation, so it shares two of its problems: sometimes the solution does not converge, and when it does converge, it may do so very slowly.

For a two-equation system, the Gauss-Seidel algorithm can be expressed as:

u(x1, x2) = b1/a11 - (a12/a11)*x2
v(x1, x2) = b2/a22 - (a21/a22)*x1
The partial derivatives of these equations can be evaluated with respect to each of the unknowns as:

∂u/∂x1 = 0        ∂u/∂x2 = -a12/a11
∂v/∂x1 = -a21/a22        ∂v/∂x2 = 0
These can be substituted into the fixed-point convergence criteria to obtain:

|a11| > |a12| and |a22| > |a21|

In general, the absolute value of each diagonal coefficient must be greater than the sum of the absolute values of the other coefficients in its row (diagonal dominance).
This criterion is sufficient but not necessary for convergence.
Remember, the fixed-point convergence criteria are:

|∂u/∂x1| + |∂u/∂x2| < 1 and |∂v/∂x1| + |∂v/∂x2| < 1

RELAXATION - SOR (Successive over-relaxation)

Relaxation is a modification of the Gauss-Seidel method. After each new value of x is computed with the Gauss-Seidel iteration equation, that value is modified by a weighted average of the present and previous results:

xi_new = λ*xi_new + (1 - λ)*xi_old
Where λ is a weighting factor that is assigned a value between 0 and 2. *If λ is set at a value between 0 and 1, the result is a weighted average of the present and the previous results. This type of modification is called under-relaxation. It is typically employed to make a non-convergent system converge or to hasten convergence by dampening out oscillations. For values of λ from 1 to 2, extra weight is placed on the present value. In this instance, there is an implicit assumption that the new value is moving in the correct direction toward the true solution but at too slow a rate. Thus, the added weight of λ is intended to improve the estimate by pushing it closer to the truth. Hence, this type of modification, which is called over-relaxation, is designed to accelerate the convergence of an already convergent system*.
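A sketch of the relaxed update, applied to the same 3 × 3 example as above, is shown below (lam plays the role of λ; the function name, the value λ = 1.2 and the tolerance are my own choices for illustration):

```python
import numpy as np

def sor(A, b, lam=1.2, tol=1e-8, max_iter=500):
    """Gauss-Seidel with relaxation: x_i = lam*x_gs + (1 - lam)*x_old."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x_gs = (b[i] - sigma) / A[i, i]        # ordinary Gauss-Seidel value
            x[i] = lam * x_gs + (1 - lam) * x[i]   # weighted with previous value
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

A = np.array([[5.0, -1.0, 1.0],
              [2.0,  8.0, -1.0],
              [-1.0, 1.0, 4.0]])
b = np.array([10.0, 11.0, 3.0])
print(sor(A, b))   # approximately [2. 1. 1.]
```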


REFERENCES
- CHAPRA, Steven C. and CANALE, Raymond P.: “Numerical Methods for Engineers”. McGraw-Hill, fifth edition, 2006.*
- www.lawebdelprogramador.com


Wednesday, July 21, 2010

MATRIX



A matrix consists of a rectangular array of elements represented by a single symbol. As depicted in the next figure, [A] is the shorthand notation for the matrix and aij designates an individual element of the matrix.

A horizontal set of elements is called a row and a vertical set is called a column. The first subscript i designates the row in which the element lies; the second subscript j designates the column.

SPECIAL TYPES OF MATRICES


MATRIX OPERATION RULES

Addition of two matrices [A] and [B] is accomplished by adding corresponding terms in each matrix; the elements of the resulting matrix [C] are computed as:
cij = aij + bij
Subtraction of two matrices is done in a similar way, by subtracting corresponding terms:
cij = aij - bij
Addition and subtraction are commutative and associative.
The product of two matrices is represented as [C] = [A][B], where the elements of [C] are defined, according to Chapra, as cij = Σ (k = 1 to n) aik*bkj, where n = the column dimension of [A] and the row dimension of [B]. That is, the element cij is obtained by adding the products of individual elements from the ith row of the first matrix, in this case [A], and the jth column of the second matrix [B]. According to this definition, multiplication of two matrices can be performed only if the first matrix has as many columns as the second matrix has rows. Thus, if [A] is an n by m matrix, [B] must be an m by l matrix, and the resulting matrix [C] has dimension n by l. Remember, matrix multiplication is not generally commutative.
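To make the rule concrete, here is a small sketch of the triple-loop product cij = Σ aik*bkj, cross-checked against numpy (the function name and the 2 × 3 / 3 × 2 example matrices are my own):

```python
import numpy as np

def matmul(A, B):
    """Product [C] = [A][B]: columns of A must equal rows of B."""
    n, m = len(A), len(A[0])
    m2, l = len(B), len(B[0])
    assert m == m2, "inner dimensions must agree"
    C = [[0.0] * l for _ in range(n)]
    for i in range(n):              # ith row of [A]
        for j in range(l):          # jth column of [B]
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2, 3], [4, 5, 6]]           # n by m = 2 by 3
B = [[7, 8], [9, 10], [11, 12]]      # m by l = 3 by 2
print(matmul(A, B))                  # n by l = 2 by 2 result
print(np.array(A) @ np.array(B))     # same values
```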


TECHNIQUES FOR IMPROVING SOLUTIONS


Cramer Rules
This rule applies to systems in which the number of equations equals the number of unknowns and the determinant of the coefficient matrix is different from zero.
Let Δ be the determinant of the coefficient matrix:


and let Δ1, Δ2, Δ3, …, Δn be the determinants obtained by substituting the column of independent terms into the first column, second column, third column, …, and the nth column, respectively. Using Cramer's rule:
x1 = Δ1/Δ    x2 = Δ2/Δ    x3 = Δ3/Δ    …    xn = Δn/Δ


Example:




x = 21/2    y = -8/2 = -4    z = -11/2
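Since the coefficient matrix of this example appears only in the figure, here is a generic sketch of Cramer's rule on a hypothetical 3 × 3 system (the matrix, vector and function name below are illustrations of my own, not the system from the example):

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule; requires det(A) != 0."""
    det_A = np.linalg.det(A)
    if abs(det_A) < 1e-12:
        raise ValueError("determinant is zero; Cramer's rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                 # replace the ith column by the constants
        x[i] = np.linalg.det(Ai) / det_A
    return x

# hypothetical system, for illustration only
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [3.0, -1.0, 1.0]])
b = np.array([3.0, 12.0, 4.0])
print(cramer(A, b))
print(np.linalg.solve(A, b))   # cross-check
```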



GAUSS-JORDAN
This is a variation of Gauss elimination. The major difference is that when an unknown is eliminated in the Gauss-Jordan method, it is eliminated from all other equations rather than just the subsequent ones. In addition, all rows are normalized by dividing them by their pivot elements. Thus, the elimination step results in an identity matrix rather than a triangular matrix. Consequently, it is not necessary to employ back substitution to obtain the solution*.



Example:
3.0 x1 - 0.1 x2 - 0.2 x3 = 7.85
0.1 x1 + 7.0 x2 - 0.3 x3 = -19.3
0.3 x1 - 0.2 x2 + 10 x3 = 71.4

First, the first row is normalized by dividing it by its pivot element, 3, to obtain:

The x1 term can be eliminated from the second row by subtracting 0.1 times the first row from it. Similarly, subtracting 0.3 times the first row from the third row will eliminate the x1 term from the third row:


Next, normalize the second row by dividing it by 7.00333:

Reduction of the x2 terms from the first and third equations gives:

The third row is then normalized by dividing it by 10.0120:

Finally, the x3 terms can be reduced from the first and the second equations to give:
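A compact sketch of the same elimination, run on the example system with numpy handling the row operations (partial pivoting is omitted so that the steps mirror the hand calculation above):

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce the augmented matrix [A|b] to [I|x]; no pivoting."""
    aug = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for k in range(n):
        aug[k] /= aug[k, k]                    # normalize the pivot row
        for i in range(n):
            if i != k:
                aug[i] -= aug[i, k] * aug[k]   # eliminate column k everywhere else
    return aug[:, -1]

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
print(gauss_jordan(A, b))   # approximately [3. -2.5  7.]
```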

LU DECOMPOSITION “Lower and Upper”
The original matrix [A] is decomposed into the product of two triangular matrices: a lower one [L] and an upper one [U].
Methodology
1- Obtain the lower triangular matrix [L] and the upper triangular matrix [U].
2- Solve Ly = b.
3- Save the resulting vector as y.
4- Solve Ux = y (to find x).
5- The vector x contains the unknowns.
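A minimal sketch of these steps, using a Doolittle factorization (lii = 1) followed by forward and back substitution; the test system is the same one used in the Gauss-Jordan example above, and the function name is my own:

```python
import numpy as np

def lu_solve(A, b):
    """Doolittle LU (l_ii = 1), then forward and back substitution."""
    n = len(b)
    L, U = np.eye(n), np.zeros((n, n))
    for k in range(n):                       # 1. build [L] and [U]
        for j in range(k, n):
            U[k, j] = A[k, j] - L[k, :k] @ U[:k, j]
        for i in range(k + 1, n):
            L[i, k] = (A[i, k] - L[i, :k] @ U[:k, k]) / U[k, k]
    y = np.zeros(n)
    for i in range(n):                       # 2-3. solve Ly = b (forward)
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # 4-5. solve Ux = y (backward)
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

A = np.array([[3.0, -0.1, -0.2],
              [0.1,  7.0, -0.3],
              [0.3, -0.2, 10.0]])
b = np.array([7.85, -19.3, 71.4])
print(lu_solve(A, b))            # approximately [3. -2.5  7.]
```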



THOMAS METHOD
This method is a simplification of the LU method for tridiagonal systems.



Writing A = LU and applying the Doolittle factorization, where lii = 1 for i = 1 to n, we obtain:


From this matrix product, the following expressions are obtained:
U11 = b1
For k = 2 to n:
Lk,k-1 = ak / Uk-1,k-1
Uk-1,k = ck-1
Uk,k = bk - Lk,k-1*Uk-1,k
where a1 = 0 and cn = 0.
If LUx = r and we define Ux = d, then Ld = r, so:


d1 = r1
For k = 2 to n:
dk = rk - Lk,k-1*dk-1
Finally, solve Ux = d by back substitution:


Example:

U11 = b1 = 1
For k = 2
L21 = (a2/U11) = (3/1) = 3
U12 = c1 = 3
U22 = b2 - L21*U12 = 1 - 3(3) = -8
For k = 3
L32 = (a3/U22) = (2/-8) = -1/4
U23 = c2 = 2
U33 = b3 - L32*U23 = 1 + (1/4)(2) = 1.5
For k = 4
L43 = (a4/U33) = (5/1.5) = 10/3 ≈ 3.33
U34 = c3 = 5
U44 = b4 - L43*U34 = 1 - (10/3)(5) = -47/3
For k = 5
L54 = (a5/U44) = (3/(-47/3)) = -9/47
U45 = c4 = 3
U55 = b5 - L54*U45 = 1 + (9/47)(3) = 74/47


With the L matrix known, solve Ld = r:
d1 = r1 = 5
For k = 2: d2 = r2 - L21*d1 = 11 - 3(5) = -4
For k = 3: d3 = r3 - L32*d2 = 8
For k = 4: d4 = r4 - L43*d3 ≈ -6.67
For k = 5: d5 = r5 - L54*d4 ≈ 4.72



Solving Ux = d by back substitution:
x5 = d5/U55 = 4.72/(74/47) ≈ 3
For k = n-1 = 4
x4 = (d4 - U45*x5)/U44 = (-6.67 - 3(3))/(-47/3) = 1
For k = n-2 = 3
x3 = (d3 - U34*x4)/U33 = (8 - 5(1))/1.5 = 2
….
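The whole recurrence fits in a short routine. The sketch below runs it on the tridiagonal system reconstructed from the numbers worked out above (a, b, c are the sub-, main and super-diagonals and r the right-hand side; since the original matrix is shown only in the figure, these vectors are read back from the hand calculation and should be treated as an assumption):

```python
def thomas(a, b, c, r):
    """Thomas algorithm: a = sub-diagonal (a[0] unused), b = main diagonal,
    c = super-diagonal (length n-1), r = right-hand side."""
    n = len(b)
    u = b[:]                        # will hold the diagonal of U
    d = r[:]                        # will hold the modified right-hand side
    for k in range(1, n):           # decomposition + forward substitution
        l = a[k] / u[k - 1]         # L[k, k-1]
        u[k] = b[k] - l * c[k - 1]  # U[k, k]
        d[k] = d[k] - l * d[k - 1]
    x = [0.0] * n                   # back substitution
    x[-1] = d[-1] / u[-1]
    for k in range(n - 2, -1, -1):
        x[k] = (d[k] - c[k] * x[k + 1]) / u[k]
    return x

# system reconstructed from the worked numbers above (an assumption)
a = [0.0, 3.0, 2.0, 5.0, 3.0]       # sub-diagonal (a1 = 0)
b = [1.0, 1.0, 1.0, 1.0, 1.0]       # main diagonal
c = [3.0, 2.0, 5.0, 3.0]            # super-diagonal
r = [5.0, 11.0, 9.0, 20.0, 6.0]
print(thomas(a, b, c, r))           # approximately [2, 1, 2, 1, 3]
```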

Cholesky Method
According to Chapra, this algorithm is based on the fact that a symmetric matrix can be decomposed as in:
[A]=[L][L]^(T)
That is, the resulting triangular factors are the transpose of each other. The terms of this equation can be multiplied out and set equal to each other. The result can be expressed simply by recurrence relations. For the kth row*:

lki = (aki - Σ (j = 1 to i-1) lij*lkj) / lii,   for i = 1, 2, …, k-1
lkk = sqrt(akk - Σ (j = 1 to k-1) lkj^2)
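As a quick check of the factorization [A] = [L][L]^T, here is a sketch using numpy's built-in Cholesky routine on a small symmetric, positive-definite matrix (the matrix is only an illustration of my own):

```python
import numpy as np

# symmetric, positive-definite example matrix (illustration only)
A = np.array([[6.0, 15.0, 55.0],
              [15.0, 55.0, 225.0],
              [55.0, 225.0, 979.0]])

L = np.linalg.cholesky(A)       # lower triangular factor [L]
print(L)
print(L @ L.T)                  # reproduces [A] = [L][L]^T
```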

REFERENCES


- CHAPRA, Steven C. and CANALE, Raymond P.: “Numerical Methods for Engineers”. McGraw-Hill, fifth edition, 2006.*
- CARRILLO, Eduardo: “LU para resolver métodos abiertos”, UIS, 2010.
- PINTO, F., TOLEDO, M., PLATA, A., GOMEZ, E., BUDEZ, J. and MANCILLA, R.: “Matrices y sistemas de ecuaciones”, UIS, 2010.
- http://personal.redestb.es/ztt/tem/t6_matrices.htm
- http://docencia.udea.edu.co/GeometriaVectorial/uni2/seccion21.html

Monday, July 19, 2010

EQUATION ROOTS

The methods described here employ different strategies to narrow the bracketing interval that contains the answer. The objective is to find a value of x such that f(x) = 0.


BRACKETING METHODS

GRAPHICAL METHODS
To find a root of f(x) = 0 with this method, draw a plot of the function and observe where it crosses the x axis; that point represents an approximation of the root.

Example:
f(x) = (667.38/x)*(1 - e^(-0.1468*x)) - 40; interval: (10, 20):



Possible cases in [a,b] interval

THE BISECTION METHOD

Initial guesses: 2
Convergence rate: slow
Stability: always

Incremental search methods capitalize on this observation by locating an interval where the function changes sign. Then the location of the sign change is identified more precisely by dividing the interval into a number of subintervals. The process is repeated and the root estimate refined by dividing the subintervals into finer increments*.
To identify which subinterval contains the root:

- If f(xi)*f(xr) < 0, the root lies in the lower subinterval [xi, xr].

- If f(xr)*f(xs) < 0, the root lies in the upper subinterval [xr, xs].

Example: f(x) = e^(-x) - x; interval (0, 1)
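A minimal bisection sketch for this example (the tolerance and iteration cap are my own choices):

```python
import math

def bisection(f, xi, xs, tol=1e-6, max_iter=100):
    """Halve the bracket [xi, xs] until the root is located to within tol."""
    if f(xi) * f(xs) > 0:
        raise ValueError("f(xi) and f(xs) must have opposite signs")
    for _ in range(max_iter):
        xr = (xi + xs) / 2.0             # midpoint estimate of the root
        if f(xi) * f(xr) < 0:
            xs = xr                      # root is in the lower subinterval
        else:
            xi = xr                      # root is in the upper subinterval
        if (xs - xi) < tol:
            break
    return (xi + xs) / 2.0

print(bisection(lambda x: math.exp(-x) - x, 0.0, 1.0))   # about 0.5671
```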





THE FALSE POSITION METHOD

Initial guesses: 2
Convergence rate: slow/medium
Stability: always


An alternative method that exploits this graphical insight is to join f(xi) and f(xs) by a straight line. The intersection of this line with the x axis represents an improved estimate of the root. The fact that the replacement of the curve by a straight line gives a “false position” of the root is the origin of the name.
This method is very similar to the bisection method; it differs only in the formula used to compute xr:
xr = xs - (f(xs)*(xi - xs)) / (f(xi) - f(xs))

Example: f(x) = x^3 + 4x^2 - 10
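A sketch of false position on this example, bracketing the root between 1 and 2 (these starting values are my own choice; f(1) = -5 and f(2) = 14 have opposite signs):

```python
def false_position(f, xi, xs, tol=1e-6, max_iter=100):
    """Replace the curve by the chord joining (xi, f(xi)) and (xs, f(xs))."""
    xr = xi
    for _ in range(max_iter):
        xr_old = xr
        xr = xs - f(xs) * (xi - xs) / (f(xi) - f(xs))   # chord crosses the x axis
        if f(xi) * f(xr) < 0:
            xs = xr           # root is in the lower subinterval
        else:
            xi = xr           # root is in the upper subinterval
        if abs(xr - xr_old) < tol:
            break
    return xr

print(false_position(lambda x: x**3 + 4 * x**2 - 10, 1.0, 2.0))  # about 1.3652
```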





OPEN METHODS


The open methods are based on formulas that require only a single starting value of x, or two starting values that do not necessarily bracket the root*.

SIMPLE FIXED-POINT METHOD

Initial guesses: 1
Convergence rate: slow
Stability: possibly divergent

Simple fixed-point iteration is obtained by rearranging the function f(x) = 0 so that x is on the left-hand side of the equation, x = g(x)*. Starting from an initial guess xo, successive estimates are computed as:
x1 = g(xo)
xi+1 = g(xi)
The approximate error of each iteration can be determined with the error estimator; when the approximate error is small, the estimate is close to the root.

Example: g(x) = (2400/(30 - x))^(1/2)
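A sketch of the iteration xi+1 = g(xi) for this g(x), stopping on the approximate relative error (the starting guess, tolerance and function name are my own choices):

```python
import math

def fixed_point(g, x0, tol=1e-6, max_iter=100):
    """Iterate x_{i+1} = g(x_i) until the approximate error is small."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        err = abs((x_new - x) / x_new) * 100    # approximate error in percent
        x = x_new
        if err < tol:
            break
    return x

g = lambda x: math.sqrt(2400.0 / (30.0 - x))
print(fixed_point(g, 10.0))    # converges to about 11.3
```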

Four situations can occur when searching for a root:








Cases where the solution converges toward the root, and





cases where the iterations diverge from the root.




NEWTON-RAPHSON METHOD

Initial guesses: 1
Convergence rate: fast
Stability: possibly divergent

If the initial guess at the root is xi, a tangent can be extended from the point [xi, f(xi)]. The point where this tangent crosses the x axis usually represents an improved estimate of the root. The Newton-Raphson method can be derived on the basis of this geometrical interpretation, where the first derivative of the function at xi is equivalent to the slope:


f’(xi) = (f(xi) - 0)/(Xi - Xi+1)
Xi+1 = xi – (f(xi)/f’(xi))----Newton-Raphson formula.


Example:
f(x) = 1.27*10^(-4)*x^2 - 6.77*10^(-9)*x^4 - 2.67*10^(-6)*x^3 - 0.0185
f'(x) = 2.54*10^(-4)*x - 2.71*10^(-8)*x^3 - 8.01*10^(-6)*x^2
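A sketch of the Newton-Raphson iteration on this polynomial, using the derivative above (the starting guess is my own choice, and the root it finds is only approximate):

```python
def newton_raphson(f, df, x0, tol=1e-8, max_iter=50):
    """Iterate x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f  = lambda x: 1.27e-4 * x**2 - 6.77e-9 * x**4 - 2.67e-6 * x**3 - 0.0185
df = lambda x: 2.54e-4 * x - 2.708e-8 * x**3 - 8.01e-6 * x**2
print(newton_raphson(f, df, 15.0))   # converges to a root near x = 14.6
```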




SECANT METHOD


Initial guesses: 2
Convergence rate: medium to fast
Stability: possibly divergent


In some cases it is difficult to obtain the derivative of the function. Using the similar triangles formed by joining the two initial points, the following equation is obtained, in which the derivative is approximated by a finite difference between the two points mentioned:
xi+1 = xi - f(xi)*(xi-1 - xi) / (f(xi-1) - f(xi))

Example: f(x) = e^(-x) - x
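A secant-method sketch for this example, starting from two points (the starting values 0 and 1 and the tolerance are my own choices):

```python
import math

def secant(f, x_prev, x_curr, tol=1e-8, max_iter=50):
    """x_{i+1} = x_i - f(x_i)*(x_{i-1} - x_i) / (f(x_{i-1}) - f(x_i))."""
    for _ in range(max_iter):
        x_next = x_curr - f(x_curr) * (x_prev - x_curr) / (f(x_prev) - f(x_curr))
        if abs(x_next - x_curr) < tol:
            return x_next
        x_prev, x_curr = x_curr, x_next
    return x_curr

print(secant(lambda x: math.exp(-x) - x, 0.0, 1.0))   # about 0.5671
```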






Summary

REFERENCES

- CHAPRA, Steven C. and CANALE, Raymond P.: “Numerical Methods for Engineers”. McGraw-Hill, fifth edition, 2006.*
- CARRILLO, Eduardo: “Métodos Abiertos”, UIS, 2010.