Numerical Methods | Report

Added on 2022-09-14
Running head: NUMERICAL METHODS & STATISTICS
Name of the Student
Name of the University
Author’s Note.
This report examines four numerical methods, describing how each works, how its performance is evaluated, and how it relates to the field of statistical inference.
Root Finding Algorithms.
Root finding algorithms solve problems of the form: find x such that f(x) = 0; such an x is a root of the function f (Dey, 2015). When evaluating root-finding methods, the key considerations are the computation time and the number of iterations needed to reach the root (Dey, 2015). The rate of convergence depends on the initial value supplied and may be linear, quadratic, or higher (Dey, 2015).
First, the bisection method (the most primitive of these methods) applies the Intermediate Value Theorem: if a function f is continuous on [a, b] and f(a) and f(b) have opposite signs (i.e., f(a) * f(b) < 0), then a root lies between a and b. The method brackets the root by repeatedly halving the interval. Its convergence, measured by the error bound at each step, is linear, and the rate is therefore slow.
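As a minimal sketch of the interval-halving idea described above (not taken from the report; the test function cos(x) - x is an illustrative assumption), the bisection method can be written as:

```python
import math

def bisection(f, a, b, tol=1e-10, max_iter=200):
    """Bisection method: repeatedly halve [a, b] while keeping a sign change."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2.0
        fm = f(m)
        if fm == 0.0 or (b - a) / 2.0 < tol:
            return m
        if fa * fm < 0:        # sign change in [a, m]: root is there
            b, fb = m, fm
        else:                  # otherwise the root is in [m, b]
            a, fa = m, fm
    return (a + b) / 2.0

# Example: the root of cos(x) - x lies in [0, 1].
root = bisection(lambda x: math.cos(x) - x, 0.0, 1.0)
```

Each iteration halves the bracketing interval, which is exactly the linear error reduction noted above.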
The Newton-Raphson method is the most widely employed root-finding formula. It follows from the Taylor series expansion

f(x_{n+1}) = sum_{k=0}^{infinity} [ f^(k)(x_n) / k! ] * (x_{n+1} - x_n)^k,

which is truncated after the linear term and set equal to zero at the x-axis:

f(x_{n+1}) ≈ f(x_n) + f'(x_n) * (x_{n+1} - x_n) = 0

(Chapra & Canale, 2012; Dey, 2015). Solving for x_{n+1} gives the iteration

x_{n+1} = x_n - f(x_n) / f'(x_n).

The convergence of the errors is quadratic (Dey, 2015), so its rate of convergence is faster than that of the bisection method, which accounts for its effectiveness. The choice of initial guess determines whether the iteration converges.
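The iteration x_{n+1} = x_n - f(x_n) / f'(x_n) can be sketched as follows (not from the report; the test function cos(x) - x and its derivative are illustrative assumptions):

```python
import math

def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        if dfx == 0.0:
            raise ZeroDivisionError("derivative vanished; try another initial guess")
        x_next = x - f(x) / dfx
        if abs(x_next - x) < tol:   # successive iterates agree: converged
            return x_next
        x = x_next
    return x

# Example: root of f(x) = cos(x) - x, with f'(x) = -sin(x) - 1.
root = newton_raphson(lambda x: math.cos(x) - x,
                      lambda x: -math.sin(x) - 1.0,
                      x0=1.0)
```

With a reasonable starting guess this converges in a handful of iterations, illustrating the quadratic rate; a poor x0 (e.g., near a stationary point of f) can make the iteration diverge, which is the sensitivity to initial guesses noted above.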
Optimization.
Furthermore, optimization by the genetic algorithm is based on the process of natural selection and solves both constrained and unconstrained optimization problems (Genetic Algorithm - MATLAB & Simulink, n.d.). Discontinuous, nondifferentiable, stochastic, or nonlinear problems can also be optimized by the genetic algorithm (Genetic Algorithm - MATLAB & Simulink, n.d.). Assessing its performance relies on the following:
The number of objective and constraint function evaluations.
The number of iterations (speed).
The accuracy of the solutions (and whether it correlates with computational effort).
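A toy real-valued genetic algorithm can illustrate the selection/crossover/mutation cycle described above. This is a minimal sketch, not the MATLAB `ga` implementation the report cites; the operators (truncation selection, arithmetic crossover, Gaussian mutation), parameter values, and test function are all illustrative assumptions:

```python
import random

def genetic_minimize(f, lo, hi, pop_size=40, generations=80,
                     mutation_rate=0.2, seed=0):
    """Toy real-valued GA: truncation selection, arithmetic crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)                       # fitter individuals (smaller f) first
        parents = pop[: pop_size // 2]        # truncation selection: keep best half
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            w = rng.random()
            child = w * p1 + (1 - w) * p2     # arithmetic crossover
            if rng.random() < mutation_rate:
                child += rng.gauss(0, 0.1 * (hi - lo))   # Gaussian mutation
            children.append(min(max(child, lo), hi))     # clamp to the bounds
        pop = parents + children
    return min(pop, key=f)

# Example: minimize (x - 2)^2 on [-10, 10]; the minimizer is x = 2.
best = genetic_minimize(lambda x: (x - 2.0) ** 2, -10.0, 10.0)
```

Counting the calls to f here would give exactly the first performance measure in the list above (objective function evaluations), while `generations` controls the second (iterations) and the final error the third (accuracy).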
Numerical Integration.
Delving into numerical integration (quadrature: rectangle rule, trapezoidal rule), these algorithms find numerical approximations to definite integrals (Press et al., 2007). The application considered here is the one-dimensional integral, i.e., approximating, and finding the degree of accuracy of,

∫_a^b f(x) dx.

The midpoint rule,

∫_a^b f(x) dx ≈ (b - a) * f((a + b) / 2),

is the simplest method; it uses the constant interpolating polynomial that passes through the point ((a + b)/2, f((a + b)/2)) on a line (Chapra & Canale, 2012; Press et al., 2007). The trapezoidal rule instead uses the affine model, the interpolating polynomial that passes through the points (a, f(a)) and (b, f(b)). Trapezoidal rule:

∫_a^b f(x) dx ≈ (b - a) * (f(a) + f(b)) / 2.
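The two quadrature rules above translate directly into code. This is a minimal sketch (not from the report; the test integrand x^2 on [0, 1] is an illustrative assumption, chosen because its integral is exactly 1/3):

```python
def midpoint_rule(f, a, b):
    """Midpoint rule: (b - a) * f((a + b) / 2)."""
    return (b - a) * f((a + b) / 2.0)

def trapezoidal_rule(f, a, b):
    """Trapezoidal rule: (b - a) * (f(a) + f(b)) / 2."""
    return (b - a) * (f(a) + f(b)) / 2.0

# Example: integral of x^2 on [0, 1] is exactly 1/3.
f = lambda x: x * x
mid = midpoint_rule(f, 0.0, 1.0)      # (1 - 0) * (0.5)^2 = 0.25
trap = trapezoidal_rule(f, 0.0, 1.0)  # (1 - 0) * (0 + 1) / 2 = 0.5
```

Note that the single-interval rules bracket the true value 1/3 from below and above here; in practice both are applied on many subintervals (composite rules) to reach a desired degree of accuracy.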