Statistical Inference | Assignment

Statistical Inference
By
Name ()
Course Name ()
Instructor Name ()
Name of University ()
Submission Date ()

Exercise 4.2
Question One
Consider a random sample of size $n$, $x_1, x_2, x_3, \ldots, x_n$, from a density function $f(x,\theta)$ with unknown parameter $\theta$. Once we observe the values of $x_1, x_2, \ldots, x_n$, the likelihood function is given by $L(\theta) = \prod_{i=1}^{n} f(x_i,\theta)$, i.e. the likelihood function is the product of the marginal densities.

The score function $S(\theta \mid x_{1:n})$ is derived by differentiating the log-likelihood function with respect to $\theta$, i.e. $S(\theta \mid x_{1:n}) = \frac{d\ell(\theta)}{d\theta}$. It is worth noting that $L(\theta)$ and $\ln L(\theta)$ attain their maximum at the same value of $\theta$, and therefore it is easier to find the maximum using the logarithm of the likelihood function.

It follows that, given $x_1, x_2, \ldots, x_n$ iid with distribution function
$$ F(x,\theta) = \frac{e^{x-\theta}}{1+e^{x-\theta}}, \qquad x \in \mathbb{R}, $$
the likelihood function is given by $\prod_{i=1}^{n} f(x_i,\theta)$, where $i = 1, 2, \ldots, n$.
The density is obtained by differentiating the distribution function with respect to $x$:
$$ f(x,\theta) = \frac{d}{dx}F(x,\theta) = \frac{d}{dx}\,e^{x-\theta}\left(1+e^{x-\theta}\right)^{-1} = \frac{e^{x-\theta}}{\left(1+e^{x-\theta}\right)^{2}}, \qquad x \in \mathbb{R}. $$
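As a quick sanity check (not part of the original assignment), the differentiation above can be verified symbolically; a minimal sketch, assuming SymPy is available:

```python
import sympy as sp

x, theta = sp.symbols("x theta", real=True)
F = sp.exp(x - theta) / (1 + sp.exp(x - theta))   # logistic cdf F(x, theta)
f = sp.simplify(sp.diff(F, x))                    # density f = dF/dx
# difference from the claimed closed form should simplify to 0
print(sp.simplify(f - sp.exp(x - theta) / (1 + sp.exp(x - theta))**2))
```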
The likelihood is therefore the product of the marginal densities:
$$ L(\theta) = \prod_{i=1}^{n} f(x_i,\theta) = \frac{e^{x_1-\theta}}{\left(1+e^{x_1-\theta}\right)^{2}} \cdot \frac{e^{x_2-\theta}}{\left(1+e^{x_2-\theta}\right)^{2}} \cdots \frac{e^{x_n-\theta}}{\left(1+e^{x_n-\theta}\right)^{2}} = \frac{e^{\sum_{i=1}^{n}(x_i-\theta)}}{\prod_{i=1}^{n}\left(1+e^{x_i-\theta}\right)^{2}}. $$
Taking logarithms,
$$ \ell(\theta) = \log L(\theta) = \sum_{i=1}^{n}(x_i-\theta) - 2\sum_{i=1}^{n}\log\left(1+e^{x_i-\theta}\right). $$
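The log-likelihood above is easy to evaluate numerically. Below is a minimal illustrative sketch (assuming NumPy; the helper name logistic_loglik is ours, not from the assignment) showing that $\ell(\theta)$ for a simulated logistic sample peaks near the true $\theta$:

```python
import numpy as np

def logistic_loglik(theta, x):
    """l(theta) = sum(x_i - theta) - 2 * sum(log(1 + exp(x_i - theta)))."""
    u = np.asarray(x) - theta
    # np.logaddexp(0, u) computes log(1 + e^u) without overflow
    return np.sum(u) - 2.0 * np.sum(np.logaddexp(0.0, u))

rng = np.random.default_rng(0)
x = rng.logistic(loc=2.0, size=500)          # true theta = 2
grid = np.linspace(0.0, 4.0, 401)
print(grid[np.argmax([logistic_loglik(t, x) for t in grid])])  # close to 2
```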
Question Two
$$ S(\theta \mid x_{1:n}) = \frac{d\ell(\theta)}{d\theta} = \frac{d}{d\theta}\left[\sum_{i=1}^{n}(x_i-\theta) - 2\sum_{i=1}^{n}\log\left(1+e^{x_i-\theta}\right)\right] = \sum_{i=1}^{n}\left(-1 + \frac{2e^{x_i-\theta}}{1+e^{x_i-\theta}}\right) = \sum_{i=1}^{n}\left(2F(x_i,\theta)-1\right). $$

The Fisher information is the variance of the score, $I(\theta \mid x_{1:n}) = \operatorname{Var}\left(S(\theta \mid x_{1:n})\right)$ (Smirnov, 2011). Since $F(X_i,\theta)$ is uniform on $(0,1)$ when $X_i$ has distribution function $F(\cdot,\theta)$, each term $2F(X_i,\theta)-1$ is uniform on $(-1,1)$, with mean $0$ and variance $\frac{1}{3}$. Hence
$$ I(\theta \mid x_{1:n}) = \sum_{i=1}^{n}\operatorname{Var}\left(2F(X_i,\theta)-1\right) = \frac{n}{3}. $$
$J(\theta \mid x_{1:n})$ is given by the negative second derivative of the log-likelihood function, $J(\theta \mid x_{1:n}) = -\frac{d^{2}\ell(\theta)}{d\theta^{2}}$ (Donnelly et al., 2016). Differentiating the score once more with respect to $\theta$, using the chain rule,
$$ \frac{d}{d\theta}\left(\frac{2e^{x_i-\theta}}{1+e^{x_i-\theta}}\right) = -\frac{2e^{x_i-\theta}}{\left(1+e^{x_i-\theta}\right)^{2}} = -2f(x_i,\theta), $$
so that
$$ J(\theta \mid x_{1:n}) = 2\sum_{i=1}^{n} f(x_i,\theta) = 2\sum_{i=1}^{n}\frac{e^{x_i-\theta}}{\left(1+e^{x_i-\theta}\right)^{2}}. $$
Taking expectations, and substituting $p = F(x,\theta)$ (so that $dp = f\,dx$ and $f = p(1-p)$),
$$ E\left[J(\theta \mid X_{1:n})\right] = 2n\,E\left[f(X,\theta)\right] = 2n\int_{-\infty}^{\infty} f(x,\theta)^{2}\,dx = 2n\int_{0}^{1}p(1-p)\,dp = \frac{n}{3}, $$
which agrees with $I(\theta \mid x_{1:n}) = \operatorname{Var}\left(S(\theta \mid x_{1:n})\right) = \frac{n}{3}$ obtained above.
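Both identities, $\operatorname{Var}(S) = n/3$ and $E[J] = n/3$, can be checked by simulation. A rough sketch under the same assumptions (NumPy; the function names are ours):

```python
import numpy as np

def score(theta, x):
    """S(theta; x) = sum_i (2*F(x_i, theta) - 1), F the logistic cdf."""
    F = 1.0 / (1.0 + np.exp(-(x - theta)))
    return np.sum(2.0 * F - 1.0)

def obs_info(theta, x):
    """J(theta; x) = 2 * sum_i f(x_i, theta), using f = F*(1 - F)."""
    F = 1.0 / (1.0 + np.exp(-(x - theta)))
    return 2.0 * np.sum(F * (1.0 - F))

rng = np.random.default_rng(1)
n, theta, reps = 50, 0.0, 20_000
samples = rng.logistic(loc=theta, size=(reps, n))
print(np.var([score(theta, x) for x in samples]))      # approx n/3 = 16.67
print(np.mean([obs_info(theta, x) for x in samples]))  # approx n/3 = 16.67
```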
Question Three
In statistical analysis, the unbiasedness property of estimators is considered important, and therefore when comparing various competing estimators we restrict attention to those that are unbiased.

Let $\hat{\theta}$ be an estimator of $\theta$. We say that $\hat{\theta}$ is an unbiased estimator if $E(\hat{\theta}) = \theta$, in which case $\operatorname{MSE}(\hat{\theta}) = \operatorname{Var}(\hat{\theta})$. Take
$$ \hat{\theta} = \frac{\sum_{i=1}^{n} x_i}{n} = \bar{x}. $$
The logistic density is symmetric about $\theta$, so $E(X_i) = \theta$ and hence $E(\hat{\theta}) = \theta$: the estimator is unbiased. Using $\operatorname{Var}(X_i) = \frac{\pi^{2}}{3}$ for the logistic distribution,
$$ \operatorname{MSE}(\hat{\theta}) = \operatorname{Var}(\hat{\theta}) = \operatorname{Var}\left(\frac{1}{n}\sum_{i=1}^{n} x_i\right) = \frac{1}{n^{2}}\sum_{i=1}^{n}\operatorname{Var}(x_i) = \frac{1}{n^{2}}\cdot n\cdot\frac{\pi^{2}}{3} = \frac{\pi^{2}}{3n}. $$
MSE consistency is defined as follows: let $\{T_n\}$ be a sequence of estimators of $T(\theta)$ from a random sample of size $n$. Such a sequence of estimators is said to be MSE consistent iff $\lim_{n\to\infty} E\left[T_n - T(\theta)\right]^{2} = 0$ for all $\theta$, i.e. $\lim_{n\to\infty}\operatorname{MSE}(T_n) = 0$ (Gyeong, 2010).
$$ \lim_{n\to\infty}\operatorname{MSE}(T_n) = \lim_{n\to\infty}\frac{\pi^{2}}{3n} = \frac{\pi^{2}}{3}\cdot\lim_{n\to\infty}\frac{1}{n} = 0; $$
thus $\hat{\theta}$ is MSE consistent.
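The convergence $\operatorname{MSE}(\bar{x}) = \pi^{2}/(3n) \to 0$ can also be seen empirically; a small sketch, again assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
theta, reps = 0.0, 20_000
for n in (10, 100, 1000):
    xbar = rng.logistic(loc=theta, size=(reps, n)).mean(axis=1)
    # simulated MSE tracks pi^2/(3n) and shrinks to 0 as n grows
    print(n, np.mean((xbar - theta) ** 2), np.pi**2 / (3 * n))
```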
Question Four
The Cramér-Rao inequality gives a lower bound on the variance of any unbiased estimator of, let's say, $T(\theta)$. It is defined as
$$ \operatorname{Var}(T) \ge \frac{\left[T'(\theta)\right]^{2}}{n\,E\left[\left(\frac{\partial}{\partial\theta}\log f(x,\theta)\right)^{2}\right]}. $$
If $\operatorname{Var}(\hat{\theta}) = \operatorname{CRLB}(\theta)$, we say that the CRLB is attained, implying that $\hat{\theta}$ is the best unbiased estimator in terms of variance.

Here $T(\theta) = \theta$, so $T'(\theta) = 1$, and from Question Two, $\frac{\partial}{\partial\theta}\log f(x,\theta) = 2F(x,\theta)-1$, so
$$ E\left[\left(\frac{\partial}{\partial\theta}\log f(X,\theta)\right)^{2}\right] = \operatorname{Var}\left(2F(X,\theta)-1\right) = \frac{1}{3}. $$
Therefore
$$ \operatorname{CRLB}(\theta) = \frac{1}{I(\theta \mid X_{1:n})} = \frac{[1]^{2}}{n\cdot\frac{1}{3}} = \frac{3}{n}. $$
Since $\operatorname{Var}(\bar{x}) = \frac{\pi^{2}}{3n} > \frac{3}{n}$, the sample mean is unbiased and MSE consistent but does not attain the Cramér-Rao lower bound.
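The gap between $\operatorname{Var}(\bar{x})$ and the bound can be illustrated by comparing the sample mean with the maximum likelihood estimator, whose variance approaches the CRLB. A simulation sketch (assuming NumPy and SciPy; the root-finding setup is our choice, exploiting that the score is decreasing in $\theta$):

```python
import numpy as np
from scipy.optimize import brentq

def score(theta, x):
    # S(theta; x) = sum_i (2*F(x_i - theta) - 1), decreasing in theta
    return np.sum(2.0 / (1.0 + np.exp(-(x - theta))) - 1.0)

rng = np.random.default_rng(3)
n, reps = 200, 5_000
mles, means = [], []
for _ in range(reps):
    x = rng.logistic(loc=0.0, size=n)
    # the score changes sign on [min(x), max(x)], so its root is the MLE
    mles.append(brentq(lambda t: score(t, x), x.min(), x.max()))
    means.append(x.mean())
print(np.var(mles), 3 / n)                  # MLE variance approx CRLB = 3/n
print(np.var(means), np.pi**2 / (3 * n))    # mean variance pi^2/(3n) > 3/n
```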
References
1. Donnelly, R.; Abdel-Raouf, F. Statistics; Alpha, a member of Penguin Random House LLC: Indianapolis, IN, 2016.
2. Kubácek, L. Foundations of Estimation Theory; Elsevier: Amsterdam, 1988.
3. Gyeong, K. The Construction of the Uniformly Minimum Variance Unbiased Estimator. Tsukuba Journal of Mathematics 2010, 34 (1), 47-58.
4. Smirnov, O. Maximum Likelihood Estimator for Multivariate Binary Response Models. SSRN Electronic Journal 2011.