Statistical Inference
By
Name ()
Course Name ()
Instructor Name ()
Name of University ()
Submission Date ()
Exercise 4.2
Question One
Consider a random sample of size $n$, $x_1, x_2, \ldots, x_n$, from a density function $f(x, \theta)$ with unknown parameter $\theta$. Once we observe the values $x_1, x_2, \ldots, x_n$, the likelihood function is given by
$$L(\theta) = \prod_{i=1}^{n} f(x_i; \theta),$$
i.e. the likelihood function is the product of the marginal densities.

The score function $S(\theta \mid x_{1:n})$ is obtained by differentiating the log-likelihood function $l(\theta) = \ln L(\theta)$ with respect to $\theta$, i.e. $S(\theta \mid x_{1:n}) = \frac{dl(\theta)}{d\theta}$. It is worth noting that $L(\theta)$ and $\ln L(\theta)$ attain their maximum at the same value of $\theta$, so it is easier to find the maximum by working with the logarithm of the likelihood function.
It follows that, given that $x_1, x_2, \ldots, x_n$ are iid with distribution function
$$F(x, \theta) = \frac{e^{x-\theta}}{1 + e^{x-\theta}}, \quad x \in \mathbb{R},$$
the density is obtained by differentiating $F$ with respect to $x$. Writing $F(x, \theta) = e^{x-\theta}\,(1 + e^{x-\theta})^{-1}$ and applying the quotient rule,
$$f(x, \theta) = \frac{d}{dx} F(x, \theta) = \frac{e^{x-\theta}(1 + e^{x-\theta}) - e^{x-\theta} \cdot e^{x-\theta}}{(1 + e^{x-\theta})^2} = \frac{e^{x-\theta}}{(1 + e^{x-\theta})^2}, \quad x \in \mathbb{R},$$
which is the logistic density with location parameter $\theta$.
The likelihood function is therefore
$$L(\theta) = \prod_{i=1}^{n} f(x_i; \theta) = \prod_{i=1}^{n} \frac{e^{x_i-\theta}}{(1 + e^{x_i-\theta})^2} = \frac{e^{\sum_{i=1}^{n}(x_i-\theta)}}{\prod_{i=1}^{n}(1 + e^{x_i-\theta})^2},$$
and taking logarithms gives the log-likelihood
$$l(\theta) = \log L(\theta) = \sum_{i=1}^{n}(x_i - \theta) - 2\sum_{i=1}^{n} \log\left(1 + e^{x_i-\theta}\right).$$
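As a quick sanity check on this derivation (not part of the original assignment; the sample size, seed, and the helper name log_lik are illustrative assumptions), the following sketch compares the derived density and closed-form log-likelihood against SciPy's built-in logistic distribution:

```python
# Sketch: verify f(x, theta) = e^{x-theta} / (1 + e^{x-theta})^2 and the
# closed-form log-likelihood against scipy.stats.logistic (loc = theta).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
theta = 2.0  # assumed true location, for illustration only
x = stats.logistic.rvs(loc=theta, size=50, random_state=rng)

def log_lik(theta, x):
    """Closed-form logistic log-likelihood derived in Question One."""
    z = x - theta
    # log(1 + e^z) computed stably as logaddexp(0, z)
    return np.sum(z) - 2.0 * np.sum(np.logaddexp(0.0, z))

# Density check at a few points
pts = np.array([-1.0, 0.0, 1.5, 4.0])
f_manual = np.exp(pts - theta) / (1.0 + np.exp(pts - theta)) ** 2
assert np.allclose(f_manual, stats.logistic.pdf(pts, loc=theta))

# Log-likelihood check against summing log-densities directly
assert np.isclose(log_lik(theta, x), stats.logistic.logpdf(x, loc=theta).sum())
```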
Question Two
Differentiating the log-likelihood with respect to $\theta$ gives the score function:
$$S(\theta \mid x_{1:n}) = \frac{dl(\theta)}{d\theta} = \sum_{i=1}^{n}\left(-1 + \frac{2e^{x_i-\theta}}{1 + e^{x_i-\theta}}\right) = \sum_{i=1}^{n}\left(2F(x_i, \theta) - 1\right).$$
The Fisher information is the variance of the score, $I(\theta \mid X_{1:n}) = \mathrm{Var}\left(S(\theta \mid X_{1:n})\right)$ (Smirnov, 2011). By the probability integral transform, $F(X_i, \theta) \sim \mathrm{Uniform}(0,1)$, so $\mathrm{Var}\left(2F(X_i, \theta) - 1\right) = 4 \cdot \tfrac{1}{12} = \tfrac{1}{3}$, and since the $X_i$ are independent,
$$I(\theta \mid X_{1:n}) = \sum_{i=1}^{n} \mathrm{Var}\left(2F(X_i, \theta) - 1\right) = \frac{n}{3}.$$
The observed information $J(\theta \mid x_{1:n})$ is given by the negative of the second derivative of the log-likelihood function (Donnelly et al., 2016):
$$J(\theta \mid x_{1:n}) = -\frac{d^2 l(\theta)}{d\theta^2} = -\frac{d}{d\theta} \sum_{i=1}^{n}\left(2F(x_i, \theta) - 1\right) = 2\sum_{i=1}^{n} \frac{e^{x_i-\theta}}{(1 + e^{x_i-\theta})^2} = 2\sum_{i=1}^{n} f(x_i, \theta),$$
using the chain rule, since $\frac{d}{d\theta} F(x_i, \theta) = -f(x_i, \theta)$.
Taking expectations, $E\left[f(X_i, \theta)\right] = \int_{-\infty}^{\infty} f(x, \theta)^2\,dx = \int_0^1 u(1-u)\,du = \tfrac{1}{6}$ (substituting $u = F(x, \theta)$, $du = f(x, \theta)\,dx$), so
$$E\left[J(\theta \mid X_{1:n})\right] = 2\sum_{i=1}^{n} E\left[f(X_i, \theta)\right] = 2n \cdot \frac{1}{6} = \frac{n}{3} = I(\theta \mid X_{1:n}),$$
i.e. the expected observed information equals the Fisher information, as it should.
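These two identities are easy to confirm by simulation. The sketch below (an assumed setup, not part of the assignment; the parameter values are arbitrary) checks that the score has mean zero and variance $n/3$, and that the observed information averages to $n/3$:

```python
# Monte Carlo check: E[S] = 0, Var(S) = n/3, and E[J] = n/3 for the logistic model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
theta, n, reps = 0.0, 40, 20000  # illustrative values

x = stats.logistic.rvs(loc=theta, size=(reps, n), random_state=rng)
F = stats.logistic.cdf(x, loc=theta)            # F(x_i, theta), Uniform(0,1)
score = np.sum(2.0 * F - 1.0, axis=1)           # S(theta | x_{1:n}) per replication
J = 2.0 * np.sum(stats.logistic.pdf(x, loc=theta), axis=1)  # observed information

print(score.mean())         # ~ 0
print(score.var(), n / 3)   # both ~ 13.33
print(J.mean(), n / 3)      # both ~ 13.33
```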
Question Three
In statistical analysis, the unbiasedness property of estimators is considered important, and therefore when comparing competing estimators we restrict attention to those that are unbiased.

Let $\hat{\theta}$ be an estimator of $\theta$. We say that $\hat{\theta}$ is unbiased if $E(\hat{\theta}) = \theta$; in that case the bias term vanishes and $\mathrm{MSE}(\hat{\theta}) = \mathrm{Var}(\hat{\theta})$.

Take
$$\hat{\theta} = \frac{1}{n}\sum_{i=1}^{n} x_i = \bar{x}.$$
Since the logistic density is symmetric about $\theta$, $E(X_i) = \theta$, hence $E(\hat{\theta}) = \theta$ and $\hat{\theta}$ is an unbiased estimator. Using $\mathrm{Var}(X_i) = \frac{\pi^2}{3}$ for the logistic distribution,
$$\mathrm{MSE}(\hat{\theta}) = \mathrm{Var}(\hat{\theta}) = \mathrm{Var}\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n^2}\sum_{i=1}^{n} \mathrm{Var}(X_i) = \frac{1}{n^2} \cdot n \cdot \frac{\pi^2}{3} = \frac{\pi^2}{3n}.$$
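A short simulation sketch (assumed parameter values, for illustration only) bears this out: the replicated sample means centre on $\theta$ with variance close to $\pi^2/(3n)$:

```python
# Check that theta_hat = x_bar is unbiased and Var(x_bar) = pi^2 / (3n).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
theta, n, reps = 5.0, 30, 50000  # illustrative values

xbar = stats.logistic.rvs(loc=theta, size=(reps, n), random_state=rng).mean(axis=1)
print(xbar.mean(), theta)               # ~ 5.0: unbiased
print(xbar.var(), np.pi**2 / (3 * n))   # both ~ 0.1097
```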
MSE consistency is defined as follows: let $\{T_n\}$ be a sequence of estimators of $T(\theta)$ based on a random sample of size $n$. Such a sequence of estimators is said to be MSE consistent iff
$$\lim_{n \to \infty} E\left[\left(T_n - T(\theta)\right)^2\right] = 0 \quad \forall \theta, \quad \text{i.e.} \quad \lim_{n \to \infty} \mathrm{MSE}(T_n) = 0 \quad \text{(Gyeong, 2010).}$$
Here
$$\lim_{n \to \infty} \mathrm{MSE}(T_n) = \lim_{n \to \infty} \frac{\pi^2}{3n} = 0;$$
thus $\hat{\theta}$ is MSE consistent.
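The decay of the MSE with $n$ can also be seen directly by simulation. This sketch (illustrative sample sizes and seed) estimates $\mathrm{MSE}(\bar{x})$ for growing $n$ and compares it with $\pi^2/(3n)$:

```python
# Sketch of MSE consistency: simulated MSE(x_bar) tracks pi^2/(3n) toward zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
theta, reps = 0.0, 20000  # illustrative values
for n in (10, 100, 1000):
    xbar = stats.logistic.rvs(loc=theta, size=(reps, n), random_state=rng).mean(axis=1)
    mse = np.mean((xbar - theta) ** 2)
    print(n, mse, np.pi**2 / (3 * n))   # MSE shrinks like pi^2/(3n) -> 0
```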
Question Four
The Cramér-Rao inequality gives a lower bound on the variance of any unbiased estimator of, say, $T(\theta)$. It is defined as:
$$\mathrm{Var}(T) \geq \frac{\left[T'(\theta)\right]^2}{n\,E\left[\left(\frac{d}{d\theta} \log f(X, \theta)\right)^2\right]}.$$
If $\mathrm{Var}(\hat{\theta}) = \mathrm{CRLB}(\theta)$, we say that the CRLB is attained, implying that $\hat{\theta}$ is the best unbiased estimator in terms of variance.

Here $T(\theta) = \theta$, so $T'(\theta) = 1$, and per observation $\frac{d}{d\theta} \log f(x, \theta) = 2F(x, \theta) - 1$. From Question Two, $n\,E\left[\left(2F(X, \theta) - 1\right)^2\right] = I(\theta \mid X_{1:n}) = \frac{n}{3}$, so
$$\mathrm{CRLB}(\theta) = \frac{1}{I(\theta \mid X_{1:n})} = \frac{3}{n}.$$
Since $\mathrm{Var}(\hat{\theta}) = \frac{\pi^2}{3n}$ and $\frac{\pi^2}{3} \approx 3.29 > 3$, we have
$$\mathrm{Var}(\hat{\theta}) = \frac{\pi^2}{3n} > \frac{3}{n} = \mathrm{CRLB}(\theta),$$
so the sample mean is unbiased but does not attain the Cramér-Rao lower bound.
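For concreteness, a small numeric comparison (illustrative sample sizes only) shows that the gap is a constant factor $\pi^2/9 \approx 1.0966$, independent of $n$:

```python
# Variance of the sample mean vs. the CRLB for the logistic location model.
import numpy as np

for n in (10, 100, 1000):
    var_xbar = np.pi**2 / (3 * n)
    crlb = 3.0 / n
    print(n, var_xbar, crlb, var_xbar / crlb)   # ratio ~ 1.0966 > 1 for every n
```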
5 | P a g e
n → ∞
MSE ( T n ) = lim
n → ∞
¿ π2
3 n
π 2
3∗∞ = π 2
0 = 0; thus ^θ is MSE consistent.
Question four
The Cramer Rao inequality give the lower bound on the variance of any unbiased estimator of
let’s sayT (θ), it is defined as :
Vat(T) ≥ [ T ' (θ) ] 2
n E [ d
dθ log f ( x ,θ ) ]
2
, Var (θb)=CRLB (θ), we say that b the CRLB is attained implying that θ is the best unbiased
estimator in terms of variance
T (θ) = θ, T ' (θ) = 1; d
dθ log f ( x , θ ) =¿.∑
i=1
n
(−x 1+θ )
1
n E [ d
dθ log f ( x , θ ) ]2 =
1
n E [∑
i=1
n
(−x 1+θ ) ]2 ¿ 1
n∑
i=1
n
[ E ( −x 1+θ ) ] 2
¿ 1
n∗n∗π 2
3 n
¿ 1
n∗π2
3
= 3
n∗π2
CRLB ( θ )= 1
I (θ∨X 1: n) = 1
−n θ ≠ 3
n∗π2
5 | P a g e
References
1. Donnelly, R.; Abdel-Raouf, F. Statistics; Alpha, a member of Penguin Random House LLC: Indianapolis, IN, 2016.
2. Kubáček, L. Foundations of Estimation Theory; Elsevier: Amsterdam, 1988.
3. Gyeong, K. The Construction of the Uniformly Minimum Variance Unbiased Estimator. Tsukuba Journal of Mathematics 2010, 34 (1), 47-58.
4. Smirnov, O. Maximum Likelihood Estimator for Multivariate Binary Response Models. SSRN Electronic Journal 2011.