Solutions for STAT 2006 Assignment 2: Inferential Statistics Problems

Added on  2022/08/23

This document presents a complete solution set for STAT 2006 Assignment 2 on inferential statistics. The problems cover the moment-generating function (MGF) of jointly normal random variables, order statistics and spacings of exponential samples, properties of the bivariate normal distribution, maximum likelihood and method-of-moments estimation, Chebyshev's inequality, pivotal confidence intervals, shifted exponential distributions, and sample-size determination.
Running Head: PROBLEMS ON INFERENTIAL STATISTICS
PROBLEMS ON INFERENTIAL STATISTICS
Name of the Student:
Name of the University:
Author Note:
Answer 1
The pdf of $(X, Y)$ is given by

$$f_{X,Y}(x,y)=\frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}}\exp\left[-\frac{1}{2(1-\rho^2)}\left\{\left(\frac{x-\mu_X}{\sigma_X}\right)^2+\left(\frac{y-\mu_Y}{\sigma_Y}\right)^2-2\rho\left(\frac{x-\mu_X}{\sigma_X}\right)\left(\frac{y-\mu_Y}{\sigma_Y}\right)\right\}\right].$$

The MGF of $(X, Y)$ is

$$M_{X,Y}(s,t)=E\left[e^{sX+tY}\right]=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{sx}e^{ty}\,f_{X,Y}(x,y)\,dx\,dy.$$

Let $x=\mu_X+\sigma_X u$ and $y=\sigma_Y\sqrt{1-\rho^2}\,v+\rho\sigma_Y u+\mu_Y$, so that $dx=\sigma_X\,du$ and $dy=\sigma_Y\sqrt{1-\rho^2}\,dv$. Under this substitution $\frac{x-\mu_X}{\sigma_X}=u$ and $\frac{y-\mu_Y}{\sigma_Y}=\sqrt{1-\rho^2}\,v+\rho u$, so the exponential factor becomes

$$\exp\left[-\frac{1}{2(1-\rho^2)}\left\{u^2+(1-\rho^2)v^2+2\rho\sqrt{1-\rho^2}\,uv+\rho^2u^2-2\rho\sqrt{1-\rho^2}\,uv-2\rho^2u^2\right\}\right]=\exp\left[-\frac{1}{2}\left(u^2+v^2\right)\right].$$

Therefore

$$M_{X,Y}(s,t)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{s(\mu_X+\sigma_X u)}\,e^{t\left(\sigma_Y\sqrt{1-\rho^2}\,v+\rho\sigma_Y u+\mu_Y\right)}\cdot\frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}}e^{-\frac{1}{2}(u^2+v^2)}\cdot\sigma_X\sigma_Y\sqrt{1-\rho^2}\,du\,dv$$

$$=e^{s\mu_X+t\mu_Y}\int_{-\infty}^{\infty}e^{(s\sigma_X+\rho\sigma_Y t)u}\,\frac{1}{\sqrt{2\pi}}e^{-u^2/2}\,du\cdot\int_{-\infty}^{\infty}e^{t\sigma_Y\sqrt{1-\rho^2}\,v}\,\frac{1}{\sqrt{2\pi}}e^{-v^2/2}\,dv.$$

Each integral is the MGF of a standard normal variable, $E\left[e^{aZ}\right]=e^{a^2/2}$. Hence

$$M_{X,Y}(s,t)=e^{s\mu_X+t\mu_Y}\cdot e^{(s\sigma_X+\rho\sigma_Y t)^2/2}\cdot e^{t^2\sigma_Y^2(1-\rho^2)/2}=\exp\left[s\mu_X+t\mu_Y+\frac{1}{2}\left(\sigma_X^2s^2+\sigma_Y^2t^2+2\rho st\,\sigma_X\sigma_Y\right)\right].$$
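As a sanity check on the final formula, the closed-form MGF can be compared against a Monte Carlo estimate of $E[e^{sX+tY}]$. The sketch below is illustrative only; the parameter values and the MGF arguments $(s,t)$ are arbitrary choices, not part of the assignment.

```python
# Monte Carlo check of the bivariate-normal MGF derived above.
# All numeric values below are illustrative choices.
import math
import random

random.seed(0)
mu_x, mu_y, s_x, s_y, rho = 1.0, -0.5, 1.2, 0.8, 0.6
s, t = 0.3, 0.2  # MGF arguments, kept small so the estimate is stable

n = 200_000
acc = 0.0
for _ in range(n):
    z1 = random.gauss(0.0, 1.0)
    z2 = random.gauss(0.0, 1.0)
    # Construct (X, Y) with the stated means, variances, and correlation.
    x = mu_x + s_x * z1
    y = mu_y + s_y * (rho * z1 + math.sqrt(1 - rho**2) * z2)
    acc += math.exp(s * x + t * y)
empirical = acc / n

closed_form = math.exp(s * mu_x + t * mu_y
                       + 0.5 * (s_x**2 * s**2 + s_y**2 * t**2
                                + 2 * rho * s * t * s_x * s_y))
```

With 200,000 draws the two values agree to within a couple of percent.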
Answer 2
$Y_1,Y_2,\ldots,Y_n$ are iid $\exp(\theta)$ random variables (mean $\theta$). $X_1\le X_2\le\cdots\le X_n$ are the ordered values of the $Y_i$'s, with joint pdf

$$f_{X_1,X_2,\ldots,X_n}(x_1,x_2,\ldots,x_n)=\frac{n!}{\theta^n}\exp\left\{-\frac{1}{\theta}\sum_{i=1}^{n}x_i\right\},\qquad 0\le x_1\le x_2\le\cdots\le x_n.$$

a. Let $U_1=X_1$ and $U_i=X_i-X_{i-1}$ for $i=2,\ldots,n$, so that $X_i=U_1+U_2+\cdots+U_i$. The Jacobian of the transformation is the determinant of a triangular matrix with unit diagonal, so $|J|=1$. Since $\sum_{i=1}^nx_i=\sum_{i=1}^n(n-i+1)u_i$, the joint pdf of $(U_1,U_2,\ldots,U_n)$ is

$$g(u_1,u_2,\ldots,u_n)=\frac{n!}{\theta^n}\exp\left[-\frac{1}{\theta}\left\{nu_1+(n-1)u_2+(n-2)u_3+\cdots+u_n\right\}\right]=\frac{n}{\theta}e^{-\frac{n}{\theta}u_1}\cdot\frac{n-1}{\theta}e^{-\frac{n-1}{\theta}u_2}\cdots\frac{1}{\theta}e^{-\frac{1}{\theta}u_n}.$$

b. The joint pdf factorizes, so $U_1,U_2,\ldots,U_n$ are mutually independent, with marginal distributions

$$U_i\sim\exp\left(\frac{\theta}{n-i+1}\right),\qquad i=1,2,\ldots,n.$$

c. $X_1=U_1\sim\exp\left(\frac{\theta}{n}\right)$, so $E(X_1)=\frac{\theta}{n}$. Also

$$E(X_n)=E\left[\sum_{i=1}^{n}U_i\right]=\sum_{i=1}^{n}\frac{\theta}{n-i+1}=\frac{\theta}{n}+\frac{\theta}{n-1}+\cdots+\theta=\theta\left[1+\frac{1}{2}+\cdots+\frac{1}{n}\right]=\theta H_n,$$

where $H_n$ is the $n$-th harmonic number.
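The two expectations in part (c) can be verified by simulation. The sketch below is illustrative; the values of $\theta$ and $n$ are arbitrary choices, not from the assignment.

```python
# Simulation check: for iid exp(theta) samples (theta = mean),
# E[X_(1)] = theta/n and E[X_(n)] = theta * H_n.
import random

random.seed(1)
theta, n, reps = 2.0, 5, 100_000
sum_min = sum_max = 0.0
for _ in range(reps):
    # random.expovariate takes the rate 1/theta, so the mean is theta.
    ys = [random.expovariate(1.0 / theta) for _ in range(n)]
    sum_min += min(ys)
    sum_max += max(ys)

mean_min = sum_min / reps
mean_max = sum_max / reps
H_n = sum(1.0 / k for k in range(1, n + 1))  # harmonic number H_5 = 137/60
```

For $\theta=2$, $n=5$ the theoretical values are $\theta/n=0.4$ and $\theta H_5\approx4.567$, and the simulated means land close to both.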
Answer 3

$Z_1\sim N(0,1)$ and $Z_2\sim N(0,1)$ are independent. $X$ and $Y$ are two random variables defined as

$$X=a_XZ_1+b_XZ_2+c_X,\qquad Y=a_YZ_1+b_YZ_2+c_Y,$$

where $a_X,b_X,c_X,a_Y,b_Y,c_Y$ are constants.

a. Since $E(Z_1)=E(Z_2)=0$,

$$E(X)=a_XE(Z_1)+b_XE(Z_2)+c_X=c_X,\qquad E(Y)=a_YE(Z_1)+b_YE(Z_2)+c_Y=c_Y.$$

Since $Z_1,Z_2$ are independent, $\mathrm{Cov}(Z_1,Z_2)=0$, and $\mathrm{Var}(Z_1)=\mathrm{Var}(Z_2)=1$, so

$$\mathrm{Var}(X)=a_X^2\mathrm{Var}(Z_1)+b_X^2\mathrm{Var}(Z_2)=a_X^2+b_X^2,\qquad \mathrm{Var}(Y)=a_Y^2+b_Y^2.$$

For the covariance, expand

$$XY=a_Xa_YZ_1^2+b_Xb_YZ_2^2+(a_Xc_Y+a_Yc_X)Z_1+(b_Xc_Y+b_Yc_X)Z_2+(a_Xb_Y+b_Xa_Y)Z_1Z_2+c_Xc_Y.$$

Using $E(Z_1^2)=E(Z_2^2)=1$, $E(Z_1)=E(Z_2)=0$ and $E(Z_1Z_2)=0$,

$$E(XY)=a_Xa_Y+b_Xb_Y+c_Xc_Y,$$
$$\mathrm{Cov}(X,Y)=E(XY)-E(X)E(Y)=a_Xa_Y+b_Xb_Y+c_Xc_Y-c_Xc_Y=a_Xa_Y+b_Xb_Y.$$

b. Here it is given that

$$a_X=\sqrt{\frac{1+\rho}{2}}\,\sigma_X,\quad b_X=\sqrt{\frac{1-\rho}{2}}\,\sigma_X,\quad c_X=\mu_X,$$
$$a_Y=\sqrt{\frac{1+\rho}{2}}\,\sigma_Y,\quad b_Y=-\sqrt{\frac{1-\rho}{2}}\,\sigma_Y,\quad c_Y=\mu_Y,$$

where $\mu_X,\sigma_X,\mu_Y,\sigma_Y,\rho$ are constants with $-1\le\rho\le1$. Then

$$E(X)=c_X=\mu_X,\qquad E(Y)=c_Y=\mu_Y,$$
$$\mathrm{Var}(X)=a_X^2+b_X^2=\sigma_X^2\left(\frac{1+\rho}{2}+\frac{1-\rho}{2}\right)=\sigma_X^2,\qquad\mathrm{Var}(Y)=\sigma_Y^2\left(\frac{1+\rho}{2}+\frac{1-\rho}{2}\right)=\sigma_Y^2,$$
$$\mathrm{Cov}(X,Y)=a_Xa_Y+b_Xb_Y=\frac{1+\rho}{2}\sigma_X\sigma_Y-\frac{1-\rho}{2}\sigma_X\sigma_Y=\rho\,\sigma_X\sigma_Y,$$
$$\mathrm{Corr}(X,Y)=\frac{\mathrm{Cov}(X,Y)}{\sqrt{\mathrm{Var}(X)\mathrm{Var}(Y)}}=\frac{\rho\,\sigma_X\sigma_Y}{\sqrt{\sigma_X^2\sigma_Y^2}}=\rho.$$

c. $X\sim N(\mu_X,\sigma_X^2)$, $Y\sim N(\mu_Y,\sigma_Y^2)$, $\mathrm{Corr}(X,Y)=\rho$. The pair is generated from two independent unit normal variables $Z_1,Z_2$ by the transformation

$$X=\sigma_XZ_1+\mu_X,\qquad Y=\sigma_Y\left[\rho Z_1+\sqrt{1-\rho^2}\,Z_2\right]+\mu_Y.$$

Inverting,

$$Z_1=\frac{X-\mu_X}{\sigma_X},\qquad Z_2=\frac{1}{\sqrt{1-\rho^2}}\left[\frac{Y-\mu_Y}{\sigma_Y}-\rho\,\frac{X-\mu_X}{\sigma_X}\right].$$

The Jacobian of the transformation is

$$J=\det\left[\frac{\partial(z_1,z_2)}{\partial(x,y)}\right]=\det\begin{bmatrix}\dfrac{1}{\sigma_X}&0\\[6pt]-\dfrac{\rho}{\sigma_X\sqrt{1-\rho^2}}&\dfrac{1}{\sigma_Y\sqrt{1-\rho^2}}\end{bmatrix}=\frac{1}{\sigma_X\sigma_Y\sqrt{1-\rho^2}}.$$

Therefore the joint distribution of $X$ and $Y$ is given by

$$f(x,y)=f(z_1,z_2)\,|J|=\frac{1}{2\pi}\exp\left[-\frac{1}{2}\left(z_1^2+z_2^2\right)\right]\cdot\frac{1}{\sigma_X\sigma_Y\sqrt{1-\rho^2}}$$

$$=\frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}}\exp\left[-\frac{1}{2}\left\{\left(\frac{x-\mu_X}{\sigma_X}\right)^2+\frac{1}{1-\rho^2}\left(\frac{y-\mu_Y}{\sigma_Y}-\rho\,\frac{x-\mu_X}{\sigma_X}\right)^2\right\}\right]$$

$$=\frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}}\exp\left[-\frac{1}{2(1-\rho^2)}\left\{\left(\frac{x-\mu_X}{\sigma_X}\right)^2+\left(\frac{y-\mu_Y}{\sigma_Y}\right)^2-2\rho\left(\frac{x-\mu_X}{\sigma_X}\right)\left(\frac{y-\mu_Y}{\sigma_Y}\right)\right\}\right],$$

which is the pdf of a bivariate normal distribution. Hence the joint distribution of $(X,Y)$ is bivariate normal with parameters $\mu_X,\mu_Y,\sigma_X,\sigma_Y,\rho$.
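The transformation in part (c) is easy to check empirically: draws built from two independent unit normals should reproduce the stated means, variances, and correlation. The parameter values below are illustrative choices.

```python
# Empirical check of the part (c) construction of correlated normals.
import math
import random

random.seed(2)
mu_x, mu_y, s_x, s_y, rho = 3.0, -1.0, 2.0, 0.5, -0.7
n = 200_000
xs, ys = [], []
for _ in range(n):
    z1 = random.gauss(0.0, 1.0)
    z2 = random.gauss(0.0, 1.0)
    xs.append(s_x * z1 + mu_x)
    ys.append(s_y * (rho * z1 + math.sqrt(1 - rho**2) * z2) + mu_y)

mx = sum(xs) / n
my = sum(ys) / n
vx = sum((x - mx) ** 2 for x in xs) / n
vy = sum((y - my) ** 2 for y in ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
corr = cov / math.sqrt(vx * vy)
```

The sample moments land close to $\mu_X=3$, $\mu_Y=-1$, $\sigma_X^2=4$, $\sigma_Y^2=0.25$, and $\rho=-0.7$.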
Answer 4
$X_i$'s are iid $\exp(\theta)$ random variables, $i=1,2,\ldots,n$; $Y_j$'s are iid $\exp(\theta)$ random variables, $j=1,2,\ldots,m$; the $X_i$'s and $Y_j$'s are independent. Then

$$E(X_i)=E(Y_j)=\theta,\qquad\mathrm{Var}(X_i)=\mathrm{Var}(Y_j)=\theta^2,$$
$$E(\bar X)=E(\bar Y)=\theta,\qquad\mathrm{Var}(\bar X)=\frac{1}{n^2}\sum_{i=1}^n\mathrm{Var}(X_i)=\frac{\theta^2}{n},\qquad\mathrm{Var}(\bar Y)=\frac{1}{m^2}\sum_{j=1}^m\mathrm{Var}(Y_j)=\frac{\theta^2}{m}.$$

a. Now $T_\alpha=\alpha\bar X+(1-\alpha)\bar Y$, so

$$E(T_\alpha)=\alpha E(\bar X)+(1-\alpha)E(\bar Y)=\alpha\theta+(1-\alpha)\theta=\theta.$$

Since $\bar X$ and $\bar Y$ are independent,

$$\mathrm{Var}(T_\alpha)=\alpha^2\mathrm{Var}(\bar X)+(1-\alpha)^2\mathrm{Var}(\bar Y)=\alpha^2\frac{\theta^2}{n}+(1-\alpha)^2\frac{\theta^2}{m}=\theta^2\left[\alpha^2\left(\frac{1}{m}+\frac{1}{n}\right)+\frac{1-2\alpha}{m}\right].$$

b. By Chebyshev's inequality, for every $\varepsilon>0$,

$$P\left[\left|T_\alpha-\theta\right|>\varepsilon\right]\le\frac{\mathrm{Var}(T_\alpha)}{\varepsilon^2}=\frac{\theta^2\left[\alpha^2\left(\frac{1}{m}+\frac{1}{n}\right)+\frac{1-2\alpha}{m}\right]}{\varepsilon^2}.$$

As $m,n\to\infty$ the bound on the right-hand side tends to $0$, so $P\left[\left|T_\alpha-\theta\right|>\varepsilon\right]\to0$; that is, $T_\alpha$ converges in probability to $\theta$.
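The shrinking of the Chebyshev bound in part (b) can be made concrete numerically; the values of $\theta$, $\alpha$, and $\varepsilon$ below are illustrative choices.

```python
# Evaluating the Chebyshev bound Var(T_alpha)/eps^2 from part (b).
def cheb_bound(theta, alpha, n, m, eps):
    # Var(T_alpha) = theta^2 * [alpha^2 (1/m + 1/n) + (1 - 2*alpha)/m]
    var = theta**2 * (alpha**2 * (1 / m + 1 / n) + (1 - 2 * alpha) / m)
    return var / eps**2

theta, alpha, eps = 2.0, 0.4, 0.1
b_small = cheb_bound(theta, alpha, 10, 10, eps)          # = 20.8 (vacuous)
b_large = cheb_bound(theta, alpha, 10_000, 10_000, eps)  # = 0.0208
```

At small $n=m$ the bound exceeds 1 and carries no information; scaling $n$ and $m$ by 1000 shrinks it by the same factor, illustrating the convergence in probability.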
Answer 5
$X_1,X_2,\ldots,X_n$ are iid $U(0,1)$ random variables:

$$f(x_i)=1,\quad 0<x_i<1;\qquad f(x_i)=0\ \text{otherwise}.$$

a. Let $Y=\ln X_1$. Then $x_1=e^{y}$, $-\infty<y<0$, and the Jacobian of the transformation is $|J|=\left|\frac{\partial x_1}{\partial y}\right|=e^{y}$, so the pdf of $Y$ is

$$f(y)=e^{y},\qquad-\infty<y<0.$$

$$E(Y)=\int_{-\infty}^{0}ye^{y}\,dy=\left[ye^{y}-e^{y}\right]_{-\infty}^{0}=0-1=-1,$$
$$E(Y^2)=\int_{-\infty}^{0}y^2e^{y}\,dy=\left[y^2e^{y}-2ye^{y}+2e^{y}\right]_{-\infty}^{0}=2,$$
$$\mathrm{Var}(Y)=E(Y^2)-E^2(Y)=2-1=1.$$

b. Since $X_i\sim U(0,1)$, $-2\ln X_i\sim\chi_2^2$, so $-2\sum\ln X_i\sim\chi_{2n}^2$, with mean $2n$ and variance $4n$. Equivalently, $\sum\ln X_i$ has mean $-n$ and variance $n$, so by the CLT

$$\frac{\sum_{i=1}^n\ln X_i+n}{\sqrt n}\xrightarrow{d}N(0,1)\quad\text{as }n\to\infty.$$

Since $\left(X_1X_2\cdots X_n\right)^{n^{-1/2}}e^{n^{1/2}}=\exp\left[\dfrac{\sum\ln X_i+n}{\sqrt n}\right]$,

$$\lim_{n\to\infty}P\left(a\le\left(X_1X_2\cdots X_n\right)^{n^{-1/2}}e^{n^{1/2}}\le b\right)=\lim_{n\to\infty}P\left(\ln a\le\frac{\sum\ln X_i+n}{\sqrt n}\le\ln b\right)=\Phi(\ln b)-\Phi(\ln a).$$
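The limiting probability in part (b) can be checked by simulation. The endpoints $a,b$ and the values of $n$ and the replication count below are illustrative choices.

```python
# Simulation check: (X1*...*Xn)^(1/sqrt(n)) * e^sqrt(n) equals
# exp[(sum(ln Xi) + n)/sqrt(n)], which is asymptotically lognormal.
import math
import random

random.seed(3)

def phi(x):
    # Standard normal CDF written via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

a, b, n, reps = 0.5, 2.0, 200, 40_000
hits = 0
for _ in range(reps):
    log_prod = sum(math.log(random.random()) for _ in range(n))
    w = math.exp(log_prod / math.sqrt(n) + math.sqrt(n))
    if a <= w <= b:
        hits += 1

empirical = hits / reps
limit = phi(math.log(b)) - phi(math.log(a))  # Phi(ln 2) - Phi(ln 0.5)
```

With $a=1/2$, $b=2$ the limit is $\Phi(\ln2)-\Phi(-\ln2)\approx0.512$, and the empirical frequency at $n=200$ already sits close to it.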
Answer 6
Let $X$ be a random variable with pdf

$$f(x;\theta)=\theta x^{\theta-1},\qquad 0\le x\le1,\ 0<\theta<\infty.$$

a. Let $(X_1,X_2,\ldots,X_n)$ be an iid random sample of size $n$ drawn from the population with pdf $f(x;\theta)$. If $X_1,X_2,\ldots,X_n$ take values $x_1,x_2,\ldots,x_n$, the likelihood function can be written as

$$L(\theta\mid x)=\prod_{i=1}^{n}\theta x_i^{\theta-1}=\theta^n\left(\prod_{i=1}^{n}x_i\right)^{\theta-1}\qquad(1)$$

for $x=(x_1,x_2,\ldots,x_n)$, $0\le x_i\le1$, $0<\theta<\infty$. Taking the natural logarithm on both sides of (1),

$$\ln L(\theta\mid x)=n\ln\theta+(\theta-1)\sum_{i=1}^{n}\ln x_i.$$

The MLE of $\theta$ is obtained by solving $\dfrac{\partial\ln L(\theta\mid x)}{\partial\theta}=0$:

$$\frac{n}{\theta}+\sum_{i=1}^{n}\ln x_i=0\quad\Rightarrow\quad\theta=-\frac{n}{\sum_{i=1}^{n}\ln x_i}.$$

Hence the MLE of $\theta$ is

$$\hat\theta=-\frac{n}{\sum_{i=1}^{n}\ln X_i}.$$

b. The observed random sample is 0.55, 0.88, 0.43, 0.78, 0.66:

$$\sum\ln x_i=\ln(0.55)+\ln(0.88)+\ln(0.43)+\ln(0.78)+\ln(0.66)=-2.23,\qquad n=5,$$
$$\hat\theta=-\frac{5}{(-2.23)}\approx2.24.$$

c. $X_1$ is a random variable with pdf $f(x_1;\theta)=\theta x_1^{\theta-1}$, $0\le x_1\le1$. Let $Y=-\ln X_1$, so $x_1=e^{-y}$ with range $0<y<\infty$. The Jacobian of the transformation is $|J|=\left|\frac{\partial x_1}{\partial y}\right|=e^{-y}$, so the pdf of $Y$ is

$$f(y)=f_X\!\left(e^{-y}\right)|J|=\theta\left(e^{-y}\right)^{\theta-1}\cdot e^{-y}=\theta e^{-\theta y},\qquad y>0,$$

which is nothing but the pdf of $\exp\!\left(\frac{1}{\theta}\right)$. Hence $Y=-\ln X_1\sim\exp\!\left(\frac{1}{\theta}\right)$.

d. For $Y\sim\exp\!\left(\frac1\theta\right)$, the MGF of $Y$ is

$$M_Y(t)=E\left[e^{tY}\right]=\int_0^\infty e^{ty}\,\theta e^{-\theta y}\,dy=\theta\left[\frac{-e^{-(\theta-t)y}}{\theta-t}\right]_0^\infty=\frac{\theta}{\theta-t},\qquad t<\theta.$$

Let $Y_1,Y_2,\ldots,Y_n$ be a random sample from $\exp(1/\theta)$ and let $S=\sum_{i=1}^nY_i$. Since $Y_1,\ldots,Y_n$ are independent,

$$M_S(t)=E\left(e^{tS}\right)=E\left(e^{tY_1}\right)E\left(e^{tY_2}\right)\cdots E\left(e^{tY_n}\right)=\prod_{i=1}^nM_{Y_i}(t)=\left(\frac{\theta}{\theta-t}\right)^n,$$

which is nothing but the MGF of a Gamma distribution with parameters $\left(n,\frac1\theta\right)$. Since the MGF uniquely determines a distribution, it can be concluded that

$$S=-\sum_{i=1}^n\ln X_i\sim\mathrm{Gamma}\!\left(n,\frac1\theta\right).$$
e. $\hat\theta=\dfrac{n}{S}$, where $S=-\sum\ln X_i\sim\mathrm{Gamma}(n,1/\theta)$ with pdf $\frac{\theta^n}{\Gamma(n)}s^{n-1}e^{-\theta s}$. From the MGF,

$$E(S)=\frac{\partial}{\partial t}\left[\left(1-\frac{t}{\theta}\right)^{-n}\right]_{t=0}=\left[\frac{n}{\theta}\left(1-\frac{t}{\theta}\right)^{-n-1}\right]_{t=0}=\frac{n}{\theta}.$$

Note, however, that $E\left(\frac nS\right)\ne\frac{n}{E(S)}$; the expectation must be computed directly. For $n>1$,

$$E\left(\frac1S\right)=\int_0^\infty\frac1s\cdot\frac{\theta^n}{\Gamma(n)}s^{n-1}e^{-\theta s}\,ds=\frac{\theta^n}{\Gamma(n)}\cdot\frac{\Gamma(n-1)}{\theta^{n-1}}=\frac{\theta}{n-1},$$

so

$$E(\hat\theta)=nE\left(\frac1S\right)=\frac{n\theta}{n-1}\ne\theta.$$

Thus $\hat\theta$ is a biased (though asymptotically unbiased) estimator of $\theta$; the rescaled estimator $\frac{n-1}{n}\hat\theta=-\frac{n-1}{\sum\ln X_i}$ is exactly unbiased.
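The numerical estimate in part (b) can be reproduced in a few lines; the script below is illustrative and uses only the five observations given in the assignment.

```python
# Reproducing part (b): MLE theta_hat = -n / sum(ln x_i).
import math

data = [0.55, 0.88, 0.43, 0.78, 0.66]
log_sum = sum(math.log(x) for x in data)  # approx -2.2336
theta_hat = -len(data) / log_sum          # approx 2.24
```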
Answer 7
a. The joint pdf of $(X_1,X_2,\ldots,X_n)$ is given by

$$L(x;\theta)=\prod_{i=1}^n\frac{2x_i}{\theta^2}=\frac{2^n\prod_{i=1}^nx_i}{\theta^{2n}}.$$

Since $0<x_i<\theta$ for all $i$, we need $\theta\ge X_{(n)}=\max(X_1,X_2,\ldots,X_n)$, and $L$ is a decreasing function of $\theta$. Hence $L$ is maximized at the smallest admissible value of $\theta$, and the MLE of $\theta$ is $\hat\theta=X_{(n)}$.

b. $$F_X(x)=\int_0^x\frac{2t}{\theta^2}\,dt=\frac{x^2}{\theta^2},\qquad 0<x<\theta,$$
$$F_{X_{(n)}}(x)=\left(\frac{x^2}{\theta^2}\right)^n=\frac{x^{2n}}{\theta^{2n}},\qquad f_{X_{(n)}}(x)=\frac{2nx^{2n-1}}{\theta^{2n}},\qquad 0<x<\theta.$$

$$E(c\hat\theta)=cE(\hat\theta)=c\int_0^\theta x\cdot\frac{2nx^{2n-1}}{\theta^{2n}}\,dx=\frac{2nc}{2n+1}\theta.$$

Setting this equal to $\theta$ gives

$$c=\frac{2n+1}{2n}.$$

c. The median $m$ of the distribution solves $F(m)=\frac12$:

$$\frac{m^2}{\theta^2}=\frac12\quad\Rightarrow\quad m=\frac{\theta}{\sqrt2}.$$

Therefore, by the invariance property of MLEs, the MLE of the median of the distribution is $\dfrac{\hat\theta}{\sqrt2}=\dfrac{X_{(n)}}{\sqrt2}$.
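The bias correction in part (b) can be verified by simulation. Samples from $f(x)=2x/\theta^2$ are drawn via the inverse CDF, $x=\theta\sqrt u$ for $u\sim U(0,1)$; the values of $\theta$ and $n$ below are illustrative choices.

```python
# Simulation check of part (b): E[X_(n)] = 2n/(2n+1) * theta, so
# c = (2n+1)/(2n) makes c * X_(n) unbiased for theta.
import math
import random

random.seed(4)
theta, n, reps = 3.0, 4, 100_000
c = (2 * n + 1) / (2 * n)
acc = 0.0
for _ in range(reps):
    # Inverse-CDF sampling: F(x) = x^2/theta^2  =>  x = theta * sqrt(u).
    x_max = max(theta * math.sqrt(random.random()) for _ in range(n))
    acc += c * x_max
mean_est = acc / reps  # should be close to theta
```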
Answer 8
a. Let $(X_1,X_2,\ldots,X_n)$ be an iid random sample from $P(\lambda)$:

$$f(x_i;\lambda)=\frac{e^{-\lambda}\lambda^{x_i}}{x_i!},\qquad x_i=0,1,2,\ldots,\ i=1,2,\ldots,n,\ \lambda>0.$$

The joint distribution is given by

$$L(\lambda;x)=\prod_{i=1}^n\frac{e^{-\lambda}\lambda^{x_i}}{x_i!}=\frac{e^{-n\lambda}\,\lambda^{\sum_{i=1}^nx_i}}{\prod_{i=1}^nx_i!},\qquad x=(x_1,x_2,\ldots,x_n),$$

$$\ln L(x;\lambda)=-n\lambda+\left(\sum_{i=1}^nx_i\right)\ln\lambda-\sum_{i=1}^n\ln(x_i!).$$

The MLE is obtained by solving $\dfrac{\partial\ln L(x;\lambda)}{\partial\lambda}=0$:

$$-n+\frac{1}{\lambda}\sum_{i=1}^nx_i=0\quad\Rightarrow\quad\lambda=\frac{\sum x_i}{n}=\bar x.$$

Hence the MLE of $\lambda$ is $\hat\lambda=\bar X$.

b. $X$ has a Poisson distribution with mean $\lambda$. Here $n=50$ and

$$\bar x=\frac{1}{50}\left[3\times0+5\times1+5\times2+8\times3+12\times4+9\times5+8\times6\right]=\frac{180}{50}=3.6,$$

so $\hat\lambda=3.6$.
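The tally in part (b) can be reproduced directly from the frequency table (value: count); the snippet below is illustrative.

```python
# Reproducing part (b): Poisson MLE lambda_hat = x_bar from the table.
freq = {0: 3, 1: 5, 2: 5, 3: 8, 4: 12, 5: 9, 6: 8}
n = sum(freq.values())                       # 50 observations
total = sum(k * c for k, c in freq.items())  # sum of all observations = 180
lam_hat = total / n                          # 3.6
```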
Answer 9
a. $$f(x;\theta)=\frac{\theta^4}{6}x^3e^{-\theta x},\qquad 0<x<\infty,\ 0<\theta<\infty.$$

The joint pdf is

$$L(\theta)=\prod_{i=1}^n\frac{\theta^4}{6}x_i^3e^{-\theta x_i}=\frac{\theta^{4n}}{6^n}\left(\prod_{i=1}^nx_i^3\right)e^{-\theta\sum_{i=1}^nx_i},$$

$$\ln L(\theta)=4n\ln\theta-n\ln6+3\sum_{i=1}^n\ln x_i-\theta\sum_{i=1}^nx_i.$$

The MLE of $\theta$ is obtained by solving $\dfrac{\partial\ln L(\theta)}{\partial\theta}=0$:

$$\frac{4n}{\theta}-\sum_{i=1}^nx_i=0\quad\Rightarrow\quad\theta=\frac{4n}{\sum_{i=1}^nx_i}=\frac{4}{\frac1n\sum x_i}.$$

Hence the MLE of $\theta$ is $\hat\theta=\dfrac{4}{\bar X}$.
b. When $\theta=1$, $f(x;\theta)=1$ for $0<x<1$; when $\theta=\frac12$, $f(x;\theta)=\frac{1}{2\sqrt x}$ for $0<x<1$. Note that the MLE of $\theta$ would have to be a value satisfying $\theta>x_i$ for $i=1,2,\ldots,n$ which maximizes the likelihood. The only candidate is $\max(x_1,\ldots,x_n)$, but $\theta$ must be strictly greater than each observed $x_i$, so the supremum of the likelihood is not attained. Therefore the MLE does not exist in either case.

c. $$f(x;\theta)=\theta,\qquad 0\le x\le\frac1\theta.$$

From the pdf it can be seen that $x_i\le\frac1\theta$ for all $i=1,2,\ldots,n$, i.e. $\theta\le\frac{1}{x_i}$ for all $i$, hence

$$\theta\le\frac{1}{\max(x_1,x_2,\ldots,x_n)}=\frac{1}{x_{(n)}}.$$

The likelihood $L(\theta)=\theta^n$ is increasing in $\theta$, so the MLE of $\theta$ is $\hat\theta=\dfrac{1}{X_{(n)}}$.
Answer 10
$X_1,X_2,\ldots,X_n$ are iid random variables with pdf

$$f(x;\theta)=\frac1\theta,\qquad 0\le x\le\theta.$$

The first-order raw population moment can be obtained as

$$\mu_1'=E(X)=\int_0^\theta\frac{x}{\theta}\,dx=\frac1\theta\left[\frac{x^2}{2}\right]_0^\theta=\frac1\theta\cdot\frac{\theta^2}{2}=\frac\theta2.$$

Let $m_1'$ be the first sample moment. The moment estimator of a parameter is the value satisfying the moment equation

$$E(X^r)=\mu_r'=m_r'=\frac1n\sum_{i=1}^nX_i^r,\qquad r=1,2,\ldots,k.$$

Therefore, equating the population moment with the sample moment,

$$\mu_1'=m_1'\quad\Rightarrow\quad\frac{\hat\theta}{2}=\frac1n\sum_{i=1}^nX_i=\bar X\quad\Rightarrow\quad\hat\theta=2\bar X.$$

$$E(\hat\theta)=2E(\bar X)=2\cdot\frac1n\sum_{i=1}^nE(X_i)=2\cdot\frac1n\cdot n\cdot\frac\theta2=\theta.$$

Now $\mathrm{Var}(X_i)=E(X_i^2)-E^2(X_i)=\dfrac{\theta^2}{3}-\dfrac{\theta^2}{4}=\dfrac{\theta^2}{12}$, so

$$\mathrm{Var}(\hat\theta)=4\,\mathrm{Var}(\bar X)=\frac{4}{n^2}\sum_{i=1}^n\mathrm{Var}(X_i)=\frac{4}{n^2}\cdot n\cdot\frac{\theta^2}{12}=\frac{\theta^2}{3n}.$$

The joint pdf of $(X_1,X_2,\ldots,X_n)$ is given by

$$L(x;\theta)=\frac{1}{\theta^n},\qquad 0\le x_i\le\theta,\ \theta>0.$$

It can be observed that the MLE of $\theta$ is a value for which $x_i\le\theta$, $i=1,2,\ldots,n$, and which maximizes $\frac{1}{\theta^n}$. Since $\frac{1}{\theta^n}$ is a decreasing function of $\theta$, the MLE is the smallest possible value of $\theta$ for which $x_i\le\theta$. Hence the MLE of $\theta$ is $\hat\theta=\max(X_1,X_2,\ldots,X_n)=X_{(n)}$.

The pdf of $X_{(n)}$ is given by

$$f_{X_{(n)}}(x)=nf(x)F(x)^{n-1}=\frac{nx^{n-1}}{\theta^n},\qquad 0\le x\le\theta,$$

$$E\left(X_{(n)}\right)=\int_0^\theta x\cdot\frac{nx^{n-1}}{\theta^n}\,dx=\frac{n}{\theta^n}\left[\frac{x^{n+1}}{n+1}\right]_0^\theta=\frac{n\theta}{n+1},$$

$$E\left(X_{(n)}^2\right)=\frac{n\theta^2}{n+2},$$

$$\mathrm{Var}\left(X_{(n)}\right)=\frac{n\theta^2}{n+2}-\left(\frac{n\theta}{n+1}\right)^2=n\theta^2\left(\frac{1}{n+2}-\frac{n}{(n+1)^2}\right)=\frac{n\theta^2}{(n+1)^2(n+2)}.$$
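The two estimators derived above can be compared numerically using these formulas; the values of $\theta$ and $n$ below are illustrative choices.

```python
# Comparing the moment estimator 2*X_bar with the MLE X_(n) for U(0, theta).
theta, n = 5.0, 20
var_mme = theta**2 / (3 * n)                          # Var(2*X_bar), unbiased
var_mle = n * theta**2 / ((n + 1) ** 2 * (n + 2))     # Var(X_(n))
bias_mle = n * theta / (n + 1) - theta                # E(X_(n)) - theta
mse_mle = var_mle + bias_mle**2                       # MSE of X_(n)
```

Even accounting for its bias, $X_{(n)}$ has far smaller mean squared error than the unbiased moment estimator at this sample size.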
Answer 11
a. The pdf of $X$ is defined as

$$f(x;\theta)=\begin{cases}\dfrac{4x}{\theta^2},&0<x<\dfrac\theta2,\\[6pt]-\dfrac{4x}{\theta^2}+\dfrac4\theta,&\dfrac\theta2<x<\theta,\\[6pt]0,&\text{otherwise}.\end{cases}$$

The first-order raw population moment can be obtained as

$$\mu_1'=E(X)=\int_0^{\theta/2}x\cdot\frac{4x}{\theta^2}\,dx+\int_{\theta/2}^{\theta}x\left(-\frac{4x}{\theta^2}+\frac4\theta\right)dx$$

$$=\frac{4}{\theta^2}\left[\frac{x^3}{3}\right]_0^{\theta/2}-\frac{4}{\theta^2}\left[\frac{x^3}{3}\right]_{\theta/2}^{\theta}+\frac4\theta\left[\frac{x^2}{2}\right]_{\theta/2}^{\theta}$$

$$=\frac{4}{\theta^2}\cdot\frac{\theta^3}{24}-\frac{4}{\theta^2}\left(\frac{\theta^3}{3}-\frac{\theta^3}{24}\right)+\frac4\theta\left(\frac{\theta^2}{2}-\frac{\theta^2}{8}\right)=\frac\theta6-\frac{7\theta}{6}+\frac{3\theta}{2}=\frac\theta2.$$

Let $m_1'$ be the first sample moment. The moment estimator of a parameter is the value satisfying the moment equation

$$E(X^r)=\mu_r'=m_r'=\frac1n\sum_{i=1}^nX_i^r,\qquad r=1,2,\ldots,k.$$

Therefore, equating the population moment with the sample moment,

$$\mu_1'=m_1'\quad\Rightarrow\quad\frac{\hat\theta}{2}=\frac1n\sum_{i=1}^nX_i=\bar X\quad\Rightarrow\quad\hat\theta=2\bar X.$$

b. The sample observations of $X$ are 0.3209, 0.2412, 0.2557, 0.3544, 0.4168, 0.5621, 0.0230, 0.5442, 0.4552 and 0.5592:

$$\sum x_i=3.7327,\qquad n=10,\qquad\bar x=0.3733.$$

Therefore, the point estimate of $\theta$ is $\hat\theta=2\times0.3733\approx0.7466$.
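The estimate in part (b) can be reproduced from the ten observations given in the assignment:

```python
# Reproducing part (b): moment estimate theta_hat = 2 * x_bar.
data = [0.3209, 0.2412, 0.2557, 0.3544, 0.4168,
        0.5621, 0.0230, 0.5442, 0.4552, 0.5592]
x_bar = sum(data) / len(data)  # approx 0.3733
theta_hat = 2 * x_bar          # approx 0.7465
```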
Answer 12

$X_1,X_2,\ldots,X_n$ are iid random variables from an exponential distribution with unknown mean $\theta$. By a standard property of the exponential distribution,

$$Y=\sum_{i=1}^nX_i\sim\mathrm{Gamma}(n,\theta),\qquad f(y)=\frac{y^{n-1}e^{-y/\theta}}{\theta^n\Gamma(n)},\quad y>0.$$

a. Let $W=\frac2\theta Y$, so $y=\frac\theta2w$, the Jacobian of the transformation is $J=\frac\theta2$, and the range of $w$ is $(0,\infty)$. The pdf of $W$ is

$$f(w)=\left(\frac\theta2w\right)^{n-1}\frac{e^{-w/2}}{\theta^n\Gamma(n)}\cdot\frac\theta2=\frac{1}{2^n\Gamma(n)}\,w^{n-1}e^{-w/2},$$

which is nothing but the pdf of $\chi^2$ with $2n$ degrees of freedom. Hence

$$W=\frac2\theta\sum_{i=1}^nX_i\sim\chi_{2n}^2.$$
b. Since $W\sim\chi_{2n}^2$, $E(W)=2n$ and $\mathrm{Var}(W)=2\cdot2n=4n$. The approximate $100(1-\alpha)\%$ confidence interval for $\theta$ is then

$$\left(\bar x-z_{\alpha/2}\sqrt{\frac{\mathrm{Var}(W)}{n}},\ \bar x+z_{\alpha/2}\sqrt{\frac{\mathrm{Var}(W)}{n}}\right)=\left(\bar x-2z_{\alpha/2},\ \bar x+2z_{\alpha/2}\right).$$

c. $\bar x=65.2$, $n=8$, $z_{\alpha/2}=z_{0.05}=1.64$. The $90\%$ confidence interval for $\theta$ is

$$\left(65.2\pm1.64\times2\right)=(61.92,\ 68.48).$$
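The interval in part (c) can be reproduced with the recipe used above; the snippet follows the document's normal-approximation formula, in which the margin $z_{\alpha/2}\sqrt{\mathrm{Var}(W)/n}=2z_{\alpha/2}$ does not depend on $n$.

```python
# Reproducing part (c) with the formula used above: Var(W) = 4n.
import math

x_bar, n, z = 65.2, 8, 1.64
margin = z * math.sqrt(4 * n / n)       # = 2 * z = 3.28
ci = (x_bar - margin, x_bar + margin)   # (61.92, 68.48)
```

An exact pivotal interval would instead invert $W=2\sum X_i/\theta\sim\chi^2_{2n}$ using chi-square quantiles, which generally gives an asymmetric interval.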
Answer 13
a. The pdf of $X_i$ is

$$f(x_i)=\frac\alpha\beta\left(\frac{x_i}{\beta}\right)^{\alpha-1}=\frac{\alpha x_i^{\alpha-1}}{\beta^\alpha},\qquad 0<x_i<\beta.$$

The likelihood is

$$L(\alpha,\beta)=\prod_{i=1}^nf(x_i)=\frac{\alpha^n}{\beta^n}\cdot\frac{\prod_{i=1}^nx_i^{\alpha-1}}{\beta^{(\alpha-1)n}}=\frac{\alpha^n}{\beta^{\alpha n}}\prod_{i=1}^nx_i^{\alpha-1},$$

$$\ln L(\alpha,\beta)=n\ln\alpha-n\alpha\ln\beta+(\alpha-1)\sum_{i=1}^n\ln x_i.$$

The MLEs can be obtained by solving $\dfrac{\partial\ln L}{\partial\alpha}=0$:

$$\frac n\alpha-n\ln\beta+\sum_{i=1}^n\ln x_i=0\quad\Rightarrow\quad\hat\alpha=\frac{n}{n\ln\beta-\sum_{i=1}^n\ln X_i}.$$

Note that $x_i\le\beta$ for all $i=1,2,\ldots,n$, i.e. $x_{(n)}=\max(x_1,\ldots,x_n)\le\beta$, and $L$ is decreasing in $\beta$. Hence the MLE of $\beta$ is $\hat\beta=X_{(n)}$, and the MLE of $\alpha$ is

$$\hat\alpha=\frac{n}{n\ln\hat\beta-\sum_{i=1}^n\ln X_i}.$$

b. $\sum\ln x_i=34.95$, $n=14$, $\hat\beta=26.0$, $\ln\hat\beta=3.26$:

$$\hat\alpha=\frac{14}{14\times3.26-34.95}=\frac{14}{10.69}\approx1.31.$$

c. $$P\left[X_{(n)}\le x\right]=\prod_{i=1}^nP(X_i\le x)=\left(\frac x\beta\right)^{n\alpha_0},\qquad 0<x<\beta.$$

The CDF of the pivot quantity $Q(X,\beta)=\dfrac{X_{(n)}}{\beta}$ is

$$P\left[\frac{X_{(n)}}{\beta}\le x\right]=P\left(X_{(n)}\le\beta x\right)=x^{n\alpha_0},\qquad 0<x<1,$$

which is free of $\beta$. For a $95\%$ confidence interval it is required to calculate $b$ such that

$$P\left(b<\frac{X_{(n)}}{\beta}\le1\right)=0.95,$$

where

$$P\left[\frac{X_{(n)}}{\beta}>b\right]=1-P\left[\frac{X_{(n)}}{\beta}\le b\right]=1-b^{n\alpha_0}=0.95\quad\Rightarrow\quad b^{n\alpha_0}=0.05\quad\Rightarrow\quad b=(0.05)^{\frac{1}{n\alpha_0}}.$$

d. The event $b<\dfrac{X_{(n)}}{\beta}\le1$ is equivalent to $x_{(n)}\le\beta<\dfrac{x_{(n)}}{b}$, so the $95\%$ confidence interval for $\beta$ is

$$\left[x_{(n)},\ \frac{x_{(n)}}{(0.05)^{1/(n\alpha_0)}}\right].$$

Here $\alpha_0=1.31$, $\hat\beta=x_{(n)}=26.0$, $n=14$, so the $95\%$ confidence interval for $\beta$ is $(26.0,\ 30.61)$.
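Parts (b)-(d) can be reproduced from the numbers given in the assignment:

```python
# Reproducing parts (b)-(d): MLEs and the 95% pivot interval for beta.
import math

n, sum_log = 14, 34.95
beta_hat = 26.0                                       # = x_(n)
alpha_hat = n / (n * math.log(beta_hat) - sum_log)    # approx 1.31
b = 0.05 ** (1.0 / (n * alpha_hat))                   # lower pivot quantile
ci = (beta_hat, beta_hat / b)                         # approx (26.0, 30.6)
```

The slight difference from 30.61 in the text comes from carrying $\hat\alpha$ at full precision instead of rounding to 1.31 first.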
Answer 14
a. $Y$ follows an exponential distribution with mean $\lambda$. Given that $X=\theta_1+\theta_2Y$,

$$Y=\frac{X-\theta_1}{\theta_2},\qquad 0<y<\infty\iff\theta_1<x<\infty.$$

The Jacobian of the transformation is $J=\dfrac{dy}{dx}=\dfrac{1}{\theta_2}$. Therefore the pdf of $X$ is given by

$$f(x)=\frac1\lambda e^{-\frac{x-\theta_1}{\lambda\theta_2}}\cdot\frac{1}{\theta_2}=\frac{1}{\lambda\theta_2}e^{-\frac{x-\theta_1}{\lambda\theta_2}},\qquad\lambda>0,\ \theta_1<x<\infty.$$

Hence $X$ follows a shifted exponential distribution with location parameter $\theta_1$ and scale parameter $\lambda\theta_2$.

b. $$E(X)=\theta_1+\theta_2E(Y)=\theta_1+\theta_2\lambda,\qquad\mathrm{Var}(X)=\theta_2^2\,\mathrm{Var}(Y)=\theta_2^2\lambda^2,$$
$$E(X^2)=\theta_2^2\lambda^2+(\theta_1+\theta_2\lambda)^2.$$

Let $m_1',m_2'$ be the sample raw moments. The MMEs of $\theta_1,\theta_2$ are obtained by equating population moments with sample moments, $E(X^r)=\mu_r'=m_r'=\frac1n\sum_{i=1}^nX_i^r$:

$$E(X)=\hat\theta_1+\hat\theta_2\lambda=\frac1n\sum_{i=1}^nX_i=\bar X,$$
$$E(X^2)=\hat\theta_2^2\lambda^2+\left(\hat\theta_1+\hat\theta_2\lambda\right)^2=\frac1n\sum_{i=1}^nX_i^2\quad\Rightarrow\quad\hat\theta_2^2\lambda^2=\frac1n\sum_{i=1}^nX_i^2-\bar X^2.$$

Hence

$$\hat\theta_2=\frac1\lambda\sqrt{\frac1n\sum_{i=1}^nX_i^2-\bar X^2},\qquad\hat\theta_1=\bar X-\lambda\hat\theta_2=\bar X-\sqrt{\frac1n\sum_{i=1}^nX_i^2-\bar X^2}.$$

c. $$f(x_i)=\frac{1}{\lambda\theta_2}e^{-\frac{x_i-\theta_1}{\lambda\theta_2}},\qquad\theta_1<x_i<\infty.$$

The joint pdf of $(X_1,X_2,\ldots,X_n)$ is given by

$$L(\theta_1,\theta_2)=\prod_{i=1}^n\frac{1}{\lambda\theta_2}e^{-\frac{x_i-\theta_1}{\lambda\theta_2}}=\left(\frac{1}{\lambda\theta_2}\right)^ne^{-\frac{1}{\lambda\theta_2}\sum_{i=1}^n(x_i-\theta_1)}.$$

When $\theta_2$ is fixed and $\theta_1\le x_{(1)}$, each $x_i-\theta_1\ge0$ and $L(\theta_1,\theta_2)$ is strictly increasing in $\theta_1$. If $\theta_1>x_{(1)}$ with $\theta_2$ fixed, the likelihood is $0$, since the model requires $\theta_1\le x_i$ for $i=1,2,\ldots,n$. Therefore the MLE of $\theta_1$ is $\hat\theta_1=X_{(1)}$.

Now

$$\ln L=-n\ln\lambda-n\ln\theta_2-\frac{1}{\lambda\theta_2}\sum_{i=1}^n(x_i-\theta_1),$$

$$\frac{\partial\ln L}{\partial\theta_2}=0\quad\Rightarrow\quad-\frac{n}{\theta_2}+\frac{1}{\lambda\theta_2^2}\sum_{i=1}^n(x_i-\theta_1)=0\quad\Rightarrow\quad\hat\theta_2=\frac{1}{n\lambda}\sum_{i=1}^n\left(x_i-\hat\theta_1\right).$$
Answer 15
It is given that

$$c=90\%,\qquad\text{margin of error }E=10,\qquad\bar x=1580,\qquad s=58.$$

Since the sample is large, the population standard deviation $\sigma$ can be approximated by the sample standard deviation $s$: $\sigma\approx s=58$.

The formula for the sample size is

$$n=\left(\frac{z_{\alpha/2}\,\sigma}{E}\right)^2.$$

Here $\alpha=0.10$, so $\frac\alpha2=0.05$ and, from the standard normal table, $z_{\alpha/2}=z_{0.05}=1.64$. Therefore, rounding up to the nearest integer,

$$n=\left(\frac{1.64\times58}{10}\right)^2\approx90.5\quad\Rightarrow\quad n=91.$$

Hence the required sample size is 91.
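The calculation above can be reproduced in a couple of lines, with the rounding-up step made explicit:

```python
# Reproducing the sample-size calculation: n = ceil((z * sigma / E)^2).
import math

z, sigma, E = 1.64, 58, 10
raw = (z * sigma / E) ** 2  # approx 90.48
n = math.ceil(raw)          # 91
```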