Solutions for STAT 2006 Assignment 2: Inferential Statistics Problems
Added on 2022/08/23
Homework Assignment
Summary
This document presents a complete solution set for an inferential statistics assignment (STAT 2006). The problems cover the moment-generating function of jointly normal random variables, order statistics of exponential random variables and their independence, properties of the bivariate normal distribution, Chebyshev's inequality and consistency, maximum likelihood and method-of-moments estimation, and pivot-based confidence intervals, with detailed step-by-step solutions to each problem.

Running Head: PROBLEMS ON INFERENTIAL STATISTICS
PROBLEMS ON INFERENTIAL STATISTICS
Name of the Student:
Name of the University:
Author Note:
Answer 1

The pdf of $(X, Y)$ is given by
$$f_{X,Y}(x,y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}}\exp\left[-\frac{1}{2(1-\rho^2)}\left\{\left(\frac{x-\mu_X}{\sigma_X}\right)^2+\left(\frac{y-\mu_Y}{\sigma_Y}\right)^2-2\rho\left(\frac{x-\mu_X}{\sigma_X}\right)\left(\frac{y-\mu_Y}{\sigma_Y}\right)\right\}\right]$$

The MGF of $(X, Y)$ can be obtained as
$$M_{X,Y}(s,t) = E\left[e^{sX+tY}\right] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{sx}e^{ty}\,f_{X,Y}(x,y)\,dx\,dy$$

Let $x = \mu_X+\sigma_X u$ and $y = \sigma_Y\sqrt{1-\rho^2}\,v+\rho\sigma_Y u+\mu_Y$, so that $dx = \sigma_X\,du$ and $dy = \sigma_Y\sqrt{1-\rho^2}\,dv$. Substituting into the exponent,
$$\exp\left[-\frac{1}{2(1-\rho^2)}\left\{u^2+(1-\rho^2)v^2+\rho^2u^2+2\rho\sqrt{1-\rho^2}\,uv-2\rho\sqrt{1-\rho^2}\,uv-2\rho^2u^2\right\}\right] = \exp\left[-\frac{1}{2}\left(u^2+v^2\right)\right]$$

$$\therefore M_{X,Y}(s,t) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{s(\mu_X+\sigma_X u)}\,e^{t(\sigma_Y\sqrt{1-\rho^2}\,v+\rho\sigma_Y u+\mu_Y)}\cdot\frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}}\,e^{-\frac{1}{2}(u^2+v^2)}\cdot\sigma_X\sigma_Y\sqrt{1-\rho^2}\,du\,dv$$
$$\Rightarrow M_{X,Y}(s,t) = e^{s\mu_X+t\mu_Y}\int_{-\infty}^{\infty} e^{(s\sigma_X+\rho\sigma_Y t)u}\,\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}u^2}\,du\cdot\int_{-\infty}^{\infty} e^{t\sigma_Y\sqrt{1-\rho^2}\,v}\,\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}v^2}\,dv$$
Note that each integral is the MGF of a standard normal variable, $E[e^{aZ}] = e^{a^2/2}$.
$$\therefore M_{X,Y}(s,t) = e^{s\mu_X+t\mu_Y}\cdot e^{\frac{(s\sigma_X+\rho\sigma_Y t)^2}{2}}\cdot e^{\frac{t^2\sigma_Y^2(1-\rho^2)}{2}}$$
$$\Rightarrow M_{X,Y}(s,t) = \exp\left[s\mu_X+t\mu_Y+\frac{1}{2}\left(\sigma_X^2 s^2+\sigma_Y^2 t^2+2\rho st\,\sigma_X\sigma_Y\right)\right]$$
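As a numerical sanity check, the closed-form MGF can be compared against a Monte Carlo estimate of $E[e^{sX+tY}]$. The sketch below uses NumPy; the parameter values and small arguments $s, t$ are illustrative choices, not part of the problem.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_x, mu_y, sd_x, sd_y, rho = 1.0, -0.5, 2.0, 1.5, 0.6
s, t = 0.1, 0.2  # small arguments keep the Monte Carlo estimate stable

# Generate (X, Y) via the same construction used in the derivation:
# X = mu_x + sd_x*U, Y = mu_y + sd_y*(rho*U + sqrt(1-rho^2)*V)
u = rng.standard_normal(500_000)
v = rng.standard_normal(500_000)
x = mu_x + sd_x * u
y = mu_y + sd_y * (rho * u + np.sqrt(1 - rho**2) * v)

mc_mgf = np.mean(np.exp(s * x + t * y))
closed_form = np.exp(s*mu_x + t*mu_y
                     + 0.5*(sd_x**2*s**2 + sd_y**2*t**2 + 2*rho*s*t*sd_x*sd_y))
print(mc_mgf, closed_form)
```

With these values the two numbers agree to well under one percent.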
Answer 2

$Y_1, Y_2, \ldots, Y_n$ are iid $\exp(\theta)$ random variables, and $X_1, X_2, \ldots, X_n$ are the corresponding ordered random variables, with joint pdf
$$f_{X_1,\ldots,X_n}(x_1,\ldots,x_n) = \frac{n!}{\theta^n}\exp\left\{-\frac{1}{\theta}\sum_{i=1}^n x_i\right\}, \qquad 0\le x_1\le x_2\le\cdots\le x_n$$

a. Let
$$U_1 = X_1 \Leftrightarrow X_1 = U_1$$
$$U_2 = X_2-X_1 \Leftrightarrow X_2 = U_2+U_1$$
$$U_3 = X_3-X_2 \Leftrightarrow X_3 = U_3+U_2+U_1$$
$$\vdots$$
$$U_n = X_n-X_{n-1} \Leftrightarrow X_n = U_n+U_{n-1}+\cdots+U_1$$

$$|J| = \left|\frac{\partial X}{\partial U}\right| = \left|\begin{bmatrix}1&\cdots&0\\ \vdots&\ddots&\vdots\\ 1&\cdots&1\end{bmatrix}\right| = 1$$

$\therefore$ the joint pdf of $(U_1, U_2, \ldots, U_n)$ is
$$g(u_1,u_2,\ldots,u_n) = \frac{n!}{\theta^n}\exp\left[-\frac{1}{\theta}\left\{n u_1+(n-1)u_2+(n-2)u_3+\cdots+u_n\right\}\right]$$
$$= \frac{n}{\theta}e^{-\frac{n}{\theta}u_1}\cdot\frac{n-1}{\theta}e^{-\frac{n-1}{\theta}u_2}\cdots\frac{1}{\theta}e^{-\frac{1}{\theta}u_n}$$

b. Since the joint pdf factorizes, $U_1, U_2, \ldots, U_n$ are mutually independent, and the marginal distribution of $U_i$ is
$$U_i \sim \exp\left(\frac{\theta}{n-i+1}\right), \qquad i = 1, 2, \ldots, n$$

c. $X_1 = U_1 \sim \exp\left(\frac{\theta}{n}\right)$, so $E(X_1) = \frac{\theta}{n}$.
$$E(X_n) = E\left[\sum_{i=1}^n U_i\right] = \sum_{i=1}^n\frac{\theta}{n-i+1} = \frac{\theta}{n}+\frac{\theta}{n-1}+\cdots+\theta$$
$$= \theta\left[1+\frac{1}{2}+\cdots+\frac{1}{n-1}+\frac{1}{n}\right] = H_n\,\theta, \qquad H_n \text{ being the } n\text{th harmonic number}$$
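The result $E(X_n) = \theta H_n$ can be checked by simulating the maximum of $n$ iid exponentials. A sketch, with $n = 5$ and $\theta = 2$ as illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta = 5, 2.0

# E[X_(n)], the mean of the maximum of n iid Exp(mean theta) variables,
# should equal theta * H_n
samples = rng.exponential(scale=theta, size=(200_000, n))
mc_mean_max = samples.max(axis=1).mean()

H_n = sum(1.0 / k for k in range(1, n + 1))  # harmonic number H_5 = 137/60
print(mc_mean_max, theta * H_n)
```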
Answer 3

$Z_1 \sim N(0,1)$, $Z_2 \sim N(0,1)$, and $Z_1, Z_2$ are independent.
$X$ and $Y$ are two random variables defined as
$$X = a_X Z_1+b_X Z_2+c_X \quad\text{and}\quad Y = a_Y Z_1+b_Y Z_2+c_Y$$
where $a_X, b_X, c_X, a_Y, b_Y, c_Y$ are constants.

a.
$$E(X) = E(a_X Z_1+b_X Z_2+c_X) = a_X E(Z_1)+b_X E(Z_2)+c_X = c_X \quad [\because E(Z_1)=E(Z_2)=0]$$
$$E(Y) = a_Y E(Z_1)+b_Y E(Z_2)+c_Y = c_Y$$
$$Var(X) = Var(a_X Z_1+b_X Z_2+c_X) = a_X^2\,Var(Z_1)+b_X^2\,Var(Z_2) = a_X^2+b_X^2$$
$$Var(Y) = a_Y^2\,Var(Z_1)+b_Y^2\,Var(Z_2) = a_Y^2+b_Y^2$$
$[\because Z_1, Z_2 \text{ are independent so } Cov(Z_1,Z_2)=0, \text{ and } Var(Z_1)=Var(Z_2)=1]$
Now, $Cov(X,Y) = E(XY)-E(X)E(Y)$.
$$XY = (a_X Z_1+b_X Z_2+c_X)(a_Y Z_1+b_Y Z_2+c_Y)$$
$$= a_X a_Y Z_1^2+b_X b_Y Z_2^2+a_X c_Y Z_1+b_X c_Y Z_2+a_Y c_X Z_1+b_Y c_X Z_2+(a_X b_Y+b_X a_Y)Z_1 Z_2+c_X c_Y$$
$$\therefore E(XY) = a_X a_Y E(Z_1^2)+b_X b_Y E(Z_2^2)+(a_X b_Y+b_X a_Y)E(Z_1 Z_2)+c_X c_Y = a_X a_Y+b_X b_Y+c_X c_Y$$
since $E(Z_1^2)=E(Z_2^2)=1$, $E(Z_1)=E(Z_2)=0$ and $E(Z_1 Z_2)=E(Z_1)E(Z_2)=0$.
$$\therefore Cov(X,Y) = E(XY)-E(X)E(Y) = a_X a_Y+b_X b_Y+c_X c_Y-c_X c_Y = a_X a_Y+b_X b_Y$$

b. Here it is given that
$$a_X = \sqrt{\frac{1+\rho}{2}}\,\sigma_X, \quad b_X = \sqrt{\frac{1-\rho}{2}}\,\sigma_X, \quad c_X = \mu_X,$$
$$a_Y = \sqrt{\frac{1+\rho}{2}}\,\sigma_Y, \quad b_Y = -\sqrt{\frac{1-\rho}{2}}\,\sigma_Y, \quad c_Y = \mu_Y,$$
where $\mu_X, \sigma_X, \mu_Y, \sigma_Y, \rho$ are constants, $-1\le\rho\le 1$.
$$\therefore E(X) = c_X = \mu_X, \qquad E(Y) = c_Y = \mu_Y$$
$$Var(X) = a_X^2+b_X^2 = \frac{1+\rho}{2}\sigma_X^2+\frac{1-\rho}{2}\sigma_X^2 = \sigma_X^2\left(\frac{1+\rho}{2}+\frac{1-\rho}{2}\right) = \sigma_X^2$$
$$Var(Y) = a_Y^2+b_Y^2 = \frac{1+\rho}{2}\sigma_Y^2+\frac{1-\rho}{2}\sigma_Y^2 = \sigma_Y^2$$
$$Cov(X,Y) = a_X a_Y+b_X b_Y = \frac{1+\rho}{2}\sigma_X\sigma_Y-\frac{1-\rho}{2}\sigma_X\sigma_Y = \rho\,\sigma_X\sigma_Y$$
$$Corr(X,Y) = \frac{Cov(X,Y)}{\sqrt{Var(X)\,Var(Y)}} = \frac{\rho\,\sigma_X\sigma_Y}{\sqrt{\sigma_X^2\sigma_Y^2}} = \rho$$

c. $X \sim N(\mu_X, \sigma_X^2)$, $Y \sim N(\mu_Y, \sigma_Y^2)$, $Corr(X,Y) = \rho$. The pair $(X, Y)$ is generated from two independent unit normal variables $Z_1, Z_2$ by the transformation
$$X = \sigma_X Z_1+\mu_X, \qquad Y = \sigma_Y\left[\rho Z_1+\sqrt{1-\rho^2}\,Z_2\right]+\mu_Y$$
$$\therefore Z_1 = \frac{X-\mu_X}{\sigma_X}, \qquad Z_2 = \frac{1}{\sqrt{1-\rho^2}}\left[\frac{Y-\mu_Y}{\sigma_Y}-\rho\,\frac{X-\mu_X}{\sigma_X}\right]$$
The Jacobian of the transformation,
$$J = \det\left[\frac{\partial(z_1,z_2)}{\partial(x,y)}\right] = \det\begin{bmatrix}\dfrac{\partial z_1}{\partial x}&\dfrac{\partial z_1}{\partial y}\\[6pt]\dfrac{\partial z_2}{\partial x}&\dfrac{\partial z_2}{\partial y}\end{bmatrix} = \det\begin{bmatrix}\dfrac{1}{\sigma_X}&0\\[6pt]\dfrac{-\rho}{\sigma_X\sqrt{1-\rho^2}}&\dfrac{1}{\sigma_Y\sqrt{1-\rho^2}}\end{bmatrix}$$
$$\Rightarrow J = \frac{1}{\sigma_X\sigma_Y\sqrt{1-\rho^2}}$$
Therefore the joint distribution of $X$ and $Y$ is given by
$$f(x,y) = f(z_1,z_2)\,|J| = \frac{1}{2\pi}\exp\left[-\frac{1}{2}\left(z_1^2+z_2^2\right)\right]\cdot\frac{1}{\sigma_X\sigma_Y\sqrt{1-\rho^2}}$$
$$\Rightarrow f(x,y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}}\exp\left[-\frac{1}{2}\left\{\left(\frac{x-\mu_X}{\sigma_X}\right)^2+\frac{1}{1-\rho^2}\left(\frac{y-\mu_Y}{\sigma_Y}-\rho\,\frac{x-\mu_X}{\sigma_X}\right)^2\right\}\right]$$
$$\Rightarrow f(x,y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1-\rho^2}}\exp\left[-\frac{1}{2(1-\rho^2)}\left\{\left(\frac{x-\mu_X}{\sigma_X}\right)^2+\left(\frac{y-\mu_Y}{\sigma_Y}\right)^2-2\rho\left(\frac{x-\mu_X}{\sigma_X}\right)\left(\frac{y-\mu_Y}{\sigma_Y}\right)\right\}\right]$$
which is the pdf of a bivariate normal distribution. Hence the joint distribution of $(X, Y)$ is bivariate normal with parameters $\mu_X, \mu_Y, \sigma_X, \sigma_Y, \rho$.
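The transformation in part (c) is exactly how correlated normal pairs are generated in practice. A quick simulation (the parameter values below are illustrative) confirms the means, standard deviations, and the correlation:

```python
import numpy as np

rng = np.random.default_rng(2)
mu_x, mu_y, sd_x, sd_y, rho = 10.0, -3.0, 2.0, 0.5, -0.7

# Build (X, Y) from independent unit normals via the part (c) transformation
z1 = rng.standard_normal(400_000)
z2 = rng.standard_normal(400_000)
x = sd_x * z1 + mu_x
y = sd_y * (rho * z1 + np.sqrt(1 - rho**2) * z2) + mu_y

# Sample moments should match the target parameters
print(x.mean(), y.mean(), x.std(), y.std(), np.corrcoef(x, y)[0, 1])
```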
Answer 4

$X_i$'s are iid $\exp(\theta)$ random variables, $i = 1, 2, \ldots, n$; $Y_j$'s are iid $\exp(\theta)$ random variables, $j = 1, 2, \ldots, m$.
$X_i$'s and $Y_j$'s are independent.
$$\therefore E(X_i) = \theta, \quad E(Y_j) = \theta, \quad Var(X_i) = \theta^2, \quad Var(Y_j) = \theta^2$$
$$E(\bar X) = \theta, \qquad E(\bar Y) = \theta$$
$$Var(\bar X) = \frac{1}{n^2}\sum_{i=1}^n Var(X_i) = \frac{1}{n^2}\cdot n\theta^2 = \frac{\theta^2}{n}, \qquad Var(\bar Y) = \frac{1}{m^2}\sum_{j=1}^m Var(Y_j) = \frac{1}{m^2}\cdot m\theta^2 = \frac{\theta^2}{m}$$

a. Now, $T_\alpha = \alpha\bar X+(1-\alpha)\bar Y$.
$$E(T_\alpha) = \alpha E(\bar X)+(1-\alpha)E(\bar Y) = \alpha\theta+(1-\alpha)\theta = \theta$$
$$Var(T_\alpha) = \alpha^2\,Var(\bar X)+(1-\alpha)^2\,Var(\bar Y) \quad [\because \bar X \text{ and } \bar Y \text{ are independent}]$$
$$= \alpha^2\frac{\theta^2}{n}+(1-\alpha)^2\frac{\theta^2}{m} = \alpha^2\frac{\theta^2}{n}+(1+\alpha^2-2\alpha)\frac{\theta^2}{m}$$
$$= \theta^2\left[\alpha^2\left(\frac{1}{m}+\frac{1}{n}\right)+\frac{1}{m}(1-2\alpha)\right]$$

b. Since $E(T_\alpha) = \theta$, by Chebyshev's inequality, for any $\varepsilon > 0$,
$$P\left[|T_\alpha-\theta| > \varepsilon\right] \le \frac{Var(T_\alpha)}{\varepsilon^2}$$
$$\Rightarrow P\left[|T_\alpha-\theta| > \varepsilon\right] \le \frac{\theta^2\left[\alpha^2\left(\frac{1}{m}+\frac{1}{n}\right)+\frac{1}{m}(1-2\alpha)\right]}{\varepsilon^2}$$
As $m, n \to \infty$, the numerator of the right-hand side tends to 0.
$$\Rightarrow P\left[|T_\alpha-\theta| > \varepsilon\right] \to 0 \text{ as } m, n \to \infty,$$
i.e. $T_\alpha$ is a consistent estimator of $\theta$.
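A small simulation (the values of $\theta$ and $\alpha$ below are illustrative) confirms the mean and variance of $T_\alpha$, and shows the Chebyshev numerator vanishing as $n, m$ grow:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, alpha = 5.0, 0.3

def var_formula(n, m):
    # Var(T_alpha) = theta^2 [ alpha^2 (1/m + 1/n) + (1 - 2 alpha)/m ]
    return theta**2 * (alpha**2 * (1/m + 1/n) + (1 - 2*alpha) / m)

# Monte Carlo check for one (n, m)
n, m, reps = 10, 20, 40_000
xbar = rng.exponential(theta, size=(reps, n)).mean(axis=1)
ybar = rng.exponential(theta, size=(reps, m)).mean(axis=1)
t_alpha = alpha * xbar + (1 - alpha) * ybar
print(t_alpha.mean(), t_alpha.var(), var_formula(n, m))

# The bound vanishes as n, m grow, so T_alpha converges to theta in probability
print([var_formula(k, 2 * k) for k in (10, 100, 1000)])
```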
Answer 5

$X_1, X_2, \ldots, X_n$ are iid $U(0,1)$ random variables.
$$f(x_i) = 1, \quad 0<x_i<1; \qquad 0 \text{ otherwise}$$

a. Let $Y = \ln X_1$, i.e. $y = \ln x_1 \Rightarrow x_1 = e^y$, $-\infty<y<0$.
The Jacobian of the transformation is
$$|J| = \left|\frac{\partial x_1}{\partial y}\right| = e^y$$
$\therefore$ the pdf of $Y$ is
$$f(y) = e^y, \quad -\infty<y<0$$
$$E(Y) = \int_{-\infty}^{0} y\,e^y\,dy$$
$$= \left[y\,e^y-\int 1\cdot e^y\,dy\right]_{-\infty}^{0} = \left[y\,e^y-e^y\right]_{-\infty}^{0} = 0-1 = -1$$
$$E(Y^2) = \int_{-\infty}^{0} y^2 e^y\,dy = \left[y^2 e^y-2\int y\,e^y\,dy\right]_{-\infty}^{0} = \left[y^2 e^y-2y\,e^y+2e^y\right]_{-\infty}^{0} = 2$$
$$Var(Y) = E(Y^2)-E^2(Y) = 2-1 = 1$$

b. $X_i \sim U(0,1) \Rightarrow -2\ln X_i \sim \chi^2_2 \Rightarrow -2\sum\ln X_i \sim \chi^2_{2n}$.
By the CLT,
$$\frac{-2\sum\ln X_i-2n}{\sqrt{4n}} = -\left(\frac{1}{\sqrt n}\sum\ln X_i+\sqrt n\right) \xrightarrow{d} N(0,1) \text{ as } n\to\infty,$$
and since $N(0,1)$ is symmetric, $\frac{1}{\sqrt n}\sum\ln X_i+\sqrt n \xrightarrow{d} N(0,1)$ as well. Therefore
$$\lim_{n\to\infty} P\left(a \le \left(X_1X_2\cdots X_n\right)^{n^{-1/2}}\cdot e^{n^{1/2}} \le b\right) = \lim_{n\to\infty} P\left(\ln a \le \frac{1}{\sqrt n}\sum\ln X_i+\sqrt n \le \ln b\right) = \Phi(\ln b)-\Phi(\ln a)$$
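The limiting probability $\Phi(\ln b)-\Phi(\ln a)$ can be checked by simulation. A sketch, with $n = 100$, $a = 0.5$, $b = 2$ as illustrative choices:

```python
import numpy as np
from math import erf, sqrt, log

rng = np.random.default_rng(4)
n, reps = 100, 50_000
a, b = 0.5, 2.0

# W_n = (X1...Xn)^(1/sqrt n) * e^(sqrt n); its log is (1/sqrt n) sum(ln Xi) + sqrt n
u = rng.uniform(size=(reps, n))
log_w = np.log(u).sum(axis=1) / np.sqrt(n) + np.sqrt(n)
mc_prob = np.mean((np.log(a) <= log_w) & (log_w <= np.log(b)))

Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF
limit = Phi(log(b)) - Phi(log(a))
print(mc_prob, limit)
```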
Answer 6

Let $X$ be a random variable with pdf
$$f(x;\theta) = \theta x^{\theta-1}, \qquad 0\le x\le 1,\ 0<\theta<\infty$$
a. Let $(X_1, X_2, \ldots, X_n)$ be an iid random sample of size $n$ drawn from the population with pdf $f(x;\theta)$. If $X_1, X_2, \ldots, X_n$ take values $x_1, x_2, \ldots, x_n$, the likelihood function can be written as
$$L(\theta\mid x) = \prod_{i=1}^n\theta x_i^{\theta-1} = \theta^n\prod_{i=1}^n x_i^{\theta-1} \qquad (1)$$
$[x = (x_1, x_2, \ldots, x_n),\ 0\le x_i\le 1,\ 0<\theta<\infty]$
Taking the natural logarithm of both sides of (1),
$$\ln L(\theta\mid x) = n\ln\theta+(\theta-1)\sum_{i=1}^n\ln x_i$$
The MLE of the parameter $\theta$ can be obtained by solving the equation
$$\frac{\partial\ln L(\theta\mid x)}{\partial\theta} = 0 \Rightarrow \frac{n}{\theta}+\sum_{i=1}^n\ln x_i = 0 \Rightarrow \frac{n}{\theta} = -\sum\ln x_i \Rightarrow \theta = \frac{-n}{\sum\ln x_i}$$
$\therefore$ the MLE of $\theta$ is
$$\hat\theta = \frac{-n}{\sum_{i=1}^n\ln X_i}$$

b. The observed random sample is 0.55, 0.88, 0.43, 0.78, 0.66.
$$\sum\ln x_i = \ln(0.55)+\ln(0.88)+\ln(0.43)+\ln(0.78)+\ln(0.66) = -2.23, \qquad n = 5$$
$$\therefore \hat\theta = \frac{-5}{(-2.23)} = 2.24$$
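The part (b) estimate is a one-liner. Keeping full precision gives $\hat\theta \approx 2.239$; rounding $\sum\ln x_i$ to $-2.23$ first reproduces the 2.24 above.

```python
from math import log

# MLE for f(x; theta) = theta * x^(theta - 1): theta_hat = -n / sum(ln x_i)
sample = [0.55, 0.88, 0.43, 0.78, 0.66]
sum_log = sum(log(x) for x in sample)
theta_hat = -len(sample) / sum_log
print(round(sum_log, 4), round(theta_hat, 4))
```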
c. $X_1$ is a random variable with pdf $f(x_1;\theta) = \theta x_1^{\theta-1}$, $0\le x_1\le 1$, $0<\theta<\infty$.
Let $Y = -\ln X_1 \Rightarrow x_1 = e^{-y}$.
The Jacobian of the transformation,
$$|J| = \left|\frac{\partial x_1}{\partial y}\right| = \left|-e^{-y}\right| = e^{-y}$$
Range of $y$: $0<y<\infty$. The pdf of $Y$ is
$$f(y) = f_X(e^{-y})\,|J| = \theta\left(e^{-y}\right)^{\theta-1}\cdot e^{-y} = \theta e^{-\theta y}$$
which is nothing but the pdf of $\exp\left(\frac{1}{\theta}\right)$, the exponential distribution with mean $1/\theta$.
$$\therefore Y = -\ln X_1 \sim \exp\left(\frac{1}{\theta}\right)$$
d. $Y \sim \exp\left(\frac{1}{\theta}\right)$. The MGF of $Y$ is, for $t<\theta$,
$$M_Y(t) = E\left[e^{tY}\right] = \int_0^\infty e^{ty}\,\theta e^{-\theta y}\,dy = \theta\int_0^\infty e^{-(\theta-t)y}\,dy = \theta\left[\frac{-e^{-(\theta-t)y}}{\theta-t}\right]_0^\infty = \frac{\theta}{\theta-t}$$
Let $Y_1, Y_2, \ldots, Y_n$ be a random sample from $\exp\left(\frac{1}{\theta}\right)$ and let $S = \sum_{i=1}^n Y_i$.
$$M_S(t) = E\left(e^{tS}\right) = E\left(e^{t\sum Y_i}\right) = E\left(e^{tY_1}\cdot e^{tY_2}\cdots e^{tY_n}\right) = E\left(e^{tY_1}\right)E\left(e^{tY_2}\right)\cdots E\left(e^{tY_n}\right) \quad [\because Y_1,\ldots,Y_n \text{ are independent}]$$
$$\Rightarrow M_S(t) = \prod_{i=1}^n M_{Y_i}(t) = \prod_{i=1}^n\frac{\theta}{\theta-t}$$
$$= \left(\frac{\theta}{\theta-t}\right)^n$$
which is nothing but the MGF of a Gamma distribution with parameters $\left(n, \frac{1}{\theta}\right)$; since the MGF uniquely determines a distribution, it can be concluded that
$$S = \sum Y_i = -\sum\ln X_i \sim Gamma\left(n, \frac{1}{\theta}\right)$$

e. The mean of $S$ follows from the MGF:
$$E(S) = \frac{\partial}{\partial t}\left[M_S(t)\right]_{t=0} = \frac{\partial}{\partial t}\left[\left(1-\frac{t}{\theta}\right)^{-n}\right]_{t=0} = \left[-n\left(1-\frac{t}{\theta}\right)^{-n-1}\left(-\frac{1}{\theta}\right)\right]_{t=0} = \frac{n}{\theta}$$
However, $\hat\theta = n/S$ and $E(1/S) \ne 1/E(S)$, so the expectation of the MLE must be computed from $E(1/S)$ directly. For $S \sim Gamma\left(n, \frac{1}{\theta}\right)$ with $n>1$,
$$E\left(\frac{1}{S}\right) = \int_0^\infty\frac{1}{s}\cdot\frac{\theta^n}{\Gamma(n)}\,s^{n-1}e^{-\theta s}\,ds = \frac{\theta^n}{\Gamma(n)}\cdot\frac{\Gamma(n-1)}{\theta^{n-1}} = \frac{\theta}{n-1}$$
$$E(\hat\theta) = n\,E\left(\frac{1}{S}\right) = \frac{n}{n-1}\,\theta$$
$\therefore \hat\theta$ is biased for finite $n$, but $E(\hat\theta)\to\theta$ as $n\to\infty$, so $\hat\theta$ is asymptotically unbiased; the corrected estimator $\frac{n-1}{n}\hat\theta$ is exactly unbiased for $\theta$.
Answer 7

a. The joint pdf of $(X_1, X_2, \ldots, X_n)$ is given by
$$L(x;\theta) = \prod_{i=1}^n\frac{2x_i}{\theta^2} = \frac{2^n\prod_{i=1}^n x_i}{\theta^{2n}} \le \frac{2^n\prod_{i=1}^n x_i}{\left[\max(x_i)\right]^{2n}}$$
Since $0<x<\theta$, $X_{(n)} = \max(X_1, X_2, \ldots, X_n) < \theta$. The likelihood is decreasing in $\theta$, so it is maximised at the smallest possible value for $\theta$, namely $X_{(n)}$.
Hence the MLE of $\theta$ is $\hat\theta = X_{(n)}$.

b.
$$F_X(x) = \int_0^x\frac{2t}{\theta^2}\,dt = \frac{x^2}{\theta^2}, \qquad 0<x<\theta$$
$$\Rightarrow F_{X_{(n)}}(x) = \left(\frac{x^2}{\theta^2}\right)^n = \frac{x^{2n}}{\theta^{2n}}, \qquad 0<x<\theta$$
$$\Rightarrow f_{X_{(n)}}(x) = \frac{2n\,x^{2n-1}}{\theta^{2n}}, \qquad 0<x<\theta$$
$$\therefore E(c\hat\theta) = c\,E(\hat\theta) = c\int_0^\theta x\,\frac{2n\,x^{2n-1}}{\theta^{2n}}\,dx = \frac{2nc}{2n+1}\,\theta = \theta \Rightarrow c = \frac{2n+1}{2n}$$
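A simulation (illustrative $\theta$ and $n$) confirms that $X_{(n)}$ underestimates $\theta$ and that the factor $c = \frac{2n+1}{2n}$ removes the bias; sampling uses the inverse CDF, $X = \theta\sqrt U$:

```python
import numpy as np

rng = np.random.default_rng(5)
theta, n, reps = 4.0, 6, 100_000

# Inverse-CDF sampling for f(x) = 2x/theta^2 on (0, theta): F(x) = x^2/theta^2,
# so X = theta * sqrt(U) with U ~ U(0,1)
x = theta * np.sqrt(rng.uniform(size=(reps, n)))
max_x = x.max(axis=1)

c = (2 * n + 1) / (2 * n)
print(max_x.mean(), 2 * n / (2 * n + 1) * theta, (c * max_x).mean())
```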
c. The median of the distribution can be obtained from the equation,
$$F(x) = \frac{1}{2} \Rightarrow \frac{x^2}{\theta^2} = \frac{1}{2} \Rightarrow x = \frac{\theta}{\sqrt 2}$$
Hence the median of the distribution is $\frac{\theta}{\sqrt 2}$, and by the invariance property of MLEs, the MLE of the median is $\frac{\hat\theta}{\sqrt 2} = \frac{X_{(n)}}{\sqrt 2}$.
Answer 8

a. Let $(X_1, X_2, \ldots, X_n)$ be an iid random sample from $P(\lambda)$.
$$f(x_i;\lambda) = \frac{e^{-\lambda}\lambda^{x_i}}{x_i!}, \qquad x_i = 0, 1, 2, \ldots,\ i = 1, 2, \ldots, n,\ \lambda>0$$
$\therefore$ the joint distribution (likelihood) is given by
$$L(x;\lambda) = \prod_{i=1}^n\frac{e^{-\lambda}\lambda^{x_i}}{x_i!} = \frac{e^{-n\lambda}\,\lambda^{\sum_{i=1}^n x_i}}{\prod_{i=1}^n x_i!} \qquad [x = (x_1, x_2, \ldots, x_n)]$$
$$\ln L(x;\lambda) = -n\lambda+\sum x_i\ln\lambda-\sum\ln(x_i!)$$
The MLE can be obtained by solving
$$\frac{\partial\ln L(x;\lambda)}{\partial\lambda} = 0 \Rightarrow -n+\frac{1}{\lambda}\sum x_i = 0 \Rightarrow \frac{1}{\lambda}\sum x_i = n \Rightarrow \lambda = \frac{\sum x_i}{n} = \bar x$$
$\therefore$ the MLE of $\lambda$ is $\hat\lambda = \bar X$.

b. $X$ has a Poisson distribution with mean $\lambda$. Here $n = 50$ and
$$\bar x = \frac{1}{50}\left[3\times 0+5\times 1+5\times 2+8\times 3+12\times 4+9\times 5+8\times 6\right] = \frac{180}{50} = 3.6$$
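The part (b) computation from the frequency table:

```python
# Poisson MLE from the frequency table: lambda_hat = xbar
counts = {0: 3, 1: 5, 2: 5, 3: 8, 4: 12, 5: 9, 6: 8}  # value -> frequency
n = sum(counts.values())
total = sum(value * freq for value, freq in counts.items())
lam_hat = total / n
print(n, total, lam_hat)
```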
Answer 9

a.
$$f(x;\theta) = \frac{\theta^4}{6}x^3e^{-\theta x}, \qquad 0<x<\infty,\ 0<\theta<\infty$$
The joint pdf (likelihood) is
$$L(\theta) = \prod_{i=1}^n\frac{\theta^4}{6}x_i^3e^{-\theta x_i} = \frac{\theta^{4n}}{6^n}\left(\prod_{i=1}^n x_i^3\right)e^{-\theta\sum_{i=1}^n x_i}$$
$$\ln L(\theta) = 4n\ln\theta-n\ln 6+3\sum_{i=1}^n\ln x_i-\theta\sum_{i=1}^n x_i$$
$\therefore$ the MLE of $\theta$ can be obtained by solving
$$\frac{\partial\ln L(\theta)}{\partial\theta} = 0 \Rightarrow \frac{4n}{\theta}-\sum_{i=1}^n x_i = 0 \Rightarrow \theta = \frac{4n}{\sum_{i=1}^n x_i} = \frac{4}{\frac{1}{n}\sum x_i}$$
$\therefore$ the MLE of $\theta$ is $\hat\theta = \dfrac{4}{\bar X}$.
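Since $f(x;\theta) = \frac{\theta^4}{6}x^3e^{-\theta x}$ is the Gamma density with shape 4 and rate $\theta$, the MLE $\hat\theta = 4/\bar X$ can be checked by simulation (the value of $\theta$ below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
theta_true, n = 2.5, 50_000

# f(x; theta) = theta^4 x^3 e^(-theta x) / 6 is Gamma(shape=4, rate=theta),
# i.e. scale = 1/theta in numpy's parameterization
x = rng.gamma(shape=4, scale=1 / theta_true, size=n)
theta_hat = 4 / x.mean()
print(theta_hat)
```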
b. When $\theta = 1$: $f(x;\theta) = 1$, $0<x<1$. When $\theta = 2$: $f(x;\theta) = \frac{1}{2\sqrt x}$, $0<x<1$.
Note that the MLE of $\theta$ would have to be a value with $\theta > x_i$ for $i = 1, 2, \ldots, n$ that maximises the likelihood. In the above cases the likelihood increases as $\theta$ decreases towards $\max(x_i)$, but $\theta$ must remain strictly greater than each observed $x_i$, so the supremum of the likelihood is not attained. Therefore the MLE does not exist in either case.

c. $f(x;\theta) = \theta$, $0\le x\le\frac{1}{\theta}$.
From the above pdf, it can be seen that
$$x_i\le\frac{1}{\theta} \text{ for all } i = 1, 2, \ldots, n \Rightarrow \theta\le\frac{1}{x_i} \text{ for all } i \Rightarrow \theta\le\frac{1}{\max(x_1, x_2, \ldots, x_n)}$$
$$\Rightarrow \theta\le\frac{1}{x_{(n)}}$$
Since the likelihood $\theta^n$ is increasing in $\theta$, the MLE of $\theta$ is $\hat\theta = \dfrac{1}{X_{(n)}}$.

Answer 10

$X_1, X_2, \ldots, X_n$ are i.i.d. random variables with pdf
$$f(x;\theta) = \frac{1}{\theta}, \qquad 0\le x\le\theta$$
The first-order raw population moment can be obtained as
$$\mu_1' = E(X) = \int_0^\theta\frac{x}{\theta}\,dx = \frac{1}{\theta}\left[\frac{x^2}{2}\right]_0^\theta = \frac{1}{\theta}\cdot\frac{\theta^2}{2} = \frac{\theta}{2}$$
Let $m_1'$ be the sample moment. The moment estimator of a parameter is the value which satisfies the moment equation
$$E(X^r) = \mu_r' = m_r' = \frac{1}{n}\sum_{i=1}^n X_i^r, \qquad r = 1, 2, \ldots, k$$
Therefore, equating the population moment with the sample moment,
$$\mu_1' = m_1' \Rightarrow \frac{\hat\theta}{2} = \frac{1}{n}\sum_{i=1}^n X_i = \bar X \Rightarrow \hat\theta = 2\bar X$$
$$E(\hat\theta) = 2E(\bar X) = 2\cdot\frac{1}{n}\sum_{i=1}^n E(X_i) = 2\cdot\frac{1}{n}\cdot n\cdot\frac{\theta}{2} = \theta,$$
so $\hat\theta = 2\bar X$ is unbiased. Now, since $E(X_i^2) = \int_0^\theta\frac{x^2}{\theta}\,dx = \frac{\theta^2}{3}$,
$$Var(X_i) = E(X_i^2)-E^2(X_i) = \frac{\theta^2}{3}-\frac{\theta^2}{4} = \frac{\theta^2}{12}$$
$$Var(\hat\theta) = 4\,Var(\bar X) = \frac{4}{n^2}\sum_{i=1}^n Var(X_i) = \frac{4}{n^2}\cdot n\cdot\frac{\theta^2}{12} = \frac{\theta^2}{3n}$$
The joint pdf of $(X_1, X_2, \ldots, X_n)$ is given by
$$L(x;\theta) = \frac{1}{\theta^n}, \qquad 0\le x_i\le\theta,\ \theta>0$$
It can be observed that the MLE of $\theta$ is a value for which $x_i\le\theta$, $i = 1, 2, \ldots, n$, and which maximizes $\frac{1}{\theta^n}$. Since $\frac{1}{\theta^n}$ is a decreasing function of $\theta$, the MLE will be the smallest possible value of $\theta$ for which $x_i\le\theta$.
Hence the MLE of $\theta$ is $\hat\theta = \max(X_1, X_2, \ldots, X_n) = X_{(n)}$.
The pdf of $X_{(n)}$ is given by
$$f_{X_{(n)}}(x) = n\,f(x)\,F(x)^{n-1} = \frac{n\,x^{n-1}}{\theta^n}, \qquad 0\le x\le\theta$$
$$E(X_{(n)}) = \int_0^\theta x\cdot\frac{n\,x^{n-1}}{\theta^n}\,dx = \frac{n}{\theta^n}\left[\frac{x^{n+1}}{n+1}\right]_0^\theta = \frac{n\theta}{n+1}$$
$$E(X_{(n)}^2) = \frac{n\theta^2}{n+2}$$
$$Var(X_{(n)}) = \frac{n\theta^2}{n+2}-\left(\frac{n\theta}{n+1}\right)^2 = n\theta^2\left(\frac{1}{n+2}-\frac{n}{(n+1)^2}\right) = \frac{n\theta^2}{(n+1)^2(n+2)}$$
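A simulation (illustrative $\theta = 3$, $n = 10$) comparing the two estimators confirms the formulas above: $2\bar X$ is unbiased with variance $\theta^2/(3n)$, while $X_{(n)}$ is biased downward with the stated mean and variance:

```python
import numpy as np

rng = np.random.default_rng(7)
theta, n, reps = 3.0, 10, 200_000

x = rng.uniform(0, theta, size=(reps, n))
mm = 2 * x.mean(axis=1)   # method-of-moments estimator, 2*Xbar
mle = x.max(axis=1)       # MLE, X_(n)

print(mm.mean(), mm.var(), theta**2 / (3 * n))           # unbiased, var theta^2/(3n)
print(mle.mean(), n * theta / (n + 1))                   # biased downward
print(mle.var(), n * theta**2 / ((n + 1)**2 * (n + 2)))
```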
Answer 11

a. The pdf of $X$ is defined as
$$f(x;\theta) = \begin{cases}\dfrac{4}{\theta^2}x, & 0<x<\dfrac{\theta}{2}\\[6pt] -\dfrac{4}{\theta^2}x+\dfrac{4}{\theta}, & \dfrac{\theta}{2}<x<\theta\\[6pt] 0, & \text{otherwise}\end{cases}$$
The first-order raw population moment can be obtained as
$$\mu_1' = E(X) = \int_0^{\theta/2}x\cdot\frac{4}{\theta^2}x\,dx+\int_{\theta/2}^{\theta}x\left(-\frac{4}{\theta^2}x+\frac{4}{\theta}\right)dx$$
$$\Rightarrow \mu_1' = \frac{4}{\theta^2}\int_0^{\theta/2}x^2\,dx-\frac{4}{\theta^2}\int_{\theta/2}^{\theta}x^2\,dx+\frac{4}{\theta}\int_{\theta/2}^{\theta}x\,dx$$
$$\Rightarrow \mu_1' = \frac{4}{\theta^2}\left[\frac{x^3}{3}\right]_0^{\theta/2}-\frac{4}{\theta^2}\left[\frac{x^3}{3}\right]_{\theta/2}^{\theta}+\frac{4}{\theta}\left[\frac{x^2}{2}\right]_{\theta/2}^{\theta}$$
$$\Rightarrow \mu_1' = \frac{4}{\theta^2}\cdot\frac{\theta^3}{24}-\frac{4}{\theta^2}\left(\frac{\theta^3}{3}-\frac{\theta^3}{24}\right)+\frac{4}{\theta}\left(\frac{\theta^2}{2}-\frac{\theta^2}{8}\right)$$
$$\Rightarrow \mu_1' = \frac{\theta}{6}-\frac{7\theta}{6}+\frac{3\theta}{2} = \frac{\theta}{2}$$
Let $m_1'$ be the sample moment. The moment estimator of a parameter is the value which satisfies the moment equation
$$E(X^r) = \mu_r' = m_r' = \frac{1}{n}\sum_{i=1}^n X_i^r, \qquad r = 1, 2, \ldots, k$$
Therefore, equating the population moment with the sample moment,
$$\mu_1' = m_1' \Rightarrow \frac{\hat\theta}{2} = \frac{1}{n}\sum_{i=1}^n X_i = \bar X \Rightarrow \hat\theta = 2\bar X$$

b. The sample observations of $X$ are 0.3209, 0.2412, 0.2557, 0.3544, 0.4168, 0.5621, 0.0230, 0.5442, 0.4552 and 0.5592.
$$\sum x_i = 3.7327, \quad n = 10, \quad \text{thus } \bar x = 0.3733$$
Therefore, the point estimate of $\theta$ is $\hat\theta = 2\times 0.3733 = 0.7466$.
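The part (b) computation, keeping full precision (which gives 0.7465; rounding $\bar x$ to 0.3733 first reproduces the 0.7466 above):

```python
# Method-of-moments estimate for the triangular pdf on (0, theta): theta_hat = 2*xbar
obs = [0.3209, 0.2412, 0.2557, 0.3544, 0.4168,
       0.5621, 0.0230, 0.5442, 0.4552, 0.5592]
xbar = sum(obs) / len(obs)
theta_hat = 2 * xbar
print(round(sum(obs), 4), round(xbar, 5), round(theta_hat, 4))
```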
Answer 12

$X_1, X_2, \ldots, X_n$ are iid random variables from an exponential distribution with unknown mean $\theta$. By the additive property of the exponential distribution,
$$Y = \sum_{i=1}^n X_i \sim Gamma(n, \theta)$$
$\therefore$ the pdf of $Y$ is
$$f(y) = \frac{y^{n-1}e^{-y/\theta}}{\theta^n\,\Gamma(n)}, \qquad 0<y<\infty$$

a. Let $W = \frac{2}{\theta}Y \Rightarrow y = \frac{\theta}{2}w$.
The Jacobian of the transformation is $J = \frac{\theta}{2}$, and the range of $w$ is $(0, \infty)$.
$\therefore$ the pdf of $W$ is
$$f(w) = \left(\frac{\theta}{2}w\right)^{n-1}\frac{e^{-w/2}}{\theta^n\,\Gamma(n)}\cdot\frac{\theta}{2} = \frac{1}{2^n\,\Gamma(n)}\,w^{n-1}e^{-\frac{1}{2}w}$$
which is nothing but the pdf of $\chi^2$ with $2n$ degrees of freedom.
$$\therefore W = \frac{2}{\theta}\sum_{i=1}^n X_i \sim \chi^2_{2n}$$

b. Since $W \sim \chi^2_{2n}$, $Var(W) = 2\cdot 2n = 4n$.
$\therefore$ the $100(1-\alpha)\%$ confidence interval for $\theta$ is taken as
$$\left(\bar x\pm z_{\alpha/2}\sqrt{\frac{Var(W)}{n}}\right) = \left(\bar x\pm z_{\alpha/2}\sqrt{\frac{4n}{n}}\right) = \left(\bar x\pm 2z_{\alpha/2}\right)$$

c. $\bar x = 65.2$, $n = 8$, $z_{\alpha/2} = 1.64$. The $90\%$ confidence interval for $\theta$ is
$$\left(65.2\pm 1.64\times 2\right) = (61.92,\ 68.48)$$
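The part (c) interval, following the text's recipe:

```python
from math import sqrt

# 90% CI for theta following the text's normal-approximation recipe:
# margin = z_{alpha/2} * sqrt(Var(W)/n) with Var(W) = 4n, so sqrt(4n/n) = 2
xbar, n, z = 65.2, 8, 1.64
margin = z * sqrt(4 * n / n)
ci = (round(xbar - margin, 2), round(xbar + margin, 2))
print(ci)
```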
Answer 13

a. The pdf of $X_i$ is
$$f(x_i) = \alpha\cdot\frac{x_i^{\alpha-1}}{\beta^\alpha} = \frac{\alpha}{\beta}\left(\frac{x_i}{\beta}\right)^{\alpha-1}, \qquad 0\le x_i\le\beta$$
The likelihood is
L(α, β) = ∏ᵢ₌₁ⁿ f(xᵢ) = ∏ᵢ₌₁ⁿ (α xᵢ^(α−1) / β^α) = (αⁿ / β^(nα)) ∏ᵢ₌₁ⁿ xᵢ^(α−1)
so the log-likelihood is
ln L(α, β) = n ln α − nα ln β + (α − 1) ∑ᵢ₌₁ⁿ ln xᵢ
The MLEs can be obtained by solving
∂ln L/∂α = 0 ⇒ n/α − n ln β + ∑ ln xᵢ = 0
⇒ 1/α + (∑ ln xᵢ)/n = ln β
⇒ ^α = n / (n ln β − ∑ᵢ₌₁ⁿ ln Xᵢ)
Note that
xᵢ ≤ β for all i = 1, 2, …, n
⇒ max(xᵢ) ≤ β
⇒ x(n) ≤ β,
and L is decreasing in β, so L is maximised at the smallest admissible value of β.
∴ the MLE of β is ^β = X(n),
and the MLE of α is ^α = n / (n ln ^β − ∑ᵢ₌₁ⁿ ln Xᵢ)
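The two MLE formulas, ^β = max(xᵢ) and ^α = n / (n ln ^β − ∑ ln xᵢ), can be collected in a small helper (an illustrative sketch; the function name is ours, not part of the assignment):

```python
import math

def power_mle(xs):
    """MLEs for f(x) = a * x**(a-1) / b**a on (0, b]:
    b_hat = max(x), a_hat = n / (n*ln(b_hat) - sum(ln x))."""
    n = len(xs)
    b_hat = max(xs)
    a_hat = n / (n * math.log(b_hat) - sum(math.log(x) for x in xs))
    return a_hat, b_hat

print(power_mle([1.0, 2.0, 4.0]))  # b_hat = 4.0; a_hat = 1/ln 2 for this toy sample
```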
b. ∑ ln xᵢ = 34.95, n = 14
^β = 26.0, ln ^β = 3.26
^α = 14 / (14 × 3.26 − 34.95) = 1.31
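Plugging the numbers into the formula ^α = n / (n ln ^β − ∑ ln xᵢ) (a quick check sketch):

```python
n, ln_beta_hat, sum_log_x = 14, 3.26, 34.95   # values from part b
alpha_hat = n / (n * ln_beta_hat - sum_log_x)
print(round(alpha_hat, 2))                    # 1.31
```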
c. P[X(n) ≤ x] = (x/β)^(nα₀), 0 < x < β.
The CDF of the pivotal quantity Q(X, β) = X(n)/β is
P[X(n)/β ≤ x] = P(X(n) ≤ βx) = x^(nα₀), 0 < x < 1.
For a 95% C.I. it is required to calculate b such that
P(b < X(n)/β) = 0.95,
where
P[X(n)/β > b] = 1 − P[X(n)/β ≤ b] = 1 − b^(nα₀)
⇒ 1 − b^(nα₀) = 0.95 ⇒ b^(nα₀) = 0.05 ⇒ b = (0.05)^(1/(nα₀))
d. The 95% C.I. for β is
[ x(n) < β < x(n)/b ]
= [ x(n) < β < x(n)/(0.05)^(1/(nα₀)) ], i.e. [ x(n), x(n)/(0.05)^(1/(nα₀)) ]
Here α₀ = 1.31,
^β = 26.0, n = 14
∴ the 95% C.I. for β is (26.0, 30.61)
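The interval endpoints follow from b = (0.05)^(1/(nα₀)) and upper limit x(n)/b (a quick numeric check using the values above):

```python
n, alpha0, x_max = 14, 1.31, 26.0       # values from parts b and d
b = 0.05 ** (1.0 / (n * alpha0))        # lower 5% quantile of the pivot X_(n)/beta
upper = x_max / b                       # upper confidence limit for beta
print(round(b, 3), round(upper, 2))     # upper rounds to 30.61
```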
Answer 14
a. Y follows an exponential distribution with parameter 1/λ.
Given that X = θ₁ + θ₂Y, we have Y = (X − θ₁)/θ₂; 0 < y < ∞ ⇒ θ₁ < x < ∞.
The Jacobian of the transformation is
J = dy/dx = 1/θ₂
Therefore, the pdf of X is given by
f(x) = (1/λ) e^(−(x−θ₁)/(θ₂λ)) · (1/θ₂) = (1/(λθ₂)) e^(−(x−θ₁)/(λθ₂)), λ > 0, θ₁ < x < ∞.
Hence, X follows a shifted exponential distribution with location parameter θ₁ and scale parameter λθ₂.
b. E(X) = θ₁ + θ₂ E(Y) = θ₁ + θ₂λ
Var(X) = θ₂² Var(Y) = θ₂²λ²
⇒ E(X²) = θ₂²λ² + (θ₁ + θ₂λ)²
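These moment formulas can be checked by simulation (a sketch with arbitrary illustrative parameter values; tolerances are loose to absorb Monte Carlo error):

```python
import random

random.seed(1)
theta1, theta2, lam, reps = 5.0, 2.0, 1.5, 100_000   # illustrative values
# X = theta1 + theta2*Y with Y ~ Exponential(mean lam)
xs = [theta1 + theta2 * random.expovariate(1.0 / lam) for _ in range(reps)]
mean_x = sum(xs) / reps
var_x = sum((x - mean_x) ** 2 for x in xs) / reps
# Theory: E(X) = theta1 + theta2*lam = 8.0, Var(X) = theta2**2 * lam**2 = 9.0
print(mean_x, var_x)
```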
Let m₁′, m₂′ be the sample raw moments. The MMEs of the parameters θ₁, θ₂ can be obtained by equating population moments with sample moments:
E(X^r) = μ_r′ = m_r′ = (1/n) ∑ᵢ₌₁ⁿ Xᵢ^r, r = 1, 2, …, k
⇒ E(X) = ^θ₁ + ^θ₂λ = (1/n) ∑ᵢ₌₁ⁿ Xᵢ = X̄
E(X²) = ^θ₂²λ² + (^θ₁ + ^θ₂λ)² = (1/n) ∑ᵢ₌₁ⁿ Xᵢ²
⇒ ^θ₂²λ² = (1/n) ∑ᵢ₌₁ⁿ Xᵢ² − X̄²
⇒ ^θ₂ = (1/λ) √((1/n) ∑ᵢ₌₁ⁿ Xᵢ² − X̄²), ^θ₁ = X̄ − √((1/n) ∑ᵢ₌₁ⁿ Xᵢ² − X̄²)
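A minimal sketch of these moment estimators (assuming λ is known; the function name is illustrative):

```python
import math

def shifted_exp_mme(xs, lam):
    """Method-of-moments estimates for X = theta1 + theta2*Y, Y ~ Exp(mean lam).
    theta2_hat = sqrt(m2 - m1**2) / lam, theta1_hat = m1 - sqrt(m2 - m1**2)."""
    n = len(xs)
    m1 = sum(xs) / n                 # first sample raw moment
    m2 = sum(x * x for x in xs) / n  # second sample raw moment
    s = math.sqrt(m2 - m1 * m1)
    return m1 - s, s / lam           # (theta1_hat, theta2_hat)

print(shifted_exp_mme([6.0, 10.0], 2.0))  # toy data: m1 = 8, m2 - m1**2 = 4
```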
c. f(xᵢ) = (1/(λθ₂)) e^(−(xᵢ−θ₁)/(λθ₂)), θ₁ < xᵢ < ∞
The joint pdf of (X₁, X₂, …, Xₙ) is given by
L(θ₁, θ₂) = ∏ᵢ₌₁ⁿ (1/(λθ₂)) e^(−(xᵢ−θ₁)/(λθ₂))
⇒ L(θ₁, θ₂) = (1/(λθ₂))ⁿ e^(−(1/(λθ₂)) ∑ᵢ₌₁ⁿ (xᵢ − θ₁))
When θ₂ is fixed and θ₁ ≤ x(1):
θ₁ ≤ xᵢ, ∀ i = 1, 2, …, n
⇒ (xᵢ − θ₁) ≥ 0, ∀ i = 1, 2, …, n
⇒ L(θ₁, θ₂) is strictly increasing in θ₁.
If θ₁ > x(1) (θ₂ fixed), then the likelihood is 0, since θ₁ ≤ xᵢ is required for all i = 1, 2, …, n.
Therefore, the MLE of θ₁ is ^θ₁ = X(1).
Now,
ln L = −n ln λ − n ln θ₂ − (1/(λθ₂)) ∑ᵢ₌₁ⁿ (xᵢ − θ₁)
∂ln L/∂θ₂ = 0 ⇒ −n/θ₂ + (1/(λθ₂²)) ∑ᵢ₌₁ⁿ (xᵢ − θ₁) = 0
⇒ ^θ₂ = (1/(nλ)) ∑ᵢ₌₁ⁿ (xᵢ − ^θ₁)
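The two MLEs, ^θ₁ = min(xᵢ) = X(1) and ^θ₂ = ∑(xᵢ − ^θ₁)/(nλ), can be sketched as a helper (λ assumed known; the function name is illustrative):

```python
def shifted_exp_mle(xs, lam):
    """MLEs for the shifted exponential f(x) = exp(-(x-t1)/(lam*t2))/(lam*t2), x > t1."""
    n = len(xs)
    t1 = min(xs)                              # theta1_hat = smallest observation
    t2 = sum(x - t1 for x in xs) / (n * lam)  # theta2_hat from the score equation
    return t1, t2

print(shifted_exp_mle([3.0, 5.0, 7.0], 1.0))  # toy data: min = 3, mean excess = 2
```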
Answer 15
It is given that
c = 90%, margin of error E = 10, x̄ = 1580, s = 58.
If the sample size is large, then the population standard deviation σ is approximated by the sample standard deviation s:
σ ≈ s = 58
The formula for the sample size is
n = (z_(α/2) σ / E)²
Here α = 0.10 ⇒ α/2 = 0.05.
The value of z_(α/2) = z₀.₀₅ = 1.64, from the standard normal table.
Therefore, the sample size is (rounded up to the nearest integer)
n = (1.64 × 58 / 10)² ≈ 91
Hence the sample size is 91.
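The sample-size arithmetic can be reproduced directly (a quick check sketch):

```python
import math

z, sigma, E = 1.64, 58, 10
n_exact = (z * sigma / E) ** 2   # about 90.48
n = math.ceil(n_exact)           # round up to the next whole observation
print(n_exact, n)
```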