Statistical Science Assignment: University Statistics Course Homework

Summary
This assignment explores several key concepts in statistical science. It begins by examining the Bernoulli and beta distributions, deriving the posterior distribution of the success probability via Bayes' theorem. It then derives the method of moments and maximum likelihood estimators of $p$ for the binomial distribution. Next, it discusses principles of data reduction, using the likelihood principle as an example, and describes maximum likelihood estimation as a method of finding estimators. Finally, it verifies that two given estimators, one continuous and one discrete, are unbiased for $\theta$ and compares their efficiency using the mean squared error (MSE). References to relevant statistical literature are included to support the analysis.
Name of the Student
Name of the University
Author’s Note.
Question 1.
Let $X_1, \dots, X_n$ be iid Bernoulli($p$). Then $Y = \sum_{i=1}^{n} X_i$ is Binomial($n, p$). Assume the prior distribution on $p$ is Beta($\alpha, \beta$). Find the posterior distribution of $p$ given $y$.
Solution.
The distribution functions are
$$f(x) = \begin{cases} p, & x = 1 \\ 1 - p, & x = 0 \end{cases}$$
for the Bernoulli($p$), and
$$f(y \mid p) = \begin{cases} \binom{n}{y} p^{y} (1 - p)^{n - y}, & y = 0, 1, \dots, n \\ 0, & \text{otherwise} \end{cases}$$
for the Bin($n, p$).
The prior $\pi(p)$ is Beta($\alpha, \beta$) with pdf
$$\pi(p) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}\, p^{\alpha - 1} (1 - p)^{\beta - 1}.$$
Applying Bayes' theorem,
$$f(p \mid x) \propto f(x_1, \dots, x_n \mid p)\, \pi(p) \propto p^{y} (1 - p)^{n - y}\, p^{\alpha - 1} (1 - p)^{\beta - 1},$$
where $y = \sum x_i$ and the constants $\Gamma(\alpha + \beta)/[\Gamma(\alpha)\Gamma(\beta)]$ and $\binom{n}{y}$ are absorbed into the proportionality. Hence
$$p \mid y \sim \mathrm{Beta}(y + \alpha,\; n - y + \beta).$$
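As a quick numerical check (a minimal sketch, not part of the original solution; the hyperparameters and data below are arbitrary illustrative choices), one can compare the closed-form conjugate posterior against a brute-force grid evaluation of likelihood times prior:

```python
import numpy as np
from scipy import stats

# Arbitrary illustrative values: prior Beta(2, 3), n = 20 trials, y = 7 successes.
alpha, beta, n, y = 2.0, 3.0, 20, 7

# Brute-force posterior on a grid: likelihood * prior, then normalize.
p = np.linspace(1e-6, 1 - 1e-6, 10_000)
unnorm = stats.binom.pmf(y, n, p) * stats.beta.pdf(p, alpha, beta)
grid_posterior = unnorm / np.trapz(unnorm, p)

# Closed-form conjugate posterior: Beta(y + alpha, n - y + beta).
conjugate = stats.beta.pdf(p, y + alpha, n - y + beta)

# Maximum pointwise discrepancy should be tiny (numerical error only).
print(np.abs(grid_posterior - conjugate).max())
```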
Question 2.
Suppose that $X \sim \mathrm{Bin}(n, p)$, and let $X_1, \dots, X_m$ be a random sample from this distribution.
Solution.
a) The method of moments estimator of $p$ is obtained as follows (Taylor, 2020). The pmf is
$$f(x; p) = \binom{n}{x} p^{x} (1 - p)^{n - x}$$
for $x = 0, 1, \dots, n$ and 0 elsewhere.
By the MGF technique,
$$M(t) = \left[ (1 - p) + p e^{t} \right]^{n}.$$
Since there is only one parameter, only the first raw population moment is needed. Applying some differential calculus,
$$M'(t) = n p e^{t} \left[ (1 - p) + p e^{t} \right]^{n - 1},$$
so
$$\mu_1' = E(X) = M'(0) = np.$$
The corresponding sample raw moment is
$$M_1' = \frac{1}{m} \sum_{i=1}^{m} X_i = \bar{X},$$
the sample mean. Equating the two, $\mu_1' = M_1'$, yields
$$np = \bar{X} \quad \Rightarrow \quad \hat{p} = \frac{\bar{X}}{n}.$$
This implies that, given a random sample, the MoM estimator of $p$ is the sample mean divided by the number of trials $n$.
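As a quick sanity check (an illustrative sketch only; the sample size and true $p$ below are arbitrary choices, not from the assignment), simulating binomial data and applying $\hat{p} = \bar{X}/n$ recovers the true value:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative values: n = 10 trials per observation, true p = 0.3.
n, p_true, m = 10, 0.3, 100_000

x = rng.binomial(n, p_true, size=m)  # random sample X_1, ..., X_m ~ Bin(n, p)
p_mom = x.mean() / n                 # method-of-moments estimate X-bar / n

print(p_mom)  # close to the true p = 0.3
```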
b) The maximum likelihood estimate of $p$ is obtained as follows. The likelihood is
$$L(p) = \prod_{i=1}^{m} f(x_i; p) = \prod_{i=1}^{m} \binom{n}{x_i} p^{x_i} (1 - p)^{n - x_i} = k\, p^{\sum x_i} (1 - p)^{mn - \sum x_i},$$
where
$$k = \prod_{i=1}^{m} \binom{n}{x_i}.$$
Then the log-likelihood function is
$$l(p) = \ln k + \sum x_i \ln p + \left( mn - \sum x_i \right) \ln(1 - p).$$
Now taking the derivative of the log-likelihood and equating to zero yields
$$\frac{\partial l(p)}{\partial p} = \frac{\sum x_i}{p} - \frac{mn - \sum x_i}{1 - p} = 0.$$
Multiplying through by $p(1 - p)$ gives
$$\sum x_i - p \sum x_i - mnp + p \sum x_i = 0,$$
or $\sum x_i - mnp = 0$, so
$$\hat{p} = \frac{\sum x_i}{mn} = \frac{\bar{X}}{n}.$$
Hence $\hat{p}_{\mathrm{MLE}}$ coincides with $\hat{p}_{\mathrm{MME}}$ for the binomial distribution.
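To illustrate (a hedged sketch; SciPy's generic bounded optimizer is used here as a convenience, not something the assignment prescribes), one can maximize the log-likelihood numerically and check that the maximizer matches $\bar{X}/n$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

# Arbitrary illustrative values, as in the sketch above.
n, p_true, m = 10, 0.3, 5_000
x = rng.binomial(n, p_true, size=m)

def neg_log_lik(p):
    # Negative of the binomial log-likelihood, with the constant ln(k) dropped.
    return -(x.sum() * np.log(p) + (m * n - x.sum()) * np.log(1 - p))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(res.x, x.mean() / n)  # numerical MLE vs closed form X-bar / n: they agree
```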
Question 3.
In no more than 2 pages for each of the items below, describe the concept/method and provide one example for each of the following: Principles of data reduction.
During the inferential process about an unknown parameter $\theta$, the sample may be too large to interpret directly (Tuorla, n.d.). Any statistic $T(x)$ can be used to define a form of data summary, for instance the sample mean, sample variance, maximum, or minimum (Tuorla, n.d.). An example is the likelihood principle, which concerns the function of the parameter, formed from an observed sample, that contains the information about $\theta$. If $x$ and $y$ are two sample points such that $L(\theta \mid x)$ is proportional to $L(\theta \mid y)$, then there exists a constant $C(x, y)$ such that
$$L(\theta \mid x) = C(x, y)\, L(\theta \mid y) \quad \text{for all } \theta.$$
This implies that the conclusions drawn from the two samples should be identical (Tuorla, n.d.).
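As a concrete illustration (a standard textbook example sketched with arbitrary numbers, not taken from the assignment itself), observing 3 successes on trial 12 under two different sampling schemes, a fixed number of trials versus sampling until the 3rd success, gives likelihoods that differ only by a constant $C(x, y)$, so both lead to the same conclusions about $p$:

```python
import numpy as np
from scipy import stats

p = np.linspace(1e-6, 1 - 1e-6, 2_000)

# Scheme 1: fix n = 12 trials, count successes -> Binomial(12, p), y = 3.
L_x = stats.binom.pmf(3, 12, p)

# Scheme 2: sample until the 3rd success, which occurs on trial 12, i.e.
# 9 failures before the 3rd success -> negative binomial likelihood.
L_y = stats.nbinom.pmf(9, 3, p)

# The ratio C(x, y) is constant in p, so inferences about p coincide
# (e.g. both likelihoods are maximized at p = 3/12 = 0.25).
ratio = L_x / L_y
print(ratio.min(), ratio.max())          # both ~4.0: constant ratio
print(p[L_x.argmax()], p[L_y.argmax()])  # both ~0.25
```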
One method of finding estimators you have learnt in the topic.
One method of finding estimators is maximum likelihood estimation. Suppose the likelihood function depends on $k$ parameters $\theta_1, \theta_2, \dots, \theta_k$ (Dey et al., 2017). Choose as estimators those values of the parameters that maximize the likelihood function
$$L(\theta_1, \theta_2, \dots, \theta_k) = L(y_1, y_2, \dots, y_n \mid \theta_1, \theta_2, \dots, \theta_k)$$
(Dey et al., 2017). Take the log of the likelihood function, $l(\theta) = \ln L(\theta)$. Both the likelihood and its log attain their maximum at the same value $\hat{\theta}$; however, it is usually easier to maximize $l(\theta)$ (Dey et al., 2017; Properties of point estimators and methods of estimation, n.d.). An example: a Bernoulli random sample $Y_1, \dots, Y_n$ has likelihood function of the form
$$L(p) = p(y_1 \mid p) \cdots p(y_n \mid p) = p^{y'} (1 - p)^{n - y'}, \qquad y' = \sum y_i,$$
and a log-likelihood function of the form
$$l(p) = y' \ln p + (n - y') \ln(1 - p),$$
which yields the MLE equal to the sample mean, $\hat{p} = y'/n$.
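A quick grid check of this example (a sketch with arbitrary simulated data; the true $p = 0.6$ is an assumption for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)

# Arbitrary illustrative Bernoulli sample with true p = 0.6.
y = rng.binomial(1, 0.6, size=200)
y_sum, n = int(y.sum()), y.size

# Log-likelihood l(p) = y' ln p + (n - y') ln(1 - p), evaluated on a grid.
p = np.linspace(1e-6, 1 - 1e-6, 100_000)
loglik = y_sum * np.log(p) + (n - y_sum) * np.log(1 - p)

print(p[loglik.argmax()], y_sum / n)  # grid maximizer matches the sample mean
```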
Question 4.
Let $\theta \in \mathbb{R}$ be a parameter. Consider the following two random variables. $X$ is a continuous random variable with density function
$$f_X(x) = \begin{cases} \frac{1}{2}, & \theta - 1 \le x \le \theta + 1 \\ 0, & \text{otherwise}; \end{cases}$$
i.e. $X \sim \mathrm{Uniform}(\theta - 1, \theta + 1)$. $Y$ is a discrete random variable with probability function
$$p_Y(y) = \begin{cases} 0.8, & y = \theta \\ 0.1, & y = \theta - 1 \\ 0.1, & y = \theta + 1. \end{cases}$$
a) Verify that both $X$ and $Y$ are unbiased estimators of $\theta$.
For unbiasedness we require
$$E(\hat{\theta}) = \theta, \quad \text{i.e.} \quad E(\hat{\theta}) - \theta = 0.$$
For the r.v. $X$,
$$E(X) = \frac{a + b}{2} = \frac{(\theta - 1) + (\theta + 1)}{2} = \theta,$$
so $X$ is unbiased. For the r.v. $Y$,
$$E(Y) = \theta(0.8) + (\theta - 1)(0.1) + (\theta + 1)(0.1) = \theta,$$
which equals $\theta$, so $Y$ is unbiased as well.
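A simulation sketch (the value $\theta = 5$ and the replication count are arbitrary illustrative choices) confirms both expectations:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, N = 5.0, 1_000_000  # arbitrary true parameter and replication count

x = rng.uniform(theta - 1, theta + 1, size=N)  # X ~ Uniform(theta-1, theta+1)
y = rng.choice([theta - 1, theta, theta + 1], size=N, p=[0.1, 0.8, 0.1])

print(x.mean(), y.mean())  # both close to theta = 5.0, i.e. unbiased
```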
b) Determine which is the better estimator of $\theta$.
Since both estimators are unbiased, the mean squared error of each reduces to its variance:
$$\mathrm{MSE}(\hat{\theta}) = E\big[(\hat{\theta} - \theta)^2\big] = \mathrm{Var}(\hat{\theta}) + \big[E(\hat{\theta}) - \theta\big]^2 = \mathrm{Var}(\hat{\theta}).$$
For the r.v. $X$: the mean of the distribution is $\theta$, and the second moment is
$$E(X^2) = \mathrm{Var}(X) + [E(X)]^2 = \frac{[(\theta + 1) - (\theta - 1)]^2}{12} + \theta^2 = \frac{1}{3} + \theta^2,$$
so
$$\mathrm{MSE}(X) = E(X^2) - [E(X)]^2 = \frac{1}{3}.$$
For the r.v. $Y$: the mean of the distribution is also $\theta$, and the second moment is
$$E(Y^2) = \theta^2(0.8) + (\theta - 1)^2(0.1) + (\theta + 1)^2(0.1) = \theta^2 + 0.2,$$
so
$$\mathrm{MSE}(Y) = E(Y^2) - [E(Y)]^2 = 0.2.$$
Comparing the two,
$$\mathrm{MSE}(Y) = 0.2 < \tfrac{1}{3} = \mathrm{MSE}(X),$$
hence $Y$ is the better (more efficient) estimator of $\theta$.
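The comparison can be double-checked by Monte Carlo (again an illustrative sketch with an arbitrary $\theta$):

```python
import numpy as np

rng = np.random.default_rng(4)
theta, N = 5.0, 1_000_000  # arbitrary true parameter

x = rng.uniform(theta - 1, theta + 1, size=N)
y = rng.choice([theta - 1, theta, theta + 1], size=N, p=[0.1, 0.8, 0.1])

mse_x = np.mean((x - theta) ** 2)  # expect ~ 1/3
mse_y = np.mean((y - theta) ** 2)  # expect ~ 0.2

print(mse_x, mse_y)  # MSE(Y) < MSE(X): Y is the better estimator of theta
```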
References.
Dey, S., Raheem, E., & Mukherjee, S. (2017). Statistical properties and different methods of estimation of transmuted Rayleigh distribution. Revista Colombiana de Estadística. https://doi.org/10.15446/rce.v40n1.56153
Properties of point estimators and methods of estimation. (n.d.).
Taylor, C. (2020). Moment generating function for binomial distribution. https://www.thoughtco.com/moment-generating-function-binomial-distribution-3126454
Tuorla, S. M. (n.d.). Principles of data reduction: optical. 1–11.