Last Name 1

REGRESSION AND DISTRIBUTION PROBLEMS

Student's Name
Professor's Name
Course Name
Date
Regression and Distribution Problems

Question 1

Part A
The likelihood function can be maximized with respect to the parameter θ to obtain the maximum likelihood estimators of the parameters.¹ The maximum likelihood estimators are:

$$\hat\beta_1 = \frac{\sum_i (X_i - \bar X)(Y_i - \bar Y)}{\sum_i (X_i - \bar X)^2}, \qquad \hat\beta_2 = \bar Y - \hat\beta_1 \bar X, \qquad \hat v = \frac{1}{n}\sum_i (Y_i - \hat Y_i)^2$$

Part B
Fisher Information Matrix

Segment 1
For Model A, the Fisher information matrix is obtained from the log-likelihood

$$\ell(\beta_1,\beta_2,v) = \log\!\left[\frac{1}{(2\pi\sigma^2)^{n/2}}\, \exp\!\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i-\beta_1-\beta_2 t_i)^2\right)\right]$$

as $I(\beta_1,\beta_2,v) = -E\!\left[\dfrac{\partial^2 \ell}{\partial\theta\,\partial\theta^{\top}}\right]$.

For Model B,

¹ Montgomery, Douglas C., Elizabeth A. Peck, and G. Geoffrey Vining. Introduction to Linear Regression Analysis. Hoboken: John Wiley & Sons, 2012.
$$\ell(\beta_1,\beta_2,v) = \log\!\left[\frac{1}{(2\pi\sigma^2)^{n/2}}\, \exp\!\left(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i-\frac{\beta_1}{1+14e^{-\beta_2 t_i}}\right)^2\right)\right]$$

Segment 2
The inverse of the Fisher information matrix, evaluated at the maximum likelihood estimates, gives the asymptotic covariance matrix of the estimators:

For Model A: $I(\hat\beta_1,\hat\beta_2,\hat v)^{-1}$

For Model B: $I(\hat\beta_1,\hat\beta_2,\hat v)^{-1}$

Part C
[Scatter plot with fitted line]

Part D
The results of the estimation computations are given in the Appendix. Hence, for
Model A:
$$\hat X = 0.92655\,t - 0.59207$$

Therefore, the yield at times 0 and 80:
$$\hat X = (0.92655)(0) - 0.59207, \qquad \text{Pasture yield} = -0.59207$$
$$\hat X = (0.92655)(80) - 0.59207, \qquad \text{Pasture yield} = 73.53193$$

Model B:
$$\hat X = \frac{72.28575}{1+14e^{-0.0680184\,t}}$$

Therefore, the yield at times 0 and 80:
$$\hat X = \frac{72.28575}{1+14e^{-0.0680184 \times 0}}, \qquad \text{Pasture yield} = 4.8$$
$$\hat X = \frac{72.28575}{1+14e^{-0.0680184 \times 80}}, \qquad \text{Pasture yield} = 68.14571$$

Part E
Standard errors of the maximum likelihood estimators:

Model A: $\operatorname{se}(\hat\beta_1) = 0.0450088$, $\operatorname{se}(\hat\beta_2) = 2.20625$

Model B: $\operatorname{se}(\hat\beta_1) = 1.408413$,
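As a quick numeric check, the two fitted models from Part D can be evaluated at t = 0 and t = 80 in a short script. The coefficients below are the document's fitted values; the helper names `model_a` and `model_b` are only illustrative:

```python
import math

def model_a(t):
    # Fitted linear model from Part D: X = 0.92655*t - 0.59207
    return 0.92655 * t - 0.59207

def model_b(t):
    # Fitted logistic model from Part D: X = 72.28575 / (1 + 14*exp(-0.0680184*t))
    return 72.28575 / (1.0 + 14.0 * math.exp(-0.0680184 * t))

print(model_a(0), model_a(80))   # -0.59207 and 73.53193, as in Part D
print(model_b(0), model_b(80))   # ~4.82 and ~68.15, matching Part D up to rounding
```

Note that `model_b(0)` = 72.28575/15 ≈ 4.819, which the text rounds to 4.8.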
$\operatorname{se}(\hat\beta_2) = 0.0018188$

Part F
The best model to use is Model B: it fits the majority of the data points and explains a significant proportion of the variation in the dependent variable (pasture yield).

Question 2

Part A
X follows a binomial distribution, $X \sim \mathrm{Bin}(m,p)$. Hence the pmf can be rewritten as follows:

$$\binom{m}{x} p^x (1-p)^{m-x} = \exp\!\left[x\log\frac{p}{1-p} + m\log(1-p)\right]\binom{m}{x}$$

giving the exponential-family form with natural parameter $\eta(p) = \log\dfrac{p}{1-p}$.

Part B
X is a complete sufficient statistic because $E_\theta[g(X)] = 0$ for all θ implies $P_\theta(g(X) = 0) = 1$.²

Part C
(i) For a sufficient statistic such as X, there exists a measurable function $g: \Theta \to X$ such that $X = g(\Theta)$.
(ii) If we let θ equal 0, then $P_j(\theta) = 1$. As such, X is a complete sufficient statistic.

² Schervish, Mark J. Theory of Statistics. Berlin: Springer Science & Business Media, 2012.
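The algebra in Part A can be sanity-checked numerically: the ordinary binomial pmf and its exponential-family rewriting should agree for every x. The values m = 10 and p = 0.3 below are arbitrary illustrative choices:

```python
import math

def binom_pmf(x, m, p):
    # Standard binomial pmf: C(m, x) * p^x * (1-p)^(m-x)
    return math.comb(m, x) * p**x * (1 - p)**(m - x)

def expfam_pmf(x, m, p):
    # Exponential-family form: C(m, x) * exp(x*log(p/(1-p)) + m*log(1-p))
    eta = math.log(p / (1 - p))   # natural parameter eta(p)
    return math.comb(m, x) * math.exp(x * eta + m * math.log(1 - p))

# The two forms agree for every x in the support
for x in range(11):
    assert abs(binom_pmf(x, 10, 0.3) - expfam_pmf(x, 10, 0.3)) < 1e-12
```

The agreement holds because $p^x(1-p)^{m-x} = \exp[x\log p + (m-x)\log(1-p)]$, and regrouping the terms produces exactly the natural-parameter form above.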
Question 3

Part A
Given a statistic T, we have $\mathrm{Var}_\theta(T) < \infty$ for all θ. We also have $S = X_1 + X_2 + X_3$ and $0 < I(\theta) < \infty$. With this in mind, we can state that for all θ

$$\mathrm{Var}_\theta(T) \ge \frac{|\varphi'(\theta)|^2}{I(\theta)}$$

where $\varphi(\theta) = \dfrac{\theta}{\theta+1}$, so that

$$\varphi'(\theta) = \frac{d}{d\theta}\left(\frac{\theta}{\theta+1}\right) = \frac{1}{(\theta+1)^2}.$$

For an unbiased estimator T, $\varphi'(\theta) = \mathrm{Cov}\!\left[\dfrac{\partial \log p(S;\theta)}{\partial\theta},\, T\right]$ for $S = X_1+X_2+X_3$, and we know that $\mathrm{Cov}(P,Q) \triangleq E\big[(P - E[P])(Q - E[Q])\big]$.

We also know that

$$I(\theta) = -E\!\left[\frac{\partial^2}{\partial\theta^2}\log p(S;\theta)\right] \quad \text{for } S = X_1+X_2+X_3.$$

As such,

$$\mathrm{Var}_\theta(T) \ge \frac{\mathrm{Cov}\!\left[\frac{\partial \log p(S;\theta)}{\partial\theta},\, T\right]^2}{-E\!\left[\frac{\partial^2}{\partial\theta^2}\log p(S;\theta)\right]}.$$

By substituting $S = X_1+X_2+X_3$ into the inequality above and simplifying, we get:

$$\mathrm{Var}_\theta(T) \ge \frac{1}{(\theta+1)^4}\left\{\frac{1}{\theta}\left[\frac{1}{\theta+1} + 1 + \frac{1}{1-e^{-\theta}}\right] - \frac{e^{-\theta}}{(1-e^{-\theta})^2} - \frac{1}{(\theta+1)^2}\right\}^{-1}$$

This establishes the Cramér–Rao lower bound for T as an unbiased estimator of φ(θ).
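A quick finite-difference check (at arbitrary sample values of θ) confirms the derivative $\varphi'(\theta) = 1/(\theta+1)^2$ used in the numerator of the bound:

```python
def phi(theta):
    # Target functional: phi(theta) = theta / (theta + 1)
    return theta / (theta + 1.0)

def phi_prime(theta):
    # Analytic derivative from Part A: d/dtheta [theta/(theta+1)] = 1/(theta+1)^2
    return 1.0 / (theta + 1.0)**2

# Central finite-difference approximation agrees with the analytic form
h = 1e-6
for theta in (0.5, 1.0, 3.0):
    numeric = (phi(theta + h) - phi(theta - h)) / (2 * h)
    assert abs(numeric - phi_prime(theta)) < 1e-6
```

At θ = 0 the derivative equals 1, which is the value that enters the $(\theta+1)^{-4}$ factor of the simplified bound when θ → 0.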
Part B
We need to show that when θ takes the value 0, S takes the value 1, where $S = X_1 + X_2 + X_3$. Therefore at least one of the three must equal 1 (while the others take the value 0).

Focusing on $X_1$: S is a complete sufficient statistic because, if $E_\theta[g(S)] = 0$ for all θ, then $P_\theta(g(S) = 0) = 1$ (NB: $X_2$ and $X_3$ will equal 0).

We can test the proposition made above as follows. For $g(\theta) = 0$, hence $j = 0$, and in all cases where

$$j \in \{0, 1\} \text{ for a Bernoulli distribution}, \qquad j \in \{0, 1, \dots, k-1\} \text{ for a Poisson distribution}:$$

Let $X_1 = f_1(0\mid\theta) = \dfrac{1}{\theta+1}$; for θ = 0 we get $X_1 = 1$.

Let $X_2 = f_2(0\mid\theta) = \dfrac{e^{-\theta}\theta^j}{j!}$; for θ = 0 we get $X_2 = 0$.

Let $X_3 = f_3(0\mid\theta) = \dfrac{e^{-\theta}\theta^j}{j!\,(1-e^{-\theta})}$; we get $X_3 = 0$ because it does not have $g(\theta) = 0$, i.e. $j = 1, 2, \dots$

Hence $S = X_1 + X_2 + X_3 = 1 + 0 + 0 = 1$.

Since S = 1, we conclude that S is a complete sufficient statistic of θ.

Part C
Since S is a complete sufficient statistic, the expected value of the following expression will tend to φ(θ):
$$E\!\left[\frac{S\,(2^{S-1}-1)}{2^{S-1} + S\,(2^{S-1}-1)}\right]$$

This creates a one-to-one relationship with φ(θ), implying that the expression above is a UMVU estimator of φ(θ).

Bibliography

Montgomery, Douglas C., Elizabeth A. Peck, and G. Geoffrey Vining. Introduction to Linear Regression Analysis. Hoboken: John Wiley & Sons, 2012.

Schervish, Mark J. Theory of Statistics. Berlin: Springer Science & Business Media, 2012.