Homework 2: Statistical Analysis, Regression, Modeling, and R Code

1(a)
The least squares criterion is
$$S(\beta_0,\beta_1)=\sum_{i=1}^{n}(y_i-\beta_0-\beta_1 x_i)^2 .$$
Since a minimum of a differentiable function $f(x,y)$ satisfies $\frac{\partial f(x,y)}{\partial x}=0$ and $\frac{\partial f(x,y)}{\partial y}=0$, the least squares estimates satisfy
$$\frac{\partial S(\beta_0,\beta_1)}{\partial \beta_0}=0, \qquad \frac{\partial S(\beta_0,\beta_1)}{\partial \beta_1}=0 .$$
Now let $\beta_0^{*}=\beta_0+\beta_1\bar{x}$, where $\bar{x}=\frac{1}{n}\sum_{i=1}^{n}x_i$. Then
$$S(\beta_0,\beta_1)=\sum_{i=1}^{n}(y_i-\beta_0-\beta_1 x_i)^2=\sum_{i=1}^{n}\bigl(y_i-(\beta_0+\beta_1\bar{x})-\beta_1(x_i-\bar{x})\bigr)^2=\sum_{i=1}^{n}\bigl(y_i-\beta_0^{*}-\beta_1(x_i-\bar{x})\bigr)^2=S(\beta_0^{*},\beta_1).$$
We obtain $b_0^{*}$ and $b_1$ by minimizing $S(\beta_0^{*},\beta_1)$; moreover, $b_0$ is recovered through $b_0^{*}=b_0+b_1\bar{x}$. Therefore:
$$\frac{\partial S(\beta_0^{*},\beta_1)}{\partial \beta_0^{*}}=\frac{\partial}{\partial \beta_0^{*}}\left(\sum_{i=1}^{n}\bigl(y_i-\beta_0^{*}-\beta_1(x_i-\bar{x})\bigr)^2\right)=-\sum_{i=1}^{n}2\bigl(y_i-\beta_0^{*}-\beta_1(x_i-\bar{x})\bigr)=0 ,$$
where
$$\sum_{i=1}^{n}\beta_1(x_i-\bar{x})=\beta_1\sum_{i=1}^{n}(x_i-\bar{x})=\beta_1\Bigl(\sum_{i=1}^{n}x_i-n\bar{x}\Bigr)=\beta_1\Bigl(\sum_{i=1}^{n}x_i-\sum_{i=1}^{n}x_i\Bigr)=0 ,$$
and
$$-\sum_{i=1}^{n}2\bigl(y_i-\beta_0^{*}-\beta_1(x_i-\bar{x})\bigr)=-2\sum_{i=1}^{n}(y_i-\beta_0^{*})=-2\Bigl(\sum_{i=1}^{n}y_i-n\beta_0^{*}\Bigr)=0 .$$
Therefore
$$b_0^{*}=\frac{\sum_{i=1}^{n}y_i}{n}=\bar{y}.$$
But $\sum_{i=1}^{n}y_i$ was originally $0$, so clearly
$$b_0^{*}=\frac{0}{n}=0 .$$
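As a quick numerical check of this result (a sketch only, using simulated data and variable names that are not part of the assignment), fitting the centred model in R gives an intercept equal to $\bar{y}$, which is $0$ once $\sum_{i=1}^{n}y_i=0$:

# Illustrative check of 1(a): intercept of the centred model equals mean(y)
set.seed(1)
n <- 50
x <- rnorm(n)
y <- 2 + 3 * x + rnorm(n)
y <- y - mean(y)              # force sum(y_i) = 0, as assumed in the question

fit <- lm(y ~ I(x - mean(x))) # regression on the centred predictor
coef(fit)[1]                  # b0*: numerically zero, i.e. equal to mean(y)
mean(y)                       # also zero by construction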
1(b)
$$\frac{\partial S(b_0^{*},\beta_1)}{\partial \beta_1}=\frac{\partial}{\partial \beta_1}\left(\sum_{i=1}^{n}\bigl(y_i-\bar{y}-\beta_1(x_i-\bar{x})\bigr)^2\right)=-2\sum_{i=1}^{n}\bigl(y_i-\bar{y}-\beta_1(x_i-\bar{x})\bigr)(x_i-\bar{x})=0$$
$$\Rightarrow\quad \sum_{i=1}^{n}(y_i-\bar{y})(x_i-\bar{x})-\beta_1\sum_{i=1}^{n}(x_i-\bar{x})^2=0 .$$
Given that
$$s_{XX}=\sum_{i=1}^{n}(x_i-\bar{x})^2=\sum_{i=1}^{n}x_i^2-n\bar{x}^2
\qquad\text{and}\qquad
s_{XY}=\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})=\sum_{i=1}^{n}x_i y_i-n\bar{x}\bar{y},$$
therefore
$$b_1=\frac{\sum_{i=1}^{n}(y_i-\bar{y})(x_i-\bar{x})}{\sum_{i=1}^{n}(x_i-\bar{x})^2}=\frac{s_{XY}}{s_{XX}}.$$
But $b_1\approx\beta_1$, implying that the estimated $\beta_1$ is close to the true $\beta_1$.
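A corresponding numerical check of $b_1=s_{XY}/s_{XX}$ (again a sketch with simulated data, not the assignment's data):

# Illustrative check of 1(b): slope from lm() equals s_XY / s_XX
set.seed(1)
x <- rnorm(50)
y <- 2 + 3 * x + rnorm(50)

s_xx <- sum((x - mean(x))^2)
s_xy <- sum((x - mean(x)) * (y - mean(y)))

s_xy / s_xx                 # b1 from the formula above
coef(lm(y ~ x))["x"]        # slope reported by lm(); the two values agree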
1(c)(1)
In ridge regression and the LASSO, $\lambda$ is considered a shrinkage parameter. It therefore:
i. controls the size of the coefficients;
ii. controls the amount of regularization applied to the model;
iii. moreover, as $\lambda$ tends to 0 we obtain the least squares solution;
iv. also, as $\lambda$ tends to infinity, $\hat{\beta}_{\text{ridge}}$ tends to 0 (see the sketch after this list).
Therefore both are shrinkage methods.
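A minimal R sketch of this behaviour, assuming the glmnet package and a small simulated data set (the variables x, y and the $\lambda$ grid below are illustrative, not taken from the assignment):

# Illustrative ridge path: small lambda approaches least squares, large lambda shrinks to 0
library(glmnet)

set.seed(1)
x <- matrix(rnorm(100 * 5), 100, 5)
y <- drop(x %*% c(3, -2, 1.5, 0, 0) + rnorm(100))

grid  <- 10^seq(4, -4, length = 100)            # wide grid of lambda values
ridge <- glmnet(x, y, alpha = 0, lambda = grid) # alpha = 0 gives ridge regression

coef(ridge, s = 1e-4)   # tiny lambda: close to the least squares solution
coef(lm(y ~ x))         # ordinary least squares, for comparison
coef(ridge, s = 1e4)    # huge lambda: all coefficients shrunk towards 0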
1(c)(2)
The assumptions of ridge regression are broadly similar to those of least squares regression, except that normality is not assumed; the same is true of LASSO regression. However, ridge regression shrinks the coefficient values without setting them exactly to zero, whereas the LASSO shrinks some coefficients exactly to zero, which aids feature selection (illustrated in the sketch below).
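Continuing in the same illustrative spirit, a short sketch of this difference (again simulated data, not the assignment's): ridge keeps every coefficient non-zero, while the LASSO sets some exactly to zero.

# Illustrative ridge vs LASSO comparison with glmnet (simulated data)
library(glmnet)

set.seed(1)
x <- matrix(rnorm(100 * 5), 100, 5)
y <- drop(x %*% c(3, -2, 1.5, 0, 0) + rnorm(100))

grid <- 10^seq(2, -2, length = 50)
coef(glmnet(x, y, alpha = 0, lambda = grid), s = 1)  # ridge: shrunk but non-zero
coef(glmnet(x, y, alpha = 1, lambda = grid), s = 1)  # LASSO: some coefficients exactly zero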
2(a)
6(a)
Lag2 is the only significant predictor, with a p-value of 0.0296 at the 0.05 level of significance.
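The fitted model output is not reproduced in this extract; a sketch of the kind of fit that produces such a coefficient table, assuming a logistic regression of Direction on the lag variables and Volume from the Weekly data in the ISLR package (an assumption consistent with the predictor named above, not a confirmed detail of the assignment):

# Sketch of the logistic regression whose summary is discussed above (assumes ISLR's Weekly data)
library(ISLR)

glm_fit <- glm(Direction ~ Lag1 + Lag2 + Lag3 + Lag4 + Lag5 + Volume,
               data = Weekly, family = binomial)
summary(glm_fit)   # coefficient table, including the p-value for Lag2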
6(b)
6(c)
Confusion matrix
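The confusion matrix itself is not reproduced in this extract; a sketch of how such a matrix could be computed in R, under the same assumption that the model is the Weekly logistic regression above:

# Sketch: confusion matrix for the logistic regression on the Weekly data (assumed model)
library(ISLR)

glm_fit   <- glm(Direction ~ Lag1 + Lag2 + Lag3 + Lag4 + Lag5 + Volume,
                 data = Weekly, family = binomial)
glm_probs <- predict(glm_fit, type = "response")        # fitted P(Direction = "Up")
glm_pred  <- ifelse(glm_probs > 0.5, "Up", "Down")

table(Predicted = glm_pred, Actual = Weekly$Direction)  # confusion matrix
mean(glm_pred == Weekly$Direction)                      # overall fraction correct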
6(d)
6(e)