COMS W4721: Machine Learning - Homework 2, Spring 2019 - Solution

Table of Contents
Question 1
Question 2
References
Question 1
a)
Consider the naive Bayes classifier with binary labels $y \in \{0, 1\}$ and the model
$$y_i \overset{iid}{\sim} \mathrm{Bern}(\pi), \qquad x_{i,d} \mid y_i \sim \mathrm{Pois}(\lambda_{y_i,d}), \quad d = 1, \dots, D,$$
with independent Gamma priors on the Poisson parameters,
$$\lambda_{y,d} \overset{iid}{\sim} \mathrm{Gamma}(2, 1).$$
A new point $x_0$ is classified with the plug-in prediction rule
$$\hat{y}_0 = \arg\max_{y \in \{0,1\}} \; p(y_0 = y \mid \hat{\pi}) \prod_{d=1}^{D} p(x_{0,d} \mid \hat{\lambda}_{y,d}).$$
Since the posterior distribution is proportional to the prior distribution times the likelihood, the parameters are estimated by maximizing the joint log density,
$$(\hat{\pi}, \hat{\lambda}_{0,1}, \dots, \hat{\lambda}_{1,D}) = \arg\max_{\pi, \lambda} \; \sum_{i=1}^{n} \ln p(y_i \mid \pi) + \sum_{y=0}^{1} \sum_{d=1}^{D} \ln p(\lambda_{y,d}) + \sum_{i=1}^{n} \sum_{d=1}^{D} \ln p(x_{i,d} \mid \lambda_{y_i,d}).$$
Solving for $\hat{\pi}$: since $y_i \overset{iid}{\sim} \mathrm{Bern}(\pi)$, the observed pairs $(y_1, x_1), \dots, (y_n, x_n)$ contribute the label likelihood
$$p(y_1, \dots, y_n \mid \pi) = \prod_{i=1}^{n} \pi^{\mathbb{1}(y_i = 1)} (1 - \pi)^{\mathbb{1}(y_i = 0)},$$
so the terms of the log objective involving $\pi$ are
$$\sum_{i=1}^{n} \left[ \mathbb{1}(y_i = 1) \ln \pi + \mathbb{1}(y_i = 0) \ln(1 - \pi) \right] + \text{const}.$$
Setting the derivative with respect to $\pi$ to zero gives
$$\hat{\pi} = \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}(y_i = 1),$$
the empirical fraction of class-1 labels. By Bayes' rule, this estimate enters the prediction rule through $p(y_0 = y \mid \hat{\pi}) = \hat{\pi}^{\,\mathbb{1}(y=1)} (1 - \hat{\pi})^{\,\mathbb{1}(y=0)}$, i.e. $\hat{y}_0 = \arg\max_{y} p(y_0 = y \mid \hat{\pi}) \prod_{d=1}^{D} p(x_{0,d} \mid \hat{\lambda}_{y,d})$.
b)
Solving for $\hat{\lambda}_{y,d}$: with the $\mathrm{Gamma}(2, 1)$ prior, $p(\lambda_{y,d}) \propto \lambda_{y,d} e^{-\lambda_{y,d}}$, and the Poisson likelihood $x_{i,d} \mid y_i \sim \mathrm{Pois}(\lambda_{y_i,d})$, $d = 1, \dots, D$, the terms of the log objective involving $\lambda_{y,d}$ are
$$\ln \lambda_{y,d} - \lambda_{y,d} + \sum_{i : y_i = y} \left[ x_{i,d} \ln \lambda_{y,d} - \lambda_{y,d} \right] + \text{const}.$$
Setting the derivative with respect to $\lambda_{y,d}$ to zero,
$$\frac{1 + \sum_{i : y_i = y} x_{i,d}}{\lambda_{y,d}} - (1 + n_y) = 0, \qquad n_y = \sum_{i=1}^{n} \mathbb{1}(y_i = y),$$
which gives the MAP estimate
$$\hat{\lambda}_{y,d} = \frac{1 + \sum_{i : y_i = y} x_{i,d}}{1 + n_y}.$$
By conjugacy this is exactly the mode of the Gamma posterior of $\lambda_{y,d}$ given the class-$y$ data.
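As a sanity check, the two estimators can be computed directly on a small synthetic data set (a minimal Python sketch; the data and the function name are illustrative, not part of the assignment):

```python
# MAP estimation for the Poisson naive Bayes model with a Gamma(2,1) prior:
#   pi_hat        = fraction of class-1 labels (maximum likelihood)
#   lam_hat[y][d] = (1 + sum of x[i][d] over i with y_i = y) / (1 + n_y)

def fit_poisson_nb(X, y):
    n, D = len(X), len(X[0])
    pi_hat = sum(y) / n
    lam_hat = []
    for c in (0, 1):
        idx = [i for i in range(n) if y[i] == c]
        lam_hat.append([(1 + sum(X[i][d] for i in idx)) / (1 + len(idx))
                        for d in range(D)])
    return pi_hat, lam_hat

# Toy data: 4 points, 2 dimensions.
X = [[2, 0], [3, 1], [0, 4], [1, 5]]
y = [0, 0, 1, 1]
pi_hat, lam_hat = fit_poisson_nb(X, y)
print(pi_hat)      # 0.5
print(lam_hat[0])  # [(1 + 5) / 3, (1 + 1) / 3] = [2.0, 0.666...]
```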
Question 2
a)
The naive Bayes classifier derived in Question 1 can be implemented in Python and used to predict the label of each test data point (Stefanoiu, n.d.).
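A minimal sketch of the prediction step in Python (the parameter values below are illustrative; in the actual implementation $\hat{\pi}$ and $\hat{\lambda}_{y,d}$ come from the estimators derived in Question 1):

```python
import math

def predict(x0, pi_hat, lam_hat):
    """Return argmax over y of log p(y0=y | pi) + sum_d log Pois(x0[d] | lam[y][d])."""
    scores = []
    for c, prior in ((0, 1 - pi_hat), (1, pi_hat)):
        s = math.log(prior)
        for d, xd in enumerate(x0):
            lam = lam_hat[c][d]
            # Poisson log-pmf: x * ln(lam) - lam - ln(x!)
            s += xd * math.log(lam) - lam - math.lgamma(xd + 1)
        scores.append(s)
    return 0 if scores[0] >= scores[1] else 1

pi_hat = 0.5
lam_hat = [[2.0, 0.5], [0.5, 3.0]]  # illustrative MAP estimates
print(predict([3, 0], pi_hat, lam_hat))  # 0: high count in dimension 0
print(predict([0, 4], pi_hat, lam_hat))  # 1: high count in dimension 1
```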
b) Stem plot of the Poisson parameters $\hat{\lambda}_{y,d}$ for dimensions 54 to 57 of each class, averaged over the 10 runs.
c) The k-NN classifier is implemented in Python for $k = 1, 2, \dots, 20$: for each test point we compute the distances to the training points and take a majority vote among the $k$ nearest. The best prediction accuracy is obtained around $k = 15$. The prediction accuracy is plotted as a function of $k$ for $k = 1, \dots, 20$.
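A minimal k-NN sketch in Python (the $\ell_1$ distance and the toy data are illustrative assumptions, since the assignment's distance metric is not restated here):

```python
from collections import Counter

def knn_predict(x0, X_train, y_train, k):
    """Majority vote among the k training points nearest to x0 (l1 distance)."""
    nearest = sorted(range(len(X_train)),
                     key=lambda i: sum(abs(a - b) for a, b in zip(x0, X_train[i])))
    votes = Counter(y_train[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

X_train = [[0, 0], [0, 1], [5, 5], [6, 5]]
y_train = [0, 0, 1, 1]
print(knn_predict([1, 0], X_train, y_train, k=3))  # 0
print(knn_predict([5, 6], X_train, y_train, k=3))  # 1
```

Sweeping $k = 1, \dots, 20$ on a held-out set and recording the fraction of correct predictions gives the accuracy-versus-$k$ curve described above.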
d) The steepest ascent algorithm discussed in class is implemented for logistic regression and run on the training data (Chio et al., 2012).
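A sketch of the steepest ascent update for the logistic regression log-likelihood, assuming labels $y_i \in \{-1, +1\}$ and a fixed step size (the step size, iteration count, and toy data are all illustrative choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def steepest_ascent(X, y, step=0.1, iters=500):
    """Gradient ascent on L(w) = sum_i ln sigmoid(y_i * x_i^T w), y_i in {-1,+1}."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        # gradient of L: sum_i (1 - sigmoid(y_i x_i^T w)) * y_i * x_i
        g = X.T @ (y * (1.0 - sigmoid(y * (X @ w))))
        w = w + step * g
    return w

# Toy separable data; the first column is the intercept.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 3.0], [1.0, 4.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
w = steepest_ascent(X, y)
preds = np.sign(X @ w)
print(preds)  # [-1. -1.  1.  1.]
```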
e) Testing Newton's method. Newton's method replaces the objective with its second-order Taylor approximation around the current iterate $w_t$:
$$L(w) \approx L_0(w) \equiv L(w_t) + (w - w_t)^T \nabla L(w_t) + \frac{1}{2} (w - w_t)^T \nabla^2 L(w_t)\,(w - w_t).$$
Maximizing this quadratic approximation in closed form gives the update
$$w_{t+1} = \arg\max_w L_0(w) = w_t - \left( \nabla^2 L(w_t) \right)^{-1} \nabla L(w_t),$$
which replaces the steepest ascent step $w_{t+1} = w_t + \alpha_t \nabla L(w_t)$ of part (d). The method is run on the training data for iterations $t = 1, \dots, 100$.
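A sketch of the Newton update for logistic regression (here with labels $y_i \in \{0, 1\}$; the small ridge term on the Hessian and the toy data are illustrative choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def newton_logistic(X, y, iters=20):
    """Maximize the logistic log-likelihood with Newton's method, y in {0,1}."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ w)
        grad = X.T @ (y - p)               # gradient of the log-likelihood
        S = p * (1.0 - p)
        H = -(X.T * S) @ X                 # Hessian (negative semidefinite)
        H -= 1e-6 * np.eye(X.shape[1])     # small ridge for invertibility
        w = w - np.linalg.solve(H, grad)   # Newton step: w - H^{-1} grad
    return w

# Toy non-separable data; the first column is the intercept.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0],
              [1.0, 3.0], [1.0, 4.0], [1.0, 5.0]])
y = np.array([0.0, 1.0, 0.0, 1.0, 1.0, 1.0])
w = newton_logistic(X, y)
grad_at_w = X.T @ (y - sigmoid(X @ w))
print(np.abs(grad_at_w).max() < 1e-6)  # True: converged to the MLE
```

Because each step uses curvature information, far fewer iterations are needed than with a fixed-step gradient method.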
f)
The test results of Newton's method are summarized, as for the naive Bayes classifier, in a $2 \times 2$ table counting the predicted values $y'$ against the true $y$ values; the table is given below, considering first the case $y' = 0$.
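The $2 \times 2$ table can be produced with a few lines of Python (the label vectors below are illustrative; in the actual experiment `y_pred` would come from the trained classifier):

```python
def confusion_2x2(y_true, y_pred):
    """Counts of (true y, predicted y') pairs: rows = true y, columns = y'."""
    table = [[0, 0], [0, 0]]
    for t, p in zip(y_true, y_pred):
        table[t][p] += 1
    return table

y_true = [0, 0, 1, 1, 1, 0]   # illustrative true labels
y_pred = [0, 1, 1, 1, 0, 0]   # illustrative predictions
table = confusion_2x2(y_true, y_pred)
print(table)  # [[2, 1], [1, 2]]
accuracy = (table[0][0] + table[1][1]) / len(y_true)
print(accuracy)  # 4/6 = 0.666...
```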
References
Chio, C., Agapitos, A., Cagnoni, S., Cotta, C., Vega, F., Caro, G., Drechsler, R., Ekárt,
A., Esparcia-Alcázar, A., Farooq, M., Langdon, W., Merelo-Guervós, J., Preuss, M.,
Richter, H., Silva, S., Simões, A., Squillero, G., Tarantino, E., Tettamanzi, A., Togelius,
J., Urquhart, N., Uyar, A. and Yannakakis, G. (2012). Applications of Evolutionary
Computation. Berlin, Heidelberg: Springer Berlin Heidelberg.
Stefanoiu, D. (n.d.). Optimization in engineering sciences.