
# Assignment on Data Science PDF

Added on - 16 Nov 2021

11 pages · 1698 words

Data Science

Contents
1. Question
2. Question
3. Question
4. Question
References
1. Question

Ridge regression with weight vector $w \in \mathbb{R}^{m+1}$, written as a constrained least-squares problem to be solved with Lagrange multipliers:

$$\arg\min_w \; \frac{1}{2}\sum_{i=1}^{n}\left(y^{(i)} - w^\top x^{(i)}\right)^2 \quad \text{subject to} \quad \|w\|_2^2 \le t$$
a) State the constraints

The constraint is $\|w\|_2^2 \le t$, with Lagrange multiplier $\lambda \ge 0$. By complementary slackness, $\lambda\left(\|w\|_2^2 - t\right) = 0$: if the constraint is inactive at the minimum, i.e. $\|w\|_2^2 < t$, then $\lambda = 0$ and the constraint does not affect the solution; if it is active, then $\|w\|_2^2 = t$ and $\lambda$ may be positive, measuring how much the optimum changes under a small change of the constraint.

With $\lambda = 0$, the stationarity condition on the squared loss $\left(y^{(i)} - w^\top x^{(i)}\right)^2$ and the constraint term $\left(\|w\|_2^2 - t\right)$ reads

$$\nabla_w\left[\frac{1}{2}\sum_{i=1}^{n}\left(y^{(i)} - w^\top x^{(i)}\right)^2 + \frac{0}{2}\left(\|w\|_2^2 - t\right)\right] = 0,$$

so the penalty term vanishes and the problem reduces to ordinary least squares.
b) Optimisation problem using Lagrange multipliers

Distinguishing the cases $\lambda = 0$ and $\lambda > 0$, the penalised (Lagrangian) form of the problem is

$$\arg\min_w \; \frac{1}{2}\sum_{i=1}^{n}\left(y^{(i)} - w^\top x^{(i)}\right)^2 + \frac{\lambda}{2}\|w\|_2^2.$$

When $\lambda = 0$, the penalty term drops out,

$$\frac{1}{2}\sum_{i=1}^{n}\left(y^{(i)} - w^\top x^{(i)}\right)^2 + \frac{0}{2}\|w\|_2^2 = \frac{1}{2}\sum_{i=1}^{n}\left(y^{(i)} - w^\top x^{(i)}\right)^2,$$

and the objective is the ordinary least-squares loss.
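As a quick numerical illustration of the penalised objective above, the following sketch evaluates it on assumed toy data (the data, the helper name `ridge_objective`, and the coefficients are illustrative, not part of the assignment) and checks that $\lambda = 0$ recovers the plain least-squares loss:

```python
import numpy as np

# Toy data (assumed for illustration): n = 50 samples, m = 3 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

def ridge_objective(w, lam):
    """(1/2) sum_i (y_i - w.x_i)^2 + (lam/2) ||w||_2^2"""
    residual = y - X @ w
    return 0.5 * residual @ residual + 0.5 * lam * w @ w

w = np.zeros(3)
# With lam = 0 the penalty vanishes and only the squared-error term remains.
assert ridge_objective(w, 0.0) == 0.5 * (y @ y)
```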
A constrained optimisation problem combines the objective function with its constraints. Here the Lagrangian is convex in $w$: the prediction $w^\top x^{(i)}$ is linear in $w$, so the squared loss is convex, and the penalty $w^\top I w = \|w\|_2^2$ is a positive definite quadratic form (Adebayo Olorunsola, 2014).

For a quadratic form in $x$ with matrix $W$,

$$\nabla_x\left(x^\top W x\right) = \left(W + W^\top\right)x.$$

Applying this to the penalty term with $W = I$,

$$\nabla_w\left(w^\top I w\right) = \left(I + I^\top\right)w = 2w,$$

so for, say, $\lambda = 1$ the minimiser satisfies the stationarity condition

$$\nabla_w\left[\frac{1}{2}\sum_{i=1}^{n}\left(y^{(i)} - w^\top x^{(i)}\right)^2 + \frac{1}{2}\|w\|_2^2\right] = 0.$$

The Hessian of the penalty is

$$\nabla_w^2\left(w^\top I w\right) = 2I \succ 0,$$

so the penalty is strictly convex in $w$, and the full objective (a convex loss plus a strictly convex penalty) has a unique minimiser.
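The strict-convexity argument can be checked numerically. This sketch (on assumed toy data, not from the assignment) builds the Hessian of the full ridge objective, $X^\top X + \lambda I$, and confirms that all its eigenvalues are at least $\lambda > 0$:

```python
import numpy as np

# Toy design matrix (assumed for illustration): n = 20 samples, m = 4 features.
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 4))
lam = 0.5

# Hessian of (1/2)||y - Xw||^2 + (lam/2)||w||^2 with respect to w.
H = X.T @ X + lam * np.eye(4)

# X^T X is positive semidefinite, so every eigenvalue of H is >= lam > 0:
# H is positive definite and the objective is strictly convex in w.
eigvals = np.linalg.eigvalsh(H)
assert np.all(eigvals >= lam - 1e-9)
```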
c) Analytical solution via matrix algebra

In matrix form, the ridge residual sum of squares is

$$\mathrm{RSS}(w, \lambda) = (y - Xw)^\top (y - Xw) + \lambda\, w^\top w.$$

Minimising it is a straightforward application of matrix calculus. Setting the gradient to zero,

$$\nabla_w\,\mathrm{RSS}(w, \lambda) = 2\left(X^\top X\right)w - 2X^\top y + 2\lambda w = 0,$$

which simplifies to

$$2\left(X^\top X\right)w + 2\lambda w = 2X^\top y,$$

$$\left(X^\top X + \lambda I\right)w = X^\top y.$$

The ridge estimator is therefore

$$\hat{w}_{\text{ridge}} = \left(X^\top X + \lambda I\right)^{-1} X^\top y = \left(I + \lambda\left(X^\top X\right)^{-1}\right)^{-1}\hat{w}_{\text{OLS}}.$$
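The closed-form estimator and its relation to OLS can be verified directly. A minimal sketch, assuming toy data (the data and variable names are illustrative): it computes $\hat{w}_{\text{ridge}} = (X^\top X + \lambda I)^{-1} X^\top y$ and checks that it agrees with $(I + \lambda (X^\top X)^{-1})^{-1}\hat{w}_{\text{OLS}}$:

```python
import numpy as np

# Toy data (assumed for illustration): n = 100 samples, m = 3 features.
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, 0.0, -1.0]) + rng.normal(size=100)
lam = 1.0
m = X.shape[1]

# Ridge estimator: solve (X^T X + lam I) w = X^T y.
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(m), X.T @ y)

# OLS estimator, and the same ridge estimator expressed through it:
# w_ridge = (I + lam (X^T X)^{-1})^{-1} w_OLS.
w_ols = np.linalg.solve(X.T @ X, X.T @ y)
w_from_ols = np.linalg.solve(np.eye(m) + lam * np.linalg.inv(X.T @ X), w_ols)

assert np.allclose(w_ridge, w_from_ols)
```

Note that solving the linear system directly (`np.linalg.solve`) is preferred in practice over forming the explicit inverse; the inverse is used here only to mirror the identity being checked.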
d) Relationship between t and λ
