Stochastic Optimization Techniques Report
Stochastic Optimization
Problem 1
Consider a simple linear program with two variables and two constraints.
Minimize the objective function max sin(c1x1 + c2x2)
Subject to: a11x1 + a12x2 ≤ b1
a21x1 + a22x2 ≤ b2
x1, x2 ≥ 0
Suppose, however, that the constraints a11x1 + a12x2 ≤ b1 and a21x1 + a22x2 ≤ b2 only hold with unknown probabilities p1 and p2, because the coefficients are random.
Suppose α = 0.05. The chance-constrained problem is then:
Minimize the objective function max sin(c1x1 + c2x2)
Subject to: p1 + p2 ≥ 1 − α
You can enumerate the possibilities for x1 = 0 and x2 = 1, with α = 0.05:

a11  a12  Is a11·0 + a12·1 ≤ b1?
1    1    No  (p1)
2    1    No  (p2)

a21  a22  Is a21·0 + a22·1 ≤ b2?
1    1    No  (p1)
2    1    No  (p2)

Hence x1 = 0, x2 = 1 is not a feasible solution.
Since each constraint is satisfied with probability at most 1/2 = 0.5, the chance constraint is infeasible at this point: a feasible solution would need the constraints to hold with probability 1 − α = 0.95, and 0.5 < 0.95.
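This argument can be checked numerically. The sketch below enumerates the two equally likely coefficient scenarios from the table above; the evaluation point x1 = 1, x2 = 0 and the bound b1 = 1 are illustrative assumptions, not values given in the problem.

```python
# Two equally likely scenarios for (a11, a12), taken from the table above.
SCENARIOS = [(1, 1), (2, 1)]

def satisfaction_prob(x1, x2, b1):
    """Probability that a11*x1 + a12*x2 <= b1 over the two scenarios."""
    ok = sum(1 for a11, a12 in SCENARIOS if a11 * x1 + a12 * x2 <= b1)
    return ok / len(SCENARIOS)

# Illustrative point: the constraint holds in only one of the two scenarios,
# so its probability is 0.5, well below the required 1 - alpha = 0.95.
p = satisfaction_prob(1, 0, 1)
```

With these assumed values `p` comes out to 0.5, so a chance constraint at level 0.95 cannot be met at this point.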
Thus the problem:
Minimize the objective function max sin(c1x1 + c2x2)
Subject to: P(a11x1 + a12x2 ≤ b1 and a21x1 + a22x2 ≤ b2) ≥ 1 − α
x1, x2 ≥ 0
is now well-defined.
The problem contains only two variables, so it can be solved by a simple search procedure. For instance, when α = 0.05 the solution is x1 = 1, x2 = 2; when α = 0.01 it is x1 = 3, x2 = 0.
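A minimal sketch of such a search procedure, assuming an illustrative scenario model: the two equally likely coefficient scenarios, the right-hand sides b, the costs c, and the integer grid below are all assumptions, not the report's data.

```python
import math
from itertools import product

# Two equally likely scenarios for (A, b); illustrative values only.
SCENARIOS = [
    ([[1, 1], [1, 1]], [3, 3]),
    ([[2, 1], [1, 2]], [3, 3]),
]
c = [1.0, 2.0]

def chance_feasible(x, alpha):
    """Both constraints must hold jointly with probability >= 1 - alpha."""
    ok = sum(
        1
        for A, b in SCENARIOS
        if all(A[i][0] * x[0] + A[i][1] * x[1] <= b[i] for i in range(2))
    )
    return ok / len(SCENARIOS) >= 1 - alpha

def search(alpha, grid=range(4)):
    """Enumerate integer points; keep the feasible one with smallest objective."""
    best, best_val = None, float("inf")
    for x in product(grid, repeat=2):
        if chance_feasible(x, alpha):
            val = math.sin(c[0] * x[0] + c[1] * x[1])
            if val < best_val:
                best, best_val = x, val
    return best, best_val
```

With α = 0.05 the constraint must hold in both scenarios, and the search returns the feasible grid point with the smallest value of sin(c1x1 + c2x2).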
Problem 2
Consider b to be a random variable.
Start from the general formulation of an optimization problem under uncertainty:
Min f(x, ξ)
Subject to: g(x, ξ) = 0
h(x, ξ) ≥ 0
P(h(x, ξ) ≥ 0) ≥ p, where p ∈ [0, 1] is the probability level.
You can design an optimization problem with the random constraints and state the optimization model as:
Max ∑_{i=1}^{2} a_i x_i
Subject to: P(∑_{i=1}^{2} a_ij x_i ≥ ξ_j (j = 1, 2)) ≥ p
Assumptions:
f is the objective function.
g is the vector of equality constraints and h the vector of inequality constraints.
x is the decision vector.
ξ is the vector of uncertainty.
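A chance constraint of this form can be checked for a given x by Monte Carlo sampling. In the sketch below the fixed coefficients a_ij and the uniform distribution of the uncertain right-hand sides are illustrative assumptions:

```python
import random

random.seed(0)

# Assumed fixed coefficients a[i][j]; uncertain xi_j ~ Uniform(0, 1), j = 1, 2.
a = [[1.0, 0.5], [0.5, 1.0]]

def chance_constraint_holds(x, p, n=10_000):
    """Estimate P(sum_i a[i][j]*x[i] >= xi_j for all j) and compare with p."""
    hits = 0
    for _ in range(n):
        xi = [random.random() for _ in range(2)]
        if all(sum(a[i][j] * x[i] for i in range(2)) >= xi[j] for j in range(2)):
            hits += 1
    return hits / n >= p
```

For x = (1, 1) every row sum is at least 1.5, so the constraint holds with probability 1; for x = (0, 0) it essentially never holds.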
Problem 3
The Markowitz theory of mean-variance optimization provides a technique for selecting a portfolio of securities that trades off the expected return and the risk of a potential portfolio.
Consider securities S1 and S2 with random returns.
Assume that u_i is the expected return and σ_i the standard deviation of asset S_i.
Let u = [u1, u2]^T.
Expected return: E[x] = x1u1 + x2u2 = u^T x
Var[x] = ∑_{i,j} ρ_ij σ_i σ_j x_i x_j = x^T Σ x, where Σ is the covariance matrix with entries σ_ij = ρ_ij σ_i σ_j.
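A quick numerical check of this identity with illustrative two-asset numbers (the weights, returns, volatilities, and correlation below are assumptions):

```python
# Hypothetical two-asset data.
rho = 0.3
sd = [0.2, 0.4]          # standard deviations sigma_1, sigma_2
u = [0.08, 0.12]         # expected returns u_1, u_2
x = [0.6, 0.4]           # portfolio weights

# Covariance matrix Sigma with entries sigma_ij = rho_ij * sigma_i * sigma_j.
Sigma = [
    [sd[0] ** 2, rho * sd[0] * sd[1]],
    [rho * sd[0] * sd[1], sd[1] ** 2],
]

expected_return = sum(u[i] * x[i] for i in range(2))  # u^T x

# Elementwise double sum vs. the quadratic form x^T Sigma x.
var_sum = sum(Sigma[i][j] * x[i] * x[j] for i in range(2) for j in range(2))
tmp = [sum(Sigma[i][j] * x[j] for j in range(2)) for i in range(2)]  # Sigma x
var_quad = sum(x[i] * tmp[i] for i in range(2))                      # x^T (Sigma x)
```

Both computations give the same portfolio variance, confirming ∑_{i,j} σ_ij x_i x_j = x^T Σ x.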
Problem Formulation:
i) For two independent random coefficients:
Mean-variance objective function: Max_x sin(u1x1 + u2x2)
x^T Σ x ≤ σ²
Subject to: a11x1 + a12x2 = b1
a21x1 + a22x2 = b2
ii) For two jointly random coefficients:
Mean-variance objective function: Max sin(u^T x)
x^T Σ x = σ²
Subject to: a11x1 + a12x2 ≤ b1
a21x1 + a22x2 ≤ b2
The objective function is a risk-adjusted return; the variance bound σ² plays the role of the risk-aversion constant.
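Formulation (ii) can be solved approximately by a simple grid search over the weights. All numbers below (returns u, covariance Σ, risk bound σ², and linear constraint data) are illustrative assumptions, and the equality x^T Σ x = σ² is relaxed to ≤ so the grid can find feasible points:

```python
import math

# Hypothetical problem data.
u = [0.5, 1.0]
Sigma = [[0.04, 0.01], [0.01, 0.09]]
s2 = 0.05                      # risk bound sigma^2
A = [[1.0, 1.0], [1.0, 2.0]]   # linear constraint coefficients
b = [1.5, 2.0]

def feasible(x):
    """Variance bound, linear constraints, and non-negativity."""
    var = sum(Sigma[i][j] * x[i] * x[j] for i in range(2) for j in range(2))
    linear_ok = all(A[k][0] * x[0] + A[k][1] * x[1] <= b[k] for k in range(2))
    return var <= s2 and linear_ok and min(x) >= 0

def grid_search(steps=50):
    """Maximize sin(u^T x) over a grid of weights in [0, 1]^2."""
    best, best_val = None, -float("inf")
    for i in range(steps + 1):
        for j in range(steps + 1):
            x = (i / steps, j / steps)
            if feasible(x):
                val = math.sin(u[0] * x[0] + u[1] * x[1])
                if val > best_val:
                    best, best_val = x, val
    return best, best_val
```

A finer grid, or a proper quadratically constrained solver, would sharpen the answer; the sketch only illustrates that the two-variable problem is searchable directly.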
Problem 4
Non-linearities often arise when some of the coefficients in the model are random variables.
Suppose a linear program with two constraints:
Maximize sin(c1x1 + c2x2)
Subject to: a11x1 + a12x2 ≤ b1
a21x1 + a22x2 ≤ b2
The coefficients a1 and a2 are independently distributed.
G_i(y) denotes the probability that the random variable a_i is at least as large as y.
The probability of both constraints being satisfied must be at least β:
P[a11x1 + a12x2 ≤ b1] × P[a21x1 + a22x2 ≤ b2] ≥ β
This condition can be written as the following set of constraints:
−y1 + a11x1 + a12x2 ≤ b1
−y2 + a21x1 + a22x2 ≤ b2
G1(y1) × G2(y2) ≥ β
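The product condition on the tail probabilities G_i can be illustrated numerically. The sketch below assumes a simplified model in which only the coefficients a1, a2 multiplying x1 are random, each normal N(μ, σ²), so G_i is a normal tail probability; all numeric values are assumptions:

```python
import math

MU, SD = 1.0, 0.1  # assumed distribution a_i ~ N(MU, SD^2)

def G(y):
    """P(a >= y) for a ~ N(MU, SD^2), via the complementary error function."""
    return 0.5 * math.erfc((y - MU) / (SD * math.sqrt(2)))

def joint_prob(x1, x2, b1, b2):
    """P(a1*x1 + x2 <= b1) * P(a2*x1 + x2 <= b2) for independent a1, a2.

    For x1 > 0, a_i*x1 + x2 <= b_i is equivalent to a_i <= (b_i - x2)/x1,
    which has probability 1 - G((b_i - x2)/x1).
    """
    p1 = 1.0 - G((b1 - x2) / x1)
    p2 = 1.0 - G((b2 - x2) / x1)
    return p1 * p2

beta = 0.9
```

With b1 = b2 = 2 and x = (1, 0.5), the thresholds lie five standard deviations above the mean, so the product is essentially 1 and the β-constraint holds; shrinking b1, b2 to 1 drives the product toward 0.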