CHAPTER 18

Simulation Modeling

Simulation is the next best thing to observing a real system. It allows us to collect pertinent information about the behavior of the system with the passage of time. Simulation is not an optimization technique. Rather, it is used to estimate the measures of performance of a modeled system.

Modern simulation typically deals with situations that can be described in the context of a waiting line. This is not a limitation on the use of simulation because practically any operational situation can be viewed in some form as a waiting line. This is the reason simulation has enjoyed tremendous applications in communication networks, manufacturing, inventory control, consumer behavior, economic forecasting, biomedical systems, and war strategies and tactics.

A forerunner to present-day simulation is the Monte Carlo technique, a scheme that is aimed at estimating stochastic or deterministic parameters based on random sampling. The main difference between the two techniques is that in Monte Carlo the time element is not a pertinent factor. Examples of Monte Carlo applications include estimation of the area under a curve or, more generally, evaluation of multiple integrals, estimation of the constant π (= 3.14159), and matrix inversion.

Simulation is a statistical experiment, and hence its output must be interpreted by appropriate statistical tests. This important point is emphasized throughout the chapter.

18.1 MONTE CARLO SIMULATION

This section uses an example to demonstrate the Monte Carlo technique. The objective of the example is to emphasize the statistical nature of the simulation experiment.

Example 18.1-1

We will use Monte Carlo sampling to estimate the area of a circle whose equation is

(x − 1)² + (y − 2)² = 25

The radius of the circle is r = 5 cm, and its center is (x, y) = (1, 2).

FIGURE 18.1  Monte Carlo estimation of the area of a circle; the circle is enclosed in the square with corner points (−4, 7), (6, 7), (6, −3), and (−4, −3).

The procedure for estimating the area requires enclosing the circle "snugly" in a square whose side equals the diameter of the circle, as shown in Figure 18.1. The corner points are determined from the geometry of the square.

The estimation of the area of the circle is based on the assumption that all the points in the square are equally likely to occur. Thus, if m out of a random sample of n points in the square fall within the circle, then

(Estimate of the area of the circle) = (m/n)(area of the square) = (m/n)(10 × 10)

To ensure that all the points in the square are equally likely to occur, we represent the coordinates x and y of a point in the square by the following uniform distributions:

f₁(x) = 1/10, −4 ≤ x ≤ 6
f₂(y) = 1/10, −3 ≤ y ≤ 7

Example 18.3-2 (Exponential Distribution)

Given that the time t between successive arrivals has the exponential density f(t) = λe^{−λt}, t > 0, determine a random sample t from f(t). The cumulative density function is determined as

F(t) = ∫₀ᵗ λe^{−λx} dx = 1 − e^{−λt}, t ≥ 0

Setting R = F(t), we can solve for t, which yields

t = −(1/λ) ln(1 − R)

Because 1 − R is the complement of R, we may replace ln(1 − R) with ln(R). In terms of simulation, the result means that arrivals are spaced t time units apart. For example, given λ = 4 customers per hour and R = .9, the time period until the next arrival occurs is computed as

t₁ = −(1/4) ln(1 − .9) = .577 hour = 34.5 minutes

The values of R used to obtain successive samples must be selected randomly from a uniform (0, 1) distribution. We will show in Section 18.4 how these (0, 1) random values are generated during the course of the simulation.
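Both calculations above are easy to reproduce programmatically. The following Python sketch (ours, not part of the text, which works from tabulated (0, 1) random numbers; the function names and the seed are arbitrary choices) estimates the circle's area by the hit-ratio argument of Example 18.1-1 and draws an exponential interarrival sample by the inverse formula t = −(1/λ) ln R:

```python
import random
import math

def estimate_circle_area(n, seed=1):
    """Monte Carlo estimate of the area of (x-1)^2 + (y-2)^2 = 25.

    Points are sampled uniformly over the enclosing square
    [-4, 6] x [-3, 7]; the hit ratio times the square's area (100)
    estimates the circle's area (true value: 25*pi = 78.54).
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.uniform(-4.0, 6.0)
        y = rng.uniform(-3.0, 7.0)
        if (x - 1.0) ** 2 + (y - 2.0) ** 2 <= 25.0:
            hits += 1
    return (hits / n) * 100.0

def sample_exponential(lam, rng):
    """Inverse-transform sample t = -(1/lambda) ln(R), R ~ U(0, 1)."""
    return -math.log(rng.random()) / lam

rng = random.Random(1)
print(estimate_circle_area(10_000))    # close to 78.54
print(sample_exponential(4.0, rng))    # one interarrival time, in hours
```

As the sample size n grows, the estimate clusters more tightly around the exact area, which is the statistical behavior the chapter emphasizes.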
PROBLEM SET 18.3B

1. In Example 18.3-2, suppose that the first customer arrives at time 0. Use the first three random numbers in column 1 of Table 18.1 to generate the arrival times of the next three customers, and graph the resulting events on the time scale.

2. Uniform Distribution. Suppose that the time needed to manufacture a part on a machine is described by the following uniform distribution:

f(t) = 1/(b − a), a ≤ t ≤ b

Determine an expression for a sample t, given a uniform (0, 1) random number R.

Convolution Method. The idea of the convolution method is to express the desired sample as the statistical sum of other easy-to-sample random variables. Typical among these distributions are the Erlang and the Poisson, whose samples can be obtained from exponential distribution samples.

Example 18.3-3 (Erlang Distribution)

The m-Erlang random variable is defined as the statistical sum (convolution) of m independent and identically distributed exponential random variables. Let y represent the m-Erlang random variable; then

y = y₁ + y₂ + ⋯ + y_m

where y_i, i = 1, 2, …, m, are independent and identically distributed exponential random variables whose probability density function is defined as

f(y_i) = λe^{−λy_i}, y_i > 0, i = 1, 2, …, m

From Example 18.3-2, the ith exponential sample is

y_i = −(1/λ) ln(R_i), i = 1, 2, …, m

Thus, the m-Erlang sample is computed as

y = −(1/λ)(ln R₁ + ln R₂ + ⋯ + ln R_m) = −(1/λ) ln(R₁R₂⋯R_m)

To illustrate the use of the formula, suppose that m = 3 and λ = 4 events per hour. The first 3 random numbers in column 1 of Table 18.1 yield R₁R₂R₃ = (.0589)(.6733)(.4799) = .0190, which yields

y = −(1/4) ln(.019) = .991 hour

Example 18.3-4 (Poisson Distribution)

Section 17.3.1 shows that if the distribution of the time between the occurrence of successive events is exponential, then the distribution of the number of events per unit time must be Poisson, and vice versa. We use this relationship to sample the Poisson distribution.

Assume that the Poisson distribution has a mean value of λ events per unit time. Then the time between events is exponential with mean 1/λ time units. This means that a Poisson sample, n, will occur during t time units if, and only if,

Period until event n occurs ≤ t < Period until event n + 1 occurs

This condition translates to

t₁ + t₂ + ⋯ + t_n ≤ t < t₁ + t₂ + ⋯ + t_{n+1}, n > 0
0 ≤ t < t₁, n = 0

where t_i, i = 1, 2, …, n, is a sample from the exponential distribution with mean 1/λ. From the result in Example 18.3-3, we have

−(1/λ) ln(R₁R₂⋯R_n) ≤ t < −(1/λ) ln(R₁R₂⋯R_{n+1}), n > 0
0 ≤ t < −(1/λ) ln(R₁), n = 0

which reduces to

R₁R₂⋯R_n ≥ e^{−λt} > R₁R₂⋯R_{n+1}, n > 0
1 ≥ e^{−λt} > R₁, n = 0

To illustrate, suppose that λ = 4 events per hour and t = .5 hour, so that e^{−λt} = e^{−2} = .1353. The first random number in column 1 of Table 18.1, R₁ = .0589, is less than e^{−λt} = .1353. Hence, the corresponding sample is n = 0.

Example 18.3-5 (Normal Distribution)

The Central Limit Theorem (see Section 12.4.4) states that the sum (convolution) of n independent and identically distributed random variables becomes asymptotically normal as n becomes sufficiently large. We use this result to generate samples from a normal distribution with mean μ and standard deviation σ. Define

x = R₁ + R₂ + ⋯ + R_n

Because each R_i is uniform (0, 1) with mean 1/2 and variance 1/12, the sum x is asymptotically normal with mean n/2 and variance n/12. Thus

z = (x − n/2)/√(n/12)

is asymptotically standard normal, and

y = μ + σz

is a sample from the desired normal distribution. A convenient choice is n = 12, in which case z = x − 6.
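The three convolution samplers can be written compactly. The following Python sketch (ours, not from the text, which drives these computations from tabulated random numbers; the seed is an arbitrary choice) implements the Erlang, Poisson, and normal samples just derived:

```python
import math
import random

def erlang_sample(m, lam, rng):
    """m-Erlang sample: y = -(1/lam) * ln(R1 * R2 * ... * Rm)."""
    prod = 1.0
    for _ in range(m):
        prod *= rng.random()
    return -math.log(prod) / lam

def poisson_sample(lam, t, rng):
    """Count n satisfying R1...Rn >= e^{-lam*t} > R1...R(n+1)."""
    threshold = math.exp(-lam * t)
    n, prod = 0, rng.random()
    while prod > threshold:
        n += 1
        prod *= rng.random()
    return n

def normal_sample(mu, sigma, rng, n=12):
    """CLT sample: with n = 12, z = sum(R) - 6 is approx. N(0, 1)."""
    x = sum(rng.random() for _ in range(n))
    z = (x - n / 2.0) / math.sqrt(n / 12.0)
    return mu + sigma * z

rng = random.Random(1)
print(erlang_sample(3, 4.0, rng),
      poisson_sample(4.0, 0.5, rng),
      normal_sample(0.0, 1.0, rng))
```

Note how the Poisson loop multiplies one new R at a time: it stops as soon as the running product R₁⋯R_{n+1} drops to e^{−λt} or below, returning the count n, exactly as in the inequality chain above.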
19.4 LINEAR PROGRAMMING SOLUTION

The infinite-stage Markovian decision problem without discounting can be expressed as

Maximize E = Σ_{i=1}^{m} π_i (Σ_{k=1}^{K} q_i^k v_i^k)

subject to

πP = π
π₁ + π₂ + ⋯ + π_m = 1
Σ_{k=1}^{K} q_i^k = 1, i = 1, 2, …, m
π_i ≥ 0, q_i^k ≥ 0, for all i and k

Note that P is a function of the policy selected and hence of the specific alternatives k of the policy. The problem can be converted into a linear program by making proper substitutions involving q_i^k. Observe that the formulation is equivalent to the original one in Section 19.3.1 only if q_i^k = 1 for exactly one k for each i, which will reduce the sum Σ_{k=1}^{K} q_i^k v_i^k to v_i^{k*}, where k* is the optimal alternative chosen. The linear program we develop here does account for this condition automatically.

Define

w_ik = π_i q_i^k, for all i and k

By definition, w_ik represents the joint probability of the system being in state i and the decision maker choosing alternative k. From probability theory,

π_i = Σ_{k=1}^{K} w_ik

Hence,

q_i^k = w_ik / Σ_{k=1}^{K} w_ik

Thus the restriction Σ_{i=1}^{m} π_i = 1 can be written as

Σ_{i=1}^{m} Σ_{k=1}^{K} w_ik = 1

Also, the restriction Σ_{k=1}^{K} q_i^k = 1 is automatically implied by the way we defined q_i^k in terms of w_ik. (Verify!) Thus the problem can be written as

Maximize E = Σ_{i=1}^{m} Σ_{k=1}^{K} v_i^k w_ik

subject to

Σ_{k=1}^{K} w_jk − Σ_{i=1}^{m} Σ_{k=1}^{K} p_ij^k w_ik = 0, j = 1, 2, …, m
Σ_{i=1}^{m} Σ_{k=1}^{K} w_ik = 1
w_ik ≥ 0, i = 1, 2, …, m; k = 1, 2, …, K

The resulting model is a linear program in w_ik. Its optimal solution automatically guarantees that q_i^k = 1 for one k for each i. First, note that the linear program has m independent equations (one of the equations associated with πP = π is redundant). Hence, the problem must have m basic variables. It can be shown that w_ik must be strictly positive for at least one k for each i. From these two results, we conclude that

q_i^k = w_ik / Σ_{k=1}^{K} w_ik

can assume a binary value (0 or 1) only. (As a matter of fact, the preceding result also shows that π_i = Σ_{k=1}^{K} w_ik = w_{ik*}, where k* is the alternative corresponding to w_{ik*} > 0.)

Example 19.4-1

The following is an LP formulation of the gardener problem without discounting:

Maximize E = 5.3w₁₁ + 4.7w₁₂ + 3w₂₁ + 3.1w₂₂ − w₃₁ + .4w₃₂

subject to

w₁₁ + w₁₂ − (.2w₁₁ + .3w₁₂ + .1w₂₂ + .05w₃₂) = 0
w₂₁ + w₂₂ − (.5w₁₁ + .6w₁₂ + .5w₂₁ + .6w₂₂ + .4w₃₂) = 0
w₃₁ + w₃₂ − (.3w₁₁ + .1w₁₂ + .5w₂₁ + .3w₂₂ + w₃₁ + .55w₃₂) = 0
w₁₁ + w₁₂ + w₂₁ + w₂₂ + w₃₁ + w₃₂ = 1
w_ik ≥ 0, for all i and k

The optimal solution is w₁₁ = w₂₁ = w₃₁ = 0 and w₁₂ = .1017, w₂₂ = .5254, and w₃₂ = .3729. This result means that q₁² = q₂² = q₃² = 1. Thus, the optimal policy selects alternative k = 2 for i = 1, 2, and 3. The optimal value of E is

4.7(.1017) + 3.1(.5254) + .4(.3729) = 2.256

The positive values of w_ik exactly equal the values of π_i associated with the optimal policy in the exhaustive enumeration procedure of Example 19.3-1, which demonstrates the direct relationship between the two methods.
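The LP of Example 19.4-1 is small enough to solve with any LP code. The following Python sketch (ours, not from the text; it assumes scipy is available, and since linprog minimizes, the objective is negated) sets up the model with the variables ordered (w₁₁, w₁₂, w₂₁, w₂₂, w₃₁, w₃₂):

```python
import numpy as np
from scipy.optimize import linprog

c = -np.array([5.3, 4.7, 3.0, 3.1, -1.0, 0.4])   # maximize E
A_eq = np.array([
    [1 - .2, 1 - .3, 0.0, -.1, 0.0, -.05],        # state-1 balance
    [-.5, -.6, 1 - .5, 1 - .6, 0.0, -.4],         # state-2 balance
    [-.3, -.1, -.5, -.3, 1 - 1.0, 1 - .55],       # state-3 balance
    [1, 1, 1, 1, 1, 1],                           # sum of w_ik = 1
])
b_eq = np.array([0.0, 0.0, 0.0, 1.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
print(res.x.round(4))   # expect w12 = .1017, w22 = .5254, w32 = .3729
print(-res.fun)         # expect E = 2.256
```

One of the three balance rows is redundant (they sum to the zero vector), which mirrors the remark above that only m equations are independent; the solver's presolve handles the consistent redundancy.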
We next consider the Markovian decision problem with discounting. In Section 19.3.2 the problem is expressed by the recursive equation

f(i) = max_k { v_i^k + α Σ_{j=1}^{m} p_ij^k f(j) }, i = 1, 2, …, m

These equations are equivalent to

f(i) ≥ v_i^k + α Σ_{j=1}^{m} p_ij^k f(j), for all i and k

provided that f(i) achieves its minimum value for each i. Now consider the objective function

Minimize Σ_{i=1}^{m} b_i f(i)

where b_i (> 0 for all i) is an arbitrary constant. It can be shown that the optimization of this function subject to the inequalities given will result in the minimum value of f(i). Thus, the problem can be written as

Minimize Σ_{i=1}^{m} b_i f(i)

subject to

f(i) − α Σ_{j=1}^{m} p_ij^k f(j) ≥ v_i^k, for all i and k
f(i) unrestricted in sign for all i

Now the dual of the problem is

Maximize Σ_{i=1}^{m} Σ_{k=1}^{K} v_i^k w_ik

subject to

Σ_{k=1}^{K} w_jk − α Σ_{i=1}^{m} Σ_{k=1}^{K} p_ij^k w_ik = b_j, j = 1, 2, …, m
w_ik ≥ 0, i = 1, 2, …, m; k = 1, 2, …, K

Example 19.4-2

Consider the gardener problem given the discounting factor α = .6. If we let b₁ = b₂ = b₃ = 1, the dual LP problem may be written as

Maximize 5.3w₁₁ + 4.7w₁₂ + 3w₂₁ + 3.1w₂₂ − w₃₁ + .4w₃₂

subject to

w₁₁ + w₁₂ − .6(.2w₁₁ + .3w₁₂ + .1w₂₂ + .05w₃₂) = 1
w₂₁ + w₂₂ − .6(.5w₁₁ + .6w₁₂ + .5w₂₁ + .6w₂₂ + .4w₃₂) = 1
w₃₁ + w₃₂ − .6(.3w₁₁ + .1w₁₂ + .5w₂₁ + .3w₂₂ + w₃₁ + .55w₃₂) = 1
w_ik ≥ 0, for all i and k

The optimal solution is w₁₂ = w₂₁ = w₃₁ = 0 and w₁₁ = 1.5678, w₂₂ = 3.3528, and w₃₂ = 2.8145. The solution shows that the optimal policy is (1, 2, 2).

PROBLEM SET 19.4A

1. Formulate the following problems as linear programs.
(a) Problem 1, Set 19.3B
(b) Problem 2, Set 19.3B
(c) Problem 3, Set 19.3B

19.5 APPENDIX: REVIEW OF MARKOV CHAINS

Consider the discrete points in time {t_k} for k = 1, 2, …, and let ξ_{t_k} be the random variable that characterizes the state of the system at t_k. The family of random variables {ξ_{t_k}} forms a stochastic process. The states at time t_k actually represent the (exhaustive and mutually exclusive) outcomes of the system at that time. The number of states may thus be finite or infinite. For example, the Poisson distribution

p_n(t) = e^{−λt}(λt)^n / n!, n = 0, 1, 2, …

represents a stochastic process with an infinite number of states. The random variable n represents the number of occurrences between 0 and t (assuming that the system starts at time 0). The states of the system at any time t are thus given by n = 0, 1, 2, …

Another example is the coin tossing game with k trials. Each trial may be viewed as a point in time. The resulting sequence of trials forms a stochastic process. The state of the system at any trial is either a head or a tail.

This section presents a summary of a class of stochastic systems that includes Markov processes and Markov chains. A Markov chain is a special case of Markov processes. It is used to study the short- and long-run behavior of certain stochastic systems.

19.5.1 Markov Processes

The occurrence of a future state in a Markov process depends on the immediately preceding state and only on it. If t₀ < t₁ < ⋯ < t_n (n = 0, 1, 2, …) represents points in time, the family of random variables {ξ_{t_n}} is a Markov process if it possesses the following Markovian property:

P{ξ_{t_n} = x_n | ξ_{t_{n−1}} = x_{n−1}, …, ξ_{t_0} = x₀} = P{ξ_{t_n} = x_n | ξ_{t_{n−1}} = x_{n−1}}

for all possible values of ξ_{t_0}, ξ_{t_1}, …, ξ_{t_n}.

The probability p_{x_{n−1}, x_n} = P{ξ_{t_n} = x_n | ξ_{t_{n−1}} = x_{n−1}} is called the transition probability. It represents the conditional probability of the system being in x_n at t_n, given it was in x_{n−1} at t_{n−1}. This probability is also referred to as the one-step transition probability because it describes the system between t_{n−1} and t_n. An m-step transition probability is thus defined by

p_{x_n, x_{n+m}} = P{ξ_{t_{n+m}} = x_{n+m} | ξ_{t_n} = x_n}

19.5.2 Markov Chains

Let E_j, j = 0, 1, 2, …, represent the exhaustive and mutually exclusive outcomes (states) of a system at any time. Initially, at time t₀, the system may be in any of these states. Let a_j^(0), j = 0, 1, 2, …, be the absolute probability that the system is in state E_j at t₀. Assume further that the system is Markovian. Define

p_ij = P{ξ_{t_n} = j | ξ_{t_{n−1}} = i}

as the one-step transition probability of moving from state i at t_{n−1} to state j at t_n, and assume that these probabilities are stationary over time.
The transition probabilities from state E_i to state E_j can be more conveniently arranged in a matrix form as follows:

P = | p₀₀  p₀₁  p₀₂  p₀₃  … |
    | p₁₀  p₁₁  p₁₂  p₁₃  … |
    | p₂₀  p₂₁  p₂₂  p₂₃  … |
    | ⋮                      |

The matrix P is called a homogeneous transition matrix because all the transition probabilities p_ij are fixed and independent of time. The probabilities p_ij must satisfy the conditions

Σ_j p_ij = 1, for all i
p_ij ≥ 0, for all i and j

The transition matrix P together with the initial probabilities {a_j^(0)} associated with the states E_j completely define a Markov chain. One usually thinks of a Markov chain as describing the transitional behavior of a system over equal intervals. Situations exist where the length of the interval depends on the characteristics of the system and hence may not be equal. This case is referred to as an imbedded Markov chain.

Absolute and Transition Probabilities. Given {a_j^(0)} and P of a Markov chain, the absolute probabilities of the system after a specified number of transitions are determined as follows. Let {a_j^(n)} be the absolute probabilities of the system after n transitions—that is, at t_n. The general expression of {a_j^(n)} in terms of {a_j^(0)} and P is computed as

a_j^(1) = Σ_i a_i^(0) p_ij

Also,

a_j^(2) = Σ_i a_i^(1) p_ij = Σ_i (Σ_k a_k^(0) p_ki) p_ij = Σ_k a_k^(0) (Σ_i p_ki p_ij) = Σ_k a_k^(0) p_kj^(2)

where p_kj^(2) = Σ_i p_ki p_ij is the two-step or second-order transition probability—that is, the probability of going from state k to state j in exactly two transitions.

It can be shown by induction that

a_j^(n) = Σ_i a_i^(0) p_ij^(n)

where p_ij^(n) is the n-step or nth-order transition probability given by the recursive formula

p_ij^(n) = Σ_k p_ik^(n−1) p_kj

In general,

p_ij^(n) = Σ_k p_ik^(n−m) p_kj^(m), 0 < m < n, for all i and j

These equations are known as the Chapman-Kolmogorov equations.

The elements of a higher transition matrix ||p_ij^(n)|| are obtained directly by matrix multiplication. Thus,

||p_ij^(2)|| = ||p_ij|| ||p_ij|| = P²
||p_ij^(3)|| = ||p_ij^(2)|| ||p_ij|| = P³

and, in general,

||p_ij^(n)|| = P^{n−1} P = P^n

Hence, if the absolute probabilities are defined in vector form as

a^(n) = (a₁^(n), a₂^(n), a₃^(n), …)

then

a^(n) = a^(0) P^n

Classification of States in Markov Chains. In Markov chains, we may be interested in the behavior of the system over a short period of time. This is represented by the absolute probabilities as shown in the preceding section. An important investigation involves the long-run behavior of the system as the number of transitions tends to infinity. In such a case we need a systematic procedure that will predict the long-run behavior of the system.

Irreducible Markov Chain. A Markov chain is said to be irreducible if every state E_j can be reached from every other state E_i after a finite number of transitions—that is,

p_ij^(n) > 0, for some n ≥ 1 and all i ≠ j

The number of steps before the system returns to E_j is called the first return time. Let f_jj^(n) denote the probability that the first return to E_j occurs at the nth step. Then, given the transition matrix P = ||p_ij||, an expression for f_jj^(n) can be determined as follows:

p_jj = f_jj^(1)
p_jj^(2) = f_jj^(2) + f_jj^(1) p_jj

or

f_jj^(2) = p_jj^(2) − f_jj^(1) p_jj

By induction,

f_jj^(n) = p_jj^(n) − Σ_{m=1}^{n−1} f_jj^(m) p_jj^(n−m)

The probability of at least one return to state E_j is then given by

f_jj = Σ_{n=1}^{∞} f_jj^(n)

Thus, the system is certain to return to E_j if f_jj = 1. In this case, μ_jj = Σ_{n=1}^{∞} n f_jj^(n) defines the mean return (recurrence) time. If f_jj < 1, it is not certain that the system will return to E_j, and consequently, μ_jj = ∞.
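The matrix-power relation a^(n) = a^(0) P^n above is directly computable. The following small Python sketch (ours, not from the text) iterates the Chapman-Kolmogorov recursion with numpy, using the two-state transition matrix that also appears in Example 19.5-3 below; the initial distribution is an assumption for illustration:

```python
import numpy as np

P = np.array([[0.2, 0.8],
              [0.6, 0.4]])
a = np.array([1.0, 0.0])      # assumed initial distribution a(0)
for n in range(8):
    a = a @ P                 # a(n) = a(n-1) P
print(a)                      # approaches (.4286, .5714)
```

After only eight transitions the absolute probabilities are already essentially independent of the starting distribution, which is the long-run property discussed next.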
The states of a Markov chain can be classified based on the definition of the first return times as follows:

1. A state is transient if f_jj < 1—that is, μ_jj = ∞.
2. A state is recurrent (persistent) if f_jj = 1.
3. A recurrent state is null if μ_jj = ∞ and nonnull if μ_jj < ∞.
4. A state is periodic with period t if a return is possible only in t, 2t, 3t, … steps. This means that p_jj^(n) = 0 whenever n is not divisible by t.
5. A recurrent state is ergodic if it is nonnull and aperiodic (not periodic).

If all the states of a Markov chain are ergodic, then the chain is irreducible. In this case, the absolute probabilities

a^(n) = a^(0) P^n

always converge uniquely to a limiting distribution as n → ∞, where the limiting distribution is independent of the initial probabilities a^(0). The following theorem is now in order:

Theorem 19.5-1. All the states in an irreducible infinite Markov chain may belong to one, and only one, of three classes: transient, recurrent null, and recurrent nonnull. In every case all the states communicate, and they have the same period. For the special case where the chain has a finite number of states, the chain cannot consist of transient states only, nor can it contain any null states.

Limiting Distribution of Irreducible Chains. Example 19.5-1 shows that as the number of transitions increases, the absolute probability becomes independent of the initial distribution. This is the long-run property of Markov chains. In this section, determination of the limiting (long-run) distribution of an irreducible chain is presented. The discussion is restricted to the aperiodic type, because it is the only type needed in this text.

The existence of a limiting distribution in an irreducible aperiodic chain depends on the class of its states. Thus, considering the three classes given in Theorem 19.5-1, the following theorem can be stated:

Theorem 19.5-2. In an irreducible aperiodic Markov chain,

(a) If all the states are transient or null, then p_ij^(n) → 0 as n → ∞ for all i and j, and no limiting distribution exists.
(b) If all the states are ergodic, then the limit

π_j = lim_{n→∞} a_j^(n), j = 1, 2, …

exists, where π_j is the limiting (steady-state) distribution. The probabilities π_j exist uniquely and are independent of a^(0). In this case, π_j can be determined from the set of equations²

π_j = Σ_i π_i p_ij
1 = Σ_j π_j

The mean recurrence time for state j is then given by

μ_jj = 1/π_j

Example 19.5-3

To determine the steady-state probability distribution in Example 19.5-1, consider

π₁ = .2π₁ + .6π₂
π₂ = .8π₁ + .4π₂
π₁ + π₂ = 1

The solution yields π₁ = .4286 and π₂ = .5714. These results are very close to the row values of a^(8) in Example 19.5-1. The mean recurrence times for states 1 and 2 are

μ₁₁ = 1/π₁ = 2.33
μ₂₂ = 1/π₂ = 1.75

Example 19.5-4

Consider a Markov chain with the three states 0, 1, and 2 whose transition matrix P is doubly stochastic—that is,

Σ_{i=0}^{2} p_ij = Σ_{j=0}^{2} p_ij = 1

In such cases, the steady-state probabilities are

π₀ = π₁ = π₂ = 1/3

²One of the equations π_j = Σ_i π_i p_ij is redundant.
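The steady-state equations of Theorem 19.5-2 form a linear system once one redundant balance equation is replaced by the normalization condition. A minimal Python sketch (ours, not from the text), applied to the chain of Example 19.5-3:

```python
import numpy as np

P = np.array([[0.2, 0.8],
              [0.6, 0.4]])
m = P.shape[0]

# Balance equations pi (P - I) = 0, written as (P^T - I) pi = 0;
# drop one redundant row and append the normalization sum(pi) = 1.
A = np.vstack([(P.T - np.eye(m))[:-1], np.ones(m)])
b = np.zeros(m)
b[-1] = 1.0

pi = np.linalg.solve(A, b)
print(pi)          # (.4286, .5714)
print(1.0 / pi)    # mean recurrence times (2.33, 1.75)
```

The same few lines work for any irreducible aperiodic chain; only the matrix P changes.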
PROBLEM SET 19.5A

1. Classify the states of each of the following Markov chains and find their stationary distributions:
(a) [transition matrix illegible in the source]
(b) [transition matrix illegible in the source]

2. Find the mean recurrence time for each state of the following Markov chain:
[transition matrix illegible in the source]

CHAPTER 20

Classical Optimization Theory

Classical optimization theory uses differential calculus to determine points of maxima and minima (extrema) for unconstrained and constrained functions. The methods may not be suitable for efficient numerical computations, but the underlying theory provides the basis for most nonlinear programming algorithms (see Chapter 21).

This chapter develops necessary and sufficient conditions for determining unconstrained extrema, the Jacobian and Lagrangean methods for problems with equality constraints, and the Karush-Kuhn-Tucker conditions for problems with inequality constraints.

20.1 UNCONSTRAINED PROBLEMS

An extreme point of a function f(X) defines either a maximum or a minimum of the function. Mathematically, a point X₀ = (x₁⁰, …, x_j⁰, …, x_n⁰) is a maximum if

f(X₀ + h) ≤ f(X₀)

for all h = (h₁, …, h_j, …, h_n), where |h_j| is sufficiently small for all j. In other words, X₀ is a maximum if the value of f at every point in the neighborhood of X₀ does not exceed f(X₀). In a similar manner, X₀ is a minimum if

f(X₀ + h) ≥ f(X₀)

FIGURE 20.1  Examples of extreme points for a single-variable function f(x).

Figure 20.1 illustrates the maxima and minima of a single-variable function f(x) over the interval [a, b]. The points x₁, x₂, x₃, x₄, and x₆ are all extrema of f(x), with x₁, x₃, and x₆ as maxima and x₂ and x₄ as minima. Because

f(x₆) = max{f(x₁), f(x₃), f(x₆)}

f(x₆) is a global or absolute maximum, and f(x₁) and f(x₃) are local or relative maxima. Similarly, f(x₄) is a local minimum and f(x₂) is a global minimum.

Although x₁ (in Figure 20.1) is a maximum point, it differs from the remaining local maxima in that the value of f corresponding to at least one point in the neighborhood of x₁ is equal to f(x₁). In this respect, x₁ is a weak maximum, whereas x₃ and x₆ are strong maxima. In general, X₀ is a weak maximum if f(X₀ + h) ≤ f(X₀) and a strong maximum if f(X₀ + h) < f(X₀), where h is as defined earlier.

In Figure 20.1, the first derivative (slope) of f equals zero at all extrema. This property is also satisfied at inflection and saddle points, such as x₅. If a point with zero slope (gradient) is not an extremum (maximum or minimum), then it must be an inflection or a saddle point.

20.1.1 Necessary and Sufficient Conditions

This section develops the necessary and sufficient conditions for an n-variable function f(X) to have extrema. It is assumed that the first and second partial derivatives of f(X) are continuous at every X.

Theorem 20.1-1. A necessary condition for X₀ to be an extreme point of f(X) is that

∇f(X₀) = 0

Proof. By Taylor's theorem, for 0 < θ < 1,

f(X₀ + h) − f(X₀) = ∇f(X₀)h + (1/2) hᵀHh|_{X₀ + θh}

where h is as defined earlier. For sufficiently small |h_j|, the remainder term (1/2) hᵀHh is of the order h_j²; hence

f(X₀ + h) − f(X₀) = ∇f(X₀)h + O(h_j²) ≈ ∇f(X₀)h

It can be shown by contradiction that ∇f(X₀) must vanish at a minimum point X₀. Otherwise, if it does not, then for a specific j the following condition will hold:

∂f(X₀)/∂x_j < 0 or > 0

By selecting h_j with the appropriate sign, it is always possible to have

h_j ∂f(X₀)/∂x_j < 0
Setting all other h_j equal to zero, Taylor's expansion yields

f(X₀ + h) < f(X₀), h ≠ 0

This result contradicts the assumption that X₀ is a minimum, so ∇f(X₀) must vanish. A similar argument applies to the maximization case.

Theorem 20.1-2. A sufficient condition for a stationary point X₀ to be an extremum is that the Hessian matrix H evaluated at X₀ is (i) positive-definite when X₀ is a minimum point, and (ii) negative-definite when X₀ is a maximum point.

Proof. By Taylor's theorem, for 0 < θ < 1,

f(X₀ + h) − f(X₀) = ∇f(X₀)h + (1/2) hᵀHh|_{X₀ + θh}

Given that X₀ is a stationary point, ∇f(X₀) = 0 by Theorem 20.1-1, so

f(X₀ + h) − f(X₀) = (1/2) hᵀHh|_{X₀ + θh}

Thus, for X₀ to be a minimum point, it must be true that

(1/2) hᵀHh|_{X₀ + θh} > 0

Given that the second partial derivative is continuous, the expression (1/2) hᵀHh must have the same sign at both X₀ and X₀ + θh. Because hᵀHh|_{X₀} defines a quadratic form (see Section A.3), this expression (and hence hᵀHh|_{X₀ + θh}) is positive if, and only if, H|_{X₀} is positive-definite. This means that a sufficient condition for the stationary point X₀ to be a minimum is that the Hessian matrix, H, evaluated at the same point is positive-definite. A similar proof for the maximization case shows that the corresponding Hessian matrix must be negative-definite.

Example 20.1-1

Consider the function

f(x₁, x₂, x₃) = x₁ + 2x₃ + x₂x₃ − x₁² − x₂² − x₃²

The necessary condition ∇f(X₀) = 0 gives

∂f/∂x₁ = 1 − 2x₁ = 0
∂f/∂x₂ = x₃ − 2x₂ = 0
∂f/∂x₃ = 2 + x₂ − 2x₃ = 0

The solution of these simultaneous equations is given by

X₀ = (1/2, 2/3, 4/3)

To establish sufficiency, consider

H|_{X₀} = | ∂²f/∂x₁²    ∂²f/∂x₁∂x₂  ∂²f/∂x₁∂x₃ |   | −2  0  0 |
          | ∂²f/∂x₂∂x₁  ∂²f/∂x₂²    ∂²f/∂x₂∂x₃ | = |  0 −2  1 |
          | ∂²f/∂x₃∂x₁  ∂²f/∂x₃∂x₂  ∂²f/∂x₃²   |   |  0  1 −2 |

The principal minor determinants of H|_{X₀} have the values −2, 4, and −6, respectively. Thus, as shown in Section A.3, H|_{X₀} is negative-definite and X₀ = (1/2, 2/3, 4/3) represents a maximum point.

In general, if H|_{X₀} is indefinite, X₀ must be a saddle point. For nonconclusive cases, X₀ may or may not be an extremum, and the sufficiency condition becomes rather involved because higher-order terms in Taylor's expansion must be considered.

The sufficiency condition established by Theorem 20.1-2 applies to single-variable functions as follows. Given that y₀ is a stationary point, then

1. y₀ is a maximum if f″(y₀) < 0.
2. y₀ is a minimum if f″(y₀) > 0.

If in the single-variable case f″(y₀) = 0, higher-order derivatives must be investigated, as the following theorem requires.

Theorem 20.1-3. Given y₀, a stationary point of f(y), if the first (n − 1) derivatives are zero and f⁽ⁿ⁾(y₀) ≠ 0, then

1. y₀ is an inflection point if n is odd.
2. y₀ is a minimum if n is even and f⁽ⁿ⁾(y₀) > 0.
3. y₀ is a maximum if n is even and f⁽ⁿ⁾(y₀) < 0.

Example 20.1-2

Figure 20.2 graphs the following two functions:

f(y) = y⁴
g(y) = y³

For f(y) = y⁴,

f′(y) = 4y³ = 0

which yields the stationary point y₀ = 0. Now

f′(0) = f″(0) = f⁽³⁾(0) = 0, f⁽⁴⁾(0) = 24 > 0

Hence, y₀ = 0 is a minimum point (see Figure 20.2).

FIGURE 20.2  Extreme points of f(y) = y⁴ and g(y) = y³.

For g(y) = y³,

g′(y) = 3y² = 0

This yields y₀ = 0 as a stationary point. Also

g′(0) = g″(0) = 0, g⁽³⁾(0) = 6 ≠ 0

Thus, y₀ = 0 is an inflection point.

PROBLEM SET 20.1A

1. Examine the following functions for extreme points.
(a) f(x) = x³ + x
(b) f(x) = x⁴ + x²
(c) f(x) = 4x⁴ − x² + 5
(d) f(x) = (3x − 2)²(2x − 3)²
(e) f(x) = 6x⁵ − 4x³ + 10

2. Examine the following functions for extreme points.
(a) f(X) = x₁³ + x₂³ − 3x₁x₂
(b) f(X) = 2x₁² + x₂² + x₃² + 6(x₁ + x₂ + x₃) + 2x₁x₂x₃

3. Verify that the function

f(x₁, x₂, x₃) = 2x₁x₂x₃ − 4x₁x₃ − 2x₂x₃ + x₁² + x₂² + x₃² − 2x₁ − 4x₂ + 4x₃

has the stationary points (0, 3, 1), (0, 1, −1), (1, 2, 0), (2, 1, 1), and (2, 3, −1). Use the sufficiency condition to find the extreme points.

4. Solve the following simultaneous equations by converting the system to a nonlinear objective function with no constraints.

x₂ − x₁² = 0
x₂ − x₁ = 2

(Hint: min f²(x₁, x₂) occurs at f(x₁, x₂) = 0.)

5. Prove Theorem 20.1-3.
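Problems of this kind can be checked mechanically by computer algebra. The following Python sketch (ours, not from the text; it assumes sympy is available) applies the conditions of Theorems 20.1-1 and 20.1-2 to Example 20.1-1:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1 + 2*x3 + x2*x3 - x1**2 - x2**2 - x3**2
X = [x1, x2, x3]

# Necessary condition: gradient = 0.
grad = [sp.diff(f, v) for v in X]
stationary = sp.solve(grad, X, dict=True)[0]   # {x1: 1/2, x2: 2/3, x3: 4/3}

# Sufficiency: signs of the leading principal minors of the Hessian.
H = sp.hessian(f, X).subs(stationary)
minors = [H[:k, :k].det() for k in range(1, 4)]
print(stationary, minors)                      # minors -2, 4, -6 -> maximum
```

The alternating minor signs (−, +, −) confirm that the Hessian is negative-definite, hence the stationary point is a maximum, as shown above.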
20.1.2 The Newton-Raphson Method

In general, the necessary condition equations, ∇f(X) = 0, may be difficult to solve numerically. The Newton-Raphson method is an iterative procedure for solving simultaneous nonlinear equations. Although the method is presented here in this context, it is actually part of the gradient methods for optimizing unconstrained functions numerically (see Section 21.1.2).

Consider the simultaneous equations

f_i(X) = 0, i = 1, 2, …, m

Let Xᵏ be a given point. Then by Taylor's expansion

f_i(X) ≈ f_i(Xᵏ) + ∇f_i(Xᵏ)(X − Xᵏ), i = 1, 2, …, m

Thus, the original equations, f_i(X) = 0, i = 1, 2, …, m, may be approximated as

f_i(Xᵏ) + ∇f_i(Xᵏ)(X − Xᵏ) = 0, i = 1, 2, …, m

These equations may be written in matrix notation as

A_k + B_k(X − Xᵏ) = 0

If B_k is nonsingular, then

X = Xᵏ − B_k⁻¹A_k

The idea of the method is to start from an initial point X⁰. By using the foregoing equation, a new point Xᵏ⁺¹ is determined from Xᵏ. The procedure ends with Xᵐ as the solution when Xᵐ ≈ Xᵐ⁻¹.

A geometric interpretation of the method is illustrated by a single-variable function in Figure 20.3. The relationship between xᵏ and xᵏ⁺¹ for a single-variable function f(x) reduces to

tan θ = f′(xᵏ) = f(xᵏ)/(xᵏ − xᵏ⁺¹)

or

xᵏ⁺¹ = xᵏ − f(xᵏ)/f′(xᵏ)

The figure shows that xᵏ⁺¹ is determined from the slope of f(x) at xᵏ, where tan θ = f′(xᵏ).

FIGURE 20.3  Illustration of the iterative process in the Newton-Raphson method: the tangent to f(x) at xᵏ intersects the x-axis at xᵏ⁺¹, and the iterates approach the convergence point (solution).

One difficulty with the method is that convergence is not always guaranteed unless the function f is well behaved. In Figure 20.3, if the initial point is a, the method will diverge. There is no easy way for locating a "good" initial point.

Example 20.1-3

To demonstrate the use of the Newton-Raphson method, consider determining the stationary points of the function

f(x) = (3x − 2)²(2x − 3)²

The equation we need to solve to determine the stationary points is f′(x) = 0, which gives

72x³ − 234x² + 241x − 78 = 0

Excel template ch20NewtonRaphson.xls can be used to solve any single-variable equation. Figure 20.4 provides the iterations for solving f′(x) = 0. The template requires entering the following ratio in cell C3, with the variable x replaced with A3:

(72x³ − 234x² + 241x − 78)/(216x² − 468x + 241)

Note that the denominator is the first derivative of the numerator, as required by the Newton-Raphson method. We set the tolerance limit Δ = .001 and select the initial starting point x⁰ = 10. The tolerance limit specifies the allowable difference between xᵏ and xᵏ⁺¹ that signals the termination of the iterations. The method converges to x = 1.5.

FIGURE 20.4  Excel solution of 72x³ − 234x² + 241x − 78 = 0 by the Newton-Raphson method; the iterates converge to x = 1.5.

Actually, f(x) has three stationary points, at x = 2/3, x = 13/12, and x = 3/2. The remaining two points can be found by selecting different values for the initial x⁰. In fact, x⁰ = .5 and x⁰ = 1 should yield the missing stationary points. You are encouraged to use different initial x⁰ to get a feel for how the method works. In general, the Newton-Raphson method requires making several attempts before "all" the solutions can be found. In the present example, we know beforehand that the equation has three roots. This will not be the case with complex or multivariable functions, however.
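The same computation is a few lines of code. The following Python sketch (ours; the text itself uses the Excel template) implements the recursion x^{k+1} = x^k − f(x^k)/f′(x^k) and applies it to the cubic of Example 20.1-3 with the same tolerance and starting points:

```python
def newton_raphson(f, fprime, x0, tol=0.001, max_iter=100):
    """Iterate x <- x - f(x)/f'(x) until successive iterates agree."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:       # |x(k+1) - x(k)| < tolerance
            return x_new
        x = x_new
    raise RuntimeError("no convergence; try another starting point")

f = lambda x: 72*x**3 - 234*x**2 + 241*x - 78
fp = lambda x: 216*x**2 - 468*x + 241

print(newton_raphson(f, fp, 10.0))   # converges to 1.5
print(newton_raphson(f, fp, 0.5))    # converges to 2/3
print(newton_raphson(f, fp, 1.0))    # converges to 13/12
```

Trying the three starting points shows concretely how the root reached depends on the initial guess, which is the point made in the closing remark of the example.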
PROBLEM SET 20.1B

1. Use ch20NewtonRaphson.xls to solve Problem 1(c), Set 20.1A.
2. Solve Problem 2(b), Set 20.1A by the Newton-Raphson method.

20.2 CONSTRAINED PROBLEMS

This section deals with the optimization of constrained continuous functions. Section 20.2.1 introduces the case of equality constraints, and Section 20.2.2 deals with inequality constraints. The presentation in Section 20.2.1 is covered for the most part in Beightler and associates (1979, pp. 45-55).

20.2.1 Equality Constraints

This section presents two methods: the Jacobian and the Lagrangean. The Lagrangean method can be developed logically from the Jacobian method. This relationship provides an interesting economic interpretation of the Lagrangean method.

Constrained Derivatives (Jacobian) Method. Consider the problem

Minimize z = f(X)

subject to

g(X) = 0

where

X = (x₁, x₂, …, x_n)
g = (g₁, g₂, …, g_m)ᵀ

The functions f(X) and g_i(X), i = 1, 2, …, m, are twice continuously differentiable.

The idea of using constrained derivatives is to develop a closed-form expression for the first partial derivatives of f(X) at all points that satisfy the constraints g(X) = 0. The corresponding stationary points are identified as the points at which these partial derivatives vanish. The sufficiency conditions introduced in Section 20.1 can then be used to check the identity of the stationary points.

To clarify the proposed concept, consider f(x₁, x₂) illustrated in Figure 20.5. This function is to be minimized subject to the constraint

g₁(x₁, x₂) = x₂ − b = 0

where b is a constant.

FIGURE 20.5  Demonstration of the idea of the Jacobian method: the curve through the points A, B, and C traces f(x₁, x₂) along the constraint g₁(X) = x₂ − b = 0, and the contour of the constrained optimum objective value touches it at the constrained minimum B.

From Figure 20.5, the curve designated by the three points A, B, and C represents the values of f(x₁, x₂) for which the constraint is always satisfied. The constrained derivatives method defines the gradient of f(x₁, x₂) at any point on the curve ABC. Point B, at which the constrained derivative vanishes, is a stationary point for the constrained problem.

The method is now developed mathematically. By Taylor's theorem, for X + ΔX in the feasible neighborhood of X, we have

f(X + ΔX) − f(X) = ∇f(X)ΔX + O(Δx_j²)

and

g(X + ΔX) − g(X) = ∇g(X)ΔX + O(Δx_j²)

As Δx_j → 0, the equations reduce to

∂f(X) = ∇f(X)∂X
∂g(X) = ∇g(X)∂X

For feasibility, we must have g(X) = 0, ∂g(X) = 0, and it follows that

∂f(X) − ∇f(X)∂X = 0
∇g(X)∂X = 0

This gives (m + 1) equations in (n + 1) unknowns, ∂f(X) and ∂X. Note that ∂f(X) is a dependent variable and, hence, is determined once ∂X is known. This means that, in effect, we have m equations in n unknowns.

If m > n, at least (m − n) equations are redundant. Eliminating redundancy, the system reduces to m ≤ n. If m = n, the solution is ∂X = 0, and X has no feasible neighborhood, which means that the solution space consists of one point only. The remaining case, where m < n, requires further elaboration.

Define

X = (Y, Z)

such that

Y = (y₁, y₂, …, y_m), Z = (z₁, z₂, …, z_{n−m})

The vectors Y and Z are called the dependent and independent variables, respectively.
Rewriting the gradient vectors of f and g in terms of Y and Z, we get

∇f(Y, Z) = (∇_Y f, ∇_Z f)
∇g(Y, Z) = (∇_Y g, ∇_Z g)

Define

J = ∇_Y g = | ∇_Y g₁ |        C = ∇_Z g = | ∇_Z g₁ |
            |   ⋮    |                    |   ⋮    |
            | ∇_Y g_m |                   | ∇_Z g_m |

J_{m×m} is called the Jacobian matrix and C_{m×(n−m)} the control matrix. The Jacobian J is assumed nonsingular. This is always possible because the given m equations are independent by definition. The components of the vector Y must thus be selected such that the matrix J is nonsingular.

The original set of equations in ∂f(X) and ∂X may be written as

∂f(Y, Z) = ∇_Y f ∂Y + ∇_Z f ∂Z

and

J∂Y = −C∂Z

Because J is nonsingular, its inverse J⁻¹ exists. Hence,

∂Y = −J⁻¹C∂Z

Substituting for ∂Y in the equation for ∂f(X) gives ∂f as a function of ∂Z—that is,

∂f(Y, Z) = (∇_Z f − ∇_Y f J⁻¹C)∂Z

From this equation, the constrained derivative with respect to the independent vector Z is given by

∇_c f = ∂_c f(Y, Z)/∂Z = ∇_Z f − ∇_Y f J⁻¹C

where ∇_c f is the constrained gradient vector of f with respect to Z. Thus, ∇_c f(Y, Z) must be null at the stationary points.

The sufficiency conditions are similar to those developed in Section 20.1. The Hessian matrix will correspond to the independent vector Z, and the elements of the Hessian matrix must be the constrained second derivatives. To show how this is obtained, let

∇_c f = ∇_Z f − WC

It thus follows that the ith row of the (constrained) Hessian matrix is ∂∇_c f/∂z_i. Notice that W is a function of Y and Y is a function of Z. Thus, the partial derivative of ∇_c f with respect to z_i is based on the chain rule

∂w_j/∂z_i = Σ_k (∂w_j/∂y_k)(∂y_k/∂z_i)

Example 20.2-1

Consider the following problem:

f(X) = x₁² + 3x₂² + 5x₁x₃²
g₁(X) = x₁x₃ + 2x₂ + x₂² − 11 = 0
g₂(X) = x₁² + 2x₁x₂ + x₃² − 14 = 0

Given the feasible point X⁰ = (1, 2, 3), we wish to study the variation in f (= ∂_c f) in the feasible neighborhood of X⁰.

Let

Y = (x₁, x₃) and Z = x₂

Thus,

∇_Y f = (∂f/∂x₁, ∂f/∂x₃) = (2x₁ + 5x₃², 10x₁x₃)
∇_Z f = ∂f/∂x₂ = 6x₂

J = | ∂g₁/∂x₁  ∂g₁/∂x₃ | = | x₃         x₁  |
    | ∂g₂/∂x₁  ∂g₂/∂x₃ |   | 2x₁ + 2x₂  2x₃ |

C = | ∂g₁/∂x₂ | = | 2 + 2x₂ |
    | ∂g₂/∂x₂ |   | 2x₁     |

Suppose that we need to estimate ∂_c f in the feasible neighborhood of the feasible point X⁰ = (1, 2, 3), given a small change ∂x₂ = .01 in the independent variable x₂. We have

J = | 3 1 |,  J⁻¹ = | 1/2  −1/12 |,  C = | 6 |
    | 6 6 |         | −1/2   1/4 |       | 2 |

Hence, the incremental value of constrained f is given as

∂_c f = (∇_Z f − ∇_Y f J⁻¹C)∂x₂ = (6x₂ − (47, 30) J⁻¹C)∂x₂ = −46.01∂x₂

By specifying the value of ∂x₂ for the independent variable x₂, feasible values of ∂x₁ and ∂x₃ are determined for the dependent variables x₁ and x₃ using the formula

∂Y = −J⁻¹C∂Z

Thus, for ∂x₂ = .01,

(∂x₁, ∂x₃) = −J⁻¹C ∂x₂ = (−.0283, .025)

We now compare the value of ∂_c f as computed above with the difference f(X⁰ + ∂X) − f(X⁰), given ∂x₂ = .01:

X⁰ + ∂X = (1 − .0283, 2 + .01, 3 + .025) = (.9717, 2.01, 3.025)

This yields

f(X⁰) = 58, f(X⁰ + ∂X) = 57.523

or

f(X⁰ + ∂X) − f(X⁰) = −.477

The amount −.477 compares favorably with ∂_c f = −46.01∂x₂ = −.4601. The difference between the two values is the result of the linear approximation in computing ∂_c f at X⁰.

PROBLEM SET 20.2A

1. Consider Example 20.2-1.
(a) Compute ∂_c f by the two methods presented in the example, using ∂x₂ = .001 instead of ∂x₂ = .01. Does the effect of linear approximation become more negligible with the decrease in the value of ∂x₂?
(b) Specify a relationship among ∂x₁, ∂x₂, and ∂x₃ at the feasible point X⁰ = (1, 2, 3) that will keep the point (x₁⁰ + ∂x₁, x₂⁰ + ∂x₂, x₃⁰ + ∂x₃) feasible.
(c) If Y = (x₂, x₃) and Z = x₁, what is the value of ∂x₁ that will produce the same value of ∂_c f given in the example?
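The matrix arithmetic of Example 20.2-1 is convenient to verify numerically. The following Python sketch (ours, not from the text) evaluates the constrained derivative and the feasible change in the dependent variables, and then compares the linear estimate against a direct evaluation of f:

```python
import numpy as np

x1, x2, x3 = 1.0, 2.0, 3.0                      # feasible point X0
grad_Y = np.array([2*x1 + 5*x3**2, 10*x1*x3])   # df/d(x1, x3) = (47, 30)
grad_Z = 6*x2                                   # df/dx2 = 12
J = np.array([[x3, x1],
              [2*x1 + 2*x2, 2*x3]])             # dg/d(x1, x3)
C = np.array([2 + 2*x2, 2*x1])                  # dg/dx2

JinvC = np.linalg.solve(J, C)
dc_f = grad_Z - grad_Y @ JinvC                  # constrained derivative
dx2 = 0.01
dY = -JinvC * dx2                               # (dx1, dx3) = (-.0283, .025)
print(dc_f * dx2, dY)                           # linear estimate, about -.46

f = lambda a, b, c: a**2 + 3*b**2 + 5*a*c**2
print(f(x1 + dY[0], x2 + dx2, x3 + dY[1]) - f(x1, x2, x3))   # about -.477
```

The gap between the linear estimate (about −.46) and the exact change (−.477) is the linear-approximation error discussed at the end of the example.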
Example 20.2-2

This example illustrates the use of constrained derivatives. Consider the problem

Minimize f(X) = x₁² + x₂² + x₃²

subject to

g₁(X) = x₁ + x₂ + 3x₃ − 2 = 0
g₂(X) = 5x₁ + 2x₂ + x₃ − 5 = 0

To determine the constrained extreme points, let

Y = (x₁, x₂) and Z = x₃

Thus,

∇_Y f = (∂f/∂x₁, ∂f/∂x₂) = (2x₁, 2x₂), ∇_Z f = ∂f/∂x₃ = 2x₃

J = | 1 1 |,  J⁻¹ = | −2/3  1/3 |,  C = | 3 |
    | 5 2 |         |  5/3 −1/3 |       | 1 |

Hence,

∇_c f = ∂_c f/∂x₃ = 2x₃ − (2x₁, 2x₂) J⁻¹C = (10/3)x₁ − (28/3)x₂ + 2x₃

The equations for determining the stationary points are thus given as

∇_c f = 0
g₁(X) = 0
g₂(X) = 0

or

| 10 −28  6 | | x₁ |   | 0 |
|  1   1  3 | | x₂ | = | 2 |
|  5   2  1 | | x₃ |   | 5 |

The solution is X⁰ ≈ (.81, .35, .28).

The identity of X⁰ is checked using the sufficiency condition. Given that x₃ is the independent variable, it follows from ∇_c f that

∂²_c f/∂x₃² = (10/3)(∂x₁/∂x₃) − (28/3)(∂x₂/∂x₃) + 2

From the Jacobian method,

| ∂x₁/∂x₃ | = −J⁻¹C = |  5/3  |
| ∂x₂/∂x₃ |           | −14/3 |

Substitution gives ∂²_c f/∂x₃² = 460/9 > 0. Hence, X⁰ is the minimum point.

Sensitivity Analysis in the Jacobian Method. The Jacobian method can be used to study the effect of small changes in the right-hand side of the constraints on the optimal value of f. Specifically, what is the effect of changing g_i(X) = 0 to g_i(X) = ∂g_i on the optimal value of f? This type of investigation is called sensitivity analysis and is similar to that carried out in linear programming (see Chapter 4). However, sensitivity analysis in nonlinear programming is valid only in the immediate neighborhood of the extreme point. The development will be helpful in studying the Lagrangean method.

We have shown previously that

∂f(Y, Z) = ∇_Y f ∂Y + ∇_Z f ∂Z
∂g = J∂Y + C∂Z

Given ∂g ≠ 0, then

∂Y = J⁻¹∂g − J⁻¹C∂Z

Substituting in the equation for ∂f(Y, Z) gives

∂f(Y, Z) = ∇_Y f J⁻¹∂g + ∇_c f ∂Z

where

∇_c f = ∇_Z f − ∇_Y f J⁻¹C

as defined previously. The expression for ∂f(Y, Z) can be used to study variation in f in the feasible neighborhood of a feasible point X⁰ resulting from making small changes ∂g and ∂Z.

At the extreme (indeed, any stationary) point X₀ = (Y₀, Z₀), the constrained gradient ∇_c f must vanish. Thus

∂f(Y₀, Z₀) = ∇_{Y₀}f J⁻¹ ∂g(Y₀, Z₀)

or

∂f/∂g = ∇_{Y₀}f J⁻¹

The effect of the small change ∂g on the optimum value of f can thus be studied by evaluating the rate of change of f with respect to g. These rates are usually referred to as sensitivity coefficients.

Consider the same problem of Example 20.2-2. The optimum point is X₀ = (x₁⁰, x₂⁰, x₃⁰) = (.81, .35, .28). Given Y₀ = (x₁⁰, x₂⁰), then

∇_{Y₀}f = (∂f/∂x₁, ∂f/∂x₂) = (2x₁⁰, 2x₂⁰) = (1.62, .70)

Consequently,

(∂f/∂g₁, ∂f/∂g₂) = ∇_{Y₀}f J⁻¹ = (1.62, .70) | −2/3  1/3 | = (.0867, .3067)
                                             |  5/3 −1/3 |

This means that for ∂g₁ = 1, f will increase approximately by .0867. Similarly, for ∂g₂ = 1, f will increase approximately by .3067.

Example 20.2-3 (Application of the Jacobian Method to an LP Problem)

Consider the linear programming problem

Maximize z = 2x₁ + 3x₂

subject to

x₁ + x₂ + x₃ = 5
x₁ − x₂ + x₄ = 3
x₁, x₂, x₃, x₄ ≥ 0

To account for the nonnegativity constraints x_j ≥ 0, substitute x_j = w_j². With this substitution, the nonnegativity conditions become implicit and the original problem becomes

Maximize z = 2w₁² + 3w₂²

subject to

w₁² + w₂² + w₃² = 5
w₁² − w₂² + w₄² = 3

To apply the Jacobian method, let Y = (w₁, w₂) and Z = (w₃, w₄). (In the terminology of linear programming, Y and Z correspond to the basic and nonbasic variables, respectively.)
Thus,

J = | 2w₁   2w₂ |,  J⁻¹ = | 1/(4w₁)   1/(4w₁) |,  w₁ and w₂ ≠ 0
    | 2w₁  −2w₂ |         | 1/(4w₂)  −1/(4w₂) |

C = | 2w₃   0  |,  ∇_Y f = (4w₁, 6w₂),  ∇_Z f = (0, 0)
    |  0   2w₄ |

so that

∇_c f = (0, 0) − (4w₁, 6w₂) | 1/(4w₁)   1/(4w₁) | | 2w₃   0  | = (−5w₃, w₄)
                            | 1/(4w₂)  −1/(4w₂) | |  0   2w₄ |

The solution of the equations comprising ∇_c f = 0 and the constraints of the problem yields the stationary point (w₁ = 2, w₂ = 1, w₃ = 0, w₄ = 0). The Hessian is given by

H_c = | ∂²_c f/∂w₃²     ∂²_c f/∂w₃∂w₄ | = | −5  0 |
      | ∂²_c f/∂w₄∂w₃   ∂²_c f/∂w₄²   |   |  0  1 |

Because H_c is indefinite, the stationary point does not yield a maximum.

The reason the preceding solution does not yield the optimum solution is that the specific choices of Y and Z are not optimum. In fact, to find the optimum, we need to keep on altering our choices of Y and Z until the sufficiency condition is satisfied. This will be equivalent to locating the optimum extreme point of the linear programming solution space. For example, consider Y = (w₂, w₄) and Z = (w₁, w₃). The corresponding constrained gradient vector becomes

∇_c f = (4w₁, 0) − (6w₂, 0) J⁻¹C = (−2w₁, −6w₃)

The corresponding stationary point is given by w₁ = 0, w₂ = √5, w₃ = 0, w₄ = √8. Because

H_c = | −2   0 |
      |  0  −6 |

is negative-definite, the solution is a maximum point.

The result is verified graphically in Figure 20.6. The first solution (x₁ = 4, x₂ = 1) is not optimal, and the second (x₁ = 0, x₂ = 5) is. You can verify that the remaining two extreme points of the solution space are not optimal. In fact, the extreme point (x₁ = 0, x₂ = 0) can be shown by the sufficiency condition to yield a minimum point.

FIGURE 20.6  Extreme points of the solution space of the linear program.

The sensitivity coefficients ∇_{Y₀}f J⁻¹, when applied to linear programming, yield the dual values. To illustrate this point for the given numerical example, let u₁ and u₂ be the corresponding dual variables. At the optimum point (w₁ = 0, w₂ = √5, w₃ = 0, w₄ = √8), these dual variables are given by

(u₁, u₂) = ∇_{Y₀}f J⁻¹ = (3, 0)

The corresponding dual objective value is 5u₁ + 3u₂ = 15, which equals the optimal primal objective value. The given solution also satisfies the dual constraints and hence is optimal and feasible. This shows that the sensitivity coefficients are indeed the LP dual variables. In fact, both have the same interpretation.

We can draw some general conclusions from the application of the Jacobian method to the linear programming problem. From the numerical example, the necessary conditions require the independent variables to equal zero. Also, the sufficiency conditions indicate that the Hessian is a diagonal matrix. Thus, all its diagonal elements must be positive for a minimum and negative for a maximum. The observations demonstrate that the necessary condition is equivalent to specifying that only basic (feasible) solutions are needed to locate the optimum solution. In this case, the independent variables are equivalent to the nonbasic variables in the linear programming problem. Also, the sufficiency condition demonstrates the strong relationship between the diagonal elements of the Hessian matrix and the optimality indicator z_j − c_j (see Section 7.2) in the simplex method.¹

¹For a formal proof of the validity of these results for the general linear programming problem, see H. Taha and G. Curry, "Classical Derivation of the Necessary and Sufficient Conditions for Optimal Linear Programs," Operations Research, Vol. 19, 1971, pp. 1045-1049. The paper shows that the key ideas of the simplex method can be derived by the Jacobian method.

PROBLEM SET 20.2B

1. Suppose that Example 20.2-2 is solved in the following manner. First, solve the constraints expressing x₁ and x₂ in terms of x₃; then use the resulting equations to express the objective function in terms of x₃ only. By taking the derivative of the new objective function with respect to x₃, we can determine the points of maxima and minima.
(a) Would the derivative of the new objective function (expressed in terms of x₃) be different from that obtained by the Jacobian method?
(b) How does the suggested procedure differ from the Jacobian method?

2. Apply the Jacobian method to Example 20.2-1 by selecting Y = (x₂, x₃) and Z = (x₁).

3. Solve by the Jacobian method:

Minimize f(X) = Σ_{i=1}^{n} x_i²

subject to

Π_{i=1}^{n} x_i = C

where C is a positive constant. Suppose that the right-hand side of the constraint is changed to C + δ, where δ is a small positive quantity. Find the corresponding change in the optimal value of f.

4. Solve by the Jacobian method:

Minimize f(X) = 5x₁² + x₂² + 2x₁x₂

subject to

g(X) = x₁x₂ − 10 = 0

(a) Find the change in the optimal value of f(X) if the constraint is replaced by x₁x₂ − 9.99 = 0.
(b) Find the change in the value of f(X) in the neighborhood of the feasible point (2, 5), given that x₁x₂ = 9.99 and ∂x₁ = .01.

5. Consider the problem:

Maximize f(X) = x₁² + 2x₂² + 10x₃² + 5x₁x₂

subject to

g₁(X) = x₁ + x₂² + 3x₂x₃ − 5 = 0
g₂(X) = x₁² + 5x₁x₂ + x₃ − 7 = 0

Apply the Jacobian method to find ∂f(X) in the feasible neighborhood of the feasible point (1, 1, 1). Assume that this feasible neighborhood is specified by ∂g₁ = −.01, ∂g₂ = .02, and ∂x₁ = .01.

6. Consider the problem:

Minimize f(X) = x₁² + x₂² + x₃² + x₄²

subject to

g₁(X) = x₁ + 2x₂ + 3x₃ + 5x₄ − 10 = 0
g₂(X) = x₁ + 2x₂ + 5x₃ + 6x₄ − 15 = 0

(a) Show that by selecting x₃ and x₄ as independent variables, the Jacobian method fails to provide a solution, and state the reason.
(b) Now solve the problem using x₁ and x₃ as independent variables, and apply the sufficiency condition to determine the type of the resulting stationary point.
(c) Determine the sensitivity coefficients, given the solution in (b).

7. Consider the linear programming problem:

Maximize f(X) = Σ_{j=1}^{n} c_j x_j

subject to

g_i(X) = Σ_{j=1}^{n} a_ij x_j − b_i = 0, i = 1, 2, …, m
x_j ≥ 0, j = 1, 2, …, n

Neglecting the nonnegativity constraints, show that the constrained derivatives ∇_c f(X) for this problem yield the same expression for {z_j − c_j} defined by the optimality condition of the linear programming problem (Section 7.2)—that is,

{z_j − c_j} = {C_B B⁻¹P_j − c_j}, for all j

Can the constrained-derivative method be applied directly to the linear programming problem? Why or why not?

Lagrangean Method. In the Jacobian method, let the vector λ represent the sensitivity coefficients—that is,

λ = ∇_Y f J⁻¹

Thus,

∂f − λ∂g = 0

This equation satisfies the necessary conditions for stationary points because ∂f/∂g is computed such that ∇_c f = 0. A more convenient form for presenting these equations is to take their partial derivatives with respect to all x_j. This yields

∂(f − λg)/∂x_j = 0, j = 1, 2, …, n

The resulting equations, together with the constraint equations g(X) = 0, yield the feasible values of X and λ that satisfy the necessary conditions for stationary points.

The given procedure defines the Lagrangean method for identifying the stationary points of optimization problems with equality constraints. The procedure can be developed formally as follows.
Let

L(X, λ) = f(X) − λg(X)

The function L is called the Lagrangean function and the parameters λ the Lagrange multipliers. By definition, these multipliers have the same interpretation as the sensitivity coefficients of the Jacobian method. The equations

∂L/∂X = 0, ∂L/∂λ = 0

give the necessary conditions for determining stationary points of f(X) subject to g(X) = 0.

The sufficiency conditions for the Lagrangean method will be stated without proof. Define

H^B = | 0   P |
      | Pᵀ  Q |_{(m+n)×(m+n)}

where

P = | ∇g₁(X) |            Q = ||∂²L(X, λ)/∂x_i∂x_j||_{n×n}, for all i and j
    |   ⋮    |
    | ∇g_m(X) |_{m×n}

The matrix H^B is called the bordered Hessian matrix. Given the stationary point (X₀, λ₀) for the Lagrangean function L(X, λ) and the bordered Hessian matrix H^B evaluated at (X₀, λ₀), then X₀ is

1. A maximum point if, starting with the principal minor determinant of order (2m + 1), the last (n − m) principal minor determinants of H^B form an alternating sign pattern starting with (−1)^{m+1}.
2. A minimum point if, starting with the principal minor determinant of order (2m + 1), the last (n − m) principal minor determinants of H^B have the sign of (−1)^m.
0 0lll -|r*r 0 2- 2}\ 0l \2 0 0 2l Chapter 20 Classical Optimization Theory Because m : 1, and n : 3, for a stationary point to be a minimum, the sign of the last (3 - 1) : 2 principal minor determinants must be that of (-1)^ : -1-. Thus, for (Xo, No), : Q, Z, t, 1) -3z < 0, -64<0 For (X6, No)z : Q, -2, 1, I), l l ; -ál : |J0 ó| lo442l l+200l l+000l |z 0 0 z| lo 44l Á 2 0l : |l 0 o| Io 4 -4 14 2 0 -32<0, 1_; 0 0 Iz 0 0 : zl sl z| :-64 10 Finally, for (X6, },6)3 lo I |4 |0 : (2.8,0, 1,.4, 1,.4) lo 4 0 zl :I2.8=o, lá 3 _3 3l :32>0 |z 0 0 z| This shows that (X6) and (Xo), are minimum points.(Xo). does not satisfy the sufficiency conditions of either a maximum or a minimum.This does not mean that it is not an extreme point because the given conditions are sufficient only. To illustrate the use of the other sufficiency condition that employs the roots of polynomial, consider :9p'-26p*16:0 all p > 0, (Xo)t : Q,2, 1) is a minimum point. For lAl : 9p' - 26p" + ]_6 : 0 which is the same as in the previous case. Hence (Xo), : (2,, -2, I)is a minimum point. Finally, for (Xg, No): : Q.8, 0, 1,.4, 1,.4), IAI : 5p' - 6p_8 : 0 This gives p : 2 and -.8, which means that the identity of (Xg), : (2.8,0 1.4) is not known. PRoBLEM sET 20.2c 1. Solve the following linear programming problem by both the Jacobian and the Lagrangean methods: 40l 2 0l 0 -.8| 2( 4 2x, 2 \ 2-p, 0 0 l 0 2-2}t-1l" 0 l 0 0 Z-pl l0 o:(:" \z' Now, for (\, tr'o)r : Q,2, 1,,, l) lAl This gives p : 2 or f;. Because (Xo, No), : (2, -2, 1,,, L), 20.2 Constrained Problems 723 Maximize"f(X) : 5x1 * 3x2 subject to gl(X): x1 *2x2*4 -6:0 sz(X):3x1 * x2 *xa-9:0 Xb X2, X3, X4 > 0 Find the optimal solution to the problem Minimize /(X) : x| + zxl + LOx! subject to sr(X): x1, l +,, - 5:0 sz(X) :xt*5x2*4-7:0 Suppose that gl(X) : .0]- and g2(X) : .}2.Find the corresponding change in the optimal value of/(X). 3, Solve Problem 6, Set 20.2b by the Lagrangean method and verify that the values of the Lagrange multiPliers are the ame as the sensitivity coefficients obtained in problem 6, Set20.2b. 20.2.2 lnequality Constraints This section extends the Lagrangean method to handle inequality constraints. The main contribution of the section is the development of the geneial Karush_Kuhn_ Tucker (KKT) conditions, which provide the basii theory for nonlinear programming. Extension of the Lagrangean Method. Consider Maximize z : í(X) subject to &(E=0, i:1,,2,...,Fn The nonnegativity constraints X > 0, if any, are included in the mconstraints. If the unconstrained optimum of /(X) does not satisfy all constraints, the con_ strained oPtimum must occur at a boundary point of the sólution space. This means that at least one constraint must be satisfiělin equation form. Thó procedure thus involves the following steps. S ep 1. Solve the unconstrained problem Maximize z : í(X) If the resulting optimum satisfies all the constraints, stop because all constraints are redundant. otherwise, set k : 1 and go to siep 2. SteP 2. Activate anY k constraints (i.e., convert them into equalities) and optimize /(X) subject to the k active constraints using the Lagrangean method.If the resulting solution is feasible with respect toihe,.-áirrirr-g constraints, stop; 724 Chapter 20 Classical Optimization Theory it is a local optimum.2 Otherwise, activate another set of k constraints and repeat the step. If" all sets of active constraints taken k at a time are considered without encountering a feasible solution, go to step 3. Step 3. If. k : m,stop;no feasible solution exists. 
Otherwise, set k : k * 1 and go to step 2. An important point often neglected in presenting the procedure is that it does not glarantee global optimality even when the problem is well behaved (possesses a unique optimum). Another important point is the implicit misconception that, for p 1 Q,, the optimum of /(X) subject to p equality constraints is always better than its optimum subject to q equality constraints. This is true, in general, only if the q constraints form a subset of the p constraints. The following example is designed to illustrate these points. Example 20.2-6 subject to Maximize z : *Q.x, - 5)' - Q*, - I)' \*2x2=2 Xy X2> 0 The graphical representation in Figure 20.7 should assist in understanding the analytic procedure. Observe that the problem is well behaved (concave objective function subject to a convex solution space), which means that a reasonably well defined algorithm should guarantee global optimality. Yet, as will be shown, the extended Lagrangean method produces a local maximum only. The unconstrained optimum is obtained by solving #: -4(2x1- 5) : 0 0' - -4(2x1- 1) : 0 óxz -\-,-z _) This give s@t,, xz) : (Z, };, *t icrr does not satisfy the constraint x1 * 2x2 < 2. Thus, the constraints are activated one at a time. Consider x! : 0. The Lagrangean function is L(*r, xz, }r) : -(2x1 - 5)' - (2*, - I)' - },.r1 Thus, #:-4Q*r-5)-tr:0 aL Exz :-4Qxz-I) -0 2A local optimum is defined from among all the optima resulting from optimizing f(X) subject to all combinations of. k equality constraints, k : I, 2, ... , ffi. 20.2 Constrained Problems 725 - _ -,r<1- LJ z : -1,0 1 Q,b 1\ ol ,a FlGURE 20.7 Solution space of Example 20.2-6 aL _-. : -.Tr - 0 dA H:E*'_:h*"^'^"}T:ry"I9r: *r).:. (0, | *ti9q.can_be shown by the sufficiency dure terminates with @tllz)_,: (0; ij H;'b*ňiffii[ilŤ,:1"TTí:'tuuli lvr'.llll'l,LgĎ wllll \ň1,, x2) : (U, ž)a: ? tocal opttmal solution to the problem. The objective value is z : r?i (Th ; ráaining constia ints x2 = 0 ;J xl, : 2x, = 2, acti-vated one at a time, yield infeasible sotutioňs.) 2x, = 2, acti- ln"Tqu:. 20.7, the.feasible solution @r, !r) : (2,0), which is the point of intersec_ This value is better than thebne obtainóo *itÉ """ ;ti;; *"rr.ái.,. The Procedure just described illustrates that the best to be hoped for in using the extended Lagrangean method is a (possibly) good feasible solution. This is particularly true if the objective function is not unimodal. If the functions of the problem are wellrr slv vv *i"XiJ: i:^'^h^., T"9'"T po..:::"1 1 u.nilue constrained optimum as in Example vgrrJ, v\ sider the unconstrained optimum and the constraineá optimá subject to allsets of onef ----- - ---J, active constraint, then two active constraints, and so on, until all mconstraints are actil_ vated. The best of- all the feasible optimais the gtobal optimum. If this Procedure is followed by Exampie 20.2-a, it will be necessary to solve Seven Problems before global optimality is vérified. This indicates the limited use of the method in solving problems of any piactical size. The Karush,Kuhn,Tucker (KKT) Conditions.3 This section develops the KKT nec_ eSSarY conditions for identifying stationary points of a nonlinear constrai, rul lLl'Iltrryrng S.aTlonary pomts ot a nonlinear constrained problem subject to inequality constraints. The development is based on the LagranseanLagrangean method, These conditions are also sufficient ,r.rd", certain rules that will be stated later. 
3Historically, W, Karush was the first to develop the KKT conditions in 1939 as part of his M.S. thesis at fhit'J;'sitY of Chicago. The same conditions *"r" a"u"táped indepe;;l;;try in 1951 by W iuňr,-ana 726 Chapter 20 Classical Optimization Theory Consider the problem Maximize z : í ) subject to g(X) š0 The inequality constraints may be converted into equations by using nonnegative slack variables.Let 57 (> 0)be the slack quantity added to the ith constraint g;(X) < 0 and define S: (Sr, Sr,..,,S^)', s': (S?, 8,,...,S'^)' where rn is the total number of inequality constraints. The Lagrangean function is thus given by L(X, s, I) : /(E - )\[ďx) + s'] Given the constraints g(E<0 a necessary condition for optimality is that }, be nonnegative (nonpositive) for maximization (minimization) problems. This result is justified as follows. The vector )r measures the rate of variation of /with respect to g-that is, af r:É In the maximization case, as the right-hand side of the constraint g(E < 0 changes from 0 to óg(> 0), the solution space becomes less constrained and hence/cannot decrease.This means that )\ > 0. Similarly for minimization, as the right-hand side of the constraints increases,/cannot increase, which implies that )t < 0. If the constraints are equalities, that is, g(X) : 0, then }, becomes unrestricted in sign (see Problem 2, Set Z0.2d). The restrictions on )t are part of the KKT necessary conditions. The remaining conditions will now be derived. Taking the partial derivatives of Z with respect to X, S, and )t, we obtain aL:=-:V/(E_NVg(X):6 óx! aL ffi : -2},", : 0, i : 1,2, ",, m aL ffi:-(g(x) fs2):6 The second set of equations reveals the following results: 1. If },, + 0, then S? :0, which means that the corresponding resource is scarce and, hence, it is consumed completely (equality constraint). 2. If S? > 0, then N; : 0. This means resource j is not scarce and, consequently, it has no effect on the value of /(i.e., N, : # * 0). I ,l ,: :, 20.2 Constrained Problems 727 From the second and third sets of equations, we obtain },;g;(X) : 0, i : l,, 2, ... , ffi This new condition essentially repeats the foregoing argument, because if },, > 0, &(E: Oor S?:0;andif&(E < 0,,S? ) 0,andN;:0. The KKT necessary conditions for the maximization problem can now be summarized as follows: )\>0 V/(E-IVg(X) :6 }';g;(X) : 0, i : I, 2, ... ) m g(X) š0 These conditions apply to the minimization case as well, except that )t must be nonpos_ itive (verifY!).In both maximization and minimization, theLágrange multipliers.or."_ sponding to equality constraints must be unrestricted in sign. SuÍficiencY of the KKT Conditions. The Kuhn-Tucker necessary conditions are also sufficient if the objective function and the solution space satiďy the conditions in Thble 20.1,. TABLE 20.1 Sense of optimization Required conditions Objective function Solution space Maximization Minimization Concave Convex convex set convex set It is simPler to verify that a function is convex or concave than to prove íhat a solution SPace is a convex set. For this reason, we provide a list of conditirons that are easier to aPPlY in Practice in the sense that the convexity of the solution space can be established bY checking the convexity or concavity of the constraint functi,ons. To pro_ vide these conditions, we define the gener alizednonlinear problems as subject to Maximize or minimize z : f(X) g;(X) < 0, i : 1-, 2, ... ) r 8;(X) >0, i:r*I,...,p g,(X) :0, i:p*!,...)m where },, is the Lagrangean multiplier associated with constraint l. 
The conditions for establishing the sufficiency of the KKT conditions are summarized in Table 2O.z. i:I i:r+1 728 Chapter 20 Classical Optimization Theory TABLE 20.2 Required conditions Sense of optimization /(x) S;(E }', Concave Convex íCorru",. { Co.r"uu" II-in"u. íCorru"* { Corr"uu" Irirr"u, >0 <0 unrestricted <0 >0 unrestricted (l=i 0 and concave if N, < 0. Similar interpretations can be established for all the remaining conditions. Observe that a linear function is both convex and concave. Also, if a function/is concave, then (-/)is convex, and vice versa. Example 20.2-7 Consider the following minimization problem: Minimize /(E : x! + x] + x! subject to sr(X) : Zxt g2(X) : Xl g3(X) : l ga(X) : 2 gs(X) : *xz-5š0 *xs-2=0 -X1 -X2 <0 <0 -X3 <0 This is a minimization problem; hence )\ šO.The KKT conditions are thus given as (Nr, \r, N:, N+, },5) š0 (2x1,,2x2, Zxs) - (\r, Nr, },3, },4, X,5) Nt$t: )yz!z: : Ns$s:0 g(X) š0 210\ 1 0 1\ -1 0 ol:o 0 -1, 0l 00-Il },5x3 : 0 Zxltxz=5 hlxsš2 The solution, i, ,11 : ,,,ir.::,':r':;:r'j^: : \5 : 0, N: : _2,N+ : -4.Because both/(X) and the solution.puÓ" 8(X) = 0 áre.orrrr"*, i(x,s,ó'rirust be convex and the resulting stationary poini yiótO, a global constrained minimum. The examPle shows^that the p.oc"drrň is nót suitablě for numeri.ur .ó,,'puiuiiorr. because it maY be difficult to sólve the resulting conditions ."píi.iily. The KKT condi_ tions are central to the development of the ňonhnea, p. ;il*1rg urgóritt ,,', i., Chapter 21,. These conditions reduce to 20.2 Constrained Problems 72g }'t, \z, Na, },+, Xs = 0 2xr-2)tr-\z*\::0 2xr-Nr*N+:0 Zxr-Nzl-Ns:0 }rrQxrlxz-5) :0 \2(x1 *x3-2):0 \l(1 -xJ:0 }roQ-xr):0 Maximize /(X) g(x) = o Show that the KKT conditions are the same as in Secti on20.2.2,except that the Lagrange multipliers )\ are nonpositive. 2. Consider the following problem: Maximize /(X) g(x) : 0 V/(x)-IV(X):6 8(X) : 0 )\ unrestricted in sign PRoBLEM sET 20.2D 1. Consider the problem: subject to subject to show that the kkT conditions are 730 Chapter 20 Classical Optimization Theory 3. Write the KKT necessary conditions for the following problems. (a) Maximize /(X) : x', - x?z + xp! subject to x1 *x)*xr:5 5*?-x?2-4>0 XI, X2, X3 ž 0 (b) Minimize /(X) : x! + x! l 5xp2x3 subject to 4, Consider the problem Maximize /(X) subject to g(X) : 0 Given/(X) is concave ands,(X) (' : L,2, ..., m)ísalinear function,show that the KKT necessary conditions are also sufficient. Is this result true if g;(E is a convex nonlinear function for all i? Why? 5. Consider the problem Maximize /(X) subject to sr(X) > 0, g2(X) : 0, g3(X) = 0 Develop the KKT conditions and give the stipulations under which the conditions are sufficient. SELECTED REFERENCES Bazarra, M., H. Shrali, and C. Shetty, Nonlinear Programming Theory and Algorithms,Znd ed., Wiley, New York,1993. Beightler, C., D.Phillips, and D. Wilde, Foundations of Optimization,Znd ed., Prentice Hall, NJ, 1979. Rardin, R. Optimization in Operations Research,Prentice Hall, Nl1998. x?-x3+x]<10 x]+x|+4x!>20 21.1 21.1.1 21.1 21.1.1 trFtApTffiffi x Nonlinear Programming Al9orithms The solution methods of nonlinear programming generally can be classified as either direct or indirecr algorithms. Examples of direct methods are the gradient algorithms, where the maximum (minimum) of a problem is sought by followiňg the fasteit rate of increase (decrease) of the objective function. In indirect methodr, trr" original prob_ lem is rePlaced by another from which the optimum is determined. 
ExampÉsof ihese situations include quadratic programming, separable programming, an-d stochastic programming. UNcoNsTRAlNED ALGoR|TH M 5 This section Presents two algorithms for the unconstrained problem: the direct search algorithm and the gradient algorithm. Direct Search Method Direct search methods apply primarily to strictly unimodal single-variable functions. Although the case may appear trivial, Section 2t.I.2 shows that Óptimizationof single_ variable functions plays a key role in the development of the 111or. general multivári_ able algorithms. The idea of direct search methods is to identify the interval of uncertainty that includes the oPtimum solution point.The procedure lócates the optimum by iteraiively narrowing the interval of uncertainty to any desired level of accuiacy. Two closely related algorithms are presented in this section: Dichotomous and golden section search methods. Both algorithms seek the maximizationof a unimodal function/(x) over the interva| a < x < b,which is known to include the optimum point x*. The two methods start with 16 : (a, Ď) representing the initial interval of uncertáinty. 731 732 Chapter 21 Nonlinear Programming Algorithms General Step i. Let I,_1 : @r, xa) be the current intervaI of uncertainty (at iteration 0, xL : a and xR : b). Next, define x1 and x2 such that xtlxtlX2lxp The next interval of uncertainty, d, is determined in the following manner: 1. It í@r) > f@r), then x7 1 x* ZI.I|a|). 2, It f(x) < í@r),,then h 1 x* < 21.1[b]). 3. Iíí@r):í@r.),then xtlx* < x2. Set xL: xt,xR: xz,andd:(xt,xz). The manner in which x7 and x2 áío determined guarantees that I, 1 I,_t, as will be shown shortly. The algorithm terminates at iteration k if" Ik < A, where A is a userspecified level of accuracy. The difference between the dichotomous and golden section methods occurs in the mannoí.T1 &íId x2día computed.The following table provides the formulas. Dichotomous method Golden section method -r,Q 1_ xl : xR - ('=X-n - xr) ^ /< x2:xL+(' Xxa-xt) xt: L(xa + x1 - A) xz:i(xa+x.+A) 1 xz. Set xa : x2 and Ii : (xr, x2) (see Figure xp. Set xL :.r1 and Ii : @y xp) (see Figure aXLXlX2Xp , Ii-II r-l _| , Ii F|GURE 21.1 Illustration of the general step of the dichotomous/golden section search methods , Ii- 7| ,-r _l ,, I' ,| ,10 (b)(u) í@t) ,. /o ,] 21.1 Unconstrained Algorithms In the dichotomous method, the values x1 and x, sit symmetrically around the midpoint of the current interval of uncertainty. This means that Ii:.5(Ii_, + A) Repeated application of the algorithm guarantees that the length of the interval of uncertainty will approach the desired accuracy, A. In the golden section method, the idea is more involved. We notice that each iteration of the dichotomous method requires calculating the two values /(xJ and f(x2),but ends up discarding one of them. What the golden section proposes is to save computations by reusing the discarded value in the immediately succeeding iteration. Definefor0 ( ct ( 1, x1,:xR-a(xn-*r) x2: xL * ct(.ra - *r) Then the interval of uncertainty Ii at iteration i equals @L, xz) or (x1, x6). Consider the case d : @r, r2), which means that xl is included in d. In iteration j * 1, we select x2 equal to x1 in iteration i, which leads to the following equation: xr(iteratíoni i 1) : xl(iterationl) Substitution yields xr l a|xr(iteration l) - xLl : xR - c(xn - ,r) or xr * a|x1 -| a(;n - ,r) - xLf : xp - a(xp - x7) which finally simplifies to a2 + a-].:0 This equation yields o : !ž . 
Because 0 = ct š 1, we select the positive root -1 r \/< ct : -Ť ^, .681. The design of the golden section computations guarantees an ct-reduction in successive intervals of uncertainty;that is Ii,r1 : UIi Compared to the dichotomous method, the golden section method converges more quicklY to the desired level of accuracy.In addition, each iteration in the golden section method requires half the computations because the method always recycles one set of computations from the immediately preceding iteration. Example 21.1-1 Maximize í(*) : Í3*, lit-" + 2o), 0 í(*r)+ x7: I.45, 11 : (L45,3) Iteration 2 11: (1.45,3) = (xt, xp) x1 : .5(3 + I.45 - .1) : 2.175, Í(x1): 5.942 x2: .5(3 + t.45 + .1) : 2.275, f(x) : 5.908 í@r) > í@r)+ xp: 2.275, 12: (L45,2.275) Iteration 1 1o : (0, 3) = (x7, xp) xt : 3 - .618(3 - 0) : 1,.146, f(x1) : 3.438 xz : 0 +,61E3 - 0) : I.854, f(x2) : 5.562 í@r) > í(rr)+ x1 : 1-.1-46, 11 : (,J,46,3) Iteration 2 11 : (I.L46,3) = (x2, xp) xl : xzin iteration 0 : ]..854, f(x1) : 5.562 xz: I.146 + .618(3 - 1,.1,46) : 2.292, f(x2) : 5.903 í@r) > í(x)+x7: 1.854, { : (1,854, 3) Continuing in the same manner, the interval of uncertainty will eventually narrow down to the desired A-tolerance. Excel template ch2]_DichotomousGoldenSection.xls is designed to handle either method automatically. The input data include í(*),o, b, and A. The function /(.r) is entered in cell E3 as =IF (C3<=2, 3 *C3 , (-C3+2 0 ) /3 ) Note that C3 plays the role of x in f(x). Limits a and b are entered in cells 84 and D4 to represent the admissible search range for/(x). Also, the tolerance limit, A, is entered in cell 83. The search method is selected by entering x in either D5 (dichotomous) or F5 (golden section). Figure 2'l,.2 compares the two methods. Not only does the golden section method requires 40% less iterations, it also involves less calculations per iteration as we explained previously. PRoBLEM sET 21.1A 1. Use Excel template ch2].DichotomousGoldenSection.xls to solve Example 21,.1,-1, assuming that A : .01. Compare the amount of computations and the accuracy of the results with those h Figttr e 21,.2. 2. Find the maximum of each of the following functions by dichotomous search. Assume that A : .05. (a) í(x):r-]^., 2=x=4 l(x - Jrl (b) f(*):xcos.tr, 0 0) may be handled in the following manner.Let 61 and 62 be positive constants and define WI: x1 * 61 W2: X2 * 62 The new variables wl and w2 zta strictly positive. Now xIxz : lvtwz - 6zwt - 6twz + 6162 Letting | : wtwz, the problem is expressed as Maximize z : y - 6zwt - 6twz + 6162 subject to lny:Inw1 lInw2 Y > 0, wtž6ywz= 6z The new problem is separable. ExamPles of other functions that can be made separable using substitution are gll+xz and fi'.A variant of the procedure just presented Can be applied to such cases to effect separability. Separable programming deals with nonlinear problems in which the objective function and the constraints are separable. This section shows how an approximate 740 Chapter 21 Nonlinear Programming Algorithms solution can be obtained for any separable problem by linear approximation and the simplex method of linear programming. The single-variable function/(x) can be approximated by a piecewise linear function using mixed integer programming (Chapter 9). Suppose that f(x) is to be approximated over an interval Io, b]. Define ar, k : I, 2, ... , K, as the kth breaking point on the r-axis such that at 1 az 1 1 ax.The points a1 and a6 coincide with end points a and b of the interval under investigation. 
Thus,/(*) is approximated as follows: í(*) = 5í(oo),o k:7 ' : ,Ooto k:I where tpísanonnegative weight associated with the kth breaking point such that ž'o:,k:I Mixed integer programming ensures the validity of the approximation. Specifically, the piecewise linear approximation is valid if 1_. At most two tp are positive. 2. If r* is a positive, then only an adjacent tpal oítpl ca;TL assume a positive value. To show how these conditions are satisfied, consider the separable problem Maximize (or minim ize) z : Žf,@,) i:I subject to n )s1(") ' bj, i : I,2, ..., ffi i:I This problem can be approximated by a mixed integer program as follows. Let the number of breaking points for the ith variable x; equal K, and let af be its kth breaking value. Let { be the weight associated with the kth breaking point of variable 1.1 Then the equivalent mixed problem is nKi i:| k:I subject to nKi Z2sj,@bt = i:I k:1, 00 ír(*r) : *, fr@r) : *1 The ^exact oPtimum solution to this problem, obtained by inspection, is x1, : 0, X2 : 2.I2,and z* : 20.2.To show how thďapproximating meth"od is used, consid|r the separable functions 742 Chapter 2'| Nonlinear Programming Algorithms 81(xI) :3x.t s?(x): Lxl The functions ír@r)and g|(x1) are left in their present form because they are already linear. In this case, x1 is treated as one of the variables. Considering f2@2) and s?@r),we assume that there are four breaking points (Kz: 4). Because the value of x2 cannot exceed 3, it follows that ír(obs',@5) This yields ír(*r)ž12í2@1)+ ír(a7)+ f,ír(a))+ t)ír(ar) = 0l2 + tt| + I6t) + 8tt}: t1 + L6f2 + llt1' Similarly, s1@)N2t?+a32+Lst| The approximating problem thus becomes subject to Maximize Z : X1, + P2 + 1,6t) + 81t) 3x1 t2t)+8t)+1,8t)=l t)+t)+t)+t):1, tr>O,k:I,2,3,,4 ír>0 The solution must satisfy the restricted basis condition. The initial simplex tableau (with rearranged columns to give a starting solution) is given by Basic X1 t) Solution .l1 tl, The variable s1(> 0)is a slack. (This problem happened to have an obvious starting solution.In general, one may have to use the artificial variables techniques, Section 3.4.) From the z-row coefficients, rj is the entering variable. Because r] is currently basic at positive level, the restricted basis condition dictates that it must leave before t} can enter the solution. By the feasibility condition, ,1 íIlust be the leaving variable. This means that t)cannot enter the solution. The next best entering variable, t),requires t) L až 1000 21,1,2 32168 438118 -81-16-t-I ť2.1t1ttr 109 011 3z818 0111 2'|.2 Constrained Algorithms 743 to leave the basic solution, a condition that happens to be guaranteed by the feasibility condition. The new tableau thus becomes Basic X1 tI t) .1 t) Solution _6515-1 .1t!t)t',X1 1,616 J1 3-6010 1 t)011I0 1 l _8 l 'l0l0r0 o _l l8 9 " l0 l0 l0 tl *a -]0-q- 0 t) -.l ]á 1 -8 1 , Next, t)isthe entering variable, which is admissible because /; is positive.The simplex method shows that s1 will leave. Thus, t) Solution _24 -36 a-r7 LL, The tableau ShoWS that t) and ttr are candidates for the entering variable. Because rj is not adjacent to basic t) or t),it cannot enter. Similarly, t| cannót enter u..uur. t) can_ not leave- The process ends at this point, and the solutiorr given is the best feasiblďsolu_ tion for the approximate problem. The optimum solution to the original problem is Xt:0 Xz = 2t) + 3t} - z(h) + 3G+) : 2.1 z:0+2.I4:L9.45 The aPproximate optimum value of. x2(: 2.I) approximately equals the true optimum value (: 2.12). SeParable Convex Programming. 
A special case of separable programming occurs Ih_." Sj,@,) is convex for all l and7, thus énsuring a convex solution space. Additionally, if Í,@) is convex (minimization) or concave (maximization) for all t, tnenthe problem has a global optimum (see Table 20.2, Section 20.2.2). Under such conditions, a simplified approximation can be used. Consider a minimization problem and let í,@) b" as shown in Figure Zt4.The breakingpointsof thefunction fi@)arexi: aki,k:0, I, ...,Ki.Letxpidefinethe increment of the variable x; in the range (oo-r,,, at), k: I,2, ..., Kiiund l.t ptibe the corresponding slope of the line segment in the same range. Then Ki í,@)= )ptixti + í,(ao) k:7 Ki xi : 2xtik:I 0 = Xo, - aki - at_l, i,, k : l, 2, ... , K, Chapter 21 Nonlinear Programmin9 Algorithms fi@i) FlGURE 21.4 Piecewise linear approximation of a convex function The fact that fi(x) is convex ensures that p1, 1 pzi 1 < pr,i.Thus, in a minimization problem, for p 1 q, the variab\e xri is more attractive than /q;, which means that xpiwill always enter the solution before xn;. The convex constraint functions ť,@) are approximated in essentially the same way. Let pjt ibe the slope of the kth line segment corresponding to ť,@).It follows that the lth function is approximated as Ki k:I The complete problem is thus given by Minimize e : á(Žrixtil í,(,,)) subject to '/K, ; (ž PjtiXti 0šxtišatt \ + 8',@o)] = bi, j : I,2, ",)m / - at_t,i, k : Ir2, ..., Ki, i : lr2, ...) n where Pkt : i P'ki : í,(oo) - í,(oo-r) am - at-t,t Tj,@o,) - Ti,(oo-r,) au - at-yt The maximization problem is treated essentially the same way. In this case, pi; ) pzt ) > pK,i,which means that, for p 1 q,the variable xpiwíll always enter the solution before xn; (see Problem 7, Set 21,.2a for proof). The new problem can be solved by the simplex method with upper bounded variables (Section 7.3). The restricted basis concept is not needed because the convexity (concavity) of the functions guarantees correct selection of basic variables. Example 21.2-2 Consider the problem MaximizeZ:xt-x2 subject to 21.2 Constrained Algorithms 745 x2 = 243 2x?, = 32 >)1_ -.L x2 > 3.5 The separable functions of this problem are fr(*r) : x1, ír(xr) - -x2 sl(x) : 3*t, 8L@) : xz s?(xr) : xI,, (*r) : 2x3 These functions satisfy the convexity condition required for the minimization problem. ^ The ranges of the variables _x1 and x2 (estimated from the constraints) are 0 = *, < 3 and 0 - xr< 4.Leí Kt.:3 and kr:4 with a,I: a02:0.The.Íop.. corresponding to the separable functions are determined as foiiows. Fori:I, am ír(oor) : a^ Ptl sl@tJ: 3a|,,, ďr(oor) : oo, Pzt l 3x! + h* X1 XuPItt Xn Xzt Xg, 1 1 I 0 1, 2 J J 45 195 0 J 48 243 1, 1 I 0 1, 2 J 00 11 )) 33 Fori:2, atz ír(aor) : -ak2 ll@td: at, Pltz s7@,,) :Za'kz Pzrz. Xlt Xzl Xst X,Q 2 6 10 1,4 0 2 8 18 32 0 1, 2 J 4 00 11 22 33 44 0 -1 -2 -J -4 -1 -1 -1 -1 The complete problem then becomes Maximize Z = xu l xzt l xl - xn - xzz - xzz - xn subject to 3x1 l 45x21 -| 19541 -| xn xl* xzll ryrl2x9 xll xzt* xzt + + + + 0 0 Xnl šxnšL, šxna1, xzz l Xzz 6x2, * 1042 xzz l xzz l k : I,2,3 k : I,2,3,4 xa2 = 243 I4xa2 < 32 > 2.1, xa2 > 3.5 i1 1 1 Chapter 21 Nonlinear Programming Algorithms TORA optimum solution is Z : -.52, XII: XIz: 1, XI3: .98, X21 : Xzz: X23: I, X24: .5 The solution translates to (*r, ,r): (2.98, 3.5). PRoBLEM sET 21.2A 1. Approximate the following problem as a mixed integer program. Maximize z : e''' l x1 * @, + I)' subject to x]+ x2<3 Xy X2> 0 2. Repeat Problem 1 using the restricted basis method. 
Then find the optimal solution. 3. Consider the problem Maximize Z : xlx2x3 subject to x!+xr-l 4š4 XI, X2, X: = 0 Approximate the problem as a linear program for use with the restricted basis method. 4. Show how the following problem can be made separable. subject to Maximize Z : xlx2 * x, * xp3 x 2-| x,* xp3 < ].0 XlrX2rX3 2 0 5. Show how the following problem can be made separable. Minimize z - ebt+ + @, - 2)' subject to x1 lx2-1 436 X!, X2, Xa > 0 6. Show how the following problem can be made separable. Maximizez-ď"'tx}4*xa subject to x1 * x2x3 -l x3 < 10 X7, X2, X3 ž 0 xa unrestricted in sign 21.2.2 746 21.2 Constrained Algorithms 747 Show that in seParable convex programming, it is never optimal to have xt i } 0 when xp_1,; is not at its upper bound. Solve as a separable convex programming problem. subject to Minimize z : x1 -| 2x2 * x! xzr+xr+x!=4 |x1 *x2|<0 xl,x3>0 x, unrestricted in sign 9- solve the following as a separate convex programming problem. Minimize z : (xt - 2)' + 4(x2 - 6)' subject to 6x1*3(x2+I)2=12 xyx2 > 0 Quadratic Programming A quadratic programming model is defined as Maximizez : CX + XZDX Ax0 subject to where X : @y ,r, ... ,, xr)' C : (q, c2, ... , cr) b : (by b2, ... , b-)' o:(':, ,;, '],)1l \o-, .. o^,) |O,, d,,\ o : |r', o',,) The function XZDX defines a quadratic from (Section A.3).The matrix D isLLLgv^ LlL U assumed symmetric and negative-definite. This means that e is strictly concave. The UJrrrrrrvlllv (lttLt rrgěcrtrv('-uglullle. rnls means tnat e ts strict constraints are linear, which guarantees a convex solution space. The solution to this Problem is based on the KKT necessary conditions. Because z is strictly concave and the solution space is convex, tt "r.-.oíaiti ltlon space ls convex, these conditions (as shown in Table z}.z,Section 20.2.2) are also suffióient for a global optimum. 748 Chapter 21 Nonlinear Programming Algorithms The quadraticprogramming problem will be treated for the maximization case.It is trivial to change the formulation to minimization.The problem may be written as Maximizee: CX + XIDX subject to G(X) : )! : (}'r, },r, ... , N-)' U : (!rr, l"r, ... , Wr)r be the Lagrange multipliers corresponding to the two sets of constraints AX - b = 0 and -X < 0, respectively. Application of the KKT conditions yields I>0,U>0 Yz - ()t'r, UlVG(x) : 0 n,(r, - Žo,,*,) : O, i : 1,, 2, ... , ffi FlX1 :0, j:I,2,...)n Ax 0 be the slack variables of the constraints. The conditio -zXrD+)\?"A-Ur:C AX+S:b ILixj : 0 : \;,S; for all i and j tr,U,x,s>0 Because D7 : D, the transpose of the first set of equations can be written -Z,DX + Ar)\, - [J: C7 Hence, the necessary conditions may be combined as /x\ ( -zn 4r -I o\l l l /c'\ ( A 0 0 t)lul :(o/ \r/ 21.2 Constrained Algorithms 749 Fjíl:0:\;S;, N, [J, x, ExcePt for the conditions Fixl : 0 : }";,S,, the remaining equations are linear functions in X, I, (J, and S. The Problem is thus equivalent to solving a set of linear equations, under the additional conditions Fixi : 0 : X,, ,. Beca.rr. el, strictly concave and the solution SPace is convex, the feasibte sotution satisfying all these conditions must be unique and optimum. The solution of the system is obtained by using phase I of the two-phase method (Section 3.4.Z). The only restriction is to satisfy the conditions NiS, : b : p;x7. This means that }"; and ,S; cannot be positive simultaneously. Similarly, l| and x7 caháot be Positive simultaneously. This is the same idea of the restricted -bas'is u."ď in Section 2I.2.I. 
Phase I will render all the artificial variables equal to zero only if the problem has a feasible space. Example 21.2-3 Consider the problem Maximize z : 4x, * 6x2 - 2*1 - 2xp2 - 2r3 subject to x1l2x2<2 xyX2>0 This problem can be put in matrix form as follows: for all i and j s>0 _L)(t) o, zl(") = z ilffi:É) l+ 2 I lz 4 2 \r 2 0 Maximize Z : (, q(::) t (*,,,r(_i subject to Xy X2> 0 The Kuhn-Tucker conditions are given as -I 0 0-1 00 750 Chapter 21 Nonl inear Programming Algorithms The initial tableau for phase I is obtained by introducing the artificial variables R1 and R2. Thus Basic X1 },1 ,1 Solution -1 0 0-1 00 Iteration1_. Because F1 : 0, the most promising entering variabla x1 ea;íI be made basic with R, as the leaving variable. This yields the following tableau: ,91 Solution 3 -1 X1 R2 .l1 0 -1 0 Iteration 2. The most promising variable x2 ca;TL be made basic because lL2 : 0. This gives Basic X1 S1 Solution R1 421, R2242 ,1 1, 2 0 R2R1X2 10-I-1 R2R1Nr 1004 0106 001,2 R2R1FtN1X2X1 -1 joo1 -:104 -i011 l!1_11244 03'r+ Oi-ii -z-1-1 X1 R1 X2 lo-++jJ 01-22 -á03'. 101-i 0020 01-áá 100-+ái-L0 0010-:0:-1 010*_+_ž+: 0 -1 0 Iteration 3. Because 1 : 0, \1 can be introduced into the solution. This yields Basic X1 S1 Solution X1 }'1 X2 R2R1N1X2 -1-1 The last tableau gives the optimal solution for phase I. Because f : O,the solution,.T1 : j, xz : o', is feasible. The optimal value of e is computed from the original problem and is equal to 4.1,6. 3 1 21,2 Constrained Algorithms 751 F|GURE 21.5 Excel solution of the quadratic programming problem of Example 2I.Z-3 Excel Solver can be used to solve the quadratic programming problem. Figure 21,.5 provides the solution for Example 2I.2-3 (see file ch2lSolverQuadratic Programming.xls). The data are entered in a manner similar to the one used in linear Programming (see Section 2.4.2).The main difference is the way the nonlinear function is entered. Specifically, in Example 2l.2-3,the nonlinear objective function z : 4x1 l 6x2 - 2r1 - 2xp2 - 2*3 is entered in target cell D5 as =4 *81 0+ 6 * Cla-2 *B1 0 ^2 - 2 *81 0 *C La -2* CIO^2 Here, the changing cells B10 and C10 represent x1 and x2.Notice that cells B5:C5 are not used at aII in the model. For readability, we entered the symbol NL to indicate that the associated constraint is nonlinear. Also, you can specify the nonnegativity of the vari_ ables either in the Options dialogue box or by adding explicit nonnegativity constraints. PRoBLEM sET 21.28 1. Consider the problem Maximize z : 6x, * 3x2 - 4xp2 - 2*? - 3*3 made lu: Chapter 21 Nonlinear Programming Algorithms subject to \* x2š1 2x1*3x2=4 Xy X2> 0 Show that z is strictly concave and then solve the problem using the quadratic programming algorithm. 2. Consider the problem: Minimize z : 2x1 + 2x| + 3x! l 2xp2 * 2x24 l xt - 3x2 - 5x3 subject to \* x2*4>1, 3xr*2xr*4š6 XI, X2, X3 ž 0 Show that z is strictly convex and then solve by the quadratic programming algorithm. 21.2.3 Geometric Programming Geometric programming deals with problems in which the objective and the constraint functions are of the following type: z : í(X) where n (J1 : clffx|',, j : 1,2, ..., N i:I It is assumed that all c1 } 0, and that i/ is finite. The exponents ai1 zía unrestricted in sign. The function /(X) takes the form of a polynomial except that the exponents 4ť may be negative. For this reason, and because all c1 ž0, /(D i. called a posynomial. This section will present the unconstrained case of geometric programming. 
The treatment of the constrained problem is beyond the scope of this chapter. Detailed treatment of the subject is given in Beightler and associates (1979,Chap. 6). Consider the minimization of the posynomial function/(X).This problem will be referred to as the primal. The variables xi día assumed strictly positive so that the region í;< 0 is infeasible. It will be shown later that the requiremení xi > 0 plays an essential part in the derivation of the results. The first partial derivative of z must vanish at a minimum point. Thus, N :2Ui j:I !)ouP1. t nk i:I Ez - Š#: icja,@o)oo,-'Tl*?":0, k:1"2,óxt i-:1 dxp í=" T+t Because each xt ) 0 by assumption, !' :0 : dXt :Ir2r,,.)n 21.2 Let z* be the minimum value of z.Itfollows that z- } each xt ) 0. Define Constrained Algorithms 753 0 because e is posynomial and is known once aII y,have determined. solution of q V;: --- 7 Thus li > 0 and)iy. : the optimal value of the written as 1.T. value of y; represents the relative contribution of (Jlío objective function z*. The necessary conditions can now be 2ooiyi :0, k:L,2,...)nj:1 2y, : I, |i > 0 for allij:I These are known as the orthogonatity and normality condi ions and will yield a unique i:'^1':1 Y '^llf n ]- 1 : N and aIl the equations are independent. The problem Decomes more comPlex when IÝ ) n * 1 because the valuós of ! jur. .rá longer unique.It is shown later that, even in,this case, optimum yi is unique. Given 11,the values of e* and x! canbe determined as follows: _,- s\ \Z,=1 y, z:lz) " Because z* : #,it follows that (fl"(#),,(#)" {g(#)'}{ů(go,")'} This step is justified because )|:pďi: 0. The value of z* been deiermined. Now, gir"" ;,; and z*. ď : yi z- can be the following equations then yiel'ds xj. J r J {tT(#)''}{nnr",",1 ů(#), j : I,2, ..., Nď : r,fr6;1,,,, i:1 The Procedure shows that the solution to the original posynom ial z canbe transformed into the solution of a set of linear equations in".!i.T-hesó equations are the necessary conditions for a minimum. It can be sňown that tňóse conditións are also sufficient. Theproof is given in Beightler and associate s (1,979,p. 333). Chapter 2'l Nonlinear Programming Algorithms The variables y7 actually define the dual variables associated with the z-primal problem.To see this relationship, consider the primal problem in the form N /Ut\ ': ?,,,ltl Now define the dual function Because )[r/i : ]. and !1 ) O,we have wšz This result is based on Cauchy's arithmetic-geometric inequality, which states that NN Zr,zi = n zi' j_I N wj ž0, >wj :I i:í An immediate consequence of the inequality w = z is the following relationship: w*:maxw:minz:z* li Xi Example 21.2-4 In this example a problem is considered in which N : n * 1 so that the solution to the orthogonality and normality conditions is unique. The next example illustrates the case whereN>n*1,. Consider the problem Minimize z : 7 xp21 + 3x2xj2 + 5xl3xzxz * xlx2x3 This function may be written as Minimize z : 7 x|x2'"3 + 3x|x)42 + 5x13x)x! + x|xlrx! Thus, *:fi(+), (r, 1it) 1, 1, 0 1 : tS cq) .\ ,l' ,o) .ion -3 1 1 l ,.) 24) *) tio C3, ' au azq azq diti, C2, C3 :I3 a :23 a |33 a condi 0 1 -2 1 ty 1) 0-3 1\ I I 1l 2IIJ ;iven by /o\ Ii] 1 us gi il: 1 I 0 th, Io, ap \::: :1: The orthogonality ."O ""r7: l \ an azz aa.q. 
21.2 Constrained Algorithms This yields the unique solution yi : i,y): ž,yi : *, yi : á Thus, , : G)'(í),(;),(;): 15 23 From the equation L( : y;z- it follows that 7xp2l : (Jt : !1ts.zz) : 7.61,5 3x2xj2 : (Jz: [1ts.zs) : 2.538 5xl3x24 : (Js : }1ts.zs) : 3.173 xlx2x3 : (J+ : l1ts.zs) : 1.90+ The solution of these equations is xI : I.316, *Ž : L2I, xi : 1.2 which is the optimal solution of the problem. -| xr' y. Solving for y1,, y2, and y3 em M and n ]_, thtlI, get 5 :ob ity lf eg 2-5 pr( rali ,n We ;on !+ the goí > fYo l21 r the ogo N) ofy rle er :hc luse rmS inimize z : 5xpj1 + zxllx, * 5x1 rormality conditions are given by (i ii il(lil :(l) :se equations do not yield 1directl (i il)gl :(,il Examp Consid The ort Becau in te lt:.5(1 -3y) lz: .5(1 - yo) The associated dual problem is ls : ll Maximize, : (.rG Á) o'-"'1*;) -'-'(*),(i), or Chapter 21 Nonlinear Programming Algorithms Maximization of w is equivalent to maximization of ln w. The latter is easier to manipulate, however. Thus, lnw:.5(1 - 3ya[ln10 - ln(1 -3y4)} +.5(1 - ya){In4 - ln(1 - y4)} + yo(1n5 -Z|nya) The value of yo maximizing ln w must be unique (because the primal problem has a unique minimum). Hence, Ólnr'Y -?}tn10 - !tn++ 1n5) + jrnlr -3yq)+}m(r - !l)- Zlnyo:g Ey+ -l This gives, after simplification, 21.2,4 -,"(' :'o') - "(o I-y)l3- ,( 7 - 3yq) ):oY4 :'12.6 whichyields yi = la.Hence, y\: .16,|): .4Z,andyi : .26. The value of z* is obtained from z* : w* : (*)'u (ay+z (7:a x 9.66I Hence, Ul : .16(9.66I) : 1,.546 : 5xt (Jq: .16(9.66I) : .1546 _ x;I The equations yield xI* : .309 and x} : .647. ytr PRoBLEM sET 21.2c 1. Solve the following problem by geometric programming. Minimize z : 2xíIx3 + xlxjz + 4x! x1, x2} 0 2. Solve the following problem by geometric programming. Minimize z : 5x ilx! + x72x11 + 10x) + 2xrlx243 XI, X2, X3 ) 0 3. Solve the following problem by geometric programming. Minimize z : 2x?xí3 + 8xr3x, + 3xp2 Xl,X2ž0 4. Solve the following problem by geometric programming. Minimize z : Zxlxi3 + 4xl2x2 * xp2 xl,x2}0 21.2 Constrained Algorithms 757 .4 Stochastic Programming Stochastic programming deals with situations where some or all parameters of the Problem are random variables. Such cases seem typical of real-life problems, where it may be difficult to determine the values of the parameters with certainty. The idea of stochastic programming is to convert the probabilistic problem into an equivalent deterministic situation. This section deals with chance-constrained programming, defined as Maximize , : f,r,*,j:1, subject to 2,...,ffiixj ž0, foralli The name "chance-constrained" follows from the fact that each constraint is realized with a minimum probability of I - or 0 { o, < 1. It is assumed that all a,, and, bi arc random variables. Three cases are considered: 1. Only au is random for all l andi. 2. Only Ď, is random for all i. 3. Both ailand biare random for all i and j. In all three cases, it is assumed that the parameters are normally distributed with known means and variances. Case 1. Each aii || normally distributed with mean E{ou}, variance var{ar}, and cov{a,,, a1,,} of ailand a11,. consider the lth constraint =1-o, and define Then /r; is normally distributed with E{h,}: i E{a}x1 j:1 var{h}: XrD;X X : ("r, ... , xr)' D;: cov{ai1, air} ,aria,,} "{á aijxj šr,} = I - ou i: !, r{),,,-, = r,} h,: io,1r1 i:1, where lth covariance matrix | ,or{o,r} ... l,: \cov{a;,, a;1} Chapter 21 Nonlinear Programming Algorithms Now ( h, - E{h} b, - E{h,)) P{h, =b,}:P) ' '" --}=1-ai I \Á-flr,} \/;^4h} J - . 
h, - E{h,) , where' is standard normal with meanzeto and variance one.This means that 1/var.{/t1} / b, - E{/,,.}\ P{h, = b,}: r( _l,-----:--|l l) \ Ývar{h} / where F represents the CDF of the standard normal distribution. Let K, be the standard normal value such that F(K"): 1 - ct; Then the statement P{h, ' b,) > 1 - ct; is realizedlf. and only if b-EW=K* Ý"^4W This yields the following nonlinear deterministic constraint: ialory*, + K*ÝťD1=b, j:I For the special case where the normal distributions are independent, cov{a,1, ori,} : 0 and the last constraint reduces to frp,,yr, *o*r ffi=u,1:l ' /:l This constraint can be put in the separable programming form (Section 21.2.I) by using the substitution for all l Thus, the original constraint is equivalent to n :E{r,,.}", l Ko,y,=b, í=t and { )var{a} -!?:0j:1, Case 2. Only Ď; is normal with mean E{b,} and variance var{b,}.ft" analysis is similar to that of case ].. consider the stochastic constraint ,{r,= 2o,r,} = *, l, : ll )var{a;}x|,Y i:1 21.2 Constrained Algorithms 75g As in case 1, This can hold only if }=-, Žo,,*, - E{b} j:l Ý""rW Žo,,*, - E{b} j:I --17 Ýr^r{b} -- "o' Thus, the stochastic constraint is equivalent to the deterministic linear constraint Žo,,*, =E{b}+K,Ýrupb}j:1, Case 3, In this case all ailandbiarenormal random variables. Consider the constraint ,a,lX1 = b, j:I This may be written n Zo,i*i -bi=0j=t Because all ailand biare norm a1,2i: iixi - bris also normal. This shows that the chance constraint reduces to the situation in'cáse 1 and is treated in a similar manner. Examp|e 21.2-6 Consider the chance-constrained problem Maximize z : 5x, -| 6x2 t 3x3 subject to P{alx1 l a9x2 l a3x3 = 8} > .95 P{5x1 i-x2*6xr=b2}>.10 Xl, X2, X: ž0 Assume that the aris are. independent normally distributed random variables with the following means and varianceŠ: E{a,1,} : 1,, E{alr} : 3, E{a3} : 9 var{a1} : 25, var{ap} : 16, var{a13} - 4 The parameter Ď, is normally distributed with mean7 and,variance 9. From standard normal tables in Appendix D, Ko,: K.os = 1,.645, Ko, : Kn = I.Z85 760 Chapter 2'l Nonlinear Programming Algorithms The two constraints are converted deterministically to x, * 3x2 t 94 + 1.645ffi, g 5q-|x2 l64<7 + 1.285(3):].0.855 If we let !2:25x?r+l6x?2+4x?3 the problem becomes Maximize z : 5x1 t 6x2 -l 3x3 subject to x1 *3x2t94+I.645y<8 25x]+1,6x|+a*?-!2:0 5x1 -| x2 l 64 < 10.855 X1,, X2, Xz, Y ž 0 which can be solved by separable programming. Excel optimum solution of this nonlinear problem is given in Figure 21.6 (file ch2]_SolverStochasticProgramming.xls). Only the left-hand side of constraint2 is nonlinear and is entered in cell F7 as =25 * B1-2 ^ 2 +16 * cL2 ^ 2 + 4* D1-2 ^ 2 -E12 ^ 2 FlGURE 21.6 Excel solution of the stochastic programming problem of Example 21,,2-6 21.2, 21.2 Constrained Algorithms 761 PRoBLEM sET 21.2D 1. Convert the following stochastic problem into an equivalent deterministic model. MaximizeZ:xI*2x2*5x3 subject to P{ap1 l 3x2 * a3x3< 10} > 0.9 P{7*, -l 5x2 * xs - b2} > 0.I XI, X2, X3 ž 0 Assume that al and a3 are independent and normally distributed random variables with means E{a} : 2 and E{or}: 5 and variances var{a1}: 9 and var{a3} : !6.Assume further that bris normally distributed with mean 15 and variance25. 2. consider the following stochastic programming model: MaximizeZ:xI+x}+x, subject to P{x! + arx) + or n = 10} > 0.9 XI, X2, rr: > 0 The Parameterc a2and a3are independent and normally distributed random variables with means 5 and 2,andvariance 16 and 25,respectively. 
Convert the problem into the (deterministic) separable programming form. 21.2.5 Linear Combinations Method This method deals with the following problem in which all constraints are linear: subject to Maximize z : í(ě) Ax0 The Procedure is based on the steepest ascent (gradient) method (Section 2I.t.2). However, the direction specified by the gradient vector may not yield a feasible solu_ tion for the constrained problem. Also, the gradient vector will not necessarily be null at the oPtimum (constrained) point.The steepest ascent method thus must be modified to handle the constrained case. Let Xk be the feasibte trial point atiteration k. The objective function/(X) can be expanded in the neighborhood of xk using Taylor's series. This gives /(x) =/(x*) + V/(xkxx - xl : ff(xo) - V/(xoX) + vflxr)x The Procedure calls for determining a feasible point X : X* such that/(X) is maximized subject to the (linear) constraints of, the problem. Because /(Xk) - Ýirxxr is a con_ stant, the Problem for determining X- reduces to solving the ioilowing lirreai program: Maximize wÁX): V/(xoX subject to Ax0 Chapter 21 Nonlinear Programming Algorithms Given wl,is constructed from the gradient of /(X) atXk, an improved solution point can be secured if and only if vylX-) > ,o(Xo). From Táylor's expansion, the condition does not guarantee that /(X-) > /(X*) unless X- is in the neighborhood of Xr. However, given }ť/.(X-) > w,,(Xo), there must exist a point 51k+1 on the line segment (Xo, X-)such that /(Xo*') > /(Xn).The objective is to determine Xk*1. Define ;1k+1 :(1 - rfio + rX*:Xk+ r(X- -Xk), 01r<1 This means that;lk+1 is a linear combination of X and X*. Because Xk and X* are two feasible points in a convex solution space,1k+1 is also feasible. By comparison with the steepest ascent method (Section 21,.1,.2),the parameter r represents the step size. The poin1 Yft+1 is determined such that/(X) is maximized. Because Xk*1 is a function of r only,;1k+1 is determined by maximizing h(r): /(Xo + r(X- - Xft) The procedure is repeated until, at the kth iteration, wr(X-) < ,o(Xo). At this point, no further improvements are possible, and the process terminates with Xr as the best solution point. The linear programming problems generated at the successive iterations differ only in the coefficients of the objective function. Sensitivity analysis procedures presented in Section 4.5 thus may be used to carry out calculations efficiently. Example 21,2-7 Consider the quadratic programming of Example 21,.2-3. Maximize /(X) : 4x1 t 6x, - 2r', - 2x,,x2 - 2*3 subject to x1l2x2<2 XyX2>0 Let the initial trial point beX0 : (+, };, *trictr is feasible. Now V/(X) : (4 - 4x, - 2x2, 6 - Zxt - 4xz) Iteration 1. V/(x) : (1, 3) The associated linear program maximizes }ť1 : x1 l 3x2 subject to the constraints of the original problem. This gives the optimal solution X* : (0, 1).The values of w1 at X0 and X* equal 2 and 3, respectively. F{ence, a new trial point is determined as x1 : (l,L) + r[(0, 1) - G,)] : (=,Lť) The maximization of h(r) : f ,Lť) yields r1 : 1,.Thus X1 : (0, 1)with /(X1) : 4. 21.2,6 21.2 Constrained Algorithms Iteraúion 2. V/(X') : Q,2) The objective function of the new linear programming problem is w2 : 2x1 l Zx2.The oPtimum solution to this problem yields X* : Q, 0).Because the value s of wrat Xl and X* are 2 and 4, a new triai point must be determined. Thus X2 : (0, 1) + ,lQ,O) - (0, 1)] : (2r, 1 - r) The maximization of h(r):f(Zr, I-r) yields : Ž.Thus X2 : (i, ) with /(X2) = +le . Iteration 3. 
Yí(X') : (I, 2) The corresponding objective function is w3 : x1, * 2x2.The optimum solution of this problem yields the alternative solutions X* : (0, 1) and X* : (2,0). The value of w, for Fth points equals its value atX2.Consequently, no further improvements are porribl.. Tlae approximate optimum solution is X2 : (j, ;)with /(X2) = 4.1,6.This happéns to be the exact optimum. PRoBLEM sET 21.2E 1-. Solve the following problem by the linear combinations method. Minimize /(X) : x] + x) _ 3xp2 subject to 3x1 * 5x, - Xl, .2.6 SUMT Algorithm In this section, a more general gradient method is presented. It is assumed that the objective function /(X) is concave and each constraint function 8;(X) is convex. Moreover, the solution space must have an interior. This rules out both implicit and explicit use of equality constraints. The SUMT (Sequential Unconstrained Maximization Technique) algorithm is based on transforming the constrained problem into an equivalent unconstrained Problem. The procedure is more or less similar to the use of the Lagrange multipliers method. The transformed problem can then be solved using the steepest ascent method (Section 2I.1.2). To clarify the concept, consider the new function 'X2 3xz xzž rrr, : 0+0i : 0, j : I,2, ...) n j:I It for some gj + 0, Urn, : O i:l then the vectors are linearly dependent.For example, the vectors Pt : (1, 2), Pz : Q, 4) are linearly dependent because for 0t : 2 and 02 - -1,, 01P1 + 02P2: 0 A.2 A.2.1 : lla,illo* A.2.2 Types of Matrices 1. A square matrix has m : n. 2. An identity matrix is a square matrix in which the main diagonal elements are 1, and the off-diagonal elements are zeto.For example, a (3 X 3) identity matrix is given by A.2.3 element ai1 of the matrix A occupies withm rows andn columns is said to owing matrix is of size (4 X 3). MATR!cE5 Definition of a Matrix A matrix is a rectangular aríay of elements. The the ith row andith column of the array.A matrix be of size (or order) m X n. For example, the foll Io,, atz or.\ ; : ]'r, azz orr| l o1 asz o3l \oo, ou oor l I:: (,;i) A.2 Matrices 767 3. A row vector is a matrix with one row and, n columns. 4. A column vector is a matrix with m rows and one column. 5. The matrix AZ is the transpose of A if the element a,lin A equals the element ali in A7 for all i and7. For example, lt 4\ ^:I; ;l- a,,:(I 2 3\ \; ál \4 5 6) 6. A matrix B : 0 is a zero matrix if every element of B is zero. 7. Two matrices A : llo,illand B : |lb,illare equal if, and only if, they have the same size and ai : bijfor all i and j. A.2.3 MatrixArithmeticOperations In matrices only addition (subtraction) and multiplication are defined. The division, though not defined, is replaced by inversion (see Sóction A.2.6). Addition (Subtracúion) of Matrices. Two matrices A added if they are of the same síze (m X ,).The sum D : the corresponding elements. Thus, lId,JI-*, : llo,i * b,ill-,, If the matrices A, B, and C have the same size, then A+B:B*A A+(B+C) :(A+B)+C (A*B)':AT+Bz Product of Matrices. The product D : AB of two matrices, l : |laull and B : llb,ill, is defined if, and ontY if, the number of columns of A equals the number of rows of B. If Aisof size(m X r)andBisof size(r X r),thenDmustbeof sizem X n,wheremand, n are arbitrarY Positive integer values. 
In this case, the elements of D are computed as 4,i : Žo,obo,, for all i and,j k:1 For example, given ": (] 1),u: (; ,r 3) : llo,ill and B : |lb,|l can be A + B is obtained by adding (Ix7+3X8)(1 x9+3X0)\ (2 x 7 + 4 x 8) (2 x 9 + 4 x 0)) we have ": (; ;X; ', : (tr:^ '^L ,3) In general, AB + BA 3) (í x 5 + 3 x 6):l\/ \(z*5+4x6) even if BA is defined. 768 Appendix A Review of Vectors and Matrices The following general properties apply to matrix multiplication: I,,A : AIn : L,I. and I, are identity matrices (AB)C : A(BC) C(A+B) :CA+CB (A+B)C:AC+BC ct(AB) : (ctA)B : A(ctB), ct is a scalar Muttiplication of Partitioned Matrices. Let A be an (m x r)-matrix and B an(r X n)matrix. Assume that A and B are partitioned as follows: (í) - A,2,4 The partitioning assumes that the number of columns of A;i is equal to the number of rows of Ba for all l and i. Then A x B: (ŤrrPrr * 4rzBzr + Ar:B:r i ArrBrz + A,zBr, + A,rB.r) -\ l For example, )(Í) J 5 6 lar( ma Determinant of a Squ Consider the n-square r 4) 4) rix ,,I)(4 \ ,)( atri (1 /| t; \Z, Ma ix k re l\ atríl + (z ,,(l) z4 ,9 + ( 4+2 - (! ;Xá) ,,,:) (".,^ (il) ): A- al,z azz : an2 Next, define the product Pirjr.,.i, : aljrazjr", anj, such that each column and each row of A is represented exactly once among the subscripts oíjt, jz, ... , and 7,. I'{ext, define _ | t, jrjz... j, evenpermutation e ji, j, - \o, jtiz... j,oddpermutation Let p represent the summation over all n! permutations;then the determinant of A, det A or IA | , is computed as ? = j,jz...j,, Pj,j,,,.i, ): A,2 Matrices 769 As an illustration, consider Then lAl : an(azz azz - azs azz) - an(azt ar - ast azl) * a13(a21 azz - azz ast) The properties of a determinant are: 1. The value of a determinant is zero if every element of a row or a column is zero. 2. lAl : lA'l. 3, If B is obtained from A by interchanging any two rows or any two columns, then IBl : -lAl. 4. If two roWS (or two columns) of A are multiples of one another, then |Al : 0. 5, The value of IAI remains the same if scalar ct times a column (row) vector is added to another column (row) vector. 6, If everY element of a column or a row of a determinant is multiplied by a scalar ct, the value of the determinant is multiplied by ct. 7. If A and B are two n-square matrices, then o : (::,, i,,: :,:\ \o, ozz o,l lABl : lAllBl Definition of the Minor of a Determinant. The mino, Mii determinant |A| is obtained from the matrix A by striking column of A. For example, for M.. : |o,, o,r| Lz lau azsl' of the element a,, in the out the ith row and ith Definition of the Adjoint Matrix. Let Ai1 : (I)t+ix4,,be defined as the cofactor of the,element a,,of the square matrix A. Theň, the adjoinimatrix of A is the transpose of llár||anO is deiined as: adjA :||l,,||r: For example, if At atz orr\ t azz arr l t asz orr l Io, A:Io, \o, ,,, : !:,,: ::I, ?) Io, A^ IA,, Az, l;: \o,,, Az, (:1;) \s 3 4l 770 Appendix A then, Au,: 4-3X2):_2,...,of A.2.5 Nonsingular Matrix A matrix is of a rank r if the largest square affay in the matrix having a nonzero determinant is of size r. A square matrix with a nonzero determinant is called a full-rank or nonsingular matrix. For example, consider lt 2 3\ n:Iz 3 4l \s 5 7l A is a singular matrix because lAl : I x (2l - 20) - 2 x (I4 - Iz) + 3 x (10 - 9) : 0 But A has a rank r : Zbecause /t 2\ (' 1):-I+0 A.2.6 lnverse of a Nonsingular Matrix If B and C are two n-square matrices such that BC : CB : I, then B is called the inverse of c and c the inverse of B. The common notation for the inverse is B-1 and g-t. 
Theorem If BC : l and B is nonsingular, then C : B-1, which means that the inverse is unique. Proof, By assumption, BC:I B-IBC : B-1I IC : B-1 C:B-1 Two important results can be proved for nonsingular matrices. 1. If A and B are nonsingular n-square matrices, then (ABr' : 3-14-t 2. If A is nonsingular, then AB : AC implies that B : C. Review of vectors and Matrices (-1r(3 x 4 - 2 x 3) : 6, An: (-1)3(2 x la 1-5\ adj A:|-z -5 4| \-3 3 -Ll A.2., then or or A.2 Matrices 771 Matrix inversion is used to solve nlinear|y independent equations. Consider |o,, an o,,\l*,\ /a \ l'r, azz ... ,:rll^ l, \,., o|., : ,,..)\':,):|'r') where -ri rePresents the unknowns and aii and b; are constants. These zz equations can be written in matrix form as AX:b Because the equations are independent, A must be nonsingular. Thus A-lAx : A-lb rixt ular 1 2 n A-lb latrir nguli Al An Á.- =A- Matl Lsingl lo, Il, 'lo, x-. ofM nonsi ,I Hill nverse :nA,a rdi A: v the Gi, I Ň Methods of Computing AdjoinúMatrix Method. or lnl en adj ":(1 i il :(.i 1 _í)lAl : -z A.2.7 r matrix of size n Ar, Á,,\ A, A,r l ::,lAr, A,,l _i\ i) partitioned 4-1 _ a For example, for adj A Hence l . 1 F, l-g _-t 4__ +( _i -1 -;\ : /-i -! -,\_; ; -il -\]_! \i -i Row operations (Gauss-Jordan) Method. Consider the where A is nonsingular. Premultiplying by A-r, we obtain (A-lAlA-'I) : (IlA-,) matrix (A lI), tTORA's inverse module is based on LU decomposition method. See press and associates (1986). Appendix A Review of Vectors and Matrices Thus, applying a specific sequence of row transformations, A is changed to I and I is changed to A-1. To illustrate the procedure, consider the system of equations: (i i i)(i):(i) The solution of X and the inverse of basis matrix can be obtained directly by considering a-'(AlIlb) : (IlA-' lA-'b) The following iterations detail the transformation operation: Iteration 0 Iteration 1 a J -4 -5 Iteration 2 -5 4 7 Iteration 3 This gives 11 : ], *, : |, and xs : ?.The inverse of A is given by the right-hand-side matrix, which is the same as obtained by the method of adjoint matrix. Product Form of the Inverse. Suppose that two nonsingular matrices, B and Bn"*,, differ exactly in one column. Further, assume that B-1 is given. Then the inverse B*-1*, can be computed using the formula B*1-, : EB-1 The matrix E is computed in the following manner. If the column vector P7 in B is replaced with the column vector P, to produce Bn.*,, then E is constructed as an m-identity matrix with its rth column replaced by 23 32 34 lt lz \: lt 2 lo -1 \o -3 00 10 01 lt 0 [lá (, i) 100 010 00I 100 -2 10 -3 01 -3 2 0 2-1,0 3 -3 I 61,5 -1-jj Ztr_!,777 1_1 1 7,7,7 |-i) Ii) |1) A.2 Matrices 773 -1, = (B-'P), , ) (B-lPl). + 0 +rth place ffť)If (B-lP) : 0, then B;j-, does not exist. To prove the validity of the formula B[1,.,, define F as an m-identity matrix whose rth column is replaced by B-lPr-that is, F : ("r, er-1, B-lP7l ar+Il ... , e-) Because Bo"*t differs from B only in that its rth column is replaced with P7, then Brr"*, : BF Thus, B,J-,:(BF)-':p-13-t The formula follows by setting E : F-1 The product form can be used to invert any nonsingular matrix, Bo : I : Bo1. Next, construct 81 as an identity matrix with its first col with the first column in B. Then Bi':ErBo':ElI:Et In general, if we construct B, as an identity matrix with its first i columns the first i columns of B, then B, ' : E,Bi-', : E;E;_1B i}z : : E;E;_l ... E1 This means that for the original matrix B, B-1 : E-E-- l... E1 with 1aced lwith f the 1aced tart reDp :d o B. 
St mnl eplar B. um re1 IfcThe following example illustrates the application of the prod inverse. consider lz 1 0\ n:lo 2 0l \+ 0 1l orm Iteration 0 Bo : Bo1 :(i 00\ 1 0l 0 Il 774 Appendix A Iteration 1 Review of vectors and Matrices lteration2 Bllp, : Ez: B-1 - Bi': ErBi' Partitioned Matrix Method. are partitioned as follows: If B is the inverse of A, th Br: Bo'P, Ei: Bi' : Suppose that the two n-nonsingular matrices A and B L, (px nonsingular L, B, (px lz 0 0\ lo 1 0l \+ 0 Il l 2\ *,:, : pr : (,l,J l+ 0 0\ I-2 1 0l \jOtl l' 0 0\ l o 1 0l \-z 0 tJ :(-i)-,:, ft -:0\ (: i:] 1i)( iii) :(i li) '),o,,di d\ ,,) we have ro\ z ol:n 0 tl Ui i)(á) -t)lz\ o * tnl o : --1-4,rJ 1 lz B,: l0 \+ (l :(i A.3 X Ir, 'A1 (p r ,(n B11 @ B^ (q'. L frten A B p) p) A.3 Quadratic Forms 775 Also, from BA : I,, we get ArrBr, * ApB27:I, ArrBrz * A9ft22:0 BrrA1 i-B22A21 :0 BuA, lB22A22:I, Because A11 is nonsingular, A-j exists. Solving for B11, Bn, Bzt,,andBrr,,we get Btt : Ali + (AiÍAlrD-'(A21A-,i) Bn: -(AillA12)D-1 Bzt: -n-l(arrAlr) Bzz : D-I where such that Att : (1), A2 ^)In this case, A-ri : 1 and D- Thus, ) ) which directly give B : A-1 QUADRATIC FORMS Given X : @r, *r, ... , xr)' a,i titic J ,r(l art Lz, ,P& l2 l,ls aS, (| D:A, To illustrate the use of these formu (; '^)-(1),D(2.3): (_1 -1) -, : -+(-; -i) : (j i) BIl :(-9),B,r:(-+ /?\ / l Btz:(]),B,r:( ! \t/ \-7 "Arr)on the matrix \ I l : (2,3), A,,: (3), n : (tr B Appendix A and Review of vectors and Matrices the function 0o<):X'AX: 'io,,*,r, i:I i:I is called a quadratic form. The matrix A can always be assumed symmetric because each element of every pair of coefficients ai1 ana ál,Q + ) can be rlplaced by g+"r without changing O(X). As an illustration, the quadratic form with unsymmetric A is the same as lt 1 z\/x,\ O(x) : (xt, xr, xr)l I 7 3 lÍ ,; l \z 3 zl\*,l with symmetric A.We will assume henceforth that A is always symmetric. The quadratic form is said to be L Positive-definite if o(x) ) 0 for all X + 0. 2. Positive-semidefinite it Q(X) > 0 for all X, and there exists X + 0 such that o(D : 0. 3. Negative-definite if -0(J() is positive-definite. 4. Negative-semidefinite if - O(X) is positive-semidefinite. 5. Indefinite in all other cases. It can be proved that the necessary and sufficient conditions for the realization of the preceding cases are 1_. Q(J{) is positive-definite (semidefinite) if the values of the principal minor determinants of A are positive (nonnegative).2 In this case, A is said to be positivedefinite (semidefinite). ",.) Ir,, an a,:Io?, o?, l:,. \o" an2 '*):@,,,,-,('; ')E) A.4 sELl PRc 2The kth principal minor determinant of A,", is defined by |o,, atz o,ol |'i o|, '?r|. k : |.2. |oi^ o, ,rrl ) Problems 777 Q(ě) is negative-definite if the value of frth principal minor determinants of A has the sign of (-1)o, k : 1-, 2, ... , n.In thiJca.", A is called negative-definite. Q(ě) is a negative-semidefinite if the kth principal minor determinant of A is either zero oí has the sign of (-1Y, k : l, 2, . . . ) n. A.4 CONVEX AND CONCAVE FUNCTIONS A function/(X) is said to be strictly convex if, for any two distinct points X1 and X2, where0<}"< convex. A special Section A.3) /(X):Cx+xrAx where C is a constant vector and A is a symmetric matrix. It can be proved that /(X) is strictlY convex if A is positive-definite and/(X) is strictly concave if A i; negative-definite. SELECTED REFERENCES Hadley, G., Matrix Algebra,Addison-Wesley, Reading, MA, 1961,. Hohn, F., Elementary Matrix Algebra,Znd ed.,Macmillan, New York, 1964. Press, W., B.Flannery, A. 
Teukolsky, and W. Vetterling, Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, Cambridge, England ,1986. PRoBLEMs A-1. ng vectors are linearly dependent. A-2. A- find (a) A+7B (b) 2A - 3B (c) (A + 7B)r A-3. In Problem A-2, show that AB + BA /(\X, + (1 - ).)X' < },/(X,) + (1 - N)/(X' 1. Conversely, a function /(X) is strictly concave if -/(X) is strictly case of the convex (concave) function is the quadratic form (see Show that the followi_ (a) ()(:)(_i) b (ilft) Given |t 4 9\ ll -1 2\ Iz 5 -8 l.r:lq i sl \:7 2l \l 6Iól -J 4 Appendix A Revíew of Vectors and Matrices A-4. Consider the partitioned matrices 1 2 5 -6 il-:( Find AB using partitioned matrix manipulation. A-5. In Problem A-2, find A-1 and B-1 using the following: (a) Adjoint matrix method (b) Row operations method (c) Product form of the inverse (d) Partitioned matrix method A-6. Consider Suppose that the third vector P3 is replaced with the V: : PI + zP2.This means that the resulting matrix is singular. Show how the product form of the inverse discovers the singularity of the matrix. ^-7. Use the product form of the inverse to verify whether each of the following equations has a unique solution, no solution, or infinity of solutions. (a) xr*2x2:3 x1 l 4x2:2 (b) xr*2x2:5 -X1 -Zxr: _5 (c) x1 + xl: 5 4x1 -| x2 l 34: 8 x1 l 3x2 - Zxr: 3 A-8. Verify the formulas given in Section A.2.7 for obtaining the inverse of a partitioned matrix. A-9. Find the inverse of " : (i ), u nonsingular A-10. Show that the following quadratic form is negative-definite. Q@t, x): 6xl -l 3x2 - 4xp2 - 2*? - 3*3 A-11. Show that the following quadratic form is positive-definite. Q(xr, xz, xs) :2x! + 2x| + 3x! + 2xp2 l 2x2x3 A-I2. Show that the function f (*) : ď is strictly convex over all real values of x. A-13. Show that the quadratic function f@r, *r, xs): 5x| + 5x} + 4x] + 4xg2 l 2x2x3 is strictly convex. A-14. In Problem A-I3,show that -f(xy x2, h) is strictly concave. ":(íži),-,:(i1-1) B.1 B.1 &pffiffi& ffi K ffi TORA Primer The TORA Optimization System is a Windows@-based software designed for use with many of the techniques presented in this book. An important feature of the system is that it can be used to solve problems in a tutorial or automated mode. The tutorial mode is particularly useful because it allows concentrating on the main concepts of the algorithms while relieving you of the burden of the tedious computations that generally characterize OR algorithms. TORA is totally self-contained, in the sense that all the instructions needed to drive the software are represented by menus, command buttons, check boxes, and the like. It requires no user manual. Ir{evertheless, a summary of the basic features of the system will be given in this Appendix. TORA is automated for screen display settings of 800 x 600 and 1024 x 768 Pixels. The second setting is recommended because it produces a more proportionate layout of the screen. MAIN MENU Figure B.1 shows the Main Menu screen. A selection from this menu will lead to a new screen for selecting the input mode of the problem. FlGURE B.1 Main menu screen 779 B,2 Appendix B TORA Primer |NPUT MODE AND FORMAT The input mode screen (Figure B.2) does two things: 1. It allows you to enter a new set of data for the problem (default) or read the data from an existing file that was originally created by TORA. 2. It allows you to select the format (decimal or scientific) as well as control the desired level of accuracy in inputting the data. 
APPENDIX B

TORA Primer

The TORA Optimization System is a Windows-based software package designed for use with many of the techniques presented in this book. An important feature of the system is that it can be used to solve problems in either a tutorial or an automated mode. The tutorial mode is particularly useful because it allows you to concentrate on the main concepts of the algorithms while relieving you of the burden of the tedious computations that generally characterize OR algorithms.

TORA is totally self-contained, in the sense that all the instructions needed to drive the software are represented by menus, command buttons, check boxes, and the like. It requires no user manual. Nevertheless, a summary of the basic features of the system is given in this appendix.

TORA is automated for screen display settings of 800 x 600 and 1024 x 768 pixels. The second setting is recommended because it produces a more proportionate layout of the screen.

B.1 MAIN MENU

Figure B.1 shows the Main Menu screen. A selection from this menu leads to a new screen for selecting the input mode of the problem.

FIGURE B.1 Main menu screen

B.2 INPUT MODE AND FORMAT

The input mode screen (Figure B.2) does two things:

1. It allows you to enter a new set of data for the problem (default) or read the data from an existing file that was originally created by TORA.
2. It allows you to select the format (decimal or scientific) as well as control the desired level of accuracy in inputting the data.

The decimal format (default) is represented by the code NNNNN.DD, whereas the scientific format is represented as .NNNNNeDD. The default values for the number of N's and the number of D's are 5 and 2, respectively. These values can be changed to any desired (reasonable) values.

FIGURE B.2 Input mode screen

B.3 INPUT DATA SCREEN

Inputting the appropriate size of the problem (top left of the input screen) automatically exposes the input data grid (Figure B.3). The grid entries are designed to match the data of the selected model (e.g., linear programming or transportation model). Regardless of the module used, the input grid is edited very much like a spreadsheet.

FIGURE B.3 Input data screen

The design of the grid allows inserting or deleting a column or a row, as well as copying and pasting the contents of a row or a column. To achieve this, first click the heading of the target column or row; then use the EditGrid menu to effect the desired result. The menu uses the suggestive key combinations CTRL+I, CTRL+D, CTRL+C, and CTRL+P for insert, delete, copy, and paste. Any of these operations may be undone using CTRL+U.

Once all the data have been entered, press the appropriate command button and follow the instructions to save the data in a file, if desired.

B.4 SOLVE/MODIFY MENU

The Solve/Modify menu (Figure B.4) provides options for solving the selected problem. An important feature of TORA is that it allows solving the problem either automatically or in a tutorial (user-guided) mode. All these options are generated in a logical manner using submenus. The Modify entry allows you to go back to the Input Data screen to make changes in the original data of the problem.

FIGURE B.4 Solve/modify screen

B.5 OUTPUT FORMAT

The Output Format screen (Figure B.5) controls the accuracy of the output results. The details of the output format are the same as in the input format (Section B.2).

FIGURE B.5 Output format screen

B.6 OUTPUT RESULTS

The output screen provides the results either in text format or graphically, depending on the type of problem being solved (Figures B.6 and B.7). Both text and graphical results can be printed using a command button.
FIGURE B.6 Text output screen

FIGURE B.7 Graphical output screen

APPENDIX C

Statistical Tables

TABLE C.1 Normal Distribution Function

    F(z) = (1/sqrt(2 pi)) * integral from -infinity to z of e^(-x^2/2) dx

[The table lists F(z) for z = 0.00 to 3.49 in steps of 0.01 (rows z = 0.0, 0.1, ..., 3.4; columns 0.00 through 0.09), together with the supplementary entries F(3.5) = 0.9998, F(4.0) = 0.99997, F(5.0) = 0.9999997, and F(6.0) = 0.999999999. Representative values: F(0.00) = 0.5000, F(1.00) = 0.8413, F(1.96) = 0.9750, F(2.00) = 0.9772, F(3.00) = 0.9987.]

Source: Miller, I., and J. Freund, Probability and Statistics for Engineers, Prentice Hall, Upper Saddle River, NJ, 1985.
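Any entry of Table C.1 can be reproduced from the error function available in Python's standard library. The following sketch is an added illustration (the function name phi is our own):

    # Table C.1 tabulates the standard normal cdf
    #   F(z) = (1/sqrt(2*pi)) * integral_{-inf}^{z} exp(-x^2/2) dx.
    import math

    def phi(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    print(round(phi(1.96), 4))   # 0.9750, as in the table
    print(round(phi(0.00), 4))   # 0.5000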
TABLE C.2 t(alpha, nu) (Student t) values*

     nu    alpha = 0.10   alpha = 0.05   alpha = 0.025   alpha = 0.01   alpha = 0.005
      1       3.078          6.314          12.706          31.821         63.657
      2       1.886          2.920           4.303           6.965          9.925
      3       1.638          2.353           3.182           4.541          5.841
      4       1.533          2.132           2.776           3.747          4.604
      5       1.476          2.015           2.571           3.365          4.032
      6       1.440          1.943           2.447           3.143          3.707
      7       1.415          1.895           2.365           2.998          3.499
      8       1.397          1.860           2.306           2.896          3.355
      9       1.383          1.833           2.262           2.821          3.250
     10       1.372          1.812           2.228           2.764          3.169
     11       1.363          1.796           2.201           2.718          3.106
     12       1.356          1.782           2.179           2.681          3.055
     13       1.350          1.771           2.160           2.650          3.012
     14       1.345          1.761           2.145           2.624          2.977
     15       1.341          1.753           2.131           2.602          2.947
     16       1.337          1.746           2.120           2.583          2.921
     17       1.333          1.740           2.110           2.567          2.898
     18       1.330          1.734           2.101           2.552          2.878
     19       1.328          1.729           2.093           2.539          2.861
     20       1.325          1.725           2.086           2.528          2.845
     21       1.323          1.721           2.080           2.518          2.831
     22       1.321          1.717           2.074           2.508          2.819
     23       1.319          1.714           2.069           2.500          2.807
     24       1.318          1.711           2.064           2.492          2.797
     25       1.316          1.708           2.060           2.485          2.787
     26       1.315          1.706           2.056           2.479          2.779
     27       1.314          1.703           2.052           2.473          2.771
     28       1.313          1.701           2.048           2.467          2.763
     29       1.311          1.699           2.045           2.462          2.756
    inf.      1.282          1.645           1.960           2.326          2.576

*Abridged by permission of Macmillan Publishing Co., Inc., from Statistical Methods for Research Workers, 14th ed., by R. A. Fisher. Copyright (c) 1970 University of Adelaide.

TABLE C.3 chi-square (alpha, nu) values*

     nu   a = 0.995   a = 0.99   a = 0.975   a = 0.95   a = 0.05   a = 0.025   a = 0.01   a = 0.005
      1   0.0000393   0.000157   0.000982    0.00393     3.841      5.024       6.635      7.879
      2   0.0100      0.0201     0.0506      0.103       5.991      7.378       9.210     10.597
      3   0.0717      0.115      0.216       0.352       7.815      9.348      11.345     12.838
      4   0.207       0.297      0.484       0.711       9.488     11.143      13.277     14.860
      5   0.412       0.554      0.831       1.145      11.070     12.832      15.086     16.750
      6   0.676       0.872      1.237       1.635      12.592     14.449      16.812     18.548
      7   0.989       1.239      1.690       2.167      14.067     16.013      18.475     20.278
      8   1.344       1.646      2.180       2.733      15.507     17.535      20.090     21.955
      9   1.735       2.088      2.700       3.325      16.919     19.023      21.666     23.589
     10   2.156       2.558      3.247       3.940      18.307     20.483      23.209     25.188
     11   2.603       3.053      3.816       4.575      19.675     21.920      24.725     26.757
     12   3.074       3.571      4.404       5.226      21.026     23.337      26.217     28.300
     13   3.565       4.107      5.009       5.892      22.362     24.736      27.688     29.819
     14   4.075       4.660      5.629       6.571      23.685     26.119      29.141     31.319
     15   4.601       5.229      6.262       7.261      24.996     27.488      30.578     32.801
     16   5.142       5.812      6.908       7.962      26.296     28.845      32.000     34.267
     17   5.697       6.408      7.564       8.672      27.587     30.191      33.409     35.718
     18   6.265       7.015      8.231       9.390      28.869     31.526      34.805     37.156
     19   6.844       7.633      8.907      10.117      30.144     32.852      36.191     38.582
     20   7.434       8.260      9.591      10.851      31.410     34.170      37.566     39.997
     21   8.034       8.897     10.283      11.591      32.671     35.479      38.932     41.401
     22   8.643       9.542     10.982      12.338      33.924     36.781      40.289     42.796
     23   9.260      10.196     11.689      13.091      35.172     38.076      41.638     44.181
     24   9.886      10.856     12.401      13.848      36.415     39.364      42.980     45.558
     25  10.520      11.524     13.120      14.611      37.652     40.646      44.314     46.928
     26  11.160      12.198     13.844      15.379      38.885     41.923      45.642     48.290
     27  11.808      12.879     14.573      16.151      40.113     43.194      46.963     49.645
     28  12.461      13.565     15.308      16.928      41.337     44.461      48.278     50.993
     29  13.121      14.256     16.047      17.708      42.557     45.722      49.588     52.336
     30  13.787      14.953     16.791      18.493      43.773     46.979      50.892     53.672

*This table is based on Table 8 of Biometrika Tables for Statisticians, Vol. 1, by permission of the Biometrika trustees.
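Entries of Tables C.2 and C.3 can also be reproduced in code. The sketch below is an added illustration and assumes the SciPy library is available; t(alpha, nu) and chi-square(alpha, nu) are the points with right-tail area alpha, i.e., the (1 - alpha) quantiles.

    # Reproducing table entries with SciPy (assumed available).
    from scipy.stats import t, chi2

    print(round(t.ppf(1 - 0.05, df=10), 3))      # 1.812, as in Table C.2
    print(round(chi2.ppf(1 - 0.05, df=10), 3))   # 18.307, as in Table C.3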
APPENDIX D

Partial Answers to Selected Problems

CHAPTER 1

Set 1.1a
4. ...
5. (a) Jim's alternatives: throw a curve or a fast ball. Joe's alternatives: prepare for a curve or a fast ball. (b) Joe wants to increase his batting average; Jim wants to reduce it.

CHAPTER 2

Set 2.1a
1. (a) -x1 + x2 >= 1. (c) ...
3. (a) ... tons/day.

Set 2.2a
1. (a and e) See Figure D.1.
2. (a and d) See Figure D.2.
5. Let x1 = play hours per day and x2 = work hours per day.
   Maximize z = 2x1 + x2 subject to
   x1 + x2 <= 10, x1 - x2 <= 0, x1 <= 4, x1, x2 >= 0
   Optimum: (x1, x2) = (4, 6), z = 14.

FIGURE D.1
FIGURE D.2

Set 2.2b
2. Optimum solution: x1 = 450 lb, x2 = 350 lb, z = $450.
5. Let x1 = thousand bbl/day from Iran and x2 = thousand bbl/day from Dubai.
   Minimize z = x1 + x2 subject to
   -.6x1 + .4x2 <= 0, .2x1 + .1x2 >= 14, .25x1 + .6x2 >= 30, .1x1 + .15x2 >= 10, .15x1 + .1x2 >= 8, x1, x2 >= 0
   Optimum: x1 = 55, x2 = 30, z = 85.

Set 2.3a
1. (b) The optimum remains unchanged for ... <= c1/c2 <= ..., c2 not 0. See Figure D.3.
3. Let x1 = cans of A1 sold per day and x2 = cans of B&K sold per day.
   Maximize z = 5x1 + 7x2 subject to x1 + x2 <= 500, 2x1 - x2 <= 0, x1 >= 100, x1, x2 >= 0
   (a) x1 = 100, x2 = 400, z = $3300. (b) View x1 >= 100 as ...; see Figure D.4.
7. Let x1 = cases of juice per day and x2 = cases of paste per day.
   Maximize z = 18x1 + 9x2 subject to 24x1 + 8x2 <= 60,000, x1 <= 2000, x2 <= 6000, x1, x2 >= 0
   (a) x1 = 500, x2 = 6000, z = $63,000. (b) 0 <= c1/c2 <= 3, c2 not 0; see Figure D.5.

FIGURE D.3 (optimum: x1 = 100, x2 = 400)
FIGURE D.4
FIGURE D.5 (optimum: x1 = 500, x2 = 6000)

Set 2.3b
1. Let x1 and x2 = numbers of type 1 and type 2 hats per day.
   Maximize z = 8x1 + 5x2 subject to 2x1 + x2 <= 400, x1 <= 150, x2 <= 200, x1, x2 >= 0
   (a) See Figure D.6: x1 = 100, x2 = 200, z = $1800 at point B.
   (b) $4 per type 2 hat in the range (200, 500).
   (c) $0 worth per unit in the range (100, infinity); hence the change to 120 has no effect.
   (d) $1 worth per unit in the range (100, 400).

FIGURE D.6 (optimum at B = (100, 200))

4. Let x1 = radio minutes and x2 = TV minutes.
   Maximize z = x1 + 25x2 subject to 15x1 + 300x2 <= 10,000, x1 >= 2x2, ...

... (b) New z = $.936 million.
4. (a) ... sq ft. (b) (3, 0, 0), (1, 1, 0), and (1, 0, 1), with respective trim losses of 0, 3, and 1 per foot. (c) The number of standard 20-ft rolls decreases by 30.
   ... ! obj function; ! constraints; A(i, j)*X(j) >= Rhs(i) ...
6. (a) Let x1, x2, x3, x4 = tons per week of brown sugar, white sugar, powdered sugar, and molasses.
   Maximize z = 150x1 + 200x2 + 230x3 + 35x4 subject to
   .76x1 + .95x2 + x3 <= 912
   x1 >= 25, x2 >= 25, x3 >= 25, 0 <= x4 <= 400
   Optimum: x1 = 25, x2 = 25, x3 = 869.25, x4 = 400, z = $222,677.50.
   (b) Worth per unit of syrup = $55.94, valid in the range (187.15, infinity).
9. (a) Let xi = dollars invested in project i, i = 1, 2, 3, 4, and yj = dollars invested in the bank in year j, j = 1, 2, 3, 4.
   Maximize z = y5 subject to
   x1 + x2 + x4 + y1 <= 10,000
   .5x1 + .6x2 - x3 + .4x4 + 1.065y1 - y2 = 0
   .3x1 + .2x2 + .8x3 + .6x4 + 1.065y2 - y3 = 0
   1.8x1 + 1.5x2 + 1.9x3 + 1.8x4 + 1.065y3 - y4 = 0
   1.2x1 + 1.3x2 + .8x3 + .95x4 + 1.065y4 - y5 = 0
   all variables >= 0
   Optimum: x1 = 0, x2 = $10,000, x3 = $6000, x4 = 0, y1 = y2 = 0, y3 = $6800, y4 = $33,642, z = $53,628.73 at the start of year 5.
   (b) 5.36%. (c) Total return is reduced by 1000 x 3.73 = $3730.
12. (a) Let xj = number of units of model j, j = 1, 2, 3.
    Maximize z = 30x1 + 20x2 + 50x3 subject to
    2x1 + 3x2 + 5x3 <= 4000
    4x1 + 2x2 + 7x3 <= 6000
    x1 + .5x2 + .33x3 <= 1500
    2x1 - 3x2 = 0, 5x2 - 2x3 = 0
    x1 >= 200, x2 >= 200, x3 >= 150
    Optimum: x1 = 324.32, x2 = 216.22, x3 = 540.54, z = $41,081.08.
    (b) Not advisable, because the dual price is $10.27 per lb. (c) Not advisable, because the dual price is $0 per lb.
15. (a) Let xiA and xiB = amounts invested in year i using plans A and B, respectively.
    Maximize z = 3x2A + 1.7x3B subject to
    x1A + x1B <= 100
    -1.7x1B + x2A + x2B <= 0
    -3x1A - 1.7x2B + x3B <= 0
    xiA, xiB >= 0, i = 1, 2, 3
    Optimum: invest $100,000 in plan B in year 1 and $170,000 in plan A in year 2. Accumulation = $510,000.
    (b) Yes; each additional dollar of investment is worth $5.10 at the end of year 3.

CHAPTER 3

Set 3.1a
1. 2 tons/day and 1 ton/day for raw materials M1 and M2, respectively.
4. Let xij = units of product i produced on machine j, i = 1, 2; j = 1, 2.
   Maximize z = 10(x11 + x12) + 15(x21 + x22) subject to
   x11 + x21 - x12 - x22 + s1 = 5
   -x11 - x21 + x12 + x22 + s2 = 5
   x11 + x21 + s3 = 200
   x12 + x22 + s4 = 250
   xij >= 0 for all i and j; si >= 0, i = 1, 2, 3, 4

Set 3.1b
2. Let xj = units of product j, j = 1, 2, 3.
   Maximize z = 2x1 + 5x2 + 3x3 - 15x4 - 10x5 subject to
   2x1 + x2 + 2x3 + x4' - x4 = 80
   x1 + x2 + 2x3 + x5' - x5 = 65
   all variables >= 0
   Optimum: x1 = 0, x2 = 65 units, all others = 0, z = $325.

Set 3.2a
2. (c) x1 = ..., x2 = ..., z = .... (e) (x1 = 0, x2 = 3) and (x1 = 6, x2 = 0).
4. The infeasible basic solutions (those with at least one negative component) are (..., -1), (8, -2), (6, -4), (..., -26), (3, -13), and (6, -16).

Set 3.3a
3. (a) All pairs but (A, B), because the associated corner points are not adjacent.
5. (a) (i) Legitimate. (ii) Not legitimate (C and I are not adjacent). (iii) Not legitimate (A is revisited).

Set 3.3b
3. x3 enters at value 1, z = 3.
9. (b) x2, x5, and x6 can increase the value of z. If x2 enters, the change in z is +20; if x5 enters, 0; if x6 enters, infinity. Next best value of z = 20.

Set 3.4a
3. (a) Minimize z = (8M - 4)x1 + (6M - 1)x2 - Ms1 - Ms3, with starting value z = 10M. (b) Minimize z = (3M - 4)x1 + (M - 1)x2, with starting value z = 3M.
6. The starting tableau is

       Basic | x1  x2  x3 | Solution
       z     | -8 -12  -1 |    0
       x4    |  1   1   0 |    4
       x5    |  4   0   1 |    8

Set 3.4b
1. Always minimize the sum of the artificial variables, because the sum represents the amount of infeasibility in the problem.
7. Any nonbasic variable having a nonzero objective coefficient at the end of Phase I cannot become positive in Phase II, because that would make the optimal Phase I objective value positive, that is, it would lead to an infeasible Phase I solution.

Set 3.5a
1. (a) A -> B -> C -> D. (b) 1 at A, 1 at B, 3 at C, and 1 at D.

Set 3.5b
1. Alternative basic optima: (0, 0, ...), (..., 0, 3), (0, 5, 0). Nonbasic alternative optima: the convex combinations a1(0, 0, ...) + a2(..., 0, 3) + a3(0, 5, 0), with a1 + a2 + a3 = 1 and 0 <= ai <= 1, i = 1, 2, 3.

Set 3.5c
2. (a) The solution space is unbounded in the direction of x2. (b) The objective value is unbounded, because a unit increase in x2 increases z by 10.

Set 3.5d
1. The most that can be produced is 275 units.

CHAPTER 4

Set 4.1a
1. (a) Let y1, y2, and y3 be the dual variables.
   Maximize w = 3y1 + 5y2 + 4y3 subject to
   y1 + 2y2 + 3y3 <= 15, 2y1 - 4y2 + y3 <= 12, y1 >= 0, y2 <= 0, y3 unrestricted
   (c) Let y1 and y2 be the dual variables.
   Minimize w = 5y1 + 6y2 subject to 2y1 + 3y2 = 1, y1 - y2 = 1, y1 and y2 unrestricted
5. The dual constraint associated with an artificial variable is yi >= -M, which (as M tends to infinity) is the same as yi being unrestricted.

Set 4.2a
1. (a) AV1 is undefined. (e) V1A = (-14, -32).

Set 4.2b
2. (a) Inverse = ...

Set 4.2c
2. Let y1 and y2 be the dual variables.
   Minimize w = 30y1 + 40y2 subject to y1 + y2 >= 5, 5y1 - 5y2 >= 2, 2y1 - 6y2 >= 3, y1 >= -M (unrestricted), y2 >= 0
   Solution: y1 = 5, y2 = 0, w = 150.
4. (a) Let y1 and y2 be the dual variables.
   Minimize w = 3y1 + 4y2 subject to y1 + 2y2 >= 1, 2y1 - y2 >= 5, y1 >= 3, y2 unrestricted
   (b) Solution: y1 = 3, y2 = -1, w = 5.

Set 4.2d
2. (a) Feasibility: (x2, x4) = (3, 15), hence feasible. Optimality: the objective coefficients of (x1, x2) = (0, 2), hence optimal.
7. (a) b1 = 30, b2 = 40. (b) a = 23, b = 5, c = -10, d = 5, e = 0.

Set 4.2e
2. (a) Both the primal and the dual are infeasible. (b) The solutions are feasible but not optimal.

Set 4.3a
2. (a) Let (x1, x2, x3, x4) = daily units of SC320, SC325, SC340, and SC370.
   Maximize z = 9.4x1 + 10.8x2 + 8.75x3 + 7.8x4 subject to
   10.5x1 + 9.3x2 + 11.6x3 + 8.2x4 <= 4800
   20.4x1 + 24.6x2 + 17.7x3 + 26.5x4 <= 9600
   3.2x1 + 2.5x2 + 3.6x3 + 5.5x4 <= 4700
   5x1 + 5x2 + 5x3 + 5x4 <= 4500
   x1, x2, x3, x4 >= 100
   Optimum: x1 = 100, x2 = 100, x3 = 138.42, x4 = 100, z = 4011.16.
   (b) Only the soldering capacity can profitably be increased, because it has a positive dual price (= .4944).
   (c) The dual prices are negative or zero; hence the lower bounds represent disadvantages.

Set 4.3b
2. The new fire truck toy is profitable, because its reduced cost = y1 + 3y2 - 4 = -2.

Set 4.4a
1. (a) No, because point E is feasible, and the dual simplex must remain infeasible until the last iteration, where it becomes feasible.
4. (c) Add the artificial constraint x1 <= M. The problem has no feasible solution.

Set 4.5a
4. Let Q be the weekly feed. Optimum: limestone = .028Q, corn = .649Q, and soybean meal = .323Q. Cost = .81221Q.

Set 4.5b
1. (a) -20 <= D2 <= 400, D3 >= -20.
5. (a) Scarce: the resistor and capacitor resources; abundant: the chips resource. (b) Worth per unit of resistor, capacitor, and chips = $1.25, $.25, and $0. (g) Increase in profit = $250; additional cost = $200; net profit = $50.
8. (b) The solution x1 = x2 = 2 + ... is feasible for all Delta > 0. For 0 <= Delta <= 3, the ratio is <= 1 and feasibility is confirmed. For 3 < Delta <= 6, the ratio is > 1 and feasibility is not confirmed. For Delta > 6, the change falls outside the ranges for D1 and D2.

Set 4.5c
1. (a) The additional constraint 4x1 + x2 + 2x3 <= 570 is redundant.

Set 4.5d
2. (a) The current solution remains optimal. (c) New solution: x1 = 2, x2 = 2, x3 = 4, z = 14.

Set 4.5e
2. (b) The optimum remains the same. (d) The optimum changes: x1 = 10, x2 = 102.5, x3 = 215, z = 665.
6. (b) Smallest unit profit for product 1 = $0. (c) New solution: x1 = 0, x2 = 165, x3 = 10, z = 4105.
9. (a) 1.25 - .25 + .5d2 >= 0 and .25 + .75 - .5d2 >= 0.

Set 4.5f
1. 42.86%.
3. (a) Fire engines are not profitable.

CHAPTER 5

Set 5.1a
4. Assign a very high cost, M, to the route from Detroit to the dummy destination.
6. (a and b) Use M = 10,000. Total cost = $49,710.
9. (a) City 1 excess cost = $13,000. Area 2 will be 2 million gallons short. Total cost = $304,000.

Set 5.2a
2. Total cost = $504.
5. Total cost = $190,040; the problem has alternative optima. Period 1: produce 500 units (400 for period 1 and 100 for period 2). Period 2: produce 600 (200 for period 2, 220 for period 3, and 180 for period 4). Period 3: produce 200 (for period 3). Period 4: produce 200 (for period 4).

Set 5.3a
1. (a) Northwest-corner: cost = $42. Least-cost: cost = $37. Vogel: cost = $32.

Set 5.3b
5. (a) Cost = $1475. (b) ... >= 3, ... >= 8, c25 <= 13, ... >= 7.

Set 5.4a
5. Use the code (city, date) to define the rows and columns of the assignment problem; for example, the assignment (D, 3)-(A, 7) means leaving Dallas on June 3 and returning from Atlanta on June 7, at a cost of $400. The cost matrix (rows (D,3), (D,10), (D,17), (D,25); columns (A,7), (A,12), (A,21), (A,28)) is

       400  300  300  280
       300  400  300  300
       300  300  400  300
       300  300  300  400

   Optimal cost = $1180. The problem has alternative optima.
6. Optimum assignment: I-d, II-c, III-a, IV-b.

Set 5.5a
4. Total cost = $1550. The problem has alternative optima.

CHAPTER 6

Set 6.1a
1. (i) (a) 1-3-4-2. (b) 1-5-4-3-1. (c) 1-3-4-5-1. (d and e) See Figure D.7.
4. 1 and 8 must appear in the center. The problem has more than one solution. See Figure D.8.

FIGURE D.7 (spanning tree; tree)
FIGURE D.8

Set 6.2a
2. (a) 1-2-5-6-4-3 or 3-1-2-5-6-4. Total length = 14 miles.
5. High pressure: 1-2-3-4-6. Low pressure: 1-5-7 and 5-9-8. Total length = 53 miles.

Set 6.3a
1. Buy a new car in 2001 and 2004. Total cost = $8900. See Figure D.9.
5. For arc (i, vi)-(i + 1, vi+1), define the arc length as the value of the units of item i. Solution: select items 1 and 2. Total value = $80. See Figure D.10.

FIGURE D.9
FIGURE D.10

Set 6.3b
1. (c) Delete all nodes but 4, 5, 6, 7, and 8. Shortest distance = 8, associated with routes 4-5-6-8 and 4-6-8.

Set 6.3c
1. (a) 5-4-2-1, distance = 12.
5. See the formulation in Figure D.11. Each arc has unit length; arrows show one-way routes. Example solution, Bob to Joe: Bob-Kay-Rae-Kim-Joe. Largest number of contacts = 4.

FIGURE D.11

Set 6.3d
1. (a) Formulation 1: the right-hand sides of the node 1 and node 5 equations are 1 and -1 (all others = 0). Formulation 2: the objective function is minimize .... Optimum solution: 1-3-4-5, distance = 90.

Set 6.4a
1. Cut 1: arcs 1-2, 1-4, 3-4, 3-5; capacity = 60.

Set 6.4b
1. (a) Surplus capacities: arc (2-3) = 40, arc (2-5) = 10, arc (4-3) = 5. (b) Node 2: 20 units; node 3: 30 units; node 4: 20 units. (c) No, because there is no surplus capacity out of node 1.
7. The maximum number of chores is 4: Rif-3, Mai-1, Ben-2, Kim-5. Ken has no chore.

Set 6.5a
1. See Figure D.12.

FIGURE D.12

Set 6.5b
1. Case 1 (lower bound not substituted out): the node equations have right-hand sides 50, -40, 20, and -30, with a lower bound of 10 on the bounded arc. Case 2 (lower bound substituted out): the right-hand sides become 20, -40, 40, and -20, with an upper bound of 10 on the substituted arc.

Set 6.5c
1. Optimum cost = $9895. Produce 210 units in period 1 and 220 units in period 3.
5. Optimal solution: total student miles = 24,300. The problem has alternative optima.

Set 6.6a
3. See Figure D.13.

FIGURE D.13

Set 6.6b
1. Critical path: 1-3-4-5-6-7. Project duration = 19.
3. Project duration = 38 days.

Set 6.6c
3. (a) Maximum delay = 10.
5. (a) Critical path: 1-3-6; duration = 45 days. (b) Red-flagged activities: A, D, and E. (c) The start of C, D, and G will be delayed by 5 days; E will not be affected. (d) Minimum equipment = 2 units.

CHAPTER 7

Set 7.1a
2. The points (1, 0) and (0, 2) are in Q, but lambda(1, 0) + (1 - lambda)(0, 2) = (lambda, 2 - 2 lambda) does not lie in Q for 0 < lambda < 1.

Set 7.1b
2. (b) Unique solution; see Figure D.14. (d) Infinity of solutions. (f) No solution.
3. (a) A basis, because det B = -4. (d) Not a basis, because a basis must include exactly three independent vectors.

FIGURE D.14

Set 7.1c
1. XB = (x2, x4)^T = (2, 1.5)^T, which is feasible.
4. Optimal z = 34 for
   Maximize z = 2x1 + 5x2 subject to x1 <= 4, x2 <= 6, x1 + x2 <= 8, x1, x2 >= 0.

Set 7.2a
1. (a) P1 must leave. (b) B = (P2, P4) is a feasible basis.
2. For the basic vector XB, {zj - cj} = CB B^-1 B - CB = CB I - CB = CB - CB = 0.
7. The number of adjacent extreme points is n - m, assuming nondegeneracy.
10. In case of degeneracy, the number of extreme points is less than the number of basic solutions; otherwise they are equal.
11. (a) New xj = ... x old xj. (b) New xj = ... x old xj.

Set 7.2b
2. (b) (x1, x2, x3) = (1.5, 2, 0), z = 5.

Set 7.3a
2. (x1, x2, x3, x4, x5, x6) = (0, 1, .75, 1, 0, 1), z = 22.

Set 7.4a
1. (c) Add the artificial constraint x2 <= M. Then
   (x1, x2) = a1(0, 0) + a2(10, 0) + a3(20, 10) + a4(20, M) + a5(0, M)
   a1 + a2 + a3 + a4 + a5 = 1, aj >= 0, j = 1, 2, ..., 5
2. Subproblem 1: (x1, x2) = a1(0, 0) + a2(..., 0) + a3(0, 12).
   Subproblem 2: (x3, x4) = b1(5, 0) + b2(50, 0) + b3(0, 10) + b4(0, 5).
   Optimal solution: a1 = a2 = 0, a3 = 1, hence x1 = 0, x2 = 12; b1 = .4889, b2 = .5111, b3 = b4 = 0, hence x3 = 28, x4 = 0.
6. Because the original problem is a minimization, we must maximize each subproblem. Optimal solution: (x1, x2, x3, x4) = (..., ..., 0, 20), z = ....

Set 7.5a
2. Maximize w = Yb subject to YA <= C, Y >= 0.

Set 7.5b
5. Method 1: Given XB = (2, 6, 2)^T and (b1, b2, b3) = (4, 6, 8), the dual objective value = 34. Method 2: Given Y = (0, 3, 2) and (c1, c2) = (2, 5), the primal objective value = 34.
7. Minimize w = Yb subject to YA >= C, Y unrestricted.

Set 7.6a
1. For -1 <= t <= 1.5, the optimal basic solution is (5, 30, 10); over the subsequent ranges of t it becomes (..., ..., 5) and then (..., 15, 20).

CHAPTER 8

Set 8.1a
1. G5: Minimize s5-, subject to 55x1 + 3.5x2 + 5.5x3 - .0675x4 + s5- - s5+ = 0.
3. Let x1, x2, and x3 = the numbers of in-state, out-of-state, and international freshmen.
   Gi: Minimize si-, i = 1, 2, ..., 5, subject to
   x1 + x2 + x3 + s1- - s1+ = 1200
   2x1 + x2 - 2x3 + s2- - s2+ = 0
   -.1x1 - .1x2 + .9x3 + s3- - s3+ = 0
   .125x1 - .05x2 - .556x3 + s4- - s4+ = 0
   -.2x1 + .8x2 - .2x3 + s5- - s5+ = 0
   All variables are nonnegative.
5. Let xj = number of production runs in shift j, j = 1, 2, 3.
   Minimize z = s+ + s-, subject to
   -100x1 + 40x2 - 80x3 + s- - s+ = 0, 4 <= x1 <= 5, 10 <= x2 <= 20, 3 <= x3 <= 5.

Set 8.2a
1. Objective function: minimize the sum of the five deviational variables. Solution: x1 = .0201, x2 = .0457, x3 = .0582, x4 = 2 cents; s5- = 1.45. The gasoline tax is $1.45 million short of its goal.
4. Let x1, x2, and x3 = lb per day of limestone, corn, and soybean meal.
   Objective function: minimize the sum of the deviational variables.
   Solution: x1 = 166.08 lb, x2 = 2778.56 lb, x3 = 3055.36 lb, z = 0. All goals are satisfied.
7. Let xj = number of units of product j, j = 1, 2, and assign a relatively high weight to the quota constraints.
   Objective function: Minimize z = 100s1+ + 100s2+ + s3 + s4
   Solution: x1 = 80, x2 = 60; s3 = 100 minutes, s4 = 120 minutes. The production quota can be met with 100 minutes of overtime on machine 1 and 120 minutes of overtime on machine 2.

Set 8.2b
2. G1 solution: x1 = .0204, x2 = .0457, x3 = .0582, x4 = 2; s5- = 1.45, all others = 0. Goals G1 through G4 are satisfied; G5 is not. The G5 problem is the same as G1 plus the constraints s1 = s2 = s3 = s4 = 0; its solution is the same as G1's, with s5- = 1.45. This means that G5 cannot be satisfied.

CHAPTER 9

Set 9.1a
1. Let xij = number of bottles of type i assigned to individual j, with i = 1 (full), 2 (half full), 3 (empty). Constraints:
   x11 + x12 + x13 = 7, x21 + x22 + x23 = 7, x31 + x32 + x33 = 7
   x11 + .5x21 = 3.5, x12 + .5x22 = 3.5, x13 + .5x23 = 3.5
   x11 + x21 + x31 = 7, x12 + x22 + x32 = 7, x13 + x23 + x33 = 7
   xij >= 0 and integer for all i and j. Use a dummy objective function. One solution:

       Individual:   1   2   3
       Full          3   1   3
       Half full     1   5   1
       Empty         3   1   3

6. Let y = the original sum of money, xj = the amount taken on night j, j = 1, 2, 3, and x4 = the amount given to each mariner in the morning.
   Minimize z = y subject to
   3x1 - y = -1, x1 + 3x2 - y = -2, x1 + x2 + 3x3 - y = -3, y - x1 - x2 - x3 - 3x4 = 4
   All variables are nonnegative integers. Solution: y = 79 + 81n, n = 0, 1, 2, ....
8. Side 1: songs 5, 6, and 8 (27 minutes). Side 2: songs 1, 2, 3, 4, and 7 (28 minutes). The problem has alternative optima.

Set 9.1b
1. Let xj = number of widgets produced on machine j, and yj = 1 if machine j is used, 0 otherwise.
   Minimize z = 2x1 + 10x2 + 5x3 + 300y1 + 100y2 + 200y3 subject to
   x1 + x2 + x3 >= 2000
   x1 - 600y1 <= 0, x2 - 800y2 <= 0, x3 - 1200y3 <= 0
   x1, x2, x3 >= 500 and integer; y1, y2, y3 = (0, 1)
   Solution: x1 = 600, x2 = 500, x3 = 900, z = $11,300.
2. Solution: site 1 is assigned to targets 1 and 2, and site 2 to targets 3 and 4; z = 18.

Set 9.1c
1. Let xj = 1 if route j is selected, 0 otherwise.
   Minimize z = 80x1 + 50x2 + 70x3 + 52x4 + 60x5 + 44x6 subject to
   x1 + x3 + x5 + x6 >= 1
   x1 + x2 + x4 + x5 >= 1
   x1 + x2 + x4 + x6 >= 1
   x1 + x2 + x5 >= 1
   x2 + x3 + x4 + x6 >= 1
   xj = (0, 1) for all j
   Solution: x5 = x6 = 1; select routes (1, 4, 2) and (1, 3, 5); z = 104. Customer 1 should be skipped in one of the two routes.
2. Solution: the committee is formed of individuals a, d, and f. The problem has alternative optima.

Set 9.1d
1. (a) Let xij = the integer amount assigned to square (i, j). Constraints:
   sum over j of xij = 15 for all i; sum over i of xij = 15 for all j
   x11 + x22 + x33 = 15; x31 + x22 + x13 = 15
   xij in {1, 2, ..., 9}, all different, enforced by either-or constraints of the form (xij >= xkl + 1 or xij <= xkl - 1).
   The problem has more than two alternative solutions; one is

       2  7  6
       9  5  1
       4  3  8

3. Solution: produce 26 units of product 1, 3 of product 2, and none of product 3, and use location 2.

Set 9.2a
2. (a) z = 23, x1 = 3, x2 = 2. (e) z = 37, x1 = 6, x2 = 1.
3. (a) z = 7.25, x1 = 1.75, x2 = 1. (e) z = 37: (x1 = 4.6, x2 = 2) or (x1 = 6, x2 = 1).

Set 9.2b
1. (a) 9 subproblems. (b) 25,739 subproblems. (Use the TORA integer programming module to generate the B&B tree.)
3. Equivalent 0-1 ILP:
   Maximize z = 18y11 + 36y12 + 14y21 + 28y22 + 8y31 + 16y32 + 32y33 subject to
   15y11 + 30y12 + 12y21 + 24y22 + 7y31 + 14y32 + 28y33 <= 43
   All variables are binary. Solution: z = 50, y12 = 1, y21 = 1, all others = 0; equivalently, x1 = 2, x2 = 1.

Set 9.2c
1. (a) A legitimate cut, because it passes through an integer point and does not eliminate any feasible integer point. You can verify this result by plotting the cut on the LP solution space.
6. (b) Optimum integer solution: (x1, x2, x3) = (5, 2, 2), z = 23. The rounded solution (5, 3, 3) is infeasible.

Set 9.3a
1. The table below gives the number of distinct employees who enter/leave the manager's office when we switch from project i to project j. The objective is to find a "tour" through all the projects that minimizes the total traffic.

       Project j:   1   2   3   4   5   6
       i = 1        -   4   6   4   8   7
       i = 2        6   -   4   4   6   5
       i = 3        4   6   -   4   6   3
       i = 4        6   6   8   -   6   5
       i = 5        5   3   7   5   -   5
       i = 6        4   4   6   6   5   -

CHAPTER 10

Set 10.1a
1. Shortest distance = 21 miles; route: 1-3-5-7.

Set 10.2a
3. Shortest distance = 17; route: 1-2-3-5-7.

Set 10.3a
2. (a) Value = 120; (m1, m2, m3) = (0, 0, 3), (0, 2, 2), (0, 4, 1), or (0, 6, 0).
5. Total points = 250. Select 2 courses from I, 3 from II, 4 from III, and 1 from IV.
7. Let xj = 1 if application j is accepted, 0 otherwise. The equivalent knapsack model is
   Maximize z = 78x1 + 64x2 + 68x3 + 62x4 + 85x5 subject to
   7x1 + 4x2 + 6x3 + 5x4 + 8x5 <= 23, xj = (0, 1), j = 1, 2, ..., 5
   Solution: accept all but the first application. Value = 279.

Set 10.3b
1. (a) Hire 6 in week 1, fire 1 in week 2, fire 2 in week 3, hire 3 in week 4, and hire 2 for week 5.
3. Rent 7 cars in week 1, return 3 in week 2, rent 4 in week 3, and take no action in week 4.

Set 10.3c
2. Decisions for the next 4 years: keep, keep, replace, keep. Total cost = $458.

Set 10.3d
3. (a) Let xi and yi = the numbers of sheep kept and sold at the end of period i, and define zi = xi + yi. Then
   f_n(z_n) = max over 0 <= y_n <= z_n of {p_n y_n}
   f_i(z_i) = max over 0 <= y_i <= z_i of {p_i y_i + f_{i+1}(2z_i - 2y_i)}, i = 1, 2, ..., n - 1

CHAPTER 11

Set 11.2a
2. (a) Total cost per week = $51.50. (b) Order 239.05 lb whenever the inventory level drops to zero. Total cost per week = $50.20.
4. (a) Choose policy 1, because its cost per day is $2.17, as opposed to $2.50 for policy 2. (b) Optimal policy: order 100 units whenever the level drops to 10 units. Cost per day = $2.00.

Set 11.2b
2. Optimal policy: order 500 units whenever the level drops to 130 units. Cost per day = $258.50.
3. Optimal policy: order 150 units whenever the level drops to 0. Cost per day = $479.17.

Set 11.2c
1. Excel solution: (y1, y2, y3, y4, y5) = (4.41, 6.87, 4.12, 7.2, 5.8).
4. L(y1, y2, y3, y4, lambda) = .... Excel solution: (y1, y2, y3, y4, lambda) = (155.3, 118.82, 74.36, 90.10, -.0564).

Set 11.3a
1. (a) 500 units are required at the start of periods 1, 4, 7, and 10.

Set 11.3b
3. Produce 173 units in period 1, 180 in period 2, 240 in period 3, 110 in period 4, and 203 in period 5.

Set 11.3c
1. (a) No, because inventory should not be held needlessly at the end of the horizon.
   (b) (i) 0 <= z1 <= 5, 1 <= z2 <= 5, 0 <= z3 <= 4; x1 = 4, 1 <= x2 <= 6, 0 <= x3 <= 4.
2. (a) z1 = 7, z2 = 0, z3 = 6, z4 = 0. Total cost = $....

Set 11.3d
1. Use the initial inventory to satisfy the demand of period 1 and 4 units of period 2, thus reducing the demands of the four periods to 0, 22, 90, and 67, respectively. Optimal solution: order 112 units in period 2 and 67 units in period 4. Total cost = $622.

Set 11.3e
1. Produce 210 units in period 1, 255 in period 4, 210 in period 7, and 165 in period 10. Total cost = $1930.

CHAPTER 12

Set 12.1a
1. (a) .15 and .25, respectively. (b) .571. (c) .821.
2. n >= 23.
3. n >= 253.

Set 12.1b
4. Let p = the probability that Liz wins. The probability that John wins is 3p, which also equals the probability that Jim wins, and the probability that Ann wins is 6p. Thus p + 3p + 3p + 3p + 6p = 1.

Set 12.1c
3. (a) 1/8. (b) ....
7. .9545.

Set 12.2a
2. K = 20.
3. P{Demand >= 1100} = .3.

Set 12.3a
3. (a) P{x >= 50} = .... (b) Expected number of unsold copies = 2.67.

Set 12.3b
1. Mean = 3.667, variance = 1.556.

Set 12.3c
1. (a) P{x1 = 1, 2, 3} = P{x2 = 1, 2, 3} = (.4, .2, .4). (b) No.

Set 12.4a
1. ....
3. .0547.

Set 12.4b
1. .8646.
3. (a) n = 6. (b) ....

Set 12.4c
1. lambda = 12 arrivals per minute; P{t <= 5 sec} = .63.

Set 12.4d
2. .0014.

CHAPTER 14

Set 14.1a
1. Weights for A, B, and C = (.44214, .25184, .30602).

Set 14.1b
2. Weights = (.331, .292, .377); select Maisa.
4. Weights = (.502, .498); select P.

Set 14.2a
2. (a) See Figure D.15. (b) EV(corn) = -$8250, EV(soybeans) = $250. Select soybeans.
6. (a) See Figure D.16. (b) EV(game) = -$.025. Do not play the game.

FIGURE D.15
FIGURE D.16

Set 14.2b
2. Let z be the event of having one defective item in a sample of size 5. Answer: P{A|z} = .6097, P{B|z} = .3903.
4. (a) Expected revenue of self-publishing = $196,000; of using the publisher = $163,000. (b) If the survey predicts success, publish the book yourself; otherwise, use the publisher.
7. Ship the lot to B if both items are bad; otherwise, ship it to A.

Set 14.2c
1. (a) Expected value = $5, hence there is no advantage. (b) For 0 <= x < 10, U(x) = 0; for x = 10, U(x) = 100. (c) Play the game.
2. Lottery: U(x) = 100 - 100p.

Set 14.3a
1. (a) All methods: study all night (action a1). (b) All methods: select action a2 or a3.

Set 14.4a
1. (a) Saddle-point solution at (2, 3). Value of the game = 4.
3. (a) 2 < v < 4.

Set 14.4b
1. Each player should mix the two strategies 50-50. Value of the game = 0.
2. Robin's payoff matrix (columns: police patrol 100%A, 50%A-50%B, 100%B):

       Rob A:   -100   -50     0
       Rob B:      0   -30  -100

   Strategy for the police: mix 100%A and 100%B with probability .5 each. Strategy for Robin: mix A and B 50-50. Value of the game = $50 (the expected fine paid by Robin).

Set 14.4c
1. (a) Optimal strategy for both teams: mix AB and CD 50-50. Value of the game = 0.
3. (a) Optimal strategy for Colonel Blotto: mix (2-0) and (0-2) 50-50; the enemy mixes (2-1) and (0-3) 50-50. Value of the game = -.5, and Blotto loses.

CHAPTER 15

Set 15.1a
2. Day 1: accept if the offer is high. Day 2: accept if the offer is medium or high. Day 3: accept any offer.

Set 15.2a
1. Year 1: invest $10,000. Year 2: invest all. Year 3: do not invest. Year 4: invest all. Expected accumulation = $35,520.
4. Allocate 2 bikes to center 1, 3 to center 2, and 3 to center 3.

Set 15.3a
3. First game: bet $1. Second game: bet $1. Third game: bet $1 or nothing. Maximum probability = .109375.

CHAPTER 16

Set 16.1a
1. (a) Order 1000 units whenever the inventory level drops to 537 units.

Set 16.1b
2. y* = 317.82 gallons, R* = 46.82 gallons.
3. y* = 316.85 gallons, R* = 58.73 gallons. In Example 16.1-2, y* = 319.44 gallons and R* = 93.61 gallons. The order quantity remains about the same as in Example 16.1-2, but R* is smaller because the demand pdf has a smaller variance.

Set 16.2a
3. 19 <= p <= 35.7.
6. 39 coats.

Set 16.2b
1. Order 8 - x if x < 3.528; otherwise, do not order.

Set 16.3a
2. Order 4.61 - x if x < 4.61; otherwise, do not order.

CHAPTER 17

Set 17.1a
1. (a) Efficiency = 71%. (b) The two requirements cannot be satisfied simultaneously.

Set 17.2a
1. (a) Customer: plane; server: runway. (b) Customer: passenger; server: taxi.

Set 17.3a
1. (b) (i) lambda = 6 arrivals per hour; average interarrival time = 1/6 hour. (c) (i) mu = 5 services per hour; average service time = .2 hour.
3. (a) f(t) = 20e^(-20t), t > 0. (b) P{t >= 15 minutes} = .00674.
7. Jim's payoff is 2 cents with probability P{t <= 1} = .4866 and -2 cents with probability P{t > 1} = .5134. In 8 hours, Jim pays Ann about 17.15 cents.
10. (a) P{t <= 4 minutes} = .4866. (b) Expected discount percentage = 6.208.

Set 17.4a
1. P5(1 hour) = .55951.
4. (a) .24167.
6. (a) Combined lambda = ...; P2(t = 5) = .219.

Set 17.4b
2. (a) mu t = 9; p0(t = 3) = .00532. (c) mu t = 3; probability = .9502.
5. mu t = 4, p0(4) = .37116.
8. (a) Average order size = 25 - 7.11 = 17.89 items. (b) mu t = 12; p0(t = 4) = .00069.

Set 17.5a
3. (a) .4445. (b) .5555.
6. (a) pj = .2, j = 0, 1, 2, 3, 4. (b) Expected number in shop = 2 customers. (c) p4 = .2.

Set 17.6a
1. (a) Ls = p1 + 2p2 + 3p3 = .1917 car. (c) lambda(lost) = .1263 car per hour. (d) Number of empty spaces = c - (Ls - Lq) = ....

Set 17.6b
2. (a) p0 = .2. (b) Average monthly income = $50 x ... = $375. (c) Expected payment = $40 x Ls = $125.
5. (a) p0 = .4. (b) Lq = .9 car. (d) p(n >= 11) = .0036.
6. (d) The number of spaces must be at least 13.

Set 17.6c
1. P{n > 1} = .659.
5. $37.95 per 12-hour day.

Set 17.6d
1. (a) p0 = .3654. (b) Wq = .207 hour. (c) Expected number of empty spaces = 4 - Lq = 3.212. (e) mu = 10 will reduce Wq to about 9.6 minutes.
4. (a) rho = .6. (c) The probability of finding an empty space cannot exceed .4 regardless of the belt capacity. This means that the best utilization of the assembly department is 60%.
7. (a) 1 - p5 = .962. (b) lambda(lost) = lambda p5 = .19 customer per hour.

Set 17.6e
2. For c = 2, Wq = 3.446 hours; for c = 4, Wq = 1.681 hours, an improvement of over 50%.
5. Let K = the number of waiting-room spaces. Using TORA, p0 + p1 + ... >= .999 yields K >= 10.
7. (a) .65772. (e) Average number of idle computers = .667 computer.

Set 17.6f
1. (c) Utilization = 81.8%. (d) p1 + p2 + p3 = .545.
4. (a) p0 = .00014. (b) ... = .02453. (d) Expected number of occupied spaces = Ls - Lq, approximately 20. (f) Probability of not finding a parking space = 1 - p(n <= 29) = .02467. The number of students who cannot park in an 8-hour period is approximately 4.

Set 17.6g
2. (a) Approximately 7 seats. (b) .2911.

Set 17.6h
1. (b) Average number of idle repairpersons = 4 - (Ls - Lq) = 2.01. (d) P{2 or 3 idle servers} = p0 + p1 = .34492.
4. (a) Ls = 1.25 machines. (b) p0 = .33341. (c) Wq = .25 hour.
6. lambda = 2 calls per hour per baby, mu = .5 baby per hour, R = 5, K = 5. (a) Number of awake babies = 5 - Ls = 1. (b) p5 = .32768. (c) p(n <= 2) = .05782.

Set 17.7a
2. (a) E{t} = 14 minutes and var{t} = 12 minutes squared. Ls = 7.867 cars.
4. lambda = .0625 prescriptions per minute, E{t} = 15 minutes, var{t} = 9.33 minutes squared. (a) p0 = .0625, Lq = 7.3 prescriptions, Ws = 132.17 minutes.

Set 17.9a
2. Use (M/M/1):(GD/10/10). The cost per hour is $431.50 for repairperson 1 and $386.50 for repairperson 2.
4. (b) .... (c) Optimum production rate = 2725 pieces per hour.

Set 17.9b
2. (a) The hourly cost is $86.40 for two repairpersons and $94.80 for three. (b) Schedule loss per breakdown = $30 x Wq = $127.17 for two repairpersons and $94.65 for three.
4. lambda = .36125 per machine per hour, mu = 10 per hour. The model (M/M/3):(GD/20/20) yields Wq = .10118 hour. Lost revenue per machine per hour = 25 x lambda Wq x $2 = $1.83, or $36.55 for all 20 machines. The cost of 3 repairpersons is $60.

Set 17.9c
1. (a) Number of repairpersons = 5. (b) Number of repairpersons = 4.

CHAPTER 18

Set 18.1a
4. (a) P{H} = P{T} = .5. If 0 < R <= .5, Jim gets $10.00; if .5 < R <= 1, Jan gets $10.00.
7. Lead time sampling: if 0 < R <= .5, L = 1 day; if .5 < R <= 1, L = 2 days. Demand per day sampling: if 0 < R <= ...
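The sampling rules in the Set 18.1a answers are instances of the inverse transform method for discrete distributions: a (0, 1) random number R is mapped to an outcome through the cumulative probabilities. A minimal sketch, assuming only that the lead-time distribution is the two-point distribution quoted above (the function name is our own):

    # Discrete sampling by the inverse transform method.
    import random

    def sample_lead_time(R):
        """P{L = 1} = P{L = 2} = .5, as in the lead-time answer above."""
        return 1 if R <= 0.5 else 2

    random.seed(7)                      # any seed; for reproducibility only
    draws = [sample_lead_time(random.random()) for _ in range(1000)]
    print(sum(d == 1 for d in draws) / 1000)   # close to .5, as expected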
(a) Let .r; : dollars invested in project i, i : 1",2,3, 4 /i : dollars invested in bank in year j, j : 1,, 2,3, 4 Maximize z : ls subject to x1 * x2 l xal ll,< ]_0,000 .5x1 -| .6x, - 4 l .4xa + I.065y1 - lz: 0 .3x1 -| .2x2 l .8x3 * .6xa t 1,.065y2 - }r : 0 1.8x1 t I.5x2 * 194 * I.8xa + 1.065y3 - lq : 0 I.2x1 * L3x2 i .8x3 -l .95xa + I.065ya - }s : 0 Xl, X2, X3, X4, !t, !z, |z,, lq > 0 Optimum solution: xt : 0, x2: $]-0,000, x3 : $6000, x4 : 0, lt : 0, lz : 0, lz : $6800, ll : $33,642, z : S53,628.73 at the start of year 5. (b) 5.36% (.) Total return reduced by 1000 X .373 : $3730 L2. (a) Let .t7 : íluítrberof units of model j, j : 1,,2,3 Maximize 1 : 30x1 * 20x2 * 5013 subject to 2x1 * 3x2 -| 5x, - 4000 4xr*2x2*7xr-6000 x1 1.5x2*.334 < ].500 2x, - 3x2: 0 5*r.-Zxr:g xl ž200, x2 > 200, x3 > 150 Chapter 3 795 X1,, X2, í: > 0 Optimum solution: \ : 324.32, x2 : 2í6.22, xz : 540.54, z : $41,081.08. (b) Not advisable because dual price : $10.27 per lb (c) Not advisable because dual price : $0 per lb 15. (a) Let xiA : amount invested in year l using plan A, i : 1, xiB : amount invested in year l using plan B, i : 1, Maximize z : 3xru -l I.74a subject to xte*xg=I00 -I.7xglxzl*xry<0 -3xrr-I.7x2nl4n<0 XiA,XiB>O,i:1,,2,3 Optimum solution:Invest $100,000 in plan Ainyear 1 and $170,000 in plan B in year 2. Accumulation : $510,000. (b) Yes, each additional dollar of investment is worth $S.tO at the end of 3 years. CHAPTER 3 Set 3.1a 1. Ztonslday and 1 ton/day for raw materials MI, and M2. 4. Let xi1 : units of product i produced on machine j,i : 1, 2; j : L, 2 Maximize z : I}(xr, 1- xn) + Is(x,T -l xzz) subject to Xl*xzt-Xp-xzz* t:5 -Xn-XztlXnlXzzl z:5 xllxztl::200 xn l xzz * sa: 250 xij ž0, for all i and j, si ž0, i : 1,,2,3, 4 Set 3.1b 2. Let x1 : units of product j, j : 1, Z, 3. Maximize z : 2x, * 5x2 * 34 - 1,5xa - Ilxj subject to Zxr*x2*24+xi -x4:80 xl*x2*24+*i -x5:65 xb x2, xs, xt, xi, x{,, ; > 0 Optimum solution: ír: 0 units, x2 : 65 units, all others : 0, z : $SZS. )? ZrJ Appendix D Partial Answers to Selected Problems Set 3.2a 2. (.),r:|,*r:+,z:+. (") (", : 0, x2: 3) and (", : 6, x2 : 0). 4. Infeasible basic solutions are: Set 3.3a 3. (u) (b) 5. (u) Set 3.3b 3. @r, x) : (i, -t), (*r, xl) : (8, -2) (*r, *o): (6, -4), (*r, xz) : G6, -26) @z, xq): (3, -I3), @s, xq) : (6, -16) A1l pairs but (Á, B) because associated corner points are not adjacent. (i) Legitimate. (ii) Not legitimate (C and 1not adjacent). (iii) Not legitimate (Á is revisited). ,r3 enters at value 1,, z : 3. Basic variable x1 x2 x3 x4 Value Leaving variable 1.5 1 X1 X7 0 J3 .8 X5 6. 9. (b) *r, x5, zfld x6 czfl increase value oíz.Ií.r2 enters, Lz : +20.Ií x5 oiltors, A,z : 0.If x6 enters, AZ : oo. Next best value of z : 20. Set 3.4a 3. (a) Minimize z : (8M - 4)x1 (b) Minimize z : (3M - 4)x1 6. The starting tableau is + (6M - 1)*, - Ms, - Mss : I)M + (M - I)*r: 3M Basic X1 X4 Solution Set 3.4b 1. Always minimize the sum of artificials because it represents the amount of infeasibility in the problem. X3Xl -8-Iz-1 Xj x^ 1104 4018 796 Chapter 4 797 7. AnY nonbasic variable having nonzero objective coefficients at end of phase I cannot become positive in Phase II because it will mean that the optimal objective value in Phase I will be positive-that is, it leads to an infeasible phase I solution. Set 3.5a 1. (a) A-+B+C->D. (b) 1 at A,I at B, C) : 3 at C,and 1 at D. Set 3.5b 1-. Alternative basic optima: (0,0, T), (t,0,3), (0,5,0). Nonbasic alternative optima: (u2,5u3, To, * 3u2),u1 l az * a:: 1,0 šct; = L,i: I,Z,3. Set 3.5c 2. 
(a) Solution space is unbounded in the direction of x2. (b) Objective value is unbounded because a unit increase in x2 increases z by 10. Set 3.5d 1_. The most that can be produced is 275 units. CHAPTER 4 Set 4.1a Let yy !2, zí7dy: be the dual variables. Maximize w :3yt * 5y, t 4yrsubjectto yt -f 2!z * 3yz = 15, 2y, - 4y, + |s = 12 lt > 0, Yz = 0, y3 unrestricted (.) Let yl and y2be the dual variables. Minimize z:5yt + íyrsubjectto 2_Y, + 3_},, : 1. ],, - J,: : 1 _Vt. _}': Unrestricted 5. Dual constraint associated with the artificial variabies is t,. >_ -M. which is the same as y, being unrestricted. Set 4,2a 1. (a) AV, is undefined. (e) VrA : (-14 -32) 2. 4. 798 Appendix D Partial Answers to Selected Problems Set 4.2b 2. (a) Inverse : Set 4.2c Z. Let yl and y2be the dual variables. Minimize w :30y1 + 4}y2subjectto lt -| lz = 5, 5yr - 5y, > 2,Zyt - 6y, > 3 lt> -M,Yz>0 Solution: lt -- 5, lz: 0, w : 150. 4. (a) Let y1 and yzbe the dual variables. Minimize w : 3yt t 4yrsubject to yt t Lyz > I, Zyt - lz > 5, yt > 3 y2 unrestricted (b) Solution: y1 : 3, lz - -1,, w : 5 Set 4.2d 2. (a) Feasibility: (*r, ,o) : (3, 15)+ feasible Optimatity: Objective coefficients of (x1, *r) : (0, 2) + optimal 7. (a) br: 30, bz : 40. (b) a : 23, b : 5, c : -I0, d : 5, e : 0 Set 4.2e 2. (a) Both primal and dual are infeasible. (b) Solutions are feasible but not optimal. Set 4.3a 2. (a) Let (*r, xr, xz, xq) : daily units of SC320, SC325, SC340, and SC370 Maximize z : 9.4x, * I0.8x2 -f 8.754 * 7.&xa subject to 10.5x, * 9.3x2 -l 11,.64 * 8.2xa < 4800 20.4x1 * 24.6x2 -l I7.74 -f 26.5xa < 9600 3.2x1 -| 2.5x2 * 3.64 -| 5.5xa < 4700 5x1 i 5x2-ť 54 -| 5xo'4500 xt žI00, x2 > ]_00, xs žl00, xa > 100 Optimum solution: xr : 100, x2 : 1_00, x3 : 138.42, í+ : 100, Z : 40lI.1,6. Ll :|) Chapter 4 799 (b) OnlY soldering capacity can be increased because it has a positive dual price (:.4944). (.) Dual Prices arc negative or zero.Hence,lower bounds represent disadvantages. Set 4.3b 2. New fire truck toy is profitable because its reduced cost : lt l 3y, _ 4 : _2. Set 4.4a 1-. (a) No, because point E is feasible and the dual simplex must stay infeasible until the last iteration where it becomes feasible. 4. (.) Add the artificial constraint x1 < M.Problem has no feasible solution. Set 4.5a 4. Let Q be the weekly feed. optimum solution: Limestone : .028Q, corn : .649Q, and soybean meal : .323Q. Cost : .8I221,Q. Set 4.5b 1. (u) -20 < Dz = 400, D3 > -20. 5. (a) Scarce: resistor and capacitor resource, abundant: chips resource. (b) Worth per unit of resistor, capacitor, and chips is $1.25, $.25, and $0. (g) Increase in profit : $250. Additional cost : $200. Net profit : $so. 8. (b) Solutionx1 : x2:2+ +isfeasibleforallA > 0.F'or0 < A < 3,11 l,. _A,?: ,'< 1+feasibilityconfirmed.For3 < A = 6,rtl 12: Ť > 1 + feasibility not confirmed. For A > 6, the change falls outside the ranges for Dl and D2. Set 4.5c 1_. (a) Additionalconstraint,4x1 l x2 * 2x, = 570,isredundant. Set 4.5d 2. (u) Current solution remains optimal. (.) New solution: xI : 2, x2 : 2, x3 : 4, z : 1,4. Set 4.5e 2. (b) (d) 6. (b) (.) 9. (a) Optimum remains the same. Optimum changes: x' : I0, x2 : !02.5, 4 : 2!5, z : 665. Smallest unit profit for product 1 : $O. New solution: x1, : 0, x2 : 165, x3 : I0, z : 4105. 1.25 - .25 + .5d2 > 0, .25 + .75 - .5d2 > 0. Appendix D Partial Answers to Selected Problems Set 4.5f 1. 42.86%. 3. (a) Fire engines are not profitable. CHAPTER 5 Set 5.'la 4. Assign a very high cost, M,to the route from Detroit to dummy destination. 6. 
(a and b) |]se M : ].0,000. Solution is shown in bold.Total cost : $49,710. Plant 1 Plant2 Plant 3 Excess Plant 4 Demand Supply 25 40 30 13 3042 9. (") City 1 excess cost : $13,000. Solution (in million gallons) is shown in bold. Area 2 will be 2 million gallons short. Total cost : $304,000. A2A1 Supply 6 5 6 2 Refinery 1" Refinery 2 Refinery 3 Dummy Demand 600 700 400 )< 3z0 23 300 17 350 500 480 25 450 5 1000 13 1000 M I2 4 18 ) M 30 10 4 8 I 20 25 Iz 6 M 50 ,, 50 8 Chapter 5 801 Set 5,2a 2. Total cost : $SO+. Doy Sharpening service Overnight 2-day 3-day Disposal Monday 24 Tuesday 12 Wednesday Z Thursday 0 Friday 0 Saturday 0 Sunday 0 0618 1200 1400 0200 1400 z00 000 0 0 0 0 4 12 22 5. Total cost : $190,040. Problem has alternative optima. Period Capacity produced amount Delivery 400 for (period) 1 and ].00 for 2 200 for 2,220 for 3, and 180 for 4 200 for 3 200 for 4 Set 5.3a 1_. (a) Northwest: cost : $42. Least-cost: cost : $37. Vogel: cost : $sz. Set 5.3b 5. (a) Cost : $1,475. (b) ,r, > 3, cr >_ 8, czs = 13, cl ž7. Set 5.4a 5. Use the code (city, date) to define the rows and columns of the assignment problem. Example:The assignment (D, 3)-(A,7) means leaving Dallas on June 3 and returning from Atlanta June 7 at a cost of $+OO. Solution is shown in bold. Cost : $1180. Problem has alternative optima. (A,7) (A.12) (A,2I) (A,28) (D,3) (D,10) (D,17) (D,25) 6. Optimum assignme nt I-d, II-c, III-a, IV-b . 500 600 z00 200 500 600 z00 300 1 2 J 4 400 300 300 280 300 400 300 300 300 300 400 300 300 300 300 400 Appendix D Set 5,5a partial Answers to selected problems 4. Total cost : $1550. Optimum solution summarized below. Problem has alternative optima. Store ]. Store 2 Store 3 Factory 1_ Factoty 2 CHAPTER 6 Set 6,1a 1. (iXa) 1,-3-4-2. (b) 1-5-4-3-1. (c) 1,-3-4-5-1. (d) See Figure D.7. (e) See FigureD.7. 4. 1 and 8 must appear in center. Problem has more than one solution. See Figure D.8. FlGURE D.8 Set 6.2a 2. (u) I-2-5-6-4-3 or 3-t-2-5-6-4.Total length : 1,4 miles. 5. High pressure: 1,-2-3-4-6. Low pressure: 1,-5-7 and 5-9-8. Total length : 53 miles Set 6.3a 1. Buy new car in 200L and}}}4.Total cost : $8900. See Figure D.9. 5. For arc(i, v,) - (i l I, vi*1), define p(q) : value (units of item i). Solution: Select items 1 and Z.Totalvalue : $80. See Figure D.10. FlGURE D.9 5000 50 200 50 FlGURE D.7 o-- o \*@ \\ č}-ó\-./ \--./ Spanning tree MTree 4100 Chapter 6 803 FIGURE D.10 Set 6.3b 1. (.) Delete all nodes but4,5,6,7,and 8. Shortest distance : 8 associated with routes 4-5-6-8 and 4-6-8. Set 6.3c 1. (a) 5-a-2-1, distance : 12. 5. See formulation in Figure D.11. Each arc has unit length. Arrows show one-way routes. Example solution: Bob to Joe: Bob-Kay-Rae-Kim-Joe. Largest number of contacts : 4. FlGURE D.11 Set 6.3d 1-. (u) - 1 and 1: all r,1. Optimum Formulation 1: Right-hand side of nodes 1 and 5 equations are others : 0. Formulation 2:Objective function is minimize Js solution: 1,-3-4-5,distance : 90. 0(1) 0(0 Appendix D Partial Answers to Selected Problems Set 6.4a 1. Cut 1:1,-2,1,-4,3-4,3-5,capacity : 60. Set 6.4b 1. (a) Surplus capacities: arc (2-3) : 40,arc(2-5): 10, arc(4-3) : 5. (b) Node 2:20 units, node 3: 30 units, node 4: 20 units. (.) No, because there is no surplus capacity out of node ]_. 7. Maximum number of chores is 4. Rif-3, Mai-]_, Ben-Z,Kim-S. Ken has no chore. Set 6.5a 1_. See FigureD.IZ. FlGuRE D.12 [430] [-100] [-110] Set 6.5b 1. Case 1: Lower bound is not substituted out. Minimize z [-95] |-125l Xz+XnXl,XnXn Node ]. 
Node 2 Node 3 Node 4 Lower bound Upper bound I1, -1 -I 030 oo 40 :50 : -40 1 :20 -I : -30 0 oo 1 -1 1 -1 10 10 oo oo Case Z;Lower bound is substituted out. Xl,x5zx)qxtz Minimize z Node ]. Node 2 Node 3 Node 4 Upper bound 1 -1 oo -1 1 oo 1, -1, oo :20 : -40 :40 : -20 I 1 -1, -1 10 oo 804 Chapter 7 805 Set 6,5c 1. Optimum cost : $9895. Produce 210 units in period 1 and 220 unitsin period 3. 5. OPtimal solution: Total student miles : 24,300.Problem has alternative optima. Number of students School ] School2 Minority areal Minority areaZ Minority area3 Nonminority areaI Nonminority areaZ 0 450 0 1000 0 500 0 300 0 1000 Set 6.6a 3. See Figure D.13. F|GURE D.13 Set 6.6b 1. Critical path: I-3-4-5-6-7.Projecíduration : 19. 3. Project duration : 38 days. Set 6.6c 3. (a) Maximum delay : ].0. 5. (u) Critical path: I-3-6,duration : 45 days. (b) Red-flagged activities: A, D, and E. (c) Start of C, D, and G will be delayed by 5 days. E will not be affected. (d) Minimum equipment : 2 units. CHAPTER 7 Set 7.1a 2. Points (1,0) and (0, 2) are in Q, but },(1,0) + (1 - \X0, 2) : (N, 2 - Z)\)does not lieinQfor0 < }, < 1. 806 Appendix D Set 7.1b 2. (b) (d) (0 3. (u) (d) partial Answers to 5elected problems Unique solution, see FigureD.1,4. Infinity of solutions. No solution. Basis because det B - -4. Not a basis because a basis must include exactly three independent vectors. F|GURE D.14 Set 7.1c 1. Xa : (*r, *o)' : (2,1.5)', which is feasible. 4.Optimal z:34. Maximize z : 2x, -l 5x2 subject to xt š4, Xz < 6, xl l xz š8, x1, xz> 0 Set 7.2a 1. (a) P, must leave. (b) B : (Pz, Pa) is a feasible basis. 2. For the basic vector X3, we have ForX6, {zi -r,}: CBB-IB -CB:CaI -CB:Ca _Cr:0 7. Number of adjacent extreme points is n - ru assuming nondegeneracy. 10. In case of degeneracy,number of extreme points is less than the number of basic solutions, else they must be equal. 11-. (a) new xi : *old x7 (b) new xi : t old xi Set 7.2b 2. (b) (rr, x2, h): (1.5, 2,0), z : 5 b 1,0 < x21I Chapter 7 807 Set 7.3a 2. (*r, x2, x3, x4, x5, *u) : (0, 1, .75,1,,0, 1), z : 22 Set 7.4a 1. (.) Add the artificialconstraint x2 < M.Then (*r, *r.) : ol(0, 0) + crl10, 0) + q(20,10) + aa(20, X4) + us(O, MI) at * crz* ag * cr,q* ct5: I,ai ž 0, j: I,2,...,5 2. Subproblem 1: (*,,, *r): 0t(0, 0) + aff, 0) + a3(0, 12) Subproblem2:(*r, ,o): 9r(5, 0) + P2(50, 0) + B3(0, 10) + P4(0, 5) Optimal solution: ctl : dz : 0, a: : 1 =+ x, : 0, xz : !2 9l : .4889, 9z: .5I1I,9: : 9+ : O+xo: 28,Is : 0 6. Because the original problem is minimization,we must maximize each subproblem. Optimal solution: (xr, xz, x3, *o) : , +, O, 20), z : -2+ Set 7.5a 2. Maximize w : Yb subject to YA = C, Y > 0 Set 7.5b 5. Method ].: Given Xr : (2, 6, 2)r, then (br, br, br) : (4, 6,8) + dual objective value : 34 Method 2: Given Y : (0, 3, 2)T,then (c1, cz) : Q, 5)+ primal objective value :34 7. Minimize w : Yb subject to YA : C, Y unrestricted Set 7.5a 1, -1=t 1.5. (xr, xr, xo) : (5, 30, 10) @r, *r,,,) : (Ť, T, 5) (xr,, xo, ,r) : (), 15, z0) _3t 2 808 Appendix D Partial Answers to Selected Problems CHAPTER 8 Set 8.1a 1. G5:Minimizes!,55xu -| 3.5x7 -l 5.5x, - .0675xr * ss* - sí:0 3. Let xl : No. of in-state freshmen , x2 : No. of out-of-state freshmlfl, x3 : No. of international freshmen G;:Minimizes!,,i: l,,2, ...,5, subjecttox1 -| x2* x3 -| st* - si: 1200, 2x1 -| x2- 24 + si - sí:0, -.1,x1 - .Ixr* .94 + si - si:0, .I25x1 -.05x2-.5564 + si - +:0, -.Zxt-|.8x2-.2x, -1-ss* -sí:0 All variables are nonnegative 5. Let x1 : No. of production runs in shift j, j : 1,, 2, 3. 
Minimize z: s{ t si, subjectto - 100x1 -l 40x2 - 80x3 * sr* - i:0, 4 < h< 5, 10 3 xz= 20,3 < x, < 5 Set 8.2a 1. Objective function: Minimize z : s{ + si + sr+ + sa + s! Solution: xp : .0201,, x7 : .0457, x, : .0582, xs : 2 cents, si : 1.45 Gasoline tax is $1.+S million short of goal. 4. x, : lb of limestone lday, xz : Ib of corn/ddy, xs : lb of soybean meal/day. Objectivefunction:Minimize z: s{ + sí+ sj- + s/ + s; Solution: x1 : 1,66.08 lb, x2 : 2778.56 lb, x: : 3055.36lb, z : 0. A1l goals are satisfied. 7. *j : No. of units of product j, j : I, 2. Assign a relatively high weight to the quota constraints. Objectivefunction:Minimizez : 100sr+ + 100sj + s, + s| Solution: _T1 : 80, x2 : 60, sl : 100 minutes, si : I20 minutes. Production quota can be met with ]_00 minutes of overtime for machine ]. and ].20 minutes of overtime for machine 2. Set 8.2b 2. G1 solutiol7; xp : .0204,, x7 : .0457 , x, : .0582, xs : 2.si : 1,.45, aII others : 0. Goals Gt, Gz, G3,and Gaarc satisfied. G5 is not. G5 problem:Same as G1 plus s1+ : 0, si : 0, si : 0, sa : 0. Solution: Same as G1 with sl : I.45.This means that G5 cannot be satisfied. Chapter 9 809 CHAPTER 9 Set 9.1a '. xii : No. of bottles of type l assigned to individual j, where full),3(empty). constraints: l : ].(full),Z(half Xn l Xn l Xt3:7, Xzl * xzz l xzs:7, X3I * xzz l x.,.:7 xn * .5xzt : 3.5, xp -| .5x22: 3.5, xn * 5x7 : 3.5 Xl l Xzt * x31 :7, Xn l Xzz l xsz:7, XB l xzs * x3:7 *,, = Oand integer for all i and j Solution: Use a dummy objective function. No. bottles assigned to individual Status Full Half full Empty J 1 J 6. y: Originalsumof money.xi: Amounttakenonnight j, j: I,2,3. x4 : AlTtount given to each mariner by first officer. Minimize z : y subject to3x1 - y - 2, x1 l 3xr. - ! :2, x1 l x2 t 34 ! : 2, ! - h - X2 - 4 - 3xa : ].. A11 variables are nonnegative integers. Solution: ! : 79 l 8In, n : 0, !, 2, 8. Side 1:5,6, and 8 (27 minutes). Side 2: 1,,2,3,4,and7 (Z8 minutes). Problem has alternative optima. Set 9.1b 1. Let x1 :_ ryo. widgets produced on machine j, j : I,2,3. li : 1 if machine i is usedandOotherwise.Minimizez:2x1 * 1,0x2t 54 * 30Óy, + 100y2+ 20Oy3 subject to x1 l x2 l xs = 2000, x1 - 600yt š0, x2 - 800yz š0, 4 - 7200y, = 0, x., x2, xl ž500 and integer, !t, !z, _}: : (0, 1). Solution: x1 : 600, x2 : 500, x3 : 900, z - $11,300. 2- Solution: Site 1 is assigned to targets 1 and 2, and site 2 is assigned to targets 3 and4.z : 18. 13 51 I3 810 Appendix D partial Answers to selected problems Set 9.1c 1. Letx1 : ].if routei isselectedandOotherwise, j :1,,2, ...,6. Minimize z : 80x1 t 50x2 * 704 * 52xa * 60x5 * 44x6subject to x1 * x3 -l x5 * xe ž l, x1 * xz* x4* x5> I, Xt l x2* xa* x6žI, x' * x2* xs= 1,,x2* x3* xal xe žt,xi : (0, 1), foralli. Solution: x5 : x6: ]_. Select routes (I,4,2) and (I,3,5), z : ]"04. Customer ]_ should be skipped in one of the two routes. 2. Solution: Committee is formed of individua\s a, d, and f. Problem has alternative optima. Set 9.1d 1. (a) Let aii : Integer amount assigned to square (l,i). constraints: xi1 : I, 2, ... , 9 for all i and j solution: problem has more than two alternative solutions. |-, 1--Tl i-6--]--Tl |l;l|or|l;l| 3. Solution: Produce 26 units of product I,3 of product 2, and none of product 3, and use locationZ. Set 9.2a1 2. (u) z : 23, x7 : 3, xz: 2. (") z:37,x1,:6,xz:í. 3. (u) z : 7.25, h : 1.75, x2 : L. (") e : 37, (*, : 4.6, x2 : Z)or@t : 6, x2 : L) Set 9.2b 1. (u) 9 subproblems. (b) 25,739 subproblems. lUse TORA integer programming module to generate the B&B tree. 
Ž*,,: 15, i : I, Z, S, }r*i1 : 15, j : I, 2, 3 Xl * Xzz l X33 : 1-5, 41 l Xzz l xn: 15, ("r, > xn* 1, orx1 - x12 - 1-), ("r, > xr* I orx1 < xB - 1-), (*rr= Xn * I otxp'XI3 - 1), ("r, > Xu,* 1orx11 = x21, - ]_), (rr, > xzt* I orx1 šxzt - ]_), (*rr= xsti-1, otx21 'x3t - 1_), Chapter 'l0 811 3. Equivalent 0-1 ILP: Maximize z : í8yr,* 36yp t I4y21 + 28y22 * 8y., +- 1,6y2 i 32y3 subject to 15y11 * 30yp -l I2y1 * 24y22 * 7yr, -l l4y2 * 28y3 < 43 A11 variables are binary. Solution: z:50,ln: l,lzt:1, allothers :0.Equivalently, xI:2,xz: I. Set 9.2c 1. (a) Legitimate cut because it nate any feasible integer on the LP solution space. 6. (b) Optimum integer solution:(x1, x2, xz) : (5,2, 2), z : 23. Rounded solution: @r, *r, xz) : (5, 3, 3), which is infeasible. Set 9.3a 1. The table below gives the number of distinct employees who enter/leave the manager's office when we switch from project l to project i. The objective is to find a "tolí"through all projects that will minimize the total traffic. Projecti 34 1 z J Project l 4 5 6 CHAPTER 10 Set 10.1a 1. Solution:Shortest distance : 21, miles. Route: 1-3-5-7. Set 10.2a 3. Solution: Shortest distance : 17.Route: 1,-2-3-5-7. Set 10.3a 2. (a) Solution: Value : I20. (*r,, *r,, ms) : (0, 0, 3) or (0, 2,2) or (0,4, ].) or (0,6,0). 5. Solution:Total points : 250. Select 2 courses from I, 3 from II,4 from III, and 1 from IV. passes through an integer point and does not elimipoint. You can verify this result by plotting the cut 4 6 4 8 7 6 4 4 6 5 4 6 4 6 J 6 6 8 6 5 5 J 7 5 5 4 4 6 6 5 812 Appendix D Partial Answers to Selected Problems 7. Let x1 : ]. if applicationi is accepted, and 0 otherwise. Equivalent knapsack model is Maximize 7 : 78x1 -l 64x2 * 684 -l 62xa * 85x5 subject to 7x1 * 4x2* 64-| 5xa* 8x, < 23,x1 : (0, I), j:1,2,...,5. Solution: Accept all but the first application. Value : 279. Set 10.3b 1. (a) Solution: Hire 6 in week 1_, fire ]_ in week Z,fire 2 in week 3, hire 3 in week 4, and hire Zfor week 5. 3. Solution: Rent 7 carsin week 1, return 3 in week Z,rent4 in week 3, and no action for week 4. Set 10.3c 2. Decisions for next 4 years: Keep, Keep, Replace, Keep. Total cost : $458. Set 10.3d 3. (a) Let xi and yi Zi: Xi -| Yi, f,(z,) : í,(z,) : be no. sheep kept and sold at the end of period l and define max{p,y,} !r:Z, max{psl; + f,*{Zz; - 2y)}, i : 1-, 2, ...,, fr - I !i=Zi CHAPTER 11 Set 11.2a 2. (a) Total cost per week : $51.50 (b) Order 239.05 lb whenever inventory level is zero. Total cost per week : $50.20 4. (a) Choose policy 1 because its cost per day is $2.]_7 as opposed to $2.50 for policy 2. (b) Optimal policy: Order ].00 units whenever the level drops to ]-0 units. Cost per day : $2.00. 11.2b Optimal policy: Order 500 units whenever level drops to ],30 units. Cost day : $258.50. Optimal policy: Order 150 units whenever level drops to 0. Cost day : $479.17. Set 2, 3. per per Chapter 12 813 Set 1'1.2c 1. Excel solution: (yr, yr, ll !q, ys) : (4.4I,6.87,, 4.I2, 4. L(y,,!z,!s,.}+,N) :ž+-+-^( 2T 7.2, 5.8). - ''0) Formula: y; : Excel solution: (yr, yr, !s, Iq, N) : (155 .3, IL8.82,, 74.36,90.10, -.0564). Set 11.3a 1_. (a) 500 units required at the start of periods I,4,7,and 10. Set 11.3b 3. Produce 173 units in period 1,180 in period 2,240 in period 3,1]_0 in period 4,and, 203 in period 5. 5et 1_. ) Set 1. 11.3c (a) No, because inventory should not be held needlessly at end of horizon. (b) (i)0<.4=5, 1=zz=5,0 =zzš4:xt:4, I-xzš 6,0= xs=4 (a) a : 7, Z2 : 0, Z3 : 6, Z4: 0. Total cost : $rg. 
11.3d Use initial inventory to satisfy demand for period 1 and 4 units in period 2, thus reducing demand for the four periods to 0,22,,90, and 67, respectively. Optimal solution: Order 112 units in period 2 and 67 units in period 4. Total cost : $eZZ. Set 11.3e 1. Solution: Produce 210 units in period I,255 in period 4,210 in period 7, and 165 in period 10. Total cost : $1930. CHAPTER 12 Set 12.1a 1. (u) .15 and .25,respectively. (b) .571 (.) .82I. 2. n >_ 23. 3. n > 253. 814 Appendix D Partial Answers to Selected Problems Set 1z,'lb 4. Let p: probabi\ity Liz will win. Then, probability John will win is 3p, which equals the probability Jim will win. Probability Ann will win is 6p. Thus, p+3p+3p+3pt6p:1,. Set 12.1c 3. (u) 1 8, (o)? 7. .9545. Set 12,2a 2. K:20. 3. P{Demand > 1100} - .3. Set'l2.3a 3. (a) P{x = 50} : ?. (b) Expected no. unsold copies : 2.67. Set 12.3b 1. Mean : 3.66'l, variance : 1.556. Set 12.3c 1. (a) P{x1 : I,2,3} : p{x2: l, z,,3} : (.4, .2, .4), (b) No. Set 12,4a 1. (r1)'o. 3. .0547. Set 12.4b 1. .8646. 3. (a) ro = 6. (b) P,=r -, t. Set 12.4c 1. \ : l}arrivals/min. P(r < 5 Sec) : .63. Set 12.4d 2. .0014. CHAPTER 14 Set 14,1a 1. Weights for A, B, and C : (.4421,4, .25184, .30602). Set 14.1b 2. (rr, wJ, wM) : (.331, .292, .377). Select Maisa. 4. (r r, , u) : (.502, .498). Select P. Set 14.2a 2. (a) See Figure D.15. (b) Ev(corn) : - $8250, EV(soybean) : $250. Select soybean. 6. (a) See Figure D.16. (b) Ev(game) : - $.025. Do not play the game. $30,000 FlGURE D..l 5 $o -$35,000 $10,000 $o -$5000 Set 14.2b 2. Let z be the event of having one defective item in a sample of size 5. Answer: P{Alz} : .6097, P{Blz} : .3903. 4. (a) Expected revenue if you publish : $1_96,000. Expected revenue if you use publisher : $163,000. (b) If Survey predicts success, publish it yourself, else use publisher. 7. Ship lot to B if both items are bad;else, ship lot to á. Chapter 14 815 F|GURE D,16 $3,5 $1.1 $.qo -$t $1.1 -$t -$t -$: $o .125(TTH) 816 Appendix D Partial Answers to Selected Problems Set 14.2c 1. (a) Expected value : $5, hence there is no advantage. (b) ForO < x < 10, U(r):O,and for x: 10, U(x) : 1gg. (.) Play the game. 2. Lottery: U(*): 100 - 1,00p. Set 14.3a 1. (a) A1l methods: Study all night (action a1). (b) Al1 methods: Select actions a2 oía3. Set 14.4a 1. (a) Saddle-point solution aí(2,3).Value of game : 4. 3. (")21v14. Set 14,4b 1. Each player should mix strategies 50-50.Value of game : 0. 2. Robin's payoff matrix: 1.00%A 50%"A-50%"B 1,00Y.B -100 -50 0 0 -30 -100 Strategy for Police: Mix I00%A and ]_00%B with for Robin: Mix A and B 50-50. Value of game : Robin). Set 14.4c 1. (a) Payoff matrix for the searching team: A B probability .5 each. Strategy $50 (expected fine paid by AB AC AD BC BD Optimal strategy for both teams: Mix AB and CD 50-50. Value of the game : 0. 3. (a) Payoff matrix for Colonel Blotto: AB AC AD BC BD CD 10000-1 0100-1 0 001-1 00 00-1 100 0-1 0010 -1 00001, Chapter 'l6 817 0_31,-22-I 2-0 LI 0-z -1-i00 0-1-10 00_1-1 Optimal strategy for Blotto: Blotto mixes (2-0) and (0-2) 50-50, and the enemy mixes (Z-D and (0-3) 50-50. Value of the game : -.5, and Blotto loses. CHAPTER 15 Set 15,1a 2. Solution: Day 1: Accept if offer is high. Day Z:Accept if offer is medium or high. Day 3: Accept any offer. Set 15.2a 1. Solution: Year 1: Invest $10,000. Year 2: Invest all. Year 3: Do not invest.Year 4: Invest all. Expected accumulation : $35,520. 4. Allocate 2 bikes to center 1,3 to center 2,and 3 to center 3. Set 15.3a 3. Solution: First game: Bet $1. Second game: Bet $1. 
Third game: Bet $1 or none. Maximum probability : .109375. CHAPTER 16 Set 16.1a 1. (a) Order 1000 units whenever inventory level drops to 537 units. Set 16.1b 2. Solution: y- : 317.82 gallons, R* : 46.82 gallons. 3. Solution: y- :31,6.85 gallons,R* : 58.73 gallons.InExample16.1-2,,y" : 319.44 gallons, R* : 93.61, gallons. Order quantity remains about the same as in Example l6.1-2,but R* is smaller because the demand pdf has a smaller variance. Set 16.2a 3. 19 < p < 35.7 6. 39 coats. 818 Appendix D Partial Answers to Selected Problems Set 16.2b 1. Order 8 - r if x < 3.528, else do not order. Set 16.3a 2. Order 4.6l - x if. x < 4.61,,else do not order. CHAPTER 17 Set 17.1a 1-. (a) Efficiency :71,o/o. (b) The two requirements cannot be satisfied simultaneously. Set 17.2a 1. situation customer server Plane Runway Passenger Thxi Set 17.3a 1. (b) (i) }, : 6 arrivals per hour, average interarrival time : I hour. (.) (i) p : 5 services per hour, average service time : .2 hour. 3. (a) í(t): 20e-20,, / > 0. (b) P{/ = uotr} : .00674. 7. Jim's payoff is 2 cents with probability P{t = 1} : .4866 and -2 cents with probability P{t > 1} : .5].34. In 8 hours, Jim pays Ann : t7 .1,5 cents. 1,0. (a) P{t < 4 minutes} : .4866. (b) Expected discount percentage : 6.208. Set 17.4a 1_. P,-_5(1, hour) : .5595].. 4. (a) Pr(t - 7) : .241,67. 6. (a) Combined X : ro! * 1 7) Pz(t:5):.2I9. Set 17.4b 2. (u) wt : 9, po(t : 3) : .00532. (") ll,t : 3, p,=n(t - 1) : .9502. a b Chapter 17 819 5. pt : 4, p0(4): .37116. 8. (a) Average order size : 25 - 7.1,1 : 17.89 items. (b) rrr : 12, po(t :4) : .00069. Set 17.5a 3. (u) p,=, : .4445. (b) p,=r: .5555. 6. (u) pi : .2, / : 0, 1,2,3, 4. (b) Expected number in shop : 2 customers. (c) pq: .2. Set 17.5a 1_. (u) L, : Ipe t 2p, t 3pr: .I9I7 car. (.) \lo.t : .1263 car per hour. 88 (d)No.of emptyspaces - c - (L,- Ln):'Z"o,* ,>_*r('- r)P,. Set 17.6b 2. (u) po - .2. (b) Average monthly income : $5O x pt : $375. (.) Expected payment : $4O x L, : $rZS. 5. (u) po - .4. (b) Ln : .9 car. (d) p,=r, : .0036. 6. (d) No. of spaces is at least 13. Set 17.6c 1. P{" > 1} : .659. 5. $37.95 per lZ-hour day. Set 'l7.6d 1. (u) po: .3654. (b) W, : .207 hour. (.) Expected number of empty spaces - 4 - L, : 3.212. (.) p : 10 will reduceW,to about 9.6 minutes. 4. (u) pr: .6. (.) Probability of tinding an empty space cannot exceed .4 regardless of belt capacity. This means that the best utilization of the assembly department is 60%_ 820 Appendix D partial Answers to selected problems p5: .962. : \ps : .I9 customer per hour. 7.(a)1(b) N,",, Set 17.6e 2. For c : 2, Wq : 3.446 hour and for c : 4, W, : 1.681 hour, an improvement of over 507o. 5. Let K be the number of waiting room spaces. Using TORA, pot ptt * pr*z> .999yieldsK> ].0. 7. (u) p,=o: .65772. (e) Average number of idle computers : .667 computer. Set 17.6f (.) Utilization : 8]..87o. (d) p, ,| pz -| pa: .545. (u) poo: .000].4. (b) pro t plt * pzg:.02453. (d) Expected number of occupied spaces : L, - Lq = 20. (0 Probability of not finding a parking space - 1, - pn=29 : .02467.Number of students who cannot park in an 8-hour period is approximately 4. Set 17.69 2. (a) Approximately 7 seats. (b) p"=r: .291,L Set 17.6h (b) Average number of idle repair persons - 4 - (L, - Lr) : 2.0L. (d) P{2 or 3 idle servers} : po * pl : .34492. (a) L, : 1,.25 machines. (b) po : .33341,. (.) W, : .25 hour. }x : 2 calls per hour per baby, F : .5 baby per hour, R : 5, K : 5. (a) Number of awake babies - 5 - L, : 1 baby. (b) P, : .32768. (.) pn=z: .05782. Set 17 .7 a 2. 
(a) E{t} : 1,4 minutes and var{r} : 12 minutes2. L, : 7.867 cars. 4. \: .0625 prescriptionsperminute, E{t}: ].5 minutes, var{r} :9.33 minutes2 (u) po: ,0625, ) 4. 1. 4. 6. Chapter 18 821 L, : 7.3 prescriptions. W, : 132.I] minutes. Set 17.9a 2. Use (M/M/1):(GD ll}lrc). Cost per hour is $431.50 for repair person ]. and $386.50 for repair person 2. 4. (b) r": X + ^E-'|!C1 (.) Optimum production rate : 2725 pieces per hour. Set 17,9b 2. (a) Hourly cost per hour is $86.4 for two repair persons and $94.80 for three. (b) Schedule loss per breakdown : $30 X W, : §I27.1J for two repair persons and $94.65 for three. 4. }l, : .36125 per machine per hour, p : 10 per hour. Model (MlMl3):(GDl20l20) Yields W, : .10118 hour. Lost revenue per machine per hour : 25 X XI4/ X $2 : $1.83 or $36.55 for all 20 machínes. Cost of 3 repáir persons is $60. Set 17.9c 1. (a) Number of repair persons : 5. (b) Number of repair persons : 4. CHAPTER 18 Set 18.1a 4. (a) P{ll}: P{T}:.5. If 0 < R =.5,Jimgets $10.00. If .5 < R = 1,Jangets $10.00. 7. Lead time sampling: If 0 < R = .5, L : ! day. If .5 < R < I, L : 2 d,ays. Demand per day sampling: If 0