CS 402 - ADA. UNIT 3- DYNAMIC PROGRAMMING.
SYLLABUS - UNIT 3.
1. Dynamic Programming: concept
2. 0/1 Knapsack Problem
3. Reliability Design Problem
4. All Pair Shortest Path Problem
5. Multistage Graph Problem.
DYNAMIC PROGRAMMING. There are many problems whose solution is a sequence of decisions; an optimal solution is the one that maximizes or minimizes an objective function. The greedy approach gives one way to build such a sequence, but it does not always lead to the optimal solution. Dynamic programming is an algorithm design method that finds the optimal solution where the greedy method is not applicable. It takes into consideration all possible decision sequences, ruling out suboptimal decisions before they are fully enumerated. This is possible because of the Principle of Optimality. The Principle of Optimality states that an optimal sequence of decisions has the property that whatever the initial state and decision are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision. For example, suppose the optimal path from A to G is A-B-C-D-E-F-G.
0/1 KNAPSACK PROBLEM. Then the optimal path from D to G must be D-E-F-G and nothing else, because an optimal sequence can never contain a suboptimal subsequence. In other words, in an optimal decision sequence the decision taken at every step is itself optimal. Dynamic programming uses this principle to solve many problems. 0/1 Knapsack Problem: this is similar to the knapsack problem, the only difference being that no fractional weights are allowed; for every object the decision to be taken is simply its inclusion or exclusion. Let KNAP(l, j, y) denote the problem: maximize Σ pi xi, l ≤ i ≤ j, subject to Σ wi xi ≤ y, l ≤ i ≤ j, where each xi = 0 or 1. So for problem KNAP(1, n, m), if y1, y2, ..., yn is an optimal sequence of 0/1 values for the xi, 1 ≤ i ≤ n, then if y1 = 0 the sequence y2, y3, ..., yn is an optimal sequence for KNAP(2, n, m), and if y1 = 1 the sequence y2, y3, ..., yn constitutes an
0/1 KNAPSACK PROBLEM contd... optimal sequence for KNAP(2, n, m - w1), as per the principle of optimality. Now let gj(y) be the value of an optimal solution to KNAP(j+1, n, y), so that g0(m) is the value of an optimal solution to KNAP(1, n, m). Since x1 can be either 0 or 1, the principle of optimality gives g0(m) = max{g1(m), g1(m - w1) + p1}, as every decision must be the one that yields the maximum profit. The relation holds at any stage of decision making, so in general gi(y) = max{gi+1(y), gi+1(y - wi+1) + pi+1}, where i is the stage of decision and y is the capacity that remains to be filled. Since the last decision is the nth one, gn(y) = 0 for all y ≥ 0, as no more decisions can be taken; also gn(y) = -∞ for all y < 0. As every decision is stated in terms of the next decision, this gives a method of solving the problem, illustrated below with an example where the knapsack capacity is 6.
EXAMPLE 0/1 KNAPSACK PROBLEM. Let us consider an instance where n = 3, (w1, w2, w3) = (2, 3, 4) and (p1, p2, p3) = (1, 2, 5), with knapsack capacity m = 6. So we have
g0(6) = max{g1(6), g1(4) + 1}   (1)
Proceeding on the same lines
g1(6) = max{g2(6), g2(3) + 2}   (2)
And again
g2(6) = max{g3(6), g3(2) + 5}   (3)
But g3(6) = g3(2) = 0, so putting these in (3)
g2(6) = max{0, 0 + 5} = 5   (4)
Now g2(3) = max{g3(3), g3(-1) + 5} = max{0, -∞ + 5} = 0   (5)
Putting (4) and (5) in (2)
g1(6) = max{5, 0 + 2} = 5   (6)
Now g1(4) = max{g2(4), g2(1) + 2}   (7)
And g2(4) = max{g3(4), g3(0) + 5} = max{0, 0 + 5} = 5   (8)
EXAMPLE 0/1 KNAPSACK PROBLEM contd... Also
g2(1) = max{g3(1), g3(-3) + 5} = max{0, -∞ + 5} = 0   (9)
Putting (8) and (9) in (7)
g1(4) = max{5, 0 + 2} = 5   (10)
So finally, putting (6) and (10) in (1), we get g0(6) = max{5, 5 + 1} = max{5, 6} = 6.
Another possible method for solving the knapsack problem is the set method, in which all feasible decision possibilities are generated and the optimal case is then used to trace back the optimal decision steps. For the problem under discussion we proceed as follows. We define S(x, y) as a set of (profit, weight) pairs, where x denotes the decision step minus 1 and y denotes the decision, 0 or 1. So S(0, 0) = {(0, 0)}; the first term in each pair denotes profit and the second the weight.
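The recurrence gi(y) = max{gi+1(y), gi+1(y - wi+1) + pi+1} can be checked with a short memoized function. This is a sketch, assuming the instance w = (2, 3, 4), p = (1, 2, 5), m = 6 that the calculations imply; the name g mirrors the notation in the slides.

```python
from functools import lru_cache

# Assumed instance (reconstructed from the calculations): n = 3,
# weights w = (2, 3, 4), profits p = (1, 2, 5), capacity m = 6.
p = [1, 2, 5]
w = [2, 3, 4]
n, m = 3, 6

@lru_cache(maxsize=None)
def g(i, y):
    """Value of an optimal solution to KNAP(i+1, n, y)."""
    if y < 0:
        return float('-inf')        # infeasible leftover capacity
    if i == n:
        return 0                    # g_n(y) = 0: no decisions remain
    # Either exclude object i+1, or include it and pay its weight.
    return max(g(i + 1, y), g(i + 1, y - w[i]) + p[i])

print(g(0, m))   # g0(6) = 6
```

Each call corresponds to one of the numbered equations above, e.g. g(1, 6) evaluates equation (2) and returns 5.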
EXAMPLE 0/1 KNAPSACK PROBLEM contd... With the same logic S(0, 1) = {(1, 2)}, as on the inclusion decision profit = 1 and weight = 2. Proceeding in the same manner, S(1, 0) = {(0, 0), (1, 2)} and S(1, 1) = {(2, 3), (3, 5)}. Likewise the options for the next decision are S(2, 0) = {(0, 0), (1, 2), (2, 3), (3, 5)} and S(2, 1) = {(5, 4), (6, 6), (7, 7), (8, 9)}. After merging all possible decisions we get S(3, 0) = {(0, 0), (1, 2), (2, 3), (3, 5), (5, 4), (6, 6), (7, 7), (8, 9)}; the last two terms are not feasible in weight. The term (3, 5) is eliminated because the next term (5, 4) indicates an increase in profit with a decrease in weight. This step of elimination is called purging. So we get S(3, 0) = {(0, 0), (1, 2), (2, 3), (5, 4), (6, 6)}. As the last pair (6, 6) indicates the optimal choice, we trace back the decisions and get the inclusion sequence (x1, x2, x3) = (1, 0, 1).
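The merge-and-purge steps above can be sketched as a short loop. Again this assumes the reconstructed instance w = (2, 3, 4), p = (1, 2, 5), m = 6; each pair is (profit, weight).

```python
# Set (merge-purge) method sketch for the assumed instance.
p = [1, 2, 5]
w = [2, 3, 4]
m = 6

def purge(pairs):
    """Drop infeasible pairs and pairs dominated by a better one."""
    feasible = [q for q in pairs if q[1] <= m]
    result, best = [], -1
    # Scan by increasing weight; a dominated pair has no higher profit
    # than some pair of equal or lower weight.
    for profit, weight in sorted(feasible, key=lambda q: (q[1], -q[0])):
        if profit > best:
            result.append((profit, weight))
            best = profit
    return result

S = [(0, 0)]                                  # no object considered yet
for pi, wi in zip(p, w):
    S1 = [(P + pi, W + wi) for (P, W) in S]   # inclusion decisions
    S = purge(S + S1)                         # merge both choices, then purge

print(S)        # [(0, 0), (1, 2), (2, 3), (5, 4), (6, 6)]
print(max(S))   # optimal pair: (6, 6)
```

Note that (3, 5) disappears in the last round exactly as in the purging step described above, because (5, 4) dominates it.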
RELIABILITY DESIGN PROBLEM. A system is composed of several devices D1, D2, ..., Dn connected in series. Each device Di has a reliability ri, the probability of its functioning well. Since the devices are connected in series, the overall reliability is r1 × r2 × ... × rn = ∏ ri, 1 ≤ i ≤ n. So even if 10 devices are connected in series, each with reliability 0.99, the overall reliability is only 0.99^10 ≈ 0.904. However, if multiple copies of each device are available and connected in parallel, the overall reliability increases. Suppose mi copies of device Di are used; then the probability of all of them malfunctioning is (1 - ri)^mi, so the reliability of stage i is 1 - (1 - ri)^mi. If ri = 0.99 and mi = 2 then
RELIABILITY DESIGN PROBLEM contd... The reliability of stage i is 1 - (0.01)^2 = 0.9999. Whenever a system is to be developed, it is allotted a budget. Out of the given budget at least one copy of each device has to be procured; the rest can be utilised in procuring multiple copies of system components so that the reliability of the overall system increases. Problem: given a budget to develop a system of n components, with the cost and reliability of each component known, decide the number of copies of each component to procure so that the system achieves maximum reliability within the given budget. A dynamic programming solution considers all possible combinations, calculates the effect on reliability in each case, and out of all feasible solutions keeps the one providing the maximum benefit.
RELIABILITY DESIGN - EXAMPLE. Let us take an instance where a 3-stage system with devices D1, D2 and D3 has to be designed. The costs of the devices are 30, 15 and 20 respectively, the overall budget is 105, and the reliabilities of the devices are 0.9, 0.8 and 0.5 respectively. As discussed earlier, 1 - (1 - ri)^mi is the stage reliability when mi copies of a device of reliability ri are used. We can also deduce an upper bound on the number of copies of each device procurable within the budget of 105: one copy of each costs 30 + 15 + 20 = 65, leaving an extra budget of 40. As each D1 costs 30, the maximum number of D1 that can be procured is 1 + 1 = 2; similarly the upper bounds for D2 and D3 are 1 + 2 = 3 and 1 + 2 = 3 respectively. We adopt the set method to deduce the solution. Initially, when no device is considered, reliability is 1 and cost is 0, so S0 = {(1, 0)} // the first term in each pair is reliability and the second is cost.
RELIABILITY DESIGN - EXAMPLE contd...
Then S1,1 = {(0.9, 30)} // first device taken in a single copy, cost = 30
And S1,2 = {(0.99, 60)} // with 2 copies of D1 the stage reliability becomes 1 - (1 - 0.9)^2 = 0.99
So overall we have the choice set S1 = {(0.9, 30), (0.99, 60)}.
Proceeding in the same manner, S2,1 = {(0.72, 45), (0.792, 75)} // reliabilities multiplied, costs added
and S2,2 = {(0.864, 60), (0.9504, 90)} // with 2 copies of D2 the stage-2 reliability is 1 - (1 - 0.8)^2 = 0.96
The second term (0.9504, 90) is not feasible, because with total cost 90 the inclusion of even a single D3 (cost 20) is not possible within the budget of 105. So we redefine S2,2 = {(0.864, 60)}.
Similarly S2,3 = {(0.8928, 75)} // with 3 copies of D2 the stage reliability is 1 - (0.2)^3 = 0.992; the second term (0.98208, 105) is not feasible, being over budget.
RELIABILITY DESIGN - EXAMPLE contd... So overall S2 = {(0.72, 45), (0.864, 60), (0.8928, 75)} // (0.792, 75) is purged, being dominated by (0.8928, 75).
Deducing further and proceeding in the same way, applying all constraints:
S3,1 = {(0.36, 65), (0.432, 80), (0.4464, 95)}
S3,2 = {(0.54, 85), (0.648, 100)}
S3,3 = {(0.63, 105)}
Merging all options and applying the purging process, S3 = {(0.36, 65), (0.432, 80), (0.54, 85), (0.648, 100)}. The above set indicates that the maximum reliability possible is the last term, 0.648, with cost incurred 100. Tracing this reliability back, we find that the decisions that led to this result are D3 = 2, D2 = 2 and D1 = 1. This is how system reliability is designed by dynamic programming.
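The result can be cross-checked by brute force, since the upper bounds make the search space tiny. A minimal sketch using the instance from the slides (costs 30/15/20, reliabilities 0.9/0.8/0.5, budget 105):

```python
from itertools import product

# Instance from the slides: device costs, reliabilities, overall budget,
# and the per-device upper bounds on copies derived from the budget.
cost = [30, 15, 20]
rel = [0.9, 0.8, 0.5]
budget = 105
bound = [2, 3, 3]

best = (0.0, None)
# Enumerate every combination of copy counts within the upper bounds.
for copies in product(*(range(1, b + 1) for b in bound)):
    spent = sum(ci * mi for ci, mi in zip(cost, copies))
    if spent > budget:
        continue                       # over budget: infeasible
    r = 1.0
    for ri, mi in zip(rel, copies):
        r *= 1 - (1 - ri) ** mi        # stage reliability with mi copies
    if r > best[0]:
        best = (r, copies)

print(best)   # best reliability ≈ 0.648 with copy counts (1, 2, 2)
```

The enumeration confirms the traced-back decision D1 = 1, D2 = 2, D3 = 2 at cost 100; the set method reaches the same answer while examining far fewer combinations.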
ALL PAIR SHORTEST PATH PROBLEM. Given a directed graph G = (V, E) of n vertices and its cost adjacency matrix, where cost(i,j) = ∞ if edge (i,j) does not exist and cost(i,i) = 0 for every i, the all pair shortest path problem is to find a matrix A such that A(i,j) is the length of a shortest path from i to j, 1 ≤ i,j ≤ n. The matrix can be created by running a single source shortest path algorithm from every node, repeating it n times, giving an overall complexity of O(n^3); the dynamic programming approach, however, provides a simpler solution using the principle of optimality. This algorithm is applicable only to graphs without a negative cycle. If k is any intermediate vertex on a shortest path from i to j, then as per the principle of optimality the sub-paths from i to k and from k to j must also be shortest paths. If k is the intermediate vertex of maximum index, then neither the path from i to k nor the path from k to j passes through any vertex of index greater than k-1. In that case, if A^k(i,j) represents the length of a shortest path from i to j going through no vertex of index greater than k, we get the relation
ALL PAIR SHORTEST PATH PROBLEM contd... A^k(i,j) = min{A^(k-1)(i,j), A^(k-1)(i,k) + A^(k-1)(k,j)} for all 1 ≤ k ≤ n. From this, clearly A^0(i,j) = cost(i,j), 1 ≤ i,j ≤ n. A shortest path from i to j going through no vertex of index higher than k either goes through vertex k or it does not. If it does, then A^k(i,j) = A^(k-1)(i,k) + A^(k-1)(k,j), and if it does not, then A^k(i,j) = A^(k-1)(i,j). Combining the two situations gives A^k(i,j) = min{A^(k-1)(i,j), A^(k-1)(i,k) + A^(k-1)(k,j)}, k ≥ 1. The recurrence can be solved for A^n by first computing A^0, then A^1, then A^2 and so on. Since there is no vertex in G of index greater than n, A(i,j) = A^n(i,j). The algorithm AllPaths computes A^n(i,j); this algorithm is also called FLOYD-WARSHALL's algorithm. Algorithm AllPaths(cost, A, n)
ALL PAIR SHORTEST PATH PROBLEM contd...
// cost[1:n,1:n] is the cost adjacency matrix of a graph with n vertices; A[i,j] is the
// cost of a shortest path from vertex i to j; cost(i,i) = 0.0
{
  for i := 1 to n do
    for j := 1 to n do
      A[i,j] := cost[i,j]; // copy cost adjacency matrix to create A^0
  for k := 1 to n do
    for i := 1 to n do
      for j := 1 to n do
        A[i,j] := min(A[i,j], A[i,k] + A[k,j]);
}
As the algorithm is based on the principle of optimality we have A^k(i,k) = A^(k-1)(i,k) and
ALL PAIR SHORTEST PATH PROBLEM contd... A k (k,j) = A k-1 (k,j) .So when A k is formed the kth colomn and kth row do not change in A k-1 .Consequently when A k (i,j) is computed we have A(i,k) = A k-1 (i,k) = A k (i,k) and A(k,j) = A k-1 (k,j) = A k (k,j) . So old values on which the new values are based do not change In order to understand the working of Floyd Warshall algorithm we consider the directed graph shown below along with the cost adjacency matrix 6 1 4 2 A 0 1 2 3 3 11 2 1 0 4 11 3 2 6 0 2 3 3 ∞ 0.
ALL PAIR SHORTEST PATH PROBLEM EXAMPLE. We begin with the formation of A1 from the given A0. We do not alter the elements of row 1 and column 1, but for all other entries we compare A0(i,j) with A0(i,1) + A0(1,j) and retain the smaller value, as shown below. Next we construct A2 from A1, this time leaving row 2 and column 2 unchanged and for all other entries comparing A1(i,j) with A1(i,2) + A1(2,j), retaining the smaller. The process is repeated again to get A3 from A2. The matrix A3 is the answer matrix for the given problem.
A1 | 1  2  3
 1 | 0  4  11
 2 | 6  0  2
 3 | 3  7  0
A2 | 1  2  3
 1 | 0  4  6
 2 | 6  0  2
 3 | 3  7  0
A3 | 1  2  3
 1 | 0  4  6
 2 | 5  0  2
 3 | 3  7  0
The complexity of this algorithm is of the order O(n^3), as there are three nested loops.
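The three passes above can be reproduced with a compact implementation of the AllPaths routine. The matrix A0 below is the one implied by the worked example; INF marks a missing edge.

```python
# Floyd-Warshall (AllPaths) on the 3-vertex example from the slides.
INF = float('inf')

def all_pairs(cost):
    n = len(cost)
    A = [row[:] for row in cost]           # A^0: copy of the cost matrix
    for k in range(n):                     # allow vertex k as intermediate
        for i in range(n):
            for j in range(n):
                A[i][j] = min(A[i][j], A[i][k] + A[k][j])
    return A

A0 = [[0, 4, 11],
      [6, 0, 2],
      [3, INF, 0]]
print(all_pairs(A0))   # [[0, 4, 6], [5, 0, 2], [3, 7, 0]]
```

The printed matrix matches A3 above; note that rows and columns are 0-indexed here, while the slides index vertices from 1.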
MULTISTAGE GRAPH PROBLEM. A multistage graph G = (V, E) is a directed graph in which the vertices are partitioned into k disjoint sets Vi, 1 ≤ i ≤ k, k ≥ 2, such that if (u,v) is an edge in E then u ∈ Vi and v ∈ Vi+1 for some i. The sets V1 and Vk satisfy |V1| = |Vk| = 1. The single node in V1 is called s (the source) and that in Vk is called t (the sink). The term c(i,j) is the cost of edge (i,j); the cost of a path from s to t is the sum of the costs of the edges on that path. The multistage graph problem is to find a minimum cost path from s to t. A dynamic programming solution for a k-stage graph is obtained by considering the selection of a path to be the result of k-2 decisions, where the ith decision determines which vertex in Vi+1, 1 ≤ i ≤ k-2, is on the path. Let p(i,j) be a minimum cost path from vertex j in Vi to vertex t, and let cost(i,j) be the cost of this path. Applying the principle of optimality we have cost(i,j) = min{c(j,l) + cost(i+1,l)}, l ∈ Vi+1, (j,l) ∈ E.
MULTISTAGE GRAPH PROBLEM. [Figure: a 5-stage graph with 12 nodes, s = 1 and t = 12; stages V1 = {1}, V2 = {2,3,4,5}, V3 = {6,7,8}, V4 = {9,10,11}, V5 = {12}. Edge costs, as used in the calculations below: c(1,2)=9, c(1,3)=9, c(1,4)=3, c(1,5)=2; c(2,6)=4, c(2,7)=2, c(2,8)=1; c(3,6)=2, c(3,7)=7; c(4,8)=11; c(5,7)=11, c(5,8)=8; c(6,9)=6, c(6,10)=5; c(7,9)=5, c(7,10)=3; c(8,10)=5, c(8,11)=6; c(9,12)=4, c(10,12)=2, c(11,12)=5.]
MULTISTAGE GRAPH PROBLEM contd... Since cost(k-1,j) = c(j,t) if (j,t) ∈ E, to compute cost(1,s) we first compute cost(k-2,j) for all j ∈ Vk-2, then cost(k-3,j) for all j ∈ Vk-3, and so on until we reach cost(1,s).
Algorithm FGraph(G, k, n, p)
// The input is a k-stage graph G = (V,E) with n vertices indexed in order of stages;
// E is the set of edges and c(i,j) is the cost of edge (i,j). p[1:k] is a minimum cost
// path; cost[j] denotes the cost of the path from node j in stage i to the sink node t.
{
  cost[n] := 0.0;
  for j := n-1 to 1 step -1 do
  { // let r be a vertex such that (j,r) ∈ E and c(j,r) + cost[r] is minimum
    cost[j] := c(j,r) + cost[r];
    d[j] := r;
  }
MULTISTAGE GRAPH PROBLEM contd...
  // Find the minimum cost path
  p[1] := 1; p[k] := n;
  for j := 2 to k-1 do p[j] := d[p[j-1]];
}
Let us apply the algorithm to the given 5-stage graph to find the minimum cost path from s to t. In this case cost[12] = 0 // node 12 being the sink node. Then, applying the loop:
cost[11] = 5 + cost[12] = 5 + 0 = 5, so d[11] = 12
cost[10] = 2 + cost[12] = 2 + 0 = 2, so d[10] = 12
cost[9] = 4 + cost[12] = 4 + 0 = 4, so d[9] = 12
MULTISTAGE GRAPH PROBLEM EXAMPLE contd...
cost[8] = min[5 + cost[10], 6 + cost[11]] = min[5 + 2, 6 + 5] = min[7, 11] = 7, so d[8] = 10
cost[7] = min[5 + cost[9], 3 + cost[10]] = min[5 + 4, 3 + 2] = min[9, 5] = 5, so d[7] = 10
cost[6] = min[6 + cost[9], 5 + cost[10]] = min[6 + 4, 5 + 2] = min[10, 7] = 7, so d[6] = 10
cost[5] = min[11 + cost[7], 8 + cost[8]] = min[11 + 5, 8 + 7] = min[16, 15] = 15, so d[5] = 8
cost[4] = 11 + cost[8] = 11 + 7 = 18, so d[4] = 8
cost[3] = min[2 + cost[6], 7 + cost[7]] = min[2 + 7, 7 + 5] = min[9, 12] = 9, so d[3] = 6
cost[2] = min[4 + cost[6], 2 + cost[7], 1 + cost[8]] = min[4 + 7, 2 + 5, 1 + 7] = min[11, 7, 8] = 7, so d[2] = 7
cost[1] = min[9 + cost[2], 9 + cost[3], 3 + cost[4], 2 + cost[5]] = min[9 + 7, 9 + 9, 3 + 18, 2 + 15] = min[16, 18, 21, 17] = 16, so d[1] = 2
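The computation above can be sketched in Python. The edge-cost dictionary is read off the calculations (e.g. c(1,3) = 9 per the slide's arithmetic), so treat it as an assumption about the original figure.

```python
INF = float('inf')

# Edge costs of the 5-stage, 12-node example, reconstructed from the
# cost[] calculations above (an assumption about the original figure).
c = {(1, 2): 9, (1, 3): 9, (1, 4): 3, (1, 5): 2,
     (2, 6): 4, (2, 7): 2, (2, 8): 1, (3, 6): 2, (3, 7): 7,
     (4, 8): 11, (5, 7): 11, (5, 8): 8,
     (6, 9): 6, (6, 10): 5, (7, 9): 5, (7, 10): 3,
     (8, 10): 5, (8, 11): 6,
     (9, 12): 4, (10, 12): 2, (11, 12): 5}
n = 12

cost = [INF] * (n + 1)
d = [0] * (n + 1)
cost[n] = 0                        # sink node t
for j in range(n - 1, 0, -1):      # process vertices in reverse stage order
    # choose the successor r minimising c(j, r) + cost[r]
    r = min((v for (u, v) in c if u == j), key=lambda v: c[(j, v)] + cost[v])
    cost[j] = c[(j, r)] + cost[r]
    d[j] = r

path = [1]
while path[-1] != n:               # follow the d pointers from the source
    path.append(d[path[-1]])
print(cost[1], path)   # 16 [1, 2, 7, 10, 12]
```

The d array plays the same role as in FGraph: d[j] records which successor achieved the minimum, so the optimal path is recovered by following it from the source.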
MULTISTAGE GRAPH PROBLEM EXAMPLE contd... To calculate the path: p[1] = 1 and p[5] = 12
p[2] = d[p[1]] = d[1] = 2
p[3] = d[p[2]] = d[2] = 7
p[4] = d[p[3]] = d[7] = 10
So p[1..k] = [1, 2, 7, 10, 12].
The multistage graph problem can also be solved using a backward approach. Let bp(i,j) be a minimum cost path from the source vertex s to a vertex j in Vi, and let bcost(i,j) be the cost of bp(i,j). In this backward approach we obtain bcost(i,j) = min{bcost(i-1,l) + c(l,j)}, l ∈ Vi-1, (l,j) ∈ E. Since bcost(2,j) = c(1,j) if (1,j) ∈ E, bcost(i,j) can be computed by first computing bcost for i = 3, then for i = 4, and so on. The algorithm this time is BGraph.
MULTISTAGE GRAPH PROBLEM contd...
Algorithm BGraph(G, k, n, p)
// Same input conventions as FGraph, but costs are computed forward from the source.
{
  bcost[1] := 0.0;
  for j := 2 to n do
  { // let r be a vertex such that (r,j) ∈ E and bcost[r] + c(r,j) is minimum
    bcost[j] := bcost[r] + c(r,j);
    d[j] := r;
  }
  // Find the minimum cost path
  p[1] := 1; p[k] := n;
  for j := k-1 to 2 step -1 do p[j] := d[p[j+1]];
}
The complexity of both FGraph and BGraph is of the order Θ(|V| + |E|).
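The backward approach can be sketched on the same assumed edge costs: bcost grows outward from the source, and the path is recovered by tracing the d pointers back from the sink.

```python
INF = float('inf')

# Same assumed edge costs as in the forward example (reconstructed
# from the cost[] calculations; c(1,3) = 9 per the slide's arithmetic).
c = {(1, 2): 9, (1, 3): 9, (1, 4): 3, (1, 5): 2,
     (2, 6): 4, (2, 7): 2, (2, 8): 1, (3, 6): 2, (3, 7): 7,
     (4, 8): 11, (5, 7): 11, (5, 8): 8,
     (6, 9): 6, (6, 10): 5, (7, 9): 5, (7, 10): 3,
     (8, 10): 5, (8, 11): 6,
     (9, 12): 4, (10, 12): 2, (11, 12): 5}
n = 12

bcost = [INF] * (n + 1)
d = [0] * (n + 1)
bcost[1] = 0                       # source node s
for j in range(2, n + 1):          # process vertices in stage order
    # choose the predecessor r minimising bcost[r] + c(r, j)
    r = min((u for (u, v) in c if v == j), key=lambda u: bcost[u] + c[(u, j)])
    bcost[j] = bcost[r] + c[(r, j)]
    d[j] = r

path = [n]
while path[-1] != 1:               # trace predecessors back from the sink
    path.append(d[path[-1]])
path.reverse()
print(bcost[n], path)   # 16 [1, 2, 7, 10, 12]
```

Both directions report the same minimum cost 16 and the same path 1-2-7-10-12, as expected.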