Towards the Simplex Method
The Classical Simplex Method



The Simplex Method is the earliest solution algorithm for solving LP problems. It is an efficient implementation of solving a series of systems of linear equations. By using a greedy strategy while jumping from one feasible vertex to the next adjacent vertex, the algorithm terminates at an optimal solution. The aim of this site is to enhance the understanding of the developmental process of the Simplex Method.
Professor Hossein Arsham   



   MENU
  1. Introduction
  2. Algebraic Method
  3. Pivoting Row Operations
  4. The Simplex Method


Introduction

The Graphical Method is limited to solving LP problems having one or two decision variables. However, it provides a clear illustration of where the feasible and infeasible regions are, as well as the vertices. Having a visual understanding of the problem helps with a more rational thought process. For example, we learned that if a linear program has a non-empty, bounded feasible region, then the optimal solution is always at one of the vertices of its feasible region (a corner point). Therefore, what needs to be done is to find all the intersection points (vertices) and then examine which one among all feasible vertices provides the optimal solution. Now, by using concepts from analytic geometry, we will overcome this limitation of human vision. The Algebraic Method is designed to extend the graphical method results to multi-dimensional LP problems.


Algebraic Method:
LP models with multi-dimensional decision variables

We are interested in solving the following general problem:

Max (or Min) C·X
subject to:
AX ≤ a
BX ≥ b
DX = d
with possibly some restricted variables: some Xi's ≥ 0, some Xi's ≤ 0, and all others unrestricted in sign.

The customary notation is used: C = {Cj} for the objective function coefficients (known as the cost coefficients because, historically, the first LP problem was a cost minimization problem), and X = {Xj} for the decision variables.

Here is a four-step procedure for the algebraic solution algorithm:

  1. Construction of the Boundaries of the Constraint Set: Transform all inequalities (except the restriction on each variable, if any) into equalities by adding slack or subtracting surplus variables. Construction of the boundaries for the restricted variables is handled in the next step.
  2. Finding All the Vertices: If the number of variables (including slack and surplus) is more than the number of equations, then set the following number of variables to zero:

    [ (Total number of variables including slack and surplus variables) - (Number of equations) ]

    The variables to set to zero are the slack, surplus, and restricted variables (any Xi's ≥ 0, or Xi's ≤ 0) only. After setting that many variables to zero, find the remaining variables by solving the resulting square system of equations.

    Notice that if Xi ≥ 0, it could be written as Xi - Si = 0 with Si ≥ 0; setting the corresponding Si to zero means Xi = 0. Therefore, it is not necessary to introduce an additional slack or surplus variable explicitly for a restricted variable. Note also that when any Xi is unrestricted in sign, it does not add a constraint.

  3. Check for Feasibility: All slack and surplus variables must be non-negative; also check the restriction on each variable, if any. Each feasible solution is called a Basic Feasible Solution, which is a corner point of the feasible region.
  4. Selecting the Optimal Corner Point: Among all feasible solutions (i.e., feasible corner points), find the optimal one (if any) by evaluating the objective function. There might be multiple optimal solutions.

Remember that a linear system AX = b has a solution if and only if the rank of A is equal to the rank of the augmented matrix (A, b).

Definitions to know:

Each solution to any such system of equations is called a Basic Solution (BS). Those BS which are feasible are called Basic Feasible Solutions (BFS).
In every basic solution, the variables which you set equal to zero are called the Non-Basic Variables (NBV); all other variables, which you compute by using the system of equations, are called Basic Variables (BV).
Note that the list of decision variables which are BVs for an optimal solution is readily available in the QSB optimal solution tableau.

Slack is the leftover of a resource. Surplus is the excess of production.

Number of basic solutions: After converting all inequalities into equalities, let T = the total number of variables (including slack and surplus variables), E = the number of equations, and R = the total number of slack and surplus variables plus restricted decision variables; then the maximum number of basic solutions is:

R! / [(T - E)! (R + E - T)!]

where ! stands for factorial; for example, 4! = (1)(2)(3)(4) = 24. Note that, by definition, 0! = 1.
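
As a quick check, this count is simply the binomial coefficient C(R, T - E): the number of ways to choose which T - E eligible variables to set to zero. A minimal Python sketch of the formula (the names T, E, R are just the symbols defined above):

    from math import comb, factorial

    def max_basic_solutions(T: int, E: int, R: int) -> int:
        """Maximum number of basic solutions: R! / [(T - E)! (R + E - T)!]."""
        return factorial(R) // (factorial(T - E) * factorial(R + E - T))

    # Example 1 below has T = 5, E = 3, R = 3:
    print(max_basic_solutions(T=5, E=3, R=3))   # prints 3
    print(comb(3, 5 - 3))                       # same count, since it equals C(R, T - E)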

Notice that if E > T or T > R + E, then the initial LP formulation could be wrong. Remedies and corrective actions are discussed in the section The Dark Side of LP: Tools for Modeling Validation.

The main result: If an LP problem has bounded solution(s), then the algebraic method generates the solution(s).

Example 1 : A Problem Without Any Restricted Variable:

Max X1 + X2
subject to:
X1 + X2 ≥ 10
X1 ≤ 8
X2 ≤ 12

Introducing the slack and surplus variables, we have:

X1 + X2 - S1 = 10
X1 + S2 = 8
X2 + S3 = 12

For this problem E = 3, T = 5, R = 3; therefore, there are at most 3! / [2! 1!] = 3 basic solutions. To find the basic solutions, notice that there are 3 equations with 5 unknowns. Setting any 2 (= 5 - 3) of the slack and surplus variables to zero in turn, and solving the three resulting systems of equations, we have:

S1    S2    S3    X1    X2    X1 + X2
0     0     10    8     2     10
10    0     0     8     12    20*
0     10    0     -2    12    10

The optimal solution is S1= 10, S2 = 0, S3 = 0, X1 = 8, X2 = 12, with optimal value of 20.
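
The enumeration above can be reproduced with a short script. The following is a minimal sketch (assuming NumPy is installed; the variable names are illustrative only) that sets each admissible pair of variables to zero and solves the remaining square system:

    import itertools
    import numpy as np

    # Equality form of Example 1; columns ordered as X1, X2, S1, S2, S3
    A = np.array([[1.0, 1.0, -1.0, 0.0, 0.0],    # X1 + X2 - S1 = 10
                  [1.0, 0.0,  0.0, 1.0, 0.0],    # X1      + S2 =  8
                  [0.0, 1.0,  0.0, 0.0, 1.0]])   #      X2 + S3 = 12
    b = np.array([10.0, 8.0, 12.0])
    names = ["X1", "X2", "S1", "S2", "S3"]
    eligible = [2, 3, 4]                 # only slack/surplus may be set to zero here
    n_zero = A.shape[1] - A.shape[0]     # T - E = 2 variables to set to zero

    for zeros in itertools.combinations(eligible, n_zero):
        keep = [j for j in range(A.shape[1]) if j not in zeros]
        try:
            solved = np.linalg.solve(A[:, keep], b)   # square 3-by-3 system
        except np.linalg.LinAlgError:
            continue                                  # singular: no basic solution
        x = np.zeros(A.shape[1])
        x[keep] = solved
        feasible = all(x[j] >= 0 for j in eligible)   # slacks/surplus non-negative
        print(dict(zip(names, x.round(4))),
              "obj =", x[0] + x[1],
              "feasible" if feasible else "infeasible")

The three printed rows match the table above; the best feasible one gives the optimal value 20.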

Example 2: A Problem With One Restricted Variable:

The following problem is attributed to Andreas Enge and Petra Huhn.

Maximize 3X1 + X2 - 4X3
subject to:
X1 + X2 - X3 = 1,
X2 ≥ 2,
X1 ≥ 0

After adding a surplus variable, we have:

Maximize 3X1 + X2 - 4X3
subject to:
X1 + X2 - X3 = 1,
X2 - S1 = 2,

This LP problem cannot be solved by the graphical method, since it has three decision variables. However, the algebraic method is general in the sense that it does not put any limitation on the dimensionality of the problem. Notice that we have two equations with one surplus variable and one restricted decision variable. The parameters for this problem are T = 4, R = 2, and E = 2. This gives the total number of possible basic solutions: 2! / [2! 0!] = 1. By setting the surplus variable S1 and the restricted variable X1 to zero, we get:

X1    X2    X3    S1    3X1 + X2 - 4X3
0     2     1     0     -2*

Thus the optimal solution is X1 = 0, X2 = 2, X3 = 1, with optimal value of -2.

Example 3: The Carpenter's Problem:

Introducing slack variables to convert all inequalities into equalities, we have:

2X1 + X2 + S1 = 40
X1 + 2X2 + S2 = 50

Here we have 2 equations with 4 unknowns. Since the variables X1 and X2 are both restricted, any two of the four variables (including X1 and X2) may be set to zero. Solving the six resulting systems of equations, we have:

S1    S2    X1    X2    5X1 + 3X2
0     0     10    20    110*
0     -30   0     40    infeasible
0     30    20    0     100
15    0     0     25    75
-60   0     50    0     infeasible
40    50    0     0     0

Therefore, from the above table, we see that the optimal solution is S1 = 0, S2 = 0, X1 = 10, X2 = 20, with an optimal value of $110.
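
Because X1 and X2 are restricted here, all four variables are eligible for zeroing, giving C(4, 2) = 6 combinations. A minimal sketch of this enumeration (assuming NumPy; the 2-by-2 subsystems of this particular example all happen to be nonsingular, so no singularity check is included):

    import itertools
    import numpy as np

    # Carpenter's problem in equality form; columns ordered as X1, X2, S1, S2
    A = np.array([[2.0, 1.0, 1.0, 0.0],    # 2X1 +  X2 + S1 = 40
                  [1.0, 2.0, 0.0, 1.0]])   #  X1 + 2X2 + S2 = 50
    b = np.array([40.0, 50.0])
    c = np.array([5.0, 3.0, 0.0, 0.0])     # objective 5X1 + 3X2

    # X1 and X2 are restricted (>= 0), so all four variables are eligible for zeroing:
    for zeros in itertools.combinations(range(4), 2):      # C(4, 2) = 6 cases
        keep = [j for j in range(4) if j not in zeros]
        x = np.zeros(4)
        x[keep] = np.linalg.solve(A[:, keep], b)
        status = "feasible" if np.all(x >= 0) else "infeasible"
        print(x, "obj =", c @ x, status)

The six printed rows reproduce the table above, with the best feasible corner at X1 = 10, X2 = 20.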

Example 4: A Problem With Mixed Constraints:

Min X1 + 2X2
subject to:
X1 + X2 ≥ 4
-X1 + X2 ≤ 2
X1 ≥ 0, and X2 is unrestricted in sign.

Introducing the slack and surplus variables, we have:

X1 + X2 - S1 = 4
-X1 + X2 + S2 = 2

Here we have 2 equations with 4 unknowns. Since only X1 is restricted, we must set any two of the variables S1, S2, and X1 to zero. Solving the three resulting systems of equations, we have:

X1    X2    S1    S2    X1 + 2X2
0     4     0     -2    infeasible
0     2     -2    0     infeasible
1     3     0     0     7*

Therefore, from the above table, we see that the optimal solution is X1 = 1, X2 = 3, S1 = 0, S2 = 0, with an optimal value of 7.

Example 5: A Transportation Problem:

The goal is to find the most effective way to transport the goods. The supply at each origin (e.g., warehouse) O1 and O2, the demand at each destination (e.g., market) D1 and D2, and the unit transportation costs are summarized in the following table.

The Unit Transportation Cost Matrix

           D1     D2     Supply
O1         20     30     200
O2         10     40     100
Demand     150    150    300

Let Xij denote the amount of shipment from source i to destination j. The LP formulation of the problem, minimizing the total transportation cost, is:

Min 20X11 + 30X12 + 10X21 + 40X22

subject to:
X11 + X12 = 200
X21 + X22 = 100
X11 + X21 = 150
X12 + X22 = 150
all Xij ≥ 0

Because this transportation problem is balanced (total supply = total demand), all constraints are in equality form. Moreover, any one of the constraints is redundant (adding the two supply constraints and subtracting one demand constraint gives the remaining demand constraint). Let us delete the last constraint. Therefore, the problem reduces to:

Min 20X11 + 30X12 + 10X21 + 40X22

subject to:
X11 + X12 = 200
X21 + X22 = 100
X11 + X21 = 150
all Xij ≥ 0

This LP problem cannot be solved by the graphical method. However, the algebraic method places no limitation on the LP dimension. Notice that we have three equations with four restricted decision variables; here T = 4, E = 3, and R = 4, so there are at most 4! / [1! 3!] = 4 basic solutions. Setting each variable in turn to zero, we get:

X11    X12    X21    X22    Total Transportation Cost
0      200    150    -50    infeasible
200    0      -50    150    infeasible
150    50     0      100    8500
50     150    100    0      6500*

Thus the optimal strategy is X11 = 50, X12 = 150, X21 = 100, and X22 = 0, with a least total transportation cost of $6,500.
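
As a cross-check (not part of the original presentation), the reduced problem can also be handed to an off-the-shelf LP solver. This sketch assumes SciPy is installed and uses scipy.optimize.linprog with equality constraints:

    from scipy.optimize import linprog

    # Minimize 20*X11 + 30*X12 + 10*X21 + 40*X22, variables ordered (X11, X12, X21, X22)
    c = [20, 30, 10, 40]
    A_eq = [[1, 1, 0, 0],    # X11 + X12 = 200
            [0, 0, 1, 1],    # X21 + X22 = 100
            [1, 0, 1, 0]]    # X11 + X21 = 150
    b_eq = [200, 100, 150]

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)
    print(res.x, res.fun)    # expected: [50. 150. 100. 0.] and 6500.0

The reported solution and objective agree with the table above.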

Example 6: A Problem With Too-few Constraints:

As our last example, consider the following problem:

Max X1 + X2
subject to:
X1 + X2 ≤ 5

Introducing the slack variable we have:

X1 + X2 + S1 = 5

The parameters for this problem are T = 3, R = 1, and E = 1. Since T > R + E, the counting formula above does not apply: we would need to set T - E = 2 variables to zero, but only R = 1 eligible variable (the slack S1) is available. Setting the slack variable to zero leaves the single equation X1 + X2 = 5 in two variables. Therefore, there is no corner point; moreover, the feasible region is unbounded. Nevertheless, any solution of X1 + X2 = 5, such as X1 = 0, X2 = 5, is an optimal solution to the LP problem, with an optimal value of 5.

For a refined algebraic method visit the Solution Algorithms for LP Models Web site.


For more numerical examples, extensions, and proofs, read the following articles:

Arsham H., Affine Geometric Method for Linear Programs, Journal of Scientific Computing, Vol. 12, No. 3, 289-303, 1997.
Arsham H., Links Among a Linear System of Equations, Matrix Inversion, and Linear Program Solver Routines, Journal of Mathematical Education in Science and Technology, Vol. 29, No. 5, 764-769, 1998.


Pivoting Row Operations

As you recall, the Algebraic Method involves solving many linear systems of equations. When the LP problem has many variables and constraints, solving so many systems of equations by hand becomes very tedious, and for very large-scale problems it is an impossible task. Therefore, we need the computer to do the computations for us. One of the algorithmic, computerized approaches to solving linear systems of equations is known as Gauss-Jordan (GJ) pivoting.

GJ pivoting will also be required later in the Simplex Method, so now is a good time to develop the habits needed then.

Pivoting uses row operations (known as Gauss-Jordan row operations) to change one matrix entry (the pivot) to "1", and then to change all other entries in the pivot's column into zeros.

Once a pivot is chosen, the row operations of pivoting must be as follows:

Step 1: Make the pivot "1" by dividing the pivot's row by the pivot number.

Step 2: Make the remainder of the pivot's column into zeros by adding a suitable multiple of the pivot row to each of the other rows.

Note: The number changing to "1" is called a pivot; it is usually encircled and cannot be zero. If it is zero, then interchange this row with a row below it that has a non-zero element in that column (if there is none, then the conversion is impossible).

A Numerical Example: Using the Gauss-Jordan row operations, solve the following system of equations:

2X1 + X2 + X3 = 3
X1 + 3X3 = 1
2X1 + X2 = 2

The aim is to convert the coefficient matrix of the system of equations into the following identity matrix. The resulting RHS elements (?) then provide the solution (if one exists).
1 0 0 ?
0 1 0 ?
0 0 1 ?
Step 1. Use row operations column by column;

Step 2. In each column:

a) First obtain the 1 in the appropriate row by multiplying that row by the reciprocal of the pivot. Note: If there is a 0 in this position, exchange rows with a row below having a non-zero element in that column, if possible.

b) Reduce all other values in the column to zero by adding a suitable multiple of the row containing the 1 to each of the other rows.

Let's apply the above procedure to our numerical example.

Notation: Old Row [ ], New Row { }. Putting the coefficient matrix and the RHS column next to each other, the augmented matrix and the pivoting steps are:


Row #   Operations        X1      X2      X3      RHS
1                         2       1       1       3
2                         1       0       3       1
3                         2       1       0       2

1       [1]/2             1       1/2     1/2     3/2
2       -1{1}+[2]         0       -1/2    5/2     -1/2
3       -2{1}+[3]         0       0       -1      -1

1       (-1/2){2}+[1]     1       0       3       1
2       [2]/(-1/2)        0       1       -5      1
3       (0){2}+[3]        0       0       -1      -1

1       (-3){3}+[1]       1       0       0       -2
2       (5){3}+[2]        0       1       0       6
3       [3]/(-1)          0       0       1       1


The solution is X1 = -2, X2 = 6, and X3 =1, which can be verified by substitutions.
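
The same elimination can be scripted. Below is a minimal Gauss-Jordan sketch (assuming NumPy). For numerical robustness it picks the largest available pivot in each column, so the intermediate tableaus may differ from the hand computation above, but the final solution agrees:

    import numpy as np

    def gauss_jordan(A, b):
        """Reduce the augmented matrix [A | b] to [I | x] by Gauss-Jordan pivoting."""
        M = np.hstack([np.asarray(A, float), np.asarray(b, float).reshape(-1, 1)])
        n = M.shape[0]
        for col in range(n):
            # Choose a pivot row; if no non-zero entry remains, the conversion is impossible.
            pivot_row = col + int(np.argmax(np.abs(M[col:, col])))
            if M[pivot_row, col] == 0:
                raise ValueError("conversion is impossible (singular system)")
            M[[col, pivot_row]] = M[[pivot_row, col]]   # row interchange, if needed
            M[col] /= M[col, col]                       # Step 1: make the pivot 1
            for r in range(n):                          # Step 2: zero the rest of the column
                if r != col:
                    M[r] -= M[r, col] * M[col]
        return M[:, -1]

    A = [[2, 1, 1], [1, 0, 3], [2, 1, 0]]
    b = [3, 1, 2]
    print(gauss_jordan(A, b))    # expected: [-2.  6.  1.]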

Also visit the Web sites Solving a linear system of equations, Pivot Engine, Solve System of Linear Equations, and The Equator Module.


The Simplex Method

The Simplex Method is another algorithm for solving LP problems. Recall that the Algebraic Method provides all vertices, even those that are not feasible. Therefore, it is not an efficient way of solving LP problems with large numbers of constraints. The Simplex Method is a modification of the Algebraic Method that overcomes this deficiency. However, the Simplex Method has its own limitations: for example, it requires that all variables be non-negative (≥ 0); also, all other constraints must be in ≤ form with non-negative right-hand-side (RHS) values.

Like the Algebraic Method, the simplex method is also a tabular solution algorithm. However, each tableau in the simplex method corresponds to a movement from one basic variable set (BVS), i.e., an extreme or corner point, to another, making sure that the objective function improves at each iteration until the optimal solution is reached.

The presentation of the simplex method is not universal. In the U.S.A., professors on the West Coast enjoy solving minimization problems, while in the East the maximization version is preferred. Even within each of these groups you will find differences in presenting the simplex rules. The following steps describe the simplex solution algorithm; a code sketch follows the list:

  1. Convert the LP to the following form:

    Convert the minimization problem into a maximization one (by multiplying the objective function by -1).
    All variables must be non-negative.
    All RHS values must be non-negative (multiply both sides by -1, if needed).
    All constraints must be in ≤ form (except the non-negativity conditions). No strict equality or ≥ constraints are allowed.
    If this condition cannot be satisfied, then use the Initialization of the Simplex Method: Artificial-Free.

  2. Convert all ≤ constraints to equalities by adding a different slack variable for each one of them.

  3. Construct the initial simplex tableau with all slack variables in the BVS. The last row in the tableau contains the coefficients of the objective function (the Cj row).

  4. Determine whether the current tableau is optimal. That is, check:
    whether all RHS values are non-negative (called the feasibility condition), and
    whether all elements of the last row, the Cj row, are non-positive (called the optimality condition).

    If the answers to both of these questions are yes, then stop: the current tableau contains an optimal solution.
    Otherwise, go to the next step.

  5. If the current BVS is not optimal, determine which nonbasic variable should become a basic variable and which basic variable should become a nonbasic variable. To find the new BVS with a better objective function value, perform the following tasks:

    • Identify the entering variable: The entering variable is the one with the largest positive Cj value (in case of a tie, select the variable corresponding to the leftmost of the tied columns).

    • Identify the outgoing variable: The outgoing variable is the one with the smallest non-negative column ratio (to find the column ratios, divide the RHS column by the entering variable column, wherever possible). In case of a tie, select the variable corresponding to the uppermost of the tied rows.

    • Generate the new tableau: Perform the Gauss-Jordan pivoting operation to convert the entering column to an identity column vector (including the element in the Cj row).

      Go to step 4.
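
To make the procedure concrete, here is a minimal tableau-based sketch in Python (assuming NumPy). It follows the steps above for a maximization problem whose constraints are all in ≤ form with non-negative RHS; it is an illustration under those assumptions, not a production solver (no anti-cycling safeguards):

    import numpy as np

    def simplex_max(c, A, b):
        """Maximize c @ x subject to A @ x <= b and x >= 0, assuming b >= 0."""
        m, n = A.shape
        # Steps 2-3: add one slack per constraint and build the initial tableau; last row is Cj.
        T = np.zeros((m + 1, n + m + 1))
        T[:m, :n] = A
        T[:m, n:n + m] = np.eye(m)
        T[:m, -1] = b
        T[-1, :n] = c
        basis = list(range(n, n + m))        # initial BVS: the slack variables
        while True:
            # Step 4: optimality test: stop when every Cj entry is non-positive.
            j = int(np.argmax(T[-1, :-1]))   # entering column (leftmost largest Cj)
            if T[-1, j] <= 0:
                break
            # Step 5: column-ratio test picks the outgoing row (smallest non-negative ratio).
            ratios = [T[i, -1] / T[i, j] if T[i, j] > 0 else np.inf for i in range(m)]
            i = int(np.argmin(ratios))
            if ratios[i] == np.inf:
                raise ValueError("the problem is unbounded")
            # Gauss-Jordan pivot on element (i, j), including the Cj row.
            T[i] /= T[i, j]
            for r in range(m + 1):
                if r != i:
                    T[r] -= T[r, j] * T[i]
            basis[i] = j
        x = np.zeros(n + m)
        x[basis] = T[:m, -1]
        return x[:n], float(c @ x[:n])

    # The Carpenter's Problem, solved by hand in the next section:
    x, z = simplex_max(np.array([5.0, 3.0]),
                       np.array([[2.0, 1.0], [1.0, 2.0]]),
                       np.array([40.0, 50.0]))
    print(x, z)    # expected: [10. 20.] 110.0

The argmax/argmin tie-breaking in this sketch matches the "leftmost column, uppermost row" rules stated above.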

A short discussion of the simplex strategy: At the start of the simplex procedure, the basis consists of the slack variables; remember that the first BVS has only slack variables in it. The Cj row gives the increase in the value of the objective function that would result if one unit of the variable corresponding to the jth column were brought into the basis. This row answers the question: Can we improve the objective function by moving to a new BVS? We will call it the Indicator Row (since it indicates whether the optimality condition is satisfied).

The criterion for entering a new variable into the BVS is the largest per-unit improvement of the objective function. The criterion for removing a variable from the current BVS maintains feasibility (making sure that the new RHS values, after pivoting, remain non-negative).

Warning: Whenever during the simplex iterations you get a negative RHS value, it means you have selected a wrong outgoing variable. The best remedy is to start all over again.

Notice that there is a solution corresponding to each simplex tableau. The numerical values of the basic variables are the RHS values, while the other variables (the non-basic variables) are always equal to zero.

Notice also that variables can exit and enter the basis repeatedly during the simplex algorithm.

Numerical Recipes states that the simplex algorithm is 'almost always' O(max(N, M)), which means that the number of iterations is roughly proportional to the number of variables or the number of constraints, whichever is larger.

Geometric Interpretation of the Simplex Method: The simplex method always starts at the origin (which is a corner point) and then jumps from one corner point to a neighboring corner point until it reaches the optimal corner point (if the problem is bounded). Therefore, at each of the simplex iterations, we are searching for a better solution among the vertices of a simplex. A simplex in an n-dimensional space is the simplest shape having n + 1 vertices. For example, a triangle is a simplex in 2-dimensional space, while a tetrahedron (a pyramid with a triangular base) is a simplex in 3-dimensional space. These movements can be seen by matching each simplex tableau with a specific corner point in the graphical method, e.g., for the carpenter's problem, as shown next.

A Numerical Example: The Carpenter's Problem

Maximize 5X1 + 3X2

Subject to:
2X1 + X2 ≤ 40
X1 + 2X2 ≤ 50
and both X1, X2 are non-negative.

After adding two slack variables S1 and S2 the problem is equivalent to:

Maximize 5X1 + 3X2

Subject to:
2X1 + X2 + S1 = 40
X1 + 2X2 + S2 = 50
and variables X1, X2, S1, S2 are all non-negative.

The initial tableau is:

BVS    X1     X2     S1     S2     RHS    Column Ratio (C/R)
S1     [2]    1      1      0      40     40/2
S2     1      2      0      1      50     50/1
Cj     5      3      0      0

The solution shown by this tableau is: S1 = 40, S2 = 50, X1 = 0, and X2 = 0. This solution is the origin, shown in our graphical method.

This tableau is not optimal since some of the Cj elements are positive. The incoming variable is X1 and the outgoing variable is S1 (by the C/R test). The pivot element is shown in brackets. After pivoting, we have:

BVS    X1     X2      S1      S2     RHS    Column Ratio (C/R)
X1     1      1/2     1/2     0      20     20/(1/2) = 40
S2     0      [3/2]   -1/2    1      30     30/(3/2) = 10
Cj     0      1/2     -5/2    0

The solution to this tableau is: X1 = 20, S2 = 30, S1 = 0, and X2 = 0. This solution is the corner point (20, 0), shown in our graphical method.

This tableau is not optimal, since one of the Cj elements is positive. The incoming variable is X2 and the outgoing variable is S2 (by the C/R test). The pivot element is shown in brackets. After pivoting, we have:

BVS    X1     X2     S1      S2      RHS
X1     1      0      2/3     -1/3    10
X2     0      1      -1/3    2/3     20
Cj     0      0      -7/3    -1/3

The solution to this tableau is: X1 = 10, X2 = 20, S1 = 0, and S2 = 0. This solution is the corner point (10, 20), shown in our graphical method.

This tableau is optimal, since all Cj elements are non-positive and all RHS values are non-negative. The optimal solution is X1 = 10, X2 = 20, S1 = 0, S2 = 0. To find the optimal value, plug this solution into the objective function: 5X1 + 3X2 = 5(10) + 3(20) = $110.
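
The same answer can be confirmed with an off-the-shelf solver (a sketch assuming SciPy is installed); since scipy.optimize.linprog minimizes, the objective is negated:

    from scipy.optimize import linprog

    # Maximize 5X1 + 3X2  <=>  minimize -5X1 - 3X2
    res = linprog(c=[-5, -3],
                  A_ub=[[2, 1], [1, 2]],
                  b_ub=[40, 50],
                  bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)    # expected: [10. 20.] and 110.0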

Visit also the Web sites tutOR, The Simplex Place, Simplex Machine, and LP-Explorer.

Further Readings:

Maros I., Computational Techniques of the Simplex Method, Kluwer Publishers, 2002.


The Copyright Statement: The fair use, according to the 1996 Fair Use Guidelines for Educational Multimedia, of materials presented on this Web site is permitted for non-commercial and classroom purposes only.
This site may be mirrored intact (including these notices), on any server with public access, and linked to other Web pages. All files are available at http://home.ubalt.edu/ntsbarsh/Business-stat for mirroring.







EOF: © 1994-2015.