Topics in Linear Algebra
Unification of System of Linear Equations,
Matrix Inversion, and Linear Programming



This site extends the existing one-way connections among solving linear systems of equations, matrix inversion, and linear programming. The additional linkages empower the user to understand the wholeness and manifoldness of these topics. They also assist the user in modeling and solving a problem formulated as any one of these topics, given access to a computer package solver. The goals are theoretical unification as well as advancements in applications. Illustrative numerical examples are presented.
Professor Hossein Arsham   



   MENU
  1. Introduction
  2. LP Problem Solved by System of Equations Solver
  3. System of Equations Solution by LP Solver
  4. Solving for the Inverse of a Matrix Using LP Solver
  5. Solving an LP by using a Matrix Inverse Solver
  6. Solve a System of Equations Using a Matrix Inverse Solver
  7. Finding the Matrix Inverse Using System of Equations Solver
  8. Matrix Inversion: A Computational Algebra Approach
  9. Learning Objects: Online Interactive Solvers
  10. Rounding Errors and Stability Analysis
  11. Concluding Remarks
  12. References and Further Readings


Introduction

Linear programming (LP), linear systems of equations, and matrix inversion are often favorite topics for both instructors and students. The ability to solve these problems by Gauss-Jordan pivoting (GJP), the widespread availability of software packages, and their wide range of applications make these topics accessible even to students with relatively weak mathematical backgrounds. Typical textbooks on LP usually devote a separate section to each topic. However, the relationships among these closely related topics are often not presented or thoroughly discussed. This article extends the existing one-way connections among these topics to construct a comprehensive two-way relationship, as in the following figure. For each topic, it is shown how the problem may be modeled and solved by either of the associated methodologies.

[Figure: Comprehensive two-way links among the three topics]

The additional linkages introduced here empower the user to understand, model, and solve a problem formulated as any one of the above topics, given access to a computer package solver. The goals are theoretical unification as well as advancements in applications. The following six sections derive the linkages, which are illustrated by small numerical examples. While some of these are quite well known, we include them here for completeness.

LP Problem Solved by System of Equations Solver

This section is probably among the most familiar since many introductions to the Simplex Method of linear programming (LP) are developed using a system of linear equations approach.

The coordinates of the vertices are the basic feasible solutions of the systems of equations obtained by setting some of the constraints at their binding (i.e., equality) position. For a bounded feasible region, the number of vertices is at most C(p, q), the number of ways of choosing q items out of p, where p is the number of constraints and q is the number of variables. Therefore, by taking any q equations and solving them simultaneously, one obtains a basic solution (if one exists). By plugging this basic solution into the remaining constraints, one can check its feasibility. If it is feasible, then it is a basic feasible solution that provides the coordinates of a corner point of the feasible region.

Each solution of such a system of equations is called a Basic Solution (BS). Those basic solutions that are feasible are called Basic Feasible Solutions (BFS). The vertices of the feasible region are the BFS.

Suppose we have the following LP problem with all variables unrestricted in sign:

Max X1 + X2

subject to:
X1 + X2 ≥ 10
X1 ≤ 8
X2 ≤ 12

To illustrate the procedure, set all the constraints at their binding (i.e., equality) position. The result is the following set of 3 equations with 2 unknowns:

X1 + X2 = 10
X1 = 8
X2 = 12

Here we have p = 3 equations with q = 2 unknowns. In terms of a binomial coefficient, there are at most C(3, 2) = 3! / [2! (3-2)!] = 3 basic solutions. Solving the three resulting systems of equations, we have:

 X1    X2    X1 + X2
  8     2      10
  8    12      20*
 -2    12      10

The optimal solution is X1 = 8, X2 = 12, with optimal value of 20.
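The enumeration procedure above can be sketched in a few lines of code (my illustration, not part of the original text), using exact rational arithmetic so the vertex coordinates come out exactly:

```python
# Sketch of vertex enumeration for the example LP, with exact rationals.
from fractions import Fraction as F
from itertools import combinations

# Binding (equality) forms of the three constraints.
rows = [([F(1), F(1)], F(10)),   # X1 + X2 = 10
        ([F(1), F(0)], F(8)),    # X1      = 8
        ([F(0), F(1)], F(12))]   # X2      = 12

def solve2(r1, r2):
    """Solve two equations in two unknowns by Cramer's rule; None if singular."""
    (a, b), p = r1
    (c, d), q = r2
    det = a * d - b * c
    if det == 0:
        return None
    return ((p * d - b * q) / det, (a * q - p * c) / det)

def feasible(x1, x2):
    # Original inequality directions: >= 10, <= 8, <= 12.
    return x1 + x2 >= 10 and x1 <= 8 and x2 <= 12

# C(3, 2) = 3 candidate basic solutions; keep the feasible ones, maximize X1 + X2.
best = max((s for pair in combinations(rows, 2)
            if (s := solve2(*pair)) and feasible(*s)),
           key=lambda s: s[0] + s[1])
print(tuple(map(int, best)), int(best[0] + best[1]))   # (8, 12) 20
```

The same loop scales to any small LP: with p constraints and q variables it examines at most C(p, q) candidate systems.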

For details and more numerical examples visit the Solution Algorithms for LP Models Web site.

System of Equations Solution by LP Solver

Suppose we have a system of equations to solve and an LP solver, but no solver for systems of equations. To solve the system as an LP, add a dummy objective function; for generality we assume the variables are unrestricted in sign, so substitute Xi = yi - z everywhere. This substitution is needed because many LP packages, such as LINDO and QSB, assume the variables are non-negative.

For example to solve the following system of equations

2X1 + X2 = 3
X1 -X2 = 3

Convert to the LP problem below:

Max (or Min) a dummy objective, say z,
subject to: 2y1 + y2 - 3z = 3 and y1 - y2 = 3, with y1, y2, z ≥ 0.

Using any LP solver, such as LINDO, we find the optimal solution to be y1 = 3, y2 = 0, z = 1. Therefore, the solution to the original system of equations is X1 = 3 - 1 = 2, X2 = 0 - 1 = -1.
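As a sanity check on the substitution Xi = yi - z, the following sketch (my illustration; the constraint construction and the reported optimum come from the text) builds the transformed constraint rows and verifies that the recovered X solves the original system:

```python
# Verify the substitution Xi = yi - z for the example system, exactly.
from fractions import Fraction as F

A = [[F(2), F(1)], [F(1), F(-1)]]          # coefficient matrix
b = [F(3), F(3)]                           # right-hand side

# Transformed equality constraints in (y1, y2, z) >= 0:
# each row gains a z-column equal to minus the row sum of A.
rows = [arow + [-sum(arow)] for arow in A]  # [[2, 1, -3], [1, -1, 0]]

# Optimal point reported by the LP solver in the text.
y, z = [F(3), F(0)], F(1)

# The transformed constraints hold at (y, z) ...
for row, rhs in zip(rows, b):
    assert sum(c * v for c, v in zip(row, y + [z])) == rhs

# ... and the recovered X = y - z solves the original system.
X = [yi - z for yi in y]
print([int(x) for x in X])                 # [2, -1]
```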

Solving for the Inverse of a Matrix Using LP Solver

To find the inverse of an n×n matrix A, we solve a series of n LP problems, obtaining the inverse column by column. For example, to find the first column of the inverse matrix, solve

Max Dummy
subject to: AX - z [ΣjA1j, ΣjA2j, …, ΣjAnj]T = (1, 0, 0, …, 0)T, where ΣjAij denotes the ith row sum of A.

The first column of A-1 is then (X1* - z*, X2* - z*, …, Xn* - z*)T.

Suppose we wish to find the inverse of the following matrix (if it exists)

A    =    2    1
          1   -1

To find the first column of A-1 use an LP solver to solve

Max Dummy

subject to: 2X1 + X2 - 3z =1, and X1 - X2 = 0

The optimal solution is X1 = 1/3, X2 = 1/3, and z = 0. Therefore the first column of A-1 is [1/3 - 0, 1/3 - 0]T = [1/3, 1/3]T. To compute the second column solve

Max Dummy

Subject to: 2X1 + X2 - 3z =0, and X1 - X2 = 1

The optimal solution is X1 = 1, X2 = 0, and z = 2/3. Therefore the second column of A-1 is [1 - 2/3, 0 - 2/3]T = [1/3, -2/3]T. Therefore the inverse matrix is:

A-1  =   1/3    1/3
         1/3   -2/3

Notice: A matrix possessing an inverse is called nonsingular, or invertible. A matrix is called singular if it does not have an inverse. For example, the following matrix is a singular one:

 1    6    4
 2    4   -1
-1    2    5

Therefore, in applying the above procedure for inverting a matrix, if the matrix is singular, then at least one of the LP problems has no optimal solution.

Solving an LP by using a Matrix Inverse Solver

To solve an LP, we find the inverse of each basis matrix obtained by setting the appropriate number of slack/surplus variables equal to zero. Then X = B-1b, and we use the objective function to find the optimal solution and optimal value. Suppose we wish to solve the following LP problem using an inverse solver.

Max X1 + X2

subject to:
X1 + X2 ≥ 10
X1 ≤ 8
X2 ≤ 12

The first step is to convert all inequality constraints to equalities by introducing slack/surplus variables (any equality constraints remain unchanged):

X1 + X2 - S1 = 10
X1 + S2 = 8
X2 + S3 = 12

Then, by finding the inverse of the following three matrices, we obtain the basic solutions:

S2 = S3 = 0

          1    1   -1
B    =    1    0    0
          0    1    0

Its inverse is:

          0    1    0
B-1  =    0    0    1
         -1    1    1

Therefore:

  X1              8
  X2  = B-1.b =  12
  S1             10

S1 = S3 = 0

          1    1    0
B    =    1    0    1
          0    1    0

Its inverse is:

          1    0   -1
B-1  =    0    0    1
         -1    1    1

Therefore:

  X1             -2
  X2  = B-1.b =  12
  S2             10

S1 = S2 = 0

          1    1    0
B    =    1    0    0
          0    1    1

Its inverse is:

          0    1    0
B-1  =    1   -1    0
         -1    1    1

Therefore:

  X1              8
  X2  = B-1.b =   2
  S3             10

Evaluating the objective function at the feasible basic solutions, we obtain the optimal solution (X1 = 8, X2 = 12), with optimal value 20.
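The basis-enumeration route above can be sketched as follows (my illustration, not the author's code): for each choice of three basic variables, build the basis matrix B from the corresponding columns, invert it by Gauss-Jordan elimination with exact rationals, and read off B-1.b:

```python
# Enumerate bases of the equality system and compute B^-1 b for each.
from fractions import Fraction as F
from itertools import combinations

# Columns of X1 + X2 - S1 = 10, X1 + S2 = 8, X2 + S3 = 12,
# in variable order X1, X2, S1, S2, S3.
cols = {'X1': [1, 1, 0], 'X2': [1, 0, 1],
        'S1': [-1, 0, 0], 'S2': [0, 1, 0], 'S3': [0, 0, 1]}
b = [F(10), F(8), F(12)]

def solve3(B, rhs):
    """Gauss-Jordan with pivot search; returns B^-1 rhs, or None if singular."""
    M = [[F(v) for v in row] + [rhs[i]] for i, row in enumerate(B)]
    for c in range(3):
        piv = next((r for r in range(c, 3) if M[r][c] != 0), None)
        if piv is None:
            return None
        M[c], M[piv] = M[piv], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(3):
            if r != c:
                M[r] = [v - M[r][c] * w for v, w in zip(M[r], M[c])]
    return [M[r][3] for r in range(3)]

basic_sols = {}
for basis in combinations(cols, 3):            # choose 3 basic variables
    B = [[cols[v][i] for v in basis] for i in range(3)]
    x = solve3(B, b)
    if x is not None:
        basic_sols[basis] = dict(zip(basis, x))

# One of the three bases discussed in the text:
sol = {k: int(v) for k, v in basic_sols[('X1', 'X2', 'S1')].items()}
print(sol)   # {'X1': 8, 'X2': 12, 'S1': 10}
```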

Solve a System of Equations Using a Matrix Inverse Solver

For completeness, we include the well-known matrix inverse approach to solving a system of equations. To solve the following system of equations,

2X1 + X2 = 3
X1 -X2 = 3

which may be written as the matrix equation AX = b; then X = A-1.b, if the inverse exists:

A    =    2    1
          1   -1

Its inverse is:

A-1  =   1/3    1/3
         1/3   -2/3

Therefore:

  X1               2
       = A-1.b =
  X2              -1

Therefore X1 = 2 and X2 = -1 is the unique solution.
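The multiplication X = A-1.b is easy to replay exactly (my sketch, not the author's code), using the inverse computed in the text:

```python
# Compute X = A^-1 b for the example system, with exact rationals.
from fractions import Fraction as F

A_inv = [[F(1, 3), F(1, 3)],     # inverse computed in the text
         [F(1, 3), F(-2, 3)]]
b = [F(3), F(3)]

X = [sum(a * v for a, v in zip(row, b)) for row in A_inv]
print([int(x) for x in X])       # [2, -1]

# Sanity check: X solves the original system.
assert 2 * X[0] + X[1] == 3 and X[0] - X[1] == 3
```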

Finding the Matrix Inverse Using System of Equations Solver

To find the inverse of a square matrix of size n, solve n systems of equations, each with a unit vector as its right-hand side. The following proposition justifies this claim:

Proposition: Given that an n×n matrix A has an inverse, the jth column of the inverse, denoted by A-1.j, is the unique solution of AX = Ij, where Ij is the unit vector whose "1" element is located at the jth row, j = 1, 2, …, n.

Suppose we wish to find the inverse of the following matrix (if it exists)

A    =    2    1
          1   -1
To find the first column of A-1 solve:

2X1 + X2 = 1
X1 - X2 = 0

This gives X1 = 1/3 , X2 = 1/3. To find the second column of A-1 solve:

2X1 + X2 = 0
X1 - X2 = 1

This gives X1 = 1/3 , X2 = -2/3. Therefore, A-1 is

A-1  =   1/3    1/3
         1/3   -2/3
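The proposition above translates directly into code. The sketch below (my illustration) solves the n systems AX = Ij with an exact Gauss-Jordan routine and assembles the solutions as the columns of A-1:

```python
# Build A^-1 column by column by solving A x = e_j for each unit vector e_j.
from fractions import Fraction as F

def solve(A, rhs):
    """Solve A x = rhs by Gauss-Jordan elimination with exact rationals."""
    n = len(A)
    M = [[F(v) for v in row] + [F(rhs[i])] for i, row in enumerate(A)]
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)  # pivot search
        M[c], M[piv] = M[piv], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(n):
            if r != c:
                M[r] = [v - M[r][c] * w for v, w in zip(M[r], M[c])]
    return [M[r][n] for r in range(n)]

def inverse(A):
    n = len(A)
    columns = [solve(A, [1 if i == j else 0 for i in range(n)])
               for j in range(n)]
    return [[columns[j][i] for j in range(n)] for i in range(n)]  # transpose

A = [[2, 1], [1, -1]]
print(inverse(A))   # rows of A^-1 as Fractions
```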

Notice: A matrix possessing an inverse is called nonsingular, or invertible. A matrix is called singular if it does not have an inverse. For example, the following matrix is a singular one:

 1    6    4
 2    4   -1
-1    2    5

Therefore, in applying the above procedure for inverting a matrix, if the matrix is singular, then at least one of the systems of equations has no solution.

Matrix Inversion: A Computational Algebra Approach

A standard method for computing A-1 for an n x n matrix A is to use Gauss-Jordan row operations to reduce the n x 2n augmented matrix [A | I] to [I | C], whence A-1 = C.

Instead, we propose reducing the n by (n + 1) augmented matrix [A | R], where R is an n by 1 column vector of symbolic variables r1, r2, …, rn. The G-J symbolic reduction of [A | R] to [I | D] yields A-1 as the matrix of coefficients of the ri's in D.

This approach yields substantial savings in both space and computational time over the approaches used in existing computer algebra systems. A computational complexity analysis of the required number of operations shows that, for large n, the proposed method saves about 25% over the conventional method [Arsham 1993]. This reduction results from starting each row of R with just one (symbolic) term, instead of the n (numerical) terms in each row of I. The approach also has a pedagogical advantage: the generic linear system with coefficient matrix A is solved and A-1 is found simultaneously, as discussed before.

Suppose we wish to find the inverse of the following matrix (if it exists):

A    =    2    1
          1   -1

The G-J reduction of [A | R] proceeds as follows:

  2    1   |   r1
  1   -1   |   r2

  1   1/2  |   r1/2
  0  -3/2  |  -r1/2 + r2

  1    0   |   r1/3 + r2/3
  0    1   |   r1/3 - 2r2/3

Therefore the inverse matrix is:

A-1  =   1/3    1/3
         1/3   -2/3
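The symbolic reduction above can be mimicked in code by storing each right-hand side as a sparse collection of ri-coefficients, so that row i really does start with the single term ri (my sketch, not the implementation of [Arsham 1993]):

```python
# Reduce [A | R] with R stored sparsely; final r-coefficients form A^-1.
from fractions import Fraction as F

def inverse_symbolic(A):
    n = len(A)
    M = [[F(v) for v in row] for row in A]
    R = [{i: F(1)} for i in range(n)]        # row i starts as the term r_{i+1}
    for c in range(n):
        piv = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[piv] = M[piv], M[c]
        R[c], R[piv] = R[piv], R[c]
        d = M[c][c]
        M[c] = [v / d for v in M[c]]
        R[c] = {k: v / d for k, v in R[c].items()}
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c]
                M[r] = [v - f * w for v, w in zip(M[r], M[c])]
                for k, v in R[c].items():
                    R[r][k] = R[r].get(k, F(0)) - f * v
    # Entry (i, j) of A^-1 is the coefficient of r_{j+1} in row i.
    return [[R[i].get(j, F(0)) for j in range(n)] for i in range(n)]

print(inverse_symbolic([[2, 1], [1, -1]]))
```

In exact arithmetic the result matches the [A | I] method; the claimed savings concern symbolic term counts, not the final answer.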

Notice: A matrix possessing an inverse is called nonsingular, or invertible. A matrix is called singular if it does not have an inverse. For example, the following matrix is a singular one:

 1    6    4
 2    4   -1
-1    2    5

Therefore, in applying the above procedure for inverting a matrix, if the matrix is singular, then at least one row of the left-hand part will have all zero elements during the G-J operations.

Learning Objects: Online Interactive Solvers

Computer-assisted learning: I recommend the following online interactive solvers for understanding the concepts presented on this site through numerical experimentation:

System of Equations and Matrix Inversion.
Linear Optimization.

Rounding Errors and Stability Analysis

Rounding errors are due to the hardware limitations of any computer package. Therefore, one must be concerned with the stability of the solution with respect to the input parameters. The following are three numerical examples, for linear programming, matrix inversion, and solving a system of linear equations, respectively:

Linear Programming: Consider the following problem:

Max 6X1 + 4X2
subject to:
3.01X1 + 2X2 ≤ 24
X1 + 2X2 ≤ 16
all decision variables ≥ 0.

The optimal solution is (X1 = 3.9801, X2 = 6.0100). However, a slight change in the constraint coefficient matrix may yield a completely different optimal solution. For example, if we change 3.01 to 2.99, then the optimal solution is (X1 = 8.0268, X2 = 0). That is, decreasing one element of the technology matrix by about 0.66% changes the solution drastically! Therefore, the optimal solution is unstable with respect to this input parameter.
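This sensitivity claim is easy to reproduce. The sketch below (my illustration) solves the two-variable LP by enumerating its vertices in exact arithmetic, once with the coefficient 3.01 and once with 2.99:

```python
# Re-solve Max 6X1+4X2 s.t. a*X1+2X2<=24, X1+2X2<=16, X>=0 for two values of a.
from fractions import Fraction as F

def solve_lp(a):
    """Vertex enumeration for this specific two-constraint LP."""
    verts = [(F(0), F(0)), (24 / a, F(0)), (F(0), F(8))]
    x1 = 8 / (a - 1)                 # intersection of the two constraint lines
    verts.append((x1, (16 - x1) / 2))
    feas = [v for v in verts
            if a * v[0] + 2 * v[1] <= 24 and v[0] + 2 * v[1] <= 16]
    return max(feas, key=lambda v: 6 * v[0] + 4 * v[1])

for a in (F(301, 100), F(299, 100)):
    x1, x2 = solve_lp(a)
    print(float(a), float(x1), float(x2))   # optimal vertex moves drastically
```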

Matrix Inversion: Consider the following matrix:

A    =   1.9998   0.9999
         5.9994   3.0009

This matrix has a proper inverse; however, after rounding the elements of matrix A to

Ar   =   2    1
         6    3

it becomes a singular matrix, i.e., it has no inverse.
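A quick determinant check (my sketch, not part of the original text) confirms the claim exactly:

```python
# A has a nonzero determinant; its rounded version Ar is singular.
from fractions import Fraction as F

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A  = [[F('1.9998'), F('0.9999')],
      [F('5.9994'), F('3.0009')]]
Ar = [[round(v) for v in row] for row in A]   # [[2, 1], [6, 3]]

print(det2(A), det2(Ar))   # nonzero vs 0
```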

Solving a System of Equations: Consider the following system of equations:

2.04X1 + 2.49X2 = 0.45
2.49X1 + 3.04X2 = 0.55

which has a unique solution (X1 = -1, X2 = 1). However, rounding the coefficient matrix to:

2.0X1 + 2.5X2 = 0.45
2.5X1 + 3.0X2 = 0.55

now the solution becomes (X1 = 0.1, X2 = 0.1), a startling change.
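An exact-arithmetic replay (my sketch) of this example solves both versions by Cramer's rule and exposes the drastic difference:

```python
# Solve the original and the rounded 2x2 systems exactly.
from fractions import Fraction as F

def cramer(a, b, c, d, p, q):
    """Solve a*x + b*y = p, c*x + d*y = q by Cramer's rule."""
    det = a * d - b * c
    return ((p * d - b * q) / det, (a * q - p * c) / det)

orig = cramer(F('2.04'), F('2.49'), F('2.49'), F('3.04'), F('0.45'), F('0.55'))
rnd  = cramer(F('2.0'),  F('2.5'),  F('2.5'),  F('3.0'),  F('0.45'), F('0.55'))

print([float(v) for v in orig], [float(v) for v in rnd])   # [-1.0, 1.0] [0.1, 0.1]
```

The culprit is the tiny determinant (0.0015) of the original coefficient matrix, which makes the system ill-conditioned.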

Concluding Remarks

This site expands the discussion of the interrelationships between linear systems of equations, matrix inversion, and linear programming so that each may be modeled and solved by either of the other two methodologies. Although a few of these linkages are well known, this site completes the remaining missing links.

References and Further Readings:
Arsham H., Links Among a Linear System of Equations, Matrix Inversion, and Linear Program Solver Routines, Journal of Mathematical Education in Science and Technology, 29(5), 764-769, 1998.
Arsham H., Matrix Inversion: A Computational Algebra Approach, Journal of Mathematical Education in Science and Technology, 27(4), 599-605, 1996.
Arsham H., A Linear Symbolic-Based Approach to Matrix Inversion, Journal of Mathematics and Computers in Simulation, 35(6), 493-500, 1993.
Arsham H., The Gauss-Jordan matrix inversion is not optimal: A symbolic adaptation, The 5th SIAM Conference on Applied Linear Algebra, Ed. J. Lewis, 528-532, 1994.
Bellman R., Introduction to Matrix Analysis, SIAM: Society for Industrial & Applied Mathematics, 1997.


The Copyright Statement: The fair use, according to the 1996 Fair Use Guidelines for Educational Multimedia, of materials presented on this Web site is permitted for non-commercial and classroom purposes only.
This site may be mirrored intact (including these notices), on any server with public access, and linked to other Web pages. All files are available at http://home.ubalt.edu/ntsbarsh/Business-stat for mirroring.

Kindly e-mail me your comments, suggestions, and concerns. Thank you.

Professor Hossein Arsham   

This site was launched on 2/25/1994, and its intellectual materials have been thoroughly revised on a yearly basis. The current version is the 8th Edition. All external links are checked once a month.


Back to:

Dr Arsham's Home Page


© 1994-2015.