Linear Optimization

Deterministic Modeling:
Linear Optimization with Applications


A mathematical optimization model consists of an objective function and a set of constraints in the form of a system of equations or inequalities. Optimization models are used extensively in almost all areas of decision-making, such as engineering design and financial portfolio selection. This site presents a focused and structured process for optimization problem formulation, design of optimal strategy, and quality-control tools that include validation, verification, and post-solution activities.
Professor Hossein Arsham


Companion Sites:

• Ingredients of Optimization Problems and Their Classification
• Linear Programming (LP)
• The Dual Problem: Its Construction and Economic Implications
• Learning From the Optimal Strategy: Sensitivity, Specificity, Structural, and the "What-if" Analysis

#### Introduction & Summary

Decision-making problems may be classified into two categories: deterministic and probabilistic decision models. In deterministic models, good decisions bring about good outcomes: you get what you expect, so the outcome is deterministic (i.e., risk-free). Which category a problem falls into depends largely on how influential the uncontrollable factors are in determining the outcome of a decision, and how much information the decision-maker has for predicting these factors.

Those who manage and control systems of men and equipment face the continuing problem of improving (e.g., optimizing) system performance. The problem may be one of reducing the cost of operation while maintaining an acceptable level of service and the profit of current operations, providing a higher level of service without increasing cost, maintaining a profitable operation while meeting imposed government regulations, or "improving" one aspect of product quality without reducing quality in another. To identify methods for improvement of system operation, one must construct a synthetic representation or model of the physical system, which could be used to describe the effect of a variety of proposed solutions.

A model is a representation of the reality that captures "the essence" of reality. A photograph is a model of the reality portrayed in the picture. Blood pressure may be used as a model of the health of an individual. A pilot sales campaign may be used to model the response of individuals to a new product. In each case the model captures some aspect of the reality it attempts to represent.

Since a model only captures certain aspects of reality, it may be inappropriate for use in a particular application for it may capture the wrong elements of the reality. Temperature is a model of climatic conditions, but may be inappropriate if one is interested in barometric pressure. A photograph of a person is a model of that individual, but provides little information regarding his or her academic achievement. An equation that predicts annual sales of a particular product is a model of that product, but is of little value if we are interested in the cost of production per unit. Thus, the usefulness of the model is dependent upon the aspect of reality it represents.

If a model does capture the appropriate elements of reality, but captures them in a distorted or biased manner, then it still may not be useful. An equation predicting monthly sales volume may be exactly what the sales manager is looking for, but could lead to serious losses if it consistently yields high estimates of sales. A thermometer that reads too high or too low would be of little use in medical diagnosis. A useful model is one that captures the proper elements of reality with acceptable accuracy.

Mathematical optimization is the branch of computational science that seeks to answer the question "What is best?" for problems in which the quality of any answer can be expressed as a numerical value. Such problems arise in all areas of business, physical, chemical and biological sciences, engineering, architecture, economics, and management. The range of techniques available to solve them is nearly as wide.

A mathematical optimization model consists of an objective function and a set of constraints expressed in the form of a system of equations or inequalities. Optimization models are used extensively in almost all areas of decision-making such as engineering design, and financial portfolio selection. This site presents a focused and structured process for optimization analysis, design of optimal strategy, and controlled process that includes validation, verification, and post-solution activities.

If the mathematical model is a valid representation of the performance of the system, as shown by applying the appropriate analytical techniques, then the solution obtained from the model should also be the solution to the system problem. The effectiveness of the results of applying any optimization technique is largely a function of the degree to which the model represents the system studied.

To define those conditions that will lead to the solution of a systems problem, the analyst must first identify a criterion by which the performance of the system may be measured. This criterion is often referred to as the measure of the system performance or the measure of effectiveness. In business applications, the measure of effectiveness is often either cost or profit, while in government applications it is more often expressed in terms of a benefit-to-cost ratio.

The mathematical (i.e., analytical) model that describes the behavior of the measure of effectiveness is called the objective function. If the objective function is to describe the behavior of the measure of effectiveness, it must capture the relationship between that measure and those variables that cause it to change. System variables can be categorized as decision variables and parameters. A decision variable is a variable that can be directly controlled by the decision-maker. There are also some parameters whose values might be uncertain for the decision-maker. This calls for sensitivity analysis after finding the best strategy. In practice, mathematical equations rarely capture the precise relationship between all system variables and the measure of effectiveness. Instead, the OR/MS/DS analyst must strive to identify the variables that most significantly affect the measure of effectiveness, and then attempt to logically define the mathematical relationship between these variables and the measure of effectiveness. This mathematical relationship is the objective function that is used to evaluate the performance of the system being studied.

Formulation of a meaningful objective function is usually a tedious and frustrating task. Attempts to develop the objective function may fail. Failure could result because the analyst chose the wrong set of variables for inclusion in the model, or because he failed to identify the proper relationships between these variables and the measure of effectiveness. Returning to the drawing board, the analyst attempts to discover additional variables that may improve his model while discarding those which seem to have little or no bearing. However, whether or not these factors do in fact improve the model can only be determined after formulation and testing of new models that include the additional variables. The entire process of variable selection, rejection, and model formulation may require multiple iterations before a satisfactory objective function is developed. The analyst hopes to achieve some improvement in the model at each iteration, although this is not always the case. Ultimate success is more often preceded by a string of failures and small successes.

At each stage of the development process the analyst must judge the adequacy and validity of the model. Two criteria are frequently employed in this determination. The first involves experimentation with the model: subjecting the model to a variety of conditions and recording the associated values of the measure of effectiveness given by the model in each case. For example, suppose a model is developed to estimate the market value of single-family homes. The model expresses market value in dollars as a function of square feet of living area, number of bedrooms, number of bathrooms, and lot size. After developing the model, the analyst applies it to the valuation of several homes, each having different values for the characteristics mentioned above. Suppose the analyst finds that market value tends to decrease as the square feet of living area increases. Since this result is at variance with reality, the analyst would question the validity of the model. On the other hand, suppose the model is such that home value is an increasing function of each of the four characteristics cited, as we should generally expect. Although this result is encouraging, it does not imply that the model is a valid representation of reality, since the rate of increase with each variable may be inappropriately high or low. The second stage of model validation calls for a comparison of model results with those achieved in reality.

A mathematical model offers the analyst a tool that he or she can manipulate in the analysis of the system under study, without disturbing the system itself. For example, suppose that a mathematical model has been developed to predict annual sales as a function of unit selling price. If the production cost per unit is known, total annual profit for any given selling price can easily be calculated. However, to determine the selling price that yields the maximum total profit, various values for the selling price can be introduced into the model one at a time. The resulting sales are noted and the total profit per year is computed for each value of selling price examined. By trial and error, the analyst may determine the selling price that will maximize total annual profit. Unfortunately, this approach does not guarantee that one has obtained the optimal or best price, because the possibilities are too numerous to try them all. The trial-and-error approach is a simple example of sequential thinking. Optimization solution methodologies are based on simultaneous thinking, which results in the optimal solution. The step-by-step approach is called an optimization solution algorithm.
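The trial-and-error pricing search described above can be sketched in a few lines. Everything here is a hypothetical illustration: the linear demand model, the unit cost, and the price grid are assumptions, not figures from the text.

```python
# Hypothetical illustration of sequential (trial-and-error) vs. simultaneous
# thinking for the pricing example. All numbers are invented.

def annual_sales(price):
    """Assumed linear demand model: sales fall as price rises."""
    return max(0.0, 10_000 - 40 * price)

UNIT_COST = 50.0  # assumed production cost per unit

def annual_profit(price):
    return (price - UNIT_COST) * annual_sales(price)

# Sequential thinking: introduce candidate prices one at a time and keep
# the best one found. This only covers the prices actually tried.
candidates = [p / 2 for p in range(100, 501)]          # $50.00 .. $250.00
best_price = max(candidates, key=annual_profit)

# Simultaneous thinking: for this simple model, calculus gives the optimum
# directly: maximize (p - c)(a - b*p)  =>  p* = (a/b + c) / 2.
optimal_price = (10_000 / 40 + UNIT_COST) / 2

print(best_price, optimal_price)  # -> 150.0 150.0
```

Here the grid happens to contain the true optimum; with a coarser grid or a less regular profit function, trial and error could miss it, which is exactly the weakness the paragraph points out.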

Progressive Approach to Modeling: Modeling for decision making involves two distinct parties, one is the decision-maker and the other is the model-builder known as the analyst. The analyst is to assist the decision-maker in his/her decision-making process. Therefore, the analyst must be equipped with more than a set of analytical methods.

Specialists in model building are often tempted to study a problem, and then go off in isolation to develop an elaborate mathematical model for use by the manager (i.e., the decision-maker). Unfortunately the manager may not understand this model and may either use it blindly or reject it entirely. The specialist may feel that the manager is too ignorant and unsophisticated to appreciate the model, while the manager may feel that the specialist lives in a dream world of unrealistic assumptions and irrelevant mathematical language.

Such miscommunication can be avoided if the manager works with the specialist to develop first a simple model that provides a crude but understandable analysis. After the manager has built up confidence in this model, additional detail and sophistication can be added, perhaps progressively, only a bit at a time. This process requires an investment of time on the part of the manager and sincere interest on the part of the specialist in solving the manager's real problem, rather than in creating and trying to explain sophisticated models. This progressive model building is often referred to as the bootstrapping approach and is the most important factor in determining successful implementation of a decision model. Moreover, the bootstrapping approach simplifies the otherwise difficult task of model validation and verification.

The purpose of this site is not to make the visitor an expert on all aspects of mathematical optimization, but to provide a broad overview of the field. We introduce the terminology of optimization and the ways in which problems and their solutions are formulated and classified. Subsequent sections consider the most appropriate methods for dealing with linear optimization, with emphasis placed on the formulation, solution algorithm, and the managerial implication of the optimal solution, with sensitivity analysis.


#### Optimization-Modeling Process

Optimization problems are ubiquitous in the mathematical modeling of real world systems and cover a very broad range of applications. These applications arise in all branches of Economics, Finance, Chemistry, Materials Science, Astronomy, Physics, Structural and Molecular Biology, Engineering, Computer Science, and Medicine.

Optimization modeling requires an appropriate investment of time. The general procedure that can be used in the process cycle of modeling is to: (1) describe the problem, (2) prescribe a solution, and (3) control the problem by assessing/updating the optimal solution continuously, while changing the parameters and structure of the problem. Clearly, there are always feedback loops among these general steps.

Mathematical Formulation of the Problem: As soon as you detect a problem, think about and understand it in order to adequately describe the problem in writing. Develop a mathematical model or framework to represent reality in order to devise/use an optimization solution algorithm. The problem formulation must be validated before a solution is offered. A good mathematical formulation for optimization must be both inclusive (i.e., it includes what belongs to the problem) and exclusive (i.e., it shaves off what does not belong to the problem).

Find an Optimal Solution: This is an identification of a solution algorithm and its implementation stage. The only good plan is an implemented plan, which stays implemented!

Managerial Interpretations of the Optimal Solution: Once you recognize the algorithm and determine the appropriate module of software to apply, utilize software to obtain the optimal strategy. Next, the solution will be presented to the decision-maker in the same style and language used by the decision-maker. This means providing managerial interpretations of the strategic solution in layman's terms, not just handing the decision-maker a computer printout.

Post-Solution Analysis: These activities include updating the optimal solution in order to control the problem. In this ever-changing world, it is crucial to periodically update the optimal solution to any given optimization problem. A model that was valid may lose validity due to changing conditions, thus becoming an inaccurate representation of reality and adversely affecting the ability of the decision-maker to make good decisions. The optimization model you create should be able to cope with changes.

The Importance of Feedback and Control: It is necessary to place heavy emphasis on the importance of thinking about the feedback and control aspects of an optimization problem. It would be a mistake to discuss the context of the optimization-modeling process and ignore the fact that one can never expect to find a never-changing, immutable solution to a decision problem. The very nature of the optimal strategy's environment is changing, and therefore feedback and control are an important part of the optimization-modeling process.

The above process is depicted as the Systems Analysis, Design, and Control stages in the following flow chart, including the validation and verification activities.

#### Ingredients of Optimization Problems and Their Classification

The essence of all businesslike decisions, whether made for a firm, or an individual, is finding a course of action that leaves you with the largest profit.

Mankind has long sought, or professed to seek, better ways to carry out the daily tasks of life. Throughout human history, man has first searched for more effective sources of food and then later searched for materials, power, and mastery of the physical environment. However, relatively late in human history, general questions began to be formulated quantitatively, first in words and later in symbolic notation. One pervasive aspect of these general questions was the search for the "best" or "optimum". Most of the time managers seek merely to obtain some improvement in the level of performance, a "goal-seeking" problem. It should be emphasized that these words do not usually have precise meanings.

Efforts have been made to describe complex human and social situations. To have meaning, the problem should be written down as a mathematical expression containing one or more variables whose values are to be determined. The question then asked is: what values should these variables have so that the mathematical expression attains the greatest possible numerical value (maximization) or the least possible numerical value (minimization)? This process of maximizing or minimizing is referred to as optimization.

Optimization, also called mathematical programming, helps find the answer that yields the best result--the one that attains the highest profit, output, or happiness, or the one that achieves the lowest cost, waste, or discomfort. Often these problems involve making the most efficient use of resources--including money, time, machinery, staff, inventory, and more. Optimization problems are often classified as linear or nonlinear, depending on whether the relationships in the problem are linear with respect to the variables. There are a variety of software packages for solving optimization problems. For example, LINDO and WinQSB solve linear program models, while LINGO and What'sBest! solve both nonlinear and linear problems.

Mathematical programming solves the problem of determining the optimal allocation of limited resources required to meet a given objective. The objective must represent the goal of the decision-maker. For example, the resources may correspond to people, materials, money, or land. Out of all permissible allocations of the resources, it is desired to find the one or ones that maximize or minimize some numerical quantity such as profit or cost. Optimization models are also called Prescriptive or Normative models since they seek to find the best possible strategy for the decision-maker.

There are many optimization algorithms available. However, some methods are only appropriate for certain types of problems. It is important to be able to recognize the characteristics of a problem and identify an appropriate solution technique. Within each class of problems, there are different minimization methods, which vary in computational requirements, convergence properties, and so on. Optimization problems are classified according to the mathematical characteristics of the objective function, the constraints, and the controllable decision variables.

Optimization problems are made up of four basic ingredients:

1. An objective function that we want to minimize or maximize; that is, the quantity you want to maximize or minimize is called the objective function. Most optimization problems have a single objective function; if they do not, they can often be reformulated so that they do. The two interesting exceptions to this rule are:

The goal seeking problem: In most business applications the manager wishes to achieve a specific goal, while satisfying the constraints of the model. The user does not particularly want to optimize anything so there is no reason to define an objective function. This type of problem is usually called a feasibility problem.

Multiple objective functions: Often, the user would actually like to optimize many different objectives at once. Usually, the different objectives are not compatible. The variables that optimize one objective may be far from optimal for the others. In practice, problems with multiple objectives are reformulated as single-objective problems by either forming a weighted combination of the different objectives or else by placing some objectives as "desirable" constraints.

2. The controllable inputs are the set of decision variables which affect the value of the objective function. In the manufacturing problem, the variables might include the allocation of different available resources, or the labor spent on each activity. Decision variables are essential. If there are no variables, we cannot define the objective function and the problem constraints.

3. The uncontrollable inputs are called parameters. The input values may be fixed numbers associated with the particular problem. We call these values parameters of the model. Often you will have several "cases" or variations of the same problem to solve, and the parameter values will change in each problem variation.

4. Constraints are relations between the decision variables and the parameters. A set of constraints allows some of the decision variables to take on certain values and excludes others. For the manufacturing problem, it does not make sense to spend a negative amount of time on any activity, so we constrain all the "time" variables to be non-negative. Constraints are not always essential. In fact, the field of unconstrained optimization is a large and important one for which a lot of algorithms and software are available. In practice, however, answers that make good sense about the underlying physical or economic problem often cannot be obtained without putting constraints on the decision variables.

Feasible and Optimal Solutions: A solution value for the decision variables, where all of the constraints are satisfied, is called a feasible solution. Most solution algorithms proceed by first finding a feasible solution and then seeking to improve upon it, changing the decision variables to move from one feasible solution to another. This process is repeated until the objective function has reached its maximum or minimum. The result is called an optimal solution.
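As a small illustration of these ingredients, consider a hypothetical two-variable linear program (all coefficients are invented for the example). For a linear program, an optimal solution, when one exists, lies at a corner point of the feasible region, so a toy solver can simply enumerate the intersections of pairs of constraint boundaries, keep the feasible ones, and pick the best:

```python
# Toy linear program: maximize profit 3x + 5y (objective function) over
# decision variables x, y; the coefficients below are the parameters.
from itertools import combinations

# Constraints written as a*x + b*y <= c (the last two enforce x, y >= 0).
constraints = [
    (1, 0, 4),    # x        <= 4
    (0, 2, 12),   #      2y  <= 12
    (3, 2, 18),   # 3x + 2y  <= 18
    (-1, 0, 0),   # -x       <= 0
    (0, -1, 0),   #      -y  <= 0
]

def feasible(x, y, tol=1e-9):
    """A point is feasible when every constraint is satisfied."""
    return all(a * x + b * y <= c + tol for a, b, c in constraints)

def objective(x, y):
    return 3 * x + 5 * y

# Enumerate corner points: intersections of two constraint boundaries.
best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue  # parallel boundaries: no unique intersection
    x = (c1 * b2 - c2 * b1) / det   # Cramer's rule for the 2x2 system
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y) and (best is None or objective(x, y) > objective(*best)):
        best = (x, y)

print(best, objective(*best))  # -> (2.0, 6.0) 36.0
```

Real simplex implementations move between adjacent corner points instead of enumerating them all, but the underlying fact, that the search can be confined to finitely many corner points, is the same.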

The basic goal of the optimization process is to find values of the variables that minimize or maximize the objective function while satisfying the constraints. This result is called an optimal solution.

There are well over 4000 solution algorithms for different kinds of optimization problems. The most widely used solution algorithms are those developed for the following mathematical programs: convex programs, separable programs, quadratic programs, and geometric programs.

#### Linear Program

Linear programming deals with a class of optimization problems, where both the objective function to be optimized and all the constraints, are linear in terms of the decision variables.

A short history of Linear Programming:

1. In 1762, Lagrange solved tractable optimization problems with simple equality constraints.
2. In 1820, Gauss solved linear systems of equations by what is now called Gaussian elimination. In 1866, Wilhelm Jordan refined the method to find least squared errors as a measure of goodness-of-fit; the procedure is now referred to as the Gauss-Jordan Method.
3. In 1945, the digital computer emerged.
4. In 1947, Dantzig invented the Simplex Method.
5. In 1968, Fiacco and McCormick introduced the Interior Point Method.
6. In 1984, Karmarkar applied the Interior Point Method to solve Linear Programs, adding his innovative analysis.

Linear programming has proven to be an extremely powerful tool, both in modeling real-world problems and as a widely applicable mathematical theory. However, many interesting optimization problems are nonlinear. The study of such problems involves a diverse blend of linear algebra, multivariate calculus, numerical analysis, and computing techniques. Important areas include the design of computational algorithms (including interior point techniques for linear programming), the geometry and analysis of convex sets and functions, and the study of specially structured problems such as quadratic programming. Nonlinear optimization provides fundamental insights into mathematical analysis and is widely used in a variety of fields such as engineering design, regression analysis, inventory control, geophysical exploration, and economics.

Quadratic Program (QP) comprises an area of optimization whose broad range of applicability is second only to linear programs. A wide variety of applications fall naturally into the form of a QP. The kinetic energy of a projectile is a quadratic function of its velocity. Least-squares regression with side constraints has been modeled as a QP. Certain problems in production planning, location analysis, econometrics, activation analysis in chemical mixture problems, and financial portfolio management and selection are often treated as QPs. Numerous solution algorithms are available for the restricted case where the objective function is convex.
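A minimal worked QP, with invented numbers: minimize a convex quadratic (the squared distance to a target point) subject to a single linear equality constraint. For this special case the optimum has a closed form, the orthogonal projection of the target onto the constraint line:

```python
# Tiny quadratic program: minimize (x - tx)^2 + (y - ty)^2
# subject to x + y = budget. All numbers are hypothetical.
tx, ty = 3.0, 4.0        # unconstrained minimizer of the quadratic
budget = 5.0             # linear equality constraint: x + y = budget

# The constrained optimum is the orthogonal projection of (tx, ty)
# onto the line x + y = budget: shift both coordinates equally.
shift = (budget - (tx + ty)) / 2.0
x, y = tx + shift, ty + shift

print(x, y)  # -> 2.0 3.0
```

General QPs with many variables and inequality constraints need iterative solvers (active-set or interior point methods), but the convex case always has this flavor: a quadratic bowl restricted to a feasible set, with the optimum at the feasible point nearest the bowl's bottom.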

#### Constraint Satisfaction

Many industrial decision problems involving continuous constraints can be modeled as continuous constraint satisfaction and optimization problems. Constraint Satisfaction problems are large in size and in most cases involve transcendental functions. They are widely used in chemical processes and cost restrictions modeling and optimization.

#### Convex Program

Convex Program (CP) covers a broad class of optimization problems. When the objective function is convex and the feasible region is a convex set, these two assumptions are enough to ensure that a local minimum is a global minimum.

#### Data Envelopment Analysis

Data Envelopment Analysis (DEA) is a performance metric that is grounded in the frontier analysis methods of the economics and finance literature. Frontier efficiency (output/input) analysis methods identify the best-practice performance frontier, which refers to the maximal outputs that can be obtained from a given set of inputs, with respect to a sample of decision-making units using a comparable process to transform inputs into outputs. The strength of DEA relies partly on the fact that it is a non-parametric approach, which does not require specification of any functional form for the relationships between the inputs and the outputs. DEA reduces multiple performance measures to a single one in order to use linear programming techniques; the weighting of the performance measures reflects the decision-maker's utility.

#### Dynamic Programming

Dynamic programming (DP) is essentially bottom-up recursion where you store the answers in a table starting from the base case(s) and building up to larger and larger parameters using the recursive rule(s). You would use this technique instead of recursion when you need to calculate the solutions to all the sub-problems and the recursive solution would solve some of the sub-problems repeatedly. While generally DP is capable of solving many diverse problems, it may require huge computer storage in most cases.
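A standard illustration of this bottom-up tabulation is the 0/1 knapsack problem, an optimization problem in its own right; the item values, weights, and capacity below are hypothetical:

```python
# Bottom-up dynamic programming for the 0/1 knapsack problem: fill a
# table from the base case (zero capacity, zero value) upward, so each
# subproblem is solved exactly once instead of repeatedly by recursion.

def knapsack(values, weights, capacity):
    """Maximum total value of items fitting within capacity.

    table[c] holds the best value achievable with capacity c using the
    items processed so far.
    """
    table = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities backwards so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            table[c] = max(table[c], table[c - w] + v)  # recursive rule
    return table[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # -> 220
```

The table here is one-dimensional with `capacity + 1` entries, which is exactly the storage cost the paragraph warns about: for problems with several state variables the table grows multiplicatively and can become huge.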

#### Separable Program

Separable Program (SP) includes a special case of convex programs, where the objective function and the constraints are separable functions, i.e., each term involves just a single variable.

#### Geometric Program

Geometric Program (GP) belongs to Nonconvex programming, and has many applications in particular in engineering design problems.

#### Fractional Program

In this class of problems, the objective function is in the form of a fraction (i.e., a ratio of two functions). Fractional Program (FP) arises, for example, when maximizing the ratio of profit to capital expended, or a performance measure such as a wastage ratio.

#### Heuristic Optimization

A heuristic is something "providing aid in the direction of the solution of a problem but otherwise unjustified or incapable of justification." So heuristic arguments are used to show what we might later attempt to prove, or what we might expect to find in a computer run. They are, at best, educated guesses.

Several heuristic tools have evolved in the last decade that facilitate solving optimization problems that were previously difficult or impossible to solve. These tools include evolutionary computation, simulated annealing, tabu search, particle swarm, etc.
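As a sketch of one of these tools, the following is a minimal simulated annealing loop minimizing a one-dimensional multimodal function; the test function, starting point, cooling schedule, and step size are all illustrative assumptions:

```python
# Minimal simulated annealing sketch (one of the heuristic tools named
# above). Everything here is an illustrative assumption, not a tuned solver.
import math
import random

def f(x):
    return x * x + 10 * math.sin(3 * x)   # multimodal: many local minima

random.seed(0)                            # reproducible run
x = best_x = 5.0                          # arbitrary starting point
temperature = 10.0
while temperature > 1e-3:
    candidate = x + random.uniform(-0.5, 0.5)
    delta = f(candidate) - f(x)
    # Always accept improvements; sometimes accept worse moves while the
    # temperature is high, which lets the search escape local minima.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    if f(x) < f(best_x):
        best_x = x                        # remember the best point seen
    temperature *= 0.99                   # geometric cooling schedule

print(round(best_x, 3), round(f(best_x), 3))
```

True to the definition above, nothing here is justified beyond plausibility: the result is an educated guess at the minimum, with no optimality certificate.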

Common approaches include, but are not limited to:

1. comparing solution quality to optimum on benchmark problems with known optima, average difference from optimum, frequency with which the heuristic finds the optimum.
2. comparing solution quality to a best known bound for benchmark problems whose optimal solutions cannot be determined.
3. comparing your heuristic to published heuristics for the same problem type, difference in solution quality for a given run time and, if relevant, memory limit.
4. profiling average solution quality as a function of run time, for instance, plotting mean and either min and max or 5th and 95th percentiles of solution value as a function of time -- this assumes that one has multiple benchmark problem instances that are comparable.

#### Global Optimization

The aim of Global Optimization (GO) is to find the best solution of decision models in the presence of multiple local solutions. While constrained optimization deals with finding the optimum of the objective function subject to constraints on its decision variables, unconstrained optimization seeks the global maximum or minimum of a function over its entire domain, without any restrictions on the decision variables.

#### Nonconvex Program

A Nonconvex Program (NC) encompasses all nonlinear programming problems that do not satisfy the convexity assumptions. For such problems, even if you are successful at finding a local minimum, there is no assurance that it will also be a global minimum. Consequently, there is no algorithm that will guarantee finding an optimal solution for all such problems.

#### Nonsmooth Program

Nonsmooth Programs (NSP) contain functions for which the first derivative does not exist. NSPs arise in several important applications of science and engineering, including contact phenomena in statics and dynamics and delamination effects in composites. These applications require the consideration of nonsmoothness and nonconvexity.

#### Metaheuristics

Most metaheuristics have been created for solving discrete combinatorial optimization problems. Practical applications in engineering, however, usually require techniques, which handle continuous variables, or miscellaneous continuous and discrete variables. As a consequence, a large research effort has focused on fitting several well-known metaheuristics, like Simulated Annealing (SA), Tabu Search (TS), Genetic Algorithms (GA), Ant Colony Optimization (ACO), to the continuous cases. The general metaheuristics aim at transforming discrete domains of application into continuous ones, by means of:

• Methodological developments aimed at adapting some metaheuristics, especially SA, TS, GA, ACO, GRASP, variable neighborhood search, guided local search, scatter search, to continuous or discrete/continuous variable problems.
• Theoretical and experimental studies on metaheuristics adapted to continuous optimization, e.g., convergence analysis, performance evaluation methodology, test-case generators, constraint handling, etc.
• Software implementations and algorithms for metaheuristics adapted to continuous optimization.
• Real applications of discrete metaheuristics adapted to continuous optimization.
• Performance comparisons of discrete metaheuristics (adapted to continuous optimization) with those of competitive approaches, e.g., Particle Swarm Optimization (PSO), Estimation of Distribution Algorithms (EDA), Evolutionary Strategies (ES), created specifically for continuous optimization.
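As a concrete illustration of adapting a classic discrete metaheuristic to continuous variables, the sketch below applies simulated annealing to a one-dimensional multimodal function. All names and parameter values here are illustrative choices, not prescriptions:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000, seed=0):
    """Minimize a one-dimensional continuous function f.

    A candidate move is always accepted if it improves f, and accepted
    with probability exp(-delta/T) otherwise; T decays geometrically.
    """
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

# A multimodal test function; its global minimum is 0 at x = 0.
f = lambda x: x * x + 3 * abs(math.sin(3 * x))

# Restart from several points and keep the best run.
x_star, f_star = min(
    (simulated_annealing(f, x0, seed=s) for s, x0 in enumerate([-2.0, 0.5, 1.5, 4.0])),
    key=lambda r: r[1],
)
```

Restarting from multiple initial points is a common safeguard against a single run being trapped in a poor basin.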

#### Multilevel Optimization

In many decision processes there is a hierarchy of decision makers, and decisions are taken at different levels in this hierarchy. Multilevel Optimization focuses on the whole hierarchical structure. The field of multilevel optimization has become a well-known and important research field. Hierarchical structures can be found in scientific disciplines such as environment, ecology, biology, chemical engineering, mechanics, classification theory, databases, network design, transportation, supply chains, game theory, and economics. Moreover, new applications are constantly being introduced.

#### Multiobjective Program

A Multiobjective Program (MP), also known as a Goal Program, is one in which the single objective of an ordinary optimization problem is replaced by several goals. In solving an MP, one may represent some of the goals as constraints to be satisfied, while the remaining objectives are weighted to form a composite single objective function.

Multiple objective optimization differs from the single objective case in several ways:

1. The usual meaning of the optimum makes no sense in the multiple objective case, because a solution optimizing all objectives simultaneously is, in general, unattainable; instead, the search is for a feasible solution yielding the best compromise among objectives, drawn from the set of so-called efficient solutions;
2. The identification of a best compromise solution requires taking into account the preferences expressed by the decision-maker;
3. The multiple objectives encountered in real-life problems are often mathematical functions of contrasting forms.
4. A key element of a goal programming model is the achievement function; that is, the function that measures the degree of minimization of the unwanted deviation variables of the goals considered in the model.

A Business Application: In credit card portfolio management, predicting the cardholder's spending behavior is a key to reducing the risk of bankruptcy. Given a set of attributes for major aspects of credit cardholders and predefined classes for spending behaviors, one might construct a classification model by using multiple-criteria linear programming to discover behavior patterns of credit cardholders.

#### Non-Binary Constraints Program

Over the years, the constraint programming community has paid considerable attention to modeling and solving problems by using binary constraints. Only recently have non-binary constraints captured attention, owing to a growing number of real-life applications. A non-binary constraint is a constraint defined on k variables, where k is normally greater than two. A non-binary constraint can be seen as a more global constraint. Modeling a problem with non-binary constraints has two main advantages: it facilitates the expression of the problem, and it enables more powerful constraint propagation as more global information becomes available.

Success in timetabling, scheduling, and routing, has proven that the use of non-binary constraints is a promising direction of research. In fact, a growing number of OR/MS/DS workers feel that this topic is crucial to making constraint technology a realistic way to model and solve real-life problems.

#### Bilevel Optimization

Most mathematical programming models deal with decision-making with a single objective function. Bilevel programming, on the other hand, was developed for applications in decentralized planning systems in which the first level is termed the leader and the second level pertains to the objective of the follower. In a bilevel programming problem, each decision maker tries to optimize its own objective function without considering the objective of the other party, but the decision of each party affects the objective value of the other party as well as the decision space.

Bilevel programming problems are hierarchical optimization problems in which the constraints of one problem are defined in part by a second, parametric optimization problem. If the second problem has a unique optimal solution for all parameter values, the bilevel problem is equivalent to a usual optimization problem with an implicitly defined objective function. When the second problem has non-unique optimal solutions, however, the optimistic (or weak) and the pessimistic (or strong) approaches are applied.

#### Combinatorial Optimization

Combinatorial generally means that the state space is discrete (e.g., symbols, not necessarily numbers); this space may be finite or denumerable. Problems whose state space is totally ordered can often be solved by mapping them to the integers and applying "numerical" methods. If the state space is unordered or only partially ordered, these methods fail, and heuristic methods, such as simulated annealing, become necessary.

Combinatorial optimization is the study of packing, covering, and partitioning, which are applications of integer programs. They are the principal mathematical topics at the interface between combinatorics and optimization. These problems deal with the classification of integer programming problems according to the complexity of known algorithms, and with the design of good algorithms for solving special subclasses. In particular, problems of network flows, matching, and their matroid generalizations are studied. This subject is one of the unifying elements of combinatorics, optimization, operations research, and computer science.

#### Evolutionary Techniques

Nature is a robust optimizer. By analyzing nature's optimization mechanisms we may find acceptable solution techniques for intractable problems. Two concepts that show the most promise are simulated annealing and genetic techniques. Scheduling and timetabling are among the most successful applications of evolutionary techniques.

Genetic Algorithms (GAs) have become a highly effective tool for solving hard optimization problems. However, their theoretical foundation is still rather fragmented.

#### Particle Swarm Optimization

Particle Swarm Optimization (PSO) is a stochastic, population-based optimization algorithm. Instead of competition and selection, as in evolutionary computation, PSO makes use of cooperation, according to a paradigm sometimes called "swarm intelligence". Such systems are typically made up of a population of simple interacting agents without any centralized control, inspired by examples found in nature, such as ant colonies, bird flocking, animal herding, bacteria molding, fish schooling, etc.

There are many variants of PSO including constrained, multiobjective, and discrete or combinatorial versions, and applications have been developed using PSO in many fields.
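The core of a basic global-best PSO can be sketched in a few lines. The inertia weight w and acceleration coefficients c1, c2 below are common textbook values, and the sphere function is used purely as a demonstration; all names here are illustrative:

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize f over a box using a basic global-best particle swarm."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                 # personal best positions
    pbest = [f(x) for x in X]             # personal best values
    g = min(range(n_particles), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]          # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity blends inertia, attraction to the particle's own
                # best, and attraction to the swarm's best (cooperation).
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest[i]:
                P[i], pbest[i] = X[i][:], fx
                if fx < gbest:
                    G, gbest = X[i][:], fx
    return G, gbest

# Sphere function: the global minimum is 0 at the origin.
sol, val = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0))
```

Note how no single particle is in charge: the only shared information is the best position found so far, which is the cooperative mechanism the text describes.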

#### Swarm Intelligence

Biologists have studied the behavior of social insects for a long time. After millions of years of evolution, these species have developed remarkable solutions to a wide range of problems. Intelligent solutions to problems emerge naturally from the self-organization and indirect communication of the individuals. Indirect interaction occurs when one individual modifies the environment and another responds to the changed environment at a later time.

Swarm Intelligence is an innovative distributed intelligence paradigm for solving optimization problems that originally took its inspiration from biological swarming, flocking, and herding phenomena in vertebrates. It has also been applied to data mining, the analytic process of exploring large amounts of data in search of consistent patterns and/or systematic relationships between variables, and then validating the findings by applying the detected patterns to new subsets of the data.

#### Online Optimization

Whether costs are to be reduced, profits to be maximized, or scarce resources to be used wisely, optimization methods are available to guide decision-making. In online optimization, the main issue is incomplete data, and the scientific challenge is: how well can an online algorithm perform? Can one guarantee solution quality even without knowing all data in advance? In real-time optimization there is an additional requirement: decisions have to be computed very fast relative to the time frame under consideration.

Abraham A., C. Grosan and V. Ramos, Swarm Intelligence, Springer Verlag, 2006. It deals with the applications of swarm intelligence in data mining, using different intelligent approaches.
Charnes A., Cooper W., Lewin A., and L. Seiford, Data Envelopment Analysis: Theory, Methodology and Applications, Kluwer Academic Publications, 1994.
Dempe S., Foundations of Bilevel Programming, Kluwer, 2002.
Diwekar U., Introduction to Applied Optimization, Kluwer Academic Publishers, 2003. Covers almost all the above techniques.
Liu B., and A. Esogbue, Decision Criteria and Optimal Inventory Processes, Kluwer, 1999.
Luenberger D., Linear and Nonlinear Programming, Kluwer Academic Publishers, 2003.
Miller R., Optimization: Foundations and Applications, Wiley, 1999.
Migdalas A., Pardalos P., and P. Varbrand, Multilevel Optimization: Algorithms and Applications, Kluwer, 1998.
Reeves C., and J. Rowe, Genetic Algorithms: Principles and Perspectives, Kluwer, 2002.
Rodin R., Optimization in Operations Research, Prentice Hall, New Jersey, 2000.

For more books and journal articles on optimization visit the Web site Decision Making Resources

#### Linear Programming

Linear programming is often a favorite topic for both professors and students. The ability to introduce LP using a graphical approach, the relative ease of the solution method, the widespread availability of LP software packages, and the wide range of applications make LP accessible even to students with relatively weak mathematical backgrounds. Additionally, LP provides an excellent opportunity to introduce the idea of "what-if" analysis, due to the powerful tools for post-optimality analysis developed for the LP model.

Linear Programming (LP) is a mathematical procedure for determining the optimal allocation of scarce resources. LP is a procedure that has found practical application in almost all facets of business, from advertising to production planning. Transportation, distribution, and aggregate production planning problems are the most typical objects of LP analysis. In the petroleum industry, for example, a data processing manager at a large oil company recently estimated that 5 to 10 percent of the firm's computer time was devoted to the processing of LP and LP-like models.

Linear programming deals with a class of programming problems in which the objective function to be optimized is linear and all relations among the variables corresponding to resources are linear. This problem was first formulated and solved in the late 1940s. Rarely has a new mathematical technique found such a wide range of practical business, commerce, and industrial applications while simultaneously receiving so thorough a theoretical development in such a short period of time. Today, this theory is being successfully applied to problems of capital budgeting, design of diets, conservation of resources, games of strategy, economic growth prediction, and transportation systems. In recent times, linear programming theory has also helped resolve and unify many outstanding applications.

It is important for the reader to appreciate, at the outset, that the "programming" in Linear Programming is of a different flavor than the "programming" in Computer Programming. In the former case, it means to plan and organize, as in "Get with the program!"; in the latter case, it means to write code for performing calculations. Training in one kind of programming has very little direct relevance to the other. In fact, the term "linear programming" was coined before the word "programming" became closely associated with computer software. This confusion is sometimes avoided by using the term linear optimization as a synonym for linear programming.

Any LP problem consists of an objective function and a set of constraints. In most cases, constraints come from the environment in which you work to achieve your objective. When you want to achieve the desirable objective, you will realize that the environment is setting some constraints (i.e., the difficulties, restrictions) in fulfilling your desire or objective. This is why religions such as Buddhism, among others, prescribe living an abstemious life. No desire, no pain. Can you take this advice with respect to your business objective?

What is a function? A function is a rule that transforms inputs into outputs; for example, a coffee grinding machine is a function that transforms coffee beans into powder. The (objective) function maps the input domain (called the feasible region) into the output range, whose two end-values are called the maximum and the minimum values.

When you formulate a decision-making problem as a linear program, you must check the following conditions:

1. The objective function must be linear. That is, check that all variables have power of 1 and that they are only added or subtracted (not divided or multiplied together)

2. The objective must be either maximization or minimization of a linear function. The objective must represent the goal of the decision-maker

3. The constraints must also be linear. Moreover, each constraint must be of one of the following forms (≤, ≥, or =; that is, LP-constraints are always closed).

For example, the following problem is not an LP: Max X, subject to X < 1. This very simple problem has no solution.

As always, one must be careful in categorizing an optimization problem as an LP problem. Here is a question for you. Is the following problem an LP problem?

Max X2
subject to:
X1 + X2 ≤ 0
X1² - 4 ≤ 0

Although the second constraint looks "as if" it is a nonlinear constraint, it can equivalently be written as:
X1 ≥ -2, and X1 ≤ 2.
Therefore, the above problem is indeed an LP problem.
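As a quick numerical sanity check of this equivalence (illustrative code, not part of the original text), a coarse grid search over both formulations returns the same optimal value of X2:

```python
# Compare the nonlinear-looking formulation with its linear rewriting.
def feasible_nonlinear(x1, x2):
    return x1 + x2 <= 0 and x1 ** 2 - 4 <= 0

def feasible_linear(x1, x2):
    return x1 + x2 <= 0 and -2 <= x1 <= 2

grid = [i / 10 for i in range(-40, 41)]   # points in [-4, 4], step 0.1
best_nonlinear = max(x2 for x1 in grid for x2 in grid if feasible_nonlinear(x1, x2))
best_linear = max(x2 for x1 in grid for x2 in grid if feasible_linear(x1, x2))
```

Both searches report the same optimum, attained at X1 = -2, X2 = 2.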

For most LP problems one can think of two important classes of objects: the first is limited resources, such as land, plant capacity, or sales force size; the second is activities, such as "produce low carbon steel", "produce stainless steel", and "produce high carbon steel". Each activity consumes, or possibly contributes, additional amounts of the resources. There must be an objective function, i.e., a way to tell a bad decision from a good one, and a good one from an even better one. The problem is to determine the best combination of activity levels that does not use more resources than are actually available. Many managers are faced with this task every day. Fortunately, when a well-formulated model is input, linear programming software helps to determine the best combination.

The Simplex method is a widely used solution algorithm for solving linear programs. An algorithm is a series of steps that will accomplish a certain task.

#### LP Problem Formulation Process and Its Applications

To formulate an LP problem, I recommend using the following guidelines after reading the problem statement carefully a few times.

Any linear program consists of four parts: a set of decision variables, the parameters, the objective function, and a set of constraints. In formulating a given decision problem in mathematical form, you should practice understanding the problem (i.e., formulating a mental model) by carefully reading and re-reading the problem statement. While trying to understand the problem, ask yourself the following general questions:

1. What are the decision variables? That is, what are controllable inputs? Define the decision variables precisely, using descriptive names. Remember that the controllable inputs are also known as controllable activities, decision variables, and decision activities.

2. What are the parameters? That is, what are the uncontrollable inputs? These are usually the given constant numerical values. Define the parameters precisely, using descriptive names.

3. What is the objective? What is the objective function? Also, what does the owner of the problem want? How is the objective related to the decision variables? Is it a maximization or a minimization problem? The objective represents the goal of the decision-maker.

4. What are the constraints? That is, what requirements must be met? Should I use inequality or equality type of constraint? What are the connections among variables? Write them out in words before putting them in mathematical form.

Note that the feasible region has little or nothing to do with the objective function (min or max). These two parts of any LP formulation usually come from two distinct and different sources: the objective function is set up to fulfill the decision-maker's desire (objective), whereas the constraints that shape the feasible region usually come from the decision-maker's environment, which puts restrictions/conditions on achieving the objective.

The following is a very simple illustrative problem. However, the way we approach the problem is the same for a wide variety of decision-making problems, and the size and complexity may differ. The first example is a product-mix problem.

#### The Carpenter's Problem: Allocating Scarce Resources Among Competitive Means

During a couple of brainstorming sessions with a carpenter (our client), he told us that he alone makes tables and chairs, sells all the tables and chairs he makes at a marketplace, but does not have a stable income, and wishes to do his best.

The objective is to find out how many tables and chairs he should make to maximize net income. We begin by choosing a time frame, i.e., a planning time-horizon, so that we may revise our solution weekly if needed. To learn more about his problem, we must go to his shop, observe what is going on, and measure what we need in order to formulate (i.e., to give a form to, to make a model of) his problem. We must confirm that his objective is to maximize net income, and we must communicate with the client.

The carpenter's problem deals with finding out how many tables and chairs to make per week; but first an objective function must be established:

The total cost is the sum of the fixed cost (F) and the variable cost per unit multiplied by the number of units produced. Therefore, the decision problem is to find X1 and X2 such that:

Maximize 9X1 + 6X2 – [(1.5X1 + X2) + (2.5X1 + 2X2) + F1 + F2],

where X1 and X2 stand for the numbers of tables and chairs; the cost terms in the brackets are the raw material and labor costs, respectively. F1 and F2 are the fixed costs for the two products. Without loss of generality, and with no impact on the optimal solution, let us set F1 = 0 and F2 = 0. The objective function then reduces to the following net profit function:

Maximize 5X1 + 3X2

That is, the net income (say, in dollars, or tens of dollars) from selling X1 tables and X2 chairs.

The constraining factors, which usually come from outside, are the limitations on labor (this limitation comes from his family) and on raw material (this limitation comes from scheduled delivery). Production times required for a table and a chair are measured at different times of day and estimated to be 2 hours and 1 hour, respectively. Total labor hours per week are only 40. Raw material required for a table and a chair is 1 unit and 2 units, respectively. Total supply of raw material is 50 units per week. Therefore, the LP formulation is:

Maximize 5 X1 + 3 X2

Subject to:
2 X1 + X2 ≤ 40 labor constraint
X1 + 2 X2 ≤ 50 material constraint
and both X1, X2 are non-negative.

This is a mathematical model for the carpenter's problem. The decision variables, i.e., controllable inputs, are X1 and X2. The output of this model is the total net income 5 X1 + 3 X2. All functions used in this model are linear (each decision variable has power of 1). The coefficients of the constraints are called the Technological Factors (matrix). The review period is one week, an appropriate period within which the uncontrollable inputs (all parameters such as 5, 50, 2, ...) are unlikely to change (fluctuate). Even for such a short planning time-horizon, we must perform the what-if analysis to react to any changes in these inputs in order to control the problem, i.e., to update the prescribed solution.

Notice that since the carpenter is not going out of business at the end of the planning horizon, we added the conditions that both X1 and X2 must be non-negative, instead of requiring that X1 and X2 be positive integers. The non-negativity conditions are also known as "implied constraints." Again, a linear program is fine for this problem as long as the carpenter continues to manufacture these products: partial items would simply be counted as work in progress and would eventually become finished goods, say, in the next week.

We might try to solve for X1 and X2 by listing possible solutions and selecting the pair (X1, X2) that maximizes 5X1 + 3X2 (the net income). However, it is too time-consuming to list all possible alternatives, and if the alternatives are not exhaustively listed, we cannot be sure that the pair we select (as a solution) is the best of all alternatives. This way of solving a problem is known as "sequential thinking", as opposed to "simultaneous thinking". More efficient and effective methodologies, known as Linear Programming Solution Techniques, are based on simultaneous thinking and are commercially available in over 400 different software packages from all over the world.

The optimal solution, i.e., the optimal strategy, is to make X1 = 10 tables and X2 = 20 chairs. We may program the carpenter's weekly activities to make 10 tables and 20 chairs. With this (optimal) strategy, the net income is \$110. This prescribed solution surprised the carpenter: because of the higher net income from selling a table (\$5), he used to make more tables than chairs!
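Since (as discussed later on this site) an optimal solution of an LP occurs at a vertex of the feasible region, this two-variable problem can be checked by enumerating the pairwise intersections of the constraint lines and keeping only the feasible points. A minimal, self-contained sketch; the code and names are illustrative, not part of the original text:

```python
from itertools import combinations

# Each constraint is written as a*X1 + b*X2 <= c (non-negativity included).
cons = [(2, 1, 40),     # labor:    2 X1 + X2 <= 40
        (1, 2, 50),     # material: X1 + 2 X2 <= 50
        (-1, 0, 0),     # X1 >= 0
        (0, -1, 0)]     # X2 >= 0

def intersect(c1, c2):
    """Intersection point of the two boundary lines, or None if parallel."""
    a1, b1, d1 = c1
    a2, b2, d2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in cons)

vertices = [p for c1, c2 in combinations(cons, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 5 * p[0] + 3 * p[1])
income = 5 * best[0] + 3 * best[1]
```

For this problem the enumeration yields the vertex (10, 20) with net income 110, matching the prescribed strategy.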

Hire or Not? Suppose the carpenter can hire someone to help at a cost of \$2 per hour. This \$2 is in addition to the hourly wage he is already paying; otherwise, \$2 would be much lower than the current minimum wage in the US. Should the carpenter hire a helper, and if so, for how many hours?

Let X3 be the number of extra hours, then the modified problem is:

Maximize 5 X1 + 3 X2 - 2 X3

Subject to:
2 X1 + X2 ≤ 40 + X3 labor constraint with unknown additional hours
X1 + 2 X2 ≤ 50 material constraint

Under this new condition, we will see that the optimal solution is X1 = 50, X2 = 0, X3 = 60, with an optimal net income of \$130. Therefore, the carpenter should hire a helper for 60 hours. What about hiring for only 40 hours? The answer to this and other types of what-if questions is treated under sensitivity analysis on this Web site.
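The hiring decision can be double-checked by brute force: for any production plan (X1, X2), extra labor is bought only as needed, so the cheapest feasible choice is X3 = max(0, 2 X1 + X2 - 40). A hedged sketch (illustrative code; the integer grid suffices here because the optimum happens to be integral):

```python
# Enumerate integer production plans satisfying the material constraint;
# x3 is set to the minimum number of extra hours the plan requires.
plans = ((x1, x2, max(0, 2 * x1 + x2 - 40))
         for x1 in range(0, 51) for x2 in range(0, 26)
         if x1 + 2 * x2 <= 50)
best = max(plans, key=lambda t: 5 * t[0] + 3 * t[1] - 2 * t[2])
profit = 5 * best[0] + 3 * best[1] - 2 * best[2]
```

The search confirms the plan (50, 0) with 60 hired hours and a net income of 130.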

As an exercise, use your LP software to find the largest range for X values satisfying the following inequality with two absolute value terms:

| 3X - 4 | - | 2X - 1 | ≤ 2
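If no LP software is at hand, a fine grid scan gives a numerical estimate of the answer (a rough check only; the grid and names are illustrative):

```python
# Scan x in [-10, 10] with step 0.001 and collect the points that
# satisfy |3x - 4| - |2x - 1| <= 2.
xs = [i / 1000 for i in range(-10000, 10001)]
sol = [x for x in xs if abs(3 * x - 4) - abs(2 * x - 1) <= 2]
lo, hi = min(sol), max(sol)   # endpoints of the solution interval
```

The scan suggests the solution set is approximately the interval 0.6 ≤ X ≤ 5, which you can confirm with your LP software or by a case analysis of the absolute values.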

#### A Product-Replacement Problem

A price-taking firm sells S units of its product at the market price of p. Management's policy is to replace defective units at no additional charge, on a first-come, first-served basis, while replacement units are available. Because management does not want to risk making the same mistake twice, it produces the units that it sells to the market on one machine. Moreover, it produces the replacement units, denoted by X, on a second, higher-quality machine. The fixed cost associated with operating both machines, the variable cost, and the replacement cost are given in the short-run cost function C(X) = 100 + 20S + 30X.

The exact probability that a unit will be defective is r. Acting out of caution, however, management always underestimates the reliability of its product. Nonetheless, it imposes the condition that X ≥ rS. The demand for the firm's product is given by S(r) = 10000e^(-0.2r). Hence the decision problem is to maximize the net profit function P(X):

Maximize P(X) = 10000p e^(-0.2r) - 100 - 20S - 30X,
subject to: X ≥ rS.

As we will learn, the solutions to LP problems lie at the vertices of the feasible region. Since X appears with a negative coefficient in P(X), the net profit is maximized when management sets X at its smallest feasible value, X = rS.

#### A Diet Problem

Suppose the only foods available in your local store are potatoes and steak. The decision about how much of each food to buy is to be made entirely on dietary and economic considerations. We have the nutritional and cost information in the following table:

| | Per unit of potatoes | Per unit of steak | Minimum requirements |
| --- | --- | --- | --- |
| Units of carbohydrates | 3 | 1 | 8 |
| Units of vitamins | 4 | 3 | 19 |
| Units of proteins | 1 | 3 | 7 |
| Unit cost | 25 | 50 | |

The problem is to find a diet (a choice of the numbers of units of the two foods) that meets all minimum nutritional requirements at minimal cost.

1. Formulate the problem in terms of linear inequalities and an objective function.
2. Solve the problem geometrically.
3. Explain how the 2:1 cost ratio (steak to potatoes) dictates that the solution must be where you said it is.
4. Find a cost ratio that would move the optimal solution to a different choice of numbers of food units, but that would still require buying both steak and potatoes.
5. Find a cost ratio that would dictate buying only one of the two foods in order to minimize cost.

a) We begin by setting up the constraints for the problem. The first constraint represents the minimum requirement for carbohydrates, which is 8 units per some unknown amount of time; 3 units are obtained per unit of potatoes and 1 unit per unit of steak. The second constraint represents the minimum requirement for vitamins, which is 19 units; 4 units are obtained per unit of potatoes and 3 units per unit of steak. The third constraint represents the minimum requirement for proteins, which is 7 units; 1 unit is obtained per unit of potatoes and 3 units per unit of steak. The fourth and fifth constraints represent the fact that all feasible solutions must be nonnegative, because we cannot buy negative quantities.

Constraints:

{3X1 + X2 ≥ 8, 4X1 + 3X2 ≥ 19, X1 + 3X2 ≥ 7, X1 ≥ 0, X2 ≥ 0}

b) Next we plot the solution set of the inequalities to produce the feasible region of possibilities.

c) The 2:1 cost ratio of steak to potatoes dictates that the solution must be here since, as a whole, one unit of steak is slightly less nutritious than one unit of potatoes. Moreover, in the one category where steak beats potatoes in healthiness (proteins), only 7 total units are necessary, so it is easy to fulfill this requirement without buying a significant amount of steak. Since steak is more expensive, buying more potatoes to fulfill the nutritional requirements is more logical.
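The corner-point reasoning in parts a) through c) can be checked mechanically by enumerating the intersections of the constraint boundary lines, discarding infeasible points, and evaluating the cost at the remaining vertices. A self-contained sketch (illustrative code, not part of the original exercise):

```python
from itertools import combinations

# Constraints as a*X1 + b*X2 <= c; the >= rows are written negated.
cons = [(-3, -1, -8),    # carbohydrates: 3 X1 + X2 >= 8
        (-4, -3, -19),   # vitamins:      4 X1 + 3 X2 >= 19
        (-1, -3, -7),    # proteins:      X1 + 3 X2 >= 7
        (-1, 0, 0),      # X1 >= 0
        (0, -1, 0)]      # X2 >= 0

def corner(c1, c2):
    """Intersection of the two boundary lines, or None if parallel."""
    a1, b1, d1 = c1
    a2, b2, d2 = c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

verts = [p for c1, c2 in combinations(cons, 2)
         if (p := corner(c1, c2)) is not None
         and all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in cons)]
best = min(verts, key=lambda p: 25 * p[0] + 50 * p[1])
cost = 25 * best[0] + 50 * best[1]
```

The enumeration confirms the diet of 4 units of potatoes and 1 unit of steak, at a minimum cost of 150, consistent with the explanation in part c).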

d) Now we choose a new cost ratio that will move the optimal solution to a different choice of numbers of food units. Both steak and potatoes will still be purchased, but a different solution will be found. Let's try a 5:2 cost ratio.

e) With a cost ratio that makes potatoes sufficiently expensive relative to steak, the optimal solution is to buy 8 units of steak and no potatoes per unit time to meet the minimum nutritional requirements.

#### A Blending Problem

Bryant's Pizza, Inc. is a producer of frozen pizza products. The company makes a net income of \$1.00 for each regular pizza and \$1.50 for each deluxe pizza produced. The firm currently has 150 pounds of dough mix and 50 pounds of topping mix. Each regular pizza uses 1 pound of dough mix and 4 ounces (16 ounces= 1 pound) of topping mix. Each deluxe pizza uses 1 pound of dough mix and 8 ounces of topping mix. Based on the past demand per week, Bryant can sell at least 50 regular pizzas and at least 25 deluxe pizzas. The problem is to determine the number of regular and deluxe pizzas the company should make to maximize net income. Formulate this problem as an LP problem.

Let X1 and X2 be the numbers of regular and deluxe pizzas, respectively; then the LP formulation is:

Maximize X1 + 1.5 X2

Subject to:
X1 + X2 ≤ 150
0.25 X1 + 0.5 X2 ≤ 50
X1 ≥ 50
X2 ≥ 25
X1 ≥ 0, X2 ≥ 0
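As a hedged check of this formulation (illustrative code, not part of the original problem), a brute-force search over whole-pizza counts finds the best plan; the LP optimum happens to fall on integer values here, so the grid recovers it:

```python
# Enumerate integer pizza counts satisfying all four constraints.
plans = ((x1, x2)
         for x1 in range(50, 151) for x2 in range(25, 101)
         if x1 + x2 <= 150 and 0.25 * x1 + 0.5 * x2 <= 50)
best = max(plans, key=lambda p: p[0] + 1.5 * p[1])
income = best[0] + 1.5 * best[1]
```

The search suggests making 100 regular and 50 deluxe pizzas, for a net income of 175; an LP solver over continuous variables reaches the same corner point.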

#### Other Common Applications of LP

Linear programming is a powerful tool for selecting alternatives in a decision problem and, consequently, has been applied in a wide variety of problem settings. We will indicate a few applications covering the major functional areas of a business organization.

Finance: The problem of the investor could be a portfolio-mix selection problem. In general, the number of different portfolios can be much larger than the example indicates, and more and different kinds of constraints can be added. Another decision problem involves determining the mix of funding for a number of products when more than one method of financing is available. The objective may be to maximize total profits, where the profit for a given product depends on the method of financing. For example, funding may be done with internal funds, short-term debt, or intermediate financing (amortized loans). There may be limits on the availability of each of the funding options as well as financial constraints requiring certain relationships between the funding options so as to satisfy the terms of bank loans or intermediate financing. There may also be limits on the production capacity for the products. The decision variables would be the number of units of each product to be financed by each funding option.

Production and Operations Management: Quite often in the process industries a given raw material can be made into a wide variety of products. For example, in the oil industry, crude oil is refined into gasoline, kerosene, home-heating oil, and various grades of engine oil. Given the present profit margin on each product, the problem is to determine the quantities of each product that should be produced. The decision is subject to numerous restrictions such as limits on the capacities of various refining operations, raw-material availability, demands for each product, and any government-imposed policies on the output of certain products. Similar problems also exist in the chemical and food-processing industries.

Human Resources: Personnel planning problems can also be analyzed with linear programming. For example, in the telephone industry, demands for the services of installer-repair personnel are seasonal. The problem is to determine the number of installer-repair personnel and line-repair personnel to have on the work force each month where the total costs of hiring, layoff, overtime, and regular-time wages are minimized. The constraints set includes restrictions on the service demands that must be satisfied, overtime usage, union agreements, and the availability of skilled people for hire. This example runs contrary to the assumption of divisibility; however, the work-force levels for each month would normally be large enough that rounding to the closest integer in each case would not be detrimental, provided the constraints are not violated.

Marketing: Linear programming can be used to determine the proper mix of media to use in an advertising campaign. Suppose that the available media are radio, television, and newspapers. The problem is to determine how many advertisements to place in each medium. Of course, the cost of placing an advertisement depends on the medium chosen. We wish to minimize the total cost of the advertising campaign, subject to a series of constraints. Since each medium may provide a different degree of exposure of the target population, there may be a lower bound on the total exposure from the campaign. Also, each medium may have a different efficiency rating in producing desirable results; there may thus be a lower bound on efficiency. In addition, there may be limits on the availability of each medium for advertising.

Distribution: Another application of linear programming is in the area of distribution. Consider a case in which there are m factories that must ship goods to n warehouses. A given factory could make shipments to any number of warehouses. Given the cost to ship one unit of product from each factory to each warehouse, the problem is to determine the shipping pattern (number of units that each factory ships to each warehouse) that minimizes total costs. This decision is subject to the restrictions that each factory cannot ship more units than it has the capacity to produce, and that the demand at each warehouse must be met.

#### Graphical Solution Method

Procedure for Graphical Method of Solving LP Problems:
1. Is the problem an LP? Yes, if and only if:

All variables have power of 1, and they are added or subtracted (not divided or multiplied). The constraints must be of the forms ≤, ≥, or = (that is, the LP-constraints are always closed), and the objective must be either maximization or minimization.

For example, the following problem is not an LP: Max X, subject to X &lt; 1. Because of the strict inequality, this very simple problem has no optimal solution.

2. Can I use the graphical method? Yes, if the number of decision variables is either 1 or 2.
3. Use Graph Paper. Graph each constraint one by one, by pretending that they are equalities (pretend all ≤ and ≥ are =) and then plot the line. Graph each straight line on a system of coordinates on graph paper. A system of coordinates has two axes: a horizontal axis called the x-axis (abscissa), and a vertical axis called the y-axis (ordinate). The axes are numbered, usually from zero to the largest value expected for each variable.
4. As each line is created, divide the region into 3 parts with respect to each line. To identify the feasible region for this particular constraint, pick a point on either side of the line and plug its coordinates into the constraint. If it satisfies the condition, this side is feasible; otherwise the other side is feasible. For equality constraints, only the points on the line are feasible.
5. Throw away the sides that are not feasible.

After all the constraints are graphed, you should have a non-empty (convex) feasible region, unless the problem is infeasible.

6. Beware that in some published figures the boundaries of feasible regions are drawn incorrectly: the boundary of the feasible region consists of the points at which the corresponding inequalities hold as equalities.

7. Create (at least) two iso-value lines from the objective function, by setting the objective function equal to any two distinct numbers. Graph the resulting lines. By moving these lines parallel to themselves, you will find the optimal corner (extreme point), if one exists.

In general, if the feasible region is within the first quadrant of the coordinate system (i.e., if X1 ≥ 0 and X2 ≥ 0), then for maximization problems you move the iso-value objective line parallel to itself away from the origin (0, 0), while keeping at least one common point with the feasible region. For minimization problems the opposite is true: you move the iso-value line parallel to itself closer to the origin, while keeping at least one common point with the feasible region. The last common point provides the optimal solution.

Classification of the Feasible Points: The feasible points of any non-empty LP feasible region can be classified as interior points, boundary points, or vertices. A point B is a boundary point of the feasible set if every circle centered at B, however small, contains some points in the set and some points outside it. A point I is an interior point if some circle centered at I, and every smaller one, contains exclusively points in the set. The collection of boundary points belonging to one constraint is called a boundary line (segment). The intersections of boundary lines (segments) are called vertices; a feasible vertex is called a corner point. In three-dimensional space and higher, the circles become spheres and hyper-spheres.

Note that the LP constraints determine the vertices and the corner points. A vertex is the intersection of 2 lines or, in general, of n hyper-planes in LP problems with n decision variables. A corner point is a vertex that is also feasible.

A Numerical Example: The Carpenter's Problem

Maximize 5 X1 + 3 X2

Subject to:
2 X1 + X2 ≤ 40
X1 + 2 X2 ≤ 50
and both X1, X2 are non-negative.

Note: There is an alternative to the iso-value objective function approach for problems that have few constraints and a bounded feasible region: first find all the corner points, which are called extreme points, then evaluate the objective function at the extreme points to find the optimal value and the optimal solution.

Clearly, the carpenter has many alternative sets of actions to take. However, the four "extreme" options are:

Objective Function Value at Each Corner (i.e., Extreme) Point

| Decision-Maker's Choices (Number of Tables or Chairs) | Corner Point Coordinates (X1, X2) | Net Income 5 X1 + 3 X2 |
|---|---|---|
| Make no tables nor chairs | 0, 0 | 0 |
| Make all the tables you can | 20, 0 | 100 |
| Make all the chairs you can | 0, 25 | 75 |
| Make mixed products | 10, 20 | 110 |

Since the objective is to maximize, from the above table we read off the optimal value to be 110, which is obtainable if the carpenter follows the optimal strategy of X1 = 10, and X2 = 20.

Notice that in the carpenter problem, the convex feasible region provides the corner points with coordinates shown in the above table.
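These corner-point calculations can be verified with any LP solver; here is a minimal sketch using SciPy's linprog (the choice of SciPy is ours for illustration; the original text uses WinQSB):

```python
from scipy.optimize import linprog

# Maximize 5*X1 + 3*X2  ->  minimize the negated objective
c = [-5, -3]
A_ub = [[2, 1],   # 2*X1 +   X2 <= 40  (labor)
        [1, 2]]   #   X1 + 2*X2 <= 50  (raw material)
b_ub = [40, 50]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # bounds default to X >= 0
print(res.x, -res.fun)  # optimal corner (10, 20) with value 110
```

The solver lands on the same corner point (10, 20) that the table above identifies by inspection.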

The main deficiency of the graphical method is that it is limited to solving LP problems with only 1 or 2 decision variables. However, the main and useful conclusion we easily observe from the graphical method is as follows:

If a linear program has a bounded optimal solution, then one of the corner points provides an optimal solution.

The proof of this claim follows from the following two facts:

Fact No. 1: The feasible region of any linear program is always a convex set.

Since all of the constraints are linear, the feasible region (F.R.) is a polygon; moreover, this polygon is a convex set. In any LP problem of dimension higher than two, the boundaries of the F.R. are parts of hyper-planes, and the F.R. in this case is called a polyhedron, which is also convex. A convex set is one in which, if you choose two feasible points from it, then all the points on the straight line segment joining these two points are also feasible. The proof that the F.R. of a linear program is always a convex set follows by contradiction. The feasible region of any linear program is called a polyhedron; it is called a polytope if it is bounded.

Fact No. 2: The iso-value lines of a linear program's objective function are always straight lines (hyper-planes, in higher dimensions).

This fact follows from the linearity of the objective function in any LP problem. Combining the above two facts, it follows that, if a linear program has a non-empty, bounded feasible region, then the optimal solution is always attained at one of the corner points.

To overcome the deficiency of the graphical method, we will utilize this useful and practical conclusion in developing an algebraic method that is applicable to multi-dimensional LP problems.

The convexity of the feasible region makes LP problems easy to solve. Because of this property and the linearity of the objective function, an optimal solution always occurs at one of the vertices. Moreover, since the number of vertices is finite, one has to find all feasible vertices and then evaluate the objective function at these vertices to find the optimal point.

For nonlinear programs, the problem is much harder to solve, because the solution could be anywhere: inside the feasible region, on the boundary of the feasible region, or at a vertex.

Fortunately, most of the Business optimization problems have linear constraints, which is why LP is so popular. There are well over 400 computer packages in the market today solving LP problems. Most of them are based on vertex searching, that is, jumping from one vertex to the neighboring one in search of an optimal point.

You have already noticed that a graph of a system of inequalities and/or equalities is called the feasible region. The two representations, graphical and algebraic, are equivalent to each other: the coordinates of any point satisfying the constraints are located in the feasible region, and the coordinates of any point in the feasible region satisfy all the constraints.

A Numerical Example: Find the system of constraints representing a given feasible region. By constructing the equations of the boundary lines of the region, we can verify that the following system of inequalities indeed represents a (triangular) feasible region:

X1 ≥ -1
X2 ≤ 1
X1 - X2 ≤ 1

#### Links Between LP and Systems of Equations: Algebraic Method

As George Dantzig pointed out, linear programming is strictly "the theory and solution of linear inequality systems." The basic solutions to a linear program are the solutions to the systems of equations consisting of constraints at binding position. Not all basic solutions satisfy all the problem constraints. Those that do meet all the restrictions are called basic feasible solutions. The basic feasible solutions correspond precisely to the extreme points of the feasible region.

For example, for the Carpenter's Problem, one can compute all the basic solutions by taking any two of the equations and solving them simultaneously, and then using the remaining constraints to check the feasibility of this solution. If it is feasible, then this solution is a basic feasible solution that provides the coordinates of a corner point of the feasible region. To illustrate the procedure, consider the Carpenter's constraints in binding (i.e., all with = sign) position:

2X1 + X2 = 40
X1 + 2X2 = 50
X1 = 0
X2 = 0

Here we have 4 equations in 2 unknowns. In terms of a binomial coefficient, there are at most C(4,2) = 4! / [2! (4-2)!] = 6 basic solutions. Solving the six resultant systems of equations, we have:

Six Basic Solutions with Four Basic Feasible Solutions

| X1 | X2 | 5X1 + 3X2 |
|---|---|---|
| 10 | 20 | 110* |
| 0 | 40 | infeasible |
| 20 | 0 | 100 |
| 0 | 25 | 75 |
| 50 | 0 | infeasible |
| 0 | 0 | 0 |

Four of the above basic solutions are basic feasible solutions that satisfy all the constraints; they are the coordinates of the vertices of the bounded feasible region. By plugging each basic feasible solution into the objective function, we find the optimal value. Therefore, from the above table, we see that the optimal solution is X1 = 10, X2 = 20, with an optimal value of \$110. The above approach can be applied to solving higher-dimensional LP problems provided the optimal solution is bounded.
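The enumeration of basic solutions can be automated. A sketch using NumPy (our choice of tool, not part of the original text) that solves all C(4,2) = 6 two-equation systems and keeps only the feasible ones:

```python
from itertools import combinations
import numpy as np

# Carpenter's constraints at binding position, rows of [a1, a2, b]:
rows = np.array([[2, 1, 40],   # 2*X1 +   X2 = 40
                 [1, 2, 50],   #   X1 + 2*X2 = 50
                 [1, 0,  0],   #   X1        = 0
                 [0, 1,  0]])  #          X2 = 0

basic_feasible = []
for i, j in combinations(range(4), 2):      # C(4,2) = 6 basic solutions
    A, b = rows[[i, j], :2], rows[[i, j], 2]
    if abs(np.linalg.det(A)) < 1e-9:
        continue                            # the two lines are parallel
    x = np.linalg.solve(A, b)
    # keep only the solutions that satisfy ALL the constraints
    if (x >= -1e-9).all() and 2*x[0] + x[1] <= 40 + 1e-9 and x[0] + 2*x[1] <= 50 + 1e-9:
        basic_feasible.append((x[0], x[1], 5*x[0] + 3*x[1]))

best = max(basic_feasible, key=lambda t: t[2])
print(len(basic_feasible), best)   # 4 basic feasible solutions; best is (10, 20, 110)
```

This reproduces the table above: six basic solutions, four of them feasible, with the optimum at (10, 20).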

You may like using Solving Systems of Equations JavaScript to check your computation.

Arsham H, Links Among a Linear System of Equations, Matrix Inversion, and Linear Program Solver Routines, Journal of Mathematical Education in Science and Technology, 29(5), 764-769, 1998.
Dantzig G., Linear Programming and Extensions, page 21, Princeton University Press, 1963.

#### Extension to Higher Dimensions

The Graphical Method is limited to solving LP problems having one or two decision variables. However, it provides a clear illustration of where the feasible and non-feasible regions, as well as the vertices, are. Having a visual understanding of the problem helps with a more rational thought process. For example, we learned that if an LP problem has a bounded optimal solution, then the optimal solution is always at one of the vertices of its feasible region (a corner point). What needs to be done is to find all the intersection points (vertices) and then examine which one among all feasible vertices provides the optimal solution. By using concepts from analytic geometry, we will overcome this limitation of human vision. The Algebraic Method is designed to extend the graphical method results to multi-dimensional LP problems, as illustrated in the following numerical example.

#### Numerical Example: The Transportation Problem

Transportation models play an important role in logistics and supply chain management for reducing cost and improving service. The goal is to find the most cost-effective way to transport the goods.

Consider a model with 2 origins and 2 destinations. The supply at each origin (e.g., warehouse) O1, O2, the demand at each destination (e.g., market) D1, D2, and the unit transportation costs are summarized in the following table.

The Unit Transportation Cost Matrix

| | D1 | D2 | Supply |
|---|---|---|---|
| O1 | 20 | 30 | 200 |
| O2 | 10 | 40 | 100 |
| Demand | 150 | 150 | 300 |

Let Xij denote the amount of shipment from origin i to destination j. The LP formulation of the problem, minimizing the total transportation cost, is:

Min 20X11 + 30X12 + 10X21 + 40X22

subject to:
X11 + X12 = 200
X21 + X22 = 100
X11 + X21 = 150
X12 + X22 = 150
all Xij ≥ 0

Notice that the feasible region is bounded, therefore one may use the algebraic method. Because this transportation problem is a balanced one (total supply = total demand) all constraints are in equality form. Moreover, any one of the constraints is redundant (adding any two constraints and subtracting another one, we obtain the remaining one). Let us delete the last constraint. Therefore, the problem reduces to:

Min 20X11 + 30X12 + 10X21 + 40X22

subject to:
X11 + X12 = 200
X21 + X22 = 100
X11 + X21 = 150
all Xij ≥ 0

This LP problem cannot be solved by the graphical method. However, the algebraic method has no limitation on the LP dimension. The constraints are already at binding (equality) position.

Notice that we have m = 3 equality constraints in n = 4 (non-negative) decision variables. Therefore, at most m = 3 of these variables can take a positive value in a basic solution, and the rest must be at zero level. By setting each one of the variables in turn to zero, we get:

| X11 | X12 | X21 | X22 | Total Transportation Cost |
|---|---|---|---|---|
| 0 | 200 | 150 | -50 | infeasible |
| 200 | 0 | -50 | 150 | infeasible |
| 150 | 50 | 0 | 100 | 8500 |
| 50 | 150 | 100 | 0 | 6500* |

By inspecting the constraints, it is easy to see that setting any two (or more) of the variables to zero makes all the remaining solutions infeasible. Thus, from the above table, we obtain the optimal strategy: X11 = 50, X12 = 150, X21 = 100, and X22 = 0, with the least total transportation cost of \$6,500.
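As a check on the algebraic method, the same transportation model can be handed to an LP solver; a sketch with SciPy's linprog (our choice of tool; the text uses WinQSB's Net module):

```python
from scipy.optimize import linprog

# Unit costs for X11, X12, X21, X22
c = [20, 30, 10, 40]
# The three independent equality constraints (the fourth is redundant)
A_eq = [[1, 1, 0, 0],   # X11 + X12 = 200  (supply of O1)
        [0, 0, 1, 1],   # X21 + X22 = 100  (supply of O2)
        [1, 0, 1, 0]]   # X11 + X21 = 150  (demand of D1)
b_eq = [200, 100, 150]

res = linprog(c, A_eq=A_eq, b_eq=b_eq)  # bounds default to Xij >= 0
print(res.x, res.fun)  # shipment (50, 150, 100, 0) at total cost 6500
```

The solver confirms the basic-solution table: ship 50 and 150 units from O1, and 100 units from O2 to D1.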

You may like to run this problem using Module Net.Exe in your WinQSB Package to check these results for yourself.

Notice that in the above example there are m = 3 constraints (excluding the non-negativity conditions) and n = 4 decision variables. The optimal shipment indicates that the manager should not send any shipment from the second origin to the second destination. The optimal solution consists of at most 3 positive decision variables, which is equal to the number of constraints. If the manager is shipping goods from every origin to every destination, then the result is not optimal.

The above results, found by the Algebraic Method, can be generalized into the following main economic result:

Given an LP having a bounded feasible region, with m constraints (excluding any sign constraints such as non-negativity conditions) and n decision variables, if n &gt; m then at most m decision variables have a positive value at the optimal solution, and the rest (i.e., n - m decision variables) must be set at zero level. This result holds if the problem has a bounded, unique optimal solution.
The above result follows from the fact that the shadow prices indicate that the opportunity cost of a decision variable at zero level exceeds its contribution.

Numerical Example: Find the optimal solution for the following production problem with n=3 products and m=1 (resource) constraint:

Maximize 3X1 + 2X2 + X3

Subject to: 4X1 + 2X2 + 3X3 ≤ 12
all variables Xi ≥ 0

Since the feasible region is bounded, following the Algebraic Method by setting all the constraints at the binding position, we have the following system of equations:

4X1 + 2X2 + 3X3 = 12
X1 = 0
X2 = 0
X3 = 0

The basic solutions obtained from this system of equations are summarized in the following table.

| X1 | X2 | X3 | Total Net Profit |
|---|---|---|---|
| 0 | 0 | 4 | 4 |
| 0 | 6 | 0 | 12* |
| 3 | 0 | 0 | 9 |
| 0 | 0 | 0 | 0 |

Thus, the optimal strategy is X1 = 0, X2 = 6, X3 = 0, with the maximum net profit of \$12.

The result in the above table is consistent with the application of the above main economic result. In other words, the optimal solution can be found by setting at least n - m = 3 - 1 = 2 decision variables to zero.
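This small production problem can likewise be checked with a solver; a sketch with SciPy's linprog (an illustration of our own, not part of the original text):

```python
from scipy.optimize import linprog

# Maximize 3*X1 + 2*X2 + X3  ->  minimize the negated objective
c = [-3, -2, -1]
A_ub = [[4, 2, 3]]   # 4*X1 + 2*X2 + 3*X3 <= 12  (the single resource)
b_ub = [12]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # bounds default to Xi >= 0
print(res.x, -res.fun)  # (0, 6, 0) with maximum net profit 12
```

As the economic result predicts, only n - m = 2 variables are at zero level and a single variable is positive at the optimum.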

For large-scale LP problems with many constraints, the Algebraic Method involves solving many linear systems of equations. When the LP problem has many variables and constraints, solving these systems by hand becomes very tedious; for very large-scale problems it is an impossible task. Therefore, we need the computer to do the computations for us. One algorithmic, computerized approach is the Simplex Method, which is an efficient and effective implementation of the Algebraic Method. There are well over 400 LP solvers, almost all of which use the Simplex method, including your software. Upon solving the LP problem by computer packages, the optimal solution comes with valuable information, such as sensitivity-analysis ranges.

You may like using Solving Systems of Equations JavaScript for up to 3-decision variables LP problems to check your computation by the algebraic method.

For details and more numerical examples visit the Solution Algorithms for LP Models Web site.

#### How to Solve a Linear System of Equations by LP Solvers?

In the Algebraic Method of solving LP problems, we have to solve systems of equations, so there is a link between LP solvers and solvers for systems of equations. Suppose we have a very large system of equations that we would like to solve, and an LP solver package, but no solver package for systems of equations available. The question is: how can we use an LP solver to find the solution to a system of equations? The following steps outline the process of solving any linear system of equations using an available LP solver.

1- Because some LP solvers require that all variables be non-negative, substitute Xi = Yi - T everywhere, where Yi ≥ 0 and T ≥ 0.
2- Create a dummy objective, such as minimize T.
3- The constraints of the LP problem are the equations in the system after the substitutions outlined in step 1.

Numerical Example: Solve the following system of equations

2X1 + X2 = 3
X1 - X2 = 3

Since the WinQSB package accepts LP in various formats (unlike Lindo), solving this problem with WinQSB is straightforward:

First, create an LP with a dummy objective function such as Max X1, subject to 2X1 + X2 = 3, X1 - X2 = 3, and both X1 and X2 unrestricted in sign. Then, enter this LP into the LP/ILP module to get the solution. The generated solution is X1= 2, X2= -1, which can easily be verified by substitution.

However, if you use any LP solver which requires by default (e.g., Lindo) that all variables be non-negative, you need to do some preparations to satisfy this requirement: First substitute for X1 = Y1 - T and X2 = Y2 - T in both equations. We also need an objective function. Let us have a dummy objective function such as minimize T. The result is the following LP:

Min T

Subject to:
2Y1 + Y2 - 3T = 3,
Y1 - Y2 = 3.

Using any LP solver, such as Lindo, we find the optimal solution to be Y1 = 3, Y2 = 0, T = 1. Now, substitute this LP solution into both transformations X1 = Y1 - T and X2 = Y2 - T. This gives the numerical values for our original variables. Therefore, the solution to the system of equations is X1 = 3 - 1 = 2, X2 = 0 - 1 = -1, which can easily be verified by substitution.
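The same substitution can be reproduced with a modern solver; a sketch with SciPy's linprog (our choice of tool; the text uses Lindo):

```python
from scipy.optimize import linprog

# Variables (Y1, Y2, T), all >= 0; original X1 = Y1 - T, X2 = Y2 - T
c = [0, 0, 1]        # dummy objective: minimize T
A_eq = [[2, 1, -3],  # 2*(Y1-T) + (Y2-T) = 3  ->  2Y1 + Y2 - 3T = 3
        [1, -1, 0]]  #   (Y1-T) - (Y2-T) = 3  ->   Y1 - Y2      = 3
b_eq = [3, 3]

res = linprog(c, A_eq=A_eq, b_eq=b_eq)  # bounds default to >= 0
y1, y2, t = res.x
x1, x2 = y1 - t, y2 - t
print(x1, x2)  # solution of the system: X1 = 2, X2 = -1
```

The solver returns Y1 = 3, Y2 = 0, T = 1, and undoing the substitution recovers X1 = 2, X2 = -1 as before.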

#### Dual Problem: Construction and Its Meaning

Associated with each (primal) LP problem is a companion problem called the dual. The following classification of constraints and decision variables is useful and easy to remember when constructing the dual.

The Dual Problem Construction

| | Sensible | Restricted | Unusual |
|---|---|---|---|
| Constraint type when the objective is Max (e.g., Profit) | ≤ | = | ≥ |
| Constraint type when the objective is Min (e.g., Cost) | ≥ | = | ≤ |
| Variable type | ≥ 0 | unrestricted in sign | ≤ 0 |
A one-to-one correspondence between the constraint type and the variable type exists using this classification of constraints and variables for both the primal and the dual problems.

Dual Problem Construction:

- If the primal is a maximization problem, then its dual is a minimization problem (and vice versa).

- Use the variable type of one problem to find the constraint type of the other problem.

- Use the constraint type of one problem to find the variable type of the other problem.

- The RHS elements of one problem become the objective function coefficients of the other problem (and vice versa).

- The matrix of coefficients of the constraints of one problem is the transpose of the matrix of coefficients of the constraints of the other problem. That is, rows of the matrix become columns, and vice versa.

You may check your dual construction rules by using your WinQSB package.

Numerical Examples:

Consider the following primal problem:

min x1 - 2x2
subject to:
x1 + x2 ≥ 2,
x1 - x2 ≤ -1,
x2 ≥ 3,
and x1, x2 ≥ 0.

Following the above construction rule, the dual problem is:

max 2u1 - u2 + 3u3
subject to:
u1 + u2 ≤ 1,
u1 - u2 + u3 ≤ -2,
u1 ≥ 0,
u2 ≤ 0,
and u3 ≥ 0.
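Incidentally, this primal happens to be unbounded (X2 can grow without limit while every constraint stays satisfied, driving x1 - 2x2 toward minus infinity), so by the duality relations its dual must be infeasible; indeed, the dual constraint u1 - u2 + u3 ≤ -2 cannot hold when u1 ≥ 0, u2 ≤ 0, u3 ≥ 0. A sketch verifying both statuses with SciPy's linprog (our choice of tool; status code 3 means unbounded, 2 means infeasible):

```python
from scipy.optimize import linprog

# Primal: min x1 - 2*x2  s.t.  x1 + x2 >= 2,  x1 - x2 <= -1,  x2 >= 3,  x >= 0
# Express the >= rows as negated <= rows for linprog.
primal = linprog([1, -2],
                 A_ub=[[-1, -1],   # -(x1 + x2) <= -2
                       [ 1, -1],   #   x1 - x2  <= -1
                       [ 0, -1]],  #       -x2  <= -3
                 b_ub=[-2, -1, -3])

# Dual: max 2u1 - u2 + 3u3 -> minimize the negation; u1 >= 0, u2 <= 0, u3 >= 0
dual = linprog([-2, 1, -3],
               A_ub=[[1,  1, 0],   # u1 + u2      <= 1
                     [1, -1, 1]],  # u1 - u2 + u3 <= -2
               b_ub=[1, -2],
               bounds=[(0, None), (None, 0), (0, None)])

print(primal.status, dual.status)  # 3 (unbounded) and 2 (infeasible)
```

This is one of the possible primal-dual combinations listed in the table later in this section.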

The Dual of the Carpenter's Problem:

Maximize 5X1 + 3X2

Subject to:
2X1 + X2 ≤ 40
X1 + 2X2 ≤ 50
X1 ≥ 0
X2 ≥ 0

Its dual is:

Minimize 40U1 + 50U2

Subject to:
2U1 + U2 ≥ 5
U1 + 2U2 ≥ 3
U1 ≥ 0
U2 ≥ 0

Applications: One may use duality in a wide variety of applications including:

- It may be more efficient to solve the dual than the primal.

- The dual solution provides important economic interpretations, such as the shadow prices, i.e., the marginal values of the RHS elements. The shadow price was defined, historically, as the improvement in the objective function value per unit increase in the right-hand side, because the problem was often put in the form of profit maximization (improvement meaning increase). The shadow price may not be the market price; it is, e.g., the worth of the resource under the "shadow" of your business activity. Sensitivity analysis, i.e., the analysis of the effect of small variations in system parameters on the output measures, can be studied by computing the derivatives of the output measures with respect to the parameters.

- If a constraint in one problem is not binding (i.e., its LHS value differs from its RHS value), then the associated variable in the other problem is zero. If a decision variable in one problem is not zero, then the associated constraint in the other problem is binding. These results are known as the Complementary Slackness Conditions (CSC).

- Obtain the sensitivity range of the RHS of one problem from the sensitivity range of the cost coefficient in the other problem, and vice versa.

These results imply the only possible combinations of primal and dual properties as shown in the following table:

Possible Combinations of Primal and Dual Properties

| Primal Problem Condition | Implies | Dual Problem Condition |
|---|---|---|
| Feasible; bounded objective | ↔ | Feasible; bounded objective |
| Feasible; unbounded objective | → | Infeasible |
| Infeasible | ← | Feasible; unbounded objective |
| Infeasible | ↔ | Infeasible |
| Multiple solutions | ↔ | Degenerate solution |
| Degenerate solution | ↔ | Multiple solutions |

Notice that almost all LP solvers produce sensitivity ranges even in the last two cases; however, those ranges are not valid. For this reason you must make sure that the solution is unique and non-degenerate when analyzing and applying the sensitivity ranges.

Arsham H., An Artificial-Free Simplex Algorithm for General LP Models, Mathematical and Computer Modelling, Vol. 25, No.1, 107-123, 1997.
Benjamin A., Sensible Rules for Remembering Duals: The S-O-B Method, SIAM Review, Vol. 37, No. 1, 85-87, 1995.
Chambers R., Applied Production Analysis: A Dual Approach, Cambridge University Press, 1988.

#### The Dual Problem of the Carpenter's Problem and Its Interpretation

In this section we will construct the Dual Problem of the Carpenter's Problem and provide an economic interpretation of it.

In the Carpenter's Problem, the uncontrollable input parameters are the following:

The Uncontrollable Inputs

| | Table | Chair | Available |
|---|---|---|---|
| Labor | 2 | 1 | 40 |
| Raw Material | 1 | 2 | 50 |
| Net Income | 5 | 3 | |

and its LP formulation is:

Maximize 5 X1 + 3 X2

Subject to:
2 X1 + X2 ≤ 40 (labor constraint)
X1 + 2 X2 ≤ 50 (material constraint)
and both X1, X2 are non-negative.

Here X1 and X2 are the numbers of tables and chairs to make, respectively.

Suppose the Carpenter wishes to buy insurance for his net income. Let U1 = the dollar amount payable to the Carpenter for every labor hour lost (due to illness, for example), and U2 = the dollar amount payable to the Carpenter for every raw material unit lost (due to fire, for example).

The insurance officer tries to minimize the total amount \$(40U1 + 50U2) payable to the Carpenter by the insurance company. However, the Carpenter will set the constraints (i.e., conditions) by insisting that the insurance company cover all of his loss, that is, his net income, since he cannot make the products. Therefore, the insurance company's problem is:

Minimize 40 U1 + 50 U2
Subject to:
2U1 + 1U2 ≥ 5 (net income from a table)
1U1 + 2U2 ≥ 3 (net income from a chair)
and U1, U2 are non-negative.

Implementing this problem on your computer package shows that the optimal solution is U1 = \$7/3 and U2 = \$1/3 with the optimal value of \$110 (the amount the Carpenter expects to receive). This ensures that the Carpenter can manage his life smoothly. The only cost is the premium that the insurance company will charge.
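A quick check of these numbers with SciPy's linprog (our choice of solver) confirms the dual solution and the absence of a duality gap:

```python
from scipy.optimize import linprog

# Primal: max 5X1 + 3X2  s.t.  2X1 + X2 <= 40,  X1 + 2X2 <= 50,  X >= 0
primal = linprog([-5, -3], A_ub=[[2, 1], [1, 2]], b_ub=[40, 50])

# Dual (the insurance problem): min 40U1 + 50U2
#   s.t.  2U1 + U2 >= 5,  U1 + 2U2 >= 3,  U >= 0  (>= rows negated to <=)
dual = linprog([40, 50], A_ub=[[-2, -1], [-1, -2]], b_ub=[-5, -3])

print(dual.x)                 # U1 = 7/3, U2 = 1/3
print(-primal.fun, dual.fun)  # both 110: no duality gap
```

Both problems attain the same optimal value, 110, as the equilibrium property promises.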

Shadow Price Unit of Measure: Notice that the unit of measure of the shadow price of an RHS is the unit of measure of the primal objective divided by the unit of measure of that RHS. For example, for the Carpenter's problem, U1 = [\$/week] / [hours/week] = \$/hour. Thus

U1 = 7/3 \$/hour, and U2 = 1/3 \$/unit of raw material.

As you can see, the insurance company problem is closely related to the original problem.

In OR/MS/DS modeling terminology, the original problem is called the Primal Problem while the related problem is called its Dual Problem.

In the Carpenter's Problem and its Dual Problem, the Optimal Value for both problems is always the same. This fact is referred to as Equilibrium (taken from the complementarity theory, equilibrium of economical systems, and efficiency in Pareto's sense) between the Primal and the Dual Problems. Therefore, there is no duality gap in linear programming.

The dual solution provides important economic interpretations, such as the marginal values of the RHS elements. The elements of the dual solution are known as Lagrangian multipliers; they provide a (tight) bound on the optimal value of the primal, and vice versa. For example, considering the Carpenter's problem, the dual solution can be used to find a tight upper bound for the optimal value, as follows. Multiply each ≤ constraint by its corresponding dual value and then add them up; we get:

7/3 × [2X1 + X2 ≤ 40]
1/3 × [X1 + 2X2 ≤ 50]
_____________________
5X1 + 3X2 ≤ 110

Notice that the resultant left-hand side is the objective function of the primal problem, and this upper bound is a tight one, since the optimal value is 110.

#### Managerial Roundoff Errors

You must be careful whenever you round the value of a shadow price. For example, suppose the shadow price of a resource constraint is 8/3 ≈ \$2.67. If you wish to buy more of this resource, you should not pay more than \$2.66; whenever you want to sell any unit of this resource, you should not sell it at a price below \$2.67.

A similar error might occur whenever you round the limits on the sensitivity ranges. One must be careful because the upper limit and lower limit must be rounded down and up, respectively.

You know by now that the shadow prices are the solution to the dual problem. Here is a numerical example.

Compute the shadow price for both resources in the following LP problem:

Max -X1 + 2X2
S.T. X1 + X2 ≤ 5
X1 + 2X2 ≤ 6
and both X1, X2 non-negative

The solution to this primal problem (using, e.g., the graphical method) is X1 = 0, X2 = 3, with the leftover S1 = 2 of the first resource, while the second resource is fully utilized, S2 = 0.

The shadow prices are the solution to the dual problem:

Min 5U1 + 6U2
S.T. U1 + U2 ≥ -1
U1 + 2U2 ≥ 2
and both U1 and U2 are non-negative.

The solution to the dual problem (using, e.g., the graphical method) is U1 = 0, U2 = 1, which are the shadow prices for the first and second resources, respectively. Notice that whenever the slack/surplus of a constraint is non-zero, the shadow price of that constraint's RHS is always zero; however, the reverse statement may not hold. In this numerical example, S1 = 2 (i.e., the slack value of RHS1 of the primal) is non-zero; therefore U1 is equal to zero, as expected.
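Modern solvers report slacks and shadow prices directly. With SciPy's HiGHS-based linprog (our choice of tool; requires SciPy 1.7 or later), the dual values appear as res.ineqlin.marginals; since we solve the maximization by negating the objective, the shadow prices of the original problem are the negated marginals:

```python
from scipy.optimize import linprog

# Max -X1 + 2*X2 solved as min X1 - 2*X2
res = linprog([1, -2], A_ub=[[1, 1], [1, 2]], b_ub=[5, 6])

slack = res.slack                 # leftover of each resource
shadow = -res.ineqlin.marginals   # shadow prices of the max problem
print(slack, shadow)              # slack (2, 0); shadow prices (0, 1)
```

The first resource has slack S1 = 2, so its shadow price is zero, in agreement with complementary slackness.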

Consider the following problem:

Max X1 + X2
subject to:
X1 ≤ 1
X2 ≤ 1
X1 + X2 ≤ 2
all decision variables ≥ 0.

By using your computer package, you may verify that the shadow price for the third resource is zero, while there is no leftover of that resource at the optimal solution X1 = 1, X2 = 1.

#### Behavior of the Optimal Value with Respect to Changes in the RHS Values

To study the directional changes in the optimal value with respect to changes in the RHS (with no redundant constraints present, and all RHS ≥ 0), we distinguish the following two cases:

Case I: Maximization problem

For a ≤ constraint: The change is in the same direction. That is, increasing the value of the RHS does not decrease the optimal value; it increases or remains the same depending on whether the constraint is binding or non-binding.

For a ≥ constraint: The change is in the reverse direction. That is, increasing the value of the RHS does not increase the optimal value; it decreases or remains the same depending on whether the constraint is binding or non-binding.

For an = constraint: The change could be in either direction (see the More-for-less section).

Case II: Minimization problem

For a ≤ constraint: The change is in the reverse direction. That is, increasing the value of the RHS does not increase the optimal value; it decreases or remains the same depending on whether the constraint is binding or non-binding.

For a ≥ constraint: The change is in the same direction. That is, increasing the value of the RHS does not decrease the optimal value; it increases or remains the same depending on whether the constraint is binding or non-binding.

For an = constraint: The change could be in either direction (see the More-for-less section).

## Dealing with Uncertainties and Scenario Modeling

The business environment is often unpredictable and uncertain because of factors such as economic changes, government regulations, and dependence on subcontractors and vendors. Managers often find themselves in a dynamic, unsettled environment where even short-range plans must be constantly reassessed and incrementally adjusted. All this requires a change-oriented mentality to cope with uncertainties. Remember that surprise is not an element of a robust decision.

Managers use mathematical and computational constructs (models) in a variety of settings and for a variety of purposes, often to gain insight into possible outcomes of one or more courses of action. This may concern financial investments, the choice of whether (and how much) to insure, industrial practices, and environmental impacts. The use of models is complicated by the unavoidable presence of uncertainties, which arise at different stages: in the construction and corroboration of the model itself, and in its use. The use is often the culprit.

Every solution to a decision problem is based on certain parameters that are assumed to be fixed. Sensitivity analysis is a collection of post-solution activities to study and determine how sensitive the solution is to changes in the assumptions. Other names for such activities are stability analysis, what-if analysis, scenario modeling, ranging analysis, specificity analysis, uncertainty analysis, computational and numerical instability, functional instability, tolerance analysis, post-optimality analysis, allowable increases and decreases, and many other similar phrases that reflect the importance of this stage of the modeling.

Numerical Example: Consider the following problem:

Max 6X1 + 4.01X2
subject to:
X1 + 2X2 ≤ 16
3X1 + 2X2 ≤ 24
all decision variables ≥ 0.

The optimal solution is (X1 = 4, X2 = 6), but a slight change in the objective function can produce a completely different optimal solution. For example, if we change it to 6X1 + 3.99X2, then the optimal solution is (X1 = 8, X2 = 0). That is, decreasing the second coefficient by 0.5% changes the solution drastically! Therefore, the optimal solution is not stable with respect to this input parameter.
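This instability is easy to demonstrate numerically; the following is a sketch assuming SciPy's linprog (which minimizes, so the objective is negated).

```python
from scipy.optimize import linprog

A_ub = [[1, 2], [3, 2]]           # X1 + 2X2 <= 16, 3X1 + 2X2 <= 24
b_ub = [16, 24]

solutions = {}
for c2 in (4.01, 3.99):
    res = linprog([-6, -c2], A_ub=A_ub, b_ub=b_ub, method="highs")
    solutions[c2] = res.x
    print(c2, res.x, -res.fun)
# c2 = 4.01 gives (4, 6); c2 = 3.99 gives (8, 0) -- a drastic jump
# between extreme points for a 0.5% change in one coefficient.
```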

Sensitivity analysis is not the term typically employed in econometrics for investigating the response of a solution to perturbations in parameters. In econometrics, the process of changing the value of a parameter in a model, in order to see its individual impact on the performance measure, is called comparative statics or comparative dynamics, depending on the type of model under consideration.

Uncertainty in a model can have different origins in different decision problems. It may be due to incomplete information, fluctuations inherent in the problem, or unpredictable changes in the future. Current approaches to dealing with uncertainties include:

Scenario Analysis: In this approach one assumes scenarios (e.g., certain combinations of possible values of the uncertain parameters) and solves the problem for each. By solving the problem repeatedly for different scenarios and studying the solutions obtained, the manager observes sensitivities and heuristically decides on an approximate solution, which is a subjective choice.

Worst-case Analysis: This technique builds safety margins into the problem at the planning stage to guard against the worst outcome.

Monte-Carlo Approach: Stochastic models assume that the uncertainty is known by its statistical distribution.

Sensitivity analysis vs. Stochastic Programming: Sensitivity analysis (SA) and Stochastic Programming (SP) formulations are the two major approaches used for dealing with uncertainty. SA is a post-optimality procedure with no power to influence the solution; it is used to investigate the effects of the uncertainty on the model's recommendation. An SP formulation, on the other hand, introduces probabilistic information about the problem data, albeit typically only through the first moment (i.e., the expected value) of the distribution of the objective function with respect to the uncertainty. This ignores the decision-maker's risk assessment, characterized by the variance or coefficient of variation.

One may tackle uncertainties in a more "deterministic" manner. The approach goes by various names, such as "scenario modeling", "deterministic modeling", "sensitivity analysis", "ranging procedures", and "stability analysis". The idea is to subjectively come up with a ranked list of the higher-level uncertainties that might presumably have a bigger impact on the eventual modeling result. This is done before zooming in on the details of any particular "scenario" or model. For example, the problem parameters and the uncontrollable factors indicated in the above figure for the Carpenter's problem required a complete sensitivity analysis in order to enable the carpenter to stay in control of his/her business.

Managerial Round-off Errors: You must be very careful whenever you round the values of the limits of the sensitivity ranges. To remain valid, the upper limit must be rounded down and the lower limit rounded up.

For construction of sensitivity analysis region that allows you to analyze any type of changes, including dependent, independent, and multiple changes in both the RHS values and the cost coefficients of LP visit Construction of General Sensitivity Regions site.

Arsham H., Algorithms for sensitivity information in discrete-event systems simulation, Simulation Practice and Theory, 6(1), 1-22, 1998.
Arsham H., Perturbation analysis of general LP models: A unified approach to sensitivity, parametric, tolerance, and more-for-less analysis, Mathematical and Computer Modelling, 13(3), 79-102, 1990.
Kouvelis P., and G. Yu, Robust Discrete Optimization and its Applications, Kluwer Academic Publishers, 1997. Provides a comprehensive discussion of motivations for sources of uncertainty in optimization.

#### Computation of Sensitivity Ranges for Small Size Problems

To compute sensitivity ranges for LP Problems with at most 2 decision variables and/or at most 2 constraints, you may like to try the following easy-to-use approach.

The only restriction is that no equality constraint is permitted. An equality constraint creates degeneracy, because every equality constraint, for example X1 + X2 = 1, is equivalent to two simultaneous constraints: X1 + X2 ≤ 1 and X1 + X2 ≥ 1. The number of binding constraints in such a case exceeds the number of decision variables. This is known as a degenerate situation, for which the usual sensitivity analysis may not be valid.

Cost Sensitivity Range for LP Problems with two Decision Variables

Referring to the Carpenter's Problem, changing the profit on each product changes the slope of the iso-value objective function. For "small" changes, the optimal solution stays at the same extreme point. For larger changes, the optimal solution moves to another extreme point; then we have to modify the formulation and solve a new problem.

The problem is to find a range for each cost coefficient c(j), of variable Xj, such that the current optimal solution, i.e., the current extreme point (corner point), remains optimal.

For a 2-dimensional LP problem, you may like to try the following approach to find the amount of increase/decrease in any one of the coefficients of the objective function (also known as the cost coefficients; historically, during World War II, the first LP problem was a cost-minimization problem) that maintains the validity of the current optimal solution. The only condition required for this approach is that no equality constraint is permitted, since an equality constraint leads to degeneracy, for which the usual sensitivity analysis may not be valid.

Step 1: Consider the only two binding constraints at the current optimal solution. If there are more than two binding constraints, then this is the case of degeneracy, for which the usual sensitivity analysis may not be valid.

Step 2: Perturb the jth cost coefficient by parameter cj (this is the unknown amount of changes).

Step 3: Construct one equation corresponding to each binding constraint, as follows:

(perturbed cost of Xj) / (coefficient of Xj in the constraint) = (cost coefficient of the other variable) / (coefficient of that variable in the constraint).

Step 4: The amount of allowable increase is the smallest positive cj, while the allowable decrease is the absolute value of the negative cj closest to zero (the largest negative cj), obtained in Step 3.

Notice that if there is no positive (negative) cj, then the amount of the increase (decrease) is unlimited.
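The four steps above can be sketched as a small routine; the function below is an illustrative helper (not from the original text) that applies the ratio equation of Step 3 for a problem with two variables and two binding constraints.

```python
def cost_range(cj, other_cost, binding):
    """Sensitivity range for the cost coefficient cj of variable Xj.

    binding: (aj, ak) pairs giving, for each binding constraint, the
    coefficient of Xj and of the other variable.  Each pair yields the
    perturbation c solving (cj + c)/aj = other_cost/ak  (Step 3).
    """
    candidates = [other_cost * aj / ak - cj for aj, ak in binding]
    # Step 4: the smallest positive c is the allowable increase; the
    # negative c closest to zero gives the allowable decrease.
    increase = min((c for c in candidates if c > 0), default=float("inf"))
    decrease = max((c for c in candidates if c < 0), default=-float("inf"))
    return cj + decrease, cj + increase

# Carpenter's problem (Max 5X1 + 3X2; binding: 2X1 + X2 <= 40, X1 + 2X2 <= 50)
print(cost_range(5, 3, [(2, 1), (1, 2)]))    # (1.5, 6.0) for C1
# Counterexample problem (binding: X1 + X2 <= 2, X1 - X2 <= 0)
print(cost_range(5, 3, [(1, 1), (1, -1)]))   # (3.0, inf) for C1
```

Both outputs agree with the ranges worked out by hand in the examples that follow.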

Warnings:

1. Remember that you should never divide by zero. Division by zero is a common fallacy found in some textbooks. For example, in Introduction to Management Science, Taylor III, B., Prentice Hall, the author unfortunately divides by zero in Module A: The Simplex Solution Method, pp. A26-A27, when computing the needed column ratio in the simplex method. For more on this and other common fallacies, visit the Web site The Zero Saga & Confusions with Numbers. Here is a question for you: Which of the following are correct, and why?
a) any number divided by zero is undefined;
b) zero divided by any number is zero; or
c) any number divided by itself is 1.
2. Finding the Cost Sensitivity Range by the Graphical Method: It is a commonly held belief that one can compute the cost sensitivity range by bracketing the (perturbed) slope of the (iso-value) objective function between the slopes of the two lines resulting from the binding constraints. This graphical slope-based method for computing sensitivity ranges is described in popular textbooks, such as Anderson et al. (2007), Lawrence and Pasternack (2002), and Taylor (2010).

Unfortunately, these presentations are misleading. A warning should have been given that the approach is not general: it works if and only if the coefficients do not change sign.

In an LP with 2 variables and inequality constraints, suppose we have a unique, non-degenerate optimum at the intersection of two lines, as shown in the following figure. The claim is then that the range of objective coefficients for which this solution remains optimal is given by the slopes of the two lines.

The following is a counterexample. It points out that one must be careful to state that the coefficients do not change sign.

Counterexample: Maximize 5X1 + 3X2
subject to:
X1 + X2 ≤ 2,
X1 - X2 ≤ 0,
X1 ≥ 0, X2 ≥ 0.

The Carpenter's Problem:

Maximize 5X1 + 3X2

Subject to:
2X1 + X2 ≤ 40
X1 + 2X2 ≤ 50
X1 ≥ 0
X2 ≥ 0

Computation of allowable increase/decrease on the C1 = 5: The binding constraints are the first and the second one. Perturbing this cost coefficient by c1, we have 5 + c1. At step 3, we have:

(5 + c1)/2 = 3/1, for the first constraint, and (5 + c1)/1 = 3/2 for the second constraint. Solving these two equations, we have: c1 = 1 and c1 = -3.5. The allowable increase is 1, while the allowable decrease is 3.5. As long as the first cost coefficient C1 remains within the interval [5 - 3.5, 5 + 1] = [1.5, 6], the current optimal solution remains optimal.

Similarly for the second cost coefficient C2 = 3, we have the sensitivity range of [2.5, 10].

As another example, consider the earlier problem:

Maximize 5X1 + 3X2

Subject to:
X1 + X2 ≤ 2
X1 - X2 ≤ 0
X1 ≥ 0
X2 ≥ 0

Computation of allowable increase/decrease on the C1 = 5: The binding constraints are the first and the second one. Perturbing this cost coefficient by c1, we have 5 + c1. At step 3, we have:

(5 + c1)/1 = 3/1 for the first constraint, and (5 + c1)/1 = 3/(-1) for the second constraint. Solving these two equations, we have: c1 = -2 and c1 = -8. The allowable decrease is 2, while the allowable increase is unlimited. As long as the first cost coefficient C1 remains within the interval [5 - 2, 5 + ∞) = [3, ∞), the current optimal solution remains optimal.

Similarly, for the second cost coefficient C2 = 3 we have the sensitivity range of
[3 - 8, 3 + 2] = [-5, 5].


RHS Sensitivity Range for LP Problems with at Most Two Constraints

Referring to the Carpenter's Problem: for small changes in either resource, the optimal strategy (i.e., make the mixed product) stays valid. For larger changes, this optimal strategy changes, and the Carpenter must make either all tables or all chairs. This is a drastic change in strategy; therefore, we have to revise the formulation and solve a new problem.

Apart from the above information, we are also interested in knowing at what "reasonable" price (or cost) the Carpenter can sell (or buy) each resource. That is, how far can we increase or decrease RHS(i), for fixed i, while maintaining the validity of the current shadow price of RHS(i)? Equivalently, how far can we increase or decrease RHS(i) while maintaining the current optimal solution to the dual problem?

Historically, the shadow price was defined as the improvement in the objective function value per unit increase in the right-hand side, because the problem was often posed as profit maximization, where improvement means an increase.

Also note that for any RHS, the shadow price (also known as its marginal value) is the change in the optimal value per unit change in that particular RHS. However, in some cases it is not permitted to change the RHS by that much. The sensitivity range for the RHS provides the values for which the shadow price retains this economic meaning and remains unchanged.

How far can we increase or decrease each individual RHS in order to maintain the validity of the shadow prices? This question is equivalent to asking for the sensitivity range of the corresponding cost coefficient in the dual problem.

The dual of the Carpenter's Problem is:

Minimize 40U1 + 50U2

Subject to:
2U1 + U2 ≥ 5
U1 + 2U2 ≥ 3
U1 ≥ 0
U2 ≥ 0

The optimal solution is U1 = 7/3 and U2 = 1/3 (which are the shadow prices).
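As a check, the dual can be solved directly; the following is a sketch assuming SciPy's linprog (the ≥-constraints are negated into ≤ form, since linprog accepts only upper-bound inequalities).

```python
from scipy.optimize import linprog

c = [40, 50]                       # minimize 40U1 + 50U2
A_ub = [[-2, -1], [-1, -2]]        # 2U1 + U2 >= 5, U1 + 2U2 >= 3
b_ub = [-5, -3]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
print(res.x)     # (7/3, 1/3) -- the shadow prices
print(res.fun)   # 110 -- equal to the primal optimal value, as duality requires
```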

The Carpenter's Problem:

Maximize 5X1 + 3X2

Subject to:
2X1 + X2 ≤ 40
X1 + 2X2 ≤ 50
X1 ≥ 0
X2 ≥ 0

Computation of Range for the RHS1: The first two constraints are binding, therefore:

(40 + r1)/2 = 50/1, and (40 + r1)/1 = 50/2.

Solving these two equations gives: r1 = 60 and r1 = -15. Therefore, the sensitivity range for the first RHS in the carpenter's problem is: [40-15, 40 + 60] = [25, 100].

Similarly, for the second RHS, we obtain: [50 - 30, 50 + 30] = [20, 80].
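These ranges can also be verified empirically: re-solve the primal with a perturbed RHS1 and watch the shadow prices. The following is a sketch assuming SciPy's linprog, whose HiGHS methods report the duals of the inequality constraints as `ineqlin.marginals`.

```python
from scipy.optimize import linprog

def shadow_prices(b1):
    # Carpenter's problem with a perturbed first RHS; negate the
    # minimization marginals to get shadow prices of the max problem.
    res = linprog([-5, -3], A_ub=[[2, 1], [1, 2]], b_ub=[b1, 50],
                  method="highs")
    return [-m for m in res.ineqlin.marginals]

sp_in = shadow_prices(60)    # 60 is inside the range [25, 100]
sp_out = shadow_prices(120)  # 120 is outside the range
print(sp_in)                 # still (7/3, 1/3)
print(sp_out)                # the prices change, here to (0, 5)
```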


Lawrence J., Jr., and B. Pasternack, Applied Management Science: Modeling, Spreadsheet Analysis, and Communication for Decision Making, John Wiley and Sons, 2002.
Anderson D., Sweeney D., and Williams T., An Introduction to Management Science, West Publisher, 2007.
Taylor III, B., Introduction to Management Science, Prentice Hall, 2006.

#### Marginal Analysis & Factors Prioritization

The major applications of sensitivity analysis information for the decision-maker are the Marginal Analysis and the Factors Prioritization:

Marginal Analysis: Marginal analysis is a concept employed in microeconomics where the marginal change in some parameter is of interest to the decision-maker. A marginal change is a very small addition to, or subtraction from, the total quantity of some parameter. Marginal analysis is the analysis of the relationships between such changes and the performance measure. Examples include: marginal cost, marginal revenue, marginal product, marginal rate of substitution, marginal propensity to save, and so on. In optimization, marginal analysis is employed primarily to explicate changes in the parameters and their impact on the optimal value. Sensitivity analysis, i.e., the analysis of the effect of small variations in system parameters on the output measures, can be performed by computing the derivatives of the output measures with respect to the parameters.

Decision-makers ponder which factors are important and have a major impact on the decision outcome. Marginal analysis aims at identifying the important factors (i.e., parameters) and ranking them according to their impact on the optimal value. One may derive the marginal values by evaluating the first derivative of the performance measure with respect to each parameter at its specific value.

Factors Prioritization Based on Sensitivity Ranges: Consider the Carpenter's Problem:

Maximize 5X1 + 3X2

Subject to:
2X1 + X2 ≤ 40
X1 + 2X2 ≤ 50
X1 ≥ 0
X2 ≥ 0

While the computed sensitivity ranges are valid for one change at a time, and not necessarily for simultaneous changes, they provide useful information for the prioritization of uncontrollable factors. The following figure provides a comparative chart for cost-coefficient prioritization purposes.

The next figure depicts the shadow price as the slope (i.e., marginal value) of a linear function measuring the amount of change in the optimal value resulting from any change in RHS1, provided the change is within the sensitivity range of RHS1. This function can also be used to solve the inverse problem: what should the RHS1 value be to achieve a specific optimal value?

#### What Is the 100% Rule (sensitivity region)

The sensitivity range presented in the previous section is a "one-change-at-a-time" type of what-if analysis. Consider the Carpenter's problem, and suppose we want to find the simultaneous allowable increases in the RHS values (r1, r2 ≥ 0). There is an easy method to apply here, known as "the 100% rule", which says that the shadow prices remain unchanged if the following sufficient condition holds:

r1/60 + r2/30 ≤ 1, 0 ≤ r1 ≤ 60, 0 ≤ r2 ≤ 30.

In the above, 60 and 30 are the allowable increases for the RHS values, based on the application of ordinary sensitivity analysis. That is, whenever the first and second RHS increase by r1 and r2, respectively, the shadow prices for the RHS values remain unchanged as long as this inequality holds. Notice that this is a sufficient condition: if the condition is violated, the shadow prices may change or may still remain the same. The term "100% rule" becomes evident when you notice that on the left-hand side of the condition each term is a non-negative number less than one, which can be represented as a percentage of the allowable change. The total sum of such changes should not exceed 100%.

Applying the 100% rule to the other three possible changes on the RHS's, we have:

r1/(-15) + r2/(-30) ≤ 1, -15 ≤ r1 ≤ 0, -30 ≤ r2 ≤ 0.
r1/60 + r2/(-30) ≤ 1, 0 ≤ r1 ≤ 60, -30 ≤ r2 ≤ 0.
r1/(-15) + r2/30 ≤ 1, -15 ≤ r1 ≤ 0, 0 ≤ r2 ≤ 30.

The following figure depicts the sensitivity region for both RHS values resulting from the application of the 100% rule to the Carpenter's problem. From a geometric point of view, notice that the polyhedron with vertices (60, 0), (0, 30), (-15, 0), and (0, -30) in the figure is only a subset of a larger sensitivity region for changes in both RHS values. Therefore, remaining within this sensitivity region is only a sufficient condition (not a necessary one) for maintaining the validity of the current shadow prices.
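The rule can be sketched as a check function; the helper below is illustrative (its name is not from the text), using the one-at-a-time allowable changes computed earlier.

```python
def within_100_rule(changes, allow_inc, allow_dec):
    """Sufficient condition: the fractions of allowable change sum to <= 1.

    changes[i]          -- proposed change in RHS i (positive or negative)
    allow_inc/allow_dec -- one-at-a-time allowable increase and (positive)
                           allowable decrease for each RHS.
    """
    total = 0.0
    for r, inc, dec in zip(changes, allow_inc, allow_dec):
        total += r / inc if r >= 0 else -r / dec
    return total <= 1.0

# Carpenter's problem: allowable increases (60, 30), decreases (15, 30).
print(within_100_rule((30, 15), (60, 30), (15, 30)))   # True: prices unchanged
print(within_100_rule((40, 20), (60, 30), (15, 30)))   # False: no guarantee
```

A `False` result only means the sufficient condition fails; as noted above, the shadow prices may change or may still remain the same.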

Similar results can be obtained for simultaneous changes in the cost coefficients. For example, suppose we want to find the simultaneous allowable decrease in C1 and increase in C2, that is, changes in both cost coefficients by c1 ≤ 0 and c2 ≥ 0.
The 100% rule states that the current basis remains optimal provided that:

c1/(-3.5) + c2/7 ≤ 1, -3.5 ≤ c1 ≤ 0, 0 ≤ c2 ≤ 7.

Here, 3.5 and 7 are the allowable decrease and increase for the cost coefficients C1 and C2, respectively, which we found earlier by the application of ordinary sensitivity analysis.

The above Figure also depicts all the other possibilities of increasing and decreasing both cost coefficients values as the result of the application of the 100% rule, while maintaining the current optimal solution for the Carpenter's problem.

As another numerical example, consider the following problem:

Maximize 5X1 + 3X2

Subject to:
X1 + X2 ≤ 2
X1 - X2 ≤ 0
X1 ≥ 0
X2 ≥ 0

You may recall that we have already computed the one-change-at-a-time sensitivity ranges for this problem in the Computation of Sensitivity Ranges section. The sensitivity range for the first cost coefficient is [5 - 2, 5 + ∞) = [3, ∞), while for the second cost coefficient it is [3 - 8, 3 + 2] = [-5, 5]. You should be able to reproduce a figure similar to the above, depicting all other possibilities of increasing/decreasing both cost coefficient values as a result of the application of the 100% rule, while maintaining the current optimal solution for this problem.

The application of the 100% rule as presented here is general and can be extended to LP problems of any size. As the size of the problem grows, this type of sensitivity region becomes smaller and therefore less useful to managers. There are more powerful techniques (providing both necessary and sufficient conditions), more useful to managers, for dependent (or independent) simultaneous changes in the parameters.


#### Adding a Constraint

The process: Insert the current optimal solution into the newly added constraint. If the constraint is not violated, the new constraint does NOT affect the optimal solution. Otherwise, the new problem must be re-solved to obtain the new optimal solution.

#### Deleting a Constraint

The process: Determine whether the constraint is binding (i.e., active, important) by checking whether its slack/surplus value is zero. If binding, deletion is very likely to change the current optimal solution: delete the constraint and re-solve the problem. Otherwise (if not binding), deletion will not affect the optimal solution.

#### Replacing a Constraint

Suppose we replace a constraint with a new one. What is the effect of this exchange?
The process: Determine whether the old constraint is binding (i.e., active, important) by checking whether its slack/surplus value is zero. If binding, the replacement is very likely to affect the current optimal solution: replace the constraint and re-solve the problem. Otherwise (if not binding), determine whether the current solution satisfies the new constraint. If it does, the exchange will not affect the optimal solution. If it does not, replace the old constraint with the new one and re-solve the problem.

#### Changes in the Coefficients of Constraints

Any changes in the coefficients of constraints might cause significant changes to the nominal (original) problem. Any such changes fall logically within the sensitivity analysis; however, these are not changes that can be analyzed using the information generated by the optimal solution. Such changes are best dealt with by solving the modified problem anew.

#### Adding a Variable (e.g., Introducing a new product)

The shadow prices of the resources measure the marginal worth of those resources. Knowing the resource requirements of the new variable and its coefficient in the objective function, one can decide, e.g., whether the new product is profitable.

The process: Using the shadow prices, compute the opportunity cost of producing the new product (i.e., the value of what goes into producing it). Then compare it with the product's net profit. If the profit is less than or equal to this cost, DO NOT produce the new product; otherwise it is profitable to produce it. To find the production level of the new product, re-solve the new problem.
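This process can be sketched as follows; the candidate product's data (1 hour of labor, 1 unit of material) and the helper name are hypothetical, while the shadow prices are those of the Carpenter's problem.

```python
def worth_producing(net_profit, resource_use, shadow_prices):
    """Compare the product's net profit with the opportunity cost of the
    resources it would consume, valued at their shadow prices."""
    implied_cost = sum(u * a for u, a in zip(shadow_prices, resource_use))
    return net_profit > implied_cost

# Carpenter's shadow prices: labor 7/3, material 1/3.
print(worth_producing(3.0, (1, 1), (7/3, 1/3)))   # True:  3 > 8/3, produce
print(worth_producing(2.5, (1, 1), (7/3, 1/3)))   # False: 2.5 < 8/3, do not
```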

#### Deleting a Variable (e.g., Terminating a product)

The process: If the value of the deleted variable is zero in the current optimal solution, then the current optimal solution is still optimal without that variable. Otherwise, delete the variable from the objective function and the constraints, and then re-solve the new problem.

#### Optimal Resource Allocation Problem

Since resources are always scarce, managers are concerned with the problem of optimal resource allocation. You will recall that in The Carpenter's Problem formulation we treated both resources as parameters, that is, as given fixed numerical values:

Maximize 5 X1 + 3 X2

Subject to:
2 X1 + X2 ≤ 40 labor constraint
X1 + 2 X2 ≤ 50 material constraint
and both X1, X2 are nonnegative.

We usually classify constraints as resource or production type constraints. In most maximization problems the resource constraints are the natural part of the problem, while in minimization problems the production constraints are the most important part.

Suppose we wish to find the best allocation of the labor resource for the Carpenter. In other words, what is the best number of hours the Carpenter should allocate to his or her business?

Let the allocated number of hours be R1, whose optimal value we want to determine. Therefore, the mathematical model is to find R1 such that:

Maximize 5 X1 + 3 X2

Subject to:
2 X1 + X2 ≤ R1 labor constraint
X1 + 2 X2 ≤ 50 material constraint
and all variables X1, X2, and R1 are nonnegative.

We are now treating R1 not as a parameter but as a decision variable. That is, the maximization is over all three variables; X1, X2, and R1:

Maximize 5 X1 + 3 X2

Subject to:
2 X1 + X2 - R1 ≤ 0 labor constraint
X1 + 2 X2 ≤ 50 material constraint
and all variables X1, X2, and R1 are nonnegative.

Using your LP software, the optimal solution is X1 = 50, X2 = 0, with an optimal labor allocation of R1 = 100 hours. This brings an optimal value of \$250.

Notice that the optimal resource allocation value is always the same as the upper bound on the RHS1 sensitivity range generated by your software.

The allowable increase in the number of hours is 100 - 40 = 60 hours, which brings an additional \$250 - \$110 = \$140.

We can even obtain the shadow price for this resource from this information. The shadow price is the rate of change in the optimal value with respect to a change in the RHS; therefore it is (250 - 110)/(100 - 40) = 140/60 = 7/3, which is the shadow price of RHS1 that we found by other methods in earlier sections.
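The three-variable model above can be sketched with SciPy's linprog (an assumed solver; any LP code works). R1 enters the objective with a zero coefficient.

```python
from scipy.optimize import linprog

c = [-5, -3, 0]                      # maximize 5X1 + 3X2 (R1 costs nothing)
A_ub = [[2, 1, -1],                  # 2X1 + X2 - R1 <= 0
        [1, 2, 0]]                   # X1 + 2X2 <= 50
b_ub = [0, 50]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
x1, x2, r1 = res.x
print(x1, x2, r1, -res.fun)
# X1 = 50, X2 = 0, optimal value 250; any R1 >= 100 is feasible here,
# and the vertex solution reported by the solver allocates R1 = 100.
```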

#### Determination of Product's Least Net Profit

In most "price-taker" business settings, the net profit is an uncontrollable factor. The manager is interested in knowing the least net profit for a product that makes it profitable to produce at all.

You may recall that in The Carpenter's Problem we treated both net profits (\$5 and \$3) as uncontrollable inputs, that is, values determined by the market:

Maximize 5 X1 + 3 X2

Subject to:
2 X1 + X2 ≤ 40 labor constraint
X1 + 2 X2 ≤ 50 material constraint
And both X1 and X2 are nonnegative.

This has the optimal strategy of X1 =10, X2 = 20, with an optimal value of \$110.

Suppose the Carpenter wants to know the least value for the first coefficient in the objective function, which is currently \$5, in order to make it still profitable to produce the first product (i.e., tables).

Suppose the least net profit is c1 dollars; therefore, the problem is to find c1 such that:

Maximize c1 X1 + 3 X2

Subject to:
2 X1 + X2 ≤ 40 labor constraint
X1 + 2 X2 ≤ 50 material constraint
And all variables X1, X2, c1 are nonnegative.

Since c1 appears in the objective function of this primal formulation, it is easier to work with the dual problem, in which c1 appears in a constraint:

Minimize 40 U1 + 50 U2
Subject to:
2U1 + 1U2 ≥ c1 Net profit from a table
1U1 + 2U2 ≥ 3 Net profit from a chair
And U1, U2, c1 are nonnegative.

We are now treating the net profit c1 as a decision variable. The minimization is over all three variables U1, U2, and c1:

Minimize 40 U1 + 50 U2
Subject to:
2U1 + 1U2 - c1 ≥ 0
1U1 + 2U2 ≥ 3
And U1, U2, c1 are nonnegative.

Implementing this problem on your computer package shows that the optimal solution is U1 = \$7/3, U2 = \$1/3, and c1 = \$1.5.

There are alternative optimal solutions at this boundary value of the sensitivity range for the cost coefficient. The solution corresponds to the lower limit of the cost-coefficient sensitivity range computed earlier for the Carpenter's Problem. The least net profit is always the same as the lower bound on the cost-coefficient sensitivity range generated by your software.

#### Min Max and Max Min Computation in a Single-Run

Suppose we wish to find the worst of several objective functions values defined on a common set of constraints in a single-run computer implementation.

As an application, suppose that in The Carpenter's Problem, without loss of generality, we have three markets with objective functions 5X1 + 3X2, 7X1 + 2X2, and 4X1 + 4X2, respectively. The carpenter is interested in knowing the worst market. That is, the solution of the following problem:

The Min Max Problem:

Min Max {5X1 + 3X2, 7X1 + 2X2, 4X1 + 4X2}

Subject to:
2 X1 + X2 ≤ 40
X1 + 2 X2 ≤ 50
and both X1, X2 are nonnegative.

The Min Max Problem is equivalent to:

Max y

Subject to:
y ≤ 5X1 + 3X2
y ≤ 7X1 + 2X2
y ≤ 4X1 + 4X2
2X1 + X2 ≤ 40
X1 + 2X2 ≤ 50
And all variables X1, X2, y, are nonnegative.

If you take all the variables to the left-hand side of the constraints and implement this problem on your computer package, the optimal solution is X1 = 10, X2 = 20, y = \$110. This means the first and the second markets are the worst (because the first and the second constraints are binding) bringing only \$110 net profit.
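Taking all variables to the left-hand side as suggested, the reformulation can be sketched with SciPy's linprog (an assumed solver).

```python
from scipy.optimize import linprog

c = [0, 0, -1]                       # maximize y
A_ub = [[-5, -3, 1],                 # y - 5X1 - 3X2 <= 0
        [-7, -2, 1],                 # y - 7X1 - 2X2 <= 0
        [-4, -4, 1],                 # y - 4X1 - 4X2 <= 0
        [2, 1, 0],                   # 2X1 + X2 <= 40
        [1, 2, 0]]                   # X1 + 2X2 <= 50
b_ub = [0, 0, 0, 40, 50]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
print(res.x)      # X1 = 10, X2 = 20, y = 110
```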

Similarly, one can solve the maximum of min of several objective functions in a single run.

#### Feasibility Problem: Goal-Seeking Indicators

In most business applications, the manager wishes to achieve a specific goal, while satisfying the constraints of the model. The user does not particularly want to optimize anything so there is no reason to define an objective function. This type of problem is usually called a feasibility problem.

Although some decision-makers would prefer an optimal solution, in most practical situations the decision-maker aims at satisficing, or making incremental changes, rather than optimizing. This is because the human mind has bounded rationality and hence cannot comprehend all alternatives. In the incremental approach to decision-making, the manager takes only small steps, or incremental moves, away from the existing system. This is usually accomplished by a "local search" to find a "good enough" solution. This problem is referred to as a "satisficing problem", "feasibility problem", or "goal-seeking" problem. Therefore, the aim is to achieve an improvement to a level that is good enough, given current information and resources. One reason a business manager may overestimate the importance of the optimal strategy is that organizations often use indicators as "proxies" for satisfying their immediate needs. Most managers pay attention to indicators, such as profit, cash flow, and share price, as signals of survival rather than as goals for optimization.

To solve the goal-seeking problem, one must first add the goal to the constraint set. To convert the goal-seeking problem into an optimization problem, one may create a dummy objective function, e.g., a linear combination of a subset of the decision variables. If you maximize this objective function, you will get a feasible solution (if one exists). If you minimize it, you might get another one (usually at the other "side" of the feasible region). You could repeat this with different objective functions.

Another approach is to use "Goal Programming" models that deal precisely with problems of constraint satisfaction without necessarily having a single objective. Basically, they look at measures of constraint violation and try to minimize them. You can formulate and solve goal programming models in ordinary LP, using ordinary LP solution codes.

In an artificial-variable-free solution algorithm one may use a zero (dummy) objective function, but some software packages, such as Lindo, do not accept one. When using such packages, one may maximize or minimize any single variable as the objective function.

Numerical Example

Consider Example 1 in the Initialization of the Simplex Method section of a companion site to this site. Instead of maximizing, we now wish to achieve a goal of 4. That is,

Goal: -X1 + 2X2 = 4
subject to:
X1 + X2 ≥ 2,
-X1 + X2 ≥ 1,
X2 ≤ 3,
and X1, X2 ≥ 0.

Adding this goal to the constraint set and converting the constraints into equality form, we have:

-X1 + 2X2 = 4, X1 + X2 - S1 = 2, -X1 + X2 - S2 = 1, X2 + S3 = 3, and
X1, X2, S1, S2, S3 ≥ 0.

A solution is X1 = 2, X2 = 3, S1 = 3, S2 = 0, and S3 = 0.
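The numerical example can be sketched with a zero dummy objective, assuming SciPy's linprog (the ≥-constraints are negated into ≤ form, and the goal enters as an equality constraint).

```python
from scipy.optimize import linprog

c = [0, 0]                           # dummy objective: nothing to optimize
A_eq = [[-1, 2]]                     # goal: -X1 + 2X2 = 4
b_eq = [4]
A_ub = [[-1, -1],                    # X1 + X2 >= 2
        [1, -1],                     # -X1 + X2 >= 1
        [0, 1]]                      # X2 <= 3
b_ub = [-2, -1, 3]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              method="highs")
print(res.status, res.x)
# status 0 means a feasible point satisfying the goal was found; the
# feasible set is a segment here, so the exact point may vary by solver.
```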

For details on the solution algorithms, visit the Web site Artificial-variable Free Solution Algorithms.

Borden T., and W. Banta, (Ed.), Using Performance Indicators to Guide Strategic Decision Making, Jossey-Bass Pub., 1994.
Eilon S., The Art of Reckoning: Analysis of Performance Criteria, Academic Press, 1984.

The Copyright Statement: The fair use, according to the 1996 Fair Use Guidelines for Educational Multimedia, of materials presented on this Web site is permitted for non-commercial and classroom purposes only.
This site may be mirrored intact (including these notices), on any server with public access. All files are available at http://home.ubalt.edu/ntsbarsh/Business-stat for mirroring.