Zero-Sum Games with Applications
Introduction to Game Theory:
Winning Business in a Competitive Environment

For my visitors from the Spanish-speaking world, this site is available in Spanish at:

Mirror Site for Latin America     Site in the U.S.A.

This Web site presents the theory of the Two-person Zero-sum games with an illustrative numerical example. Applications to optimal portfolio selections in investment decision together with its risk assessment are provided. Game theory is indeed about modeling for winning business in a competitive environment.

Professor Hossein Arsham   

To search the site, try Edit | Find in page [Ctrl + f]. Enter a word or phrase in the dialogue box, e.g., "risk" or "utility". If the first appearance of the word/phrase is not what you are looking for, try Find Next.


  1. Introduction & Summary
  2. Investment Decisions: Optimal Portfolio Selections
  3. A Classification of Investors Relative Attitudes toward Risk and Its Impact
  4. Risk Assessment: How Good Is Your Portfolio?
  5. Portfolio's Factors-Prioritization & Stability Analysis
  6. The Gambler’s Ruin Probability
  7. Other Competition Modeling Techniques
  8. JavaScript E-labs,  Europe Mirror Site.
  9. Linear Optimization Solvers to Download (free-of-charge),  Europe Mirror Site.


Introduction & Summary

Game theory describes the situations involving conflict in which the payoff is affected by the actions and counter-actions of intelligent opponents. Two-person zero-sum games play a central role in the development of the theory of games.

Game theory is indeed about modeling for winning business in a competitive environment. In winning a large bid, for example, several factors are important: establishing and maintaining a preferred-supplier position, developing a relationship of trust with the customer, the offering itself, and the price.

In order to develop the game theory concepts, consider the following game in which player I has two choices from which to select, and player II has three alternatives for each choice of player I. The payoff matrix T is given below:

                   player II
                j=1    j=2    j=3
player I  i=1    4      1      3
          i=2    2      3      4

            The Payoff Matrix

In the payoff matrix, the two rows (i = 1, 2) represent the two possible strategies that player I can employ, and the three columns (j = 1, 2, 3) represent the three possible strategies that player II can employ. The payoff matrix is oriented to player I, meaning that a positive tij is a gain for player I and a loss for player II, and a negative tij is a gain for player II and a loss for player I. For example, if player I uses strategy 2 and player II uses strategy 1, player I receives t21 = 2 units and player II thus loses 2 units. Clearly, in our example player II always loses; however, the objective is to minimize the payoff to player I.

A graphical representation of the game may be viewed as a tournament played on a directed network. In this setting, there are two kinds of nodes: terminating and continuing. Terminating nodes lead to no other node, and player I receives the payoff associated with his/her own arcs. Continuing nodes lead to at least one additional node. The two players simultaneously choose one node each. For the numerical example, the corresponding graph is given in the figure below. In this directed network an edge goes from vertex u to vertex w; if one player chooses w while the other chooses u, the player who selects w, at the head of the arc connecting the two nodes, receives payoff tuw.

A pure strategy pair (i, j) is in equilibrium if and only if the corresponding element tij is both the largest in its column and the smallest in its row. Such an element is also called a saddle point (by analogy with the surface of a saddle).

An "equilibrium decision point", that is a "saddle point", also known as a "minimax point", represents a decision by two players upon which neither can improve by unilaterally departing from it.

When there is no saddle point, one must choose the strategy randomly. This is the idea behind a mixed strategy. A mixed strategy for a player is defined as a probability distribution on the set of pure strategies. Our numerical example is such a case. Player I can assure a payoff of maxi minj tij = 2, while player II can play in such a manner that player I receives no more than minj maxi tij = 3. The problem of how the difference [minj maxi tij] - [maxi minj tij] ≥ 0 should be subdivided between the players thus remains open. In such cases, the players naturally seek additional strategic opportunities to assure themselves of the largest possible share of this difference. To achieve this objective, they must select their strategies randomly to confuse each other.
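The two security levels quoted above can be read off the payoff matrix directly; a tiny Python sketch:

```python
T = [[4, 1, 3],
     [2, 3, 4]]

# Player I's security level: the best of the row minima.
maximin = max(min(row) for row in T)                               # 2
# Player II's security level: the smallest of the column maxima.
minimax = min(max(row[j] for row in T) for j in range(len(T[0])))  # 3

# maximin < minimax: no saddle point, so the players must turn to
# mixed strategies to contest the difference.
print(maximin, minimax)
```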

We will use a general method based on a linear programming (LP) formulation. This equivalency of games and LP may be surprising, since a LP problem involves just one decision-maker, but it should be noted that with each LP problem there is an associated problem called the dual LP. The optimal values of the objective functions of the two LPs are equal, corresponding to the value of the game. When solving LP by simplex-type methods, the optimal solution of the dual problem also appears as part of the final tableau. So we get v, Y*, and X* by solving one LP. LP formulation and the simplex method is the fastest, most practical, and most useful method for solving games with a large matrix T. Suppose that player II is permitted to adopt mixed strategies, but player I is allowed to use only pure strategies. What mixed strategies Y = (y1, y2, y3) should player II adopt to minimize the maximum expected payoff v? A moment's thought shows that player II must solve the following problem:
Min v = y
subject to: T.Y ≤ y
Ut.Y = 1

The minimization is over all elements of the decision vector Y ≥ 0; the scalar y is unrestricted in sign, and U is an n-dimensional column vector with all elements equal to one. The left-hand side of the first n constraints, by definition, is player II's expected return against player I's pure strategies. It turns out that these mixed strategies are still optimal if we allow player I to employ mixed strategies. Mixed strategies are also known as "saddle point" strategies. Of course, a game with a saddle point can be solved by this method as well. The standard formulation in the simplex method requires that all variables be non-negative. To achieve this condition one may substitute the difference of two new variables for y.

The optimal strategy for player I is the solution to the dual problem of player II's problem. The simplex method of linear programming provides optimal strategies for both players.

The social-games fairness norm is a convention that evolved to coordinate behavior on an equilibrium of a society's Game of Life. According to this view, the metaphysics of Immanuel Kant could be abandoned in favor of the naturalistic approach and the morality of David Hume.

Numerical Examples

The LP formulation for player II's problem in a game with payoff matrix T given above, is:

Min v

subject to:

4y1 + y2 + 3y3 ≤ v
2y1 + 3y2 + 4y3 ≤ v
y1 + y2 + y3 = 1
yj ≥ 0, j = 1, 2, 3, and v is unrestricted

The optimal solution for player II is: y1 = 1/2, y2 = 1/2, y3 = 0. The shadow prices are the optimal strategies for player I. Therefore, the mixed saddle point is: x1 = 1/4, x2 = 3/4; y1 = 1/2, y2 = 1/2, y3 = 0, and the value of the game equals 5/2. Note that the essential strategies for Player I are i = 1, i = 2; for Player II they are j = 1, j = 2 while j = 3 is non-essential.
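This LP can be handed directly to a solver. A minimal sketch using SciPy's linprog (an assumption: SciPy with its default HiGHS method is available) recovers both players' optimal strategies and the game value from a single solve, as described above:

```python
from scipy.optimize import linprog

# Decision vector (y1, y2, y3, v): minimize v, player II's worst-case payout.
c = [0, 0, 0, 1]
# Player II's expected payout against each pure strategy of player I
# must not exceed v:  T.Y - v <= 0, row by row.
A_ub = [[4, 1, 3, -1],
        [2, 3, 4, -1]]
A_eq = [[1, 1, 1, 0]]                      # probabilities sum to one
bounds = [(0, None)] * 3 + [(None, None)]  # v is unrestricted in sign

res = linprog(c, A_ub=A_ub, b_ub=[0, 0], A_eq=A_eq, b_eq=[1], bounds=bounds)
y1, y2, y3, v = res.x                      # (1/2, 1/2, 0) and v = 5/2
# Player I's optimal strategy appears as the duals of the two inequalities:
x = -res.ineqlin.marginals                 # (1/4, 3/4)
```

Reading the duals off the final solve is exactly the "one LP gives v, Y*, and X*" observation made above.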

It is customary to discard dominated rows or columns in finding optimal strategies. This assumes, however, that the payoff matrix is fixed. If you are interested in the stability analysis of the essential (and non-essential) strategies with respect to changes in payoffs, read the following article:

Further Readings:
Arsham H., Stability of essential strategy in two-person zero-sum games, Congressus Numerantium, 110(3), 167-180. 1995.
Borm P., (Ed.), Chapters in Game Theory, Kluwer, 2002.
Raghavan T., and Z. Syed, A policy-improvement type algorithm for solving zero-sum two-person stochastic games of perfect information, Mathematical Programming, Ser. A, 95(3), 513-532, 2003.
Weintraub E., (ed.), Toward a History of Game Theory, Duke University Press, 1992.

Visit also the Web sites:
Two-Person Zero-Sum Games Theory with Applications.

Investment Decisions: Optimal Portfolio Selections

Consider the following investment problem discussed in the Decision Analysis site. The problem is to decide what action or a combination of actions to take among three possible courses of action with the given rates of return as shown in the body of the following table.

                        States of Nature (Events)
                  Growth    Medium G    No Change    Low
Actions  Bonds      12%        8%          7%         3%
         Stocks     15%        9%          5%        -2%
         Deposit     7%        7%          7%         7%

In decision analysis, the decision-maker must select exactly one option from all possible options. This certainly limits its scope and its applications. You have already learned both decision analysis and linear programming. Now is the time to use game theory concepts to link these two seemingly different types of models to widen their scope in solving more realistic decision-making problems. The investment problem can be formulated as if the investor is playing a game against nature.

Suppose our investor has $100,000 to allocate among the three possible investments with the unknown amounts Y1, Y2, Y3, respectively. That is,

Y1 + Y2 + Y3 = 100,000

Notice that this condition is equivalent to the total probability condition for player I in the Game Theory.

Under these conditions, the returns are:

0.12Y1 + 0.15Y2 + 0.07Y3 {if Growth (G)}
0.08Y1 + 0.09Y2 + 0.07Y3 {if Medium G}
0.07Y1 + 0.05Y2 + 0.07Y3 {if No Change}
0.03Y1 - 0.02Y2 + 0.07Y3 {if Low}

The objective is that the smallest return (let us denote it by v value) be as large as possible.

Formulating this Decision Analysis problem as a Linear Programming problem, we have:

Max v

Subject to:
Y1 + Y2 + Y3 = 100,000
0.12Y1 + 0.15Y2 + 0.07Y3 ≥ v
0.08Y1 + 0.09Y2 + 0.07Y3 ≥ v
0.07Y1 + 0.05Y2 + 0.07Y3 ≥ v
0.03Y1 - 0.02Y2 + 0.07Y3 ≥ v
and Y1, Y2, Y3 ≥ 0, while v is unrestricted in sign (could have negative return).

This LP formulation is similar to the problem discussed in the Game Theory section. In fact, the interpretation of this problem is that, in this situation, the investor is playing against nature (the states of economy).

Solving this problem by any LP solution algorithm, the optimal solution is Y1 = 0, Y2 = 0, Y3 = 100,000, and v = $7000. That is, the investor must put all the money in the money market account, with an accumulated return of 100,000 × 1.07 = $107,000.

Note that the pay-off matrix for this problem has a saddle-point; therefore, as expected, the optimal strategy is a pure strategy. In other words, we have to invest all our money into one portfolio only.
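As a cross-check, the same LP can be sketched with SciPy's linprog (assuming SciPy is available); the solver confirms the pure, all-deposit strategy:

```python
from scipy.optimize import linprog

rates = [[0.12, 0.15, 0.07],   # Growth
         [0.08, 0.09, 0.07],   # Medium growth
         [0.07, 0.05, 0.07],   # No change
         [0.03, -0.02, 0.07]]  # Low

# Decision vector (Y1, Y2, Y3, v): maximize v <=> minimize -v.
c = [0, 0, 0, -1]
# Each state's return must be at least v:  v - rates.Y <= 0.
A_ub = [[-b, -s, -d, 1] for b, s, d in rates]
A_eq = [[1, 1, 1, 0]]          # allocate the whole $100,000
bounds = [(0, None)] * 3 + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=[0] * 4, A_eq=A_eq, b_eq=[100_000],
              bounds=bounds)
Y1, Y2, Y3, v = res.x          # (0, 0, 100000) and v = 7000
```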

Buying Gold or Foreign Currencies Investment Decision: As another numerical example, consider the following two investments with the given rate of returns. Given you wish to invest $12,000 over a period of one year, how do you invest for the optimal strategy?

                               States of Nature (Economy)
                              Growth   Medium G   No Change   Low
Actions  Buy Currencies (C)     5%        4%         3%       -1%
         Buy Gold (G)           2%        3%         4%        5%

The objective is that the smallest return (let us denote it by X3 value) be as large as possible.

Similar to the previous example, formulating this Decision Analysis problem as a Linear Programming problem, we have:

Maximize X3

Subject to:
X1 + X2 = 12,000
0.05X1 + 0.02X2 ≥ X3
0.04X1 + 0.03X2 ≥ X3
0.03X1 + 0.04X2 ≥ X3
-0.01X1 + 0.05X2 ≥ X3
and X1, X2 ≥ 0, while X3 is unrestricted in sign (i.e., could have negative return).

Again, this LP formulation is similar to the problem discussed in the Game Theory section. In fact, the interpretation of this problem is that, in this situation, the investor is playing against nature (i.e., the states of economy).

Solving this problem by any LP solution algorithm, the optimal solution is a mixed strategy: Buy X1 = $4000 Foreign Currencies and X2= $8000 Gold.
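The same sketch applies with the currency/gold rates (SciPy assumed); the solver returns the mixed allocation stated above:

```python
from scipy.optimize import linprog

rates = [[0.05, 0.02],   # Growth
         [0.04, 0.03],   # Medium growth
         [0.03, 0.04],   # No change
         [-0.01, 0.05]]  # Low

# Decision vector (X1, X2, X3): maximize the worst-case return X3.
c = [0, 0, -1]
A_ub = [[-cur, -gold, 1] for cur, gold in rates]
A_eq = [[1, 1, 0]]       # allocate the whole $12,000
res = linprog(c, A_ub=A_ub, b_ub=[0] * 4, A_eq=A_eq, b_eq=[12_000],
              bounds=[(0, None), (0, None), (None, None)])
X1, X2, X3 = res.x       # $4000 in currencies, $8000 in gold, X3 = $360
```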

The Investment Problem Under Risk:

The following table shows the risk measurements computed for the Investment Decision Example:

Risk Assessment
     G(0.4)   MG(0.3)   NC(0.2)   L(0.1)   Exp. Value   St. Dev.   C.V.
B      12        8         7        3          8.9          2.9     32%
S      15        9         5       -2          9.5*         5.4     57%
D       7        7         7        7          7            0        0%

The Risk Assessment columns in the above table indicate that bonds are much less risky than stocks, although their return is lower. Clearly, deposits are risk-free.

Now, an interesting question is: Given all this relevant information, what action do you take? It is all up to you. One plausible approach is to penalize each allocation in the objective function by its standard deviation (0.029 and 0.054 for bonds and stocks, from the risk table), while requiring the expected return (using the expected rates 8.9%, 9.5%, and 7%) to cover v:

Max [v - 0.029Y1 - 0.054Y2 - 0Y3]

Subject to:
Y1 + Y2 + Y3 = 100,000
0.089Y1 + 0.095Y2 + 0.07Y3 ≥ v
and Y1, Y2, Y3 ≥ 0, while v is unrestricted in sign (could have negative return).

Solving this Linear Program (LP) model by any computer LP solver, the optimal solution is Y1 = 0, Y2 = 0, Y3 = 100,000, and v = $7000. That is, the investor must put all the money in the money market account, with an accumulated return of 100,000 × 1.07 = $107,000.

Notice that, for this particular numerical example, it turns out that the different approaches provide the same optimal decision; however one must be careful not to do any generalization at all.
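A sketch of this risk-penalized LP (SciPy assumed; the coefficients are the standard deviations 0.029 and 0.054 and the expected rates 0.089, 0.095, and 0.07 from the risk table):

```python
from scipy.optimize import linprog

# Decision vector (Y1, Y2, Y3, v): maximize v - 0.029*Y1 - 0.054*Y2 - 0*Y3,
# i.e., minimize its negative.
c = [0.029, 0.054, 0.0, -1.0]
A_ub = [[-0.089, -0.095, -0.07, 1.0]]  # expected return must cover v
A_eq = [[1.0, 1.0, 1.0, 0.0]]          # allocate the whole $100,000
res = linprog(c, A_ub=A_ub, b_ub=[0], A_eq=A_eq, b_eq=[100_000],
              bounds=[(0, None)] * 3 + [(None, None)])
Y1, Y2, Y3, v = res.x                  # again (0, 0, 100000) and v = 7000
```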

Note that the above objective function includes the standard deviations to reduce the risk of your decision. However, it is more appropriate to use the covariance matrix instead. Nevertheless, the new objective function will then have a quadratic form, which can be solved by applying nonlinear optimization algorithms. For more information on decision problem construction and solution algorithms, together with some illustrative numerical applications, visit the Optimal Business Decisions Web site.

You may use:
Two-Person Zero-Sum Games Theory with Applications for checking your computation and experimentation.

Risk Assessment Process: Clearly, different subjective probability models are plausible, and they can give quite different answers. These examples show how important it is to be clear about the objectives of the modeling. An important application of subjective probability models is in modeling the effect of state-of-knowledge uncertainties in consequence models. Often it turns out that dependencies between uncertain factors can be important in driving the output of the models. For example, consider two portfolios having random-variable returns R1 and R2; the ratio:

Cov (R1, R2) / Var (R1)

is called the beta of trading strategy 1 with respect to trading strategy 2. Various methods are available to model these dependencies; in particular, allocating capital in proportion to the beta values.

Numerical Example: Consider our Buying Gold or Foreign Currencies Investment Decision. Using the Bivariate Discrete Distributions JavaScript with equal likelihoods (0.25), we obtain:

Beta (Currencies) = -0.457831, and Beta (Gold) = -1.9

Now, one may distribute the total capital ($12000) proportional to the Beta values:

Sum of betas = -0.457831 - 1.9 = -2.357831

Y1 = 12000 (-0.457831 / -2.357831) = 12000(0.194175) = $2330,    investing in Foreign Currencies

Y2 = 12000 (-1.9 / -2.357831) = 12000(0.805825) = $9670,    investing in Gold

That is, the optimal strategic decision based upon the beta criterion is: Buy $2330 of Foreign Currencies and $9670 of Gold.
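The betas and the proportional split can be reproduced in a few lines of plain Python (the 0.25 probabilities are the equal likelihoods assumed above):

```python
RC = [5, 4, 3, -1]      # currency returns in the four states (%)
RG = [2, 3, 4, 5]       # gold returns (%)
p = 0.25                # equally likely states

def mean(xs):
    return sum(x * p for x in xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) * p for x, y in zip(xs, ys))

beta_C = cov(RC, RG) / cov(RC, RC)    # -0.457831...
beta_G = cov(RC, RG) / cov(RG, RG)    # -1.9
total = beta_C + beta_G
Y1 = 12_000 * beta_C / total          # about $2330 in currencies
Y2 = 12_000 * beta_G / total          # about $9670 in gold
```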

The following flowchart depicts the risk assessment process for portfolio selection based on their financial time series.

Risk Assessment in Portfolio Selection


The above hybrid model brings together the techniques of decision analysis, linear programming, and statistical risk assessments (via a quadratic risk function defined by covariance matrix) to support the interactive decisions for modeling investment alternatives.

Further Readings:
Dixit A., and R. Pindyck, Investment Under Uncertainty, Princeton Univ Pr, 1994.
Dokuchaev N., Dynamic Portfolio Strategies: Quantitative Methods and Empirical Rules for Incomplete Information, Kluwer, 2002. Contains investment data-based optimal strategies.
Korn R., and E. Korn, Options Pricing and Portfolio Optimization: Modern Methods of Financial Mathematics, Amer Mathematical Society, 2000.
Luenberger D., Investment Science, Oxford Univ Press, 1997.
Pliska S., Introduction to Mathematical Finance: Discrete Time Models, Blackwell Pub, 1997.
Winston W., Financial Models Using Simulation and Optimization, Palisade Corporation, 1998.

A Classification of Investors Relative Attitudes Toward Risk and Its Impact

Probability of an Event and the Impact of its Occurrence: The process-oriented approach of managing risk and uncertainty is part of any probabilistic modeling. It allows the decision-maker to examine the risk within its expected return, and identify the critical issues in assessing, limiting, and mitigating risk. This process involves both the qualitative and quantitative aspects of assessing the impact of risk.

Decision science does not describe what people actually do since there are difficulties with both computations of probability and the utility of an outcome. Decisions can also be affected by people's subjective rationality, and by the way in which a decision problem is perceived.

Traditionally, the expected value of random variables has been used as a major aid to quantify the amount of risk. However, the expected value is not necessarily a good measure alone by which to make decisions since it blurs the distinction between probability and severity. To demonstrate this, consider the following example:

Suppose that a person must make a choice between scenarios 1 and 2 below:

  • Scenario 1: There is a 50% chance of a loss of $50, and a 50% chance of no loss.

  • Scenario 2: There is a 1% chance of a loss of $2,500, and a 99% chance of no loss.

Both scenarios result in an expected loss of $25, but this does not reflect the fact that the second scenario might be much more risky than the first. (Of course, this is a subjective assessment). The decision-maker may be more concerned about minimizing the effect of the occurrence of an extreme event than he/she is concerned about the mean. The following charts depict the complexity of the probability of an event, the impact of the occurrence of the event, and its related risk indicator, respectively:

From the previous section, you may recall that the certainty equivalent is the risk free payoff; moreover, the difference between a decision-maker's certainty equivalent and the expected monetary value (EMV) is called the risk premium. We may use the sign and the magnitude of the risk premium in classifying a decision-maker's relative attitude toward risk as follows:

  • If the risk premium is positive, then the decision-maker is willing to take the risk, and the decision-maker is said to be a risk seeker. Clearly, some people are more risk-seeking than others.
  • If the risk premium is negative, then the decision-maker would avoid taking the risk, and the decision-maker is said to be risk averse.

  • If the risk premium is zero, then the decision-maker is said to be risk neutral.

Further Readings
Brooks C., Introductory Econometrics for Finance, Cambridge University Press, 2002.
Eilon S., The Art of Reckoning: Analysis of Performance Criteria, Academic Press, 1984.
Hammond J., R. Keeney, and H. Raiffa, Smart Choices: A Practical Guide to Making Better Decisions, Harvard Business School Press., 1999.
Richter M., and K. Wong, Computable preference and utility, Journal of Mathematical Economics, 32(3), 339-354, 1999.

Risk Assessment: How Good Is Your Portfolio?

Risk is the downside of a gamble, which is described in terms of probability. Risk assessment is a procedure of quantifying the loss or gain values and supplying them with proper values of probabilities. In other words, risk assessment means constructing the random variable that describes the risk. A risk indicator is a quantity that describes the quality of the decision.

Without loss of generality, consider our earlier Investment Example. Suppose the optimal portfolio is:

Y1.B + Y2.S + Y3.D

The expected value (i.e., the averages): The expected return is:

Y1.Br + Y2.Sr + Y3.Dr

where, Br, Sr, and Dr are the historical averages for B, S, and D, respectively.

Expected return alone is not a good indication of a quality decision. The variance must be known so that an educated decision may be made. Have you ever heard the dilemma of the six-foot tall statistician who drowned in a stream that had an average depth of three feet?

In the investment example, it is also necessary to compute the risk associated with the optimal portfolio. A measure of risk is generally reported by the variance, or its square root, called the standard deviation. Variance and standard deviation are numerical values that indicate the variability inherent to your decision. For risk, smaller values indicate that what you expect is likely to be what you get. What we desire is a large expected return with small risk; thus, high risk makes the investor very worried.

Variance: An important measure of risk is variance:

Y1².Var(B) + Y2².Var(S) + Y3².Var(D) +
2Y1.Y2.Cov(B, S) + 2Y1.Y3.Cov(B, D) + 2Y2.Y3.Cov(S, D)

where Var and Cov are the variance and covariance, respectively; they are computed using recent historical data.

The variance is a measure of risk; therefore, the greater the variance, the higher the risk. The variance is not expressed in the same units as the expected value. So, the variance is hard to understand and explain as a result of the squared term in its computation. This can be alleviated by working with the square root of the variance, which is called the Standard Deviation:

Standard Deviation

Both variance and standard deviation provide the same information and, therefore, one can always be obtained from the other. In other words, the process of computing standard deviation always involves computing the variance. Since standard deviation is the square root of the variance, it is always expressed in the same units as the expected value.

Numerical Example: For our Buying Gold or Foreign Currencies Investment Decision, the above formulas reduce to:

The optimal portfolio is:

Y1.C + Y2.G

The expected value (i.e., the averages): The expected return is:

Y1.Cr + Y2.Gr

The risk measured in terms of variance is:

Y1².Var(C) + Y2².Var(G) + 2Y1.Y2.Cov(C, G)

Using the Bivariate Distributions JavaScript, we have:

The expected return is:
$4000 (2.75) + $8000 (3.5) = $39000

The standard deviation is:
[($4000)²Var(C) + ($8000)²Var(G) + 2($4000)($8000)Cov(C, G)]^1/2
= [($4000)²(5.1875) + ($8000)²(1.25) + 2($4000)($8000)(-2.375)]^1/2 = $3317

Notice that Beta1 and Beta2 are directly related; for example, their product equals the squared correlation, r². The r², which always lies in [0, 1], is a dimensionless number representing how strong the linear dependency is between the rates of return of one portfolio and the other. When either beta is negative and r² is large enough, the two portfolios are strongly and inversely related. In such a case, diversification of the total capital is recommended.

For the dynamic decision process, volatility as a measure of risk incorporates the time period over which the standard deviation is computed. The volatility measure is defined as the standard deviation divided by the square root of the time duration.

When considering two different portfolios, what do you do if one portfolio has a larger expected return but a much higher risk than the alternative portfolio? In such cases, using another measure of risk, known as the Coefficient of Variation, is appropriate.

The Coefficient of Variation (CV) is the absolute deviation relative to the expected value (provided the expected value is not zero), expressed as a percentage:

CV = 100 |S / Expected Value| %

For the above numerical example, the coefficient of variation is:
100(3317/39000) = 8.5%

Notice that the CV is independent of the units of measurement. The coefficient of variation demonstrates the relationship between standard deviation and expected value by expressing the risk as a percentage of the expected value. A portfolio with a CV of 15% or less is considered a "good" portfolio. The inverse of the CV (namely 1/CV) is called the Signal-to-Noise Ratio.
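These measures follow directly from the covariance formula given earlier; a short sketch recomputes them from the raw state-by-state returns:

```python
import math

Y = [4000.0, 8000.0]                     # $ in currencies and gold
R = [[5, 4, 3, -1], [2, 3, 4, 5]]        # percent returns in the four states
p = 0.25                                 # equally likely states

m = [sum(r) * p for r in R]              # means: 2.75 and 3.5

def cov(i, j):
    return sum((a - m[i]) * (b - m[j]) * p for a, b in zip(R[i], R[j]))

expected = Y[0] * m[0] + Y[1] * m[1]     # 39000
variance = (Y[0] ** 2 * cov(0, 0) + Y[1] ** 2 * cov(1, 1)
            + 2 * Y[0] * Y[1] * cov(0, 1))
std = math.sqrt(variance)                # about 3317
cv = 100 * std / expected                # about 8.5%
```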

Diversification may reduce your risk: Since the covariance appears in the risk assessment, a negative covariance reduces the risk. Therefore, diversifying your investment may reduce the risk without reducing the benefits you gain from the activities. For example, you may choose to buy a variety of stocks rather than just one.

For an application of the signal-to-noise ratio as a diversification criterion in reducing your investment risk, visit the Risk: The Four Letters Word section.

Notice that diversification based on the signal-to-noise ratio criterion can be extended to more than two portfolios, unlike the beta-ratio criterion, which is limited to two inversely correlated portfolios only.

You may use the following JavaScript for computational purposes and computer-assisted experimentation as learning tools for the fundamentals of risk analysis:

Further Readings:
Dupacová J., J. Hurt, and J. Štepán, Stochastic Modeling in Economics and Finance, Kluwer Academic Publishers, 2002. Part II of the book is devoted to the allocation of funds and risk management.
Moore P., The Business of Risk, Cambridge University Press, 1984.
Vose D., Risk Analysis: A Quantitative Guide, John Wiley & Sons, 2000.

Portfolio's Factors-Prioritization & Stability Analysis

Introduction: Sensitivity analysis, known also as stability analysis, is a technique for determining how much an expected return will change in response to a given change in an input variable, all other things remaining unchanged.

Steps in Sensitivity Analysis:

  1. Begin with a nominal base-case situation, using the expected values for each input.
  2. Calculate the base-case output.
  3. Consider a series of "what-if" questions to determine by how much the output would deviate from this nominal level if input values deviated from their expected values.
  4. Change each input by several percentage points above and below its expected value, and recalculate the expected payoff.
  5. Plot the set of expected payoffs against the variable that was changed.
  6. The steeper the slope (i.e., derivative) of the resulting line, the more sensitive the expected payoff is to a change in the variable.

Scenario Analysis: Scenario analysis is a risk analysis technique that considers both the sensitivity of expected payoff to changes in key variables, and the likely range of variable values. The worst and best "reasonable" sets of circumstances are considered, and the expected payoff for each is calculated and compared to the expected, or base-case, output. Clearly, extensive scenario and sensitivity analysis can be carried out using computerized versions of the above procedure.

How Stable is Your Decision? Stability Analysis compares the outcome of each of your scenarios with chance events. Computerized versions of the above procedure are necessary and useful tools. They can be used extensively to examine the decision for stability and sensitivity whenever there is uncertainty in the rate of return.

Prioritization of Uncontrollable Factors: Stability analysis also identifies critical model inputs. The simplest test for sensitivity is whether the optimal portfolio changes when an uncertainty factor is set to its extreme value while all other variables are held unchanged. If the decision does not change, the uncertainty can be regarded as relatively less important than the other factors. Sensitivity analysis focuses on the factors with the greatest impact, thus helping to prioritize data gathering while increasing the reliability of information.
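The simplest what-if test can be sketched by re-solving the investment LP with one input pushed to an extreme (SciPy assumed; the 2% deposit rate below is a hypothetical perturbation, not from the text):

```python
from scipy.optimize import linprog

def best_worst_case(rates, budget=100_000):
    """Max v subject to: each state's return >= v, allocations sum to budget."""
    n = len(rates[0])
    c = [0] * n + [-1]
    A_ub = [[-x for x in row] + [1] for row in rates]
    A_eq = [[1] * n + [0]]
    res = linprog(c, A_ub=A_ub, b_ub=[0] * len(rates), A_eq=A_eq,
                  b_eq=[budget], bounds=[(0, None)] * n + [(None, None)])
    return -res.fun

base = [[0.12, 0.15, 0.07], [0.08, 0.09, 0.07],
        [0.07, 0.05, 0.07], [0.03, -0.02, 0.07]]
perturbed = [row[:2] + [0.02] for row in base]   # deposit rate cut to 2%

print(best_worst_case(base))        # 7000: all-deposit remains optimal
print(best_worst_case(perturbed))   # 3000: the decision flips to all-bonds
```

Since the optimal portfolio changes under this perturbation, the deposit rate would be flagged as an important factor worth more careful data gathering.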

You may like to use MultiVariate Statistics: Mean, Variance, & Covariance for checking your computation and for performing computer-assisted experimentation.

The Gambler’s Ruin Probability

By now you may have realized that the above game is not a purely random decision problem. Switching among strategies with the specific frequencies obtained from the optimal solution is aimed at confusing the other player. The following game is an example of a purely random decision-making problem.

Ruin Probability: The second JavaScript is for sensitivity analysis of the game of winning the targeted dollar amount or losing it all (i.e., ruin).

Let R = the amount of money you bring to the table, T = the targeted winning amount, U = the size of each bet, and p = the probability of winning any bet. Then the probability (W) of reaching the target, i.e., leaving with $(R+T), is:

W = (A - 1) / (B - 1)

where

A = [(1 - p) / p]^(R/U)

and

B = [(1 - p) / p]^((T+R)/U)

Therefore, the Ruin Probability, i.e., the probability of losing it all $R is: (1 – W).

Notice: These results are subject to the condition that the targeted winning amount ($T) be much less than the amount of money you bring to the table ($R). That is, ($T) must be a fraction (f) of ($R).
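The win probability W is a one-line formula; a small sketch (the p = 1/2 branch uses the limiting value R/(R+T)):

```python
def win_probability(R, T, U, p):
    """Probability of reaching R+T before losing all of R, betting U per play."""
    if abs(p - 0.5) < 1e-12:
        return R / (R + T)            # limiting value of the formula
    x = (1.0 - p) / p
    A = x ** (R / U)
    B = x ** ((T + R) / U)
    return (A - 1.0) / (B - 1.0)

# A fair game: bring $100, aim for $100 more -> W = 1/2 regardless of U.
# An unfavorable game (p = 0.474): bolder bets raise the win probability.
timid = win_probability(100, 100, 10, 0.474)
bold = win_probability(100, 100, 100, 0.474)
```

The comparison illustrates a classical point: when the odds are against you, fewer, larger bets give a better chance of reaching the target than many small ones.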

Remember that if you bet too much you will leave a loser, while if you bet too little, your capital will grow too slowly. You may ask what fraction (f) of R you should bet always. Let V be the amount that you win for every dollar that you risk, then the optimal fraction is:

f = p - (1 - p) / V

For example, for p = 0.5 and V = 2, the optimal decision value for f is 25% of your capital R. Note from the above result that the recommended fraction f of R never exceeds p.
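This Kelly-type fraction is a one-liner; note that a negative result means the bet has a negative edge and should not be played at all:

```python
def optimal_fraction(p, V):
    """Fraction of capital to risk when each $1 bet returns V with probability p.
    A negative value means the bet has a negative edge: do not play."""
    return p - (1.0 - p) / V

print(optimal_fraction(0.5, 2))   # 0.25, matching the example above
```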

You may like to use Two-Person Zero-Sum and The Gambler’s Games with Applications for checking your computation and for performing computer-assisted experimentation.

Other Competition Modeling Techniques

Competition in business often occurs because of the negative effects of one party (e.g., a company) on another due to the use and depletion of shared scarce resources. Competition is one mechanism leading to logistic growth. The logistic growth model is expressed as:

dN / dt = r N (K - N)/K

where N = population size,
r = per-capita rate of population growth,
K = carrying capacity of the environment.

The main question is the following: Could the carrying capacity factor in the logistic equation be a function, rather than a constant? The competition of two technologies, e.g., wireless versus cable, or simulated annealing versus genetic algorithms, can be intriguing. Here, the two parties are merely competitors; one does not "eat" the other, as the consumer is the real "prey", though at times a willing one. The Lotka-Volterra model and its many variants are widely used methods in many disciplines, including economics.

The classical Lotka-Volterra competition model is described by the following system of differential equations:

dN1 / dt = r1 N1 (K1 - N1 - aN2 )/K1
dN2 / dt = r2 N2 (K2 - N2 - bN1)/K2

The competition coefficients a and b are proportionality constants that relate the effect of one party's population on the growth of the other's. At the equilibrium level, we have:

dN1 / dt = dN2 / dt = 0.

Any extension or modification of the Lotka-Volterra competition model must overcome the following weaknesses: the model assumes the prey grows spontaneously, and that the carrying capacity of the environment has no limit. Moreover, removing the assumption that all predators and prey start at the same spatial point produces a set of interesting, more realistic, and useful results.
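The competition dynamics can be sketched with a simple Euler integration (the parameter values below are illustrative, not taken from the text); with weak competition (a, b < 1) the populations settle at the coexistence equilibrium N* = (K1 - aK2)/(1 - ab):

```python
def simulate(N1, N2, r1, r2, K1, K2, a, b, dt=0.01, steps=200_000):
    """Euler integration of the Lotka-Volterra competition equations."""
    for _ in range(steps):
        dN1 = r1 * N1 * (K1 - N1 - a * N2) / K1
        dN2 = r2 * N2 * (K2 - N2 - b * N1) / K2
        N1 += dN1 * dt
        N2 += dN2 * dt
    return N1, N2

# With K1 = K2 = 100 and a = b = 0.5, both populations converge to
# N* = 100 * (1 - 0.5) / (1 - 0.25) = 66.67 (stable coexistence).
N1, N2 = simulate(10.0, 10.0, 0.5, 0.5, 100.0, 100.0, 0.5, 0.5)
```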

Further Readings:
Arkin V., et al., Stochastic Models of Control and Economic Dynamics, Academic Press, 1997.
Beltrami E., Mathematical Models for Society and Biology, Academic Press, 2001.
Ferguson B., and G. Lim, Introduction to Dynamic Economic Models, Manchester University Press, 1998.
Ford E., Scientific Method for Ecological Research, Cambridge University Press, 2000.
Hassell M., The Spatial and Temporal Dynamics of Host-Parasitoid Interactions, Oxford University Press, 2000.

The Copyright Statement: The fair use, according to the 1996 Fair Use Guidelines for Educational Multimedia, of materials presented on this Web site is permitted for non-commercial and classroom purposes only.
This site may be mirrored intact (including these notices) on any server with public access. All files are available for mirroring.

Kindly e-mail me your comments, suggestions, and concerns. Thank you.

Professor Hossein Arsham   

This site was launched on 2/25/1994, and its intellectual materials have been thoroughly revised on a yearly basis. The current version is the 8th Edition. All external links are checked once a month.

Back to:

Dr Arsham's Home Page

EOF: © 1994-2015.