A Brief History of Optimization Modeling - Part 1


Ever since algebra was invented by the Persian mathematician Muhammad ibn Musa al-Khwarizmi around 820 AD, our species has put it to practical and increasingly complex use, to the point where our modern civilization could not function without it.

What most people don’t realize is that our civilization would also not function as efficiently without optimization modeling (yes, it’s that important!).

By optimization modeling, we’re referring to the use of mathematical techniques to solve problems with certain characteristics, including those at the heart of integrated business planning. The main families of techniques are (a small illustrative example follows this list):

  • Linear programming (LP)
  • Mixed integer programming (MIP)
  • Nonlinear programming (NLP)
  • Constraint programming (CP)
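
To make the idea concrete, here is a minimal sketch of a linear program solved in Python with SciPy’s linprog routine. The profit coefficients and resource limits are invented purely for illustration and are not drawn from the article.

```python
# Minimal illustrative LP: choose production quantities x1, x2 >= 0
# to maximize profit 3*x1 + 5*x2 subject to two resource limits.
# All numbers are hypothetical.
from scipy.optimize import linprog

c = [-3, -5]            # linprog minimizes, so negate to maximize profit
A_ub = [[1, 2],         # x1 + 2*x2 <= 14  (e.g., machine hours)
        [3, 1]]         # 3*x1 + x2 <= 18  (e.g., raw material)
b_ub = [14, 18]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal plan:", result.x)       # approximately [4.4, 4.8]
print("maximum profit:", -result.fun)  # approximately 37.2
```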

The Beginning of Linear Programming

Over the last four centuries, important mathematicians such as Pascal, Newton, Bernoulli, and Lagrange made major contributions to the mathematical sciences. However, it wasn’t until 1826, when Jean-Baptiste Joseph Fourier observed that certain problems could be stated as linear-programming problems and Carl Friedrich Gauss proved that elementary row operations could be used to solve a system of linear equations, that the idea of solving real-world problems this way began to take shape. Even then, one limiting factor remained: only extremely small problems could be solved.

Fast-forward 110 years. Between 1936 and 1946, more important problems began to be addressed, if not solved outright. The term “operational research” was coined in Britain during the early days of World War II; while the work may not have involved actual modeling, it combined mathematics, science, and common sense to help the Allies make intelligent decisions, contributing positively to the outcome of the war. This did not go unnoticed, especially by mathematicians, who soon began to define many now-famous problems, such as the traveling salesman problem, the diet problem, and Monte Carlo simulation.

Mainframes, Tapes, and Punch Cards

Everything changed in 1947, when Dr. George Dantzig invented the simplex algorithm for solving LP problems. The fact that it is still widely used today is proof of how important this breakthrough was!

The earliest problems were tackled by the National Bureau of Standards and the RAND Corporation using something called the Card-Programmed Calculator. This calculating machine took punched cards as input and could solve a variety of problems, with as many as 45 constraints and 70 variables, in about 8 hours.

By the mid-1950s, the first IBM computers were being used to solve these problems. With purpose-written code, the IBM machines could solve problems with several hundred constraints. By the early 1960s, they could handle problems with more than 1,000 constraints, which caused the oil industry to take notice.

By 1964, researchers who wanted to solve more complicated problems (e.g., site location, transportation, and facility open/close decisions) had developed the first mixed-integer programming (MIP) codes, based on the branch-and-bound method. The physical process of solving a MIP problem, however, was complicated and required tape storage. Still, it was a breakthrough, and many other users, such as the process industries and the military, began to adopt optimization modeling in earnest.
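
For readers curious what such a model looks like today, here is a minimal facility open/close sketch written with the PuLP library for Python. The site names, costs, capacities, and demand figure are all hypothetical, and the branch-and-bound search is handled by PuLP’s bundled CBC solver rather than the 1960s codes described above.

```python
# Minimal facility open/close MIP, in the spirit of the 1960s models above.
# All data are hypothetical.
import pulp

facilities = ["A", "B", "C"]
fixed_cost = {"A": 100, "B": 80, "C": 120}   # cost of opening each site
capacity   = {"A": 60,  "B": 40, "C": 90}    # units an open site can supply
demand = 100                                 # total units required

model = pulp.LpProblem("facility_open_close", pulp.LpMinimize)

# Binary decision: open facility f or keep it closed.
open_site = pulp.LpVariable.dicts("open", facilities, cat="Binary")

# Objective: minimize the total fixed cost of the opened facilities.
model += pulp.lpSum(fixed_cost[f] * open_site[f] for f in facilities)

# Constraint: the opened facilities must be able to cover total demand.
model += pulp.lpSum(capacity[f] * open_site[f] for f in facilities) >= demand

model.solve()  # branch-and-bound runs inside the CBC solver
print({f: int(open_site[f].value()) for f in facilities})
```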

The 1970s brought many rapid advances in IBM’s mainframe computers, and the race was on to develop the best algorithms for solving increasingly larger and more difficult LP/MIP problems. During this time many different matrix formats appeared, including early versions of the MPSX format. By the late 1970s, portable code written in FORTRAN had been introduced; however, solving any “large-scale” optimization problem at that time remained largely the domain of academia and well-funded consulting companies, something that would soon change.

Early PCs and Algebraic Modeling Languages

In the early 1980s, the IBM PC began to be adopted for business purposes. By 1983, LP codes for personal computers began to emerge, including early versions of LINDO and XpressMP. At this point, however, even the largest models solved on PCs were relatively small, with approximately 1,000 constraints and 1,000 variables, and time-to-solve remained several hundred times slower than on the best available mainframe computers, which still required punch cards.

In the mid-1980s, as computing speed, memory, and solver refinement continued to improve, the focus shifted to new methods of generating much larger LP and MIP problems. Apart from established third-generation programming languages (3GLs) like Basic and FORTRAN, a new class of modeling software emerged, called Algebraic Modeling Languages (AMLs). Considered fourth-generation languages (4GLs), AMLs were created for operations research professionals, and all had similar characteristics (a short modern sketch appears below):

  • A modeling language specifically designed for creating large-scale mathematical problems
  • A user interface
  • Ability to read/write data
  • Links to solvers

Original AML software vendors included General Algebraic Modeling System (GAMS), AIMMS, LINGO, AMPL, MathPro, and MPL—all of which remain in use.
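
To illustrate the algebraic style that set AMLs apart from 3GL code, here is a small sketch written with Pyomo, a modern Python-based modeling library that is not one of the original products listed above; the sets, parameters, and numbers are invented for the example.

```python
# Minimal sketch of the algebraic modeling style: data (sets and parameters)
# is declared separately from the model's algebra. All values are hypothetical.
from pyomo.environ import (ConcreteModel, Set, Param, Var, Objective,
                           Constraint, NonNegativeReals, maximize)

model = ConcreteModel()

# Symbolic data
model.PRODUCTS = Set(initialize=["p1", "p2"])
model.profit = Param(model.PRODUCTS, initialize={"p1": 3, "p2": 5})
model.hours = Param(model.PRODUCTS, initialize={"p1": 1, "p2": 2})

# Decision variables indexed over the set
model.make = Var(model.PRODUCTS, domain=NonNegativeReals)

# Constraints and the objective are written as algebra over the sets
model.capacity = Constraint(
    expr=sum(model.hours[p] * model.make[p] for p in model.PRODUCTS) <= 14)
model.total_profit = Objective(
    expr=sum(model.profit[p] * model.make[p] for p in model.PRODUCTS),
    sense=maximize)

# Solving requires a separately installed solver, for example:
#   from pyomo.environ import SolverFactory
#   SolverFactory("glpk").solve(model)
```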

To learn more about optimization technology, check out Chapter 5 of our “Introduction to Prescriptive Analytics for Business Leaders” e-book.

In our next blog, we outline the history of optimization modeling from the early 1990s to today. 
