Linear programming relaxation
In mathematics, the relaxation of a (mixed) integer linear program is the problem that arises by removing the integrality constraint of each variable. For example, in a 0–1 integer program, all constraints on the variables are of the form x_i ∈ {0, 1}; the relaxation replaces each such constraint with the pair of linear inequalities 0 ≤ x_i ≤ 1.
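A minimal worked example of the relaxation step (the coefficients are illustrative, not taken from any particular source):

    % A 0-1 integer program (illustrative data):
    \max\; 5x_1 + 4x_2 \quad \text{subject to} \quad 3x_1 + 2x_2 \le 4,\; x_1, x_2 \in \{0,1\}
    % Its LP relaxation keeps the linear constraint and replaces integrality with box constraints:
    \max\; 5x_1 + 4x_2 \quad \text{subject to} \quad 3x_1 + 2x_2 \le 4,\; 0 \le x_1, x_2 \le 1

The relaxed optimum (x_1 = 2/3, x_2 = 1, value 22/3) is an upper bound on the integer optimum (x_1 = 1, x_2 = 0, value 5).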
Data envelopment analysis
Data envelopment analysis (DEA) is a nonparametric method in operations research and economics for the estimation of production frontiers.
Weak duality
In applied mathematics, weak duality is a concept in optimization which states that the duality gap is always greater than or equal to 0. That means the solution to the dual (minimization) problem is always greater than or equal to the solution to the associated primal (maximization) problem.
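For the standard primal–dual pair this reads as follows (a textbook statement, written in the usual matrix notation):

    % Primal: max c^T x  subject to  Ax <= b, x >= 0
    % Dual:   min b^T y  subject to  A^T y >= c, y >= 0
    % Weak duality: for any primal-feasible x and dual-feasible y,
    c^\top x \;\le\; y^\top A x \;\le\; b^\top y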
Basic solution (linear programming)
In linear programming, a discipline within applied mathematics, a basic solution is any solution of a linear programming problem satisfying certain specified technical conditions. For a polyhedron P and a vector x* in R^n, x* is a basic solution if all the equality constraints defining P are active at x* and at least n of the constraints active at x* are linearly independent.
Karmarkar's algorithm
Karmarkar's algorithm is an algorithm introduced by Narendra Karmarkar in 1984 for solving linear programming problems. It was the first reasonably efficient algorithm that solves these problems in polynomial time.
Drawdown (economics)
The drawdown is the measure of the decline from a historical peak in some variable (typically the cumulative profit or total open equity of a financial trading strategy). Somewhat more formally, if X(t), t ≥ 0 is a stochastic process representing the running value, the drawdown at time T, denoted D(T), is the gap between the historical peak and the current value: D(T) = max_{t ≤ T} X(t) − X(T).
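A short computational sketch under this definition (the function name and sample equity series are illustrative):

    # Minimal sketch: drawdown of a cumulative-profit series.
    def drawdowns(equity):
        """Return the drawdown at each step: running peak minus current value."""
        peak = float("-inf")
        out = []
        for x in equity:
            peak = max(peak, x)   # historical peak up to and including this step
            out.append(peak - x)  # D(t) = max_{s <= t} X(s) - X(t)
        return out

    equity = [0.0, 5.0, 3.0, 8.0, 2.0, 6.0]   # example data
    dd = drawdowns(equity)
    print(dd)        # [0.0, 0.0, 2.0, 0.0, 6.0, 2.0]
    print(max(dd))   # maximum drawdown: 6.0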
Benson's algorithm
Benson's algorithm, named after Harold Benson, is a method for solving multi-objective linear programming problems and vector linear programs. This works by finding the "efficient extreme points in the outcome set".
Graver basis
In applied mathematics, Graver bases enable iterative solutions of linear and various nonlinear integer programming problems in polynomial time. They were introduced by Jack E. Graver. Their connection to the theory of Gröbner bases was discussed by Bernd Sturmfels.
HiGHS optimization solver
HiGHS is open-source software to solve linear programming (LP), mixed-integer programming (MIP), and convex quadratic programming (QP) models. Written in C++ and published under an MIT license, HiGHS provides interfaces to several programming languages, including C, Python, and Julia.
Hirsch conjecture
In mathematical programming and polyhedral combinatorics, the Hirsch conjecture is the statement that the edge-vertex graph of an n-facet polytope in d-dimensional Euclidean space has diameter no more than n − d.
Revised simplex method
In mathematical optimization, the revised simplex method is a variant of George Dantzig's simplex method for linear programming. The revised simplex method is mathematically equivalent to the standard simplex method but differs in implementation: instead of maintaining a tableau which explicitly represents the constraints adjusted to the current set of basic variables, it maintains a representation of a basis of the constraint matrix.
Semidefinite programming
Semidefinite programming (SDP) is a subfield of convex optimization concerned with the optimization of a linear objective function (a user-specified function that the user wants to minimize or maximize) over the intersection of the cone of positive semidefinite matrices with an affine space, i.e., a spectrahedron.
Dual linear program
The dual of a given linear program (LP) is another LP that is derived from the original (the primal) LP in the following schematic way:
* Each variable in the primal LP becomes a constraint in the dual LP;
* Each constraint in the primal LP becomes a variable in the dual LP;
* The objective direction is reversed – a maximum in the primal becomes a minimum in the dual and vice versa.
A schematic primal–dual pair is sketched below.
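The standard symmetric form of this correspondence (textbook notation, assuming an m × n constraint matrix A):

    \text{Primal: } \max_{x}\; c^\top x \quad \text{s.t. } Ax \le b,\ x \ge 0
    \text{Dual: }   \min_{y}\; b^\top y \quad \text{s.t. } A^\top y \ge c,\ y \ge 0

Each of the m rows of A yields one dual variable y_i, and each of the n primal variables x_j yields one dual constraint built from the j-th column of A.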
Integer points in convex polyhedra
The study of integer points in convex polyhedra is motivated by questions such as "how many nonnegative integer-valued solutions does a system of linear equations with nonnegative coefficients have" or "how many solutions does an integer linear program have".
Network simplex algorithm
In mathematical optimization, the network simplex algorithm is a graph theoretic specialization of the simplex algorithm. The algorithm is usually formulated in terms of a minimum-cost flow problem, of which problems such as the transportation, assignment, and shortest-path problems are special cases.
Strong duality
Strong duality is a condition in mathematical optimization in which the primal optimal objective and the dual optimal objective are equal. This is as opposed to weak duality (the primal problem has optimal value greater than or equal to that of the dual problem; in other words, the duality gap is greater than or equal to zero).
Stigler diet
The Stigler diet is an optimization problem named for George Stigler, a 1982 Nobel Laureate in economics, who posed the following problem: For a moderately active man weighing 154 pounds, how much of each of 77 foods should be eaten on a daily basis so that the man's intake of nine nutrients will be at least equal to the recommended dietary allowances suggested by the National Research Council in 1943, with the cost of the diet being minimal?
Slack variable
In an optimization problem, a slack variable is a variable that is added to an inequality constraint to transform it into an equality. Introducing a slack variable replaces an inequality constraint with an equality constraint and a non-negativity constraint on the slack variable.
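For example (a generic textbook transformation):

    % the inequality constraint
    x_1 + 2x_2 \le 3
    % becomes, after introducing the slack variable s,
    x_1 + 2x_2 + s = 3, \qquad s \ge 0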
Perturbation function
In mathematical optimization, the perturbation function is any function which relates to primal and dual problems. The name comes from the fact that any such function defines a perturbation of the initial problem.
Dual cone and polar cone
Dual cone and polar cone are closely related concepts in convex analysis, a branch of mathematics.
Benders decomposition
Benders decomposition (or Benders' decomposition) is a technique in mathematical programming that allows the solution of very large linear programming problems that have a special block structure. This block structure often occurs in applications such as stochastic programming, where the uncertainty is usually represented with scenarios. The technique is named after Jacques F. Benders.
Configuration linear program
The configuration linear program (configuration-LP) is a particular linear program used for solving combinatorial optimization problems. It was introduced in the context of the cutting stock problem.
Fundamental theorem of linear programming
In mathematical optimization, the fundamental theorem of linear programming states, in a weak formulation, that the maxima and minima of a linear function over a convex polygonal region occur at the region's corners.
Big M method
In operations research, the Big M method is a method of solving linear programming problems using the simplex algorithm. The Big M method extends the simplex algorithm to problems that contain "greater-than" constraints by introducing artificial variables that are penalized in the objective with a very large constant M, so that they are driven to zero in any optimal solution of a feasible problem.
Zadeh's rule
In mathematical optimization, Zadeh's rule (also known as the least-entered rule) is an algorithmic refinement of the simplex method for linear optimization. The rule was proposed around 1980 by Norman Zadeh.
Multi-objective linear programming
Multi-objective linear programming is a subarea of mathematical optimization. A multiple objective linear program (MOLP) is a linear program with more than one objective function. An MOLP is a special case of a vector linear program.
Reduced cost
In linear programming, reduced cost, or opportunity cost, is the amount by which an objective function coefficient would have to improve (so increase for a maximization problem, decrease for a minimization problem) before it would be possible for the corresponding variable to assume a positive value in the optimal solution.
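In simplex-method notation the reduced cost of variable j is usually computed as follows (standard textbook formula; B is the current basis matrix, c_B the vector of basic costs, A_j the constraint column of variable j):

    \bar{c}_j \;=\; c_j \;-\; c_B^\top B^{-1} A_j

In a minimization problem, a non-basic variable with negative reduced cost can improve the objective when it enters the basis.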
Cunningham's rule
In mathematical optimization, Cunningham's rule (also known as the least recently considered rule or round-robin rule) is an algorithmic refinement of the simplex method for linear optimization. The rule was proposed by W. H. Cunningham in 1979.
Set cover problem
The set cover problem is a classical question in combinatorics, computer science, operations research, and complexity theory. It is one of Karp's 21 NP-complete problems shown to be NP-complete in 1972.
Expected shortfall
Expected shortfall (ES) is a risk measure—a concept used in the field of financial risk measurement to evaluate the market risk or credit risk of a portfolio. The "expected shortfall at q% level" is the expected return on the portfolio in the worst q% of cases.
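For a continuous return distribution this can be written as follows (here x_α denotes the α-quantile of the return X and VaR the value at risk; sign conventions vary between sources):

    \operatorname{ES}_{\alpha}(X) \;=\; -\,\mathbb{E}\!\left[\,X \mid X \le x_{\alpha}\,\right]
    \;=\; -\frac{1}{\alpha}\int_{0}^{\alpha}\operatorname{VaR}_{\gamma}(X)\,d\gamma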
Linear programming decoding
In information theory and coding theory, linear programming decoding (LP decoding) is a decoding method which uses concepts from linear programming (LP) theory to solve decoding problems. This approach was first introduced by Jon Feldman and his collaborators in the early 2000s.
Vertex enumeration problem
In mathematics, the vertex enumeration problem for a polytope, a polyhedral cell complex, a hyperplane arrangement, or some other object of discrete geometry, is the problem of determination of the object's vertices given some formal representation of the object.
Basic feasible solution
In the theory of linear programming, a basic feasible solution (BFS) is a solution with a minimal set of non-zero variables. Geometrically, each BFS corresponds to a corner of the polyhedron of feasible solutions.
GLOP
GLOP (the Google Linear Optimization Package) is Google's open source linear programming solver, created by Google's Operations Research Team. It is written in C++ and was released to the public as part of Google's OR-Tools software suite.
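A minimal usage sketch through the OR-Tools Python wrapper (assuming the ortools package is installed; the tiny model below is illustrative, not taken from any particular source):

    # Solve a small LP with GLOP via OR-Tools' pywraplp wrapper.
    from ortools.linear_solver import pywraplp

    solver = pywraplp.Solver.CreateSolver("GLOP")
    x = solver.NumVar(0.0, solver.infinity(), "x")
    y = solver.NumVar(0.0, solver.infinity(), "y")
    solver.Add(x + 2 * y <= 14)
    solver.Add(3 * x - y >= 0)
    solver.Maximize(3 * x + 4 * y)

    if solver.Solve() == pywraplp.Solver.OPTIMAL:
        print("objective =", solver.Objective().Value())
        print("x =", x.solution_value(), "y =", y.solution_value())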
Duality (optimization)
In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem.
Assignment problem
The assignment problem is a fundamental combinatorial optimization problem. In its most general form, the problem is as follows: The problem instance has a number of agents and a number of tasks. Any agent can be assigned to perform any task, incurring some cost that may vary depending on the agent–task pairing; the goal is to assign at most one task to each agent and at most one agent to each task so that as many tasks as possible are performed and the total cost of the assignment is minimized.
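A small illustration of the balanced minimization version using SciPy's linear_sum_assignment solver (scipy is an assumed dependency here, and the cost matrix is made-up example data):

    # Assign 3 agents to 3 tasks minimizing total cost.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    cost = np.array([[4, 1, 3],
                     [2, 0, 5],
                     [3, 2, 2]])
    rows, cols = linear_sum_assignment(cost)   # optimal agent -> task pairing
    print(list(zip(rows, cols)))               # e.g. [(0, 1), (1, 0), (2, 2)]
    print(cost[rows, cols].sum())              # total cost of this assignment: 5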
Linear-fractional programming
In mathematical optimization, linear-fractional programming (LFP) is a generalization of linear programming (LP). Whereas the objective function in a linear program is a linear function, the objective function in a linear-fractional program is a ratio of two linear functions.
MPS (format)
MPS (Mathematical Programming System) is a file format for presenting and archiving linear programming (LP) and mixed integer programming problems.
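A minimal sketch of the section layout (a hypothetical toy model encoding: minimize X1 + 2 X2 subject to X1 + X2 ≥ 1 and X1 ≤ 4; whitespace conventions vary between fixed and free format):

    NAME          TOYLP
    ROWS
     N  COST
     G  LIM1
    COLUMNS
        X1        COST      1.0   LIM1      1.0
        X2        COST      2.0   LIM1      1.0
    RHS
        RHS1      LIM1      1.0
    BOUNDS
     UP BND1      X1        4.0
    ENDATA

Row types are N (objective), L (≤), G (≥), and E (=); the COLUMNS section lists each variable's nonzero coefficients by row.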
Criss-cross algorithm
In mathematical optimization, the criss-cross algorithm is any of a family of algorithms for linear programming. Variants of the criss-cross algorithm also solve more general problems with linear inequality constraints and nonlinear objective functions; there are criss-cross algorithms for linear-fractional programming, quadratic programming, and linear complementarity problems.
Duality gap
In optimization problems in applied mathematics, the duality gap is the difference between the primal and dual solutions. If d* is the optimal dual value and p* is the optimal primal value, then the duality gap is equal to p* − d*. This value is always greater than or equal to 0 (for minimization problems), and it is zero if and only if strong duality holds.
Omega ratio
The Omega ratio is a risk-return performance measure of an investment asset, portfolio, or strategy. It was devised by Con Keating and William F. Shadwick in 2002 and is defined as the probability-weighted ratio of gains versus losses for some threshold return target.
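With F the cumulative distribution function of returns and θ the threshold, the usual definition reads:

    \Omega(\theta) \;=\; \frac{\int_{\theta}^{\infty} \bigl(1 - F(x)\bigr)\,dx}{\int_{-\infty}^{\theta} F(x)\,dx}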
Simplex algorithm
In mathematical optimization, Dantzig's simplex algorithm (or simplex method) is a popular algorithm for linear programming. The name of the algorithm is derived from the concept of a simplex and was suggested by T. S. Motzkin.
Total dual integrality
In mathematical optimization, total dual integrality is a sufficient condition for the integrality of a polyhedron. Thus, the optimization of a linear objective over the integral points of such a polyhedron can be done using techniques from linear programming.
Farkas' lemma
Farkas' lemma is a solvability theorem for a finite system of linear inequalities in mathematics. It was originally proven by the Hungarian mathematician Gyula Farkas. Farkas' lemma is the key result underpinning linear programming duality and has played a central role in the development of mathematical optimization.
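One common formulation (there are several equivalent variants): let A be a real m × n matrix and b a vector in R^m; then exactly one of the following holds.

    \text{(1)}\;\; \exists\, x \in \mathbb{R}^n:\; Ax = b,\ x \ge 0
    \qquad\text{or}\qquad
    \text{(2)}\;\; \exists\, y \in \mathbb{R}^m:\; A^\top y \ge 0,\ b^\top y < 0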
Theory of two-level planning
The theory of two-level planning (alternatively, Kornai–Liptak decomposition) is a method that decomposes large problems of linear optimization into sub-problems. This decomposition simplifies the solution of the overall problem.
LP-type problem
In the study of algorithms, an LP-type problem (also called a generalized linear program) is an optimization problem that shares certain properties with low-dimensional linear programs and that may be solved using similar algorithms.
Hilbert basis (linear programming)
The Hilbert basis of a convex cone C is a minimal set of integer vectors such that every integer vector in C is a conical combination of the vectors in the Hilbert basis with integer coefficients.
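A small two-dimensional illustration (a standard textbook-style example, not drawn from the text above): for the cone C in R^2 generated by (1, 0) and (1, 2), the integer vector (1, 1) lies in C but is not an integer conical combination of the two generators, so the Hilbert basis is

    \{(1,0),\ (1,1),\ (1,2)\}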
Ellipsoid method
In mathematical optimization, the ellipsoid method is an iterative method for minimizing convex functions. When specialized to solving feasible linear optimization problems with rational data, the ellipsoid method is an algorithm which finds an optimal solution in a number of steps that is polynomial in the input size.
Cashflow matching
Cash flow matching is a process of hedging in which a company or other entity matches its cash outflows (i.e., financial obligations) with its cash inflows over a given time horizon. It is a subset of immunization strategies in finance.
Minimum relevant variables in linear system
MINimum Relevant Variables in Linear System (Min-RVLS) is a problem in mathematical optimization. Given a linear program, it is required to find a feasible solution in which the number of non-zero variables is as small as possible.
Linear inequality
In mathematics, a linear inequality is an inequality which involves a linear function. A linear inequality contains one of the symbols of inequality: <, >, ≤, ≥, or ≠. Its solution set can be shown graphically as a region of the plane, typically a half-plane in the two-variable case.
Affine scaling
In mathematical optimization, affine scaling is an algorithm for solving linear programming problems. Specifically, it is an interior point method, discovered by Soviet mathematician I. I. Dikin in 1967 and reinvented in the U.S. in the mid-1980s.
Dantzig–Wolfe decomposition
Dantzig–Wolfe decomposition is an algorithm for solving linear programming problems with special structure. It was originally developed by George Dantzig and Philip Wolfe and initially published in 1960.
Prune and search
Prune and search is a method of solving optimization problems suggested by Nimrod Megiddo in 1983. The basic idea of the method is a recursive procedure in which at each step the input size is reduced ("pruned") by a constant factor.
Linear programming
Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships.
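A canonical statement of such a model in matrix notation (a standard form, not specific to any application mentioned above):

    \max_{x}\; c^\top x \quad \text{subject to} \quad Ax \le b,\ \ x \ge 0

where x is the vector of decision variables, c and b are given coefficient vectors, and A is a given constraint matrix.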
Klee–Minty cube
The Klee–Minty cube or Klee–Minty polytope (named after Victor Klee and George J. Minty) is a unit hypercube of variable dimension whose corners have been perturbed. Klee and Minty demonstrated that George Dantzig's simplex algorithm has poor worst-case performance when initialized at one corner of their "squashed cube".