Bilinear program

In mathematics, a bilinear program is a nonlinear optimization problem whose objective or constraint functions are bilinear.

Dual norm

In functional analysis, the dual norm is a measure of size for a continuous linear functional defined on a normed vector space.

Mean field annealing

Mean field annealing is a deterministic approximation to the simulated annealing technique of solving optimization problems. This method uses mean field theory and is based on Peierls' inequality.

Optimal apportionment

Optimal apportionment is an approach to apportionment that is based on mathematical optimization. In a problem of apportionment, there is a resource to allocate, for example seats in a parliament.

Dead-end elimination

The dead-end elimination algorithm (DEE) is a method for minimizing a function over a discrete set of independent variables. The basic idea is to identify "dead ends", i.e., combinations of variables that cannot be part of the optimal solution, and to eliminate them from the search.

Local optimum

In applied mathematics and computer science, a local optimum of an optimization problem is a solution that is optimal (either maximal or minimal) within a neighboring set of candidate solutions.

Low-rank approximation

In mathematics, low-rank approximation is a minimization problem, in which the cost function measures the fit between a given matrix (the data) and an approximating matrix (the optimization variable), subject to a constraint that the approximating matrix has reduced rank.

Maximum theorem

The maximum theorem provides conditions for the continuity of an optimized function and the set of its maximizers with respect to its parameters. The statement was first proven by Claude Berge in 1959.

Relaxation (approximation)

In mathematical optimization and related fields, relaxation is a modeling strategy. A relaxation is an approximation of a difficult problem by a nearby problem that is easier to solve. A solution of the relaxed problem provides information about the original problem.

Open energy system models

Open energy system models are energy system models that are open source. However, some of them may use third party proprietary software as part of their workflows to input, process, or output data.

Central composite design

In statistics, a central composite design is an experimental design, useful in response surface methodology, for building a second order (quadratic) model for the response variable without needing to use a complete three-level factorial experiment.

Utility maximization problem

Utility maximization was first developed by utilitarian philosophers Jeremy Bentham and John Stuart Mill. In microeconomics, the utility maximization problem is the problem consumers face: "How should I spend my money in order to maximize my utility?"

Stress majorization

Stress majorization is an optimization strategy used in multidimensional scaling (MDS) in which, for a set of data items, a configuration of points in a low-dimensional space is sought that minimizes a so-called stress function.

Vector optimization

Vector optimization is a subarea of mathematical optimization in which optimization problems with vector-valued objective functions are optimized with respect to a given partial ordering and subject to certain constraints.

Mathematical programming with equilibrium constraints

Mathematical programming with equilibrium constraints (MPEC) is the study of constrained optimization problems where the constraints include variational inequalities or complementarities.

Pareto efficiency

Pareto efficiency or Pareto optimality is a situation where no individual or preference criterion can be made better off without making at least one individual or preference criterion worse off.
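As an illustration not found in the source, a minimal Python sketch of the Pareto-dominance test that underlies this definition (assuming larger values are better in every criterion):

```python
def pareto_dominates(a, b):
    """a Pareto-dominates b if a is at least as good in every criterion
    and strictly better in at least one (larger is better here)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

print(pareto_dominates((3, 5), (2, 5)))  # True
print(pareto_dominates((3, 1), (2, 5)))  # False: a trade-off, neither dominates
```

An outcome is Pareto efficient when no feasible alternative dominates it in this sense.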

Response surface methodology

In statistics, response surface methodology (RSM) explores the relationships between several explanatory variables and one or more response variables. The method was introduced by George E. P. Box and K. B. Wilson in 1951.

PDE-constrained optimization

PDE-constrained optimization is a subset of mathematical optimization where at least one of the constraints may be expressed as a partial differential equation.

No free lunch in search and optimization

In computational complexity and optimization the no free lunch theorem is a result that states that for certain types of mathematical problems, the computational cost of finding a solution, averaged over all problems in the class, is the same for any solution method.

Smoothed analysis

In theoretical computer science, smoothed analysis is a way of measuring the complexity of an algorithm. Since its introduction in 2001, smoothed analysis has been used as a basis for considerable research.

Robust optimization

Robust optimization is a field of mathematical optimization theory that deals with optimization problems in which a certain measure of robustness is sought against uncertainty that can be represented as deterministic variability in the value of the parameters of the problem.

Sion's minimax theorem

In mathematics, and in particular game theory, Sion's minimax theorem is a generalization of John von Neumann's minimax theorem, named after Maurice Sion.

Energy minimization

In the field of computational chemistry, energy minimization (also called energy optimization, geometry minimization, or geometry optimization) is the process of finding an arrangement in space of a collection of atoms for which the net inter-atomic force on each atom is acceptably close to zero.

Pseudo-Boolean function

In mathematics and optimization, a pseudo-Boolean function is a function of the form f : B^n → R, where B = {0, 1} is a Boolean domain and n is a nonnegative integer called the arity of the function.

Constrained optimization

In mathematical optimization, constrained optimization (in some contexts called constraint optimization) is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables.

Rosenbrock function

In mathematical optimization, the Rosenbrock function is a non-convex function, introduced by Howard H. Rosenbrock in 1960, which is used as a performance test problem for optimization algorithms.
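A minimal Python sketch (not from the source) of the standard two-variable form, with its global minimum of 0 at (a, a²):

```python
def rosenbrock(x, y, a=1.0, b=100.0):
    """Rosenbrock 'banana' function; global minimum 0 at (a, a**2)."""
    return (a - x) ** 2 + b * (y - x ** 2) ** 2

print(rosenbrock(1.0, 1.0))  # 0.0, the global minimum
print(rosenbrock(0.0, 0.0))  # 1.0
```

The narrow curved valley around the minimum is what makes it a hard test case for line-search methods.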

Bayesian efficiency

Bayesian efficiency is an analog of Pareto efficiency for situations in which there is incomplete information. Under Pareto efficiency, an allocation of a resource is Pareto efficient if there is no other allocation that makes at least one person better off without making anyone else worse off.

Linear search problem

In computational complexity theory, the linear search problem is an optimal search problem introduced by Richard E. Bellman and independently considered by Anatole Beck.

Rastrigin function

In mathematical optimization, the Rastrigin function is a non-convex function used as a performance test problem for optimization algorithms. It is a typical example of a non-linear multimodal function.
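A minimal Python sketch (not from the source) of the usual form with A = 10; the global minimum 0 is at the origin, surrounded by a regular grid of local minima:

```python
import math

def rastrigin(xs, A=10.0):
    """Rastrigin function; global minimum 0 at the origin."""
    return A * len(xs) + sum(x * x - A * math.cos(2 * math.pi * x) for x in xs)

print(rastrigin([0.0, 0.0]))  # 0.0, the global minimum
print(rastrigin([1.0, 1.0]))  # 2.0, a nearby local minimum
```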

Basis pursuit

Basis pursuit is the mathematical optimization problem of the form: minimize ||x||_1 subject to y = Ax, where x is an N-dimensional solution vector (signal), y is an M-dimensional vector of observations (measurements), and A is an M × N transform matrix.

Mixed complementarity problem

Mixed Complementarity Problem (MCP) is a problem formulation in mathematical programming. Many well-known problem types are special cases of, or may be reduced to, MCP. It is a generalization of the nonlinear complementarity problem (NCP).

Multiple-criteria decision analysis

Multiple-criteria decision-making (MCDM) or multiple-criteria decision analysis (MCDA) is a sub-discipline of operations research that explicitly evaluates multiple conflicting criteria in decision making.

S-procedure

The S-procedure or S-lemma is a mathematical result that gives conditions under which a particular quadratic inequality is a consequence of another quadratic inequality.

Nearest neighbor search

Nearest neighbor search (NNS), as a form of proximity search, is the optimization problem of finding the point in a given set that is closest (or most similar) to a given point. Closeness is typically expressed in terms of a dissimilarity function: the less similar the objects, the larger the function values.
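As an illustration not found in the source, the baseline exact method is a linear scan, sketched here in Python with squared Euclidean distance as the dissimilarity:

```python
def nearest_neighbor(query, points):
    """Linear-scan NNS: return the point in `points` closest to `query`
    under squared Euclidean distance (O(n) per query)."""
    def d2(p):
        return sum((a - b) ** 2 for a, b in zip(p, query))
    return min(points, key=d2)

pts = [(0, 0), (3, 4), (1, 1)]
print(nearest_neighbor((2, 2), pts))  # (1, 1)
```

Practical NNS systems replace the scan with spatial indexes (k-d trees, locality-sensitive hashing) while keeping this problem statement.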

Applicable mathematics

No description available.

Variational Monte Carlo

In computational physics, variational Monte Carlo (VMC) is a quantum Monte Carlo method that applies the variational method to approximate the ground state of a quantum system.

Quadratically constrained quadratic program

In mathematical optimization, a quadratically constrained quadratic program (QCQP) is an optimization problem in which both the objective function and the constraints are quadratic functions.

Proportional-fair rule

In operations research and social choice, the proportional-fair (PF) rule is a rule saying that, among all possible alternatives, one should pick an alternative that cannot be improved.

Superiorization

Superiorization is an iterative method for constrained optimization. It is used for improving the efficacy of an iterative method whose convergence is resilient to certain kinds of perturbations.

Karush–Kuhn–Tucker conditions

In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests (sometimes called first-order necessary conditions) for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied.

VIKOR method

The VIKOR method is a multi-criteria decision making (MCDM) or multi-criteria decision analysis method. It was originally developed by Serafim Opricovic to solve decision problems with conflicting and noncommensurable (different units) criteria.

Utilitarian rule

In social choice and operations research, the utilitarian rule (also called the max-sum rule) is a rule saying that, among all possible alternatives, society should pick the alternative which maximizes the sum of the utilities of all individuals.

TOPSIS

The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is a multi-criteria decision analysis method, which was originally developed by Ching-Lai Hwang and Yoon in 1981.

Hydrological optimization

Hydrological optimization applies mathematical optimization techniques (such as dynamic programming, linear programming, integer programming, or quadratic programming) to water-related problems.

Linear complementarity problem

In mathematical optimization theory, the linear complementarity problem (LCP) arises frequently in computational mechanics and encompasses the well-known quadratic programming as a special case.

Descent direction

In optimization, a descent direction for an objective function at a given point is a vector along which a sufficiently small step decreases the function's value. For a differentiable function f, a direction d is a descent direction at x when the directional derivative ∇f(x) · d is negative; iterative methods such as gradient descent compute such a direction at each iterate and then choose a step size along it.
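A minimal Python sketch (not from the source) of the first-order test that a direction is a descent direction:

```python
def is_descent_direction(grad, d):
    """d is a descent direction at x iff the directional derivative
    grad(f)(x) . d is negative."""
    return sum(g * di for g, di in zip(grad, d)) < 0

# f(x, y) = x**2 + y**2 at (1, 1): the gradient there is (2, 2).
print(is_descent_direction((2, 2), (-1, -1)))  # True (negative-gradient direction)
print(is_descent_direction((2, 2), (1, 0)))    # False
```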

Fritz John conditions

The Fritz John conditions (abbr. FJ conditions), in mathematics, are a necessary condition for a solution in nonlinear programming to be optimal. They are used as a lemma in the proof of the Karush–Kuhn–Tucker conditions.

Simulation-based optimization

Simulation-based optimization (also known simply as simulation optimization) integrates optimization techniques into simulation modeling and analysis.

Analysis of Boolean functions

In mathematics and theoretical computer science, analysis of Boolean functions is the study of real-valued functions on {0, 1}^n or {−1, 1}^n (such functions are sometimes known as pseudo-Boolean functions) from a spectral perspective.

Distributed constraint optimization

Distributed constraint optimization (DCOP or DisCOP) is the distributed analogue to constraint optimization. A DCOP is a problem in which a group of agents must distributedly choose values for a set of variables such that the cost of a set of constraints over the variables is minimized.

Bauer maximum principle

Bauer's maximum principle is the following theorem in mathematical optimization: Any function that is convex and continuous, and defined on a set that is convex and compact, attains its maximum at some extreme point of that set.

Wolfe conditions

In the unconstrained minimization problem, the Wolfe conditions are a set of inequalities for performing inexact line search, especially in quasi-Newton methods, first published by Philip Wolfe in 1969.

Bilevel optimization

Bilevel optimization is a special kind of optimization where one problem is embedded (nested) within another. The outer optimization task is commonly referred to as the upper-level optimization task, and the inner optimization task as the lower-level optimization task.

Convex optimization

Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets).

Highly optimized tolerance

In applied mathematics, highly optimized tolerance (HOT) is a method of generating power law behavior in systems by including a global optimization principle. It was developed by Jean M. Carlson and John Doyle.

JuMP

JuMP is an algebraic modeling language and a collection of supporting packages for mathematical optimization embedded in the Julia programming language.

Faustmann's formula

Faustmann's formula, or the Faustmann model, gives the present value of the income stream for forest rotation. It was derived by the German forester Martin Faustmann in 1849.

Optimal control

Optimal control theory is a branch of mathematical optimization that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research.

Energy modeling

Energy modeling or energy system modeling is the process of building computer models of energy systems in order to analyze them. Such models often employ scenario analysis to investigate different assumptions about future technical and economic conditions.

Gradient descent

In mathematics, gradient descent (also often called steepest descent) is a first-order iterative optimization algorithm for finding a local minimum of a differentiable function. The idea is to take repeated steps in the opposite direction of the gradient (or approximate gradient) of the function at the current point, because this is the direction of steepest descent.
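A minimal Python sketch (not from the source) of the update rule x ← x − η∇f(x) on a simple quadratic; the function, starting point, and learning rate are illustrative choices:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimise a differentiable function by repeatedly stepping
    against its gradient with a fixed learning rate."""
    x = x0
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x

# f(x, y) = (x - 3)**2 + (y + 1)**2 has gradient (2(x-3), 2(y+1)) and minimum (3, -1).
grad = lambda v: [2 * (v[0] - 3), 2 * (v[1] + 1)]
print(gradient_descent(grad, [0.0, 0.0]))  # converges to approximately [3.0, -1.0]
```

With a fixed step size the iterates contract geometrically toward the minimizer on this quadratic; non-convex functions only guarantee convergence to a local minimum or stationary point.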

Multidisciplinary design optimization

Multi-disciplinary design optimization (MDO) is a field of engineering that uses optimization methods to solve design problems incorporating a number of disciplines.

MCACEA

MCACEA (Multiple Coordinated Agents Coevolution Evolutionary Algorithm) is a general framework that uses a single evolutionary algorithm (EA) per agent, with the agents sharing their optimal solutions in order to coordinate the group.

Elbow of a curve

No description available.

Wing-shape optimization

Wing-shape optimization is a software implementation of shape optimization primarily used for aircraft design. This allows engineers to produce more efficient and cheaper aircraft designs.

P versus NP problem

The P versus NP problem is a major unsolved problem in theoretical computer science. In informal terms, it asks whether every problem whose solution can be quickly verified can also be quickly solved.

Similarity-based-TOPSIS

Similarity based TOPSIS is a multi-criteria decision-making method. The name TOPSIS is short for the Technique for Order Performance by Similarity to Ideal Solution.

Jeep problem

The jeep problem, desert crossing problem or exploration problem is a mathematics problem in which a jeep must maximize the distance it can travel into a desert with a given quantity of fuel.

Ordinal priority approach

Ordinal priority approach (OPA) is a multiple-criteria decision analysis method that aids in solving group decision-making problems based on preference relations.

Multi-objective optimization

Multi-objective optimization (also known as multi-objective programming, vector optimization, multicriteria optimization, multiattribute optimization or Pareto optimization) is an area of multiple-criteria decision making concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously.

Optimal design

In the design of experiments, optimal designs (or optimum designs) are a class of experimental designs that are optimal with respect to some statistical criterion.

Shape optimization

Shape optimization is part of the field of optimal control theory. The typical problem is to find the shape which is optimal in that it minimizes a certain cost functional while satisfying given constraints.

Clarke's generalized Jacobian

In mathematics, Clarke's generalized Jacobian is a generalization of the Jacobian matrix of a smooth function to non-smooth functions. It was introduced by Francis H. Clarke.

Lazy caterer's sequence

The lazy caterer's sequence, more formally known as the central polygonal numbers, describes the maximum number of pieces of a disk (a pancake or pizza is usually used to describe the situation) that can be made with a given number of straight cuts.
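The closed form p(n) = (n² + n + 2)/2 can be sketched in Python (an illustration not found in the source):

```python
def lazy_caterer(n):
    """Maximum number of pieces of a disk from n straight cuts:
    p(n) = (n**2 + n + 2) / 2, always an integer."""
    return (n * n + n + 2) // 2

print([lazy_caterer(n) for n in range(6)])  # [1, 2, 4, 7, 11, 16]
```

Each new cut can cross every previous cut at most once, adding at most n new pieces at the n-th cut, which is where the quadratic formula comes from.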

Complementarity theory

A complementarity problem is a type of mathematical optimization problem. It is the problem of optimizing (minimizing or maximizing) a function of two vector variables subject to certain requirements (constraints), including that the inner product of the two vectors must equal zero.

Chebyshev center

In geometry, the Chebyshev center of a bounded set having non-empty interior is the center of the minimal-radius ball enclosing the entire set, or alternatively (and non-equivalently) the center of the largest inscribed ball of the set.

Multicriteria classification

In multiple criteria decision aiding (MCDA), multicriteria classification (or sorting) involves problems where a finite set of alternative actions should be assigned into a predefined set of preference-ordered categories.

Measuring attractiveness by a categorical-based evaluation technique (MACBETH)

Measuring attractiveness through a categorical-based evaluation technique (MACBETH) is a multiple-criteria decision analysis (MCDA) method that evaluates options against multiple criteria.

Lagrange multiplier

In mathematical optimization, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equality constraints (i.e., subject to the condition that one or more equations have to be satisfied exactly by the chosen values of the variables).
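As a minimal worked example (not from the source), maximizing f(x, y) = xy subject to the constraint g(x, y) = x + y − 1 = 0:

```latex
\nabla f = \lambda \nabla g
\;\Rightarrow\; (y,\, x) = \lambda\,(1,\, 1)
\;\Rightarrow\; x = y = \tfrac{1}{2},
\qquad f\!\left(\tfrac{1}{2}, \tfrac{1}{2}\right) = \tfrac{1}{4}.
```

The stationarity condition forces the gradient of the objective to be parallel to the gradient of the constraint, which together with the constraint equation pins down the candidate optimum.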

Topological derivative

The topological derivative is, conceptually, a derivative of a shape functional with respect to infinitesimal changes in its topology, such as adding an infinitesimal hole or crack.

Sum-of-squares optimization

A sum-of-squares optimization program is an optimization problem with a linear cost function and a particular type of constraint on the decision variables. These constraints require that, when the decision variables are used as coefficients in certain polynomials, those polynomials have the polynomial sum-of-squares property.

Continuous optimization

Continuous optimization is a branch of optimization in applied mathematics. As opposed to discrete optimization, the variables used in the objective function are required to be continuous variables, that is, chosen from a set of real values between which there are no gaps.

Ackley function

In mathematical optimization, the Ackley function is a non-convex function used as a performance test problem for optimization algorithms. It was proposed by David Ackley in his 1987 PhD dissertation.
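A minimal Python sketch (not from the source) of the common two-variable form; the global minimum is 0 at the origin, surrounded by many local minima:

```python
import math

def ackley(x, y):
    """Ackley test function; global minimum f(0, 0) = 0."""
    return (-20.0 * math.exp(-0.2 * math.sqrt(0.5 * (x * x + y * y)))
            - math.exp(0.5 * (math.cos(2 * math.pi * x) + math.cos(2 * math.pi * y)))
            + math.e + 20.0)

print(abs(ackley(0.0, 0.0)) < 1e-12)  # True: global minimum at the origin
print(ackley(1.0, 1.0) > 1.0)         # True: the surface rises quickly away from it
```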

Steiner's calculus problem

Steiner's problem, named after Jakob Steiner, is the problem of finding the maximum of the function f(x) = x^(1/x). The maximum is attained at x = e, where e denotes the base of the natural logarithm.
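A quick numerical check (an illustration not found in the source) that x = e beats nearby integer arguments:

```python
import math

def steiner(x):
    """Steiner's function x**(1/x), maximised at x = e."""
    return x ** (1.0 / x)

print(steiner(math.e) > steiner(2.0))  # True
print(steiner(math.e) > steiner(3.0))  # True (3**(1/3) is the best integer value)
```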

Least-squares spectral analysis

Least-squares spectral analysis (LSSA) is a method of estimating a frequency spectrum, based on a least squares fit of sinusoids to data samples, similar to Fourier analysis.

Basis pursuit denoising

In applied mathematics and statistics, basis pursuit denoising (BPDN) refers to the mathematical optimization problem: minimize (1/2)||y − Ax||² + λ||x||_1 over x, where λ is a parameter that controls the trade-off between sparsity and reconstruction fidelity.

Goal programming

Goal programming is a branch of multiobjective optimization, which in turn is a branch of multi-criteria decision analysis (MCDA). It can be thought of as an extension or generalisation of linear programming to handle multiple, normally conflicting objective measures.

Online optimization

Online optimization is a field of optimization theory, more popular in computer science and operations research, that deals with optimization problems having no or incomplete knowledge of the future.

Himmelblau's function

In mathematical optimization, Himmelblau's function is a multi-modal function used to test the performance of optimization algorithms. It is defined by f(x, y) = (x^2 + y − 11)^2 + (x + y^2 − 7)^2 and has one local maximum and four identical local minima at which the function value is zero.
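A minimal Python sketch (not from the source); (3, 2) is the one minimum with simple integer coordinates, the other three have irrational coordinates:

```python
def himmelblau(x, y):
    """Himmelblau's function; four identical local minima with value 0."""
    return (x * x + y - 11) ** 2 + (x + y * y - 7) ** 2

print(himmelblau(3.0, 2.0))  # 0.0, one of the four minima
print(himmelblau(0.0, 0.0))  # 170.0
```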

Cake number

In mathematics, the cake number, denoted by Cn, is the maximum number of regions into which a 3-dimensional cube can be partitioned by exactly n planes. The cake number is so called because one may imagine each partition of the cube by a plane as a slice made by a knife through a cube-shaped cake.
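The closed form Cn = (n³ + 5n + 6)/6 can be sketched in Python (an illustration not found in the source):

```python
def cake_number(n):
    """Maximum number of regions of a cube cut by n planes:
    C(n) = (n**3 + 5*n + 6) / 6, always an integer."""
    return (n ** 3 + 5 * n + 6) // 6

print([cake_number(n) for n in range(6)])  # [1, 2, 4, 8, 15, 26]
```

It is the three-dimensional analogue of the lazy caterer's sequence, which counts regions of a disk cut by lines.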

Mathematical Optimization Society

The Mathematical Optimization Society (MOS), known as the Mathematical Programming Society until 2010, is an international association of researchers active in optimization.

Knee of a curve

In mathematics, a knee of a curve (or elbow of a curve) is a point where the curve visibly bends, specifically from high slope to low slope (flat or close to flat), or in the other direction.

Binary constraint

A binary constraint, in mathematical optimization, is a constraint that involves exactly two variables. For example, consider the n-queens problem, where the goal is to place n chess queens on an n-by-n chessboard so that no two queens attack each other.

Deterministic global optimization

Deterministic global optimization is a branch of numerical optimization which focuses on finding the global solutions of an optimization problem whilst providing theoretical guarantees that the reported solution is indeed the global one.

Multi-attribute global inference of quality

Multi-attribute global inference of quality (MAGIQ) is a multi-criteria decision analysis technique. MAGIQ is based on a hierarchical decomposition of comparison attributes and rating assignment using rank order centroids.

Oracle complexity (optimization)

In mathematical optimization, oracle complexity is a standard theoretical framework to study the computational requirements for solving classes of optimization problems. It is suitable for analyzing iterative algorithms that access the objective function only through queries to an oracle.

NP-completeness

In computational complexity theory, a problem is NP-complete when:
1. it is a problem for which the correctness of each solution can be verified quickly (namely, in polynomial time) and a brute-force search algorithm can find a solution by trying all possible solutions;
2. the problem can be used to simulate every other problem for which we can verify quickly that a solution is correct.

Geometric median

In geometry, the geometric median of a discrete set of sample points in a Euclidean space is the point minimizing the sum of distances to the sample points. This generalizes the median, which has the property of minimizing the sum of distances for one-dimensional data.
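There is no closed form in general, but Weiszfeld's algorithm (an iteratively re-weighted average) converges to it; a Python sketch not found in the source, with an illustrative triangle of sample points:

```python
import math

def geometric_median(points, iters=100):
    """Weiszfeld's algorithm: an iteratively re-weighted average that
    converges to the point minimising the sum of distances to the samples."""
    x = [sum(c) / len(points) for c in zip(*points)]  # start at the centroid
    for _ in range(iters):
        num = [0.0] * len(x)
        den = 0.0
        for p in points:
            d = math.dist(p, x)
            if d == 0.0:  # iterate landed exactly on a sample point
                return list(p)
            num = [n + pi / d for n, pi in zip(num, p)]
            den += 1.0 / d
        x = [n / den for n in num]
    return x

pts = [(0.0, 0.0), (3.0, 0.0), (0.0, 4.0)]
m = geometric_median(pts)
cost = lambda q: sum(math.dist(p, q) for p in pts)
print(cost(m) <= cost((1.0, 4.0 / 3.0)))  # True: no worse than the centroid
```

Each update is a majorize-minimize step, so the objective never increases along the iterations.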

Superiority and inferiority ranking method

The superiority and inferiority ranking method (or SIR method) is a multi-criteria decision making model (MCDA) which can handle real data and provides six different preference structures.

Paper bag problem

In geometry, the paper bag problem or teabag problem is to calculate the maximum possible inflated volume of an initially flat sealed rectangular bag which has the same shape as a cushion or pillow, made of two pieces of material which can bend but not stretch.

ÉLECTRE

ÉLECTRE is a family of multi-criteria decision analysis (MCDA) methods that originated in Europe in the mid-1960s. The acronym ÉLECTRE stands for ÉLimination Et Choix Traduisant la REalité ("Elimination and Choice Expressing Reality").

Mixed linear complementarity problem

In mathematical optimization theory, the mixed linear complementarity problem, often abbreviated as MLCP or LMCP, is a generalization of the linear complementarity problem to include free variables.

Mathematical optimization

Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives.

Minimax theorem

In the mathematical area of game theory, a minimax theorem is a theorem providing conditions that guarantee that the max–min inequality is also an equality. The first theorem in this sense is von Neumann's minimax theorem of 1928.

Shekel function

The Shekel function is a multidimensional, multimodal, continuous, deterministic function commonly used as a test function for testing optimization techniques.

Map segmentation

In mathematics, the map segmentation problem is a kind of optimization problem. It involves a certain geographic region that has to be partitioned into smaller sub-regions in order to achieve a certain goal.

Proximal operator

In mathematical optimization, the proximal operator of a proper, lower semi-continuous convex function f from a Hilbert space to (−∞, +∞] is defined by prox_f(x) = argmin_u ( f(u) + (1/2)||u − x||² ).
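For f(u) = λ|u| on the real line, the proximal operator has the well-known closed form of soft-thresholding; a Python sketch not found in the source:

```python
def prox_l1(x, lam):
    """Proximal operator of f(u) = lam * |u| (soft-thresholding):
    argmin over u of lam*|u| + 0.5*(u - x)**2."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

print(prox_l1(3.0, 1.0))   # 2.0: shrink toward zero by lam
print(prox_l1(-0.5, 1.0))  # 0.0: small inputs are thresholded to zero
```

This shrinkage step is the workhorse of proximal-gradient methods for L1-regularised problems such as basis pursuit denoising.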

Trajectory optimization

Trajectory optimization is the process of designing a trajectory that minimizes (or maximizes) some measure of performance while satisfying a set of constraints.

Compressed sensing

Compressed sensing (also known as compressive sensing, compressive sampling, or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal, by finding solutions to underdetermined linear systems.

Dispersive flies optimisation

Dispersive flies optimisation (DFO) is a bare-bones swarm intelligence algorithm which is inspired by the swarming behaviour of flies hovering over food sources.

Mirror descent

In mathematics, mirror descent is an iterative optimization algorithm for finding a local minimum of a differentiable function. It generalizes algorithms such as gradient descent and multiplicative weights update.

Corner solution

A corner solution is a special solution to an agent's maximization problem in which the quantity of one of the arguments in the maximized function is zero.

Signomial

A signomial is an algebraic function of one or more independent variables. It is perhaps most easily thought of as an algebraic extension of multivariable polynomials, an extension that permits exponents to be arbitrary real numbers.

Separation oracle

A separation oracle (also called a cutting-plane oracle) is a concept in the mathematical theory of convex optimization. It is a method to describe a convex set that is given as an input to an optimization algorithm.

Feasible region

In mathematical optimization, a feasible region, feasible set, search space, or solution space is the set of all possible points (sets of values of the choice variables) of an optimization problem that satisfy the problem's constraints, potentially including inequalities, equalities, and integer constraints.

Guess value

In mathematical modeling, a guess value is more commonly called a starting value or initial value. These are necessary for most optimization problems which use search algorithms, because those algorithms are iterative and need a point from which to start the search.

Maxima and minima

In mathematical analysis, the maxima and minima (the respective plurals of maximum and minimum) of a function, known collectively as extrema (the plural of extremum), are the largest and smallest values taken by the function, either within a given range (the local or relative extrema) or on the entire domain (the global or absolute extrema).

Topology optimization

Topology optimization (TO) is a mathematical method that optimizes material layout within a given design space, for a given set of loads, boundary conditions and constraints, with the goal of maximizing the performance of the system.

Hyperparameter optimization

In machine learning, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is used to control the learning process.

Backtracking line search

In (unconstrained) mathematical optimization, a backtracking line search is a line search method to determine the amount to move along a given search direction. Its use requires that the objective function is differentiable and that its gradient is known.
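A minimal Python sketch (not from the source) of the Armijo-style backtracking step: halve the step until it gives sufficient decrease. The test function and parameters are illustrative, and `d` must be a descent direction for the loop to terminate:

```python
def backtracking_line_search(f, grad, x, d, alpha=1.0, rho=0.5, c=1e-4):
    """Shrink the step size until the Armijo sufficient-decrease condition
    f(x + alpha*d) <= f(x) + c*alpha*grad(x).d holds (d must be a descent direction)."""
    fx = f(x)
    slope = sum(g * di for g, di in zip(grad(x), d))
    while f([xi + alpha * di for xi, di in zip(x, d)]) > fx + c * alpha * slope:
        alpha *= rho
    return alpha

# f(x) = x**2 at x = 1, stepping along the negative gradient d = [-2].
f = lambda v: v[0] ** 2
grad = lambda v: [2 * v[0]]
print(backtracking_line_search(f, grad, [1.0], [-2.0]))  # 0.5
```

With alpha = 1 the step overshoots to x = −1 (no decrease), so one halving to alpha = 0.5 lands exactly on the minimizer.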

Stochastic multicriteria acceptability analysis

Stochastic multicriteria acceptability analysis (SMAA) is a multiple-criteria decision analysis method for problems with missing or incomplete information.

Discrete optimization

Discrete optimization is a branch of optimization in applied mathematics and computer science.

International Society for Structural and Multidisciplinary Optimization

The International Society for Structural and Multidisciplinary Optimization is a learned society in the field of multidisciplinary design optimization that was founded in October 1991. It has more tha

Keynes–Ramsey rule

In macroeconomics, the Keynes–Ramsey rule is a necessary condition for the optimality of intertemporal consumption choice. It is usually expressed as a differential equation relating the rate of change of consumption to the interest rate and the rate of time preference.

Walrasian auction

A Walrasian auction, introduced by Léon Walras, is a type of simultaneous auction where each agent calculates its demand for the good at every possible price and submits this to an auctioneer. The price is then set so that the total demand across all agents equals the total amount of the good.

Optimistic knowledge gradient

In statistics, the optimistic knowledge gradient is an approximation policy proposed by Xi Chen, Qihang Lin and Dengyong Zhou in 2013.

Max–min inequality

In mathematics, the max–min inequality states that for any function f : Z × W → R, sup over z of (inf over w of f(z, w)) ≤ inf over w of (sup over z of f(z, w)). When equality holds, one says that f, W, and Z satisfy a strong max–min property (or a saddle-point property).
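On finite sets the inequality can be checked by brute force; a Python sketch (not from the source) with an example where it is strict, i.e. no saddle point exists:

```python
Z = [0, 1, 2]
W = [0, 1, 2]
f = lambda z, w: (z - w) ** 2

lhs = max(min(f(z, w) for w in W) for z in Z)  # max_z min_w f
rhs = min(max(f(z, w) for z in Z) for w in W)  # min_w max_z f
print(lhs, rhs)  # 0 1: the inequality holds strictly here
```

The minimizing player, moving second, can always match z (giving 0), while moving first it can only limit the worst case to 1 at w = 1.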

Wald's maximin model

In decision theory and game theory, Wald's maximin model is a non-probabilistic decision-making model according to which decisions are ranked on the basis of their worst-case outcomes: the optimal decision is one whose worst outcome is at least as good as the worst outcome of any other decision.

Constraint (mathematics)

In mathematics, a constraint is a condition of an optimization problem that the solution must satisfy. There are several types of constraints, primarily equality constraints, inequality constraints, and integer constraints.

© 2023 Useful Links.