Weak duality

In applied mathematics, weak duality is a concept in optimization which states that the duality gap is always greater than or equal to 0. That means the solution to the dual (maximization) problem is always less than or equal to the solution of the primal (minimization) problem.
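
The bound can be checked numerically. Below is a minimal sketch on a toy problem of our choosing (not from the article): minimize x^2 subject to x >= 1, whose optimal value is p* = 1. The Lagrangian is L(x, lam) = x^2 + lam*(1 - x), and minimizing over x gives the dual function g(lam) = lam - lam^2/4.

```python
# Weak duality demo on a hypothetical toy problem:
# primal: minimize x^2 subject to x >= 1, optimal value p* = 1.
# Dual function: g(lam) = min_x [x^2 + lam*(1 - x)] = lam - lam**2 / 4,
# attained at x = lam / 2.

def dual_value(lam):
    """Dual function g(lam) for the toy problem above."""
    return lam - lam ** 2 / 4.0

p_star = 1.0  # primal optimum, attained at x = 1

# Weak duality: every dual value is a lower bound on the primal optimum,
# so the duality gap p* - g(lam) is >= 0 for every lam >= 0.
gaps = [p_star - dual_value(0.1 * k) for k in range(101)]
assert all(gap >= 0 for gap in gaps)
# Here strong duality also holds: the gap closes at lam = 2.
```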

Test functions for optimization

In applied mathematics, test functions, also known as artificial landscapes, are useful to evaluate characteristics of optimization algorithms, such as:
* Convergence rate.
* Precision.
* Robustness.
* General performance.
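
One classic artificial landscape is the Rosenbrock function (our choice of example; the article lists many others). Its narrow, curved valley makes it a standard stress test for convergence rate.

```python
# Rosenbrock function: a standard optimization benchmark with a
# global minimum of 0 at (a, a**2), i.e. (1, 1) for the defaults.

def rosenbrock(x, y, a=1.0, b=100.0):
    """Rosenbrock 'banana' function."""
    return (a - x) ** 2 + b * (y - x ** 2) ** 2

assert rosenbrock(1.0, 1.0) == 0.0    # the global minimum
assert rosenbrock(0.0, 0.0) == 1.0    # a point away from the minimum
assert rosenbrock(0.5, 0.25) == 0.25  # on the valley floor y = x**2
```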

Slater's condition

In mathematics, Slater's condition (or Slater condition) is a sufficient condition for strong duality to hold for a convex optimization problem, named after Morton L. Slater. Informally, Slater's condition states that the feasible region must have an interior point (a strictly feasible point).
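
Strict feasibility is easy to test for a given point. A minimal sketch with two hypothetical convex constraints g_i(x, y) <= 0 of our own choosing:

```python
# Slater's condition asks for a strictly feasible point: one at which
# every inequality constraint holds with strict inequality.

def g1(x, y):
    return x ** 2 + y ** 2 - 1.0   # unit-disc constraint

def g2(x, y):
    return x + y - 1.0             # half-plane constraint

def is_strictly_feasible(x, y, constraints=(g1, g2)):
    """A Slater point satisfies every constraint strictly."""
    return all(g(x, y) < 0 for g in constraints)

assert is_strictly_feasible(0.0, 0.0)       # interior point: Slater holds
assert not is_strictly_feasible(1.0, 0.0)   # boundary point, not strict
```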

Maximum theorem

The maximum theorem provides conditions for the continuity of an optimized function and the set of its maximizers with respect to its parameters. The statement was first proven by Claude Berge in 1959.

Geometric programming

A geometric program (GP) is an optimization problem of the form: minimize f_0(x) subject to f_i(x) <= 1 for i = 1, ..., m, and g_j(x) = 1 for j = 1, ..., p, where f_0, ..., f_m are posynomials and g_1, ..., g_p are monomials. In the context of geometric programming (unlike standard mathematics), a monomial is a function of the form h(x) = c * x_1^{a_1} * ... * x_n^{a_n}, defined for x with positive components, where c > 0 and the exponents a_i are real numbers.

Subgradient method

Subgradient methods are iterative methods for solving convex minimization problems. Originally developed by Naum Z. Shor and others in the 1960s and 1970s, subgradient methods are convergent when applied even to a non-differentiable objective function.
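
The method can be sketched in a few lines on a non-differentiable convex function of our choosing, f(x) = |x - 3|. A subgradient at x is sign(x - 3) (any value in [-1, 1] is valid at the kink x = 3); with diminishing step sizes the best iterate approaches the minimizer x* = 3.

```python
# Subgradient method sketch for f(x) = |x - 3| (a toy example).

def f(x):
    return abs(x - 3.0)

def subgradient(x):
    """One valid subgradient of |x - 3| at x."""
    return 1.0 if x > 3.0 else (-1.0 if x < 3.0 else 0.0)

x = 0.0
best_x, best_f = x, f(x)
for t in range(2000):
    step = 1.0 / (t + 1)          # diminishing step-size rule
    x = x - step * subgradient(x)
    if f(x) < best_f:             # subgradient steps need not descend,
        best_x, best_f = x, f(x)  # so track the best point seen so far

assert best_f < 1e-2              # best iterate is close to x* = 3
```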

Pseudoconvex function

In convex analysis and the calculus of variations, both branches of mathematics, a pseudoconvex function is a function that behaves like a convex function with respect to finding its local minima, but need not actually be convex.

Fenchel's duality theorem

In mathematics, Fenchel's duality theorem is a result in the theory of convex functions named after Werner Fenchel. Let f be a proper convex function on R^n and let g be a proper concave function on R^n. Then, if regularity conditions are satisfied, inf_x (f(x) - g(x)) = sup_p (g*(p) - f*(p)), where f* is the convex conjugate of f and g* is the concave conjugate of g.

Semidefinite programming

Semidefinite programming (SDP) is a subfield of convex optimization concerned with the optimization of a linear objective function (a function that the user wants to minimize or maximize) over the intersection of the cone of positive semidefinite matrices with an affine space.

Geodesic convexity

In mathematics — specifically, in Riemannian geometry — geodesic convexity is a natural generalization of convexity for sets and functions to Riemannian manifolds. It is common to drop the prefix "geodesic" and refer simply to "convexity" of a set or function.

Bregman method

The Bregman method is an iterative algorithm to solve certain convex optimization problems involving regularization. The original version is due to Lev M. Bregman, who published it in 1967. The algorithm is a row-action method accessing the constraint functions one by one.
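
The method is built around the Bregman divergence D_f(x, y) = f(x) - f(y) - f'(y)*(x - y). A minimal one-dimensional sketch of the divergence itself (not the full algorithm), using f(x) = x^2 as our illustrative choice, for which the divergence reduces to (x - y)^2:

```python
# Bregman divergence of a differentiable convex function f (1-D sketch).

def bregman_divergence(f, grad_f, x, y):
    """D_f(x, y) = f(x) - f(y) - f'(y) * (x - y); always >= 0 for convex f."""
    return f(x) - f(y) - grad_f(y) * (x - y)

f = lambda x: x ** 2
grad_f = lambda x: 2.0 * x

d = bregman_divergence(f, grad_f, 5.0, 2.0)
assert d == (5.0 - 2.0) ** 2                          # equals 9.0 here
assert bregman_divergence(f, grad_f, 2.0, 2.0) == 0.0  # zero at x = y
```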

Convexity in economics

Convexity is an important topic in economics. In the Arrow–Debreu model of general economic equilibrium, agents have convex budget sets and convex preferences: At equilibrium prices, the budget hyperplane supports the best attainable indifference curve.

Strong duality

Strong duality is a condition in mathematical optimization in which the primal optimal objective and the dual optimal objective are equal. This is as opposed to weak duality (the primal problem has optimal value larger than or equal to that of the dual problem; in other words, the duality gap is greater than or equal to zero).

Perturbation function

In mathematical optimization, the perturbation function is any function which relates to primal and dual problems. The name comes from the fact that any such function defines a perturbation of the initial problem.

Conic optimization

Conic optimization is a subfield of convex optimization that studies problems consisting of minimizing a convex function over the intersection of an affine subspace and a convex cone. The class of conic optimization problems includes some of the best-known classes of convex optimization problems, namely linear and semidefinite programming.

Barrier function

In constrained optimization, a field of mathematics, a barrier function is a continuous function whose value increases to infinity as its argument approaches the boundary of the feasible region of an optimization problem. Such functions are used to replace inequality constraints by a penalizing term in the objective function that is easier to handle.
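
A minimal sketch with the logarithmic barrier on a toy problem of our choosing: minimize x subject to x >= 1. The barrier -log(x - 1) blows up at the boundary x = 1, so the unconstrained minimizer of t*x - log(x - 1) stays feasible; it sits at x = 1 + 1/t and approaches the true optimum x* = 1 as t grows.

```python
import math

# Log-barrier sketch for: minimize x subject to x >= 1.

def barrier_objective(x, t):
    """Penalized objective: t * x - log(x - 1), defined for x > 1."""
    return t * x - math.log(x - 1.0)

def minimize_1d(fn, lo, hi, iters=200):
    """Ternary search for the minimizer of a unimodal function on [lo, hi]."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if fn(m1) < fn(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2.0

for t in (1.0, 10.0, 100.0):
    x_t = minimize_1d(lambda x: barrier_objective(x, t), 1.0 + 1e-9, 10.0)
    # Central-path point: the analytic minimizer is x = 1 + 1/t.
    assert abs(x_t - (1.0 + 1.0 / t)) < 1e-6
```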

Quasiconvex function

In mathematics, a quasiconvex function is a real-valued function defined on an interval or on a convex subset of a real vector space such that the inverse image of any set of the form (−∞, a) is a convex set.
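
Equivalently, f is quasiconvex iff f(L*x + (1-L)*y) <= max(f(x), f(y)) for all x, y and L in [0, 1]. A numeric sketch on our own example, f(x) = sqrt(|x|), which is quasiconvex (its sublevel sets are intervals) but not convex:

```python
import math

# Check the quasiconvexity inequality on a grid for f(x) = sqrt(|x|).
f = lambda x: math.sqrt(abs(x))

pts = [-5.0 + 0.5 * k for k in range(21)]
lams = [0.1 * k for k in range(11)]
quasi = all(
    f(l * x + (1 - l) * y) <= max(f(x), f(y)) + 1e-12
    for x in pts for y in pts for l in lams
)
assert quasi  # the inequality holds everywhere on the grid

# Not convex: the chord from (0, f(0)) to (4, f(4)) lies below the graph.
assert f(2.0) > (f(0.0) + f(4.0)) / 2.0   # sqrt(2) > 1
```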

Lagrangian relaxation

In the field of mathematical optimization, Lagrangian relaxation is a relaxation method which approximates a difficult problem of constrained optimization by a simpler problem. A solution to the relaxed problem is an approximate solution to the original problem, and provides useful information.

Linear matrix inequality

In convex optimization, a linear matrix inequality (LMI) is an expression of the form A(y) = A_0 + y_1 A_1 + ... + y_m A_m >= 0, where
* y = (y_1, ..., y_m) is a real vector,
* A_0, A_1, ..., A_m are symmetric matrices,
* >= 0 is a generalized inequality meaning that A(y) is a positive semidefinite matrix.

Proximal gradient methods for learning

Proximal gradient (forward backward splitting) methods for learning is an area of research in optimization and statistical learning theory which studies algorithms for a general class of convex regularization problems where the regularization penalty may not be differentiable.

Shapley–Folkman lemma

The Shapley–Folkman lemma is a result in convex geometry with applications in mathematical economics that describes the Minkowski addition of sets in a vector space. Minkowski addition is defined as the addition of the sets' members: for example, adding the set consisting of the integers zero and one to itself yields the set consisting of zero, one, and two.
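
The Minkowski addition described above is a one-liner (a minimal sketch; the sets are the ones from the text):

```python
# Minkowski addition of two sets: all pairwise sums of their members.
def minkowski_sum(A, B):
    return {a + b for a in A for b in B}

assert minkowski_sum({0, 1}, {0, 1}) == {0, 1, 2}

# Averaging repeated sums of a non-convex set "fills in" the gaps —
# the phenomenon the Shapley–Folkman lemma quantifies.
half = {s / 2 for s in minkowski_sum({0, 1}, {0, 1})}
assert half == {0.0, 0.5, 1.0}
```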

Tracking error

In finance, tracking error or active risk is a measure of the risk in an investment portfolio that is due to active management decisions made by the portfolio manager; it indicates how closely a portfolio follows the index to which it is benchmarked.
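
One common (ex-post) definition computes tracking error as the standard deviation of the active returns, i.e. portfolio return minus benchmark return. The return series below are made-up illustrative numbers:

```python
import statistics

# Made-up monthly returns for a portfolio and its benchmark.
portfolio = [0.02, 0.01, -0.005, 0.03, 0.015]
benchmark = [0.018, 0.012, -0.002, 0.025, 0.013]

# Active return per period, then its sample standard deviation.
active = [p - b for p, b in zip(portfolio, benchmark)]
tracking_error = statistics.stdev(active)

assert tracking_error >= 0.0  # a dispersion measure is non-negative
```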

Separation oracle

A separation oracle (also called a cutting-plane oracle) is a concept in the mathematical theory of convex optimization. It is a method to describe a convex set that is given as an input to an optimization algorithm.

Second-order cone programming

A second-order cone program (SOCP) is a convex optimization problem of the form: minimize f^T x subject to ||A_i x + b_i||_2 <= c_i^T x + d_i for i = 1, ..., m, and Fx = g, where x in R^n is the optimization variable, ||.||_2 is the Euclidean norm, and the problem parameters are f, c_i in R^n, A_i in R^{n_i x n}, b_i in R^{n_i}, d_i in R, F in R^{p x n}, and g in R^p.

Non-convexity (economics)

In economics, non-convexity refers to violations of the convexity assumptions of elementary economics. Basic economics textbooks concentrate on consumers with convex preferences (that do not prefer extremes to in-between values) and convex budget sets, and on producers with convex production sets.

Subderivative

In mathematics, the subderivative, subgradient, and subdifferential generalize the derivative to convex functions which are not necessarily differentiable. Subderivatives arise in convex analysis, the study of convex functions, often in connection to convex optimization.
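
As a concrete sketch (our example), the subdifferential of f(x) = |x| at 0 is the whole interval [-1, 1]: every slope g there satisfies the subgradient inequality f(y) >= f(0) + g*(y - 0) for all y, and no slope outside the interval does.

```python
# Numeric check of the subgradient inequality for f(x) = |x| at x = 0.
f = lambda x: abs(x)

def is_subgradient_at_zero(g, test_points):
    """True if f(y) >= f(0) + g * y holds at every test point."""
    return all(f(y) >= f(0.0) + g * y for y in test_points)

ys = [-2.0 + 0.1 * k for k in range(41)]
# Every slope in [-1, 1] is a subgradient at the kink...
assert all(is_subgradient_at_zero(g, ys) for g in (-1.0, -0.5, 0.0, 0.5, 1.0))
# ...and slopes outside [-1, 1] violate the inequality somewhere.
assert not is_subgradient_at_zero(1.5, ys)
```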

Stochastic variance reduction

(Stochastic) variance reduction is an algorithmic approach to minimizing functions that can be decomposed into finite sums. By exploiting the finite sum structure, variance reduction techniques are able to achieve convergence rates that are impossible to achieve with methods that treat the objective as an infinite sum, as in the classical stochastic approximation setting.

Convex optimization

Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets). Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.

Duality (optimization)

In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem.

Danskin's theorem

In convex analysis, Danskin's theorem is a theorem which provides information about the derivatives of a function of the form f(x) = max_{z in Z} φ(x, z). The theorem has applications in optimization, where it is sometimes used to solve minimax problems.

Duality gap

In optimization problems in applied mathematics, the duality gap is the difference between the primal and dual solutions. If d* is the optimal dual value and p* is the optimal primal value, then the duality gap is equal to p* − d*. This value is always greater than or equal to 0 (for minimization problems).

Biconvex optimization

Biconvex optimization is a generalization of convex optimization where the objective function and the constraint set can be biconvex. There are methods that can find the global optimum of these problems.

Ellipsoid method

In mathematical optimization, the ellipsoid method is an iterative method for minimizing convex functions. When specialized to solving feasible linear optimization problems with rational data, the ellipsoid method is an algorithm which finds an optimal solution in a number of steps that is polynomial in the input size.

Wolfe duality

In mathematical optimization, Wolfe duality, named after Philip Wolfe, is a type of dual problem in which the objective function and constraints are all differentiable functions. Using this concept, a lower bound for a minimization problem can be found because of the weak duality principle.

Stochastic gradient descent

Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient (calculated from the entire data set) by an estimate thereof (calculated from a randomly selected subset of the data).
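
A minimal sketch on a finite-sum objective of our choosing, F(w) = (1/n) * sum_i (w - a_i)^2 / 2, whose minimizer is the mean of the data. Each step uses the gradient of one randomly chosen term, w - a_i, in place of the full-batch gradient:

```python
import random

random.seed(0)
a = [1.0, 2.0, 3.0, 4.0, 5.0]   # data; the minimizer is mean(a) = 3.0

w = 0.0
for t in range(20000):
    i = random.randrange(len(a))  # sample one term of the finite sum
    grad_i = w - a[i]             # stochastic gradient estimate
    w -= grad_i / (t + 1)         # diminishing step size 1 / (t + 1)

# The iterate settles near the true minimizer despite noisy gradients.
assert abs(w - 3.0) < 0.1
```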

Structured sparsity regularization

Structured sparsity regularization is a class of methods, and an area of research in statistical learning theory, that extend and generalize sparsity regularization learning methods. Both sparsity and structured sparsity regularization methods seek to exploit the assumption that the output variable to be learned can be described by a reduced number of variables in the input space.

Linear programming

Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships.
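
A made-up toy instance illustrates the key geometric fact that an LP optimum lies at a vertex of the feasible polytope: maximize 3x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0. Brute-force vertex enumeration (not the simplex method) is enough at this size:

```python
from itertools import combinations

# Constraints written as a*x + b*y <= c.
constraints = [(1, 1, 4), (1, 0, 2), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Intersection of the two constraint boundary lines, or None if parallel."""
    (a1, b1, d1), (a2, b2, d2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

# Candidate vertices: feasible intersections of constraint boundaries.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]

best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
assert best == (2.0, 2.0)                     # optimal vertex
assert 3 * best[0] + 2 * best[1] == 10.0      # optimal objective value
```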

© 2023 Useful Links.