Automatic basis function construction

In machine learning, automatic basis function construction (or basis discovery) is the mathematical method of searching for a set of task-independent basis functions that map the state space to a lower-dimensional embedding.

Optimal projection equations

In control theory, optimal projection equations constitute necessary and sufficient conditions for a locally optimal reduced-order LQG controller. The linear–quadratic–Gaussian (LQG) control problem is one of the most fundamental optimal control problems.

Mabinogion sheep problem

In probability theory, the Mabinogion sheep problem or Mabinogion urn is a problem in stochastic control introduced by David Williams, who named it after a herd of magic sheep in the Mabinogion, the Welsh collection of medieval tales.

Separation principle in stochastic control

The separation principle is one of the fundamental principles of stochastic control theory, which states that the problems of optimal control and state estimation can be decoupled under certain conditions.

Partially observable Markov decision process

A partially observable Markov decision process (POMDP) is a generalization of a Markov decision process (MDP). A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state.

Merton's portfolio problem

Merton's portfolio problem is a well-known problem in continuous-time finance and in particular intertemporal portfolio choice. An investor must choose how much to consume and must allocate their wealth between stocks and a risk-free asset so as to maximize expected utility.
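As a hedged sketch of the flavor of the result: in the constant relative risk aversion (CRRA) case with a single risky asset following geometric Brownian motion, the optimal fraction of wealth held in the risky asset is constant (symbols as commonly used: μ the risky drift, r the risk-free rate, σ the volatility, γ the risk-aversion coefficient):

```latex
% Merton fraction: optimal constant share of wealth in the risky asset
\pi^{*} = \frac{\mu - r}{\gamma \sigma^{2}}
```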

Markov decision process

In mathematics, a Markov decision process (MDP) is a discrete-time stochastic control process. It provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker.
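To make the framework concrete, here is a minimal value-iteration sketch for a made-up 2-state, 2-action MDP (all transition probabilities and rewards below are invented for illustration):

```python
import numpy as np

# Toy MDP: P[a][s, s'] = transition probability under action a,
# R[a][s] = expected immediate reward for taking action a in state s.
P = {
    0: np.array([[0.9, 0.1], [0.2, 0.8]]),
    1: np.array([[0.5, 0.5], [0.1, 0.9]]),
}
R = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 2.0])}
gamma = 0.9  # discount factor

# Value iteration: repeatedly apply the Bellman optimality operator.
V = np.zeros(2)
for _ in range(1000):
    Q = np.stack([R[a] + gamma * P[a] @ V for a in (0, 1)])
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = Q.argmax(axis=0)  # greedy policy w.r.t. the converged values
print(V, policy)
```

Because the Bellman operator is a γ-contraction, the iteration converges to the unique optimal value function, from which the greedy policy is read off.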

Stochastic control

Stochastic control or stochastic optimal control is a subfield of control theory that deals with the existence of uncertainty either in observations or in the noise that drives the evolution of the system.

Witsenhausen's counterexample

Witsenhausen's counterexample, shown in the figure below, is a deceptively simple toy problem in decentralized stochastic control. It was formulated by Hans Witsenhausen in 1968. It is a counterexample to the natural conjecture that, as in centralized linear–quadratic–Gaussian control, the optimal control laws are affine.

Separation principle

In control theory, a separation principle, more formally known as a principle of separation of estimation and control, states that under some assumptions the problem of designing an optimal feedback controller for a stochastic system can be solved by designing an optimal state estimator, which feeds into an optimal deterministic controller, with the two designed separately.

Robust control

In control theory, robust control is an approach to controller design that explicitly deals with uncertainty. Robust control methods are designed to function properly provided that uncertain parameters or disturbances remain within some given set.

Hamilton–Jacobi–Bellman equation

In optimal control theory, the Hamilton–Jacobi–Bellman (HJB) equation gives a necessary and sufficient condition for optimality of a control with respect to a loss function. It is, in general, a nonlinear partial differential equation in the value function.
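For orientation, the deterministic finite-horizon form of the equation can be sketched as follows (V the value function, f the system dynamics, C the running cost, D the terminal cost; notation as commonly used in the literature):

```latex
\frac{\partial V}{\partial t}(x,t)
  + \min_{u}\Bigl\{ \nabla_{x} V(x,t) \cdot f(x,u) + C(x,u) \Bigr\} = 0,
\qquad V(x,T) = D(x).
```

In the stochastic setting, a second-order diffusion term enters the minimization, making the equation a nonlinear second-order PDE.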

Multiplier uncertainty

In macroeconomics, multiplier uncertainty is lack of perfect knowledge of the multiplier effect of a particular policy action, such as a monetary or fiscal policy change, upon the intended target of the policy.

Linear–quadratic–Gaussian control

In control theory, the linear–quadratic–Gaussian (LQG) control problem is one of the most fundamental optimal control problems, and it can also be solved repeatedly for model predictive control. It concerns linear systems driven by additive white Gaussian noise, with incomplete (noisy) state information, controlled subject to quadratic costs.
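A sketch of a standard discrete-time statement of the problem (A, B, C the system matrices, v_t and w_t Gaussian process and measurement noises, Q and R the quadratic cost weights; symbols as conventionally used):

```latex
x_{t+1} = A x_t + B u_t + v_t, \qquad y_t = C x_t + w_t,
\qquad
\min_{u}\; \mathbb{E}\Bigl[\sum_{t} \bigl(x_t^{\top} Q x_t + u_t^{\top} R u_t\bigr)\Bigr].
```

By the separation principle described above, its solution combines a Kalman filter for state estimation with a linear–quadratic regulator acting on the estimate.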
