Category: Markov processes

Piecewise-deterministic Markov process
In probability theory, a piecewise-deterministic Markov process (PDMP) is a process whose behaviour changes by random jumps at random points in time, but whose evolution between jumps is deterministic, governed by an ordinary differential equation.
Kolmogorov equations (continuous-time Markov chains)
In mathematics and statistics, in the context of Markov processes, the Kolmogorov equations, including the Kolmogorov forward equations and Kolmogorov backward equations, are a pair of systems of differential equations that describe the time evolution of the process's transition probabilities.
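For a time-homogeneous chain with transition rate matrix Q, the two systems take the familiar matrix form (a standard formulation, stated here for orientation rather than taken from the entry above):

```latex
\frac{dP(t)}{dt} = P(t)\,Q \quad \text{(forward)}, \qquad
\frac{dP(t)}{dt} = Q\,P(t) \quad \text{(backward)}, \qquad P(0) = I .
```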
Birth process
In probability theory, a birth process or a pure birth process is a special case of a continuous-time Markov process and a generalisation of a Poisson process. It defines a continuous process which takes values in the natural numbers and can only increase by one (a "birth") or remain unchanged.
Conductance (graph)
In graph theory, the conductance of a graph G = (V, E) measures how "well-knit" the graph is: it controls how fast a random walk on G converges to its stationary distribution.
Diffusion process
In probability theory and statistics, diffusion processes are a class of continuous-time Markov processes with almost surely continuous sample paths. Brownian motion, reflected Brownian motion and the Ornstein–Uhlenbeck process are examples of diffusion processes.
Geometric process
In probability, statistics and related fields, the geometric process is a counting process introduced by Lam in 1988. It is defined as a sequence of non-negative random variables {X_k} for which there exists a positive constant a such that {a^(k-1) X_k} forms a renewal process.
Markov chains on a measurable state space
A Markov chain on a measurable state space is a discrete-time, time-homogeneous Markov chain whose state space is a general measurable space.
Markov decision process
In mathematics, a Markov decision process (MDP) is a discrete-time stochastic control process. It provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker.
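A minimal value-iteration sketch for a toy MDP; the two states, two actions, rewards and discount factor below are illustrative assumptions, not part of the entry above.

```python
import numpy as np

# P[a, s, s2] = transition probability, R[a, s] = expected reward;
# all numbers are illustrative assumptions for a 2-state, 2-action toy MDP.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # action 0
              [[0.5, 0.5], [0.4, 0.6]]])  # action 1
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9                                # discount factor

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality backup: V(s) = max_a [R(a,s) + gamma * sum_s2 P(a,s,s2) V(s2)]
    Q = R + gamma * (P @ V)                # Q[a, s] = action values
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

print("optimal values:", V, "greedy policy:", Q.argmax(axis=0))
```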
Markov information source
In mathematics, a Markov information source, or simply, a Markov source, is an information source whose underlying dynamics are given by a stationary finite Markov chain.
Markov kernel
In probability theory, a Markov kernel (also known as a stochastic kernel or probability kernel) is a map that, in the general theory of Markov processes, plays the role that the transition matrix does in the theory of Markov processes with a finite state space.
Queueing theory
Queueing theory is the mathematical study of waiting lines, or queues. A queueing model is constructed so that queue lengths and waiting times can be predicted. Queueing theory is generally considered a branch of operations research because the results are often used when making business decisions about the resources needed to provide a service.
Chapman–Kolmogorov equation
In mathematics, specifically in the theory of Markovian stochastic processes in probability theory, the Chapman–Kolmogorov equation is an identity relating the joint probability distributions of different sets of coordinates on a stochastic process.
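For a discrete-state Markov chain, the identity reduces to the semigroup property of the transition matrices (a standard statement, included here for reference):

```latex
P(s + t) = P(s)\,P(t), \qquad \text{i.e.} \qquad
p_{ij}(s + t) = \sum_{k} p_{ik}(s)\, p_{kj}(t) .
```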
Burstiness
In statistics, burstiness is the intermittent increase and decrease in activity or frequency of an event. One measure of burstiness is the Fano factor, the ratio between the variance and the mean of counts.
Absorbing Markov chain
In the mathematical theory of probability, an absorbing Markov chain is a Markov chain in which every state can reach an absorbing state. An absorbing state is a state that, once entered, cannot be left.
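A short sketch of the standard fundamental-matrix computation for such chains; the three transient states, single absorbing state and all probabilities below are illustrative assumptions.

```python
import numpy as np

# Canonical form: transient states first, absorbing states last.
# Q holds transient-to-transient probabilities, R transient-to-absorbing
# (illustrative numbers; each full row of the chain sums to 1).
Q = np.array([[0.0, 0.5, 0.0],
              [0.3, 0.0, 0.4],
              [0.0, 0.2, 0.0]])
R = np.array([[0.5],
              [0.3],
              [0.8]])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix: expected visits to each transient state
t = N @ np.ones(3)                 # expected steps before absorption, per starting state
B = N @ R                          # absorption probabilities (all 1 here: one absorbing state)
print(t, B)
```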
Markov chain approximation method
In numerical methods for stochastic differential equations, the Markov chain approximation method (MCAM) is one of several numerical schemes used in stochastic control theory.
Continuous-time Markov chain
A continuous-time Markov chain (CTMC) is a continuous-time stochastic process in which, for each state, the process holds for an exponentially distributed random time and then moves to a different state as specified by the probabilities of a stochastic matrix.
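A minimal simulation sketch of this mechanism; the 3-state rate matrix, starting state and horizon below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Transition rate matrix Q of a toy 3-state CTMC (illustrative numbers):
# rows sum to zero, off-diagonal entries are jump rates.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 2.0,  2.0, -4.0]])

def simulate_ctmc(Q, state, t_end):
    """Hold an Exp(-Q[i,i]) time in each state, then jump proportionally to the rates."""
    t, path = 0.0, [(0.0, state)]
    while True:
        rate = -Q[state, state]
        t += rng.exponential(1.0 / rate)
        if t >= t_end:
            return path
        probs = Q[state].clip(min=0.0) / rate   # jump distribution over the other states
        state = rng.choice(len(Q), p=probs)
        path.append((t, state))

print(simulate_ctmc(Q, 0, 10.0))
```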
Nearly completely decomposable Markov chain
In probability theory, a nearly completely decomposable (NCD) Markov chain is a Markov chain where the state space can be partitioned in such a way that movement within a partition occurs much more frequently than movement between partitions.
Markovian arrival process
In queueing theory, a discipline within the mathematical theory of probability, a Markovian arrival process (MAP or MArP) is a mathematical model for the time between job arrivals to a system. The simplest such process is a Poisson process, where the times between arrivals are exponentially distributed.
Quasi-birth–death process
In queueing models, a discipline within the mathematical theory of probability, the quasi-birth–death process describes a generalisation of the birth–death process. As with the birth–death process, it moves up and down between levels one at a time, but the times between these transitions can have a more general distribution.
Random surfing model
The random surfing model is a graph model which describes the probability of a random user visiting a web page. The model attempts to predict the chance that a random internet surfer will arrive at a given page by following links.
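A power-iteration sketch of the surfer's stationary distribution with damping; the 4-page link matrix and the damping factor d = 0.85 are illustrative assumptions.

```python
import numpy as np

d = 0.85
# Column-stochastic link matrix: L[j, i] = probability of following a link
# from page i to page j (illustrative 4-page web graph).
L = np.array([[0.0, 0.5, 0.5, 0.0],
              [1/3, 0.0, 0.0, 0.5],
              [1/3, 0.5, 0.0, 0.5],
              [1/3, 0.0, 0.5, 0.0]])

n = L.shape[0]
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = (1 - d) / n + d * (L @ r)   # teleport with prob. 1-d, follow a link with prob. d
print(r)
```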
Markov Chains and Mixing Times
Markov Chains and Mixing Times is a book on Markov chain mixing times. The second edition was written by David A. Levin and Yuval Peres. Elizabeth Wilmer was a co-author on the first edition and is credited as a contributor to the second edition.
Kolmogorov equations
In probability theory, Kolmogorov equations, including the Kolmogorov forward equations and Kolmogorov backward equations, characterize continuous-time Markov processes. In particular, they describe how the probability that a continuous-time Markov process is in a certain state changes over time.
Telescoping Markov chain
In probability theory, a telescoping Markov chain (TMC) is a vector-valued stochastic process that satisfies a Markov property and admits a hierarchical format through a network of transition matrices with cascading dependence.
Kelly's lemma
In probability theory, Kelly's lemma states that for a stationary continuous-time Markov chain, the process defined as the time-reversed process has the same stationary distribution as the forward-time process.
Perron–Frobenius theorem
In matrix theory, the Perron–Frobenius theorem, proved by Oskar Perron and Georg Frobenius, asserts that a real square matrix with positive entries has a unique largest real eigenvalue and that the corresponding eigenvector can be chosen to have strictly positive components.
Transition rate matrix
In probability theory, a transition rate matrix (also known as an intensity matrix or infinitesimal generator matrix) is an array of numbers describing the instantaneous rate at which a continuous-time Markov chain transitions between states.
Uniformization (probability theory)
In probability theory, the uniformization method (also known as Jensen's method or the randomization method) is a method to compute transient solutions of finite-state continuous-time Markov chains, by approximating the process with a discrete-time Markov chain evaluated at the jump times of a Poisson process.
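A sketch of the technique; the 3-state rate matrix, horizon t and initial distribution below are illustrative assumptions, and the Poisson-weighted series is simply truncated rather than truncated to a guaranteed error bound.

```python
import numpy as np

# Uniformization sketch: transient distribution pi(t) of a finite CTMC.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  1.0, -1.0]])
t = 1.5
pi0 = np.array([1.0, 0.0, 0.0])

lam = np.max(-np.diag(Q))          # uniformization rate >= every exit rate
P = np.eye(len(Q)) + Q / lam       # stochastic matrix of the uniformized DTMC

w = np.exp(-lam * t)               # Poisson(lam*t) weight for k = 0
term = pi0.copy()                  # holds pi0 @ P^k, updated in place
pi = w * term
for k in range(1, 200):            # truncate the Poisson-weighted series
    term = term @ P
    w *= lam * t / k               # recursive Poisson pmf update
    pi += w * term

print("pi(t) ~", pi)
```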
Markov renewal process
In probability and statistics, a Markov renewal process (MRP) is a random process that generalizes the notion of Markov jump processes. Other random processes like Markov chains, Poisson processes and renewal processes can be derived as special cases of MRPs.
Birth–death process
The birth–death process (or birth-and-death process) is a special case of continuous-time Markov process where the state transitions are of only two types: "births", which increase the state variable by one, and "deaths", which decrease the state variable by one.
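When a stationary distribution exists, it has the textbook product form in the birth rates λ_n and death rates μ_n (a standard identity, stated here for reference):

```latex
\pi_n = \pi_0 \prod_{k=0}^{n-1} \frac{\lambda_k}{\mu_{k+1}},
\qquad
\pi_0 = \left( 1 + \sum_{n=1}^{\infty} \prod_{k=0}^{n-1} \frac{\lambda_k}{\mu_{k+1}} \right)^{-1}.
```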
Brownian meander
In the mathematical theory of probability, the Brownian meander is a continuous non-homogeneous Markov process defined as follows: let W be a standard one-dimensional Brownian motion and let τ := sup{ t ∈ [0, 1] : W_t = 0 }, i.e. the last time before t = 1 at which W visits 0; the meander is then W⁺_t := |W_{τ + t(1−τ)}| / √(1−τ) for t ∈ [0, 1].
Fleming–Viot process
In probability theory, a Fleming–Viot process (F–V process) is a member of a particular subset of probability measure-valued Markov processes on compact metric spaces, as defined in the 1979 paper by Wendell Helms Fleming and Michel Viot.
Lévy flight
A Lévy flight is a random walk in which the step lengths have a Lévy distribution, a probability distribution that is heavy-tailed. When defined as a walk in a space of dimension greater than one, the steps made are in isotropic random directions.
Stochastic cellular automaton
Stochastic cellular automata or probabilistic cellular automata (PCA) or random cellular automata or locally interacting Markov chains are an important extension of cellular automata. Cellular automata are discrete-time dynamical systems of interacting entities whose state is discrete.
Feller process
In probability theory relating to stochastic processes, a Feller process is a particular kind of Markov process.
Vacancy chain
A vacancy chain is a social structure through which resources are distributed to consumers. In a vacancy chain, a new resource unit that arrives into a population is taken by the first individual in line, whose old unit then passes to the next individual, and so on.
Discrete-time Markov chain
In probability, a discrete-time Markov chain (DTMC) is a sequence of random variables, known as a stochastic process, in which the value of the next variable depends only on the value of the current variable, and not on any variables in the past.
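A minimal simulation sketch of this dependence; the 2-state "weather" transition matrix below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a 2-state discrete-time Markov chain (toy "sunny"/"rainy" weather).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

state, path = 0, [0]
for _ in range(20):
    state = rng.choice(2, p=P[state])  # the next state depends only on the current one
    path.append(state)
print(path)
```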
Markov chain central limit theorem
In the mathematical theory of random processes, the Markov chain central limit theorem has a conclusion somewhat similar in form to that of the classic central limit theorem (CLT) of probability theory.
Markov additive process
In applied probability, a Markov additive process (MAP) is a bivariate Markov process whose future states depend only on one of the two variables.
Partially observable Markov decision process
A partially observable Markov decision process (POMDP) is a generalization of a Markov decision process (MDP). A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot directly observe the underlying state.
Multiscale decision-making
Multiscale decision-making, also referred to as multiscale decision theory (MSDT), is an approach in operations research that combines game theory with multi-agent influence diagrams, in particular dependency graphs.
Lumpability
In probability theory, lumpability is a method for reducing the size of the state space of some continuous-time Markov chains, first published by Kemeny and Snell.
Hunt process
In probability theory, a Hunt process is a strong Markov process which is quasi-left continuous with respect to the minimum completed admissible filtration. It is named after Gilbert Hunt.
Markov property
In probability theory and statistics, the term Markov property refers to the memoryless property of a stochastic process. It is named after the Russian mathematician Andrey Markov. The term strong Markov property is similar, except that the meaning of "present" is defined in terms of a random variable known as a stopping time.
Dirichlet form
In potential theory (the study of harmonic functions) and functional analysis, Dirichlet forms generalize the Laplacian (the mathematical operator on scalar fields). Dirichlet forms can be defined on any measure space, without the need for mentioning partial derivatives.
Markov chain
A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs now."
Kolmogorov's criterion
In probability theory, Kolmogorov's criterion, named after Andrey Kolmogorov, is a theorem giving a necessary and sufficient condition for a Markov chain or continuous-time Markov chain to be stochastically identical to its time-reversed version.
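For a discrete-time chain with transition probabilities p_ij, the criterion is the standard cycle condition, stated here for reference: the product of transition probabilities around every closed loop must equal the product around the same loop traversed in reverse.

```latex
p_{j_1 j_2}\, p_{j_2 j_3} \cdots p_{j_{n-1} j_n}\, p_{j_n j_1}
= p_{j_1 j_n}\, p_{j_n j_{n-1}} \cdots p_{j_3 j_2}\, p_{j_2 j_1}
\quad \text{for all finite sequences of states } j_1, j_2, \ldots, j_n .
```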
Poisson point process
In probability, statistics and related fields, a Poisson point process is a type of random mathematical object that consists of points randomly located on a mathematical space. The Poisson point process is often called simply the Poisson process.
Foster's theorem
In probability theory, Foster's theorem, named after Gordon Foster, is used to draw conclusions about the positive recurrence of Markov chains with countable state spaces. It uses the fact that positive recurrent Markov chains exhibit a notion of "Lyapunov stability" in terms of returning to any state while starting from it within a finite time interval.
Decentralized partially observable Markov decision process
The decentralized partially observable Markov decision process (Dec-POMDP) is a model for coordination and decision-making among multiple agents. It is a probabilistic model that can consider uncertainty in outcomes, sensors and communication.
Markov reward model
In probability theory, a Markov reward model or Markov reward process is a stochastic process which extends either a Markov chain or continuous-time Markov chain by adding a reward rate to each state.
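For the discrete-time, infinitely discounted case, the expected cumulative reward per starting state solves a small linear system; the 2-state chain, rewards and discount factor below are illustrative assumptions.

```python
import numpy as np

# Expected discounted cumulative reward of a Markov reward process:
# v = (I - gamma P)^{-1} r, from the fixed point v = r + gamma P v.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
r = np.array([1.0, -0.5])          # reward collected in each state per step
gamma = 0.95

v = np.linalg.solve(np.eye(2) - gamma * P, r)
print(v)
```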
Poisson clumping
Poisson clumping, or Poisson bursts, is a phenomenon where random events may appear to occur in clusters, clumps, or bursts.
Additive Markov chain
In probability theory, an additive Markov chain is a Markov chain with an additive conditional probability function. Here the process is a discrete-time Markov chain of order m, and the transition probability to the next state is a sum of functions, each depending on the next state and one of the m previous states.
Branching process
In probability theory, a branching process is a type of mathematical object known as a stochastic process, which consists of collections of random variables indexed by the natural numbers.
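A sketch of the simplest example, a Galton–Watson process; the Poisson offspring law and its mean are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Galton-Watson branching process: each individual independently leaves
# a Poisson(mu) number of offspring (the offspring law is an illustrative choice).
mu = 0.9          # subcritical mean: extinction is certain when mu <= 1

def generation_sizes(n_generations, z0=1):
    sizes = [z0]
    for _ in range(n_generations):
        z = rng.poisson(mu, size=sizes[-1]).sum() if sizes[-1] > 0 else 0
        sizes.append(int(z))
    return sizes

print(generation_sizes(15))
```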
Chan–Karolyi–Longstaff–Sanders process
In mathematics, the Chan–Karolyi–Longstaff–Sanders process (abbreviated as CKLS process) is a stochastic process with applications to finance. In particular it has been used to model the term structure of interest rates.
Harris chain
In the mathematical study of stochastic processes, a Harris chain is a Markov chain where the chain returns to a particular part of the state space an unbounded number of times. Harris chains are regenerative processes and are named after Theodore Harris.
Markov chain mixing time
In probability theory, the mixing time of a Markov chain is the time until the Markov chain is "close" to its steady-state distribution. More precisely, a fundamental result about Markov chains is that a finite-state irreducible aperiodic chain has a unique stationary distribution π and, regardless of the initial state, the time-t distribution of the chain converges to π as t tends to infinity.
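"Close" is usually measured in total variation distance; the sketch below tracks it for a toy 2-state chain (the matrix, its stationary distribution and the threshold ε = 1/4 are illustrative but conventional choices).

```python
import numpy as np

P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
pi = np.array([0.6, 0.4])          # stationary: check 0.6*0.8 + 0.4*0.3 = 0.6

mu = np.array([1.0, 0.0])          # start deterministically in state 0
for t in range(1, 21):
    mu = mu @ P
    tv = 0.5 * np.abs(mu - pi).sum()   # total variation distance at time t
    if tv < 0.25:                      # common mixing-time threshold eps = 1/4
        print("mixing time at eps=1/4:", t)
        break
```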
Memorylessness
In probability and statistics, memorylessness is a property of certain probability distributions. It usually refers to the case where the distribution of a "waiting time" until a certain event does not depend on how much time has already elapsed.
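Formally, a waiting time T is memoryless when (a standard statement, for reference):

```latex
\Pr(T > s + t \mid T > s) = \Pr(T > t) \qquad \text{for all } s, t \ge 0 .
```

Among continuous distributions this property characterizes the exponential distribution; among discrete ones, the geometric distribution.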
Gauss–Markov process
Gauss–Markov stochastic processes (named after Carl Friedrich Gauss and Andrey Markov) are stochastic processes that satisfy the requirements for both Gaussian processes and Markov processes. A stationary Gauss–Markov process is unique up to rescaling; such a process is also known as an Ornstein–Uhlenbeck process.
Subshift of finite type
In mathematics, subshifts of finite type are used to model dynamical systems, and in particular are the objects of study in symbolic dynamics and ergodic theory. They also describe the set of all possible sequences executed by a finite-state machine.
Spectral expansion solution
In probability theory, the spectral expansion solution method is a technique for computing the stationary probability distribution of a continuous-time Markov chain whose state space is a semi-infinite lattice strip.
Markov chain tree theorem
In the mathematical theory of Markov chains, the Markov chain tree theorem is an expression for the stationary distribution of a Markov chain with finitely many states. It sums up terms over the rooted spanning trees of the chain's directed transition graph.
Kemeny's constant
In probability theory, Kemeny's constant is the expected number of time steps required for a Markov chain to transition from a starting state i to a random destination state sampled from the Markov chain's stationary distribution. Surprisingly, this quantity does not depend on which starting state i is chosen.
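A computation sketch via the eigenvalue identity K = Σ_{i≥2} 1/(1 − λ_i), under the convention that the travel time to the starting state itself is zero; the 3-state transition matrix below is an illustrative assumption.

```python
import numpy as np

# Kemeny's constant from the non-unit eigenvalues of the transition matrix.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

eig = np.linalg.eigvals(P)
# drop the eigenvalue closest to 1 (the Perron root of a stochastic matrix)
eig = np.delete(eig, np.argmin(np.abs(eig - 1.0)))
K = np.sum(1.0 / (1.0 - eig)).real   # complex pairs cancel, leaving a real value
print("Kemeny's constant:", K)
```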
Optimistic knowledge gradient
In statistics, the optimistic knowledge gradient is an approximation policy proposed by Xi Chen, Qihang Lin and Dengyong Zhou in 2013. The policy was created to address the computational intractability of the optimal budget allocation problem in crowdsourced labelling.
Ionescu-Tulcea theorem
In the mathematical theory of probability, the Ionescu-Tulcea theorem, sometimes called the Ionescu-Tulcea extension theorem, deals with the existence of probability measures for probabilistic events consisting of a countably infinite number of individual probabilistic events.
Ornstein–Uhlenbeck process
In mathematics, the Ornstein–Uhlenbeck process is a stochastic process with applications in financial mathematics and the physical sciences. Its original application in physics was as a model for the velocity of a massive Brownian particle under the influence of friction.
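A minimal Euler–Maruyama discretization of the OU SDE dX = θ(μ − X) dt + σ dW; the parameter values and step size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

theta, mu, sigma = 1.0, 0.0, 0.3   # mean-reversion rate, long-run mean, noise scale
dt, n_steps = 0.01, 1000

x = np.empty(n_steps + 1)
x[0] = 1.0                          # initial condition
for k in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))                          # Brownian increment
    x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dW    # Euler-Maruyama step

print(x[-1], x.mean())
```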
Matrix analytic method
In probability theory, the matrix analytic method is a technique to compute the stationary probability distribution of a Markov chain which has a repeating structure (after some point) and a state space which grows unboundedly in no more than one dimension.