Category: Markov models

Snakes and ladders
Snakes and ladders is a board game for two or more players regarded today as a worldwide classic. The game originated in ancient India as Moksha Patam, and was brought to the UK in the 1890s. It is played on a game board with numbered, gridded squares.
Quantum Markov chain
In mathematics, the quantum Markov chain is a reformulation of the ideas of a classical Markov chain, replacing the classical definitions of probability with quantum probability.
Queueing theory
Queueing theory is the mathematical study of waiting lines, or queues. A queueing model is constructed so that queue lengths and waiting times can be predicted. Queueing theory is generally considered a branch of operations research because the results are often used when making business decisions about the resources needed to provide a service.
Burst error
In telecommunication, a burst error or error burst is a contiguous sequence of symbols, received over a communication channel, such that the first and last symbols are in error and there exists no contiguous subsequence of m correctly received symbols within the error burst.
Reinforcement learning
Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. Reinforcement learning is one of three basic machine learning paradigms, alongside supervised learning and unsupervised learning.
Mark V. Shaney
Mark V. Shaney is a synthetic Usenet user whose postings in the net.singles newsgroups were generated by Markov chain techniques, based on text from other postings. The username is a play on the words "Markov chain".
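As an illustration of the technique behind such generators (not the original program), here is a minimal sketch of an order-1, word-level Markov chain text generator in Python; the tiny corpus is an assumption chosen for brevity.

import random
from collections import defaultdict

def build_chain(text):
    # Map each word to the list of words observed to follow it (order-1 model).
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=12):
    # Walk the chain, picking each next word at random from the observed successors.
    word, output = start, [start]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:
            break
        word = random.choice(successors)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(build_chain(corpus), "the"))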
Markov model
In probability theory, a Markov model is a stochastic model used to model pseudo-randomly changing systems. It is assumed that future states depend only on the current state, not on the events that occurred before it (that is, it assumes the Markov property).
Multiple sequence alignment
Multiple sequence alignment (MSA) may refer to the process or the result of sequence alignment of three or more biological sequences, generally protein, DNA, or RNA. In many cases, the input set of query sequences is assumed to have an evolutionary relationship.
Markov partition
A Markov partition in mathematics is a tool used in dynamical systems theory, allowing the methods of symbolic dynamics to be applied to the study of hyperbolic dynamics. By using a Markov partition, the system can be made to resemble a discrete-time Markov process, with the long-term dynamical characteristics of the system represented as a Markov shift.
Markov switching multifractal
In financial econometrics (the application of statistical methods to economic data), the Markov-switching multifractal (MSM) is a model of asset returns developed by Laurent E. Calvet and Adlai J. Fisher that incorporates stochastic volatility components of heterogeneous durations.
MRF optimization via dual decomposition
In dual decomposition, a problem is broken into smaller subproblems, and a solution to the relaxed problem is found. This method can be employed for MRF optimization.
Gene prediction
In computational biology, gene prediction or gene finding refers to the process of identifying the regions of genomic DNA that encode genes. This includes protein-coding genes as well as RNA genes, but may also include the prediction of other functional elements such as regulatory regions.
Algorithmic composition
Algorithmic composition is the technique of using algorithms to create music. Algorithms (or, at the very least, formal sets of rules) have been used to compose music for centuries; the procedures used to plot voice-leading in Western counterpoint, for example, can often be reduced to algorithmic determinacy.
Absorbing Markov chain
In the mathematical theory of probability, an absorbing Markov chain is a Markov chain in which every state can reach an absorbing state. An absorbing state is a state that, once entered, cannot be left.
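The standard analysis works through the fundamental matrix N = (I - Q)^(-1), where Q collects the transitions among transient states; a minimal NumPy sketch, using a made-up three-state chain, is shown below.

import numpy as np

# States ordered (transient, transient, absorbing); this chain is an assumed example.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.0, 0.0, 1.0],
])

Q = P[:2, :2]                     # transitions among transient states
R = P[:2, 2:]                     # transitions from transient to absorbing states
N = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix: expected visits to each transient state

print(N.sum(axis=1))  # expected number of steps before absorption (about [4.21, 3.68])
print(N @ R)          # absorption probabilities (a single column of ones here)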
Interacting particle system
In probability theory, an interacting particle system (IPS) is a stochastic process on some configuration space given by a site space, a countable-infinite graph, and a local state space, a compact metric space.
Population process
In applied probability, a population process is a Markov chain in which the state of the chain is analogous to the number of individuals in a population (0, 1, 2, etc.), and changes to the state are analogous to the addition or removal of individuals from the population.
Iterative Viterbi decoding
Iterative Viterbi decoding is an algorithm that spots the subsequence S of an observation O = {o1, ..., on} having the highest average probability (i.e., probability scaled by the length of S) of being generated by a given hidden Markov model M with m states.
Dependability state model
A dependability state diagram is a method for modelling a system as a Markov chain. It is used in reliability engineering for availability and reliability analysis. It consists of creating a finite state diagram of the system's working and failed states and the rates of transitions between them (failures and repairs).
Language model
A language model is a probability distribution over sequences of words. Given such a sequence of length m, a language model assigns a probability to the whole sequence. Language models generate probabilities by training on text corpora in one or many languages.
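As a toy illustration of how such a probability can be assigned, the sketch below builds a bigram (order-1 Markov) language model with maximum-likelihood estimates; the miniature corpus and the absence of smoothing are simplifying assumptions.

from collections import Counter

corpus = "the cat sat on the mat </s> the dog sat on the rug </s>".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def bigram_prob(prev, word):
    # Maximum-likelihood estimate of P(word | prev); 0.0 for unseen pairs (no smoothing).
    return bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0

def sequence_prob(words):
    # Probability of the whole sequence as a product of bigram probabilities.
    p = 1.0
    for prev, word in zip(words, words[1:]):
        p *= bigram_prob(prev, word)
    return p

print(sequence_prob("the cat sat on the rug".split()))  # 0.0625 for this corpus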
Detailed balance
The principle of detailed balance can be used in kinetic systems which are decomposed into elementary processes (collisions, or steps, or elementary reactions). It states that at equilibrium, each elementary process is in equilibrium with its reverse process.
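For a Markov chain with transition matrix P, detailed balance with respect to a distribution pi means pi_i P_ij = pi_j P_ji for every pair of states. The sketch below checks the condition numerically for a small birth-death chain; the matrix is an assumed example.

import numpy as np

# A birth-death chain: transitions only between neighbouring states, so it is reversible.
P = np.array([
    [0.7, 0.3, 0.0],
    [0.2, 0.5, 0.3],
    [0.0, 0.4, 0.6],
])

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pi /= pi.sum()

# Detailed balance: pi_i * P_ij should equal pi_j * P_ji for every pair (i, j).
flux = pi[:, None] * P
print(np.allclose(flux, flux.T))  # True for this reversible chain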
Markov chain Monte Carlo
In statistics, Markov chain Monte Carlo (MCMC) methods comprise a class of algorithms for sampling from a probability distribution. By constructing a Markov chain that has the desired distribution as its equilibrium distribution, one can obtain a sample of the desired distribution by recording states from the chain.
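A minimal random-walk Metropolis sampler illustrates the idea; the standard normal target and Gaussian proposal below are assumptions chosen for brevity, not any particular library's interface.

import math
import random

def target_density(x):
    # Unnormalised density of the target distribution (a standard normal here).
    return math.exp(-0.5 * x * x)

def metropolis(n_samples, step=1.0, x0=0.0):
    # Random-walk Metropolis: propose x' ~ N(x, step^2), accept with probability min(1, p(x')/p(x)).
    samples, x = [], x0
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)
        if random.random() < target_density(proposal) / target_density(x):
            x = proposal  # accept; otherwise the chain stays at its current state
        samples.append(x)
    return samples

chain = metropolis(50_000)
burned = chain[5_000:]            # discard burn-in before using the samples
print(sum(burned) / len(burned))  # sample mean, close to 0 for a standard normal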
Diffusion model
In machine learning, diffusion models, also known as diffusion probabilistic models, are a class of latent variable models. These models are Markov chains trained using variational inference. The goal of diffusion models is to learn the latent structure of a dataset by modeling the way in which data points diffuse through the latent space.
Baum–Welch algorithm
In electrical engineering, statistical computing and bioinformatics, the Baum–Welch algorithm is a special case of the expectation–maximization algorithm used to find the unknown parameters of a hidden Markov model (HMM).
Examples of Markov chains
This article contains examples of Markov chains and Markov processes in action. All examples use a countable state space. For an overview of Markov chains in general state space, see Markov chains on a measurable state space.
Dynamics of Markovian particles
Dynamics of Markovian particles (DMP) is the basis of a theory for the kinetics of particles in open systems. It can be looked upon as an application of the notion of a stochastic process conceived as a physical entity.
Google matrix
A Google matrix is a particular stochastic matrix that is used by Google's PageRank algorithm. The matrix represents a graph with edges representing links between pages. The PageRank of each page can then be generated iteratively from the Google matrix using the power method.
Stochastic cellular automaton
Stochastic cellular automata or probabilistic cellular automata (PCA) or random cellular automata or locally interacting Markov chains are an important extension of cellular automata. Cellular automata are discrete-time dynamical systems of interacting entities whose state is discrete.
Path dependence
Path dependence is a concept in economics and the social sciences, referring to processes where past events or decisions constrain later events or decisions. It can be used to refer to outcomes at a single point in time or to long-run equilibria of a process.
Markov chain central limit theorem
In the mathematical theory of random processes, the Markov chain central limit theorem has a conclusion somewhat similar in form to that of the classic central limit theorem (CLT) of probability theory.
CLAWS (linguistics)
The Constituent Likelihood Automatic Word-tagging System (CLAWS) is a program that performs part-of-speech tagging. It was developed in the 1980s at Lancaster University by the University Centre for Computer Corpus Research on Language (UCREL).
Markovian discrimination
In Markov-model-based spam filtering, Markovian discrimination is a method used in CRM114 and other spam filters to model the statistical behavior of spam and nonspam more accurately than in simple Bayesian spam filters.
Forward–backward algorithm
The forward–backward algorithm is an inference algorithm for hidden Markov models which computes the posterior marginals of all hidden state variables given a sequence of observations/emissions o_1, ..., o_T, i.e. it computes, for all hidden state variables X_t, the distribution P(X_t | o_1, ..., o_T).
Hidden Markov model
A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process (call it X) with unobservable ("hidden") states. As part of the definition, an HMM requires that there be an observable process Y whose outcomes are "influenced" by the outcomes of X in a known way.
Dynamic Markov compression
Dynamic Markov compression (DMC) is a lossless data compression algorithm developed by Gordon Cormack and Nigel Horspool. It uses predictive arithmetic coding similar to prediction by partial matching (PPM), except that the input is predicted one bit at a time rather than one byte at a time.
Markov property
In probability theory and statistics, the term Markov property refers to the memoryless property of a stochastic process. It is named after the Russian mathematician Andrey Markov. The term strong Markov property is similar to the Markov property, except that the meaning of "present" is defined in terms of a random variable known as a stopping time.
PageRank
PageRank (PR) is an algorithm used by Google Search to rank web pages in their search engine results. It is named after both the term "web page" and co-founder Larry Page. PageRank is a way of measuring the importance of website pages.
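The core computation can be sketched as power iteration on a damped, column-stochastic link matrix; the tiny link graph, the damping factor of 0.85, and the handling of dangling pages below are illustrative assumptions rather than the production algorithm.

import numpy as np

def pagerank(links, damping=0.85, tol=1e-9):
    # Power iteration on a Google-style matrix built from an adjacency dictionary.
    pages = sorted(links)
    index = {page: i for i, page in enumerate(pages)}
    n = len(pages)

    # Column-stochastic link matrix: column j spreads page j's rank over its out-links.
    M = np.zeros((n, n))
    for page, outlinks in links.items():
        if outlinks:
            for target in outlinks:
                M[index[target], index[page]] = 1.0 / len(outlinks)
        else:
            M[:, index[page]] = 1.0 / n  # dangling page: treat as linking to every page

    rank = np.full(n, 1.0 / n)
    while True:
        new_rank = (1 - damping) / n + damping * (M @ rank)
        if np.abs(new_rank - rank).sum() < tol:
            return dict(zip(pages, new_rank))
        rank = new_rank

print(pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]}))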
GLIMMER
In bioinformatics, GLIMMER (Gene Locator and Interpolated Markov ModelER) is used to find genes in prokaryotic DNA. "It is effective at finding genes in bacteria, archaea, viruses, typically finding 98–99% of all relatively long protein coding genes."
Forward algorithm
The forward algorithm, in the context of a hidden Markov model (HMM), is used to calculate a 'belief state': the probability of a state at a certain time, given the history of evidence. The process is also known as filtering.
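A compact sketch of the forward recursion for a discrete HMM follows; the two-state model and its probabilities are made up for illustration.

# Forward algorithm: alpha_t(s) = P(o_1..o_t, state_t = s), computed recursively.
states = ("rain", "sun")
start = {"rain": 0.6, "sun": 0.4}                # initial distribution (assumed)
trans = {"rain": {"rain": 0.7, "sun": 0.3},      # transition probabilities (assumed)
         "sun": {"rain": 0.4, "sun": 0.6}}
emit = {"rain": {"walk": 0.1, "umbrella": 0.9},  # emission probabilities (assumed)
        "sun": {"walk": 0.8, "umbrella": 0.2}}

def forward(observations):
    # Initialisation: alpha_1(s) = start(s) * emit(s, o_1).
    alpha = {s: start[s] * emit[s][observations[0]] for s in states}
    for obs in observations[1:]:
        # Recursion: alpha_t(s) = sum over s' of alpha_{t-1}(s') * trans(s', s) * emit(s, o_t).
        alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][obs]
                 for s in states}
    return alpha  # joint probability of the observations and each final state

alpha = forward(["umbrella", "umbrella", "walk"])
print(alpha, "P(observations) =", sum(alpha.values()))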
Markov chain
A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs now."
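A bare-bones simulation makes the "depends only on the state of affairs now" property concrete; the two-state weather chain below is an assumed example.

import random

# Each state maps to its successor distribution; the next state depends only on the current one.
transitions = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def step(state):
    # Draw the next state from the current state's transition distribution.
    r, cumulative = random.random(), 0.0
    for next_state, prob in transitions[state]:
        cumulative += prob
        if r < cumulative:
            return next_state
    return next_state  # guard against floating-point rounding at the end of the list

state, path = "sunny", ["sunny"]
for _ in range(10):
    state = step(state)
    path.append(state)
print(" -> ".join(path))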
Entropy rate
In the mathematical theory of probability, the entropy rate or source information rate of a stochastic process is, informally, the time density of the average information in a stochastic process. For a strongly stationary process, the conditional entropy of the latest random variable given the past eventually tends towards this rate value.
Variable-order Markov model
In the mathematical theory of stochastic processes, variable-order Markov (VOM) models are an important class of models that extend the well-known Markov chain models. In contrast to Markov chain models, where each random variable in a sequence with a Markov property depends on a fixed number of random variables, in VOM models this number of conditioning random variables may vary based on the specific observed realization.
Pop music automation
Pop music automation is a field of study among musicians and computer scientists with a goal of producing successful pop music algorithmically. It is often based on the premise that pop music is especially formulaic.
Stochastic matrix
In mathematics, a stochastic matrix is a square matrix used to describe the transitions of a Markov chain. Each of its entries is a nonnegative real number representing a probability. It is also called a probability matrix, transition matrix, substitution matrix, or Markov matrix.
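The defining property is that each row sums to 1; the sketch below verifies this and finds the stationary distribution pi satisfying pi P = pi by repeated multiplication (the matrix itself is an assumed example).

import numpy as np

# A right stochastic matrix: nonnegative entries, each row summing to 1.
P = np.array([
    [0.9, 0.1, 0.0],
    [0.1, 0.8, 0.1],
    [0.0, 0.2, 0.8],
])
assert np.allclose(P.sum(axis=1), 1.0)

# Stationary distribution: pi P = pi, found here by power iteration from a uniform start.
pi = np.full(3, 1 / 3)
for _ in range(10_000):
    pi = pi @ P
print(pi, np.allclose(pi @ P, pi))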
MegaHAL
MegaHAL is a computer conversation simulator, or "chatterbot", created by Jason Hutchens.
Discrete phase-type distribution
The discrete phase-type distribution is a probability distribution that results from a system of one or more inter-related geometric distributions occurring in sequence, or phases. The sequence in which each of the phases occurs may itself be a stochastic process.
Variable-order Bayesian network
Variable-order Bayesian network (VOBN) models provide an important extension of both the Bayesian network models and the variable-order Markov models. VOBN models are used in machine learning in general, and have been shown to be useful in bioinformatics in particular.
Markov chain geostatistics
Markov chain geostatistics uses Markov chain spatial models, simulation algorithms and associated spatial correlation measures (e.g., transiogram) based on Markov chain random field theory, which extends a single Markov chain into a multi-dimensional random field for geostatistical modelling.
Switching Kalman filter
The switching Kalman filtering (SKF) method is a variant of the Kalman filter. In its generalised form, it is often attributed to Kevin P. Murphy, but related switching state-space models have been in use for some time.
Maximum-entropy Markov model
In statistics, a maximum-entropy Markov model (MEMM), or conditional Markov model (CMM), is a graphical model for sequence labeling that combines features of hidden Markov models (HMMs) and maximum entropy (MaxEnt) models.
Part-of-speech tagging
In corpus linguistics, part-of-speech tagging (POS tagging or PoS tagging or POST), also called grammatical tagging, is the process of marking up a word in a text (corpus) as corresponding to a particular part of speech, based on both its definition and its context.
Kalman filter
In statistics and control theory, Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, including statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone.
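A one-dimensional predict/update cycle shows the structure; the nearly constant state model and the noise variances below are assumptions chosen for illustration.

import random

def kalman_1d(measurements, process_var=1e-4, measurement_var=0.25):
    # Scalar Kalman filter tracking a nearly constant value from noisy measurements.
    x, p = 0.0, 1.0  # initial state estimate and its variance
    estimates = []
    for z in measurements:
        # Predict: the state is modelled as constant, so only the uncertainty grows.
        p = p + process_var
        # Update: blend the prediction and the measurement using the Kalman gain.
        k = p / (p + measurement_var)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

true_value = 1.0
noisy = [true_value + random.gauss(0.0, 0.5) for _ in range(200)]
print(kalman_1d(noisy)[-1])  # settles near 1.0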
Viterbi algorithm
The Viterbi algorithm is a dynamic programming algorithm for obtaining the maximum a posteriori probability estimate of the most likely sequence of hidden states—called the Viterbi path—that results in a sequence of observed events.
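A small implementation over a toy two-state HMM follows; the healthy/fever model and its probabilities are assumptions for illustration.

def viterbi(observations, states, start, trans, emit):
    # Return the most probable hidden-state sequence (the Viterbi path) for the observations.
    # V[t][s] holds the probability of the best path that ends in state s at time t.
    V = [{s: start[s] * emit[s][observations[0]] for s in states}]
    back = [{}]
    for t, obs in enumerate(observations[1:], start=1):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max((V[t - 1][p] * trans[p][s] * emit[s][obs], p) for p in states)
            V[t][s], back[t][s] = prob, prev
    # Backtrack from the best final state.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

states = ("healthy", "fever")
start = {"healthy": 0.6, "fever": 0.4}
trans = {"healthy": {"healthy": 0.7, "fever": 0.3},
         "fever": {"healthy": 0.4, "fever": 0.6}}
emit = {"healthy": {"normal": 0.5, "cold": 0.4, "dizzy": 0.1},
        "fever": {"normal": 0.1, "cold": 0.3, "dizzy": 0.6}}
print(viterbi(["normal", "cold", "dizzy"], states, start, trans, emit))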
Generalized filtering
Generalized filtering is a generic Bayesian filtering scheme for nonlinear state-space models. It is based on a variational principle of least action, formulated in generalized coordinates of motion.
Models of DNA evolution
A number of different Markov models of DNA sequence evolution have been proposed. These substitution models differ in terms of the parameters used to describe the rates at which one nucleotide replaces another during evolution.
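As a worked instance of the simplest such model, the Jukes-Cantor (JC69) model assumes equal substitution rates between all nucleotides, which gives closed-form transition probabilities; the numerical value used in the sketch is arbitrary.

import math

def jc69_transition_prob(d, same):
    # JC69 transition probability after an expected d substitutions per site:
    # P(same base) = 1/4 + 3/4 * exp(-4d/3), P(a specific different base) = 1/4 - 1/4 * exp(-4d/3).
    decay = math.exp(-4.0 * d / 3.0)
    return 0.25 + 0.75 * decay if same else 0.25 - 0.25 * decay

d = 0.5  # expected substitutions per site (rate times time), an arbitrary value
p_same = jc69_transition_prob(d, same=True)
p_other = jc69_transition_prob(d, same=False)
print(p_same, p_other, p_same + 3 * p_other)  # the three other bases account for the rest; total is 1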
Bernoulli scheme
In mathematics, the Bernoulli scheme or Bernoulli shift is a generalization of the Bernoulli process to more than two possible outcomes. Bernoulli schemes appear naturally in symbolic dynamics, and are thus important in the study of dynamical systems.