Exotic probability

Exotic probability is a branch of probability theory that deals with probabilities outside the normal range of [0, 1].

Blumenthal's zero–one law

In the mathematical theory of probability, Blumenthal's zero–one law, named after Robert McCallum Blumenthal, is a statement about the nature of the beginnings of right-continuous Feller processes.

Uncertainty theory

Uncertainty theory is a branch of mathematics based on the axioms of normality, monotonicity, self-duality, countable subadditivity, and product measure; it provides a mathematical measure of the likelihood of an event.

Augmented filtration

An augmented filtration is one enlarged so that each σ-algebra in it contains every null set of the underlying probability measure.

Almost surely

In probability theory, an event is said to happen almost surely (sometimes abbreviated as a.s.) if it happens with probability 1 (or Lebesgue measure 1). In other words, the set of possible exceptions may be non-empty, but it has probability 0.

Transition kernel

In the mathematics of probability, a transition kernel or kernel is a function with several applications; kernels can, for example, be used to define random measures or stochastic processes.

Fat Chance: Probability from 0 to 1

Fat Chance: Probability from 0 to 1 is an introductory undergraduate-level textbook on probability theory, centered on the metaphor of games of chance. It was written by Benedict Gross, Joe Harris, and Emily Riehl, and published in 2019 by Cambridge University Press.

Indicator function

In mathematics, an indicator function or a characteristic function of a subset of a set is a function that maps elements of the subset to one, and all other elements to zero. That is, if A is a subset of a set X, the indicator function 1_A maps each x in A to 1 and every other element of X to 0.
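
In code, the indicator function of a subset is simply a membership test; the sketch below (the name `indicator` is illustrative, not from any library) returns 1_A as a callable:

```python
def indicator(subset):
    """Return the indicator function 1_A of a subset A as a callable."""
    return lambda x: 1 if x in subset else 0

# The indicator of A = {1, 2, 3}:
one_A = indicator({1, 2, 3})
```

A useful identity in probability: the expectation of 1_A equals P(A).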

Inductive probability

Inductive probability attempts to give the probability of future events based on past events. It is the basis for inductive reasoning, and gives the mathematical basis for learning and the perception of patterns.

Carleman's condition

In mathematics, particularly in analysis, Carleman's condition gives a sufficient condition for the determinacy of the moment problem. That is, if a measure satisfies Carleman's condition, there is no other measure with the same moments.

High-dimensional statistics

In statistical theory, the field of high-dimensional statistics studies data whose dimension is larger than typically considered in classical multivariate analysis. The area arose owing to the emergence of data sets in which the dimension of the observations is comparable to, or even larger than, the sample size.

Independent increments

In probability theory, independent increments are a property of stochastic processes and random measures. Most of the time, a process or random measure has independent increments by definition, which underlines the importance of this property.

Hajek projection

In statistics, the Hájek projection of a random variable T on a set of independent random vectors X_1, …, X_n is a particular measurable function of X_1, …, X_n that, loosely speaking, captures the variation of T in an optimal way.

With high probability

In mathematics, an event that occurs with high probability (often shortened to w.h.p. or WHP) is one whose probability depends on a certain number n and goes to 1 as n goes to infinity, i.e. the probability of the event can be made as close to 1 as desired by making n big enough.

Exponentially equivalent measures

In mathematics, exponential equivalence of measures is how two sequences or families of probability measures are "the same" from the point of view of large deviations theory.

Decoupling (probability)

In probability and statistics, decoupling is a reduction of a sample statistic to an average of the statistic evaluated on several independent sequences of the random variable.

Chain rule (probability)

In probability theory, the chain rule (also called the general product rule) permits the calculation of any member of the joint distribution of a set of random variables using only conditional probabilities.
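
As a minimal numeric sketch (the probabilities here are made up for illustration): the joint probability P(A, B, C) is the product of the successive conditional probabilities P(A), P(B | A), P(C | A, B).

```python
import math

def joint_probability(conditionals):
    """Chain rule: multiply P(A1), P(A2 | A1), P(A3 | A1, A2), ..."""
    return math.prod(conditionals)

# P(A) = 0.5, P(B | A) = 0.4, P(C | A, B) = 0.25  =>  P(A, B, C) = 0.05
p_abc = joint_probability([0.5, 0.4, 0.25])
```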

Impossibility of a gambling system

The principle of the impossibility of a gambling system is a concept in probability. It states that in a random sequence, the methodical selection of subsequences does not change the probability of specific outcomes.

Probability interpretations

The word probability has been used in a variety of ways since it was first applied to the mathematical study of games of chance. Does probability measure the real, physical tendency of something to occur, or is it a measure of how strongly one believes it will occur, or does it draw on both these elements?

Total correlation

In probability theory and in particular in information theory, total correlation (Watanabe 1960) is one of several generalizations of the mutual information. It is also known as the multivariate constraint.

Law of truly large numbers

The law of truly large numbers (a statistical adage), attributed to Persi Diaconis and Frederick Mosteller, states that with a large enough number of independent samples, any highly implausible (i.e. low-probability) event is likely to be observed.

Dynkin system

A Dynkin system, named after Eugene Dynkin, is a collection of subsets of another universal set satisfying a set of axioms weaker than those of a σ-algebra. Dynkin systems are sometimes referred to as λ-systems.

Intensity measure

In probability theory, an intensity measure is a measure that is derived from a random measure. The intensity measure is a non-random measure and is defined as the expectation value of the random measure.

Probability vector

In mathematics and statistics, a probability vector or stochastic vector is a vector with non-negative entries that add up to one. The positions (indices) of a probability vector represent the possible outcomes of a discrete random variable.
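
The defining conditions translate directly into a validity check; a small sketch (the function name is illustrative), with a tolerance for floating-point rounding:

```python
def is_probability_vector(v, tol=1e-9):
    """Non-negative entries that sum to one, up to floating-point tolerance."""
    return all(x >= 0 for x in v) and abs(sum(v) - 1.0) <= tol
```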

Possibility theory

Possibility theory is a mathematical theory for dealing with certain types of uncertainty and is an alternative to probability theory. It uses measures of possibility and necessity between 0 and 1, ranging from impossible to possible and from unnecessary to necessary, respectively.

Sub-probability measure

In the mathematical theory of probability and measure, a sub-probability measure is a measure that is closely related to probability measures. While probability measures always assign the value 1 to the whole space, sub-probability measures assign it a value of at most 1.

Typical set

In information theory, the typical set is a set of sequences whose probability is close to two raised to the negative power of the entropy of their source distribution. That this set has total probability close to one is a consequence of the asymptotic equipartition property.

Empirical characteristic function

Let X_1, …, X_n be independent, identically distributed real-valued random variables with common characteristic function φ(t). The empirical characteristic function (ECF), defined as φ̂_n(t) = (1/n) Σ_{j=1}^n e^{itX_j}, is an unbiased and consistent estimator of φ(t).
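
The definition translates directly into code using only the standard library (`ecf` is an illustrative name):

```python
import cmath

def ecf(samples, t):
    """Empirical characteristic function: (1/n) * sum_j exp(i * t * x_j)."""
    return sum(cmath.exp(1j * t * x) for x in samples) / len(samples)
```

Note that, like any characteristic function, the ECF always equals 1 at t = 0.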

Collectively exhaustive events

In probability theory and logic, a set of events is jointly or collectively exhaustive if at least one of the events must occur. For example, when rolling a six-sided die, the events 1, 2, 3, 4, 5, and 6 are collectively exhaustive, because they encompass the entire range of possible outcomes.
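
For finite sample spaces the condition is just a set union, as in this small sketch (function name illustrative):

```python
def collectively_exhaustive(events, sample_space):
    """True if the union of the events covers the whole sample space."""
    return set().union(*events) == set(sample_space)

# The six singleton die-face events cover every possible outcome.
faces = [{1}, {2}, {3}, {4}, {5}, {6}]
```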

Imprecise probability

Imprecise probability generalizes probability theory to allow for partial probability specifications, and is applicable when information is scarce, vague, or conflicting, in which case a unique probability distribution may be hard to identify.

Isotropic measure

In probability theory, an isotropic measure is any mathematical measure that is invariant under linear isometries. It is a standard simplification and assumption used in probability theory.

Catalog of articles in probability theory

This page lists articles related to probability theory. In particular, it lists many articles corresponding to specific probability distributions.

Filtration (probability theory)

In the theory of stochastic processes, a subdiscipline of probability theory, filtrations are totally ordered collections of subsets that are used to model the information that is available at a given point in time.

Right-continuous filtration

A filtration (ℱ_t) is right-continuous if ℱ_t equals the intersection of ℱ_s over all s > t.

Usual hypotheses

A filtration satisfies the usual hypotheses (or usual conditions) if it is both complete and right-continuous.

Graphon

In graph theory and statistics, a graphon (also known as a graph limit) is a symmetric measurable function W: [0, 1]² → [0, 1] that is important in the study of dense graphs. Graphons arise both as a natural notion for the limit of a sequence of dense graphs and as the fundamental defining objects of exchangeable random graph models.

Independent S-increments

No description available.

Covariance operator

In probability theory, for a probability measure P on a Hilbert space H with inner product ⟨·, ·⟩, the covariance of P is the bilinear form Cov: H × H → R given by Cov(x, y) = ∫_H ⟨x, z⟩⟨y, z⟩ dP(z) for all x and y in H. The covariance operator C is then defined via Cov(x, y) = ⟨Cx, y⟩.

Pascal's wager

Pascal's wager is a philosophical argument presented by the seventeenth-century French mathematician, philosopher, physicist and theologian Blaise Pascal (1623–1662). It posits that human beings wager with their lives that God either exists or does not.

Mixture (probability)

In probability theory and statistics, a mixture is a probabilistic combination of two or more probability distributions. The concept arises mostly in two contexts: defining a new probability distribution from existing ones, as in a mixture distribution or a compound distribution, and describing a statistical population composed of two or more subpopulations.
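
Sampling from a mixture can be done in two stages: pick a component at random according to the mixture weights, then draw from that component. A hedged sketch using only the standard library (the function name is illustrative):

```python
import random

def sample_mixture(weights, samplers, rng=random):
    """Pick a component sampler with the given weights, then draw from it."""
    (sampler,) = rng.choices(samplers, weights=weights, k=1)
    return sampler()
```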

Probabilistic argumentation

Probabilistic argumentation refers to different formal frameworks pertaining to probabilistic logic. All share the idea that qualitative aspects can be captured by an underlying logic, while quantitative aspects of uncertainty are accounted for by probabilistic measures.

Ibragimov–Iosifescu conjecture for φ-mixing sequences

The Ibragimov–Iosifescu conjecture for φ-mixing sequences in probability theory is the collective name for two closely related conjectures by Ildar Ibragimov and Marius Iosifescu.

Complex random vector

In probability theory and statistics, a complex random vector is typically a tuple of complex-valued random variables, and generally is a random variable taking values in a vector space over the field of complex numbers.

Radical probabilism

Radical probabilism is a hypothesis in philosophy, in particular epistemology, and probability theory that holds that no facts are known for certain. That view holds profound implications for statistical inference.

Geometric probability

Problems of the following type, and their solution techniques, were first studied in the 18th century, and the general topic became known as geometric probability.
* (Buffon's needle) What is the chance that a needle dropped randomly onto a floor marked with equally spaced parallel lines will cross one of the lines?

Central tendency

In statistics, a central tendency (or measure of central tendency) is a central or typical value for a probability distribution. Colloquially, measures of central tendency are often called averages. The most common measures of central tendency are the arithmetic mean, the median, and the mode.

Zero–one law

In probability theory, a zero–one law is a result that states that an event must have probability 0 or 1 and no intermediate value. Sometimes, the statement is that the limit of certain probabilities must be 0 or 1.

ε-net (computational geometry)

An ε-net (pronounced epsilon-net) in computational geometry is the approximation of a general set by a collection of simpler subsets. In probability theory it is the approximation of one probability distribution by another.

Continuum percolation theory

In mathematics and probability theory, continuum percolation theory is a branch of mathematics that extends discrete percolation theory to continuous space (often Euclidean space ℝⁿ).

Inclusion–exclusion principle

In combinatorics, a branch of mathematics, the inclusion–exclusion principle is a counting technique which generalizes the familiar method of obtaining the number of elements in the union of two finite sets, symbolically expressed as |A ∪ B| = |A| + |B| − |A ∩ B|.
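
The general principle alternates signs over intersections of all non-empty subfamilies; a short sketch (function name illustrative):

```python
from itertools import combinations

def union_size(sets):
    """|A1 ∪ ... ∪ An| by inclusion–exclusion over all non-empty subfamilies."""
    total = 0
    for k in range(1, len(sets) + 1):
        for combo in combinations(sets, k):
            # Add intersections of odd-sized subfamilies, subtract even-sized ones.
            total += (-1) ** (k + 1) * len(set.intersection(*combo))
    return total
```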

Pairwise error probability

Pairwise error probability is the error probability that, for a transmitted signal, its corresponding but distorted version will be received.

Filtered probability space

A probability space equipped with a filtration of its σ-algebra, often written (Ω, ℱ, (ℱ_t)_{t ≥ 0}, P).

Set-theoretic limit

In mathematics, the limit of a sequence of sets (subsets of a common set X) is a set whose elements are determined by the sequence in either of two equivalent ways: (1) by upper and lower bounds on the sequence that converge monotonically to the same set, and (2) by convergence of the sequence of indicator functions of the sets.
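
For a periodic sequence of sets, the upper and lower limits can be computed from a single period. As a sketch, take the alternating sequence A_n = {0} for even n and {1} for odd n: every element of {0} ∪ {1} occurs infinitely often, while no element lies in all but finitely many A_n.

```python
# One full period of the alternating sequence A_n = {0}, {1}, {0}, {1}, ...
period = [{0}, {1}]
lim_sup = set.union(*period)         # elements appearing in infinitely many A_n
lim_inf = set.intersection(*period)  # elements in all but finitely many A_n
```

Since lim_sup ≠ lim_inf here, this sequence of sets has no limit.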

Complex random variable

In probability theory and statistics, complex random variables are a generalization of real-valued random variables to complex numbers, i.e. the possible values a complex random variable may take are

Graphoid

A graphoid is a set of statements of the form, "X is irrelevant to Y given that we know Z" where X, Y and Z are sets of variables. The notions of "irrelevance" and "given that we know" may obtain different interpretations, including probabilistic, relational, and correlational ones, depending on the application.

Total variation distance of probability measures

In probability theory, the total variation distance is a distance measure for probability distributions. It is an example of a statistical distance metric, and is sometimes called the statistical distance, statistical difference, or variational distance.
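
For distributions on a finite set, the total variation distance equals half the L1 distance between the probability mass functions; a minimal sketch (function name illustrative), with distributions given as dictionaries:

```python
def total_variation(p, q):
    """TV distance between finite distributions given as {outcome: probability}."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in support)
```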

Principle of indifference

The principle of indifference (also called principle of insufficient reason) is a rule for assigning epistemic probabilities. The principle of indifference states that in the absence of any relevant evidence, agents should distribute their credence equally among all the possible outcomes under consideration.

Contiguity (probability theory)

In probability theory, two sequences of probability measures are said to be contiguous if asymptotically they share the same support. Thus the notion of contiguity extends the concept of absolute continuity to sequences of measures.

Additive smoothing

In statistics, additive smoothing, also called Laplace smoothing or Lidstone smoothing, is a technique used to smooth categorical data. Given observation counts x = (x_1, …, x_d) from a d-dimensional multinomial distribution with N trials, a "smoothed" version of the counts gives the estimator θ̂_i = (x_i + α)/(N + αd).
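
The estimator (x_i + α)/(N + αd) is a one-liner in code; a sketch (function name illustrative) with α = 1 giving classic Laplace smoothing:

```python
def additive_smoothing(counts, alpha=1.0):
    """Lidstone estimator (x_i + alpha) / (N + alpha * d) for count data."""
    n, d = sum(counts), len(counts)
    return [(x + alpha) / (n + alpha * d) for x in counts]
```

Unlike the raw relative frequencies, the smoothed estimates are never exactly zero, even for unseen categories.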

Probability theory

Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms.

Wald's equation

In probability theory, Wald's equation, Wald's identity or Wald's lemma is an important identity that simplifies the calculation of the expected value of the sum of a random number of random quantities.
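
In its simplest form, Wald's equation says E[X_1 + … + X_N] = E[N]·E[X] when N is independent of the i.i.d. terms X_i. The identity can be checked by exhaustive enumeration for small finite distributions (all names and distributions below are illustrative):

```python
from itertools import product

def expected_random_sum(p_n, p_x):
    """E[X_1 + ... + X_N] by enumeration, with N independent of the i.i.d. X_i."""
    total = 0.0
    for n, pn in p_n.items():
        for draws in product(p_x.items(), repeat=n):
            prob, s = pn, 0.0
            for x, px in draws:
                prob *= px
                s += x
            total += prob * s
    return total
```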

Unit measure

Unit measure is an axiom of probability theory that states that the probability of the entire sample space is equal to one (unity); that is, P(S)=1 where S is the sample space. Loosely speaking, it means that on any trial, something must happen.

Convergence of Probability Measures

Convergence of Probability Measures is a graduate textbook in the field of mathematical probability theory. It was written by Patrick Billingsley and published by Wiley in 1968. A second edition was published in 1999.

Lévy–Prokhorov metric

In mathematics, the Lévy–Prokhorov metric (sometimes known just as the Prokhorov metric) is a metric (i.e., a definition of distance) on the collection of probability measures on a given metric space.

Big O in probability notation

The order in probability notation is used in probability theory and statistical theory in direct parallel to the big-O notation that is standard in mathematics. Where the big-O notation deals with the convergence of sequences or sets of ordinary numbers, the order in probability notation deals with convergence of sets of random variables, where convergence is in the sense of convergence in probability.

Statistical interference

When two probability distributions overlap, statistical interference exists. Knowledge of the distributions can be used to determine the likelihood that one parameter exceeds another, and by how much.

Additive process

An additive process, in probability theory, is a càdlàg, continuous-in-probability stochastic process with independent increments. An additive process is the generalization of a Lévy process (a Lévy process is an additive process with identically distributed increments).

Complete filtration

A filtration is complete if every σ-algebra in it contains all sets of probability zero of the underlying measure.

Chvátal–Sankoff constants

In mathematics, the Chvátal–Sankoff constants are mathematical constants that describe the lengths of longest common subsequences of random strings. Although the existence of these constants has been proven, their exact values are unknown.

The Mathematics of Games and Gambling

The Mathematics of Games and Gambling is a book on probability theory and its application to games of chance. It was written by Edward Packel, and published in 1981 by the Mathematical Association of America.

Probability axioms

The Kolmogorov axioms are the foundations of probability theory introduced by Russian mathematician Andrey Kolmogorov in 1933. These axioms remain central and have direct contributions to mathematics, the physical sciences, and real-world probability cases.

Coupling (probability)

In probability theory, coupling is a proof technique that allows one to compare two unrelated random variables (distributions) X and Y by creating a random vector W whose marginal distributions correspond to X and Y, respectively.
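
A classic example is the monotone coupling of two Bernoulli variables: driving both from one shared uniform draw yields correct marginals Bernoulli(p) and Bernoulli(q) while forcing X ≤ Y pointwise whenever p ≤ q. A sketch (function name illustrative):

```python
import random

def monotone_coupling(p, q, rng=random):
    """Couple Bernoulli(p) and Bernoulli(q) through one shared uniform draw."""
    u = rng.random()
    return (u < p, u < q)  # marginally Bernoulli(p) and Bernoulli(q)
```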

Dual total correlation

In information theory, dual total correlation (Han 1978), information rate (Dubnov 2006), excess entropy (Olbrich 2008), or binding information (Abdallah and Plumbley 2010) is one of several known non-negative generalizations of mutual information.
