Theorems in statistics

Binomial sum variance inequality

The binomial sum variance inequality states that the variance of the sum of independent binomially distributed random variables will always be less than or equal to the variance of a single binomial variable with the same number of trials and the same expected value.
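As a quick numeric check (a minimal sketch; the success probabilities below are arbitrary illustrative values), one can compare the variance of a sum of independent Bernoulli variables with that of a single binomial having the same trial count and mean:

```python
# Variance of a sum of independent Bernoulli(p_i) variables versus the
# variance of a Binomial(n, p_bar) with the same number of trials and the
# same mean. The inequality says the heterogeneous sum never has MORE variance.

def bernoulli_sum_variance(ps):
    """Var(sum X_i) for independent X_i ~ Bernoulli(p_i)."""
    return sum(p * (1 - p) for p in ps)

def binomial_variance(n, p):
    """Var(X) for X ~ Binomial(n, p)."""
    return n * p * (1 - p)

ps = [0.1, 0.5, 0.9]                     # illustrative, unequal probabilities
n = len(ps)
p_bar = sum(ps) / n                      # matching mean success probability

var_sum = bernoulli_sum_variance(ps)     # 0.09 + 0.25 + 0.09 = 0.43
var_binom = binomial_variance(n, p_bar)  # 3 * 0.5 * 0.5 = 0.75
```

Here the heterogeneous sum has strictly smaller variance; equality holds exactly when all the p_i coincide.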

Law of total variance

In probability theory, the law of total variance (also called the variance decomposition formula, the conditional variance formula, the law of iterated variances, or Eve's law) states that if X and Y are random variables on the same probability space, and the variance of Y is finite, then Var(Y) = E[Var(Y | X)] + Var(E[Y | X]).
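The decomposition can be verified exactly on a small discrete example (the joint distribution below is an arbitrary illustrative choice, not taken from any particular source):

```python
# Exact check of Var(Y) = E[Var(Y|X)] + Var(E[Y|X]) on a small joint pmf.

joint = {  # (x, y): probability
    (0, 1): 0.2, (0, 3): 0.3,
    (1, 2): 0.1, (1, 6): 0.4,
}

def mean(dist):
    return sum(v * p for v, p in dist.items())

def var(dist):
    m = mean(dist)
    return sum(p * (v - m) ** 2 for v, p in dist.items())

# Marginal distribution of Y
py = {}
for (x, y), p in joint.items():
    py[y] = py.get(y, 0.0) + p

# Conditional distributions of Y given each value of X
xs = sorted({x for (x, _) in joint})
cond = []
for x0 in xs:
    px = sum(p for (x, _), p in joint.items() if x == x0)
    cond.append((px, {y: p / px for (x, y), p in joint.items() if x == x0}))

e_var = sum(px * var(d) for px, d in cond)      # E[Var(Y|X)]
ce_dist = {}
for px, d in cond:                               # distribution of E[Y|X]
    m = mean(d)
    ce_dist[m] = ce_dist.get(m, 0.0) + px
var_e = var(ce_dist)                             # Var(E[Y|X])
total = var(py)                                  # Var(Y), computed directly
```

For this pmf, E[Var(Y|X)] = 1.76 and Var(E[Y|X]) = 2.25, which sum to the direct marginal variance 4.01.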

Optional stopping theorem

In probability theory, the optional stopping theorem (or Doob's optional sampling theorem) says that, under certain conditions, the expected value of a martingale at a stopping time is equal to its initial expected value.

Welch–Satterthwaite equation

In statistics and uncertainty analysis, the Welch–Satterthwaite equation is used to calculate an approximation to the effective degrees of freedom of a linear combination of independent sample variances.

Bayes' theorem

In probability theory and statistics, Bayes' theorem (alternatively Bayes' law or Bayes' rule), named after Thomas Bayes, describes the probability of an event based on prior knowledge of conditions that might be related to the event.
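A minimal sketch of the rule P(A|B) = P(B|A)·P(A)/P(B), using made-up illustrative numbers (a diagnostic test with 1% prevalence, 99% sensitivity, and 95% specificity):

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B),
# with P(B) expanded by the law of total probability.

def bayes(p_b_given_a, p_a, p_b_given_not_a):
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    return p_b_given_a * p_a / p_b

# Hypothetical values: prevalence 1%, sensitivity 99%, false-positive rate 5%.
posterior = bayes(p_b_given_a=0.99, p_a=0.01, p_b_given_not_a=0.05)
# Despite the accurate test, the posterior is only about 17% (exactly 1/6),
# because the prior probability of the condition is small.
```

The example illustrates the classic base-rate effect: a strong likelihood cannot overcome a very small prior on its own.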

Slutsky's theorem

In probability theory, Slutsky's theorem extends some properties of algebraic operations on convergent sequences of real numbers to sequences of random variables. The theorem was named after Eugen Slutsky.

Stein's lemma

Stein's lemma, named in honor of Charles Stein, is a theorem of probability theory that is of interest primarily because of its applications to statistical inference, in particular to James–Stein estimation.

Bernstein–von Mises theorem

In Bayesian inference, the Bernstein–von Mises theorem provides the basis for using Bayesian credible sets for confidence statements in parametric models. It states that under some conditions, a posterior distribution is asymptotically normal, centred at the maximum likelihood estimator.

Bruck–Ryser–Chowla theorem

The Bruck–Ryser–Chowla theorem is a result on the combinatorics of block designs that implies nonexistence of certain kinds of design. It states that if a (v, b, r, k, λ)-design exists with v = b (a symmetric design), then certain number-theoretic conditions on k and λ must be satisfied.

Wold's theorem

In statistics, Wold's decomposition or the Wold representation theorem (not to be confused with the Wold theorem that is the discrete-time analog of the Wiener–Khinchin theorem), named after Herman Wold, says that every covariance-stationary time series can be written as the sum of a deterministic component and a stochastic component expressible as an infinite moving average.

Basu's theorem

In statistics, Basu's theorem states that any boundedly complete minimal sufficient statistic is independent of any ancillary statistic. This is a 1955 result of Debabrata Basu. It is often used in statistics as a tool to prove the independence of two statistics without computing their joint distribution.

Schuette–Nesbitt formula

In mathematics, the Schuette–Nesbitt formula is a generalization of the inclusion–exclusion principle. It is named after Donald R. Schuette and Cecil J. Nesbitt. The probabilistic version of the Schuette–Nesbitt formula has practical applications in actuarial science.

Gauss–Markov theorem

In statistics, the Gauss–Markov theorem (or simply Gauss theorem for some authors) states that the ordinary least squares (OLS) estimator has the lowest sampling variance within the class of linear unbiased estimators, provided the errors in the linear regression model are uncorrelated, have equal variances, and have expectation zero.

Craps principle

In probability theory, the craps principle is a theorem about event probabilities under repeated iid trials. Let A and B denote two mutually exclusive events which might occur on a given trial. Then the probability that A occurs before B is P(A) / (P(A) + P(B)).
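A short exact computation of this formula (the dice example, a point of 4 in craps, is an illustrative choice):

```python
# Craps principle: for mutually exclusive per-trial events A and B,
# P(A occurs before B) = P(A) / (P(A) + P(B)).
# Illustrative case: point 4 in craps (A: roll a 4, B: roll a 7).
from fractions import Fraction

p_a = Fraction(3, 36)   # three ways to roll a 4 with two dice
p_b = Fraction(6, 36)   # six ways to roll a 7
p_a_first = p_a / (p_a + p_b)   # probability the point is made before a 7
```

The exact answer is 1/3; using `Fraction` keeps the arithmetic free of floating-point error.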

Fieller's theorem

In statistics, Fieller's theorem allows the calculation of a confidence interval for the ratio of two means.

Itô's lemma

In mathematics, Itô's lemma or Itô's formula (also called the Itô–Doeblin formula, especially in French literature) is an identity used in Itô calculus to find the differential of a time-dependent function of a stochastic process.

Law of large numbers

In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and tends to become closer to it as more trials are performed.
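A minimal simulation sketch of the law (seed and sample size are arbitrary choices for reproducibility):

```python
# Law of large numbers, illustrated by simulation: the sample mean of
# fair-coin flips approaches the expected value 0.5 as trials accumulate.
import random

random.seed(12345)                  # fixed seed so the run is reproducible
n = 100_000
flips = [random.random() < 0.5 for _ in range(n)]
sample_mean = sum(flips) / n        # close to 0.5 for large n
```

With 100,000 flips the standard deviation of the sample mean is about 0.0016, so a deviation from 0.5 larger than 0.01 would be extraordinarily unlikely.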

Law of the iterated logarithm

In probability theory, the law of the iterated logarithm describes the magnitude of the fluctuations of a random walk. The original statement of the law of the iterated logarithm is due to A. Ya. Khinchin.

Central limit theorem

In probability theory, the central limit theorem (CLT) establishes that, in many situations, when independent random variables are summed up, their properly normalized sum tends toward a normal distribution even if the original variables themselves are not normally distributed.

Raikov's theorem

Raikov's theorem, named for Russian mathematician Dmitrii Abramovich Raikov, is a result in probability theory. It is well known that if each of two independent random variables ξ1 and ξ2 has a Poisson distribution, then their sum ξ1 + ξ2 has a Poisson distribution as well. Raikov's theorem states that the converse is also true: if the sum of two independent random variables is Poisson-distributed, then so is each of the two summands.

Lyapunov's central limit theorem

Lyapunov's central limit theorem, named after the Russian mathematician Aleksandr Lyapunov, is a variant of the central limit theorem in which the random variables must be independent but need not be identically distributed, provided the Lyapunov condition on their moments is satisfied.

Law of total cumulance

In probability theory and mathematical statistics, the law of total cumulance is a generalization to cumulants of the law of total probability, the law of total expectation, and the law of total variance.

Cox's theorem

Cox's theorem, named after the physicist Richard Threlkeld Cox, is a derivation of the laws of probability theory from a certain set of postulates. This derivation justifies the so-called "logical" interpretation of probability, in which degrees of belief obey the ordinary rules of probability.

Varadhan's lemma

In mathematics, Varadhan's lemma is a result from large deviations theory named after S. R. Srinivasa Varadhan. The result gives information on the asymptotic behaviour of a statistic φ(Zε) of a family of random variables Zε as ε becomes small, in terms of a rate function for the variables.

Bapat–Beg theorem

In probability theory, the Bapat–Beg theorem gives the joint probability distribution of order statistics of independent but not necessarily identically distributed random variables in terms of the cumulative distribution functions of the individual variables.

Bochner's theorem

In mathematics, Bochner's theorem (named for Salomon Bochner) characterizes the Fourier transform of a positive finite Borel measure on the real line. More generally in harmonic analysis, Bochner's theorem characterizes positive-definite functions on a locally compact abelian group as Fourier transforms of finite positive measures on the dual group.

Rao–Blackwell theorem

In statistics, the Rao–Blackwell theorem, sometimes referred to as the Rao–Blackwell–Kolmogorov theorem, is a result which characterizes the transformation of an arbitrarily crude estimator into an estimator that is optimal by the mean-squared-error criterion or any of a variety of similar criteria.

Neyman–Pearson lemma

In statistics, the Neyman–Pearson lemma was introduced by Jerzy Neyman and Egon Pearson in a paper in 1933. The lemma is part of the Neyman–Pearson theory of statistical testing; it states that, when testing one simple hypothesis against another, the likelihood-ratio test is the most powerful test of a given size.

Glivenko–Cantelli theorem

In the theory of probability, the Glivenko–Cantelli theorem (sometimes referred to as the Fundamental Theorem of Statistics), named after Valery Ivanovich Glivenko and Francesco Paolo Cantelli, determines the asymptotic behaviour of the empirical distribution function as the number of independent and identically distributed observations grows: it converges uniformly to the true distribution function almost surely.

Le Cam's theorem

In probability theory, Le Cam's theorem, named after Lucien Le Cam (1924–2000), states the following. Suppose:
* X1, ..., Xn are independent random variables, each with a Bernoulli distribution (i.e., equal to 0 or 1), not necessarily identically distributed, with pi = P(Xi = 1) and λn = p1 + ... + pn.
Then the distribution of the sum X1 + ... + Xn is close in total variation to a Poisson distribution with mean λn, with error at most 2(p1² + ... + pn²).

Robbins lemma

In statistics, the Robbins lemma, named after Herbert Robbins, states that if X is a random variable having a Poisson distribution with parameter λ, and f is any function for which the expected value E[f(X)] exists, then E[X f(X)] = λ E[f(X + 1)].
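The identity is easy to check numerically by summing the Poisson series directly (λ and f below are arbitrary illustrative choices; the series is truncated where the tail is negligible):

```python
# Numerical check of the Robbins lemma: E[X f(X)] = lam * E[f(X + 1)]
# for X ~ Poisson(lam), using an explicit truncated series.
import math

lam = 2.5
f = lambda k: 1.0 / (k + 1)        # arbitrary illustrative function

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Truncate at k = 100; for lam = 2.5 the remaining tail is astronomically small.
lhs = sum(k * f(k) * poisson_pmf(k, lam) for k in range(100))
rhs = lam * sum(f(k + 1) * poisson_pmf(k, lam) for k in range(100))
```

The two sums agree to within floating-point and truncation error, as the lemma predicts.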

Frisch–Waugh–Lovell theorem

In econometrics, the Frisch–Waugh–Lovell (FWL) theorem is named after the econometricians Ragnar Frisch, Frederick V. Waugh, and Michael C. Lovell. The theorem states that if a regression involves two separate sets of regressors, the coefficients on one set can be obtained equivalently by first partialling out the other set, that is, by regressing residuals on residuals.

Lindeberg's condition

In probability theory, Lindeberg's condition is a sufficient condition (and under certain conditions also a necessary condition) for the central limit theorem (CLT) to hold for a sequence of independent random variables.

Shannon–Hartley theorem

In information theory, the Shannon–Hartley theorem tells the maximum rate at which information can be transmitted over a communications channel of a specified bandwidth in the presence of noise. It is an application of the noisy-channel coding theorem to the archetypal case of a continuous-time analog channel subject to Gaussian noise, giving the capacity C = B log2(1 + S/N).
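A minimal sketch of the capacity formula (the bandwidth and SNR values are illustrative, loosely modelled on a voice-grade telephone channel):

```python
# Shannon–Hartley capacity: C = B * log2(1 + S/N) bits per second,
# where S/N is the linear (not decibel) signal-to-noise power ratio.
import math

def capacity(bandwidth_hz, snr_db):
    snr_linear = 10 ** (snr_db / 10)       # convert dB to a power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

c = capacity(3000, 30)   # 3 kHz channel at 30 dB SNR: roughly 29.9 kbit/s
```

Note that capacity grows only logarithmically in SNR but linearly in bandwidth, which is why widening the channel is usually the cheaper route to higher rates.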

Cramér's decomposition theorem

Cramér's decomposition theorem for a normal distribution is a result of probability theory. It is well known that, given independent normally distributed random variables ξ1, ξ2, their sum is normally distributed as well. Cramér's theorem states that the converse is also true: if the sum of two independent random variables is normally distributed, then each of the summands must be normally distributed.

Freidlin–Wentzell theorem

In mathematics, the Freidlin–Wentzell theorem (due to Mark Freidlin and Alexander D. Wentzell) is a result in the large deviations theory of stochastic processes. Roughly speaking, the Freidlin–Wentzell theorem estimates the probability that a small-noise diffusion process deviates significantly from the solution of the corresponding deterministic differential equation.

Law of total covariance

In probability theory, the law of total covariance, covariance decomposition formula, or conditional covariance formula states that if X, Y, and Z are random variables on the same probability space, and the covariance of X and Y is finite, then cov(X, Y) = E[cov(X, Y | Z)] + cov(E[X | Z], E[Y | Z]).

Aumann's agreement theorem

Aumann's agreement theorem was stated and proved by Robert Aumann in a paper titled "Agreeing to Disagree", which introduced the set-theoretic description of common knowledge. The theorem concerns agents who share a common prior: if their posterior probabilities for some event are common knowledge, then those posteriors must be equal, so rational agents with common priors cannot "agree to disagree".

Lehmann–Scheffé theorem

In statistics, the Lehmann–Scheffé theorem is a prominent statement, tying together the ideas of completeness, sufficiency, uniqueness, and best unbiased estimation. The theorem states that any estimator that is unbiased for a given unknown quantity and that depends on the data only through a complete, sufficient statistic is the unique best unbiased estimator of that quantity.

Skorokhod's representation theorem

In mathematics and statistics, Skorokhod's representation theorem is a result that shows that a weakly convergent sequence of probability measures whose limit measure is sufficiently well-behaved can be represented as the distributions of a sequence of random variables, defined on a common probability space, that converges almost surely.

Asymptotic equipartition property

In information theory, the asymptotic equipartition property (AEP) is a general property of the output samples of a stochastic source. It is fundamental to the concept of typical set used in theories of data compression.

Hammersley–Clifford theorem

The Hammersley–Clifford theorem is a result in probability theory, mathematical statistics and statistical mechanics that gives necessary and sufficient conditions under which a strictly positive probability distribution can be represented as a Markov random field, i.e., factorizes over the cliques of an undirected graph.

Cochran's theorem

In statistics, Cochran's theorem, devised by William G. Cochran, is a theorem used to justify results relating to the probability distributions of statistics that are used in the analysis of variance.

Pyrrho's lemma

In statistics, Pyrrho's lemma is the result that if one adds just one extra variable as a regressor, chosen from a suitable set, to a linear regression model, one can get any desired outcome in terms of the estimated coefficients.

Continuous mapping theorem

In probability theory, the continuous mapping theorem states that continuous functions preserve limits even if their arguments are sequences of random variables. A continuous function, in Heine's definition, is one that maps convergent sequences into convergent sequences: if xn → x then g(xn) → g(x).

Lévy's continuity theorem

In probability theory, Lévy's continuity theorem, or Lévy's convergence theorem, named after the French mathematician Paul Lévy, connects convergence in distribution of a sequence of random variables with pointwise convergence of their characteristic functions.

Donsker's theorem

In probability theory, Donsker's theorem (also known as Donsker's invariance principle, or the functional central limit theorem), named after Monroe D. Donsker, is a functional extension of the central limit theorem: a suitably rescaled random walk converges in distribution to Brownian motion.

Fisher–Tippett–Gnedenko theorem

In statistics, the Fisher–Tippett–Gnedenko theorem (also the Fisher–Tippett theorem or the extreme value theorem) is a general result in extreme value theory regarding the asymptotic distribution of extreme order statistics: the suitably normalized maximum of an i.i.d. sample can converge in distribution only to one of three families, the Gumbel, Fréchet, or Weibull distributions.

Kosambi–Karhunen–Loève theorem

In the theory of stochastic processes, the Karhunen–Loève theorem (named after Kari Karhunen and Michel Loève), also known as the Kosambi–Karhunen–Loève theorem, is a representation of a stochastic process as an infinite linear combination of orthogonal functions, analogous to a Fourier series representation of a function on a bounded interval.

Doob–Meyer decomposition theorem

The Doob–Meyer decomposition theorem is a theorem in stochastic calculus stating the conditions under which a submartingale may be decomposed in a unique way as the sum of a martingale and an increasing predictable process. It is named for Joseph L. Doob and Paul-André Meyer.

Characterization of probability distributions

In mathematics in general, a characterization theorem says that a particular object, a function, a space, etc., is the only one that possesses the properties specified in the theorem. A characterization of a probability distribution accordingly shows that it is the only distribution satisfying the stated conditions.

Kac–Bernstein theorem

The Kac–Bernstein theorem is one of the first characterization theorems of mathematical statistics. It is easy to see that if the random variables X and Y are independent and normally distributed with the same variance, then their sum X + Y and difference X − Y are also independent. The theorem states the converse: if X + Y and X − Y are independent, then X and Y must be normally distributed.

Hájek–Le Cam convolution theorem

In statistics, the Hájek–Le Cam convolution theorem states that any regular estimator in a parametric model is asymptotically equivalent to a sum of two independent random variables, one of which is normal with asymptotic variance equal to the inverse of the Fisher information, and the other of which has an arbitrary distribution.

Berry–Esseen theorem

In probability theory, the central limit theorem states that, under certain circumstances, the probability distribution of the scaled mean of a random sample converges to a normal distribution as the sample size tends to infinity. The Berry–Esseen theorem quantifies the rate of this convergence by bounding the maximal error of the normal approximation.
