Category: Statistical inequalities

Binomial sum variance inequality
The binomial sum variance inequality states that the variance of the sum of binomially distributed random variables will always be less than or equal to the variance of a binomial variable with the same n and p parameters.
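Concretely, for independent Xi ~ Binomial(ni, pi), a standard statement of the bound (notation assumed here) compares the sum against a single binomial with the pooled parameters:

\[
\operatorname{Var}\Bigl(\sum_i X_i\Bigr) = \sum_i n_i p_i (1 - p_i) \;\le\; n\,\bar{p}\,(1-\bar{p}),
\qquad n = \sum_i n_i, \quad \bar{p} = \frac{1}{n}\sum_i n_i p_i.
\]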
Samuelson's inequality
In statistics, Samuelson's inequality, named after the economist Paul Samuelson, also called the Laguerre–Samuelson inequality, after the mathematician Edmond Laguerre, states that every one of any collection x1, ..., xn lies within √(n − 1) uncorrected sample standard deviations of their sample mean.
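In symbols (a standard form; here s denotes the uncorrected, i.e. biased, sample standard deviation):

\[
\bar{x} - s\sqrt{n-1} \;\le\; x_j \;\le\; \bar{x} + s\sqrt{n-1},
\qquad s^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2.
\]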
Kullback's inequality
In information theory and statistics, Kullback's inequality is a lower bound on the Kullback–Leibler divergence expressed in terms of the large deviations rate function. If P and Q are probability distributions on the real line, such that P is absolutely continuous with respect to Q and whose first moments exist, then the divergence is bounded below by the rate function of Q evaluated at the mean of P, as made precise below.
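One common statement, with Ψ*_Q the convex conjugate of the cumulant-generating function of Q (its rate function) and μ'_1(P) the first moment of P:

\[
D_{KL}(P \parallel Q) \;\ge\; \Psi_Q^*\bigl(\mu_1'(P)\bigr).
\]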
Bhatia–Davis inequality
In mathematics, the Bhatia–Davis inequality, named after Rajendra Bhatia and Chandler Davis, is an upper bound on the variance σ² of any bounded probability distribution on the real line.
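For a distribution supported on [m, M] with mean μ, the bound reads:

\[
\sigma^2 \;\le\; (M - \mu)(\mu - m).
\]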
Eaton's inequality
In probability theory, Eaton's inequality is a bound on the largest values of a linear combination of bounded random variables. This inequality was described in 1974 by Morris L. Eaton.
Cheeger bound
In mathematics, the Cheeger bound is a bound of the second largest eigenvalue of the transition matrix of a finite-state, discrete-time, reversible stationary Markov chain. It can be seen as a special case of Cheeger inequalities in expander graphs.
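A standard two-sided formulation (a sketch; here λ2 is the second largest eigenvalue and Φ is the chain's conductance, its Cheeger constant):

\[
1 - 2\Phi \;\le\; \lambda_2 \;\le\; 1 - \frac{\Phi^2}{2}.
\]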
Cramér–Rao bound
In estimation theory and statistics, the Cramér–Rao bound (CRB) expresses a lower bound on the variance of unbiased estimators of a deterministic (fixed, though unknown) parameter, the variance of any such estimator being at least as high as the inverse of the Fisher information.
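In the simplest scalar, unbiased case, with I(θ) the Fisher information of the sample:

\[
\operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{I(\theta)},
\qquad I(\theta) = \operatorname{E}\!\left[\left(\frac{\partial}{\partial\theta}\log f(X;\theta)\right)^{2}\right].
\]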
Popoviciu's inequality on variances
In probability theory, Popoviciu's inequality, named after Tiberiu Popoviciu, is an upper bound on the variance σ² of any bounded probability distribution. Let M and m be upper and lower bounds on the values of any random variable with a particular probability distribution.
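The bound itself:

\[
\sigma^2 \;\le\; \frac{1}{4}(M - m)^2.
\]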
Vysochanskij–Petunin inequality
In probability theory, the Vysochanskij–Petunin inequality gives a lower bound for the probability that a random variable with finite variance lies within a certain number of standard deviations of its mean, or equivalently an upper bound for the probability that it lies further away. The sole restrictions on the distribution are that it be unimodal and have finite variance.
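For a unimodal random variable X with mean μ and standard deviation σ, the usual statement is:

\[
P\bigl(|X - \mu| \ge \lambda\sigma\bigr) \;\le\; \frac{4}{9\lambda^2}
\qquad \text{for } \lambda > \sqrt{8/3}.
\]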
Fréchet inequalities
In probabilistic logic, the Fréchet inequalities, also known as the Boole–Fréchet inequalities, are rules implicit in the work of George Boole and explicitly derived by Maurice Fréchet that govern the combination of probabilities of logical propositions or events linked together in conjunctions (AND operations) or disjunctions (OR operations), when nothing is assumed about their dependence.
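For two events A and B, the conjunction and disjunction bounds read:

\[
\max\bigl(0,\; P(A) + P(B) - 1\bigr) \;\le\; P(A \wedge B) \;\le\; \min\bigl(P(A),\, P(B)\bigr),
\]
\[
\max\bigl(P(A),\, P(B)\bigr) \;\le\; P(A \vee B) \;\le\; \min\bigl(1,\; P(A) + P(B)\bigr).
\]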
Dvoretzky–Kiefer–Wolfowitz inequality
In the theory of probability and statistics, the Dvoretzky–Kiefer–Wolfowitz–Massart inequality (DKW inequality) bounds how close an empirically determined distribution function will be to the distribution function from which the empirical samples are drawn.
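For the empirical distribution function Fn of n i.i.d. samples drawn from F, the Massart form with the sharp constant reads:

\[
P\Bigl(\sup_{x}\,\bigl|F_n(x) - F(x)\bigr| > \varepsilon\Bigr) \;\le\; 2 e^{-2n\varepsilon^2}.
\]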
Multidimensional Chebyshev's inequality
In probability theory, the multidimensional Chebyshev's inequality is a generalization of Chebyshev's inequality, which puts a bound on the probability of the event that a random variable differs from its expected value by more than a specified amount.
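For a random vector X in R^N with mean vector μ and positive-definite covariance matrix V, the standard form is:

\[
P\Bigl((X - \mu)^{\mathsf{T}} V^{-1} (X - \mu) > t^2\Bigr) \;\le\; \frac{N}{t^2}.
\]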
Entropy power inequality
In information theory, the entropy power inequality (EPI) is a result that relates to so-called "entropy power" of random variables. It shows that the entropy power of suitably well-behaved random variables is a superadditive function.
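For independent random vectors X and Y on R^n admitting densities, with h the differential entropy and N(·) the entropy power:

\[
N(X + Y) \;\ge\; N(X) + N(Y),
\qquad N(X) = \frac{1}{2\pi e}\, e^{\frac{2}{n} h(X)}.
\]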
Etemadi's inequality
In probability theory, Etemadi's inequality is a so-called "maximal inequality", an inequality that gives a bound on the probability that the partial sums of a finite collection of independent random variables exceed some specified bound.
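Writing Sk for the k-th partial sum, the inequality states, for every α ≥ 0:

\[
P\Bigl(\max_{1 \le k \le n} |S_k| \ge 3\alpha\Bigr) \;\le\; 3 \max_{1 \le k \le n} P\bigl(|S_k| \ge \alpha\bigr).
\]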
Le Cam's theorem
In probability theory, Le Cam's theorem, named after Lucien Le Cam (1924–2000), states the following. Suppose X1, ..., Xn are independent random variables, each with a Bernoulli distribution (i.e., equal to 1 with probability pi and to 0 otherwise). Let Sn = X1 + ... + Xn and λn = p1 + ... + pn. Then the distribution of Sn is close to the Poisson distribution with parameter λn, in the sense made precise below.
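In symbols, the error of the Poisson approximation to Sn is bounded as:

\[
\sum_{k=0}^{\infty} \left| P(S_n = k) - \frac{\lambda_n^{k} e^{-\lambda_n}}{k!} \right| \;<\; 2 \sum_{i=1}^{n} p_i^2.
\]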
Doob's martingale inequality
In mathematics, Doob's martingale inequality, also known as Kolmogorov's submartingale inequality, is a result in the study of stochastic processes. It gives a bound on the probability that a submartingale exceeds any given value over a given interval of time.
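In discrete time, for a nonnegative submartingale X0, X1, ... and any C > 0, a standard form reads:

\[
P\Bigl(\max_{0 \le k \le n} X_k \ge C\Bigr) \;\le\; \frac{\operatorname{E}[X_n]}{C}.
\]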
Marcinkiewicz–Zygmund inequality
In mathematics, the Marcinkiewicz–Zygmund inequality, named after Józef Marcinkiewicz and Antoni Zygmund, gives relations between moments of a collection of independent random variables. It is a generalization of the rule for the variance of the sum of independent random variables to moments of arbitrary order.
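For independent mean-zero Xi with finite p-th moments (p ≥ 1), there are constants Ap and Bp depending only on p such that:

\[
A_p\, \operatorname{E}\!\left[\Bigl(\sum_{i=1}^{n} |X_i|^2\Bigr)^{p/2}\right]
\;\le\; \operatorname{E}\!\left[\Bigl|\sum_{i=1}^{n} X_i\Bigr|^{p}\right]
\;\le\; B_p\, \operatorname{E}\!\left[\Bigl(\sum_{i=1}^{n} |X_i|^2\Bigr)^{p/2}\right].
\]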
Jensen's inequality
In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function. It was proved by Jensen in 1906.
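In its probabilistic form, for a convex function φ and an integrable random variable X:

\[
\varphi\bigl(\operatorname{E}[X]\bigr) \;\le\; \operatorname{E}\bigl[\varphi(X)\bigr].
\]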
Doob martingale
In the mathematical theory of probability, a Doob martingale (named after Joseph L. Doob, also known as a Lévy martingale) is a stochastic process that approximates a given random variable and has the martingale property with respect to the given filtration.
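Concretely, for an integrable random variable X and a filtration F0 ⊆ F1 ⊆ ⋯, the Doob martingale is:

\[
Z_n = \operatorname{E}\bigl[X \mid \mathcal{F}_n\bigr],
\]

which is a martingale by the tower property of conditional expectation.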
McDiarmid's inequality
In probability theory and theoretical computer science, McDiarmid's inequality is a concentration inequality which bounds the deviation between the sampled value and the expected value of certain functions of independent random variables, provided the function satisfies a bounded differences property.
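If f has the bounded differences property with constants c1, ..., cn (changing the i-th argument moves f by at most ci), then for independent X1, ..., Xn and any t > 0:

\[
P\bigl(f(X_1,\dots,X_n) - \operatorname{E}[f(X_1,\dots,X_n)] \ge t\bigr)
\;\le\; \exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n} c_i^2}\right).
\]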
Chebyshev's inequality
In probability theory, Chebyshev's inequality (also called the Bienaymé–Chebyshev inequality) guarantees that, for a wide class of probability distributions, no more than a certain fraction of values can be more than a certain distance from the mean. Specifically, no more than 1/k² of the distribution's values can be k or more standard deviations away from the mean.
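For any k > 0, with μ the mean and σ the standard deviation:

\[
P\bigl(|X - \mu| \ge k\sigma\bigr) \;\le\; \frac{1}{k^2}.
\]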
Chapman–Robbins bound
In statistics, the Chapman–Robbins bound or Hammersley–Chapman–Robbins bound is a lower bound on the variance of estimators of a deterministic parameter. It is a generalization of the Cramér–Rao bound; compared to the Cramér–Rao bound it is both tighter and applicable to a wider range of problems, though it is usually more difficult to compute.
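For an unbiased estimator T of θ, one common form (a sketch; f denotes the density and Δ ranges over shifts for which the densities share support):

\[
\operatorname{Var}_\theta(T) \;\ge\; \sup_{\Delta \ne 0}
\frac{\Delta^2}{\operatorname{E}_\theta\!\left[\left(\dfrac{f(X;\theta+\Delta)}{f(X;\theta)} - 1\right)^{2}\right]}.
\]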
Boole's inequality
In probability theory, Boole's inequality, also known as the union bound, says that for any finite or countable set of events, the probability that at least one of the events happens is no greater than the sum of the probabilities of the individual events.
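For any finite or countable collection of events A1, A2, ...:

\[
P\Bigl(\bigcup_{i} A_i\Bigr) \;\le\; \sum_{i} P(A_i).
\]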
Fisher's inequality
Fisher's inequality is a necessary condition for the existence of a balanced incomplete block design, that is, a system of subsets that satisfy certain prescribed conditions in combinatorial mathematics.
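For a balanced incomplete block design on v points with b blocks, the inequality states simply:

\[
b \;\ge\; v.
\]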