Probabilistic inequalities

Vitale's random Brunn–Minkowski inequality

In mathematics, Vitale's random Brunn–Minkowski inequality is a theorem, due to Richard A. Vitale, that generalizes the classical Brunn–Minkowski inequality for compact subsets of n-dimensional Euclidean space Rn to random compact sets.

Cauchy–Schwarz inequality

The Cauchy–Schwarz inequality (also called the Cauchy–Bunyakovsky–Schwarz inequality) is considered one of the most important and widely used inequalities in mathematics. The inequality for sums was published by Augustin-Louis Cauchy in 1821.
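
As a quick sanity check, the sum form (Σ aᵢbᵢ)² ≤ (Σ aᵢ²)(Σ bᵢ²) can be verified on any real vectors; a minimal Python sketch (the data below is illustrative only):

```python
# Cauchy–Schwarz for sums: (sum a_i*b_i)^2 <= (sum a_i^2) * (sum b_i^2).
a = [1.0, -2.0, 3.0, 0.5]
b = [4.0, 1.0, -1.0, 2.0]

lhs = sum(x * y for x, y in zip(a, b)) ** 2
rhs = sum(x * x for x in a) * sum(y * y for y in b)

assert lhs <= rhs  # holds for any real a, b of equal length
```

Equality holds exactly when one vector is a scalar multiple of the other.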

Bernstein inequalities (probability theory)

In probability theory, Bernstein inequalities give bounds on the probability that the sum of random variables deviates from its mean. In the simplest case, let X1, ..., Xn be independent Bernoulli random variables taking values +1 and −1 with probability 1/2.

Gaussian correlation inequality

The Gaussian correlation inequality (GCI), formerly known as the Gaussian correlation conjecture (GCC), is a mathematical theorem in the fields of mathematical statistics and convex geometry. A special case was conjectured in the 1950s; the general case was proved in 2014 by Thomas Royen.

Khintchine inequality

In mathematics, the Khintchine inequality, named after Aleksandr Khinchin and spelled in multiple ways in the Latin alphabet, is a theorem from probability, and is also frequently used in analysis.

Lorden's inequality

In probability theory, Lorden's inequality is a bound for the moments of overshoot for a stopped sum of random variables, first published by Gary Lorden in 1970. Overshoots play a central role in renewal theory.

Eaton's inequality

In probability theory, Eaton's inequality is a bound on the largest values of a linear combination of bounded random variables. This inequality was described in 1974 by Morris L. Eaton.

Cheeger bound

In mathematics, the Cheeger bound is a bound of the second largest eigenvalue of the transition matrix of a finite-state, discrete-time, reversible stationary Markov chain. It can be seen as a discrete analogue of Cheeger's inequality from Riemannian geometry.

Paley–Zygmund inequality

In mathematics, the Paley–Zygmund inequality bounds the probability that a positive random variable is small, in terms of its first two moments. The inequality was proved by Raymond Paley and Antoni Zygmund.
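
The bound P(Z > θ·E[Z]) ≥ (1 − θ)² · E[Z]² / E[Z²], for Z ≥ 0 and 0 ≤ θ ≤ 1, can be checked exactly on a small discrete distribution; a Python sketch (the distribution below is hand-picked for illustration):

```python
# Paley–Zygmund: P(Z > theta*E[Z]) >= (1 - theta)^2 * E[Z]^2 / E[Z^2]
# for a non-negative random variable Z and 0 <= theta <= 1.
values = [0.0, 1.0, 4.0]
probs  = [0.5, 0.3, 0.2]

ez  = sum(v * p for v, p in zip(values, probs))        # E[Z]   = 1.1
ez2 = sum(v * v * p for v, p in zip(values, probs))    # E[Z^2] = 3.5

def upper_tail(t):
    """P(Z > t) for the discrete distribution above."""
    return sum(p for v, p in zip(values, probs) if v > t)

for theta in [0.0, 0.25, 0.5, 0.9]:
    lower = (1 - theta) ** 2 * ez ** 2 / ez2
    assert upper_tail(theta * ez) >= lower
```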

Kolmogorov's inequality

In probability theory, Kolmogorov's inequality is a so-called "maximal inequality" that gives a bound on the probability that the partial sums of a finite collection of independent random variables exceed some specified bound.

Sedrakyan's inequality

The following inequality is known as Sedrakyan's inequality, Bergström's inequality, Engel's form or Titu's lemma, respectively, referring to the article About the applications of one useful inequality.

Gaussian isoperimetric inequality

In mathematics, the Gaussian isoperimetric inequality, proved by Boris Tsirelson and Vladimir Sudakov, and later independently by Christer Borell, states that among all sets of given Gaussian measure in the n-dimensional Euclidean space, half-spaces have the minimal Gaussian boundary measure.

Pinsker's inequality

In information theory, Pinsker's inequality, named after its inventor Mark Semenovich Pinsker, is an inequality that bounds the total variation distance (or statistical distance) in terms of the Kullback–Leibler divergence.
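
The bound TV(P, Q) ≤ √(KL(P ∥ Q) / 2) can be checked exactly on small discrete distributions; a Python sketch (the two distributions below are illustrative only):

```python
import math

# Pinsker's inequality: TV(P, Q) <= sqrt(KL(P || Q) / 2),
# checked on two hand-picked distributions over a two-point space.
P = [0.5, 0.5]
Q = [0.9, 0.1]

tv = 0.5 * sum(abs(p - q) for p, q in zip(P, Q))       # total variation distance
kl = sum(p * math.log(p / q) for p, q in zip(P, Q))    # KL divergence, in nats

assert tv <= math.sqrt(kl / 2)
```

Note the divergence must be computed in nats (natural logarithm) for this form of the constant.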

Vysochanskij–Petunin inequality

In probability theory, the Vysochanskij–Petunin inequality gives a lower bound for the probability that a random variable with finite variance lies within a certain number of standard deviations of its mean; it applies to unimodal distributions.

Azuma's inequality

In probability theory, the Azuma–Hoeffding inequality (named after Kazuoki Azuma and Wassily Hoeffding) gives a concentration result for the values of martingales that have bounded differences.

Fréchet inequalities

In probabilistic logic, the Fréchet inequalities, also known as the Boole–Fréchet inequalities, are rules implicit in the work of George Boole and explicitly derived by Maurice Fréchet that govern the combination of probabilities of logical propositions or events.

Hoeffding's lemma

In probability theory, Hoeffding's lemma is an inequality that bounds the moment-generating function of any bounded random variable. It is named after the Finnish–American mathematical statistician Wassily Hoeffding.
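
For X bounded in [a, b], the lemma states E[exp(λ(X − E[X]))] ≤ exp(λ²(b − a)²/8) for every real λ; a Python sketch checking this exactly for a Bernoulli variable (the parameter 0.3 is illustrative):

```python
import math

# Hoeffding's lemma for X ~ Bernoulli(0.3), so X is bounded in [a, b] = [0, 1]:
#   E[exp(lam * (X - E[X]))] <= exp(lam^2 * (b - a)^2 / 8).
p = 0.3
mean = p

for lam in [-2.0, -1.0, 0.5, 1.0, 2.0]:
    # exact moment-generating function of the centered variable
    mgf = (1 - p) * math.exp(lam * (0 - mean)) + p * math.exp(lam * (1 - mean))
    bound = math.exp(lam ** 2 / 8)     # (b - a)^2 = 1 here
    assert mgf <= bound
```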

Ross's conjecture

In queueing theory, a discipline within the mathematical theory of probability, Ross's conjecture gives a lower bound for the average waiting-time experienced by a customer when arrivals to the queue do not follow a Poisson process.

Hsu–Robbins–Erdős theorem

In the mathematical theory of probability, the Hsu–Robbins–Erdős theorem states that if X1, X2, ... is a sequence of i.i.d. random variables with zero mean and finite variance, then the series ∑n P(|X1 + ... + Xn| > εn) converges for every ε > 0. The result was proved by Pao-Lu Hsu and Herbert Robbins in 1947; Paul Erdős later proved the converse.

Stochastic Gronwall inequality

The stochastic Gronwall inequality is a generalization of Gronwall's inequality; it has been used for proving the well-posedness of path-dependent stochastic differential equations under local monotonicity conditions.

Gauss's inequality

In probability theory, Gauss's inequality (or the Gauss inequality) gives an upper bound on the probability that a unimodal random variable lies more than any given distance from its mode.

Bretagnolle–Huber inequality

In information theory, the Bretagnolle–Huber inequality bounds the total variation distance between two probability distributions P and Q by a concave and bounded function of the Kullback–Leibler divergence D(P ∥ Q).

Lenglart's inequality

In the mathematical theory of probability, Lenglart's inequality was proved by Érik Lenglart in 1977. Later slight modifications are also called Lenglart's inequality.

Chernoff bound

In probability theory, the Chernoff bound gives exponentially decreasing bounds on tail distributions of sums of independent random variables. Despite being named after Herman Chernoff, the author of the paper in which it first appeared, the result is due to Herman Rubin.
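
One standard multiplicative form, for S a sum of n independent Bernoulli(p) variables with μ = np and 0 < δ ≤ 1, is P(S ≥ (1 + δ)μ) ≤ exp(−δ²μ/3); a Python sketch comparing it against the exact binomial tail (the parameters below are illustrative):

```python
import math

# Multiplicative Chernoff bound: P(S >= (1 + delta) * mu) <= exp(-delta^2 * mu / 3)
# for S ~ Binomial(n, p), mu = n * p, 0 < delta <= 1.
n, p = 20, 0.5
mu = n * p

def binom_upper_tail(k):
    """P(S >= k), computed exactly by enumeration."""
    return sum(math.comb(n, j) * p ** j * (1 - p) ** (n - j)
               for j in range(k, n + 1))

delta = 0.4
exact = binom_upper_tail(math.ceil((1 + delta) * mu))   # P(S >= 14)
bound = math.exp(-delta ** 2 * mu / 3)

assert exact <= bound
```

The exponential bound is loose here (roughly 0.59 versus an exact tail near 0.06), but unlike the exact tail it is available in closed form for any n.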

Cantelli's inequality

In probability theory, Cantelli's inequality (also called the Chebyshev–Cantelli inequality and the one-sided Chebyshev inequality) is an improved version of Chebyshev's inequality for one-sided tail bounds.

Multidimensional Chebyshev's inequality

In probability theory, the multidimensional Chebyshev's inequality is a generalization of Chebyshev's inequality, which puts a bound on the probability of the event that a random variable differs from its expected value by more than a specified amount.

Doob's martingale inequality

In mathematics, Doob's martingale inequality, also known as Kolmogorov's submartingale inequality, is a result in the study of stochastic processes. It gives a bound on the probability that a submartingale exceeds any given value over a given interval of time.

Entropy power inequality

In information theory, the entropy power inequality (EPI) is a result that concerns the so-called "entropy power" of random variables. It shows that the entropy power of suitably well-behaved random variables is a superadditive function.

Etemadi's inequality

In probability theory, Etemadi's inequality is a so-called "maximal inequality", an inequality that gives a bound on the probability that the partial sums of a finite collection of independent random variables exceed some specified bound.

Jensen's inequality

In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function. It was proved by Jensen in 1906.
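
In the finite discrete case the inequality reads φ(E[X]) ≤ E[φ(X)] for convex φ; a Python sketch with φ(x) = x² (the distribution below is illustrative only):

```python
# Jensen's inequality: phi(E[X]) <= E[phi(X)] for convex phi,
# checked on a small discrete distribution with phi(x) = x**2.
values = [-1.0, 0.0, 2.0]
probs  = [0.3, 0.4, 0.3]

phi = lambda x: x ** 2
mean  = sum(v * p for v, p in zip(values, probs))          # E[X]
e_phi = sum(phi(v) * p for v, p in zip(values, probs))     # E[phi(X)]

assert phi(mean) <= e_phi
```

The gap E[X²] − (E[X])² here is exactly the variance, a familiar special case of the inequality.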

Le Cam's theorem

In probability theory, Le Cam's theorem, named after Lucien Le Cam (1924–2000), states the following. Suppose:
* X1, X2, X3, ... are independent random variables, each with a Bernoulli distribution (i.e., equal to either 0 or 1), not necessarily identically distributed.

Marcinkiewicz–Zygmund inequality

In mathematics, the Marcinkiewicz–Zygmund inequality, named after Józef Marcinkiewicz and Antoni Zygmund, gives relations between moments of a collection of independent random variables. It is a generalization of the rule for the variance of a sum of independent random variables to moments of arbitrary order.

Second moment method

In mathematics, the second moment method is a technique used in probability theory and analysis to show that a random variable has positive probability of being positive. More generally, the "moment method" consists of bounding the probability that a random variable fluctuates far from its mean, by using its moments.

Doob martingale

In the mathematical theory of probability, a Doob martingale (named after Joseph L. Doob, also known as a Lévy martingale) is a stochastic process that approximates a given random variable and has the martingale property with respect to the given filtration.

Hölder's inequality

In mathematical analysis, Hölder's inequality, named after Otto Hölder, is a fundamental inequality between integrals and an indispensable tool for the study of Lp spaces.

McDiarmid's inequality

In probability theory and theoretical computer science, McDiarmid's inequality is a concentration inequality which bounds the deviation between the sampled value and the expected value of certain functions when they are evaluated on independent random variables.

BRS-inequality

BRS-inequality is the short name for the Bruss–Robertson–Steele inequality. This inequality gives a convenient upper bound for the expected maximum number of non-negative random variables one can sum without exceeding a given bound.

Sanov's theorem

In mathematics and information theory, Sanov's theorem gives a bound on the probability of observing an atypical sequence of samples from a given probability distribution. In the language of large deviations theory, it identifies the rate function for large deviations of the empirical measure of a sequence of i.i.d. random variables.

Janson inequality

In the mathematical theory of probability, Janson's inequality is a collection of related inequalities giving an exponential bound on the probability of many related events happening simultaneously.

Markov's inequality

In probability theory, Markov's inequality gives an upper bound for the probability that a non-negative function of a random variable is greater than or equal to some positive constant. It is named after the Russian mathematician Andrey Markov.
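
In its basic form the inequality reads P(X ≥ a) ≤ E[X]/a for non-negative X and a > 0; a Python sketch checking it exactly with rational arithmetic (the distribution below is hand-picked for illustration):

```python
from fractions import Fraction

# Markov's inequality: P(X >= a) <= E[X] / a for X >= 0 and a > 0,
# checked exactly on a small non-negative discrete distribution.
values = [0, 1, 2, 5]
probs  = [Fraction(4, 10), Fraction(3, 10), Fraction(2, 10), Fraction(1, 10)]

mean = sum(v * p for v, p in zip(values, probs))    # E[X] = 6/5

def tail(a):
    """P(X >= a) for the discrete distribution above."""
    return sum(p for v, p in zip(values, probs) if v >= a)

for a in [1, 2, 5]:
    assert tail(a) <= mean / a
```

Using `Fraction` keeps every probability exact, so the check involves no floating-point tolerance.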

Hoeffding's inequality

In probability theory, Hoeffding's inequality provides an upper bound on the probability that the sum of bounded independent random variables deviates from its expected value by more than a certain amount.
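
For n independent variables each bounded in [0, 1], one one-sided form is P(S − E[S] ≥ t) ≤ exp(−2t²/n); a Python sketch comparing the bound against the exact tail for fair coin flips (the parameters are illustrative):

```python
import math

# Hoeffding's inequality for n independent variables bounded in [0, 1]:
#   P(S - E[S] >= t) <= exp(-2 * t**2 / n).
# For n fair coin flips the tail can be computed exactly by enumeration.
n = 10
t = 3.0                                  # deviation above the mean E[S] = n/2

k_min = math.ceil(n / 2 + t)             # smallest count at least E[S] + t
exact = sum(math.comb(n, k) for k in range(k_min, n + 1)) / 2 ** n
bound = math.exp(-2 * t ** 2 / n)

assert exact <= bound
```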

Chebyshev's inequality

In probability theory, Chebyshev's inequality (also called the Bienaymé–Chebyshev inequality) guarantees that, for a wide class of probability distributions, no more than a certain fraction of values can be more than a certain distance from the mean.
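
Quantitatively, P(|X − μ| ≥ kσ) ≤ 1/k² for any k > 0; a Python sketch checking this exactly on a small symmetric distribution (the values below are illustrative only):

```python
import math

# Chebyshev's inequality: P(|X - mu| >= k * sigma) <= 1 / k**2,
# checked on a small symmetric discrete distribution.
values = [-2.0, -1.0, 0.0, 1.0, 2.0]
probs  = [0.2] * 5

mu    = sum(v * p for v, p in zip(values, probs))                 # 0.0
var   = sum((v - mu) ** 2 * p for v, p in zip(values, probs))     # 2.0
sigma = math.sqrt(var)

def two_sided_tail(c):
    """P(|X - mu| >= c) for the discrete distribution above."""
    return sum(p for v, p in zip(values, probs) if abs(v - mu) >= c)

for k in [1.0, 1.2, 1.5, 2.0]:
    assert two_sided_tail(k * sigma) <= 1.0 / k ** 2
```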

Chung–Erdős inequality

In probability theory, the Chung–Erdős inequality provides a lower bound on the probability that one out of many (possibly dependent) events occurs. The lower bound is expressed in terms of the probabilities of the individual events and of their pairwise intersections.

Borell–TIS inequality

In mathematics and probability, the Borell–TIS inequality is a result bounding the probability of a deviation of the uniform norm of a centered Gaussian stochastic process above its expected value. The result is named for Christer Borell and for Boris Tsirelson, Ildar Ibragimov and Vladimir Sudakov, who proved it independently.

Talagrand's concentration inequality

In probability theory, Talagrand's concentration inequality is an isoperimetric-type inequality for product probability spaces. It was first proved by the French mathematician Michel Talagrand.

Concentration inequality

In probability theory, concentration inequalities provide bounds on how a random variable deviates from some value (typically, its expected value). The law of large numbers of classical probability theory states that sums of independent random variables are, under mild conditions, close to their expectation with high probability.

Bennett's inequality

In probability theory, Bennett's inequality provides an upper bound on the probability that the sum of independent random variables deviates from its expected value by more than any specified amount.

Boole's inequality

In probability theory, Boole's inequality, also known as the union bound, says that for any finite or countable set of events, the probability that at least one of the events happens is no greater than the sum of the probabilities of the individual events.
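
The bound P(A1 ∪ A2 ∪ ...) ≤ Σ P(Ai) can be checked exactly on a finite sample space; a Python sketch with exact rational arithmetic (the events below are hand-picked for illustration):

```python
from fractions import Fraction

# Boole's inequality (union bound) on a uniform 10-point sample space:
#   P(A1 u A2 u A3) <= P(A1) + P(A2) + P(A3).
omega  = set(range(10))
events = [{0, 1, 2}, {2, 3}, {3, 4, 5, 6}]

def prob(event):
    """Exact probability of an event under the uniform measure on omega."""
    return Fraction(len(event), len(omega))

union = set().union(*events)
assert prob(union) <= sum(prob(e) for e in events)
```

The slack between the two sides (here 7/10 versus 9/10) comes from the overlapping elements, which the sum counts more than once.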

Correlation inequality

A correlation inequality is any of a number of inequalities satisfied by the correlation functions of a model. Such inequalities are of particular use in statistical mechanics and in percolation theory.

Grönwall's inequality

In mathematics, Grönwall's inequality (also called Grönwall's lemma or the Grönwall–Bellman inequality) allows one to bound a function that is known to satisfy a certain differential or integral inequality by the solution of the corresponding differential or integral equation.

Kunita–Watanabe inequality

In stochastic calculus, the Kunita–Watanabe inequality is a generalization of the Cauchy–Schwarz inequality to integrals of stochastic processes. It was first obtained by Hiroshi Kunita and Shinzo Watanabe in 1967.

Berry–Esseen theorem

In probability theory, the central limit theorem states that, under certain circumstances, the probability distribution of the scaled mean of a random sample converges to a normal distribution as the sample size grows. The Berry–Esseen theorem specifies the rate at which this convergence takes place.
