D'Agostino's K-squared test

In statistics, D'Agostino's K² test, named for Ralph D'Agostino, is a goodness-of-fit measure of departure from normality; that is, the test aims to gauge the compatibility of given data with the null hypothesis that the data come from a normally distributed population.
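As a minimal sketch, SciPy exposes this test as `scipy.stats.normaltest`, which combines the skewness and kurtosis z-scores into the K² statistic (the sample here is simulated for illustration):

```python
import numpy as np
from scipy import stats

# Simulated sample; in practice this would be the data under test.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=500)

# normaltest combines z-scores for skewness and kurtosis into K^2,
# approximately chi-squared with 2 df under the null of normality.
stat, p = stats.normaltest(x)
```

A small p-value (e.g. below 0.05) would lead to rejecting the null hypothesis of normality; the kurtosis component needs roughly 20 or more observations to be valid.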

Jarque–Bera test

In statistics, the Jarque–Bera test is a goodness-of-fit test of whether sample data have the skewness and kurtosis matching a normal distribution. The test is named after Carlos Jarque and Anil K. Bera.
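The statistic has the closed form JB = n/6 · (S² + K²/4), where S is the sample skewness and K the excess kurtosis. A sketch checking SciPy's `scipy.stats.jarque_bera` against that formula on simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=1000)  # simulated sample for illustration

jb_stat, jb_p = stats.jarque_bera(x)

# Recompute from the definition, using the biased moment estimators
# that the statistic is defined with.
n = len(x)
S = stats.skew(x)      # sample skewness
K = stats.kurtosis(x)  # excess kurtosis (Fisher definition)
jb_manual = n / 6.0 * (S**2 + K**2 / 4.0)
```

Under normality, JB is asymptotically chi-squared with 2 degrees of freedom.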

Anderson–Darling test

The Anderson–Darling test is a statistical test of whether a given sample of data is drawn from a given probability distribution. In its basic form, the test assumes that there are no parameters to be estimated in the distribution being tested.
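When the normal's parameters are instead estimated from the sample, as in SciPy's `scipy.stats.anderson`, the test compares A² against adjusted critical values at fixed significance levels rather than returning a p-value. A sketch:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(size=200)  # simulated sample for illustration

# anderson() fits loc/scale from the data and returns the A^2
# statistic together with critical values at fixed levels.
result = stats.anderson(x, dist='norm')
crit = dict(zip(result.significance_level, result.critical_values))
reject_at_5pct = result.statistic > crit[5.0]
```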

Kolmogorov–Smirnov test

In statistics, the Kolmogorov–Smirnov test (K–S test or KS test) is a nonparametric test of the equality of continuous (or, with suitable modification, discontinuous), one-dimensional probability distributions. It can be used to compare a sample with a reference probability distribution (one-sample K–S test) or to compare two samples (two-sample K–S test).
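Both forms are available in SciPy; a sketch with simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.normal(size=300)  # simulated samples for illustration
y = rng.normal(size=300)

# One-sample K-S: compare x against a fully specified N(0, 1) CDF.
stat, p = stats.kstest(x, 'norm')

# Two-sample K-S: compare the empirical CDFs of x and y directly.
stat2, p2 = stats.ks_2samp(x, y)
```

Note that the one-sample p-value is only valid when the reference distribution is fully specified in advance; if its parameters are estimated from the same data, a corrected test such as the Lilliefors test is needed.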

Normal probability plot

The normal probability plot is a graphical technique for identifying substantive departures from normality, including outliers, skewness, kurtosis, a need for transformations, and mixtures of distributions.
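SciPy's `scipy.stats.probplot` computes the plot coordinates (and, given a plotting backend, draws them); a sketch that uses the returned least-squares fit as a rough linearity check:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.normal(size=100)  # simulated sample for illustration

# osm: theoretical normal quantiles; osr: ordered sample values.
# Passing plot=matplotlib.pyplot would also draw the figure.
(osm, osr), (slope, intercept, r) = stats.probplot(x, dist='norm')
# points near a straight line (r close to 1) suggest normality
```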

Lilliefors test

In statistics, the Lilliefors test is a normality test based on the Kolmogorov–Smirnov test. It is used to test the null hypothesis that data come from a normally distributed population when the null hypothesis does not specify which normal distribution, i.e., when the mean and variance are not specified in advance but estimated from the data.
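A minimal sketch of the idea (not the published table-based procedure): standardize with the estimated mean and standard deviation, take the K–S distance, and calibrate it by Monte Carlo, since the standard K–S distribution no longer applies once parameters are estimated. `statsmodels.stats.diagnostic.lilliefors` offers a ready-made version.

```python
import numpy as np
from scipy import stats

def lilliefors_stat(sample):
    # K-S distance after standardizing with the *sample* mean and sd
    z = (sample - sample.mean()) / sample.std(ddof=1)
    return stats.kstest(z, 'norm').statistic

rng = np.random.default_rng(5)
x = rng.normal(loc=10.0, scale=2.0, size=150)  # simulated data
d = lilliefors_stat(x)

# Monte Carlo calibration: the null distribution of d under normality
# does not depend on the true mean/sd, so simulate standard normals.
n, sims = len(x), 500
null = np.array([lilliefors_stat(rng.normal(size=n)) for _ in range(sims)])
p = np.mean(null >= d)
```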

Shapiro–Wilk test

The Shapiro–Wilk test is a test of normality in frequentist statistics. It was published in 1965 by Samuel Sanford Shapiro and Martin Wilk.
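SciPy implements it as `scipy.stats.shapiro`; a sketch on simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = rng.normal(size=50)  # simulated sample for illustration

# W near 1 is consistent with normality; a small p-value rejects it.
stat, p = stats.shapiro(x)
```

SciPy notes that the p-value may be inaccurate for samples larger than about 5000.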

Cramér–von Mises criterion

In statistics, the Cramér–von Mises criterion is a criterion used for judging the goodness of fit of a cumulative distribution function compared to a given empirical distribution function, or for comparing two empirical distributions.
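Both uses are available in recent SciPy versions; a sketch with simulated data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(size=200)  # simulated samples for illustration
y = rng.normal(size=200)

# One-sample criterion: x against a fully specified N(0, 1) CDF.
res = stats.cramervonmises(x, 'norm')

# Two-sample version: compare two empirical distribution functions.
res2 = stats.cramervonmises_2samp(x, y)
```

As with the one-sample K–S test, the one-sample p-value assumes the reference distribution is fully specified in advance.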

Pearson's chi-squared test

Pearson's chi-squared test is a statistical test applied to sets of categorical data to evaluate how likely it is that any observed difference between the sets arose by chance. It is the most widely used of many chi-squared tests.
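Applied as a normality check, the data are binned and the observed counts compared with the counts expected under a fitted normal. A sketch using equal-probability bins (the ddof adjustment for the two estimated parameters is an approximation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
x = rng.normal(loc=5.0, scale=2.0, size=1000)  # simulated data

k = 10  # equal-probability bins under the fitted normal
mu, sigma = x.mean(), x.std(ddof=1)
inner = stats.norm.ppf(np.linspace(0, 1, k + 1)[1:-1], loc=mu, scale=sigma)
observed = np.bincount(np.searchsorted(inner, x), minlength=k)
expected = np.full(k, len(x) / k)

# ddof=2: two parameters (mean, sd) were estimated from the data.
stat, p = stats.chisquare(observed, f_exp=expected, ddof=2)
```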

Normality test

In statistics, normality tests are used to determine whether a data set is well modeled by a normal distribution and to compute how likely it is for a random variable underlying the data set to be normally distributed.

Shapiro–Francia test

The Shapiro–Francia test is a statistical test for the normality of a population, based on sample data. It was introduced by S. S. Shapiro and R. S. Francia in 1972 as a simplification of the Shapiro–Wilk test.
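There is no dedicated SciPy routine, so a sketch from the definition: W′ is the squared correlation between the ordered sample and the expected normal order statistics, approximated here with Blom scores (an assumption of this sketch):

```python
import numpy as np
from scipy import stats

def shapiro_francia(sample):
    # W': squared correlation between the ordered sample and the
    # expected normal order statistics (Blom plotting positions).
    xs = np.sort(sample)
    n = len(xs)
    m = stats.norm.ppf((np.arange(1, n + 1) - 0.375) / (n + 0.25))
    return np.corrcoef(xs, m)[0, 1] ** 2

rng = np.random.default_rng(9)
x = rng.normal(size=100)  # simulated sample for illustration
w = shapiro_francia(x)
# values near 1 are consistent with normality
```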

© 2023 Useful Links.