Category: Statistical deviation and dispersion

Law of total variance
In probability theory, the law of total variance (also known as the variance decomposition formula, the conditional variance formula, the law of iterated variances, or Eve's law) states that if X and Y are random variables on the same probability space, and the variance of Y is finite, then Var(Y) = E[Var(Y | X)] + Var(E[Y | X]).
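A minimal Python sketch checking the decomposition by Monte Carlo on a toy hierarchical model (the model and all names here are illustrative, not from the source):

import random

# X is uniform on {1, 2, 3} and Y | X ~ Normal(X, sqrt(X)), so
# E[Var(Y | X)] = E[X] = 2 and Var(E[Y | X]) = Var(X) = 2/3.
random.seed(0)
xs = [random.choice([1, 2, 3]) for _ in range(200_000)]
ys = [random.gauss(x, x ** 0.5) for x in xs]

def var(v):
    m = sum(v) / len(v)
    return sum((u - m) ** 2 for u in v) / len(v)

print(var(ys), 2 + 2 / 3)  # the two values should nearly agree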
McKay's approximation for the coefficient of variation
In statistics, McKay's approximation of the coefficient of variation is a statistic based on a sample from a normally distributed population. It was introduced in 1932 by A. T. McKay, and statistical methods for the coefficient of variation often make use of it.
Common-method variance
In applied statistics (e.g., applied to the social sciences and psychometrics), common-method variance (CMV) is the spurious "variance that is attributable to the measurement method rather than to the constructs the measures are assumed to represent".
Reduced chi-squared statistic
In statistics, the reduced chi-square statistic is used extensively in goodness-of-fit testing. It is also known as mean squared weighted deviation (MSWD) in isotopic dating and variance of unit weight in the context of weighted least squares. It is defined as the chi-square statistic divided by the number of degrees of freedom.
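A minimal Python sketch (the function name and signature are illustrative, not from the source):

def reduced_chi_squared(observed, expected, sigma, n_params):
    # Chi-square statistic divided by the degrees of freedom.
    chi2 = sum((o - e) ** 2 / s ** 2 for o, e, s in zip(observed, expected, sigma))
    dof = len(observed) - n_params
    return chi2 / dof

# Values near 1 indicate a fit consistent with the stated uncertainties.
print(reduced_chi_squared([1.1, 1.9, 3.2], [1.0, 2.0, 3.0], [0.1, 0.1, 0.2], 1))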
Central moment
In probability theory and statistics, a central moment is a moment of a probability distribution of a random variable about the random variable's mean; that is, it is the expected value of a specified integer power of the deviation of the random variable from the mean.
Population variance
In statistics, the population variance is the variance of an entire population: the average of the squared deviations from the population mean, computed by dividing by the population size N rather than by N − 1 as in the unbiased sample variance.
MINQUE
In statistics, the theory of minimum norm quadratic unbiased estimation (MINQUE) was developed by C. R. Rao. Its application was originally to the problem of heteroscedasticity and the estimation of variance components in random effects models.
Mean absolute error
In statistics, mean absolute error (MAE) is a measure of errors between paired observations expressing the same phenomenon. Examples of Y versus X include comparisons of predicted versus observed, subsequent time versus initial time, and one technique of measurement versus an alternative technique of measurement.
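A minimal Python sketch (the function name is illustrative, not from the source):

def mean_absolute_error(y_true, y_pred):
    # Average of the absolute differences between paired observations.
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

print(mean_absolute_error([3.0, -0.5, 2.0], [2.5, 0.0, 2.0]))  # 0.333...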
Fano factor
In statistics, the Fano factor, like the coefficient of variation, is a measure of the dispersion of a probability distribution, such as that of Fano noise. It is named after Ugo Fano, an Italian-American physicist.
Pooled variance
In statistics, pooled variance (also known as combined variance, composite variance, or overall variance, and written s_p²) is a method for estimating the variance of several different populations when the mean of each population may be different, but one may assume that the variance of each population is the same.
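A minimal Python sketch using the standard library (the function name is illustrative, not from the source):

from statistics import variance

def pooled_variance(*samples):
    # Each sample variance is weighted by its degrees of freedom, n_i - 1.
    num = sum((len(s) - 1) * variance(s) for s in samples)
    den = sum(len(s) - 1 for s in samples)
    return num / den

print(pooled_variance([2, 4, 6], [1, 3, 5, 7]))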
Cornish–Fisher expansion
The Cornish–Fisher expansion is an asymptotic expansion used to approximate the quantiles of a probability distribution based on its cumulants. It is named after E. A. Cornish and R. A. Fisher, who first described the technique in 1937.
Deviation (statistics)
In mathematics and statistics, deviation is a measure of difference between the observed value of a variable and some other value, often that variable's mean. The sign of the deviation reports the direction of that difference: the deviation is positive when the observed value exceeds the reference value.
Minimum mean square error
In statistics and signal processing, a minimum mean square error (MMSE) estimator is an estimation method which minimizes the mean square error (MSE), which is a common measure of estimator quality, of the fitted values of a dependent variable.
Robust measures of scale
In statistics, robust measures of scale are methods that quantify the statistical dispersion in a sample of numerical data while resisting outliers. The most common such robust statistics are the interquartile range (IQR) and the median absolute deviation (MAD).
Root-mean-square deviation of atomic positions
In bioinformatics, the root-mean-square deviation of atomic positions, or simply root-mean-square deviation (RMSD), is the measure of the average distance between the atoms (usually the backbone atoms) of superimposed proteins.
Standard deviation
In statistics, the standard deviation is a measure of the amount of variation or dispersion of a set of values. A low standard deviation indicates that the values tend to be close to the mean (also called the expected value) of the set, while a high standard deviation indicates that the values are spread out over a wider range.
Mean absolute percentage error
The mean absolute percentage error (MAPE), also known as mean absolute percentage deviation (MAPD), is a measure of prediction accuracy of a forecasting method in statistics. It usually expresses the accuracy as a percentage, MAPE = (100/n) Σ |A_t − F_t| / |A_t|, where A_t is the actual value and F_t is the forecast value.
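A minimal Python sketch (the function name is illustrative, not from the source):

def mape(actual, forecast):
    # Mean absolute percentage error, in percent; undefined when any
    # actual value is zero.
    n = len(actual)
    return 100 / n * sum(abs((a - f) / a) for a, f in zip(actual, forecast))

print(mape([100, 200, 400], [110, 190, 400]))  # 5.0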
Estimation of covariance matrices
In statistics, sometimes the covariance matrix of a multivariate random variable is not known but has to be estimated. Estimation of covariance matrices then deals with the question of how to approximate the actual covariance matrix on the basis of a sample from the multivariate distribution.
F-test of equality of variances
In statistics, an F-test of equality of variances is a test for the null hypothesis that two normal populations have the same variance. Notionally, any F-test can be regarded as a comparison of two variances.
Circular standard deviation
In directional statistics, the circular standard deviation is a measure of dispersion for angular data, defined from the mean resultant length R̄ of the unit vectors representing the angles as sqrt(−2 ln R̄).
Homoscedasticity and heteroscedasticity
In statistics, a sequence (or a vector) of random variables is homoscedastic (/ˌhoʊmoʊskəˈdæstɪk/) if all its random variables have the same finite variance; this is also known as homogeneity of variance. The complementary notion is called heteroscedasticity.
Taylor's law
Taylor's power law is an empirical law in ecology that relates the variance of the number of individuals of a species per unit area of habitat to the corresponding mean by a power law relationship. It is named after the ecologist who first proposed it in 1961, Lionel Roy Taylor.
Circular variance
In directional statistics, the circular variance is a measure of dispersion for angular data, defined as 1 − R̄, where R̄ is the mean resultant length of the unit vectors representing the angles.
Skewness
In probability theory and statistics, skewness is a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean. The skewness value can be positive, zero, negative, or undefined.
Mean squared prediction error
In statistics, the mean squared prediction error or mean squared error of the predictions of a smoothing or curve fitting procedure is the expected value of the squared difference between the fitted values implied by the predictive function and the values of the (unobservable) true underlying function.
Popoviciu's inequality on variances
In probability theory, Popoviciu's inequality, named after Tiberiu Popoviciu, is an upper bound on the variance σ² of any bounded probability distribution. Let M and m be upper and lower bounds on the values of any random variable with a particular probability distribution; then Popoviciu's inequality states that σ² ≤ (M − m)²/4.
Bollinger Bands
Bollinger Bands (/ˈbɒlɪndʒər/) are a type of statistical chart characterizing the prices and volatility over time of a financial instrument or commodity, using a formulaic method propounded by John Bollinger in the 1980s.
Cokurtosis
In probability theory and statistics, cokurtosis is a measure of how much two random variables change together. Cokurtosis is the fourth standardized cross central moment. If two random variables exhibit a high level of cokurtosis, they will tend to undergo extreme deviations at the same time.
Squared deviations from the mean
Squared deviations from the mean (SDM) result from squaring deviations. In probability theory and statistics, the definition of variance is either the expected value of the SDM (when considering a theoretical distribution) or its average value (for actual experimental data).
True variance
No description available.
Root mean square
In mathematics and its applications, the root mean square of a set of numbers (abbreviated RMS or rms) is defined as the square root of the mean square (the arithmetic mean of the squares) of the set.
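A minimal Python sketch (the function name is illustrative, not from the source):

import math

def rms(values):
    # Square root of the arithmetic mean of the squared values.
    return math.sqrt(sum(v * v for v in values) / len(values))

print(rms([1, 2, 3, 4]))  # sqrt(30/4) ≈ 2.7386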
Standardized moment
In probability theory and statistics, a standardized moment of a probability distribution is a moment (often a higher degree central moment) that is normalized, typically by a power of the standard deviation, rendering the moment scale invariant.
Qualitative variation
An index of qualitative variation (IQV) is a measure of statistical dispersion in nominal distributions. There are a variety of these, but they have been relatively little-studied in the statistics literature.
Clustered standard errors
Clustered standard errors (or Liang–Zeger standard errors) are measurements that estimate the standard error of a regression parameter in settings where observations may be subdivided into smaller-sized groups ("clusters") within which the errors may be correlated.
Quasi-variance
Quasi-variance (qv) estimates are a statistical approach that is suitable for communicating the effects of a categorical explanatory variable within a statistical model. In standard statistical models, the effects of the categories are estimated relative to an arbitrary reference category; quasi-variances permit approximate inference about contrasts between any pair of categories.
Mean absolute difference
The mean absolute difference (univariate) is a measure of statistical dispersion equal to the average absolute difference of two independent values drawn from a probability distribution. A related statistic is the relative mean absolute difference, which is the mean absolute difference divided by the arithmetic mean and equal to twice the Gini coefficient.
Ratio estimator
The ratio estimator is a statistical parameter and is defined to be the ratio of means of two random variables. Ratio estimates are biased and corrections must be made when they are used in experimental or survey work.
Tracking signal
In statistics and management science, a tracking signal monitors any forecasts that have been made in comparison with actuals, and warns when there are unexpected departures of the outcomes from the forecasts.
Algorithms for calculating variance
Algorithms for calculating variance play a major role in computational statistics. A key difficulty in the design of good algorithms for this problem is that formulas for the variance may involve sums of squares, which can lead to numerical instability as well as to arithmetic overflow when dealing with large values.
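A numerically stable single-pass approach is Welford's online algorithm; a minimal Python sketch (the function name and data are illustrative, not from the source):

def welford_variance(data):
    # Welford's online algorithm: one pass, numerically stable.
    n, mean, m2 = 0, 0.0, 0.0
    for x in data:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)  # uses the updated mean
    return m2 / (n - 1)  # sample variance; use m2 / n for population

print(welford_variance([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))  # 4.571...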
Mean absolute scaled error
In statistics, the mean absolute scaled error (MASE) is a measure of the accuracy of forecasts. It is the mean absolute error of the forecast values, divided by the mean absolute error of the in-sample one-step naive forecast.
Quartile coefficient of dispersion
In statistics, the quartile coefficient of dispersion is a descriptive statistic which measures dispersion and which is used to make comparisons within and between data sets. It is defined as (Q3 − Q1)/(Q3 + Q1), where Q1 and Q3 are the first and third quartiles. Since it is based on quantiles, it is less sensitive to outliers than measures such as the coefficient of variation.
Root-mean-square deviation
The root-mean-square deviation (RMSD) or root-mean-square error (RMSE) is a frequently used measure of the differences between values (sample or population values) predicted by a model or an estimator and the values observed.
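A minimal Python sketch (the function name is illustrative, not from the source):

import math

def rmse(observed, predicted):
    # Root of the mean squared difference between paired values.
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted))
                     / len(observed))

print(rmse([1.0, 2.0, 3.0], [1.1, 1.9, 3.3]))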
Errors and residuals
In statistics and optimization, errors and residuals are two closely related and easily confused measures of the deviation of an observed value of an element of a statistical sample from its "true value" (which is not necessarily observable).
Option on realized variance
In finance, an option on realized variance (or variance option) is a type of variance derivative whose payoff depends on the annualized realized variance of the returns of a specified underlying asset.
Mean squared displacement
In statistical mechanics, the mean squared displacement (MSD, also mean square displacement, average squared displacement, or mean square fluctuation) is a measure of the deviation of the position of a particle with respect to a reference position over time.
Variation ratio
The variation ratio is a simple measure of statistical dispersion in nominal distributions; it is the simplest measure of qualitative variation. It is defined as the proportion of cases which are not in the modal category.
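A minimal Python sketch using the standard library (the function name is illustrative, not from the source):

from collections import Counter

def variation_ratio(labels):
    # Fraction of observations that do not fall in the modal category.
    modal_count = Counter(labels).most_common(1)[0][1]
    return 1 - modal_count / len(labels)

print(variation_ratio(["a", "a", "a", "b", "c"]))  # 0.4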
Variance inflation factor
In statistics, the variance inflation factor (VIF) is the ratio (quotient) of the variance of estimating some parameter in a model that includes multiple other terms (parameters) by the variance of a model constructed using only one term.
Engineering tolerance
Engineering tolerance is the permissible limit or limits of variation in a physical dimension; in a measured value or physical property of a material, manufactured object, system, or service; or in other measured values such as temperature and humidity.
Deviance (statistics)
In statistics, deviance is a goodness-of-fit statistic for a statistical model; it is often used for statistical hypothesis testing. It is a generalization of the idea of using the sum of squares of residuals in ordinary least squares to cases where model-fitting is achieved by maximum likelihood.
Range (statistics)
In statistics, the range of a set of data is the difference between the largest and smallest values, i.e. the result of subtracting the sample minimum from the sample maximum. It is expressed in the same units as the data.
Skewness risk
Skewness risk in financial modeling is the risk that results when observations are not spread symmetrically around an average value, but instead have a skewed distribution. As a result, the mean and the median can differ.
Mean square quantization error
Mean square quantization error (MSQE) is a figure of merit for the process of analog-to-digital conversion. In this conversion process, analog signals in a continuous range of values are converted to a discrete set of values.
Variance
In probability theory and statistics, variance is the expectation of the squared deviation of a random variable from its population mean or sample mean. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from their average value.
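A minimal Python sketch contrasting the population and sample conventions, using the standard library (the data are illustrative):

from statistics import pvariance, variance

data = [1, 2, 4, 5, 8]
print(pvariance(data))  # population variance (divides by n): 6.0
print(variance(data))   # sample variance (divides by n - 1): 7.5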
Goldfeld–Quandt test
In statistics, the Goldfeld–Quandt test checks for homoscedasticity in regression analyses. It does this by dividing a dataset into two parts or groups, and hence the test is sometimes called a two-group test.
Average absolute deviation
The average absolute deviation (AAD) of a data set is the average of the absolute deviations from a central point. It is a summary statistic of statistical dispersion or variability. In the general form, the central point can be a mean, median, mode, or the result of any other measure of central tendency.
Full width at half maximum
In a distribution, full width at half maximum (FWHM) is the difference between the two values of the independent variable at which the dependent variable is equal to half of its maximum value. In other words, it is the width of a spectrum curve measured between those points on the y-axis which are half the maximum amplitude.
Symmetric mean absolute percentage error
Symmetric mean absolute percentage error (SMAPE or sMAPE) is an accuracy measure based on percentage (or relative) errors. It is usually defined as SMAPE = (100/n) Σ |F_t − A_t| / ((|A_t| + |F_t|)/2), where A_t is the actual value and F_t is the forecast value.
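A minimal Python sketch of the definition above (the function name is illustrative, not from the source):

def smape(actual, forecast):
    # Symmetric MAPE, in percent; each error is scaled by the average
    # magnitude of the actual and forecast values.
    n = len(actual)
    return 100 / n * sum(abs(f - a) / ((abs(a) + abs(f)) / 2)
                         for a, f in zip(actual, forecast))

print(smape([100, 200], [110, 190]))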
Rescaled range
The rescaled range is a statistical measure of the variability of a time series introduced by the British hydrologist Harold Edwin Hurst (1880–1978). Its purpose is to provide an assessment of how the apparent variability of a series changes with the length of the time-period being considered.
Cosmic variance
The term cosmic variance is the statistical uncertainty inherent in observations of the universe at extreme distances. It has three different but closely related meanings.
Market risk
Market risk is the risk of losses in positions arising from movements in market variables like prices and volatility. There is no unique classification, as each classification may refer to different aspects of market risk.
Variogram
In spatial statistics, the theoretical variogram is a function describing the degree of spatial dependence of a spatial random field or stochastic process Z(x). The semivariogram is half the variogram.
Kurtosis
In probability theory and statistics, kurtosis (from Greek: κυρτός, kyrtos or kurtos, meaning "curved, arching") is a measure of the "tailedness" of the probability distribution of a real-valued random variable.
Otsu's method
In computer vision and image processing, Otsu's method, named after Nobuyuki Otsu (大津展之, Ōtsu Nobuyuki), is used to perform automatic image thresholding. In the simplest form, the algorithm returns a single intensity threshold that separates pixels into two classes, foreground and background.
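A minimal Python sketch of the exhaustive-search form over an intensity histogram (the function name and toy histogram are illustrative, not from the source):

def otsu_threshold(hist):
    # Search for the threshold that maximizes between-class variance;
    # `hist` is a list of pixel counts per intensity level.
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, 0.0
    for t, h in enumerate(hist):
        w0 += h                     # weight of the background class
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * h
        m0 = sum0 / w0                          # background mean
        m1 = (sum_all - sum0) / (total - w0)    # foreground mean
        var_between = w0 * (total - w0) * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy histogram over 8 intensity levels.
print(otsu_threshold([8, 9, 2, 0, 1, 3, 9, 8]))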
Greenwood statistic
The Greenwood statistic is a spacing statistic and can be used to evaluate clustering of events in time or locations in space.
Coefficient of variation
In probability theory and statistics, the coefficient of variation (CV), also known as relative standard deviation (RSD), is a standardized measure of dispersion of a probability distribution or frequency distribution. It is often expressed as a percentage and is defined as the ratio of the standard deviation to the mean.
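A minimal Python sketch using the standard library (the function name is illustrative, not from the source):

from statistics import mean, stdev

def coefficient_of_variation(data):
    # Ratio of the sample standard deviation to the sample mean.
    return stdev(data) / mean(data)

print(coefficient_of_variation([10, 12, 8, 11, 9]))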
Conditional variance
In probability theory and statistics, a conditional variance is the variance of a random variable given the value(s) of one or more other variables. Particularly in econometrics, the conditional variance is also known as the scedastic function or skedastic function.
Berkson error model
The Berkson error model is a description of random error (or misclassification) in measurement. Unlike classical error, Berkson error causes little or no bias in the measurement. It was proposed by Joseph Berkson in 1950.
Bessel's correction
In statistics, Bessel's correction is the use of n − 1 instead of n in the formula for the sample variance and sample standard deviation, where n is the number of observations in a sample. This method corrects the bias in the estimation of the population variance.
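A minimal Python sketch contrasting the two denominators (the data are illustrative):

data = [1.0, 2.0, 4.0, 5.0, 8.0]
n = len(data)
m = sum(data) / n
ss = sum((x - m) ** 2 for x in data)

print(ss / n)        # biased estimate (divides by n): 6.0
print(ss / (n - 1))  # Bessel-corrected estimate (divides by n - 1): 7.5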
Precision (statistics)
In statistics, the precision matrix or concentration matrix is the matrix inverse of the covariance matrix or dispersion matrix. For univariate distributions, the precision matrix degenerates into a scalar precision, defined as the reciprocal of the variance, p = 1/σ².
Statistical dispersion
In statistics, dispersion (also called variability, scatter, or spread) is the extent to which a distribution is stretched or squeezed. Common examples of measures of statistical dispersion are the variance, standard deviation, and interquartile range.
Median absolute deviation
In statistics, the median absolute deviation (MAD) is a robust measure of the variability of a univariate sample of quantitative data. It can also refer to the population parameter that is estimated by the MAD calculated from a sample.
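A minimal Python sketch using the standard library (the function name is illustrative, not from the source):

from statistics import median

def median_absolute_deviation(data):
    # Median of the absolute deviations from the sample median.
    m = median(data)
    return median(abs(x - m) for x in data)

print(median_absolute_deviation([1, 1, 2, 2, 4, 6, 9]))  # 1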
Probable error
In statistics, probable error defines the half-range of an interval about a central point for the distribution, such that half of the values from the distribution will lie within the interval and half outside it.
Standard error
The standard error (SE) of a statistic (usually an estimate of a parameter) is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM).
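A minimal Python sketch of the usual SEM estimate (the function name is illustrative, not from the source):

import math
from statistics import stdev

def standard_error_of_mean(sample):
    # SEM = sample standard deviation / sqrt(sample size).
    return stdev(sample) / math.sqrt(len(sample))

print(standard_error_of_mean([4.0, 5.0, 6.0, 5.0]))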
Index of dispersion
In probability theory and statistics, the index of dispersion, dispersion index, coefficient of dispersion, relative variance, or variance-to-mean ratio (VMR), like the coefficient of variation, is a normalized measure of the dispersion of a probability distribution, defined as the ratio of the variance to the mean.
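A minimal Python sketch using the standard library (the function name and data are illustrative, not from the source):

from statistics import mean, pvariance

def index_of_dispersion(counts):
    # Variance-to-mean ratio: 1 in expectation for Poisson counts,
    # greater than 1 for over-dispersed (clustered) data.
    return pvariance(counts) / mean(counts)

print(index_of_dispersion([0, 1, 1, 2, 2, 3, 12]))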
Margin of error
The margin of error is a statistic expressing the amount of random sampling error in the results of a survey. The larger the margin of error, the less confidence one should have that a poll result would reflect the result of a survey of the entire population.
WMAPE
The weighted mean absolute percentage error (WMAPE) is a variant of MAPE in which the absolute errors are weighted, most commonly by the actual values, giving Σ|A_t − F_t| / Σ|A_t|.
Negentropy
In information theory and statistics, negentropy is used as a measure of distance to normality. The concept and phrase "negative entropy" was introduced by Erwin Schrödinger in his 1944 popular-science book What is Life?
Bhattacharyya distance
In statistics, the Bhattacharyya distance measures the similarity of two probability distributions. It is closely related to the Bhattacharyya coefficient, which is a measure of the amount of overlap between two statistical samples or populations.
Propagation of uncertainty
In statistics, propagation of uncertainty (or propagation of error) is the effect of variables' uncertainties (or errors, more specifically random errors) on the uncertainty of a function based on them.
Option on realized volatility
In finance, an option on realized volatility (or volatility option) is a subclass of derivative securities whose payoff depends on the annualized realized volatility of a specified underlying asset.
Medcouple
In statistics, the medcouple is a robust statistic that measures the skewness of a univariate distribution. It is defined as a scaled median difference of the left and right half of a distribution, and its robustness makes it suitable for identifying outliers in adjusted boxplots.
Studentized residual
In statistics, a studentized residual is the quotient resulting from the division of a residual by an estimate of its standard deviation. It is a form of a Student's t-statistic, with the estimate of error varying between points.
Mean squared error
In statistics, the mean squared error (MSE) or mean squared deviation (MSD) of an estimator (of a procedure for estimating an unobserved quantity) measures the average of the squares of the errors, that is, the average squared difference between the estimated values and the actual value.
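A minimal Python sketch (the function name is illustrative, not from the source):

def mean_squared_error(actual, estimated):
    # Average squared difference between estimates and actual values.
    return sum((a - e) ** 2 for a, e in zip(actual, estimated)) / len(actual)

print(mean_squared_error([1.0, 2.0, 3.0], [1.5, 2.0, 2.5]))  # 0.1666...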