# Category: Covariance and correlation

Complex Wishart distribution
In statistics, the complex Wishart distribution is a complex version of the Wishart distribution. It is the distribution of n times the sample Hermitian covariance matrix of zero-mean independent complex Gaussian random vectors.
Partial correlation
In probability theory and statistics, partial correlation measures the degree of association between two random variables, with the effect of a set of controlling random variables removed.
Eddy covariance
The eddy covariance (also known as eddy correlation and eddy flux) is a key atmospheric measurement technique used to measure and calculate vertical turbulent fluxes within atmospheric boundary layers.
Generalized variance
The generalized variance is a scalar value which generalizes variance for multivariate random variables. It was introduced by Samuel S. Wilks. The generalized variance is defined as the determinant of the covariance matrix.
Scatter matrix
In multivariate statistics and probability theory, the scatter matrix is a statistic that is used to make estimates of the covariance matrix, for instance of the multivariate normal distribution.
Uncorrelatedness (probability theory)
In probability theory and statistics, two real-valued random variables, X and Y, are said to be uncorrelated if their covariance, cov[X, Y] = E[XY] − E[X]E[Y], is zero. If two variables are uncorrelated, there is no linear relationship between them.
Spike-triggered covariance
Spike-triggered covariance (STC) analysis is a tool for characterizing a neuron's response properties using the covariance of stimuli that elicit spikes from a neuron. STC is related to the spike-triggered average (STA).
Correlation does not imply causation
The phrase "correlation does not imply causation" refers to the inability to legitimately deduce a cause-and-effect relationship between two events or variables solely on the basis of an observed association between them.
RV coefficient
In statistics, the RV coefficient is a multivariate generalization of the squared Pearson correlation coefficient (because the RV coefficient takes values between 0 and 1). It measures the closeness of two sets of points that may each be represented by a matrix.
Correlation correction for attenuation
No description available.
Cramér's V
In statistics, Cramér's V (sometimes referred to as Cramér's phi and denoted as φc) is a measure of association between two nominal variables, giving a value between 0 and +1 (inclusive). It is based on Pearson's chi-squared statistic.
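As an illustrative sketch (not from the source, and with function names of our own choosing), Cramér's V can be computed from a contingency table of counts by first forming the chi-squared statistic from the table's marginals:

```python
import math

def cramers_v(table):
    """Cramér's V for an r x c contingency table (list of lists of counts).

    Sketch only: assumes every row and column total is non-zero.
    """
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    # Pearson's chi-squared: sum over cells of (observed - expected)^2 / expected
    chi2 = sum(
        (table[i][j] - row_tot[i] * col_tot[j] / n) ** 2
        / (row_tot[i] * col_tot[j] / n)
        for i in range(len(table))
        for j in range(len(table[0]))
    )
    k = min(len(table), len(table[0]))  # smaller of row/column count
    return math.sqrt(chi2 / (n * (k - 1)))
```

A perfectly associated 2×2 table gives V = 1, and an independent one gives V = 0.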
Normally distributed and uncorrelated does not imply independent
In probability theory, although simple examples illustrate that linear uncorrelatedness of two random variables does not in general imply their independence, it is sometimes mistakenly thought that it does.
Intraclass correlation
In statistics, the intraclass correlation, or the intraclass correlation coefficient (ICC), is a descriptive statistic that can be used when quantitative measurements are made on units that are organized into groups.
Spurious correlation of ratios
In statistics, spurious correlation of ratios is a form of spurious correlation that arises between ratios of absolute measurements which themselves are uncorrelated.
Hoeffding's independence test
In statistics, Hoeffding's test of independence, named after Wassily Hoeffding, is a test based on a population measure of deviation from independence, computed from the joint distribution function of two random variables and their marginal distribution functions.
Item-total correlation
The item-total correlation test arises in psychometrics in contexts where a number of tests or questions are given to an individual and where the problem is to construct a useful single quantity for each individual.
Canonical correlation
In statistics, canonical-correlation analysis (CCA), also called canonical variates analysis, is a way of inferring information from cross-covariance matrices. If we have two vectors X = (X1, ..., Xn) and Y = (Y1, ..., Ym) of random variables, and there are correlations among the variables, CCA finds linear combinations of X and Y that have maximum correlation with each other.
Complex inverse Wishart distribution
The complex inverse Wishart distribution is a matrix probability distribution defined on complex-valued positive-definite matrices and is the complex analog of the real inverse Wishart distribution.
Covariance matrix
In probability theory and statistics, a covariance matrix (also known as auto-covariance matrix, dispersion matrix, variance matrix, or variance–covariance matrix) is a square matrix giving the covariance between each pair of elements of a given random vector.
Correlation function (astronomy)
In astronomy, a correlation function describes the distribution of galaxies in the universe. By default, "correlation function" refers to the two-point autocorrelation function.
Total correlation
In probability theory and in particular in information theory, total correlation (Watanabe 1960) is one of several generalizations of the mutual information. It is also known as the multivariate constraint.
Sample mean and covariance
The sample mean (or "empirical mean") and the sample covariance are statistics computed from a sample of data on one or more random variables. The sample mean is the average value (or mean value) of a sample of numbers taken from a larger population.
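A minimal sketch of both statistics in plain Python (function names are ours; the covariance uses the common unbiased n − 1 divisor):

```python
def sample_mean(xs):
    """Arithmetic mean of a sample."""
    return sum(xs) / len(xs)

def sample_cov(xs, ys):
    """Unbiased sample covariance of two paired samples (divides by n - 1)."""
    n = len(xs)
    mx, my = sample_mean(xs), sample_mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
```

For example, `sample_cov([1, 2, 3], [2, 4, 6])` returns 2.0, positive because the two samples increase together.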
Sample matrix inversion
Sample matrix inversion (or direct matrix inversion) is an algorithm that estimates the weights of an array (adaptive filter) by replacing the correlation matrix with its sample estimate.
Super-resolution optical fluctuation imaging
Super-resolution optical fluctuation imaging (SOFI) is a post-processing method for the calculation of super-resolved images from recorded image time series that is based on the temporal correlations of independently fluctuating emitters.
Correlation function
A correlation function is a function that gives the statistical correlation between random variables, contingent on the spatial or temporal distance between those variables.
Correlates of crime
The correlates of crime explore the associations of specific non-criminal factors with specific crimes. The field of criminology studies the dynamics of crime. Most of these studies use correlational data.
Ecological correlation
In statistics, an ecological correlation (also spatial correlation) is a correlation between two variables that are group means, in contrast to a correlation between two variables that describe individuals.
Tail dependence
In probability theory, the tail dependence of a pair of random variables is a measure of their comovements in the tails of the distributions. The concept is used in extreme value theory.
Galton's problem
Galton's problem, named after Sir Francis Galton, is the problem of drawing inferences from cross-cultural data, due to the statistical phenomenon now called autocorrelation. The problem is now recognized as a general one that applies to all nonexperimental studies.
Bucket evaluations
In statistics, bucket evaluations is a method for correlating vectors. It is a non-parametric, unsupervised correlation method first published in 2012.
Cross-covariance
In probability and statistics, given two stochastic processes {Xt} and {Yt}, the cross-covariance is a function that gives the covariance of one process with the other at pairs of time points.
Partial autocorrelation function
In time series analysis, the partial autocorrelation function (PACF) gives the partial correlation of a stationary time series with its own lagged values, controlling for the values of the time series at all shorter lags.
Rational quadratic covariance function
In statistics, the rational quadratic covariance function is used in spatial statistics, geostatistics, machine learning, image analysis, and other fields where multivariate statistical analysis is conducted.
Covariance operator
In probability theory, for a probability measure P on a Hilbert space H with inner product ⟨·,·⟩, the covariance of P is the bilinear form Cov: H × H → R given by Cov(x, y) = ∫_H ⟨x, z⟩⟨y, z⟩ dP(z) for all x and y in H. The covariance operator C is then defined by Cov(x, y) = ⟨Cx, y⟩.
Covariance function
In probability theory and statistics, the covariance function describes how much two random variables change together (their covariance) with varying spatial or temporal separation. For a random field or stochastic process Z(x), the covariance function C(x, y) gives the covariance of the values of the field at the two locations x and y.
Fisher transformation
In statistics, the Fisher transformation (or Fisher z-transformation) of a Pearson correlation coefficient is its inverse hyperbolic tangent (artanh). When the sample correlation coefficient r is near 1 or −1, its distribution is highly skewed; the Fisher transformation yields a variable whose distribution is approximately normal.
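The transformation itself is a one-liner via the standard library; this sketch (function names ours) shows the forward and inverse maps:

```python
import math

def fisher_z(r):
    """Fisher z-transformation: z = artanh(r) = 0.5 * ln((1 + r) / (1 - r))."""
    return math.atanh(r)

def inverse_fisher(z):
    """Map a z-value back to the correlation scale via tanh."""
    return math.tanh(z)
```

Because tanh and artanh are inverses, `inverse_fisher(fisher_z(r))` recovers r for any r in (−1, 1).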
Cophenetic correlation
In statistics, and especially in biostatistics, cophenetic correlation (more precisely, the cophenetic correlation coefficient) is a measure of how faithfully a dendrogram preserves the pairwise distances between the original unmodeled data points.
Correlation ratio
In statistics, the correlation ratio is a measure of the curvilinear relationship between the statistical dispersion within individual categories and the dispersion across the whole population or sample.
Cross-correlation matrix
The cross-correlation matrix of two random vectors is a matrix containing as elements the cross-correlations of all pairs of elements of the random vectors. The cross-correlation matrix is used in various digital signal processing algorithms.
Coskewness
In probability theory and statistics, coskewness is a measure of how much three random variables change together. Coskewness is the third standardized cross central moment, related to skewness as covariance is related to variance.
Generalized canonical correlation
In statistics, generalized canonical correlation analysis (gCCA) is a way of making sense of cross-correlation matrices between sets of random variables when there are more than two sets.
Moran's I
In statistics, Moran's I is a measure of spatial autocorrelation developed by Patrick Alfred Pierce Moran. Spatial autocorrelation is characterized by a correlation in a signal among nearby locations in space.
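As a hedged illustration (not from the source; the function name and the list-of-lists weight format are our choices), Moran's I for observations x_i with a spatial weight matrix w can be sketched as:

```python
def morans_i(values, weights):
    """Moran's I: (n / W) * sum_ij w_ij (x_i - m)(x_j - m) / sum_i (x_i - m)^2,
    where m is the mean of the values and W is the sum of all weights."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]          # deviations from the mean
    w_sum = sum(sum(row) for row in weights)  # total weight W
    num = sum(
        weights[i][j] * dev[i] * dev[j]
        for i in range(n)
        for j in range(n)
    )
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)
```

For four values 1..4 on a line graph (adjacent cells weighted 1), the smooth spatial trend yields a positive I of 1/3.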
Pseudo-determinant
In linear algebra and statistics, the pseudo-determinant is the product of all non-zero eigenvalues of a square matrix. It coincides with the regular determinant when the matrix is non-singular.
Ceiling effect (statistics)
The "ceiling effect" is one type of scale attenuation effect; the other scale attenuation effect is the "floor effect". The ceiling effect is observed when an independent variable no longer has an effect on a dependent variable.
Interclass correlation
In statistics, the interclass correlation (or interclass correlation coefficient) measures a relation between two variables of different classes (types), such as the weights of 10-year-old sons and the weights of their fathers.
Covariance and correlation
In probability theory and statistics, the mathematical concepts of covariance and correlation are very similar. Both describe the degree to which two random variables or sets of random variables tend to deviate from their expected values in similar ways.
Law of total covariance
In probability theory, the law of total covariance, covariance decomposition formula, or conditional covariance formula states that if X, Y, and Z are random variables on the same probability space, and the variance of Y is finite, then the covariance of X and Y decomposes into the expectation of their conditional covariance given Z plus the covariance of their conditional expectations given Z.
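In symbols, the standard statement of the decomposition reads:

$$\operatorname{cov}(X,Y)=\operatorname{E}\bigl[\operatorname{cov}(X,Y\mid Z)\bigr]+\operatorname{cov}\bigl(\operatorname{E}[X\mid Z],\,\operatorname{E}[Y\mid Z]\bigr)$$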
FKG inequality
In mathematics, the Fortuin–Kasteleyn–Ginibre (FKG) inequality is a correlation inequality, a fundamental tool in statistical mechanics and probabilistic combinatorics (especially random graphs and the probabilistic method).
Maximal information coefficient
In statistics, the maximal information coefficient (MIC) is a measure of the strength of the linear or non-linear association between two variables X and Y. The MIC belongs to the maximal information-based nonparametric exploration (MINE) class of statistics.
Quadrant count ratio
The quadrant count ratio (QCR) is a measure of the association between two quantitative variables. The QCR is not commonly used in the practice of statistics; rather, it is a useful tool in statistics education.
Triple correlation
The triple correlation of an ordinary function on the real line is the integral of the product of that function with two independently shifted copies of itself. The Fourier transform of the triple correlation is the bispectrum.
Biweight midcorrelation
In statistics, biweight midcorrelation (also called bicor) is a measure of similarity between samples. It is median-based, rather than mean-based, thus is less sensitive to outliers, and can be a robust alternative to other similarity measures such as Pearson correlation.
Matérn covariance function
In statistics, the Matérn covariance, also called the Matérn kernel, is a covariance function used in spatial statistics, geostatistics, machine learning, image analysis, and other applications of multivariate statistical analysis.
Analysis of covariance
Analysis of covariance (ANCOVA) is a general linear model which blends ANOVA and regression. ANCOVA evaluates whether the means of a dependent variable (DV) are equal across levels of a categorical independent variable, while statistically controlling for the effects of other continuous variables (covariates).
Distance correlation
In statistics and in probability theory, distance correlation or distance covariance is a measure of dependence between two paired random vectors of arbitrary, not necessarily equal, dimension. The population distance correlation coefficient is zero if and only if the random vectors are independent.
Kendall rank correlation coefficient
In statistics, the Kendall rank correlation coefficient, commonly referred to as Kendall's τ coefficient (after the Greek letter τ, tau), is a statistic used to measure the ordinal association between two measured quantities.
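A minimal sketch of the tie-free variant (often called tau-a) in plain Python, counting concordant and discordant pairs directly; the function name is ours:

```python
from itertools import combinations

def kendall_tau(xs, ys):
    """Kendall's tau-a: (concordant - discordant) / total pairs.
    Sketch only: no tie correction (tied pairs count as neither)."""
    n = len(xs)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (xs[i] - xs[j]) * (ys[i] - ys[j])
        if s > 0:
            concordant += 1   # pair ordered the same way in both sequences
        elif s < 0:
            discordant += 1   # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)
```

Identically ordered sequences give τ = 1, reversed ones give τ = −1.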
Polychoric correlation
In statistics, polychoric correlation is a technique for estimating the correlation between two hypothesised normally distributed continuous latent variables, from two observed ordinal variables. Tetrachoric correlation is the special case when both observed variables are dichotomous.
Rank correlation
In statistics, a rank correlation is any of several statistics that measure an ordinal association—the relationship between rankings of different ordinal variables or different rankings of the same variable.
Scaled correlation
In statistics, scaled correlation is a form of a coefficient of correlation applicable to data that have a temporal component such as time series. It is the average short-term correlation.
Spearman's rank correlation coefficient
In statistics, Spearman's rank correlation coefficient or Spearman's ρ, named after Charles Spearman and often denoted by the Greek letter ρ (rho), is a nonparametric measure of rank correlation (statistical dependence between the rankings of two variables).
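Spearman's ρ is the Pearson correlation applied to ranks; a self-contained sketch (helper names ours, with average ranks assigned to ties):

```python
def _ranks(xs):
    """1-based ranks of xs, averaging the ranks of tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend the run of tied values
        avg = (i + j) / 2 + 1           # average rank of the tied run
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def pearson(xs, ys):
    """Pearson product-moment correlation of two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def spearman_rho(xs, ys):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    return pearson(_ranks(xs), _ranks(ys))
```

Because only ranks matter, any monotone relationship (even a nonlinear one such as y = x³) gives ρ = 1.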
Concordance correlation coefficient
In statistics, the concordance correlation coefficient measures the agreement between two variables, e.g., to evaluate reproducibility or for inter-rater reliability.
Correlation
In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data. Although in the broadest sense "correlation" may indicate any type of association, in statistics it usually refers to the degree to which a pair of variables are linearly related.
Comonotonicity
In probability theory, comonotonicity mainly refers to the perfect positive dependence between the components of a random vector, essentially saying that they can be represented as increasing functions of a single random variable.
Correlation function (statistical mechanics)
In statistical mechanics, the correlation function is a measure of the order in a system, as characterized by a mathematical correlation function. Correlation functions describe how microscopic variables, such as spin and density, at different positions are related.
Cross-covariance matrix
In probability theory and statistics, a cross-covariance matrix is a matrix whose element in the i, j position is the covariance between the i-th element of a random vector and the j-th element of another random vector.
Wishart distribution
In statistics, the Wishart distribution is a generalization to multiple dimensions of the gamma distribution. It is named in honor of John Wishart, who first formulated the distribution in 1928. It is a family of probability distributions defined over symmetric, non-negative-definite random matrices.
Correlation function (quantum field theory)
In quantum field theory, correlation functions, often referred to as correlators or Green's functions, are vacuum expectation values of time-ordered products of field operators. They are a key object of study in the theory.
Unbiased estimation of standard deviation
In statistics and in particular statistical theory, unbiased estimation of a standard deviation is the calculation from a statistical sample of an estimated value of the standard deviation (a measure of statistical dispersion) of a population of values, such that the expected value of the calculation equals the true value.
Ecological regression
Ecological regression is a statistical technique which runs regression on aggregates, often used in political science and history to estimate group voting behavior from aggregate data.
Cross-correlation
In signal processing, cross-correlation is a measure of similarity of two series as a function of the displacement of one relative to the other. This is also known as a sliding dot product or sliding inner-product.
Geary's C
Geary's C is a measure of spatial autocorrelation, or an attempt to determine whether adjacent observations of the same phenomenon are correlated. Spatial autocorrelation is more complex than one-dimensional autocorrelation because spatial correlation is multi-dimensional and multi-directional.
Kendall tau distance
The Kendall tau rank distance is a metric (distance function) that counts the number of pairwise disagreements between two ranking lists. The larger the distance, the more dissimilar the two lists are.
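The count of pairwise disagreements can be sketched directly in Python (the function name is ours; both arguments are rankings of the same items):

```python
from itertools import combinations

def kendall_tau_distance(a, b):
    """Number of item pairs ordered one way in ranking a and the other way in b."""
    pos_b = {item: i for i, item in enumerate(b)}  # item -> position in b
    return sum(
        1
        for x, y in combinations(a, 2)  # x precedes y in a
        if pos_b[x] > pos_b[y]          # ...but follows y in b: a disagreement
    )
```

Identical rankings have distance 0; fully reversed rankings of n items have the maximum distance n(n − 1)/2.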
Functional correlation
In statistics, functional correlation is a dimensionality reduction technique used to quantify the correlation and dependence between two variables when the data is functional. Several approaches have been developed.
Covariance
In probability theory and statistics, covariance is a measure of the joint variability of two random variables. If the greater values of one variable mainly correspond with the greater values of the other variable, and the same holds for the lesser values, the covariance is positive.
Dual total correlation
In information theory, dual total correlation (Han 1978), information rate (Dubnov 2006), excess entropy (Olbrich 2008), or binding information (Abdallah and Plumbley 2010) is one of several known non-negative generalizations of mutual information.