Category: Independence (probability theory)

Path analysis (statistics)
In statistics, path analysis is used to describe the directed dependencies among a set of variables. This includes models equivalent to any form of multiple regression analysis, factor analysis, canonical correlation analysis, and discriminant analysis.
Neutral vector
In statistics, and specifically in the study of the Dirichlet distribution, a neutral vector of random variables is one that exhibits a particular type of statistical independence amongst its elements.
Basu's theorem
In statistics, Basu's theorem states that any boundedly complete minimal sufficient statistic is independent of any ancillary statistic. This is a 1955 result of Debabrata Basu. It is often used in statistics as a tool to prove the independence of two statistics, by first demonstrating that one is complete sufficient and the other is ancillary.
Negative relationship
In statistics, there is a negative relationship or inverse relationship between two variables if higher values of one variable tend to be associated with lower values of the other. A negative relationship between two variables usually implies that the correlation between them is negative.
Conditional dependence
In probability theory, conditional dependence is a relationship between two or more events that are dependent when a third event occurs. For example, if A and B are two events that each individually increase the probability of a third event C, and they do not directly affect each other, then A and B may be independent beforehand yet become dependent once it is known whether C occurred.
Independent and identically distributed random variables
In probability theory and statistics, a collection of random variables is independent and identically distributed if each random variable has the same probability distribution as the others and all are mutually independent.
Collider (statistics)
In statistics and causal graphs, a variable is a collider when it is causally influenced by two or more variables. The name "collider" reflects the fact that in graphical models, the arrow heads from the influencing variables point into, and appear to "collide" at, the collider node.
Antecedent variable
In statistics and social sciences, an antecedent variable is a variable that can help to explain the apparent relationship (or part of the relationship) between other variables that are nominally in a cause-and-effect relationship.
Local independence
Within statistics, local independence is the underlying assumption of latent variable models. The observed items are conditionally independent of each other given an individual score on the latent variable(s).
Independence (probability theory)
Independence is a fundamental notion in probability theory, as in statistics and the theory of stochastic processes. Two events are independent, statistically independent, or stochastically independent if, informally speaking, the occurrence of one does not affect the probability of occurrence of the other (equivalently, does not affect the odds).
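For reference, the standard formal statement behind this entry (stated here for illustration, not quoted from it): events A and B are independent when P(A ∩ B) = P(A) P(B), and random variables X and Y are independent when their joint distribution factorizes, F(x, y) = F_X(x) F_Y(y) for all x and y.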
Copula (probability theory)
In probability theory and statistics, a copula is a multivariate cumulative distribution function for which the marginal probability distribution of each variable is uniform on the interval [0, 1]. Copulas are used to describe and model the dependence (inter-correlation) between random variables.
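The link to dependence modelling is Sklar's theorem (paraphrased here for illustration): any joint distribution H with margins F and G can be written as H(x, y) = C(F(x), G(y)) for some copula C, and C is unique when the margins are continuous.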
Tail dependence
In probability theory, the tail dependence of a pair of random variables is a measure of their comovements in the tails of the distributions. The concept is used in extreme value theory. Random variables that appear to be only weakly correlated overall can nevertheless exhibit strong dependence in their joint extremes.
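For example, the upper tail dependence coefficient is commonly defined (standard notation, not taken from this entry) as λ_U = lim_{q→1} P(X₂ > F₂⁻¹(q) | X₁ > F₁⁻¹(q)); a value λ_U > 0 indicates that joint upper extremes remain likely in the limit, whereas λ_U = 0 holds under independence.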
Mediation (statistics)
In statistics, a mediation model seeks to identify and explain the mechanism or process that underlies an observed relationship between an independent variable and a dependent variable via the inclusion of a third, intermediate variable known as a mediator variable.
Spurious relationship
In statistics, a spurious relationship or spurious correlation is a mathematical relationship in which two or more events or variables are associated but not causally related, due to either coincidence or the presence of a certain third, unseen factor (often called a confounding or lurking variable).
Somers' D
In statistics, Somers' D, sometimes incorrectly referred to as Somer's D, is a measure of ordinal association between two possibly dependent random variables X and Y. Somers' D takes values between −1 (when all pairs of the variables disagree) and 1 (when all pairs of the variables agree).
Vine copula
A vine is a graphical tool for labeling constraints in high-dimensional probability distributions. A regular vine is a special case for which all constraints are two-dimensional or conditional two-dimensional.
Mean dependence
In probability theory, a random variable X is said to be mean independent of a random variable Y if and only if its conditional mean E[X | Y = y] equals its (unconditional) mean E[X] for every y such that the probability density or mass of Y at y is nonzero.
Factorial code
Most real world data sets consist of data vectors whose individual components are not statistically independent. In other words, knowing the value of one element provides information about the values of other elements in the same vector.
Pairwise independence
In probability theory, a pairwise independent collection of random variables is a set of random variables any two of which are independent. Any collection of mutually independent random variables is pairwise independent, but some pairwise independent collections are not mutually independent.
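A standard way to see that the converse fails is the XOR construction sketched below (illustrative code, not taken from this entry): X and Y are independent fair bits and Z = X XOR Y, so every pair is independent while the triple is not.

    from itertools import product

    # Joint distribution of (X, Y, Z) with X, Y fair bits and Z = X XOR Y.
    outcomes = [(x, y, x ^ y) for x, y in product((0, 1), repeat=2)]
    p = 1 / len(outcomes)  # each of the four outcomes has probability 1/4

    def prob(event):
        return sum(p for o in outcomes if event(o))

    # Each pair factorizes, e.g. P(X=1, Z=1) = P(X=1) * P(Z=1) = 1/4 ...
    assert prob(lambda o: o[0] == 1 and o[2] == 1) == prob(lambda o: o[0] == 1) * prob(lambda o: o[2] == 1)
    # ... but the triple does not: P(X=1, Y=1, Z=1) = 0, not 1/8.
    assert prob(lambda o: o == (1, 1, 1)) != prob(lambda o: o[0] == 1) * prob(lambda o: o[1] == 1) * prob(lambda o: o[2] == 1)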
Gutenberg–Richter law
In seismology, the Gutenberg–Richter law (GR law) expresses the relationship between the magnitude and the total number of earthquakes in any given region and time period of at least that magnitude: log10 N = a − b M (equivalently N = 10^(a − b M)), where N is the number of events having magnitude greater than or equal to M, and a and b are constants.
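As a quick numerical reading of the law (illustrative figures, not data from this entry): since N(M + 1)/N(M) = 10^(−b), a typical b-value near 1 means each unit increase in magnitude cuts the expected count roughly tenfold, so a catalogue with about 1000 events of magnitude ≥ 4 would be expected to contain about 100 of magnitude ≥ 5 and about 10 of magnitude ≥ 6.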
Dependent and independent variables
Dependent and independent variables are variables in mathematical modeling, statistical modeling and experimental sciences. Dependent variables receive this name because, in an experiment, their values are studied under the supposition or demand that they depend, by some law or rule (for example, by a mathematical function), on the values of other variables.
Kendall rank correlation coefficient
In statistics, the Kendall rank correlation coefficient, commonly referred to as Kendall's τ coefficient (after the Greek letter τ, tau), is a statistic used to measure the ordinal association between two measured quantities.
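For illustration (standard tau-a form, not quoted from this entry), the coefficient compares concordant and discordant pairs among n observations: τ = (n_c − n_d) / (n(n − 1)/2), where n_c and n_d count the concordant and discordant pairs; τ equals 1 under perfect agreement of the two rankings and −1 under perfect reversal.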
Conditional independence
In probability theory, conditional independence describes situations wherein an observation is irrelevant or redundant when evaluating the certainty of a hypothesis. Conditional independence is usually formulated in terms of conditional probability: events A and B are conditionally independent given an event C if P(A ∩ B | C) = P(A | C) P(B | C).
Subindependence
In probability theory and statistics, subindependence is a weak form of independence. Two random variables X and Y are said to be subindependent if the characteristic function of their sum is equal to the product of their marginal characteristic functions.
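In symbols (standard notation, stated here for illustration), the condition reads φ_{X+Y}(t) = φ_X(t) φ_Y(t) for all real t; independence implies subindependence, but the converse can fail.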
Rescaled range
The rescaled range is a statistical measure of the variability of a time series introduced by the British hydrologist Harold Edwin Hurst (1880–1978). Its purpose is to provide an assessment of how the apparent variability of a series changes with the length of the time period being considered.
Comonotonicity
In probability theory, comonotonicity mainly refers to the perfect positive dependence between the components of a random vector, essentially saying that they can be represented as increasing functions of a single random variable.
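Equivalently (a standard characterization, given here for illustration), a pair (X₁, X₂) is comonotonic when it has the same distribution as (F₁⁻¹(U), F₂⁻¹(U)) for a single uniform variable U on [0, 1]; its copula is then the upper Fréchet–Hoeffding bound min(u, v).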
Wald–Wolfowitz runs test
The Wald–Wolfowitz runs test (or simply runs test), named after statisticians Abraham Wald and Jacob Wolfowitz, is a non-parametric statistical test that checks a randomness hypothesis for a two-valued data sequence.
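A minimal sketch of the mechanics, assuming the usual large-sample normal approximation for the run count (the function name and example sequence are illustrative, not from this entry):

    def runs_test(seq):
        """Return the observed number of runs and an approximate z-score under
        the null hypothesis that the two-valued sequence is random."""
        values = sorted(set(seq))
        assert len(values) == 2, "the test assumes exactly two distinct values"
        n1 = sum(1 for x in seq if x == values[0])
        n2 = len(seq) - n1
        runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)   # maximal same-value blocks
        n = n1 + n2
        mean = 1 + 2 * n1 * n2 / n                                  # E[runs] under randomness
        var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))  # Var[runs] under randomness
        return runs, (runs - mean) / var ** 0.5

    # A strictly alternating sequence has the maximum number of runs (large positive z).
    print(runs_test("ABABABABAB"))   # -> (10, approx. 2.68)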
XYZ inequality
In combinatorial mathematics, the XYZ inequality, also called the Fishburn–Shepp inequality, is an inequality for the number of linear extensions of finite partial orders. The inequality was conjectured by Ivan Rival and Bill Sands and later proved by Lawrence Shepp.
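As usually stated (paraphrased here, not quoted from this entry): for incomparable elements x, y, z of a finite poset and a uniformly random linear extension ≺, the events {x ≺ y} and {x ≺ z} are positively correlated, i.e. P(x ≺ y and x ≺ z) ≥ P(x ≺ y) · P(x ≺ z).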