Summary statistics for contingency tables

Cohen's kappa

Cohen's kappa coefficient (κ, lowercase Greek kappa) is a statistic used to measure inter-rater reliability (and also intra-rater reliability) for qualitative (categorical) items. It is generally thought to be a more robust measure than simple percent agreement, because κ takes into account the possibility of the agreement occurring by chance.
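As a minimal sketch in plain Python (the function name is illustrative), κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e the chance agreement implied by each rater's label frequencies:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Kappa = (p_o - p_e) / (1 - p_e) for two raters over the same items."""
    n = len(rater_a)
    # p_o: observed agreement, the fraction of items labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # p_e: chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```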

Positive and negative predictive values

The positive and negative predictive values (PPV and NPV respectively) are the proportions of positive and negative results in statistics and diagnostic tests that are true positive and true negative results, respectively.
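From confusion-matrix counts, the two values are simple ratios; a sketch (illustrative function name):

```python
def predictive_values(tp, fp, tn, fn):
    """PPV = TP / (TP + FP); NPV = TN / (TN + FN)."""
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv
```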

P4-metric

The P4 metric enables performance evaluation of a binary classifier. It is calculated from precision, recall, specificity and NPV (negative predictive value). P4 is designed in a similar way to the F1 metric, extending it so that all four of these probabilities contribute symmetrically.
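Under the reading above, P4 is the harmonic mean of the four probabilities; a sketch (illustrative name, counts assumed nonzero):

```python
def p4_metric(tp, fp, tn, fn):
    """P4: harmonic mean of precision, recall, specificity and NPV."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    # Harmonic mean of the four component probabilities.
    return 4 / (1 / precision + 1 / recall + 1 / specificity + 1 / npv)
```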

Receiver operating characteristic

A receiver operating characteristic curve, or ROC curve, is a graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied. The method was originally developed for operators of military radar receivers, which is how it got its name.
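A minimal sketch of how the curve's points arise (illustrative name): sweep the threshold over the observed scores and record the false-positive and true-positive rates at each step.

```python
def roc_points(scores, labels):
    """(FPR, TPR) pairs as the decision threshold sweeps over the scores."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]  # threshold above every score: nothing predicted positive
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / neg, tp / pos))
    return points
```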

Variation of information

In probability theory and information theory, the variation of information or shared information distance is a measure of the distance between two clusterings (partitions of elements). It is closely related to mutual information.

Cramér's V

In statistics, Cramér's V (sometimes referred to as Cramér's phi and denoted as φc) is a measure of association between two nominal variables, giving a value between 0 and +1 (inclusive). It is based on Pearson's chi-squared statistic and was published by Harald Cramér in 1946.
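A sketch of the computation (illustrative name): V = sqrt(χ² / (n · min(r − 1, c − 1))), with χ² computed from the table's expected cell counts.

```python
def cramers_v(table):
    """Cramér's V = sqrt(chi2 / (n * min(r-1, c-1))) for a contingency table
    given as a list of rows of counts."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    # Pearson chi-squared: sum of (observed - expected)^2 / expected.
    chi2 = sum(
        (obs - row_tot[i] * col_tot[j] / n) ** 2 / (row_tot[i] * col_tot[j] / n)
        for i, row in enumerate(table)
        for j, obs in enumerate(row)
    )
    k = min(len(row_tot), len(col_tot)) - 1
    return (chi2 / (n * k)) ** 0.5
```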

Sample odds ratio

No description available.

Coefficient of colligation

In statistics, Yule's Y, also known as the coefficient of colligation, is a measure of association between two binary variables. The measure was developed by George Udny Yule in 1912, and should not be confused with Yule's coefficient for measuring skewness based on quartiles.
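For a 2×2 table with cells a, b, c, d, Yule's Y = (√(ad) − √(bc)) / (√(ad) + √(bc)); a sketch (illustrative name):

```python
def yules_y(a, b, c, d):
    """Yule's Y for a 2x2 table [[a, b], [c, d]]."""
    ad = (a * d) ** 0.5
    bc = (b * c) ** 0.5
    return (ad - bc) / (ad + bc)
```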

Goodman and Kruskal's gamma

In statistics, Goodman and Kruskal's gamma is a measure of rank correlation, i.e., the similarity of the orderings of the data when ranked by each of the quantities. It measures the strength of association of the cross-tabulated data when both variables are measured at the ordinal level.
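Gamma is (Nc − Nd) / (Nc + Nd) over concordant and discordant pairs, with tied pairs ignored; a sketch (illustrative name, O(n²) pair loop):

```python
def gk_gamma(x, y):
    """Goodman-Kruskal gamma = (Nc - Nd) / (Nc + Nd); ties are ignored."""
    nc = nd = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:      # pair ordered the same way in both variables
                nc += 1
            elif s < 0:    # pair ordered oppositely
                nd += 1
    return (nc - nd) / (nc + nd)
```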

Total operating characteristic

The total operating characteristic (TOC) is a statistical method to compare a Boolean variable versus a rank variable. TOC can measure the ability of an index variable to diagnose either presence or absence of a characteristic.

False discovery rate

In statistics, the false discovery rate (FDR) is a method of conceptualizing the rate of type I errors in null hypothesis testing when conducting multiple comparisons. FDR-controlling procedures are designed to control the FDR, which is the expected proportion of "discoveries" (rejected null hypotheses) that are false.
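The classic FDR-controlling procedure is Benjamini-Hochberg: sort the p-values, find the largest rank k with p_(k) ≤ (k/m)·α, and reject the k smallest. A sketch (illustrative name):

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg: indices of hypotheses rejected at FDR level alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k with p_(k) <= (k / m) * alpha.
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * alpha:
            k = rank
    # Reject the hypotheses with the k smallest p-values.
    return sorted(order[:k])
```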

Fleiss' kappa

Fleiss' kappa (named after Joseph L. Fleiss) is a statistical measure for assessing the reliability of agreement between a fixed number of raters when assigning categorical ratings to a number of items or classifying items.

Pointwise mutual information

In statistics, probability theory and information theory, pointwise mutual information (PMI), or point mutual information, is a measure of association. It compares the probability of two events occurring together to what this probability would be if the events were independent.

Index of coincidence

In cryptography, coincidence counting is the technique (invented by William F. Friedman) of putting two texts side-by-side and counting the number of times that identical letters appear in the same position in both texts.
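For a single text, the index of coincidence is the probability that two randomly chosen letters are identical, Σ f_i(f_i − 1) / (N(N − 1)); a sketch (illustrative name, unnormalized form):

```python
from collections import Counter

def index_of_coincidence(text):
    """Probability that two randomly drawn letters of the text are identical."""
    letters = [ch for ch in text.upper() if ch.isalpha()]
    n = len(letters)
    counts = Counter(letters)
    return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))
```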

Goodman and Kruskal's lambda

In probability theory and statistics, Goodman & Kruskal's lambda is a measure of proportional reduction in error in cross tabulation analysis. For any sample with a nominal independent variable and dependent variable, it indicates the extent to which knowing the independent variable reduces error in predicting the dependent variable.

Diagnostic odds ratio

In medical testing with binary classification, the diagnostic odds ratio (DOR) is a measure of the effectiveness of a diagnostic test. It is defined as the ratio of the odds of the test being positive if the subject has a disease relative to the odds of the test being positive if the subject does not have the disease.

McNemar's test

In statistics, McNemar's test is a statistical test used on paired nominal data. It is applied to 2 × 2 contingency tables with a dichotomous trait, with matched pairs of subjects, to determine whether the row and column marginal frequencies are equal (that is, whether there is "marginal homogeneity").
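The test statistic uses only the discordant cells b and c of the 2×2 table: χ² = (b − c)² / (b + c). A sketch (illustrative name, no continuity correction):

```python
def mcnemar_statistic(table):
    """McNemar chi-squared = (b - c)^2 / (b + c) from the discordant cells
    of a paired 2x2 table [[a, b], [c, d]]."""
    b, c = table[0][1], table[1][0]
    return (b - c) ** 2 / (b + c)
```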

Rand index

The Rand index or Rand measure (named after William M. Rand) in statistics, and in particular in data clustering, is a measure of the similarity between two data clusterings. A form of the Rand index that is corrected for the chance grouping of elements is the adjusted Rand index.
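The (unadjusted) index is the fraction of element pairs on which the two clusterings agree, i.e. pairs that are together in both or separate in both; a sketch (illustrative name):

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Fraction of element pairs on which two clusterings agree."""
    agree = total = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += same_a == same_b  # together in both, or apart in both
        total += 1
    return agree / total
```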

F-score

In statistical analysis of binary classification, the F-score or F-measure is a measure of a test's accuracy. It is calculated from the precision and recall of the test, where the precision is the number of true positive results divided by the number of all positive results (including those incorrectly identified as positive), and the recall is the number of true positive results divided by the number of all samples that should have been identified as positive.
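The balanced F1 variant is the harmonic mean of precision and recall; a sketch (illustrative name):

```python
def f1_score(tp, fp, fn):
    """F1 = 2 * precision * recall / (precision + recall)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```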

Likelihood ratios in diagnostic testing

In evidence-based medicine, likelihood ratios are used for assessing the value of performing a diagnostic test. They use the sensitivity and specificity of the test to determine whether a test result usefully changes the probability that a condition (such as a disease state) exists.

Polychoric correlation

In statistics, polychoric correlation is a technique for estimating the correlation between two hypothesised normally distributed continuous latent variables, from two observed ordinal variables. Tetrachoric correlation is the special case in which both observed variables are dichotomous.

Tschuprow's T

In statistics, Tschuprow's T is a measure of association between two nominal variables, giving a value between 0 and 1 (inclusive). It is closely related to Cramér's V, coinciding with it for square contingency tables.

Phi coefficient

In statistics, the phi coefficient (or mean square contingency coefficient, denoted by φ or rφ) is a measure of association for two binary variables. In machine learning, it is known as the Matthews correlation coefficient (MCC).
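For a 2×2 table with cells a, b, c, d, φ = (ad − bc) / √((a+b)(c+d)(a+c)(b+d)); a sketch (illustrative name):

```python
def phi_coefficient(a, b, c, d):
    """Phi = (ad - bc) / sqrt((a+b)(c+d)(a+c)(b+d)) for a 2x2 table [[a, b], [c, d]]."""
    return (a * d - b * c) / ((a + b) * (c + d) * (a + c) * (b + d)) ** 0.5
```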

Odds ratio

An odds ratio (OR) is a statistic that quantifies the strength of the association between two events, A and B. The odds ratio is defined as the ratio of the odds of A in the presence of B and the odds of A in the absence of B.
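From a 2×2 table of counts [[a, b], [c, d]], the sample odds ratio reduces to the cross-product ratio ad / bc; a sketch (illustrative name, nonzero b and c assumed):

```python
def odds_ratio(a, b, c, d):
    """Sample OR = (a / b) / (c / d) = ad / bc for a 2x2 table [[a, b], [c, d]]."""
    return (a * d) / (b * c)
```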

Cochran–Mantel–Haenszel statistics

In statistics, the Cochran–Mantel–Haenszel test (CMH) is a test used in the analysis of stratified or matched categorical data. It allows an investigator to test the association between a binary predictor or treatment and a binary outcome while taking into account the stratification.

Uncertainty coefficient

In statistics, the uncertainty coefficient, also called proficiency, entropy coefficient or Theil's U, is a measure of nominal association. It was first introduced by Henri Theil and is based on the concept of information entropy.

Pre- and post-test probability

Pre-test probability and post-test probability (alternatively spelled pretest and posttest probability) are the probabilities of the presence of a condition (such as a disease) before and after a diagnostic test, respectively.
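The standard conversion goes through odds: post-test odds = pre-test odds × likelihood ratio, then back to a probability. A sketch (illustrative name):

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Convert pre-test probability to post-test probability via odds x LR."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)
```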

Yule's Q

No description available.
