Cohen's kappa

Cohen's kappa coefficient (κ, lowercase Greek kappa) is a statistic that is used to measure inter-rater reliability (and also intra-rater reliability) for qualitative (categorical) items. It is generally thought to be a more robust measure than simple percent agreement, since κ takes into account the possibility of the agreement occurring by chance.
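As a minimal illustration (not tied to any particular library), κ can be computed directly from two lists of category labels: observed agreement minus chance agreement, divided by one minus chance agreement, where chance agreement comes from each rater's own marginal distribution.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters assigning nominal categories."""
    n = len(rater1)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement from each rater's own marginal distribution.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[cat] * c2[cat] for cat in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

For example, with 50 items where both raters say "yes" 20 times, both say "no" 15 times, and they disagree on the remaining 15, observed agreement is 0.7, chance agreement is 0.5, and κ = 0.4.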

Inter-rater reliability

In statistics, inter-rater reliability (also called by various similar names, such as inter-rater agreement, inter-rater concordance, inter-observer reliability, inter-coder reliability, and so on) is the degree of agreement among independent observers who rate, code, or assess the same phenomenon.

Krippendorff's alpha

Krippendorff's alpha coefficient, named after academic Klaus Krippendorff, is a statistical measure of the agreement achieved when coding a set of units of analysis. Since the 1970s, alpha has been used in content analysis, where textual units are categorized by trained coders, and in other fields where data are generated by human judgment.
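A simplified sketch of alpha for nominal data follows, assuming every unit is coded by at least two coders and no missing values within a unit (the full coefficient also handles missing data and other metrics such as ordinal and interval distances). It builds the coincidence matrix of value pairs within units and returns 1 − D_o/D_e, observed over expected disagreement.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(data):
    """Krippendorff's alpha for nominal data, no missing values.

    data: list of units, each a list of the values the coders assigned
    to that unit. Units with fewer than two values are skipped.
    """
    coincidences = Counter()
    for unit in data:
        m = len(unit)
        if m < 2:
            continue
        # Each ordered pair of values within a unit contributes 1/(m-1).
        for a, b in permutations(unit, 2):
            coincidences[(a, b)] += 1 / (m - 1)
    n = sum(coincidences.values())  # total pairable values
    marginals = Counter()
    for (a, _), w in coincidences.items():
        marginals[a] += w
    # Observed disagreement: off-diagonal mass of the coincidence matrix.
    d_o = sum(w for (a, b), w in coincidences.items() if a != b) / n
    # Expected disagreement under chance pairing of the pooled values.
    d_e = sum(marginals[a] * marginals[b]
              for a in marginals for b in marginals if a != b) / (n * (n - 1))
    return 1 - d_o / d_e
```

With two coders and no missing data, this reduces to Scott's pi with a small-sample correction of (n−1)/n on the disagreement ratio.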

Bennett, Alpert and Goldstein's S

Bennett, Alpert and Goldstein's S is a statistical measure of inter-rater agreement. It was created by Bennett et al. in 1954.
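S uses the same agreement-beyond-chance form as Cohen's kappa, but fixes the chance agreement at 1/k for k available categories instead of estimating it from the observed marginals. A minimal sketch:

```python
def bennett_s(rater1, rater2, k):
    """Bennett, Alpert and Goldstein's S for two raters.

    k is the number of categories available to the raters; chance
    agreement is taken to be 1/k regardless of how often each
    category is actually used.
    """
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / len(rater1)
    return (p_o - 1 / k) / (1 - 1 / k)
```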

Kendall's W

Kendall's W (also known as Kendall's coefficient of concordance) is a non-parametric statistic for rank correlation. It is a normalization of the statistic of the Friedman test, and can be used for assessing agreement among raters, in particular inter-rater reliability.
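For m raters each ranking the same n items, W = 12S / (m²(n³ − n)), where S is the sum of squared deviations of the items' rank totals from their mean. A minimal sketch, assuming complete rankings with no ties:

```python
def kendalls_w(rankings):
    """Kendall's W from a list of rankings, one per rater.

    Each ranking assigns the ranks 1..n to the same n items;
    ties are not handled in this sketch.
    """
    m, n = len(rankings), len(rankings[0])
    # Total rank received by each item across all raters.
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean_total = m * (n + 1) / 2
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12 * s / (m * m * (n ** 3 - n))
```

W ranges from 0 (no agreement) to 1 (all raters produce identical rankings).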

Concordance correlation coefficient

In statistics, the concordance correlation coefficient measures the agreement between two variables, e.g., to evaluate reproducibility or for inter-rater reliability.
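Lin's concordance correlation coefficient combines a precision term (Pearson correlation) with an accuracy term penalizing deviation from the 45-degree line: ρ_c = 2s_xy / (s_x² + s_y² + (μ_x − μ_y)²). A minimal sketch using population (divide-by-n) moments:

```python
def concordance_ccc(x, y):
    """Concordance correlation coefficient between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx2 = sum((a - mx) ** 2 for a in x) / n
    sy2 = sum((b - my) ** 2 for b in y) / n
    # Penalizes both scatter and any systematic shift between the variables.
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)
```

Unlike Pearson's r, a constant offset between the two variables lowers the coefficient: y = x + 1 gives ρ_c < 1 even though r = 1.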

Interclass correlation

In statistics, the interclass correlation (or interclass correlation coefficient) measures a relation between two variables of different classes (types), such as the weights of 10-year-old sons and the weights of their fathers.
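The most familiar interclass correlation is Pearson's product-moment correlation between the two variables; a minimal sketch:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / math.sqrt(sum((a - mx) ** 2 for a in x) *
                           sum((b - my) ** 2 for b in y))
```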

Intraclass correlation

In statistics, the intraclass correlation, or the intraclass correlation coefficient (ICC), is a descriptive statistic that can be used when quantitative measurements are made on units that are organized into groups; it describes how strongly units in the same group resemble each other.
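One common form is the one-way random-effects ICC, computed from the ANOVA mean squares as (MSB − MSW) / (MSB + (k−1)MSW). A minimal sketch, assuming every group contains the same number k of measurements (other ICC variants handle two-way designs and unequal group sizes):

```python
def icc_oneway(groups):
    """One-way random-effects ICC; each group has the same size k."""
    n = len(groups)
    k = len(groups[0])
    grand = sum(sum(g) for g in groups) / (n * k)
    means = [sum(g) / k for g in groups]
    # Between-group and within-group sums of squares.
    ssb = k * sum((m - grand) ** 2 for m in means)
    ssw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    msb = ssb / (n - 1)
    msw = ssw / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Groups whose members are identical to each other but differ between groups give an ICC of 1.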

Fleiss' kappa

Fleiss' kappa (named after Joseph L. Fleiss) is a statistical measure for assessing the reliability of agreement between a fixed number of raters when assigning categorical ratings to a number of items or classifying items.
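Fleiss' kappa works from a count matrix rather than rater identities: counts[i][j] is how many raters put item i in category j. A minimal sketch, assuming every item is rated by the same number of raters:

```python
def fleiss_kappa(counts):
    """Fleiss' kappa from an items-by-categories count matrix.

    counts[i][j] = number of raters assigning item i to category j;
    every row must sum to the same number of raters m.
    """
    n = len(counts)
    m = sum(counts[0])
    # Per-category proportions pooled over all items and raters.
    p_j = [sum(row[j] for row in counts) / (n * m)
           for j in range(len(counts[0]))]
    # Mean per-item agreement across rater pairs.
    p_bar = sum((sum(c * c for c in row) - m) / (m * (m - 1))
                for row in counts) / n
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)
```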

Scott's Pi

Scott's pi (named after William A Scott) is a statistic for measuring inter-rater reliability for nominal data in communication studies. Textual entities are annotated with categories by different annotators, and Scott's pi measures the extent to which their agreement exceeds what would be expected by chance.
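Scott's pi has the same form as Cohen's kappa, but estimates chance agreement from a single category distribution pooled across both raters rather than from each rater's individual marginals. A minimal sketch:

```python
from collections import Counter

def scotts_pi(rater1, rater2):
    """Scott's pi for two raters assigning nominal categories."""
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from the pooled distribution over both raters.
    pooled = Counter(rater1) + Counter(rater2)
    p_e = sum((c / (2 * n)) ** 2 for c in pooled.values())
    return (p_o - p_e) / (1 - p_e)
```

When the two raters use the categories at very different rates, pi and kappa diverge; when their marginals coincide, the two statistics agree.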
