Spillover (experiment)

In experiments, a spillover is an indirect effect on a subject not directly treated by the experiment. These effects are useful for policy analysis but complicate the statistical analysis of experiments.

Data transformation (statistics)

In statistics, data transformation is the application of a deterministic mathematical function to each point in a data set—that is, each data point zi is replaced with the transformed value yi = f(zi), where f is a function.
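
A minimal sketch of such a transformation, using a log transform on illustrative values:

```python
import math

data = [1.0, 10.0, 100.0, 1000.0]

# Apply a deterministic function f (here the natural log, a common
# variance-stabilising choice) to every point z_i to get y_i = f(z_i).
transformed = [math.log(z) for z in data]
print(transformed)
```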

Whittle likelihood

In statistics, Whittle likelihood is an approximation to the likelihood function of a stationary Gaussian time series. It is named after the mathematician and statistician Peter Whittle, who introduced it in his doctoral thesis in 1951.

Pseudolikelihood

In statistical theory, a pseudolikelihood is an approximation to the joint probability distribution of a collection of random variables. The practical use of this is that it can provide an approximation to the full likelihood when that likelihood is too difficult to compute directly.

Exact statistics

Exact statistics, such as that described in exact test, is a branch of statistics that was developed to provide more accurate results pertaining to statistical testing and interval estimation by eliminating procedures based on asymptotic and approximate statistical methods.

Resampling (statistics)

In statistics, resampling is the creation of new samples based on one observed sample. Resampling methods are:
* Permutation tests (also re-randomization tests)
* Bootstrapping
* Cross validation
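
Of these, bootstrapping is perhaps the easiest to sketch: draw many new samples with replacement from the one observed sample and recompute the statistic on each (the data values and the 95% percentile interval below are purely illustrative):

```python
import random

random.seed(0)
sample = [2.1, 3.4, 1.8, 2.9, 3.1, 2.5]

# Bootstrap: draw many new samples *with replacement* from the one
# observed sample and recompute the statistic (here, the mean) on each.
boot_means = []
for _ in range(10_000):
    resample = random.choices(sample, k=len(sample))
    boot_means.append(sum(resample) / len(resample))

boot_means.sort()
# A simple 95% percentile interval for the mean:
low, high = boot_means[250], boot_means[9749]
print(low, high)
```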

Solomonoff's theory of inductive inference

Solomonoff's theory of inductive inference is a mathematical proof that if a universe is generated by an algorithm, then observations of that universe, encoded as a dataset, are best predicted by the shortest algorithm capable of reproducing those observations.

Decision theory

Decision theory (or the theory of choice; not to be confused with choice theory) is a branch of applied probability theory concerned with the theory of making decisions based on assigning probabilities to various factors and numerical consequences to the outcomes.
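
A minimal sketch of the basic recipe, choosing the action with the highest expected value (the probabilities and payoffs are invented for illustration):

```python
# Probabilities are assigned to states of the world, numerical consequences
# to (action, state) pairs; the chosen action maximises expected value.
p_rain = 0.3
payoffs = {
    "take umbrella": {"rain": 5, "dry": 3},
    "leave umbrella": {"rain": -10, "dry": 6},
}

def expected_value(action):
    return p_rain * payoffs[action]["rain"] + (1 - p_rain) * payoffs[action]["dry"]

best = max(payoffs, key=expected_value)
print(best)
```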

Nonparametric statistics

Nonparametric statistics is the branch of statistics that is not based solely on parametrized families of probability distributions (common examples of parameters are the mean and variance). Nonparametric statistics includes both descriptive statistics and statistical inference.
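
One classical nonparametric procedure is the sign test, which tests a hypothesis about the median without assuming any distributional form. A sketch on illustrative data:

```python
from math import comb

# Sign test: paired differences; H0 says the median difference is zero,
# so each difference is equally likely to be positive or negative.
diffs = [0.8, 1.2, -0.3, 0.5, 0.9, 1.1, -0.2, 0.7]
plus = sum(d > 0 for d in diffs)
n = len(diffs)

# Two-sided p-value under H0 from the binomial(n, 1/2) distribution.
k = min(plus, n - plus)
p = sum(comb(n, i) for i in range(k + 1)) * 2 / 2 ** n
print(plus, p)
```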

Empirical characteristic function

Let X1, …, Xn be independent, identically distributed real-valued random variables with common characteristic function φ(t). The empirical characteristic function (ECF), defined as φn(t) = (1/n) Σj exp(itXj), is an unbiased and consistent estimator of φ(t).
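
A minimal sketch of the ECF evaluated at a single point t, on illustrative data:

```python
import cmath

x = [0.5, -1.2, 0.3, 2.0, -0.7]
t = 1.0

# Empirical characteristic function: phi_n(t) = (1/n) * sum_j exp(i t X_j)
n = len(x)
ecf = sum(cmath.exp(1j * t * xj) for xj in x) / n
print(ecf)
```

Since each exp(itXj) lies on the unit circle, the modulus of the average never exceeds one.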

Multispecies coalescent process

The multispecies coalescent process is a stochastic process model that describes the genealogical relationships for a sample of DNA sequences taken from several species. It represents the application of coalescent theory to the case of multiple species.

Statistical inference

Statistical inference is the process of using data analysis to infer properties of an underlying distribution of probability. Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates.

Parametric statistics

Parametric statistics is a branch of statistics which assumes that sample data comes from a population that can be adequately modeled by a probability distribution that has a fixed set of parameters.

Frequentist inference

Frequentist inference is a type of statistical inference based in frequentist probability, which treats “probability” in equivalent terms to “frequency” and draws conclusions from sample data by emphasizing the frequency or proportion of the data.

Weighted product model

The weighted product model (WPM) is a popular multi-criteria decision analysis (MCDA) / multi-criteria decision making (MCDM) method. It is similar to the weighted sum model (WSM). The main difference is that the WPM uses multiplication where the WSM uses addition.
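
A sketch of a WPM comparison of two hypothetical alternatives (the weights and performance values are invented for illustration):

```python
# Two alternatives scored on three criteria (higher is better), with
# criterion weights summing to 1. All numbers are illustrative.
weights = [0.5, 0.3, 0.2]
alt_a = [25, 20, 15]
alt_b = [10, 30, 20]

def wpm_score(values, weights):
    score = 1.0
    for v, w in zip(values, weights):
        score *= v ** w          # multiplication instead of the WSM's addition
    return score

# Alternative A dominates B iff the ratio of their WPM scores exceeds 1.
ratio = wpm_score(alt_a, weights) / wpm_score(alt_b, weights)
print(ratio)
```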

Underfitting

In mathematical modeling, underfitting occurs when a model is too simple to capture the underlying structure of the data, and therefore performs poorly on both the training data and new observations.

Overfitting

In mathematical modeling, overfitting is "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably".
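
A deliberately extreme caricature of the idea, with invented data:

```python
# A "model" that memorises its training data exactly (zero training error)
# but cannot generalise, versus a parsimonious model of the same data.
train = {0.0: 1.02, 1.0: 1.96, 2.0: 5.13}   # noisy samples near y = x**2 + 1

def overfit_model(x):
    return train[x]                 # reproduces the training noise perfectly

def simple_model(x):
    return x ** 2 + 1               # smooths over the noise

print(simple_model(1.5))            # generalises to unseen x
print(overfit_model(1.5) if 1.5 in train else "overfit model: no prediction")
```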

Polykay

In statistics, a polykay, or generalised k-statistic, is a statistic defined as a linear combination of sample moments.

Randomised decision rule

In statistical decision theory, a randomised decision rule or mixed decision rule is a decision rule that associates probabilities with deterministic decision rules. In finite decision problems, randomised decision rules can be represented as probability distributions over the finite set of nonrandomised rules.

Weighted sum model

In decision theory, the weighted sum model (WSM), also called weighted linear combination (WLC) or simple additive weighting (SAW), is the best known and simplest multi-criteria decision analysis (MCDA) method for evaluating a number of alternatives in terms of a number of decision criteria.
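
A minimal sketch, with invented weights and performance values:

```python
# Simple additive weighting: each alternative's score is the sum of
# weight * performance value across the criteria. Numbers are illustrative.
weights = [0.4, 0.4, 0.2]
alternatives = {"A1": [25, 20, 15], "A2": [10, 30, 20]}

def wsm_score(values, weights):
    return sum(w * v for w, v in zip(weights, values))

best = max(alternatives, key=lambda k: wsm_score(alternatives[k], weights))
print(best)
```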

Informal inferential reasoning

In statistics education, informal inferential reasoning (also called informal inference) refers to the process of making a generalization based on data (samples) about a wider universe (population/process).

Inverse probability

In probability theory, inverse probability is an obsolete term for the probability distribution of an unobserved variable. Today, the problem of determining an unobserved variable (by whatever method) is called inferential statistics.

Sunrise problem

The sunrise problem can be expressed as follows: "What is the probability that the sun will rise tomorrow?" The sunrise problem illustrates the difficulty of using probability theory when evaluating the plausibility of statements or beliefs.
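
Laplace's classical answer uses his rule of succession: after s successes in n independent trials under a uniform prior, the posterior probability of success on the next trial is (s + 1)/(n + 2). A sketch:

```python
# Laplace's rule of succession: (s + 1) / (n + 2).
def rule_of_succession(s, n):
    return (s + 1) / (n + 2)

# If the sun has risen on every one of n observed days (s = n),
# the probability it rises tomorrow approaches 1 as n grows:
for n in (1, 10, 1_000_000):
    print(n, rule_of_succession(n, n))
```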

Sampling distribution

In statistics, a sampling distribution or finite-sample distribution is the probability distribution of a given random-sample-based statistic. If an arbitrarily large number of samples, each involving multiple observations, were separately used to compute one value of a statistic for each sample, the sampling distribution is the probability distribution of the values that the statistic takes on.
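
The idea can be illustrated by simulating the sampling distribution of the mean of five uniform(0, 1) observations (the sample size and replication count are arbitrary):

```python
import random
import statistics

random.seed(1)

# Simulate many samples of size n = 5 and compute the statistic (the mean)
# on each; the collection of means approximates its sampling distribution.
means = [statistics.mean(random.random() for _ in range(5))
         for _ in range(20_000)]

print(statistics.mean(means), statistics.stdev(means))
```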

Well-behaved statistic

Although the term well-behaved statistic often seems to be used in the scientific literature in somewhat the same way as is well-behaved in mathematics (that is, to mean "non-pathological"), it can also be given a precise, formally defined meaning.

Additive disequilibrium and z statistic

Additive disequilibrium (D) is a statistic that estimates the difference between observed genotypic frequencies and the genotypic frequencies that would be expected under Hardy–Weinberg equilibrium. An associated z statistic can be used to test whether D differs significantly from zero.

Transferable belief model

The transferable belief model (TBM) is an elaboration on the Dempster–Shafer theory (DST), which is a mathematical model used to evaluate the probability that a given proposition is true from other propositions that are assigned probabilities.

Fiducial inference

Fiducial inference is one of a number of different types of statistical inference. These are rules, intended for general application, by which conclusions can be drawn from samples of data. In modern statistical practice, it has largely fallen out of use.

Gambling and information theory

Statistical inference might be thought of as gambling theory applied to the world around us. The myriad applications for logarithmic information measures tell us precisely how to take the best guess in the face of partial information.

Rodger's method

Rodger's method is a statistical procedure for examining research data post hoc following an 'omnibus' analysis (e.g., after an analysis of variance – anova). The various components of this methodology are designed to control the rate at which true null hypotheses are falsely rejected.

Group size measures

Many animals, including humans, tend to live in groups, herds, flocks, bands, packs, shoals, or colonies (hereafter: groups) of conspecific individuals. The size of these groups, as expressed by the number of individuals present, is an important aspect of their social environment.
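
Two measures that often appear in this context are the observer's mean group size and an individual-weighted ("typical" or crowding) group size, Σg²/Σg, which reflects the group size the average individual experiences. A sketch on an invented census:

```python
# A census of group sizes (numbers are illustrative).
groups = [1, 1, 2, 3, 10, 25]

# Mean group size: average over groups (the outsider's view).
mean_size = sum(groups) / len(groups)

# Individual-weighted ("typical"/crowding) group size: sum of squares over
# sum, i.e. the group size experienced by the average individual.
typical_size = sum(g * g for g in groups) / sum(groups)
print(mean_size, typical_size)
```

Because most individuals live in the largest groups, the individual-weighted measure exceeds the plain mean whenever group sizes vary.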
