Shrinkage (statistics)

In statistics, shrinkage is the reduction in the effects of sampling variation. In regression analysis, a fitted relationship appears to perform less well on a new data set than on the data set used for fitting. In particular, the value of the coefficient of determination "shrinks".
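
As an illustrative sketch (not taken from the entry itself), adjusted R-squared is a familiar shrinkage correction: it deflates the in-sample coefficient of determination to account for the number of fitted predictors.

```python
def adjusted_r2(r2, n, p):
    """Adjusted R-squared: shrinks the in-sample R-squared to correct
    for the optimism of fitting p predictors to n data points."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# A model with R^2 = 0.90 fitted with 10 predictors on 30 points
# shrinks noticeably once degrees of freedom are accounted for.
print(round(adjusted_r2(0.90, n=30, p=10), 3))
```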

Kaplan–Meier estimator

The Kaplan–Meier estimator, also known as the product limit estimator, is a non-parametric statistic used to estimate the survival function from lifetime data. In medical research, it is often used to measure the fraction of patients living for a certain amount of time after treatment.
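
A minimal sketch of the product-limit computation (the function below is hypothetical, not from any library): at each distinct event time, the running survival probability is multiplied by the fraction of at-risk subjects who survive it, while censored subjects simply leave the risk set.

```python
def kaplan_meier(times, events):
    """Product-limit estimate of the survival function.
    times:  observed times (event or censoring)
    events: 1 if the event occurred at that time, 0 if censored
    Returns a list of (time, S(t)) pairs at each distinct event time."""
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    survival, s = [], 1.0
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = censored = 0
        # Group all observations sharing this time point.
        while i < len(pairs) and pairs[i][0] == t:
            if pairs[i][1]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            s *= 1 - deaths / n_at_risk
            survival.append((t, s))
        n_at_risk -= deaths + censored
    return survival

# Five subjects: events at t = 1, 3, 4; censoring at t = 2 and 5.
print(kaplan_meier([1, 2, 3, 4, 5], [1, 0, 1, 1, 0]))
```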

Two-step M-estimator

Two-step M-estimators deal with M-estimation problems that require preliminary estimation to obtain the parameter of interest. Two-step M-estimation differs from the usual M-estimation problem because the asymptotic distribution of the second-step estimator generally depends on the first-step estimator.

Extremum estimator

In statistics and econometrics, extremum estimators are a wide class of estimators for parametric models that are calculated through maximization (or minimization) of a certain objective function that depends on the data.

Maximum score estimator

In statistics and econometrics, the maximum score estimator is a nonparametric estimator for discrete choice models developed by Charles Manski in 1975. Unlike the multinomial probit and multinomial logit estimators, it makes no assumptions about the distribution of the unobservable part of utility.

Adaptive estimator

In statistics, an adaptive estimator is an estimator in a parametric or semiparametric model with nuisance parameters such that the presence of these nuisance parameters does not affect the efficiency of estimation.

Invariant estimator

In statistics, the concept of being an invariant estimator is a criterion that can be used to compare the properties of different estimators for the same quantity. It is a way of formalising the idea that an estimator should have certain intuitively appealing qualities.

Estimator

In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule (the estimator), the quantity of interest (the estimand) and its result (the estimate) are distinguished.

James–Stein estimator

The James–Stein estimator is a biased estimator of the mean of (possibly) correlated Gaussian distributed random vectors with unknown means. It arose sequentially in two main published papers: an earlier version by Charles Stein (1956) and the improved version by Willard James and Charles Stein (1961).
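
A minimal sketch of the basic estimator, assuming a single observed vector x ~ N(theta, sigma2·I) with dimension p ≥ 3 and known variance (the function name is hypothetical): the observation is shrunk toward the origin by a data-dependent factor.

```python
def james_stein(x, sigma2=1.0):
    """James–Stein shrinkage of one observed vector x toward the origin,
    assuming x ~ N(theta, sigma2 * I) with dimension p >= 3."""
    p = len(x)
    norm2 = sum(v * v for v in x)           # squared norm ||x||^2
    shrink = 1 - (p - 2) * sigma2 / norm2   # shrinkage factor
    return [shrink * v for v in x]

# p = 4, ||x||^2 = 5, so the shrinkage factor is 1 - 2/5 = 0.6.
print(james_stein([2.0, 0.0, 0.0, 1.0]))
```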

Trimmed estimator

In statistics, a trimmed estimator is an estimator derived from another estimator by excluding some of the extreme values, a process called truncation. This is generally done to obtain a more robust statistic, with the extreme values considered outliers.
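
The canonical example is the trimmed mean, sketched below (a hypothetical helper, not from any library): the mean is taken after discarding a fixed proportion of the smallest and largest observations.

```python
def trimmed_mean(data, proportion=0.1):
    """Mean after discarding the lowest and highest `proportion` of the
    observations, making the estimate robust to extreme values."""
    xs = sorted(data)
    k = int(len(xs) * proportion)          # number trimmed from each end
    kept = xs[k:len(xs) - k] if k else xs
    return sum(kept) / len(kept)

# An extreme outlier barely affects the 10%-trimmed mean.
data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 1000]
print(trimmed_mean(data, 0.1))
```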

Minimax estimator

In statistical decision theory, where we are faced with the problem of estimating a deterministic parameter (vector) from observations, an estimator (estimation rule) is called minimax if its maximal risk is minimal among all estimators.

L-estimator

In statistics, an L-estimator is an estimator which is a linear combination of order statistics of the measurements (which is also called an L-statistic). This can be as little as a single point, as in the median (of an odd number of values), or as many as all points, as in the mean.
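
A sketch of the general form (the function is illustrative, not from any library): every L-estimator is just a weight vector applied to the sorted sample, so the median and the midrange differ only in their weights.

```python
def l_estimate(data, weights):
    """L-estimator: weighted linear combination of the order statistics.
    weights[i] multiplies the (i+1)-th smallest observation."""
    xs = sorted(data)
    return sum(w * x for w, x in zip(weights, xs))

data = [7, 1, 4, 9, 3]

# Median of an odd-sized sample: all weight on the middle order statistic.
median_w = [0, 0, 1, 0, 0]
# Midrange: average of the smallest and largest observations.
midrange_w = [0.5, 0, 0, 0, 0.5]

print(l_estimate(data, median_w))    # the middle value
print(l_estimate(data, midrange_w))  # (min + max) / 2
```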

Multi-fractional order estimator

The multi-fractional order estimator (MFOE) is a straightforward, practical, and flexible alternative to the Kalman filter (KF) for tracking targets. The MFOE is focused strictly on simple and pragmatic fundamentals.

K-statistic

In statistics, a k-statistic is a minimum-variance unbiased estimator of a cumulant.
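
A quick sketch of the second k-statistic (the function is illustrative): k2 rescales the second central moment by n/(n-1), which is exactly the Bessel-corrected sample variance.

```python
def k2(data):
    """Second k-statistic: the minimum-variance unbiased estimator of
    the second cumulant (the variance) of the underlying distribution."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n  # second central moment
    return n / (n - 1) * m2  # Bessel-corrected sample variance

print(k2([1.0, 2.0, 4.0, 7.0]))
```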

Minimum-variance unbiased estimator

In statistics, a minimum-variance unbiased estimator (MVUE) or uniformly minimum-variance unbiased estimator (UMVUE) is an unbiased estimator that has lower variance than any other unbiased estimator for all possible values of the parameter.

Testimator

A testimator is an estimator whose value depends on the result of a test for statistical significance. In the simplest case, the value of the final estimator is that of the basic estimator if the test result is significant, and zero otherwise.

Bayes estimator

In estimation theory and decision theory, a Bayes estimator or a Bayes action is an estimator or decision rule that minimizes the posterior expected value of a loss function (i.e., the posterior expected loss).

M-estimator

In statistics, M-estimators are a broad class of extremum estimators for which the objective function is a sample average. Both non-linear least squares and maximum likelihood estimation are special cases of M-estimation.
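
As a hedged sketch of M-estimation in practice (hypothetical helper, not a library routine), the Huber M-estimate of location can be computed by iteratively reweighted averaging: quadratic loss near the centre, linear loss in the tails, so outliers get downweighted rather than trimmed.

```python
def huber_location(data, k=1.345, tol=1e-8):
    """Huber M-estimate of location via iteratively reweighted
    averaging (a standard IRLS scheme for this objective)."""
    mu = sum(data) / len(data)  # start from the sample mean
    while True:
        # Huber weight: 1 inside [-k, k], k/|residual| outside.
        weights = [1.0 if abs(x - mu) <= k else k / abs(x - mu)
                   for x in data]
        new_mu = sum(w * x for w, x in zip(weights, data)) / sum(weights)
        if abs(new_mu - mu) < tol:
            return new_mu
        mu = new_mu

# The outlier at 50 pulls the mean to 12.2 but has only a
# bounded influence on the M-estimate.
data = [2.0, 2.5, 3.0, 3.5, 50.0]
print(huber_location(data))
```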

Hodges' estimator

In statistics, Hodges' estimator (or the Hodges–Le Cam estimator), named for Joseph Hodges, is a famous counterexample of an estimator which is "superefficient", i.e. it attains smaller asymptotic variance than regular efficient estimators.

S-estimator

The goal of S-estimators is to provide a simple high-breakdown regression estimator that shares the flexibility and nice asymptotic properties of M-estimators. The name "S-estimators" was chosen as they are based on estimators of scale.

Sieve estimator

In statistics, sieve estimators are a class of non-parametric estimators which use progressively more complex models to estimate an unknown high-dimensional function as more data becomes available, with the aim of asymptotically reducing error towards zero as the amount of data increases.

Efficient estimator

In statistics, an efficient estimator is an estimator that estimates the quantity of interest in some "best possible" manner; for unbiased estimators this is often formalised as attaining the Cramér–Rao lower bound on variance.

Newey–West estimator

A Newey–West estimator is used in statistics and econometrics to provide an estimate of the covariance matrix of the parameters of a regression-type model where the standard assumptions of regression analysis do not apply. It was devised by Whitney Newey and Kenneth West in 1987.

First-difference estimator

In statistics and econometrics, the first-difference (FD) estimator is an estimator used to address the problem of omitted variables with panel data. It is consistent under the assumptions of the fixed effects model.
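
A minimal sketch of the idea (the function and panel layout below are hypothetical): differencing consecutive observations within each unit removes the time-invariant fixed effect, after which the slope is estimated by no-intercept least squares on the differenced data.

```python
def first_difference_beta(panel):
    """First-difference estimate of beta in y_it = a_i + beta*x_it + e_it.
    Within-unit differencing removes the fixed effect a_i; beta is then
    no-intercept OLS on the differenced data.
    `panel` maps unit id -> list of (x, y) observations ordered by time."""
    num = den = 0.0
    for obs in panel.values():
        for (x0, y0), (x1, y1) in zip(obs, obs[1:]):
            dx, dy = x1 - x0, y1 - y0
            num += dx * dy
            den += dx * dx
    return num / den

# Two units with very different fixed effects but the same slope (2.0).
panel = {
    "A": [(1, 12), (2, 14), (3, 16)],   # a_A = +10
    "B": [(1, -8), (2, -6), (4, -2)],   # a_B = -10
}
print(first_difference_beta(panel))
```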

Consistent estimator

In statistics, a consistent estimator or asymptotically consistent estimator is an estimator—a rule for computing estimates of a parameter θ0—having the property that as the number of data points used increases indefinitely, the resulting sequence of estimates converges in probability to θ0.
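
A small simulation sketch of the definition (illustrative, with an assumed Uniform(0, 1) population): the sample mean is a consistent estimator of the true mean 0.5, so its error typically shrinks as the sample grows.

```python
import random

def sample_mean(n, rng):
    """Sample mean of n draws from Uniform(0, 1); a consistent
    estimator of the true mean 0.5."""
    return sum(rng.random() for _ in range(n)) / n

rng = random.Random(0)  # fixed seed so the run is reproducible
for n in (10, 1000, 100000):
    print(n, abs(sample_mean(n, rng) - 0.5))
```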

Arellano–Bond estimator

In econometrics, the Arellano–Bond estimator is a generalized method of moments estimator used to estimate dynamic models of panel data. It was proposed in 1991 by Manuel Arellano and Stephen Bond, based on earlier work by Alok Bhargava and John Denis Sargan in 1983.
