Category: Computational statistics

Stan (software)
Stan is a probabilistic programming language for statistical inference written in C++. The Stan language is used to specify a (Bayesian) statistical model with an imperative program calculating the log probability density function.
Statistical relational learning
Statistical relational learning (SRL) is a subdiscipline of artificial intelligence and machine learning that is concerned with domain models that exhibit both uncertainty (which can be dealt with using statistical methods) and complex, relational structure.
Random forest
Random forests or random decision forests is an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time. For classification tasks, the output of the random forest is the class selected by most trees; for regression tasks, the mean prediction of the individual trees is returned.
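As a quick illustrative sketch (not part of the article), a random forest classifier in Python using scikit-learn, with an invented dataset:

    # Train 100 trees on bootstrap samples; predict by majority vote.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.predict(X[:5]))  # each prediction is the class chosen by most trees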
Twisting properties
Twisting properties in general terms are associated with the properties of samples that identify with statistics that are suitable for exchange.
Bootstrap aggregating
Bootstrap aggregating, also called bagging (from bootstrap aggregating), is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression.
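A hand-rolled sketch of the bagging idea (toy data; NumPy and scikit-learn assumed):

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=200)

    models = []
    for _ in range(50):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
        models.append(DecisionTreeRegressor(max_depth=3).fit(X[idx], y[idx]))

    # The bagged prediction averages the bootstrap-trained models,
    # which stabilizes the high-variance individual trees.
    y_hat = np.mean([m.predict(X) for m in models], axis=0)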
Types of artificial neural networks
There are many types of artificial neural networks (ANN). Artificial neural networks are computational models inspired by biological neural networks, and are used to approximate functions that are generally unknown.
Stochastic gradient Langevin dynamics
Stochastic gradient Langevin dynamics (SGLD) is an optimization and sampling technique composed of characteristics from stochastic gradient descent, a Robbins–Monro optimization algorithm, and Langevin dynamics, a mathematical extension of molecular dynamics models.
Plug-in principle
The plug-in principle is the method of estimating a functional of a probability distribution by evaluating the same functional at the empirical distribution of the sample; the bootstrap is its best-known application.
Bootstrapping (statistics)
Bootstrapping is any test or metric that uses random sampling with replacement (e.g. mimicking the sampling process), and falls under the broader class of resampling methods. Bootstrapping assigns measures of accuracy (bias, variance, confidence intervals, prediction error, etc.) to sample estimates.
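For example, a percentile bootstrap confidence interval for a mean, sketched with invented data (NumPy assumed):

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.exponential(scale=2.0, size=100)      # toy sample

    boot_means = np.array([
        rng.choice(data, size=len(data), replace=True).mean()
        for _ in range(5000)                         # resample with replacement
    ])
    lo, hi = np.percentile(boot_means, [2.5, 97.5])  # 95% percentile interval
    print(f"mean = {data.mean():.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")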
ProbLog
ProbLog is a probabilistic logic programming language that extends Prolog with probabilities. It minimally extends Prolog by adding the notion of a probabilistic fact, which combines the idea of logical atoms and random variables.
Vecchia approximation
Vecchia approximation is a Gaussian processes approximation technique originally developed by A. V. Vecchia, a statistician at the United States Geological Survey. It is one of the earliest attempts to make Gaussian process inference tractable for large datasets by replacing the full likelihood with a product of lower-dimensional conditional densities.
Continuity correction
In probability theory, a continuity correction is an adjustment that is made when a discrete distribution is approximated by a continuous distribution.
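A worked sketch with SciPy (numbers illustrative): approximating a binomial tail probability with and without the half-unit correction:

    from scipy.stats import binom, norm

    n, p, k = 20, 0.5, 9
    mu, sigma = n * p, (n * p * (1 - p)) ** 0.5

    exact = binom.cdf(k, n, p)                    # P(X <= 9) ~ 0.412
    uncorrected = norm.cdf((k - mu) / sigma)      # ~ 0.327
    corrected = norm.cdf((k + 0.5 - mu) / sigma)  # ~ 0.412, far closer to exact
    print(exact, uncorrected, corrected)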
Markov chain Monte Carlo
In statistics, Markov chain Monte Carlo (MCMC) methods comprise a class of algorithms for sampling from a probability distribution. By constructing a Markov chain that has the desired distribution as its equilibrium distribution, one can obtain a sample of the desired distribution by recording states from the chain.
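A minimal random-walk Metropolis sampler, as an illustrative sketch (target and tuning invented; NumPy assumed):

    import numpy as np

    rng = np.random.default_rng(0)
    log_target = lambda x: -0.5 * x**2        # standard normal, up to a constant

    x, samples = 0.0, []
    for _ in range(10_000):
        proposal = x + rng.normal(0, 1.0)     # symmetric random-walk proposal
        # Accept with probability min(1, target(proposal) / target(x)).
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)                     # recorded chain states are the sample
    print(np.mean(samples), np.std(samples))  # should approach 0 and 1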
Symbolic data analysis
Symbolic data analysis (SDA) is an extension of standard data analysis where symbolic data tables are used as input and symbolic objects are made output as a result. The data units are called symbolic since they are more complex than standard ones: they not only contain values or categories, but also include internal variation and structure.
Bootstrap error-adjusted single-sample technique
In statistics, the bootstrap error-adjusted single-sample technique (BEST or the BEAST) is a non-parametric method that is intended to allow an assessment to be made of the validity of a single sample.
Spiking neural network
Spiking neural networks (SNNs) are artificial neural networks that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model.
Auxiliary particle filter
The auxiliary particle filter is a particle filtering algorithm introduced by Pitt and Shephard in 1999 to improve some deficiencies of the sequential importance resampling (SIR) algorithm when dealing with tailed observation densities.
Gaussian process approximations
In statistics and machine learning, Gaussian process approximation is a computational method that accelerates inference tasks in the context of a Gaussian process model, most commonly likelihood evaluation and prediction.
Guess value
In mathematical modeling, a guess value is more commonly called a starting value or initial value. These are necessary for most optimization problems that use search algorithms, because those algorithms are mainly iterative and need a point from which to begin the search.
Bayesian inference using Gibbs sampling
Bayesian inference using Gibbs sampling (BUGS) is a software package for performing Bayesian inference using Markov chain Monte Carlo (based on Gibbs sampling). It was developed by David Spiegelhalter and colleagues at the Medical Research Council Biostatistics Unit in Cambridge.
Group method of data handling
Group method of data handling (GMDH) is a family of inductive algorithms for computer-based mathematical modeling of multi-parametric datasets that features fully automatic structural and parametric optimization of models.
Owen's T function
In mathematics, Owen's T function T(h, a), named after statistician Donald Bruce Owen, is defined by

    T(h, a) = (1/(2π)) ∫₀ᵃ exp(−h²(1 + x²)/2) / (1 + x²) dx,   −∞ < h, a < +∞.

The function was first introduced by Owen in 1956.
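The definition can be checked numerically, e.g. by direct quadrature of the integral above (SciPy assumed):

    import numpy as np
    from scipy.integrate import quad

    def owens_t(h, a):
        integrand = lambda x: np.exp(-0.5 * h**2 * (1 + x**2)) / (1 + x**2)
        return quad(integrand, 0, a)[0] / (2 * np.pi)

    # For h = 0 the integral reduces to arctan(a)/(2*pi), so T(0, 1) = 0.125.
    print(owens_t(0.0, 1.0))
    print(owens_t(0.5, 2.0))  # compare with scipy.special.owens_t if available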
Antithetic variates
In statistics, the antithetic variates method is a variance reduction technique used in Monte Carlo methods. Considering that the error in the simulated signal (using Monte Carlo methods) has a one-over-square-root convergence rate in the number of sample paths, a very large number of paths is required to obtain an accurate result. The antithetic variates technique pairs each sample path with its antithetic path, whose errors are negatively correlated, so that the paired estimates partially cancel.
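An illustrative sketch, estimating E[exp(U)] for U ~ Uniform(0, 1) (NumPy assumed):

    import numpy as np

    rng = np.random.default_rng(0)
    u = rng.uniform(size=50_000)

    plain = np.exp(u)                               # ordinary Monte Carlo draws
    antithetic = 0.5 * (np.exp(u) + np.exp(1 - u))  # pair each u with 1 - u

    # Both estimate e - 1 ~ 1.71828. Each antithetic term costs two function
    # evaluations, so the fair comparison is antithetic.var() vs plain.var() / 2;
    # the antithetic variance is far smaller since the pair is negatively correlated.
    print(plain.mean(), plain.var())
    print(antithetic.mean(), antithetic.var())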
Isomap
Isomap is a nonlinear dimensionality reduction method. It is one of several widely used low-dimensional embedding methods. Isomap is used for computing a quasi-isometric, low-dimensional embedding of a set of high-dimensional data points.
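For instance, with scikit-learn (toy manifold data; parameters illustrative):

    from sklearn.datasets import make_swiss_roll
    from sklearn.manifold import Isomap

    X, _ = make_swiss_roll(n_samples=1000, random_state=0)
    # Embed the 3-D swiss roll into 2-D while approximately preserving
    # geodesic (along-the-manifold) distances.
    embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)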
Multivariate kernel density estimation
Kernel density estimation is a nonparametric technique for density estimation, i.e., the estimation of probability density functions, which is one of the fundamental questions in statistics. It can be viewed as a generalisation of histogram density estimation with improved statistical properties.
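A bivariate sketch with SciPy's Gaussian KDE (invented data):

    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    data = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=500)

    kde = gaussian_kde(data.T)            # expects shape (n_dims, n_points)
    print(kde(np.array([[0.0], [0.0]])))  # density estimate at the origin (~0.18)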
Conformal prediction
Conformal prediction (CP) is a set of algorithms devised to assess the uncertainty of predictions produced by a machine learning model. CP algorithms do this by computing and comparing nonconformity measures, which quantify how unusual a new example is relative to previously seen data.
ArviZ
ArviZ (/ˈɑːrvɪz/ AR-vees) is a Python package for exploratory analysis of Bayesian models. When working with Bayesian models there are a series of related tasks that need to be addressed besides inference itself, such as diagnosing the quality of the inference, criticizing and comparing models, and preparing results for a particular audience.
Integrated nested Laplace approximations
Integrated nested Laplace approximations (INLA) is a method for approximate Bayesian inference based on Laplace's method. It is designed for a class of models called latent Gaussian models (LGMs), for which it is a fast and accurate alternative to Markov chain Monte Carlo methods.
Joint Approximation Diagonalization of Eigen-matrices
Joint Approximation Diagonalization of Eigen-matrices (JADE) is an algorithm for independent component analysis that separates observed mixed signals into latent source signals by exploiting fourth order moments.
Jackknife resampling
In statistics, the jackknife (jackknife cross-validation) is a cross-validation technique and, therefore, a form of resampling. It is especially useful for bias and variance estimation. The jackknife pre-dates other common resampling methods such as the bootstrap.
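A sketch of the jackknife standard error of a sample mean (toy numbers; NumPy assumed):

    import numpy as np

    x = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9])
    n = len(x)

    # Leave-one-out estimates: recompute the statistic with each point removed.
    loo = np.array([np.delete(x, i).mean() for i in range(n)])
    se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
    print(se)  # for the mean this equals x.std(ddof=1) / sqrt(n)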
Artificial precision
In numerical mathematics, artificial precision is a source of error that occurs when a numerical value or semantic is expressed with more precision than was initially provided from measurement or user input.
Radial basis function network
In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters.
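A forward-pass sketch with Gaussian basis functions (all parameters invented):

    import numpy as np

    def rbf_network(x, centers, widths, weights):
        # Hidden layer: one Gaussian bump per center.
        phi = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2 * widths ** 2))
        return weights @ phi                 # output: linear combination of bumps

    centers = np.array([[0.0], [1.0], [2.0]])
    widths = np.array([0.5, 0.5, 0.5])
    weights = np.array([1.0, -2.0, 0.5])
    print(rbf_network(np.array([1.2]), centers, widths, weights))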
Particle filter
Particle filters, or sequential Monte Carlo methods, are a set of Monte Carlo algorithms used to solve filtering problems arising in signal processing and Bayesian statistical inference. The filtering problem consists of estimating the internal states in dynamical systems when partial observations are made and random perturbations are present in the sensors as well as in the dynamical system.
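A sketch of a bootstrap (SIR) particle filter for a 1-D random walk observed in Gaussian noise (model and parameters invented; NumPy assumed):

    import numpy as np

    rng = np.random.default_rng(0)
    T, N = 50, 1000                             # time steps, particles

    x = np.cumsum(rng.normal(0, 0.5, size=T))   # hidden state (simulated)
    y = x + rng.normal(0, 1.0, size=T)          # noisy observations

    particles = rng.normal(0, 1, size=N)
    estimates = []
    for t in range(T):
        particles += rng.normal(0, 0.5, size=N)     # propagate through dynamics
        w = np.exp(-0.5 * (y[t] - particles) ** 2)  # weight by observation likelihood
        w /= w.sum()
        particles = particles[rng.choice(N, size=N, p=w)]  # resample
        estimates.append(particles.mean())          # filtered state estimate
    print(np.mean((np.array(estimates) - x) ** 2))  # filter mean squared error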
Computational statistics
Computational statistics, or statistical computing, is the bond between statistics and computer science: it refers to statistical methods that are enabled by computational methods. It is the area of computational science (or scientific computing) specific to the mathematical science of statistics.
Linear least squares
Linear least squares (LLS) is the least squares approximation of linear functions to data. It is a set of formulations for solving statistical problems involved in linear regression, including variants for ordinary (unweighted), weighted, and generalized (correlated) residuals.
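For example, ordinary least squares for a straight line via the design matrix (toy data):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 50)
    y = 2.0 + 3.0 * x + rng.normal(0, 1.0, size=50)

    A = np.column_stack([np.ones_like(x), x])   # design matrix [1, x]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(coef)                                 # close to the true (2.0, 3.0)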
Artificial neural network
Artificial neural networks (ANNs), usually simply called neural networks (NNs) or neural nets, are computing systems inspired by the biological neural networks that constitute animal brains. An ANN is based on a collection of connected units or nodes called artificial neurons, which loosely model the neurons in a biological brain.
Signal magnitude area
In mathematics, the signal magnitude area (abbreviated SMA or sma) is a statistical measure of the magnitude of a varying quantity.
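A sketch for a triaxial (e.g. accelerometer) signal, computing the average rectified magnitude over the window (normalization conventions vary; values invented):

    import numpy as np

    rng = np.random.default_rng(0)
    fs, duration = 50, 2.0                 # sampling rate (Hz), window length (s)
    n = int(fs * duration)
    ax, ay, az = rng.normal(size=(3, n))   # toy accelerometer axes

    sma = np.sum(np.abs(ax) + np.abs(ay) + np.abs(az)) / n
    print(sma)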
Mathematics of artificial neural networks
An artificial neural network (ANN) combines biological principles with advanced statistics to solve problems in domains such as pattern recognition and game-play. ANNs adopt the basic model of neuron analogues connected to each other in a variety of ways.
History of artificial neural networks
The history of artificial neural networks (ANN) began with Warren McCulloch and Walter Pitts (1943), who created a computational model for neural networks based on algorithms called threshold logic. This model paved the way for neural network research to split into two distinct approaches.
Bootstrapping populations
Bootstrapping populations in statistics and mathematics starts with a sample observed from a random variable. When X has a given distribution law with a set of non-fixed parameters, we denote these parameters with a vector θ; the inference task is then to compute suitable values of θ from the observed sample.
PyMC
PyMC (formerly known as PyMC3) is a Python package for Bayesian statistical modeling and probabilistic machine learning which focuses on advanced Markov chain Monte Carlo and variational fitting algorithms.
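A minimal model sketch, assuming a recent PyMC version (toy data; inferring the mean of normal observations):

    import numpy as np
    import pymc as pm

    data = np.random.default_rng(0).normal(1.0, 1.0, size=100)

    with pm.Model():
        mu = pm.Normal("mu", mu=0.0, sigma=10.0)           # prior
        pm.Normal("obs", mu=mu, sigma=1.0, observed=data)  # likelihood
        idata = pm.sample(1000)                            # MCMC (NUTS) sampling

    print(idata.posterior["mu"].mean())                    # posterior mean near 1.0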
Residual neural network
A residual neural network (ResNet) is an artificial neural network (ANN). It is a gateless or open-gated variant of the HighwayNet, the first working very deep feedforward neural network with hundreds of layers, much deeper than previous neural networks.
Semidefinite embedding
Maximum Variance Unfolding (MVU), also known as Semidefinite Embedding (SDE), is an algorithm in computer science that uses semidefinite programming to perform non-linear dimensionality reduction of high-dimensional vectorial input data.
Synthetic measure
A synthetic measure (or synthetic indicator) is a value that is the result of combining other metrics, which are measurements of various features.
FastICA
FastICA is an efficient and popular algorithm for independent component analysis invented by Aapo Hyvärinen at Helsinki University of Technology. Like most ICA algorithms, FastICA seeks an orthogonal rotation of prewhitened data, through a fixed-point iteration scheme, that maximizes a measure of non-Gaussianity of the rotated components.
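For instance, unmixing two linearly mixed sources with scikit-learn's implementation (invented signals):

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 8, 2000)
    sources = np.c_[np.sin(2 * t), np.sign(np.cos(3 * t))]  # independent signals
    mixed = sources @ np.array([[1.0, 0.5], [0.4, 1.2]]).T  # observed mixtures

    # Recovered components match the sources up to permutation and scaling.
    recovered = FastICA(n_components=2, random_state=0).fit_transform(mixed)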
Control variates
The control variates method is a variance reduction technique used in Monte Carlo methods. It exploits information about the errors in estimates of known quantities to reduce the error of an estimate of an unknown quantity.
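An illustrative sketch, again estimating E[exp(U)] for U ~ Uniform(0, 1), using U itself as the control variate since E[U] = 1/2 is known exactly:

    import numpy as np

    rng = np.random.default_rng(0)
    u = rng.uniform(size=100_000)
    f = np.exp(u)

    # Optimal coefficient c* = -Cov(f, U) / Var(U), estimated from the sample.
    c = -np.cov(f, u)[0, 1] / np.var(u)
    controlled = f + c * (u - 0.5)              # unchanged mean, reduced variance

    print(f.mean(), f.var())                    # plain Monte Carlo
    print(controlled.mean(), controlled.var())  # variance drops sharply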
Iterated conditional modes
In statistics, iterated conditional modes is a deterministic algorithm for obtaining a configuration of a local maximum of the joint probability of a Markov random field. It does this by iteratively maximizing the probability of each variable conditioned on the rest.
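A sketch of ICM denoising a binary (+1/-1) image under an Ising-style prior (model and parameters invented; border pixels left as observed for simplicity):

    import numpy as np

    rng = np.random.default_rng(0)
    clean = np.ones((32, 32)); clean[8:24, 8:24] = -1          # toy image
    noisy = np.where(rng.uniform(size=clean.shape) < 0.1, -clean, clean)

    beta, eta = 1.0, 1.5          # coupling to neighbours and to the data
    x = noisy.copy()
    for _ in range(5):            # a few full sweeps usually suffice to converge
        for i in range(1, 31):
            for j in range(1, 31):
                nb = x[i-1, j] + x[i+1, j] + x[i, j-1] + x[i, j+1]
                # Set the pixel to the value maximizing its conditional
                # probability given its neighbours and the observed pixel.
                x[i, j] = 1.0 if beta * nb + eta * noisy[i, j] >= 0 else -1.0
    print((x == clean).mean())    # fraction of pixels recovered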