Sequential methods

Sequential estimation

In statistics, sequential estimation refers to estimation methods in sequential analysis where the sample size is not fixed in advance. Instead, data is evaluated as it is collected, and further sampling is stopped in accordance with a predefined stopping rule as soon as significant results are observed.

The generic formulation is the optimal Bayesian estimator, which is the theoretical underpinning of every sequential estimator (but cannot be instantiated directly). It consists of a Markov process for the state propagation and a measurement process for each state, which together yield the typical statistical independence relations. The Markov process describes the propagation of a probability distribution over discrete time instants, and the measurement is the information available about each instant, which is usually less informative than the state itself. Only the observed sequence, together with the models, accumulates the information of all measurements and the corresponding Markov process to yield better estimates.

From this formulation the Kalman filter (and its variants), the particle filter, the histogram filter, and others can be derived. Which one to use depends on the models, and choosing the right one requires experience. In most cases the goal is to estimate the state sequence from the measurements; in other cases the same description can be used to estimate, for example, the parameters of a noise process. One can also accumulate the unmodeled statistical behavior of the states projected into the measurement space, known as the innovation sequence. (Its derivation naturally includes the orthogonality principle, which yields an independence relation and therefore also admits a Hilbert-space representation, making the construction very intuitive for those familiar with that setting.) Comparing the accumulated innovations against a threshold then corresponds to the aforementioned stopping criterion.
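The filters mentioned above can be made concrete with a minimal sketch of the scalar Kalman filter. The random-walk model, the noise variances `q` and `r`, and the simulated constant-state data below are illustrative assumptions, not taken from the text; note how the innovation `z - x` appears explicitly in the update step.

```python
import random

def kalman_1d(measurements, q=0.01, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter for the assumed random-walk model
    x_k = x_{k-1} + w_k, w_k ~ N(0, q);  z_k = x_k + v_k, v_k ~ N(0, r)."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: propagate the state distribution one Markov step.
        p = p + q
        # Update: the innovation is the part of the measurement the model did not predict.
        innovation = z - x
        k = p / (p + r)              # Kalman gain
        x = x + k * innovation
        p = (1 - k) * p
        estimates.append(x)
    return estimates

random.seed(0)
zs = [1.0 + random.gauss(0, 0.5) for _ in range(200)]  # noisy readings of a constant state
est = kalman_1d(zs)
```

With these parameters the filter settles on an estimate close to the true state 1.0; accumulating the innovations over time and comparing against a threshold would give the stopping rule described above.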
One difficulty is setting up the initial conditions of the probabilistic models, which is in most cases done from experience, data sheets, or precise measurements with a different setup. The statistical behavior of the heuristic/sampling methods (e.g. the particle filter or histogram filter) depends on many parameters and implementation details; without a very good reason they should not be used in safety-critical applications, since it is very hard to provide theoretical guarantees or do proper testing. If each state depends on an overall entity (e.g. a map, or simply an overall state variable), one typically uses SLAM (simultaneous localization and mapping) techniques, which include the sequential estimator as a special case (when the overall state variable has just one state) and estimate both the state sequence and the overall entity.

There are also non-causal variants that process all measurements at once, work on batches of measurements, or revert the state evolution to run backwards. These are no longer real-time capable (unless one uses a very large buffer, which lowers the throughput dramatically) and are only suitable for post-processing. Other variants make several passes, producing a rough estimate first and refining it in subsequent passes, an approach inspired by video editing/transcoding. For image processing, where all pixels are available at the same time, these methods become causal again.

Sequential estimation is at the core of many well-known applications, such as the Viterbi decoder, convolutional codes, video compression, and target tracking. Due to its state-space representation, which is in most cases motivated by physical laws of motion, there is a direct link to control applications, which led, for example, to the use of the Kalman filter in space applications. (Wikipedia)
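The causal/non-causal distinction can be seen in two hypothetical toy filters (not from the text): a causal exponential filter uses only past and current samples and could run in real time, while a centered moving average needs future samples and is therefore only usable in post-processing.

```python
def causal_filter(xs, alpha=0.3):
    """Exponential filter: each output uses only past and current samples."""
    y, out = xs[0], []
    for x in xs:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

def noncausal_smooth(xs, half_window=2):
    """Centered moving average: needs future samples, so only usable offline."""
    n = len(xs)
    return [sum(xs[max(0, i - half_window):min(n, i + half_window + 1)])
            / (min(n, i + half_window + 1) - max(0, i - half_window))
            for i in range(n)]

data = [0, 1, 0, 1, 10, 1, 0, 1, 0]   # spike at index 4
smoothed = noncausal_smooth(data)     # value at index 4 uses indices 2..6
```

The non-causal smoother spreads the spike symmetrically because it can look ahead; to run it on a stream one would need to buffer `half_window` future samples, the latency/throughput cost mentioned above.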

(ML 4.1) Maximum Likelihood Estimation (MLE) (part 1)

Definition of maximum likelihood estimates (MLEs), and a discussion of pros/cons. A playlist of these Machine Learning videos is available here: http://www.youtube.com/my_playlists?p=D0F06AA0D2E8FFBA

From playlist Machine Learning
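As a sketch of the idea covered in the video: for a Gaussian, the MLEs have a well-known closed form, namely the sample mean and the (biased) sample variance. The data below are made up for illustration.

```python
def gaussian_mle(xs):
    """Closed-form MLEs for a Gaussian: sample mean and biased sample variance."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n   # note: divides by n, not n-1
    return mu, var

mu, var = gaussian_mle([2.1, 1.9, 2.4, 2.0, 1.6])  # mu ≈ 2.0, var ≈ 0.068
```

The division by n rather than n-1 is a classic example of the "cons" side: the MLE of the variance is biased, though consistent.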

Introduction to Estimation Theory

http://AllSignalProcessing.com for more great signal-processing content: ad-free videos, concept/screenshot files, quizzes, MATLAB and data files. General notion of estimating a parameter and measures of estimation quality including bias, variance, and mean-squared error.

From playlist Estimation and Detection Theory
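The quality measures named in the blurb can be demonstrated with a Monte Carlo experiment (an illustrative setup, not from the video): compare the biased (1/n) and unbiased (1/(n-1)) variance estimators for a standard normal, using the decomposition MSE = bias² + variance.

```python
import random

def estimator_quality(estimator, true_value, sample_size, trials=20000, seed=0):
    """Monte Carlo estimates of an estimator's bias, variance and MSE."""
    rng = random.Random(seed)
    ests = [estimator([rng.gauss(0, 1) for _ in range(sample_size)])
            for _ in range(trials)]
    mean_est = sum(ests) / trials
    bias = mean_est - true_value
    var = sum((e - mean_est) ** 2 for e in ests) / trials
    return bias, var, bias ** 2 + var          # MSE = bias^2 + variance

def biased_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def unbiased_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

b1, v1, m1 = estimator_quality(biased_var, 1.0, sample_size=10)
b2, v2, m2 = estimator_quality(unbiased_var, 1.0, sample_size=10)
```

For n = 10 the biased estimator has bias about -0.1, yet its lower variance gives it a smaller MSE than the unbiased one, a standard illustration that unbiasedness alone does not minimize mean-squared error.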

Ex: Write a Recursive and Explicit Equation to Model Linear Growth

This video provides a basic example of how to determine a recursive and an explicit equation to model linear growth, given P_0 and P_1. http://mathispower4u.com

From playlist Linear, Exponential, and Logistic Growth: Recursive/Explicit
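The two forms compared in the video can be written side by side; the values P_0 = 5 and P_1 = 8 below are illustrative.

```python
def recursive_linear(p0, d, n):
    """Recursive form: P_n = P_(n-1) + d, applied step by step."""
    p = p0
    for _ in range(n):
        p += d
    return p

def explicit_linear(p0, d, n):
    """Explicit (closed) form: P_n = P_0 + n * d."""
    return p0 + n * d

# With P_0 = 5 and P_1 = 8, the common difference is d = P_1 - P_0 = 3.
p10 = explicit_linear(5, 3, 10)
```

Both forms give the same sequence; the explicit form answers "what is P_n?" in one step, while the recursive form mirrors how the data actually grows.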

Estimation

"Estimate the result of a calculation by first rounding each number."

From playlist Number: Rounding & Estimation
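A hypothetical helper that mimics the technique in the video: round each number to one significant figure first, then calculate.

```python
import math

def round_1sf(x):
    """Round x to one significant figure."""
    if x == 0:
        return 0
    return round(x, -math.floor(math.log10(abs(x))))

def estimate_product(a, b):
    """Estimate a * b by rounding each factor first."""
    return round_1sf(a) * round_1sf(b)

# 38.2 * 4.7 is estimated via 40 * 5 = 200
approx = estimate_product(38.2, 4.7)
```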

Statistics 5_1 Confidence Intervals

In this lecture we explain the meaning of a confidence interval and look at the equation used to calculate it.

From playlist Medical Statistics
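A minimal sketch of the equation the lecture discusses, using the large-sample normal approximation x̄ ± 1.96·s/√n for a 95% interval (the sample data are made up; for small samples a t critical value would replace 1.96).

```python
import math

def confidence_interval_95(xs):
    """Approximate 95% CI for the mean: mean ± 1.96 * s / sqrt(n)."""
    n = len(xs)
    mean = sum(xs) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))  # sample std dev
    half_width = 1.96 * s / math.sqrt(n)
    return mean - half_width, mean + half_width

lo, hi = confidence_interval_95([4.8, 5.1, 5.0, 4.9, 5.2, 5.0])
```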

Random and systematic error explained: from fizzics.org

In scientific experiments and measurement it is almost never possible to be absolutely accurate. We tend to make two types of error: random and systematic. The video uses examples to explain the difference and the first steps you might take to reduce them. Notes to support

From playlist Units of measurement

Exponential Growth Models

Introduces notation and formulas for exponential growth models, with solutions to guided problems.

From playlist Discrete Math
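The standard explicit formula for such models is P_n = P_0·(1 + r)^n for growth rate r per step; a small sketch with illustrative numbers:

```python
def exponential_growth(p0, r, n):
    """Explicit form P_n = P_0 * (1 + r)**n for growth rate r per step."""
    return p0 * (1 + r) ** n

# 1000 growing at 5% per period for 3 periods: 1000 * 1.05**3 = 1157.625
p3 = exponential_growth(1000, 0.05, 3)
```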

Learn to use summation notation for an arithmetic series to find the sum

👉 Learn how to find the partial sum of an arithmetic series. A series is the sum of the terms of a sequence. An arithmetic series is the sum of the terms of an arithmetic sequence. The formula for the sum of n terms of an arithmetic sequence is given by Sn = n/2 [2a + (n - 1)d], where a is

From playlist Series
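The formula quoted in the blurb, Sn = n/2·[2a + (n−1)d], can be checked directly against a brute-force sum of the terms (the values a = 3, d = 4, n = 10 are illustrative):

```python
def arithmetic_series_sum(a, d, n):
    """S_n = n/2 * (2a + (n-1)d): sum of the first n terms of an arithmetic sequence."""
    return n * (2 * a + (n - 1) * d) / 2

a, d, n = 3, 4, 10
terms = [a + k * d for k in range(n)]   # 3, 7, 11, ..., 39
s = arithmetic_series_sum(a, d, n)
```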

Sequential Stopping for Parallel Monte Carlo by Peter W Glynn

PROGRAM: ADVANCES IN APPLIED PROBABILITY ORGANIZERS: Vivek Borkar, Sandeep Juneja, Kavita Ramanan, Devavrat Shah, and Piyush Srivastava DATE & TIME: 05 August 2019 to 17 August 2019 VENUE: Ramanujan Lecture Hall, ICTS Bangalore Applied probability has seen a revolutionary growth in resear

From playlist Advances in Applied Probability 2019

(ML 7.7.A2) Expectation of a Dirichlet random variable

How to compute the expected value of a Dirichlet distributed random variable.

From playlist Machine Learning
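The result derived in the video has a simple closed form: for X ~ Dirichlet(α), E[X_i] = α_i / Σ_j α_j.

```python
def dirichlet_mean(alpha):
    """E[X_i] = alpha_i / sum(alpha) for X ~ Dirichlet(alpha)."""
    total = sum(alpha)
    return [a / total for a in alpha]

m = dirichlet_mean([2.0, 3.0, 5.0])   # the mean lies on the probability simplex
```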

05-5 Inverse modeling : sequential importance re-sampling

Introduction to sequential importance resampling

From playlist QUSS GS 260
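A minimal sketch of one sequential importance resampling (SIR) step for a 1-D random-walk state observed in Gaussian noise; the model, the noise parameters `q` and `r`, and the repeated measurement of a state near 2.0 are illustrative assumptions.

```python
import math
import random

def sir_step(particles, weights, z, q=0.1, r=0.5):
    """One SIR step: propagate, reweight by the likelihood, then resample."""
    # 1. Propagate each particle through the assumed dynamics x_k = x_{k-1} + N(0, q^2).
    particles = [p + random.gauss(0, q) for p in particles]
    # 2. Reweight by the Gaussian likelihood of the measurement z given each particle.
    weights = [w * math.exp(-0.5 * ((z - p) / r) ** 2)
               for w, p in zip(weights, particles)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Resample in proportion to the weights to counter weight degeneracy.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)

random.seed(0)
n = 500
particles = [random.uniform(-5, 5) for _ in range(n)]
weights = [1.0 / n] * n
for z in [2.0] * 30:                   # repeated measurements of a state near 2.0
    particles, weights = sir_step(particles, weights, z)
estimate = sum(particles) / n
```

After a few steps the particle cloud concentrates around the measured state; production implementations typically resample only when the effective sample size drops, rather than at every step as done here for simplicity.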

99 Data Analytics: Second Midterm Walk Through

I walk through my second midterm from Fall 2019.

From playlist Data Analytics and Geostatistics

NIPS 2011 Big Learning - Algorithms, Systems, & Tools Workshop: Fast Cross-Validation...

Big Learning Workshop: Algorithms, Systems, and Tools for Learning at Scale at NIPS 2011 Invited Talk: Fast Cross-Validation via Sequential Analysis by Tammo Kruger Abstract: With the increasing size of today's data sets, finding the right parameter configuration via cross-validatio

From playlist NIPS 2011 Big Learning: Algorithms, System & Tools Workshop

13 Data Analytics: Simulation

Lecture on the motivation for simulation vs. estimation and development of the sequential Gaussian simulation approach.

From playlist Data Analytics and Geostatistics

Felix Kwok: Analysis of a Three-Level Variant of Parareal

In this talk, we present a three-level variant of the parareal algorithm that uses three propagators at the fine, intermediate and coarsest levels. The fine and intermediate levels can both be run in parallel, only the coarsest level propagation is completely sequential. We interpret our a

From playlist Jean-Morlet Chair - Gander/Hubert

Aaditya Ramdas: Universal inference using the split likelihood ratio test

CIRM VIRTUAL EVENT Recorded during the meeting "Mathematical Methods of Modern Statistics 2" on June 05, 2020 by the Centre International de Rencontres Mathématiques (Marseille, France) Filmmaker: Guillaume Hennenfent Find this video and other talks given by worldwide mathematicians

From playlist Virtual Conference

David O. Siegmund: Change: Detection, Estimation, Segmentation

CIRM VIRTUAL EVENT Recorded during the meeting "Mathematical Methods of Modern Statistics 2" on June 08, 2020 by the Centre International de Rencontres Mathématiques (Marseille, France) Filmmaker: Guillaume Hennenfent Find this video and other talks given by worldwide mathematicians

From playlist Virtual Conference

Stanford CS330: Multi-Task and Meta-Learning, 2019 | Lecture 6 - Reinforcement Learning Primer

For more information about Stanford’s Artificial Intelligence professional and graduate programs, visit: https://stanford.io/ai Assistant Professor Chelsea Finn, Stanford University http://cs330.stanford.edu/ 0:00 Introduction 0:46 Logistics 2:31 Why Reinforcement Learning? 3:37 The Pla

From playlist Stanford CS330: Deep Multi-Task and Meta Learning

(ML 10.7) Predictive distribution for linear regression (part 4)

How to compute the (posterior) predictive distribution for a new point, under a Bayesian model for linear regression.

From playlist Machine Learning
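A sketch of the computation the video covers, reduced to the scalar model y = w·x + noise with prior w ~ N(0, τ²): the posterior predictive at a new point is Gaussian, with variance combining the irreducible noise and the remaining uncertainty in w. The data and hyperparameters below are illustrative assumptions.

```python
def predictive_1d(xs, ys, x_new, sigma2=0.25, tau2=1.0):
    """Posterior predictive N(mean, var) for y_new at x_new under the scalar
    Bayesian model y = w*x + noise, prior w ~ N(0, tau2), noise variance sigma2."""
    post_prec = 1.0 / tau2 + sum(x * x for x in xs) / sigma2   # posterior precision of w
    post_var = 1.0 / post_prec
    post_mean = post_var * sum(x * y for x, y in zip(xs, ys)) / sigma2
    # Predictive variance = observation noise + uncertainty in w scaled by x_new^2.
    return post_mean * x_new, sigma2 + x_new ** 2 * post_var

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 1.9, 4.2, 5.8]
mean, var = predictive_1d(xs, ys, 4.0)
```

Note that the predictive variance grows with x_new², since extrapolating far from the data amplifies the residual uncertainty in the slope.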

Related pages

Estimation theory | Testimator | Sequential analysis | Kalman filter | Statistics | Particle filter