# Category: Time series

Autoregressive conditional duration
In financial econometrics, an autoregressive conditional duration (ACD, Engle and Russell (1998)) model considers irregularly spaced and autocorrelated intertrade durations. ACD is analogous to GARCH.
Smoothing
In statistics and image processing, to smooth a data set is to create an approximating function that attempts to capture important patterns in the data, while leaving out noise or other fine-scale structure and rapid phenomena.
CARIACO Ocean Time Series Program
In 1995, the Ocean Time Series Program called CARIACO (Carbon Retention in a Colored Ocean) was initiated, completing 232 monthly core cruises through January 2017. This time series consists of field measurements collected in the Cariaco Basin off Venezuela.
Journal of Time Series Analysis
The Journal of Time Series Analysis is a bimonthly peer-reviewed academic journal covering mathematical statistics as it relates to the analysis of time series data. It was established in 1980 and is published by Wiley.
Wold's theorem
In statistics, Wold's decomposition or the Wold representation theorem (not to be confused with the Wold theorem that is the discrete-time analog of the Wiener–Khinchin theorem), named after Herman Wold, states that every covariance-stationary time series can be written as the sum of a deterministic component and a stochastic component expressible as an infinite moving average.
Analysis of rhythmic variance
In statistics, analysis of rhythmic variance (ANORVA) is a method for detecting rhythms in biological time series, published by Peter Celec (Biol Res. 2004, 37(4 Suppl A):777–82). It is a procedure for detecting periodic components in noisy data.
Dynamic factor
In econometrics, a dynamic factor (also known as a diffusion index) is a series which measures the co-movement of many time series. It is used in certain macroeconomic models. A diffusion index is intended to indicate the proportion of a collection of time series that are rising over a given interval.
Stationary distribution
Stationary distribution may refer to: * A special distribution for a Markov chain such that if the chain starts with its stationary distribution, the marginal distribution of all states at any time will also be that stationary distribution.
Satellite Image Time Series
A Satellite Image Time Series (SITS) is a set of satellite images taken from the same scene at different times. A SITS makes use of different satellite sources to obtain a larger data series with shorter intervals between acquisitions.
Dynamic mode decomposition
Dynamic mode decomposition (DMD) is a dimensionality reduction algorithm developed by Peter Schmid in 2008. Given a time series of data, DMD computes a set of modes, each of which is associated with a fixed oscillation frequency and decay/growth rate.
Stochastic drift
In probability theory, stochastic drift is the change of the average value of a stochastic (random) process. A related concept is the drift rate, which is the rate at which the average changes. For example, a process that counts the number of heads in a series of fair coin tosses has a drift rate of 1/2 per toss.
Exponential smoothing
Exponential smoothing is a rule-of-thumb technique for smoothing time series data using the exponential window function. Whereas in the simple moving average the past observations are weighted equally, exponential smoothing assigns exponentially decreasing weights to observations as they age.
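As an illustrative sketch (the function name and the choice to initialise with the first observation are assumptions of this example, not from the entry above), simple exponential smoothing applies the recursion s_t = alpha*x_t + (1 - alpha)*s_{t-1}:

```python
def exponential_smoothing(xs, alpha):
    """Simple exponential smoothing: s_t = alpha*x_t + (1 - alpha)*s_{t-1}.

    Illustrative sketch; initialises the smoothed series with the first
    observation, which is one of several common conventions.
    """
    if not 0 < alpha <= 1:
        raise ValueError("alpha must lie in (0, 1]")
    s = xs[0]
    smoothed = [s]
    for x in xs[1:]:
        s = alpha * x + (1 - alpha) * s
        smoothed.append(s)
    return smoothed
```

With alpha = 0.5 the series [1, 2, 3] smooths to [1, 1.5, 2.25]: recent observations dominate, but older ones never vanish entirely.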
Moving average
In statistics, a moving average (rolling average or running average) is a calculation to analyze data points by creating a series of averages of different subsets of the full data set. It is also called a moving mean or rolling mean, and it is a type of finite impulse response filter.
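A minimal sketch of a simple moving average (the function name and full-window convention are this example's choices): each output value is the unweighted mean of the k most recent observations.

```python
def moving_average(xs, k):
    """Simple moving average with window length k.

    Returns len(xs) - k + 1 values, one per full window; no partial
    windows at the edges (an illustrative convention).
    """
    if k < 1 or k > len(xs):
        raise ValueError("window length must satisfy 1 <= k <= len(xs)")
    return [sum(xs[i:i + k]) / k for i in range(len(xs) - k + 1)]
```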
Fourier analysis
In mathematics, Fourier analysis (/ˈfʊrieɪ, -iər/) is the study of the way general functions may be represented or approximated by sums of simpler trigonometric functions. Fourier analysis grew from the study of Fourier series.
Economic data
Economic data are data describing an actual economy, past or present. These are typically found in time-series form, that is, covering more than one time period (say, the monthly unemployment rate for the last few years).
Trend-stationary process
In the statistical analysis of time series, a trend-stationary process is a stochastic process from which an underlying trend (function solely of time) can be removed, leaving a stationary process. The trend does not have to be linear.
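The idea can be sketched by removing an ordinary-least-squares linear trend (the helper name and the restriction to a linear trend are assumptions of this example; the underlying trend need only be a deterministic function of time):

```python
def detrend_linear(xs):
    """Remove an OLS-fitted linear trend a + b*t from a series.

    Illustrative sketch: fits slope b and intercept a by least squares
    against t = 0, 1, 2, ... and returns the residual series.
    Assumes len(xs) >= 2.
    """
    n = len(xs)
    t_mean = (n - 1) / 2  # mean of 0..n-1
    x_mean = sum(xs) / n
    num = sum((t - t_mean) * (x - x_mean) for t, x in enumerate(xs))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den
    a = x_mean - b * t_mean
    return [x - (a + b * t) for t, x in enumerate(xs)]
```

Applied to a perfectly linear series such as [1, 3, 5, 7], the residuals are all zero; for a trend-stationary process they would form a stationary series.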
Decomposition of time series
The decomposition of time series is a statistical task that deconstructs a time series into several components, each representing one of the underlying categories of patterns. There are two principal forms of decomposition: additive and multiplicative.
Whittle likelihood
In statistics, Whittle likelihood is an approximation to the likelihood function of a stationary Gaussian time series. It is named after the mathematician and statistician Peter Whittle, who introduced it in his PhD thesis in 1951.
Mean absolute error
In statistics, mean absolute error (MAE) is a measure of errors between paired observations expressing the same phenomenon. Examples of Y versus X include comparisons of predicted versus observed, subsequent time versus initial time, and one technique of measurement versus an alternative technique of measurement.
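MAE has a direct one-line definition; a small sketch (names are illustrative):

```python
def mean_absolute_error(y_true, y_pred):
    """MAE: the average of |y_i - x_i| over paired observations."""
    if len(y_true) != len(y_pred):
        raise ValueError("inputs must be paired, i.e. the same length")
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```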
Phase dispersion minimization
Phase dispersion minimization (PDM) is a data analysis technique that searches for periodic components of a time series data set. It is useful for data sets with gaps, non-sinusoidal variations, poor time coverage, or other problems that would make Fourier techniques unusable.
Seasonal adjustment
Seasonal adjustment or deseasonalization is a statistical method for removing the seasonal component of a time series. It is usually done when wanting to analyse the trend, and cyclical deviations from trend, of a time series independently of the seasonal components. The seasonally adjusted annual rate (SAAR) is a rate that is adjusted to take into account typical seasonal fluctuations in data and is expressed as an annual total. SAARs are used for data affected by seasonality.
Berlin procedure
The Berlin procedure (BV) is a mathematical procedure for time series decomposition and seasonal adjustment of monthly and quarterly economic time series. The mathematical foundations of the procedure were developed at the Technical University of Berlin and the German Institute for Economic Research (DIW).
Secular variation
The secular variation of a time series is its long-term, non-periodic variation (see decomposition of time series). Whether a variation is perceived as secular or not depends on the available timescale: a variation that is secular over the available record may prove to be part of a slow periodic variation over a longer one.
Time Warp Edit Distance
Time Warp Edit Distance (TWED) is a measure of similarity (or dissimilarity) for discrete time series matching with time 'elasticity'. In comparison to other distance measures (e.g. DTW (dynamic time warping) or LCS (longest common subsequence)), TWED is a metric.
Order of integration
In statistics, the order of integration, denoted I(d), of a time series is a summary statistic, which reports the minimum number of differences required to obtain a covariance-stationary series.
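The differencing the definition refers to can be sketched as repeated application of the first-difference operator (the function name is this example's choice); a series that needs d rounds of differencing to become covariance-stationary is I(d).

```python
def difference(xs, d=1):
    """Apply the first-difference operator (x_t - x_{t-1}) d times.

    Each pass shortens the series by one element; an I(d) series needs
    d passes to become covariance-stationary.
    """
    for _ in range(d):
        xs = [b - a for a, b in zip(xs, xs[1:])]
    return xs
```

For instance, differencing the quadratic series [0, 1, 4, 9, 16] twice yields the constant series [2, 2, 2], so a deterministic quadratic trend is removed by two differences.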
Bayesian structural time series
Bayesian structural time series (BSTS) model is a statistical technique used for feature selection, time series forecasting, nowcasting, inferring causal impact and other applications. The model is designed to work with time series data.
Chain linking
Chain linking is a statistical method, defined by the Organisation for Economic Co-operation and Development as: Joining together two indices that overlap in one period by rescaling one of them to make its value equal to that of the other in the same period, thus combining them into a single time series.
Time-series segmentation
Time-series segmentation is a method of time-series analysis in which an input time-series is divided into a sequence of discrete segments in order to reveal the underlying properties of its source.
Correlation function
A correlation function is a function that gives the statistical correlation between random variables, contingent on the spatial or temporal distance between those variables. If one considers the correlation function between random variables representing the same quantity measured at two different points, then this is often referred to as an autocorrelation function.
State observer
In control theory, a state observer or state estimator is a system that provides an estimate of the internal state of a given real system, from measurements of the input and output of the real system.
Unevenly spaced time series
In statistics, signal processing, and econometrics, an unevenly (or unequally or irregularly) spaced time series is a sequence of observation time and value pairs (tn, Xn) in which the spacing of observation times is not constant.
High frequency data
High frequency data refers to time-series data collected at an extremely fine scale. As a result of advanced computational power in recent decades, high frequency data can be collected accurately at very fine temporal resolution.
Partial autocorrelation function
In time series analysis, the partial autocorrelation function (PACF) gives the partial correlation of a stationary time series with its own lagged values, regressed on the values of the time series at all shorter lags.
Rising moving average
The rising moving average is a technical indicator used in stock market trading. Most commonly found visually, the pattern is spotted with a moving average overlay on a stock chart or price series.
Approximate entropy
In statistics, an approximate entropy (ApEn) is a technique used to quantify the amount of regularity and the unpredictability of fluctuations over time-series data. For example, two series can share the same mean and variance yet differ markedly in regularity; ApEn assigns the more regular series a lower value.
Stationary sequence
In probability theory – specifically in the theory of stochastic processes, a stationary sequence is a random sequence whose joint probability distribution is invariant over time.
Structural break
In econometrics and statistics, a structural break is an unexpected change over time in the parameters of regression models, which can lead to huge forecasting errors and unreliability of the model in general.
Least-squares spectral analysis
Least-squares spectral analysis (LSSA) is a method of estimating a frequency spectrum, based on a least squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most widely used spectral method in science, generally boosts long-periodic noise in records with long gaps; LSSA mitigates such problems.
Anomaly (natural sciences)
In the natural sciences, especially in atmospheric and Earth sciences involving applied statistics, an anomaly is a persisting deviation in a physical quantity from its expected value, e.g., the systematic difference between a measurement and a trend or long-term mean.
Cointegration
Cointegration is a statistical property of a collection (X1, X2, ..., Xk) of time series variables. First, all of the series must be integrated of order d (see Order of integration). Next, if a linear combination of this collection is integrated of order less than d, then the collection is said to be co-integrated.
Bispectrum
In mathematics, in the area of statistical analysis, the bispectrum is a statistic used to search for nonlinear interactions.
Measuring economic worth over time
The measurement of economic worth over time is the problem of relating past prices, costs, values and proportions of social production to current ones. For a number of reasons, relating any past indicator to a present-day one is not straightforward.
Cross-correlation matrix
The cross-correlation matrix of two random vectors is a matrix containing as elements the cross-correlations of all pairs of elements of the random vectors. The cross-correlation matrix is used in various digital signal processing algorithms.
Lag operator
In time series analysis, the lag operator (L) or backshift operator (B) operates on an element of a time series to produce the previous element. For example, given some time series X = {X1, X2, ...}, then L Xt = Xt-1 for all t > 1, or similarly in terms of the backshift operator, B Xt = Xt-1.
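On a finite series stored as a list, the backshift operation can be sketched as follows (the padding convention for the first k entries is an assumption of this example):

```python
def lag(xs, k=1):
    """Backshift a finite series by k steps: element t becomes x_{t-k}.

    The first k positions have no predecessor and are padded with None,
    an illustrative convention for finite samples.
    """
    if k == 0:
        return list(xs)
    return [None] * k + list(xs[:-k])
```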
Hodrick–Prescott filter
The Hodrick–Prescott filter (also known as Hodrick–Prescott decomposition) is a mathematical tool used in macroeconomics, especially in real business cycle theory, to remove the cyclical component of a time series from raw data.
Time reversibility
A mathematical or physical process is time-reversible if the dynamics of the process remain well-defined when the sequence of time-states is reversed. A deterministic process is time-reversible if the time-reversed process satisfies the same dynamic equations as the original process.
Divisia index
A Divisia index is a theoretical construct to create index number series for continuous-time data on prices and quantities of goods exchanged. The name comes from François Divisia, who first proposed it in 1926.
Moving average crossover
In the statistics of time series, and in particular the stock market technical analysis, a moving-average crossover occurs when, on plotting two moving averages each based on different degrees of smoothing, the traces of the two averages cross each other.
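Given two already-computed moving-average traces of equal length, the crossover points can be sketched as the indices where their difference changes sign (the function name and the strict sign-change test are this example's choices):

```python
def crossovers(fast, slow):
    """Indices where the fast moving average crosses the slow one.

    Detects a crossover at index i when the sign of (fast - slow)
    strictly flips between i-1 and i; touches (equal values) are not
    counted, an illustrative convention.
    """
    return [i for i in range(1, len(fast))
            if (fast[i - 1] - slow[i - 1]) * (fast[i] - slow[i]) < 0]
```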
Kernel (statistics)
The term kernel is used in statistical analysis to refer to a window function. The term "kernel" has several distinct meanings in different branches of statistics.
Time series
In mathematics, a time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time. Thus it is a sequence of discrete-time data.
Tracking signal
In statistics and management science, a tracking signal monitors any forecasts that have been made in comparison with actuals, and warns when there are unexpected departures of the outcomes from the forecasts.
Deflator
In statistics, a deflator is a value that allows data to be measured over time in terms of some base period, usually through a price index, in order to distinguish between changes in the money value of output arising from price changes and those arising from changes in real output.
Mean absolute scaled error
In statistics, the mean absolute scaled error (MASE) is a measure of the accuracy of forecasts. It is the mean absolute error of the forecast values, divided by the mean absolute error of the in-sample one-step naive forecast.
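The definition can be sketched directly (names are illustrative; y_train is the in-sample history that anchors the naive one-step forecast):

```python
def mase(y_true, y_pred, y_train):
    """Mean absolute scaled error.

    MAE of the forecasts divided by the MAE of the in-sample one-step
    naive forecast (which predicts each training value by its
    predecessor). Values below 1 indicate the forecast beats the naive
    method's in-sample accuracy.
    """
    mae_forecast = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
    mae_naive = sum(abs(b - a) for a, b in zip(y_train, y_train[1:])) / (len(y_train) - 1)
    return mae_forecast / mae_naive
```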
Seasonal subseries plot
Seasonal subseries plots are a graphical tool to visualize and detect seasonality in a time series. They involve the extraction of the seasons from a time series into a subseries.
Interrupted time series
Interrupted time series analysis (ITS), sometimes known as quasi-experimental time series analysis, is a method of statistical analysis involving tracking a long-term period before and after a point of intervention to assess the intervention's effects.
Forecasting
Forecasting is the process of making predictions based on past and present data. Later these can be compared (resolved) against what happens. For example, a company might estimate their revenue in the coming year, then compare it against the actual results.