Time series forecasting is a statistical modelling discipline that focuses on the historical behavior of a single metric.
In other words, it is not concerned with predicting the value of one metric, say GDP, from the value of another metric, say interest rates.
Throughout, we loosely use the terms metric, measure, and instrument interchangeably to mean the same thing: a numeric value recorded chronologically so that calculations can extract meaningful insights from it.
So what is a time series, and how is it recorded in this chronological fashion?
A time series consists of a sequence of equally spaced data points for an item or instrument of interest.
Time series analysis is the process of examining a time series to extract meaningful information that can later be used to develop a forecasting model.
Periodicity
The periodicity of a time series is the frequency at which its data points are recorded.
For instance, are the data points hourly, daily, weekly, monthly, quarterly, or yearly?
This will play directly into the seasonal component covered below.
With time series, we can perform aggregations, but only from more granular to less granular. For instance, we can aggregate from hourly to daily to weekly to monthly to quarterly to yearly.
However, we cannot aggregate down from yearly to quarterly to monthly and so on.
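This one-way aggregation can be sketched in a few lines of pure Python. The daily series below is hypothetical; the point is that summing daily values into monthly buckets is trivial, while recovering the daily values from the monthly totals is impossible.

```python
from datetime import date, timedelta

# Hypothetical daily series: 60 days of unit sales starting 2024-01-01.
start = date(2024, 1, 1)
daily = {start + timedelta(days=i): 10 + (i % 7) for i in range(60)}

# Aggregate up: sum daily values into monthly buckets (more -> less granular).
monthly = {}
for day, value in daily.items():
    key = (day.year, day.month)
    monthly[key] = monthly.get(key, 0) + value

# One total per (year, month); the reverse mapping cannot be reconstructed.
print(monthly)
```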
Components
A metric’s historical behavior usually includes a mixture of the following components: trend, seasonality, cyclicality, and irregularity.
Keep in mind that not every metric will exhibit all of these components. For instance, for a metric that's recorded yearly, such as yearly wolf populations, we won't be able to extract seasonal effects.
Each of these components needs to be correctly identified and estimated to produce an accurate prediction.

Trend captures the direction an instrument has taken in recent history.
This component is heavily analyzed in the stock market: is it trending up, down, or sideways?
A seasonal component of a time series can be introduced when a time series reacts differently for each month, quarter, week, or even day of the week.
Additionally, an instrument may exhibit cycles over longer periods of time that are not tied to the seasons. Think of the business cycle, economic cycles, and other cycles inherent in the historical data points.
After any or all of the above components have been extracted from a time series, all that is left over is the irregular component. The only remaining deterministic factor that might be teased out is what's known as autocorrelation.
Autocorrelation is the correlation between one data point in time and previous data points in time. For instance, a stock's price today might be heavily influenced by its price yesterday: if it was down yesterday, there may be a greater chance it will be down again today.
After assessing autocorrelation, we might conclude that a data point is affected by one data point back, two data points back, all the way up to ten data points back, or perhaps by none at all (no autocorrelation).
In any case, what’s left over after this analysis is pure randomness, or white noise.
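The lag-k autocorrelation described above can be computed directly. This is a minimal pure-Python sketch on a hypothetical series; the `autocorr` helper is our own, not from any library.

```python
# Hypothetical series; lag-1 autocorrelation measures how strongly each
# point correlates with the point immediately before it.
y = [10.0, 12.0, 11.0, 13.0, 12.5, 14.0, 13.0, 15.0]

# Sample autocorrelation at lag k: cross-products of deviations at lag k,
# divided by the total squared deviation of the series.
def autocorr(series, k):
    m = sum(series) / len(series)
    num = sum((series[t] - m) * (series[t - k] - m) for t in range(k, len(series)))
    den = sum((v - m) ** 2 for v in series)
    return num / den

r1 = autocorr(y, 1)
# Values near +/-1 suggest strong dependence on the previous point;
# values near 0 suggest the point is not influenced by its predecessor.
print(round(r1, 3))
```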
Now, after extracting all these components, we must combine them in a meaningful manner in order to make an estimation or prediction.
There are generally two ways in which this is done: additively or multiplicatively, and the names are self-explanatory. In the former method, we add the component estimates together, whereas in the latter, we multiply them.
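The difference is easy to see with numbers. The component estimates below are hypothetical; note that an additive seasonal effect is in the series' own units, while a multiplicative one is a ratio.

```python
# Hypothetical component estimates for one future period.
trend = 120.0         # trend estimate
seasonal_add = 15.0   # seasonal effect in the series' own units (additive form)
seasonal_mul = 1.125  # seasonal effect as a multiplier (multiplicative form)

additive_forecast = trend + seasonal_add        # components are summed
multiplicative_forecast = trend * seasonal_mul  # components are multiplied

print(additive_forecast, multiplicative_forecast)
```

Multiplicative combination is often preferred when the size of the seasonal swing grows with the level of the series.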
Fit Statistics
Fit statistics essentially tell us how well our model fits the time series in question. Here, we use maximum likelihood functions that do exactly what their name implies: they maximize the likelihood of the model describing the time series data.
The most popular fit statistics are:
BIC (Bayesian Information Criterion)
AIC (Akaike Information Criterion)
AICc (Akaike Information Criterion corrected)
Typically, we want to minimize these values: the smaller the statistic, the better the fit.
Accuracy Measures
Accuracy measures behave in a similar fashion to fit stats; however, their function is to assess not how well the model fits the time series in question, but how well it predicts values of the same metric, at the same periodicity, that were held out of the fitting.
Versions of this include:
MAE (Mean Absolute Error)
MAD (Mean Absolute Deviation)
MAPE (Mean Absolute Percentage Error)
MASE (Mean Absolute Scaled Error)
ME (Mean Error)
MPE (Mean Percentage Error)
RMSE (Root Mean Square Error)
Again, we want these values to be as low as possible. The gold standard, or benchmark, used by most statisticians is the MAPE.
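Three of the measures listed above can be computed in a few lines. The actual and forecast values below are hypothetical holdout data, used only to show the formulas.

```python
import math

# Hypothetical holdout: actual values vs. a model's forecasts.
actual   = [100.0, 110.0, 120.0, 130.0]
forecast = [102.0, 108.0, 123.0, 128.0]
errors = [a - f for a, f in zip(actual, forecast)]

# MAE: average magnitude of the errors, in the series' own units.
mae = sum(abs(e) for e in errors) / len(errors)
# MAPE: average magnitude of the errors as a percentage of the actuals.
mape = sum(abs(e) / a for e, a in zip(errors, actual)) / len(errors) * 100
# RMSE: square root of the average squared error; penalizes large misses.
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))

print(mae, round(mape, 3), round(rmse, 3))
```

Note that MAPE breaks down when actual values are zero or near zero, which is one reason scaled alternatives such as MASE exist.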
One additional stat that is calculated across all models is the R-Squared Holdout, which is a pseudo r-square accuracy measure for forecasting.
Models
And since we're measuring the fit and accuracy of our model, we need to address the fact that more than one type of model exists. Let's explore what those are.
Stationary (First Order vs. Second Order)
Stationary Time Series
Most time series models require the series to be stationary before any autoregressive components, differencing, or averaging are applied.
First Order Stationary
A time series is first order stationary when its mean does not change over time; in other words, there is no trend in the data, and the expected value at every time point t is the same.
Second Order Stationary
The series is second order stationary (also called weakly stationary) when, in addition, its variance is constant and the covariance between any two points depends only on the lag between them, not on where they fall in time.
Three general groups of model types include simple, exponential smoothing, and autoregressive.
Simple Models
These models base their estimate on a single value of the series. This could be the overall mean, the very last recorded value (as in a Random Walk or Naïve forecast), or one of these simple values combined with a drift term or seasonal effects (as in a Seasonal Naïve).
Here are the most common types:
Random Walk
Random Walk with Drift (adds the average historical change per period)
Naïve
Seasonal Naïve
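These simple benchmarks are short enough to sketch directly. The monthly history below is hypothetical; each forecast uses only one or two values from the series, which is what makes these models "simple."

```python
# Hypothetical monthly history (two years, seasonal period m = 12).
history = [float(100 + 2 * i + (5 if i % 12 in (10, 11) else 0)) for i in range(24)]
m = 12  # seasonal period
h = 3   # forecast horizon

# Naive / random walk: every future point equals the last observation.
naive = [history[-1]] * h

# Random walk with drift: extend the average historical step per period.
drift = (history[-1] - history[0]) / (len(history) - 1)
rw_drift = [history[-1] + drift * (i + 1) for i in range(h)]

# Seasonal naive: repeat the value from the same season one cycle ago.
seasonal_naive = [history[len(history) - m + (i % m)] for i in range(h)]

print(naive, rw_drift, seasonal_naive)
```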
Time series forecasting is generally not advised for the stock market, where future values are highly unpredictable due to volatility, investor sentiment and emotions, global black swan events, economic disruptions, and so on.
It's more beneficial to use forecasting for more predictable behaviors such as consumer buying patterns, energy usage, travel bookings, and holiday spending: things with strong, predictable seasonality that can be forecast with high accuracy.
However, those that do use time series in the stock market tend to apply random walk with drift.
Exponential Smoothing Models
Whereas an overall mean is just a special case of a weighted mean in which all values have equal weights, other weighted means apply a different weight to each data point.
An exponentially smoothed value is a weighted average of the current observation and all previous observations, with more recent observations taking more importance and, thus, getting higher weights.
Model types included in this category are:
SES (Simple Exponential Smoothing)
Holt’s ES (Holt’s Trended Exponential Smoothing)
Damped ES (Holt’s Trend with a dampening effect on forecasts further into the future)
HW Additive ES (Holt-Winters Additive Exponential Smoothing with Seasonality that adds all components)
HW Multip ES (Holt-Winters Multiplicative Exponential Smoothing with Seasonality that multiplies all components)
ES State Space
TS Decomposition
A time series decomposition model consists of decomposing a time series into trend, seasonal, cyclical, and irregular components. Then each component is explicitly estimated and measured statistically. Each estimated component is then recombined in order to estimate a final model and calculate predictions going forward.
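A classical additive decomposition of this kind can be sketched in pure Python. The quarterly series below is hypothetical and deliberately clean (a steady trend plus an exact seasonal pattern), so the irregular component comes out as zero; real data would leave a nonzero remainder.

```python
# Classical additive decomposition for a hypothetical quarterly series
# (period m = 4): estimate trend with a centered moving average, subtract
# it, average the detrended values by quarter to get the seasonal
# component, and treat whatever remains as the irregular component.
m = 4
y = [10.0, 20.0, 30.0, 20.0, 12.0, 22.0, 32.0, 22.0, 14.0, 24.0, 34.0, 24.0]

# Centered moving average of even order m: average two adjacent m-term means.
trend = [None] * len(y)
for t in range(m // 2, len(y) - m // 2):
    first = sum(y[t - m // 2 : t + m // 2]) / m
    second = sum(y[t - m // 2 + 1 : t + m // 2 + 1]) / m
    trend[t] = (first + second) / 2

# Seasonal component: mean detrended value for each position in the cycle.
detrended = {q: [] for q in range(m)}
for t, tr in enumerate(trend):
    if tr is not None:
        detrended[t % m].append(y[t] - tr)
seasonal = {q: sum(v) / len(v) for q, v in detrended.items()}

# Irregular component: whatever trend and seasonal terms do not explain.
irregular = [y[t] - trend[t] - seasonal[t % m] for t in range(len(y)) if trend[t] is not None]
print(seasonal)
```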
TS Regression
A time series regression model consists of modeling the time series of interest as a dependent variable and then using trend and seasonal components as independent variables used to predict the dependent variable. The time series regression model utilizes all linear, quadratic, cubic, and fourth order trend and seasonal components along with a trend*season interaction term as independent variables. Each variable is incorporated additively.
ARIMA
ARIMA stands for autoregressive integrated moving average. Thus, this type of model is a combination of those elements.
Autoregression – regression of the current value on previous values of the series
Integration – differencing effect added to the time series
Moving average – a regression on the past q forecast errors (not a rolling window of observations)
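The three ingredients can be illustrated in pure Python on a hypothetical series: difference once (the "I"), fit an AR(1) coefficient by least squares, and inspect the one-step errors that an MA term would then model. This is a teaching sketch, not a substitute for a proper ARIMA fitting routine.

```python
# Hypothetical trending series.
y = [100.0, 103.0, 104.0, 108.0, 111.0, 115.0, 116.0, 120.0, 123.0, 127.0]

# I: first difference removes the trend, leaving changes between periods.
d = [y[t] - y[t - 1] for t in range(1, len(y))]

# AR(1): regress each differenced value on its predecessor (no intercept);
# the least-squares slope is phi = sum(x_t * x_{t-1}) / sum(x_{t-1}^2).
num = sum(d[t] * d[t - 1] for t in range(1, len(d)))
den = sum(d[t - 1] ** 2 for t in range(1, len(d)))
phi = num / den

# MA: the moving-average part of ARIMA models these one-step errors.
errors = [d[t] - phi * d[t - 1] for t in range(1, len(d))]
print(round(phi, 3), [round(e, 2) for e in errors])
```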
More Advanced Methods
In this article, we will not dive deeply into more advanced methods. A couple of commonly used advanced methods include the following.
Splines
Neural Nets
How would you like a service or tool that imports a time series spreadsheet, decomposes each instrument, calculates all components (trend, seasonal, etc.), selects the best model by maximizing accuracy, and produces on-demand, automated, downloadable forecasts with analytics? Read on.
MarketRails Forecasting App
Our solution solves these issues in several ways:
- Our service is offered at a fraction of the cost of purchasing commercial software
- We already have the system developed
- The solution is statistically sound
MarketRails.com Forecasting App can input a data set with multiple instruments and output three deliverables:
- Microsoft Excel Data Set of Forecasts (with all instruments)
- Microsoft Word Table of Forecasts (with all instruments)
- Microsoft Word Analytical Report (analyses for all instruments)