Introduction to time series forecasting


From forecasting the weather each day to predicting the future price of an asset or identifying seasonality in a company’s sales revenue, time series forecasting plays an incredibly important role in our personal and professional lives.

Forecasting the future is never an easy task, but in this article we’ll introduce several statistical and machine learning techniques that can help us with it. In particular, we’ll cover the following topics:

  1. What is Time Series Data?
  2. What is Time Series Forecasting?
  3. Statistical Time Series Forecasting Techniques
  4. Machine Learning Time Series Forecasting Techniques
  5. Challenges in Forecasting

Let’s get started.

What is Time Series Data?

A time series is a sequence of numeric data points over an ordered sequence of time steps.

One key difference with time series data is that time is not treated like just another variable, but rather serves as the primary axis.

Time series data is measurable and varies over time, and four components commonly appear in it (a short decomposition sketch follows the list):

  • Level: This refers to the average, baseline value of a time series if it were a straight line.
  • Trends: Time series data often move up or down in reasonably predictable patterns.
  • Seasonality: There are often seasonal variations that repeat over a specific time period.
  • Variability: Also referred to as volatility, this refers to random variations in the data that don’t fall into any of the other categories. Aside from purely random noise, variability can also result from special events such as breaking news or extreme weather.
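
To make these components concrete, here is a minimal sketch, assuming Python with pandas and statsmodels installed and a synthetic monthly series invented purely for illustration, that uses a classical additive decomposition to separate a series into its trend, seasonal and residual parts:

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.seasonal import seasonal_decompose

    # Hypothetical monthly series, invented for illustration:
    # a rising trend plus a yearly seasonal cycle plus random noise.
    rng = np.random.default_rng(0)
    idx = pd.date_range("2018-01-01", periods=48, freq="MS")
    values = (100 + 0.5 * np.arange(48)
              + 10 * np.sin(2 * np.pi * np.arange(48) / 12)
              + rng.normal(0, 2, 48))
    series = pd.Series(values, index=idx)

    # Classical additive decomposition: observed = trend + seasonal + residual
    result = seasonal_decompose(series, model="additive", period=12)
    print(result.trend.dropna().head())   # the level/trend component
    print(result.seasonal.head(12))       # the repeating seasonal pattern
    print(result.resid.dropna().head())   # what remains: the variability

The trend series corresponds to the level and trend components described above, the seasonal series to the repeating seasonal pattern, and the residual to the remaining variability.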

Now that we’ve reviewed what time series data is, let’s look at how we can use it for time series forecasting.

What is Time Series Forecasting?

Time series forecasting is the process of building a model that is trained on historical data and uses those observations to predict future values.

As discussed on Machine Learning Mastery, with time series analysis, the objective of our research is generally to understand why something happened in our time series data. With time series forecasting, on the other hand, we’re not necessarily concerned with why something happened in our dataset, but rather what will happen in the future.

One important distinction in time series forecasting is that future data is completely unavailable to us, so it must be estimated using a statistical or machine learning forecasting technique.

Statistical Time Series Forecasting Techniques

Now that we have a high-level definition of time series forecasting, let’s review several statistical techniques that can help us accomplish the task at hand.

First, let’s review several univariate time series forecasting techniques -- that is, techniques for data with only a single time-dependent variable. For example, let’s say that we’re tasked with predicting the daily temperature; a short code sketch follows the list.

  • Naive Approach: The simplest possible way we could do this is by taking the previous day’s temperature and using it to estimate the next day’s temperature. Mathematically, ŷ(t+1) = y(t): tomorrow’s forecast is simply today’s observed value.
  • Simple Average: Also referred to as the arithmetic average, the simple average is computed by taking the sum of all past observations and dividing it by the total number of observations in the time period.
  • Weighted Average: The weighted average is similar to the simple average, except that each data value is multiplied by a weight before averaging. For example, if a math class has three tests each semester and each test is weighted differently, the weighted average combines them according to how much each one contributes to the final grade. In forecasting, more weight is typically given to recent observations.
  • Simple Moving Average (SMA): Another way to build on the simple average is to only take the average for different subsets of the entire time period. For example, it is common in investing to take the average price for the previous 10, 20, 50, 100 or 200 days.
  • Exponential Smoothing: Exponential smoothing is similar to the simple moving average, but instead it places greater significance on the most recent data points. In other words, an SMA weights all past observations equally, whereas exponential smoothing assigns exponentially decreasing weights over time.
  • Double and triple exponential smoothing: Double exponential smoothing (Holt’s method) adds a trend component to the forecast, and triple exponential smoothing (also known as Holt-Winters forecasting) adds seasonality as well, so the forecast takes into account both long-term trends and seasonal patterns.
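
As a rough sketch of what these univariate techniques can look like in code, assuming Python with pandas and statsmodels and a hypothetical daily temperature series invented for illustration:

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import SimpleExpSmoothing, ExponentialSmoothing

    # Hypothetical daily temperature series, invented for illustration
    rng = np.random.default_rng(1)
    idx = pd.date_range("2023-01-01", periods=365, freq="D")
    temps = pd.Series(15 + 10 * np.sin(2 * np.pi * np.arange(365) / 365)
                      + rng.normal(0, 2, 365), index=idx)

    # Naive approach: tomorrow's forecast is simply today's value
    naive_forecast = temps.iloc[-1]

    # Simple average: the mean of all past observations
    simple_avg_forecast = temps.mean()

    # Simple moving average: the mean of only the last 20 observations
    sma_forecast = temps.rolling(window=20).mean().iloc[-1]

    # Exponential smoothing: recent observations get exponentially larger weights
    ses_fit = SimpleExpSmoothing(temps, initialization_method="estimated").fit(
        smoothing_level=0.3, optimized=False)
    ses_forecast = ses_fit.forecast(1)

    # Triple exponential smoothing (Holt-Winters): adds trend and weekly seasonality
    hw_fit = ExponentialSmoothing(temps, trend="add", seasonal="add",
                                  seasonal_periods=7,
                                  initialization_method="estimated").fit()
    hw_forecast = hw_fit.forecast(7)  # forecast the next week

    print(naive_forecast, simple_avg_forecast, sma_forecast)
    print(ses_forecast.iloc[0], hw_forecast.iloc[0])

In practice, values such as the smoothing level and the seasonal period would be chosen from the data rather than hard-coded as they are in this sketch.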

As mentioned, up until this point we’ve only looked at univariate time series data. In the real world, however, data is often in the form of a multivariate time series, which means that there are multiple variables that influence the forecast of future values.

Two of the most common techniques for forecasting multivariate time series data include Vector Auto Regression (VAR) and AutoRegressive Integrated Moving Average (ARIMA).

  • Vector Auto Regression (VAR): In a VAR model, each variable is a linear function of its own past values, as well as the past values of all other variables. This structure allows VAR models to capture linear interdependencies among multiple evolving variables.
  • AutoRegressive Integrated Moving Average (ARIMA): ARIMA is another forecasting model that aims to describe a given time series based on its own past values and past forecast errors. In other words, it aims to ‘explain’ the correlations in the data in order to forecast future values; exogenous variables can be added (the ARIMAX extension) when other series influence the forecast.

If you want to see an example of a Python implementation of VAR for multivariate time series data, check out this article from Analytics Vidhya.
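
For a quick, non-authoritative illustration of both models, here is a minimal sketch assuming Python with statsmodels and a small DataFrame of hypothetical, roughly stationary series invented for the example:

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR
    from statsmodels.tsa.arima.model import ARIMA

    # Hypothetical multivariate data: two related, roughly stationary series
    rng = np.random.default_rng(2)
    idx = pd.date_range("2022-01-01", periods=200, freq="D")
    traffic = rng.normal(500, 20, 200)
    sales = 0.1 * traffic + rng.normal(50, 5, 200)   # sales loosely follow traffic
    df = pd.DataFrame({"sales": sales, "traffic": traffic}, index=idx)

    # VAR: each variable is regressed on its own lags and the lags of the others
    var_results = VAR(df).fit(2)                     # lag order fixed at 2 for simplicity
    var_forecast = var_results.forecast(df.values[-2:], steps=10)

    # ARIMA: a single series modeled from its own past values and past errors
    arima_results = ARIMA(df["sales"], order=(2, 0, 1)).fit()  # (p, d, q) chosen for illustration
    arima_forecast = arima_results.forecast(steps=10)

    print(var_forecast[:3])
    print(arima_forecast.head(3))

In a real project, the lag order and the (p, d, q) values would be selected by examining the data and using criteria such as AIC rather than being fixed as they are here.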

Machine Learning Time Series Forecasting Techniques

So far we have discussed the classical statistical approaches to time series forecasting, but there are also many machine learning models at our disposal.

A few of the most common machine learning techniques for time series forecasting include (a lag-feature sketch follows the list):

  • XGBoost: XGBoost is an open-source library that provides a framework for gradient boosting, a machine learning technique for classification and regression problems that builds a prediction model as an ensemble of weak learners, typically decision trees.
  • Principal Component Regression (PCR): Based on the common statistical technique Principal Component Analysis (PCA), PCR is a regression analysis technique that is often used when the explanatory variables are highly correlated.
  • Recurrent Neural Networks (RNNs): RNNs are a class of artificial neural network that is often used for time series forecasting. Unlike feedforward networks, RNNs have connections that carry information from one time step to the next, giving them an internal state (a form of memory) that allows them to process sequences of temporal inputs such as time series data.
  • Long Short-Term Memory Networks (LSTMs): LSTM networks are a variant of RNNs that specialize in learning longer sequences of time series data, which makes them particularly useful in time series forecasting.
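
To show how a machine learning model can be pointed at a time series, here is a hedged sketch, assuming Python with pandas and the xgboost package and a hypothetical series invented for illustration, that reframes forecasting as a supervised learning problem by building lag features:

    import numpy as np
    import pandas as pd
    from xgboost import XGBRegressor

    # Hypothetical daily series with a weekly pattern, invented for illustration
    rng = np.random.default_rng(3)
    y = pd.Series(50 + 5 * np.sin(2 * np.pi * np.arange(500) / 7) + rng.normal(0, 1, 500))

    # Reframe forecasting as supervised learning: predict y[t] from its previous 7 lags
    n_lags = 7
    frame = pd.concat({f"lag_{i}": y.shift(i) for i in range(1, n_lags + 1)}, axis=1)
    frame["target"] = y
    frame = frame.dropna()

    X, target = frame.drop(columns="target"), frame["target"]
    X_train, X_test = X.iloc[:-30], X.iloc[-30:]          # hold out the last 30 steps
    y_train, y_test = target.iloc[:-30], target.iloc[-30:]

    # Gradient-boosted trees trained on the lag features
    model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
    model.fit(X_train, y_train)
    preds = model.predict(X_test)
    print("MAE:", np.mean(np.abs(preds - y_test.values)))

The same sliding-window framing, where a window of past values predicts the next value, is also how data is typically shaped before being fed into an RNN or LSTM.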

Challenges in Forecasting

As we’ve discussed, accurate forecasting within a business is one of the most important aspects of corporate planning. Regardless of whether you use a statistical or machine learning method, however, building an accurate forecasting model is a challenging task.

A few of the main challenges in forecasting include:

  • Finite Data: If you go with a machine learning forecasting model, the first challenge is finding enough data to use as input. For example, if you are a new company and only have one year of sales data, accurately forecasting the next year isn’t realistic with the data available.
  • Factor Overload: The fact that machine learning can incorporate millions of factors into the model is both a strength and a challenge in forecasting. On the one hand, you would never be able to incorporate all of these factors into a manual forecasting method; on the other, it is a challenge to identify which factors actually influence subsequent time periods.
  • Curse of Dimensionality: As mentioned, time series data always provides a finite amount of data to train the model on. And because more and more factors, or dimensions, can be added to the model, another challenge is the diminishing return in accuracy from each additional data dimension.

If you choose not to build your own forecasting model, you can use a third-party SaaS platform that has mastered these techniques and "productized" the entire process. This means that you can simply input your time series data sources, specify the parameters you wish to forecast, and start getting forecasts for your data right away.

Summary: Time Series Forecasting

We have introduced the key concepts in time series forecasting, including the unique traits of time series data, and summarized several statistical and machine learning techniques for forecasting.

As discussed, for univariate time series data we can use statistical techniques including simple averages, moving averages, and exponential smoothing. For multivariate time series data, two of the most common statistical techniques are Vector Auto Regression (VAR) and AutoRegressive Integrated Moving Average (ARIMA).

If, instead, we want to use a machine learning algorithm, there are several techniques available, including XGBoost and Principal Component Regression (PCR). In addition, Recurrent Neural Networks, a class of deep learning models, and particularly LSTM networks, are very useful in time series forecasting.


Ira Cohen is chief data scientist and co-founder of Anodot, where he develops real-time multivariate anomaly detection algorithms designed to oversee millions of time series signals. He holds a PhD in machine learning from the University of Illinois at Urbana-Champaign and has more than 12 years of industry experience.

