Time Series Machine Learning (and Feature Engineering) in R
Written by Matt Dancho
Machine learning is a powerful way to analyze Time Series. With innovations in the tidyverse modeling infrastructure (tidymodels), we now have a common set of packages to perform machine learning in R. These packages include parsnip, recipes, tune, and workflows. But what about Machine Learning with Time Series Data? The key is Feature Engineering. (Read the updated article at Business Science)
The timetk package has a feature engineering innovation in version 0.1.3: a recipe step called step_timeseries_signature() for Time Series Feature Engineering, designed to fit right into the tidymodels workflow for machine learning with time series data.
This small innovation creates 25+ time series features, which has a big impact on improving our machine learning models. Further, these "core features" are the basis for creating 200+ time series features to improve forecasting performance. Let's see how to do Time Series Machine Learning in R.
Time Series Feature Engineering with the Time Series Signature
Use feature engineering with timetk to forecast
The time series signature is a collection of useful engineered features that describe the time series index of a time-based data set. It contains 25+ time series features that can be used to forecast time series that contain common seasonal and trend patterns:
- Trend in Seconds Granularity: index.num
- Yearly Seasonality: Year, Month, Quarter
- Weekly Seasonality: Week of Month, Day of Month, Day of Week, and more
- Daily Seasonality: Hour, Minute, Second
- Weekly Cyclic Patterns: 2 weeks, 3 weeks, 4 weeks
We can then build 200+ new features from these core 25+ features by applying well-thought-out time series feature engineering strategies.
Time Series Forecast Strategy
6-Month Forecast of Bike Transaction Counts
In this tutorial, you will learn methods to implement machine learning to predict future outcomes in a time-based data set. The tutorial example uses a well-known time series dataset, the Bike Sharing Dataset, from the UCI Machine Learning Repository. The objective is to build a model and predict the next 6 months of Bike Sharing daily transaction counts.
Feature Engineering Strategy
I'll use timetk to build a basic Machine Learning Feature Set using the new step_timeseries_signature() function, which is part of the preprocessing specification via the recipes package. I'll show how you can add interaction terms, dummy variables, and more to build 200+ new features from the pre-packaged feature set.
Machine Learning Strategy
We'll then perform Time Series Machine Learning using parsnip and workflows to construct and train a GLM-based time series machine learning model. The model is evaluated on out-of-sample data. A final model is trained on the full dataset, then extended to a future dataset containing 6 months of daily timestamps.
Time Series Forecast using Feature Engineering
How to Learn Forecasting Beyond this Tutorial
I can't possibly show you all the Time Series Forecasting techniques you need to learn in this post, which is why I have a NEW Advanced Time Series Forecasting Course on its way. The course includes detailed explanations from 3 Time Series Competitions. We go over competition solutions and show how you can integrate the key strategies into your organization's time series forecasting projects. Check out the course page, and sign up to get notifications on the Advanced Time Series Forecasting Course (coming soon).
Need to improve forecasting at your company?
I have the Advanced Time Series Forecasting Course (Coming Soon). This course pulls forecasting strategies from experts that have placed 1st and 2nd solutions in 3 of the most important Time Series Competitions. Learn the strategies that win forecasting competitions. Then apply them to your time series projects.
Join the waitlist to get notified of the Course Launch!
Join the Advanced Time Series Course Waitlist
Prerequisites
Please use timetk 0.1.3 or greater for this tutorial. You can install via remotes::install_github("business-science/timetk") until released on CRAN.
Before we get started, load the following packages.
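A minimal package load for this tutorial might look like the following (a sketch, assuming the tidymodels meta-package and timetk are installed):

```r
# Core modeling infrastructure: loads parsnip, recipes, workflows, yardstick, ggplot2, dplyr, ...
library(tidymodels)
library(timetk)     # time series feature engineering (step_timeseries_signature, etc.)
library(readr)      # read_csv() for loading day.csv
library(lubridate)  # date helpers such as ymd()
```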
Data
We'll be using the Bike Sharing Dataset from the UCI Machine Learning Repository. Download the data and select the "day.csv" file, which is aggregated to daily periodicity.
A visualization will help us understand how we plan to tackle the problem of forecasting the data. We'll split the data into two regions: a training region and a testing region.
Split the data into train and test sets at "2012-07-01".
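A sketch of the load-and-split step (the column names `dteday` and `cnt` come from the UCI "day.csv" file; the tibble names `date`/`value` and the object names are my own choices):

```r
# Read the daily bike sharing data, keeping only the timestamp and transaction count
bikes <- read_csv("day.csv") %>%
  select(date = dteday, value = cnt)

# Training region: everything before 2012-07-01; testing region: everything after
train_tbl <- bikes %>% filter(date <  ymd("2012-07-01"))
test_tbl  <- bikes %>% filter(date >= ymd("2012-07-01"))
```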
Modeling
Start with the training set, which has the "date" and "value" columns.
Recipe Preprocessing Specification
The first step is to add the time series signature to the training set, which will be used to learn the patterns. New in timetk 0.1.3 is integration with the recipes R package:
- The recipes package allows us to add preprocessing steps that are applied sequentially as part of a data transformation pipeline.
- The timetk package has step_timeseries_signature(), which is used to add a number of features that can help machine learning models.
When we apply the prepared recipe (prep()) using the bake() function, we go from 2 features to 29 features! Yes, 25+ new columns were added from the timestamp "date" feature. These are features we can use in our machine learning models and build on top of.
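The recipe step described above might be sketched like this (assuming the `train_tbl` tibble with "date" and "value" columns; `recipe_spec` is an illustrative name):

```r
# Base recipe: derive the 25+ time series signature features from the "date" column
recipe_spec <- recipe(value ~ ., data = train_tbl) %>%
  step_timeseries_signature(date)

# prep() estimates the recipe on the training data; bake() applies it.
# Inspect the result: we go from 2 columns to the full signature feature set.
recipe_spec %>% prep() %>% bake(new_data = train_tbl) %>% glimpse()
```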
Building Engineered Features on Top of our Recipe
Next is where the magic happens. I apply various preprocessing steps to improve the modeling behavior, going from 29 features to 225 engineered features! If you wish to learn more, I have an Advanced Time Series course that will help you learn these techniques.
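One possible set of engineering steps is sketched below. The signature column names are prefixed with "date_" because the index column is named "date"; the exact steps (what to remove, normalize, interact, and dummy) are illustrative choices, not the only reasonable ones:

```r
recipe_spec_final <- recipe_spec %>%
  step_rm(date) %>%                                    # drop the raw timestamp
  step_rm(contains("iso"), contains("second"),
          contains("minute"), contains("hour"),
          contains("am.pm"), contains("xts")) %>%      # remove features with no variance at daily granularity
  step_normalize(contains("index.num"), date_year) %>% # center/scale the trend and year terms
  step_interact(~ date_month.lbl * date_day) %>%       # interaction: month x day-of-month
  step_interact(~ date_month.lbl * date_wday.lbl) %>%  # interaction: month x day-of-week
  step_dummy(contains("lbl"), one_hot = TRUE)          # one-hot encode the labeled factors
```

The interactions and one-hot dummies are what multiply the 29 signature columns into 200+ engineered features.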
Model Specification
Next, let's create a model specification. We'll use a glmnet model.
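A parsnip specification for a glmnet model might look like this (the `penalty` and `mixture` values are illustrative, not tuned):

```r
# Elastic net regression via the glmnet engine
model_spec_glmnet <- linear_reg(penalty = 0.01, mixture = 0.5) %>%
  set_engine("glmnet")
```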
Workflow
We can marry up the preprocessing recipe and the model using a workflow().
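A sketch of the workflow, assuming the recipe and model objects from the previous steps:

```r
# Bundle the preprocessing recipe and the model specification together
workflow_glmnet <- workflow() %>%
  add_recipe(recipe_spec_final) %>%
  add_model(model_spec_glmnet)
```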
Training
The workflow can be trained with the fit() function.
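Training is a single call (assuming the `workflow_glmnet` and `train_tbl` objects from earlier):

```r
# Fit the recipe + model pipeline on the training region
workflow_trained <- workflow_glmnet %>%
  fit(data = train_tbl)
```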
Visualize the Test (Validation) Forecast
With a suitable model in hand, we can forecast using the "test" set for validation purposes.
Visualize the results using ggplot().
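The predict-and-plot step might be sketched as follows (assuming the objects from the previous steps; `.pred` is the standard parsnip prediction column):

```r
# Forecast the testing region and bind the actuals for comparison
prediction_tbl <- workflow_trained %>%
  predict(new_data = test_tbl) %>%
  bind_cols(test_tbl)

# Actuals (black) vs out-of-sample predictions (red)
bikes %>%
  ggplot(aes(x = date, y = value)) +
  geom_line() +
  geom_line(data = prediction_tbl, aes(y = .pred), color = "red") +
  labs(title = "Out-of-Sample Forecast (Test Region)")
```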
Validation Accuracy (Out of Sample)
The Out-of-Sample Forecast Accuracy can be measured with yardstick.
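For example, yardstick's metrics() function reports RMSE, R-squared, and MAE in one call (assuming the `prediction_tbl` from the previous step):

```r
# Out-of-sample accuracy on the test region
prediction_tbl %>%
  metrics(truth = value, estimate = .pred)
```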
Next we can visualize the residuals of the test set. The residuals of the model aren't perfect, but we can work with them. The residuals show that the model predicts low in October and high in December.
At this point you might go back to the model and try tweaking features using interactions or polynomial terms, adding other features that may be known in the future (e.g. the temperature of the day can be forecasted relatively accurately within 7 days), or try a completely different modeling technique with the hope of better predictions on the test set. Once you feel that your model is optimized, move on to the final step of forecasting.
This accuracy can be improved significantly with Competition-Level Forecasting Strategies. And, guess what?! I teach these strategies in my NEW Advanced Time Series Forecasting Course (coming soon). Register for the waitlist to get notified.
Forecasting Future Data
Let's use our model to predict the expected future values for the next six months. The first step is to create the date sequence. Let's use tk_get_timeseries_summary() to review the summary of the dates from the original dataset, "bikes".
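The summary call might look like this (assuming the `bikes` tibble with a "date" column):

```r
# Summarize the date index of the full dataset
bikes %>%
  pull(date) %>%
  tk_get_timeseries_summary()
```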
The first six parameters are general summary information.
The second six parameters are the periodicity information.
From the summary, we know that the data is 100% regular because the median and mean differences are 86400 seconds, or 1 day. We don't need to do any special inspections when we use tk_make_future_timeseries(). If the data were irregular, meaning weekends or holidays were excluded, you'd want to account for this. Otherwise your forecast would be inaccurate.
Retrain the model specification on the full data set, then predict the next 6 months.
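A sketch of the retrain-and-predict step, assuming the objects from the earlier steps (note: the length argument to tk_make_future_timeseries() has been named `n_future` in older timetk versions; check your installed version):

```r
# Build ~6 months of future daily dates from the existing index
idx_future <- bikes %>%
  pull(date) %>%
  tk_make_future_timeseries(length_out = 180)

future_tbl <- tibble(date = idx_future)

# Retrain the same workflow on the full dataset
workflow_final <- workflow_glmnet %>%
  fit(data = bikes)

# Predict the future period
future_predictions_tbl <- workflow_final %>%
  predict(new_data = future_tbl) %>%
  bind_cols(future_tbl)
```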
Visualize the forecast.
Forecast Error
A forecast is never perfect. We need prediction intervals to account for the variance from the model predictions to the actual data. There are a number of methods to achieve this. We'll follow the prediction interval methodology from Forecasting: Principles and Practice.
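A simple version of that methodology uses the standard deviation of the out-of-sample residuals to build an approximate 95% interval around each point forecast. A sketch, assuming the `prediction_tbl` and `future_predictions_tbl` objects from earlier:

```r
# Standard deviation of the out-of-sample (test) residuals
test_resid_sd <- prediction_tbl %>%
  mutate(residual = value - .pred) %>%
  pull(residual) %>%
  sd()

# Approximate 95% prediction interval: point forecast +/- 1.96 * residual SD
future_predictions_tbl <- future_predictions_tbl %>%
  mutate(
    lo.95 = .pred - 1.96 * test_resid_sd,
    hi.95 = .pred + 1.96 * test_resid_sd
  )
```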
Now, plotting the forecast with the prediction intervals.
My Key Points on Time Series Machine Learning
Forecasting using the time series signature can be very accurate, especially when time-based patterns are present in the underlying data. As with most machine learning applications, the prediction is only as good as the patterns in the data. Forecasting using this approach may not be suitable when patterns are not present or when the future is highly uncertain (i.e. the past is not a suitable predictor of future performance). However, in many situations the time series signature can provide an accurate forecast.
External Regressors - A huge benefit: One benefit to the machine learning approach that was not covered in this tutorial is that correlated features (including non-time-based ones) can be included in the analysis. This is called adding External Regressors - examples include adding data from weather, financial, energy, Google Analytics, email providers, and more. For example, one can expect that experts in Bike Sharing analytics have access to historical temperature and weather patterns, wind speeds, and so on that could have a significant effect on bicycle sharing. The beauty of this method is that these features can easily be incorporated into the model and prediction.
There is a whole lot more to time series forecasting that we did not cover (read on).
How to Learn Time Series Forecasting?
Here are some techniques you need to learn to become good at forecasting. These techniques are absolutely critical to developing forecasts that will return ROI to your company:
- Preprocessing
- Feature engineering using Lagged Features and External Regressors
- Hyperparameter Tuning
- Time Series Cross Validation
- Using Multiple Modeling Techniques
- Leveraging Autocorrelation
- and more.
All of these techniques are covered in my upcoming Advanced Time Series Course (Register Here). I teach Competition-Winning Forecast Strategies too:
- Ensembling Strategies and Techniques
- Deep Learning Algorithms leveraging Recurrent Neural Networks
- Feature-Based Model Selection
And a whole lot more! The decision should be simple by now - join my course waitlist.
Join the Advanced Time Series Course Waitlist