Introducing Modeltime Ensemble: Time Series Forecast Stacking
Written by Matt Dancho
I'm SUPER EXCITED to introduce modeltime.ensemble. Modeltime Ensemble implements three competition-winning forecasting strategies. This article (recently updated) introduces Modeltime Ensemble, which makes it easy to perform blended and stacked forecasts that improve forecast accuracy.
- We'll quickly introduce you to the growing modeltime ecosystem.
- We'll explain what Modeltime Ensemble does.
- Then, we'll do a Modeltime Ensemble Forecast Tutorial using the modeltime.ensemble package.
If you like what you see, I have an Advanced Time Series Course where you will become the time-series expert for your organization by learning modeltime, modeltime.ensemble, and timetk.
Time Series Forecasting Article Guide:
This article is part of a series of software announcements on the Modeltime Forecasting Ecosystem.
- (Start Here) Modeltime: Tidy Time Series Forecasting using Tidymodels
- Modeltime H2O: Forecasting with H2O AutoML
- Modeltime Ensemble: Time Series Forecast Stacking
- Modeltime Recursive: Tidy Autoregressive Forecasting
- Hyperparameter Tuning Forecasts in Parallel with Modeltime
- Time Series Forecasting Course: Now Available
Like these articles?
Register to stay in the know on new cutting-edge R software like modeltime.
Meet the Modeltime Ecosystem
A growing ecosystem for tidymodels forecasting
Modeltime Ensemble is part of a growing ecosystem of Modeltime forecasting packages. The main purpose of the Modeltime Ecosystem is to develop scalable forecasting systems.
Modeltime Ensemble
The time series ensemble API for Modeltime
Three months ago I introduced modeltime, a new R package that speeds up forecasting experimentation and model selection with Machine Learning (XGBoost, GLMNET, Prophet, Prophet Boost, ARIMA, and ARIMA Boost). Fast-forward to now. I'm thrilled to announce the first expansion to the Modeltime Ecosystem: modeltime.ensemble.
Modeltime Ensemble is a cutting-edge package that integrates competition-winning time series ensembling strategies:
- Stacked Meta-Learners
- Weighted Ensembles
- Average Ensembles
What is a Stacked Ensemble?
Using modeltime.ensemble, you can build something called a Stacked Ensemble. Let's break this down:
- An ensemble is just a combination of models. We can combine them in many ways.
- One method is stacking, which typically uses a "meta-learning algorithm" to learn how to combine "sub-models" (the lower-level models used as inputs to the stacking algorithm).
Stacking Diagram
Here's a Multi-Level Stack, which won the Kaggle Grupo Bimbo Inventory Demand Forecasting Competition.
The Multi-Level Stacked Ensemble that won the Kaggle Grupo Bimbo Inventory Demand Challenge
The multi-level stack can be broken down:
- Level 1 - Sub-Models. Includes models like ARIMA, Elastic Net, Support Vector Machines, or XGBoost. These models each predict independently on the time series data.
- Level 2 - Stacking Algorithms. Stacking algorithms learn how to combine the sub-models by training a "meta-model" on the sub-models' predictions.
- Level 3 - Weighted Stacking. Weighted stacking is a fast and effective approach: we apply weights to the incoming models and average their predictions. We can use it on sub-models, stacked models, or even a combination of stacked and sub-models. We decide the weighting.
I teach how to do a Multi-Level Stack in Module 14 of my High-Performance Time Series Forecasting Course.
What Modeltime Ensemble Functions do I need to know?
Here's the low-down on which functions you'll need to learn to implement different strategies. I teach these in Module 14 of my High-Performance Time Series Forecasting Course.
- Stacked Meta-Learners: Use modeltime_fit_resamples() to create sub-model predictions. Use ensemble_model_spec() to create super learners (models that learn from the predictions of sub-models).
- Weighted Ensembles: Use ensemble_weighted() to create weighted ensemble blends. You choose the weights.
- Average Ensembles: Use ensemble_average() to build simple average and median ensembles. No decisions necessary, but accuracy may be sub-optimal.
Ensemble Tutorial
Forecasting with Weighted Ensembles
Today, I'll cover forecasting Product Sales Demand with Average and Weighted Ensembles, which are fast to implement and can perform well (although stacked ensembles tend to perform better).
Get the Cheat Sheet
As you go through this tutorial, it may help to use the Ultimate R Cheat Sheet. Page 3 Covers the Modeltime Forecasting Ecosystem with links to key documentation.
Forecasting Ecosystem Links (Ultimate R Cheat Sheet)
Modeltime Ensemble Diagram
Here's an ensemble diagram of what we are going to accomplish.
Weighted Stacking, Modeltime Ensemble Diagram
Modeltime Ensemble Functions used in this Tutorial
The idea is that we have several sub-models (Level 1) that make predictions. We can then take these predictions and blend them using weighting and averaging techniques (Level 2):
- Simple Average: Weights all models with the same proportion. Selects the average for each timestamp. Use ensemble_average(type = "mean").
- Median Average: No weighting. Selects the prediction using the centered value for each timestamp. Use ensemble_average(type = "median").
- Weighted Average: User defines the weights (loadings). Applies a weighted average at each of the timestamps. Use ensemble_weighted(loadings = c(1, 2, 3, 4)).
More Advanced Ensembles: Stacked Meta-Learners
The average and weighted ensembles are the simplest approaches to ensembling. Modeltime Ensemble has also integrated Stacked Meta-Learners, which learn from the predictions of sub-models. We won't cover stacked meta-learners in this tutorial, but I teach them in my High-Performance Time Series Course.
Getting Started
Let's kick the tires on modeltime.ensemble.
Install modeltime.ensemble.
Load the following libraries.
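The install and load step might look like this (a sketch; the package is on CRAN, and the library list reflects the tidymodels/modeltime stack used throughout this tutorial):

```r
# Install the ensemble package (pulls in modeltime as a dependency)
install.packages("modeltime.ensemble")

# Core libraries used in this tutorial
library(tidymodels)          # modeling framework (parsnip, recipes, rsample, workflows)
library(modeltime)           # time series models for tidymodels
library(modeltime.ensemble)  # ensembling for modeltime tables
library(tidyverse)           # data wrangling
library(timetk)              # time series data sets, splits, and plotting
```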
Get Your Data
Forecasting Product Sales
Start with our business objective:
Our business objective is to forecast the next 12 weeks of Product Sales Demand given a 2-year sales history.
We'll use the walmart_sales_weekly time series data set, which includes Walmart product transactions from several stores and is a small sample of the dataset from the Kaggle Walmart Recruiting - Store Sales Forecasting competition. We'll simplify the data set to a univariate time series with columns "Date" and "Weekly_Sales" from Store 1 and Department 1.
Next, visualize the dataset with the plot_time_series() function. Toggle .interactive = TRUE to get a plotly interactive plot. FALSE returns a ggplot2 static plot.
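A sketch of the data preparation and visualization, assuming the walmart_sales_weekly data set shipped with timetk (where Store 1 / Department 1 carries the id "1_1"; the tibble name store_1_1_tbl is my own choice):

```r
# Simplify to a univariate series: Store 1, Department 1
store_1_1_tbl <- walmart_sales_weekly %>%
    filter(id == "1_1") %>%
    select(Date, Weekly_Sales)

# Static ggplot2 plot; set .interactive = TRUE for a plotly plot
store_1_1_tbl %>%
    plot_time_series(Date, Weekly_Sales, .interactive = FALSE)
```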
Seasonality Evaluation
Let's do a quick seasonality evaluation to hone in on important features using plot_seasonal_diagnostics().
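The diagnostics call is a one-liner (assuming the Store 1 / Department 1 data is in a tibble named store_1_1_tbl with Date and Weekly_Sales columns):

```r
# Box plots by week-of-year, month, quarter, etc. to surface seasonality
store_1_1_tbl %>%
    plot_seasonal_diagnostics(Date, Weekly_Sales, .interactive = FALSE)
```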
We can see that certain weeks and months of the year have higher sales. These anomalies are likely due to events. The Kaggle Competition informed competitors that Super Bowl, Labor Day, Thanksgiving, and Christmas were special holidays. To approximate the events, week number and month may be good features. Let's come back to this when we preprocess our data.
Train / Test
Split your time series into training and testing sets
Given the objective to forecast 12 weeks of product sales, we use time_series_split() to make a train/test set consisting of 12 weeks of test data (holdout) and the rest for training.
- Setting assess = "12 weeks" tells the function to use the last 12 weeks of data as the testing set.
- Setting cumulative = TRUE tells the sampling to use all of the prior data as the training set.
Next, visualize the train/test split.
- tk_time_series_cv_plan(): Converts the splits object to a data frame.
- plot_time_series_cv_plan(): Plots the time series sampling data using the "date" and "value" columns.
Feature Engineering
We'll make a number of calendar features using recipes. Most of the heavy lifting is done by timetk::step_timeseries_signature(), which generates a series of common time series features. We remove the ones that won't help. After dummying, we have 74 total columns, 72 of which are engineered calendar features.
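One way to build such a recipe (a sketch; the regex filter is illustrative, and the exact features removed determine the final column count):

```r
recipe_spec <- recipe(Weekly_Sales ~ Date, data = training(splits)) %>%
    # Generate calendar features (year, month, week, etc.) from Date
    step_timeseries_signature(Date) %>%
    # Drop features that won't help a weekly series (illustrative pattern)
    step_rm(matches("(iso)|(xts)|(hour)|(minute)|(second)|(am.pm)")) %>%
    # One-hot encode remaining categorical features
    step_dummy(all_nominal(), one_hot = TRUE)
```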
Make Sub-Models
Let's make some sub-models with Modeltime
Now for the fun part! Let's make some models using functions from modeltime and parsnip.
Auto ARIMA
Here's the basic Auto ARIMA Model.
- Model Spec: arima_reg() sets up your general model algorithm and key parameters.
- Set Engine: set_engine("auto_arima") selects the specific package-function to use. You can add any function-level arguments here.
- Fit Model: fit(Weekly_Sales ~ Date, training(splits)). All Modeltime models require a date column to be a regressor.
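Put together, the three steps above form one pipe:

```r
# Auto ARIMA: spec -> engine -> fit on the training set
model_fit_arima <- arima_reg() %>%
    set_engine("auto_arima") %>%
    fit(Weekly_Sales ~ Date, data = training(splits))
```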
Elastic Net
Making an Elastic Net model is easy to do. Just set up your model spec using linear_reg() and set_engine("glmnet"). Note that we have not fitted the model yet (as we did in previous steps).
Next, make a fitted workflow:
- Start with a workflow()
- Add a Model Spec: add_model(model_spec_glmnet)
- Add Preprocessing: add_recipe(recipe_spec %>% step_rm(Date)). Note that I'm removing the "Date" column since Machine Learning algorithms don't typically know how to deal with date or date-time features.
- Fit the Workflow: fit(training(splits))
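A sketch of the Elastic Net workflow (the penalty and mixture values are illustrative; recipe_spec is the calendar-feature recipe built earlier):

```r
# Elastic Net spec: penalty/mixture values are illustrative, not tuned
model_spec_glmnet <- linear_reg(penalty = 0.01, mixture = 0.5) %>%
    set_engine("glmnet")

# Fitted workflow: drop Date since glmnet can't use a raw date column
wflw_fit_glmnet <- workflow() %>%
    add_model(model_spec_glmnet) %>%
    add_recipe(recipe_spec %>% step_rm(Date)) %>%
    fit(training(splits))
```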
XGBoost
We can fit an XGBoost Model using a similar process to the Elastic Net.
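Following the same workflow pattern (default boost_tree() parameters are illustrative; recipe_spec is the calendar-feature recipe from earlier):

```r
model_spec_xgboost <- boost_tree() %>%
    set_mode("regression") %>%
    set_engine("xgboost")

# Same preprocessing as the Elastic Net: remove the raw Date column
wflw_fit_xgboost <- workflow() %>%
    add_model(model_spec_xgboost) %>%
    add_recipe(recipe_spec %>% step_rm(Date)) %>%
    fit(training(splits))
```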
NNETAR
We can use an NNETAR model. Note that add_recipe() uses the full recipe (with the Date column) because this is a Modeltime Model.
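A sketch of the NNETAR fit (modeltime's nnetar_reg() with the "nnetar" engine; recipe_spec is the calendar-feature recipe from earlier):

```r
model_spec_nnetar <- nnetar_reg() %>%
    set_engine("nnetar")

# Full recipe: the Date column stays in, as Modeltime models require it
wflw_fit_nnetar <- workflow() %>%
    add_model(model_spec_nnetar) %>%
    add_recipe(recipe_spec) %>%
    fit(training(splits))
```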
Prophet w/ Regressors
We'll build a Prophet Model with Regressors. This uses the Facebook Prophet forecasting algorithm and supplies all 72 features as regressors to the model. Note: because this is a Modeltime Model, we need a date feature in the recipe.
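A sketch of the Prophet-with-regressors fit (prophet_reg() from modeltime; recipe_spec is the calendar-feature recipe from earlier):

```r
model_spec_prophet <- prophet_reg() %>%
    set_engine("prophet")

# Keep the full recipe: Prophet needs the Date column, and the
# engineered calendar features become external regressors
wflw_fit_prophet <- workflow() %>%
    add_model(model_spec_prophet) %>%
    add_recipe(recipe_spec) %>%
    fit(training(splits))
```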
Sub-Model Evaluation
Let's take a look at our progress so far. We have 5 models. We'll put them into a Modeltime Table to organize them using modeltime_table().
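The table collects the fitted models (names follow the sketches above; submodels_tbl is my own choice of name):

```r
submodels_tbl <- modeltime_table(
    model_fit_arima,
    wflw_fit_glmnet,
    wflw_fit_xgboost,
    wflw_fit_nnetar,
    wflw_fit_prophet
)
```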
We can get the accuracy on the hold-out set using modeltime_accuracy() and table_modeltime_accuracy(). The best model is the Prophet with Regressors with an MAE of 1031.
| .model_id | .model_desc             | .type | mae     | mape  | mase | smape | rmse    | rsq  |
|-----------|-------------------------|-------|---------|-------|------|-------|---------|------|
| 1         | ARIMA(0,0,1)(0,1,0)[52] | Test  | 1359.99 | 6.77  | 1.02 | 6.93  | 1721.47 | 0.95 |
| 2         | GLMNET                  | Test  | 1222.38 | 6.47  | 0.91 | 6.73  | 1349.88 | 0.98 |
| 3         | XGBOOST                 | Test  | 1089.56 | 5.22  | 0.82 | 5.20  | 1266.62 | 0.96 |
| 4         | NNAR(4,1,10)[52]        | Test  | 2529.92 | 11.68 | 1.89 | 10.73 | 3507.55 | 0.93 |
| 5         | PROPHET W/ REGRESSORS   | Test  | 1031.53 | 5.13  | 0.77 | 5.22  | 1226.80 | 0.98 |
And, we can visualize the forecasts with modeltime_forecast() and plot_modeltime_forecast().
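A sketch of the forecast visualization (assuming calibration_tbl is the calibrated sub-model table and store_1_1_tbl holds the full Store 1 / Department 1 data; both names are my own):

```r
calibration_tbl %>%
    modeltime_forecast(
        new_data    = testing(splits),   # forecast over the hold-out period
        actual_data = store_1_1_tbl      # overlay the actual sales history
    ) %>%
    plot_modeltime_forecast(.interactive = FALSE)
```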
Build Modeltime Ensembles
This is exciting.
We'll make Average, Median, and Weighted Ensembles. If you are interested in making Super Learners (meta-learner models that leverage sub-model predictions), I teach this in my new High-Performance Time Series course.
I've made it super simple to build an ensemble from a Modeltime Table. Here's how to use ensemble_average().
- Start with your Modeltime Table of Sub-Models.
- Pipe into ensemble_average(type = "mean").
You now have a fitted average ensemble.
We can make median and weighted ensembles just as easily. Note: for the weighted ensemble, I'm weighting the better-performing models higher.
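All three ensembles in one sketch (submodels_tbl is my name for the sub-model Modeltime Table; the loadings below are illustrative, giving more weight to the stronger sub-models):

```r
# Simple average ensemble
ensemble_fit_avg <- submodels_tbl %>%
    ensemble_average(type = "mean")

# Median ensemble
ensemble_fit_med <- submodels_tbl %>%
    ensemble_average(type = "median")

# Weighted ensemble: one loading per sub-model, better models weighted higher
ensemble_fit_wt <- submodels_tbl %>%
    ensemble_weighted(loadings = c(1, 2, 3, 1, 4))
```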
Ensemble Evaluation
Let's see how we did
We need Modeltime Tables that organize our ensembles before we can assess performance. Just use modeltime_table() to organize the ensembles, as we did for the sub-models.
Let's check out the Accuracy Table using modeltime_accuracy() and table_modeltime_accuracy().
- From MAE, Ensemble Model ID 1 (the mean ensemble) has an MAE of 1000, a 3% improvement over our best sub-model (MAE 1031).
- From RMSE, Ensemble Model ID 3 (the weighted ensemble) has an RMSE of 1228, which is on par with our best sub-model.
| .model_id | .model_desc                   | .type | mae     | mape | mase | smape | rmse    | rsq  |
|-----------|-------------------------------|-------|---------|------|------|-------|---------|------|
| 1         | ENSEMBLE (MEAN): 5 MODELS     | Test  | 1000.01 | 4.63 | 0.75 | 4.58  | 1408.68 | 0.97 |
| 2         | ENSEMBLE (MEDIAN): 5 MODELS   | Test  | 1146.60 | 5.68 | 0.86 | 5.77  | 1310.30 | 0.98 |
| 3         | ENSEMBLE (WEIGHTED): 5 MODELS | Test  | 1056.59 | 5.15 | 0.79 | 5.20  | 1228.45 | 0.98 |
And finally we can visualize the performance of the ensembles.
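The ensemble forecast plot follows the same pattern as the sub-model plot (ensemble_models_tbl and store_1_1_tbl are my own names from the sketches above):

```r
ensemble_models_tbl %>%
    modeltime_calibrate(new_data = testing(splits)) %>%
    modeltime_forecast(
        new_data    = testing(splits),
        actual_data = store_1_1_tbl
    ) %>%
    plot_modeltime_forecast(.interactive = FALSE)
```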
It gets better
You've just scratched the surface. Here's what's coming…
The modeltime.ensemble package functionality is much more feature-rich than what we've covered here (I couldn't possibly cover everything in this post).
Here's what I didn't cover:
- Scalable Forecasting with Ensembles: What happens when your data has more than one time series? This is called scalable forecasting, and we need special techniques to ensemble these models.
- Stacked Super-Learners: We can use resampled predictions from our sub-models as inputs to a meta-learner. This can result in significantly better accuracy (a 5% improvement is what we achieve in my Time Series Course).
- Multi-Level Stacking: This is the strategy that won the Grupo Bimbo Inventory Demand Forecasting Challenge, where multiple layers of ensembles are used.
- Refitting Sub-Models and Meta-Learners: Refitting is a special task that is needed prior to forecasting future data. Refitting requires careful attention to control the sub-model and meta-learner retraining process.
So how are you ever going to learn time series analysis and forecasting?
You're probably thinking:
- There's so much to learn
- My time is precious
- I'll never learn time series
I have good news that will put those doubts behind you.
You can learn time series analysis and forecasting in hours with my state-of-the-art time series forecasting course.
High-Performance Time Series Course
Become the time series expert in your organization.
My High-Performance Time Series Forecasting in R course is available now. You'll learn timetk and modeltime plus the most powerful time series forecasting techniques available, like GluonTS Deep Learning. Become the time series domain expert in your organization.
High-Performance Time Series Course.
You will learn:
- Time Series Foundations - Visualization, Preprocessing, Noise Reduction, & Anomaly Detection
- Feature Engineering using lagged variables & external regressors
- Hyperparameter Tuning - For both sequential and non-sequential models
- Time Series Cross-Validation (TSCV)
- Ensembling Multiple Machine Learning & Univariate Modeling Techniques (Competition Winner)
- Deep Learning with GluonTS (Competition Winner)
- and more.
Unlock the High-Performance Time Series Course
Project Roadmap, Future Work, and Contributing to Modeltime
Modeltime is a growing ecosystem of packages that work together for forecasting and time series analysis. Here are several useful links:
Have questions about Modeltime Ensemble?
Make a comment in the chat below.
And, if you plan on using modeltime.ensemble for your business, it's a no-brainer: take my Time Series Course.