This tutorial introduces the Facebook Prophet forecasting algorithm.
- Python
- Time Series
- Forecasting
- Facebook Prophet
In this third tutorial about time series we explore the Facebook Prophet forecasting algorithm. As in the previous tutorial about SARIMAX models, we do so in a way that lets you start applying it right away and explore it further with different use cases.
The code of this tutorial can be found at 03-Forecasting_with_Facebook_Prophet.ipynb on GitHub.
After completing this tutorial, you will know:
- How to apply Facebook Prophet for forecasting
- How to search for the best model using hyperparameter fine-tuning
- How to evaluate the obtained model
- How to save the chosen model for future use
If you haven’t checked the other time series tutorials yet, they are available at:
- Time Series Part 1: An Introduction to Time Series Analysis
- Time Series Part 2: Forecasting with SARIMAX models: An Intro
Introduction to Facebook Prophet
In this section we introduce Facebook Prophet. Here you get the basics for building your first model with Facebook’s forecasting tool, so you can go on exploring it further on your own.
To illustrate how Prophet works we apply it to forecast sales using the same dataset used in the tutorial where we introduced SARIMAX models.
Facebook Prophet
Facebook Prophet is an open-source library released by Facebook’s Core Data Science team. It is available in R and Python.
Prophet is a procedure for forecasting univariate (single-variable) time series data based on an additive model, and the implementation supports trends, seasonality, and holidays. It works best with time series that have strong seasonal effects and several seasons of historical data. Prophet is robust to missing data and shifts in the trend, and typically handles outliers well.
It is especially interesting for new users because of its ease of use and its capacity to find a good set of hyperparameters automatically. It therefore allows users with no prior knowledge or experience of forecasting time series data to start using it and get reasonably good results, often equal to, and sometimes even better than, those produced by experts.
Forecasting with Facebook Prophet
Prepare Data for Prophet
Prophet requires as input a dataframe with two columns:
- ds: datetime column.
- y: numeric column which represents the measurement we wish to forecast.
Our data is almost ready. We just need to rename the date and sales columns to ds and y, respectively.
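A minimal sketch of that preparation step (assuming the dataframe is named df_store_2_item_28 and its original columns are called date and sales):

import pandas as pd

# Rename the columns to the names Prophet expects
df_store_2_item_28 = df_store_2_item_28.rename(columns={'date': 'ds', 'sales': 'y'})

# Make sure ds is a proper datetime column
df_store_2_item_28['ds'] = pd.to_datetime(df_store_2_item_28['ds'])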

Train Model
To train a model in Prophet, first we create an instance of the model class and then we call the fit method.
In principle, you don’t need to specify any hyperparameters. One important exception is seasonality_mode: since Prophet defaults to an additive model, you must set this parameter to multiplicative if your seasonality is multiplicative. As we saw previously, the seasonality in our series follows an additive behavior, so there is no need to set seasonality_mode to multiplicative.
Although Prophet is able to find a good set of hyperparameters automatically, we will see later that some fine-tuning can improve performance. In particular, applying your knowledge of the business case can make a big difference, even though Prophet handles many things by itself.
Just as an example, I’ll include the interval_width parameter, which sets the width of the uncertainty interval.
To learn more about how to tweak these parameters in your favor, check the Prophet documentation.
from fbprophet import Prophet

m = Prophet(interval_width=0.95)  # the uncertainty interval is 80% by default
model = m.fit(df_store_2_item_28)
Forecast
In order to forecast, we first need to create a dataframe that will hold our predictions. The make_future_dataframe method builds a dataframe that extends a specified number of days into the future. In our case, we will predict 90 days into the future.
By default the dataframe created includes the dates from the history, so we see the model fit as well.
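As a sketch, assuming the fitted model m from the previous step:

future = m.make_future_dataframe(periods=90)  # extend 90 days beyond the last date in the history
future.tail()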

To make predictions, we apply the predict method to the future dataframe that we have just generated.
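Using the future dataframe created above:

forecast = m.predict(future)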
Forecast Dataframe
The forecast dataframe contains Prophet’s prediction for sales. Because we’ve also passed historical dates, it provides an in-sample fit that we can use to evaluate our model.
As you can see, forecast includes a yhat column with the forecast, as well as columns for the components and the uncertainty intervals.
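For instance, the most commonly used columns can be inspected with:

forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()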

Forecast Plot
To plot the forecast you just need to call the model’s plot method, passing it the forecast dataframe.
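For example:

fig1 = m.plot(forecast)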

In the forecast plot above, the deep blue line is the forecasted sales forecast['yhat'], and the black dots are the actual sales. The light blue shade is the 95% uncertainty interval around the forecast, bounded by the forecast['yhat_lower'] and forecast['yhat_upper'] values.
Trend Changepoints
Real-life time series such as this one frequently have abrupt changes in their trajectories. These changepoints signal abrupt shifts in the time series caused, for instance, by a new product launch or an unforeseen calamity. Prophet automatically detects these changepoints and allows the trend to adapt appropriately. At these points the growth rate is allowed to change, making the model more flexible, which may lead to overfitting or underfitting.
The changepoint_prior_scale parameter can be used to adjust the trend flexibility and tackle overfitting and underfitting. A higher value fits a more flexible curve to the time series.
By default, changepoints are only inferred for the first 80% of the time series, but you can change this with the changepoint_range argument of the model.
It is also possible to add your own changepoints manually, using the changepoints argument.
If you want to know more about changepoints and Prophet check this out.
In the plot below, the dotted lines represent the changepoints for the given time series.
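A plot like this can be produced with Prophet’s add_changepoints_to_plot helper, for example:

from fbprophet.plot import add_changepoints_to_plot

fig = m.plot(forecast)
a = add_changepoints_to_plot(fig.gca(), m, forecast)  # overlay the trend and the detected changepoints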

Component Plots
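The component plots shown below are generated with the model’s plot_components method:

fig2 = m.plot_components(forecast)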

We can observe the following on the forecast components plotted above:
- Trend component: The trend is upwards.
- Weekly seasonality component: The weekly seasonality shows that people buy more on weekends. In particular, we observe a drop in sales from Sunday to Monday. This might point to a holiday effect.
- Yearly seasonality component: As observed previously, the volume of sales is higher in July and lower in January. The peak in sales in July might reflect seasonal sales with high discounts.
Holidays and special events, as well as seasonality, can be explored to improve your model. For more information, check the Seasonality, Holiday Effects, And Regressors section of Prophet’s documentation.
Evaluate model
How is this model performing?
The forecast dataframe includes predictions made on the training data dates. Therefore, we can use this in-sample fit to evaluate our model.
df_merge = pd.merge(df_store_2_item_28, forecast[['ds', 'yhat_lower', 'yhat_upper', 'yhat']], on='ds')
df_merge = df_merge[['ds', 'yhat_lower', 'yhat_upper', 'yhat', 'y']]
df_merge.head()
To evaluate this model we will be using MAE and MAPE, as done previously for the SARIMAX models. These are popular metrics that you will often come across when evaluating forecasting models.
This way, we can compare models and find out which one is best.
from sklearn.metrics import mean_absolute_error, mean_absolute_percentage_error

# calculate MAE and MAPE between observed and predicted values
y_true = df_merge['y'].values
y_pred = df_merge['yhat'].values
mae_01 = mean_absolute_error(y_true, y_pred)
mape_01 = mean_absolute_percentage_error(y_true, y_pred)
For this model, MAE = 4.275 and MAPE = 0.169, which are better than the values obtained by our best SARIMAX model.
In addition to calculating metrics, let’s plot both the actual and the predicted data.
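A minimal sketch of such a comparison plot, using matplotlib and the df_merge dataframe built above:

import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(12, 5))
ax.plot(df_merge['ds'], df_merge['y'], 'k.', label='actual sales')
ax.plot(df_merge['ds'], df_merge['yhat'], color='steelblue', label='predicted sales')
ax.fill_between(df_merge['ds'], df_merge['yhat_lower'], df_merge['yhat_upper'],
                color='steelblue', alpha=0.2, label='uncertainty interval')
ax.set_xlabel('date')
ax.set_ylabel('sales')
ax.legend()
plt.show()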

From the plot, it seems that the model is able to fit the data points well.
Prophet’s Diagnostic Tools
As part of the evaluation process we can make good use of some diagnostic tools provided by Prophet, such as Cross Validation and Hyperparameter tuning.
Cross Validation
Prophet includes functionality for time series cross validation to measure forecast error by comparing the predicted values with the actual values.
To apply the cross_validation function, we specify the forecast horizon (horizon), and optionally the size of the initial training period (initial) and the spacing between cutoff dates (period). You can also just set the horizon and let Prophet choose the other parameters.
from fbprophet.diagnostics import cross_validation

df_cv = cross_validation(m, horizon='90 days')
# df_cv = cross_validation(m, initial='270 days', period='45 days', horizon='90 days')
df_cv.head()

from fbprophet.diagnostics import performance_metrics

df_p = performance_metrics(df_cv)
df_p.head()

The blue line shows the MAE (first plot above) and the MAPE (second plot above), where the mean is taken over a rolling window of the dots.
We see for this forecast that, considering MAPE, errors around 18.50% are typical for predictions 9 days into the future, and that errors decrease to around 17.20% for predictions that are 90 days out.
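These plots can be reproduced with Prophet’s plot_cross_validation_metric helper, for example:

from fbprophet.plot import plot_cross_validation_metric

fig_mae = plot_cross_validation_metric(df_cv, metric='mae')
fig_mape = plot_cross_validation_metric(df_cv, metric='mape')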
Fine-tuning hyperparameters
Is it possible to do better?
We can apply grid search and try to fine-tune the hyperparameters. Let’s apply it and try to find hyperparameters that may give us a lower MAPE.
Here we try different values of changepoint_prior_scale and seasonality_prior_scale. changepoint_prior_scale determines the flexibility of the trend, and in particular how much the trend changes at the trend changepoints, while seasonality_prior_scale controls the flexibility of the seasonality. For more details, check the Hyperparameter tuning section of Prophet’s documentation.
import itertools

param_grid = {
    'changepoint_prior_scale': [0.001, 0.01, 0.1, 0.5],
    'seasonality_prior_scale': [0.01, 0.1, 1.0, 10.0],
}

# Generate all combinations of parameters
all_params = [dict(zip(param_grid.keys(), v)) for v in itertools.product(*param_grid.values())]
maes = []   # Store the MAE for each parameter set here
mapes = []  # Store the MAPE for each parameter set here

# Use cross validation to evaluate all parameters
for params in all_params:
    m = Prophet(**params).fit(df_store_2_item_28)  # Fit model with given params
    df_cv = cross_validation(m, horizon='90 days', parallel="processes")
    df_p = performance_metrics(df_cv, rolling_window=1)
    maes.append(df_p['mae'].values[0])
    mapes.append(df_p['mape'].values[0])

# Collect the results
tuning_results = pd.DataFrame(all_params)
tuning_results['mae'] = maes
tuning_results['mape'] = mapes
The following image shows the smallest values for both MAE and MAPE:

Then we apply the best model based on MAPE, as sketched below.
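A sketch of how that selection could be done, using the all_params and mapes lists from the grid search above (this also defines the best_params dictionary reused later when adding holidays):

import numpy as np

# Pick the parameter combination with the smallest MAPE
best_params = all_params[np.argmin(mapes)]
print(best_params)

# Refit Prophet with the chosen hyperparameters
m = Prophet(interval_width=0.95,
            changepoint_prior_scale=best_params['changepoint_prior_scale'],
            seasonality_prior_scale=best_params['seasonality_prior_scale'])
model = m.fit(df_store_2_item_28)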
Best Model obtained by Fine-tuning Hyperparameters

Let’s compare the metrics of the two models, where Prophet_01 is our first model and Prophet_02 is the model with the smallest MAPE found by the fine-tuning process.

The model pointed out by fine-tuning (Prophet_02) gave us a slight improvement compared with the previous model (Prophet_01).
Best Model adding US Holidays
What if we include US holidays? Would it further improve the results obtained by our last model?
For this, let’s use the add_country_holidays(country_name='US') method provided by Prophet.
m = Prophet(interval_width=0.95, weekly_seasonality=True,
            changepoint_prior_scale=best_params['changepoint_prior_scale'],
            seasonality_prior_scale=best_params['seasonality_prior_scale'])
m.add_country_holidays(country_name='US')
model = m.fit(df_store_2_item_28)
These are the US holidays included:
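They can be listed from the fitted model via its train_holiday_names attribute, for example:

m.train_holiday_names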

Note that the component plots now include holidays.

By adding holidays we’ve obtained a slightly better model (Prophet_03) compared with the previous models, considering both the MAE and MAPE metrics.
Even though, in this case, the improvement was not significant, it shows that adjusting hyperparameters based on our business case can play in our favor.

Saving and Loading the Best model
Now that you have found your best model, it is time to save it for future use.
Saving Prophet models is a bit different from saving SARIMAX models. In Python, Prophet models should not be saved with pickle; the Stan backend attached to the model object will not pickle well and will produce issues under certain versions of Python. Instead, you should use the built-in serialization functions to serialize the model to JSON.
Save Prophet model
import json
from fbprophet.serialize import model_to_json, model_from_json

with open('../model/prophet_model.json', 'w') as fout:
    json.dump(model_to_json(m), fout)  # Save model
Load Prophet model
with open('../model/prophet_model.json', 'r') as fin:
    m = model_from_json(json.load(fin))  # Load model
Comparing all results – SARIMA models vs Prophet models
Below, we compare the performance of SARIMAX models (Time Series Tutorial Part 2) and Prophet models.

The Prophet models present better accuracy than the (S)ARIMA models, as shown by their lower MAE and MAPE values. The best Prophet model has a MAE 32.38% lower than the MAE of the best SARIMA model, and its MAPE is 21.13% lower than the MAPE of the best SARIMA model.
Conclusions about Facebook Prophet
In this third time series tutorial we presented Facebook Prophet. With our example, we could observe that Prophet is indeed easy to use. For our first model we didn’t make any choices: by letting Prophet choose the hyperparameters automatically, we already obtained a model with better performance than the best SARIMA model from the last tutorial.
By tweaking the hyperparameters a bit, we improved our results. Adding holidays proved useful, even though the improvement was small in this case.
We have also explored some of the diagnostic tools provided by Prophet: cross validation and hyperparameter tuning.
If you want to go deeper and learn even more about Prophet, visit its official documentation page.