Monitoring Realized Performance for Regression

Note

The following example uses timestamps. These are optional but have an impact on the way data is chunked and results are plotted. You can read more about them in the data requirements.

Just The Code
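The snippet below is a minimal sketch of the full workflow covered in the walkthrough. It assumes the synthetic car price dataset bundled with NannyML and its default column names (y_true, y_pred, timestamp); adjust these to match your own data.

    import nannyml as nml

    # Load the synthetic car price dataset bundled with NannyML.
    reference_df, analysis_df, analysis_targets_df = nml.load_synthetic_car_price_dataset()

    # Targets are shipped separately for the analysis period; join them back in.
    analysis_with_targets_df = analysis_df.merge(
        analysis_targets_df, left_index=True, right_index=True
    )

    # Configure the calculator: metrics, required columns, chunking and problem type.
    calc = nml.PerformanceCalculator(
        y_true='y_true',
        y_pred='y_pred',
        timestamp_column_name='timestamp',
        problem_type='regression',
        metrics=['mae', 'mape', 'mse', 'rmse', 'msle', 'rmsle'],
        chunk_size=6000,
    )

    # Fit on reference data, then calculate realized performance on the analysis data.
    calc.fit(reference_df)
    results = calc.calculate(analysis_with_targets_df)

    # Inspect the results as dataframes and plot them.
    print(results.filter(period='analysis').to_df())
    print(results.filter(period='reference').to_df())
    results.plot().show()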

Walkthrough

For simplicity, the guide is based on a synthetic dataset where the monitored model predicts the selling price of a used car. You can learn more about this dataset in the synthetic regression dataset documentation.

In order to monitor a model, NannyML needs to learn about it from a reference dataset. Then it can monitor the data that is subject to actual analysis, provided as the analysis dataset. You can read more about this in our section on data periods.
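As a sketch, assuming the bundled synthetic car price loader, the reference and analysis sets can be loaded as follows:

    import nannyml as nml

    # The loader returns the reference set, the analysis set and the analysis targets.
    reference_df, analysis_df, analysis_targets_df = nml.load_synthetic_car_price_dataset()

    print(reference_df.head())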

The analysis_targets dataframe contains the target values for the analysis period. It is kept separate in the synthetic data because targets are not used during performance estimation. Since they are required to calculate realized performance, the first step is to join the analysis target values with the analysis data.
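Assuming the two dataframes share the same index, as the synthetic dataset does, the join can be a simple merge on index:

    # Join targets back onto the analysis data so realized performance can be calculated.
    analysis_with_targets_df = analysis_df.merge(
        analysis_targets_df, left_index=True, right_index=True
    )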

Next, a nannyml.performance_calculation.calculator.PerformanceCalculator is created using a list of metrics to calculate (or just one metric), the data columns required for these metrics, an optional chunking specification and the type of machine learning problem being addressed. A sketch of this setup follows the list of metrics below.

The list of metrics specifies which performance metrics of the monitored model will be calculated. The following metrics are currently supported:

  • mae - mean absolute error
  • mape - mean absolute percentage error
  • mse - mean squared error
  • rmse - root mean squared error
  • msle - mean squared logarithmic error
  • rmsle - root mean squared logarithmic error

For more information on metrics, check the nannyml.performance_calculation.metrics module.
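A minimal sketch of the calculator setup, assuming the default column names of the synthetic dataset (y_true, y_pred, timestamp) and a chunk size of 6000 rows:

    # Configure the calculator: metrics, required columns, chunking and problem type.
    calc = nml.PerformanceCalculator(
        y_true='y_true',
        y_pred='y_pred',
        timestamp_column_name='timestamp',
        problem_type='regression',
        metrics=['mae', 'mape', 'mse', 'rmse', 'msle', 'rmsle'],
        chunk_size=6000,
    )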

The new nannyml.performance_calculation.calculator.PerformanceCalculator is fitted on the reference data using its fit() method.
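Continuing the sketch above:

    # Fit the calculator on the reference data; metric thresholds are established here.
    calc.fit(reference_df)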

The fitted nannyml.performance_calculation.calculator.PerformanceCalculator can then be used to calculate realized performance metrics on all data for which target values are available, using its calculate() method. NannyML can output a dataframe that contains all the results for the analysis data.
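Continuing the sketch, calculation and export to a dataframe might look like this:

    # Calculate realized performance on the analysis data (with targets joined in).
    results = calc.calculate(analysis_with_targets_df)

    # Export the analysis-period results as a pandas dataframe.
    analysis_results_df = results.filter(period='analysis').to_df()
    print(analysis_results_df.head())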

The results from the reference data are also available.
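They can be retrieved by filtering on the reference period, assuming the same results object as above:

    # Export the reference-period results as a pandas dataframe.
    reference_results_df = results.filter(period='reference').to_df()
    print(reference_results_df.head())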

Apart from the chunk- and period-related columns, the results contain a set of columns for each calculated metric. Taking mae as an example:

  • targets_missing_rate - The fraction of missing target data.
  • <metric> - The value of the metric for a specific chunk.
  • <metric>_lower_threshold and <metric>_upper_threshold - Lower and upper thresholds for the performance metric. Crossing them raises an alert that there is a significant metric change. The thresholds are calculated during the fit phase, based on the realized performance of the chunks in the reference period: they are set 3 standard deviations away from the mean performance calculated on the reference chunks.
  • <metric>_alert - A flag indicating a potentially significant performance change. True if the realized performance crosses the upper or lower threshold.
  • <metric>_sampling_error - The estimated sampling error for the relevant metric.

The results can be plotted for visual inspection:
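As a sketch, a per-metric plot of the realized performance can be produced with:

    # Plot realized performance for every calculated metric across both periods.
    figure = results.plot()
    figure.show()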

(Figure: realized performance metrics plotted per chunk for the reference and analysis periods.)

Insights

From looking at the RMSE and RMSLE performance results we can observe an interesting effect. We know that RMSE penalizes mispredictions symmetrically while RMSLE penalizes underprediction more than overprediction. Hence while our model has become a little bit more accurate according to RMSE, the increase in RMSLE tells us that our model is now underpredicting more than it was before!

What Next

If we decide further investigation is needed, the Data Drift functionality can help us to see what feature changes may be contributing to any performance changes.

It is also wise to check whether the model's performance is satisfactory according to business requirements. This is an ad-hoc investigation that is not covered by NannyML.