Releases: Nixtla/neuralforecast

v1.7.2

07 May 16:36

New Features

Bug Fixes

Enhancement

v1.7.1

11 Apr 00:16

New Features

Bug Fixes

Documentation

v1.7.0

27 Mar 18:34

New Features

Bug Fixes

Documentation

Dependencies

Enhancement

v1.6.4

05 Oct 19:39

New Features

Bug Fixes

  • [FIX] futr_exog_list in Auto and HINT classes @cchallu (#773)
  • [FIX] Off-by-one error in BaseRecurrent available_ts @KeAWang (#759)

Documentation

Enhancement

v1.6.2

16 Aug 20:55

What's Changed

  • [FEAT] Add horizon_weight parameter to losses and BasePointLoss in #704 (see the sketch after this list)
  • [FIX] Fix device error in horizon_weight in #706
  • [FIX] Base Windows padding in #715
  • [FIX] Fixed bug in validation loss scale in #720
  • [FIX] Base recurrent valid loss on original scale in #721
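
A minimal sketch of the new horizon_weight parameter; weighting later horizons more heavily is an illustrative choice, not something prescribed by the release:

```python
import torch
from neuralforecast.losses.pytorch import MAE

# horizon_weight takes one weight per forecast step; here later
# horizons count more in the loss (illustrative weighting scheme).
h = 12
loss = MAE(horizon_weight=torch.arange(1, h + 1, dtype=torch.float))
```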

Full Changelog: v1.6.1...v1.6.2

v1.6.1

18 Jul 21:17

New Models

  • DeepAR
  • FEDformer

New features

  • Available mask to flag missing data in the input data frame.
  • Improved fit and cross_validation methods with the new use_init_models parameter to restore models to their initial parameters.
  • Added robust losses: HuberLoss, TukeyLoss, HuberQLoss, and HuberMQLoss (see the sketch after this list).
  • Added Bernoulli DistributionLoss to build temporal classifiers.
  • New exclude_insample_y parameter in all models to build forecasts based only on exogenous regressors.
  • Added dropout to NBEATSx and NHITS models.
  • Improved the predict method of windows-based models to create batches and control memory usage, via the new inference_windows_batch_size parameter.
  • Improvements to the HINT family of hierarchical models: identity reconciliation, AutoHINT, and reconciliation methods in hyperparameter selection.
  • Added the inference_input_size hyperparameter to recurrent-based methods to limit the historic window used during inference, reducing memory usage and inference times.
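
A minimal sketch of a few of these features; the synthetic data and hyperparameter values are illustrative, not taken from the release notes:

```python
import numpy as np
import pandas as pd
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS
from neuralforecast.losses.pytorch import HuberLoss

# Synthetic long-format data: one monthly series with 48 observations.
df = pd.DataFrame({
    'unique_id': 'series_1',
    'ds': pd.date_range('2020-01-01', periods=48, freq='M'),
    'y': np.random.rand(48),
})

model = NHITS(
    h=12,
    input_size=24,
    loss=HuberLoss(),                  # one of the new robust losses
    inference_windows_batch_size=256,  # batch inference windows to bound memory
    max_steps=100,
)
nf = NeuralForecast(models=[model], freq='M')
nf.fit(df=df)
preds = nf.predict()

# use_init_models=True restores models to their initial weights before refitting.
nf.fit(df=df, use_init_models=True)
```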

New tutorials and documentation

  • NeuralForecast map and How to add new models
  • Transformers for time-series
  • Predict insample tutorial
  • Interpretable Decomposition
  • Outlier Robust Forecasting
  • Temporal Classification
  • Predictive Maintenance
  • Statistical, Machine Learning, and Neural Forecasting methods

Fixed bugs and new protections

  • Fixed bug in MinMax scalers that returned NaN values when the mask had 0 values.
  • Fixed bug where y_loc and y_scale could be on different devices.
  • Added early_stopping_steps to the HINT method.
  • Added protection in the fit method of all models to stop training when the training or validation loss becomes NaN, printing the input and output tensors for debugging.
  • Added protection to prevent val_check_steps > max_steps from causing an error when early stopping is enabled.
  • Added PatchTST to the save and load method dictionaries.
  • Added AutoNBEATSx to core's MODEL_DICT.
  • Added protection to the NBEATSx-i model for horizon=1, which previously caused an error due to collapsing trend and seasonality bases.

v1.5.0

22 Apr 19:28

What's Changed

Features

New models

  • [FEAT] VanillaTransformer, Autoformer in #469
  • [FEAT] StemGNN in #456
  • [FEAT] PatchTST in #485
  • [FEAT] Informer, augment_calendar_df, set seeds in fit and predict in #463
  • [FEAT] Hierarchical Forecasting Networks (HINT) in #489

Misc

  • [FEAT] Added MSSE class to losses.pytorch notebook in #468
  • [FEAT] Robustified Distribution Outputs in #492
  • [FEAT] Added MS availability to augment_calendar_df function in #506
  • [FEAT] Add alias argument in #502
  • [FEAT] Mean default distribution output in addition to quantiles in #529
  • [FEAT] Predict insample in #528 (see the sketch after this list)
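
A hedged sketch of the alias argument (#502) and in-sample prediction (#528); the synthetic data and the step_size value are illustrative:

```python
import numpy as np
import pandas as pd
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS

df = pd.DataFrame({
    'unique_id': 'series_1',
    'ds': pd.date_range('2020-01-01', periods=48, freq='M'),
    'y': np.random.rand(48),
})

# alias renames the model's output column; predict_insample returns
# fitted values over the training window.
nf = NeuralForecast(
    models=[NHITS(h=12, input_size=24, max_steps=100, alias='NHITS_small')],
    freq='M',
)
nf.fit(df=df)
insample = nf.predict_insample(step_size=12)
```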

Fixes

  • [FIX] Remove fixed lib versions in #446
  • [FIX] Fixed sCRPS in losses.pytorch notebook in #462
  • [FIX] Compute validation loss per epoch in #507
  • [FIX] MLP/Recurrent-based memory inference complications in #512
  • [FIX] Fix error with inference_input_size in #536
  • [FIX] Add instructions for Python version in #539
  • [FIX] Predict dates bug in #540
  • [FIX] Autoformer in #523
  • [FIX] Removed duplicate from model collection list in #517

Tutorials and Docs

  • [FEAT] Electricity Peak Detection in #450
  • [FEAT] Add End to End Walkthrough tutorial in #472
  • [DOCS] Improved HINT documentation, and broken links in #490
  • [DOCS] HINT documentation in #491
  • [DOCS] HINT: Updated Unit Test and Example Notebooks in #516
  • [FEAT] HINT Unit Test in #499

New dependencies

  • [FEAT] Add support for lightning>=2.0.0, and torch>=2.0.0 in #498
  • [FEAT] Allow pandas 2 in #508

New Contributors

Full Changelog: v1.4.0...v1.5.0

v1.4.0

14 Feb 22:25

New Models

  • Temporal Convolution Network (TCN)
  • AutoNBEATSx
  • AutoTFT (Transformers)

New features

  • Recurrent models (RNN, LSTM, GRU, DilatedRNN) can now take static, historical, and future exogenous variables. These variables are combined with lags through MLP decoders to produce "context" vectors, following the MQ-RNN model (https://arxiv.org/pdf/1711.11053.pdf). See the sketch after this list.

  • The new DistributionLoss class allows producing probabilistic forecasts with all available models. By setting the loss hyperparameter to one of these losses, the model learns and outputs the distribution parameters:

    • Bernoulli, Poisson, Normal, StudentT, Negative Binomial, and Tweedie distributions
    • Scale-decoupled optimization using Temporal Scalers to improve convergence and performance.
    • The predict method can return samples, quantiles, or distribution parameters.
  • sCRPS loss in PyTorch to minimize errors when generating prediction intervals.
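
A hedged sketch combining the recurrent exogenous support and DistributionLoss; the column names (price, onpromotion, region), synthetic data, and hyperparameter values are all illustrative, and parameter names follow the library's current API:

```python
import numpy as np
import pandas as pd
from neuralforecast import NeuralForecast
from neuralforecast.models import RNN
from neuralforecast.losses.pytorch import DistributionLoss

# Synthetic long-format data with hypothetical exogenous columns.
ds = pd.date_range('2020-01-01', periods=48, freq='M')
df = pd.DataFrame({'unique_id': 'store_1', 'ds': ds,
                   'y': np.random.rand(48),
                   'price': np.random.rand(48),
                   'onpromotion': np.random.rand(48)})
static_df = pd.DataFrame({'unique_id': ['store_1'], 'region': [1.0]})
futr_df = pd.DataFrame({'unique_id': 'store_1',
                        'ds': pd.date_range(ds[-1], periods=13, freq='M')[1:],
                        'price': np.random.rand(12)})

model = RNN(
    h=12,
    input_size=-1,                   # use the full available history
    loss=DistributionLoss(distribution='Normal', level=[80, 90]),
    futr_exog_list=['price'],        # known into the future
    hist_exog_list=['onpromotion'],  # known only in the past
    stat_exog_list=['region'],       # static per series
    max_steps=100,
)
nf = NeuralForecast(models=[model], freq='M')
nf.fit(df=df, static_df=static_df)
preds = nf.predict(futr_df=futr_df)  # returns mean and 80%/90% interval columns
```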

Optimization improvements

We included new optimization features commonly used to train neural models:

  • Added a learning rate scheduler using torch.optim.lr_scheduler.StepLR. The new num_lr_decays hyperparameter controls the number of decays (evenly distributed) during training.
  • Added early stopping based on validation loss. The new early_stop_patience_steps hyperparameter controls the number of validation steps without improvement after which training stops.
  • New validation loss hyperparameter to allow different training and validation losses.

Training, scheduler, validation loss computation, and early stopping are now defined in steps (instead of epochs) to control the training procedure better. Use max_steps to define the number of training iterations. Note: max_epochs will be deprecated in the future.
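
A minimal sketch of these step-based controls; the values are illustrative, and val_check_steps is assumed from the library's documentation rather than stated in these notes:

```python
import numpy as np
import pandas as pd
from neuralforecast import NeuralForecast
from neuralforecast.models import NHITS
from neuralforecast.losses.pytorch import MAE, MSE

df = pd.DataFrame({
    'unique_id': 'series_1',
    'ds': pd.date_range('2020-01-01', periods=60, freq='M'),
    'y': np.random.rand(60),
})

model = NHITS(
    h=12,
    input_size=24,
    loss=MAE(),                   # training loss
    valid_loss=MSE(),             # separate validation loss
    max_steps=500,                # training length in steps, not epochs
    num_lr_decays=3,              # evenly spaced StepLR decays
    early_stop_patience_steps=5,  # validation checks without improvement before stopping
    val_check_steps=50,           # validation frequency (name assumed from the docs)
)
nf = NeuralForecast(models=[model], freq='M')
nf.fit(df=df, val_size=12)        # early stopping requires a validation set
```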

New tutorials and documentation

  • Probabilistic Long-horizon forecasting
  • Save and Load Models to use them in different datasets
  • Temporal Fusion Transformer
  • Exogenous variables
  • Automatic hyperparameter tuning
  • Intermittent or Sparse Time Series
  • Detect Demand Peaks

v1.3.0

15 Dec 17:42

What's Changed

  • [DOCS] Probabilistic Long-horizon forecasting in #361
  • [FEAT] Updated GMM class in losses.pytorch in #365
  • [FEAT] Scale decoupling changes for GMM and PMM class in #366
  • [FEAT] AutoTFT in #367
  • [FIX] Losses in Auto models initialization in #369

Full Changelog: v1.2.0...v1.3.0

v1.2.0

07 Dec 21:11

What's Changed

  • [FIX] Colab link getting started in #329
  • Improved MQ-NBEATS [B,H]+[B,H,Q] -> [B,H,Q] in #330
  • Improved MQ-NBEATSx [B,H]+[B,H,Q] -> [B,H,Q] in #331
  • Fixed PyTorch losses' init documentation in #333
  • TCN in #332
  • Update README.md in #335
  • [FEAT] DistributionLoss in #339
  • [FEAT] Deprecated GMMTFT in favor of DistributionLoss' modularity in #342
  • [FEAT] Scaled Distributions in #345
  • Deprecate AffineTransformed class in #350
  • [FEAT] Add cla action in #349
  • [FIX] Delete cla.yml in #353
  • [FIX] CI tests in #357
  • [FEAT] Added return_params to Distributions in #348 (see the sketch after this list)
  • [FEAT] Ignore jupyter notebooks as part of languages in #355
  • [FEAT] Added num_samples to Distribution's initialization in #359
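
A hedged sketch of the two DistributionLoss additions above (#348, #359); the chosen distribution and values are illustrative:

```python
from neuralforecast.losses.pytorch import DistributionLoss

# return_params adds the fitted distribution parameters to the model output;
# num_samples sets how many Monte Carlo samples back the quantile estimates.
loss = DistributionLoss(
    distribution='StudentT',
    level=[80, 90],
    return_params=True,
    num_samples=500,
)
```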

Full Changelog: v1.1.0...v1.2.0