Releases: Nixtla/neuralforecast
v1.7.2
New Features
- [FEAT] DeepNPTS model @elephaint (#990)
- [FEAT] TiDE model @elephaint (#971)
Bug Fixes
- [FIX] Refit after validation boolean @elephaint (#991)
- fix cross_validation results with uneven windows @jmoralez (#989)
- [FIX] fix wrong import doc PatchTST @elephaint (#967)
- [FIX] raise exception nbeats h=1 with stacks @elephaint (#966)
Enhancement
- reduce default warnings @jmoralez (#974)
- Create CODE_OF_CONDUCT.md @tracykteal (#972)
v1.7.1
New Features
- multi-node distributed training with spark @jmoralez (#935)
- [FEAT] Add BiTCN model @elephaint (#958)
- [FEAT] - Add iTransformer to neuralforecast @marcopeix (#944)
- [FEAT] Add MLPMultivariate model @elephaint (#938)
Bug Fixes
- [FIX] Fixes default settings of BiTCN @elephaint (#961)
- [FIX] HINT not producing coherent forecasts @elephaint (#964)
- [FIX] Fixes 948 multivariate predict/val issues when n_series > 1024 @elephaint (#962)
- handle exogenous variables of TFT in parent class @jmoralez (#959)
- fix early stopping in ray auto models @jmoralez (#953)
- fix cross_validation when the id is the index @jmoralez (#951)
Documentation
- add MLflow logging example @cargecla1 (#892)
v1.7.0
New Features
- [FEAT] Added TSMixerx model @elephaint (#921)
- Add Time-LLM @marcopeix (#908)
- [FEAT] Added TSMixer model @elephaint (#914)
- Add option to support user defined optimizer for NeuralForecast Models @JQGoh (#901)
- [FEAT] Added NLinear model @ggattoni (#900)
- [FEAT] Added DLinear model @cchallu (#875)
- support refit in cross_validation @jmoralez (#842)
- use environment variable to get id as column in outputs @jmoralez (#841)
- support different column names for ids, times and targets @jmoralez (#838)
- polars support @jmoralez (#829)
- add callbacks to auto models @jmoralez (#795)
Bug Fixes
- [FIX] Avoid raised error for varied step_size parameter during predict_insample() @JQGoh (#933)
- [FIX] Ensure all Auto models support alias (#926) and fix configuring the hyperparameter space for Auto* models (#924) @elephaint (#927)
- fix base_multivariate window generation @jmoralez (#907)
- Fix optuna multigpu @jmoralez (#889)
- support saving and loading models with alias @jmoralez (#867)
- [FIX] Polars `.columns` produces list rather than Pandas Index @akmalsoliev (#862)
- add missing models to filename dict @jmoralez (#856)
- ensure exogenous features are lists @jmoralez (#851)
- fix save with save_dataset=False @jmoralez (#850)
- copy config in optuna @jmoralez (#844)
- Fixed `Exception: max_epochs is deprecated, use max_steps instead.` @twobitunicorn (#835)
- fix single column 2d array polars df @jmoralez (#830)
- move scalers to core @jmoralez (#813)
- [FIX] Default AutoPatchTST config @cchallu (#811)
- [FIX] ReVIN Numerical Stability @dluuo (#781)
- On Windows, prevent long trial directory names @tg2k (#735)
Documentation
- removed documentation for missing argument @yarnabrina (#913)
- feat: Added cross-validation tutorial @MMenchero (#897)
- chore: update license to apache-2 @AzulGarza (#882)
- [FEAT] Model table in README @cchallu (#880)
- redirect to mintlify docs @jmoralez (#816)
- add missing models to documentation @jmoralez (#775)
Dependencies
- add windows to CI @jmoralez (#814)
- address future warnings @jmoralez (#898)
- use scalers from coreforecast @jmoralez (#873)
- add python 3.11 to CI @jmoralez (#839)
Enhancement
- Reduce device transfers @elephaint (#923)
- extract common methods to BaseModel @jmoralez (#915)
- remove TQDMProgressBar callback @jmoralez (#899)
- use fsspec in save and load methods @jmoralez (#895)
- Feature/Check input for NaNs when available_mask = 1 @JQGoh (#894)
- switch `flake8` to `ruff` @Borda (#871)
- use future instead of deprecation warnings @jmoralez (#849)
- add frequency validation and futr_df debugging methods @jmoralez (#833)
v1.6.4
New Features
- TemporalNorm with ReVIN learnable parameters @kdgutier (#768)
- support optuna in auto models @jmoralez (#763)
- [FEAT] TimesNet model @cchallu (#757)
- add local_scaler_type @jmoralez (#754)
- [FEAT] Implementation of Exogenous - NBEATSx @akmalsoliev (#738)
Bug Fixes
- [FIX] futr_exog_list in Auto and HINT classes @cchallu (#773)
- fix off by one error in BaseRecurrent available_ts @KeAWang (#759)
Documentation
- [DOCS] Scaling tutorial @cchallu (#770)
- [DOCS] Auto hyperparameter selection with optuna @cchallu (#767)
- [DOCS] Update tutorials to v.1.6.3 @cchallu (#741)
v1.6.2
What's Changed
- [FEAT] Add `horizon_weight` parameter to losses and `BasePointLoss` in #704
- [FIX] Fix device error in `horizon_weight` in #706
- [FIX] Base Windows padding in #715
- [FIX] Fixed bug in validation loss scale in #720
- [FIX] Base recurrent valid loss on original scale in #721
Full Changelog: v1.6.1...v1.6.2
v1.6.1
New Models
- DeepAR
- FEDformer
New features
- Available mask to specify missing data in the input data frame.
- Improved `fit` and `cross_validation` methods with the `use_init_models` parameter to restore models to their initial parameters.
- Added robust losses: `HuberLoss`, `TukeyLoss`, `HuberQLoss`, and `HuberMQLoss`.
- Added Bernoulli `DistributionLoss` to build temporal classifiers.
- New `exclude_insample_y` parameter to all models to build models based only on exogenous regressors.
- Added dropout to the `NBEATSx` and `NHITS` models.
- Improved the `predict` method of windows-based models to create batches to control memory usage. Can be controlled with the new `inference_windows_batch_size` parameter.
- Improvements to the `HINT` family of hierarchical models: identity reconciliation, `AutoHINT`, and reconciliation methods in hyperparameter selection.
- Added the `inference_input_size` hyperparameter to recurrent-based methods to control the historic length during inference, for better control of memory usage and inference times.
New tutorials and documentation
- Neuralforecast map and How-to add new models
- Transformers for time-series
- Predict insample tutorial
- Interpretable Decomposition
- Outlier Robust Forecasting
- Temporal Classification
- Predictive Maintenance
- Statistical, Machine Learning, and Neural Forecasting methods
Fixed bugs and new protections
- Fixed bug in `MinMax` scalers that returned NaN values when the mask had 0 values.
- Fixed bug with `y_loc` and `y_scale` being on different devices.
- Added `early_stopping_steps` to the `HINT` method.
- Added protection in the `fit` method of all models to stop training when the training or validation loss becomes NaN, printing the input and output tensors for debugging.
- Added protection so that `val_check_step` > `max_steps` does not cause an error when early stopping is enabled.
- Added PatchTST to the save and load method dictionaries.
- Added `AutoNBEATSx` to core's `MODEL_DICT`.
- Added protection to the `NBEATSx-i` model where `horizon=1` causes an error due to collapsing trend and seasonality basis.
v1.5.0
What's Changed
Features
New models
- [FEAT] VanillaTransformer, Autoformer in #469
- [FEAT] StemGNN in #456
- [FEAT] PatchTST in #485
- [FEAT] Informer, augment_calendar_df, set seeds in fit and predict in #463
- [FEAT] Hierarchical Forecasting Networks (HINT) in #489
Misc
- [FEAT] Added MSSE class to losses.pytorch notebook in #468
- [FEAT] Robustified Distribution Outputs in #492
- [FEAT] Added MS availability to augment_calendar_df function in #506
- [FEAT] Add alias argument in #502
- [FEAT] mean default distribution output in addition to quantiles in #529
- [FEAT] Predict insample in #528
Fixes
- [FIX] Remove fixed lib versions in #446
- [FIX] Fixed sCRPS in losses.pytorch notebook in #462
- [FIX] Compute validation loss per epoch in #507
- [FIX] MLP/Recurrent-based memory inference complications in #512
- [FIX] Fix error with inference_input_size in #536
- [FIX] Add instructions python version in #539
- [FIX] Predict dates bug in #540
- [FIX] Autoformer in #523
- [FIX] Removed duplicate from model collection list in #517
Tutorials and Docs
- [FEAT] Electricity Peak Detection in #450
- [FEAT] Add End to End Walkthrough tutorial in #472
- [DOCS] Improved HINT documentation, and broken links in #490
- [DOCS] HINT documentation in #491
- [DOCS] HINT: Updated Unit Test and Example Notebooks in #516
- [FEAT] HINT Unit Test in #499
New dependencies
New Contributors
- @VinishUchiha made their first contribution in #517
Full Changelog: v1.4.0...v1.5.0
v1.4.0
New Models
- Temporal Convolution Network (TCN)
- AutoNBEATSx
- AutoTFT (Transformers)
New features
- Recurrent models (RNN, LSTM, GRU, DilatedRNN) can now take static, historical, and future exogenous variables. These variables are combined with lags to produce "context" vectors via MLP decoders, based on the MQ-RNN model (https://arxiv.org/pdf/1711.11053.pdf).
- The new `DistributionLoss` class allows producing probabilistic forecasts with all available models. By changing the `loss` hyperparameter to one of these losses, the model will learn and output the distribution parameters:
  - Bernoulli, Poisson, Normal, StudentT, Negative Binomial, and Tweedie distributions.
  - Scale-decoupled optimization using temporal scalers to improve convergence and performance.
  - The `predict` method can return samples, quantiles, or distribution parameters.
- sCRPS loss in PyTorch to minimize errors when generating prediction intervals.
Optimization improvements
We included new optimization features commonly used to train neural models:
- Added a learning rate scheduler, using the `torch.optim.lr_scheduler.StepLR` scheduler. The new `num_lr_decays` hyperparameter controls the number of decays (evenly distributed) during training.
- Added early stopping using validation loss. The new `early_stop_patience_steps` hyperparameter controls the number of validation steps with no improvement after which training is stopped.
- New validation-loss hyperparameter to allow different train and validation losses.

Training, scheduling, validation loss computation, and early stopping are now defined in steps (instead of epochs) to better control the training procedure. Use `max_steps` to define the number of training iterations. Note: `max_epochs` will be deprecated in the future.
New tutorials and documentation
- Probabilistic Long-horizon forecasting
- Save and Load Models to use them in different datasets
- Temporal Fusion Transformer
- Exogenous variables
- Automatic hyperparameter tuning
- Intermittent or Sparse Time Series
- Detect Demand Peaks
v1.3.0
v1.2.0
What's Changed
- [FIX] Colab link getting started in #329
- Improved MQ-NBEATS [B,H]+[B,H,Q] -> [B,H,Q] in #330
- Improved MQ-NBEATSx [B,H]+[B,H,Q] -> [B,H,Q] in #331
- fixed pytorch losses' init documentation in #333
- TCN in #332
- Update README.md in #335
- [FEAT] DistributionLoss in #339
- [FEAT] Deprecated GMMTFT in favor of DistributionLoss' modularity in #342
- [Feat] Scaled Distributions in #345
- Deprecate AffineTransformed class in #350
- [FEAT] Add cla action in #349
- [FIX] Delete cla.yml in #353
- [FIX] CI tests in #357
- [FEAT] Added return_params to Distributions in #348
- [FEAT] Ignore jupyter notebooks as part of `languages` in #355
- [FEAT] Added `num_samples` to Distribution's initialization in #359
Full Changelog: v1.1.0...v1.2.0