- remove `log_args` (#2954)
- Improve performance of `RandomSplitter` (h/t @muellerzr) (#2957)
- Exporting `TabularLearner` via `learn.export()` leads to huge file size (#2945)
- `TensorPoint` object has no attribute `img_size` (#2950)
- moved `has_children` from `nn.Module` to free function (#2931)
- Support persistent workers (#2768)
- `unet_learner` segmentation fails (#2939)
- In "Transfer learning in text" tutorial, `dls.show_batch()` shows wrong outputs (#2910)
- `Learn.load` and `LRFinder` not functioning properly for the optimizer states (#2892)
- Documentation for `Show_Images` broken (#2876)
- URL link for documentation for `torch_core` library from the `doc()` method gives incorrect url (#2872)
- Work around broken PyTorch subclassing of some `new_*` methods (#2769)
- PyTorch 1.7 compatibility (#2917)
PyTorch 1.7 includes support for tensor subclassing, so we have replaced much of our custom subclassing code with PyTorch's. We have seen a few bugs in PyTorch's subclassing feature, however, so please file an issue if you see any code failing now which was working before.
There is one breaking change in this version of fastai: custom metadata is now stored directly in tensors as standard Python attributes, instead of in the special `_meta` attribute. Only advanced customization of fastai OO tensors would have used this functionality, so if you do not know what this means, then you did not use it.
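As a rough illustration of the change, using a dummy stand-in class rather than fastai's real tensor types:

```python
# Illustrative sketch only -- DummyTensor is a stand-in, not a fastai class.
class DummyTensor:
    pass

# Old style: metadata kept in the special `_meta` dict attribute
old = DummyTensor()
old._meta = {'img_size': (224, 224)}

# New style (this release onward): metadata stored directly as a
# standard Python attribute on the tensor object
new = DummyTensor()
new.img_size = (224, 224)

print(new.img_size)
```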
This version was released after 2.1.0, and adds fastcore 1.3 compatibility, whilst maintaining PyTorch 1.6 compatibility. It has no new features or bug fixes.
The next version of fastai will be 2.1. It will require PyTorch 1.7, which has significant foundational changes. It should not require any code changes except for people doing sophisticated tensor subclassing work, but nonetheless we recommend testing carefully. Therefore, we recommend pinning your fastai version to `<2.1` if you are not able to fully test your fastai code when the new version comes out.
- pin pytorch (`<1.7`) and torchvision (`<0.8`) requirements (#2915)
- Add version pin for fastcore
- Remove version pin for sentencepiece
- added support for tb projector word embeddings (#2853), thanks to @floleuerer
- Added ability to have variable length draw (#2845), thanks to @marii-moe
- add pip upgrade cell to all notebooks, to ensure colab has current fastai version (#2843)
- loss functions were moved to `loss.py` (#2843)
- new callback event: `after_create` (#2842)
  - This event runs after a `Learner` is constructed. It's useful for initial setup which isn't needed for every `fit`, but just once for each `Learner` (such as setting initial defaults).
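A minimal sketch of the idea, using hypothetical stand-in classes rather than fastai's real `Learner`/`Callback` API:

```python
# Hypothetical stand-ins to illustrate when `after_create` fires;
# fastai's real Learner and Callback classes are more involved.
class Callback:
    def after_create(self): pass

class InitDefaults(Callback):
    def __init__(self): self.n_calls = 0
    def after_create(self): self.n_calls += 1  # one-time setup hook

class Learner:
    def __init__(self, cbs):
        self.cbs = cbs
        for cb in cbs:
            cb.learn = self
            cb.after_create()  # fires once, right after construction

    def fit(self, n_epoch):
        pass  # training loop; `after_create` is NOT re-run here

cb = InitDefaults()
learn = Learner([cb])
learn.fit(1)
learn.fit(1)
assert cb.n_calls == 1  # ran once per Learner, not once per fit
```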
- Modified XResNet to support Conv1d / Conv3d (#2744), thanks to @floleuerer
  - Supports different input dimensions, kernel sizes and strides (added parameters `ndim`, `ks`, `stride`). Tested with fastai_audio and fastai time series with promising results.
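Roughly, the `ndim` parameter selects the convolution dimensionality, along these lines (an illustrative sketch; the names and structure here are not fastai's actual code):

```python
# Illustrative sketch of picking a conv variant by `ndim`; fastai's
# XResNet does the analogous selection over torch.nn Conv1d/2d/3d.
def conv_spec(ndim=2, ks=3, stride=1):
    assert ndim in (1, 2, 3), "ndim must be 1, 2 or 3"
    return {"layer": f"Conv{ndim}d", "kernel_size": ks, "stride": stride}

# e.g. audio (1d) vs. images (2d) vs. volumes (3d)
print(conv_spec(ndim=1, ks=5))
```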
- Undo breaking `num_workers` fix (#2804)
  - Some users found the recent addition of `num_workers` to inference functions was causing problems, particularly on Windows. This PR reverts that change, until we find a more reliable way to handle `num_workers` for inference.
- `learn.tta()` fails on a learner imported with `load_learner()` (#2764)
- `learn.summary()` crashes out on 2nd transfer learning (#2735)
- Undo breaking `num_workers` fix (#2804)
- Fix `cont_cat_split` for multi-label classification (#2759)
- fastbook error: "index 3 is out of bounds for dimension 0 with size 3" (#2792)
- update for fastcore 1.0.5 (#2775)
- "Remove pandas min version requirement" (#2765)
- Modify XResNet to support Conv1d / Conv3d (#2744)
  - Also supports different input dimensions, kernel sizes and strides (added parameters `ndim`, `ks`, `stride`).
- Add support for multidimensional arrays for RNNDropout (#2737)
- `MCDropoutCallback` to enable Monte Carlo Dropout in fastai (#2733)
  - A new callback to enable Monte Carlo Dropout in fastai in the `get_preds` method. Monte Carlo Dropout is simply enabling dropout during inference. Calling `get_preds` multiple times and stacking them yields a distribution of predictions that you can use to evaluate your prediction uncertainty.
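The idea can be sketched with a toy stochastic predictor in plain Python (no fastai; `predict_with_dropout` is an illustrative stand-in, not the library's API):

```python
import random

# Toy Monte Carlo Dropout: keep dropout active at "inference" time and
# stack repeated stochastic predictions to estimate uncertainty.
def predict_with_dropout(x, p=0.5):
    # inverted dropout: zero with probability p, else scale by 1/(1-p)
    return 0.0 if random.random() < p else x / (1 - p)

random.seed(0)
samples = [predict_with_dropout(1.0) for _ in range(1000)]

# the mean recovers the expected prediction, while the spread of the
# stacked samples gives a rough uncertainty estimate
mean = sum(samples) / len(samples)
spread = max(samples) - min(samples)
```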
- adjustable workers in `get_preds` (#2721)
- Initial release of v2