v0.8.2

@justinxzhao justinxzhao released this 01 Sep 14:06
· 236 commits to master since this release

What's Changed

  • int: Rename original combiner_registry to combiner_config_registry, update decorator name by @ksbrar in #3516
  • Add mechanism to override default values for generation during model.predict() by @justinxzhao in #3520
  • [feat] Support for numeric date feature inputs by @jeffkinnison in #3517
  • Add new synthesized response column for text output features during postprocessing by @arnavgarg1 in #3521
  • Disable flaky twitter bots dataset loading test. by @justinxzhao in #3439
  • Add test that verifies that the generation config passed in at model.predict() is used correctly. by @justinxzhao in #3523
  • Move loss metric to same device as inputs by @Infernaught in #3522
  • Add comment about batch size tuning by @arnavgarg1 in #3526
  • Ensure user sets backend to local w/ quantization by @Infernaught in #3524
  • README: Update LLM fine-tuning config by @arnavgarg1 in #3530
  • Revert "Ensure user sets backend to local w/ quantization (#3524)" by @tgaddair in #3531
  • Revert "Ensure user sets backend to local w/ quantization" for release-0.8 branch and upgrade version to 0.8.1.post1 by @justinxzhao in #3532
  • Improve observability during LLM inference by @arnavgarg1 in #3536
  • [bug] Pin pydantic to < 2.0 by @jeffkinnison in #3537
  • [bug] Support preprocessing datetime.date date features by @jeffkinnison in #3534
  • Remove obsolete prompt tuning example. by @justinxzhao in #3540
  • Add Ludwig 0.8 notebook to the README by @arnavgarg1 in #3542
  • Add effective_batch_size to auto-adjust gradient accumulation by @tgaddair in #3533
  • Refactor evaluation metrics to support decoded generated text metrics like BLEU and ROUGE. by @justinxzhao in #3539
  • Fix sequence generator test. by @justinxzhao in #3546
  • Revert "Add Cosine Annealing LR scheduler as a decay method (#3507)" by @justinxzhao in #3545
  • Set default max_sequence_length to None for LLM text input/output features by @arnavgarg1 in #3547
  • Add skip_all_evaluation as a mechanism to skip all evaluation. by @justinxzhao in #3543
  • Roll-forward with fixes: Fix interaction between scheduler.step() and gradient accumulation steps, refactor schedulers to use LambdaLR, and add cosine annealing LR scheduler as a decay method. by @justinxzhao in #3555
  • fix: Move model to the correct device for eval by @jeffkinnison in #3554
  • Report loss in tqdm to avoid log spam by @tgaddair in #3559
  • Wrap each metric update in try/except. by @justinxzhao in #3562
  • Move DDP model to device if it hasn't been wrapped yet by @tgaddair in #3566
  • ensure that there are enough colors to match the score index in visua… by @thelinuxkid in #3560
  • Pin Transformers to 4.31.0 by @arnavgarg1 in #3569
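
Among the changes above, #3533 introduces effective_batch_size, which lets Ludwig auto-adjust gradient accumulation to reach a target batch size. A minimal sketch of how this might look in a trainer config (the field name comes from the PR title; the exact schema, placement, and `auto` interactions are assumptions, not verified against the 0.8.2 docs):

```yaml
trainer:
  # Target effective batch size; per #3533, gradient accumulation
  # steps are adjusted automatically so that
  # batch_size × gradient_accumulation_steps reaches this value.
  effective_batch_size: 128
  batch_size: auto
```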

Full Changelog: v0.8.1...v0.8.2