Releases: ludwig-ai/ludwig

v0.8.5

09 Oct 21:39

Full Changelog: v0.8.4...v0.8.5

v0.8.4

19 Sep 16:20

Full Changelog: v0.8.3...v0.8.4

v0.8.3

12 Sep 01:05

What's Changed

  • Add test to show global_max_sequence_length can never exceed an LLM's context length by @arnavgarg1 in #3548
  • WandB: Add metric logging support on eval end and epoch end by @arnavgarg1 in #3586
  • schema: Add prompt validation check by @ksbrar in #3564
  • Unpin Transformers for CodeLlama support by @arnavgarg1 in #3592
  • Add support for paged optimizers (Adam, AdamW), 8-bit optimizers, and new optimizers LARS, LAMB, and LION (first sketch after this list) by @arnavgarg1 in #3588
  • fix: Failure in TabTransformer Combiner Unit test by @jimthompson5802 in #3596
  • fix: Move target tensor to model output device in check_module_parameters_updated by @jeffkinnison in #3567
  • Allow users to specify a Hugging Face link or local path to pretrained LoRA weights (second sketch after this list) by @Infernaught in #3572
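
A minimal sketch of selecting one of the new optimizers through the trainer section of a Ludwig config, expressed as a Python dict. The type string `paged_adam` is an assumption inferred from the PR title; check the trainer schema of your installed version for the confirmed optimizer names.

```python
# Hypothetical sketch: choosing a paged optimizer via the Ludwig config.
# "paged_adam" is an assumed type string, not a confirmed name.
from ludwig.api import LudwigModel

config = {
    "input_features": [{"name": "review", "type": "text"}],
    "output_features": [{"name": "sentiment", "type": "category"}],
    "trainer": {
        "optimizer": {"type": "paged_adam"},  # assumed optimizer name
    },
}

model = LudwigModel(config)
# model.train(dataset="train.csv")  # hypothetical dataset path
```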
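
And a sketch of pointing a LoRA adapter at pretrained weights in an LLM config. The `model_type`, `base_model`, and `adapter` keys are standard Ludwig 0.8 LLM config; the `pretrained_adapter_weights` key and the example Hub id are assumptions based on the PR title.

```python
# Hypothetical sketch: loading pretrained LoRA weights by Hub id or local path.
config = {
    "model_type": "llm",
    "base_model": "meta-llama/Llama-2-7b-hf",
    "input_features": [{"name": "prompt", "type": "text"}],
    "output_features": [{"name": "completion", "type": "text"}],
    "adapter": {
        "type": "lora",
        # Assumed key name; the value may be a HF Hub id or a local directory.
        "pretrained_adapter_weights": "someuser/llama-2-7b-lora",  # hypothetical id
    },
}
```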

Full Changelog: v0.8.2...v0.8.3

v0.8.2

01 Sep 14:06

What's Changed

  • int: Rename original combiner_registry to combiner_config_registry, update decorator name by @ksbrar in #3516
  • Add mechanic to override default values for generation during model.predict() (first sketch after this list) by @justinxzhao in #3520
  • [feat] Support for numeric date feature inputs by @jeffkinnison in #3517
  • Add new synthesized response column for text output features during postprocessing by @arnavgarg1 in #3521
  • Disable flaky twitter bots dataset loading test. by @justinxzhao in #3439
  • Add test that verifies that the generation config passed in at model.predict() is used correctly. by @justinxzhao in #3523
  • Move loss metric to same device as inputs by @Infernaught in #3522
  • Add comment about batch size tuning by @arnavgarg1 in #3526
  • Ensure user sets backend to local w/ quantization by @Infernaught in #3524
  • README: Update LLM fine-tuning config by @arnavgarg1 in #3530
  • Revert "Ensure user sets backend to local w/ quantization (#3524)" by @tgaddair in #3531
  • Revert "Ensure user sets backend to local w/ quantization" for release-0.8 branch and upgrade version to 0.8.1.post1 by @justinxzhao in #3532
  • Improve observability during LLM inference by @arnavgarg1 in #3536
  • [bug] Pin pydantic to < 2.0 by @jeffkinnison in #3537
  • [bug] Support preprocessing datetime.date date features by @jeffkinnison in #3534
  • Remove obsolete prompt tuning example. by @justinxzhao in #3540
  • Add Ludwig 0.8 notebook to the README by @arnavgarg1 in #3542
  • Add effective_batch_size to auto-adjust gradient accumulation (second sketch after this list) by @tgaddair in #3533
  • Refactor evaluation metrics to support decoded generated text metrics like BLEU and ROUGE. by @justinxzhao in #3539
  • Fix sequence generator test. by @justinxzhao in #3546
  • Revert "Add Cosine Annealing LR scheduler as a decay method (#3507)" by @justinxzhao in #3545
  • Set default max_sequence_length to None for LLM text input/output features by @arnavgarg1 in #3547
  • Add skip_all_evaluation as a mechanic to skip all evaluation. by @justinxzhao in #3543
  • Roll-forward with fixes: Fix interaction between scheduler.step() and gradient accumulation steps, refactor schedulers to use LambdaLR, and add cosine annealing LR scheduler as a decay method. by @justinxzhao in #3555
  • fix: Move model to the correct device for eval by @jeffkinnison in #3554
  • Report loss in tqdm to avoid log spam by @tgaddair in #3559
  • Wrap each metric update in try/except. by @justinxzhao in #3562
  • Move DDP model to device if it hasn't been wrapped yet by @tgaddair in #3566
  • ensure that there are enough colors to match the score index in visua… by @thelinuxkid in #3560
  • Pin Transformers to 4.31.0 by @arnavgarg1 in #3569
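
Two of the items above are worth short sketches. First, overriding generation defaults at model.predict() (#3520): the `generation_config` keyword and the keys below are assumptions inferred from the PR title, not a confirmed signature.

```python
# Sketch: passing HF-style generation overrides at predict time.
# The `generation_config` keyword is an assumption; consult the
# LudwigModel.predict reference for the actual parameter name.
from ludwig.api import LudwigModel

model = LudwigModel.load("results/api_experiment_run/model")  # hypothetical path

predictions, _ = model.predict(
    dataset="examples.csv",  # hypothetical dataset
    generation_config={      # assumed keyword
        "max_new_tokens": 64,
        "temperature": 0.2,
    },
)
```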
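
Second, effective_batch_size (#3533): a sketch under the assumption that the trainer derives the gradient accumulation steps from the ratio of the effective to the per-step batch size.

```python
# Sketch: assuming effective_batch_size is approximately
# batch_size * gradient_accumulation_steps * num_training_workers,
# setting it lets the trainer derive the accumulation steps itself.
config = {
    "input_features": [{"name": "prompt", "type": "text"}],
    "output_features": [{"name": "completion", "type": "text"}],
    "trainer": {
        "batch_size": 8,
        "effective_batch_size": 128,  # accumulation steps auto-derived
    },
}
```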

Full Changelog: v0.8.1...v0.8.2

v0.8.1.post1

15 Aug 16:26

What's Changed

  • Revert "Ensure user sets backend to local w/ quantization" for release-0.8 branch and upgrade version to 0.8.1.post1 by @justinxzhao in #3532

Full Changelog: v0.8.1...v0.8.1.post1

v0.8.1

v0.8: Low Code Framework to Efficiently Build Custom LLMs on Your Data

09 Aug 04:55
efed598

Full Release Blog Post Here: https://predibase.com/blog/ludwig-v0-8-open-source-toolkit-to-build-and-fine-tune-custom-llms-on-your-data


v0.7.5

07 Aug 19:59
c020e64

Full Changelog: v0.7.4...v0.7.5

v0.7.4

23 Mar 15:29
7b72de5

What's Changed

  • Tagger decoder config override and auxiliary validation checks (#3222)

Full Changelog: v0.7.3...v0.7.4

v0.7.3

17 Mar 20:43

What's Changed

  • Support for PyTorch 2.0 via trainer.compile: true (#3246); see the sketch after this list
  • Fix ludwig docker (#3264)
  • Add env var LUDWIG_SCHEMA_VALIDATION_POLICY to change marshmallow validation strictness (#3226); also shown in the sketch after this list
  • Add sequence_length capability (#3259)
  • Persist Dask Dataframe after binary image/audio reads (#3241)
  • Replace NaN in timeseries rows with padding_value (#3238)
  • Remove partial RayTune checkpoints for trials that have not completed because of forceful termination (#3232)
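
A combined sketch for two of the items above: enabling PyTorch 2.0 compilation via trainer.compile (#3246) and loosening schema validation with the LUDWIG_SCHEMA_VALIDATION_POLICY environment variable (#3226). The env var value shown is an assumption; #3226 defines the accepted policy strings.

```python
# Sketch: PyTorch 2.0 compilation plus relaxed marshmallow validation.
import os

# Assumed value; see #3226 for the actual accepted policy strings.
os.environ["LUDWIG_SCHEMA_VALIDATION_POLICY"] = "warn"

config = {
    "input_features": [{"name": "image_path", "type": "image"}],
    "output_features": [{"name": "label", "type": "category"}],
    "trainer": {"compile": True},  # requires PyTorch >= 2.0
}
```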

Full Changelog: v0.7.2...v0.7.3