Releases: transformerlab/transformerlab-app
0.19.1
Highlights
- Support for schedulers when using diffusion
- Support for minP as a generation parameter
- Several fixes for Docker support; we now use a single Dockerfile for all platforms
- Training jobs will now use the first unused split in a dataset if there is no "train" split
- Upgrades to the MLX LoRA trainer to better support Qwen models
- Fix for embedding dataset creation using the Synthesizer Docs plugin
- Ability to run a model on an inference engine even if its architecture is not listed as supported
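For reference, min-P sampling keeps only the tokens whose probability is at least `min_p` times the probability of the most likely token, then renormalizes what remains. A minimal self-contained sketch of the technique (illustrative only; the function name and example logits are invented and this is not Transformer Lab's actual implementation):

```python
import math

def min_p_filter(logits, min_p=0.05):
    """Apply min-P filtering to a logit distribution: keep only tokens
    whose probability is at least min_p times the top token's probability,
    then renormalize the survivors."""
    # Numerically stable softmax
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Threshold scales with the most likely token's probability
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    s = sum(kept)
    return [p / s for p in kept]

# With min_p=0.2, only tokens within 5x of the top probability survive
probs = min_p_filter([2.0, 1.0, -1.0, -3.0], min_p=0.2)
```

Unlike a fixed top-p cutoff, the threshold adapts: when the model is confident, low-probability tail tokens are pruned aggressively; when the distribution is flat, more candidates remain in play.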
Details
- schedulers drop-down in Diffusion tab by @Sourenm in #539
- Add Minimum P sampling for all generations by @deep1401 in #538
- Model architecture override by @mina-parham in #519
- Fix issues related to unsupported engine checkbox by @mina-parham in #546
- Add model architecture on server start by @deep1401 in #540
- Moved entire Workflows router under Experiment by @aahaanmaini in #531
- Fix default inference engine logic for new experiment by @deep1401 in #554
- Fix errors showing up in console from Import Models modal by @dadmobile in #548
- Add Alert Message on Workflow Start by @aahaanmaini in #553
Full Changelog: v0.19.0...v0.19.1
0.19.0
This release introduces support for diffusion models, including inference, training, and improved management of image datasets.
What's Changed
- Make Workflows tab visible by @aahaanmaini in #517
- V1 of adaptors search and install for models by @Sourenm in #514
- Added creation and preview of local image datasets with in place caption editing and searching abilities by @Sourenm in #498
- fix for drag and drop for image datasets by @Sourenm in #523
- Workflows belong to Experiments by @aahaanmaini in #525
- Track job queue events by @aliasaria in #516
- Fix errant API calls on Model Zoo by @dadmobile in #524
- Export experiment to recipe JSON by @aahaanmaini in #520
- Fix information for GPU metrics on Header in Mac by @deep1401 in #532
- Added the same dataset preview logic that exists in the Dataset tab to the Generate tab by @Sourenm in #533
- Add Diffusion by @deep1401 in #488
- Fix/separate-use-analytics by @aliasaria in #536
Full Changelog: v0.18.0...v0.19.0
0.18.0
The most exciting new feature is support for multimodal model training. This includes a VLM trainer and the ability to work with vision datasets.
Notable bug fixes:
- Canceling downloads is much more responsive
- Fixed a LlamaIndex plugin setup error that caused issues for some RAG users
- Removed the limit on the number of samples used for training in the SFT Llama Factory plugin
- Eval and Generate jobs no longer show as COMPLETE when they were STOPPED
What's Changed
- v1 of the Recipes Modal (hidden in DEV_MODE) by @aliasaria in #495
- Preview Image Data in Datasets Tab by @deep1401 in #470
- Changed the visuals in Download Progress Bar by @Sourenm in #491
- Mark plugin as deprecated if supported_hardware_architectures is an empty list by @deep1401 in #494
- Add MCP Server support in Tools tab within Interact tab by @deep1401 in #468
- Recipe navigates to Notes page by @aahaanmaini in #509
Full Changelog: v0.17.1...v0.18.0
0.17.1
What's Changed
Full Changelog: v0.17.0...v0.17.1
0.17.0
- AMD GPU Support!
- New Model Zoo groups models to make them easier to navigate
- Export screen updated to match the style of other job pages
- Dataset fixes: better viewing of ChatML datasets, and fixed issues using some of our gallery datasets
What's Changed
- Add a tab for running Sweeps by @deep1401 in #382
- Add/model-groups-with-tags-search-bar by @Sourenm in #477
- Fixed the Preview Dataset feature for ChatML style datasets by @Sourenm in #479
- Fix tab for model zoo sidebar by @deep1401 in #482
- Recipe Modal by @aliasaria in #484
- fix model store option default in foundation section by @mina-parham in #486
- Make the 'New' button dropdown in the Train section scrollable by @mina-parham in #487
- Make Exporter Page Match Others by @aahaanmaini in #467
- receipes: fake loading of dependencies by @aliasaria in #490
- Fix Local Models Download bar, inference destructuring and experimental plugin card overflows by @deep1401 in #489
- Use datasets.path instead of datasets.name while creating a task from recipe by @deep1401 in #492
- Add support for the new Claude 4 models by @deep1401 in #493
- Add support for AMD GPUs by @deep1401 in #425
- One liner change for correct machine type by @deep1401 in #496
New Contributors
- @mina-parham made their first contribution in #486
Full Changelog: v0.16.4...v0.17.0
0.16.4
What's Changed
- Github build upset about outputs in publish.yml by @dadmobile in #475
Full Changelog: v0.16.3...v0.16.4
0.16.1
0.16.0
- Major update to CUDA and torch to add support for NVIDIA 50-series GPUs
- New MLX PPO Trainer plugin!
- Ability to run 4-bit bitsandbytes models in FastChat
Bug fixes
- Viewing local and generated datasets no longer throws an error
- Ability to manually set the fallback context length used by MLX
- Names of task templates now update correctly
What's Changed
- Fix internal error on Datasets tab due to name field by @deep1401 in #444
- Add/further sdk cleanup by @aliasaria in #447
- Add logic to filter experimental plugins by @deep1401 in #451
- Fix rendering of eval and generate configs on update by @deep1401 in #455
Full Changelog: v0.15.2...v0.16.0
0.15.2
Highlights
This build fixes an installer bug that was caused by the latest API update.
What's Changed
- Simplify transformerlab api sdk by @aliasaria in #418
- Fixed dropdown dark mode background by @aahaanmaini in #434
- If Plugin install fails, show a message by @dadmobile in #430
- Remove unneeded API call on startup by @dadmobile in #435
- Split up transformerlab-api-sdk into components by @aliasaria in #438
- Add/first-example-of-using-new-sdk by @aliasaria in #441
- Added input field to CSS by @aahaanmaini in #437
- All exporter plugins showing 'Exporting...' when one runs by @aahaanmaini in #442
- Fix info icon for training jobs by @deep1401 in #445
- Change key dependency name to transformerlab-inference to fix broken installer by @deep1401 in #446
New Contributors
- @aahaanmaini made their first contribution in #434
Full Changelog: v0.15.1...v0.15.2
0.15.1
This is a patch release with a number of bug fixes:
- Web app will auto-connect to an API running on the same machine
- Installer no longer halts if local_server.log is not accessible/writable
- Console automatically opens to show the error if the inference server fails to start correctly
- Console logger scrolls much faster
- Plugins no longer always report being unsupported for non-GPU/Mac users
- If you upload docs before installing the RAG plugin, it will index the docs for you before running your query
- You can now edit the setup script for plugins inside Transformer Lab (useful for debugging or making custom modifications to a plugin's venv)
Details
- Open console logger upon a failed inference server run by @deep1401 in #415
- Fix supported arch message for local plugins by @deep1401 in #417
- Fix error caused by deleting plugins by @deep1401 in #419
- If you are on the webapp, correctly find the local server we should first try to connect to by @aliasaria in #421
- Don't hang or crash if local_server.log is not writeable by @dadmobile in #423
- Reduce console scroll delay from 100ms to 10ms by @dadmobile in #428
Full Changelog: v0.15.0...v0.15.1