chore: bump version to 0.16.0 #393
Conversation
Coverage Report
Files without new missing coverage
274 files skipped due to complete coverage. Coverage success: total of 98.12% is above 98.12% 🎉



Changelog
Added
- `edsnlp.tune` for hyperparameter tuning using Optuna. This feature allows users to efficiently optimize model parameters with options for single-phase or two-phase tuning strategies. It includes support for parameter importance analysis, visualization, pruning, and automatic handling of GPU time budgets.
- `ScheduledOptimizer` (e.g., `@core: "optimizer"`) now supports importing optimizers using their qualified name (e.g., `optim: "torch.optim.Adam"`); see the sketch after this list.
- `eds.ner_crf` now computes confidence scores on spans.
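As an illustration of the qualified-name import mentioned above, here is a minimal sketch; the `edsnlp.training.optimizer` import path and the `module` / `total_steps` parameter names are assumptions based on edsnlp's training utilities, not part of this PR.

```python
import torch
from edsnlp.training.optimizer import ScheduledOptimizer  # import path assumed

# Stand-in for a trainable pipeline component.
model = torch.nn.Linear(8, 2)

optimizer = ScheduledOptimizer(
    optim="torch.optim.Adam",  # optimizer referenced by its qualified name
    module=model,              # assumed parameter: module whose parameters are optimized
    total_steps=1000,          # assumed parameter: used by learning-rate schedules
)
```

In a training config, this would correspond to setting `optim: "torch.optim.Adam"` under the `@core: "optimizer"` block quoted above.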
Changed

- The `eds.ner_crf` loss is now computed as the mean over the words instead of the sum. This change is compatible with multi-GPU training.

Fixed
- Support packaging with poetry 2.0
- Solve pickling issues with multiprocessing when PyTorch is installed
- Allow deep attributes like `a.b.c` for `span_attributes` in the Standoff and OMOP doc2dict converters
- Fixed various aspects of stream shuffling (see the sketch after this list), including:
  - shuffling with `shuffle=True`
  - `stream.shuffle()` with no seed
  - `stream.shuffle(batch_size=...)` when the batch size is not compatible with the stream
- `eds.split` now keeps doc and span attributes in the sub-documents.
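Below is a minimal, hedged sketch of the stream shuffling and `eds.split` behaviour touched by these fixes; the file path, the `converter="omop"` argument, and the `regex` parameter of `eds.split` are illustrative assumptions rather than values taken from this PR.

```python
import edsnlp
import edsnlp.pipes as eds

# Pipeline whose eds.split pipe produces one sub-document per split point;
# per the fix above, doc and span attributes are now carried over to the
# sub-documents.
nlp = edsnlp.blank("eds")
nlp.add_pipe(eds.split(regex="\n\n"))  # the regex parameter is an assumption

# Read and shuffle a stream of documents; calling shuffle() with no seed no
# longer interferes with the reader's own random state.
stream = edsnlp.data.read_parquet("data/notes.parquet", converter="omop")
stream = stream.map_pipeline(nlp)
stream = stream.shuffle()

for doc in stream:
    ...
```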