Fix lint issues #5305
Merged
Conversation
Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>
github-actions bot added the area/build (Build and test infrastructure for MLflow) and rn/none (List under Small Changes in Changelogs) labels on Jan 25, 2022
WeichenXu123 approved these changes on Jan 25, 2022
LGTM
BenWilson2 added a commit that referenced this pull request on Feb 1, 2022
* onnxruntime InferenceSession fix: Some distributions of onnxruntime (e.g. onnxruntime-gpu) require the `providers` argument when calling `InferenceSession`. The package import does not differentiate which architecture-specific version is installed, since all are imported as `onnxruntime`. The onnxruntime documentation states that from v1.9.0 some distributions require the providers list when creating an `InferenceSession`. The nested try/catch therefore first attempts to create an inference session with just the model path, as pre-v1.9.0; if that fails, it retries with the providers list in the `InferenceSession` call. At the moment this covers just CUDA and CPU, and should probably be expanded.
* Refactor onnxruntime call and providers list: The onnxruntime call now catches `ValueError` specifically. The initial attempt at loading the model uses just the path to the ONNX file; on `ValueError`, the v1.9.0-style call with providers is used. The providers list is now obtained from the MLmodel file, whose metadata is assumed to be in the same directory as the ONNX model file. This required updating the `save_model()` function.
* onnxruntime providers argument added to log model: For the ONNX model flavor, added the ability to supply an onnxruntime execution-providers list. The `onnx_execution_providers` argument is passed through the `log_model()` function and on to `save_model()` via kwargs. Flavor metadata saving was updated with a providers field; if none is specified, the default is saved to file. Tested and works for MLmodel files both with and without the new field.
* Code comment formatting changes (three commits). Co-authored-by: Ben Wilson <39283302+BenWilson2@users.noreply.github.com>
* Black reformat changes to onnx.py: Ran Black to check the formatting of onnx.py.
* Fix lint issues (#5305)
* Remove num examples (#5304)
* Run `apt-get update` before installing `libopenblas-dev` (#5313)
* Removed catch-all exception and linting issue: Removed the catch-all exception around the `InferenceSession` call and fixed a linting issue.
* Autoformat: https://github.com/mlflow/mlflow/actions/runs/1779614231

Signed-off-by: Ed Morris <ecm200@gmail.com>
Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>
Signed-off-by: dbczumar <corey.zumar@databricks.com>
Signed-off-by: mlflow-automation <mlflow-automation@users.noreply.github.com>
Co-authored-by: Ben Wilson <39283302+BenWilson2@users.noreply.github.com>
Co-authored-by: Harutaka Kawamura <hkawamura0130@gmail.com>
Co-authored-by: Corey Zumar <39497902+dbczumar@users.noreply.github.com>
Co-authored-by: mlflow-automation <mlflow-automation@users.noreply.github.com>
mehtayogita pushed a commit to mehtayogita/mlflow that referenced this pull request on Feb 3, 2022
mehtayogita added a commit that referenced this pull request on Feb 4, 2022
* Condense run comparison table (#5306): iterative UI work, including an antd tooltip, Safari support, column widths adjusted by state, visibility controlled via state, a React ref, JS lint fixes, and test updates. Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
* onnxruntime InferenceSession fix to bug #5308 (#5317): the squashed onnxruntime commit series listed under the Feb 1 commit above. Signed-off-by: Ed Morris <ecm200@gmail.com>
* Make it easy to find how to log model signature: Add one line in the model signature introduction section, with a link to the detailed section.
* Fix the link to how to log models with signatures: Verified by running the Sphinx build locally.
* Update docs/source/models.rst. Co-authored-by: Ankit Mathur <52183359+ankit-db@users.noreply.github.com>

Signed-off-by: Yogita Mehta <yogita.mehta@databricks.com>
Co-authored-by: WeichenXu <weichen.xu@databricks.com>
Co-authored-by: Dr Ed Morris <34489160+ecm200@users.noreply.github.com>
Co-authored-by: Ben Wilson <39283302+BenWilson2@users.noreply.github.com>
Co-authored-by: Harutaka Kawamura <hkawamura0130@gmail.com>
Co-authored-by: Corey Zumar <39497902+dbczumar@users.noreply.github.com>
Co-authored-by: mlflow-automation <mlflow-automation@users.noreply.github.com>
Co-authored-by: Ankit Mathur <52183359+ankit-db@users.noreply.github.com>
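The "providers list obtained from the MLmodel file" step in the onnx commits amounts to reading the flavor configuration saved next to the ONNX file and falling back to a default when the field is absent. A minimal sketch, assuming the field name `providers` and starting from the already-parsed flavor dict (the MLmodel file itself is YAML); key names are illustrative, not mlflow's exact schema:

```python
# Hedged sketch: resolve execution providers from an ONNX flavor config.
# We start from the parsed dict to keep the example dependency-free;
# in mlflow the dict would come from the MLmodel YAML file saved
# alongside the .onnx model file.

DEFAULT_PROVIDERS = ["CUDAExecutionProvider", "CPUExecutionProvider"]


def resolve_providers(flavor_conf):
    """Return the providers list from the flavor config, or the default.

    Handles MLmodel files written both with and without the new
    providers field, matching the backward-compatibility note in the
    commit message.
    """
    return flavor_conf.get("providers", DEFAULT_PROVIDERS)


# An MLmodel written by a newer save_model() carries the field...
new_conf = {"data": "model.onnx", "providers": ["CPUExecutionProvider"]}
# ...while an older file simply omits it.
old_conf = {"data": "model.onnx"}

print(resolve_providers(new_conf))  # ['CPUExecutionProvider']
print(resolve_providers(old_conf))  # default CUDA/CPU list
```

Defaulting at read time is what makes old MLmodel files keep working: no migration of saved models is needed when the new field is introduced.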
Signed-off-by: harupy <17039389+harupy@users.noreply.github.com>
What changes are proposed in this pull request?
Fix a few lint issues.
How is this patch tested?
Lint check
Does this PR change the documentation?
Check the ci/circleci: build_doc status. If it's successful, proceed to the next step; otherwise, fix it. Click Details on the right to open the CircleCI job page, then open the Artifacts tab and preview docs/build/html/index.html.
Release Notes
Is this a user-facing change?
(Details in 1-2 sentences. You can just refer to another PR with a description if this PR is part of a larger change.)
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/artifacts: Artifact stores and artifact logging
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages
- area/examples: Example code
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/projects: MLproject format, project running backends
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/server-infra: MLflow Tracking server backend
- area/tracking: Tracking Service, tracking client APIs, autologging

Interface
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
- area/windows: Windows support

Language
- language/r: R APIs and clients
- language/java: Java APIs and clients
- language/new: Proposals for new client languages

Integrations
- integrations/azure: Azure and Azure ML integrations
- integrations/sagemaker: SageMaker integrations
- integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:
- rn/breaking-change: The PR will be mentioned in the "Breaking Changes" section
- rn/none: No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/feature: A new user-facing feature worth mentioning in the release notes
- rn/bug-fix: A user-facing bug fix worth mentioning in the release notes
- rn/documentation: A user-facing documentation change worth mentioning in the release notes