Fix api docs (#2508)
* fixing many errors

* fixing many errors

* fixing many errors

* fixing many errors

* changing vision__properties

* fixing redirects

* fixing labels

* fixing labels

* fixing tf

* fixing mf

* fixing mf

* update welcome and sub package indices

---------

Co-authored-by: shir22 <shir@deepchecks.com>
ItayGabbay and shir22 committed May 9, 2023
1 parent 5e60e95 commit 7e0365b
Showing 31 changed files with 86 additions and 72 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -119,7 +119,7 @@ Head over to one of our following quickstart tutorials, and have deepchecks runn
- [Data Integrity Quickstart (avocado sales data)](
https://docs.deepchecks.com/stable/user-guide/tabular/auto_quickstarts/plot_quick_data_integrity.html?utm_source=github.com&utm_medium=referral&utm_campaign=readme&utm_content=try_it_out)
- [Model Evaluation Quickstart (wine quality data)](
-https://docs.deepchecks.com/en/stable/user-guide/tabular/auto_quickstarts/plot_quickstart_in_5_minutes.html?utm_source=github.com&utm_medium=referral&utm_campaign=readme&utm_content=try_it_out)
+https://docs.deepchecks.com/stable/user-guide/tabular/auto_quickstarts/plot_quickstart_in_5_minutes.html?utm_source=github.com&utm_medium=referral&utm_campaign=readme&utm_content=try_it_out)

> **Recommended - download the code and run it locally** on the built-in dataset and (optional) model, or **replace them with your own**.
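The quickstarts in this README boil down to a few lines of code. A minimal, hedged sketch of the local run the blockquote above recommends — the file name and `target` label column are placeholders, not from the README:

```python
# Minimal local-quickstart sketch; 'my_data.csv' and the 'target' label column
# are placeholders — swap in your own dataset, as the README suggests.
import pandas as pd
from deepchecks.tabular import Dataset
from deepchecks.tabular.suites import data_integrity

df = pd.read_csv('my_data.csv')
ds = Dataset(df, label='target')              # wrap the frame with label metadata
result = data_integrity().run(ds)             # run the built-in integrity suite
result.save_as_html('integrity_report.html')  # open the report in a browser
```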
@@ -230,7 +230,7 @@ covering all kinds of common issues, such as:
- Conflicting Labels

and [many more
-checks](https://docs.deepchecks.com/stable/checks_gallery/tabular.html?utm_source=github.com&utm_medium=referral&utm_campaign=readme&utm_content=key_concepts__check).
+checks](https://docs.deepchecks.com/stable/tabular/index.html?utm_source=github.com&utm_medium=referral&utm_campaign=readme&utm_content=key_concepts__check).


Each check can have two types of
14 changes: 7 additions & 7 deletions deepchecks/nlp/metric_utils/token_classification.py
@@ -31,15 +31,15 @@ def get_scorer_dict(
) -> t.Dict[str, t.Callable[[t.List[str], t.List[str]], float]]:
"""Return a dict of scorers for token classification.
-Parameters:
+Parameters
-----------
-mode: str, [None (default), `strict`].
-    if ``None``, the score is compatible with conlleval.pl. Otherwise,
-    the score is calculated strictly.
-scheme: Token, [IOB2, IOE2, IOBES]
-suffix: bool, False by default.
+mode: str, [None (default), `strict`].
+    if ``None``, the score is compatible with conlleval.pl. Otherwise,
+    the score is calculated strictly.
+scheme: Token, [IOB2, IOE2, IOBES]
+suffix: bool, False by default.
-Returns:
+Returns
--------
A dict of scorers.
"""
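For orientation, the scorer dict documented above can be consumed roughly like this — a hedged sketch based only on the signature shown in the diff; the tag sequences are illustrative and the exact expected input shape may differ:

```python
# Hedged usage sketch for get_scorer_dict; the IOB2 tag sequences below are
# illustrative, not part of the documented API.
from deepchecks.nlp.metric_utils.token_classification import get_scorer_dict

y_true = ['B-PER', 'I-PER', 'O', 'B-LOC']   # reference tags
y_pred = ['B-PER', 'I-PER', 'O', 'B-ORG']   # predicted tags

scorers = get_scorer_dict()                  # default: conlleval.pl-compatible scoring
for name, scorer in scorers.items():
    print(name, scorer(y_true, y_pred))
```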
4 changes: 2 additions & 2 deletions deepchecks/tabular/suites/default_suites.py
@@ -251,7 +251,7 @@ def model_evaluation(alternative_scorers: Dict[str, Callable] = None,
- :class:`~deepchecks.tabular.checks.model_evaluation.RocReport`
* - :ref:`tabular__confusion_matrix_report`
- :class:`~deepchecks.tabular.checks.model_evaluation.ConfusionMatrixReport`
-* - :ref:`tabular__weak_segment_performance`
+* - :ref:`tabular__weak_segments_performance`
- :class:`~deepchecks.tabular.checks.model_evaluation.WeakSegmentPerformance`
* - :ref:`tabular__prediction_drift`
- :class:`~deepchecks.tabular.checks.model_evaluation.PredictionDrift`
@@ -356,7 +356,7 @@ def production_suite(task_type: str = None,
- :class:`~deepchecks.tabular.checks.model_evaluation.RocReport`
* - :ref:`tabular__confusion_matrix_report`
- :class:`~deepchecks.tabular.checks.model_evaluation.ConfusionMatrixReport`
-* - :ref:`tabular__weak_segment_performance`
+* - :ref:`tabular__weak_segments_performance`
- :class:`~deepchecks.tabular.checks.model_evaluation.WeakSegmentPerformance`
* - :ref:`tabular__regression_error_distribution`
- :class:`~deepchecks.tabular.checks.model_evaluation.RegressionErrorDistribution`
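As context for the check tables in these docstrings, both suites can be built and run directly. A hedged sketch, assuming `train_ds`, `test_ds` (deepchecks `Dataset` objects) and a fitted `model` already exist:

```python
# Hedged sketch of running the documented suites; train_ds, test_ds and model
# are assumed to be pre-built Dataset objects and a fitted estimator.
from deepchecks.tabular.suites import model_evaluation, production_suite

eval_result = model_evaluation().run(train_dataset=train_ds,
                                     test_dataset=test_ds,
                                     model=model)
eval_result.save_as_html('model_evaluation.html')

prod_result = production_suite().run(train_dataset=train_ds,
                                     test_dataset=test_ds,
                                     model=model)
```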
@@ -54,7 +54,7 @@ class AbstractPropertyOutliers(SingleDatasetCheck):
- ``'class_id'`` - for properties that return the class_id. This is used because these
properties are later matched with the ``VisionData.label_map``, if one was given.
-For more on image / label properties, see the guide about :ref:`vision_properties_guide`.
+For more on image / label properties, see the guide about :ref:`vision__properties_guide`.
property_input_type : PropertiesInputType , default: PropertiesInputType.IMAGES
The type of input to the properties, required for caching the results after first calculation.
n_show_top : int , default: 3
@@ -38,7 +38,7 @@ class ImagePropertyOutliers(AbstractPropertyOutliers):
- ``'categorical'`` - for discrete, non-ordinal outputs. These can still be numbers,
but these numbers do not have inherent value.
-For more on image / label properties, see the guide about :ref:`vision_properties_guide`.
+For more on image / label properties, see the guide about :ref:`vision__properties_guide`.
n_show_top : int , default: 3
number of outliers to show from each direction (upper limit and bottom limit)
iqr_percentiles: Tuple[int, int], default: (25, 75)
@@ -42,7 +42,7 @@ class LabelPropertyOutliers(AbstractPropertyOutliers):
- ``'class_id'`` - for properties that return the class_id. This is used because these
properties are later matched with the ``VisionData.label_map``, if one was given.
-For more on image / label properties, see the guide about :ref:`vision_properties_guide`.
+For more on image / label properties, see the guide about :ref:`vision__properties_guide`.
n_show_top : int , default: 3
number of outliers to show from each direction (upper limit and bottom limit)
iqr_percentiles: Tuple[int, int], default: (25, 75)
@@ -67,7 +67,7 @@ class PropertyLabelCorrelation(SingleDatasetCheck):
- ``'categorical'`` - for discrete, non-ordinal outputs. These can still be numbers,
but these numbers do not have inherent value.
-For more on image / label properties, see the guide about :ref:`vision_properties_guide`.
+For more on image / label properties, see the guide about :ref:`vision__properties_guide`.
n_top_properties: int, default: 5
Number of features to show, sorted by the magnitude of difference in PPS
min_pps_to_show: float, default 0.05
@@ -83,7 +83,7 @@ class PredictionDrift(TrainTestCheck, ReducePropertyMixin):
but these numbers do not have inherent value.
- ``'class_id'`` - for properties that return the class_id. This is used because these
properties are later matched with the ``VisionData.label_map``, if one was given.
-For more on image / label properties, see the guide about :ref:`vision_properties_guide`.
+For more on image / label properties, see the guide about :ref:`vision__properties_guide`.
margin_quantile_filter : float , default : 0.025
float in range [0,0.5), representing which margins (high and low quantiles) of the distribution will be filtered
out of the EMD calculation. This is done in order for extreme values not to affect the calculation
@@ -51,7 +51,7 @@ class ImageDatasetDrift(TrainTestCheck):
- 'categorical' - for discrete, non-ordinal outputs. These can still be numbers,
but these numbers do not have inherent value.
-For more on image / label properties, see the guide about :doc:`/user-guide/vision/vision_properties`
+For more on image / label properties, see the guide about :doc:`/vision/usage_guides/vision_properties`
n_top_properties : int , default: 3
Amount of properties to show ordered by domain classifier feature importance. This limit is used together
(AND) with min_feature_importance, so less than n_top_columns features can be displayed.
@@ -54,7 +54,7 @@ class ImagePropertyDrift(TrainTestCheck, ReducePropertyMixin):
- ``'categorical'`` - for discrete, non-ordinal outputs. These can still be numbers,
but these numbers do not have inherent value.
-For more on image / label properties, see the guide about :ref:`vision_properties_guide`.
+For more on image / label properties, see the guide about :ref:`vision__properties_guide`.
margin_quantile_filter: float, default: 0.025
float in range [0,0.5), representing which margins (high and low quantiles) of the distribution will be filtered
out of the EMD calculation. This is done in order for extreme values not to affect the calculation
@@ -79,7 +79,7 @@ class LabelDrift(TrainTestCheck, ReducePropertyMixin, ReduceLabelMixin):
- ``'class_id'`` - for properties that return the class_id. This is used because these
properties are later matched with the ``VisionData.label_map``, if one was given.
-For more on image / label properties, see the guide about :ref:`vision_properties_guide`.
+For more on image / label properties, see the guide about :ref:`vision__properties_guide`.
margin_quantile_filter : float, default: 0.025
float in range [0,0.5), representing which margins (high and low quantiles) of the distribution will be filtered
out of the EMD calculation. This is done in order for extreme values not to affect the calculation
@@ -73,7 +73,7 @@ class PropertyLabelCorrelationChange(TrainTestCheck):
- ``'categorical'`` - for discrete, non-ordinal outputs. These can still be numbers,
but these numbers do not have inherent value.
-For more on image / label properties, see the guide about :ref:`vision_properties_guide`.
+For more on image / label properties, see the guide about :ref:`vision__properties_guide`.
per_class : bool, default: True
boolean that indicates whether the results of this check should be calculated for all classes or per class in
label. If True, the conditions will be run per class as well.
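The vision drift checks touched above all follow the same run pattern. A hedged sketch, assuming `train_data` and `test_data` are already-loaded `VisionData` objects:

```python
# Hedged sketch of running one of the drift checks documented above;
# train_data and test_data are assumed to be deepchecks VisionData objects.
from deepchecks.vision.checks import ImagePropertyDrift

check = ImagePropertyDrift()
result = check.run(train_data, test_data)
result.save_as_html('image_property_drift.html')
```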
24 changes: 12 additions & 12 deletions deepchecks/vision/suites/default_suites.py
@@ -48,7 +48,7 @@ def train_test_validation(label_properties: List[Dict[str, Any]] = None, image_p
- :class:`~deepchecks.vision.checks.train_test_validation.ImagePropertyDrift`
* - :ref:`vision__image_dataset_drift`
- :class:`~deepchecks.vision.checks.train_test_validation.ImageDatasetDrift`
-* - :ref:`vision__property_label_correlation `
+* - :ref:`vision__property_label_correlation_change`
- :class:`~deepchecks.vision.checks.train_test_validation.PropertyLabelCorrelationChange`
Parameters
@@ -64,7 +64,7 @@ def train_test_validation(label_properties: List[Dict[str, Any]] = None, image_p
- ``'class_id'`` - for properties that return the class_id. This is used because these
properties are later matched with the ``VisionData.label_map``, if one was given.
-For more on image / label properties, see the guide about :ref:`vision_properties_guide`.
+For more on image / label properties, see the guide about :ref:`vision__properties_guide`.
image_properties : List[Dict[str, Any]], default: None
List of properties. Replaces the default deepchecks properties.
@@ -75,7 +75,7 @@ def train_test_validation(label_properties: List[Dict[str, Any]] = None, image_p
- ``'categorical'`` - for discrete, non-ordinal outputs. These can still be numbers,
but these numbers do not have inherent value.
-For more on image / label properties, see the guide about :ref:`vision_properties_guide`.
+For more on image / label properties, see the guide about :ref:`vision__properties_guide`.
**kwargs : dict
additional arguments to pass to the checks.
@@ -134,7 +134,7 @@ def model_evaluation(scorers: Union[Dict[str, Union[Callable, str]], List[Any]]
- :class:`~deepchecks.vision.checks.model_evaluation.PredictionDrift`
* - :ref:`vision__simple_model_comparison`
- :class:`~deepchecks.vision.checks.model_evaluation.SimpleModelComparison`
-* - :ref:`vision__weak_segment_performance`
+* - :ref:`vision__weak_segments_performance`
- :class:`~deepchecks.vision.checks.model_evaluation.WeakSegmentPerformance`
Parameters
@@ -153,7 +153,7 @@ def model_evaluation(scorers: Union[Dict[str, Union[Callable, str]], List[Any]]
- ``'categorical'`` - for discrete, non-ordinal outputs. These can still be numbers,
but these numbers do not have inherent value.
-For more on image / label properties, see the guide about :ref:`vision_properties_guide`.
+For more on image / label properties, see the guide about :ref:`vision__properties_guide`.
prediction_properties : List[Dict[str, Any]], default: None
List of properties. Replaces the default deepchecks properties.
Each property is a dictionary with keys ``'name'`` (str), ``method`` (Callable) and ``'output_type'`` (str),
@@ -165,7 +165,7 @@ def model_evaluation(scorers: Union[Dict[str, Union[Callable, str]], List[Any]]
- ``'class_id'`` - for properties that return the class_id. This is used because these
properties are later matched with the ``VisionData.label_map``, if one was given.
-For more on image / label properties, see the guide about :ref:`vision_properties_guide`.
+For more on image / label properties, see the guide about :ref:`vision__properties_guide`.
**kwargs : dict
additional arguments to pass to the checks.
@@ -230,7 +230,7 @@ def data_integrity(image_properties: List[Dict[str, Any]] = None, label_properti
- ``'categorical'`` - for discrete, non-ordinal outputs. These can still be numbers,
but these numbers do not have inherent value.
-For more on image / label properties, see the guide about :ref:`vision_properties_guide`.
+For more on image / label properties, see the guide about :ref:`vision__properties_guide`.
label_properties : List[Dict[str, Any]], default: None
List of properties. Replaces the default deepchecks properties.
Each property is a dictionary with keys ``'name'`` (str), ``method`` (Callable) and ``'output_type'`` (str),
@@ -242,7 +242,7 @@ def data_integrity(image_properties: List[Dict[str, Any]] = None, label_properti
- ``'class_id'`` - for properties that return the class_id. This is used because these
properties are later matched with the ``VisionData.label_map``, if one was given.
-For more on image / label properties, see the guide about :ref:`vision_properties_guide`.
+For more on image / label properties, see the guide about :ref:`vision__properties_guide`.
**kwargs : dict
additional arguments to pass to the checks.
@@ -293,7 +293,7 @@ def full_suite(n_samples: Optional[int] = 5000, image_properties: List[Dict[str,
- ``'categorical'`` - for discrete, non-ordinal outputs. These can still be numbers,
but these numbers do not have inherent value.
-For more on image / label properties, see the guide about :ref:`vision_properties_guide`.
+For more on image / label properties, see the guide about :ref:`vision__properties_guide`.
label_properties : List[Dict[str, Any]], default: None
List of properties. Replaces the default deepchecks properties.
Each property is a dictionary with keys ``'name'`` (str), ``method`` (Callable) and ``'output_type'`` (str),
@@ -305,7 +305,7 @@ def full_suite(n_samples: Optional[int] = 5000, image_properties: List[Dict[str,
- ``'class_id'`` - for properties that return the class_id. This is used because these
properties are later matched with the ``VisionData.label_map``, if one was given.
-For more on image / label properties, see the guide about :ref:`vision_properties_guide`.
+For more on image / label properties, see the guide about :ref:`vision__properties_guide`.
scorers: Union[Dict[str, Union[Callable, str]], List[Any]], default: None
Scorers to override the default scorers (metrics), find more about the supported formats at
@@ -321,7 +321,7 @@ def full_suite(n_samples: Optional[int] = 5000, image_properties: List[Dict[str,
- ``'categorical'`` - for discrete, non-ordinal outputs. These can still be numbers,
but these numbers do not have inherent value.
-For more on image / label properties, see the guide about :ref:`vision_properties_guide`.
+For more on image / label properties, see the guide about :ref:`vision__properties_guide`.
prediction_properties : List[Dict[str, Any]], default: None
List of properties. Replaces the default deepchecks properties.
Each property is a dictionary with keys ``'name'`` (str), ``method`` (Callable) and ``'output_type'`` (str),
@@ -333,7 +333,7 @@ def full_suite(n_samples: Optional[int] = 5000, image_properties: List[Dict[str,
- ``'class_id'`` - for properties that return the class_id. This is used because these
properties are later matched with the ``VisionData.label_map``, if one was given.
-For more on image / label properties, see the guide about :ref:`vision_properties_guide`.
+For more on image / label properties, see the guide about :ref:`vision__properties_guide`.
Returns
-------
Suite
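The property dictionaries described repeatedly in these docstrings share one shape: keys `'name'`, `'method'` and `'output_type'`. A hedged sketch of plugging a custom image property into a suite — the brightness property is illustrative, not a deepchecks built-in:

```python
# Hedged sketch of the {'name', 'method', 'output_type'} property format;
# mean_brightness is an illustrative custom property, not a deepchecks default.
import numpy as np
from deepchecks.vision.suites import train_test_validation

def mean_brightness(images):
    """Return one numeric value per image in the batch (numpy-array images assumed)."""
    return [float(np.mean(img)) for img in images]

custom_image_properties = [
    {'name': 'Mean Brightness', 'method': mean_brightness, 'output_type': 'numerical'},
]

suite = train_test_validation(image_properties=custom_image_properties)
# result = suite.run(train_data, test_data)   # VisionData objects assumed
```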
4 changes: 2 additions & 2 deletions docs/source/_templates/autosummary/check.rst
@@ -22,10 +22,10 @@ Examples

.. only:: html

-.. figure:: /checks_gallery/{{ submoduletype }}/{{ checktype}}/images/thumb/sphx_glr_plot_{{ to_snake_case(objname).lower() }}_thumb.png
+.. figure:: /{{ submoduletype }}/auto_checks/{{ checktype}}/images/thumb/sphx_glr_plot_{{ to_snake_case(objname).lower() }}_thumb.png
:alt: {{ objname }}

-:ref:`sphx_glr_checks_gallery_{{submoduletype}}_{{ checktype }}_plot_{{ to_snake_case(objname).lower() }}.py`
+:ref:`sphx_glr_{{submoduletype}}_auto_checks_{{ checktype }}_plot_{{ to_snake_case(objname).lower() }}.py`

.. raw:: html

@@ -2,8 +2,9 @@
"""
.. _nlp__prediction_drift:
+================
Prediction Drift
-****************
+================
This notebook provides an overview for using and understanding the NLP prediction drift check.
@@ -43,7 +44,6 @@
#
# For this example, we'll use the tweet emotion dataset, which is a dataset of tweets labeled by one of four emotions:
# happiness, anger, sadness and optimism.
-#%%

import numpy as np
from deepchecks.nlp.checks import PredictionDrift
@@ -57,6 +57,7 @@

#%%
# Let's see how our data looks like:

train_ds.head()

#%%
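For completeness, the check this tutorial covers is typically run along these lines — a hedged sketch; the prediction-passing keyword arguments are assumptions, so consult the current NLP API reference for the exact signature:

```python
# Hedged sketch of running the check from this tutorial; train_preds and
# test_preds (model outputs) and the run(...) keyword names are assumptions.
from deepchecks.nlp.checks import PredictionDrift

check = PredictionDrift()
result = check.run(train_dataset=train_ds, test_dataset=test_ds,
                   train_predictions=train_preds, test_predictions=test_preds)
result.show()
```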
@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
"""
-.. _plot_vision_new_labels:
+.. _vision__new_labels:
New Labels
==========
