Deepchecks - Continuous Validation for AI & ML: Testing, CI & Monitoring

Deepchecks is a holistic open-source solution for all of your AI & ML validation needs, enabling you to thoroughly test your data and models from research to production.

(Figure: Deepchecks continuous validation parts - Testing, CI, and Monitoring.)

👋 Join Slack | 📖 Documentation | 🌐 Blog | 🐦 Twitter

🧩 Components

Deepchecks includes:

  • Deepchecks Testing (Quickstart, docs):
    • Running built-in & your own custom Checks and Suites for Tabular, NLP & CV validation (open source).
  • CI & Testing Management (Quickstart, docs):
    • Collaborating over test results and iterating efficiently until the model is production-ready and can be deployed (open source & managed offering).
  • Deepchecks Monitoring (Quickstart, docs):
    • Tracking and validating your deployed model's behavior in production (open source & managed offering).

This is our main repository, as all components use the deepchecks checks at their core. See the Getting Started section for more information about installation and quickstarts for each of the components. If you want to see the deepchecks monitoring code, check out the deepchecks/monitoring repo.

โฉ Getting Started

💻 Installation

Deepchecks Testing (and CI) Installation

pip install deepchecks -U --user

For installing the nlp / vision submodules, or for installing with conda (example commands below):

  • For NLP: Replace deepchecks with "deepchecks[nlp]", and optionally also install deepchecks[nlp-properties].
  • For Computer Vision: Replace deepchecks with "deepchecks[vision]".
  • For installing with conda, similarly use: conda install -c conda-forge deepchecks.
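For example, the installation variants described above look like this (standard pip / conda environments assumed):

# NLP submodule (optionally with the nlp-properties extra as well)
pip install -U --user "deepchecks[nlp]" "deepchecks[nlp-properties]"
# Computer Vision submodule
pip install -U --user "deepchecks[vision]"
# Base package via conda
conda install -c conda-forge deepchecks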

Check out the full installation instructions for deepchecks testing here.

Deepchecks Monitoring Installation

To use deepchecks for production monitoring, you can either use our SaaS service, or deploy a local instance in one line on Linux/MacOS (Windows is WIP!) with Docker. Create a new directory for the installation files, open a terminal within that directory and run the following:

pip install deepchecks-installer
deepchecks-installer install-monitoring

This will automatically download the necessary dependencies, run the installation process and then start the application locally.

The installation will take a few minutes. Then you can open the deployment URL (the default is http://localhost) and start the system onboarding. Check out the full monitoring open source installation & quickstart.
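The onboarding typically involves connecting to the deployment with the deepchecks-client SDK (installed via pip install deepchecks-client). A minimal sketch, assuming a local deployment at http://localhost and an API token generated from the UI (the token below is a placeholder):

from deepchecks_client import DeepchecksClient

# Host of your deployment and an API token taken from the UI (placeholder values).
dc_client = DeepchecksClient(host='http://localhost', token='<your-api-token>')

Creating a model version and sending reference / production data are walked through step by step in the monitoring quickstart.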

Note that the open source product is built such that each deployment supports monitoring of a single model.

๐Ÿƒโ€โ™€๏ธ Quickstarts

Deepchecks Testing Quickstart

Jump right into the respective quickstart docs (Tabular, NLP, or Vision, depending on your data type) to have it up and running on your data.

Inside the quickstarts, you'll see how to create the relevant deepchecks object for holding your data and metadata (Dataset, TextData or VisionData, corresponding to the data type), and run a Suite or Check. The code snippet for running it will look something like the following, depending on the chosen Suite or Check.

from deepchecks.tabular.suites import model_evaluation
suite = model_evaluation()
suite_result = suite.run(train_dataset=train_dataset, test_dataset=test_dataset, model=model)
suite_result.save_as_html() # replace this with suite_result.show() or suite_result.show_in_window() to see results inline or in window
# or suite_result.results[0].value with the relevant check index to process the check result's values in python
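The train_dataset and test_dataset objects above are deepchecks Dataset wrappers holding your data and metadata. A minimal sketch of creating one from a pandas DataFrame (the file path, 'target' label and categorical column names are placeholders):

import pandas as pd
from deepchecks.tabular import Dataset

df = pd.read_csv('my_data.csv')  # placeholder path
train_dataset = Dataset(df, label='target', cat_features=['city', 'device'])  # placeholder columns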

The output will be a report that enables you to inspect the status and results of the chosen checks.

Deepchecks Monitoring Quickstart

Jump right into the open source monitoring quickstart docs to have it up and running on your data. You'll then be able to see the check results over time, set alerts, and interact with the dynamic deepchecks UI.

Deepchecks CI & Testing Management Quickstart

Deepchecks managed CI & Testing management is currently in closed preview. Book a demo for more information about the offering.

For building and maintaining your own CI process while utilizing Deepchecks Testing for it, check out our docs for Using Deepchecks in CI/CD.
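As an illustration of that approach, here is a minimal sketch of a CI gate written as a pytest-style test. The load_datasets_and_model helper is hypothetical, and the passed() helper on the suite result may vary slightly between deepchecks versions:

from deepchecks.tabular.suites import model_evaluation

def test_model_is_production_ready():
    # Hypothetical helper that loads your train/test Datasets and trained model inside the CI job.
    train_dataset, test_dataset, model = load_datasets_and_model()
    suite_result = model_evaluation().run(
        train_dataset=train_dataset, test_dataset=test_dataset, model=model
    )
    # Fail the pipeline if any check condition did not pass.
    assert suite_result.passed(), suite_result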

🧮 How does it work?

At its core, deepchecks includes a wide variety of built-in Checks for testing all types of data- and model-related issues. These checks are implemented for various models and data types (Tabular, NLP, Vision), and can easily be customized and expanded.

The check results can be used to automatically make informed decisions about your model's production readiness, and to monitor it over time in production. They can be examined with visual reports (by saving them to an HTML file, or viewing them in Jupyter), processed with code (using their pythonic/JSON output), and inspected and collaborated on with Deepchecks' dynamic UI (for examining test results and for production monitoring).

✅ Deepchecks' Core: The Checks

  • All of the Checks and the framework for customizing them are implemented inside the Deepchecks Testing Python package (this repo).
  • Each check tests for a specific potential problem. Deepchecks has many pre-implemented checks for finding issues with the model's performance (e.g. identifying weak segments), data distribution (e.g. detecting drifts or leakages) and data integrity (e.g. finding conflicting labels).
  • Customizable: each check has many configurable parameters, and custom checks can easily be implemented.
  • Can be run manually (during research) or triggered automatically (in CI processes or production monitoring).
  • Check results can be consumed by (see the sketch after this list):
    • Visual output report - saving to HTML (result.save_as_html('output_report_name.html')) or viewing them in Jupyter (result.show()).
    • Processing with code - in Python using the check result's value attribute, or by saving a JSON output.
    • Deepchecks' UI - for dynamic inspection and collaboration (of test results and production monitoring).
  • Optional conditions can be added and customized to automatically validate check results, with a pass ✓, fail ✖ or warning ! status.
  • An ordered list of checks (with optional conditions) can be run together in a "Suite" (and the output is a concluding report of all checks that ran).
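For instance, here is a minimal sketch of running a single check manually and consuming its result in the ways listed above. FeatureDrift is one of the built-in tabular checks (named TrainTestFeatureDrift in older releases), and train_dataset / test_dataset are assumed to be existing Dataset objects:

from deepchecks.tabular.checks import FeatureDrift

check = FeatureDrift()
result = check.run(train_dataset=train_dataset, test_dataset=test_dataset)
result.show()                              # visual report, e.g. inside a notebook
result.save_as_html('feature_drift.html')  # standalone HTML report
drift_per_feature = result.value           # pythonic output for further processing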

📜 Open Source vs Paid

Deepchecks' projects (deepchecks/deepchecks & deepchecks/monitoring) are open source and are released under AGPL 3.0.

The only exception is the Deepchecks Monitoring components (in the deepchecks/monitoring repo) that are under the backend/deepchecks_monitoring/ee directory, which are subject to a commercial license (see the license here). That directory isn't used by default, and is packaged as part of the deepchecks monitoring repository simply to support upgrading to the commercial edition without downtime.

Enabling premium features (contained in the backend/deepchecks_monitoring/ee directory) with a self-hosted instance requires a Deepchecks license. To learn more, book a demo or see our pricing page.

Looking for a 💯% open-source solution for deepchecks monitoring? Check out the Monitoring OSS repository, which is purged of all proprietary code and features.

👭 Community, Contributing, Docs & Support

Deepchecks is an open source solution. We are committed to a transparent development process and highly appreciate any contributions. Whether you are helping us fix bugs, propose new features, improve our documentation or spread the word, we would love to have you as part of our community.

  • Give us a ⭐️ GitHub star ⭐️ at the top of this page to support what we're doing; it means a lot for open source projects!
  • Read our docs for more info about how to use and customize deepchecks, and for step-by-step tutorials.
  • Post a GitHub Issue to submit a bug report, feature request, or suggest an improvement.
  • To contribute to the package, check out our good first issues and contribution guidelines, and open a PR.

Join our Slack to give us feedback, connect with the maintainers and fellow users, ask questions, get help for package usage or contributions, or engage in discussions about ML testing!

✨ Contributors

Thanks goes to these wonderful people (emoji key):

  • Itay Gabbay - 💻 📖 🤔
  • matanper - 📖 🤔 💻
  • JKL98ISR - 🤔 💻 📖
  • Yurii Romanyshyn - 🤔 💻 📖
  • Noam Bressler - 💻 📖 🤔
  • Nir Hutnik - 💻 📖 🤔
  • Nadav-Barak - 💻 📖 🤔
  • Sol - 💻 📖 🤔
  • DanArlowski - 💻 🚇
  • DBI - 💻
  • OrlyShmorly - 🎨
  • shir22 - 🤔 📖 📢
  • yaronzo1 - 🤔 🖋
  • ptannor - 🤔 🖋
  • avitzd - 📋 📹
  • DanBasson - 📖 🐛 💡
  • S.Kishore - 💻 📖 🐛
  • Shay Palachy-Affek - 🔣 💡 📓
  • Cemal GURPINAR - 📖 🐛
  • David de la Iglesia Castro - 💻
  • Levi Bard - 📖
  • Julien Schuermans - 🐛
  • Nir Ben-Zvi - 💻 🤔
  • Shiv Shankar Dayal - 🚇
  • RonItay - 🐛 💻
  • Jeroen Van Goey - 🐛 📖
  • idow09 - 🐛 💡
  • Ikko Ashimine - 📖
  • Jason Wohlgemuth - 📖
  • Lokin Sethia - 💻 🐛
  • Ingo Marquart - 💻 🐛
  • Oscar - 💻
  • Richard W - 💻 📖 🤔
  • Bernardo - 💻 📖
  • Olivier Binette - 💻 📖 🤔
  • 陈鼎彦 - 🐛
  • Andres Vargas - 📖
  • Michael Marien - 📖 🐛
  • OrdoAbChao - 💻
  • Matt Chan - 💻
  • Harsh Jain - 💻 📖 🐛
  • arterm-sedov - 📖
  • AIT ALI YAHIA Rayane - 💻 🤔

This project follows the all-contributors specification. Contributions of any kind are welcome!