Deepchecks is a holistic tool for testing, validating, and monitoring your machine learning models and data throughout the model's lifecycle. It enables you to identify problems with your data quality, distributions, and model performance with minimal effort.
See more info in the Deepchecks Components for Continuous Validation section, along with direct links to the documentation of each component.
🏃‍♀️ Quickstarts 🏃‍♀️
Downloadable end-to-end guides, demonstrating how to start testing your data & model in just a few minutes.
💁‍♂️ Get Help & Give Us Feedback 💁‍♂️
Links for interacting with us via our Slack Community or by opening an issue on GitHub.
💻 Install 💻
Full installation guide (a quick version is also included in each quickstart).
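For reference, installation is typically a single pip command; the extras shown for the CV and NLP submodules follow the PyPI package's naming, but check the full guide for your environment:

```shell
# Core package (tabular functionality)
pip install deepchecks

# With the computer-vision submodule (beta)
pip install "deepchecks[vision]"

# With the NLP submodule (alpha)
pip install "deepchecks[nlp]"
```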
🤓 General: Concepts & Guides 🤓
A comprehensive view of deepchecks concepts, customizations, and core use cases.
🔢 Tabular 🔢
Quickstarts, main concepts, checks gallery, and end-to-end guides demonstrating how to start working with Deepchecks on tabular data and models.
🔤️ NLP 🔤️
Quickstarts, main concepts, checks gallery, and end-to-end guides demonstrating how to start working with Deepchecks on textual data. More functionality coming in future releases!
🎦 Computer Vision (Note: in Beta Release) 🎦
Quickstarts, main concepts, checks gallery, and end-to-end guides demonstrating how to start working with Deepchecks on CV data and models. Built-in support for PyTorch, TensorFlow, and custom frameworks.
🚀 Interactive Checks Demo 🚀
Play with some of the existing tabular checks and see how they work on various datasets with custom corruptions injected.
🤖 API Reference 🤖
Reference and links to source code for Deepchecks Testing's components.
🔢 Tabular 🔢
🎦 Vision 🎦 (in Beta)
🔤️ NLP 🔤️ (in Alpha)
Testing Docs (Here)
Tests during research and model development
CI Docs
Tests before deploying the model to production
Monitoring Docs
Tests and continuous monitoring during production
Deepchecks accompanies you through various testing needs throughout the model's lifecycle, such as verifying your data's integrity, inspecting its distributions, validating data splits, evaluating your model, and comparing different models.
Deepchecks' continuous validation approach is based on testing the ML models and data throughout the different phases using the exact same checks, enabling a simple and seamless experience for configuring and consuming the results. Each phase has its own relevant interfaces (e.g. visual outputs, JSON output, alert configuration, push notifications, RCA, etc.) for interacting with the test results.
Join Our Community 👋
In addition to perusing the documentation, feel free to:
- Ask questions on the Slack Community.
- Post an issue or start a discussion on GitHub Issues.
- Contribute to the package: check out the Contribution Guidelines and join the contributors-q-and-a channel on Slack, or communicate with us via GitHub Issues.
To support us, please give us a star on ⭐️ GitHub ⭐️ — it really means a lot to open source projects!