Great Expectations

Always know what to expect from your data.

What is great_expectations?

Great Expectations is a framework that helps teams save time and promote analytic integrity with a new twist on automated testing: pipeline tests. Pipeline tests are applied to data (instead of code) and at batch time (instead of compile or deploy time).

Software developers have long known that automated testing is essential for managing complex codebases. Great Expectations brings the same discipline, confidence, and acceleration to data science and engineering teams.

Why would I use Great Expectations?

To get more done with data, faster. Teams use great_expectations to

  • Save time during data cleaning and munging.
  • Accelerate ETL and data normalization.
  • Streamline analyst-to-engineer handoffs.
  • Monitor data quality in production data pipelines and data products.
  • Simplify debugging data pipelines if (when) they break.
  • Codify assumptions used to build models when sharing with distributed teams or other analysts.

How do I get started?

It's easy! Just use pip install:

$ pip install great_expectations

You can also clone the repository, which includes examples of using great_expectations.

$ git clone https://github.com/great-expectations/great_expectations.git
$ pip install great_expectations/

What expectations are available?

Expectations include:

  • expect_table_row_count_to_equal
  • expect_column_values_to_be_unique
  • expect_column_values_to_be_in_set
  • expect_column_mean_to_be_between
  • ...and many more

Visit the glossary of expectations for a complete list of expectations that are currently part of the Great Expectations vocabulary.
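To illustrate what an expectation like expect_column_values_to_be_in_set actually checks, here is a pandas-only sketch of the semantics (this is not the library's API, and the column name and allowed values are hypothetical): the check passes only when every value in a column belongs to a given set.

```python
import pandas as pd

# Hypothetical data: a column of subscription plans.
df = pd.DataFrame({"plan": ["basic", "pro", "basic", "enterprise"]})

# Semantics of expect_column_values_to_be_in_set: the check succeeds
# only if every value in the column is in the allowed set.
allowed = {"basic", "pro", "enterprise"}
success = bool(df["plan"].isin(allowed).all())
print(success)  # True

# An out-of-set value makes the same check fail.
df.loc[len(df)] = ["trial"]
print(bool(df["plan"].isin(allowed).all()))  # False
```

The library wraps checks like this in a uniform interface, so the same vocabulary of expectations can be applied across datasets and reported on consistently.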

Can I contribute?

Absolutely. Yes, please. Start with the guidelines in CONTRIBUTING.md, and don't be shy with questions!

How do I learn more?

For full documentation, visit Great Expectations on readthedocs.io.

Down with Pipeline Debt! explains the core philosophy behind Great Expectations. Please give it a read, and clap, follow, and share while you're at it.

For quick, hands-on introductions to Great Expectations' key features, check out our walkthrough videos.

What's the best way to get in touch with the Great Expectations team?

Issues on GitHub. If you have questions, comments, feature requests, etc., opening an issue is definitely the best path forward.

Great Expectations doesn't do X. Is it right for my use case?

It depends. If you have needs that the library doesn't meet yet, please upvote an existing issue or open a new one, and we'll see what we can do. Great Expectations is under active development, so your use case might be supported soon.
