
Add tests to make sure python modules are exposed correctly #14

Closed
yongtang opened this issue Dec 9, 2018 · 4 comments

Comments

@yongtang
Member

yongtang commented Dec 9, 2018

While tensorflow-io has test coverage for different modules (Ignite/Kafka/etc.) through bazel test, those tests always run inside the repo directory, not through a pip install.

It would be good to have tests that make sure pip install correctly exposes the python modules. A simple test such as pip install tensorflow-io-*.whl && python -c "import tensorflow_io.Kafka" would be enough to serve the need here.
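A minimal sketch of such a smoke check as a small Python helper (the stdlib module names below are only illustrative stand-ins; a real run would follow the pip install of the wheel and list the shipped tensorflow_io submodules instead):

```python
import importlib.util

def find_missing_modules(names):
    """Return the subset of names that cannot be resolved to an importable module."""
    missing = []
    for name in names:
        try:
            if importlib.util.find_spec(name) is None:
                missing.append(name)
        except ModuleNotFoundError:
            # find_spec raises when a parent package is absent entirely.
            missing.append(name)
    return missing

# Illustrative check against stdlib modules; after `pip install tensorflow-io-*.whl`
# the list would instead name the wheel's submodules.
assert find_missing_modules(["json", "csv"]) == []
```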

@juwangvsu

Running tests inside the repo is fine. It would be nice to have a section in the README about the test procedure and the location of the test code.

@yongtang
Member Author

Each op may have a different test procedure. For example, Kafka requires setting up a Kafka docker container before running the tests. I think an individual README.md in each repo would be nice.

@yongtang
Member Author

An integration test has been added in tensorflow/io/tests. The integration test is invoked through pip install tensorflow-io-*.whl && python -m pytest ., so the exposure of the python modules can be validated. It only covers the video format at the moment, but it could be a start.

@yongtang
Member Author

yongtang commented Mar 3, 2019

Think this issue is resolved with #118.

@yongtang yongtang closed this as completed Mar 3, 2019
yongtang referenced this issue in yongtang/io Dec 22, 2021
moved bigtable to tensorflow_io.python.api
yongtang pushed a commit that referenced this issue Jan 27, 2022
* feat: reading from bigtable (#2)

Implements reading from bigtable in a synchronous manner.

* feat: RowRange and RowSet API.

* feat: parallel read (#4)

In this PR we make the read methods accept a row_set, reading only the rows specified by the user.
We also add a parallel read that leverages the sample_row_keys method to split work among workers.

* feat: version filters (#6)

This PR adds support for Bigtable version filters.

* feat: support for other data types (#5)

* fix: linter fixes (#8)

* feat: docs (#9)

* fix: building on windows (#12)

* fix: refactor bigtable package to api folder (#14)

moved bigtable to tensorflow_io.python.api

* fix: tests hanging (#30)

changed path to bigtable emulator and cbt in tests

moved arguments' initializations to the body of the function in bigtable_ops.py

fixed interleaveFromRange of column filters when using only one column

* fix: temporarily disable macos tests (#32)

* disable tests on macos

Co-authored-by: Kajetan Boroszko <kajetan@unoperate.com>
Co-authored-by: Kajetan Boroszko <kajetan.boroszko@gmail.com>