Set up CI with Azure Pipelines #4

Closed
wants to merge 3 commits into from

Conversation

@yongtang (Owner) commented Feb 9, 2019

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>

@yongtang force-pushed the travis branch 2 times, most recently from 13ac410 to a96d1c3 on February 9, 2019 04:32
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
The Travis CI build is getting longer and longer, so we have to
think about improving it in various ways.

Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
@yongtang closed this on Feb 9, 2019
yongtang pushed a commit that referenced this pull request Dec 22, 2021
In this PR we make the read methods accept a row_set, reading only the rows specified by the user.
We also add a parallel read that leverages the sample_row_keys method to split the work among workers.
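As a rough illustration of that commit, a read restricted by a row_set and a parallel read might be used as in the sketch below. The module paths and helper names (tfio.bigtable.BigtableClient, row_set.from_rows_or_ranges, row_range.closed_range, read_rows, parallel_read_rows) are assumptions and may not match the released tensorflow-io API exactly.

```python
# Sketch only: names are assumed from the commit description above and may
# not match the released tensorflow-io Bigtable API exactly.
import tensorflow_io as tfio

# Connect to a Bigtable instance and open a table (ids are placeholders).
client = tfio.bigtable.BigtableClient("my-project", "my-instance")
table = client.get_table("my-table")

# A row_set restricts the read to only the rows the user asks for.
row_set = tfio.bigtable.row_set.from_rows_or_ranges(
    tfio.bigtable.row_range.closed_range("row-000", "row-099"))

# Plain read: returns a dataset containing only rows in the row_set.
ds = table.read_rows(["cf1:col1"], row_set=row_set)

# Parallel read: sample_row_keys is used under the hood to split the
# row_set into chunks that are fetched by multiple workers.
parallel_ds = table.parallel_read_rows(["cf1:col1"], row_set=row_set)

for row in parallel_ds.take(5):
    print(row)
```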
yongtang pushed a commit that referenced this pull request Feb 4, 2022
* feat: reading from bigtable (#2)

Implements reading from bigtable in a synchronous manner.

* feat: RowRange and RowSet API.

* feat: parallel read (#4)

In this PR we make the read methods accept a row_set, reading only the rows specified by the user.
We also add a parallel read that leverages the sample_row_keys method to split the work among workers.

* feat: version filters (#6)

This PR adds support for Bigtable version filters.

* feat: support for other data types (#5)

* fix: linter fixes (#8)

* feat: docs (#9)

* fix: building on windows (#12)

* fix: refactor bigtable package to api folder (#14)

moved bigtable to tensorflow_io.python.api

* fix: tests hanging (tensorflow#30)

changed path to bigtable emulator and cbt in tests

moved arguments' initializations to the body of the function in bigtable_ops.py

fixed interleaveFromRange of column filters when using only one column

* fix: temporarily disable macos tests (tensorflow#32)

* disable tests on macos

Co-authored-by: Kajetan Boroszko <kajetan@unoperate.com>
Co-authored-by: Kajetan Boroszko <kajetan.boroszko@gmail.com>
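Putting several of the features listed above together (the RowRange/RowSet API and version filters), usage might look roughly like the sketch below. The filters.latest() helper and the exact argument names are assumptions rather than the confirmed API.

```python
# Sketch only: combines the row-range, row-set and version-filter features
# listed above; helper names are assumptions and may differ in the API.
import tensorflow_io as tfio

client = tfio.bigtable.BigtableClient("my-project", "my-instance")
table = client.get_table("my-table")

# Build a row_set from a row range plus an individual row key.
row_set = tfio.bigtable.row_set.from_rows_or_ranges(
    tfio.bigtable.row_range.closed_range("train-000", "train-499"),
    "train-extra",
)

# Version filter: keep only the latest cell version for each column.
latest_only = tfio.bigtable.filters.latest()

ds = table.read_rows(["cf1:feature"], row_set=row_set, filter=latest_only)
for row in ds.take(3):
    print(row)
```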
yongtang pushed a commit that referenced this pull request Jul 25, 2022
Currently if DAOS libraries are not installed on a node, the
libtensorflow_io_plugins.so will fail to load due to unsatisfied
externals, and all modular filesystems are then unusable, not
just DFS.  This PR changes the DFS plugin to dynamically load
the DAOS libraries so that the DFS filesystem is available if
DAOS is installed, but the other modular filesystems are still
available if DAOS is not installed.

The checks for the DAOS libraries and the daos_init() call are
now done at filesystem registration time, not as part of each
function call in the filesystem API.  If the libraries are not
installed, then the DFS filesystem will not be registered, and no
calls into DFS functions will ever occur.  In this case TensorFlow
will just report
    "File system scheme 'dfs' not implemented"
when a "dfs://" path is used.

A number of separate functions existed, each of which was only
called once as part of DFS destruction; these were combined into
the DFS destructor for simplicity.  Similar consolidation was
done to simplify DFS construction.

Signed-off-by: Kevan Rehm <kevan.rehm@hpe.com>
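To illustrate the registration-time check described in that commit, here is a small sketch of the idea in Python using ctypes. The real plugin does this in C++ via dlopen/dlsym; the library name "libdaos.so" is an assumption about the system layout.

```python
# Illustration only: the actual DFS plugin performs this check in C++ with
# dlopen/dlsym.  The library name and the use of daos_init() mirror the
# commit description; exact names on a given system may differ.
import ctypes
import ctypes.util


def dfs_available():
    """Return True only if the DAOS client library can be loaded and
    initialized; otherwise the dfs:// scheme should not be registered."""
    libname = ctypes.util.find_library("daos") or "libdaos.so"
    try:
        libdaos = ctypes.CDLL(libname)
    except OSError:
        return False  # DAOS not installed: skip DFS registration entirely.
    try:
        daos_init = libdaos.daos_init
    except AttributeError:
        return False  # Library present but missing the expected symbol.
    return daos_init() == 0


if dfs_available():
    print("registering dfs:// filesystem")
else:
    # Other modular filesystems stay usable; a dfs:// path would report
    # "File system scheme 'dfs' not implemented".
    print("skipping dfs:// registration")
```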
Labels: none yet
Projects: none yet
Participants: 1