Hadoop compat test #22284
Closed
Conversation
The GSON object was being initialized in a static context. When the DoFn that used it was executed in another JVM, which had not initialized this object, we were getting a NullPointerException. Here, we move the initialization of the GSON object inside the DoFn itself (through a setup method). Co-authored-by: Thiago Nunes <thiagotnunes@google.com>
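The failure mode in this commit is general to any serialized function object: a static field initialized in the submitting JVM is not carried along when the object is shipped to a worker JVM. A minimal sketch of the fix pattern, using a plain `Serializable` class and `StringBuilder` as stand-ins for a Beam `DoFn` and its GSON object (the class and field names here are illustrative, not Beam's):

```java
import java.io.*;

// Sketch of the fix pattern: static state is per-JVM and is NOT serialized
// with the object, so a transient per-instance field rebuilt in a setup hook
// (mirroring a DoFn @Setup method) is used instead.
public class SetupInitFn implements Serializable {
    // BROKEN pattern (commented out): initialized in the submitting JVM only.
    // static Gson GSON = new Gson();  // would be null in a fresh worker JVM

    // FIXED pattern: transient field, rebuilt on each worker in setup().
    private transient StringBuilder codec;  // stand-in for the GSON object

    // Mirrors @Setup: the runner calls this once per instance on the worker,
    // so the field is initialized in whichever JVM actually runs the DoFn.
    public void setup() {
        codec = new StringBuilder();
    }

    public String process(String element) {
        codec.setLength(0);
        codec.append("{\"value\":\"").append(element).append("\"}");
        return codec.toString();
    }

    // Simulate shipping the fn to another JVM via Java serialization.
    public static SetupInitFn roundTrip(SetupInitFn fn) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(fn);
        oos.flush();
        return (SetupInitFn) new ObjectInputStream(
            new ByteArrayInputStream(bos.toByteArray())).readObject();
    }

    public static void main(String[] args) throws Exception {
        SetupInitFn fn = roundTrip(new SetupInitFn()); // the "worker" copy
        fn.setup();                                    // runner calls setup first
        System.out.println(fn.process("a"));           // prints {"value":"a"}
    }
}
```

After deserialization the transient field is null, exactly like the static GSON field was on the worker; calling `setup()` before `process()` is what restores it.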
…n API. (apache#16852)

The ability to mix and match runners and SDKs is accomplished through two portability layers:
1. The Runner API provides an SDK- and runner-independent definition of a Beam pipeline.
2. The Fn API allows a runner to invoke SDK-specific user-defined functions.

Apache Beam pipelines support executing stateful DoFns[1]. To support this execution, the Runner API defines multiple user state specifications:
* ReadModifyWriteStateSpec
* BagStateSpec
* OrderedListStateSpec
* CombiningStateSpec
* MapStateSpec
* SetStateSpec

The Fn API[2] defines APIs[3] to get, append, and clear user state, currently supporting the BagUserState and MultimapUserState protocols. Since there is no clear mapping between the Runner API and Fn API state specifications, there is no way for a runner to know that it supports a given API necessary to support the execution of the pipeline. The runner will also have to manage additional runtime metadata recording which protocol was used for each type of state, so that it can successfully manage the state's lifetime once it can be garbage collected. Please see the doc[4] for further details and a proposal on how to address this shortcoming.

1: https://beam.apache.org/blog/stateful-processing/
2: https://github.com/apache/beam/blob/3ad05523f4cdf5122fc319276fcb461f768af39d/model/fn-execution/src/main/proto/beam_fn_api.proto#L742
3: https://s.apache.org/beam-fn-state-api-and-bundle-processing
4: http://doc/1ELKTuRTV3C5jt_YoBBwPdsPa5eoXCCOSKQ3GPzZrK7Q
…on tests for hdfs (apache#16864)
…tefulParDo (apache#16866) [BEAM-13919] Annotate PerKeyOrderingTest with UsesStatefulParDo. Co-authored-by: Kyle Weaver <kcweaver@google.com>
…e BQIO to throw an error (apache#16862)
* BEAM-13931 - make sure large rows cause BQIO to throw an error
* prototyping new behavior
* Add test for retryTransients
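The behavior described above (oversized rows surfacing an error rather than failing silently) amounts to a fail-fast size check on the encoded row. The sketch below is a hedged illustration of that pattern only: `RowSizeCheck` and `MAX_ROW_BYTES` are hypothetical names, and the limit shown is illustrative, not BigQuery's actual row-size limit.

```java
// Fail-fast size check: reject an oversized encoded row with a clear error
// instead of letting the sink drop it silently or retry it indefinitely.
public class RowSizeCheck {
    // Illustrative limit only; BigQuery's real limits are not reproduced here.
    static final int MAX_ROW_BYTES = 1024;

    public static void validate(byte[] encodedRow) {
        if (encodedRow.length > MAX_ROW_BYTES) {
            throw new IllegalArgumentException(
                "Row of " + encodedRow.length + " bytes exceeds limit of "
                + MAX_ROW_BYTES + " bytes");
        }
    }

    public static void main(String[] args) {
        validate(new byte[10]);  // small row: passes silently
        try {
            validate(new byte[2048]);  // oversized row: throws
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Throwing eagerly here also matters for retry logic: an oversized row is a permanent failure, so surfacing it immediately avoids wasting the transient-error retry budget that the `retryTransients` test exercises.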
…nner (apache#16873) Co-authored-by: Ritesh Ghorse <riteshghorse@gmail.com>
…med f (apache#16886) Co-authored-by: reuvenlax <relax@google.com>
…ther string or json apache#16890 (apache#16900) struct.getValue() throws an error when getting a struct that contains a json inside. We circumvent this, by checking the type and calling either struct.getString() or struct.getJson(). Co-authored-by: Thiago Nunes <thiagotnunes@google.com>
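The fix above is a type-dispatch pattern: inspect the declared column type before choosing a getter, because the type-specific getters throw when called on the wrong kind of column. This sketch models the pattern with stand-in classes; `Column` and `TypeCode` are hypothetical stand-ins for the Spanner `Struct` field API, not the real client classes.

```java
// Sketch of dispatching on the declared column type before reading, so a
// JSON column is read with the JSON getter and a string column with the
// string getter, avoiding the error thrown by a mismatched generic getter.
public class ColumnDispatch {
    enum TypeCode { STRING, JSON }

    static class Column {
        final TypeCode type;
        final String raw;
        Column(TypeCode type, String raw) { this.type = type; this.raw = raw; }

        // Type-specific getters throw on the wrong column type, mirroring
        // the failure mode described in the commit message.
        String getString() {
            if (type != TypeCode.STRING) throw new IllegalStateException("not a STRING column");
            return raw;
        }
        String getJson() {
            if (type != TypeCode.JSON) throw new IllegalStateException("not a JSON column");
            return raw;
        }
    }

    // The fix: check the type first and call the matching getter.
    static String readText(Column c) {
        if (c.type == TypeCode.JSON) {
            return c.getJson();
        }
        return c.getString();
    }

    public static void main(String[] args) {
        System.out.println(readText(new Column(TypeCode.JSON, "{\"a\":1}")));
        System.out.println(readText(new Column(TypeCode.STRING, "plain")));
    }
}
```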
… to actually limit memory from windmill (apache#16901) (apache#16941) Currently, because the queue is only limited by number of elements, there can be up to (num threads + queue size) elements outstanding at a time, which for large work items will almost certainly OOM the worker. This change both makes this limit explicit and adds a 50% JVM max memory limit on outstanding WorkItems to push back on windmill before workers run out of memory. Co-authored-by: dpcollins-google <40498610+dpcollins-google@users.noreply.github.com>
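The core idea in the change above is limiting a work queue by total bytes rather than element count, so large items apply backpressure before they can exhaust the heap. A minimal sketch of a byte-budgeted blocking queue, assuming illustrative names throughout (this is not the Dataflow worker's actual code; the half-of-max-heap budget in `main` mirrors the 50% figure from the commit message):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// A queue bounded by total bytes rather than element count. Producers block
// when the byte budget is exhausted, pushing back on the upstream source
// (Windmill, in the commit above) instead of buffering until the JVM OOMs.
public class ByteBudgetQueue {
    private final long maxBytes;
    private long usedBytes = 0;
    private final Queue<byte[]> items = new ArrayDeque<>();

    ByteBudgetQueue(long maxBytes) { this.maxBytes = maxBytes; }

    // Blocks until the item fits under the budget. An item larger than the
    // whole budget is still admitted once the queue drains, so progress is
    // always possible.
    synchronized void put(byte[] item) throws InterruptedException {
        while (usedBytes + item.length > maxBytes && !items.isEmpty()) {
            wait();
        }
        usedBytes += item.length;
        items.add(item);
    }

    synchronized byte[] take() {
        byte[] item = items.poll();
        if (item != null) {
            usedBytes -= item.length;
            notifyAll();  // wake producers waiting on the byte budget
        }
        return item;
    }

    synchronized long usedBytes() { return usedBytes; }

    public static void main(String[] args) throws InterruptedException {
        // Budget modeled on the commit message: half of the JVM's max heap.
        ByteBudgetQueue q = new ByteBudgetQueue(Runtime.getRuntime().maxMemory() / 2);
        q.put(new byte[1024]);
        System.out.println(q.usedBytes()); // prints 1024
    }
}
```

With a count-only bound, outstanding memory is (num threads + queue size) x largest item, which is unbounded in practice; bounding bytes makes the worst case explicit.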
…apache#16968) Co-authored-by: tvalentyn <tvalentyn@users.noreply.github.com>
apache#16969) Failures to read from Spanner were ignored, and the "ok" serviceCallMetric was updated before the read took place. Fix code and tests. Co-authored-by: Niel Markwick <nielm@users.noreply.github.com>
…pache#16918) (apache#16967) Co-authored-by: Janek Bevendorff <janek.bevendorff@uni-weimar.de>
…he#17749)
* [BEAM-9351] Upgrade Hive to version 3.1.2
* This eliminated the pentaho dependency
* fix auth issue in test
* Add change log
* move internal test-only files to test
* clean up original workaround: Hive 3.1.3 upgraded to log4j 2.17.1
Run PostCommit_Java_Hadoop_Versions
Codecov Report

@@            Coverage Diff             @@
##           master   #22284       +/-   ##
===========================================
+ Coverage   46.79%   83.60%    +36.81%
===========================================
  Files         203      452       +249
  Lines       20044    62277     +42233
===========================================
+ Hits         9379    52066     +42687
- Misses       9665    10211       +546
+ Partials     1000        0      -1000

Continue to review the full report at Codecov.