Commit 3436381

[FLINK-20659][yarn][tests] Add flink-yarn-test README

1 parent f519346 commit 3436381

File tree

2 files changed: +31 −0 lines


flink-yarn-tests/README.md

Lines changed: 17 additions & 0 deletions
@@ -0,0 +1,17 @@
# Flink YARN tests

`flink-yarn-tests` collects test cases that are deployed to a local Apache Hadoop YARN cluster.
There are several things to consider when running these tests locally:

* `YarnTestBase` spins up a `MiniYARNCluster`. This cluster spawns the worker processes outside of
  the IDE's JVM. `JAVA_HOME` needs to be set for this to work.
* The Flink cluster within each test is deployed using the `flink-dist` binaries. Any changes made
  to the code only take effect after rebuilding the `flink-dist` module.
* Each `YARN*ITCase` has a local working directory in which resources such as logs are stored. These
  working directories are located under `flink-yarn-tests/target/` (use
  `find flink-yarn-tests/target -name "*.err" -or -name "*.out"` to locate a test's output).
* There is a known problem causing test instabilities because the tests are executed with
  Hadoop 2.8.3: bug [YARN-7007](https://issues.apache.org/jira/browse/YARN-7007), which was only
  fixed in [Hadoop 2.8.6](https://issues.apache.org/jira/projects/YARN/versions/12344056). See
  [FLINK-15534](https://issues.apache.org/jira/browse/FLINK-15534) for the related discussion.
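The steps above can be sketched as a shell session. The module and test class names come from this commit's diff; the exact Maven flags and the `JAVA_HOME` path are assumptions that may vary by environment and Flink version:

```shell
# Rebuild flink-dist so that code changes are picked up by the YARN tests
# (-pl selects the module, -am also rebuilds the modules it depends on).
mvn clean install -pl flink-dist -am -DskipTests

# JAVA_HOME must point at a JDK so MiniYARNCluster can spawn worker JVMs
# outside the IDE's JVM (path below is a placeholder).
export JAVA_HOME=/path/to/jdk

# Run a single YARN integration test from this commit's diff.
mvn verify -pl flink-yarn-tests -Dtest=YARNSessionCapacitySchedulerITCase

# Inspect the captured stdout/stderr left in the test's working directory.
find flink-yarn-tests/target -name "*.err" -or -name "*.out"
```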

flink-yarn-tests/src/test/java/org/apache/flink/yarn/YARNSessionCapacitySchedulerITCase.java

Lines changed: 14 additions & 0 deletions
@@ -232,6 +232,10 @@ public void perJobYarnCluster() throws Exception {
      * <p>This ensures that with (any) pre-allocated off-heap memory by us, there is some off-heap
      * memory remaining for Flink's libraries. Creating task managers will thus fail if no off-heap
      * memory remains.
+     *
+     * @throws NullPointerException There is a known Hadoop bug (YARN-7007) that got fixed in Hadoop
+     *     2.8.6 but might cause test instabilities. See FLINK-20659/FLINK-15534 for further
+     *     information.
      */
     @Test
     public void perJobYarnClusterOffHeap() throws Exception {
@@ -289,6 +293,10 @@ public void perJobYarnClusterOffHeap() throws Exception {
      *
      * <p><b>Hint: </b> If you think it is a good idea to add more assertions to this test, think
      * again!
+     *
+     * @throws NullPointerException There is a known Hadoop bug (YARN-7007) that got fixed in Hadoop
+     *     2.8.6 but might cause test instabilities. See FLINK-13009/FLINK-15534 for further
+     *     information.
      */
     @Test
     public void
@@ -451,6 +459,9 @@ private static Map<String, String> getFlinkConfig(final String host, final int p
      * Test deployment to non-existing queue & ensure that the system logs a WARN message for the
      * user. (Users had unexpected behavior of Flink on YARN because they mistyped the target queue.
      * With an error message, we can help users identifying the issue)
+     *
+     * @throws NullPointerException There is a known Hadoop bug (YARN-7007) that got fixed in Hadoop
+     *     2.8.6 but might cause test instabilities. See FLINK-15534 for further information.
      */
     @Test
     public void testNonexistingQueueWARNmessage() throws Exception {
@@ -493,6 +504,9 @@ public void testNonexistingQueueWARNmessage() throws Exception {
     /**
      * Test per-job yarn cluster with the parallelism set at the CliFrontend instead of the YARN
      * client.
+     *
+     * @throws NullPointerException There is a known Hadoop bug (YARN-7007) that got fixed in Hadoop
+     *     2.8.6 but might cause test instabilities. See FLINK-15534 for further information.
      */
     @Test
     public void perJobYarnClusterWithParallelism() throws Exception {
