
Reduce chance of disconnected components in random graph test #1672

Merged
merged 1 commit into develop on Jun 12, 2020

Conversation

@huonw (Member) commented Jun 12, 2020

The test_nodemapper_isolated_nodes test creates a graph with 5 nodes (0, 1, 2, 3, 4) connected by 20 edges, plus a single isolated node (5). The test checks that the graph has exactly 2 connected components: the first 5 nodes form one component and the isolated node forms the other. With only 20 random edges this fails very occasionally, because the randomness will sometimes miss a node entirely or leave the 5 nodes in separate clumps.

This PR increases the number of edges to 1000, which exponentially reduces the chance of the graph having more than 2 connected components.

See: #1569
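
To make the probability argument concrete, here is a small standalone simulation (not the repository's test code; the node and edge counts simply mirror the description above). With 20 edges among 5 nodes, the chance that some node is never picked as an endpoint is roughly 5 × (4/5)^40 ≈ 7e-4, which is rare per run but shows up across many CI runs; with 1000 edges it drops to about 5 × (4/5)^2000, effectively zero.

```python
import random

def random_graph_components(n_nodes, n_edges, rng):
    """Count connected components after adding n_edges uniformly random
    edges among n_nodes nodes (self-loops allowed), using union-find."""
    parent = list(range(n_nodes))

    def find(x):
        # find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for _ in range(n_edges):
        parent[find(rng.randrange(n_nodes))] = find(rng.randrange(n_nodes))

    return len({find(i) for i in range(n_nodes)})

rng = random.Random(0)
# With 20 edges, some trials leave the 5 nodes in more than one component.
flaky = sum(random_graph_components(5, 20, rng) > 1 for _ in range(10_000))
# With 1000 edges, a disconnected result is astronomically unlikely.
solid = sum(random_graph_components(5, 1000, rng) > 1 for _ in range(1_000))
print(flaky, solid)
```

Running this shows a handful of disconnected outcomes in the 20-edge trials and none in the 1000-edge trials, matching the rationale for the fix.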

codeclimate bot commented Jun 12, 2020

Code Climate has analyzed commit aca5990 and detected 0 issues on this pull request.


@huonw huonw merged commit 8a3a74c into develop Jun 12, 2020
@huonw huonw deleted the feature/1569-probabilities branch June 12, 2020 02:59
huonw added a commit that referenced this pull request Jun 12, 2020
This adds an xfail marker (`pytest.mark.xfail`,
https://docs.pytest.org/en/5.4.3/skipping.html#xfail-mark-test-functions-as-expected-to-fail)
to several tests that are flaky. These tests fail semi-regularly on CI, requiring
a lot of retried builds, and, for the most part, don't catch much.

The mark is added using a function that requires that several things are
specified:

- issue numbers, which are linked in the `reason`; this makes it easier to keep
  track of why/how things are flaky
- an exception class that the xfailed test is expected to raise, so that the
  test still catches basic problems like invalid code (e.g. renamed functions
  or properties)
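
A hypothetical sketch of such a helper (not the repository's actual code; the function name and the issue-link format are assumptions) could look like this:

```python
import pytest

def flaky_xfail_mark(*issue_numbers, reason, raises):
    """Build an xfail mark that links the tracking issue(s) in the reason
    and only tolerates the expected exception type, so a genuinely broken
    test (e.g. a renamed function raising AttributeError when NameError
    was expected) still fails outright."""
    links = ", ".join(
        f"https://github.com/stellargraph/stellargraph/issues/{n}"
        for n in issue_numbers
    )
    return pytest.mark.xfail(
        reason=f"{reason} (see {links})",
        raises=raises,
        strict=False,  # the test passes most of the time, so XPASS is fine
    )

# Usage: decorate a flaky test with its tracking issue and expected failure.
@flaky_xfail_mark(1160, reason="numerically flaky", raises=AssertionError)
def test_example():
    assert True
```

Requiring `raises` is the key design point: `pytest.mark.xfail` without it would swallow any exception, whereas pinning the exception class keeps the xfail narrow.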

Issues for flaky tests (found by
https://github.com/stellargraph/stellargraph/issues?q=is%3Aissue+is%3Aopen+flaky):

- #585: `tests/data/test_edge_splitter.py`:
  `test_split_data_by_edge_type_and_attribute`, `test_split_data_by_edge_type`
- #970: `tests/reproducibility/test_graphsage.py`: `test_link_prediction[True]`
- #990: `tests/reproducibility/test_graphsage.py`: `test_link_prediction[False]`
- #1115: `tests/reproducibility/test_graphsage.py`: `test_unsupervised[False]`,
  `test_unsupervised[True]`, `test_nai[True]`, `test_nai[False]`
- #1160: `tests/core/test_utils.py`: `test_normalized_laplacian`
- #1623: `tests/layer/test_knowledge_graph.py`: `test_rotate`,
  `test_model_rankings[RotatE]`
- #1675: `tests/layer/test_knowledge_graph.py`: `test_model_rankings[RotH]`

This PR doesn't include #1569 (`tests/mapper/test_node_mappers.py`:
`test_nodemapper_isolated_nodes`), because that's fixed, not xfailed,
independently in #1672.