
Improve RankD Tests #435

Closed · 7 tasks done

bbengfort opened this issue May 16, 2018 · 3 comments · Fixed by #1079
Labels: level: novice (good for beginners or new contributors) · type: technical debt (work to optimize or generalize code)

Comments

bbengfort (Member) commented May 16, 2018

Right now the Rank1D and Rank2D tests are very basic and can be improved using the new image similarity assertions and pytest testing framework mechanisms.

Proposal/Issue

The test matrix should test against the following:

  • replace the make_regression dataset with load_energy and update those tests
  • use load_occupancy for classification tests
  • algorithms: pearson, covariance, spearman (2D) and shapiro (1D)
  • 1D case: horizontal orientation
  • 1D case: vertical orientation
  • test that an exception is raised for unrecognized algorithms
  • test that the underlying rank matrix is correct (see the oracle sketch below)

Unfortunately, we can't use pytest.mark.parametrize with visual test cases (yet), so we'll have to make individual tests for each.
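For the rank matrix check in the last bullet above, expected values can be computed independently of the visualizer. A minimal sketch using numpy and scipy as oracles; whether ranks_ stores exactly these values is an assumption that should be confirmed against the Rank1D/Rank2D implementation:

import numpy as np
from scipy import stats

def expected_pearson(X):
    # pairwise Pearson correlation between features (columns)
    return np.corrcoef(X, rowvar=False)

def expected_covariance(X):
    # pairwise covariance between features (columns)
    return np.cov(X, rowvar=False)

def expected_spearman(X):
    # pairwise Spearman rank correlation between features (columns)
    return stats.spearmanr(X).correlation

def expected_shapiro(X):
    # per-feature Shapiro-Wilk statistic (the 1D case)
    return np.array([stats.shapiro(X[:, i])[0] for i in range(X.shape[1])])

In the tests, npt.assert_array_almost_equal(oz.ranks_, expected_pearson(X)) is safer than exact equality for floating point comparisons.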

Code Snippet

Tests will look approximately like:

import pytest
import numpy.testing as npt

from yellowbrick.features import Rank2D
from yellowbrick.datasets import load_energy
from yellowbrick.exceptions import YellowbrickValueError


def test_rank2d_bad_algorithm(self):
    """
    Assert that unknown algorithms raise an exception
    """
    X, y = load_energy(return_dataset=True).to_numpy()
    with pytest.raises(YellowbrickValueError, match="unknown algorithm"):
        Rank2D(algorithm='unknown').fit_transform(X, y)

def test_rank2d_pearson_regression(self):
    """
    Test Rank2D images similar with pearson scores on a regression dataset
    """
    X, y = load_energy(return_dataset=True).to_numpy()

    oz = Rank2D(algorithm='pearson')
    oz.fit_transform(X, y)

    # expected rank matrix elided; fill in with independently computed values
    npt.assert_array_equal(oz.ranks_, [[]])
    self.assert_images_similar(oz, tol=0.25)
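
The snippet above only covers Rank2D; the 1D orientation cases in the list would follow the same pattern. A hedged sketch, assuming Rank1D's orient parameter accepts 'h'/'v' and that load_occupancy follows the same dataset API (check both against the installed version):

from yellowbrick.features import Rank1D
from yellowbrick.datasets import load_occupancy

def test_rank1d_shapiro_vertical(self):
    """
    Test Rank1D images similar with shapiro scores, vertical orientation
    """
    X, y = load_occupancy(return_dataset=True).to_numpy()

    oz = Rank1D(algorithm='shapiro', orient='v')
    oz.fit_transform(X, y)

    # Rank1D should compute a single rank score per feature
    assert oz.ranks_.shape == (X.shape[1],)
    self.assert_images_similar(oz, tol=0.25)

The horizontal case is identical with orient='h', and a bad-algorithm test mirrors the Rank2D one above.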

Background

See #68 and #429

bbengfort added the type: technical debt, level: novice, and pycon2019 labels May 16, 2018
tabishsada (Contributor) commented

I'll work on this.

bbengfort pushed a commit that referenced this issue Dec 11, 2018
Add Kendall-Tau correlation metric to the Rank2D visualizer. Additionally, extends and completes the Rank2D tests and verifies the Spearman metric. 

Fixes #628 and #435
yanigisawa commented

Are there additional tests / algorithms expected here beyond the Kendall-Tau metric added above?

lwgray (Contributor) commented May 11, 2019

@yanigisawa Thanks for the comment. Did you have other algorithms in mind? Note that this issue is a little tricky as it's closely tied with #682 (updating tests to use the new datasets module) and #318 (using the new pytest testing style and fixtures). If you'd like to work on this issue, the rankd tests could certainly use beefing up, but make sure to take a look at those other two issues as well!

rebeccabilbro added a commit to rebeccabilbro/yellowbrick that referenced this issue Jun 15, 2020
rebeccabilbro added a commit that referenced this issue Jun 21, 2020
This PR updates our RankD tests to improve our coverage and better leverage YB datasets.