Create some unit tests #7
Looping @chisingh in on this, let's coordinate on unit tests for R and Python. One idea that wasn't obvious when this issue was created but makes sense now: unit tests can check the specs generated on the backend language side, perhaps not via a checksum or hash, but by checking for certain key fields in the generated JSON.
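That idea could look something like the following minimal sketch in Python. The spec content, field names, and `check_key_fields` helper are all hypothetical, just to illustrate the shape of such a test:

```python
import json

# Hypothetical generated spec; in practice this would come from the
# datamations backend (R or Python) serialized as JSON.
spec_json = '{"data": {"values": []}, "mark": "point", "encoding": {"x": {}, "y": {}}}'

def check_key_fields(spec_json, required=("data", "mark", "encoding")):
    """Check that the generated spec contains certain key fields,
    rather than comparing a checksum of the entire output."""
    spec = json.loads(spec_json)
    missing = [field for field in required if field not in spec]
    assert not missing, f"spec is missing key fields: {missing}"

check_key_fields(spec_json)
```

Checking key fields rather than a full hash keeps the tests robust to cosmetic changes (whitespace, field ordering) in the serialized output.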
It looks like there are some old specs that no longer work, e.g. here: https://github.com/microsoft/datamations/blob/main/inst/htmlwidgets/test/index.html. Let's get rid of them to avoid confusion going forward.
I've cleaned out the sandbox folder so it only has working specs.
Re: testing, there are a bunch of tests here that check the format/fields of the JSON on the R end. Definitely not 100% test coverage because of all the Shiny and legacy code, etc., but a decent start on the core functionality. (The middle columns show lines, covered, missed, etc.; the last column is coverage for each file.) I'm still trying to think of the best way to compare the Python vs. R specs within the testthat framework (which is commonly used for R packages). If the Python code can write specs to a JSON file, we can read those in, generate the R specs, and check that they're the same as a start!
Sounds like a good idea on writing the Python JSON specs out to files and testing them with the R testthat framework. @chisingh, let's work on this once Python is up and running.
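On the Python side, the write-out step could be as simple as this sketch (the output path and the spec content are made up for illustration; the real specs would come from the datamations generator):

```python
import json
from pathlib import Path

# Hypothetical spec produced by the Python backend.
spec = {"mark": "point", "encoding": {"x": {"field": "Work"}}}

# Write it somewhere the R testthat suite can read it back in and
# compare against the spec generated on the R side.
out = Path("tests/specs/python_spec.json")
out.parent.mkdir(parents=True, exist_ok=True)
out.write_text(json.dumps(spec, indent=2))
```

The R side would then `jsonlite::read_json()` the same file inside a testthat test and compare it to the R-generated spec.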
I have added unit tests using the pytest framework; these can also be enabled to run after every commit once issue #104 is closed. Some of these tests use the raw JSON files provided by @sharlagelfand to verify that both outputs match.
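A rough sketch of what one such pytest check might look like (the `generate_spec` stand-in and the reference file here are hypothetical; the real tests compare against the raw JSON files in the repo):

```python
import json
from pathlib import Path

def generate_spec():
    """Stand-in for the real Python spec generator."""
    return {"mark": "point", "encoding": {"x": {"field": "Work"}}}

# Hypothetical raw reference spec; in the real tests these are the
# JSON files provided on the R side.
reference_path = Path("reference_spec.json")
reference_path.write_text(json.dumps({"mark": "point", "encoding": {"x": {"field": "Work"}}}))

def test_matches_reference_spec():
    """Pytest-style check: the generated spec must equal the raw
    reference JSON checked into the repo."""
    reference = json.loads(reference_path.read_text())
    assert generate_spec() == reference
```

Under pytest, any `test_*` function like this is collected and run automatically, which is what makes the post-commit hookup in #104 straightforward.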
@chisingh will output JSON specs from Python, and @sharlagelfand will read them in and test with testthat in R.
Just working on testing those specs @chisingh, and it looks like there are some discrepancies in the ordering of fields. In the R specs:

```json
{
  "gemini_id": 20,
  "Work": "Academia",
  "datamations_x": 1,
  "datamations_y": 85.0122219615483,
  "datamations_y_tooltip": 85.0122219615483,
  "datamations_y_raw": 85.3303201349918,
  "Lower": 84.7637160761228,
  "Upper": 85.2607278469738
}
```

versus in Python:

```json
{
  "gemini_id": 20,
  "Work": "Academia",
  "datamations_x": 1,
  "datamations_y": 85.01222196154829,
  "datamations_y_raw": 85.33032013499178,
  "datamations_y_tooltip": 85.01222196154829,
  "Lower": 84.76371607612279,
  "Upper": 85.2607278469738
}
```

The order of `datamations_y_raw` and `datamations_y_tooltip` differs between the two. Otherwise the specs look equivalent! Just something it might be nice to keep in mind and change if not too big of a hassle; otherwise I will have to match the ordering before each test.
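If matching the ordering turns out to be a hassle, one option is to compare the parsed specs rather than the raw text. A minimal sketch in Python (the `specs_equal` helper and the inline fragments are hypothetical): parsed dicts ignore key order, and a small tolerance absorbs the tiny floating-point printing differences between R and Python.

```python
import json
import math

def specs_equal(a, b, rel_tol=1e-9):
    """Compare two parsed spec fragments, ignoring key order and
    allowing tiny floating-point printing differences."""
    if isinstance(a, dict) and isinstance(b, dict):
        return a.keys() == b.keys() and all(specs_equal(a[k], b[k], rel_tol) for k in a)
    if isinstance(a, list) and isinstance(b, list):
        return len(a) == len(b) and all(specs_equal(x, y, rel_tol) for x, y in zip(a, b))
    if isinstance(a, float) or isinstance(b, float):
        return math.isclose(a, b, rel_tol=rel_tol)
    return a == b

# Same values printed by R (15 significant digits) and Python (17),
# with the fields in a different order.
r_spec = json.loads('{"datamations_y": 85.0122219615483, "datamations_y_raw": 85.3303201349918}')
py_spec = json.loads('{"datamations_y_raw": 85.33032013499178, "datamations_y": 85.01222196154829}')
assert specs_equal(r_spec, py_spec)
```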
That's great! I will reorder those fields.
This will require some careful thought because the outputs are visualizations, not numerical results.
Maybe comparing to a snapshot or checksum or something?
And perhaps hook this up to GitHub Actions for continuous integration?
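One way to sketch the checksum idea: hash a canonical serialization of the spec, so field ordering does not affect the result (the `spec_checksum` helper and the fragments are hypothetical). Note that the small floating-point printing differences between R and Python would still change the hash, so values may need rounding before hashing.

```python
import hashlib
import json

def spec_checksum(spec):
    """Checksum of a spec dict; sort_keys makes the hash independent
    of field ordering between the R and Python backends."""
    canonical = json.dumps(spec, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical fragments from the two backends; field order differs,
# but the checksums agree.
r_spec = {"datamations_x": 1, "Work": "Academia"}
py_spec = {"Work": "Academia", "datamations_x": 1}
assert spec_checksum(r_spec) == spec_checksum(py_spec)
```

Stored checksums like these could then be compared in CI on every commit, which fits the GitHub Actions idea above.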