
Test adequacy #215


Merged: jmafoster1 merged 23 commits from test-adequacy into main on Aug 4, 2023.

Conversation

christopher-wild (Contributor):

No description provided.


github-actions bot commented Jul 19, 2023

🦙 MegaLinter status: ✅ SUCCESS

Descriptor Linter Files Fixed Errors Elapsed time
✅ PYTHON black 26 0 1.61s
✅ PYTHON pylint 26 0 6.41s

See detailed report in MegaLinter reports

MegaLinter is graciously provided by OX Security


codecov bot commented Aug 3, 2023

Codecov Report

Merging #215 (933a168) into main (de7ed49) will increase coverage by 0.19%.
Report is 1 commit behind head on main.
The diff coverage is 98.93%.

Impacted file tree graph

@@            Coverage Diff             @@
##             main     #215      +/-   ##
==========================================
+ Coverage   95.44%   95.63%   +0.19%     
==========================================
  Files          19       20       +1     
  Lines        1338     1399      +61     
==========================================
+ Hits         1277     1338      +61     
  Misses         61       61              
Files Changed Coverage Δ
causal_testing/data_collection/data_collector.py 100.00% <ø> (ø)
causal_testing/testing/causal_test_case.py 100.00% <ø> (ø)
causal_testing/testing/causal_test_suite.py 97.22% <ø> (ø)
causal_testing/testing/causal_test_result.py 96.42% <87.50%> (+3.97%) ⬆️
causal_testing/json_front/json_class.py 97.87% <100.00%> (-1.02%) ⬇️
causal_testing/testing/causal_test_adequacy.py 100.00% <100.00%> (ø)
causal_testing/testing/causal_test_outcome.py 96.96% <100.00%> (-0.05%) ⬇️
causal_testing/testing/estimators.py 90.94% <100.00%> (-0.04%) ⬇️

Continue to review full report in Codecov by Sentry.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 79dcb48...933a168. Read the comment docs.

jmafoster1 marked this pull request as ready for review on August 3, 2023 at 14:58.
@@ -66,9 +67,8 @@ def set_paths(self, json_path: str, dag_path: str, data_paths: list[str] = None)
data_paths = []
self.input_paths = JsonClassPaths(json_path=json_path, dag_path=dag_path, data_paths=data_paths)

-    def setup(self, scenario: Scenario):
+    def setup(self, scenario: Scenario, data=None):
christopher-wild (Contributor, Author):
I can't see this data parameter used in any setup calls. Is this for some future use?

jmafoster1 (Contributor):

I am using it as part of my case study so I can pass in the data directly rather than having to pass in filepaths
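To make that convenience concrete, here is a minimal, self-contained sketch of the pattern being described: a front end that accepts either file paths or an in-memory DataFrame. The class, attribute, and column names below are hypothetical illustrations, not the framework's actual API; only the idea of an optional data argument on setup comes from the diff above.

```python
import pandas as pd

class DataFrontEnd:
    """Hypothetical stand-in for the JSON front end, used only to
    illustrate the optional in-memory data path discussed above."""

    def __init__(self):
        self.data = None
        self.data_paths = []

    def set_paths(self, data_paths: list[str]):
        # Original route: remember CSV paths to be read later during setup.
        self.data_paths = data_paths

    def setup(self, data: pd.DataFrame = None):
        # New route: if a DataFrame is supplied directly (e.g. constructed
        # inside a case-study script), use it and skip reading from disk.
        if data is not None:
            self.data = data
        else:
            self.data = pd.concat(pd.read_csv(p) for p in self.data_paths)


# Usage: pass the data directly instead of via file paths.
front_end = DataFrontEnd()
front_end.setup(data=pd.DataFrame({"x": [0, 1], "y": [0.2, 0.9]}))
```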

if "coverage" in test and test["coverage"]:
adequacy_metric = DataAdequacy(causal_test_case, estimation_model, self.data_collector)
adequacy_metric.measure_adequacy()
# self._append_to_file(f"KURTOSIS: {effect_estimate.mean()}", logging.INFO)
christopher-wild (Contributor, Author):

Should we keep these commented code lines?

jmafoster1 (Contributor):

Probs not. I'll delete before I merge

converted = []
for r in results[field]:
if isinstance(r, float):
converted.append(
christopher-wild (Contributor, Author):

Does this leave you with a list of dataframes, each of length 1? If so, it seems quite inefficient and convoluted.

Would it work to make converted a blank dataframe and use the df.append method, so it ends up as just one dataframe?

jmafoster1 (Contributor):

I did this so it type checks. If you have a categorical variable, statsmodels handles this with a dummy encoding, so you get a dataframe of coefficients with one for each category. It's easier to turn everything to a df than to handle the two different datatypes separately, although that's probably just me thinking like a functional programmer again...
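As a rough illustration of that normalisation (the helper name, column label, and example values below are made up, not the framework's code): whether an estimator returns a single float or a dummy-encoded pandas Series with one coefficient per category, each result is wrapped into a DataFrame so the downstream code only ever handles one type.

```python
import pandas as pd

def normalise_results(raw_results: list) -> list:
    """Hypothetical helper sketching the conversion described above;
    not the framework's actual implementation."""
    converted = []
    for r in raw_results:
        if isinstance(r, float):
            # A plain scalar estimate becomes a one-cell DataFrame.
            converted.append(pd.DataFrame({"effect": [r]}))
        else:
            # A dummy-encoded categorical gives a Series with one coefficient
            # per category; transpose it into a single-row DataFrame.
            converted.append(pd.DataFrame(r).T)
    return converted


# Usage with one scalar estimate and one per-category Series.
print(normalise_results([0.5, pd.Series({"cat_a": 0.1, "cat_b": 0.7})]))
```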

"FAILING ON",
[(ci_low, ci_high) for ci_low, ci_high in zip(ci_low, ci_high) if not ci_low < 0 < ci_high],
)
# if not all(ci_low < 0 < ci_high for ci_low, ci_high in zip(ci_low, ci_high)):
christopher-wild (Contributor, Author):

If avoidable, I think it would be better to just remove code rather than commenting it out. It can always be found in previous versions of the framework if needed again.

jmafoster1 (Contributor):

Again, you're probably right. I'm just a lazy programmer.

from causal_testing.specification.causal_specification import CausalSpecification


class TestJsonClass(unittest.TestCase):
christopher-wild (Contributor, Author):

The test class name and docstring might need updating.

jmafoster1 (Contributor):

Good spot!

jmafoster1 merged commit bf0bbe3 into main on Aug 4, 2023.
jmafoster1 deleted the test-adequacy branch on August 4, 2023 at 13:14.