
Standardize inputs as pd.DataFrame / pd.Series #130

Merged: 18 commits merged from standarize_pd into master on Nov 21, 2019

Conversation

@angela97lin (Contributor) commented Oct 15, 2019

Fixes #61
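For context on what the PR title describes, here is a minimal sketch of an input-standardization helper. The function name `standardize_input` is an assumption for illustration, not the actual evalml code:

```python
import numpy as np
import pandas as pd

def standardize_input(X, y=None):
    # Hypothetical helper: coerce array-like inputs to pandas containers
    # so downstream pipeline code can rely on a single input type.
    if not isinstance(X, pd.DataFrame):
        X = pd.DataFrame(X)
    if y is not None and not isinstance(y, pd.Series):
        y = pd.Series(y)
    return X, y

# Usage: numpy arrays and plain lists come back as pandas objects.
X, y = standardize_input(np.array([[1, 2], [3, 4]]), [0, 1])
print(type(X).__name__, type(y).__name__)  # DataFrame Series
```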

@codecov bot commented Oct 15, 2019

Codecov Report

Merging #130 into master will increase coverage by 0.24%.
The diff coverage is 98.11%.


@@            Coverage Diff             @@
##           master     #130      +/-   ##
==========================================
+ Coverage    96.9%   97.15%   +0.24%     
==========================================
  Files          92       92              
  Lines        2422     2459      +37     
==========================================
+ Hits         2347     2389      +42     
+ Misses         75       70       -5
Impacted Files Coverage Δ
evalml/__init__.py 100% <ø> (ø) ⬆️
...valml/tests/preprocessing_tests/test_split_data.py 100% <ø> (ø) ⬆️
evalml/tests/objective_tests/test_objectives.py 100% <ø> (ø) ⬆️
evalml/tests/pipeline_tests/test_pipelines.py 100% <ø> (ø) ⬆️
evalml/tests/objective_tests/test_lead_scoring.py 100% <100%> (ø) ⬆️
evalml/objectives/fraud_cost.py 100% <100%> (ø) ⬆️
...alml/tests/objective_tests/test_fraud_detection.py 100% <100%> (ø) ⬆️
evalml/pipelines/pipeline_base.py 95.58% <100%> (+0.2%) ⬆️
evalml/preprocessing/utils.py 83.72% <100%> (+1.66%) ⬆️
evalml/objectives/lead_scoring.py 96.66% <85.71%> (-3.34%) ⬇️
... and 1 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 9185292...a3af553.

@angela97lin angela97lin requested review from kmax12 and jeremyliweishih and removed request for kmax12 Oct 15, 2019
@jeremyliweishih (Contributor) left a comment

LGTM other than these test additions.

evalml/tests/objective_tests/test_fraud_detection.py (outdated; resolved)
evalml/tests/objective_tests/test_lead_scoring.py (outdated; resolved)
jeremyliweishih previously approved these changes Oct 16, 2019

@jeremyliweishih (Contributor) left a comment

LGTM. Let's just wait for CI to report.

@angela97lin angela97lin self-assigned this Oct 17, 2019
@kmax12 kmax12 self-assigned this Nov 18, 2019
kmax12 previously approved these changes Nov 21, 2019

@kmax12 (Contributor) left a comment

This looks good to me. Although, as I was going through, I noticed we aren't consistent with how we output things as pandas vs. numpy. Opened #236 to work through that.
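To illustrate the inconsistency kmax12 mentions (an assumed example; the actual scope of #236 isn't shown here): even when inputs are standardized to pandas, wrapped estimators typically return raw numpy arrays, so outputs could be normalized the same way. The helper `ensure_series` below is hypothetical:

```python
import numpy as np
import pandas as pd

def ensure_series(values, index=None):
    # Hypothetical output normalizer: wrap numpy predictions in a
    # pd.Series so callers always receive pandas back.
    if isinstance(values, pd.Series):
        return values
    return pd.Series(np.asarray(values), index=index)

X = pd.DataFrame({"a": [1, 2, 3]})
raw_preds = np.array([0, 1, 1])  # what an estimator's predict() often returns
preds = ensure_series(raw_preds, index=X.index)
print(type(preds).__name__)  # Series
```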

@kmax12 kmax12 self-requested a review Nov 21, 2019

kmax12 approved these changes Nov 21, 2019

@kmax12 (Contributor) left a comment

LGTM

@angela97lin angela97lin merged commit 15985d4 into master Nov 21, 2019
@angela97lin angela97lin deleted the standarize_pd branch Nov 21, 2019
@angela97lin angela97lin mentioned this pull request Dec 16, 2019
Successfully merging this pull request may close these issues: Standardize Input as pd
3 participants