
Pass X_train and y_train to graph_prediction_vs_actual_over_time #2762

Merged
merged 2 commits into from
Sep 10, 2021

Conversation

freddyaboulton
Contributor

Pull Request Description

Fixes #2731



@codecov

codecov bot commented Sep 9, 2021

Codecov Report

Merging #2762 (672e243) into main (a7d5105) will decrease coverage by 0.1%.
The diff coverage is 100.0%.

Impacted file tree graph

@@           Coverage Diff           @@
##            main   #2762     +/-   ##
=======================================
- Coverage   99.8%   99.8%   -0.0%     
=======================================
  Files        301     301             
  Lines      27897   27889      -8     
=======================================
- Hits       27834   27826      -8     
  Misses        63      63             
Impacted Files                                          Coverage Δ
evalml/model_understanding/graphs.py                    100.0% <100.0%> (ø)
...lml/tests/model_understanding_tests/test_graphs.py   100.0% <100.0%> (ø)


Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@@ -7,6 +7,9 @@ Release Notes
* Set ``eval_metric`` to ``logloss`` for ``XGBoostClassifier`` :pr:`2741`
* Added support for ``woodwork`` versions ``0.7.0`` and ``0.7.1`` :pr:`2743`
* Changed ``explain_predictions`` functions to display original feature values :pr:`2759`
* Added ``X_train`` and ``y_train`` to ``graph_prediction_vs_actual_over_time`` and ``get_prediction_vs_actual_over_time_data`` :pr:`2762`
* Added ``forecast_horizon`` as a required parameter to time series pipelines and ``AutoMLSearch`` :pr:`2697`
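The release-notes entry above summarizes the API change: the prediction-vs-actual helpers now take the training data alongside the holdout data. As a rough illustration of the kind of date/prediction/actual rows such a helper assembles (all names below are hypothetical, not evalml's actual implementation):

```python
# Hypothetical sketch, not evalml's implementation: pairing each date
# with its predicted and actual target value for plotting over time.
def prediction_vs_actual_over_time_rows(predictions, actuals, dates):
    """Return one row per date with its predicted and actual value."""
    if not (len(predictions) == len(actuals) == len(dates)):
        raise ValueError("predictions, actuals, and dates must align")
    return [
        {"dates": d, "prediction": p, "actual": a}
        for d, p, a in zip(dates, predictions, actuals)
    ]

rows = prediction_vs_actual_over_time_rows(
    predictions=[10.2, 11.0, 9.8],
    actuals=[10.0, 11.5, 9.5],
    dates=["2021-09-01", "2021-09-02", "2021-09-03"],
)
```

In evalml itself, the rows come from the pipeline's predictions on the holdout set rather than hard-coded lists; this sketch only shows the shape of the output.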
Contributor Author


Accidentally put the release notes for #2697 into the previous release during a rebase lol

Contributor

@chukarsten chukarsten left a comment


This LGTM! Thank you!

Collaborator

@jeremyliweishih jeremyliweishih left a comment


nice!

Contributor

@angela97lin angela97lin left a comment


Looks super clean! Just one question to feed my growing time-series knowledge 😂

    xaxis={"title": "Time"},
    yaxis={"title": "Target Values and Predictions"},
)
from evalml.model_understanding import graph_prediction_vs_actual_over_time
Contributor


Love how simple this is!

@@ -1427,22 +1427,23 @@ def visualize_decision_tree(
return source_obj


-def get_prediction_vs_actual_over_time_data(pipeline, X, y, dates):
+def get_prediction_vs_actual_over_time_data(pipeline, X, y, X_train, y_train, dates):
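For context on why the training data is now a parameter: time-series pipelines build lagged features, so the first predictions on a holdout set must look back into the tail of the training target (which is what an in-sample predict needs). A toy sketch of that dependency, assuming a simple lag forecast (none of these names are evalml's API):

```python
# Illustrative sketch only: with lagged features, holdout predictions
# are seeded from the tail of y_train, which is why the training data
# must be passed in. Names here are assumptions, not evalml's API.
def predict_in_sample_sketch(y_train, y_holdout_len, lag=1):
    """Naive lag-`lag` forecast: each holdout prediction is the value
    `lag` steps earlier, starting from the end of y_train."""
    history = list(y_train)
    preds = []
    for _ in range(y_holdout_len):
        preds.append(history[-lag])
        history.append(preds[-1])  # roll the forecast forward
    return preds

preds = predict_in_sample_sketch([1, 2, 3], y_holdout_len=2, lag=2)  # [2, 3]
```

Without `y_train`, `history` would start empty and the first `lag` holdout predictions would be impossible to compute, which mirrors why the real function broke once training data stopped being implicit.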
Contributor


Was this a bug? Or simply doing something different? If the latter, is there any functionality that we want to keep around?

Contributor Author


It was broken by #2697. I just decided to fix it in a follow-up issue rather than in #2697 itself, because #2697 was large enough!

Contributor


Got it, cool stuff!

@freddyaboulton freddyaboulton merged commit bb8329b into main Sep 10, 2021
@freddyaboulton freddyaboulton deleted the 2731-graph-prediction-over-time-fix branch September 10, 2021 15:32
@freddyaboulton freddyaboulton restored the 2731-graph-prediction-over-time-fix branch September 10, 2021 15:52
@freddyaboulton freddyaboulton deleted the 2731-graph-prediction-over-time-fix branch September 10, 2021 15:52
@chukarsten chukarsten mentioned this pull request Sep 10, 2021

Successfully merging this pull request may close these issues.

Update graph_prediction_vs_actual_over_time to use predict_in_sample
4 participants