
feat: EvaluationRunResult add parameter to specify columns to keep in the comparative Dataframe #7879

Merged: 10 commits into main on Jun 17, 2024

Conversation

davidsbatista
Contributor

Proposed Changes:

Allow the user to specify which columns to keep in the comparative Dataframe
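For context, the change lets callers name which shared (non-score) columns survive when two evaluation runs are merged into one comparative table. The following is a minimal, self-contained sketch of that idea, not Haystack's actual `EvaluationRunResult` implementation; the function name, run labels, and column conventions are illustrative:

```python
from typing import Dict, List, Optional

def comparative_table(
    run_a: Dict[str, List],
    run_b: Dict[str, List],
    columns_to_keep: Optional[List[str]] = None,
) -> Dict[str, List]:
    """Merge two column-oriented evaluation result tables.

    Columns with identical values in both runs (typically the inputs) are
    kept once; run-specific columns such as scores are prefixed with a run
    label. `columns_to_keep` restricts which shared columns survive in the
    merged table; None keeps all of them.
    """
    # Columns present in both runs with identical values.
    common = [c for c in run_a if c in run_b and run_a[c] == run_b[c]]
    kept = common if columns_to_keep is None else [c for c in common if c in columns_to_keep]
    merged = {c: run_a[c] for c in kept}
    for label, run in (("run_a", run_a), ("run_b", run_b)):
        for c, values in run.items():
            if c not in common:  # run-specific columns, e.g. metric scores
                merged[f"{label}.{c}"] = values
    return merged

run_a = {"question": ["q1", "q2"], "context": ["c1", "c2"], "faithfulness": [0.9, 0.8]}
run_b = {"question": ["q1", "q2"], "context": ["c1", "c2"], "faithfulness": [0.7, 0.95]}
table = comparative_table(run_a, run_b, columns_to_keep=["question"])
```

Here `"context"` is dropped from the merged table because it is not listed in `columns_to_keep`, while the differing `"faithfulness"` scores are kept under per-run prefixes.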

How did you test it?

Run local unit tests

Checklist

@davidsbatista davidsbatista requested a review from a team as a code owner June 17, 2024 11:45
@davidsbatista davidsbatista requested review from anakin87 and removed request for a team June 17, 2024 11:45
@github-actions github-actions bot added the topic:tests and type:documentation (Improvements on the docs) labels Jun 17, 2024
@davidsbatista davidsbatista changed the title fix: EvaluationRunResult add parameter to specify columns to keep in the comparative Dataframe feat: EvaluationRunResult add parameter to specify columns to keep in the comparative Dataframe Jun 17, 2024
@davidsbatista davidsbatista requested a review from a team as a code owner June 17, 2024 11:51
@davidsbatista davidsbatista requested review from dfokina and removed request for a team June 17, 2024 11:51
@coveralls
Collaborator

coveralls commented Jun 17, 2024

Pull Request Test Coverage Report for Build 9547073146

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • 4 unchanged lines in 1 file lost coverage.
  • Overall coverage increased (+0.003%) to 89.746%

Files with coverage reduction: evaluation/eval_run_result.py (4 new missed lines, 92.42% covered)
Totals: change from base Build 9544797751: +0.003%; Covered Lines: 6923; Relevant Lines: 7714

💛 - Coveralls

@coveralls posted an identical coverage report for Build 9547147359.

@shadeMe shadeMe left a comment


LGTM!

davidsbatista and others added 4 commits June 17, 2024 17:39
Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
…ve-be3e15ce45de3e0b.yaml

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>
@coveralls posted identical coverage reports for Builds 9550619663, 9550621579, 9550623950, and 9550688665 (change from base Build 9550292209: +0.003%; overall coverage 89.746%).

@davidsbatista davidsbatista merged commit 55513f7 into main Jun 17, 2024
17 checks passed
@davidsbatista davidsbatista deleted the fix/evaluation_run_result_comparative_dataframe branch June 17, 2024 16:08
masci added a commit that referenced this pull request Jun 18, 2024
enable ruff format and re-format outdated files

feat: `EvaluationRunResult` add parameter to specify columns to keep in the comparative `Dataframe`  (#7879)

* adding param to explicitly state which cols to keep

* adding param to explicitly state which cols to keep

* adding param to explicitly state which cols to keep

* updating tests

* adding release notes

* Update haystack/evaluation/eval_run_result.py

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>

* Update releasenotes/notes/add-keep-columns-to-EvalRunResult-comparative-be3e15ce45de3e0b.yaml

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>

* updating docstring

---------

Co-authored-by: Madeesh Kannan <shadeMe@users.noreply.github.com>

add format-check

fail on format and linting failures

fix string formatting

reformat long lines

fix tests

fix typing

linter

pull from main
silvanocerza pushed a commit that referenced this pull request Jun 18, 2024

* ruff settings, followed by the same commit series as above, plus: reformat; lint -> check
Labels
topic:tests, type:documentation (Improvements on the docs)
3 participants