
feat: Integrate F1 score function to frontend, align with sklearn met… #28487

Merged
merged 2 commits into Transpile-AI:main
Mar 11, 2024

Conversation

@muzakkirhussain011 (Contributor) commented Mar 6, 2024

PR Description

Pull Request - Add F1 Score Function to Ivy Frontend Sklearn Metrics

Overview

This pull request introduces a new feature to the Ivy machine learning framework. It adds an F1 score calculation function, which is a commonly used metric for evaluating the accuracy of a binary classification model.

Details

The F1 score measures a test's accuracy as the harmonic mean of precision and recall. The new function is integrated into Ivy's frontend metrics, aligning with the existing Scikit-learn metrics so that users familiar with Scikit-learn can easily adapt to Ivy for their machine learning tasks.
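As a quick illustration of the harmonic-mean formula (the numbers below are hypothetical, not taken from this PR's tests):

```python
# F1 is the harmonic mean of precision (P) and recall (R):
#   F1 = 2 * P * R / (P + R)
precision, recall = 0.75, 0.60  # illustrative values only
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.6667
```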

Implementation

  • The f1_score function has been implemented in the frontends/sklearn/metrics module.
  • It accepts true labels and predicted labels as inputs and calculates the F1 score.
  • The implementation is compatible with the Scikit-learn API, making it intuitive for users transitioning from Scikit-learn to Ivy.
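For reference, here is a minimal NumPy sketch of a binary F1 computation with Scikit-learn-style semantics (returning 0.0 when the score is undefined, as sklearn does by default). This is an illustrative outline only; the PR's actual function lives in the `frontends/sklearn/metrics` module and its exact signature may differ:

```python
import numpy as np

def f1_score_sketch(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall.

    Illustrative sketch only -- not the PR's actual Ivy implementation.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    if precision + recall == 0:
        return 0.0  # undefined score maps to 0.0, as in sklearn's default
    return 2 * precision * recall / (precision + recall)

# tp=2, fp=0, fn=1 -> precision=1.0, recall=2/3 -> F1=0.8
print(f1_score_sketch([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.8
```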

Benefits

  • Provides a standardized way to evaluate binary classifiers within the Ivy framework.
  • Enhances the compatibility of Ivy with Scikit-learn, one of the most popular machine learning libraries.
  • Offers users a broader set of tools for model evaluation without leaving the Ivy environment.

Testing

  • Comprehensive tests have been added to ensure the accuracy and reliability of the F1 score calculations.
  • The tests cover a variety of scenarios.

Conclusion

The addition of the F1 score function enriches the Ivy metrics module and bridges the gap between Ivy and Scikit-learn metrics, fostering a more seamless user experience. We welcome feedback and contributions to further refine this feature.


Looking forward to the community's input on this enhancement!

Closes #

Checklist

  • Did you add a function?
  • Did you add the tests?
  • Did you run your tests and are your tests passing?
  • Did pre-commit not fail on any check?
  • Did you follow the steps we provided?

@github-actions bot (Contributor) left a comment

PR Compliance Checks

Thank you for your Pull Request! We have run several checks on this pull request in order to make sure it's suitable for merging into this project. The results are listed in the following section.

Issue Reference

In order to be considered for merging, the pull request description must refer to a specific issue number. This is described in our contributing guide and our PR template.
This check is looking for a phrase similar to: "Fixes #XYZ" or "Resolves #XYZ" where XYZ is the issue number that this PR is meant to address.

Protected Branch

In order to be considered for merging, the pull request changes must not be implemented on the "main" branch. This is described in our Contributing Guide. We are closing this pull request and we would suggest that you implement your changes as described in our Contributing Guide and open a new pull request.

@muzakkirhussain011 (Contributor, Author) commented Mar 6, 2024

Hi @Ishticode ,

I hope this message finds you well. I wanted to bring to your attention the new pull request for the F1 score function that I've submitted. Given your familiarity with the recent precision score PR, which you reviewed and merged, I believe this one will be quite straightforward for you.

The F1 score implementation is similar in structure and follows the same standards we established with the precision score. All backend tests have passed successfully, and I've attached a screenshot of the test outputs for your reference. Additionally, the CI tests are also green across the board.

Could you please review the PR at your earliest convenience? Your insights were invaluable last time, and I'm looking forward to your feedback on this as well.

Thank you for your time and assistance.

Best regards

Screenshot_20240306-074353_WhatsApp.jpg

@muzakkirhussain011 (Contributor, Author) commented

Hi @Ishticode ,

Could you please review the PR I've submitted on the F1 score? It's related to the previous PR on the precision score that you reviewed. Your timely review will help me proceed with further PRs lined up.

Thank you

@Ishticode (Contributor) left a comment

Looks good. Thank you very much @muzakkirhussain011

@Ishticode Ishticode merged commit 2175c22 into Transpile-AI:main Mar 11, 2024
140 of 145 checks passed