
Add examples for the deita paper tasks #329

Merged: 1 commit into main on Feb 8, 2024

Conversation

@plaguss (Contributor) commented Feb 5, 2024

Description

This PR includes additional examples and adds some notes for the Deita tasks.

plaguss requested a review from dvsrepo on February 5, 2024 16:04
plaguss self-assigned this on Feb 5, 2024
plaguss requested a review from sdiazlor on February 5, 2024 16:05
@davidberenstein1957 (Member) left a comment


Hi @plaguss, I realized we return the ranks/scores here, but I'm not sure this is handled intuitively: normally we work with ratings, where a higher rating corresponds to a higher score, but in this case it is the other way around, right? That might not be aligned with the RankingQuestion in Argilla itself.

@plaguss (Contributor, Author) commented Feb 5, 2024

Hi @davidberenstein1957, we don't follow the paper strictly here: we decided to interpret the scores as if they were ratings, so we can reuse all the behaviour from the PreferenceTask without adding extra complexity for these tasks. This way they are more reusable, and we don't need a specific type of Task that takes rankings into account. We should tackle that when we start integrating PairRM and the like (#306 should take those into account). In this case higher ratings are also higher scores: higher complexity/quality corresponds to a better response.

The discussion from the PR can be seen here.
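
To make the interpretation concrete, here is a minimal sketch (illustrative only; the names and values are made up and this is not distilabel code) of how the scores are meant to be read: they behave like ratings, so the preferred response is simply the one with the highest value and no inversion is needed.

# Illustrative only: Deita-style scores are treated as ratings, i.e. higher == better.
scores = {"response_a": 2.0, "response_b": 1.0}

def preferred(ratings: dict) -> str:
    # Pick the response with the highest rating, just like a preference/rating task would.
    return max(ratings, key=ratings.get)

print(preferred(scores))  # -> "response_a"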

@davidberenstein1957 (Member) commented Feb 5, 2024

@plaguss I agree, but now it seems like we attribute ratings to values that are originally rankings, so they should be reversed to allow choosing one of the preferred options. Correct?

rank 1, 2 => rating 2, 1, because a lower rating should translate into a higher rank?

How is this handled in to_argilla, for example?

@plaguss (Contributor, Author) commented Feb 5, 2024

@plaguss I agree, but now it seems like we attribute ratings to values that are originally rankings, so they should be reversed to allow choosing one of the preferred options. Correct?

rank 1, 2 => rating 2, 1, because a lower rating should translate into a higher rank?

How is this handled in to_argilla, for example?

I understand, but for example, the QualityScorer uses the following prompt:

from distilabel.tasks import QualityScorerTask

task = QualityScorerTask()
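# Format the quality-scoring prompt for a question and two candidate responses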
print(task.generate_prompt("What are the first 5 Fibonacci numbers?", ["0 1 1 2 3", "0 1 1 2 3"]).formatted_prompt)
Rank the following responses provided by different AI assistants to the user’s question
according to the quality of their response. Score each response from 1 to 2, with 3
reserved for responses that are already very well written and cannot be improved further.
Your evaluation should consider factors such as helpfulness, relevance, accuracy, depth,
creativity, and level of detail of the response.
Use the following format:
[Response 1] Score:
[Response 2] Score:
...
#Question#: What are the first 5 Fibonacci numbers?
#Response List#:

[Response 1] 0 1 1 2 3
[Response 2] 0 1 1 2 3

Maybe we can discuss whether the prompts really make sense in our case, and perhaps rewrite them so that they are more "inspired by" the deita paper but represent proper ratings, explaining whether lower/higher is worse/better.
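
For completeness, a small illustrative snippet (not the actual to_argilla code; the values are made up) of the reversal that would only be required if the model returned true ranks rather than scores:

# Illustrative only: a reversal like "rank 1, 2 => rating 2, 1" is needed solely when the
# values are ranks (1 = best). The Deita prompts ask for scores, so this step is skipped.
ranks = [1, 2]                                   # rank 1 is the preferred response
ratings_from_ranks = [len(ranks) + 1 - r for r in ranks]
print(ratings_from_ranks)                        # -> [2, 1]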

@davidberenstein1957 (Member)

@plaguss Yes, it makes sense. I thought we needed to rank the responses within the prompt and that they weren't asked to score them directly. Ignore what I've commented above. 😅

plaguss merged commit 67fc9c2 into main on Feb 8, 2024
4 checks passed
plaguss deleted the docs/evol-complexity-notes branch on February 8, 2024 15:21
jphme pushed a commit to jphme/distilabel that referenced this pull request Feb 20, 2024