
Add RankGPT support inside RankLLM #12475

Merged

39 commits merged into run-llama:main on Apr 3, 2024

Conversation

xpbowler
Copy link
Contributor

@xpbowler xpbowler commented Apr 2, 2024

Description

  • Add RankGPT support inside RankLLM
  • Add sliding window example to RankLLM
  • Add parameter for choosing step_size
  • Remove redundant example notebook in docs/docs/examples/node_postprocessor
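For context, RankGPT-style sliding-window reranking slides a window over the candidate list from back to front, asking the LLM to reorder each window; `step_size` controls how far the window shifts between passes, and overlapping windows let relevant documents "bubble" toward the front. A minimal sketch of the mechanism (function names and the injected `rerank_window` callable are illustrative, not the RankLLM API):

```python
def sliding_window_rerank(docs, rerank_window, window_size=4, step_size=2):
    """Rerank `docs` by sliding a window from the end of the list to the start.

    `rerank_window` stands in for the LLM call that returns a window's items
    in relevance order; here it can be any callable on a list slice.
    """
    docs = list(docs)
    end = len(docs)
    while end > 0:
        start = max(0, end - window_size)
        # Reorder the current window in place.
        docs[start:end] = rerank_window(docs[start:end])
        # Shift the window toward the front; overlap (step_size < window_size)
        # lets strong documents carry over into the next window.
        end -= step_size
    return docs

# Toy "LLM": sort window items by an attached relevance score, descending.
ranked = sliding_window_rerank(
    [("d1", 0.2), ("d2", 0.9), ("d3", 0.5), ("d4", 0.7), ("d5", 0.1)],
    rerank_window=lambda w: sorted(w, key=lambda d: d[1], reverse=True),
    window_size=3,
    step_size=2,
)
# ranked[0] is ("d2", 0.9) and ranked[1] is ("d4", 0.7)
```

Note that a single back-to-front sweep only guarantees the best items reach the top (here d1 still precedes d3); that trade-off is exactly why a tunable `step_size` is useful.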

@dosubot dosubot bot added the size:XL This PR changes 500-999 lines, ignoring generated files. label Apr 2, 2024

Collaborator

Any reason to remove (or move?) this notebook? (now it won't show up in the docs, maybe we should replace it with the one you added below?)

Contributor Author

Oh I see, I'll add it back.

    from rank_llm.result import Result

    self._result = Result

    if model_enum == ModelType.VICUNA:
Collaborator

Curious why we removed the enum? It seemed helpful for validation.

Contributor Author

We'd need to add another parameter, gpt-type, or continuously update the enums to match the available OpenAI models.

I wanted to keep the API clean. Though thinking about it more, I think it's fine to add another optional parameter, if that sounds good.
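One way to keep enum validation without chasing OpenAI's ever-changing model list is an optional type parameter that validates only when supplied, with a coarse fallback otherwise. A sketch of that idea, not the merged design (the enum members beyond `VICUNA` and the `resolve_model` helper are hypothetical):

```python
from enum import Enum
from typing import Optional

class ModelType(Enum):
    VICUNA = "vicuna"
    ZEPHYR = "zephyr"
    GPT = "gpt"  # bucket for any OpenAI model name (gpt-4, gpt-4-turbo, ...)

def resolve_model(model: str, model_type: Optional[ModelType] = None) -> ModelType:
    """Trust an explicitly supplied `model_type`; otherwise infer a coarse
    type from the model name so new OpenAI names need no enum updates."""
    if model_type is not None:
        return model_type
    lowered = model.lower()
    for known in (ModelType.VICUNA, ModelType.ZEPHYR):
        if known.value in lowered:
            return known
    return ModelType.GPT  # default: treat unrecognized names as OpenAI models

resolve_model("gpt-4-turbo")     # → ModelType.GPT
resolve_model("vicuna-7b-v1.5")  # → ModelType.VICUNA
```

This keeps the common call site clean (no extra argument) while still letting callers pin a type explicitly when the name heuristic would guess wrong.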

Collaborator

@logan-markewich logan-markewich left a comment

A couple of nits, but looks good otherwise

@dosubot dosubot bot added size:M This PR changes 30-99 lines, ignoring generated files. and removed size:XL This PR changes 500-999 lines, ignoring generated files. labels Apr 3, 2024
@xpbowler
Contributor Author

xpbowler commented Apr 3, 2024

A couple of nits, but looks good otherwise

@logan-markewich fixed, let me know if it looks good!

Contributor

@nerdai nerdai left a comment

Can we also bump the version (patch) number in pyproject.toml?

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Apr 3, 2024
@logan-markewich logan-markewich merged commit 8aa2ea3 into run-llama:main Apr 3, 2024
8 checks passed
chrisalexiuk-nvidia pushed a commit to chrisalexiuk-nvidia/llama_index that referenced this pull request on Apr 25, 2024