
Command R+ Support to use the 128k context #1

Open · saleham5 opened this issue Jun 4, 2024 · 5 comments

saleham5 commented Jun 4, 2024

Hey, as the title says: can you add Command R+ support?

anakin87 (Owner) commented Jun 4, 2024

Hello! In its current form, AutoQuizzer is only a nice demonstration.

I invite you to fork the repo and replace the current Generator with another one that is compatible with the desired model.

Some docs:
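For example, a minimal sketch of such a swap, assuming the pipeline is built with Haystack 2.x (as the Jinja template quoted later in this thread suggests) and that the cohere-haystack integration is installed; the component name is illustrative, so check the actual pipeline code:

    # Sketch only: assumes `pip install cohere-haystack` and a COHERE_API_KEY
    # environment variable; this is not the project's actual wiring.
    from haystack_integrations.components.generators.cohere import CohereGenerator

    # Command R+ exposes a 128k-token context window, so the aggressive
    # truncation used for the hosted demo becomes unnecessary.
    generator = CohereGenerator(model="command-r-plus")

    # Swap it in wherever the current generator is added, e.g.:
    # pipeline.add_component("generator", generator)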

saleham5 (Author) commented Jun 4, 2024

Thank you. Another question: will it always generate the same questions and only use the beginning of the webpage, even with a bigger model like Command R+? Right now, no matter how many times I run it, it gives the exact same questions from the very beginning of the page. I tried increasing the tokens, but the result is the same. I would like to get varied questions drawn from the whole article I pass in. Maybe this is possible with Groq.

anakin87 (Owner) commented Jun 4, 2024

As mentioned in the README, I truncate the text to the first 4k characters: in the online version, I do not want to hit Groq rate limits.

{% for doc in documents %}{{ doc.content|truncate(4000) }}{% endfor %}

If you are using the project locally, you can safely remove this limit.
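For instance, a sketch of the same loop with the truncate filter dropped:

{% for doc in documents %}{{ doc.content }}{% endfor %}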

saleham5 (Author) commented Jun 4, 2024

Thank you for being patient with me; I am kinda new to all of this. So, by removing this line, it would go through the full page, right? Also, do I need to increase max_tokens here?

generation_kwargs={"max_tokens": 1000, "temperature": 0.5, "top_p": 1},

One last question: how can I make it generate more than 5 questions at a time? Is changing "create 5 multiple choice..." enough?

anakin87 (Owner) commented Jun 4, 2024

  • max_tokens refers to the generated tokens, not to the length of the original prompt, so there is no need to change it (unless you expect the text of the generated quiz to be longer)
  • to generate more than 5 questions at a time, you should modify this prompt template (see the sketch below):
    quiz_generation_template = """Given the following text, create 5 multiple choice quizzes in JSON format.
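A sketch of both changes together; the count of 10 and the max_tokens value below are illustrative, and the rest of the template string is abridged here:

    # Illustrative only: raise the requested question count in the template...
    quiz_generation_template = """Given the following text, create 10 multiple choice quizzes in JSON format.
    ..."""

    # ...and give the model room for the longer output (max_tokens counts
    # generated tokens, and 10 questions produce a longer quiz JSON than 5).
    generation_kwargs={"max_tokens": 2000, "temperature": 0.5, "top_p": 1},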
