Conversation

@Qubitium (Collaborator)
@nbasyl Changed some tests to use the default 512 rows of data instead of 1024. More rows of data don't help and only make the tests slower. I also noticed llama3.2 is extremely sensitive to the USE_CHAT_TEMPLATE toggle for tokenization/lm-eval. Without it, evaluation returns absurdly low numbers.
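A minimal sketch of the kind of test setting change described above. All names here (`DEFAULT_ROWS`, `USE_CHAT_TEMPLATE`, `make_eval_config`) are illustrative assumptions, not the project's actual API:

```python
# Hypothetical sketch of the test configuration change described in the
# comment: fewer calibration rows (512 instead of 1024), and chat-template
# tokenization enabled for chat-tuned models such as llama3.2.

DEFAULT_ROWS = 512  # was 1024; extra rows did not improve results, only slowed tests
USE_CHAT_TEMPLATE = True  # llama3.2 lm-eval scores collapse without the chat template

def make_eval_config(model_id: str, rows: int = DEFAULT_ROWS) -> dict:
    """Build a hypothetical lm-eval-style config for a quantization test."""
    return {
        "model": model_id,
        "calibration_rows": rows,
        "apply_chat_template": USE_CHAT_TEMPLATE,
    }

config = make_eval_config("meta-llama/Llama-3.2-1B-Instruct")
print(config["calibration_rows"], config["apply_chat_template"])  # → 512 True
```

The point of the sketch is that the row count is a shared default rather than a per-test override, and the chat-template flag is applied uniformly so chat-tuned models are tokenized the way they were trained.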

Qubitium merged commit c7e6de1 into main on Oct 18, 2025 — 4 checks passed.
CSY-ModelCloud deleted the log4 branch on October 20, 2025 03:20.
