
Conversation

@yoavkatz (Member)

Status

READY

Description

Made the LM Eval wrapper compatible with the LM Eval HuggingFace wrapper (in terms of default max_gen_tokens and truncation).
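For context, a rough sketch of the kind of behavior this aims for. The names below (DEFAULT_MAX_GEN_TOKENS, prepare_request, max_length) are hypothetical and not taken from the library's code; this only illustrates the defaulting and truncation idea, not the actual implementation.

# Hypothetical sketch only; names do not come from the library.
DEFAULT_MAX_GEN_TOKENS = 256

def prepare_request(prompt_tokens, max_gen_tokens=None, max_length=2048):
    # Mirror the HF wrapper: fall back to a default generation budget
    # when the caller does not specify one.
    if max_gen_tokens is None:
        max_gen_tokens = DEFAULT_MAX_GEN_TOKENS
    # Mirror the HF wrapper: left-truncate the prompt so that
    # prompt + generation fits in the model's context window.
    max_prompt_tokens = max_length - max_gen_tokens
    return prompt_tokens[-max_prompt_tokens:], max_gen_tokens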

Impacted Areas in Library

LM Eval wrapper

Which issue(s) does this pull-request fix?

Any special notes for your reviewer?


Checklist

  • Automated tests exist
  • Updated Package Requirements (if required, and with maintainers' approval)
  • Local unit tests performed
  • Documentation exists
  • Local pre-commit hooks performed
  • Desired commit message set as PR title and description set above
  • Link to relevant GitHub issue provided

@yoavkatz yoavkatz requested a review from a team as a code owner April 16, 2024 14:29

@jezekra1 (Member) left a comment


otherwise looks good

)
self._generation_execution_options = generation_execution_options or self.DEFAULT_GENERATION_EXECUTION_OPTIONS

@property


let's use @cached_property
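For reference, a minimal, self-contained illustration of the suggested swap. The class and attribute names here are made up and not the library's; only the @property -> @cached_property change reflects the suggestion.

from functools import cached_property

class Example:
    # Hypothetical names; illustration of the review suggestion only.
    @cached_property
    def expensive_value(self):
        # With @property this body would run on every access;
        # @cached_property runs it once and stores the result on the instance.
        print("computing...")
        return 42

e = Example()
e.expensive_value  # prints "computing..." and caches 42
e.expensive_value  # returns the cached 42 without recomputing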

yoavkatz and others added 2 commits April 17, 2024 08:33
Signed-off-by: Radek Ježek <pc.jezek@gmail.com>
@coveralls

coveralls commented Apr 17, 2024

Coverage Status

coverage: 95.63% (+0.006%) from 95.624%
when pulling 36d101d on yoavkatz:fix-LM-eval-model-to-be-compatible-with-HF-model
into 60b991e on IBM:main.

@jezekra1 jezekra1 merged commit 0788604 into IBM:main Apr 17, 2024
