Common CM interface and automation to reproduce MLPerf inference v3.1 #1053

Closed
gfursin opened this issue Jan 16, 2024 · 0 comments


gfursin commented Jan 16, 2024

Run MLPerf inference benchmarks via CM (CM workflows automatically adapt to the host OS and hardware):

cmr "generate-run-cmds inference" --implementation=nvidia --model=bert-99
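Before the `cmr` command above can be used, the CM framework and its script repository must be installed. A minimal setup sketch, assuming Python and pip are available (`cmind` is the PyPI package that provides the `cm` and `cmr` commands, and `mlcommons@ck` is the repository hosting the MLPerf automations):

```shell
# Install the CM automation framework (provides the `cm` and `cmr` commands)
python3 -m pip install cmind

# Pull the MLCommons repository containing the CM scripts used below
cm pull repo mlcommons@ck
```

After this, `cmr` resolves script tags such as "generate-run-cmds inference" against the pulled repository.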

Prepare an official submission for the Edge category:

cmr "generate-run-cmds inference _submission _full" --implementation=nvidia --model=bert-99

Prepare an official submission for the Datacenter category:

cmr "generate-run-cmds inference _submission _full" --implementation=nvidia --model=bert-99 --category=datacenter --division=closed
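The same command accepts further options to target a specific device and scenario. A sketch of such a variation; the `--device` and `--scenario` flag names are assumptions based on typical CM usage and should be checked against `cm run script --help` for this automation:

```shell
# Hypothetical variation: Datacenter submission on a CUDA device in the
# Offline scenario (--device and --scenario are assumed flag names,
# not verified against the v3.1 automation)
cmr "generate-run-cmds inference _submission _full" \
    --implementation=nvidia --model=bert-99 \
    --category=datacenter --division=closed \
    --device=cuda --scenario=Offline
```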

The MLCommons taskforce on automation and reproducibility is gradually adding CM support for all implementations and models:

| Model   | Reference implementation | Nvidia | Intel | Qualcomm |
|---------|--------------------------|--------|-------|----------|
| bert-99 |                          |        |       |          |
gfursin closed this as completed Jan 16, 2024