Add model config used in pr6 benchmarks
motin committed Feb 15, 2021
1 parent f3ff1d2 commit 7d6346d
Showing 1 changed file with 11 additions and 0 deletions.
11 changes: 11 additions & 0 deletions wasm/test_page/bergamot.html
@@ -76,11 +76,22 @@
"/vocab.esen.spm",
"/vocab.esen.spm"
],
"beam-size": 1,
"mini-batch": 32,
"maxi-batch": 100,
"maxi-batch-sort": "src",

@jerinphilip (Contributor) commented on Feb 15, 2021

If you're using the API currently in place, mini-batch, maxi-batch, and maxi-batch-sort are unused. If benchmarks are being run and reported, they may not be exactly comparable.

--max-input-sentence-tokens and max-input-tokens are present at the moment. max-input-tokens will probably get renamed to mini-batch-words, to align with its counterpart in marian-decoder.

I'm sorry for this mess; I expect to clean it up eventually. Just letting you know of a possible issue.
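To illustrate the change jerinphilip describes, the unused batching keys could be swapped for the options currently in place. The values below are placeholders for illustration only, not recommendations (recommended values are not given in this thread):

```javascript
// Hypothetical sketch: replace the unused mini-batch/maxi-batch keys
// with the options jerinphilip says are currently present.
// Placeholder values only -- not recommendations.
const batchingOptions = {
  "max-input-sentence-tokens": 128, // placeholder; caps tokens per sentence
  "max-input-tokens": 1024,         // placeholder; may be renamed mini-batch-words
};
```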

@motin (Author, Contributor) commented on Feb 16, 2021

@jerinphilip Thanks for pointing this out. What values for max-input-sentence-tokens and max-input-tokens would you recommend?

@abhi-agg (Contributor) commented on Feb 16, 2021

@jerinphilip Could you share the complete config that you recommend here?

"workspace": 128,
"skip-cost": true,
"cpu-threads": 1,
"shortlist": [
`/lex.${lang}.s2t`,
50,
50,
]
// TODO: Enable when wormhole is enabled
// "int8shift": true,
// TODO: Enable when loading of binary models is supported and we use model.intgemm.alphas.bin
// "int8shiftAlphaAll": true,
};

// Instantiate the TranslationModel

0 comments on commit 7d6346d
