
Is message ModelConf{} in gpt.proto editable? #31

Closed
Majokiki opened this issue Apr 16, 2021 · 2 comments

Comments

@Majokiki

Hi! I'm a newcomer to LightSeq.

I want to add some params to message ModelConf{}, and I noticed that the ModelConf in lightseq/docs/export_model.md and the one in lightseq/proto/transformer.proto differ:

lightseq/docs/export_model.md

message ModelConf {
  int32 head_num = 1;   // head number for multi-head attention
  int32 beam_size = 2;  // beam size of beam search
  int32 extra_decode_length = 3;  // extra decode length compared with source length
  float length_penalty = 4;  // length penalty of beam search
  int32 src_padding_id = 5;  // source padding id
  int32 trg_start_id = 6;    // target start id
}

lightseq/proto/transformer.proto

message ModelConf {
  int32 head_num = 1;   // head number for multi-head attention
  int32 beam_size = 2;  // beam size of beam search
  int32 extra_decode_length = 3;  // extra decode length compared with source length
  float length_penalty = 4;  // length penalty of beam search
  int32 src_padding_id = 5;  // source padding id
  int32 trg_start_id = 6;    // target start id
  float diverse_lambda = 7;   // diverse beam search lambda
  string sampling_method = 8; // choice of beam_search, topk, topp, topk_greedy
  float topp = 9;   // parameter for topp sampling
  int32 topk = 10;  // parameter for topk sampling
  int32 trg_end_id = 11;  // eos of target embedding
  bool is_post_ln = 12;   // Pre-LN or Post-LN
  bool no_scale_embedding = 13;  // whether to scale embedding by sqrt(emb_dim)
  bool use_gelu = 14;  // use gelu for activation otherwise relu
  // Whether it is a multilingual model.
  // If it is set to true, lang_emb and trg_vocab_mask should be non-empty.
  bool is_multilingual = 15;
}

So I wonder: is message ModelConf{} in gpt.proto editable? Can you provide a set of addable parameters?

Thanks a lot for your time😊

@Taka152
Contributor

Taka152 commented Apr 16, 2021

@Sueying Sorry about the confusing docs; the former needs to be updated. Files in the proto folder will always be the latest version. You can add parameters based on it, but keep the already existing parameters and their field numbers to maintain compatibility.
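For concreteness, here is a minimal sketch of what that advice means, based on the transformer.proto ModelConf quoted above. The name my_new_param and the number 16 are illustrative only, not part of LightSeq:

message ModelConf {
  // ... fields 1 through 15 stay exactly as in lightseq/proto/transformer.proto ...
  bool is_multilingual = 15;

  // A new parameter is appended with the next unused field number.
  // Never renumber or reuse the numbers of existing fields: protobuf
  // identifies fields by number on the wire, so changing them breaks
  // deserialization of already-exported models.
  float my_new_param = 16;  // illustrative name and number
}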

@Majokiki
Author

Thanks a lot! I suppose that adding a beam_size param to the ModelConf of gpt.proto is feasible. I'll give it a try😄
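The same pattern would apply to gpt.proto. A hedged sketch follows; the existing fields are elided because the actual ModelConf in lightseq/proto/gpt.proto should be checked in the repository, and 7 stands in for whatever the next unused field number turns out to be:

message ModelConf {
  // ... existing gpt.proto ModelConf fields, unchanged ...

  // Hypothetical addition: beam size for beam search, appended with
  // the next unused field number (7 here is only a placeholder).
  int32 beam_size = 7;
}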
