
Batch #76

Open
hash2430 opened this issue Apr 9, 2019 · 0 comments

hash2430 commented Apr 9, 2019

When you give input to the model, do you feed it one npz file at a time? It seems that:

  1. `train()` is called once per epoch.
  2. `for full_txt, full_feat, spkr in train_enum:` iterates once per batch.
  3. `for txt, feat, spkr, start in batch_iter:` iterates once per npz file. `model.forward()` is called here, and the loss is summed over the batch (see the sketch after this list).

If the model is never called batch-wise, what is the point of having batches at all?
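To make the question concrete, here is a minimal sketch of the loop nesting I am describing. The `make_batch_iter` helper, the forward signature, and the optimizer wiring are my assumptions for illustration, not the actual code in this repo:

```python
import torch

# Minimal sketch of the nesting described above -- an illustration of my
# reading, not this repo's actual implementation.
def train(model, criterion, optimizer, train_enum, make_batch_iter):
    for full_txt, full_feat, spkr in train_enum:        # 2. once per batch
        # hypothetical helper that yields the batch one npz at a time
        batch_iter = make_batch_iter(full_txt, full_feat, spkr)
        total_loss = torch.zeros(1)
        for txt, feat, spkr, start in batch_iter:       # 3. once per npz
            output = model(txt, feat, spkr, start)      # model.forward() per item
            total_loss = total_loss + criterion(output, feat)
        optimizer.zero_grad()
        total_loss.backward()   # summed loss -> one gradient step per batch
        optimizer.step()
```

If that reading is right, the forward pass runs per sample while the gradient update still happens once per batch, which is what I am asking about.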