Let me restate my understanding of your code; can you confirm this is exactly what you mean?
In this example I omit the random-sampling step and just look at the windowing.
Suppose we have the sequence a, b, c, d, e, f, g.
On the LSTM's 1st step, the input is a, b, c and the output is o_a, o_b, o_c; you concatenate (o_a, o_b, o_c) to compute the loss, and all the ground-truth labels (label_a, label_b, label_c) are used in that loss.
On the LSTM's 2nd step, the input is b, c, d and the output is o_b, o_c, o_d; you concatenate (o_b, o_c, o_d) to compute the loss, and all the ground-truth labels (label_b, label_c, label_d) are used in that loss.
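To make sure we are talking about the same scheme, here is a minimal pure-Python sketch of the stride-1 sliding windows I am describing (the window length of 3 and the frame names are just the values from my example above). It also counts how often each frame is revisited, since the windows overlap:

```python
from collections import Counter

frames = ["a", "b", "c", "d", "e", "f", "g"]
window = 3  # window length assumed from the example above

# stride-1 sliding windows over the sequence
windows = [frames[i:i + window] for i in range(len(frames) - window + 1)]
# windows[0] == ["a", "b", "c"], windows[1] == ["b", "c", "d"], ...

# every position inside a window contributes to the loss, so
# overlapping windows revisit the same frame several times
coverage = Counter(f for w in windows for f in w)
# interior frames (c, d, e) appear in 3 windows each; edge frames in fewer
```

If this matches your training loop, then each interior frame's label is used in the loss of three different windows, which is exactly what my second question below is about.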
So my questions are:
- Is the example I showed exactly what you do in your experiment?
- In this example, to train on the whole dataset, overlapping sequences are fed to the LSTM repeatedly so that the loss is computed at every position of the sequence. Is that correct?
- Because there are many 0-label frames (image frames with no action on the face), do you train on every sample in the train set, or do you pick only the frames whose AU label != 0 as input to the LSTM?