Hello,

I found a similar issue (#5), but when I try to extract the embedding according to your instructions, the shape of `transformer_output` doesn't seem to match the input. `transformer_output` has shape `[micro-batch-size, max seq length, hidden size]`, but how does this output correspond to the input? I checked the `bert` model, and its output should look like `output1, output2`, where `output2` is the embedding that matches the input. In your code, however, `output2` is `None`. How can we get the embedding corresponding to the inputs?

Thanks
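To make the shape question concrete, here is a minimal sketch (pure Python, with made-up sizes and a random stand-in tensor; the pooling strategies shown are common conventions, not necessarily what this repo does) of how a per-input embedding like the `output2` from `bert` can be derived from a `[micro-batch-size, max seg length, hidden size]` output:

```python
# Hypothetical sizes for illustration only.
micro_batch, max_seq_len, hidden = 2, 4, 3

# Stand-in for transformer_output with shape
# [micro-batch-size, max seq length, hidden size].
transformer_output = [[[float(b + s + h) for h in range(hidden)]
                       for s in range(max_seq_len)]
                      for b in range(micro_batch)]

# Row b still corresponds to input sequence b, so the per-token output does
# match the input. To get a single embedding per input, pool over the
# sequence dimension. Two common choices:
cls_embeddings = [seq[0] for seq in transformer_output]   # first-token pooling
mean_embeddings = [[sum(tok[h] for tok in seq) / max_seq_len
                    for h in range(hidden)]
                   for seq in transformer_output]

print(len(cls_embeddings), len(cls_embeddings[0]))    # 2 3
print(len(mean_embeddings), len(mean_embeddings[0]))  # 2 3
```

Either way the result has shape `[micro-batch-size, hidden size]`, i.e. one embedding per input, which is what `output2` provides in the standard BERT interface.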
Thanks for bringing this up. I cannot find a variable named `output2` in `transformer.py`, so I am not sure which variable you are referring to. Please specify.