```python
# Returns a tensor of shape [batch_size, sequence_len, hidden_size].
multihead_attention(
    query_antecedent=layer_input,   # [batch_size, sequence_len, hidden_size]
    memory_antecedent=layer_input,  # [batch_size, sequence_len, hidden_size]
    bias=attention_mask,            # [batch_size, sequence_len, sequence_len]
    total_key_depth=768,
    total_value_depth=768,
    output_depth=768,
    num_heads=12,
    dropout_rate=0.1,
    max_relative_position=10)
```
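This call appears to follow the tensor2tensor-style `multihead_attention` signature, where `max_relative_position` enables Shaw-style relative position embeddings: the query-key distance is clipped to a fixed window and used to index a small embedding table, so the model learns positions relative to each token rather than absolute ones. Below is a minimal NumPy sketch of that clipped indexing; it is not this repository's code, and the helper name `relative_positions_matrix` is hypothetical.

```python
import numpy as np

def relative_positions_matrix(length, max_relative_position):
    """Index matrix for relative position embeddings (hypothetical helper).

    Entry [i, j] is the relative distance j - i, clipped to
    [-max_relative_position, max_relative_position] and shifted so it
    indexes an embedding table with 2 * max_relative_position + 1 rows.
    """
    positions = np.arange(length)
    distance = positions[None, :] - positions[:, None]  # distance[i, j] = j - i
    clipped = np.clip(distance, -max_relative_position, max_relative_position)
    return clipped + max_relative_position  # shift to non-negative table indices

# Example: a 5-token sequence with a window of 2.
print(relative_positions_matrix(5, 2))
# [[2 3 4 4 4]
#  [1 2 3 4 4]
#  [0 1 2 3 4]
#  [0 0 1 2 3]
#  [0 0 0 1 2]]
```

With `max_relative_position=10`, as in the call above, any pair of tokens more than 10 positions apart shares the same relative-position embedding.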
About: ready to use out of the box.