VisualBert Doesn't return attentions #1036

@EXUPLOOOOSION

Description

🐛 Bug

The VisualBert model ignores the output_attentions config.

To Reproduce

Steps to reproduce the behavior:

In a Python script:

  1. Get a configuration with output_attentions = True
  2. Initialize and build any VisualBert model (any that uses VisualBertBase)
  3. Run an inference (forward) and observe in the output that the attention tuple exists but is empty

Specifying output_attentions=True in the forward function's parameters doesn't help either.
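A hedged reproduction sketch of the steps above. The import path and class names come from the issue; the VisualBERTBase constructor keyword, the forward keyword arguments, the visual feature dimension, and the position of the attention tuple in the output are all assumptions that may differ across mmf versions:

```python
import torch
from transformers import BertConfig
from mmf.models.visual_bert import VisualBERTBase  # assumed import path

# 1. Get a configuration with output_attentions = True
config = BertConfig.from_pretrained("bert-base-uncased")
config.output_attentions = True

# 2. Initialize and build a model that uses VisualBERTBase
#    (the output_attentions kwarg is an assumption)
model = VisualBERTBase(config, output_attentions=True)
model.eval()

# 3. Run an inference; the keyword names and the 2048-dim visual
#    features are assumptions for illustration
input_ids = torch.randint(0, config.vocab_size, (1, 8))
visual_embeddings = torch.randn(1, 4, 2048)
with torch.no_grad():
    outputs = model(input_ids=input_ids, visual_embeddings=visual_embeddings)

# The attention tuple is present in the output but empty
print(outputs[-1])  # () -- expected: one attention tensor per layer
```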

Expected behavior

Of course, the attention tuple shouldn't be empty; it should contain one attention tensor per layer.

Environment

PyTorch version: 1.9.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A

OS: Ubuntu 18.04.5 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
CMake version: version 3.12.0
Libc version: glibc-2.26

Python version: 3.7 (64-bit runtime)
Python platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
Is CUDA available: False
CUDA runtime version: 11.0.221
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.0.4
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.0.4
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:
[pip3] numpy==1.19.5
[pip3] pytorch-lightning==1.4.0
[pip3] torch==1.9.0
[pip3] torchmetrics==0.4.1
[pip3] torchsummary==1.5.1
[pip3] torchtext==0.5.0
[pip3] torchvision==0.6.0
[conda] Could not collect

Additional context

Reason

VisualBERT uses VisualBERTForClassification, which uses VisualBERTBase, which uses BertEncoderJit.
All of these set their output_attentions attribute correctly.
However, BertEncoderJit's forward function doesn't read BertEncoderJit's output_attentions attribute; instead it only uses its output_attentions parameter. This, paired with VisualBERTBase not passing its own output_attentions as an argument, means that no model using VisualBERTBase outputs any attentions.

The same applies to output_hidden_states.
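A minimal self-contained sketch of the pattern described above (toy classes, not the actual mmf code): the encoder stores output_attentions as an attribute, its forward only consults the parameter, and the caller never passes one, so the default False always wins.

```python
import torch
from torch import nn

class EncoderJitLike(nn.Module):
    def __init__(self, output_attentions=True):
        super().__init__()
        self.output_attentions = output_attentions  # attribute is set correctly...

    def forward(self, hidden_states, output_attentions=False):
        all_attentions = ()
        if output_attentions:  # ...but only the parameter is consulted here
            all_attentions = all_attentions + (torch.ones(1),)  # stand-in for a per-layer attention map
        return hidden_states, all_attentions

class VisualBERTBaseLike(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = EncoderJitLike(output_attentions=True)

    def forward(self, hidden_states):
        # output_attentions is never forwarded, so the encoder's default False wins
        return self.encoder(hidden_states)

_, attentions = VisualBERTBaseLike()(torch.zeros(1, 4))
print(attentions)  # () -- empty, despite output_attentions=True at init
```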

Fix

Either:

  1. Make VisualBERTBase pass output_attentions as a parameter to its encoder's forward, or
  2. Make BertEncoderJit's forward function use both its parameter and BertEncoderJit's attribute to decide whether to output attentions (if either one is true, it outputs them).

I personally implemented the second option in my local build of mmf and it works.
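A sketch of fix 2, applied to the toy encoder above; in mmf the same one-line change would go in BertEncoderJit.forward, whose real signature is longer.

```python
import torch
from torch import nn

class FixedEncoderJitLike(nn.Module):
    def __init__(self, output_attentions=True):
        super().__init__()
        self.output_attentions = output_attentions

    def forward(self, hidden_states, output_attentions=False):
        # Fix 2: honor the init-time attribute as well as the call-site parameter
        output_attentions = output_attentions or self.output_attentions
        all_attentions = ()
        if output_attentions:
            all_attentions = all_attentions + (torch.ones(1),)  # stand-in attention map
        return hidden_states, all_attentions

_, attentions = FixedEncoderJitLike()(torch.zeros(1, 4))
print(attentions)  # (tensor([1.]),) -- no longer empty
```

Fix 1 would instead be a change at the call site: VisualBERTBase.forward would pass output_attentions=self.output_attentions (and likewise output_hidden_states) when calling self.encoder.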
