Set the dtype correctly for vision GPT model (#694)
* Set the dtype correctly

* Add changelog
Sean Naren committed Jul 26, 2021
Parent: ab801aa · Commit: e22b8d0
Showing 2 changed files with 4 additions and 1 deletion.
CHANGELOG.md — 3 additions, 0 deletions
```diff
@@ -27,6 +27,9 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - Removed momentum updating from val step and add separate val queue ([#631](https://github.com/PyTorchLightning/lightning-bolts/pull/631))
 
 
+- Fixed FP16 support with vision GPT model ([#694](https://github.com/PyTorchLightning/lightning-bolts/pull/694))
+
+
 ## [0.3.4] - 2021-06-17
 
 ### Changed
```
pl_bolts/models/vision/image_gpt/gpt2.py — 1 addition, 1 deletion
```diff
@@ -94,7 +94,7 @@ def forward(self, x, classify=False):
         h = self.token_embeddings(x.long())
 
         # prepend sos token
-        sos = torch.ones(1, batch, self.hparams.embed_dim, device=x.device) * self.sos
+        sos = torch.ones(1, batch, self.hparams.embed_dim, device=x.device, dtype=x.dtype) * self.sos
         h = torch.cat([sos, h[:-1, :, :]], axis=0)
 
         # add positional embeddings
```
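
For context on the one-line change above, here is a minimal, self-contained sketch (not part of the commit; the names, shapes, and SOS value are illustrative) of the dtype mismatch it addresses under FP16: `torch.ones` defaults to float32, so without `dtype=x.dtype` the prepended SOS row can end up in a different precision than the half-precision activations it is concatenated with.

```python
# Minimal sketch (not the library's code): names, shapes, and the SOS value
# are illustrative. It shows why passing dtype=x.dtype matters once the
# model runs in half precision.
import torch

seq_len, batch, embed_dim = 16, 4, 32
sos_value = 0.5  # stand-in for the model's learned SOS parameter

# Under FP16 training, inputs and activations are half precision.
x = torch.randn(seq_len, batch, dtype=torch.float16)
h = torch.randn(seq_len, batch, embed_dim, dtype=torch.float16)

# Without the dtype argument, this tensor defaults to float32.
sos_default = torch.ones(1, batch, embed_dim, device=x.device) * sos_value
print(sos_default.dtype)  # torch.float32 -- does not match h

# With dtype=x.dtype, the SOS row stays in the same precision as x and h.
sos = torch.ones(1, batch, embed_dim, device=x.device, dtype=x.dtype) * sos_value
h = torch.cat([sos, h[:-1, :, :]], dim=0)
print(h.dtype)  # torch.float16
```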
