V2Iformer

A new attention-based architecture for tasks such as video frame interpolation and prediction, multi-image summarization, and 3D-image flattening. It is based on the Uformer, generalized to consume multiple images and produce a single synthesized image.

Diagram

The architecture of V2Iformer is as follows:

[architecture diagram]

where the Cross-Attention Transformer block takes the following form:

[Cross-Attention Transformer block diagram]
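The block's exact design is defined in the repository code; as a rough illustration of the idea only, here is a minimal sketch (not the repository's API) of cross-attention in which the tokens of the image being synthesized attend over tokens from all input frames, built on `torch.nn.MultiheadAttention`. All names and shapes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NaiveCrossFrameAttention(nn.Module):
    """Illustrative only: query tokens of one image attend over tokens from many frames."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_tokens, frame_tokens):
        # query_tokens: (batch, n_query_tokens, dim)      -- tokens of the image to synthesize
        # frame_tokens: (batch, n_frames * n_tokens, dim) -- tokens from all input frames
        out, _ = self.attn(query_tokens, frame_tokens, frame_tokens)
        return self.norm(query_tokens + out)              # residual connection + norm

# toy shapes: 2 samples, 64 query tokens, 5 frames of 64 tokens each, embedding dim 32
q = torch.randn(2, 64, 32)
kv = torch.randn(2, 5 * 64, 32)
print(NaiveCrossFrameAttention(32)(q, kv).shape)          # torch.Size([2, 64, 32])
```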

This constitutes the generator of the Generative Adversarial Network (GAN) used for training. A shallow FastTimeSformer, an accelerated attention-based video classifier, serves as the discriminator. The GAN that integrates the two is included in this repository, implemented in PyTorch Lightning.
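The actual GAN module ships with the repository, so the outline below is not its code; it is only a hedged sketch of how a generator/discriminator pair like this is commonly wired up as a PyTorch Lightning module with manual optimization. The class and variable names are placeholders, and the loss formulation is a standard BCE GAN objective assumed for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import pytorch_lightning as pl

class ToyGAN(pl.LightningModule):
    """Outline only: generator maps a frame stack to one image, discriminator scores realism."""
    def __init__(self, generator: nn.Module, discriminator: nn.Module, lr: float = 2e-4):
        super().__init__()
        self.generator, self.discriminator = generator, discriminator
        self.lr = lr
        self.automatic_optimization = False           # two optimizers -> manual optimization

    def training_step(self, batch, batch_idx):
        frames, target = batch                        # (b, c, f, h, w) and (b, c, 1, h, w)
        opt_g, opt_d = self.optimizers()

        # generator step: try to make the discriminator score the synthesized frame as real
        fake = self.generator(frames)
        fake_score = self.discriminator(fake)
        g_loss = F.binary_cross_entropy_with_logits(fake_score, torch.ones_like(fake_score))
        opt_g.zero_grad()
        self.manual_backward(g_loss)
        opt_g.step()

        # discriminator step: score real targets high and generated frames low
        real_score = self.discriminator(target)
        fake_score = self.discriminator(fake.detach())
        d_loss = 0.5 * (
            F.binary_cross_entropy_with_logits(real_score, torch.ones_like(real_score))
            + F.binary_cross_entropy_with_logits(fake_score, torch.zeros_like(fake_score))
        )
        opt_d.zero_grad()
        self.manual_backward(d_loss)
        opt_d.step()

    def configure_optimizers(self):
        return (torch.optim.Adam(self.generator.parameters(), lr=self.lr),
                torch.optim.Adam(self.discriminator.parameters(), lr=self.lr))
```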

How to use

! pip install git+https://github.com/adam-mehdi/V2Iformer.git

import torch
from pytorch_lightning import Trainer
from v2iformer.gan import GAN

# dummy dataset
b, c, f, h, w = 4, 3, 5, 32, 32
dataloader = [(torch.randn(b, c, f, h, w), torch.randn(b, c, 1, h, w)) for i in range(100)]

model = GAN(c, f, h, w)

trainer = Trainer()
trainer.fit(model, dataloader)
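The dummy dataloader above is simply a list of (video, target-frame) tensor pairs. For real data you would typically wrap the same layout in a `torch.utils.data.DataLoader`; a minimal sketch, assuming the same (channels, frames, height, width) convention as the example:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# 100 samples with the same layout as the dummy data: 3-channel, 5-frame, 32x32 clips
videos  = torch.randn(100, 3, 5, 32, 32)   # inputs:  (n, c, f, h, w)
targets = torch.randn(100, 3, 1, 32, 32)   # outputs: (n, c, 1, h, w), the frame to synthesize
dataloader = DataLoader(TensorDataset(videos, targets), batch_size=4, shuffle=True)
```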

Citations

@misc{bertasius2021spacetime,
      title={Is Space-Time Attention All You Need for Video Understanding?}, 
      author={Gedas Bertasius and Heng Wang and Lorenzo Torresani},
      year={2021},
      eprint={2102.05095},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{wang2021uformer,
      title={Uformer: A General U-Shaped Transformer for Image Restoration}, 
      author={Zhendong Wang and Xiaodong Cun and Jianmin Bao and Jianzhuang Liu},
      year={2021},
      eprint={2106.03106},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Portions of the code were built on the following repositories:
