This repository has been archived by the owner on May 4, 2022. It is now read-only.

What kind of VRAM requirements are necessary for this fork of deepfakes #3

Closed
Irastris opened this issue Jan 29, 2018 · 1 comment

Labels
question Further information is requested

Comments

@Irastris
I ask because, unlike with deepfakes/faceswap or FakeApp, my GPU does not seem to have enough VRAM for the training process; it exits with the following error (uploaded to hastebin due to its sheer size): https://hastebin.com/raw/metojewuqa

It is a GTX 1060 6GB for reference, with 6GB of VRAM (as I'm sure you could tell). Is this not enough for this project? Has the model been increased to 128x128 perhaps? If so, is there a way to bump it back down to 64x64, or perhaps a middle ground like 96x96?

Thank you.

@dfaker dfaker self-assigned this Jan 29, 2018
@dfaker dfaker added the question Further information is requested label Jan 29, 2018
dfaker commented Jan 29, 2018

Yes, this fork has a good deal larger VRAM requirement, because of the larger 128x128 output size, the extra masking channel being generated in the output, and the extra residual blocks in the decoders. There's no one-shot method to get it down to what you'd need, but removing some of the decoder res_blocks and reducing ENCODER_DIM are both options.

Bringing the output size back down is possible, but it would require changes in a number of places across data generation, training and merging.
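As a rough illustration of why reducing ENCODER_DIM helps, here is a minimal sketch of the parameter count of a faceswap-style dense bottleneck (Flatten → Dense(ENCODER_DIM) → Dense → Reshape). The layer sizes below are illustrative assumptions, not this fork's exact values, and `dense_bottleneck_params` is a hypothetical helper, not code from the repository:

```python
def dense_bottleneck_params(encoder_dim,
                            flat_size=8 * 8 * 1024,      # assumed flattened encoder output
                            reshape_size=4 * 4 * 1024):  # assumed pre-reshape dense size
    """Weights + biases for the two dense layers in a faceswap-style
    bottleneck: Flatten -> Dense(encoder_dim) -> Dense(reshape_size)."""
    d1 = flat_size * encoder_dim + encoder_dim       # Dense(encoder_dim)
    d2 = encoder_dim * reshape_size + reshape_size   # Dense(reshape_size)
    return d1 + d2

# Halving ENCODER_DIM roughly halves the bottleneck's parameters
# (and the memory needed for their weights, gradients and optimizer state):
full = dense_bottleneck_params(1024)
half = dense_bottleneck_params(512)
print(full, half)  # half is just over 50% of full
```

The dense bottleneck usually dominates this family of models, which is why dfaker lists reducing ENCODER_DIM alongside dropping decoder res_blocks as the practical levers.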

@dfaker dfaker closed this as completed Jan 29, 2018