
Disk or RAM #2

Closed
Nadian-Ali opened this issue Mar 3, 2020 · 5 comments

Comments

@Nadian-Ali

Hi, I am trying to run your code. I have 32 GB of RAM and more than 100 GB of disk space. Loading the dictionary occupies about 3 GB of my RAM, and then the process gets stuck there. Do I need more RAM, e.g. 64 GB?

@LeeDoYup
Owner

LeeDoYup commented Mar 3, 2020

As I remember, VQA implementations require 150-200 GB of RAM (the PyTorch implementation as well).

This is because the implementations use pretrained features and load them all at once.
If you want to save memory, find or implement a data loader that:

  1. loads one image file at a time,
  2. detects objects in the image, and
  3. runs the VQA model.

The reason many implementations (including this repo) use pretrained features is to save the inference time spent on steps 1-2.
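The three steps above can be sketched as a lazy, generator-based loader that keeps only one image's features in memory at a time. This is a minimal illustration, not code from the repo: `load_image` and `detect_objects` are hypothetical stand-ins for the real image decoder and object detector (e.g. a Faster R-CNN backbone), and the 36 regions of 2048-d features mirror the common bottom-up-attention feature shape.

```python
import numpy as np

def load_image(path):
    # Stand-in for step 1: in practice, read and decode the image file
    # from disk here. We return a dummy array of a typical input size.
    return np.zeros((224, 224, 3), dtype=np.uint8)

def detect_objects(image):
    # Stand-in for step 2: in practice, run an object detector and
    # return per-region features. We fake 36 regions of 2048-d features,
    # the shape commonly used by VQA feature extractors.
    return np.random.rand(36, 2048).astype(np.float32)

def lazy_vqa_loader(image_paths):
    """Yield (path, features) one image at a time (steps 1-2), so the
    VQA model (step 3) can consume them without preloading the whole
    feature file into RAM."""
    for path in image_paths:
        image = load_image(path)
        features = detect_objects(image)
        yield path, features

# Only one image's features are resident in memory at any moment;
# this trades inference speed for a much smaller RAM footprint.
for path, feats in lazy_vqa_loader(["img_001.jpg", "img_002.jpg"]):
    assert feats.shape == (36, 2048)
```

The trade-off is exactly the one described above: extracting features on the fly is slow per query, while precomputed features are fast but must fit in RAM all at once.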

@LeeDoYup LeeDoYup closed this as completed Mar 3, 2020
@LeeDoYup
Owner

LeeDoYup commented Mar 3, 2020

@Nadian-Ali
Author

Nadian-Ali commented Mar 6, 2020 via email

@LeeDoYup
Owner

LeeDoYup commented Mar 6, 2020

@Nadian-Ali Okay. FYI, I achieve VQA accuracy similar to the PyTorch implementation.
I implemented this model for my research, but I have recently started using PyTorch,
so I will not update this implementation further (maintenance only).

Also, you can refer to FAIR's awesome open-source library for VQA: Pythia

@Nadian-Ali
Author

Nadian-Ali commented Mar 7, 2020 via email
