
Small ATLAS #7

Closed
jamesoneill12 opened this issue Feb 25, 2023 · 8 comments

@jamesoneill12

Are there any plans to release a smaller version of ATLAS?

Although 11B is relatively small when compared to the LLMs in the paper, it's still pretty large for ML practitioners with limited resources.

Thanks! James.

@mlomeli1
Contributor

mlomeli1 commented Mar 7, 2023

Hi @jamesoneill12, the 11B-parameter reader corresponds to the Atlas-xxl size; if you select the Atlas-base size instead, its reader is only 220M parameters. Maybe that will do?
Alternatively, if you need to reduce the memory requirements at inference time further, you can use FAISS compressed indexes. Please have a look at the blog post to see how to run these for the NQ task: few-shot learning with retrieval augmented language models
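For context, a compressed FAISS index stores short product-quantized codes instead of raw float vectors, which is where the memory saving comes from. A minimal standalone sketch; the dimension, passage count, and "IVF1024,PQ16" index spec below are illustrative assumptions, not Atlas's exact configuration:

```python
# Minimal FAISS sketch: building an IVFPQ-compressed index
# (analogous in spirit to --faiss_index_type ivfpq --faiss_code_size 16).
import numpy as np
import faiss

dim, n = 768, 100_000
xb = np.random.rand(n, dim).astype("float32")   # placeholder embeddings

# Inverted file + product quantization: 16 bytes per vector instead of
# dim * 4 bytes for a flat float32 index.
index = faiss.index_factory(dim, "IVF1024,PQ16")
index.train(xb)          # IVF/PQ indexes need a training pass before add()
index.add(xb)

index.nprobe = 32        # number of inverted lists scanned per query
scores, ids = index.search(xb[:5], k=10)
print(ids.shape)         # (5, 10)
```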

@mlomeli1 mlomeli1 closed this as completed Mar 7, 2023
@minhluan1590

minhluan1590 commented Mar 8, 2023

While building the FAISS index with the recommended settings --faiss_index_type ivfpq --faiss_code_size 16, my machine with 8 × 80 GB A100s runs out of CUDA memory (after converting about 3 million passages). How can I save memory during this step? At this stage, I believe the T5 model is not even loaded yet.
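A generic mitigation for this kind of OOM (not specific to Atlas's embedding code) is to encode passages in batches and move each batch's embeddings off the GPU immediately, so CUDA memory only ever holds one batch. A hedged sketch, where `encoder` and `passages` are placeholders:

```python
# Sketch: batch the passage encoding and offload embeddings to CPU right
# away. `encoder` and `passages` are placeholders, not Atlas's actual objects.
import torch

@torch.no_grad()
def embed_passages_to_cpu(encoder, passages, batch_size=512):
    chunks = []
    for i in range(0, len(passages), batch_size):
        emb = encoder(passages[i:i + batch_size])      # (B, dim) on GPU
        chunks.append(emb.to("cpu", torch.float16))    # offload immediately
    return torch.cat(chunks)                           # full matrix in CPU RAM
```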

@mlomeli1
Contributor

mlomeli1 commented Mar 8, 2023

@minhluan1590 which model size are you using? I'm assuming it's xl or xxl, so you might need more than 8 GPUs to load all the embeddings. You could either use a smaller model or try --faiss_index_type pq --faiss_code_size 64.
Btw, we can keep commenting here, but for future issues please open a new issue rather than commenting on a closed one. Thanks!
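To see why the pq suggestion helps, here is a back-of-envelope memory estimate. The passage count and embedding dimension below are illustrative assumptions, not Atlas's exact numbers:

```python
# Back-of-envelope estimate: flat fp16 embeddings vs. PQ codes.
# 30M passages and 768-dim embeddings are illustrative assumptions.
def flat_gib(n_passages, dim, bytes_per_value=2):   # fp16 storage
    return n_passages * dim * bytes_per_value / 2**30

def pq_gib(n_passages, code_size):                  # code_size bytes per vector
    return n_passages * code_size / 2**30

n = 30_000_000
print(f"flat fp16, dim=768  : {flat_gib(n, 768):5.1f} GiB")  # ~42.9 GiB
print(f"pq, code_size=64    : {pq_gib(n, 64):5.1f} GiB")     # ~1.8 GiB
print(f"ivfpq, code_size=16 : {pq_gib(n, 16):5.1f} GiB")     # ~0.4 GiB
```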

@prasad4fun

Hi @mlomeli1, could you specify the minimum requirements to run Atlas? I have a 12 GB GPU; would that be sufficient for fine-tuning?

@minhluan1590

Well, it might be too small; the Atlas model requires a lot of GPU memory. I am converting the model from Sharded Data Parallel (SDP) to Fully Sharded Data Parallel (FSDP) to work with smaller GPU memory. It is running now, but I'm still not sure whether I can load the model and optimizer parameters that were saved under SDP and keep using them with FSDP.
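For reference, a minimal sketch of wrapping a module in PyTorch's FSDP; the SDP mentioned above is presumably fairscale's ShardedDataParallel, so treat the API below as illustrative rather than Atlas's actual training code, and note that sharded checkpoints from one scheme generally cannot be loaded directly into another (the usual route is to consolidate to a full, unsharded state dict first). `ReaderModel` is a placeholder:

```python
# Illustrative PyTorch FSDP sketch, not Atlas's actual training code.
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group("nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = ReaderModel().cuda()   # placeholder module
model = FSDP(model)            # shards params, grads, and optimizer state

# Create the optimizer AFTER wrapping so it sees the sharded parameters.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
```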

@prasad4fun

@minhluan1590 Did you manage to run Atlas on a small GPU? Also, in an earlier comment you mentioned an 8 × 80 GB A100 machine; are you really using 640 GB of GPU memory in total?
Have you tried using a smaller model and the FAISS PQ technique to reduce the memory requirement? If so, could you please share how much it came down to?

@minhluan1590

It's the fine-tuning process that forces us to use so much memory: during fine-tuning, the full index still needs to be computed, and it is only converted into FAISS later. I'm still trying to optimize the memory use of this model and will share what I find when I'm finished.

@mlomeli1
Contributor

mlomeli1 commented Apr 6, 2023

Hi @minhluan1590 and @prasad4fun, thanks for all the discussion.

As I said, different model sizes have different memory requirements, so it would be good to know which model size you intend to use. As a reference, I've run the base model on a V100 machine with 8 × 40 GB GPUs and xxl on 8 A100s with 80 GB each, using a flat index. It is true that we need to load the full embeddings for fine-tuning, so if you happen to have multiple nodes with, say, 4 or 8 GPUs each, every process loads fewer embeddings and the per-GPU memory requirement drops (see the diagram below).
In the Atlas blog post, I've added a table of memory requirements for different PQ compression sizes; hope that helps.

[diagram: atlas_distributed, showing passage embeddings sharded across GPUs and nodes]
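The idea in the diagram can be sketched as follows: each rank holds only 1/world_size of the passage embeddings, scores queries against its own shard, and the per-shard top-k results are merged across ranks. The names and the merge below are illustrative assumptions, not Atlas's actual code:

```python
# Illustrative sketch of sharded retrieval: per-GPU memory for embeddings is
# roughly total / world_size. Not Atlas's actual implementation.
import torch
import torch.distributed as dist

def sharded_topk_scores(query_emb, local_emb, k=10):
    world = dist.get_world_size()
    local_top = (query_emb @ local_emb.T).topk(k, dim=1).values  # (Q, k) per shard
    gathered = [torch.empty_like(local_top) for _ in range(world)]
    dist.all_gather(gathered, local_top)                         # collect all shards
    # Global top-k over the concatenated per-shard candidates (mapping scores
    # back to global passage ids is omitted for brevity).
    return torch.cat(gathered, dim=1).topk(k, dim=1).values
```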
