
Feasibility of using Llama2 LLM on AWS EC2 G4dn.8xLarge and Inferentia 2.8xlarge Instances #566

Open
AmlanSamanta opened this issue Jul 27, 2023 · 1 comment
Labels: integrations (issues related to integrating llama with other platforms/services)

Comments


AmlanSamanta commented Jul 27, 2023

Hi all,

Is it possible to run inference on the instance types mentioned in the title? We are facing many issues on Inf2 with the Falcon model.

Context:

We are facing issues while using Falcon/Falcoder on the Inf2.8xl machine. We were able to run the same experiment successfully on a G5.8xl instance, but the same code does not work on the Inf2 instance. We are aware that Inf2 has AWS Inferentia accelerators instead of NVIDIA GPUs, so we added the helper code required to use the instance's NeuronCores via the torch-neuronx library. The code changes and the corresponding error screenshots are provided below for your reference:

Code without any torch-neuronx usage - Generation code snippet:

generation_output = model.generate(
    input_ids=input_ids,
    attention_mask=attention_mask,
    generation_config=generation_config,
    return_dict_in_generate=True,
    output_scores=False,
    max_new_tokens=max_new_tokens,
    early_stopping=True,
)
# print("generation_output")
# print(generation_output)
s = generation_output.sequences[0]
output = tokenizer.decode(s)

[Screenshot: error output from the code above, without any torch-neuronx changes]
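(For context, the snippet above leaves `input_ids`, `attention_mask`, `generation_config`, and `max_new_tokens` undefined. Below is a minimal sketch of how they would typically be prepared with the Hugging Face transformers API; the model name and parameter values are illustrative assumptions, not the poster's actual setup.)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

# Hypothetical setup for the generation snippet above; the model name and
# parameter values are assumptions for illustration only.
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    trust_remote_code=True,  # Falcon required this before native transformers support
)

# Tokenize a prompt into the tensors model.generate expects.
inputs = tokenizer("Write a hello-world function in Python.", return_tensors="pt")
input_ids = inputs["input_ids"]
attention_mask = inputs["attention_mask"]

generation_config = GenerationConfig(do_sample=True, temperature=0.7)
max_new_tokens = 128
```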

Code using torch-neuronx - helper function code snippet:

import os

def generate_sample_inputs(tokenizer, sequence_length):
    dummy_input = "dummy"
    embeddings = tokenizer(dummy_input, max_length=sequence_length, padding="max_length", return_tensors="pt")
    return tuple(embeddings.values())

def compile_model_inf2(model, tokenizer, sequence_length, num_neuron_cores):
    # use only one neuron core
    os.environ["NEURON_RT_NUM_CORES"] = str(num_neuron_cores)
    import torch_neuronx  # imported after setting the env var, as in the original
    payload = generate_sample_inputs(tokenizer, sequence_length)
    return torch_neuronx.trace(model, payload)

model = compile_model_inf2(model, tokenizer, sequence_length=512, num_neuron_cores=1)

[Screenshot 1: error output with the torch-neuronx-related code]

[Screenshot 2: error output with the torch-neuronx-related code]
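(A note on usage: `torch_neuronx.trace` returns a compiled module whose input shapes are frozen at trace time, so it must be called with positional tensors matching the trace payload. A minimal, hypothetical sketch, assuming the `model` and `tokenizer` from the helper above:)

```python
# Illustrative only: the traced module expects positional inputs whose shapes
# match the trace-time payload, so the prompt is padded to sequence_length.
inputs = tokenizer(
    "Write a hello-world function in Python.",
    max_length=512,  # must equal the sequence_length used when tracing
    padding="max_length",
    truncation=True,
    return_tensors="pt",
)

# Same tensor ordering as generate_sample_inputs: (input_ids, attention_mask, ...)
outputs = model(*tuple(inputs.values()))
```

Because the traced graph captures a single fixed-shape forward pass, it does not expose `model.generate`; an autoregressive decoding loop has to run outside the traced module, which may be the root of the errors above. AWS's transformers-neuronx library exists specifically for autoregressive generation on Inf2.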

Could this GitHub issue address the specific problems mentioned above?
oobabooga/text-generation-webui#2260

My questions are:

  1. Can we run Llama 2 on G4dn.8xlarge and Inf2.8xlarge instances, or is it not supported yet? If not, which instance type should we try, considering cost-effectiveness?
  2. Is it feasible to run inference with Falcon on Inf2, or should we go for G4dn.8xlarge, given the many issues we are facing on Inf2?
@AmlanSamanta AmlanSamanta changed the title Feasibility of using Llama2 LLM on AWS EC2 G5.8xlarge, G4dn.8xLarge and Inferentia 2.8xlarge Instances Feasibility of using Llama2 LLM on AWS EC2 G4dn.8xLarge and Inferentia 2.8xlarge Instances Aug 2, 2023
mallapraveen commented:

@AmlanSamanta We have the same query. Did you get any answers? Any insights you can provide would be great.

macarran added the integrations label Sep 5, 2023