
How can I execute this on GPU? #7

Closed
chiragsanghvi10 opened this issue Feb 2, 2021 · 2 comments

chiragsanghvi10 commented Feb 2, 2021

Hi,

I am trying to run the code below on a GPU. Where should I specify the device, and what should the value be: device='gpu' or device='cuda'?

This is your old code:

from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL = "kiri-ai/t5-base-qa-summary-emotion"
TOKENIZER = "t5-base"

# Module-level cache so the model and tokenizer are loaded only once
model = None
tokenizer = None

def generate(input_text, model_name: str = None, tokenizer_name: str = None):
    # Refer to global variables
    global model
    global tokenizer
    # Setup
    # Initialise model
    if model is None:
        # Use the default model
        if model_name is None:
            model = T5ForConditionalGeneration.from_pretrained(MODEL)
        # Use the user defined model
        else:
            model = T5ForConditionalGeneration.from_pretrained(model_name)

    # Initialise tokenizer
    if tokenizer is None:
        # Use the default tokenizer
        if tokenizer_name is None:
            tokenizer = T5Tokenizer.from_pretrained(TOKENIZER)
        # Use the user defined tokenizer
        else:
            tokenizer = T5Tokenizer.from_pretrained(tokenizer_name)

    # Track whether the caller passed a batch of inputs
    is_list = isinstance(input_text, list)

    features = tokenizer(input_text, padding=True, return_tensors='pt')
    tokens = model.generate(input_ids=features['input_ids'],
                            attention_mask=features['attention_mask'], max_length=512)
    if is_list:
        return [tokenizer.decode(t, skip_special_tokens=True) for t in tokens]
    else:
        return tokenizer.decode(tokens[0], skip_special_tokens=True)

def process_item(item):
    return f"emotion: {item}"

def emotion(input_text, model_name: str = None, tokenizer_name: str = None):

    if isinstance(input_text, list):
        input_text = [process_item(item) for item in input_text]
    else:
        input_text = process_item(input_text)

    return generate(input_text, model_name=model_name,
                    tokenizer_name=tokenizer_name)

Best,
Chirag

ojasaar (Contributor) commented Feb 4, 2021

Hey,

You can call .cuda() on the model and on the inputs (input_ids and attention_mask), or you can use it directly with kiri:

from kiri.models import T5QASummaryEmotion

model = T5QASummaryEmotion(device="cuda")

model.emotion("I hope this works!")
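For the first option, here is a minimal sketch of the generate() path above with device placement added. It assumes a CUDA-capable GPU and the same checkpoint; the torch.cuda.is_available() guard is just a CPU fallback:

import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL = "kiri-ai/t5-base-qa-summary-emotion"
TOKENIZER = "t5-base"

# Pick the GPU if one is available, otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# Move the model to the chosen device once, at load time
model = T5ForConditionalGeneration.from_pretrained(MODEL).to(device)
tokenizer = T5Tokenizer.from_pretrained(TOKENIZER)

# The tokenized inputs must live on the same device as the model
features = tokenizer("emotion: I hope this works!",
                     padding=True, return_tensors="pt").to(device)
tokens = model.generate(input_ids=features["input_ids"],
                        attention_mask=features["attention_mask"],
                        max_length=512)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))

With kiri, passing device="cuda" handles this placement for you, so no manual .cuda() or .to() calls are needed.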

chiragsanghvi10 (Author) commented

Hey,

Sure, will try this.

Thank you
