
Use past argument for GPT2 to speed up decoding #63

Merged 8 commits into master on Nov 19, 2019

Conversation

chiragjn (Contributor)

As suggested in #61, I made changes to use the past argument for GPT2. I have not, however, made the equivalent changes for XLNet, as mems behaves differently and is under-documented at the moment.
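A conceptual sketch of why reusing past helps (my illustration, not code from this PR): without caching, each decoding step re-runs attention over the full prefix, so total work grows quadratically with sequence length; with cached key/value states, each step processes only the newest token. A toy cost model:

```python
# Toy cost model for autoregressive decoding (illustrative only; the real
# savings come from GPT-2 reusing cached attention key/value states via `past`).

def decode_without_cache(tokens):
    # Each step re-encodes the entire prefix, so cost is 1 + 2 + ... + n.
    return sum(t + 1 for t in range(len(tokens)))

def decode_with_cache(tokens):
    # Each step processes only the newest token, so cost is n.
    return len(tokens)

seq = list(range(20))
print(decode_without_cache(seq))  # 210 (quadratic in sequence length)
print(decode_with_cache(seq))     # 20 (linear in sequence length)
```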

I did some benchmarking on a 2-core CPU server with the following code:

import time
import numpy as np
import nlpaug.augmenter.sentence as nas

def bench(text, return_past, model_path='gpt2', num_runs=50):
    aug = nas.ContextualWordEmbsForSentenceAug(model_path=model_path, top_k=100, device='cpu')
    aug.model.return_past = return_past
    # Warm-up run: ignore the first forward pass
    aug.insert(text)
    times = []
    for _ in range(num_runs):
        start = time.time()
        aug.insert(text)
        ptime = time.time() - start
        print(ptime)
        times.append(ptime)
    print('AVG:', np.mean(times))
    return np.mean(times)

bench(text='Hello darkness my old friend, I have come to', return_past=True, model_path='/opt/models/transformers/gpt2') 

bench(text='Hello darkness my old friend, I have come to', return_past=False, model_path='/opt/models/transformers/gpt2')

Since there is randomness involved in decoding, I also made a separate branch on top of these changes that uses greedy decoding instead:
chiragjn@5a5a848

| decoding     | return_past | average (s)  |
|--------------|-------------|--------------|
| top_k random | TRUE        | 1.346138215  |
| top_k random | FALSE       | 3.696006279  |
| greedy       | TRUE        | 0.2176607323 |
| greedy       | FALSE       | 0.3007589674 |
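For reference, a quick check of the speedups implied by the averages above (numbers copied from the benchmark results):

```python
# Speedup ratios from the reported average times (seconds).
avg = {
    ("top_k random", True): 1.346138215,
    ("top_k random", False): 3.696006279,
    ("greedy", True): 0.2176607323,
    ("greedy", False): 0.3007589674,
}

for mode in ("top_k random", "greedy"):
    speedup = avg[(mode, False)] / avg[(mode, True)]
    # top_k random: ~2.75x faster; greedy: ~1.38x faster with return_past=True
    print(f"{mode}: {speedup:.2f}x faster with return_past=True")
```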

All numbers here: https://docs.google.com/spreadsheets/d/178VnHeBpHWz5lKHLbBuYTRPqxiWXa7-rpCje2wQH_i8/edit?usp=sharing

I leave the API design decisions up to you. Let me know how I can improve this pull request.

@codecov-io

codecov-io commented Nov 17, 2019

Codecov Report

Merging #63 into master will decrease coverage by 36.35%.
The diff coverage is n/a.

Impacted file tree graph

@@             Coverage Diff             @@
##           master      #63       +/-   ##
===========================================
- Coverage   46.58%   10.22%   -36.36%     
===========================================
  Files         124       48       -76     
  Lines        4317     1809     -2508     
===========================================
- Hits         2011      185     -1826     
+ Misses       2306     1624      -682
Impacted Files Coverage Δ
test/augmenter/char/test_random_char.py 2.19% <0%> (-97.81%) ⬇️
test/flow/test_sometimes.py 5% <0%> (-95%) ⬇️
test/augmenter/word/test_random_word.py 5.55% <0%> (-94.45%) ⬇️
test/augmenter/char/test_char.py 6.45% <0%> (-93.55%) ⬇️
test/model/char/test_keyboard_model.py 6.52% <0%> (-93.48%) ⬇️
test/augmenter/char/test_ocr.py 6.66% <0%> (-93.34%) ⬇️
test/augmenter/char/test_keyboard.py 12.5% <0%> (-87.5%) ⬇️
test/augmenter/word/test_split.py 13.33% <0%> (-86.67%) ⬇️
nlpaug/base_augmenter.py 3.12% <0%> (-68.11%) ⬇️
test/augmenter/word/test_word.py 5.88% <0%> (-64.71%) ⬇️
... and 109 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 9863867...0762f78.

@makcedward makcedward merged commit 264371c into makcedward:master Nov 19, 2019