GPT2-Pytorch with Text-Generator

Better Language Models and Their Implications

Our model, called GPT-2 (a successor to GPT), was trained simply to predict the next word in 40GB of Internet text. Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper. (from the OpenAI blog)

This repository is a simple PyTorch implementation of GPT-2 for text generation, with compressed code.

Quick Start

  1. Download the GPT-2 pre-trained PyTorch model that huggingface/pytorch-pretrained-BERT has already converted. (Thanks for sharing! It solved my problem of transferring the TensorFlow checkpoint (ckpt) file to a PyTorch model!)
$ git clone && cd gpt-2-Pytorch
# download huggingface's pytorch model 
$ curl --output gpt2-pytorch_model.bin
# setup requirements
$ pip install -r requirements.txt
  2. Now you can run it like this:
  • Text from the book 1984 by George Orwell:
$ python --text "It was a bright cold day in April, and the clocks were striking thirteen. Winston Smith, his chin nuzzled into his breast in an effort to escape the vile wind, slipped quickly through the glass doors of Victory Mansions, though not quickly enough to prevent a swirl of gritty dust from entering along with him."
  3. You can also quick-start in Google Colab.


  • --text : sentence to begin generation with.
  • --quiet : suppress extraneous output such as the "================" separators.
  • --nsamples : number of samples drawn per batch when the multinomial function is used.
  • --unconditional : if true, generate unconditionally.
  • --batch_size : batch size.
  • --length : length of the generated sequence (must be less than the context size).
  • --temperature : the thermodynamic temperature of the distribution (default 0.7).
  • --top_k : keep only the k largest elements of the logits tensor along the vocabulary dimension (default 40).

See a more detailed explanation of the temperature and top_k options here.
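As a rough illustration of how these two options interact, here is a minimal sketch of temperature scaling followed by top-k filtering and multinomial sampling. `sample_next_token` is a hypothetical helper written for this explanation, not a function from this repository.

```python
import torch

def sample_next_token(logits, temperature=0.7, top_k=40):
    # Hypothetical helper: lower temperature sharpens the distribution,
    # higher temperature flattens it.
    logits = logits / temperature
    if top_k > 0:
        # Keep only the top_k largest logits; mask the rest to -inf
        # so softmax assigns them zero probability.
        kth = torch.topk(logits, top_k).values[..., -1, None]
        logits = torch.where(logits < kth,
                             torch.full_like(logits, float("-inf")),
                             logits)
    probs = torch.softmax(logits, dim=-1)
    # Draw one token id per batch row from the filtered distribution.
    return torch.multinomial(probs, num_samples=1)
```

With top_k=1 this reduces to greedy decoding; with a large top_k and high temperature the samples become more diverse but less coherent.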


Dependency

  • Pytorch 0.41+
  • regex 2017.4.5



License

  • OpenAI/GPT2 follows the MIT license; huggingface/pytorch-pretrained-BERT is under the Apache license.
  • I follow the MIT license, as does the original GPT2 repository.


Thanks to Jeff Wu (@WuTheFWasThat) and Thomas Wolf (@thomwolf) for allowing me to refer to their code.
