
ChatBot: Use TensorRT to Run Inference on a TensorFlow Model

This sample demonstrates how to use TensorRT to accelerate TensorFlow inference on Jetson.




You can learn the following with this sample:

  1. Convert an LSTM TensorFlow model into UFF format.
  2. Run a UFF model on Jetson with Python (including a C++-to-Python TensorRT wrapper).

We also include the chatbot training code and the x86 chatbot source (pure Python) for reference.

Environment

Device

  1. Jetson TX2
  2. JetPack 3.2

Please flash your device with JetPack 3.2 first, then install the dependencies:
$ sudo apt-get install python-pip
$ sudo apt-get install swig
$ sudo pip install numpy

Host

  1. Ubuntu 16.04
  2. CUDA Toolkit 9
  3. TensorRT 3

Please install CUDA 9 and TensorRT 3 first, then:
$ sudo apt-get install python-pip
$ sudo pip install tensorflow
$ sudo apt-get install swig
$ sudo pip install numpy


Quick Try

We attach a pre-trained ChatBot model for reference.

Notes

  1. Don't blame him: he learned to talk from a small database (the Cornell Movie Dialogs Corpus).
  2. The model was trained on a single Tesla P100 GPU; training took around 15 days to finish.
  3. The training job finishes much faster on a DGX-1 server.
  4. The word-vector dimension is 4096. A word not in our vocabulary is replaced by '_'.
  5. The ChatBot is sensitive to punctuation. Please don't forget the symbol at the end of your sentence.
  6. The model is a GAN.
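The vocabulary and punctuation notes above can be sketched in plain Python. Everything here (the VOCAB set, the preprocess function) is illustrative only and not taken from this repository:

```python
# Hypothetical preprocessing sketch: out-of-vocabulary words become '_',
# and the sentence must end with a punctuation symbol, as the notes above
# describe. The real vocabulary has 4096-dimensional word vectors.
VOCAB = {"hi", "how", "are", "you", "hello", "?", ".", "!"}

def preprocess(sentence):
    tokens = sentence.lower().split()
    if not tokens or tokens[-1][-1] not in ".?!":
        raise ValueError("ChatBot expects a punctuation symbol at the end")
    # split a trailing punctuation mark into its own token
    last = tokens[-1]
    if len(last) > 1 and last[-1] in ".?!":
        tokens[-1:] = [last[:-1], last[-1]]
    # replace out-of-vocabulary words with '_'
    return [t if t in VOCAB else "_" for t in tokens]

print(preprocess("Hi, how are you?"))  # → ['_', 'how', 'are', 'you', '?']
```

Note that "Hi," maps to '_' because the comma makes the token out-of-vocabulary; the real parser likely handles punctuation more carefully.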

Execution steps

Host: convert TF model into UFF format

$ git clone https://github.com/AastaNV/ChatBot.git
$ cd $CHATBOT_ROOT
$ python src/tf_to_uff/tf_to_trt.py model/ID210_649999 model/ID210_649999.uff
$ scp model/ID210_649999.uff nvidia@[device IP]:$CHATBOT_ROOT/model/

Device: create TensorRT engine with the converted UFF file

$ git clone https://github.com/AastaNV/ChatBot.git
$ wget https://raw.githubusercontent.com/numpy/numpy/master/tools/swig/numpy.i -P $CHATBOT_ROOT/src/
$ cd $CHATBOT_ROOT
$ make
$ python chatbot.py model/ID210.pickle model/ID210_649999.uff 
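The make step builds the C++-to-Python TensorRT wrapper with SWIG, which is why numpy.i is fetched above. As a rough illustration of what such an interface file looks like (module and header names here are hypothetical, not the actual files in src/):

```swig
/* Hypothetical SWIG interface sketch for a C++ TensorRT engine
   exposed to Python; numpy.i supplies typemaps for passing
   NumPy arrays to and from the C++ side. */
%module trt_engine
%{
#include "trt_engine.h"
%}
%include "numpy.i"
%init %{
import_array();
%}
%include "trt_engine.h"
```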

Examples

User: Hi, ChatBot!
ChatBot: hello .

User: How are you today?
ChatBot: fine .

User: How about to go out for dinner?
ChatBot: okay .

User: What do you want to eat?
ChatBot: i don't know .

User: Steak?
ChatBot: well , i don't think so .

User: Sandwich?
ChatBot: what ?

User: The coffee shop on the corner.
ChatBot: yes .

User: Great!
ChatBot: i was just wondering .

User: Wonder what?
ChatBot: i don't know .

User: Okay. YOu are a artist, right?
ChatBot: oh , no . . .

User: Anyway, see you tonight.
ChatBot: what ?

User: See you...
ChatBot: i know .

User: Okay... bye bye.
ChatBot: goodnight .



Export TensorFlow Model to UFF Format

Note: please run this step on the host.

Prepare the following files from TensorFlow:

  • [myModel].data-00000-of-00001
  • [myModel].index
  • [myModel].meta
$ cd $CHATBOT_ROOT/src/tf_to_uff
$ python tf_to_trt.py [myModel] [myModel].uff


Run UFF Model on Jetson

Note: please run this step on the device.

Prepare the following files in the model directory:

  • [database].pickle
  • [myModel].uff
$ wget https://raw.githubusercontent.com/numpy/numpy/master/tools/swig/numpy.i -P $CHATBOT_ROOT/src/
$ cd $CHATBOT_ROOT/
$ make
$ python chatbot.py model/[database].pickle model/[myModel].uff 
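The [database].pickle file holds the vocabulary produced during training. Its exact layout is defined by the training scripts in src/; the sketch below assumes a simple word-to-index mapping purely to illustrate how such a pickle file is written and read:

```python
import pickle

# Illustrative layout only -- the real [database].pickle structure is
# whatever src/training produces, not necessarily this dictionary.
database = {
    "word_to_id": {"_": 0, "hello": 1, ".": 2},
    "id_to_word": {0: "_", 1: "hello", 2: "."},
}

# write the database to disk, as the training step would
with open("database_example.pickle", "wb") as f:
    pickle.dump(database, f)

# read it back, as chatbot.py would at startup
with open("database_example.pickle", "rb") as f:
    loaded = pickle.load(f)

print(loaded["id_to_word"][1])  # → hello
```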


Training

Note: please run this step on the host.

We also share our training code for user reference.



Prepare the following files from the Cornell Movie Dialogs Corpus:

  • movie_conversations.txt
  • movie_lines.txt
$ cd $CHATBOT_ROOT/src/training
$ python parser_for_cornell.py
$ python main.py 
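parser_for_cornell.py turns the two corpus files into training data. As background, the Cornell corpus separates fields with " +++$+++ "; in movie_lines.txt the fields are line ID, character ID, movie ID, character name, and utterance. The helper below is a minimal sketch of reading that format, not the repository's actual parser:

```python
# Field separator used by the Cornell Movie Dialogs Corpus files.
SEP = " +++$+++ "

def parse_line(raw):
    """Split one movie_lines.txt record into its named fields."""
    line_id, char_id, movie_id, name, text = raw.rstrip("\n").split(SEP)
    return {"id": line_id, "speaker": name, "text": text}

sample = "L1045 +++$+++ u0 +++$+++ m0 +++$+++ BIANCA +++$+++ They do not!"
print(parse_line(sample)["text"])  # → They do not!
```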


Run ChatBot on X86-based Linux Machine

Note: please run this step on the host.

We also provide source code to run TensorRT on an x86-based machine.
Prepare the following files:

  • [myModel].data-00000-of-00001
  • [myModel].index
  • [myModel].meta
  • [database].pickle
$ cd $CHATBOT_ROOT/src/tf_to_uff
$ python tf_to_trt.py [myModel] [myModel].uff
$ python chatbot.py [database].pickle [myModel].uff 

