Presenter: Rachel Hu, Wen-ming Ye, Laurens ten Cate
Implementing natural language processing (NLP) models just got simpler and faster. We will quickly introduce BERT (Bidirectional Encoder Representations from Transformers), a state-of-the-art (SOTA) NLP model, and demonstrate how it can be used for various NLP tasks. In this chalk talk, learn how to implement NLP models using Apache MXNet and the GluonNLP Toolkit to quickly prototype products, validate new ideas, and learn SOTA NLP techniques. We will also show how you can use GluonNLP and SageMaker to fine-tune BERT for a text classification use case and deploy the trained model. Come join us to train your NLP model onsite!
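As a preview of what the lab covers, here is a minimal sketch of BERT text classification in GluonNLP: load a pre-trained BERT base model, attach a classification head, and run a sentence through it. This assumes GluonNLP 0.x (the version current at the time of this lab); the 2-class setup and the sample sentence are illustrative assumptions, not necessarily the lab's actual task.

```python
import mxnet as mx
import gluonnlp as nlp

ctx = mx.cpu()

# Load a pre-trained BERT base model and its vocabulary.
bert_base, vocab = nlp.model.get_model(
    'bert_12_768_12',
    dataset_name='book_corpus_wiki_en_uncased',
    pretrained=True,
    ctx=ctx,
    use_pooler=True,       # the pooled [CLS] output feeds the classifier
    use_decoder=False,     # no masked-LM head needed for classification
    use_classifier=False)  # no next-sentence head needed either

# Wrap BERT with a classification head (2 classes assumed for illustration).
model = nlp.model.BERTClassifier(bert_base, num_classes=2, dropout=0.1)
model.classifier.initialize(init=mx.init.Normal(0.02), ctx=ctx)

# Turn a sample sentence into BERT inputs: token ids, length, segment ids.
tokenizer = nlp.data.BERTTokenizer(vocab, lower=True)
transform = nlp.data.BERTSentenceTransform(
    tokenizer, max_seq_length=128, pair=False)
token_ids, valid_length, segment_ids = transform(('GluonNLP makes BERT easy!',))

# Forward pass; fine-tuning would wrap this in a standard Gluon training loop.
out = model(mx.nd.array([token_ids], ctx=ctx),
            mx.nd.array([segment_ids], ctx=ctx),
            mx.nd.array([valid_length], ctx=ctx).astype('float32'))
print(out)  # unnormalized class scores, shape (1, 2)
```

The lab walks through the full version of this workflow, including fine-tuning on a labeled dataset and deploying the trained model with SageMaker.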
Start the lab here in the [tutorial folder](https://github.com/awshlabs/Jan2020-NLPLab/tree/master/tutorial).