diff --git a/nlp/gpt_j/popxl/finetuning.ipynb b/nlp/gpt_j/popxl/finetuning.ipynb
index 5ebc39e26..9e40888a5 100644
--- a/nlp/gpt_j/popxl/finetuning.ipynb
+++ b/nlp/gpt_j/popxl/finetuning.ipynb
@@ -969,6 +969,22 @@
     "print(out)\n",
     "# [{'generated_text': ' contradiction'}]"
    ]
+  },
+  {
+   "attachments": {},
+   "cell_type": "markdown",
+   "id": "0fd3bd49",
+   "metadata": {},
+   "source": [
+    "## Conclusion\n",
+    "This notebook has demonstrated how straightforward it is to fine-tune GPT-J on the Graphcore IPU for a text entailment task. While not as powerful as larger models for free-text generation, medium-sized auto-regressive models such as GPT-J can still be fine-tuned successfully to handle a range of downstream NLP tasks such as question answering, sentiment analysis, and named entity recognition. In fact, for these kinds of tasks you don't need GPT-3-sized (175B parameter) models: GPT-J at 6B parameters has very good language understanding and is well suited to, and highly efficient for, most of these scenarios.\n",
+    "\n",
+    "In this example we fine-tuned GPT-J as a causal language model (CLM) for text entailment on the GLUE MNLI dataset.\n",
+    "\n",
+    "You can easily adapt this example to your own fine-tuning on a variety of downstream tasks, such as question answering, named entity recognition, sentiment analysis, and text classification in general, by preparing your data accordingly.\n",
+    "\n",
+    "Overall, this notebook showcases how effectively and efficiently GPT-J can be fine-tuned on the IPU. Next, find out how the fine-tuned model can be used for these downstream tasks in our Text generation on IPU using GPT-J – Inference notebook, GPTJ-generative-inference.ipynb."
+   ]
+  }
  }
 ],
 "metadata": {