ekaterinabutyugina/summarization_with_transformers


Fine-Tuning Transformers with On-Device GPU Resources

In this notebook, I'll walk you through fine-tuning a transformer model for a summarization task locally, specifically on an HP ZBook Fury powered by an NVIDIA RTX A5000 GPU.
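To give a feel for the workflow, here is a minimal sketch of the preprocessing step for T5-style summarization fine-tuning. The `summarize: ` prefix is the convention T5 models were trained with; the truncation budget and field layout below are illustrative assumptions, not taken from the notebook.

```python
# Minimal sketch: preparing (article, summary) pairs for a T5-style
# summarization model. Assumptions: a "summarize: " task prefix and a
# hypothetical character budget applied before tokenization.

PREFIX = "summarize: "
MAX_INPUT_CHARS = 2000  # illustrative truncation budget

def build_examples(articles, summaries):
    """Pair each truncated, prefixed article with its reference summary."""
    inputs = [PREFIX + article[:MAX_INPUT_CHARS] for article in articles]
    return list(zip(inputs, summaries))

pairs = build_examples(
    ["The transformer architecture relies on self-attention to weigh "
     "the relevance of each token in a sequence to every other token."],
    ["Transformers use self-attention to relate tokens in a sequence."],
)
print(pairs[0][0][:40])
```

In the actual notebook these pairs would then be tokenized and passed to a trainer; this sketch only shows the shape of the data going in.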

The Importance of Summarization in NLP:
Summarization in Natural Language Processing (NLP) is a crucial task due to its applications in research and business. It addresses the issue of information overload and enhances time efficiency by distilling long documents into their essence.

Transformer Models - The Backbone of Generative AI:
Transformer models are a type of generative AI: they can generate new content based on the input they receive. This architecture is behind many well-known AI models such as GPT (Generative Pre-trained Transformer) and BERT (Bidirectional Encoder Representations from Transformers), which, besides summarization, can generate text, answer questions, and more.

Why Local GPU?
Choosing between a local GPU and a cloud service depends on factors like performance needs, data privacy concerns, cost, internet stability, and the nature of your work. For many professionals and researchers, the advantages of a local GPU make it a preferred choice for demanding computational tasks.
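Before training on a local machine, it helps to confirm that the framework actually sees the GPU. A minimal sketch, assuming PyTorch is installed with CUDA support (the device name printed would depend on your hardware):

```python
import torch

# Pick the GPU if CUDA is available, otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {device}")

if device == "cuda":
    # On the workstation described above, this would report the RTX A5000.
    print(torch.cuda.get_device_name(0))
```

If this falls back to CPU unexpectedly, the usual culprits are a CPU-only PyTorch build or a driver/CUDA version mismatch.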

You can find the full code, explanations and examples in the notebook provided.

This demo was made in collaboration with Dipanjan Sarkar, our lead data scientist.
As a Z by HP Global Data Science Ambassador, I have been provided with HP products to facilitate our innovative work.
