Generative_AI_Coursera

Generative AI with Large Language Models

Generative AI Use Case: Summarize Dialogue

I explored dialogue summarization with generative AI and examined how the input text shapes the model's output. Through prompt engineering, I compared zero-shot, one-shot, and few-shot inference, steering the model toward the summarization task and showing how prompt design can improve the generative output of Large Language Models.
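
Below is a minimal sketch of the zero-shot versus one-shot prompts used in this kind of experiment. The model checkpoint (google/flan-t5-base) and the example dialogues are assumptions for illustration, not the exact ones from the course lab.

```python
# Zero-shot vs. one-shot prompting for dialogue summarization with FLAN-T5.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"  # assumed checkpoint; the lab may use a different size
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dialogue = (
    "#Person1#: I'd like to book a table for two at 7 pm.\n"
    "#Person2#: Certainly. May I have your name, please?"
)

# Zero-shot: the instruction alone, with no worked example.
zero_shot_prompt = f"Summarize the following conversation.\n\n{dialogue}\n\nSummary:"

# One-shot: prepend a single solved example before the new dialogue.
solved_example = (
    "Summarize the following conversation.\n\n"
    "#Person1#: The meeting is moved to Friday.\n"
    "#Person2#: Thanks, I'll update my calendar.\n\n"
    "Summary: The meeting was rescheduled to Friday.\n\n"
)
one_shot_prompt = solved_example + zero_shot_prompt

for prompt in (zero_shot_prompt, one_shot_prompt):
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=60)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Few-shot inference extends the same idea by stacking several solved examples before the new dialogue, within the limits of the model's context window.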

Fine-Tune a Generative AI Model for Dialogue Summarization

In this project, I fine-tuned an existing encoder-decoder model from Hugging Face, FLAN-T5, to improve its dialogue summarization. FLAN-T5 is a high-quality instruction-tuned model, which makes it a strong starting point for text summarization.
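
A sketch of how the fine-tuning inputs can be prepared is shown below. The dataset (knkarthick/dialogsum) and its column names are assumptions; the course lab may load the data differently.

```python
# Load FLAN-T5 and tokenize dialogue/summary pairs for full fine-tuning.
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

dataset = load_dataset("knkarthick/dialogsum")  # assumed dialogue-summarization dataset

def tokenize_fn(batch):
    # Wrap each dialogue in an instruction-style prompt, then tokenize inputs and targets.
    prompts = [
        f"Summarize the following conversation.\n\n{d}\n\nSummary:" for d in batch["dialogue"]
    ]
    model_inputs = tokenizer(prompts, max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(
    tokenize_fn, batched=True, remove_columns=dataset["train"].column_names
)
```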

To achieve this, I ran a full fine-tuning pass, adjusting training parameters to improve dialogue summarization, evaluated the resulting model with ROUGE metrics, and iterated on it to raise performance.
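
The sketch below continues from the tokenized dataset above: a short Seq2SeqTrainer run followed by ROUGE scoring of generated summaries against the reference summaries. The hyperparameters and output path are illustrative, not the settings used in the course.

```python
# Full fine-tuning with Seq2SeqTrainer, then ROUGE evaluation of generated summaries.
import evaluate
from transformers import DataCollatorForSeq2Seq, Seq2SeqTrainer, Seq2SeqTrainingArguments

rouge = evaluate.load("rouge")

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-dialogue-full",  # hypothetical output path
    learning_rate=1e-5,
    num_train_epochs=1,
    per_device_train_batch_size=8,
    predict_with_generate=True,
    generation_max_length=128,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

# Generate summaries for the validation split and score them against the references.
predictions = trainer.predict(tokenized["validation"])
decoded_preds = tokenizer.batch_decode(predictions.predictions, skip_special_tokens=True)
decoded_refs = tokenizer.batch_decode(
    [[t for t in label if t != -100] for label in predictions.label_ids],
    skip_special_tokens=True,
)
print(rouge.compute(predictions=decoded_preds, references=decoded_refs))
```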

In addition to full fine-tuning, I explored Parameter Efficient Fine-Tuning (PEFT). Although the PEFT model initially scored slightly lower, its benefits outweighed that minor drop: it trains only a small fraction of the parameters, which streamlines the fine-tuning process and makes adapting the model for dialogue summarization far more efficient.
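
One common PEFT approach is LoRA, which freezes the base model and trains small low-rank adapter matrices instead. The sketch below shows how a LoRA adapter could be attached to FLAN-T5 with the peft library; the rank, alpha, and target modules are illustrative choices, not the course settings.

```python
# Attach a LoRA adapter to FLAN-T5; only the adapter weights are trained.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=32,                       # rank of the low-rank update matrices
    lora_alpha=32,              # scaling applied to the LoRA updates
    lora_dropout=0.05,
    target_modules=["q", "v"],  # attention projections inside the T5 blocks
)

peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # typically well under 1% of the base model
```

The resulting peft_model can be passed to the same Seq2SeqTrainer shown in the full fine-tuning sketch, trading a small amount of ROUGE score for a much smaller set of trainable parameters.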

Through this project, I demonstrated proficiency in advanced NLP techniques and model fine-tuning methodologies, and a commitment to continuous improvement in AI-driven text summarization.