article_summarizer

A simple LLM-powered article analyzer that shows how prompt design can lead to different answers.

🎯 Project Overview

This project showcases my ability to:

  1. Develop LLM-based applications - Building a structured, modular tool using the OpenAI API
  2. Debug and optimize prompts - Demonstrating iterative improvement from poor prompts to optimized ones
  3. Design software architecture - Creating clean, maintainable code with proper separation of concerns

Note: While I used an LLM to assist with code generation and prompt examples, the project structure, architectural decisions, and overall approach were designed by me. The sample article used for testing is from a research opportunity document I received from Professor Xiao.

Features

The tool performs three main tasks on any article:

  1. Summarization - Generates concise summaries with main points
  2. Key Points Extraction - Identifies the most important information
  3. Sentiment Analysis - Analyzes tone, sentiment, and subjectivity

Each task shows 3 prompt versions (poor → better → optimized) to demonstrate prompt engineering.
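As an illustration of that progression, the three versions for the summarization task might look like this (the prompt wording here is hypothetical, not copied from the task modules):

```python
# Illustrative only: the actual prompts live in tasks/summarizer.py
# and may differ from these examples.
ARTICLE = "..."  # stands in for the article text

# Version 1 -- poor: vague, no constraints on length or format
poor_prompt = f"Summarize this: {ARTICLE}"

# Version 2 -- better: adds a role and a length constraint
better_prompt = (
    "You are a careful editor. Summarize the following article "
    f"in 3-4 sentences:\n\n{ARTICLE}"
)

# Version 3 -- optimized: also constrains format and grounds the output
optimized_prompt = (
    "You are a careful editor. Summarize the following article in "
    "3-4 sentences, then list its 3 main points as bullets. Use only "
    f"facts stated in the article.\n\n{ARTICLE}"
)
```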

Project Structure

article_summarizer/
├── main.py                 # Entry point
├── llm_client.py           # Shared LLM API client
├── tasks/                  # Task modules
│   ├── summarizer.py       # Summarization task
│   ├── key_points.py       # Key points extraction task
│   └── sentiment.py        # Sentiment analysis task
├── sample_article.txt      # Sample article for testing
├── requirements.txt        # Dependencies
└── README.md              # This file
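As a rough sketch, the shared client in llm_client.py might look like the following (a hypothetical implementation assuming the official OpenAI Python SDK; the real file may differ in names and defaults):

```python
# Hypothetical sketch of llm_client.py -- the actual implementation may differ.
class LLMClient:
    """Thin wrapper around the OpenAI chat completions API."""

    def __init__(self, model="gpt-3.5-turbo"):
        self.model = model
        self._client = None  # created lazily on first use

    def complete(self, prompt, temperature=0.3):
        """Send a single user prompt and return the model's reply text."""
        if self._client is None:
            from openai import OpenAI  # reads OPENAI_API_KEY from the env
            self._client = OpenAI()
        response = self._client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
        )
        return response.choices[0].message.content
```

Keeping a single client like this means all three task modules share one place to change the model or API settings.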

Setup

1. Install Dependencies

pip install -r requirements.txt

2. Set OpenAI API Key

export OPENAI_API_KEY='your-api-key-here'

3. Run the Analyzer

python main.py

Output Example

The tool shows you:

  • Version 1: Poor prompt with issues explained
  • Version 2: Better prompt with improvements
  • Version 3: Optimized prompt with best results

Final output includes the optimized results for all three tasks.

Customization

The project can be customized to your preferences in several ways:

Analyze Your Own Article

Modify the sample_article variable in main.py or read from a file:

Adjust Temperature

In each task file, modify the temperature parameter:

  • 0.0-0.3: More consistent, factual
  • 0.4-0.7: Balanced
  • 0.8-1.0: More creative, varied
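One way to wire these ranges in is a small per-task mapping, sketched below (the task names and values are illustrative, not taken from the task files):

```python
def pick_temperature(task):
    """Illustrative mapping from task type to temperature, following the
    ranges above; the actual task files may use different values."""
    return {
        "sentiment": 0.2,    # factual, consistent
        "key_points": 0.3,   # factual, consistent
        "summary": 0.5,      # balanced
        "creative": 0.9,     # varied
    }.get(task, 0.5)
```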

Change Model (I use the older gpt-3.5 model here because, I believe, it shows more variation as the prompt changes)

In llm_client.py:

client = LLMClient(model="gpt-4")  # Use GPT-4 instead

Learning Points

Prompt Optimization Techniques Demonstrated:

  1. Specific Constraints - Define length, format, structure
  2. Temperature Tuning - Lower for facts, higher for creativity
  3. Output Format Specification - Request JSON, bullets, sections
  4. Context Definition - Define ambiguous terms like "key points"
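To make technique 3 concrete, a format-specifying prompt and a strict parser for the reply might look like this (the prompt wording and the parse_sentiment helper are hypothetical, not taken from the code):

```python
import json

# Asking for an exact JSON shape makes the reply machine-readable.
format_prompt = (
    "Analyze the sentiment of the article below. "
    "Respond ONLY with JSON in this exact shape:\n"
    '{"sentiment": "positive|negative|neutral", '
    '"subjectivity": 0.0, "evidence": ["..."]}\n\n'
    "Article: ..."
)

def parse_sentiment(reply):
    """Parse the model's JSON reply; raises on malformed output."""
    data = json.loads(reply)
    if data["sentiment"] not in {"positive", "negative", "neutral"}:
        raise ValueError(f"unexpected sentiment: {data['sentiment']}")
    return data
```

The payoff is that malformed model output fails loudly at the parse step instead of silently corrupting downstream results.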
