A simple LLM-powered article analyzer demonstrating how prompt design leads to different answers.
This project showcases my ability to:
- Develop LLM-based applications - Building a structured, modular tool using the OpenAI API
- Debug and optimize prompts - Demonstrating iterative improvement from poor prompts to optimized ones
- Design software architecture - Creating clean, maintainable code with proper separation of concerns
Note: While I used an LLM to assist with code generation and prompt examples, the project structure, architectural decisions, and overall approach were designed by me. The sample article used for testing is from a research opportunity document I received from Professor Xiao.
The tool performs three main tasks on any article:
- Summarization - Generates concise summaries with main points
- Key Points Extraction - Identifies the most important information
- Sentiment Analysis - Analyzes tone, sentiment, and subjectivity
Each task shows 3 prompt versions (poor → better → optimized) to demonstrate prompt engineering.
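As a hypothetical illustration of that progression (the actual prompts live in `tasks/summarizer.py`; these exact strings are my own sketch), the three versions for summarization might look like this:

```python
ARTICLE = "..."  # placeholder for the article text

# Version 1 (poor): vague verb, no constraints on length or format
PROMPT_V1 = f"Summarize this:\n{ARTICLE}"

# Version 2 (better): names the task and adds a length constraint
PROMPT_V2 = f"Summarize the following article in 3 sentences:\n{ARTICLE}"

# Version 3 (optimized): adds a role, a length constraint, and an output structure
PROMPT_V3 = (
    "You are a precise technical editor. Summarize the article below in "
    "3 sentences, then list its 3 main points as bullets.\n\n"
    f"Article:\n{ARTICLE}"
)
```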
```
article_summarizer/
├── main.py               # Entry point
├── llm_client.py         # Shared LLM API client
├── tasks/                # Task modules
│   ├── summarizer.py     # Summarization task
│   ├── key_points.py     # Key points extraction task
│   └── sentiment.py      # Sentiment analysis task
├── sample_article.txt    # Sample article for testing
├── requirements.txt      # Dependencies
└── README.md             # This file
```
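For context, `llm_client.py` could be a minimal wrapper along these lines. This is a sketch, not the actual file: the `LLMClient` name matches the usage shown later in this README, but the `complete` method name and defaults are my assumptions.

```python
class LLMClient:
    """Hypothetical minimal wrapper around the OpenAI chat API."""

    def __init__(self, model="gpt-3.5-turbo"):
        self.model = model

    def complete(self, prompt, temperature=0.3):
        # Requires OPENAI_API_KEY in the environment; import kept local
        # so the class can be constructed without the package installed.
        from openai import OpenAI
        client = OpenAI()
        resp = client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
        )
        return resp.choices[0].message.content
```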
```bash
pip install -r requirements.txt
export OPENAI_API_KEY=''
python main.py
```

The tool shows you:
- Version 1: Poor prompt with issues explained
- Version 2: Better prompt with improvements
- Version 3: Optimized prompt with best results
Final output includes the optimized results for all three tasks.
The project can be adapted to personal preference in several ways.
Modify the `sample_article` variable in `main.py` or read from a file:
In each task file, modify the `temperature` parameter:
- 0.0-0.3: More consistent, factual
- 0.4-0.7: Balanced
- 0.8-1.0: More creative, varied
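As a sketch of where that parameter goes, the request each task sends might be assembled like this (the `build_request` helper is hypothetical; the task files pass `temperature` directly to the API call):

```python
def build_request(prompt, temperature=0.3):
    """Assemble keyword arguments for client.chat.completions.create()."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
        # 0.0-0.3 factual, 0.4-0.7 balanced, 0.8-1.0 creative
        "temperature": temperature,
    }
```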
Change the Model (I use the older gpt-3.5 model here because, I believe, prompt changes produce more noticeable differences in its output)
In llm_client.py:
```python
client = LLMClient(model="gpt-4")  # Use GPT-4 instead
```

Prompt engineering techniques demonstrated:
- Specific Constraints - Define length, format, structure
- Temperature Tuning - Lower for facts, higher for creativity
- Output Format Specification - Request JSON, bullets, sections
- Context Definition - Define ambiguous terms like "key points"