Text Summarization Tool

This repo contains an end-to-end abstractive summarization project built around Hugging Face Transformers, the XSum dataset, and a Gradio demo app.

Project Layout

requirements.txt
mlplo/
  app.py            # Gradio UI for inference (single + batch mode)
  common.py         # Shared utilities
  compare.py        # Compare two models side-by-side
  data_cleaning.py  # Dataset preparation
  eval.py           # Standalone evaluation (ROUGE + BERTScore)
  report.py         # HTML evaluation report generator
  train.py          # Fine-tuning loop
tests/              # Pytest suite

Quick Start

  1. Create and activate a virtual environment.
  2. Install dependencies:
pip install -r requirements.txt
  3. Prepare a small debug dataset first:
python -m mlplo.data_cleaning --debug --output-dir mlplo/data/processed/xsum_debug
  4. Run a smoke-test training job:
python -m mlplo.train --dataset-dir mlplo/data/processed/xsum_debug --output-dir mlplo/checkpoints/bart-base-xsum-debug --num-train-epochs 1 --per-device-train-batch-size 2 --per-device-eval-batch-size 2 --gradient-accumulation-steps 2 --run-test-eval
  5. Evaluate the trained checkpoint:
python -m mlplo.eval --dataset-dir mlplo/data/processed/xsum_debug --model-path mlplo/checkpoints/bart-base-xsum-debug --include-bertscore
  6. Generate an HTML evaluation report:
python -m mlplo.report --checkpoint-dir mlplo/checkpoints/bart-base-xsum-debug
  7. Launch the Gradio app:
python -m mlplo.app --model-path mlplo/checkpoints/bart-base-xsum-debug
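
Once a checkpoint exists, it can also be used programmatically. The snippet below is a minimal sketch built on the standard Transformers pipeline API, not one of the repo's own modules; the checkpoint path matches the quick-start commands above.

from transformers import pipeline

# Load the checkpoint produced by the smoke-test training run above.
# Any local BART-style seq2seq checkpoint directory works here.
summarizer = pipeline("summarization", model="mlplo/checkpoints/bart-base-xsum-debug")

article = "Replace this placeholder with a real news article to summarize."

# XSum targets single-sentence summaries, so keep generations short.
result = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])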

Running Tests

To run the full test suite, including its edge-case coverage:

python -m pytest tests/ -v
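
To narrow the run while iterating, pytest's built-in -k filter selects tests by name (the expression below is only illustrative):

python -m pytest tests/ -k "summary" -v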

Colab Portability

The scripts are path-based and CLI-driven, so the same commands work in Google Colab after cloning the repository and installing the dependencies from requirements.txt. For a faster first pass, keep using --debug, or limit the split sizes with --train-samples, --validation-samples, and --test-samples.
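
As a concrete sketch, a first Colab cell could look like the following; the clone URL is inferred from the repository name, and the last line mirrors the quick start:

!git clone https://github.com/Aditya-0x/SummaryGenerator.git
%cd SummaryGenerator
!pip install -r requirements.txt
!python -m mlplo.data_cleaning --debug --output-dir mlplo/data/processed/xsum_debug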

Notes

  • Fine-tuning defaults to the facebook/bart-base checkpoint.
  • The Gradio app falls back to facebook/bart-large-xsum if no local checkpoint is supplied, which makes the UI useful before fine-tuning finishes.
  • Mixed precision is enabled automatically when CUDA is available.
  • BERTScore is excluded from the training loop (to keep it fast) and is opt-in for evaluation using the --include-bertscore flag.
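
For orientation, both metrics can be computed with Hugging Face's evaluate library. The snippet below is a minimal sketch of the metric calls and makes no claim about the exact implementation in mlplo/eval.py:

import evaluate

predictions = ["A generated summary."]
references = ["A reference summary."]

# ROUGE is cheap to compute, so it is always reported.
rouge = evaluate.load("rouge")
print(rouge.compute(predictions=predictions, references=references))

# BERTScore loads a large scoring model, which is why it is opt-in.
bertscore = evaluate.load("bertscore")
scores = bertscore.compute(predictions=predictions, references=references, lang="en")
print(sum(scores["f1"]) / len(scores["f1"]))  # mean BERTScore F1

Keeping BERTScore out of the inner training loop trades a richer semantic signal for much faster epochs; ROUGE alone is usually enough to track fine-tuning progress.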
