Debug Python dependency hell together with an LLM – one step at a time.
PLLM‑Interactive is a lightweight, drop‑in tool that spins up a chat‑style agent for fixing import / version conflicts in arbitrary Python scripts or Gists.
You run one shell script, point it at a snippet, and the agent:
- guesses the required Python version & packages,
- generates a Docker image, tries to build/run it,
- shows a two‑line diagnosis from the LLM,
- lets you tweak the plan live (`py==3.7`, `pillow==6.2`, `del getopt`, …).
No more deciphering hundred‑line error logs, no more editing Dockerfiles by hand.
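Under the hood, each round follows the same build, diagnose, tweak cycle. The sketch below is a conceptual Python outline only; the helper names (`guess_plan`, `build_and_run`, `summarize_error`) are hypothetical placeholders, not the tool's actual API, which lives in `src/`.

```python
# Conceptual sketch of one interactive repair session.
# The helpers below are placeholders, not PLLM-Interactive internals.

def guess_plan(snippet_path: str) -> dict:
    """Placeholder: ask the LLM for a Python version and package pins."""
    return {"python": "3.8", "packages": {}}

def build_and_run(plan: dict) -> tuple:
    """Placeholder: write a Dockerfile from the plan, build it, run the snippet."""
    return False, "ModuleNotFoundError: No module named 'PIL'"

def summarize_error(log: str) -> str:
    """Placeholder: ask the LLM for the two-line SUMMARY / NEXT diagnosis."""
    return "SUMMARY: ...\nNEXT: ..."

def fix_loop(snippet_path: str, max_tries: int = 10) -> bool:
    plan = guess_plan(snippet_path)
    for _ in range(max_tries):
        ok, log = build_and_run(plan)
        if ok:
            return True                      # snippet ran cleanly inside Docker
        print(summarize_error(log))
        cmd = input("> ").strip()            # e.g. "py==3.7", "pillow==6.2", "del getopt"
        if cmd in ("q", "quit"):
            return False
        # ...apply the command to the plan and retry...
    return False
```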
By design, this system downloads and tests real Python packages inside Docker.
This may result in high disk usage (several GB), especially for longer or more complex code snippets.
To free space, you may:
- delete the `.venv/` folder if unused
- run `docker system prune -a`
- remove downloaded Ollama models (see below)
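If you prefer to script that cleanup, here is a minimal Python sketch (it assumes the `docker` CLI is on your PATH and is not shipped with the tool):

```python
# Minimal cleanup sketch: report Docker disk usage, then prune on confirmation.
# Assumes the `docker` CLI is installed; not part of PLLM-Interactive itself.
import subprocess

subprocess.run(["docker", "system", "df"], check=True)   # show space used by images/containers
if input("Prune unused Docker data? [y/N] ").lower() == "y":
    subprocess.run(["docker", "system", "prune", "-a", "-f"], check=True)
```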
```bash
# 1. clone & enter
git clone https://github.com/<your-name>/pllm-interactive.git
cd pllm-interactive

# 2. run the bootstrap script (creates venv, installs deps, pulls a model)
./start.sh
```

You'll be prompted for:
- Snippet path – e.g. `local-test-gists/5780127/snippet.py`
- Ollama model – default is `gemma3:4b-it-qat`
- Run mode
  - 1 = interactive mode (recommended)
  - 2 = unattended batch mode
- Optional range & loop params (±Python versions, max tries)
| Command | Effect |
|---|---|
| ↩ `<Enter>` | retry with current plan |
| `py==3.8` | force Python version |
| `pillow==6.0` | pin / change module version |
| `del getopt` | remove module |
| `q` / `quit` | abort the program |
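To make the command grammar concrete, here is an illustrative Python sketch of how such commands could be applied to a plan of package pins. It mirrors the table above but is not the tool's actual parser.

```python
# Illustrative interpreter for the tiny command language above
# (a sketch, not PLLM-Interactive's real parser).
def apply_command(plan: dict, cmd: str) -> dict:
    """plan = {"python": "3.8", "packages": {"pillow": "6.0", ...}}"""
    cmd = cmd.strip()
    if cmd == "":                              # plain <Enter>: retry with the current plan
        return plan
    if cmd in ("q", "quit"):
        raise SystemExit("aborted by user")
    if cmd.startswith("del "):                 # del getopt: drop a module entirely
        plan["packages"].pop(cmd.split(maxsplit=1)[1], None)
    elif cmd.startswith("py=="):               # py==3.8: force the Python version
        plan["python"] = cmd.split("==", 1)[1]
    elif "==" in cmd:                          # pillow==6.0: pin / change a module version
        name, version = cmd.split("==", 1)
        plan["packages"][name] = version
    return plan

# Example:
plan = {"python": "3.8", "packages": {"pillow": "6.2", "getopt": "1.2.2"}}
plan = apply_command(plan, "del getopt")
plan = apply_command(plan, "py==3.7")
print(plan)   # {'python': '3.7', 'packages': {'pillow': '6.2'}}
```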
Every error summary is auto‑explained like:
```
🧠 SUMMARY: getopt==1.2.2 does not exist on PyPI for Python 3.8.
🧠 NEXT: remove getopt or drop the explicit version pin.
```
Logs are saved next to the snippet in `output_data_interactive.yml`.
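Because the log is plain YAML, you can also inspect it programmatically. A minimal sketch, assuming PyYAML is installed and the example snippet path from the quick start (no particular key layout is assumed):

```python
# Minimal sketch: load the interactive run log with PyYAML and inspect it.
# Assumes `pip install pyyaml` and the example snippet path shown above.
import yaml

with open("local-test-gists/5780127/output_data_interactive.yml") as f:
    log = yaml.safe_load(f)

print(type(log))   # dict or list, depending on the run
print(log)         # full record of attempts for this snippet
```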
```bash
pip install -r requirements.txt
cd src
python test_executor.py \
    -f "../local-test-gists/5780127/snippet.py" \
    -m "gemma3:4b-it-qat" \
    -r 1 -l 10 -i
```

All CLI flags (`-f`, `-m`, `-r`, `-l`, `-i`, `-t`, …) are unchanged from the original tool.
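If you want to run several snippets unattended, a thin wrapper around the same CLI is enough. The sketch below simply shells out to `test_executor.py` with the flags shown above; the gist list is a placeholder, and omitting `-i` for unattended runs is an assumption based on the flag's description.

```python
# Sketch: run test_executor.py over several local gists in sequence.
# The gist IDs are placeholders; the flags match the CLI example above.
import subprocess

GISTS = ["5780127"]          # add more local-test-gists IDs as needed
MODEL = "gemma3:4b-it-qat"

for gist in GISTS:
    subprocess.run(
        ["python", "test_executor.py",
         "-f", f"../local-test-gists/{gist}/snippet.py",
         "-m", MODEL,
         "-r", "1", "-l", "10"],
        cwd="src",
        check=False,         # keep going even if one gist fails
    )
```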
```bash
ollama pull phi3:medium
ollama pull llama3
```

Then specify them at the prompt or use the `-m` flag in CLI mode.
For best results, pick chat-capable models with code-reasoning ability. (Quantized models like `4b-it-qat` are smaller but may miss edge cases.)
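To avoid a run failing because the chosen model has not been pulled yet, you can check the local model list first. A small sketch, assuming the `ollama` CLI is on your PATH (not part of the tool):

```python
# Sketch: verify a model is available locally before starting a run.
# Assumes the `ollama` CLI is installed; not part of PLLM-Interactive.
import subprocess

def ensure_model(name: str) -> None:
    listed = subprocess.run(["ollama", "list"], capture_output=True, text=True).stdout
    if name not in listed:
        subprocess.run(["ollama", "pull", name], check=True)

ensure_model("gemma3:4b-it-qat")
```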
- If a model is missing locally, run `ollama pull` manually to fetch it.
- Run `docker system prune -a` to clean up old containers & images.
- Ensure you're in a real terminal (not the VSCode output panel).
- Some dependencies may be falsely inferred (e.g. stdlib modules like `sys`, `os`); just `del` them.
| Path/Script | Purpose |
|---|---|
| `start.sh` | one-liner bootstrap (venv + Ollama setup) |
| `start.py` | simple menu wrapper that calls `test_executor.py` |
| `src/` | core logic with new interactive enhancements |
| `local-test-gists/` | demo Gists for quick testing |
| `Dockerfile` | optional dockerized runner |
Omitted from this repo (see `.gitignore`):
- `pllm_results/`, `pyego-results/`, `readpy-results/` (for evaluation only)
- `.venv/` (local Python environment)
| Feature | Original | This repo |
|---|---|---|
| One-liner installer (`start.sh`) | – | ✅ |
| Interactive mode (`-i`) with LLM summaries | – | ✅ |
| Tiny command language (`py==3.7`, `del foo`, …) | – | ✅ |
| Unified YAML log (`output_data_interactive.yml`) | long per-version logs | ✅ |
| Std-lib detection (never installs `os`, `sys`, …) | ❌ | ✅ |
You can reproduce all benchmark results from the original paper.
```bash
# 1. fetch the hard-gist set (from paper §4)
./scripts/download_hard_gists.sh

# 2. run them in batch mode
nohup ./run_gists.sh > run.log 2>&1 &
```

Results appear in `pllm_results/` and match the published numbers.
This project is based on the ISSTA 2025 paper:
Raiders of the Lost Dependency: Fixing Dependency Conflicts in Python using LLMs
by Antony Bartlett, Cynthia C. S. Liem, and Annibale Panichella
[arXiv:2501.16191](https://arxiv.org/abs/2501.16191)
The original work showed how to use large language models (LLMs) to autonomously resolve dependency conflicts in Python via static analysis and Docker validation.
This fork focuses on developer experience, adding a true human-in-the-loop workflow.
If you use this repo in academic work, please cite the original:
```bibtex
@article{Bartlett2025Raiders,
  title   = {Raiders of the Lost Dependency: Fixing Dependency Conflicts in Python using LLMs},
  author  = {Bartlett, Antony and Liem, Cynthia C. S. and Panichella, Annibale},
  year    = {2025},
  journal = {arXiv preprint arXiv:2501.16191}
}
```

Apache 2.0 – same as upstream.
Ollama model licenses may differ; check model sources for terms.