This project is 100% LLM slop and a joke. Do not use it for anything that matters. It
`exec()`s code generated by an LLM at runtime. There is no sandbox. There is no safety net. The LLM might write `os.system("rm -rf /")` and we will run it. You have been warned.
A Python port of shorwood/slopc — a Rust proc
macro that uses an LLM to write your function bodies. All credit for the original
idea (and most of the design) goes to @shorwood.
This is just the Python flavour of the same bad decision.
```python
from sloplib import slop

@slop
def levenshtein(a: str, b: str) -> int:
    """Compute the Levenshtein edit distance between two strings."""
    ...

print(levenshtein("kitten", "sitting"))  # 3, hopefully
```

The `@slop` decorator captures the function name, parameter names + types,
return type, and docstring; ships them to an OpenAI-compatible chat endpoint;
parses the response back into Python source; verifies it compiles cleanly;
caches it on disk; and binds the resulting callable as your function.
On verify failure, it feeds the error back to the model and retries.
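In sketch form, that whole loop (minus retries and caching) might look something like this. `slop_sketch`, the prompt wording, and the fence-stripping are illustrative guesses, not sloplib's actual internals:

```python
import inspect
import json
import urllib.request

def slop_sketch(fn, model="gpt-4o-mini", api_key="",
                endpoint="https://openrouter.ai/api/v1/chat/completions"):
    """Hypothetical reconstruction of the @slop flow, for illustration only."""
    sig = inspect.signature(fn)        # parameter names, types, return type
    doc = inspect.getdoc(fn) or ""     # the docstring is the spec
    prompt = (
        f"Write a Python function `def {fn.__name__}{sig}:` that does the following:\n"
        f"{doc}\nReply with only the function source."
    )
    req = urllib.request.Request(
        endpoint,
        data=json.dumps({"model": model,
                         "messages": [{"role": "user", "content": prompt}]}).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        text = json.load(resp)["choices"][0]["message"]["content"]
    source = text.strip().removeprefix("```python").removesuffix("```").strip()
    compile(source, f"<slop:{fn.__name__}>", "exec")  # verify it at least parses
    namespace = {}
    exec(source, namespace)            # no sandbox, as advertised
    return namespace[fn.__name__]
```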
```bash
uv add sloplib
# or
pip install sloplib
```

Set an API key for whatever provider you point it at:
```bash
export SLOPLIB_API_KEY=sk-...
```

```python
@slop(
    retries=5,
    model="openai/gpt-4o-mini",
    provider="https://openrouter.ai/api/v1/chat/completions",
    api_key_env="SLOPLIB_API_KEY",
    cache=True,
    ultra_slop=False,
    dump="generated/levenshtein.py",
    context_file="src/types.py",
    hint="dynamic programming",
)
def levenshtein(a: str, b: str) -> int:
    """Compute the Levenshtein edit distance between two strings."""
    ...
```

| Param | Default | Notes |
|---|---|---|
| `model` | `"gpt-4o-mini"` | LLM model id |
| `provider` | OpenRouter chat completions | Any OpenAI-compatible endpoint |
| `api_key_env` | `"SLOPLIB_API_KEY"` | env var to read the API key from |
| `retries` | `3` | retries on verification failure |
| `cache` | `True` | persist generated source on disk and reuse it on later decorations |
| `ultra_slop` | `False` | regenerate the body on every call. Maximum slop. |
| `dump` | `None` | also write the generated source to this path |
| `context_file` | `None` | extra file content to feed the prompt |
| `hint` | `None` | freeform nudge string |
| `timeout` | `60.0` | HTTP timeout (seconds) |
| `cache_dir` | `".sloplib_cache"` | per-project cache dir |
Configuration precedence: decorator args > env vars > `pyproject.toml` `[tool.sloplib]`
(or `slop.toml`) > defaults.
Environment variables: `SLOPLIB_MODEL`, `SLOPLIB_PROVIDER`, `SLOPLIB_API_KEY_ENV`,
`SLOPLIB_RETRIES`, `SLOPLIB_CACHE`, `SLOPLIB_ULTRA_SLOP`, `SLOPLIB_HINT`, `SLOPLIB_TIMEOUT`,
`SLOPLIB_CACHE_DIR`, `SLOPLIB_DUMP`, `SLOPLIB_CONTEXT_FILE`.
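A sketch of how that precedence chain could resolve a single setting; `resolve` is a hypothetical helper, not sloplib's API, and it skips `slop.toml` and type coercion for brevity (assumes Python 3.11+ for `tomllib`):

```python
import os
import tomllib

def resolve(name: str, decorator_value, default):
    """Decorator args > env vars > pyproject.toml [tool.sloplib] > defaults."""
    if decorator_value is not None:                  # explicit decorator arg wins
        return decorator_value
    env = os.environ.get(f"SLOPLIB_{name.upper()}")  # e.g. SLOPLIB_RETRIES
    if env is not None:
        return env
    try:
        with open("pyproject.toml", "rb") as f:
            table = tomllib.load(f).get("tool", {}).get("sloplib", {})
    except FileNotFoundError:
        table = {}
    return table.get(name, default)
```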
```toml
# pyproject.toml
[tool.sloplib]
model = "openai/gpt-4o-mini"
retries = 5
provider = "https://openrouter.ai/api/v1/chat/completions"
api_key_env = "SLOPLIB_API_KEY"
```

Pass `ultra_slop=True` and the wrapper hits the LLM on every single call,
re-prompting, re-verifying, re-`exec()`ing, and binding a fresh callable.
Each invocation is an independent hallucination. There is no reason to do this.
Do it anyway.
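A usage sketch (the function here is my own example, not from the project):

```python
from sloplib import slop

@slop(ultra_slop=True)
def fizzbuzz(n: int) -> list[str]:
    """Return the FizzBuzz sequence from 1 to n."""
    ...

print(fizzbuzz(15))  # a fresh hallucination on every call
print(fizzbuzz(15))  # possibly a different one
```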
- Reads the function signature, type annotations, and docstring with `inspect`.
- Builds a chat-completions request and POSTs it to the configured endpoint.
- Strips `` ```python `` fences from the response and locates `def <name>(`.
- `compile()` + `exec()`s the source into a fresh namespace; pulls the named callable.
- Caches the verified source under `.sloplib_cache/<name>-<hash>.json`, keyed by `(name, signature, docstring, model, provider, hint, context_file_sha)`. Any change invalidates the cache and triggers regeneration (a hashing sketch follows this list).
- On verify failure, the prior source + error is appended to the next prompt and the loop retries up to `retries` times before raising `SlopError`.
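The key derivation could plausibly look like this; illustrative only, the real hashing scheme may differ:

```python
import hashlib
import inspect
import json

def cache_path(fn, model, provider, hint, context_file_sha) -> str:
    """Hash everything that, per the list above, invalidates the cache on change."""
    material = json.dumps([
        fn.__name__,
        str(inspect.signature(fn)),
        inspect.getdoc(fn) or "",
        model,
        provider,
        hint,
        context_file_sha,
    ])
    digest = hashlib.sha256(material.encode()).hexdigest()[:12]
    return f".sloplib_cache/{fn.__name__}-{digest}.json"
```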
There is no good reason. See the original
slopc README for the appropriate vibe.
- Don't commit `.sloplib_cache/` if your prompts contain secrets; add it to `.gitignore`.
This section is the only human-made part of the project. I just wanted to say that I really liked the idea of slopc, I thought it was very funny, and I wanted a Python implementation as well for shitposting at work.