A Python REPL with Conversational Functionality

replgpt

replgpt is a Python REPL with a built-in LLM chatbot.

  • Write and run your own code, then ask the chatbot to explain what's happening.
  • Ask the chatbot to generate code, and immediately use it in your REPL session as if you'd written it yourself.
  • All from the same command-line prompt. You never need to context-switch between windows or copy/paste again.

What It Is Not

replgpt is not an IDE, and it is not an editor-based coding agent, though it shares some functionality with both. It can help you build a new feature on an existing code base, but that is not where it shines. If you want to quickly explore an idea with generated Python code without toggling between windows, jump-start a new project, or learn about an existing project or library, it might be up your alley.

Features

  • Standard Python REPL: Execute Python commands just as you would in the standard Python REPL. The code you write and its results are automatically added to your chat context to improve future responses.
  • LLM Code Generation: Enter natural-language text. Ask questions about an error message without retyping it, or ask the agent to write you a function. The function is immediately available in your REPL session.

Getting Started

Installation

Install replgpt directly from PyPI:

pip install replgpt

Set Up API Key

Set the OPENAI_API_KEY environment variable with your OpenAI API key:

export OPENAI_API_KEY="your-openai-api-key"

After installing, start the REPL with:

replgpt

Functionality

Python

Enter any valid Python code. When executed, the command and its output are included in the agent's memory.
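For example, you might run an ordinary snippet that raises an error, and then ask the agent about it in plain English on the next line. The snippet below is a self-contained illustration of the kind of code and output that would land in the agent's context (replgpt itself is not required to run it):

```python
# An everyday bug: averaging an empty list raises ZeroDivisionError.
# In replgpt, both this code and its traceback would be captured in
# the agent's memory, so you could simply ask "why did that fail?"
def average(values):
    return sum(values) / len(values)

try:
    average([])
except ZeroDivisionError as e:
    error_seen = repr(e)

print(error_seen)  # the error text the agent could then explain
```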

Natural Language

Enter a query to the AI agent. It can answer questions about the code you've run or errors you've seen, help you debug code that isn't behaving as you'd expect, or write a function for you that automatically becomes available in your REPL session.
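For instance, asking the agent to "write a function that returns the first n Fibonacci numbers" might leave a definition like the one below in your session, ready to call immediately. This is an illustrative sketch only; the actual code an agent generates will vary:

```python
# Hypothetical example of an agent-generated function that would
# become available in the REPL session after the agent responds.
def fibonacci(n):
    """Return a list of the first n Fibonacci numbers."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

print(fibonacci(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
```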

Commands

There are several commands you can issue to the REPL to control its behavior:

  • /help - Print additional information about the REPL and commands you can run.

  • /file_to_context <file_path> - Read the contents of a local file and load it into the agent's context window. This can be used to import documentation into the agent's memory, or to give it knowledge of existing code you'd like to work with inside the REPL. Or, if you want to understand a project's dependencies better, run /file_to_context requirements.txt and ask the agent about the libraries it uses.

  • /auto_eval - Controls what the REPL does with code generated by your AI agent. The default strategy, 'always', means that any code returned by the agent will be executed. If you have any concerns about this behavior, you can toggle it to 'never'. Alternatively, the 'infer' strategy makes an additional LLM call to evaluate the safety of the generated code. In practice this should only allow definitions (functions and classes) and will not execute code that could have side effects.
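To make the 'infer' idea concrete, here is a rough static sketch of the distinction it draws, accepting only definition-style code and rejecting top-level statements with side effects. This is purely illustrative: replgpt's actual 'infer' strategy uses an LLM judgment, not the `ast` check below:

```python
import ast

def looks_like_pure_definitions(source):
    """Illustrative sketch only: True if the code consists solely of
    function/class definitions and imports, with no top-level
    statements that could have side effects."""
    tree = ast.parse(source)
    safe = (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef,
            ast.Import, ast.ImportFrom)
    return all(isinstance(node, safe) for node in tree.body)

print(looks_like_pure_definitions("def f(x):\n    return x + 1"))  # True
print(looks_like_pure_definitions("import os\nos.remove('x')"))    # False
```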
