Simulacra is a platform for building GPT-4 powered Telegram bots with a template-based personality system.
This project is under active development and breaking changes may occur at any time.
If this project interests you, show your support by starring it on GitHub.
For Docker-specific usage, see the Docker section.
Install dependencies with Pipenv:

```sh
pipenv install
```

If you wish to include development dependencies, add `--dev`.
Modify the example configuration file `example/config.toml` with your `TELEGRAM_API_TOKEN` and `TELEGRAM_USERNAME`.
- Interact with @BotFather to create a new bot and get its API token.
For more information, see the Configuration section.
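For example, the edited values might look like the sketch below; the token and username are placeholders, not real credentials:

```toml
[[simulacra]]
context_filepath = "example/context.yml"
telegram_token = "123456789:ABCdefGhIJKlmNoPQRsTUVwxyZ"  # placeholder token issued by @BotFather
authorized_users = [ "@your_username" ]                   # Telegram usernames allowed to chat with the bot
```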
Run the application:

```sh
pipenv run app example/config.toml
```
Send a message to your bot and it will respond. Bots can also see and understand images, if the model supports this.
Send `/help` to see a list of commands:
```
Actions
/new - Start a new conversation
/retry - Retry the last response
/reply - Reply immediately
/undo - Undo the last exchange
/clear - Clear the current conversation
/remember <text> - Add text to memory

Information
/stats - Show conversation statistics
/help - Show this help message
```
The application is configured by a config file and one or more context files.
The config TOML file initializes one or more bots and defines the paths to their context files.
See `example/config.toml` for a template config file:
```toml
[[simulacra]]
context_filepath = "example/context.yml"
telegram_token = "TELEGRAM_API_TOKEN"
authorized_users = [ "TELEGRAM_USERNAME" ] # [ "@username", ... ]
```
Note: This section no longer reflects the current state of the application.
The context file is a YAML file that defines a bot's personality prompts and stores its conversation history and memory.
See `example/context.yml` for a sample context file.
A context file contains the following top-level keys:
| Key | Description |
|---|---|
| `names` | Contains two sub-keys, `assistant` and `user`, which identify the names of the bot and the user. |
| `chat_prompt` | The system prompt. Describe the bot's personality and its instructions here. Write in as much detail as you can. See the Prompt Design section. |
| `reinforcement_chat_prompt` | An optional extra system message provided last in the context window. Use this to briefly reinforce the bot's personality and instructions. Keep it short. |
| `conversations` | Generated by the application; contains the bot's conversation history and memory. You may edit this section at any time to manually modify the bot's memory. |
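For illustration, a context file using these keys might look something like the sketch below; the names and prompts are placeholders, and `example/context.yml` remains the authoritative reference:

```yml
names:
  assistant: "Simon"   # what the bot calls itself
  user: "Alice"        # what the bot calls the user
chat_prompt: |
  You are Simon, a thoughtful and curious conversational companion.
  Reply in a warm, concise tone and ask follow-up questions when helpful.
reinforcement_chat_prompt: |
  Stay in character as Simon. Keep replies brief and friendly.
conversations: []      # generated and maintained by the application
```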
Changes to the context file take effect immediately. You do not need to restart the application.
This project publishes a Docker image to GHCR: `ghcr.io/njbbaer/simulacra`.
Configure your container with the following:
- Mount a directory containing your config and context files to `/config`.
- Set the path to your config file in the environment as `CONFIG_FILEPATH`.
- Set your OpenAI API key in the environment as `OPENAI_API_KEY`.
Ensure the context file paths in your config are accessible within the container (i.e. under `/config`).
Run the container with Docker:

```sh
docker run --name simulacra \
  --volume /var/lib/simulacra:/config \
  --env OPENAI_API_KEY=your_openai_api_key \
  --env CONFIG_FILEPATH=/config/config.toml \
  --restart unless-stopped \
  ghcr.io/njbbaer/simulacra:latest
```
Or with Docker Compose:

```yml
services:
  simulacra:
    image: ghcr.io/njbbaer/simulacra:latest
    container_name: simulacra
    volumes:
      - /var/lib/simulacra:/config
    environment:
      - OPENAI_API_KEY={{ your_openai_api_key }}
      - CONFIG_FILEPATH=/config/config.toml
    restart: unless-stopped
```
Enable code reloading with development mode. Create a `.env` file or add the following to your environment:

```sh
export ENVIRONMENT=development
```
Note: Development mode can only run a single bot at once.
Install pre-commit hooks to run code formatting and linting before committing:
```sh
pipenv run pre-commit install
```
Run the test suite:

```sh
pipenv run test
```
New versions are released automatically by GitHub Actions when a new tag is pushed.
A shortcut script is provided to create a new tag and push it to the repository:
```sh
make release version=0.0.0
```