"Chatnik" is a Raku package that provides Command Line Interface (CLI) scripts for conversing with persistent Large Language Model (LLM) personas.
"Chatnik" uses files of the host Operating System (OS) to maintain persistent interactions with multiple LLM chat objects.
"Chatnik" simply moves the LLM chat-object interaction system of the Raku package "Jupyter::Chatbook", [AAp3], into a UNIX-like OS terminal. (I.e. an OS shell is used instead of a Jupyter notebook.)
There are several consequences of this approach:
- Multiple LLMs and LLM providers can be used
- The chat messages can use the facilities provided by the package "LLM::Prompts", [AAp2]:
  - Prompts collection
  - Prompt spec DSL and the related prompt expansion
- Easy access to OS shell functionalities
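The prompt spec DSL lets a chat message reference a named prompt, e.g. `@Yoda hi who are you`. The following Python sketch illustrates only the general expansion idea; the prompt texts and the parsing rule here are made up for illustration and are not the actual grammar or collection of "LLM::Prompts":

```python
import re

# Illustrative prompt collection; the real one is provided by "LLM::Prompts".
PROMPTS = {
    "Yoda": "You are Yoda. Respond to ALL inputs in the voice of Yoda from Star Wars.",
    "CodeWriterX": "You are an expert programmer. Respond with code only.",
}

def expand(message: str) -> str:
    """Replace a leading @Name persona reference with the named prompt's text."""
    m = re.match(r"@(\w+)\s*(.*)", message, flags=re.S)
    if m and m.group(1) in PROMPTS:
        return PROMPTS[m.group(1)] + "\n" + m.group(2)
    return message  # no known prompt reference: leave the message unchanged

print(expand("@Yoda hi who are you"))
```

Unknown prompt names are left untouched, which mirrors the "Known prompts found?" branch in the flowchart below.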
From Zef Ecosystem:

```shell
zef install Chatnik
```

From GitHub:

```shell
zef install https://github.com/antononcube/Raku-Chatnik.git
```
There are several options for using LLMs with this package:

- Install and run Ollama
  - For the corresponding setup see "WWW::Ollama"
- Run a llamafile / LLaMA model
  - For the corresponding setup see "WWW::LLaMA"
- Have programmatic access to LLMs of service providers like OpenAI or Gemini
  - For the corresponding setup see "WWW::OpenAI", "WWW::Gemini", or "WWW::MistralAI"
The script `llm-chat` is used to create and chat with LLM personas (chat objects):

- Create and chat with an LLM persona named "yoda1" (using the Yoda chat persona):

```shell
llm-chat -i=yoda1 --prompt=@Yoda hi who are you
# Yoda, I am. Jedi Master, wise and old. Guide you, I will, if ready you are. Hmm. Yes.
```

- Continue the conversation with "yoda1":

```shell
llm-chat -i=yoda1 since when do you use a green light saber
# Green, my lightsaber is, yes. A symbol of the Jedi Consular, it is. Wisdom and harmony, it represents. Many years have I wielded it, hmmm. Power and knowledge, balance they bring. Understand, you do?
```

Remark: The message input for `llm-chat` can be given in quotes. For example: `llm-chat 'Hi, again!' -i=yoda1`.
The CLI script `llm-chat-meta` can be used to view and manage the chat objects used by "Chatnik".
Here is its usage message:

```shell
llm-chat-meta --help
# Usage:
#   llm-chat-meta <command> [-i|--id|--chat-id=<Str>] [--all] -- Meta processing of persistent LLM-chat objects.
#
#     <command>                  Command, one of: clear, delete, file, list, messages.
#     -i|--id|--chat-id=<Str>    Chat id; ignored if --all is specified. [default: '']
#     --all                      Whether to apply the command to all chat objects or not. [default: False]
```
List all chat objects ("chats" and "personas" are synonyms of "list"):

```shell
llm-chat-meta list
# {chat-id => yoda1, context => You are Yoda.
# Respond to ALL inputs in the voice of Yoda from Star Wars.
# Be sure to ALWAYS use his distinctive style and syntax. Vary sentence length., messages => 4}
```
Here we see the messages of "yoda1":

```shell
llm-chat-meta messages -i yoda1
# {content => hi who are you, role => user, timestamp => 2026-04-18T14:46:08.953784-04:00}
# {content => Yoda, I am. Jedi Master, wise and old. Guide you, I will, if ready you are. Hmm. Yes., role => assistant, timestamp => 2026-04-18T14:46:11.002331-04:00}
# {content => since when do you use a green light saber, role => user, timestamp => 2026-04-18T14:46:11.426382-04:00}
# {content => Green, my lightsaber is, yes. A symbol of the Jedi Consular, it is. Wisdom and harmony, it represents. Many years have I wielded it, hmmm. Power and knowledge, balance they bring. Understand, you do?, role => assistant, timestamp => 2026-04-18T14:46:12.471854-04:00}
```
Here we clear the messages:

```shell
llm-chat-meta clear -i yoda1
# Cleared the messages of chat object yoda1.
```
Here is an example of getting JSON-only output by using the prompt modifiers `#NothingElse|JSON` in the message:

````shell
llm-chat -i=beta --model=ollama::gemma3:12b 'What are the populations of the Brazilian states? #NothingElse|JSON'
# ```json
# {
# "Acre": 876858,
# "Alagoas": 3351426,
# "Amapá": 844738,
# "Amazonas": 4278399,
# "Bahia": 14703893,
# "Ceará": 9187103,
# "Distrito Federal": 3045045,
# "Espírito Santo": 3940000,
# "Goiás": 7092263,
# "Maranhão": 7016280,
# "Mato Grosso": 3515083,
# "Mato Grosso do Sul": 3036406,
# "Minas Gerais": 21627175,
# "Pará": 8690722,
# "Paraíba": 4116706,
# "Paraná": 11478267,
# "Pernambuco": 9618273,
# "Piauí": 2606384,
# "Rio de Janeiro": 17422973,
# "Rio Grande do Norte": 3375386,
# "Rio Grande do Sul": 11356996,
# "Rondônia": 1150079,
# "Roraima": 517096,
# "Santa Catarina": 7142572,
# "São Paulo": 46287126,
# "Sergipe": 5617496,
# "Tocantins": 1572741
# }
# ```
````
Here is an example that combines prompt expansion with OS shell functionality (copying the result to the clipboard):

```shell
llm-chat -i=unix '@CodeWriterX|Shell macOS list of files echo the result and copy to clipboard.' | tee >(pbcopy)
# ls -1 | tee >(pbcopy)
```
Here is a flowchart that describes the interaction between the host Operating System and the chat objects database:
```mermaid
flowchart LR
    OpenAI{{OpenAI}}
    Gemini{{Gemini}}
    Ollama{{Ollama}}
    LLMFunc[[LLM::Functions]]
    LLMProm[[LLM::Prompts]]
    CODBOS[(Chat objects<br>file)]
    CODB[(Chat objects)]
    PDB[(Prompts)]
    CCommand[/Chat command/]
    CCommandOutput[/Chat result/]
    CIDQ{Chat ID<br>specified?}
    CIDEQ{Chat ID<br>exists in DB?}
    IngestCODB[Chat objects file<br>ingestion]
    UpdateCODB[Chat objects file<br>update]
    RECO[Retrieve existing<br>chat object]
    COEval[Message<br>evaluation]
    PromParse[Prompt<br>DSL spec parsing]
    KPFQ{Known<br>prompts<br>found?}
    PromExp[Prompt<br>expansion]
    CNCO[Create new<br>chat object]
    CIDNone["Assume chat ID<br>is 'NONE'"]
    subgraph "OS Shell"
        CCommand
        CCommandOutput
    end
    subgraph "OS file system"
        CODBOS
    end
    subgraph PromptProc[Prompt processing]
        PDB
        LLMProm
        PromParse
        KPFQ
        PromExp
    end
    subgraph LLMInteract[LLM interaction]
        COEval
        LLMFunc
        Gemini
        OpenAI
        Ollama
    end
    subgraph "Chatnik backend"
        IngestCODB
        CODB
        CIDQ
        CIDEQ
        CIDNone
        RECO
        CNCO
        UpdateCODB
        PromptProc
        LLMInteract
    end
    CCommand --> IngestCODB
    CODBOS -.-> IngestCODB
    UpdateCODB -.-> CODBOS
    IngestCODB -.-> CODB
    IngestCODB --> CIDQ
    CIDQ --> |yes| CIDEQ
    CIDEQ --> |yes| RECO
    RECO --> PromParse
    COEval --> CCommandOutput
    CIDEQ -.- CODB
    CIDEQ --> |no| CNCO
    LLMFunc -.- CNCO -.- CODB
    CNCO --> PromParse --> KPFQ
    KPFQ --> |yes| PromExp
    KPFQ --> |no| COEval
    PromParse -.- LLMProm
    PromExp -.- LLMProm
    PromExp --> COEval
    LLMProm -.- PDB
    CIDQ --> |no| CIDNone
    CIDNone --> CIDEQ
    COEval -.- LLMFunc
    COEval --> UpdateCODB
    LLMFunc <-.-> OpenAI
    LLMFunc <-.-> Gemini
    LLMFunc <-.-> Ollama
    style PromptProc fill:DimGray,stroke:#333,stroke-width:2px
    style LLMInteract fill:DimGray,stroke:#333,stroke-width:2px
```
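The chat-ID resolution branch of the flowchart (assume "NONE" when no ID is given, then retrieve an existing chat object or create a new one) can be sketched in Python. This is a simplification of the control flow only; the names and the chat-object layout are illustrative, not "Chatnik"'s actual implementation:

```python
def resolve_chat(chat_db, chat_id):
    """Retrieve an existing chat object by ID, or create a new one.

    Mirrors the flowchart: "Chat ID specified?" -> no leads to
    "Assume chat ID is 'NONE'"; "Chat ID exists in DB?" -> no leads to
    "Create new chat object".
    """
    if not chat_id:                  # "Chat ID specified?" -> no
        chat_id = "NONE"             # "Assume chat ID is 'NONE'"
    if chat_id in chat_db:           # "Chat ID exists in DB?" -> yes
        return chat_db[chat_id]      # "Retrieve existing chat object"
    chat_db[chat_id] = {"chat-id": chat_id, "messages": []}  # "Create new chat object"
    return chat_db[chat_id]

db = {}
obj = resolve_chat(db, None)
print(obj["chat-id"])  # NONE
```

Because the same ID always resolves to the same object, repeated `llm-chat` invocations with `-i=yoda1` continue one conversation.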
Here is the corresponding UML Sequence diagram:
```mermaid
sequenceDiagram
    participant CCommand as Chat command
    participant IngestCODB as Chat objects file ingestion
    participant CODBOS as Chat objects file
    participant CODB as Chat objects
    participant CIDQ as Chat ID specified?
    participant CIDEQ as Chat ID exists in DB?
    participant RECO as Retrieve existing chat object
    participant PromParse as Prompt DSL spec parsing
    participant KPFQ as Known prompts found?
    participant PromExp as Prompt expansion
    participant COEval as Message evaluation
    participant CCommandOutput as Chat result
    participant CNCO as Create new chat object
    participant CIDNone as Assume chat ID is NONE
    participant UpdateCODB as Chat objects file update
    participant LLMFunc as LLM Functions
    participant LLMProm as LLM Prompts
    CCommand->>IngestCODB: Chat command
    CODBOS--)IngestCODB: Chat objects file
    IngestCODB--)CODB: Chat objects
    IngestCODB->>CIDQ: Chat ID specified?
    CIDQ-->>CIDEQ: Yes
    CIDQ-->>CIDNone: No
    CIDNone->>CIDEQ: Assume chat ID is NONE
    CIDEQ-->>RECO: Yes
    CIDEQ-->>CNCO: No
    CIDEQ--)CODB: Chat objects
    RECO->>PromParse: Prompt DSL spec parsing
    PromParse--)LLMProm: LLM Prompts
    CNCO--)LLMFunc: LLM Functions
    CNCO--)CODB: Chat objects
    CNCO->>PromParse: Prompt DSL spec parsing
    PromParse->>KPFQ: Known prompts found?
    KPFQ-->>PromExp: Yes
    KPFQ-->>COEval: No
    PromExp--)LLMProm: LLM Prompts
    PromExp->>COEval: Message evaluation
    COEval--)LLMFunc: LLM evaluator invocation
    LLMFunc--)COEval: Evaluation result
    COEval->>UpdateCODB: Chat objects file update
    COEval->>CCommandOutput: Chat result
```
Using a JSON file to keep the chat-objects database is a fairly straightforward idea. Efficiency considerations of "using the OS to manage the database" are probably not that important, because LLM invocation is (much) slower in comparison.
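The persistence idea can be sketched as a read-modify-write cycle over one JSON file. The function name `append_message`, the file location, and the record layout below are all hypothetical, chosen only to illustrate the mechanism:

```python
import json
import os
import tempfile

def append_message(db_file, chat_id, role, content):
    """Load the chat-objects JSON file, append one message, and write it back."""
    db = {}
    if os.path.exists(db_file):
        with open(db_file) as fh:
            db = json.load(fh)
    chat = db.setdefault(chat_id, {"chat-id": chat_id, "messages": []})
    chat["messages"].append({"role": role, "content": content})
    with open(db_file, "w") as fh:
        json.dump(db, fh, indent=2)

# Each CLI invocation re-reads and re-writes the file, so state survives
# between separate shell commands.
db_file = os.path.join(tempfile.mkdtemp(), "chats.json")
append_message(db_file, "yoda1", "user", "hi who are you")
append_message(db_file, "yoda1", "assistant", "Yoda, I am.")
with open(db_file) as fh:
    print(len(json.load(fh)["yoda1"]["messages"]))  # 2
```

The full re-read and re-write per invocation is the "inefficiency" alluded to above; it is dwarfed by the latency of the LLM call itself.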
Remark: The following quote, about UNIX, is attributed to Ken Thompson:

> We have persistent objects, they're called files.
- TODO Implementation
  - DONE Chats DB export
  - DONE Chats DB import
  - DONE LLM persona creation
  - DONE LLM persona repeated interaction
  - TODO CLI `llm-chat`
    - DONE Simple: `$input &*%args`
    - DONE Multi-word: `@words &*%args`
    - TODO From pipeline
    - TODO Format?
      - DONE Simple
  - TODO CLI `llm-chat-meta`
    - DONE Commands reaction
      - DONE View messages for an id
      - DONE Clear messages for an id
      - DONE Delete chat for an id
      - DONE View all chats
      - DONE Delete all chats
  - TODO Load LLM personas in the JSON file used for initialization by "Jupyter::Chatbook"
- TODO Unit tests
  - DONE Export & import
  - DONE Main workflow
  - DONE Persona repeated interaction
  - DONE Persona creation
  - TODO CLI tests
- TODO Documentation
  - DONE Flowchart & sequence diagram
  - DONE Usage examples
  - TODO Demo video
[AAp1] Anton Antonov, LLM::Functions, Raku package, (2023-2026), GitHub/antononcube.

[AAp2] Anton Antonov, LLM::Prompts, Raku package, (2023-2025), GitHub/antononcube.

[AAp3] Anton Antonov, Jupyter::Chatbook, Raku package, (2023-2026), GitHub/antononcube.

[JSp1] Jonathan Stowe, XDG::BaseDirectory, Raku package, (2016-2026), GitHub/jonathanstowe.