Use an LLM (or anything else that can stream to stdout) directly from literally anywhere you can type. Outputs in real time.
Write a prompt, select it, and (by default) hit Cmd+Shift+. - it will replace your prompt with the output as it streams in.
Also! You can first put something on your clipboard (as in, copy some text) before writing/selecting your prompt, then hit Cmd+Shift+/ (by default) and it will use the copied text as context to answer your prompt.
For Linux, use Ctrl instead of Cmd.
100% local by default. (If you want to use an API or something, you can have plock call any shell script you want by specifying it in settings.json - just set ollama.enabled to false.)
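To make the shell-script route concrete, here's a minimal sketch of the kind of script you could point a custom command at (see custom_commands in the example settings below). The endpoint, the model name, and the assumption that plock hands the prompt to the script on stdin are all mine, not gospel - check the actual contract before copying this.

#!/usr/bin/env bash
# Hypothetical custom-command script: read the prompt, ask an OpenAI-compatible API,
# and print the answer to stdout (whatever lands on stdout is what plock types back out).
set -euo pipefail

prompt=$(cat)   # ASSUMPTION: the prompt arrives on stdin

curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$(jq -n --arg p "$prompt" '{model: "gpt-4o-mini", messages: [{role: "user", content: $p}]}')" \
  | jq -r '.choices[0].message.content'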
I show an example settings.json in the Settings section below.
Note: Something not working properly? I won't know! Please log an issue or take a crack at fixing it yourself and submitting a PR! Have feature ideas? Log an issue!
(in the video I mention rem, another project I'm working on)
If you are going to use this with remote APIs, consider environment variables for your API keys... just make sure they exist in whatever environment you launch plock from, or embed them directly (just don't push that code anywhere).
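For example, one way to guarantee the variable is visible is to launch the binary from the same shell (the path and key name here are just illustrative):

# Hypothetical: export the key, then start plock from the same shell so it inherits it.
export OPENAI_API_KEY="sk-..."                    # whatever key your script expects
/Applications/plock.app/Contents/MacOS/plock      # adjust to wherever the binary actually lives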
Install Ollama and make sure to run ollama pull openhermes2.5-mistral, or swap it out in settings for something else.
Launch "plock"
Shortcuts:
- Ctrl/Cmd + Shift + . : Replace the selected text with the output of the model.
- Ctrl/Cmd + Shift + / : Feed whatever is on your clipboard as "context" and replace the selected text with the output of the model. (These two are customizable in settings.json.)
- Escape: Stop any streaming output.
macOS will request Accessibility access for keyboard control.
Linux (untested): may require X11 libs for clipboard handling and key simulation via enigo - see Helpful instructions. Also, system tray icons require some extras; a guess at the Debian/Ubuntu packages is sketched just after these notes.
Windows (untested): you'll need to swap out Ollama for something else, as Ollama doesn't support Windows yet.
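For the Debian/Ubuntu crowd, my best guess at the packages involved (treat the exact names as assumptions and defer to the Tauri and enigo docs):

# Guessed package list for clipboard/key simulation (enigo needs libxdo) and the tray icon.
sudo apt install build-essential libssl-dev libgtk-3-dev libwebkit2gtk-4.0-dev \
                 libayatana-appindicator3-dev librsvg2-dev libxdo-dev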
There is a settings.json file which you can edit to change shortcuts, the model, prompts, whether to use shell scripts and what they are, and other settings.
After updating, click the tray icon and select "Load Settings" or restart it.
On Mac, it's at ~/Library/Application Support/today.jason.plock/settings.json.
On Linux, I think it's $XDG_DATA_HOME/today.jason.plock/settings.json.
On Windows, I think it's ~\AppData\Local\today.jason.plock\settings.json.
Correct me if any of these are wrong.
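If you'd rather open it from a terminal, something like this should do it (same caveat: these locations are my best guess):

# macOS
open "$HOME/Library/Application Support/today.jason.plock/settings.json"
# Linux, falling back to the XDG default if XDG_DATA_HOME isn't set
xdg-open "${XDG_DATA_HOME:-$HOME/.local/share}/today.jason.plock/settings.json"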
Show Example
{
  "environment": {},
  "ollama": {
    "enabled": true,
    "ollama_model": "openhermes2.5-mistral"
  },
  "custom_commands": {
    "index": 0,
    "custom_commands": [
      {
        "name": "gpt",
        "command": [
          "bash",
          "/Users/jason/workspace/plock/scripts/gpt.sh"
        ]
      }
    ]
  },
  "custom_prompts": {
    "basic_index": 0,
    "with_context_index": 1,
    "custom_prompts": [
      {
        "name": "default basic",
        "prompt": "Say hello, then {}"
      },
      {
        "name": "default with context",
        "prompt": "I will ask you to do something. Below is some extra context to help do what I ask. --------- {} --------- Given the above context, please, {}. DO NOT OUTPUT ANYTHING ELSE."
      }
    ]
  },
  "shortcuts": {
    "basic": "CmdOrControl+Shift+.",
    "with_context": "CmdOrControl+Shift+/"
  }
}
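Before wiring a script like the gpt one above into custom_commands, it's easy to sanity-check that it prints something to stdout on its own (again assuming a stdin-style prompt; adjust if the real contract differs):

echo "write a haiku about keyboards" | bash /Users/jason/workspace/plock/scripts/gpt.sh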
If you don't have Apple Silicon or don't want to blindly trust binaries (you shouldn't), here's how you can build it yourself!
- Node.js (v14 or later)
- Rust (v1.41 or later)
- Bun (latest version)
Download from: https://nodejs.org/
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
curl https://bun.sh/install | bash
git clone <repo_url>
cd path/to/project
bun install
bun run tauri dev
bun run tauri build
Another demo where I use the Perplexity shell script to generate an answer super fast. Not affiliated, was just replying to a thread lol
Screen.Recording.2024-01-21.at.7.21.53.PM.mov
Curious folks might be wondering what the ocr feature is. I took a crack at taking a screenshot, running OCR, and using that for context instead of copying text manually. Long story short, rusty-tesseract really disappointed me, which is awkward b/c it's core to xrem.
If someone wants to figure this out... this could be really cool, especially with multi-modal models.