Correction-native voice input that learns the words you actually use.
• Focused-Field Dictation • LLM Polish • Local Speech Skills •
Chinese • Quick Navigation • Features • Learning Model • Safety Model
Important
From one-off dictation to a voice input loop that improves.
Plain speech-to-text keeps making the same mistakes: product names, project names, mixed Chinese/English terms, and the phrases you always fix right after insertion.
Open Typeless Harness treats those edits as signal. It transcribes, polishes, inserts into the focused field, then learns from your post-insertion corrections so future dictation better matches your vocabulary.
The premise is simple: every correction after the text lands should make the next insertion better.
Tip
I'm a user -> Open the app, put the cursor in any text field, dictate, and correct the text normally. The correction trail is the learning signal.
I'm an agent -> The product loop is ASR transcript -> LLM polish -> focused-field insertion -> post-insertion edit monitor -> local speech skills.
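The agent-facing loop above can be sketched in a few lines. This is an illustrative sketch only: the function names (`polish`, `dictation_cycle`) and the dict-based skill store are assumptions, not the real Open Typeless Harness API, and the real polish step is an LLM call rather than string replacement.

```python
from typing import Callable, Dict

def polish(transcript: str, skills: Dict[str, str]) -> str:
    """Stand-in for the LLM polish step: apply learned speech skills
    (heard -> meant correction pairs) to the raw ASR transcript."""
    out = transcript
    for heard, meant in skills.items():
        out = out.replace(heard, meant)
    return out

def dictation_cycle(transcript: str, skills: Dict[str, str],
                    insert: Callable[[str], None]) -> str:
    """One pass of the loop: transcript -> polish -> focused-field insert."""
    polished = polish(transcript, skills)
    insert(polished)  # write into whatever field currently has focus
    return polished
```

In the real loop, the post-insertion edit monitor then watches what the user changes, closing the cycle.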
- Focused-field insertion: dictate into the app you are already using instead of switching into a separate editor.
- LLM-polished output: clean up spoken text before it lands in the target field.
- Post-insertion edit monitoring: watch what the user actually changes after insertion.
- Local speech skills: promote repeated corrections into local memory for future polish.
- Mixed-language vocabulary: handle terms such as type script -> TypeScript and 知呼 -> 知乎.
- You dictate into the current focused field.
- Open Typeless Harness inserts polished text.
- For a short window after insertion, the edit monitor compares the inserted text with your manual edits.
- Stable repeated corrections become local speech skills.
- On later dictation, relevant speech skills are retrieved before the LLM polish step, so the model sees your vocabulary before it writes.
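The monitoring-and-promotion steps above can be approximated with a word-level diff between the inserted text and the user's edited version. A minimal sketch, assuming a repeat-count threshold for promotion; the `SkillStore` class and threshold value are illustrative, not the shipped data model.

```python
import difflib
from collections import Counter

def extract_corrections(inserted: str, edited: str):
    """Yield (original, replacement) word spans the user changed
    after insertion, via a word-level diff."""
    a, b = inserted.split(), edited.split()
    matcher = difflib.SequenceMatcher(a=a, b=b)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "replace":
            yield (" ".join(a[i1:i2]), " ".join(b[j1:j2]))

class SkillStore:
    """Promote a correction to a local speech skill once it has
    repeated a stable number of times."""
    def __init__(self, threshold: int = 2):
        self.counts = Counter()
        self.skills = {}
        self.threshold = threshold

    def observe(self, inserted: str, edited: str):
        for pair in extract_corrections(inserted, edited):
            self.counts[pair] += 1
            if self.counts[pair] >= self.threshold:
                self.skills[pair[0]] = pair[1]
```

A one-off fix stays a count; only repeated corrections become skills that are retrieved before the next polish step.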
This is not global autocorrect. It is a correction-native memory loop built from your actual post-insertion edits.
Open Typeless Harness is an input layer, not an autonomous agent. It writes into the field you are already using, and it should not execute tasks for you.
Correction evidence and speech skills stay on the machine by default. Ambiguous corrections should become contextual skills, not unsafe global find-and-replace rules.
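One way to see the difference between a contextual skill and a global find-and-replace rule is a skill that only fires when its original context is present. This is a hypothetical sketch of that idea; the `ContextualSkill` shape and the single-word context check are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class ContextualSkill:
    """A correction that applies only in contexts resembling the one
    where it was learned, instead of rewriting text everywhere."""
    heard: str
    meant: str
    context: str  # a nearby word that must be present to trigger

    def apply(self, text: str) -> str:
        if self.context in text:
            return text.replace(self.heard, self.meant)
        return text
```

An ambiguous correction like knee -> NIH is safe as a contextual skill but dangerous as a global rule, since "knee" is a common word in its own right.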
Open Typeless Harness is a technical preview built as an experimental OpenClaudex fork/fusion on top of the OpenLess desktop runtime.
Current focus:
- focused-field dictation
- LLM-polished insertion
- post-insertion edit monitoring
- local speech-skill learning
Built on top of the OpenLess desktop runtime and released under inherited MIT license terms.

