`lm-command` is a small Node CLI wrapper around `litert-lm`.

It gives you a global `lm` command that:

- auto-finds your `.litertlm` model in the Hugging Face cache
- remembers the last backend you saved (`cpu` or `gpu`)
- remembers the resolved model path on disk
- installs `uv` automatically if it is missing
- prepares `litert-lm` with `uvx` on first use
- shells out to `uvx --from litert-lm litert-lm ...`
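The wrapper's core job, per the list above, is to assemble a single `uvx` invocation from the saved backend and model path. A minimal sketch (the `buildUvxArgs` helper is illustrative, not the package's actual code):

```javascript
// Hypothetical sketch: assemble the argv that `lm` hands to uvx.
function buildUvxArgs(backend, modelPath, prompt) {
  return [
    "--from", "litert-lm", // tell uvx which package provides the tool
    "litert-lm", "run",    // the litert-lm run subcommand
    "-b", backend,         // saved backend: "cpu" or "gpu"
    modelPath,             // resolved .litertlm path from the config
    "--prompt", prompt,
  ];
}

console.log(["uvx", ...buildUvxArgs("gpu", "/path/to/model.litertlm", "hello")].join(" "));
// → uvx --from litert-lm litert-lm run -b gpu /path/to/model.litertlm --prompt hello
```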
Install globally:

```
npm install -g lm-command
```

For local development inside this folder:

```
npm install
npm link
```

Preferred: export `NPM_TOKEN` in your shell profile:

```
export NPM_TOKEN=your_token_here
```

Then publish:

```
npm run publish:npm
```

Fallback: log in interactively:

```
npm login
```

Then publish with the same script:

```
npm run publish:npm
```

Dry run:

```
npm run publish:npm:dry-run
```

The publish script will:
- use `NPM_TOKEN` from the environment when available, otherwise fall back to your existing npm login
- automatically bump to the next patch version when npm already has the current one
- keep a manually higher local version if you already bumped it yourself
- require a clean git working tree for real releases
- commit the version bump as `release: vX.Y.Z`
- create a matching git tag `vX.Y.Z`
- push the commit and tag to `origin` after npm publish succeeds
- run `npm pack --dry-run`
- publish with `--access public`
For `--dry-run`, it previews the git commit, tag, and push steps without changing git history.
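The patch-bump rule above (bump when npm already has the current version, but keep a manually higher local version) can be sketched as a pure version comparison; this is illustrative code, not the script's real implementation:

```javascript
// Hypothetical helper: decide which version to publish.
// If npm's published version is >= the local one, move to npm's next patch;
// if the local version was already bumped higher by hand, keep it.
function nextVersion(local, published) {
  const [a, b, c] = local.split(".").map(Number);
  const [x, y, z] = published.split(".").map(Number);
  const localAhead = a > x || (a === x && (b > y || (b === y && c > z)));
  return localAhead ? local : `${x}.${y}.${z + 1}`;
}

console.log(nextVersion("1.2.3", "1.2.3")); // → 1.2.4 (npm already has 1.2.3)
console.log(nextVersion("1.3.0", "1.2.9")); // → 1.3.0 (manual bump is kept)
```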
Run a prompt:

```
lm "hello from litert"
```

If this is your first time, `lm` will make sure `uv` is available and prepare `litert-lm`.
If no model has been downloaded yet, it will tell you to run:

```
lm download
```

Use a specific backend for this run and save it for future runs:

```
lm backend gpu
lm "write a haiku about local models"
```

Fresh installs default to `gpu`.
Set the model path manually:

```
lm model /full/path/to/gemma-4-E2B-it.litertlm
```

Force auto-discovery again:

```
lm model auto
```

See the saved config:

```
lm status
```

Download the default model and save the discovered path:

```
lm download
```

Or download a different LiteRT model repo:

```
lm download litert-community/gemma-4-E2B-it-litert-lm
```

The CLI stores its config in:

```
~/.config/lm-command/config.json
```

The saved JSON includes:

- `backend`
- `modelPath`
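For reference, a saved config might look like this (values are illustrative, reusing the placeholder path from above):

```json
{
  "backend": "gpu",
  "modelPath": "/full/path/to/gemma-4-E2B-it.litertlm"
}
```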
When you run:

```
lm "hello"
```

this tool effectively executes:

```
uvx --from litert-lm litert-lm run -b gpu /path/to/model.litertlm --prompt "hello"
```

- `uv` is installed automatically when possible.
- If no saved model exists, the CLI scans `~/.cache/huggingface/hub` by default. `HUGGINGFACE_HUB_CACHE`, `HF_HOME`, and `XDG_CONFIG_HOME` are respected when set.