
lm-command

lm-command is a small Node.js command-line wrapper around litert-lm.

It gives you a global lm command that:

  • auto-finds your .litertlm model in the Hugging Face cache
  • remembers the last backend you saved (cpu or gpu)
  • remembers the resolved model path on disk
  • installs uv automatically if it is missing
  • prepares litert-lm with uvx on first use
  • shells out to uvx --from litert-lm litert-lm ...
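The shell-out in the last bullet can be sketched roughly as follows. This is illustrative only: the function and argument names are assumptions, not the actual source of lm-command.

```javascript
// Hypothetical sketch of how lm could assemble the wrapped uvx invocation.
function buildUvxArgs(prompt, backend, modelPath) {
  return [
    "--from", "litert-lm", // resolve the litert-lm package via uvx
    "litert-lm", "run",
    "-b", backend,         // saved backend: "cpu" or "gpu"
    modelPath,             // resolved .litertlm path from the saved config
    "--prompt", prompt,
  ];
}

// The CLI would then hand these args to uvx, e.g.:
// require("node:child_process").spawnSync("uvx", buildUvxArgs(...), { stdio: "inherit" });
```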

Install

npm install -g lm-command

For local development inside this folder:

npm install
npm link

Publish to npm

Preferred: export NPM_TOKEN in your shell profile:

export NPM_TOKEN=your_token_here

Then publish:

npm run publish:npm

Fallback: log in interactively:

npm login

Then publish with the same script:

npm run publish:npm

Dry run:

npm run publish:npm:dry-run

The publish script will:

  • use NPM_TOKEN from the environment when available
  • otherwise fall back to your existing npm login
  • automatically bump to the next patch version when npm already has the current one
  • keep a manually higher local version if you already bumped it yourself
  • require a clean git working tree for real releases
  • commit the version bump as release: vX.Y.Z
  • create a matching git tag vX.Y.Z
  • push the commit and tag to origin after npm publish succeeds
  • run npm pack --dry-run
  • publish with --access public

For --dry-run, it previews the git commit, tag, and push steps without changing git history.
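The version-selection rule (auto-bump the patch unless the local version is already higher) can be sketched like this. The function name and the plain three-part version parsing are assumptions for illustration, not the script's actual code.

```javascript
// Hypothetical sketch of the version-selection rule described above.
function nextVersion(localVersion, publishedVersion) {
  const parse = (v) => v.split(".").map(Number);
  const cmp = (a, b) => {
    for (let i = 0; i < 3; i++) if (a[i] !== b[i]) return a[i] - b[i];
    return 0;
  };
  const local = parse(localVersion);
  const published = parse(publishedVersion);
  // Keep a manually higher local version if you already bumped it yourself.
  if (cmp(local, published) > 0) return localVersion;
  // Otherwise bump to the next patch after the version npm already has.
  return `${published[0]}.${published[1]}.${published[2] + 1}`;
}
```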

Usage

Run a prompt:

lm "hello from litert"

If this is your first time, lm will make sure uv is available and prepare litert-lm. If no model has been downloaded yet, it will tell you to run:

lm download

Use a specific backend for this run and save it for future runs:

lm backend gpu
lm "write a haiku about local models"

Fresh installs default to gpu.

Set the model path manually:

lm model /full/path/to/gemma-4-E2B-it.litertlm

Force auto-discovery again:

lm model auto

See the saved config:

lm status

Download the default model and save the discovered path:

lm download

Or download a different LiteRT model repo:

lm download litert-community/gemma-4-E2B-it-litert-lm

Saved config

The CLI stores its config in:

~/.config/lm-command/config.json

The saved JSON includes:

  • backend
  • modelPath
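For example, after saving a backend and downloading a model, the file might look like this (the model path below is illustrative):

```json
{
  "backend": "gpu",
  "modelPath": "/home/user/.cache/huggingface/hub/model.litertlm"
}
```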

Equivalent wrapped command

When you run:

lm "hello"

this tool effectively executes:

uvx --from litert-lm litert-lm run -b gpu /path/to/model.litertlm --prompt "hello"

Notes

  • uv is installed automatically when possible.
  • If no saved model exists, the CLI scans ~/.cache/huggingface/hub by default.
  • HUGGINGFACE_HUB_CACHE, HF_HOME, and XDG_CONFIG_HOME are respected when set.
