
Language Models Terminal

lmt is a versatile CLI tool that enables you to interact directly with OpenAI's ChatGPT models from the comfort of your terminal.


Table of Contents

  1. Features
  2. Installation
    1. pip
    2. pipx, the Easy Way
  3. Getting Started
    1. Configuring your OpenAI API key
  4. Usage
    1. Basic Example
    2. Add a Persona
    3. Switching Models
    4. Template Utilization
    5. Emoji Integration
    6. Prompt Cost Estimation
    7. Reading from stdin
    8. Append an Additional Prompt to Piped stdin
    9. Output Redirection
    10. Using lmt as a Vim Filter Command
  5. Theming Colors for Code Blocks
    1. Example
  6. License

Features

  • Access All ChatGPT Models: lmt supports all available ChatGPT models (gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-4, gpt-4-32k), giving you the power to choose the most suitable one for your task.
  • Custom Templates: Design your own toolbox of templates to streamline your workflow.
  • Read File: Incorporate file content into your prompts seamlessly.
  • Output to a File: Redirect standard output (stdout) to a file or another program as needed.
  • Easy Vim Integration: Integrate ChatGPT into Vim effortlessly by using lmt as a filter command.

Installation

pip

python3 -m pip install lmt-cli

pipx, the Easy Way

pipx install lmt-cli

Getting Started

Configuring your OpenAI API key

lmt requires an OpenAI API key. Follow these steps to set one up:

  1. Acquire the OpenAI API key: You can do this by creating an account on the OpenAI website. Once registered, you will have access to your unique API key.

  2. Set a usage limit: Before you start using the API, set a usage limit to cap your spending. You can configure this in your OpenAI account settings by navigating to Billing -> Usage limits.

  3. Configure the OpenAI API key: Once you have your API key, you can set it up by running the lmt key set command.

    lmt key set

With these steps completed, your OpenAI API key is set up and ready for use with lmt.
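
Alternatively, assuming lmt relies on the standard OpenAI client (which reads the OPENAI_API_KEY environment variable), you may be able to supply the key through your shell environment instead; the key below is a placeholder:

```shell
# Assumption: lmt uses the standard OpenAI client, which picks up
# OPENAI_API_KEY from the environment. Replace the placeholder value
# with your actual key, e.g. in your ~/.bashrc or ~/.zshrc.
export OPENAI_API_KEY="sk-your-key-here"
```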

Usage

Basic Example

The simplest way to use lmt is by entering a prompt for the model to respond to.

Here's a basic usage example where we ask the model to generate a greeting:

lmt "Say hello"

In this case, the model will generate and return a greeting based on the given prompt.

Add a Persona

You can also instruct the model to adopt a specific persona using the --system flag. This is useful when you want the model's responses to emulate a certain character or writing style.

Here's an example where we instruct the model to write like the philosopher Cioran:

lmt "Tell me what you think of large language models." \
        --system "You are Cioran. You write like Cioran."

In this case, the model will generate a response based on its understanding of Cioran's writing style and perspective.

Switching Models

Switching between different models is a breeze with lmt. Use the -m flag followed by the alias of the model you wish to employ.

lmt "Explain what is a large language model" -m 4

Below is a table outlining available model aliases for your convenience:

Alias          Corresponding Model
chatgpt        gpt-3.5-turbo
chatgpt-16k    gpt-3.5-turbo-16k
3.5            gpt-3.5-turbo
3.5-16k        gpt-3.5-turbo-16k
4              gpt-4
gpt4           gpt-4
4-32k          gpt-4-32k
gpt4-32k       gpt-4-32k

For instance, if you want to use the gpt-4 model, simply include -m 4 in your command.

Template Utilization

Templates are YAML files stored in ~/.config/lmt/templates. You can create one with the following command:

lmt templates add

For help regarding the templates subcommand, use:

lmt templates --help

Here's an example of invoking a template named "cioran":

lmt "Tell me how AI will change the world." --template cioran

You can also use the shorter version: -t cioran.
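
For reference, a template file might look something like the sketch below. The key names (system, user, model) are assumptions for illustration only; inspect a file generated by lmt templates add for the actual schema:

```yaml
# ~/.config/lmt/templates/cioran.yaml
# Hypothetical schema -- verify against a file created by `lmt templates add`.
system: "You are Cioran. You write like Cioran."
user: ""
model: "gpt-4"
```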

Emoji Integration

To add a touch of emotion to the responses, append the --emoji flag.

Prompt Cost Estimation

To estimate the cost of your prompt before sending it, use the --tokens flag.
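
Under the hood, such an estimate is simple arithmetic: count the tokens in the prompt and multiply by the model's per-token price. A minimal shell sketch, using a made-up price (not a current OpenAI rate):

```shell
# Illustrative only: the price below is invented, not a real OpenAI rate.
tokens=1500
price_per_1k="0.03"   # hypothetical dollars per 1,000 tokens

# cost = tokens / 1000 * price_per_1k, computed with awk for float math
cost=$(awk -v t="$tokens" -v p="$price_per_1k" \
    'BEGIN { printf "%.4f", t / 1000 * p }')
echo "Estimated prompt cost: \$${cost}"
```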

Reading from stdin

lmt facilitates reading inputs directly from stdin, allowing you to pipe in the content of a file as a prompt. This feature can be particularly useful when dealing with longer or more complex prompts, or when you want to streamline your workflow by incorporating lmt into a larger pipeline of commands.

To use this feature, you simply need to pipe your content into the lmt command like this:

cat your_file.txt | lmt

In this example, lmt uses the content of your_file.txt as the prompt.

Also, remember that you can still use all other command line options with stdin. For instance, you might run:

cat your_file.py | lmt \
        --system "You explain code in the style of \
        a fast-talkin' wise guy from a 1940's gangster movie" \
        -m 4 --emoji

In this example, lmt takes the content of your_file.py as the prompt. The gpt-4 model is selected via -m 4, the -s/--system option instructs it to respond in the style of a fast-talking wiseguy from a 1940s gangster movie, and the --emoji flag indicates that the response may include emojis for added expressiveness.

Append an Additional Prompt to Piped stdin

Beyond the -s/--system option, lmt offers the capability to append an additional user prompt when reading from stdin. This is especially useful when you want to add context or specific instructions to the piped input without altering the system prompt.

For example, with a grocery_list.txt file, you can append a prompt for healthy alternatives and set the system prompt to guide the AI's chef-like response.

cat grocery_list.txt | lmt "What are some healthy alternatives to these items?" \
                        --system "You are a chef with a focus on healthy and sustainable cooking."

Output Redirection

You can redirect the output to a file or pipe it to another program. For instance:

lmt "List 5 Wikipedia articles" > wiki_articles.md

Using lmt as a Vim Filter Command

To invoke lmt as a filter command in Vim, you can use the command :.!lmt. Remember, Vim offers the shortcut !! as a quick way to enter :.!. This means you can simply type !!lmt to initiate your prompt.

Example: :.!lmt write an implementation of binary search

Additionally, you can filter specific lines from your text and pass them as a prompt to lmt. To achieve this, select the desired lines in VISUAL mode and then type :!lmt "Your additional prompt here" (Vim automatically inserts the '<,'> range for the selection), or specify an ex-style line range directly.


Theming Colors for Code Blocks

After you have run lmt at least once, you should have a configuration file (~/.config/lmt/config.json) in which you can configure the colors for inline code and code blocks.

Code block themes can be any of the Pygments styles: https://pygments.org/styles/

Inline code can be styled with any of the 256 terminal colors, specified by name or hexadecimal code.

Example

{
    "code_block_theme": "default",
    "inline_code_theme": "blue on #f0f0f0"
}

License

lmt is licensed under the Apache License, version 2.0.


https://github.com/sderev/lmt
