llm-gemini

API access to Google's Gemini models

Installation

Install this plugin in the same environment as LLM.

llm install llm-gemini

Usage

Configure the model by setting a key called "gemini" to your API key:

llm keys set gemini
<paste key here>

Now run the model using -m gemini-pro, for example:

llm -m gemini-pro "A joke about a pelican and a walrus"

Why did the pelican get mad at the walrus?

Because he called him a hippo-crit.

To chat interactively with the model, run llm chat:

llm chat -m gemini-pro

If you have access to the Gemini 1.5 Pro preview, you can use -m gemini-1.5-pro-latest to work with that model.

Embeddings

The plugin also adds support for the text-embedding-004 embedding model.

Run that against a single string like this:

llm embed -m text-embedding-004 -c 'hello world'

This returns a JSON array of 768 numbers.
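Because the output is plain JSON, the vector can be consumed directly by other tools. A minimal sketch of parsing it and comparing two embeddings with cosine similarity — the short stand-in vector here is fake, since producing a real 768-number vector requires an API key:

```python
import json
import math

# Stand-in for the output of `llm embed -m text-embedding-004 -c '...'`;
# a real response is a JSON array of 768 floats.
raw = "[0.1, 0.3, -0.2, 0.9]"
vector = json.loads(raw)

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# A vector compared with itself scores 1.0 (up to floating-point error).
score = cosine(vector, vector)
```

Vectors are only comparable when they come from the same embedding model.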

This command will embed every README.md file in child directories of the current directory and store the results in a SQLite database called embed.db in a collection called readmes:

llm embed-multi readmes --files . '*/README.md' -d embed.db -m text-embedding-004

You can then run similarity searches against that collection like this:

llm similar readmes -c 'upload csvs to stuff' -d embed.db
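Conceptually, a similarity search like this embeds the query string and ranks the stored vectors by cosine similarity, best match first. A toy illustration with hand-written 3-dimensional vectors (real text-embedding-004 vectors have 768 dimensions, and the collection would live in the SQLite database):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy stand-ins for stored README embeddings, keyed by hypothetical paths.
collection = {
    "project-a/README.md": [0.9, 0.1, 0.0],
    "project-b/README.md": [0.1, 0.9, 0.1],
    "project-c/README.md": [0.8, 0.2, 0.1],
}
query = [1.0, 0.0, 0.0]  # pretend embedding of the search phrase

# Rank collection entries by similarity to the query, best first.
ranked = sorted(collection, key=lambda k: cosine(query, collection[k]),
                reverse=True)
```

Here "project-a/README.md" ranks first because its vector points in nearly the same direction as the query.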

See the LLM embeddings documentation for further details.

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd llm-gemini
python3 -m venv venv
source venv/bin/activate

Now install the dependencies and test dependencies:

llm install -e '.[test]'

To run the tests:

pytest
