This repository has been archived by the owner on Apr 1, 2024. It is now read-only.

Commit: Update README.md

fardjad committed Apr 1, 2024
1 parent a9232d1 commit ff59eb0
Showing 3 changed files with 18 additions and 30 deletions.
42 changes: 15 additions & 27 deletions README.md
@@ -16,40 +16,28 @@ Use self-hosted LLMs with an OpenAI compatible API

 <hr />

-LLMatic can be used as a drop-in replacement for OpenAI's API (see the
-supported endpoints). It uses [llama-node](https://github.com/Atome-FE/llama-node)
-with [llama.cpp](https://github.com/ggerganov/llama.cpp) backend to run the models locally.
+## Project status

-Supported endpoints:
-
-- [x] /completions (stream and non-stream)
-- [x] /chat/completions (stream and non-stream)
-- [x] /embeddings
-- [x] /models
-
-This project is currently a work in progress. At this point, it's recommended
-to use it only for ad-hoc development and testing.
+This project was the result of my curiosity about and experimentation with OpenAI's API, and I enjoyed building it. It is certainly neither the first nor the last project of its kind. Given my limited time and resources, I'd like to pause development of this project for now. Below are some similar projects that can be used as alternatives:

-## Help Wanted
+1. [Ollama](https://github.com/ollama/ollama/blob/main/docs/openai.md)
+2. [LLaMA.cpp HTTP Server](https://github.com/ggerganov/llama.cpp/tree/master/examples/server)
+3. [GPT4All Chat Server Mode](https://docs.gpt4all.io/gpt4all_chat.html#gpt4all-chat-server-mode)
+4. [FastChat](https://github.com/lm-sys/FastChat/blob/main/docs/openai_api.md)

-I'm looking for contributors to help me with the [open issues](https://github.com/fardjad/node-llmatic/issues). If you're interested, please leave a comment on the issue
-you want to work on.
-
-Also, if you have any good ideas for improving this project, please open an
-issue to discuss it further.
+## Synopsis

-## Motivation
+LLMatic can be used as a drop-in replacement for OpenAI's API [v1.2.0](https://github.com/openai/openai-openapi/blob/88f221442879061d9970ed453a65b973d226f15d/openapi.yaml) (see the
+supported endpoints). By default, it uses [llama-node](https://github.com/Atome-FE/llama-node)
+with [llama.cpp](https://github.com/ggerganov/llama.cpp) backend to run the models locally. However, you can easily create [your own adapter](#custom-adapters) to use any other model or service.

-The main motivation behind making LLMatic was to experiment with OpenAI's API
-without worrying about the cost. I have seen other attempts at creating
-OpenAI-Compatible APIs such as:
-
-1. [FastChat](https://github.com/lm-sys/FastChat/blob/main/docs/openai_api.md)
-2. [GPT4All Chat Server Mode](https://docs.gpt4all.io/gpt4all_chat.html#gpt4all-chat-server-mode)
-3. [simpleAI](https://github.com/lhenault/simpleAI)
+Supported endpoints:

-But I wanted a small, simple, and easy to extend implementation in TypeScript based on the
-[official OpenAI API specification](https://github.com/openai/openai-openapi/blob/master/openapi.yaml).
+- [x] /completions (stream and non-stream)
+- [x] /chat/completions (stream and non-stream)
+- [x] /embeddings
+- [x] /models

 ## How to use
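Since the new Synopsis bills LLMatic as a drop-in replacement for OpenAI's API, an existing OpenAI client should work by pointing it at the local server. Here is a minimal TypeScript sketch using the official `openai` npm package; the base URL, port, and model ID are illustrative assumptions rather than values from this commit, so check your LLMatic configuration and the `/models` endpoint for the real ones:

```typescript
import OpenAI from "openai";

// Point the official OpenAI client at a local LLMatic instance.
// The base URL (host, port, and any path prefix) is an assumption;
// adjust it to match how your LLMatic server is configured.
const client = new OpenAI({
  baseURL: "http://localhost:3000",
  apiKey: "unused", // a local server typically ignores the key
});

// Non-streaming chat completion (the /chat/completions endpoint).
const completion = await client.chat.completions.create({
  model: "some-local-model", // hypothetical ID; use one returned by GET /models
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0].message.content);

// Streaming variant, since the endpoint supports stream mode.
const stream = await client.chat.completions.create({
  model: "some-local-model",
  messages: [{ role: "user", content: "Hello again!" }],
  stream: true,
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```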
4 changes: 2 additions & 2 deletions package-lock.json

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
 {
   "name": "llmatic",
-  "version": "0.4.241",
+  "version": "0.4.242",
   "description": "Use self-hosted LLMs with an OpenAI compatible API",
   "exports": {
     "./llm-adapter": {
