This repository has been archived by the owner on May 12, 2023. It is now read-only.

version update: support for interactive mode
abdeladim-s committed May 2, 2023
1 parent a8581d7 commit d05d1fd
Showing 6 changed files with 102 additions and 185 deletions.
75 changes: 51 additions & 24 deletions README.md
@@ -2,53 +2,80 @@
Official Python CPU inference for [GPT4All](https://github.com/nomic-ai/gpt4all) language models based on [llama.cpp](https://github.com/ggerganov/llama.cpp) and [ggml](https://github.com/ggerganov/ggml)

[![License: MIT](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT)

[//]: # ([![PyPi version](https://badgen.net/pypi/v/ptgpt4all)](https://pypi.org/project/pygpt4all/))

**NB: Under active development**

[![PyPi version](https://badgen.net/pypi/v/pygpt4all)](https://pypi.org/project/pygpt4all/)

<!-- TOC -->
* [Installation](#installation)
* [Tutorial](#tutorial)
* [Model instantiation](#model-instantiation)
* [Simple generation](#simple-generation)
* [Interactive Dialogue](#interactive-dialogue)
* [API reference](#api-reference)
* [License](#license)
<!-- TOC -->
# Installation

```bash
pip install pygpt4all
```

# Tutorial

You will first need to download the model weights:

| Model     | Download link                                            |
|-----------|----------------------------------------------------------|
| GPT4ALL   | https://gpt4all.io/models/ggml-gpt4all-l13b-snoozy.bin   |
| GPT4ALL-j | https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin |
### Model instantiation
Once the weights are downloaded, you can instantiate the models as follows:
* GPT4All model

```python
from pygpt4all import GPT4All

model = GPT4All('path/to/ggml-gpt4all-l13b-snoozy.bin')
```

* GPT4All-J model
```python
from pygpt4all import GPT4All_J

model = GPT4All_J('path/to/ggml-gpt4all-j-v1.3-groovy.bin')
```
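
Both constructors also take a `log_level` argument, which this release switches from `logging.INFO` to `logging.ERROR` by default. A minimal sketch of turning the verbose loading output back on (the path is a placeholder):

```python
import logging

from pygpt4all import GPT4All

# log_level now defaults to logging.ERROR; pass logging.INFO to
# restore the verbose loading output (the path is a placeholder).
model = GPT4All('path/to/ggml-gpt4all-l13b-snoozy.bin', log_level=logging.INFO)
```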

### Simple generation
The `generate` function is used to generate new tokens from the `prompt` given as input:

```python
for token in model.generate("Tell me a joke?\n"):
    print(token, end='', flush=True)
```
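
The pre-1.1.0 API capped output length with an `n_predict` keyword on `generate`; assuming that keyword is still accepted (an assumption, not something this commit confirms), a bounded generation would look like:

```python
# Assumption: generate() still accepts n_predict, as in the old
# README example (model.generate(..., n_predict=55)).
for token in model.generate("Once upon a time, ", n_predict=55):
    print(token, end='', flush=True)
```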

### Interactive Dialogue
You can set up an interactive dialogue by simply keeping the `model` variable alive:
```python
while True:
    try:
        prompt = input("You: ")
        if prompt == '':
            continue
        print("AI: ", end='')
        for token in model.generate(prompt):
            print(token, end='', flush=True)
        print()
    except KeyboardInterrupt:
        break
```
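
The new `prompt_context`, `prompt_prefix`, and `prompt_suffix` constructor arguments can frame each turn of such a dialogue. A sketch with illustrative values; only the keyword names come from this commit, the strings themselves are assumptions:

```python
from pygpt4all import GPT4All

# Only the keyword names below come from this commit's constructor
# signature; the context and prefix/suffix strings are illustrative.
model = GPT4All(
    'path/to/ggml-gpt4all-l13b-snoozy.bin',
    prompt_context="A dialogue between a curious human and an AI assistant.",
    prompt_prefix="\nHuman: ",
    prompt_suffix="\nAI: ",
)
```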

[//]: # (* You can always refer to the [short documentation]&#40;https://nomic-ai.github.io/pyllamacpp/&#41; for more details.)
# API reference
You can check the [API reference documentation](https://nomic-ai.github.io/pygpt4all/) for more details.


# License

This project is licensed under the MIT [License](./LICENSE).

126 changes: 0 additions & 126 deletions examples/backend_test.py

This file was deleted.

2 changes: 2 additions & 0 deletions pygpt4all/__init__.py
@@ -0,0 +1,2 @@
from pygpt4all.models.gpt4all import GPT4All
from pygpt4all.models.gpt4all_j import GPT4All_J
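
These two re-exports are what let the updated README import the model classes directly from the package root:

```python
# Both classes now resolve from the package root.
from pygpt4all import GPT4All, GPT4All_J
```
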
43 changes: 25 additions & 18 deletions pygpt4all/models/gpt4all.py
@@ -2,7 +2,7 @@
# -*- coding: utf-8 -*-

"""
GPT4ALL with `llama.cpp` backend
GPT4ALL with `llama.cpp` backend through [pyllamacpp](https://github.com/abdeladim-s/pyllamacpp)
"""

__author__ = "abdeladim-s"
@@ -24,47 +24,54 @@ class GPT4All(pyllamacpp.model.Model):
```python
from pygpt4all.models.gpt4all import GPT4All

model = GPT4All('path/to/gpt4all/model')
for token in model.generate("Tell me a joke?"):
    print(token, end='', flush=True)
```
"""

def __init__(self,
             model_path: str,
             prompt_context: str = '',
             prompt_prefix: str = '',
             prompt_suffix: str = '',
             log_level: int = logging.ERROR,
             n_ctx: int = 512,
             seed: int = 0,
             n_parts: int = -1,
             f16_kv: bool = False,
             logits_all: bool = False,
             vocab_only: bool = False,
             use_mlock: bool = False,
             embedding: bool = False):
"""
:param model_path: The path to a gpt4all `ggml` model
:param n_ctx: context size
:param n_parts:
:param seed: RNG seed, 0 for random
:param model_path: the path to the gpt4all model
:param prompt_context: the global context of the interaction
:param prompt_prefix: the prompt prefix
:param prompt_suffix: the prompt suffix
:param log_level: logging level, set to INFO by default
:param n_ctx: LLaMA context
:param seed: random seed
:param n_parts: LLaMA n_parts
:param f16_kv: use fp16 for KV cache
:param logits_all: the llama_eval() call computes all logits, not just the last one
:param vocab_only: only load the vocabulary, no weights
:param use_mlock: force system to keep model in RAM
:param embedding: embedding mode only
:param log_level: logging level, set to INFO by default
"""
# set logging level
set_log_level(log_level)
super(GPT4All, self).__init__(model_path=model_path,
                              prompt_context=prompt_context,
                              prompt_prefix=prompt_prefix,
                              prompt_suffix=prompt_suffix,
                              log_level=log_level,
                              n_ctx=n_ctx,
                              seed=seed,
                              n_parts=n_parts,
                              f16_kv=f16_kv,
                              logits_all=logits_all,
                              vocab_only=vocab_only,
                              use_mlock=use_mlock,
                              embedding=embedding)
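
For reference, a minimal sketch of the updated constructor in use; the keyword names come from the signature above, while the path and chosen values are placeholders:

```python
import logging

from pygpt4all import GPT4All

# Keyword names come from the constructor above; the values and
# model path are placeholders, not defaults mandated by this commit.
model = GPT4All(
    'path/to/ggml-gpt4all-l13b-snoozy.bin',
    n_ctx=512,        # LLaMA context size
    seed=0,           # random seed
    use_mlock=False,  # set True to force the model to stay in RAM
    log_level=logging.ERROR,
)
```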

26 changes: 17 additions & 9 deletions pygpt4all/models/gpt4all_j.py
@@ -23,21 +23,29 @@ class GPT4All_J(pygptj.model.Model):
```python
from pygpt4all.models.gpt4all_j import GPT4All_J

model = GPT4All_J('path/to/gpt4all-j/model')
for token in model.generate("Tell me a joke?"):
    print(token, end='', flush=True)
```
"""

def __init__(self,
             model_path: str,
             prompt_context: str = '',
             prompt_prefix: str = '',
             prompt_suffix: str = '',
             log_level: int = logging.ERROR):
"""
:param model_path: The path to a gpt4all-j model
:param prompt_context: the global context of the interaction
:param prompt_prefix: the prompt prefix
:param prompt_suffix: the prompt suffix
:param log_level: logging level, set to ERROR by default
"""
# set logging level
set_log_level(log_level)
super(GPT4All_J, self).__init__(model_path=model_path,
                                prompt_context=prompt_context,
                                prompt_prefix=prompt_prefix,
                                prompt_suffix=prompt_suffix,
                                log_level=log_level)
15 changes: 7 additions & 8 deletions setup.py
@@ -10,7 +10,7 @@

setup(
name="pygpt4all",
version="1.0.1",
version="1.1.0",
author="Abdeladim Sadiki",
description="Official Python CPU inference for GPT4All language models based on llama.cpp and ggml",
long_description=long_description,
@@ -21,11 +21,10 @@
package_dir={'': '.'},
long_description_content_type="text/markdown",
license='MIT',
project_urls={
    'Documentation': 'https://nomic-ai.github.io/pygpt4all/',
    'Source': 'https://github.com/nomic-ai/pygpt4all',
    'Tracker': 'https://github.com/nomic-ai/pygpt4all/issues',
},
install_requires=["pyllamacpp", "pygptj"],
)
