From 5cf3de2283f60a1b4f88fef2fc5119aff593cc59 Mon Sep 17 00:00:00 2001
From: GeneralYadoc <133023047+GeneralYadoc@users.noreply.github.com>
Date: Sun, 28 May 2023 19:50:47 +0900
Subject: [PATCH 1/4] Modify for initial release.
---
README.md | 163 +++++++++++++++++++++++++++++++++++++++++++-
requirements.txt | 2 +
samples/sample.py | 78 +++++++++++++++++++++
setup.py | 37 ++++++++++
src/ChatAIStream.py | 59 ++++++++++++++++
src/__init__.py | 3 +
6 files changed, 341 insertions(+), 1 deletion(-)
create mode 100644 requirements.txt
create mode 100644 samples/sample.py
create mode 100644 setup.py
create mode 100644 src/ChatAIStream.py
create mode 100644 src/__init__.py
diff --git a/README.md b/README.md
index c4c115a..627f7b6 100644
--- a/README.md
+++ b/README.md
@@ -1 +1,162 @@
-# ChatAIonStream
\ No newline at end of file
+# ChatAIAgent
+Message broker between user and ChatGPT.
+
+## The user of this library can
+- easily give a role and messages to ChatGPT, and obtain answers.
+- spool messages given before ChatGPT finishes generating the current answer.
+
+## How to install
+
+### Install from PyPI
+- Install package to your environment.
+ ```install
+ $ pip install chatai-agent
+ ```
+
+### Install from GitHub repository
+- Clone this repository.
+ ```clone
+  $ git clone https://github.com/GeneralYadoc/ChatAIAgent.git
+ ```
+- Change directory to the root of the repository.
+ ```cd
+ $ cd ChatAIAgent
+ ```
+- Install package to your environment.
+ ```install
+ $ pip install .
+ ```
+## How to use
+- An [OpenAI API Key](https://www.howtogeek.com/885918/how-to-get-an-openai-api-key/) is necessary to execute the following sample.
+
+- Sample code is available [here](samples/sample.py).
+ ``` sample.py
+ import sys
+ import datetime
+ import ChatAIAgent as ca # Import this.
+
+  # callback for getting questions that are actually thrown to ChatGPT
+  # If you attach external info to a user message when you put it, you can obtain that info here.
+ def ask_cb(user_message):
+ time_str = user_message.extern.strftime ('%H:%M:%S')
+ print(f"\n[question {time_str}] {user_message.message}\n")
+
+ # callback for getting answer of ChatGPT
+ def answer_cb(user_message, completion):
+ time_str = datetime.datetime.now().strftime ('%H:%M:%S')
+ message = completion.choices[0]["message"]["content"]
+ print(f"[answer {time_str}] {message}\n")
+
+  # The ChatGPT API Key is given from the command line in this example.
+ if len(sys.argv) <= 1:
+ exit(0)
+
+  system_role="You are a cheerful assistant who speaks English and can make the conversation with the user exciting."
+
+ # Create ChatAIAgent instance.
+ params = ca.params(
+ api_key=sys.argv[1],
+ system_role=system_role,
+ ask_cb=ask_cb,
+ answer_cb=answer_cb,
+ max_tokens_per_request = 2048
+ )
+ agent = ca.ChatAIAgent( params )
+
+ # Wake up internal thread on which ChatGPT answer messages will be generated.
+ agent.start()
+
+ while True:
+ message = input("")
+ if message == "":
+ break
+  # Put the message received from stdin on the internal queue so it is available to the internal thread.
+ agent.put_message(ca.userMessage(message=message,extern=datetime.datetime.now()))
+
+ # Finish generating answers.
+ # Internal thread will stop soon.
+ agent.disconnect()
+
+  # Wait for the internal thread to terminate.
+ agent.join()
+
+ del agent
+ ```
+
+- Output of the sample
+ ```output
+ $ python3 samples/sample.py XXXXXXXXXXXXXXXXXX (OpenAI API Key)
+ Who are you?
+
+ [question 17:30:35] Who are you?
+
+ [answer 17:30:37] Hello! I am a cheerful assistant and I'm here to help you. My name is not important, but I'm happy to assist you with anything you need. How can I help you today?
+
+ Would you make sound of a cat?
+
+ [question 17:31:14] Would you make sound of a cat?
+
+ [answer 17:31:16] Meow! Meow! That's the sound a cat makes. Is there anything else you would like me to assist you with?
+ ```
+## Arguments of Constructor
+- A ChatAIAgent object can be configured with the following params given to the constructor.
+
+ | name | description | default |
+ |------|------------|---------|
+ | api_key | API Key string of OpenAI | - |
+  | system_role | Role string given to ChatGPT as the system message | - |
+ | ask_cb | user message given to ChatGPT is thrown to this callback | None |
+ | max_messages_in_context | Max messages in context given to ChatGPT | 20 |
+ | answer_cb | ChatGPT answer is thrown to this callback | None |
+ | max_queue_size | Max slots of internal queue (0 is no limit) | 10 |
+ | model | Model of AI to be used. | None |
+ | max_tokens_per_request | Max number of tokens which can be contained in a request. | 256 |
+ | interval_sec | Interval of ChatGPT API call | 20.0 \[sec\] |
+### Notice
+- The default value of interval_sec is 20.0, since free users of the OpenAI API can get only three completions per minute.
+
+## Methods
+### start()
+- Start the ChatGPT conversation and start calling user callbacks asynchronously.
+- No arguments required, nothing returns.
+
+### join()
+- Wait for the internal threads started by start() to terminate.
+- No arguments required, nothing returns.
+
+### connect()
+- Start the ChatGPT conversation and start calling user callbacks synchronously.
+- Lines following this call are not executed until the internal threads terminate.
+- No arguments required, nothing returns.
+
+### disconnect()
+- Request to terminate the conversation and the calling of user callbacks.
+- The internal process will terminate soon after.
+- No arguments required, nothing returns.
+
+And other [threading.Thread](https://docs.python.org/3/library/threading.html) public methods are available.
+
+## Callbacks
+### ask_cb
+- Callback for getting the questions that are actually thrown to ChatGPT.
+- If you attach external info to a user message when you put it, you can obtain that info here.
+- It is not expected to return any value.
+### answer_cb
+- Callback for getting the question and the answer of ChatGPT.
+- The type of completion is described [here](https://platform.openai.com/docs/guides/chat).
+- It is not expected to return any value.
+
+## Concept of design
+- User messages are put on an internal queue and processed on an internal thread.
+This feature is an advantage when you put ChatGPT on a chat stream.
+Please try [this sample](samples/sample2.py) to experience the benefit.
+ ```usage
+ $ python3 ./sample2.py VIDEO-ID OpenAI-API-KEY
+ ```
+ 
+- The system role given by the user always remains the oldest sentence of the current context even if the number of messages reaches the maximum, so ChatGPT doesn't forget the role during the current conversation.
+
+## Links
+ChatAIStream uses the following libraries internally.
+
+- [streamchat-agent](https://github.com/GeneralYadoc/StreamChatAgent)
YouTube chat poller which can get messages very smoothly by using an internal queue.
diff --git a/requirements.txt b/requirements.txt
new file mode 100644
index 0000000..a17657c
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1,2 @@
+streamchat-agent==1.0.1
+chatai-agent==1.0.0
diff --git a/samples/sample.py b/samples/sample.py
new file mode 100644
index 0000000..f198fb4
--- /dev/null
+++ b/samples/sample.py
@@ -0,0 +1,78 @@
+# To execute this sample, please install streamchat-agent from PyPI as follows.
+# $ pip install streamchat-agent
+import sys
+import time
+import math
+import datetime
+import ChatAIStream as cas
+
+# Print a sentence character by character, incrementally.
+def print_incremental(st, interval_sec):
+ for i in range(len(st)):
+ if not running:
+ break
+ print(f"{st[i]}", end='')
+ sys.stdout.flush()
+ interruptible_sleep(interval_sec)
+
+# Customized sleep that can be interrupted via the running flag.
+def interruptible_sleep(time_sec):
+ counter = math.floor(time_sec / 0.01)
+ frac = time_sec - (counter * 0.01)
+ for i in range(counter):
+ if not running:
+ break
+ time.sleep(0.01)
+ if not running:
+ return
+ time.sleep(frac)
+
+# callback for getting answer of ChatGPT
+def answer_cb(user_message, completion):
+ print(f"\n[{user_message.extern.author.name} {user_message.extern.datetime}] {user_message.message}\n")
+ interruptible_sleep(3)
+ time_str = datetime.datetime.now().strftime ('%H:%M:%S')
+ message = completion.choices[0]["message"]["content"]
+ print(f"[ChatGPT {time_str}] ", end='')
+ print_incremental(message, 0.05)
+ print("\n")
+ interruptible_sleep(5)
+
+running = False
+
+# The YouTube Video ID and ChatGPT API Key are given from the command line in this example.
+if len(sys.argv) <= 2:
+ exit(0)
+
+# Set params of getting messages from stream source.
+stream_params=cas.streamParams(video_id=sys.argv[1])
+
+# Set params of Chat AI.
+ai_params=cas.aiParams(
+ api_key = sys.argv[2],
+  system_role = "You are a cheerful assistant who speaks English and can make the conversation with the user exciting.",
+ answer_cb = answer_cb
+)
+
+# Create ChatAIStream instance.
+ai_stream = cas.ChatAIStream(cas.params(stream_params=stream_params, ai_params=ai_params))
+
+running = True
+
+# Wake up internal thread to get chat messages from stream and ChatGPT answers.
+ai_stream.start()
+
+# Wait for any key input from the keyboard.
+input()
+
+# Turn off the running flag in order to finish the printing functions of this sample.
+running=False
+
+# Finish getting ChatGPT answers.
+# Internal thread will stop soon.
+ai_stream.disconnect()
+
+# Wait for the internal thread to terminate.
+ai_stream.join()
+
+del ai_stream
diff --git a/setup.py b/setup.py
new file mode 100644
index 0000000..69766d4
--- /dev/null
+++ b/setup.py
@@ -0,0 +1,37 @@
+from glob import glob
+from os.path import basename
+from os.path import splitext
+
+from setuptools import setup
+from setuptools import find_packages
+
+
+def _requires_from_file(filename):
+ return open(filename).read().splitlines()
+
+
+setup(
+ name="chatai-stream",
+ version="0.0.1",
+ license="MIT",
+    description="ChatGPT reacts to YouTube chat messages.",
+ author="General Yadoc",
+ author_email="133023047+GeneralYadoc@users.noreply.github.com",
+ classifiers=[
+ 'Development Status :: 4 - Beta',
+ 'Programming Language :: Python',
+ 'Programming Language :: Python :: 3.7',
+ 'Programming Language :: Python :: 3.8',
+ 'Programming Language :: Python :: 3.9',
+ 'License :: OSI Approved :: MIT License',
+ ],
+ url="https://github.com/GeneralYadoc/ChatAIStream",
+ packages=find_packages("src"),
+ package_dir={"": "src"},
+ py_modules=[splitext(basename(path))[0] for path in glob('src/*.py')],
+ include_package_data=True,
+ zip_safe=False,
+ install_requires=_requires_from_file('requirements.txt'),
+ setup_requires=["pytest-runner"],
+ tests_require=["pytest", "pytest-cov"]
+)
\ No newline at end of file
diff --git a/src/ChatAIStream.py b/src/ChatAIStream.py
new file mode 100644
index 0000000..8d48522
--- /dev/null
+++ b/src/ChatAIStream.py
@@ -0,0 +1,59 @@
+import re
+import threading
+from typing import Callable
+from dataclasses import dataclass
+import StreamChatAgent as sca
+import ChatAIAgent as ca
+
+
+streamParams = sca.params
+aiParams = ca.params
+
+@dataclass
+class params():
+ stream_params: streamParams
+ ai_params: aiParams
+
+class ChatAIStream(threading.Thread):
+  def my_pre_filter_cb(self, c):
+    # Replace stamps (e.g. :smile:) with dots and strip leading dots.
+    prefiltered_c = c
+    prefiltered_c.message = re.sub(r':[^:]+:', ".", prefiltered_c.message)
+    prefiltered_c.message = re.sub(r'^[\.]+', "", prefiltered_c.message)
+    if prefiltered_c and self.pre_filter_cb:
+      prefiltered_c = self.pre_filter_cb(prefiltered_c)
+    # The user filter may return None to drop the item.
+    if prefiltered_c is None or prefiltered_c.message == "":
+      return None
+    return prefiltered_c
+
+ def ask_stream_message_to_ai(self, c):
+ if self.get_stream_message_cb:
+ self.get_stream_message_cb(c)
+ if self.ai_agent:
+ self.ai_agent.put_message(ca.userMessage(message=c.message, extern=c))
+
+  def default_answer_cb(self, user_message, completion):
+    answer = completion.choices[0]["message"]["content"]
+    print(f"[Answer] {answer}")
+
+  def __init__(self, params):
+ self.get_stream_message_cb=params.stream_params.get_item_cb
+ params.stream_params.get_item_cb=self.ask_stream_message_to_ai
+ self.pre_filter_cb=params.stream_params.pre_filter_cb
+ params.stream_params.pre_filter_cb=self.my_pre_filter_cb
+
+ self.ai_agent = ca.ChatAIAgent( params.ai_params )
+ self.stream_agent = sca.StreamChatAgent( params.stream_params )
+
+ super(ChatAIStream, self).__init__(daemon=True)
+
+ def run(self):
+ self.stream_agent.start()
+ self.ai_agent.start()
+
+ def connect(self):
+ self.start()
+ self.join()
+
+ def disconnect(self):
+ self.ai_agent.disconnect()
+ self.stream_agent.disconnect()
+
diff --git a/src/__init__.py b/src/__init__.py
new file mode 100644
index 0000000..4dc8d4b
--- /dev/null
+++ b/src/__init__.py
@@ -0,0 +1,3 @@
+from .ChatAIStream import *
+
+__version__ = '0.0.1'
\ No newline at end of file
From 074a1933f1fa7ac9f27991a91f75cfe63fdc9124 Mon Sep 17 00:00:00 2001
From: GeneralYadoc <133023047+GeneralYadoc@users.noreply.github.com>
Date: Sun, 28 May 2023 19:54:54 +0900
Subject: [PATCH 2/4] Add gif of ReadMe parts.
---
.gitattributes | 1 +
ReadMeParts/ChatAIAgent.gif | 3 +++
2 files changed, 4 insertions(+)
create mode 100644 .gitattributes
create mode 100644 ReadMeParts/ChatAIAgent.gif
diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 0000000..d41a940
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1 @@
+*.gif filter=lfs diff=lfs merge=lfs -text
diff --git a/ReadMeParts/ChatAIAgent.gif b/ReadMeParts/ChatAIAgent.gif
new file mode 100644
index 0000000..4b8eeda
--- /dev/null
+++ b/ReadMeParts/ChatAIAgent.gif
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8f9e7694e5d7bace4c5a4d85ae1f435af8acb113926ba960b2eeea9ae34df708
+size 37174451
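The stamp-removal regexes and the "return None to drop an item" filter contract in `src/ChatAIStream.py` (patch 1) can be exercised without the YouTube or OpenAI dependencies. A sketch mirroring `my_pre_filter_cb`; `ChatItem` is a hypothetical stand-in for a pytchat chat item:

```python
import re
from dataclasses import dataclass

@dataclass
class ChatItem:  # hypothetical stand-in for a pytchat chat item
    message: str

def default_stamp_filter(c, user_filter=None):
    # Mirror of ChatAIStream.my_pre_filter_cb: stamps like :smile: become
    # dots, leading dots are stripped, then the optional user filter runs.
    c.message = re.sub(r':[^:]+:', ".", c.message)
    c.message = re.sub(r'^\.+', "", c.message)
    if user_filter:
        c = user_filter(c)
    if c is None or c.message == "":
        return None  # item is dropped and never enters the queue
    return c

kept = default_stamp_filter(ChatItem("hello :smile: world"))
dropped = default_stamp_filter(ChatItem(":smile::wave:"))
print(kept.message)   # "hello . world"
print(dropped)        # None
```

A message consisting only of stamps reduces to an empty string and is dropped, which is the default behavior the README's Notice section describes.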
From d63e07ad8870310a7ee056cc83a27beafa452230 Mon Sep 17 00:00:00 2001
From: GeneralYadoc <133023047+GeneralYadoc@users.noreply.github.com>
Date: Sun, 28 May 2023 19:59:30 +0900
Subject: [PATCH 3/4] Modify ReadMe for ChatAIStream.
---
README.md | 179 ++++++++++++++++++++++++++++++++++--------------------
1 file changed, 112 insertions(+), 67 deletions(-)
diff --git a/README.md b/README.md
index 627f7b6..1d866ff 100644
--- a/README.md
+++ b/README.md
@@ -1,26 +1,26 @@
-# ChatAIAgent
-Message broker between user and ChatGPT.
+# ChatAIStream
+Message broker between YouTube chat stream and ChatGPT.
## The user of this library can
-- easily give a role and messages to ChatGPT, and obtain answers.
-- spool messages given before ChatGPT finishes generating the current answer.
+- pick up messages from YouTube Chat and generate answers with ChatGPT.
+- easily give a role to ChatGPT.
## How to install
### Install from PyPI
- Install package to your environment.
```install
- $ pip install chatai-agent
+ $ pip install chatai-stream
```
### Install from GitHub repository
- Clone this repository.
```clone
-   $ git clone https://github.com/GeneralYadoc/ChatAIAgent.git
+   $ git clone https://github.com/GeneralYadoc/ChatAIStream.git
```
- Change directory to the root of the repository.
```cd
- $ cd ChatAIAgent
+ $ cd ChatAIStream
```
- Install package to your environment.
```install
@@ -31,80 +31,115 @@ Message broker between user and ChatGPT.
- Sample code is available [here](samples/sample.py).
``` sample.py
+ # To execute this sample, please install streamchat-agent from PyPI as follows.
+ # $ pip install streamchat-agent
import sys
+ import time
+ import math
import datetime
- import ChatAIAgent as ca # Import this.
-
-  # callback for getting questions that are actually thrown to ChatGPT
-  # If you attach external info to a user message when you put it, you can obtain that info here.
- def ask_cb(user_message):
- time_str = user_message.extern.strftime ('%H:%M:%S')
- print(f"\n[question {time_str}] {user_message.message}\n")
+ import ChatAIStream as cas
+
+  # Print a sentence character by character, incrementally.
+ def print_incremental(st, interval_sec):
+ for i in range(len(st)):
+ if not running:
+ break
+ print(f"{st[i]}", end='')
+ sys.stdout.flush()
+ interruptible_sleep(interval_sec)
+
+  # Customized sleep that can be interrupted via the running flag.
+ def interruptible_sleep(time_sec):
+ counter = math.floor(time_sec / 0.01)
+ frac = time_sec - (counter * 0.01)
+ for i in range(counter):
+ if not running:
+ break
+ time.sleep(0.01)
+ if not running:
+ return
+ time.sleep(frac)
# callback for getting answer of ChatGPT
def answer_cb(user_message, completion):
+ print(f"\n[{user_message.extern.author.name} {user_message.extern.datetime}] {user_message.message}\n")
+ interruptible_sleep(3)
time_str = datetime.datetime.now().strftime ('%H:%M:%S')
message = completion.choices[0]["message"]["content"]
- print(f"[answer {time_str}] {message}\n")
+ print(f"[ChatGPT {time_str}] ", end='')
+ print_incremental(message, 0.05)
+ print("\n")
+ interruptible_sleep(5)
+
+ running = False
-  # The ChatGPT API Key is given from the command line in this example.
-  if len(sys.argv) <= 1:
+  # The YouTube Video ID and ChatGPT API Key are given from the command line in this example.
+  if len(sys.argv) <= 2:
exit(0)
-  system_role="You are a cheerful assistant who speaks English and can make the conversation with the user exciting."
+ # Set params of getting messages from stream source.
+ stream_params=cas.streamParams(video_id=sys.argv[1])
- # Create ChatAIAgent instance.
- params = ca.params(
- api_key=sys.argv[1],
- system_role=system_role,
- ask_cb=ask_cb,
- answer_cb=answer_cb,
- max_tokens_per_request = 2048
+ # Set params of Chat AI.
+ ai_params=cas.aiParams(
+ api_key = sys.argv[2],
+    system_role = "You are a cheerful assistant who speaks English and can make the conversation with the user exciting.",
+ answer_cb = answer_cb
)
- agent = ca.ChatAIAgent( params )
- # Wake up internal thread on which ChatGPT answer messages will be generated.
- agent.start()
+ # Create ChatAIStream instance.
+ ai_stream = cas.ChatAIStream(cas.params(stream_params=stream_params, ai_params=ai_params))
- while True:
- message = input("")
- if message == "":
- break
- # Put message received from stdin on internal queue to be available from internal thread.
- agent.put_message(ca.userMessage(message=message,extern=datetime.datetime.now()))
+ running = True
- # Finish generating answers.
- # Internal thread will stop soon.
- agent.disconnect()
+ # Wake up internal thread to get chat messages from stream and ChatGPT answers.
+ ai_stream.start()
- # terminating internal thread.
- agent.join()
+  # Wait for any key input from the keyboard.
+ input()
- del agent
- ```
+  # Turn off the running flag in order to finish the printing functions of this sample.
+ running=False
-- Output of the sample
- ```output
- $ python3 samples/sample.py XXXXXXXXXXXXXXXXXX (OpenAI API Key)
- Who are you?
+ # Finish getting ChatGPT answers.
+ # Internal thread will stop soon.
+ ai_stream.disconnect()
- [question 17:30:35] Who are you?
+  # Wait for the internal thread to terminate.
+ ai_stream.join()
- [answer 17:30:37] Hello! I am a cheerful assistant and I'm here to help you. My name is not important, but I'm happy to assist you with anything you need. How can I help you today?
+ del ai_stream
- Would you make sound of a cat?
+ ```
- [question 17:31:14] Would you make sound of a cat?
+- Usage of the sample
+ ```usage
+  $ python3 samples/sample.py VIDEO-ID OpenAI-API-KEY
+ ```
+- Output of the sample
+  The output in the right window is produced by this sample.
+  The output on the left is also available through ChatAIStream.
+ 
- [answer 17:31:16] Meow! Meow! That's the sound a cat makes. Is there anything else you would like me to assist you with?
- ```
## Arguments of Constructor
-- A ChatAIAgent object can be configured with the following params given to the constructor.
+- A ChatAIStream object can be configured with the following params given to the constructor.
+
+  ### streamParams
+ | name | description | default |
+ |------|------------|---------|
+  | video_id | The string following 'v=' in the URL of the target YouTube live stream | - |
+ | get_item_cb | Chat items are thrown to this callback | None |
+ | pre_filter_cb | Filter set before internal queue | None |
+ | post_filter_cb | Filter set between internal queue and get_item_cb | None |
+ | max_queue_size | Max slots of internal queue (0 is no limit) | 1000 |
+ | interval_sec | Polling interval of picking up items from YouTube | 0.01 \[sec\] |
+ ### aiParams
+
| name | description | default |
|------|------------|---------|
| api_key | API Key string of OpenAI | - |
-  | system_role | Role string given to ChatGPT as the system message | - |
+  | system_role | Role string given to ChatGPT as the system message | "You are a helpful assistant." |
| ask_cb | user message given to ChatGPT is thrown to this callback | None |
| max_messages_in_context | Max messages in context given to ChatGPT | 20 |
| answer_cb | ChatGPT answer is thrown to this callback | None |
@@ -114,10 +149,11 @@ Message broker between user and ChatGPT.
| interval_sec | Interval of ChatGPT API call | 20.0 \[sec\] |
### Notice
- The default value of interval_sec is 20.0, since free users of the OpenAI API can get only three completions per minute.
+- The system role given by the user always remains the oldest sentence of the current context even if the number of messages reaches the maximum, so ChatGPT doesn't forget the role during the current conversation.
## Methods
### start()
-- Start the ChatGPT conversation and start calling user callbacks asynchronously.
+- Start YouTube Chat polling and the ChatGPT conversation, then start calling user callbacks asynchronously.
- No arguments required, nothing returns.
### join()
@@ -125,18 +161,34 @@ Message broker between user and ChatGPT.
- No arguments required, nothing returns.
### connect()
-- Start the ChatGPT conversation and start calling user callbacks synchronously.
+- Start YouTube Chat polling and the ChatGPT conversation, then start calling user callbacks synchronously.
- Lines following this call are not executed until the internal threads terminate.
- No arguments required, nothing returns.
### disconnect()
-- Request to terminate the conversation and the calling of user callbacks.
+- Request to terminate YouTube Chat polling, the ChatGPT conversation, and the calling of user callbacks.
- The internal process will terminate soon after.
- No arguments required, nothing returns.
And other [threading.Thread](https://docs.python.org/3/library/threading.html) public methods are available.
## Callbacks
+### get_item_cb
+- Callback for getting YouTube chat items.
+- You can implement arbitrary processing in it.
+- The YouTube chat item is thrown as an argument.
+- It is not expected to return any value.
+### pre_filter_cb
+- Filter applied before items are put on the internal queue.
+- The YouTube chat item is thrown as an argument.
+- You can edit YouTube chat items before they are put on the internal queue.
+- The edited chat item must be returned.
+- You can skip putting an item on the internal queue by returning None.
+### post_filter_cb
+- Filter applied after items are popped from the internal queue.
+- You can edit YouTube chat items after they are popped from the internal queue.
+- The edited chat item must be returned.
+- You can avoid sending an item to get_item_cb by returning None.
### ask_cb
- Callback for getting the questions that are actually thrown to ChatGPT.
- If you attach external info to a user message when you put it, you can obtain that info here.
@@ -146,17 +198,10 @@ And other [threading.Thread](https://docs.python.org/3/library/threading.html) p
- The type of completion is described [here](https://platform.openai.com/docs/guides/chat).
- It is not expected to return any value.
-## Concept of design
-- User messages are put on an internal queue and processed on an internal thread.
-This feature is an advantage when you put ChatGPT on a chat stream.
-Please try [this sample](samples/sample2.py) to experience the benefit.
- ```usage
- $ python3 ./sample2.py VIDEO-ID OpenAI-API-KEY
- ```
- 
-- The system role given by the user always remains the oldest sentence of the current context even if the number of messages reaches the maximum, so ChatGPT doesn't forget the role during the current conversation.
-
## Links
ChatAIStream uses the following libraries internally.
-- [streamchat-agent](https://github.com/GeneralYadoc/StreamChatAgent)
-YouTube chat poller which can get messages very smoothly by using an internal queue.
+- [streamchat-agent](https://github.com/GeneralYadoc/StreamChatAgent)
+  YouTube chat poller which can get messages very smoothly by using an internal queue.
+- [chatai-agent](https://github.com/GeneralYadoc/ChatAIAgent)
+ Message broker between user and ChatGPT.
From 7a729d1e84d2c74118edaa869112c10dc79c4f6a Mon Sep 17 00:00:00 2001
From: GeneralYadoc <133023047+GeneralYadoc@users.noreply.github.com>
Date: Sun, 28 May 2023 20:18:26 +0900
Subject: [PATCH 4/4] Add description of the type of YouTube Chat item.
---
README.md | 2 ++
1 file changed, 2 insertions(+)
diff --git a/README.md b/README.md
index 1d866ff..dfd67fa 100644
--- a/README.md
+++ b/README.md
@@ -148,6 +148,8 @@ Message broker between YouTube chat stream and ChatGPT.
| max_tokens_per_request | Max number of tokens which can be contained in a request. | 256 |
| interval_sec | Interval of ChatGPT API call | 20.0 \[sec\] |
### Notice
+- Please refer to the [pytchat README](https://github.com/taizan-hokuto/pytchat) for the type of YouTube Chat item used by get_item_cb, pre_filter_cb, and post_filter_cb.
+- Stamps within a message, and messages consisting only of stamps, are removed by default even if the user doesn't set pre_filter_cb.
- The default value of interval_sec is 20.0, since free users of the OpenAI API can get only three completions per minute.
- The system role given by the user always remains the oldest sentence of the current context even if the number of messages reaches the maximum, so ChatGPT doesn't forget the role during the current conversation.
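The last bullet above (the system role pinned as the oldest message of the context) can be sketched as a trimming function. This is an illustrative sketch of the described behavior under the assumption that the context is a list of chat-completion messages, not the library's actual implementation:

```python
def trim_context(messages, max_messages):
    # messages[0] is the system role; it is never evicted.
    # Drop the oldest user/assistant messages until the context fits.
    if len(messages) <= max_messages:
        return messages
    overflow = len(messages) - max_messages
    return [messages[0]] + messages[1 + overflow:]

ctx = [{"role": "system", "content": "You are a helpful assistant."}]
for i in range(12):
    ctx.append({"role": "user", "content": f"q{i}"})

trimmed = trim_context(ctx, max_messages=5)
print(len(trimmed))            # 5
print(trimmed[0]["role"])      # "system"
print(trimmed[-1]["content"])  # "q11"
```

However many messages accumulate, the system role survives every trim, so ChatGPT keeps its assigned persona for the whole conversation.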