
Release: v0.2.5 #1620

Merged
merged 51 commits into from
Sep 28, 2023
f7c73b8
Optimize configs (#1474)
liunux4odoo Sep 15, 2023
80375e1
fix merge conflict for #1474 (#1494)
liunux4odoo Sep 15, 2023
955b0bc
Fix ChatGPT api_base_url error; users can override the default api_base_url in the model_config online model settings (#…
liunux4odoo Sep 15, 2023
3dde02b
Optimize the logic for fetching and switching the LLM model list: (#1497)
liunux4odoo Sep 15, 2023
c8d8727
Update migrate.py and init_database.py to strengthen the knowledge base migration tool: (#1498)
liunux4odoo Sep 16, 2023
745a105
feat: support volc fangzhou
liunux4odoo Sep 16, 2023
9a7beef
Get Volcano Ark (volc fangzhou) working; add error handling and test cases
liunux4odoo Sep 16, 2023
7577bd5
Merge branch 'pr1501' into dev
liunux4odoo Sep 16, 2023
13cca9c
feat: support volc fangzhou (#1501)
qiankunli Sep 16, 2023
598eb29
First preliminary agent implementation (#1503)
zRzRzRzRzRzRzR Sep 17, 2023
a65bc4a
Add configs/prompt_config.py so users can customize prompt templates: (#1504)
liunux4odoo Sep 17, 2023
175c90c
Add parameter adaptation for additional models
glide-the Sep 17, 2023
902ba0c
Support loading a vector store by passed-in name
glide-the Sep 17, 2023
bb7ce60
1. Search-engine Q&A now supports chat history;
liunux4odoo Sep 17, 2023
7d31e84
Langchain logging toggle
glide-the Sep 17, 2023
ec85cd1
move wrap_done & get_ChatOpenAI from server.chat.utils to server.util…
liunux4odoo Sep 17, 2023
1bae930
Fix wrong cache key in the faiss_pool knowledge base cache (#1507)
liunux4odoo Sep 17, 2023
a580cbd
fix ReadMe anchor link (#1500)
zhengxiaoyao0716 Sep 16, 2023
cb2b560
fix : Duplicate variable and function name (#1509)
dividez Sep 18, 2023
be22869
Update README.md
imClumsyPanda Sep 18, 2023
b161985
fix #1519: bug in the old streamlit-chatbox; the new version has compatibility issues, so work around it in the webui for now and pin the chatbox version (…
liunux4odoo Sep 19, 2023
9bcce0a
[New feature] Online LLM models: support Alibaba Cloud Tongyi Qianwen (#1534)
yihleego Sep 20, 2023
bd0164e
Handle serialization-to-disk logic
glide-the Sep 20, 2023
92359fb
remove depends on volcengine
liunux4odoo Sep 20, 2023
818cb1a
update kb_doc_api: use Form instead of Body when upload file
liunux4odoo Sep 21, 2023
e4a927c
Switch all httpx requests to use a Client for efficiency and easier proxy configuration later. (#1554)
liunux4odoo Sep 21, 2023
171300c
update QR code
imClumsyPanda Sep 22, 2023
f3042a6
merge master
imClumsyPanda Sep 22, 2023
89aed8e
update readme_en,readme,requirements_api,requirements,model_config.py…
hzg0601 Sep 22, 2023
192fbee
Merge pull request #1568 from hzg0601/dev
hzg0601 Sep 22, 2023
810145c
New features: 1. support the vLLM inference acceleration framework; 2. update the supported model list
hzg0601 Sep 22, 2023
f4da084
Update files: 1. startup, model_config.py.example, serve_config.py.example, FAQ
hzg0601 Sep 22, 2023
3a6d166
Merge branch 'dev' of github.com:chatchat-space/Langchain-Chatchat in…
hzg0601 Sep 22, 2023
3309b5c
Merge pull request #1574 from hzg0601/dev
hzg0601 Sep 22, 2023
2d823aa
1. finished debugging the vLLM acceleration framework; 2. adjust the vllm dependency in requirements and requirements_api; 3. comment out s…
hzg0601 Sep 23, 2023
9cbd9f6
Merge pull request #1581 from hzg0601/dev
hzg0601 Sep 23, 2023
56d75af
Merge pull request #1582 from chatchat-space/fschat_vllm
hzg0601 Sep 24, 2023
2716ff7
1. update the vLLM backend notes in the configs; 2. update requirements and requirements_api;
hzg0601 Sep 26, 2023
c546b42
Merge pull request #1603 from hzg0601/dev
hzg0601 Sep 26, 2023
5702554
Add GPT-4-only agent features (to be extended); the Chinese README is written (#1611)
zRzRzRzRzRzRzR Sep 27, 2023
d39878f
Dev (#1613)
zRzRzRzRzRzRzR Sep 27, 2023
523764e
fix: set vllm based on platform to avoid error on windows
liunux4odoo Sep 27, 2023
8d0f8a5
fix: langchain warnings for import from root
liunux4odoo Sep 27, 2023
b3c7f8b
Fix UI errors in the webui knowledge base rebuild and chat interface (#1615)
liunux4odoo Sep 28, 2023
8fa9902
Per the official docs, add the instruction template for the English bge embeddings (#1585)
WilliamChen-luckbob Sep 28, 2023
efd8edd
Dev (#1618)
zRzRzRzRzRzRzR Sep 28, 2023
1b312d5
Update readme 0928 (#1619)
zRzRzRzRzRzRzR Sep 28, 2023
30b8dae
fix readme
liunux4odoo Sep 28, 2023
99e9005
Handle serialization-to-disk logic
glide-the Sep 20, 2023
e0b50b2
merge dev
liunux4odoo Sep 28, 2023
4554bd8
update version number to v0.2.5
liunux4odoo Sep 28, 2023
73 changes: 60 additions & 13 deletions README.md
@@ -57,6 +57,25 @@ docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/ch

---

## Minimum Environment Requirements

To run this project smoothly, please meet the following minimum requirements:
+ Python version: >= 3.8.5, < 3.11
+ CUDA version: >= 11.7, with a working Python installation

To run local models (int4 version) on a GPU smoothly, you need at least the following hardware:

+ chatglm2-6b & LLaMA-7B: minimum VRAM 7GB; recommended GPUs: RTX 3060, RTX 2060
+ LLaMA-13B: minimum VRAM 11GB; recommended GPUs: RTX 2060 12GB, RTX 3060 12GB, RTX 3080, RTX A2000
+ Qwen-14B-Chat: minimum VRAM 13GB; recommended GPU: RTX 3090
+ LLaMA-30B: minimum VRAM 22GB; recommended GPUs: RTX A5000, RTX 3090, RTX 4090, RTX 6000, Tesla V100, Tesla P40
+ LLaMA-65B: minimum VRAM 40GB; recommended GPUs: A100, A40, A6000

For int8, multiply the VRAM figure by about 1.5; for fp16, by about 2.5.
For example, running fp16 inference with the Qwen-7B-Chat model requires about 16GB of VRAM.

These are estimates only; actual usage as reported by nvidia-smi prevails.
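The scaling rule above can be captured in a small helper. This is only an illustration of the arithmetic stated in the text: the int4 figures come from the list above, and the x1.5 / x2.5 multipliers are the stated rules of thumb, not measured values.

```python
# Rough VRAM estimator for the figures listed above.
# int4 numbers are taken from the list; int8 ~= x1.5 and fp16 ~= x2.5
# are the stated rules of thumb. Actual usage per nvidia-smi prevails.

INT4_VRAM_GB = {
    "chatglm2-6b": 7,
    "llama-7b": 7,
    "llama-13b": 11,
    "qwen-14b-chat": 13,
    "llama-30b": 22,
    "llama-65b": 40,
}

QUANT_FACTOR = {"int4": 1.0, "int8": 1.5, "fp16": 2.5}


def estimate_vram_gb(model: str, quant: str = "int4") -> float:
    """Estimated minimum VRAM (GB) for a model at the given precision."""
    return INT4_VRAM_GB[model.lower()] * QUANT_FACTOR[quant]


print(estimate_vram_gb("LLaMA-13B", "int8"))  # 16.5
```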

## Changelog

See the [release notes](https://github.com/imClumsyPanda/langchain-ChatGLM/releases).
Expand Down Expand Up @@ -112,27 +131,29 @@ docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/ch
- [WizardLM/WizardCoder-15B-V1.0](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0)
- [baichuan-inc/baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B)
- [internlm/internlm-chat-7b](https://huggingface.co/internlm/internlm-chat-7b)
- [Qwen/Qwen-7B-Chat/Qwen-14B-Chat](https://huggingface.co/Qwen/)
- [HuggingFaceH4/starchat-beta](https://huggingface.co/HuggingFaceH4/starchat-beta)
- [FlagAlpha/Llama2-Chinese-13b-Chat](https://huggingface.co/FlagAlpha/Llama2-Chinese-13b-Chat) and others
- [BAAI/AquilaChat-7B](https://huggingface.co/BAAI/AquilaChat-7B)
- [all models of OpenOrca](https://huggingface.co/Open-Orca)
- [Spicyboros](https://huggingface.co/jondurbin/spicyboros-7b-2.2?not-for-all-audiences=true) + [airoboros 2.2](https://huggingface.co/jondurbin/airoboros-l2-13b-2.2)
- [VMware&#39;s OpenLLaMa OpenInstruct](https://huggingface.co/VMware/open-llama-7b-open-instruct)
- [baichuan2-7b/baichuan2-13b](https://huggingface.co/baichuan-inc)
- Any pythia model from [EleutherAI](https://huggingface.co/EleutherAI), such as [pythia-6.9b](https://huggingface.co/EleutherAI/pythia-6.9b)
- Any [Peft](https://github.com/huggingface/peft) adapter trained on top of the models above. To activate, the model path must contain `peft`. Note: if you load multiple peft models, you can have them share the base model's weights by setting the environment variable `PEFT_SHARE_BASE_WEIGHTS=true` in any model worker.
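The note above can be sketched as follows; the environment-variable name is as stated, while the check function is purely illustrative of how a worker might read it (it is not the project's or FastChat's actual code):

```python
import os

# PEFT_SHARE_BASE_WEIGHTS=true (name as stated above) lets multiple peft
# model workers share one copy of the base model weights.
os.environ["PEFT_SHARE_BASE_WEIGHTS"] = "true"


def share_base_weights() -> bool:
    # Illustrative truthiness check; the real parsing in the worker may differ.
    return os.environ.get("PEFT_SHARE_BASE_WEIGHTS", "false").lower() == "true"


print(share_base_weights())  # True
```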

The model support list above may keep growing as [FastChat](https://github.com/lm-sys/FastChat) is updated; see the [FastChat supported models list](https://github.com/lm-sys/FastChat/blob/main/docs/model_support.md).


Besides local models, this project also supports connecting directly to online models such as the OpenAI API and Zhipu AI; for the specific settings, see the `llm_model_dict` configuration in `configs/model_configs.py.example`.

The following online LLM models are currently supported:

- [ChatGPT](https://api.openai.com)
- [Zhipu AI](http://open.bigmodel.cn)
- [MiniMax](https://api.minimax.chat)
- [iFlytek Spark](https://xinghuo.xfyun.cn)
- [Baidu Qianfan](https://cloud.baidu.com/product/wenxinworkshop?track=dingbutonglan)
- [Alibaba Cloud Tongyi Qianwen](https://dashscope.aliyun.com/)

The default LLM type in the project is `THUDM/chatglm2-6b`; to use another LLM, modify `llm_model_dict` and `LLM_MODEL` in [configs/model_config.py].
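A hedged sketch of what such an entry looks like; the exact schema is defined by `configs/model_config.py.example`, and the key names below (`api_base_url`, `api_key`) are assumptions based on the fields this PR touches:

```python
# Illustrative shape only -- the authoritative schema lives in
# configs/model_config.py.example; values here are placeholders.
llm_model_dict = {
    "chatglm2-6b": {
        "api_base_url": "http://127.0.0.1:8888/v1",  # local FastChat API server
        "api_key": "EMPTY",
    },
}
LLM_MODEL = "chatglm2-6b"  # which entry of llm_model_dict to use

print(llm_model_dict[LLM_MODEL]["api_base_url"])
```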

@@ -157,9 +178,11 @@ docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/ch
- [GanymedeNil/text2vec-large-chinese](https://huggingface.co/GanymedeNil/text2vec-large-chinese)
- [nghuyong/ernie-3.0-nano-zh](https://huggingface.co/nghuyong/ernie-3.0-nano-zh)
- [nghuyong/ernie-3.0-base-zh](https://huggingface.co/nghuyong/ernie-3.0-base-zh)
- [sensenova/piccolo-base-zh](https://huggingface.co/sensenova/piccolo-base-zh)
- [sensenova/piccolo-large-zh](https://huggingface.co/sensenova/piccolo-large-zh)
- [OpenAI/text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings)

The default Embedding type in the project is `sensenova/piccolo-base-zh`; to use another Embedding type, modify `embedding_model_dict` and `EMBEDDING_MODEL` in [configs/model_config.py].

---

@@ -187,15 +210,27 @@ docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/ch

For how to use a custom text splitter or contribute your own, see the [Text Splitter contribution guide](docs/splitter.md).

## Agent Ecosystem
### Basic Agent
In this release we implemented a simple ReAct-style Agent based on OpenAI. In our testing so far, only the following two models support it:
+ OpenAI GPT-4
+ ChatGLM2-130B

The current version of the Agent still requires extensive prompt tuning; tuning location:

### Build Your Own Agent Tools

See the [custom Agent guide](docs/自定义Agent.md) for details.
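As a rough, framework-agnostic sketch of what an Agent tool boils down to (a name, a description the LLM selects on, and a callable); every name below is invented for illustration, and the project's real interface is described in the custom Agent guide:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    """A minimal stand-in for an Agent tool: name, description, callable."""
    name: str
    description: str
    func: Callable[[str], str]


def calculate(expression: str) -> str:
    # The basic Agent described above can call a calculator; eval() with
    # stripped builtins is for illustration only, not production-safe.
    return str(eval(expression, {"__builtins__": {}}, {}))


TOOLS = {
    t.name: t
    for t in [Tool("calculator", "evaluate an arithmetic expression", calculate)]
}

print(TOOLS["calculator"].func("2 * (3 + 4)"))  # 14
```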

## Docker 部署

🐳 Docker image address: `registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.5`

```shell
docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.5
```

- This image is `35.3GB`, built from `v0.2.5` on the `nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04` base image
- It bundles two `embedding` models, `m3e-large` and `text2vec-bge-large-chinese` (the latter enabled by default), plus `chatglm2-6b-32k`
- This image targets convenient one-command deployment; please make sure the NVIDIA driver is installed on your Linux distribution
- Note that you do not need to install the CUDA toolkit on the host, but you do need the `NVIDIA Driver` and the `NVIDIA Container Toolkit`; see the [installation guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html)
@@ -391,22 +426,26 @@ CUDA_VISIBLE_DEVICES=0,1 python startup.py -a
- [X] .csv
- [ ] .xlsx
- [ ] Tokenization and retrieval
- [X] Support different types of TextSplitter
- [X] Improve the ChineseTextSplitter designed around Chinese punctuation
- [ ] Reimplement context-stitching retrieval
- [ ] Local web page ingestion
- [ ] SQL ingestion
- [ ] Knowledge graph / graph database ingestion
- [X] Search engine integration
- [X] Bing search
- [X] DuckDuckGo search
- [X] Agent implementation
- [X] Basic ReAct-style Agent, including calculator calls
- [X] Langchain's built-in Agent implementation and invocation
- [ ] Agent support for more models
- [ ] More tools
- [X] LLM model integration
- [X] Support calling LLMs via the [FastChat](https://github.com/lm-sys/fastchat) API
- [X] Support ChatGLM API and other LLM APIs
- [X] Embedding model integration
- [X] Support open-source Embedding models on HuggingFace
- [X] Support OpenAI Embedding API and other Embedding APIs
- [X] FastAPI-based API access
- [X] Web UI
- [X] Streamlit-based Web UI
@@ -417,4 +456,12 @@ CUDA_VISIBLE_DEVICES=0,1 python startup.py -a

<img src="img/qr_code_64.jpg" alt="二维码" width="300" height="300" />

🎉 The langchain-Chatchat project WeChat group: if you are interested in this project, you are welcome to join the group chat and take part in the discussion.


## Follow Us

<img src="img/official_account.png" alt="image" width="900" height="300" />
🎉 The official langchain-Chatchat WeChat account: welcome to scan the QR code and follow us.


75 changes: 67 additions & 8 deletions README_en.md
@@ -56,6 +56,25 @@ docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/ch

---

## Minimum Environment Requirements

To run this project smoothly, please meet the following minimum requirements:
+ Python version: >= 3.8.5, < 3.11
+ CUDA version: >= 11.7, with a working Python installation

To run local models (int4 version) on a GPU smoothly, you need at least the following hardware:

+ chatglm2-6b & LLaMA-7B: minimum VRAM 7GB; recommended GPUs: RTX 3060, RTX 2060
+ LLaMA-13B: minimum VRAM 11GB; recommended GPUs: RTX 2060 12GB, RTX 3060 12GB, RTX 3080, RTX A2000
+ Qwen-14B-Chat: minimum VRAM 13GB; recommended GPU: RTX 3090
+ LLaMA-30B: minimum VRAM 22GB; recommended GPUs: RTX A5000, RTX 3090, RTX 4090, RTX 6000, Tesla V100, Tesla P40
+ LLaMA-65B: minimum VRAM 40GB; recommended GPUs: A100, A40, A6000

For int8, multiply the VRAM figure by about 1.5; for fp16, by about 2.5.
For example, running fp16 inference with the Qwen-7B-Chat model requires about 16GB of VRAM.

These are estimates only; actual usage as reported by nvidia-smi prevails.

## Change Log

Please refer to the [version change log](https://github.com/imClumsyPanda/langchain-ChatGLM/releases).
@@ -105,18 +124,31 @@ The project uses [FastChat](https://github.com/lm-sys/FastChat) to provide the AP
- [WizardLM/WizardCoder-15B-V1.0](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0)
- [baichuan-inc/baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B)
- [internlm/internlm-chat-7b](https://huggingface.co/internlm/internlm-chat-7b)
- [Qwen/Qwen-7B-Chat/Qwen-14B-Chat](https://huggingface.co/Qwen/)
- [HuggingFaceH4/starchat-beta](https://huggingface.co/HuggingFaceH4/starchat-beta)
- [FlagAlpha/Llama2-Chinese-13b-Chat](https://huggingface.co/FlagAlpha/Llama2-Chinese-13b-Chat) and other models of FlagAlpha
- [BAAI/AquilaChat-7B](https://huggingface.co/BAAI/AquilaChat-7B)
- [all models of OpenOrca](https://huggingface.co/Open-Orca)
- [Spicyboros](https://huggingface.co/jondurbin/spicyboros-7b-2.2?not-for-all-audiences=true) + [airoboros 2.2](https://huggingface.co/jondurbin/airoboros-l2-13b-2.2)
- [baichuan2-7b/baichuan2-13b](https://huggingface.co/baichuan-inc)
- [VMware&#39;s OpenLLaMa OpenInstruct](https://huggingface.co/VMware/open-llama-7b-open-instruct)

* Any [EleutherAI](https://huggingface.co/EleutherAI) pythia model such as [pythia-6.9b](https://huggingface.co/EleutherAI/pythia-6.9b)
* Any [Peft](https://github.com/huggingface/peft) adapter trained on top of a model above. To activate, must have `peft` in the model path. Note: If loading multiple peft models, you can have them share the base model weights by setting the environment variable `PEFT_SHARE_BASE_WEIGHTS=true` in any model worker.

Please refer to `llm_model_dict` in `configs.model_configs.py.example` to invoke the OpenAI API.

The above model support list may be updated continuously as [FastChat](https://github.com/lm-sys/FastChat) is updated; see the [FastChat supported models list](https://github.com/lm-sys/FastChat/blob/main/docs/model_support.md).
In addition to local models, this project also supports direct access to online models such as the OpenAI API and Zhipu AI; for specific settings, see the `llm_model_dict` configuration in `configs/model_configs.py.example`.
The following online LLM models are currently supported:

- [ChatGPT](https://api.openai.com)
- [Zhipu AI](http://open.bigmodel.cn)
- [MiniMax](https://api.minimax.chat)
- [iFlytek Spark](https://xinghuo.xfyun.cn)
- [Baidu Qianfan](https://cloud.baidu.com/product/wenxinworkshop?track=dingbutonglan)
- [Alibaba Cloud Tongyi Qianwen](https://dashscope.aliyun.com/)

The default LLM type used in the project is `THUDM/chatglm2-6b`; to use other LLM types, please modify `llm_model_dict` and `LLM_MODEL` in [configs/model_config.py].

### Supported Embedding models

@@ -129,6 +161,8 @@ Following models are tested by developers with Embedding class of [HuggingFace](
- [BAAI/bge-base-zh](https://huggingface.co/BAAI/bge-base-zh)
- [BAAI/bge-large-zh](https://huggingface.co/BAAI/bge-large-zh)
- [BAAI/bge-large-zh-noinstruct](https://huggingface.co/BAAI/bge-large-zh-noinstruct)
- [sensenova/piccolo-base-zh](https://huggingface.co/sensenova/piccolo-base-zh)
- [sensenova/piccolo-large-zh](https://huggingface.co/sensenova/piccolo-large-zh)
- [shibing624/text2vec-base-chinese-sentence](https://huggingface.co/shibing624/text2vec-base-chinese-sentence)
- [shibing624/text2vec-base-chinese-paraphrase](https://huggingface.co/shibing624/text2vec-base-chinese-paraphrase)
- [shibing624/text2vec-base-multilingual](https://huggingface.co/shibing624/text2vec-base-multilingual)
@@ -137,16 +171,24 @@ Following models are tested by developers with Embedding class of [HuggingFace](
- [GanymedeNil/text2vec-large-chinese](https://huggingface.co/GanymedeNil/text2vec-large-chinese)
- [nghuyong/ernie-3.0-nano-zh](https://huggingface.co/nghuyong/ernie-3.0-nano-zh)
- [nghuyong/ernie-3.0-base-zh](https://huggingface.co/nghuyong/ernie-3.0-base-zh)
- [sensenova/piccolo-base-zh](https://huggingface.co/sensenova/piccolo-base-zh)
- [sensenova/piccolo-large-zh](https://huggingface.co/sensenova/piccolo-large-zh)
- [OpenAI/text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings)

The default Embedding type used in the project is `sensenova/piccolo-base-zh`; if you want to use another Embedding type, please modify `embedding_model_dict` and `EMBEDDING_MODEL` in [configs/model_config.py].
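A hedged sketch of that switch; the key and path values below are placeholders, and the real mapping lives in `configs/model_config.py`:

```python
# Illustrative only: the authoritative embedding_model_dict is defined in
# configs/model_config.py; entries here are placeholders.
embedding_model_dict = {
    "piccolo-base-zh": "sensenova/piccolo-base-zh",
    "m3e-base": "moka-ai/m3e-base",
}
EMBEDDING_MODEL = "piccolo-base-zh"  # switch models by changing this key

print(embedding_model_dict[EMBEDDING_MODEL])
```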

### Build your own Agent tool!

See [Custom Agent Instructions](docs/自定义Agent.md) for details.

---

## Docker Deployment

🐳 Docker image path: `registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.5`

```shell
docker run -d --gpus all -p 80:8501 registry.cn-beijing.aliyuncs.com/chatchat/chatchat:0.2.5
```

- The image size of this version is `33.9GB`, using `v0.2.0`, with `nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04` as the base image
@@ -328,17 +370,21 @@ Please refer to [FAQ](docs/FAQ.md)
- [ ] Structured documents
- [X] .csv
- [ ] .xlsx
- [ ] TextSplitter and Retriever
- [X] multiple TextSplitter
- [X] ChineseTextSplitter
- [ ] Reconstructed Context Retriever
- [ ] Webpage
- [ ] SQL
- [ ] Knowledge Database
- [X] Search Engines
- [X] Bing
- [X] DuckDuckGo
- [X] Agent
- [X] Agent implementation in the form of basic React, including calls to calculators, etc.
- [X] Langchain's own Agent implementation and calls
- [ ] Agent support for more models
- [ ] More tools
- [X] LLM Models
- [X] [FastChat](https://github.com/lm-sys/fastchat) -based LLM Models
- [ ] Multiple remote LLM APIs
@@ -348,3 +394,16 @@ Please refer to [FAQ](docs/FAQ.md)
- [X] FastAPI-based API
- [X] Web UI
- [X] Streamlit -based Web UI

---

## Wechat Group

<img src="img/qr_code_64.jpg" alt="QR Code" width="300" height="300" />

🎉 The langchain-Chatchat project WeChat group: if you are interested in this project, you are welcome to join the group chat and take part in the discussion.

## Follow us

<img src="img/official_account.png" alt="image" width="900" height="300" />
🎉 The official langchain-Chatchat WeChat account: welcome to scan the QR code and follow us.
15 changes: 4 additions & 11 deletions chains/llmchain_with_history.py
@@ -1,19 +1,12 @@
-from langchain.chat_models import ChatOpenAI
-from configs.model_config import llm_model_dict, LLM_MODEL
-from langchain import LLMChain
+from server.utils import get_ChatOpenAI
+from configs.model_config import LLM_MODEL, TEMPERATURE
+from langchain.chains import LLMChain
 from langchain.prompts.chat import (
     ChatPromptTemplate,
     HumanMessagePromptTemplate,
 )
 
-model = ChatOpenAI(
-    streaming=True,
-    verbose=True,
-    # callbacks=[callback],
-    openai_api_key=llm_model_dict[LLM_MODEL]["api_key"],
-    openai_api_base=llm_model_dict[LLM_MODEL]["api_base_url"],
-    model_name=LLM_MODEL
-)
+model = get_ChatOpenAI(model_name=LLM_MODEL, temperature=TEMPERATURE)
 
 human_prompt = "{input}"
6 changes: 5 additions & 1 deletion configs/__init__.py
@@ -1,4 +1,8 @@
+from .basic_config import *
 from .model_config import *
+from .kb_config import *
+from .server_config import *
+from .prompt_config import *
 
-VERSION = "v0.2.4"
+VERSION = "v0.2.5"
22 changes: 22 additions & 0 deletions configs/basic_config.py.example
@@ -0,0 +1,22 @@
import logging
import os
import langchain

# Whether to print verbose logs
log_verbose = False
langchain.verbose = False


# The settings below normally do not need to be changed

# Log format
LOG_FORMAT = "%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s"
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logging.basicConfig(format=LOG_FORMAT)


# Log storage path
LOG_PATH = os.path.join(os.path.dirname(os.path.dirname(__file__)), "logs")
if not os.path.exists(LOG_PATH):
    os.mkdir(LOG_PATH)
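The `LOG_FORMAT` string above can be exercised on its own to see what a log line looks like; the record below is fabricated purely for the demonstration:

```python
import logging

# Same format string as in basic_config.py.example above.
LOG_FORMAT = "%(asctime)s - %(filename)s[line:%(lineno)d] - %(levelname)s: %(message)s"

formatter = logging.Formatter(LOG_FORMAT)
# A hand-built record standing in for a real logging call.
record = logging.LogRecord(
    name="demo", level=logging.INFO, pathname="webui.py", lineno=42,
    msg="knowledge base loaded", args=(), exc_info=None,
)
print(formatter.format(record))
# e.g. 2023-09-28 12:00:00,000 - webui.py[line:42] - INFO: knowledge base loaded
```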