update descriptions for apps
minggnim committed Aug 11, 2023
1 parent 026f64f commit 1e52bd7
Showing 3 changed files with 16 additions and 19 deletions.
16 changes: 16 additions & 0 deletions apps/README.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,16 @@
## Instructions for running the applications

1. Install the latest nlp_models package

```shell
pip install -U nlp_models
```

2. Download the quantized Llama 2 model `llama-2-7b-chat.ggmlv3.q8_0.bin` from https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/tree/main and place it under the `models` folder

3. Run the application of interest:
   - Chat: `streamlit run chat.py`
   - Q&A: `streamlit run qa.py`
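Before launching either app, it can help to verify that step 2 actually put the model where the apps expect it. A minimal stdlib sketch, assuming the apps read the model from a local `models/` folder (the helper name `model_path` is illustrative, not part of `nlp_models`):

```python
from pathlib import Path

# Name of the quantized model file from step 2
MODEL_FILE = "llama-2-7b-chat.ggmlv3.q8_0.bin"

def model_path(models_dir: str = "models") -> Path:
    """Return the expected location of the downloaded model,
    failing early if step 2 was skipped."""
    path = Path(models_dir) / MODEL_FILE
    if not path.exists():
        raise FileNotFoundError(
            f"Expected {path}; download it from the Hugging Face link above."
        )
    return path
```

Running this check once before `streamlit run` gives a clearer error than a failed model load deep inside the app.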
11 changes: 0 additions & 11 deletions apps/chat.py
@@ -1,14 +1,3 @@
"""
To run Llama2 chat UI on CPU
1. Install the latest nlp_models package
`pip install -U nlp_models`
2. Download `llama-2-7b-chat.ggmlv3.q8_0.bin`
from https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/tree/main
and place it under `models` folder
3. Run `streamlit run chat.py` to spin up the UI
"""


import streamlit as st
from langchain.memory import ConversationBufferWindowMemory
from nlp_models.llm.base import LlmConfig
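The chat app imports `ConversationBufferWindowMemory`, which retains only the most recent exchanges of a conversation. A toy stdlib analogue of that sliding-window idea (class and method names here are illustrative, not langchain's API):

```python
from collections import deque

class WindowMemory:
    """Toy sliding-window chat memory: keep only the last k exchanges,
    mimicking what ConversationBufferWindowMemory does in langchain."""

    def __init__(self, k: int = 3):
        self.turns = deque(maxlen=k)  # older turns fall off automatically

    def add(self, user: str, ai: str) -> None:
        self.turns.append((user, ai))

    def context(self) -> str:
        # Flatten the retained turns into a prompt prefix for the LLM
        return "\n".join(f"Human: {u}\nAI: {a}" for u, a in self.turns)
```

Bounding the window keeps the prompt within the model's context length regardless of how long the chat runs.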
8 changes: 0 additions & 8 deletions apps/qa.py
@@ -1,11 +1,3 @@
"""
To run Llama2 QA UI on CPU
1. Install the latest nlp_models package `pip install -U nlp_models`
2. Download `llama-2-7b-chat.ggmlv3.q8_0.bin` from https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/tree/main and place it under `models` folder
3. Run `streamlit run qa.py` to spin up the UI
"""


import streamlit as st
from tempfile import NamedTemporaryFile
from nlp_models.llm.base import LlmConfig
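The Q&A app imports `NamedTemporaryFile`, a common Streamlit pattern for persisting an uploaded document to disk so that path-based document loaders can read it. A minimal sketch of that pattern (the helper name `persist_upload` is hypothetical):

```python
from tempfile import NamedTemporaryFile

def persist_upload(data: bytes, suffix: str = ".pdf") -> str:
    """Write uploaded bytes to a named temp file and return its path,
    so loaders that expect a filesystem path can consume the upload."""
    with NamedTemporaryFile(delete=False, suffix=suffix) as tmp:
        tmp.write(data)
        return tmp.name
```

`delete=False` keeps the file alive after the `with` block, which is what lets a loader open it later by path.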
