This repository was archived by the owner on Apr 12, 2026. It is now read-only.
Merged
3 changes: 3 additions & 0 deletions .streamlit/template.secrets.toml
@@ -1,3 +1,6 @@
[DEEPSEEK]
DEEPSEEK_API_KEY = ""

[OPENAI]
openai_api_key = ""

49 changes: 7 additions & 42 deletions README.md
@@ -1,10 +1,10 @@
# FreeStream

Build your own personal chatbot interface with Streamlit!
Streamlit Chatbots - Work in Progress

***TLDR***:
- A repository to help you get started building your own chatbots using LangChain
- All conversation content is traced via LangSmith for developer evaluation
- Build on top of an already robust chatbot that reasons before responding, and has access to tools
- Customize a chatbot to respond exactly how you want
- Hostable for free through Streamlit Community Cloud
- API keys required
- Pay-per-use ChatGPT style interface
@@ -14,10 +14,6 @@ Build your own personal chatbot interface with Streamlit!
- [Quickstart](#quickstart)
- [Installation](#installation)
- [Description](#description)
- [Key Concepts](#key-concepts)
- [What can I do with FreeStream?](#what-can-i-do-with-freestream)
- [Functional Requirements](#functional-requirements)
- [Non-Functional Requirements](#non-functional-requirements)
- [License](./LICENSE)
- [LLM Providers' Privacy Policies](#llm-providers-privacy-policies)

@@ -42,16 +38,9 @@ Run:
poetry install
```

You will need to set all required secrets, which require their own respective accounts.
You will need to set all required secrets (API keys), each of which requires its own account.
Make a copy of "template.secrets.toml" and rename it to "secrets.toml" in the root of the project. Fill out each field in the file.
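For reference, a filled-out `secrets.toml` might look like the following (the key values are placeholders, and the exact sections depend on which providers you use):

```toml
[DEEPSEEK]
DEEPSEEK_API_KEY = "sk-..."

[OPENAI]
openai_api_key = "sk-..."
```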

**Need API Keys?**
| **API Platform** | **Link** |
| ---- | ---------- |
| Claude | https://console.anthropic.com/ |
| Google | https://aistudio.google.com/app/apikey |
| Langchain | https://smith.langchain.com/ |
| OpenAI | https://platform.openai.com/api-keys |

You can then start the development server with hot reloading by running:

@@ -62,11 +51,9 @@ poetry run streamlit run ./freestream/🏡_Home.py
---

## Description
I originally created this project as a chatbot for law and medical professionals, but I quickly realized a more flexible system would benefit everyone.

#### **Key Concepts**
Just a freaking robot.

*Related to extending the capabilities of generative AI.*
*Here are some papers I found interesting when first learning about generative AI and "augmented generation."*
| **Concept** | **Definition** |
| ---- | ---------- |
| [Large Language Model](https://en.wikipedia.org/wiki/Large_language_model "Wikipedia: Large language model") | A model that can generate text. |
@@ -76,34 +63,12 @@ I originally created this project as a chatbot for law and medical professionals
| [ColBERT](https://arxiv.org/abs/2004.12832 "Arxiv: 2004.12832") | Efficient BERT-Based Document Search |
| [RAPTOR](https://arxiv.org/abs/2401.18059 "Arxiv: 2401.18059") | Recursive Abstractive Processing for Tree-Organized Retrieval |

### What can I do with FreeStream?

FreeStream has two chatbots where you can interact with an LLM of your choosing, for example, GPT-4o or Claude Opus. You can very easily add more LLMs to the chatbot dictionary, like Llama 3 via [Ollama](https://ollama.com/ "Ollama home page"), or Gemini-Pro through LangChain's [`ChatGoogleGenerativeAI`](https://api.python.langchain.com/en/latest/chat_models/langchain_google_genai.chat_models.ChatGoogleGenerativeAI.html "LangChain API Docs"). The original chatbot for this project was "RAGbot," which allows you to ask questions about your uploaded file(s). Curie is geared more toward programming and self-learning.
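The drop-in model dictionary described above can be sketched as follows. This is a minimal illustration: in the app the dictionary values are LangChain chat-model instances such as `ChatOpenAI` or `ChatGoogleGenerativeAI`, so the `make_model` factory here is only a hypothetical stand-in.

```python
# Minimal sketch of the drop-in model registry pattern.
# `make_model` is a hypothetical stand-in for a LangChain chat-model
# constructor such as ChatOllama(model=...) or ChatOpenAI(model=...).
def make_model(name):
    return {"model": name, "streaming": True}

openai_models = {"GPT-4o": make_model("gpt-4o")}
ollama_models = {"Llama 3": make_model("llama3")}

# Merge whichever providers are configured into one master dictionary;
# the Streamlit selectbox is then populated from its keys.
model_names = {}
model_names.update(openai_models)
model_names.update(ollama_models)

print(sorted(model_names))  # → ['GPT-4o', 'Llama 3']
```

Swapping the selected model mid-conversation then amounts to looking up a different key in `model_names`.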

#### Functional Requirements

The application **MUST**...
1. Provide a user interface for chatting with large language models.
2. Have a retrieval augmented generative chatbot.
3. Provide a range of chatbot pages, differentiated by their prompt engineering.
4. Let the user "drop-in" their choice of LLM at any time during a conversation.
5. ~~Allow users to perform image upscaling (PDF, JPEG, PNG) without limits.~~

#### Non-Functional Requirements

The application **SHOULD**...
1. Aim for 24/7 availability.
2. Prioritize ease of navigation.
3. Feature a visually appealing, seamless interface.

---

# [License](./LICENSE)

# LLM Providers' Privacy Policies

- [OpenAI Privacy Policy](https://openai.com/policies/privacy-policy)
- [Google](https://transparency.google/our-policies/privacy-policy-terms-of-service/ "Was unable to find a privacy policy specific to Google AI Studio.")
- [Anthropic](https://support.anthropic.com/en/articles/7996866-how-long-do-you-store-personal-data "Support forum response that may suddenly be obsoleted.")
- [DeepSeek Privacy Policy](https://cdn.deepseek.com/policies/en-US/deepseek-privacy-policy.html)
- [Streamlit](https://streamlit.io/privacy-policy/)

@@ -4,7 +4,6 @@
import streamlit as st
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain_anthropic import ChatAnthropic
from langchain_community.chat_message_histories import \
StreamlitChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
@@ -108,38 +107,38 @@
# )
}

anthropic_models = {
"Claude: Haiku": ChatAnthropic(
model="claude-3-haiku-20240307",
anthropic_api_key=anthropic_api_key,
temperature=temperature_slider,
streaming=True,
max_tokens=4096,
),
"Claude 3.5: Sonnet": ChatAnthropic(
model="claude-3-5-sonnet-20240620",
anthropic_api_key=anthropic_api_key,
temperature=temperature_slider,
streaming=True,
max_tokens=4096,
),
# "Claude: Opus": ChatAnthropic(
# model="claude-3-opus-20240229",
# anthropic_api_key=anthropic_api_key,
# temperature=temperature_slider,
# streaming=True,
# max_tokens=4096,
# ),
}
# anthropic_models = {
# "Claude: Haiku": ChatAnthropic(
# model="claude-3-haiku-20240307",
# anthropic_api_key=anthropic_api_key,
# temperature=temperature_slider,
# streaming=True,
# max_tokens=4096,
# ),
# "Claude 3.5: Sonnet": ChatAnthropic(
# model="claude-3-5-sonnet-20240620",
# anthropic_api_key=anthropic_api_key,
# temperature=temperature_slider,
# streaming=True,
# max_tokens=4096,
# ),
# "Claude: Opus": ChatAnthropic(
# model="claude-3-opus-20240229",
# anthropic_api_key=anthropic_api_key,
# temperature=temperature_slider,
# streaming=True,
# max_tokens=4096,
# ),
# }

# Master dictionary
model_names = {}

# Update model master dictionary based on present API keys
if openai_api_key:
model_names.update(openai_models)
if anthropic_api_key:
model_names.update(anthropic_models)
# if anthropic_api_key:
# model_names.update(anthropic_models)

# Create a dropdown menu for selecting a chat model
selected_model = st.selectbox(
89 changes: 89 additions & 0 deletions freestream/pages/.archive/🏡_Home.py
@@ -0,0 +1,89 @@
import datetime

import streamlit as st
from langchain_deepseek import ChatDeepSeek
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage

from freestream import footer
from pages import save_conversation_history

st.set_page_config(
    page_title="FreeStream: A basic chatbot to build on.", page_icon="🏡"
)

st.title("FreeStream")
st.header(":green[_A basic chatbot to build on._]", divider="red")
st.markdown(footer, unsafe_allow_html=True)

DEEPSEEK_API_KEY = st.sidebar.text_input("DeepSeek API Key", type="password")
# Button to clear conversation history
if st.sidebar.button("Clear message history", use_container_width=True):
    st.session_state.clear()


# Initialize the chat model (but only if an API key is provided)
if DEEPSEEK_API_KEY:
    chat_model = ChatDeepSeek(
        temperature=0.5,
        api_key=DEEPSEEK_API_KEY,
        model="deepseek-chat",
        max_tokens=128,
        streaming=True,
    )

# Initialize session state for messages
if "messages" not in st.session_state:
    st.session_state.messages = []


# Display chat history
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Save the formatted conversation history to a variable
formatted_history = save_conversation_history(st.session_state.messages)
current_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
# Create a sidebar button to download the conversation history
st.sidebar.download_button(
    label="Download conversation history",
    data=formatted_history,
    file_name=f"conversation_history {current_time}.md",
    mime="text/markdown",
    key="download_conversation_history_button",
    help="Download the conversation history as a Markdown file with some formatting.",
    use_container_width=True,
)

# Handle user input
if user_input := st.chat_input("type here<3"):
    # Guard: chat_model only exists once an API key has been entered
    if not DEEPSEEK_API_KEY:
        st.info("Please enter your DeepSeek API key in the sidebar first.")
        st.stop()

    # Display user message
    with st.chat_message("user"):
        st.markdown(user_input)

    # Add user message to history
    st.session_state.messages.append({"role": "user", "content": user_input})

    # Display assistant response
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        full_response = ""

        # Convert messages to LangChain format
        lc_messages = []
        for msg in st.session_state.messages:
            if msg["role"] == "user":
                lc_messages.append(HumanMessage(content=msg["content"]))
            elif msg["role"] == "assistant":
                lc_messages.append(AIMessage(content=msg["content"]))
            elif msg["role"] == "system":
                lc_messages.append(SystemMessage(content=msg["content"]))

        # Stream the response
        for chunk in chat_model.stream(lc_messages):
            if hasattr(chunk, "content"):
                full_response += chunk.content
                message_placeholder.markdown(full_response + "▌")

        message_placeholder.markdown(full_response)

    # Add assistant response to history
    st.session_state.messages.append({"role": "assistant", "content": full_response})
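The `save_conversation_history` helper imported above lives elsewhere in the package; as a rough sketch of what such a formatter might do (a hypothetical implementation — the real one may differ):

```python
def save_conversation_history(messages):
    """Format chat messages as Markdown for download.

    Hypothetical sketch: `messages` is a list of
    {"role": ..., "content": ...} dicts, as used in the page above.
    """
    lines = []
    for msg in messages:
        lines.append(f"**{msg['role'].capitalize()}**\n\n{msg['content']}\n")
    return "\n".join(lines)

history = save_conversation_history(
    [{"role": "user", "content": "Hi"}, {"role": "assistant", "content": "Hello!"}]
)
print(history)
```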
4 changes: 2 additions & 2 deletions freestream/pages/utils/chatbot_operators.py
@@ -5,7 +5,6 @@
from typing import List

import streamlit as st
import torch
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import UnstructuredFileLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
@@ -47,7 +46,8 @@ def __init__(self):
)
self.embeddings = HuggingFaceEmbeddings(
model_name="all-MiniLM-L6-v2",
model_kwargs={"device": "cuda" if torch.cuda.is_available() else "cpu"},
# model_kwargs={"device": "cuda" if torch.cuda.is_available() else "cpu"},
model_kwargs={"device": "cpu"},
)

@st.cache_resource(ttl="1h")
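The hunk above hardcodes the embedding device to CPU and drops the `torch` import. If GPU support is ever wanted again without a hard `torch` dependency, one option (a sketch, not part of this PR) is a lazy fallback:

```python
def pick_device() -> str:
    """Return "cuda" when torch reports a usable GPU, else "cpu".

    Falls back to CPU when torch is not installed at all, so the
    embeddings code keeps working without the dependency.
    """
    try:
        import torch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        return "cpu"

# e.g. HuggingFaceEmbeddings(model_kwargs={"device": pick_device()})
print(pick_device())
```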
4 changes: 2 additions & 2 deletions freestream/pages/utils/streamlit_operators.py
@@ -4,7 +4,7 @@
import streamlit as st

# Define a function to change the background to an image via URL
# https://discuss.streamlit.io/t/how-do-i-use-a-background-image-on-streamlit/5067/19?u=daethyra
# https://discuss.streamlit.io/t/how-do-i-use-a-background-image-on-streamlit/5067/19
def set_bg_url():
"""
A function to unpack an image from url and set as bg.
@@ -27,7 +27,7 @@ def set_bg_url():


# Define a function to change the background to a local image
# https://discuss.streamlit.io/t/how-do-i-use-a-background-image-on-streamlit/5067/16?u=daethyra
# https://discuss.streamlit.io/t/how-do-i-use-a-background-image-on-streamlit/5067/16
def set_bg_local(main_bg):
"""
A function to unpack an image from root folder and set as bg.
80 changes: 0 additions & 80 deletions freestream/🏡_Home.py

This file was deleted.
