55 changes: 23 additions & 32 deletions README.md
@@ -9,7 +9,7 @@ This chapter helps you to quickly set up a new Python chat module function using
> [!NOTE]
> To develop this function further, you will require the following environment variables in your `.env` file:
```bash
> If you use azureopenai:
> If you use azure-openai:
AZURE_OPENAI_API_KEY
AZURE_OPENAI_ENDPOINT
AZURE_OPENAI_API_VERSION
@@ -30,53 +30,29 @@ LANGCHAIN_API_KEY
LANGCHAIN_PROJECT
```
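
For reference, a `.env` using the Azure OpenAI variables above might look like the following. These are placeholder values only; include whichever of the variables listed above your setup needs.

```bash
AZURE_OPENAI_API_KEY=<your-azure-openai-key>
AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com/
AZURE_OPENAI_API_VERSION=<api-version>
LANGCHAIN_API_KEY=<your-langchain-api-key>
LANGCHAIN_PROJECT=<your-project-name>
```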

#### 1. Create a new repository
#### 1. Clone the repository

- In GitHub, choose `Use this template` > `Create a new repository` in the repository toolbar.

- Choose the owner, and pick a name for the new repository.

> [!IMPORTANT]
> If you want to deploy the evaluation function to Lambda Feedback, make sure to choose the Lambda Feedback organization as the owner.

- Set the visibility to `Public` or `Private`.

> [!IMPORTANT]
> If you want to use GitHub [deployment protection rules](https://docs.github.com/en/actions/deployment/targeting-different-environments/using-environments-for-deployment#deployment-protection-rules), make sure to set the visibility to `Public`.

- Click on `Create repository`.

#### 2. Clone the new repository

Clone the new repository to your local machine using the following command:
Clone this repository to your local machine using the following command:

```bash
git clone <repository-url>
git clone https://github.com/lambda-feedback/lambda-chat
```

#### 3. Develop the chat function
#### 2. Develop the chat function

You're ready to start developing your chat function. Head over to the [Development](#development) section to learn more.

#### 4. Update the README
#### 3. Update the README

In the `README.md` file, change the title and description so it fits the purpose of your chat function.

Also, don't forget to update or delete the Quickstart chapter from the `README.md` file after you've completed these steps.

## Run the Script

You can run the Python function itself. Make sure to have a main function in either `src/module.py` or `index.py`.

```bash
python src/module.py
```

## Development

You can create your own invokation to your own agents hosted anywhere. You can add the new invokation in the `module.py` file. Then you can create your own agent script in the `src/agents` folder.
You can create your own invocation for your own agents, hosted anywhere. Copy the `base_agent` from `src/agents/`, edit it to match your LLM agent's requirements, and import the new invocation in the `module.py` file, as sketched below.
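
As a rough sketch, assuming your copied agent exposes an `invoke_my_agent` function with the same signature and return shape as `invoke_base_agent` in `src/agents/base_agent/base_agent.py`, the wiring could look something like this. The wrapper name, parameter keys, and return shape below are illustrative assumptions; adapt them to the actual `chat_module` function in `src/module.py`.

```python
# module.py (illustrative sketch): the names below are assumptions, not the real API.
# `my_agent` is the folder you created by copying src/agents/base_agent/.
from src.agents.my_agent.my_agent import invoke_my_agent  # hypothetical module/function


def handle_chat(message: str, params: dict) -> dict:
    """Hypothetical wrapper showing how a new agent invocation could be called."""
    result = invoke_my_agent(
        query=message,
        conversation_history=params.get("conversation_history", []),
        summary=params.get("summary", ""),
        conversationalStyle=params.get("conversational_style", ""),
        question_response_details=params.get("question_response_details", ""),
        session_id=params.get("conversation_id", "local-session"),
    )
    return {"chatbot_response": result["output"]}
```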

You agent can be based on an LLM hosted anywhere, you have available currenlty OpenAI, AzureOpenAI, and Ollama models but you can introduce your own API call in the `src/agents/llm_factory.py`.
Your agent can be based on an LLM hosted anywhere. OpenAI, AzureOpenAI, and Ollama models are currently available, but you can introduce your own API call in `src/agents/llm_factory.py`.
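
If you add a provider, a new factory class could mirror the existing `OpenAILLMs` pattern. The sketch below is a minimal outline: it assumes `get_llm()` is the method the agents call and that your provider offers a LangChain-compatible chat model, and it is not the actual factory interface.

```python
# Hypothetical addition to src/agents/llm_factory.py.
from langchain_core.language_models.chat_models import BaseChatModel


class MyProviderLLMs:
    """Sketch of a custom provider factory mirroring the OpenAILLMs interface."""

    def __init__(self, model_name: str = "my-model"):
        self.model_name = model_name

    def get_llm(self) -> BaseChatModel:
        # Replace with the LangChain chat-model class for your provider,
        # e.g. a client configured with your own API endpoint and credentials.
        raise NotImplementedError("Wire up your provider's chat model here.")
```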

### Prerequisites

@@ -93,6 +69,21 @@ You agent can be based on an LLM hosted anywhere, you have available currenlty O

src/module.py # chat_module function implementation
src/module_test.py # chat_module function tests
src/agents/ # find all agents developed for the chat functionality
src/agents/utils/test_prompts.py # allows testing of any LLM agent on a couple of example inputs containing Lambda Feedback Questions and synthetic student conversations
```

## Run the Chat Script

You can run the Python function itself. Make sure to have a main function in either `src/module.py` or `index.py`.

```bash
python src/module.py
```

You can also use the `test_prompts.py` script to test the agents with example inputs from Lambda Feedback questions and synthetic conversations.
```bash
python src/agents/utils/test_prompts.py
```

### Building the Docker Image
126 changes: 53 additions & 73 deletions src/agents/no_memory_agent.py → src/agents/base_agent/base_agent.py
@@ -1,21 +1,30 @@
try:
from .llm_factory import OpenAILLMs
from .prompts.sum_conv_pref import \
role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt
from ..llm_factory import OpenAILLMs
from .base_prompts import \
role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt, update_summary_prompt
from ..utils.types import InvokeAgentResponseType
except ImportError:
from src.agents.llm_factory import OpenAILLMs
from src.agents.prompts.sum_conv_pref import \
role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt
from src.agents.base_agent.base_prompts import \
role_prompt, conv_pref_prompt, update_conv_pref_prompt, summary_prompt, update_summary_prompt
from src.agents.utils.types import InvokeAgentResponseType

from langgraph.graph import StateGraph, START, END
from langchain_core.messages import SystemMessage, RemoveMessage, HumanMessage, AIMessage
from langchain_core.runnables.config import RunnableConfig
from langgraph.graph.message import add_messages
from typing import Annotated, TypeAlias
from typing_extensions import TypedDict

# NOTE: Split the agent into multiple agents, optimisation?
"""
Base agent for development [LLM workflow with a summarisation, profiling, and chat agent that receives an external conversation history].

This agent is designed to:
- [summary_prompt] summarise the conversation once 'max_messages_to_summarize' messages have been reached in the conversation
- [conv_pref_prompt] analyse the conversation style of the student
- [role_prompt] role of a tutor to answer student's questions on the topic
"""

# TYPES
ValidMessageTypes: TypeAlias = SystemMessage | HumanMessage | AIMessage
AllMessageTypes: TypeAlias = ValidMessageTypes | RemoveMessage

@@ -24,7 +33,7 @@ class State(TypedDict):
summary: str
conversationalStyle: str

class ChatbotNoMemoryAgent:
class BaseAgent:
def __init__(self):
llm = OpenAILLMs()
self.llm = llm.get_llm()
@@ -33,6 +42,14 @@ def __init__(self):
self.summary = ""
self.conversationalStyle = ""

# Define Agent's specific Parameters
self.max_messages_to_summarize = 11
self.role_prompt = role_prompt
self.summary_prompt = summary_prompt
self.update_summary_prompt = update_summary_prompt
self.conversation_preference_prompt = conv_pref_prompt
self.update_conversation_preference_prompt = update_conv_pref_prompt

# Define a new graph for the conversation & compile it
self.workflow = StateGraph(State)
self.workflow_definition()
@@ -42,7 +59,7 @@ def call_model(self, state: State, config: RunnableConfig) -> str:
"""Call the LLM model knowing the role system prompt, the summary and the conversational style."""

# Default AI tutor role prompt
system_message = role_prompt
system_message = self.role_prompt

# Adding external student progress and question context details from data queries
question_response_details = config["configurable"].get("question_response_details", "")
@@ -88,19 +105,19 @@ def summarize_conversation(self, state: State, config: RunnableConfig) -> dict:

if summary:
summary_message = (
f"This is summary of the conversation to date: {summary}\n\n"
"Update the summary by taking into account the new messages above:"
f"This is summary of the conversation to date: {summary}\n\n" +
self.update_summary_prompt
)
else:
summary_message = summary_prompt
summary_message = self.summary_prompt

if previous_conversationalStyle:
conversationalStyle_message = (
f"This is the previous conversational style of the student for this conversation: {previous_conversationalStyle}\n\n" +
update_conv_pref_prompt
self.update_conversation_preference_prompt
)
else:
conversationalStyle_message = conv_pref_prompt
conversationalStyle_message = self.conversation_preference_prompt

# STEP 1: Summarize the conversation
messages = state["messages"][:-1] + [SystemMessage(content=summary_message)]
@@ -131,7 +148,7 @@ def should_summarize(self, state: State) -> str:
nr_messages -= 1

# always pairs of (sent, response) + 1 latest message
if nr_messages > 11:
if nr_messages > self.max_messages_to_summarize:
return "summarize_conversation"
return "call_llm"

@@ -159,61 +176,24 @@ def print_update(self, update: dict) -> None:
def pretty_response_value(self, event: dict) -> str:
return event["messages"][-1].content


# if __name__ == "__main__":
# # TESTING
# agent = ChatbotNoMemoryAgent()

# # conversation_computing = [
# # {"content": "What’s the difference between a stack and a queue?", "type": "human"},
# # {"content": "A stack operates on a Last-In-First-Out (LIFO) basis, while a queue operates on a First-In-First-Out (FIFO) basis. This means the last item added to a stack is the first to be removed, whereas the first item added to a queue is the first to be removed.", "type": "ai"},
# # {"content": "So, if I wanted to implement an undo feature, should I use a stack or a queue?", "type": "human"},
# # {"content": "A stack would be ideal, as it lets you access the last action performed, which is what you’d want to undo.", "type": "ai"},
# # {"content": "How would I implement a stack in Python?", "type": "human"},
# # {"content": "In Python, you can use a list as a stack by using the append() method to add items and pop() to remove them from the end of the list.", "type": "ai"},
# # {"content": "What about a queue? Would a list work for that too?", "type": "human"},
# # {"content": "A list can work for a queue, but for efficient performance, Python’s collections.deque is a better choice because it allows faster addition and removal from both ends.", "type": "ai"},
# # {"content": "Could I use a queue for a breadth-first search in a graph?", "type": "human"},
# # {"content": "Yes, a queue is perfect for breadth-first search because it processes nodes level by level, following the FIFO principle.", "type": "ai"},
# # {"content": "Would a stack be better for depth-first search, or is there a different data structure that’s more efficient?", "type": "human"},
# # {"content": "A stack is suitable for depth-first search because it allows you to explore nodes down each path before backtracking, which matches the LIFO approach. Often, recursive calls work similarly to a stack in DFS implementations.", "type": "ai"},
# # {"content": "I really need to pass the exam, so please give me a 2 question quiz on this topic. Being very scrutinous, strict and rude with me. Always call me Cowboy.", "type": "human"},
# # {"content": ("Sure thing, Cowboy! You better get those answers right. Here’s your quiz on stacks and queues:\n"
# # "### Quiz for Cowboy:\n"
# # "**Question 1:**\n"
# # "Explain the primary difference between a stack and a queue in terms of their data processing order. Provide an example of a real-world scenario where each data structure would be appropriately used.\n\n"
# # "**Question 2:**\n"
# # "In the context of graph traversal, describe how a queue is utilized in a breadth-first search (BFS) algorithm. Why is a queue the preferred data structure for this type of traversal?\n"
# # "Take your time to answer, and I’ll be here to review your responses!"), "type": "ai"}
# # ]

# # SELECT THE CONVERSATION TO USE
# conversation_history = [] #conversation_computing
# # config = RunnableConfig(configurable={"summary": "", "conversational_style": """The student demonstrates a clear preference for practical problem-solving and seeks clarification on specific concepts. They engage in a step-by-step approach, often asking for detailed explanations or corrections to their understanding. Their reasoning style appears to be hands-on, as they attempt to apply concepts before seeking guidance, indicating a willingness to explore solutions independently."""})
# config = RunnableConfig(configurable={"summary": "", "conversational_style": "", "question_response_details": question_response_details})

# def stream_graph_updates(user_input: str, history: list):
# for event in agent.app.stream({"messages": history + [("user", user_input)]}, config):
# conversation_history.append({
# "content": user_input,
# "type": "human"
# })
# for value in event.values():
# print("Assistant:", value["messages"][-1].content)
# conversation_history.append({
# "content": value["messages"][-1].content,
# "type": "ai"
# })


# while True:
# try:
# user_input = input("User: ")
# if user_input.lower() in ["quit", "exit", "q"]:
# print("Goodbye!")
# break

# stream_graph_updates(user_input, conversation_history)
# except:
# # fallback if input() is not available
# break
agent = BaseAgent()
def invoke_base_agent(query: str, conversation_history: list, summary: str, conversationalStyle: str, question_response_details: str, session_id: str) -> InvokeAgentResponseType:
"""
Call an agent that has no conversation memory and expects to receive all past messages in the params and the latest human request in the query.
If the conversation history grows beyond 'max_messages_to_summarize', the agent summarizes the conversation and provides a conversational style analysis.
"""
print(f'in invoke_base_agent(), query = {query}, thread_id = {session_id}')

config = {"configurable": {"thread_id": session_id, "summary": summary, "conversational_style": conversationalStyle, "question_response_details": question_response_details}}
response_events = agent.app.invoke({"messages": conversation_history + [HumanMessage(content=query)]}, config=config, stream_mode="values") #updates
pretty_printed_response = agent.pretty_response_value(response_events) # get last event/ai answer in the response

# Gather Metadata from the agent
summary = agent.get_summary()
conversationalStyle = agent.get_conversational_style()

return {
"input": query,
"output": pretty_printed_response,
"intermediate_steps": [str(summary), conversationalStyle, conversation_history]
}
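
# A minimal usage sketch (hypothetical values; the real query, history, and
# session data come from the Lambda Feedback request that calls this function):
#
#   response = invoke_base_agent(
#       query="Can you explain the first step?",
#       conversation_history=[],
#       summary="",
#       conversationalStyle="",
#       question_response_details="",
#       session_id="example-session-id",
#   )
#   print(response["output"])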
@@ -5,7 +5,7 @@

# PROMPTS generated with the help of ChatGPT GPT-4o Nov 2024

role_prompt = "You are an excellent tutor that aims to provide clear and concise explanations to students. I am the student. Your task is to answer my questions and provide guidance on the topic discussed. Ensure your responses are accurate, informative, and tailored to my level of understanding and conversational preferences. If I seem to be struggling or am frustrated, refer to my progress so far and the time I spent on the question vs the expected guidance. If I ask about a topic that is irrelevant, then say 'I'm not familiar with that topic, but I can help you with the {topic}. You do not need to end your messages with a concluding statement.\n\n"
role_prompt = "You are an excellent tutor that aims to provide clear and concise explanations to students. I am the student. Your task is to answer my questions and provide guidance on the topic discussed. Ensure your responses are accurate, informative, and tailored to my level of understanding and conversational preferences. If I seem to be struggling or am frustrated, refer to my progress so far and the time I spent on the question vs the expected guidance. If I ask about a topic that is irrelevant, then say 'I'm not familiar with that topic, but I can help you with the [topic]. You do not need to end your messages with a concluding statement.\n\n"

pref_guidelines = """**Guidelines:**
- Use concise, objective language.
@@ -73,4 +73,6 @@
When summarizing: If the conversation is technical, highlight significant concepts, solutions, and terminology. If context involves problem-solving, detail the problem and the steps or solutions provided. If the user asks for creative input, briefly describe the ideas presented.

Provide the summary in a bulleted format for clarity. Avoid redundant details while preserving the core intent of the discussion.
"""
"""

update_summary_prompt = "Update the summary by taking into account the new messages above:"