212 changes: 170 additions & 42 deletions README.md
@@ -2,6 +2,83 @@

"The Tutor" is an intelligent tutoring platform designed to provide students with personalized educational support through interactive AI-driven experiences. The platform leverages both text-based and avatar-based interactions to engage students, offering a wide variety of learning aids such as real-time question answering, essay evaluation, and interactive conversation. By integrating advanced large, visual and multimodal language models, "The Tutor" helps students improve their learning outcomes while providing teachers with insightful evaluation tools.

---

## Getting Started

### Prerequisites

- **OS:** Windows, Linux, or macOS
- **Python:** 3.10+
- **Node.js:** 18+
- **Docker:** (for building backend containers)
- **Azure CLI:** (for cloud deployment)
- **Git**

### Installation

1. **Clone the repository:**

```pwsh
git clone <repository-url>
cd tutor
```

2. **Install Python dependencies:**

```pwsh
pip install poetry
poetry install
```

3. **Install frontend dependencies:**

```pwsh
cd src/frontend
npm install
```

### Quickstart (Local)

1. **Run backend services:**

For each backend app (avatar, essays, questions, configuration), run the following in a separate terminal (a port-assignment sketch follows this list):

```pwsh
cd src/<app>/app
poetry run uvicorn main:app --reload
```

See each app's README for details: [avatar](src/avatar/README.md), [essays](src/essays/README.md), [questions](src/questions/README.md), [configuration](src/configuration/README.md).

2. **Run the frontend:**

```pwsh
cd src/frontend
npm run dev
```

Open [http://localhost:3000](http://localhost:3000) in your browser.
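
By default each `uvicorn` instance binds to the same port, so the four backends cannot run side by side without overriding it. A minimal port-assignment sketch, assuming ports 8001-8004 (placeholders; any free ports work, as long as the frontend is pointed at them):

```pwsh
# Run each backend from the repository root, each in its own terminal.
# Port numbers are assumptions, not values defined in this repo.
cd src/avatar/app;        poetry run uvicorn main:app --reload --port 8001
cd src/essays/app;        poetry run uvicorn main:app --reload --port 8002
cd src/questions/app;     poetry run uvicorn main:app --reload --port 8003
cd src/configuration/app; poetry run uvicorn main:app --reload --port 8004
```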

### Quickstart (Azure Cloud)

1. **Provision Azure resources:**

Use the Bicep files in [`infra/`](infra/main.bicep) to deploy all required Azure resources:

```pwsh
az deployment sub create --location <location> --template-file infra/main.bicep --parameters rgName=<resource-group> location=<location> environment=prod
```

2. **Build and push backend containers:**
- Build Docker images for each backend app and push them to the Azure Container Registry (ACR) provisioned by the infra scripts (see the sketch after this list).
3. **Configure environment variables and secrets:**
- Store all sensitive configuration (API keys, connection strings) in Azure Key Vault as referenced in the Bicep modules.
4. **Deploy frontend:**
- The frontend is deployed as an Azure Static Web App, as defined in the infra scripts. Push your code to the configured repository or deploy manually if needed.
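
The exact registry and image names depend on your infra deployment; as a minimal sketch of step 2 for the avatar backend, with `<acr-name>` and the image tag as placeholders (repeat per backend, adjusting the path):

```pwsh
# Log in to the ACR provisioned by infra/main.bicep (registry name is a placeholder)
az acr login --name <acr-name>

# Build and push one backend image; assumes a Dockerfile under src/avatar
docker build -t <acr-name>.azurecr.io/tutor-avatar:latest src/avatar
docker push <acr-name>.azurecr.io/tutor-avatar:latest
```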

---

## Core Architecture and Components

![Core Architecture](./.assets/architecture.png)
@@ -28,6 +105,14 @@ The evaluation of student responses is carried out by the Evaluation Model, whic

An essential feature of "The Tutor" is its Conversation History module, which stores previous interactions to maintain continuity in conversations. The Avatar Questions Memory ensures that the AI avatar can recall and build upon previous queries posed by the student, providing a coherent and personalized learning journey. The Conversation Preprocessor also uses natural language processing to enhance the quality and relevance of the conversation before passing it to the AI engine for response generation.

---

## Demo App

A demo app is included to show how to use the project. To run the demo locally, follow the Quickstart steps above. For a cloud demo, deploy using the Azure instructions.

---

## Business Goals and Use Cases

"The Tutor" can be applied in a variety of scenarios.
@@ -52,48 +137,91 @@ Through the Professor Dashboard, educators are provided with real-time insights

By using conversation history and memory, "The Tutor" ensures continuity in the learning process, helping students build upon previous sessions and facilitating long-term retention of knowledge.

---

## Resources

- [Frontend README](src/frontend/README.md)
- [Avatar Backend README](src/avatar/README.md)
- [Essays Backend README](src/essays/README.md)
- [Questions Backend README](src/questions/README.md)
- [Configuration Backend README](src/configuration/README.md)
- [Infrastructure Bicep](infra/main.bicep)
- [Changelog](CHANGELOG.md)
- [Contributing Guide](CONTRIBUTING.md)
- [Security Policy](SECURITY.md)

---

## App Use Case Diagrams

### Frontend (Next.js)

```mermaid
flowchart TD
User((User)) -->|Web| Frontend["Tutor Frontend (Next.js)"]
Frontend -->|API Calls| AvatarAPI["Avatar API"]
Frontend -->|API Calls| EssaysAPI["Essays API"]
Frontend -->|API Calls| QuestionsAPI["Questions API"]
Frontend -->|API Calls| ConfigAPI["Configuration API"]
```

### Avatar Backend

```mermaid
sequenceDiagram
participant User
participant Frontend
participant AvatarAPI
participant AzureSpeech
participant AzureOpenAI
User->>Frontend: Start avatar chat
Frontend->>AvatarAPI: Send message
AvatarAPI->>AzureSpeech: Synthesize/recognize speech
AvatarAPI->>AzureOpenAI: Get AI response
AvatarAPI->>Frontend: Return avatar response
Frontend->>User: Show avatar reply
```

### Essays Backend

```mermaid
sequenceDiagram
participant User
participant Frontend
participant EssaysAPI
participant AzureOpenAI
User->>Frontend: Submit essay
Frontend->>EssaysAPI: POST /essays
EssaysAPI->>AzureOpenAI: Evaluate essay
EssaysAPI->>Frontend: Return feedback
Frontend->>User: Show evaluation
```
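
As a concrete illustration of the flow above, a hedged request to the essay endpoint; `POST /essays` is taken from the diagram, while the host, port, and body fields are assumptions rather than the documented schema (see the Essays backend README):

```pwsh
# Assumed port and payload shape; check src/essays for the real contract
Invoke-RestMethod -Method Post -Uri "http://localhost:8002/essays" `
  -ContentType "application/json" `
  -Body (@{ essay = "The Industrial Revolution transformed European society because..." } | ConvertTo-Json)
```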

### Questions Backend

```mermaid
sequenceDiagram
participant User
participant Frontend
participant QuestionsAPI
participant AzureOpenAI
User->>Frontend: Answer question
Frontend->>QuestionsAPI: POST /grader/interaction
QuestionsAPI->>AzureOpenAI: Evaluate answer
QuestionsAPI->>Frontend: Return feedback
Frontend->>User: Show evaluation
```
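
Likewise for the grading flow; the `POST /grader/interaction` path comes from the diagram, and the body fields below are illustrative guesses only:

```pwsh
# Assumed port and payload shape; check src/questions for the real contract
Invoke-RestMethod -Method Post -Uri "http://localhost:8003/grader/interaction" `
  -ContentType "application/json" `
  -Body (@{ question = "What is photosynthesis?"; answer = "Plants use light to convert CO2 and water into glucose." } | ConvertTo-Json)
```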

### Configuration Backend

```mermaid
flowchart TD
Admin((Admin/Teacher)) -->|Web| Frontend
Frontend -->|API Calls| ConfigAPI["Configuration API"]
ConfigAPI -->|CRUD| CosmosDB[(Cosmos DB)]
```

---

For more details, see the linked READMEs in each app folder and the comments in [`infra/main.bicep`](infra/main.bicep).
29 changes: 29 additions & 0 deletions package-lock.json

Some generated files are not rendered by default.

5 changes: 5 additions & 0 deletions package.json
@@ -0,0 +1,5 @@
{
  "dependencies": {
    "react-icons": "^5.5.0"
  }
}
41 changes: 41 additions & 0 deletions src/avatar/README.md
@@ -0,0 +1,41 @@
# Tutor Avatar Backend

This service provides the avatar-based conversational engine for The Tutor platform.

## Objective

- Enable real-time, speech-based AI avatar interactions
- Provide context-aware, multimodal feedback

## Functionalities

- Real-time chat and speech synthesis
- Avatar memory and context management
- Integration with Azure Speech and OpenAI

## Infrastructure Requirements

- Python 3.10+
- FastAPI
- Azure Speech
- Azure OpenAI

## Running Locally

1. Install dependencies:

```pwsh
poetry install
```

2. Start the API:

```pwsh
poetry run uvicorn app.main:app --reload
```

## Deploying to Azure

- Build a Docker image and push to ACR
- Deploy as a container app using the provided Bicep infra
- Configure environment variables for Azure Speech and OpenAI
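
The variable names the app expects are not listed here; as a rough sketch, setting placeholder names on the deployed container app (align them with whatever the avatar app actually reads):

```pwsh
# Variable names and the container app name are placeholders, not values from this repo
az containerapp update --name tutor-avatar --resource-group <resource-group> `
  --set-env-vars AZURE_SPEECH_KEY=<key> AZURE_SPEECH_REGION=<region> `
  AZURE_OPENAI_ENDPOINT=<endpoint> AZURE_OPENAI_API_KEY=<key>
```
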
25 changes: 18 additions & 7 deletions src/avatar/app/avatar.py
@@ -92,14 +92,25 @@ def evaluate(self, avatar_data: dict, prompt_data: ChatResponse):
            user_prompt=prompt_data.prompt
        )

        chat_history = None
        if prompt_data.chat_history:
            for msg in prompt_data.chat_history:
                if msg.get("role") == "assistant":
                    messages.append({"role": "assistant", "content": msg.get("content")})
                elif msg.get("role") == "system":
                    messages.append({"role": "system", "content": msg.get("content")})
                elif msg.get("role") == "user":
                    messages.append({"role": "user", "content": msg.get("content")})
            # Parse chat history if it's a string
            if isinstance(prompt_data.chat_history, str):
                try:
                    chat_history = json.loads(prompt_data.chat_history)
                except json.JSONDecodeError:
                    pass
            else:
                chat_history = prompt_data.chat_history

        if chat_history:
            for msg in chat_history:
                if "assistant" in msg:
                    messages.append({"role": "assistant", "content": msg["assistant"]})
                elif "system" in msg:
                    messages.append({"role": "system", "content": msg["system"]})
                elif "user" in msg:
                    messages.append({"role": "user", "content": msg["user"]})

        response = self.client.complete(
            model=self.model_deployment,