3 changes: 1 addition & 2 deletions .github/workflows/check-regeneration.yml
@@ -62,8 +62,7 @@ jobs:
       - name: Activate pnpm version
         working-directory: test-proj/ui
         run: corepack prepare --activate
 
-
       - name: Run UI checks
         run: pnpm run all-check
-        working-directory: test-proj/ui
+        working-directory: test-proj/ui
2 changes: 0 additions & 2 deletions README.md
@@ -9,8 +9,6 @@ This application uses LlamaDeploy. For more information see [the docs](https://d

 1. install `uv` if you haven't `brew install uv`
 2. run `uvx llamactl serve`
 3. Visit http://localhost:4501/docs and see workflow APIs
-
-
 # Organization

1 change: 1 addition & 0 deletions pyproject.toml.jinja
@@ -20,6 +20,7 @@ build-backend = "hatchling.build"

 [dependency-groups]
 dev = [
+    "click==8.1.7",
     "hatch>=1.14.1",
     "pytest>=8.4.2",
     "ruff>=0.13.0",
16 changes: 9 additions & 7 deletions src/{{ project_name_snake }}/qa_workflows.py
@@ -1,10 +1,16 @@
 import logging
 import os
+import tempfile
 
 import httpx
+from dotenv import load_dotenv
 from llama_cloud.types import RetrievalMode
-import tempfile
+from llama_index.core import Settings
 from llama_index.core.chat_engine.types import BaseChatEngine, ChatMode
+from llama_index.core.memory import ChatMemoryBuffer
+from llama_index.embeddings.openai import OpenAIEmbedding
+from llama_index.llms.openai import OpenAI
+from llama_cloud_services import LlamaCloudIndex
 from workflows import Workflow, step, Context
 from workflows.events import (
     StartEvent,
@@ -15,12 +21,6 @@
 )
 from workflows.retry_policy import ConstantDelayRetryPolicy
 
-from llama_cloud_services import LlamaCloudIndex
-from llama_index.core import Settings
-from llama_index.llms.openai import OpenAI
-from llama_index.embeddings.openai import OpenAIEmbedding
-from llama_index.core.memory import ChatMemoryBuffer
-
 from .clients import (
     LLAMA_CLOUD_API_KEY,
     LLAMA_CLOUD_BASE_URL,
@@ -30,6 +30,8 @@
     LLAMA_CLOUD_PROJECT_ID,
 )
 
+load_dotenv()

Contributor:
What's this for? Doesn't really hurt, but this is automatically handled by llama deploy:

[tool.llamadeploy]
env-files = [".env"]

Contributor (author):
OPENAI_API_KEY somehow isn't able to be passed; that's why I added it back.

Contributor (author):
[screenshot]
Seems I found the root cause: right now we only read env_files, but the config key here is env-files.

+
 
 logger = logging.getLogger(__name__)
 
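The env_files/env-files mismatch called out in the thread above can be sketched in a few lines. This is a hypothetical illustration, not llama-deploy's actual loader: it assumes the [tool.llamadeploy] table is read from pyproject.toml with the standard tomllib module, and the function name is made up.

    # Hypothetical sketch: fold TOML's hyphenated keys into the underscored
    # names Python code looks up, so "env-files" and "env_files" both resolve.
    import tomllib

    def load_llamadeploy_config(path: str) -> dict:
        with open(path, "rb") as f:
            data = tomllib.load(f)
        section = data.get("tool", {}).get("llamadeploy", {})
        return {key.replace("-", "_"): value for key, value in section.items()}

    config = load_llamadeploy_config("pyproject.toml")
    env_files = config.get("env_files", [])  # finds either spelling

Normalizing once at parse time keeps both spellings working without touching every call site.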
5 changes: 2 additions & 3 deletions test-proj/.copier-answers.yml
@@ -1,6 +1,5 @@
 # Changes here will be overwritten by Copier; NEVER EDIT MANUALLY
-_commit: '2405947'
+_commit: c9f43f6
 _src_path: .
 llama_org_id: asdf
 llama_project_id: asdf
 project_name: test-proj
 project_title: Test Proj
2 changes: 0 additions & 2 deletions test-proj/README.md
@@ -9,8 +9,6 @@ This application uses LlamaDeploy. For more information see [the docs](https://d

 1. install `uv` if you haven't `brew install uv`
 2. run `uvx llamactl serve`
 3. Visit http://localhost:4501/docs and see workflow APIs
-
-
 # Organization

1 change: 1 addition & 0 deletions test-proj/pyproject.toml
@@ -20,6 +20,7 @@ build-backend = "hatchling.build"

 [dependency-groups]
 dev = [
+    "click==8.1.7",
     "hatch>=1.14.1",
     "pytest>=8.4.2",
     "ruff>=0.13.0",
16 changes: 9 additions & 7 deletions test-proj/src/test_proj/qa_workflows.py
@@ -1,10 +1,16 @@
 import logging
 import os
+import tempfile
 
 import httpx
+from dotenv import load_dotenv
 from llama_cloud.types import RetrievalMode
-import tempfile
+from llama_index.core import Settings
 from llama_index.core.chat_engine.types import BaseChatEngine, ChatMode
+from llama_index.core.memory import ChatMemoryBuffer
+from llama_index.embeddings.openai import OpenAIEmbedding
+from llama_index.llms.openai import OpenAI
+from llama_cloud_services import LlamaCloudIndex
 from workflows import Workflow, step, Context
 from workflows.events import (
     StartEvent,
@@ -15,12 +21,6 @@
 )
 from workflows.retry_policy import ConstantDelayRetryPolicy
 
-from llama_cloud_services import LlamaCloudIndex
-from llama_index.core import Settings
-from llama_index.llms.openai import OpenAI
-from llama_index.embeddings.openai import OpenAIEmbedding
-from llama_index.core.memory import ChatMemoryBuffer
-
 from .clients import (
     LLAMA_CLOUD_API_KEY,
     LLAMA_CLOUD_BASE_URL,
@@ -30,6 +30,8 @@
     LLAMA_CLOUD_PROJECT_ID,
 )
 
+load_dotenv()
+
 
 logger = logging.getLogger(__name__)
 
1 change: 1 addition & 0 deletions test-proj/ui/src/libs/config.ts
@@ -1,2 +1,3 @@
 export const APP_TITLE = "Test Proj";
 export const AGENT_NAME = import.meta.env.VITE_LLAMA_DEPLOY_DEPLOYMENT_NAME;
+export const INDEX_NAME = "document_qa_index";
12 changes: 2 additions & 10 deletions test-proj/ui/src/pages/Home.tsx
@@ -1,6 +1,6 @@
 import ChatBot from "../components/ChatBot";
 import { WorkflowTrigger } from "@llamaindex/ui";
-import { APP_TITLE } from "../libs/config";
+import { APP_TITLE, INDEX_NAME } from "../libs/config";
 
 export default function Home() {
   return (
@@ -20,18 +20,10 @@ export default function Home() {
         <div className="flex mb-4">
           <WorkflowTrigger
             workflowName="upload"
-            inputFields={[
-              {
-                key: "index_name",
-                label: "Index Name",
-                placeholder: "e.g. document_qa_index",
-                required: true,
-              },
-            ]}
             customWorkflowInput={(files, fieldValues) => {
               return {
                 file_id: files[0].fileId,
-                index_name: fieldValues.index_name,
+                index_name: INDEX_NAME,
               };
             }}
           />
3 changes: 3 additions & 0 deletions ui/src/libs/config.ts
@@ -0,0 +1,3 @@
+export const APP_TITLE = "Test Project";
+export const AGENT_NAME = import.meta.env.VITE_LLAMA_DEPLOY_DEPLOYMENT_NAME;
+export const INDEX_NAME = "document_qa_index";
1 change: 1 addition & 0 deletions ui/src/libs/config.ts.jinja
@@ -1,2 +1,3 @@
 export const APP_TITLE = "{{ project_title }}";
 export const AGENT_NAME = import.meta.env.VITE_LLAMA_DEPLOY_DEPLOYMENT_NAME;
+export const INDEX_NAME = "document_qa_index";
12 changes: 2 additions & 10 deletions ui/src/pages/Home.tsx
@@ -1,6 +1,6 @@
 import ChatBot from "../components/ChatBot";
 import { WorkflowTrigger } from "@llamaindex/ui";
-import { APP_TITLE } from "../libs/config";
+import { APP_TITLE, INDEX_NAME } from "../libs/config";
 
 export default function Home() {
   return (
@@ -20,18 +20,10 @@ export default function Home() {
         <div className="flex mb-4">
           <WorkflowTrigger
             workflowName="upload"
-            inputFields={[
-              {
-                key: "index_name",
-                label: "Index Name",
-                placeholder: "e.g. document_qa_index",
-                required: true,
-              },
-            ]}
             customWorkflowInput={(files, fieldValues) => {
               return {
                 file_id: files[0].fileId,
-                index_name: fieldValues.index_name,
+                index_name: INDEX_NAME,

Contributor:
Shouldn't this just entirely be managed on the server? I don't see a reason it needs to be parameterized.

Contributor (author):
Just want to make as small a change as possible to ship...

               };
             }}
           />
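Following up on the thread above, a minimal sketch of the server-managed alternative, assuming the upload workflow resolves the index name itself. The helper and payload shape are hypothetical, not the template's actual workflow code:

    # Hypothetical server-side resolution: the client may omit index_name
    # entirely and the workflow falls back to a fixed default.
    DEFAULT_INDEX_NAME = "document_qa_index"

    def resolve_index_name(payload: dict) -> str:
        # A client-supplied value still wins, so existing callers keep working.
        return payload.get("index_name") or DEFAULT_INDEX_NAME

    assert resolve_index_name({"file_id": "abc"}) == "document_qa_index"
    assert resolve_index_name({"index_name": "custom"}) == "custom"

With that fallback on the server, the UI would not need to send an index name at all, and the hard-coded INDEX_NAME in config.ts could go away too.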