11 changes: 0 additions & 11 deletions .env.example
@@ -44,14 +44,3 @@ AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION=ap-south-1
AWS_S3_BUCKET_PREFIX="bucket-prefix-name"

# OpenAI

OPENAI_API_KEY="this_is_not_a_secret"
LANGFUSE_PUBLIC_KEY="this_is_not_a_secret"
LANGFUSE_SECRET_KEY="this_is_not_a_secret"
LANGFUSE_HOST="this_is_not_a_secret"

# Misc

CI=""
30 changes: 30 additions & 0 deletions .env.test.example
@@ -0,0 +1,30 @@
ENVIRONMENT=testing

⚠️ Potential issue

ENVIRONMENT likely ignored – backend code expects APP_ENV
According to the PR description and conftest.py, the configuration loader branches on APP_ENV="testing".
Because this file sets ENVIRONMENT=testing, the loader will never see the flag and will fall back to the default (usually “production”) settings, defeating the purpose of the test-isolation work.

-ENVIRONMENT=testing
+APP_ENV=testing

If you still need ENVIRONMENT for other tooling, keep both lines.

📝 Committable suggestion


Suggested change
-ENVIRONMENT=testing
+APP_ENV=testing
🤖 Prompt for AI Agents
In the .env.test.example file at line 1, replace the variable name ENVIRONMENT
with APP_ENV to match the backend code's expected environment variable for
configuration loading. If ENVIRONMENT is still needed for other tools, keep both
lines by adding APP_ENV=testing alongside ENVIRONMENT=testing.
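For completeness, a minimal sketch of a loader that tolerates either variable name if both lines are kept. The helper name is hypothetical, and which variable the backend actually keys off is determined by backend/app/core/config.py, not by this sketch:

import os

def current_environment(default: str = "development") -> str:
    """Return the active environment name (illustrative helper only).

    Prefers APP_ENV (the name this comment says the backend expects) and
    falls back to ENVIRONMENT so other tooling keeps working.
    """
    return os.getenv("APP_ENV") or os.getenv("ENVIRONMENT") or default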


PROJECT_NAME="AI Platform"
STACK_NAME=ai-platform

#Backend
SECRET_KEY=changethis
FIRST_SUPERUSER=superuser@example.com
FIRST_SUPERUSER_PASSWORD=changethis
EMAIL_TEST_USER="test@example.com"

# Postgres

POSTGRES_SERVER=localhost
POSTGRES_PORT=5432
POSTGRES_DB=ai_platform_test
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
Comment on lines +14 to +18

💡 Verification agent

🧩 Analysis chain

Docker Compose gotcha: localhost won’t reach the DB container.

If tests run via docker compose, POSTGRES_SERVER should be the service name (commonly "postgres"), not localhost. Otherwise the app container can’t reach the DB and tests will fail/flap.

Consider documenting an alternative in the example:

 POSTGRES_SERVER=localhost
+# If running via Docker Compose, prefer the service name:
+# POSTGRES_SERVER=postgres

Run this to verify Compose service naming and usage across the repo:


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Searching for compose files and postgres service names..."
fd -HI 'docker-compose*.yml' -t f || true
fd -HI 'docker-compose*.yaml' -t f || true
fd -HI 'compose*.yml' -t f || true
fd -HI 'compose*.yaml' -t f || true

echo "Look for a postgres service definition..."
rg -n -A3 -i 'services:|^\s*postgres:|container_name:\s*postgres' || true

echo "Check where POSTGRES_SERVER is consumed..."
rg -n 'POSTGRES_SERVER' || true

echo "Check for DATABASE_URL usage that may bypass POSTGRES_*..."
rg -n 'DATABASE_URL' || true



Update Docker Compose host to db
For tests running under Docker Compose, the PostgreSQL host must match the service name (db), not localhost, or the app container won’t be able to reach the database. Consider updating the example in .env.test.example:

 POSTGRES_SERVER=localhost
+# If running tests via Docker Compose, use the `db` service:
+# POSTGRES_SERVER=db
 POSTGRES_PORT=5432
 POSTGRES_DB=ai_platform_test
 POSTGRES_USER=postgres
 POSTGRES_PASSWORD=postgres
  • docker-compose.yml defines the database service as db (lines 1–4)
  • Environment overrides in docker-compose.yml already set POSTGRES_SERVER=db (lines 69 and 105)
📝 Committable suggestion


Suggested change
 POSTGRES_SERVER=localhost
+# If running tests via Docker Compose, use the `db` service:
+# POSTGRES_SERVER=db
 POSTGRES_PORT=5432
 POSTGRES_DB=ai_platform_test
 POSTGRES_USER=postgres
 POSTGRES_PASSWORD=postgres
🧰 Tools
🪛 dotenv-linter (3.3.0)

[warning] 15-15: [UnorderedKey] The POSTGRES_PORT key should go before the POSTGRES_SERVER key


[warning] 16-16: [UnorderedKey] The POSTGRES_DB key should go before the POSTGRES_PORT key


[warning] 18-18: [UnorderedKey] The POSTGRES_PASSWORD key should go before the POSTGRES_PORT key

🤖 Prompt for AI Agents
In the .env.test.example file between lines 14 and 18, the POSTGRES_SERVER is
set to localhost, which will prevent the app container from connecting to the
database when running under Docker Compose. Update the POSTGRES_SERVER value
from localhost to db to match the service name defined in docker-compose.yml,
ensuring proper connectivity during tests.
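To make the host wiring concrete, here is a minimal sketch of how the POSTGRES_* values typically end up in the SQLAlchemy connection URI. This is illustrative only; the real Settings class in backend/app/core/config.py may assemble SQLALCHEMY_DATABASE_URI differently, and the driver string is an assumption:

import os

def database_uri() -> str:
    """Assemble a Postgres URI from POSTGRES_* environment variables (sketch).

    Under Docker Compose, POSTGRES_SERVER must be the service name (here `db`);
    `localhost` only works when Postgres runs directly on the host.
    """
    user = os.getenv("POSTGRES_USER", "postgres")
    password = os.getenv("POSTGRES_PASSWORD", "postgres")
    server = os.getenv("POSTGRES_SERVER", "localhost")
    port = os.getenv("POSTGRES_PORT", "5432")
    db = os.getenv("POSTGRES_DB", "ai_platform_test")
    return f"postgresql+psycopg://{user}:{password}@{server}:{port}/{db}"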


# Configure these with your own Docker registry images

DOCKER_IMAGE_BACKEND=backend
DOCKER_IMAGE_FRONTEND=frontend

# AWS

AWS_ACCESS_KEY_ID=this_is_a_test_key
AWS_SECRET_ACCESS_KEY=this_is_a_test_key
AWS_DEFAULT_REGION=ap-south-1
AWS_S3_BUCKET_PREFIX="bucket-prefix-name"
4 changes: 0 additions & 4 deletions .github/workflows/benchmark.yml
@@ -18,10 +18,6 @@ jobs:
count: [100]

env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
LANGFUSE_PUBLIC_KEY: ${{ secrets.LANGFUSE_PUBLIC_KEY }}
LANGFUSE_SECRET_KEY: ${{ secrets.LANGFUSE_SECRET_KEY }}
LANGFUSE_HOST: ${{ secrets.LANGFUSE_HOST }}
LOCAL_CREDENTIALS_ORG_OPENAI_API_KEY: ${{ secrets.LOCAL_CREDENTIALS_ORG_OPENAI_API_KEY }}
LOCAL_CREDENTIALS_API_KEY: ${{ secrets.LOCAL_CREDENTIALS_API_KEY }}

6 changes: 4 additions & 2 deletions .github/workflows/continuous_integration.yml
@@ -15,7 +15,7 @@ jobs:
env:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: ai_platform
POSTGRES_DB: ai_platform_test
ports:
- 5432:5432
options: --health-cmd "pg_isready -U postgres" --health-interval 10s --health-timeout 5s --health-retries 5
@@ -34,7 +34,9 @@ jobs:
python-version: ${{ matrix.python-version }}

- name: Making env file
run: cp .env.example .env
run: |
cp .env.test.example .env
cp .env.test.example .env.test

- name: Install uv
uses: astral-sh/setup-uv@v6
2 changes: 1 addition & 1 deletion .gitignore
@@ -6,7 +6,7 @@ node_modules/
/playwright/.cache/

# Environments
.env
.env*
.venv
env/
venv/
2 changes: 1 addition & 1 deletion backend/app/api/main.py
@@ -37,5 +37,5 @@
api_router.include_router(users.router)
api_router.include_router(utils.router)

if settings.ENVIRONMENT == "local":
if settings.ENVIRONMENT in ["development", "testing"]:
api_router.include_router(private.router)
45 changes: 29 additions & 16 deletions backend/app/core/config.py
@@ -1,7 +1,7 @@
import secrets
import warnings
import os
from typing import Annotated, Any, Literal
from typing import Any, Literal

from pydantic import (
EmailStr,
@@ -15,30 +15,30 @@
from typing_extensions import Self


def parse_cors(v: Any) -> list[str] | str:
if isinstance(v, str) and not v.startswith("["):
return [i.strip() for i in v.split(",")]
elif isinstance(v, list | str):
return v
raise ValueError(v)
def parse_cors(origins: Any) -> list[str] | str:
# If it's a plain comma-separated string, split it into a list
if isinstance(origins, str) and not origins.startswith("["):
return [origin.strip() for origin in origins.split(",")]
# If it's already a list or JSON-style string, just return it
elif isinstance(origins, (list, str)):
return origins
raise ValueError(f"Invalid CORS origins format: {origins!r}")


class Settings(BaseSettings):
model_config = SettingsConfigDict(
# Use top level .env file (one level above ./backend/)
env_file="../.env",
# env_file will be set dynamically in get_settings()
env_ignore_empty=True,
extra="ignore",
)
LANGFUSE_PUBLIC_KEY: str
LANGFUSE_SECRET_KEY: str
LANGFUSE_HOST: str # 🇪🇺 EU region
OPENAI_API_KEY: str

API_V1_STR: str = "/api/v1"
SECRET_KEY: str = secrets.token_urlsafe(32)
# 60 minutes * 24 hours * 1 days = 1 days
ACCESS_TOKEN_EXPIRE_MINUTES: int = 60 * 24 * 1
ENVIRONMENT: Literal["local", "staging", "production"] = "local"
ENVIRONMENT: Literal[
"development", "testing", "staging", "production"
] = "development"

PROJECT_NAME: str
SENTRY_DSN: HttpUrl | None = None
@@ -84,7 +84,7 @@ def _check_default_secret(self, var_name: str, value: str | None) -> None:
f'The value of {var_name} is "changethis", '
"for security, please change it, at least for deployments."
)
if self.ENVIRONMENT == "local":
if self.ENVIRONMENT in ["development", "testing"]:
warnings.warn(message, stacklevel=1)
else:
raise ValueError(message)
@@ -100,4 +100,17 @@ def _enforce_non_default_secrets(self) -> Self:
return self


settings = Settings() # type: ignore
def get_settings() -> Settings:
"""Get settings with appropriate env file based on ENVIRONMENT."""
environment = os.getenv("ENVIRONMENT", "development")

# Determine env file
env_files = {"testing": "../.env.test", "development": "../.env"}
env_file = env_files.get(environment, "../.env")

# Create Settings instance with the appropriate env file
return Settings(_env_file=env_file)
Comment on lines +103 to +112

💡 Verification agent

🧩 Analysis chain

Relative _env_file paths will mis-resolve when CWD varies (pytest, Docker, IDE) — switch to absolute Path, add existence check, and cache.

"../.env*" is evaluated relative to the current working directory, not this file. Under repo-root pytest or container entrypoints, it will point outside the repository. Also, constructing settings repeatedly is unnecessary. Use an absolute path from __file__, guard for missing files, and cache the instance. Provide a reload hook for tests instead of calling settings.__init__().

Apply within this hunk:

-def get_settings() -> Settings:
-    """Get settings with appropriate env file based on ENVIRONMENT."""
-    environment = os.getenv("ENVIRONMENT", "development")
-
-    # Determine env file
-    env_files = {"testing": "../.env.test", "development": "../.env"}
-    env_file = env_files.get(environment, "../.env")
-
-    # Create Settings instance with the appropriate env file
-    return Settings(_env_file=env_file)
+@lru_cache
+def get_settings() -> Settings:
+    """Get settings with appropriate env file based on ENVIRONMENT."""
+    environment = os.getenv("ENVIRONMENT", "development")
+
+    # Resolve project root from this file: backend/app/core/config.py -> repo root
+    base_dir = Path(__file__).resolve().parents[3]
+
+    # Map env → filename; staging/prod typically rely on real env vars, so no file
+    mapping = {
+        "testing": base_dir / ".env.test",
+        "development": base_dir / ".env",
+    }
+    env_file_path = mapping.get(environment)
+    if env_file_path is not None and not env_file_path.exists():
+        # Fall back to OS env only if the file isn't present
+        env_file_path = None
+
+    return Settings(_env_file=env_file_path)

Add these imports at the top of the file:

from functools import lru_cache
from pathlib import Path

Recommended (outside this hunk): provide an explicit reload hook for tests and fixtures that currently do settings.__init__():

def reload_settings() -> Settings:
    """Clear cached settings and rebuild after dotenv changes in tests."""
    get_settings.cache_clear()
    return get_settings()

And replace any settings.__init__() calls in test bootstrapping with reload_settings() to avoid losing _env_file context.

I can push a follow-up patch updating conftest.py to call reload_settings() after load_environment().
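For illustration, a hedged sketch of what the conftest.py wiring could look like once reload_settings() exists; the fixture name and scope are placeholders, and reload_settings is the proposed helper, not something already in the codebase:

import os

# Set the environment before importing anything that reads settings,
# mirroring the pattern the PR already uses in conftest.py.
os.environ.setdefault("ENVIRONMENT", "testing")

import pytest

from app.load_env import load_environment      # added in this PR
from app.core.config import reload_settings    # proposed helper (see above)


@pytest.fixture(scope="session", autouse=True)
def test_settings():
    """Load .env.test and rebuild the cached Settings for the test session."""
    load_environment()
    yield reload_settings()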

To confirm the risks and cleanup opportunities:

Also applies to: 113-114


🏁 Script executed:

#!/bin/bash
# Where is settings reinitialized?
rg -nP -C2 --type=py 'settings\.__init__\s*\('

# Do we accidentally rely on CWD for dotenv files elsewhere?
rg -nP -C2 --type=py '\.\./\.env(\.test)?'

# Validate the actual presence of example/test dotenv at repo root
fd -H --max-depth 2 '.env*' -x ls -al {} || true



Absolute path resolution and caching for .env loading in config.py and load_env.py

To avoid mis-resolved relative paths when the CWD changes (pytest, Docker, IDE), refactor both modules to compute the project root from __file__, use Path, guard for missing files, and cache the settings:

• backend/app/core/config.py
• Wrap get_settings() with @lru_cache
• Compute base_dir = Path(__file__).resolve().parents[3]
  • Map environment → base_dir/".env" or ".env.test"
• Check env_file_path.exists(), else set to None
• Return Settings(_env_file=env_file_path)

+from functools import lru_cache
 from os import getenv
+from pathlib import Path

 @lru_cache
 def get_settings() -> Settings:
     """Get settings with appropriate env file based on ENVIRONMENT."""
     environment = getenv("ENVIRONMENT", "development")

     # Resolve project root from this file
     base_dir = Path(__file__).resolve().parents[3]

     # Map env → filename; staging/prod rely on real env vars
     mapping = {
         "testing": base_dir / ".env.test",
         "development": base_dir / ".env",
     }
     env_file_path = mapping.get(environment)
     if env_file_path is not None and not env_file_path.exists():
         # Fall back to OS env only if file isn't present
         env_file_path = None

     return Settings(_env_file=env_file_path)

• backend/app/load_env.py
• Add from pathlib import Path; compute base_dir = Path(__file__).resolve().parents[2]
• Replace hard-coded "../.env"/"../.env.test" with absolute paths
• Only call load_dotenv(env_file_path) if the file exists

-from dotenv import load_dotenv
-# Use the same path as config.py expects (one level above backend/)
-env_file = "../.env"
-if env == "testing":
-    env_file = "../.env.test"
-load_dotenv(env_file)
+from dotenv import load_dotenv
+from pathlib import Path

+# Resolve project root (load_env.py → backend/app → backend → repo root)
+base_dir = Path(__file__).resolve().parents[2]
+env_files = {
+    "testing": base_dir / ".env.test",
+    "development": base_dir / ".env",
+}
+env_file_path = env_files.get(env, base_dir / ".env")
+if env_file_path.exists():
+    load_dotenv(str(env_file_path))
+else:
+    # Skip file-based loading; rely on OS env vars
+    pass

Optional (for tests): add a reload hook instead of calling settings.__init__():

def reload_settings() -> Settings:
    """Clear cached settings and rebuild after dotenv changes in tests."""
    get_settings.cache_clear()
    return get_settings()

Note: no occurrences of settings.__init__() were found in the codebase, so test modifications aren’t required at this time.

🤖 Prompt for AI Agents
In backend/app/core/config.py around lines 101-110, wrap get_settings() with
functools.lru_cache and compute an absolute project root using base_dir =
Path(__file__).resolve().parents[3]; map the ENVIRONMENT value to env file paths
base_dir/".env" or base_dir/".env.test", check env_file_path.exists() and set
env_file to None if missing, then return Settings(_env_file=env_file_path or
None). Also update backend/app/load_env.py to import Path, compute base_dir =
Path(__file__).resolve().parents[2], replace relative "../.env" and
"../.env.test" with absolute base_dir paths, and only call
load_dotenv(env_file_path) if the file exists; optionally add a
reload_settings() helper that clears the get_settings cache and returns
get_settings().



# Export settings instance
settings = get_settings()
24 changes: 22 additions & 2 deletions backend/app/core/db.py
@@ -1,10 +1,30 @@
from sqlmodel import Session, create_engine, select

from app import crud
from app.core.config import settings
from app.models import User, UserCreate

engine = create_engine(str(settings.SQLALCHEMY_DATABASE_URI))

def get_engine():
"""Get database engine with current settings."""
# Import settings dynamically to get the current instance
from app.core.config import settings

# Configure connection pool settings
# For testing, we need more connections since tests run in parallel
pool_size = 20 if settings.ENVIRONMENT == "development" else 5
max_overflow = 30 if settings.ENVIRONMENT == "development" else 10

return create_engine(
str(settings.SQLALCHEMY_DATABASE_URI),
pool_size=pool_size,
max_overflow=max_overflow,
pool_pre_ping=True,
pool_recycle=300, # Recycle connections after 5 minutes
)


# Create a default engine for backward compatibility
engine = get_engine()


# make sure all SQLModel models are imported (app.models) before initializing DB
13 changes: 13 additions & 0 deletions backend/app/load_env.py
@@ -0,0 +1,13 @@
import os
from dotenv import load_dotenv


def load_environment():
env = os.getenv("ENVIRONMENT", "development")

# Use the same path as config.py expects (one level above backend/)
env_file = "../.env"
if env == "testing":
env_file = "../.env.test"

load_dotenv(env_file)
9 changes: 7 additions & 2 deletions backend/app/main.py
@@ -1,19 +1,24 @@
import sentry_sdk

from fastapi import FastAPI
from fastapi.routing import APIRoute
from asgi_correlation_id.middleware import CorrelationIdMiddleware
from app.api.main import api_router
from app.core.config import settings
import app.core.logger
from app.core.exception_handlers import register_exception_handlers
from app.core.middleware import http_request_logger

from app.load_env import load_environment

# Load environment variables
load_environment()


def custom_generate_unique_id(route: APIRoute) -> str:
return f"{route.tags[0]}-{route.name}"


if settings.SENTRY_DSN and settings.ENVIRONMENT != "local":
if settings.SENTRY_DSN and settings.ENVIRONMENT != "development":
sentry_sdk.init(dsn=str(settings.SENTRY_DSN), enable_tracing=True)

app = FastAPI(
6 changes: 6 additions & 0 deletions backend/app/tests/conftest.py
@@ -1,9 +1,15 @@
import pytest
import os

# Set environment before importing ANYTHING else
os.environ["ENVIRONMENT"] = "testing"

from fastapi.testclient import TestClient
from sqlmodel import Session
from sqlalchemy import event
from collections.abc import Generator

# Now import after setting environment
from app.core.config import settings
from app.core.db import engine
from app.api.deps import get_db
10 changes: 10 additions & 0 deletions backend/scripts/tests-start.sh
@@ -2,6 +2,16 @@
set -e
set -x

# Set environment for testing
export ENVIRONMENT=testing

python app/tests_pre_start.py

# Run pending migrations for test database
uv run alembic upgrade head
if [ $? -ne 0 ]; then
echo 'Error: Test database migrations failed'
exit 1
fi

Comment on lines +10 to +16

💡 Verification agent

🧩 Analysis chain

Redundant exit-code check under set -e

With set -e, the script already exits on a non-zero status. The manual $? check is unnecessary.

Apply this diff:

-# Run pending migrations for test database
-uv run alembic upgrade head
-if [ $? -ne 0 ]; then
-    echo 'Error: Test database migrations failed'
-    exit 1
-fi
+# Run pending migrations for test database
+uv run alembic upgrade head

Also verify that the uv CLI is available in all environments running this script, or fall back to invoking Alembic directly:


🏁 Script executed:

#!/usr/bin/env bash
# Verify uv availability and suggest a fallback to plain alembic if missing
if ! command -v uv >/dev/null 2>&1; then
  echo "Warning: 'uv' not found. Consider using 'alembic upgrade head' instead of 'uv run alembic upgrade head'."
fi



Remove redundant exit-code check and ensure Alembic runs whether or not uv is installed

With set -e in place, the manual if [ $? -ne 0 ]…exit 1 block is unnecessary. Additionally, since uv isn’t guaranteed to be available in all environments, fall back to invoking Alembic directly:

• File: backend/scripts/tests-start.sh (around lines 10–16)

-# Run pending migrations for test database
-uv run alembic upgrade head
-if [ $? -ne 0 ]; then
-    echo 'Error: Test database migrations failed'
-    exit 1
-fi
+# Run pending migrations for test database
+if command -v uv >/dev/null 2>&1; then
+  uv run alembic upgrade head
+else
+  alembic upgrade head
+fi

This preserves the “fail fast” behavior of set -e and guarantees migrations run even if uv isn’t installed.

📝 Committable suggestion


Suggested change
 # Run pending migrations for test database
-uv run alembic upgrade head
-if [ $? -ne 0 ]; then
-    echo 'Error: Test database migrations failed'
-    exit 1
-fi
+if command -v uv >/dev/null 2>&1; then
+  uv run alembic upgrade head
+else
+  alembic upgrade head
+fi
🤖 Prompt for AI Agents
In backend/scripts/tests-start.sh around lines 10 to 16, remove the redundant
manual exit-code check (the if [ $? -ne 0 ] ... exit 1 block) because set -e
already enforces fail-fast, and replace the hardcoded `uv run alembic upgrade
head` invocation with a fallback that tries `uv run alembic upgrade head` if
`uv` is available, otherwise runs `alembic upgrade head` directly so migrations
run even when `uv` is not installed; keep existing set -e and ensure the script
exits non-zero on failure.

bash scripts/test.sh "$@"
8 changes: 0 additions & 8 deletions docker-compose.yml
@@ -72,10 +72,6 @@ services:
- POSTGRES_USER=${POSTGRES_USER?Variable not set}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD?Variable not set}
- SENTRY_DSN=${SENTRY_DSN}
- OPENAI_API_KEY=${OPENAI_API_KEY}
- LANGFUSE_PUBLIC_KEY=${LANGFUSE_PUBLIC_KEY}
- LANGFUSE_SECRET_KEY=${LANGFUSE_SECRET_KEY}
- LANGFUSE_HOST=${LANGFUSE_HOST}
- LOCAL_CREDENTIALS_ORG_OPENAI_API_KEY=${LOCAL_CREDENTIALS_ORG_OPENAI_API_KEY}
- LOCAL_CREDENTIALS_API_KEY=${LOCAL_CREDENTIALS_API_KEY}
- EMAIL_TEST_USER=${EMAIL_TEST_USER}
@@ -112,10 +108,6 @@ services:
- POSTGRES_USER=${POSTGRES_USER?Variable not set}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD?Variable not set}
- SENTRY_DSN=${SENTRY_DSN}
- OPENAI_API_KEY=${OPENAI_API_KEY}
- LANGFUSE_PUBLIC_KEY=${LANGFUSE_PUBLIC_KEY}
- LANGFUSE_SECRET_KEY=${LANGFUSE_SECRET_KEY}
- LANGFUSE_HOST=${LANGFUSE_HOST}
- LOCAL_CREDENTIALS_ORG_OPENAI_API_KEY=${LOCAL_CREDENTIALS_ORG_OPENAI_API_KEY}
- LOCAL_CREDENTIALS_API_KEY=${LOCAL_CREDENTIALS_API_KEY}
- EMAIL_TEST_USER=${EMAIL_TEST_USER}