Feature Request: Add /v1/memory/batch-ingest API endpoint #132

@ishaanxgupta

Description


We should implement a dedicated batch ingestion endpoint to handle multiple message pairs in a single HTTP request.

Proposed Solution:

  • Create a new BatchIngestRequest schema that accepts a list of message pairs.
  • Add a POST /v1/memory/batch-ingest endpoint in src/api/routes/memory.py.
  • Use asyncio.gather() inside the endpoint to concurrently process the pairs through the ingest pipeline, using the existing _ingest_semaphore to limit internal concurrency.
  • Return a BatchIngestResponse that includes a summary of successes and any failures.
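The steps above could be sketched roughly as follows. This is a minimal sketch, not the actual implementation: `MessagePair`, `ingest_pair`, and the response shape are hypothetical stand-ins, and the real schemas would be Pydantic models exposed through a FastAPI route in `src/api/routes/memory.py`.

```python
import asyncio
from dataclasses import dataclass, field

# Hypothetical stand-ins for the proposed request/response schemas.
@dataclass
class MessagePair:
    user: str
    assistant: str

@dataclass
class BatchIngestResult:
    succeeded: int = 0
    failed: list = field(default_factory=list)  # (index, error message) tuples

# Placeholder for the existing per-pair ingest pipeline.
async def ingest_pair(pair: MessagePair) -> None:
    await asyncio.sleep(0)  # simulates LLM / DB work

# Stands in for the existing _ingest_semaphore concurrency limiter.
_ingest_semaphore = asyncio.Semaphore(8)

async def batch_ingest(pairs: list[MessagePair]) -> BatchIngestResult:
    async def _one(pair: MessagePair) -> None:
        # The semaphore bounds internal concurrency even though gather()
        # schedules every pair at once.
        async with _ingest_semaphore:
            await ingest_pair(pair)

    # return_exceptions=True keeps one bad pair from failing the whole batch,
    # which is what enables the partial-success response below.
    results = await asyncio.gather(
        *(_one(p) for p in pairs), return_exceptions=True
    )
    summary = BatchIngestResult()
    for i, r in enumerate(results):
        if isinstance(r, Exception):
            summary.failed.append((i, str(r)))
        else:
            summary.succeeded += 1
    return summary
```

`gather()` preserves input order, so the index `i` identifies which submitted pair failed.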

The /Context page should use the Batch Ingest API, and the Global Queue will need an audit as part of this change. Longer term, the batch endpoint could be optimized to do bulk inserts into Neo4j and the vector store, drastically reducing database transaction overhead.
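For the Neo4j side of that optimization, the usual approach is to collapse N per-pair transactions into a single `UNWIND` statement. A hypothetical sketch, with an illustrative node label and properties that are not taken from the actual schema:

```python
# Build one parameterized Cypher statement for a whole batch instead of
# issuing one CREATE per pair. The label/properties here are illustrative.
def build_bulk_cypher(pairs: list[dict]) -> tuple[str, dict]:
    query = (
        "UNWIND $pairs AS pair "
        "CREATE (m:MessagePair {user: pair.user, assistant: pair.assistant})"
    )
    # Returned as (query, params) for use with a Neo4j driver's session.run().
    return query, {"pairs": pairs}
```

A single round trip then writes the entire batch in one transaction.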

Better error handling is also required: the endpoint should be able to return partial successes (e.g., "48 pairs succeeded, 2 failed due to LLM timeout") rather than failing the whole batch.
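A partial-success payload for `BatchIngestResponse` might look like the following. Field names are illustrative, not from the codebase:

```python
# Hypothetical partial-success summary for BatchIngestResponse.
def summarize(total: int, failures: list[dict]) -> dict:
    succeeded = total - len(failures)
    return {
        "succeeded": succeeded,
        "failed": len(failures),
        # Per-item detail lets the client retry only the failed pairs,
        # e.g. [{"index": 3, "error": "LLM timeout"}].
        "failures": failures,
        "message": f"{succeeded} pairs succeeded, {len(failures)} failed",
    }

summary = summarize(50, [{"index": 3, "error": "LLM timeout"},
                         {"index": 17, "error": "LLM timeout"}])
print(summary["message"])  # → 48 pairs succeeded, 2 failed
```

Returning per-item failure indices (rather than only counts) keeps batch retries idempotent on the client side.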
