
Conversation


@amindadgar commented Jul 15, 2025

…flow and related activities!

  • Introduced BatchVectorIngestionWorkflow for processing multiple document ingestion requests in parallel.
  • Added process_documents_batch activity for handling batch document processing.
  • Updated schema to include BatchIngestionRequest and BatchDocument models.
  • Enhanced README with usage examples and performance considerations for batch processing.

Summary by CodeRabbit

  • New Features

    • Introduced support for batch document ingestion, enabling efficient parallel processing of multiple documents.
    • Added new batch ingestion workflow and activity for handling large-scale document imports.
    • Expanded schema to support batch document requests with customizable metadata exclusions.
  • Documentation

    • Added comprehensive README covering usage, configuration, performance tuning, error handling, and integration for the new ingestion workflows.
  • Chores

    • Updated workflow and activity registry to include new batch processing capabilities.


coderabbitai bot commented Jul 15, 2025

Walkthrough

These changes introduce batch ingestion capabilities for document processing in the hivemind_etl module. New schema models, activities, and a Temporal workflow enable parallel processing of multiple documents in chunks. The registry and documentation are updated to support and describe both single and batch ingestion workflows, with batch operations now fully integrated.
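
For orientation, a client kicks off the new workflow roughly like this (a minimal sketch, assuming the `hivemind-etl` task queue and workflow id used in the README example reviewed below; the connection address is the Temporal default, and the payload values are placeholders):

    import asyncio
    from temporalio.client import Client

    async def main() -> None:
        # Connect to the local Temporal server (default address; an assumption).
        client = await Client.connect("localhost:7233")
        batch_request = {  # hypothetical payload; shape sketched in the schema below
            "communityId": "community-1",
            "platformId": "platform-1",
            "collectionName": None,
            "document": [],
        }
        await client.execute_workflow(
            "BatchVectorIngestionWorkflow",
            batch_request,
            id="batch-ingestion-123",  # example workflow id from the README
            task_queue="hivemind-etl",
        )

    asyncio.run(main())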

Changes

| File(s) | Change Summary |
| --- | --- |
| hivemind_etl/simple_ingestion/schema.py | Added `BatchDocument` and `BatchIngestionRequest` Pydantic models for batch document ingestion. |
| hivemind_etl/simple_ingestion/pipeline.py | Added `BatchChunk`, `BatchVectorIngestionWorkflow`, and `process_documents_batch` for batch ingestion; updated `process_document`. |
| hivemind_etl/simple_ingestion/README.md | New documentation detailing single and batch ingestion workflows, schemas, usage, and integration. |
| hivemind_etl/activities.py, workflows.py | Extended imports to include the batch ingestion activity and workflow. |
| registry.py | Registered the `process_documents_batch` activity and `BatchVectorIngestionWorkflow` workflow. |
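
Reconstructed from the field names called out in the review comments below, the two schema models plausibly look like this (a sketch, not the exact source; defaults beyond those the review states are assumptions):

    from pydantic import BaseModel, Field

    class BatchDocument(BaseModel):
        docId: str       # unique identifier for the document
        text: str        # text content to be processed
        metadata: dict   # additional metadata associated with the document
        # Metadata keys excluded from embedding / LLM processing; default empty.
        excludedEmbedMetadataKeys: list[str] = Field(default_factory=list)
        excludedLlmMetadataKeys: list[str] = Field(default_factory=list)

    class BatchIngestionRequest(BaseModel):
        communityId: str   # unique identifier of the community
        platformId: str    # unique identifier of the platform
        # None falls back to the default "[communityId]_[platformId]" pattern.
        collectionName: str | None = None
        document: list[BatchDocument]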

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant TemporalWorker
    participant BatchVectorIngestionWorkflow
    participant process_documents_batch
    participant CustomIngestionPipeline

    Client->>TemporalWorker: Start BatchVectorIngestionWorkflow(batchRequest)
    TemporalWorker->>BatchVectorIngestionWorkflow: Run(batchRequest)
    loop For each chunk in batchRequest
        BatchVectorIngestionWorkflow->>process_documents_batch: Process(chunk)
        process_documents_batch->>CustomIngestionPipeline: Ingest documents in chunk
        CustomIngestionPipeline-->>process_documents_batch: Return
    end
    BatchVectorIngestionWorkflow-->>TemporalWorker: Done
    TemporalWorker-->>Client: Workflow complete
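
Read as code, the loop in the diagram corresponds to something like this (a sketch: the class and activity names, the chunk size of 10, and the chunking loop come from the PR and review comments below, while the timeout and payload handling are assumptions):

    from datetime import timedelta
    from temporalio import workflow

    from hivemind_etl.simple_ingestion.schema import BatchIngestionRequest

    @workflow.defn
    class BatchVectorIngestionWorkflow:
        @workflow.run
        async def run(self, batch_request: BatchIngestionRequest) -> None:
            batch_size = 10  # hardcoded in this PR; see the nitpick below
            docs = batch_request.document
            # Split the documents into fixed-size chunks.
            document_chunks = [
                docs[i : i + batch_size] for i in range(0, len(docs), batch_size)
            ]
            for chunk in document_chunks:
                # The diagram shows sequential awaits; the implementation may
                # instead run these activity executions concurrently.
                await workflow.execute_activity(
                    "process_documents_batch",
                    chunk,
                    start_to_close_timeout=timedelta(minutes=10),  # assumed value
                )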


Poem

In the meadow of code, where the data flows free,
A rabbit now batches, as quick as can be!
Documents hop in, ten at a time,
Chunks processed in parallel, efficiency sublime.
With workflows and schemas, the system’s in tune—
Batch bunnies are dancing, beneath the data moon!
🐇✨


@coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (5)
hivemind_etl/simple_ingestion/schema.py (2)

41-49: Complete the docstring with parameter descriptions.

The BatchDocument docstring is incomplete and doesn't follow the same format as IngestionRequest. Add parameter descriptions for consistency.

 class BatchDocument(BaseModel):
-    """A model representing a document for batch ingestion.
-    
-    """
+    """A model representing a document for batch ingestion.
+    
+    Parameters
+    ----------
+    docId : str
+        Unique identifier for the document.
+    text : str
+        The text content to be processed.
+    metadata : dict
+        Additional metadata associated with the document.
+    excludedEmbedMetadataKeys : list[str], optional
+        List of metadata keys to exclude from embedding process.
+        Default is an empty list.
+    excludedLlmMetadataKeys : list[str], optional
+        List of metadata keys to exclude from LLM processing.
+        Default is an empty list.
+    """

52-63: Fix the docstring parameter description.

The docstring incorrectly describes the parameter as ingestion_requests : list[IngestionRequest], but the actual field is document: list[BatchDocument].

 class BatchIngestionRequest(BaseModel):
     """A model representing a batch of ingestion requests for document processing.

     Parameters
     ----------
-    ingestion_requests : list[IngestionRequest]
-        A list of ingestion requests.
+    communityId : str
+        The unique identifier of the community.
+    platformId : str
+        The unique identifier of the platform.
+    collectionName : str | None, optional
+        The name of the collection to use for the documents.
+        Default is `None` means it would follow the default pattern of `[communityId]_[platformId]`
+    document : list[BatchDocument]
+        A list of batch documents to process.
     """
hivemind_etl/simple_ingestion/README.md (1)

75-84: Fix the batch workflow usage example.

The usage example incorrectly suggests that batch_size is a parameter passed to the workflow execution. Based on the workflow implementation, batch_size is hardcoded as 10 within the workflow and is not configurable via parameters.

 # Execute batch workflow
 client = await Client.connect("localhost:7233")
 await client.execute_workflow(
     "BatchVectorIngestionWorkflow",
     batch_request,
-    10,  # batch_size: optional, default is 10
     id="batch-ingestion-123", 
     task_queue="hivemind-etl"
 )
hivemind_etl/simple_ingestion/pipeline.py (2)

86-86: Consider making batch_size configurable.

The batch size is currently hardcoded to 10. Consider making it configurable through the workflow input or environment variables for better flexibility.
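
One way to do that with a workflow parameter (a sketch; the parameter plumbing is an assumption, and 10 is kept as the default to preserve current behavior):

    from temporalio import workflow

    from hivemind_etl.simple_ingestion.schema import BatchIngestionRequest

    @workflow.defn
    class BatchVectorIngestionWorkflow:
        @workflow.run
        async def run(
            self,
            batch_request: BatchIngestionRequest,
            batch_size: int = 10,  # caller may override; 10 keeps today's behavior
        ) -> None:
            docs = batch_request.document
            document_chunks = [
                docs[i : i + batch_size] for i in range(0, len(docs), batch_size)
            ]
            # ...dispatch chunks to process_documents_batch as before...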


109-109: Replace unused loop variable with underscore.

The loop variable i is not used within the loop body.

Apply this diff:

-        for i, chunk in enumerate(document_chunks):
+        for chunk in document_chunks:
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3745b99 and da30715.

📒 Files selected for processing (6)
  • hivemind_etl/activities.py (1 hunks)
  • hivemind_etl/simple_ingestion/README.md (1 hunks)
  • hivemind_etl/simple_ingestion/pipeline.py (3 hunks)
  • hivemind_etl/simple_ingestion/schema.py (1 hunks)
  • registry.py (4 hunks)
  • workflows.py (1 hunks)
🧰 Additional context used
🧠 Learnings (5)
hivemind_etl/activities.py (1)
Learnt from: amindadgar
PR: TogetherCrew/temporal-worker-python#1
File: hivemind_etl/activities.py:69-71
Timestamp: 2024-11-25T12:06:15.391Z
Learning: The `say_hello` function in `hivemind_etl/activities.py` is an example and does not require documentation.
workflows.py (3)
Learnt from: amindadgar
PR: TogetherCrew/temporal-worker-python#36
File: workflows.py:13-15
Timestamp: 2025-05-08T06:37:34.094Z
Learning: In the temporal-worker-python project, workflows are defined in domain-specific modules, imported into the root-level workflows.py file, and then imported from workflows.py into registry.py where they're registered in the WORKFLOWS list. Therefore, imports in workflows.py are necessary even if they appear unused within that file itself.
Learnt from: amindadgar
PR: TogetherCrew/temporal-worker-python#1
File: test_run_workflow.py:0-0
Timestamp: 2024-11-25T11:49:42.951Z
Learning: The file `test_run_workflow.py` is used for testing, and code change suggestions are not required for this file.
Learnt from: amindadgar
PR: TogetherCrew/temporal-worker-python#39
File: hivemind_summarizer/activities.py:97-100
Timestamp: 2025-05-12T11:24:54.953Z
Learning: In the temporal-worker-python project, `CustomIngestionPipeline` expects collection names WITHOUT the community_id prefix (e.g., `{platform_id}_summary`), while direct Qdrant queries use collection names WITH the community_id prefix (e.g., `{community_id}_{platform_id}_summary`).
registry.py (1)
Learnt from: amindadgar
PR: TogetherCrew/temporal-worker-python#36
File: workflows.py:13-15
Timestamp: 2025-05-08T06:37:34.094Z
Learning: In the temporal-worker-python project, workflows are defined in domain-specific modules, imported into the root-level workflows.py file, and then imported from workflows.py into registry.py where they're registered in the WORKFLOWS list. Therefore, imports in workflows.py are necessary even if they appear unused within that file itself.
hivemind_etl/simple_ingestion/README.md (2)
Learnt from: amindadgar
PR: TogetherCrew/temporal-worker-python#39
File: hivemind_summarizer/activities.py:97-100
Timestamp: 2025-05-12T11:24:54.953Z
Learning: In the temporal-worker-python project, `CustomIngestionPipeline` expects collection names WITHOUT the community_id prefix (e.g., `{platform_id}_summary`), while direct Qdrant queries use collection names WITH the community_id prefix (e.g., `{community_id}_{platform_id}_summary`).
Learnt from: amindadgar
PR: TogetherCrew/temporal-worker-python#36
File: workflows.py:13-15
Timestamp: 2025-05-08T06:37:34.094Z
Learning: In the temporal-worker-python project, workflows are defined in domain-specific modules, imported into the root-level workflows.py file, and then imported from workflows.py into registry.py where they're registered in the WORKFLOWS list. Therefore, imports in workflows.py are necessary even if they appear unused within that file itself.
hivemind_etl/simple_ingestion/pipeline.py (2)
Learnt from: amindadgar
PR: TogetherCrew/temporal-worker-python#39
File: hivemind_summarizer/activities.py:97-100
Timestamp: 2025-05-12T11:24:54.953Z
Learning: In the temporal-worker-python project, `CustomIngestionPipeline` expects collection names WITHOUT the community_id prefix (e.g., `{platform_id}_summary`), while direct Qdrant queries use collection names WITH the community_id prefix (e.g., `{community_id}_{platform_id}_summary`).
Learnt from: amindadgar
PR: TogetherCrew/temporal-worker-python#30
File: hivemind_summarizer/schema.py:10-14
Timestamp: 2025-04-21T10:12:30.711Z
Learning: For the Telegram summaries feature in the temporal-worker-python project, the user plans to add date validation to ensure end_date is after start_date and dates are in the correct format as a future improvement. This validation would be added to the TelegramSummariesRangeActivityInput class.
🧬 Code Graph Analysis (1)
workflows.py (1)
hivemind_etl/simple_ingestion/pipeline.py (1)
  • BatchVectorIngestionWorkflow (60-119)
🪛 Ruff (0.12.2)
hivemind_etl/activities.py

18-18: hivemind_etl.simple_ingestion.pipeline.process_documents_batch imported but unused

Remove unused import

(F401)

workflows.py

18-18: hivemind_etl.simple_ingestion.pipeline.BatchVectorIngestionWorkflow imported but unused

Remove unused import

(F401)

hivemind_etl/simple_ingestion/pipeline.py

7-7: .schema.BatchDocument imported but unused

Remove unused import: .schema.BatchDocument

(F401)


109-109: Loop control variable i not used within loop body

(B007)

🔇 Additional comments (11)
hivemind_etl/activities.py (1)

16-19: Import correctly added for batch processing activity.

The process_documents_batch import follows the established pattern for activities that are imported here and then registered in registry.py. The static analysis warning can be ignored as this is the expected architecture.

workflows.py (1)

16-19: Import correctly added for batch processing workflow.

The BatchVectorIngestionWorkflow import follows the established pattern for workflows that are imported here and then registered in registry.py. The static analysis warning can be ignored as this is the expected architecture.

registry.py (4)

12-12: Batch activity correctly registered.

The process_documents_batch activity is properly imported and will be registered in the ACTIVITIES list, enabling it for use in the Temporal worker.


28-28: Batch workflow correctly registered.

The BatchVectorIngestionWorkflow workflow is properly imported and will be registered in the WORKFLOWS list, enabling it for use in the Temporal worker.


41-41: Batch workflow correctly added to registry.

The BatchVectorIngestionWorkflow is properly added to the WORKFLOWS list, completing the registration process.


58-58: Batch activity correctly added to registry.

The process_documents_batch activity is properly added to the ACTIVITIES list, completing the registration process.
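
Taken together, the four registry changes follow the pattern described in the learnings above (a sketch; import paths and surrounding entries are abbreviated assumptions):

    # registry.py (sketch of the registration pattern)
    from workflows import BatchVectorIngestionWorkflow
    from hivemind_etl.activities import process_documents_batch

    WORKFLOWS = [
        # ...previously registered workflows...
        BatchVectorIngestionWorkflow,
    ]

    ACTIVITIES = [
        # ...previously registered activities...
        process_documents_batch,
    ]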

hivemind_etl/simple_ingestion/README.md (1)

1-187: Comprehensive documentation for batch processing workflows.

The README provides excellent documentation for both single and batch ingestion workflows, including:

  • Clear usage examples with code snippets
  • Detailed schema reference
  • Performance considerations for choosing between workflows
  • Error handling and retry policies
  • Integration instructions

This documentation will be very helpful for developers using the batch processing capabilities.

hivemind_etl/simple_ingestion/pipeline.py (4)

7-7: Skip (static analysis false positive).

While static analysis indicates BatchDocument is unused, it's actually part of the type definition for BatchIngestionRequest.document list items and is correctly imported.


14-17: LGTM!

Clean implementation of the chunk class that properly inherits from BatchIngestionRequest.


156-158: LGTM!

The activity correctly handles the new excluded metadata keys from the ingestion request.


163-206: Well-implemented batch processing activity!

The activity correctly processes multiple documents in a single pipeline run, which is more efficient than processing them individually. The implementation properly handles the conversion from BatchDocument to Document objects.
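
A sketch of what that conversion plausibly looks like; the llama_index `Document` mapping and the attribute names are inferred from the review comments above, and the `CustomIngestionPipeline` call is deliberately omitted since its signature is not shown in this PR:

    from llama_index.core import Document
    from temporalio import activity

    from hivemind_etl.simple_ingestion.schema import BatchIngestionRequest

    @activity.defn
    async def process_documents_batch(chunk: BatchIngestionRequest) -> None:
        # Map each BatchDocument onto a llama_index Document, carrying over
        # both excluded-metadata key lists.
        documents = [
            Document(
                id_=doc.docId,
                text=doc.text,
                metadata=doc.metadata,
                excluded_embed_metadata_keys=doc.excludedEmbedMetadataKeys,
                excluded_llm_metadata_keys=doc.excludedLlmMetadataKeys,
            )
            for doc in chunk.document
        ]
        # A single CustomIngestionPipeline run then ingests `documents` together,
        # which is what makes the batch path cheaper than per-document runs.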

@amindadgar merged commit d61e8d4 into main on Jul 15, 2025 (3 checks passed).