Add write_parquet operation to offload parquet writing to worker #327
Merged
Edwardvaneechoud merged 1 commit into feauture/kernel-implementation on Feb 9, 2026
Conversation
The `add_python_script` method was calling `collect().write_parquet()` directly on the core process, which is undesirable for performance. This change offloads the collect and parquet writing to the worker process using the existing `ExternalDfFetcher` infrastructure.

Changes:

- Add `write_parquet` operation to worker `funcs.py` that deserializes a LazyFrame, collects it, and writes to a specified parquet path with fsync
- Add `write_parquet` to `OperationType` in both worker and core models
- Add kwargs support to `ExternalDfFetcher` and `trigger_df_operation` so custom parameters (like `output_path`) can be passed through both WebSocket streaming and REST fallback paths
- Update REST `/submit_query/` endpoint to read kwargs from the `X-Kwargs` header
- Replace direct `collect().write_parquet()` in `add_python_script` with `ExternalDfFetcher` using the new `write_parquet` operation type

https://claude.ai/code/session_01RNWTER2V7VJAgPeYEusNoC
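The kwargs pass-through described above could look roughly like this (a minimal sketch, assuming a JSON encoding for the header value; the helper names are hypothetical — the PR only states that kwargs travel in an `X-Kwargs` header on both the WebSocket and REST paths):

```python
import json
from typing import Optional


def encode_kwargs_header(kwargs: dict) -> str:
    """Caller side: serialize operation kwargs into an X-Kwargs header value."""
    return json.dumps(kwargs)


def decode_kwargs_header(header_value: Optional[str]) -> dict:
    """Endpoint side: recover kwargs from the X-Kwargs header, defaulting to {}."""
    return json.loads(header_value) if header_value else {}
```

With this shape, a caller triggering a `write_parquet` operation would send something like `{"X-Kwargs": encode_kwargs_header({"output_path": "/tmp/out.parquet"})}`, and the `/submit_query/` endpoint would decode the header and forward the dict to the worker function.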
Commit 7802641 merged into feauture/kernel-implementation. 13 checks passed.
Summary

This PR adds support for a new `write_parquet` operation type that offloads the collection and writing of LazyFrames to parquet files from the core process to the worker process. This keeps the core process lightweight and prevents race conditions when reading parquet files immediately after writing.

Key Changes

- Add the `write_parquet` operation type to the `OperationType` literal in models, enabling a new operation mode for subprocess operations
- Add a `write_parquet` function in the worker that deserializes a LazyFrame, collects it, and writes it to a parquet file with proper disk flushing
- Update `trigger_df_operation` and `ExternalDfFetcher` to support passing additional kwargs via HTTP headers (`X-Kwargs`), allowing callers to specify operation-specific parameters like output paths
- Update `flow_graph.py` to use the new `write_parquet` operation instead of collecting and writing parquet files in the core process, reducing memory pressure and preventing file race conditions

Implementation Details

- The `write_parquet` operation accepts an `output_path` parameter via kwargs to specify where the parquet file should be written
- Uses `os.fsync()` to prevent race conditions when the kernel process immediately reads the file