
[SPARK-56324] Add ZeroCopyByteStream to enable PySpark <-> Spark message-based communication#55515

Open
sven-weber-db wants to merge 1 commit into apache:master from sven-weber-db:sven-weber_data/ZeroCopyByte

Conversation

@sven-weber-db (Contributor) commented Apr 23, 2026

What changes were proposed in this pull request?

This is the first in a series of PRs that introduce message-based communication to PySpark UDFs. This initiative is part of SPIP SPARK-55278, which proposes language-agnostic UDFs.

The goal of introducing message-based communication to PySpark is to:

  1. Make the communication between Spark <-> PySpark more structured.
  2. Enable new communication protocols (e.g., gRPC) transparently.

The overall goal is to introduce a second communication channel while keeping the existing channel intact. Specifically, we want to introduce gRPC in addition to UDS. The existing UDS channel will not be changed, and its characteristics, including performance, will remain untouched.

As the first step to make PySpark communication message-based, this PR introduces a new class, which implements a file-like interface on top of a stream of byte arrays. This class will be used in follow-up PRs to provide raw gRPC-transmitted bytes to PySpark.
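The file-like-over-chunks idea can be sketched roughly as follows. This is an illustrative sketch only; the class and method names (`append`, `mark_eof`, the exact `read` contract) are assumptions, not the PR's actual API:

```python
import threading
from collections import deque


class ZeroCopyByteStream:
    """Illustrative file-like reader over a stream of byte chunks.

    A producer (e.g., a gRPC receive loop) appends chunks; a consumer
    calls read(). Chunks are held as memoryviews so slicing during
    partial reads does not copy the underlying buffers.
    """

    def __init__(self) -> None:
        self._chunks = deque()
        self._eof = False
        self._condition = threading.Condition()

    def append(self, chunk: bytes) -> None:
        """Producer side: enqueue a new chunk and wake up readers."""
        with self._condition:
            self._chunks.append(memoryview(chunk))
            self._condition.notify_all()

    def mark_eof(self) -> None:
        """Signal that no more chunks will arrive."""
        with self._condition:
            self._eof = True
            self._condition.notify_all()

    def read(self, size: int = -1) -> bytes:
        """Consumer side: block until `size` bytes are available or EOF.

        A negative size reads everything until EOF, following the
        standard io convention.
        """
        out = bytearray()
        with self._condition:
            while size < 0 or len(out) < size:
                # Block until data arrives or the stream ends.
                while not self._chunks and not self._eof:
                    self._condition.wait()
                if not self._chunks:
                    break  # EOF reached and all chunks drained.
                chunk = self._chunks[0]
                need = len(chunk) if size < 0 else size - len(out)
                out += chunk[:need]
                if need >= len(chunk):
                    self._chunks.popleft()
                else:
                    # Keep the unread tail without copying it.
                    self._chunks[0] = chunk[need:]
        return bytes(out)
```

A consumer would then use the stream exactly like a blocking file object, e.g. `stream.read(4)` to pull a length prefix and a second `read` for the payload.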

Why are the changes needed?

This is the first step toward a language-agnostic UDF protocol for Spark that enables UDF workers written in any language to communicate with the Spark engine through a well-defined specification and API boundary. The abstractions introduced here will be used to make PySpark transport-layer agnostic, which is required for PySpark to support the new protocol.

Does this PR introduce any user-facing change?

No. There will be follow-up PRs to consume the introduced abstractions.

How was this patch tested?

New unit tests have been added for the new modules.

Was this patch authored or co-authored using generative AI tooling?

Partially, yes. However, the code was manually reviewed and adjusted.

@sven-weber-db sven-weber-db changed the title [WIP][SPARK-56324] Introducing ZeroCopyByteStream for PySpark message-base… [SPARK-56324] Frictionless UDF workers: ZeroCopyByteStream for PySpark message-based communication Apr 23, 2026
@sven-weber-db sven-weber-db force-pushed the sven-weber_data/ZeroCopyByte branch 2 times, most recently from 84b90de to dbaec94 Compare April 24, 2026 12:41
@sven-weber-db sven-weber-db force-pushed the sven-weber_data/ZeroCopyByte branch from dbaec94 to caf5e47 Compare April 24, 2026 16:25
@sven-weber-db sven-weber-db changed the title [SPARK-56324] Frictionless UDF workers: ZeroCopyByteStream for PySpark message-based communication [SPARK-56324] Add ZeroCopyByteStream to enable PySpark <-> Spark message-based communication Apr 27, 2026

The chunk to be added cannot be None.
"""
if type(chunk) is not memoryview:
We used isinstance() in __init__ but type(...) is not here. I think we can allow subclasses, right? So maybe use isinstance() here for consistency?
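For context, the two checks only disagree when a subclass instance is passed. memoryview itself is not subclassable, so the point is mainly about consistency; the difference is illustrated here with bytes, which can be subclassed:

```python
# A hypothetical subclass, purely for illustration.
class TracedBytes(bytes):
    pass


b = TracedBytes(b"data")

# Exact-type check rejects the subclass instance...
print(type(b) is bytes)       # False

# ...while isinstance() accepts it.
print(isinstance(b, bytes))   # True
```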

self._current_chunk = initial_view
self._current_position = 0
self._eof = False
self._lock = threading.Lock()
The only reason we create this lock is for _condition, right? threading.Condition() will create an RLock when given no argument. Is there a concern with using an RLock? I don't want developers to see this lock as something they can access directly, if the interface should be just self._condition.
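A small sketch of the pattern this comment suggests: when no lock is passed, threading.Condition() creates its own RLock internally, so the explicit Lock attribute can be dropped and all synchronization can go through the condition (names below are illustrative):

```python
import threading

# Condition() with no argument owns its own RLock; no separate
# threading.Lock attribute is needed.
cond = threading.Condition()
shared = []


def producer():
    with cond:              # acquires the condition's internal RLock
        shared.append(1)
        cond.notify_all()   # wake up any waiting consumers


t = threading.Thread(target=producer)
with cond:
    t.start()
    while not shared:
        cond.wait()         # releases the lock while waiting
t.join()
print(shared)               # [1]
```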

self._current_position += to_read

# If entire chunk consumed, clear it for next chunk
if self._current_position > len(self._current_chunk):
Let's change this to an assertion. It should never happen if our code is correct; no user data should be able to trigger it.

def read(self, size: int) -> memoryview:
"""
Reads size bytes. If the read failed because the underlying
stream was marked as finished (EOF), None is returned.
This comment is no longer accurate, right?


return result

def read(self, size: int) -> memoryview:
I think I mentioned this once: should we support read() with no arguments for reading everything? I think in the future we will probably just want to read the whole thing and parse it.
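For reference, the standard-library convention (shown here with io.BytesIO) is that read() with no argument, or a negative size, drains the stream to EOF; a ZeroCopyByteStream could mirror that contract:

```python
import io

buf = io.BytesIO(b"all the bytes")

# Bounded read: returns exactly `size` bytes (or fewer at EOF).
head = buf.read(3)

# Unbounded read: read() drains everything that remains.
rest = buf.read()

print(head, rest)   # b'all' b' the bytes'
```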
