[WIP] Enable Skorch+Dask-ML #5748

Draft · wants to merge 3 commits into base: main
6 changes: 6 additions & 0 deletions distributed/protocol/__init__.py
@@ -119,3 +119,9 @@ def _register_cudf():
def _register_cuml():
    with suppress(ImportError):
        from cuml.comm import serialize


@dask_serialize.register_lazy("skorch")
@dask_deserialize.register_lazy("skorch")
def _register_skorch():
    from . import skorch
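For context on the pattern above: `register_lazy` defers the registration until the named module is actually in use, so distributed does not need skorch installed. A minimal sketch of how such lazy dispatch can work (a simplified, hypothetical illustration, not the actual `dask_serialize` internals):

# Simplified sketch of lazy registration: remember one registration
# callback per top-level module name and invoke it on first dispatch.
_lazy_registry = {}


def register_lazy(module_name):
    def decorator(register_func):
        _lazy_registry[module_name] = register_func
        return register_func
    return decorator


def dispatch(obj):
    # e.g. "skorch" for a skorch.NeuralNet instance
    top_module = type(obj).__module__.split(".")[0]
    if top_module in _lazy_registry:
        # First use: run the deferred registration exactly once.
        _lazy_registry.pop(top_module)()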
42 changes: 42 additions & 0 deletions distributed/protocol/skorch.py
@@ -0,0 +1,42 @@
import skorch

from . import pickle
from .serialize import dask_deserialize, dask_serialize


@dask_serialize.register(skorch.NeuralNet)
def serialize_skorch(x, context=None):
    protocol = (context or {}).get("pickle-protocol", None)
    headers = {}
    has_module = hasattr(x, "module_")
    if has_module:
        module = x.__dict__.pop("module_")
Member:

Curious why `module_` can't be pickled on its own. Is there any more info on the issues encountered by leaving this?

Also, is there any downside to (temporarily) modifying a user-provided object here?

Author:

> Curious why `module_` can't be pickled on its own. Is there any more info on the issues encountered by leaving this?

The module is an interactively defined class on the client, so its namespace is often `__main__` (e.g. `__main__.MyModule`).

Pickle has problems with interactively defined classes when they are set as attributes of another object, as it tries to look up the class in that namespace (see the linked example for a trace). By pickling the module on its own, we are able to serialize successfully.

> Also any downside to (temporarily) modifying a user-provided object here?

The only side effect I can think of is the class being redefined in the worker's namespace, causing undefined behavior while deserializing on the worker. I doubt that will really happen in real workflows.

FWIW, I have added a test to verify that at least the class is the same after deserialization.

Member:

Is this just an issue with pickle? Does cloudpickle run into this issue, or does it work OK?

        # module_ is an interactively defined class on the client, so its
        # namespace is often `__main__`. Pickle has problems with interactively
        # defined classes when they are set as attributes of another object.
        # By pickling it on its own we are able to serialize successfully.
        frames = [None]
        buffer_callback = lambda f: frames.append(memoryview(f))
        frames[0] = pickle.dumps(x, buffer_callback=buffer_callback, protocol=protocol)
        # Record where the module's frames begin so deserialization can split
        # the two pickled payloads apart again.
        headers["subframe-split"] = i = len(frames)
        frames.append(None)
        frames[i] = pickle.dumps(
            module, buffer_callback=buffer_callback, protocol=protocol
        )
        x.__dict__["module_"] = module  # restore the attribute popped above
    else:
        frames = [None]
        buffer_callback = lambda f: frames.append(memoryview(f))
        frames[0] = pickle.dumps(x, buffer_callback=buffer_callback, protocol=protocol)

    return headers, frames


@dask_deserialize.register(skorch.NeuralNet)
def deserialize_skorch(header, frames):
    i = header.get("subframe-split")
    # frames[0] is the pickled net (without module_); frames[1:i] are its
    # out-of-band buffers.
    model = pickle.loads(frames[0], buffers=frames[1:i])
    if i is not None:
        # frames[i] is the separately pickled module_, followed by its buffers.
        module = pickle.loads(frames[i], buffers=frames[i + 1 :])
        model.module_ = module
    return model
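To make the header/frame contract concrete: `frames[0]` is the pickled net with `module_` removed, `frames[1:i]` its out-of-band buffers, `frames[i]` the separately pickled module, and `frames[i+1:]` the module's buffers, with `i` stored as `headers["subframe-split"]`. A minimal round-trip sketch calling the two handlers above directly (illustrative only; assumes skorch and torch are installed, and uses a made-up `MyModule`):

import skorch
import torch


class MyModule(torch.nn.Module):
    # Stand-in module for illustration.
    def __init__(self):
        super().__init__()
        self.dense = torch.nn.Linear(20, 10)

    def forward(self, X, **kwargs):
        return self.dense(X)


net = skorch.NeuralNetClassifier(MyModule).initialize()

headers, frames = serialize_skorch(net)
i = headers["subframe-split"]  # index where the module's frames begin
restored = deserialize_skorch(headers, frames)

assert isinstance(restored.module_, MyModule)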
42 changes: 42 additions & 0 deletions distributed/protocol/tests/test_skorch.py
@@ -0,0 +1,42 @@
import pytest

skorch = pytest.importorskip("skorch")
torch = pytest.importorskip("torch")

from distributed import Client
from distributed.protocol import deserialize, serialize


def test_serialize_deserialize_skorch_model():

    client = Client(processes=True, n_workers=1)

    class MyModule(torch.nn.Module):
        def __init__(self, num_units=10):
            super().__init__()
            self.dense0 = torch.nn.Linear(20, num_units)

        def forward(self, X, **kwargs):
            return self.dense0(X)

    net = skorch.NeuralNetClassifier(
        MyModule,
        max_epochs=10,
        iterator_train__shuffle=True,
    )

    def test_serialize_skorch(net):
        net = net.initialize()
        return deserialize(*serialize(net))

    # We test on a different worker to ensure that errors skorch
    # serialization hits in a process other than the client (due to the
    # lack of a `__main__` context) are actually resolved.
    # See this issue for context:
    # https://github.com/dask/dask-ml/issues/549#issuecomment-669924762
    deserialized_net = list(client.run(test_serialize_skorch, net).values())[0]
    assert isinstance(deserialized_net.module_, MyModule)
Comment on lines +39 to +40
Author (@VibhuJawa, Feb 9, 2022):

This was the best test I could come up with for testing on the worker. Please let me know if there is a better way to check serialization in a different process.

Member:

I think we would benefit from a roundtrip serialization test like some of the others in this directory (without a cluster) to make sure that is working as expected (see the sketch below). I know that doesn't show the error per se, but it will help catch other errors in the future.

In terms of testing on a worker, I would take a look at some of the other tests, maybe like this one, and adapt it to your use case. We shouldn't need to do the serialization manually ourselves, but should instead rely on Dask to do that for us and merely check that things work as expected.


    client.close()
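For reference, a sketch of the cluster-free roundtrip test suggested in the thread above, in the style of the existing test file (hypothetical, not part of this PR):

def test_roundtrip_without_cluster():
    # Exercise the generic serialize/deserialize entry points directly,
    # with no cluster involved.
    class MyModule(torch.nn.Module):
        def __init__(self, num_units=10):
            super().__init__()
            self.dense0 = torch.nn.Linear(20, num_units)

        def forward(self, X, **kwargs):
            return self.dense0(X)

    net = skorch.NeuralNetClassifier(MyModule).initialize()
    roundtripped = deserialize(*serialize(net))
    assert isinstance(roundtripped.module_, MyModule)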