Serve: cannot pass a sub object that includes a dag node #38809
Comments
Looks like this is an issue in the
This fails on
@ericl, any ideas here?
This seems to be because the
Though the proper fix is probably a bit more subtle than this.
I believe the above "early termination" was added to avoid attempting to serialize non-serializable objects. The following test fails:
I'm not sure if this is actually a requirement -- for Serve at least, we only use the py_obj_scanner on objects that will be serialized anyway.
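The new serializability requirement mentioned above can be illustrated with plain stdlib `pickle` (this is a hedged sketch, not Ray's actual scanner code): once the scanner fully pickles everything it visits, any object holding an unpicklable member, such as a lock, becomes an error.

```python
import pickle
import threading


class Holder:
    """A hypothetical object holding something unpicklable (a lock)."""

    def __init__(self):
        self.lock = threading.Lock()


# A scanner that fully serializes its input would now raise on this object,
# because thread locks cannot be pickled.
try:
    pickle.dumps(Holder())
    picklable = True
except TypeError:
    picklable = False

print(picklable)
```

This is the trade-off the PR accepts: such objects could previously slip through the "arbitrary object replacement" path, but now they must be serializable.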
As per #38809, you currently cannot pass bound deployments nested within custom objects. This PR lifts that restriction. The approach I took is to remove the "arbitrary object replacement" path in `_PyObjScanner.reducer_override`, which was effectively causing cloudpickle to return early. Instead, we now fully serialize objects aside from the `SourceType` using the standard cloudpickle path. This has one major downside: all objects that `_PyObjScanner` is called on must now be serializable. This is not an issue for its current usage in the code base, but it required me to also add support for finding and replacing multiple types at once (because we currently do multiple passes on each Serve `Query` object).
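The approach described above can be sketched with the stdlib `pickle` module (an illustrative sketch under assumed names, not Ray's actual `_PyObjScanner` implementation): subclass `pickle.Pickler`, intercept instances of the source types in `reducer_override`, and let every other object go through the standard pickling path so that nodes nested inside arbitrary (serializable) user objects are still found. A module-level registry lets the emitted markers resolve to their replacements at unpickle time.

```python
import io
import pickle

# Module-level registry so the lookup function pickles by reference and the
# unpickler can find the scanner that produced the markers (hypothetical
# design, mirroring the idea in the PR).
_SCANNERS = {}


def _lookup_replacement(scanner_id, index):
    return _SCANNERS[scanner_id]._replacements[index]


class ObjScanner(pickle.Pickler):
    """Pickle an object graph, intercepting every instance of the given
    source types; everything else uses the default pickling path, which
    recurses into nested custom objects."""

    def __init__(self, *source_types):
        self._buf = io.BytesIO()
        super().__init__(self._buf)
        self._source_types = source_types  # multiple types in one pass
        self._found = []
        self._replacements = {}
        _SCANNERS[id(self)] = self

    def reducer_override(self, obj):
        if isinstance(obj, self._source_types):
            index = len(self._found)
            self._found.append(obj)
            # On unpickling, this marker resolves to the replacement.
            return (_lookup_replacement, (id(self), index))
        return NotImplemented  # fall back to normal pickling

    def find_nodes(self, obj):
        self.dump(obj)
        return list(self._found)

    def replace_nodes(self, table):
        """table maps id(found_node) -> replacement object."""
        self._replacements = {
            i: table[id(n)] for i, n in enumerate(self._found)
        }
        return pickle.loads(self._buf.getvalue())


class DAGNode:  # hypothetical stand-in for a Serve bound deployment
    def __init__(self, name):
        self.name = name


class Wrapper:  # user-defined container nesting a node
    def __init__(self, inner):
        self.inner = inner


scanner = ObjScanner(DAGNode)
obj = {"w": Wrapper(DAGNode("a")), "xs": [1, DAGNode("b")]}
nodes = scanner.find_nodes(obj)
result = scanner.replace_nodes({id(n): "handle:" + n.name for n in nodes})
```

Because `reducer_override` only short-circuits for the source types, the node inside `Wrapper` is discovered, which is exactly the restriction the PR lifts; the cost, as noted, is that everything the scanner touches must itself be picklable.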
Re-open until cherry-picked into 2.7.
What happened + What you expected to happen
I am passing a Serve DAG node into another object, and then passing that wrapper object into another deployment's `.bind()`. I expected the Serve DAG node to be replaced automatically with a Ray Serve deployment handle; instead, I get infinite recursion.
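The original reproduction script was not captured here, but the underlying mechanism discussed in the comments can be sketched with plain stdlib `pickle` and hypothetical class names: a scanner whose `reducer_override` replaces arbitrary objects wholesale never descends into them, so a node nested inside a user-defined wrapper is never discovered.

```python
import io
import pickle


class DAGNode:  # hypothetical stand-in for a Serve bound deployment
    pass


class Wrapper:  # user-defined object holding the node
    def __init__(self, node):
        self.node = node


class EarlyStopScanner(pickle.Pickler):
    """Mimics the pre-fix "arbitrary object replacement" behavior: any
    unrecognized object is swapped for an opaque token, so pickling stops
    there and never looks inside it (illustrative sketch only)."""

    def __init__(self):
        self._buf = io.BytesIO()
        super().__init__(self._buf)
        self.found = []

    def reducer_override(self, obj):
        if isinstance(obj, DAGNode):
            self.found.append(obj)
            return (str, ("<node>",))
        if isinstance(obj, Wrapper):
            # Early termination: replace wholesale, don't recurse inside.
            return (str, ("<opaque>",))
        return NotImplemented


scanner = EarlyStopScanner()
scanner.dump([Wrapper(DAGNode())])
print(len(scanner.found))  # the nested node is never discovered
```

This is only a model of the failure mode (a nested node going unnoticed rather than the reported infinite recursion), but it shows why removing the early-termination path fixes nested DAG node discovery.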
Versions / Dependencies
Commit: 903899d933ee19159381d823a439d0e8f05a59b0
Reproduction script
This produces:
Issue Severity
Medium: It is a significant difficulty but I can work around it.