Execute Redis commands in Torch Script #489
Conversation
Codecov Report
```diff
@@            Coverage Diff             @@
##           master     #489      +/-   ##
==========================================
+ Coverage   72.14%   74.04%   +1.89%
==========================================
  Files          35       37       +2
  Lines        5824     5915      +91
==========================================
+ Hits         4202     4380     +178
+ Misses       1622     1535      -87
```
Branch updated from 066f236 to 4f0b31b.
Looks good. We need to add a test in which we run SCRIPT on the GPU and verify that the tensor built from a Redis value gets moved to the GPU.
@@ -0,0 +1,66 @@

```python
def redis_string_int_to_tensor(redis_value: Any):
    return torch.tensor(int(str(redis_value)))
```
One thing we need to make sure of is that we move the tensor to the right device. We can ask the user to do so, but it needs to be done; otherwise a script running on the GPU will find itself with an input on the CPU and will fail.
I would just add a note here (and maybe in the docs) mentioning the device placement. The ideal solution would be to have a SCRIPT_DEVICE global, allowing a user to write

`a_tensor = torch.tensor(int(str(redis_value))).to(device=SCRIPT_DEVICE)`

and have tensors created within the script be located on the same device as the eventual script inputs.
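To make the suggestion concrete, here is a minimal sketch of the idea. Note that `SCRIPT_DEVICE` is a hypothetical global proposed in this thread, not an existing RedisAI or PyTorch symbol; a plain CPU `torch.device` stands in for it here.

```python
import torch

# Hypothetical global: the proposal is that the runtime would set this to the
# device the script is scheduled on. "cpu" is only a stand-in for the sketch.
SCRIPT_DEVICE = torch.device("cpu")

def redis_string_int_to_tensor(redis_value: str) -> torch.Tensor:
    # Build the tensor, then move it to the script's device so it matches
    # the device of the eventual script inputs.
    return torch.tensor(int(str(redis_value))).to(device=SCRIPT_DEVICE)
```

With a real GPU-resident script, `SCRIPT_DEVICE` would be `torch.device("cuda")` and the `.to(...)` call is what prevents the CPU/GPU mismatch described above.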
Looks good, a few comments on the SCRIPTRUN command parsing.
src/command_parser.c (outdated)
```c
    continue;
}
// Parse argument name
{
```
Why is this in a block?
Also, I think that the modified parsing logic makes it possible to omit INPUTS/OUTPUTS altogether. For example, the following command would not raise an error (and would treat all the keys as inputs):

```
AI.SCRIPTRUN script_key func_name key1 key2 ...
```
```c
} else if (!strcasecmp(arg_string, "$")) {
    continue;
}
if (!strcasecmp(arg_string, "$")) {
```
```c
if (!strcasecmp(arg_string, "$") && is_input)
```
Otherwise, variadic inputs can come in the middle of OUTPUTS.
```c
}

if (!strcasecmp(arg_string, "INPUTS") && !is_input) {
    is_input = true;
```
I think that this way, INPUTS can appear after OUTPUTS (even more than once...).
Consider adding another flag like "input_done"?
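As a sketch of the suggested fix: this is not the actual RedisAI parser (the function name `classify_args` and the return convention are invented for illustration), but it shows how an extra `input_done` flag rejects INPUTS appearing after OUTPUTS, as well as OUTPUTS before INPUTS.

```c
#include <stdbool.h>
#include <strings.h>

/* Classify SCRIPTRUN-style arguments into inputs and outputs.
 * is_input tracks whether we are in the INPUTS section; input_done is set
 * once OUTPUTS is seen, so a later INPUTS keyword is rejected.
 * Returns 0 on success, -1 on a malformed argument order. */
int classify_args(const char **argv, int argc, int *ninputs, int *noutputs) {
    bool is_input = false, input_done = false;
    *ninputs = 0;
    *noutputs = 0;
    for (int i = 0; i < argc; i++) {
        const char *arg = argv[i];
        if (!strcasecmp(arg, "INPUTS")) {
            if (input_done) return -1;  /* INPUTS after OUTPUTS */
            is_input = true;
            continue;
        }
        if (!strcasecmp(arg, "OUTPUTS")) {
            if (!is_input) return -1;   /* OUTPUTS before INPUTS */
            is_input = false;
            input_done = true;
            continue;
        }
        if (is_input) {
            (*ninputs)++;
        } else if (input_done) {
            (*noutputs)++;
        } else {
            return -1;                  /* key before INPUTS keyword */
        }
    }
    return 0;
}
```

The same flag could gate the `$` (variadic) check above, so a `$` is only accepted while `is_input` is true.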
LGTM
This PR scope:
- `redis` namespace resolver to torch
- `redis.execute` custom operator to execute `RedisModule_Call` within the operator scope or from an auxiliary function
- `RedisModuleCallReply` conversion to a torch representation, using `torch::IValue`