rpc: panic during addExport #54
Yup, this seems like a bug! It looks like the ID allocation code is going out of bounds of the export table. I'll take a look and hopefully have a fix within the next couple of days.
Btw, this happened when I was looping over stuff in RPC:

```go
conn := rpc.NewConn(rpc.StreamTransport(client))
defer conn.Close()
nexp := core.RPCFrame{Client: conn.Bootstrap(ctx)}
dt := sd
d = core.EmptyDomain
s := nexp.NewFrame(ctx, func(p core.RPCFrame_newFrame_Params) error {
	return nil
}).Result()
for {
	r, err := s.Activate(ctx, func(p core.RPCActivationFrame_activate_Params) error {
		netExpS, err := core.Serialize(core.RPCExpression{uid, e, k, d, sd})
		if err != nil {
			return err
		}
		return p.SetFrame(netExpS)
	}).Struct()
	if err != nil {
		return nil, err
	}
	viS, err := r.Result()
	if err != nil {
		return nil, err
	}
	vi, err := core.Deserialize(viS)
	if err != nil {
		return nil, err
	}
	// ...
}
```

When I moved the `nexp.NewFrame` call (the bootstrap struct) inside the for loop, it fixed the problem for some reason.
Odd. I'm still not sure how to reproduce this one. In the sample you give, you don't seem to be exporting a capability to the peer: none of those calls creates a server. I have a couple of hypotheses about what is happening.
Could you modify your rpc package locally to log a bit of information for me? In particular, I'd like to see a couple of things logged.
The first one may log a little too much, so you can wrap it in a conditional:

```go
// ...
if int(id) < 0 || int(id) >= len(c.exports) {
	c.infof("setting export %d, len = %d", id, len(c.exports))
}
// ...
```
Actually, I just reproduced a similar crash in my own code. It is the second case (trying to add exports/questions while the connection is dying). I don't think I need any additional information to be able to fix it. Thanks for the report!
The panic identified in #54 is a race between dying and teardown. teardown only starts once Conn.workers is done, so every non-dispatch goroutine checks the state and adds itself to the c.workers pool. This conceptually addresses the issue, but I'm not totally confident that this fixes the bug. It is hard to write a reliable automated repro case for this issue. It seems strictly superior to prior iterations of the code, so I'll run a few experiments. Updates #54
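The pattern the commit describes (non-dispatch goroutines checking the connection state and joining the workers pool before touching shared tables, so teardown cannot start until they finish) can be sketched as follows. This is an illustrative stand-in, not the actual Conn implementation; the names `dying`, `startWork`, and `shutdown` are assumptions:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// conn is an illustrative stand-in for the rpc Conn in the commit
// message; its fields and methods are hypothetical.
type conn struct {
	mu      sync.Mutex
	dying   bool
	workers sync.WaitGroup
	exports []string
}

// startWork registers the caller with the workers pool, but only if
// the connection is not already dying. Checking the state and joining
// c.workers under one lock means teardown cannot free the tables
// while new work is still being admitted.
func (c *conn) startWork() (func(), error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.dying {
		return nil, errors.New("connection closed")
	}
	c.workers.Add(1)
	return func() { c.workers.Done() }, nil
}

// addExport touches the export table only while holding a worker slot.
func (c *conn) addExport(name string) error {
	done, err := c.startWork()
	if err != nil {
		return err
	}
	defer done()
	c.mu.Lock()
	defer c.mu.Unlock()
	c.exports = append(c.exports, name)
	return nil
}

// shutdown marks the conn dying, then tears down only after every
// in-flight worker has finished.
func (c *conn) shutdown() {
	c.mu.Lock()
	c.dying = true
	c.mu.Unlock()
	c.workers.Wait() // teardown only starts once the pool drains
	c.mu.Lock()
	c.exports = nil
	c.mu.Unlock()
}

func main() {
	c := &conn{}
	fmt.Println(c.addExport("a") == nil) // true
	c.shutdown()
	err := c.addExport("b") // rejected instead of panicking
	fmt.Println(err)
}
```

The key property is that a goroutine either sees `dying == true` and backs off with an error, or it holds a worker slot that blocks teardown, so an add can never race with the tables being freed.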
@et4te, would you mind pulling and seeing whether this new commit fixes your issue?
Marking as fixed, since I haven't heard anything for a couple of months.
Hey there, I just came across this possible bug. Any idea what it's about?
Thanks for the lib, by the way. Awesome work.