Duplicate RPC callback and unnecessary lock in RequestResponseFuture #1022

Open
wgdzlh opened this issue Mar 31, 2023 · 0 comments · May be fixed by #1023

BUG REPORT

  1. The callback should only be invoked when the rrf times out, as in the Java client; otherwise the callback is fired twice when the RPC finishes normally (a sketch of the fix follows the Java reference below):

    if rrf.IsTimeout() {
        rrf.CauseErr = fmt.Errorf("correlationId:%s request timeout, no reply message", s)
    }
    rrf.ExecuteRequestCallback()

https://github.com/apache/rocketmq/blob/e7f29798ece70e218f7233a7ec85f01e8706a062/client/src/main/java/org/apache/rocketmq/client/producer/RequestFutureHolder.java#L67
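A minimal sketch of the corresponding Go fix, assuming the timeout-scan loop the snippet above is taken from (rrf and s come from that surrounding iteration): the callback moves inside the timeout branch, so a request that completed normally, whose callback already ran when the reply arrived, is not called back a second time.

    // Inside the timeout-scan loop (sketch; rrf and s are assumed to come
    // from the surrounding iteration, as in the snippet above):
    if rrf.IsTimeout() {
        rrf.CauseErr = fmt.Errorf("correlationId:%s request timeout, no reply message", s)
        rrf.ExecuteRequestCallback()
    }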

  2. Since we already use the Done channel to synchronize the write and the read of the response message, there is no need for the mutex (a self-contained, lock-free sketch follows the quoted code):

        case <-rf.Done:
            rf.mtx.RLock()
            rf.mtx.RUnlock()
            return rf.ResponseMsg, nil
        }
    }

    func (rf *RequestResponseFuture) PutResponseMessage(message *primitive.Message) {
        rf.mtx.Lock()
        defer rf.mtx.Unlock()
        rf.ResponseMsg = message
        close(rf.Done)
    }
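
A rough sketch of the lock-free variant, using a reduced struct with only the fields the quoted code shows (the real RequestResponseFuture carries more state, and the real wait method's signature may differ). Per the Go memory model, close(rf.Done) happens before the receive on rf.Done completes, so the write to rf.ResponseMsg is visible to the waiter without any mutex.

    package main

    import (
        "fmt"
        "time"

        "github.com/apache/rocketmq-client-go/v2/primitive"
    )

    // Reduced RequestResponseFuture with only the fields this sketch needs;
    // the real struct in the client carries more state (callback, CauseErr, ...).
    type RequestResponseFuture struct {
        Done        chan struct{}
        ResponseMsg *primitive.Message
        BeginTime   time.Time
        Timeout     time.Duration
    }

    // WaitResponseMessage blocks until the reply is published or the timeout
    // elapses. No mutex is needed: close(rf.Done) happens before the receive
    // on rf.Done completes, so the write to rf.ResponseMsg is visible here.
    func (rf *RequestResponseFuture) WaitResponseMessage() (*primitive.Message, error) {
        select {
        case <-time.After(rf.Timeout):
            return nil, fmt.Errorf("request timeout after %v, no reply message", rf.Timeout)
        case <-rf.Done:
            return rf.ResponseMsg, nil
        }
    }

    // PutResponseMessage publishes the reply and wakes the waiter.
    // It must be called at most once, since closing a closed channel panics.
    func (rf *RequestResponseFuture) PutResponseMessage(message *primitive.Message) {
        rf.ResponseMsg = message
        close(rf.Done)
    }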

  3. Considering edge cases, we should add a non-blocking check of the Done channel before calculating the timeout, to avoid any possibility of a duplicate callback (a hardened sketch follows the quoted code):

    func (rf *RequestResponseFuture) IsTimeout() bool {
        diff := time.Since(rf.BeginTime)
        return diff > rf.Timeout
    }
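
A sketch of the hardened check, using the same reduced struct as the sketch above: a non-blocking receive on Done short-circuits the deadline test, so a future whose reply was already delivered is never reported as timed out even if the scan runs after the deadline has passed.

    // IsTimeout reports whether the future has expired with no reply.
    func (rf *RequestResponseFuture) IsTimeout() bool {
        select {
        case <-rf.Done:
            // The reply already arrived; never treat this future as timed out,
            // so the timeout scan cannot fire the callback a second time.
            return false
        default:
        }
        return time.Since(rf.BeginTime) > rf.Timeout
    }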
