
[thirdparty] disable nghttp2 feature in libcurl #14

Open
wants to merge 1 commit into master
Conversation

zhangyifan27
Owner

Disable the nghttp2 feature in libcurl to avoid some linking errors.

Change-Id: Ic63d17b6e7d1b1fc066cc6b80a0a24f988edfad7
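
As a quick sanity check (not part of this change), one way to confirm whether the libcurl that ends up linked was built with HTTP/2 (nghttp2) support is to inspect the feature bits reported by libcurl's public curl_version_info() API; a minimal sketch:

  // Sketch: report whether the linked libcurl was built with HTTP/2 (nghttp2)
  // support. Uses only public libcurl API; not part of this change.
  #include <cstdio>
  #include <curl/curl.h>

  int main() {
    curl_version_info_data* info = curl_version_info(CURLVERSION_NOW);
    const bool has_http2 = (info->features & CURL_VERSION_HTTP2) != 0;
    std::printf("libcurl %s, HTTP/2 (nghttp2) support: %s\n",
                info->version, has_http2 ? "enabled" : "disabled");
    return 0;
  }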
zhangyifan27 pushed a commit that referenced this pull request Apr 3, 2024
A Kudu server might start its shutdown sequence while another thread
is collecting the server's metrics. If that happens, a data race might
manifest itself while fetching the 'rpc_pending_connections' metric.
Running one of the tests under TSAN reproduced such a race, producing
the report below.

This patch addresses the data race issue.

In addition, I took the liberty of optimizing the instantiation
and initialization of the DiagnosticSocket instances used to retrieve
the number of pending RPC connections: the diagnostic sockets are now
instantiated and initialized once per AcceptorPool instance.

This is a follow-up to c0c44a8.

  WARNING: ThreadSanitizer: data race
    Read of size 8 at 0x7b4c00002f78 by thread T63 (mutexes: write M558018781209703984):
      #0 std::__1::vector<std::__1::shared_ptr<kudu::rpc::AcceptorPool>, std::__1::allocator<std::__1::shared_ptr<kudu::rpc::AcceptorPool> > >::begin() thirdparty/installed/tsan/include/c++/v1/vector:1520:30 (libkrpc.so+0x1642b9)
      #1 kudu::rpc::Messenger::GetPendingConnectionsNum() src/kudu/rpc/messenger.cc:171:22 (libkrpc.so+0x15f6fb)
      ...
      #14 kudu::MetricRegistry::WriteAsJson(kudu::JsonWriter*, kudu::MetricJsonOptions const&) const src/kudu/util/metrics.cc:566:7 (libkudu_util.so+0x3ab82c)
      ...
      #17 kudu::server::DiagnosticsLog::Start()::$_0::operator()() const src/kudu/server/diagnostics_log.cc:145:46 (libserver_process.so+0x118361)
      ...

    Previous write of size 8 at 0x7b4c00002f78 by main thread (mutexes: write M4638925457023032):
      #0 memset sanitizer_common/sanitizer_common_interceptors.inc:780:3 (kudu+0x454d16)
      #1 memset sanitizer_common/sanitizer_common_interceptors.inc:778:1 (kudu+0x454d16)
      #2 std::__1::vector<std::__1::shared_ptr<kudu::rpc::AcceptorPool>, std::__1::allocator<std::__1::shared_ptr<kudu::rpc::AcceptorPool> > >::__move_assign(std::__1::vector<std::__1::shared_ptr<kudu::rpc::AcceptorPool>, std::__1::allocator<std::__1::shared_ptr<kudu::rpc::AcceptorPool> > >&, std::__1::integral_constant<bool, true>) thirdparty/installed/tsan/include/c++/v1/vector:1392:18 (libkrpc.so+0x16a840)
      ...
      #4 kudu::rpc::Messenger::ShutdownInternal(kudu::rpc::Messenger::ShutdownMode) src/kudu/rpc/messenger.cc:213:23 (libkrpc.so+0x15f509)
      ...

Change-Id: I6aaf3373944eac86664ac62db3b7e6151c874539
Reviewed-on: http://gerrit.cloudera.org:8080/21224
Tested-by: Alexey Serbin <alexey@apache.org>
Reviewed-by: Abhishek Chennaka <achennaka@cloudera.com>
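
For context, the report shows a common pattern: a metrics thread iterates a vector of shared_ptr<AcceptorPool> while the shutdown path move-assigns that vector, and the two sides hold different mutexes. A minimal sketch of a guarded-access fix follows; it is not the actual Kudu code, and names such as Messenger, acceptor_pools_, lock_ and num_pending_connections() are stand-ins mirroring the report:

  // Sketch only: both the metrics read and the shutdown write go through the
  // same mutex, and the slow per-pool work happens outside the critical section.
  #include <cstdint>
  #include <memory>
  #include <mutex>
  #include <vector>

  struct AcceptorPool {
    // Hypothetical stand-in for querying the pool's pending connections.
    int64_t num_pending_connections() const { return 0; }
  };

  class Messenger {
   public:
    // Metrics thread: snapshot the pool list under the lock, then aggregate
    // outside the critical section.
    int64_t GetPendingConnectionsNum() const {
      std::vector<std::shared_ptr<AcceptorPool>> pools;
      {
        std::lock_guard<std::mutex> guard(lock_);
        pools = acceptor_pools_;  // copies shared_ptrs, not the pools
      }
      int64_t total = 0;
      for (const auto& pool : pools) {
        total += pool->num_pending_connections();
      }
      return total;
    }

    // Shutdown path: the move-assignment that raced with the read above now
    // happens under the same mutex, removing the data race.
    void ShutdownInternal() {
      std::vector<std::shared_ptr<AcceptorPool>> pools_to_shutdown;
      {
        std::lock_guard<std::mutex> guard(lock_);
        pools_to_shutdown = std::move(acceptor_pools_);
        acceptor_pools_.clear();
      }
      // ... shut down each pool in pools_to_shutdown outside the lock ...
    }

   private:
    mutable std::mutex lock_;
    std::vector<std::shared_ptr<AcceptorPool>> acceptor_pools_;
  };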