follow-up fixups for atomic_t spinlocks #17611

Merged
merged 3 commits into from Sep 15, 2017
14 changes: 3 additions & 11 deletions src/msg/Messenger.cc
@@ -1,12 +1,11 @@
// -*- mode:C++; tab-width:8; c-basic-offset:2; indent-tabs-mode:t -*-
// vim: ts=8 sw=2 smarttab

-#include <mutex>
-#include <random>

#include <netdb.h>

#include "include/types.h"
+#include "include/random.h"

#include "Messenger.h"

#include "msg/simple/SimpleMessenger.h"
@@ -30,14 +29,7 @@ Messenger *Messenger::create(CephContext *cct, const string &type,
{
int r = -1;
if (type == "random") {
-static std::random_device seed;
-static std::default_random_engine random_engine(seed());
-
-static std::mutex random_lock;
-std::lock_guard<std::mutex> lock(random_lock);
-
-std::uniform_int_distribution<> dis(0, 1);
-r = dis(random_engine);
+r = ceph::util::generate_random_number(0, 1);
}
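The replacement above folds the engine, the mutex, and the distribution into a single library call. A minimal sketch of what such a thread-safe helper can look like — `example::generate_random_number` is an illustrative stand-in; the real `ceph::util::generate_random_number` in `include/random.h` may differ:

```cpp
#include <mutex>
#include <random>

namespace example {

// One shared engine, seeded once from std::random_device, guarded by a
// mutex so concurrent callers do not race on the engine's state.
template <typename IntType = int>
IntType generate_random_number(IntType min, IntType max) {
  static std::default_random_engine engine{std::random_device{}()};
  static std::mutex lock;

  std::lock_guard<std::mutex> guard(lock);
  // Distributions are cheap to construct, so one per call is fine;
  // only the engine needs to be shared and locked.
  std::uniform_int_distribution<IntType> dist(min, max);
  return dist(engine);
}

} // namespace example
```

Callers then get a one-liner, e.g. `r = example::generate_random_number(0, 1);`, with the locking hidden behind the interface.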
Contributor:
The power of libraries!

Contributor Author:
Brought a tear to my eye, it did!

Contributor:
Mind renaming it to gen_rand_number, to be consistent with the rgw naming style (like https://github.com/ceph/ceph/blob/master/src/rgw/rgw_common.cc#L830)?

if (r == 0 || type == "simple")
return new SimpleMessenger(cct, name, std::move(lname), nonce);
8 changes: 3 additions & 5 deletions src/msg/async/dpdk/TCP.h
@@ -29,7 +28,6 @@
#include <functional>
#include <deque>
#include <chrono>
-#include <random>
#include <stdexcept>
#include <system_error>

@@ -49,6 +48,8 @@
#include "shared_ptr.h"
#include "PacketUtil.h"

+#include "include/random.h"

struct tcp_hdr;

enum class tcp_state : uint16_t {
@@ -381,11 +382,8 @@ class tcp {
// 512 bits secretkey for ISN generating
uint32_t key[16];
isn_secret () {
-std::random_device rd;
-std::default_random_engine e(rd());
-std::uniform_int_distribution<uint32_t> dist{};
for (auto& k : key) {
-k = dist(e);
+k = ceph::util::generate_random_number<uint32_t>();
}
}
};
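A subtlety the removed loop relied on: a default-constructed `std::uniform_int_distribution<uint32_t>` spans the type's entire range, `[0, 2^32 - 1]`, which is what you want for secret-key material. The zero-argument `generate_random_number<uint32_t>()` presumably preserves that. A sketch of an equivalent full-range helper (`generate_full_range` is a hypothetical name, not the ceph API):

```cpp
#include <cstdint>
#include <random>

// A default-constructed distribution covers [0, numeric_limits<T>::max()],
// so every bit of the result is uniformly random.
template <typename UIntType>
UIntType generate_full_range() {
  static std::mt19937 engine{std::random_device{}()};
  std::uniform_int_distribution<UIntType> dist;  // full range by default
  return dist(engine);
}
```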
39 changes: 18 additions & 21 deletions src/msg/xio/XioConnection.cc
@@ -100,7 +100,6 @@ XioConnection::XioConnection(XioMessenger *m, XioConnection::type _type,
in_seq(),
cstate(this)
{
-pthread_spin_init(&sp, PTHREAD_PROCESS_PRIVATE);
set_peer_type(peer.name.type());
set_peer_addr(peer.addr);

@@ -160,7 +159,7 @@ void XioConnection::send_keepalive_or_ack(bool ack, const utime_t *tp)
{
/* If con is not in READY state, we need to queue the request */
if (cstate.session_state.read() != XioConnection::UP) {
-pthread_spin_lock(&sp);
+std::lock_guad<ceph::util::spinlock> lg(sp);
Member:
typo?

Contributor Author:
Yes, and that makes it worrisome that it compiled on both my local system and here... hmm...

Member:
with-xio is set to OFF by default, so cmake -DWITH_XIO=ON maybe?

if (cstate.session_state.read() != XioConnection::UP) {
if (ack) {
outgoing.ack = true;
@@ -169,10 +168,8 @@ void XioConnection::send_keepalive_or_ack(bool ack, const utime_t *tp)
else {
outgoing.keepalive = true;
}
-pthread_spin_unlock(&sp);
return;
}
-pthread_spin_unlock(&sp);
}

send_keepalive_or_ack_internal(ack, tp);
@@ -612,7 +609,7 @@ void XioConnection::msg_release_fail(struct xio_msg *msg, int code)
int XioConnection::flush_out_queues(uint32_t flags) {
XioMessenger* msgr = static_cast<XioMessenger*>(get_messenger());
if (! (flags & CState::OP_FLAG_LOCKED))
-pthread_spin_lock(&sp);
+sp.lock();

if (outgoing.keepalive) {
outgoing.keepalive = false;
@@ -637,7 +634,7 @@ int XioConnection::flush_out_queues(uint32_t flags) {
msgr->_send_message_impl(m, this);
}
if (! (flags & CState::OP_FLAG_LOCKED))
-pthread_spin_unlock(&sp);
+sp.unlock();
return 0;
}

@@ -647,7 +644,7 @@ int XioConnection::discard_out_queues(uint32_t flags)
XioSubmit::Queue deferred_q;

if (! (flags & CState::OP_FLAG_LOCKED))
-pthread_spin_lock(&sp);
+sp.lock();

/* the two send queues contain different objects:
* - anything on the mqueue is a Message
@@ -662,7 +659,7 @@
outgoing.keepalive = outgoing.ack = false;

if (! (flags & CState::OP_FLAG_LOCKED))
-pthread_spin_unlock(&sp);
+sp.unlock();

// mqueue
while (!disc_q.empty()) {
@@ -700,11 +697,11 @@ int XioConnection::discard_out_queues(uint32_t flags)
int XioConnection::adjust_clru(uint32_t flags)
{
if (flags & CState::OP_FLAG_LOCKED)
-pthread_spin_unlock(&sp);
+sp.unlock();

XioMessenger* msgr = static_cast<XioMessenger*>(get_messenger());
msgr->conns_sp.lock();
-pthread_spin_lock(&sp);
+sp.lock();

if (cstate.flags & CState::FLAG_MAPPED) {
XioConnection::ConnList::iterator citer =
@@ -716,7 +713,7 @@ int XioConnection::adjust_clru(uint32_t flags)
msgr->conns_sp.unlock();

if (! (flags & CState::OP_FLAG_LOCKED))
-pthread_spin_unlock(&sp);
+sp.unlock();

return 0;
}
@@ -742,7 +739,7 @@ void XioConnection::mark_down()
int XioConnection::_mark_down(uint32_t flags)
{
if (! (flags & CState::OP_FLAG_LOCKED))
-pthread_spin_lock(&sp);
+sp.lock();

// per interface comment, we only stage a remote reset if the
// current policy required it
@@ -756,7 +753,7 @@ int XioConnection::_mark_down(uint32_t flags)
discard_out_queues(flags|CState::OP_FLAG_LOCKED);

if (! (flags & CState::OP_FLAG_LOCKED))
-pthread_spin_unlock(&sp);
+sp.unlock();

return 0;
}
@@ -769,28 +766,28 @@ void XioConnection::mark_disposable()
int XioConnection::_mark_disposable(uint32_t flags)
{
if (! (flags & CState::OP_FLAG_LOCKED))
-pthread_spin_lock(&sp);
+sp.lock();

cstate.policy.lossy = true;

if (! (flags & CState::OP_FLAG_LOCKED))
-pthread_spin_unlock(&sp);
+sp.unlock();

return 0;
}

int XioConnection::CState::state_up_ready(uint32_t flags)
{
if (! (flags & CState::OP_FLAG_LOCKED))
-pthread_spin_lock(&xcon->sp);
+xcon->sp.lock();

xcon->flush_out_queues(flags|CState::OP_FLAG_LOCKED);

session_state = session_states::UP;
startup_state = session_startup_states::READY;

if (! (flags & CState::OP_FLAG_LOCKED))
-pthread_spin_unlock(&xcon->sp);
+xcon->sp.unlock();

return (0);
}
@@ -806,20 +803,20 @@ int XioConnection::CState::state_discon()
int XioConnection::CState::state_flow_controlled(uint32_t flags)
{
if (! (flags & OP_FLAG_LOCKED))
-pthread_spin_lock(&xcon->sp);
+xcon->sp.lock();

session_state = session_states::FLOW_CONTROLLED;

if (! (flags & OP_FLAG_LOCKED))
-pthread_spin_unlock(&xcon->sp);
+xcon->sp.unlock();

return (0);
}

int XioConnection::CState::state_fail(Message* m, uint32_t flags)
{
if (! (flags & OP_FLAG_LOCKED))
-pthread_spin_lock(&xcon->sp);
+xcon->sp.lock();

Contributor:
If we're going to pass a 'locked' or 'unlocked' flag around, then it's better to just pass a reference to unique_lock

This might be more involved than you want to do right now, though.

On the other hand it's one of those "We're touching it doing cleanups anyway, we may as well get all the obvious, relevant ones done" sorta' things.

And since a flag stating whether something is locked and a lock are basically unique_lock except wandering around separately and not exception safe and not as idiomatic, I'd kind of like to replace use of the LOCKED flag with unique_lock.

Contributor Author:
Yeah, TBH I'm a bit sick today and thus taking the lazy route exactly as you surmised. ;-)
I also think the current implementation is probably more complicated than it needs to be. I'm ok taking time to Do The Right Thing(TM).
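The reviewer's suggestion can be sketched as follows. Passing a `std::unique_lock` by reference makes the lock state explicit and exception safe, instead of threading an `OP_FLAG_LOCKED` bit alongside a bare spinlock. `Conn` and `flush_queues` are illustrative stand-ins, not the XioConnection API, and `std::mutex` stands in for `ceph::util::spinlock` (which, providing `lock()`/`unlock()`, would also satisfy `std::unique_lock`):

```cpp
#include <cassert>
#include <mutex>

struct Conn {
  std::mutex sp;  // stand-in for ceph::util::spinlock

  // Callee takes the lock by reference: it can verify ownership and
  // never has to branch on a "was I called locked?" flag.
  int flush_queues(std::unique_lock<std::mutex>& lk) {
    assert(lk.owns_lock());
    // ... drain queues under the lock ...
    return 0;
  }

  int mark_down() {
    // unique_lock releases on every exit path, including exceptions.
    std::unique_lock<std::mutex> lk(sp);
    return flush_queues(lk);
  }
};
```

A caller that already holds the lock simply passes its `unique_lock` along, which is exactly the case the `OP_FLAG_LOCKED` flag was trying to encode.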

// advance to state FAIL, drop queued msgs, adjust LRU
session_state = session_states::DISCONNECTED;
@@ -831,7 +828,7 @@ int XioConnection::CState::state_fail(Message* m, uint32_t flags)
xcon->disconnect();

if (! (flags & OP_FLAG_LOCKED))
-pthread_spin_unlock(&xcon->sp);
+xcon->sp.unlock();

// notify ULP
XioMessenger* msgr = static_cast<XioMessenger*>(xcon->get_messenger());
2 changes: 1 addition & 1 deletion src/msg/xio/XioConnection.h
@@ -72,7 +72,7 @@ class XioConnection : public Connection
entity_inst_t peer;
struct xio_session *session;
struct xio_connection *conn;
-std::atomic_flag sp = ATOMIC_FLAG_INIT;
+ceph::util::spinlock sp;
std::atomic<int64_t> send = { 0 };
std::atomic<int64_t> recv = { 0 };
uint32_t n_reqs; // Accelio-initiated reqs in progress (!counting partials)
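The new member's type, `ceph::util::spinlock`, presumably wraps the `std::atomic_flag` it replaces in a BasicLockable interface. A minimal sketch of such a class (the real ceph implementation may differ):

```cpp
#include <atomic>
#include <mutex>  // std::lock_guard in the usage below

class spinlock {
  std::atomic_flag af = ATOMIC_FLAG_INIT;

 public:
  void lock() {
    // Spin until the previous holder clears the flag; acquire ordering
    // makes the critical section's reads see the holder's writes.
    while (af.test_and_set(std::memory_order_acquire))
      ;
  }

  void unlock() {
    // Release ordering publishes our writes to the next lock() winner.
    af.clear(std::memory_order_release);
  }
};
```

Because it provides `lock()`/`unlock()`, it composes with `std::lock_guard<spinlock>` and `std::unique_lock<spinlock>`, which is what makes the `lock_guard` conversion elsewhere in this PR possible.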