
memcached crashes on setting first key #64

Closed

0xmjk opened this issue Oct 18, 2015 · 11 comments

@0xmjk

0xmjk commented Oct 18, 2015

Hi,

I get the following error:

# /src/seastar/build/debug/apps/memcached/memcached
WARNING: debug mode. Not for benchmarking or production
seastar memcached v1.0
memcached: ./core/iostream.hh:197: output_stream<CharType>::~output_stream() [with CharType = char]: Assertion `!_in_batch' failed.
Aborted

when running a simple test:

#!/usr/bin/perl
# $Header: $

use strict;
use warnings;

use Cache::Memcached;

my $memd = new Cache::Memcached {
    'servers' => [ "127.0.0.1:11211" ],
    'debug' => 1,
};

$memd->set("my_key", "Some value");
exit;

I get quite a lot of errors when running ./test.py (http://pastebin.com/d2dkePc0).
The above error seems related to a commit from a couple of days ago. However, I'm not sure whether it's that commit or my setup - I'm running under VirtualBox and Docker; my lscpu output:

# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 70
Model name:            Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz
Stepping:              1
CPU MHz:               2498.386
BogoMIPS:              4996.77
Hypervisor vendor:     Oracle
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L3 cache:              6144K
NUMA node0 CPU(s):     0-3

And Dockerfile:

FROM fedora:21
RUN yum install -y gcc-c++ libaio-devel ninja-build ragel hwloc-devel numactl-devel \
    libpciaccess-devel cryptopp-devel xen-devel boost-devel libxml2-devel xfsprogs-devel \
    python3 libasan libubsan git \
    vim gdb
RUN mkdir /src
RUN cd /src && git clone https://github.com/scylladb/seastar
RUN cd /src/seastar && ./configure.py && ninja-build

Thanks!

@avikivity
Member

What's the commit hash you were running? A bug was fixed recently in this area.

@gleb-cloudius

@gleb-cloudius
Contributor

On Sun, Oct 18, 2015 at 09:10:04AM -0700, Avi Kivity wrote:

What's the commit hash you were running? A bug was fixed recently in this area.

@gleb-cloudius

Looking at memcached, I do not see it closing the output stream. That will
cause this assertion to fire.

        Gleb.
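
For context, the assertion encodes a contract: a Seastar output_stream buffers writes, and its destructor cannot wait for a pending batch flush, so the stream must be close()d before it is destroyed. A toy model of that contract (plain C++, not Seastar's actual code):

#include <cassert>

// Toy model of the close-before-destroy contract; _in_batch stands for a
// pending batch flush, as in core/iostream.hh.
class toy_output_stream {
    bool _in_batch = false;
public:
    void write() { _in_batch = true; }    // a buffered write starts a batch
    void close() { _in_batch = false; }   // close() drains the batch
    ~toy_output_stream() { assert(!_in_batch); }  // the report's assertion
};

int main() {
    toy_output_stream s;
    s.write();
    s.close();  // omit this line and the destructor aborts with
                // `!_in_batch' failed, as in the report above
    return 0;
}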

@0xmjk
Author

0xmjk commented Oct 18, 2015

@avikivity It's commit 46fd389. Thanks!

@0xmjk
Author

0xmjk commented Oct 18, 2015

@gleb-cloudius when I close the output stream it is not jolly at all:

diff --git a/apps/memcached/memcache.cc b/apps/memcached/memcache.cc
index 659c66b..f487c94 100644
--- a/apps/memcached/memcache.cc
+++ b/apps/memcached/memcache.cc
@@ -1303,7 +1303,9 @@ public:
                 auto conn = make_lw_shared<connection>(std::move(fd), addr, _cache, _system_stats);
                 do_until([conn] { return conn->_in.eof(); }, [this, conn] {
                     return conn->_proto.handle(conn->_in, conn->_out).then([conn] {
-                        return conn->_out.flush();
+                        return conn->_out.flush().then([conn] {
+                            return conn->_out.close();
+                        });
                     });
                 });
             });
# build/debug/apps/memcached/memcached
WARNING: debug mode. Not for benchmarking or production
seastar memcached v1.0
/usr/include/c++/4.9.2/bits/unique_ptr.h:291:14: runtime error: reference binding to null pointer of type 'struct pollable_fd_state'
./core/reactor.hh:1130:48: runtime error: reference binding to null pointer of type 'struct pollable_fd_state'
./core/reactor.hh:1072:6: runtime error: reference binding to null pointer of type 'struct pollable_fd_state'
./core/reactor.hh:1072:6: runtime error: reference binding to null pointer of type 'struct pollable_fd_state'
core/reactor.hh:908:36: runtime error: reference binding to null pointer of type 'struct pollable_fd_state'
core/reactor.cc:385:68: runtime error: reference binding to null pointer of type 'struct pollable_fd_state'
core/reactor.cc:347:26: runtime error: member access within null pointer of type 'struct pollable_fd_state'
ASAN:SIGSEGV
=================================================================
==441==ERROR: AddressSanitizer: SEGV on unknown address 0x00000000000c (pc 0x000000d56e9b sp 0x7fffe3578b40 bp 0x7fffe3578c30 T0)
    #0 0xd56e9a in reactor_backend_epoll::get_epoll_future(pollable_fd_state&, promise<> pollable_fd_state::*, int) core/reactor.cc:347
    #1 0xd57df2 in reactor_backend_epoll::readable(pollable_fd_state&) core/reactor.cc:385
    #2 0xef2215 in reactor::readable(pollable_fd_state&) core/reactor.hh:908
    #3 0x11a4d3d in reactor::read_some(pollable_fd_state&, void*, unsigned long) core/reactor.hh:1072
    #4 0x11a653a in pollable_fd::read_some(char*, unsigned long) core/reactor.hh:1130
    #5 0x1163763 in net::posix_data_source_impl::get() net/posix-stack.cc:140
    #6 0x47b106 in data_source::get() (/src/seastar/build/debug/apps/memcached/memcached+0x47b106)
    #7 0x4e0a60 in future<> input_stream<char>::consume<memcache_ascii_parser>(memcache_ascii_parser&) (/src/seastar/build/debug/apps/memcached/memcached+0x4e0a60)
    #8 0x4aaff4 in memcache::ascii_protocol::handle(input_stream<char>&, output_stream<char>&) (/src/seastar/build/debug/apps/memcached/memcached+0x4aaff4)
    #9 0x4b20a4 in memcache::tcp_server::start()::{lambda()#1}::operator()() const::{lambda(connected_socket, socket_address)#1}::operator()({lambda()#1}, connected_socket)::{lambda()#2}::operator()() const (/src/seastar/build/debug/apps/memcached/memcached+0x4b20a4)
    #10 0x439941 in do_until_continued<memcache::tcp_server::start()::<lambda()>::<lambda(connected_socket, socket_address)> mutable::<lambda()>, memcache::tcp_server::start()::<lambda()>::<lambda(connected_socket, socket_address)> mutable::<lambda()>&> core/future-util.hh:145
    #11 0x42a858 in void do_until_continued<memcache::tcp_server::start()::{lambda()#1}::operator()() const::{lambda(connected_socket, socket_address)#1}::operator()({lambda()#1}, connected_socket)::{lambda()#2}, memcache::tcp_server::start()::{lambda()#1}::operator()() const::socket_address::operator()({lambda()#1}, connected_socket)::{lambda()#1}>(memcache::tcp_server::start()::{lambda()#1}::operator()() const::socket_address::operator()({lambda()#1}, connected_socket)::{lambda()#1}&&, memcache::tcp_server::start()::{lambda()#1}::operator()() const::{lambda(connected_socket, socket_address)#1}::operator()({lambda()#1}, connected_socket)::{lambda()#2}&&, promise<>)::{lambda(future<>)#1}::operator()(memcache::tcp_server::start()::{lambda()#1}::operator()() const::{lambda(connected_socket, socket_address)#1}::operator()({lambda()#1}, connected_socket)::{lambda()#2}&&) (/src/seastar/build/debug/apps/memcached/memcached+0x42a858)
    #12 0x45a6f7 in do_void_futurize_apply<do_until_continued(StopCondition&&, AsyncAction&&, promise<>) [with AsyncAction = memcache::tcp_server::start()::<lambda()>::<lambda(connected_socket, socket_address)> mutable::<lambda()>; StopCondition = memcache::tcp_server::start()::<lambda()>::<lambda(connected_socket, socket_address)> mutable::<lambda()>]::<lambda(std::result_of_t<memcache::tcp_server::start()::<lambda()>::<lambda(connected_socket, socket_address)> mutable::<lambda()>()>)>, future<> > core/future.hh:1078
    #13 0x44bf15 in apply<do_until_continued(StopCondition&&, AsyncAction&&, promise<>) [with AsyncAction = memcache::tcp_server::start()::<lambda()>::<lambda(connected_socket, socket_address)> mutable::<lambda()>; StopCondition = memcache::tcp_server::start()::<lambda()>::<lambda(connected_socket, socket_address)> mutable::<lambda()>]::<lambda(std::result_of_t<memcache::tcp_server::start()::<lambda()>::<lambda(connected_socket, socket_address)> mutable::<lambda()>()>)>, future<> > core/future.hh:1126
    #14 0x468643 in future<> future<>::then_wrapped<void do_until_continued<memcache::tcp_server::start()::{lambda()#1}::operator()() const::{lambda(connected_socket, socket_address)#1}::operator()({lambda()#1}, connected_socket)::{lambda()#2}, memcache::tcp_server::start()::{lambda()#1}::operator()() const::socket_address::operator()({lambda()#1}, connected_socket)::{lambda()#1}>(memcache::tcp_server::start()::{lambda()#1}::operator()() const::socket_address::operator()({lambda()#1}, connected_socket)::{lambda()#1}&&, memcache::tcp_server::start()::{lambda()#1}::operator()() const::{lambda(connected_socket, socket_address)#1}::operator()({lambda()#1}, connected_socket)::{lambda()#2}&&, promise<>)::{lambda(future<>)#1}, future<> >(void do_until_continued<memcache::tcp_server::start()::{lambda()#1}::operator()() const::{lambda(connected_socket, socket_address)#1}::operator()({lambda()#1}, connected_socket)::{lambda()#2}, memcache::tcp_server::start()::{lambda()#1}::operator()() const::socket_address::operator()({lambda()#1}, connected_socket)::{lambda()#1}>(memcache::tcp_server::start()::{lambda()#1}::operator()() const::socket_address::operator()({lambda()#1}, connected_socket)::{lambda()#1}&&, memcache::tcp_server::start()::{lambda()#1}::operator()() const::{lambda(connected_socket, socket_address)#1}::operator()({lambda()#1}, connected_socket)::{lambda()#2}&&, promise<>)::{lambda(future<>)#1}&&)::{lambda(memcache::tcp_server::start()::{lambda()#1}::operator()() const::socket_address::operator()({lambda()#1}, connected_socket)::{lambda()#1})#1}::operator()<future_state<> >(auto, promise<>) (/src/seastar/build/debug/apps/memcached/memcached+0x468643)
    #15 0x468e56 in run core/future.hh:362
    #16 0xd6e9c2 in reactor::run_tasks(circular_buffer<std::unique_ptr<task, std::default_delete<task> >, std::allocator<std::unique_ptr<task, std::default_delete<task> > > >&) core/reactor.cc:1222
    #17 0xd7330c in reactor::run() core/reactor.cc:1348
    #18 0x1126e77 in app_template::run_deprecated(int, char**, std::function<void ()>&&) core/app-template.cc:122
    #19 0x412700 in main apps/memcached/memcache.cc:1406
    #20 0x7f57aeedffdf in __libc_start_main (/lib64/libc.so.6+0x1ffdf)
    #21 0x40d9a8 (/src/seastar/build/debug/apps/memcached/memcached+0x40d9a8)

AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV core/reactor.cc:347 reactor_backend_epoll::get_epoll_future(pollable_fd_state&, promise<> pollable_fd_state::*, int)
==441==ABORTING

@gleb-cloudius
Contributor

On Sun, Oct 18, 2015 at 09:52:44AM -0700, 0xmjk wrote:

@gleb-cloudius when I close the output stream it is not jolly at all:

You need to keep conn alive while close() is running. Try changing the close
line to:
return conn->_out.close().finally([conn]{});


        Gleb.
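
The idiom suggested above works because capturing the smart pointer by value in a continuation keeps its reference count above zero until the continuation has run. A minimal self-contained sketch of the same mechanism with std::shared_ptr and a hand-rolled task queue (plain C++, not Seastar):

#include <functional>
#include <iostream>
#include <memory>
#include <vector>

struct connection {
    ~connection() { std::cout << "connection destroyed\n"; }
    void close()  { std::cout << "closing\n"; }
};

int main() {
    std::vector<std::function<void()>> tasks;  // stands in for the reactor's task queue
    {
        auto conn = std::make_shared<connection>();
        // Capturing conn by value makes the deferred task a co-owner,
        // so the connection outlives this scope.
        tasks.push_back([conn] { conn->close(); });
    }  // conn leaves scope here; the object survives inside the task
    for (auto& t : tasks) t();  // close() runs on a live object; the object
                                // is destroyed when the last task is
    return 0;
}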

@0xmjk
Author

0xmjk commented Oct 18, 2015

Hmm, I've tried that, and I also tried:

diff --git a/apps/memcached/memcache.cc b/apps/memcached/memcache.cc
index 659c66b..ceca3e8 100644
--- a/apps/memcached/memcache.cc
+++ b/apps/memcached/memcache.cc
@@ -1303,7 +1303,7 @@ public:
                 auto conn = make_lw_shared<connection>(std::move(fd), addr, _cache, _system_stats);
                 do_until([conn] { return conn->_in.eof(); }, [this, conn] {
                     return conn->_proto.handle(conn->_in, conn->_out).then([conn] {
-                        return conn->_out.flush();
+                        return conn->_out.close().finally([conn] {});
                     });
                 });
             });

which should call flush(), according to what's in iostream-impl:

template <typename CharType>
future<>
output_stream<CharType>::close() {
    return flush().finally([this] {
        if (_in_batch) {
            return _in_batch.value().get_future();
        } else {
            return make_ready_future();
        }
    }).then([this] {
        // report final exception as close error
        if (_ex) {
            std::rethrow_exception(_ex);
        }
    }).finally([this] {
        return _fd.close();
    });
}

But it is still giving me a stack trace that suggests conn->_in has already been cleaned up.
I'm thinking that maybe I'm doing it in the wrong place? Shouldn't it rather be something like do_until([conn] { ... }).then([conn] { conn->_out.close(); })?

@gleb-cloudius
Contributor

On Sun, Oct 18, 2015 at 11:49:32AM -0700, 0xmjk wrote:

But it is still giving me a stack trace that suggests conn->_in has already been cleaned up.
I'm thinking that maybe I'm doing it in the wrong place? Shouldn't it rather be something like do_until([conn] { ... }).then([conn] { conn->_out.close(); })?

Yes, this looks better. To really be helpful I need to dive into the
code and understand the lifetime of conn object better, but doing it
like above sounds correct from my current understanding.

        Gleb.
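
Why closing inside the loop body crashes: after close() the connection's file descriptor is gone, but do_until is not finished — it re-evaluates the stop condition and runs the body again, and both conn->_in.eof() and the next read end up touching a dead pollable_fd_state, which matches the null-pointer errors in the ASAN trace above. A toy model of that mis-sequencing (plain C++, not Seastar):

#include <iostream>
#include <optional>

// Toy connection: an engaged optional stands for an open file descriptor.
struct toy_connection {
    std::optional<int> fd{3};
    bool eof() {
        if (!fd) {
            // In real Seastar this was a null pollable_fd_state dereference.
            std::cout << "eof() checked on a closed fd!\n";
            return true;
        }
        return false;
    }
    void close() { fd.reset(); }
};

int main() {
    toy_connection conn;
    conn.close();  // loop body (wrongly) closes mid-loop
    conn.eof();    // the loop then re-checks its stop condition on a dead fd
    return 0;
}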

@avikivity
Member

@tgrabiec ^^

@0xmjk
Author

0xmjk commented Oct 19, 2015

I think I resolved it, but I don't understand why it works.

This works:

@@ -1303,7 +1307,7 @@ public:
                 auto conn = make_lw_shared<connection>(std::move(fd), addr, _cache, _system_stats);
                 do_until([conn] { return conn->_in.eof(); }, [this, conn] {
                     return conn->_proto.handle(conn->_in, conn->_out).then([conn] {
-                        return conn->_out.flush();
+                        return conn->_out.flush().then([conn] { conn->_out.close().finally([conn] { });});
                     });
                 });
             });

But this doesn't:

@@ -1303,7 +1307,7 @@ public:
                 auto conn = make_lw_shared<connection>(std::move(fd), addr, _cache, _system_stats);
                 do_until([conn] { return conn->_in.eof(); }, [this, conn] {
                     return conn->_proto.handle(conn->_in, conn->_out).then([conn] {
-                        return conn->_out.flush();
+                        return conn->_out.flush().then([conn] { return conn->_out.close().finally([conn] { });});
                     });
                 });
             });

@gleb-cloudius
Contributor

On Mon, Oct 19, 2015 at 08:32:18AM -0700, 0xmjk wrote:

I think I resolved it, but I don't understand why it works.


Neither of those is correct. I do not know why the first one works; probably
because do_until() does not wait for close() to complete, and finishes
while close() is running in the background.

Closing the connection here is incorrect, since this is the processing loop
and the client has not disconnected yet. As you've said, you should move the
close after do_until, as in the patch below (not tested). Can you check if
it works?

diff --git a/apps/memcached/memcache.cc b/apps/memcached/memcache.cc
index 659c66b..115aa0e 100644
--- a/apps/memcached/memcache.cc
+++ b/apps/memcached/memcache.cc
@@ -1305,6 +1305,8 @@ public:
                     return conn->_proto.handle(conn->_in, conn->_out).then([conn] {
                         return conn->_out.flush();
                     });
+                }).finally([conn] {
+                    return conn->_out.close().finally([conn]{});
                 });
             });
         }).or_terminate();

        Gleb.
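
For readability, this is how the accept handler reads with the patch applied, reconstructed from the diff hunks quoted in this thread (identifiers as shown there). The outer finally runs once per connection, whether the loop ends normally or by exception; the inner [conn] capture keeps the connection alive until close() itself completes; and because the close() future is returned, it is waited on rather than left running detached:

auto conn = make_lw_shared<connection>(std::move(fd), addr, _cache, _system_stats);
do_until([conn] { return conn->_in.eof(); }, [this, conn] {
    return conn->_proto.handle(conn->_in, conn->_out).then([conn] {
        return conn->_out.flush();  // flush after each request, as before
    });
}).finally([conn] {
    // runs after the client disconnects (or on error): close exactly once,
    // holding conn until the close future resolves
    return conn->_out.close().finally([conn] {});
});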

@0xmjk
Author

0xmjk commented Oct 20, 2015

It works, and the tests are much happier: http://pastebin.com/nwwRzfdx
Funnily enough, I had tried that before, but it turned out that the Perl client module I used had some issues communicating with this implementation, so I thought it wasn't working.
