
client rejected message sent by peer #978

Open
abitmore opened this issue May 26, 2018 · 4 comments

@abitmore (Member) commented May 26, 2018

After the initial sync, when witness_node is in normal running mode, I noticed lots of messages like this in p2p.log (at log level warn):

    p2p:message read_loop process_ordinary_mes ] client rejected message sent by peer xxx.xxx.xxx.xxx {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":{"level":"error","file":"db_block.cpp","line":579,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"x"},"format":"(skip & skip_transaction_dupe_check) || trx_idx.indices().get<by_trx_id>().find(trx_id) == trx_idx.indices().get<by_trx_id>().end():

This occurs when the p2p layer fails to push a transaction to the object database.
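
For context, here is a minimal sketch of the dupe check that trips the assert above. This is not the actual graphene implementation (which uses a boost multi_index container with a by_trx_id index inside _apply_transaction); it only illustrates the behavior: the second push of the same transaction id throws, and the p2p layer then logs that throw at warn level.

    // Simplified, illustrative stand-in for the by_trx_id index consulted in _apply_transaction.
    #include <stdexcept>
    #include <string>
    #include <unordered_set>

    struct pending_trx_index {
        std::unordered_set<std::string> by_trx_id;

        // Mirrors the quoted FC_ASSERT: throws on a duplicate transaction id
        // unless the dupe check is explicitly skipped.
        void apply_transaction(const std::string& trx_id, bool skip_dupe_check = false) {
            if (!skip_dupe_check && by_trx_id.count(trx_id))
                throw std::runtime_error("Assert Exception: duplicate transaction " + trx_id);
            by_trx_id.insert(trx_id);
        }
    };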

Possible optimization:

  • The p2p layer shouldn't push duplicate transactions / blocks to the chain logic layer. I think there is already a mechanism for this, but perhaps it can be improved (see the sketch after this list); btw #517 is related.
  • Reduce the level of this logging message if it's expected to occur a lot.
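
A hedged sketch of what the first bullet might look like: a small bounded cache of content hashes that the p2p layer consults before handing an item to the chain layer. The names (seen_content_cache, should_push_to_chain) are hypothetical, not taken from node.cpp.

    #include <cstddef>
    #include <deque>
    #include <string>
    #include <unordered_set>

    // Hypothetical bounded cache of recently seen content hashes.
    class seen_content_cache {
        std::unordered_set<std::string> seen_;
        std::deque<std::string> order_;  // FIFO order for eviction, to bound memory use
        static constexpr std::size_t capacity_ = 10000;
    public:
        // Returns true the first time a content hash is seen, false on repeats,
        // letting the p2p layer skip the chain push that would throw the assert.
        bool should_push_to_chain(const std::string& content_hash) {
            if (!seen_.insert(content_hash).second)
                return false;  // duplicate: don't hand it to the chain logic layer
            order_.push_back(content_hash);
            if (order_.size() > capacity_) {
                seen_.erase(order_.front());
                order_.pop_front();
            }
            return true;
        }
    };

The FIFO eviction keeps memory bounded while still catching duplicates that arrive close together in time, which is the common case for gossip traffic.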

Thoughts?

@abitmore (Member, Author) commented May 26, 2018

By the way, the biggest p2p log file on my test node is 35 MB, and it covers one hour. That means we need around 35 MB * 24 * 7 ~= 6 GB of disk space to store a week of p2p log files (which IMO is fine).

@btsabc commented Jun 6, 2018

I think there are 2 possible ways this can happen:

  1. The transaction message hash in the p2p layer covers the full contents of the transaction, but the blockchain layer's transaction id (hash) does not include the signature, so the p2p layer cannot prevent receiving the same transaction (same id) again with different signatures.
  2. node.cpp::on_item_ids_inventory_message does not in fact check messages received from inactive connections.

So we should add new checking code.
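
To illustrate point 1, here is a minimal self-contained sketch. The types and std::hash stand-ins are hypothetical (graphene's real types and hash functions differ); it only demonstrates the mismatch: re-signing the same transaction changes the p2p message hash but not the chain-level transaction id, so the p2p duplicate filter misses what the chain layer later rejects.

    #include <functional>
    #include <iostream>
    #include <string>

    // Hypothetical stand-in for a signed transaction.
    struct signed_trx {
        std::string body;       // operations, expiration, etc.
        std::string signature;  // changes if the same trx is re-signed

        // chain-layer id: signature excluded, like the blockchain transaction id
        std::size_t chain_id() const { return std::hash<std::string>{}(body); }

        // p2p message hash: computed over the full message contents, signature included
        std::size_t p2p_hash() const { return std::hash<std::string>{}(body + signature); }
    };

    int main() {
        signed_trx a{"transfer A->B, 10 BTS", "sig1"};
        signed_trx b{"transfer A->B, 10 BTS", "sig2"};  // same trx, re-signed
        std::cout << (a.chain_id() == b.chain_id()) << '\n';  // 1: chain layer sees a duplicate
        std::cout << (a.p2p_hash() == b.p2p_hash()) << '\n';  // 0: p2p cache does not
    }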

Related code in node.cpp:

      graphene::net::trx_message transaction_message_to_broadcast = item_to_broadcast.as<graphene::net::trx_message>();
      hash_of_message_contents = transaction_message_to_broadcast.trx.id(); // for debugging
      dlog( "broadcasting trx: ${trx}", ("trx", transaction_message_to_broadcast) );
    }
    message_hash_type hash_of_item_to_broadcast = item_to_broadcast.id();
    _message_cache.cache_message( item_to_broadcast, hash_of_item_to_broadcast, propagation_data, hash_of_message_contents );
    _new_inventory.insert( item_id(item_to_broadcast.msg_type, hash_of_item_to_broadcast ) );

    void node_impl::on_item_ids_inventory_message(peer_connection* originating_peer, const item_ids_inventory_message& item_ids_inventory_message_received)
    {
      VERIFY_CORRECT_THREAD();
      // expire old inventory so we'll be making decisions about whether to fetch blocks below based only on recent inventory
      originating_peer->clear_old_inventory();
      dlog( "received inventory of ${count} items from peer ${endpoint}",
            ( "count", item_ids_inventory_message_received.item_hashes_available.size() )("endpoint", originating_peer->get_remote_endpoint() ) );
      for( const item_hash_t& item_hash : item_ids_inventory_message_received.item_hashes_available )
      {
        item_id advertised_item_id(item_ids_inventory_message_received.item_type, item_hash);
        bool we_advertised_this_item_to_a_peer = false;
        bool we_requested_this_item_from_a_peer = false;
        for (const peer_connection_ptr peer : _active_connections)
        {
          if (peer->inventory_advertised_to_peer.find(advertised_item_id) != peer->inventory_advertised_to_peer.end())
          {
            we_advertised_this_item_to_a_peer = true;
            break;
          }
          if (peer->items_requested_from_peer.find(advertised_item_id) != peer->items_requested_from_peer.end())
            we_requested_this_item_from_a_peer = true;
        }
        // if we have already advertised it to a peer, we must have it, no need to do anything else
        if (!we_advertised_this_item_to_a_peer)
        {
          // ... (rest of the handler omitted in this excerpt)

@ryanRfox added this to New -Awaiting Core Team Evaluation in Project Backlog via automation Jun 6, 2018

@abitmore (Member, Author) commented Nov 24, 2018

fill_or_kill is another thing that is spamming the p2p log heavily.

      catch ( const fc::exception& e )
      {
        wlog( "client rejected message sent by peer ${peer}, ${e}", ("peer", originating_peer->get_remote_endpoint() )("e", e) );
        // record it so we don't try to fetch this item again
        _recently_failed_items.insert(peer_connection::timestamped_item_id(item_id(message_to_process.msg_type, message_hash ), fc::time_point::now()));
        return;
      }

It's better to only log useful exceptions.
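
One way to do that, as a hedged self-contained sketch (the code values and the level_for_rejection helper are illustrative, not existing node.cpp or fc API): route expected rejection codes to debug level and keep warn for everything else.

    #include <cstdint>
    #include <iostream>
    #include <set>

    enum class log_level { debug, warn };

    // Hypothetical helper deciding how loudly to log a peer rejection.
    log_level level_for_rejection(std::int64_t exception_code) {
        // code 10 is assert_exception, as shown in the quoted log entry;
        // it covers the dupe-check and fill_or_kill rejections discussed above.
        static const std::set<std::int64_t> expected{10};
        return expected.count(exception_code) ? log_level::debug : log_level::warn;
    }

    int main() {
        std::cout << (level_for_rejection(10) == log_level::debug) << '\n';  // 1: expected rejections stay quiet
        std::cout << (level_for_rejection(13) == log_level::warn)  << '\n';  // 1: other codes still warn
    }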

@abitmore self-assigned this Jul 20, 2019

@abitmore added this to To do in Feature Release (3.3.0) via automation Jul 20, 2019

@abitmore removed this from New -Awaiting Core Team Evaluation in Project Backlog Jul 20, 2019

@abitmore moved this from To do to In development in Feature Release (3.3.0) Jul 31, 2019

@abitmore (Member, Author) commented Aug 1, 2019

Suppressed some p2p logs in #1875, but that didn't deal with the duplicate transaction pushing issue, so I'd like to keep this issue open.

@jmjatlanta moved this from In development to Done in Feature Release (3.3.0) Aug 7, 2019

@abitmore added this to New -Awaiting Core Team Evaluation in Project Backlog via automation Aug 7, 2019

@abitmore removed this from Done in Feature Release (3.3.0) Aug 7, 2019
