Raiden doesn't replay all past state changes before starting the Raiden Service #4498

Closed
eorituz opened this issue Jul 31, 2019 · 9 comments · Fixed by #4550
@eorituz
Contributor

eorituz commented Jul 31, 2019

Problem Definition

The Raiden client doesn't replay all past state changes before starting the Raiden Service.
(In my case that led to a crash of the echo node.)

raiden.log

2019-07-31 15:44:39.172715 [debug ] Raiden Service started [raiden.raiden_service] node=0xdD84b4E3B4Fb3b7AD427Fa0A0FF2009ca4B363da
This happens well before the past state changes are replayed:
2019-07-31 15:44:57.085114 [debug ] State changes [raiden.raiden_service] greenlet_name=AlarmTask._run node:0xdD84b4E3B4Fb3b7AD427Fa0A0FF2009ca4B363da node=0xdD84b4E3B4Fb3b7AD427Fa0A0FF2009ca4B363da state_changes=['{"block_number": "4830525", "gas_limit": "7073246", ...

Expectation

I'd expect the Raiden client to replay all past state changes before starting the Raiden service.

Reproduce

Start Raiden with a new datadir on any network (tested with Goerli and Rinkeby).

System Description

Raiden version v0.100.5.dev0.
Used bbot-internal Ethereum nodes.

@eorituz eorituz self-assigned this Jul 31, 2019
@eorituz eorituz changed the title Echo Node crashes with DuplicatedChannelError Raiden doesn't replay all past state changes before starting the Raiden Service Jul 31, 2019
@eorituz eorituz removed their assignment Jul 31, 2019
@LefterisJP
Contributor

As promised, I will take a first look at this.

@LefterisJP LefterisJP self-assigned this Aug 1, 2019
@LefterisJP LefterisJP added this to Backlog in Raiden Client via automation Aug 1, 2019
@LefterisJP LefterisJP moved this from Backlog to In progress in Raiden Client Aug 1, 2019
@LefterisJP
Contributor

@eorituz I can't say too much from your logs. I would need:

  1. The exact command you used to run Raiden.
  2. The database.
  3. The full debug logs of all the runs you made. The logs you attached are just the stdout logs, which have to be parsed by eye. The debug logs have JSON entries and as such are easier to parse programmatically. First and foremost, I need to see the debug logs of the crash to see what goes wrong; there is no crash in the logs you have posted.

What I can say from the logs you have posted is that:

  1. This is the first run and it is in Rinkeby.
  2. It considers block number 4801551, and not 0, as the block from which all queries should start. I guess that's due to the deployment block of the TokenNetworkRegistry for the network you are using. And it sees no blockchain events to query since that block.

Have you tried to deploy a new TokenNetwork in that network? With v0.100.5.dev0 new contracts were deployed.

@LefterisJP LefterisJP assigned eorituz and unassigned LefterisJP Aug 2, 2019
@eorituz
Contributor Author

eorituz commented Aug 5, 2019

Thanks for the review @LefterisJP.
I guess I didn't describe the problem properly. But let me first reply to your questions:

Answers

  1. I did some debugging myself and with ulo and found out that, no matter what chain you're using, the client never replays all state changes before RaidenService gets called (if it's a "first start" with no existing DB).

I messed up the logs so I created new ones:
raiden-debug_2019-08-05T08:45:13.250303.log
my config file:
raiden
my db:
db.zip
my command:
raiden --config-file XXX --routing-mode "private" --log-config raiden:debug

Problem

The Raiden node itself starts perfectly fine. However, the Raiden service gets started before the replay of all known blocks is completed:

  • At 2019-08-05 08:45:23.343449 the debug log says latest_block_number=4857651
  • At 2019-08-05 08:46:04.000014 the Raiden service gets started
  • At 2019-08-05 08:46:14.812780 the state changes of block 4857649 get replayed

The problem now is that when I use an echo node (or any other software that starts interacting with Raiden as soon as the API/service is available), this leads to unexpected behavior.
In my case the echo node crashed because it uses JoinTokenNetwork:
--> The Raiden service tells the echo node that there are no existing open channels (since it hasn't replayed all state changes yet). So the echo node tries to open channels with random partners. One of these partners, however, already has an open channel with the echo node. This results in a DuplicatedChannelError.
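To illustrate the race: this is not the echo node's actual code. The REST endpoints follow Raiden's documented /api/v1/channels convention, the port is the assumed default, and everything else is a hypothetical simplification.

import random
import requests

API = "http://localhost:5001/api/v1"  # assumed default REST API port

def join_token_network(token_address, candidate_partners):
    # The node has not replayed its past state changes yet, so this
    # returns an empty list even though on-chain channels already exist.
    channels = requests.get(f"{API}/channels/{token_address}").json()
    known_partners = {channel["partner_address"] for channel in channels}

    # Believing it has no channels, the echo node picks random partners...
    for partner in random.sample(candidate_partners, k=3):
        if partner in known_partners:
            continue
        # ...but one of them already has an open channel with this node,
        # so the channel open fails with a DuplicatedChannelError.
        requests.put(
            f"{API}/channels",
            json={"token_address": token_address, "partner_address": partner},
        )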

@eorituz eorituz assigned LefterisJP and unassigned eorituz Aug 5, 2019
@ulope
Collaborator

ulope commented Aug 5, 2019

Ah @eorituz was faster.
Here is a processed debug log that shows the same thing.

Note the highlighted records:

  • 11: AlarmTask.first_run()
  • 15: Token networks are found (but no channels)
  • 34: REST API is up, meaning Raiden has finished starting
  • 55: Channels are found from 'old' blocks

@LefterisJP
Contributor

Hey @ulope @eorituz thank you for your additional information.

So I found out what is wrong, but according to the code comments and the git commits there has been some kind of code reorganization around initial blockchain event polling while I was gone that I have no idea about, so I am pinging the committer, @hackaugusto.
commit for reference

So let me first describe what happens.

  1. At initialization we start with an empty list of token networks as can be seen here:

payment_network = PaymentNetworkState(
    self.default_registry.address,
    [],  # empty list of token network states as it's the node's startup
)

  2. Later down the line, at the first run of the alarm task, we query the blockchain events of the already installed blockchain filters. But since the token network list is still empty, no blockchain filter is installed for them:

token_networks = views.get_token_network_addresses(
    node_state, token_network_registry_proxy.address
)

self.blockchain_events.add_token_network_registry_listener(
    token_network_registry_proxy=token_network_registry_proxy,
    contract_manager=self.contract_manager,
    from_block=from_block,
)
self.blockchain_events.add_secret_registry_listener(
    secret_registry_proxy=secret_registry_proxy,
    contract_manager=self.contract_manager,
    from_block=from_block,
)

for token_network_address in token_networks:
    token_network_proxy = self.chain.token_network(token_network_address)
    self.blockchain_events.add_token_network_listener(
        token_network_proxy=token_network_proxy,
        contract_manager=self.contract_manager,
        from_block=from_block,
    )

  3. At the first run of the alarm task we poll blockchain events of the token network registry and the secret registry (but not of token networks, since we don't have any), and the resulting events are sent for handling and tracking:

blockchain_events = self.blockchain_events.poll_blockchain_events(
    confirmed_block_number
)
for event in blockchain_events:
    state_changes.extend(blockchainevent_to_statechange(self, event))

# It's important to /not/ block here, because this function can be
# called from the alarm task greenlet, which should not starve.
#
# All the state changes are dispatched together
self.handle_and_track_state_changes(state_changes)

  4. And here is the unexpected problem. Instead of processing the events and waiting until they are finished (since this is the first run of the alarm task), they are simply added to some pending greenlets which are processed later:

def handle_state_changes(self, state_changes: List[StateChange]) -> List[Greenlet]:
    """ Dispatch the state change and return the processing threads.

    Use this for error reporting, failures in the returned greenlets,
    should be re-raised using `gevent.joinall` with `raise_error=True`.
    """
    assert self.wal, f"WAL not restored. node:{self!r}"
    log.debug(
        "State changes",
        node=to_checksum_address(self.address),
        state_changes=[
            _redact_secret(DictSerializer.serialize(state_change))
            for state_change in state_changes
        ],
    )

    old_state = views.state_from_raiden(self)
    new_state, raiden_event_list = self.wal.log_and_dispatch(state_changes)

    for state_change in state_changes:
        after_blockchain_statechange(self, state_change)

    for changed_balance_proof in views.detect_balance_proof_change(old_state, new_state):
        update_services_from_balance_proof(self, new_state, changed_balance_proof)

    log.debug(
        "Raiden events",
        node=to_checksum_address(self.address),
        raiden_events=[
            _redact_secret(DictSerializer.serialize(event)) for event in raiden_event_list
        ],
    )

    greenlets: List[Greenlet] = list()
    if self.ready_to_process_events:
        for raiden_event in raiden_event_list:
            greenlets.append(
                self.handle_event(chain_state=new_state, raiden_event=raiden_event)
            )

        state_changes_count = self.wal.storage.count_state_changes()
        new_snapshot_group = state_changes_count // SNAPSHOT_STATE_CHANGES_COUNT
        if new_snapshot_group > self.snapshot_group:
            log.debug("Storing snapshot", snapshot_id=new_snapshot_group)
            self.wal.snapshot()
            self.snapshot_group = new_snapshot_group

    return greenlets

  5. So we get out of the alarm task's first call, the API gets initialized, and your interaction with the API caused the duplicated channel problem, @eorituz, since at the same time, under the hood, the greenlets got processed and the channels appeared after the API join() call happened (see the short gevent sketch below).
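A minimal, self-contained gevent sketch (not Raiden code) of the behavior described in points 4 and 5: spawn() merely schedules work, and nothing runs until the current greenlet yields, so anything observing the state in between sees the old, empty view.

import gevent

channels = []

def open_channel(partner):
    channels.append(partner)

# "First run": the events are turned into greenlets but not joined.
pending = [gevent.spawn(open_channel, p) for p in ("0xaa", "0xbb")]

print(channels)  # [] -> an API call at this point would see no channels
gevent.joinall(pending, raise_error=True)
print(channels)  # ['0xaa', '0xbb'] -> the channels only "appear" later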

So @hackaugusto two questions here:

  1. Why was this initially introduced? This question is mostly for my benefit to understand the new architecture here.

From the commit:

These changes were introduced to improve decoupling and to allow all the
events for a given block to be processed in a single transaction.

What does a single transaction here mean?

  2. The solution for this issue would be to add a join of all pending greenlets at the end of the alarm task's first run (shown below), right? Would that still agree with the changed architecture? (A sketch of what that join could look like follows the snippet.)

raiden/raiden/tasks.py

Lines 206 to 221 in 40fd0b9

def first_run(self, known_block_number):
    """ Blocking call to update the local state, if necessary. """
    assert self.callbacks, "callbacks not set"

    latest_block = self.chain.get_block(block_identifier="latest")
    log.debug(
        "Alarm task first run",
        known_block_number=known_block_number,
        latest_block_number=latest_block["number"],
        latest_gas_limit=latest_block["gasLimit"],
        latest_block_hash=to_hex(latest_block["hash"]),
    )

    self.known_block_number = known_block_number
    self.chain_id = self.chain.network_id
    self._maybe_run_callbacks(latest_block)
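For reference, a minimal sketch of what that join could look like. It assumes, hypothetically, that _maybe_run_callbacks hands back the greenlets produced by handle_state_changes; this is not the actual Raiden code.

import gevent

def first_run(self, known_block_number):
    """ Blocking call to update the local state, if necessary. """
    assert self.callbacks, "callbacks not set"

    latest_block = self.chain.get_block(block_identifier="latest")
    self.known_block_number = known_block_number
    self.chain_id = self.chain.network_id

    # Hypothetical: assume the callbacks return the greenlets that were
    # spawned while dispatching the resulting state changes.
    pending_greenlets = self._maybe_run_callbacks(latest_block)

    # Proposed change: block until every handler spawned during the first
    # run has finished, so the API only comes up with a fully replayed state.
    if pending_greenlets:
        gevent.joinall(pending_greenlets, raise_error=True)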

@hackaugusto
Copy link
Contributor

Why was this initially introduced? This question is mostly for my benefit to understand the new architecture here.

If the problem is that the greenlets are not being waited for, then it was introduced by this PR: #2985.

The solution for this issue would be to add a join of all pending greenlets at the end of the alarm task first run here right? Would that still agree with the changed architecture?

Some more thought has to go into this. We have to poll all the filters on the first run, including the filters installed because of first_run itself, but we don't want to wait for the greenlets that are sending transactions; otherwise not only will the first run be slow, but so will restarts.
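A rough sketch of those two constraints together, using hypothetical names (self.blockchain_events.filters, self.handle_blockchain_event) that are not the actual Raiden API: keep polling until a pass installs no new filters, and collect, but do not join, the transaction-sending greenlets.

def first_run_poll(self, confirmed_block_number):
    # Hypothetical sketch, not the actual implementation.
    transaction_greenlets = []
    while True:
        filters_before = len(self.blockchain_events.filters)

        for event in self.blockchain_events.poll_blockchain_events(confirmed_block_number):
            # Handlers may install new filters (e.g. for a newly registered
            # TokenNetwork) and may spawn greenlets that send transactions.
            transaction_greenlets.extend(self.handle_blockchain_event(event))

        if len(self.blockchain_events.filters) == filters_before:
            break  # no new filters were installed, every contract is covered

    # Deliberately not joined here: waiting for on-chain transactions would
    # make the first run, and every restart, slow.
    return transaction_greenlets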

@hackaugusto
Contributor

hackaugusto commented Aug 6, 2019

@LefterisJP On a second look, the greenlets are not related to the problem. The blockchain event callbacks are called here:

for state_change in state_changes:
    after_blockchain_statechange(self, state_change)

and this will be called synchronously:

def after_new_token_network_create_filter(
    raiden: "RaidenService", state_change: ContractReceiveNewTokenNetwork
) -> None:
    """ Handles the creation of a new token network.

    Add the filter used to synchronize the node with the new TokenNetwork smart
    contract.
    """
    block_number = state_change.block_number
    token_network_address = state_change.token_network.address
    token_network_proxy = raiden.chain.token_network(token_network_address)

    raiden.blockchain_events.add_token_network_listener(
        token_network_proxy=token_network_proxy,
        contract_manager=raiden.contract_manager,
        from_block=block_number,
    )

In other words, the filter is not installed by the Raiden event handler, but by the Raiden service itself (previously the blockchain event handler).

Did I miss something from the explanation?

@LefterisJP
Contributor

@hackaugusto I missed that line. Thanks for pointing it out.

But still, looking into the code, the filter is installed but not queried, which I guess is where the problem lies.

LefterisJP added a commit to LefterisJP/raiden that referenced this issue Aug 7, 2019
@LefterisJP LefterisJP moved this from In progress to in review in Raiden Client Aug 8, 2019
LefterisJP added a commit to LefterisJP/raiden that referenced this issue Aug 8, 2019
@hackaugusto
Contributor

Some notes about the event polling:

Think of the smart contracts as a collection of trees, where each node is a smart contract and each deployed smart contract has the contract that deployed it as its parent. For the current implementation these trees have depth 2: the token network registry is the root, and the token networks are its immediate children.

Because of the above, we can get away with just polling the events twice. The first run polls for the events of the known root smart contracts (the token network registries that are configured to be used by the client); this installs the filters for the children, and the second run is for the token networks.

The above strategy can be described with this pseudocode:

def first_run(self):
    for _ in range(CONTRACTS_DEPTH):
        self.poll()

def poll(self):
    for stateless_filter in self.filters:
        for event in stateless_filter.poll_all_events_until_latest():
            process(event)

The problem with the above code is that CONTRACTS_DEPTH has to be kept in sync with the smart contract implementation. An alternative implementation is to process blocks in order; instead of the above, this would be used:

def poll(self):
    for curr_block in range(self.latest_processed_block, self.latest_confirmed_block):
        for stateless_filter in self.filters:
            for event in stateless_filter.get_events_until(curr_block):
                process(event)

The above would process one block at a time; as soon as an event for a new subcontract is seen, process installs the next filter, and on the next block it will be queried. This has the advantage of not needing a special first_run for the BlockchainEvents, and that all events are processed in order, so if there are any state dependencies across smart contracts it will not be a problem.

Internally, get_events_until can be optimized to poll for the events in batches and only return them by block number.
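For illustration, a hedged sketch of what that batching could look like on top of a web3.py-style eth_getLogs backend. Only web3.eth.get_logs and the log field names come from web3.py; the class, the buffering logic, and FILTER_BATCH_SIZE are hypothetical.

from collections import defaultdict

FILTER_BATCH_SIZE = 1_000  # hypothetical batch size

class BatchingFilter:
    """Fetch logs in large ranges, but hand them out one block at a time."""

    def __init__(self, web3, address, last_queried_block):
        self.web3 = web3
        self.address = address
        self.last_queried_block = last_queried_block
        self._buffer = defaultdict(list)  # block_number -> [logs]

    def get_events_until(self, block_number, latest_confirmed_block):
        # When nothing is buffered for the requested block, fetch a whole
        # batch ahead (bounded by the confirmed block) in a single
        # eth_getLogs request, then serve the per-block calls from memory.
        if block_number > self.last_queried_block:
            to_block = min(latest_confirmed_block, block_number + FILTER_BATCH_SIZE)
            logs = self.web3.eth.get_logs({
                "address": self.address,
                "fromBlock": self.last_queried_block + 1,
                "toBlock": to_block,
            })
            for log_entry in logs:
                self._buffer[log_entry["blockNumber"]].append(log_entry)
            self.last_queried_block = to_block
        # Only return the events for the requested block.
        return self._buffer.pop(block_number, [])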

The above, however, still has one corner case: when a contract is deployed and a transaction is optimistically sent, generating two events at the same block. To fix this, a stack is probably the way to go:

def poll(self):
    for curr_block in range(self.latest_processed_block, self.latest_confirmed_block):
        pending_filters = list(self.filters)
        while pending_filters:
            stateless_filter = pending_filters.pop()
            for event in stateless_filter.get_events_until(curr_block):
                process(event, pending_filters)  # If necessary add the new filter to the stack

hackaugusto added a commit to hackaugusto/raiden that referenced this issue Dec 18, 2019
The first run can take a bit of time, depending on the number of token
networks registered and when the original registry smart contract was
deployed. This does not improve how long it takes to synchronize with
these old and full registries, but it does make sure that while doing
so, less memory is used, and the work is not lost if the node is
restarted.

This was achieved by using the same approach as the StatelessFilter,
where the queries are done in batches to avoid timeouts while doing the
request; in the first run this can be used to limit the number of state
changes that are held in memory at any time. In order to make sure the
batches of the RaidenService and the underlying filters are aligned, the
same constant for the batch size was used.

This does fix one bug: the fix for issue raiden-network#4498 addressed the problem of
not fetching the events from a newly registered token network in the same
batch, but only for the initialization and without taking recoverability
into account. This fixes that bug in a general way, by handling this
corner case in the `synchronize_to_confirmed_block_in_batches` method.
Once that method returns, it is known that all events for all the smart
contracts of interest have been handled, and in the event of a crash it
is safe to use the block number from the state machine.

Note that this does not solve the above problem in general, only for
newly registered token networks. The problem in general is a bit harder,
since potentially there may be many layers of smart contracts; when
there is a tree of smart contracts, it would have to be recursively
followed. The general case would require a form of recursion to handle
all cases.
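A rough sketch of the batched synchronization the commit describes. Only the method name synchronize_to_confirmed_block_in_batches, handle_and_track_state_changes, and blockchainevent_to_statechange come from the thread; poll_until, BLOCK_BATCH_SIZE, and last_synced_block are hypothetical.

def synchronize_to_confirmed_block_in_batches(self, confirmed_block_number):
    # Hypothetical sketch, not the actual implementation: process the gap
    # in fixed-size batches so state changes are dispatched (and persisted)
    # per batch instead of all at once.
    while self.last_synced_block < confirmed_block_number:
        batch_end = min(
            confirmed_block_number,
            self.last_synced_block + BLOCK_BATCH_SIZE,  # same constant as the filters
        )
        events = self.blockchain_events.poll_until(batch_end)
        state_changes = [
            state_change
            for event in events
            for state_change in blockchainevent_to_statechange(self, event)
        ]
        # Persisting per batch bounds memory usage and makes the work
        # recoverable: after a crash, resume from the stored block number.
        self.handle_and_track_state_changes(state_changes)
        self.last_synced_block = batch_end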
hackaugusto added a commit to hackaugusto/raiden that referenced this issue Dec 19, 2019
This introduces batch event polling; the goal is to collapse all
requests into one, so that instead of having one request per filter,
only a single batch request is done for all filters.

This is particularly important for our test environments, since it is
possible for a token network registry to end up with hundreds of
registered tokens, where previously that meant an equal number of JSON-RPC
requests per block.

Additionally, this fixes the bug raiden-network#4498 for the runtime of the node too,
and not just for the initialization. This is achieved by only returning a
batch of events after all filters have been installed and fetched.
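For illustration, a hedged sketch of how such a batched query could look with web3.py (eth_getLogs accepts a list of addresses; the function name and the grouping step are a hypothetical simplification, not the Raiden implementation):

def poll_all_filters_once(web3, addresses, from_block, to_block):
    # One eth_getLogs request covering every tracked contract, instead of
    # one request per filter.
    logs = web3.eth.get_logs({
        "address": addresses,  # list of contract addresses
        "fromBlock": from_block,
        "toBlock": to_block,
    })
    # Group the raw logs per emitting contract so each filter's handler
    # can decode its own events.
    events_by_contract = {}
    for log_entry in logs:
        events_by_contract.setdefault(log_entry["address"], []).append(log_entry)
    return events_by_contract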