This repository has been archived by the owner on Nov 6, 2020. It is now read-only.

60 TPS ? (parity aura v1.11.11) #9393

Closed
drandreaskrueger opened this issue Aug 21, 2018 · 60 comments
Labels
M2-config 📂 Chain specifications and node configurations. Z1-question 🙋‍♀️ Issue is a question. Closer should answer.
Milestone

Comments

@drandreaskrueger

I am benchmarking Ethereum based PoA chains, with my toolbox chainhammer.

My initial results for a dockerized network of parity aura v1.11.8 nodes ...

... leave room for improvement :-)

Initial -unoptimized- run:

(diagram: chainreader/img/parity-poa-playground_run1_tps-bt-bs-gas_blks108-211.png)

More details here: https://gitlab.com/electronDLT/chainhammer/blob/master/parity.md#benchmarking

Please help ...

... by suggesting what we could try to get this faster than 60 TPS.

(Ideally approx 8 times faster, to beat quorum IBFT.)

Thanks a lot!
Andreas


I'm running:

  • Which Parity version?: v1.11.8
  • Which operating system?: Linux
  • How installed?: via docker
  • Are you fully synchronized?: not relevant
  • Which network are you connected to?: private chain
  • Did you try to restart the node?: yes
  • actual behavior: slow
  • expected behavior: faster
  • steps to reproduce: See parity.md.
@Tbaut
Contributor

Tbaut commented Aug 22, 2018

There is definitely a lot of room for improvement :)
Can you share your setup:

  • how many authorities?
  • how many nodes?
  • are those well connected (same network) ?
  • what is the hardware configuration of the nodes/authorities?
  • how do you broadcast your Txs (do you sign them locally and send them to a node or do you let parity sign them)

@Tbaut Tbaut added the Z1-question 🙋‍♀️ Issue is a question. Closer should answer. label Aug 22, 2018
@Tbaut Tbaut added this to the 2.1 milestone Aug 22, 2018
@ddorgan
Collaborator

ddorgan commented Aug 23, 2018

There are common options to help here. They include:

--jsonrpc-server-threads maybe set to 4 or 8.

--tx-queue-size maybe set to 16536

And also scaling verification via: --scale-verifiers

@5chdn
Contributor

5chdn commented Aug 23, 2018

What's the block gas limit and aura block time?

Please share config and chain spec.

@drandreaskrueger
Author

drandreaskrueger commented Aug 23, 2018

Fantastic, thanks for all the hints (and the tweet ;-) )

Answers are all in here, probably mainly below here.

The author/issue-answerer of that parity-poa-playground seems grumpy - so I am happy that the parity-deploy.sh team is really responsive & helpful. Will try that again instead, next week.

Back in the office on Tuesday. Really looking forward to an optimized run. Have a good weekend, everyone.

@5chdn
Contributor

5chdn commented Aug 23, 2018

What's the block gas limit and aura block time?

Ok, I see, this is pretty much your answer ^

You have three authorities with one second block time and a gas floor target of 6 billion.

This is a very good configuration to test TPS; however, it does start with a lower block gas limit and only moves slowly up to the target. Did you consider running it for an extended period of time (hours, days), or simply modifying the network configuration to start with a very high block gas limit, yet?
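For intuition on the ramp-up described above: the block gas limit can only move toward the floor target by a bounded fraction each block. A minimal sketch, assuming the common 1/1024-per-block adjustment bound (parity's exact divisor may differ) and a hypothetical 8M starting limit:

```python
# Sketch of why the effective block gas limit lags the floor target.
# The 1/1024 divisor is the usual Ethereum adjustment bound; parity's
# exact value may differ, so treat the numbers as an illustration only.

def blocks_until_target(start_limit: int, target: int, divisor: int = 1024) -> int:
    """Count blocks needed for the gas limit to climb up to the target."""
    limit, blocks = start_limit, 0
    while limit < target:
        limit += limit // divisor   # bounded upward adjustment per block
        blocks += 1
    return blocks

# From a hypothetical 8M starting limit up to the 6 billion floor target:
blocks = blocks_until_target(8_000_000, 6_000_000_000)
# With 1-second Aura steps that is several thousand blocks, i.e. a couple
# of hours, which is why a short run never sees the full 6B limit.
```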

@drandreaskrueger
Author

drandreaskrueger commented Aug 24, 2018

...

@drandreaskrueger
Author

drandreaskrueger commented Aug 24, 2018

--jsonrpc-server-threads maybe set to 4 or 8.

Great.
That looks promising. Earlier in my endeavour, I was surprised to see how little multithreading actually helped with parity (*), but this could explain that, yes.

Is that a parity only setting, or can geth do that too?

--tx-queue-size maybe set to 16536

I think that is already set quite high, no?

@drandreaskrueger
Author

drandreaskrueger commented Aug 24, 2018

(*) Actually, back then it was the energywebfoundation "tobalaba" fork of parity. By the way, I think they left some issues unanswered; perhaps one of you has ideas, since after all it is parity 1.8.0, right?

@5chdn
Contributor

5chdn commented Aug 24, 2018

don't use the ewf client. parity ethereum now supports chain tobalaba

@drandreaskrueger
Author

drandreaskrueger commented Aug 24, 2018

parity ethereum now supports chain tobalaba

Great.

Did I lose time benchmarking their outdated client then? Tobalaba was one big hiccup, until they fixed that.

Not sure I will get the time now to repeat all the Tobalaba benchmarking. But feel free to do that yourself; chainhammer is not difficult to use. (Then please open a pull request, and I will include it in chainhammer. Thanks.)

don't use the ewf client.

Is it completely integrated into parity now, with all its EWF added functionality?

Is tobalaba PoA also Aura?

@5chdn
Contributor

5chdn commented Aug 24, 2018

Is it completely integrated into parity now, with all its EWF added functionality?

It always has been. They just rebranded the client.

Is tobalaba PoA also Aura?

Yes

@drandreaskrueger
Author

This is a very good configuration to test TPS

Good. Still, if you know a better setup than that, I am happy to try that next week.

however, it does start with a lower block gas limit

not sure about that.

(1)
Look at the bottom right diagram here, it shows gasUsed and gasLimit
and gas is not maxed out, in contrast to other runs (scroll all the way down here in this early run; there the TPS was clearly limited by gasLimit).

(2)
Also, gasLimit is set to be 0x165A0BC00 = 6 billion, no?

But I might not understand all those parameters yet. Then sorry. What can I read, which parameters are influencing TPS?

Most importantly, as my time is limited:

simply modify the network configuration

Feel free to (simply run chainhammer yourself, or) send me any other configuration that you think will perform better. Does authority.toml & chain.json fully define that, or are there more settings files?

Thanks a lot.

@drandreaskrueger
Author

drandreaskrueger commented Aug 28, 2018

... please send me any other configuration authority.toml & chain.json that you think will perform better.

Thanks.

drandreaskrueger pushed a commit to drandreaskrueger/parity-poa-playground that referenced this issue Aug 29, 2018
@drandreaskrueger
Author

drandreaskrueger commented Aug 29, 2018

new tests

I have tried your suggestions.

But

no acceleration!

See https://gitlab.com/electronDLT/chainhammer/blob/master/parity.md#run2 and below.

63 TPS

that is slow.

... new ideas please. Thanks.

@drandreaskrueger
Author

new run6 with some more parameters added, see description of the run in https://gitlab.com/electronDLT/chainhammer/blob/master/parity.md#run6

--> 65 TPS

@ddorgan
Collaborator

ddorgan commented Aug 29, 2018

Just replicating your setup now. But maybe a --gas-floor-target of something more realistic would be a good idea ... e.g. maybe 20m ... Also moving the stepDuration to about 3 to 4 seconds would be needed in a real life situation when not just benchmarking everything on one host.

@drandreaskrueger
Author

Just replicating your setup now.

Great, I am happy. Thanks a lot for your time, @ddorgan @5chdn and @Tbaut. Let's find out how we can get parity faster - ideally as fast as geth, no?

And? Got it running? Which rates are you able to see? Or: Need help?

--gas-floor-target of something more realistic would be a good idea ... e.g. maybe 20m ...

Thanks. Explanation:

--gas-floor-target Amount of gas per block to target when sealing a new block (default: 4700000).
https://hudsonjameson.com/2017-06-27-accounts-transactions-gas-ethereum/

Please tell me about all those parameters which might accelerate the current setup. A warning: this whole benchmarking is admittedly not a "realistic" setup showing which rates to expect when running a network of nodes distributed across the planet. Internet bandwidth, ping times, etc. will always slow it down. I look at it more as an attempt to identify the current upper limit; any realistic setup will always be slower than what I have measured. However, if you want to, you can just as well create a network with nodes on each continent, and then use chainhammer to benchmark that. So:

Also moving the stepDuration to about 3 to 4 seconds

Yes.

However, I have tested most of the other systems with a fast block rate too; my initial focus was on quorum, and raft consensus has sub-second block times, and quorum IBFT runs without problems with 1-second block times (without the internet, of course).

Parity, however, is not even adhering to its own parameter. I suppose this is a target block time of 2 seconds, right? Yet the run with 4 nodes then results in 4-8 second block times! So aren't we already at the block speed that you are suggesting?

These are the parameters that I have been adding to the standard https://github.com/paritytech/parity-deploy

--geth 
--jsonrpc-server-threads 100
--tx-queue-size 20000
--cache-size 4096
--tracing off
--gas-floor-target 20000000
--pruning fast
--tx-queue-mem-limit 0
--no-dapps
--no-secretstore-http

because I found them somewhere in an issue about speed. Which of them are irrelevant?

@drandreaskrueger
Author

65 TPS

settings & log of run7: https://gitlab.com/electronDLT/chainhammer/blob/master/parity.md#run7

results diagrams: https://gitlab.com/electronDLT/chainhammer/blob/master/parity.md#result-run-7

https://gitlab.com/electronDLT/chainhammer/raw/master/chainreader/img/parity-aura_run7_tps-bt-bs-gas_blks3-90.png

There is a new README.md --> quickstart now ...

... so if you have any intuition or knowledge of how to accelerate this, please replicate my setup, and then start modifying the parameters of the network of parity nodes, e.g. with parity-deploy, until you get better TPS rates.

Then please tell us how you did it. Thanks.

@drandreaskrueger
Author

I found a new upper limit of 69 TPS, but with instantseal and only 1 node.

@AyushyaChitransh

Around June (I don't recall exactly which parity version), we made some scripts to transfer ether from one account to another. The scripts pushed transactions to a 5-server setup (across different geographical regions), and we were able to achieve a maximum TPS of 5001.

  • For a block gas limit the same as mainnet's (8000029): 3001 tx in one block
  • For an increased block gas limit (up to 100 times the mainnet limit): 5001 tx in one block
  • For an increased block gas limit (up to 10000 times the mainnet limit): 5001 tx in one block

These results were obtained from not one but multiple blocks.

Environment:

  • StepDuration: 1, which ensures 1 block issuance per second
  • Number of Authorities: 5
  • Consensus Mechanism: Aura

Other than this, there were no more changes to --jsonrpc-server-threads

Most likely the low TPS you are achieving is because of contract transactions.

Cause:

Contract transactions are heavier in terms of gas used, so a few transactions can completely fill up a block. Maybe increasing the gas limit would allow you to reach a higher TPS.
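A back-of-envelope sketch of the "blocks fill up" point: if block gas were the only constraint, the TPS ceiling follows directly from the gas limit, the per-transaction gas cost, and the block time. The gas figures below are illustrative assumptions, not measurements from this thread:

```python
# Back-of-envelope ceiling on TPS when block gas is the only constraint.
# The gas figures are example assumptions, not measurements.

def gas_bound_tps(block_gas_limit: int, gas_per_tx: int, block_time_s: float) -> float:
    """Upper bound on TPS if every block is packed with identical txs."""
    return (block_gas_limit // gas_per_tx) / block_time_s

# Simple value transfer (21000 gas) vs. a heavier contract call
# (assume ~27000 gas), both with an 8M gas limit and 1-second blocks:
transfer_ceiling = gas_bound_tps(8_000_000, 21_000, 1.0)   # ~380 TPS
contract_ceiling = gas_bound_tps(8_000_000, 27_000, 1.0)   # ~296 TPS
```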

Regarding instantSeal: this may not be required for benchmarking any Ethereum client, as it is not built to be decentralised (the core idea of blockchain). Even if one person got a higher TPS on instantSeal, such a blockchain would not be of much use to others willing to be part of that network.

@5chdn
Contributor

5chdn commented Sep 5, 2018

I don't have time to reproduce this myself right now @drandreaskrueger - but 65 sounds wrong by orders of magnitude. what @AyushyaChitransh reports sounds more realistic.

@drandreaskrueger
Author

drandreaskrueger commented Sep 5, 2018

we were able to achieve maximum TPS of 5001.

Your TPS benchmarking sounds impressive.

Please (someone) publish the exact commands - perhaps in a dockerized form, so that we can replicate it easily within a few minutes.

If it can be replicated, I am happy to include it here in my repo.

Most likely this low TPS that you are able to achieve is because of contract transactions.

Yes to "lower" but no to "low" - because I am putting geth, quorum, etc through the exact same task.

And: Simple money transfer is not relevant for our use case.

My storage .set(x) transaction takes less than 27000 gas, which is at least 10 times your gas usage, right? (Your 8000029 / 3001 ≈ 2666 gas per transaction.) So if gas were the only parameter, I should be able to see 300-500 TPS, which I do with geth, but not with parity. Something other than the EVM must be slowing down parity.

I am always using the same transaction, on geth, quorum, parity, etc. - so the TPS values are comparable, right?

I have never benchmarked simple money transfer, because that is not what we do. Instead we need to choose the fastest client for smart contract transactions, and that is currently quorum IBFT (with over 400 TPS) or geth Clique (with over 300 TPS); they were measured in exactly the same way as I measured parity. Please see quorum-IBFT.md and geth.md. It is a pity, because for years we have always preferred parity, and it would mean that we have to revise quite a bit of our in-house code - but we simply cannot ignore a 6 times faster TPS.

@AyushyaChitransh please repeat your benchmarking with the simplest possible smart contract transaction: storing one value in a contract (or doing one multiplication and one addition; I still want to do that, but haven't had the time yet, see TODO.md).

See the call and the contract.

Maybe increasing gas limit would allow you to get a higher TPS.

Been there, done that.

Please have a look at the bottom right diagram in each of my measurements, then you can see gasLimit and gasUsed per second. When I see one touching the other, I know that I have to raise the gasLimit. Which I did. In the last runs I used 40 million gas as a limit. But no, the blocks were not maxed out.

Thanks for all your answers, but please spend some time looking at my stuff first, thanks.

StepDuration: 1, which ensures 1 block issuance per second

No, it doesn't. I left mine set to 2 seconds, but it almost always ended up being 4-8 seconds.

across different geographical regions

That is a more realistic setup, as it includes the effects of the internet. For now, I am benchmarking the client itself, and all my 4-7 nodes are running on the same single machine, a 2016 desktop. But the CPU, for example, is not maxing out; it stays around 50% during the whole benchmark.

Regarding instantSeal: This may not be required in benchmarking any ethereum client

I know. Of course. But it is the simplest thing I can ask parity to do, and even then I have seen less than 70 TPS.

Please, anyone, now replicate that setup of run 8, with the new quickstart manual of chainhammer. Or -EDIT- follow the exact prescription below.

(@AyushyaChitransh , please publish your experimental setup in a similar way, with the exact commandline commands to execute, so that others can replicate your 3k - 5k TPS. Thanks.)

I don't have time to reproduce this myself right now

I get that, @5chdn. We are all busy. And as you can see in my TODO.md list cited above, I also have some unfinished tasks with this. However, until you or someone else disproves my findings, it already looks as if we might have our (yes, still preliminary) result:

For our purposes parity is ~6 times slower than geth .

65 sounds wrong by orders of magnitude

Yes, and I am surprised about that myself.

The whole intention of all these interactions here is to get anyone who is more knowledgeable about parity than me to help me find the problem - and fix it. If your team is too small, what about employing more people? Or perhaps there is someone else out there who would work for no pay? Please help, thanks.

@drandreaskrueger
Author

drandreaskrueger commented Sep 5, 2018

chainhammer

Actually, today I tried this again - tested on and optimized for Debian AWS machine (debian-stretch-hvm-x86_64-gp2-2018-08-20-85640) - all this really does work:

How to replicate the results

toolchain

# docker
# this is for Debian Linux, 
# if you run a different distro, google "install docker [distro name]"
sudo apt-get update 
sudo apt-get -y remove docker docker-engine docker.io 
sudo apt-get install -y apt-transport-https ca-certificates wget software-properties-common
wget https://download.docker.com/linux/debian/gpg 
sudo apt-key add gpg
rm gpg
echo "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee -a /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-cache policy docker-ce
sudo apt-get -y install docker-ce docker-compose
sudo systemctl start docker

sudo usermod -aG docker ${USER}
groups $USER

log out and log back in, to enable those usergroup changes

# parity-deploy
# for a dockerized parity environment
# this is instantseal, NOT a realistic network of nodes
# for 8 different setups see chainhammer-->parity.md
git clone https://github.com/paritytech/parity-deploy.git paritytech_parity-deploy
cd paritytech_parity-deploy
sudo ./clean.sh
./parity-deploy.sh --config dev --name instantseal --geth
docker-compose up

new terminal:

# solc
# someone should PLEASE create a Debian specific installation routine
# see https://solidity.readthedocs.io/en/latest/installing-solidity.html 
# and https://github.com/ethereum/solidity/releases
wget https://github.com/ethereum/solidity/releases/download/v0.4.24/solc-static-linux
chmod 755 solc-static-linux 
echo $PATH
sudo mv solc-static-linux /usr/local/bin/
sudo ln -s /usr/local/bin/solc-static-linux /usr/local/bin/solc
solc --version

Version: 0.4.24+commit.e67f0147.Linux.g++

chainhammer

# chainhammer & dependencies
git clone https://gitlab.com/electronDLT/chainhammer electronDLT_chainhammer
cd electronDLT_chainhammer/

sudo apt install python3-pip libssl-dev
sudo pip3 install virtualenv 
virtualenv -p python3 py3eth
source py3eth/bin/activate

python3 -m pip install --upgrade pip==18.0
pip3 install --upgrade py-solc==2.1.0 web3==4.3.0 web3[tester]==4.3.0 rlp==0.6.0 eth-testrpc==1.3.4 requests pandas jupyter ipykernel matplotlib
ipython kernel install --user --name="Python.3.py3eth"
# configure chainhammer
nano config.py

RPCaddress, RPCaddress2 = 'http://localhost:8545', 'http://localhost:8545'
ROUTE = "web3"
# test connection
touch account-passphrase.txt
./deploy.py 
# start the chainhammer viewer
./tps.py

new terminal

# same virtualenv
cd electronDLT_chainhammer/
source py3eth/bin/activate

# start the chainhammer send routine
./deploy.py notest; ./send.py 

or:

# not blocking but with 23 multi-threading workers
./deploy.py notest; ./send.py threaded2 23

everything below here is not necessary

new terminal

( * )

# check that the transactions are actually successfully executed:

geth attach http://localhost:8545

> web3.eth.getTransaction(web3.eth.getBlock(50)["transactions"][0])
{
  gas: 90000, ...
}

> web3.eth.getTransactionReceipt(web3.eth.getBlock(50)["transactions"][0])
{ 
  gasUsed: 26691,
  status: "0x1", ...
}
> 

geth

( * ) I do not want to install geth locally, but start the geth console from a docker container - but I don't succeed:

docker run ethereum/client-go attach https://localhost:8545

WARN [09-10|09:38:24.984] Sanitizing cache to Go's GC limits provided=1024 updated=331
Fatal: Failed to start the JavaScript console: api modules: Post https://localhost:8545: dial tcp 127.0.0.1:8545: connect: connection refused
Fatal: Failed to start the JavaScript console: api modules: Post https://localhost:8545: dial tcp 127.0.0.1:8545: connect: connection refused

Please help me with ^ this, thanks.


Until that is sorted, I simply install geth locally:

wget https://dl.google.com/go/go1.11.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.11.linux-amd64.tar.gz 
rm go1.11.linux-amd64.tar.gz 
echo "export PATH=\$PATH:/usr/local/go/bin:~/go/bin" >> .profile

logout, log back in

go version

go version go1.11 linux/amd64

go get -d github.com/ethereum/go-ethereum
go install github.com/ethereum/go-ethereum/cmd/geth
geth version

geth version
WARN [09-10|09:56:11.759] Sanitizing cache to Go's GC limits provided=1024 updated=331
Geth
Version: 1.8.16-unstable
Architecture: amd64
Protocol Versions: [63 62]
Network Id: 1
Go Version: go1.11
Operating System: linux
GOPATH=
GOROOT=/usr/local/go

please you now try this

And about "not having the time" - these 2.5 hours happened on my FREE DAY. I must convince them now that I can take those hours off again.

@drandreaskrueger
Author

drandreaskrueger commented Sep 21, 2018

Thanks for the idea, @gituser

It might explain why I have seen faster rates with the Tobalaba fork (~ similar to parity version 1.8.0)


And how?

In https://gitlab.com/electronDLT/chainhammer/blob/master/parity.md#run-14 it could be done here:

sed -i 's/parity:stable/parity:v1.11.11/g' docker-compose.yml

1.11.11 --> 1.8.x

BUT

curl -s 'https://registry.hub.docker.com/v2/repositories/parity/parity/tags/' | jq -r '."results"[]["name"]' | sort
beta
nightly
stable
v1.11.11
v2.0.4
v2.0.5
v2.0.5-rc0
v2.0.6
v2.1.0
v2.1.1

they have deleted all older versions.

hello parity team - could you please re-create the docker hub image of the most stable 1.8.x version?

Thanks a lot.

@gituser

gituser commented Sep 21, 2018

they have deleted all older versions.

weird, from github repository (just cloned freshly):

$ git clone https://github.com/paritytech/parity-ethereum
Cloning into 'parity-ethereum'...
remote: Enumerating objects: 52, done.
remote: Counting objects: 100% (52/52), done.
remote: Compressing objects: 100% (46/46), done.
Receiving objects:  21% (28972/137960), 16.01 MiB | 10.58 MiB/s    
Receiving objects:  60% (82776/137960), 40.14 MiB | 11.34 MiB/s     
remote: Total 137960 (delta 12), reused 14 (delta 6), pack-reused 137908
Receiving objects: 100% (137960/137960), 55.64 MiB | 9.40 MiB/s, done.
Resolving deltas: 100% (99246/99246), done.
Checking connectivity... done.
$ cd parity-ethereum
$ git tag -l
beta-0.9
beta-0.9.1
beta-release
mac-installer-hotfix
nightly
stable-release
test-tag
v1.0.0
v1.0.0-rc1
v1.0.1
v1.0.2
v1.1.0
v1.10.0
v1.10.0-ci0
v1.10.0-ci1
v1.10.0-ci2
v1.10.0-ci3
v1.10.0-ci4
v1.10.0-ci5
v1.10.0-ci6
v1.10.0-ci7
v1.10.1
v1.10.1-ci0
v1.10.2
v1.10.2-ci0
v1.10.2-ci1
v1.10.3
v1.10.3-ci0
v1.10.3-ci1
v1.10.4
v1.10.5
v1.10.5-ci0
v1.10.5-ci1
v1.10.5-rc0
v1.10.6
v1.10.7
v1.10.7-ci0
v1.10.8
v1.10.8-ci0
v1.10.8-ci1
v1.10.9
v1.10.9-rc0
v1.11.0
v1.11.0-ci0
v1.11.1
v1.11.10
v1.11.11
v1.11.2-ci0
v1.11.3
v1.11.4
v1.11.4-ci0
v1.11.5
v1.11.5-ci0
v1.11.5-ci1
v1.11.6
v1.11.6-rc0
v1.11.6-rc1
v1.11.6-rc2
v1.11.7
v1.11.7-rc0
v1.11.7-rc1
v1.11.7-rc2
v1.11.8
v1.12.0-ci0
v1.12.0-ci1
v1.12.0-ci2
v1.12.0-ci3
v1.12.0-ci4
v1.12.0-ci5
v1.2.0
v1.2.1
v1.2.2
v1.2.3
v1.2.4
v1.3.0
v1.3.1
v1.3.10
v1.3.11
v1.3.12
v1.3.13
v1.3.14
v1.3.15
v1.3.2
v1.3.3
v1.3.4
v1.3.5
v1.3.6
v1.3.7
v1.3.8
v1.3.9
v1.4.0
v1.4.1
v1.4.10
v1.4.11
v1.4.12
v1.4.2
v1.4.3
v1.4.4
v1.4.5
v1.4.6
v1.4.7
v1.4.8
v1.4.9
v1.5.0
v1.5.10
v1.5.11
v1.5.12
v1.5.13
v1.5.2
v1.5.3
v1.5.4
v1.5.6
v1.5.7
v1.5.8
v1.5.9
v1.6.0
v1.6.1
v1.6.10
v1.6.2
v1.6.3
v1.6.4
v1.6.5
v1.6.6
v1.6.7
v1.6.8
v1.6.9
v1.7.0
v1.7.1
v1.7.10
v1.7.11
v1.7.12
v1.7.13
v1.7.2
v1.7.3
v1.7.4
v1.7.5
v1.7.6
v1.7.7
v1.7.8
v1.7.9
v1.8.0
v1.8.1
v1.8.10
v1.8.10-ci0
v1.8.10-ci1
v1.8.10-ci2
v1.8.10-ci3
v1.8.10-ci4
v1.8.10-ci5
v1.8.11
v1.8.11-ci0
v1.8.2
v1.8.3
v1.8.4
v1.8.5
v1.8.6
v1.8.7
v1.8.8
v1.8.8-ci0
v1.8.8-ci1
v1.8.8-ci2
v1.8.8-ci3
v1.8.8-ci4
v1.8.9
v1.8.9-ci0
v1.9.0
v1.9.1
v1.9.1-ci0
v1.9.1-ci1
v1.9.1-ci2
v1.9.1-ci3
v1.9.2
v1.9.2-ci0
v1.9.3
v1.9.3-ci0
v1.9.3-ci1
v1.9.3-ci2
v1.9.3-ci3
v1.9.3-ci4
v1.9.3-ci5
v1.9.4
v1.9.4-ci0
v1.9.5
v1.9.5-ci0
v1.9.5-ci1
v1.9.5-ci2
v1.9.5-ci3
v1.9.5-ci4
v1.9.5-ci5
v1.9.5-ci6
v1.9.6
v1.9.6-ci0
v1.9.6-ci1
v1.9.6-ci2
v1.9.7
v1.9.7-ci0
v1.9.7-ci1
v2.0.0
v2.0.0-rc0
v2.0.0-rc1
v2.0.0-rc2
v2.0.0-rc3
v2.0.1
v2.0.3
v2.0.3-rc0
v2.0.4
v2.0.5
v2.0.5-rc0
v2.0.6
v2.1.0
v2.1.0-rc0
v2.1.0-rc1
v2.1.0-rc2
v2.1.0-rc3
v2.1.0-rc4
v2.1.1
v2.2.0-rc0

There are multiple pages in that link you sent!

Check - curl -s 'https://registry.hub.docker.com/v2/repositories/parity/parity/tags/?page=2'|json_pp
curl -s 'https://registry.hub.docker.com/v2/repositories/parity/parity/tags/?page=3'|json_pp etc..

and yes there is no v1.8.x

  • the last v1.8 version is v1.8.12 (though it's not tagged, commit 74b99ba !)
  • the last v1.9 version is v1.9.8 (though it's not tagged, commit 0e06a35 !)
  • the last v1.10 version is v1.10.9 (it's tagged, commit 23a9eef)

@drandreaskrueger
Author

drandreaskrueger commented Sep 22, 2018

Oh fantastic, that should make it easy to test.

Pagination, sigh, had not expected that.
This is the command which returns the first 1000 tag names

curl -s 'https://registry.hub.docker.com/v2/repositories/parity/parity/tags/?page_size=1000'  | jq -r '."results"[]["name"]' | sort -t. -k 1,1n -k 2,2n -k 3,3n

so, all older versions are still on dockerhub, that is perfect:

beta
gitlab-next
latest
nightly
stable
v2.0.0
v2.0.0-rc1
v2.0.0-rc2
v2.0.0-rc3
v2.0.1
v2.0.3
v2.0.3-rc0
v2.0.4
v2.0.5
v2.0.5-rc0
v2.0.6
v2.1.0
v2.1.0-rc1
v2.1.0-rc2
v2.1.1
v1.5.13
v1.6.8
v1.6.9
v1.6.10
v1.7.0
v1.7.1
v1.7.2
v1.7.3
v1.7.4
v1.7.5
v1.7.6
v1.7.7
v1.7.8
v1.7.9
v1.7.10
v1.7.11
v1.7.12
v1.7.13
v1.8.0
v1.8.1
v1.8.2
v1.8.3
v1.8.4
v1.8.5
v1.8.6
v1.8.7
v1.8.8
v1.8.9
v1.8.10
v1.8.11
v1.9.1
v1.9.2
v1.9.3
v1.9.4
v1.9.5
v1.9.6
v1.9.7
v1.10.0
v1.10.1
v1.10.2
v1.10.3
v1.10.4
v1.10.5
v1.10.6
v1.10.7
v1.10.8
v1.10.9
v1.11.0
v1.11.1
v1.11.3
v1.11.4
v1.11.5
v1.11.6
v1.11.7
v1.11.7-rc0
v1.11.7-rc1
v1.11.7-rc2
v1.11.8
v1.11.10
v1.11.11

thanks! have a good weekend.
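As an aside, the numeric tag ordering that the `sort -t. -k 1,1n -k 2,2n -k 3,3n` pipeline performs can be mirrored in Python; `version_key` here is a hypothetical helper, not part of chainhammer:

```python
# Version-aware ordering of docker tags, mirroring the shell pipeline's
# numeric `sort -t. -k 1,1n -k 2,2n -k 3,3n`.
import re

def version_key(tag: str):
    # "v1.11.8" -> (1, 11, 8); tags without digits sort first as (0, 0, 0)
    nums = [int(n) for n in re.findall(r"\d+", tag)][:3]
    return tuple(nums) + (0,) * (3 - len(nums))

tags = ["v2.0.6", "v1.8.0", "v1.11.11", "stable", "v1.9.7"]
ordered = sorted(tags, key=version_key)
# "stable" sorts first, then the v-tags in true numeric order
```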

@drandreaskrueger
Author

drandreaskrueger commented Sep 22, 2018

I would probably try v1.7.13, to be on the safe side.

Did v1.7 already have aura?

Did it have the instantseal developmentChain, so that parity-deploy.sh --dev would work?
That alone would probably already show something.


And how? In https://gitlab.com/electronDLT/chainhammer/blob/master/reproduce.md#parity change

sed -i 's/parity:stable/parity:v1.11.11/g' docker-compose.yml

1.11.11 --> 1.7.13

or (because Tobalaba)

1.11.11 --> 1.8.0 

or (because stable)

1.11.11 --> 1.8.11

@5chdn
Contributor

5chdn commented Sep 25, 2018

hello parity team - could you please re-create the docker hub image of the most stable 1.8.x version?

we don't delete anything. just checkout the tag directly, either on github or on docker

Did v1.7 already have aura?

yes.

@drandreaskrueger
Author

yes.

thanks.

@drandreaskrueger
Author

hello gituser, Re: your comment above

@gituser
gituser commented 4 days ago
@drandreaskrueger worth trying older version of parity (e.g. 1.8.x or 1.9.x or 1.10.x)
and see if there is any difference at all.
There was some change in code signing in 1.10 or in 1.9 (not sure though).

I had a lot of hope when you said that.

But I have tried some older versions now:

(run15) v1.7.13 and instantseal https://gitlab.com/electronDLT/chainhammer/blob/master/parity.md#run-15
(run16) v1.7.13 and aura https://gitlab.com/electronDLT/chainhammer/blob/master/parity.md#run-16
(run17) v1.8.11 and aura https://gitlab.com/electronDLT/chainhammer/blob/master/parity.md#run-17

not faster.

@5chdn
Contributor

5chdn commented Sep 25, 2018

not faster.

would be surprised if the speed varies across versions

@drandreaskrueger
Author

would be surprised if the speed varies across versions

yes me too.

but in the absence of any other substantial suggestions, and as a test of his comment

@gituser
gituser commented 4 days ago
@drandreaskrueger worth trying older version of parity (e.g. 1.8.x or 1.9.x or 1.10.x)
and see if there is any difference at all.
There was some change in code signing in 1.10 or in 1.9 (not sure though).

I had to try that.

@drandreaskrueger
Author

I got in contact with CodyBorn from Microsoft; he answered my tweets.

I have summarized what he is revealing about his approach here: https://gitlab.com/electronDLT/chainhammer/blob/master/codyborn.md

@drandreaskrueger
Author

drandreaskrueger commented Sep 25, 2018

CodyBorn

What is already clear: his approach is "too far out" to be applicable to my simple benchmarking.

I don't feel like creating hundreds of sender-accounts just to bypass nonce lookup and then sign my own transactions.

And even if that made parity faster, for me it would simply mean that paritytech should revisit that part of the parity code to accelerate it - and I would repeatedly re-run chainhammer to notice when you have made it faster.

Because his approach with web3.eth.sendRawTransaction() would not be practical for our daily use of parity; we want the Ethereum node to do that work for us efficiently, so we can use eth_sendTransaction(). Edit: Like on geth or quorum.
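For readers unfamiliar with the "bypass nonce lookup" trick mentioned above: the idea is to query each sender's nonce once and then count locally, instead of asking the node before every transaction. A minimal sketch with invented names (not CodyBorn's actual code):

```python
# Minimal sketch of local nonce tracking: fetch each sender's nonce
# once, then count locally instead of asking the node per transaction.
# All names here are illustrative, not from any real codebase.

class LocalNonceTracker:
    def __init__(self, fetch_nonce):
        self._fetch = fetch_nonce      # callable: address -> current nonce
        self._next = {}

    def next_nonce(self, address: str) -> int:
        if address not in self._next:
            self._next[address] = self._fetch(address)   # one lookup, ever
        nonce = self._next[address]
        self._next[address] = nonce + 1
        return nonce

# Usage with a stubbed node that reports nonce 7 for every account:
tracker = LocalNonceTracker(lambda addr: 7)
nonces = [tracker.next_nonce("0xabc") for _ in range(3)]   # [7, 8, 9]
```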

possibly the most important hint for paritytech

it is probably something outside of sendRawTransaction() but within the parity code base which is slowing transactions down by more than 500%.

the most important experiment that CodyBorn could do:

replicate his exact approach but instead of parity aura using geth clique or quorum IBFT, see geth.md#results, and quorum-IBFT.md#on-amazon-aws ...


EDIT: ... the latter there contains my best result so far: 524 TPS on average when sending 20k tx into 1 node on a quorum-crux-IBFT network of 4 nodes, on an Amazon c5.4xlarge instance. That's it, from me, for now - until anyone makes any better suggestions for parity.

@drandreaskrueger
Author

parity v2.2.0 (single-threaded) seems slightly faster than v1.11.11 (multi-threaded), see these brand new results: #9582 (comment)

@5chdn 5chdn added the M2-config 📂 Chain specifications and node configurations. label Oct 10, 2018
@drandreaskrueger
Author

Hey @5chdn @Tbaut @ddorgan @AyushyaChitransh @gituser

Greetings from Berlin, web3summit.

Actually, which of you are in Berlin now? We should meet up this week!

@ddorgan
Collaborator

ddorgan commented Oct 24, 2018

So I just did a test on a c5.xlarge but only using the --geth extra option and I'm seeing this:

blocknumber_start_here = 17
starting timer, at block 17 which has  1  transactions; at timecode 43450.223308099
block 17 | new #TX 661 / 4000 ms = 165.2 TPS_current | total: #TX  662 /  4.0 s = 166.9 TPS_average
block 18 | new #TX 775 / 4000 ms = 193.8 TPS_current | total: #TX 1437 /  8.0 s = 179.9 TPS_average
block 19 | new #TX 750 / 4000 ms = 187.5 TPS_current | total: #TX 2187 / 12.0 s = 182.4 TPS_average
block 20 | new #TX 763 / 4000 ms = 190.8 TPS_current | total: #TX 2950 / 16.0 s = 184.3 TPS_average
block 21 | new #TX 740 / 4000 ms = 185.0 TPS_current | total: #TX 3690 / 20.0 s = 184.4 TPS_average
block 22 | new #TX 740 / 4000 ms = 185.0 TPS_current | total: #TX 4430 / 24.0 s = 184.4 TPS_average
block 23 | new #TX 758 / 4000 ms = 189.5 TPS_current | total: #TX 5188 / 28.3 s = 183.2 TPS_average
block 24 | new #TX 717 / 4000 ms = 179.2 TPS_current | total: #TX 5905 / 32.0 s = 184.5 TPS_average
block 25 | new #TX 751 / 4000 ms = 187.8 TPS_current | total: #TX 6656 / 36.0 s = 184.8 TPS_average
block 26 | new #TX 712 / 4000 ms = 178.0 TPS_current | total: #TX 7368 / 40.0 s = 184.2 TPS_average
block 27 | new #TX 733 / 4000 ms = 183.2 TPS_current | total: #TX 8101 / 44.0 s = 184.1 TPS_average
block 28 | new #TX 728 / 4000 ms = 182.0 TPS_current | total: #TX 8829 / 48.0 s = 183.9 TPS_average
block 29 | new #TX 742 / 4000 ms = 185.5 TPS_current | total: #TX 9571 / 52.0 s = 183.9 TPS_average

Still seems quite single-thread heavy though; will try with some options to speed that up.
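For reference, the per-block figures in a log like the one above reduce to simple arithmetic; a sketch (chainhammer's actual tps.py may compute them slightly differently):

```python
# How per-block TPS figures like those in the log above are derived
# (a sketch; chainhammer's tps.py may differ in detail).

def block_stats(new_tx: int, block_ms: int, total_tx: int, elapsed_s: float):
    tps_current = new_tx / (block_ms / 1000)   # rate within this block
    tps_average = total_tx / elapsed_s         # rate since the start block
    return round(tps_current, 1), round(tps_average, 1)

# Block 18 from the log: 775 new tx in a 4000 ms block, 1437 tx total
# after roughly 8 seconds:
current, average = block_stats(775, 4000, 1437, 8.0)
```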

@drandreaskrueger
Author

Great results. Looks very promising.

We have to find out how to prevent this #9582, so that we always get to those results, consistently. Please let it run to the 20k end - perhaps even with more than 20k transactions, see config.py

And: Keep me in the loop whatever you find out together with Tomek.

Thanks again for your time on Thursday in the parity office. Great working with you!

Greetings from the train to Prague.

@5chdn 5chdn modified the milestones: 2.2, 2.3 Oct 29, 2018
@drandreaskrueger
Copy link
Author

drandreaskrueger commented Nov 6, 2018

just found this elsewhere:

... validation logic, can be separated into three sections:
• Validation that actions are internally consistent
• Validation that preconditions are met
• Modification of the application state
The read only aspects of this process can be conducted
in parallel, while modification which requires write access
must be conducted sequentially ...

Is that also how parity internally digests multi-threaded transaction requests?
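To make the quoted pattern concrete, here is a toy illustration (not parity's actual code; the account names and balance check are made up): read-only validation fans out to a thread pool, while state modification stays strictly sequential.

```python
from concurrent.futures import ThreadPoolExecutor

def validate(state, tx):
    # read-only precondition check against the pre-block state
    return state.get(tx["from"], 0) >= tx["value"]

def apply_block(state, txs):
    # phase 1: validations are independent reads, so they can run in parallel
    with ThreadPoolExecutor() as pool:
        valid = list(pool.map(lambda tx: validate(state, tx), txs))
    # phase 2: modifications need write access, so they run sequentially
    for tx, ok in zip(txs, valid):
        # recheck, since earlier writes in this block may invalidate a tx
        if ok and state[tx["from"]] >= tx["value"]:
            state[tx["from"]] -= tx["value"]
            state[tx["to"]] = state.get(tx["to"], 0) + tx["value"]
    return state
```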

@ddorgan
Copy link
Collaborator

ddorgan commented Nov 6, 2018

@drandreaskrueger let me come back to you on this.

The bottleneck is most probably the signing itself. I think for geth you are basically pre-signing them because the web3 library is too slow, right?

I may make a change to your script to do the same for parity so that transactions are signed before being submitted.

This would line up with the geth process, right?

@drandreaskrueger
Copy link
Author

drandreaskrueger commented Nov 6, 2018

because the web3 library is too slow

Yes, for low (two-digit) TPS it does not make a big difference, only 20% faster. But when I get into the hundreds of TPS, I see considerable gains (twice as fast) when bypassing web3 completely. Please have a quick look at these old experiments: https://github.com/drandreaskrueger/chainhammer/blob/master/log.md#sending-via-web3-versus-sending-via-rpc
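For anyone reproducing the comparison: the RPC route just POSTs a hand-built JSON-RPC body, so there is no per-call web3.py overhead. A minimal sketch (the endpoint URL and transaction dict are placeholders, not chainhammer's exact code):

```python
import json

def rpc_body(method, params, request_id=1):
    # hand-built JSON-RPC 2.0 request body, as sent via requests.post
    return json.dumps({"jsonrpc": "2.0", "method": method,
                       "params": params, "id": request_id})

# Sending it would then look like this (not executed here):
#   requests.post("http://localhost:8545",
#                 data=rpc_body("eth_sendTransaction", [tx]),
#                 headers={"Content-Type": "application/json"})
```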


basically pre-signing

Not sure we are talking about the same thing actually. Even when bypassing the web3.py library, I am using the RPC method `eth_sendTransaction`.

Have a look at these two codepieces:

via web3

in https://github.com/drandreaskrueger/chainhammer/blob/93c40384a4d178bdb00cea491d15b14046471b72/send.py#L73-L93
it is simply this one liner https://github.com/drandreaskrueger/chainhammer/blob/93c40384a4d178bdb00cea491d15b14046471b72/send.py#L90

while

via RPC

in https://github.com/drandreaskrueger/chainhammer/blob/93c40384a4d178bdb00cea491d15b14046471b72/send.py#L106-L183
I am doing (contract_method_ID + argument_encoding --> payload), then (plus headers into a requests.post to call eth_sendTransaction), see here:
https://github.com/drandreaskrueger/chainhammer/blob/93c40384a4d178bdb00cea491d15b14046471b72/send.py#L178
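That (contract_method_ID + argument_encoding --> payload) step amounts to concatenating the 4-byte selector with each argument left-padded to 32 bytes. A sketch of just that encoding (the selector is hardcoded because keccak-256 is not in the Python stdlib, so treat it as an assumption):

```python
def encode_call(selector_hex, *uint_args):
    # selector = first 4 bytes of keccak256(signature), supplied pre-computed
    data = selector_hex
    for arg in uint_args:
        data += format(arg, "064x")  # each uint256 left-padded to 32 bytes
    return "0x" + data

# e.g. a call to set(uint256) with argument 7, assuming selector 60fe47b1:
# encode_call("60fe47b1", 7)
```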

choice

I switch between those two routes here https://github.com/drandreaskrueger/chainhammer/blob/93c40384a4d178bdb00cea491d15b14046471b72/send.py#L201

choice constant ROUTE is defined in config.py


to do the same for parity so that transactions are signed before being submitted. This would line up with the geth process, right?

No, no difference between the two.

As long as PARITY_UNLOCK_EACH_TRANSACTION (https://github.com/drandreaskrueger/chainhammer/blob/93c40384a4d178bdb00cea491d15b14046471b72/config.py#L43) is False, chainhammer talks to geth identically to how it talks to parity.


EDIT: Nicer formatting now here in FAQ.md, plus I raised an issue with the web3.py guys ...

@drandreaskrueger
Copy link
Author

drandreaskrueger commented Nov 6, 2018

see above. Plus then:

The bottleneck is most probably the signing itself.

I suggest you compare the tx signing part of the geth go code with the parity rust code.

@5chdn 5chdn modified the milestones: 2.3, 2.4 Jan 10, 2019
@drandreaskrueger
Copy link
Author

drandreaskrueger commented Feb 19, 2019

chainhammer v55

The newest version is fully automated - you run & analyze a whole experiment with one or two CLI lines. I am optimistic that you will now find a clever combination of parity CLI switches to speed it up. Good luck.

Because of general interest, I have created this new issue:
#10382

@5chdn
Copy link
Contributor

5chdn commented Feb 20, 2019

Thanks for sharing 👍

@5chdn 5chdn closed this as completed Feb 20, 2019
@drandreaskrueger
Copy link
Author

drandreaskrueger commented Feb 20, 2019

closed this

So you want to track possible speed improvements in the new issue. Yes, that makes sense.

Labels
M2-config 📂 Chain specifications and node configurations. Z1-question 🙋‍♀️ Issue is a question. Closer should answer.
7 participants