
Benchmark structure for UInt classes #553

Merged
igormcoelho merged 20 commits into neo-project:master from benchmarks_uint on Jan 18, 2019

Conversation

igormcoelho
Contributor

Created a basic benchmark structure for UInt classes. Will help on discussion #552.

@igormcoelho
Contributor Author

@jsolman I created a sketch of testing classes to resolve the benchmark situation. We can merge this first, so we can test both @lightszero's proposal and yours.

@igormcoelho
Contributor Author

@lightszero I created this basic testing structure to improve our discussions on the other thread. Right now, I couldn't see a significant performance improvement... in fact, it was slower in my initial measurements.

uut_32_1[i] = new UInt256(base_32_1[i]);
uut_32_2[i] = new UInt256(base_32_2[i]);
}
Stopwatch sw0 = new Stopwatch();
Contributor


Can you make a method that runs the stopwatch and takes a delegate containing the code to be benchmarked? The loop would occur in the passed delegate.

Contributor Author


Since the individual times are quite short, we need to measure them all together; if I measure them one by one, the system timing overhead will be bigger than the timings themselves.

Contributor


No, I wasn't suggesting measuring them one by one. I was suggesting a benchmark method that takes a delegate. The passed delegate contains the code that loops 1 million times, all in one call to the delegate.

Contributor Author


Oh, nice! That would indeed read better and give better isolation.
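
For reference, such a helper could look like the sketch below. This is illustrative only (the method name and output format are assumptions, not necessarily what was merged); the point is that the whole 1-million-iteration loop lives inside the delegate, so the Stopwatch is started and stopped once per benchmark rather than once per comparison.

    // Hypothetical benchmark helper: times a single delegate invocation that
    // contains the full iteration loop.
    private void RunBenchmark(string name, Action action)
    {
        var sw = System.Diagnostics.Stopwatch.StartNew();
        action();
        sw.Stop();
        Console.WriteLine($"{name}: Elapsed={sw.Elapsed}");
    }

    // Example usage with the arrays prepared in TestSetup:
    RunBenchmark("UInt256.CompareTo", () =>
    {
        int sum = 0;
        for (int i = 0; i < MAX_TESTS; i++)
            sum += uut_32_1[i].CompareTo(uut_32_2[i]);
        Console.WriteLine($"Sum={sum}");
    });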

{
uint* lpx = (uint*)px;
uint* lpy = (uint*)py;
for (int i = 8; i >= 0; i--)
Contributor


This should actually be i = 7.

Contributor Author


True!!
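
For reference, a corrected version of that loop might look like the sketch below (illustrative only; it assumes b1 and b2 are the two 32-byte inputs, viewed as eight little-endian 32-bit words, so the valid indexes are 0 through 7):

    private unsafe int CompareUInt256Words(byte[] b1, byte[] b2)
    {
        fixed (byte* px = b1, py = b2)
        {
            uint* lpx = (uint*)px;
            uint* lpy = (uint*)py;
            // 32 bytes = eight 32-bit words; start at index 7, the most
            // significant word, and walk down. Starting at 8 reads past the buffer.
            for (int i = 7; i >= 0; i--)
            {
                if (lpx[i] > lpy[i]) return 1;
                if (lpx[i] < lpy[i]) return -1;
            }
        }
        return 0;
    }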

@igormcoelho
Contributor Author

I guess the tests are ready... they should also help with future decisions on similar subjects.

Contributor

@jsolman jsolman left a comment


This is nice as a template for benchmark testing, but it shouldn't really be pushed to the master branch unless only the code that is actually in use is enabled for testing.

@lightszero
Member

lightszero commented Jan 17, 2019

This is not a good test case: none of the test data is equal. If we set 50% of the data to be equal, then you can see the difference.
@igormcoelho

    [TestInitialize]
    public void TestSetup()
    {
        int SEED = 123456789;
        random = new Random(SEED);

        base_32_1 = new byte[MAX_TESTS][];
        base_32_2 = new byte[MAX_TESTS][];
        base_20_1 = new byte[MAX_TESTS][];
        base_20_2 = new byte[MAX_TESTS][];

        for (var i = 0; i < MAX_TESTS; i++)
        {
            base_32_1[i] = RandomBytes(32);
            if (i % 2 == 0)
            {
                base_32_2[i] = RandomBytes(32);
            }
            else
            {
                base_32_2[i] = new byte[32];
                Buffer.BlockCopy(base_32_1[i], 0, base_32_2[i], 0, 32);
            }
            base_20_1[i] = RandomBytes(20);
            base_20_2[i] = RandomBytes(20);
        }
    }
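
The RandomBytes helper is not shown in this excerpt; a minimal sketch, assuming it simply fills a buffer from the seeded random field above, would be:

    private byte[] RandomBytes(int count)
    {
        // Drawing from the seeded Random keeps the test data identical across runs.
        var bytes = new byte[count];
        random.NextBytes(bytes);
        return bytes;
    }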

@shargon
Member

shargon commented Jan 17, 2019

For benchmarks I usually use https://benchmarkdotnet.org/articles/overview.html, with great results.
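
A minimal BenchmarkDotNet setup for this kind of comparison could look like the sketch below (illustrative; the class and method names are made up, and it assumes Neo's UInt256 type with its byte[] constructor):

    using System;
    using BenchmarkDotNet.Attributes;
    using BenchmarkDotNet.Running;
    using Neo;

    public class UIntCompareBenchmarks
    {
        private byte[] a, b;

        [GlobalSetup]
        public void Setup()
        {
            var random = new Random(123456789);
            a = new byte[32];
            b = new byte[32];
            random.NextBytes(a);
            random.NextBytes(b);
        }

        [Benchmark]
        public int UInt256CompareTo() => new UInt256(a).CompareTo(new UInt256(b));
    }

    public static class Program
    {
        // BenchmarkRunner handles warmup, iteration counts and statistics.
        public static void Main() => BenchmarkRunner.Run<UIntCompareBenchmarks>();
    }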

@lightszero
Member

Now the result looks like this, with 50% equal data:

Elapsed=00:00:01.1170521 Sum=-1244
Elapsed=00:00:01.0273095 Sum=-1244
Elapsed=00:00:00.4392158 Sum=-1244
Elapsed=00:00:00.3674855 Sum=-1244

@vncoelho
Member

Man, you are all geniuses.
Nice catch about the difficulty of the task, @lightszero.

@igormcoelho
Contributor Author

igormcoelho commented Jan 17, 2019

Nice observation @lightszero! Perhaps 50% equal data is indeed a better benchmark.

@igormcoelho
Contributor Author

Please, let's keep 1 million as the maximum... or it will break my computer :)

@igormcoelho
Contributor Author

igormcoelho commented Jan 17, 2019

Implemented and inlined the UInt160 option... better than the pure uint version, and with no loops, haha.

        private unsafe int code3_UInt160CompareTo(byte[] b1, byte[] b2)
        {
            // LSB -----------------> MSB
            // --------------------------
            // | 8B      | 8B      | 4B |
            // --------------------------
            //   0l        1l        4i
            // --------------------------
            fixed (byte* px = b1, py = b2)
            {
                uint* lpxi = (uint*)px;
                uint* lpyi = (uint*)py;
                if (lpxi[4] > lpyi[4])
                    return 1;
                if (lpxi[4] < lpyi[4])
                    return -1;

                ulong* lpx = (ulong*)px;
                ulong* lpy = (ulong*)py;
                if (lpx[1] > lpy[1])
                    return 1;
                if (lpx[1] < lpy[1])
                    return -1;
                if (lpx[0] > lpy[0])
                    return 1;
                if (lpx[0] < lpy[0])
                    return -1;
            }
            return 0;
        }
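
Wired into the delegate-based harness sketched earlier (names are illustrative), a run over the 20-byte test data could look like:

    RunBenchmark("code3_UInt160CompareTo", () =>
    {
        int sum = 0;
        for (int i = 0; i < MAX_TESTS; i++)
            sum += code3_UInt160CompareTo(base_20_1[i], base_20_2[i]);
        Console.WriteLine($"Sum={sum}");
    });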

vncoelho
vncoelho previously approved these changes Jan 17, 2019
@vncoelho
Member

I think it is good like this. Having the comparisons ensures correctness, because the results of those three implementations are checked against each other.

@jsolman
Contributor

jsolman commented Jan 17, 2019

I should have noticed the data wasn’t the same across runs. Nice catch @lightszero

@igormcoelho igormcoelho merged commit 6849ac2 into neo-project:master Jan 18, 2019
@igormcoelho igormcoelho deleted the benchmarks_uint branch January 18, 2019 14:11
vncoelho added a commit that referenced this pull request Jan 18, 2019
txhsl added a commit to txhsl/neo that referenced this pull request Feb 25, 2019
* Handles escape characters in JSON

*  Pass ApplicationExecution to IPersistencePlugin (neo-project#531)

* Update dependencies: (neo-project#532)

- Akka 1.3.11
- Microsoft.AspNetCore.ResponseCompression 2.2.0
- Microsoft.AspNetCore.Server.Kestrel 2.2.0
- Microsoft.AspNetCore.Server.Kestrel.Https 2.2.0
- Microsoft.AspNetCore.WebSockets 2.2.0
- Microsoft.EntityFrameworkCore.Sqlite 2.2.0
- Microsoft.Extensions.Configuration.Json 2.2.0

* change version to v2.9.4

* Updating Unknown to Policy Fail (neo-project#533)

* Fix a dead lock in `WalletIndexer`

* Downgrade Sqlite to 2.1.4 (neo-project#535)

* RPC call gettransactionheight (neo-project#541)

* getrawtransactionheight

Currently, two calls are needed to get a transaction height: `getrawtransaction` with `verbose`, then using the `blockhash`.
Another option is to use `confirmations`, but it can be misleading.

* Minor fix

* Shargon's tip

* modified

* Allow to use the wallet inside a RPC plugin (neo-project#536)

* Improve Large MemoryPool Performance - Sort + intelligent TX reverification (neo-project#500)

Improve Large MemoryPool Performance - Sort + intelligent TX reverification (neo-project#500)

* Keep both verified and unverified (previously verified) transactions in sorted trees so ejecting transactions above the pool size is a low latency operation.
* Re-verify unverified transactions when Blockchain actor is idle.
* Don't re-verify transactions needlessly when not at the tip of the chain.
* Support passing a flag to `getrawmempool` to retrieve both verified and unverified TX hashes.
* Support MaxTransactionsPerBlock and MaxFreeTransactionsPerBlock from Policy plugins.
* Rebroadcast re-verified transactions if it has been a while since the last broadcast (high priority transactions are rebroadcast more frequently than low priority transactions).

* Policy filter GetRelayResult message (neo-project#543)

* Policy filter GetRelayResult message

* adding fixed numbering for return codes

* Removed enum fixed values

* Add some initial MemoryPool unit tests. Fix bug when Persisting the GenesisBlock (neo-project#549)

* More MemoryPool Unit Tests. Improve Re-broadcast back-off to an increasing linear formula. (neo-project#554)

* Ensuring Object Reference check of SortedSets for speed-up (neo-project#557)

* Minor comments update on Mempool class (neo-project#556)

* Update MemoryPool Unit Test to add random fees to Mock Transactions (neo-project#558)

* Add Unit Test for MemoryPool sort order. Fixed sort order to return descending. (neo-project#559)

* Add unit test to verify memory pool sort order and reverification order. Fixed sort order bug.

* VerifyCanTransactionFitInPool works as intended. Also inadvertently verified that GetLowestFeeTransaction() works.

* Benchmark structure for UInt classes (neo-project#553)

* basic benchmark structure for UInt classes

* commented code2 from lights for now

* updated tests. all seem correct now

* Switch to using a benchmark method taking a method delegate to benchmark.

* Make pass.

* 1 million iterations.

* Switch to ulong for the 4th option, and it is still the same speed.

* fix test data for 50% equal data

* make test pass

* neo.UnitTests/UT_UIntBenchmarks.cs

* neo.UnitTests/UT_UIntBenchmarks.cs

* Base 20 - UInt160 tests

* neo.UnitTests/UT_UIntBenchmarks.cs

* inlined 160

* complete tests with UInt256 and UInt160

* neo.UnitTests/UT_UIntBenchmarks.cs

* Lights division calculation

* Treat lower hashes as higher priority. Fix MemoryPool UT for Hash order. (neo-project#563)

* Treat lower hashes as higher priority. 
* Fix MemoryPool UT for Hash order.
* Renaming Transaction in PoolItem for clarity.

* Make PoolItem independent and add PoolItem tests (neo-project#562)

* make poolitem independent

* Merging

* Multiply by -1

* Fix other

* Fix Tx

* Removing -1 extra multiplication

* Fix

* make PoolItem internal and added test class

* Update PoolItem.cs

* added comments for PoolItem variables

* getting time from TimeProvider to allow testing

* basic test

* reset time provider

* Add Hash comparison

* Adding time provider again and equals

* Fix arithmetic

* Comment on PoolItem

* Update PoolItem.cs

* protecting tests against TimeProvider changes on fails

* reusing setup part

* fixed serialization properties

* Improve generation of creating mock DateTime values. Implement hash comparison tests.

* Adjust comment.

* Treat Claim transactions as the highest low priority transactions. (neo-project#565)

* Allow persistence plugins to commit as a final step. (neo-project#568)

* Allow persistence plugins to commit as a final step.

* Plugins commit before core commits, once all plugins have handled initial work OnPersist.

* Allow PersistencePlugin to determine whether to crash if commit fails.

* Add ShouldThrowExceptionFromCommit method to IPersistencePlugin.

* Throw all commit exceptions that should be thrown in an AggregateException.

* Add a Plugin type for observing transactions added or removed from the MemoryPool. (neo-project#580)

* Correctly handle conversions between JSON objects (neo-project#586)

* Fix neo-project/neo-node#297 (neo-project#587)

* Replace new JArray with .ToArray (AccountState) (neo-project#581)

* Ensure `LocalNode` is stopped before shutting down the `NeoSystem`
txhsl added a commit to txhsl/neo that referenced this pull request Feb 25, 2019
base_20_1 = new byte[MAX_TESTS][];
base_20_2 = new byte[MAX_TESTS][];

for (var i = 0; i < MAX_TESTS; i++)
Member


@igormcoelho did you calculate the amount of RAM used by this test? Isn't it excessive?
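
For a rough back-of-the-envelope estimate (an approximation, not a measurement): the four byte-array tables hold MAX_TESTS * (32 + 32 + 20 + 20) = 1,000,000 * 104 bytes ≈ 104 MB of raw payload; each of the 4,000,000 byte[] instances adds roughly 24-32 bytes of CLR object and array-length overhead, and the UInt256/UInt160 objects built from them add more, so the working set is likely a few hundred MB.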

[TestClass]
public class UT_UIntBenchmarks
{
int MAX_TESTS = 1000000; // 1 million
Contributor


Travis should not run this normally. Benchmarks should not be enabled in the unit tests that run after each commit; there is also no benefit, because the test does not fail if the benchmark takes longer than some threshold.
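
One way to keep it out of the per-commit run would be to tag the benchmark with an MSTest category and exclude that category in CI (a sketch; the category name is illustrative):

    [TestMethod]
    [TestCategory("Benchmark")]   // excluded from the normal CI run
    public void Benchmark_UIntComparisons()
    {
        // ... benchmark body ...
    }

CI could then run the suite with something like dotnet test --filter "TestCategory!=Benchmark", while the benchmark remains runnable locally on demand.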

rodoufu pushed a commit to rodoufu/neo that referenced this pull request Mar 5, 2019
Thacryba pushed a commit to simplitech/neo that referenced this pull request Feb 17, 2020
* Create 2.7.6 branch for CLI API

* Update cli.md (neo-project#548)

* Update cli.md

Revise for the new version's features.

* Update cli.md

Revise the request example.

* Update cli.md

Add blank for values

* Update cli.md

Add blank.

* Update api.md (neo-project#551)

Add getvalidators API

* Create getvalidators.md (neo-project#553)

Add getvalidators.md

* release 2.7.6