I just saw your working code. It's different from the code I came up with about 8 months ago, where I load 10x as many keys into the same RAM (100M instead of 10M) together with my own logic. That should run more than 10 times faster than this code. Can we discuss it here or in the Discord group?
Yes, a bigger bP table means more speed. In theory the speed increase is linear with the increase in bP table size, but in practice that is not always true: more points in memory means a larger memory region to access, more false positives in the Bloom filter or hashtable used, and in my case more binary searches. Depending on the size, the actual speedup can be only 70% to 90% of the increase in bP table size.
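A rough model of one of the effects mentioned above: if the Bloom filter's bit budget stays fixed (same RAM) while the number of stored points grows, the false-positive rate climbs, and every false positive costs an extra binary search. This is only a sketch using the standard estimate p = (1 - e^(-k*n/m))^k; the m and k values here are illustrative, not keyhunt's real parameters.

```c
#include <stdio.h>
#include <math.h>

/* Standard Bloom filter false-positive estimate for n items,
 * m bits of filter and k hash functions. */
static double bloom_fp_rate(double n, double m, int k)
{
    return pow(1.0 - exp(-k * n / m), k);
}

int main(void)
{
    const double m = 2.0e8;            /* filter bits, sized for ~10M entries (assumption) */
    const int    k = 7;                /* hash functions (assumption)                      */
    const double factors[] = {1, 2, 5, 10};

    for (int i = 0; i < 4; i++) {
        double n = 1.0e7 * factors[i]; /* 10M entries scaled up to 100M */
        printf("%4.0fM entries -> false-positive rate %.5f\n",
               n / 1e6, bloom_fp_rate(n, m, k));
    }
    return 0;
}
```

With these illustrative numbers the false-positive rate goes from a fraction of a percent at 10M entries to the majority of lookups at 100M, which is one reason the speedup does not scale linearly with table size.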
Let me explain. Right now you compare the full 66-character pubkey against the generated one and subtract from the real pubkey to get the result. If instead you store only the first 12 characters of each pubkey in the table and compare against that, the same file size and the same amount of memory will hold 10x as many entries; only when the first 12 characters match do you generate the full key for the final comparison. That way you scan 100M keys instead of 10M. I tested this approach on CPU about 8 months ago; if you build it on GPU it could be even better.
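A minimal standalone sketch of that prefix-table idea (not keyhunt's actual code): only the first 6 bytes (12 hex characters) of each compressed pubkey go into the sorted table, together with the index needed to regenerate the full key, and a prefix match triggers a full re-derivation and comparison to reject collisions. The derive_pubkey() function here is a deterministic stand-in for the real EC point computation.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

#define NKEYS  1000000            /* entries in the table                 */
#define PREFIX 6                  /* bytes kept per entry (12 hex chars)  */

typedef struct { uint8_t prefix[PREFIX]; uint32_t index; } entry_t;

/* Stand-in for computing the i-th public key; real code would do EC math. */
static void derive_pubkey(uint32_t i, uint8_t out[33])
{
    uint64_t x = 0x9E3779B97F4A7C15ULL * (i + 1);
    for (int j = 0; j < 33; j++) {
        x ^= x << 13; x ^= x >> 7; x ^= x << 17;
        out[j] = (uint8_t)x;
    }
}

/* Compare entries by their stored prefix only. */
static int cmp_entry(const void *a, const void *b)
{
    return memcmp(((const entry_t *)a)->prefix,
                  ((const entry_t *)b)->prefix, PREFIX);
}

int main(void)
{
    entry_t *table = malloc(NKEYS * sizeof(entry_t));
    uint8_t pub[33];
    if (!table) return 1;

    /* Build: keep only the 6-byte prefix plus the index needed to re-derive. */
    for (uint32_t i = 0; i < NKEYS; i++) {
        derive_pubkey(i, pub);
        memcpy(table[i].prefix, pub, PREFIX);
        table[i].index = i;
    }
    qsort(table, NKEYS, sizeof(entry_t), cmp_entry);

    /* Lookup: binary search on the prefix, then confirm with the full key. */
    uint8_t target[33];
    derive_pubkey(123456, target);            /* pretend this is the searched key */

    entry_t probe = {{0}, 0};
    memcpy(probe.prefix, target, PREFIX);
    entry_t *hit = bsearch(&probe, table, NKEYS, sizeof(entry_t), cmp_entry);
    if (hit) {
        derive_pubkey(hit->index, pub);       /* regenerate the full key only on a hit */
        if (memcmp(pub, target, 33) == 0)
            printf("confirmed match at index %u\n", hit->index);
        else
            printf("prefix collision (false positive), full check rejected it\n");
    }
    free(table);
    return 0;
}
```

Real code would also have to scan neighboring entries when several keys share the same 6-byte prefix; the sketch ignores that case for brevity. The trade-off is exactly the one discussed above: entries shrink from 33 bytes to 6, so roughly 10x more fit in the same RAM, at the cost of occasional prefix collisions that must be resolved with a full key comparison.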