https://github.com/twitter/twemproxy
Consider setting this up in front of your memcached cluster.
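For reference, a minimal nutcracker (twemproxy) pool definition for a memcached cluster might look like the sketch below. The pool name, server addresses, and tuning values are placeholders; adapt them to your own cluster.

```yaml
# Hypothetical pool "alpha" -- all names, addresses, and timeouts are examples.
alpha:
  listen: 127.0.0.1:22121          # clients connect here instead of memcached directly
  hash: fnv1a_64
  distribution: ketama             # consistent hashing across the pool
  auto_eject_hosts: true
  server_retry_timeout: 30000
  server_failure_limit: 2
  redis: false                     # memcached protocol (the default)
  servers:
    - 10.0.0.1:11211:1
    - 10.0.0.2:11211:1
    - 10.0.0.3:11211:1
    - 10.0.0.4:11211:1
    - 10.0.0.5:11211:1
```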
You can still gather stats, but calling the stats command every 15 seconds isn't a good idea. Earlier versions of memcached have had issues with frequent stats calls; searching doesn't turn up much on this, but I know it from experience: on a system handling billions of transactions per second, running the stats collector would noticeably impact memcached.
Running twemproxy works around this issue. You'll need to modify the code to read stats from twemproxy's stats port instead; connecting to that port returns them as JSON.
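A rough sketch of reading that stats port from Python (twemproxy's default stats port is 22222, but it's configurable; the `pool_request_counts` helper and the exact per-server field names it assumes, like `requests`, are illustrative):

```python
import json
import socket


def fetch_twemproxy_stats(host="127.0.0.1", port=22222, timeout=5):
    """Read the JSON stats blob twemproxy emits when you connect to
    its stats port. 22222 is the default; adjust to your setup."""
    chunks = []
    with socket.create_connection((host, port), timeout=timeout) as sock:
        while True:
            data = sock.recv(4096)
            if not data:  # twemproxy closes the connection after the blob
                break
            chunks.append(data)
    return json.loads(b"".join(chunks))


def pool_request_counts(stats, pool_name):
    """Sum per-server request counters for one pool.

    Assumes the per-pool object maps server names to dicts carrying a
    "requests" counter; non-dict values (pool-level counters) are skipped.
    """
    pool = stats.get(pool_name, {})
    return sum(
        server.get("requests", 0)
        for server in pool.values()
        if isinstance(server, dict)
    )
```

With this, the 15-second loop can poll the proxy instead of hammering memcached's own stats command.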
I've just tested a rough temporary setup with five hosts and the latest dump of the data.txt file.
Also, the output when a key is found is pretty noisy: plutus.txt gets a huge dump of data rather than just the key that was found. I didn't find anything real, but I loaded private key 1 into memcache and modified scroo.py to search only a specific range. https://privatekeys.pw/key/0000000000000000000000000000000000000000000000000000000000000001
Another tweak I tried, which is unfortunately Intel-only at this time, is to grab the secp256k1 module, DLL, and library from https://github.com/iceland2k14/secp256k1. It's much faster with those libraries and with some of the random range settings changed.
With these tweaks in place, memcache is more stable. I have about a decade of experience with memcached: you'll want to replace the stats check with something else to get real stability. I haven't wired up the JSON collection from twemproxy yet, but that would be the ideal approach.
Additionally, and I'm not sure if it's useful here, Redis has a bloom filter module (RedisBloom). It could help low-memory systems do a bloom-filter search the way BSGS works, but I've got no idea how that would fit in... yet.
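To illustrate the membership test that RedisBloom provides server-side (via `BF.RESERVE` / `BF.ADD` / `BF.EXISTS`), here is a tiny standalone bloom filter sketch; the bit and hash counts are arbitrary examples, whereas RedisBloom derives them from a target capacity and error rate:

```python
import hashlib


class BloomFilter:
    """Minimal bloom filter: "no" is definite, "yes" is probabilistic.

    Illustrative only; a real deployment would use RedisBloom's
    BF.ADD / BF.EXISTS so the filter lives server-side in low memory.
    """

    def __init__(self, num_bits=8192, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item: bytes):
        # Derive num_hashes independent positions by salting the hash.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: bytes) -> bool:
        # False: definitely absent. True: possibly present (check elsewhere).
        return all(
            self.bits[pos // 8] & (1 << (pos % 8))
            for pos in self._positions(item)
        )
```

A BSGS-style search would add the precomputed "baby step" points and then probe candidates with `might_contain`, only doing the expensive verification on hits.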