large file #6

Closed
thearthouse opened this issue Feb 4, 2021 · 5 comments

Comments

@thearthouse

I generated 20 million random public keys for BSGS in the range 800000000000000000000000000000:FFFFFFFFFFFFFFFFFFFFFFFFFFFFFF, loaded them into keyhunt, and ran the command:

./keyhunt -m bsgs -f random_pubs.txt -r 800000000000000000000000000000:FFFFFFFFFFFFFFFFFFFFFFFFFFFFFF -s 60 -R

And it is stuck here:
[+] Version 0.1.20210112 BSGS
[+] Setting mode BSGS
[+] Stats output every 60 seconds
[+] Setting random mode.
[+] Opening file random_pubs.txt
[+] Added 19999999 points from file
[+] Setting N up to 17592186044416.
[+] Init bloom filter for 4194304 elements : 7.00 MB
[+] Allocating 128.00 MB for aMP Points
[+] Precalculating 4194304 aMP points
[+] Allocating 160.00 MB for bP Points
[+] precalculating 4194304 bP points
[+] Sorting 4194304 elements
[+] Thread 0: 0000000000000000000000000000000000ec842d5ee3e39c0f384e482569bc7f  (this has been running for an hour with no update)

Why: I want to test whether it can load a large file, and I also want to test the randomness.

@albertobsd
Owner

Hi, thanks for using my code.

Are you using subtracted keys for the 120 puzzle?

BSGS works best with fewer public keys in the file: every doubling of the number of public keys halves the speed, or in other words doubles the time of each thread cycle.

Let's suppose that a single public key takes 10 seconds per cycle:

  • 2 public keys will take 20 seconds
  • 4 public keys will take 40 seconds
  • 8 public keys will take 80 seconds

And so on: the time scales linearly with the number of public keys, so you can make your own time estimates.
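
As a rough sketch of that linear model (the 10-second base is the assumed figure from the example above, not a measurement):

# Cycle-time model from the comment above: each doubling of the
# public-key count doubles the per-cycle time, so the time is
# linear in the number of keys.
base_cycle_seconds = 10  # assumed time for a single public key

for n_keys in (1, 2, 4, 8, 20_000_000):
    print(f"{n_keys:>10} keys -> {base_cycle_seconds * n_keys:>12} seconds per cycle")

At the 20 million keys from the original report, this model predicts a cycle on the order of 200 million seconds, which would explain why the run above appears stuck after an hour.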

Good luck, best regards!

@thearthouse
Author

thank you

@albertobsd
Owner

albertobsd commented Feb 4, 2021

Try 50 or 100 public keys first; it will take some time, but it will work. For 20 million keys I recommend the xpoint mode, which works like this:

./keyhunt -m xpoint -f random_xvalues.txt -b 120 -s 60 -R

The file should contain only the 32-byte X value of each public key, in hexadecimal format, like this:

a301697bdfcd704313ba48e51d567543f2a182031efd6915ddc07bbcc4e16070
8b6e862a3556684850b6d4f439a2595047abf695c08b6414f95a13358dd553fd
f694cbaf2b966c1cc5f7f829d3a907819bc70ebcc1b229d9e81bda2712998b10
440ca1f08ea41265981ac4ed1efe7a37122dcc3877d2f9162db0e78b0f83cd58
e80fea14441fb33a7d8adab9475d7fab2019effb5156a792f1a11778e3c0df5d
796634e3f1ad56f0fdba069d9d07bce2ba2fd4f373ddd3ba7777bf279f1048da

The raw speed of this mode is not as high, but if you generate 20 million subtracted keys, the effective speed is the program's speed multiplied by the number of subtracted keys.
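
As a minimal sketch of producing such a file (pure-Python secp256k1 math, slow and for illustration only): the demo_secret, the offset range, and the key count below are placeholder assumptions; in real use you would parse the actual target public key instead, and you would also record each offset k, because a hit on a subtracted key only maps back to the original private key through k.

#!/usr/bin/env python3
# Sketch: generate subtracted keys from a target public key and write
# their X coordinates as 64-hex-char lines for the xpoint mode.
import random

P = 2**256 - 2**32 - 977  # secp256k1 field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # group order
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(p1, p2):
    # Affine point addition; None is the point at infinity.
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = 3 * x1 * x1 * pow(2 * y1, P - 2, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, P - 2, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, p):
    # Double-and-add scalar multiplication.
    r = None
    while k:
        if k & 1:
            r = add(r, p)
        p = add(p, p)
        k >>= 1
    return r

demo_secret = 0xDEADBEEF      # placeholder standing in for the unknown key
target = mul(demo_secret, G)  # in real use, parse the puzzle public key instead

with open("random_xvalues.txt", "w") as f:
    for _ in range(100):                # start with 100 keys, per the advice above
        k = random.randrange(1, 2**64)  # arbitrary offset range for the demo
        q = add(target, mul(N - k, G))  # target - k*G, since N*G is infinity
        if q is None:                   # only if k happens to equal the secret
            continue
        f.write("%064x\n" % q[0])       # 32-byte X value in hex

If keyhunt later reports a private key q for one of these X values, the target's private key is (q + k) mod N for the k that produced that line, which is why the offsets have to be stored alongside the file.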

@thearthouse
Author

Yes, that is what I was looking for. Thank you.

@vishal2241

@thearthouse Can you post a script to automate the generation of subtracted keys, parameterized by the distance and the number of keys to generate? Thanks. I'm also reachable at my username's Gmail address.
