enumeration performance #169

Open
malb opened this Issue Jul 26, 2016 · 5 comments

malb (Collaborator) commented Jul 26, 2016

Running `python ./set_mdc.py` from https://github.com/fplll/strategizer reports that my system performs about 40 million enumerations per second. With @cr-marcstevens's unpublished libenum (on which the new fplll code is based) I used to get close to 60 million enumerations per second.

libenum and old fplll

python ./set_mdc.py 1
[enumlib] setting verbose level to quiet.
[enumlib] setting number of threads to 1.
__main__:   fplll :: nodes est:   99492313.2, time: 2.4773s, nodes/s:   40161852.7, nodes act:   70987486
__main__: libenum :: nodes est:   99864843.8, time: 1.6496s, nodes/s:   60540109.0
utilities.mdc: number of core: 1
utilities.mdc: fplll_enum nodes per sec:    40161852.7
utilities.mdc: enumlib nodes per sec:       60540109.0

new fplll

python ./set_mdc.py 1
__main__:   fplll :: nodes:   86815246.0, time: 1.8795s, nodes/s:   46189871.0

Is that to be expected?

@malb malb added the question label Jul 26, 2016

cr-marcstevens (Collaborator) commented Jul 26, 2016
Hi Martin,

I wasn't able to use the exact same code structure because of embedding compatibility, especially since we used to mix with the regular enum and only dispatch to the recursive enum at specific depths. In particular there is still an 'if' quite close to the beginning of the recursive function that would be better placed one level higher, just before the point where the function calls itself.

Since the latest code uses the recursive enum throughout, that is something we can now do.

Also, -march=native might make some difference.
That option is recommended for a local installation, but for packaging it is of course not a good idea.
It might be worth adding a note about that to the build documentation.

-- Marc



malb (Collaborator) commented Jul 26, 2016

Thanks for explaining. Compiling with --march=native -O3 gives me:

python ./set_mdc.py 1
__main__:   fplll :: nodes:   86630232.0, time: 1.8041s, nodes/s:   48019463.0

I guess for the rest we leave this ticket open as a reminder that we should revisit enumeration at some point.


@malb malb referenced this issue Jul 26, 2016

Closed

--march=native -O3 #170

lducas (Member) commented Aug 3, 2016

BTW, the new enum code is now parallelized, right? Can we have this option on the command line?


malb (Collaborator) commented Aug 4, 2016

No, it's not parallelised yet.


cr-marcstevens (Collaborator) commented Aug 4, 2016

No. fplll itself is thread-safe, but so far it has no framework for threads.

It should be easy to add with C++11, but the choice of threads and mutexes, and specifically of a multi-threaded (lock-free) queue, is an fplll-wide decision.
My enumlib uses Boost, but that can be avoided, which is probably better for fplll.



@malb malb removed the question label Sep 1, 2016

@malb malb referenced this issue Sep 4, 2016

Merged

update readme #202
