GSOC 2019: Nuitka Benchmarks #231

kayhayen opened this Issue Jan 26, 2019 · 2 comments



kayhayen commented Jan 26, 2019

Nuitka currently has too little in the way of measuring the actual performance gains it provides. You would change that.

In a first stage, you would enhance the existing benchmarks to provide a more complete set of micro-benchmarks for the different levels of optimization, with more or less type knowledge. As a second step, you would add a history of commits in some form of graphs that extends over a longer period of time, and automatically identify changes that e.g. produce equivalent C code.

Skills: Python programming, Linux installs of Python; C tooling would be nice, but can be mentored.
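As a rough illustration of the construct-based measurement idea (this is a hypothetical sketch, not Nuitka's actual harness), a micro-benchmark can time a snippet containing the construct under test against a baseline with the construct removed, so that the difference isolates the construct's own cost:

```python
import timeit

def bench(stmt, setup="pass", number=1_000_000):
    """Return the best of 5 timing runs of `stmt`, in seconds."""
    return min(timeit.repeat(stmt, setup=setup, number=number, repeat=5))

# Construct under test: a binary '+' on values whose types the
# compiler may or may not know about.
with_construct = bench("a + b", setup="a = 'x'; b = 'y'")

# Baseline: identical setup, construct removed, so only the loop
# overhead remains.
without_construct = bench("pass", setup="a = 'x'; b = 'y'")

# The difference approximates the cost of the construct itself.
construct_cost = with_construct - without_construct
print(f"construct cost: {construct_cost:.6f}s per 1e6 iterations")
```

Running the same construct once through CPython and once through Nuitka-compiled code, and comparing the two differences, would then show the gain attributable to the construct rather than to interpreter startup or loop overhead.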



Nimishkhurana commented Feb 15, 2019

Hello @kayhayen .
This seems interesting to me. I would like to work on this, but have I have a few questions in mind. What does Without Construct code implies and What different levels of optimization mean .What I understood is that optimization means "Nuitka optimized code" for micro tasks and builtin functions of python.
Please correct me if I am wrong and give some starting pointers on how to start working on this issue.



kayhayen commented Feb 15, 2019

Hello Nimish,

What I consider a construct is a small piece of code, say a + b, where we would have various levels of knowledge about the types involved.

In the generic case, we know nothing about the types and execute the required generic protocol as fast as possible; with increasing type knowledge, we can be faster, e.g. doing a direct unicode add if we know both operands are strings.

I wrote about this in some detail here:

The benchmarks somehow need to create a lot of variants of roughly the same situation with slightly different context, and then of course also for all operators, not just +, but * and < and all their many friends.
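Generating those variants could be sketched like this (a hypothetical illustration, assuming a plain timeit-based harness rather than Nuitka's real one): enumerate the cross product of operators and type contexts, skipping combinations that are not defined in Python.

```python
import itertools
import timeit

# Operators to cover; the real set would include many more.
OPERATORS = ["+", "*", "<"]

# Type contexts, i.e. "levels of type knowledge" for the operands.
CONTEXTS = {
    "int/int": "a = 17; b = 4",
    "str/str": "a = 'x' * 10; b = 'y' * 10",
    "mixed":   "a = 17; b = 4.0",
}

def variants():
    """Yield (operator, context name, best time) for every valid combination."""
    for op, (name, setup) in itertools.product(OPERATORS, CONTEXTS.items()):
        stmt = f"a {op} b"
        try:
            t = min(timeit.repeat(stmt, setup=setup, number=100_000, repeat=3))
        except TypeError:
            # e.g. str * str is not defined; skip invalid combinations.
            continue
        yield op, name, t

for op, name, t in variants():
    print(f"{op!r:4} {name:8} {t:.4f}s")
```

Each (operator, context) pair is "roughly the same situation with slightly different context"; comparing the timings across contexts shows how much the available type knowledge is worth for that operator.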

