Performance changes between 1.2.0, 1.3.1 and 2.0a #1614

Closed
samuelcolvin opened this issue Feb 9, 2017 · 34 comments
@samuelcolvin
Member

I've just updated FrameworkBenchmarks to 1.3.1 to stop the CancelledError issue (TechEmpower/FrameworkBenchmarks#2561).

As part of that, I ran the benchmarks (locally with vagrant, so not the most rigorous testing setup) to see if there were performance changes. Here's the result:

      test  DB engine  Step    1.2.0    1.3.0   Improvement
      json          -     1     5989     5794           -3%
      json          -     2     6242     5703           -9%
      json          -     3     6325     5776           -9%
      json          -     4     5952     5554           -7%
      json          -     5     6192     5483          -11%
      json          -     6     6123     5331          -13%

        db      aiopg     1     2128     1993           -6%
        db      aiopg     2     2165     2058           -5%
        db      aiopg     3     2175     2030           -7%
        db      aiopg     4     2238     2003          -11%
        db      aiopg     5     2063     1974           -4%
        db      aiopg     6     2071     1911           -8%

        db    asyncpg     1     2782     2552           -8%
        db    asyncpg     2     2741     2737           -0%
        db    asyncpg     3     2735     2677           -2%
        db    asyncpg     4     2762     2630           -5%
        db    asyncpg     5     2821     2607           -8%
        db    asyncpg     6     2772     2527           -9%

     query      aiopg     1     1978     1840           -7%
     query      aiopg     2      707      636          -10%
     query      aiopg     3      389      352           -9%
     query      aiopg     4      264      243           -8%
     query      aiopg     5      199      189           -5%

     query    asyncpg     1     2645     2603           -2%
     query    asyncpg     2     1743     1629           -7%
     query    asyncpg     3      825     1113           35%
     query    asyncpg     4      645      831           29%
     query    asyncpg     5      534      684           28%

   fortune      aiopg     1     1814     1779           -2%
   fortune      aiopg     2     1865     1789           -4%
   fortune      aiopg     3     1882     1797           -4%
   fortune      aiopg     4     1870     1721           -8%
   fortune      aiopg     5     1807     1683           -7%
   fortune      aiopg     6     1767     1514          -14%

   fortune    asyncpg     1     1203     2089           74%
   fortune    asyncpg     2     1189     2074           74%
   fortune    asyncpg     3     1280     2087           63%
   fortune    asyncpg     4     1215     2095           72%
   fortune    asyncpg     5     1230     2112           72%
   fortune    asyncpg     6     1217     2052           69%

    update      aiopg     1     1375     1291           -6%
    update      aiopg     2      381      362           -5%
    update      aiopg     3      196      185           -5%
    update      aiopg     4      130      129           -1%
    update      aiopg     5       97       95           -3%

    update    asyncpg     1     1429     2124           49%
    update    asyncpg     2      780      992           27%
    update    asyncpg     3      501      622           24%
    update    asyncpg     4      378      456           21%
    update    asyncpg     5      298      352           18%

 plaintext          -     1     7159     6592           -8%
 plaintext          -     2     6941     6408           -8%
 plaintext          -     3     6732     6080          -10%
 plaintext          -     4     4651     4655            0%

(All numbers are requests per second as given by their results.json output. The different steps refer to number of queries executed for the DB tests and concurrency for the other tests. Apart from aiohttp, no other packages have changed.)
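The improvement column is simply the relative change in requests per second between the two versions; for clarity, this is how those percentages are presumably derived (a sketch, not the actual results.json tooling):

```python
def improvement(old_rps: float, new_rps: float) -> int:
    """Relative change in requests/sec, rounded to a whole percent."""
    return round((new_rps - old_rps) / old_rps * 100)

# Spot-checked against two rows of the table above:
print(improvement(5989, 5794))  # json, step 1 -> -3
print(improvement(1203, 2089))  # fortune/asyncpg, step 1 -> 74
```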

It seems there is a general trend of a ~10% performance regression; however, the code using raw asyncpg queries is consistently much faster.

I know these tests are far from perfect but my questions are:

  1. Would it be possible for aiohttp to have some internal performance benchmarks that are run regularly, so it's easy to see the changes in performance between versions? That would make it easier to maintain and improve performance as new features, checks and error handling are added.
  2. What has caused the regression in performance with aiopg and with simple requests? Could it be reversed?
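The regular internal benchmark asked about in question 1 could start out as a simple regression gate: compare a fresh run against stored baseline numbers and flag any test that drops past a threshold. A minimal sketch (the data layout, test names, and 5% threshold are all assumptions, not aiohttp tooling):

```python
def find_regressions(baseline: dict, current: dict, threshold: float = 0.05):
    """Return tests whose requests/sec dropped more than `threshold`
    relative to the stored baseline numbers."""
    regressions = []
    for name, old_rps in baseline.items():
        new_rps = current.get(name)
        if new_rps is not None and (old_rps - new_rps) / old_rps > threshold:
            regressions.append((name, old_rps, new_rps))
    return regressions

# Two rows from the table above: json-1 dropped ~3% (under the threshold),
# db-aiopg-1 dropped ~6% (flagged).
baseline = {"json-1": 5989, "db-aiopg-1": 2128}
current = {"json-1": 5794, "db-aiopg-1": 1993}
print(find_regressions(baseline, current))  # -> [("db-aiopg-1", 2128, 1993)]
```

A CI job could run this against each release candidate and fail the build when the list is non-empty.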
@fafhrd91
Member

fafhrd91 commented Feb 9, 2017

that's cool. let me check

@samuelcolvin
Member Author

By the way, I don't mean to sound negative. Aiohttp's performance is pretty good and, from what I can tell, with asyncpg it's outperforming all other python frameworks in the test!

@samuelcolvin samuelcolvin changed the title Partial performance degradation between 1.2.0 and 1.3.1 Performance changes between 1.2.0 and 1.3.1 Feb 9, 2017
@fafhrd91
Member

fafhrd91 commented Feb 9, 2017

it is totally fine. i actually thought about some consistent performance benchmarks, and this test suite will significantly help me. especially now that i am working on internal refactoring and http pipelining support

@fafhrd91
Member

@samuelcolvin i did some optimization work on the pipelining branch. it's about 10% faster than 1.2 on simple requests

@fafhrd91
Member

all changes are in master now

@samuelcolvin
Member Author

Thanks, can you link to the commit where this was done?

As per this discussion, do you think pipelining will have a noticeable effect on real-world applications? That discussion suggests it's unlikely to help in practice.

@fafhrd91
Member

pipelining in a python application is a benchmark-only tool :)
if you need pipelining to satisfy business requirements, then python is probably the wrong tool.
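For readers unfamiliar with the term: pipelining means writing several HTTP/1.1 requests on one connection before reading any response, so the server's replies come back in a batch. A self-contained sketch against a stdlib server (the handler and addresses are illustrative, not aiohttp code):

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive is required for pipelining

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(server.server_address) as sock:
    request = b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n"
    sock.sendall(request * 2)  # two requests on the wire, no read in between
    data = b""
    while data.count(b"ok") < 2:  # read until both responses arrive
        chunk = sock.recv(4096)
        if not chunk:
            break
        data += chunk

print(data.count(b"HTTP/1.1 200"))  # -> 2
server.shutdown()
```

The saving is one network round trip per extra request, which matters for benchmark tools like wrk but rarely for real clients, since browsers generally don't pipeline.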

@fafhrd91
Member

@samuelcolvin could you run benchmark again with aiohttp from parser branch?

@samuelcolvin
Member Author

samuelcolvin commented Feb 15, 2017 via email

@fafhrd91
Member

parser branch is merged to master

@samuelcolvin
Member Author

I've split the code into a separate repo as running the full tests and displaying the results was a real pain: https://github.com/samuelcolvin/aiohttp-benchmarks

@fafhrd91
Member

Wow! That's great!

@fafhrd91 fafhrd91 changed the title Performance changes between 1.2.0 and 1.3.1 Performance changes between 1.2.0, 1.3.1 and 2.0a Feb 16, 2017
@fafhrd91
Member

@asvetlov @1st1 interesting that python 3.6 is consistently slower than 3.5. i think it should be faster, at least because of the new Future implementation

@samuelcolvin
Member Author

Yes, I saw that too and was surprised; I thought 3.6 had performance improvements for asyncio.

@argaen
Member

argaen commented Feb 16, 2017

There were improvements for asyncio and dicts... Can't see at first sight why it should be slower in 3.6 😓

@samuelcolvin
Member Author

I've added the results pivoted to compare Python versions. The change is fairly consistent.

@pfreixes
Contributor

pfreixes commented Feb 16, 2017 via email

@fafhrd91
Member

on my mac i get ~5-7% better performance under python3.6, but my test is very simple

@samuelcolvin
Member Author

samuelcolvin commented Feb 16, 2017 via email

@fafhrd91
Member

I will do

@pfreixes
Contributor

pfreixes commented Feb 17, 2017

FYI, the JSON and simpletext tests - the ones I tried out of curiosity - behave at least as well on python3.6 as on python3.5. Python 3.6 may have slightly better performance, but it is not really appreciable at first sight - something around 5% - and a serious benchmark would need to be run to confirm it.

In any case, the important thing is that I cannot reproduce the large decrease between 3.5 and 3.6.

Note: I ran the JSON and simpletext tests at least 5 times per version and picked the best time. Otherwise, handmade tests sharing the CPU with other user processes might bias the results.
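Taking the best of several runs, as described in that note, is the standard way to discount runs slowed down by other processes; the stdlib captures the same idea with `timeit.repeat` plus `min()` (the sorted-range workload here is just a stand-in for a real request handler):

```python
import timeit

# repeat() returns one total time per run; min() keeps the run least
# disturbed by other processes sharing the CPU.
times = timeit.repeat(
    stmt="sorted(range(1000))",  # stand-in for the real benchmarked work
    repeat=5,                    # five runs, as in the comment above
    number=1000,                 # iterations per run
)
best = min(times)
print(f"best of {len(times)} runs: {best:.4f}s")
```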

@fafhrd91
Member

I will run benchmarks on separate aws c3 instances next week

@samuelcolvin
Member Author

There will need to be significant changes to my code to allow running on separate machines. I'll see if I can make the changes tomorrow.

@samuelcolvin
Member Author

I've modified the benchmark code to run the server remotely and rerun the test: https://github.com/samuelcolvin/aiohttp-benchmarks

Python 3.5 vs. 3.6 is much closer, but the trend is still that 3.6 is single-digit percentage points slower.

Obviously running the test uses up CPU credits pretty quickly but I was careful to make sure the tests finished before the server ran out of credits.

@misiek08

We are observing a memory leak and 100% CPU usage with 1.3.3. It happens on Linux - on OS X I don't see such problems. Version 1.2.0 works well on both Linux and OS X.

I'm investigating this issue; if there's anyone else with the same problem, please show your setup. I'm trying to write the minimal code needed to reproduce this issue.
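For narrowing down a leak like this, `tracemalloc` snapshots taken before and after a burst of requests usually point at the growing allocation site; a generic sketch (the bytearray loop stands in for the suspect workload, it is not aiohttp code):

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# Stand-in for the suspect workload (e.g. handling N requests while the
# process's RSS keeps climbing):
leaky = [bytearray(1024) for _ in range(1000)]

after = tracemalloc.take_snapshot()
stats = after.compare_to(before, "lineno")
for stat in stats[:3]:
    print(stat)  # largest growth first: file, line, and size delta
```

Running this at intervals on a live server shows which lines keep accumulating memory between snapshots; for the 100% CPU symptom, `py-spy` or a sampling profiler would be the analogous tool.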

@fafhrd91
Member

Is it on the server or the client? Do you see the CPU usage immediately or after some time?

Btw, please create a new ticket

@fafhrd91
Member

@samuelcolvin could you create a new PR for FrameworkBenchmarks with aiohttp 2.0?

@fafhrd91
Member

btw maybe you want to merge aiohttp-benchmarks into aiohttp? or move it to aio-libs?

@samuelcolvin
Member Author

Was just thinking about this and was waiting for the 2.0 release. Will do.

I think this can be closed too; any further discussion should happen on a new issue.

> btw maybe you want to merge aiohttp-benchmarks into aiohttp? or move it to aio-libs?

For me it's not part of the framework, so it should be a separate repo. I'll transfer it. I think we should also delete (or move) the current benchmarks directory.

@fafhrd91
Member

agree on benchmarks directory

@samuelcolvin
Member Author

I'll wait 24 hours in case the release causes immediate problems which need fixing with patch releases.

Congratulations on 2.0.0 🎉

@samuelcolvin
Member Author

Benchmarks moved into this org: https://github.com/aio-libs/aiohttp-benchmarks.

FrameworkBenchmarks updated (PR pending): TechEmpower/FrameworkBenchmarks#2609

@fafhrd91
Member

Awesome! Thanks!

@lock

lock bot commented Oct 28, 2019

This thread has been automatically locked since there has not been
any recent activity after it was closed. Please open a new issue for
related bugs.

If you feel like there are important points made in this discussion,
please include those excerpts in the new issue.

@lock lock bot added the outdated label Oct 28, 2019
@lock lock bot locked as resolved and limited conversation to collaborators Oct 28, 2019