this little bit of caching gives us a massive perf gain on Discourse #242

Merged
merged 1 commit into from Mar 19, 2013


SamSaffron commented Mar 18, 2013

Before:

sam@ubuntu:~/Source/active_model_serializers$ ab -c 1 -n 100 http://localhost:3000/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient).....done


Server Software:        thin
Server Hostname:        localhost
Server Port:            3000

Document Path:          /
Document Length:        44792 bytes

Concurrency Level:      1
Time taken for tests:   10.616 seconds
Complete requests:      100
Failed requests:        33
   (Connect: 0, Receive: 0, Length: 33, Exceptions: 0)
Write errors:           0
Total transferred:      4568084 bytes
HTML transferred:       4479163 bytes
Requests per second:    9.42 [#/sec] (mean)
Time per request:       106.160 [ms] (mean)
Time per request:       106.160 [ms] (mean, across all concurrent requests)
Transfer rate:          420.22 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    89  106  33.4     92     228
Waiting:       89  106  33.4     92     228
Total:         89  106  33.4     92     228

Percentage of the requests served within a certain time (ms)
  50%     92
  66%     96
  75%     97
  80%     98
  90%    156
  95%    198
  98%    204
  99%    228
 100%    228 (longest request)
sam@ubuntu:~/Source/active_model_serializers$ 

After:


sam@ubuntu:~/Source/active_model_serializers$ ab -c 1 -n 100 http://localhost:3000/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient).....done


Server Software:        thin
Server Hostname:        localhost
Server Port:            3000

Document Path:          /
Document Length:        44792 bytes

Concurrency Level:      1
Time taken for tests:   8.599 seconds
Complete requests:      100
Failed requests:        18
   (Connect: 0, Receive: 0, Length: 18, Exceptions: 0)
Write errors:           0
Total transferred:      4568161 bytes
HTML transferred:       4479179 bytes
Requests per second:    11.63 [#/sec] (mean)
Time per request:       85.988 [ms] (mean)
Time per request:       85.988 [ms] (mean, across all concurrent requests)
Transfer rate:          518.80 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:    75   86  28.1     77     188
Waiting:       75   86  28.1     77     188
Total:         75   86  28.1     77     188

Percentage of the requests served within a certain time (ms)
  50%     77
  66%     78
  75%     78
  80%     80
  90%     87
  95%    179
  98%    182
  99%    188
 100%    188 (longest request)

So, in plain English, mean request time has gone down from 106ms to 86ms. A huge gain.

This happens because, internally, AMS calls pluralize over and over again ... and pluralize is slow:

https://github.com/rails-api/active_model_serializers/blob/master/lib/active_model/serializer/associations.rb#L157
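The fix amounts to memoizing the pluralized association name, so the slow inflector runs at most once per association instead of on every serialized object. A minimal sketch of the pattern (the class, method names, and the stand-in inflector below are illustrative, not the exact code in this PR):

```ruby
# Sketch of the caching pattern: compute the pluralized name once and
# reuse it. slow_pluralize stands in for ActiveSupport's String#pluralize,
# which is the expensive call in the real code path.
class Association
  attr_reader :pluralize_calls

  def initialize(name)
    @name = name.to_s
    @pluralize_calls = 0
  end

  # Simulated slow inflection; counts invocations so the caching is visible.
  def slow_pluralize(word)
    @pluralize_calls += 1
    word.end_with?("s") ? word : word + "s"
  end

  # ||= memoizes the result: the slow call happens at most once per
  # instance, because a name's plural form never changes.
  def plural_name
    @plural_name ||= slow_pluralize(@name)
  end
end
```

Since pluralization is a pure function of the name, caching it is safe; the win compounds because serializers touch each association's name on every request.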

As isolated by the yet-to-be-announced MiniProfiler flame graph:

[MiniProfiler flame graph screenshot]

ZogStriP commented Mar 18, 2013

👍

@@ -136,6 +144,7 @@ def serialize_ids
end

class HasOne < Config #:nodoc:

steveklabnik Mar 18, 2013

✂️

SamSaffron Mar 18, 2013

sorry about that ... will sort it out; it took a lot of fiddling to find a clean patch

steveklabnik Mar 18, 2013

No worries. :D


steveklabnik commented Mar 18, 2013

Seems good. :)


SamSaffron commented Mar 18, 2013

ok 🍦 we should be good now ... also, has anyone looked at #241? I really want to get us on the latest AMS; not sure what the best approach is to fix it.


steveklabnik commented Mar 19, 2013

I have AMS work on my plate for tomorrow, so I haven't looked into it yet.

steveklabnik added a commit that referenced this pull request Mar 19, 2013

Merge pull request #242 from SamSaffron/fix
this little bit of caching gives us a massive perf gain on Discourse

@steveklabnik steveklabnik merged commit 5d2ae9b into rails-api:master Mar 19, 2013

1 check passed

default The Travis build passed

steveklabnik commented Mar 19, 2013

❤️
