this little bit of caching gives us a massive perf gain on Discourse #242
Conversation
@@ -136,6 +144,7 @@ def serialize_ids
  end

  class HasOne < Config #:nodoc:
Sorry about that ... will sort it out; it took a lot of fiddling to find a clean patch.
No worries. :D
Seems good. :)

ok

I have AMS work on my plate for tomorrow, so I haven't looked into it yet.
Before: [request timing screenshot]
After: [request timing screenshot]
So in English: the median request time has gone down from 106ms to 85ms. A huge gain.
This happens because, internally, AMS calls `pluralize` over and over again, and `pluralize` is slow:
https://github.com/rails-api/active_model_serializers/blob/master/lib/active_model/serializer/associations.rb#L157
As isolated by the yet-to-be-announced MiniProfiler flame graph: [flame graph screenshot]
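The fix is essentially to compute the pluralized name once and reuse it instead of re-running the inflector on every serialized object. Here's a minimal sketch of that kind of memoization in Ruby; `HasManyConfig` and `plural_key` are made-up names for illustration, not the names used in the actual patch.

```ruby
require 'active_support/core_ext/string/inflections'

# Hypothetical stand-in for an association config object; sketches the
# caching idea, not the exact AMS patch.
class HasManyConfig
  def initialize(name)
    @name = name
  end

  # The first call pays the cost of `pluralize`; every later call
  # returns the cached symbol instead of hitting the inflector again.
  def plural_key
    @plural_key ||= @name.to_s.pluralize.to_sym
  end
end

config = HasManyConfig.new(:post)
config.plural_key # => :posts (runs the inflector)
config.plural_key # => :posts (cached)
```

Since each `pluralize` call walks ActiveSupport's inflection rules, caching the result per association turns per-object inflector calls into a one-time cost.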