
Optimization for PG2 ETS insertion #1311

Merged
merged 1 commit into phoenixframework:master on Oct 25, 2015

Conversation

@gabiz
Contributor

@gabiz gabiz commented Oct 25, 2015

ETS :bag insertion for elements with the same key grows linearly with the number of elements already stored under that key, whereas :duplicate_bag insertion is constant time.
This change does not increase the size of the ETS table, and there is no performance degradation for lookups or deletions.
I was able to replicate insertion timeouts when there are lots of elements with the same key using :bag; it does not happen using :duplicate_bag.
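For context, a minimal benchmark sketch of the asymmetry described above (module name, table name, and count are illustrative, not from the PR): inserting many distinct values under a single key degrades with :bag, which must scan the existing entries under that key to reject exact duplicates, while :duplicate_bag simply appends.

```elixir
defmodule EtsBagBench do
  # Hypothetical benchmark: times n inserts under one key for each table type.
  def run(n \\ 50_000) do
    for type <- [:bag, :duplicate_bag] do
      tab = :ets.new(:bench, [type, :public])

      {micros, :ok} =
        :timer.tc(fn ->
          Enum.each(1..n, fn i -> :ets.insert(tab, {:topic, i}) end)
        end)

      :ets.delete(tab)
      IO.puts("#{type}: #{div(micros, 1000)} ms for #{n} inserts under one key")
    end
  end
end
```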

…e for elements with same key
@Gazler
Member

@Gazler Gazler commented Oct 25, 2015

Before:

[screenshot: 16gb-330k]

After:

[screenshot: phoenix420k-10k-arrival-16gb]

chrismccord added a commit that referenced this pull request Oct 25, 2015
Optimization for PG2 ETS insertion
@chrismccord chrismccord merged commit 6d0dbb3 into phoenixframework:master Oct 25, 2015
1 check failed: continuous-integration/travis-ci/pr (The Travis CI build failed)
@chrismccord
Member

@chrismccord chrismccord commented Oct 25, 2015

@gabiz You are my hero! This increased our arrival rate by 10x, as well as solved the mysterious contention causing our subscribe timeouts! Thanks so much!!! 😍🚀🚀

@bryansray
Contributor

@bryansray bryansray commented Oct 25, 2015

Awesome commit @gabiz

@josevalim
Member

@josevalim josevalim commented Oct 25, 2015

@gabiz this is so awesome!

@josevalim
Member

@josevalim josevalim commented Oct 26, 2015

@chrismccord just to document, it would be really nice to move the writes back to Local (both inserts and deletes) and see how they affect performance now that we are using the proper ETS type! Once you do, remove the write_concurrency: true and :public flags. It may help performance (or not at all); that's why we need to measure. :)
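A sketch of the two configurations being compared (table name and setup are illustrative; the real code lives in Phoenix.PubSub.Local and GC and may differ):

```elixir
# Today (roughly): a public table with write_concurrency, written to from many processes.
fast_path = :ets.new(:local_topics, [:duplicate_bag, :public, write_concurrency: true])

# The suggested experiment: route inserts and deletes through the Local GenServer,
# so the table can stay protected and the write_concurrency flag can be dropped.
serialized = :ets.new(:local_topics, [:duplicate_bag, :protected])
```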

@chrismccord
Member

@chrismccord chrismccord commented Oct 26, 2015

That's the first thing on my todo list tomorrow.


@gabiz
Contributor Author

@gabiz gabiz commented Oct 26, 2015

@josevalim the keyless match_delete is very expensive; it needs to traverse the whole table.
https://github.com/phoenixframework/phoenix/blob/master/lib/phoenix/pubsub/gc.ex#L59

Have you considered using a reverse ETS table to store the pid -> topic info? With that, the delete is several orders of magnitude faster and avoids blocking other GenServer operations.
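A sketch of the reverse-table idea (all names are hypothetical, not the actual Phoenix.PubSub code): keeping a second table keyed by pid turns cleanup of a dead subscriber into a key lookup plus targeted deletes, instead of a full-table match_delete scan.

```elixir
# Hypothetical sketch of a reverse index for subscriber cleanup.
topics = :ets.new(:topics, [:duplicate_bag, :public])  # stores {topic, pid}
pids   = :ets.new(:pids,   [:duplicate_bag, :public])  # stores {pid, topic}

subscribe = fn topic, pid ->
  :ets.insert(topics, {topic, pid})
  :ets.insert(pids, {pid, topic})
end

# Without the reverse table, removing a dead pid must scan every entry:
#   :ets.match_delete(topics, {:_, pid})
# With it, the pid's topics come from a key lookup, then per-topic deletes:
unsubscribe_all = fn pid ->
  for {^pid, topic} <- :ets.lookup(pids, pid) do
    :ets.match_delete(topics, {topic, pid})
  end
  :ets.delete(pids, pid)
end
```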

@gabiz
Contributor Author

@gabiz gabiz commented Oct 26, 2015

@josevalim I just noticed that we used to have a reverse HashDict from pid to topic at one point. We certainly need something like that if we plan to move the deletes back to Local. Otherwise a bunch of disconnects will trigger the expensive deletes and block incoming subscriptions.

@indera

@indera indera commented Dec 7, 2017

@gabiz what tools do you recommend for profiling Elixir code? To fix this kind of issue there must be a better way than looking at :observer.start.
Thank you.
