Implement delete edges #1063
Conversation
zlcook commented Oct 14, 2019 (edited)
- Delete Edges sentence:
Jenkins go
Unit testing passed.
Jenkins go
Unit testing passed.
Thanks for taking care of it. The PR looks good to me.
Unit testing failed.
Jenkins go
Unit testing passed.
Clean code! Could you explain why we removed the WHERE clause in delete edges?
Just to keep it simple, the
LGTM. Thanks for taking care of it!
Unit testing passed.
LGTM
Unit testing failed.
Jenkins go
Could you fix the conflicts? @zlcook
Unit testing failed.
I will do.
cf01f0c
Jenkins go
Unit testing passed.
LGTM
* implement delete edges
* rebase master
## What type of PR is this?
- [ ] bug
- [ ] feature
- [X] enhancement

## What problem(s) does this PR solve?

#### Issue(s) number:

#### Description:
Reduce the memory usage of `AtomicLogBuffer`. When individual logs are big enough, memory usage becomes much larger than expected: we allow 5 dirty nodes, and each node contains 64 logs, so if each log is 1 MB in size, the buffer occupies 1 MB * 64 * 5 = 320 MB of memory (and that is only one part). A single log's size is usually trivial, but when we rebuild index or sync, we batch the operations, which can make the storage service OOM.

## How do you solve it?
When the buffer's size is bigger than expected, trigger gc anyway (not only when we have enough dirty nodes).

## Special notes for your reviewer, ex. impact of this fix, design document, etc:

## Checklist:
Tests:
- [X] Unit test (positive and negative cases)
- [ ] Function test
- [ ] Performance test
- [ ] N/A

Affects:
- [ ] Documentation affected (Please add the label if documentation needs to be modified.)
- [ ] Incompatibility (If it breaks the compatibility, please describe it and add the label.)
- [ ] If it's needed to cherry-pick (If cherry-pick to some branches is required, please label the destination version(s).)
- [ ] Performance impacted: Consumes more CPU/Memory

## Release notes:
Please confirm whether to be reflected in release notes and how to describe:
> ex. Fixed the bug .....

Migrated from vesoft-inc#4386

Co-authored-by: Doodle <13706157+critical27@users.noreply.github.com>