
Implement delete edges #1063

Merged
merged 2 commits into vesoft-inc:master from delete_edge
Dec 2, 2019

Conversation

zlcook
Contributor

@zlcook zlcook commented Oct 14, 2019

  • Delete Edges statement:
DELETE EDGE <edge_type> <vid> -> <vid>[@weight] [, <vid> -> <vid>[@weight] ...]
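
For illustration, a hypothetical use of this statement (the edge type, vertex IDs, and weights below are made up for the example, not taken from this PR):

```ngql
# delete a single "follow" edge from vertex 100 to vertex 200
DELETE EDGE follow 100 -> 200

# delete two "transfer" edges in one statement, with explicit weights
DELETE EDGE transfer 100 -> 101@0, 100 -> 102@1
```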

@dangleptr dangleptr added the ready-for-testing PR: ready for the CI test label Oct 15, 2019
@dangleptr
Contributor

Jenkins go

@nebula-community-bot
Member

Unit testing passed.

@zlcook
Contributor Author

zlcook commented Oct 18, 2019

Jenkins go

@nebula-community-bot
Member

Unit testing passed.

dangleptr
dangleptr previously approved these changes Nov 8, 2019
Contributor

@dangleptr dangleptr left a comment

Thanks for taking care of it. The PR looks good to me.

@nebula-community-bot
Member

Unit testing failed.

@Amber1990Zhang Amber1990Zhang mentioned this pull request Nov 8, 2019
@zlcook
Contributor Author

zlcook commented Nov 20, 2019

Jenkins go

@nebula-community-bot
Member

Unit testing passed.

Contributor

@critical27 critical27 left a comment

Clean code! Could you explain why the WHERE clause was removed from DELETE EDGE?

@zlcook
Contributor Author

zlcook commented Nov 20, 2019

> Clean code! Could you explain why the WHERE clause was removed from DELETE EDGE?

Just to keep this PR simple; the WHERE clause will be implemented in the next PR.

@bright-starry-sky
Contributor

LGTM. Thanks for taking care of it!

@nebula-community-bot
Member

Unit testing passed.

dangleptr
dangleptr previously approved these changes Nov 29, 2019
Contributor

@dangleptr dangleptr left a comment

LGTM

@nebula-community-bot
Member

Unit testing failed.

@dangleptr
Contributor

Jenkins go

@dangleptr
Contributor

Could you fix the conflicts? @zlcook

@nebula-community-bot
Member

Unit testing failed.

@zlcook
Contributor Author

zlcook commented Nov 29, 2019

> Could you fix the conflicts? @zlcook

I will do that.

@zlcook
Contributor Author

zlcook commented Dec 2, 2019

Jenkins go

@nebula-community-bot
Member

Unit testing passed.

Contributor

@darionyaphet darionyaphet left a comment

LGTM

@dangleptr dangleptr merged commit c8cba92 into vesoft-inc:master Dec 2, 2019
@zlcook zlcook deleted the delete_edge branch December 2, 2019 09:40
yixinglu pushed a commit to yixinglu/nebula that referenced this pull request Feb 16, 2020
* implement delete edges

* rebase master
tong-hao pushed a commit to tong-hao/nebula that referenced this pull request Jun 1, 2021
* implement delete edges

* rebase master
yixinglu pushed a commit to yixinglu/nebula that referenced this pull request Jan 31, 2023
## What type of PR is this?
- [ ] bug
- [ ] feature
- [X] enhancement

## What problem(s) does this PR solve?
#### Issue(s) number: 

#### Description:

Reduce the memory usage of `AtomicLogBuffer`. When individual logs are large, memory usage grows far beyond what is expected: we allow 5 dirty nodes, and each node holds 64 logs, so if each log is 1 MB the buffer occupies 1 MB * 64 * 5 = 320 MB of memory (and that is for a single part only).

Usually a single log is trivially small, but during index rebuild or sync we batch the operations into large logs, which can make the storage service OOM.


## How do you solve it?
When the buffer's size is bigger than expected, trigger GC immediately (not only when enough dirty nodes have accumulated).
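
The idea above can be sketched as follows. This is a minimal illustration of size-based GC triggering, not Nebula's actual `AtomicLogBuffer` API; the class and member names are made up for the example:

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Hypothetical sketch: track total buffered bytes and signal a GC pass
// as soon as the byte cap is exceeded, regardless of dirty-node count.
class LogBufferSketch {
 public:
  explicit LogBufferSketch(std::size_t capBytes) : capBytes_(capBytes) {}

  // Append a log entry of `size` bytes; returns true if this append
  // pushed the buffer over the cap, meaning GC should run now.
  bool append(std::size_t size) {
    sizes_.push_back(size);
    totalBytes_ += size;
    return totalBytes_ > capBytes_;
  }

  // GC pass: drop the oldest entries until we are back under the cap.
  void gc() {
    while (totalBytes_ > capBytes_ && !sizes_.empty()) {
      totalBytes_ -= sizes_.front();
      sizes_.pop_front();
    }
  }

  std::size_t totalBytes() const { return totalBytes_; }

 private:
  std::size_t capBytes_;            // maximum bytes the buffer may hold
  std::size_t totalBytes_ = 0;      // bytes currently buffered
  std::deque<std::size_t> sizes_;   // per-entry sizes, oldest first
};
```

With small logs the cap is never hit and behavior is unchanged; with batched 1 MB logs, GC fires long before 64 entries accumulate, which is the effect this PR is after.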


## Special notes for your reviewer, ex. impact of this fix, design document, etc:



## Checklist:
Tests:
- [X] Unit test(positive and negative cases)
- [ ] Function test
- [ ] Performance test
- [ ] N/A

Affects:
- [ ] Documentation affected (Please add the label if documentation needs to be modified.)
- [ ] Incompatibility (If it breaks the compatibility, please describe it and add the label.)
- [ ] If it's needed to cherry-pick (If cherry-pick to some branches is required, please label the destination version(s).)
- [ ] Performance impacted: Consumes more CPU/Memory


## Release notes:

Please confirm whether to be reflected in release notes and how to describe:
> ex. Fixed the bug .....


Migrated from vesoft-inc#4386

Co-authored-by: Doodle <13706157+critical27@users.noreply.github.com>