Fix the wrong usage for LRUCache #2267

Merged · 8 commits merged into vesoft-inc:master on Aug 21, 2020

Conversation

dangleptr (Contributor)

https://discuss.nebula-graph.com.cn/t/topic/948

As currently used, we can't ensure data consistency between the cache and the underlying data.

For example, the interleaving read data -> evict cache (by a write) -> fill cache (in the reader) leaves dirty data in the cache indefinitely, until the next write comes.

We should fix the problem by using putIfAbsent when reading but an overwriting insert when writing.

Of course, update/delete still have the same problem; they will be fixed later.
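For illustration, here is a minimal single-threaded sketch of the two intended operations (SimpleLRUCache and its members are hypothetical stand-ins, not the project's actual LRUCache, and the real cache also needs internal locking): readers only fill a missing entry, while writers unconditionally overwrite, so a stale reader can no longer clobber a newer write.

#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

class SimpleLRUCache {
public:
    explicit SimpleLRUCache(std::size_t capacity) : capacity_(capacity) {}

    // Read path: only fills a missing entry, so a stale reader can
    // never overwrite a newer value installed by a writer.
    void putIfAbsent(const std::string& key, std::string value) {
        if (map_.find(key) == map_.end()) {
            insert(key, std::move(value));
        }
    }

    // Write path: unconditionally overwrites, so the cache always
    // reflects the latest write.
    void insert(const std::string& key, std::string value) {
        auto it = map_.find(key);
        if (it != map_.end()) {
            it->second.first = std::move(value);
            // Refresh recency in O(1); the stored iterator stays valid.
            list_.splice(list_.begin(), list_, it->second.second);
            return;
        }
        if (map_.size() >= capacity_) {  // evict only for genuinely new keys
            map_.erase(list_.back());
            list_.pop_back();
        }
        list_.push_front(key);
        map_.emplace(key, std::make_pair(std::move(value), list_.begin()));
    }

private:
    std::size_t capacity_;
    std::list<std::string> list_;  // most recently used at the front
    std::unordered_map<
        std::string,
        std::pair<std::string, std::list<std::string>::iterator>> map_;
};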

if (FLAGS_enable_vertex_cache && vertexCache_ != nullptr) {
    cacheData.emplace_back(v.get_id(), tag.get_tag_id(), tag.get_props());
}
data.emplace_back(std::move(key), std::move(tag.get_props()));
Contributor

One question: does this cause duplicate indexes to be inserted? For example:
insert vertex ({vid1_tag1_version, prop1}, {vid1_tag1_version, prop2}...)

dangleptr (Contributor, Author) · Aug 4, 2020

When you insert {vid1_tag1_version, prop2}, the index generated by {vid1_tag1_version, prop1} will be deleted, per our logic.

dangleptr (Contributor, Author)

Good one. I missed that logic. Maybe we could combine the inserts before calling kvstore::asyncPut.
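One possible shape for that combining step, sketched under assumptions (dedupKeepLast and the KV alias are illustrative; kvstore::asyncPut is the call mentioned above): collapse the batch so only the last value written for each key survives, then hand the result to the put.

#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

using KV = std::pair<std::string, std::string>;

// Keep only the last value seen for each key (last-writer-wins),
// so duplicate vertices in one statement yield a single mutation.
// Output order across keys is unspecified, which is fine for a
// batched key/value put.
std::vector<KV> dedupKeepLast(const std::vector<KV>& batch) {
    std::unordered_map<std::string, std::string> last;
    for (const auto& kv : batch) {
        last[kv.first] = kv.second;  // later entries overwrite earlier ones
    }
    std::vector<KV> out;
    out.reserve(last.size());
    for (auto& kv : last) {
        out.emplace_back(kv.first, std::move(kv.second));
    }
    return out;
}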

    list_.push_front(key);
    map_.emplace(std::forward<key_type>(key),
                 std::make_tuple(std::forward<value_type>(value), list_.begin()));
} else {
    // Overwrite the value
    std::get<0>(it->second) = std::move(value);
Contributor

Do we also need to move this value to the front of the list?

dangleptr (Contributor, Author)

If so, the operation becomes: evict the old key, then insert the new key.
I am not sure which way is better.

Contributor

Perhaps moving it to the front makes more sense, since it is an LRU: we could first check whether the key exists, do what we want, then evict if necessary. We could use the list::splice method instead of evicting the old key and then inserting the new one.

Contributor

👍

dangleptr (Contributor, Author)

Makes sense.

Contributor

Try this: list_.splice(list_.begin(), list_, std::get<1>(it->second));
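For context, a small standalone demo of the splice-to-front idiom: std::list::splice relinks the node in O(1) without copying or reallocating, and the iterator stays valid afterwards, which is exactly why the iterator stored in the map does not need to be rewritten.

#include <iostream>
#include <iterator>
#include <list>

int main() {
    std::list<int> l = {1, 2, 3, 4};
    auto mid = std::next(l.begin(), 2);     // points at the element 3
    l.splice(l.begin(), l, mid);            // O(1) relink to the front
    for (int x : l) std::cout << x << ' ';  // prints: 3 1 2 4
    std::cout << '\n' << *mid << '\n';      // iterator still valid: prints 3
    return 0;
}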

VLOG(3) << "Insert key " << key << ", val " << value;
// Insert the new item into the cache, but first check if it is full
if (size() >= capacity_) {
Contributor

What if I have an LRU with capacity 10 that already holds keys 1 through 10 (list 1 to 10)? Then if I insert key 5 again, it will pop key 10, which is not really necessary.

dangleptr (Contributor, Author)

Makes sense.
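A usage sketch of the scenario above, against the hypothetical SimpleLRUCache from the earlier sketch (where insert() checks for the key before checking capacity): reinserting an existing key refreshes it instead of evicting another entry.

// Fill the cache to capacity, then reinsert an existing key.
SimpleLRUCache cache(10);
for (int i = 1; i <= 10; ++i) {
    cache.insert(std::to_string(i), "v" + std::to_string(i));
}
// Overwrite + move-to-front: nothing is evicted, because the
// existence check runs before the capacity check.
cache.insert("5", "v5-updated");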

dangleptr added the ready-for-testing (PR: ready for the CI test) label on Aug 13, 2020
bright-starry-sky (Contributor) left a comment

Well done. A good PR to learn from.

critical27 (Contributor) left a comment

LGTM

dangleptr merged commit 12e89a6 into vesoft-inc:master on Aug 21, 2020
xuguruogu pushed a commit to xuguruogu/nebula that referenced this pull request Sep 12, 2020
* Fix the wrong usage for LRUCache

* Overwrite the old value when calling insert

* Address comments

* Address liuyu's comments

* Address critical27's comments

Co-authored-by: heng <heng.chen@vesoft.com>
tong-hao pushed a commit to tong-hao/nebula that referenced this pull request on Jun 1, 2021
Labels: priority/med-pri (Priority: medium), ready-for-testing (PR: ready for the CI test)

5 participants