Add lock/unlock codes for kvstore #43

Closed
wants to merge 4 commits into from

Conversation

@albamc commented Jan 24, 2017

What I did

Add lock/unlock code for the kvstore.

How I did it

  • libkv supports Lock() and Unlock() functions.
  • create a kvstore lock for the path (uri.path + "/" + "gorblock")
  • use this lock and release it before the program exits (see the sketch below)
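
For context, here is a minimal sketch of the flow in the bullets above, based on the libkv examples linked under References. The Consul address, TTL, the "gorb" key prefix, and the error handling are only illustrative, not taken from this PR:

```go
package main

import (
	"time"

	"github.com/docker/libkv"
	"github.com/docker/libkv/store"
	"github.com/docker/libkv/store/consul"
)

func main() {
	// Register the Consul backend and connect to the store.
	consul.Register()
	kv, err := libkv.NewStore(store.CONSUL, []string{"127.0.0.1:8500"},
		&store.Config{ConnectionTimeout: 10 * time.Second})
	if err != nil {
		panic(err)
	}

	// Lock key mirrors the uri.path + "/" + "gorblock" scheme described above;
	// the "gorb" prefix is a placeholder for the real store path.
	lock, err := kv.NewLock("gorb/gorblock", &store.LockOptions{TTL: 20 * time.Second})
	if err != nil {
		panic(err)
	}

	// Block until the lock is acquired, touch the namespace, and release the
	// lock before the program exits.
	if _, err := lock.Lock(nil); err != nil {
		panic(err)
	}
	defer lock.Unlock()

	// ... read/write keys under the gorb namespace here ...
}
```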

How to test it

  • no error messages from libkv :)
  • I'll add a test case soon...

What to do

  • log level setting is missing
    • According to this Stack Overflow thread, if I set the logrus level once, it should affect every Go file that imports logrus.
    • But I found that even when main.go sets the level to "debug", the logrus level in context.go and store.go is still info.
    • I couldn't find out why this happens, but I guess it could be a multi-import logrus problem. Any help would be appreciated... (a sketch of what I mean is below)
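
For reference, this is what "setting the logrus parameter" once in main.go looks like; the import path is assumed from what gorb used at the time, and the log lines are just an illustration:

```go
package main

import (
	log "github.com/Sirupsen/logrus"
)

func main() {
	// SetLevel configures the package-level standard logger. Every file that
	// logs through this same import path (log.Debugf, log.Infof, ...) shares
	// that logger, so this single call should lower the level everywhere.
	log.SetLevel(log.DebugLevel)

	log.Debug("printed only when the level is DebugLevel or lower")
	log.Info("printed at InfoLevel and above")
}
```

Note that if context.go or store.go ends up importing a second, vendored copy of logrus under a different path, that copy has its own standard logger, which would match the "multi-import" behaviour guessed at above.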

References

https://github.com/docker/libkv/blob/master/docs/examples.md

Signed-off-by: Jonghyun Lee <albam.c@navercorp.com>
@codecov-io commented Jan 24, 2017

Codecov Report

Merging #43 into master will not impact coverage.

@@           Coverage Diff           @@
##           master      #43   +/-   ##
=======================================
  Coverage   96.93%   96.93%           
=======================================
  Files          13       13           
  Lines         196      196           
=======================================
  Hits          190      190           
  Misses          4        4           
  Partials        2        2

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update b169c92...2d06db9.

Jonghyun Lee added 3 commits January 26, 2017 07:45

@kobolog (Owner) commented Feb 8, 2017

Hey - thanks for this PR! Do I understand correctly that its purpose is to synchronize KV store access when multiple GORBs use it, to avoid reading some sort of invalid data? In that case, what sort of race conditions or data corruption do you envision when using the KV store without locking?

Regarding the logging levels – yeah, if not configured explicitly, all packages will end up using a global Logger object, so updating a setting in one place should affect it everywhere else. I'm not sure why this is not the behavior you're experiencing – I'll investigate!

Also, some basic tests would be nice to have before landing this PR =)

@albamc (Author) commented Feb 10, 2017

Umm... I think that if multiple gorb instances access the kvstore, each instance should use a different kvstore path.
This lock feature is not for multiple gorb instances.
It's for situations where each "host" reports its LB info directly to the kvstore (like registrator).
If gorb's policy is that L4 data must only be modified through the gorb API, then this PR is of no use and should be closed.
But I think some cases do require a lock, for example when the host side wants to pause L4 modifications until it has finished registering its L4 info (a rough sketch is below).
Actually, I developed a client like gorb-docker-link that reports gorb L4 data to consul directly, but that program contains many of our company's internal features, so it's not suitable for open sourcing.
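
To make the intended use case concrete, here is a rough host-side sketch under the assumption that such a client shares the same libkv lock key as this PR; the registerHost function and the key layout are hypothetical, not part of this PR or of gorb-docker-link:

```go
package agent

import (
	"time"

	"github.com/docker/libkv/store"
)

// registerHost holds the shared lock while registering this host's L4 entries,
// so gorb never observes a half-written set of keys.
func registerHost(kv store.Store, host string, entries map[string][]byte) error {
	lock, err := kv.NewLock("gorb/gorblock", &store.LockOptions{TTL: 20 * time.Second})
	if err != nil {
		return err
	}
	if _, err := lock.Lock(nil); err != nil {
		return err
	}
	defer lock.Unlock()

	for name, value := range entries {
		// The "gorb/services/<host>/<name>" layout is made up for this example.
		if err := kv.Put("gorb/services/"+host+"/"+name, value, nil); err != nil {
			return err
		}
	}
	return nil
}
```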

@kobolog (Owner) commented Feb 13, 2017

Oh, so you mean this is for cases when there's a separate service somewhere that might modify parts of the KV store that are also accessed by a Gorb instance, and in this case there might be a data race? I haven't really thought about this use case – the mental model is for all LB topology changes to go through the Gorb API. Are you saying that you have an internal service that uses these direct KV store modifications to control weights?

@albamc (Author) commented Feb 23, 2017

Sorry for the late answer.
In my environment, multiple clients report their container status to consul, and gorb reads this status to build L4 entries.
At first there were many errors, and one suspected reason was a race condition.
But in the end, I found that a race condition was not the cause.
The real cause was a misconfiguration in my consul server-agent setup.
As you commented, this PR doesn't fit gorb's mental model, and locking is not necessary for my environment.
So... please let me know what to do...

  • just drop it
  • make it optional for future use
  • or anything else.

Sorry for my hasty work. (really sorry!)

@kobolog (Owner) commented Mar 7, 2017

Yeah, I still don't see how this fixes anything, but I do see how it could promote the use case where a 3rd-party app or service fiddles with Gorb's KV namespace, which I don't think is a good thing. So I'm gonna close it.

@kobolog closed this Mar 7, 2017