Expose Low Latency memcache #13227

Closed
LukasReschke opened this Issue Jan 10, 2015 · 10 comments

LukasReschke commented Jan 10, 2015

public function createLowLatency($prefix = '') {
    $prefix = $this->globalPrefix . '/' . $prefix;
    if (XCache::isAvailable()) {
        return new XCache($prefix);
    } elseif (APCu::isAvailable()) {
        return new APCu($prefix);
    } elseif (APC::isAvailable()) {
        return new APC($prefix);
    } else {
        return null;
    }
}
The memcache factory has a low latency cache which only uses in-memory caches, so no remote hosts will get called.

IMHO it would make sense to expose this in the public interface as well. There are often cases where it makes sense to store something in a local cache but not in a remote cache (for example because the lookup time might be too long, or because the value is only specific to that instance, etc.).
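
A rough sketch of how this could look on the public side, assuming the factory is exposed through something like OCP\ICacheFactory (the interface name, docblocks and the second method are illustrative, not a finished API):

namespace OCP;

// Sketch only: exposing the low latency cache in the public API.
interface ICacheFactory {
    /**
     * Create an in-memory, per-machine cache (APCu/APC/XCache);
     * never talks to remote hosts.
     *
     * @param string $prefix
     * @return \OCP\ICache|null cache, or null if no local backend is available
     */
    public function createLowLatency($prefix = '');

    /**
     * Create a general purpose cache (may be a distributed backend).
     *
     * @param string $prefix
     * @return \OCP\ICache|null
     */
    public function create($prefix = '');
}

Apps could then ask the factory explicitly for a per-machine cache whenever the cached value must not leave the local host.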

LukasReschke commented Jan 10, 2015

@DeepDiver1975

DeepDiver1975 commented Jan 10, 2015

> IMHO it would make sense to expose this in the public interface as well.

indeed - good catch - thx

DeepDiver1975 added this to the 8.0-current milestone Jan 10, 2015

LukasReschke commented Jan 10, 2015

Might be sensible to reorder the other cache mechanisms then, so that Redis et al. are chosen before a local in-memory cache

DeepDiver1975 commented Jan 10, 2015

> Might be sensible to reorder the other cache mechanisms then, so that Redis et al. are chosen before a local in-memory cache

let's chat Monday about this

MorrisJobke commented Jan 13, 2015

@DeepDiver1975 What is the conclusion here?

DeepDiver1975 commented Jan 13, 2015

> @DeepDiver1975 What is the conclusion here?

none yet 🙈

LukasReschke commented Jan 13, 2015

My key points here: our current caching implementation is dangerous – we cache data in it which is not supposed to be shared, such as paths to binaries. Those may differ on every machine in a distributed deployment (a sendmail binary may live in a different directory on each host).

I'd suggest the following changes:

  1. Make the low latency cache public
  2. Change the priority in the general cache so that distributed memcaches are used first and local memory caches only as a fallback – currently it's impossible to use e.g. Redis and APCu together, since APCu will always be used first when it is installed (see the sketch below)
  3. Evaluate current usages of the cache and migrate to the low latency cache if useful

To sum up:

  • lowLatencyCache
    • APCu
    • APC
    • XCache
    • In-Array Implementation (to be discussed)
  • regularCache
    • Redis / Memcache / whatever
    • if not installed: fall back to the lowLatencyCache

Does that sound reasonable?
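
A minimal sketch of what points 1–2 could mean for the factory's create() method, assuming Redis and Memcached backends exist with the same isAvailable() pattern as the XCache/APCu/APC classes quoted above (illustration only, not the final patch):

public function create($prefix = '') {
    // Prefer distributed backends so every node sees the same data.
    $fullPrefix = $this->globalPrefix . '/' . $prefix;
    if (Redis::isAvailable()) {
        return new Redis($fullPrefix);
    } elseif (Memcached::isAvailable()) {
        return new Memcached($fullPrefix);
    }
    // No distributed backend installed: fall back to the low latency cache.
    // createLowLatency() applies the global prefix itself, so pass the raw prefix.
    return $this->createLowLatency($prefix);
}

With that ordering, installing APCu no longer shadows a configured Redis instance, while the low latency cache stays available for per-machine data such as binary paths.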

DeepDiver1975 commented Jan 14, 2015

> My key points here: our current caching implementation is dangerous – we cache data in it which is not supposed to be shared, such as paths to binaries. Those may differ on every machine in a distributed deployment (a sendmail binary may live in a different directory on each host).

Not that much of an issue - load balanced servers should be set up identically from my pov - but in the end, yes, it can cause issues

DeepDiver1975 commented Jan 14, 2015

> Does that sound reasonable?

indeed - go for it if time permits - THX

LukasReschke commented Jan 14, 2015

Assigned to @Xenopathic as he volunteered to do that. Thank you very much, Robin!
