
Conversation

thomas4019

I'm curious what people think of what I have so far. It's working but doesn't yet delete cache entries once it exceeds the limit.

@thomas4019 thomas4019 mentioned this pull request Apr 29, 2015
@jaredfolkins

This is racy, no? What is protecting the Entries map?

@thomas4019
Author

Yeah, good catch. I've added a RWMutex to protect the map.
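For illustration, here is a minimal sketch of what an RWMutex-guarded cache map can look like. The type and method names below are hypothetical, not the actual code from this PR:

```go
package main

import (
	"fmt"
	"sync"
)

// Cache is a toy in-memory cache whose Entries map is
// guarded by a sync.RWMutex.
type Cache struct {
	mu      sync.RWMutex
	Entries map[string][]byte
}

func NewCache() *Cache {
	return &Cache{Entries: make(map[string][]byte)}
}

// Get takes the read lock, so concurrent readers don't block each other.
func (c *Cache) Get(key string) ([]byte, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.Entries[key]
	return v, ok
}

// Set takes the write lock, excluding all readers and writers.
func (c *Cache) Set(key string, value []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.Entries[key] = value
}

func main() {
	c := NewCache()
	c.Set("/index.html", []byte("<html></html>"))
	v, ok := c.Get("/index.html")
	fmt.Println(string(v), ok)
}
```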

@jaredfolkins

Hi @thomas4019,

Just a friendly FYI. When writing Go code (like a cache) that you know will be accessed by multiple goroutines, always pass the -race flag when developing (go run -race main.go) and set the environment variable export GORACE=halt_on_error=1. This will cause your app to dump a stack trace upon the first detected race, so you can then go and figure out which two functions were contending for the resource.
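For reference, the workflow Jared describes looks like this on the command line (assuming the standard Go toolchain):

```shell
# Halt and dump a stack trace on the first race detected.
export GORACE="halt_on_error=1"

# Build and run with the race detector enabled.
go run -race main.go

# The detector also instruments tests.
go test -race ./...
```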

Your RWMutex is a good start, but you'll soon run into performance issues. Imagine 20k requests per second coming into your server, all needing to check the cache for an asset first. They will all be vying for that one and only lock, and it will slow everything down.

What you want to read up on is caching algorithms and consistent hashing. You could also dive into my BadActor project, which uses an LRU and the jump consistent hashing algorithm. I would not use the jump hash algorithm for this use case, but it will give you some ideas.
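To make the LRU idea concrete, here is a single-goroutine toy sketch (not the BadActor implementation) using the standard container/list: a doubly linked list keeps usage order, and a map gives O(1) lookup into the list.

```go
package main

import (
	"container/list"
	"fmt"
)

// lruEntry pairs a key with its value so eviction can delete from the map.
type lruEntry struct {
	key   string
	value []byte
}

// LRU is a toy least-recently-used cache with a fixed capacity.
type LRU struct {
	cap   int
	ll    *list.List
	items map[string]*list.Element
}

func NewLRU(cap int) *LRU {
	return &LRU{cap: cap, ll: list.New(), items: make(map[string]*list.Element)}
}

func (l *LRU) Get(key string) ([]byte, bool) {
	if el, ok := l.items[key]; ok {
		l.ll.MoveToFront(el) // mark as most recently used
		return el.Value.(*lruEntry).value, true
	}
	return nil, false
}

func (l *LRU) Set(key string, value []byte) {
	if el, ok := l.items[key]; ok {
		l.ll.MoveToFront(el)
		el.Value.(*lruEntry).value = value
		return
	}
	l.items[key] = l.ll.PushFront(&lruEntry{key, value})
	if l.ll.Len() > l.cap {
		oldest := l.ll.Back() // least recently used entry
		l.ll.Remove(oldest)
		delete(l.items, oldest.Value.(*lruEntry).key)
	}
}

func main() {
	l := NewLRU(2)
	l.Set("a", []byte("1"))
	l.Set("b", []byte("2"))
	l.Get("a")              // touch "a" so "b" becomes the oldest
	l.Set("c", []byte("3")) // exceeds capacity, evicts "b"
	_, okA := l.Get("a")
	_, okB := l.Get("b")
	fmt.Println(okA, okB) // true false
}
```

A production version would also need locking (or sharding, as discussed above) before it could serve concurrent requests.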

Feel free to ask any questions you want.

Later,
Jared

@pedronasser

OK, so you've written a middleware that keeps the cache in memory.
How about using this package for that? https://github.com/coocood/freecache
I haven't tried it yet, but it seems to work really well (I've read some benchmarks).
Could you try it?

@mholt
Member

mholt commented May 1, 2015

We're looking into some caching libraries, including that one.

Great feedback so far!

@avelino

avelino commented May 3, 2015

The implementation from @thomas4019 works; I think we can accept it and then evolve the caching system (or choose another).

@qq08420

qq08420 commented May 4, 2015

As a user, here are some problems:
1. How do I use routes? Something like the location keyword in nginx:
location /xxx {
    root /xxx
}
2. When I use root and proxy at the same time, root fails:
root /
proxy / http://xxxx

@mholt
Member

mholt commented May 4, 2015

@qq08420 Please open a separate issue with your question, and explain what you mean, like what error message you're getting, etc.

@thomas4019
Author

Thanks for all the suggestions and ideas for the cache middleware. Sadly, I won't have time to make any improvements in the near future. If anyone has time there are definitely some improvements that can be made to the header processing and the actual cache implementation. This middleware works as is, but as @jaredfolkins said, the performance could be improved a lot by using an alternative to the plain mutex. Anybody have time and desire to improve this middleware?

@nemosupremo

While it technically works, I'm not sure this middleware, as is, does more good than harm. Once it's pulled in, people are more likely to believe that this middleware "works" and then be bitten by multiple issues.

  1. The locking issue is probably the biggest blocker. The reason someone might want caching is to handle a large number of requests. If the cache doesn't perform under stress, then the cache middleware does more harm than good; after all, people expect caching to help during moments of high usage.
  2. The cache header parsing isn't fully fleshed out. The Expires header isn't supported (surprising to someone who might depend on Caddy's cache), and these are the kinds of bugs that won't be visible to someone who doesn't have access to Caddy's source code. While we could go the route of simply not caching responses that carry an Expires header, I'm not sure how useful it is to merge and distribute a half-baked cache implementation.
  3. Following from 2., I'm not sure the standards are fully respected here (everything with a 2xx code is cached, but the standards say only status codes 200, 203, 206, 300, 301 or 410 may be cached). While this might be minor, it's easily something that can cause headaches for someone and leave them wondering why their server is returning stale data.
  4. MaxCacheSize and Age options are provided but aren't enforced. Either don't allow the user to set them or actually implement the code needed to enforce them. A user might come in expecting these options to work and then become frustrated when Caddy "doesn't work".

In short, as is, this could cause a lot of headaches and future issues that may not be worth it. It seems wiser to leave this as a WIP branch rather than merging it and claiming there's a cache middleware when there really isn't.

@abiosoft

abiosoft commented May 4, 2015

I agree with @nemothekid. I would prefer it remains as a WIP until someone is able to pick it up and finish it.

@mholt mholt added the help wanted 🆘 Extra attention is needed label May 6, 2015
@mholt
Member

mholt commented May 12, 2015

@thomas4019 If it's alright with you, I'm going to close the PR since it turns out caches are really hard. One of us will be able to use your code as a starting point, though. Thank you for working on this!

@mholt mholt closed this May 12, 2015
@mholt mholt removed the help wanted 🆘 Extra attention is needed label May 12, 2015
ProfessorLogout pushed a commit to ProfessorLogout/caddy that referenced this pull request Jun 15, 2025