
Add option to pass a cachebusting parameter #575

Open
linuxd3v opened this issue Oct 6, 2021 · 8 comments · May be fixed by #3525
Labels: area:monitor · feature-request · type:enhance-existing

Comments


linuxd3v commented Oct 6, 2021

Is it a duplicate question?
no

Is your feature request related to a problem? Please describe.
Some websites sit behind edge caches such as AWS CloudFront, meaning the site's HTML response is fully cached on some CloudFront POP server. In that case, Uptime Kuma is in essence measuring the uptime of a random Amazon CloudFront POP server. The same applies if the site uses a caching proxy such as Varnish. I would argue most folks would like to see origin uptime instead.

Describe the solution you'd like
Add an option to inject a dynamically generated random query parameter so the HTTP request breaks through the cache and reaches the origin. Append that dynamic cachebreaker query parameter to all URLs:
http://example.com?cb=RANDOMSTRING
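
For illustration, here is a minimal TypeScript sketch (not actual Uptime Kuma code; the helper name and the cb parameter name are assumptions) of how such a cachebreaker could be appended before each check:

```ts
import { randomUUID } from "node:crypto";

// Hypothetical helper: append a random cachebusting query parameter to a monitor URL.
// Using the WHATWG URL API preserves any query string the URL already carries.
function withCacheBuster(monitorUrl: string, paramName = "cb"): string {
    const url = new URL(monitorUrl);
    url.searchParams.set(paramName, randomUUID()); // a fresh value on every check
    return url.toString();
}

// Prints e.g. "http://example.com/?cb=6f1c9b2e-...": a different URL on every request,
// so an edge cache treats it as uncached and forwards it to the origin.
console.log(withCacheBuster("http://example.com"));
```

One caveat: a cache that is configured to ignore query strings in its cache key would not be bypassed this way.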

Describe alternatives you've considered
I'm not aware of any.

Additional context
uptime kuma

linuxd3v added the feature-request label on Oct 6, 2021.

linuxd3v commented Oct 21, 2022

Basically the same idea as the *cachebuster* in Uptime Robot:
https://blog.uptimerobot.com/cachebuster-a-pro-tip-for-bypassing-cache/

@007hacky007

No one implemented this yet? Any patch available?
+1 for this feature.


b-a0 commented May 22, 2023

I just ran into the same issue. I am behind a corporate proxy that sits between Uptime Kuma and the host I am trying to reach and it caches (for a very long time) the response of that host.

If I run curl without any further options inside the Uptime Kuma Docker container, I get a cached response (X-Cache: HIT) even though the application at otp2.sub01.domain.tld is down and returns a 503 if I visit it in a browser.

user@host: curl -I otp2.sub01.domain.tld
HTTP/1.1 200 OK
Server: nginx/1.23.3
Date: Mon, 22 May 2023 10:33:27 GMT
Content-Type: text/html
ETag: "174185864-1667555858000"
Last-Modified: Fri, 04 Nov 2022 09:57:38 GMT
Age: 17118
X-Cache: HIT from app-proxy11.domain.tld
X-Cache-Lookup: HIT from app-proxy11.domain.tld:8080
Via: 1.1 app-proxy11.domain.tld (squid/4.15)
Connection: keep-alive

If I ask curl to ignore the cache, I get the correct response:

user@host: curl -I -H 'Cache-Control: no-cache' otp2.sub01.domain.tld
HTTP/1.1 503 Service Unavailable
Server: squid/4.15
Mime-Version: 1.0
Date: Mon, 22 May 2023 15:18:36 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 3699
X-Squid-Error: ERR_CONNECT_FAIL 111
Vary: Accept-Language
Content-Language: en
X-Cache: MISS from app-proxy11.domain.tld
X-Cache-Lookup: MISS from app-proxy11.domain.tld:8080
Via: 1.1 app-proxy11.domain.tld (squid/4.15)
Connection: keep-alive

Similarly, if I add a random (non-existent) query parameter, I also get the correct response:

user@host: curl -I otp2.sub01.domain.tld?asdfsdf
HTTP/1.1 503 Service Unavailable
Server: squid/4.15
Mime-Version: 1.0
Date: Mon, 22 May 2023 15:20:13 GMT
Content-Type: text/html;charset=utf-8
Content-Length: 3678
X-Squid-Error: ERR_CONNECT_FAIL 111
Vary: Accept-Language
Content-Language: en
X-Cache: MISS from app-proxy11.domain.tld
X-Cache-Lookup: MISS from app-proxy11.domain.tld:8080
Via: 1.1 app-proxy11.domain.tld (squid/4.15)
Connection: keep-alive

Is there maybe an environment variable that forces Node to skip the cache? Or another way to work around this issue?

Update:
Not helping this issue in particular, but I found that in my case I could configure Docker's proxy settings so that calls to (subdomains of) domain.tld skip the proxy. This is what I added in ~/.docker/config.json:

{
"auths": {},
"proxies": {
        "default": {
                "httpProxy": "http://proxy.domain.tld:8080",
                "httpsProxy": "http://proxy.domain.tld:8080",
                "noProxy": ".domain.tld,127.0.0.0/8"
        }
}
}

Of course, .domain.tld is just a dummy value, and you should ensure that hosts in that domain are reachable from within the container without a proxy by running curl -x '' -I -X GET http://site.domain.tld

Hopefully this helps someone else as well.

@CommanderStorm (Collaborator)

Adding a cachebusting PR would likely get accepted, as this is not a big feature and it is easy to maintain.
I think a user should either be:

  1. able to state the parameter name outright, or
  2. given a verrrrry specific default parameter (random_uptime_kuma_cachebuster)

I would go with 1., with 2. as the default 😉 (a small sketch follows below)
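
A small sketch of how options 1 and 2 above could combine; the names are assumptions, not actual Uptime Kuma settings:

```ts
// Hypothetical option handling: use the user-supplied parameter name if set,
// otherwise fall back to a deliberately unusual default that is unlikely to
// collide with query parameters the monitored site actually uses.
const DEFAULT_CACHEBUSTER_PARAM = "random_uptime_kuma_cachebuster";

function cachebusterParamName(userSetting?: string): string {
    const trimmed = userSetting?.trim();
    return trimmed ? trimmed : DEFAULT_CACHEBUSTER_PARAM;
}
```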

As a workaround until someone gets around to providing a PR:
use a proxy which adds the required headers.
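
For anyone who wants to try that workaround today, here is a minimal sketch of a tiny forwarding proxy that injects Cache-Control: no-cache (assumptions: Node with TypeScript, a single fixed plain-HTTP upstream, and the hypothetical hostname from the comment above); this is only one way to realize the idea, not a configuration taken from this issue:

```ts
// Minimal cache-busting forward proxy sketch (not part of Uptime Kuma).
// Point an HTTP(s) monitor at http://localhost:8081/ and every request is
// forwarded to the upstream with a Cache-Control: no-cache request header,
// which the Squid proxy in the curl example above honors.
import { createServer, request as httpRequest } from "node:http";

const UPSTREAM_HOST = "otp2.sub01.domain.tld"; // hypothetical upstream host
const UPSTREAM_PORT = 80;

createServer((clientReq, clientRes) => {
    const upstreamReq = httpRequest(
        {
            host: UPSTREAM_HOST,
            port: UPSTREAM_PORT,
            method: clientReq.method,
            path: clientReq.url,
            headers: {
                ...clientReq.headers,
                host: UPSTREAM_HOST,
                "cache-control": "no-cache", // ask intermediate caches to revalidate with the origin
            },
        },
        (upstreamRes) => {
            clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
            upstreamRes.pipe(clientRes);
        }
    );
    upstreamReq.on("error", () => {
        clientRes.writeHead(502);
        clientRes.end("upstream unreachable");
    });
    clientReq.pipe(upstreamReq);
}).listen(8081, () => console.log("cache-busting proxy listening on :8081"));
```

Whether the header is honored depends on the cache in front of the origin; a CDN may ignore request Cache-Control entirely, in which case the cachebusting query parameter requested in this issue remains the more reliable approach.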

dansullivan86 linked a pull request on Aug 4, 2023 that will close this issue.
CommanderStorm added the area:monitor and type:enhance-existing labels on Dec 8, 2023.
CommanderStorm changed the title from "Taking cloudfront into consideration. (or any caching proxies for that matter)" to "Add option to pass a cachebusting parameter" on Dec 14, 2023.
@jordantrizz

It looks like this will be in version 2.1.0 based on milestones, but no due date has been set. Are we looking at a year out for this?

@tallesairan

This comment was marked as spam.

@CommanderStorm (Collaborator)

> but no due date has been set. Are we looking at a year out for this?

First, as a volunteer-run project, we don't give out estimates. See #noestimates for further details.

Secondly, I don't know how you are getting this number. v2.0 is currently nearly in beta. For a more reasonable estimate without an actual estimate, see the release cadence of previous releases:

@jordantrizz

> First, as a volunteer-run project, we don't give out estimates. See #noestimates for further details.

I understand, doesn't hurt to ask.

> Secondly, I don't know how you are getting this number. v2.0 is currently nearly in beta. For a more reasonable estimate without an actual estimate, see the release cadence of previous releases:

That's what I did, and that's why I asked: to get a general temperature. Thanks.
