
HTTP checks on non-standard ports reporting as down #1821

Closed · 2 tasks done
webworxshop opened this issue Jun 23, 2022 · 21 comments · Fixed by #1823

Labels: bug (Something isn't working), priority:high (High Priority)

Comments

@webworxshop

⚠️ Please verify that this bug has NOT been raised before.

  • I checked and didn't find a similar issue

🛡️ Security Policy

Description

As of version 1.17.0, HTTP checks on non-standard ports are reporting as down. The resulting notification reports that the check is unable to reach port 80, which is wrong. See the screenshot below:

[screenshot: down notification reporting port 80 for the Jellyfin monitor]

In this case the Jellyfin service is configured on port 8096.

👟 Reproduction steps

Create a check for an HTTP service on a non-standard port.

👀 Expected behavior

The service should be reported as up when it is up and down when it is down!

😓 Actual Behavior

The service is always reported as down.

🐻 Uptime-Kuma Version

1.17.0

💻 Operating System and Arch

Ubuntu 20.04

🌐 Browser

Doesn't matter

🐋 Docker Version

Docker 20.10.14

🟩 NodeJS Version

No response

📝 Relevant log output

No response

@webworxshop added the bug (Something isn't working) label Jun 23, 2022
@ThomasChr
Contributor

Will take a look

@louislam
Owner

louislam commented Jun 23, 2022

Are there any special settings, like a proxy or basic auth, etc.?

I saw someone else has also reported a similar issue:
#1815 (comment)

But I cannot reproduce:
[screenshot]

@ThomasChr
Contributor

Can't reproduce on d5da5af / 1.17.0.
Tried with a leading slash and without.

Looking forward to debugging more if I get more info; like @louislam said, there must be some special configuration.
As far as I can see, the port is not treated specially. We're just filling an options struct for the axiosClient with the URL and letting it go its merry way.

monitor.js:
[screenshots of the relevant monitor.js code]

There is some trickery involved with a proxy or NTLM, though:
[screenshots of the proxy/NTLM handling]
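
For readers without the screenshots, here is a rough sketch of that non-proxy path (not the exact uptime-kuma source; field names like timeoutMs and the example URL are illustrative): fill an options object and hand it to axios.

// Rough sketch of the non-proxy HTTP check path; field names are illustrative.
const axios = require("axios");

async function checkHttp(monitor) {
    const options = {
        url: monitor.url,          // full URL, including any non-standard port
        method: "GET",
        timeout: monitor.timeoutMs,
        maxRedirects: 10,
    };
    const res = await axios.request(options);
    return res.status;
}

checkHttp({ url: "http://example.com:8096/", timeoutMs: 5000 })
    .then((status) => console.log("up:", status))
    .catch((err) => console.log("down:", err.message));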

@osc86

osc86 commented Jun 23, 2022

3 HTTP(S) services are reported down since my container was updated to 1.17 today. There's definitely an issue: 1 uses a custom port, the other 2 use the standard port 443. All hosts are up and reachable from the container; I just checked with curl.
I'm not using a proxy, and I'm trying to find out what's different from the ones that still work.

@louislam
Owner

louislam commented Jun 23, 2022

I suspect it is related to the axios DNS cache (#1598) or NTLM (#1639).

monitor.js diff:
1.16.1...1.17.0

@louislam
Owner

louislam commented Jun 23, 2022

Oh no, I found an issue: if I use the Tailscale domain name instead of the IP, it goes down, while in 1.16.1 it is up.

@osc86 @webworxshop Are you using a similar domain name too?

[screenshot]

@louislam added the priority:high (High Priority) label Jun 23, 2022
@mads03dk

Confirming the issue:
[screenshot]

Got this overnight, right after the auto-update.
If it's any help: this monitor is looking through a subdomain.

@louislam mentioned this issue Jun 23, 2022
@louislam
Owner

It should be related to #1598. I am reverting this pull request; 1.17.1 should be released soon.

@louislam linked a pull request Jun 23, 2022 that will close this issue
@maeries

maeries commented Jun 23, 2022

I have a similar-looking issue, but I think the cause is different. I have a monitor for http://wg-access-server:8000, but I got DOWN - connect ECONNREFUSED 172.30.0.4:80, so a different port. If I set the port to something random like 1111, I get DOWN - connect ECONNREFUSED 172.30.0.4:1111.

I believe the issue is that there is a redirect:

$ curl -I wg-access-server:8000
HTTP/1.1 307 Temporary Redirect
Content-Type: text/html; charset=utf-8
Location: /signin
Date: Thu, 23 Jun 2022 08:04:24 GMT

and when Uptime Kuma follows that, it kinda loses the port and checks the standard port instead. If I set the monitor to http://wg-access-server:8000/signin directly, it works.
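
For reference, resolving the relative Location header from that 307 against the original URL keeps the non-standard port, as this quick Node sketch using the values above shows; the follow-up request going to port 80 suggests the port was dropped somewhere in the redirect handling:

// Resolve the relative Location header from the 307 against the original URL.
const base = "http://wg-access-server:8000";
const location = "/signin"; // Location header from the 307 response above

console.log(new URL(location, base).href);
// prints: http://wg-access-server:8000/signin  (port preserved)
// observed by the reporter: the follow-up request went to port 80 instead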

@louislam
Owner

1.17.1 has been released; it should fix the issue.

@maeries

maeries commented Jun 23, 2022

It does

@louislam
Owner

Thanks for your testing.

I will transfer the issue to axios-cached-dns-resolve.

cc: @paul-michael, since you reported this in another thread, just to let you know that the issue has been fixed.

@JacksonChen666

hmmm... not sure how to reproduce

[screenshot]

@louislam
Owner

@JacksonChen666 It has been fixed, please read my previous comments in this thread.

@JacksonChen666

@louislam I'm on 1.17.0, which is supposedly the version with the bug.

[screenshot]

Either way, I do not have issues and will remain on version 1.17.0, because there's not much to get from 1.17.1 for me.

@louislam
Owner

louislam commented Jun 23, 2022

@JacksonChen666
The issue title is not correct, actually.

It only hit users who are using networks such as WireGuard/Tailscale with custom local domain names.

It is hard to spot this kind of problem before a final release; that's why I am afraid to merge large new features now.

@JacksonChen666

@louislam ah, makes sense.
consider changing the title?

@webworxshop
Author

Wow, looks like this got some traction while I was AFK 😃

@louislam Yes, I am using a domain name, but from my local DNS server, not Tailscale.

I can confirm the issue is fixed with 1.17.1.

Thank you for the awesomely quick resolution and the great software!

@davidus05

Regarding the DNS cache: why are we using this at all? We then have at least three (or more) caches: the DNS provider cache, the local server DNS cache, the Uptime Kuma cache, …

I think this is not necessary, or am I missing something?

@sevmonster

> Regarding the DNS cache: why are we using this at all? We then have at least three (or more) caches: the DNS provider cache, the local server DNS cache, the Uptime Kuma cache, …

Depending on the configuration, there may not be a local cache. GNU/Linux does not cache DNS by default; projects like systemd-resolved (often not configured to run by default, even on setups that use systemd), dnsmasq, etc. are needed. If, in your setup, you are constantly querying DNS, you will see worse, sporadic performance out of Uptime Kuma. See the screenshot on #1598.
If you already have a local caching resolver, great. But this would explain the poor ping delay I was getting with Uptime Kuma: 200~300 ms or more for local services, where it should probably be less. There is also the case that your services will appear to go down if your upstream resolvers disappear.
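
As a rough illustration of that per-check cost, here is a minimal Node sketch that times a single uncached dns.lookup() (the hostname is a placeholder, not one from this thread):

// Time one uncached dns.lookup(); without a local caching resolver, every
// monitor check pays roughly this latency on top of the HTTP request itself.
const dns = require("dns");

const start = process.hrtime.bigint();
dns.lookup("example.com", (err, address) => {
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(err ? err.message : `${address} resolved in ${ms.toFixed(1)} ms`);
});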

@jordantrizz

Can this be a configurable option per monitor? It would benefit some monitors and not others. New features that would be considered beta should be off by default; turn them on if you want to try them out.

> Depending on the configuration, there may not be a local cache.

> If, in your setup, you are constantly querying DNS, you will see worse, sporadic performance out of Uptime Kuma. See the screenshot on #1598.

> There is also the case that your services will appear to go down if your upstream resolvers disappear.

To bypass DNS resolution altogether, set the URL to https:// and the header "Host: domain.com" (see the sketch below). If you want to know whether the DNS record changes, set up a DNS monitor.

Having the option of enabling DNS resolution or specifying the IP/host during monitor creation would save time versus setting headers each time. If you choose DNS resolution, you could pick the local resolver, a remote resolver, or the built-in Kuma DNS cache.
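
A hedged sketch of that IP-plus-Host-header bypass (the IP and domain are placeholders, not values from this thread):

// Hypothetical example of the bypass described above: request the IP directly
// and set the Host header, so no DNS lookup is needed for the check.
// 203.0.113.10 and domain.com are placeholders.
const axios = require("axios");

axios.get("http://203.0.113.10/", {
    headers: { Host: "domain.com" }, // virtual-host routing without DNS
}).then((res) => console.log(res.status))
  .catch((err) => console.log(err.message));

// Note: over HTTPS the TLS handshake also needs SNI to match the certificate
// (e.g. an https.Agent created with { servername: "domain.com" }), so the
// plain Host header alone is usually not enough.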
