DNS spam #728
Comments
Related to #387?
Also try this option:
Could very well be this. Can the internal IP be cached? It would almost never change, so it would make sense to reuse the value instead of reaching out to DNS every time. Why are there requests for A and AAAA records when I have not set up IPv6? Is that a Node quirk?
It might be an alpine-node quirk. Try the option I sent in the other comment.
#722 (comment) Tried this without the extra hosts in Docker Compose and ran a recently added scan. It did not help; a lot of DNS traffic was still generated. This problem probably only gets worse with a large library and/or a full scan. This definitely needs caching. If Jellyseerr is deployed on a host with many containers, it could indirectly make other services unavailable due to DNS rate limiting. So far only the extra_hosts attribute helps.
@Fallenbagel Just had the same issue on a fresh install. Node.js does not cache DNS requests. With large libraries, many calls are made to the Jellyfin server, and since the DNS lookup is not cached, Node queries the DNS server to resolve the hostname on every call. Some DNS servers (like Pi-hole) will throttle or block queries once a certain threshold of requests is reached within a limited timeframe. This is what generates these errors:
Jellystat had the same issue, and it was fixed by implementing cacheable-lookup in axios. Maybe the same could be done for Jellyseerr?
Oh yeah. Actually, I looked into axios-cached-dns-resolve and it might be better for Jellyseerr.
…nd it to external api class This fix should in theory use nodeCache with the help of the cacheManager class to cache Jellyfin/Emby API requests. Jellyfin's standard Time-To-Live was set to 6 hours to ensure that the cached data stays relatively up to date without making excessive API requests. In addition, this fix sets the checkPeriod for the Jellyfin cache to 30 minutes, which seems suitable for checking and cleaning up expired cache entries without causing performance overhead. fix #728, fix #319
…nd it to external api class This fix should in theory use nodeCache with the help of the cacheManager class to cache Jellyfin/Emby API requests. Jellyfin's standard Time-To-Live was set to 6 hours to ensure that the cached data stays relatively up to date without making excessive API requests. In addition, this fix sets the checkPeriod for the Jellyfin cache to 30 minutes, which seems suitable for checking and cleaning up expired cache entries without causing performance overhead. fix #728, fix #387
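The caching policy the commit describes (a standard 6-hour TTL plus a 30-minute cleanup period) can be sketched in plain JavaScript. This is illustration only: the actual fix uses nodeCache through the cacheManager class, while `TtlCache` and the injectable `now` clock below are invented here to make the behaviour easy to verify.

```javascript
// Sketch of a TTL cache with a periodic sweep, mirroring a 6 h standard
// TTL and a 30 min check period. The clock is injectable for testing.
class TtlCache {
  constructor({ stdTtlMs = 6 * 60 * 60 * 1000, now = Date.now } = {}) {
    this.stdTtlMs = stdTtlMs;
    this.now = now;
    this.store = new Map(); // key -> { value, expires }
  }

  set(key, value) {
    this.store.set(key, { value, expires: this.now() + this.stdTtlMs });
  }

  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (entry.expires <= this.now()) {
      // Lazily evict an expired entry on read.
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }

  // Run on an interval (the "checkPeriod", e.g. every 30 minutes) so
  // expired entries are dropped even if they are never read again.
  sweep() {
    for (const [key, entry] of this.store) {
      if (entry.expires <= this.now()) this.store.delete(key);
    }
  }
}
```

The trade-off matches the commit's reasoning: a long TTL keeps repeat Jellyfin/Emby API calls off the network, and an infrequent sweep bounds memory without measurable overhead.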
* refactor(jellyfinapi): use the external api class for jellyfin api requests. Refactors jellyfin api requests to be handled by the external api class, to be consistent with how other external api requests are made. related #728, related #387
* style: prettier formatted
* refactor(jellyfinapi): rename device in auth header as jellyseerr
* refactor(error): rename api error code generic to unknown
* refactor(errorcodes): consistent casing of error code enums
🎉 This issue has been resolved in version 1.9.1 🎉 The release is available on:
Your semantic-release bot 📦🚀
Description
There seems to be an issue that makes Jellyseerr spam my DNS server with AAAA requests for my internal Jellyfin URL. I am not sure why there is even a request for an AAAA record, since I don't use IPv6.
I tested this on 1.7.0 because that is the version I had installed. I then updated to 1.8.1 in case it had been fixed in the meantime, but it wasn't.
I am not sure what triggers this; maybe it's a sync job. It has been going on for a while, but I only just noticed because my primary DNS server would rate-limit this machine's IP just long enough for my secondary DNS server to take over and serve the requests. Then the secondary would rate-limit the IP just in time for the primary to take over again.
Version
1.7.0, 1.8.1
Steps to Reproduce
I used Pi-hole to monitor traffic volume.
On the host where Jellyseerr is installed, I used `sudo tcpdump -i ens18 'dst port 53'` to monitor outgoing DNS requests.
I am running Jellyseerr in a Docker container, so I added this to my compose file:
`extra_hosts: - "internal.mrga.dev:IP"`
This adds the local IP to the container's hosts file, but I don't like it as a permanent solution.
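Expanded into proper Compose YAML, the workaround above looks roughly like this (the `jellyseerr` service name is an assumption, and `IP` stays a placeholder for the Jellyfin host's internal address):

```yaml
services:
  jellyseerr:
    # ...rest of the service definition...
    extra_hosts:
      # Pin the internal hostname so the container never asks DNS for it
      - "internal.mrga.dev:IP"
```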
Screenshots
Logs
Will add if needed.
Platform
desktop
Device
N/A
Operating System
N/A
Browser
N/A
Additional Context
No response
Code of Conduct