
Repeated addons.xml.gz download when unable to write to ~/.kodi/userdata/Database/AddonsXX.db #24829

Closed
C-Otto opened this issue Mar 9, 2024 · 15 comments · Fixed by #24886

Comments

C-Otto commented Mar 9, 2024

Bug report

Disk or file system corruption causes xbmc to repeatedly download the addons.xml.gz file.
As this seems to happen quite a lot with consumer hardware (worn-out flash drives/cards?), it effectively amounts to a distributed denial of service (DDoS) attack against mirror servers (like mine) providing this data.

Describe the bug

If xbmc is unable to write to the SQLite database at ~/.kodi/userdata/Database/AddonsXX.db as part of the database.SetRepoUpdateData invocation in RepositoryUpdater.cpp, the nextcheck column remains unchanged instead of scheduling another attempt in (at least) one hour.
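
To make the failure mode concrete, here is a minimal sketch of such an update loop (hypothetical names and structure, not the actual Kodi code): the new nextcheck value only takes effect if the database write succeeds, so after a write failure the stored value stays in the past and every scheduler tick triggers another download.

// Minimal sketch of the failure mode (hypothetical names, not Kodi's code).
#include <chrono>
#include <iostream>

using Clock = std::chrono::system_clock;

void DownloadAddonsXml() { std::cout << "GET addons.xml.gz\n"; }

// Stand-in for database.SetRepoUpdateData(): returns false when the SQLite
// database under ~/.kodi/userdata/Database/ cannot be written.
bool SetRepoUpdateData(Clock::time_point /*nextCheck*/) { return false; }

void OnSchedulerTick(Clock::time_point& persistedNextCheck)
{
  if (Clock::now() < persistedNextCheck)
    return; // not due yet

  DownloadAddonsXml(); // fetch addons.xml.gz from the mirror

  const auto next = Clock::now() + std::chrono::hours(24); // illustrative interval
  if (SetRepoUpdateData(next))
    persistedNextCheck = next; // normal case: next attempt much later
  // Failure case: nextcheck keeps its old (past) value, so the very next
  // tick considers the repository overdue and downloads again.
}

int main()
{
  Clock::time_point nextCheck{}; // stored nextcheck, already in the past
  for (int tick = 0; tick < 3; ++tick)
    OnSchedulerTick(nextCheck); // downloads on every tick
}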

I simulated re-downloads caused by a disk failure as follows:

  • configure manual updates in kodi
  • create LVM volume and mount it at ~/.kodi/userdata/Database, copy over old DB information, adapt permissions
  • unset "checksum" column in "repo" table
  • start kodi
  • umount -l ~/.kodi/userdata/Database
  • trigger update

As a result of this, mirrors see repeated downloads. I tweaked /usr/share/kodi/addons/repository.xbmc.org/addon.xml to always use my mirror ftp.halifax.rwth-aachen.de. On the server I see this (limited to my own IP address):

1709997208.710 1247300 0.088 GET /xbmc/addons/nexus/addons.xml.gz HTTP/2.0
1709997209.512 1247300 0.113 GET /xbmc/addons/nexus/addons.xml.gz HTTP/2.0
1709997210.230 1247300 0.148 GET /xbmc/addons/nexus/addons.xml.gz HTTP/2.0
1709997210.953 1247300 0.123 GET /xbmc/addons/nexus/addons.xml.gz HTTP/2.0
1709997211.766 1247300 0.136 GET /xbmc/addons/nexus/addons.xml.gz HTTP/2.0
1709997212.627 1247300 0.134 GET /xbmc/addons/nexus/addons.xml.gz HTTP/2.0
1709997213.398 1247300 0.148 GET /xbmc/addons/nexus/addons.xml.gz HTTP/2.0
1709997214.288 1247300 0.157 GET /xbmc/addons/nexus/addons.xml.gz HTTP/2.0
1709997215.011 1247300 0.136 GET /xbmc/addons/nexus/addons.xml.gz HTTP/2.0
1709997215.946 1247300 0.176 GET /xbmc/addons/nexus/addons.xml.gz HTTP/2.0
1709997216.930 1247300 0.239 GET /xbmc/addons/nexus/addons.xml.gz HTTP/2.0

The debug logs show (limited to "grep decompressing", as this happens once per download):

2024-03-09 16:14:32.113 T:2002858   debug <general>: CRepository 'https://ftp.halifax.rwth-aachen.de/xbmc/addons/nexus/addons.xml.gz' is gzip. decompressing
2024-03-09 16:14:32.904 T:2002858   debug <general>: CRepository 'https://ftp.halifax.rwth-aachen.de/xbmc/addons/nexus/addons.xml.gz' is gzip. decompressing
2024-03-09 16:14:33.615 T:2002858   debug <general>: CRepository 'https://ftp.halifax.rwth-aachen.de/xbmc/addons/nexus/addons.xml.gz' is gzip. decompressing
2024-03-09 16:14:34.409 T:2002858   debug <general>: CRepository 'https://ftp.halifax.rwth-aachen.de/xbmc/addons/nexus/addons.xml.gz' is gzip. decompressing
2024-03-09 16:14:35.201 T:2002858   debug <general>: CRepository 'https://ftp.halifax.rwth-aachen.de/xbmc/addons/nexus/addons.xml.gz' is gzip. decompressing
2024-03-09 16:14:36.047 T:2002858   debug <general>: CRepository 'https://ftp.halifax.rwth-aachen.de/xbmc/addons/nexus/addons.xml.gz' is gzip. decompressing
2024-03-09 16:14:36.852 T:2002858   debug <general>: CRepository 'https://ftp.halifax.rwth-aachen.de/xbmc/addons/nexus/addons.xml.gz' is gzip. decompressing
2024-03-09 16:14:37.711 T:2002858   debug <general>: CRepository 'https://ftp.halifax.rwth-aachen.de/xbmc/addons/nexus/addons.xml.gz' is gzip. decompressing
2024-03-09 16:14:38.490 T:2002858   debug <general>: CRepository 'https://ftp.halifax.rwth-aachen.de/xbmc/addons/nexus/addons.xml.gz' is gzip. decompressing
2024-03-09 16:14:39.449 T:2002858   debug <general>: CRepository 'https://ftp.halifax.rwth-aachen.de/xbmc/addons/nexus/addons.xml.gz' is gzip. decompressing
2024-03-09 16:14:40.400 T:2002858   debug <general>: CRepository 'https://ftp.halifax.rwth-aachen.de/xbmc/addons/nexus/addons.xml.gz' is gzip. decompressing

Note the short delay of around a second between each download.

Expected Behavior

Kodi waits before it issues another download request.

Actual Behavior

The addons.xml.gz file is downloaded roughly once a second, possibly only limited by network/mirror/device speed.

Possible Fix

In-memory "persistence" might be necessary for this, but possibly it's better to crash/exit in case of DB (write) issues.
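
For illustration, a rough sketch of the in-memory idea (hypothetical names, not a patch against the real RepositoryUpdater): keep the earliest allowed next attempt in a member variable, so that even a failed database write delays the next request.

#include <chrono>

using Clock = std::chrono::system_clock;

// Hypothetical scheduler that remembers the next allowed attempt in memory,
// independent of whether persisting it to the database succeeded.
class UpdateScheduler
{
public:
  bool IsUpdateDue() const { return Clock::now() >= m_notBefore; }

  // Call after every update attempt, successful or not.
  void RecordAttempt() { m_notBefore = Clock::now() + std::chrono::hours(1); }

private:
  Clock::time_point m_notBefore{};
};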

To Reproduce

See above.

Debuglog

kodi.log

Your Environment

Tested with Debian, version 2:20.1+dfsg-1. On the mirror I see this with various versions of kodi/xbmc/osmc, though.

xbmc-gh-bot added the Triage: Needed (managed by bot!) label Mar 9, 2024
C-Otto changed the title from "Repeated addons.xml.gz download for failure when writing to ~/.kodi/userdata/Database/AddonsXX.db" to "Repeated addons.xml.gz download when unable to write to ~/.kodi/userdata/Database/AddonsXX.db" Mar 9, 2024
yol (Member) commented Mar 11, 2024

Thanks Carsten! That is a very good find. We did see some increased requests for addons.xml from some IPs in the past, but just blocked them when it occurred and never got to the bottom of it. For now, we will implement a rate limiting rule on our mirror redirector for requests to addons.xml to protect downstream mirrors from excessive requests.

We will also take a look at this issue and try to fix it for the future, but that is most likely to happen during/after our DevCon next week :)

yol (Member) commented Mar 11, 2024

@wsnipex Do you want to take care of the mirror nginx adjustments or should I give it a try?

wsnipex (Member) commented Mar 11, 2024

If you don't beat me to it, I'll try to look into it tomorrow.

wsnipex (Member) commented Mar 12, 2024

I've added this to our nginx; let's see if it helps. I already see tons of rate limits hitting the addonsxml zone:

limit_req_zone $binary_remote_addr zone=addonsxml:10m rate=4r/m;

location ~ ^/addons/.+/addons.xml.gz$ {
       limit_req zone=addonsxml burst=6 delay=180;
}

We can still tweak this if needed

wsnipex (Member) commented Mar 15, 2024

@C-Otto did the rate limit help with the abuse on your mirror?

C-Otto (Author) commented Mar 15, 2024

Not really. I don't see any "dent" in my graphs. My script (fail2ban) still finds plenty of abuse with downloads happening every 10-120 seconds or so. I wasn't able to find a host that tried to download faster than that, say once a second.

C-Otto (Author) commented Mar 15, 2024

Here's a recent example (that's just one host!):

1710491165.306 1421290 0.253 GET /xbmc/addons/leia/addons.xml.gz HTTP/2.0
1710491185.523 1421290 0.264 GET /xbmc/addons/leia/addons.xml.gz HTTP/2.0
1710491266.440 1421290 0.174 GET /xbmc/addons/leia/addons.xml.gz HTTP/2.0
1710491307.078 1421290 0.212 GET /xbmc/addons/leia/addons.xml.gz HTTP/2.0
1710491529.055 1421290 0.178 GET /xbmc/addons/leia/addons.xml.gz HTTP/2.0
1710491569.355 1421290 0.157 GET /xbmc/addons/leia/addons.xml.gz HTTP/2.0
1710491852.558 1421290 0.199 GET /xbmc/addons/leia/addons.xml.gz HTTP/2.0
1710491961.264 1421290 0.161 GET /xbmc/addons/leia/addons.xml.gz HTTP/2.0
1710491966.541 1421290 0.000 GET /xbmc/addons/leia/addons.xml.gz HTTP/2.0
1710492949.513 1421290 0.302 GET /xbmc/addons/leia/addons.xml.gz HTTP/2.0
1710492969.533 1421290 0.293 GET /xbmc/addons/leia/addons.xml.gz HTTP/2.0
1710493090.848 1421290 0.234 GET /xbmc/addons/leia/addons.xml.gz HTTP/2.0
1710493110.992 1421290 0.284 GET /xbmc/addons/leia/addons.xml.gz HTTP/2.0
1710493171.661 1421290 0.194 GET /xbmc/addons/leia/addons.xml.gz HTTP/2.0
1710493252.524 1421290 0.174 GET /xbmc/addons/leia/addons.xml.gz HTTP/2.0
1710493272.659 1421290 0.250 GET /xbmc/addons/leia/addons.xml.gz HTTP/2.0
1710493292.619 1421290 0.000 GET /xbmc/addons/leia/addons.xml.gz HTTP/2.0
1710493312.847 1421290 0.000 GET /xbmc/addons/leia/addons.xml.gz HTTP/2.0
1710493474.772 1421290 0.191 GET /xbmc/addons/leia/addons.xml.gz HTTP/2.0

wsnipex (Member) commented Mar 15, 2024

That would mean that we redirect once, but then the clients retry on the final URL (mirror servers, like yours).
I'm afraid in that case the rate limiting on our mirror redirector won't help and you'd have to put it on your end :(

C-Otto (Author) commented Mar 15, 2024

That might be the cause, yes. I'd love to see a solution in code, though (which obviously only affects users that run the updated code). If you find a way to stop unnecessary downloads, for example in the "can't write to database" situation explained above, that would help a lot in the long run.

yol added the Component: Add-ons and Triage: Confirmed labels and removed the Triage: Needed (managed by bot!) label Mar 21, 2024
yol self-assigned this Mar 21, 2024
yol added this to the Omega 21.0 milestone Mar 21, 2024
yol (Member) commented Mar 21, 2024

@wsnipex @C-Otto From what I can see in the code, redownloads due to failure to write to the addon database should always go to our mirror redirector. We never save the redirected URL permanently, so I'm not sure why the rule is not helping 🤔

C-Otto (Author) commented Mar 21, 2024

I don't really know, sorry. Just to state it once, a single IP address (especially IPv4) might be used by several real devices - some providers even map multiple customers to the same public IP. I don't think that's contributing a significant amount to this issue, though.

C-Otto (Author) commented Mar 21, 2024

If you have the option to debug live or with recent logs, I'd be happy to provide IP addresses and such.

yol (Member) commented Mar 23, 2024

@C-Otto Can you try #24886?

yol added a commit to yol/xbmc that referenced this issue Mar 23, 2024
Repository update can fail in a way that would lead to an
immediate reschedule of the update. A common reason would be
a failure to update the `nextcheck` column of the `repo` table.
This leads to mirror and repository servers getting constantly
hammered.

Since there is no reason to have a smaller update interval than
an hour anyway, we just enforce this as the minimum in all
regular updates. This should catch all possible errors leading
to a continuous series of requests.

Fixes xbmc#24829.
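
In code, the approach described in the commit message amounts to clamping the computed delay to at least one hour before the next regular update is scheduled; a minimal sketch (hypothetical names, not the actual patch):

#include <algorithm>
#include <chrono>

using namespace std::chrono_literals;

// Whatever delay the updater computes (including a bogus "retry now" after a
// failed nextcheck write), never schedule the next regular update sooner
// than one hour from now.
std::chrono::milliseconds NextUpdateDelay(std::chrono::milliseconds computed)
{
  return std::max<std::chrono::milliseconds>(computed, 1h);
}

// e.g. NextUpdateDelay(1s) == 1h, while NextUpdateDelay(24h) == 24h
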
C-Otto (Author) commented Mar 23, 2024

Not really, no. I don't use xbmc myself and I don't think I'm able to compile that myself.

neo1973 linked a pull request Mar 23, 2024 that will close this issue
yol (Member) commented Mar 23, 2024

@C-Otto OK, then I guess our own testing will have to suffice :)

The log that you shared is actually consistent with our current rate-limiting settings. We do see a very large amount of rejected requests though, so you should have seen a significant drop in abusive requests. We could try to limit it further I guess? @wsnipex?

howie-f pushed a commit to howie-f/xbmc that referenced this issue Mar 27, 2024
popcornmix pushed a commit to popcornmix/xbmc that referenced this issue Apr 8, 2024
thexai added the Resolution: Fixed label Apr 21, 2024
tiben20 pushed a commit to tiben20/xbmc that referenced this issue Jun 7, 2024