
Too many requests - blocked by Tidal's Cloudfront #20

Closed
Kafkamorph opened this issue Apr 28, 2023 · 15 comments

Comments

@Kafkamorph

Any chance of implementing some kind of throttling?

Getting a lot of these:
"429 Client Error: Too Many Requests for url: https://api.tidal.com/v1/search?sessionId=XXX",
'X-Cache': 'Error from cloudfront'
('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) occurred, retrying 2 times

and then the script just hangs and the playlist created on Tidal has 0 tracks.

@timrae
Collaborator

timrae commented Apr 28, 2023

Ahh yeah, that would be a useful addition. I'm not sure when I'd be able to get around to it, but the ratelimit package would probably be a good path forward if someone would like to make a PR.
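As a rough sketch of the kind of client-side throttling the ratelimit package's `@limits`/`@sleep_and_retry` decorators provide, here is a hand-rolled sliding-window limiter. The function name `tidal_search` and the limits are illustrative, not taken from this project's code:

```python
import threading
import time

class Throttle:
    """Allow at most `calls` invocations per `period` seconds.
    A minimal stand-in for the ratelimit package's decorators."""

    def __init__(self, calls, period):
        self.calls = calls
        self.period = period
        self.lock = threading.Lock()
        self.stamps = []

    def __call__(self, func):
        def wrapper(*args, **kwargs):
            with self.lock:
                now = time.monotonic()
                # Keep only timestamps still inside the sliding window.
                self.stamps = [t for t in self.stamps if now - t < self.period]
                if len(self.stamps) >= self.calls:
                    # Sleep until the oldest call falls out of the window.
                    time.sleep(max(0.0, self.period - (now - self.stamps[0])))
                self.stamps.append(time.monotonic())
            return func(*args, **kwargs)
        return wrapper

@Throttle(calls=5, period=0.5)
def tidal_search(query):
    # Placeholder for the real Tidal API request.
    return f"results for {query}"
```

The lock makes the window bookkeeping safe across threads; the sixth call in any half-second window blocks instead of hitting Cloudfront's 429.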

@Kafkamorph
Author

That would be an elegant solution indeed.

For now, my not-elegant-at-all alternative was to reduce subprocesses from 50 to 5:
`tidal_tracks = call_async_with_progress(tidal_search, spotify_tracks, task_description, config.get('subprocesses', 5), tidal_session=tidal_session)`

I know it's ugly and slows down everything, but it seems to work, as Tidal's Cloudfront is no longer complaining.

@tehkillerbee

tehkillerbee commented May 16, 2023

@Kafkamorph This is a problem that I have experienced too when using the Tidalapi to update a local Mopidy playlist. Perhaps these improvements would make sense to add directly to the python-tidal api?

While it might be a hack, it will at least give a more predictable behaviour.

@RobinHirst11
Contributor

> That would be an elegant solution indeed.
>
> For now, my not-elegant-at-all alternative was to reduce subprocesses from 50 to 5: `tidal_tracks = call_async_with_progress(tidal_search, spotify_tracks, task_description, config.get('subprocesses', 5), tidal_session=tidal_session)`
>
> I know it's ugly and slows down everything, but it seems to work, as Tidal's Cloudfront is no longer complaining.

To be fair, this is the best solution. I'd probably close this issue and suggest slowly tuning the number until you hit a sweet spot. I'd recommend halving it each time: if 50 doesn't work, go to 25; if 25 doesn't, try 12. Keep going until it DOES work, then work your way back up. For example, if 12 works, try (12+25)/2. You get the gist.

@xtarlit

xtarlit commented May 7, 2024

> For now, my not-elegant-at-all alternative was to reduce subprocesses from 50 to 5:

In my case, I had to reduce it to just 1 or else it would get blocked after about 600 songs.

@BlueDrink9

https://pypi.org/project/backoff/

Great library for this purpose. Don't have time to figure out exactly where to place the decorator, but it'd replace the existing backoff code.

Also check out https://pypi.org/project/ratelimit/ if you want to ratelimit at the client rather than waiting for the tidal API to return an error
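To illustrate what `backoff.on_exception(backoff.expo, ...)` does as a decorator, here is a stdlib-only sketch of exponential backoff with jitter. `TooManyRequests` is a hypothetical stand-in for whatever exception the Tidal client raises on a 429:

```python
import random
import time

class TooManyRequests(Exception):
    """Hypothetical stand-in for the HTTP 429 error the Tidal client raises."""

def retry_with_backoff(func, max_tries=5, base=1.0, cap=60.0):
    """Retry `func` on TooManyRequests with exponential backoff,
    roughly what the backoff package provides declaratively."""
    for attempt in range(max_tries):
        try:
            return func()
        except TooManyRequests:
            if attempt == max_tries - 1:
                raise  # out of retries, surface the error
            # Delay grows 1s, 2s, 4s, ... (capped), scaled by random
            # jitter so parallel workers don't retry in lockstep.
            time.sleep(min(cap, base * 2 ** attempt) * random.random())
```

The jitter matters here: with many workers hammering the same endpoint, un-jittered retries all come back at the same instant and trip Cloudfront again.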

@RobinHirst11
Contributor

> https://pypi.org/project/backoff/
>
> Great library for this purpose. Don't have time to figure out exactly where to place the decorator, but it'd replace the existing backoff code.
>
> Also check out https://pypi.org/project/ratelimit/ if you want to ratelimit at the client rather than waiting for the tidal API to return an error

Thanks for this, I'm planning on incorporating it into my new version.

@timrae
Collaborator

timrae commented May 22, 2024

Sounds like a good idea, though it might be a bit tricky to get rate limiting working properly due to the use of the multiprocessing module. It would be a lot easier to do this if we could use async/await syntax with the Tidal API... I would strongly recommend submitting a prototype for review to get feedback from us before going too far down a given route.

@BlueDrink9

FYI, IIRC both ratelimit and backoff are thread-safe.

@timrae
Collaborator

timrae commented May 25, 2024

We use multiprocessing here though, not multi-threading... So I suspect they will not work. I think we can probably just implement some basic rate-limiting with multiprocessing.Semaphore though, I'll have a go tomorrow as this error has been becoming more and more of a problem for me recently
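A minimal sketch of the `multiprocessing.Semaphore` idea mentioned above, limiting how many worker processes can be inside the API call at once. All names (`search_one`, `search_all`) are illustrative, and the "fork" start method is assumed (POSIX); this is not the project's actual implementation:

```python
import multiprocessing
import time

_sem = None

def _init_worker(sem):
    # Pool initializer: give every worker process a handle
    # to the shared semaphore.
    global _sem
    _sem = sem

def search_one(query):
    # At most `max_concurrent` workers, across all processes,
    # may be inside this block at the same time.
    with _sem:
        time.sleep(0.01)  # stand-in for the real HTTP request
        return f"results for {query}"

def search_all(queries, processes=4, max_concurrent=2):
    # "fork" is assumed here; Windows would need "spawn" plus a
    # __main__ guard around the calling code.
    ctx = multiprocessing.get_context("fork")
    sem = ctx.Semaphore(max_concurrent)
    with ctx.Pool(processes, initializer=_init_worker,
                  initargs=(sem,)) as pool:
        return pool.map(search_one, queries)
```

This caps concurrency rather than request rate; a timed release (or pairing it with per-process backoff) would be needed for true requests-per-second limiting.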

@timrae
Collaborator

timrae commented May 26, 2024

OK this should be working now in #43, I've added a new rate limit configuration parameter. Please give it a try and let me know how it goes for you (and if the default parameters work). I'm hopefully going to add one more performance optimisation to that PR.

@timrae timrae closed this as completed May 26, 2024
@BlueDrink9

Out of curiosity, why use multiprocessing rather than multithreading? This is IO-bound rather than cpu-bound, isn't it?

@timrae
Collaborator

timrae commented May 26, 2024 via email

@BlueDrink9

BlueDrink9 commented May 26, 2024

In that case, I would suggest switching to multithreading (eg with futures, or you could abstract it with asyncio) next time you go in for a big refactor.

All my reading when I researched a similar API-driven task suggests multithreading is the better choice when IO-bound. In general it simplifies execution and works with more libraries. I can't remember the exact details of why, but it's probably worth reading up on. It's probably also easier/more performant to sync things with shared memory rather than relying on locks?

I'd give you some code snippets but they're all work code, I haven't used it recently for OSS
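For comparison with the multiprocessing version, here is what the thread-based approach suggested above might look like with `concurrent.futures`. Again, the function names and worker count are illustrative only:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def tidal_search(query):
    time.sleep(0.01)  # stand-in for the blocking HTTP request
    return f"results for {query}"

def search_all(queries, max_workers=5):
    # Threads share one interpreter, so rate-limit state (locks,
    # counters, timestamps) is ordinary shared memory: no pickling,
    # no IPC. The GIL is released during network I/O, so threads
    # parallelise fine for an IO-bound workload like API search.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(tidal_search, queries))
```

This is why thread-safe libraries like ratelimit and backoff drop straight in: one decorator instance is shared by all workers, which is exactly what you can't get for free across processes.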

@timrae
Collaborator

timrae commented May 26, 2024

Thanks yeah I'm just finishing up a big refactor right now. I've got an implementation with asyncio working, and it's looking pretty good so far AFAICT. You can test it / code review it now at #43 :)


6 participants