
Model use cases #2

mizabrik opened this issue Mar 30, 2019 · 1 comment

To understand what the library should look like, it helps to understand how it will be used. Let's discuss the use cases in this issue; later, they should be coded as examples.

HTTP server

A user develops a web application and would like to limit the rate of requests per IP, per user, or per location.

We should probably add some support for popular HTTP servers; a quick search suggests aiohttp, Sanic, and Tornado.
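A minimal sketch of what aiohttp support could look like, assuming nothing about this library's eventual API: the hand-rolled `SimpleLimiter` below is a placeholder for whatever primitive we end up exposing, and the 10-requests-per-second limit is arbitrary.

```python
import time

from aiohttp import web


class SimpleLimiter:
    """Placeholder limiter: at most `rate` hits per `per` seconds, per key."""

    def __init__(self, rate, per):
        self.rate, self.per = rate, per
        self._hits = {}  # key -> list of recent hit timestamps

    def allow(self, key):
        now = time.monotonic()
        hits = [t for t in self._hits.get(key, []) if now - t < self.per]
        self._hits[key] = hits
        if len(hits) >= self.rate:
            return False
        hits.append(now)
        return True


limiter = SimpleLimiter(rate=10, per=1.0)


@web.middleware
async def throttle(request, handler):
    # Key requests by client IP; a real setup might key by user or location.
    if not limiter.allow(request.remote or "unknown"):
        raise web.HTTPTooManyRequests()
    return await handler(request)


async def hello(request):
    return web.Response(text="hello")


app = web.Application(middlewares=[throttle])
app.router.add_get("/", hello)

if __name__ == "__main__":
    web.run_app(app)
```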

Telegram bot

A user writes a Telegram bot that involves some lengthy computation and would like to limit the rate of certain requests per user.
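A framework-agnostic sketch of the idea, assuming a hypothetical `handle_message` entry point and a `reply` callable provided by whatever bot framework is used; limiting each user to one concurrent job is just one possible policy:

```python
import asyncio
from collections import defaultdict

# One expensive job per user at a time (hypothetical policy).
_user_locks = defaultdict(lambda: asyncio.Semaphore(1))


async def heavy_computation(text):
    await asyncio.sleep(5)  # stand-in for the real work
    return text.upper()


async def handle_message(user_id, text, reply):
    sem = _user_locks[user_id]
    if sem.locked():
        await reply("One request at a time, please.")
        return
    async with sem:
        await reply(await heavy_computation(text))
```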

@code-of-kpp
Copy link

code-of-kpp commented Apr 1, 2019

HTTP server

This is generally a non-goal. When someone faces a huge volume of requests, it is not a problem to set up nginx. Of course, we could add some helpers for pure-Python HTTP servers or web frameworks, but this is probably a very low priority.

HTTP server configurator

It is often very hard to configure reasonable zones and limits for nginx. One way to do that is to run a stress test simulating the expected behavior. Hopefully, our library will help with the corresponding analysis and parameter tuning.
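As a rough illustration of such a stress test (the URL and rates are invented, and the pacing is deliberately naive): fire requests at a staging server at a fixed rate, count how many come back 429, and adjust the nginx `limit_req` parameters until the rejection rate looks right.

```python
import asyncio

import aiohttp


async def fire(session, url):
    async with session.get(url) as resp:
        return resp.status


async def stress(url, rps, seconds):
    async with aiohttp.ClientSession() as session:
        tasks = []
        for _ in range(int(rps * seconds)):
            tasks.append(asyncio.ensure_future(fire(session, url)))
            await asyncio.sleep(1 / rps)  # hold the request rate steady
        statuses = await asyncio.gather(*tasks)
    throttled = sum(s == 429 for s in statuses)
    print(f"{throttled}/{len(statuses)} requests throttled")


asyncio.run(stress("http://staging.example.com/", rps=50, seconds=10))
```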

General asynchronous priority queue

asyncio has a built-in Queue and a simple (seven-line) PriorityQueue implementation. In a separate project we are discussing the ability to use them as engines for different kinds of JobQueues. The biggest problem with prioritizing tasks is that, while it is relatively easy to consume tasks in the desired order, it is hard to produce them slowly enough. For example, this can result in running out of memory, because the total number of tasks produced grows faster than the number consumed. So instead of implementing additional logic in the producer (watch the queue size and don't push tasks if there are more than N elements there), a programmer can just use our library to slow it down.
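A toy sketch of that producer/consumer shape, with `pace()` standing in for whatever primitive this library ends up providing; the rates are arbitrary:

```python
import asyncio


async def pace(interval):
    """Stand-in for a library-provided rate limiter."""
    await asyncio.sleep(interval)


async def producer(queue, n):
    for i in range(n):
        # Throttle here instead of watching queue size in the producer.
        await pace(0.05)
        await queue.put((i % 3, f"job-{i}"))


async def consumer(queue, n):
    for _ in range(n):
        priority, job = await queue.get()
        await asyncio.sleep(0.05)  # pretend the job takes time
        queue.task_done()


async def main():
    queue = asyncio.PriorityQueue()
    # Pacing the producer to the consumer's rate keeps qsize() bounded.
    await asyncio.gather(producer(queue, 100), consumer(queue, 100))


asyncio.run(main())
```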

HTTP client (scraper)

A public service provides an API with throttling rules, and you get banned if you don't follow them. The task is to fetch as much information as possible, as fast as possible. With our library it should be pretty easy to control request intensity.
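A sketch of a polite scraper under an assumed limit of five requests per second; the endpoint is made up, and the `Throttle` class is a naive stand-in (one request in flight, spaced at least `interval` apart) for a real limiter:

```python
import asyncio

import aiohttp


class Throttle:
    """Naive pacing: one request at a time, `interval` seconds between them."""

    def __init__(self, interval):
        self.interval = interval
        self._lock = asyncio.Lock()

    async def __aenter__(self):
        await self._lock.acquire()

    async def __aexit__(self, *exc):
        await asyncio.sleep(self.interval)
        self._lock.release()


async def fetch(session, throttle, url):
    async with throttle:
        async with session.get(url) as resp:
            return resp.status


async def main():
    throttle = Throttle(interval=0.2)  # <= 5 requests/second
    urls = [f"https://api.example.com/items/{i}" for i in range(20)]
    async with aiohttp.ClientSession() as session:
        statuses = await asyncio.gather(
            *(fetch(session, throttle, url) for url in urls)
        )
    print(statuses)


asyncio.run(main())
```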

Chat bots

With chat bots you usually talk to a single upstream server, so you can't simply limit requests from it without affecting all of your users. Instead, you first extract information about the sender, and then decide whether to delay the execution of a particular coro/job/task or to forbid it entirely. This kind of logic obviously cannot be implemented with a pure nginx config, hence the need for such a library.
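A sketch of that delay-or-forbid decision, with made-up limits and a hand-rolled sliding window; a real bot would plug its own dispatcher in here:

```python
import asyncio
import time
from collections import defaultdict

WINDOW = 60.0    # seconds of history to consider
SOFT_LIMIT = 5   # above this, delay the job
HARD_LIMIT = 20  # above this, refuse it outright

_history = defaultdict(list)  # sender -> recent message timestamps


def _recent_count(sender):
    now = time.monotonic()
    _history[sender] = [t for t in _history[sender] if now - t < WINDOW]
    _history[sender].append(now)
    return len(_history[sender])


async def dispatch(sender, job):
    """Run `job` (a coroutine) subject to the sender's recent activity."""
    n = _recent_count(sender)
    if n > HARD_LIMIT:
        job.close()  # forbid: never run the coroutine
        return
    if n > SOFT_LIMIT:
        await asyncio.sleep(2.0)  # delay noisy senders
    await job
```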
