Database is locked in thread ... error #111
Update on the above:
Dear Jens, thank you for writing in, and apologies for the late reply. grafana-wtf was conceived as a sloppy tool fulfilling some of our needs, and it started to make use of grafana-client the other day. It looks like we did not get the code right when bringing concurrent HTTP downloads together with the SQLite-based request cache -- apparently it "just worked" for us and others up until now, when you reported this flaw. We will have to look into where corresponding locks are missing that would prevent concurrent access to shared resources. In the meanwhile, you may be successful by reducing the concurrency setting. With kind regards,
There are a few discussions on the issue tracker of the requests-cache package. In general, they indicate that the SQLite-based caching has limits when it comes to high-concurrency situations, but there are also specific recommendations which may need to be applied to the grafana-wtf code base. If you have the capacity to dive into the details, we will be more than happy.
That it already trips at a concurrency of "5" indicates that either the code of grafana-wtf or its use of the cache needs adjustments. Another recommendation I've picked up from quickly scanning those discussions would be to use a different cache backend. Please let us know if you have such needs; we think it will be easy to prepare grafana-wtf to offer different cache backends on behalf of requests-cache.
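For illustration, here is a minimal sketch of what selecting an alternative requests-cache backend could look like; the cache name, expiry, and URL are assumptions for the example, not grafana-wtf's actual configuration:

```python
from requests_cache import CachedSession
from requests_cache.backends import FileCache

# Sketch: store each cached response as its own file instead of in one
# shared SQLite database, which sidesteps SQLite write locks entirely.
backend = FileCache(cache_name="grafana-wtf")
session = CachedSession(backend=backend, expire_after=300)

response = session.get("https://grafana.example.org/api/health")
print(response.from_cache, response.status_code)
```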
There is another option we may want to consider.
Some people are also using that option in order to emulate a ... Maybe it is also the right time to finally offer such an option, so that ... In this spirit, when implemented, ...
Hi again. Taking all this into consideration, specifically @JWCook's comment at requests-cache/requests-cache#870 (comment), and reviewing ..., we figure a good first measure would be to actually use ... With kind regards,
It looks like the actual requests are sent by the grafana-client library. I don't think you need to switch backends; SQLite should be more than able to handle this. For reference, though, there are some recommendations on choosing a different backend here. Thankfully, there is a way to attach your own session to the client, so you could do something like:

api = GrafanaApi(...)
session = CachedSession(cache_name=__appname__, expire_after=expire_after, use_cache_dir=True)
session.headers['User-Agent'] = api.client.user_agent
api.client.s = session

Good luck! Feel free to open an issue or discussion on requests-cache if you get stuck.
Thanks @amotl! Makes perfect sense, and setting ...
Hi there, thank you both for your excellent replies. Jordan, the suggestion to patch ... Cheers,
Hi again. There is a patch now, which may improve the situation.
Dear Jens and Jordan, grafana-wtf 0.19.0 has been released, including corresponding patches to improve the situation. Can you tell us if it works better for you now? Thanks for your support! With kind regards,
Issue still persists with 0.19.0.
Dear Edgaras, thank you for writing in. Do you think grafana-wtf should start providing an alternative cache backend which is less problematic in concurrent-access situations? @JWCook and @Ousret: Can you spot any obvious problems with the current implementation after improving it based on your feedback? Maybe we are still doing unfortunate things, or even getting it completely wrong? With kind regards,
@amotl I looked over your changes and they look good to me. This issue could potentially be fixable with some backend settings, for example SQLite write-ahead logging. This comes with a few tradeoffs, but notably it allows read operations to not block writes. This can be enabled by passing wal=True to the SQLite backend.

It's also worth mentioning that I can't make any guarantees about niquests' compatibility with requests-cache. From a quick look at its Session class, it should behave the same as requests.Session.

I can help debug further, but I'll need some more information. @edgarasg, can you provide the exact error message you're getting, and post a complete example that reproduces this issue? Are you getting exceptions, or just log warnings? And @amotl, roughly how many threads are making concurrent requests? Is there a user-configurable thread limit, is it automatic based on the CPU, or something else?
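For illustration, a minimal sketch of enabling write-ahead logging on the requests-cache SQLite backend; the cache name and expiry are assumptions, not grafana-wtf's actual values:

```python
from requests_cache import CachedSession, SQLiteCache

# Sketch: wal=True turns on SQLite write-ahead logging, so readers
# are not blocked while another thread writes to the cache file.
backend = SQLiteCache("grafana-wtf", use_cache_dir=True, wal=True)
session = CachedSession(backend=backend, expire_after=300)
```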
Dear Jordan, thanks for your swift reply. We used your suggestion about SQLite write-ahead logging; grafana-wtf 0.19.1 includes this improvement. May we ask you to use that version for proceeding with further tests, @edgarasg?
This is right on the spot. There could be something fishy, but until further notice, I am also assuming "it will just work". Do you have any idea why that could go south in concurrency situations, for example because of unfortunate side effects from Niquests being async, @Ousret?
Actually, I didn't mean to bother you too much with that issue, so I appreciate your reply and offer very much. Let's hear back what @edgarasg might report about 0.19.1 before conducting another round of investigations.
The default value for the concurrency setting is 5. With kind regards,
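For illustration, a minimal sketch of the kind of setup under discussion -- several worker threads sharing one cached session; the thread count, cache name, and URLs are assumptions for the example, not grafana-wtf's actual code:

```python
from concurrent.futures import ThreadPoolExecutor
from requests_cache import CachedSession

CONCURRENCY = 5  # assumed default for this sketch

# One shared SQLite-backed session; all worker threads read from and write to it.
session = CachedSession("grafana-wtf", use_cache_dir=True, expire_after=300)

def fetch(path):
    return session.get(f"https://grafana.example.org{path}").status_code

paths = [f"/api/dashboards/uid/dashboard-{i}" for i in range(20)]
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    print(list(pool.map(fetch, paths)))
```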
Niquests advertises itself as a drop-in replacement. It would lose its purpose if this was "broken" or if it diverged too much.
Niquests is thread-safe in a synchronous enclosure and task-safe in an asynchronous one. So there is nothing to be concerned about there. Regards,
Thanks for the swift response. Now I'm getting this error: ...
Sorry to hear that, and thanks for reporting. Please also use ...
Hi again @edgarasg, can you confirm it is a hard error on your side, or is it just a warning with version 0.19.1, so that in general it works for you now? @JensRichnow: Can I also humbly ask you to try 0.19.1 or higher in your environment and report back about it? With kind regards,
I'm using v0.17.0. When running
grafana-wtf --drop-cache find xyz
I get a DB lock error. Running on Apple M2 Max.
How can I recover from that?
Cheers