Replies: 7 comments 2 replies
-
I should also note that it appears to complete after some time regardless of the timeout.
-
Are you running on very slow hardware or a remote database or something like that? A couple of thousand entries shouldn't normally take over 100 seconds to process. The API behavior is basically dictated by how upstream works, so I don't think this is something we can really fix independently.
-
Yeah, it's a Synology DS918+ with WD Red drives. It takes a lot longer than 100 seconds to process an import/purge of that many records. Fair enough that it can't be fixed here, though. Do you think I should try opening an issue upstream? It definitely completes, because I can hear the NAS drives churning pretty loudly for a couple of minutes, and when it stops I know it's done 😂 Thanks for the reply.
-
Well, we can't fix the Cloudflare timeout, of course. Accepting the request and then parsing it in the background doesn't match upstream behavior, so that isn't an option either. And reporting it upstream probably won't help, unless the import also takes too long on the browser/client side.
-
I'm not sure if you're agreeing that a "Synology DS918+ with WD Red drives" is slow hardware? It doesn't sound that slow per se, but maybe your RAID configuration is such that random disk writes are slow. If you can't fix your write speed, you could probably manually chunk your import file, or not use Cloudflare, at least temporarily. But I don't think upstream is going to be receptive to making a relatively complex change to accommodate someone's relatively unusual configuration.
-
I was agreeing that it's not a super fast machine; it's powered by an Intel Celeron J3455 quad-core CPU. It could definitely be the RAID configuration: I'm using Synology Hybrid RAID (SHR) on the Btrfs file system. This was more of a user-experience thing. It ultimately does complete its task, the user just wouldn't know, and it seemed strange that it would leave the request open regardless. Maybe I'll try disabling write caching on the disks or some other configuration tweaks. It does appear to be a disk issue, though, as CPU usage was low and disk access increased quite a bit (I could both see it in the monitor and hear it, since this is a loud unit). I'll close this issue out and report back if anything improves the performance. Thanks for everyone's time, it's appreciated.
-
Well, it seems the same problem arises here, but this time it occurs while downloading icons. After importing hundreds of passwords into Bitwarden, the server effectively gets DDoS-ed by its own account favicon downloads. According to the logs the software is running; it just serves no web requests while it's downloading icons.
-
Subject of the issue
When importing a Bitwarden JSON file behind Cloudflare it's one long post request and ultimately times out once it hits Cloudflare's 100 second limit: https://support.cloudflare.com/hc/en-us/articles/115003011431-Error-524-A-timeout-occurred#524error
Deployment environment
Install method: Docker
Clients used: web vault
Reverse proxy and version: Nginx via Cloudflare
MySQL/MariaDB or PostgreSQL version:
Other relevant details:
Steps to reproduce
Note: this also happens when purging or deleting a lot of items
Expected behaviour
The import would succeed, for example via some kind of queue process, or AJAX polling that checks progress, rather than leaving the connection open until the import is complete.
Actual behaviour
Hits Cloudflare's 100 second limit and returns an HTTP 524 error
Troubleshooting data
When posting to:
https://<host>:<port>/api/ciphers/import
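The queue-and-poll pattern suggested under "Expected behaviour" could look roughly like this; a hypothetical sketch (upstream Bitwarden does not actually work this way, as noted in the replies, and all names here are illustrative):

```python
# Hypothetical sketch of a "return immediately, poll for progress" import.
# Not how upstream Bitwarden behaves; illustrative only.
import threading
import time
import uuid

jobs = {}  # job_id -> {"done": int, "total": int}

def start_import(items):
    """What POST /api/ciphers/import could do: return a job id
    immediately and process the items in a background thread."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"done": 0, "total": len(items)}

    def worker():
        for _ in items:
            time.sleep(0.001)  # stand-in for the slow disk write
            jobs[job_id]["done"] += 1

    threading.Thread(target=worker, daemon=True).start()
    return job_id

def progress(job_id):
    """What a hypothetical GET /import/<job_id>/status could return."""
    job = jobs[job_id]
    return {"done": job["done"], "total": job["total"],
            "finished": job["done"] == job["total"]}
```

With this shape, the client's HTTP request returns in milliseconds and never hits Cloudflare's 100-second ceiling; only the background worker runs for minutes.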