[Bug] ElasticSearch Backup #15
Comments
@toxisch In what way exactly does the backup fail? Do you have any error messages in the app log? Does the app run out of memory, perhaps? Does it work if you run elasticdump yourself manually against your DB?
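For anyone who wants to try the manual run suggested above: a minimal sketch of invoking elasticdump for a single index. The endpoint, index name, and output file below are placeholders, not values from this thread; elasticdump itself is the npm package backman uses (`npm install -g elasticdump`).

```python
import shutil
import subprocess

ES_URL = "http://localhost:9200"  # placeholder: your Elasticsearch endpoint
INDEX = "my_index"                # placeholder: the index to back up


def elasticdump_cmd(es_url, index, dump_type, out_file):
    """Build the elasticdump command line for one index and dump type."""
    return [
        "elasticdump",
        f"--input={es_url}/{index}",
        f"--output={out_file}",
        f"--type={dump_type}",
    ]


# Dump mapping and data separately, as elasticdump handles one type per run
for dump_type in ("mapping", "data"):
    cmd = elasticdump_cmd(ES_URL, INDEX, dump_type, f"{INDEX}_{dump_type}.json")
    if shutil.which("elasticdump"):
        subprocess.run(cmd, check=True)
    else:
        print("elasticdump not installed; would run:", " ".join(cmd))
```

If the manual run succeeds but backman still fails, that points at backman's upload stage rather than elasticdump itself.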
Hi @JamesClonk, sorry for the slow response. There is no difference between automatic and manual backup, and there is no memory problem either. Here is an ElasticSearch backup log. It ends with a termination in the S3 service, but S3 itself works fine: Mongo and Maria backups run on this system without problems.
Hi guys, I'm having the same issue trying to back up my Elastic instances with backman to S3 storage.
The upload to S3 never completes, and nothing is stored there at all.
Hi everyone, are you also using an ES instance with the Stack Monitoring feature? Could this issue be affecting you as well?
To me it seems elasticdump is never even started, or it exits immediately.
Hi @pvolkemer, for me the problem was solved by using the latest backman version.
@toxisch Your problem was different from mine. In my case, elasticdump doesn't seem to do anything, so there is nothing that can be uploaded to S3.
I don't really need backman to back up an ES instance, since our parser, Vector (because Logstash is a waste for parsing logs, in my opinion), can also push logs to other destinations. But because some guys chose too big an instance, I was forced to downgrade, which would mean: 1. back up, 2. recreate the service, and 3. replay the backup. It seems that service was never backed up, though. There seems to be no config for the ES instance, but backman should give it a default cron schedule. Did that ever work? @JamesClonk
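On the missing default schedule: I can't confirm from this thread what backman does when a service has no explicit config, but per-service settings can normally be spelled out in its JSON configuration. A hypothetical fragment for an ES service (the service name is a placeholder, and the `schedule`/`timeout` key names should be verified against the backman README):

```json
{
  "services": {
    "my-elasticsearch": {
      "schedule": "0 0 2 * * *",
      "timeout": "2h"
    }
  }
}
```

Setting an explicit schedule at least rules out the "no config, no backup" scenario when debugging.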
Constantly getting this error: `failed before 7 days timeout reached`. @JamesClonk, any suggestions on when this could be fixed?
I've created a new release, https://github.com/swisscom/backman/releases/tag/v1.28.0. Unfortunately, I do not use Elasticsearch myself, and there are currently no integration tests for it in the CI workflow, so I can't test or support it if it does not work.
I checked, and for me it worked as expected. I haven't checked big volumes yet, but I plan to do so on the 1st of June.
It has been working well for the last half year.
thanks 👍️ |
update from swisscom:master
On large DBs the backup does not work, where "large" is relative: sometimes a little data plus system data is enough to make the backup fail.
I also played with the sources and limited the elasticdump script to single indices. In that case the backup works fine, so I assume the failure is due to the size of the backup.
It is not a timeout issue either! I ran my tests with a 10h timeout.
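The single-index workaround described above can be sketched roughly like this. This is not the actual patch to backman's sources, just an illustration of the idea: list the indices via the `_cat/indices` API, then run elasticdump once per index instead of once for the whole cluster. The endpoint is a placeholder, and only `--type=data` dumps are shown.

```python
import shutil
import subprocess
import urllib.request

ES_URL = "http://localhost:9200"  # placeholder: your Elasticsearch endpoint


def parse_index_names(cat_output: str):
    """Parse plain-text output of GET /_cat/indices?h=index into index names."""
    return [line.strip() for line in cat_output.splitlines() if line.strip()]


def dump_per_index(es_url: str):
    # Fetch the index list, then dump each index separately so a single
    # huge dump does not have to succeed in one go
    with urllib.request.urlopen(f"{es_url}/_cat/indices?h=index") as resp:
        indices = parse_index_names(resp.read().decode())
    for index in indices:
        cmd = [
            "elasticdump",
            f"--input={es_url}/{index}",
            f"--output={index}_data.json",
            "--type=data",
        ]
        subprocess.run(cmd, check=True)


if shutil.which("elasticdump"):
    try:
        dump_per_index(ES_URL)
    except OSError as exc:
        print("could not reach Elasticsearch:", exc)
else:
    print("elasticdump not installed; skipping")
```

If per-index dumps succeed where the all-in-one dump fails, that supports the size hypothesis rather than a timeout.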