[WIP] Sanity checking around backing up files #395
base: master
Conversation
During node backup, the python process running as the `medusa` user was unable to locate snapshot files on disk. If no plausible directories with snapshot files exist on the host, consider this a failure condition and abort. Do not let execution proceed erroneously, leading to a misleading "backup successful" message.
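The abort-early behavior described above could be sketched roughly as follows. This is a minimal illustration, not Medusa's actual code: the helper names, the directory glob pattern, and the exit path are all assumptions made for the example.

```python
import glob
import logging
import sys


def find_snapshot_dirs(data_root: str, snapshot_name: str) -> list:
    """Return candidate snapshot directories under the Cassandra data root.

    Hypothetical helper: the real Medusa code resolves these paths
    differently, but the shape of the check is the same.
    """
    pattern = f"{data_root}/*/*/snapshots/{snapshot_name}"
    return glob.glob(pattern)


def check_snapshot_dirs_or_abort(data_root: str, snapshot_name: str) -> list:
    """Abort the backup if no snapshot directories can be found at all."""
    dirs = find_snapshot_dirs(data_root, snapshot_name)
    if not dirs:
        # An empty result here usually means a permissions problem or a
        # wrong data root, not an empty cluster, so fail loudly instead of
        # letting the backup "succeed" with nothing in it.
        logging.error("No snapshot directories found under %s for snapshot %s",
                      data_root, snapshot_name)
        sys.exit(1)
    return dirs
```

Failing fast here keeps the misleading success message from ever being printed, and the logged path makes the permissions problem visible to whoever is troubleshooting.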
…me for a given node
Kudos, SonarCloud Quality Gate passed! 0 Bugs. No Coverage information.
@kiddom-kq, it looks like this broke the integration tests. You can run them locally using …
Any improvement toward a more specific use of exceptions is welcome. Feel free to make any changes necessary.
I am having some trouble understanding the test. Running `./run_integration_tests.sh --test=16 --cassandra-version=2.2.19 -vv` returns a failure because the … What is supposed to happen when …? Looking at the test:
It looks like … When I disable the assert, the execution continues to call … Those functions put a record of the backup happening in the storage/index, even though nothing was backed up. Can you confirm that the …?
@kiddom-kq, differential backups put the sstables into a … while full backups use this layout: … Those 100 rows should definitely be backed up in the …
Hi @kiddom-kq, is this still something you're working on?
ping @kiddom-kq
While trying to deploy Medusa, I ran into a few issues. The root cause of #390 was an issue with file system permissions and the silent-failure behavior of Python's `glob()`.
This PR implements some of the debug logging that I wish I had had while troubleshooting and a basic sanity check to abort execution as soon as an error is observed rather than waiting for a (misleading) failure at a later point in execution.
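The "silent failure" being referred to is that `glob.glob()` never raises for a missing or unreadable directory; it simply returns an empty list, which is indistinguishable from "the directory exists but matched nothing". A short demonstration (the path is made up):

```python
import glob

# Globbing under a directory that does not exist raises no exception at
# all; the empty result looks exactly like a legitimate "no matches".
result = glob.glob("/no/such/directory/*/snapshots/*")
print(result)  # → []
```

This is why a permissions problem for the `medusa` user can sail straight through to a "backup successful" message unless the empty result is explicitly treated as an error.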
┆Issue is synchronized with this Jira Task by Unito
┆friendlyId: K8SSAND-1398
┆priority: Medium