Recover data after docker-compose down #37
Hi @pedropalb, The services' containers indeed have their data folders mapped to the host file system... This might be related to an issue during the Elasticsearch container's initialization - can you please share the container's log? You can get the log using the following command: `docker logs trains-elastic`
It seems to be a problem with free disk space, but I believe that 23.2 GB (the current free space on my disk) should be enough. The used space in c:\opt\trains is only 250 MB, and the available space for the Docker disk image is 16 GB (of which only 3.1 GB). The log file follows:
Hi @pedropalb, By default, the high watermark is 90% (see here), so the question is not how much free space you have on your disk, but what percentage of the disk is used - try clearing up some space to see if it helps. Alternatively, you can configure the Elasticsearch container to use a different watermark, either as a different percentage or as a hard-coded number of bytes - simply edit your docker-compose file and add a new line under the elasticsearch service's environment section:

```yaml
services:
  elasticsearch:
    ...
    environment:
      ...
      cluster.routing.allocation.disk.watermark.high: "15gb"
```

In the example above, Elasticsearch will hit the high watermark only when you have less than 15 gigabytes free on your disk. Please let me know if that works for you :)
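To make the watermark point concrete, here is a small sketch of the arithmetic behind the check (the function name and the 500 GB disk size are made up for illustration; they are not part of TRAINS or Elasticsearch): what matters is the *percentage* of the disk that is used, not the absolute free space.

```python
def hits_high_watermark(used_bytes: int, total_bytes: int,
                        high_pct: float = 90.0) -> bool:
    """Return True if used disk space meets or exceeds the high watermark.

    Elasticsearch's default high watermark is 90% used disk space.
    """
    used_pct = 100.0 * used_bytes / total_bytes
    return used_pct >= high_pct


# Hypothetical example: 23.2 GB free on a 500 GB disk is ~95.4% used,
# so the 90% watermark is hit even though the absolute free space
# sounds like plenty.
gb = 1024 ** 3
total = 500 * gb
free = int(23.2 * gb)
print(hits_high_watermark(total - free, total))  # True
```

This is why freeing space (or lowering the watermark to a byte value like `"15gb"`) resolves the issue even when gigabytes are still free.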
Oh I see! I managed to free some space, and that seems to have fixed the Elasticsearch issue. But still, all my data is gone after the docker-compose down and up. Do we have to create a new user and credentials every time we restart the server? Can't I recover my data anymore?
Hi @pedropalb,
The user and credentials are stored in the configuration files, not in the Elasticsearch data - did you lose those as well? Regarding Elasticsearch, the data should still be there - can you find and send the directory contents of the
It seems the Elasticsearch data is still there in the path you mentioned, but MongoDB is almost empty. I tried to query tasks, projects, users, etc. - everything is empty except the user collection, which contains only the newest user I created. What goes to MongoDB and what goes to Elasticsearch? I didn't lose the configuration file, but it only has the old credentials. I didn't specify a user in the config file; I did that through the Web UI. So after the restart, I had to create new credentials and replace them in the config file. With the old credentials I couldn't even use the APIClient().
Hi @pedropalb, You are correct in assuming that tasks, projects, etc. (including user credentials) are stored in MongoDB. Can you please share the
A few other thoughts:
Here is the MongoDB log: There is nothing in
Thanks!
Hi @bmartinn! The issue is in the volume mapping in the docker-compose.yaml for the MongoDB service. By default, MongoDB writes the data in
So, the solution - I hope - is to change the mapping from
I noted that docker-compose.win10.yml has this issue, but the docker-compose.yml does not. The latter has no volume named trains_mongodata. It maps
Would this change impact other TRAINS functionality? Thanks!
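The kind of fix described above can be sketched as a docker-compose fragment. This is a hypothetical illustration, not the actual TRAINS file: the service name and host path are assumed, and `/data/db` is MongoDB's documented default data directory inside the container.

```yaml
services:
  mongo:
    ...
    volumes:
      # Map MongoDB's default in-container data directory (/data/db)
      # to a host path, so the data survives `docker-compose down`.
      - /opt/trains/data/mongo:/data/db
```

With a mapping like this, removing and recreating the container leaves the database files on the host, and the recreated container picks them up again.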
Hi @pedropalb, I just found where the shared drives feature was moved to in the new Docker Desktop: please go to
Hi @bmartinn,
Hi, Is there any other place where you change the mongo default data directory to
Hi @pedropalb
Hello!
In order to get rid of the bug below, I used

```shell
docker-compose -f .\docker-compose-win10.yml down
```

and then

```shell
docker-compose -f .\docker-compose-win10.yml up -d
```

After that, I don't see any of the experiments I have done until now. I thought that the services' containers had all their data mapped into the host filesystem (c:\opt\trains in my case).
How can I recover my experiments' data?
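One general note, offered as a suggestion rather than as the maintainers' guidance: `docker-compose down` removes the containers, so anything written inside a container's filesystem (i.e. not mapped to a host volume) is lost when the containers are recreated. If the goal is only to restart the stack, `stop`/`start` preserves the containers and their filesystems:

```shell
# Stop the services without removing the containers
docker-compose -f .\docker-compose-win10.yml stop

# Start them again later; the container filesystems are preserved
docker-compose -f .\docker-compose-win10.yml start
```

This does not replace fixing the volume mapping, but it avoids losing in-container data in the meantime.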