
added missing mysql root password to sample app #1761

Closed
wants to merge 1 commit

Conversation

abonas commented Apr 16, 2015

The MySQL docker image requires the env variable MYSQL_ROOT_PASSWORD to be set; it doesn't work without it.
As a result of its failure/exit, a lot of mysql-based container instances were created, the machine ended up out of disk space, and origin was stopped as well.
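For context, this is roughly what the missing setting looks like in a pod template fragment (a hedged sketch with assumed names; the actual sample app's template may be structured differently):

```yaml
# Hypothetical pod-template fragment, not the actual sample app file.
# The upstream mysql image refuses to start unless MYSQL_ROOT_PASSWORD
# (or one of its documented alternatives) is provided.
containers:
  - name: mysql
    image: mysql
    env:
      - name: MYSQL_ROOT_PASSWORD
        value: changeme  # placeholder value
```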

mnagy (Contributor) commented Apr 16, 2015

NACK. This is actually a different error. What happened is that I've changed the variables in #1329 but forgot to change the image from mysql to openshift/mysql-55-centos7. It should work with the image changed, can you please do that and test whether it works?
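The image change described above would look something like this in the template (a hedged sketch; the actual change is in the referenced fix, not reproduced here):

```yaml
# Hypothetical fragment illustrating the described fix: swap the
# upstream image for the OpenShift one, which reads its own MYSQL_*
# variables (MYSQL_USER, MYSQL_PASSWORD, MYSQL_DATABASE, assumed here)
# rather than the upstream image's MYSQL_ROOT_PASSWORD.
containers:
  - name: mysql
    image: openshift/mysql-55-centos7  # was: mysql
```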

abonas (Author) commented Apr 16, 2015

> NACK. This is actually a different error. What happened is that I've changed the variables in #1329 but forgot to change the image from mysql to openshift/mysql-55-centos7. It should work with the image changed, can you please do that and test whether it works?

Looking at your PR #1329, it seems you're right. I will wait for an official fix for that.

This whole situation raises several concerns and questions, so I'm adding @smarterclayton too.

  1. Since the mysql container failed to start, the system handled that by creating countless new mysql container instances, which all exited with the same error, and eventually the machine ran out of disk space. Why isn't there a limit in such a situation (perhaps at the k8s level) on the number of containers created?
     For example, retry 5 times and stop.
  2. The implication of one missing env variable causing the entire system to fail is quite a big issue (origin was stopped by docker as well).
  3. Is there any way to prevent this in the first place, before it reaches the container at runtime? For example, correlate the "env" section in the pod/container definition with the variables required by the image and fail creation of the pod if they don't match.
     Imagine how common these errors will be and, with no validation, how hard it will be to detect and recover from them at runtime.
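The capped-retry idea in point 1 can be sketched as a toy supervisor policy (purely illustrative; this is not Kubernetes code, and the function name here is invented):

```python
# Toy model of "retry 5 times and that's it" with exponential backoff.
# Illustrative only; a real orchestrator implements this policy itself.

def restart_delays(max_retries=5, base_delay=1.0, cap=300.0):
    """Yield the backoff delay in seconds before each restart attempt,
    giving up entirely after max_retries attempts."""
    delay = base_delay
    for _ in range(max_retries):
        yield min(delay, cap)
        delay *= 2  # double the wait between consecutive failures

# After the fifth failed attempt the generator is exhausted, so the
# supervisor stops recreating the container instead of spawning
# endless new instances.
print(list(restart_delays()))  # [1.0, 2.0, 4.0, 8.0, 16.0]
```

The cap keeps the delay bounded for long-running crash loops, and exhausting the generator is what enforces the hard retry limit.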

mnagy (Contributor) commented Apr 16, 2015

@abonas fix here: #1762

smarterclayton (Contributor) commented:

The fact the containers aren't cleaned up is definitely a bug - the container gc must not be working.

mfojtik (Contributor) commented Apr 16, 2015

@smarterclayton @abonas I saw a Docker (?) bug last week on my F21 vagrant box where space was not reclaimed and Docker filled up my partition. I tried rebooting, restarting Docker, and cleaning up all containers/images, but the only solution was to rm -rf /var/lib/docker...

And I'm using native Docker, not LXC.

abonas (Author) commented Apr 16, 2015

@mfojtik mine was on Fedora 20, Docker version 1.5.
I removed all containers, and I also rm -rf'd all contents of /var/lib/docker/device-mapper/metadata.
Then, after I fixed the template, I had no more failed containers, so it's good.
And maybe it is indeed a Docker bug.

smarterclayton (Contributor) commented:

Is this now fixed?

mnagy (Contributor) commented May 6, 2015

@smarterclayton The template was fixed in #1762.

smarterclayton (Contributor) commented:

Ok, closing then. Thanks
