Error while installing server via docker #543

Closed
nlehuby opened this issue Jun 3, 2020 · 7 comments · Fixed by e-mission/e-mission-docker#7

Comments


nlehuby commented Jun 3, 2020

I've tried to run the e-mission server using the docker files from this repo

docker-compose -f docker-compose.dev.yml up ends up with an error on the webserver: ModuleNotFoundError: No module named 'future'

I'm not familiar with conda, so this may be unrelated, but during the installation I got
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'. and an error while installing the package 'conda-forge::cachetools-2.1.0-py_0'.


shankari commented Jun 3, 2020

I found the conda-forge::cachetools-2.1.0-py_0 error and checked in a workaround in
#511

Let me see why docker-compose -f docker-compose.dev.yml up is not picking that up.


shankari commented Jun 3, 2020

I currently have a docker-based CI in the server repo (https://github.com/e-mission/e-mission-server/actions?query=workflow%3Atest-with-docker), and it passed 4 days ago.

So there is nothing wrong with the docker image itself; the difference must be in the two startup scripts. Comparing clone_and_start_server.sh in the e-mission-docker repo with setup/tests/start_script.sh in the e-mission-server repo, the main differences are the setup script that is run and the line source /opt/conda/etc/profile.d/conda.sh.

The two setup scripts are functionally identical; the only real difference is the environment name:

 echo "Setting up blank environment"
-conda create --name emission python=3.6
-conda activate emission
+conda create --name emissiontest python=3.6
+conda activate emissiontest

 echo "Downloading packages"
 curl -o /tmp/cachetools-2.1.0-py_0.tar.bz2 -L https://anaconda.org/conda-forge/cachetools/2.1.0/download/noarch/cachetools-2.1.0-py_0.tar.bz2
@@ -22,5 +17,7 @@
 echo "Installing manually downloaded packages"
 conda install /tmp/*.bz2

-echo "Updating using conda now"
-conda env update --name emission --file setup/environment36.yml
+conda env update --name emissiontest --file setup/environment36.yml

So the difference must be the conda profile script.
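
For reference, this is roughly the step the failing script appears to be missing (the path is the one from the server repo's start script quoted above; the environment name is whichever one the script creates):

# load conda's shell functions before calling `conda activate`; without this,
# the script fails with the CommandNotFoundError reported in this issue
source /opt/conda/etc/profile.d/conda.sh
conda activate emission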


shankari commented Jun 3, 2020

Let me try testing this out locally and fix it. I should really turn on CI for the docker-compose images as well 😄 ETA: tonight, Pacific time.


shankari commented Jun 4, 2020

Fixed by uploading a new "latest" version to dockerhub, pointing to version 2.8.2.

The container works fine now; I can access localhost:8080. I will deal with CI after I think a bit about how best to organize the various docker options. Right now, it looks like the e-mission-server testing docker setup is very close to the dev docker-compose. Can we unify them somehow?

C02KT61MFFT0:em-server shankari$ docker-compose -f docker-compose.dev.yml up
...
Creating em-server_db_1 ... done
Creating em-server_web-server_1 ... done
Attaching to em-server_db_1, em-server_web-server_1
web-server_1  | Cloning from repo https://github.com/e-mission/e-mission-server.git and branch master
db_1          | 2020-06-04T05:47:50.254+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=5bb3a71c1d34
db_1          | 2020-06-04T05:47:50.256+0000 I CONTROL  [initandlisten] db version v3.4.19
db_1          | 2020-06-04T05:47:50.257+0000 I CONTROL  [initandlisten] git version: a2d97db8fe449d15eb8e275bbf318491781472bf
...
web-server_1  | Installing manually downloaded packages
########## | 100%
########## | 100%
########## | 100%
########## | 100%
########## | 100%
########## | 100%
web-server_1  | Downloading and Extracting Packages
web-server_1  | Preparing transaction: ...working... done
web-server_1  | Verifying transaction: ...working... done
web-server_1  | Executing transaction: ...working... done
web-server_1  |
web-server_1  | Updating using conda now
web-server_1  | Solving environment: ...working... done
web-server_1  |
...
web-server_1  | Connecting to database URL db
web-server_1  | analysis.debug.conf.json not configured, falling back to sample, default configuration
web-server_1  | Finished configuring logging for <RootLogger root (WARNING)>
web-server_1  | Replaced json_dumps in plugin with the one from bson
web-server_1  | Changing bt.json_loads from <function <lambda> at 0x7feeeaf3bd08> to <function loads at 0x7feee91b4d90>
web-server_1  | Running with HTTPS turned OFF - use a reverse proxy on production
web-server_1  | START 2020-06-04 06:06:11.284368 GET /
web-server_1  | END 2020-06-04 06:06:11.299876 GET /  0.015290260314941406
web-server_1  | START 2020-06-04 06:06:11.443965 GET /lib/ionic/css/ionic.css
web-server_1  | END 2020-06-04 06:06:11.447049 GET /lib/ionic/css/ionic.css  0.002894878387451172
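
(If you pulled the image before this fix, you will probably need to refresh the cached "latest" tag; a rough sketch, using the same compose file as above:)

# re-pull the re-tagged "latest" image and recreate the containers
docker-compose -f docker-compose.dev.yml pull
docker-compose -f docker-compose.dev.yml up --force-recreate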


nlehuby commented Jun 10, 2020

Indeed, it works now, thank you.

I don't really understand why there are so many images and docker files: at first I had started using this one, which hasn't been updated for a year, because of that docker-compose example, and it's not clear how it differs from the image you have just updated.

Since we're talking about CI, I wonder whether it wouldn't be better to keep each Dockerfile in its original repository and to automatically build the images on every push to master.
The docker repo would then contain only the docker-compose files that use those images. To develop on the server, for instance, you would change the docker-compose file to build the python part from source instead of using the published image.
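
As a rough sketch, the automated step on each push to master could be something like this (the image name and tag are only placeholders, not the project's actual setup):

# hypothetical CI step, run from the repo that owns the Dockerfile
docker build -t emission/e-mission-server:latest .
# publish, so the compose files in the docker repo can simply pull it
docker push emission/e-mission-server:latest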


shankari commented Jun 10, 2020

@nlehuby That's a good suggestion, and I wonder if we can brainstorm about how the docker stuff should work in general.

The additional Dockerfiles (e.g. nomkl) were there to make the docker image small enough to run in Try in PWD (Play with Docker), and to include ipython so that analysts can connect to the server and play with the data more easily.

The original Dockerfile was indeed the one in the server code. I then moved all the docker stuff out as part of modularizing the server code, and to allow people to build the docker images without checking out all the server code.

The difference between the two is that the one in the server repo passes all the files in the current directory into the build, while the one in the docker repo checks out the server code as part of the Dockerfile.
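
To illustrate the two styles (a sketch only; paths and tags are placeholders):

# server repo: the local checkout is the build context and supplies the code
docker build -t em-server:from-checkout path/to/e-mission-server
# docker repo: the Dockerfile fetches the server code itself, so the build
# context does not need a server checkout
docker build -t em-server:self-cloning path/to/e-mission-docker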

I do want to modularize the server further - e.g. pull out emission.core.wrapper and emission.core.storage into their own separate python modules that are installed using pip, and split the user-visible components into microservices (e.g. #506).

In that case, I guess each individual microservice should have its own docker image which will be built on each push to master.

While it is clear to me how people will use docker for production, I am still unclear about how they would use it for development. Do people want to build an image and test it? Or do they want the docker container to mount the code that they have checked out on their laptop?

I don't use docker for dev, so I don't have a clear idea of what the preferred flow would be. Since it looks like you do, can you let me know what your flow is? I can make sure to support that one flow well, and then we can add others depending on request.


nlehuby commented Jun 10, 2020

I do use docker for dev, because I'm not familiar enough with some dependencies (mongodb & conda mostly) to do a full install on my laptop.
The ideal workflow for me is:

  • use a docker-compose that pulls images (like the one in the docker repo): I can then play around and test a few things in production-like conditions
  • when I want to make some changes to the code, I switch to a docker-compose that builds an image from source for the part I want to modify (for instance the server)
  • I make my changes locally and, for each change, I build and run (docker-compose up --build) to test them in real conditions
  • (for simple python applications, you can even mount the whole python source as a volume so you don't need to rebuild on every local change; see the sketch after this list)
  • when I'm happy with my changes, I commit and open a PR
  • CI tests it, the maintainers review it, the changes eventually get merged to master, dockerhub builds and publishes a new image, and the cycle can begin again
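
A rough sketch of the volume-mount variant mentioned above (the service name comes from the compose logs earlier in this thread; the mount point and paths are assumptions and may not match the real compose file):

# run the web-server service with the local checkout mounted over the code in
# the container, so edits are picked up without rebuilding the image
docker-compose -f docker-compose.dev.yml run --rm \
  -v "$(pwd)/e-mission-server:/usr/src/app" \
  web-server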
