
Build bdcs-cli and run tests inside Docker #21

Merged

merged 3 commits into weldr:master from atodorov:images_testing on Sep 22, 2017

Conversation

atodorov (Member)

No description provided.

@atodorov atodorov requested a review from bcl September 15, 2017 14:55

@bcl (Collaborator) left a comment

Looks pretty good, but I really want to be able to run this using the Fedora docker packages.

Makefile Outdated
[ -f "metadata.db" ] || curl https://s3.amazonaws.com/weldr/metadata.db > metadata.db
sudo docker ps | grep api || sudo docker run -d --rm --name api -p 4000:4000 -v `pwd`:/mddb --security-opt label=disable --network welder welder/bdcs-api-rs:latest

sudo docker build -t $(ORG_NAME)/bdcs-cli:latest -f $< --cache-from $(ORG_NAME)/bdcs-cli:latest .
bcl (Collaborator)

I think we need to figure out how to make the --cache-from conditional. We should be able to run these on our local dev systems with the Fedora docker packages.

atodorov (Member, Author)

Agreed. How about I deal with this separately, since the --cache-from option is used in several different places across the welder repos?

atodorov (Member, Author)

PS: I can try aliasing docker build to something that passes the --cache-from option when we are in CI, but it's not going to be very pretty.
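For what it's worth, the conditional could be sketched roughly like this. This is a hypothetical illustration, not the repo's actual Makefile logic; the build_cmd helper and the "ci"/"local" argument are made up, standing in for however the Makefile would detect Travis (e.g. a $CI environment variable):

```shell
# Hypothetical sketch: emit the docker build command, adding --cache-from
# only in CI mode, so local builds still work with Fedora's docker packages.
build_cmd() {
    org="$1"; mode="$2"
    cmd="docker build -t $org/bdcs-cli:latest"
    if [ "$mode" = "ci" ]; then
        cmd="$cmd --cache-from $org/bdcs-cli:latest"
    fi
    echo "$cmd ."
}

build_cmd welder ci      # includes --cache-from
build_cmd welder local   # plain build for local dev systems
```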

bcl (Collaborator)

Yeah, a separate set of patches is fine.

Dockerfile.build Outdated

# install the build dependencies first so we can cache this layer
COPY BDCSCli.cabal .
RUN cabal update && cabal install --dependencies-only --enable-tests --force-reinstall
bcl (Collaborator)

Is the install really needed? It may mask problems with the ghc rpm packages.
I think we want to switch to depending on just the rpms so that we can be sure everything is packaged correctly.

atodorov (Member, Author)

Hmmm, do we know whether we have all the necessary requirements as RPMs? Should I just figure out their names and dnf install them together with the other packages in this Dockerfile?

atodorov (Member, Author)

@bcl

cabal: Encountered missing dependencies:
aeson -any,
aeson-pretty -any,
cond -any,
gi-ggit -any,
gi-gio -any,
gi-glib -any,
hspec -any,
htoml -any,
lens -any,
lens-aeson -any,
monad-loops -any,
semver -any,
split -any,
temporary -any,
unordered-containers -any,
wreq -any

This is after I install ghc-aeson and pretty much everything else. My command was:

for i in $(cabal install --dependencies-only --dry-run | grep - | grep -v "In order" | grep -v basement | grep -v overloading | sed -r 's|-[0-9]+(\.[0-9]+)*||'); do echo ghc-$i; done | xargs dnf -y install && dnf clean all

Any ideas?

bcl (Collaborator)

Here's what I came up with for my docker development image (I currently can't get my F26 desktop to work because some of these conflict with desktop packages I have installed): https://github.com/bcl/docker-bdcs-hs

atodorov (Member, Author)

OK, that worked. I needed the -devel counterparts of everything, plus --enable-tests. Not sure if I need --enable-coverage with --dry-run though.

@atodorov (Member, Author)

Just FYI: in test_depsolve.py, instead of running the tests against localhost:4000, I use api:4000. When I execute these commands on my computer, all of the containers are on the welder network and can see each other via the container name used as a hostname. It looks like inside Travis this is not the case (at least not for now).
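One way to support both setups would be to make the host configurable. This is a hypothetical helper (api_url and the BDCS_API_HOST variable are made up for this sketch, not part of test_depsolve.py): default to localhost for environments like Travis, and override with the container name on the welder network:

```shell
# Hypothetical sketch: pick the API endpoint from an environment variable,
# defaulting to localhost where the container-name DNS of the welder
# network is not available (e.g. on Travis).
api_url() {
    echo "http://${BDCS_API_HOST:-localhost}:4000"
}

api_url                     # Travis-style: http://localhost:4000
BDCS_API_HOST=api api_url   # welder network: http://api:4000
```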

@atodorov (Member, Author)

Update: it looks like the API container has started and is reachable via http://api:4000/, but for some reason I don't get a package list back, just nothing. Can't really figure out what is going wrong.

@bcl (Collaborator) commented Sep 19, 2017

Running it locally I'm seeing several things:

- Not all the right packages appear to be getting installed: https://travis-ci.org/weldr/bdcs-cli/builds/277292248#L3300
- I'm hitting the OOM killer when using the database from S3. If I use the one from Jenkins I don't have memory problems.

ETA: It looks like docker in F26 has picked up the --cache-from option (docker-1.13.1-22.gitb5e3294.fc26.x86_64).

@atodorov (Member, Author)

OK, I think we do have a weird packaging/dependency problem. CC'ing @dashea.

The bdcs-deps repository provides RPM packages for ghc-memory and ghc-cryptonite, but their names are suffixed with -weldr. As a result they are installed in the first step, and then, when installing all the cabal dependencies, we also pull in the RPMs from the Fedora repos and end up with:

# rpm -qa | grep memory
ghc-memory-devel-0.14.1-1.fc26.x86_64
ghc-memory-weldr-0.14.7-1.fc26.x86_64
ghc-memory-0.14.1-1.fc26.x86_64
ghc-memory-weldr-devel-0.14.7-1.fc26.x86_64

# rpm -qa | grep cryptonite
ghc-cryptonite-0.21-1.fc26.x86_64
ghc-cryptonite-devel-0.21-1.fc26.x86_64
ghc-cryptonite-weldr-devel-0.24-2.fc26.x86_64
ghc-cryptonite-weldr-0.24-2.fc26.x86_64
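As a sanity check, such duplicates can be spotted mechanically from the rpm -qa output. This is a hypothetical helper (find_weldr_dupes is made up for this sketch, and the sed pattern only handles lower-case base names), not something from the PR:

```shell
# Hypothetical sketch: from rpm -qa output, extract the base names of
# packages installed in a -weldr variant, so each can be checked against
# its plain Fedora counterpart.
find_weldr_dupes() {
    sed -n 's/^ghc-\([a-z-]*\)-weldr-[0-9].*/\1/p' | sort -u
}

rpm_qa_sample="ghc-memory-weldr-0.14.7-1.fc26.x86_64
ghc-memory-0.14.1-1.fc26.x86_64
ghc-cryptonite-weldr-0.24-2.fc26.x86_64"

printf '%s\n' "$rpm_qa_sample" | find_weldr_dupes   # cryptonite, memory (one per line)
```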

Here's what dnf install ghc-cryptohash-devel looks like:

Installing:
 ghc-cryptohash-devel                     x86_64                     0.11.9-1.fc26                       fedora                     180 k
Installing dependencies:
 ghc-cryptohash                           x86_64                     0.11.9-1.fc26                       fedora                      60 k
 ghc-cryptonite                           x86_64                     0.21-1.fc26                         fedora                     688 k
 ghc-cryptonite-devel                     x86_64                     0.21-1.fc26                         fedora                     2.2 M
 ghc-memory                               x86_64                     0.14.1-1.fc26                       fedora                      94 k
 ghc-memory-devel                         x86_64                     0.14.1-1.fc26                       fedora                     339 k
# rpm -qR ghc-cryptohash | egrep 'memory|cryptonite'
libHScryptonite-0.21-BpO3rZu57xu84Yfxwqkb5K-ghc8.0.2.so()(64bit)
libHSmemory-0.14.1-J8LClTQE9ts1CQQG3Fd7ul-ghc8.0.2.so()(64bit)

# rpm -q --provides ghc-memory-weldr-0.14.7-1.fc26.x86_64
... skip ...
libHSmemory-0.14.7-I9vl6QXF4YjFsD4Rqouxq0-ghc8.0.2.so()(64bit)

# rpm -q --provides ghc-cryptonite-weldr-0.24-2.fc26.x86_64
... skip ...
libHScryptonite-0.24-DFIgYUXmK2tD1jKezXgHob-ghc8.0.2.so()(64bit)

As you can see, the required library versions differ from the ones we provide.

@dashea, what should we do about the above? Aren't you seeing the same thing when trying to build locally?

@atodorov (Member, Author)

For the record: the metadata.db files from both Jenkins and S3 appear to be the same currently.

@dashea (Contributor) commented Sep 20, 2017

@atodorov We could do the same with cryptohash as with cryptonite: build a -weldr version (which would use the -weldr versions of cryptonite and memory), and then rebuild packages against ghc-cryptohash-weldr. It looks like wreq is the only thing we use that requires it.

The trickier part with cryptohash is that the version in Fedora is already the highest version available, so we need something other than version numbers during the build of wreq to ensure that the -weldr version of cryptohash is used.

@bcl (Collaborator) commented Sep 20, 2017

For the record: the metadata.db files from both Jenkins and S3 appear to be the same currently.

Sorry, I meant the metadata.db produced by the demo, not the CentOS 7 one: http://jenkins.install.bos.redhat.com/view/Composer/job/welder-deployment/lastSuccessfulBuild/

@atodorov (Member, Author)

@bcl these two are different, but the OOM issue is most likely the one triggered when there are multiple versions of the same package. I checked Jenkins, and unfortunately, right as I submitted this PR, it imported some newly updated RPMs into the DB, which started causing the OOM. I will rebuild metadata.db and try again.

On a side note: we should resolve the ghc-memory and ghc-cryptonite version mismatch but I think this doesn't affect this particular PR.

@atodorov (Member, Author) commented Sep 22, 2017

With the newly generated metadata.db, the tests are now failing with:

AssertionError: u'jboss-servlet-3.0-api' not found in 'Recipe: jboss v0.0.1\n'

I've checked the CentOS repo and the import log, and also inspected metadata.db, and the jboss-servlet-3.0-api package is definitely there. The log says:

Imported /centos/7/os/x86_64/Packages/jboss-servlet-3.0-api-1.0.1-9.el7.noarch.rpm

I'm in favor of merging this PR, because it has taken too long already, and then figuring out what's wrong with the jboss recipe afterwards. Thoughts?

@atodorov atodorov force-pushed the images_testing branch 2 times, most recently from 843dfb6 to c5154e4 Compare September 22, 2017 13:46
@coveralls
Coverage Status

Changes Unknown when pulling 3749490 on atodorov:images_testing into weldr:master.

@bcl (Collaborator) left a comment

Thanks for all the work, I think this is as good as we can get it for now.

@atodorov atodorov merged commit e78f867 into weldr:master Sep 22, 2017
@atodorov atodorov deleted the images_testing branch September 22, 2017 17:40