Other CI tools with same MQT scripts #313

Closed · moylop260 opened this issue Feb 11, 2016 · 32 comments

@moylop260 (Contributor)

Hello, today I saw a good blog post (thanks to @lasley) about integrating Bamboo as a CI tool using the same scripts as MQT.
Vauxoo is using runbot_travis2docker in our runbot (we have an SSH connection to a particular build, a feature described here).
All of them use Docker technology.

I want to open the discussion about implementing other CIs for OCA and implementing runbot with Docker.

What do you think?

NOTE: We don't need to add anything to our projects, because the magic comes from MQT, travis2docker and .travis.yml. These tools already exist; it's just a matter of joining the puzzle.

@pedrobaeza (Member)

What are the advantages of this CI? Maybe as a backup?

@lasley (Contributor) commented Feb 11, 2016

I would be happy to assist in this, at least on the Jenkins/Bamboo side of things. They offer much better reporting than Travis due to XML integration, which Travis has advised they will never support.

Here's a comparison on vertical-medical:

@lasley (Contributor) commented Feb 11, 2016

Note: for compatibility, XMLRunner needed to be injected into the Odoo unit tests. It may be considered slightly hacky, but it's worth it to not have to read logs every time.

https://github.com/laslabs/docker-odoo-image/blob/feature/xmlrunner/odoo-shippable/files/entrypoint_image#L31
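Conceptually, the injection boils down to something like this (a minimal sketch only, assuming the unittest-xml-reporting package is installed; the actual mechanics live in the entrypoint linked above):

import unittest

import xmlrunner  # pip install unittest-xml-reporting

def _xml_test_runner(*args, **kwargs):
    # Drop TextTestRunner-specific arguments (stream, verbosity, ...) and
    # emit JUnit XML under test-reports/ instead of plain-text results.
    return xmlrunner.XMLTestRunner(output='test-reports')

# Swap the stock runner before Odoo builds and runs its test suites.
unittest.TextTestRunner = _xml_test_runner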

@max3903 (Sponsor Member) commented Feb 11, 2016

May I suggest we focus on completing the process up to the deployment step by providing packages (OCA/maintainer-tools#156) before multiplying tools for steps we already cover.

@moylop260 (Contributor, Author)

@pedrobaeza
Thanks for the reply.

The main advantages of integrating Docker into our CI tools are:

  • SSH connections into builds
  • Completely isolated builds
  • Avoiding duplicated work between runbot and the other CIs:
    • Currently we do the following manually:
      • Configuration of repository dependencies in runbot.repo
      • Installation of pip packages
      • Installation of apt packages
  • Letting the tools replay a build locally (with Docker)
    • Many times we get an error in runbot or Travis that we can't reproduce locally.
  • Testing with different environments
    • With a dockerized runbot we can specify a particular PostgreSQL, Ubuntu, Python... version

@pedrobaeza (Member)

This is indeed interesting, but as @max3903 says, if this is something to work on, it's better to concentrate efforts on the already started projects. If you have already developed it, you can always present a POC for anyone who wants to check it out.

@moylop260 (Contributor, Author)

@max3903 and @pedrobaeza
We can start with a dockerized runbot without any new development; all the pieces of the puzzle already exist:

  • travis2docker
  • MQT with support for travis2docker
  • runbot with support for travis2docker

If you give me access to a server, I can set up a real POC there.
Or I can create a droplet just for that.

What flexibility do we have for this?

@max3903 (Sponsor Member) commented Feb 11, 2016

@moylop260 We already have 2 servers running for Runbot, and today their cost is fairly predictable and under control.

The target of the next step/investment is to complete the deployment process with:

  • wheel packages available on PyPI
  • deb packages available on launchpad.net
  • rpm packages available on build.opensuse.org

Pessimistic scenario: we have to host all the packages on 1, 2 or 3 servers (higher cost).
Optimistic scenario: we reach the target with no recurring cost. In that case, we can re-allocate the budget to sprint organization, events, or an extra server for a dockerized runbot (but I can't guarantee any of those directions).

@lasley (Contributor) commented Feb 11, 2016

Regarding Bamboo: my team will be completing this project in order to get everything situated in our dev environments. Atlassian does offer free open source instances, so if this is something you would like to see, I will rig it up for us once we finalize everything in terms of the process (the major blocker is the lack of Lint/Flake integration).

@moylop260 (Contributor, Author)

@lasley
FYI, you can integrate Lint/Flake8 using: docker run ... -e LINT_CHECK="1" ...

@lasley (Contributor) commented Feb 11, 2016

@moylop260

Thanks, we checked that out, but the problem is that we want the full JUnit XML integration so that the lines get broken out of the logs into line items with their specific context. This is one of the major things that has been killing our productivity with the Travis workflow.

Our Docker image for the Bamboo integration replaces the standard unittest test runner with https://github.com/xmlrunner/unittest-xml-reporting in order to output a JUnit-compatible format and yield fancy build results.

This integration doesn't exist for Flake/Lint, so creating it is on our dev board.
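For reference, the swap itself is small; a standalone sketch with unittest-xml-reporting (the test class and output directory names here are just examples):

import unittest

import xmlrunner  # pip install unittest-xml-reporting

class SmokeTest(unittest.TestCase):
    def test_truth(self):
        self.assertTrue(True)

if __name__ == '__main__':
    # Writes one JUnit-compatible XML report per test class to test-reports/,
    # which Bamboo/Jenkins can then ingest as first-class test results.
    unittest.main(testRunner=xmlrunner.XMLTestRunner(output='test-reports'))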

@lasley (Contributor) commented Feb 11, 2016

We're looking for this instead of logs, essentially. The links go directly to the logs if needed:

[image: build results view with each test broken out as a line item]

@moylop260 (Contributor, Author)

@lasley
😍

@lasley (Contributor) commented Feb 11, 2016

Hahaha, you see my point! I will definitely share once this is all complete 😄

Jenkins does the same thing. Sadly, Travis said they don't plan on ever supporting it 😦 travis-ci/travis-ci#239

@moylop260 (Contributor, Author)

These reports let us avoid the current process:

  1. Open the runbot/Travis log
  2. Search for an error pattern
  3. Next, next, next...
  4. "I got it, I have my error"

Cool feature.
Another advantage of using other standard CIs.

@lasley (Contributor) commented Feb 11, 2016

Exactly. It also allows reporting based on the count of tests, both in an aggregate sense and by job (Unit, Flake, OCB, etc.). Travis is a serious pain to get any sort of metrics from.

@nhomar (Member) commented Feb 12, 2016

Quarantined - Skipped..... That's sorcery!!!! I love it!

@nhomar (Member) commented Feb 12, 2016

@moylop260 We should read the standard, because I think it is what Odoo's Runbot does in the logs link (as usual, without a standard).

And creating a new controller which shows such a report in runbot itself could give us both worlds: no new CI, but correct and better standardized output.

@lasley (Contributor) commented Feb 12, 2016

Integration into Runbot is exactly what we truly want too! It's really just a matter of parsing the XML output, then displaying it nicely. The spec is actually rather small; I just really suck at design: http://windyroad.com.au/dl/Open%20Source/JUnit.xsd
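To give an idea of how little parsing is involved, a sketch using only the standard library (the report path is hypothetical):

import xml.etree.ElementTree as ET

def parse_junit(path):
    # Yield (classname, test name, status) for every testcase in the report.
    root = ET.parse(path).getroot()
    # A report is either a bare <testsuite> or a <testsuites> wrapper.
    suites = [root] if root.tag == 'testsuite' else root.findall('testsuite')
    for suite in suites:
        for case in suite.findall('testcase'):
            if case.find('failure') is not None:
                status = 'failed'
            elif case.find('error') is not None:
                status = 'error'
            elif case.find('skipped') is not None:
                status = 'skipped'
            else:
                status = 'passed'
            yield case.get('classname'), case.get('name'), status

for classname, name, status in parse_junit('test-reports/TEST-SmokeTest.xml'):
    print(classname, name, status)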

@lasley (Contributor) commented Feb 12, 2016

(The Quarantined and Skipped voodoo is all Bamboo, though; it's pretty smart about which errors it shows you.)

@nhomar (Member) commented Feb 12, 2016

My friend @max3903, I think packages are OK, but that's not the only correct way to go.

Given the size of what we are doing, I think git + CI is the correct way to go in terms of better and more effective community management.

@lepistone (Member)

Hi!

I completely agree that the test output that Odoo produces is really too noisy.

IMHO, this can reasonably be made much better without moving away from text output.

At the moment, the internal Odoo test runner tries to convert output to logging (adding a lot of noise), then it tries to parse it, searching for words like "ERROR" and pasting tracebacks in the wrong place, and so on.

Some time ago I tried to just delete that code and make the test runner behave like the standard one (including printing instead of logging, and not trying to parse output), like this:

lepistone/odoo@a12327e

You can get even better text output by running the existing tests with nose (for example with the anybox.recipe.odoo "scripts" feature).

This is already much more understandable, IMO. I think the compact output, with a single dot per passed test, would be best.
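For the record, that compact form is just the stock unittest behaviour at verbosity 1:

import unittest

class ExampleTest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

suite = unittest.TestLoader().loadTestsFromTestCase(ExampleTest)
# verbosity=1 prints one dot per passed test (F for failure, E for error),
# with tracebacks grouped at the end rather than interleaved with the run.
unittest.TextTestRunner(verbosity=1).run(suite)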

In MQT I get the impression we do the same thing again: we parse again, searching for words like ERROR, and we try to paste the message at the end, messing up a result that was better to start with (at least for me).

My problem with XML reporting is that it depends on an application to see the results. I have nothing against it, but I consider it more important to have sane text output as a start, like unittest does by itself.

Thanks!


@lasley (Contributor) commented Feb 12, 2016

You're definitely right that we should start with sane results, then expand on that. IMO the Runbot log page is more of a hindrance than a help when there is a large number of errors, due to the way it is presented.

What were the results of your testing in the linked branch? It seems like the code there should work.

@lepistone (Member)

@lasley you get the standard output from the unittest module: it is printed instead of logged, it does not try to filter lines, attribute log levels, or write a summary at the end, and it does not have datestamps and log levels on every line.

@lmignon (Sponsor Contributor) commented Feb 15, 2016

@moylop260 IMO, it would be nice to refactor MQT into a set of pip-installable standalone scripts that can be launched in any Python env. MQT has been designed to work closely with Travis, but it lacks proper packaging and genericity. For example, I've refactored the way we install the server and run the tests into a single Python command: https://github.com/lmignon/buildbot-utils . This experimental refactoring lets us run the installation of an Odoo server and run the tests the same way as in Travis, whether in buildbot or on our own computer, with a simple command:

install:

. venv/bin/activate
test_odoo_server -s $SRC_DIR_TO_TEST -d $DB_NAME -v8.0 -i

test:

. venv/bin/activate
test_odoo_server -s $SRC_DIR_TO_TEST -d $DB_NAME -v8.0 -t

@lasley @lepistone I share your concern about the test output that Odoo produces. I've done some experiments to replace the test runner hardcoded by Odoo with the XMLTestRunner (https://code.launchpad.net/~acsone-openerp/openobject-server/7.0-xml-test-report). Other experiments exist, from Anybox, to at least display a well-formatted summary at the end of the log when running tests (http://bazaar.launchpad.net/~anybox/ocb-server/7.0-test-report/revision/5253).
IMO, these 2 experiments are too intrusive and are failing attempts to work around the non-standard way Odoo runs tests.
With Django, you can specify a custom test runner in your settings or on the command line:

./manage.py test --testrunner=green.djangorunner.DjangoRunner

But even if we were able to specify the class to use as the test runner, it remains a problem that Odoo creates one test suite per module. In the case where multiple modules are to be tested, we must aggregate the results to determine the final result of all the tests.
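A sketch of what that aggregation amounts to (the wrapper and names here are illustrative, assuming we can obtain one suite per module):

import unittest

def run_module_suites(suites):
    # Run one suite per Odoo module and merge the outcomes: the build is
    # green only if every module's suite is green.
    runner = unittest.TextTestRunner(verbosity=1)
    failures = errors = tests_run = 0
    for suite in suites:
        result = runner.run(suite)
        tests_run += result.testsRun
        failures += len(result.failures)
        errors += len(result.errors)
    print('ran %d tests: %d failures, %d errors' % (tests_run, failures, errors))
    return failures == 0 and errors == 0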

@lepistone (Member)

In the case where multiple modules are to be tested, we must aggregate results
to determine the final result of all the tests.

You're right, @lmignon: I was forgetting that point. So maybe there is a point in aggregating results. I'm unsure how we can avoid parsing the output, though (which is what I was hoping to avoid).

@lasley (Contributor) commented Feb 17, 2016

Unsure how we can avoid parsing output though

I feel like the answer lies in monkey-patching unittest.TextTestRunner to catch the result that is returned from the run method. In theory it would just be a matter of running the patch in the build environment. It still seems dirty, though.
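Something along these lines (a sketch of the patch only; where exactly to apply it in the build environment is left open):

import unittest

captured_results = []  # one TestResult per suite Odoo runs

_original_run = unittest.TextTestRunner.run

def _capturing_run(self, test):
    result = _original_run(self, test)
    captured_results.append(result)  # keep it for later aggregation/reporting
    return result

# Applied early in the build environment, before Odoo starts its test runs.
unittest.TextTestRunner.run = _capturing_run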

@nhomar (Member) commented Feb 17, 2016

@lmignon You are right; I mentioned that some months ago and proposed using the cookiecutter template, which complies with the very well documented "Open Source In the Right Way" strategy.

I think it means 2 things:

  1. Convert it into a pip-installable system which can be autotested (as it is now) and which complies with some sort of standards.
  2. Solve the specific topic of this issue (that means enabling the ability to run more generically, as you mentioned).

I totally agree with you in terms of packaging. The only point I fear a little is the "split" into several scripts; I think we should even join MQT and MT into only 1 package for everything, but that is another topic.

@moylop260 (Contributor, Author)

FYI: as a proof of concept (POC) of the runbot_travis2docker module, which reuses the MQT scripts in runbot and brings Docker to runbot, an instance of runbot with many OCA projects was created at:

http://runbot.vauxoo.com:58069/runbot

Your feedback is welcome.

NOTE: The numbers of workers and builds are very small, just for the POC, and you will need to manually add :58069 to all links.

@alan196 commented Apr 28, 2016

I have created a video that shows how to install and use Runbot with the runbot_travis2docker module. I hope this video helps to understand how the module works.

@pedrobaeza (Member)

Closing this old discussion. Any news or advances will be discussed in the proper PR.

@moylop260 (Contributor, Author)

FYI, this issue is finished, since @gurneyalex told us that the OCA runbot server now has runbot_travis2docker installed.
Thanks @alan196 @lasley @jjscarafia for the contributions and support from the OCA sprint 2016 in Belgium.

Thanks @gurneyalex for the great work on the OCA runbot server to migrate and deploy it.
