
Discussion: Continuous Integration tools & features #130

Closed · virtualtam opened this issue Feb 26, 2015 · 11 comments
Labels: cleanup (code cleanup and refactoring), discussion, tools (developer tools)

@virtualtam (Member) commented Feb 26, 2015

This issue focuses on adding Continuous Integration support for Shaarli (and is a bit Travis-oriented, too).

For a PHP website, CI is relevant for checking:

  • that the code is clean (static analysis),
  • that unit and functional tests pass,
  • how well the codebase is covered by tests (code coverage).

Travis CI is free for Open Source projects; you can register through GitHub authentication. Once enabled, builds are triggered on any branch, i.e. both on merges to master and on pull requests.
Note that you can customize badges to reflect the build status of a given branch.
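For instance, assuming the travis-ci.org badge URL scheme of the time and the shaarli/Shaarli repository path, the README badge Markdown could look like this:

```
[![Build Status](https://travis-ci.org/shaarli/Shaarli.svg?branch=master)](https://travis-ci.org/shaarli/Shaarli)
```

Changing the branch query parameter points the badge at another branch's build status.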

Note: Coveralls is a similar service that checks test coverage and displays a badge as well. IMHO, it is a bit of a gadget when everything can be run within a Travis job.

The build status reflects the exit code of the executed instructions; typically you'd want:

  • static analysis to be non-blocking,
  • compilation tasks to be blocking,
  • unit and functional tests to be blocking.
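As a rough sketch, a .travis.yml along these lines would implement that policy; the PHP versions and tool invocations are illustrative assumptions, not Shaarli's actual configuration:

```yaml
language: php

php:
  - 5.4
  - 5.5
  - 5.6

script:
  # non-blocking: "|| true" discards the exit code, so style warnings
  # are reported without failing the build
  - phpcs --standard=PSR2 index.php || true
  # blocking: a failing test suite fails the build
  - phpunit --coverage-text tests
```

Travis marks the build as failed when any script command exits non-zero, so appending `|| true` keeps a step informational while a bare command remains blocking.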

Given that Shaarli is quite a small project, a single build job running the checks listed above should suffice.

More advanced features could be achieved with a dedicated CI server (Jenkins, Buildbot), such as:

  • mainline (master) stats / graphs:
    • number of static checker warnings (Jenkins: Violations)
    • number of unit tests, and how many passed & failed (Jenkins: JUnit)
    • percentage of code covered by tests (Jenkins: Cobertura)
@e2jk commented Feb 27, 2015

Great initiative, let's do this.

@ArthurHoaro (Member) commented

I have never used Travis, but I see it often; it's probably good enough. Jenkins might be a bit overkill and a bit hard to maintain. Coveralls is used to get the unit test coverage of the code; I don't know whether Travis can do that. Good work anyway!

@nodiscc (Member) commented Feb 27, 2015

Related: #124

  • code is clean (static analysis): yes, we should check for that, but before submitting pull requests; Travis only does it after the fact.
  • unit and functional tests are passing: we don't have any tests
  • how well the codebase is covered by tests (code coverage): we don't have any tests

@nodiscc added the discussion and cleanup labels on Feb 27, 2015
@nodiscc (Member) commented Mar 4, 2015

I think it will be safe to close this once #124 is merged. Are there remaining points we need to discuss? Otherwise we may continue in #95.

@virtualtam (Member, Author) commented

As Travis support is not likely to be added in #124, I suggest we keep this issue open to define:

  • what should be tested (features),
  • which PHP versions are expected to be supported by Shaarli,
  • which tasks should be run in a CI tool.

@ArthurHoaro (Member) commented

Also, how to organize the code to allow unit testing.

@virtualtam (Member, Author) commented

It's not that straightforward; it's actually the other way around :)

Here's an efficient workflow for bringing tests to existing software:

  • identify core / sensitive features,
  • cover them with tests that are as relevant as possible:
    • nominal usage: what is a method supposed to do?
    • nominal error handling: what is likely to break often?
    • borderline cases: erroneous values, invalid arguments, etc.
  • this will likely highlight issues & flaws in the original code,
  • fix 'em all!

Having a good coverage before doing any refactoring makes the process way easier: any new issue or behavior alteration will be instantly spotted!
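As a minimal sketch of what such a test could look like with PHPUnit, here is a hypothetical trimUrl() helper (defined inline so the example is self-contained; it is not an actual Shaarli function), covered for a nominal and a borderline case:

```php
<?php
// Hypothetical helper under test: strip surrounding whitespace
// and a trailing slash from a URL.
function trimUrl($url)
{
    return rtrim(trim($url), '/');
}

class TrimUrlTest extends PHPUnit_Framework_TestCase
{
    // nominal usage: what is the function supposed to do?
    public function testTrimsWhitespaceAndTrailingSlash()
    {
        $this->assertEquals('http://example.com', trimUrl(' http://example.com/ '));
    }

    // borderline case: empty input should stay empty
    public function testEmptyStringStaysEmpty()
    {
        $this->assertEquals('', trimUrl(''));
    }
}
```

Running phpunit against this file executes both cases and reports any regression immediately.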

Then, refactoring may happen as follows:

  • think of a cool, elegant new design,
  • update test code to fit the expected design
    • all tests will break!
  • update / refactor the code until all tests are passing
    • again, issues may be spotted, covered and fixed.

Tests specify what the software is expected to do and how it is used; hence their importance ;-)

As an example, one of the major projects I've been working on for the last couple of years:

  • started as a conglomerate of Bash, Perl and AWK scripts
    • with lots of unreadable (at least for a normal human being) instructions: crazy regular expressions, parameter substitutions, global / env vars, obscure command chaining...
    • which made spotting and solving issues a constant pain
  • ... then moved to more uniform Python scripts
    • with static analysis,
    • with growing test coverage,
  • ... and is slowly starting to adopt a test-driven-development process
    • think of a feature,
    • write the tests that represent the feature,
    • write the code that makes the tests pass.

What unit and functional tests have brought:

  • instant spotting of any regression,
  • hyper-stable codebase,
  • bugs we're facing are mostly minor, and can be fixed very quickly,
  • easy refactoring,
  • developer serenity ;-)

@nodiscc (Member) commented Mar 6, 2015

> test-driven-development process

This also adds workload (I don't dispute that it's a best practice). We can't expect more contributors if the contribution process gets more complicated.

Your move from Bash/Perl/AWK to standardized Python scripts was probably the biggest improvement there. Adding tests is OK if you want to do things the right way™, but come on, we're working on a 2500-line PHP script here... In my eyes it's really low priority compared to other code cleanup tasks.

@virtualtam (Member, Author) commented

Well, let's do these 2500 lines of code the right way then :D

Oh, and the nice things about testing:

  • writing them is not that hard, just time-consuming at first,
  • it makes contributing simpler:
    • for the contributor: tests tell you whether your contribution is good/stable enough for submission,
    • for reviewers: you can see what the PR brings and how it works,
  • it won't prevent people from contributing; quite the opposite, actually: it shows you care about stability.

@nodiscc (Member) commented Mar 6, 2015

I agree. A patch for a simple, initial test would be accepted.

@virtualtam (Member, Author) commented

Closing this issue, as significant efforts are being made towards a clean, robust and well-documented codebase. Thanks to all contributors & issue reporters ;-)

@virtualtam added the tools label on Dec 2, 2017