Add support for automatic tests (CI) #213
Won't the compilation fail if it doesn't fit in the boot section specified in the Makefile? If so, this item is also covered by #212.
What's the difference between 4 and 5? Regarding 4/5, I'm picturing each job cloning https://github.com/Optiboot/optiboot.git into a subfolder and redoing the same compilation there. Probably the simplest comparison would be to run avr-size on both generated .hex files. I'd also suggest comparing the warning counts: new warnings should not be tolerated unless there is a good justification for them.
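The warning-count part of that comparison can be sketched as below. The sample logs are stand-ins for real `make` output captured from this branch (`build.log`) and from the freshly cloned upstream (`upstream.log`); in a real job, `avr-size` on the two generated .hex files would cover the size half. Filenames and contents here are assumptions for illustration.

```shell
# Fake two compiler logs so the gate can be demonstrated outside of CI;
# in Travis these would be captured from the two 'make' runs.
printf 'foo.c:1: warning: unused variable\n' > upstream.log
printf 'foo.c:1: warning: unused variable\nbar.c:2: warning: shadowed\n' > build.log

# Count warning lines in each log.
new_warn=$(grep -c 'warning:' build.log)
old_warn=$(grep -c 'warning:' upstream.log)

# Fail the build if this branch introduces more warnings than upstream has.
if [ "$new_warn" -gt "$old_warn" ]; then
  echo "FAIL: warnings went from $old_warn to $new_warn"
else
  echo "OK: no new warnings"
fi
```

Run on the sample logs above, this prints `FAIL: warnings went from 1 to 2`.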
You are right, it should fail at the linking stage.
For 4, I'm thinking more of checking md5sums, to confirm that the result really is identical. There is a lot of work to do on Optiboot involving some reorganization of the code, moving sections between files, etc., so it's good to be sure that nothing changes in the real code. For example, something could be changed in the BIGBOOT targets while the others shouldn't change. 5 is just a comparison to check how real changes (or the compiler) influence the size of the code. About the implementation of 4/5: I'm thinking of setting up an external service which could aggregate data from many builds and targets and then post it back to the commit/pull request as one nice comment or review. So I don't want to compile multiple commits in one go to make the comparison. I think it's doable with Amazon Lambda and DynamoDB within the free-of-charge limits, but that's another story :-)
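The md5sum check from point 4 could look roughly like this. The filenames and the one-line .hex contents are hypothetical stand-ins; the idea is simply that a target whose code was not touched must produce a bit-identical .hex, so its checksum must match the reference build's.

```shell
# Stand-in files: a reference build's .hex and the current build's .hex
# for the same (untouched) target. In CI these would come from two builds.
printf ':00000001FF\n' > reference_atmega328.hex
printf ':00000001FF\n' > current_atmega328.hex

# Extract just the checksum field from md5sum's output.
ref_sum=$(md5sum reference_atmega328.hex | cut -d' ' -f1)
cur_sum=$(md5sum current_atmega328.hex | cut -d' ' -f1)

# An untouched target must be bit-identical to the reference.
if [ "$ref_sum" = "$cur_sum" ]; then
  echo "atmega328: unchanged"
else
  echo "atmega328: CHANGED"
fi
```

With identical inputs, as above, this prints `atmega328: unchanged`.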
Good idea!
I was hoping to reduce the incidence of .hex files in the repository, since they really clutter up the diffs, and since (at least theoretically) they could be included in the "releases" instead. Can the tools extract code from one place and "comparison binaries" from elsewhere? Are there any tutorials for Travis-CI?
Yes, that's the direction I want to go. As for documentation: it's at https://docs.travis-ci.com/
Well, a script running in Travis can download a release file and compare its .hex with the freshly generated .hex. Or, before compiling, Travis could do a second checkout of the repository. One build step could be: check out the release, compile, check out the latest commit, compile, then compare both. A simple script can check, after a successful compilation, whether the current build is on the master branch and carries a version tag. If so, zip the results and upload them to the proper release space. That's what CI means ;)
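The "upload only from a tagged master build" condition can be sketched with the `TRAVIS_BRANCH` and `TRAVIS_TAG` variables that Travis-CI exposes to build scripts. The values below are faked so the logic can be shown outside of CI; note that on real tag builds Travis sets `TRAVIS_BRANCH` to the tag name, so a production check might test `TRAVIS_TAG` alone.

```shell
# Faked Travis environment for demonstration purposes only.
TRAVIS_BRANCH=master
TRAVIS_TAG=v8.0    # hypothetical version tag

# Deploy only when building a tagged commit on master; the deploy step
# itself (zip the .hex files, upload to the release) is left as a comment.
if [ "$TRAVIS_BRANCH" = "master" ] && [ -n "$TRAVIS_TAG" ]; then
  deploy=yes   # e.g. zip build artifacts and upload them to the release
else
  deploy=no
fi
echo "deploy=$deploy"
```

With the faked values above, this prints `deploy=yes`.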
Good work! One question: in the early discussion you mentioned using PlatformIO, but I notice you ultimately opted not to use it. Just to satisfy my curiosity, is there a reason you followed another path?
@jrbenito, I thought about PlatformIO because at first glance it looked like a way to do the tests. Then I learned more about Travis-CI and found out that it's easier and faster to fetch only the Arduino package and unpack its avr-gcc tools, and the environment is ready. Less to download, fewer things to configure, everything is simpler, and the build is faster.
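A minimal `.travis.yml` along those lines might look like this. The Arduino version, download URL, paths, and target are assumptions for illustration, not the project's actual configuration; the point is only that the avr-gcc toolchain ships inside the Arduino tarball.

```yaml
language: c
install:
  # Fetch an Arduino release only to reuse its bundled avr-gcc toolchain
  - wget https://downloads.arduino.cc/arduino-1.8.5-linux64.tar.xz
  - tar xf arduino-1.8.5-linux64.tar.xz
  - export PATH=$PWD/arduino-1.8.5/hardware/tools/avr/bin:$PATH
script:
  # Hypothetical example target; a real job would loop over many targets
  - make -C optiboot/bootloaders/optiboot atmega328
```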
Everything from the initial list is done and is in #246 (testing PRs are not tested). @WestfW, please install the https://github.com/marketplace/travis-ci application on this repository to make it work. Choose the Open Source plan to use it for free.
This is example output from the size/compilation check (I broke some virtual boot targets in this build), pasted into GitHub, with emoji :-)
New update for Travis-CI: #257
It would be good to use some recent development practices in Optiboot :-)
Automatic tests should make maintaining this project much easier.
What can be done:
There are free services which can be used for this purpose, like https://travis-ci.org/ or https://circleci.com/
Not everything is achievable out of the box, but even implementing a small subset of automatic tests should help this project.