How to perform CI testing with integration tests? #19
Hey @ihmccreery. Also, I think that option 1 is the one that best replicates production without losing sustainability, but I would like to discuss it further. =)
Has anyone explored, as far as you know, using secure variables on Travis to run live integration tests? Of course this will not run on untrusted builds (i.e. not on pull requests), so we still need to cover the code with unit tests, but it provides some level of live integration testing, which is nice.
@tokengeek I know you've wrestled with this before (fog/fog#2112). If you have input, please chime in.
@ihmccreery Not that I'm aware of. The main issue is that we can't run tests on PRs, so we don't know whether a change is OK. =s EDIT: Even with this secure feature, a malicious user would be able to access your provider and create unintended stuff there.
@plribeiro3000 Can you say more about your previous edit, "... a malicious user will be able to access your provider and create unintended stuff there"?
Sure. A user would be able to create resources on your account. They have fog and access to your account, so they can create a VM there.
Hm. I'm not sure I understand. At least in theory, we should be able to encrypt our credentials using Travis's secure variables, so that they are not available to anyone except those who have admin access to the Travis account (i.e. committers to the project).
Yeah. The user won't be able to get your credentials, but they will be able to create a VM, SSH into it, and push some malicious code there. Thanks to fog, they won't need to know your credentials if they can get a script to run with its permissions. Instead of a real change, a person could submit this code:

```ruby
conn = Fog::Compute.new(provider: "Google") # open a connection with the repo's credentials
server = conn.servers.create(options)
server.ssh do
  # some malicious stuff
  # like host a porn website
end
```

Until you get to it, your account will be hosting bad stuff.
Doesn't this prevent that, as long as we don't pull that code into the main repo? (From the docs.)
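To make that concrete, here is a minimal sketch of how a live suite can guard itself (the method and variable names are illustrative, not fog-google's actual setup): Travis only decrypts secure variables for trusted builds, so on a pull request the credentials are simply absent and the live tests can skip themselves.

```ruby
# Hypothetical guard for live integration tests. GOOGLE_JSON_KEY_PATH is an
# illustrative variable name; Travis withholds encrypted variables on
# untrusted (pull-request) builds, so it would be unset there.
def live_credentials?
  !ENV['GOOGLE_JSON_KEY_PATH'].to_s.empty?
end

if live_credentials?
  puts 'credentials present: running the live integration suite'
else
  puts 'no credentials (e.g. an untrusted PR build): skipping live tests'
end
```

With a guard like this, untrusted builds still run the unit tests and simply report the live suite as skipped rather than failed.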
@ihmccreery Yes, that's what @plribeiro3000 is saying. Since Google has a free tier (or if they just want to give us a whitelisted account with a small quota), I think this is the best solution. Things that will happen that are probably reasonable risks:
@ihmccreery I think the point is: if you don't run the tests on untrusted builds (builds for open pull requests), how are you going to check whether the tests pass for the submitted patch? If you have to go and run the tests yourself, then we are going to lose the advantage of having CI. =s The only two ways I can think of to solve this are to use an automated tool like

Just like a friend of mine always says:

cc/ @matiasleidemer
I'm not sure how we should handle this kind of testing. I feel that we sometimes miss tests that hit the API for real, but I don't feel safe with any of the possible ways to do so. Perhaps we could use
The correct way to do this, in my opinion, is to run unit & functional

The fact is, though, that the correct way is not always feasible—unit &

Make sense?

On Tue, Aug 4, 2015 at 8:26 AM, Paulo Henrique Lopes Ribeiro wrote:
Yeah, I do agree. The only issue with your suggestion is how it would help my PR (#57) not break everything without a merge. Let's try to make it simpler before we actually take some action. Why is
Can we reopen this discussion? I thought I might take a crack at updating the google-api-client to 0.9, but I'm unable to test it easily. Specifically, all the tests run against the mock implementations. There's some integration code in the examples/ directory, but it's very annoying to run (I have to configure credentials, then run each file individually via `bundle exec ruby ` after modifying the file itself to disable mocking). It would be nice to have VCR functionality for some sort of integration test that at least verifies that everything is working beyond the mocks. Is anyone opposed to having some sort of VCR setup?
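For readers unfamiliar with the approach, here is a hand-rolled sketch of the record/replay pattern behind VCR (this is not the VCR gem's actual API; `with_cassette` and the fake response are illustrative): the first run records the live response to a "cassette" file, and later runs replay it without touching the network.

```ruby
require 'json'
require 'tmpdir'

# Minimal record/replay sketch of what VCR does with HTTP cassettes.
# The block stands in for a live API call; it only runs when no
# recording exists yet.
def with_cassette(path)
  if File.exist?(path)
    JSON.parse(File.read(path))           # replay the recorded response
  else
    response = yield                      # perform the live call once
    File.write(path, JSON.generate(response))
    response
  end
end

Dir.mktmpdir do |dir|
  cassette = File.join(dir, 'list_servers.json')
  live   = with_cassette(cassette) { { 'name' => 'test-vm', 'status' => 'RUNNING' } }
  replay = with_cassette(cassette) { raise 'network must not be hit on replay' }
  puts replay['status'] # => RUNNING, served from the cassette file
end
```

The appeal is that PR builds can exercise the full request/response path without credentials; the cost, as discussed below, is that the recordings drift from the real backend over time.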
@selmanj The problem with VCR is that the tapes bit-rot quite fast and don't catch errors or changes on the backend. I would rather see someone fix the integration tests, to be honest (not the
I can look at fixing those up. I'm curious, though, what you mean by the VCR tapes suffering from bit-rot; technically they are recorded against a specific version of the Google API, so we should be safe in assuming that the API remains fairly unchanged, right? Any breakage due to bit-rot is likely a problem with the Google API no longer behaving correctly for a fixed version. Regarding VCR not testing errors or changes on the backend: this is true, but again, the fixed API version should mean that such changes do not occur. In fact, if this does happen, I would prefer to have the VCR tapes, since they provide a clear example of the breaking change.
@selmanj
With that being said: given the currently very limited resources on the project, if someone does want to help with tests, I would prefer them to finish the integration tests first so we have proper coverage against a real backend. Otherwise we may end up in a situation where we have yet another unfinished test suite on the project. I have no issue if you want to introduce VCR tests afterwards, though. Makes sense?
@Temikus I agree. Having working integration-level tests is definitely the first step.
I'm going to propose the following general strategy for tests in minitest going forward:
I'm working on 1 and 2 above right now for compute. Personally, I'd prefer to have more mock-based unit tests for some of the edge conditions where the code gets complicated, since integration tests can be very slow.
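As a sketch of the mock-based unit-test side of that strategy (`FakeCompute` and its method are hypothetical stand-ins, not fog-google's real classes), a fast minitest case can exercise lifecycle logic against a fake backend with no credentials or network:

```ruby
require 'minitest/autorun'

# FakeCompute is an illustrative mock backend: it answers like the real
# service but runs in-process, so the test completes in milliseconds.
class FakeCompute
  def insert_server(name)
    { name: name, status: 'RUNNING' }
  end
end

class ServerLifecycleTest < Minitest::Test
  def test_insert_returns_running_server
    server = FakeCompute.new.insert_server('test-1')
    assert_equal 'RUNNING', server[:status]
    assert_equal 'test-1', server[:name]
  end
end
```

Tests like this cover the complicated edge conditions cheaply, while the slower live integration suite verifies the real backend.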
@selmanj LGTM, let's get this ball rolling. On 1: if we're testing requests, I would go for more coverage, even if it's simple. So it's definitely important that each service is covered, be it via a workflow or a request test.
Integration tests are live alongside unit tests \o/ Travis now runs the unit tests as well, helping us verify that newly submitted models conform to basic lifecycle requirements.
Per #18.
I've written an integration test framework and a suite of tests that work against a live setup, and we need to figure out how to get those tests to not fail on Travis CI. We need these tests run regularly to make sure that the library actually works against the current API. Options I can think of:
@plribeiro3000 Your thoughts would be helpful here. Does the Fog community, as far as you know, have working solutions to this problem?
@erjohnso Do you have suggestions, based on how other projects are doing this?