Changed the implementation of Containers.migration to match the 'lxc … #319
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master     #319      +/-   ##
==========================================
+ Coverage   96.84%   96.87%   +0.03%
==========================================
  Files          11       11
  Lines         918      928      +10
  Branches      106      108       +2
==========================================
+ Hits          889      899      +10
  Misses         10       10
  Partials       19       19
Continue to review full report at Codecov.
Force-pushed from 390e58f to 39d4569.
Thanks for submitting the PR. It's a great start. It would be very good to have both unit tests and an idea of an integration test for it, please. Many thanks.
Needs unit tests and (hopefully, if possible) an integration test as well.
Thanks @ajkavanagh, I'll do that; just two questions: …
Hi @gabrik. To run the unit tests you need … To run the integration tests you just need a machine with lxd installed on it. I use libvirt virtual machines with the snap version of lxd installed, so that I can test 2.0.x and 3.0.x as well as the more recent versions. The integration test runner creates a privileged lxd container and then runs the tests inside that, so as not to affect the host machine. As an example, take a look at https://github.com/lxc/pylxd/blob/master/integration/test_containers.py, which is the integration test for containers. Come back to me if you have any further questions.
Thanks @ajkavanagh. I'm having some issues adding these tests; I have added some other tests on the migration.
@gabrik It's hard to see, but I suspect, from briefly looking at the tests, that the …
@ajkavanagh yes, I think so. I have added a test in which the container is started and then migrated; in this case the test should also enter the …
Hi @gabrik -- I've been away for a while. Anyway, back now. The main issue with the unit test coverage is that the unit tests are not hitting the exception at line 429. The test code needs to get the …

Also, checking for a string in the error message is probably a bit brittle. I suspect, reading the docs, that in the handler …

Thanks.
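To illustrate the testing pattern being asked for here, this is a minimal sketch (not pylxd's actual code) of forcing the first migration call to raise so that the exception handler's fallback path is exercised. The class, method names, and the stop/migrate/start fallback sequence are hypothetical stand-ins for the code under discussion.

```python
# Sketch: drive an exception handler in a unit test by making the first
# call raise. FakeLXDAPIException and migrate_with_fallback are stand-ins.
from unittest import mock


class FakeLXDAPIException(Exception):
    """Stand-in for the API error the handler at line 429 catches."""


def migrate_with_fallback(container, dest_client):
    """Try a live migration; on failure, stop, migrate cold, restart."""
    try:
        container.migrate(dest_client)
    except FakeLXDAPIException:
        container.stop(wait=True)
        container.migrate(dest_client)
        container.start(wait=True)


# Unit test: the first migrate() raises, so the handler must run.
container = mock.MagicMock()
container.migrate.side_effect = [FakeLXDAPIException("running"), None]
migrate_with_fallback(container, dest_client=mock.MagicMock())

container.stop.assert_called_once_with(wait=True)
container.start.assert_called_once_with(wait=True)
```

Driving the branch via a raised exception (rather than matching error-message strings) avoids the brittleness mentioned above.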
Hi @ajkavanagh, thanks for the response.
@gabrik, in the test function, you'll need to mock out …
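The comment above is truncated, so exactly what needs mocking is lost; as an illustration of the technique, this sketch patches a hypothetical helper on the instance with `unittest.mock.patch.object` so the test never contacts a real LXD server. Both the `Container` class and `_collect_migration_data` are invented here for the example.

```python
# Sketch: patch a (hypothetical) helper on an instance so the test controls
# the data the code under test receives, with no real LXD server involved.
from unittest import mock


class Container:
    def _collect_migration_data(self):  # hypothetical helper name
        raise RuntimeError("would contact a real LXD server")

    def migrate(self, dest_client):
        data = self._collect_migration_data()
        return (dest_client, data)


c = Container()
with mock.patch.object(c, "_collect_migration_data",
                       return_value={"architecture": "x86_64"}):
    dest, data = c.migrate("dest-client")

assert data == {"architecture": "x86_64"}
```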
@ajkavanagh
This is great. Okay, I'll pull your branch down and do some integration testing with it (which I need to automate and add to the repo, and should have it done by the end of September). If all is good, it'll be merged. Thanks!
Actually I'm also updating the script used for the integration tests, so as to have two containers for testing the migration. I'll ping you when I finish.
@ajkavanagh
Okay, based on lxc/lxd#4980 we can't have a direct integration test lxd-in-lxd, but only on metal. So drop the integration test into a separate `run_migration_integration_test-18-04` file and we'll just have to run it on metal (I have access to a MaaS cluster). Also, sadly, your branch has overwritten the symlink … Let's get those fixed up and then we can merge. Thanks.
@ajkavanagh
I'm still getting problems with testing the containers. The run_integration_tests link gives me: …

How did you test it (i.e. what was your setup)?
I had 2 VMs with LXD installed. I created a simple alpine linux container on the first one and then migrated it 5 times using the pylxd api. From what I see, the error is in the configuration of the network. In https://github.com/gabrik/pylxd/blob/master/run_migration_integration_tests-18-04 I just create 2 containers on the same network, with addresses 10.0.3.111 and 10.0.3.222, and then a container should be migrated between the two.
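The manual test described above can be sketched with pylxd's public API (`pylxd.Client`, `Client.containers.get`, `Container.migrate`). The endpoints match the ones in this thread; the container name is illustrative, and the wiring is shown only in comments because it needs two reachable LXD daemons.

```python
# Sketch: migrate a container from a source LXD host to a destination host.
def migrate_between_hosts(src_client, dest_client, name):
    """Fetch the container on the source host and migrate it to the destination."""
    container = src_client.containers.get(name)
    return container.migrate(dest_client, wait=True)

# Typical wiring (requires two reachable LXD daemons, so not run here):
#   from pylxd import Client
#   src  = Client(endpoint="https://10.0.3.111:8443", verify=False)
#   dest = Client(endpoint="https://10.0.3.222:8443", verify=False)
#   migrate_between_hosts(src, dest, "alpine-test")
```

Repeating the call with source and destination swapped would reproduce the "migrated it 5 times" loop.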
@ajkavanagh after reading the errors with a fresh mind I can figure out where my error is: these test cases are in the same file as the ones that run fine in LXD-in-LXD, and when you use the current script it does not create the 2 containers with those IP addresses. A solution could be to move the migration tests into a separate set from the integration ones?
@gabrik So the problem is that I'm running the … Can you run the …
@ajkavanagh I have updated the tests; I was mixing the tests, which is why it was not working.
@gabrik I really do want to get this landed! :) However, I've just done some minor changes to master (please take a look) that move the integration test runner scripts into the integration directory -- this, sadly, breaks the PR, which means it needs to move things around too.

The background to this work is that I'm getting the integration scripts to work in libvirt on a CI system, so that pylxd will be tested against the master and stable branches of lxd as lxd has new code/changes committed. The other aspect is that the integration scripts (currently) expect a pre-configured LXD environment so that they can launch containers. Moving forward, when the integration script(s) are run automatically, this won't be the case; I'm doing some work to configure a "fresh" xenial/bionic machine so that the integration scripts can be run.

To cut a long story short, I think you need to: …
I'm (trying to) test the migration tests and will post a yay/nay here as soon as I've got the "configure a base machine" script going (today!). Then we'll get this branch landed. And then I'll do the work to move the script to the 'correct' place and integrate it so that the CI system will run it every day in a configured environment. I hope this is okay!?
Commits (each signed off by gabrik <gabriele.baldoni@gmail.com>):
- …ption
- …container is already running
- …e on LXD main repo
- …ide LXD
@ajkavanagh now everything should be ok ;)
@ajkavanagh can we release 2.2.8 after we merge this change? Really looking forward to the container execute method fix; we've been waiting for it for a while.
@ajkavanagh any update? We have been waiting for more than 45 days now with issues in pylxd 2.2.7.
@ajkavanagh any news about this PR?
So I took another crack at testing this branch, but no luck so far. @gabrik please could you list the steps you did to run the …
Sure @ajkavanagh. To run the integration tests:

- I created a VM with Ubuntu 18.04 and the latest LXD from snap.
- In this LXD I created a network with this address: …
- Then I created two Ubuntu 18.04 containers, both connected to this network, with addresses …
- In both containers I installed …
- Then I configured LXD to be accessible from the network using these commands: …
Then I started the test on the first container, which will use the second one only for testing the migration. Let me know if you need any other help.
@gabrik so I spent a good portion of today trying to get this working. I'm failing with CRIU errors on an Ubuntu 18.04 LTS host with CRIU 3.6.2. I'm going to need more details of how you tested this: …
I'm also surprised that this worked from within a container; AFAICT CRIU migration doesn't work inside a container -- but that may have changed. It appears that lxd migration (stateful) is still very, very experimental, but I at least have to have managed it once before I can merge the code.
Sorry @ajkavanagh, my mistake: I created 2 VMs and configured them in the same way as the two containers, but I tested the migration manually, not using the script (because I do not have access to something that can provision VMs in a similar way). So I think that if you have access to an OpenStack or something like that, you can create 2 VMs with the same configuration I described in the previous comment, and the tests should pass.
@gabrik please, please, may I have the exact specs that you are using to do the test? If I can't replicate it, I can't merge it. CRIU is experimental and flaky, but I at least have to be able to replicate what you've been doing to have confidence that the code works. Thanks.
Sure: …
Okay, I've now tested this and it works; the CRIU issues are incidental to the code, and it would work if CRIU were stable (which it isn't). Stopping a container, then migrating, and then restarting the container (i.e. without CRIU) works fine. I'm going to merge this once the conflict is resolved. Thanks very much for the patch.
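The stop/migrate/restart sequence that worked in the test above (i.e. without CRIU) can be sketched against pylxd's API as follows; `src_client` and `dest_client` are assumed to be `pylxd.Client` instances, and the helper name is made up for the example.

```python
# Sketch: cold-migrate a container (no live state, so CRIU is not involved).
def cold_migrate(src_client, dest_client, name):
    container = src_client.containers.get(name)
    container.stop(wait=True)                     # stop first: nothing to checkpoint
    migrated = container.migrate(dest_client, wait=True)
    migrated.start(wait=True)                     # restart on the destination host
    return migrated
```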
This should definitively solve issue #315.
The behaviour of migration will be the same as using …
Also added myself to contributors.rst.
Hope this is useful!