examples_tests: fix the libpipeline test. #326
Conversation
I ended up doing a massive refactor because I hated writing the installed tests. I'm sorry about the size of this one.
fgimenez reviewed on Feb 22, 2016
@@ -0,0 +1,161 @@
+# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
+#
+# Copyright (C) 2015, 2016 Canonical Ltd
elopio (Member) on Feb 23, 2016

The file is new, but I started writing this code last year and I'm just refactoring it now, so as I understand it, it should have both years.
fgimenez reviewed on Feb 22, 2016
+    # per execution.
+    config['snappy_image'] = snappy_image
+    # Delete the image when the execution exits.
+    atexit.register(shutil.rmtree, temp_dir)
elopio (Member) on Feb 23, 2016

The idea is to create the image only once per execution, because that's what takes the most time. I tried using setUpModule, but with this refactor that would mean once per example, which is also a lot of time. So the remaining options were to do it in main.py, or to do a lazy creation and delete the image when the main thread exits. I liked this last one a little more, but I'm OK with changing it if you have a better option in mind.
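For illustration only, a rough sketch of that lazy-creation-plus-atexit pattern (the `get_snappy_image` name and module-level `config` dict are hypothetical; the `ubuntu-device-flash` call mirrors the one in the diff below):

```python
import atexit
import os
import shutil
import subprocess
import tempfile

# Hypothetical module-level cache shared by all tests in one execution.
config = {}


def get_snappy_image():
    """Create the snappy image lazily, only the first time it is needed."""
    if 'snappy_image' not in config:
        temp_dir = tempfile.mkdtemp()
        image_path = os.path.join(temp_dir, 'snappy.img')
        subprocess.check_call(
            ['sudo', 'ubuntu-device-flash', '--verbose',
             'core', '15.04', '--channel', 'stable',
             '--output', image_path, '--developer-mode'])
        config['snappy_image'] = image_path
        # Delete the image directory when the main thread exits.
        atexit.register(shutil.rmtree, temp_dir)
    return config['snappy_image']
```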
fgimenez reviewed on Feb 22, 2016
@@ -0,0 +1,34 @@
+# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
+#
+# Copyright (C) 2015, 2016 Canonical Ltd
fgimenez reviewed on Feb 22, 2016
@@ -0,0 +1,112 @@
+# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
+#
+# Copyright (C) 2015, 2016 Canonical Ltd
fgimenez reviewed on Feb 22, 2016
+            'custom libpipeline called\n'
+            'command-pipelinetest.wrapper\n'
+            'include\n')
+        self.assertEqual(output, expected)
fgimenez reviewed on Feb 22, 2016
+        snap_local_path = os.path.join(
+            'examples', example_dir, snap_file_name)
+        self.snappy_testbed.copy_file(snap_local_path, '/home/ubuntu')
+        self.snappy_testbed.run_command([
fgimenez (Contributor) on Feb 22, 2016
Maybe you could check the output from run_command here, or catch any possible exception.
elopio (Member) on Feb 23, 2016
I pushed a check for the output. Any possible exception will be raised and the test will fail, which I think is enough.
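For illustration only, a rough sketch of the shape such an output check can take in a unittest-based test (the expected strings come from the diff hunk above; the stub stands in for the real SSH call, which is not the PR's actual code):

```python
import unittest


class InstalledSnapOutputTest(unittest.TestCase):
    """Illustrative output check; not the PR's actual test code."""

    def run_example_command(self):
        # Stand-in for self.snappy_testbed.run_command(...), which in the
        # real test executes the wrapped pipelinetest binary over SSH.
        return ('custom libpipeline called\n'
                'command-pipelinetest.wrapper\n'
                'include\n')

    def test_output_matches_expected(self):
        output = self.run_example_command()
        expected = ('custom libpipeline called\n'
                    'command-pipelinetest.wrapper\n'
                    'include\n')
        self.assertEqual(output, expected)


if __name__ == '__main__':
    unittest.main()
```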
fgimenez reviewed on Feb 22, 2016
+    subprocess.check_call(
+        ['sudo', 'ubuntu-device-flash', '--verbose',
+         'core', '15.04', '--channel', 'stable',
+         '--output', image_path, '--developer-mode'])
fgimenez (Contributor) on Feb 22, 2016
You could check the return code here, or possible exceptions raised. The textual output from udf could also be helpful, maybe at a high verbosity log level.
elopio (Member) on Feb 23, 2016

check_call will raise an exception if the return code is different from 0. That will stop the test and show the traceback, which I think is enough.
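As a quick illustration of that behaviour (placeholder commands, not the real ubuntu-device-flash call): check_call raises CalledProcessError on any non-zero exit code, and check_output additionally captures the textual output if it should be logged.

```python
import subprocess

# Exit code 0: check_call returns None and execution continues.
subprocess.check_call(['true'])

# Non-zero exit code: check_call raises CalledProcessError, which would
# abort the test and print the traceback.
try:
    subprocess.check_call(['false'])
except subprocess.CalledProcessError as error:
    print('command failed with return code', error.returncode)

# check_output also raises on failure, but captures stdout so it can be
# logged, e.g. at a high verbosity level.
output = subprocess.check_output(['echo', 'flashing image'],
                                 universal_newlines=True)
print(output)
```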
fgimenez reviewed on Feb 22, 2016
+    return image_path
+
+
+class SshTestbed:
fgimenez (Contributor) on Feb 22, 2016
Have you considered using a library like paramiko? http://docs.paramiko.org/en/1.16/api/client.html
elopio (Member) on Feb 23, 2016
That could be nice. And pexpect + paramiko might be a good combination. After finishing my tasks for this week, I'll take a look at rewriting this with the lib.
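For reference, a rough sketch of what a paramiko-based run_command could look like (hostname, user, and error handling are placeholders; this is not the PR's SshTestbed code):

```python
import paramiko


def run_command(hostname, command, username='ubuntu', port=22):
    """Run a command on the test bed over SSH and return its output."""
    client = paramiko.SSHClient()
    # Auto-accept the host key; acceptable for a throwaway test image,
    # not for production use.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname, port=port, username=username)
    try:
        _, stdout, stderr = client.exec_command(command)
        exit_status = stdout.channel.recv_exit_status()
        if exit_status != 0:
            raise RuntimeError('Command failed ({}): {}'.format(
                exit_status, stderr.read().decode()))
        return stdout.read().decode()
    finally:
        client.close()


# Example usage against a hypothetical test bed address:
# print(run_command('192.168.1.10', 'snappy list'))
```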
Looks very good; my only real concerns are about checking the output and possible exceptions from command calls.
retest this please
So the plan is to slowly move away from the big fixture of scenarios? I actually like this.
elopio commented on Feb 18, 2016
Refactor the examples tests so they can be written independently, each in its own file.
LP: #1546771