This repository was archived by the owner on Apr 30, 2020. It is now read-only.
Merged
11 changes: 7 additions & 4 deletions .gitignore
@@ -2,9 +2,12 @@
 *.pyo
 *.pyc
 __pycache__
-/artifacts
Collaborator commented:
Not sure why this gets removed.
So far a separate directory gets created for each nevr only during integration tests. When running the ansible-playbook ... command, the results get saved to artifacts.

Collaborator Author commented:
Are you overriding the artifacts variable on the command line?
ansible-playbook ... -e artifacts=path

Collaborator Author commented:
Never mind, maybe I misunderstood the question. Either way, I also believe it's better to keep this in .gitignore. When somebody runs the playbook with the default vars, that's where the artifacts get created.

Collaborator commented:
I am not overriding artifacts, I just took the README and tried running it as it says :)
I will add it back.

Besides, while test.log gets overwritten with each subsequent run, output.log keeps the previous results (also with default vars).

Member commented:
OK, put it back, my bad. I hadn't considered direct runs.

Collaborator Author @kparal commented (Feb 3, 2018):
@irushchyshyn When running it locally, it's your responsibility to clear the artifacts dir (or not). The task can't know your intended use case. Of course, you can e.g. generate random artifacts dir paths inside the playbook (the same way it generates workdirs) in case the variable is not set externally. It's your task, so you can do whatever you prefer for local runs :) Local runs are mostly for development.
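
A minimal sketch of that idea in Python (the helper name and the artifacts- prefix are illustrative, not part of this PR; note the prefix matches the /artifacts-*/ pattern added to .gitignore below):

    import os
    import tempfile

    def artifacts_dir(artifacts=None):
        '''Return the artifacts dir to use, creating a unique one if unset.'''
        if artifacts:
            return artifacts
        # mkdtemp both creates the directory and returns a unique path,
        # e.g. ./artifacts-a1b2c3
        return tempfile.mkdtemp(prefix='artifacts-', dir=os.getcwd())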

Collaborator commented:
@kparal thanks for the explanation.

-/dist
-/.tox
-/.eggs
+/artifacts/
+/artifacts-*/
+/build/
+/dist/
+/.tox/
+/.eggs/
 .cache
 *.egg-info
+tests.retry
2 changes: 1 addition & 1 deletion .travis.yml
@@ -7,4 +7,4 @@ install:
   - docker build -t taskotron .

 script:
-  - docker run -v $(pwd):$(pwd) -w $(pwd) -i -t taskotron
+  - docker run --cap-add=SYS_ADMIN -v $(pwd):$(pwd) -w $(pwd) -i -t taskotron
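
(The added --cap-add=SYS_ADMIN is presumably needed so that mock can perform the mount operations it requires inside the Docker container; this is an inference, the diff itself does not say why.)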
5 changes: 2 additions & 3 deletions Dockerfile
@@ -1,9 +1,8 @@
 FROM fedora

 RUN dnf -y install --setopt=install_weak_deps=false --setopt=tsflags=nodocs \
-    --setopt=deltarpm=false python2-rpm libtaskotron-core libtaskotron-fedora \
-    python3-rpm tox python2 python3 python2-dnf python3-dnf \
-    python2-libarchive-c python-bugzilla && dnf clean all
+    --setopt=deltarpm=false python2-rpm python3-rpm tox python2-dnf \
+    python3-dnf mock && dnf clean all

 ENV LANG=C.UTF-8 LC_ALL=C.UTF-8

55 changes: 30 additions & 25 deletions README.rst
@@ -25,46 +25,51 @@ Currently the following checks are available:
 Running
 -------

-You can run the checks locally with
-`Taskotron <https://fedoraproject.org/wiki/Taskotron>`__. First,
-install it (you can
-follow the
-`Quickstart <https://qa.fedoraproject.org/docs/libtaskotron/latest/quickstart.html>`__).
-You'll also need the ``rpm``, ``dnf`` and ``libarchive-c`` Python 2 modules
-(``python2-rpm``, ``python2-dnf``, ``python2-libarchive-c``).
-Note that Taskotron unfortunately runs on Python 2, but the code in
-this repository is Python 3 compatible as well.
-
-Once everything is installed you can run the task on a Koji build
-using the
-``name-(epoch:)version-release`` (``nevr``) identifier.
-
-.. code:: console
-
-   $ runtask -i <nevr> -t koji_build runtask.yml
-
-For example:
-
-.. code:: console
-
-   $ runtask -i eric-6.1.6-2.fc25 -t koji_build runtask.yml
+To run this task locally, execute the following command as root (don't do this
+on a production machine!)::
+
+   $ ansible-playbook tests.yml -e taskotron_item=<nevr>
+
+where ``nevr`` is a Koji build ``name-(epoch:)version-release`` identifier.
+
+For example::
+
+   $ ansible-playbook tests.yml -e taskotron_item=python-gear-0.11.0-1.fc27
+
+You can see the results in the ``./artifacts/`` directory.
+
+You can also run the above in mock::
+
+   $ mock -r ./mock.cfg --init
+   $ mock -r ./mock.cfg --copyin taskotron_python_versions *.py tests.yml /
+   $ mock -r ./mock.cfg --shell 'ansible-playbook tests.yml -e taskotron_item=python-gear-0.11.0-1.fc27'
+   $ mock -r ./mock.cfg --copyout artifacts artifacts

 Tests
 -----

-There are also automatic tests available. You can run them using
-`tox <https://tox.readthedocs.io/>`__.
-You'll need the above mentioned dependencies and ``python3-rpm``
-and ``python3-dnf`` installed as well.
-
-.. code:: console
+This task is covered by functional and integration tests.
+You can run them using `tox <https://tox.readthedocs.io/>`__, but
+you will need ``mock``, ``python3-rpm`` and ``python3-dnf`` installed.
+For mock configuration, see the
+`mock setup <https://github.com/rpm-software-management/mock/wiki#setup>`__
+instructions. Use the following command to run the test suite::

    $ tox

-Automatic tests also happen on `Tarvis
+The integration tests may take a while to execute, as they are
+running real tasks in mock. During development, however, you can
+speed them up by reusing the results of the previous test run.
+This is useful if you modify the tests themselves without changing
+the implementation of the task checks. Use the following command to
+run the integration tests in fake mode::
+
+   $ tox -e integration -- --fake

+The tests are also being executed on `Travis
 CI <https://travis-ci.org/fedora-python/taskotron-python-versions/>`__.
 Since Travis CI runs on Ubuntu
-and Ubuntu lacks the RPM Python bindings and Taskotron,
+and Ubuntu lacks the RPM Python bindings and mock,
 `Docker <https://docs.travis-ci.com/user/docker/>`__ is used
 to run the tests on Fedora. You can run the tests in Docker as well,
 just use the commands from the ``.travis.yml`` file.
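
One plausible way such a --fake flag is wired up in pytest (the repository's actual conftest.py is not shown in this diff, so the names below are assumptions):

    # conftest.py -- illustrative sketch only
    import pytest

    def pytest_addoption(parser):
        # registers the flag used as: tox -e integration -- --fake
        parser.addoption('--fake', action='store_true', default=False,
                         help='Reuse artifacts from a previous run instead '
                              'of running the tasks in mock again.')

    @pytest.fixture
    def fake_mode(request):
        # integration tests can use this fixture to skip the real mock run
        return request.config.getoption('--fake')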
49 changes: 49 additions & 0 deletions download_rpms.py
@@ -0,0 +1,49 @@
# -*- coding: utf-8 -*-

'''Download correct NVRs for python-versions to operate on.'''

import sys
import logging

from libtaskotron.directives import koji_directive


def download_rpms(koji_build, rpmsdir, arch=['x86_64'], arch_exclude=[],
                  src=True, debuginfo=False, build_log=True):
    '''Download RPMs for a koji build NVR.'''

    koji = koji_directive.KojiDirective()

    print('Downloading rpms for %s into %s' % (koji_build, rpmsdir))
    params = {
        'action': 'download',
        'koji_build': koji_build,
        'arch': arch,
        'arch_exclude': arch_exclude,
        'src': src,
        'debuginfo': debuginfo,
        'target_dir': rpmsdir,
        'build_log': build_log,
    }
    arg_data = {'workdir': None}
    koji.process(params, arg_data)

    print('Downloading complete')


if __name__ == '__main__':
    print('Running script: %s' % sys.argv)
    logging.basicConfig()
    logging.getLogger('libtaskotron').setLevel(logging.DEBUG)
    args = {}

    # arch is an optional comma-delimited string; filter out empty
    # entries so that a missing argument keeps the default arch
    arches = sys.argv[3] if len(sys.argv) >= 4 else ''
    arches = [arch.strip() for arch in arches.split(',') if arch.strip()]
    if arches:
        print('Requested arches: %s' % arches)
        args['arch'] = arches

    download_rpms(koji_build=sys.argv[1],
                  rpmsdir=sys.argv[2],
                  **args)
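
Judging from the sys.argv handling above, a direct invocation would look like this (the NEVR, directory and arches are illustrative; python2 because libtaskotron runs on Python 2):

   $ python2 download_rpms.py python-gear-0.11.0-1.fc27 ./rpms x86_64,noarch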
7 changes: 7 additions & 0 deletions mock.cfg
@@ -0,0 +1,7 @@
include('/etc/mock/fedora-27-x86_64.cfg')

config_opts['chroot_setup_cmd'] = 'install ansible dnf'
config_opts['use_host_resolv'] = True
config_opts['rpmbuild_networking'] = True
config_opts['use_nspawn'] = False
config_opts['root'] = 'fedora-27-x86_64-taskotron'
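
(A note on these options, inferred from mock's documentation rather than stated here: use_host_resolv and rpmbuild_networking give the chroot network access so the playbook can reach Koji, and use_nspawn = False forces the plain chroot backend, which is what works inside the Travis Docker container.)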
38 changes: 27 additions & 11 deletions python_versions_check.py
@@ -1,3 +1,5 @@
+# -*- coding: utf-8 -*-
+
 import logging
 if __name__ == '__main__':
     # Set up logging ASAP to see potential problems during import.
@@ -25,9 +27,12 @@
 from taskotron_python_versions.common import log, Package, PackageException


-def run(koji_build, workdir='.', artifactsdir='artifacts'):
+def run(koji_build, workdir='.', artifactsdir='artifacts',
+        testcase='dist.python-versions'):
     '''The main method to run from Taskotron'''
     workdir = os.path.abspath(workdir)
+    results_path = os.path.join(artifactsdir, 'taskotron', 'results.yml')
+    artifact = os.path.join(artifactsdir, 'output.log')

     # find files to run on
     files = sorted(os.listdir(workdir))
@@ -57,8 +62,6 @@ def run(koji_build, workdir='.', artifactsdir='artifacts'):
     if not logs:
         log.warn('No build.log found, that should not happen')

-    artifact = os.path.join(artifactsdir, 'output.log')
-
     # put all the details from subtasks in this list
     details = []
     details.append(task_two_three(packages, koji_build, artifact))
@@ -71,26 +74,39 @@ def run(koji_build, workdir='.', artifactsdir='artifacts'):
                        srpm_packages + packages, koji_build, artifact))
     details.append(task_python_usage(logs, koji_build, artifact))

+    # update testcase for all subtasks (use their existing testcase as a
+    # suffix)
+    for detail in details:
+        detail.checkname = '{}.{}'.format(testcase, detail.checkname)
+
     # finally, the main detail with overall results
     outcome = 'PASSED'
     for detail in details:
         if detail.outcome == 'FAILED':
             outcome = 'FAILED'
             break

-    details.append(check.CheckDetail(checkname='python-versions',
-                                     item=koji_build,
-                                     report_type=check.ReportType.KOJI_BUILD,
-                                     outcome=outcome))
+    overall_detail = check.CheckDetail(checkname=testcase,
+                                       item=koji_build,
+                                       report_type=check.ReportType.KOJI_BUILD,
+                                       outcome=outcome)
     if outcome == 'FAILED':
-        details[-1].artifact = artifact
+        overall_detail.artifact = artifact
+    details.append(overall_detail)

     summary = 'python-versions {} for {}.'.format(outcome, koji_build)
     log.info(summary)

     # generate output reportable to ResultsDB
     output = check.export_YAML(details)
-    return output
+    with open(results_path, 'w') as results_file:
+        results_file.write(output)
+
+    return 0 if overall_detail.outcome in ['PASSED', 'INFO'] else 1


 if __name__ == '__main__':
-    run('test')
+    rc = run(koji_build=sys.argv[1],
+             workdir=sys.argv[2],
+             artifactsdir=sys.argv[3],
+             testcase=sys.argv[4])
+    sys.exit(rc)
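
The effect of the new testcase prefixing above, as a worked example: a subtask detail whose checkname is 'two_three' combined with the default testcase ends up as

    '{}.{}'.format('dist.python-versions', 'two_three')
    # -> 'dist.python-versions.two_three'

which is the dotted name the per-subcheck modules below now rely on: each of them keeps only the short suffix such as 'executables' or 'two_three'.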
40 changes: 0 additions & 40 deletions runtask.yml

This file was deleted.

4 changes: 2 additions & 2 deletions taskotron_python_versions/executables.py
@@ -90,7 +90,7 @@ def task_executables(packages, koji_build, artifact):
             package, '\n * '.join(sorted(bins)))

     detail = check.CheckDetail(
-        checkname='python-versions.executables',
+        checkname='executables',
         item=koji_build,
         report_type=check.ReportType.KOJI_BUILD,
         outcome=outcome)
@@ -99,7 +99,7 @@ def task_executables(packages, koji_build, artifact):
         detail.artifact = artifact
         write_to_artifact(artifact, MESSAGE.format(message), INFO_URL)

-    log.info('python-versions.executables {} for {}'.format(
+    log.info('subcheck executables {} for {}'.format(
         outcome, koji_build))

     return detail
4 changes: 2 additions & 2 deletions taskotron_python_versions/naming_scheme.py
@@ -91,7 +91,7 @@ def task_naming_scheme(packages, koji_build, artifact):
                 package.filename))

     detail = check.CheckDetail(
-        checkname='python-versions.naming_scheme',
+        checkname='naming_scheme',
         item=koji_build,
         report_type=check.ReportType.KOJI_BUILD,
         outcome=outcome)
@@ -104,7 +104,7 @@ def task_naming_scheme(packages, koji_build, artifact):
     else:
         problems = 'No problems found.'

-    summary = 'python-versions.naming_scheme {} for {}. {}'.format(
+    summary = 'subcheck naming_scheme {} for {}. {}'.format(
         outcome, koji_build, problems)
     log.info(summary)

4 changes: 2 additions & 2 deletions taskotron_python_versions/py3_support.py
@@ -109,7 +109,7 @@ def task_py3_support(packages, koji_build, artifact):
             ' upstream, skipping Py3 support check')

     detail = check.CheckDetail(
-        checkname='python-versions.py3_support',
+        checkname='py3_support',
         item=koji_build,
         report_type=check.ReportType.KOJI_BUILD,
         outcome=outcome)
@@ -118,7 +118,7 @@ def task_py3_support(packages, koji_build, artifact):
         detail.artifact = artifact
         write_to_artifact(artifact, MESSAGE.format(message), INFO_URL)

-    log.info('python-versions.py3_support {} for {}'.format(
+    log.info('subcheck py3_support {} for {}'.format(
         outcome, koji_build))

     return detail
4 changes: 2 additions & 2 deletions taskotron_python_versions/python_usage.py
@@ -61,7 +61,7 @@ def task_python_usage(logs, koji_build, artifact):
         outcome = 'FAILED'

     detail = check.CheckDetail(
-        checkname='python-versions.python_usage',
+        checkname='python_usage',
         item=koji_build,
         report_type=check.ReportType.KOJI_BUILD,
         outcome=outcome)
@@ -74,7 +74,7 @@ def task_python_usage(logs, koji_build, artifact):
     else:
         problems = 'No problems found.'

-    summary = 'python-versions.python_usage {} for {}. {}'.format(
+    summary = 'subcheck python_usage {} for {}. {}'.format(
         outcome, koji_build, problems)
     log.info(summary)

4 changes: 2 additions & 2 deletions taskotron_python_versions/requires.py
@@ -162,7 +162,7 @@ def task_requires_naming_scheme(packages, koji_build, artifact):
         message_rpms += message

     detail = check.CheckDetail(
-        checkname='python-versions.requires_naming_scheme',
+        checkname='requires_naming_scheme',
         item=koji_build,
         report_type=check.ReportType.KOJI_BUILD,
         outcome=outcome)
@@ -174,7 +174,7 @@ def task_requires_naming_scheme(packages, koji_build, artifact):
     else:
         problems = 'No problems found.'

-    summary = 'python-versions.requires_naming_scheme {} for {}. {}'.format(
+    summary = 'subcheck requires_naming_scheme {} for {}. {}'.format(
         outcome, koji_build, problems)
     log.info(summary)

4 changes: 2 additions & 2 deletions taskotron_python_versions/two_three.py
@@ -123,7 +123,7 @@ def task_two_three(packages, koji_build, artifact):
             outcome = 'FAILED'
             bads[package.filename] = py_versions

-    detail = check.CheckDetail(checkname='python-versions.two_three',
+    detail = check.CheckDetail(checkname='two_three',
                                item=koji_build,
                                report_type=check.ReportType.KOJI_BUILD,
                                outcome=outcome)
@@ -143,7 +143,7 @@ def task_two_three(packages, koji_build, artifact):
     else:
         problems = 'No problems found.'

-    summary = 'python-versions.two_three {} for {}. {}'.format(
+    summary = 'subcheck two_three {} for {}. {}'.format(
         outcome, koji_build, problems)
     log.info(summary)
