3 changes: 2 additions & 1 deletion CHANGELOG.rst
@@ -2,7 +2,8 @@
0.9.10 (unreleased)
-------------------

- Nothing changed yet.
- Mostly documentation updates. The documentation is promoted to *good enough*.
Issue `#17 <https://github.com/nodev-io/pytest-nodev/issues/17>`_.


0.9.9 (2016-07-16)
13 changes: 7 additions & 6 deletions CONTRIBUTING.rst
@@ -1,4 +1,6 @@

.. highlight:: console

This project is Free and Open Source Software released under the terms of the
`MIT license <http://opensource.org/licenses/MIT>`_.
Contributions are highly welcomed and appreciated. Every little help counts, so do not hesitate!
@@ -16,19 +18,19 @@ Submit a pull request
---------------------

Contributors are invited to review the
`product high level design <https://pytest-nodev.readthedocs.io/en/stable/design.html>`_
and the `short term planning <https://github.com/nodev-io/pytest-nodev/milestones>`_.
`product high level design <https://pytest-nodev.readthedocs.io/en/latest/design.html>`_
and the `short term product planning <https://github.com/nodev-io/pytest-nodev/milestones>`_.

Tests can be run with `pytest <https://pytest.org>`_ with:
Tests can be run with `pytest <https://pytest.org>`_ with::

$ py.test -v --timeout=0 --pep8 --flakes --mccabe --cov=pytest_nodev --cov-report=html \
--cache-clear pytest_nodev tests

coverage is can be checked with:
Coverage can be checked with::

$ open htmlcov/index.html

the complete python versions tests can be run via `tox <https://tox.readthedocs.io>`_ with:
The complete test suite for all supported Python versions can be run via `tox <https://tox.readthedocs.io>`_ with::

$ tox

@@ -43,4 +45,3 @@ you can build a local copy with::

$ sphinx-build docs docs/html
$ open docs/html/index.html

17 changes: 12 additions & 5 deletions README.rst
@@ -13,6 +13,8 @@

.. NOTE: only the first couple of lines of the README are shown on GitHub mobile

.. highlight:: console

pytest-nodev is a simple test-driven search engine for Python code:
it finds classes and functions that match the behaviour specified by the given tests.

@@ -24,7 +26,9 @@ on all objects in the Python standard library and in all the modules you have in

**Show me how it works in practice.**
**I need to write a** ``parse_bool`` **function that robustly parses a boolean value from a string.**
**Here is the test I intend to use to validate my own implementation once I write it.**::
**Here is the test I intend to use to validate my own implementation once I write it**:

.. code-block:: python

def test_parse_bool():
assert not parse_bool('false')
@@ -41,7 +45,9 @@ from the Python Package Index::
$ pip install pytest-nodev

Then copy your specification test to the ``test_parse_bool.py`` file and
decorate it with ``pytest.mark.candidate`` as follows::
decorate it with ``pytest.mark.candidate`` as follows:

.. code-block:: python

import pytest

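A minimal sketch of the complete ``test_parse_bool.py`` could look like this; the extra assertions are illustrative, and it is assumed that pytest-nodev supplies ``parse_bool`` at run time for the candidate named by the marker:

.. code-block:: python

    import pytest

    @pytest.mark.candidate('parse_bool')
    def test_parse_bool():
        # illustrative assertions; adapt them to your own specification
        assert not parse_bool('false')
        assert not parse_bool('FALSE')
        assert not parse_bool('0')
        assert parse_bool('true')
        assert parse_bool('TRUE')
        assert parse_bool('1')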
@@ -75,7 +81,7 @@ Finally, instruct pytest to run your test on all candidate callables in the Pyth
In just over a minute pytest-nodev collected 4000 functions from the standard library,
ran your specification test on all of them and
reported that the `strtobool`_ function in the distutils.util module
is the only one that passes your test.
is the only candidate that passes your test.
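
A sketch of the kind of invocation that produces this result, assuming the ``--candidates-from-stdlib`` option used later in these docs (the exact flags may differ)::

    $ py.test --candidates-from-stdlib test_parse_bool.py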

Now you can review it and if you like it you may use it in your code.
No need to write your own implementation!
@@ -99,14 +105,15 @@ Here are some of them in rough order of importance:
BIG FAT WARNING!
----------------

Searching code with pytest-nodev looks very much like running arbitrary callables with random arguments.
A lot of functions called with the wrong set of arguments may have unexpected consequences ranging
from slightly annoying, think ``os.mkdir('false')``,
to **utterly catastrophic**, think ``shutil.rmtree('/', True)``.
Serious use of pytest-nodev, in particular using ``--candidates-from-all``,
requires running the tests with operating-system-level isolation,
e.g. as a dedicated user or even better inside a dedicated container.
The `User's guide <http://pytest-nodev.readthedocs.io/en/stable/usersguide.html>`_
documents how to run pytest-nodev safely and efficiently.
The `Starter kit <http://pytest-nodev.readthedocs.io/en/stable/starterkit.html>`_
guide documents how to run pytest-nodev safely and efficiently.


Project resources
73 changes: 36 additions & 37 deletions docs/concepts.rst
@@ -2,8 +2,6 @@
Concepts
========

.. warning:: This section is work in progress and there will be areas that are lacking.

Motivation
----------

@@ -19,10 +17,11 @@ On the *custom* side of the spectrum there is all the code that defines the
features of the software and all the choices of its implementation. That is code that needs
to be written.

On the other hand a seasoned software developer is trained to spot
On the other hand, seasoned software developers are trained to spot
pieces of functionality that lie far enough on the *generic* side of the range
that with high probability a library already implements it
**and documents it well enough to be discovered with an internet search**.
that with high probability are already implemented in a **library** or a **framework**
and that are documented well enough to be discovered with a
**keyword-based search**, e.g. on StackOverflow and Google.

In between the two extremes there is a huge gray area populated by pieces of functionality
that are not *generic* enough to obviously deserve a place in a library, but are
@@ -36,51 +35,51 @@ Or is it?
Test-driven code search
-----------------------

When developing new functionalities developers spend significant efforts searching for
code to reuse, mainly via keyword-based searches, e.g. on StackOverflow and Google.
Keyword-based search is quite effective in finding code that is explicitly designed and
documented to be reused, e.g. libraries and frameworks,
but typically fails to identify reusable functions and classes in the large corpus of
auxiliary code of software projects.
To address the limits of keyword-based search *test-driven code search*
focuses on code behaviour and semantics instead.

The **search query** is a test function that is executed once for every
candidate class or function available to the **search engine**
and the **search result** is the list of candidates that pass the test.
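
Conceptually, such a search boils down to a loop like the following sketch; this only illustrates the idea and is not how the pytest plugin is implemented:

.. code-block:: python

    import distutils.util  # ships strtobool on the Python versions these docs target

    def search(specification_test, candidates):
        """Run the specification test on every candidate callable and
        return the names of the candidates that pass it."""
        hits = []
        for name, candidate in candidates.items():
            try:
                specification_test(candidate)
            except Exception:
                continue  # failing or crashing candidates are simply discarded
            hits.append(name)
        return hits

    def test_parse_bool(parse_bool):
        assert parse_bool('true')
        assert not parse_bool('false')

    print(search(test_parse_bool, {
        'distutils.util:strtobool': distutils.util.strtobool,
        'builtins:bool': bool,
    }))
    # -> ['distutils.util:strtobool']  (bool('false') is truthy, so bool is rejected)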

Due to its nature the approach is better suited for discovering smaller functions
with a generic signature.

*pytest-nodev* is a pytest plugin that enables *test-driven code search* for Python.


Test-driven code reuse
----------------------

*Test-driven reuse* (TDR) is an extension of the well-known *test-driven development* (TDD)
practice.

TDR aims to address the limits of keyword-based search with test-driven code search
that focuses instead on code behaviour and semantics.
Developing a new feature in TDR starts with the developer writing the tests
that will validate candidate implementations of the desired functionality.
that will validate the correct implementation of the desired functionality.

Before writing any functional code the tests are run against all functions
and classes of all available projects.

Any code passing the tests is presented to the developer
as a candidate implementation for the target feature.
as a candidate implementation for the target feature:

pytest-nodev is a pytest plugin that enables *test-driven code search* and
consequently a software development strategy called
*test-driven reuse* or TDR that we call *nodev*,
that is an extension of the well known *test-driven development* or TDD.

The idea is that once the developer has written the tests that define the behaviour of a new
function to a degree sufficient to validate the implementation they are going to write
it is good enough to validate
any implementation. Running the tests on a large set of functions may result in a *passed*, that is
a function that already implements their feature.
- if nothing passes the tests the developer needs to implement the feature and TDR reduces to TDD
- if any code passes the tests the developer can:

Due to its nature the approach is better suited for discovering smaller functions
with a generic signature.
- **import**: accept code as a dependency and use the class / function directly
- **fork**: copy the code and the related tests into their project
- **study**: use the code and the related tests as guidelines for their implementation,
  in particular identifying corner cases and optimizations


Tests validation
----------------
Unit tests validation
---------------------

Another use for pytest-nodev is, with a bit of additional work, to validate a project test suite.
An independent use case for test-driven code search is unit tests validation.
If a test passes with an unexpected object there are two possibilities:
either the test is not strict enough and allows for false positives and needs to be updated,
or the *passed* is actually a function you could use instead of your implementation.


Keywords:

* Source code *search by feature*, *search by functionality*, *search by specification* or *nodev*
* *Feature-specification test* and test suite or *Requirement-specification test*
* *Test-driven reuse* or *test-driven code search* or *test-driven source code search*
or the *PASSED* is actually a function you could use instead of your implementation.
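
Reusing the illustrative ``search`` helper sketched above, a deliberately loose specification (an artificial example) lets unrelated builtins pass:

.. code-block:: python

    def too_loose_spec(parse_bool):
        # int('1') == 1 (truthy) and int('0') == 0 (falsy) satisfy both
        # assertions, so the builtin ``int`` turns up as an unexpected PASSED.
        assert parse_bool('1')
        assert not parse_bool('0')

    print(search(too_loose_spec, {'builtins:int': int, 'builtins:len': len}))
    # -> ['builtins:int']

A stricter assertion such as ``assert not parse_bool('false')`` rejects ``int``, which raises
``ValueError`` on that input, and turns the unexpected PASSED into a hint that the test needed tightening.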


Bibliography
5 changes: 5 additions & 0 deletions docs/contributing.rst
@@ -0,0 +1,5 @@

Contributing
============

.. include:: ../CONTRIBUTING.rst
5 changes: 3 additions & 2 deletions docs/index.rst
@@ -16,7 +16,7 @@ New to the concept of *test-driven code search*?
Jump to the :doc:`quickstart` for a 2 minutes hands-on overview.
Curious about the technique?
Head over to the :doc:`concepts` section
or go through our :doc:`tutorial`.
or download our :doc:`starterkit`.
The :doc:`usersguide` documents pytest-nodev usage in detail and
covers a few more examples.

@@ -28,7 +28,8 @@ https://github.com/nodev-io/pytest-nodev
:caption: Table of Contents

quickstart
tutorial
starterkit
usersguide
concepts
contributing
design
102 changes: 102 additions & 0 deletions docs/starterkit.rst
@@ -0,0 +1,102 @@

.. highlight:: console

Starter kit
===========

**nodev-starter-kit** lets you perform test-driven code search queries
with `pytest-nodev <https://pypi.python.org/pypi/pytest-nodev>`_
safely and efficiently using `docker <https://docker.com>`_.

**Why do I need to take special care when running pytest-nodev?**

Searching code with pytest-nodev looks very much like running arbitrary callables with random arguments.
A lot of functions called with the wrong set of arguments may have unexpected consequences ranging
from slightly annoying, think ``os.mkdir('false')``,
to **utterly catastrophic**, think ``shutil.rmtree('/', True)``.
Serious use of pytest-nodev, in particular using ``--candidates-from-all``,
requires running the tests with operating-system-level isolation,
e.g. as a dedicated user or even better inside a dedicated container.

**But isn't docker overkill? Can't I just use a dedicated user to run pytest-nodev?**

We tried hard to find a simpler setup, but once all the nitty-gritty details are factored in
we chose docker as the best trade-off between safety, reproducibility and ease of use.


Install nodev-starter-kit
-------------------------

To install *nodev-starter-kit* clone the `official repo <https://github.com/nodev-io/nodev-starter-kit>`_::

$ git clone https://github.com/nodev-io/nodev-starter-kit.git
$ cd nodev-starter-kit

Advanced GitHub users are encouraged to
`fork the official repo <https://help.github.com/articles/fork-a-repo/>`_ and clone their fork.


Install docker-engine and docker
--------------------------------

In order to run pytest-nodev you need to access a docker-engine server via the docker client.
If you don't have Docker already set up,
follow the official installation instructions for your platform:

- `Docker for Linux <https://docs.docker.com/engine/installation/linux/>`_
- `Docker for MacOS <https://docs.docker.com/docker-for-mac/>`_
- `Docker for Windows <https://docs.docker.com/docker-for-windows/>`_

On Ubuntu 16.04 only, you can use the script we provide::

$ bash ./docker-engine-setup.sh

And test your setup with::

$ sudo docker info

Refer to the official Docker documentation for troubleshooting and additional configuration.


Create the nodev image
----------------------

The *nodev* docker image will be your search engine:
it needs to be created once and updated every time you want to
change the packages installed in the search engine environment.

With an editor, fill the ``requirements.txt`` file with the packages to be installed in the search engine.
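
For instance, a hypothetical ``requirements.txt`` that makes two third-party packages searchable might read::

    $ cat requirements.txt
    requests
    pyyaml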

Build the docker image with::

$ sudo docker build -t nodev .


Execute a search
----------------

Run the search engine container on a local docker-engine server, e.g. with::

$ sudo docker run --rm -it -v `pwd`:/src nodev --candidates-from-stdlib tests/test_parse_bool.py

Alternatively, after having set the ``DOCKER_HOST`` environment variable, e.g. with::

$ export DOCKER_HOST='tcp://127.0.0.1:4243' # change '127.0.0.1:4243' with the IP address and port
# of your remote docker-engine host

you can run the search engine container on a remote docker-engine server, e.g. with::

$ python docker-nodev.py --candidates-from-stdlib tests/test_parse_bool.py
======================= test session starts ==========================
platform darwin -- Python 3.5.1, pytest-2.9.2, py-1.4.31, pluggy-0.3.1
rootdir: /tmp, inifile: setup.cfg
plugins: nodev-1.0.0, timeout-1.0.0
collected 4000 items

test_parse_bool.py xxxxxxxxxxxx[...]xxxxxxxxXxxxxxxxx[...]xxxxxxxxxxxx

====================== pytest_nodev: 1 passed ========================

test_parse_bool.py::test_parse_bool[distutils.util:strtobool] PASSED

=== 3999 xfailed, 1 xpassed, 260 pytest-warnings in 75.38 seconds ====
33 changes: 0 additions & 33 deletions docs/tutorial.rst

This file was deleted.
