
CEP 801: PyPy backend support

The goal of this CEP is to find ways to make Cython code run in the PyPy Python runtime environment.

  • Status: Open
  • Implementation status: Interfacing at the C level is mostly implemented.

Relevant discussions on the PyPy mailing list:

http://thread.gmane.org/gmane.comp.python.pypy/9437/focus=9452

The current state of the discussion seems to be that PyPy provides ways to talk to C code, but nothing as complete as CPython's C-API in the sense that it allows efficient two-way communication between C code and Python objects. However, to accommodate the differing needs of externally written software in general, and of Cython code in particular, a two-way interface is required.

Approaches

There are different ideas about approaches that would bring Cython and PyPy closer:

  • interfacing at the C level
  • interfacing at the assembly level
  • generating Python code that uses ctypes
  • generating (R)Python code that uses a new FFI

Interfacing at the C level

This would require PyPy to gain a usable C-API in order to let external code talk to its object layer. Given that there is a partial implementation of this C-API already (cpyext), this originally appeared to be the easiest approach. However, Armin Rigo stated that this would be harder than it seems, and also inefficient, because it would require PyPy to restrict itself in a) the way it handles objects and b) the way its internals are designed. Basically, different PyPy configurations may require a different view on their internals, which renders a common C-API infeasible.

It is not clear yet if this would really be infeasible (or just inefficient) for Cython, because the generated C code could do different things when compiled for different PyPys, just as it currently does different things for different CPython versions. Reconfiguration of internals at runtime would, however, clearly represent a problem.

Update (arigo): I don't really understand the points made above, but the main issue is that you can't have some static C code (either manually written or generated by Cython) and hope that it can interact closely with PyPy's objects. It's just not possible, as far as I can tell. (So cpyext for example is an indirection layer, and is slow.)

Update (sbehnel): My point is that it is a difference for statically compiled code if you let it work against a given C-API and then replace that by a slower indirection layer that it needs to call through, or if you replace higher-level parts of the code itself by better targeted code right at compile time. Cython generated code can go much further here than what you could get by applying C macros only, and it would be an incremental optimisation process to improve the C interaction between the two, as opposed to a completely new way of interfacing. Basically, try the quickest changes to get it to work at all, then optimise it to make it fast.

One obvious optimisation could be to make both sides understand each other's internal function implementations and drop function calls into native code, instead of going through tuple packing and unpacking. Basically, PyPy could learn to introspect Cython's CyFunction type and generate a corresponding native call for a given instance, and for the other direction, Cython could use a more low-level calling convention, e.g. using a struct rather than a tuple object when certain argument types are known.
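The struct-based calling convention mentioned above could look something like this minimal C sketch (all type and field names here are made up for illustration; this is not Cython's actual ABI):

```c
#include <assert.h>

/* Sketch of the struct-instead-of-tuple idea: when the argument types
 * are statically known, the caller fills a plain C struct and the
 * callee reads typed fields directly, so no argument tuple is
 * allocated and no per-item boxing/unboxing takes place. */
typedef struct {
    int         count;   /* a statically typed argument */
    double      scale;   /* another one */
    const char *label;   /* an "object-like" argument, here just a pointer */
} call_args_t;

static double scaled_count(const call_args_t *args) {
    /* The callee unpacks typed fields directly instead of calling
     * PyTuple_GET_ITEM() and converting each item. */
    return args->count * args->scale;
}
```

The point of the sketch is only the call shape: one struct fill and one call, versus tuple allocation, per-argument boxing, and tuple unpacking on the other side.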

Here is an example. I think it should be quite easy for PyPy to bypass Cython's CyFunction to call directly into the underlying C function, with unboxed parameters. Cython could provide some sort of C signature introspection feature that PyPy could analyse and optimise for.
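A minimal sketch of what such an introspection hook could expose, assuming a hypothetical layout (Cython's real CyFunction differs, and the names below are illustrative only):

```c
#include <assert.h>

/* Hypothetical layout: a callable that, besides its normal
 * Python-level entry point (omitted here), exposes a raw C function
 * pointer together with a signature string that a JIT could parse
 * and specialise for. */
typedef struct {
    void       *c_func;      /* the underlying C implementation */
    const char *c_signature; /* e.g. "(int, int) -> int" */
} CyFunctionInfo;

/* The underlying C implementation that Cython generated. */
static int add_ints(int a, int b) { return a + b; }

/* What a JIT could do after recognising the signature: cast the raw
 * pointer to the unboxed C type and call it directly, skipping
 * argument tuple packing entirely. */
static int call_unboxed(const CyFunctionInfo *fn, int a, int b) {
    typedef int (*binop_t)(int, int);
    return ((binop_t)fn->c_func)(a, b);
}
```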

But that's only one direction. Calling from Cython code back into PyPy-compiled code efficiently is (from what I've heard so far) likely going to be trickier, because Cython would have to know how to do that at C compilation time at the latest, while PyPy decides about these things at runtime. Still, there could be a way for Cython to tell PyPy about the signature it wants to use for a specific unboxed call, and PyPy could generate an optimised wrapper for that and eventually a specialised function implementation for the statically known set of argument types in a specific call. A simple varargs approach may work here; imagine something like this:

error = PyPy_CallFunctionWithSignature(
    func_obj_ptr, "(int, object, list, option=bint) -> int",
    i, object_ptr, list_obj_ptr, 0, &int_result);

And given that the constant signature string would be interned by the C compiler, a simple pointer comparison should suffice for branching inside of PyPy. That would in many cases drop the need to convert all parameters to objects and to pack them into an argument tuple. Even keyword arguments could often be folded into positional varargs, as I indicated above.
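The pointer-comparison branching could be sketched like this in plain C, with a hypothetical signature cache standing in for PyPy's internals. (Note that the C standard does not guarantee that identical string literals share one address across translation units, so a real implementation would still need a parse fallback, as shown here.)

```c
#include <stddef.h>
#include <string.h>

/* Sketch: if the caller always passes the same signature literal, the
 * callee can key a cache on the string's address and only fall back
 * to a full parse on a cache miss.  All names are hypothetical. */

#define SIG_CACHE_SIZE 8

typedef struct {
    const char *sig_ptr;    /* the literal's address, used as cache key */
    int         handler_id; /* stands in for a parsed/compiled call path */
} SigCacheEntry;

static SigCacheEntry sig_cache[SIG_CACHE_SIZE];
static int sig_cache_used = 0;
static int parse_count = 0;   /* counts how often the slow path ran */

/* Slow path: "parse" the signature (here just a dummy derivation). */
static int parse_signature(const char *sig) {
    parse_count++;
    return (int)strlen(sig);
}

/* Fast path: pointer comparison first, full parse only on a miss. */
static int lookup_signature(const char *sig) {
    int i;
    for (i = 0; i < sig_cache_used; i++)
        if (sig_cache[i].sig_ptr == sig)   /* pointer, not strcmp */
            return sig_cache[i].handler_id;
    int id = parse_signature(sig);
    if (sig_cache_used < SIG_CACHE_SIZE) {
        sig_cache[sig_cache_used].sig_ptr = sig;
        sig_cache[sig_cache_used].handler_id = id;
        sig_cache_used++;
    }
    return id;
}
```

After the first call with a given literal, every further call with the same literal resolves through one pointer comparison, which is the property the text relies on.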

Other things that need different handling in CPython's C-API than in PyPy:

  • PyDict_Next() is horribly slow in cpyext. It's best replaced with normal iterator usage.
  • Simple macros like PyTuple_GET_ITEM() are much more involved in cpyext than in CPython. A generic function for sequence unpacking could help, e.g.

{{{
int PySequence_Unpack(PyObject* iterable,
                      Py_ssize_t min_unpack, Py_ssize_t max_unpack, ...)
}}}

    The max_unpack argument gives the number of varargs, which are all of type PyObject** or NULL. Each non-NULL pointer is set to a new reference to the item at the corresponding index. If the sequence is shorter than min_unpack, an error is raised and -1 is returned. Assignments may or may not have taken place at that point, but no owned references are passed back in the error case.
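As a self-contained illustration of the proposed semantics, here is a C sketch in which a plain array of void* stands in for the Python sequence and no reference counting takes place (the function and parameter names are illustrative, not a real API):

```c
#include <stdarg.h>
#include <stddef.h>

/* Sketch of the proposed unpacking call.  A real implementation would
 * iterate a PyObject* and return new references; here an array plus a
 * length stand in for the sequence so the control flow stays visible.
 * Semantics follow the proposal: max_unpack varargs of type void**
 * (NULL slots are skipped), and a sequence shorter than min_unpack is
 * an error (-1), in which case nothing is owned by the caller. */
static int sequence_unpack(void **items, size_t length,
                           size_t min_unpack, size_t max_unpack, ...) {
    va_list ap;
    size_t i;
    if (length < min_unpack)
        return -1;              /* too short: error, no refs passed back */
    va_start(ap, max_unpack);
    for (i = 0; i < max_unpack; i++) {
        void **slot = va_arg(ap, void **);
        if (slot != NULL)       /* NULL means "skip this index" */
            *slot = (i < length) ? items[i] : NULL;
    }
    va_end(ap);
    return 0;
}
```

A caller that only wants the first and third items would pass NULL for the middle slot, which is the keyword/skipping flexibility the bullet point aims at.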

Interfacing at the assembly level

Armin Rigo proposed to compile Cython code to C and down to assembly (potentially using the MIPS instruction set). This could then be interpreted by PyPy.

It is unclear if this can solve the problem. For one, assembly is an extremely low-level interface that provides almost no higher-level code semantics, which makes it hard to run such code efficiently. Also, this approach does not lift the requirement for an interaction with the CPython C-API, which means that this API still needs to be implemented completely, in one way or another.

Generating Python code that uses ctypes

This is what was initially tried in Romain's summer of code project. The problem is that ctypes is restricted in two ways. It is not capable of expressing Cython specific code semantics, and it is designed to work at an ABI level instead of an API level (which is used by Cython).

Extending the ctypes implementation in PyPy may be an option. Providing a dedicated (R)Python module in PyPy for the specific use by Cython code would be another.

Generating (R)Python code that uses a new FFI (like rffi)

This is a variant of the ctypes approach above. PyPy currently has two FFIs: ctypes and rffi. According to the core developers, rffi is a layer on top of ctypes that works more or less for RPython code but is not designed to be used by anything else.

More work needs to be done to design a foreign function interface that could be used by a Cython backend and potentially other code.

Open Questions

RPython code generation

Could Cython generate RPython code instead of Python code? This would require arbitrary Python code to be transformable into RPython code, e.g. using variable renaming, templating, code specialisation or other compiler techniques.
