Dynamic memory allocation is mostly a non-issue in Python. The simplest example is that of lists, which can grow to any required size.

Lists can be used in Cython, but they can only hold Python objects, which incurs a certain amount of overhead. For C data types, dynamic memory must be managed much as in plain C: namely with malloc, realloc and, importantly, free.

One can get a feeling for this need by studying the example contained in primes.pyx: the return array has a fixed maximum length, which is hard-coded in the script. This length cannot be changed at run time; to change it, one has to recompile.
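
The pattern being described is roughly the following (a sketch with a made-up squares function, not the actual contents of primes.pyx):

# sketch: a C array whose size is fixed when the module is compiled
def squares(int n):
    cdef int i
    cdef int buf[1000]              # hard-coded maximum; changing it means recompiling
    if n > 1000:
        n = 1000                    # silently truncated to the fixed capacity
    for i in range(n):
        buf[i] = i * i
    return [buf[i] for i in range(n)]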

Gaining access to these functions is very easy: just use "cimport libc.stdlib" (see the Cython/Includes/ directory).

The .pxd file actually defines the relevant functions like so:

# standard cimport file libc/stdlib.pxd

cdef extern from "stdlib.h":
    void free(void* ptr)
    void* malloc(size_t size)
    void* realloc(void* ptr, size_t size)
    # ...

This declares the required functions as provided by the C standard library.
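
Either form of cimport then works in a .pyx file; the helper below (tiny_demo is a made-up name, and the buffer size of 10 is arbitrary) just shows the two spellings side by side:

# cimport the whole module and use qualified names ...
cimport libc.stdlib

# ... or cimport only the names that are actually needed
from libc.stdlib cimport free

def tiny_demo():
    # hypothetical helper: allocate, check, release
    cdef double *a = <double *>libc.stdlib.malloc(10 * sizeof(double))
    if not a:
        raise MemoryError()
    free(a)    # the unqualified name comes from the "from ... cimport" line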

A very simple example of malloc usage is the following:

import random
from libc.stdlib cimport malloc, free

def random_noise(int number=1):
    cdef int i
    # allocate number * sizeof(double) bytes of raw memory
    cdef double *my_array = <double *>malloc(number * sizeof(double))
    if not my_array:
        raise MemoryError()

    try:
        ran = random.normalvariate
        for i in range(number):
            my_array[i] = ran(0, 1)

        # copy the C values into a Python list before the memory is released
        L = [my_array[i] for i in range(number)]
    finally:
        # whatever happens, make sure we do not leak memory
        free(my_array)

    return L
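
After compiling the extension (here assumed, purely for illustration, to be built as a module called noise), the function is used like any other Python callable:

# plain Python session; "noise" is a hypothetical name for the compiled module
import noise

samples = noise.random_noise(5)
print(samples)   # a list of five floats drawn from a standard normal distribution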

One important thing to remember is that blocks of memory obtained with malloc must be manually released with free when one is done with them, or they will not be reclaimed until the Python process exits. This is called a memory leak.
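
The same obligation holds for buffers grown with realloc, which was mentioned above but not used in the example. A minimal sketch, assuming a hypothetical collect_squares function and an arbitrary doubling strategy for the capacity:

from libc.stdlib cimport malloc, realloc, free

def collect_squares(int n):
    # hypothetical example: grow the buffer as needed instead of fixing its size
    cdef int i, capacity = 4
    cdef double *data = <double *>malloc(capacity * sizeof(double))
    cdef double *tmp
    if not data:
        raise MemoryError()
    try:
        for i in range(n):
            if i >= capacity:
                capacity *= 2                     # arbitrary doubling strategy
                tmp = <double *>realloc(data, capacity * sizeof(double))
                if not tmp:
                    raise MemoryError()           # the old block is still freed below
                data = tmp
            data[i] = i * i
        return [data[i] for i in range(n)]
    finally:
        free(data)                                # released exactly once, whatever happens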
