
Investigate and either fix or remove the @python decorator #668

Closed
EmilyBourne opened this issue Jan 7, 2021 · 14 comments · Fixed by #1068

@EmilyBourne
Member

Describe the bug
In pyccel.decorators there is a @python decorator which leads to pyccel.ast.core.PythonFunction objects being created. This decorator has no docstring explaining what it does. Old documentation states:

due to the limitations of Pyccel, the python decorator gives you the freedom to write whatever Python code you want, which will be executed dynamically during the code generation process

@python
def f(x):
    x = [None]*5
    x = 5
    x = 5.
    return x

We should investigate the utility of this decorator. If it is not useful then it should be removed. If it is useful then we should investigate whether it still works, and fix it if it doesn't. A proper docstring should also be added to the decorator to explain its purpose to users.

@EmilyBourne EmilyBourne added the bug label Jan 7, 2021
@saidctb
Member

saidctb commented Jan 20, 2021

The idea behind @python is to allow more general operations than @sympy.
Example

@sympy
def f(x):
    from sympy import tan
    return tan(x).diff(x)

Pyccel will execute the function and generate

def f(x):
    return tan(x)**2 + 1
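As a quick sanity check of that transformation, the derivative of tan really does simplify to the closed form shown above (plain SymPy, outside of Pyccel):

```python
import sympy as sp

x = sp.Symbol('x')
# the derivative that the @sympy decorator would bake into the generated code
deriv = sp.diff(sp.tan(x), x)
# it matches the generated expression tan(x)**2 + 1
assert sp.simplify(deriv - (sp.tan(x)**2 + 1)) == 0
```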

@EmilyBourne
Member Author

We cannot think of a more general example, so the @python decorator should be removed.

@EmilyBourne
Member Author

If I have understood correctly, the @python decorator is similar to constexpr in C++. It implies that the function is executed by the compiler and the result is used directly in the code. But historically it existed to avoid Pyccel's limitations rather than to exploit this feature?

@EmilyBourne
Member Author

Searching for examples for constexpr shows that the suggested example is often a way of pre-calculating the size of a stack array. Would this provide a suitable example for this decorator?

import numpy as np
from pyccel.decorators import python, stack_array

@python
def triangle_size(n : int):
    return sum(i for i in range(1,n+1))

@stack_array('x')
def first_5_pascal_triangle_rows():
    x = np.zeros(triangle_size(5))
    x[0] = 1
    row_start_idx = 1
    for n in range(2,6):
        x[row_start_idx] = 1
        x[row_start_idx+n-1] = 1
        for j in range(1,n-1):
            x[row_start_idx+j] = x[row_start_idx-n+1+j] + x[row_start_idx-n+j]
        row_start_idx += n
    print(x)

first_5_pascal_triangle_rows()
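For reference, the body above also runs as plain Python if the decorators are dropped, and the flattened array can be checked against binomial coefficients (a small sketch, with the decorators removed and the array returned for inspection):

```python
import numpy as np
from math import comb

def triangle_size(n: int):
    return sum(i for i in range(1, n + 1))

def first_5_pascal_triangle_rows():
    # same body as above, minus the decorators, returning x for inspection
    x = np.zeros(triangle_size(5))
    x[0] = 1
    row_start_idx = 1
    for n in range(2, 6):
        x[row_start_idx] = 1
        x[row_start_idx + n - 1] = 1
        for j in range(1, n - 1):
            x[row_start_idx + j] = x[row_start_idx - n + 1 + j] + x[row_start_idx - n + j]
        row_start_idx += n
    return x

x = first_5_pascal_triangle_rows()
# the first 5 rows of Pascal's triangle, flattened: 1, 1 1, 1 2 1, ...
expected = [comb(n, k) for n in range(5) for k in range(n + 1)]
assert list(x) == expected
```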

@EmilyBourne
Member Author

@saidctb @yguclu Do we still want to remove this decorator ?

@yguclu
Member

yguclu commented Feb 3, 2022

@EmilyBourne In your example, the @python decorator is a way to tell Pyccel that the function should be evaluated at compile time, if the arguments are known at compile time.

There are surely situations where this mechanism can be useful. But the size of a stack array does not need to be known at compile time... 😉

@EmilyBourne
Member Author

There are surely situations where this mechanism can be useful. But the size of a stack array does not need to be known at compile time... 😉

It doesn't in Fortran, but I am less sure about C, and I know it would need to be known in C++ if we ever want to add another language.

@yguclu
Member

yguclu commented Feb 3, 2022

Variable-length arrays are part of the C99 standard, see here. This works, for example:

#include <stdio.h>

int sum_1_to_(int n)
{
        int a[n];
        for (int i = 0; i < n; ++i)
                a[i] = 1 + i;

        int c = 0;
        for (int i = 0; i < n; ++i)
                c += a[i];

        return c;
}

int main(int argc, char* argv[])
{
        int n = 5;

        printf("sum_1_to_(%d)    = %d\n", n, sum_1_to_(n));
        printf("%d * (%d + 1) / 2 = %d\n", n, n, n * (n + 1) / 2);

        return 0;
}

I did not know that C++ was so slow at adopting this. Apparently it is part of the C++14 standard

@EmilyBourne
Member Author

I did not know that C++ was so slow at adopting this. Apparently it is part of the C++14 standard

Do you have a reference for this? The only things I can find say it will be in the C++14 standard. But here they say that in the end it didn't make it into C++14 or C++17.

Generally a compiler can optimise better for a known size, so even if you don't have to, if you are able to calculate the size at compile time then that's usually better

@yguclu
Member

yguclu commented Feb 4, 2022

I did not know that C++ was so slow at adopting this. Apparently it is part of the C++14 standard

Do you have a reference for this? The only things I can find say it will be in the C++14 standard. But here they say that in the end it didn't make it into C++14 or C++17.

Generally a compiler can optimise better for a known size, so even if you don't have to, if you are able to calculate the size at compile time then that's usually better

I stand corrected. VLAs did not make it into any C++ standard 🤷

EmilyBourne added a commit that referenced this issue Feb 4, 2022
@EmilyBourne
Member Author

After discussion, it seems that while this could be useful it is not necessary or a priority. It does not work currently so the small amount of existing code can be removed and reinserted if and when it is appropriate.

yguclu pushed a commit that referenced this issue Feb 7, 2022
In order to facilitate future development on issues such as #227, it is important to have a clean Scope class that is easy to work with from anywhere in pyccel. This is the first PR cleaning up the scope. This PR does not change the way pyccel works at all. It just moves the Scope to its own file and puts in place some of the functions which will be useful later. This will make future PRs easier to follow.

**Commit Summary**
- Move `Scope` class to its own file
- Change storage system inside `Scope` to avoid forgetting objects inside the `imports`
- Add accessor functions to `Scope` to interact with `Scope` directly rather than via the `SemanticParser`
- Remove static functions
- Remove `@python` decorator. Fixes #668
@saidctb
Member

saidctb commented Apr 2, 2024

If I have understood correctly, the @python decorator is similar to constexpr in C++. It implies that the function is executed by the compiler and the result is used directly in the code. But historically it existed to avoid Pyccel's limitations rather than to exploit this feature.

I have looked into this and I think this feature can be really useful in many instances where we need to precompute some expressions and hardcode the results directly in the generated code to make it faster.
We could also use it in unit testing to generate a single executable that does everything, instead of separating the implementation from the unit tests.
For example we could have something like:

import numpy as np

# Matrix Matrix product
def matmat(A:'float[:,:]', B:'float[:,:]'):
    """
    Compute the matrix matrix product.

    """
    ...

    return C
#======================================================
# UNIT TESTING
# Use the precompute decorator to create the test cases at compile time
@precompute
def generate_matrices(n:'int'):
    A = np.random.rand(n,n)
    B = np.random.rand(n,n)
    C = A@B
    return A,B,C

if __name__ == '__main__':

    tol = 1e-10    # tolerance for the error in the matrix product
    A1,B1,C1 = generate_matrices(5)
    A2,B2,C2 = generate_matrices(8)
    A3,B3,C3 = generate_matrices(12)

    err1 = abs(C1-matmat(A1,B1)).sum()
    err2 = abs(C2-matmat(A2,B2)).sum()
    err3 = abs(C3-matmat(A3,B3)).sum()
    for r in (err1, err2, err3):
        if abs(r)<tol:
            print("PASSED!")
        else:
            print('FAILED!')
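The proposed @precompute decorator does not exist in Pyccel, but the intent of the test above can be checked as plain Python by treating it as a no-op decorator and filling in matmat with a naive implementation (both are hypothetical stand-ins):

```python
import numpy as np

def precompute(func):
    # stand-in for the proposed decorator: in plain Python it does nothing
    return func

def matmat(A, B):
    # naive stand-in for the elided matrix-matrix product implementation
    n, m, p = A.shape[0], A.shape[1], B.shape[1]
    C = np.zeros((n, p))
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i, j] += A[i, k] * B[k, j]
    return C

@precompute
def generate_matrices(n):
    A = np.random.rand(n, n)
    B = np.random.rand(n, n)
    C = A @ B
    return A, B, C

tol = 1e-10
results = []
for n in (5, 8, 12):
    A, B, C = generate_matrices(n)
    # naive product and BLAS product agree to rounding error
    results.append(abs(C - matmat(A, B)).sum() < tol)
```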

@saidctb saidctb reopened this Apr 2, 2024
@saidctb
Member

saidctb commented Apr 2, 2024

We can also have an example like this

import numpy as np

# Define the functions to be integrated
def f1(x:'float'):
    return x**2 + np.sin(x)

def f2(x:'float'):
    return np.cos(x)
    
def f3(x:'float'):
    return np.tan(x)**2
    
def f4(x:'float'):
    return x**2 + x**3

# numerical integration
def numerical_integral(func:'(float)(float)', a:'float', b:'float'):
    """
    Compute the definite integral of a function numerically using the trapezoidal rule.

    :param func: Function to integrate.
    :param a: Lower limit of integration.
    :param b: Upper limit of integration.
    :return: Numerical result of the definite integral.
    """
    ...
    return result

#======================================================
# UNIT TESTING
def numerical_integrals(a:'float',b:'float'):
    integral1 = numerical_integral(f1, a, b)
    integral2 = numerical_integral(f2, a, b)
    integral3 = numerical_integral(f3, a, b)
    integral4 = numerical_integral(f4, a, b)
    return integral1,integral2, integral3, integral4

# Use the precompute decorator to compute exact integrals
@precompute
def exact_integrals(a:'float',b:'float'):
    import sympy as sp
    from sympy.abc import x
    # note: this assumes f1, ..., f4 can be evaluated symbolically
    integral1 = sp.integrate(f1(x), (x, a, b))
    integral2 = sp.integrate(f2(x), (x, a, b))
    integral3 = sp.integrate(f3(x), (x, a, b))
    integral4 = sp.integrate(f4(x), (x, a, b))
    return integral1,integral2, integral3, integral4

if __name__ == '__main__':

    tol = 1e-6    # tolerance for the quadrature error
    num = numerical_integrals(0.,1.)
    ext = exact_integrals(0.,1.)
    for r1,r2 in zip(num, ext):
        if abs(r1-r2)<tol:
            print("PASSED!")
        else:
            print('FAILED!')
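The same idea can be exercised as plain Python, using SymPy for the exact integrals and a simple trapezoidal rule for the numerical ones. In this sketch the integrands f1..f4 are rewritten with sympy functions so that they can be integrated symbolically:

```python
import numpy as np
import sympy as sp

x = sp.Symbol('x')
# sympy versions of the integrands f1..f4 above
integrands = [x**2 + sp.sin(x), sp.cos(x), sp.tan(x)**2, x**2 + x**3]

def numerical_integral(func, a, b, n=10000):
    # composite trapezoidal rule on n uniform subintervals
    xs = np.linspace(a, b, n + 1)
    ys = func(xs)
    h = (b - a) / n
    return h * (ys[0] / 2 + ys[1:-1].sum() + ys[-1] / 2)

tol = 1e-6
results = []
for expr in integrands:
    exact = float(sp.integrate(expr, (x, 0, 1)))
    num = numerical_integral(sp.lambdify(x, expr, 'numpy'), 0.0, 1.0)
    results.append(abs(num - exact) < tol)
```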

@saidctb
Member

saidctb commented Apr 2, 2024

Another example using gradient descent

import numpy as np

# Define the function to be optimized
def f(x:'float'):
    return (x - 2) ** 2 + np.sin(x)

# Gradient descent optimization
def gradient_descent(func, initial_guess, learning_rate=0.01, max_iter=1000, tol=1e-6):
    x = initial_guess
    for _ in range(max_iter):
        grad = (func(x + tol) - func(x)) / tol  # Approximate gradient
        x_new = x - learning_rate * grad
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

#======================================================
# UNIT TESTING
@precompute
def symbolic_optimization():
    import sympy as sp
    x = sp.Symbol('x')
    gradient = sp.diff(f(x), x)
    # the gradient equation is transcendental, so sp.solve cannot handle it;
    # use a numerical solver with an initial guess instead
    critical_point = sp.nsolve(gradient, x, 2.0)
    return float(critical_point)

@precompute
def exact_critical_points():
    return symbolic_optimization()

if __name__ == '__main__':
    initial_guess = 0  # Initial guess for numerical optimization

    # Numerical optimization using gradient descent
    numerical_result = gradient_descent(f, initial_guess)

    # Exact optimization
    exact_result = exact_critical_points()

    # Unit test
    if np.isclose(numerical_result, exact_result):
        print("PASSED!")
    else:
        print("FAILED!")
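This one can also be checked as plain Python. The gradient equation 2*(x - 2) + cos(x) = 0 is transcendental, so the sketch below finds the exact critical point with sp.nsolve and compares it with the gradient descent result:

```python
import numpy as np
import sympy as sp

def f(x):
    return (x - 2) ** 2 + np.sin(x)

def gradient_descent(func, initial_guess, learning_rate=0.01, max_iter=10000, tol=1e-6):
    # forward-difference gradient descent, as in the sketch above
    x = initial_guess
    for _ in range(max_iter):
        grad = (func(x + tol) - func(x)) / tol  # approximate gradient
        x_new = x - learning_rate * grad
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

# exact critical point: numerical root of the symbolic gradient, near x = 2
xs = sp.Symbol('x')
gradient = sp.diff((xs - 2) ** 2 + sp.sin(xs), xs)
exact_result = float(sp.nsolve(gradient, xs, 2.0))

numerical_result = gradient_descent(f, 0.0)
assert abs(numerical_result - exact_result) < 1e-3
```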
