Investigate and either fix or remove the `@python` decorator #668

Comments
The idea behind the `@sympy` decorator: given

```python
def f(x):
    from sympy import tan
    return tan(x).diff(x)
```

pyccel is going to execute the function, and generate

```python
def f(x):
    return tan(x)**2 + 1
```

---
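In plain Python, the behaviour described above could be sketched with a decorator like the following. The decorator name and the use of `lambdify` are illustrative assumptions, not pyccel's actual implementation: pyccel would perform the substitution at code-generation time rather than at runtime.

```python
import sympy

def sympy_decorator(func):
    # Hypothetical sketch: execute the function body symbolically at
    # decoration time, then return a plain numeric function built from
    # the resulting expression.
    x = sympy.Symbol('x')
    expr = func(x)                   # e.g. tan(x).diff(x) -> tan(x)**2 + 1
    return sympy.lambdify(x, expr)   # compile the simplified expression

@sympy_decorator
def f(x):
    from sympy import tan
    return tan(x).diff(x)

# f is now an ordinary numeric function of the simplified expression
print(f(0.0))  # -> 1.0, since tan(0)**2 + 1 == 1
```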
We cannot think of a more general example, so the …

---

If I have understood correctly, the …

---
Searching for examples for …

```python
import numpy as np
from pyccel.decorators import python, stack_array

@python
def triangle_size(n : int):
    return sum(i for i in range(1,n+1))

@stack_array('x')
def first_5_pascal_triangle_rows():
    x = np.zeros(triangle_size(5))
    x[0] = 1
    row_start_idx = 1
    for n in range(2,6):
        x[row_start_idx] = 1
        x[row_start_idx+n-1] = 1
        for j in range(1,n-1):
            x[row_start_idx+j] = x[row_start_idx-n+1+j] + x[row_start_idx-n+j]
        row_start_idx += n
    print(x)

first_5_pascal_triangle_rows()
```

---
@EmilyBourne In your example the … There are surely situations where this mechanism can be useful. But the size of a stack array does not need to be known at compile time... 😉

---

It doesn't in Fortran, but I am less sure in C, and I know it needs to be known in C++ if we ever want to add another language.

---
Variable length arrays are part of the C99 standard, see here. This works, for example:

```c
#include <stdio.h>

int sum_1_to_(int n)
{
    int a[n];   /* variable length array: size known only at runtime */
    for (int i = 0; i < n; ++i)
        a[i] = 1 + i;
    int c = 0;
    for (int i = 0; i < n; ++i)
        c += a[i];
    return c;
}

int main(int argc, char* argv[])
{
    int n = 5;
    printf("sum_1_to_(%d) = %d\n", n, sum_1_to_(n));
    printf("%d * (%d + 1) / 2 = %d\n", n, n, n * (n + 1) / 2);
    return 0;
}
```

I did not know that C++ was so slow at adopting this. Apparently it is part of the C++14 standard.

---
Do you have a reference for this? The only things I can find say it *will* be in the C++14 standard, but here they say that in the end it did not make it into C++14 or C++17. Generally a compiler can optimise better for a known size, so even if you don't have to, if you are able to calculate the size at compile time then that is usually better.

---

I stand corrected. VLAs did not make it into any C++ standard 🤷

---
After discussion, it seems that while this could be useful it is not necessary or a priority. It does not work currently so the small amount of existing code can be removed and reinserted if and when it is appropriate. |
In order to facilitate future development on issues such as #227, it is important to have a clean `Scope` class that is easy to work with from anywhere in pyccel. This is the first PR cleaning up the scope. This PR does not change the way pyccel works at all. It just moves the `Scope` to its own file and puts in place some of the functions which will be useful later. This will make future PRs easier to follow.

**Commit Summary**

- Move `Scope` class to its own file
- Change storage system inside `Scope` to avoid forgetting objects inside the `imports`
- Add accessor functions to `Scope` to interact with `Scope` directly rather than via the `SemanticParser`
- Remove static functions
- Remove `@python` decorator. Fixes #668
I have looked into this and I think this feature can be really useful in many instances, where we need to precompute some expressions and hardcode them directly in the code to make the code faster.

```python
import numpy as np

# Matrix-matrix product
def matmat(A:'float[:,:]', B:'float[:,:]'):
    """
    Compute the matrix-matrix product.
    """
    ...
    return C

#======================================================
# UNIT TESTING

# Use the precompute decorator to create the test cases at compilation time
@precompute
def generate_matrices(n:'int'):
    A = np.random.rand(n,n)
    B = np.random.rand(n,n)
    C = A@B
    return A,B,C

if __name__ == '__main__':
    tol = 1e-12  # tolerance (not defined in the original snippet)
    A1,B1,C1 = generate_matrices(5)
    A2,B2,C2 = generate_matrices(8)
    A3,B3,C3 = generate_matrices(12)
    err1 = abs(C1-matmat(A1,B1)).sum()
    err2 = abs(C2-matmat(A2,B2)).sum()
    err3 = abs(C3-matmat(A3,B3)).sum()
    for r in (err1, err2, err3):
        if abs(r) < tol:
            print("PASSED!")
        else:
            print('FAILED!')
```

---
We can also have an example like this:

```python
import numpy as np

# Define the functions to be integrated
def f1(x:'float'):
    return x**2 + np.sin(x)

def f2(x:'float'):
    return np.cos(x)

def f3(x:'float'):
    return np.tan(x)**2

def f4(x:'float'):
    return x**2 + x**3

# Numerical integration
def numerical_integral(func:'(float)(float)', a:'float', b:'float'):
    """
    Compute the definite integral of a function numerically using the trapezoidal rule.

    :param func: Function to integrate.
    :param a: Lower limit of integration.
    :param b: Upper limit of integration.
    :return: Numerical result of the definite integral.
    """
    ...
    return result

#======================================================
# UNIT TESTING
def numerical_integrals(a:'float', b:'float'):
    integral1 = numerical_integral(f1, a, b)
    integral2 = numerical_integral(f2, a, b)
    integral3 = numerical_integral(f3, a, b)
    integral4 = numerical_integral(f4, a, b)
    return integral1, integral2, integral3, integral4

# Use the precompute decorator to compute exact integrals
@precompute
def exact_integrals(a:'float', b:'float'):
    import sympy as sp
    from sympy.abc import x
    integral1 = sp.integrate(f1(x), (x, a, b))
    integral2 = sp.integrate(f2(x), (x, a, b))
    integral3 = sp.integrate(f3(x), (x, a, b))
    integral4 = sp.integrate(f4(x), (x, a, b))
    return integral1, integral2, integral3, integral4

if __name__ == '__main__':
    tol = 1e-6  # tolerance (not defined in the original snippet)
    num = numerical_integrals(0., 1.)
    ext = exact_integrals(0., 1.)
    for r1, r2 in zip(num, ext):
        if abs(r1-r2) < tol:
            print("PASSED!")
        else:
            print('FAILED!')
```

---
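The `numerical_integral` body is elided in the snippet above; a minimal trapezoidal-rule sketch could look like the following (the panel count `n = 1000` is an assumption for illustration):

```python
import numpy as np

def numerical_integral(func, a: float, b: float, n: int = 1000) -> float:
    # Composite trapezoidal rule on n equal panels:
    # h * (f(x0)/2 + f(x1) + ... + f(x_{n-1}) + f(xn)/2)
    x = np.linspace(a, b, n + 1)
    y = np.array([func(xi) for xi in x])
    h = (b - a) / n
    return float(h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1]))

# Integral of x**2 on [0, 1] is 1/3; the trapezoidal estimate is close.
print(numerical_integral(lambda x: x**2, 0.0, 1.0))
```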
Another example, using gradient descent:

```python
import numpy as np

# Define the function to be optimized
def f(x:'float'):
    return (x - 2) ** 2 + np.sin(x)

# Gradient descent optimization
def gradient_descent(func, initial_guess, learning_rate=0.01, max_iter=1000, tol=1e-6):
    x = initial_guess
    for _ in range(max_iter):
        grad = (func(x + tol) - func(x)) / tol  # Forward-difference approximation of the gradient
        x_new = x - learning_rate * grad
        if abs(x_new - x) < tol:
            break
        x = x_new
    return x

#======================================================
# UNIT TESTING
@precompute
def symbolic_optimization():
    import sympy as sp
    x = sp.Symbol('x')
    gradient = sp.diff(f(x), x)
    critical_points = sp.solve(gradient, x)
    return float(critical_points[0])

@precompute
def exact_critical_points():
    return symbolic_optimization()

if __name__ == '__main__':
    initial_guess = 0  # Initial guess for numerical optimization
    # Numerical optimization using gradient descent
    numerical_result = gradient_descent(f, initial_guess)
    # Exact optimization
    exact_result = exact_critical_points()
    # Unit test
    if np.isclose(numerical_result, exact_result):
        print("PASSED!")
    else:
        print("FAILED!")
```

---
**Describe the bug**

In `pyccel.decorators` there is a `@python` decorator which leads to `pyccel.ast.core.PythonFunction` objects being created. This decorator has no doc string to explain what it does. Old documentation states: …

We should investigate the utility of this decorator. If it is not useful then it should be removed. If it is useful then we should investigate whether it still works, and fix it if it doesn't. A proper doc-string should also be added to the decorator to explain its purpose to users.