Can you support Tensor datatype from Eigen library? #1377
Comments
Please also add support for Eigen::TensorMap at the same time. I added support for these in my code (the py::class_<Eigen::Tensor...> way) and it works, but not optimally:
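For illustration, a minimal sketch of what such a py::class_-based binding could look like (the module name, the class name Tensor3d, the fixed rank/scalar type, and the exposed constructor are assumptions of this sketch, not the commenter's original code):
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
#include <unsupported/Eigen/CXX11/Tensor>
namespace py = pybind11;
using Tensor3d = Eigen::Tensor<double, 3>;
PYBIND11_MODULE(tensor_demo, m) {
    // expose the tensor as an opaque class implementing the buffer protocol,
    // so numpy can view its memory without a native type caster
    py::class_<Tensor3d>(m, "Tensor3d", py::buffer_protocol())
        .def(py::init([](py::ssize_t d0, py::ssize_t d1, py::ssize_t d2) {
            Tensor3d t(d0, d1, d2);
            t.setZero();
            return t;
        }))
        .def_buffer([](Tensor3d &t) -> py::buffer_info {
            // Eigen::Tensor is column-major by default, hence these strides
            return py::buffer_info(
                t.data(), sizeof(double), py::format_descriptor<double>::format(), 3,
                {t.dimension(0), t.dimension(1), t.dimension(2)},
                {(py::ssize_t) sizeof(double),
                 (py::ssize_t) (sizeof(double) * t.dimension(0)),
                 (py::ssize_t) (sizeof(double) * t.dimension(0) * t.dimension(1))});
        });
}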
|
That's basically what the whole |
I'm not sure if this is still important to you, but it's pretty easy to wrap an input ndarray in an Eigen::TensorMap. Here comes the cpp file, which returns a rank-3 ndarray with reversed ordering of an input ndarray:
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
#include <unsupported/Eigen/CXX11/Tensor>
namespace py = pybind11;
// return 3 dimensional ndarray with reversed order of input ndarray
template<class T>
py::array_t<T> eigenTensor(py::array_t<T> inArray) {
// request a buffer descriptor from Python
py::buffer_info buffer_info = inArray.request();
// extract data and shape of input array
T *data = static_cast<T *>(buffer_info.ptr);
std::vector<ssize_t> shape = buffer_info.shape;
// wrap ndarray in Eigen::Map:
// the second template argument is the rank of the tensor and has to be known at compile time
Eigen::TensorMap<Eigen::Tensor<T, 3>> in_tensor(data, shape[0], shape[1], shape[2]);
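// NB: Eigen::Tensor/TensorMap default to column-major while the numpy input
// is row-major; with the C-order strides claimed below, the two implicit
// transposes cancel for the cubic 2x2x2 test shape used later, but for
// general shapes the layout has to be handled explicitly (see the corrected
// example further down in this thread)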
// build result tensor with reverse ordering (dimensions reversed to match the (k, j, i) indexing below)
Eigen::Tensor<T, 3> out_tensor(shape[2], shape[1], shape[0]);
for (int i=0; i < shape[0]; i++) {
for (int j=0; j < shape[1]; j++) {
for (int k=0; k < shape[2]; k++) {
out_tensor(k, j, i) = in_tensor(i, j, k);
}
}
}
// return numpy array wrapping eigen tensor's pointer
return py::array_t<T>(shape, // shape
{shape[1] * shape[2] * sizeof(T), shape[2] * sizeof(T), sizeof(T)}, // strides (C order)
out_tensor.data()); // data pointer
}
PYBIND11_MODULE(eigen, m) {
m.def("eigenTensor", &eigenTensor<double>, py::return_value_policy::move,
py::arg("inArray"));
}
And you can test it with:
import numpy as np
from eigen import eigenTensor

def eigenTensorPython(inArray):
    outArray = np.zeros((2, 2, 2))
    for i in range(inArray.shape[0]):
        for j in range(inArray.shape[1]):
            for k in range(inArray.shape[2]):
                outArray[k, j, i] = inArray[i, j, k]
    return outArray

# initialize ndarray with unique entries
inArray = np.zeros((2, 2, 2))
for i in range(inArray.shape[0]):
    for j in range(inArray.shape[1]):
        for k in range(inArray.shape[2]):
            inArray[i, j, k] = i * 2 * 2 + j * 2 + k
print(inArray)
# print(eigenTensor(inArray))
# print(eigenTensorPython(inArray))
assert (eigenTensorPython(inArray) == eigenTensor(inArray)).all()
If it's not of interest to you, it might help others. Best wishes |
@JonasHarsch, so it is possible to pass an Eigen::Tensor from C++ to Python... and get it back from Python to C++ using pybind11? |
Yes, at least it was when I wrote the answer above. Maybe it's best if you try to get my example code working. Tell me if you have problems; then I can try it myself tomorrow.
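For reference, a minimal round-trip sketch along those lines (the module name roundtrip and function name roundTrip are illustrative; it assumes a rank-3 array of doubles):
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
#include <stdexcept>
#include <unsupported/Eigen/CXX11/Tensor>
namespace py = pybind11;
// accept a 3D numpy array, view it as an Eigen tensor, copy it on the
// C++ side, and return it to Python as a new numpy array
py::array_t<double> roundTrip(py::array_t<double, py::array::c_style | py::array::forcecast> in) {
    py::buffer_info info = in.request();
    if (info.ndim != 3)
        throw std::runtime_error("expected a rank-3 array");
    // row-major map, so indices mean the same as on the numpy side
    Eigen::TensorMap<Eigen::Tensor<double, 3, Eigen::RowMajor>> map(
        static_cast<double *>(info.ptr), info.shape[0], info.shape[1], info.shape[2]);
    Eigen::Tensor<double, 3, Eigen::RowMajor> copy = map; // C++-side copy to work with
    // c_style: strides are derived from the shape, no need to spell them out;
    // pybind11 copies the buffer here since no owning base object is passed
    return py::array_t<double, py::array::c_style>(
        {info.shape[0], info.shape[1], info.shape[2]}, copy.data());
}
PYBIND11_MODULE(roundtrip, m) {
    m.def("roundTrip", &roundTrip, py::arg("inArray"));
}
|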
Hi everyone, I just started working with pybind11. I was trying to tweak the above example by @JonasHarsch with something like:
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
#include <unsupported/Eigen/CXX11/Tensor>
namespace py = pybind11;
// return 2-dimensional ndarray of determinants, broadcast across the two trailing axes
template<class T>
py::array_t<T> calculateDeterminant(py::array_t<T> inArray) {
// request a buffer descriptor from Python
py::buffer_info buffer_info = inArray.request();
// extract data and shape of input array
T *data = static_cast<T *>(buffer_info.ptr);
std::vector<ssize_t> shape = buffer_info.shape;
std::vector<ssize_t> shape_out = {shape[2], shape[3]};
// wrap ndarray in Eigen::Map:
// the second template argument is the rank of the tensor and has to be known at compile time
Eigen::TensorMap<Eigen::Tensor<T, 4>> A(data, shape[0], shape[1], shape[2], shape[3]);
// build result tensor holding the determinants
Eigen::Tensor<T, 2> out_tensor(shape[2], shape[3]);
for (int i=0; i < shape_out[0]; i++) {
for (int j=0; j < shape_out[1]; j++) {
out_tensor(i, j) = A(0, 0, i, j) * (A(1, 1, i, j) * A(2, 2, i, j) - A(2, 1, i, j) * A(1, 2, i, j))
- A(0, 1, i, j) * (A(1, 0, i, j) * A(2, 2, i, j) - A(2, 0, i, j) * A(1, 2, i, j))
+ A(0, 2, i, j) * (A(1, 0, i, j) * A(2, 1, i, j) - A(2, 0, i, j) * A(1, 1, i, j));
}
}
// return numpy array wrapping eigen tensor's pointer
return py::array_t<T>(shape_out,
out_tensor.data()); // data pointer
}
PYBIND11_MODULE(eigen, m) {
m.def("calculateDeterminant", &calculateDeterminant<double>, py::return_value_policy::move,
py::arg("inArray"));
}
which somehow gives an incorrect answer for a test array of shape (3, 3, 1000, 4). Could anyone point me to what could be going wrong? Of course, when I throw a single matrix at it, i.e. shape (3, 3, 1, 1), the answer is correct. To be complete, here is how I would achieve this in python:
from numpy import random, zeros_like
def det(A):
    detA = zeros_like(A[0, 0])
    detA = A[0, 0] * (A[1, 1] * A[2, 2] -
                      A[1, 2] * A[2, 1]) - \
           A[0, 1] * (A[1, 0] * A[2, 2] -
                      A[1, 2] * A[2, 0]) + \
           A[0, 2] * (A[1, 0] * A[2, 1] -
                      A[1, 1] * A[2, 0])
    return detA
F = random.random((3,3,1000,4))
detF = det(F) |
@bhaveshshrimali In which way is the answer incorrect? Contrary to @JonasHarsch's example from above, you are not passing strides, and Eigen seems to be column-major/Fortran-style by default. Is that the issue?
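To make the layout point concrete, here is the plain stride arithmetic for an array of shape (3, 3, 1000, 4) of doubles (a standalone sketch, independent of the code above):
#include <cstddef>
#include <cstdio>
int main() {
    const std::size_t s[4] = {3, 3, 1000, 4};
    const std::size_t item = sizeof(double); // 8 bytes
    // C/row-major order (numpy default): the last index varies fastest
    const std::size_t c[4] = {s[1] * s[2] * s[3] * item, s[2] * s[3] * item, s[3] * item, item};
    // Fortran/column-major order (Eigen::Tensor default): the first index varies fastest
    const std::size_t f[4] = {item, s[0] * item, s[0] * s[1] * item, s[0] * s[1] * s[2] * item};
    std::printf("C strides: %zu %zu %zu %zu\n", c[0], c[1], c[2], c[3]); // 96000 32000 32 8
    std::printf("F strides: %zu %zu %zu %zu\n", f[0], f[1], f[2], f[3]); // 8 24 72 72000
}
|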
@YannickJadoul, thanks! Here is the updated code, now passing strides explicitly, but it still gives a wrong answer for the larger array:
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
#include <unsupported/Eigen/CXX11/Tensor>
namespace py = pybind11;
// return 2-dimensional ndarray of determinants, broadcast across the two trailing axes
template<class T>
py::array_t<T> calculateDeterminant(py::array_t<T> inArray) {
// request a buffer descriptor from Python
py::buffer_info buffer_info = inArray.request();
// extract data and shape of input array
T *data = static_cast<T *>(buffer_info.ptr);
std::vector<ssize_t> shape = buffer_info.shape;
std::vector<ssize_t> shape_out = {shape[2], shape[3]};
// wrap ndarray in Eigen::Map:
// the second template argument is the rank of the tensor and has to be known at compile time
Eigen::TensorMap<Eigen::Tensor<T, 4>> A(data, shape[0], shape[1], shape[2], shape[3]);
// build result tensor holding the determinants
Eigen::Tensor<T, 2> out_tensor(shape[2], shape[3]);
for (int i=0; i < shape_out[0]; i++) {
for (int j=0; j < shape_out[1]; j++) {
out_tensor(i, j) = A(0, 0, i, j) * (A(1, 1, i, j) * A(2, 2, i, j) - A(2, 1, i, j) * A(1, 2, i, j))
- A(0, 1, i, j) * (A(1, 0, i, j) * A(2, 2, i, j) - A(2, 0, i, j) * A(1, 2, i, j))
+ A(0, 2, i, j) * (A(1, 0, i, j) * A(2, 1, i, j) - A(2, 0, i, j) * A(1, 1, i, j));
}
}
// return numpy array wrapping eigen tensor's pointer
return py::array_t<T>(shape_out,
{shape[1] * sizeof(T), sizeof(T)},
out_tensor.data()); // data pointer
}
PYBIND11_MODULE(eigen, m) {
m.def("calculateDeterminant", &calculateDeterminant<double>, py::return_value_policy::move,
py::arg("inArray"));
}
Test:
from numpy import zeros_like, random, allclose
from eigen import calculateDeterminant
def det(A):
    detA = zeros_like(A[0, 0])
    detA = A[0, 0] * (A[1, 1] * A[2, 2] -
                      A[1, 2] * A[2, 1]) - \
           A[0, 1] * (A[1, 0] * A[2, 2] -
                      A[1, 2] * A[2, 0]) + \
           A[0, 2] * (A[1, 0] * A[2, 1] -
                      A[1, 1] * A[2, 0])
    return detA
arr = random.random((3,3,1000,4))
detCpp = calculateDeterminant(arr)
detPy = det(arr)
assert allclose(detCpp, detPy) # fails
arr2 = random.random((3,3,1,1))
assert allclose(calculateDeterminant(arr2), det(arr2))  # passes
Even though the latter (with a single matrix of shape (3, 3, 1, 1)) passes, the former still fails. |
I messed up row-major and column-major storage layout in my previous example, so thank you very much for pointing out this error with this example! Below you find working code with the correct ordering on the Eigen side, matching numpy's default storage layout, see here. There were three mistakes in your code: (1) wrong input tensor storage layout, (2) wrong output tensor storage layout, and (3) wrong strides.
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
#include <unsupported/Eigen/CXX11/Tensor>
namespace py = pybind11;
// return 2-dimensional ndarray of determinants, broadcast across the two trailing axes
template<class T>
py::array_t<T> calculateDeterminant(py::array_t<T> inArray) {
// request a buffer descriptor from Python
py::buffer_info buffer_info = inArray.request();
// extract data and shape of input array
T *data = static_cast<T *>(buffer_info.ptr);
std::vector<ssize_t> shape = buffer_info.shape;
std::vector<ssize_t> shape_out = {shape[2], shape[3]};
// wrap ndarray in Eigen::Map:
// the second template argument is the rank of the tensor and has to be known at compile time
Eigen::TensorMap<Eigen::Tensor<T, 4, Eigen::RowMajor>> A(data, shape[0], shape[1], shape[2], shape[3]); // (1) corrected storage layout of input tensor
// build result tensor holding the determinants
Eigen::Tensor<T, 2, Eigen::RowMajor> out_tensor(shape[2], shape[3]); // (2) corrected storage layout of output tensor
for (int i=0; i < shape_out[0]; i++) {
for (int j=0; j < shape_out[1]; j++) {
out_tensor(i, j) = A(0, 0, i, j) * (A(1, 1, i, j) * A(2, 2, i, j) - A(1, 2, i, j) * A(2, 1, i, j))
- A(0, 1, i, j) * (A(1, 0, i, j) * A(2, 2, i, j) - A(1, 2, i, j) * A(2, 0, i, j))
+ A(0, 2, i, j) * (A(1, 0, i, j) * A(2, 1, i, j) - A(1, 1, i, j) * A(2, 0, i, j));
}
}
// return numpy array wrapping eigen tensor's pointer
return py::array_t<T>(shape_out,
{shape_out[1] * sizeof(T), sizeof(T)}, // (3) corrected strides
out_tensor.data()); // data pointer
// as YannickJadoul pointed out, an even easier solution is:
/*
return py::array_t<T, py::array::c_style>(shape_out,
out_tensor.data()); // data pointer
*/
}
PYBIND11_MODULE(eigen, m) {
m.def("calculateDeterminant", &calculateDeterminant<double>, py::return_value_policy::move,
py::arg("inArray"));
}
And the test:
import numpy as np
from eigen import calculateDeterminant
def det(A):
    detA = np.zeros_like(A[0, 0])
    detA = A[0, 0] * (A[1, 1] * A[2, 2] - A[1, 2] * A[2, 1]) - \
           A[0, 1] * (A[1, 0] * A[2, 2] - A[1, 2] * A[2, 0]) + \
           A[0, 2] * (A[1, 0] * A[2, 1] - A[1, 1] * A[2, 0])
    return detA
arr = np.random.random((3,3,10,4))
detCpp = calculateDeterminant(arr)
detPy = det(arr)
assert np.allclose(detCpp, detPy) # passes
diff = detCpp - detPy
print(f'detCpp:\n{detCpp}')
print(f'detPy:\n{detPy}')
print(f'diff:\n{diff}')
arr2 = np.random.random((3,3,1,1))
assert np.allclose( calculateDeterminant(arr2), det(arr2)) # passes |
Thanks a ton @JonasHarsch!! (3) was pretty sloppy on my part :-) |
Thanks, @JonasHarsch! Minor addition: if you use py::array_t<T, py::array::c_style>, you don't need to pass the strides at all; C-style strides are calculated from the shape.
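For instance, the return statement at the end of the function above simplifies to:
return py::array_t<T, py::array::c_style>(shape_out,
                                          out_tensor.data()); // data pointer
|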
Thanks, @YannickJadoul, that's a really convenient way I wasn't aware of! I've adapted the solution with this option. |
Hi folks! And thank you for your brilliant library.
I need to pass 3-dimensional arrays to my pybind11-made Python module that wraps some C++ code. I want to pass it a 3D numpy.ndarray and have it automatically converted into an Eigen::Tensor. Can you support it?