Feature details

The qml.kernels.utils.py file contains the utility functions to compute the square kernel matrix of a training set as well as the kernel matrix between training and test data. There are some aspects that could be updated though:
These functions are not compatible with all frameworks: for example, the use of np.array in them prohibits using them with Torch, JITting them, or computing their derivative in JAX even without JITting.
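A minimal sketch of how this first point could be addressed (simplified, hypothetical signature; in PennyLane the dispatching stack function would be qml.math.stack, while np.stack is used below only so the snippet runs standalone):

```python
import numpy as np

def square_kernel_matrix(X, kernel, stack=np.stack):
    # Collect the entries in a flat list and stack them with the interface's
    # own stack function (qml.math.stack in PennyLane), which dispatches to
    # the entries' framework (Torch, JAX, ...) and keeps them differentiable.
    # Calling np.array on a list of such tensors would cast them to NumPy
    # and break both differentiation and JITting.
    N = len(X)
    entries = [kernel(x1, x2) for x1 in X for x2 in X]
    return stack(entries).reshape((N, N))

gaussian = lambda x1, x2: np.exp(-((x1 - x2) ** 2))
K = square_kernel_matrix([0.0, 1.0], gaussian)  # 2x2 matrix, ones on the diagonal
```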
The Returns section of the docstring of kernel_matrix reads "The square matrix of kernel values", which was copied from square_kernel_matrix and is incorrect (the matrix between training and test data is rectangular in general).
As an additional feature, one could support the simultaneous computation of batched kernel matrices from devices with shot vectors: when an iterable of integers is passed to the shots of a default qubit device, the sampling of the kernel value for all shot settings is performed from a single circuit execution, which is much more efficient than repeating the simulation. By allowing square_kernel_matrix and kernel_matrix to process non-scalar return values of the passed function kernel, this feature can be made available to kernel matrices.
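To illustrate the shape convention such batched kernel matrices would have, the kernel below is a hypothetical stand-in that mimics a shot-vector device by returning one noisy estimate per shot setting:

```python
import numpy as np

rng = np.random.default_rng(0)

def batched_kernel(x1, x2, shots=(10, 100, 1000)):
    # Stand-in for a QNode on a device with a shot vector: one estimate of
    # the kernel value per shot setting (noise shrinks with more shots),
    # all obtained from a single circuit execution on the device.
    exact = np.exp(-((x1 - x2) ** 2))
    return np.array([exact + rng.normal(0.0, 1.0 / np.sqrt(s)) for s in shots])

X = [0.0, 0.5, 1.0]
N = len(X)
entries = [batched_kernel(x1, x2) for x1 in X for x2 in X]
# One N x N kernel matrix per shot setting, stacked along the last axis.
K = np.array(entries).reshape((N, N, -1))
```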
The value 1. filled in for the diagonal when assume_normalized_kernel=True needs to be adapted to the output shape of the kernel function.
The reshape statement has to be made more flexible, for example via

```python
...
shape = (N, N) if len(qml.math.shape(matrix[1])) == 0 else (N, N, -1)
return np.array(matrix).reshape(shape)
```
which of course would need to be adapted to the changes in the first bullet point.
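Putting the last two bullet points together, a minimal sketch (with a hypothetical batched kernel; the real functions would use qml.math instead of plain NumPy, as per the first bullet point):

```python
import numpy as np

def gaussian_batch(x1, x2):
    # Hypothetical batched kernel returning two values per evaluation.
    return np.array([np.exp(-((x1 - x2) ** 2)), np.exp(-2.0 * (x1 - x2) ** 2)])

def square_kernel_matrix(X, kernel, assume_normalized_kernel=True):
    N = len(X)
    matrix = []
    for i in range(N):
        for j in range(N):
            if assume_normalized_kernel and i == j:
                matrix.append(None)  # placeholder, filled in below
            else:
                matrix.append(kernel(X[i], X[j]))
    # Fill diagonal placeholders with ones matching the kernel output shape
    # (matrix[1] is the first off-diagonal entry, assuming N > 1).
    fill = np.ones_like(matrix[1]) if N > 1 else 1.0
    matrix = [fill if entry is None else entry for entry in matrix]
    # Flexible reshape: scalar kernels give (N, N), batched ones (N, N, -1).
    shape = (N, N) if len(np.shape(matrix[1])) == 0 else (N, N, -1)
    return np.array(matrix).reshape(shape)

K = square_kernel_matrix([0.0, 1.0], gaussian_batch)  # shape (2, 2, 2)
```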
Implementation
No response
How important would you say this feature is?
1: Not important. Would be nice to have.
Additional information
No response