[QST] Composing operators #686
I may as well provide all the code. It's a work in progress, so please excuse any gross inefficiencies it contains. Assume k = 2, m = 2000, n = 70,000, d = 784.

```cpp
matx::tensor_t<float, 2> GsDBSCAN::findDistancesMatX(matx::tensor_t<float, 2> &X_t, matx::tensor_t<int, 2> &A_t, matx::tensor_t<int, 2> &B_t, float alpha) {
    const int k = A_t.Shape()[1] / 2;
    const int m = B_t.Shape()[1];
    const int n = X_t.Shape()[0];
    const int d = X_t.Shape()[1];
    const int batchSize = GsDBSCAN::findDistanceBatchSize(alpha, n, d, k, m); // Around 300

    auto AFlat_t = matx::reshape(A_t, {n * 2 * k});
    auto ABatchFlat_t = matx::make_tensor<int>({batchSize * 2 * k});
    auto BBatch_t = matx::make_tensor<int>({ABatchFlat_t.Size(0), m});
    auto XBatch_t = matx::make_tensor<float>({2 * batchSize * k * m, d});
    auto XSubset_t = matx::make_tensor<float>({batchSize, d});
    auto YBatch_t = matx::make_tensor<float>({batchSize, 2 * k * m, d});
    auto distancesBatch_t = matx::make_tensor<float>({batchSize, 2 * k * m});
    auto distances_t = matx::make_tensor<float>({n, 2 * k * m});

    for (int i = 0; i < n; i += batchSize) {
        int maxBatchIdx = i + batchSize - 1; // Index within X along the ROWS

        (XSubset_t = matx::slice(X_t, {i, 0}, {maxBatchIdx + 1, matx::matxEnd})).run();
        // XSubset_t = matx::slice(X_t, {i, 0}, {maxBatchIdx + 1, matx::matxEnd});

        (ABatchFlat_t = matx::slice(AFlat_t, {i * 2 * k}, {(maxBatchIdx + 1) * 2 * k})).run();
        // ABatchFlat_t = matx::slice(AFlat_t, {i * 2 * k}, {(maxBatchIdx + 1) * 2 * k});

        (BBatch_t = matx::remap<0>(B_t, ABatchFlat_t)).run();
        // BBatch_t = matx::remap<0>(B_t, ABatchFlat_t);

        auto BBatch_t_flat = matx::flatten(BBatch_t);

        (XBatch_t = matx::remap<0>(X_t, BBatch_t_flat)).run();
        // XBatch_t = matx::remap<0>(X_t, BBatch_t_flat);

        auto XBatchReshaped_t = matx::reshape(XBatch_t, {batchSize, 2 * k * m, d});
        auto XSubsetReshaped_t = matx::reshape(XSubset_t, {batchSize, 1, d});

        (YBatch_t = XBatchReshaped_t - matx::repmat(XSubsetReshaped_t, {1, 2 * k * m, 1})).run(); // repmat is a workaround for subtracting tensors with naively incompatible shapes

        (distancesBatch_t = matx::vector_norm(YBatch_t, {2}, matx::NormOrder::L2)).run();

        (matx::slice(distances_t, {i, 0}, {maxBatchIdx + 1, matx::matxEnd}) = distancesBatch_t).run();
    }

    return distances_t;
}
```

Essentially, what I find is that if I swap the `.run()` statements for their commented-out deferred equivalents, the resulting distances_t tensor is different, failing the tests I've written to check its contents. FYI, these results are different after calling
This line copies a permutation of B into A based on the indices in idx.
This line does nothing since you are not storing the output into an operator.
You need to construct the operator and pass it into the next operator to chain them together. That is, your line should look something like this:
Now A_primeOp can be run directly like this:
Or passed into another operation like this:
The advantage of chaining operators into other operators is that you get fewer kernel calls, which generally leads to significantly less launch latency and memory traffic. Each call to .run() corresponds to a single kernel call.
Also, I didn't inspect your code closely enough to see if you have this issue, but another possible explanation would be if you were trying to write to indices that other threads read from in the same kernel. Something like this is not allowed:
The reason is that each element of the tensor gets processed by a different thread. This can lead to a race condition between the read of A(i,j) and the write of A(i,j).
Ok, to make sure I understand: `A_prime = matx::remap<0>(B, idx)` returns an *operator*? - that you then need to store? Hence why we need `auto A_primeOp = (A_prime = matx::remap<0>(B, idx));` instead?
Right, you store the operator to A_primeOp. At this point nothing has happened yet. Once you call run on the operator, it will run. However, instead of calling run on the operator, you can pass it into another operator, which will then return a new operator composed of the earlier operator and the new one. Once you call run on the outermost operator, all the inner operators will run in the correct order.
Ok thx. For `auto A_primeOp = (A_prime = matx::remap<0>(B, idx));`: is `A_prime` more or less a dummy variable then? I.e. what I'm trying to say is: if you wish to do deferred execution of the operators, do you more or less have to use a dummy variable? - assuming that's what `A_prime` is.
I'm not sure what you mean by a dummy variable here. A_primeOp is an object with a compiler-deduced type which stores all the information required to run the operation.
In that example, A_prime is a tensor that you're storing to, and A_primeOp is a variable describing the operation of storing into that tensor. When you call run(), that triggers the statement to do what your operator describes, which is storing the output into that tensor. In reality you likely wouldn't store that statement in a variable, and would instead just do `(A_prime = matx::remap<0>(B, idx)).run()`.
@cliffburdick ok thx. But if I wanted optimality in terms of not having to launch the kernel multiple times, etc., I would be best off storing the statement, right?
Yes. See this example, where several lines are composed and only the final line runs the kernel.
Ok thx, I followed the example. But now I've got a small error that I feel like shouldn't be a problem... Would you know why the below doesn't work?

```cpp
auto YBatch_t_op = (XBatchReshaped_t_op - matx::repmat(XSubsetReshaped_t_op, {1, 2*k*m, 1}));
auto YBatch_t_norm_op = matx::vector_norm(YBatch_t_op, {2}, matx::NormOrder::L2); // Gives 'no instance of constructor' error
(matx::slice(distances_t, {i, 0}, {maxBatchIdx + 1, matx::matxEnd}) = YBatch_t_norm_op).run();
```
Can you paste the exact error?
Yep, it's quite long though.
Can you please paste all the code like you did above? I want to make sure this error makes sense for the type.
Yep, sure thing. I removed virtually all the initializations from before the loop - they're now not necessary.

```cpp
matx::tensor_t<float, 2> GsDBSCAN::findDistancesMatX(matx::tensor_t<float, 2> &X_t, matx::tensor_t<int, 2> &A_t, matx::tensor_t<int, 2> &B_t, float alpha) {
    const int k = A_t.Shape()[1] / 2;
    const int m = B_t.Shape()[1];
    const int n = X_t.Shape()[0];
    const int d = X_t.Shape()[1];
    const int batchSize = GsDBSCAN::findDistanceBatchSize(alpha, n, d, k, m);

    auto AFlat_t = matx::flatten(A_t);
    auto distances_t = matx::make_tensor<float>({n, 2 * k * m});

    for (int i = 0; i < n; i += batchSize) {
        int maxBatchIdx = i + batchSize - 1; // Index within X along the ROWS

        auto XSubset_t_op = matx::slice(X_t, {i, 0}, {maxBatchIdx + 1, matx::matxEnd});
        auto ABatchFlat_t_op = matx::slice(AFlat_t, {i * 2 * k}, {(maxBatchIdx + 1) * 2 * k});
        auto BBatch_t_op = matx::remap<0>(B_t, ABatchFlat_t_op);
        auto XBatch_t_op = matx::remap<0>(X_t, matx::flatten(BBatch_t_op));
        auto XBatchReshaped_t_op = matx::reshape(XBatch_t_op, {batchSize, 2 * k * m, d});
        auto XSubsetReshaped_t_op = matx::reshape(XSubset_t_op, {batchSize, 1, d});
        auto YBatch_t_op = (XBatchReshaped_t_op - matx::repmat(XSubsetReshaped_t_op, {1, 2 * k * m, 1})); // repmat is a workaround for subtracting tensors with naively incompatible shapes
        auto YBatch_t_norm_op = matx::vector_norm(YBatch_t_op, {2}, matx::NormOrder::L2);

        (matx::slice(distances_t, {i, 0}, {maxBatchIdx + 1, matx::matxEnd}) = YBatch_t_norm_op).run();
    }

    return distances_t;
}
```
Thanks. I'm not at a computer right now, but I'll take a look when I have a chance unless someone else beats me to it.
Ok thx v much
It looks like we have a template mismatch here. I think I understand what is going on. First look here: https://github.com/NVIDIA/MatX/blob/main/include/matx/operators/norm.h#L162-L163

```cpp
template <typename Op, int D>
```

What I believe is happening is this: `Op` is one type but `permop` is another type. This leads to the type passed into `detail::NormOp<Op...` being different than `permop`, causing this error. One fix would be to replace this:
with this:
If that doesn't work, then we can probably replace `<Op...>` with `<decltype(permop)...>`.
@luitjens ok thx. The first fix didn't seem to work, but replacing `<Op...>` with `<decltype(permop)...>` did.
Instead of closing this we should leave it open until we get a PR which fixes the issue.
Fixed in #696.
Hi. I couldn't find anything in the docs about this, but what is the best way to compose operators?
I've found that I sometimes get indeterminacy based on whether I call `run()` on an operator immediately, or wait until it's used somewhere down the line in another operator. Basically, I'm getting results similar to this situation:

I didn't want to send my full code because it may be confusing, but if it's necessary to debug this, I can provide it.
Thx.