Add Conjugate Gradient example to benchmarks #1599
Conversation
A few minor changes.
static array A;
static array spA; // Sparse A
static array x0;
This looks too similar to a hex number.
x0 is standard mathematical notation in conjugate gradient.
@shehzan10 FYI, try to use X0 in such cases.
array r = b - matmul(A, x);
array p = r;

for (int i = 0; i < maxIter; ++i) {
Doesn't the timeit function execute this multiple times anyway?
This is not for timing; conjugate gradient is an iterative solver.
Right. I don't know what I was thinking.
x = x + tile(alpha, Ap.dims())*p;
array beta_num = dot(r, r);
array beta = beta_num/alpha_num;
p = r + tile(beta, p.dims())*p;
p *= r + tile(beta, p.dims());
@umar456 These expressions are not equivalent. We want something like this: p = r + scalar*p.
Doh! Clearly it's time for bed.
array alpha_den = dot(p, Ap);
array alpha = alpha_num/alpha_den;
r = r - tile(alpha, Ap.dims())*Ap;
x = x + tile(alpha, Ap.dims())*p;
x += tile(alpha, Ap.dims())*p;
array alpha_num = dot(r, r);
array alpha_den = dot(p, Ap);
array alpha = alpha_num/alpha_den;
r = r - tile(alpha, Ap.dims())*Ap;
r -= tile(alpha, Ap.dims())*Ap;
Compare the performance and memory usage of sparse vs dense using conjugate gradient example
[skip arrayfire ci]