GTH_SOLVE: Numba version #103
Conversation
In this particular instance, factoring out the loops should not provide performance gains, because of the automatic loop-lifting feature that was introduced recently in Numba. I haven't tested it, but it should be enough to just …
@albop Thanks. I tried a bit. With Numba 0.14.0, this function (with matrix operations replaced with for loops) seems not to allow nopython mode; the second of the Loop Jitting Constraints (version 0.14.0) seems to be violated. (Note that this is the version contained in the current Anaconda; see Package Documentation.) I installed Numba 0.15.1, where that constraint has been removed from the Loop Jitting Constraints (version 0.15.1). The automatic nopython mode seems to be working, but it is much slower than the original NumPy version: 50x slower for a 100x100 matrix (and much worse for 1000x1000). See the demonstration. Maybe I need some more tricks to make this work?
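For context, here is a minimal sketch of the GTH algorithm written with explicit for loops, the style that Numba's nopython mode targets. The function name `gth_solve_loops` and the example matrix are mine, not the PR's code; in the experiment above, a function of this shape would be decorated with `numba.jit`, but it is left undecorated here so it runs without Numba installed.

```python
import numpy as np

def gth_solve_loops(A):
    """Stationary distribution of a stochastic matrix A by GTH elimination.

    Written with explicit for loops, the style Numba's nopython mode
    can compile; left undecorated so it runs without Numba.
    """
    A = np.array(A, dtype=float)  # work on a copy
    n = A.shape[0]
    x = np.zeros(n)

    # Forward elimination: eliminate state k, scaling by the
    # total outflow to the remaining states.
    for k in range(n - 1):
        scale = np.sum(A[k, k+1:n])
        for i in range(k + 1, n):
            A[i, k] /= scale
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                A[i, j] += A[i, k] * A[k, j]

    # Back substitution.
    x[n - 1] = 1.0
    for k in range(n - 2, -1, -1):
        for i in range(k + 1, n):
            x[k] += x[i] * A[i, k]

    return x / np.sum(x)

# Example: two-state chain with known stationary distribution (0.25, 0.75).
P = np.array([[0.4, 0.6],
              [0.2, 0.8]])
print(gth_solve_loops(P))  # -> [0.25 0.75]
```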
@oyamad I see two possible explanations: …
@albop Great, you are right. I removed the …; it works even without type declaration.

(For an irreducible matrix input, …
Excellent! I'm glad I could help. I wonder where the small difference between 1 and 2-3 comes from. If it comes from the loop-lifting part, it will probably vanish over time.
This is pretty cool. What do other people think of the speed-up/complexity trade-off? My rule of thumb has been that if it's not somewhere near two orders of magnitude then it's not worth putting effort into, but maybe that's too strict. Is a one-order-of-magnitude speed-up worth the extra complexity?
Adding Manifest In File to Distribute LICENSE and README.md
…here numba is not installed
`numba_installed` flag introduced
I updated …

(This PR is to be merged into the `numba_improvements` branch.)
Changes Unknown when pulling 8ea1206 on oyamad:gth_solve_jit into QuantEcon:master.
Something strange is going on.

That is strange. Are they failing locally or when you update the PR?
…version that works in 0.18.2
The issue has been handled amazingly quickly in numba/numba#1104. The fix in …
As another issue, which is better?

```python
import numpy as np

P = np.zeros((3, 3))
P[0, [0, 2]] = 0.4, 0.6
P[1, 1] = 1
P[2, [0, 2]] = 0.2, 0.8
print(P)
```

```
[[ 0.4  0.   0.6]
 [ 0.   1.   0. ]
 [ 0.2  0.   0.8]]
```

```python
rec_class0 = [0, 2]
P_0 = P[rec_class0, :][:, rec_class0]
P_1 = P[:, rec_class0][rec_class0, :]

print(P_0)
print(P_0.flags)
```

```
[[ 0.4  0.6]
 [ 0.2  0.8]]
  C_CONTIGUOUS : False
  F_CONTIGUOUS : True
  OWNDATA : False
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False
```

```python
print(P_1)
print(P_1.flags)
```

```
[[ 0.4  0.6]
 [ 0.2  0.8]]
  C_CONTIGUOUS : True
  F_CONTIGUOUS : False
  OWNDATA : True
  WRITEABLE : True
  ALIGNED : True
  UPDATEIFCOPY : False
```

Does …
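As an aside, either double-indexing can also be written as a single `np.ix_` call, and all three variants produce copies rather than views into `P`. A small sketch, reusing the same `P` and `rec_class0` as above (the `P_2` variant is my addition, not from the thread):

```python
import numpy as np

P = np.zeros((3, 3))
P[0, [0, 2]] = 0.4, 0.6
P[1, 1] = 1
P[2, [0, 2]] = 0.2, 0.8

rec_class0 = [0, 2]

# Three ways to extract the same 2x2 submatrix.
P_0 = P[rec_class0, :][:, rec_class0]
P_1 = P[:, rec_class0][rec_class0, :]
P_2 = P[np.ix_(rec_class0, rec_class0)]  # single fancy-indexing pass

print(np.array_equal(P_0, P_1) and np.array_equal(P_1, P_2))  # -> True

# Fancy indexing copies: writing to the result leaves P untouched.
P_2[0, 0] = 99.0
print(P[0, 0])  # -> 0.4
```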
Test should fail for this commit
For use with Numba <= 0.18.2
… a[n-1] <= v Docstring and test added
Yes, it's a view, as far as I know, while:

```python
print(P_0.base)
```

```
[[ 0.4  0.2]
 [ 0.6  0.8]]
```

```python
print(P_1.base)
```

```
None
```

So …
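A quick way to settle view-versus-copy questions like this is `np.shares_memory` (available in NumPy >= 1.11), which is more robust than inspecting `.base`, since fancy-indexing results can carry a non-`None` base pointing at an internal temporary. A small sketch, not from the thread:

```python
import numpy as np

a = np.arange(9.0).reshape(3, 3).copy()  # ensure `a` owns its data

# Basic slicing returns a view: it shares memory with `a`.
v = a[1:, :]
print(v.base is a)             # -> True
print(np.shares_memory(a, v))  # -> True

# Fancy indexing returns a copy: no memory is shared with `a`,
# even in cases where `.base` is non-None.
c = a[[0, 2], :]
print(np.shares_memory(a, c))  # -> False

# Writes confirm it: modifying the view changes `a`, the copy does not.
v[0, 0] = -1.0
c[0, 0] = -2.0
print(a[1, 0])  # -> -1.0
print(a[0, 0])  # -> 0.0
```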
@oyamad @mmcky Hi guys, I've kind of lost track of what's happening with this PR and the different issues. Would you mind updating me? From memory, we are looking to merge in …
Just a quick update of the situation would be helpful. Also, if possible, I would like the utilities file added soon, because one of Tom's coauthors has some utilities that look useful and are needed for a lecture Tom wants to add to quant-econ.net.
Changes Unknown when pulling 89302ec on oyamad:gth_solve_jit into QuantEcon:master.
@jstac Regarding this PR (on …):

If it's ok, I would like to merge this PR into `numba_improvements`. As for the discussions regarding …
@oyamad That sounds like a good next step. Please go ahead. (Thanks for the summary.)
Merged into `numba_improvements`. Closing.
This is to open a discussion on the trade-off between speed and simplicity/clarity of the code, with a concrete example with `gth_solve`, where I replaced the current, NumPy-based vectorized version with a Numba version `_gth_solve_jit`, while the code there is now almost the same as the Julia version.

(The option `overwrite` has been dropped as it did not contribute much to speed-up.)

You guys have already had a discussion on the trade-off before in #36. What degree of speed-up justifies the use of Numba? Any thoughts on this particular case?
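To put numbers on the question, a micro-benchmark along the following lines can be used. The `gth_solve_vectorized` function below is my own sliced-NumPy reconstruction in the spirit of `gth_solve`, not the PR's exact code, and the timing harness (`timeit`, random stochastic matrix) is likewise only a sketch:

```python
import timeit
import numpy as np

def gth_solve_vectorized(A):
    """GTH elimination with NumPy slice operations (no explicit inner loops)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    x = np.zeros(n)

    # Forward elimination over the trailing block, vectorized with an outer product.
    for k in range(n - 1):
        scale = np.sum(A[k, k+1:n])
        A[k+1:n, k] /= scale
        A[k+1:n, k+1:n] += np.outer(A[k+1:n, k], A[k, k+1:n])

    # Back substitution.
    x[n - 1] = 1.0
    for k in range(n - 2, -1, -1):
        x[k] = x[k+1:n] @ A[k+1:n, k]

    return x / np.sum(x)

# Random dense stochastic matrix for timing.
rng = np.random.default_rng(0)
n = 100
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)

t = timeit.timeit(lambda: gth_solve_vectorized(P), number=10)
print(f"vectorized, n={n}: {t / 10:.6f} s per call")
```

Swapping in the jitted loop version and comparing per-call times on the same matrix gives the speed-up figure the discussion is about.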