[MRG] Float32 support #981
Conversation
…loating point variables and lower required accuracy in some cases
# Conflicts:
#	brian2/tests/test_functions.py
#	brian2/tests/test_refractory.py
#	brian2/tests/test_subgroup.py
#	brian2/tests/test_timedarray.py
the same dtype as time)
Hi @mstimberg. Does brian2genn already work out of the box with this brian2 preference by any chance? :) Or would it be possible to add it for our comparisons? (I imagine it would be easy to translate the Brian pref to a GeNN pref?)
No, it doesn't work with Brian2GeNN (it's not even merged in Brian 2 yet :) ). In principle, it should be easy to translate, but I see potential problems with variables like
This looks good to me except for the doctest issue which would be good to address if we can?
I'm actually amazed by how few changes were needed to the actual code (not to the tests) to make this work. Great work!
@@ -14,16 +14,14 @@ def dtype_repr(dtype):

 def default_float_dtype_validator(dtype):
-    return dtype is float64
+    return dtype in [float32, float64]
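For reference, the changed validator simply whitelists the two supported dtypes. A self-contained sketch in plain NumPy (outside Brian's actual preference machinery, so the surrounding usage here is illustrative, not Brian's API):

```python
import numpy as np
from numpy import float32, float64

def default_float_dtype_validator(dtype):
    # Only single and double precision are accepted as the default float dtype
    return dtype in [float32, float64]

print(default_float_dtype_validator(np.float32))  # True
print(default_float_dtype_validator(np.float16))  # False
```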
I don't think we need it as a priority, but I wonder how much work it would be to allow for other types such as float16 or float128?
I don't think we'd need any additional changes for that, it's just more effort for testing. But we could certainly think about it again when a use case arises.
brian2/monitors/spikemonitor.py
Outdated
@@ -237,16 +237,18 @@ def values(self, var):
     Examples
     --------
     >>> from brian2 import *
-    >>> G = NeuronGroup(2, """dv/dt = 100*Hz : 1
+    >>> G = NeuronGroup(2, """dv/dt = 100.0001*Hz : 1
This is really ugly because it's actually in the documentation. Is there no way around this?
I'll have a look, we can certainly come up with a different kind of example.
There are quite a few examples like this (not just this one).
brian2/monitors/spikemonitor.py
Outdated
array([ 0.5, 0.5, 0.5, 0.5])
>>> v_values[1]
array([ 1., 1.])
>>> np.set_printoptions(precision=4)  # show fewer digits than default
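The workaround above works because NumPy's array repr honors the global print options, so float32 and float64 results print identically once the precision is capped. A quick standalone illustration in plain NumPy:

```python
import numpy as np

v32 = np.array([1/3, 2/3], dtype=np.float32)
v64 = np.array([1/3, 2/3], dtype=np.float64)

np.set_printoptions(precision=4)  # show fewer digits than default

# With 4 digits, both dtypes round to the same printed values,
# so a doctest no longer depends on the float precision in use
print(v32)
print(v64)
```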
Same here. I hate to have stuff like this actually in our docs.
brian2/tests/test_GSL.py
Outdated
assert not all(trace_conventional.v[0]==trace_GSL.v[0]), \
    ('output of GSL stateupdater is exactly the same as Brians stateupdater (unlikely to be right)')
# assert not all(trace_conventional.v[0]==trace_GSL.v[0]), \
#     ('output of GSL stateupdater is exactly the same as Brians stateupdater (unlikely to be right)')
Why is this commented out?
Ah, right. The test was a bit weird: it checked that the results of two state updaters for the same equations were not exactly the same. It was more of a "during-development" test; Charlee used it to make sure that the simulation was actually using her state updater. I think this test failed with float32, i.e. the results were exactly the same. We can simply delete it.
@@ -654,7 +655,7 @@ def test_threshold_reset():
                      threshold='v > 1', reset='v=0.5')
     G.v = np.array([0, 1, 2])
     run(defaultclock.dt)
-    assert_equal(G.v[:], np.array([0, 1, 0.5]))
+    assert_allclose(G.v[:], np.array([0, 1, 0.5]))
Comment not about this particular instance, but I wonder if we lose anything by weakening our tests like this? My feeling is probably not, but it makes me mildly anxious. I guess you've already pretty much covered this with the float-type dependent version of allclose.
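The float-type-dependent allclose mentioned here could look roughly like the following. This is a hypothetical sketch with assumed names (Brian's real test helper reads the dtype from the preferences and may differ in detail); the idea is just that the tolerance scales with the machine epsilon of the active float dtype:

```python
import numpy as np

def assert_allclose(actual, desired, rtol=None):
    # Hypothetical sketch: derive the tolerance from the current float dtype,
    # so tests stay strict under float64 but forgiving under float32.
    float_dtype = np.float32  # stand-in for the configured default float dtype
    if rtol is None:
        # allow roughly 10 ulps of error at the dtype's machine epsilon
        rtol = 10 * np.finfo(float_dtype).eps
    np.testing.assert_allclose(np.asarray(actual, dtype=float_dtype),
                               np.asarray(desired, dtype=float_dtype),
                               rtol=rtol)

# passes under float32 tolerance, would fail a strict equality check
assert_allclose(np.float32(0.1) * 3, 0.3)
```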
In general I think we need to do a better job separating our tests for basic functionality from the tests for mathematical/numerical correctness. Currently, we only have a few tests (e.g. for STDP) that compare the exact results for non-trivial simulations.
Worth creating an issue for? I'm not entirely sure it's necessary; I feel like allclose is probably as good as equal in practice. Can we think of an example where it might be a problem that it passes allclose but not equal?
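Concretely, the reason exact equality has to go: values that are not exactly representable in binary get rounded differently in float32 and float64, so strict comparisons across precisions fail even when the computation is correct. A minimal NumPy illustration:

```python
import numpy as np
from numpy.testing import assert_allclose

# 0.1 has no exact binary representation; float32 and float64 round it
# to different nearby values
v32 = np.float32(0.1)
v64 = np.float64(0.1)

assert v32 != v64                     # strict equality fails across dtypes
assert_allclose(v32, v64, rtol=1e-6)  # but they agree to single precision
```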
Oh, by the way, do the CI tests now run for float32 and float64?
They do, or almost: we are testing float32 on all codegen targets and on all platforms, but only for Python 2 (our test suite is already huge as it is...)
If I just
By default it uses float64, to get
Sorry, I have to leave... Can you check that you have the line I linked in my previous comment? Just before running the "codegen-independent" tests it sets
That line is there, yep. Might need a bit of help debugging this one. Will be around tomorrow afternoon.
For debugging this, can you see whether you see the same problem when you replace

success = [brian2.test(long_tests=True, test_standalone='cpp_standalone', float_dtype=np.float32),
           brian2.test(long_tests=True, test_standalone='cpp_standalone', float_dtype=np.float64)]

by

success = [brian2.test([], test_standalone='cpp_standalone', float_dtype=np.float32),
           brian2.test([], test_standalone='cpp_standalone', float_dtype=np.float64)]

If yes, that should at least make the debugging much faster!
About the GeNN question: We allow variables to be declared as "scalar x" and then x gets the type that was selected with the preference
Thanks for clearing this up @tnowotny. I don't see us using
Yep, that works! Will open the brian dev gitter if you want to debug together.
I'm in the gitter chat (not sure whether it's the right one :) )
However, I just remembered a couple of other uses of the
All of these make it a good idea to set the precision.
Tests now pass!
Same here, I'll merge it right away even though the appveyor tests did not pass yet (this will take > 5h still...)
This is a problem, unfortunately, since in Brian:

>>> Quantity(1e-3, dtype=np.float32, dim=second.dim)
1.00000005 * msecond

I see two solutions:

(1) is trivial to do, but of course we lose most of the performance benefits when every function call etc. still uses double precision. I'll look into (2), maybe it is not that difficult to fix. We can still raise a warning to make users aware that Brian2GeNN is not calculating exactly the same thing as Brian.
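The 1.00000005 * msecond artifact above is just the float32 rounding of 1e-3, reproducible with plain NumPy:

```python
import numpy as np

x = np.float32(1e-3)         # 1 ms expressed in seconds
print(f"{x:.12f}")           # 0.001000000047..., i.e. 1.00000005 ms
assert x != 1e-3             # not exactly the float64 value
assert abs(x - 1e-3) < 1e-9  # but only off by ~5e-11
```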
I can see the logic of using double precision for
Woah, this is prescient! In the past (genn-team/genn#183) we talked about abandoning floating point time altogether and I started implementing that. However, it totally breaks backwards compatibility and ends up making the model code really ugly, so I was thinking that adding an option to make time double precision is probably a pragmatic solution - single precision t gets totally mangled after running my STDP model for 2000s 😢
C++11 (which I think is required by GeNN?) has overloaded math functions for single precision.
Sadly, I think the CUDA maths functions are C-style - as in, the float ones have an f at the end rather than relying on overloading.
Hum... @neworderofjamie, how were you thinking to make a double precision time version? We could introduce a new GeNN flag for it and then just make
Brian2GeNN then could just set that GeNN flag and off we would go... I am probably missing something.
This sounds good to me. But a general question: is there a way to find out about the GeNN version "at runtime"? Even a simple thing like a
That would be exactly my suggestion
@neworderofjamie, that is very simple to do. I guess we would rely on the compiler to auto-cast whenever
As far as I know, CUDA math functions are overloaded for single and double precision. At least in CUDA 9.0 (no idea about previous versions). See here. Even though there is only an example for
Looks like it in the documentation you referenced. In that case we could relax with our
Sadly, for reasons best known to NVIDIA, that seems to only apply to log - see https://docs.nvidia.com/cuda/cuda-math-api/group__CUDA__MATH__SINGLE.html#group__CUDA__MATH__SINGLE
Actually, apologies, that is just terrible documentation! In

__host__ __device__ __cudart_builtin__ float sin(float in) { return sinf(in); }

I will check older CUDAs and see if this is consistent - I would IMAGINE it came in when CUDA started supporting C++.
I don't think that's the case, their documentation is just... not that good. It seems that this has been in CUDA for a long time; I've found "Introduction to CUDA" slides from 2011 that mention that math functions are overloaded.
Let's continue discussions about float32 support in Brian2GeNN here: brian-team/brian2genn#69 |
Due to summer vacations this will still take a while to merge I think, but I'm putting it out here since it is basically ready from my side and it should be of high interest to the @brian-team/brian2cuda and @brian-team/brian2genn backends.
There are a few tests (mostly the multicompartmental stuff) that test numerical results vs. theoretical predictions that are just too far off with single precision, so I decided to simply skip them when single precision is used. In the long run, we should probably also split these types of tests from the more basic unit/integration tests.
Travis and appveyor run tests with single precision (but only for Python 2.7, the test suites already take way too long...) and everything seems to work fine.
A remark about the clock: the variables t and dt always use double precision regardless of the preference setting, because everything gets very imprecise otherwise. To avoid problems, other variables or subexpressions that are direct functions of time (e.g. lastspike) use the same dtype.
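The reason a single-precision clock is untenable can be seen directly in NumPy: once t is large enough, one float32 ulp exceeds the timestep, so each step is badly rounded and eventually the clock stops advancing at all. A sketch with a 0.1 ms step (numbers below hold for these particular magnitudes):

```python
import numpy as np

dt = np.float32(1e-4)  # 0.1 ms timestep

# After 2000 s of simulated time, one float32 ulp (~1.2e-4) already exceeds
# dt, so the step taken from exactly t = 2000.0 overshoots by ~22%
t = np.float32(2000.0)
step = float(t + dt) - float(t)
assert abs(step - 1e-4) / 1e-4 > 0.2

# Later the clock stalls completely: adding dt changes nothing
t = np.float32(20000.0)
assert np.float32(t + dt) == t

# A float64 clock still advances fine at these magnitudes
t64 = np.float64(20000.0)
assert t64 + np.float64(dt) > t64
```

This matches the "totally mangled after 2000s" observation for the STDP model above, and is why t and dt stay in double precision.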