Add clipped exponential distribution. #779
Conversation
Oh. I'm working on the …
I know of that problem, but I am not sure whether to add that tolerance in the distribution or wherever it is used (the required tolerance might depend on the usage?)
Can we add the tolerance as a parameter in the constructor?
We could ...
Or we could just use the minimum tolerance for the type: `high = np.nextafter(self.high, np.asarray(-np.inf, dtype=exp_val.dtype))`
This might be the best solution. I didn't know about `np.nextafter`.
Indeed. This is the first time I've seen `np.nextafter`.
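For anyone else who hasn't met it: `np.nextafter(a, b)` returns the next representable float after `a` in the direction of `b`, so the call above yields the largest float strictly below `self.high`. A minimal illustration:

```python
import numpy as np

high = np.float64(1.0)
# Largest representable float64 strictly below `high`.
below = np.nextafter(high, -np.inf)

assert below < high
# The gap is one unit in the last place (ULP) of 1.0,
# i.e. 2**-53 for float64.
assert high - below == 2.0 ** -53
```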
This seems fine to me. At some point, we might need to move some of the more esoteric distributions in …
I don't think that this distribution is specific to the SPA. You could as well build a scalar thresholding ensemble with it. Also, I don't think that an exponential distribution is very esoteric. But I could see moving the …
This needs a changelog entry. Maybe mention briefly what it's useful for?

It would also be great to have a notebook example showing it off, but we can add this down the road. (Maybe have one notebook documenting all distributions? Though I feel like Gaussian and Uniform are more self-explanatory.)

Also, at some point we might want to consider building the Laplace distribution into this class (or making it a subclass or something), since it's very similar and takes the same parameters. (At the moment I'm not sure if the Laplace distribution is actually useful for anything we do, which is why I'm not suggesting this for now.)
Oh, also write the PDF in the docstring. You can probably copy it from the NumPy exponential distribution docstring, though frankly I'm not totally keen on all the LaTeX in their formatting, because it makes it hard to read if you just open the docstring in Python.

EDIT: I know we don't do this for other distributions (maybe we should), but I just thought of it here because right now it's not clear what the "scale" parameter is (not that the math will totally help with that). Alternatively, you could think of a way to describe the scale parameter; it does handily correspond to the mean of the distribution, so maybe it's enough to say that? Though if they add a shift, then the mean is going to be the scale plus the shift. Either way, I don't think it would be bad to have the math as well, since it's a less common distribution (and just from the name, I actually couldn't remember if it was the single-sided one, which it is, or the double-sided Laplace distribution).
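For reference, one way the docstring could state the density (using the `scale` and `shift` parameter names from the discussion; the exact docstring wording is up to the author) is the single-sided exponential:

```latex
p(x) = \frac{1}{\mathit{scale}}
       \exp\!\left(-\frac{x - \mathit{shift}}{\mathit{scale}}\right),
\qquad x \ge \mathit{shift},
```

whose mean is $\mathit{shift} + \mathit{scale}$, consistent with the "scale plus the shift" remark above.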
@hunse added the docs you requested. Is this now ready to merge?
Okay, I fixed up the docstring more. Can somebody look that over?
LGTM
Previously, the `Exponential` distribution produced values in the range [low, high]. It now produces values in the range [low, high), to ensure that intercepts right at the radius of an ensemble are not generated.
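The half-open [low, high) behavior can be sketched in plain NumPy. This is a hypothetical helper illustrating the semantics, not the actual implementation:

```python
import numpy as np

def sample_clipped_exponential(scale, low, high, n, rng=np.random):
    """Sketch: draw exponential samples shifted to `low`, then clip
    them into [low, high) via the nextafter trick discussed above."""
    x = low + rng.exponential(scale, size=n)
    # Largest representable float strictly below `high`, so the
    # half-open upper bound is respected exactly.
    upper = np.nextafter(high, np.asarray(-np.inf, dtype=x.dtype))
    return np.clip(x, low, upper)

samples = sample_clipped_exponential(0.15, 0.3, 1.0, 1000)
assert samples.min() >= 0.3 and samples.max() < 1.0
```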
@jgosmann and I talked, and decided to change the name of this distribution to `Exponential`.
👍 for exponential -- I was also going to mention that the …

As a sidenote, I've been using this distribution a lot lately for inhibitory populations, and it's extremely useful. Thanks @jgosmann! Now I can switch back to master!
This distribution is useful for thresholding in ensembles. It performs better than a uniform distribution of intercepts or setting all intercepts exactly to the threshold.
[Plot comparing the three approaches -- blue: fixed intercepts; green: uniform; red: exponential distribution.]
The ideal function would be a step function going from 0 to 1 at 0.3 (plus synaptic filtering).
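The three intercept schemes in the plot could be generated along these lines (a NumPy-only sketch; the 0.3 threshold comes from the comment above, while the exponential scale of 0.15 is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.RandomState(0)
n, thresh = 50, 0.3

# blue: every intercept fixed exactly at the threshold
fixed = np.full(n, thresh)
# green: intercepts uniform between the threshold and 1
uniform = rng.uniform(thresh, 1.0, size=n)
# red: exponential, concentrated just above the threshold,
# clipped into [thresh, 1.0) -- scale 0.15 is illustrative only
exponential = np.clip(thresh + rng.exponential(0.15, size=n),
                      thresh, np.nextafter(1.0, -np.inf))
```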
I am planning to update the `AssociativeMemory` to use this distribution and maybe implement a thresholding network.