LinearNetwork support for compensating for input/output synapse #125

Open
arvoelke opened this issue Aug 17, 2017 · 5 comments

@arvoelke
Owner

arvoelke commented Aug 17, 2017

There are ways to change the readout to account for extra filtering. This involves either assuming D = 0 or \dot{u} = 0 (so that the D\dot{u} term drops out), and then computing the derivatives of y needed to cancel out the additional filter (this can be done analytically from the state-space representation). For example, if the extra filter is Lowpass(tau), then the output would need to be (in the continuous case) tau*\dot{y} + y, where y = Cx + Du and \dot{y} = C\dot{x} + D\dot{u}.
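
A rough numpy sketch (not nengolib code; the system A, B, C, the input, and tau below are arbitrary choices for illustration) of this compensation with D = 0 and an extra Lowpass(tau) on the readout: passing the compensated signal tau*\dot{y} + y back through the lowpass recovers (approximately) y.

```python
import numpy as np

dt, T, tau = 1e-3, 1.0, 0.1
A = np.array([[0.0, 1.0], [-4.0, -1.0]])   # example state-space system (D = 0)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

t = np.arange(0, T, dt)
u = np.sin(2 * np.pi * 2 * t)[:, None]      # example input signal

x = np.zeros((len(t), 2))
y = np.zeros((len(t), 1))                   # ideal readout y = C x
z = np.zeros((len(t), 1))                   # compensated readout tau*\dot{y} + y
y_filt = np.zeros((len(t), 1))              # z after the extra Lowpass(tau)

for i in range(len(t) - 1):
    dx = A @ x[i] + B @ u[i]                # \dot{x} = A x + B u
    x[i + 1] = x[i] + dt * dx               # forward Euler integration
    y[i] = C @ x[i]
    z[i] = tau * (C @ dx) + y[i]            # \dot{y} = C \dot{x}, since D = 0
    # Lowpass(tau): tau * \dot{y_filt} = z - y_filt
    y_filt[i + 1] = y_filt[i] + (dt / tau) * (z[i] - y_filt[i])

# y_filt should track y up to Euler discretization error
print(np.max(np.abs(y_filt[:-1] - y[:-1])))
```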

@arvoelke
Owner Author

arvoelke commented Aug 17, 2017

Also note that the special case of C = I and D = 0 might be useful as a way of re-encoding the state (using different encoders) or simply probing the state without any filter, but this makes the connection SIMO/MIMO.

@arvoelke
Owner Author

Full support is connected to #106.

@arvoelke
Owner Author

The network's state.input field actually goes a fair way toward this end (at least in the special case where the extra filter equals the recurrent synapse), since it gives x as a PSC. This can then be transformed with C in a biologically plausible way to give y without any additional filtering.
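
For instance, something along these lines (an untested sketch; here `net` stands for the LinearNetwork, `readout` for whatever Node or Ensemble should receive y, and `C` for the realized readout matrix, assuming D = 0):

```python
# Read out y = C x from the PSC-filtered state available at net.state.input,
# without applying any further synapse on this connection.
nengo.Connection(net.state.input, readout, transform=C, synapse=None)
```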

@arvoelke
Owner Author

Also note that it's possible to avoid needing \dot{u} if u has already been filtered, in which case we might not need the extra filter on that part.
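
To spell that out, assuming the extra filter is Lowpass(tau) and that u was itself produced by passing some raw signal v through the same Lowpass(tau): tau*\dot{y} + y = C*(tau*\dot{x} + x) + D*(tau*\dot{u} + u), and since the lowpass satisfies tau*\dot{u} + u = v, the D term can be fed the raw signal v directly, with no need to compute \dot{u}.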

@arvoelke
Owner Author

arvoelke commented Jun 15, 2018

Special case (for rolling window) of getting the state encoded into another ensemble while accounting for the extra filtering:

```python
nengo.Connection(dn.input, X, transform=dn.B, synapse=dn.input_synapse)
nengo.Connection(dn.state, X, transform=dn.A, synapse=dn.synapse)
```

We may also need to provide the same solver used by _make_core for the connection from dn.state.

For LinearNetwork, can we simply use dn.state.input?
