Caching of partial results in skfuzzy.control #74

Closed · JDWarner opened this issue Feb 12, 2016 · 3 comments

Comments

@JDWarner (Collaborator)

Execution of the fuzzy system should recalculate only the quantities affected by changed input values. Tracking which inputs have changed, together with the built-in networkx graph of the system, makes this possible.
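For illustration, here is a minimal sketch (not the actual skfuzzy.control internals) of how a networkx DiGraph could be used to recompute only the quantities downstream of the changed inputs; the cache layout and the `recompute` callable are hypothetical:

```python
# Hypothetical sketch: invalidate and recompute only nodes downstream of
# changed inputs in a dependency graph. Not the scikit-fuzzy implementation.
import networkx as nx

def invalidate_and_recompute(graph, cache, changed_inputs, recompute):
    """Drop cached values downstream of changed inputs, then recompute them.

    graph          : nx.DiGraph whose edges point from inputs toward outputs
    cache          : dict mapping node -> last computed value
    changed_inputs : iterable of nodes whose crisp values changed
    recompute      : callable(node, cache) -> new value for that node
    """
    # Every node reachable from a changed input is stale.
    stale = set(changed_inputs)
    for node in changed_inputs:
        stale |= nx.descendants(graph, node)

    # Untouched nodes keep their cached values; stale ones are recomputed
    # in topological order so dependencies are ready before dependents.
    for node in nx.topological_sort(graph):
        if node in stale:
            cache[node] = recompute(node, cache)
    return cache
```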

@jsexauer (Collaborator)

I'm thinking that in order to do this well, we need to have a stronger separation between the "state" of the controller and the "definition" of the controller.

Some easy examples of state are the input and output crisp values and the variables' membership values.

Some easy examples of definition are the variables/terms themselves, and the rules. If any of these things change, you've arguably made a completely different controller, and as such, we shouldn't need to worry about caching.

Membership functions are a more difficult case. They are usually fairly static, but some control designs vary the membership functions between iterations (for example, using particle-swarm optimization to optimize the locations of the various terms).
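To make the distinction concrete, here is a rough sketch, with entirely hypothetical class and attribute names, of how a separate definition object could be fingerprinted so that cached partial results are discarded only when the definition itself (including the membership-function arrays) changes:

```python
# Hypothetical sketch of separating "definition" from "state"; not the
# scikit-fuzzy API.
import hashlib
import numpy as np

class ControllerDefinition:
    """Description of the controller: membership functions and rules."""
    def __init__(self, membership_functions, rules):
        self.membership_functions = membership_functions  # dict: term -> np.ndarray
        self.rules = rules                                # list of rule descriptions

    def fingerprint(self):
        # Hash the membership-function arrays and rule text; if this changes
        # (e.g. a PSO step moved a term), cached partial results are void.
        h = hashlib.sha1()
        for name in sorted(self.membership_functions):
            h.update(name.encode())
            h.update(np.ascontiguousarray(self.membership_functions[name]).tobytes())
        for rule in self.rules:
            h.update(repr(rule).encode())
        return h.hexdigest()

class ControllerState:
    """Mutable per-run data: crisp inputs/outputs and cached memberships."""
    def __init__(self, definition):
        self._fingerprint = definition.fingerprint()
        self.crisp_inputs = {}
        self.cache = {}

    def check(self, definition):
        # A changed definition is effectively a new controller: flush the cache.
        fp = definition.fingerprint()
        if fp != self._fingerprint:
            self._fingerprint = fp
            self.cache.clear()
```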

I'm still mulling over the best way to do this. I welcome your thoughts.

@JDWarner (Collaborator, Author)

Agreed. Changes in the overall structure/definition of the controller should trigger a full recalculation of the system.

The main reason I had caching at all was to make repeated calls to an established system as efficient as possible. I think changes to membership functions could in theory be tracked and flagged without requiring the whole system to be recalculated.
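As a rough illustration of that idea (the names are hypothetical, not scikit-fuzzy's API), a replaced membership function could simply flag its term as dirty, and only the cached results of rules that mention that term would be discarded:

```python
# Hypothetical sketch: flag changed terms and invalidate only the rules that
# depend on them, rather than recalculating the whole system.
class TermCache:
    def __init__(self, rules_by_term):
        self.rules_by_term = rules_by_term   # dict: term name -> set of rule ids
        self.rule_cache = {}                 # rule id -> cached firing strength
        self.dirty_terms = set()

    def set_membership(self, term, new_mf, store):
        # Replacing a membership function flags the term, not the whole system.
        store[term] = new_mf
        self.dirty_terms.add(term)

    def invalidate(self):
        # Drop cached results only for rules that touch a dirty term.
        for term in self.dirty_terms:
            for rule_id in self.rules_by_term.get(term, ()):
                self.rule_cache.pop(rule_id, None)
        self.dirty_terms.clear()
```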

However, all of this comes at the cost of major additional complexity, computational overhead, and maintenance burden. It seemed like a good idea, and in theory it still does, but I don't want it to get in our way.

Let's put caching on the back burner for now and revisit it once we're more confident in the API and overarching structure.

@JDWarner (Collaborator, Author) commented Apr 5, 2016

This was partially addressed by #94, but only complete calculated results are referenced there. We could probably do more, but I honestly don't see the need right now.
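For reference, caching only complete results could look something like the sketch below; this is just an illustration keyed on the crisp input values, not the actual mechanism introduced in #94:

```python
# Hypothetical sketch: memoize the full computed output for a given set of
# crisp inputs, recomputing only on a cache miss.
def cached_compute(sim_compute, cache, inputs):
    """Return the full output dict for `inputs`, computing only on a cache miss.

    sim_compute : callable(dict of crisp inputs) -> dict of crisp outputs
    cache       : dict keyed by a hashable view of the inputs
    inputs      : dict, e.g. {'quality': 6.5, 'service': 9.8}
    """
    key = tuple(sorted(inputs.items()))
    if key not in cache:
        cache[key] = sim_compute(inputs)
    return cache[key]
```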

Closing for the moment; if the current solution ends up not being performant enough we can reassess and reopen.

@JDWarner JDWarner closed this as completed Apr 5, 2016