
Cache expensive strategy methods #174

Merged: merged 1 commit into jesse-ai:master from cache-strategy-methods on Apr 9, 2021

Conversation

pdcribeiro
Contributor

@pdcribeiro pdcribeiro commented Apr 9, 2021

This is my attempt at this task.

The cached decorator can then be used like this:

    @property
    @cached
    def atr(self):
        return atr(self.candles, self.vars['atr_period'])

However, when running the TurtleRules strategy, I saw no noticeable performance gains. It may be that this particular strategy just isn't that computationally expensive, though.

I tried these three options and got more or less the same results:

from functools import cache

# Using functools.cache (Python 3.9+)
def cached(method):
    def decorated(self, *args, **kwargs):
        cached_method = self._cached_methods.get(method)
        if cached_method is None:
            cached_method = cache(method)
            self._cached_methods[method] = cached_method
        return cached_method(self, *args, **kwargs)
    return decorated


from functools import lru_cache

# Using functools.lru_cache
def cached(method):
    def decorated(self, *args, **kwargs):
        cached_method = self._cached_methods.get(method)
        if cached_method is None:
            cached_method = lru_cache()(method)
            self._cached_methods[method] = cached_method
        return cached_method(self, *args, **kwargs)
    return decorated


from functools import lru_cache

# Using functools.lru_cache with maxsize=1
def cached(*dargs, maxsize=1, **dkwargs):
    def decorator(method):
        def decorated(self, *args, **kwargs):
            cached_method = self._cached_methods.get(method)
            if cached_method is None:
                dkwargs['maxsize'] = maxsize
                cached_method = lru_cache(*dargs, **dkwargs)(method)
                self._cached_methods[method] = cached_method
            return cached_method(self, *args, **kwargs)
        return decorated
    return decorator
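
For reference, here is a minimal self-contained sketch of how these decorators expect the strategy instance to be wired up; the DemoStrategy class, its _cached_methods dict, and the expensive property are illustrative only, not part of this PR:

from functools import lru_cache

def cached(method):
    # same idea as the lru_cache variant above
    def decorated(self, *args, **kwargs):
        cached_method = self._cached_methods.get(method)
        if cached_method is None:
            cached_method = lru_cache(maxsize=1)(method)
            self._cached_methods[method] = cached_method
        return cached_method(self, *args, **kwargs)
    return decorated

class DemoStrategy:
    def __init__(self):
        # per-instance dict the decorator uses to store the cached wrappers
        self._cached_methods = {}
        self.calls = 0

    @property
    @cached
    def expensive(self):
        self.calls += 1  # counts how often the body actually runs
        return sum(range(1_000_000))

s = DemoStrategy()
assert s.expensive == s.expensive  # second access is served from the cache
assert s.calls == 1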

Also, I didn't find any methods in Strategy.py worth caching, but I still haven't gone through all of them.

Let me know what you think and whether I should change anything.

@cryptocoinserver
Collaborator

Awesome. I will take a look and do some tests. Looks good so far. Thank you.
The only problem seems to be that the cache function you import is only available in Python 3.9+, so we have to provide a manual import / hard-coded fallback for older Python versions. That's part of the reason the checks fail.

@cryptocoinserver
Collaborator

About what might be worth caching: all properties and functions that return something and are used by the user. Depending on the strategy, someone might call them often, and even if each call is only a small performance gain, it adds up. self.candles and self.get_candles will probably be the most frequently called ones.

We can leave the event handlers (on_*), abstract methods, and functions/properties with a leading underscore (_) uncached.
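
As a rough illustration of that rule (this helper is hypothetical, not something in Jesse), a predicate like the following could decide which attributes qualify:

import inspect

def should_cache(cls, name):
    # skip event handlers (on_*) and private members (_*)
    if name.startswith('_') or name.startswith('on_'):
        return False
    attr = inspect.getattr_static(cls, name)
    # for properties, inspect the underlying getter
    fn = attr.fget if isinstance(attr, property) else attr
    # skip abstract methods that subclasses are expected to override
    return not getattr(fn, '__isabstractmethod__', False)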

@cryptocoinserver
Collaborator

The biggest possible gain is caching the indicators used on the user side, as every call to an indicator currently repeats the calculation.

@cryptocoinserver cryptocoinserver merged commit cd4e4cd into jesse-ai:master Apr 9, 2021
@cryptocoinserver
Collaborator

I made some minor adjustments, ran more tests, and finally merged it into master. Thank you again!

@pdcribeiro
Contributor Author

I'm happy I could help! We can use lru_cache, which was added in Python 3.2, instead of cache. Would that be enough?
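
For what it's worth, one common way to support both is a small fallback at import time, since functools.cache is just lru_cache with maxsize=None (a sketch, not necessarily what ended up in master):

try:
    from functools import cache  # Python 3.9+
except ImportError:
    from functools import lru_cache
    cache = lru_cache(maxsize=None)  # equivalent unbounded cache on Python 3.2+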

@pdcribeiro pdcribeiro deleted the cache-strategy-methods branch April 9, 2021 20:26
@pdcribeiro
Contributor Author

I see you've already done that. Nice work. You've got an awesome project going on here! :)

@saleh-mir
Member

The biggest possible gain is caching the indicators used on the user side, as every call to an indicator currently repeats the calculation.

I tried caching indicator values a while back. It performed worse. That was when I realized that C libraries (TA-Lib, for example) are really fast, usually faster than the Python code used to cache the values. So we gotta be careful about what we cache and what we don't.

self.candles, for instance, is really fast, but the result of self.get_candles() is sometimes slow when it's used to get candles for a bigger timeframe (because it merges the currently forming candle with the previously closed candles).
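
When in doubt, a quick timeit comparison of the raw call versus the cached path settles it; the functions below are illustrative stand-ins, not Jesse code:

import timeit
from functools import lru_cache

import numpy as np

closes = np.random.random(1_000)

def raw_sma(period=20):
    # stand-in for a fast C-backed indicator call
    return closes[-period:].mean()

@lru_cache(maxsize=1)
def cached_sma(period=20):
    return closes[-period:].mean()

print('raw   :', timeit.timeit(raw_sma, number=100_000))
print('cached:', timeit.timeit(lambda: cached_sma(20), number=100_000))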
