
#233 Added interface for performing resource clean up and updated tests #236

Merged: 16 commits into aio-libs:master on May 25, 2017

Conversation

@Quinny (Contributor) commented May 23, 2017

No description provided.

@codecov-io commented May 23, 2017

Codecov Report

Merging #236 into master will increase coverage by 0.03%.
The diff coverage is 100%.


@@            Coverage Diff             @@
##           master     #236      +/-   ##
==========================================
+ Coverage      99%   99.03%   +0.03%     
==========================================
  Files          12       12              
  Lines         705      727      +22     
  Branches       76       77       +1     
==========================================
+ Hits          698      720      +22     
  Misses          7        7
| Impacted Files | Coverage Δ |
|---|---|
| aiocache/base.py | 100% <100%> (ø) ⬆️ |
| aiocache/decorators.py | 100% <100%> (ø) ⬆️ |
| aiocache/backends/memcached.py | 100% <100%> (ø) ⬆️ |
| aiocache/backends/memory.py | 100% <100%> (ø) ⬆️ |
| aiocache/backends/redis.py | 100% <100%> (ø) ⬆️ |

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@argaen (Member) left a comment


Thanks for your work, looks good! One extra comment: could you try not to decrease the coverage? :) You can check what you are missing with `make cov`.

@@ -121,6 +121,13 @@ def __init__(
kwargs["encoding"] = encoding
return await getattr(_conn, command)(*args, **kwargs)

@conn
async def _close(self, *args, _conn=None, **kwargs):
@argaen (Member)

`close` doesn't need the `_conn` to be propagated.
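A minimal sketch of the suggested change, assuming the `@conn` decorator is simply dropped (the pool attribute and teardown body are illustrative, not aiocache's actual implementation):

```python
class RedisBackend:
    # Sketch only: without @conn, _close neither receives nor propagates
    # a pooled connection; it tears down backend resources directly.
    async def _close(self, *args, **kwargs):
        if self._pool is not None:          # hypothetical pool attribute
            self._pool.close()
            await self._pool.wait_closed()  # aioredis-style teardown
```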

@Quinny (Contributor, Author) May 25, 2017

Done.

aiocache/base.py (outdated diff)
@API.register
@API.aiocache_enabled()
@API.timeout
@API.plugins
@argaen (Member)

Can you remove the `register`, `aiocache_enabled` and `plugins` decorators? Those decorators are for cache-specific commands.
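A sketch of what the trimmed decorator stack might look like, keeping only the timeout guard (the signature and body here are illustrative):

```python
# Sketch only: register, aiocache_enabled and plugins removed, since
# those are reserved for cache-specific commands like get/set.
@API.timeout
async def close(self, *args, **kwargs):
    await self._close(*args, **kwargs)
```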

@Quinny (Contributor, Author)

Done.

@@ -55,7 +55,9 @@ def cached_decorator(func):

try:
if await cache_instance.exists(cache_key):
return await cache_instance.get(cache_key)
result = await cache_instance.get(cache_key)
await cache_instance.close()
@argaen (Member)

Hmmmm, we don't need to block the user until the pool is closed. `asyncio.ensure_future` would be sufficient here.
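A sketch of that suggestion applied to the lookup path from the hunk above (the wrapper name is invented; the scheduling line is the only change from the diff):

```python
import asyncio

# Sketch only: return the cached value immediately and let the event
# loop close the pool in the background instead of awaiting it inline.
async def lookup(cache_instance, cache_key):
    if await cache_instance.exists(cache_key):
        result = await cache_instance.get(cache_key)
        asyncio.ensure_future(cache_instance.close())  # fire-and-forget
        return result
    return None
```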

@Quinny (Contributor, Author) May 25, 2017

I agree, but won't that bring back the warnings? The future created must still be awaited at some point.

I think it would be best to re-use the same connection within the decorator, hence my original proposal. Opening and closing the connection on each function call seems superfluous to me.
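For comparison, the reuse idea might look something like this sketch (`acquire_connection` is a hypothetical helper, not an existing aiocache API; the `_conn` keyword follows the pattern visible in the diffs above):

```python
# Sketch only: one connection for the whole decorated call, passed to
# each command via the _conn keyword; acquire_connection() is hypothetical.
async def cached_call(cache_instance, cache_key, func, *args, **kwargs):
    async with cache_instance.acquire_connection() as conn:
        if await cache_instance.exists(cache_key, _conn=conn):
            return await cache_instance.get(cache_key, _conn=conn)
        result = await func(*args, **kwargs)
        await cache_instance.set(cache_key, result, _conn=conn)
        return result
```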

@argaen (Member)

I agree with you on reusing the connection for the decorators; it's something I have in mind, because there are a couple of edge cases.

Regardless of whether we reuse the connection, it's not worth delaying the response to the user while waiting for the pool to close. I prefer having a message saying task X wasn't executed over unnecessarily delayed responses. We can think about how to close those pending connections later :)

@Quinny (Contributor, Author) May 25, 2017

I see where you are coming from, but I don't think this "delay" is as big of a deal as you are making it out to be. Since the code is using asyncio, it is not as if the program will be idle whilst closing the connection; the event loop will context-switch to another task.

I have made the changes and the error messages have now returned. I would advise either reverting to how I originally wrote it (awaiting the close) or reusing the same cache instance within a decorator. Leaving these connections and coroutines dangling to spew errors is more of a code smell than either of those options, in my opinion. Ultimately, though, it is up to you.

@argaen (Member)

Yeah, I'm not talking about that kind of speed, but speed for a single call. In order for the function to return, it will have to wait for the pool to close (and the loop to reschedule it) when it really doesn't have to in order to retrieve the result.

In an API under high load, this can lead to unnecessarily delayed calls. Imagine the following case:

  1. A new call arrives at the decorated function F.
  2. Another 1000 async calls arrive (they don't need to hit the same function; any async call will do).
  3. F gets the loop and executes until it reaches the await on close. (I know there are more breaks in the middle, but for the sake of simplicity let's imagine we are there already.) Because it hits the await, it is rescheduled to the back of the queue of tasks to execute (it's round robin).
  4. Until all 1000 async tasks are processed (i.e. reach their first break), the loop won't pick F again, and then it picks it just to return something it could have returned already.

Also, there can be side effects when trying to close the connection (https://github.com/aio-libs/aioredis/blob/master/aioredis/connection.py#L315) that raise exceptions.

Because of all that, I prefer to use `ensure_future`. I agree with you that it's a code smell and we won't be freeing all the resources, but I prefer it that way. I have in mind a way to solve these situations; when I've matured the idea a bit more, I'll ping you to validate it, if that's okay.

I hope this convinces you :)
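The scheduling trade-off can be reproduced with a small, self-contained script (coroutine names invented for illustration; pre-3.7 loop API, matching the era of this PR):

```python
import asyncio

async def close_pool():
    await asyncio.sleep(0)  # stand-in for pool teardown yielding control

async def awaiting_version():
    await close_pool()       # caller is rescheduled behind queued tasks
    return "result"

async def scheduling_version():
    asyncio.ensure_future(close_pool())  # cleanup deferred to the loop
    return "result"          # returned without an extra loop round-trip

loop = asyncio.get_event_loop()
print(loop.run_until_complete(scheduling_version()))
# Closing the loop with the cleanup still pending triggers asyncio's
# "Task was destroyed but it is pending!" message mentioned above.
loop.close()
```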

@Quinny (Contributor, Author)

Not entirely; I guess we will agree to disagree, haha. I have made the changes to use `ensure_future` anyway.

@@ -67,6 +69,7 @@ def cached_decorator(func):
except Exception:
logger.exception("Unexpected error with %s", cache_instance)

await cache_instance.close()
@argaen (Member)

Same

@@ -145,6 +148,7 @@ def multi_cached_decorator(func):
except Exception:
logger.exception("Unexpected error with %s", cache_instance)

await cache_instance.close()
@argaen (Member)

Same

@@ -33,6 +33,7 @@
assert await cache.get("key") == "value"
assert isinstance(cache, SimpleMemoryCache)
assert isinstance(cache.serializer, DefaultSerializer)
await cache.close()
@argaen (Member)

This is not ideal. `caches.get('default')` returns a singleton identified by 'default'. The user may want to use this cache in another function.

Global aliased caches should be closed when the program exits.
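A sketch of that shutdown pattern (the alias list is illustrative; only `caches.get` is taken from the thread above):

```python
from aiocache import caches

async def close_all_caches():
    # Sketch only: close each configured aliased singleton exactly once,
    # at program exit, rather than inside individual functions or tests.
    for alias in ("default",):  # illustrative: whatever aliases are configured
        await caches.get(alias).close()
```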

@Quinny (Contributor, Author)

The user can choose to close this cache whenever they want. This is just an example usage, no?

@argaen (Member)

Yup, it's just for the sake of a "good" example. It's not ideal to close a singleton cache in the middle of a test :)

@Quinny (Contributor, Author)

Fair enough, I've moved the closing of the cache into the body of `test_alias`.
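A hedged sketch of the relocated cleanup, reconstructed from the asserts in the hunk above (the `set` call and the pytest-asyncio scaffolding are assumptions):

```python
import pytest
from aiocache import caches, SimpleMemoryCache
from aiocache.serializers import DefaultSerializer

@pytest.mark.asyncio
async def test_alias():
    cache = caches.get("default")
    await cache.set("key", "value")  # assumed setup step
    assert await cache.get("key") == "value"
    assert isinstance(cache, SimpleMemoryCache)
    assert isinstance(cache.serializer, DefaultSerializer)
    await cache.close()  # cleanup now happens inside the test body
```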

@Quinny (Contributor, Author) commented May 25, 2017

All checks now passing

@argaen (Member) commented May 25, 2017

Thanks for your contribution. As I said, I'll ping you when I have the solution for cleaning up the rest of the resources :)

@argaen merged commit 333119e into aio-libs:master on May 25, 2017.