async impl of DynamoDBBackend #37
Conversation
For the DynamoDB container file permissions, you could manually specify the container's user or use a bind mount to a local directory. However, since this is just used for tests and persistence isn't important, using an in-memory database is going to be simpler. I just updated the …
Maybe tangential, but how are exceptions expected to be handled? I think that for most exceptions outside the initial init, they should be swallowed. If I time out reading from my cache, I would expect my session to fetch from origin, and if save_response similarly times out, I still expect to get the response. Is that atypical? I.e., if I want that behavior, should I implement a custom cache that wraps and catches, or should the base backend do some of that exception handling? Basically, in my actual implementation I swallow all exceptions during read/write and emit logs/metrics, but I still want my response data from origin when my cache is falling over.
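(For illustration, that fail-open pattern might look roughly like the sketch below. The class is hypothetical; get_response/save_response mirror the backend methods mentioned in this thread, but the signatures and logging are assumptions, not this library's actual API.)

```python
import logging
from typing import Any, Optional

logger = logging.getLogger(__name__)


class FailOpenBackend:
    """Hypothetical wrapper that swallows cache errors so requests still hit origin."""

    def __init__(self, wrapped_backend: Any):
        self.wrapped = wrapped_backend  # The real backend doing the reads/writes

    async def get_response(self, key: str) -> Optional[Any]:
        try:
            return await self.wrapped.get_response(key)
        except Exception:
            # A failed read is treated as a cache miss; the session fetches from origin
            logger.warning('Cache read failed for %s', key, exc_info=True)
            return None

    async def save_response(self, key: str, response: Any) -> None:
        try:
            await self.wrapped.save_response(key, response)
        except Exception:
            # A failed write is logged, but the response is still returned to the caller
            logger.warning('Cache write failed for %s', key, exc_info=True)
```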
Right. If there's an exception when fetching a cached response, …
Well, CacheBackend.get_response catches KeyError/TypeError right now, but, for example, if there's some sort of timeout in reading from the cache, it looks like that's uncaught. I'm not sure if you'd rather have a catch-all …
Good point. If there's something like a timeout or another backend-specific thing you know you want to ignore, you could handle those in the backend class, like:

```python
except botocore.exceptions.ClientError as error:
    if error.response['Error']['Code'] == 'TimeoutError':  # Or whatever error code(s) you want to check for
        pass
```

For non-backend-specific error handling in … Also, I added …
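(In context, that fragment might sit inside a backend read method along these lines. This is only a sketch: the class, the table attribute, and the set of ignored error codes are assumptions, not the actual implementation in this PR.)

```python
import botocore.exceptions

# Error codes to treat as a cache miss rather than a failure (illustrative list)
IGNORED_ERROR_CODES = {'TimeoutError', 'ProvisionedThroughputExceededException'}


class DynamoDbReadSketch:
    def __init__(self, table):
        self.table = table  # e.g. an aioboto3 DynamoDB Table resource with awaitable methods

    async def get_response(self, key):
        try:
            result = await self.table.get_item(Key={'key': key})
            return result.get('Item')
        except botocore.exceptions.ClientError as error:
            if error.response['Error']['Code'] in IGNORED_ERROR_CODES:
                return None  # Ignore and treat as a cache miss
            raise  # Anything else still propagates
```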
@JWCook what do you think re: the mypy errors with the return type? I used AsyncIterable here.
I had to think about that one for a minute. I believe your type hint is correct. From the mypy docs:
Currently in the other backend classes (and base class), they're typed as Iterable, since that's what they return after being awaited; e.g., usage looks like … Since you're yielding within an …
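(For reference, the shape under discussion is roughly the following sketch. The class and data are made up; the point is just that an async generator method can be annotated as AsyncIterable and consumed with `async for`, rather than being awaited to get a plain Iterable.)

```python
import asyncio
from typing import AsyncIterable


class SketchBackend:
    def __init__(self):
        self._responses = {'k1': 'response 1', 'k2': 'response 2'}

    async def values(self) -> AsyncIterable[str]:
        """Async generator: calling values() returns an async iterator without awaiting it."""
        for response in self._responses.values():
            yield response


async def main() -> None:
    backend = SketchBackend()
    # Consumed as `async for x in backend.values()`, not `for x in await backend.values()`
    async for response in backend.values():
        print(response)


if __name__ == '__main__':
    asyncio.run(main())
```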
That would be great! Thanks!
Is this ready to merge? If so, would you mind rebasing and squashing your commits? Otherwise this looks good to me!
Force-pushed from bae4e65 to 87bd795
Force-pushed from 87bd795 to c516034
Yes, commits squashed. But you know GitHub lets you do a squash commit from the PR. Curious if you've found issues with that; I've gotten accustomed to cleaning up the commit log (e.g. adding conventional commit tags) during the squash commit in GitHub, and I'm always curious to hear what others are doing.
Sure, just wanted to give you the chance to make any commit log edits yourself, if you wanted to. The 'Squash and merge' option from GitHub only lets the reviewer/merger edit the squashed commit message, right? As far as I know, the only way to let the PR author make those edits is to do an interactive rebase before merging. That's how my team at work does it with GitLab, at least. Does GitHub have a feature to squash & edit commits as the PR author?
Your changes are in the latest pre-release build. Thanks for the contribution, and the other suggestions along the way! This has been really helpful.
Gotcha. That makes sense.
Not that I know of. We've gotten into the habit of updating the PR title because GitHub seems to use it as the default message for the squash commit... but seemingly not always. I'm not sure if that's a UI/caching thing, but our habit is to use the PR title as the staged commit log message by convention / shared expectation, and to double-check that it matches as the merger, heh.
Draft PR to raise some questions I had while starting to work on this.
The minimal integration tests work for me if I run the DynamoDB Local service, but I have to run that as root to work around permissions issues SQLite raises when trying to write to the mounted volume. I'm not sure if this is specific to my setup.
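(For anyone reproducing the test setup, connecting to the local service typically looks something like the sketch below. It uses plain boto3 with dummy credentials and a made-up table name; the actual tests presumably use the project's async client and their own schema.)

```python
import boto3

# DynamoDB Local listens on port 8000 by default; credentials can be dummy values
dynamodb = boto3.resource(
    'dynamodb',
    endpoint_url='http://localhost:8000',
    region_name='us-east-1',
    aws_access_key_id='dummy',
    aws_secret_access_key='dummy',
)

# Hypothetical table just to verify connectivity; the real tests define their own schema
table = dynamodb.create_table(
    TableName='test-cache',
    KeySchema=[{'AttributeName': 'key', 'KeyType': 'HASH'}],
    AttributeDefinitions=[{'AttributeName': 'key', 'AttributeType': 'S'}],
    ProvisionedThroughput={'ReadCapacityUnits': 1, 'WriteCapacityUnits': 1},
)
print(table.table_status)
```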
I'll raise other questions inline.