
Stop rewriting same PlayStore RPC data to cache #22

Conversation

ricky-crichq
Contributor

Previously the code would load from the cache and then rewrite the same data
back to the cache. This is a problem for concurrency, because reading from or
writing to a file can break when another process is doing the same thing at
the same time.

This adds a new class, DiscoveryAPI, to coordinate access to the cache file.
It reduces collisions by only writing to the cache when the file doesn't
already exist.

CandyCheck issue: #17
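
Roughly, the intended behaviour looks like this (a minimal sketch only; the cache path, the `discover!` signature and the `fetch_discovery_document` call are illustrative, not the exact code in this change):

```ruby
require "json"

class DiscoveryAPI
  CACHE_PATH = "/tmp/candy_check_discovery.json" # illustrative path

  def self.discover!(client)
    if File.exist?(CACHE_PATH)
      # Cache hit: only read; never rewrite the same data back to disk
      JSON.parse(File.read(CACHE_PATH))
    else
      # Cache miss: fetch once and persist for later calls/processes
      data = client.fetch_discovery_document # hypothetical RPC call
      File.write(CACHE_PATH, JSON.dump(data))
      data
    end
  end
end
```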

@coveralls

coveralls commented Nov 1, 2017


Coverage decreased (-0.5%) to 98.086% when pulling 9e575e5 on NuffieProductions:avoid-writing-same-data-to-file into a61a689 on jnbt:master.

@jnbt
Owner

jnbt commented Nov 2, 2017

@ricky-crichq I think this is a step in the right direction, especially the extraction into its own class.

Do we need further improvements? Especially in multi-process scenarios (e.g. using Unicorn) I expect we'd still run into the same problems!?

@ricky-crichq
Contributor Author

ricky-crichq commented Nov 2, 2017

@jnbt you are right, there is still the possibility for something to go wrong here but the likelihood is now drastically reduced.

Before this change we were writing to the file upon every call to discover!. We do still have to write to the file, but this will only happen once, when a new server/container springs up. There is still a likelihood of collisions at that point, but after that, as long as the file doesn't get removed, everything should just work.

We are using Puma with multiple workers and multiple threads and will be testing our own fork on our Staging servers. Will let you know how we get on.


A proper solution for this would probably involve something like a Read/Write Lock: http://ruby-concurrency.github.io/concurrent-ruby/Concurrent/ReentrantReadWriteLock.html
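
As a rough sketch (not part of this PR, and assuming the concurrent-ruby gem is available), guarding the cache with such a lock could look like:

```ruby
require "concurrent"

# One shared lock instance per process
CACHE_LOCK = Concurrent::ReentrantReadWriteLock.new

def read_cache(path)
  # Many threads may hold the read lock simultaneously
  CACHE_LOCK.with_read_lock { File.read(path) }
end

def write_cache(path, data)
  # Writing takes an exclusive lock, blocking readers until it finishes
  CACHE_LOCK.with_write_lock { File.write(path, data) }
end
```

Note that an in-process lock like this only coordinates threads within a single process; for multi-process setups (Unicorn, multiple Puma workers) you'd presumably still need something like File#flock on the cache file itself.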

That said, there are probably other considerations. I am not sure why Google recommends caching the discovery data, but I suppose it should only be cached for a certain length of time. A day, a week, a month, a year? I am not sure, but as things stand now the gem will keep reading from the cache as long as the server/container is alive and the file is present.
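
If a cache lifetime did turn out to be needed, a minimal sketch (expiring based on the cache file's mtime; nothing like this exists in the gem today) could be:

```ruby
DISCOVERY_TTL = 24 * 60 * 60 # e.g. one day, in seconds

def cache_fresh?(path, ttl = DISCOVERY_TTL)
  # Treat the cache as valid only while the file is younger than the TTL
  File.exist?(path) && (Time.now - File.mtime(path)) < ttl
end

# The discovery data would then be re-fetched and the cache rewritten
# whenever cache_fresh? returns false.
```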

Caching indefinitely may not actually be the best thing to be doing. Do you know any more about this?

@jnbt
Owner

jnbt commented Nov 7, 2019

The whole cache is not needed anymore with the rewrite of the Google API part in #35. See the Migration Guide for more information.

@jnbt jnbt closed this Nov 7, 2019