
Support multi client storage #290

Closed
wants to merge 8 commits

Conversation

fractalwrench
Contributor

Adds support for storing files on a per-client basis by using a different directory for each client, based on a hash of its configured API key and endpoint.

This PR alters the FileStore class and its descendants so that new errors/sessions are stored in the appropriate location. Previously cached errors/sessions will still be flushed from the old location on disk; JUnit test coverage is added for this.
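
For context, here is a minimal sketch of the per-client directory idea, assuming a base storage directory and String.hashCode as the hashing scheme; the names and the exact hashing are illustrative rather than the PR's actual FileStore code:

import java.io.File;

/** Illustrative only: derives a per-client directory from the API key and endpoint. */
class ClientStorageSketch {

    /**
     * Returns a client-specific subdirectory of baseDir, creating it on demand.
     * The directory name is a hash of the API key and endpoint, so two Client
     * instances with different configurations never share a directory.
     */
    static File getStorageDir(File baseDir, String apiKey, String endpoint) {
        // The hashing scheme here (String.hashCode) is an assumption for illustration.
        String dirName = String.valueOf((apiKey + endpoint).hashCode());
        File storageDir = new File(baseDir, dirName);

        if (!storageDir.isDirectory() && !storageDir.mkdirs()) {
            // If the client-specific directory cannot be created, fall back to
            // the shared legacy location rather than dropping the payload.
            return baseDir;
        }
        return storageDir;
    }
}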

Also see #286 and #285

ensures that error/session files in the old location are found
The filestore now uses a different directory for each client API key/endpoint configuration. This supports the case where an application is configured with multiple clients. Old error reports stored on the device should be delivered in the same way.
increase wait on mazerunner scenario to enforce request order
use correct directory for native storage dir
@fractalwrench fractalwrench requested a review from a team April 19, 2018 13:53
use the File API rather than Paths (only introduced in API 26)
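
For context on that commit: java.nio.file (Paths/Files) is only available from Android API 26, so the plain java.io.File API is used instead. A minimal illustration of the equivalent call, not code from the PR:

import java.io.File;

class FileApiSketch {
    // java.nio.file.Paths.get(baseDir.getPath(), dirName) would require Android
    // API 26+, so the java.io.File constructor is used to build the same path.
    static File storageSubdir(File baseDir, String dirName) {
        return new File(baseDir, dirName);
    }
}
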
@coveralls

coveralls commented Apr 19, 2018


Coverage decreased (-0.7%) to 73.611% when pulling f69ec3a on multi-client-storage into 52c06f6 on multi-client-scenario.

tidy cleanup of files for filestore tests
Contributor

@bengourley left a comment


I gave this a once-over. On the surface of it, it looks good to me, though I didn't run it, nor did I fully understand it all.

I have a couple of questions at a high-level:

  • If the app crashes, then is updated with a new version of Bugsnag, is the "old" state of the filesystem compatible with this new logic? Specifically, will the app successfully send the crashes to Bugsnag?
  • Could the state of the filesystem cause Bugsnag to crash?

@fractalwrench fractalwrench requested a review from a team April 20, 2018 08:57
@fractalwrench
Contributor Author

If the app crashes, then is updated with a new version of Bugsnag, is the "old" state of the filesystem compatible with this new logic?

Yes. findStoredFiles should find any Files in either the new or the previous location.
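
As a rough sketch of that lookup (the parameter names mirror the diff, but the body and the .json filter are assumptions, not the PR's exact implementation):

import java.io.File;
import java.util.ArrayList;
import java.util.List;

/** Illustrative only: collects cached payloads from both storage locations. */
class StoredFileLookupSketch {

    /** Returns payload files from the legacy directory and the new per-client directory. */
    static List<File> findStoredFiles(File storageDir, File oldDirectory) {
        List<File> found = new ArrayList<>();
        addJsonFiles(found, oldDirectory); // reports written by older notifier versions
        addJsonFiles(found, storageDir);   // reports written to the per-client directory
        return found;
    }

    private static void addJsonFiles(List<File> dst, File dir) {
        // listFiles() returns null if the directory does not exist or cannot be
        // read, so guard against both cases before iterating.
        File[] files = (dir != null && dir.isDirectory()) ? dir.listFiles() : null;
        if (files == null) {
            return;
        }
        for (File file : files) {
            if (file.isFile() && file.getName().endsWith(".json")) {
                dst.add(file);
            }
        }
    }
}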

Could the state of the filesystem cause Bugsnag to crash?

I'm not aware of anything within these changes that would cause an unhandled crash within the notifier, although the risky areas would be around:

  • ensuring the old/new location actually exists
  • handling IOErrors when Files cannot be created
  • null/File#exists() checks

final String storeDirectory;
final String oldDirectory;

File storageDir;
Contributor


Why should this property be renamed?

Contributor Author


I wanted it to be clear that there is a difference between the old directory we previously used, and the client-specific directory that we will use in future. Maybe the nomenclature is a bit off here?


// support multiple clients in the same app by using a unique directory path

private File getStorageDir(String path, @NonNull Configuration config) {
Contributor


If a project rotates API keys or the on-premise endpoint changes, will stored files ever be sent (or cleared)?

Contributor Author


Not currently. Is this something we need to address?

Contributor


Yeah - in the worst case, files shouldn't be left on disk indefinitely. In the best case, everything is delivered according to the configuration at the time the report was captured. Splitting directories by endpoint and key works until either component needs to change. A better approach would be to write the entire request to disk including the endpoint and the headers. For example, a (nearly) raw HTTP request (format amended to encode protocol and port):

POST https://notify.bugsnag.com
Bugsnag-Api-Key: aaaabaaaabaaaabaaaa
Content-Type: application/json

{"events": [{"severity": "warning", "exceptions":[]}]

On-premise example for contrast:

POST https://bugsnag.example.internal:9002
Bugsnag-Api-Key: aaaabaaaabaaaabaaaa
Content-Type: application/json

{"events": [{"severity": "warning", "exceptions":[]}]

The stored file could then be parsed up to the double newline to recover the endpoint and headers, with the rest streamed directly as the body (a rough sketch of this follows the list below).

This should:

  • Avoid cycling directories, ensuring everything is delivered and cleared from disk
  • Tie API key to a request even if cached on disk
  • Not affect reports not cached to disk
  • Have the same integration tests
  • Have the same/similar logic for handling reports in the old location
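
A rough sketch of how such a cached file might be replayed, assuming the hypothetical format above (request line with a full URL, header lines, a blank line, then the JSON body); the class and method names here are illustrative, not part of the PR:

import java.io.BufferedInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

/** Illustrative only: replays a cached "raw request" file of the form shown above. */
class RawRequestReplaySketch {

    static void deliver(File requestFile) throws IOException {
        try (InputStream in = new BufferedInputStream(new FileInputStream(requestFile))) {
            // Request line, e.g. "POST https://notify.bugsnag.com"
            String requestLine = readLine(in);
            String endpoint = requestLine.substring(requestLine.indexOf(' ') + 1);

            HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);

            // Header lines until the blank line separating headers from the body.
            String header;
            while (!(header = readLine(in)).isEmpty()) {
                int colon = header.indexOf(':');
                conn.setRequestProperty(header.substring(0, colon).trim(),
                        header.substring(colon + 1).trim());
            }

            // Stream the remainder of the file directly as the request body.
            try (OutputStream out = conn.getOutputStream()) {
                byte[] buffer = new byte[4096];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                }
            }
            conn.getResponseCode(); // trigger delivery; real code would check the status
            conn.disconnect();
        }
    }

    // Reads a single '\n'-terminated line without buffering past it.
    private static String readLine(InputStream in) throws IOException {
        ByteArrayOutputStream line = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1 && b != '\n') {
            if (b != '\r') {
                line.write(b);
            }
        }
        return new String(line.toByteArray(), "ISO-8859-1");
    }
}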

Contributor Author


I think this approach makes sense.

Contributor Author


It's also worth mentioning that we use the cache directory when storing files on Android, so Files may be deleted if they are left there indefinitely and the system is low on disk space.

Reading through the docs it looks like we may want to perform a check on our quota when writing files - I'll create a separate ticket for that.

I think it still makes sense to have a limit of around 128 reports/sessions on disk at any time.

for (File file : files) {
    assertTrue(file.getAbsolutePath().endsWith("foo.json"));
}
assertEquals(2, files.size());
Contributor


If there are multiple clients, will all attempt to find and send files from the "old" directory?

Contributor Author


Yes. I've added a mazerunner scenario which covers this.

adds a scenario to verify that errors stored in the old directory are reported by multiple clients
if (exceptionDir.isDirectory()) {
    File[] files = exceptionDir.listFiles();
if (storageDir.isDirectory()) {
    File[] files = storageDir.listFiles();
Contributor Author


This needs fixing as it doesn't take the previous directory into account.

alters the filestore so that it obtains all files (including from the legacy directory) when the max number of files is reached, and the oldest file needs to be deleted
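
A rough sketch of that eviction behaviour, assuming a 128-file cap and last-modified ordering; the names and the exact threshold are illustrative, not the PR's actual FileStore code:

import java.io.File;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

/** Illustrative only: enforces a file cap across the legacy and per-client directories. */
class OldestFileEvictionSketch {

    static final int MAX_STORED_FILES = 128; // limit discussed above; exact value assumed

    /** Deletes the oldest stored files (from either directory) until back under the cap. */
    static void enforceFileLimit(File storageDir, File oldDirectory) {
        List<File> files = new ArrayList<>();
        collect(files, oldDirectory);
        collect(files, storageDir);

        if (files.size() < MAX_STORED_FILES) {
            return;
        }
        // Sort oldest first, based on last-modified time.
        Collections.sort(files, new Comparator<File>() {
            @Override
            public int compare(File lhs, File rhs) {
                return Long.compare(lhs.lastModified(), rhs.lastModified());
            }
        });
        while (files.size() >= MAX_STORED_FILES) {
            File oldest = files.remove(0);
            if (!oldest.delete()) {
                oldest.deleteOnExit(); // best effort if deletion fails
            }
        }
    }

    private static void collect(List<File> dst, File dir) {
        File[] found = (dir != null && dir.isDirectory()) ? dir.listFiles() : null;
        if (found != null) {
            Collections.addAll(dst, found);
        }
    }
}
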
@fractalwrench
Contributor Author

@kattrali I wanted to check whether the remaining inline comments require any actions, namely:

  • Should oldDirectory be renamed to something else?
  • Do we need to handle the case of API key/endpoint rotation, and if so, how would that be achieved?

@kattrali
Contributor

kattrali commented May 2, 2018

Should oldDirectory be renamed to something else?

Seems fine; perhaps it should be annotated for removal in the next major.

Do we need to handle the case of API key/endpoint rotation, and if so, how would that be achieved?

Ah, added detail inline for this.

@fractalwrench
Contributor Author

Closing this PR off as the intended approach has changed somewhat.

@fractalwrench fractalwrench deleted the multi-client-storage branch October 1, 2018 08:17
rich-bugsnag pushed a commit that referenced this pull request Sep 3, 2021
Silence iOS test fixtures build warnings