Add more analytics #5985
Conversation
flutter-idea/src/io/flutter/analytics/FlutterAnalysisServerListener.java (outdated review thread, resolved)
Force-pushed from 26dd4f0 to 155616f
Force-pushed from 155616f to fb8ae55
This is ready for review. Note that due to changes in …
}

@Override
public void computedErrors(String path, List<AnalysisError> list) {
Is this getting called for every file in a project?
If it is called for more than just each root path, I'm concerned we are logging too much.
Not every file, but all Dart files plus some unexpected ones: all three AndroidManifest.xml files were checked, plus pubspec.yaml and analysis_options.yaml.
FlutterInitializer.getAnalytics().sendTiming(E2E_IJ_COMPLETION_TIME, FAILURE, e2eCompletionMS); // test: logE2ECompletionErrorMS()
}

private void logAnalysisError(@Nullable AnalysisError error) {
I'm concerned this is logging too much. If you have 10,000 errors, we are still trying to log 100 events.
What are we trying to achieve by logging the error codes?
Actually, we log 100 events plus the analysis time of every error in an open editor. So, that's potentially worse. We'd have to check with @jwren for design rationale.
We need to log far fewer events. Logging analytics must not become the reason users see poor performance or complain that our tools use too much of their bandwidth. I would suggest changing all of these analytics so that a single event, or a couple of events, summarizes the state of all reported errors, rather than ever emitting events proportional to the number of errors or files.
I think this will be straightforward to change. I'll drop the call to logAnalysisError for each error and clean up unused elements. We already log the summary info in serverStatus.
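For context, here is a minimal sketch of the summarizing approach under discussion, not the PR's exact code. It assumes the listener keeps int fields errorCount, warningCount, hintCount, and lintCount, that AnalysisError exposes String-valued getSeverity() and getType() accessors as in the analysis server protocol library, and that ERRORS, WARNINGS, and HINTS are sibling constants of the LINTS constant that appears later in this review:

```java
// Sketch only: count severities as computedErrors() notifications arrive,
// then send one summary event per severity when analysis completes,
// instead of one analytics event per individual error.
@Override
public void computedErrors(String path, List<AnalysisError> errors) {
  for (AnalysisError error : errors) {
    String severity = error.getSeverity();
    if (AnalysisErrorSeverity.ERROR.equals(severity)) {
      errorCount++;
    }
    else if (AnalysisErrorSeverity.WARNING.equals(severity)) {
      warningCount++;
    }
    else if (AnalysisErrorType.HINT.equals(error.getType())) {
      hintCount++;
    }
    else if (AnalysisErrorType.LINT.equals(error.getType())) {
      lintCount++;
    }
  }
}

// Hypothetical helper; in the PR the summary is logged from serverStatus().
private void reportErrorSummary(Analytics analytics) {
  if (errorCount > 0) analytics.sendEventMetric(DAS_STATUS_EVENT_TYPE, ERRORS, errorCount);
  if (warningCount > 0) analytics.sendEventMetric(DAS_STATUS_EVENT_TYPE, WARNINGS, warningCount);
  if (hintCount > 0) analytics.sendEventMetric(DAS_STATUS_EVENT_TYPE, HINTS, hintCount);
  if (lintCount > 0) analytics.sendEventMetric(DAS_STATUS_EVENT_TYPE, LINTS, lintCount);
  errorCount = warningCount = hintCount = lintCount = 0; // reset for the next analysis pass
}
```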
Done! PTAL
public void computedErrors(String path, List<AnalysisError> list) {
  assert list != null;
  list.forEach(this::logAnalysisError);
  pathToErrors.put(path, list);
What happens if you restart the analysis server? Do we get a new event with new errors?
I wish we had a merged Dart and Flutter plugin so we didn't have to duplicate logic that the analysis errors window already handles.
Yes. It would make sense to check if the file had already been analyzed and, if the errors are the same, ignore it, I think.
I added that test and asked for an opinion from jwren.
There's no single right choice; we can discuss, since there are pros and cons to both (as with all things). If you don't send the information, you lose the signal that users are re-analyzing, which is a signal in itself; if you do send it, the logs fill up with repeats of the same information.
jacob314 left a comment:
A few minor comments, then LGTM.
The main thing I'm worried about is ensuring we don't start logging too much and cause performance problems, particularly for cases where an IDE is already struggling with thousands of errors in the analysis errors view.
@jacob314 My main concern is that we are doing a lot of work that 90% of our users won't see for a year. The new Dart API is (or was; I have not checked this week) only present in the EAP version of the Dart plugin. BTW, you have to click the Approve button now; LGTM in comments is no longer sufficient to merge. No hurry, I have not yet looked into your questions.
Force-pushed from 132f677 to fdacbf5
public void computedErrors(String path, List<AnalysisError> list) {
  assert list != null;
  List<AnalysisError> existing = pathToErrors.get(path);
  if (existing != null && existing.equals(list)) {
@jwren Do you think this test makes sense? And will it be expensive?
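For reference, a sketch of the duplicate-suppression check being asked about, with its cost spelled out. It assumes pathToErrors is a Map<String, List<AnalysisError>> and that AnalysisError has a value-based equals(), as the generated protocol classes do; the body after the early return is elided:

```java
// Sketch: skip re-reporting a file whose error list is unchanged since the
// previous computedErrors() notification. List.equals() compares element by
// element, so the check is O(n) in the number of errors for that one file,
// which should be cheap compared with the analytics work it avoids.
@Override
public void computedErrors(String path, List<AnalysisError> list) {
  assert list != null;
  List<AnalysisError> existing = pathToErrors.get(path);
  if (existing != null && existing.equals(list)) {
    return; // same errors as last time; nothing new to count or report
  }
  pathToErrors.put(path, list);
  // ...update severity counters here...
}
```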
Force-pushed from 709f05c to e699f85
Force-pushed from a38996f to 8f372a6
@jacob314 @jwren After looking at the log file and seeing just how much was getting sent, I decided to throttle everything to one transmission per minute. The cumulative error counts are sent when the project is closed, and every two hours, for back-end percentile analysis. Both of those intervals can be adjusted.
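Roughly, the throttle amounts to a timestamp check in front of each send. The sketch below is illustrative rather than the PR's exact code (the PR's maybeReport also takes a boolean argument not modeled here), and it assumes java.util.concurrent.atomic.AtomicLong and java.util.function.Consumer are imported:

```java
// Illustrative throttle: skip an analytics transmission if one already went
// out within the last minute. The real listener may track this per event
// type; this sketch uses a single shared timestamp for brevity.
private static final long MIN_INTERVAL_MS = 60L * 1000L; // one minute, adjustable

private final AtomicLong lastTransmissionMs = new AtomicLong();

private void maybeReport(Consumer<Analytics> send) {
  long now = System.currentTimeMillis();
  long last = lastTransmissionMs.get();
  if (now - last < MIN_INTERVAL_MS) {
    return; // throttled: too soon since the previous transmission
  }
  if (lastTransmissionMs.compareAndSet(last, now)) {
    send.accept(FlutterInitializer.getAnalytics());
  }
}
```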
flutter-idea/src/io/flutter/analytics/FlutterAnalysisServerListener.java (outdated review thread, resolved)
if (lintCount > 0) {
  analytics.sendEventMetric(DAS_STATUS_EVENT_TYPE, LINTS, lintCount); // test: serverStatus()
}
errorCount = warningCount = hintCount = lintCount = 0;
Why do we zero out the counts? This seems wrong. I would expect these numbers to match the number of errors reported in the analysis server window. As is, I'm not clear how I would interpret them.
They are accumulated while analysis is active, then sent when analysis is complete. They need to be zeroed after sending so the values accumulated for the next analysis pass are accurate.
void logE2ECompletionSuccessMS(long e2eCompletionMS) {
  FlutterInitializer.getAnalytics().sendTiming(E2E_IJ_COMPLETION_TIME, SUCCESS, e2eCompletionMS); // test: logE2ECompletionSuccessMS()
  maybeReport(true, (analytics) -> {
For the completion time, rather than throttling, you could alternatively report a single metric on IntelliJ close giving the P50, P90, and P95 times for the entire session.
That would make the completion-time numbers less noisy than filtering to only report one completion per minute.
See the comment below for why we cannot do it when exiting.
flutter-idea/src/io/flutter/analytics/FlutterAnalysisServerListener.java (two resolved review threads)
if (IS_TESTING) {
  errorCount = warningCount = hintCount = lintCount = 1;
}
maybeReportErrorCounts();
Have you manually verified that these numbers match what the analyzer window shows?
It's been a while, but yes.
Rather than rate limiting, I think we should emit summary statistics when the IntelliJ session is closed. Just rate limiting like this can make the data a bit noisy and skewed. An example summary-statistics format that I think would work better would indicate that the user had 2000 autocomplete events with a P50 time of 73, a P90 time of 350, and a P95 time of 700.
We can't do all the computation and reporting at exit. IntelliJ enforces a strict, limited time budget for exit processing (i.e. project close). We can't know what else is going on during shutdown, so relying on it would potentially limit the data we would collect.
Percentiles are computed on the server, and the same analytics events are used here as are used there. My statistics is rusty, and my stats book doesn't even mention this, but I'm not sure you'd get the same percentiles by combining percentiles from samples as you would by computing the population percentiles on the server.
jwren left a comment:
LGTM
OK, I'm fine with sampling for most things instead. Perhaps I'm being too conservative about how many analytics events we should send.
I agree that the median of the medians is not the same as the median of all the data, but in some ways it is the metric we really want.

Needs testing, and tests. Some basic work is still not done, either.