[feat] Improving Crashlytics MCP Tools #9181
Conversation
Code Review
This pull request introduces several improvements to the Crashlytics MCP tools, including adding a batchGetEvents tool, enhancing error handling by removing redundant try-catch blocks, and hardening the tools against common errors with better validation and data simplification. The changes are well-structured and improve the robustness of the tools. I've identified a few areas for improvement, mainly concerning object mutation, which can lead to side effects, and some minor inconsistencies that could be addressed to improve maintainability and clarity.
Force-pushed from bad75a8 to 87e25c9
A couple small comments, otherwise LGTM.
```diff
   .optional()
   .describe(
-    `Count FATAL events (crashes), NON_FATAL events (exceptions) or ANR events (application not responding)`,
+    `Counts FATAL events (crashes), NON_FATAL events (exceptions) or ANR events (application not responding)`,
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Should we include a default value here? It seems like this is not being set by the LLM; maybe we should do the inverse and make the LLM add the other event types if it wants them.
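For illustration, the default being discussed could be applied like this (a minimal sketch; `EventType` and `resolveEventTypes` are hypothetical names, not from the PR):

```typescript
// Illustrative only: default to FATAL (crashes) when the LLM omits the
// field, while still letting it opt into the other event types explicitly.
type EventType = "FATAL" | "NON_FATAL" | "ANR";

function resolveEventTypes(requested?: EventType[]): EventType[] {
  // When nothing is requested, count only crashes by default.
  return requested && requested.length > 0 ? requested : ["FATAL"];
}
```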
We haven't fully figured out when is the right time to guide it to fatals yet. Let's break that off as a separate problem.
```ts
const simplifiedReport = cloneDeep(report);
if (!simplifiedReport.groups) return report;
simplifiedReport.groups.forEach((group) => {
  // Leaves displayName only in each group, which is the appropriate field to use
```
nit: can you describe what is included in the displayName in this comment?
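For context, the clone-then-trim pattern under discussion can be sketched end-to-end like this (a simplified stand-in: the `Report`/`Group` types are illustrative, and a JSON round-trip substitutes for lodash's `cloneDeep`):

```typescript
// Sketch, not the PR's code: deep-clone before trimming so the caller's
// report object is never mutated, then keep only displayName per group.
interface Group {
  displayName?: string;
  [key: string]: unknown;
}
interface Report {
  groups?: Group[];
}

function simplifyReport(report: Report): Report {
  // Simple stand-in for lodash cloneDeep.
  const simplified: Report = JSON.parse(JSON.stringify(report));
  if (!simplified.groups) return report;
  simplified.groups = simplified.groups.map((g) => ({ displayName: g.displayName }));
  return simplified;
}
```

The clone is what prevents the side effects the review called out: trimming fields for the LLM never reaches back into the object the caller still holds.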
```diff
-3. Use the 'crashlytics_list_events' tool to get an example crash for this issue.
-3a. Apply the same filtering criteria that you used to find the issue, so that you find an appropriate event.
+3. Use the 'crashlytics_batch_get_events' tool to get an example crash for this issue. Use the event names in the sampleEvent fields.
+3a. If you need to read more events, use the 'crashlytics_list_events' tool.
```
Have you seen it be able to effectively use this?
Yes, it's reliably using the get tool with the provided sample events. This is pulling much more accurate samples now than the list path.
My major concern is: how are we communicating the failure of MCP tool calls to the LLM? Do we have a broader way to handle that better?
```ts
 */
export function validateEventFilters(filter: EventFilter): void {
  if (!filter) return;
  const ninetyDaysAgo = new Date(Date.now() - 90 * 24 * 60 * 60 * 1000);
```
Can we use a constant for a day? That way it is easier to understand and reusable.
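The suggested refactor could look something like this (a sketch only; the constant and function names are illustrative, not from the PR):

```typescript
// Illustrative refactor: name the magic numbers so the retention window
// reads as intent rather than arithmetic.
const MS_PER_DAY = 24 * 60 * 60 * 1000;
const MAX_LOOKBACK_DAYS = 90;

function earliestAllowedDate(now: number = Date.now()): Date {
  return new Date(now - MAX_LOOKBACK_DAYS * MS_PER_DAY);
}
```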
lgtm, just a few comments on the test coverage
```ts
  expect(nock.isDone()).to.be.true;
});

it("should throw a FirebaseError if the API call fails", async () => {
```
Can we add tests to show how errors are handled now? Similarly for the other tools as well.
```ts
  additionalPrompt = "This report response contains no results.";
}
if (additionalPrompt) {
  reportResponse.usage = (reportResponse.usage || "").concat("\n", additionalPrompt);
```
How does changing the usage field compare to just removing the extra usage field altogether?
```ts
  queryParams: queryParams,
  timeout: TIMEOUT,
});
response.body.events ??= [];
```
nit: can you add a test to check that [] is returned.
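For reference, the `??=` normalization in question behaves like this (a minimal sketch, not the PR's actual test):

```typescript
// ??= assigns only when the left-hand side is null or undefined, so a
// response body that omits `events` is normalized to [], while an existing
// array (even an empty one) is left untouched.
interface Body {
  events?: string[];
}

function normalize(body: Body): Body {
  body.events ??= [];
  return body;
}
```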
I removed many instances of "log and throw" in this PR. The general exception handling works well, both for ts errors and api response errors. There's no need to wrap every exception in every tool call. Removing this better allows the model to see the status code and exception message from the service, which is helping in many cases.
Have you pushed the changes? I don't see any new commits.
Going to call this resolved since the goal is to communicate the response directly to the LLM. So, all good here.
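The "log and throw" removal discussed above can be illustrated with a minimal sketch (hypothetical names throughout; `ApiError` and `callCrashlyticsApi` stand in for the real service client):

```typescript
// Hypothetical sketch of why removing wrap-and-rethrow helps: the original
// service error carries the status code and message the LLM needs to see.
class ApiError extends Error {
  constructor(
    public status: number,
    message: string,
  ) {
    super(message);
  }
}

async function callCrashlyticsApi(): Promise<never> {
  throw new ApiError(403, "PERMISSION_DENIED: caller lacks required permission");
}

// Anti-pattern removed in this PR: wrapping erases status and message.
async function wrapped(): Promise<unknown> {
  try {
    return await callCrashlyticsApi();
  } catch {
    throw new Error("tool call failed"); // original 403 and reason are lost
  }
}
// Preferred: let the ApiError propagate to the MCP layer unchanged.
```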
Force-pushed from 87e25c9 to adc8626
Description