
Large request post data causes Bugsnag_Client::notify() to fail #119

Closed
philbates35 opened this issue Apr 7, 2016 · 3 comments
Labels
bug (Confirmed bug)

Comments

@philbates35

Background:

  • My application imports emails sent to it by Mandrill.
  • The emails contain a large encoded Excel spreadsheet.
  • Stupidly, I wasn't using any sort of queuing system, so Mandrill hit my webhook endpoint URL, and the application synchronously tried to import from the spreadsheet, which can take a long time.

Every time the import was attempted, the max_execution_time was being hit, causing a PHP Fatal error: Maximum execution time of 60 seconds exceeded. This has been happening for a couple of weeks, and none of these errors have appeared in my Bugsnag account.

After some debugging, I've worked out that the POST data in the original request from Mandrill is larger than the Bugsnag POST data limit, so when Bugsnag tries to notify about the Maximum execution time of 60 seconds exceeded error, it gets a 400 (Bad Request) response, and therefore the error is never logged. The docs state that a 400 Bad Request indicates that the payload was too large.

You can see here that the Bugsnag library never considers the size of the $_POST variable before adding it to the $requestData.

Do you think it would be sensible to check the size of the POST data just before sending the curl request? Then, if the size is larger than the Bugsnag POST data limit (which isn't mentioned in the docs, so I'm not sure what it is), the POST data (or params, to use the term actually used in $requestData) would be truncated enough for the payload to fall below the Bugsnag limit. Alternatively, the POST data could be excluded altogether. Ultimately, given that Bugsnag has set a POST data limit, it makes sense to me for the library to never send a request with POST data larger than that limit.
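Something along these lines, just before the curl call (a minimal sketch: the limit value and the shape of $requestData are guesses on my part, since neither is documented):

```php
<?php

// Hypothetical pre-send guard. MAX_PAYLOAD_BYTES is an assumed value,
// since the real limit isn't documented at the time of writing.
const MAX_PAYLOAD_BYTES = 512 * 1024;

function stripParamsIfTooLarge(array $requestData)
{
    $encoded = json_encode($requestData);

    if ($encoded !== false && strlen($encoded) <= MAX_PAYLOAD_BYTES) {
        return $requestData; // Small enough, send as-is.
    }

    // Over the limit: drop the request params (the "exclude altogether"
    // option) so the notification itself still gets through.
    foreach ($requestData['events'] as &$event) {
        unset($event['metaData']['request']['params']);
    }
    unset($event);

    return $requestData;
}
```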

As I was relying on Bugsnag to notify me of any errors, I was not made aware that the imports were failing until a couple of weeks later, when the customer alerted me.

(For the record, I'm in the process of implementing a queuing system to handle this, so the original request from Mandrill with the massive POST data should be handled very quickly, with a job added to the queue. That said, if some other unrelated error occurs during that request, once again the error would never reach Bugsnag.)

@kattrali
Contributor

kattrali commented Apr 7, 2016

Thank you for the thorough report, @philbates35.

Do you think it would be sensible to check the size of the POST data just before sending the curl request?

Definitely. There are two fixes that would significantly improve things here: the first is to add the actual payload size limit to the documentation (which I'm fixing now), and the second is to truncate payloads to that size.

This behavior was recently implemented for other notifier libraries, notably Ruby (bugsnag/bugsnag-ruby#288, bugsnag/bugsnag-ruby#290) and Java (bugsnag/bugsnag-java#35). The general strategy is to check the payload size, trim long strings, and then, if needed, truncate long arrays (which tend to be stacktraces).
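For illustration, that strategy looks roughly like this (not the actual Ruby/Java code, and the length limits here are made-up example values):

```php
<?php

// Sketch of the truncation strategy: trim long strings first, then,
// only if the payload is still too large, truncate long arrays.
function trimStrings($value, $maxStringLength = 1024)
{
    if (is_string($value) && strlen($value) > $maxStringLength) {
        return substr($value, 0, $maxStringLength) . ' [TRUNCATED]';
    }

    if (is_array($value)) {
        return array_map(function ($item) use ($maxStringLength) {
            return trimStrings($item, $maxStringLength);
        }, $value);
    }

    return $value;
}

function truncateArrays($value, $maxArrayLength = 40)
{
    if (is_array($value)) {
        $value = array_slice($value, 0, $maxArrayLength, true);

        return array_map(function ($item) use ($maxArrayLength) {
            return truncateArrays($item, $maxArrayLength);
        }, $value);
    }

    return $value;
}
```

A notifier following this would apply trimStrings first, re-measure the encoded payload, and only fall back to truncateArrays if it is still over the limit.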

The PHP implementation should do something similar to the other notifiers, as well as expose the maximum payload size, in case you or other users want to implement payload truncation in a before-notify callback in a way that makes more sense for your reports.
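For example, such a callback might look like this (assuming the v2 setBeforeNotifyFunction() hook; the 512kB budget is an assumed figure, and the exact Bugsnag_Error metadata API may vary between versions, so treat this as a sketch):

```php
<?php

// User-land workaround: if the incoming request body is huge, replace
// the params with a short note before the notification is sent.
$bugsnag = new Bugsnag_Client('YOUR-API-KEY');

$bugsnag->setBeforeNotifyFunction(function ($error) {
    if (strlen(http_build_query($_POST)) > 512 * 1024) {
        $error->setMetaData(array(
            'request' => array(
                'params' => '[omitted: request body too large]',
            ),
        ));
    }
});
```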

I'll work on this after some documentation updates, unless someone beats me to it. In the meantime, a payload under 1MB should be accepted.

@kattrali added the bug (Confirmed bug) and feature parity labels Apr 7, 2016
@GrahamCampbell
Contributor

The limit currently stands at 512k. I have now fixed this: we break up batch requests if they're too large and, as a last resort, strip out the metadata too.
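Roughly, the approach is (a sketch, not the shipped code; sendToBugsnag() here stands in for the real HTTP delivery):

```php
<?php

// Sketch: split oversized batches, and as a last resort strip the
// metadata from a single event that is still over the 512k limit.
const LIMIT = 512 * 1024;

function deliver(array $payload)
{
    if (strlen(json_encode($payload)) <= LIMIT) {
        sendToBugsnag($payload);
        return;
    }

    $events = $payload['events'];

    if (count($events) > 1) {
        // Too large as a batch: split it in half and retry each part.
        $half = (int) ceil(count($events) / 2);

        $first = $payload;
        $first['events'] = array_slice($events, 0, $half);
        deliver($first);

        $second = $payload;
        $second['events'] = array_slice($events, $half);
        deliver($second);

        return;
    }

    // A single event is still over the limit: strip its metadata.
    unset($payload['events'][0]['metaData']);
    sendToBugsnag($payload);
}
```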

@GrahamCampbell
Contributor

GrahamCampbell commented Jun 23, 2016

This fix will ship with v2.9.0.
