My application imports emails sent to it by Mandrill.
The emails contain a large encoded Excel spreadsheet.
Stupidly, I wasn't using any sort of queuing system, so Mandrill hit my webhook URL endpoint and the application synchronously tried to import the spreadsheet, which can take a long time.
Every time the email import was attempted, the `max_execution_time` limit was hit, causing a `PHP Fatal error: Maximum execution time of 60 seconds exceeded`. This has been happening for a couple of weeks, and none of these errors have appeared in my Bugsnag account.
After some debugging, I've worked out that the POST data in the original request from Mandrill is larger than the Bugsnag POST data limit, so when Bugsnag tries to notify about the `Maximum execution time of 60 seconds exceeded` error, it gets a `400 (Bad Request)` response, and therefore the error is never logged. The docs state that a 400 Bad Request indicates that the payload was too large.
You can see here that the Bugsnag library never considers the size of the `$_POST` variable before adding it to `$requestData`.
Do you think it would be sensible to check the size of the POST data just before sending the cURL request? Then, if the size is larger than the Bugsnag POST data limit (which isn't mentioned in the docs, so I'm not sure what it is), the POST data (or `params`, to use the term actually used in `$requestData`) would be truncated enough that it falls below the Bugsnag limit. Alternatively, the POST data could be excluded altogether. Ultimately, given that Bugsnag has set a POST data limit, it makes sense to me for the Bugsnag library to never send a request with POST data larger than that limit.
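For illustration, the library-side check proposed above might look roughly like this sketch. The `MAX_PAYLOAD_BYTES` constant, the `truncatePayload` helper, and the 512kB figure are all assumptions here, not the library's actual API:

```php
<?php
// Hypothetical limit — the real Bugsnag limit isn't stated in the docs.
const MAX_PAYLOAD_BYTES = 512 * 1024;

/**
 * Shorten or drop request params until the JSON-encoded payload
 * fits under the limit. Returns the (possibly reduced) payload.
 */
function truncatePayload(array $requestData, int $limit = MAX_PAYLOAD_BYTES): array
{
    if (strlen(json_encode($requestData)) <= $limit) {
        return $requestData; // already small enough, send as-is
    }

    // First attempt: shorten each string param to a short prefix.
    foreach ($requestData['params'] ?? [] as $key => $value) {
        if (is_string($value)) {
            $requestData['params'][$key] = substr($value, 0, 1024);
        }
    }
    if (strlen(json_encode($requestData)) <= $limit) {
        return $requestData;
    }

    // Last resort: exclude the POST data altogether.
    unset($requestData['params']);
    return $requestData;
}
```

Either outcome keeps the error report deliverable; only the attached request body is sacrificed.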
As I was relying on Bugsnag to notify me of any errors, I was not made aware that the imports were failing until the customer alerted me a couple of weeks later.
(For the record, I'm in the process of implementing a queuing system to handle this, so the original request from Mandrill with the massive POST data should be handled very quickly and a job added to the queue. That said, if some other unrelated error occurred during this request, once again that error would never reach Bugsnag.)
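The queued approach described above can be sketched as follows: the webhook handler records a job and returns immediately, deferring the slow spreadsheet import to a background worker. The `$enqueue` callable is a stand-in for whatever queue backend is used, and `mandrill_events` is the parameter Mandrill posts its events under:

```php
<?php
// Minimal sketch: accept the webhook fast, do the heavy work elsewhere.
function handleMandrillWebhook(array $post, callable $enqueue): string
{
    // Store only what the worker needs; don't parse the spreadsheet here.
    $enqueue([
        'type'    => 'import_spreadsheet',
        'payload' => $post['mandrill_events'] ?? '[]',
    ]);

    // Respond quickly so the request never approaches max_execution_time.
    return 'queued';
}
```

With this shape, any unrelated error in the handler is still cheap to report, because the request itself finishes in milliseconds.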
> Do you think it would be sensible to check the size of the post data just before sending the curl request.
Definitely. There are two fixes that would significantly improve things here: the first is to add the actual payload size limit to the documentation (which I'm fixing now), and the second is to truncate payloads to that size.
The PHP implementation should do something similar to the other notifiers, as well as exposing the maximum payload size, in case you or other users want to implement payload truncation in a before-notify callback in a way that makes more sense for your reports.
I'll work on this after some documentation updates, unless someone beats me to it. In the meantime, a payload under 1MB should be accepted.
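Assuming the notifier did expose its maximum payload size and a before-notify hook, the user-side truncation mentioned above could look like this. The limit value, the hook name, and the report shape are all assumptions for illustration, not the library's actual API:

```php
<?php
// Hypothetical before-notify hook: if the report is oversized, replace
// the request params with just their names, so the report still shows
// which fields were posted without carrying their (huge) values.
function beforeNotify(array &$error): void
{
    $limit = 512 * 1024; // stand-in for an exposed MAX_PAYLOAD_SIZE

    if (strlen(json_encode($error)) > $limit
        && isset($error['requestData']['params'])) {
        $error['requestData']['params'] = array_keys($error['requestData']['params']);
    }
}
```

Doing this in a callback lets each application decide what is safe to drop, rather than the library guessing.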
The limit currently stands at 512kB. I have now fixed this. What we do is break up batch requests if they're too large and, as a last resort, strip out the metadata too.
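The strategy described above can be sketched roughly as follows. The names (`sendBatch`, `$postToBugsnag`) and the event shape are illustrative, not the library's actual internals:

```php
<?php
const PAYLOAD_LIMIT = 512 * 1024;

/**
 * If a batch of events is too large, split it in half and send each
 * part separately; if a single event is still too large, strip its
 * metaData as a last resort before sending.
 */
function sendBatch(array $events, callable $postToBugsnag): void
{
    if (strlen(json_encode($events)) <= PAYLOAD_LIMIT) {
        $postToBugsnag($events);
        return;
    }

    if (count($events) > 1) {
        // Recursively split the oversized batch in half.
        $half = (int) ceil(count($events) / 2);
        sendBatch(array_slice($events, 0, $half), $postToBugsnag);
        sendBatch(array_slice($events, $half), $postToBugsnag);
        return;
    }

    // Single oversized event: drop its metaData and send what remains.
    unset($events[0]['metaData']);
    $postToBugsnag($events);
}
```

The recursion bottoms out at single events, so the core error details (class, message, stacktrace) are always delivered even when the attached metadata cannot be.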