Email error when log is too big to send. #76
I think there is currently some very crude truncation going on in an attempt to keep the logs for each individual Task pretty small. These could still pile up if you have a bunch of tasks. I'm in favor of adding configuration around sending logs and, if this is a common problem, configuring a max size on emails. Both should be their own issues.
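For illustration, a rough sketch of what a configurable cap on emailed logs could look like. The names MAX_EMAIL_LOG_BYTES and truncate_log_for_email are hypothetical, not existing dagobah config:

    # Hypothetical sketch of a configurable cap on emailed log size.
    # MAX_EMAIL_LOG_BYTES and truncate_log_for_email are illustrative names,
    # not part of dagobah's actual configuration.
    MAX_EMAIL_LOG_BYTES = 256 * 1024

    def truncate_log_for_email(log_text, limit=MAX_EMAIL_LOG_BYTES):
        """Return log_text cut to roughly `limit` bytes, with a notice appended."""
        encoded = log_text.encode('utf-8')
        if len(encoded) <= limit:
            return log_text
        clipped = encoded[:limit].decode('utf-8', 'ignore')
        return clipped + '\n... [log truncated for email; full log available in the web UI]'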
Yeah, this was another feature I was thinking of. I know our processes will probably have a lot of STDOUT, so sending it directly in the email, while convenient, can be a bit much. With utkarsh's logs code, you could have an email template that links to the logs in the web UI instead of putting them directly in the email.
I love this idea. Would reduce the need for both of those email config vars, I would think.
Having a link would be cool, but you still have the problem of people who want it in the email. If you are cool with that, I will work on that and submit a pull request. Edit: English
Yes, in either case there should be detection for the email being too large, so +1 @surbas |
I think we might be hitting this issue with one of our jobs, so this is on our radar. (We have a long-running job every morning that produces a lot of output and doesn't end up having an email sent.) Although looking through the dagobah log, we aren't getting these errors; we're getting something like:
I see a few of these OperationFailure errors; here's another:
The exact issue here is probably a Mongo server-side bug, but we do need a better way in general for handling giant logs. |
@rclough Do you think it makes more sense to just drop logs over a certain size and warn the user about it, or to implement something that could actually handle giant logs? We could try using GridFS in Mongo, but as far as I know SQLite is going to be constrained by whatever maximum size we set for that column. No idea how large we can go, but it seems like it would be inefficient.
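For reference, a minimal sketch of what GridFS-backed log storage looks like with pymongo. This is only an illustration of the API, not dagobah's backend code:

    # Illustration of the pymongo GridFS API, not dagobah's backend code.
    # GridFS chunks large blobs, so Mongo's 16 MB document limit doesn't apply.
    from pymongo import MongoClient
    import gridfs

    client = MongoClient('localhost', 27017)
    db = client['dagobah']
    fs = gridfs.GridFS(db)

    huge_log_text = 'stdout captured from a long-running task...'  # placeholder

    # Store the log and keep only the returned ObjectId in the task record.
    log_id = fs.put(huge_log_text.encode('utf-8'), filename='task_log.txt')

    # Stream it back out later, e.g. when the web UI requests it.
    full_log = fs.get(log_id).read().decode('utf-8')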
I feel like if you are going to be running jobs with huge logs, you are probably going to want a backend more robust than SQLite. That said, being able to get full logs is pretty important, I think. A short fix would be dropping large logs with a warning, but oftentimes in my experience there's not much option to fix it (i.e. we would be sacrificing necessary info for job failures). I don't know how GridFS works, but at a quick glance it seems like a cool idea. It might be handy to have emails include a dagobah link to the log if the log is too big.
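On the link idea, something like the following could decide between inlining the log and pointing at the web UI. The threshold and URL layout here are invented for the example, not dagobah's actual routing:

    # Hypothetical: fall back to a web UI link when the log is too big to inline.
    MAX_INLINE_LOG_BYTES = 64 * 1024

    def log_section_for_email(host_url, job_name, task_name, log_text):
        if len(log_text.encode('utf-8')) <= MAX_INLINE_LOG_BYTES:
            return log_text
        return ('Log too large to include; view it in the web UI:\n%s/jobs/%s/%s'
                % (host_url.rstrip('/'), job_name, task_name))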
I log a lot in my processes and got the following in stdout on completion of a job.
Traceback (most recent call last):
  File "c:\dagobah\dagobah\core\components.py", line 31, in emit
    method.__call__(*args, **kwargs)
  File "c:\dagobah\dagobah\daemon\daemon.py", line 151, in job_complete_email
    email_handler.send_job_completed(kwargs['event_params'])
  File "c:\dagobah\dagobah\email\basic.py", line 24, in send_job_completed
    self._construct_and_send('Job Completed: %s' % data.get('name', None))
  File "c:\dagobah\dagobah\email\common.py", line 39, in _construct_and_send
    self._send_message()
  File "c:\dagobah\dagobah\email\common.py", line 72, in _send_message
    self.message.as_string())
  File "C:\Python27\Lib\smtplib.py", line 739, in sendmail
    raise SMTPDataError(code, resp)
SMTPDataError: (552, 'message line is too long')
So should we truncate logs if they are too big? This limit is actually controlled by the SMTP server, so how big were my logs, and what counts as too big?
Should we offer an option not to send the log with the email? (I personally don't need to see my logs in email.)
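For what it's worth, a rough sketch of how the email handler could guard against this: the 552 "message line is too long" error usually means a single line exceeded the server's roughly 1000-character limit, so wrapping over-long lines and falling back to a short notice avoids the failure. The names send_report, wrap_long_lines, and MAX_SMTP_LINE are made up for the example, not dagobah's email handler, and a simple config flag to omit logs entirely would work too:

    # Illustrative sketch only; send_report, wrap_long_lines and MAX_SMTP_LINE
    # are made-up names, not dagobah's email handler.
    import smtplib
    import textwrap
    from email.mime.text import MIMEText

    # Many SMTP servers reject individual lines longer than ~1000 characters,
    # which is what the 552 "message line is too long" error points at.
    MAX_SMTP_LINE = 900

    def wrap_long_lines(body):
        return '\n'.join(
            textwrap.fill(line, width=MAX_SMTP_LINE) if len(line) > MAX_SMTP_LINE else line
            for line in body.splitlines())

    def send_report(smtp_host, from_addr, to_addrs, subject, body):
        msg = MIMEText(wrap_long_lines(body))
        msg['Subject'] = subject
        msg['From'] = from_addr
        msg['To'] = ', '.join(to_addrs)
        server = smtplib.SMTP(smtp_host)
        try:
            server.sendmail(from_addr, to_addrs, msg.as_string())
        except smtplib.SMTPDataError:
            # Fall back to a short notice rather than letting the event handler die.
            notice = MIMEText('Job finished, but the log was too large to email.')
            notice['Subject'] = subject
            notice['From'] = from_addr
            notice['To'] = ', '.join(to_addrs)
            server.sendmail(from_addr, to_addrs, notice.as_string())
        finally:
            server.quit()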