
Don't swallow the original IOException thrown at JSON parsing in slowlogs #19573

Closed
astefan opened this issue Jul 25, 2016 · 6 comments
Labels
>bug :Core/Infra/Logging Log management and logging utilities discuss

Comments

@astefan
Contributor

astefan commented Jul 25, 2016

Elasticsearch version: 1.7.5

JVM version: 1.8.0_77

Description of the problem including expected versus actual behavior:

The slow logs hide, and do not log, the original IOException thrown in the slow log indexing and searching utility classes. The same seems to be happening in 2.3.x.

As a result, slow log entries like the following are logged, with no clue as to what the original request was or why it failed:

[2016-07-25 09:10:38,310][INFO ][index.indexing.slowlog.index] [some_node] [some_index][3] took[7.2s], took_millis[7200], type[some_type], id[12345], routing[12345], source[source[_failed_to_convert_]
@rjernst
Member

rjernst commented Jul 25, 2016

This is a typical case of exception swallowing. For master, the code is in IndexingSlowLog line 193. We could easily change this catch to rethrow as an UncheckedIOException. However, that would propagate up and cause the indexing request to fail, even though the actual indexing already succeeded. So we could either do that, or log the exception in the slow log itself. I don't know which is better, so I've marked this issue for discussion.
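For illustration, here is a minimal, self-contained sketch of the two options being weighed; `convertSourceToJson` and the surrounding class are hypothetical stand-ins, not the actual IndexingSlowLog code:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

class SlowLogSourceSketch {

    // Current behavior: the IOException is swallowed and replaced by a
    // placeholder, so the slow log entry carries no clue about the cause.
    static String sourceSwallowing(byte[] source) {
        try {
            return convertSourceToJson(source);
        } catch (IOException e) {
            return "_failed_to_convert_";
        }
    }

    // Alternative discussed above: rethrow as an unchecked exception and let
    // the caller decide how to surface it.
    static String sourceRethrowing(byte[] source) {
        try {
            return convertSourceToJson(source);
        } catch (IOException e) {
            throw new UncheckedIOException("failed to convert source to JSON", e);
        }
    }

    // Hypothetical stand-in for the real JSON conversion, which may throw.
    static String convertSourceToJson(byte[] source) throws IOException {
        return new String(source, StandardCharsets.UTF_8);
    }
}
```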

@nik9000
Member

nik9000 commented Jul 25, 2016

Maybe we should log it in the main log file (not the slow log) at warn level? Then we can keep the slow log the way it is. I don't expect this to be especially common, so warn level ought to be fine. If it becomes common, we can bump that log level to error as a workaround, and we can fix the issue that makes it common in the meantime.
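A sketch of that suggestion, with the same hypothetical `convertSourceToJson` stand-in and `java.util.logging` in place of Elasticsearch's own logging infrastructure:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.logging.Level;
import java.util.logging.Logger;

class SlowLogWarnSketch {

    private static final Logger MAIN_LOGGER = Logger.getLogger("index.indexing");

    static String sourceToLog(byte[] source) {
        try {
            return convertSourceToJson(source);
        } catch (IOException e) {
            // record the real cause at WARN in the main log ...
            MAIN_LOGGER.log(Level.WARNING, "failed to convert indexing source to JSON for the slow log", e);
            // ... but keep the slow log entry format unchanged
            return "_failed_to_convert_";
        }
    }

    // Hypothetical stand-in for the real JSON conversion, which may throw.
    static String convertSourceToJson(byte[] source) throws IOException {
        return new String(source, StandardCharsets.UTF_8);
    }
}
```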

@javanna
Member

javanna commented Jul 25, 2016

I don't think the slow log should cause a request to fail, but we should definitely not lose what the problem was; we should log all of its details. The downside of polluting the slow log file would be that it may break parsers out there that rely on a fixed format? So I think I agree with Nik: try to log it in the main log file if possible.

@rjernst
Member

rjernst commented Jul 25, 2016

Well, one problem here is that rethrowing and logging in the general log means we will not have this indexing request in the slow log. I think that is kind of bogus? Propagating the exception to fail the indexing request would have been painful, yes, but we would know what the issue is and could fix it much faster than if it were hidden away in another log file.

@lcawl lcawl added :Core/Infra/Core Core issues without another label and removed :Exceptions labels Feb 13, 2018
@jasontedor jasontedor added :Core/Infra/Logging Log management and logging utilities and removed :Core/Infra/Core Core issues without another label labels Mar 14, 2018
@elasticmachine
Collaborator

Pinging @elastic/es-core-infra

@jasontedor
Member

Letting the exception percolate up here will not fail the indexing request. The slow logs are implemented as indexing operation listeners. The post indexing operation loop that invokes such listeners catches all exceptions and logs them at the warn level. After logging, these exceptions percolate no further. Therefore, we can choose to not write the entry to the slow log, but instead carry the exception up to the post index listener loop where it will be logged at the warn level. With it, we carry the entry that would have been entered into the slow log (modulo the failed source conversion). I opened #29043.
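A rough sketch of the mechanism described here; the interface and class names are illustrative, not Elasticsearch's actual IndexingOperationListener API:

```java
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

interface PostIndexListener {
    void postIndex(Object operation, Object result);
}

// The composite listener invokes each registered listener (the slow log being
// one of them), catches anything a listener throws, and logs it at WARN. An
// exception escaping the slow log therefore cannot fail the indexing request.
class CompositePostIndexListener implements PostIndexListener {

    private static final Logger logger = Logger.getLogger("index.shard");
    private final List<PostIndexListener> listeners;

    CompositePostIndexListener(List<PostIndexListener> listeners) {
        this.listeners = listeners;
    }

    @Override
    public void postIndex(Object operation, Object result) {
        for (PostIndexListener listener : listeners) {
            try {
                listener.postIndex(operation, result);
            } catch (Exception e) {
                // listener failures are logged and percolate no further
                logger.log(Level.WARNING, "postIndex listener [" + listener + "] failed", e);
            }
        }
    }
}
```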
