
fix #1233. this just uses the number that makes the test case pass #1274

Conversation

burnettk
Contributor

#1233 includes a test case with a long message that blows up mongo. one of the messages that was blowing up our mongo instance could be fixed with a truncation threshold of 995, but it feels better to fix more cases. this will affect all new messages with sizes between 964 and 1000, but it doesn't really feel like it will break much, and it feels a lot better to store the notice and index something rather than blowing up. i wish i understood the details of what was going on here a bit better. i tried calling message.bytes.size on the test case message, but the number that came out didn't match up exactly with the number in the exception that was raised (it was closer than message.size, though):

message.size: 1000
message.bytes.size: 1047
# note that mongo sees 1059 below. not sure where the extra 12 bytes come from.

Failures:

  1) Notice#message= truncates the long es message
     Failure/Error:
       Problem.where('_id' => id).find_one_and_update({
         '$set' => {
           'environment'                            => notice.environment_name,
           'error_class'                            => notice.error_class,
           'last_notice_at'                         => notice.created_at.utc,
           'message'                                => notice.message,
           'resolved'                               => false,
           'resolved_at'                            => nil,
           'where'                                  => notice.where,
           "messages.#{message_digest}.value"       => notice.message,

     Mongo::Error::OperationFailure:
       WiredTigerIndex::insert: key too large to index, failing  1059 { : "Elasticsearch::Transport::Transport::Errors::InternalServerError: [500] {"error":"SearchPhaseExecutionException[Failed to execute phase [query_fetch],..." } (17280)
     # ./app/models/problem.rb:84:in `cache_notice'
     # ./spec/fabricators/notice_fabricator.rb:12:in `block (2 levels) in <top (required)>'
     # ./spec/models/notice_spec.rb:73:in `block (3 levels) in <top (required)>'
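
A minimal sketch of the character/byte gap described above (plain Ruby, not Errbit code): String#size counts characters while String#bytesize counts bytes, so any multibyte UTF-8 character makes the two diverge. Mongo's 1,024-byte limit then applies to the BSON-encoded index key, which adds some overhead on top of the raw string bytes.

message = 'é' * 500    # 500 characters, 2 bytes each in UTF-8
message.size           # => 500
message.bytesize       # => 1000  (same count as message.bytes.size, but cheaper)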

@rud
Contributor

rud commented Jul 10, 2018

An idea: the different message lengths are probably due to timestamps being slightly variable in length - milliseconds sometimes come out as 0 and are left out when rendering as a string. I've seen that elsewhere at least.
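
A hypothetical illustration of that effect (render_timestamp is made up for this sketch, not Errbit's actual serializer): a formatter that omits fractional seconds when they are zero produces timestamps of varying length.

require 'time'

# drop fractional seconds when they are zero, as some serializers do;
# the rendered string length varies as a result
def render_timestamp(t)
  t.subsec.zero? ? t.iso8601 : t.iso8601(3)
end

render_timestamp(Time.utc(2018, 7, 10, 12, 0, 0))           # => "2018-07-10T12:00:00Z"
render_timestamp(Time.utc(2018, 7, 10, 12, 0, 0, 123_000))  # => "2018-07-10T12:00:00.123Z"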

@stevecrozz force-pushed the feature/work-around-key-too-large-to-index-errors branch from af305e2 to 9a71249 on November 1, 2018 04:47
@stevecrozz
Member

I found the issue with this and added a commit to your branch, @burnettk, then rebased onto master. Will merge if it passes.

Mongo accepts index keys only if they are less than 1,024 bytes,
including some number of bytes of BSON overhead as defined by the BSON
spec. I'm not sure whether that number of bytes is predictable, and the
value of getting every last possible byte of this message into mongo
seems low since we're arbitrarily truncating anyway. So 1,000 bytes it is.
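
A hedged sketch of the approach the commit describes (MESSAGE_LENGTH_LIMIT and truncated_message are illustrative names, not necessarily the identifiers used in the actual commit):

# truncate to a fixed byte budget comfortably under mongo's 1,024-byte
# index key cap, leaving headroom for BSON overhead
MESSAGE_LENGTH_LIMIT = 1_000

def truncated_message(message)
  # byteslice cuts on a byte boundary; scrub('') drops any multibyte
  # character left broken at the cut so the result stays valid UTF-8
  message.to_s.byteslice(0, MESSAGE_LENGTH_LIMIT).scrub('')
end

truncated_message('x' * 2_000).bytesize # => 1000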
@stevecrozz force-pushed the feature/work-around-key-too-large-to-index-errors branch from 9a71249 to f49c661 on November 1, 2018 04:57
@burnettk
Contributor Author

burnettk commented Nov 1, 2018

it passed, super cool; glad to see traction on this long-lived and poorly-understood issue!

@stevecrozz merged commit d4cfec0 into errbit:master on Nov 1, 2018