
raise_record_not_found_exception #3388

Closed
2 tasks done
Jehops opened this issue May 28, 2017 · 4 comments
Labels
bug Something isn't working

Comments

@Jehops
Contributor

Jehops commented May 28, 2017

I have started noticing these exceptions in the background workers' log. It's possible, though I'm not certain, that they started after upgrading to 1.4rc5.

2017-05-28T10:53:32.639Z 85437 TID-9n3irk WARN: ActiveRecord::RecordNotFound: Couldn't find Status with 'id'=122
2017-05-28T10:53:32.639Z 85437 TID-9n3irk WARN: /usr/local/lib/ruby/gems/2.3/gems/activerecord-5.0.3/lib/active_record/relation/finder_methods.rb:353:in `raise_record_not_found_exception!'
/usr/local/lib/ruby/gems/2.3/gems/activerecord-5.0.3/lib/active_record/relation/finder_methods.rb:479:in `find_one'
/usr/local/lib/ruby/gems/2.3/gems/activerecord-5.0.3/lib/active_record/relation/finder_methods.rb:458:in `find_with_ids'
/usr/local/lib/ruby/gems/2.3/gems/activerecord-5.0.3/lib/active_record/relation/finder_methods.rb:66:in `find'
/usr/local/lib/ruby/gems/2.3/gems/activerecord-5.0.3/lib/active_record/querying.rb:3:in `find'
/usr/local/lib/ruby/gems/2.3/gems/activerecord-5.0.3/lib/active_record/core.rb:151:in `find'
/usr/local/www/mastodon/app/workers/thread_resolve_worker.rb:9:in `perform'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/processor.rb:204:in `execute_job'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/processor.rb:175:in `block (2 levels) in process'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/middleware/chain.rb:128:in `block in invoke'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-unique-jobs-5.0.8/lib/sidekiq_unique_jobs/server/middleware.rb:17:in `call'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/middleware/chain.rb:130:in `block in invoke'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/middleware/chain.rb:133:in `invoke'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/processor.rb:174:in `block in process'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/processor.rb:146:in `block (6 levels) in dispatch'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/job_retry.rb:97:in `local'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/processor.rb:145:in `block (5 levels) in dispatch'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/rails.rb:43:in `block in call'
/usr/local/lib/ruby/gems/2.3/gems/activesupport-5.0.3/lib/active_support/execution_wrapper.rb:85:in `wrap'
/usr/local/lib/ruby/gems/2.3/gems/activesupport-5.0.3/lib/active_support/reloader.rb:68:in `block in wrap'
/usr/local/lib/ruby/gems/2.3/gems/activesupport-5.0.3/lib/active_support/execution_wrapper.rb:85:in `wrap'
/usr/local/lib/ruby/gems/2.3/gems/activesupport-5.0.3/lib/active_support/reloader.rb:67:in `wrap'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/rails.rb:42:in `call'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/processor.rb:141:in `block (4 levels) in dispatch'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/processor.rb:220:in `stats'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/processor.rb:136:in `block (3 levels) in dispatch'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/job_logger.rb:8:in `call'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/processor.rb:135:in `block (2 levels) in dispatch'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/job_retry.rb:72:in `global'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/processor.rb:134:in `block in dispatch'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/logging.rb:32:in `with_context'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/processor.rb:133:in `dispatch'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/processor.rb:173:in `process'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/processor.rb:85:in `process_one'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/processor.rb:73:in `run'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/util.rb:17:in `watchdog'
/usr/local/lib/ruby/gems/2.3/gems/sidekiq-5.0.0/lib/sidekiq/util.rb:26:in `block in safe_thread'

  • I searched or browsed the repo's other issues to ensure this is not a duplicate.
  • This bug happens on a tagged release
akihikodaki added the bug label May 30, 2017
@ClearlyClaire
Contributor

ClearlyClaire commented Jun 5, 2017

I have the same issue here. That very much looks like a race condition, as the record that can't be found is created right before calling the worker.
I don't know enough about Ruby on Rails to go much further in diagnosing this, though.

I looked a bit at why it only happens in ThreadResolveWorker: it's probably not that the other workers avoid the race condition, but that they fail silently when it occurs (they rescue ActiveRecord::RecordNotFound).
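To make the contrast concrete, here is a minimal sketch of the two patterns, with hypothetical worker names rather than the actual Mastodon code:

# Hypothetical sketch -- not the actual Mastodon workers.
require 'sidekiq'

class NoisyWorker
  include Sidekiq::Worker

  def perform(status_id)
    status = Status.find(status_id) # raises ActiveRecord::RecordNotFound and ends up in the log
    # ... work with status ...
  end
end

class QuietWorker
  include Sidekiq::Worker

  def perform(status_id)
    status = Status.find(status_id)
    # ... work with status ...
  rescue ActiveRecord::RecordNotFound
    # swallowed: the same race happens here, it just never reaches the log
  end
end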

EDIT: I'm now fairly sure the workers often start running before the transactions are actually committed to the database, so this is a real bug, and not only in ThreadResolveWorker but in every worker that silently ignores this exception. Those workers need to run only after the transactions are committed, but I don't know how to do that.
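One generic Rails-level option (a sketch only, under the assumption that the worker is enqueued from a model callback; not necessarily what Mastodon does or should do) is to defer the enqueue until the transaction has committed:

# Sketch: only enqueue once the INSERT is actually visible to other connections.
# The callback placement and the thread_url attribute are hypothetical.
class Status < ApplicationRecord
  after_commit :queue_thread_resolution, on: :create

  private

  def queue_thread_resolution
    ThreadResolveWorker.perform_async(id, thread_url) if thread_url.present?
  end
end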

@ClearlyClaire
Contributor

This is still an issue, even after #3599. The problem is essentially that create_status is re-entrant: if the processed toot is a boost, ProcessFeedService#shared_status_from_xml is called from within the transaction block. That method in turn calls FetchRemoteStatusService.new.call, which re-enters the same processing code and leads to nested transactions, so the worker spawned from the inner transaction runs against uncommitted records.
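A stripped-down illustration of that failure mode (schematic names and attributes, not the real service code):

# Schematic: enqueuing from inside a nested transaction lets Sidekiq pick the
# job up before the outer COMMIT, so the worker cannot see the new row yet.
parent_url = 'https://example.com/statuses/1'   # hypothetical value
ActiveRecord::Base.transaction do               # outer create_status (the boost)
  ActiveRecord::Base.transaction do             # inner create_status (the boosted reply)
    status = Status.create!(text: 'reply')      # hypothetical attributes
    ThreadResolveWorker.perform_async(status.id, parent_url)
  end
  # The job may already be running here, but nothing is committed yet,
  # so Status.find(status.id) in the worker raises RecordNotFound.
end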

This is a bit trickier than the error fixed by #3599, and I'm not completely sure how to handle it, nor do I have the time to look into it right now.

@Gargron
Member

Gargron commented Jun 7, 2017

@ClearlyClaire
Contributor

Actually, I think I have an idea of how to handle that, but it'll have to wait a few hours until I'm back from $dayjob.

ClearlyClaire added a commit to ClearlyClaire/mastodon that referenced this issue Jun 7, 2017
…ore transaction block

Steps to reproduce the original issue:
1. Have two remote accounts, A that you don't follow, and B that you follow.
2. Have A post a toot and reply to it.
3. Boost A's reply from remote account B.

This used to cause the local instance to get A's reply but fail to link it to
the original post.
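Judging from the commit title, the fix appears to pull the remote fetch out in front of the transaction. A rough sketch of that general pattern (illustrative method and attribute names, not the actual diff):

# Sketch of the general pattern: resolve the boosted status before opening the
# transaction, so FetchRemoteStatusService and any workers it spawns never run
# inside an uncommitted transaction. Names below are illustrative.
def create_status(xml, account)
  boosted_status = shared_status_from_xml(xml)   # network fetch happens up front

  ActiveRecord::Base.transaction do
    Status.create!(account: account, reblog: boosted_status, uri: extract_uri(xml))
  end
end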
Gargron closed this as completed in 28d2920 Jun 7, 2017
koteitan pushed a commit to koteitan/mastodon that referenced this issue Jun 25, 2017
…ore transaction block (mastodon#3622)