
FrozenError on shutdown #2065

Closed
ryansch opened this issue Jun 9, 2023 · 6 comments


ryansch commented Jun 9, 2023

Description

During agent shutdown I occasionally get this error:

D, [2023-06-09T09:24:08.275619 #5] DEBUG -- sentry: Shutting down background worker
D, [2023-06-09T09:24:08.285696 #5] DEBUG -- sentry: Killing session flusher
** [NewRelic][2023-06-09 09:24:08 -0500 run.* (5)] INFO : Starting Agent shutdown
/app/vendor/bundle/ruby/3.1.0/gems/newrelic_rpm-9.2.2/lib/new_relic/agent/agent_helpers/start_worker_thread.rb:48:in `join': can't modify frozen fatal: #<fatal: exception reentered> (FrozenError)
  from /app/vendor/bundle/ruby/3.1.0/gems/newrelic_rpm-9.2.2/lib/new_relic/agent/agent_helpers/start_worker_thread.rb:48:in `stop_event_loop'
  from /app/vendor/bundle/ruby/3.1.0/gems/newrelic_rpm-9.2.2/lib/new_relic/agent/agent_helpers/shutdown.rb:16:in `shutdown'
  from /app/vendor/bundle/ruby/3.1.0/gems/newrelic_rpm-9.2.2/lib/new_relic/agent/agent_helpers/special_startup.rb:69:in `block in install_exit_handler'
bin/rails: exception reentered (fatal)

Expected Behavior

I expected the agent to shut down without issue.

Your Environment

This is happening during a postdeploy script on Heroku.
I can't help but wonder if this is an issue with having both sentry and newrelic installed.


ryansch added the bug label Jun 9, 2023
github-actions bot added the community label Jun 9, 2023
tannalynn (Contributor) commented:

Thank you for bringing this to our attention, @ryansch. We'll take a look and see if we can reproduce this issue. What version of Sentry are you using? Any other information about your environment would also be helpful for reproduction purposes.


ryansch commented Jun 14, 2023

@tannalynn We're running the sentry gems (sentry-rails, sentry-ruby, sentry-sidekiq) at version 5.9.0.

We're currently on Rails 6.1.7.3 on Ruby 3.1.2.

kaylareopelle self-assigned this Jun 27, 2023
kaylareopelle (Contributor) commented:

Hi @ryansch, thanks for sharing your environment information.

I created an application using the specifications you shared but wasn't able to reproduce the error.

I think there might be something in your environment that's missing from my reproduction, or something in the postdeploy script that exercises code outside the scope of my test.

Could you try to reproduce the error using https://github.com/kaylareopelle/issue_2065_repro as a starting point?

kaylareopelle (Contributor) commented:

Hi @ryansch! Checking back in on this issue. Have you had a chance to try out the repro?

fallwith (Contributor) commented:

Hi @ryansch. We're going to close this issue for now. If you have any luck in the future with the reproduction referenced earlier:

Could you try to reproduce the error using https://github.com/kaylareopelle/issue_2065_repro as a starting point?

or determine any other way for us to reproduce the problem, please let us know. You can re-open this issue or create a new one.

The final line of your trace references lib/new_relic/agent/agent_helpers/start_worker_thread.rb:48, which performs a Thread#join operation. From the Ruby docs on Thread exception handling:

When an unhandled exception is raised inside a thread, it will terminate. By default, this exception will not propagate to other threads. The exception is stored and when another thread calls value or join, the exception will be re-raised in that thread.
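
A minimal plain-Ruby sketch of that behavior (not the agent's actual worker code):

```ruby
# An exception raised inside a thread is stored when the thread dies and is
# only re-raised in the calling thread at Thread#join time.
Thread.report_on_exception = false # silence the default dead-thread warning for this demo

t = Thread.new { raise "boom inside the worker thread" }
sleep 0.1 # the thread has already terminated by now

begin
  t.join # the stored RuntimeError is re-raised here, in the joining thread
rescue => e
  puts "join re-raised: #{e.class}: #{e.message}"
end
```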

So it looks like the Thread#join call surfaces the exception that occurred earlier in the thread. Unfortunately, we don't have a lot to go on as to what caused that initial exception. For an exception related to an attempt to modify a frozen object, we'd expect to see the class name referenced. For example:

can't modify frozen Array
can't modify frozen String

but in your issue description we have:

can't modify frozen fatal

This seems to suggest that the error text regarding the frozen object is getting mangled / overwritten by additional error text, so the underlying frozen object issue is obfuscated.
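
For contrast, a typical FrozenError names the frozen object's class and, on Ruby 2.7+, also exposes the object itself (a minimal sketch, unrelated to the agent):

```ruby
# Typical FrozenError: the message names the frozen object's class, and
# FrozenError#receiver (Ruby 2.7+) returns the frozen object itself.
begin
  "abc".freeze << "d"
rescue FrozenError => e
  puts e.message   # => can't modify frozen String: "abc"
  p e.receiver     # => "abc"
end
```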

So I think we have two objectives here (a possible debugging approach is sketched after this list):

  1. Determine what the frozen object is
  2. Determine what bit of code is attempting to modify it
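
One possible way to chase both at once (a hypothetical debugging snippet, not something the agent ships) is to log every FrozenError at the moment it is raised, before the exit-handler join re-raises and obscures it:

```ruby
# Hypothetical debugging aid: report each FrozenError at raise time, including
# the frozen object's class and the raise site. Enable it early in boot
# (e.g. from an initializer) and run the postdeploy script once.
TracePoint.new(:raise) do |tp|
  e = tp.raised_exception
  next unless e.is_a?(FrozenError)

  frozen_class = begin
    e.receiver.class
  rescue ArgumentError # receiver may be unavailable for manually raised errors
    "(unknown)"
  end

  warn "FrozenError on #{frozen_class} at #{tp.path}:#{tp.lineno}"
  warn caller.join("\n")
end.enable
```

Running the postdeploy script with that in place should reveal both the frozen object's class and the line of code attempting to modify it.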
