Redis connection to *** failed - connect ECONNREFUSED #88

Node v0.10.22 on Heroku
Newrelic 1.0.x (I will upgrade to 1.1.x if it matters)

I'm not sure if this is an issue with New Relic itself.
It seems like the connection to my remote Redis instance dropped for about two seconds. That in itself is not a problem; maybe the network had a hiccup.
What does concern me, though, is that over the course of those two seconds I got hundreds of these log entries with basically zero load on the server, which means that something else was going crazy over this, and I assume it was newrelic.
Comments
Hi @Prinzhorn, it's hard to tell immediately if this is caused by the newrelic module or not. Due to the way the module instruments code, it will always end up in an error stack trace somewhere. Having said that, we are definitely interested in making sure the module is not interacting with your product in a negative way, so I am not ruling anything out from the start. If you're willing to do a little troubleshooting we may be able to hunt down what's going wrong.
We provide a detailed log of what the agent is doing, but on Heroku there is no persistent file system, so I believe we stream the log to your […]
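To make that concrete, here is a minimal sketch of a newrelic.js with agent logging directed to stdout, where Heroku's log router (heroku logs --tail) can pick it up. The logging.filepath: 'stdout' value and the overall config shape are assumptions based on the Node agent's documented configuration of that era, not something stated in this thread.

```js
// newrelic.js -- a sketch, assuming the agent's logging.filepath
// setting accepts the special value 'stdout'.
exports.config = {
  app_name    : ['My App (Production)'],
  license_key : 'license key here',
  logging : {
    level    : 'trace',   // most verbose; useful while troubleshooting
    filepath : 'stdout'   // no persistent filesystem on Heroku
  }
};
```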
I will try to create a test case locally. I don't want to mess with the production env in any way. But on the other hand, will newrelic work locally? I don't want the stats to be mixed with the production stats.
The data generated by the newrelic module is segmented by application name and license key. If you give your local application a different name from production, it will show up as a separate application in the web UI. For example, in newrelic.js:

```js
var APP_NAME;
if (process.env.NODE_ENV === 'production') {
  APP_NAME = 'My App (Production)';
} else {
  APP_NAME = 'My App (Development)';
}

exports.config = {
  app_name    : [APP_NAME],
  license_key : 'license key here'
};
```
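With a config like this, running the app locally with NODE_ENV unset (or set to anything other than 'production') reports under 'My App (Development)', so local traffic never mixes into the production application's stats.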
Never mind, the underlying error was caused by redis/node-redis#457
I'm reopening this because I just noticed the following: in those few seconds where the Redis instance was down, New Relic says I had 14.3k rpm (I'm not sure what the peak in this particular minute was; about 29k, I guess, from the response-time graph). It's usually orders of magnitude less than that. I know that this was caused by an infinite loop (I didn't exit the process in the […]
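For readers hitting the same thing, here is a sketch of the pattern being described, under the assumption that node_redis of that era keeps retrying the connection and emits 'error' on each failed attempt unless the client is stopped. The max_attempts and retry_max_delay options are assumptions based on node_redis ~0.8's documented client options; the host, port, and values are illustrative.

```js
var redis = require('redis');

// Bounding reconnection is what breaks the loop; max_attempts and
// retry_max_delay are assumed from node_redis ~0.8's client options.
var client = redis.createClient(6379, 'redis.example.com', {
  max_attempts    : 10,          // stop retrying after 10 failures
  retry_max_delay : 30 * 1000    // cap the backoff at 30 seconds
});

client.on('error', function (err) {
  // ECONNREFUSED lands here once per failed attempt. If the process
  // just logs and keeps running (as in the scenario above), the client
  // reconnects again and again, producing hundreds of log entries.
  console.error('redis error:', err.message);
});
```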
New Relic isn't originating any of these requests; it's just observing them. The default […] I'm not 100% sure, but I think the error-count metric inside New Relic may count towards the RPM figure, which would account for the request spike you saw. I'm still working out with the team responsible for that part of the application whether that's the case. If so, it's a little confusing, but it's not really a bug in the Node module, just an inconsistency between how New Relic thinks about requests and how things happen in an asynchronously concurrent environment like Node's. (Not the first area in which this has happened.) Unless you've got a reproducible test case we can look at, there's nothing we can act on here. Either way, thanks for the information!
I should have mentioned that the error rate/count was zero during this period. That's because during the processing of the […]
I can try to create one by forcing a type error like […]
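A hypothetical way to build such a test case, since the comment is truncated here: deliberately throw a TypeError from inside a Redis callback, which is the kind of code path the agent instruments. Everything in this sketch (the key name, the property access) is illustrative.

```js
var redis  = require('redis');
var client = redis.createClient();

client.get('some-key', function (err, value) {
  // Deliberate TypeError: reading a property of undefined throws,
  // exercising the agent's error handling inside an instrumented
  // callback.
  var nothing;
  nothing.boom;
});
```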
If your process is caught in a tight loop of […]

Just FYI, v1.2.0 of New Relic for Node uses a completely different, much more unobtrusive mechanism for error handling that should deal with this kind of weirdness better. If you haven't given it a spin yet, please do!
I definitely did not get 14k requests within one second. So something must have been sent to New Relic.
I will upgrade within the next few days!