
[Critical]: Registered downtime 2018-04-26T14:41:09 due to RangeError: Maximum call stack size exceeded #73

Closed
MathBunny opened this issue Apr 26, 2018 · 4 comments

@MathBunny
Owner

Received a user report that the segment and ride search pages were down and were forwarding users to the Strava OAuth page. Restarting the Node.js process seemed to resolve the issue, but it may occur again.

In the future, basic QC checks at run-time would be appropriate to verify core functionality.

Looking at the logs:

```
  exports.active = process.domain = this;
                                  ^

RangeError: Maximum call stack size exceeded
    at Domain.enter (domain.js:167:35)
    at bound (domain.js:299:8)
    at Domain.runBound (domain.js:314:12)
    at emitOne (events.js:115:13)
    at Domain.emit (events.js:210:7)
    at Domain._errorHandler (domain.js:118:23)
    at process._fatalException (bootstrap_node.js:326:33)
    at Domain._errorHandler (domain.js:145:26)
    at process._fatalException (bootstrap_node.js:326:33)
    at Domain._errorHandler (domain.js:145:26)
npm ERR! code ELIFECYCLE
npm ERR! errno 7
npm ERR! stravawindanalysis@0.0.0 start: `node ./bin/www`
npm ERR! Exit status 7
npm ERR!
npm ERR! Failed at the stravawindanalysis@0.0.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/ubuntu/.npm/_logs/2018-04-26T23_34_53_425Z-debug.log
```
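The repeating `Domain._errorHandler` → `process._fatalException` frames suggest the domain error handler kept re-entering itself until V8's stack limit was hit. A minimal illustration of the same error class, via unbounded recursion:

```javascript
// Each call adds a stack frame; with no base case, V8 eventually
// throws RangeError: Maximum call stack size exceeded.
function recurse() {
  return recurse();
}

try {
  recurse();
} catch (err) {
  console.log(err instanceof RangeError); // true
  console.log(err.message);               // Maximum call stack size exceeded
}
```

The repro only shows the error class; in the log above the recursion comes from the error handler itself failing, not from application code.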
@MathBunny MathBunny added the bug label Apr 26, 2018
@MathBunny MathBunny self-assigned this Apr 26, 2018
@MathBunny
Owner Author

On further inspection, it seems like Redis is having trouble persisting. There isn't enough RAM on the AWS instance ...

@MathBunny
Owner Author

The same issue is happening again.

@MathBunny
Owner Author

MathBunny commented Apr 28, 2018

Restarted the EC2 instance, resulting in the site being down for 1 minute and 1 second.

However, the redis-server process still appears to be running.

@MathBunny
Owner Author

Fixed the issue by running `sudo /etc/init.d/redis-server stop`. Memory usage is now low.
