All Requests Receive Timeout Error #1855

Closed
jgaull opened this issue May 20, 2016 · 83 comments

@jgaull

jgaull commented May 20, 2016

Environment Setup

  • Server: parse-server 2.2.10 (also tested on 2.2.9), node 5.11.1, heroku
  • Database: mLab mongoDB 3.0.9

Steps to reproduce

I'm sorry this sucks, but we have not been able to narrow down the steps any more than this:

  1. Upload build to server
  2. Wait 5-15 minutes

Result: All requests begin timing out.

Code

Please note, this code does not appear to cause the problem as timeouts have begun without this ever running. However, I've been using this function when troubleshooting.

    console.log('running create!');
    //configure parameters
    var user = request.user;

    var title = request.params.title; //required
    var imageData = request.params.imageData; //required
    var videoData = request.params.videoData; //optional
    var content = request.params.content; //optional
    var attachment = request.params.attachment; //optional always URL

    //create a new story
    var newStory = new Story();
    //set the default parameters
    newStory.set("user", user);
    newStory.set("inspireCount", 0);
    newStory.set("commentCount", 0);
    newStory.set("viewCount", 0);
    newStory.set("notificationsOptOut", []);
    newStory.set("archived", false);
    newStory.set("toAllUsers", true);
    //add params that have been passed in
    newStory.set("title", title);
    newStory.set("content", content);
    newStory.set("attachment", attachment);
    //get the user's name and set it to the story
    var creatorName = user.get("name");
    newStory.set("creatorName", creatorName);

    //set the ACL
    newStory.setACL(createACLForStory(newStory));

    //this will be used to store promises when saving files
    var mediaFileSavePromises = [];
    //create image
    var image = new Parse.File("Image.jpg", imageData, "image/jpg");
    //save the image and push the promise into an array of promises
    mediaFileSavePromises.push(image.save());
    newStory.set("imageFile", image);

    //if there is a video
    if (videoData) {
        //create a file
        var video = new Parse.File("Video.mp4", videoData, "video/mp4");
        //save the file and push the promise onto the array
        mediaFileSavePromises.push(video.save());
        newStory.set("videoFile", video);
    }

    Parse.Promise.when(mediaFileSavePromises).then(function (mediaFiles) {
        //save the story
        console.log("media saved");
        return newStory.save();

    })
//more stuff. But never runs after timeouts start
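
As a troubleshooting aid, a rejection handler on the final step would at least surface a failure in the logs. This is only a sketch; the response object and its success/error calls are assumptions, since the surrounding Parse.Cloud.define wrapper isn't shown above:

    Parse.Promise.when(mediaFileSavePromises).then(function () {
        console.log("media saved");
        return newStory.save();
    }).then(function (story) {
        //hypothetical: report success back to the client
        response.success(story);
    }, function (error) {
        //log the failure so a stalled or rejected save shows up in VERBOSE output
        console.error("story save failed: " + JSON.stringify(error));
        response.error(error);
    });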

Logs/Trace

Once timeouts begin I'm seeing this with VERBOSE enabled:

2016-05-20T20:18:26.942301+00:00 app[web.1]: verbose: GET /parse/classes/ClinkComment { host: '****.herokuapp.com',
2016-05-20T20:18:26.942311+00:00 app[web.1]:   connection: 'close',
2016-05-20T20:18:26.942313+00:00 app[web.1]:   'user-agent': 'node-XMLHttpRequest, Parse/js1.8.5 (NodeJS 5.11.1)',
2016-05-20T20:18:26.942313+00:00 app[web.1]:   accept: '*/*',
2016-05-20T20:18:26.942314+00:00 app[web.1]:   'content-type': 'text/plain',
2016-05-20T20:18:26.942317+00:00 app[web.1]:   'x-request-id': '9a7593e2-87c6-4e48-bc56-d1f9d05a0177',
2016-05-20T20:18:26.942318+00:00 app[web.1]:   'x-forwarded-for': '54.242.116.62',
2016-05-20T20:18:26.942319+00:00 app[web.1]:   'x-forwarded-proto': 'https',
2016-05-20T20:18:26.942319+00:00 app[web.1]:   'x-forwarded-port': '443',
2016-05-20T20:18:26.942320+00:00 app[web.1]:   via: '1.1 vegur',
2016-05-20T20:18:26.942320+00:00 app[web.1]:   'connect-time': '2',
2016-05-20T20:18:26.942321+00:00 app[web.1]:   'x-request-start': '1463775506929',
2016-05-20T20:18:26.942322+00:00 app[web.1]:   'total-route-time': '0',
2016-05-20T20:18:26.942322+00:00 app[web.1]:   'content-length': '349' } {
2016-05-20T20:18:26.942323+00:00 app[web.1]:   "where": {
2016-05-20T20:18:26.942324+00:00 app[web.1]:     "clink": {
2016-05-20T20:18:26.942324+00:00 app[web.1]:       "__type": "Pointer",
2016-05-20T20:18:26.942325+00:00 app[web.1]:       "className": "Clink",
2016-05-20T20:18:26.942325+00:00 app[web.1]:       "objectId": "B1vNrvL9sC"
2016-05-20T20:18:26.942326+00:00 app[web.1]:     }
2016-05-20T20:18:26.942327+00:00 app[web.1]:   },
2016-05-20T20:18:26.942327+00:00 app[web.1]:   "include": "user",
2016-05-20T20:18:26.942328+00:00 app[web.1]:   "limit": 25,
2016-05-20T20:18:26.942328+00:00 app[web.1]:   "order": "-createdAt"
2016-05-20T20:18:26.942329+00:00 app[web.1]: }
2016-05-20T20:18:26.696219+00:00 heroku[router]: at=error code=H12 desc="Request timeout" method=POST path="/parse/classes/ClinkComment" host=***.herokuapp.com request_id=dcac6a25-54f5-42b0-b15e-dd43556a7a4a fwd="54.242.116.62" dyno=web.1 connect=1ms service=30000ms status=503 bytes=0
2016-05-20T20:18:31.243011+00:00 heroku[router]: at=error code=H12 desc="Request timeout" method=POST path="/parse/functions/fetchStories" host=***.herokuapp.com request_id=3d6c2b02-57f4-430c-b836-54563d5ace3f fwd="108.212.64.230" dyno=web.1 connect=1ms service=30000ms status=503 bytes=0
2016-05-20T20:18:31.260092+00:00 heroku[router]: at=error code=H12 desc="Request timeout" method=POST path="/parse/classes/Clink" host=***.herokuapp.com request_id=0fbf25bd-2def-43a3-81c3-c0e875950825 fwd="54.242.116.62" dyno=web.1 connect=1ms service=30000ms status=503 bytes=0

If I wait around long enough I see this error:

2016-05-20T19:19:53.738611+00:00 app[web.1]: error! Received an error with invalid JSON from Parse: <!DOCTYPE html>
2016-05-20T19:19:53.738637+00:00 app[web.1]:     <html>
2016-05-20T19:19:53.738638+00:00 app[web.1]:     <head>
2016-05-20T19:19:53.738639+00:00 app[web.1]:       <meta name="viewport" content="width=device-width, initial-scale=1">
2016-05-20T19:19:53.738640+00:00 app[web.1]:       <style type="text/css">
2016-05-20T19:19:53.738641+00:00 app[web.1]:         html, body, iframe { margin: 0; padding: 0; height: 100%; }
2016-05-20T19:19:53.738643+00:00 app[web.1]:         iframe { display: block; width: 100%; border: none; }
2016-05-20T19:19:53.738643+00:00 app[web.1]:       </style>
2016-05-20T19:19:53.738645+00:00 app[web.1]:     <title>Application Error</title>
2016-05-20T19:19:53.738645+00:00 app[web.1]:     </head>
2016-05-20T19:19:53.738646+00:00 app[web.1]:     <body>
2016-05-20T19:19:53.738647+00:00 app[web.1]:       <iframe src="//s3.amazonaws.com/heroku_pages/error.html">
2016-05-20T19:19:53.738647+00:00 app[web.1]:         <p>Application Error</p>
2016-05-20T19:19:53.738648+00:00 app[web.1]:       </iframe>
2016-05-20T19:19:53.738648+00:00 app[web.1]:     </body>
2016-05-20T19:19:53.738649+00:00 app[web.1]:     </html>

Tested These Things

  • Spun up a new server instance. Still saw issue.
  • Waited until issue started and visited https://my-app.herokuapp.com/ and saw "I dream of being a website"
  • Added console.log statements in cloud code (above). Cloud functions get called even when requests are timing out. PFFiles are successfully saved. Nothing happens after newStory.save().
  • Enabled "log-runtime-metrics". Memory usage was stable.
  • Restarted server. This solves the issue temporarily.
  • Rolled back to a previous deploy. This solved the issue. Diffed code between releases and saw only trivial changes. A few console.logs.

I'm very much at a loss for how to continue troubleshooting this problem. Thanks so much for your help!

@drew-gross
Contributor

You are receiving an AWS error, which seems to indicate an issue with your AWS config. If you run the server locally, do you get the same issue?
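
(For reference, a minimal local setup sketch; the app ID, master key, and database name below are placeholders, not values from this thread:)

    // index.js - minimal local parse-server; run with: node index.js
    var express = require('express');
    var ParseServer = require('parse-server').ParseServer;

    var app = express();
    app.use('/parse', new ParseServer({
        databaseURI: 'mongodb://localhost:27017/dev', // local mongod
        appId: 'myAppId',                             // placeholder
        masterKey: 'myMasterKey',                     // placeholder
        serverURL: 'http://localhost:1337/parse'
    }));
    app.listen(1337, function () {
        console.log('parse-server running at http://localhost:1337/parse');
    });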

@jgaull
Author

jgaull commented May 20, 2016

Thanks @drew-gross! I'm not set up to run the server locally. I can work on this.

Is it possible that this AWS error is related to S3Adapter? As far as I know that's the only AWS service we integrate with.

@drew-gross
Contributor

That is possible, yes. Possibly if you have an extremely large file and are trying to proxy it through a very small heroku instance or something.
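
(If proxying files through the dyno is the concern, the S3 adapter's directAccess option makes clients fetch files straight from S3 instead; a sketch with placeholder credentials, assuming the S3Adapter bundled with parse-server:)

    var ParseServer = require('parse-server').ParseServer;
    var S3Adapter = require('parse-server').S3Adapter;

    var api = new ParseServer({
        databaseURI: 'mongodb://localhost:27017/dev',  // placeholder
        appId: 'myAppId',                              // placeholder
        masterKey: 'myMasterKey',                      // placeholder
        serverURL: 'http://localhost:1337/parse',
        filesAdapter: new S3Adapter(
            'AWS_ACCESS_KEY',                          // placeholder
            'AWS_SECRET_KEY',                          // placeholder
            'my-bucket',                               // placeholder
            { directAccess: true }                     // file URLs point at S3, not the dyno
        )
    });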

@jgaull
Author

jgaull commented May 20, 2016

@drew-gross largest file in S3 is 7.7 MB. Should be fine. It looks like #1854 describes exactly what our team is experiencing.

@jadsonlourenco

I'm not sure about the cause, but before upgrading to 2.2.10 everything worked fine, locally and on the remote server (same setup). I'm using Google Cloud Storage for files, so I don't think it's an issue with server load or the proxy; before the upgrade everything worked, including fetching files... But I don't know.

I don't think it's MongoDB either, because I can still access the parse-server database normally using MongoChef.

Also, after restarting the parse-server Docker container it goes back to normal, but then goes down again. That suggests it's something related to parse-server.

I'm waiting for the logs; I'll paste them here as soon as parse-server goes down again.

@jadsonlourenco

Ok, after a few minutes the server stopped responding, but the logs (VERBOSE) show nothing. It could be related to the database; I'm installing the database locally now to see its logs too.
Just to share ☺️

@drew-gross
Contributor

drew-gross commented May 21, 2016

@jgaull If your ClinkComment class is large and there is no index on createdAt then mongo may be forced to do a full table scan which takes too long. This is why my suggestion was to check the mongo logs for long running queries.
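
(A sketch of both checks in the mongo shell; it assumes parse-server's default field mapping, where createdAt is stored as _created_at, and an arbitrary 100 ms profiler threshold:)

    // add an index so the order: "-createdAt" query doesn't scan the whole collection
    db.ClinkComment.createIndex({ _created_at: -1 })

    // log operations slower than 100 ms, then inspect the most recent ones
    db.setProfilingLevel(1, 100)
    db.system.profile.find().sort({ ts: -1 }).limit(5).pretty()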

@dcdspace

@drew-gross do you have any ideas on why this happened only after updating, and why it can be temporarily fixed for under 15 minutes by resetting the dyno?

@davide-scalzo

I'm experiencing the same issue, however I deploy on EB. Here's the Nginx error log after I switched to a single instance:

-------------------------------------
/var/log/nginx/error.log
-------------------------------------
2016/05/21 02:13:57 [warn] 3409#0: duplicate MIME type "text/html" in /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf:42
2016/05/21 02:25:14 [error] 3432#0: *6 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 75.165.113.35, server: , request: "POST /api/classes/Brand HTTP/1.1", upstream: "http://127.0.0.1:8081/api/classes/Brand", host: "www.example.com"
2016/05/21 02:25:14 [error] 3432#0: *7 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 75.165.113.35, server: , request: "POST /api/classes/Review HTTP/1.1", upstream: "http://127.0.0.1:8081/api/classes/Review", host: "www.example.com"
2016/05/21 02:25:14 [error] 3432#0: *5 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 75.165.113.35, server: , request: "POST /api/classes/Location HTTP/1.1", upstream: "http://127.0.0.1:8081/api/classes/Location", host: "www.example.com"
2016/05/21 02:25:14 [error] 3432#0: *8 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 75.165.113.35, server: , request: "POST /api/classes/Review HTTP/1.1", upstream: "http://127.0.0.1:8081/api/classes/Review", host: "www.example.com"

Otherwise with auto-scaling there is a 111 error:

failed (111: Connection refused) while connecting to upstream

@jgaull
Author

jgaull commented May 21, 2016

@drew-gross There are only 232 records in the table. I can double check that there's an index tomorrow.

@drew-gross
Contributor

No, I don't have any ideas unfortunately, which is why I'm looking for logs. There weren't actually very many changes in the recent versions of Parse Server, and looking at them I don't see anything that seems relevant.

@drew-gross
Contributor

@jgaull 232 records wouldn't be enough to cause issues even if a tablescan is going on. Checking the mongo logs for slow queries would still be useful though.

@johndrewing

@drew-gross I can still see the data showing on the parse.com dashboard (I already migrated the data), but it's not showing from my server.

@davide-scalzo

@jadsonlourenco @jgaull are the three of us all on the mongoLab sandbox plan?

@jadsonlourenco

@davodesign84 yes and no. I did a test with mLab, but I'm using Docker (https://hub.docker.com/r/jadsonlourenco/mongo-rocks/), and I also did a test using a local mongo server (3.2) on OS X and got the same bug.

@davide-scalzo

So, I launched a new mongo deployment via Cloud Manager on AWS, same issue. Mongo logs seem pretty good:

2016-05-21T09:00:47.896+0000 I NETWORK [conn307] end connection 127.0.0.1:34507 (2 connections now open)
2016-05-21T09:00:37.838+0000 I ACCESS [conn307] Successfully authenticated as principal __system on local
2016-05-21T09:00:37.826+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:34507 #307 (3 connections now open)
2016-05-21T09:00:37.825+0000 I NETWORK [conn306] end connection 127.0.0.1:34503 (2 connections now open)
2016-05-21T09:00:36.751+0000 I ACCESS [conn139] Successfully authenticated as principal mms-monitoring-agent on admin
2016-05-21T08:59:37.043+0000 I ACCESS [conn139] Successfully authenticated as principal mms-monitoring-agent on admin
2016-05-21T08:59:29.042+0000 I ACCESS [conn139] Successfully authenticated as principal mms-monitoring-agent on admin
2016-05-21T08:59:27.463+0000 I ACCESS [conn300] Successfully authenticated as principal __system on local
2016-05-21T08:59:27.451+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:34486 #300 (3 connections now open)
2016-05-21T08:59:27.451+0000 I NETWORK [conn299] end connection 127.0.0.1:34483 (2 connections now open)
2016-05-21T08:59:21.042+0000 I ACCESS [conn139] Successfully authenticated as principal mms-monitoring-agent on admin
2016-05-21T08:59:17.406+0000 I ACCESS [conn299] Successfully authenticated as principal __system on local
2016-05-21T08:59:17.394+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:34483 #299 (3 connections now open)
2016-05-21T08:59:17.393+0000 I NETWORK [conn298] end connection 127.0.0.1:34481 (2 connections now open)
2016-05-21T08:59:13.043+0000 I ACCESS [conn139] Successfully authenticated as principal mms-monitoring-agent on admin
2016-05-21T08:59:07.340+0000 I ACCESS [conn298] Successfully authenticated as principal __system on local
2016-05-21T08:59:07.328+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:34481 #298 (3 connections now open)
2016-05-21T08:59:07.327+0000 I NETWORK [conn297] end connection 127.0.0.1:34480 (2 connections now open)
2016-05-21T08:59:05.043+0000 I ACCESS [conn139] Successfully authenticated as principal mms-monitoring-agent on admin
2016-05-21T08:58:57.295+0000 I ACCESS [conn297] Successfully authenticated as principal __system on local
2016-05-21T08:58:57.283+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:34480 #297 (3 connections now open)
2016-05-21T08:58:57.282+0000 I NETWORK [conn296] end connection 127.0.0.1:34475 (2 connections now open)
2016-05-21T08:58:57.043+0000 I ACCESS [conn139] Successfully authenticated as principal mms-monitoring-agent on admin
2016-05-21T08:58:47.230+0000 I ACCESS [conn296] Successfully authenticated as principal __system on local
2016-05-21T08:58:47.218+0000 I NETWORK [initandlisten] connection accepted from 127.0.0.1:34475 #296 (3 connections now open)

but nginx logs are not happy:

2016/05/21 08:35:17 [warn] 3515#0: duplicate MIME type "text/html" in /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf:42
2016/05/21 08:57:56 [error] 3533#0: *328 upstream prematurely closed connection while reading response header from upstream, client: 172.31.63.230, server: , request: "GET /api/classes/Brand HTTP/1.1", upstream: "http://127.0.0.1:8081/api/classes/Brand", host: "api-production-dramsclub.us-east-1.elasticbeanstalk.com"
2016/05/21 08:57:56 [error] 3533#0: *328 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 172.31.63.230, server: , request: "GET /api/classes/Brand HTTP/1.1", upstream: "http://127.0.0.1:8081/api/classes/Brand", host: "api-production-dramsclub.us-east-1.elasticbeanstalk.com"
2016/05/21 08:57:58 [warn] 4111#0: duplicate MIME type "text/html" in /etc/nginx/conf.d/00_elastic_beanstalk_proxy.conf:42
2016/05/21 08:57:59 [error] 4116#0: *2 connect() failed (111: Connection refused) while connecting to upstream, client: 172.31.63.230, server: , request: "GET /api/classes/Brand HTTP/1.1", upstream: "http://127.0.0.1:8081/api/classes/Brand", host: "api-production-dramsclub.us-east-1.elasticbeanstalk.com"

@davide-scalzo

davide-scalzo commented May 21, 2016

Interestingly the access log shows a lot of POST requests

172.31.63.230 - - [21/May/2016:08:57:56 +0000] "GET /api/classes/Brand HTTP/1.1" 502 574 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36" "82.2.89.115"
172.31.63.230 - - [21/May/2016:08:57:59 +0000] "GET /api/classes/Brand HTTP/1.1" 502 574 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36" "82.2.89.115"
172.31.63.230 - - [21/May/2016:08:58:03 +0000] "GET /api/classes/Brand HTTP/1.1" 200 14212 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36" "82.2.89.115"
172.31.51.62 - - [21/May/2016:08:59:08 +0000] "GET /api/classes/Brand HTTP/1.1" 200 14212 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36" "82.2.89.115"
172.31.63.230 - - [21/May/2016:08:59:15 +0000] "GET /api/classes/Review HTTP/1.1" 200 11414 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36" "82.2.89.115"
172.31.51.62 - - [21/May/2016:09:00:19 +0000] "POST /api/classes/Location HTTP/1.1" 200 1512 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:19 +0000] "POST /api/classes/Brand HTTP/1.1" 200 51770 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:19 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:21 +0000] "POST /api/classes/Review HTTP/1.1" 200 31561 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.63.230 - - [21/May/2016:09:00:35 +0000] "GET /api/classes/Brand HTTP/1.1" 200 14212 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36" "82.2.89.115"
172.31.51.62 - - [21/May/2016:09:00:41 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:41 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:41 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:41 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:42 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:42 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:42 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:42 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:42 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:42 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:53 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:53 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:53 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:53 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:53 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:53 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:53 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:53 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:55 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:00:55 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216"
172.31.51.62 - - [21/May/2016:09:01:08 +0000] "POST /api/classes/Review HTTP/1.1" 200 14 "-" "okhttp/2.5.0" "77.154.204.216

okhttp should be my react-native Android client, which hasn't been changed in the last 3-4 weeks... but I'll redeploy with the verbose flag and see how many requests are actually received by node.

@jadsonlourenco

Well, for me the Mongo and parse-server logs show nothing strange, just normal info. One thing I noticed: I deleted all the extra classes, kept only the default classes, and removed all content. In that case, with a standard database, parse-server does not stop after a few minutes. I think this issue is related to parse-server, triggered by some query, but I don't know where to look for it, because the logs don't show any error. I don't think it's in the database either, which also showed nothing unusual in its logs.

@DevJoghurt

I'm facing the same issue with modulus.io. Nothing found in logs. Switching back to older versions of parse-server does not solve the problem

@jadsonlourenco

Tests report:

  • OS: Linux (Debian), OSX (10.11.5);
  • Node: 5, 6.1, 6.2;
  • Mongo: 3.0 (rocksdb), 3.2 (default);
  • Parse-server: 2.2.7, 2.2.9, 2.2.10;
  • Parse-dashboard: 1.0.11;
  • Database: "standard" and "custom" classes, small db at all;
  • Files: Google Cloud Storage, small files 1mb.
  • Client: IOS SDK 1.13.0

The parse-server goes down a few minutes after I run the mobile app; at first I can fetch the content without error. Before running the mobile app, the mongo logs show:

2016-05-21T11:01:06.319+0000 I NETWORK  [initandlisten] connection accepted from 172.20.0.3:33286 #73 (14 connections now open)
2016-05-21T11:01:06.376+0000 I ACCESS   [conn73] Successfully authenticated as principal parse on parse

But, as I said, after running the app and fetching the content from parse-server correctly, the server goes down and the mongo logs stop showing this message; they don't show anything from parse-server any more...

@davide-scalzo

davide-scalzo commented May 21, 2016

I'm seeing a similar pattern: a lot of duplicated requests reaching nginx but only one reaching node, also from CFNetwork.

  • Tested Mongolab, AWS with Cloud Manager and locally.

@apvlv

apvlv commented May 21, 2016

same here

Environment Setup

Server: parse-server 2.2.10, AWS Beanstalk
Database: mLab mongoDB 3.0.9
Node Version: 4.4.3
Client is a react-native app.

I have only 50+ records in the database and only try to read them. After the first or second successful request, there is a timeout and the latency in the AWS Monitoring goes to 60 sec...

@jadsonlourenco

Update: I tried without files, removed all the file columns, but got the same issue...

@jadsonlourenco

jadsonlourenco commented May 21, 2016

Update 2: If my app has two collections to fetch, parse-server goes down (the current issue), but if I have only one query on the same page the server works fine. (In this case I'm not using files, just fetching a small amount of text data.)

Does anyone have this same problem?

EDIT: I'm now sure about this: the issue is related to multiple queries on the same page!

@drew-gross
Contributor

Having multiple queries on the same page could be a red herring, as it could just be that your server only has enough memory to handle 1 query at a time, or something like that (although that seems unlikely, unless you have the table scan problem I mentioned above)

Some ideas to try:

Can you try issuing the same queries one after another, instead of simultaneously, and see if the problem still occurs?

If you have managed to reproduce locally, can you send me the steps to reproduce?

If you are using mongo version 3.0.{something that is not 8}, can you try mongo 3.0.8? This is the one we run the tests on, the one we use internally at Parse, and people have previously reported issues with other versions.
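
(A sketch of issuing two queries one after another instead of in parallel; the class names are placeholders:)

    //sequential: the second query only starts after the first one resolves
    var brandQuery = new Parse.Query('Brand');    //placeholder class
    var reviewQuery = new Parse.Query('Review');  //placeholder class

    brandQuery.find().then(function (brands) {
        console.log('got ' + brands.length + ' brands');
        return reviewQuery.find();
    }).then(function (reviews) {
        console.log('got ' + reviews.length + ' reviews');
    }, function (error) {
        console.error('query failed: ' + JSON.stringify(error));
    });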

@dcdspace

Running Parse Server locally works for me, and then within 5 minutes of switching back to Heroku I get the timeout issues again.

@dozzman

dozzman commented May 22, 2016

I have also run into the same problem recently -- however I have not upgraded my server since parse version 2.2.7. I have a stable version which I last built from source on 14th of May, and my currently failing version of 2.2.7 which I have built today. Assuming I am being affected by the same root cause, perhaps this is a dependency issue?

This is the output from running npm outdated on my STABLE parse 2.2.7 deployment:

Package                    Current  Wanted   Latest  Location
babel-cli                    6.8.0   6.9.0    6.9.0  parse-server
babel-core                   6.8.0   6.9.0    6.9.0  parse-server
babel-istanbul               0.6.1   0.6.1    0.8.0  parse-server
babel-polyfill               6.8.0   6.9.0    6.9.0  parse-server
babel-preset-es2015          6.6.0   6.9.0    6.9.0  parse-server
babel-register               6.8.0   6.9.0    6.9.0  parse-server
babel-runtime                6.6.1   6.9.0    6.9.0  parse-server
flow-bin                    0.22.1  0.22.1   0.25.0  parse-server
gaze                         0.5.2   0.5.2    1.0.0  parse-server
mongodb                     2.1.18  2.1.19   2.1.19  parse-server
mongodb-runner              3.1.15  3.1.15    3.3.2  parse-server
redis                        2.5.3   2.5.3  2.6.0-2  parse-server
winston-daily-rotate-file    1.0.1   1.1.0    1.1.0  parse-server

I've also included gists from the output of npm ls in my stable and unstable parse server directories respectively:

STABLE:
https://gist.github.com/dozzman/818200dd59b4e81891b42eebd48459cd

UNSTABLE:
https://gist.github.com/dozzman/d9bf350d71721a2dc1fd8941b1a4a3e3

Hope this helps.

@drew-gross
Contributor

Looking at the list of dependencies, I don't see anything that could be causing problems. I don't have many more ideas, especially since people are reporting that memory usage is stable. Does your mongo continue to respond to queries issued from the mongo cli? Have you tried disabling all logging and caching? Since memory is stable I wouldn't expect that to be the issue, but it could be that the logs and cache are using up too many resources.

@davide-scalzo

Did anybody have any luck with Mongo 3.0.8? I seem to still experience the issue :/

@chrisckchang

Hi folks,

Per https://jira.mongodb.org/browse/NODE-718, it looks like 2.1.21 may address the issue. Would any folks be able to test the latest version and see if it works for them? If not, we should let the MongoDB team know asap.

@jadsonlourenco

@chrisckchang You are right, 2.1.21 fixed this issue. I tested it and it works fine, thanks!

@chrisckchang

@jadsonlourenco Sorry for not posting earlier, it looks like the current recommendation is to use 2.1.18 per https://jira.mongodb.org/browse/NODE-722. It looks like there may have been some bugs introduced when trying to fix an issue for Windows.
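
(For anyone following along, pinning the driver in an app's package.json would look roughly like this; the parse-server version is a placeholder, and since parse-server declares its own mongodb dependency, whether this pin is the copy parse-server actually loads depends on how npm resolves the tree:)

    "dependencies": {
        "parse-server": "2.2.11",
        "mongodb": "2.1.18"
    }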

@joy4eg
Contributor

joy4eg commented Jun 15, 2016

Hi folks,

I have the same problem on parse-server 2.2.13; I also tried to upgrade the mongodb adapter to 2.2.21 but the problem still occurs ...
"PUT /api/v1/classes/_Installation/56M3q1bVX2 HTTP/1.1" 499 0 "-" "Parse Android SDK 1.13.1 (XXX/YYY) API Level 22" 13.184

Where 13.184 is the request time, and parse-server didn't send any response.
Any help is very appreciated.
Thanks.

@skparticles

Hi all,
We're also having the same experience as @joy4eg: saving an installation gets no response from the server and then times out.

@joy4eg
Contributor

joy4eg commented Jun 29, 2016

@partikles

As a workaround, you can set up a limit on requests per second in nginx.
In our experience, we run 20 RPS across two parse-server instances (i.e. 10 RPS per instance), and now they are working well. (We don't need realtime data saving, just data storage.)
Also, you may look at our second workaround https://gist.github.com/joy4eg/ab44931de0606b76d78c8edb4ccffcf0 to reduce schema requests to MongoDB (we observed ~140 requests for ONE fresh install of our application, and it may be a bottleneck).
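
(A sketch of the nginx side of that rate-limit workaround using limit_req; the zone name, rate, burst, and upstream port are placeholders, not the exact values used above:)

    # track request rates per client IP in a shared 10 MB zone
    limit_req_zone $binary_remote_addr zone=parse_rps:10m rate=10r/s;

    server {
        location /parse/ {
            # allow short bursts; excess requests are rejected with 503
            limit_req zone=parse_rps burst=20;
            proxy_pass http://127.0.0.1:1337;
        }
    }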

@jadsonlourenco

@partikles I got the same issue again, even with mongodb 2.1.18. To fix it I updated to MongoDB 3.2 (https://hub.docker.com/r/jadsonlourenco/mongo-rocks/), which solved the issue and gave better performance.

@skparticles

Thanks @joy4eg @jadsonlourenco

@miracle7

Hello all, sorry to revive this thread but I have mongodb pinned at 2.1.18 in my package.json file to resolve this issue, and got the following email from mLab the other week saying that my deployment will be affected:

MongoDB 3.2 is now generally available on mLab, so we are in the process of making it the default release version for all new mLab deployments. We are also taking steps to discontinue our support for MongoDB 2.6.

To this end, on Tuesday, September 27, we will start a four-day maintenance window to upgrade all free Sandbox databases running MongoDB 3.0 to MongoDB 3.2. In addition, we will upgrade all for-pay Shared databases running MongoDB 2.6 to MongoDB 3.0. Dedicated plan deployments will not be impacted.

Key details about this maintenance window

This four-day maintenance window starts Tuesday, September 27 at 10:00 am PDT / 5:00 pm UTC (convert to your timezone).
Sandbox and Shared Single-node databases will be unavailable for approximately five minutes at some point during this four-day window. Highly-available Shared Cluster databases will experience no downtime from this maintenance, just failovers.
You must ensure that your free Sandbox databases running 3.0 meet the upgrade requirements for 3.2.
You must ensure that your for-pay Shared plan databases running 2.6 meet the upgrade requirements for 3.0 which includes a list of MongoDB 3.0-compatible drivers. If you do not upgrade your app to a 3.0-compatible driver before this upgrade, you may be unable to connect to your deployment.
No other configuration changes will be necessary on your end.

Does anyone know what I need to do to avoid issues when they perform this upgrade?

@fiznool

fiznool commented Jul 27, 2016 via email

@miracle7

@fiznool Well, doesn't get much easier than that. Thanks! :D

@rkand4

rkand4 commented Aug 8, 2016

We faced the same issue and had to change the MongoDB dependency version to v2.1.18. Scary issue, and pretty bad. Actually, the version fix doesn't seem to be working for us. Any help is much appreciated. Access logs from nginx:

174.65.165.4 - - [08/Aug/2016:03:24:35 +0000] "POST /parse/users HTTP/1.1" 504 182 "-" "Parse Android SDK 1.13.0 (com.XXX/67) API Level 19"

HTTP 504 errors ..

@ghost

ghost commented Aug 24, 2016

Hi guys,

Any feedback on this? This still happens on 2.1.18 so not sure why it has been closed as the problem is still there???

@flovilmart
Contributor

@execMobile with only the parse-server logs that doesn't help much. As many stated in the thread, this was resolved by pinning mongo to 2.1.18. What version of parse-server are you running? We also added a reconnection mechanism in case the connection to the DB was closed due to inactivity.

@ghost

ghost commented Aug 25, 2016

Hi @flovilmart,

The dependencies are as follows:

"dependencies": {
   "body-parser": "^1.15.1",
   "ejs": "^1.0.0",
   "express": "~4.11.x",
   "kerberos": "~0.0.x",
   "mandrill-api": ">=1.0.2",
   "parse": "~1.8.0",
   "parse-server": "~2.2.18",
   "mongodb":"2.1.18"
 }

We have changed parse-server to 2.2.18 and hopefully it will work as expected. We'll post an update if the problem occurs again.

mstrazds pushed a commit to mstrazds/docker-parse-server that referenced this issue Feb 28, 2017
@LucasBadico

LucasBadico commented Mar 21, 2017

Guys, same thing with my application. Could this be an S3 adapter issue?

I have an mLab deployment of parse-server, and yesterday, out of the blue, all my files became unreachable. My parse-server does not throw any errors and the mLab dashboard says that my database is up and running.


@flovilmart
Contributor

@LucasBadico can you expand on that?

@LucasBadico

@flovilmart sorry, I read and wrote a detail wrongly... fixing it and answering you... one moment.

@LucasBadico

@flovilmart so, I have this app up and running with parse-server on DigitalOcean. Since yesterday my images have been returning this:

Failed to load resource: the server responded with a status of 504 (Gateway Time-out)
578682bf9608ae67f7afa89dfd5b4eee_S0DinLEeOC_prophoto.jpeg 

@joy4eg
Contributor

joy4eg commented Mar 21, 2017

@LucasBadico Please explain your problem in more detail.
As for us, we fixed this issue a few months ago... we just switched to RocksDB and created indexes for the important data, and now everything works fine. (DigitalOcean)

@LucasBadico

What type of information do you guys need? the configuration setup?

@flovilmart
Contributor

@LucasBadico that's an issue with S3, not sure how to help there. Also, can you please open a new issue, as this one has been closed for a while and was related to an incompatibility with the mongoDB driver that we have already addressed.

@LucasBadico

@flovilmart oks! thanks! I will.

@pmaganti

pmaganti commented Aug 4, 2017

Today we saw this issue on parse-server@2.3.2 connecting to an mLab sandbox. The same parse-server version works fine while connected to an mLab DB running with replicas.

Strangely, when we upgraded to parse-server@2.5.3 we don't see any issue, even connecting to the sandbox.

@araskin

araskin commented Aug 15, 2017

Our production server is crashing every 5 hours after an upgrade to Parse-server. Indeed, the error messages seem similar to what is described in this issue. So I have tried to pin the mongoDB library to 2.1.18 in package.json:

[screenshot: mongodb pinned to 2.1.18 in package.json]

However when I do an npm list I see the following

[screenshot: npm list output]

Does this mean that Parse isn't using 2.1.18?

When I scroll further up the list I CAN see the 2.1.18, I'm just not sure if it's being used:

[screenshot: mongodb 2.1.18 shown further up the npm list]

Perhaps I have two copies and the wrong one is being used?
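
(One way to check which copy is actually used: npm ls mongodb lists every installed copy and the package that pulls it in. If parse-server's own nested mongodb is a different version, the top-level pin in package.json does not replace it:)

    # shows every installed mongodb copy and the package that depends on it
    npm ls mongodb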
