result.body is empty in Node but result.text is present #1078

Closed
tkesgar opened this Issue Oct 4, 2016 · 6 comments

tkesgar commented Oct 4, 2016

I am using superagent in https://github.com/tkesgar/miyako for both the Node backend and the frontend. However, I found a problem when running this part of the server code:

superagent.get(`${ENDPOINT}${id}`)
  .query({ access_token: TOKEN })
  .query(fields ? { fields } : {})
  .query(data)
  .then(result => {
    // callback(result.body)
    callback(JSON.parse(result.text))
  })
  .catch(error => errorCallback(error))

In the promise handler, result.body is {}, but result.text contains the full endpoint response (the endpoint is the Facebook Graph API). .buffer(true) doesn't help (result.body is still empty), and the content-type is already JSON.

I was able to get it working with JSON.parse(result.text), though. Also, I can use result.body on the client side without problems.

I'll try to reproduce the problem if I have some time.

Environment:

  • Node 6.3.0 64-bit
  • Windows 10 64-bit
focusaurus (Collaborator) commented Oct 4, 2016

Please send the full HTTP request and response headers if you can. That will help us troubleshoot. Auto-parsing of response bodies is primarily based on the Content-Type header in the response.
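
A quick way to capture the response side with superagent itself (a sketch reusing the ENDPOINT, TOKEN, and id placeholders from the snippet above):

superagent.get(`${ENDPOINT}${id}`)
  .query({ access_token: TOKEN })
  .then(result => {
    // log the status code, the parsed content-type, and the raw response headers
    console.log(result.status)
    console.log(result.type)
    console.log(result.headers)
  })
  .catch(error => console.error(error))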

tkesgar commented Oct 5, 2016

Requests from Node seem to get back the wrong content-type.

curl-ing the same request (to the Facebook Graph API) returns the correct content-type:

curl -i -X GET "https://graph.facebook.com/v2.7/335148063288907?access_token=<token>"

HTTP/2 200
access-control-allow-origin: *
etag: "ea63bae8487b4f133cb3877b366e553a01fd2e35"
pragma: no-cache
cache-control: private, no-cache, no-store, must-revalidate
facebook-api-version: v2.7
expires: Sat, 01 Jan 2000 00:00:00 GMT
content-type: application/json; charset=UTF-8
x-fb-trace-id: Bfie5260Olg
x-fb-rev: 2602612
x-fb-debug: /J4vr6vfR2BJdlDT1tr6SSDYSciqKO+KEONMWkUfYUoZg/TS3uQkmA4y5PHNZeY1pq9CMhKUj7e26oSZQps6yg==
date: Wed, 05 Oct 2016 00:29:01 GMT
content-length: 66

{"created_time":"2013-10-01T18:01:15+0000","id":"335148063288907"}

I ran this on both Node and the browser (results attached):

request.get('https://graph.facebook.com/v2.7/335148063288907')
  .buffer(true)
  .query({ access_token: <token> })
  .then((success, failure) => {
    console.log(JSON.stringify(success, null, 2))
  })

content-type is application/json; charset=UTF-8 in the browser, but text/javascript; charset=UTF-8 in Node.

On an unrelated note, I can't use .buffer(true) in the browser, but removing it still works.

browser.txt
server.txt

kornelski (Collaborator) commented Nov 9, 2016

I've found this is because Facebook is sensitive to the Accept header: browsers and curl send their own defaults, but superagent doesn't.

Adding .accept('json') to the request does exactly what it should and makes Facebook send JSON.
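
Applied to the original snippet, the fix looks like this (a sketch reusing the ENDPOINT, TOKEN, callback, and errorCallback placeholders from the first comment):

superagent.get(`${ENDPOINT}${id}`)
  .accept('json')                         // explicitly request application/json
  .query({ access_token: TOKEN })
  .then(result => callback(result.body))  // result.body is now the parsed JSON
  .catch(error => errorCallback(error))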

kornelski (Collaborator) commented Nov 9, 2016

So given this, I wonder what we should do:

  • add more documentation/warnings/hints about use of .accept('json')?
  • try to parse text/javascript as JSON? What if it's actually JS (e.g. JSONP) and fails to parse? (A per-app sketch of this option is below.)
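
For reference, a per-application version of option 2 (a workaround sketch, not a proposed library change): in Node, superagent exposes its content-type → parser map as superagent.parse, so an app that knows its API mislabels JSON could map text/javascript to the built-in JSON parser. This assumes that map is consulted for this content-type in your superagent version, and it will fail on bodies that are actual JavaScript/JSONP:

const superagent = require('superagent')

// Reuse the built-in JSON parser for responses labelled text/javascript.
// Node-only; parsing errors out if the body is real JS rather than JSON.
superagent.parse['text/javascript'] = superagent.parse['application/json']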
focusaurus (Collaborator) commented Nov 9, 2016

+1 on documentation. -1 on presumptive parsing

tkesgar commented Nov 9, 2016

I think adding docs should be enough.
