
tsserver mangles output when more than one request is pending #2758

Closed
dbaeumer opened this issue Apr 14, 2015 · 19 comments
Labels
Bug

Comments

@dbaeumer (Member) commented Apr 14, 2015

We send quite a few requests to the tsserver. We make sure that each write to stdin is exactly one request; however, we don't queue them on the sending side.

The output sent from the tsserver mangles different responses together. See the attached msg.json. Interesting lines are the end of line 1 (e.g. `{"nContent-Length: 214`) and line 125 (e.g. `{"Content-Length: 70021`).

From the output it looks like the interference comes mostly from the syntax and semantic events sent from the server.

@dbaeumer (Member Author) commented Apr 14, 2015

@steveluc since I can't attach files, I will send you the msg.json by mail.

@dbaeumer (Member Author) commented Apr 14, 2015

So far we can only reproduce this under Mac; it doesn't reproduce under Windows. One additional thing to note is that under Mac we execute on atom.exe, which is based on io.js instead of node.js.

@DanielRosenwasser (Member) commented Apr 14, 2015

You could potentially post a gist for the log.

@mhegazy commented Apr 14, 2015

How are you reading the input? Are you checking the content-length against the size of the message received so far, and continuing to read until it is complete?

is it something like: https://github.com/Microsoft/TypeScript-Sublime-Plugin/blob/master/libs/nodeclient.py#L178
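
The kind of length-aware reader being described could be sketched like this (TypeScript; `FramedReader` and `push` are illustrative names, not from the plugin, and the wire format assumed is tsserver's `Content-Length: N\r\n\r\n<body>` framing):

```typescript
// Sketch of a Content-Length framed reader: buffer incoming chunks and
// only emit a message once its full body has arrived.
class FramedReader {
  private buffer = "";

  constructor(private onMessage: (msg: string) => void) {}

  // Feed a chunk read from stdout; emit every complete message it contains.
  push(chunk: string): void {
    this.buffer += chunk;
    for (;;) {
      const headerEnd = this.buffer.indexOf("\r\n\r\n");
      if (headerEnd < 0) return; // header not complete yet
      const match = /Content-Length:\s*(\d+)/.exec(this.buffer.slice(0, headerEnd));
      if (!match) throw new Error("malformed header");
      const length = parseInt(match[1], 10);
      const bodyStart = headerEnd + 4;
      if (this.buffer.length < bodyStart + length) return; // body not complete yet
      this.onMessage(this.buffer.slice(bodyStart, bodyStart + length));
      this.buffer = this.buffer.slice(bodyStart + length);
    }
  }
}
```

Note that even a correct reader like this cannot recover if the server itself interleaves two responses into one stream, which is what the log shows.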

@steveluc (Contributor) commented Apr 14, 2015

Based on what you've told us so far, that looks like a case in which two 'threads' are both trying to write to stdout. For example, the syntax and semantics diagnostics happen in a timer 'thread' that writes events to stdout, whereas the readline handler processes requests and writes responses to stdout. Is it possible that io.js will allow a timer interrupt to take place in the middle of a readline handler? This doesn't seem like correct node behavior, but it does match the scenario you are describing.

@dbaeumer (Member Author) commented Apr 18, 2015

@DanielRosenwasser thanks for the gist tip. Wasn't aware of the feature. The log can be found here https://gist.github.com/dbaeumer/53d84d04d787aa23bd50

@dbaeumer (Member Author) commented Apr 18, 2015

@mhegazy yes, we are handling content-length. We already receive a mangled buffer.

@dbaeumer (Member Author) commented Apr 18, 2015

@steveluc: I don't think this is a threading problem after scanning through the code (I first thought it was an async/callback problem, but TS isn't async :-)). I think the problem is in the TS System implementation for node. The tsserver calls System.write, which is implemented as:

write(s: string): void {
    // 1 is a standard descriptor for stdout
    _fs.writeSync(1, s);
}

But writeSync returns the number of bytes actually written, so we need a loop around this. I will see if implementing this correctly fixes it (I'll test with our Mac setup on Monday) and provide a pull request if it does.

@steveluc (Contributor) commented Apr 18, 2015

Great, Dirk! Looking forward to the results.

@DanielRosenwasser (Member) commented Apr 18, 2015

@dbaeumer can we make sure that this is the root cause? In other words, verify that when something goes wrong, the number of bytes written indeed doesn't match the size of the string given.

@mhegazy commented Apr 20, 2015

@dbaeumer thanks for digging into this! Would you be interested in sending a PR for it?

@mhegazy mhegazy added the Bug label Apr 20, 2015
@dbaeumer (Member Author) commented Apr 21, 2015

OK. Here is what I did so far: I changed the implementation to have a loop around writeSync which checks the written bytes. This now sends more data, but it revealed another bug: sometimes writeSync fails with -1 EAGAIN (Resource temporarily unavailable). However, this is only seen under Atom, not under node.

In general, writeSync is IMO bad since it requires that someone on the receiving side is reading, which might not always be the case.

Since we send a lot of data over stdout, I think the implementation should not use write from System. Instead the tsserver should do the following:

  • maintain an output queue containing responses
  • have a function that sends the output in an async way and considers the drain event.

If you think that is the right thing to do I am happy to work on a PR (after //build) to get this into the tsserver.
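
The proposed queue could be sketched as follows (a sketch only, assuming Node's Writable stream semantics where `write()` returns `false` when the internal buffer is full and a `'drain'` event fires once it empties; `OutputQueue` and `enqueue` are illustrative names):

```typescript
import { Writable } from "stream";

// Sketch of a drain-aware output queue: responses are queued and
// flushed asynchronously; when write() signals back-pressure we stop
// and resume on the 'drain' event instead of blocking.
class OutputQueue {
  private queue: string[] = [];
  private waiting = false;

  constructor(private out: Writable) {}

  enqueue(msg: string): void {
    this.queue.push(msg);
    this.flush();
  }

  private flush(): void {
    if (this.waiting) return; // a 'drain' handler is already pending
    while (this.queue.length > 0) {
      const ok = this.out.write(this.queue.shift()!);
      if (!ok) {
        // Internal buffer is full; wait for it to drain before sending more.
        this.waiting = true;
        this.out.once("drain", () => {
          this.waiting = false;
          this.flush();
        });
        return;
      }
    }
  }
}
```

In the tsserver the `out` stream would be `process.stdout`; the sketch takes any Writable so it can be exercised in isolation.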

@mhegazy commented Apr 21, 2015

@dbaeumer I agree with the approach. @steveluc any objections?

@DanielRosenwasser (Member) commented Apr 21, 2015

Worth pointing out from http://www.daveeddy.com/2013/03/26/synchronous-file-io-in-nodejs/:

If you want to open a file for synchronous IO, you'll have to use the lower level fs functions that Node offers such as fs.open() and fs.fsync().

@steveluc (Contributor) commented Apr 21, 2015

I agree we should move to asynchronous I/O, since this is the preferred node model. I think though that we should simultaneously consider re-factoring the protocol if we find we are regularly sending messages larger than 64K. writeSync on stdout only blocks if the 64K buffer fills (the buffer can also be made bigger). It makes sense to use async I/O for rare cases where we might fill that buffer, but performance will suffer if we are regularly stuffing that buffer so full that the client reader thread isn't draining it, whether or not the I/O is async. So I would like to understand what response message(s) are sending large amounts of data and whether we can re-factor the protocol to avoid this.

@mhegazy mhegazy added the help wanted label Apr 22, 2015
@dbaeumer (Member Author) commented Apr 22, 2015

Completions can send a lot of data when triggered in cases where no prefix exists and the global scope is available (e.g. requesting completions right after `var foo: `).

We can still implement the queue and send small messages using writeSync. I wouldn't change the protocol, since it might require changing it again when implementations change.

@mhegazy commented Jul 13, 2015

thanks @dbaeumer

@mhegazy mhegazy closed this Jul 13, 2015
@mhegazy mhegazy added Fixed and removed help wanted labels Jul 13, 2015
@zhengbli (Contributor) commented Oct 21, 2015

This problem happens again with Node v4.2.1 on OSX, which starts to throw the EAGAIN exception and truncates the output if the string we send to writeSync is too long. Considering node v4 is the first long-term supported version, we should probably figure out a way to work around this issue. Reopening it.

@mhegazy commented Dec 16, 2016

This has not been an issue. Closing.

@mhegazy mhegazy closed this Dec 16, 2016
@microsoft microsoft locked and limited conversation to collaborators Jun 18, 2018