
sync client and settimeout() #80

Open
EnTerr opened this issue Sep 5, 2015 · 6 comments


EnTerr commented Sep 5, 2015

I am using the sync client, receiving about 30 msgs/sec. I don't want to get stuck waiting when there is no data ready, so I am trying this:

  ws.sock:settimeout(0)   -- make the underlying socket non-blocking
  local message, opcode, close_was_clean, close_code, close_reason = ws:receive()
  ws.sock:settimeout()    -- restore blocking mode

My idea/hope is that if there is no data available, receive() will fail and return nil or some such, and then I can do my other job and keep polling from time to time until I get something. I do get nil. HOWEVER, it also seems to close the connection, so I never really receive anything!

From my point of view this seems like a bug. What can I do?
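For reference, the polling loop I have in mind looks roughly like this (a sketch against the client API shown in the lua-websockets README; the URL/protocol and the handle/do_other_work functions are stand-ins, not real names):

  local websocket = require'websocket'

  local ws = websocket.client.sync({timeout = 2})
  assert(ws:connect('ws://127.0.0.1:8080', 'echo'))  -- stand-in URL and protocol

  while true do
    ws.sock:settimeout(0)                  -- non-blocking: fail fast if no data
    local message, opcode = ws:receive()
    ws.sock:settimeout()                   -- back to blocking for other calls
    if message then
      handle(message, opcode)              -- stand-in for the app's handler
    end
    do_other_work()                        -- the "other job" between polls
  end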

EnTerr (Author) commented Sep 6, 2015

After some debugging, it seems that sync.lua's receive()'s clean() is way too aggressive (why oh why?) in closing the connection on any kind of error. I added an if and it seems to work for me now:

  local clean = function(was_clean,code,reason)
    -- only tear down the connection on real errors, not on a mere timeout
    if reason ~= 'timeout' then
      self.state = 'CLOSED'
      self:sock_close()
      if self.on_close then
        self:on_close()
      end
    end
    return nil,nil,was_clean,code,reason or 'closed'
  end

P.S. I am not sure whether this might cause a problem if a timeout happens after the first sock_receive, when there is already data in encoded that would then be lost?
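To spell out the P.S. concern: receive() accumulates each frame into a local variable encoded across several sock_receive calls, so with only the patched clean() a mid-frame timeout still discards buffered bytes. Roughly (a simplified walkthrough, not the actual code):

  -- inside receive(), simplified:
  -- 1st sock_receive(): the 2-byte frame header arrives -> appended to encoded
  -- 2nd sock_receive(): payload not there yet -> err == 'timeout'
  -- patched clean() returns without closing the socket, BUT
  -- encoded is a local, so the buffered header bytes are dropped;
  -- the next receive() then starts parsing mid-stream and desyncs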

@lipp lipp added the feature label Sep 13, 2015
lipp (Owner) commented Sep 13, 2015

@EnTerr Added this as a feature request. It's not that trivial, as you must account for partially received messages/frames.

lipp (Owner) commented Sep 15, 2015

Great. I'll keep this issue open until ws:receive() accepts an optional timeout.

EnTerr (Author) commented May 30, 2016

OK, so here is a more complete fix I came up with; most of the code goes around https://github.com/lipp/lua-websockets/blob/master/src/websocket/sync.lua#L24

  ---
  if self._saved then
    local _ = self._saved
    first_opcode = _.first_opcode
    frames = _.frames
    bytes = _.bytes
    encoded = _.encoded
    self._saved = nil   -- erase the saved state
  end
  ---
  while true do
    local chunk, err, partial = self:sock_receive(bytes)

    if err then
      if err == 'timeout' then
        if #partial > 0 then
          -- there was some partial data, update
          encoded = encoded .. partial
          bytes = bytes - #partial
        end
        -- save state for next call
        self._saved = {
          first_opcode = first_opcode,
          frames = frames,
          bytes = bytes,
          encoded = encoded,
        }
      end
      return clean(false,1006,err)
    end

So every time receive() is called, it checks whether there is a pending self._saved and, if so, restores state from it. Then, on a timeout error, it saves the partial data for the next call.

In addition, close() and connect() have to set self._saved = nil to ensure old state does not persist into a potential next connection. And... that's it!
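With those pieces in place, the non-blocking poll from the first comment behaves as hoped: on 'timeout' nothing is lost, the partial frame just waits in self._saved until a later call completes it. A usage sketch (the field and reason values follow my patch above):

  ws.sock:settimeout(0)
  local message, opcode, was_clean, code, reason = ws:receive()
  ws.sock:settimeout()
  if message then
    -- a complete message, possibly assembled across several polls
  elseif reason == 'timeout' then
    -- no complete frame yet; partial bytes are kept in ws._saved
  else
    -- a real error or close; the connection was torn down
  end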

lipp (Owner) commented May 31, 2016

Nice! Can you make a PR from this?

EnTerr (Author) commented May 31, 2016

Oops, afraid I can't. I don't have the code under git; besides, I am working on an almost year-old version of lua-websockets, under svn. I did skim the current /master/src/websocket/sync.lua and there are no substantial/relevant changes, but I am not really set up to make a PR. Maybe if one day I get current, I will.
