Suggestion: receive 'pushed' cloud updates in a CloudSession #3

Closed
TheLogFather opened this issue Feb 2, 2016 · 8 comments

@TheLogFather

I figured out how to do this - you just need to listen for data arriving on port 531. (Dunno where port 843 comes in - can't make any sense of that...)
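
For context, here's a minimal sketch of listening on that port on its own (the host name, and the assumption that CloudSession's _connection is just a plain TCP socket, are my guesses - and the login/handshake that CloudSession already performs is omitted, so nothing useful will arrive without it):

import socket

CLOUD_HOST = 'cloud.scratch.mit.edu'  # assumed host for the cloud data server
CLOUD_PORT = 531                      # the port the pushed updates arrive on

# Open a plain TCP connection - presumably what self._connection already is
# inside CloudSession, once it has done its login/handshake.
sock = socket.create_connection((CLOUD_HOST, CLOUD_PORT))
sock.settimeout(10)  # don't block forever while trying this out
try:
    raw = sock.recv(4096)  # raw update data would arrive here
    print(raw)
finally:
    sock.close()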

Just needs something like this in CloudSession (sorry, I'm still using an older version of ScratchAPI.py that doesn't have the new underscores and other things you've changed in your recent commit - I've now prefixed "connection" in the snippet below with the underscore, which I think is all it needs, right?):

# Assumes ScratchAPI.py already imports json and socket (both are needed here).
def check_updates(self, timeout, maxCount=10):
    count = 0     # keep track of how many updates have been received
    updates = {}  # keep a dict of all name+value pairs received
    self._connection.settimeout(timeout)  # recv will wait for the given time if nothing received
    while count < maxCount:  # only allow up to the maximum number of updates
        try:
            data = self._connection.recv(4096)  # raises socket.timeout if no data by the deadline
        except (socket.timeout, OSError):
            break  # get out when recv times out 'cos there's no data
        self._connection.settimeout(0.01)  # quick next check in case there's more data
        count += 1
        try:
            data = json.loads(data.decode('utf-8'))  # kept in here so it still tries to get...
            name = data['name']                      # ...more data if decode or extraction fails
            value = str(data['value'])               # should be a string anyway?
            if name.startswith('☁' + chr(32)):
                updates[name[2:]] = value  # strip the leading cloud+space chars
            else:
                updates[name] = value  # probably never happens?
        except (ValueError, KeyError):  # just ignore data we can't get 'name'+'value' from
            continue  # and go back to see if there's more to recv
    self._connection.settimeout(None)  # return timeout to default (blocking)
    return updates  # give back the dict of updated vars

With the above, you can call c.check_updates(secs), where c is a CloudSession, to wait up to 'secs' seconds for updates to the project's cloudvars.
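
For example (a hypothetical usage sketch - it assumes c is an already-connected CloudSession from ScratchAPI.py, however you normally construct one):

while True:
    changes = c.check_updates(30)  # wait up to 30 secs for pushed cloudvar updates
    for name, value in changes.items():
        print('cloud var changed: %s = %s' % (name, value))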

Much kinder on cloud servers than polling varserver URL every couple of seconds! :)

@PolyEdge
Owner

PolyEdge commented Feb 2, 2016

Hmm... I'll try it when I'm not on mobile.

@TheLogFather
Author

OK - works really nicely for me... it means it reacts to cloudvar changes almost instantly, rather than having to wait for the next poll of varserver.

Note that it returns as soon as it has received and decoded the data. [Almost... it does try for an extra 1/100th sec after receiving data, just in case there's another cloudvar change straight afterwards, until it runs out of data to receive. I guess it's vaguely possible that it could get 'stuck' in that loop if changes keep coming more often than every 1/100th sec. That's rather unlikely, but maybe it should include some check to ensure it never stays in that loop for more than a certain amount of time, or for more than some upper limit on the number of received updates, which could be an optional param to it, I guess...]

EDIT: added the upper limit on the number of changes to the code snippet above - the default is 10, which seems reasonable since there's [meant to be] a limit of 10 cloudvars per project. That means it can only be stuck for a max of an extra 1/10th sec after receiving the first update.
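
So, for example, a project that hammers its cloudvars could loosen the cap (parameter name as in the snippet above):

updates = c.check_updates(5, maxCount=25)  # wait up to 5 secs, collect at most 25 changes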

@TheLogFather
Author

Used the port 531 check_updates receiver for https://scratch.mit.edu/projects/96491582/
So much nicer (and quicker) than polling the varserver URL every 2 or 3 secs. :)

@PolyEdge
Owner

PolyEdge commented Feb 4, 2016

I am testing, but I'm busy, so I should have it pushed in about 10 hours (going on a trip, yay).

@TheLogFather
Author

Woah, has something changed after the maintenance...? The above receiver is no longer working!

It looks like updates to long cloudvars can be split across receives.
And multiple updates can be in a single receive?

Recoding it...
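
To illustrate what I mean (the payloads below are made up, but the framing matches what I'm seeing - newline-terminated JSON lines that don't line up with recv() boundaries):

# Two updates where one recv() ends mid-line and the next recv() finishes it:
chunk1 = b'{"name":"\xe2\x98\x81 score","value":"123"}\n{"name":"\xe2\x98\x81 best","va'
chunk2 = b'lue":"999"}\n'

# So the receiver has to buffer, split on b'\n', and carry any trailing partial
# line over to the next receive - which is what the rollover in the snippet below does.
buffer = b''
for chunk in (chunk1, chunk2):
    buffer += chunk
    *complete, buffer = buffer.split(b'\n')  # last element is the (possibly empty) leftover tail
    for line in complete:
        if line:
            print(line.decode('utf-8'))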

@TheLogFather
Author

Well, this was a bit tedious (and it's starting to look somewhat messy, so could maybe do with some rearranging), but it seems to be working now with my cloud speedtest project (linked above):

...
    self._rollover = []  # add into CloudSession.__init__
...

def check_updates(self, timeout, maxCount=10):
    count = 0
    updates = {}  # keep a dict of all name+value pairs received
    self._connection.settimeout(timeout)  # recv will wait for the given time
    while count < maxCount:
        data = ''.encode('utf-8')  # start off blank
        while True:
            try:  # keep concatenating receives (until ended with \n)
                data = data + self._connection.recv(4096)  # raises socket.timeout if no data in time
                if data[-1] == 10:
                    break  # get out if we found the terminating \n
                self._connection.settimeout(0.1)  # allow time for more data
            except (socket.timeout, OSError):  # or until recv times out 'cos there's no data
                break
        if not data:
            break  # get out if nothing received
        self._connection.settimeout(0.01)  # allow a quick check for more data
        if data[0] == 123:  # starts with a left brace, so it's a fresh update...
            self._rollover = []  # ...drop any stale rollover (this does seem to happen occasionally)
        lines = data.decode('utf-8').split('\n')  # split up multiple updates
        if self._rollover:  # join the partial line rolled over from the previous receive...
            lines[0] = self._rollover[0] + lines[0]  # ...onto its continuation
        if lines[-1]:  # last line was incomplete, so roll it over...
            print('Warning: last line of data incomplete?!', lines[-1].encode('utf-8'))  # FYI for now...
            self._rollover = [lines[-1]]  # put it into rollover for the next receive
        else:
            self._rollover = []
        for line in lines[:-1]:  # never need the last line - it's either blank or rolled over
            if line:  # ignore blank lines (shouldn't get any?)
                try:
                    entry = json.loads(line)  # try to parse this entry
                    name = entry['name']  # try to extract the var name
                    value = str(entry['value'])  # should be a string anyway?
                    if name.startswith('☁' + chr(32)):
                        updates[name[2:]] = value  # strip the leading cloud+space chars
                    else:
                        updates[name] = value  # probably never happens?
                    count += 1  # count how many updates we've successfully parsed
                except (ValueError, KeyError):  # just ignore entries we can't get 'name'+'value' from
                    continue  # get the next entry, or go back to receive more
    self._connection.settimeout(None)  # reset timeout to default
    return updates

EDIT: added the encode('utf-8') to the FYI print when it sees incomplete data...

@TheLogFather
Author

I just saw a rollover case in my custom client for https://scratch.mit.edu/projects/96491582/

Unfortunately, I had a stupid mistake in my version of the FYI print statement, meaning it crashed and I therefore don't yet have any idea if it would've properly rolled over to the next call of check_updates.

But it does at least suggest it ought to have something to deal with the case where the last received thing isn't the whole of the update data...

@PolyEdge
Owner

Since this is added, I can close now, right?
