Issue#84 TP4, RSC15a (test), RSC19e (test), .. #87
Conversation
One comment, otherwise looks good. For clarity, please confirm which of the points at #84 this addresses.
test/ably/restinit_test.py (outdated)
    ]

    for aux in fallback_hosts:
        ably = AblyRest(token='foo', fallback_hosts=fallback_hosts)
Bit confused: you're passing in fallback_hosts, yet that's a two-dimensional array?

Good catch, fixed (it's RSC15a).
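To illustrate the fix described above, here is a minimal, self-contained sketch. The host values are hypothetical, and the AblyRest call appears only as a comment since it needs the real library; the point is simply that each iteration should use the inner list `aux`:

```python
# Hypothetical sketch of the RSC15a fix: pass the inner list `aux` on each
# iteration, not the whole two-dimensional `fallback_hosts` list.
fallback_hosts = [
    ['a.example.com', 'b.example.com'],  # hypothetical host lists
    ['c.example.com'],
]

passed = []
for aux in fallback_hosts:
    # Fixed call (needs the ably library, shown for illustration only):
    # ably = AblyRest(token='foo', fallback_hosts=aux)
    passed.append(aux)

# Every value handed to the client is now a flat list of host strings.
print(all(isinstance(h, str) for hosts in passed for h in hosts))  # True
```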
test/ably/restauth_test.py (outdated)
    self.assertGreater(token.expires, time.time()*1000)
    self.assertIsNot(new_token, token)
    # Second does not
    with self.assertRaises(AblyAuthException):
I am not sure this is correct if it fails. What is the value of TOKEN_EXPIRY_BUFFER? If it's, say, 2s, this test is very racy, as the realtime system will still respond while the token is valid, i.e. there is no buffer factored in and the TTL is literal.
TOKEN_EXPIRY_BUFFER is 15s; I think it is specified somewhere.
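The buffered check being discussed could look roughly like this. This is a sketch only: the function name is made up, and the 15s value and millisecond units are taken from this thread, not from the library source:

```python
import time

TOKEN_EXPIRY_BUFFER = 15 * 1000  # 15s in ms, per the comment above (assumed)

def token_is_expired(expires_ms, now_ms=None):
    """Treat a token as expired once `now` is within the buffer of its expiry,
    so the client renews before the server starts rejecting requests."""
    if now_ms is None:
        now_ms = time.time() * 1000
    return expires_ms - TOKEN_EXPIRY_BUFFER <= now_ms

# A token with only 2s of life left is already "expired" under a 15s buffer,
# which avoids exactly the race described above:
print(token_is_expired(time.time() * 1000 + 2000))  # True
```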
Travis is complaining. I had the timeout errors locally too with the sandbox server; they went away using the staging server.
Thanks @jdavid, looks good. 👍
Umhh, the last commit switches to staging, to see whether the bizarre errors go away or not (just temporary, of course).
We've found it sometimes builds up a backlog; it seems to have run and failed :( Shout if you want to discuss the failures to see if we can help.
The builds are actually broken; it looks like Travis was a victim of the Amazon issues. https://www.traviscistatus.com/incidents/hmwq9yy5dh9d
Ok, restarted the builds now.
Latest try: only the usual random time failures; the bizarre failure is gone. Locally I don't get the time failures, nor any other error, so it looks to me like these are somehow related to Travis. The failures are the same kind each time.
I agree, merging now. Can you please do me a favour, though, and raise an issue to address the intermittent test issues, i.e. simply document which ones are intermittent so that we can pick this up at some point in the future?
* Use py.test instead of nosetests
* Added several plugins, commented out for the future
* Use xdist to run tests in parallel, decreasing run time from 137s to 66s
* Set up a tox flake8 environment for code standard checks
* Ignored a lot of errors/warnings to kickstart this. TODO: remove them from setup.cfg
* Coveralls moved to Travis, as coverage should only be submitted from CI servers
* Deleted the test.py file (no idea what it was doing there)
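As a rough illustration of the tox setup described above (a hypothetical sketch, not the repository's actual configuration; environment names and paths are guesses):

```ini
; Hypothetical tox.ini sketch for the flake8 environment mentioned above.
[tox]
envlist = py27, py34, flake8

[testenv]
deps =
    pytest
    pytest-xdist
; -n auto lets xdist spread tests across all available CPU cores.
commands = py.test -n auto {posargs}

[testenv:flake8]
deps = flake8
commands = flake8 ably test
```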
Those two python versions are known to be troublesome and defective.
client_id is special, as it cannot be changed once it is set.
Useful if run locally with a virtualenv inside the project, so pytest does not run tests for other software.
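A pytest configuration along these lines would achieve that (a sketch; the option values are assumptions, not the repository's actual setup.cfg):

```ini
; Hypothetical [tool:pytest] sketch: restrict collection so a virtualenv
; created inside the project directory is never scanned for tests.
[tool:pytest]
testpaths = test
norecursedirs = .git .tox *.egg-info venv .venv
```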