Add check on status code to long-polling #5
Comments
Hi @nbyrnes-acv, thanks for the suggestion.
That is what I am working on determining now. My status check in long_polling does not seem to be catching the error. I can replicate the condition I am trying to catch once or twice a day, but I am now thinking that the POST in _send_final_payload may not be the correct place, or that SFDC is not sending the error as expected according to their docs. Is that session.post where the actual messages from SFDC are received? Unfortunately, there is no error message. What happens is that SFDC just stops sending events, and the long polling just times out and reconnects forever. I don't know that I've hit the limit until I try to reconnect to SFDC, at which point I get a subscribe failure that tells me: '403::Organization total events daily limit exceeded'. It seems to me that I need to alert someone should this ever happen in production, rather than just silently sitting there.
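For reference, a small sketch of how that subscribe failure could be detected programmatically, assuming the standard Bayeux /meta/subscribe reply shape (`successful` flag plus an `error` string); the function name is illustrative, not part of the library:

```python
DAILY_LIMIT_ERROR = "403::Organization total events daily limit exceeded"

def is_daily_limit_error(subscribe_response: dict) -> bool:
    """Return True if a Bayeux /meta/subscribe reply reports the
    daily event delivery limit error quoted above."""
    # Unsuccessful subscribe replies carry the reason in the "error" field
    return (
        not subscribe_response.get("successful", True)
        and subscribe_response.get("error", "") == DAILY_LIMIT_ERROR
    )
```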
It seems that SFDC will only emit the API limit error status on subscribe, not on the POST. So we need to find an indirect means of tracking this and quitting before SFDC stops sending us messages. Thanks for engaging.
There are two possibilities; I'm not entirely sure this isn't a bug in the client's implementation. Unfortunately it's very tricky to test, since once the daily event delivery limit is reached I get just a single chance to observe it, because on the next connection attempt everything behaves correctly. I'll try to reproduce this and log all responses from SF to see exactly how it behaves.
I was able to reproduce the problem by exhausting my daily platform event limit, and for some reason I didn't get any errors, nor did the client time out; SF just stopped delivering new events. This behavior makes sense when a client is subscribed to multiple different event types: if the platform event limit is reached, SF will continue to deliver the remaining event types, such as events from push topics. But in your case, if you're only interested in platform events, this is problematic. I'll give it one more try tomorrow.
It occurred to me that there could be a configurable option enabling the client to detect that SF has stopped sending events. Specifically, I am thinking of a counter that would, after some number of long_polling timeouts with no messages received, try re-subscribing. The re-subscription will emit an error if the limit has been exceeded, and should just continue as normal if it hasn't. This would improve the timeliness of "out of events" detection and facilitate error handling and reporting. Thoughts?
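The timeout-counter idea above can be sketched roughly as follows. This is a minimal illustration, not the client's actual code: `receive` and `resubscribe` are hypothetical coroutines standing in for the client's long-poll receive and subscribe operations, and the threshold is arbitrary; a failed re-subscription is assumed to raise (e.g. with the 403 daily-limit error):

```python
import asyncio

MAX_EMPTY_TIMEOUTS = 3  # illustrative threshold, would be configurable

async def wait_for_message(receive, resubscribe, poll_timeout=110):
    """Wait for the next streaming message; after several consecutive
    empty long-poll timeouts, re-subscribe so the server gets a chance
    to report the daily-limit error instead of staying silent."""
    empty_timeouts = 0
    while True:
        try:
            return await asyncio.wait_for(receive(), timeout=poll_timeout)
        except asyncio.TimeoutError:
            empty_timeouts += 1
            if empty_timeouts >= MAX_EMPTY_TIMEOUTS:
                # A re-subscribe is expected to fail loudly if the
                # daily event limit has been exceeded
                await resubscribe()
                empty_timeouts = 0
```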
Hi @nbyrnes-acv, I ran my test application a few more times, but I never got any response that would indicate that the platform event limit has been reached. The client continued the communication with the server, but the server stopped delivering messages.
This is exactly what I wanted to recommend to you.
Hi there, we're using this library to connect to Salesforce. One thing we've observed is that there is no exception when Salesforce runs out of platform events for the day. We feel it would make sense for some kind of exception to be thrown when this happens (but we're certainly open to other ways to detect and handle error codes returned from the server). Does it make sense to check response.status in transports/long_polling.py and throw an exception if it is not 200 after the session.post that collects the payload from Salesforce? The code we are thinking of looks like:
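A minimal sketch of the check described above, assuming an aiohttp-style session object (the function name and exception type are illustrative, not the library's actual API):

```python
class TransportError(Exception):
    """Raised when the long-polling POST returns a non-200 status."""

async def send_final_payload(session, url, payload):
    # Hypothetical wrapper around the session.post call in
    # transports/long_polling.py; `session` is assumed to behave
    # like an aiohttp.ClientSession.
    async with session.post(url, json=payload) as response:
        # Fail loudly instead of silently ignoring server-side errors
        if response.status != 200:
            raise TransportError(
                f"Request failed with status {response.status}"
            )
        return await response.json()
```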