Error 500 Timeout #75
@danielfnfaria Is there any more information than this? What is the actual output? It looks like it wrote 48 bytes. Can you check the Connect logs to see if there are any relevant messages?
I'm not experiencing @danielfnfaria's precise issues, but something I've noticed is that some erroneous requests to Kafka Connect distributed workers cause their internal REST endpoints to die, with no error returned to the caller or even logged at all. The first example that I experienced was because I assumed that the Confluent Platform's uber-RPM included the S3 connector, when it turns out that it doesn't. Before I realized this, any attempt I made at registering an S3 sink timed out with a 500 error; and not just that, after such a timeout, all requests to the worker's REST interface would time out thereafter until the worker was restarted. Once I realized that the S3 connector jars were not actually there and installed that RPM separately, the registration request succeeded. So basically, whatever problem @danielfnfaria is experiencing here, the bigger problem is that distributed workers swallow exceptions and die when you send them a "killer request."
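For anyone in the same spot: one way to avoid the "missing jar" killer request is to check `GET /connector-plugins` before registering, and to sanity-check the registration payload locally first. A sketch, assuming the worker is on the default port `localhost:8083` and using made-up connector/topic names:

```shell
# Check that the plugin class is actually on the worker's plugin path first
# (a missing jar is one way to trigger the 500 timeout described above):
#
#   curl -s http://localhost:8083/connector-plugins | grep -i s3
#
# Build the registration payload and validate the JSON locally before POSTing.
# "s3-sink-example" and "example-topic" are hypothetical names.
cat > /tmp/s3-sink.json <<'EOF'
{
  "name": "s3-sink-example",
  "config": {
    "connector.class": "io.confluent.connect.s3.S3SinkConnector",
    "tasks.max": "1",
    "topics": "example-topic"
  }
}
EOF
python3 -c 'import json; json.load(open("/tmp/s3-sink.json")); print("payload OK")'
# Then register it:
#   curl -s -X POST -H 'Content-Type: application/json' \
#        --data @/tmp/s3-sink.json http://localhost:8083/connectors
```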
@ewencp Same problem here: no exception at all, just a 500 timeout when updating or deleting a connector. And once this happens, no PUT/POST request works anymore.
I am getting the same error for the GET or POST /connectors API. I am using the confluent-3.3.0 package.

```
2017-08-08 10:42:02 INFO RestServer:60 - 10.160.240.125 - - [08/Aug/2017:10:40:32 +0000] "GET /connectors HTTP/1.1" 500 48 90007
```

Please help to resolve this error.
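A hedged reading of that RestServer access-log line: the three trailing numbers appear to be the HTTP status, the response size in bytes, and the request latency in milliseconds, which would explain why these reports all show a value just over the 90-second mark. A quick sketch for pulling the fields apart:

```shell
# Last three fields of the access-log line look like: status, bytes, latency
# in ms (90007 ms is just past the 90 s timeout these reports keep hitting).
line='10.160.240.125 - - [08/Aug/2017:10:40:32 +0000] "GET /connectors HTTP/1.1" 500 48 90007'
read -r status bytes latency_ms <<<"$(awk '{print $(NF-2), $(NF-1), $NF}' <<<"$line")"
echo "status=$status bytes=$bytes latency_ms=$latency_ms"
```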
By downgrading Confluent to version 3.2.0, I am able to access the /connectors API.
Same problem.
Same problem, any updates?
I solved this problem by setting the …
I too am having this issue on v3.3.0 of Kafka Connect. The /connectors endpoint appears to be broken in this version.
#116 recently enhanced the connector to use exponential backoff. That was merged into the …
Any update on this issue? We are running into the same problem (GET /connectors times out).
I am getting the same problem. Were you able to solve it? It seems some small config is missing :-(
Getting the same issue. But when I start this one, I can't even request the list of connectors.
That was my stupid error. By default, the replication factor is 3. I fixed my problem by setting …
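For anyone hitting this on a single-broker dev cluster: the distributed worker creates its internal offset/config/status topics with a default replication factor of 3, which one broker can never satisfy, so topic creation stalls and every REST call times out. A sketch of the relevant worker settings (values assume a one-broker cluster, not a production recommendation):

```properties
# connect-distributed.properties -- single-broker sketch; with one broker the
# default replication factor of 3 can never be met, so the worker's internal
# topics are never created and the REST API times out.
offset.storage.replication.factor=1
config.storage.replication.factor=1
status.storage.replication.factor=1
```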
Looks like this issue has been happening for a while in some situations. I am using Confluent 4.0.1 in distributed mode, and I can reproduce it. In my situation, I have one JdbcSourceConnector and one RedshiftSinkConnector. The first deploy or delete REST call works for either connector, but all the following REST calls hang. I went through this thread http://mail-archives.apache.org/mod_mbox/kafka-users/201612.mbox/%3CC5AB03B2-8CB5-4258-82B3-1E105D52F567@trulia.com%3E and also confluentinc/kafka-connect-jdbc#302, but these don't help my situation. Does anyone have a suggestion?
I got my issue solved. In my case, the problem was that we were using "timestamp+incrementing" mode, but the source was a huge table with no index on the timestamp column, so after the source connector was created it started querying the DB and waiting for the result until timeout, and then ran the query again and again. During the query, the REST API reported "500: timeout" for any new connector deployment (I don't know how the connector handles that logic internally). When I switched to another table that had an index, it worked. Not sure if there is connector monitoring that could detect this corner case, but a query timeout definitely should not bring down the REST API.
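For reference, a hedged sketch of the kind of JDBC source config being described; the connection URL, table, and column names below are made up. In `timestamp+incrementing` mode the connector repeatedly filters on the timestamp column, so without an index on that column the query can easily run past the worker's 90-second REST timeout:

```json
{
  "name": "jdbc-source-example",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db.example.com:5432/appdb",
    "table.whitelist": "big_table",
    "mode": "timestamp+incrementing",
    "timestamp.column.name": "updated_at",
    "incrementing.column.name": "id",
    "poll.interval.ms": "5000"
  }
}
```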
I am also facing a timeout error while posting the source connector for DB2. The POST API waits for almost 90 seconds and then times out with the error below:

```
[2019-06-02 00:17:17,906] INFO 192.168.1.2 - - [01/Jun/2019:18:45:47 +0000] "POST /connectors HTTP/1.1" 500 48 90004 (org.apache.kafka.connect.runtime.rest.RestServer:60)
```

I can also see the warning below in the Kafka Connect log just before the timeout:

```
This member will leave the group because consumer poll timeout has expired. This means the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time processing messages. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:1011)
```

Is there any configuration to increase the API timeout? I have also noticed interesting behavior: when I run Kafka Connect in standalone mode it works perfectly, and I can see the DB2 table data in the Kafka topic.
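On the consumer warning above: as far as I know there is no documented worker setting in these versions for raising the REST API's 90-second timeout itself, but the consumer settings the log message points at can be overridden at the worker level with a `consumer.` prefix. A sketch with illustrative values only, not recommendations:

```properties
# connect-distributed.properties -- sketch; "consumer."-prefixed settings are
# passed through to the consumers used by sink tasks. Values are illustrative.
consumer.max.poll.interval.ms=600000
consumer.max.poll.records=100
```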
Hi, do you have any solution for this? I'm also facing the exact same issue when loading a source connector in distributed mode. Please kindly reply if anybody has a solution.
Facing the same issue here with …
Closing this as the original issue has been resolved. Follow up commentary pertains to other connectors. |
I got a similar issue. I posted my solution at https://stackoverflow.com/questions/71520181/got-500-request-timed-out-for-kafka-connect-rest-api-post-put-delete For me, simply restarting Kafka Connect made the issue go away.
So far, the timeout issue hasn't shown up again.
POST or GET on /connectors returns a 500 timeout in distributed mode.

```
[2017-03-21 21:26:04,794] INFO 127.0.0.1 - - [21/Mar/2017:21:24:34 +0000] "GET /connectors HTTP/1.1" 500 48 90235 (org.apache.kafka.connect.runtime.rest.RestServer:60)
```