@DarioBalinzo is this similar to #74 and the workaround is what you are suggesting here? If so, what is an ETA for a solution using ES mappings?
Thanks!
Running with 1.5.4. For stack trace see below.
Stack trace:
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:223)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:149)
at org.apache.kafka.connect.runtime.WorkerSourceTask.convertTransformedRecord(WorkerSourceTask.java:330)
at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:356)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:258)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:188)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.connect.errors.DataException: Invalid type for INT64: class java.lang.Double
at org.apache.kafka.connect.json.JsonConverter.convertToJson(JsonConverter.java:671)
at org.apache.kafka.connect.json.JsonConverter.convertToJson(JsonConverter.java:617)
at org.apache.kafka.connect.json.JsonConverter.convertToJson(JsonConverter.java:662)
at org.apache.kafka.connect.json.JsonConverter.convertToJsonWithoutEnvelope(JsonConverter.java:554)
at org.apache.kafka.connect.json.JsonConverter.fromConnectData(JsonConverter.java:304)
at org.apache.kafka.connect.storage.Converter.fromConnectData(Converter.java:64)
at org.apache.kafka.connect.runtime.WorkerSourceTask.lambda$convertTransformedRecord$3(WorkerSourceTask.java:330)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:173)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:207)
... 11 more
Yes, it is the same issue, with the related workaround. Right now I'm prioritising only security updates for the connector, but I will try in the next weeks/months to implement a fix for this issue.
However, I just noticed that the `for field:` part with the failing field name is missing in my stack trace, so I don't know which field it is failing on. My documents are ~100KB in size and full of ints/floats (telemetry metrics), so it's rather difficult to pinpoint which one is the culprit.
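One way to hunt for the culprit without the field name in the stack trace: the error suggests a field whose Connect schema was inferred as INT64 (from a document where the value was a whole number) later arrives as a `java.lang.Double`. A small, hypothetical helper (not part of the connector) can scan a sample of parsed documents for numeric fields that appear both as integers and as floats:

```python
def collect(value, path, kinds):
    """Record, per field path, whether numeric values are ints or floats."""
    if isinstance(value, dict):
        for key, child in value.items():
            collect(child, f"{path}.{key}" if path else key, kinds)
    elif isinstance(value, list):
        for child in value:
            collect(child, path + "[]", kinds)  # aggregate array elements
    elif isinstance(value, bool):
        pass  # bool is a subclass of int in Python; ignore it
    elif isinstance(value, int):
        kinds.setdefault(path, set()).add("int")
    elif isinstance(value, float):
        kinds.setdefault(path, set()).add("float")

def mixed_fields(docs):
    """Return field paths seen both as int and as float across the docs."""
    kinds = {}
    for doc in docs:
        collect(doc, "", kinds)
    return sorted(p for p, k in kinds.items() if k == {"int", "float"})
```

Feeding it documents loaded with `json.loads` (which keeps `5` as `int` and `5.0` as `float`) should surface the fields most likely to trip the INT64 conversion.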
Regarding the workaround you are suggesting:
* try to use the json serializer (not avro)
which configuration parameter of the connector would that be?
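My guess (not confirmed by the maintainer) is that "use the json serializer" refers to the standard Kafka Connect converter properties, which can be set per connector:

```properties
# Standard Kafka Connect converter settings (worker-level or per-connector):
# use the JSON converter instead of Avro for record values.
value.converter=org.apache.kafka.connect.json.JsonConverter
# Drop the schema envelope from the emitted JSON (optional).
value.converter.schemas.enable=false
```

The same pair exists for keys (`key.converter`, `key.converter.schemas.enable`) if the key converter also needs switching.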