
io.apicurio.registry.rest.client.exception.RestClientException: HTTP/1.1 header parser received no bytes #3276

Closed
HelloSunilSaini opened this issue Apr 18, 2023 · 5 comments

Comments

@HelloSunilSaini

Description

Registry
Version: 2.3.1
Persistence type: sql

Environment

kubernetes server version : version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.16-eks-48e63af", GitCommit:"e6332a8a3feb9e0fe3db851878f88cb73d49dd7a", GitTreeState:"clean", BuildDate:"2023-01-24T19:18:15Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}

serdes client: apicurio-registry-distro-connect-converter-2.3.1.Final

Steps to Reproduce

Expected vs Actual Behaviour

I was expecting Kafka Connect to retry this, but since the exception is not an instance of RetriableException, Kafka Connect will not retry it. I would like it to be thrown as a RetriableException.
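
For context on the request above: Kafka Connect's RetryWithToleranceOperator only re-attempts exceptions that extend org.apache.kafka.connect.errors.RetriableException, and RestClientException does not, so the record fails the task immediately instead of being retried. Until the exception type changes, one possible workaround is a small wrapper converter. The class below is only a sketch (RetriableAvroConverter is hypothetical, not part of Apicurio or Kafka Connect) and assumes io.apicurio.registry.utils.converter.AvroConverter is the configured converter:

import java.util.Map;

import io.apicurio.registry.rest.client.exception.RestClientException;
import io.apicurio.registry.utils.converter.AvroConverter;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.errors.RetriableException;
import org.apache.kafka.connect.storage.Converter;

// Hypothetical wrapper: delegates to Apicurio's AvroConverter and re-throws
// registry call failures as Connect's RetriableException so the worker's
// RetryWithToleranceOperator can retry the record instead of failing the task.
public class RetriableAvroConverter implements Converter {

    @SuppressWarnings({"rawtypes", "unchecked"})
    private final Converter delegate = new AvroConverter();

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        delegate.configure(configs, isKey);
    }

    @Override
    public byte[] fromConnectData(String topic, Schema schema, Object value) {
        try {
            return delegate.fromConnectData(topic, schema, value);
        } catch (RestClientException e) {
            // Mark the failure as transient so Connect retries the conversion.
            throw new RetriableException("Apicurio Registry call failed", e);
        }
    }

    @Override
    public SchemaAndValue toConnectData(String topic, byte[] value) {
        try {
            return delegate.toConnectData(topic, value);
        } catch (RestClientException e) {
            throw new RetriableException("Apicurio Registry call failed", e);
        }
    }
}

Such a wrapper would be configured as key.converter/value.converter in place of AvroConverter, keeping the same apicurio.registry.* properties. Note that Connect only retries retriable failures while the connector's errors.retry.timeout is set to a non-zero value.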

Logs

io.apicurio.registry.rest.client.exception.RestClientException: HTTP/1.1 header parser received no bytes
	at io.apicurio.registry.rest.client.impl.ErrorHandler.parseError(ErrorHandler.java:95)
	at io.apicurio.rest.client.JdkHttpClient.sendRequest(JdkHttpClient.java:207)
	at io.apicurio.registry.rest.client.impl.RegistryClientImpl.getContentByGlobalId(RegistryClientImpl.java:324)
	at io.apicurio.registry.resolver.AbstractSchemaResolver.lambda$resolveSchemaByGlobalId$1(AbstractSchemaResolver.java:183)
	at io.apicurio.registry.resolver.ERCache.lambda$getValue$0(ERCache.java:142)
	at io.apicurio.registry.resolver.ERCache.retry(ERCache.java:181)
	at io.apicurio.registry.resolver.ERCache.getValue(ERCache.java:141)
	at io.apicurio.registry.resolver.ERCache.getByGlobalId(ERCache.java:116)
	at io.apicurio.registry.resolver.AbstractSchemaResolver.resolveSchemaByGlobalId(AbstractSchemaResolver.java:178)
	at io.apicurio.registry.resolver.DefaultSchemaResolver.resolveSchemaByArtifactReference(DefaultSchemaResolver.java:164)
	at io.apicurio.registry.serde.AbstractKafkaDeserializer.resolve(AbstractKafkaDeserializer.java:147)
	at io.apicurio.registry.serde.AbstractKafkaDeserializer.deserialize(AbstractKafkaDeserializer.java:104)
	at io.apicurio.registry.serde.AbstractKafkaDeserializer.deserialize(AbstractKafkaDeserializer.java:126)
	at io.apicurio.registry.utils.converter.SerdeBasedConverter.toConnectData(SerdeBasedConverter.java:139)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$4(WorkerSinkTask.java:516)
	at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:173)
	at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:207)
	at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:149)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:516)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:493)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:332)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:234)
	at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:203)
	at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:188)
	at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:829)
HelloSunilSaini added the Bug (Something isn't working) label Apr 18, 2023
@apicurio-bot

apicurio-bot bot commented Apr 18, 2023

Thank you for reporting an issue!

Pinging @jsenko to respond or triage.

@EricWittmann
Member

Can you reproduce this error? Any details on how often it happens? Any specific steps to reproduce?

@Khan-Saad

Khan-Saad commented May 10, 2023

Hi, I'm encountering the exact same issue. Every ~30 minutes to ~1 hour I receive this error, and it drops the request. I looked into the container logs to see if I could gain any insights, but no error is logged for this. Any idea where I can start troubleshooting?
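
A possible starting point, offered as a sketch rather than a confirmed fix: because this exception is not a RetriableException, Connect's per-record retry settings will not re-attempt it, but the connector-level error-handling properties can at least keep the sink task alive and capture the failing records in a dead letter queue while the root cause is investigated. The topic name and timeout values below are placeholders:

errors.tolerance=all
errors.log.enable=true
errors.log.include.messages=true
errors.deadletterqueue.topic.name=my-connector-dlq
errors.deadletterqueue.context.headers.enable=true
errors.retry.timeout=60000
errors.retry.delay.max.ms=10000

Note that the errors.retry.* settings only apply to exceptions Connect considers retriable, so on their own they will not retry this particular failure.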

@nvlong198

nvlong198 commented May 20, 2023

I got the same issue with the newest Registry version, 2.4.2.Final.
Is this caused by the JDK version? I'm using OpenJDK full version "11.0.17+8-LTS".

[2023-05-19 19:15:28,298] INFO [AdminClient clientId=ani-sink-cluster--shared-admin] Node 0 disconnected. (org.apache.kafka.clients.NetworkClient:937)
[2023-05-19 19:17:30,099] ERROR [prod-sink-flx-connector-pk-custid|task-0] Error encountered in task prod-sink-flx-connector-pk-custid-0. Executing stage 'KEY_CONVERTER' with class 'io.apicurio.registry.utils.converter.AvroConverter', where consumed record is {topic='bosit__cfmast', partition=0, offset=160, timestamp=1684498649935, timestampType=CreateTime}. (org.apache.kafka.connect.runtime.errors.LogReporter:66)
io.apicurio.registry.rest.client.exception.RestClientException: HTTP/1.1 header parser received no bytes
        at io.apicurio.registry.rest.client.impl.ErrorHandler.parseError(ErrorHandler.java:95)
        at io.apicurio.rest.client.JdkHttpClient.sendRequest(JdkHttpClient.java:207)
        at io.apicurio.registry.rest.client.impl.RegistryClientImpl.getContentByGlobalId(RegistryClientImpl.java:376)
        at io.apicurio.registry.resolver.AbstractSchemaResolver.lambda$resolveSchemaByGlobalId$1(AbstractSchemaResolver.java:183)
        at io.apicurio.registry.resolver.ERCache.lambda$getValue$0(ERCache.java:156)
        at io.apicurio.registry.resolver.ERCache.retry(ERCache.java:197)
        at io.apicurio.registry.resolver.ERCache.getValue(ERCache.java:155)
        at io.apicurio.registry.resolver.ERCache.getByGlobalId(ERCache.java:125)
        at io.apicurio.registry.resolver.AbstractSchemaResolver.resolveSchemaByGlobalId(AbstractSchemaResolver.java:178)
        at io.apicurio.registry.resolver.DefaultSchemaResolver.resolveSchemaByArtifactReference(DefaultSchemaResolver.java:169)
        at io.apicurio.registry.serde.AbstractKafkaDeserializer.resolve(AbstractKafkaDeserializer.java:147)
        at io.apicurio.registry.serde.AbstractKafkaDeserializer.deserialize(AbstractKafkaDeserializer.java:104)
        at io.apicurio.registry.serde.AbstractKafkaDeserializer.deserialize(AbstractKafkaDeserializer.java:126)
        at io.apicurio.registry.utils.converter.SerdeBasedConverter.toConnectData(SerdeBasedConverter.java:139)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$3(WorkerSinkTask.java:515)
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:180)
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:214)
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:156)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:515)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:495)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:335)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:237)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:206)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:202)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:257)
        at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:177)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:829)
[2023-05-19 19:17:30,116] INFO [prod-sink-flx-connector-pk-custid|task-0] [Producer clientId=connector-dlq-producer-prod-sink-flx-connector-pk-custid-0] Resetting the last seen epoch of partition flx-connector.error-0 to 20 since the associated topicId changed from null to RsyGqS7CSvSlXPK-6i4nUA (org.apache.kafka.clients.Metadata:402
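
Regarding the JDK question above: the "HTTP/1.1 header parser received no bytes" message comes from the JDK's built-in java.net.http client (the JdkHttpClient in the stack trace), and it typically indicates that a pooled keep-alive connection was reused after the server or an intermediary had already closed it, rather than a problem with the JDK version itself. A hedged workaround sketch, assuming the Connect worker is started through the standard Kafka scripts and honors KAFKA_OPTS, is to shorten the HTTP client's keep-alive timeout so idle connections are dropped before the server's idle limit (30 seconds is an arbitrary example value):

export KAFKA_OPTS="-Djdk.httpclient.keepalive.timeout=30"
bin/connect-distributed.sh config/connect-distributed.properties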

@carlesarnal
Member

This is fixed on main with the use of the new client based on Kiota. We will provide instructions on how to migrate existing applications from one version to the other.
