Connector failed to run #111

Open
umesh-kushwaha opened this issue Apr 30, 2024 · 0 comments

umesh-kushwaha commented Apr 30, 2024

I am trying to run the connector, but it fails on startup. Other connectors run fine with the same docker-compose file.

Error:

```
kafka-connect-poc-connect-1 | [2024-04-30 06:56:08,725] INFO Added aliases 'PrincipalConnectorClientConfigOverridePolicy' and 'Principal' to plugin 'org.apache.kafka.connect.connector.policy.PrincipalConnectorClientConfigOverridePolicy' (org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader)
kafka-connect-poc-connect-1 | [2024-04-30 06:56:08,734] ERROR Stopping due to error (org.apache.kafka.connect.cli.ConnectDistributed)
kafka-connect-poc-connect-1 | java.lang.NoSuchFieldError: DEFAULT
kafka-connect-poc-connect-1 |     at org.apache.kafka.connect.runtime.WorkerConfig.baseConfigDef(WorkerConfig.java:258)
kafka-connect-poc-connect-1 |     at org.apache.kafka.connect.runtime.distributed.DistributedConfig.<clinit>(DistributedConfig.java:181)
kafka-connect-poc-connect-1 |     at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:95)
kafka-connect-poc-connect-1 |     at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:80)
kafka-connect-poc-control-center-1
```
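From what I can tell, a `java.lang.NoSuchFieldError` at worker startup usually means two different versions of the Kafka/Connect classes end up on the same classpath, e.g. a jar in the mounted plugin directory that bundles its own kafka-clients or connect-api compiled against a different Kafka than the 5.5.12 worker. Is checking the mounted jars the right way to rule that out? Something like this rough sketch (assuming the connector jars sit in `./connectors` on the host, as in the compose file below):

```sh
# Rough check: does anything in the mounted plugin path ship its own copies of
# the Kafka client/Connect classes? A version different from the worker's own
# libraries could produce a NoSuchFieldError like the one above.
for jar in ./connectors/*.jar; do
  echo "== $jar"
  unzip -l "$jar" | grep -E 'org/apache/kafka/(clients|connect)/' | head -n 5
done
```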

My docker-compose file

```yaml
version: '2.1'

services:

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.23
    ports:
      - "9200:9200"
    environment:
      - discovery.type=single-node
      - cluster.name=elasticsearch
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"

  zookeeper:
    image: zookeeper:3.4.9
    restart: unless-stopped
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zookeeper:2888:3888

  kafka:
    image: confluentinc/cp-enterprise-kafka:5.5.12
    hostname: kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_METRIC_REPORTERS: "io.confluent.metrics.reporter.ConfluentMetricsReporter"
      KAFKA_CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: "kafka:19092"
      KAFKA_CONFLUENT_METRICS_REPORTER_ZOOKEEPER_CONNECT: "zookeeper:2181"
    depends_on:
      - zookeeper

  schema-registry:
    image: confluentinc/cp-schema-registry:5.5.12
    hostname: schema-registry
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: PLAINTEXT://kafka:19092
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8081
    depends_on:
      - zookeeper
      - kafka

  connect:
    image: confluentinc/cp-kafka-connect:5.5.12
    hostname: connect
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: "kafka:19092"
      CONNECT_REST_PORT: 8083
      CONNECT_GROUP_ID: "connect-group"
      CONNECT_CONFIG_STORAGE_TOPIC: "connect-configs"
      CONNECT_OFFSET_STORAGE_TOPIC: "connect-offsets"
      CONNECT_STATUS_STORAGE_TOPIC: "connect-status"
      CONNECT_KEY_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
      CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
      CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.storage.StringConverter"
      CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
      CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
      CONNECT_REST_ADVERTISED_HOST_NAME: "connect"
      CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"
      CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR,io.zeebe.kafka.connect=TRACE,io.zeebe.client=WARN"
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1"
      CONNECT_PLUGIN_PATH: '/usr/share/java,/etc/kafka-connect/jars'
    volumes:
      - ./connectors:/etc/kafka-connect/jars/
    depends_on:
      - schema-registry
      - kafka
```
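For completeness: the worker stops before its REST server comes up, so I can't use the plugin listing to confirm whether the connector is even picked up from `/etc/kafka-connect/jars`. Once it starts cleanly, this is what I would check (standard Connect REST endpoint, using the 8083 port mapping above):

```sh
# List the connector plugins the worker discovered on its plugin.path
curl -s http://localhost:8083/connector-plugins
```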

Can anyone help me fix this?
