Description
I am trying to integrate Kerberized Kafka with Pinot. When I start Pinot via quick-start-batch.sh and set the JAAS file location in quick-start-batch.sh, it works as expected and I am able to ingest data from Kerberized Kafka. However, when I start each component (controller, server, broker) independently, set the JAAS file location in all three scripts (start-controller.sh, start-server.sh, start-broker.sh), and then try to add a realtime table, it throws the error below.
Error message:

```
2021/07/06 21:16:01.653 INFO [AddTableCommand] [main]
{"code":500,"error":"org.apache.kafka.common.KafkaException: Failed to construct kafka consumer"}
```
When I checked the controller log file, I found this error:

```
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration.
System property 'java.security.auth.login.config' is not set
```
It looks like it is not able to read the `java.security.auth.login.config` property from the start-controller.sh file. When I use the same property in quick-start-batch.sh, it works fine. Please refer to the details below for more information.
In quick-start-batch.sh:

```shell
exec "$JAVACMD" $ALL_JAVA_OPTS \
  -classpath "$CLASSPATH" \
  -Djava.security.auth.login.config="/home/dev/client_jaas.conf" \
  -Dapp.name="quick-start-batch" \
```
Command used to start Pinot:

```shell
bin/quick-start-batch.sh
```
This WORKED AS EXPECTED; I am able to see the data in the Pinot table from the Kerberized Kafka topic.
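For reference, the `'KafkaClient' entry` the error message refers to is a stanza like the one below in client_jaas.conf. This is only a sketch: the login module options, keytab path, and principal here are placeholders, not the actual values from my setup.

```
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/path/to/client.keytab"
  principal="client@EXAMPLE.COM";
};
```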
Now I have created a controller.conf file (and also server.conf and broker.conf):
```
controller.data.dir=hdfs://path/in/hdfs/for/controller/segment
controller.local.temp.dir=/tmp/pinot/
controller.zk.str=<ZOOKEEPER_HOST:ZOOKEEPER_PORT>
controller.enable.split.commit=true
controller.access.protocols.http.port=9000
controller.helix.cluster.name=PinotCluster
pinot.controller.storage.factory.class.hdfs=org.apache.pinot.plugin.filesystem.HadoopPinotFS
pinot.controller.storage.factory.hdfs.hadoop.conf.path=/path/to/hadoop/conf/directory/
pinot.controller.segment.fetcher.protocols=file,http,hdfs
pinot.controller.segment.fetcher.hdfs.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
pinot.controller.segment.fetcher.hdfs.hadoop.kerberos.principle=<your kerberos principal>
pinot.controller.segment.fetcher.hdfs.hadoop.kerberos.keytab=<your kerberos keytab>
controller.vip.port=9000
controller.port=9000
pinot.set.instance.id.to.hostname=true
pinot.server.grpc.enable=true
```
I used the same property in the start-controller.sh, start-server.sh, and start-broker.sh files:
```shell
exec "$JAVACMD" $ALL_JAVA_OPTS \
  -classpath "$CLASSPATH" \
  -Djava.security.auth.login.config="/home/dev/client_jaas.conf" \
  -Dapp.name="start-controller" \
```
Extra Kerberos properties in the table config file:
```
"security.protocol": "SASL_PLAINTEXT",
"sasl.kerberos.service.name": "kafka"
```
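For context, these two properties sit inside the table's `streamConfigs` map. A rough sketch of that fragment is below; the topic name, broker list, and the other stream properties are placeholders I have filled in for illustration, not the actual values from my table config.

```json
"streamConfigs": {
  "streamType": "kafka",
  "stream.kafka.topic.name": "<KAFKA_TOPIC>",
  "stream.kafka.broker.list": "<KAFKA_BROKER:PORT>",
  "stream.kafka.consumer.type": "lowlevel",
  "security.protocol": "SASL_PLAINTEXT",
  "sasl.kerberos.service.name": "kafka"
}
```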
I started each component independently (server, broker, controller):
```shell
bin/pinot-admin.sh StartController \
  -configFileName /home/dev/Pinot/apache-pinot-incubating-0.7.1-bin/bin/controller.conf
```
Note: I also created server.conf and broker.conf and started the server and broker with them, same as above.
Add table:

```shell
bin/pinot-admin.sh AddTable \
  -schemaFile /home/dev/Pinot/transaction_schema.json \
  -tableConfigFile /home/dev/Pinot/transaction_realtime_config.json \
  -exec
```
Error message:

```
2021/07/06 21:16:01.653 INFO [AddTableCommand] [main]
{"code":500,"error":"org.apache.kafka.common.KafkaException: Failed to construct kafka consumer"}
```
When I checked the controller log file, I found the error below:

```
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration.
System property 'java.security.auth.login.config' is not set
```
P.S. I am also starting the remaining components (broker and server) the same way as the controller, i.e. with the server.conf/broker.conf files, and I added the JAAS file location in the start-server.sh and start-broker.sh files.
Kindly suggest what the issue is here. Why is it not able to read the `java.security.auth.login.config` property from these files?
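As a workaround sketch, I could also try supplying the flag through an environment variable that the launcher script expands into its JVM options, instead of editing the generated script. Whether the 0.7.1 scripts actually read a `JAVA_OPTS`-style variable is my assumption here, not something I have confirmed.

```shell
# Sketch: export the JAAS flag once so any launcher script that expands
# $JAVA_OPTS into its JVM options would pick it up.
# The variable name JAVA_OPTS is an assumption; the path is from my setup.
JAAS_CONF="/home/dev/client_jaas.conf"
export JAVA_OPTS="-Djava.security.auth.login.config=${JAAS_CONF}"
echo "$JAVA_OPTS"
```

After exporting, the controller would be started with the same `bin/pinot-admin.sh StartController` command as above.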
When I compared the logs generated by quick-start-batch.sh and start-controller.sh, this is what I found.
In the quick-start-batch.sh log:

```
2021/07/08 11:27:41.140 INFO [ServerCnxnFactory] [Thread-2] Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory
2021/07/08 11:27:41.145 INFO [NIOServerCnxnFactory] [Thread-2] Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers.
2021/07/08 11:27:41.151 INFO [NIOServerCnxnFactory] [Thread-2] binding to port 0.0.0.0/0.0.0.0:2123
2021/07/08 11:27:41.168 INFO [ZKDatabase] [Thread-2] zookeeper.snapshotSizeFactor = 0.33
2021/07/08 11:27:41.172 INFO [FileTxnSnapLog] [Thread-2] Snapshotting: 0x0 to /tmp/1625743660221/baseballStats/rawdata/PinotZkDir/version-2/snapshot.0
2021/07/08 11:27:41.175 INFO [FileTxnSnapLog] [Thread-2] Snapshotting: 0x0 to /tmp/1625743660221/baseballStats/rawdata/PinotZkDir/version-2/snapshot.0
2021/07/08 11:27:41.193 INFO [ContainerManager] [Thread-2] Using checkIntervalMs=60000 maxPerMinute=10000
2021/07/08 11:27:42.119 INFO [ZkClient] [main] JAAS File name: /home/dev/client_jaas.conf
2021/0
```
I can see the JAAS file name in this log, but I could not find the keyword "JAAS" anywhere in the log generated by the controller. My guess is that when we pass the -configFileName parameter with the controller.conf location while starting the controller, it expects the JAAS file property (`java.security.auth.login.config`) in controller.conf, and it is not able to read it from the start-controller.sh file.
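The log comparison above can be reproduced in script form. The two log lines below are copied from the logs quoted earlier; the file names quickstart.log and controller.log are just stand-ins for the actual log files.

```shell
# Recreate one representative line from each log (contents copied from above;
# file names are stand-ins for the real log files).
printf '%s\n' '2021/07/08 11:27:42.119 INFO [ZkClient] [main] JAAS File name: /home/dev/client_jaas.conf' > quickstart.log
printf '%s\n' '2021/07/08 11:27:41.151 INFO [NIOServerCnxnFactory] [Thread-2] binding to port 0.0.0.0/0.0.0.0:2123' > controller.log

# The quick-start log mentions the JAAS file; the controller log does not.
grep -ci 'jaas' quickstart.log                                    # prints 1
grep -ci 'jaas' controller.log || echo 'no JAAS entry in controller log'
```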