Kafka-php is a PHP client with Zookeeper integration for Apache Kafka. It only supports Kafka 0.8, which is still under development, so this module is not yet production ready.
The Zookeeper integration performs the following jobs:
- Loads broker metadata from Zookeeper before the client communicates with the Kafka server
- Watches broker state; if a broker changes, the client refreshes the broker and topic metadata it has stored
- Minimum PHP version: 5.3.3.
- Apache Kafka 0.8.x
- You need access to your Kafka instance and the ability to connect to it over TCP. You can obtain a copy of Kafka and setup instructions at https://github.com/kafka-dev/kafka (see the kafka-08-quick-start guide)
- The PHP Zookeeper extension
Add the lib directory to the PHP include_path and use an autoloader like the one in the examples directory (the code follows the PEAR/Zend one-class-per-file convention).
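The autoloading convention above can be sketched concretely. The helper name `kafkaClassToPath` and the `lib` path below are illustrative assumptions, not part of the library:

```php
<?php
// Map a class name to a file path following the PEAR/Zend
// one-class-per-file convention: \Kafka\Protocol\Encoder
// becomes Kafka/Protocol/Encoder.php.
function kafkaClassToPath($class)
{
    // Namespace separators and PEAR-style underscores both
    // become directory separators.
    return str_replace(array('\\', '_'), DIRECTORY_SEPARATOR, $class) . '.php';
}

// Adjust this to the lib directory of your kafka-php checkout.
$libDir = __DIR__ . '/lib';

spl_autoload_register(function ($class) use ($libDir) {
    $file = $libDir . DIRECTORY_SEPARATOR . kafkaClassToPath($class);
    if (is_file($file)) {
        require $file;
    }
});
```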
hostList
: ZooKeeper host list, e.g. 127.0.0.1:2181,192.168.1.114:2181
timeout
: ZooKeeper session timeout
ack
: How many acknowledgements the server should receive before responding to the request.
topicName
: The topic that data is being published to.
partitionId
: The partition that data is being published to.
messages
: [Array] The messages to publish.
Sends the message sets to the server.
$produce = \Kafka\Produce::getInstance('localhost:2181', 3000);
$produce->setRequireAck(-1);
$produce->setMessages('test', 0, array('test1111111'));
$produce->setMessages('test6', 0, array('test1111111'));
$produce->setMessages('test6', 2, array('test1111111'));
$produce->setMessages('test6', 1, array('test111111111133'));
$result = $produce->send();
var_dump($result);
hostList
: ZooKeeper host list, e.g. 127.0.0.1:2181,192.168.1.114:2181
timeout
: ZooKeeper session timeout
groupName
: The consumer group name.
topicName
: The topic that data is being fetched from.
partitionId
: The partition that data is being fetched from.
offset
: The offset to start fetching from. Default 0.
Returns a fetch-message iterator: a \Kafka\Protocol\Fetch\Topic object.
This object is an iterator.
key
: topic name
value
: \Kafka\Protocol\Fetch\Partition
\Kafka\Protocol\Fetch\Partition is also an iterator.
key
: partition id
value
: a MessageSet object
The Partition object also exposes the partition fetch error code and the partition fetch high offset (see getHighOffset() in the example below).
The MessageSet object is also an iterator; its values are \Kafka\Protocol\Fetch\Message objects.
$consumer = \Kafka\Consumer::getInstance('localhost:2181');
$consumer->setGroup('testgroup');
$consumer->setPartition('test', 0);
$consumer->setPartition('test6', 2, 10);
$result = $consumer->fetch();
foreach ($result as $topicName => $topic) {
foreach ($topic as $partId => $partition) {
var_dump($partition->getHighOffset());
foreach ($partition as $message) {
var_dump((string)$message);
}
}
}
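The per-partition error code described above can be checked while iterating, so partitions that failed to fetch (for example, an out-of-range offset) are skipped. A sketch based on the example above; the accessor name `getErrCode()` is assumed from the error-code description, so verify it against the Partition class:

```php
<?php
$consumer = \Kafka\Consumer::getInstance('localhost:2181');
$consumer->setGroup('testgroup');
$consumer->setPartition('test', 0);
$result = $consumer->fetch();
foreach ($result as $topicName => $topic) {
    foreach ($topic as $partId => $partition) {
        // Assumed accessor for the partition fetch error code; 0 means no error.
        if ($partition->getErrCode() !== 0) {
            continue;
        }
        foreach ($partition as $message) {
            echo (string) $message, "\n";
        }
    }
}
```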
The produce API is used to send message sets to the server. For efficiency it allows sending message sets intended for many topic partitions in a single request.
\Kafka\Protocol\Encoder::produceRequest
array(
'required_ack' => 1,
// This field indicates how many acknowledgements the servers should receive before responding to the request. default `0`
// If it is 0 the server will not send any response
// If it is -1 the server will block until the message is committed by all in sync replicas before sending a response
// For any number > 1 the server will block waiting for this number of acknowledgements to occur
'timeout' => 1000,
// This provides a maximum time in milliseconds the server can await the receipt of the number of acknowledgements in RequiredAcks.
'data' => array(
array(
'topic_name' => 'testtopic',
// The topic that data is being published to. [String]
'partitions' => array(
array(
'partition_id' => 0,
// The partition that data is being published to.
'messages' => array(
'message1',
// [String] message
),
),
),
),
),
);
Array
$data = array(
'required_ack' => 1,
'timeout' => 1000,
'data' => array(
array(
'topic_name' => 'test',
'partitions' => array(
array(
'partition_id' => 0,
'messages' => array(
'message1',
'message2',
),
),
),
),
),
);
$conn = new \Kafka\Socket('localhost', '9092');
$conn->connect();
$encoder = new \Kafka\Protocol\Encoder($conn);
$encoder->produceRequest($data);
$decoder = new \Kafka\Protocol\Decoder($conn);
$result = $decoder->produceResponse();
var_dump($result);
The fetch API is used to fetch a chunk of one or more logs for some topic-partitions. Logically one specifies the topics, partitions, and starting offset at which to begin the fetch and gets back a chunk of messages.
\Kafka\Protocol\Encoder::fetchRequest
array(
'replica_id' => -1,
// The replica id indicates the node id of the replica initiating this request. default `-1`
'max_wait_time' => 100,
// The max wait time is the maximum amount of time in milliseconds to block waiting if insufficient data is available at the time the request is issued. default 100 ms.
'min_bytes' => 64 * 1024, // 64k
// This is the minimum number of bytes of messages that must be available to give a response. default 64k.
'data' => array(
array(
'topic_name' => 'testtopic',
// The topic to fetch data from. [String]
'partitions' => array(
array(
'partition_id' => 0,
// The partition to fetch data from.
'offset' => 0,
// The offset to begin this fetch from. default 0
'max_bytes' => 100 * 1024 * 1024,
// The maximum bytes to include in the message set for this partition. default 100MB
),
),
),
),
);
\Kafka\Protocol\Fetch\Topic iterator
$data = array(
'data' => array(
array(
'topic_name' => 'test',
'partitions' => array(
array(
'partition_id' => 0,
'offset' => 0,
),
),
),
),
);
$conn = new \Kafka\Socket('localhost', '9092');
$conn->connect();
$encoder = new \Kafka\Protocol\Encoder($conn);
$encoder->fetchRequest($data);
$decoder = new \Kafka\Protocol\Decoder($conn);
$result = $decoder->fetchResponse();
var_dump($result);
This API describes the valid offset range available for a set of topic-partitions. As with the produce and fetch APIs requests must be directed to the broker that is currently the leader for the partitions in question. This can be determined using the metadata API.
\Kafka\Protocol\Encoder::offsetRequest
#### param struct
array(
'replica_id' => -1,
// The replica id indicates the node id of the replica initiating this request. default `-1`
'data' => array(
array(
'topic_name' => 'testtopic',
// The topic to get offsets for. [String]
'partitions' => array(
array(
'partition_id' => 0,
// The partition to get offsets for.
'time' => -1,
// Used to ask for all messages before a certain time (ms).
// Specify -1 to receive the latest offsets
// Specify -2 to receive the earliest available offset.
'max_offset' => 1,
// The maximum number of offsets to return. default 10000.
),
),
),
),
);
Array.
$data = array(
'data' => array(
array(
'topic_name' => 'test',
'partitions' => array(
array(
'partition_id' => 0,
'max_offset' => 10,
'time' => -1,
),
),
),
),
);
$conn = new \Kafka\Socket('localhost', '9092');
$conn->connect();
$encoder = new \Kafka\Protocol\Encoder($conn);
$encoder->offsetRequest($data);
$decoder = new \Kafka\Protocol\Decoder($conn);
$result = $decoder->offsetResponse();
var_dump($result);
The metadata returned is at the partition level, but grouped together by topic for convenience and to avoid redundancy. For each partition the metadata contains the information for the leader as well as for all the replicas and the list of replicas that are currently in-sync.
\Kafka\Protocol\Encoder::metadataRequest
#### param struct
array(
'topic_name1', // topic name
);
Array.
$data = array(
'test'
);
$conn = new \Kafka\Socket('localhost', '9092');
$conn->connect();
$encoder = new \Kafka\Protocol\Encoder($conn);
$encoder->metadataRequest($data);
$decoder = new \Kafka\Protocol\Decoder($conn);
$result = $decoder->metadataResponse();
var_dump($result);
These APIs allow for centralized management of offsets.
\Kafka\Protocol\Encoder::commitOffsetRequest
#### param struct
array(
'group_id' => 'testgroup',
// consumer group
'data' => array(
array(
'topic_name' => 'testtopic',
// The topic to commit offsets for. [String]
'partitions' => array(
array(
'partition_id' => 0,
// The partition to commit the offset for.
'offset' => 0,
// The offset to commit.
'time' => -1,
// If the timestamp field is set to -1, the broker sets the timestamp to the receive time before committing the offset.
),
),
),
),
);
Array.
$data = array(
'group_id' => 'testgroup',
'data' => array(
array(
'topic_name' => 'test',
'partitions' => array(
array(
'partition_id' => 0,
'offset' => 2,
),
),
),
),
);
$conn = new \Kafka\Socket('localhost', '9092');
$conn->connect();
$encoder = new \Kafka\Protocol\Encoder($conn);
$encoder->commitOffsetRequest($data);
$decoder = new \Kafka\Protocol\Decoder($conn);
$result = $decoder->commitOffsetResponse();
var_dump($result);
These APIs allow for centralized management of offsets.
\Kafka\Protocol\Encoder::fetchOffsetRequest
#### param struct
array(
'group_id' => 'testgroup',
// consumer group
'data' => array(
array(
'topic_name' => 'testtopic',
// The topic to fetch committed offsets for. [String]
'partitions' => array(
array(
'partition_id' => 0,
// The partition to fetch the committed offset for.
),
),
),
),
);
Array.
$data = array(
'group_id' => 'testgroup',
'data' => array(
array(
'topic_name' => 'test',
'partitions' => array(
array(
'partition_id' => 0,
),
),
),
),
);
$conn = new \Kafka\Socket('localhost', '9092');
$conn->connect();
$encoder = new \Kafka\Protocol\Encoder($conn);
$encoder->fetchOffsetRequest($data);
$decoder = new \Kafka\Protocol\Decoder($conn);
$result = $decoder->fetchOffsetResponse();
var_dump($result);