
0.8.4

@edenhill released this on 18 Aug 09:49

The 0.8.4 release introduces the new C++ interface, a configure script for increased portability, and a large number of enhancements and bug fixes.

New features and enhancements

  • Native C++ interface - see src-cpp/rdkafkacpp.h
  • ./configure support using the mklove framework: https://github.com/edenhill/mklove
  • pkg-config rdkafka.pc file is now generated during build
  • Broker-based offset storage using OffsetCommit and OffsetFetch
  • Added Metadata API (issues #90 and #94)
  • rdkafka_example: Added -L (list) mode to list metadata
  • Added batch producer rd_kafka_produce_batch() for higher-throughput producing (issue #112); see the batch produce sketch after this list
  • Added queued.max.messages.kbytes to limit the local consumer queue size (issue #81); see the configuration sketch after this list
  • Added enforce.isr.cnt to fail produce requests locally when the cluster's ISR count drops below this value (issue #91)
  • rdkafka_performance: Added -G <pipe command> to pipe stats output directly to command.
  • rdkafka_performance: Added -r <msgs/s> producer rate limiting
  • Added broker.address.family config property (limits broker connectivity to v4 or v6)
  • Added experimental msg_dr_cb() passing a complex message struct (NOT API SAFE just yet)
  • Added rd_kafka_opaque() to return the opaque pointer set with rd_kafka_conf_set_opaque()
  • Added socket_cb and open_cb config properties (issue #77) (for CLOEXEC)
  • Added RPM package spec file for building RPM package - see rpm/librdkafka.spec
  • Added "msgs" to toppar stats output
  • rdkafka_performance: added tabular output, among other improvements
  • Solaris/SunOS support
  • Bart Warmerdam added an example of how to integrate with ZooKeeper in examples/rdkafka_zookeeper_example.c
  • Support for infinite message timeouts (message.timeout.ms=0) (thanks to Bart Warmerdam)
  • Added socket.keepalive.enable config property to enable SO_KEEPALIVE
  • Added socket.max.fails to disconnect from broker on request timeouts (defaults to 3)
  • Added produce.offset.report config property to propagate the offset of each produced message back to the application
  • Added RD_KAFKA_OFFSET_TAIL(cnt) to start consuming cnt messages from the end of the partition
  • New consumer queue re-routing (rd_kafka_consume_start_queue()) to pull messages from multiple topics and partitions on the same thread with a single call; see the consumer queue sketch after this list
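
The new batch producer API enqueues an array of messages with a single call. The following minimal sketch shows the general shape; the broker address, topic name and payloads are placeholders, not from this release's examples:

```c
#include <stdio.h>
#include <string.h>
#include <librdkafka/rdkafka.h>

/* Minimal sketch: enqueue three messages with one rd_kafka_produce_batch()
 * call. "localhost:9092" and "test" are placeholder broker/topic names. */
static void produce_batch_example (void) {
        char errstr[512];
        rd_kafka_conf_t *conf = rd_kafka_conf_new();
        rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf,
                                      errstr, sizeof(errstr));
        rd_kafka_brokers_add(rk, "localhost:9092");
        rd_kafka_topic_t *rkt = rd_kafka_topic_new(rk, "test", NULL);

        rd_kafka_message_t msgs[3];
        memset(msgs, 0, sizeof(msgs));
        for (int i = 0 ; i < 3 ; i++) {
                msgs[i].payload = (void *)"hello";
                msgs[i].len = strlen("hello");
        }

        /* Returns the number of messages enqueued; messages that could not
         * be enqueued have their .err field set. */
        int enqueued = rd_kafka_produce_batch(rkt, RD_KAFKA_PARTITION_UA,
                                              RD_KAFKA_MSG_F_COPY, msgs, 3);
        printf("Enqueued %d/3 messages\n", enqueued);

        /* Serve delivery reports until all messages are handled. */
        while (rd_kafka_outq_len(rk) > 0)
                rd_kafka_poll(rk, 100);

        rd_kafka_topic_destroy(rkt);
        rd_kafka_destroy(rk);
}
```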
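
The consumer queue re-routing lets one thread drain several partitions through a single queue. A minimal sketch, assuming a placeholder topic "test" with two partitions and combining it with the new RD_KAFKA_OFFSET_TAIL() macro:

```c
#include <stdio.h>
#include <librdkafka/rdkafka.h>

/* Sketch: route partitions 0 and 1 into one queue and consume both from a
 * single thread. "localhost:9092" and "test" are placeholders. */
static void consume_queue_example (void) {
        char errstr[512];
        rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_CONSUMER, rd_kafka_conf_new(),
                                      errstr, sizeof(errstr));
        rd_kafka_brokers_add(rk, "localhost:9092");
        rd_kafka_topic_t *rkt = rd_kafka_topic_new(rk, "test", NULL);
        rd_kafka_queue_t *rkqu = rd_kafka_queue_new(rk);

        /* Start both partitions 100 messages from the tail and re-route
         * their messages to the same queue. */
        rd_kafka_consume_start_queue(rkt, 0, RD_KAFKA_OFFSET_TAIL(100), rkqu);
        rd_kafka_consume_start_queue(rkt, 1, RD_KAFKA_OFFSET_TAIL(100), rkqu);

        for (int i = 0 ; i < 200 ; i++) {
                rd_kafka_message_t *rkm = rd_kafka_consume_queue(rkqu, 1000);
                if (!rkm)
                        continue;               /* Timed out */
                if (!rkm->err)
                        printf("[%d] offset %lld: %.*s\n",
                               (int)rkm->partition, (long long)rkm->offset,
                               (int)rkm->len, (const char *)rkm->payload);
                rd_kafka_message_destroy(rkm);
        }

        rd_kafka_consume_stop(rkt, 0);
        rd_kafka_consume_stop(rkt, 1);
        rd_kafka_queue_destroy(rkqu);
        rd_kafka_topic_destroy(rkt);
        rd_kafka_destroy(rk);
}
```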
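
Several of the new settings are plain configuration properties. The sketch below, with illustrative values only, shows how they might be set on the global and topic configuration objects:

```c
#include <librdkafka/rdkafka.h>

/* Sketch: apply some of the configuration properties introduced in this
 * release. The values are examples, not recommendations. */
static void configure_example (rd_kafka_conf_t *conf,
                               rd_kafka_topic_conf_t *topic_conf) {
        char errstr[512];

        /* Limit the local consumer queue to roughly 64 MB. */
        rd_kafka_conf_set(conf, "queued.max.messages.kbytes", "65536",
                          errstr, sizeof(errstr));
        /* Restrict broker connections to IPv4. */
        rd_kafka_conf_set(conf, "broker.address.family", "v4",
                          errstr, sizeof(errstr));
        /* Enable TCP keep-alives and disconnect after 3 request timeouts. */
        rd_kafka_conf_set(conf, "socket.keepalive.enable", "true",
                          errstr, sizeof(errstr));
        rd_kafka_conf_set(conf, "socket.max.fails", "3",
                          errstr, sizeof(errstr));

        /* Topic-level: never time out queued messages and report the
         * broker-assigned offset of produced messages in delivery reports. */
        rd_kafka_topic_conf_set(topic_conf, "message.timeout.ms", "0",
                                errstr, sizeof(errstr));
        rd_kafka_topic_conf_set(topic_conf, "produce.offset.report", "true",
                                errstr, sizeof(errstr));
}
```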

Bug fixes

  • All signals are now properly blocked by rdkafka threads - no need to set up SIGPIPE handlers
    in the application anymore.
  • Fixed compilation errors and warnings on various platforms
  • rdkafka_performance: use message.timeout.ms default value (issue #74)
  • Added redundant but correctly spelled "committed_offset" field to stats (issue #80)
  • Don't retry failed Fetch requests; wait for the next scheduled Fetch instead (issue #84)
  • rd_kafka_*conf_dump() did not properly return all types of values (namely string2int mappings)
  • More consistent handling of unknown topics and partitions (issue #96)
  • Handle transient partition_cnt=0 during topic auto creation (issue #99)
  • Add pthread_attr_destroy() (thanks to Bart Warmerdam)
  • Fixed a crash when consuming from a cluster with more than 500 partitions (issue #111)
  • Fixed Mac OS X 10.9 compilation
  • Use the .dylib extension for shared libraries on Mac OS X
  • Fixed dead-lock race condition on metadata update (issue #131)
  • Topics were not destroyed in example code (issue #130)

The openSUSE Build Service is now used to ensure portability; see
https://build.opensuse.org/package/show/home:edenhill/librdkafka
for build results and packages.

Sponsors

The fix for issue #131 (dead-lock race condition on metadata update) was sponsored by
the redBorder project (http://redBorder.org).

Big thanks go to LonelyCoder (https://github.com/andoma) for his help in conceiving the C++ interface and for throwing down Changs, sometimes simultaneously - kap kom kap!