Additional kafka_franz input kafka consumer configuration #2573

Open — wants to merge 7 commits into base: main
58 changes: 44 additions & 14 deletions internal/impl/kafka/input_kafka_franz.go
@@ -85,6 +85,18 @@ Finally, it's also possible to specify an explicit offset to consume from by add
Description("The period of time between each commit of the current partition offsets. Offsets are always committed during shutdown.").
Default("5s").
Advanced()).
Field(service.NewIntField("fetch_min_bytes").
Description("The minimum amount of data in bytes the broker should return for a fetch request. The broker waits until at least this much data is available or `fetch_max_wait_duration` elapses.").
Default(1).
Advanced()).
Field(service.NewDurationField("fetch_max_wait_duration").
Description("The maximum amount of time the broker may wait to accumulate `fetch_min_bytes` of data before responding to a fetch request.").
Default("5s").
Advanced()).
Field(service.NewIntField("max_partition_fetch_bytes").
Description("The maximum amount of data in bytes to receive from a single partition in a single fetch request.").
Default(1000000).
Advanced()).
Field(service.NewBoolField("start_from_oldest").
Description("Determines whether to consume from the oldest available offset, otherwise messages are consumed from the latest offset. The setting is applied when creating a new consumer group or the saved offset no longer exists.").
Default(true).
@@ -129,20 +141,23 @@ type batchWithAckFn struct {
}

type franzKafkaReader struct {
seedBrokers []string
topics []string
topicPartitions map[string]map[int32]kgo.Offset
clientID string
rackID string
consumerGroup string
tlsConf *tls.Config
saslConfs []sasl.Mechanism
checkpointLimit int
startFromOldest bool
commitPeriod time.Duration
regexPattern bool
multiHeader bool
batchPolicy service.BatchPolicy
seedBrokers []string
topics []string
topicPartitions map[string]map[int32]kgo.Offset
clientID string
rackID string
consumerGroup string
tlsConf *tls.Config
saslConfs []sasl.Mechanism
checkpointLimit int
startFromOldest bool
commitPeriod time.Duration
fetchMinBytes int
fetchMaxWaitDuration time.Duration
maxPartitionFetchBytes int
regexPattern bool
multiHeader bool
batchPolicy service.BatchPolicy

batchChan atomic.Value
res *service.Resources
@@ -227,6 +242,18 @@ func newFranzKafkaReaderFromConfig(conf *service.ParsedConfig, res *service.Reso
return nil, err
}

if f.fetchMinBytes, err = conf.FieldInt("fetch_min_bytes"); err != nil {
return nil, err
}

if f.fetchMaxWaitDuration, err = conf.FieldDuration("fetch_max_wait_duration"); err != nil {
return nil, err
}

if f.maxPartitionFetchBytes, err = conf.FieldInt("max_partition_fetch_bytes"); err != nil {
return nil, err
}

if f.batchPolicy, err = conf.FieldBatchPolicy("batching"); err != nil {
return nil, err
}
@@ -616,6 +643,9 @@ func (f *franzKafkaReader) Connect(ctx context.Context) error {
kgo.ConsumerGroup(f.consumerGroup),
kgo.ClientID(f.clientID),
kgo.Rack(f.rackID),
kgo.FetchMinBytes(int32(f.fetchMinBytes)),
kgo.FetchMaxWait(f.fetchMaxWaitDuration),
kgo.FetchMaxPartitionBytes(int32(f.maxPartitionFetchBytes)),
}

if f.consumerGroup != "" {
27 changes: 27 additions & 0 deletions website/docs/components/inputs/kafka_franz.md
@@ -59,6 +59,9 @@ input:
checkpoint_limit: 1024
auto_replay_nacks: true
commit_period: 5s
fetch_min_bytes: 1
fetch_max_wait_duration: 5s
max_partition_fetch_bytes: 1000000
start_from_oldest: true
tls:
enabled: false
@@ -213,6 +216,30 @@ The period of time between each commit of the current partition offsets. Offsets
Type: `string`
Default: `"5s"`

### `fetch_min_bytes`

The minimum amount of data in bytes the broker should return for a fetch request. The broker waits until at least this much data is available or `fetch_max_wait_duration` elapses.


Type: `int`
Default: `1`

### `fetch_max_wait_duration`

The maximum amount of time the broker may wait to accumulate `fetch_min_bytes` of data before responding to a fetch request.


Type: `string`
Default: `"5s"`

### `max_partition_fetch_bytes`

The maximum amount of data in bytes to receive from a single partition in a single fetch request.


Type: `int`
Default: `1000000`
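Taken together, the three fields tune the latency/throughput trade-off of fetch requests: raising `fetch_min_bytes` favours fewer, larger responses, while `fetch_max_wait_duration` bounds how long the broker may stall waiting for that much data. A sketch of how they might be combined (broker address, topic, and group names are placeholders, and the values shown favour batching over low latency):

```yaml
input:
  kafka_franz:
    seed_brokers: [ localhost:9092 ]
    topics: [ example_topic ]
    consumer_group: example_group
    # Ask the broker to accumulate up to 64 KiB per fetch...
    fetch_min_bytes: 65536
    # ...but never delay a response longer than 500ms.
    fetch_max_wait_duration: 500ms
    # Cap any single partition's contribution at ~1 MB.
    max_partition_fetch_bytes: 1000000
```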

### `start_from_oldest`

Determines whether to consume from the oldest available offset, otherwise messages are consumed from the latest offset. The setting is applied when creating a new consumer group or the saved offset no longer exists.