
KAFKA-3765; Kafka Code style corrections #1442

Closed
wants to merge 5 commits

Conversation

@rekhajoshm
Contributor

commented May 28, 2016

Removed explicit returns, not needed parentheses, corrected variables, removed unused imports
Using isEmpty/nonEmpty instead of size check, using head, flatmap instead of map-flatten

rekhajoshm added 2 commits May 26, 2016
Merge pull request #2 from apache/trunk
Apache Kafka trunk pull

@rekhajoshm rekhajoshm changed the title [KAFKA-3765] Minor Code style corrections [KAFKA-3765] Kafka Code style corrections May 28, 2016

@rekhajoshm

Contributor Author

commented May 28, 2016

@ijuma Please take a look. Thanks.

@@ -149,7 +149,7 @@ class AdminClient(val time: Time,
        return List.empty[ConsumerSummary]

      if (group.protocolType != ConsumerProtocol.PROTOCOL_TYPE)
-       throw new IllegalArgumentException(s"Group ${groupId} with protocol type '${group.protocolType}' is not a valid consumer group")
+       throw new IllegalArgumentException(s"Group $groupId with protocol type '$group.protocolType' is not a valid consumer group")

@ijuma

ijuma May 28, 2016

Contributor

We need {} in ${group.protocolType}, otherwise the behaviour is wrong. I would prefer if these string interpolation changes weren't included unless it's really obvious that it's fine as the compiler doesn't help us. For example, $groupId is obviously fine. But if there are quotes around the variable and so on, then I think it's fine to keep the braces.
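A minimal sketch of the behaviour ijuma is pointing at; `Group` here is a hypothetical stand-in for the real AdminClient metadata type, used only to show what the interpolator does with and without braces:

```scala
// Hypothetical stand-in for the group metadata; only the string
// interpolation behaviour is being demonstrated.
case class Group(protocolType: String)

object InterpolationDemo extends App {
  val groupId = "my-group"
  val group = Group("consumer")

  // With braces the whole expression is evaluated before formatting:
  println(s"Group $groupId with protocol type '${group.protocolType}'")
  // prints: Group my-group with protocol type 'consumer'

  // Without braces only the bare identifier `group` is interpolated and
  // ".protocolType" becomes literal text -- the wrong behaviour:
  println(s"Group $groupId with protocol type '$group.protocolType'")
  // prints: Group my-group with protocol type 'Group(consumer).protocolType'
}
```

Since the compiler accepts both forms, only the first compiles to the intended message, which is why dropping braces is safe for `$groupId` but not for `$group.protocolType`.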

      val brokerList = partitionList(i).split(":").map(s => s.trim().toInt)
-     if (brokerList.size <= 0)
+     if (brokerList.length <= 0)

@ijuma

ijuma May 28, 2016

Contributor

isEmpty would be better here.
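A small sketch of the suggestion, with a made-up input string in the same "colon-separated ints" shape as the diff above:

```scala
object EmptyCheckDemo extends App {
  // Same shape as the diff above: split a colon-separated field into ints.
  // The input string is illustrative, not taken from Kafka.
  val brokerList = "1001: 1002 :1003".split(":").map(s => s.trim().toInt)

  // These are equivalent, but isEmpty states the intent directly and
  // cannot be mistyped as an off-by-one numeric comparison:
  assert((brokerList.size <= 0) == brokerList.isEmpty)
  assert((brokerList.length <= 0) == brokerList.isEmpty)

  println(brokerList.nonEmpty) // true
}
```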

@@ -134,7 +134,7 @@ case class ProducerRequest(versionId: Short = ProducerRequest.CurrentVersion,
      }
      else {
        val producerResponseStatus = data.map {
-         case (topicAndPartition, data) =>
+         case (topicAndPartition, data: ByteBufferMessageSet) =>

@ijuma

ijuma May 28, 2016

Contributor

This doesn't seem required.

@@ -20,6 +20,7 @@ package kafka.log
import java.util.Arrays
import java.security.MessageDigest
import java.nio.ByteBuffer

@ijuma

ijuma May 28, 2016

Contributor

I don't think it's worth changing the file for this.

      throw new AdminOperationException("replication factor must be larger than 0")
-     if (brokerList.size != brokerList.toSet.size)
+     if (brokerList.length != brokerList.toSet.size)

@ijuma

ijuma May 28, 2016

Contributor

I'm not sure there's a strong reason to prefer length over size. There are pros and cons.
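A sketch of the trade-off being discussed, with illustrative values (the broker ids are made up): on an `Array`, `length` is the native JVM field while `size` goes through an implicit conversion to `ArrayOps`, but the two always agree:

```scala
object LengthVsSizeDemo extends App {
  val brokerList = Array(1001, 1002, 1002) // illustrative ids

  // `length` reads the JVM array field directly; `size` goes through the
  // implicit ArrayOps enrichment. The results are always identical:
  assert(brokerList.length == brokerList.size)

  // The duplicate check from the diff above reads the same either way:
  val hasDuplicates = brokerList.length != brokerList.toSet.size
  println(hasDuplicates) // true: 1002 appears twice
}
```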

@rekhajoshm

Contributor Author

commented May 28, 2016

Thanks for the review @ijuma. Updated, please check. Thanks.

@ijuma

Contributor

commented May 28, 2016

Thanks for the PR @rekhajoshm. It's mostly fine although I still see many examples where size was changed to length for arrays. We don't have a rule or guideline that arrays should use length instead of size in Kafka, so it would be better to remove those changes.

Also, can you please update the PR title to follow our convention as described here https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes? Something like:

KAFKA-3765; Kafka Code style corrections

Finally, the PR description doesn't have to repeat the PR title and please update it so that it doesn't include "using slength where optimum instead of size".

@rekhajoshm rekhajoshm changed the title [KAFKA-3765] Kafka Code style corrections KAFKA-3765; Kafka Code style corrections May 28, 2016

@rekhajoshm

Contributor Author

commented May 28, 2016

Done, please check. Thanks @ijuma.
Where it fits, length over size in Scala: size needs an additional implicit conversion (to SeqLike), so length can provide a significant performance advantage. However, as you called out, the Scala docs don't give a clear guideline on that, hence updated as per your review comment. Thanks.

@ijuma

Contributor

commented May 29, 2016

Thanks for the PR, LGTM. Merging to trunk.

With regards to the Array.size, note that the implicit conversions rely on value classes (i.e. they extend from AnyVal), so there should be no object creation as of Scala 2.10 (it was different in Scala 2.9.x).
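The value-class mechanism described above can be sketched with a simplified, hypothetical stand-in for the real `ArrayOps` (the class and method names below are invented for illustration):

```scala
import scala.language.implicitConversions

// Simplified, hypothetical stand-in for scala.collection ArrayOps:
// because it extends AnyVal it is a value class, so (since Scala 2.10)
// the compiler can elide the wrapper allocation at the call site.
class RichArray[T](val xs: Array[T]) extends AnyVal {
  def mySize: Int = xs.length
}

object ValueClassDemo {
  implicit def toRichArray[T](xs: Array[T]): RichArray[T] = new RichArray(xs)

  def main(args: Array[String]): Unit =
    println(Array(1, 2, 3).mySize) // 3
}
```

This is why `Array.size`, despite going through an enrichment, should not cost an object allocation on 2.10+ (it did on 2.9.x, where the wrapper was a regular class).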

@asfgit asfgit closed this in 404b696 May 29, 2016

@rekhajoshm

Contributor Author

commented May 29, 2016

Thanks @ijuma

bbejeck added a commit to bbejeck/kafka that referenced this pull request Jun 2, 2016
KAFKA-3443 Changes made per review comments
KAFKA-3735: Dispose all RocksObejcts upon completeness

Author: Guozhang Wang <wangguoz@gmail.com>

Reviewers: Roger Hoover, Eno Thereska, Ismael Juma

Closes apache#1411 from guozhangwang/K3735-dispose-rocksobject

MINOR: Specify keyalg RSA for SSL key generation

Author: Sriharsha Chintalapani <harsha@hortonworks.com>

Reviewers: Ismael Juma <ismael@juma.me.uk>

Closes apache#1416 from harshach/ssl-doc-fix

KAFKA-3747; Close `RecordBatch.records` when append to batch fails

With this change, `test_producer_throughput` with message_size=10000, compression_type=snappy and a snappy buffer size of 32k can be executed in a heap of 192m in a local environment (768m is needed without this change).

Author: Ismael Juma <ismael@juma.me.uk>

Reviewers: Guozhang Wang <wangguoz@gmail.com>

Closes apache#1418 from ijuma/kafka-3747-close-record-batch-when-append-fails

MINOR: Fix documentation table of contents and `BLOCK_ON_BUFFER_FULL_DOC`

Author: Ismael Juma <ismael@juma.me.uk>

Reviewers: Gwen Shapira

Closes apache#1423 from ijuma/minor-doc-fixes

Minor: Fix ps command example in docs

The process grep command has been updated. The previous "ps | grep server-1.properties" command showed nothing.

Author: Satendra Kumar <satendra@knoldus.com>

Reviewers: Gwen Shapira

Closes apache#1386 from satendrakumar06/patch-1

KAFKA-3683; Add file descriptor recommendation to ops guide

Adding sizing recommendations for file descriptors to the ops guide.

Author: Dustin Cote <dustin@confluent.io>
Author: Dustin Cote <dustin@dustins-mbp.attlocal.net>

Reviewers: Gwen Shapira

Closes apache#1353 from cotedm/KAFKA-3683 and squashes the following commits:

8120318 [Dustin Cote] Adding file descriptor sizing recommendations
0908aa9 [Dustin Cote] Merge https://github.com/apache/kafka into trunk
32315e4 [Dustin Cote] Merge branch 'trunk' of https://github.com/cotedm/kafka into trunk
13309ed [Dustin Cote] Update links for new consumer API
4dcffc1 [Dustin Cote] Update links for new consumer API

MINOR: Add virtual env to Kafka system test README.md

Author: Liquan Pei <liquanpei@gmail.com>

Reviewers: Gwen Shapira

Closes apache#1346 from Ishiihara/add-venv

MINOR: Removed 1/2 of the hardcoded sleeps in Streams

Author: Eno Thereska <eno.thereska@gmail.com>

Reviewers: Guozhang Wang <wangguoz@gmail.com>, Ismael Juma <ismael@juma.me.uk>

Closes apache#1422 from enothereska/minor-integration-timeout2

KAFKA-3732: Add an auto accept option to kafka-acls.sh

Added a new argument to AclCommand: --yes. When set, automatically answer yes to prompts

Author: Mickael Maison <mickael.maison@gmail.com>

Reviewers: Gwen Shapira

Closes apache#1406 from mimaison/KAFKA-3732

KAFKA-3718; propagate all KafkaConfig __consumer_offsets configs to OffsetConfig instantiation

Kafka has two configurable compression codecs: the one used by the client (source codec) and the one finally used when storing into the log (target codec). The target codec defaults to KafkaConfig.compressionType and can be dynamically configured through zookeeper.

The GroupCoordinator appends group membership information into the __consumer_offsets topic by:
1. making a message with group membership information
2. making a MessageSet with the single message compressed with the source codec
3. doing a log.append on the MessageSet

Without this patch, KafkaConfig.offsetsTopicCompressionCodec doesn't get propagated to OffsetConfig instantiation, so GroupMetadataManager uses a source codec of NoCompressionCodec when making the MessageSet. Let's say we have enough group information such that the message formed exceeds KafkaConfig.messageMaxBytes before compression but would fall below the threshold after compression using our source codec. Even if we had dynamically configured __consumer_offsets with our favorite compression codec, the log.append will throw RecordTooLargeException during analyzeAndValidateMessageSet since the message was unexpectedly uncompressed instead of having been compressed with the source codec defined by KafkaConfig.offsetsTopicCompressionCodec.

Author: Onur Karaman <okaraman@linkedin.com>

Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>, Jason Gustafson <jason@confluent.io>, Ismael Juma <ismael@juma.me.uk>

Closes apache#1394 from onurkaraman/KAFKA-3718

Setting broker state as running after publishing to ZK

junrao

Currently, the broker state is set to running before it registers itself in ZooKeeper.  This is too early in the broker lifecycle.  If clients use the broker state as an indicator that the broker is ready to accept requests, they will get errors.  This change is to delay setting the broker state to running until it's registered in ZK.

Author: Roger Hoover <roger.hoover@gmail.com>

Reviewers: Jun Rao <junrao@gmail.com>

Closes apache#1426 from theduderog/broker-running-after-zk

MINOR: Use `--force` instead of `--yes` in `AclCommand`

To be consistent with `ConfigCommand` and `TopicCommand`.

No release includes this option yet, so we can simply change it.

Author: Ismael Juma <ismael@juma.me.uk>

Reviewers: Mickael Maison, Grant Henke

Closes apache#1430 from ijuma/use-force-instead-of-yes-in-acl-command and squashes the following commits:

bdf3a57 [Ismael Juma] Update `AclCommandTest`
78b8467 [Ismael Juma] Change variable name to `forceOpt`
0bb27af [Ismael Juma] Use `--force` instead of `--yes` in `AclCommand`

MINOR: Fix wrong comments

Author: Yukun Guo <gyk.net@gmail.com>

Reviewers: Gwen Shapira

Closes apache#1198 from gyk/fix-comment

KAFKA-3723: Cannot change size of schema cache for JSON converter

Author: Christian Posta <christian.posta@gmail.com>

Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>

Closes apache#1401 from christian-posta/ceposta-connect-class-cast-error

KAFKA-3710: MemoryOffsetBackingStore shutdown

ExecutorService needs to be shutdown on close, lest a zombie thread
prevent clean shutdown.

ewencp

Author: Peter Davis <peter.davis@expeditors.com>

Reviewers: Liquan Pei <liquanpei@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>

Closes apache#1383 from davispw/KAFKA-3710

MINOR: Delete unused code in FileStreamSourceTask

Author: leisore <leisore@gmail.com>

Reviewers: Ismael Juma <ismael@juma.me.uk>, Ewen Cheslack-Postava <ewen@confluent.io>

Closes apache#1433 from leisore/master

KAFKA-3749; fix "BOOSTRAP_SERVERS_DOC" typo

Author: manuzhang <owenzhang1990@gmail.com>

Reviewers: Guozhang Wang <wangguoz@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>, Ismael Juma <ismael@juma.me.uk>

Closes apache#1420 from manuzhang/KAFKA-3749

MINOR: Fix tracing in KafkaApis.handle()

requestObj() returns null for the o.a.k.c.requests objects so use header() for these.

Once all the requests will have been replaced by o.a.k.c.requests objects, we should be able to clean that up, but in the meantime it's useful to trace both.

Author: Mickael Maison <mickael.maison@gmail.com>

Reviewers: Ismael Juma <ismael@juma.me.uk>

Closes apache#1435 from mimaison/kafkaapis_trace

MINOR: Fix a couple of scaladoc typos

Author: Vahid Hashemian <vahidhashemian@us.ibm.com>

Reviewers: Ismael Juma <ismael@juma.me.uk>

Closes apache#1440 from vahidhashemian/typo06/fix_typos_in_code_comments

KAFKA-3682; ArrayIndexOutOfBoundsException thrown by SkimpyOffsetMap.get() when full

Limited the number of attempts to the number of map slots after the internal
positionOf() goes into linear search mode.
Added a unit test.

Co-developed with mimaison

Author: edoardo <ecomar@uk.ibm.com>

Reviewers: Jun Rao <junrao@gmail.com>

Closes apache#1352 from edoardocomar/KAFKA-3682

KAFKA-3678: Removed sleep from streams integration tests

Author: Eno Thereska <eno.thereska@gmail.com>

Reviewers: Guozhang Wang <wangguoz@gmail.com>

Closes apache#1439 from enothereska/KAFKA-3678-timeouts1

KAFKA-3767; Add missing license to connect-test.properties

This addresses https://issues.apache.org/jira/browse/KAFKA-3767.

Author: Sasaki Toru <sasakitoa@nttdata.co.jp>

Reviewers: Ismael Juma <ismael@juma.me.uk>

Closes apache#1443 from sasakitoa/test_failure_no_license

KAFKA-3158; ConsumerGroupCommand should tell whether group is actually dead

This patch differentiates between a consumer group that is rebalancing and one that is dead, and reports the appropriate error message.

Author: Ishita Mandhan <imandha@us.ibm.com>

Reviewers: Vahid Hashemian <vahidhashemian@us.ibm.com>, Jason Gustafson <jason@confluent.io>, Ismael Juma <ismael@juma.me.uk>

Closes apache#1429 from imandhan/KAFKA-3158

KAFKA-3765; Kafka Code style corrections

Removed explicit returns, not needed parentheses, corrected variables, removed unused imports
Using isEmpty/nonEmpty  instead of size check, using head, flatmap instead of map-flatten

Author: Joshi <rekhajoshm@gmail.com>
Author: Rekha Joshi <rekhajoshm@gmail.com>

Reviewers: Ismael Juma <ismael@juma.me.uk>

Closes apache#1442 from rekhajoshm/KAFKA-3765

MINOR: Remove synchronized as the tasks are executed sequentially

Author: Liquan Pei <liquanpei@gmail.com>

Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>

Closes apache#1441 from Ishiihara/remove-synchronized

MINOR: Avoid trace logging computation in `checkEnoughReplicasReachOffset`

`numAcks` is only used in the `trace` logging statement so it should be a `def` instead of a `val`. Also took the chance to improve the code and documentation a little.

Author: Ismael Juma <ismael@juma.me.uk>

Reviewers: Guozhang Wang <wangguoz@gmail.com>, Ewen Cheslack-Postava <ewen@confluent.io>

Closes apache#1449 from ijuma/minor-avoid-trace-logging-computation-in-partition
gfodor added a commit to AltspaceVR/kafka that referenced this pull request Jun 3, 2016
KAFKA-3765; Kafka Code style corrections
kamalcph pushed a commit to kamalcph/kafka that referenced this pull request Jun 28, 2016
KAFKA-3765; Kafka Code style corrections