
MINOR: Fix typos in code comments #662

Conversation

vahidhashemian
Contributor

No description provided.

vahidhashemian and others added 10 commits December 10, 2015 11:54
…ternal rebalancer

Author: Gwen Shapira <cshapi@gmail.com>

Reviewers: Ismael Juma <ismael@juma.me.uk>, Sriharsha Chintalapani <harsha@hortonworks.com>, Ewen Cheslack-Postava <ewen@confluent.io>

Closes apache#611 from gwenshap/KAFKA-2926
Fixed version sanity checks by updating the kafkatest version to match the kafka version

Author: Geoff Anderson <geoff@confluent.io>

Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>

Closes apache#656 from granders/KAFKA-2928-fix-version-sanity-checks
Partition re-assignment tests with and without broker failure.

Author: Anna Povzner <anna@confluent.io>

Reviewers: Ben Stopford <ben@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>, Geoff Anderson <geoff@confluent.io>

Closes apache#655 from apovzner/kafka_2896
Split kafka logging into two levels - DEBUG and INFO, and do not collect DEBUG by default.

Author: Geoff Anderson <geoff@confluent.io>

Reviewers: Ben Stopford <ben@confluent.io>, Ewen Cheslack-Postava <ewen@confluent.io>

Closes apache#657 from granders/KAFKA-2927-reduce-log-footprint
Moves test output from the project files and allows `gradle clean` to clean up the output.

Author: Grant Henke <granthenke@gmail.com>

Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>

Closes apache#664 from granthenke/target
guozhangwang
* a test for ktable state store creation

Author: Yasuhiro Matsuda <yasuhiro@confluent.io>

Reviewers: Guozhang Wang

Closes apache#661 from ymatsuda/more_ktable_test
…uivalent

Author: Grant Henke <granthenke@gmail.com>

Reviewers: Ewen Cheslack-Postava <ewen@confluent.io>

Closes apache#663 from granthenke/offset-list
https://issues.apache.org/jira/browse/KAFKA-2981

Author: Xin Wang <best.wangxin@163.com>

Reviewers: Guozhang Wang

Closes apache#668 from vesense/patch-2
@@ -301,7 +301,7 @@ public void putConnectorConfig(String connector, Map<String, String> properties)
*/
public void putTaskConfigs(Map<ConnectorTaskId, Map<String, String>> configs) {
// Make sure we're at the end of the log. We should be the only writer, but we want to make sure we don't have
// any outstanding lagging data to consume.
// any outstanding logging data to consume.
Contributor

lagging is actually the correct word here. We are consuming messages that are outstanding because the consumer is lagging behind the latest data in the broker.
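The "read to the end of the log before writing" pattern being discussed can be illustrated with a small sketch. This is a hypothetical, self-contained model, not Kafka Connect's actual `KafkaBasedLog` API: the single writer first drains any lagging records, so it has consumed up to the end of the log, and only then appends.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the pattern in putTaskConfigs; not the real
// Connect implementation. A reader tracks an offset into a shared log.
public class ReadToEndSketch {
    private final List<String> log = new ArrayList<>();
    private int offset = 0; // how far this reader has consumed

    void append(String record) {
        log.add(record);
    }

    // Drain any outstanding lagging data: records written earlier that
    // this consumer has not yet caught up on (it is lagging behind the
    // latest data in the broker, as the review comment explains).
    List<String> readToEnd() {
        List<String> lagging = new ArrayList<>(log.subList(offset, log.size()));
        offset = log.size();
        return lagging;
    }

    public static void main(String[] args) {
        ReadToEndSketch taskConfigLog = new ReadToEndSketch();
        taskConfigLog.append("task-config-1");
        taskConfigLog.append("task-config-2");
        List<String> caughtUp = taskConfigLog.readToEnd(); // consume lagging data first
        taskConfigLog.append("task-config-3");             // now safe to write
        System.out.println(caughtUp.size());               // prints 2
    }
}
```

Even with a single writer, the read-to-end step matters because the local consumer can still be behind the broker's latest data.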

@gwenshap
Contributor

oh wow, thanks for all the corrections. It must have been a serious effort.

Just one little correction that should be reverted (see comment above), and I'll be happy to merge.

…tBrokerFailure

I can reproduce this transient failure; it seldom happens.
The code is like below:
    // rolling bounce brokers
    for (i <- 0 until numServers) {
      for (server <- servers) {
        server.shutdown()
        server.awaitShutdown()
        server.startup()
        Thread.sleep(2000)
      }

      // Make sure the producer does not see any exception
      // in returned metadata due to broker failures
      assertFalse(scheduler.failed)

      // Make sure the leader still exists after bouncing brokers
      (0 until numPartitions).foreach(partition => TestUtils.waitUntilLeaderIsElectedOrChanged(zkUtils, topic1, partition))
    }
The brokers keep rolling-restarting while the producer keeps sending messages.
In every loop, the test waits for the election of a partition leader; but if the election is slow, more messages get buffered in the RecordAccumulator's BufferPool.
The buffer limit is set to 30000, so a TimeoutException("Failed to allocate memory within the configured max blocking time") shows up when the pool runs out of memory.
Since every broker restart sleeps for 2000 ms, this transient failure seldom happens; but the shorter I make the sleep period, the more likely the failure becomes.
For example, if the broker acting as controller is restarted, it takes time to elect a new controller first and then the partition leader, which leaves even more messages blocked in the KafkaProducer's RecordAccumulator BufferPool.
In this fix I simply enlarge the producer's buffer size to 1 MB.
@guozhangwang, could you give some comments?
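The failure mode described above can be sketched numerically. This is a hedged illustration, not Kafka's actual BufferPool accounting; the record size and backlog count below are made-up assumptions:

```java
// Hypothetical model of why a small producer buffer fills up while leader
// election stalls sends; not Kafka's real BufferPool implementation.
public class BufferPoolSketch {
    // True if `pending` buffered records of `recordSize` bytes fit in a
    // pool of `bufferMemory` bytes.
    static boolean fits(long bufferMemory, int pending, int recordSize) {
        return (long) pending * recordSize <= bufferMemory;
    }

    public static void main(String[] args) {
        // With the 30000-byte limit, a backlog built up during a slow
        // election exhausts memory, and allocation eventually fails with
        // "Failed to allocate memory within the configured max blocking time".
        System.out.println(fits(30_000, 400, 100));    // prints false
        // Enlarging the buffer to 1 MB absorbs the same backlog.
        System.out.println(fits(1_048_576, 400, 100)); // prints true
    }
}
```

The longer the election stalls, the larger `pending` grows, which is why shrinking the 2000 ms sleep makes the failure more likely.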

Author: jinxing <jinxing@fenbi.com>
Author: ZoneMayor <jinxing6042@126.com>

Reviewers: Guozhang Wang

Closes apache#648 from ZoneMayor/trunk-KAFKA-2837
@gwenshap
Contributor

@vahidhashemian will you have time to correct my comment above?

@vahidhashemian
Contributor Author

@gwenshap yes, I'll try to get to it today. sorry for the delay.

@vahidhashemian
Contributor Author

Will create a new pull request.

@vahidhashemian
Contributor Author

The new patch is available at #673.
