
[FLINK-23423][docs-zh] Translate the page of "Elasticsearch Connector" into Chinese #16547

Merged
merged 6 commits into apache:master on Aug 3, 2021

Conversation

huxixiang (Contributor)

What is the purpose of the change

Translate the page of "Elasticsearch Connector" into Chinese

The page url is "https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/connectors/datastream/elasticsearch/".

Brief change log

Translate "flink/docs/content.zh/docs/connectors/datastream/elasticsearch.md"

Verifying this change

This change is a trivial rework / code cleanup without any test coverage.

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (no)
  • The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
  • The serializers: (no)
  • The runtime per-record code paths (performance sensitive): (no)
  • Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
  • The S3 file system connector: (no)

Documentation

  • Does this pull request introduce a new feature? (no)
  • If yes, how is the feature documented? (no)

@flinkbot (Collaborator)

flinkbot commented Jul 21, 2021

Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
to review your pull request. We will use this comment to track the progress of the review.

Automated Checks

Last check on commit ca79e17 (Sat Aug 28 13:07:04 UTC 2021)

Warnings:

  • No documentation files were touched! Remember to keep the Flink docs up to date!

Mention the bot in a comment to re-run the automated checks.

Review Progress

  • ❓ 1. The [description] looks good.
  • ❓ 2. There is [consensus] that the contribution should go into Flink.
  • ❓ 3. Needs [attention] from.
  • ❓ 4. The change fits into the overall [architecture].
  • ❓ 5. Overall code [quality] is good.

Please see the Pull Request Review Guide for a full explanation of the review process.


The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:

  • @flinkbot approve description to approve one or more aspects (aspects: description, consensus, architecture and quality)
  • @flinkbot approve all to approve all aspects
  • @flinkbot approve-until architecture to approve everything until architecture
  • @flinkbot attention @username1 [@username2 ..] to require somebody's attention
  • @flinkbot disapprove architecture to remove an approval you gave earlier

@flinkbot (Collaborator)

flinkbot commented Jul 21, 2021

CI report:

Bot commands
The @flinkbot bot supports the following commands:
  • @flinkbot run travis re-run the last Travis build
  • @flinkbot run azure re-run the last Azure build

@movesan (Contributor) left a comment:

There should be a space between the comment marker "//" and the text that follows.

@@ -396,13 +374,13 @@ input.addSink(new ElasticsearchSink(
RequestIndexer indexer) {

if (ExceptionUtils.findThrowable(failure, EsRejectedExecutionException.class).isPresent()) {
// full queue; re-add document for indexing
//队列已满;重新添加文档进行索引
Contributor:

Comment indentation issue.

indexer.add(action)
} else if (ExceptionUtils.findThrowable(failure, ElasticsearchParseException.class).isPresent()) {
// malformed document; simply drop request without failing sink
//文档格式错误;简单地删除请求避免接收器失败
Contributor:

Same as above.

* **bulk.flush.backoff.retries**: The amount of backoff retries to attempt.
* **bulk.flush.max.actions**:刷新前缓存的最大操作数。
* **bulk.flush.max.size.mb**:刷新前缓存的最大数据大小(以兆字节为单位)。
* **bulk.flush.interval.ms**:不论缓存操作的数量或大小如何,刷新的时间间隔。
Contributor:

"不论缓存操作的数量或大小如何,刷新的时间间隔。" 是否可进行语序调整,如:"刷新的时间间隔(不论缓存操作的数量或大小如何)。"

Contributor (Author):

Thanks, I'll fix it.
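For reference, the bulk-flush thresholds listed above map to setter methods on `ElasticsearchSink.Builder`. A minimal sketch with illustrative values; the builder instance `esSinkBuilder` and the numbers are assumptions for the example:

```java
// flush once 100 actions are buffered... (bulk.flush.max.actions)
esSinkBuilder.setBulkFlushMaxActions(100);
// ...or once 5 MB of data is buffered... (bulk.flush.max.size.mb)
esSinkBuilder.setBulkFlushMaxSizeMb(5);
// ...or at the latest every 60 seconds, regardless of count or size (bulk.flush.interval.ms)
esSinkBuilder.setBulkFlushInterval(60000L);
```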

@huxixiang (Contributor, Author):

cc @wuchong, could you review my PR when you're free? Looking forward to your feedback, thanks a lot.


## Elasticsearch Sink
## Elasticsearch 接收器
Contributor:

Keep the word "Sink"; there is no need to translate it as "接收器". See the other translated connector docs, such as RabbitMQ and NiFi; the same applies to "sink" later in the text.

the name of your cluster.
对于仍然使用已被弃用的 `TransportClient` 和 Elasticsearch 集群通信的 Elasticsearch 版本 (即,小于或等于 5.x 的版本),
请注意如何使用一个 `String` 类型的 `Map` 配置 `ElasticsearchSink`。在创建内部使用的 `TransportClient` 时将直接转发此配置映射。
配置键记录在[此处](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html)的 Elasticsearch 文档中。
Contributor:

“配置键记录在” -> “配置项参见”?

time of checkpoints. This effectively assures that all requests before the
checkpoint was triggered have been successfully acknowledged by Elasticsearch, before
proceeding to process more records sent to the sink.
启用 Flink 的检查点后,Flink Elasticsearch 接收器保证至少一次将操作请求发送到 Elasticsearch 集群。
Contributor:

Keep the word "checkpointing"; no need to translate it as "检查点" (see the Chinese Checkpointing documentation); the same applies to "checkpoint" later in the text.

checkpoint was triggered have been successfully acknowledged by Elasticsearch, before
proceeding to process more records sent to the sink.
启用 Flink 的检查点后,Flink Elasticsearch 接收器保证至少一次将操作请求发送到 Elasticsearch 集群。
它通过在检查点时等待 `BulkProcessor` 中所有挂起的操作请求来实现。
Contributor:

“它” -> “该特性”?


To use fault tolerant Elasticsearch Sinks, checkpointing of the topology needs to be enabled at the execution environment:
要使用容错 Elasticsearch Sinks,需要在执行环境启用拓扑检查点:
Contributor:

Here "topology" refers to Flink's job graph; "拓扑检查点" could easily mislead users into thinking it is a special kind of checkpoint. Together with the comments above, please consider how to rephrase this.
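For reference, the sentence under review describes a one-line setting on the execution environment. A minimal sketch; the 5000 ms interval is an arbitrary example value:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// enable checkpointing for the job topology, e.g. every 5000 ms
env.enableCheckpointing(5000);
```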

delivery guarantees anymore, even with checkpoint for the topology enabled.
<b>注意</b>: 如果用户愿意,可以通过在创建的
<b> ElasticsearchSink </b>上调用 <b>disableFlushOnCheckpoint()</b> 来禁用刷新。请注意,
这实质上意味着接收器将不再提供任何强大的交付保证,即使启用了拓扑检查点。
Contributor:

"strong delivery guarantees" refers to the guarantee that data is written to Elasticsearch exactly once; "强大的交付保证" is rather hard to understand...

Contributor (Author):

Thanks. The strong delivery guarantee here refers to the at-least-once guarantee mentioned above.
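For context, the method being discussed is called on the sink instance itself. A minimal sketch, assuming an already-configured `ElasticsearchSink.Builder` named `esSinkBuilder`:

```java
ElasticsearchSink<String> esSink = esSinkBuilder.build();
// skip waiting for in-flight bulk requests at checkpoint time;
// checkpoints complete faster, but the at-least-once guarantee no longer holds
esSink.disableFlushOnCheckpoint();
```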

queue capacity saturation and drop requests with malformed documents, without
failing the sink. For all other failures, the sink will fail. If a `ActionRequestFailureHandler`
is not provided to the constructor, the sink will fail for any kind of error.
上面的示例接收器重新添加由于队列容量饱和而失败的请求并丢弃文档格式错误的请求,而不会使接收器失败。
Contributor:

-> “上述的示例接收器会重新添加由于队列容量饱和而失败的请求,同时丢弃文档格式错误的请求,而...”

an exponential backoff. For more information on the behaviour of the
internal `BulkProcessor` and how to configure it, please see the following section.
注意,`onFailure` 仅在 `BulkProcessor` 内部完成所有补偿重试尝试后仍发生故障时被调用。
默认情况下,`BulkProcessor` 最多重试 8 次,并采用指数补偿。有关 `BulkProcessor` 内部行为以及如何配置它的更多信息,请参阅以下部分。
Contributor:

"并采用指数补偿" -> "两次重试之间的等待时间呈指数级增长". The translation of "补偿" elsewhere in the text should also be reconsidered.
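For reference, the exponential backoff under discussion is configured through the sink builder. A minimal sketch with illustrative values, assuming the setter methods of `ElasticsearchSink.Builder`:

```java
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.FlushBackoffType;

// retry temporarily failed bulk requests, with an exponentially growing delay between attempts
esSinkBuilder.setBulkFlushBackoff(true);
esSinkBuilder.setBulkFlushBackoffType(FlushBackoffType.EXPONENTIAL);
esSinkBuilder.setBulkFlushBackoffRetries(8); // matches the default of 8 attempts mentioned above
esSinkBuilder.setBulkFlushBackoffDelay(50);  // initial delay in milliseconds
```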

system-wide, i.e. for all job being run.
## 将 Elasticsearch 连接器打包到 Uber-Jar 中

为了执行你的 Flink 程序,建议构建一个叫做 uber-jar (可执行的 jar),其中包含了你所有的依赖
Contributor:

Remove "叫做".

为了执行你的 Flink 程序,建议构建一个叫做 uber-jar (可执行的 jar),其中包含了你所有的依赖
(更多信息参见[此处]({{< ref "docs/dev/datastream/project-configuration" >}}))。

或者,你可以将连接器的 jar 文件放入 Flink 的 `lib/` 目录下,使其在系统范围内可用,即用于所有正在运行的作业。
Contributor:

"系统范围" -> "全局范围".

"being run" is not the progressive tense here; it is a passive participle used as a modifier. "即用于所有正在运行的作业" -> "即用于所有作业".

Contributor (Author):

Thanks, I've made the changes based on your feedback.

@huxixiang (Contributor, Author):

cc @95chenjz, hi Jianzhang, could you review my PR when you're free? Thanks a lot.

@huxixiang (Contributor, Author):

cc @RocMarshal, could you review my PR when you're free? Looking forward to your feedback, thanks a lot.

@RocMarshal (Contributor):

Thank you for your contribution. I'll check this PR, wait a moment @huxixiang.

@RocMarshal (Contributor) left a comment:

@huxixiang Hi, sorry for the late review. I've left some comments; please let me know what you think.

distribution. See [here]({{< ref "docs/dev/datastream/project-configuration" >}}) for information
about how to package the program with the libraries for cluster execution.
请注意,流连接器目前不是二进制发行版的一部分。
有关如何将程序和用于集群执行的库一起打包,参考[此处]({{< ref "docs/dev/datastream/project-configuration" >}})
Contributor:

[此处] -> [此文档]. Only a minor comment.

Make sure to set and remember a cluster name. This must be set when
creating an `ElasticsearchSink` for requesting document actions against your cluster.
Elasticsearch 集群的设置可以参考[此处](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html)。
确保设置并记住集群名称。这是在创建 `ElasticsearchSink` 请求集群文档操作时必须要设置的。
Contributor:

确保->确认

Contributor (Author):

Accepted, thanks.


## Elasticsearch Sink

The `ElasticsearchSink` uses a `TransportClient` (before 6.x) or `RestHighLevelClient` (starting with 6.x) to communicate with an
Elasticsearch cluster.
`ElasticsearchSink` 使用 `TransportClient` (6.x 之前) 或者 `RestHighLevelClient` (6.x 开始) 和 Elasticsearch 集群进行通信。
Contributor:

Suggested change
`ElasticsearchSink` 使用 `TransportClient` (6.x 之前) 或者 `RestHighLevelClient` (6.x 开始) 和 Elasticsearch 集群进行通信。
`ElasticsearchSink` 使用 `TransportClient`(6.x 之前)或者 `RestHighLevelClient`(6.x 开始)和 Elasticsearch 集群进行通信。

Just keep an ASCII space between English words and Chinese text.

Contributor (Author):

Accepted, thanks.
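As background for the sentence just discussed: from 6.x on, the sink is built from a list of `HttpHost` and talks to the cluster through a `RestHighLevelClient` internally. A minimal sketch based on the documentation page being translated; the host, index, and field names are placeholder values:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSink;
import org.apache.http.HttpHost;
import org.elasticsearch.client.Requests;

List<HttpHost> httpHosts = new ArrayList<>();
httpHosts.add(new HttpHost("127.0.0.1", 9200, "http"));

// for 6.x and later the builder creates and manages a RestHighLevelClient internally
ElasticsearchSink.Builder<String> esSinkBuilder = new ElasticsearchSink.Builder<>(
    httpHosts,
    (String element, RuntimeContext ctx, RequestIndexer indexer) ->
        indexer.add(Requests.indexRequest()
            .index("my-index")
            .type("my-type")
            .source(Collections.singletonMap("data", element))));
```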

@@ -98,7 +92,7 @@ DataStream<String> input = ...;

Map<String, String> config = new HashMap<>();
config.put("cluster.name", "my-cluster-name");
// This instructs the sink to emit after every element, otherwise they would be buffered
// 这指示 sink 在接收每个元素之后立即提交,否则它们将被缓存
Contributor:

Maybe you could translate it in a better way.
// 这指示 sink 在接收每个元素之后立即提交,否则它们将被缓存

Contributor (Author):

Accepted, I have modified the previous translation to
// 下面的设置使 sink 在接收每个元素之后立即提交,否则这些元素将被缓存起来
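For context, that comment sits in the 5.x configuration example on the page; with the updated translation in place, the surrounding snippet reads roughly like this:

```java
import java.util.HashMap;
import java.util.Map;

Map<String, String> config = new HashMap<>();
config.put("cluster.name", "my-cluster-name");
// 下面的设置使 sink 在接收每个元素之后立即提交,否则这些元素将被缓存起来
config.put("bulk.flush.max.actions", "1");
```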

@@ -166,10 +160,10 @@ ElasticsearchSink.Builder<String> esSinkBuilder = new ElasticsearchSink.Builder<
}
);

// configuration for the bulk requests; this instructs the sink to emit after every element, otherwise they would be buffered
// 批量请求的配置;这指示 sink 在接收每个元素之后立即提交,否则它们将被缓存
Contributor:

指示->设置
Free translation is easier to understand than literal translation. Only a minor comment.

Contributor (Author):

Accepted, thanks.

@@ -396,13 +374,13 @@ input.addSink(new ElasticsearchSink(
RequestIndexer indexer) {

if (ExceptionUtils.findThrowable(failure, EsRejectedExecutionException.class).isPresent()) {
// full queue; re-add document for indexing
// 队列已满;重新添加文档进行索引
Contributor:

Suggested change
// 队列已满;重新添加文档进行索引
// 队列已满;重新添加文档进行索引

Contributor (Author):

Accepted, thanks.

indexer.add(action)
} else if (ExceptionUtils.findThrowable(failure, ElasticsearchParseException.class).isPresent()) {
// malformed document; simply drop request without failing sink
// 文档格式错误;简单地删除请求避免 sink 失败
Contributor:

Suggested change
// 文档格式错误;简单地删除请求避免 sink 失败
// 文档格式错误;简单地删除请求避免 sink 失败

Contributor (Author):

Accepted, thanks.

will need to wait until Elasticsearch node queues have enough capacity for
all the pending requests. This also means that if re-added requests never
succeed, the checkpoint will never finish.
<b>重要提示</b>:在失败时将请求重新添加回内部 <b>BulkProcessor</b> 会导致更长的 checkpoint,因为在进行 checkpoint 时, sink 还需要等待重新添加的请求被刷新。
Contributor:

Suggested change
<b>重要提示</b>:在失败时将请求重新添加回内部 <b>BulkProcessor</b> 会导致更长的 checkpoint,因为在进行 checkpoint 时, sink 还需要等待重新添加的请求被刷新。
<b>重要提示</b>:在失败时将请求重新添加回内部 <b>BulkProcessor</b> 会导致更长的 checkpoint,因为在进行 checkpoint 时,sink 还需要等待重新添加的请求被刷新。

Contributor (Author):

Accepted, thanks.

system-wide, i.e. for all job being run.
## 将 Elasticsearch 连接器打包到 Uber-Jar 中

为了执行你的 Flink 程序,建议构建一个 uber-jar (可执行的 jar),其中包含了你所有的依赖
Contributor:

Suggested change
为了执行你的 Flink 程序,建议构建一个 uber-jar (可执行的 jar),其中包含了你所有的依赖
建议构建一个包含程序所有依赖的 uber-jar (可执行的 jar),以便更好地执行你的 Flink 程序。

Only a minor suggestion.

Contributor (Author):

Accepted, thanks.

为了执行你的 Flink 程序,建议构建一个 uber-jar (可执行的 jar),其中包含了你所有的依赖
(更多信息参见[此处]({{< ref "docs/dev/datastream/project-configuration" >}}))。

或者,你可以将连接器的 jar 文件放入 Flink 的 `lib/` 目录下,使其在全局范围内可用,即用于所有的作业。
Contributor:

Suggested change
或者,你可以将连接器的 jar 文件放入 Flink 的 `lib/` 目录下,使其在全局范围内可用,即用于所有的作业
或者,你可以将连接器的 jar 文件放入 Flink 的 `lib/` 目录下,使其在全局范围内可用,即可用于所有的作业

Contributor (Author):

Accepted, thanks.

Contributor (Author):

@RocMarshal, thanks for your comments. I have made all the modifications accordingly; could you review it again? Thanks a lot.

@RocMarshal (Contributor) left a comment:

@huxixiang Thanks for the update. LGTM +1. Now pinging @wuchong.

@wuchong (Member)

wuchong commented Aug 3, 2021

Merging...

@wuchong wuchong merged commit e0237a0 into apache:master Aug 3, 2021
hhkkxxx133 pushed a commit to hhkkxxx133/flink that referenced this pull request Aug 25, 2021