
JA: translation xla/overview #497

Merged: 4 commits from nuka137:pr-1 into tensorflow:master on May 9, 2019

Conversation

@nuka137 (Contributor) commented Apr 17, 2019

This patch is the first en->ja translation of the XLA documents.
The original document is located at https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/g3doc/overview.md.

nuka137 requested review from lamberta and MarkDaoust as code owners on Apr 17, 2019

@tfdocsbot (Collaborator) commented Apr 17, 2019

Reviewers added, please take a look.
@ohtaman

When your review is finished, approve the pull request or include "LGTM" in your comment.

@lamberta (Member) commented Apr 17, 2019

Awesome, thanks! Will see about finding a reviewer


> 注意:XLAは現在開発中であるため、特定の状況でメモリ使用量の増大や性能の悪化を引き起こす場合があります。
XLAは線形代数の演算に特化したコンパイラで、XLAを使うことでTensorFlowの演算を最適化し、メモリ使用量の削減や性能の向上を期待できます。

@ohtaman (Contributor) commented Apr 23, 2019
  1. As in the original text, "XLA(Accelerated Linear Algebra)は" is better than "XLAは".
  2. It looks like "portability" (移植性) is missing.

@nuka137 (Author, Contributor) commented Apr 25, 2019

Thanks, fixed it.

> 注意:XLAは現在開発中であるため、特定の状況でメモリ使用量の増大や性能の悪化を引き起こす場合があります。
XLAは線形代数の演算に特化したコンパイラで、XLAを使うことでTensorFlowの演算を最適化し、メモリ使用量の削減や性能の向上を期待できます。
XLAは、実行時にTensorFlowの計算グラフをコンパイルして実行する [just-in-time (JIT) コンパイル機能](https://www.tensorflow.org/xla/jit) と、TensorFlowの計算グラフを実行コードにコンパイルする [ahead-of-time (AOT) コンパイル機能](https://www.tensorflow.org/xla/tfcompile) の2つの機能を提供しています。
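
For reference, the JIT feature described in this sentence is switched on at the session level in TensorFlow 1.x. A minimal sketch, assuming a TF 1.x installation; the toy graph is hypothetical:

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x

# Enable XLA JIT compilation for the whole session.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

x = tf.placeholder(tf.float32, shape=[4, 4])
y = tf.tanh(tf.matmul(x, x) + 1.0)  # this subgraph becomes a candidate for JIT clustering

with tf.Session(config=config) as sess:
    print(sess.run(y, feed_dict={x: np.eye(4, dtype=np.float32)}))
```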

@ohtaman (Contributor) commented Apr 23, 2019

The last paragraph and some sentences that mention that XLA is experimental are missing.

@lamberta Are there any rules or standards for how faithful the translation should be to the original? Since the translations are done by the community, I think we should balance faithfulness and cost.

@lamberta (Member) commented Apr 23, 2019

Good question.
I think mentioning experimental is important here since that seems like something a Japanese dev would want to know before investing time.

But we should perhaps come up with some rules and standards for what gets translated and what is considered out-of-date.
The community is welcome to add translation tips in the ja/README.md (you can see this is what the Korean translation folks are doing: ko/README.md).

Suggestions?

@nuka137 (Author, Contributor) commented Apr 25, 2019

Thanks for your comment.
I agree with including the sentence that mentions that XLA is experimental.

For now, I have translated the document to match the original as closely as possible, because there is no documentation guideline.

@lamberta (Member) commented Apr 26, 2019

Thank you. Our current guidelines are here: https://www.tensorflow.org/community/contribute/docs#community_translations

The English docs are the source-of-truth and translations should follow these guides as closely as possible. That said, translations are written for the communities they serve. If the English terminology, phrasing, style, or tone does not translate to another language, please use a translation appropriate for the reader.

Technical content should remain as close as possible but style is not so rigid. I'd prefer it read naturally.

@nuka137 (Author, Contributor) commented Apr 25, 2019

@ohtaman

Thanks for reviewing.
I fixed the whole document to match the original as closely as possible.
Could you review the revised document again?

@ohtaman (Contributor) left a comment

Added some comments.

<img style="width:50%" src="https://raw.githubusercontent.com/tensorflow/tensorflow/master/tensorflow/compiler/xla/g3doc/images/xlalogo.png">
</div>

> 注意:XLAは現在開発中であるため、特定の状況でメモリ使用量の増大や性能の悪化を引き起こす場合があります。

@nuka137 (Author, Contributor) commented May 7, 2019

Thanks, I fixed it.


> 注意:XLAは現在開発中であるため、特定の状況でメモリ使用量の増大や性能の悪化を引き起こす場合があります。
XLA(Accelerated Linear Algebra)は線形代数の演算に特化したコンパイラで、XLAを使うことでTensorFlowの演算を最適化し、メモリ使用量、性能、サーバやモバイル環境での移植性の面での改善が期待できます。ほとんどのユーザにとって、XLAを使うことによる大きな恩恵は得られないかもしれません。このため、[just-in-time (JIT) コンパイル機能](https://www.tensorflow.org/xla/jit) や [ahead-of-time (AOT) コンパイル機能](https://www.tensorflow.org/xla/tfcompile) を通してXLA実験的に使ってもらえればと思います。ただし、新たなハードウエアアクセラレータの開発者については、XLAを試してみることをおすすめします。

@ohtaman (Contributor) commented Apr 26, 2019

> ほとんどのユーザにとって、XLAを使うことによる大きな恩恵は得られないかもしれません。このため、... XLA実験的に使ってもらえればと思います

might read better as:

"現在のところ、ほとんどのユーザにとってXLAを使うことによる大きな恩恵は得られないかもしれませんが、.... 実験的にXLAを使っていただくのは歓迎です。"

Also, the last paragraph is missing:

> The XLA framework is experimental and in active development. In particular, while it is unlikely that the semantics of existing operations will change, it is expected that more operations will be added to cover important use cases. The team welcomes feedback from the community about missing functionality and community contributions via GitHub.

@nuka137 (Author, Contributor) commented May 7, 2019

Thanks for the suggestion. I fixed it.

> Also, the last paragraph is missing.

I forgot to translate this paragraph. I added the translation.
Could you review it too?

@ohtaman (Contributor) commented May 7, 2019

Yes, I reviewed it.


* **実行速度の向上**: サブグラフをコンパイルすることで、TensorFlowのランタイムによるオーバヘッド削減、複数オペレーションの結合によるメモリのオーバヘッド削減、テンソルの形が決まることによる定数畳み込みの促進により、軽量なオペレーションの実行時間を短縮することができます。
* **メモリ使用量の改善**: メモリ使用量の解析とスケジューリングによって、オペレーションの中間データを保持する領域を削減します。
* **独自オペレーションへの依存度削減**: 低レベルのオペレーションを自動的に結合することにより、人の手で独自の結合オペレーションを作成する必要がなくなります。

@ohtaman (Contributor) commented Apr 26, 2019

Weak suggestion: the original text seems to be about the "performance improvement" of fused Ops:

> Reduce reliance on custom Ops. Remove the need for many custom Ops by improving the performance of automatically fused low-level Ops to match the performance of custom Ops that were fused by hand.

e.g.
自動的に結合された低レベルなオペレーションのパフォーマンスを向上させ、人手による独自オペレーションと同等のパフォーマンスが得られるようにすることで、独自オペレーションを作成する必要性がなくなります。

@nuka137 (Author, Contributor) commented May 7, 2019

I think your translation is much better than mine, so I replaced my old translation with yours.

@ohtaman (Contributor) commented May 7, 2019

Thanks
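
As context for the fused-Ops point above: in TF 1.x, fusion candidates can also be scoped explicitly rather than relying on whole-session JIT. A minimal sketch, assuming TF 1.x with contrib available; the Ops inside the scope are hypothetical:

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x

jit_scope = tf.contrib.compiler.jit.experimental_jit_scope

x = tf.placeholder(tf.float32, shape=[8, 8])
with jit_scope():
    # XLA may fuse these element-wise Ops into a single compiled kernel,
    # removing the need for a hand-written custom fused Op.
    y = tf.tanh(x) * tf.sigmoid(x) + 1.0

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: np.ones([8, 8], np.float32)}))
```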


TensorFlowと連携するXLAを開発した目的はいくつかあります。

* **実行速度の向上**: サブグラフをコンパイルすることで、TensorFlowのランタイムによるオーバヘッド削減、複数オペレーションの結合によるメモリのオーバヘッド削減、テンソルの形が決まることによる定数畳み込みの促進により、軽量なオペレーションの実行時間を短縮することができます。

@ohtaman (Contributor) commented Apr 26, 2019

Very weak suggestion:

'軽量なオペレーションの実行時間を短縮', '複数オペレーションの結合', and '定数畳み込みの促進' seem to be parallel things in the original text. In this translation, '複数オペレーションの結合' and '定数畳み込みの促進' seem to be subordinate to '軽量なオペレーションの実行時間を短縮'.

e.g.
サブグラフをコンパイルして軽量なオペレーションの実行時間を短縮することでTensorFlowランタイムのオーバヘッド削減し、複数オペレーションの結合によりメモリのオーバヘッドを削減し、既知のテンソル形状に特化して、より積極的に定数畳み込みができるようにします。

@nuka137 (Author, Contributor) commented May 7, 2019

I fixed it. But I have a suggestion about this comment.

'軽量なオペレーションの実行時間を短縮' seems to be the result of 'サブグラフをコンパイル' and 'TensorFlowランタイムのオーバヘッド削減'.
I think there are 3 topics in this sentence:

  • Reduce the execution time of light-weight Ops: compile subgraphs and reduce TensorFlow runtime overhead
  • Reduce memory overhead: fuse operations
  • Aggressive constant propagation: specialize to known tensor shapes

Any suggestions?
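
For readers weighing the two readings, the three mechanisms can be pinned to concrete graph features. A minimal sketch, assuming TF 1.x; the graph itself is hypothetical:

```python
import numpy as np
import tensorflow as tf  # TensorFlow 1.x

# (1) Compiling subgraphs removes per-Op dispatch overhead from the TF runtime.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

x = tf.placeholder(tf.float32, shape=[128, 64])  # (3) static shapes let XLA specialize
w = tf.constant(0.5, shape=[64, 32])             #     and constant-fold more aggressively
y = tf.nn.relu(tf.matmul(x, w) * 2.0)            # (2) matmul/mul/relu are fusion candidates:
                                                 #     one compiled cluster, fewer kernel launches

with tf.Session(config=config) as sess:
    print(sess.run(y, feed_dict={x: np.ones([128, 64], np.float32)}).shape)
```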

nuka137 and others added some commits May 7, 2019

@lamberta (Member) left a comment

Thank you!

@TensorFlow-Docs-Copybara TensorFlow-Docs-Copybara merged commit 269f5fe into tensorflow:master May 9, 2019

2 checks passed

cla/google: All necessary CLAs are signed
import/copybara: Change imported to the internal review system

TensorFlow-Docs-Copybara pushed a commit that referenced this pull request May 9, 2019

Copybara-Service
Merge pull request #497 from nuka137:pr-1
PiperOrigin-RevId: 247518350