[MINOR][DOCS] Fixed closing tags in running-on-kubernetes.md #35561

Status: Closed · wants to merge 1 commit
6 changes: 3 additions & 3 deletions docs/running-on-kubernetes.md
@@ -1141,7 +1141,7 @@ See the [configuration page](configuration.html) for information on Spark config
 <td><code>spark.kubernetes.memoryOverheadFactor</code></td>
 <td><code>0.1</code></td>
 <td>
-  This sets the Memory Overhead Factor that will allocate memory to non-JVM memory, which includes off-heap memory allocations, non-JVM tasks, various systems processes, and <code>tmpfs</code>-based local directories when <code>spark.kubernetes.local.dirs.tmpfs<code> is <code>true</code>. For JVM-based jobs this value will default to 0.10 and 0.40 for non-JVM jobs.
+  This sets the Memory Overhead Factor that will allocate memory to non-JVM memory, which includes off-heap memory allocations, non-JVM tasks, various systems processes, and <code>tmpfs</code>-based local directories when <code>spark.kubernetes.local.dirs.tmpfs</code> is <code>true</code>. For JVM-based jobs this value will default to 0.10 and 0.40 for non-JVM jobs.
 This is done as non-JVM tasks need more non-JVM heap space and such tasks commonly fail with "Memory Overhead Exceeded" errors. This preempts this error with a higher default.
 </td>
 <td>2.4.0</td>
@@ -1314,7 +1314,7 @@ See the [configuration page](configuration.html) for information on Spark config
 <td>3.0.0</td>
 </tr>
 <tr>
-  <td><code>spark.kubernetes.executor.decommmissionLabel<code></td>
+  <td><code>spark.kubernetes.executor.decommmissionLabel</code></td>
 <td>(none)</td>
 <td>
 Label to be applied to pods which are exiting or being decommissioned. Intended for use
@@ -1323,7 +1323,7 @@ See the [configuration page](configuration.html) for information on Spark config
 <td>3.3.0</td>
 </tr>
 <tr>
-  <td><code>spark.kubernetes.executor.decommmissionLabelValue<code></td>
+  <td><code>spark.kubernetes.executor.decommmissionLabelValue</code></td>
 <td>(none)</td>
 <td>
 Value to be applied with the label when
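For context, the properties whose documentation this PR touches are set like any other Spark configuration, typically via `--conf` flags on `spark-submit`. A minimal sketch follows; the master URL, container image name, application path, and the chosen values are illustrative placeholders, not part of this PR. Note that the `decommmission` spelling (three m's) matches the property names as they appear in the docs being patched.

```shell
# Sketch only: master URL, image, values, and application path are placeholders.
spark-submit \
  --master k8s://https://kubernetes.example.com:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.container.image=my-spark:3.3.0 \
  --conf spark.kubernetes.memoryOverheadFactor=0.4 \
  --conf spark.kubernetes.executor.decommmissionLabel=spark-decommissioning \
  --conf spark.kubernetes.executor.decommmissionLabelValue=true \
  local:///opt/spark/examples/src/main/python/pi.py
```

The higher `memoryOverheadFactor` here reflects the documented guidance that non-JVM jobs default to 0.40 rather than the JVM default of 0.10.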