Merge branch 'master' into a11yContextViewTests
elasticmachine committed Mar 6, 2020
2 parents 325e04a + 708d92a commit 2c6ef66
Showing 550 changed files with 11,230 additions and 7,084 deletions.
14 changes: 8 additions & 6 deletions .ci/Jenkinsfile_visual_baseline
@@ -6,13 +6,15 @@ kibanaLibrary.load()
 kibanaPipeline(timeoutMinutes: 120) {
   catchError {
     parallel([
-      workers.base(name: 'oss-visualRegression', label: 'linux && immutable') {
-        kibanaPipeline.buildOss()
-        kibanaPipeline.functionalTestProcess('oss-visualRegression', './test/scripts/jenkins_visual_regression.sh')
+      'oss-visualRegression': {
+        workers.ci(name: 'oss-visualRegression', label: 'linux && immutable', ramDisk: false) {
+          kibanaPipeline.functionalTestProcess('oss-visualRegression', './test/scripts/jenkins_visual_regression.sh')(1)
+        }
       },
-      workers.base(name: 'xpack-visualRegression', label: 'linux && immutable') {
-        kibanaPipeline.buildXpack()
-        kibanaPipeline.functionalTestProcess('xpack-visualRegression', './test/scripts/jenkins_xpack_visual_regression.sh')
+      'xpack-visualRegression': {
+        workers.ci(name: 'xpack-visualRegression', label: 'linux && immutable', ramDisk: false) {
+          kibanaPipeline.functionalTestProcess('xpack-visualRegression', './test/scripts/jenkins_xpack_visual_regression.sh')(1)
+        }
       },
     ])
   }
1 change: 1 addition & 0 deletions .github/CODEOWNERS
@@ -132,6 +132,7 @@
 /src/legacy/server/logging/ @elastic/kibana-platform
 /src/legacy/server/saved_objects/ @elastic/kibana-platform
 /src/legacy/server/status/ @elastic/kibana-platform
+/src/plugins/status_page/ @elastic/kibana-platform
 /src/dev/run_check_core_api_changes.ts @elastic/kibana-platform

# Security
7 changes: 4 additions & 3 deletions .github/workflows/pr-project-assigner.yml
@@ -13,8 +13,9 @@ jobs:
       with:
         issue-mappings: |
           [
-            { "label": "Team:AppArch", "projectName": "kibana-app-arch", "columnId": 6173897 },
-            { "label": "Feature:Lens", "projectName": "Lens", "columnId": 6219362 },
-            { "label": "Team:Canvas", "projectName": "canvas", "columnId": 6187580 }
           ]
         ghToken: ${{ secrets.PROJECT_ASSIGNER_TOKEN }}
+
+          # { "label": "Team:AppArch", "projectName": "kibana-app-arch", "columnId": 6173897 },
+          # { "label": "Feature:Lens", "projectName": "Lens", "columnId": 6219362 },
+          # { "label": "Team:Canvas", "projectName": "canvas", "columnId": 6187580 }
Binary file added docs/images/intro-dashboard.png
Binary file added docs/images/intro-data-tutorial.png
Binary file added docs/images/intro-discover.png
Binary file added docs/images/intro-kibana.png
Binary file added docs/images/intro-management.png
Binary file added docs/images/intro-spaces.jpg
122 changes: 67 additions & 55 deletions docs/management/snapshot-restore/index.asciidoc
@@ -2,13 +2,13 @@
[[snapshot-repositories]]
== Snapshot and Restore

*Snapshot and Restore* enables you to back up your {es}
indices and clusters using data and state snapshots.
Snapshots are important because they provide a copy of your data in case
something goes wrong. If you need to roll back to an older version of your data,
you can restore a snapshot from the repository.

You’ll find *Snapshot and Restore* under *Management > Elasticsearch*.
With this UI, you can:

* Register a repository for storing your snapshots
@@ -20,29 +20,42 @@ With this UI, you can:
[role="screenshot"]
image:management/snapshot-restore/images/snapshot_list.png["Snapshot list"]

Before using this feature, you should be familiar with how snapshots work.
{ref}/snapshot-restore.html[Snapshot and Restore] is a good source for
more detailed information.

[float]
[[snapshot-permissions]]
=== Required permissions
The minimum required permissions to access *Snapshot and Restore* include:

* Cluster privileges: `monitor`, `manage_slm`, `cluster:admin/snapshot`, and `cluster:admin/repository`
* Index privileges: `all` on the `monitor` index if you want to access content in the *Restore Status* tab

You can add these privileges in *Management > Security > Roles*.
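
If you prefer to script this, a role with these privileges could also be created
from the {kib} <<console-kibana, Console>>; the request below is only a sketch, and
the `snapshot_restore_admin` role name is an example:

[source,js]
----
POST /_security/role/snapshot_restore_admin
{
  "cluster": ["monitor", "manage_slm", "cluster:admin/snapshot", "cluster:admin/repository"],
  "indices": [
    { "names": ["monitor"], "privileges": ["all"] }
  ]
}
----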

[role="screenshot"]
image:management/snapshot-restore/images/snapshot_permissions.png["Edit Role"]

[float]
[[kib-snapshot-register-repository]]
=== Register a repository
A repository is where your snapshots live. You must register a snapshot
repository before you can perform snapshot and restore operations.

If you don't have a repository, Kibana walks you through the process of
registering one.
{kib} supports three repository types
out of the box: shared file system, read-only URL, and source-only.
For more information on these repositories and their settings,
see {ref}/snapshots-register-repository.html[Repositories].
To use other repositories, such as S3, see
{ref}/snapshots-register-repository.html#snapshots-repository-plugins[Repository plugins].
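
For example, a shared file system repository could be registered from the
{kib} <<console-kibana, Console>> with a request along these lines; the
`my_backup` name and `location` path are placeholders, and the path must be
listed in the `path.repo` setting described in the tutorial below:

[source,js]
----
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "/mnt/es_backups/my_backup"
  }
}
----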


Once you create a repository, it is listed in the *Repositories*
view.
Click a repository name to view its type, number of snapshots, and settings,
and to verify status.

[role="screenshot"]
image:management/snapshot-restore/images/repository_list.png["Repository list"]

@@ -53,46 +66,46 @@ image:management/snapshot-restore/images/repository_list.png["Repository list"]
[[kib-view-snapshot]]
=== View your snapshots

A snapshot is a backup taken from a running {es} cluster. You'll find an overview of
your snapshots in the *Snapshots* view, and you can drill down
into each snapshot for further investigation.

[role="screenshot"]
image:management/snapshot-restore/images/snapshot_details.png["Snapshot details"]

If you don’t have any snapshots, you can create them from the {kib} <<console-kibana, Console>>. The
{ref}/snapshots-take-snapshot.html[snapshot API]
takes the current state and data in your index or cluster, and then saves it to a
shared repository.

The snapshot process is "smart." Your first snapshot is a complete copy of
the data in your index or cluster.
All subsequent snapshots save the changes between the existing snapshots and
the new data.

[float]
[[kib-restore-snapshot]]
=== Restore a snapshot

The information stored in a snapshot is not tied to a specific
cluster or a cluster name. This enables you to
restore a snapshot made from one cluster to another cluster. You might
use the restore operation to:

* Recover data lost due to a failure
* Migrate a current Elasticsearch cluster to a new version
* Move data from one cluster to another cluster

To get started, go to the *Snapshots* view, find the
snapshot, and click the restore icon in the *Actions* column.
The Restore wizard presents
options for the restore operation, including which
indices to restore and whether to modify the index settings.
You can restore an existing index only if it’s closed and has the same
number of shards as the index in the snapshot.
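
As a sketch, an equivalent restore can be run from the {kib} <<console-kibana, Console>>
with the snapshot restore API; the snapshot name matches the tutorial below, while the
`kibana_sample_data_*` index pattern and the replica override are illustrative only:

[source,js]
----
POST /_snapshot/my_backup/2019-04-25_snapshot/_restore
{
  "indices": "kibana_sample_data_*",
  "index_settings": {
    "index.number_of_replicas": 0
  }
}
----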

Once you initiate the restore, you're navigated to the *Restore Status* view,
where you can track the current state for each shard in the snapshot.

[role="screenshot"]
image:management/snapshot-restore/images/snapshot-restore.png["Snapshot details"]

@@ -102,26 +115,26 @@ image:management/snapshot-restore/images/snapshot-restore.png["Snapshot details"]
Expand All @@ -102,26 +115,26 @@ image:management/snapshot-restore/images/snapshot-restore.png["Snapshot details"
[[kib-snapshot-policy]]
=== Create a snapshot lifecycle policy

Use a {ref}/snapshot-lifecycle-management-api.html[snapshot lifecycle policy]
to automate the creation and deletion
of cluster snapshots. Taking automatic snapshots:

* Ensures your {es} indices and clusters are backed up on a regular basis
* Ensures a recent and relevant snapshot is available if a situation
arises where a cluster needs to be recovered
* Allows you to manage your snapshots in {kib}, instead of using a
third-party tool

If you don’t have any snapshot policies, follow the
*Create policy* wizard. It walks you through defining
when and where to take snapshots, the settings you want,
and how long to retain snapshots.
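
A minimal policy sketch, created from the {kib} <<console-kibana, Console>> with the
{ref}/snapshot-lifecycle-management-api.html[snapshot lifecycle management API]; the
policy name, schedule, and retention values below are illustrative only:

[source,js]
----
PUT /_slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "my_backup",
  "config": { "indices": ["*"] },
  "retention": { "expire_after": "30d", "min_count": 5, "max_count": 50 }
}
----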

[role="screenshot"]
image:management/snapshot-restore/images/snapshot-retention.png["Snapshot details"]

An overview of your policies is on the *Policies* view.
You can drill down into each policy to examine its settings and last successful and failed run.

You can perform the following actions on a snapshot policy:

@@ -139,8 +152,8 @@ image:management/snapshot-restore/images/create-policy.png["Snapshot details"]
=== Delete a snapshot

Delete snapshots to manage your repository storage space.
Find the snapshot in the *Snapshots* view and click the trash icon in the
*Actions* column. To delete snapshots in bulk, select their checkboxes,
and then click *Delete snapshots*.
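
The same operation is available through the {es} snapshot API; reusing the repository
and snapshot names from the tutorial below, for example:

[source,js]
----
DELETE /_snapshot/my_backup/2019-04-25_snapshot
----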

[[snapshot-repositories-example]]
@@ -159,10 +172,10 @@ Ready to try *Snapshot and Restore*? In this tutorial, you'll learn to:

==== Before you begin

This example shows you how to register a shared file system repository
This example shows you how to register a shared file system repository
and store snapshots.
Before you begin, you must register the location of the repository in the
{ref}/snapshots-register-repository.html#snapshots-filesystem-repository[path.repo] setting on
Before you begin, you must register the location of the repository in the
{ref}/snapshots-register-repository.html#snapshots-filesystem-repository[path.repo] setting on
your master and data nodes. You can do this in one of two ways:

* Edit your `elasticsearch.yml` to include the `path.repo` setting.
Expand All @@ -175,14 +188,14 @@ your master and data nodes. You can do this in one of two ways:
[[register-repo-example]]
==== Register a repository

Use *Snapshot and Restore* to register the repository where your snapshots
will live.

. Go to *Management > Elasticsearch > Snapshot and Restore*.
. Click *Register a repository* in either the introductory message or *Repository view*.
. Enter a name for your repository, for example, `my_backup`.
. Select *Shared file system*.
+
[role="screenshot"]
image:management/snapshot-restore/images/register_repo.png["Register repository"]

@@ -205,13 +218,13 @@ Use the {ref}/snapshots-take-snapshot.html[snapshot API] to create a snapshot.
[source,js]
PUT /_snapshot/my_backup/2019-04-25_snapshot?wait_for_completion=true
+
In this example, the snapshot name is `2019-04-25_snapshot`. You can also
use {ref}/date-math-index-names.html[date math expression] for the snapshot name.
+
[role="screenshot"]
image:management/snapshot-restore/images/create_snapshot.png["Create snapshot"]

. Return to *Snapshot and Restore*.
+
Your new snapshot is available in the *Snapshots* view.

@@ -223,7 +236,7 @@ using the repository created in the previous example.

. Open the *Policies* view.
. Click *Create a policy*.
+
[role="screenshot"]
image:management/snapshot-restore/images/create-policy-example.png["Create policy wizard"]

@@ -288,17 +301,16 @@ Finally, you'll restore indices from an existing snapshot.
|*Index&nbsp;settings* |

|Modify&nbsp;index&nbsp;settings
|Toggle to overwrite index settings when they are restored,
or leave in place to keep existing settings.

|Reset&nbsp;index&nbsp;settings
|Toggle to reset index settings back to the default when they are restored,
or leave in place to keep existing settings.
|===

. Review your restore settings, and then click *Restore snapshot*.
+
The operation loads for a few seconds,
and then you’re navigated to *Restore Status*,
where you can monitor the status of your restored indices.

22 changes: 22 additions & 0 deletions docs/settings/reporting-settings.asciidoc
@@ -95,6 +95,8 @@ index for any pending Reporting jobs. Defaults to `3000` (3 seconds).
[[xpack-reporting-q-timeout]]`xpack.reporting.queue.timeout`::
How long each worker has to produce a report. If your machine is slow or under
heavy load, you might need to increase this timeout. Specified in milliseconds.
If a report job's execution time exceeds this limit, the job is marked as a
failure and no download will be available.
Defaults to `120000` (two minutes).

[float]
@@ -104,6 +106,26 @@ Defaults to `120000` (two minutes).
Reporting works by capturing screenshots from Kibana. The following settings
control the capturing process.

`xpack.reporting.capture.timeouts.openUrl`::
How long to allow the Reporting browser to wait for the initial data of the
Kibana page to load. Defaults to `30000` (30 seconds).

`xpack.reporting.capture.timeouts.waitForElements`::
How long to allow the Reporting browser to wait for the visualization panels to
load on the Kibana page. Defaults to `30000` (30 seconds).

`xpack.reporting.capture.timeouts.renderComplete`::
How long to allow the Reporting browser to wait for each visualization to
signal that it is done rendering. Defaults to `30000` (30 seconds).
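
For example, if visualizations routinely need more than the default 30 seconds to
render, the three capture timeouts could be raised together in `kibana.yml`; the
one-minute values below are illustrative only:

[source,yml]
----
# Reporting capture timeouts, in milliseconds (the defaults are 30000).
xpack.reporting.capture.timeouts.openUrl: 60000
xpack.reporting.capture.timeouts.waitForElements: 60000
xpack.reporting.capture.timeouts.renderComplete: 60000
----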

[NOTE]
============
If any timeouts from `xpack.reporting.capture.timeouts.*` settings occur when
running a report job, Reporting will log the error and try to continue
capturing the page with a screenshot. As a result, a download will be
available, but there will likely be errors in the visualizations in the report.
============

`xpack.reporting.capture.maxAttempts`::
If capturing a report fails for any reason, Kibana will re-attempt the reporting
job as many times as this setting allows. Defaults to `3`.