[DOCS] Updated gs based on review comments
gchaps committed Jul 24, 2019
1 parent 7fcda0c commit a280563
Showing 7 changed files with 217 additions and 217 deletions.
8 changes: 3 additions & 5 deletions docs/getting-started.asciidoc
@@ -11,7 +11,7 @@ to add one of the sample data sets.
=== Before you begin

Make sure you've <<install, installed Kibana>> and established
a {kibana-ref}/connect-to-elasticsearch.html[connection to Elasticsearch].
a <<connect-to-elasticsearch, connection to Elasticsearch>>.

If you are running our https://cloud.elastic.co[hosted Elasticsearch Service]
on Elastic Cloud, you can access Kibana with a single click.
@@ -42,18 +42,16 @@ If you uninstall and reinstall a data set, the timestamps will change to reflect
[float]
=== Next steps

* Explore {kib} by following the {kibana-ref}/tutorial-sample-data.html[sample data tutorial].
* Explore {kib} by following the <<tutorial-sample-data, sample data tutorial>>.

* Learn how to load data, define index patterns, and build visualizations by {kibana-ref}/tutorial-build-dashboard.html[building your own dashboard].
* Learn how to load data, define index patterns, and build visualizations by <<tutorial-build-dashboard, building your own dashboard>>.

--

include::getting-started/tutorial-sample-data.asciidoc[]

include::getting-started/tutorial-full-experience.asciidoc[]

include::getting-started/tutorial-load-dataset.asciidoc[]

include::getting-started/tutorial-define-index.asciidoc[]

include::getting-started/tutorial-discovering.asciidoc[]
5 changes: 2 additions & 3 deletions docs/getting-started/tutorial-dashboard.asciidoc
@@ -31,8 +31,7 @@ but sometimes you need to look at the actual data to
understand what's really going on. You can inspect the data behind any visualization
and view the {es} query used to retrieve it.

. In the dashboard, hover the pointer over the pie chart.
. Click the icon in the upper right.
. In the dashboard, hover the pointer over the pie chart, and then click the icon in the upper right.
. From the *Options* menu, select *Inspect*.
+
[role="screenshot"]
@@ -42,7 +41,7 @@ image::images/tutorial-full-inspect1.png[]
in the upper right of the Inspect pane.

[float]
=== Wrapping up
=== Next steps

Now that you have a handle on the basics, you're ready to start exploring
your own data with Kibana.
11 changes: 4 additions & 7 deletions docs/getting-started/tutorial-define-index.asciidoc
@@ -16,7 +16,7 @@ of the log data from May 2018, you could specify the index pattern

First you'll create patterns for the Shakespeare data set, which has an
index named `shakespeare`, and the accounts data set, which has an index named
`bank.` These data sets don't contain time-series data.
`bank`. These data sets don't contain time series data.

. In Kibana, open *Management*, and then click *Index Patterns.*
. If this is your first index pattern, the *Create index pattern* page opens automatically.
@@ -34,20 +34,17 @@ You’re presented a table of all fields and associated data types in the index.
. Return to the *Index patterns* overview page and define a second index pattern named `ba*`.

[float]
==== Create an index pattern for time-series data
==== Create an index pattern for time series data

Now create an index pattern for the Logstash index, which
contains time-series data.
contains time series data.

. Define an index pattern named `logstash*`.
. Click *Next step*.
. Open the *Time Filter field name* dropdown and select *@timestamp*.
. Click *Create index pattern*.

[float]
==== A note about index patterns

When you define an index pattern, the indices that match that pattern must
NOTE: When you define an index pattern, the indices that match that pattern must
exist in Elasticsearch and they must contain data. To check which indices are
available, go to *Dev Tools > Console* and enter `GET _cat/indices`. Alternatively, use
`curl -XGET "http://localhost:9200/_cat/indices"`.
198 changes: 198 additions & 0 deletions docs/getting-started/tutorial-full-experience.asciidoc
@@ -13,3 +13,201 @@ When you complete this tutorial, you'll have a dashboard that looks like this.

[role="screenshot"]
image::images/tutorial-dashboard.png[]

[float]
[[tutorial-load-dataset]]
=== Load sample data

This tutorial requires that you download three data sets:

* The complete works of William Shakespeare, suitably parsed into fields
* A set of fictitious accounts with randomly generated data
* A set of randomly generated log files

[float]
==== Download the data sets

Create a new working directory where you want to download the files. From that directory, run the following commands:

[source,shell]
curl -O https://download.elastic.co/demos/kibana/gettingstarted/8.x/shakespeare.json
curl -O https://download.elastic.co/demos/kibana/gettingstarted/8.x/accounts.zip
curl -O https://download.elastic.co/demos/kibana/gettingstarted/8.x/logs.jsonl.gz

Two of the data sets are compressed. To extract the files, use these commands:

[source,shell]
unzip accounts.zip
gunzip logs.jsonl.gz

[float]
==== Structure of the data sets

The Shakespeare data set has this structure:

[source,json]
{
  "line_id": INT,
  "play_name": "String",
  "speech_number": INT,
  "line_number": "String",
  "speaker": "String",
  "text_entry": "String"
}

The accounts data set is structured as follows:

[source,json]
{
  "account_number": INT,
  "balance": INT,
  "firstname": "String",
  "lastname": "String",
  "age": INT,
  "gender": "M or F",
  "address": "String",
  "employer": "String",
  "email": "String",
  "city": "String",
  "state": "String"
}

The logs data set has dozens of different fields. Here are the notable fields for this tutorial:

[source,json]
{
  "memory": INT,
  "geo.coordinates": "geo_point",
  "@timestamp": "date"
}

[float]
==== Set up mappings

Before you load the Shakespeare and logs data sets, you must set up {ref}/mapping.html[_mappings_] for the fields.
Mappings divide the documents in the index into logical groups and specify the characteristics
of the fields. These characteristics include the searchability of the field
and whether it's _tokenized_, or broken up into separate words.
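
For example, you can see how a text value is tokenized by the standard analyzer
with the analyze API in *Dev Tools > Console*. This is only an illustration;
the sample sentence is not part of the tutorial data:

[source,js]
GET _analyze
{
  "analyzer": "standard",
  "text": "To be, or not to be"
}

//CONSOLE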

NOTE: If security is enabled, you must have the `all` Kibana privilege to run this tutorial.
You must also have the `create`, `manage`, `read`, `write`, and `delete`
index privileges. See {xpack-ref}/security-privileges.html[Security Privileges]
for more information.
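
For reference, a role granting those index privileges on the tutorial indices
might be defined as follows. This is only a sketch; the role name is
illustrative, and Kibana application privileges are assigned separately in
*Management > Roles*:

[source,js]
POST /_security/role/kibana_tutorial_user
{
  "indices": [
    {
      "names": ["shakespeare", "bank", "logstash-*"],
      "privileges": ["create", "manage", "read", "write", "delete"]
    }
  ]
}

//CONSOLE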

In Kibana *Dev Tools > Console*, set up a mapping for the Shakespeare data set:

[source,js]
PUT /shakespeare
{
  "mappings": {
    "properties": {
      "speaker": {"type": "keyword"},
      "play_name": {"type": "keyword"},
      "line_id": {"type": "integer"},
      "speech_number": {"type": "integer"}
    }
  }
}

//CONSOLE

This mapping specifies field characteristics for the data set:

* The `speaker` and `play_name` fields are keyword fields. These fields are not analyzed.
The strings are treated as a single unit even if they contain multiple words.
* The `line_id` and `speech_number` fields are integers.
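
For example, after the data is loaded, a term-level search on a keyword field
matches the stored value as a single unit. The query below is only an example;
`Hamlet` is one of the values that appears in the `play_name` field:

[source,js]
GET /shakespeare/_search
{
  "query": {
    "term": {
      "play_name": "Hamlet"
    }
  }
}

//CONSOLE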

The logs data set requires a mapping to label the latitude and longitude pairs
as geographic locations by applying the `geo_point` type.

[source,js]
PUT /logstash-2015.05.18
{
  "mappings": {
    "properties": {
      "geo": {
        "properties": {
          "coordinates": {
            "type": "geo_point"
          }
        }
      }
    }
  }
}

//CONSOLE

[source,js]
PUT /logstash-2015.05.19
{
  "mappings": {
    "properties": {
      "geo": {
        "properties": {
          "coordinates": {
            "type": "geo_point"
          }
        }
      }
    }
  }
}

//CONSOLE

[source,js]
PUT /logstash-2015.05.20
{
  "mappings": {
    "properties": {
      "geo": {
        "properties": {
          "coordinates": {
            "type": "geo_point"
          }
        }
      }
    }
  }
}

//CONSOLE
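
Because the three Logstash indices share the same structure, you could instead
define the mapping once in an index template that matches all of them. The
following is only a sketch (the template name is illustrative), and a template
applies only to indices created after it is added:

[source,js]
PUT _template/logstash_tutorial
{
  "index_patterns": ["logstash-2015.05.*"],
  "mappings": {
    "properties": {
      "geo": {
        "properties": {
          "coordinates": {
            "type": "geo_point"
          }
        }
      }
    }
  }
}

//CONSOLE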

The accounts data set doesn't require any mappings.

[float]
==== Load the data sets

At this point, you're ready to use the Elasticsearch {ref}/docs-bulk.html[bulk]
API to load the data sets:

[source,shell]
curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST '<host>:<port>/bank/account/_bulk?pretty' --data-binary @accounts.json
curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST '<host>:<port>/shakespeare/_bulk?pretty' --data-binary @shakespeare.json
curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST '<host>:<port>/_bulk?pretty' --data-binary @logs.jsonl

Or, for Windows users, in PowerShell:
[source,shell]
Invoke-RestMethod "http://<host>:<port>/bank/account/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "accounts.json"
Invoke-RestMethod "http://<host>:<port>/shakespeare/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "shakespeare.json"
Invoke-RestMethod "http://<host>:<port>/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "logs.jsonl"
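
Each of these files uses the bulk API's newline-delimited JSON format: an
action line followed by the document source. The lines below only illustrate
that format; the values are made up and are not the literal file contents:

[source,json]
{ "index" : { "_id" : "1" } }
{ "account_number" : 1, "balance" : 39225, "firstname" : "Amber", "lastname" : "Duke" }
{ "index" : { "_id" : "2" } }
{ "account_number" : 2, "balance" : 28838, "firstname" : "Roberta", "lastname" : "Bates" }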

These commands might take some time to execute, depending on the available computing resources.

Verify successful loading:

[source,js]
GET /_cat/indices?v

//CONSOLE

Your output should look similar to this:

[source,shell]
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open bank 1 1 1000 0 418.2kb 418.2kb
yellow open shakespeare 1 1 111396 0 17.6mb 17.6mb
yellow open logstash-2015.05.18 1 1 4631 0 15.6mb 15.6mb
yellow open logstash-2015.05.19 1 1 4624 0 15.7mb 15.7mb
yellow open logstash-2015.05.20 1 1 4750 0 16.4mb 16.4mb