diff --git a/docs/getting-started.asciidoc b/docs/getting-started.asciidoc index a7016f5f9986df..2487064b25a6c3 100644 --- a/docs/getting-started.asciidoc +++ b/docs/getting-started.asciidoc @@ -11,7 +11,7 @@ to add one of the sample data sets. === Before you begin Make sure you've <> and established -a {kibana-ref}/connect-to-elasticsearch.html[connection to Elasticsearch]. +a <>. If you are running our https://cloud.elastic.co[hosted Elasticsearch Service] on Elastic Cloud, you can access Kibana with a single click. @@ -42,9 +42,9 @@ If you uninstall and reinstall a data set, the timestamps will change to reflect [float] === Next steps -* Explore {kib} by following the {kibana-ref}/tutorial-sample-data.html[sample data tutorial]. +* Explore {kib} by following the <>. -* Learn how to load data, define index patterns, and build visualizations by {kibana-ref}/tutorial-build-dashboard.html[building your own dashboard]. +* Learn how to load data, define index patterns, and build visualizations by <>. -- @@ -52,8 +52,6 @@ include::getting-started/tutorial-sample-data.asciidoc[] include::getting-started/tutorial-full-experience.asciidoc[] -include::getting-started/tutorial-load-dataset.asciidoc[] - include::getting-started/tutorial-define-index.asciidoc[] include::getting-started/tutorial-discovering.asciidoc[] diff --git a/docs/getting-started/tutorial-dashboard.asciidoc b/docs/getting-started/tutorial-dashboard.asciidoc index 8e57c7105ffc5d..aab93eb51ca232 100644 --- a/docs/getting-started/tutorial-dashboard.asciidoc +++ b/docs/getting-started/tutorial-dashboard.asciidoc @@ -31,8 +31,7 @@ but sometimes you need to look at the actual data to understand what's really going on. You can inspect the data behind any visualization and view the {es} query used to retrieve it. -. In the dashboard, hover the pointer over the pie chart. -. Click the icon in the upper right. +. In the dashboard, hover the pointer over the pie chart, and then click the icon in the upper right. . From the *Options* menu, select *Inspect*. + [role="screenshot"] @@ -42,7 +41,7 @@ image::images/tutorial-full-inspect1.png[] in the upper right of the Inspect pane. [float] -=== Wrapping up +=== Next steps Now that you have a handle on the basics, you're ready to start exploring your own data with Kibana. diff --git a/docs/getting-started/tutorial-define-index.asciidoc b/docs/getting-started/tutorial-define-index.asciidoc index 25a203a54045e2..fcb852b1b80c20 100644 --- a/docs/getting-started/tutorial-define-index.asciidoc +++ b/docs/getting-started/tutorial-define-index.asciidoc @@ -16,7 +16,7 @@ of the log data from May 2018, you could specify the index pattern First you'll create patterns for the Shakespeare data set, which has an index named `shakespeare,` and the accounts data set, which has an index named -`bank.` These data sets don't contain time-series data. +`bank`. These data sets don't contain time series data. . In Kibana, open *Management*, and then click *Index Patterns.* . If this is your first index pattern, the *Create index pattern* page opens automatically. @@ -34,20 +34,17 @@ You’re presented a table of all fields and associated data types in the index. . Return to the *Index patterns* overview page and define a second index pattern named `ba*`. [float] -==== Create an index pattern for time-series data +==== Create an index pattern for time series data Now create an index pattern for the Logstash index, which -contains time-series data. +contains time series data. . Define an index pattern named `logstash*`. 
. Click *Next step*.
. Open the *Time Filter field name* dropdown and select *@timestamp*.
. Click *Create index pattern*.

-[float]
-==== A note about index patterns
-
-When you define an index pattern, the indices that match that pattern must
+NOTE: When you define an index pattern, the indices that match that pattern must
exist in Elasticsearch and they must contain data. To check which indices are
available, go to *Dev Tools > Console* and enter `GET _cat/indices`. Alternately, use
`curl -XGET "http://localhost:9200/_cat/indices"`.
diff --git a/docs/getting-started/tutorial-full-experience.asciidoc b/docs/getting-started/tutorial-full-experience.asciidoc
index 4dad798061ba8c..475aa0b4a94de2 100644
--- a/docs/getting-started/tutorial-full-experience.asciidoc
+++ b/docs/getting-started/tutorial-full-experience.asciidoc
@@ -13,3 +13,201 @@ When you complete this tutorial, you'll have a dashboard that looks like this.

[role="screenshot"]
image::images/tutorial-dashboard.png[]
+
+[float]
+[[tutorial-load-dataset]]
+=== Load sample data
+
+This tutorial requires that you download three data sets:
+
+* The complete works of William Shakespeare, suitably parsed into fields
+* A set of fictitious accounts with randomly generated data
+* A set of randomly generated log files
+
+[float]
+==== Download the data sets
+
+Create a new working directory where you want to download the files. From that directory, run the following commands:
+
+[source,shell]
+curl -O https://download.elastic.co/demos/kibana/gettingstarted/8.x/shakespeare.json
+curl -O https://download.elastic.co/demos/kibana/gettingstarted/8.x/accounts.zip
+curl -O https://download.elastic.co/demos/kibana/gettingstarted/8.x/logs.jsonl.gz
+
+Two of the data sets are compressed. To extract the files, use these commands:
+
+[source,shell]
+unzip accounts.zip
+gunzip logs.jsonl.gz
+
+[float]
+==== Structure of the data sets
+
+The Shakespeare data set has this structure:
+
+[source,json]
+{
+    "line_id": INT,
+    "play_name": "String",
+    "speech_number": INT,
+    "line_number": "String",
+    "speaker": "String",
+    "text_entry": "String"
+}
+
+The accounts data set is structured as follows:
+
+[source,json]
+{
+    "account_number": INT,
+    "balance": INT,
+    "firstname": "String",
+    "lastname": "String",
+    "age": INT,
+    "gender": "M or F",
+    "address": "String",
+    "employer": "String",
+    "email": "String",
+    "city": "String",
+    "state": "String"
+}
+
+The logs data set has dozens of different fields. Here are the notable fields for this tutorial:
+
+[source,json]
+{
+    "memory": INT,
+    "geo.coordinates": "geo_point",
+    "@timestamp": "date"
+}
+
+[float]
+==== Set up mappings
+
+Before you load the Shakespeare and logs data sets, you must set up {ref}/mapping.html[_mappings_] for the fields.
+Mappings divide the documents in the index into logical groups and specify the characteristics
+of the fields. These characteristics include the searchability of the field
+and whether it's _tokenized_, or broken up into separate words.
+
+NOTE: If security is enabled, you must have the `all` Kibana privilege to run this tutorial.
+You must also have the `create`, `manage`, `read`, `write`, and `delete`
+index privileges. See {xpack-ref}/security-privileges.html[Security Privileges]
+for more information.
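+
+TIP: If you're not sure which privileges your user has, one way to check (assuming
+security is enabled and the {es} security APIs are available in your deployment) is
+the get user privileges API. The response lists the cluster, index, and application
+privileges granted to the user you're logged in as; {kib} privileges appear among the
+application privileges.
+
+[source,js]
+// Assumes security is enabled; returns the privileges of the current user
+GET _security/user/_privileges
+
+//CONSOLE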
+ +In Kibana *Dev Tools > Console*, set up a mapping for the Shakespeare data set: + +[source,js] +PUT /shakespeare +{ + "mappings": { + "properties": { + "speaker": {"type": "keyword"}, + "play_name": {"type": "keyword"}, + "line_id": {"type": "integer"}, + "speech_number": {"type": "integer"} + } + } +} + +//CONSOLE + +This mapping specifies field characteristics for the data set: + +* The `speaker` and `play_name` fields are keyword fields. These fields are not analyzed. +The strings are treated as a single unit even if they contain multiple words. +* The `line_id` and `speech_number` fields are integers. + +The logs data set requires a mapping to label the latitude and longitude pairs +as geographic locations by applying the `geo_point` type. + +[source,js] +PUT /logstash-2015.05.18 +{ + "mappings": { + "properties": { + "geo": { + "properties": { + "coordinates": { + "type": "geo_point" + } + } + } + } + } +} + +//CONSOLE + +[source,js] +PUT /logstash-2015.05.19 +{ + "mappings": { + "properties": { + "geo": { + "properties": { + "coordinates": { + "type": "geo_point" + } + } + } + } + } +} + +//CONSOLE + +[source,js] +PUT /logstash-2015.05.20 +{ + "mappings": { + "properties": { + "geo": { + "properties": { + "coordinates": { + "type": "geo_point" + } + } + } + } + } +} + +//CONSOLE + +The accounts data set doesn't require any mappings. + +[float] +==== Load the data sets + +At this point, you're ready to use the Elasticsearch {ref}/docs-bulk.html[bulk] +API to load the data sets: + +[source,shell] +curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST ':/bank/account/_bulk?pretty' --data-binary @accounts.json +curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST ':/shakespeare/_bulk?pretty' --data-binary @shakespeare.json +curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST ':/_bulk?pretty' --data-binary @logs.jsonl + +Or for Windows users, in Powershell: +[source,shell] +Invoke-RestMethod "http://:/bank/account/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "accounts.json" +Invoke-RestMethod "http://:/shakespeare/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "shakespeare.json" +Invoke-RestMethod "http://:/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "logs.jsonl" + +These commands might take some time to execute, depending on the available computing resources. + +Verify successful loading: + +[source,js] +GET /_cat/indices?v + +//CONSOLE + +Your output should look similar to this: + +[source,shell] +health status index pri rep docs.count docs.deleted store.size pri.store.size +yellow open bank 1 1 1000 0 418.2kb 418.2kb +yellow open shakespeare 1 1 111396 0 17.6mb 17.6mb +yellow open logstash-2015.05.18 1 1 4631 0 15.6mb 15.6mb +yellow open logstash-2015.05.19 1 1 4624 0 15.7mb 15.7mb +yellow open logstash-2015.05.20 1 1 4750 0 16.4mb 16.4mb diff --git a/docs/getting-started/tutorial-load-dataset.asciidoc b/docs/getting-started/tutorial-load-dataset.asciidoc deleted file mode 100644 index 1824948707c090..00000000000000 --- a/docs/getting-started/tutorial-load-dataset.asciidoc +++ /dev/null @@ -1,190 +0,0 @@ -[[tutorial-load-dataset]] -=== Load sample data - -This tutorial requires three data sets: - -* The complete works of William Shakespeare, suitably parsed into fields -* A set of fictitious accounts with randomly generated data -* A set of randomly generated log files - -Create a new working directory where you want to download the files. 
From that directory, run the following commands: - -[source,shell] -curl -O https://download.elastic.co/demos/kibana/gettingstarted/8.x/shakespeare.json -curl -O https://download.elastic.co/demos/kibana/gettingstarted/8.x/accounts.zip -curl -O https://download.elastic.co/demos/kibana/gettingstarted/8.x/logs.jsonl.gz - -Two of the data sets are compressed. To extract the files, use these commands: - -[source,shell] -unzip accounts.zip -gunzip logs.jsonl.gz - -==== Structure of the data sets - -The Shakespeare data set has this structure: - -[source,json] -{ - "line_id": INT, - "play_name": "String", - "speech_number": INT, - "line_number": "String", - "speaker": "String", - "text_entry": "String", -} - -The accounts data set is structured as follows: - -[source,json] -{ - "account_number": INT, - "balance": INT, - "firstname": "String", - "lastname": "String", - "age": INT, - "gender": "M or F", - "address": "String", - "employer": "String", - "email": "String", - "city": "String", - "state": "String" -} - -The logs data set has dozens of different fields. Here are the notable fields for this tutorial: - -[source,json] -{ - "memory": INT, - "geo.coordinates": "geo_point" - "@timestamp": "date" -} - -==== Set up mappings - -Before you load the Shakespeare and logs data sets, you must set up {ref}/mapping.html[_mappings_] for the fields. -Mappings divide the documents in the index into logical groups and specify the characteristics -of the fields. These characteristics include the searchability of the field -and whether it's _tokenized_, or broken up into separate words. - -NOTE: If security is enabled, you must have the `all` Kibana privilege to run this tutorial. -You must also have the `create`, `manage` `read`, `write,` and `delete` -index privileges. See {xpack-ref}/security-privileges.html[Security Privileges] -for more information. - -In Kibana *Dev Tools > Console*, set up a mapping for the Shakespeare data set: - -[source,js] -PUT /shakespeare -{ - "mappings": { - "properties": { - "speaker": {"type": "keyword"}, - "play_name": {"type": "keyword"}, - "line_id": {"type": "integer"}, - "speech_number": {"type": "integer"} - } - } -} - -//CONSOLE - -This mapping specifies field characteristics for the data set: - -* The `speaker` and `play_name` fields are keyword fields. These fields are not analyzed. -The strings are treated as a single unit even if they contain multiple words. -* The `line_id` and `speech_number` fields are integers. - -The logs data set requires a mapping to label the latitude and longitude pairs -as geographic locations by applying the `geo_point` type. - -[source,js] -PUT /logstash-2015.05.18 -{ - "mappings": { - "properties": { - "geo": { - "properties": { - "coordinates": { - "type": "geo_point" - } - } - } - } - } -} - -//CONSOLE - -[source,js] -PUT /logstash-2015.05.19 -{ - "mappings": { - "properties": { - "geo": { - "properties": { - "coordinates": { - "type": "geo_point" - } - } - } - } - } -} - -//CONSOLE - -[source,js] -PUT /logstash-2015.05.20 -{ - "mappings": { - "properties": { - "geo": { - "properties": { - "coordinates": { - "type": "geo_point" - } - } - } - } - } -} - -//CONSOLE - -The accounts data set doesn't require any mappings. 
- -==== Load the data sets - -At this point, you're ready to use the Elasticsearch {ref}/docs-bulk.html[bulk] -API to load the data sets: - -[source,shell] -curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST ':/bank/account/_bulk?pretty' --data-binary @accounts.json -curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST ':/shakespeare/_bulk?pretty' --data-binary @shakespeare.json -curl -u elastic -H 'Content-Type: application/x-ndjson' -XPOST ':/_bulk?pretty' --data-binary @logs.jsonl - -Or for Windows users, in Powershell: -[source,shell] -Invoke-RestMethod "http://:/bank/account/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "accounts.json" -Invoke-RestMethod "http://:/shakespeare/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "shakespeare.json" -Invoke-RestMethod "http://:/_bulk?pretty" -Method Post -ContentType 'application/x-ndjson' -InFile "logs.jsonl" - -These commands might take some time to execute, depending on the available computing resources. - -Verify successful loading: - -[source,js] -GET /_cat/indices?v - -//CONSOLE - -Your output should look similar to this: - -[source,shell] -health status index pri rep docs.count docs.deleted store.size pri.store.size -yellow open bank 1 1 1000 0 418.2kb 418.2kb -yellow open shakespeare 1 1 111396 0 17.6mb 17.6mb -yellow open logstash-2015.05.18 1 1 4631 0 15.6mb 15.6mb -yellow open logstash-2015.05.19 1 1 4624 0 15.7mb 15.7mb -yellow open logstash-2015.05.20 1 1 4750 0 16.4mb 16.4mb diff --git a/docs/getting-started/tutorial-sample-data.asciidoc b/docs/getting-started/tutorial-sample-data.asciidoc index b0387042a3a003..59b97d8e6b6cce 100644 --- a/docs/getting-started/tutorial-sample-data.asciidoc +++ b/docs/getting-started/tutorial-sample-data.asciidoc @@ -140,7 +140,7 @@ categories, or buckets. . In the *Buckets* pane, select *Add > Split group*. . In the *Aggregation* dropdown, select *Terms*. . In the *Field* dropdown, select *Carrier*. -. Set *Descending* to 4. +. Set *Descending* to *4*. . Click *Apply changes* image:images/apply-changes-button.png[]. + You now see the average ticket price for all four airlines. @@ -171,8 +171,7 @@ but sometimes you need to look at the actual data to understand what's really going on. You can inspect the data behind any visualization and view the {es} query used to retrieve it. -. In the dashboard, hover the pointer over the pie chart. -. Click the icon in the upper right. +. In the dashboard, hover the pointer over the pie chart, and then click the icon in the upper right. . From the *Options* menu, select *Inspect*. + The initial view shows the document count. @@ -192,17 +191,16 @@ When you’re done experimenting with the sample data set, you can remove it. . On the *Sample flight data* card, click *Remove*. [float] -=== Wrapping up +=== Next steps -Now that you have a handle on the {kib} basics, you might be interested in: - -* <>. You’ll learn how to load your own -data, define an index pattern, and create visualizations and dashboards. -* <>. You'll learn more about searching data and filtering by field. -* <>. You’ll find information about all the visualization types -{kib} has to offer. -* <>. You have the ability to share a dashboard, or embed the dashboard in a web page. 
+
+Now that you have a handle on the {kib} basics, you might be interested in the
+tutorial <>, where you'll learn to:
+
+* Load data
+* Define an index pattern
+* Discover and explore data
+* Create visualizations
+* Add visualizations to a dashboard
diff --git a/docs/images/tutorial-sample-dashboard.png b/docs/images/tutorial-sample-dashboard.png
index 66a54094c57412..9f287640f201c2 100644
Binary files a/docs/images/tutorial-sample-dashboard.png and b/docs/images/tutorial-sample-dashboard.png differ