# Elasticsearch Talk

This repository contains Spantree's Elasticsearch workshop. It uses Docker and Docker Compose to launch a number of containers for exploring Elasticsearch. It also contains all the slides in our deck, courtesy of reveal.js.

## Screencast

If you're more of a visual and auditory learner, we've got you covered! We recorded a live version of this talk during our workshop at the StrangeLoop Conference in September 2014, though most of the artifacts have since been updated for later versions of Elasticsearch.

## Authors

* Cedric Hurst: Principal & Lead Software Engineer
* Kevin Greene: Senior Software Engineer
* Gary Turovsky: Senior Software Engineer Emeritus
* Jonathan Freeman: Senior Software Engineer

## Instructions for setting up this sample project

We ask that you walk through these steps before you stop by since you'll need to download stuff and we don't want to crush the hotel bandwidth. The project itself will likely evolve up until the time of the presentation, but the Docker stuff shouldn't change too much.

### Tools You'll Need

Install the following tools to bootstrap your environment; at minimum, you'll need Docker and Docker Compose. We've tested this setup primarily on Macs.

Note: If you're running Docker for Mac, be sure to assign about 4-6GB to your Docker engine by clicking on the whale in the Mac status bar, selecting "Preferences" and then going to the "Advanced" tab. The more memory, the better.

Set DFM Memory
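To double-check the allocation from a terminal, something like the following should work. This assumes a Docker CLI recent enough to support Go-template formatting (Docker 1.13+), and falls back gracefully if the CLI or daemon isn't available:

```sh
# Print the total memory available to the Docker engine, in bytes.
# The fallback echo keeps this from erroring if Docker isn't running.
command -v docker >/dev/null 2>&1 \
  && docker info --format '{{.MemTotal}}' 2>/dev/null \
  || echo "docker CLI not found or daemon not running"
```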

### Clone this repository and initialize the containers

```sh
git clone --depth 1 https://github.com/Spantree/elasticsearch-talk.git
cd elasticsearch-talk
./init-all-the-things.sh
```

Note: Our reveal.js-based slide deck downloads a good chunk of the internet to fulfill its NPM dependencies. Unfortunately, downloads can sometimes get stuck. If you find yourself staring at a screen for over 10 minutes with the message `Waiting for port 9000 to be open`, cancel out of the process by hitting `Ctrl-C` and run `./init-all-the-things.sh` again. You can also open another Terminal tab in the same folder and run `docker-compose logs -f` to see what's going on with the containers running in the background of this script.

### Start your containers

Now that all the containers have been initialized, bring them up with the following command:

```sh
docker-compose up
```
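If you'd rather keep your terminal free, the stack can also be started in the background; `-d` and `--tail` are standard docker-compose flags. The fallback echoes below just keep the commands from failing hard if docker-compose isn't installed or the daemon isn't up:

```sh
# Start the containers detached so the terminal stays free.
docker-compose up -d 2>/dev/null \
  || echo "docker-compose not available or failed to start"

# Show the last 50 log lines per service
# (use `docker-compose logs -f` to follow continuously instead).
docker-compose logs --tail=50 2>/dev/null || true
```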

### Dance!

That's it. That's all there is to it.

Once the containers come up, you should be able to access a multitude of services on your machine from a web browser:

### Elasticsearch

The HTTP interface for interacting with Elasticsearch

Elasticsearch Screenshot
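A quick way to confirm the node is answering, assuming the compose file publishes Elasticsearch on the default HTTP port, 9200:

```sh
# Hit the root endpoint; a healthy node returns a small JSON document
# with its name and version. The fallback echo avoids a hard failure
# while the containers are still starting.
curl -s http://localhost:9200 \
  || echo "Elasticsearch is not reachable yet; give the containers a minute"
```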

### Kibana

The dashboard and UI portal for Elasticsearch

Kibana Screenshot

### Sense

A web-based IDE for messing with Elasticsearch queries.

Sense Screenshot
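Under the hood, Sense just issues HTTP requests, so the same queries work from a shell; for example, a `match_all` search across all indices (again assuming the default localhost:9200 mapping):

```sh
# Equivalent to typing `GET /_search` with a match_all query into Sense.
# The fallback echo keeps this from erroring if the node isn't up yet.
curl -s 'http://localhost:9200/_search?pretty' \
  -H 'Content-Type: application/json' \
  -d '{"query": {"match_all": {}}}' \
  || echo "Elasticsearch is not reachable yet"
```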

### Inquisitor

An interactive debugging tool that shows how analyzers and tokenizers work

Inquisitor Screenshot

### Kopf

A status and realtime control panel for Elasticsearch clusters.

Kopf Screenshot

### Marvel

A monitoring system to track the health and performance of your cluster over time

Marvel Screenshot

### Slide Deck

The slides that go along with the tutorial. If you'd like to see us present this tutorial live, please contact info@spantree.net.

Slide Deck Screenshot

## Stay up-to-date

As mentioned, we may be altering the Docker configuration up until the time of the presentation, so make sure you have the latest changes by running the following from your host terminal:

```sh
git pull
docker-compose pull
docker-compose up
```

## Reclaim your precious disk space

Once you're done with the tutorial, you can remove the Docker containers and sample data by running the following commands (if you're still running the containers in Docker Compose, hit Ctrl-C to stop them):

```sh
docker-compose stop
docker-compose rm -f
docker volume ls -qf dangling=true | xargs docker volume rm
rm data/*
```

## Show us some love

Email info@spantree.net if you run into issues. We'd be happy to help.