# Ingest Pipelines UI

## Summary

The `ingest_pipelines` plugin provides Kibana support for Elasticsearch's ingest pipelines.

This plugin allows Kibana to create, edit, clone, and delete ingest pipelines. It also provides support for simulating a pipeline.

It requires a Basic license and the following cluster privileges: `manage_pipeline` and `cluster:monitor/nodes/info`.
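
For local testing, one way to grant these privileges is with a role along the following lines (the role name `ingest_pipelines_user` is just an example; the built-in `monitor` cluster privilege covers `cluster:monitor/nodes/info`):

```
PUT _security/role/ingest_pipelines_user
{
  "cluster": ["manage_pipeline", "monitor"]
}
```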


## Development

A new app called Ingest Pipelines is registered in the Management section and follows a typical CRUD UI pattern. The client-side portion of this app lives in `public/application` and uses endpoints registered in `server/routes/api`. For more information on the pipeline processors editor component, check out the component readme.

See the kibana contributing guide for instructions on setting up your development environment.

## Test coverage

The app has the following test coverage:

- API integration tests
- Smoke-level functional test
- Client-integration tests

## Quick steps for manual testing

You can run the following request in Console to create an ingest pipeline:

```
PUT _ingest/pipeline/test_pipeline
{
  "description": "_description",
  "processors": [
    {
      "set": {
        "field": "field1",
        "value": "value1"
      }
    },
    {
      "rename": {
        "field": "dont_exist",
        "target_field": "field1",
        "ignore_failure": true
      }
    },
    {
      "rename": {
        "field": "foofield",
        "target_field": "new_field",
        "on_failure": [
          {
            "set": {
              "field": "field2",
              "value": "value2"
            }
          }
        ]
      }
    },
    {
      "drop": {
        "if": "false"
      }
    },
    {
      "drop": {
        "if": "true"
      }
    }
  ]
}
```
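
To confirm the pipeline was created, you can retrieve it in Console with the standard get pipeline API:

```
GET _ingest/pipeline/test_pipeline
```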

Then, go to the Ingest Pipelines UI to edit, delete, clone, or view details of the pipeline.

To simulate a pipeline, go to the "Edit" page of your pipeline. Click the "Add documents" link under the "Processors" section. You may add the following sample documents to test the pipeline:

```
// The first document in this example should trigger the on_failure processor in the pipeline, while the second one should succeed.
[
  {
    "_index": "my_index",
    "_id": "id1",
    "_source": {
      "foo": "bar"
    }
  },
  {
    "_index": "my_index",
    "_id": "id2",
    "_source": {
      "foo": "baz",
      "foofield": "bar"
    }
  }
]
```

Alternatively, you can add a document from an existing index, or create some sample data of your own. Afterward, click the "Run the pipeline" button to view the output.
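
If you want to compare the UI output with Elasticsearch directly, an equivalent simulation can be run in Console using the standard simulate pipeline API and the same sample documents:

```
POST _ingest/pipeline/test_pipeline/_simulate
{
  "docs": [
    {
      "_index": "my_index",
      "_id": "id1",
      "_source": {
        "foo": "bar"
      }
    },
    {
      "_index": "my_index",
      "_id": "id2",
      "_source": {
        "foo": "baz",
        "foofield": "bar"
      }
    }
  ]
}
```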