14 changes: 6 additions & 8 deletions docs/getting-started/automatic-import.asciidoc
@@ -19,9 +19,9 @@ TIP: Click https://elastic.navattic.com/automatic-import[here] to access an inte
.Requirements
[sidebar]
--
-- A working <<assistant-connect-to-bedrock, Amazon Bedrock connector>>. Automatic Import currently works with all variants of Claude 3. Other models are not supported in this technical preview, but will be supported in future versions.
+- A working <<llm-connector-guides, LLM connector>>. Recommended models: `Claude 3.5 Sonnet`, `GPT-4o`, and `Gemini-1.5-pro-002`.
- An https://www.elastic.co/pricing[Enterprise] subscription.
-- A sample of the data you want to import, in JSON or NDJSON format.
+- A sample of the data you want to import, in a structured or unstructured format (including JSON, NDJSON, and Syslog).
--
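
As a quick, purely illustrative sketch of the accepted shapes (field names, host, and addresses are invented; the IPs are RFC 5737 documentation addresses), the same hypothetical login event could be supplied as a structured NDJSON record:

----
{"timestamp": "2024-05-01T12:00:00Z", "event_action": "user_login", "source_ip": "203.0.113.10", "outcome": "success"}
----

or as an unstructured Syslog line:

----
<134>May  1 12:00:00 fw01 demo_app[4321]: user login from 203.0.113.10 succeeded
----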

IMPORTANT: Using Automatic Import allows users to create new third-party data integrations through the use of third-party generative AI models (“GAI models”). Any third-party GAI models that you choose to use are owned and operated by their respective providers. Elastic does not own or control these third-party GAI models, nor does it influence their design, training, or data-handling practices. Using third-party GAI models with Elastic solutions, and using your data with third-party GAI models is at your discretion. Elastic bears no responsibility or liability for the content, operation, or use of these third-party GAI models, nor for any potential loss or damage arising from their use. Users are advised to exercise caution when using GAI models with personal, sensitive, or confidential information, as data submitted may be used to train the models or for other purposes. Elastic recommends familiarizing yourself with the development practices and terms of use of any third-party GAI models before use. You are responsible for ensuring that your use of Automatic Import complies with the terms and conditions of any third-party platform you connect with.
@@ -35,20 +35,18 @@ IMPORTANT: Using Automatic Import allows users to create new third-party data in
image::images/auto-import-create-new-integration-button.png[The Integrations page with the Create new integration button highlighted]
+
3. Click **Create integration**.
-4. Select an <<assistant-connect-to-bedrock, Amazon Bedrock connector>>.
+4. Select an <<llm-connector-guides, LLM connector>>.
5. Define how your new integration will appear on the Integrations page by providing a **Title**, **Description**, and **Logo**. Click **Next**.
6. Define your integration's package name, which will prefix the imported event fields.
7. Define your **Data stream title**, **Data stream description**, and **Data stream name**. These fields appear on the integration's configuration page to help identify the data stream it writes to.
8. Select your {filebeat-ref}/configuration-filebeat-options.html[**Data collection method**]. This determines how your new integration will ingest the data (for example, from an S3 bucket, an HTTP endpoint, or a file stream).
-9. Upload a sample of your data in JSON or NDJSON format. Make sure to include all the types of events that you want the new integration to handle.
+9. Upload a sample of your data. Make sure to include all the types of events that you want the new integration to handle.
+
.Best practices for sample data
[sidebar]
--
-- The file extension (`.JSON` or `.NDJSON`) must match the file format.
-- Only the first 10 events in the sample are analyzed. In this technical preview, additional data is truncated.
-- Ensure each JSON or NDJSON object represents an event, and avoid deeply nested object structures.
-- The more variety in your sample, the more accurate the pipeline will be (for example, include 10 unique log entries instead of the same type of entry 10 times).
+- For JSON and NDJSON samples, each object in your sample should represent an event, and you should avoid deeply nested object structures.
+- The more variety in your sample, the more accurate the pipeline will be. Include a wide range of unique log entries instead of just repeating the same type of entry. Automatic Import will select up to 100 different events from your sample to use as the basis for the new integration.
- Ideally, each field name should describe what the field contains.
--
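
To make these practices concrete, here is a minimal, hypothetical NDJSON sample (the field names, users, and RFC 5737 example addresses are all invented): each line is one flat event, field names describe their contents, and the entries vary rather than repeating a single shape.

[source,json]
----
{"timestamp": "2024-05-01T12:00:00Z", "event_action": "user_login", "source_ip": "203.0.113.10", "user_name": "alice", "outcome": "success"}
{"timestamp": "2024-05-01T12:01:12Z", "event_action": "file_download", "source_ip": "198.51.100.7", "user_name": "bob", "file_name": "report.pdf", "outcome": "success"}
{"timestamp": "2024-05-01T12:02:45Z", "event_action": "user_login", "source_ip": "203.0.113.99", "user_name": "carol", "outcome": "failure", "failure_reason": "invalid_password"}
----

A sample in this shape gives the model several distinct event structures to generalize from, with no deep nesting.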
+