+
+
+
+## Key Features
+
+* **LLM Integration with Firebolt** - Connect your AI assistants directly to your data warehouse
+ - Enable AI agents to autonomously query your data and build analytics solutions
+ - Provide LLMs with specialized knowledge of Firebolt's capabilities and features
+
+* **SQL Query Execution**
+ - Direct query execution against Firebolt databases
+ - Support for multiple query types and execution modes
+
+* **Documentation Access**
+ - Comprehensive Firebolt documentation available to the LLM
+ - SQL reference, function reference, and more
+
+* **Account Management**
+ - Connect to different accounts and engines
+ - Manage authentication seamlessly
+
+* **Multi-platform Support**
+ - Run on any platform supporting Go binaries
+ - Docker container support for easy deployment
+
+## How To Use
+
+To get started with the Firebolt MCP Server, you'll need a Firebolt service account. If you don't have a Firebolt account yet, [sign up here](https://www.firebolt.io/signup).
+
+### Option 1: Use the Docker image
+
+```bash
+# Run with Docker
+docker run -p 8080:8080 \
+ -e FIREBOLT_MCP_CLIENT_ID=your-client-id \
+ -e FIREBOLT_MCP_CLIENT_SECRET=your-client-secret \
+ -e FIREBOLT_MCP_TRANSPORT=sse \
+ firebolt/mcp-server:latest
+```
+
+### Option 2: Download and run the binary
+
+```bash
+# Download the latest release for your platform from:
+# https://github.com/firebolt-db/mcp-server/releases
+
+# Run the server
+./firebolt-mcp-server \
+ --client-id your-client-id \
+ --client-secret your-client-secret \
+ --transport sse
+```
+
+### Connecting your LLM
+
+Once the server is running, you can connect to it from any MCP-compatible client. The example below is illustrative; the exact request shape depends on your client's MCP support:
+
+```bash
+# Illustrative only — adjust to your client's MCP tool configuration
+curl -X POST https://api.openai.com/v1/chat/completions \
+ -H "Content-Type: application/json" \
+ -H "Authorization: Bearer $OPENAI_API_KEY" \
+ -d '{
+ "model": "gpt-4",
+ "messages": [
+ {"role": "system", "content": "You are a data analyst working with Firebolt."},
+ {"role": "user", "content": "How many users did we have last month?"}
+ ],
+ "tools": [
+ {
+ "type": "mcp",
+ "mcp": {
+ "endpoint": "http://localhost:8080",
+ "auth": {
+ "type": "bearer",
+ "token": "YOUR_TOKEN"
+ }
+ }
+ }
+ ]
+ }'
+```
+
+## Requirements
+
+- Firebolt service account credentials (client ID and client secret)
+- For development: Go 1.24.1 or later
+- For deployment: Docker (optional)
+
+## Architecture
+
+The Firebolt MCP Server implements the [Model Context Protocol](https://modelcontextprotocol.io) specification, providing:
+
+1. **Tools** - Task-specific capabilities provided to the LLM:
+ - `Connect`: Establish connections to Firebolt engines and databases
+ - `Docs`: Access Firebolt documentation
+ - `Query`: Execute SQL queries against Firebolt
+
+2. **Resources** - Data that can be referenced by the LLM:
+ - Documentation articles
+ - Account information
+ - Database schema
+ - Engine statistics
+
+3. **Prompts** - Predefined instructions for the LLM:
+ - Firebolt Expert: Prompts the model to act as a Firebolt specialist
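As a sketch of the wire format, an MCP client invokes one of these tools with a JSON-RPC 2.0 `tools/call` request. The envelope and method name come from the MCP specification; the tool name casing and argument names below are assumptions, not the server's actual schema:

```python
import json

# Hypothetical tools/call request for the Query tool. Only the JSON-RPC
# envelope and the "tools/call" method come from the MCP spec — the
# argument names here are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query",
        "arguments": {"sql": "SELECT 42;"},
    },
}

payload = json.dumps(request)
print(payload)
```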
+
+## Development
+
+To set up the development environment:
+
+```bash
+# Clone this repository
+git clone https://github.com/firebolt-db/mcp-server.git
+
+# Go into the repository
+cd mcp-server
+
+# Install Task (if you don't have it already)
+go install github.com/go-task/task/v3/cmd/task@latest
+
+# Update Go dependencies
+task mod
+
+# Build the application
+task build
+```
+
+### Running tests
+
+```bash
+go test ./...
+```
+
+### Building Docker image
+
+```bash
+docker build -t firebolt-mcp-server .
+```
+
+## License
+
+MIT
+
+---
+
+> [firebolt.io](https://www.firebolt.io) ·
+> GitHub [@firebolt-db](https://github.com/firebolt-db) ·
+> Twitter [@FireboltDB](https://twitter.com/FireboltDB)
diff --git a/Taskfile.yaml b/Taskfile.yaml
new file mode 100644
index 0000000..097af17
--- /dev/null
+++ b/Taskfile.yaml
@@ -0,0 +1,37 @@
+version: 3
+
+tasks:
+
+ help:
+ desc: Display this help screen
+ silent: true
+ cmds:
+ - task --list
+
+ mod:
+ desc: Tidy Go modules and download dependencies
+ silent: true
+ cmd: |
+ go mod tidy
+ go mod download
+
+ build:
+ desc: Build application binary
+ silent: true
+ deps:
+ - task: goreleaser
+ vars:
+ CLI_ARGS: build --clean --snapshot --single-target
+
+ goreleaser:
+ desc: Run GoReleaser in a Docker container
+ silent: true
+ cmd: |
+ docker run --rm --privileged \
+ -v $PWD:/src \
+ -v /var/run/docker.sock:/var/run/docker.sock \
+ -w /src \
+ -e GOOS={{OS}} \
+ -e GOARCH={{ARCH}} \
+ goreleaser/goreleaser:v2.8.2 \
+ {{.CLI_ARGS}}
diff --git a/cmd/docs-scrapper/README.md b/cmd/docs-scrapper/README.md
new file mode 100644
index 0000000..fa4582a
--- /dev/null
+++ b/cmd/docs-scrapper/README.md
@@ -0,0 +1,4 @@
+# Firebolt Documentation Scrapper
+
+This tool scrapes the Firebolt documentation website and writes the content to a local directory.
+This is a temporary solution until we rework our documentation to be LLM-friendly.
diff --git a/cmd/docs-scrapper/fireboltdocs/api_reference.md b/cmd/docs-scrapper/fireboltdocs/api_reference.md
new file mode 100644
index 0000000..92af181
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/api_reference.md
@@ -0,0 +1,90 @@
+# [](#api-reference)API reference
+
+The Firebolt API enables programmatic interaction with Firebolt databases for running SQL statements, retrieving data, and managing engines. Use API calls to submit queries, retrieve results, and perform administrative tasks without the user interface (UI).
+
+Firebolt offers official SDKs and drivers to simplify API usage. These drivers interface between your application and Firebolt, handling authentication, SQL statement submission, and result processing.
+
+
+
+To submit an API request, set up a Firebolt driver and use it to send a query to Firebolt, as explained in the following sections.
+
+**Topics:**
+
+- [Prerequisites](#prerequisites) – Set up your account and credentials before submitting an API request.
+- [Set up a driver](#set-up-a-driver) – Download, install, and configure a Firebolt driver to send queries using the Firebolt API.
+- [Submit a query](#submit-a-query) – Use a driver to connect to Firebolt and submit a query.
+
+## [](#prerequisites)Prerequisites
+
+Before you submit API queries, you need the following:
+
+1. **A Firebolt account** – Ensure that you have access to an active Firebolt account. If you don’t have access, you can [sign up for an account](https://www.firebolt.io/sign-up). For more information about how to register with Firebolt, see [Get started with Firebolt](/Guides/getting-started/).
+2. **A Firebolt service account** – You must have access to an active Firebolt [service account](/Guides/managing-your-organization/service-accounts.html), which facilitates programmatic access to Firebolt.
+3. **A user associated with the Firebolt service account** – You must associate a [user](/Guides/managing-your-organization/managing-users.html#-users) with your service account, and the user must have the necessary permissions to run the query on the specified database using the specified engine.
+4. **Sufficient permissions** – If you want to query user data through a specific engine, you must have sufficient permissions on the engine, as well as on any tables and databases you access.
+
+## [](#set-up-a-driver)Set up a driver
+
+Drivers are software components that facilitate communication between applications and databases. Use a Firebolt driver to connect to a Firebolt database, authenticate securely, and run SQL statements with minimal setup.
+
+Use a Firebolt driver for the following:
+
+- **Simplified API access** – Manage authentication and request formatting, eliminating the need for manual API calls. Requires only installation and basic configuration to connect and run SQL statements.
+- **Optimized performance** – Improve query processing and connection management for faster response times.
+- **Secure authentication** – Use service accounts and industry-standard methods to ensure secure access.
+
+Firebolt provides multiple drivers and SDKs. Refer to the following [driver documentation](/Guides/integrations/integrations.html) for installation instructions:
+
+- [Node.js SDK](/Guides/developing-with-firebolt/connecting-with-nodejs.html) – For JavaScript-based applications.
+- [Python SDK](/Guides/developing-with-firebolt/connecting-with-Python.html) – For Python-based applications and data workflows.
+- [JDBC Driver](/Guides/developing-with-firebolt/connecting-with-jdbc.html) – For Java applications.
+- [SQLAlchemy](/Guides/developing-with-firebolt/connecting-with-sqlalchemy.html) – For ORM-based integrations in Python.
+- [.NET SDK](/Guides/developing-with-firebolt/connecting-with-net-sdk.html) – For applications running on the .NET framework.
+- [Go SDK](/Guides/developing-with-firebolt/connecting-with-go.html) – For applications using the Go programming language.
+
+## [](#submit-a-query)Submit a query
+
+After setting up a Firebolt driver, submit a query to verify connectivity and validate your credentials.
+
+Query submission has a similar format across all Firebolt drivers and SDKs. The following code example shows how to submit a query using the [Python SDK](/Guides/developing-with-firebolt/connecting-with-Python.html). For other languages, consult the specific driver's documentation for details:
+
+```
+from firebolt.db import connect
+from firebolt.client.auth import ClientCredentials
+
+id = "service_account_id"
+secret = "service_account_secret"
+engine_name = "your_engine_name"
+database_name = "your_test_db"
+account_name = "your_account_name"
+
+firstQuery = """
+ SELECT 42;
+ """
+secondQuery = """
+ SELECT 'my second query';
+"""
+
+with connect(
+ engine_name=engine_name,
+ database=database_name,
+ account_name=account_name,
+ auth=ClientCredentials(id, secret),
+) as connection:
+ cursor = connection.cursor()
+ cursor.execute(firstQuery)
+ for row in cursor.fetchall():
+ print(row)
+ # The cursor can be reused for multiple queries.
+ cursor.execute(secondQuery)
+ for row in cursor.fetchall():
+ print(row)
+```
+
+### [](#query-types)Query types
+
+Firebolt supports two query modes: **synchronous** and **asynchronous**.
+
+A [synchronous query](/API-reference/using-sync-queries.html) waits for a response before proceeding. This mode is ideal for interactive queries that require immediate results, such as dashboard queries or user-initiated requests. Firebolt maintains an open HTTP connection for the duration of the query and streams results back as they become available.
+
+An [asynchronous query](/API-reference/using-async-queries.html) runs in the background, allowing your application to continue executing other tasks. This is useful for long-running queries, such as [INSERT](/sql_reference/commands/data-management/insert.html) or [ALTER ENGINE](/sql_reference/commands/engines/alter-engine.html), where waiting for a response is unnecessary. The query status can be checked periodically using a query token.
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/api_reference_using_async_queries.md b/cmd/docs-scrapper/fireboltdocs/api_reference_using_async_queries.md
new file mode 100644
index 0000000..1fb0c73
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/api_reference_using_async_queries.md
@@ -0,0 +1,119 @@
+# [](#asynchronous-queries)Asynchronous queries
+
+An asynchronous query runs in the background and returns a successful response once it is accepted by the computing cluster, so a client can proceed with other tasks without waiting for the statement to finish. You can check the query's status at intervals based on the expected duration of the operation, rather than maintaining an open connection for the entire run, which can be unreliable or unnecessary for long-running tasks.
+
+Asynchronous queries are ideal for long-running SQL statements, such as `INSERT`, `STOP ENGINE`, and `ALTER ENGINE`, where keeping an HTTP connection open is both unreliable and unnecessary, and where the statement might return zero rows.
+
+Use asynchronous queries for any supported operation that may take more than a few minutes and returns no results.
+
+**Supported asynchronous queries**
+
+- [INSERT](/sql_reference/commands/data-management/insert.html) – Inserts one or more values into a specified table.
+- [COPY FROM](/sql_reference/commands/data-management/copy-from.html) – Loads data from an Amazon S3 bucket into Firebolt.
+- [COPY TO](/sql_reference/commands/data-management/copy-to.html) – Copies the result of a `SELECT` query to an Amazon S3 location.
+- [VACUUM](/sql_reference/commands/data-management/vacuum.html) – Optimizes tablets for query performance.
+- [CREATE AGGREGATING INDEX](/sql_reference/commands/data-definition/create-aggregating-index.html) – Creates an index for precomputing and storing frequent aggregations.
+- [CREATE AS SELECT](/sql_reference/commands/data-definition/create-fact-dimension-table-as-select.html) – Creates a table and loads data into it based on a `SELECT` query.
+- [Engine commands](/sql_reference/commands/engines/) including [ALTER ENGINE](/sql_reference/commands/engines/alter-engine.html), [STOP ENGINE](/sql_reference/commands/engines/stop-engine.html), and [START ENGINE](/sql_reference/commands/engines/start-engine.html). By default, Firebolt engines finish running queries before returning results, which can take significant time. Starting an engine can also take more than a few minutes.
+
+## [](#how-to-submit-an-asynchronous-query)How to submit an asynchronous query
+
+You can only submit an asynchronous query programmatically, using the Firebolt API or one of the drivers listed below. Every SQL statement submitted using the Firebolt **Develop Space** user interface is a synchronous query.
+
+The following are required prerequisites to submit a query programmatically:
+
+1. **A Firebolt account** – Ensure that you have access to an active Firebolt account. If you don’t have access, you can [sign up for an account](https://www.firebolt.io/sign-up). For more information about how to register with Firebolt, see [Get started with Firebolt](/Guides/getting-started/).
+2. **A Firebolt service account** – You must have access to an active Firebolt [service account](/Guides/managing-your-organization/service-accounts.html), which facilitates programmatic access to Firebolt.
+3. **A user associated with the Firebolt service account** – You must associate a [user](/Guides/managing-your-organization/managing-users.html#-users) with your service account, and the user must have the necessary permissions to run the query on the specified database using the specified engine.
+4. **Sufficient permissions** – If you want to query user data through a specific engine, you must have sufficient permissions on the engine, as well as on any tables and databases you access.
+
+To submit an asynchronous query via a raw HTTP request, you must use Firebolt protocol version 2.3 or later; query status can be checked with any client. You can verify the protocol version by checking the `X-Firebolt-Protocol-Version` header in the API response.
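The version gate above can be sketched as a small helper; the header name comes from the text, while the parsing logic and version formatting are assumptions:

```python
def supports_async_submission(headers: dict) -> bool:
    """Return True if the server's protocol version is 2.3 or later.

    Hypothetical helper: parses the X-Firebolt-Protocol-Version response
    header mentioned above; real responses may format the version differently.
    """
    version = headers.get("X-Firebolt-Protocol-Version", "0.0")
    parts = version.split(".")
    major = int(parts[0])
    minor = int(parts[1]) if len(parts) > 1 else 0
    return (major, minor) >= (2, 3)

print(supports_async_submission({"X-Firebolt-Protocol-Version": "2.4"}))  # True
print(supports_async_submission({"X-Firebolt-Protocol-Version": "1.0"}))  # False
```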
+
+## [](#use-a-firebolt-driver)Use a Firebolt Driver
+
+Use a Firebolt driver to connect to a Firebolt database, authenticate securely, and run SQL statements with minimal setup. The driver provides built-in methods for running SQL statements, handling responses, and managing connections. Only some Firebolt drivers support asynchronous queries. See the documentation for each driver for specific details on how to submit asynchronous queries programmatically:
+
+- [Python SDK](/Guides/developing-with-firebolt/connecting-with-Python.html) – Firebolt Python SDK
+- [Node.js](/Guides/developing-with-firebolt/connecting-with-nodejs.html) – Firebolt Node SDK
+
+## [](#submit-a-query)Submit a query
+
+Query submission has a similar format across all Firebolt drivers and SDKs. The following code example shows how to submit an asynchronous query using the [Python SDK](/Guides/developing-with-firebolt/connecting-with-Python.html). For other languages, consult the specific driver's documentation for details.
+
+The following code example establishes a connection to a Firebolt database using a service account, submits an asynchronous `INSERT` statement that groups generated numbers, periodically checks its run status, and then retrieves the row count from the `example` table:
+
+```
+from time import sleep
+
+from firebolt.db import connect
+from firebolt.client.auth import ClientCredentials
+
+id = "service_account_id"
+secret = "service_account_secret"
+engine_name = "your_engine_name"
+database_name = "your_test_db"
+account_name = "your_account_name"
+
+query = """
+ INSERT INTO example SELECT idMod7 as id
+ FROM (
+ SELECT id%7 as idMod7
+ FROM GENERATE_SERIES(1, 10000000000) s(id)
+ )
+ GROUP BY idMod7;
+ """
+
+with connect(
+ engine_name=engine_name,
+ database=database_name,
+ account_name=account_name,
+ auth=ClientCredentials(id, secret),
+) as connection:
+ cursor = connection.cursor()
+
+ cursor.execute_async(query) # Needs firebolt-sdk 1.9.0 or later
+ # Token lets us check the status of the query later
+ token = cursor.async_query_token
+ print(f"Query Token: {token}")
+
+ # Block until the query is done
+ # You can also do other work here
+ while connection.is_async_query_running(token):
+ print("Checking query status...")
+ sleep(5)
+
+ status = "Success" if connection.is_async_query_successful(token) else "Failed"
+ print(f"Query Status: {status}")
+
+ cursor.execute("SELECT count(*) FROM example;") # Should contain 7 rows
+ for row in cursor.fetchall():
+ print(row)
+```
+
+### [](#check-query-status)Check query status
+
+The query status token is included in the initial response when the query is submitted. If needed, you can also retrieve the token from the [engine\_running\_queries](/sql_reference/information-schema/engine-running-queries.html) view.
+
+To check the status of an asynchronous query, use the token with the `CALL fb_GetAsyncStatus` function as follows:
+
+```
+CALL fb_GetAsyncStatus('');
+```
+
+The previous code example returns a single row with the following schema:
+
+| Column Name | Data Type | Description |
+| --- | --- | --- |
+| account\_name | TEXT | The name of the account where the asynchronous query was submitted. |
+| user\_name | TEXT | The name of the user who submitted the asynchronous query. |
+| request\_id | TEXT | Unique ID of the request which submitted the asynchronous query. |
+| query\_id | TEXT | Unique ID of the asynchronous query. |
+| status | TEXT | Current status of the query: SUSPENDED, RUNNING, CANCELLED, FAILED, SUCCEEDED, or IN\_DOUBT. |
+| submitted\_time | TIMESTAMPTZ | The time the asynchronous query was submitted. |
+| start\_time | TIMESTAMPTZ | The time the asynchronous query was most recently started. |
+| end\_time | TIMESTAMPTZ | If the asynchronous query is completed, the time it finished. |
+| error\_message | TEXT | If the asynchronous query failed, the error message from the failure. |
+| retries | LONG | The number of times the asynchronous query has been retried. |
+| scanned\_bytes | LONG | The number of bytes scanned by the asynchronous query. |
+| scanned\_rows | LONG | The number of rows scanned by the asynchronous query. |
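For polling, only the `status` column matters until the query reaches a terminal state. A small, hypothetical helper (the status values come from the schema above; everything else is illustrative):

```python
# Statuses after which the query will not make further progress, per the
# schema above. IN_DOUBT is ambiguous, so it is intentionally left out here.
TERMINAL_STATUSES = {"CANCELLED", "FAILED", "SUCCEEDED"}

def is_finished(status: str) -> bool:
    # True once the asynchronous query has reached a terminal state;
    # statuses such as RUNNING or SUSPENDED mean it may still complete.
    return status.upper() in TERMINAL_STATUSES

print(is_finished("RUNNING"))    # False — keep polling
print(is_finished("SUCCEEDED"))  # True — inspect end_time / error_message
```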
+
+### [](#cancel-a-query)Cancel a query
+
+A running asynchronous query can be cancelled using the [CANCEL](/sql_reference/commands/queries/cancel.html) statement as follows:
+
+```
+CANCEL QUERY '';
+```
+
+In the previous code example, retrieve the query ID from the [engine\_running\_queries](/sql_reference/information-schema/engine-running-queries.html) view or from the original query submission response.
+
+## [](#error-handling)Error handling
+
+| Error Type | Cause | Solution |
+| --- | --- | --- |
+| **Protocol version mismatch** | Using an outdated Firebolt protocol version. | Make sure your driver supports async queries. |
+| **Query failure** | The query encounters an execution error. | Check the error message in `fb_GetAsyncStatus` and validate the query syntax. |
+| **Token not found** | The provided async query token is invalid or expired. | Verify that the correct token is being used and that the query has not expired. |
+| **Engine does not exist or you don’t have permission to access it** | The specified Firebolt engine is not running or you don’t have permission to access it. | Start the engine before submitting the query and double-check permissions. |
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/api_reference_using_sync_queries.md b/cmd/docs-scrapper/fireboltdocs/api_reference_using_sync_queries.md
new file mode 100644
index 0000000..181b5f6
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/api_reference_using_sync_queries.md
@@ -0,0 +1,88 @@
+# [](#synchronous-queries)Synchronous queries
+
+Synchronous queries in Firebolt process SQL statements and wait for a response before proceeding with other operations. These queries are best suited for interactive analytics, dashboards, and data retrieval where low-latency performance is essential. Synchronous queries complete within a single request-response cycle.
+
+Synchronous queries are the default query mode for submitting SQL statements in Firebolt. All statements in the [SQL reference](/sql_reference/) guide can be used inside a synchronous query.
+
+## [](#how-to-submit-a-synchronous-query)How to submit a synchronous query
+
+You can submit a synchronous query using the user interface (UI) in the Firebolt **Develop Space**. Every SQL statement submitted using the UI is a synchronous query. For more information about how to submit a SQL statement using the UI, see [Get started using SQL](/Guides/getting-started/get-started-sql.html).
+
+You can also submit a synchronous query programmatically using the Firebolt API. The following are required prerequisites to submit a query programmatically:
+
+1. **A Firebolt account** – Ensure that you have access to an active Firebolt account. If you don’t have access, you can [sign up for an account](https://www.firebolt.io/sign-up). For more information about how to register with Firebolt, see [Get started with Firebolt](/Guides/getting-started/).
+2. **A Firebolt service account** – You must have access to an active Firebolt [service account](/Guides/managing-your-organization/service-accounts.html), which facilitates programmatic access to Firebolt.
+3. **A user associated with the Firebolt service account** – You must associate a [user](/Guides/managing-your-organization/managing-users.html#-users) with your service account, and the user must have the necessary permissions to run the query on the specified database using the specified engine.
+4. **Sufficient permissions** – If you want to query user data through a specific engine, you must have sufficient permissions on the engine, as well as on any tables and databases you access.
+
+To submit a synchronous query programmatically, use a Firebolt driver to send an HTTP request with the SQL statement to Firebolt’s API endpoint.
+
+### [](#use-a-firebolt-driver)Use a Firebolt driver
+
+Use a Firebolt driver to connect to a Firebolt database, authenticate securely, and run SQL statements with minimal setup. The driver provides built-in methods for running SQL statements, handling responses, and managing connections. All Firebolt drivers support synchronous queries. See the documentation for each driver for specific details on how to submit synchronous queries programmatically:
+
+- [Node.js SDK](/Guides/developing-with-firebolt/connecting-with-nodejs.html) – Firebolt Node.js SDK
+- [Python SDK](/Guides/developing-with-firebolt/connecting-with-Python.html) – Firebolt Python SDK
+- [JDBC Driver](/Guides/developing-with-firebolt/connecting-with-jdbc.html) – Firebolt JDBC Driver
+- [SQLAlchemy](/Guides/developing-with-firebolt/connecting-with-sqlalchemy.html) – Firebolt SQLAlchemy adapter
+- [.NET SDK](/Guides/developing-with-firebolt/connecting-with-net-sdk.html) – Firebolt .NET SDK
+- [Go SDK](/Guides/developing-with-firebolt/connecting-with-go.html) – Firebolt Go SDK
+
+### [](#submit-a-query)Submit a query
+
+After setting up a Firebolt driver, submit a query to verify connectivity and validate your credentials.
+
+Query submission has a similar format across all Firebolt drivers and SDKs. The following code example establishes a connection to a Firebolt database using a service account’s credentials, runs a simple `SELECT` statement, then retrieves and prints the result using the [Python SDK](/Guides/developing-with-firebolt/connecting-with-Python.html). For other languages, consult the specific driver's documentation for details.
+
+```
+from firebolt.db import connect
+from firebolt.client.auth import ClientCredentials
+
+id = "service_account_id"
+secret = "service_account_secret"
+engine_name = "your_engine_name"
+database_name = "your_test_db"
+account_name = "your_account_name"
+
+query = """
+ SELECT 42;
+ """
+
+with connect(
+ engine_name=engine_name,
+ database=database_name,
+ account_name=account_name,
+ auth=ClientCredentials(id, secret),
+) as connection:
+ cursor = connection.cursor()
+
+ cursor.execute(query)
+ for row in cursor.fetchall():
+ print(row)
+```
+
+#### [](#handling-long-running-synchronous-queries)Handling long-running synchronous queries
+
+Synchronous queries maintain an open HTTP connection for the duration of the query, and stream results back as they become available. While there is no strict time limit, queries running longer than one hour may experience connectivity interruptions. If the HTTP connection is lost, some SQL statements, including `INSERT`, continue to run by default, while `SELECT` statements are cancelled. You can modify this behavior using the [cancel\_query\_on\_connection\_drop](/Reference/system-settings.html#query-cancellation-mode-on-connection-drop) setting.
+
+To avoid connection issues, consider submitting long-running queries as [asynchronous](/API-reference/using-async-queries.html) queries.
+
+#### [](#check-query-status)Check query status
+
+The queries running on an engine are available in the [engine\_running\_queries](/sql_reference/information-schema/engine-running-queries.html) view.
+
+#### [](#cancel-a-query)Cancel a query
+
+A running synchronous query can be cancelled using the [CANCEL](/sql_reference/commands/queries/cancel.html) statement as follows:
+
+```
+CANCEL QUERY '';
+```
+
+Use the query ID retrieved from the [engine\_running\_queries](/sql_reference/information-schema/engine-running-queries.html) view to cancel a specific query.
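The two steps — look up the query ID, then cancel it — can be sketched like this; the helper and the escaping are illustrative, not part of any SDK:

```python
def cancel_statement(query_id: str) -> str:
    # Hypothetical helper: build a CANCEL statement for a query ID taken
    # from the engine_running_queries view. Doubling single quotes is a
    # basic guard; real code should also validate the ID.
    escaped = query_id.replace("'", "''")
    return f"CANCEL QUERY '{escaped}';"

# e.g. cursor.execute(cancel_statement(running_id)) with the Python SDK
print(cancel_statement("abc-123"))  # CANCEL QUERY 'abc-123';
```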
+
+## [](#error-handling)Error handling
+
+Common errors and solutions when using synchronous queries:
+
+| Error Type | Cause | Solution |
+| --- | --- | --- |
+| **Connection loss** | The HTTP connection is interrupted. | Depending on the type of query, the query may still be running. Check [engine\_running\_queries](/sql_reference/information-schema/engine-running-queries.html) to verify, and use the `cancel_query_on_connection_drop` setting to modify the behavior. |
+| **Engine does not exist or you don’t have permission to access it** | The user lacks required permissions. | Ensure the user has `USAGE` permission on the engine and that the engine exists. |
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/docs.go b/cmd/docs-scrapper/fireboltdocs/docs.go
new file mode 100644
index 0000000..351c44e
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/docs.go
@@ -0,0 +1,6 @@
+package fireboltdocs
+
+import "embed"
+
+//go:embed *.md
+var FS embed.FS
diff --git a/cmd/docs-scrapper/fireboltdocs/guides.md b/cmd/docs-scrapper/fireboltdocs/guides.md
new file mode 100644
index 0000000..4912fde
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides.md
@@ -0,0 +1,15 @@
+# [](#guides)Guides
+
+Learn how to configure, govern, develop and query with Firebolt.
+
+* * *
+
+- [Manage organization](/Guides/managing-your-organization/)
+- [Get started](/Guides/getting-started/)
+- [Operate Engines](/Guides/operate-engines/operate-engines.html)
+- [Load data](/Guides/loading-data/loading-data.html)
+- [Query data](/Guides/query-data/)
+- [Configure security](/Guides/security/)
+- [Develop with Firebolt](/Guides/developing-with-firebolt/)
+- [Integrate with Firebolt](/Guides/integrations/integrations.html)
+- [Export data](/Guides/exporting-data.html)
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt.md b/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt.md
new file mode 100644
index 0000000..ae6fa95
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt.md
@@ -0,0 +1,23 @@
+# [](#developing-with-firebolt)Developing with Firebolt
+
+Firebolt provides multiple SDKs, drivers, and libraries to integrate with various programming environments, enabling developers to run queries, manage databases, and build data-driven applications efficiently.
+
+This guide covers how to develop with Firebolt using different languages and frameworks, including:
+
+- [Node.js](/Guides/developing-with-firebolt/connecting-with-nodejs.html) – Use the Firebolt Node.js SDK to interact with Firebolt databases.
+- [Python](/Guides/developing-with-firebolt/connecting-with-Python.html) – Leverage the Firebolt Python SDK for data analysis and automation.
+- [JDBC](/Guides/developing-with-firebolt/connecting-with-jdbc.html) – Connect Firebolt to Java-based applications with the JDBC driver.
+- [SQLAlchemy](/Guides/developing-with-firebolt/connecting-with-sqlalchemy.html) – Integrate Firebolt with SQLAlchemy for ORM-based workflows.
+- [.NET SDK](/Guides/developing-with-firebolt/connecting-with-net-sdk.html) – Work with Firebolt databases using .NET applications.
+- [Go](/Guides/developing-with-firebolt/connecting-with-go.html) – Access Firebolt from Go applications with the Firebolt Go client.
+
+Each section provides installation instructions, authentication methods, and query examples tailored to the respective language or framework.
+
+* * *
+
+- [Node.js](/Guides/developing-with-firebolt/connecting-with-nodejs.html)
+- [Python](/Guides/developing-with-firebolt/connecting-with-Python.html)
+- [JDBC](/Guides/developing-with-firebolt/connecting-with-jdbc.html)
+- [SQLAlchemy](/Guides/developing-with-firebolt/connecting-with-sqlalchemy.html)
+- [.NET SDK](/Guides/developing-with-firebolt/connecting-with-net-sdk.html)
+- [Go](/Guides/developing-with-firebolt/connecting-with-go.html)
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt_connecting_with_go.md b/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt_connecting_with_go.md
new file mode 100644
index 0000000..370c32b
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt_connecting_with_go.md
@@ -0,0 +1,195 @@
+# [](#firebolt-go-sdk-documentation)Firebolt Go SDK Documentation
+
+## [](#overview)Overview
+
+The Firebolt Go SDK is an implementation of Go’s `database/sql/driver` interface, enabling Go developers to connect to and interact with Firebolt databases seamlessly.
+
+## [](#prerequisites)Prerequisites
+
+You must have the following prerequisites before you can connect your Firebolt account to Go:
+
+- **Go installed and configured** on your system. The minimum supported version is 1.18. If you do not have Go installed, you can download the [latest version](https://go.dev/dl/). After installing, if you don’t have a Go module yet, you’ll need to initialize one. See the [Go documentation on modules](https://go.dev/doc/tutorial/create-module) for detailed instructions on how to create and initialize a Go module.
+- **Firebolt account** – You need an active Firebolt account. If you do not have one, you can [sign up](https://go.firebolt.io/signup) for one.
+- **Firebolt service account** – You must have access to an active Firebolt [service account](/Guides/managing-your-organization/service-accounts.html), which facilitates programmatic access to Firebolt, along with its ID and secret.
+- **Firebolt user** – You must have a user that is [associated](/Guides/managing-your-organization/service-accounts.html#create-a-user) with your service account. The user should have [USAGE](/Overview/Security/Role-Based%20Access%20Control/database-permissions/) permission to query your database, and [OPERATE](/Overview/Security/Role-Based%20Access%20Control/engine-permissions.html) permission to start and stop an engine if it is not already started.
+- **Firebolt database and engine (optional)** – You can optionally connect to a Firebolt database and/or engine. If you do not have one yet, you can [create a database](/Guides/getting-started/get-started-sql.html#create-a-database) and [create an engine](/Guides/getting-started/get-started-sql.html#create-an-engine). You need a database if you want to access data stored in Firebolt, and an engine if you want to load and query that data.
+
+## [](#installation)Installation
+
+To install the Firebolt Go SDK, run the following `go get` command from inside your Go module:
+
+```
+go get github.com/firebolt-db/firebolt-go-sdk
+```
+
+## [](#dsn-parameters)DSN Parameters
+
+Go passes a data source name (DSN) to Firebolt’s Go SDK to connect to Firebolt. The SDK parses the DSN string for parameters to authenticate and connect to a Firebolt account, database, and engine.
+
+The DSN string supports the following parameters:
+
+- `client_id`: client ID of your [service account](/Guides/managing-your-organization/service-accounts.html).
+- `client_secret`: client secret of your [service account](/Guides/managing-your-organization/service-accounts.html).
+- `account_name`: The name of your Firebolt [account](/Guides/managing-your-organization/managing-accounts.html).
+- `database`: (Optional) The name of the [database](/Overview/Security/Role-Based%20Access%20Control/database-permissions/) to connect to.
+- `engine`: (Optional) The name of the [engine](/Overview/Security/Role-Based%20Access%20Control/engine-permissions.html) to run SQL queries on.
+
+The following is an example DSN string:
+
+```
+firebolt://[/<database>]?account_name=<account_name>&client_id=<client_id>&client_secret=<client_secret>&engine=<engine>
+```
+
+## [](#connect-to-firebolt)Connect to Firebolt
+
+To establish a connection to a Firebolt database, construct a DSN string with your credentials and database details. The following example contains a script to connect to Firebolt that you can place in a file (e.g., `main.go`) and run with `go run main.go` inside your Go module:
+
+```
+package main
+
+import (
+ "database/sql"
+ "fmt"
+ "log"
+
+ // Import the Firebolt Go SDK so its driver registers with database/sql
+ _ "github.com/firebolt-db/firebolt-go-sdk"
+)
+
+func main() {
+ // Replace with your Firebolt credentials and database details
+ clientId := "your_client_id"
+ clientSecret := "your_client_secret"
+ accountName := "your_account_name"
+ databaseName := "your_database_name" // Optional parameter
+ engineName := "your_engine_name" // Optional parameter
+ dsn := fmt.Sprintf("firebolt:///%s?account_name=%s&client_id=%s&client_secret=%s&engine=%s", databaseName, accountName, clientId, clientSecret, engineName)
+
+ // Open a connection to the Firebolt database
+ db, err := sql.Open("firebolt", dsn)
+ if err != nil {
+ log.Fatalf("Error opening database connection: %v\n", err)
+ return
+ }
+ defer db.Close()
+
+ // Your database operations go here
+}
+```
+
+## [](#run-queries)Run queries
+
+Once connected, you can run SQL queries. The following examples show how to create a table, insert data, and retrieve data. You can place them inside the previous script under `// Your database operations go here`:
+
+```
+// Create a table
+_, err = db.Exec("CREATE TABLE IF NOT EXISTS test_table (id INT, value TEXT)")
+if err != nil {
+ log.Fatalf("Error creating table: %v\n", err)
+ return
+}
+
+// Insert data into the table
+_, err = db.Exec("INSERT INTO test_table (id, value) VALUES (?, ?)", 1, "sample value")
+if err != nil {
+ log.Fatalf("Error inserting data: %v\n", err)
+ return
+}
+
+// Query data from the table
+rows, err := db.Query("SELECT id, value FROM test_table")
+if err != nil {
+ log.Fatalf("Error querying data: %v\n", err)
+ return
+}
+defer rows.Close()
+
+// Iterate over the result set
+for rows.Next() {
+ var id int
+ var value string
+ if err := rows.Scan(&id, &value); err != nil {
+ log.Fatalf("Error scanning row: %v\n", err)
+ return
+ }
+ log.Printf("Row: id=%d, value=%s\n", id, value)
+}
+```
+
+## [](#streaming-queries)Streaming Queries
+
+Firebolt supports streaming large query results using `rows.Next()`, allowing efficient processing of large datasets.
+
+If you enable result streaming, the query might begin executing successfully, but an error might still be returned while you iterate over the rows.
+
+To enable streaming, use the `firebolt-go-sdk/context` package to create a context with streaming enabled:
+
+```
+package main
+
+import (
+	"context"
+	"database/sql"
+	"log"
+
+	// Import the Firebolt Go SDK so its driver registers with database/sql
+	_ "github.com/firebolt-db/firebolt-go-sdk"
+	fireboltContext "github.com/firebolt-db/firebolt-go-sdk/context"
+)
+
+func main() {
+	dsn := "firebolt:///your_database_name?account_name=your_account_name&client_id=your_client_id&client_secret=your_client_secret"
+	db, err := sql.Open("firebolt", dsn)
+	if err != nil {
+		log.Fatalf("Failed to open database: %v", err)
+	}
+	defer db.Close()
+
+	// Create a context with result streaming enabled
+	streamingCtx := fireboltContext.WithStreaming(context.Background())
+
+	// Execute a query with streaming enabled. Imitate a large query result
+	rows, err := db.QueryContext(streamingCtx, "SELECT 123, 'data' FROM generate_series(1, 100000000)")
+	if err != nil {
+		log.Fatalf("Query execution failed: %v", err)
+	}
+	defer rows.Close()
+
+	for rows.Next() {
+		var col1 int
+		var col2 string
+		if err := rows.Scan(&col1, &col2); err != nil {
+			log.Fatalf("Error scanning row: %v", err)
+		}
+		log.Printf("Row: col1=%d, col2=%s\n", col1, col2)
+	}
+	// With streaming enabled, errors can surface during iteration
+	if err := rows.Err(); err != nil {
+		log.Fatalf("Row iteration error: %v", err)
+	}
+}
+```
+
+Streaming queries are particularly useful when dealing with large datasets, as they avoid loading the entire result set into memory at once.
+
+## [](#troubleshooting)Troubleshooting
+
+When building a DSN to connect with Firebolt using the Go SDK, follow these best practices to ensure correct connection string formatting and avoid parsing errors. The DSN must follow this structure:
+
+```
+firebolt:///<database>?account_name=<account_name>&client_id=<client_id>&client_secret=<client_secret>&engine=<engine>
+```
+
+**Guidelines**
+
+- Place the database name in the URI path after `firebolt:///`.
+- Use only letters, numbers, and underscores (\_) in the database name. Avoid hyphens (-), as they may cause parsing errors.
+- Ensure the `account_name` matches the name shown in the Firebolt Console URL, which is usually lowercase with no special characters.
+- Use the exact engine name as shown in the Firebolt Workspace.
+- Do not pass the database name as a query parameter. The SDK does not support `&database=` in the DSN.
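+Following the guidelines above, a DSN can be assembled with Go’s standard library so that reserved characters in parameter values are percent-encoded automatically. This is an illustrative sketch — the `buildDSN` helper is not part of the SDK:
+
+```go
+package main
+
+import (
+	"fmt"
+	"net/url"
+)
+
+// buildDSN assembles a Firebolt DSN, percent-encoding each parameter value
+// so reserved characters (for example in the client secret) cannot break
+// DSN parsing. The helper itself is illustrative, not part of the SDK.
+func buildDSN(database, account, clientID, clientSecret, engine string) string {
+	params := url.Values{}
+	params.Set("account_name", account)
+	params.Set("client_id", clientID)
+	params.Set("client_secret", clientSecret)
+	params.Set("engine", engine)
+	// The database name goes in the URI path, not as a query parameter
+	return fmt.Sprintf("firebolt:///%s?%s", database, params.Encode())
+}
+
+func main() {
+	// url.Values.Encode percent-encodes the '@' and '/' in the secret
+	fmt.Println(buildDSN("my_db", "my_account", "my_id", "p@ss/word", "my_engine"))
+}
+```
+
+Note that `url.Values.Encode` emits parameters in alphabetical key order, which should be fine because parameter order is not significant in a query string.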
+
+### [](#common-errors-and-solutions)Common errors and solutions
+
+| Error message | Likely cause | Solution |
+| --- | --- | --- |
+| `invalid connection string format` | The URI format is invalid or contains illegal characters (like `-`). | Double-check the URI format and remove illegal characters. |
+| `unknown parameter name database` | `database` was passed as a query parameter. | Move the database name into the URI path. |
+| `error opening database connection` | Incorrect connection credentials. | Verify the connection parameter values in the Firebolt UI and use the exact values. |
+
+## [](#additional-resources)Additional Resources
+
+- [Firebolt Go SDK GitHub Repository](https://github.com/firebolt-db/firebolt-go-sdk)
+- [Firebolt Documentation: Connecting with Go](/Guides/developing-with-firebolt/connecting-with-go.html)
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt_connecting_with_jdbc.md b/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt_connecting_with_jdbc.md
new file mode 100644
index 0000000..9a54077
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt_connecting_with_jdbc.md
@@ -0,0 +1,183 @@
+# [](#jdbc-driver)JDBC driver
+
+Firebolt’s [type 4](https://en.wikipedia.org/wiki/JDBC_driver#Type_4_driver_%E2%80%93_Database-Protocol_driver/Thin_Driver%28Pure_Java_driver%29) JDBC driver lets Java applications connect to Firebolt. The JDBC driver is open-source software released under an Apache 2 license. You can browse, fork, download, and contribute to its development on [GitHub](https://github.com/firebolt-db/jdbc).
+
+- [Download the JAR file](#download-the-jar-file)
+- [Adding the Firebolt JDBC driver as a Maven dependency](#adding-the-firebolt-jdbc-driver-as-a-maven-dependency)
+- [Adding the Firebolt JDBC driver as a Gradle dependency](#adding-the-firebolt-jdbc-driver-as-a-gradle-dependency)
+- [Connecting to Firebolt with the JDBC driver](#connecting-to-firebolt-with-the-jdbc-driver)
+- [Authentication](#authentication)
+
+ - [Available connection parameters](#available-connection-parameters)
+ - [System settings as connection parameters](#system-settings-as-connection-parameters)
+- [Applying system settings using SET](#applying-system-settings-using-set)
+- [Connection validation](#connection-validation)
+- [Full reference documentation](#full-reference-documentation)
+
+## [](#download-the-jar-file)Download the JAR file
+
+The Firebolt JDBC driver is provided as a JAR file and requires [Java 11](https://java.com/en/download/manual.jsp) or later.
+
+Download the driver from [GitHub JDBC releases](https://github.com/firebolt-db/jdbc/releases).
+
+## [](#adding-the-firebolt-jdbc-driver-as-a-maven-dependency)Adding the Firebolt JDBC driver as a Maven dependency
+
+To connect your project to Firebolt using [Apache Maven](https://maven.apache.org/), add the Firebolt JDBC driver as a dependency in your **pom.xml** configuration file. Link to the [Firebolt Maven repository](https://central.sonatype.com/artifact/io.firebolt/firebolt-jdbc), so that Maven can download and include the JDBC driver in your project, as shown in the following code example:
+
+```
+
+
+
+
+ io.firebolt
+ firebolt-jdbc
+ 3.3.0
+
+
+```
+
+In the previous code example, replace `3.3.0` with the latest version available in the [Firebolt Maven Central repository](https://central.sonatype.com/artifact/io.firebolt/firebolt-jdbc).
+
+## [](#adding-the-firebolt-jdbc-driver-as-a-gradle-dependency)Adding the Firebolt JDBC driver as a Gradle dependency
+
+If you are using the [Gradle Build Tool](https://gradle.org/), you can configure your Gradle project to use the Firebolt JDBC driver by specifying Apache’s [Maven Central](https://maven.apache.org/repository/index.html) as a repository and adding the Firebolt JDBC driver as a dependency as follows:
+
+```
+/* build.gradle */
+
+repositories {
+ mavenCentral()
+}
+
+dependencies {
+ implementation 'io.firebolt:firebolt-jdbc:3.3.0'
+}
+```
+
+In the previous code example, replace `3.3.0` with the latest version available in the [Firebolt Maven Central repository](https://central.sonatype.com/artifact/io.firebolt/firebolt-jdbc).
+
+## [](#connecting-to-firebolt-with-the-jdbc-driver)Connecting to Firebolt with the JDBC driver
+
+Provide connection details to the Firebolt JDBC driver using a connection string in the following format:
+
+```
+jdbc:firebolt:<database>?<connection_parameters>
+```
+
+In the previous connection example, the following apply:
+
+- `<database>` - Specifies the name of the Firebolt database to connect to.
+- `<connection_parameters>` - A list of connection parameters formatted as a standard [URL query string](https://en.wikipedia.org/wiki/Query_string#Structure).
+
+## [](#authentication)Authentication
+
+To authenticate, use a [service account ID and secret](/Guides/managing-your-organization/service-accounts.html). A service account, which is used for programmatic access to Firebolt, uses a `client_id` and a `client_secret` for identification. To ensure compatibility with tools external to Firebolt, you can specify the service account’s `client_id` as `user` and `client_secret` as `password`.
+
+The following are examples of how to specify connection strings for authentication and configuration:
+
+**Example**
+
+The following example connection string configures the Firebolt JDBC driver to connect to `my_database` using a specified `client_id` and `client_secret` for authentication:
+
+```
+ jdbc:firebolt:my_database?client_id=<client_id>&client_secret=<client_secret>&account=my_account&engine=my_engine&buffer_size=1000000&connection_timeout_millis=10000
+```
+
+The previous example string also specifies an account name `my_account`, an engine name `my_engine`, a buffer size of `1000000` bytes, and a connection timeout of `10000` milliseconds, or `10` seconds.
+
+**Example**
+
+The following example provides `client_id` and `client_secret` as separate properties, rather than embedding them directly in the connection string, as shown in the previous example.
+
+Connection string:
+
+```
+ jdbc:firebolt:my_database?account=my_account&engine=my_engine&buffer_size=1000000&connection_timeout_millis=10000
+```
+
+Connection properties:
+
+```
+ client_id=<client_id>
+ client_secret=<client_secret>
+```
+
+**Example**
+
+The following example connects to `my_database` using only connection properties for authentication and parameters, without including any parameters directly in the string.
+
+Connection string:
+
+```
+ jdbc:firebolt:my_database
+```
+
+Connection properties:
+
+```
+ client_id=<client_id>
+ client_secret=<client_secret>
+ account=my_account
+ engine=my_engine
+ buffer_size=1000000
+ connection_timeout_millis=10000
+```
+
+**Example**
+
+The following example is a minimal URL that connects to `my_database` using `client_id` and `client_secret` as connection properties for authentication. It omits the engine name, so the connection uses the default engine and relies on default values for all other parameters:
+
+Connection string:
+
+```
+ jdbc:firebolt:my_database
+```
+
+Connection properties:
+
+```
+ client_id=<client_id>
+ client_secret=<client_secret>
+ account=my_account
+```
+
+Because the previous configuration example omits the engine name, the connection to `my_database` uses the default engine.
+
+Since the connection string is a URI, make sure to [percent-encode](https://en.wikipedia.org/wiki/Percent-encoding) any reserved characters or special characters used in parameter keys or parameter values.
+
+### [](#available-connection-parameters)Available connection parameters
+
+The following table lists the available parameters that can be added to a Firebolt JDBC connection string. All parameter keys are case-sensitive.
+
+| Parameter key | Data type | Default value | Range | Description |
+| --- | --- | --- | --- | --- |
+| `client_id` | TEXT | No default value. | | (**Required**) The Firebolt service account ID. |
+| `client_secret` | TEXT | No default value. | | (**Required**) The secret generated for the Firebolt service account. |
+| `account` | TEXT | No default value. | | (**Required**) Your Firebolt account name. |
+| `database` | TEXT | No default value. | | The name of the database to connect to. Takes precedence over the database name provided as a path parameter. |
+| `engine` | TEXT | The default engine attached to the specified database. | | The name of the engine to connect to. |
+| `buffer_size` | INTEGER | `65536` | `1` to `2147483647` | The buffer size, in bytes, that the driver uses to read responses from the Firebolt API. |
+| `connection_timeout_millis` | INTEGER | `60000` | `0` to `2147483647` | The wait time, in milliseconds, before a connection to the server is considered failed. A value of zero means that the connection waits indefinitely. |
+| `max_connections_total` | INTEGER | `300` | `1` to `2147483647` | The maximum total number of connections. |
+| `socket_timeout_millis` | INTEGER | `0` | `0` to `2147483647` | The socket timeout, in milliseconds, which specifies the maximum wait time for data, defining the longest allowed inactivity between consecutive data packets. A value of zero means that there is no timeout limit. |
+| `connection_keep_alive_timeout_millis` | INTEGER | `300000` | `1` to `2147483647` | The duration to keep a server connection open in the connection pool before it is closed. |
+| `ssl_mode` | TEXT | `strict` | `strict` or `none` | When set to `strict`, the SSL or TLS certificate is validated for accuracy and authenticity. If set to `none`, certificate verification is omitted. |
+| `ssl_certificate_path` | TEXT | No default value. | | The absolute file path of the SSL root certificate. |
+| `validate_on_system_engine` | BOOLEAN | `FALSE` | `TRUE` or `FALSE` | When set to `TRUE`, the connection is always validated against a system engine, even if it is connected to a regular engine. For more information, see [Connection validation](#connection-validation). |
+
+### [](#system-settings-as-connection-parameters)System settings as connection parameters
+
+In addition to the parameters specified in the previous table, any [system setting](/Reference/system-settings.html) can be passed as a connection string parameter. For example, to set a custom time zone, use the following format:
+
+```
+jdbc:firebolt:my_database?time_zone=UTC&<other_connection_parameters>
+```
+
+## [](#applying-system-settings-using-set)Applying system settings using SET
+
+In addition to passing system settings as connection string parameters, any [system setting](/Reference/system-settings.html) can be passed using the SQL `SET` command. Multiple `SET` statements can be run consecutively, separated by semicolons, as shown below:
+
+```
+SET time_zone = 'UTC';
+SET standard_conforming_strings = false;
+```
+
+## [](#connection-validation)Connection validation
+
+The Firebolt JDBC driver validates the connection by sending a `SELECT 1` query to the system engine. If this query fails, the driver throws an exception. You can use the `validate_on_system_engine` parameter to customize validation. When it is set to `true`, the validation query is sent to the system engine, even if the connection is established with a regular engine. This feature can be useful if you want to stop the regular engine but still need to validate the connection.
+
+The following example configures the Firebolt JDBC driver to connect to `my_database` and validate the connection using the system engine with additional connection parameters specified in `other_connection_parameters`:
+
+```
+jdbc:firebolt:my_database?validate_on_system_engine=true&<other_connection_parameters>
+```
+
+## [](#full-reference-documentation)Full reference documentation
+
+The complete documentation for classes and methods in the Firebolt JDBC driver is available in the [Firebolt JDBC API reference guide](https://jdbc.docs.firebolt.io/javadoc/).
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt_connecting_with_net_sdk.md b/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt_connecting_with_net_sdk.md
new file mode 100644
index 0000000..f02ea9d
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt_connecting_with_net_sdk.md
@@ -0,0 +1,96 @@
+# [](#firebolt-net-sdk)Firebolt .NET SDK
+
+## [](#overview)Overview
+
+The Firebolt .NET SDK is a software development kit designed to facilitate the integration of Firebolt’s high-performance database capabilities into .NET applications. This SDK provides developers with the tools and interfaces needed to interact with Firebolt databases efficiently, enabling effective data manipulation and query execution.
+
+## [](#installation)Installation
+
+Install the Firebolt .NET SDK by adding the NuGet package to your project. You can do this in several ways:
+
+### [](#via-package-manager-console)Via Package Manager Console
+
+```
+Install-Package FireboltNetSdk
+```
+
+### [](#via-net-cli)Via .NET CLI
+
+```
+dotnet add package FireboltNetSdk
+```
+
+### [](#via-packagereference)Via PackageReference
+
+Add the following line to your project file:
+
+```
+<PackageReference Include="FireboltNetSdk" Version="x.x.x" />
+```
+
+Make sure to replace `x.x.x` with the specific version you want to use.
+
+### [](#via-visual-studio-ui)Via Visual Studio UI
+
+Go to `Tools` > `NuGet Package Manager` > `Manage NuGet Packages for Solution`, then search for `Firebolt`.
+
+For more details and versioning information, please visit the [NuGet Gallery](https://www.nuget.org/packages/FireboltNetSdk/).
+
+## [](#quick-start)Quick Start
+
+Here’s a simple example to get started with the Firebolt .NET SDK:
+
+```
+using System.Data.Common;
+using FireboltDotNetSdk.Client;
+
+public class Program
+{
+ public static async Task Main(string[] args)
+ {
+ // Name of your Firebolt account
+ string account = "my_firebolt_account";
+ // Client credentials that you want to use to connect
+ string clientId = "my_client_id";
+ string clientSecret = "my_client_secret";
+ // Name of database and engine to connect to (Optional)
+ string database = "my_database_name";
+ string engine = "my_engine_name";
+
+ // Construct a connection string using the defined parameters
+ string conn_string = $"account={account};clientid={clientId};clientsecret={clientSecret};database={database};engine={engine}";
+
+ // Create a new connection using the generated connection string
+ using var conn = new FireboltConnection(conn_string);
+ // Open a connection
+ conn.Open();
+
+ // First you would need to create a command
+ var command = conn.CreateCommand();
+
+ // ... and set the SQL query
+ command.CommandText = "SELECT * FROM my_table";
+
+ // Execute a SQL query and get a DB reader
+ DbDataReader reader = command.ExecuteReader();
+
+ // Optionally you can check whether the result set has rows
+ Console.WriteLine($"Has rows: {reader.HasRows}");
+
+ // Close the connection after all operations are done
+ conn.Close();
+ }
+}
+```
+
+## [](#documentation)Documentation
+
+For more detailed documentation, including API references and advanced usage, please refer to the [README](https://github.com/firebolt-db/firebolt-net-sdk/blob/main/README.md) file in the repository.
+
+## [](#support)Support
+
+For support, issues, or contributions, please refer to the repository’s issue tracker and contributing guidelines.
+
+## [](#license)License
+
+This SDK is released under **Apache License 2.0**. Please see the [LICENSE](https://github.com/firebolt-db/firebolt-net-sdk/blob/main/LICENSE) file for more details.
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt_connecting_with_nodejs.md b/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt_connecting_with_nodejs.md
new file mode 100644
index 0000000..de5322c
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt_connecting_with_nodejs.md
@@ -0,0 +1,132 @@
+# [](#nodejs)Node.js
+
+- [Overview](#overview)
+- [Installation](#installation)
+- [Authentication](#authentication)
+- [Quick start](#quick-start)
+- [Contribution](#contribution)
+- [License](#license)
+
+## [](#overview)Overview
+
+The Firebolt Node SDK is a software development kit designed to facilitate the integration of Firebolt’s high-performance database capabilities into Node.js applications. This SDK provides a set of tools and interfaces for developers to interact with Firebolt databases, enabling efficient data manipulation and query execution. For more detailed documentation, including API references and advanced usage, refer to the [README](https://github.com/firebolt-db/firebolt-node-sdk/blob/main/README.md) file in the Firebolt Node SDK repository.
+
+## [](#installation)Installation
+
+To install the Firebolt Node SDK, run the following command in your project directory:
+
+```
+npm install firebolt-sdk
+```
+
+## [](#authentication)Authentication
+
+After installation, you must authenticate before you can use the SDK to establish connections, run queries, and manage database resources. The following code example sets up a connection using your Firebolt [service account](/Guides/managing-your-organization/service-accounts.html) credentials:
+
+```
+const connection = await firebolt.connect({
+ auth: {
+ client_id: '12345678-90123-4567-8901-234567890123',
+ client_secret: 'secret',
+ },
+ engineName: 'engine_name',
+ account: 'account_name',
+ database: 'database',
+});
+```
+
+In the previous code example, the following details apply:
+
+- `client_id` and `client_secret`: These are your service account credentials. Refer to Firebolt’s guide to learn how to [create a service account](/Guides/managing-your-organization/service-accounts.html#create-a-service-account) and obtain its [ID](/Guides/managing-your-organization/service-accounts.html#get-a-service-account-id) and [secret](/Guides/managing-your-organization/service-accounts.html#generate-a-secret).
+- `engineName`: The name of the engine used to run your queries on.
+- `database`: The target database where your tables will be stored.
+- `account`: The object within your organization that encapsulates resources for storing, querying, and managing data. In the Node.js SDK, the [account](/Overview/organizations-accounts.html#accounts) parameter specifies which organizational environment the connection will use.
+
+## [](#quick-start)Quick start
+
+In the following code example, credentials are stored in environment variables.
+
+```
+import { Firebolt } from 'firebolt-sdk'
+
+// Initialize client
+const firebolt = Firebolt();
+
+// Establish connection to Firebolt using environment variables for credentials and configuration
+const connection = await firebolt.connect({
+ auth: {
+ client_id: process.env.FIREBOLT_CLIENT_ID,
+ client_secret: process.env.FIREBOLT_CLIENT_SECRET,
+ },
+ account: process.env.FIREBOLT_ACCOUNT,
+ database: process.env.FIREBOLT_DATABASE,
+ engineName: process.env.FIREBOLT_ENGINE_NAME
+});
+
+// Create a "users" table
+await connection.execute(`
+ CREATE TABLE IF NOT EXISTS users (
+ id INT,
+ name STRING,
+ age INT
+ )
+`);
+
+// Insert sample data
+await connection.execute(`
+ INSERT INTO users (id, name, age) VALUES
+ (1, 'Alice', 30),
+ (2, 'Bob', 25)
+`);
+
+// Update rows
+await connection.execute(`
+ UPDATE users SET age = 31 WHERE id = 1
+`);
+
+// Fetch data with a query
+const statement = await connection.execute("SELECT * FROM users");
+
+// Fetch the complete result set
+const { data, meta } = await statement.fetchResult();
+
+// Log metadata describing the columns of the result set
+console.log(meta)
+// Outputs:
+// [
+// Meta { type: 'int null', name: 'id' },
+// Meta { type: 'text null', name: 'name' },
+// Meta { type: 'int null', name: 'age' }
+// ]
+
+// Alternatively, stream the result set row by row
+const { data: stream } = await statement.streamResult();
+
+// Handle the metadata event
+stream.on("metadata", metadata => {
+ console.log(metadata);
+});
+
+// Handle the error event
+stream.on("error", error => {
+ console.log(error);
+});
+
+const rows = []
+
+for await (const row of stream) {
+ rows.push(row);
+}
+
+// Log the collected rows
+console.log(rows)
+// Outputs:
+// [ [ 1, 'Alice', 31 ], [ 2, 'Bob', 25 ] ]
+```
+
+## [](#contribution)Contribution
+
+To receive support, report issues, or contribute, please refer to the Firebolt Node SDK repository [issue tracker](https://github.com/firebolt-db/firebolt-node-sdk/issues).
+
+## [](#license)License
+
+This SDK is released under **Apache License 2.0**. See the [LICENSE](https://github.com/firebolt-db/firebolt-node-sdk/blob/main/LICENSE) file for more details.
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt_connecting_with_python.md b/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt_connecting_with_python.md
new file mode 100644
index 0000000..873bfcb
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt_connecting_with_python.md
@@ -0,0 +1,7 @@
+# [](#python)Python
+
+You can use the Python SDK to work with Firebolt. See the resources below for more information.
+
+- [Firebolt Python SDK documentation](https://python.docs.firebolt.io/sdk_documenation/latest/)
+- The [firebolt-python-sdk repository on GitHub](https://github.com/firebolt-db/firebolt-python-sdk/)
+- Code examples (in Jupyter notebooks) in the SDK repository that demonstrate common [data tasks](https://github.com/firebolt-db/firebolt-python-sdk/blob/main/examples/dbapi.ipynb) and [management tasks](https://github.com/firebolt-db/firebolt-python-sdk/blob/main/examples/management.ipynb)
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt_connecting_with_sqlalchemy.md b/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt_connecting_with_sqlalchemy.md
new file mode 100644
index 0000000..71ccf26
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_developing_with_firebolt_connecting_with_sqlalchemy.md
@@ -0,0 +1,11 @@
+# [](#connect-with-sqlalchemy)Connect with SQLAlchemy
+
+SQLAlchemy is an open-source SQL toolkit and object-relational mapper for the Python programming language.
+
+Firebolt’s adapter for SQLAlchemy acts as an interface for other supported third-party applications including Superset and Preset. When the SQLAlchemy adapter is successfully connected, these applications are able to communicate with Firebolt databases through the REST API.
+
+The adapter is written in Python using the SQLAlchemy toolkit.
+
+### [](#get-started)Get started
+
+Follow the guidelines for SQLAlchemy integration in the Firebolt-SQLAlchemy [GitHub repository](https://github.com/firebolt-db/firebolt-sqlalchemy/).
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_exporting_data.md b/cmd/docs-scrapper/fireboltdocs/guides_exporting_data.md
new file mode 100644
index 0000000..3a065f3
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_exporting_data.md
@@ -0,0 +1,90 @@
+# [](#export-data)Export data
+
+You can export data from a `SELECT` query directly to an Amazon S3 location using [COPY TO](/sql_reference/commands/data-management/copy-to.html). This method is more flexible and efficient than downloading query results manually from the **Firebolt Workspace**, making it ideal for data sharing, integration, and archival.
+
+## [](#how-to-export-data)How to export data
+
+The following code example uses `COPY TO` to export the result of a `SELECT` query from `my_table` to a specified Amazon S3 bucket in CSV format using the provided [AWS credentials](/sql_reference/commands/data-management/copy-to.html#credentials):
+
+```
+COPY (
+ SELECT column1, column2 FROM my_table WHERE condition
+)
+TO 's3://your-bucket/path/'
+WITH (FORMAT = 'CSV')
+CREDENTIALS = ('aws_key_id'='your-key' 'aws_secret_key'='your-secret');
+```
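+When exports run programmatically, it can be convenient to assemble the `COPY TO` statement from parameters. The following Go sketch is illustrative only — the `buildCopyTo` helper is not part of any Firebolt SDK, and real code should validate and escape its inputs:
+
+```go
+package main
+
+import "fmt"
+
+// buildCopyTo assembles a COPY TO statement that exports the result of a
+// SELECT query to an Amazon S3 path in the given format. Illustrative only:
+// validate and escape inputs before using anything like this in production.
+func buildCopyTo(query, s3Path, format, awsKeyID, awsSecret string) string {
+	return fmt.Sprintf(
+		"COPY (%s) TO '%s' WITH (FORMAT = '%s') CREDENTIALS = ('aws_key_id'='%s' 'aws_secret_key'='%s');",
+		query, s3Path, format, awsKeyID, awsSecret,
+	)
+}
+
+func main() {
+	stmt := buildCopyTo(
+		"SELECT column1, column2 FROM my_table WHERE condition",
+		"s3://your-bucket/path/", "CSV", "your-key", "your-secret",
+	)
+	fmt.Println(stmt)
+}
+```
+
+The resulting string matches the statement shown above and can be passed to any Firebolt client.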
+
+## [](#choose-the-right-export-format)Choose the right export format
+
+| Format | Best for | Characteristics | Recommended use |
+| --- | --- | --- | --- |
+| **CSV (Comma-Separated)** | General data exchange, spreadsheets, SQL. | Simple, widely supported, and easy to read. | Best for spreadsheets, databases, or general data exchange. |
+| **TSV (Tab-Separated)** | Structured text data. | Like CSV, but uses tabs instead of commas. | Best for Excel, databases, or general data exchange. |
+| **JSON** | APIs, web applications, NoSQL databases. | Flexible, human-readable, and supports nested data. | Best for web apps, APIs, or NoSQL integrations. |
+| **PARQUET** | Big data processing, analytics workloads. | Compressed, columnar, and optimized for querying. | Ideal for analytics, performance-sensitive workloads, and large datasets. |
+
+## [](#examples)Examples
+
+**Export data in CSV format**
+
+Use CSV when you need a simple, widely supported format for spreadsheets, relational databases, or data exchange.
+
+The following code example exports `user_id`, `event_type`, and `timestamp` data and headers from the `user_events` table to a CSV file in an Amazon S3 bucket:
+
+```
+COPY (SELECT user_id, event_type, timestamp FROM user_events)
+TO 's3://my-export-bucket/user_events.csv'
+WITH (FORMAT = 'CSV', HEADER = TRUE)
+CREDENTIALS = ('aws_key_id'='your-key' 'aws_secret_key'='your-secret');
+```
+
+**Export data in Parquet format**
+
+Parquet is best for big data workloads, as it offers compressed, columnar storage optimized for analytics and query performance.
+
+The following code example exports all data from the `sales_data` table to an Amazon S3 bucket in Parquet format using the provided AWS credentials:
+
+```
+COPY (SELECT * FROM sales_data)
+TO 's3://my-export-bucket/sales_data.parquet'
+WITH (FORMAT = 'PARQUET')
+CREDENTIALS = ('aws_key_id'='your-key' 'aws_secret_key'='your-secret');
+```
+
+**Export data in JSON format**
+
+JSON is ideal for APIs, web applications, and NoSQL databases, as it supports nested and flexible data structures.
+
+The following code example exports `order_id` and `order_details` from the `orders` table to an Amazon S3 bucket in JSON format using the provided AWS credentials:
+
+```
+COPY (SELECT order_id, order_details FROM orders)
+TO 's3://my-export-bucket/orders.json'
+WITH (FORMAT = 'JSON')
+CREDENTIALS = ('aws_key_id'='your-key' 'aws_secret_key'='your-secret');
+```
+
+**Export data in TSV format**
+
+TSV is similar to CSV but uses tab delimiters, making it useful for structured text data that may contain commas.
+
+The following code example exports `name`, `age`, and `city` from the `customers` table to an Amazon S3 bucket in TSV format using the provided AWS credentials:
+
+```
+COPY (SELECT name, age, city FROM customers)
+TO 's3://my-export-bucket/customers.tsv'
+WITH (FORMAT = 'TSV')
+CREDENTIALS = ('aws_key_id'='your-key' 'aws_secret_key'='your-secret');
+```
+
+## [](#additional-considerations)Additional Considerations
+
+**Performance tips**
+
+- Export only required columns and use filters to reduce data volume
+- Ensure proper permissions are set on your S3 bucket
+
+**Security and credentials**
+
+- Always use **secure AWS credentials**.
+- Use **IAM roles** instead of setting credentials directly in the code for better security.
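+
+For example, if your AWS account has an IAM role that can write to the target bucket, you can reference the role instead of static keys. This is a sketch: the `AWS_ROLE_ARN` credential option and the role ARN below are placeholders, so check the [COPY TO](/sql_reference/commands/data-management/copy-to.html) credentials documentation for the exact options your account supports:
+
+```
+COPY (SELECT * FROM my_table)
+TO 's3://your-bucket/path/'
+WITH (FORMAT = 'CSV')
+CREDENTIALS = (AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/your-export-role');
+```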
+
+## [](#next-steps)Next Steps
+
+For more information about advanced options including **compression**, **partitioning**, and **null handling**, see [COPY TO](/sql_reference/commands/data-management/copy-to.html).
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_getting_started.md b/cmd/docs-scrapper/fireboltdocs/guides_getting_started.md
new file mode 100644
index 0000000..202b86e
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_getting_started.md
@@ -0,0 +1,16 @@
+# [](#get-started-with-firebolt)Get started with Firebolt
+
+Welcome to the beginning of your journey with Firebolt! This tutorial guides you through all of the steps you need to run a basic workflow which includes setting up your Firebolt account, creating a database and engine, importing a sample dataset, creating indexes, and running a query. If you encounter any issues, reach out to [support@firebolt.io](mailto:support@firebolt.io) for help.
+
+To get started, you must [register](https://go.firebolt.io/signup) and create a Firebolt account. Then, you can either use the **Develop Space** inside the **Firebolt Workspace**, or use the **Load data** wizard to create a database and engine, and load data. Then, you can run your first query to obtain baseline performance statistics. Next, you can tune your workflow using Firebolt’s optimization strategies to reduce query run times. You can set a primary index and use aggregating indexes to speed up your query times significantly. Lastly, you can export your data to an external table. These steps are illustrated in the following workflow:
+
+
+
+After you register, you can either use the [Load data wizard](/Guides/getting-started/get-started-load-data-wizard.html) or [use SQL](/Guides/getting-started/get-started-sql.html). Use the **Load data** wizard if your data is in CSV or Parquet format and you want a graphical user interface to guide you through the first three steps of the workflow. Use the Firebolt **Develop Space** or an API if you prefer to enter SQL, or need a more customized workflow.
+
+## [](#next-steps)Next steps
+
+Choose either of the following:
+
+- [Get started using a wizard](/Guides/getting-started/get-started-load-data-wizard.html)
+- [Get started using SQL](/Guides/getting-started/get-started-sql.html)
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_getting_started_get_started_load_data_wizard.md b/cmd/docs-scrapper/fireboltdocs/guides_getting_started_get_started_load_data_wizard.md
new file mode 100644
index 0000000..0a0cf68
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_getting_started_get_started_load_data_wizard.md
@@ -0,0 +1,48 @@
+# [](#get-started-using-a-wizard)Get started using a wizard
+
+The **Load data** wizard guides you through creating a database and engine, and loading data from an Amazon S3 bucket. You can specify basic configurations, including what character to use as a file delimiter, which columns to import and their schema. After loading your data, continue working in the **Develop Space** to run and optimize a query, and export to an external table, as shown in the following diagram:
+
+
+
+## [](#register-with-firebolt)Register with Firebolt
+
+
+
+Use the following steps to register with Firebolt:
+
+1. [Sign up](https://go.firebolt.io/signup) on Firebolt’s registration page. Fill in your email and name, choose a password, and select **Get Started**.
+2. Firebolt will send a confirmation to the address that you provided. To complete your registration, select **Verify** in the email to take you to Firebolt’s [login page](https://go.firebolt.io/login).
+3. Type in your email and password and select **Log In**.
+
+New accounts receive credits ($200) to get started exploring Firebolt’s capabilities. Credits must be used within 30 days of account creation.
+
+Firebolt’s billing is based on engine runtime, measured in seconds. AWS S3 storage costs are passed through at the rate of $23 per TB per month. Your cost depends primarily on which engines you use and how long those engines are running.
+
+You can view your total cost in FBU up to the latest second and in $USD up to the latest day. For more information about costs, see [Data Warehouse Pricing](https://www.firebolt.io/pricing). If you need to buy additional credits, connect Firebolt with your AWS Marketplace account. For more information, see [Register through the AWS Marketplace](/Guides/getting-started/get-started-next.html#register-through-the-aws-marketplace).
+
+## [](#use-the-load-data-wizard)Use the Load data wizard
+
+
+
+You can use the **Load data** wizard to load data in either CSV or Parquet form.
+
+To start the **Load data** wizard, select the plus (+) icon in the **Develop Space** next to **Databases** in the left navigation pane and select **Load data**. The wizard will guide you through creating a database, an engine, and loading data. See [Load data using a wizard](/Guides/loading-data/loading-data-wizard.html#load-data-using-a-wizard) for detailed information about the workflow and the available options in the wizard.
+
+Even though the **Load data** wizard creates a database and engine for you, the [**Create a Database**](/Guides/getting-started/get-started-sql.html#create-a-database) and [**Create an Engine**](/Guides/getting-started/get-started-sql.html#create-an-engine) sections in the [Use SQL to load data](/Guides/getting-started/get-started-sql.html) guide contain useful information about billing for engine runtime and schema.
+
+For detailed information about how to use the **Load data** wizard, see the [Load data](/Guides/loading-data/loading-data.html) guide.
+
+## [](#run-query-optimize-clean-up-and-export)Run query, optimize, clean up, and export
+
+
+
+After you have loaded your data in the wizard, the rest of the steps in getting started are the same as if you ran your workflow in SQL. You can use either the **Develop Space** in the **Firebolt Workspace** to enter SQL, or use the [Firebolt API](/Guides/query-data/using-the-api.html).
+
+- For information about how to get started running a query, see [Run query](/Guides/getting-started/get-started-sql.html#run-query).
+- For information about how to get started optimizing your workflow, see [Optimize your workflow](/Guides/getting-started/get-started-sql.html#optimize-your-workflow).
+- For information about how to get started cleaning up resources and data, see [Clean up resources](/Guides/getting-started/get-started-sql.html#clean-up).
+- For information on how to export your data, see [Export data](/Guides/getting-started/get-started-sql.html#export-data).
+
+## [](#next-steps)Next steps
+
+To continue learning about Firebolt’s architecture, capabilities, using Firebolt after your trial period, and setting up your organization, see [Resources beyond getting started](/Guides/getting-started/get-started-next.html).
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_getting_started_get_started_next.md b/cmd/docs-scrapper/fireboltdocs/guides_getting_started_get_started_next.md
new file mode 100644
index 0000000..aad1d86
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_getting_started_get_started_next.md
@@ -0,0 +1,28 @@
+# [](#resources-beyond-getting-started)Resources beyond getting started
+
+Now that you have successfully created your first engine and database, run your first query, created indexes, copied data into Firebolt and exported data out, you can continue exploring Firebolt’s capabilities.
+
+## [](#register-through-the-aws-marketplace)Register through the AWS Marketplace
+
+If you have exhausted your initial $200 credit, you can continue to use Firebolt after registering through the [AWS Marketplace](https://aws.amazon.com/marketplace). You must set up an account for billing in order to continue using Firebolt’s engines to run queries.
+
+**To register**
+
+1. On the [Firebolt Workspace page](https://go.firebolt.io/), select the **Configure** icon from the left navigation pane.
+2. Under **Organization settings**, select **Billing**.
+3. Click **Connect to AWS Marketplace** to take you to the Firebolt page on AWS Marketplace.
+4. On the AWS Marketplace page, click **View Purchase Options** in the top right-hand corner of the screen.
+5. Click **Setup Your Account**.
+
+Your account should now be associated with AWS Marketplace.
+
+## [](#learn-more-about-firebolt)Learn more about Firebolt
+
+- Learn about Firebolt’s unique [architecture](/Overview/architecture-overview.html).
+- Learn more about creating tables and [managing your data](/Overview/data-management.html) in order to let Firebolt provide the fastest query times.
+- Learn about the [engines](/Overview/engine-fundamentals.html) that Firebolt uses to process queries and how to select the right size.
+- Learn how to [load](/Guides/loading-data/loading-data.html) different kinds of data.
+- Learn more about [querying data](/Guides/query-data/).
+- Learn more about using [indexes](/Overview/indexes/using-indexes.html) to optimize your query times.
+- Learn how to [set up your organization](/Guides/managing-your-organization/) to use Firebolt.
+- Learn how to [integrate Firebolt](/Guides/integrations/integrations.html) with third party tools and applications.
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_getting_started_get_started_sql.md b/cmd/docs-scrapper/fireboltdocs/guides_getting_started_get_started_sql.md
new file mode 100644
index 0000000..da70cb4
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_getting_started_get_started_sql.md
@@ -0,0 +1,373 @@
+# [](#get-started-using-sql)Get started using SQL
+
+You can also use SQL to create a database and engine, and load data. If you use the **Develop Space** inside the **Firebolt Workspace**, you can customize your workflow to handle more unique workflows than with the **Load data** wizard, including loading data in TSV, Avro, JSON Lines, or ORC formats.
+
+The following sections will guide you through a simple workflow to register, create a database and engine, load and query data, learn how to optimize your workflow, and clean up resources as shown in the following diagram:
+
+
+## [](#register-with-firebolt)Register with Firebolt
+
+
+To get started using Firebolt, begin by registering using the following steps:
+
+1. [Register](https://go.firebolt.io/signup) with Firebolt. Fill in your email and name, choose a password, and select **Get Started**.
+2. Firebolt will send a confirmation to the address that you provided. To complete your registration, select **Verify** in the email to take you to Firebolt’s [login page](https://go.firebolt.io/login).
+3. Type in your email and password and select **Log In**.
+
+New accounts receive credits ($200) to get started exploring Firebolt’s capabilities. These credits must be used within 30 days of account creation.
+
+Firebolt’s billing is based on engine runtime, measured in seconds. We also pass through AWS S3 storage costs at the rate of $23 per TB per month. The amount that you spend is dependent primarily on which engines you use and how long those engines are running.
+
+You can view your total cost in FBU up to the latest second and in $USD up to the latest day. For more information, see the following **Create a Database** section. For more information about costs, see [Data Warehouse Pricing](https://www.firebolt.io/pricing). If you need to buy additional credits, connect Firebolt with your AWS Marketplace account. For more information, see [Register through the AWS Marketplace](/Guides/getting-started/get-started-next.html#register-through-the-aws-marketplace).
+
+## [](#create-a-database)Create a Database
+
+
+Firebolt decouples storage and compute resources so that multiple engines can run computations on the same database. You can also configure different engine sizes for different workloads. These workloads can run in parallel or separately. Because storage is decoupled from compute, you must first create both a database and an engine before you can run your first query.
+
+Firebolt’s structure is organized as follows:
+
+- A database holds the elements that you need to run queries such as tables, views and information schema.
+- An [engine](/Overview/engine-fundamentals.html) provides the compute resources for ingesting data and running queries. For more information on using Firebolt engines and how to select the correct size for your workload, see [Operate engines](/Guides/operate-engines/operate-engines.html).
+
+If you used the **Load data** wizard, Firebolt has already created a database for you, and you can skip creating a database.
+
+The following instructions show you how to create a database and then an engine. Note that you can also create the engine first.
+
+1. In the left navigation pane, select the **+** to the right of **Databases**.
+2. Select **Create new database**.
+3. Enter the name for your database in the **Database Name** field. For this example, use “tutorial\_database” as your database name. In Firebolt, the names of engines and databases are **case-sensitive**. If a name contains uppercase characters, enclose it in double quotes (") when you refer to it in SQL.
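+
+If you prefer SQL to the steps above, you can likely create the database from a script tab instead. A minimal sketch, using this tutorial’s database name:
+
+```
+CREATE DATABASE IF NOT EXISTS tutorial_database;
+```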
+
+Firebolt creates a new database with the following two default schemas:
+
+- **Public** - A namespace where you can create and manage your database objects including tables, engines and queries. The default schema includes **tables**, **external tables**, and **views**.
+- **Information\_schema** - A standardized set of read-only views that provide metadata about database objects including tables, engines, cost information, and queries.
+
+You can find these schemas by selecting your database under **Databases** in the left navigation pane. Next to the name of your database, select the drop-down arrow to expand and view the schemas and their contents. You can view your total cost in FBU up to the latest second and in $USD up to the latest day in **Information\_schema**.
+
+If you’re using the **Develop Space**, expand **Information\_schema**, and then **Views** to show the following:
+
+- **engine\_metering\_history** - contains information about billing cost in FBU up to the latest second in **consumed\_fbu**.
+- **engine\_billing** - contains information about billing cost in US dollars up to the latest day in **billed\_cost**.
+
+To see values for the previous costs, select the **More options** icon next to either **consumed\_fbu** or **billed\_cost**, then select **Preview data**. You can also run a query in the script tab as shown in the following code example:
+
+```
+SELECT *
+FROM information_schema.engine_metering_history
+```
+
+## [](#create-an-engine)Create an Engine
+
+
+To process a query, you must use an engine. You can either create an engine based on the following recommendations, or use the system engine. The system engine can only run metadata-related queries, but it is always running, so you don’t have to wait for it to start, and you can use it with any database. If you create your own engine, there is a small startup time associated with it.
+
+Firebolt recommends the following initial engine configurations based on where you are in your exploration of Firebolt’s capabilities. An FBU is a Firebolt Unit, equivalent to 35 US cents. Typical FBU usage by task is as follows:
+
+| Task | Expected Usage |
+| --- | --- |
+| Ingest initial data | 4-16 FBU |
+| Run test queries | 8-32 FBU |
+| Find optimal query performance | 32-240 FBU |
+| Find optimal test integrations | 32-240 FBU |
+
+Each engine node can cache data locally to improve performance.
+
+Small and medium engines are available for use right away. If you want to use a large or extra-large engine, reach out to support@firebolt.io. The default engine configuration uses a small node, which is sufficient for this tutorial. To learn more about how to select the correct engine size for your workload, see [Sizing Engines](/Guides/operate-engines/sizing-engines.html).
+
+By default, when you log in to **Firebolt’s Workspace** for the first time, Firebolt creates a tab in the **Develop Space** called **Script 1**. The following apply:
+
+- The database that **Script 1** runs against is shown directly below the tab name. If you want to change the database, select another database from the drop-down list.
+- An engine must be running to process the script in a selected tab. The name and status of the engine that **Script 1** uses for computation is located to the right of the current selected database. To change either the engine or the status, select the drop-down arrow next to the engine name. You can select a new engine and change its status from **Stopped** to **Running** by selecting **Start engine**. If you select **Run** at the bottom of the workspace, the selected engine starts automatically. Select **Stop engine** to change the status to **Stopped**. Firebolt automatically stops your engine if it is inactive for 20 minutes.
+
+Because an engine is a dedicated compute node that nobody else can use, you are charged for each second that your engine is **Running**, even if it’s not processing a query.
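+
+Because charges accrue while an engine is **Running**, you may want to stop it from SQL when you finish a session rather than waiting for the 20-minute auto-stop. A sketch, assuming Firebolt’s engine management statements and the engine name used later in this tutorial:
+
+```
+START ENGINE tutorial_engine;
+-- ... run your queries ...
+STOP ENGINE tutorial_engine;
+```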
+
+If you used the **Load data** wizard, Firebolt has already created an engine for you, and you can skip the following step.
+
+1. Select the **(+)** icon next to **Databases**.
+2. Select **Create new engine** from the drop-down list.
+3. Enter the name of your engine in the **New engine name** text box. For this example, enter “tutorial\_engine” as your engine name.
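+
+Equivalently, you can likely create the engine from a script tab with SQL. A minimal sketch that accepts the default (small) configuration:
+
+```
+CREATE ENGINE IF NOT EXISTS tutorial_engine;
+```
+
+See [Operate engines](/Guides/operate-engines/operate-engines.html) for the configuration options you can add to this statement.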
+
+## [](#load-data)Load Data
+
+
+
+After creating an engine, you can load your data. This tutorial uses Firebolt’s publicly available sample dataset from the fictional [“Ultra Fast Gaming Inc.”](https://help.firebolt.io/t/ultra-fast-gaming-firebolt-sample-dataset/250) company. This dataset does not require access credentials. If your personal dataset requires access credentials, you will need to provide them. For examples of how to provide access credentials and more complex loading workflows, see [Loading data](/Guides/loading-data/loading-data.html). For more information about AWS access credentials, see [Creating Access key and Secret ID](/Guides/loading-data/creating-access-keys-aws.html).
+
+If you used the **Load data** wizard, skip ahead to the following **Run query** section.
+
+Use [COPY FROM](/sql_reference/commands/data-management/copy-from.html) in the **Develop Space** to copy data directly from a source into a Firebolt managed table.
+
+1. Enter the following into the **Script 1** tab to load data:
+
+ ```
+ COPY INTO tutorial FROM 's3://firebolt-publishing-public/help_center_assets/firebolt_sample_dataset/levels.csv' WITH HEADER=TRUE;
+ ```
+
+ For examples of more complex loading workflows, see [Load data](/Guides/loading-data/loading-data.html).
+2. Select **Run**.
+3. In the left navigation pane under the **Tutorial\_Database**, **Tables** now contains the **tutorial** table.
+4. Expand the drop down menu next to **Columns** to view the name and data format of each column.
+5. Select the **More options** icon next to the data type of each column name to open a pop-up that allows you to insert the name of the column into your SQL script. You can also select **Preview data**.
+6. To view the contents of the **tutorial** table, run a SELECT query as shown in the following code example. To run this in a new tab, select the (**+**) icon next to the **Script 1** tab.
+
+ ```
+ SELECT
+ *
+ FROM
+ tutorial
+ ```
+7. Select **Run**. The bottom of your workspace includes information about your processing job in the following tabs:
+
+ - The **Results** tab at the bottom of your **Develop Space** shows the contents returned by your query. After running the previous SELECT statement, the **Results** tab should display column names and values for the data in the tutorial.
+
+
+
+ - Select the filter icon to change which columns are shown.
+ - Select the **More options** icon to export the contents of the **Results** tab to a JSON or CSV file.
+ - The **Statistics** tab shows information about your query run. After running the previous SELECT statement, it shows the statement’s status (succeeded or failed), how long the query took to run, the number of rows processed, and the amount of data scanned.
+ - Select the **More options** icon to export the contents of the **Statistics** tab to a JSON or CSV file.
+ - The **Query Profile** tab contains metrics for each operator used in your query and a **Query id**. Select an operation to view its metrics. These metrics include the following:
+
+ - The output cardinality - the number of rows that each operator produced.
+ - The thread time - the sum of the wall clock time that threads spent to run the selected operation across all nodes.
+ - The CPU time - the sum of the time that threads that ran the operator were scheduled on a CPU core.
+ - The output types - the data types of the result of the operator.
+
+You can use these metrics to analyze and measure the efficiency and performance of your query. For example, if the CPU time is much smaller than the thread time, the input-output (IO) latency may be high, or the engine may be running multiple queries at the same time. For more information, see [Example with ANALYZE](/sql_reference/commands/queries/explain.html).
+
+- The **Engine monitoring** tab shows monitoring information including the percent CPU, memory, disk use and cache read. Information is shown from the last 5 minutes by default. Select a different time interval from the drop-down menu next to **Last 5 minutes**. You can also select the **Refresh** icon next to the drop-down menu to update the graphical information.
+- The **Query history** tab shows detailed information associated with each query, listed by its **Query id**. This information includes the query status, start time, number of rows and bytes scanned during the load, user and account information. You can choose the following options at the top of the bottom panel:
+
+ - Select the **Refresh** icon to update the query history and ID.
+ - Select the filter icon to remove or add columns to display.
+ - Select the **More options** icon to export the contents of the **Query history** tab to a JSON or CSV file.
+
+For more information about Firebolt’s **Develop Space**, see [Using the develop workspace](/Guides/query-data/using-the-develop-workspace.html).
+
+## [](#run-query)Run Query
+
+
+
+To run a query on your data, do the following:
+
+1. Select the (**+**) icon next to the **Script 2** tab to open a new tab.
+2. Enter the following simple query, which fetches a list of databases associated with your account:
+
+ ```
+ SHOW CATALOGS;
+ ```
+3. Select **Run** to process the query. Firebolt uses the engine listed to the right of your database to run your query; the engine’s status, **Running** or **Stopped**, appears next to its name. You can select a different engine from the drop-down menu next to the engine icon.
+
+ If your engine is **Stopped**, Firebolt may prompt you to start your engine. Select **Start Engine**. Engine startup typically requires a few moments to complete, as Firebolt prepares your environment for data analysis.
+
+For more information about Firebolt’s **Develop Space**, see [Use the Develop Space](/Guides/query-data/using-the-develop-workspace.html).
+
+## [](#optimize-your-workflow)Optimize your workflow
+
+
+
+Firebolt uses a number of optimization strategies to reduce query times. Over small datasets like those specified in this guide, the uplift may not be noticeable. However, these strategies can **dramatically improve** query performance for larger datasets. The following sections discuss how [primary indexes](#primary-indexes) and [aggregating indexes](#aggregating-indexes) do the following:
+
+- Reduce the amount of data that the query scans.
+- Pre-calculate values that are used repeatedly during computations.
+
+### [](#primary-indexes)Primary Indexes
+
+One of Firebolt’s key optimization strategies is to select a primary index for columns that are used frequently in `WHERE`, `JOIN`, and `GROUP BY` clauses, and in sorting. In Firebolt, a primary index is a type of **sparse index**, so selecting the best primary index can reduce query run times significantly by reducing the data that the query scans. Selecting primary indexes also allows Firebolt to manage updates, deletions, and insertions to tables while providing optimal query performance.
+
+If you have a composite primary index, the order that the columns are listed is important. Specify the column that has a large number of unique values, or high cardinality, first, followed by columns with lower cardinality. A sort order with the previous characteristics allows Firebolt to prune, or eliminate irrelevant data, so that it doesn’t have to scan it in query processing. Pruning significantly enhances query performance.
+
+You can create a primary index **only** when you create a table. If you want to change the primary index, you must create a new table. The following example shows how to use [CREATE TABLE](/sql_reference/commands/data-definition/create-fact-dimension-table.html) to create a new `levels` table, define the schema, and set a primary index on two columns:
+
+```
+CREATE TABLE IF NOT EXISTS levels (
+ "LevelID" INT,
+ "Name" TEXT,
+ "GameID" INT,
+ "LevelType" TEXT,
+ "MaxPoints" INT,
+ "PointsPerLap" DOUBLE,
+ "SceneDetails" TEXT
+)
+PRIMARY INDEX "LevelID", "Name";
+```
+
+In the previous code example, the primary index contains two columns. The first column, `LevelID`, is required in order to create a primary index. The second column, `Name`, and any following columns are optional. Firebolt uses all columns listed in the primary index to optimize query scans. If `Name` has lower cardinality than `LevelID`, this order lets Firebolt prune irrelevant data rather than scanning it. For more information about primary indexes and sort order, see [Primary index](/Overview/indexes/primary-index.html).
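+
+For instance, a query that filters on the leading primary-index column lets Firebolt skip the data ranges that fall outside the filter. The exact pruning behavior depends on how your data is laid out, but a sketch looks like this:
+
+```
+-- Filtering on "LevelID", the leading primary-index column,
+-- lets Firebolt prune data outside the 1-100 range
+SELECT "Name", "MaxPoints"
+FROM levels
+WHERE "LevelID" BETWEEN 1 AND 100;
+```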
+
+To read data into the `levels` table, enter the following into a new script tab:
+
+```
+COPY INTO levels
+FROM 's3://firebolt-publishing-public/help_center_assets/firebolt_sample_dataset/levels.csv'
+WITH TYPE = CSV
+HEADER = TRUE;
+```
+
+### [](#aggregating-indexes)Aggregating Indexes
+
+Another key optimization strategy includes pre-calculating aggregate values for columns that are frequently used in functions that combine data such as `COUNT`, `SUM`, `MAX`, `MIN`, `AVG`, `JOIN`, and `GROUP BY`. Rather than computing aggregate values each time they are used in a calculation, the results are accessed from storage, which helps run queries quickly and saves compute resources.
+
+An aggregating index combines columns into a statistical result. You can calculate an aggregate index on an entire table, or more efficiently, calculate them over a subset of table columns. You can also use your knowledge of which dimensions and aggregate functions are used most often for your use case to predefine what table dimensions and which aggregate functions to use.
+
+Once you create aggregate indexes, Firebolt maintains them automatically for you. If you load new data into your table or alter it, your aggregate indexes are automatically updated. You can also have multiple aggregate indexes for a single table. When you query a table with multiple aggregate indexes, Firebolt will automatically select the best index to use to optimize performance.
+
+From the **tutorial** table that you created in the previous step, assume you want to run a query that computes `AVG("NumberOfLaps")` grouped by `LevelType`. The following example code shows you how to create an aggregating index **levels\_agg\_idx** on the **LevelType** column to pre-calculate the average number of laps for each level:
+
+```
+CREATE AGGREGATING INDEX
+ levels_agg_idx
+ON tutorial (
+ "LevelType"
+ , AVG("NumberOfLaps")
+ );
+```
+
+After you run the script, the `levels_agg_idx` aggregating index is listed in the left navigation pane under **Indexes** in the **tutorial** table. Any queries over the **tutorial** table that use an average of the **NumberOfLaps** column grouped by **LevelType** now use the `levels_agg_idx` index instead of reading the entire table.
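+
+A query of the following shape should be served from the aggregating index rather than a full table scan; you can likely confirm the chosen plan with [EXPLAIN](/sql_reference/commands/queries/explain.html):
+
+```
+SELECT "LevelType", AVG("NumberOfLaps")
+FROM tutorial
+GROUP BY "LevelType";
+```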
+
+For more information, see [Aggregating index](/Overview/indexes/aggregating-index.html).
+
+### [](#warm-data-and-cache-eviction)Warm data and cache eviction
+
+Another key optimization strategy is to read warm data, or data accessed from cache, rather than reading “cold” data from an Amazon S3 bucket. Querying cold data can be significantly slower than querying warm data, particularly for large datasets that contain millions of rows or more. If you reach about 80% of your available cache, the least recently used data is evicted from the cache; the source data remains in Amazon S3.
+
+#### [](#warm-data)Warm data
+
+When data is warm, Firebolt has transferred it from remote storage in Amazon S3 to a local cache. Data is automatically warmed when you access it during a query, and stored in a solid state drive (SSD) cache. However, when you query data to warm it, you use an engine and incur [engine consumption](/Overview/engine-consumption.html) costs. Therefore, you should use filters to warm only the data that you need to access frequently in your queries.
+
+The following guidance applies:
+
+- If you need access to all the data in a table, use `CHECKSUM` to warm the entire table as follows:
+
+ ```
+ SELECT CHECKSUM(*) FROM levels;
+ ```
+- If you only need a few columns that meet a certain criteria, filter them before warming the data as shown in the following code example:
+
+ ```
+ SELECT CHECKSUM("Name", "MaxPoints") FROM levels WHERE "MaxPoints" BETWEEN 100 AND 250;
+ ```
+- If you have a large dataset, you can divide the data into smaller segments, and execute the queries in parallel, as shown in the following code example:
+
+ ```
+ SELECT CHECKSUM("Name", "MaxPoints") FROM levels WHERE "MaxPoints" BETWEEN 0 AND 100;
+ SELECT CHECKSUM("Name", "MaxPoints") FROM levels WHERE "MaxPoints" BETWEEN 101 AND 200;
+ SELECT CHECKSUM("Name", "MaxPoints") FROM levels WHERE "MaxPoints" > 200;
+ ```
+
+#### [](#cache-eviction)Cache eviction
+
+When your cache usage exceeds approximately 80% of its capacity, Firebolt automatically evicts some data from the cache. If you query this data later, Firebolt reloads it into the cache before processing the query. The total available cache size depends on your engine’s size and family. Larger engine sizes provide more cache space, and the storage-optimized family offers more cache than the compute-optimized family. Small and medium-sized engines are available for use right away. If you want to use a large or extra-large engine, reach out to [support@firebolt.io](mailto:support@firebolt.io).
+
+You can check the size of your cache using the following example code:
+
+```
+SHOW CACHE;
+```
+
+The previous code example shows your cache usage in GB out of the total cache available.
+
+When data is loaded into Firebolt, it is stored in units of data storage called tablets. A tablet contains a subset of a table’s rows and columns. If you reach 80% of your cache’s capacity, the entire tablet that contains the least recently used data is evicted.
+
+The following code example shows how to view information about the tablets that are used to store your table, including the number of uncompressed and compressed bytes on disk:
+
+```
+SELECT * FROM information_schema.engine_tablets WHERE table_name = 'levels';
+```
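+To estimate how much space a table occupies overall, you can also aggregate across its tablets. The following sketch assumes the view exposes a `compressed_bytes` column; check the columns of `information_schema.engine_tablets` on your engine for the exact names:
+
+```
+SELECT table_name, SUM(compressed_bytes) AS total_compressed_bytes
+FROM information_schema.engine_tablets
+GROUP BY table_name;
+```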
+
+## [](#clean-up)Clean up
+
+
+
+After you’ve completed the steps in this guide, avoid incurring costs associated with the getting started exercises by doing the following:
+
+- Stop any running engines.
+- Remove data from storage.
+
+### [](#stop-any-running-engines)Stop any running engines
+
+Firebolt shows the status of your current engine, either **Stopped** or **Running**, next to the engines icon under your script tab. To shut down your engine, select it from the drop-down list next to the engine name, and then select one of the following:
+
+- **Stop engine** - Allows all currently running queries to finish before shutting down the engine. Selecting this option lets the engine run for as long as it takes to complete all queries running on it.
+- **Terminate all queries and stop** - Stops the engine and cancels any running queries. Selecting this option stops the engine in about 20-30 seconds.
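+If you prefer SQL, you can also stop an engine from a script tab using the `STOP ENGINE` command, where `my_engine` is a placeholder for your engine name:
+
+```
+STOP ENGINE my_engine;
+```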
+
+### [](#remove-data-from-storage)Remove data from storage
+
+To remove a table and all of its data, enter [DROP TABLE](/sql_reference/commands/data-definition/drop-table.html) into a script tab, as shown in the following code example:
+
+```
+DROP TABLE levels;
+```
+
+To remove a database and all of its associated data, do the following in the Firebolt **Develop Space**:
+
+- Select the database from the left navigation bar.
+- Select the **More options** icon.
+- Select **Delete database**, and then select **Delete** to confirm. Deleting a database permanently removes it from Firebolt. You cannot undo this action.
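+Alternatively, you can remove a database with the `DROP DATABASE` command, where `my_database` is a placeholder for your database name:
+
+```
+DROP DATABASE IF EXISTS my_database;
+```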
+
+## [](#export-data)Export data
+
+
+
+If you want to save your data outside of Firebolt, you can use [COPY TO](/sql_reference/commands/data-management/copy-to.html) to export data to an external table. This section shows how to set the minimal AWS permissions and use `COPY TO` to export data to an [AWS S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html). You may have to reach out to your administrator to obtain or change AWS permissions.
+
+### [](#set-permissions-to-write-to-an-aws-bucket)Set permissions to write to an AWS bucket
+
+Firebolt must have the following permissions to write to an AWS S3 bucket:
+
+1. AWS access key credentials. The credentials must be associated with a user with permissions to write objects to the bucket. Specify access key credentials using the following syntax:
+
+```
+ CREDENTIALS = (AWS_ACCESS_KEY_ID = '<aws_access_key_id>' AWS_SECRET_ACCESS_KEY = '<aws_secret_access_key>')
+```
+
+In the previous credentials example, `<aws_access_key_id>` is the AWS access key ID associated with an AWS user or an IAM role. An access key ID has the following form: `AKIAIOSFODNN7EXAMPLE`. The value `<aws_secret_access_key>` is the AWS secret access key. A secret access key has the following form: `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`.
+
+2. An AWS IAM policy statement attached to a user role. Firebolt requires the following minimum permissions in the IAM policy:
+
+ ```
+ {
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Action": [
+ "s3:Get*",
+ "s3:List*",
+ "s3:PutObject",
+ "s3:DeleteObject"
+ ],
+ "Resource": [
+ "arn:aws:s3:::my_s3_bucket",
+ "arn:aws:s3:::my_s3_bucket/*"
+ ]
+ }
+ ]
+ }
+ ```
+
+ For more information about AWS access keys and roles, see [Creating Access Key and Secret ID in AWS](/Guides/loading-data/creating-access-keys-aws.html).
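+With the access key credentials and the IAM policy above in place, a `COPY TO` statement using key-based authentication might look like the following sketch. The bucket path is a placeholder, and the key values must be replaced with your own credentials:
+
+```
+COPY (SELECT * FROM levels)
+  TO 's3://my_s3_bucket/exports/'
+  CREDENTIALS = (AWS_ACCESS_KEY_ID = 'AKIAIOSFODNN7EXAMPLE' AWS_SECRET_ACCESS_KEY = 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY');
+```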
+
+### [](#export-to-an-aws-bucket)Export to an AWS bucket
+
+Use [COPY TO](/sql_reference/commands/data-management/copy-to.html) to select all the columns from a table and export them to an AWS S3 bucket, as shown in the following code example:
+
+```
+COPY (SELECT * FROM test_table)
+ TO 's3://my-bucket/path/to/data'
+ CREDENTIALS =
+ (AWS_ROLE_ARN= 'arn:aws:iam::123456789012:role/my-firebolt-role');
+```
+
+In the previous code example, the role ARN ([Amazon Resource Name](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html)) identifies the AWS IAM role that grants access for users or services. An ARN has the following structure: `arn:aws:iam::account-id:role/role-name`. Because `TYPE` is omitted from `COPY TO`, the file or files are written in the default CSV format. Because `COMPRESSION` is also omitted, the output data is compressed in GZIP (\*.csv.gz) format.
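+If you need a different output format, you can set `TYPE` explicitly instead of relying on the defaults. The following sketch assumes Parquet is among the supported `TYPE` values; check the `COPY TO` reference for the full list of format and compression options:
+
+```
+COPY (SELECT * FROM test_table)
+  TO 's3://my-bucket/path/to/data'
+  TYPE = PARQUET
+  CREDENTIALS =
+    (AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/my-firebolt-role');
+```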
+
+Firebolt assigns a query ID, such as `16B903C4206098FD`, to the query at runtime. If the size of the compressed output exceeds the default of 16 MB, Firebolt writes multiple GZIP files. In the following example, the size of the output is 40 MB, so Firebolt writes 4 files:
+
+```
+s3://my_bucket/my_fb_queries/
+16B903C4206098FD_0.csv.gz
+16B903C4206098FD_1.csv.gz
+16B903C4206098FD_2.csv.gz
+16B903C4206098FD_3.csv.gz
+```
+
+## [](#next-steps)Next steps
+
+To continue learning about Firebolt’s architecture, capabilities, using Firebolt after your trial period, and setting up your organization, see [Resources beyond getting started](/Guides/getting-started/get-started-next.html).
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_integrations_airbyte.md b/cmd/docs-scrapper/fireboltdocs/guides_integrations_airbyte.md
new file mode 100644
index 0000000..5af6bdc
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_integrations_airbyte.md
@@ -0,0 +1,94 @@
+# [](#overview)Overview
+
+
+
+Airbyte is an open-source data integration platform that significantly simplifies the ETL (Extract, Transform, Load) process, making it easier for users to manage and migrate their data across various sources. By providing a user-friendly interface and robust functionality, Airbyte enables seamless data movement and transformation, catering to a wide range of data integration needs. One of the key features of Airbyte is its extensive range of connectors, which allow it to integrate with numerous data sources and destinations.
+
+Using Airbyte’s Firebolt connector, users can efficiently and effortlessly load large amounts of data to and from Firebolt. This capability extends to integration with a wide array of data sources, thanks to Airbyte’s extensive library of connectors. Whether your data resides in cloud storage, on-premises databases, SaaS applications, or other data warehouses, Airbyte facilitates smooth and reliable data transfer between these sources and Firebolt.
+
+# [](#quickstart)Quickstart
+
+There are several [ways](https://docs.airbyte.com/deploying-airbyte) to deploy Airbyte. In this tutorial, we will use the easiest way to start prototyping: a local [Docker Compose](https://docs.docker.com/compose/) deployment.
+
+If you already have an Airbyte deployment, skip to the [configuration section](#step-2-configure-firebolt-connection-via-ui).
+
+#### [](#prerequisites)Prerequisites
+
+1. **Docker**: Ensure you have Docker installed. You can download it from [here](https://www.docker.com/products/docker-desktop).
+2. **Firebolt Account**: You need an active Firebolt account. Sign up [here](https://www.firebolt.io/) if you don’t have one.
+3. **Firebolt Database and Table**: Make sure you have a Firebolt database and table with data ready for querying.
+4. **Firebolt Service Account**: Create a [service account](/Guides/managing-your-organization/service-accounts.html) in Firebolt and note its ID and secret.
+
+#### [](#step-1-deploy-airbyte-locally-with-docker)Step 1: Deploy Airbyte Locally with Docker
+
+1. Clone the Airbyte repository:
+
+ ```
+ git clone --depth=1 https://github.com/airbytehq/airbyte.git
+ ```
+2. Switch to the Airbyte directory:
+
+ ```
+ cd airbyte
+ ```
+3. Start Airbyte by running the following command in the terminal:
+
+ ```
+ ./run-ab-platform.sh
+ ```
+4. Open your browser and navigate to `http://localhost:8000` to access the Airbyte UI.
+5. You will be asked for a username and password. By default, the username is `airbyte` and the password is `password`. Before you deploy Airbyte in production, make sure to change the password.
+
+#### [](#step-2-configure-firebolt-connection-via-ui)Step 2: Configure Firebolt Connection via UI
+
+1. In the Airbyte UI, click on the **“Connections”** tab and select **“Create your first connection”**.
+2. Click on **“New Destination”** and select **“Firebolt”** as the destination type.
+3. Enter your Firebolt connection details:
+
+   - Client ID: Your service account ID.
+   - Client Secret: Your service account secret.
+   - Database: Your database name.
+   - Account: Your Firebolt [account](/Guides/managing-your-organization/managing-accounts.html).
+   - Engine: The Firebolt engine that will run the ingestion.
+   - Host (Optional): For non-standard use cases; should be left blank.
+4. Select a replication strategy. SQL is easier to set up, but S3 is more performant for production loads. See the [Airbyte doc](https://docs.airbyte.com/integrations/destinations/firebolt) for more information.
+5. Save.
+
+ 
+
+#### [](#step-3-create-a-connection-in-airbyte)Step 3: Create a Connection in Airbyte
+
+1. In the Airbyte UI, click on the **“Connections”** tab and select **“Create your first connection”** (**“New Connection”** if you already have a connection defined).
+2. Choose a source from which you want to extract data. We’ll be using **Faker** to generate some sample data.
+3. Leave the fields as they are and click **“Set up source”**.
+4. Next, on the destination screen, select the Firebolt destination you configured earlier.
+5. Select the streams you want to replicate and a sync mode (Full refresh or Incremental). To save time, select only the “products” stream.
+6. Finally, specify the frequency of your data replication, or choose manual if you want to trigger the job in the UI or via an API call.
+7. Click **“Set up connection”** to start syncing data from your source to Firebolt!
+
+ 
+
+#### [](#step-4-monitor-and-manage-data-syncs)Step 4: Monitor and Manage Data Syncs
+
+1. Use the Airbyte UI to monitor your data syncs and ensure that data is being transferred accurately and efficiently.
+2. Adjust sync settings and transformations as needed to optimize your ETL process. You can also leverage dbt to transform the data after it lands in Firebolt.
+
+ 
+
+### [](#output-schema)Output schema
+
+The Firebolt Destination connector is a V1 connector, meaning it works with raw data. Refer to Airbyte’s [Destination V2 document](https://docs.airbyte.com/using-airbyte/core-concepts/typing-deduping#what-is-destinations-v2) to learn about the differences. Each stream is written into its own [Fact table](/Overview/indexes/using-indexes.html#firebolt-managed-tables) in Firebolt, containing three columns:
+
+- `_airbyte_ab_id`: a UUID assigned by Airbyte to each processed event. The column type is TEXT.
+- `_airbyte_emitted_at`: a TIMESTAMP indicating when the event was pulled from the source.
+- `_airbyte_data`: a JSON blob representing event data, stored as TEXT, but can be parsed using [JSON functions](/sql_reference/functions-reference/JSON/).
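+Because `_airbyte_data` is stored as TEXT containing a JSON document, you can also parse records client-side. The following minimal Python sketch works on a row already fetched from a raw Airbyte table; the payload field names (`id`, `make`, `price`) are made up for illustration:
+
```
import json

# A row as it might come back from a raw Airbyte table.
# The payload fields ("id", "make", "price") are illustrative only.
row = {
    "_airbyte_ab_id": "c7f4d3a0-1b2c-4d5e-8f90-1234567890ab",
    "_airbyte_emitted_at": "2024-01-01 00:00:00",
    "_airbyte_data": '{"id": 1, "make": "Mazda", "price": 19500.5}',
}

# Parse the JSON blob into a Python dict and read individual fields
event = json.loads(row["_airbyte_data"])
print(event["make"], event["price"])  # prints: Mazda 19500.5
```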
+
+### [](#further-reading)Further Reading
+
+After setting up Airbyte with Firebolt, explore these resources to leverage additional features and enhance your data integration capabilities:
+
+1. Learn how to use [Firebolt Source](https://docs.airbyte.com/integrations/sources/firebolt).
+2. Ensure you’re following [security guidelines](https://docs.airbyte.com/operating-airbyte/security).
+3. Explore other [deployment options](https://docs.airbyte.com/deploying-airbyte).
+4. Configure your [connections](https://docs.airbyte.com/cloud/managing-airbyte-cloud/configuring-connections).
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_integrations_airflow.md b/cmd/docs-scrapper/fireboltdocs/guides_integrations_airflow.md
new file mode 100644
index 0000000..e17c7e3
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_integrations_airflow.md
@@ -0,0 +1,413 @@
+# [](#connecting-to-airflow)Connecting to Airflow
+
+[Apache Airflow](https://airflow.apache.org/) is a data orchestration tool that allows you to programmatically create, schedule, and monitor workflows. You can connect a Firebolt database into your data pipeline using the Airflow provider package for Firebolt. For example, you can schedule automatic incremental data ingestion into Firebolt.
+
+This guide explains how to install the [Airflow provider package](https://pypi.org/project/airflow-provider-firebolt/) for Firebolt, set up a connection to Firebolt resources using the Airflow user interface (UI), and create an example Directed Acyclic Graph (DAG) for common Firebolt tasks. The source code for the Airflow provider package for Firebolt is available in the [airflow-provider-firebolt](https://github.com/firebolt-db/airflow-provider-firebolt) repository on GitHub.
+
+## [](#prerequisites)Prerequisites
+
+Make sure that you have:
+
+- A Firebolt account. [Create a new account](/Guides/managing-your-organization/managing-accounts.html#create-a-new-account).
+- A Firebolt database and engine.
+- [Python](https://www.python.org/downloads/) version 3.8 or later.
+- An installation of Airflow. See the [Airflow installation guide](https://airflow.apache.org/docs/apache-airflow/stable/installation/index.html).
+
+## [](#install-the-airflow-provider-package-for-firebolt)Install the Airflow provider package for Firebolt
+
+You need to install the Airflow provider package for Firebolt. This package enables Firebolt as a **Connection type** in the Airflow UI.
+
+1. Install the package.
+
+ Run the following command to install the package:
+
+ ```
+ pip install airflow-provider-firebolt
+ ```
+2. Upgrade to the latest version.
+
+ Run the latest version of the provider package. [Release history](https://pypi.org/project/airflow-provider-firebolt/#history) is available on PyPI.
+
+ Use the following command to upgrade:
+
+ ```
+ pip install airflow-provider-firebolt --upgrade
+ ```
+
+ Restart Airflow after upgrading to apply the new changes.
+3. Install a specific version.
+
+ If a specific version is required, replace `1.0.0` with the desired version:
+
+ ```
+ pip install airflow-provider-firebolt==1.0.0
+ ```
+4. Install the provider for AWS Managed Airflow (MWAA).
+
+ Ensure you are using version 2 of AWS Managed Airflow (MWAA) when working with the Firebolt Airflow provider. Add `airflow-provider-firebolt` to the `requirements.txt` file following the instructions in the [MWAA Documentation.](https://docs.aws.amazon.com/mwaa/latest/userguide/working-dags-dependencies.html)
+
+## [](#connect-airflow-to-firebolt)Connect Airflow to Firebolt
+
+Create a connection object in the Airflow UI to integrate Firebolt with Airflow.
+
+### [](#steps-to-configure-a-connection)Steps to configure a connection
+
+1. Open the Airflow UI and log in.
+2. Select the **Admin** menu.
+3. Choose **Connections**.
+4. Select the **+** button to add a new connection.
+5. Choose **Firebolt** from the **Connection Type** list.
+6. Provide the details in the following table. These connection parameters correspond to built-in Airflow variables.
+
+   | Parameter | Description | Example value |
+   | --- | --- | --- |
+   | Connection id | The name of the connection for the UI. | `My_Firebolt_Connection` |
+   | Description | Information about the connection. | `Connection to Firebolt database MyDatabase using engine MyFireboltDatabase_general_purpose.` |
+   | Database | The name of the Firebolt database to connect to. | `MyFireboltDatabase` |
+   | Engine | The name of the engine to run queries. | `MyFireboltEngine` |
+   | Client ID | The ID of your service account. | `XyZ83JSuhsua82hs` |
+   | Client Secret | The [secret](/Guides/loading-data/creating-access-keys-aws.html) for your service account authentication. | `yy7h&993))29&%j` |
+   | Account | The name of your account. | `developer` |
+   | Extra | The additional properties that you may need to set (optional). | `{"property1": "value1", "property2": "value2"}` |
+
+ Client ID and secret credentials can be obtained by registering a [service account](/Guides/managing-your-organization/service-accounts.html).
+7. Choose **Test** to verify the connection.
+8. Once the test succeeds, select **Save**.
+
+## [](#create-a-dag-for-data-processing-with-firebolt)Create a DAG for data processing with Firebolt
+
+A DAG file in Airflow is a Python script that defines tasks and their execution order for a data workflow. The following example DAG performs these tasks:
+
+- Start a Firebolt [engine](/Overview/engine-fundamentals.html).
+- Create an [external table](/Guides/loading-data/working-with-external-tables.html) linked to an Amazon S3 data source.
+- Create a fact table for ingested data. For more information, see [Firebolt-managed tables](/Overview/indexes/using-indexes.html#firebolt-managed-tables).
+- Insert data into the fact table.
+- Stop the Firebolt engine. This task is not required if your engine has `AUTO_STOP` configured.
+
+### [](#dag-script-example)DAG script example
+
+The following DAG script creates a DAG named `firebolt_provider_startstop_trip_data`. It uses an Airflow connection to Firebolt named `firebolt`. For the contents of the SQL scripts that the DAG runs, see the following [SQL script examples](#sql-script-examples). You can run this example with your own database and engine by updating the connection values in Airflow, setting `firebolt_conn_id` to match your connection, and creating the necessary custom variables in Airflow.
+
+```
+import time
+import airflow
+from airflow.models import DAG, Variable
+from firebolt_provider.operators.firebolt \
+ import FireboltOperator, FireboltStartEngineOperator, FireboltStopEngineOperator
+
+# Define function to get Firebolt connection parameters
+def connection_params(conn_opp, field):
+ connector = FireboltOperator(
+ firebolt_conn_id=conn_opp, sql="", task_id="CONNECT")
+ conn_parameters = connector.get_db_hook()._get_conn_params()
+ return getattr(conn_parameters, field)
+
+# Set up the Firebolt connection ID
+firebolt_conn_id = 'firebolt'
+firebolt_engine_name = connection_params(firebolt_conn_id, 'engine_name')
+tmpl_search_path = Variable.get("firebolt_sql_path")
+default_args = {
+ 'owner': 'airflow',
+ 'start_date': airflow.utils.dates.days_ago(1)
+}
+
+# Function to open query files saved locally.
+def get_query(query_file):
+    with open(query_file, "r") as file:
+        return file.read()
+
+# Define a variable based on an Airflow DAG class.
+# For class parameters, see https://airflow.apache.org/docs/apache-airflow/stable/_api/airflow/models/dag/index.html#airflow.models.dag.DAG.
+with DAG('firebolt_provider_startstop_trip_data',
+ default_args=default_args,
+ template_searchpath=tmpl_search_path,
+ schedule_interval=None,
+ catchup=False,
+ tags=["firebolt"]) as dag:
+
+ # Define DAG tasks and task sequence.
+ # Where necessary, read local sql files using the Airflow variable.
+ task_start_engine = FireboltStartEngineOperator(
+ dag=dag,
+ task_id="START_ENGINE",
+ firebolt_conn_id=firebolt_conn_id,
+ engine_name=firebolt_engine_name)
+
+ task_trip_data__external_table = FireboltOperator(
+ dag=dag,
+ task_id="task_trip_data__external_table",
+ sql=get_query(f'{tmpl_search_path}/trip_data__create_external_table.sql'),
+ firebolt_conn_id=firebolt_conn_id
+ )
+
+ task_trip_data__create_table = FireboltOperator(
+ dag=dag,
+ task_id="task_trip_data__create_table",
+ sql=get_query(f'{tmpl_search_path}/trip_data__create_table.sql'),
+ firebolt_conn_id=firebolt_conn_id
+ )
+ task_trip_data__create_table.post_execute = lambda **x: time.sleep(10)
+
+ task_trip_data__process_data = FireboltOperator(
+ dag=dag,
+ task_id="task_trip_data__process_data",
+ sql=get_query(f'{tmpl_search_path}/trip_data__process.sql'),
+ firebolt_conn_id=firebolt_conn_id
+ )
+
+ task_stop_engine = FireboltStopEngineOperator(
+ dag=dag,
+ task_id="STOP_ENGINE",
+ firebolt_conn_id=firebolt_conn_id,
+ engine_name=firebolt_engine_name)
+
+ (task_start_engine >> task_trip_data__external_table >>
+ task_trip_data__create_table >> task_trip_data__process_data >> task_stop_engine)
+```
+
+This DAG showcases various Firebolt tasks as an example and is not intended to represent a typical real-world workflow or pipeline.
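+Once the DAG file is in your Airflow `dags` folder, you can trigger it, or run a single task in isolation, from the Airflow command line. The following sketch uses the DAG and task IDs defined in the example above:
+
+```
+# Trigger a full run of the DAG
+airflow dags trigger firebolt_provider_startstop_trip_data
+
+# Run one task in isolation for debugging (no scheduler needed)
+airflow tasks test firebolt_provider_startstop_trip_data START_ENGINE 2024-01-01
+```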
+
+### [](#define-airflow-variables)Define Airflow variables
+
+Airflow variables store key-value pairs that DAGs can use during execution. You can create and manage variables through the Airflow user interface (UI) or JSON documents. For detailed instructions, see Airflow’s [Variables](https://airflow.apache.org/docs/apache-airflow/stable/concepts/variables.html) and [Managing Variables](https://airflow.apache.org/docs/apache-airflow/stable/howto/variable.html) documentation pages.
+
+**Example variable for SQL files**
+The DAG example uses the custom variable `firebolt_sql_path` to define the directory within your Airflow home directory where SQL files are stored. The DAG reads these files to execute tasks in Firebolt.
+
+- **Key**: `firebolt_sql_path`
+- **Value**: Path to the directory containing SQL scripts. For example, `~/airflow/sql_store`.
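+Assuming you have shell access to your Airflow host, one way to define this variable is Airflow’s `variables` CLI; the path below is only an example and should point at your own SQL directory:
+
+```
+airflow variables set firebolt_sql_path ~/airflow/sql_store
+```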
+
+**Using the variable in the DAG**
+A Python function in the DAG reads the SQL scripts stored in the directory defined by `firebolt_sql_path`, which allows the DAG to execute the SQL files as tasks in Firebolt.
+
+The following example demonstrates how the variable is accessed in the DAG script:
+
+```
+tmpl_search_path = Variable.get("firebolt_sql_path")
+
+def get_query(query_file):
+ with open(query_file, "r") as file:
+ return file.read()
+```
+
+### [](#sql-script-examples)SQL script examples
+
+Save the following SQL scripts to your `tmpl_search_path`.
+
+#### [](#trip_data__create_external_tablesql)trip\_data\_\_create\_external\_table.sql
+
+This example creates the `ex_trip_data` external table that connects to a public Amazon S3 data store.
+
+```
+CREATE EXTERNAL TABLE IF NOT EXISTS ex_trip_data(
+ vendorid INTEGER,
+ lpep_pickup_datetime TIMESTAMP,
+ lpep_dropoff_datetime TIMESTAMP,
+ passenger_count INTEGER,
+ trip_distance REAL,
+ ratecodeid INTEGER,
+ store_and_fwd_flag TEXT,
+ pu_location_id INTEGER,
+ do_location_id INTEGER,
+ payment_type INTEGER,
+ fare_amount REAL,
+ extra REAL,
+ mta_tax REAL,
+ tip_amount REAL,
+ tolls_amount REAL,
+ improvement_surcharge REAL,
+ total_amount REAL,
+ congestion_surcharge REAL
+)
+url = 's3://firebolt-publishing-public/samples/taxi/'
+object_pattern = '*yellow*2020*.csv'
+type = (CSV SKIP_HEADER_ROWS = true);
+```
+
+#### [](#trip_data__create_tablesql)trip\_data\_\_create\_table.sql
+
+This example creates the `my_taxi_trip_data` fact table to receive ingested data.
+
+```
+DROP TABLE IF EXISTS my_taxi_trip_data;
+CREATE FACT TABLE IF NOT EXISTS my_taxi_trip_data(
+ vendorid INTEGER,
+ lpep_pickup_datetime TIMESTAMP,
+ lpep_dropoff_datetime TIMESTAMP,
+ passenger_count INTEGER,
+ trip_distance REAL,
+ ratecodeid INTEGER,
+ store_and_fwd_flag TEXT,
+ pu_location_id INTEGER,
+ do_location_id INTEGER,
+ payment_type INTEGER,
+ fare_amount REAL,
+ extra REAL,
+ mta_tax REAL,
+ tip_amount REAL,
+ tolls_amount REAL,
+ improvement_surcharge REAL,
+ total_amount REAL,
+ congestion_surcharge REAL,
+ SOURCE_FILE_NAME TEXT,
+ SOURCE_FILE_TIMESTAMP TIMESTAMP
+) PRIMARY INDEX vendorid;
+```
+
+#### [](#trip_data__processsql)trip\_data\_\_process.sql
+
+An `INSERT INTO` operation ingests data into the `my_taxi_trip_data` fact table using the `ex_trip_data` external table. This example uses the external table metadata column, `$source_file_timestamp`, to retrieve only records from files newer than those already ingested. The `COALESCE(..., true)` wrapper ensures that all rows are ingested on the first run, when the target table is empty and the `MAX` subquery returns `NULL`.
+
+```
+INSERT INTO my_taxi_trip_data
+SELECT
+ vendorid,
+ lpep_pickup_datetime,
+ lpep_dropoff_datetime,
+ passenger_count,
+ trip_distance,
+ ratecodeid,
+ store_and_fwd_flag,
+ pu_location_id,
+ do_location_id,
+ payment_type,
+ fare_amount,
+ extra,
+ mta_tax,
+ tip_amount,
+ tolls_amount,
+ improvement_surcharge,
+ total_amount,
+ congestion_surcharge,
+ $source_file_name,
+ $source_file_timestamp
+FROM ex_trip_data
+WHERE coalesce($source_file_timestamp > (SELECT MAX(source_file_timestamp) FROM my_taxi_trip_data), true);
+```
+
+## [](#example-working-with-query-results)Example: Working with query results
+
+The `FireboltOperator` is designed to execute SQL queries but does not return query results. To retrieve query results, use the `FireboltHook` class. The following example demonstrates how to use `FireboltHook` to execute a query and log the row count in the `my_taxi_trip_data` table.
+
+### [](#python-code-example-retrieving-query-results)Python code example: Retrieving query results
+
+```
+import logging
+
+import airflow
+from airflow import DAG
+from airflow.operators.python import PythonOperator
+from firebolt_provider.hooks.firebolt import FireboltHook
+from airflow.providers.common.sql.hooks.sql import fetch_one_handler
+
+# Set up the Firebolt connection ID
+firebolt_conn_id = 'firebolt'
+default_args = {
+ 'owner': 'airflow',
+ 'start_date': airflow.utils.dates.days_ago(1)
+}
+
+
+# Function to notify the team about the data
+def notify(message: str):
+ logging.info(message)
+
+
+# Function to fetch data from Firebolt and notify the team
+def fetch_firebolt_data():
+ hook = FireboltHook(firebolt_conn_id=firebolt_conn_id)
+ results = hook.run(
+ "SELECT count(*) FROM my_taxi_trip_data",
+ handler=fetch_one_handler
+ )
+ count = results[0]
+ notify("Amount of data in Firebolt: " + str(count))
+
+
+with DAG(
+ 'return_result_dag',
+ default_args=default_args,
+ schedule_interval=None, # Run manually
+ catchup=False
+) as dag:
+ # Define a Python operator to fetch data from Firebolt and notify the team
+ monitor_firebolt_data = PythonOperator(
+ task_id='monitor_firebolt_data',
+ python_callable=fetch_firebolt_data
+ )
+
+ monitor_firebolt_data
+```
+
+## [](#example-controlling-query-execution-timeout)Example: Controlling query execution timeout
+
+The Firebolt provider includes parameters to control query execution time and behavior when a timeout occurs:
+
+- `query_timeout`: Sets the maximum duration, in seconds, that a query can run.
+- `fail_on_query_timeout`: If `True`, a timeout raises a `QueryTimeoutError`. If `False`, the query is stopped quietly and the task proceeds without raising an error.
+
+### [](#python-code-example-using-timeout-settings)Python code example: Using timeout settings
+
+In this example, the `FireboltOperator` task stops execution after one second and proceeds without error. The `PythonOperator` task fetches data from Firebolt with a timeout of 0.5 seconds and raises an error if the query times out.
+
+```
+import airflow
+from airflow.models import DAG, Variable
+from airflow.operators.python import PythonOperator
+from firebolt_provider.hooks.firebolt import FireboltHook
+from airflow.providers.common.sql.hooks.sql import fetch_one_handler
+from firebolt_provider.operators.firebolt import FireboltOperator
+
+# Set up the Firebolt connection ID
+firebolt_conn_id = 'firebolt'
+default_args = {
+ 'owner': 'airflow',
+ 'start_date': airflow.utils.dates.days_ago(1)
+}
+tmpl_search_path = Variable.get("firebolt_sql_path")
+
+
+def get_query(query_file):
+    with open(query_file, "r") as file:
+        return file.read()
+
+# Function to fetch data with a timeout
+def fetch_with_timeout():
+ hook = FireboltHook(
+ firebolt_conn_id=firebolt_conn_id,
+ query_timeout=0.5,
+ fail_on_query_timeout=True,
+ )
+ results = hook.run(
+ "SELECT count(*) FROM my_taxi_trip_data",
+ handler=fetch_one_handler,
+ )
+ print(f"Results: {results}")
+
+# Define the DAG
+with DAG(
+ 'timeout_dag',
+ default_args=default_args,
+ schedule_interval=None, # Run manually
+ catchup=False
+) as dag:
+
+ # Firebolt operator with a timeout
+
+ firebolt_operator_with_timeout = FireboltOperator(
+ dag=dag,
+ task_id="insert_with_timeout",
+ sql=get_query(f'{tmpl_search_path}/trip_data__process.sql'),
+ firebolt_conn_id=firebolt_conn_id,
+ query_timeout=1,
+ # Task will not fail if query times out, and will proceed to the next task
+ fail_on_query_timeout=False,
+ )
+
+ # Python operator to fetch data with a timeout
+ operator_with_hook_timeout = PythonOperator(
+ dag=dag,
+ task_id='select_with_hook_timeout',
+ python_callable=fetch_with_timeout,
+ )
+
+ firebolt_operator_with_timeout >> operator_with_hook_timeout
+```
+
+## [](#additional-resources)Additional resources
+
+For more information about connecting to Airflow, refer to the following resources:
+
+- [Managing Connections in Airflow](https://airflow.apache.org/docs/apache-airflow/stable/howto/connection.html)
+- [Firebolt Airflow provider on Pypi](https://pypi.org/project/airflow-provider-firebolt/)
+- [DAGs](https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/dags.html)
+- [airflow.models.dag](https://airflow.apache.org/docs/apache-airflow/stable/_api/airflow/models/dag/index.html#module-airflow.models.dag)
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_integrations_connecting_to_apache_superset.md b/cmd/docs-scrapper/fireboltdocs/guides_integrations_connecting_to_apache_superset.md
new file mode 100644
index 0000000..1866423
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_integrations_connecting_to_apache_superset.md
@@ -0,0 +1,105 @@
+# [](#apache-superset)Apache Superset
+
+[Apache Superset](https://superset.apache.org) is an open-source data exploration and visualization platform that empowers users to create interactive, shareable dashboards and charts for analyzing and presenting data. It supports a wide range of data sources and provides an intuitive, web-based interface for data exploration, slicing, and dicing, with features like dynamic filtering, pivot tables, and drag-and-drop functionality. Superset also offers a rich set of visualization options and can be extended through custom plugins, making it a versatile tool for data analysts and business users to gain insights from their data and collaborate effectively.
+
+With its exceptional speed and scalability, Firebolt allows users to handle vast amounts of data with minimal query latency, ensuring that Superset dashboards and visualizations load quickly, even when dealing with massive datasets. This integration between Firebolt and Superset creates a powerful combination for data professionals, offering them a streamlined and efficient workflow for extracting maximum value from their data.
+
+Firebolt is also supported in [Preset](/Guides/integrations/connecting-to-preset.html), a fully managed cloud Superset solution.
+
+# [](#prerequisites)Prerequisites
+
+Superset can be installed in several ways, including using a pre-built [Docker container](https://superset.apache.org/docs/installation/installing-superset-using-docker-compose), building it from [source](https://superset.apache.org/docs/installation/installing-superset-from-scratch), or deploying via a [Kubernetes Helm chart](https://superset.apache.org/docs/installation/running-on-kubernetes).
+
+The easiest way to get started is to run Superset via Docker.
+
+You will need:
+
+- [Docker](https://www.docker.com/) and [Docker Compose](https://docs.docker.com/compose/).
+- [VirtualBox](https://www.virtualbox.org/) (Windows only).
+- [Git](https://git-scm.com/).
+
+# [](#quickstart)Quickstart
+
+Follow this guide to set up Superset and build your first chart.
+
+### [](#setup-superset)Set up Superset
+
+1. Clone Superset’s GitHub [repository](https://github.com/apache/superset)
+
+ ```
+ git clone https://github.com/apache/superset.git
+ ```
+2. Change directory to the root of the newly cloned repository and add the Firebolt driver
+
+ ```
+ cd superset
+ touch ./docker/requirements-local.txt
+ echo "firebolt-sqlalchemy" >> ./docker/requirements-local.txt
+ ```
+3. Run Superset via Docker Compose
+
+ ```
+ docker compose -f docker-compose-non-dev.yml up
+ ```
+4. (Optional) Verify that the Firebolt driver is present in the Superset container
+
+   ```
+   docker exec -it superset_app bash   # superset_app is the Superset app container started by Docker Compose
+   pip freeze | grep firebolt
+   ```
+
+ You should see `firebolt-sqlalchemy` in the output.
+
+Once Superset has started, you can access it at http://localhost:8088/.
+
+> **Note:** For more installation details, refer to [Adding New Database Drivers in Docker](https://superset.apache.org/docs/databases/docker-add-drivers) in the Superset documentation.
+
+### [](#setup-firebolt-connection)Set up the Firebolt connection
+
+After the initial setup, head to `Settings -> Database connections` in the top-right corner of the Superset user interface.
+
+
+
+On the next screen, press the `+ Database` button and select Firebolt from the dropdown. If you don’t see Firebolt in the list, please refer to the [Setup Superset](#setup-superset) section for instructions on how to install the Firebolt driver and verify that the driver is present.
+
+
+
+The connection expects a SQLAlchemy connection string of the form:
+
+```
+firebolt://{client_id}:{client_secret}@{database}/{engine_name}?account_name={account_name}
+```
+
+To authenticate, use a service account ID and secret. A service account is identified by a `client_id` and a `client_secret`. Learn how to generate an ID and secret [here](/Guides/managing-your-organization/service-accounts.html).
+
+The account name must be provided; you can learn more about accounts in the [Manage accounts](/Guides/managing-your-organization/managing-accounts.html) section.
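As an illustration, the connection string can be assembled programmatically. This is only a sketch: the credential values below are placeholders, and `quote_plus` is used defensively in case a secret contains URL-reserved characters.

```python
from urllib.parse import quote_plus

def firebolt_url(client_id: str, client_secret: str,
                 database: str, engine_name: str, account_name: str) -> str:
    """Build a firebolt:// SQLAlchemy URL; credentials are URL-encoded."""
    return (
        f"firebolt://{quote_plus(client_id)}:{quote_plus(client_secret)}"
        f"@{database}/{engine_name}?account_name={account_name}"
    )

# Placeholder values -- substitute your own service account details.
print(firebolt_url("my-client-id", "my-secret", "my_db", "my_engine", "my_account"))
# firebolt://my-client-id:my-secret@my_db/my_engine?account_name=my_account
```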
+
+
+
+Click the **Test Connection** button to confirm that everything works end to end. If the connection succeeds, save the configuration by clicking the **Connect** button in the bottom-right corner of the modal window. Now you’re ready to start using Superset!
+
+### [](#build-your-first-chart)Build your first chart
+
+> **Note:** This section assumes you have followed Firebolt [tutorial](/Guides/getting-started/) and loaded a sample data set into your database.
+
+Now that you’ve configured Firebolt as a data source, you can select specific tables (Datasets) that you want to see in Superset.
+
+Go to `Data -> Datasets` and select **+ Dataset**. There you can select your sample table by specifying Firebolt as your database, along with your schema and the table name you chose.
+
+
+
+Press “Create Dataset and Create Chart”. On the next screen you can select your desired chart type. For this tutorial we will go with a simple Bar Chart.
+
+
+
+On the next screen you can drag and drop table columns into Metrics and Dimensions, and specify filters or sort orders. We will plot the maximum play time per level, grouped by level type, with the x-axis sorted in ascending order.
+
+
+
+Your first chart is ready! You can now save it, add more data to it or change its type. You can also start building a dashboard with different charts telling a story. Learn more about this functionality and more by following the links below.
+
+# [](#further-reading)Further reading
+
+- [Creating your first Dashboard](https://superset.apache.org/docs/creating-charts-dashboards/creating-your-first-dashboard).
+- [Exploring data](https://superset.apache.org/docs/creating-charts-dashboards/exploring-data).
+- [Preset](https://preset.io/) - managed Superset.
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_integrations_connecting_to_paradime.md b/cmd/docs-scrapper/fireboltdocs/guides_integrations_connecting_to_paradime.md
new file mode 100644
index 0000000..2162cde
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_integrations_connecting_to_paradime.md
@@ -0,0 +1,82 @@
+# [](#integrate-paradime-with-firebolt)Integrate Paradime with Firebolt
+
+
+
+[Paradime](https://www.paradime.io/) is a unified platform for data science and analytics that streamlines workflows for data teams. It offers a collaborative workspace where data scientists and analysts can explore, analyze, and visualize data across multiple tools and environments. Paradime integrates with tools including Jupyter notebooks, SQL editors, and Tableau. You can use the Paradime connector to link the Paradime platform directly to Firebolt’s cloud data warehouse. This connection allows you to run SQL queries, visualize results, and collaborate with team members all within the Paradime workspace.
+
+This guide shows you how to connect Paradime to Firebolt using the Paradime user interface (UI). You must have a Firebolt account, a Firebolt service account, access to a Firebolt database, and an account with Paradime. These instructions build on the steps in Paradime’s [Getting Started with your Paradime Workspace](https://docs.paradime.io/app-help/guides/paradime-101/getting-started-with-your-paradime-workspace) guide, providing Firebolt-specific configuration details.
+
+Topics:
+
+- [Prerequisites](#prerequisites)
+- [Create a Paradime workspace](#create-a-paradime-workspace)
+- (Optional) [Create a schedule](#create-a-schedule-optional)
+
+## [](#prerequisites)Prerequisites
+
+Before you can connect Paradime to Firebolt, you must have the following:
+
+1. **Firebolt Account**: Ensure that you have access to an active Firebolt account. If you don’t have access, you can [sign up for an account](https://www.firebolt.io/sign-up). For more information about how to register with Firebolt, see [Get started with Firebolt](/Guides/getting-started/).
+2. **Service Account**: You must have access to an active Firebolt [service account](/Guides/managing-your-organization/service-accounts.html), which facilitates programmatic access to Firebolt.
+3. **Firebolt Database**: You must have access to a Firebolt database. If you don’t have access, you can [create a database](/Guides/getting-started/get-started-sql.html#create-a-database).
+4. **Paradime Account**: You must have access to an active Paradime account. If you don’t have access, you can [sign up](https://app.paradime.io) for one.
+
+## [](#create-a-paradime-workspace)Create a Paradime workspace
+
+Create a Paradime workspace to connect to Firebolt as follows:
+
+01. In the Paradime UI, navigate to your account profile in the upper-right corner of the page.
+02. Select **Profile Settings**.
+03. In the **Workspaces** window, select the **New Workspace** button.
+04. Enter a descriptive name for your workspace in the text box under **Name**.
+05. Select **Create Workspace**.
+06. Select **Continue**.
+07. Select the most recent dbt-core version from the drop-down list.
+08. Select **Continue**.
+09. Select a dbt repository. You can either use an existing data build tool ([dbt](https://www.getdbt.com/blog/what-exactly-is-dbt)) repository or fork Firebolt’s sample [Jaffle Shop](https://github.com/firebolt-db/jaffle_shop_firebolt) repository from GitHub. Paradime supports the following providers: Azure DevOps, Bitbucket, GitHub, and GitLab.
+10. Select **Next**.
+11. Enter the SSH URI for your repository in the text box under **Repository URI**. Copy the key that appears under the **Deploy Key**.
+12. Add the new deploy key to your dbt repository and allow write access. The following are resources for providers supported by Paradime:
+
+   - [Add the deploy key](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/managing-deploy-keys#set-up-deploy-keys) in **GitHub**.
+   - [Add a deployment key](https://www.atlassian.com/blog/bitbucket/deployment-keys) in **Bitbucket**.
+   - Use [deploy keys](https://docs.gitlab.com/ee/user/project/deploy_keys/) in **GitLab**.
+   - [Use SSH key authentication](https://learn.microsoft.com/en-us/azure/devops/repos/git/use-ssh-keys-to-authenticate?view=azure-devops) to connect with **Azure DevOps**.
+13. Select **Continue**.
+14. If your repository connected successfully, select **Continue**.
+15. Select **Firebolt** from the choices under **Warehouse connection**.
+16. Under **Connection Settings**, enter the following:
+
+ 1. **Profile Name** – The name of a [connection profile](https://docs.getdbt.com/docs/core/connect-data-platform/connection-profiles) that is defined in `dbt_project.yaml` by a workspace administrator, and contains configurations including credentials to connect to a data warehouse. For more information, see Paradime’s [Setting up your profile](https://docs.getdbt.com/docs/core/connect-data-platform/connection-profiles#setting-up-your-profile) guide.
+ 2. **Target** – Specify the [target variable](https://docs.getdbt.com/reference/dbt-jinja-functions/target) that contains information about your data warehouse connection including its name, schema, and type.
+ 3. **Host Name** – Enter `api.app.firebolt.io`.
+17. Under **Development Credentials**, enter the following:
+
+ 1. **Client Id** – Enter your Firebolt [service account ID](/Guides/managing-your-organization/service-accounts.html#get-a-service-account-id). Do not enter your Firebolt login email.
+ 2. **Client Secret** – Enter your Firebolt [service account secret](/Guides/managing-your-organization/service-accounts.html#generate-a-secret). Do not enter your Firebolt password.
+ 3. **Account Name** – Enter your Firebolt [account name](/Guides/managing-your-organization/managing-accounts.html).
+ 4. **Engine Name** – Enter the name of the engine where you want to run your queries.
+ 5. **Database Name** – Specify the Firebolt database name.
+ 6. Select **Test Connection** to connect to Firebolt and authenticate.
+ 7. Select **Next**.
+
+For more information about the previous connection settings, see Paradime’s documentation to [add a development connection](https://docs.paradime.io/app-help/documentation/settings/connections/development-environment/firebolt).
+
+## [](#create-a-schedule-optional)Create a schedule (Optional)
+
+Paradime offers a scheduling feature using a [Bolt user interface](https://docs.paradime.io/app-help/documentation/bolt) to automatically run dbt commands on a specified interval or event. You can use Bolt to run a dbt job in a production environment, in a test environment prior to merging changes to production, or in an environment that runs jobs only on changed models.
+
+To create a new schedule:
+
+1. Log in to your [Paradime account](https://app.paradime.io/?target=main-app).
+2. Select **Bolt** from the left navigation bar.
+3. Select **+ New Schedule**.
+4. Select a pre-configured template from a list of popular Bolt templates or create a new schedule using a blank template. For information about how to configure settings in a Paradime schedule, see [Schedule Fields](https://docs.paradime.io/app-help/guides/paradime-101/running-dbt-in-production-with-bolt/creating-bolt-schedules#ui-based-schedule-fields).
+5. Select **Publish**.
+6. To view the new schedule, select **Bolt** from the left navigation pane.
+
+# [](#additional-resources)Additional resources
+
+- Learn about the [Paradime integrated development environment (IDE)](https://docs.paradime.io/app-help/guides/paradime-101/getting-started-with-the-paradime-ide).
+- Learn to use the [Bolt scheduler](https://docs.paradime.io/app-help/bolt-scheduler/running-dbt-tm-in-production/creating-bolt-schedules) to run your dbt jobs.
+- Learn how to [manage your Bolt schedule](https://docs.paradime.io/app-help/documentation/bolt/managing-schedules).
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_integrations_connecting_to_preset.md b/cmd/docs-scrapper/fireboltdocs/guides_integrations_connecting_to_preset.md
new file mode 100644
index 0000000..a36abbe
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_integrations_connecting_to_preset.md
@@ -0,0 +1,66 @@
+# [](#preset)Preset
+
+[Preset](https://preset.io/) is a cloud-hosted data exploration and visualization platform built on top of the popular open-source project, [Apache Superset](https://superset.apache.org/). This fully managed service makes it easy to run Superset at scale with enterprise-ready security, reliability, and governance.
+
+Boasting exceptional speed and scalability, Firebolt enables users to adeptly manage substantial data volumes with minimal query latency. The integration with Preset establishes a strong partnership for data professionals, presenting them with a streamlined and efficient workflow. This collaboration ensures prompt loading of Preset dashboards and visualizations, even when confronted with extensive datasets, thereby facilitating the extraction of maximum value from their data.
+
+# [](#prerequisites)Prerequisites
+
+Preset is a managed service, so most of the deployment requirements are handled for you.
+
+You will only need:
+
+- To [register](https://manage.app.preset.io/starter-registration/) a Preset account.
+- To have a Firebolt account and service account [credentials](/Guides/managing-your-organization/service-accounts.html).
+- [Load data](/Guides/loading-data/loading-data.html) that you want to visualize.
+
+Make sure that your [service account’s network policy](https://docs.firebolt.io/Guides/managing-your-organization/service-accounts.html#edit-your-service-account-using-the-ui) allows connections from [Preset IPs](https://docs.preset.io/docs/connecting-your-data).
+
+# [](#quickstart)Quickstart
+
+### [](#create-a-workspace)Create a workspace
+
+A workspace is an organizational unit, accessible by team members, that is created for a specific purpose. You can read Preset’s [guidance](https://docs.preset.io/docs/about-workspaces) on workspaces to learn more.
+
+1. To create a workspace, navigate to the empty card and select **+ Workspace**.
+
+ 
+2. Define the workspace name and settings.
+
+ 
+3. Save the workspace and enter it by clicking the card.
+
+### [](#setup-firebolt-connection)Set up the Firebolt connection
+
+After the initial setup, head to `Settings -> Database connections` in the top-right corner of the Preset user interface.
+
+
+
+On the next screen, press the `+ Database` button and select Firebolt from the dropdown.
+
+
+
+The connection expects a SQLAlchemy connection string of the form:
+
+```
+firebolt://{client_id}:{client_secret}@{database}/{engine_name}?account_name={account_name}
+```
+
+To authenticate, use a service account ID and secret. A service account is identified by a `client_id` and a `client_secret`. Learn how to generate an ID and secret [here](/Guides/managing-your-organization/service-accounts.html).
+
+The account name must be provided; you can learn more about accounts in the [Manage accounts](/Guides/managing-your-organization/managing-accounts.html) section.
+
+
+
+Click the **Test Connection** button to confirm that everything works end to end. If the connection succeeds, save the configuration by clicking the **Connect** button in the bottom-right corner of the modal window. Now you’re ready to start using Preset!
+
+### [](#build-your-first-chart)Build your first chart
+
+To build a chart, follow our guide in the [Superset section](/Guides/integrations/connecting-to-apache-superset.html#build-your-first-chart); Preset works identically.
+
+# [](#further-reading)Further reading
+
+- [Creating a chart](https://docs.preset.io/docs/creating-a-chart) walkthrough.
+- [Creating a Dashboard](https://docs.preset.io/docs/creating-a-dashboard).
+- [Collaboration features of Preset](https://docs.preset.io/docs/sharing-and-collaboration).
+- [Storytelling in charts](https://docs.preset.io/docs/storytelling-with-charts-and-dashboards-mini-guide).
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_integrations_connecting_with_dbt.md b/cmd/docs-scrapper/fireboltdocs/guides_integrations_connecting_with_dbt.md
new file mode 100644
index 0000000..422e2a0
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_integrations_connecting_with_dbt.md
@@ -0,0 +1,213 @@
+# [](#overview)Overview
+
+
+
+[dbt](https://www.getdbt.com) (data build tool) is a framework designed for managing and executing data transformations within modern data warehousing architectures. It facilitates the development and deployment of SQL-based transformations in a version-controlled environment, enabling collaboration and ensuring reproducibility of data pipelines. dbt streamlines the process of transforming raw data into analytics-ready datasets, accelerating the delivery of insights.
+
+The Firebolt adapter for dbt brings together dbt’s state-of-the-art development tools and Firebolt’s next-generation analytics performance. On top of dbt’s core features, the adapter offers native support for all of Firebolt’s index types and has been specifically enhanced to support ingestion from S3 using Firebolt’s external tables mechanics.
+
+# [](#prerequisites)Prerequisites
+
+There are two ways to deploy dbt: self-hosted [dbt Core](https://docs.getdbt.com/docs/introduction#dbt-core) and managed [dbt Cloud](https://docs.getdbt.com/docs/cloud/about-cloud/dbt-cloud-features).
+
+This guide shows how to set up a local installation of [dbt Core](https://docs.getdbt.com/docs/introduction#dbt-core). It uses Python’s `pip` package manager, but you can also install dbt via [Homebrew](https://docs.getdbt.com/docs/core/homebrew-install), [Docker](https://docs.getdbt.com/docs/core/docker-install), or from [source](https://docs.getdbt.com/docs/core/source-install).
+
+You will need the following:
+
+- A GitHub account.
+- Python 3.8+.
+
+# [](#quickstart)Quickstart
+
+This guide shows you how to set up dbt with Firebolt and run your first dbt [model](https://docs.getdbt.com/docs/build/models).
+
+### [](#setup-dbt-core)Set up dbt Core
+
+1. Create a new Python [virtual environment](https://docs.python.org/3/library/venv.html), as shown in the following script example:
+
+ ```
+ python3 -m venv dbt-env
+ ```
+2. Activate your `venv`, as shown in the following script example:
+
+ ```
+ source dbt-env/bin/activate
+ ```
+3. Install Firebolt’s [adapter](https://github.com/firebolt-db/dbt-firebolt) for DBT, as shown in the following script example:
+
+ ```
+ python -m pip install dbt-firebolt
+ ```
+4. (Optional) Check that both dbt packages are installed:
+
+ ```
+ python -m pip list | grep dbt
+ ```
+
+ This command should return `dbt-core` and `dbt-firebolt` and their respective versions.
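The same check can be done from Python via `importlib.metadata`; the sketch below assumes it runs inside the activated `dbt-env` environment, and simply reports whether each package is importable metadata-wise.

```python
from importlib.metadata import version, PackageNotFoundError

# Report the installed version of each dbt package, if present.
for pkg in ("dbt-core", "dbt-firebolt"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "is NOT installed")
```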
+
+### [](#setup-connection-to-firebolt)Set up the connection to Firebolt
+
+dbt uses a `profiles.yml` file to store connection information. This file generally lives outside of your dbt project to avoid checking sensitive information into version control.
+
+On macOS and Linux, the usual location for this file is `~/.dbt/profiles.yml`.
+
+1. Open `~/.dbt/profiles.yml` with your preferred text editor.
+2. Paste the following sample configuration:
+
+   ```
+   jaffle-shop:
+     target: dev
+     outputs:
+       dev:
+         type: firebolt
+         client_id: "<client_id>"
+         client_secret: "<client_secret>"
+         database: "<database>"
+         engine_name: "<engine_name>"
+         account_name: "<account_name>"
+         schema: "<schema>"
+   ```
+3. Replace the placeholders with your account’s information.
+
+   `client_id` and `client_secret` are the ID and secret of your service account. If you don’t have one, follow the steps on the [Manage service accounts](/Guides/managing-your-organization/service-accounts.html) page to set one up.
+
+   `database` and `engine_name` are the Firebolt database and engine that you want your queries to run against.
+
+   `account_name` is the Firebolt account that you’re connected to. Learn more [here](/Guides/managing-your-organization/managing-accounts.html).
+
+   `schema` is a prefix prepended to your table names. Since Firebolt does not support custom schemas, this prefix serves as a [workaround](https://docs.getdbt.com/docs/core/connect-data-platform/firebolt-setup#supporting-concurrent-development) to prevent table name conflicts during concurrent development.
+
+### [](#setup-jaffle-shop-a-sample-dbt-project)Set up Jaffle Shop, a sample dbt project
+
+`jaffle_shop` is a fictional ecommerce store. This dbt project transforms raw data from an app database into a customers and orders model ready for analytics. [This version](https://github.com/firebolt-db/jaffle_shop_firebolt) is designed to showcase Firebolt’s integration with DBT.
+
+1. Clone `jaffle-shop-firebolt` repository and change to the newly created directory, as follows:
+
+ ```
+ git clone https://github.com/firebolt-db/jaffle_shop_firebolt.git
+ cd jaffle_shop_firebolt
+ ```
+2. Ensure your profile is setup correctly:
+
+ ```
+ dbt debug
+ ```
+
+   If you see an error here, check that your `profiles.yml` is [set up correctly](#setup-connection-to-firebolt), is in the right directory on your system, and that the [engine](/Guides/operate-engines/operate-engines.html) is running. Also check that you’re still in the `dbt-env` virtual Python environment that we [set up earlier](#setup-dbt-core) and that both packages are present.
+3. Install dependent packages:
+
+ ```
+ dbt deps
+ ```
+4. Run the external table model. If your database is not in the `us-east-1` AWS region, refer to the [Readme](https://github.com/firebolt-db/jaffle_shop_firebolt) for instructions on copying the files.
+
+ ```
+ dbt run-operation stage_external_sources
+ ```
+5. Load the sample CSV files into your database:
+
+ ```
+ dbt seed
+ ```
+6. Run the models:
+
+ ```
+ dbt run
+ ```
+
+You should now see the `customers` and `orders` tables in your database, created from dbt models. From here you can explore more of dbt’s capabilities, including incremental models, documentation generation, and more, by following the official guides in the section below.
+
+# [](#limitations)Limitations
+
+Not every dbt feature is supported in Firebolt. You can find an up-to-date list of supported features in the [adapter documentation](https://github.com/firebolt-db/dbt-firebolt?tab=readme-ov-file#feature-support).
+
+# [](#external-table-loading-strategy)External table loading strategy
+
+In the previous Jaffle Shop example we used a public Amazon S3 bucket to load data. If your bucket contains sensitive data, you will want to restrict access. Follow our [guide](/Guides/loading-data/creating-access-keys-aws.html) to set up AWS authentication using an ID and secret key.
+
+In your `dbt_project.yml`, you can specify the credentials for your external table in fields `aws_key_id` and `aws_secret_key`, as shown in the following code example:
+
+```
+sources:
+  - name: firebolt_external
+    schema: "<schema_name>"
+    loader: S3
+
+    tables:
+      - name: <table_name>
+        external:
+          url: 's3://<bucket>/<path>'
+          object_pattern: '<object_pattern>'
+          type: '<file_type>'
+          credentials:
+            aws_key_id: <aws_key_id>
+            aws_secret_key: <aws_secret_key>
+          compression: '<compression>'
+          partitions:
+            - name: <partition_name>
+              data_type: <data_type>
+              regex: '<regex>'
+          columns:
+            - name: <column_name>
+              data_type: <data_type>
+```
+
+To use external tables, you must define a table as external in your `dbt_project.yml` file. Every external table must contain the fields: `url`, `type`, and `object_pattern`. The Firebolt external table [specification](/sql_reference/commands/data-definition/create-external-table.html) requires fewer fields than those specified in the dbt documentation.
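To illustrate that rule, here is a small, hypothetical sanity check (not part of dbt or the Firebolt adapter) that flags a source definition missing any of the required fields before you run the staging operation:

```python
REQUIRED_EXTERNAL_FIELDS = {"url", "type", "object_pattern"}

def missing_external_fields(external: dict) -> set:
    """Return the required Firebolt external-table fields absent from a definition."""
    return REQUIRED_EXTERNAL_FIELDS - external.keys()

# A definition missing `object_pattern` would be flagged before `dbt run-operation`.
incomplete = {"url": "s3://my-bucket/data/", "type": "CSV"}
print(missing_external_fields(incomplete))  # {'object_pattern'}
```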
+
+# [](#copy-loading-strategy)“Copy” loading strategy
+
+You can also use [COPY FROM](/sql_reference/commands/data-management/copy-from.html) to load data from Amazon S3 into Firebolt. It has a simple syntax and doesn’t require an exact match with your source data. `COPY FROM` does not create an intermediate table and writes your data directly into Firebolt, so you can start working with it right away.
+
+The copy syntax in dbt closely follows the [syntax](/sql_reference/commands/data-management/copy-from.html#syntax) of Firebolt’s `COPY FROM`.
+
+To use `COPY FROM` instead of creating an external table, set `strategy: copy` in your external source definition. For backwards compatibility, if no strategy is specified, the external table strategy is used by default.
+
+```
+sources:
+  - name: s3
+    tables:
+      - name: <table_name>
+        external:
+          strategy: copy
+          url: 's3://<bucket>/<path>'
+          credentials:
+            aws_key_id: <aws_key_id>
+            aws_secret_key: <aws_secret_key>
+          options:
+            object_pattern: '<object_pattern>'
+            type: 'CSV'
+            auto_create: true
+```
+
+You can also include the following options:
+
+```
+options:
+  object_pattern: '<object_pattern>'
+  type: 'CSV'
+  auto_create: true
+  allow_column_mismatch: false
+  max_errors_per_file: 10
+  csv_options:
+    header: true
+    delimiter: ','
+    quote: DOUBLE_QUOTE
+    escape: '\'
+    null_string: '\\N'
+    empty_field_as_null: true
+    skip_blank_lines: true
+    date_format: 'YYYY-MM-DD'
+    timestamp_format: 'YYYY-MM-DD HH24:MI:SS'
+```
+
+In the previous code example, note that the CSV-specific settings are nested under `csv_options`. For detailed descriptions of these options and their allowed values, refer to the [parameter specification](/sql_reference/commands/data-management/copy-from.html#parameters).
+
+# [](#further-reading)Further reading
+
+- [Configuring Firebolt-specific features](https://docs.getdbt.com/reference/resource-configs/firebolt-configs).
+- [Incremental models](https://docs.getdbt.com/docs/build/incremental-models).
+- [Data tests](https://docs.getdbt.com/docs/build/data-tests).
+- [Documenting your models](https://docs.getdbt.com/docs/collaborate/documentation).
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_integrations_cube_js.md b/cmd/docs-scrapper/fireboltdocs/guides_integrations_cube_js.md
new file mode 100644
index 0000000..7ff4d34
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_integrations_cube_js.md
@@ -0,0 +1,98 @@
+# [](#overview)Overview
+
+
+
+Cube.js is an open-source analytical API platform that empowers developers to build custom and scalable analytics solutions. By acting as an intermediary between your data sources and front-end applications, Cube.js simplifies the process of querying large datasets and ensures efficient data management and visualization.
+
+Integrating Cube.js with Firebolt significantly enhances the data processing capabilities of your analytics stack. Firebolt’s ability to execute complex queries with minimal latency aligns perfectly with Cube.js’s goal of delivering fast and responsive analytics. As a result, users benefit from a seamless and highly performant analytics experience, making it an ideal solution for businesses looking to scale their data operations without compromising on speed or efficiency.
+
+# [](#quickstart-connecting-cubejs-to-firebolt)Quickstart: Connecting Cube.js to Firebolt
+
+Follow these steps to connect Cube.js to Firebolt using Docker and start building powerful analytics solutions. This demo uses [Cube Core](https://cube.dev/docs/product/getting-started/core); for other deployment options, see the Cube [documentation](https://cube.dev/docs/product/deployment).
+
+#### [](#prerequisites)Prerequisites
+
+1. **Docker**: Ensure you have Docker installed. You can download it from [here](https://www.docker.com/products/docker-desktop).
+2. **Firebolt Account**: You need an active Firebolt account. Sign up [here](https://www.firebolt.io/) if you don’t have one.
+3. **Firebolt Database and Table**: Make sure you have a Firebolt database and table with data ready for querying. Follow our [Getting started tutorial](/Guides/getting-started/) to set up some sample data.
+4. **Firebolt Service Account**: Create a [service account](/Guides/managing-your-organization/service-accounts.html) in Firebolt and note its ID and secret.
+
+#### [](#step-1-create-a-cubejs-project-with-docker)Step 1: Create a Cube.js Project with Docker
+
+1. Create a new directory for your Cube.js project:
+
+ ```
+ mkdir cubejs-firebolt
+ cd cubejs-firebolt
+ touch docker-compose.yml
+ ```
+2. Create a `docker-compose.yml` file with the following content:
+
+ ```
+ version: "2.2"
+
+ services:
+ cube:
+ image: cubejs/cube:latest
+ ports:
+ - 4000:4000
+ - 15432:15432
+ environment:
+ CUBEJS_DEV_MODE: "true"
+ volumes:
+ - .:/cube/conf
+ ```
+
+#### [](#step-2-start-cubejs)Step 2: Start Cube.js
+
+1. Run the Cube.js development server using Docker Compose:
+
+ ```
+ docker compose up -d
+ ```
+2. Open your browser and navigate to `http://localhost:4000`. You should see the Cube.js [playground](https://cube.dev/docs/product/workspace/playground).
+
+#### [](#step-3-configure-firebolt-connection-via-ui)Step 3: Configure Firebolt Connection via UI
+
+The Playground includes a database connection wizard that loads when Cube is first started and no `.env` file is found. After database credentials have been set up, a `.env` file is automatically created and populated with those credentials.
+
+1. Select **Firebolt** as the database type.
+2. Enter your Firebolt credentials:
+
+ - **Client ID**: Your service account ID
+ - **Client Secret**: Your service account secret
+ - **Database**: Your Firebolt database name
+ - **Account**: Your [account](/Guides/managing-your-organization/managing-accounts.html) name
+ - **Engine Name**: Your Firebolt engine name
+3. Click **Apply** to set up the connection.
+
+#### [](#step-4-generate-schema-using-ui)Step 4: Generate Schema Using UI
+
+You should see the tables available from the configured database.
+
+1. Select the `levels` table.
+2. After selecting the table, click **Generate Data Model** and pick either YAML (recommended) or JavaScript format.
+3. Click **Build**.
+
+ 
+
+You can start exploring your data!
+
+#### [](#step-5-query-data-in-playground)Step 5: Query data in Playground
+
+Select measures, dimensions and filters to explore your data!
+
+
+
+Congratulations! You have successfully connected Cube.js to Firebolt and can now start building high-performance analytics solutions. For more detailed configuration and advanced features, refer to the [Cube.js documentation](https://cube.dev/docs) and [Firebolt documentation](https://docs.firebolt.io/).
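Beyond the Playground, the same kind of query can be sent to Cube’s REST API (`GET /cubejs-api/v1/load` with a JSON-encoded `query` parameter). The sketch below only constructs the request URL; the member names (`levels.max_play_time`, `levels.level_type`) are hypothetical and depend on the data model you generated in step 4.

```python
import json
from urllib.parse import urlencode

# A Cube query is a JSON object listing measures, dimensions, and ordering.
query = {
    "measures": ["levels.max_play_time"],
    "dimensions": ["levels.level_type"],
    "order": {"levels.level_type": "asc"},
}

# The dev server from step 2 listens on localhost:4000.
url = "http://localhost:4000/cubejs-api/v1/load?" + urlencode({"query": json.dumps(query)})
print(url)
```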
+
+### [](#further-reading)Further Reading
+
+After setting up Cube.js with Firebolt, you can explore and leverage several powerful features to enhance your analytics capabilities. Here are some resources to help you get started:
+
+1. **Cube.js Data Blending**: Understand how to combine data from different sources for more comprehensive analysis. [Cube.js Data Blending Documentation](https://cube.dev/docs/product/data-modeling/concepts/data-blending)
+2. **Cube.js Security**: Implement row-level security to ensure your data is accessed appropriately. [Cube.js Security Documentation](https://cube.dev/docs/security)
+3. **Cube.js API**: Explore the Cube.js REST API to programmatically access your data and build custom integrations. [Cube.js API Reference](https://cube.dev/docs/rest-api)
+4. **Cube.js Dashboard App**: Build and deploy powerful dashboards using Cube.js and your favorite front-end frameworks. [Cube.js Dashboard App Documentation](https://cube.dev/docs/dashboard-app)
+
+These resources will help you unlock the full potential of Cube.js and create robust, high-performance analytics solutions.
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_integrations_dbeaver.md b/cmd/docs-scrapper/fireboltdocs/guides_integrations_dbeaver.md
new file mode 100644
index 0000000..f82189f
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_integrations_dbeaver.md
@@ -0,0 +1,59 @@
+# [](#integrate-with-dbeaver)Integrate with DBeaver
+
+
+
+DBeaver is a free, open-source database administration tool that supports multiple database types. It provides a graphical interface for managing databases, running queries, and analyzing data. DBeaver is widely used for database development, troubleshooting, and administration, making it a versatile choice for both developers and database administrators. You can connect DBeaver to Firebolt using the [Firebolt JDBC driver](/Guides/developing-with-firebolt/connecting-with-jdbc.html).
+
+- [Prerequisites](#prerequisites)
+- [Add the Firebolt JDBC Driver in DBeaver](#add-the-firebolt-jdbc-driver-in-dbeaver)
+- [Connect to Firebolt in DBeaver](#connect-to-firebolt-in-dbeaver)
+- [Query Firebolt in DBeaver](#query-firebolt-in-dbeaver)
+- [Additional Resources](#additional-resources)
+
+## [](#prerequisites)Prerequisites
+
+You must have the following prerequisites before you can connect your Firebolt account to DBeaver:
+
+- **Firebolt account** – You need an active Firebolt account. If you do not have one, you can [sign up](https://go.firebolt.io/signup) for one.
+- **Firebolt database and engine** – You must have access to a Firebolt database. If you do not have access, you can [create a database](/Guides/getting-started/get-started-sql.html#create-a-database) and then [create an engine](/Guides/getting-started/get-started-sql.html#create-an-engine).
+- **Firebolt service account** – You must have an active Firebolt [service account](/Guides/managing-your-organization/service-accounts.html) for programmatic access, along with its ID and secret.
+- **Sufficient permissions** – Your service account must be [associated](/Guides/managing-your-organization/service-accounts.html#create-a-user) with a user. The user should have [USAGE](/Overview/Security/Role-Based%20Access%20Control/database-permissions/) permission to query your database, and [OPERATE](/Overview/Security/Role-Based%20Access%20Control/engine-permissions.html) permission to start and stop an engine if it is not already started. It should also have at least USAGE and SELECT [permissions](/Overview/Security/Role-Based%20Access%20Control/database-permissions/schema-permissions.html) on the schema you are planning to query.
+- **DBeaver installed** – You must have downloaded and installed [DBeaver](https://dbeaver.io/download/).
+
+## [](#add-the-firebolt-jdbc-driver-in-dbeaver)Add the Firebolt JDBC Driver in DBeaver
+
+To connect to Firebolt, you must add the Firebolt JDBC driver to DBeaver as follows:
+
+1. Download the [Firebolt JDBC driver](/Guides/developing-with-firebolt/connecting-with-jdbc.html#download-the-jar-file).
+2. In the DBeaver user interface (UI), under **Database**, select **Driver Manager**.
+3. In **Driver Manager**, select **New** and enter the following parameters:
+
+ - **Driver Name**: `Firebolt`
+ - **Class Name**: `com.firebolt.FireboltDriver`
+4. Select the **Libraries** tab.
+5. Select **Add File**, and then select the JDBC driver you downloaded in the first step.
+6. Select **Close**.
+
+## [](#connect-to-firebolt-in-dbeaver)Connect to Firebolt in DBeaver
+
+To connect to Firebolt, you must configure a new database connection in DBeaver as follows:
+
+1. In DBeaver, select **Database**, then **New Database Connection**.
+2. Enter `Firebolt` in the search box, then select it from the list.
+3. Select **Next>**.
+4. Enter the connection parameters in the **Main** tab as follows:
+
+   | Parameter | Description |
+   | --- | --- |
+   | **JDBC URL** | Use `jdbc:firebolt:<database_name>?engine=<engine_name>&account=<account_name>`, replacing `<database_name>` with your Firebolt [database name](/Overview/indexes/using-indexes.html#databases), `<engine_name>` with your [engine name](/Guides/getting-started/get-started-sql.html#create-an-engine), and `<account_name>` with your [account name](/Guides/managing-your-organization/managing-accounts.html). |
+   | **Username** | Your Firebolt [service account](/Guides/managing-your-organization/service-accounts.html#get-a-service-account-id) ID. |
+   | **Password** | Your Firebolt [service account](/Guides/managing-your-organization/service-accounts.html#generate-a-secret) secret. |
+5. Select **Test Connection** to verify the connection. Ensure your Firebolt database is running before testing.
+6. If the connection is successful, select **Finish**.
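+
+The pieces of the JDBC URL above combine in a fixed pattern. As an illustrative sketch (not part of DBeaver; the values are placeholders), the URL can be assembled like this:
+
+```python
+# Sketch: assemble a Firebolt JDBC URL from its three components.
+# The argument values below are hypothetical placeholders.
+def firebolt_jdbc_url(database: str, engine: str, account: str) -> str:
+    return f"jdbc:firebolt:{database}?engine={engine}&account={account}"
+
+print(firebolt_jdbc_url("my_database", "my_engine", "my_account"))
+# jdbc:firebolt:my_database?engine=my_engine&account=my_account
+```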
+
+## [](#query-firebolt-in-dbeaver)Query Firebolt in DBeaver
+
+1. In the database navigator, right-click or open the context menu of your Firebolt connection, select **SQL Editor**, then select **New SQL Script**.
+2. Enter SQL queries into the SQL editor to interact with your Firebolt database.
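+
+   For example, a simple statement such as the following verifies that the connection works:
+
+   ```
+   SELECT 1;
+   ```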
+
+## [](#additional-resources)Additional Resources
+
+- Learn more about the [Firebolt JDBC driver](/Guides/developing-with-firebolt/connecting-with-jdbc.html).
+- Explore [DBeaver’s documentation](https://dbeaver.io/documentation/) for details on its UI, integrations, tools, and features.
+- Discover other tools that [Firebolt integrates](/Guides/integrations/integrations.html) with.
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_integrations_estuary.md b/cmd/docs-scrapper/fireboltdocs/guides_integrations_estuary.md
new file mode 100644
index 0000000..18b1551
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_integrations_estuary.md
@@ -0,0 +1,127 @@
+# [](#integrate-estuary-flow-with-firebolt)Integrate Estuary Flow with Firebolt
+
+
+
+Estuary Flow is a real-time data integration platform designed to streamline the movement and transformation of data between diverse sources and destinations. It provides an event-driven architecture and a user-friendly interface for building pipelines with minimal effort. You can use Flow to set up pipelines to load data from various sources, such as cloud storage and databases, into Firebolt’s cloud data warehouse for low-latency analytics.
+
+This guide shows you how to set up a Flow pipeline that automatically moves data from your Amazon S3 bucket to your Firebolt database using the Estuary Flow user interface (UI). You must have access to an Estuary Flow account, an Amazon S3 bucket, and a Firebolt service account.
+
+Topics:
+
+- [Prerequisites](#prerequisites)
+- [Configure your Estuary Flow source](#configure-your-estuary-flow-source)
+- [Configure your Estuary Flow destination](#configure-your-estuary-flow-destination)
+- [Monitor your materialization](#monitor-your-materialization)
+- [Validate your materialization](#validate-your-materialization)
+- [Additional resources](#additional-resources)
+
+## [](#prerequisites)Prerequisites
+
+1. **Estuary Flow account** – You must have access to an active Estuary Flow account. If you do not have access, you can [sign up](https://www.estuary.dev) with Estuary.
+2. **Amazon S3 bucket** – You must have access to the following:
+
+ - An [AWS Access Key ID and AWS Secret Access Key](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html) for an Amazon S3 bucket.
+ - The name and path to an [Amazon S3 bucket](https://aws.amazon.com/s3/) that contains your data.
+3. **Firebolt service account** –
+
+ - Access to an organization in Firebolt. If you don’t have access, you can [create an organization](/Guides/managing-your-organization/creating-an-organization.html).
+ - Access to a Firebolt database and engine. If you don’t have access, you can [create a database](/Guides/getting-started/get-started-sql.html#create-a-database) and [create an engine](/Guides/getting-started/get-started-sql.html#create-an-engine).
+ - Access to a Firebolt service account, which is used for programmatic access, its [service account ID](/Guides/managing-your-organization/service-accounts.html#get-a-service-account-id) and [secret](/Guides/managing-your-organization/service-accounts.html#generate-a-secret-using-the-ui). If you don’t have access, you can [create a service account](/Guides/managing-your-organization/service-accounts.html#create-a-service-account).
+
+## [](#configure-your-estuary-flow-source)Configure your Estuary Flow source
+
+To set up an Estuary Flow pipeline that automatically moves data from your Amazon S3 bucket, you must create a capture that defines how and where data should be collected. Create a capture for the Estuary Flow source as follows:
+
+1. Sign in to your [Estuary Flow Dashboard](https://dashboard.estuary.dev).
+2. Select **Sources** from the left navigation pane.
+3. In the **Sources** window, select **+ NEW CAPTURE**.
+4. From the list of available connectors, navigate to **Amazon S3**, and select **Capture**.
+5. Under **Capture Details**, enter a descriptive name for your capture in the text box under **Name**.
+6. Under **Endpoint Config**, enter the following:
+
+ 1. **AWS Access Key ID** – The AWS account ID associated with the Amazon S3 bucket containing your data.
+ 2. **AWS Secret Access Key** – The AWS secret access key associated with the Amazon S3 bucket containing your data.
+ 3. **AWS Region** – The [AWS region](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) that contains your Amazon S3 bucket. For example: `us-east-1`.
+ 4. **Bucket** – The name of your Amazon S3 bucket. For example, `firebolt-publishing-public`.
+   5. **Prefix** (Optional) – A folder or key prefix that restricts the capture to a specific path within the bucket. For example: `/help_center_assets/firebolt_sample_dataset/levels.csv`.
+ 6. **Match Keys** (Optional) – Use a filter to include only specific object keys under the prefix, narrowing the capture’s scope.
+7. Select the **NEXT** button in the upper-right corner of the page.
+8. Test and save your connection as follows:
+
+ 1. Select **TEST** in the upper-right corner of the page. Estuary will run a test for your capture and display **Success** if it completes successfully.
+ 2. Select **CLOSE** in the bottom-right corner of the page.
+ 3. Select the **SAVE AND PUBLISH** button in the upper-right corner of the page. Estuary will test, save, and publish your capture and display **Success** if it completes successfully.
+ 4. Select **CLOSE** in the bottom-right corner of the page.
+
+## [](#configure-your-estuary-flow-destination)Configure your Estuary Flow destination
+
+To set up an Estuary Flow pipeline that automatically moves data from your Amazon S3 bucket, you must create a materialization that defines how the data should appear in the destination system, including any schema or transformation logic. Create a materialization for the Estuary Flow destination as follows:
+
+1. Select **Destinations** from the left navigation pane.
+2. Select the **+ NEW MATERIALIZATION** button in the upper-left corner of the page.
+3. Navigate to the **Firebolt** connector and select **Materialization**.
+4. Under **Materialization Details**, enter a descriptive name for your materialization in the text box under **Name**.
+5. Under **Endpoint Config**, enter the following:
+
+ 01. **Client ID** – The service account ID for your Firebolt service account.
+ 02. **Client Secret** – The secret for your Firebolt service account.
+   03. **Account Name** – The name of your Firebolt account within your organization.
+ 04. **Database** – The name of the Firebolt database where you want to put your data. For example, `my-database`.
+ 05. **Engine Name** – The name of the Firebolt engine to run the queries. For example: `my-engine-name`.
+ 06. **S3 Bucket** – The name of the Amazon S3 bucket to store temporary intermediate files related to the operation of the external table. For example, `my-bucket`.
+   07. **S3 Prefix** – (Optional) A folder or key prefix that restricts the temporary files to a specific path within the bucket. For example: `temp_files/`.
+ 08. **AWS Key ID** – The access key ID for the AWS account linked to the Amazon S3 bucket for temporary file storage.
+ 09. **AWS Secret Key** – The AWS secret key associated with the Amazon S3 bucket to store temporary files.
+ 10. **AWS Region** – The [AWS region](https://aws.amazon.com/about-aws/global-infrastructure/regions_az/) of your Amazon S3 bucket. For example: `us-east-1`.
+6. Select the **NEXT** button in the upper-right corner of the page.
+7. Under **Source Collections**, do the following:
+
+ 1. Select **Source From Capture**.
+ 2. In the **Captures** window, select the checkbox next to the Amazon S3 source you specified when you configured your Estuary Flow source.
+ 3. Select the **CONTINUE** button in the bottom-right corner of the page.
+ 4. Verify that the **Table** name and type in the **CONFIG** tab under **Resource Configuration** are correct, and update if necessary.
+ 5. (Optional) Choose **Refresh** next to **Field Selection** to preview the fields, their types, and actions that will be written to Firebolt.
+8. Test and save your materialization as follows:
+
+ 1. Select the **TEST** button in the upper-right corner of the page. Estuary will run a test for your materialization and display **Success** if it completes successfully.
+ 2. Select **CLOSE** in the bottom-right corner of the page.
+ 3. Select the **SAVE AND PUBLISH** button in the upper-right corner of the page. Estuary will test, save, and publish your materialization and display **Success** if it completes successfully.
+ 4. Select **CLOSE** in the bottom-right corner of the page.
+
+## [](#monitor-your-materialization)Monitor your materialization
+
+You can monitor your new data pipeline in Estuary Flow’s dashboard as follows:
+
+1. Select **Destinations** from the left navigation pane.
+2. Select your newly created materialization to view a dashboard with the following tabs:
+
+   1. **OVERVIEW** – Provides a high-level summary of the materialization, including throughput over time.
+   2. **SPEC** – Displays the configuration and specification of the materialization, including schema mapping from source to destination, the configuration of the destination, and any filters or constraints on the materialized data.
+   3. **LOGS** – Provides records of materialization activity, including success and failure events, messages, and errors.
+
+Ensure that your data is being ingested and transferred as expected.
+
+## [](#validate-your-materialization)Validate your materialization
+
+You can validate that your data has arrived at Firebolt as follows:
+
+1. Log in to the [Firebolt Workspace](https://firebolt.go.firebolt.io/signup).
+2. Select the **Develop** icon from the left navigation pane.
+3. In the **Script Editor**, run a query on the table that you specified as an Estuary Flow destination to confirm the transfer of data as follows:
+
+ 1. Select the name of the database that you specified as your Estuary Flow destination from the drop-down list next to **Databases**.
+ 2. Enter a script in the script editor to query the table that you specified as an Estuary Flow destination. The following code example returns the contents of all rows and all columns from the `games` table:
+
+ ```
+ SELECT * FROM games
+ ```
+
+You’ve successfully set up an Estuary Flow pipeline to move data from an Amazon S3 source to a Firebolt destination. Next, explore the following resources to continue expanding your knowledge base.
+
+## [](#additional-resources)Additional resources
+
+- Explore the [core concepts](https://docs.estuary.dev/concepts/) of Estuary Flow.
+- Access [tutorials](https://docs.estuary.dev/getting-started/tutorials/) for Estuary Flow including a tutorial on [data transformation](https://docs.estuary.dev/guides/derivation_tutorial_sql/).
+- Learn more about Estuary Flow’s [command line interface](https://docs.estuary.dev/concepts/flowctl/).
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_integrations_integrations.md b/cmd/docs-scrapper/fireboltdocs/guides_integrations_integrations.md
new file mode 100644
index 0000000..c2e5e03
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_integrations_integrations.md
@@ -0,0 +1,16 @@
+# [](#integrate-with-firebolt)Integrate with Firebolt
+
+* * *
+
+- [Airflow](/Guides/integrations/airflow.html)
+- [dbt](/Guides/integrations/connecting-with-dbt.html)
+- [Apache Superset](/Guides/integrations/connecting-to-apache-superset.html)
+- [Preset](/Guides/integrations/connecting-to-preset.html)
+- [Cube.js](/Guides/integrations/cube-js.html)
+- [Airbyte](/Guides/integrations/airbyte.html)
+- [OpenTelemetry Exporter](/Guides/integrations/otel-exporter.html)
+- [Tableau](/Guides/integrations/tableau.html)
+- [Paradime](/Guides/integrations/connecting-to-paradime.html)
+- [Metabase](/Guides/integrations/metabase.html)
+- [Estuary](/Guides/integrations/estuary.html)
+- [DBeaver](/Guides/integrations/dbeaver.html)
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_integrations_metabase.md b/cmd/docs-scrapper/fireboltdocs/guides_integrations_metabase.md
new file mode 100644
index 0000000..0ca8720
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_integrations_metabase.md
@@ -0,0 +1,56 @@
+
+
+# [](#connecting-to-metabase)Connecting to Metabase
+
+[Metabase](https://www.metabase.com/) is an open-source business intelligence platform. You can use Metabase’s user interface to explore, analyze, and visualize data, query databases, generate reports, and create dashboards.
+
+This guide shows you how to [set up a Firebolt connector](#set-up-a-connector-to-metabase) for a self-hosted Metabase instance and how to [create a connection](#create-a-connection-to-metabase). If you are using the managed, cloud-hosted version, [**Metabase Cloud**](https://www.metabase.com/docs/latest/cloud/start), you can skip directly to [Create a connection](#create-a-connection-to-metabase).
+
+You can also watch a short video on how to connect Metabase to Firebolt:
+
+**Topics:**
+
+1. [Set up a connector to Metabase](#set-up-a-connector-to-metabase)
+2. [Create a connection to Metabase](#create-a-connection-to-metabase)
+3. [Additional Resources](#additional-resources)
+
+### [](#set-up-a-connector-to-metabase)Set up a connector to Metabase
+
+Metabase can be deployed as a **self-hosted instance**, a version that you install and manage on your own server infrastructure. If you are using the managed, cloud-hosted version, [**Metabase Cloud**](https://www.metabase.com/docs/latest/cloud/start), you can skip directly to [Create a connection](#create-a-connection-to-metabase).
+
+For self-hosted deployments on-premises, the Firebolt connector must be installed manually using the following steps:
+
+1. **Download the Firebolt Metabase driver**
+
+ - Go to the [GitHub Releases page for Firebolt](https://github.com/firebolt-db/metabase-firebolt-driver/releases).
+ - Locate the most recent version of the Firebolt driver, and download it.
+2. **Move the driver file to the plugins directory**
+
+ - Save the downloaded driver file in the `/plugins` directory on your Metabase host system.
+   - By default, the `/plugins` directory is located in the same folder where the `metabase.jar` file runs.
+
+After completing these steps, the Firebolt connector will be available for configuration within Metabase.
+
+### [](#create-a-connection-to-metabase)Create a connection to Metabase
+
+After setting up the Firebolt connector, use the following steps to create a connection between Metabase and your Firebolt database:
+
+1. Open your Metabase instance’s home page in a web browser.
+2. Select **Settings** from the top-right menu of the Metabase interface.
+3. Select **Admin** from the dropdown menu.
+4. On the **Admin** page, select **Databases** in the top navigation bar.
+5. Select the **Add Database** button.
+6. From the **Database Type** dropdown list, select **Firebolt**.
+
+ Fill out the required connection details using the descriptions provided in the following table:
+
+   | Field | Description |
+   | --- | --- |
+   | **Display Name** | A name to identify your database in Metabase. Use the same name as your Firebolt database for simplicity. |
+   | **Client ID** | The [service account ID](/Guides/managing-your-organization/service-accounts.html#get-a-service-account-id) associated with your Firebolt database. |
+   | **Client Secret** | The [secret for the service account](/Guides/managing-your-organization/service-accounts.html#generate-a-secret) associated with your Firebolt database. |
+   | **Database name** | The name of the Firebolt database you want to connect to. |
+   | **Account name** | The name of your Firebolt account, which is required to log in and authenticate your database connection. |
+   | **Engine name** | The name of the Firebolt engine that will be used to run queries against the database. |
+   | **Additional JDBC options** | Any extra parameters needed for the connection, such as `connection_timeout_millis=10000`. For more options, see the [JDBC connection parameters guide](/Guides/developing-with-firebolt/connecting-with-jdbc.html#available-connection-parameters). |
+7. Select **Save** to store your database configuration.
+
+Verify the connection by confirming that Metabase displays a success message indicating that your Firebolt database has been added successfully. If the connection fails, double-check your settings and ensure all required fields are correct.
+
+### [](#additional-resources)Additional Resources
+
+For more information about Metabase configuration and troubleshooting, refer to the following resources:
+
+- [**Adding and Managing Databases**](https://www.metabase.com/docs/latest/databases/connecting) — Official Metabase documentation on connecting to data sources and managing database connections.
+- [**Troubleshooting Database Connections**](https://www.metabase.com/docs/latest/troubleshooting-guide/db-connection) — Guidance on resolving issues when connecting [Metabase](https://www.metabase.com/docs/latest/databases/connecting) to your databases.
+- [**Troubleshooting Database Performance**](https://www.metabase.com/docs/latest/troubleshooting-guide/db-performance) — Tips for identifying and addressing performance issues with connected databases.
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_integrations_otel_exporter.md b/cmd/docs-scrapper/fireboltdocs/guides_integrations_otel_exporter.md
new file mode 100644
index 0000000..5a8d7cb
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_integrations_otel_exporter.md
@@ -0,0 +1,9 @@
+# [](#overview)Overview
+
+[OpenTelemetry](https://opentelemetry.io/) is a [CNCF](https://www.cncf.io/) project that provides a collection of APIs, SDKs, and tools to instrument, generate, collect, and export telemetry data (metrics, logs, and traces). In recent years, the project has become the accepted standard for telemetry, with native support from all major vendors. Firebolt therefore provides an OpenTelemetry exporter that delivers this compatibility with minimal effort.
+
+The Firebolt OpenTelemetry Exporter is provided as a Docker image that exports engine metrics to any [OTLP](https://opentelemetry.io/docs/specs/otel/protocol/)-compatible collector. This makes it possible to integrate Firebolt runtime metrics into your monitoring and alerting systems and to use a homogeneous observability infrastructure for your entire data stack.
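+
+Any OTLP-compatible collector can receive these metrics. As an illustration only (not Firebolt-specific, and not the exporter’s own configuration), a minimal OpenTelemetry Collector configuration that listens on the default OTLP ports and prints incoming metrics might look like this:
+
+```yaml
+receivers:
+  otlp:
+    protocols:
+      grpc:
+        endpoint: 0.0.0.0:4317
+      http:
+        endpoint: 0.0.0.0:4318
+
+exporters:
+  # The debug exporter prints received telemetry to the collector's log.
+  debug:
+    verbosity: detailed
+
+service:
+  pipelines:
+    metrics:
+      receivers: [otlp]
+      exporters: [debug]
+```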
+
+# [](#enabling-firebolt-opentelemetry-exporter)Enabling Firebolt OpenTelemetry Exporter
+
+For installation and usage instructions, see the [otel-exporter](https://github.com/firebolt-db/otel-exporter) repository on GitHub.
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_integrations_tableau.md b/cmd/docs-scrapper/fireboltdocs/guides_integrations_tableau.md
new file mode 100644
index 0000000..6beef0d
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_integrations_tableau.md
@@ -0,0 +1,79 @@
+# [](#integrate-with-tableau)Integrate with Tableau
+
+
+
+[Tableau](https://www.tableau.com/) is a visual analytics platform that empowers users to explore, analyze, and present data through interactive visualizations. It supports diverse use cases such as data exploration, reporting, and collaboration, and helps users gain insights and make informed decisions. This guide shows you how to set up your Firebolt account to integrate with [Tableau Desktop](https://www.tableau.com/products/desktop) and [Tableau Exchange](https://exchange.tableau.com).
+
+The latest version of Firebolt is not compatible with Tableau Online, so you cannot connect Tableau Online to your Firebolt account. The connector from Tableau Exchange works only with an older version of Firebolt. To use the latest version of Firebolt, use Tableau Desktop or Tableau Server and follow the instructions below.
+
+## [](#prerequisites)Prerequisites
+
+You must have the following prerequisites before you can connect your Firebolt account to Tableau:
+
+- **Tableau account** – You must have access to an active Tableau account. If you do not have access, you can [sign up](https://www.tableau.com/products/trial) for one.
+- **Firebolt account** – You need an active Firebolt account. If you do not have one, you can [sign up](https://go.firebolt.io/signup) for one.
+- **Firebolt database and table** – You must have access to a Firebolt database that contains a table with data ready for visualization. If you don’t have access, you can [create a database](/Guides/getting-started/get-started-sql.html#create-a-database) and then [load data](/Guides/loading-data/loading-data.html) into it.
+- **Firebolt service account** – You must have access to an active Firebolt [service account](/Guides/managing-your-organization/service-accounts.html), which facilitates programmatic access to Firebolt, along with its ID and secret.
+- **Firebolt user** – You must have a user that is [associated](/Guides/managing-your-organization/service-accounts.html#create-a-user) with your service account. The user should have [USAGE](/Overview/Security/Role-Based%20Access%20Control/database-permissions/) permission to query your database, and [OPERATE](/Overview/Security/Role-Based%20Access%20Control/engine-permissions.html) permission to start and stop an engine if it is not already started.
+
+## [](#connect-to-tableau)Connect to Tableau
+
+To connect to Tableau, you must download the Firebolt connector and a [JDBC driver](/Guides/developing-with-firebolt/connecting-with-jdbc.html#jdbc-driver), connect to Firebolt, and select a database and schema to query. You can either install [Tableau Desktop](https://www.tableau.com/products/desktop) for individual use or [Tableau Server](https://www.tableau.com/products/server) for centralized access to dashboards on a shared server.
+
+1. **Download and install Tableau**
+
+   1. To use Tableau Desktop, navigate to the Tableau Desktop [download page](https://www.tableau.com/en-gb/products/desktop/download), and follow the prompts to install the program.
+   2. To use Tableau Server, follow Tableau’s instructions for [installation and configuration](https://help.tableau.com/current/server/en-us/install_config_top.htm).
+2. **Download the latest Firebolt connector**
+
+   Download the latest version of Firebolt’s Tableau connector from Firebolt’s GitHub [repository](https://github.com/firebolt-db/tableau-connector/releases). The earliest version of the connector that is compatible with the latest version of Firebolt is [v1.1.0](https://github.com/firebolt-db/tableau-connector/releases/tag/v1.1.0). The file name has the format `firebolt_connector-<version>.taco`, and the file should be saved in a directory that depends on your operating system, as follows:
+
+ For Tableau Desktop, save the file connector to:
+
+ - Windows - `C:\Users\[Windows User]\Documents\My Tableau Repository\Connectors`
+ - MacOS - `/Users/[user]/Documents/My Tableau Repository/Connectors`
+
+ For any other installations including Tableau Server and older versions of Tableau, follow the steps in the Tableau [guide](https://help.tableau.com/current/pro/desktop/en-us/examples_connector_sdk.htm#use-a-connector-built-with-tableau-connector-sdk).
+3. **Download the latest JDBC driver**
+
+   Download a JDBC driver, which allows Tableau to interact with Firebolt databases using Java, from Firebolt’s GitHub [repository](https://github.com/firebolt-db/jdbc/releases). The file name has the format `firebolt-jdbc-<version>.jar`, and the file should be saved in a directory that depends on your operating system, as follows:
+
+ - Windows: `C:\Program Files\Tableau\Drivers`
+   - Mac: `/Users/[user]/Library/Tableau/Drivers`
+ - Linux: `/opt/tableau/tableau_driver/jdbc`
+4. **Start Tableau and verify Firebolt connector availability**
+
+ 1. Start your Tableau Desktop or Server. If you already started Tableau prior to downloading the drivers, restart Tableau.
+   2. In the left navigation panel, under **To a Server**, select the `>` to the right of **More…**.
+   3. Search for and select the **Firebolt by Firebolt Analytics Inc** connector.
+ 6. Enter the following parameters in the **General** tab:
+
+      | Field | Required | Description |
+      | --- | --- | --- |
+      | **Host** | No | Most users should not enter a value in this field. |
+      | **Account** | Yes | The name of your Firebolt account within your organization. |
+      | **Engine Name** | Yes | The name of the [engine](/Overview/engine-fundamentals.html) to run queries. |
+      | **Database** | Yes | The name of the Firebolt [database](/Overview/indexes/using-indexes.html#databases) to connect to. |
+      | **Client ID** | Yes | The [ID of your service account](/Guides/managing-your-organization/service-accounts.html#get-a-service-account-id). |
+      | **Client Secret** | Yes | The [secret](/Guides/managing-your-organization/service-accounts.html#generate-a-secret) for your service account authentication. |
+ 7. Select **Sign in**.
+5. **Choose the database and the schema to query**
+
+ After successful authentication, **Database** and **Schema** drop-down lists appear in the left navigation pane under **Connections**. The database name from the previous step appears in the database drop-down list. To change the database, you must repeat the previous step and set up a new connector.
+
+ Choose the schema and tables as follows:
+
+ 1. Select the drop-down list under **Schema** to select a [schema](/Overview/indexes/using-indexes.html#schema). Most users should choose `public`. For more information about schema permissions and privileges, see [Schema permissions](/Overview/Security/Role-Based%20Access%20Control/database-permissions/schema-permissions.html).
+ 2. Drag and drop tables from the list of available tables in your schema to use them in Tableau.
+6. **Visualize your data**
+
+   Once your data source is selected, you can begin visualizing the data by creating graphs and charts as follows:
+
+   1. Select the `Sheet 1` tab from the bottom-left corner of your Tableau window, next to **Data Source**.
+ 2. In the left navigation panel under **Sheets**, drag and drop any available columns or pre-defined aggregation from your table into the Tableau workspace to start building charts. See Tableau’s [Build a view from scratch](https://help.tableau.com/current/pro/desktop/en-us/getstarted_buildmanual_ex1basic.htm) documentation for more information.
+
+## [](#limitations)Limitations
+
+- Firebolt does not support [Tableau Cloud](https://www.tableau.com/products/cloud-bi).
+- Once you have set up a connection to Firebolt, you cannot change the database that you specified during setup. In order to change the database, you must repeat step 4 to **Start Tableau and verify Firebolt connector availability** in [Connect to Tableau](#connect-to-tableau) to set up a new connection.
+
+## [](#additional-resources)Additional resources
+
+- Watch Tableau’s [free training videos](https://www.tableau.com/en-gb/learn/training) on getting started, preparing data, and geographical analysis.
+- Read Tableau’s data visualization [articles](https://www.tableau.com/en-gb/learn/articles) about creating effective, engaging, and interactive examples.
+- Follow Tableau’s [blog](https://www.tableau.com/en-gb/blog) for new features and tips.
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_loading_data_configuring_aws_role_to_access_amazon_s3.md b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_configuring_aws_role_to_access_amazon_s3.md
new file mode 100644
index 0000000..8b9e010
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_configuring_aws_role_to_access_amazon_s3.md
@@ -0,0 +1,163 @@
+# [](#use-aws-iam-roles-to-access-amazon-s3)Use AWS IAM roles to access Amazon S3
+
+Firebolt uses AWS Identity and Access Management (IAM) permissions to load data from an Amazon S3 bucket into Firebolt. This requires you to set up permissions using the AWS Management Console. Specify credentials when you create an external table using one of the following options:
+
+- You can provide **Access Keys** associated with an IAM principal that has the required permissions.
+- You can specify an **IAM role** that Firebolt assumes for the appropriate permissions.
+
+This guide explains how to create an AWS IAM permissions policy and an IAM role to grant Firebolt the necessary permissions to access and read data from an Amazon S3 bucket.
+
+1. [Create an IAM permissions policy in AWS](#create-an-iam-permissions-policy-in-aws)
+2. [Create the IAM role in AWS](#create-the-iam-role-in-aws)
+3. [How to specify the IAM role](#how-to-specify-the-iam-role)
+
+ 1. [Specify the IAM role for data loading](#specify-the-iam-role-for-data-loading)
+ 2. [Specify the IAM role in `COPY FROM`](#specify-the-iam-role-in-copy-from)
+ 3. [Using IAM role in the Firebolt load data wizard](#using-iam-role-in-the-firebolt-load-data-wizard)
+ 4. [Using IAM role in external table definitions](#using-iam-role-in-external-table-definitions)
+
+## [](#create-an-iam-permissions-policy-in-aws)Create an IAM permissions policy in AWS
+
+01. Log in to the [AWS Identity and Access Management (IAM) Console](https://console.aws.amazon.com/iam/home#/home).
+02. From the left navigation panel, under **Access management**, choose **Account settings**.
+03. Under **Security Token Service (STS),** in the **Endpoints** list, find the **Region name** where your account is located. If the status is **Inactive**, choose **Activate**.
+04. Choose **Policies** from the left navigation panel.
+05. Select **Create Policy**.
+06. Select the **JSON** tab.
+07. Add a policy document that grants Firebolt access to the Amazon S3 bucket and folder.
+
+ The following policy in JSON format provides Firebolt with the required permissions to read data from a single bucket and folder path. Copy and paste the text into the policy editor. Replace `<bucket>` and `<prefix>` with the actual bucket name and path prefix.
+
+ ```
+ {
+   "Version": "2012-10-17",
+   "Statement": [
+     {
+       "Effect": "Allow",
+       "Action": [
+         "s3:GetObject",
+         "s3:GetObjectVersion"
+       ],
+       "Resource": "arn:aws:s3:::<bucket>/<prefix>/*"
+     },
+     {
+       "Effect": "Allow",
+       "Action": "s3:GetBucketLocation",
+       "Resource": "arn:aws:s3:::<bucket>"
+     },
+     {
+       "Effect": "Allow",
+       "Action": "s3:ListBucket",
+       "Resource": "arn:aws:s3:::<bucket>",
+       "Condition": {
+         "StringLike": {
+           "s3:prefix": [
+             "<prefix>/*"
+           ]
+         }
+       }
+     }
+   ]
+ }
+ ```
+
+ - If you encounter the following error: `Access Denied (Status Code: 403; Error Code: AccessDenied)`, one possible fix may be to remove the following condition from the IAM policy:
+
+ ```
+ "Condition": {
+   "StringLike": {
+     "s3:prefix": [
+       "<prefix>/*"
+     ]
+   }
+ }
+ ```
+08. Select **Next** in the bottom-right corner of the workspace.
+09. In the **Review and create** pane, under **Policy details**, enter the **Policy name**. For example, `firebolt-s3-access`.
+10. Enter an optional **Description**.
+11. Select the **Create policy** button in the bottom-right corner of the workspace.
+
+Setting the `s3:prefix` condition key to `*` grants access to **all** prefixes in the specified bucket for the associated action.
+
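+If you script your AWS setup, you can render the policy document above programmatically before pasting it into the policy editor. The following is a minimal Python sketch; the `render_s3_policy` helper and the sample bucket and prefix names are illustrative, not part of any Firebolt or AWS tooling:
+
+```python
+import json
+
+def render_s3_policy(bucket: str, prefix: str) -> str:
+    """Render the read-only S3 policy shown above for a given bucket and prefix."""
+    policy = {
+        "Version": "2012-10-17",
+        "Statement": [
+            {
+                "Effect": "Allow",
+                "Action": ["s3:GetObject", "s3:GetObjectVersion"],
+                "Resource": f"arn:aws:s3:::{bucket}/{prefix}/*",
+            },
+            {
+                "Effect": "Allow",
+                "Action": "s3:GetBucketLocation",
+                "Resource": f"arn:aws:s3:::{bucket}",
+            },
+            {
+                "Effect": "Allow",
+                "Action": "s3:ListBucket",
+                "Resource": f"arn:aws:s3:::{bucket}",
+                "Condition": {"StringLike": {"s3:prefix": [f"{prefix}/*"]}},
+            },
+        ],
+    }
+    return json.dumps(policy, indent=2)
+
+# Illustrative bucket and prefix names.
+print(render_s3_policy("my-data-bucket", "raw/events"))
+```
+
+Rendering the document this way keeps the placeholders in one place and guarantees the output is valid JSON.
+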
+## [](#create-the-iam-role-in-aws)Create the IAM role in AWS
+
+To integrate Firebolt with AWS, you must create an IAM role and associate it with the permission policy that you created in the previous [Create an IAM permissions policy in AWS](#create-an-iam-permissions-policy-in-aws) section. The following steps guide you through creating an IAM role, configuring the required trust policy from the Firebolt Workspace, and associating it with your IAM permissions policy. Once completed, you can use the role’s Amazon Resource Name (ARN) in Firebolt’s `CREDENTIALS` clause to enable secure data ingestion.
+
+01. Log in to the [AWS Identity and Access Management (IAM) Console](https://console.aws.amazon.com/iam/home#/home).
+02. Select **Roles** from the left navigation panel.
+03. Select the **Create role** button in the top right part of the main window.
+04. In the **Select trusted entity** window, select the radio button next to **Custom trust policy**.
+05. A **Custom trust policy** window opens. Leave this window open until you obtain a custom trust policy from the Firebolt **Workspace** as follows:
+
+ 1. Log in to the [Firebolt Workspace](https://go.firebolt.io/login).
+ 2. Select the plus (**+**) sign in Firebolt’s **Develop Space**.
+ 3. Select **Load data** from the drop-down list.
+ 4. Select an engine from the drop-down list next to **Select engine for ingestion**. If you do not have an engine, select **Create new engine** to create one.
+ 5. Select the **Next step** button.
+ 6. Select the radio button next to **IAM Role** in the **Authentication method** row.
+ 7. Select the **Create an IAM role** button.
+ 8. In the **Create new IAM role** window that pops up, select the copy icon under **Trust policy** to copy the entire trust policy to your clipboard.
+ 9. Return to the AWS **Custom trust policy** window from step 5.
+06. Replace the entire contents of the **Custom trust policy** with the contents of your clipboard from the Firebolt **Workspace**.
+07. Select the **Next** button in the bottom right part of the main window.
+08. Under **Permissions policies**, search for the policy that you created in step 9 of the previous section, [Create an IAM permissions policy in AWS](#create-an-iam-permissions-policy-in-aws), and select the checkbox next to it.
+09. Select the **Next** button in the bottom right part of the main window.
+10. Under **Role name**, enter a name that you can use to identify it.
+11. Select the **Create role** button in the bottom right part of the main window.
+12. Under **Role name**, select the name of the role that you just created.
+13. Copy the value under **ARN**. This value has the following format: `arn:aws:iam::123456789012:role/your_role_name`. Use the ARN value in the Firebolt `CREDENTIALS` clause as the `AWS_ROLE_ARN`, as shown in the following sections.
+
+Once you’ve created your IAM policy and associated it with your IAM role, you’re ready to load data into Firebolt using IAM roles. Firebolt assumes the IAM role to securely access and read data from your Amazon S3 bucket.
+
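+Before pasting the ARN into a `CREDENTIALS` clause, a quick shape check can catch copy-paste mistakes. The following is a minimal sketch; the regex and helper name are illustrative, and it validates only the documented format (`arn:aws:iam::123456789012:role/your_role_name`), not whether the role actually exists:
+
+```python
+import re
+
+# Matches the role ARN format shown above: 12-digit account ID, then role/<name>.
+ARN_PATTERN = re.compile(r"^arn:aws:iam::\d{12}:role/[\w+=,.@/-]+$")
+
+def looks_like_role_arn(arn: str) -> bool:
+    """Return True if the string has the shape of an IAM role ARN."""
+    return bool(ARN_PATTERN.match(arn))
+
+print(looks_like_role_arn("arn:aws:iam::123456789012:role/your_role_name"))
+```
+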
+## [](#how-to-specify-the-iam-role)How to specify the IAM role
+
+Firebolt supports AWS IAM roles for secure access to Amazon S3 when loading data. You can specify an IAM role in different ways, including in the `COPY FROM` statement, the Firebolt **Load Data** wizard, or an external table definition. The following sections explain how to configure IAM roles for each method.
+
+### [](#specify-the-iam-role-for-data-loading)Specify the IAM role for data loading
+
+When loading data into Firebolt, specify the IAM role ARN from the previous step to grant the necessary permissions. If you configured an external ID, ensure it is included along with the role ARN. The following sections show you how to load data into Firebolt using AWS IAM roles to access your storage bucket.
+
+### [](#specify-the-iam-role-in-copy-from)Specify the IAM role in `COPY FROM`
+
+Use the IAM role ARN from the previous step in the [CREDENTIALS](/sql_reference/commands/data-management/copy-from.html) of the `COPY FROM` statement. If you specified an external ID, make sure to specify it in addition to the role ARN. When you use the `COPY FROM` statement to load data from your source, Firebolt assumes the IAM role to obtain permissions to read from the location specified in the `COPY FROM` statement.
+
+For a step-by-step guide, see [The simplest COPY FROM workflow](/Guides/loading-data/loading-data-sql.html#the-simplest-copy-from-workflow).
+
+**Example**
+
+The following code example loads data from a CSV file in an Amazon S3 bucket into the `tutorial` table in Firebolt, using an AWS IAM role for authentication, treating the first row as a header, and automatically creating the table if it does not exist:
+
+```
+COPY INTO tutorial
+FROM 's3://your_s3_bucket/your_file.csv'
+WITH
+CREDENTIALS = (
+ AWS_ROLE_ARN='arn:aws:iam::123456789012:role/my-firebolt-role'
+ AWS_EXTERNAL_ID='ca4f5690-4fdf-4684-9d1c-2d5f9fabc4c9'
+)
+HEADER=TRUE AUTO_CREATE=TRUE;
+```
+
+### [](#using-iam-role-in-the-firebolt-load-data-wizard)Using IAM role in the Firebolt load data wizard
+
+You can use the role ARN from the previous step when loading data using the **Load data** wizard in the **Firebolt Workspace**. For a step-by-step guide, see [Load data using a wizard](/Guides/loading-data/loading-data-wizard.html).
+
+### [](#using-iam-role-in-external-table-definitions)Using IAM role in external table definitions
+
+Specify the IAM role ARN and the optional `external_id` in the [`CREDENTIALS`](/sql_reference/commands/data-definition/create-external-table.html) of the `CREATE EXTERNAL TABLE` statement. Firebolt assumes this IAM role when using an `INSERT INTO` statement to load data into a fact or dimension table.
+
+**Example**
+
+The following code example creates an external table which maps to Parquet files stored in an Amazon S3 bucket, using an AWS IAM role for access, and extracts partition values for `c_type` from the file path based on a specified regex pattern:
+
+```
+CREATE EXTERNAL TABLE my_ext_table (
+ c_id INTEGER,
+ c_name TEXT,
+ c_type TEXT PARTITION('[^/]+/c_type=([^/]+)/[^/]+/[^/]+')
+)
+CREDENTIALS = (AWS_ROLE_ARN='arn:aws:iam::123456789012:role/my-firebolt-role' AWS_ROLE_EXTERNAL_ID='ca4f5690-4fdf-4684-9d1c-2d5f9fabc4c9')
+URL = 's3://my_bucket/'
+OBJECT_PATTERN= '*.parquet'
+TYPE = (PARQUET)
+```
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_loading_data_creating_access_keys_aws.md b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_creating_access_keys_aws.md
new file mode 100644
index 0000000..df8d12d
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_creating_access_keys_aws.md
@@ -0,0 +1,107 @@
+# [](#creating-an-access-key-and-secret-id-in-aws)Creating an access key and secret ID in AWS
+
+This section will walk you through the steps to create security credentials in AWS. These credentials will be used to load data from AWS S3 into Firebolt.
+
+In order to enable Firebolt to load data from your S3 buckets, you must:
+
+1. Create a user.
+2. Create appropriate permissions for this user.
+3. Create access credentials to authenticate this user.
+
+## [](#create-a-user)Create a user
+
+1. Log into your AWS Management console and go to the IAM section. You can do this by typing “IAM” in the search bar.
+2. Once you are in the IAM section, select the **Create User** button.
+
+ 
+3. Enter a name for the user and select **Next**.
+
+ 
+4. Keep the default permission option, **Add user to group**, and select **Next**.
+
+ 
+5. Select **Create User**.
+
+ 
+6. You will see a message **User created successfully**.
+
+ 
+
+## [](#create-s3-access-permissions)Create S3 access permissions
+
+Now that you have created the user, assign it the appropriate permissions for S3.
+
+1. Select the user name as shown below.
+
+ 
+2. In the Permissions tab, select the **Add Permissions** drop-down and choose **Create inline policy**.
+
+ 
+3. In **Specify Permissions** choose S3 as the service.
+
+ 
+4. Choose **JSON**, paste the following JSON code in the policy editor, and select **Next**.
+
+ 
+
+ ```
+ {
+   "Version": "2012-10-17",
+   "Statement": [
+     {
+       "Effect": "Allow",
+       "Action": [
+         "s3:GetObject",
+         "s3:GetObjectVersion"
+       ],
+       "Resource": "arn:aws:s3:::<bucket>/<prefix>/*"
+     },
+     {
+       "Effect": "Allow",
+       "Action": "s3:GetBucketLocation",
+       "Resource": "arn:aws:s3:::<bucket>"
+     },
+     {
+       "Effect": "Allow",
+       "Action": "s3:PutObject",
+       "Resource": "arn:aws:s3:::<bucket>/*"
+     },
+     {
+       "Effect": "Allow",
+       "Action": "s3:ListBucket",
+       "Resource": "arn:aws:s3:::<bucket>",
+       "Condition": {
+         "StringLike": {
+           "s3:prefix": [
+             "<prefix>/*"
+           ]
+         }
+       }
+     }
+   ]
+ }
+ ```
+
+ **IMPORTANT:** Replace `<bucket>` with the name of the S3 bucket that you want to provide access to, and `<prefix>` with the path prefix that access is granted under.
+5. Enter a description for the policy and select **Create Policy**.
+
+ 
+6. You will see a message that the policy has been successfully created.
+
+## [](#create-access-key-and-secret-id)Create access key and secret ID
+
+Now that you have created a user and authorized it with the appropriate S3 permissions, create access credentials for this user. These credentials will be used to authenticate the user.
+
+1. Select the **Security Credentials** tab, as shown in the following image:
+
+ 
+2. In the **Access Keys** section, select the **Create Access Key** button.
+
+ 
+3. For the use case, choose **Application running on an AWS compute service**. You will see an alternative recommendation. Check the box that says “I understand the above recommendation and want to proceed to create an access key” and select **Next**.
+
+ 
+4. Set a description tag for the access key and select **Create access key**.
+5. You will see a message indicating that the access key was created. Make sure to download the access key. You will need these credentials when you load S3 data into Firebolt.
+
+ 
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_loading_data_loading_data.md b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_loading_data.md
new file mode 100644
index 0000000..c88334b
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_loading_data.md
@@ -0,0 +1,48 @@
+# [](#load-data)Load data
+
+You can load data into Firebolt from an Amazon S3 bucket using two different workflows.
+
+If you want to get started quickly, load data using a **wizard** in the **Firebolt Workspace**. If you want a more customized experience, you can write **SQL scripts** to handle each part of your workflow. This guide shows you how to load data using both the wizard and SQL, and describes some common data loading workflows and errors.
+
+
+
+Before you can load data, you must first register with Firebolt, then create a database and an engine. For information about how to register, see [Get Started](../../Guides/getting-started/). See the following sections for information about how to create a database and engine.
+
+## [](#load-data-using-a-wizard)Load data using a wizard
+
+You can use the **Load data** wizard in the **Firebolt Workspace** to load data in either CSV or Parquet format, and choose from a variety of different loading parameters which include the following:
+
+- Specifying a custom delimiter, quote character, escape character, and other options.
+- How to handle errors during data load.
+- Specifying a primary index.
+
+The **Load data** wizard guides you through the process of creating an engine and database as part of the loading process.
+
+See [Load data using a wizard](/Guides/loading-data/loading-data-wizard.html) for information about the options available in the **Load data** wizard.
+
+## [](#load-data-using-sql)Load data using SQL
+
+You can use SQL to load data in CSV, Parquet, TSV, AVRO, JSON Lines or ORC formats. Prior to loading data, you must also create a database and engine using either of the following options:
+
+- Use buttons in the **Firebolt Workspace** to create a database and engine. For more information, see the [Create a Database](/Guides/getting-started/get-started-sql.html#create-a-database) and [Create an Engine](/Guides/getting-started/get-started-sql.html#create-an-engine) sections in the [Get Started using SQL](/Guides/getting-started/get-started-sql.html) guide.
+- Use the SQL commands [CREATE DATABASE](/sql_reference/commands/data-definition/create-database.html) and [CREATE ENGINE](/sql_reference/commands/engines/create-engine.html).
+
+See [SQL to load data](/Guides/loading-data/loading-data-sql.html) for information and code examples to load data using SQL.
+
+## [](#optimizing-during-data-loading)Optimizing during data loading
+
+Optimizing your workflow for Firebolt starts when you load your data. Use the following guidance:
+
+1. A primary index is a sparse index that uniquely identifies rows in a table. Having a primary index is critical to query performance in Firebolt because it allows a query to locate data without scanning an entire dataset. If you are familiar with your data and query history well enough to select an optimal primary index, you can define it when creating a table. If you don’t, you can still load your data without a primary index. Then, once you know your query history patterns, you must create a new table in order to define a primary index.
+
+ You can specify primary indexes in either the **Load data** wizard or in SQL commands. The [Load data using a wizard](/Guides/loading-data/loading-data-wizard.html) guide discusses considerations for selecting primary indexes and how to select them. The [Load data using SQL](/Guides/loading-data/loading-data-sql.html) guide discusses the same considerations and shows code examples that specify primary indexes. For more advanced information, see [Primary indexes](/Overview/indexes/primary-index.html).
+2. If you intend to use [aggregate functions](/sql_reference/functions-reference/aggregation/) in queries, you can calculate an aggregating index when loading your data. Then queries use these pre-calculated values to access information quickly. For an example of calculating an aggregating index during load, see [Load data using SQL](/Guides/loading-data/loading-data-sql.html). For an introduction to aggregating indexes, see the [Aggregating indexes](/Guides/getting-started/get-started-sql.html#aggregating-indexes) section of the **Get Started** guide. For more information, see [Aggregating indexes](/Overview/indexes/aggregating-index.html).
+
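+The intuition behind a sparse primary index can be sketched in a few lines: keep rows sorted on the index column, remember only the first key of each block, and binary-search those keys to decide which block a query must read. The following is an illustrative model only, not Firebolt's implementation:
+
+```python
+import bisect
+
+# Stand-in for a table sorted on its primary index column, split into blocks,
+# with only each block's first key kept as the sparse index.
+rows = list(range(1000))
+block_size = 100
+blocks = [rows[i:i + block_size] for i in range(0, len(rows), block_size)]
+sparse_index = [block[0] for block in blocks]
+
+def block_to_scan(key):
+    """Return the single block that may contain `key`, instead of scanning all rows."""
+    i = bisect.bisect_right(sparse_index, key) - 1
+    return blocks[i]
+
+candidate = block_to_scan(357)
+# Only one 100-row block is scanned, not all 1000 rows.
+print(len(candidate), 357 in candidate)
+```
+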
+## [](#next-steps)Next steps
+
+After you load your data, you can start running and optimizing your queries. A typical workflow has the previous steps followed by data and resource cleanup as shown in the following diagram:
+
+
+
+- [Load data using a wizard](/Guides/loading-data/loading-data-wizard.html)
+- [Load data using SQL](/Guides/loading-data/loading-data-sql.html)
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_loading_data_loading_data_sql.md b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_loading_data_sql.md
new file mode 100644
index 0000000..04c1dde
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_loading_data_sql.md
@@ -0,0 +1,497 @@
+# [](#load-data-using-sql)Load data using SQL
+
+If the **Load data** wizard does not meet your needs, or you prefer to write directly in SQL, you can enter SQL and run it in the **Firebolt Workspace**, or use an API.
+
+Before you can load data using a SQL script, you must register with Firebolt, and create a database and an engine.
+
+A general workflow to load data using SQL is shown in the following diagram, with the highlighted path representing the SQL workflow and the muted path representing using the **Load data** wizard:
+
+
+
+For more information on how to register, create a database and engine using the **Firebolt Workspace**, see the [Get Started](/Guides/getting-started/) guide. To create an engine using SQL, use [CREATE ENGINE](/sql_reference/commands/engines/create-engine.html). You can check how many engines are defined in your current account using [SHOW ENGINES](/sql_reference/commands/metadata/show-engines.html). For more information and examples of how to create engines, see [Work with engines using DDL](/Guides/operate-engines/working-with-engines-using-ddl.html). To create a database, use [CREATE DATABASE](/sql_reference/commands/data-definition/create-database.html). You can check how many databases (i.e., catalogs) are defined in your current account using [SHOW CATALOGS](/sql_reference/commands/metadata/show-catalogs.html). Next, log into the **Firebolt Workspace** and enter SQL into the script tab in the **SQL Editor**.
+
+The following code examples show different workflows based on need and complexity:
+
+- [The simplest COPY FROM workflow](#the-simplest-copy-from-workflow)
+- [Define a schema, create a table, and load data](#define-a-schema-create-a-table-and-load-data)
+- [Load multiple files into a table](#load-multiple-files-into-a-table)
+- [Filter data before loading using OFFSET and LIMIT](#filter-data-before-loading-using-offset-and-limit)
+- [Aggregating data during data load](#aggregating-data-during-data-load)
+- [Update an existing table from an external table](#update-an-existing-table-from-an-external-table)
+- [Load source file metadata into a table](#load-source-file-metadata-into-a-table)
+- [Continue loading even with errors](#continue-loading-even-with-errors)
+- [Log errors during data load](#log-errors-during-data-load)
+
+## [](#the-simplest-copy-from-workflow)The simplest COPY FROM workflow
+
+Although there are many options to handle different data loading workflows, `COPY FROM` requires only two parameters:
+
+1. The name of the table that you are loading data into.
+2. A location to load the data from.
+
+An example of the **simplest** way to invoke `COPY FROM` is:
+
+```
+COPY INTO tutorial
+FROM 's3://firebolt-publishing-public/help_center_assets/firebolt_sample_dataset/levels.csv'
+WITH HEADER=TRUE;
+```
+
+The previous code creates a table named `tutorial`, reads a CSV file with headers from a public Amazon S3 bucket, automatically generates a schema, and loads the data.
+
+If the data is contained in an Amazon S3 bucket with restricted access, you will need to provide credentials. The following example shows how to provide credentials and read a file with headers, and automatically generate a schema:
+
+```
+COPY INTO tutorial
+FROM 's3://your_s3_bucket/your_file.csv'
+WITH
+CREDENTIALS = (
+ AWS_ROLE_ARN='arn:aws:iam::123456789012:role/my-firebolt-role'
+)
+HEADER=TRUE AUTO_CREATE=TRUE;
+```
+
+Firebolt supports authentication using both permanent AWS access keys and temporary security credentials obtained through Amazon’s [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) feature. To provide your credentials for the previous example, follow these steps:
+
+#### [](#static-credentials)Static Credentials
+
+Replace `<aws_access_key_id>` with an AWS access key ID associated with an AWS user or IAM role. The access key ID is a 20-character string (e.g., `AKIAIOSFODNN7EXAMPLE`). Replace `<aws_secret_access_key>` with the AWS secret access key associated with the AWS user or IAM role. The secret access key is a 40-character string (e.g., `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`). You can also specify an `AWS_SESSION_TOKEN`.
+
+**Example:**
+
+```
+COPY INTO tutorial
+FROM 's3://test-bucket/data.csv'
+WITH
+CREDENTIALS = (
+ AWS_ACCESS_KEY_ID = 'AKIAIOSFODNN7EXAMPLE' AWS_SECRET_ACCESS_KEY = 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
+)
+```
+
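+Because malformed keys fail only when the statement runs, a quick local check of the documented key lengths before embedding them in a `COPY FROM` statement can save a round trip. The following is a minimal sketch; the helper name is illustrative:
+
+```python
+# Sanity check based on the formats described above:
+# access key IDs are 20 characters, secret access keys are 40 characters.
+def check_static_credentials(key_id: str, secret: str) -> None:
+    if len(key_id) != 20:
+        raise ValueError("AWS access key IDs are 20 characters long")
+    if len(secret) != 40:
+        raise ValueError("AWS secret access keys are 40 characters long")
+
+# AWS's well-known documentation example credentials pass the check.
+check_static_credentials(
+    "AKIAIOSFODNN7EXAMPLE",
+    "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
+)
+print("credentials look well-formed")
+```
+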
+#### [](#assume-role-authentication)Assume Role Authentication
+
+Replace `<aws_role_arn>` with the Amazon Resource Name (ARN) of the IAM role that you want Firebolt to assume. This method gives Firebolt temporary credentials to authenticate and access your Amazon S3 bucket.
+
+**Example:**
+
+```
+COPY INTO tutorial
+FROM 's3://test-bucket/data.csv'
+WITH
+CREDENTIALS = (
+ AWS_ROLE_ARN='arn:aws:iam::746669185839:role/example'
+)
+```
+
+## [](#define-a-schema-create-a-table-and-load-data)Define a schema, create a table, and load data
+
+You can also load data into an existing table using your own schema definition. Manually defining your own schema can give you finer control over data ingestion. This example contains the following two steps:
+
+1. Create the target table.
+
+ Create a table to load the data into, as shown in the following code example:
+
+ ```
+ CREATE TABLE IF NOT EXISTS levels (
+ LevelID INT,
+ Name TEXT,
+ GameID INT,
+ LevelType TEXT,
+ MaxPoints INT,
+ PointsPerLap DOUBLE,
+ SceneDetails TEXT
+ );
+ ```
+
+ The previous code example creates a table named `levels`, and defines each of the columns with a name and data type. For more information about the data types that Firebolt supports, see [Data types](/sql_reference/data-types.html).
+2. Run `COPY FROM`.
+
+ Use `COPY FROM` to load the data from an Amazon S3 bucket into the `levels` table, as shown in the following code example:
+
+ ```
+ COPY INTO levels
+ FROM 's3://firebolt-publishing-public/help_center_assets/firebolt_sample_dataset/levels.csv'
+ WITH TYPE = CSV
+ HEADER = TRUE;
+ ```
+
+ The previous code example reads data from a Firebolt test data set from the fictional [Ultra Fast Gaming Inc.](https://help.firebolt.io/t/ultra-fast-gaming-firebolt-sample-dataset/250) company. The `levels` data set is in CSV format, but you can also use `COPY FROM` to read files in `Parquet` format. If you are reading in a `CSV` file and specify `HEADER = TRUE`, then Firebolt expects the first line of your file to contain column names.
+
+## [](#load-multiple-files-into-a-table)Load multiple files into a table
+
+You can use the `PATTERN` option in `COPY FROM` to load several files at the same time from an Amazon S3 bucket. The `PATTERN` option uses glob-style wildcard patterns. For more information about glob patterns, see the Wikipedia [glob (programming)](https://en.wikipedia.org/wiki/Glob_%28programming%29) article.
+
+```
+COPY INTO nyc_restaurant_inspections FROM
+'s3://firebolt-sample-datasets-public-us-east-1/nyc_sample_datasets/nyc_restaurant_inspections/parquet/'
+WITH PATTERN="*.parquet" AUTO_CREATE=TRUE TYPE=PARQUET;
+```
+
+In the previous code example, the following apply:
+
+- **COPY INTO**: Specifies the target table to load the data into.
+- **FROM**: Specifies the S3 bucket location of the data.
+- **WITH PATTERN** = `*.parquet`: Uses a glob pattern with a wildcard (`*`) to include all Parquet files in the directory.
+- **AUTO\_CREATE=TRUE**: Automatically creates the table and its schema if the table does not already exist. Parquet files typically embed schema information, which enables simple and high-fidelity schema creation. Setting AUTO\_CREATE to TRUE ensures that the schema in the Parquet files is preserved during loading.
+- **TYPE = PARQUET**: Specifies the data format as Parquet.
+
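+You can preview which objects a glob pattern would match using Python's `fnmatch` module, whose wildcard semantics are comparable. This is an approximation for illustration (with made-up object keys), not Firebolt's matcher:
+
+```python
+from fnmatch import fnmatch
+
+# Hypothetical object keys under the S3 path used above.
+keys = [
+    "nyc_restaurant_inspections/part-0001.parquet",
+    "nyc_restaurant_inspections/part-0002.parquet",
+    "nyc_restaurant_inspections/README.txt",
+]
+
+# "*.parquet" keeps only the Parquet objects, as in the PATTERN example above.
+matched = [k for k in keys if fnmatch(k, "*.parquet")]
+print(matched)
+```
+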
+## [](#filter-data-before-loading-using-offset-and-limit)Filter data before loading using OFFSET and LIMIT
+
+You can use `COPY FROM` with the `LIMIT` and `OFFSET` clauses to filter out data before you load it into Firebolt. The following example demonstrates how to filter the source data by skipping the first five rows of data and inserting only the next three rows.
+
+```
+COPY offset_limit
+FROM 's3://firebolt-publishing-public/help_center_assets/firebolt_sample_dataset/levels.csv'
+OFFSET 5 LIMIT 3
+WITH TYPE = CSV HEADER = TRUE;
+```
+
+In the previous code example, the following apply:
+
+- **OFFSET**: Specifies a non-negative number of rows that are skipped before returning results from the query.
+- **LIMIT**: Restricts the number of rows that are included in the result set.
+- **TYPE = CSV**: Specifies the data format as CSV.
+- **HEADER**: Specifies that the first row of the source file contains column headers.
+
+For more information about `OFFSET` and `LIMIT`, see [SELECT Query Syntax](/sql_reference/commands/queries/select.html).
+
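+`OFFSET 5 LIMIT 3` behaves like list slicing: skip the first five rows, then take the next three. A quick illustration with stand-in rows:
+
+```python
+# Stand-in for the rows in the source file.
+rows = [f"row_{i}" for i in range(1, 11)]
+
+offset, limit = 5, 3
+# Skip `offset` rows, then keep at most `limit` rows.
+loaded = rows[offset:offset + limit]
+print(loaded)  # ['row_6', 'row_7', 'row_8']
+```
+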
+## [](#aggregating-data-during-data-load)Aggregating data during data load
+
+If you frequently use [aggregation functions](/sql_reference/functions-reference/aggregation/) such as `COUNT`, `MAX`, or `SUM`, you can perform these aggregations on top of an external table without loading the raw data into Firebolt. This approach allows you to avoid costs associated with importing and storing the dataset, particularly if you don’t need to store the originating data set.
+
+The following example shows how to aggregate data using an [external table](/Guides/loading-data/working-with-external-tables.html): first, create an external table linked to the source files; then, define a table in the Firebolt database with the desired aggregations; finally, insert data from the external table into the internal table. This example contains the following three steps:
+
+1. Create an external table linked to files in an Amazon S3 bucket.
+
+ The following code creates an external table that links to files in an Amazon S3 bucket. The table has a defined schema that matches the type and names of the originating data:
+
+ ```
+ CREATE EXTERNAL TABLE IF NOT EXISTS ex_playstats (
+ GameID INTEGER,
+ PlayerID INTEGER,
+ StatTime TIMESTAMPNTZ,
+ SelectedCar TEXT,
+ CurrentLevel INTEGER,
+ CurrentSpeed REAL,
+ CurrentPlayTime BIGINT,
+ CurrentScore BIGINT,
+ Event TEXT,
+ ErrorCode TEXT,
+ TournamentID INTEGER)
+ URL = 's3://firebolt-sample-datasets-public-us-east-1/gaming/parquet/playstats/'
+ OBJECT_PATTERN = '*'
+ TYPE = (PARQUET);
+ ```
+
+ The previous code uses `OBJECT_PATTERN` to link all (\*) files inside the specified directory contained in `URL`, and `TYPE` to specify the file format.
+2. Define a table in the Firebolt database with the desired aggregations, as shown in the following code example:
+
+ ```
+ CREATE TABLE IF NOT EXISTS playstats_max_scores (
+ PlayerID INTEGER,
+ TournamentID INTEGER,
+ MaxCurrentLevel INTEGER,
+ MaxCurrentSpeed REAL,
+ MaxCurrentScore BIGINT
+ ) PRIMARY INDEX TournamentID, PlayerID;
+ ```
+
+ The previous code creates a table with the aggregate values `MaxCurrentLevel`, `MaxCurrentSpeed`, and `MaxCurrentScore`.
+3. Insert data from the external table into the internal table using the aggregate functions, as shown in the following code example:
+
+ ```
+ INSERT INTO playstats_max_scores
+ SELECT PlayerID,
+        TournamentID,
+        MAX(CurrentLevel),
+        MAX(CurrentSpeed),
+        MAX(CurrentScore)
+ FROM ex_playstats
+ GROUP BY ALL;
+ ```
+
+The previous code calculates the aggregate function `MAX` before loading the data into the `playstats_max_scores` table.
+
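+The `INSERT ... SELECT` above computes a per-group maximum. The same aggregation can be sketched locally to see what is stored for each `(PlayerID, TournamentID)` pair; the values below are illustrative, not the sample dataset:
+
+```python
+from collections import defaultdict
+
+# Illustrative play stats: (PlayerID, TournamentID, CurrentScore).
+playstats = [
+    (1, 10, 500), (1, 10, 900),
+    (1, 11, 300),
+    (2, 10, 700), (2, 10, 650),
+]
+
+# Equivalent of SELECT ... MAX(CurrentScore) ... GROUP BY PlayerID, TournamentID.
+max_scores = defaultdict(int)
+for player, tournament, score in playstats:
+    key = (player, tournament)
+    max_scores[key] = max(max_scores[key], score)
+
+print(dict(max_scores))  # {(1, 10): 900, (1, 11): 300, (2, 10): 700}
+```
+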
+## [](#update-an-existing-table-from-an-external-table)Update an existing table from an external table
+
+Firebolt saves metadata, including virtual columns with the source file’s name, size, and timestamp, when mapping data from an Amazon S3 bucket to a Firebolt database. You can query this metadata directly for troubleshooting and analysis, or use it to find new data, as shown in this example.
+
+To load only new and updated data from an Amazon S3 bucket into an existing table, use an external table and two temporary tables. This section guides you through creating a new table, which will serve as the existing table in a complete example. If you already have an existing table, its schema definition must include the file timestamp and file name metadata. For more information about these metadata columns, see **Using metadata virtual columns** in [Work with external tables](/Guides/loading-data/working-with-external-tables.html).
+
+The full workflow involves creating an internal source data table, an external table linked to the source data, and two temporary tables for the latest timestamp and updated data. The `updates_table` selects new data and uses an inner join to insert these records into your existing table, as illustrated in the diagram below:
+
+
+
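+In outline, selecting new data in this workflow reduces to comparing each source file's timestamp with the latest timestamp already loaded. The following is a minimal sketch with a hypothetical file list:
+
+```python
+from datetime import datetime
+
+# Hypothetical source files in the bucket, with their S3 timestamps.
+source_files = {
+    "players_0001.parquet": datetime(2024, 1, 1),
+    "players_0002.parquet": datetime(2024, 2, 1),
+    "players_0003.parquet": datetime(2024, 3, 1),
+}
+
+# Latest SOURCE_FILE_TIMESTAMP already present in the target table.
+max_loaded = datetime(2024, 1, 15)
+
+# Only files newer than the latest loaded timestamp are selected.
+new_files = sorted(name for name, ts in source_files.items() if ts > max_loaded)
+print(new_files)  # ['players_0002.parquet', 'players_0003.parquet']
+```
+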
+This example contains the following nine steps:
+
+1. Create a table
+
+ The following code example shows you how to create a `players` table from a sample players dataset, and then copy data from a Parquet file in an Amazon S3 bucket into it:
+
+ ```
+ CREATE TABLE IF NOT EXISTS
+ players (
+ PlayerID INTEGER,
+ Nickname TEXT,
+ Email TEXT,
+ AgeCategory TEXT,
+ Platforms ARRAY (TEXT NULL),
+ RegisteredOn DATE,
+ IsSubscribedToNewsletter BOOLEAN,
+ InternalProbabilityToWin DOUBLE PRECISION,
+ SOURCE_FILE_NAME TEXT,
+ SOURCE_FILE_TIMESTAMP TIMESTAMPNTZ)
+ PRIMARY INDEX agecategory, registeredon;
+ ```
+
+ The previous code example defines the schema for the players table, which includes the [metadata columns](/Guides/loading-data/working-with-external-tables.html) `SOURCE_FILE_NAME` and `SOURCE_FILE_TIMESTAMP`.
+2. Create an external table
+
+ Use an external table to query the source data directly to compare it to data in your existing table. The advantages of using an external table to check for new data are as follows:
+
+ - An external table links to the data source without loading it into a database, which avoids costs associated with importing and storing it.
+ - Using an external table isolates data operations to reduce the risk of corrupting data contained in the main `players` table.
+
+    The following code example creates an external `players_ext` table linked to the source data:
+
+ ```
+ CREATE EXTERNAL TABLE IF NOT EXISTS
+ players_ext (
+ PlayerID INTEGER,
+ Nickname TEXT,
+ Email TEXT,
+ AgeCategory TEXT,
+ Platforms ARRAY (TEXT NULL),
+ RegisteredOn PGDATE,
+ IsSubscribedToNewsletter BOOLEAN,
+ InternalProbabilityToWin DOUBLE PRECISION)
+ URL = 's3://firebolt-sample-datasets-public-us-east-1/gaming/parquet/players/'
+ OBJECT_PATTERN = '*'
+ TYPE = (PARQUET);
+ ```
+
+ The previous code example defines the schema for parquet data and links to all parquet files in the Amazon S3 bucket that contains the source data.
+
+    If you are using an external table to link to data in Parquet format, the order of the columns in the external table does not have to match the order of the columns in the source data. If you are reading data in CSV format, the column order must match that of the source data.
+3. Copy data from the `players_ext` external table into an internal `players` table, as shown in the following code example:
+
+ ```
+ INSERT INTO players
+ SELECT *,
+ $SOURCE_FILE_NAME,
+ $SOURCE_FILE_TIMESTAMP
+    FROM players_ext;
+ ```
+4. Create a temporary table that contains the most recent timestamp from your existing table, as shown in the following code example:
+
+ ```
+ CREATE TABLE IF NOT EXISTS control_maxdate AS (
+ SELECT MAX(source_file_timestamp) AS max_time
+ FROM players
+ );
+ ```
+
+ The previous code example uses an aggregate function `MAX` to select the most recent timestamp from the existing `players` table.
+5. Create a temporary table to select and store data that has a newer timestamp than the one stored in the `control_maxdate` table, as shown in the following code example:
+
+ ```
+    CREATE TABLE IF NOT EXISTS updates_table AS (
+        WITH external_table AS (
+            SELECT *,
+                $SOURCE_FILE_NAME AS source_file_name_new,
+                $SOURCE_FILE_TIMESTAMP AS source_file_timestamp_new
+            FROM players_ext
+            WHERE $SOURCE_FILE_TIMESTAMP > (SELECT max_time FROM control_maxdate)
+                AND playerid IN (SELECT DISTINCT playerid FROM players)
+        )
+        SELECT
+            e.*
+        FROM players f
+        INNER JOIN external_table e
+            ON f.playerid = e.playerid
+    );
+ ```
+
+    The previous code example creates an `updates_table` using a `SELECT` statement that keeps only rows with a timestamp newer than the previously recorded maximum. The code includes a table alias `e`, which refers to `external_table`, and the table alias `f`, which refers to the `players` table. The `INNER JOIN` uses `playerid` to match rows in the external table to those in the `players` table, so that `updates_table` contains only updated versions of existing records.
+6. Delete records from the original players table that have been updated, based on matching player IDs in the `updates_table`, as shown in the following code example:
+
+ ```
+ DELETE FROM players
+ WHERE playerid IN (SELECT playerid FROM updates_table);
+ ```
+7. Insert updated records from the `updates_table`, including a new timestamp, into the `players` table to replace the deleted records from the previous step, as shown in the following example:
+
+ ```
+ INSERT INTO players
+ SELECT
+ playerid,
+ nickname,
+ email,
+ agecategory,
+ platforms,
+ registeredon,
+ issubscribedtonewsletter,
+ internalprobabilitytowin,
+ source_file_name_new,
+ source_file_timestamp_new
+ FROM updates_table;
+ ```
+8. Insert any entirely new, rather than updated, records into the `players` table, as shown in the following code example:
+
+ ```
+ INSERT INTO players
+ SELECT *,
+ $SOURCE_FILE_NAME,
+ $SOURCE_FILE_TIMESTAMP
+ FROM players_ext
+ WHERE $SOURCE_FILE_TIMESTAMP > (SELECT max_time FROM control_maxdate)
+ AND playerid NOT IN (SELECT playerid FROM players);
+ ```
+9. Clean up resources. Remove the temporary tables used in the update process as shown in the following code example:
+
+ ```
+ DROP TABLE IF EXISTS control_maxdate;
+ DROP TABLE IF EXISTS updates_table;
+ ```
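+
+After the cleanup, you can optionally confirm the result of the incremental load. The following sketch, which reuses the table and column names from this example, returns the newest source-file timestamp and the row count now recorded in the `players` table:
+
+```
+-- Verification sketch using the example's players table
+SELECT MAX(source_file_timestamp) AS latest_source_file,
+       COUNT(*) AS total_rows
+FROM players;
+```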
+
+## [](#load-source-file-metadata-into-a-table)Load source file metadata into a table
+
+When you load data from an Amazon S3 bucket, Firebolt uses an external table, which holds metadata about your source files, to map the data into a Firebolt database. You can load this metadata from the virtual columns of the external table into a regular table. The file name, timestamp, and size let you determine the source of a row of data in a table. When adding data to an existing table, you can use this information to check whether new data is available, or to determine the vintage of the data. The external table associated with your source file contains the following fields:
+
+- `source_file_name` - the name of your source file.
+- `source_file_timestamp` - the timestamp when your source file was last modified in the Amazon S3 bucket that it was read from.
+- `source_file_size` - the size of your source file in bytes.
+
+The following code example shows you how to create and load metadata into a `levels_meta` table, which contains only the metadata:
+
+```
+CREATE TABLE levels_meta (
+    day_of_creation date,
+    name_of_file text,
+    size_of_file int);
+
+COPY levels_meta(
+    day_of_creation $source_file_timestamp,
+    name_of_file $source_file_name,
+    size_of_file $source_file_size)
+FROM 's3://firebolt-publishing-public/help_center_assets/firebolt_sample_dataset/levels.csv'
+WITH AUTO_CREATE = TRUE
+HEADER = TRUE
+TYPE = CSV;
+```
+
+The following code shows you how to read the source data from the Amazon S3 bucket into a new table that also stores the metadata in additional columns:
+
+```
+CREATE TABLE IF NOT EXISTS levels_meta_plus (
+ "LevelID" INT,
+ "Name" TEXT,
+ "GameID" INT,
+ "LevelType" TEXT,
+ "MaxPoints" INT,
+ "PointsPerLap" DOUBLE,
+ "SceneDetails" TEXT,
+ day_of_creation date,
+ name_of_file text,
+ size_of_file int
+);
+
+COPY INTO levels_meta_plus (
+ "LevelID",
+ "GameID",
+ "Name",
+ "LevelType",
+ "MaxPoints",
+ "PointsPerLap",
+ "SceneDetails",
+ day_of_creation $source_file_timestamp,
+ name_of_file $source_file_name,
+ size_of_file $source_file_size
+)
+FROM 's3://firebolt-publishing-public/help_center_assets/firebolt_sample_dataset/levels.csv'
+WITH
+HEADER = TRUE
+TYPE = CSV;
+```
+
+For more information about metadata, see **Using metadata virtual columns** in [Work with external tables](/Guides/loading-data/working-with-external-tables.html).
+
+## [](#continue-loading-even-with-errors)Continue loading even with errors
+
+By default, if Firebolt runs into an error when loading your data, the job stops loading and ends in error. If you want to continue loading your data even in the presence of errors, set `MAX_ERRORS_PER_FILE` to a percentage or integer larger than `0`. `COPY FROM` then continues to load data until the number of rows with errors exceeds the specified threshold, measured against the total number of rows in your data. If you enter an integer between `0` and `100`, `COPY FROM` interprets it as a percentage of rows. As a percentage literal, you can specify only `0%` or `100%`.
+
+For example, if `MAX_ERRORS_PER_FILE` is set to `0` or `0%`, `COPY FROM` will load data until one row has an error, and then return an error. Setting `MAX_ERRORS_PER_FILE` to either `100` or `100%` allows the loading process to continue even if every row has an error. If all rows have errors, no data will load into the target table.
+
+The following code example loads a sample CSV data set with headers, and will finish the loading job even if every row contains an error.
+
+```
+COPY INTO new_levels_auto
+FROM 's3://firebolt-publishing-public/help_center_assets/firebolt_sample_dataset/levels.csv'
+WITH AUTO_CREATE = TRUE
+HEADER = TRUE
+TYPE = CSV
+MAX_ERRORS_PER_FILE = '100%';
+```
+
+In the previous code example, the following apply:
+
+- `COPY INTO new_levels_auto`: Creates a new table named `new_levels_auto`. The `INTO` clause is optional. If the table already exists, `COPY FROM` will add the rows to the existing table.
+- `FROM`: Specifies the S3 bucket location of the data. In this example, the dataset is located in a publicly accessible bucket, so you do not need to provide credentials.
+- `AUTO_CREATE=TRUE`: Creates a target table and automatically infers the schema.
+- `HEADER=TRUE`: Specifies that the first row of the source file contains column headers.
+- `TYPE`: Specifies the data format of the incoming data.
+- `MAX_ERRORS_PER_FILE`: Specified as an integer or a text literal. The previous example uses the text literal `'100%'`.
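+
+Because `MAX_ERRORS_PER_FILE` also accepts an integer interpreted as a percentage of rows, the same job can be written with the integer form, as in the following sketch:
+
+```
+COPY INTO new_levels_auto
+FROM 's3://firebolt-publishing-public/help_center_assets/firebolt_sample_dataset/levels.csv'
+WITH AUTO_CREATE = TRUE
+HEADER = TRUE
+TYPE = CSV
+MAX_ERRORS_PER_FILE = 100;
+```
+
+Here the integer `100` behaves the same as the text literal `'100%'`.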
+
+## [](#log-errors-during-data-load)Log errors during data load
+
+`COPY FROM` supports an option to generate error files that describe the errors encountered and note the rows with errors. To store these files in an Amazon S3 bucket, you must provide credentials to allow Firebolt to write to the bucket.
+
+The following example sets an error handling threshold and specifies an Amazon S3 bucket as the source data and another to write the error file:
+
+```
+COPY INTO my_table
+FROM 's3://my-bucket/data.csv'
+WITH
+CREDENTIALS = (
+ AWS_ROLE_ARN='arn:aws:iam::123456789012:role/my-firebolt-role'
+)
+MAX_ERRORS_PER_FILE = '100%'
+ERROR_FILE = 's3://my-bucket/error_logs/'
+ERROR_FILE_CREDENTIALS = (
+ AWS_ROLE_ARN='arn:aws:iam::123456789012:role/my-firebolt-role'
+)
+HEADER = TRUE;
+```
+
+In the previous code example, the following apply:
+
+- `COPY INTO`: Specifies the target table to load the data into.
+- `FROM`: Specifies the S3 bucket location of the data.
+- `CREDENTIALS`: Specifies AWS credentials to access information in the Amazon S3 bucket that contains the source data. AWS [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) authentication is used for dynamic, temporary credentials. For more Information about credentials and how to set them up, see [The simplest COPY FROM workflow](/Guides/loading-data/loading-data-sql.html#the-simplest-copy-from-workflow).
+- Error Handling:
+
+  - `MAX_ERRORS_PER_FILE = '100%'`: Allows errors in up to `100%` of the rows per file before the load data job fails.
+ - `ERROR_FILE`: Specifies the Amazon S3 bucket location to write the error file.
+- `HEADER = TRUE`: Indicates that the first row of the CSV file contains column headers.
+
+**How to examine the error files**
+
+If you specify an S3 path with the necessary permissions for an error file and the `COPY FROM` process encounters errors, two different files will be generated in your bucket. The following queries show you how to load these error files into new tables so that you can query and examine the error details and the corresponding rows.
+
+The following query loads the `error_reasons` csv file, which contains a header with column names:
+
+```
+COPY error_reasons FROM 's3://my-bucket/error_logs/'
+WITH PATTERN='*error_reasons*.csv' HEADER=TRUE;
+
+SELECT * from error_reasons;
+```
+
+The following query loads a file containing all rows that encountered errors. Although this file has no header, the table schema should match that of the source file where the errors occurred.
+
+```
+COPY rejected_rows FROM 's3://my-bucket/error_logs/'
+WITH PATTERN='*rejected_rows*.csv' HEADER=FALSE;
+
+SELECT * FROM rejected_rows;
+```
+
+Configure error handling parameters such as `MAX_ERRORS_PER_FILE`, `ERROR_FILE`, and `ERROR_FILE_CREDENTIALS` to manage how errors are handled, ensure data integrity, and record errors for future review. For more information about `ERROR_FILE` or `ERROR_FILE_CREDENTIALS`, see the **Parameters** section of [COPY FROM](/sql_reference/commands/data-management/copy-from.html).
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_loading_data_loading_data_wizard.md b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_loading_data_wizard.md
new file mode 100644
index 0000000..61090f2
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_loading_data_wizard.md
@@ -0,0 +1,185 @@
+# [](#load-data-using-a-wizard)Load data using a wizard
+
+The **Load data** wizard can help you get started loading data from an Amazon S3 bucket using a simple workflow. You can use the wizard to both create an engine and load your data.
+
+A general workflow to load data using the **Load data** wizard is shown in the following diagram as the highlighted decision path compared to using SQL shown in the muted path:
+
+
+
+The wizard also guides you through setting up an AWS connection. To use the wizard, you will need the uniform resource locator (URL) of an Amazon S3 bucket. If credentials are required to access the data that you want to load, you will also need an AWS Key ID and your AWS Secret Key. In most steps in the wizard, you can view the SQL commands associated with your selections in the **Load data** main window by selecting **Show SQL script** in the left navigation pane at the bottom of the window.
+
+To use the wizard, use the following steps:
+
+1. Register and/or log in to the [Firebolt Workspace](https://firebolt.go.firebolt.io/signup).
+2. Select the (+) icon from the left navigation pane next to **Databases**.
+3. Select **Load data** from the drop-down menu, as shown in the following image:
+
+
+
+## [](#select-an-engine)Select an engine
+
+
+
+Select an engine to load data. If the engine that you want to use already exists, select it from the dropdown list next to **Select engine for ingestion**. Otherwise, select **Create new engine** from the dropdown list, and do the following:
+
+1. Enter a name in the **New engine name** text box.
+2. Select an engine size from the drop-down list next to **Node type**. Consider the following when creating a new engine:
+
+ 1. If you are loading data and using Firebolt for the first time, use the smallest engine size (S) and a small dataset to try out Firebolt’s capabilities. Refer to the [Get Started](/Guides/getting-started/) guide for more information.
+    2. If you want to load larger datasets, and an S engine provides insufficient performance, Firebolt recommends **scaling out**, or adding more nodes, first, as shown in the following diagram.
+
+ 
+ Scaling out can enhance performance for workloads with many similarly sized files, but it also increases billing costs.
+
+ Small and medium engines are available for use right away. If you want to use a large or extra-large engine, reach out to support@firebolt.io. For more information, see [Sizing Engines](/Guides/operate-engines/sizing-engines.html).
+3. Select the number of compute nodes to use to load your data next to **Number of nodes**. A node is an individual compute unit within a compute cluster.
+
+
+
+- Using more than one node allows Firebolt to load your data and perform operations on your data in parallel on multiple nodes within a single cluster, which can speed up the data loading process.
+- A higher number of nodes also means increased costs for compute resources. You can see the total cost per hour for your selection under **Advanced settings**, given in Firebolt Units (FBU). Each FBU costs $0.35 per hour. Find the right balance between cost and speed for your workload. You must use at least one node.
+
+
+
+4. Select the number of clusters next to **Number of clusters**. A cluster is a group of nodes that work together. The following apply:
+
+ - If you increase the number of clusters, you will add the number of compute nodes that you selected for each added cluster.
+
+ You can see the total cost per hour for your selection under **Advanced settings**, given in Firebolt Units (FBU). Find the right balance between cost and speed for your workload. You must use at least one cluster.
+5. Select the down arrow next to **Advanced settings** for more options for your engine, including setting a time to stop the engine after a period of inactivity.
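+
+As a rough illustration of how the hourly figure scales with these choices, the sketch below assumes a hypothetical engine with 2 clusters of 4 nodes each, where each node consumes 8 FBU per hour (the per-node FBU rate depends on node type and is an assumed value here), at $0.35 per FBU:
+
+```
+-- Illustrative arithmetic only; the per-node rate of 8 FBU/hour is hypothetical
+-- hourly cost = clusters * nodes per cluster * FBU per node * price per FBU
+SELECT 2 * 4 * 8 * 0.35 AS dollars_per_hour; -- 22.4
+```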
+
+## [](#set-up-aws-connection)Set up AWS connection
+
+
+
+### [](#a-using-public-data-that-do-not-require-access-credentials)A. Using public data that does not require access credentials
+
+- If the data is public and no credentials are needed, simply provide the URL of your Amazon S3 bucket and select **Next Step**.
+
+### [](#b-using-private-data-credentials-required)B. Using Private Data (Credentials Required)
+
+If the data requires credentials for access, you must provide them so that Firebolt can retrieve it from AWS on your behalf. You can choose either **Static Credentials** or **Assume Role Authentication**.
+
+- Use static credentials for simplicity and persistent access when security risks are low, and if your environment requires minimal configuration.
+- Use [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) authentication for enhanced security, temporary access, and dynamic role management, particularly in environments requiring fine-grained permissions or cross-account access.
+
+#### [](#1-static-credentials)1. Static Credentials
+
+1. Provide the **URL** for your Amazon S3 bucket.
+2. Enter your **AWS Key ID** and **AWS Secret Key**.
+3. For authentication:
+
+ - Select **Access Key ID & Secret Key** as your authentication method.
+ - The **AWS Key ID** is a 20-character string associated with an AWS user or IAM role (e.g., `AKIAIOSFODNN7EXAMPLE`).
+ - The **AWS Secret Key** is a 40-character string linked to the AWS Key ID (e.g., `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`).
+ - Optionally, you can also specify an **AWS Session Token**.
+ - For more information about these credentials, see [Create Access Key and Secret ID in AWS](/Guides/loading-data/creating-access-keys-aws.html).
+4. Select **Next Step**.
+
+#### [](#2-assume-role-authentication)2. Assume Role Authentication
+
+1. Select **IAM Role** as your authentication method.
+2. Select **Create an IAM role**. To allow Firebolt to read and write to your Amazon S3 bucket using dynamic credentials, you must do the following:
+
+ - Create an IAM Role.
+ - Define an **AssumeRole** Policy.
+3. After the role is created in your AWS account and the trust policy is attached, copy the **Amazon Resource Name (ARN)** of the role to your clipboard.
+4. Paste the ARN into the **Amazon Resource Name** field in Firebolt.
+5. Select **Next Step**.
+
+#### [](#3-using-firebolts-test-dataset-if-youre-not-ready-with-your-own-data)3. Using Firebolt’s Test Dataset (If You’re Not Ready with Your Own Data)
+
+If you don’t have your own data ready, you can use Firebolt’s sample dataset from the fictional company [Ultra Fast Gaming Inc](https://help.firebolt.io/t/ultra-fast-gaming-firebolt-sample-dataset/250):
+
+- Use the following Amazon S3 bucket URL: `s3://firebolt-publishing-public/help_center_assets/firebolt_sample_dataset/`.
+
+Alternatively, you can click the toggle button next to **Use Firebolt Playground Bucket to load sample data**. Then select **Next step**.
+
+## [](#select-data-to-ingest)Select data to ingest
+
+
+
+1. Select the data file that you want to load. Firebolt’s **Load data** wizard currently supports files in both CSV and Parquet formats. The contents of your S3 bucket are shown automatically along with their object type, size, and when the object was last modified.
+2. Enter text or a [prefix](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-prefixes.html) into the search field above **FILE NAME** to filter the list of objects. You can enter either part of the object’s name or the full prefix that it starts with.
+3. Select one file. Firebolt does not support selecting multiple files, or selecting folders.
+4. If you are using Firebolt’s test data, select the box next to `levels.csv`.
+5. Select **Next step**.
+
+## [](#set-up-destination)Set up destination
+
+
+Specify the table inside a database that you want to load your data into.
+
+1. You can either select an existing database from the drop-down list next to **Select database** or **Create new database**.
+
+    1. If you created a new database, enter a new database name and a new table to load your data into.
+ 2. If you selected an existing database, select the table in the database from the drop-down list next to **Select table**, or **Create new table** and provide a new table name.
+2. Select **Next step**.
+
+## [](#format-data)Format data
+
+
+A default formatting and error handling scheme shows a preview of your data. You can change the default configuration using the following options:
+
+1. Toggle off **Use default formatting** to show custom formatting options. You can specify options including different file delimiter, quote character, and escape character.
+
+ - Enter a new value in the text box or select an option from the drop-down arrow next to the option that you want to change.
+ - After each change, the data preview changes to reflect your selection.
+2. Toggle off **Use default error handling** to show the following additional error handling options:
+
+   - You can specify a file to write errors to. Enter the Amazon S3 URL of the location to write the error file to, along with your AWS credentials. Firebolt will use these credentials to write an error file on your behalf. The output location should be in the following format:
+
+ ```
+     s3://<bucket_name>/<path>/
+ ```
+ - **Max errors per file** - Specify the percentage of errors you want to allow during data loading. By default, the maximum is set to `0%`, meaning any error will stop the loading process. If you wish to continue loading despite errors, set **Max errors per file** to a non-zero value. For example, entering `10%` or `10` allows the process to continue until errors affect `10%` of the rows.
+3. Select **Next step**.
+
+## [](#map-data)Map data
+
+
+
+Map the values in your data to columns in the target table. Firebolt automatically detects the schema of your data and displays information including the detected column names, type, and a preview of the data in the next window. By default, each column has a checkbox next to its name. Deselect the box if you don’t want to load the column. You can adjust the schema for the following items:
+
+1. **Type** - you can change the [data type](/sql_reference/data-types.html) of the column.
+2. **Nullable** - toggle this switch to `ON` if the columns in your data can contain `NULL` values. If this value is toggled off for a column, and that column contains `NULL` values, then the wizard will generate an error and stop loading.
+3. **Primary index** - toggle this switch to `ON` for the columns you want to include in your primary index.
+
+   - One of Firebolt’s key optimization strategies is to use a primary index that ties to columns that are used frequently in `WHERE`, `JOIN`, `GROUP BY`, and other clauses used for sorting. Selecting the best primary index, which is a sparse index, can reduce query run times significantly by reducing the data set that the query scans. A primary index also allows Firebolt to manage updates, deletions, and insertions to tables and provide optimal query performance.
+ - It’s best if you choose a primary index based on knowledge about your data and query history. If you don’t know which column(s) to select, you can use Firebolt’s suggested primary indexes by keeping **Automatically assign primary indexes** checked, as shown in the following image:
+
+ 
+
+ Using Firebolt’s suggested primary index is preferable to having none. In the absence of a query history, Firebolt prioritizes choosing a column for the primary index in the following order: a datetime or timestamp column, a column with low cardinality, or the first column.
+ - If you include multiple columns as a composite primary index, they will be added in sort order. For example, if you select `column_1` first, then select `column_3`, then `column_3` will be added as a primary index after `column_1`. This means `column_1` will be used first as a sparse index, followed by `column_3`. If you choose more than one primary index, the order of sorting appears next to the toggle switch under the **Primary Index** column. In the previous example, the number `1` appears next to `column_1` and a number `2` appears next to `column_3`. To achieve optimal results, choose indexes in the order of their cardinality, or the number of unique values. Start with the column that has the highest number of unique values as your first primary index, followed by the column with the next highest cardinality. For more information about how to choose a primary index, see [Primary index](/Overview/indexes/primary-index.html).
+4. Select **Next step**.
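+
+The primary-index choices described above correspond to a table definition like the following sketch, where the table and column names are hypothetical and the composite primary index lists columns in descending order of cardinality:
+
+```
+-- Hypothetical schema: player_id has the highest cardinality, so it comes first
+CREATE TABLE IF NOT EXISTS player_scores (
+    player_id INTEGER,
+    game_id INTEGER,   -- lower cardinality, listed second
+    score BIGINT,
+    played_at TIMESTAMPNTZ
+) PRIMARY INDEX player_id, game_id;
+```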
+
+## [](#review-configuration)Review configuration
+
+The **Review configuration** window displays your selections in SQL code. If you want to change the configuration, you must go back through the **Load data** wizard workflow to the section that you want to change and amend your selection. You cannot edit the SQL code in the **Review configuration** window.
+
+1. Select **Run ingestion** to load your data. The **Load data** wizard completes and your configuration will run in the **Develop Space** inside the **Firebolt Workspace**. The main window in the **SQL editor** contains the SQL script that configures your load data selections, and may contain several queries.
+
+## [](#view-results-and-query-statistics)View results and query statistics
+
+
+
+After your load data job completes, you can view the results of each query that was configured by the **Load data** wizard in the Firebolt user interface under **Results** in the bottom window. If you need to edit the queries, you can enter the change into the **SQL Editor** directly and select **Run**.
+
+1. View information about your query in the **Statistics** tab. This information contains the status of the query, how long it took to run, and the number of rows processed during the data loading job.
+2. View metrics in the **Query Profile** tab for each operator used in your query. Select an operation to view metrics. These metrics include the following:
+
+ 1. The output cardinality - the number of rows each operator produced.
+ 2. The thread time - the sum of the wall clock time that threads spent to run the selected operation across all nodes.
+ 3. The CPU time - the sum of the time that threads that ran the operator were scheduled on a CPU core.
+ 4. The output types - the data types of the result of the query.
+
+   You can use metrics in the **Query Profile** tab to analyze and measure the efficiency and performance of your query. For example, if the CPU time is much smaller than thread time, the input-output (IO) latency may be high, or the engine that you are using may be running multiple queries at the same time. For more information, see [Example with ANALYZE](/sql_reference/commands/queries/explain.html).
+3. View monitoring information including the percent CPU, memory, disk use and cache read in the **Engine monitoring** tab. Information is shown from the last 5 minutes by default. Select a different time interval from the drop-down menu next to **Last 5 minutes**. You can also select the **Refresh** icon next to the drop-down menu to update the graphical information.
+4. View detailed information associated with each query in the **Query history** tab. This information includes the query status, start time, number of rows and bytes scanned during the load, user and account information. You can do the following:
+
+ 1. Select the **Refresh** icon to update the query history and ID.
+   2. Select the filter icon to remove or add columns to display.
+   3. Select the **More options** icon to export the contents of the **Query history** tab to a JSON or CSV file.
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_loading_data_working_with_external_tables.md b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_working_with_external_tables.md
new file mode 100644
index 0000000..36e5d08
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_working_with_external_tables.md
@@ -0,0 +1,87 @@
+# [](#work-with-external-tables)Work with external tables
+
+Firebolt supports loading data using *external tables*, which are different from [fact and dimension tables](/Overview/indexes/using-indexes.html#firebolt-managed-tables). External tables store metadata objects that reference files stored in an Amazon S3 bucket, rather than actual data.
+
+To create an external table, run the [CREATE EXTERNAL TABLE](/sql_reference/commands/data-definition/create-external-table.html) command. After you create an external table, use the [INSERT](/sql_reference/commands/data-management/insert.html) command to load the data from the external table into a fact or dimension table. Data that you ingest must be in the same AWS Region as the target Firebolt database.
+
+Although you can run a query over an external table to return query results, we don’t recommend it. Such a query will be significantly slower than the same query run over the same data in a fact or dimension table because of the data transfer between Firebolt and your data store. We strongly recommend that you use external tables only for ingestion, specifying the table and its columns only in the `FROM` clause of an `INSERT` statement.
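+
+In practice, the recommended pattern looks like the following sketch, where the table and column names are hypothetical:
+
+```
+-- Recommended: reference the external table only in the FROM clause of an INSERT
+INSERT INTO my_fact_table
+SELECT
+    c_id,
+    c_name
+FROM my_ext_table;
+```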
+
+## [](#workflows)Workflows
+
+For a simple end-to-end workflow that demonstrates loading data into Firebolt, see the [Getting started tutorial](/Guides/getting-started/).
+
+## [](#supported-file-formats)Supported file formats
+
+Firebolt supports loading the following source file formats from S3: `PARQUET`, `CSV`, `TSV`, `AVRO`, `JSON` ([JSON Lines](https://jsonlines.org/)), and `ORC`. We are quick to add support for more types, so make sure to let us know if you need it.
+
+## [](#using-metadata-virtual-columns)Using metadata virtual columns
+
+Firebolt external tables include metadata virtual columns that Firebolt populates with useful system data during ingestion. Firebolt includes these columns automatically. You don’t need to specify them in the `CREATE EXTERNAL TABLE` statement.
+
+When you use an external table to ingest data, you can explicitly reference these columns to ingest the metadata. First, you define the columns in a `CREATE FACT|DIMENSION TABLE` statement. Next, you specify the virtual column names to select in the `INSERT INTO` statement, with the fact or dimension table as the target. You can then query the columns in the fact or dimension table for analysis, troubleshooting, and to implement logic. For more information, see the example below.
+
+The metadata virtual columns listed below are available in external tables.
+
+| Metadata column name | Description | Data type |
+| --- | --- | --- |
+| `$source_file_name` | The full path of the row data’s source file in Amazon S3, without the bucket. For example, with a source file of `s3://my_bucket/xyz/year=2018/month=01/part-00001.parquet`, the `$source_file_name` is `xyz/year=2018/month=01/part-00001.parquet`. | `TEXT` |
+| `$source_file_timestamp` | The UTC creation timestamp in second resolution of the row’s source file in Amazon S3. (S3 objects are immutable. In cases where files are overwritten with new data, this is the Last Modified time.) | `TIMESTAMPTZ` |
+| `$source_file_size` | Size in bytes of the row’s source file in Amazon S3. | `BIGINT` |
+
+### [](#examplequerying-metadata-virtual-column-values)Example–querying metadata virtual column values
+
+The query example below creates an external table that references an AWS S3 bucket that contains Parquet files from which Firebolt will ingest values for `c_id` and `c_name`.
+
+```
+CREATE EXTERNAL TABLE my_external_table
+ (
+ c_id INTEGER,
+ c_name TEXT
+ )
+ CREDENTIALS = (
+ AWS_ACCESS_KEY_ID = 'AKIAIOSFODNN7EXAMPLE' AWS_SECRET_ACCESS_KEY = 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
+ )
+ URL = 's3://my_bucket/'
+ OBJECT_PATTERN= '*.parquet'
+ TYPE = (PARQUET);
+```
+
+The query example below creates a dimension table, which will be the target for the data to be ingested. The statement defines two additional columns, `source_file_name` and `source_file_timestamp`, to hold the values of the `$source_file_name` and `$source_file_timestamp` metadata columns that Firebolt creates automatically for the external table.
+
+```
+CREATE DIMENSION TABLE my_dim_table_with_metadata
+(
+  c_id INTEGER,
+  c_name TEXT,
+  source_file_name TEXT,
+  source_file_timestamp TIMESTAMPTZ
+);
+```
+
+Finally, the `INSERT` query below ingests the data from `my_external_table` into `my_dim_table_with_metadata`. The `SELECT` clause explicitly specifies the metadata virtual columns, which is a requirement.
+
+```
+INSERT INTO
+ my_dim_table_with_metadata
+SELECT
+ *,
+ $source_file_name,
+ $source_file_timestamp
+FROM
+ my_external_table;
+```
+
+An example `SELECT` query over `my_dim_table_with_metadata` shows that the source data file (minus the `s3://my_bucket` portion of the file path) and file timestamp are included in the dimension table for each row.
+
+```
+SELECT * FROM my_dim_table_with_metadata;
+```
+
+```
++-----------+---------------------+-------------------------+------------------------+
+| c_id | c_name | source_file_name | source_file_timestamp |
++-----------+---------------------+-------------------------+------------------------+
+| 11385 | ClevelandDC8933 | central/cle.parquet | 2021-09-10 10:32:03+00 |
+| 12386 | PortlandXfer9483 | west/pdx.parquet | 2021-09-10 10:32:04+00 |
+| 12387 | PortlandXfer9449 | west/pdx.parquet | 2021-09-10 10:32:04+00 |
+| 12388 | PortlandXfer9462 | west/pdx.parquet | 2021-09-10 10:32:04+00 |
+| 12387 | NashvilleXfer9987 | south/bna.parquet | 2021-09-10 10:33:01+00 |
+| 12499 | ClevelandXfer8998 | central/cle.parquet | 2021-09-10 10:32:03+00 |
+[...]
+```
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_loading_data_working_with_semi_structured_data_load_json_data.md b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_working_with_semi_structured_data_load_json_data.md
new file mode 100644
index 0000000..bb95ebc
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_working_with_semi_structured_data_load_json_data.md
@@ -0,0 +1,193 @@
+# [](#load-semi-structured-json-data)Load semi-structured JSON data
+
+Semi-structured data does not follow a strict table format but contains structured tags or key-value pairs. JSON is an example of semi-structured data. Firebolt supports the following three ways to ingest JSON based on how your data changes and how you query it:
+
+- [Load JSON into a fixed schema](#load-json-into-a-fixed-schema) if your JSON data has a stable set of fields with shallow nesting.
+- [Transform the input during load](#transform-the-input-during-load) if your table must always contain certain fields.
+- [Store JSON as text](#store-json-as-text) if you need only specific fields on demand or if the table structure changes frequently.
+
+This document shows you how to load data using each of the previous methods and the sample JSON dataset in the following section.
+
+### [](#sample-json-dataset)Sample JSON dataset
+
+The following JSON data shows two session records for a website, where each line represents a single JSON object. This sample data is used in each of the examples in this document.
+
+```
+[
+ {
+ "id": 1,
+ "StartTime": "2020-01-06 17:00:00",
+ "Duration": 450,
+ "tags": ["summer-sale", "sports"],
+ "user_agent": {
+ "agent": "Mozilla/5.0",
+ "platform": "Windows NT 6.1",
+ "resolution": "1024x4069"
+ }
+ },
+ {
+ "id": 2,
+ "StartTime": "2020-01-05 12:00:00",
+ "Duration": 959,
+ "tags": ["gadgets", "audio"],
+ "user_agent": {
+ "agent": "Safari",
+ "platform": "iOS 14"
+ }
+ }
+]
+```
+
+The following code example creates a staging table that stores the raw JSON data, allowing you to run the subsequent examples:
+
+```
+-- Create a staging table for raw JSON data with one JSON object per row
+DROP TABLE IF EXISTS doc_visits_source;
+CREATE TABLE doc_visits_source (
+ raw_json TEXT
+);
+
+-- Insert raw JSON data as individual rows
+INSERT INTO doc_visits_source (raw_json)
+VALUES
+('{"id": 1, "StartTime": "2020-01-06 17:00:00", "Duration": 450, "tags": ["summer-sale", "sports"], "user_agent": {"agent": "Mozilla/5.0", "platform": "Windows NT 6.1", "resolution": "1024x4069"}}'),
+('{"id": 2, "StartTime": "2020-01-05 12:00:00", "Duration": 959, "tags": ["gadgets", "audio"], "user_agent": {"agent": "Safari", "platform": "iOS 14"}}');
+```
+
+If you want to load JSON data from an Amazon S3 bucket, you can create an external table that references the file as follows:
+
+```
+CREATE EXTERNAL TABLE visits_external (
+ raw_json TEXT
+)
+CREDENTIALS = (AWS_ACCESS_KEY_ID = '****' AWS_SECRET_ACCESS_KEY = '****')
+URL = 's3://your-bucket-name/path/to/json-file/'
+OBJECT_PATTERN = '*.json'
+TYPE = (JSON PARSE_AS_TEXT = 'TRUE');
+```
+
+## [](#load-json-into-a-fixed-schema)Load JSON into a fixed schema
+
+If your JSON data has a stable set of fields with shallow nesting, you can load it into a table with a fixed schema to simplify queries. Missing keys are assigned default values, and you can query columns directly without additional parsing, which makes queries faster and easier to write. Extra keys that are not explicitly mapped are excluded from the structured table, making this approach less flexible for changing data; if you also store the raw JSON in a separate `TEXT` column, those keys remain accessible for later extraction.
+
+The following code example uses the previously created `doc_visits_source` table to define columns that map directly to known keys:
+
+```
+-- Create the target table 'visits_fixed' with a fixed schema
+DROP TABLE IF EXISTS visits_fixed;
+CREATE FACT TABLE visits_fixed (
+ id INT DEFAULT 0,
+ start_time TIMESTAMP DEFAULT '1970-01-01 00:00:00',
+ duration INT DEFAULT 0,
+ tags ARRAY(TEXT) DEFAULT []
+)
+PRIMARY INDEX start_time;
+
+-- Insert data into 'visits_fixed' by extracting values from the raw JSON
+INSERT INTO visits_fixed
+SELECT
+ JSON_POINTER_EXTRACT(raw_json, '/id')::INT AS id,
+ TO_TIMESTAMP(TRIM(BOTH '"' FROM JSON_POINTER_EXTRACT(raw_json, '/StartTime')), 'YYYY-MM-DD HH24:MI:SS') AS start_time,
+ JSON_POINTER_EXTRACT(raw_json, '/Duration')::INT AS duration,
+ JSON_POINTER_EXTRACT(raw_json, '/tags')::ARRAY(TEXT) AS tags
+FROM doc_visits_source;
+```
+
+The following table shows the expected results:
+
+| id | start_time | duration | tags |
+| --- | --- | --- | --- |
+| 1 | 1/6/2020 17:00 | 450 | ["summer-sale", "sports"] |
+| 2 | 1/5/2020 12:00 | 959 | ["gadgets", "audio"] |
+
+Important characteristics of the table:
+
+- The mandatory scalar fields, `id`, `start_time`, and `duration`, are stored in separate columns, which makes it easier to filter, sort, or join by these fields.
+- Each column maps directly to a known JSON key, allowing for simpler queries without the need for JSON functions.
+- Default values ensure that the table loads even if some fields are missing or additional keys appear. Extra JSON fields such as `user_agent` with `agent`, `platform`, and `resolution` are ignored and not stored in the table.
+- Array columns are used to store `tags`, which supports arbitrary numbers of values without schema changes.
+
+## [](#transform-the-input-during-load)Transform the input during load
+
+Parsing JSON data during ingestion eliminates the need for subsequent query-time parsing, simplifying and accelerating queries. However, transforming data during load also requires well-defined JSON paths that remain consistent. If the JSON paths change, the load might fail.
+
+The following code example uses the previously created `doc_visits_source` table to parse JSON data as it loads and inserts extracted fields into a Firebolt table named `visits_transformed`. It shows how to use [JSON\_POINTER\_EXTRACT\_KEYS](/sql_reference/functions-reference/JSON/json-pointer-extract-keys.html) and [JSON\_POINTER\_EXTRACT\_VALUES](/sql_reference/functions-reference/JSON/json-pointer-extract-values.html) to store a dynamic key-value pair – `agent_props_keys` and `agent_props_vals` – from a nested object:
+
+```
+DROP TABLE IF EXISTS visits_transformed;
+CREATE FACT TABLE visits_transformed (
+ id INT,
+ start_time TIMESTAMP,
+ duration INT,
+ tags ARRAY(TEXT),
+ agent_props_keys ARRAY(TEXT),
+ agent_props_vals ARRAY(TEXT)
+)
+PRIMARY INDEX start_time;
+
+INSERT INTO visits_transformed
+SELECT
+ JSON_POINTER_EXTRACT(raw_json, '/id')::INT,
+ TO_TIMESTAMP(TRIM(BOTH '"' FROM JSON_POINTER_EXTRACT(raw_json, '/StartTime')), 'YYYY-MM-DD HH24:MI:SS'),
+ JSON_POINTER_EXTRACT(raw_json, '/Duration')::INT,
+ JSON_POINTER_EXTRACT(raw_json, '/tags')::ARRAY(TEXT),
+ JSON_POINTER_EXTRACT_KEYS(raw_json, '/user_agent')::ARRAY(TEXT),
+ JSON_POINTER_EXTRACT_VALUES(raw_json, '/user_agent')::ARRAY(TEXT)
+FROM doc_visits_source;
+```
+
+The following table shows the expected results:
+
+| id | start_time | duration | tags | agent_props_keys | agent_props_vals |
+| --- | --- | --- | --- | --- | --- |
+| 1 | 1/6/2020 17:00 | 450 | ["summer-sale", "sports"] | ["agent", "platform", "resolution"] | ["Mozilla/5.0", "Windows NT 6.1", "1024x4069"] |
+| 2 | 1/5/2020 12:00 | 959 | ["gadgets", "audio"] | ["agent", "platform"] | ["Safari", "iOS 14"] |
+
+Important characteristics of the previous table:
+
+- The `user_agent` object is stored in two arrays: `agent_props_keys` and `agent_props_vals`. The [`JSON_POINTER_EXTRACT_KEYS`](/sql_reference/functions-reference/JSON/json-pointer-extract-keys.html) function extracts the keys from the `user_agent` object into the `agent_props_keys` array. The [`JSON_POINTER_EXTRACT_VALUES`](/sql_reference/functions-reference/JSON/json-pointer-extract-values.html) function extracts the corresponding values into the `agent_props_vals` array. Storing keys and values in parallel arrays offers flexibility when the `user_agent` map changes and avoids schema updates for new or removed fields.
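+
+At query time, you can recover a single value by its key with a Lambda function. The following sketch assumes the `visits_transformed` table created above; the same `ARRAY_FIRST` pattern is described in [Working with arrays](/Guides/loading-data/working-with-semi-structured-data/working-with-arrays.html):
+
+```
+SELECT
+  id,
+  ARRAY_FIRST(v, k -> k = 'platform', agent_props_vals, agent_props_keys) AS platform
+FROM visits_transformed;
+```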
+
+A common error may occur if a field path does not exist in the JSON document. Firebolt returns an error because `NULL` values cannot be cast to `INT`. For example, the following query attempts to extract a non-existent field `/unknown_field` and cast it to `INT`, which results in an error:
+
+```
+SELECT JSON_POINTER_EXTRACT(raw_json, '/unknown_field')::INT
+FROM doc_visits_source;
+```
+
+To avoid this error, use a default value or conditional expression as shown in the following code example:
+
+```
+INSERT INTO visits_transformed
+SELECT
+ CASE
+ WHEN JSON_POINTER_EXTRACT(raw_json, '/unknown_field') IS NOT NULL
+ THEN JSON_POINTER_EXTRACT(raw_json, '/unknown_field')::INT
+ ELSE NULL
+ END AS id
+FROM doc_visits_source;
+```
+
+The following table shows the expected results:
+
+| id | start_time | duration | tags |
+| --- | --- | --- | --- |
+| 0 | NULL | NULL | NULL |
+| 0 | NULL | NULL | NULL |
+| 1 | 1/6/2020 17:00 | 450 | ["summer-sale", "sports"] |
+| 2 | 1/5/2020 12:00 | 959 | ["gadgets", "audio"] |
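+
+If you prefer an explicit fallback value over `NULL`, a `COALESCE`-based sketch can replace the `CASE` expression. This assumes standard `COALESCE` behavior; `'0'` is an arbitrary fallback chosen for illustration:
+
+```
+SELECT COALESCE(JSON_POINTER_EXTRACT(raw_json, '/unknown_field'), '0')::INT AS id
+FROM doc_visits_source;
+```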
+
+## [](#store-json-as-text)Store JSON as text
+
+You can store JSON as a single text column if the data structure changes frequently or if you only need certain fields in some queries. This approach simplifies ingestion since no parsing occurs during loading, but it requires parsing fields at query time, which can make queries more complex if you need to extract many fields regularly.
+
+The following code example uses the previously created intermediary `doc_visits_source` table to create a permanent table that stores raw JSON, allowing you to parse only what you need on demand:
+
+```
+DROP TABLE IF EXISTS visits_raw;
+CREATE FACT TABLE visits_raw (
+ raw_json TEXT
+);
+
+-- Insert data into the 'visits_raw' table from the staging table
+INSERT INTO visits_raw
+SELECT raw_json
+FROM doc_visits_source;
+```
+
+The following table shows the expected results:
+
+| raw_json |
+| --- |
+| {"id": 1, "StartTime": "2020-01-06 17:00:00", "Duration": 450, "tags": ["summer-sale", "sports"], "user_agent": {"agent": "Mozilla/5.0", "platform": "Windows NT 6.1", "resolution": "1024x4069"}} |
+| {"id": 2, "StartTime": "2020-01-05 12:00:00", "Duration": 959, "tags": ["gadgets", "audio"], "user_agent": {"agent": "Safari", "platform": "iOS 14"}} |
+
+Important characteristics of the table:
+
+- Unlike the [previous table example](#transform-the-input-during-load), fields such as `id`, `start_time`, `duration`, and `tags` are not defined as separate columns.
+- Each row in the previous table contains a complete JSON object stored in a single `TEXT` column, rather than being parsed into separate fields. This approach is beneficial when the required fields are unknown at ingestion or the JSON structure changes frequently, allowing for flexible data storage without modifying the schema. Fields can be extracted dynamically at query time using Firebolt’s JSON functions, though frequent parsing may increase query complexity and cost.
+- Parsing occurs at query time, which can save upfront processing when data is loaded, but it might increase query complexity and cost if you need to parse many fields frequently.
+- Subsequent queries need to extract fields manually with JSON functions as needed.
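+
+At query time, you extract individual fields from the stored text with the same JSON functions used earlier. A minimal sketch, assuming the `visits_raw` table above:
+
+```
+SELECT
+  JSON_POINTER_EXTRACT(raw_json, '/id')::INT AS id,
+  JSON_POINTER_EXTRACT(raw_json, '/user_agent/platform') AS platform
+FROM visits_raw;
+```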
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_loading_data_working_with_semi_structured_data_load_parquet_data.md b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_working_with_semi_structured_data_load_parquet_data.md
new file mode 100644
index 0000000..9594460
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_working_with_semi_structured_data_load_parquet_data.md
@@ -0,0 +1,179 @@
+# [](#load-semi-structured-parquet-data)Load semi-structured Parquet data
+
+Apache Parquet is a binary file format that supports both structured columns and semi-structured data, including arrays, structs, and maps. When these nested structures do not align to a strictly relational schema, they are described as semi-structured. Firebolt’s external tables support extracting these semi-structured fields from Parquet files, treating them similarly to other semi-structured data such as JSON. This document shows how to load and query Parquet data that is stored as structs in arrays or as maps of key-value pairs.
+
+- [Defining external table columns for Parquet arrays and maps](#defining-external-table-columns-for-parquet-arrays-and-maps)
+- [Syntax for defining a Parquet nested structure](#syntax-for-defining-a-parquet-nested-structure)
+- [Example–ingest and work with structs inside Parquet arrays](#exampleingest-and-work-with-structs-inside-parquet-arrays)
+
+ - [Step 1–create an external table](#step-1create-an-external-table)
+ - [Step 2–create a fact or dimension table](#step-2create-a-fact-or-dimension-table)
+ - [Step 3–insert into the fact table from the external table](#step-3insert-into-the-fact-table-from-the-external-table)
+ - [Step 4–query array values](#step-4query-array-values)
+- [Example–ingest and work with maps](#exampleingest-and-work-with-maps)
+
+ - [Step 1–create an external table](#step-1create-an-external-table-1)
+ - [Step 2–create a fact or dimension table](#step-2create-a-fact-or-dimension-table-1)
+ - [Step 3–insert into the fact table from the external table](#step-3insert-into-the-fact-table-from-the-external-table-1)
+ - [Step 4–query map values](#step-4query-map-values)
+
+## [](#defining-external-table-columns-for-parquet-arrays-and-maps)Defining external table columns for Parquet arrays and maps
+
+When you set up an external table to ingest Parquet data files, you use a hierarchical dotted notation syntax to define table columns. Firebolt uses this notation to identify the field to ingest.
+
+## [](#syntax-for-defining-a-parquet-nested-structure)Syntax for defining a Parquet nested structure
+
+You specify the top grouping element of a nested structure in Parquet followed by the field in that structure that contains the data to ingest. You then declare the column type using the `ARRAY(<data_type>)` notation, where `<data_type>` is the [Firebolt data type](/sql_reference/data-types.html) corresponding to the data type of the field in Parquet.
+
+```
+"<top_group_element>.<field>" ARRAY(<data_type>)
+```
+
+Examples of this syntax in `CREATE EXTERNAL TABLE` queries are demonstrated below.
+
+## [](#exampleingest-and-work-with-structs-inside-parquet-arrays)Example–ingest and work with structs inside Parquet arrays
+
+Consider the Parquet schema example below. The following elements define an array of structs:
+
+- A single, optional group field, `hashtags`, contains any number of another group, `bag`. This is the top grouping element.
+- The `bag` groups each contain a single, optional group, `array_element`.
+- The `array_element` group contains a single, optional field, `some_value`.
+- The `some_value` field contains a value of a `TEXT`-compatible type (a binary primitive in Parquet).
+
+```
+optional group hashtags (LIST) {
+ repeated group bag {
+ optional group array_element {
+ optional binary some_value (TEXT);
+ }
+ }
+}
+```
+
+The steps below demonstrate the process to ingest the array values into Firebolt. You create an external table, create a fact table, and insert data into the fact table from the external table, which is connected to the Parquet data store.
+
+### [](#step-1create-an-external-table)Step 1–create an external table
+
+The `CREATE EXTERNAL TABLE` example below creates a column in an external table from the Parquet schema shown in the example above. The column definition uses the top level grouping `hashtags` followed by the field `some_value`. Intermediate nesting levels are omitted.
+
+```
+CREATE EXTERNAL TABLE IF NOT EXISTS my_parquet_array_ext_tbl
+(
+ [...,] --additional columns possible, not shown
+ "hashtags.some_value" ARRAY(TEXT)
+ [,...]
+)
+CREDENTIALS = (AWS_KEY_ID = '****' AWS_SECRET_KEY = '*****')
+URL = 's3://my_bucket_of_parquet_goodies/'
+OBJECT_PATTERN = '*.parquet'
+TYPE = (PARQUET);
+```
+
+### [](#step-2create-a-fact-or-dimension-table)Step 2–create a fact or dimension table
+
+Create a fact or dimension table that defines a column of the same `ARRAY(TEXT)` type that you defined in the external table in step 1. The example below demonstrates this for a fact table.
+
+```
+CREATE FACT TABLE IF NOT EXISTS my_parquet_array_fact_tbl
+(
+ [...,] --additional columns possible, not shown
+ some_value ARRAY(TEXT)
+ [,...]
+)
+[...]
+--required primary index for fact table not shown
+--optional partitions not shown
+;
+```
+
+### [](#step-3insert-into-the-fact-table-from-the-external-table)Step 3–insert into the fact table from the external table
+
+The example below demonstrates an `INSERT` statement that selects the array values from Parquet data files using the external table column definition in step 1, and then inserts them into the specified fact table column, `some_value`.
+
+```
+INSERT INTO my_parquet_array_fact_tbl
+ SELECT "hashtags.some_value" AS some_value
+ FROM my_parquet_array_ext_tbl;
+```
+
+### [](#step-4query-array-values)Step 4–query array values
+
+After you ingest array values into the fact table, you can query and manipulate the array using array functions and Lambda functions. For more information, see [Working with arrays](/Guides/loading-data/working-with-semi-structured-data/working-with-arrays.html).
+
+Use multipart Parquet column names to extract data from nested structures. For simple `ARRAY(TEXT)`, use a single top-level field name.
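+
+As a brief sketch, assuming the `my_parquet_array_fact_tbl` table from step 2 and the `ANY_MATCH` Lambda function (`'some_tag'` is a placeholder value for illustration):
+
+```
+SELECT
+  LENGTH(some_value) AS value_count
+FROM my_parquet_array_fact_tbl
+WHERE ANY_MATCH(x -> x = 'some_tag', some_value);
+```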
+
+## [](#exampleingest-and-work-with-maps)Example–ingest and work with maps
+
+External tables connected to AWS Glue currently do not support reading maps from Parquet files.
+
+Parquet stores maps as arrays of key-value pairs, where each key\_value group contains a key and its corresponding value. Consider the Parquet schema example below. The following define the key-value elements of the map:
+
+- A single, optional group, `context`, is a group of mappings that contains any number of the group `key_value`.
+- The `key_value` groups each contain a required field, `key`, which contains the key name as a `TEXT`. Each group also contains an optional field `value`, which contains the value as a `TEXT` corresponding to the key name in the same `key_value` group.
+
+```
+optional group context (MAP) {
+  repeated group key_value {
+    required binary key (TEXT);
+    optional binary value (TEXT);
+  }
+}
+```
+
+The steps below demonstrate the process of creating an external table, creating a fact table, and inserting data into the fact table from the Parquet file using the external table.
+
+### [](#step-1create-an-external-table-1)Step 1–create an external table
+
+When you create an external table for a Parquet map, you use the same syntax that you use in the example for arrays above. You create one column for keys and another column for values. The `CREATE EXTERNAL TABLE` example below demonstrates this.
+
+```
+CREATE EXTERNAL TABLE IF NOT EXISTS my_parquet_map_ext_tbl
+(
+ "context.keys" ARRAY(TEXT),
+ "context.values" ARRAY(TEXT)
+)
+CREDENTIALS = (AWS_KEY_ID = '****' AWS_SECRET_KEY = '*****')
+URL = 's3://my_bucket_of_parquet/'
+OBJECT_PATTERN = '*.parquet'
+TYPE = (PARQUET);
+```
+
+### [](#step-2create-a-fact-or-dimension-table-1)Step 2–create a fact or dimension table
+
+Create a Firebolt fact or dimension table that defines columns of the same `ARRAY(TEXT)` types that you defined in the external table in step 1. The example below demonstrates this for a fact table.
+
+```
+CREATE FACT TABLE IF NOT EXISTS my_parquet_map_fact_tbl
+(
+ [...,] --additional columns possible, not shown
+ my_parquet_array_keys ARRAY(TEXT),
+ my_parquet_array_values ARRAY(TEXT)
+ [,...]
+)
+[...] --required primary index for fact table not shown
+ --optional partitions not shown
+```
+
+### [](#step-3insert-into-the-fact-table-from-the-external-table-1)Step 3–insert into the fact table from the external table
+
+The example below demonstrates an `INSERT INTO` statement that selects the array values from Parquet data files using the external table column definition in step 1, and inserts them into the specified fact table columns, `my_parquet_array_keys` and `my_parquet_array_values`.
+
+```
+INSERT INTO my_parquet_map_fact_tbl
+ SELECT "context.keys" AS my_parquet_array_keys,
+ "context.values" AS my_parquet_array_values
+ FROM my_parquet_map_ext_tbl;
+```
+
+### [](#step-4query-map-values)Step 4–query map values
+
+After you ingest array values into the fact table, you can query and manipulate the array using array functions and Lambda functions. For more information, see [Working with arrays](/Guides/loading-data/working-with-semi-structured-data/working-with-arrays.html).
+
+A query that uses a Lambda function to return a specific value by specifying the corresponding key value is shown below. For more information, see [Manipulating arrays using Lambda functions](/Guides/loading-data/working-with-semi-structured-data/working-with-arrays.html#manipulating-arrays-with-lambda-functions).
+
+```
+SELECT
+ ARRAY_FIRST(v, k -> k = 'key_name_of_interest', my_parquet_array_keys, my_parquet_array_values) AS returned_corresponding_key_value
+FROM
+  my_parquet_map_fact_tbl;
+```
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_loading_data_working_with_semi_structured_data_working_with_arrays.md b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_working_with_semi_structured_data_working_with_arrays.md
new file mode 100644
index 0000000..50554b1
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_working_with_semi_structured_data_working_with_arrays.md
@@ -0,0 +1,387 @@
+# [](#work-with-arrays)Work with arrays
+
+This section covers querying and manipulating arrays in Firebolt.
+
+- [Declaring ARRAY data types in table definitions](#declaring-array-data-types-in-table-definitions)
+- [Simple array functions](#simple-array-functions)
+- [Manipulating arrays with Lambda functions](#manipulating-arrays-with-lambda-functions)
+
+ - [Lambda function general syntax](#lambda-function-general-syntax)
+ - [Lambda function example–single array](#lambda-function-examplesingle-array)
+ - [Lambda function example–multiple arrays](#lambda-function-examplemultiple-arrays)
+- [UNNEST](#unnest)
+
+ - [Example–single UNNEST with single ARRAY-typed column](#examplesingle-unnest-with-single-array-typed-column)
+ - [Example–single UNNEST using multiple ARRAY-typed columns](#examplesingle-unnest-using-multiple-array-typed-columns)
+ - [Example–multiple UNNEST clauses resulting in a Cartesian product](#examplemultiple-unnest-clauses-resulting-in-a-cartesian-product)
+ - [Example–error on UNNEST of multiple arrays with different lengths](#exampleerror-on-unnest-of-multiple-arrays-with-different-lengths)
+- [ARRAY input and output syntax](#array-input-and-output-syntax)
+
+ - [Converting ARRAY to TEXT](#converting-array-to-text)
+ - [Converting TEXT to ARRAY](#converting-text-to-array)
+ - [Nested ARRAYs](#nested-arrays)
+
+## [](#declaring-array-data-types-in-table-definitions)Declaring ARRAY data types in table definitions
+
+Array types are declared using `ARRAY(<data_type>)`, where `<data_type>` can be any data type that Firebolt supports. This includes the `ARRAY` data type, so arrays can be arbitrarily nested.
+
+If you load an array from a CSV file, the arrays in the CSV file must be enclosed in double quotes (`""`).
+
+For example, if a CSV file contains a row with the values `value1, value2, "[array_value3, array_value4]"`, you can create a table using the following code to read `array_value3` and `array_value4` into `array_values`.
+
+```
+CREATE EXTERNAL TABLE IF NOT EXISTS array_example
+  (
+    value1 TEXT,
+    value2 TEXT,
+    array_values ARRAY(TEXT)
+  )
+  URL = 's3://path_to_your_data/'
+  TYPE = (CSV);
+```
+
+Array literals are also supported. For example, the `SELECT` statement shown below is valid.
+
+```
+SELECT [1, 2, 3, 4];
+```
+
+### [](#basis-for-examples)Basis for examples
+
+All examples in this topic are based on the table below, named `visits`. The column `id` is of type `INTEGER`. All other columns are of type `ARRAY(TEXT)`.
+
+| id | tags | agent_props_keys | agent_props_vals |
+| --- | --- | --- | --- |
+| 1 | ["summer-sale", "sports"] | ["agent", "platform", "resolution"] | ["Mozilla/5.0", "Windows NT 6.1", "1024x4069"] |
+| 2 | ["gadgets", "audio"] | ["agent", "platform"] | ["Safari", "iOS 14"] |
+| 3 | ["summer-sale", "audio"] | ["agent", "platform", "platform"] | ["Safari", "iOS 14", "Windows 11"] |
+
+## [](#simple-array-functions)Simple array functions
+
+There are several fundamental functions that you can use to work with arrays, including [ARRAY\_LENGTH](/sql_reference/functions-reference/array/array-length.html), [ARRAY\_CONCAT](/sql_reference/functions-reference/array/array-concat.html), and [ARRAY\_FLATTEN](/sql_reference/functions-reference/array/flatten.html). See the respective reference for a full description. Brief examples are shown below.
+
+### [](#length-example)LENGTH example
+
+`LENGTH` returns the number of elements in an array.
+
+```
+SELECT
+ id,
+ LENGTH(agent_props_keys) as key_array_length
+FROM visits;
+```
+
+**Returns**:
+
+```
++-----------------------+
+| id | key_array_length |
++-----------------------+
+| 1 | 3 |
+| 2 | 2 |
+| 3 | 3 |
++-----------------------+
+```
+
+### [](#array_concat-example)ARRAY\_CONCAT example
+
+`ARRAY_CONCAT` combines multiple arrays into a single array.
+
+```
+SELECT
+ id,
+ ARRAY_CONCAT(agent_props_keys, agent_props_vals) as concat_keys_and_vals
+FROM visits;
+```
+
+**Returns**:
+
+```
++----+------------------------------------------------------------------------------+
+| id | concat_keys_and_vals |
++----+------------------------------------------------------------------------------+
+| 1 | ["agent","platform","resolution","Mozilla/5.0","Windows NT 6.1","1024X4069"] |
+| 2 | ["agent","platform","Safari","iOS 14"] |
+| 3 | ["agent","platform","platform","Safari","iOS 14","Windows 11"] |
++----+------------------------------------------------------------------------------+
+```
+
+### [](#array_flatten-example)ARRAY\_FLATTEN example
+
+`ARRAY_FLATTEN` converts an ARRAY of ARRAYs into a single flattened ARRAY. Note that this operation flattens only one level of the array hierarchy.
+
+```
+SELECT ARRAY_FLATTEN([ [[1,2,3],[4,5]], [[2]] ]) as flattened_array;
+```
+
+**Returns**:
+
+```
++---------------------+
+| flattened_array |
++---------------------+
+| [[1,2,3],[4,5],[2]] |
++---------------------+
+```
+
+## [](#manipulating-arrays-with-lambda-functions)Manipulating arrays with Lambda functions
+
+Firebolt *Lambda functions* are a powerful tool that you can use on arrays to extract results. Lambda functions iteratively perform an operation on each element of one or more arrays. Arrays and the operation to perform are specified as arguments to the Lambda function.
+
+### [](#lambda-function-general-syntax)Lambda function general syntax
+
+The general syntax pattern of a Lambda function is shown below. For detailed syntax and examples see the reference topics for [Lambda functions](/sql_reference/functions-reference/Lambda/).
+
+```
+<function>(<var1>[, <var2>][, ...<varN>] -> <expression>, <array1>[, <array2>][, ...<arrayN>])
+```
+
+| Parameter | Description |
+| --- | --- |
+| `<function>` | Any array function that accepts a Lambda expression as an argument. For a list, see [Lambda functions](/sql_reference/functions-reference/Lambda/). |
+| `<var1>[, <var2>][, ...<varN>]` | A list of one or more variables that you specify. The list is specified in the same order, and must be the same length, as the list of array expressions (`<array1>[, <array2>][, ...<arrayN>]`). At runtime, each variable contains an element of the corresponding array. |
+| `<expression>` | The operation that is performed for each element of the arrays. This is typically a function or Boolean expression. |
+| `<array1>[, <array2>][, ...<arrayN>]` | A comma-separated list of expressions, each of which evaluates to an `ARRAY` data type. |
+
+### [](#lambda-function-examplesingle-array)Lambda function example–single array
+
+Consider the following [TRANSFORM](/sql_reference/functions-reference/Lambda/transform.html) array function that uses a single array variable and reference in the Lambda expression. This example applies the `UPPER` function to each element `t` in the `ARRAY`-typed column `tags`. This converts each element in each `tags` array to upper-case.
+
+```
+SELECT
+ id,
+ TRANSFORM(t -> UPPER(t), tags) AS up_tags
+FROM visits;
+```
+
+**Returns:**
+
+```
++----+--------------------------+
+| id | up_tags |
++----+--------------------------+
+| 1 | ["SUMMER-SALE","SPORTS"] |
+| 2 | ["GADGETS","AUDIO"] |
+| 3 | ["SUMMER-SALE","AUDIO"] |
++----+--------------------------+
+```
+
+### [](#lambda-function-examplemultiple-arrays)Lambda function example–multiple arrays
+
+[ARRAY\_FIRST](/sql_reference/functions-reference/Lambda/array-first.html) is an example of a function that takes multiple arrays as arguments, which is useful for working with a map stored as parallel key-value arrays: one array represents the keys and the other represents the values.
+
+`ARRAY_FIRST` uses a Boolean expression that you specify to find the key in the key array. If the Boolean expression resolves to true, the function returns the first value in the value array that corresponds to the key’s element position. If there are duplicate keys, only the first corresponding value is returned.
+
+The example below returns the first value in the `agent_props_vals` array where the corresponding position in the `agent_props_keys` array contains the key `platform`.
+
+```
+SELECT
+ id,
+ ARRAY_FIRST(v, k -> k = 'platform', agent_props_vals, agent_props_keys) AS platform
+FROM visits;
+```
+
+**Returns**:
+
+```
++----+----------------+
+| id | platform |
++----+----------------+
+| 1 | Windows NT 6.1 |
+| 2 | iOS 14 |
+| 3 | iOS 14 |
++----+----------------+
+```
+
+[ARRAY\_SORT](/sql_reference/functions-reference/array/array-sort.html) sorts one array by another. One array represents the values and the other represents the sort order.
+
+The example below sorts the first array by the positions defined in the second array.
+
+```
+SELECT
+  ARRAY_SORT(x, y -> y, ['A', 'B', 'C'], [3, 2, 1]) AS res;
+```
+
+**Returns**:
+
+```
++-----------------+
+| res |
++-----------------+
+| ["C", "B", "A"] |
++-----------------+
+```
+
+## [](#unnest)UNNEST
+
+You might want to transform a nested array structure to a standard tabular format. `UNNEST` serves this purpose.
+
+[UNNEST](/sql_reference/commands/queries/select.html#unnest) is a table-valued function (TVF) that transforms an input row containing an array into a set of rows. `UNNEST` unfolds the elements of the array and duplicates all other columns found in the `SELECT` clause for each array element. If the input array is empty, the corresponding row is eliminated.
+
+You can use a single `UNNEST` command to unnest several arrays if the arrays are the same length.
+
+Multiple `UNNEST` statements in a single `FROM` clause result in a Cartesian product. Each element in the first array has a record in the result set corresponding to each element in the second array.
+
+### [](#examplesingle-unnest-with-single-array-typed-column)Example–single UNNEST with single ARRAY-typed column
+
+The following example unnests the `tags` column from the `visits` table.
+
+```
+SELECT
+ id,
+ tag
+FROM
+ visits,
+ UNNEST(tags) as r(tag);
+```
+
+**Returns**:
+
+```
++----+---------------+
+| id | tag |
++----+---------------+
+| 1 | "summer-sale" |
+| 1 | "sports" |
+| 2 | "gadgets" |
+| 2 | "audio" |
++----+---------------+
+```
+
+### [](#examplesingle-unnest-using-multiple-array-typed-columns)Example–single UNNEST using multiple ARRAY-typed columns
+
+The following query specifies both the `agent_props_keys` and `agent_props_vals` columns to unnest.
+
+```
+SELECT
+ id,
+ a_key,
+ a_val
+FROM
+ visits,
+ UNNEST(agent_props_keys, agent_props_vals) as r(a_key, a_val);
+```
+
+**Returns**:
+
+```
++----+------------+------------------+
+| id | a_key | a_val |
++----+------------+------------------+
+| 1  | agent      | "Mozilla/5.0"    |
+| 1  | platform   | "Windows NT 6.1" |
+| 1  | resolution | "1024x4069"      |
+| 2  | agent      | "Safari"         |
+| 2  | platform   | "iOS 14"         |
++----+------------+------------------+
+```
+
+### [](#examplemultiple-unnest-clauses-resulting-in-a-cartesian-product)Example–multiple UNNEST clauses resulting in a Cartesian product
+
+The following query, while valid, creates a Cartesian product.
+
+```
+SELECT
+ id,
+ a_key,
+ a_val
+FROM
+ visits,
+ UNNEST(agent_props_keys as a_keys) as r1(a_key),
+ UNNEST(agent_props_vals as a_vals) as r2(a_val);
+```
+
+**Returns**:
+
+```
++-----+------------+------------------+
+| id | a_key | a_val |
++-----+------------+------------------+
+| 1 | agent | "Mozilla/5.0" |
+| 1 | agent | "Windows NT 6.1" |
+| 1 | agent | "1024x4069" |
+| 1 | platform | "Mozilla/5.0" |
+| 1 | platform | "Windows NT 6.1" |
+| 1 | platform | "1024x4069" |
+| 1 | resolution | "Mozilla/5.0" |
+| 1 | resolution | "Windows NT 6.1" |
+| 1 | resolution | "1024x4069" |
+| 2 | agent | "Safari" |
+| 2 | agent | "iOS 14" |
+| 2 | platform | "Safari" |
+| 2 | platform | "iOS 14" |
++-----+------------+------------------+
+```
+
+### [](#exampleerror-on-unnest-of-multiple-arrays-with-different-lengths)Example–error on UNNEST of multiple arrays with different lengths
+
+The following query is **invalid** and results in an error because the `tags` and `agent_props_keys` arrays have different lengths for row 1.
+
+```
+SELECT
+ id,
+ tag,
+ a_key
+FROM
+ visits,
+ UNNEST(tags, agent_props_keys) as r(tag, a_key);
+```
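The three behaviors illustrated above (zipping same-length arrays, the Cartesian product of separate `UNNEST` clauses, and the error on mismatched lengths) can be summarized in a short Python model. This is an illustration of the semantics only; the helper below is not Firebolt code.

```python
from itertools import product


def unnest(rows, *array_cols, id_col="id"):
    # A single UNNEST over one or more same-length array columns
    # zips the arrays element-wise; a row with empty arrays vanishes.
    out = []
    for row in rows:
        arrays = [row[c] for c in array_cols]
        if len({len(a) for a in arrays}) > 1:
            raise ValueError("arrays must have the same length")
        for elems in zip(*arrays):
            out.append((row[id_col], *elems))
    return out


visits = [
    {"id": 1, "tags": ["summer-sale", "sports"]},
    {"id": 2, "tags": ["gadgets", "audio"]},
]
rows = unnest(visits, "tags")
# [(1, 'summer-sale'), (1, 'sports'), (2, 'gadgets'), (2, 'audio')]

# Two separate UNNEST clauses behave like a Cartesian product instead:
pairs = list(product(["agent", "platform"], ["Safari", "iOS 14"]))
# 2 x 2 = 4 combinations, as in the id = 2 rows of the example above
```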
+
+## [](#array-input-and-output-syntax)ARRAY input and output syntax
+
+`ARRAY` values can be converted from and to `TEXT`. This happens, for example, when an explicit `CAST` is added to a query, or when `ARRAY` values are (de-)serialized in a `COPY` statement.
+
+### [](#converting-array-to-text)Converting ARRAY to TEXT
+
+Broadly, the `TEXT` representation of an `ARRAY` value starts with an opening curly brace (`{`). This is followed by the `TEXT` representations of the individual array elements separated by commas (`,`). It ends with a closing curly brace (`}`). `NULL` array elements are represented by the literal string `NULL`. For example, the query
+
+```
+SELECT
+ CAST([1,2,3,4,NULL] AS TEXT)
+```
+
+returns the `TEXT` value `'{1,2,3,4,NULL}'`.
+
+When converting `ARRAY` values containing `TEXT` elements to `TEXT`, some additional rules apply. Specifically, array elements are enclosed by double quotes (`"`) in the following cases:
+
+- The array element is an empty string.
+- The array element contains curly or square braces (`{`,`[`,`]`,`}`), commas (`,`), double quotes (`"`), backslashes (`\`), or white space.
+- The array element matches the word `NULL` (case-insensitively).
+
+Additionally, double quotes and backslashes embedded in array elements will be backslash-escaped. For example, the query
+
+```
+SELECT
+ CAST(['1','2','3','4',NULL,'','{impostor,array}','["impostor","array","back\slash"]',' padded and spaced ', 'only spaced', 'null'] AS TEXT)
+```
+
+returns the `TEXT` value `'{1,2,3,4,NULL,"","{impostor,array}","[\"impostor\",\"array\",\"back\\slash\"]"," padded and spaced ","only spaced","null"}'`.
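These quoting and escaping rules are mechanical enough to sketch in a few lines of Python. The snippet below is an illustration of the rules as stated above (with `None` standing in for SQL `NULL`); it is not Firebolt's implementation.

```python
def array_to_text(elements):
    # Serialize a list of TEXT elements (None = SQL NULL) following the
    # rules above: quote empty strings, elements containing braces,
    # commas, quotes, backslashes, or whitespace, and the word NULL.
    parts = []
    for e in elements:
        if e is None:
            parts.append("NULL")
        elif (e == ""
              or any(c in '{}[],"\\' for c in e)
              or any(c.isspace() for c in e)
              or e.upper() == "NULL"):
            # Backslash-escape embedded quotes and backslashes.
            parts.append('"' + e.replace("\\", "\\\\").replace('"', '\\"') + '"')
        else:
            parts.append(e)
    return "{" + ",".join(parts) + "}"


text = array_to_text(["1", "2", "3", "4", None, "", "{impostor,array}"])
# text == '{1,2,3,4,NULL,"","{impostor,array}"}'
```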
+
+### [](#converting-text-to-array)Converting TEXT to ARRAY
+
+When converting the `TEXT` representation of an array back to `ARRAY`, the same quoting and escaping rules as above apply. Unquoted whitespace surrounding array elements is trimmed, but whitespace contained within array elements is preserved. The array elements themselves are converted according to the conversion rules for the requested array element type. For example, the query
+
+```
+SELECT
+ CAST('{1, 2, 3, 4, null, "", "{impostor,array}", "[\"impostor\",\"array\",\"back\\slash\"]", " padded and spaced ", "null", unquoted padded and spaced }' AS ARRAY(TEXT))
+```
+
+returns the `ARRAY(TEXT)` value `[1,2,3,4,NULL,'','{impostor,array}','["impostor","array","back\slash"]',' padded and spaced ','null','unquoted padded and spaced']`.
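The reverse direction can be sketched similarly. The parser below is an illustrative model of the rules described above (a hypothetical helper, not Firebolt code): it trims unquoted whitespace, unescapes `\"` and `\\` inside quoted elements, and maps an unquoted `NULL` (any case) to SQL `NULL`, represented here as `None`.

```python
def text_to_array(text):
    # Parse the TEXT form of an array into a list (None = SQL NULL).
    s = text.strip()
    if not (s and s[0] in "{[" and s[-1] in "}]"):
        raise ValueError("not an array literal")
    body = s[1:-1]
    elems, i, n = [], 0, len(body)
    while i < n:
        if body[i].isspace():
            i += 1                         # trim unquoted surrounding whitespace
        elif body[i] == '"':
            i += 1
            buf = []
            while body[i] != '"':
                if body[i] == "\\":        # unescape \" and \\
                    i += 1
                buf.append(body[i])
                i += 1
            elems.append("".join(buf))     # a quoted "null" stays a string
            i += 1                         # step past the closing quote
            comma = body.find(",", i)
            i = n if comma == -1 else comma + 1
        else:
            comma = body.find(",", i)
            end = n if comma == -1 else comma
            tok = body[i:end].strip()
            elems.append(None if tok.upper() == "NULL" else tok)
            i = end + 1
    return elems
```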
+
+It is also possible to enclose arrays with square braces (`[` and `]`) instead of curly braces (`{` and `}`) when converting from `TEXT` to `ARRAY`. For example, the query
+
+```
+SELECT
+ CAST('[1, 2, 3, 4, NULL]' AS ARRAY(INTEGER))
+```
+
+returns the `ARRAY(INTEGER)` value `[1,2,3,4,NULL]`.
+
+### [](#nested-arrays)Nested ARRAYs
+
+Finally, the same procedure applies when converting nested `ARRAY` values from and to `TEXT`. For example, the query
+
+```
+SELECT
+ CAST([NULL,[],[NULL],[1,2],[3,4]] AS TEXT)
+```
+
+returns the `TEXT` value `'{NULL,{},{NULL},{1,2},{3,4}}'`, and the query
+
+```
+SELECT
+  CAST('{NULL,{},{NULL},{1,2},{3,4}}' AS ARRAY(ARRAY(INTEGER)))
+```
+
+returns the `ARRAY(ARRAY(INTEGER))` value `[NULL,[],[NULL],[1,2],[3,4]]`.
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_loading_data_working_with_semi_structured_data_working_with_semi_structured_data.md b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_working_with_semi_structured_data_working_with_semi_structured_data.md
new file mode 100644
index 0000000..f553233
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_loading_data_working_with_semi_structured_data_working_with_semi_structured_data.md
@@ -0,0 +1,21 @@
+# [](#work-with-semi-structured-data)Work with semi-structured data
+
+Semi-structured data is any data that does not follow a strict tabular schema and often includes fields that are not standard SQL data types. This data typically has a nested structure and supports complex types such as arrays, maps, and structs.
+
+Common formats of semi-structured data include:
+
+- **JSON**— A widely used format for semi-structured data. For information on loading JSON data with Firebolt, see [Load semi-structured JSON data](/Guides/loading-data/working-with-semi-structured-data/load-json-data.html).
+- **Parquet and ORC**— Serialization formats that support nested structures and complex data types. For information on loading Parquet data with Firebolt, see [Load semi-structured Parquet data](/Guides/loading-data/working-with-semi-structured-data/load-parquet-data.html).
+
+## [](#firebolts-approach-to-semi-structured-data)Firebolt’s approach to semi-structured data
+
+Firebolt transforms semi-structured data using arrays, enabling efficient querying. Arrays in Firebolt represent the following data constructs:
+
+- **Variable-length arrays**— Arrays with unpredictable lengths in the source data are supported by Firebolt. These arrays can have arbitrary nesting levels, provided the nesting level is consistent within a column and known during table creation.
+- **Maps**— Maps, also known as dictionaries, are represented using two coordinated arrays—one for keys and one for values. This approach is particularly useful for JSON-like data where objects have varying keys.
+
+* * *
+
+- [Load semi-structured JSON data](/Guides/loading-data/working-with-semi-structured-data/load-json-data.html)
+- [Load semi-structured Parquet data](/Guides/loading-data/working-with-semi-structured-data/load-parquet-data.html)
+- [Work with arrays](/Guides/loading-data/working-with-semi-structured-data/working-with-arrays.html)
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization.md b/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization.md
new file mode 100644
index 0000000..8d69785
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization.md
@@ -0,0 +1,20 @@
+# [](#register-and-set-up-your-organization)Register and set up your organization
+
+Learn how to register your organization, create accounts, and create logins and/or service accounts.
+
+- [Register your organization](/Guides/managing-your-organization/creating-an-organization.html)
+- [Create an account](/Guides/managing-your-organization/managing-accounts.html)
+- [Manage logins](/Guides/managing-your-organization/managing-logins.html)
+- [Create service accounts](/Guides/managing-your-organization/service-accounts.html)
+
+# [](#manage-user-permissions)Manage user permissions
+
+Learn how to create users, and link logins and/or service accounts.
+
+- [Manage users](/Guides/managing-your-organization/managing-users.html)
+
+# [](#manage-billing)Manage billing
+
+Learn how to manage billing with observability at both organization and account levels.
+
+- [Manage billing](/Guides/managing-your-organization/billing.html)
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization_billing.md b/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization_billing.md
new file mode 100644
index 0000000..3968304
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization_billing.md
@@ -0,0 +1,33 @@
+# [](#billing)Billing
+
+Firebolt bills are based on the consumption of resources within each account in your organization. This includes the total amount of data stored and engine usage.
+
+- **Data storage** usage is calculated on the daily average amount of data (in bytes) stored under your Firebolt account name for indexes and raw compressed data.
+- **Engine resources** usage is calculated with **one-second granularity** between the time Firebolt makes the engine available for queries and the time the engine moves to the stopped state.
+
+## [](#set-up-account-billing-through-aws-marketplace)Set up account billing through AWS Marketplace
+
+To continue using Firebolt’s engines for query execution after your initial $200 credit, which is valid for 30 days, you’ll need to set up a billing account by connecting your account to the [AWS Marketplace](https://aws.amazon.com/marketplace).
+
+**Steps for registration:**
+
+1. On the [Firebolt Workspace page](https://go.firebolt.io/), select the **Configure** icon from the left navigation pane.
+2. Under **Organization settings**, select **Billing**.
+3. Click **Connect to AWS Marketplace** to take you to the Firebolt page on AWS Marketplace.
+4. On the AWS Marketplace page, click **View Purchase Options** in the top right-hand corner of the screen.
+5. Click **Setup Your Account**.
+
+Your account should now be associated with AWS Marketplace.
+
+## [](#invoices)Invoices
+
+Invoices for Firebolt engines and data are submitted through the AWS Marketplace. The final monthly invoice is available on the third day of each month through the AWS Marketplace. A billing cycle starts on the first day of the month and finishes on the last day of the same month.
+
+## [](#viewing-billing-information)Viewing billing information
+
+Users with the **Org Admin** role can monitor the cost history of each account in the organization.
+
+**To view cost information for your organization**, query the following two `information_schema` views:
+
+1. [Engines billing](/sql_reference/information-schema/engines-billing.html)
+2. [Storage billing](/sql_reference/information-schema/storage-billing.html)
+
+Firebolt billing is reported to the AWS Marketplace at the beginning of the next day. By default, the **Accounts & Billing** page displays the engine usage breakdown based on billing time. If you prefer to see the engine usage by actual usage day, you can click the **Engines breakdown** selector under the **Usage cost by engine** table and click **Actual running time**.
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization_creating_an_organization.md b/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization_creating_an_organization.md
new file mode 100644
index 0000000..7e87995
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization_creating_an_organization.md
@@ -0,0 +1,36 @@
+# [](#register-to-firebolt)Register to Firebolt
+
+To start working with Firebolt, you first need to register your organization and create your first account. An organization provides a logical structure for managing accounts, billing, and authentication. [Read more about organizations and accounts and their benefits.](/Overview/organizations-accounts.html)
+
+When registering to Firebolt, the domain name used in your registration email will determine the organization name. Organization names are globally unique — no two organizations can have the same name. If you need two organizations under the same domain, contact the Firebolt Support team for further assistance.
+
+## [](#create-an-organization)Create an organization
+
+To register to Firebolt and create an organization:
+
+1. Go to Firebolt’s registration page: [go.firebolt.io/signup](https://go.firebolt.io/signup)
+2. Enter the following information in the form:
+
+ - First name
+ - Last name
+ - Email - make sure you use a business email address, such as `you@anycorp.com`. Based on that address, Firebolt infers the name of your company and organization. Firebolt does not support usernames with personal email addresses, such as `me@gmail.com` or `you@outlook.com`.
+ - Region in which to create your first account. You will be able to create additional accounts in other regions later on, if needed.
+3. Click **Register**.
+4. An email is sent to the address you provided to verify the organization. When you receive this email, click **Activate**. Firebolt will then approve your registration request and validate your information; this step might take a couple of minutes to complete.
+5. Once approved, you will get a welcome email. Click **Go to Firebolt** in this email.
+6. Enter a password as instructed and choose **Set password**.
+7. Choose **Log in**. Enter your login information (email address and password) and click **Log in**.
+
+Congratulations - you have successfully set up your organization. Welcome to Firebolt!
+
+
+
+Your organization comes prepared with one account for your convenience - choose your own name or keep the default.
+
+## [](#next-steps)Next steps
+
+- [Manage accounts](/Guides/managing-your-organization/managing-accounts.html)
+- [Create logins](/Guides/managing-your-organization/managing-logins.html) or [set up SSO authentication](/Guides/security/sso/)
+- [Add users](/Guides/managing-your-organization/managing-users.html) to your account
+- [Manage roles](/Guides/security/rbac.html)
+- Create databases, engines, and load your data. Follow our [getting started tutorial](/Guides/getting-started/) to try this out with sample data.
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization_managing_accounts.md b/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization_managing_accounts.md
new file mode 100644
index 0000000..a1114ae
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization_managing_accounts.md
@@ -0,0 +1,89 @@
+# [](#manage-accounts)Manage accounts
+
+Your organization comes prepared with one account for your convenience. You can add more accounts, edit existing accounts, or delete accounts using SQL or in the UI.
+
+To view all accounts, click **Configure** to open the configure space, then choose **Accounts** from the menu, or query the [information\_schema.accounts](/sql_reference/information-schema/accounts.html) view.
+
+## [](#create-a-new-account)Create a new account
+
+Creating an account requires the org\_admin role.
+
+### [](#sql)SQL
+
+To create an account using SQL, use the [CREATE ACCOUNT](/sql_reference/commands/data-definition/create-account.html) statement. For example:
+
+```
+CREATE ACCOUNT dev WITH REGION = 'us-east-1';
+```
+
+### [](#ui)UI
+
+To create an account via the UI:
+
+
+
+1. Click **Configure** to open the configure space, then choose **Accounts** from the menu.
+2. From the Accounts management page, choose **Create Account**. Type a name for the account and choose a region. You won’t be able to change the region for this account later, so choose carefully.
+3. Choose **Create**.
+
+
+
+Then you will see your new account on the **Accounts management** page.
+
+An organization can contain up to 20 accounts, and you can run `CREATE ACCOUNT` up to 25 times. If you need additional account creations beyond these limits, contact [Firebolt Support](https://docs.firebolt.io/godocs/Reference/help-menu.html) for assistance. Our team can provide guidance and, if appropriate, adjust your account settings to accommodate your needs.
+
+## [](#edit-an-existing-account)Edit an existing account
+
+Editing an account requires the account\_admin or org\_admin role.
+
+### [](#sql-1)SQL
+
+To edit an existing account using SQL, use the [ALTER ACCOUNT](/sql_reference/commands/data-definition/alter-account.html) statement. For example:
+
+```
+ALTER ACCOUNT dev RENAME TO staging;
+```
+
+### [](#ui-1)UI
+
+To edit an account via the UI:
+
+1. Click **Configure** to open the configure space, then choose **Accounts** from the menu.
+2. Search for the relevant account using the top search filters or by scrolling through the accounts list. Hover over the right-most column to make the account menu appear, then choose **Edit account**. Edit the name of the account.
+3. Choose **Save**.
+
+
+
+## [](#delete-an-existing-account)Delete an existing account
+
+Deleting an account requires the account\_admin or org\_admin role.
+
+### [](#sql-2)SQL
+
+To delete an existing account using SQL, use the [DROP ACCOUNT](/sql_reference/commands/data-definition/drop-account.html) statement. For example:
+
+```
+DROP ACCOUNT dev;
+```
+
+### [](#ui-2)UI
+
+To delete an account via the UI:
+
+1. Click **Configure** to open the configure space, then choose **Accounts** from the menu.
+2. Search for the relevant account using the top search filters or by scrolling through the accounts list. Hover over the right-most column to make the account menu appear, then choose **Delete account**. If your account is not empty (for example, if it contains objects such as users, databases, or engines), you will need to confirm that you also want to delete these sub-objects by selecting **Delete account sub-objects permanently**.
+3. Choose **Confirm**.
+
+
+
+The account will be removed from the **Accounts management** page.
+
+## [](#switch-accounts)Switch accounts
+
+To switch the account you are using:
+
+### [](#ui-3)UI
+
+Click your login button; the current account will be marked. Choose the account you would like to switch to.
+
+
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization_managing_logins.md b/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization_managing_logins.md
new file mode 100644
index 0000000..a8eccfc
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization_managing_logins.md
@@ -0,0 +1,80 @@
+# [](#manage-logins)Manage logins
+
+Logins are managed at the organization level and are used for authentication. A login is a combination of a login name (an email address), first name, last name, and password, unless you’ve configured [Single Sign-On (SSO)](../security/sso/). Logins can also be configured with advanced authentication properties such as [MFA](/Guides/security/enabling-mfa.html) and [network policies](/Guides/security/network-policies.html). Logins are linked to users at the account level, so that roles can be managed separately per account. To gain access to Firebolt, a user must be linked to either a login (for human access) or a service account (for programmatic access). You can add, edit, or delete logins using SQL or in the UI.
+
+To view all logins, click **Configure** to open the configure space, then choose **Logins** from the menu, or query the [information\_schema.logins](/sql_reference/information-schema/logins.html) view.
+
+Managing logins requires the org\_admin role.
+
+## [](#create-a-new-login)Create a new login
+
+### [](#sql)SQL
+
+To create a login using SQL, use the [CREATE LOGIN](/sql_reference/commands/access-control/create-login.html) statement. For example:
+
+```
+CREATE LOGIN "alexs@acme.com" WITH FIRST_NAME = 'Alex' LAST_NAME = 'Summers';
+```
+
+### [](#ui)UI
+
+To create a login via the UI:
+
+1. Click **Configure** to open the configure space, then choose **Logins** from the menu:
+
+
+
+
+
+1. From the Logins management page, choose **Create Login**.
+2. Enter the following details:
+
+ - First name: specifies the first name of the user for the login.
+ - Last name: specifies the last name of the user for the login.
+ - Login name: specifies the login in the form of an email address. This must be unique within your organization.
+3. Optionally, you can:
+
+ - Associate a [network policy](/Guides/security/network-policies.html) with the login by choosing a network policy name under the **Network policy attached** field.
+ - Enable password login, which specifies if the login can authenticate Firebolt using a password.
+ - Enable multi-factor authentication (MFA). Read more about how to configure MFA [here](/Guides/security/enabling-mfa.html).
+   - Set the login as **organization admin**, which grants full permissions to manage the organization.
+
+## [](#edit-an-existing-login)Edit an existing login
+
+### [](#sql-1)SQL
+
+To edit an existing login using SQL, use the [ALTER LOGIN](/sql_reference/commands/access-control/alter-login.html) statement. For example:
+
+```
+ALTER LOGIN "alexs@acme.com" SET NETWORK_POLICY = my_network_policy;
+```
+
+### [](#ui-1)UI
+
+To edit a login via the UI:
+
+1. Click **Configure** to open the configure space, then choose **Logins** from the menu.
+2. Search for the relevant login using the top search filters, or by scrolling through the list of logins. Hover over the right-most column to make the login menu appear, then choose **Edit login details**. Edit the desired fields and choose **Save**.
+
+The login name cannot be changed for logins that were provisioned via SSO.
+
+
+
+## [](#deleting-an-existing-login)Deleting an existing login
+
+### [](#sql-2)SQL
+
+To delete an existing login using SQL, use the [DROP LOGIN](/sql_reference/commands/access-control/drop-login.html) statement. For example:
+
+```
+DROP LOGIN "alexs@acme.com";
+```
+
+### [](#ui-2)UI
+
+To delete a login via the UI:
+
+1. Click **Configure** to open the configure space, then choose **Logins** from the menu.
+2. Search for the relevant login using the top search filters, or by scrolling through the logins list. Hover over the right-most column to make the login menu appear, then choose **Delete login**.
+
+If the login is linked to any users, deletion will not be permitted. The login must be unlinked from all users before deletion.
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization_managing_users.md b/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization_managing_users.md
new file mode 100644
index 0000000..1c26d19
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization_managing_users.md
@@ -0,0 +1,355 @@
+# [](#manage-users-and-roles)Manage users and roles
+
+In Firebolt, an **organization** can have multiple **accounts**, each serving as a separate workspace for managing resources and data. Within each account, users are created to control access, with their identities defined through logins or service accounts. **Logins** are associated with individual human users, each authenticated by unique credentials, allowing them to interact directly with Firebolt’s resources according to assigned roles. **Service accounts** provide programmatic access for applications and automated processes within the account, such as data pipelines or monitoring tools. Each login and service account is linked to specific **roles**, which define their permissions, ensuring that access is managed efficiently and securely across the organization.
+
+## [](#-logins) Logins
+
+A **login** in Firebolt represents a **human user** and is associated with an individual’s credentials, identified by an **email address**. Logins are tied to user roles, which define what the individual can access or modify. A login is primarily used for human authentication and allows a user to access the platform, run queries, and interact with databases and other resources. For instance, a login object might be created for a specific person such as `kate@acme.com`, and this login is linked to roles that control permissions.
+
+## [](#-service-accounts) Service accounts
+
+A **service account** represents a **machine or application** rather than a human user. It allows automated processes to authenticate and interact with Firebolt resources. A service account is used for programmatic access, such as in pipelines, monitoring systems, application data access, and scheduled queries. Service accounts are associated with roles just like logins but are designed to operate without human intervention. For example, a service account might be used for a data pipeline that regularly ingests data into Firebolt. Each service account must be associated with a user. For more information about how to create and manage service accounts, see [Manage programmatic access to Firebolt](/Guides/managing-your-organization/service-accounts.html).
+
+## [](#-users) Users
+
+A **user** is a distinct identity that interacts with the Firebolt platform. Each user is assigned specific **roles**, which determine what actions they can perform and which resources they can access. Users are essential for controlling access in Firebolt and are managed through **role-based access control (RBAC)**. Users authenticate via **logins** or **service accounts**, depending on whether they are human users or machine-based processes.
+
+A user must be associated with **either** a login or a service account, as follows:
+
+
+
+There can be multiple users per login or service account. Users are managed at the account level, as shown in the following diagram:
+
+
+
+You can [add](#set-up-a-new-user), [edit](#edit-an-existing-user) or [delete](#deleting-an-existing-user) users using SQL in the **Develop Space** or using the user interface (UI) in the **Configure Space**.
+
+Managing roles requires the account\_admin role. For more information about roles, see the [Roles](/Overview/organizations-accounts.html#roles) section in [Organizations and accounts](/Overview/organizations-accounts.html), and the [Account permissions](/Overview/Security/Role-Based%20Access%20Control/account-permissions.html) section of [Role-based access control](/Overview/Security/Role-Based%20Access%20Control/) that specifies permissions for **CREATE USER**.
+
+**Topics**
+
+- [Manage users and roles](#manage-users-and-roles)
+
+ - [Logins](#-logins)
+ - [Service accounts](#-service-accounts)
+ - [Users](#-users)
+ - [Set up a new user](#set-up-a-new-user)
+
+ - [Set up a new user for programmatic access](#set-up-a-new-user-for-programmatic-access)
+ - [Set up a new user for human access](#set-up-a-new-user-for-human-access)
+
+ - [Create a login](#create-a-login)
+
+ - [Create a login using the UI](#create-a-login-using-the-ui)
+ - [Create a login using SQL](#create-a-login-using-sql)
+ - [Create a user](#create-a-user)
+
+ - [Create a user using the UI](#create-a-user-using-the-ui)
+ - [Create a user using SQL](#create-a-user-using-sql)
+ - [Link the user to the login or service account](#link-the-user-to-the-login-or-service-account)
+
+ - [Link a user using the UI](#link-a-user-using-the-ui)
+ - [Link a user using SQL](#link-a-user-using-sql)
+ - [Create a role](#create-a-role)
+
+ - [Create a role using the UI](#create-a-role-using-the-ui)
+ - [Create a role using SQL](#create-a-role-using-sql)
+ - [Assign a role to a user](#assign-a-role-to-a-user)
+
+ - [Assign a role using the UI](#assign-a-role-using-the-ui)
+ - [Assign a role using SQL](#assign-a-role-using-sql)
+ - [Edit an existing user](#edit-an-existing-user)
+
+ - [Edit a user using the UI](#edit-a-user-using-the-ui)
+ - [Edit a user using SQL](#edit-a-user-using-sql)
+ - [Deleting an existing user](#deleting-an-existing-user)
+
+ - [Delete a user using the UI](#delete-a-user-using-the-ui)
+ - [Delete a user using SQL](#delete-a-user-using-sql)
+
+## [](#set-up-a-new-user)Set up a new user
+
+To set up a new user, complete the following steps:
+
+1. Create a new login or service account. The following section provides information about creating a new login, for human access to Firebolt. If you want to set up a new user for programmatic access, see [Create a service account](/Guides/managing-your-organization/service-accounts.html#create-a-service-account).
+2. Create a new user.
+3. Link the user with a login or a service account.
+4. Create a role.
+5. Assign the role to the user.
+
+The following sections guide you through the previous steps.
+
+### [](#set-up-a-new-user-for-programmatic-access)Set up a new user for programmatic access
+
+
+
+To set up a user for programmatic access, [create a service account](/Guides/managing-your-organization/service-accounts.html#create-a-service-account), and then complete the steps in the following sections to [create a user](#create-a-user), [link the user](#link-the-user-to-the-login-or-service-account) to a service account, [create a role](#create-a-role), and [assign the role](#assign-a-role-to-a-user) to the user.
+
+### [](#set-up-a-new-user-for-human-access)Set up a new user for human access
+
+#### [](#create-a-login)Create a login
+
+
+
+A login is an **email** that is used for authentication. A login can be associated with multiple accounts. When you set up a new user, you must create either a login or service account for them. Create a login if you want to associate a user with human access to Firebolt. [Create a service account](/Guides/managing-your-organization/service-accounts.html#create-a-service-account) for programmatic access. You will link the user to **either** a login or a service account.
+
+##### [](#create-a-login-using-the-ui)Create a login using the UI
+
+Log in to [Firebolt’s Workspace](https://go.firebolt.io/login). If you haven’t yet registered with Firebolt, see the [Get Started](/Guides/getting-started/) guide. If you encounter any issues, reach out to [support@firebolt.io](mailto:support@firebolt.io) for help. Then, do the following:
+
+1. Select the **Configure** icon in the left navigation pane to open the **Configure Space**.
+2. Select **Logins**.
+3. Select **Create Login**.
+4. In the **Create login** window that pops up, enter the following:
+
+ 1. First Name - The first name of the user.
+ 2. Last Name - The last name of the user.
+ 3. Login Name - The email address of the user.
+5. Select a network policy from the drop-down list. You can choose **Default** or create your own. The default network policy accepts traffic from any IP address. For more about network policies, including how to create a new policy, see [Manage network policies](/Guides/security/network-policies.html).
+6. Toggle the following options on or off:
+
+ 1. Is password enabled - Toggle **on** to require authentication using a password.
+ 2. Is MFA enabled - Toggle **on** to require authentication using multi-factor authentication (MFA).
+ 3. Is organization admin - Toggle **on** to grant that login permissions associated with an **Organization Admin**. A user must have organization administrative privileges to manage logins and service accounts. For more information about organization administrative privileges and other roles, see the [Roles](/Overview/organizations-accounts.html#roles) section in [Organizations and accounts](/Overview/organizations-accounts.html).
+7. Select **Create**.
+
+##### [](#create-a-login-using-sql)Create a login using SQL
+
+Log in to [Firebolt’s Workspace](https://go.firebolt.io/login). If you haven’t yet registered with Firebolt, see the [Get Started](/Guides/getting-started/) guide. If you encounter any issues, reach out to [support@firebolt.io](mailto:support@firebolt.io) for help. Then, do the following:
+
+1. Select the **Develop** icon ().
+
+    By default, when you log in to **Firebolt’s Workspace** for the first time, Firebolt creates a tab in the **Develop Space** called **Script 1**. The following apply:
+
+ - The database that **Script 1** will use is located directly below the tab name. If you want to change the database, select another database from the drop-down list.
+ - An engine must be running to process the script in a selected tab. The name and status of the engine that **Script 1** uses for computation is located to the right of the current selected database. If the engine has auto-start set to `TRUE`, it will start from a stopped state. For more information about auto-start, see [Immediately Starting or Automatically Stopping an Engine](/Guides/operate-engines/working-with-engines-using-ddl.html#automatically-start-or-stop-an-engine).
+2. Select **system** from the drop-down arrow next to the engine name. The system engine is always running, and you can use it to create a login. You can also use an engine that you create.
+3. Use the syntax in the following example code to create a login in the SQL Script Editor:
+
+ ```
+    CREATE LOGIN "<login_name>"
+    WITH FIRST_NAME = '<first_name>'
+    LAST_NAME = '<last_name>';
+ ```
+
+#### [](#create-a-user)Create a user
+
+
+
+After you create a login, the next step is to create a user.
+
+##### [](#create-a-user-using-the-ui)Create a user using the UI
+
+1. Select the **Govern** icon () in the left navigation pane to open the **Govern Space**.
+2. Select **Users** from the left sub-menu bar.
+3. Select the **+ Create User** button at the top right of the **Govern Space**.
+4. In the **Create User** window, enter the following:
+
+   1. **User name** - The name of the user to associate with the login. This name can be any string, excluding spaces and special characters such as exclamation points (!), percent signs (%), at signs (@), periods (.), underscores (\_), hyphens (-), and asterisks (\*).
+ 2. **Assign to** - Use the dropdown to assign the user to one of the following:
+ i. **Unassigned** - No specific assignment.
+
+ ii. **Login** - Associates the user with a login name or email address. After selecting this option, you will be prompted to choose the login name or email address.
+
+ iii. **Service Account** - Associates the user with a service account. After selecting this option, you will be prompted to choose a service account name.
+ 3. **Role** - Select the role you want to assign to the user. If no role is specified, the user is automatically granted a [public role](/Overview/organizations-accounts.html#public-role). For more information about roles, see the [Roles](/Overview/organizations-accounts.html#roles) section in [Organization and accounts](/Overview/organizations-accounts.html).
+ 4. **Default Database** - Choose a database to associate with the user, setting it as their default for access.
+ 5. **Default Engine** - Choose a default processing engine to associate with the user.
+5. Select **Create new user** to save the configuration.
+
+##### [](#create-a-user-using-sql)Create a user using SQL
+
+Use the syntax in the following example code and the [CREATE USER](/sql_reference/commands/access-control/create-user.html) statement to create a user in the **SQL Script Editor** in the **Develop Space**:
+
+```
+CREATE USER <user_name>;
+```
+
+You can also create a user and link it to a login simultaneously as shown in the following code example:
+
+```
+CREATE USER <user_name> WITH LOGIN = "<login_name>";
+```
+
+Create a user and link it to a service account at the same time as shown in the following code example:
+
+```
+CREATE USER <user_name> WITH SERVICE_ACCOUNT = <service_account_name>;
+```
+
+#### [](#link-the-user-to-the-login-or-service-account)Link the user to the login or service account
+
+
+
+If the user wasn’t associated with a login or service account when they were created, you must link them.
+
+##### [](#link-a-user-using-the-ui)Link a user using the UI
+
+1. Select the Govern icon () in the left navigation pane to open the **Govern Space**.
+2. Select **Users** from the left sub-menu bar.
+3. Select the three horizontal dots (…) to the right of the user that you need to link to a login.
+4. Select **Edit user details**.
+5. If you want to link the user to a login for human access, select **Login** from the drop-down list next to **Assign to**. If you want to link the user to a service account for programmatic access, select **Service Account** from the drop-down list next to **Assign to**.
+6. If you want to link the user to a login for human access, select the name of the login to associate with the user from the drop-down list under **Login name**. If you want to link the user to a service account for programmatic access, select a name from the drop-down list next to **Service account name**. Each drop-down list contains only logins or service accounts that are not already assigned to a user in the current account.
+7. Select **Save**.
+
+##### [](#link-a-user-using-sql)Link a user using SQL
+
+Use the syntax in the following example code and the [ALTER\_USER](/sql_reference/commands/access-control/alter-user.html) statement to link a user to a login in the **SQL Script Editor** in the **Develop Space**:
+
+```
+ALTER USER <user_name> SET LOGIN = "<login_name>";
+```
+
+The following code links a user to a service account:
+
+```
+ALTER USER <user_name> SET SERVICE_ACCOUNT = <service_account_name>;
+```
+
+#### [](#create-a-role)Create a role
+
+
+
+If you don’t already have a role that you want to assign to a user, you can create a role to define what actions users can perform. For more information, see [Roles](/Overview/organizations-accounts.html#roles).
+
+##### [](#create-a-role-using-the-ui)Create a role using the UI
+
+1. Select the Govern icon () in the left navigation pane to open the **Govern Space**.
+2. Select **Roles** from the left sub-menu bar.
+3. Select the **+ New Role** button at the top right of the **Govern Space**.
+4. In the left sub-menu bar, enter the following:
+
+ 1. Role name - The name of the role that you want to create. You can use this role to grant privileges for more than one user.
+5. Select **Databases** in the left sub-menu bar, and select the following in **Database privileges**:
+
+ 1. **Create database** - Toggle **on** to allow the user to create any database in the account.
+ 2. **Modify any database** - Toggle **on** to allow the user to modify any database in the account, or keep the option **off** to select the specific database the user can modify.
+ 3. **Usage any database** - Toggle **on** to allow the user to use any database in the account, or keep the option **off** to select the specific database the user can use.
+ 4. If you didn’t specify using or modifying all databases, select the checkbox next to the specific database that you want to grant the user access to modify or use.
+6. Select **Engines** in the left sub-menu bar, and select the following in **Engine privileges**:
+
+ 1. **Create engine** - Toggle **on** to allow the user to create any engine in the account.
+ 2. **Modify any engine** - Toggle **on** to allow the user to modify any engine in the account, or keep the option **off** to select the specific engine the user can modify.
+ 3. **Operate any engine** - Toggle **on** to allow the user to stop or start any engine in the account, or keep the option **off** to select the specific engine the user can start or stop. Any running engine that is not the system engine accumulates usage costs.
+ 4. **Usage any engine** - Toggle **on** to allow the user to use any engine in the account, or keep the option **off** to select the specific engine the user can use.
+7. Select **Create**.
+
+##### [](#create-a-role-using-sql)Create a role using SQL
+
+Use the syntax in the following example code and the [CREATE ROLE](/sql_reference/commands/access-control/create-role.html) and [GRANT](/sql_reference/commands/access-control/grant.html) statements to create a role in the **SQL Script Editor** in the **Develop Space**:
+
+```
+CREATE ROLE <role_name>;
+```
+
+Use the following code to grant engine **access to a role**:
+
+```
+GRANT USAGE ON ENGINE <engine_name> TO <role_name>;
+```
+
+Use the following code example to grant a role permission to **modify a database**:
+
+```
+GRANT MODIFY ON DATABASE <database_name> TO <role_name>;
+```
+
+Use the following code example to grant a role permission to **create objects inside the public schema**:
+
+```
+GRANT CREATE ON SCHEMA public TO <role_name>;
+```
+
+Use the following code to grant a role permission to **access the public schema** in a database:
+
+```
+GRANT USAGE ON SCHEMA public TO <role_name>;
+```
+
+Use the following code example to grant a role permission to **read data from a specified table**:
+
+```
+GRANT SELECT ON TABLE <table_name> TO <role_name>;
+```
+
+For more information about role-based access, see [Manage role-based access control](/Guides/security/rbac.html).
+
+#### [](#assign-a-role-to-a-user)Assign a role to a user
+
+
+
+You can assign a new role to the user or change the role assigned to the user from the default **public** role to grant them specific permissions. A user can have multiple roles.
+
+##### [](#assign-a-role-using-the-ui)Assign a role using the UI
+
+1. Select the Govern icon () in the left navigation pane to open the **Govern Space**.
+2. Select **Users** from the left sub-menu bar.
+3. Select the three horizontal dots (…) to the right of the user that you want to assign a role to.
+4. Select **Edit user details**.
+5. Select the checkbox next to the role that you want to assign to the user from the list under **Assign Roles**.
+6. Select **Save**.
+
+##### [](#assign-a-role-using-sql)Assign a role using SQL
+
+Use the syntax in the following example code and the [GRANT](/sql_reference/commands/access-control/grant.html) statement to assign a role in the **SQL Script Editor** in the **Develop Space**:
+
+```
+GRANT ROLE <role_name> TO USER <user_name>;
+```
+
+You can use `GRANT` to assign a role to another role as follows:
+
+```
+GRANT ROLE <role_name> TO ROLE <another_role_name>;
+```
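+
+A role assignment can also be removed. The following is a sketch that assumes Firebolt’s `REVOKE` statement mirrors the `GRANT` form above; check the SQL reference for the exact syntax:
+
+```
+REVOKE ROLE <role_name> FROM USER <user_name>;
+```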
+
+## [](#edit-an-existing-user)Edit an existing user
+
+You can alter a user’s name, the login or service account they are associated with, and their default database and engine.
+
+### [](#edit-a-user-using-the-ui)Edit a user using the UI
+
+1. Select the Govern icon () in the left navigation pane to open the **Govern Space**.
+2. Select **Users** from the left sub-menu bar.
+3. Select the three horizontal dots (…) to the right of the user that you need to edit.
+4. Select **Edit user details**.
+5. Edit the desired fields.
+6. Select **Save**.
+
+### [](#edit-a-user-using-sql)Edit a user using SQL
+
+Use the [ALTER USER](/sql_reference/commands/access-control/alter-user.html) statement to change a user’s information in the **SQL Script Editor** in the **Develop Space**.
+
+The following code example changes a user’s name:
+
+```
+ALTER USER "alex" RENAME TO "alexs";
+```
+
+The following code example changes a user’s login:
+
+```
+ALTER USER alex SET LOGIN="alexs@acme.com";
+```
+
+Users can modify most of their own account settings without requiring [RBAC](/Overview/Security/Role-Based%20Access%20Control/#role-based-access-control-rbac) permissions, except when altering [LOGIN](/Guides/managing-your-organization/managing-logins.html) configurations or a [SERVICE ACCOUNT](/Guides/managing-your-organization/service-accounts.html).
+
+## [](#deleting-an-existing-user)Deleting an existing user
+
+You can delete a user using either the UI or with SQL. The delete operation is irreversible.
+
+### [](#delete-a-user-using-the-ui)Delete a user using the UI
+
+1. Select **Users** from the left sub-menu bar.
+2. Select the three horizontal dots (…) to the right of the user that you need to delete.
+3. Select **Delete user**.
+4. Select **Confirm** to delete the user. This operation is irreversible.
+
+### [](#delete-a-user-using-sql)Delete a user using SQL
+
+Use the syntax in the following example code and the [DROP USER](/sql_reference/commands/access-control/drop-user.html) statement to delete an existing user in the **SQL Script Editor** in the **Develop Space**:
+
+```
+DROP USER "alex";
+```
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization_service_accounts.md b/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization_service_accounts.md
new file mode 100644
index 0000000..d28a1b1
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_managing_your_organization_service_accounts.md
@@ -0,0 +1,219 @@
+# [](#manage-programmatic-access-to-firebolt)Manage programmatic access to Firebolt
+
+Service accounts in Firebolt are used exclusively for **programmatic access**, allowing applications, scripts, or automated systems to securely interact with Firebolt resources. Unlike regular logins for individuals, each service account has an ID and a secret for authentication.
+
+To manage service accounts, you must have the **organization admin** role, which grants full administrative control over an organization in Firebolt, including managing logins, network policies, and accounts. This role ensures proper access management, security, and compliance with organizational policies.
+
+Administrators use service accounts to control how external tools and applications access Firebolt, ensuring access is limited to necessary resources. Service accounts are associated with specific users within the organization, giving administrators control over what data and permissions they have. This helps enforce security rules, track usage, and audit system access in a clear and controlled way.
+
+You can access a Firebolt database programmatically using either of the following:
+
+- The [Firebolt API](https://docs.firebolt.io/godocs/Guides/query-data/using-the-api.html#firebolt-api) - directly interacts with Firebolt’s data warehouse using HTTP requests.
+- The [Firebolt drivers](https://docs.firebolt.io/godocs/Guides/developing-with-firebolt/) - use a third-party tool or programming language to integrate with Firebolt’s data warehouse. Firebolt supports several languages including Python, Node.js, .NET, and Go.
+
+Service accounts must be manually linked to a [user account](https://docs.firebolt.io/godocs/Guides/managing-your-organization/managing-users.html) after they have been created. The service account provides access to the organization, and the associated user provides access to an account within the organization. To use Firebolt programmatically, you must authenticate with an ID and a secret. These are generated when you create a service account. You can add, delete and generate secrets for service accounts using SQL scripts in the **Develop Space** or through the user interface (UI) in the **Configure Space**.
+
+Follow these steps to **gain programmatic access to Firebolt**:
+
+1. [Create a service account](#create-a-service-account).
+2. [Get a service account ID](#get-a-service-account-id).
+3. [Generate a secret](#generate-a-secret).
+4. [Create a user](#create-a-user).
+
+After completing the previous steps, the following sections show you how to **manage programmatic access through your service account**:
+
+1. [Test your new service account](#test-your-new-service-account).
+2. [Edit your service account](#edit-your-service-account).
+3. [Delete your service account](#delete-your-service-account).
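+
+Steps 1–4 above can be sketched as one SQL session. The `etl_loader` and `etl_user` names are hypothetical, and each statement is detailed in the sections below:
+
+```
+-- Step 1: create the service account
+CREATE SERVICE ACCOUNT IF NOT EXISTS "etl_loader" WITH DESCRIPTION = 'nightly ETL jobs';
+
+-- Steps 2 and 3: the call returns the service account ID and secret;
+-- copy the secret now, because it cannot be retrieved later
+CALL fb_GENERATESERVICEACCOUNTKEY('etl_loader');
+
+-- Step 4: link the service account to a user
+CREATE USER etl_user WITH SERVICE_ACCOUNT = etl_loader;
+```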
+
+## [](#create-a-service-account)Create a service account
+
+
+
+You can create a service account using SQL scripts in the **Develop Space** or through the user interface (UI) in the **Configure Space**.
+
+### [](#create-a-service-account-using-the-ui)Create a service account using the UI
+
+Log in to Firebolt’s [Workspace](https://go.firebolt.io/login). If you haven’t yet registered with Firebolt, see the [Get Started](https://docs.firebolt.io/Guides/getting-started/) guide. If you encounter any issues, reach out to support@firebolt.io for help. Then, do the following:
+
+1. Select the **Configure** icon () in the left navigation pane to open the **Configure Space**.
+2. Select **Service accounts** from the left sub-menu bar.
+3. Select the **+ Create a service account** button at the top right of the **Configure Space**.
+4. In the **Create a service account** window that appears, enter the following:
+
+ - Name - The name of the service account.
+ - [Network policy](https://docs.firebolt.io/Guides/security/network-policies.html) - A security feature that defines a list of allowed and blocked IP addresses or ranges to manage access at the organization level, login level, or for service accounts.
+ - Description - A description for the service account.
+5. Toggle **Is organization admin** to designate the service account as an account with administrative privileges in your organization. In Firebolt, the organization admin role provides full administrative privileges over the organization, allowing management of users, service accounts, network policies, and other organization-wide settings.
+6. Select **Create** to finish creating the service account.
+
+### [](#create-a-service-account-using-sql)Create a service account using SQL
+
+Log in to Firebolt’s [Workspace](https://go.firebolt.io/login). If you haven’t yet registered with Firebolt, see [Get Started](https://docs.firebolt.io/Guides/getting-started/). If you encounter any issues, reach out to support@firebolt.io for help. Then, do the following:
+
+1. Select the **Develop** icon ().
+2. By default, when you log in to **Firebolt’s Workspace** for the first time, Firebolt creates a tab in the **Develop Space** called **Script 1**. The following apply:
+
+   - The database that **Script 1** will use is located directly below the tab name. If you want to change the database, select another database from the drop-down list.
+ - An engine must be running to process the script in a selected tab. The name and status of the engine that **Script 1** uses for computation is located to the right of the current selected database.
+
+ Select **system** from the drop-down arrow next to the engine name. The system engine is always running, and you can use it to create a service account. You can also use an engine that you create.
+3. Use the syntax in the following example code to create a service account in the **SQL Script Editor**:
+
+ ```
+ CREATE SERVICE ACCOUNT IF NOT EXISTS "service_account_name" WITH DESCRIPTION = 'service account 1';
+ ```
+
+ For more information, see the [CREATE SERVICE ACCOUNT](https://docs.firebolt.io/sql_reference/commands/access-control/create-service-account.html) command.
+
+## [](#get-a-service-account-id)Get a service account ID
+
+
+
+Your new service account is listed in the **Configure Space** under **Service accounts** in the left sub-menu bar. Note the ID of this service account under the **ID** column in the **Service accounts management** table. You will use this ID for authentication.
+
+## [](#generate-a-secret)Generate a secret
+
+
+
+Each service account requires a secret to access Firebolt programmatically. You can generate a secret using SQL scripts in the **Develop Space** or through the UI in the **Configure Space**.
+
+If you generate a new secret, the previous secret for your service account will no longer work inside your applications or services.
+
+### [](#generate-a-secret-using-the-ui)Generate a secret using the UI
+
+1. Select the **Configure** icon () in the left navigation pane to open the **Configure Space**.
+2. Select **Service accounts** from the left sub-menu bar.
+3. Select the three horizontal dots (…) to the right of the service account for which you want to generate a secret.
+4. Select **Create a new secret**.
+5. In the **New secret for service account** pop-up window that displays the new secret, select the copy icon to copy the secret to your clipboard. This secret is not stored anywhere; once you close the pop-up window, you will no longer be able to retrieve it.
+
+### [](#generate-a-secret-using-sql)Generate a secret using SQL
+
+Use the syntax in the following example code to generate a secret for a service account in the **SQL Script Editor** in the **Develop Space**:
+
+```
+CALL fb_GENERATESERVICEACCOUNTKEY('service_account_name')
+```
+
+The `CALL fb_GENERATESERVICEACCOUNTKEY` command in the previous code example returns both the service account ID and secret. The secret is shown only once and cannot be retrieved later, so store it securely.
+
+## [](#create-a-user)Create a user
+
+
+
+Once you create the service account, it must be associated with a user. Your organization may have multiple Firebolt accounts, each with its own set of resources, databases, and users. Each service account can only be linked to one user per Firebolt account, but it can be assigned to different users across multiple accounts. This setup allows the service account to work across multiple accounts, while ensuring it is linked to only one user per account.
+
+You can create a user using SQL scripts in the **Develop Space** or through the UI in the **Govern Space**.
+
+### [](#create-a-user-using-the-ui)Create a user using the UI
+
+1. Select the Govern icon () in the left navigation pane to open the **Govern Space**.
+2. Select **Users** from the left sub-menu bar.
+3. Select the **+ Create User** button at the top right of the **Govern Space**.
+4. In the **Create User** window, enter the following:
+
+ - **User Name** - The name of the user to associate with the service account.
+ - **Default Database** - (Optional) The name of the database that is associated with the user.
+ - **Default Engine** - (Optional) The name of the engine that is associated with the user.
+
+ Toggle the radio button next to **Associate a service account**.
+5. Select the name of the service account to associate with the user from the drop-down list under **Service Account Associated**. This drop-down list contains only service accounts that are not already assigned to a user in the current account.
+
+### [](#create-a-user-using-sql)Create a user using SQL
+
+Use the syntax in the following example code to create a user and associate it with a service account in the **SQL Script Editor** in the **Develop Space**:
+
+```
+CREATE USER alex WITH SERVICE_ACCOUNT = service_account_name;
+```
+
+The previous code example creates a user with the username `alex`, and associates it with a service account by its `service_account_name`.
+
+For more information, see [Manage users](https://docs.firebolt.io/godocs/Guides/managing-your-organization/managing-users.html#create-a-new-user).
+
+## [](#test-your-new-service-account)Test your new service account
+
+Once you have set up your service account, use the following code example to send a request to Firebolt’s REST API, and receive an authentication token:
+
+```
+curl -X POST --location 'https://id.app.firebolt.io/oauth/token' \
+--header 'Content-Type: application/x-www-form-urlencoded' \
+--data-urlencode 'grant_type=client_credentials' \
+--data-urlencode 'audience=https://api.firebolt.io' \
+--data-urlencode "client_id=${service_account_id}" \
+--data-urlencode "client_secret=${service_account_secret}"
+```
+
+In the previous code example, use the service account ID from the **Get a service account ID** step for `service_account_id`, and the secret from the **Generate a secret** step for `service_account_secret`.
+
+The following is an example response to the REST API request:
+
+**Response:**
+
+```
+{
+ "access_token":"eyJz93a...k4laUWw",
+ "token_type":"Bearer",
+ "expires_in":86400
+}
+```
+
+In the previous example response, the following apply:
+
+- The `access_token` is a unique token that authorizes your API requests, acting as a temporary key to access resources or perform actions. You can use this token to authenticate with Firebolt’s platform until it expires.
+- The `token_type` is `Bearer`, which means that the access token must be included in an authorization header of your API requests using the format: `Authorization: Bearer <access_token>`.
+- The token `expires_in` indicates the number of seconds until the token expires.
+
+Use the returned `access_token` to authenticate with Firebolt.
+
+## [](#edit-your-service-account)Edit your service account
+
+You can edit your service account using SQL scripts in the **Develop Space** or through the UI in the **Configure Space**.
+
+### [](#edit-your-service-account-using-the-ui)Edit your service account using the UI
+
+1. Select **Configure** icon () in the left navigation pane to open the **Configure Space**.
+2. Select **Service accounts** from the left sub-menu bar.
+3. Select the three horizontal dots (…) to the right of the service account that you want to edit.
+4. Select **Edit service account**.
+
+ In the **Edit service account** pop-up window, you can edit the following:
+
+ - **Name** - The name of the service account.
+ - **Network policy** - The network policy associated with the service account that defines whether an IP address is allowed or blocked from interacting with Firebolt resources.
+ - **Description** - The description of the service account.
+ - **Is organization admin** - Toggle on or off to identify the service account as an organizational admin.
+
+ Select **Save** to keep your edits.
+
+### [](#edit-your-service-account-using-sql)Edit your service account using SQL
+
+Use [ALTER SERVICE ACCOUNT](https://docs.firebolt.io/sql_reference/commands/access-control/alter-service-account.html), as shown in the following example to edit a service account in the **SQL Script Editor** in the **Develop Space**:
+
+```
+ALTER SERVICE ACCOUNT service_account_name SET NETWORK_POLICY = my_network_policy
+```
+
+In the previous code example, the service account’s network policy is set to a new value.
+
+## [](#delete-your-service-account)Delete your service account
+
+You can delete your service account using SQL scripts in the **Develop Space** or through the UI in the **Configure Space**.
+
+> **Note:** You can’t delete a service account if it is linked to users. You must first unlink the service account from all users. You can view all users linked to a service account by navigating to the **Users** section in the **Govern Space**. In the **Users Management** table, each **User Name** shows the name of a **Service Account** if it is associated with one. To unlink a user account, select the three horizontal dots (…) to the right of the **User Name**, and select **Edit user details**. Then, toggle off **Associate a service account**.
+
+### [](#delete-your-service-account-using-the-ui)Delete your service account using the UI
+
+1. Select the **Configure** icon () in the left navigation pane to open the **Configure Space**.
+2. Select **Service accounts** from the left sub-menu bar.
+3. Select the three horizontal dots (…) to the right of the service account that you want to delete.
+4. Select **Delete service account**.
+
+### [](#delete-your-service-account-using-sql)Delete your service account using SQL
+
+Use [DROP SERVICE ACCOUNT](https://docs.firebolt.io/sql_reference/commands/access-control/drop-service-account.html), as shown in the following example to delete a service account in the **SQL Script Editor** in the **Develop Space**:
+
+```
+DROP SERVICE ACCOUNT service_account_name;
+```
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_operate_engines_operate_engines.md b/cmd/docs-scrapper/fireboltdocs/guides_operate_engines_operate_engines.md
new file mode 100644
index 0000000..352cd8a
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_operate_engines_operate_engines.md
@@ -0,0 +1,10 @@
+# [](#operate-engines)Operate engines
+
+Learn how to work with engines using both the UI and SQL, how to size and monitor engines, how to use RBAC to govern engines, and how to use the system engine.
+
+* * *
+
+- [Work with engines](/Guides/operate-engines/working-with-engines-using-ddl.html)
+- [Sizing Engines](/Guides/operate-engines/sizing-engines.html)
+- [System Engine](/Guides/operate-engines/system-engine.html)
+- [Governing Engines](/Guides/operate-engines/rbac-for-engines.html)
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_operate_engines_rbac_for_engines.md b/cmd/docs-scrapper/fireboltdocs/guides_operate_engines_rbac_for_engines.md
new file mode 100644
index 0000000..a1b20ae
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_operate_engines_rbac_for_engines.md
@@ -0,0 +1,59 @@
+# [](#governing-engines)Governing Engines
+
+Use [Role Based Access Control](/Guides/security/rbac.html) (RBAC) to control, at a granular level, which users within an account can create new engines and use, operate, monitor, or modify existing ones. Accordingly, Firebolt provides CREATE, USAGE, OPERATE, MONITOR, and MODIFY permissions for these actions. You can grant these permissions for specific engines or for all engines in a given account. Note that the CREATE ENGINE permission can only be granted at the account level.
+
+Follow these steps to control what permissions a user has for a given engine, or for any engine within an account:
+
+- Create a new role
+- Grant permissions to the role
+- Assign role to a user
+
+**Example 1:** Grant user `kate` permissions to create and operate engines:
+
+```
+CREATE ROLE prodAdminRole;
+
+GRANT CREATE ENGINE ON ACCOUNT myAccount IN ORGANIZATION myOrg TO prodAdminRole;
+
+GRANT OPERATE ON ENGINE myEngine IN ACCOUNT myAccount TO prodAdminRole;
+
+GRANT ROLE prodAdminRole TO USER kate;
+```
+
+**Example 2:** Grant user `kate` permissions to only use and operate engines:
+
+```
+CREATE ROLE prodAdminRole;
+
+GRANT USAGE ON ENGINE myEngine IN ACCOUNT myAccount TO prodAdminRole;
+
+GRANT OPERATE ON ENGINE myEngine IN ACCOUNT myAccount TO prodAdminRole;
+
+GRANT ROLE prodAdminRole TO USER kate;
+```
+
+**Example 3:** Grant user `kate` permissions to use and operate engines and to monitor engine metrics:
+
+```
+CREATE ROLE prodAdminRole;
+
+GRANT USAGE ON ENGINE myEngine IN ACCOUNT myAccount TO prodAdminRole;
+
+GRANT MONITOR USAGE ON ENGINE myEngine IN ACCOUNT myAccount TO prodAdminRole;
+
+GRANT OPERATE ON ENGINE myEngine IN ACCOUNT myAccount TO prodAdminRole;
+
+GRANT ROLE prodAdminRole TO USER kate;
+```
+
+**Example 4:** Grant user `kate` permissions to create and modify engines:
+
+```
+CREATE ROLE prodAdminRole;
+
+GRANT CREATE ENGINE ON ACCOUNT myAccount IN ORGANIZATION myOrg TO prodAdminRole;
+
+GRANT MODIFY ON ENGINE myEngine IN ACCOUNT myAccount TO prodAdminRole;
+
+GRANT ROLE prodAdminRole TO USER kate;
+```
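+
+Permissions granted in the examples above can be withdrawn again. The following is a sketch, assuming `REVOKE` mirrors the `GRANT` syntax shown here:
+
+```
+REVOKE OPERATE ON ENGINE myEngine IN ACCOUNT myAccount FROM prodAdminRole;
+```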
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_operate_engines_sizing_engines.md b/cmd/docs-scrapper/fireboltdocs/guides_operate_engines_sizing_engines.md
new file mode 100644
index 0000000..6e16f03
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_operate_engines_sizing_engines.md
@@ -0,0 +1,45 @@
+# [](#sizing-engines)Sizing Engines
+
+Selecting an appropriate engine size for your workload depends on multiple factors, such as the size of your active dataset, the latency and throughput requirements of your workload, your price-performance considerations, and the number of users and queries your workload is expected to handle concurrently. Our guidance is to start small with an engine size that fits your active dataset, and monitor the workload using the engine observability metrics described below. Based on these metrics, you can then dynamically resize your engine to meet the needs of your workload.
+
+If your workload requires high processing power relative to data size, use a compute-optimized node type. These nodes have approximately the same processing power as storage-optimized nodes, but less memory and cache space, at a lower cost.
+
+## [](#dimensions-of-engine-sizing)Dimensions of engine sizing
+
+Firebolt allows you to change:
+
+- The type of nodes in your engine.
+- The compute family of nodes in your engine.
+- The number of nodes in a cluster of your engine.
+- The number of copies of that cluster in your engine.
+
+See the [engine fundamentals](/Overview/engine-fundamentals.html) page for details.
+
+## [](#using-observability-metrics-to-resize-an-engine)Using Observability Metrics to Resize an Engine
+
+Firebolt provides engine observability metrics that give visibility into how the engine resources are being utilized by your workloads. Use the [Information\_Schema.engine\_metrics\_history](/sql_reference/information-schema/engine-metrics-history.html) view to understand how much CPU, RAM, and disk are utilized by your workloads. In addition, this view also provides details on how often your queries hit the local cache and how much of your query data is spilling onto the disk. These metrics can help you decide whether your engine needs a different node type and whether you need to add more nodes to improve the query performance. Use the [Information\_Schema.engine\_running\_queries](/sql_reference/information-schema/engine-running-queries.html) view to understand how many queries are waiting in the queue to be run. If there are a number of queries still waiting to be run, adding another cluster to your engine may help improve the query throughput.
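
As a rough illustration, the resizing decisions described above can be sketched as a heuristic over these metrics. The function name and threshold values below are illustrative assumptions, not Firebolt guidance:

```python
# Hypothetical helper mapping observability metrics to a resize suggestion.
# Thresholds are illustrative only; tune them against your own workload.
def suggest_resize(cpu_pct: float, cache_hit_pct: float,
                   spilled_bytes: int, queued_queries: int) -> str:
    if queued_queries > 0:
        # queries waiting in the queue -> add a cluster for throughput
        return "add a cluster (scale out for throughput)"
    if spilled_bytes > 0 or cache_hit_pct < 80.0:
        # spilling or poor cache hits -> node type with more RAM/cache
        return "use a larger node type (more RAM/cache)"
    if cpu_pct > 90.0:
        # CPU-bound -> more nodes for query processing
        return "add nodes (more CPU for query processing)"
    return "current size looks adequate"
```

In practice you would feed this from a `SELECT` over `information_schema.engine_metrics_history` rather than hard-coded numbers.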
+
+## [](#initial-sizing)Initial Sizing
+
+**ELT Workloads**
+
+For ELT workloads, the appropriate engine size depends on the number and size of the files used to ingest the data. You can parallelize the ingest process with additional nodes, which can improve performance.
+
+**Queries**
+
+To correctly size an engine for querying data, there are several factors to consider:
+
+- The size of frequently accessed data under your query pattern. More data requires an engine with a larger cache size.
+- The relative amount of processing performed within the queries in your query pattern. More complex queries will generally require more CPU cores.
+- The Queries Per Second (QPS) of your workload. At higher QPS you may need to enable auto-scaling or multiple clusters on your engine.
+- The number of outstanding requests and how long submitted queries run. Longer-running queries may require node types with more memory or additional clusters in the engine.
+
+For query processing, our recommendation is to start with an S or M storage-optimized instance type. Then, run a checksum over the dataset you expect to be queried frequently. Firebolt Engines cache the data locally, which helps serve queries at low latencies. The cache size provided by the engines varies depending on the type of node used in your engines, with each size having twice the cache of the next smallest size. Compute-optimized instances have approximately one quarter of the cache size of storage-optimized instances. After the checksum, you can use [Information\_Schema.engine\_metrics\_history](/sql_reference/information-schema/engine-metrics-history.html) to see the cache utilization percentage. If an acceptable percentage of your active dataset fits, you can then run queries at your expected QPS on the engine.
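
The relative cache relationships above can be made concrete with a small sketch. The numbers are relative units derived only from the statements above (each size doubles the next smallest; compute-optimized has roughly one quarter the cache of storage-optimized) and are not published capacity figures:

```python
# Relative cache capacity implied by the sizing rules above.
# Units are relative (S storage-optimized = 1.0), not actual GiB values.
SIZES = ["S", "M", "L", "XL"]

def relative_cache(size: str, family: str = "storage-optimized") -> float:
    base = 2.0 ** SIZES.index(size)   # S=1, M=2, L=4, XL=8
    if family == "compute-optimized":
        return base / 4.0             # ~1/4 of storage-optimized
    return base
```

For example, an `XL` storage-optimized node has roughly eight times the cache of an `S`, while an `L` compute-optimized node lands near the cache of an `S` storage-optimized node.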
+
+**TIP:** You can use [Multi-Cluster Engine Warmup](/Reference/system-settings.html#multi-cluster-engine-warmup) to submit your checksum queries to all clusters in a multi-cluster engine.
+
+Small and medium storage-optimized engines are available for use right away. Compute-optimized instance types are available, but may see longer engine start times. If you want to use a large or extra-large engine, reach out to [support@firebolt.io](mailto:support@firebolt.io).
+
+**TIP:** You also have the option to run your workload simultaneously on engines with different configurations and use these metrics to identify which configuration best fits your needs.
+
+You will need to have the appropriate [RBAC](/Guides/operate-engines/rbac-for-engines.html) permissions to use the engine observability metrics.
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_operate_engines_system_engine.md b/cmd/docs-scrapper/fireboltdocs/guides_operate_engines_system_engine.md
new file mode 100644
index 0000000..92f9b88
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_operate_engines_system_engine.md
@@ -0,0 +1,111 @@
+# [](#system-engine)System Engine
+
+Firebolt’s system engine enables running various metadata-related queries without having to start an engine. The system engine is always available for you in all databases to select and use.
+
+The system engine supports running the following commands:
+
+- All [access control](/sql_reference/commands/access-control/) commands
+- All [engine](/sql_reference/commands/engines/) commands
+- Most [data definition](/sql_reference/commands/data-definition/) commands. The following commands are not supported:
+
+ - [ALTER TABLE DROP PARTITION](/sql_reference/commands/data-definition/alter-table.html)
+ - [CREATE AGGREGATING INDEX](/sql_reference/commands/data-definition/create-aggregating-index.html)
+ - [CREATE EXTERNAL TABLE](/sql_reference/commands/data-definition/create-external-table.html)
+ - [CREATE TABLE AS SELECT](/sql_reference/commands/data-definition/create-fact-dimension-table-as-select.html)
+- Most [metadata](/sql_reference/commands/metadata/) commands. The following commands are not supported:
+
+ - [SHOW CACHE](/sql_reference/commands/metadata/show-cache.html)
+- Non-data-accessing [SELECT](/sql_reference/commands/queries/select.html) queries like `SELECT CURRENT_TIMESTAMP()`
+- [SELECT](/sql_reference/commands/queries/select.html) queries on some [information\_schema](/sql_reference/information-schema/) views:
+
+ - [information\_schema.accounts](/sql_reference/information-schema/accounts.html)
+ - [information\_schema.applicable\_roles](/sql_reference/information-schema/applicable-roles.html)
+ - [information\_schema.transitive\_applicable\_roles](/sql_reference/information-schema/transitive-applicable-roles.html)
+ - [information\_schema.columns](/sql_reference/information-schema/columns.html)
+ - [information\_schema.catalogs](/sql_reference/information-schema/catalogs.html)
+ - [information\_schema.enabled\_roles](/sql_reference/information-schema/enabled-roles.html)
+ - [information\_schema.engines](/sql_reference/information-schema/engines.html)
+ - [information\_schema.indexes](/sql_reference/information-schema/indexes.html)
+ - [information\_schema.logins](/sql_reference/information-schema/logins.html)
+ - [information\_schema.network\_policies](/sql_reference/information-schema/network_policies.html)
+ - [information\_schema.service\_accounts](/sql_reference/information-schema/service-accounts.html)
+ - [information\_schema.tables](/sql_reference/information-schema/tables.html)
+ - [information\_schema.users](/sql_reference/information-schema/users.html)
+ - [information\_schema.views](/sql_reference/information-schema/views.html)
+
+## [](#using-the-system-engine-via-the-firebolt-manager)Using the system engine via the Firebolt manager
+
+1. In the Firebolt manager, choose the Databases icon in the navigation pane.
+2. Click the SQL Workspace icon for the desired database. If there is no database in your account, create one first.
+3. From the engine selector in the SQL Workspace, choose System Engine, then run one of the supported queries.
+
+## [](#using-the-system-engine-via-sdks)Using the system engine via SDKs
+
+### [](#python-sdk)Python SDK
+
+Connect via the connector without specifying `engine_name`. The `database` parameter is optional.
+
+The system engine does not require a database. If you want to run metadata queries against an existing database with the system engine, specify its name.
+
+**Example**
+
+```
+from firebolt.db import connect
+from firebolt.client import DEFAULT_API_URL
+from firebolt.client.auth import ClientCredentials
+
+client_id = ""
+client_secret = ""
+account_name = ""
+
+with connect(
+ database="", # Omit this parameter if you don't need db-specific operations
+ auth=ClientCredentials(client_id, client_secret),
+ account_name=account_name,
+ api_endpoint=DEFAULT_API_URL,
+) as connection:
+
+ cursor = connection.cursor()
+
+ cursor.execute("SHOW CATALOGS")
+
+ print(cursor.fetchall())
+```
+
+Guidance on creating service accounts can be found in the [service account](/Guides/managing-your-organization/service-accounts.html) section.
+
+### [](#other-sdks)Other SDKs
+
+Any other Firebolt connector can also be used similarly, as long as the engine name is omitted.
+
+## [](#system-engine-limitations)System Engine Limitations
+
+### [](#supported-queries-for-system-engine)Supported queries for system engine
+
+System engine only supports running the metadata-related queries listed above. Additional queries will be supported in future versions.
+
+### [](#rate-limits-for-system-engines)Rate Limits for System Engines
+
+To ensure fair and consistent access to the System Engine for all users, we have introduced rate limits that govern resource usage per account. These limits are designed to prevent resource contention and ensure optimal performance for everyone.
+
+When the rate limits are exceeded on the system engine, the system will return the following error: `429: Account system engine resources usage limit exceeded`. This error typically occurs when an account submits an excessive number of queries or executes highly complex queries that surpass the allocated resource thresholds.
+
+**What to Do If You Encounter Rate Limits**
+
+If you receive the 429 error, consider these steps to resolve the issue:
+
+- Switch to a User Engine: Offload your workloads to a dedicated User Engine if possible. User Engines do not have the same rate limits, making them better suited for higher workloads or complex operations.
+- Review your query patterns and ensure they are not unnecessarily complex or resource-intensive. Use best practices to write efficient queries that minimize resource consumption.
+- Contact Support: If you believe your account has been rate-limited unfairly or you anticipate requiring higher limits, reach out to our support team to discuss adjusting your account’s thresholds.
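
If you automate queries against the system engine, the retry advice above can be sketched as a backoff loop. `run_query` and `RateLimitError` below are hypothetical stand-ins for your own query function and 429 handling, not part of any Firebolt SDK:

```python
import time

class RateLimitError(Exception):
    """Raised by the caller's query function on a 429 response (illustrative)."""

def run_with_backoff(run_query, sql, retries=4, base_delay=1.0):
    # Retry with exponential backoff when the system engine rate-limits us.
    for attempt in range(retries):
        try:
            return run_query(sql)
        except RateLimitError:
            if attempt == retries - 1:
                raise  # still limited: consider offloading to a user engine
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```

This only smooths over transient limits; sustained 429s are better addressed by moving the workload to a user engine, as noted above.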
+
+**Best Practices to Avoid Rate Limits**
+
+- Avoid running multiple concurrent queries that heavily use system resources.
+- Leverage Firebolt’s indexing and other optimization features to streamline your queries.
+- Regularly audit your workloads and usage patterns to align with the system’s best practices.
+
+**Why This Matters**
+
+These rate limits are critical for maintaining a fair and robust environment where all users can achieve reliable performance without disruption from resource-heavy neighbors. This measure aligns with our commitment to delivering consistent and high-quality service across all accounts.
+
+For additional support or questions, please contact our support team or refer to our documentation on optimizing query performance.
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_operate_engines_working_with_engines_using_ddl.md b/cmd/docs-scrapper/fireboltdocs/guides_operate_engines_working_with_engines_using_ddl.md
new file mode 100644
index 0000000..ef2ba5c
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_operate_engines_working_with_engines_using_ddl.md
@@ -0,0 +1,204 @@
+# [](#work-with-engines)Work with engines
+
+You can create, run, modify, and scale Firebolt engines using either the **Firebolt Workspace** [user interface](/Guides/query-data/using-the-develop-workspace.html) (UI) or the [Firebolt API](/API-reference/). Learn how to perform key engine operations, including starting, stopping, resizing, and configuring auto-start/stop settings, using both the UI and SQL commands. Firebolt also allows the dynamic scaling of engines without stopping them.
+
+All the engine operations in this guide can be performed using a [system engine](/Guides/operate-engines/system-engine.html).
+
+Topics:
+
+- [Create engines](#create-engines) – Learn how to create an engine.
+- [Start or resume an engine](#start-or-resume-an-engine) – Learn how to start or resume an engine.
+- [Stop an engine](#stop-an-engine) – Learn how to stop an engine either gracefully or immediately.
+- [Resize engines](#resize-engines) – Learn how to scale engines up or down by adjusting the node type or number of nodes.
+- [Concurrency auto-scaling](#concurrency-auto-scaling) – Learn how to enable auto-scaling for engines to automatically adjust the number of clusters based on workload.
+- [Automatically start or stop an engine](#automatically-start-or-stop-an-engine) – Learn how to configure engines to start and stop automatically based on specific conditions.
+
+## [](#create-engines)Create engines
+
+You can create an engine using SQL scripts or through the UI in the **Develop Space**.
+
+### [](#create-an-engine-using-the-ui)Create an engine using the UI
+
+1. Log in to the [Firebolt Workspace](https://firebolt.go.firebolt.io/signup).
+2. Select the **Develop Space** icon (</>) from the left navigation bar.
+3. Select the red plus (+) button from the top of the left navigation bar.
+4. Select **Create new engine**.
+ 
+5. Enter the engine name, type, and number of nodes.
+ 
+6. Select **Create new engine**.
+
+### [](#create-an-engine-using-the-api)Create an engine using the API
+
+To create an engine, use [CREATE ENGINE](/sql_reference/commands/engines/create-engine.html).
+
+The following code example creates an engine with one cluster containing two nodes of type `S`:
+
+```
+CREATE ENGINE myengine;
+```
+
+The following code example creates an engine with two nodes of type `M`:
+
+```
+CREATE ENGINE myengine WITH
+TYPE="M" NODES=2 CLUSTERS=1;
+```
+
+When creating an engine using the UI, Firebolt preserves the exact capitalization of the engine name. For example, an engine named **MyEngine** will retain its casing. To reference this engine in SQL commands, enclose the name in double quotes: "MyEngine". For more information, visit the [Object Identifiers](/Reference/object-identifiers.html) page.
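
The quoting rule above can be sketched as a small helper. `quote_identifier` is a hypothetical illustration, not a Firebolt SDK function:

```python
# Wrap an identifier in double quotes when it contains non-lowercase
# characters, so SQL references preserve its exact casing.
def quote_identifier(name: str) -> str:
    if name != name.lower():
        escaped = name.replace('"', '""')  # escape embedded double quotes
        return f'"{escaped}"'
    return name
```

For example, `quote_identifier("MyEngine")` yields `"MyEngine"` (quoted), while an all-lowercase name is returned unchanged.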
+
+## [](#start-or-resume-an-engine)Start or resume an engine
+
+### [](#start-an-engine-using-the-ui-)Start an engine using the UI
+
+1. In the **Engines** list, find the engine you want to start.
+2. Open the dropdown menu next to the engine and select **Start engine**.
+ 
+3. The engine status changes to **Running** once started.
+
+### [](#start-an-engine-using-the-api-)Start an engine using the API
+
+To start your engine, use the [START ENGINE](/sql_reference/commands/engines/start-engine.html) command:
+
+```
+START ENGINE myengine;
+```
+
+## [](#stop-an-engine)Stop an engine
+
+### [](#stop-an-engine-using-the-ui-)Stop an engine using the UI
+
+1. In the **Engines** list, find the engine you want to stop.
+2. Open the dropdown menu and select **Stop engine**.
+ 
+
+### [](#stop-an-engine-using-the-api-)Stop an engine using the API
+
+To stop an engine, use the [STOP ENGINE](/sql_reference/commands/engines/stop-engine.html) command:
+
+```
+STOP ENGINE myengine;
+```
+
+To stop an engine immediately without waiting for running queries to complete, use:
+
+```
+STOP ENGINE myengine WITH TERMINATE=TRUE;
+```
+
+Stopping an engine clears its cache. Queries run after restarting will experience a cold start, potentially impacting performance until the cache is rebuilt.
+
+## [](#resize-engines)Resize engines
+
+### [](#scale-engines-up-or-down-using-the-ui-)Scale engines up or down using the UI
+
+1. In the **Engines** list, find the engine to modify.
+2. Open the dropdown menu and select the **More options** icon.
+3. Choose **Modify engine**.
+ 
+4. Choose the new node type and select **Modify engine**.
+ 
+
+### [](#scale-engines-up-or-down-using-the-api-)Scale engines up or down using the API
+
+Use the [ALTER ENGINE](/sql_reference/commands/engines/alter-engine.html) command to change the node type:
+
+```
+ALTER ENGINE my_prod_engine SET TYPE = "M";
+```
+
+The previous example updates all nodes in the engine to use the `M` node type.
+
+### [](#scale-engines-out-or-in-using-the-ui)Scale engines out or in using the UI
+
+1. In the **Engines** list, find the engine to modify.
+2. Open the dropdown menu, select the **More options** icon, and choose **Modify engine**.
+ 
+3. Adjust the number of nodes using the (-) and (+) buttons.
+
+### [](#scale-engines-out-or-in-using-the-api)Scale engines out or in using the API
+
+Use the [ALTER ENGINE](/sql_reference/commands/engines/alter-engine.html) command to change the number of nodes:
+
+```
+ALTER ENGINE my_prod_engine SET NODES = 3;
+```
+
+The previous example updates the engine so that it uses three nodes.
+
+## [](#concurrency-auto-scaling)Concurrency auto-scaling
+
+You can use the `MIN_CLUSTERS` and `MAX_CLUSTERS` parameters to enable auto-scaling and allow the engine to adjust the number of clusters based on workload. Firebolt scales the clusters between the defined minimum and maximum based on engine CPU usage, time in the queue, and other factors that vary with demand. Auto-scaling helps your engine adapt to fluctuating workloads, improving performance, minimizing delays during high demand, avoiding bottlenecks, ensuring consistent query response times, and optimizing resource utilization for a more cost-effective solution.
+
+To use auto-scale, do the following:
+
+1. Create an engine with `MIN_CLUSTERS` set to a value and `MAX_CLUSTERS` set to a value higher than `MIN_CLUSTERS` as shown in the following code example:
+
+ ```
+ CREATE ENGINE your_engine with MIN_CLUSTERS = 1 MAX_CLUSTERS = 2;
+ ```
+
+   In the previous code example, if `MIN_CLUSTERS` has the same value as `MAX_CLUSTERS`, auto-scaling is not enabled.
+2. Check the `information_schema.engines` view to check how many clusters are being used by your engine. The following code example returns the number of `CLUSTERS`, `MIN_CLUSTERS`, and `MAX_CLUSTERS` from the specified engine:
+
+ ```
+ SELECT CLUSTERS, MIN_CLUSTERS, MAX_CLUSTERS
+ FROM information_schema.engines WHERE engine_name = 'your_engine'
+ ```
+
+ You can also select the **Engine monitoring** tab at the bottom of the **SQL script editor** in the **Develop Workspace** as shown in the following image:
+
+ 
+
+ The **Engine monitoring** tab displays CPU, memory, and disk use, cache reads, number of running and suspended queries, and spilled bytes.
+3. Test auto-scaling by running a query that overloads a single cluster, then check `information_schema.engines` to observe the change in the `CLUSTERS` value. The following example is one such query, but any query that overloads the engine will work.
+
+ 1. In the **Develop Space**, run the following example query **in two separate tabs simultaneously**.
+ The following code example calculates the maximum product of `a.x` and `b.y` after casting them to `BIGINT`, and the total count of joined rows from two generated series of numbers ranging from 1 to 1,000,000:
+
+ ```
+ SELECT MAX(a.x::bigint * b.y::bigint), COUNT(*)
+ FROM GENERATE_SERIES(1, 1000000) AS a(x)
+ JOIN GENERATE_SERIES(1, 1000000) AS b(y) ON TRUE;
+ ```
+   2. After about a minute, run the `information_schema.engines` query from step 2 in a new tab. The query should return a `CLUSTERS` value of `2`, as shown in the following table:
+
+      | clusters | min\_clusters | max\_clusters |
+      | --- | --- | --- |
+      | 2 | 1 | 2 |
+ 3. Stop the engine to stop resource consumption. These queries can run for a very long time and prevent the engine from stopping automatically. The following code example stops an engine without waiting for running queries to finish:
+
+ ```
+ STOP ENGINE your_engine WITH TERMINATE=true
+ ```
+
+If you are using Firebolt in preview mode, you can only use a single cluster for your engines. If you want to try using multi-cluster engines, contact [Firebolt support](mailto:support@firebolt.io). Additionally, when scaling an engine, both the old and new compute resources may be active at the same time for a period. This simultaneous operation can result in higher consumption of Firebolt Units ([FBUs](/Overview/engine-consumption.html)).
+
+## [](#automatically-start-or-stop-an-engine)Automatically start or stop an engine
+
+You can configure an engine to start automatically after creation and to stop after a set idle time.
+
+### [](#configure-automatic-startstop-using-the-ui)Configure automatic start/stop using the UI
+
+1. In the **Create new engine** menu, open **Advanced Settings**.
+2. Disable **Start engine immediately** to prevent the engine from starting upon creation.
+ 
+3. To configure automatic stopping, enable **Automatically stop engine** and set your idle timeout. The default is 20 minutes. Toggle the button off to disable auto-stop.
+ 
+
+### [](#configure-automatic-startstop-using-the-api)Configure automatic start/stop using the API
+
+Use the [CREATE ENGINE](/sql_reference/commands/engines/create-engine.html) command to set auto-start and auto-stop options:
+
+```
+CREATE ENGINE my_prod_engine WITH
+INITIALLY_STOPPED = true AUTO_STOP = 10;
+```
+
+The previous example creates an engine that remains stopped after creation and auto-stops after 10 minutes of inactivity.
+
+To modify the auto-stop feature later, use the [ALTER ENGINE](/sql_reference/commands/engines/alter-engine.html) command:
+
+```
+ALTER ENGINE my_prod_engine SET AUTO_STOP = 30;
+```
+
+The `INITIALLY_STOPPED` setting can only be set during engine creation and cannot be modified afterward.
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_query_data.md b/cmd/docs-scrapper/fireboltdocs/guides_query_data.md
new file mode 100644
index 0000000..c6f0c8f
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_query_data.md
@@ -0,0 +1,10 @@
+# [](#query-data)Query data
+
+Querying data in Firebolt is designed to be fast, flexible, and efficient, allowing you to extract insights from large datasets. Firebolt supports interactive querying through the **Develop Workspace** for hands-on exploration and offers an API for programmatic access, making it easy to integrate queries into automated workflows.
+
+Firebolt provides the following approaches for querying data:
+
+- The [Develop Workspace](/Guides/query-data/using-the-develop-workspace.html) – An intuitive, web-based user interface for writing, running, and refining SQL queries. It simplifies the query development process with features like syntax highlighting, instant query results, and result visualization. Designed for interactive exploration and analysis, the **Develop Workspace** can help you work efficiently with large datasets, troubleshoot queries, and fine-tune performance in a single integrated environment.
+- [Drivers](/Guides/developing-with-firebolt/) – Libraries and SDKs that enable you to connect to Firebolt databases from your applications, scripts, and tools. Firebolt provides drivers for popular programming languages like Python, .NET, and Java, allowing you to interact with Firebolt databases programmatically and integrate them into your applications.
+- [Connectors](/Guides/integrations/integrations.html) – Pre-built integrations that enable you to connect Firebolt to third-party tools, data sources, and services. Firebolt offers connectors for popular data tools like Tableau, Looker, and dbt, allowing you to seamlessly query Firebolt databases from your preferred analytics platforms.
+- The [Firebolt API](/Guides/query-data/using-the-api.html) – A REST API that provides access to Firebolt databases. It allows you to build a custom integration with Firebolt when none of the available drivers or connectors meets your requirements.
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_query_data_using_the_api.md b/cmd/docs-scrapper/fireboltdocs/guides_query_data_using_the_api.md
new file mode 100644
index 0000000..d46ae34
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_query_data_using_the_api.md
@@ -0,0 +1,121 @@
+# [](#firebolt-api)Firebolt API
+
+Use the Firebolt REST API to execute queries on engines programmatically. Learn how to use the API, including authentication, working with engines and executing queries. A service account is required to access the API. Learn about [managing programmatic access to Firebolt](/Guides/managing-your-organization/service-accounts.html).
+
+- [Firebolt API](#firebolt-api)
+
+ - [Create a service account and associate it with a user](#create-a-service-account-and-associate-it-with-a-user)
+ - [Use tokens for authentication](#use-tokens-for-authentication)
+ - [Get the system engine URL](#get-the-system-engine-url)
+ - [Execute a query on the system engine](#execute-a-query-on-the-system-engine)
+ - [Get a user engine URL](#get-a-user-engine-url)
+ - [Execute a query on a user engine](#execute-a-query-on-a-user-engine)
+
+## [](#create-a-service-account-and-associate-it-with-a-user)Create a service account and associate it with a user
+
+Create a service account with the organization administrator privilege; that is, the service account property `is_organization_admin` must be `true`. Next, create a user with the role privileges you want the service account to have, and associate the service account with that user.
+
+## [](#use-tokens-for-authentication)Use tokens for authentication
+
+To authenticate with Firebolt via its REST API using a service account as described above, send the following request to receive an authentication token:
+
+```
+curl -X POST --location 'https://id.app.firebolt.io/oauth/token' \
+--header 'Content-Type: application/x-www-form-urlencoded' \
+--data-urlencode 'grant_type=client_credentials' \
+--data-urlencode 'audience=https://api.firebolt.io' \
+--data-urlencode "client_id=${service_account_id}" \
+--data-urlencode "client_secret=${service_account_secret}"
+```
+
+where:
+
+| Property | Data type | Description |
+| --- | --- | --- |
+| client\_id | TEXT | The service [account ID](/Guides/managing-your-organization/service-accounts.html#get-a-service-account-id). |
+| client\_secret | TEXT | The service [account secret](/Guides/managing-your-organization/service-accounts.html#generate-a-secret). |
+
+**Response**
+
+```
+{
+ "access_token":"access_token_value",
+ "token_type":"Bearer",
+ "expires_in":86400
+}
+```
+
+In the previous example response, the following apply:
+
+- The `access_token` is a unique token that authorizes your API requests that acts as a temporary key to access resources or perform actions. You can use this token to authenticate with Firebolt’s platform until it expires.
+- The `token_type` is `Bearer`, which means that the access token must be included in an authorization header of your API requests using the format: `Authorization: Bearer <access_token>`.
+- The token `expires_in` indicates the number of seconds until the token expires.
+
+Use the returned `access_token` to authenticate with Firebolt.
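
A minimal sketch of consuming this token response in Python, using only the fields shown above (the helper names are illustrative):

```python
import time

def auth_headers(token_response: dict) -> dict:
    # Build the Authorization header from the token endpoint's response.
    return {"Authorization": f"Bearer {token_response['access_token']}"}

def expires_at(token_response: dict, now: float = None) -> float:
    # Absolute epoch time after which a fresh token must be requested.
    start = time.time() if now is None else now
    return start + token_response["expires_in"]
```

Cache the token and refresh it shortly before `expires_at` rather than requesting a new one per query.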
+
+To run a query using the API, you must first obtain the URL of the engine you want to run it on. Queries can be run against any engine in the account, including the system engine.
+
+## [](#get-the-system-engine-url)Get the system engine URL
+
+Use the following endpoint to return the system engine URL for `<account_name>`:
+
+```
+curl https://api.app.firebolt.io/web/v3/account/<account_name>/engineUrl \
+-H 'Accept: application/json' \
+-H 'Authorization: Bearer <access_token>'
+```
+
+**Example:** `https://api.app.firebolt.io/web/v3/account/my-account/engineUrl`
+
+**Response**
+
+```
+{
+ "engineUrl":"<account_name>.api.us-east-1.app.firebolt.io"
+}
+```
+
+## [](#execute-a-query-on-the-system-engine)Execute a query on the system engine
+
+Use the following endpoint to run a query on the system engine:
+
+```
+curl --location 'https://<system_engine_URL>' \
+--header 'Authorization: Bearer <access_token>' \
+--data '<SQL_query>'
+```
+
+where:
+
+| Property | Data type | Description |
+| --- | --- | --- |
+| system engine URL | TEXT | The system engine URL ([retrieved here](#get-the-system-engine-url)) |
+| SQL query | TEXT | Any valid SQL query |
+| database name (optional) | TEXT | The database name |
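
The same request can be built with Python's standard library instead of curl. The engine URL and token below are placeholders; in this sketch the request is constructed but not sent:

```python
import urllib.request

def build_query_request(engine_url: str, token: str, sql: str):
    # POST the SQL text as the request body, authorized with the bearer token.
    return urllib.request.Request(
        f"https://{engine_url}",
        data=sql.encode("utf-8"),
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )

req = build_query_request("example.api.us-east-1.app.firebolt.io",
                          "my-token", "SELECT 1;")
# Send with urllib.request.urlopen(req) when you have real credentials.
```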
+
+## [](#get-a-user-engine-url)Get a user engine URL
+
+Get a user engine url by running the following query against the `information_schema.engines` table:
+
+```
+SELECT url
+FROM information_schema.engines
+WHERE engine_name='<engine_name>'
+```
+
+You can run the query on the system engine using the API with the following request:
+
+```
+curl --location 'https://<system_engine_URL>/query' \
+--header 'Authorization: Bearer <access_token>' \
+--data 'SELECT * FROM information_schema.engines WHERE engine_name='\''my_engine'\'''
+```
+
+## [](#execute-a-query-on-a-user-engine)Execute a query on a user engine
+
+Use the following endpoint to run a query on a user engine:
+
+```
+curl --location 'https://<user_engine_URL>?database=<database_name>' \
+--header 'Authorization: Bearer <access_token>' \
+--data '<SQL_query>'
+```
+
+where:
+
+| Property | Data type | Description |
+| --- | --- | --- |
+| user engine URL | TEXT | The user engine URL ([retrieved here](#get-a-user-engine-url)) |
+| database name | TEXT | The database to run the query against |
+| SQL query | TEXT | Any valid SQL query |
+
+Each request carries a single query. To run multiple statements, send each query in its own request.
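
The one-query-per-request rule can be illustrated with a naive splitter. This sketch splits on `;` and does not handle semicolons inside string literals or comments, so treat it as an illustration only:

```python
# Split a multi-statement SQL script into one statement per API request.
def split_statements(script: str) -> list:
    parts = [s.strip() for s in script.split(";")]
    return [s + ";" for s in parts if s]
```

Each returned statement would then be sent as the `--data` body of its own request.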
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_query_data_using_the_develop_workspace.md b/cmd/docs-scrapper/fireboltdocs/guides_query_data_using_the_develop_workspace.md
new file mode 100644
index 0000000..a61949a
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_query_data_using_the_develop_workspace.md
@@ -0,0 +1,157 @@
+# [](#use-the-develop-space)Use the Develop Space
+
+- [Open the Develop Space](#open-the-develop-space)
+- [A quick tour](#a-quick-tour)
+- [Using the document editor](#using-the-document-editor)
+
+ - [Using auto-complete](#using-auto-complete)
+ - [Using script templates](#using-script-templates)
+ - [Using the CREATE EXTERNAL TABLE template to import data](#using-the-create-external-table-template-to-import-data)
+- [Managing scripts](#managing-scripts)
+- [Running scripts and working with results](#running-scripts-and-working-with-results)
+
+ - [Viewing results](#viewing-results)
+ - [Viewing multi-statement script results](#viewing-multi-statement-script-results)
+ - [Exporting results to a local hard drive](#exporting-results-to-a-local-hard-drive)
+- [Switching between light and dark mode](#switching-between-light-and-dark-mode)
+- [Keyboard shortcuts for the Develop Space](#keyboard-shortcuts-for-the-develop-space)
+
+ - [Query operations](#query-operations)
+ - [Script management](#script-management)
+ - [Search functionality](#search-functionality)
+ - [Editing text](#editing-text)
+
+The **Firebolt Workspace** has a **Develop Space** that you use to edit and run SQL scripts and view query results.
+
+## [](#open-the-develop-space)Open the Develop Space
+
+You can launch the space for a database by clicking the **Develop** icon from the left navigation pane or clicking the “+” icon next to “Script 1”.
+
+
+
+
+
+**Starting the Develop Space for the last database you worked with**
+
+1. Choose the **</>** icon from the left navigation pane.
+
+ 
+
+ The space for the database that you last worked with will open, and the database will be selected from the list.
+2. To switch to a different database’s space, choose it from the dropdown menu in the **Databases** panel.
+
+## [](#a-quick-tour)A quick tour
+
+The **Develop Space** is organized into two panels.
+
+- The left panel is the explore panel. You can use it to navigate to different databases and to work with different scripts in your database.
+- The center panel is the document editor. You can use it to edit scripts, save them, and run scripts. When you run a script, the results will be shown in the bottom part of the pane.
+
+ 
+
+## [](#using-the-document-editor)Using the document editor
+
+The document editor uses tabs to help you organize your SQL scripts. You can switch tabs to work with different scripts and run them. You can have multiple query statements on the same tab. Each statement must be terminated by a semi-colon (`;`).
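+
+For example, a single tab can hold a short multi-statement script like the following sketch, with each statement ending in a semi-colon (the table and column names here are illustrative):
+
+```
+CREATE TABLE IF NOT EXISTS my_table (id INT, name TEXT);
+INSERT INTO my_table VALUES (1, 'example');
+SELECT * FROM my_table;
+```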
+
+### [](#using-auto-complete)Using auto-complete
+
+As you enter your code in a script tab, Firebolt suggests keywords and object names from the chosen database. Press the tab key to add the first suggestion in the list to your script, or use the arrow keys to select a different item and then press the tab key.
+
+### [](#using-script-templates)Using script templates
+
+Script templates are available for common tasks, such as creating fact or dimension tables. Place the cursor in the editor where you want to insert code, choose the **</+** icon, and then select a query template from the list.
+
+### [](#using-the-create-external-table-template-to-import-data)Using the CREATE EXTERNAL TABLE template to import data
+
+To create an external table, which is the first step for ingesting data into Firebolt, choose the **Import Data** button from the object pane or choose the download icon and then choose **Import data** as shown in the example below.
+
+Firebolt creates a new tab with a `CREATE EXTERNAL TABLE` statement.
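+
+The generated statement will look broadly like the following sketch; the bucket URL, object pattern, and column list shown here are placeholders that you replace with values for your own data:
+
+```
+CREATE EXTERNAL TABLE my_external_table (
+    id INT,
+    name TEXT
+)
+URL = 's3://my_bucket/'
+OBJECT_PATTERN = '*.parquet'
+TYPE = (PARQUET);
+```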
+
+## [](#managing-scripts)Managing scripts
+
+- [To rename a script](#scriptrename)
+- [To copy a script](#scriptcopy)
+- [To export a script and download it as a .sql file](#scriptexport)
+
+**Renaming a script**[]()
+
+- Choose the vertical ellipses next to the script name in the left pane, choose **Rename script**, type a new name, and then press ENTER.
+
+**Copying a script**[]()
+
+- Choose the vertical ellipses next to the script name in the left pane, choose **Duplicate script**, and then press ENTER. Firebolt saves a new script with `_copy` appended to the original script name.
+
+**Exporting a script and downloading it as a .sql file**[]()
+
+- Choose the vertical ellipses next to the script name in the left pane, and then choose **Export script**.
+
+ Firebolt downloads the file to your browser’s default download directory, using the script name with a `.sql` extension.
+
+## [](#running-scripts-and-working-with-results)Running scripts and working with results
+
+At the bottom of each script tab, you can choose **Run** to execute SQL statements. SQL statements can only run on running engines. If an engine isn’t running, you can select it from the list and then choose the **Start** button for that engine. For more information about engines, see [Operate engines](/Guides/operate-engines/operate-engines.html).
+
+You can run all statements in a script or select snippets of SQL to run.
+
+**Running all SQL statements in a script**
+
+- Position the cursor anywhere in the script editor and then choose **Run**. All SQL statements must be terminated by a semi-colon (`;`) or an error occurs.
+
+**Running a snippet of SQL as a statement**
+
+- Select the SQL code you want to run as a statement and then choose **Run**. Behind the scenes, Firebolt automatically appends a semi-colon to the selected SQL code so it can run as a statement.
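+
+For example, selecting only the snippet below (note the missing semi-colon) and choosing **Run** executes it as a complete statement; the table name is illustrative:
+
+```
+SELECT count(*) FROM my_table
+```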
+
+### [](#viewing-results)Viewing results
+
+After you run a script or query statement, the results appear below the script editor, along with statistics about query execution, such as the statement’s status, duration, and more.
+
+
+
+### [](#viewing-multi-statement-script-results)Viewing multi-statement script results
+
+When you run a script that has multiple SQL statements with result sets (`SELECT` statements), each result is shown on a separate line with statistics about statement execution. The first statement that ran is numbered 1 and appears at the bottom of the list.
+
+To view the results table for a result set, choose the table icon as shown in the example below.
+
+
+
+### [](#exporting-results-to-a-local-hard-drive)Exporting results to a local hard drive
+
+You can export up to 10,000 rows of query results to your local hard drive after you run a query.
+
+1. Choose the download icon (see image below).
+2. Choose **Export table as CSV** or **Export table as JSON**.
+ Firebolt downloads the file type that you chose to the default download location for your browser.
+
+You can export the results of a single query, or a summary of all queries run in your script along with their statistics.
+
+## [](#switching-between-light-and-dark-mode)Switching between light and dark mode
+
+Click on the toggle at the bottom of the left navigation pane to switch between light and dark mode.
+
+
+
+## [](#keyboard-shortcuts-for-the-develop-space)Keyboard shortcuts for the Develop Space
+
+- [Query operations](#query-operations)
+- [Script management](#script-management)
+- [Search functionality](#search-functionality)
+- [Editing text](#editing-text)
+
+**Tip:** Use the **Keyboard shortcuts panel** (`Ctrl + Shift + ?`) to quickly view available shortcuts directly within the Develop Space.
+
+### [](#query-operations)Query operations
+
+| Function | Windows & Linux Shortcut | Mac Shortcut |
+| --- | --- | --- |
+| **Run** the **currently selected query**. | Ctrl + Enter | ⌘ + Enter |
+| **Run all** queries in the current script. | Ctrl + Shift + Enter | ⌘ + Shift + Enter |
+| **Toggle** expanding or collapsing **query results**. | Ctrl + Alt + E | ⌘ + Option + E |
+
+### [](#script-management)Script management
+
+| Function | Windows & Linux Shortcut | Mac Shortcut |
+| --- | --- | --- |
+| **Create** a new script. | Ctrl + Alt + N | ⌘ + Option + N |
+| **Jump** to a **previous** script. | Ctrl + Alt + \[ | ⌘ + Option + \[ |
+| **Jump** to the **next** script. | Ctrl + Alt + ] | ⌘ + Option + ] |
+| **Close** the **current** script. | Ctrl + Alt + X | ⌘ + Option + X |
+| **Close all** scripts. | Ctrl + Alt + G | ⌘ + Option + G |
+| **Close all but** the **current** script. | Ctrl + Alt + O | ⌘ + Option + O |
+
+### [](#search-functionality)Search functionality
+
+| Function | Windows & Linux Shortcut | Mac Shortcut |
+| --- | --- | --- |
+| **Open** a **search** panel. | Ctrl + F | ⌘ + F |
+| **Find** the **next search result**. | F3 | F3 |
+| **Find** the **previous search result**. | Shift + F3 | Shift + F3 |
+
+### [](#editing-text)Editing text
+
+| Function | Windows & Linux Shortcut | Mac Shortcut |
+| --- | --- | --- |
+| **Toggle** adding or removing a **comment marker** for the current line. | Ctrl + / | Cmd + / |
+| **Toggle** adding or removing a **block comment marker** around a block of code or text. | Shift + Alt + A | Shift + Option + A |
+| **Automatically organize and indent** code for readability. | Ctrl + Alt + F | ⌘ + Option + F |
+| **Copy** the selected lines and paste them directly **above** the original. | Alt + Shift + Up arrow | Shift + Option + Up arrow |
+| **Move** the selected lines and paste them directly **above** the original without creating a duplicate. | Alt + Up arrow | Option + Up arrow |
+| **Copy** the selected lines and paste them directly **below** the original. | Alt + Shift + Down arrow | Shift + Option + Down arrow |
+| **Move** the selected lines and paste them directly **below** the original without creating a duplicate. | Alt + Down arrow | Option + Down arrow |
+| **Select text** to the **left** of the cursor. | Alt + Shift + Left arrow | Ctrl + Shift + Left arrow |
+| **Select text** to the **right** of the cursor. | Alt + Shift + Right arrow | Ctrl + Shift + Right arrow |
+| **Select** the **entire line**. | Alt + L | Ctrl + L |
+| **Decrease** the **indentation level** of the current or selected lines. | Ctrl + \[ | Cmd + \[ |
+| **Increase** the **indentation level** of the current or selected lines. | Ctrl + ] | Cmd + ] |
+| **Delete** the current or selected **lines**. | Shift + Ctrl + K | Shift + Cmd + K |
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_security.md b/cmd/docs-scrapper/fireboltdocs/guides_security.md
new file mode 100644
index 0000000..79183de
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_security.md
@@ -0,0 +1,12 @@
+# [](#configure-security)Configure security
+
+Learn how to reduce potential attack surface with Firebolt’s security features, including multi-factor authentication and single-sign-on (SSO) integration, role-based access control (RBAC) and network policies to secure data assets in data warehouses.
+
+* * *
+
+- [Configure SSO](/Guides/security/sso/)
+- [Role-based access control (RBAC)](/Guides/security/rbac.html)
+- [Network policies](/Guides/security/network-policies.html)
+- [Multi-factor authentication](/Guides/security/enabling-mfa.html)
+- [Ownership](/Guides/security/ownership.html)
+- [AWS PrivateLink](/Guides/security/privatelink.html)
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_security_enabling_mfa.md b/cmd/docs-scrapper/fireboltdocs/guides_security_enabling_mfa.md
new file mode 100644
index 0000000..4ce62a0
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_security_enabling_mfa.md
@@ -0,0 +1,36 @@
+# [](#enable-multi-factor-authentication-mfa)Enable multi-factor authentication (MFA)
+
+Enable multi-factor authentication (MFA) as an additional layer of security to protect data that is accessible through Firebolt. With MFA enabled, users must authenticate with a one-time code generated by their mobile device upon login. MFA can be enabled per login.
+
+Enabling MFA for a login requires the org\_admin role.
+
+## [](#enable-mfa-for-a-login)Enable MFA for a login
+
+### [](#sql)SQL
+
+To enable MFA for a login using SQL, use the [ALTER LOGIN](/sql_reference/commands/access-control/alter-login.html) statement. For example:
+
+```
+ALTER LOGIN "alex@acme.com" SET IS_MFA_ENABLED = TRUE;
+```
+
+Multi-factor authentication can also be set for new logins, with the [CREATE LOGIN](/sql_reference/commands/access-control/create-login.html) command. For example:
+
+```
+CREATE LOGIN "betsy@acme.com" SET IS_MFA_ENABLED = TRUE;
+```
+
+### [](#ui)UI
+
+To enable MFA for a login in the UI:
+
+
+
+1. Click **Configure** to open the configure space, then choose **Logins** from the menu.
+2. Search for the relevant login using the top search filters or by scrolling through the logins list. Toggle on **Is MFA enabled**.
+
+
+
+MFA can also be enabled when creating a login, by toggling **Is MFA enabled** in the **Create login** window:
+
+
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_security_network_policies.md b/cmd/docs-scrapper/fireboltdocs/guides_security_network_policies.md
new file mode 100644
index 0000000..16b0fae
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_security_network_policies.md
@@ -0,0 +1,101 @@
+# [](#manage-network-policies)Manage network policies
+
+By default, Firebolt accepts traffic from any IP address. As an additional layer of security, you can configure individual Firebolt logins or service accounts so their traffic must originate only from the IP addresses that you specify. For each configuration (network policy), you specify a list of IP addresses from which traffic is allowed (the allow list) and a list of IP addresses from which traffic is denied (the blocked list). A network policy is a collection of allowed and blocked lists of IP addresses.
+
+Network policies can be configured at the organization level and also per login or service account. When evaluating a network policy, Firebolt first validates the login or service account’s IP address against the policy set at the organization level. If there is no organization-level network policy (or the organization-level network policy does not allow access), the network policy at the login or service account level is validated. If no network policy allows access, the user receives a `401 Unauthorized` response.
+
+The IP allow and blocked lists used to specify a network policy are specified as comma-separated IPv4 addresses and/or IPv4 address ranges in CIDR format. You can apply the same list to one or many users, and each user can have unique lists. You can specify lists manually or import lists of addresses and ranges from a CSV file saved locally. You can add, edit or delete network policies using SQL or in the UI.
+
+To view all network policies, click **Configure** to open the configure space, then choose **Network policies** from the menu, or query the [information\_schema.network\_policies](/sql_reference/information-schema/network_policies.html) view.
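+
+The same information is available from SQL by querying the view directly, for example:
+
+```
+SELECT * FROM information_schema.network_policies;
+```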
+
+Managing network policies requires the org\_admin role.
+
+## [](#create-a-network-policy)Create a network policy
+
+### [](#sql)SQL
+
+To create a network policy using SQL, use the [CREATE NETWORK POLICY](/sql_reference/commands/access-control/create-network-policy.html) statement. For example:
+
+```
+CREATE NETWORK POLICY my_network_policy WITH ALLOWED_IP_LIST = ('4.5.6.1', '2.4.5.1') DESCRIPTION = 'my new network policy'
+```
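+
+Because allow lists also accept IPv4 ranges in CIDR format, a policy covering an entire subnet can be written as in this sketch (the policy name and range are illustrative):
+
+```
+CREATE NETWORK POLICY office_policy WITH ALLOWED_IP_LIST = ('10.20.30.0/24') DESCRIPTION = 'allow the office subnet'
+```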
+
+### [](#ui)UI
+
+To create a network policy via the UI:
+
+
+
+1. Click **Configure** to open the configure space, then choose **Network policies** from the menu.
+2. From the Network policies management page, choose **Create a new network policy**.
+3. Enter a network policy name. Optionally, enter a network policy description. To add to the allow list, enter comma-separated IPv4 addresses, or IPv4 address ranges in CIDR format under **Grant access from selected allowed IP addresses**, or choose **import file** to read IP addresses from a CSV file.
+4. Enter addresses for the block list under **Deny access from selected blocked IP addresses**.
+5. Choose **Save**.
+
+For each user, the Allowed IPs and Blocked IPs are updated to reflect the total number of IP addresses from each list that you specified for that user. Network policies created in the UI are automatically attached to the organization to which the policy creator is logged in.
+
+## [](#attach-a-network-policy-to-an-organization)Attach a network policy to an organization
+
+### [](#sql-1)SQL
+
+When a network policy is created in the UI, it is automatically attached to the organization the creator is logged in to. To attach or detach a network policy using SQL, use the [ALTER ORGANIZATION](/sql_reference/commands/data-definition/alter-organization.html) command. For example:
+
+```
+ALTER ORGANIZATION my_organization SET NETWORK_POLICY = my_network_policy
+```
+
+or to detach:
+
+```
+ALTER ORGANIZATION my_organization SET NETWORK_POLICY = DEFAULT
+```
+
+### [](#ui-1)UI
+
+To attach or detach a network policy for an organization via the UI:
+
+
+
+1. Click **Configure** to open the configure space, then choose **Network policies** from the menu.
+2. Search for the relevant network policy using the top search filters or by scrolling through the list.
+3. Switch the **Is organizational** toggle to on or off.
+
+## [](#edit-a-network-policy)Edit a network policy
+
+### [](#sql-2)SQL
+
+To edit a network policy using SQL, use the [ALTER NETWORK POLICY](/sql_reference/commands/access-control/alter-network-policy.html) statement. For example:
+
+```
+ALTER NETWORK POLICY my_network_policy SET ALLOWED_IP_LIST = ('4.5.6.7', '2.4.5.7') BLOCKED_IP_LIST = ('6.7.8.9') DESCRIPTION = 'updated network policy'
+```
+
+### [](#ui-2)UI
+
+To edit a network policy via the UI:
+
+1. Click **Configure** to open the configure space, then choose **Network policies** from the menu.
+2. Search for the relevant network policy using the top search filters or by scrolling through the list. Hover over the right-most column to make the network policy menu appear, then choose **Edit network policy**.
+3. Edit the description and the allowed and blocked IP address lists as needed, then choose **Save**.
+
+
+
+## [](#delete-a-network-policy)Delete a network policy
+
+### [](#sql-3)SQL
+
+To delete a network policy using SQL, use the [DROP NETWORK POLICY](/sql_reference/commands/access-control/drop-network-policy.html) statement. For example:
+
+```
+DROP NETWORK POLICY my_network_policy [ RESTRICT | CASCADE ]
+```
+
+### [](#ui-3)UI
+
+To delete a network policy via the UI:
+
+1. Click **Configure** to open the configure space, then choose **Network policies** from the menu.
+2. Search for the relevant network policy using the top search filters or by scrolling through the list. Hover over the right-most column to make the network policy menu appear, then choose **Delete network policy**. Confirm that you also want to remove links to the network policy by choosing **Remove the linkage to logins, service accounts, or to the entire organization**.
+3. Choose **Confirm**.
+
+
\ No newline at end of file
diff --git a/cmd/docs-scrapper/fireboltdocs/guides_security_ownership.md b/cmd/docs-scrapper/fireboltdocs/guides_security_ownership.md
new file mode 100644
index 0000000..22bb4c7
--- /dev/null
+++ b/cmd/docs-scrapper/fireboltdocs/guides_security_ownership.md
@@ -0,0 +1,77 @@
+# [](#ownership)Ownership
+
+Ownership allows users to perform all operations on any object they created without having to grant privileges for these operations manually. This provides a smoother user experience because objects are immediately available to use as they are created. These operations include granting privileges on owned objects.
+
+## [](#supported-object-types)Supported object types
+
+The object types that support ownership are:
+
+- Role
+- User
+- Engine
+- Database
+- Schema
+- Table
+- View
+
+The current owner of an object can be viewed in the corresponding information\_schema view:
+
+| Object | View |
+| --- | --- |
+| Role | N/A |
+| User | [information\_schema.users](/sql_reference/information-schema/users.html) |
+| Database | [information\_schema.catalogs](/sql_reference/information-schema/catalogs.html) |
+| Engine | [information\_schema.engines](/sql_reference/information-schema/engines.html) |
+| Schema | [information\_schema.schemata](/sql_reference/information-schema/schemata.html) |
+| Table | [information\_schema.tables](/sql_reference/information-schema/tables.html) |
+| View | [information\_schema.views](/sql_reference/information-schema/views.html) or [information\_schema.tables](/sql_reference/information-schema/tables.html) |
+
+Index ownership, shown in [information\_schema.indexes](/sql_reference/information-schema/indexes.html), will always show the table owner as an index’s owner.
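+
+For example, a sketch of checking table ownership through the view listed above (the exact column names may vary by Firebolt version):
+
+```
+SELECT table_name, table_owner
+FROM information_schema.tables;
+```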
+
+## [](#changing-an-objects-owner)Changing an object’s owner
+
+The owner of an object may alter its ownership using the following syntax:
+
+```
+ALTER