diff --git a/notebooks/load-kafka-template/notebook.ipynb b/notebooks/load-kafka-template/notebook.ipynb index a1dc283..b5a9481 100644 --- a/notebooks/load-kafka-template/notebook.ipynb +++ b/notebooks/load-kafka-template/notebook.ipynb @@ -17,7 +17,7 @@ ] }, { - "id": "1bee474b", + "id": "17e0a577", "cell_type": "markdown", "metadata": {}, "source": [ @@ -33,6 +33,7 @@ { "attachments": {}, "cell_type": "markdown", + "id": "46a1d738", "metadata": {}, "source": [ "
Define the BOOTSTRAP_SERVER, PORT, TOPIC, SASL_USERNAME, SASL_MECHANISM, SECURITY_PROTOCOL, and SASL_PASSWORD variables below for the integration, replacing the placeholder values with your own.
\n", "
"
- ],
- "id": "979e53c2"
+ ]
},
{
"attachments": {},
"cell_type": "markdown",
+ "id": "9f9d6aa2",
"metadata": {},
"source": [
"## Creating Table in SingleStore\n",
"\n",
"Start by creating a table that will hold the data imported from Kafka."
- ],
- "id": "9f9d6aa2"
+ ]
},
{
"cell_type": "code",
"execution_count": 2,
+ "id": "82a48dd0",
"metadata": {},
"outputs": [],
"source": [
@@ -115,12 +116,12 @@
" address TEXT,\n",
" created_at TIMESTAMP\n",
");"
- ],
- "id": "82a48dd0"
+ ]
},
{
"attachments": {},
"cell_type": "markdown",
+ "id": "90ad124f",
"metadata": {},
"source": [
"## Create a Pipeline to Import Data from Kafka\n",
@@ -131,21 +132,21 @@
"You have access to the Kafka topic.\n",
"Proper IAM roles or access keys are configured in SingleStore.\n",
"The JSON message has a structure that matches the table schema."
- ],
- "id": "90ad124f"
+ ]
},
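+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "id": "2b6c4d8a",
+ "metadata": {},
+ "source": [
+ "For example, a message whose structure matches the table schema might look like the following sketch (placeholder values; the keys must match your table's column names, such as `address` and `created_at`):\n",
+ "\n",
+ "```json\n",
+ "{\"address\": \"123 Main St\", \"created_at\": \"2024-01-01 10:00:00\"}\n",
+ "```"
+ ]
+ },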
{
"attachments": {},
"cell_type": "markdown",
+ "id": "6e401f87",
"metadata": {},
"source": [
"Using these identifiers and keys, execute the following statement."
- ],
- "id": "6e401f87"
+ ]
},
{
"cell_type": "code",
"execution_count": 3,
+ "id": "0b4f42d6",
"metadata": {},
"outputs": [],
"source": [
@@ -162,108 +163,116 @@
"}'\n",
"INTO TABLE my_table\n",
"FORMAT JSON ;"
- ],
- "id": "0b4f42d6"
+ ]
},
{
"attachments": {},
"cell_type": "markdown",
+ "id": "1d137801",
"metadata": {},
"source": [
"## Start the Pipeline\n",
"\n",
- "To start the pipeline and begin importing the data from the S3 bucket:"
- ],
- "id": "1d137801"
+ "To start the pipeline and begin importing the data from the Kafka topic:"
+ ]
},
{
"cell_type": "code",
"execution_count": 4,
+ "id": "e94cff73",
"metadata": {},
"outputs": [],
"source": [
"%%sql\n",
"START PIPELINE kafka_import_pipeline;"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "You can see status of your pipeline Click Here"
],
- "id": "e94cff73"
+ "id": "03f4cbfb"
},
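+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "id": "7c2f9e41",
+ "metadata": {},
+ "source": [
+ "As a minimal check, `SHOW PIPELINES` lists each pipeline in the current database together with its current state:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "5a1d8b3f",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%%sql\n",
+ "-- lists each pipeline in the current database and its state\n",
+ "SHOW PIPELINES;"
+ ]
+ },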
{
"attachments": {},
"cell_type": "markdown",
+ "id": "094b857c",
"metadata": {},
"source": [
"## Select Data from the Table\n",
"\n",
"Once the data has been imported, you can run a query to select it:"
- ],
- "id": "094b857c"
+ ]
},
{
"cell_type": "code",
"execution_count": 5,
+ "id": "bc0f7b0c",
"metadata": {},
"outputs": [],
"source": [
"%%sql\n",
"SELECT * FROM my_table LIMIT 10;"
- ],
- "id": "bc0f7b0c"
+ ]
},
{
"attachments": {},
"cell_type": "markdown",
+ "id": "669dac71",
"metadata": {},
"source": [
"### Check if all data of the data is loaded"
- ],
- "id": "669dac71"
+ ]
},
{
"cell_type": "code",
"execution_count": 6,
+ "id": "a47c2f0f",
"metadata": {},
"outputs": [],
"source": [
"%%sql\n",
"SELECT count(*) FROM my_table"
- ],
- "id": "a47c2f0f"
+ ]
},
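+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "id": "8d3e6f2c",
+ "metadata": {},
+ "source": [
+ "If the count is lower than expected, you can inspect recent pipeline batches. A sketch, assuming the `information_schema.PIPELINES_BATCHES_SUMMARY` view, which reports the state of each batch a pipeline has processed:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "9f4a7b5d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%%sql\n",
+ "-- assumes the PIPELINES_BATCHES_SUMMARY view is available in your SingleStoreDB version\n",
+ "SELECT * FROM information_schema.PIPELINES_BATCHES_SUMMARY\n",
+ "WHERE pipeline_name = 'kafka_import_pipeline';"
+ ]
+ },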
{
"attachments": {},
"cell_type": "markdown",
+ "id": "91eae728",
"metadata": {},
"source": [
"## Conclusion\n",
"\n",
- "We have shown how to insert data from a Amazon S3 using `Pipelines` to SingleStoreDB. These techniques should enable you to\n",
- "integrate your Amazon S3 with SingleStoreDB."
- ],
- "id": "91eae728"
+ "We have shown how to insert data from a Kafka topic using `Pipelines` to SingleStoreDB. These techniques should enable you to\n",
+ "integrate your Kafka topic with SingleStoreDB."
+ ]
},
{
"attachments": {},
"cell_type": "markdown",
+ "id": "6dc86514",
"metadata": {},
"source": [
"## Clean up\n",
"\n",
"Remove the '#' to uncomment and execute the queries below to clean up the pipeline and table created."
- ],
- "id": "6dc86514"
+ ]
},
{
"attachments": {},
"cell_type": "markdown",
+ "id": "706ccd4c",
"metadata": {},
"source": [
"#### Drop Pipeline"
- ],
- "id": "706ccd4c"
+ ]
},
{
"cell_type": "code",
"execution_count": 7,
+ "id": "ed7dc33a",
"metadata": {},
"outputs": [],
"source": [
@@ -271,28 +280,27 @@
"#STOP PIPELINE kafka_import_pipeline;\n",
"\n",
"#DROP PIPELINE kafka_import_pipeline;"
- ],
- "id": "ed7dc33a"
+ ]
},
{
"attachments": {},
"cell_type": "markdown",
+ "id": "b5e15411",
"metadata": {},
"source": [
"#### Drop Data"
- ],
- "id": "b5e15411"
+ ]
},
{
"cell_type": "code",
"execution_count": 8,
+ "id": "f8f3d6ef",
"metadata": {},
"outputs": [],
"source": [
"%%sql\n",
"#DROP TABLE my_table;"
- ],
- "id": "f8f3d6ef"
+ ]
},
{
"id": "12d50a52",
@@ -326,7 +334,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.11.6"
+ "version": "3.11.9"
}
},
"nbformat": 4,