From 6f1e39c07c76802897a721f51090b95b5c4cceb8 Mon Sep 17 00:00:00 2001
From: Michele Rastelli
Date: Wed, 20 Mar 2024 11:27:21 +0100
Subject: [PATCH 1/2] Spark: ttl

---
 .../develop/integrations/arangodb-datasource-for-apache-spark.md | 1 +
 .../develop/integrations/arangodb-datasource-for-apache-spark.md | 1 +
 .../develop/integrations/arangodb-datasource-for-apache-spark.md | 1 +
 .../develop/integrations/arangodb-datasource-for-apache-spark.md | 1 +
 4 files changed, 4 insertions(+)

diff --git a/site/content/3.10/develop/integrations/arangodb-datasource-for-apache-spark.md b/site/content/3.10/develop/integrations/arangodb-datasource-for-apache-spark.md
index 7b058d725b..4417191e41 100644
--- a/site/content/3.10/develop/integrations/arangodb-datasource-for-apache-spark.md
+++ b/site/content/3.10/develop/integrations/arangodb-datasource-for-apache-spark.md
@@ -150,6 +150,7 @@ usersDF.filter(col("birthday") === "1982-12-15").show()
 - `sampleSize`: sample size prefetched for schema inference, only used if read schema is not provided, `1000` by default
 - `fillBlockCache`: specifies whether the query should store the data it reads in the RocksDB block cache (`true` or `false`), `false` by default
 - `stream`: specifies whether the query should be executed lazily, `true` by default
+- `ttl`: cursor ttl in seconds, `30` by default
 - `mode`: allows setting a mode for dealing with corrupt records during parsing:
   - `PERMISSIVE`: in case of a corrupted record, the malformed string is put into a field configured by `columnNameOfCorruptRecord`, and sets malformed fields to null.
     To keep corrupt records, a user can set a string

diff --git a/site/content/3.11/develop/integrations/arangodb-datasource-for-apache-spark.md b/site/content/3.11/develop/integrations/arangodb-datasource-for-apache-spark.md
index 7b058d725b..4417191e41 100644
--- a/site/content/3.11/develop/integrations/arangodb-datasource-for-apache-spark.md
+++ b/site/content/3.11/develop/integrations/arangodb-datasource-for-apache-spark.md
@@ -150,6 +150,7 @@ usersDF.filter(col("birthday") === "1982-12-15").show()
 - `sampleSize`: sample size prefetched for schema inference, only used if read schema is not provided, `1000` by default
 - `fillBlockCache`: specifies whether the query should store the data it reads in the RocksDB block cache (`true` or `false`), `false` by default
 - `stream`: specifies whether the query should be executed lazily, `true` by default
+- `ttl`: cursor ttl in seconds, `30` by default
 - `mode`: allows setting a mode for dealing with corrupt records during parsing:
   - `PERMISSIVE`: in case of a corrupted record, the malformed string is put into a field configured by `columnNameOfCorruptRecord`, and sets malformed fields to null.
     To keep corrupt records, a user can set a string

diff --git a/site/content/3.12/develop/integrations/arangodb-datasource-for-apache-spark.md b/site/content/3.12/develop/integrations/arangodb-datasource-for-apache-spark.md
index fb3b3ecb7f..d8b4d7ee56 100644
--- a/site/content/3.12/develop/integrations/arangodb-datasource-for-apache-spark.md
+++ b/site/content/3.12/develop/integrations/arangodb-datasource-for-apache-spark.md
@@ -150,6 +150,7 @@ usersDF.filter(col("birthday") === "1982-12-15").show()
 - `sampleSize`: sample size prefetched for schema inference, only used if read schema is not provided, `1000` by default
 - `fillBlockCache`: specifies whether the query should store the data it reads in the RocksDB block cache (`true` or `false`), `false` by default
 - `stream`: specifies whether the query should be executed lazily, `true` by default
+- `ttl`: cursor ttl in seconds, `30` by default
 - `mode`: allows setting a mode for dealing with corrupt records during parsing:
   - `PERMISSIVE`: in case of a corrupted record, the malformed string is put into a field configured by `columnNameOfCorruptRecord`, and sets malformed fields to null.
     To keep corrupt records, a user can set a string

diff --git a/site/content/4.0/develop/integrations/arangodb-datasource-for-apache-spark.md b/site/content/4.0/develop/integrations/arangodb-datasource-for-apache-spark.md
index fb3b3ecb7f..d8b4d7ee56 100644
--- a/site/content/4.0/develop/integrations/arangodb-datasource-for-apache-spark.md
+++ b/site/content/4.0/develop/integrations/arangodb-datasource-for-apache-spark.md
@@ -150,6 +150,7 @@ usersDF.filter(col("birthday") === "1982-12-15").show()
 - `sampleSize`: sample size prefetched for schema inference, only used if read schema is not provided, `1000` by default
 - `fillBlockCache`: specifies whether the query should store the data it reads in the RocksDB block cache (`true` or `false`), `false` by default
 - `stream`: specifies whether the query should be executed lazily, `true` by default
+- `ttl`: cursor ttl in seconds, `30` by default
 - `mode`: allows setting a mode for dealing with corrupt records during parsing:
   - `PERMISSIVE`: in case of a corrupted record, the malformed string is put into a field configured by `columnNameOfCorruptRecord`, and sets malformed fields to null.
     To keep corrupt records, a user can set a string

From e894b31f6da73cd49537bc9de414a843db1e7c7d Mon Sep 17 00:00:00 2001
From: Simran Spiller
Date: Sun, 24 Mar 2024 09:27:47 +0100
Subject: [PATCH 2/2] Review

---
 .../integrations/arangodb-datasource-for-apache-spark.md | 2 +-
 .../integrations/arangodb-datasource-for-apache-spark.md | 2 +-
 .../integrations/arangodb-datasource-for-apache-spark.md | 2 +-
 .../integrations/arangodb-datasource-for-apache-spark.md | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/site/content/3.10/develop/integrations/arangodb-datasource-for-apache-spark.md b/site/content/3.10/develop/integrations/arangodb-datasource-for-apache-spark.md
index 4417191e41..107a4a11bc 100644
--- a/site/content/3.10/develop/integrations/arangodb-datasource-for-apache-spark.md
+++ b/site/content/3.10/develop/integrations/arangodb-datasource-for-apache-spark.md
@@ -150,7 +150,7 @@ usersDF.filter(col("birthday") === "1982-12-15").show()
 - `sampleSize`: sample size prefetched for schema inference, only used if read schema is not provided, `1000` by default
 - `fillBlockCache`: specifies whether the query should store the data it reads in the RocksDB block cache (`true` or `false`), `false` by default
 - `stream`: specifies whether the query should be executed lazily, `true` by default
-- `ttl`: cursor ttl in seconds, `30` by default
+- `ttl`: cursor time to live in seconds, `30` by default
 - `mode`: allows setting a mode for dealing with corrupt records during parsing:
   - `PERMISSIVE`: in case of a corrupted record, the malformed string is put into a field configured by `columnNameOfCorruptRecord`, and sets malformed fields to null.
     To keep corrupt records, a user can set a string

diff --git a/site/content/3.11/develop/integrations/arangodb-datasource-for-apache-spark.md b/site/content/3.11/develop/integrations/arangodb-datasource-for-apache-spark.md
index 4417191e41..107a4a11bc 100644
--- a/site/content/3.11/develop/integrations/arangodb-datasource-for-apache-spark.md
+++ b/site/content/3.11/develop/integrations/arangodb-datasource-for-apache-spark.md
@@ -150,7 +150,7 @@ usersDF.filter(col("birthday") === "1982-12-15").show()
 - `sampleSize`: sample size prefetched for schema inference, only used if read schema is not provided, `1000` by default
 - `fillBlockCache`: specifies whether the query should store the data it reads in the RocksDB block cache (`true` or `false`), `false` by default
 - `stream`: specifies whether the query should be executed lazily, `true` by default
-- `ttl`: cursor ttl in seconds, `30` by default
+- `ttl`: cursor time to live in seconds, `30` by default
 - `mode`: allows setting a mode for dealing with corrupt records during parsing:
   - `PERMISSIVE`: in case of a corrupted record, the malformed string is put into a field configured by `columnNameOfCorruptRecord`, and sets malformed fields to null.
     To keep corrupt records, a user can set a string

diff --git a/site/content/3.12/develop/integrations/arangodb-datasource-for-apache-spark.md b/site/content/3.12/develop/integrations/arangodb-datasource-for-apache-spark.md
index d8b4d7ee56..c45cb51401 100644
--- a/site/content/3.12/develop/integrations/arangodb-datasource-for-apache-spark.md
+++ b/site/content/3.12/develop/integrations/arangodb-datasource-for-apache-spark.md
@@ -150,7 +150,7 @@ usersDF.filter(col("birthday") === "1982-12-15").show()
 - `sampleSize`: sample size prefetched for schema inference, only used if read schema is not provided, `1000` by default
 - `fillBlockCache`: specifies whether the query should store the data it reads in the RocksDB block cache (`true` or `false`), `false` by default
 - `stream`: specifies whether the query should be executed lazily, `true` by default
-- `ttl`: cursor ttl in seconds, `30` by default
+- `ttl`: cursor time to live in seconds, `30` by default
 - `mode`: allows setting a mode for dealing with corrupt records during parsing:
   - `PERMISSIVE`: in case of a corrupted record, the malformed string is put into a field configured by `columnNameOfCorruptRecord`, and sets malformed fields to null.
     To keep corrupt records, a user can set a string

diff --git a/site/content/4.0/develop/integrations/arangodb-datasource-for-apache-spark.md b/site/content/4.0/develop/integrations/arangodb-datasource-for-apache-spark.md
index d8b4d7ee56..c45cb51401 100644
--- a/site/content/4.0/develop/integrations/arangodb-datasource-for-apache-spark.md
+++ b/site/content/4.0/develop/integrations/arangodb-datasource-for-apache-spark.md
@@ -150,7 +150,7 @@ usersDF.filter(col("birthday") === "1982-12-15").show()
 - `sampleSize`: sample size prefetched for schema inference, only used if read schema is not provided, `1000` by default
 - `fillBlockCache`: specifies whether the query should store the data it reads in the RocksDB block cache (`true` or `false`), `false` by default
 - `stream`: specifies whether the query should be executed lazily, `true` by default
-- `ttl`: cursor ttl in seconds, `30` by default
+- `ttl`: cursor time to live in seconds, `30` by default
 - `mode`: allows setting a mode for dealing with corrupt records during parsing:
   - `PERMISSIVE`: in case of a corrupted record, the malformed string is put into a field configured by `columnNameOfCorruptRecord`, and sets malformed fields to null.
     To keep corrupt records, a user can set a string
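
The read options touched by these hunks can be wired up as in the following sketch, written in the Scala API that the documented page itself uses. This is illustrative only, not part of the patch: it assumes a running Spark session and a reachable ArangoDB deployment, and the endpoint, database, and collection values are placeholders.

```scala
// Sketch only: `spark` is an existing SparkSession; endpoint, database,
// and collection name below are illustrative placeholders.
val usersDF = spark.read
  .format("com.arangodb.spark")
  .option("endpoints", "localhost:8529")
  .option("database", "_system")
  .option("table", "users")
  .option("stream", "true")          // execute the query lazily (default: true)
  .option("fillBlockCache", "false") // skip the RocksDB block cache (default: false)
  .option("ttl", "30")               // cursor time to live in seconds (default: 30)
  .option("mode", "PERMISSIVE")      // route corrupt records per columnNameOfCorruptRecord
  .load()
```

With `stream` enabled, the server keeps a cursor open between batch fetches; `ttl` bounds how long an idle cursor survives, so long pauses between fetches on the Spark side may need a value above the `30`-second default.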