This repository was archived by the owner on Dec 13, 2023. It is now read-only.
Merged
Changes from all commits
Commits
21 commits
abb8bbe
The Pregel algorithm for running AIR programs is called "ppa"
Simran-B Oct 10, 2022
396583d
Features: skipInaccessibleCollections is an EE query option
Simran-B Oct 11, 2022
94b71fa
Fix MINHASH_COUNT() AQL example and description
Simran-B Oct 11, 2022
f99e906
JS API: Add missing collection and cursor properties
Simran-B Oct 11, 2022
2b3b901
Minor edits
Simran-B Oct 11, 2022
767b1b7
Merge branch 'main' of https://github.com/arangodb/docs into api-fixe…
Simran-B Oct 11, 2022
69b9dcf
intermediateCommits cursor stats are 3.11 only, add missing coll/curs…
Simran-B Oct 11, 2022
90088f5
Fix anchor links
Simran-B Oct 14, 2022
31438aa
Sub-attributes called _id cannot be indexed, clarifications & formatting
Simran-B Oct 14, 2022
988557b
WIP
Simran-B Oct 18, 2022
65e4c58
Add missing collection properties, enterprise-hex-smart-vertex shardi…
Simran-B Oct 18, 2022
e8aaf9f
Formatting, removing references to implementation source files
Simran-B Oct 19, 2022
5d95ad4
More formatting
Simran-B Oct 19, 2022
cd774bd
Merge branch 'main' of https://github.com/arangodb/docs into api-fixe…
Simran-B Oct 19, 2022
26199e7
Add graph_module._listObjects(), improve example results, style
Simran-B Oct 19, 2022
6b03f74
JS API: Show how to actually use collection.iterate()
Simran-B Sep 29, 2022
38ba328
Deprecate collection.iterate() method in JS API
Simran-B Oct 19, 2022
6db6fec
Merge branch 'main' into api-fixes-2022-10-06
ansoboleva Oct 20, 2022
0df5dd4
Update examples for 3.11 in api-fixes-2022-10-06 at 2022-10-20T14:19:…
ansoboleva Oct 20, 2022
38143eb
Update examples for 3.10 in api-fixes-2022-10-06 at 2022-10-20T12:57:…
ansoboleva Oct 20, 2022
68c7b3b
Update examples for 3.11 in api-fixes-2022-10-06 at 2022-10-20T16:05:…
ansoboleva Oct 20, 2022
9 changes: 6 additions & 3 deletions 3.10/administration-cluster.md
@@ -165,10 +165,13 @@ The available sharding strategies are:
(excluding smart edge collections)
- `enterprise-hash-smart-edge`: default sharding used for new
smart edge collections starting from version 3.4
- `enterprise-hex-smart-vertex`: sharding used for vertex collections of
EnterpriseGraphs

If no sharding strategy is specified, the default will be `hash` for
all collections, and `enterprise-hash-smart-edge` for all smart edge
collections (requires the *Enterprise Edition* of ArangoDB).
If no sharding strategy is specified, the default is `hash` for
all normal collections, `enterprise-hash-smart-edge` for all smart edge
collections, and `enterprise-hex-smart-vertex` for EnterpriseGraph
vertex collections (the latter two require the *Enterprise Edition* of ArangoDB).
Manually overriding the sharding strategy does not yet provide a
benefit, but it may in the future if other sharding strategies are added.
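The default selection described above can be sketched as a small helper. This is illustrative only: the function name and the collection-kind labels are hypothetical, while the strategy names come from the list above.

```javascript
// Sketch of how the default sharding strategy is chosen (per the rules above).
// The kinds "normal", "smart-edge", and "enterprise-vertex" are illustrative
// labels, not ArangoDB API values.
function defaultShardingStrategy(collectionKind) {
  switch (collectionKind) {
    case "smart-edge":
      return "enterprise-hash-smart-edge"; // Enterprise Edition only
    case "enterprise-vertex":
      return "enterprise-hex-smart-vertex"; // Enterprise Edition only
    default:
      return "hash"; // all normal collections
  }
}
```

Note that the strategy is picked at collection creation and cannot be changed afterwards.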

4 changes: 2 additions & 2 deletions 3.10/appendix-references-dbobject.md
@@ -42,9 +42,9 @@ The following methods exist on the *_db* object:

*AQL*

* [db._createStatement(query)](aql/invocation-with-arangosh.html#with-_createstatement-arangostatement)
* [db._createStatement(query)](aql/invocation-with-arangosh.html#with-db_createstatement-arangostatement)
* [db._query(query)](aql/invocation-with-arangosh.html#with-db_query)
* [db._explain(query)](release-notes-new-features28.html#miscellaneous-improvements)
* [db._explain(query)](aql/execution-and-performance-explaining-queries.html)
* [db._parse(query)](aql/invocation-with-arangosh.html#query-validation)

*Document*
12 changes: 6 additions & 6 deletions 3.10/aql/execution-and-performance-query-profiler.md
@@ -59,7 +59,7 @@ each stage had to do.

Without any indexes this query should have to perform the following operations:

1. Perfom a full collection scan via a _EnumerateCollectionNode_ and outputting
1. Perform a full collection scan via a _EnumerateCollectionNode_ and outputting
a row containing the document in `doc`.
2. Calculate the boolean expression `LET #1 = doc.value < 10` from all inputs
via a _CalculationNode_
@@ -128,7 +128,7 @@ The resulting query profile contains a _SubqueryNode_ which has the runtime of
all its children combined.

Actually, we cheated a little. The optimizer would have completely removed the
subquery if it had not been deactivated (`rules:["-all"]`). The optimimized
subquery if it had not been deactivated (`rules:["-all"]`). The optimized
version would take longer in the "optimizing plan" stage, but should perform
better with a lot of results.

@@ -165,8 +165,8 @@ The following query gets us all age groups in buckets (0-9, 10-19, 20-29, ...):

Without any indexes this query should have to perform the following operations:

1. Perfom a full collection scan via a _EnumerateCollectionNode_ and outputing
a row containg the document in `doc`.
1. Perform a full collection scan via a _EnumerateCollectionNode_ and outputting
a row containing the document in `doc`.
2. Compute the expression `LET #1 = FLOOR(u.age / 10) * 10` for all inputs via
a _CalculationNode_
3. Perform the aggregations via the _CollectNode_
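Step 2's bucket expression maps every age to the lower bound of its decade. The same computation in plain JavaScript (a sketch mirroring the AQL expression, not ArangoDB API code):

```javascript
// Same computation as the AQL expression FLOOR(u.age / 10) * 10:
// map an age to the lower bound of its 10-year bucket.
function ageBucket(age) {
  return Math.floor(age / 10) * 10;
}
```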
@@ -212,10 +212,10 @@ Another mistake is to start a graph traversal from the wrong side
Assume we have two vertex collections _users_ and _products_ as well as an
edge collection _purchased_. The graph model looks like this:
`(users) <--[purchased]--> (products)`, i.e. every user is connected with an
edge in _pruchased_ to zero or more _products_.
edge in _purchased_ to zero or more _products_.

If we want to know all users that have purchased the product _playstation_
as well as produts of `type` _legwarmer_ we could use this query:
as well as products of `type` _legwarmer_ we could use this query:

```aql
FOR prod IN products
94 changes: 50 additions & 44 deletions 3.10/aql/execution-and-performance-query-statistics.md
@@ -9,23 +9,23 @@ A query that has been executed will always return execution statistics. Execution statistics
can be retrieved by calling `getExtra()` on the cursor. The statistics are returned in the
return value's `stats` attribute:

{% arangoshexample examplevar="examplevar" script="script" result="result" %}
{% arangoshexample examplevar="examplevar" script="script" result="result" %}
@startDocuBlockInline 06_workWithAQL_statementsExtra
@EXAMPLE_ARANGOSH_OUTPUT{06_workWithAQL_statementsExtra}
|db._query(`
| db._query(`
| FOR i IN 1..@count INSERT
| { _key: CONCAT('anothertest', TO_STRING(i)) }
| INTO mycollection`,
| {count: 100},
| { count: 100 },
| {},
| {fullCount: true}
| { fullCount: true }
).getExtra();
|db._query({
| "query": `FOR i IN 200..@count INSERT
| { _key: CONCAT('anothertest', TO_STRING(i)) }
| INTO mycollection`,
| "bindVars": {count: 300},
| "options": { fullCount: true}
| db._query({
| "query": `FOR i IN 200..@count INSERT
| { _key: CONCAT('anothertest', TO_STRING(i)) }
| INTO mycollection`,
| "bindVars": { count: 300 },
| "options": { fullCount: true }
}).getExtra();
@END_EXAMPLE_ARANGOSH_OUTPUT
@endDocuBlock 06_workWithAQL_statementsExtra
@@ -34,53 +34,59 @@ return value's `stats` attribute:

The meaning of the statistics attributes is as follows:

- **writesExecuted**: the total number of data-modification operations successfully executed.
This is equivalent to the number of documents created, updated or removed by `INSERT`,
`UPDATE`, `REPLACE`, `REMOVE` or `UPSERT` operations.
- **writesIgnored**: the total number of data-modification operations that were unsuccessful,
but have been ignored because of query option `ignoreErrors`.
- **scannedFull**: the total number of documents iterated over when scanning a collection
without an index. Documents scanned by subqueries will be included in the result, but
operations triggered by built-in or user-defined AQL functions will not.
- **scannedIndex**: the total number of documents iterated over when scanning a collection using
an index. Documents scanned by subqueries will be included in the result, but operations
triggered by built-in or user-defined AQL functions will not.
- **cursorsCreated**: the total number of cursor objects created during query execution. Cursor
- **writesExecuted**: The total number of data-modification operations successfully executed.
This is equivalent to the number of documents created, updated, or removed by `INSERT`,
`UPDATE`, `REPLACE`, `REMOVE`, or `UPSERT` operations.
- **writesIgnored**: The total number of data-modification operations that were unsuccessful,
but have been ignored because of the `ignoreErrors` query option.
- **scannedFull**: The total number of documents iterated over when scanning a collection
without an index. Documents scanned by subqueries are included in the result, but
operations triggered by built-in or user-defined AQL functions are not.
- **scannedIndex**: The total number of documents iterated over when scanning a collection using
an index. Documents scanned by subqueries are included in the result, but operations
triggered by built-in or user-defined AQL functions are not.
- **cursorsCreated**: The total number of cursor objects created during query execution. Cursor
objects are created for index lookups.
- **cursorsRearmed**: the total number of times an existing cursor object was repurposed.
- **cursorsRearmed**: The total number of times an existing cursor object was repurposed.
Repurposing an existing cursor object is normally more efficient compared to destroying an
existing cursor object and creating a new one from scratch.
- **cacheHits**: the total number of index entries read from in-memory caches for indexes
of type edge or persistent. This value will only be non-zero when reading from indexes
- **cacheHits**: The total number of index entries read from in-memory caches for indexes
of type edge or persistent. This value is only non-zero when reading from indexes
that have an in-memory cache enabled, and when the query allows using the in-memory
cache (i.e. using equality lookups on all index attributes).
- **cacheMisses**: the total number of cache read attempts for index entries that could not
be served from in-memory caches for indexes of type edge or persistent. This value will
only be non-zero when reading from indexes that have an in-memory cache enabled, the
- **cacheMisses**: The total number of cache read attempts for index entries that could not
be served from in-memory caches for indexes of type edge or persistent. This value
is only non-zero when reading from indexes that have an in-memory cache enabled, the
query allows using the in-memory cache (i.e. using equality lookups on all index attributes)
and the looked up values are not present in the cache.
- **filtered**: the total number of documents that were removed after executing a filter condition
in a `FilterNode` or another node that post-filters data.
Note that `IndexNode`s can also filter documents by selecting only the required index range
- **filtered**: The total number of documents removed after executing a filter condition
in a `FilterNode` or another node that post-filters data. Note that nodes of the
`IndexNode` type can also filter documents by selecting only the required index range
from a collection, and the `filtered` value only indicates how much filtering was done by a
post filter in the `IndexNode` itself or following `FilterNode`s.
`EnumerateCollectionNode`s and `TraversalNode`s can also apply filter conditions and can
reported the number of filtered documents.
- **fullCount**: the total number of documents that matched the search condition if the query's
final top-level `LIMIT` statement were not present.
post-filter in the `IndexNode` itself or following `FilterNode` nodes.
Nodes of the `EnumerateCollectionNode` and `TraversalNode` types can also apply
filter conditions and can report the number of filtered documents.
- **httpRequests**: The total number of cluster-internal HTTP requests performed.
- **fullCount** (_optional_): The total number of documents that matched the search condition if the query's
final top-level `LIMIT` operation were not present.
This attribute may only be returned if the `fullCount` option was set when starting the
query and will only contain a sensible value if the query contained a `LIMIT` operation on
query and only contains a sensible value if the query contains a `LIMIT` operation on
the top level.
- **peakMemoryUsage**: the maximum memory usage of the query while it was running. In a cluster,
- **executionTime**: The query execution time (wall-clock time) in seconds.
- **peakMemoryUsage**: The maximum memory usage of the query while it was running. In a cluster,
the memory accounting is done per shard, and the memory usage reported is the peak
memory usage value from the individual shards.
Note that to keep things light-weight, the per-query memory usage is tracked on a relatively
high level, not including any memory allocator overhead nor any memory used for temporary
results calculations (e.g. memory allocated/deallocated inside AQL expressions and function
calls).
- **nodes**: _(optional)_ when the query was executed with the option `profile` set to at least `2`,
then this value contains runtime statistics per query execution node. This field contains the
node id (in `id`), the number of calls to this node `calls` and the number of items returned
by this node `items` (Items are the temporary results returned at this stage). You can correlate
this statistics with the `plan` returned in `extra`. For a human readable output you can execute
`db._profileQuery(<query>, <bind-vars>)` in the arangosh.
- **nodes** (_optional_): When the query is executed with the option `profile` set to at least `2`,
then this value contains runtime statistics per query execution node.
For a human readable output you can execute `db._profileQuery(<query>, <bind-vars>)`
in the arangosh.
- **id**: The execution node ID to correlate the statistics with the `plan` returned in
the `extra` attribute.
- **calls**: The number of calls to this node.
- **items**: The number of items returned by this node. Items are the temporary results
returned at this stage.
- **runtime**: The execution time of this node in seconds.
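As an illustration of how these attributes might be consumed, the following sketch summarizes a few documented fields from a `getExtra()` result. The helper function and the sample values are invented; only the attribute names are taken from the list above.

```javascript
// Summarize selected execution statistics from a cursor's getExtra() result.
// The attribute names match the documentation; the helper itself is illustrative.
function summarizeStats(extra) {
  const s = extra.stats;
  const cacheLookups = s.cacheHits + s.cacheMisses;
  return {
    writes: s.writesExecuted,
    scanned: s.scannedFull + s.scannedIndex, // full scans plus index scans
    filtered: s.filtered,
    cacheHitRate: cacheLookups > 0 ? s.cacheHits / cacheLookups : null,
  };
}

// Invented sample values in the shape documented above:
const sample = {
  stats: {
    writesExecuted: 100, writesIgnored: 0,
    scannedFull: 0, scannedIndex: 250,
    cacheHits: 75, cacheMisses: 25,
    filtered: 50, executionTime: 0.012, peakMemoryUsage: 32768,
  },
};
```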
4 changes: 2 additions & 2 deletions 3.10/aql/functions-miscellaneous.md
@@ -478,7 +478,7 @@ The result can be used to approximate the Jaccard similarity of sets.
Calculate the number of hashes (MinHash signature size) needed to not exceed the
specified error amount.

- **error** (number): the probabilistic error you can tolerate in the range `[0, 1]`
- **error** (number): the probabilistic error you can tolerate in the range `[0, 1)`
- returns **numHashes** (number): the required number of hashes to not exceed
the specified error amount
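The relationship follows the standard MinHash error bound `error ≈ 1 / sqrt(numHashes)`. A plain JavaScript sketch of that bound (the exact rounding ArangoDB applies in `MINHASH_COUNT()` is an assumption here):

```javascript
// Number of hashes needed so the expected MinHash error does not exceed
// the given tolerance, derived from error ≈ 1 / sqrt(numHashes).
// The exact rounding used by ArangoDB's MINHASH_COUNT() is an assumption.
function minhashCount(error) {
  return Math.ceil(1 / (error * error));
}
```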

@@ -487,7 +487,7 @@ specified error amount.
{% aqlexample examplevar="examplevar" type="type" query="query" bind="bind" result="result" %}
@startDocuBlockInline aqlMinHashCount
@EXAMPLE_AQL{aqlMinHashCount}
RETURN MINHASH_ERROR(0.05)
RETURN MINHASH_COUNT(0.05)
@END_EXAMPLE_AQL
@endDocuBlock aqlMinHashCount
{% endaqlexample %}
18 changes: 9 additions & 9 deletions 3.10/aql/functions-string.md
@@ -1500,22 +1500,22 @@ SHA512()

`SHA512(text) → hash`

Calculate the SHA512 checksum for `text` and returns it in a hexadecimal
Calculate the SHA512 checksum for `text` and return it in a hexadecimal
string representation.

- **text** (string): a string
- returns **hash** (string): SHA512 checksum as hex string

**Examples**

{% aqlexample examplevar="examplevar" type="type" query="query" bind="bind" result="result" %}
@startDocuBlockInline aqlSha512
@EXAMPLE_AQL{aqlSha512}
RETURN SHA512("foobar")
@END_EXAMPLE_AQL
@endDocuBlock aqlSha512
{% endaqlexample %}
{% include aqlexample.html id=examplevar type=type query=query bind=bind result=result %}
{% aqlexample examplevar="examplevar" type="type" query="query" bind="bind" result="result" %}
@startDocuBlockInline aqlSha512
@EXAMPLE_AQL{aqlSha512}
RETURN SHA512("foobar")
@END_EXAMPLE_AQL
@endDocuBlock aqlSha512
{% endaqlexample %}
{% include aqlexample.html id=examplevar type=type query=query bind=bind result=result %}

SOUNDEX()
---------