diff --git a/.github/CODE_OF_CONDUCT.md b/.github/CODE_OF_CONDUCT.md
index 82e3b000..89b1ff87 100644
--- a/.github/CODE_OF_CONDUCT.md
+++ b/.github/CODE_OF_CONDUCT.md
@@ -1,3 +1,5 @@
+{/* vale off */}
+
# Contributor covenant code of conduct
## Our pledge
@@ -119,7 +121,7 @@ version 2.0, available at
[https://www.contributor-covenant.org/version/2/0/code_of_conduct.html][v2.0].
Community Impact Guidelines were inspired by
-[Mozilla's code of conduct enforcement ladder][Mozilla CoC].
+[Mozilla’s code of conduct enforcement ladder][Mozilla CoC].
For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available
diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
index 163fbab6..cadc2c12 100644
--- a/.github/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -1,6 +1,6 @@
# Contribution guidelines
-We encourage you to participate in this documentation project. We appreciate your help in making Axiom as easy to understand and work with as possible.
+Axiom encourages you to participate in this documentation project. The community appreciates your help in making Axiom as easy to understand and work with as possible.
To contribute, fork this repo, and then clone it. For more information, see the [GitHub documentation](https://docs.github.com/en/get-started/exploring-projects-on-github/contributing-to-a-project).
@@ -32,7 +32,7 @@ If you want to contribute but don’t know where to start, browse the open issue
- When you review a PR, use GitHub suggestions for changes where discussion is necessary. For major changes or uncontroversial smaller fixes, commit directly to the branch.
- Let the original creator merge the PR. The reviewer only approves or asks for changes.
- In your comments, be kind, considerate, and constructive.
-- If a comment does not apply to the review of the PR, post it on the related issue.
+- If a comment doesn’t apply to the review of the PR, post it on the related issue.
## Commits
diff --git a/.prettierignore b/.prettierignore
new file mode 100644
index 00000000..947c9ca0
--- /dev/null
+++ b/.prettierignore
@@ -0,0 +1 @@
+/docs.json
diff --git a/.vale.ini b/.vale.ini
index 9332221a..b7fda657 100644
--- a/.vale.ini
+++ b/.vale.ini
@@ -26,6 +26,8 @@ Google.Headings = NO
Google.Parens = NO
Google.Colons = NO
Google.Ordinal = NO
+Google.Will = NO
+Google.EmDash = NO
# Ignore code surrounded by backticks or plus sign, parameters defaults, URLs, and angle brackets.
TokenIgnores = (<\/?[A-Z].+>), (\x60[^\n\x60]+\x60), ([^\n]+=[^\n]*), (\+[^\n]+\+), (http[^\n]+\[)
diff --git a/ai-engineering/concepts.mdx b/ai-engineering/concepts.mdx
index e3a8ed70..c022a283 100644
--- a/ai-engineering/concepts.mdx
+++ b/ai-engineering/concepts.mdx
@@ -4,7 +4,7 @@ description: "Learn about the core concepts in Rudder: Capabilities, Collections
keywords: ["ai engineering", "rudder", "concepts", "capability", "grader", "eval"]
---
-import { definitions } from '/snippets/definitions.mdx';
+import { definitions } from '/snippets/definitions.mdx'
This page defines the core terms used in the Rudder workflow. Understanding these concepts is the first step toward building robust and reliable generative AI capabilities.
@@ -20,7 +20,7 @@ The concepts in Rudder are best understood within the context of the development
The prototype is then tested against a collection of reference examples (so called “ground truth”) to measure its quality and effectiveness using graders. This process is known as an eval.
- Once a capability meets quality benchmarks, it's deployed. In production, graders can be applied to live traffic (online evals) to monitor performance and cost in real-time.
+ Once a capability meets quality benchmarks, it’s deployed. In production, graders can be applied to live traffic (online evals) to monitor performance and cost in real-time.
Insights from production monitoring reveal edge cases and opportunities for improvement. These new examples are used to refine the capability, expand the ground truth collection, and begin the cycle anew.
@@ -33,7 +33,7 @@ The concepts in Rudder are best understood within the context of the development
A generative AI capability is a system that uses large language models to perform a specific task by transforming inputs into desired outputs.
-Capabilities exist on a spectrum of complexity. They can be a simple, single-step function (for example, classifying a support ticket's intent) or evolve into a sophisticated, multi-step agent that uses reasoning and tools to achieve a goal (for example, orchestrating a complete customer support resolution).
+Capabilities exist on a spectrum of complexity. They can be a simple, single-step function (for example, classifying a support ticket’s intent) or evolve into a sophisticated, multi-step agent that uses reasoning and tools to achieve a goal (for example, orchestrating a complete customer support resolution).
### Collection
@@ -57,7 +57,7 @@ Annotations are expert-provided labels, corrections, or outputs added to records
### Grader
-A grader is a function that scores a capability's output. It programmatically assesses quality by comparing the generated output against ground truth or other criteria, returning a score or judgment. Graders are the reusable, atomic scoring logic used in all forms of evaluation.
+A grader is a function that scores a capability’s output. It programmatically assesses quality by comparing the generated output against ground truth or other criteria, returning a score or judgment. Graders are the reusable, atomic scoring logic used in all forms of evaluation.
### Evaluator (Eval)
@@ -65,8 +65,8 @@ An evaluator, or eval, is the process of testing a capability against a collecti
### Online Eval
-An online eval is the process of applying a grader to a capability's live production traffic. This provides real-time feedback on performance degradation, cost, and quality drift, enabling continuous monitoring and improvement.
+An online eval is the process of applying a grader to a capability’s live production traffic. This provides real-time feedback on performance degradation, cost, and quality drift, enabling continuous monitoring and improvement.
-### What's next?
+### What’s next?
Now that you understand the core concepts, see them in action in the Rudder [workflow](/ai-engineering/quickstart).
\ No newline at end of file
diff --git a/ai-engineering/create.mdx b/ai-engineering/create.mdx
index 0b895750..ba7215ac 100644
--- a/ai-engineering/create.mdx
+++ b/ai-engineering/create.mdx
@@ -4,14 +4,14 @@ description: "Learn how to create and define AI capabilities using structured pr
keywords: ["ai engineering", "rudder", "create", "prompt", "template", "schema"]
---
-import { Badge } from "/snippets/badge.jsx";
-import { definitions } from '/snippets/definitions.mdx';
+import { Badge } from "/snippets/badge.jsx"
+import { definitions } from '/snippets/definitions.mdx'
The **Create** stage is about defining a new AI capability as a structured, version-able asset in your codebase. The goal is to move away from scattered, hard-coded string prompts and toward a more disciplined and organized approach to prompt engineering.
### Defining a capability as a prompt object
-In Rudder, every capability is represented by a `Prompt` object. This object serves as the single source of truth for the capability's logic, including its messages, metadata, and the schema for its arguments.
+In Rudder, every capability is represented by a `Prompt` object. This object serves as the single source of truth for the capability’s logic, including its messages, metadata, and the schema for its arguments.
For now, these `Prompt` objects can be defined and managed as TypeScript files within your own project repository.
@@ -47,7 +47,7 @@ export const emailSummarizerPrompt = {
### Strongly-typed arguments with `Template`
-To ensure that prompts are used correctly, the `@axiomhq/ai` package includes a `Template` type system (exported as `Type`) for defining the schema of a prompt's `arguments`. This provides type safety, autocompletion, and a clear, self-documenting definition of what data the prompt expects.
+To ensure that prompts are used correctly, the `@axiomhq/ai` package includes a `Template` type system (exported as `Type`) for defining the schema of a prompt’s `arguments`. This provides type safety, autocompletion, and a clear, self-documenting definition of what data the prompt expects.
The `arguments` object uses `Template` helpers to define the shape of the context:
@@ -78,7 +78,7 @@ export const reportGeneratorPrompt = {
} satisfies Prompt;
```
-You can even infer the exact TypeScript type for a prompt's context using the `InferContext` utility.
+You can even infer the exact TypeScript type for a prompt’s context using the `InferContext` utility.
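To make that inference concrete, here is a minimal sketch of how `InferContext` might be used with the `reportGeneratorPrompt` defined above. The import path and the exact generic signature are assumptions based on this description, not confirmed API.

```typescript
// Minimal sketch: the import path and generic signature of `InferContext`
// are assumptions based on the prose above, not confirmed API.
import type { InferContext } from '@axiomhq/ai';
import { reportGeneratorPrompt } from './prompts/report-generator';

// Derive the context type from the prompt's `arguments` schema.
type ReportContext = InferContext<typeof reportGeneratorPrompt>;

// The compiler now enforces that callers pass exactly the fields the prompt expects.
function buildReport(context: ReportContext) {
  // ...render the prompt with `context` and call the model
}
```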
### Prototyping and local testing
@@ -119,8 +119,8 @@ To enable more advanced workflows and collaboration, Axiom is building tools to
* Coming soon The `axiom` CLI will allow you to `push`, `pull`, and `list` prompt versions directly from your terminal, synchronizing your local files with the Axiom platform.
* Coming soon The SDK will include methods like `axiom.prompts.create()` and `axiom.prompts.load()` for programmatic access to your managed prompts. This will be the foundation for A/B testing, version comparison, and deploying new prompts without changing your application code.
-### What's next?
+### What’s next?
-Now that you've created and structured your capability, the next step is to measure its quality against a set of known good examples.
+Now that you’ve created and structured your capability, the next step is to measure its quality against a set of known good examples.
Learn more about this step of the Rudder workflow in the [Measure](/ai-engineering/measure) docs.
\ No newline at end of file
diff --git a/ai-engineering/iterate.mdx b/ai-engineering/iterate.mdx
index 59b96b20..90449310 100644
--- a/ai-engineering/iterate.mdx
+++ b/ai-engineering/iterate.mdx
@@ -4,14 +4,14 @@ description: "Learn how to iterate on your AI capabilities by using production d
keywords: ["ai engineering", "rudder", "iterate", "improvement", "a/b testing", "champion challenger"]
---
-import { Badge } from "/snippets/badge.jsx";
-import { definitions } from '/snippets/definitions.mdx';
+import { Badge } from "/snippets/badge.jsx"
+import { definitions } from '/snippets/definitions.mdx'
The iteration workflow described here is in active development. Axiom is working with design partners to shape what’s built. [Contact Axiom](https://www.axiom.co/contact) to get early access and join a small group of teams shaping these tools.
-The **Iterate** stage is where the Rudder workflow comes full circle. It's the process of taking the real-world performance data from the [Observe](/ai-engineering/observe) stage and the quality benchmarks from the [Measure](/ai-engineering/measure) stage, and using them to make concrete improvements to your AI capability. This creates a cycle of continuous, data-driven enhancement.
+The **Iterate** stage is where the Rudder workflow comes full circle. It’s the process of taking the real-world performance data from the [Observe](/ai-engineering/observe) stage and the quality benchmarks from the [Measure](/ai-engineering/measure) stage, and using them to make concrete improvements to your AI capability. This creates a cycle of continuous, data-driven enhancement.
## Identifying opportunities for improvement
@@ -25,7 +25,7 @@ These examples can be used to create a new, more robust
-Coming soon Once you've created a new version of your `Prompt` object, you need to verify that it's actually an improvement. The best way to do this is to run an "offline evaluation"—testing your new version against the same ground truth collection you used in the **Measure** stage.
+Coming soon Once you’ve created a new version of your `Prompt` object, you need to verify that it’s actually an improvement. The best way to do this is to run an "offline evaluation"—testing your new version against the same ground truth collection you used in the **Measure** stage.
The Axiom Console will provide views to compare these evaluation runs side-by-side:
@@ -38,7 +38,7 @@ This ensures you can validate changes with data before they ever reach your user
Coming soon After a new version of your capability has proven its superiority in offline tests, you can deploy it with confidence. The Rudder workflow will support a champion/challenger pattern, where you can deploy a new "challenger" version to run in shadow mode against a portion of production traffic. This allows for a final validation on real-world data without impacting the user experience.
-Once you're satisfied with the challenger's performance, you can promote it to become the new "champion" using the SDK's `deploy` function.
+Once you’re satisfied with the challenger’s performance, you can promote it to become the new "champion" using the SDK’s `deploy` function.
```typescript
import { axiom } from './axiom-client';
@@ -50,7 +50,7 @@ await axiom.prompts.deploy('prompt_123', {
});
```
-## What's next?
+## What’s next?
By completing the Iterate stage, you have closed the loop. Your improved capability is now in production, and you can return to the **Observe** stage to monitor its performance and identify the next opportunity for improvement.
diff --git a/ai-engineering/measure.mdx b/ai-engineering/measure.mdx
index 56899166..d2c0e5c1 100644
--- a/ai-engineering/measure.mdx
+++ b/ai-engineering/measure.mdx
@@ -4,14 +4,14 @@ description: "Learn how to measure the quality of your AI capabilities by runnin
keywords: ["ai engineering", "rudder", "measure", "evals", "evaluation", "scoring", "graders"]
---
-import { Badge } from "/snippets/badge.jsx";
-import { definitions } from '/snippets/definitions.mdx';
+import { Badge } from "/snippets/badge.jsx"
+import { definitions } from '/snippets/definitions.mdx'
The evaluation framework described here is in active development. Axiom is working with design partners to shape what’s built. [Contact Axiom](https://www.axiom.co/contact) to get early access and join a small group of teams shaping these tools.
-The **Measure** stage is where you quantify the quality and effectiveness of your AI capability. Instead of relying on anecdotal checks, this stage uses a systematic process called an eval to score your capability's performance against a known set of correct examples (ground truth). This provides a data-driven benchmark to ensure a capability is ready for production and to track its quality over time.
+The **Measure** stage is where you quantify the quality and effectiveness of your AI capability. Instead of relying on anecdotal checks, this stage uses a systematic process called an eval to score your capability’s performance against a known set of correct examples (ground truth). This provides a data-driven benchmark to ensure a capability is ready for production and to track its quality over time.
## The `Eval` function
@@ -62,7 +62,7 @@ Eval('text-match-eval', {
## Grading with scorers
-Coming soon A grader is a function that scores a capability's output. Axiom will provide a library of built-in scorers for common tasks (e.g., checking for semantic similarity, factual correctness, or JSON validity). You can also provide your own custom functions to measure domain-specific logic. Each scorer receives the `input`, the generated `output`, and the `expected` value, and must return a score.
+Coming soon A grader is a function that scores a capability’s output. Axiom will provide a library of built-in scorers for common tasks (e.g., checking for semantic similarity, factual correctness, or JSON validity). You can also provide your own custom functions to measure domain-specific logic. Each scorer receives the `input`, the generated `output`, and the `expected` value, and must return a score.
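To illustrate that contract, here is a minimal sketch of a custom scorer. The argument and return types are assumptions based on the description above (each scorer receives `input`, `output`, and `expected` and returns a score), not the final API.

```typescript
// Minimal sketch of a custom scorer. The exact types are assumptions based on
// the contract described above: receives input, output, and expected; returns a score.
type ScorerArgs = { input: string; output: string; expected: string };

// Returns 1 when the generated output matches the expected value, 0 otherwise.
export function exactMatch({ output, expected }: ScorerArgs): number {
  return output.trim() === expected.trim() ? 1 : 0;
}
```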
## Running evaluations
@@ -80,8 +80,8 @@ This command will execute the specified test file using `vitest` in the backgrou
The Console will feature leaderboards and comparison views to track score progression across different versions of a capability, helping you verify that your changes are leading to measurable improvements.
-## What's next?
+## What’s next?
-Once your capability meets your quality benchmarks in the Measure stage, it's ready to be deployed. The next step is to monitor its performance with real-world traffic.
+Once your capability meets your quality benchmarks in the Measure stage, it’s ready to be deployed. The next step is to monitor its performance with real-world traffic.
Learn more about this step of the Rudder workflow in the [Observe](/ai-engineering/observe) docs.
\ No newline at end of file
diff --git a/ai-engineering/observe.mdx b/ai-engineering/observe.mdx
index 9bf61eae..4eabddf2 100644
--- a/ai-engineering/observe.mdx
+++ b/ai-engineering/observe.mdx
@@ -1,25 +1,25 @@
---
title: "Observe"
-description: "Learn how to observe your deployed AI capabilities in production using Axiom's AI SDK to capture telemetry."
+description: "Learn how to observe your deployed AI capabilities in production using Axiom’s AI SDK to capture telemetry."
keywords: ["ai engineering", "rudder", "observe", "telemetry", "withspan", "opentelemetry"]
---
-import { Badge } from "/snippets/badge.jsx";
+import { Badge } from "/snippets/badge.jsx"
import AIEngineeringInstrumentationSnippet from '/snippets/ai-engineering-instrumentation.mdx'
The **Observe** stage is about understanding how your deployed generative AI capabilities perform in the real world. After creating and evaluating a capability, observing its production behavior is crucial for identifying unexpected issues, tracking costs, and gathering the data needed for future improvements.
## Capturing telemetry with the `@axiomhq/ai` SDK
-The foundation of the Observe stage is Axiom's SDK, which integrates with your app to capture detailed OpenTelemetry traces for every AI interaction.
+The foundation of the Observe stage is Axiom’s SDK, which integrates with your app to capture detailed OpenTelemetry traces for every AI interaction.
-The initial release of `@axiomhq/ai` is focused on providing deep integration with TypeScript applications, particularly those using Vercel's AI SDK to interact with frontier models.
+The initial release of `@axiomhq/ai` is focused on providing deep integration with TypeScript applications, particularly those using Vercel’s AI SDK to interact with frontier models.
### Instrumenting AI SDK calls
-The easiest way to get started is by wrapping your existing AI model client. The `@axiomhq/ai` package provides helper functions for popular libraries like Vercel's AI SDK.
+The easiest way to get started is by wrapping your existing AI model client. The `@axiomhq/ai` package provides helper functions for popular libraries like Vercel’s AI SDK.
The `wrapAISDKModel` function takes an existing AI model object and returns an instrumented version that will automatically generate trace data for every call.
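A minimal sketch of that wrapping step, assuming the Vercel AI SDK OpenAI provider (the model name and file layout are illustrative):

```typescript
// Minimal sketch: assumes the Vercel AI SDK OpenAI provider is installed.
// The model name and file layout are illustrative.
import { openai } from '@ai-sdk/openai';
import { wrapAISDKModel } from '@axiomhq/ai';

// Wrap the model once and reuse it. Calls made through the wrapped model
// automatically generate trace data, as described above.
export const gpt4o = wrapAISDKModel(openai('gpt-4o'));
```

You then pass the wrapped model to your existing AI SDK calls (for example, `generateText`) exactly as you would the unwrapped one.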
@@ -230,7 +230,7 @@ The Axiom AI SDK is built on the OpenTelemetry standard. To send traces, you nee
### Configuring the tracer
-You must configure an OTLP trace exporter pointing to your Axiom instance. This is typically done in a dedicated instrumentation file that is loaded before your application starts.
+You must configure an OTLP trace exporter pointing to your Axiom instance. This is typically done in a dedicated instrumentation file that’s loaded before your application starts.
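As a reference point, a minimal instrumentation sketch using the standard OpenTelemetry Node SDK could look like the following. The endpoint URL, header names, and environment variables are assumptions; adapt them to your Axiom setup.

```typescript
// instrumentation.ts: minimal sketch using the standard OpenTelemetry Node SDK.
// The endpoint, header names, and env vars are assumptions; adjust for your setup.
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

const sdk = new NodeSDK({
  serviceName: 'my-ai-app',
  traceExporter: new OTLPTraceExporter({
    url: 'https://api.axiom.co/v1/traces',
    headers: {
      Authorization: `Bearer ${process.env.AXIOM_TOKEN}`,
      'X-Axiom-Dataset': process.env.AXIOM_DATASET ?? '',
    },
  }),
});

// Start the SDK before the rest of the app is imported so every span is captured.
sdk.start();
```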
@@ -262,7 +262,7 @@ Visualizing and making sense of this telemetry data is a core part of the Axiom
* Coming soon A dedicated **AI Trace Waterfall** view will visualize single and multi-step LLM workflows, with clear input/output inspection at each stage.
* Coming soon A pre-built **Gen AI OTel Dashboard** will automatically appear for any dataset receiving AI telemetry. It will feature elements for tracking cost per invocation, time-to-first-token, call counts by model, and error rates.
-## What's next?
+## What’s next?
Now that you are capturing and analyzing production telemetry, the next step is to use these insights to improve your capability.
diff --git a/ai-engineering/overview.mdx b/ai-engineering/overview.mdx
index 434d8e2a..02600172 100644
--- a/ai-engineering/overview.mdx
+++ b/ai-engineering/overview.mdx
@@ -1,13 +1,13 @@
---
title: "Overview"
-description: "Introduction to Rudder, Axiom's methodology for designing, evaluating, monitoring, and iterating generative-AI capabilities."
+description: "Introduction to Rudder, Axiom’s methodology for designing, evaluating, monitoring, and iterating generative-AI capabilities."
keywords: ["ai engineering", "rudder", "prompt engineering", "generative ai"]
tag: "NEW"
---
-import { definitions } from '/snippets/definitions.mdx';
+import { definitions } from '/snippets/definitions.mdx'
-Generative AI development is fundamentally different from traditional software engineering. Its outputs are probabilistic, not deterministic; the same input can produce different results. This variability makes it challenging to guarantee quality and predict failure modes without the right infrastructure.
+Generative AI development is fundamentally different from traditional software engineering. Its outputs are probabilistic, not deterministic. The same input can produce different results. This variability makes it challenging to guarantee quality and predict failure modes without the right infrastructure.
Axiom’s data intelligence platform is ideally suited to address the unique challenges of AI engineering. Building on the foundational EventDB and Console components, Axiom provides an essential toolkit for the next generation of software builders.
@@ -15,16 +15,16 @@ This section of the documentation introduces the concepts and workflows for buil
### Rudder workflow
-Axiom provides a structured, iterative workflow—the Rudder method—for developing AI capabilities. The workflow is designed to build statistical confidence in systems that are not entirely predictable, and is grounded in systematic evaluation and continuous improvement, from initial prototype to production monitoring.
+Axiom provides a structured, iterative workflow—the Rudder method—for developing AI capabilities. The workflow is designed to build statistical confidence in systems that aren’t entirely predictable, and is grounded in systematic evaluation and continuous improvement, from initial prototype to production monitoring.
The core stages are:
* **Create**: Define a new AI capability, prototype it with various models, and gather reference examples to establish ground truth.
-* **Measure**: Systematically evaluate the capability's performance against reference data using custom graders to score for accuracy, quality, and cost.
+* **Measure**: Systematically evaluate the capability’s performance against reference data using custom graders to score for accuracy, quality, and cost.
* **Observe**: Cultivate the capability in production by collecting rich telemetry on every LLM call and tool execution. Use online evaluations to monitor for performance degradation and discover edge cases.
* **Iterate**: Use insights from production to refine prompts, augment reference datasets, and improve the capability over time.
-### What's next?
+### What’s next?
* To understand the key terms used in Rudder, see the [Concepts](/ai-engineering/concepts) page.
* To start building, follow the [Quickstart](/ai-engineering/quickstart) page.
\ No newline at end of file
diff --git a/ai-engineering/quickstart.mdx b/ai-engineering/quickstart.mdx
index 5e195f2f..44ffaa58 100644
--- a/ai-engineering/quickstart.mdx
+++ b/ai-engineering/quickstart.mdx
@@ -48,7 +48,7 @@ The `@axiomhq/ai` package also includes the `axiom` command-line interface (CLI)
## Configuration
-The Axiom AI SDK is built on the OpenTelemetry standard and requires a configured tracer to send data to Axiom. This is typically done in a dedicated instrumentation file that is loaded before the rest of your application.
+The Axiom AI SDK is built on the OpenTelemetry standard and requires a configured tracer to send data to Axiom. This is typically done in a dedicated instrumentation file that’s loaded before the rest of your application.
Here is a standard configuration for a Node.js environment:
@@ -116,7 +116,7 @@ OPENAI_API_KEY=""
GEMINI_API_KEY=""
```
-## What's next?
+## What’s next?
Now that your application is configured to send telemetry to Axiom, the next step is to start instrumenting your AI model and tool calls.
diff --git a/apl/aggregation-function/arg-max.mdx b/apl/aggregation-function/arg-max.mdx
index 37f02ef9..34090681 100644
--- a/apl/aggregation-function/arg-max.mdx
+++ b/apl/aggregation-function/arg-max.mdx
@@ -154,5 +154,5 @@ This query identifies the URI with the highest status code for each country.
## List of related aggregations
- [arg_min](/apl/aggregation-function/arg-min): Retrieves the record with the minimum value for a numeric field.
-- [max](/apl/aggregation-function/max): Retrieves the maximum value for a numeric field but does not return additional fields.
+- [max](/apl/aggregation-function/max): Retrieves the maximum value for a numeric field but doesn’t return additional fields.
- [percentile](/apl/aggregation-function/percentile): Provides the value at a specific percentile of a numeric field.
\ No newline at end of file
diff --git a/apl/aggregation-function/avg.mdx b/apl/aggregation-function/avg.mdx
index 8804aa99..c1137d22 100644
--- a/apl/aggregation-function/avg.mdx
+++ b/apl/aggregation-function/avg.mdx
@@ -3,7 +3,7 @@ title: avg
description: 'This page explains how to use the avg aggregation function in APL.'
---
-The `avg` aggregation in APL calculates the average value of a numeric field across a set of records. You can use this aggregation when you need to determine the mean value of numerical data, such as request durations, response times, or other performance metrics. It is useful in scenarios such as performance analysis, trend identification, and general statistical analysis.
+The `avg` aggregation in APL calculates the average value of a numeric field across a set of records. You can use this aggregation when you need to determine the mean value of numerical data, such as request durations, response times, or other performance metrics. It’s useful in scenarios such as performance analysis, trend identification, and general statistical analysis.
When to use `avg`:
diff --git a/apl/aggregation-function/dcount.mdx b/apl/aggregation-function/dcount.mdx
index a1da8d62..6959d754 100644
--- a/apl/aggregation-function/dcount.mdx
+++ b/apl/aggregation-function/dcount.mdx
@@ -5,7 +5,7 @@ description: 'This page explains how to use the dcount aggregation function in A
The `dcount` aggregation function in Axiom Processing Language (APL) counts the distinct values in a column. This function is essential when you need to know the number of unique values, such as counting distinct users, unique requests, or distinct error codes in log files.
-Use `dcount` for analyzing datasets where it’s important to identify the number of distinct occurrences, such as unique IP addresses in security logs, unique user IDs in application logs, or unique trace IDs in OpenTelemetry traces.
+Use `dcount` for analyzing datasets where it’s important to identify the number of distinct occurrences, such as unique IP addresses in security logs, unique user IDs in app logs, or unique trace IDs in OpenTelemetry traces.
The `dcount` aggregation in APL is a statistical aggregation that returns estimated results. The estimation comes with the benefit of speed at the expense of accuracy. This means that `dcount` is fast and light on resources even on a large or high-cardinality dataset, but it doesn’t provide precise results.
diff --git a/apl/aggregation-function/histogram.mdx b/apl/aggregation-function/histogram.mdx
index f515789d..07fc4126 100644
--- a/apl/aggregation-function/histogram.mdx
+++ b/apl/aggregation-function/histogram.mdx
@@ -3,7 +3,7 @@ title: histogram
description: 'This page explains how to use the histogram aggregation function in APL.'
---
-The `histogram` aggregation in APL allows you to create a histogram that groups numeric values into intervals or “bins.” This is useful for visualizing the distribution of data, such as the frequency of response times, request durations, or other continuous numerical fields. You can use it to analyze patterns and trends in datasets like logs, traces, or metrics. It is especially helpful when you need to summarize a large volume of data into a digestible form, providing insights on the distribution of values.
+The `histogram` aggregation in APL allows you to create a histogram that groups numeric values into intervals or “bins.” This is useful for visualizing the distribution of data, such as the frequency of response times, request durations, or other continuous numerical fields. You can use it to analyze patterns and trends in datasets like logs, traces, or metrics. It’s especially helpful when you need to summarize a large volume of data into a digestible form, providing insights on the distribution of values.
The `histogram` aggregation is ideal for identifying peaks, valleys, and outliers in your data. For example, you can analyze the distribution of request durations in web server logs or span durations in OpenTelemetry traces to understand performance bottlenecks.
diff --git a/apl/aggregation-function/make-list.mdx b/apl/aggregation-function/make-list.mdx
index e9153c0c..832ef0b4 100644
--- a/apl/aggregation-function/make-list.mdx
+++ b/apl/aggregation-function/make-list.mdx
@@ -147,6 +147,6 @@ This query collects the cities from which each user has made HTTP requests, usef
## List of related aggregations
- [**make_set**](/apl/aggregation-function/make-set): Similar to `make_list`, but only unique values are collected in the set. Use `make_set` when duplicates aren’t relevant.
-- [**count**](/apl/aggregation-function/count): Returns the count of rows in each group. Use this instead of `make_list` when you're interested in row totals rather than individual values.
+- [**count**](/apl/aggregation-function/count): Returns the count of rows in each group. Use this instead of `make_list` when you’re interested in row totals rather than individual values.
- [**max**](/apl/aggregation-function/max): Aggregates values by returning the maximum value from each group. Useful for numeric comparison across rows.
- [**dcount**](/apl/aggregation-function/dcount): Returns the distinct count of values for each group. Use this when you need unique value counts instead of listing them.
\ No newline at end of file
diff --git a/apl/aggregation-function/make-set-if.mdx b/apl/aggregation-function/make-set-if.mdx
index e40a80cb..ba8c8888 100644
--- a/apl/aggregation-function/make-set-if.mdx
+++ b/apl/aggregation-function/make-set-if.mdx
@@ -69,7 +69,7 @@ The `make_set_if` function returns a dynamic array of distinct values from the s
-In this use case, you're analyzing HTTP logs and want to get the distinct cities from which requests originated, but only for requests that took longer than 500 ms.
+In this use case, you’re analyzing HTTP logs and want to get the distinct cities from which requests originated, but only for requests that took longer than 500 ms.
**Query**
@@ -92,7 +92,7 @@ This query returns the distinct cities from which requests took more than 500 ms
-Here, you're analyzing OpenTelemetry traces and want to identify the distinct services that processed spans with a duration greater than 1 second, grouped by trace ID.
+Here, you’re analyzing OpenTelemetry traces and want to identify the distinct services that processed spans with a duration greater than 1 second, grouped by trace ID.
**Query**
diff --git a/apl/aggregation-function/make-set.mdx b/apl/aggregation-function/make-set.mdx
index 5e79e883..296a7e38 100644
--- a/apl/aggregation-function/make-set.mdx
+++ b/apl/aggregation-function/make-set.mdx
@@ -3,7 +3,7 @@ title: make_set
description: 'This page explains how to use the make_set aggregation function in APL.'
---
-The `make_set` aggregation in APL (Axiom Processing Language) is used to collect unique values from a specific column into an array. It is useful when you want to reduce your data by grouping it and then retrieving all unique values for each group. This aggregation is valuable for tasks such as grouping logs, traces, or events by a common attribute and retrieving the unique values of a specific field for further analysis.
+The `make_set` aggregation in APL (Axiom Processing Language) is used to collect unique values from a specific column into an array. It’s useful when you want to reduce your data by grouping it and then retrieving all unique values for each group. This aggregation is valuable for tasks such as grouping logs, traces, or events by a common attribute and retrieving the unique values of a specific field for further analysis.
You can use `make_set` when you need to collect non-repeating values across rows within a group, such as finding all the unique HTTP methods in web server logs or unique trace IDs in telemetry data.
diff --git a/apl/aggregation-function/max.mdx b/apl/aggregation-function/max.mdx
index efcc52db..20d2371c 100644
--- a/apl/aggregation-function/max.mdx
+++ b/apl/aggregation-function/max.mdx
@@ -3,7 +3,7 @@ title: max
description: 'This page explains how to use the max aggregation function in APL.'
---
-The `max` aggregation in APL allows you to find the highest value in a specific column of your dataset. This is useful when you need to identify the maximum value of numerical data, such as the longest request duration, highest sales figures, or the latest timestamp in logs. The `max` function is ideal when you are working with large datasets and need to quickly retrieve the largest value, ensuring you're focusing on the most critical or recent data point.
+The `max` aggregation in APL allows you to find the highest value in a specific column of your dataset. This is useful when you need to identify the maximum value of numerical data, such as the longest request duration, highest sales figures, or the latest timestamp in logs. The `max` function is ideal when you are working with large datasets and need to quickly retrieve the largest value, ensuring you’re focusing on the most critical or recent data point.
## For users of other query languages
diff --git a/apl/aggregation-function/maxif.mdx b/apl/aggregation-function/maxif.mdx
index 2c3334ac..832ebdb9 100644
--- a/apl/aggregation-function/maxif.mdx
+++ b/apl/aggregation-function/maxif.mdx
@@ -142,7 +142,7 @@ This query returns the maximum request duration for requests coming from the Uni
## List of related aggregations
-- [**minif**](/apl/aggregation-function/minif): Returns the minimum value from a column for rows that satisfy a condition. Use `minif` when you're interested in the lowest value under specific conditions.
+- [**minif**](/apl/aggregation-function/minif): Returns the minimum value from a column for rows that satisfy a condition. Use `minif` when you’re interested in the lowest value under specific conditions.
- [**max**](/apl/aggregation-function/max): Returns the maximum value from a column without filtering. Use `max` when you want the highest value across the entire dataset without conditions.
- [**sumif**](/apl/aggregation-function/sumif): Returns the sum of values for rows that satisfy a condition. Use `sumif` when you want the total value of a column under specific conditions.
- [**avgif**](/apl/aggregation-function/avgif): Returns the average of values for rows that satisfy a condition. Use `avgif` when you want to calculate the mean value based on a filter.
diff --git a/apl/aggregation-function/minif.mdx b/apl/aggregation-function/minif.mdx
index 3d2acf75..5e742420 100644
--- a/apl/aggregation-function/minif.mdx
+++ b/apl/aggregation-function/minif.mdx
@@ -148,4 +148,4 @@ This query returns the minimum request duration for HTTP requests originating fr
- [**maxif**](/apl/aggregation-function/maxif): Finds the maximum value of an expression that satisfies a condition. Use `maxif` when you need the maximum value under a condition, rather than the minimum.
- [**avgif**](/apl/aggregation-function/avgif): Calculates the average value of an expression that meets a specified condition. Useful when you want an average instead of a minimum.
- [**countif**](/apl/aggregation-function/countif): Counts the number of records that satisfy a given condition. Use this for counting records rather than calculating a minimum.
-- [**sumif**](/apl/aggregation-function/sumif): Sums the values of an expression for records that meet a condition. Helpful when you're interested in the total rather than the minimum.
\ No newline at end of file
+- [**sumif**](/apl/aggregation-function/sumif): Sums the values of an expression for records that meet a condition. Helpful when you’re interested in the total rather than the minimum.
\ No newline at end of file
diff --git a/apl/aggregation-function/percentile.mdx b/apl/aggregation-function/percentile.mdx
index 5af44fe7..a9d6f9b0 100644
--- a/apl/aggregation-function/percentile.mdx
+++ b/apl/aggregation-function/percentile.mdx
@@ -3,7 +3,7 @@ title: percentile
description: 'This page explains how to use the percentile aggregation function in APL.'
---
-The `percentile` aggregation function in Axiom Processing Language (APL) allows you to calculate the value below which a given percentage of data points fall. It is particularly useful when you need to analyze distributions and want to summarize the data using specific thresholds, such as the 90th or 95th percentile. This function can be valuable in performance analysis, trend detection, or identifying outliers across large datasets.
+The `percentile` aggregation function in Axiom Processing Language (APL) allows you to calculate the value below which a given percentage of data points fall. It’s particularly useful when you need to analyze distributions and want to summarize the data using specific thresholds, such as the 90th or 95th percentile. This function can be valuable in performance analysis, trend detection, or identifying outliers across large datasets.
You can apply the `percentile` function to various use cases, such as analyzing log data for request durations, OpenTelemetry traces for service latencies, or security logs to assess risk patterns.
diff --git a/apl/aggregation-function/stdev.mdx b/apl/aggregation-function/stdev.mdx
index aa245a37..4bd4c454 100644
--- a/apl/aggregation-function/stdev.mdx
+++ b/apl/aggregation-function/stdev.mdx
@@ -14,7 +14,7 @@ If you come from other query languages, this section explains how to adjust your
-In Splunk SPL, the `stdev` aggregation function works similarly but has a different syntax. While SPL uses the `stdev` command within the `stats` function, APL users will find the aggregation works similarly in APL with just minor differences in syntax.
+In Splunk SPL, the `stdev` aggregation function works similarly but has a different syntax. While SPL uses the `stdev` command within the `stats` function, the aggregation works much the same way in APL with only minor differences in syntax.
```sql Splunk example
diff --git a/apl/aggregation-function/sum.mdx b/apl/aggregation-function/sum.mdx
index 7ce22bd4..5afdf00c 100644
--- a/apl/aggregation-function/sum.mdx
+++ b/apl/aggregation-function/sum.mdx
@@ -5,7 +5,7 @@ description: 'This page explains how to use the sum aggregation function in APL.
The `sum` aggregation in APL is used to compute the total sum of a specific numeric field in a dataset. This aggregation is useful when you want to find the cumulative value for a certain metric, such as the total duration of requests, total sales revenue, or any other numeric field that can be summed.
-You can use the `sum` aggregation in a wide range of scenarios, such as analyzing log data, monitoring traces, or examining security logs. It is particularly helpful when you want to get a quick overview of your data in terms of totals or cumulative statistics.
+You can use the `sum` aggregation in a wide range of scenarios, such as analyzing log data, monitoring traces, or examining security logs. It’s particularly helpful when you want to get a quick overview of your data in terms of totals or cumulative statistics.
## For users of other query languages
@@ -140,6 +140,6 @@ This query counts the total number of successful requests (status 200) in the da
- [**count**](/apl/aggregation-function/count): Counts the number of records in a dataset. Use `count` when you want to count the number of rows, not aggregate numeric values.
- [**avg**](/apl/aggregation-function/avg): Computes the average value of a numeric field. Use `avg` when you need to find the mean instead of the total sum.
-- [**min**](/apl/aggregation-function/min): Returns the minimum value of a numeric field. Use `min` when you're interested in the lowest value.
-- [**max**](/apl/aggregation-function/max): Returns the maximum value of a numeric field. Use `max` when you're interested in the highest value.
+- [**min**](/apl/aggregation-function/min): Returns the minimum value of a numeric field. Use `min` when you’re interested in the lowest value.
+- [**max**](/apl/aggregation-function/max): Returns the maximum value of a numeric field. Use `max` when you’re interested in the highest value.
- [**sumif**](/apl/aggregation-function/sumif): Sums a numeric field conditionally. Use `sumif` when you only want to sum values that meet a specific condition.
\ No newline at end of file
diff --git a/apl/aggregation-function/sumif.mdx b/apl/aggregation-function/sumif.mdx
index 581c83ee..2752a016 100644
--- a/apl/aggregation-function/sumif.mdx
+++ b/apl/aggregation-function/sumif.mdx
@@ -126,14 +126,14 @@ Here, we calculate the total request duration for failed HTTP requests (those wi
|---------------------------|
| 64000 |
-This query computes the total request duration for all failed HTTP requests (where the status code is not `200`), which can be useful for security log analysis.
+This query computes the total request duration for all failed HTTP requests (where the status code isn’t `200`), which can be useful for security log analysis.
## List of related aggregations
-- [**avgif**](/apl/aggregation-function/avgif): Computes the average of a numeric expression for records that meet a specified condition. Use `avgif` when you're interested in the average value, not the total sum.
+- [**avgif**](/apl/aggregation-function/avgif): Computes the average of a numeric expression for records that meet a specified condition. Use `avgif` when you’re interested in the average value, not the total sum.
- [**countif**](/apl/aggregation-function/countif): Counts the number of records that satisfy a condition. Use `countif` when you need to know how many records match a specific criterion.
- [**minif**](/apl/aggregation-function/minif): Returns the minimum value of a numeric expression for records that meet a condition. Useful when you need the smallest value under certain criteria.
- [**maxif**](/apl/aggregation-function/maxif): Returns the maximum value of a numeric expression for records that meet a condition. Use `maxif` to identify the highest values under certain conditions.
\ No newline at end of file
diff --git a/apl/aggregation-function/topk.mdx b/apl/aggregation-function/topk.mdx
index 2c5dc6ea..7a5bba93 100644
--- a/apl/aggregation-function/topk.mdx
+++ b/apl/aggregation-function/topk.mdx
@@ -162,8 +162,8 @@ This query returns the top 5 cities based on the number of HTTP requests.
## List of related aggregations
-- [top](/apl/tabular-operators/top-operator): Returns the top values based on a field without requiring a specific number of results (`k`), making it useful when you're unsure how many top values to retrieve.
-- [topkif](/apl/aggregation-function/topkif): Returns the top `k` results without filtering. Use topk when you do not need to restrict your analysis to a subset.
+- [top](/apl/tabular-operators/top-operator): Returns the top values based on a field without requiring a specific number of results (`k`), making it useful when you’re unsure how many top values to retrieve.
+- [topkif](/apl/aggregation-function/topkif): Returns the top `k` results for records that match a condition. Use topkif when you want to restrict your analysis to a filtered subset of your data.
- [sort](/apl/tabular-operators/sort-operator): Orders the dataset based on one or more fields, which is useful if you need a complete ordered list rather than the top `k` values.
- [extend](/apl/tabular-operators/extend-operator): Adds calculated fields to your dataset, which can be useful in combination with `topk` to create custom rankings.
- [count](/apl/aggregation-function/count): Aggregates the dataset by counting occurrences, often used in conjunction with `topk` to find the most common values.
\ No newline at end of file
diff --git a/apl/aggregation-function/topkif.mdx b/apl/aggregation-function/topkif.mdx
index 9cf40729..05d3bed3 100644
--- a/apl/aggregation-function/topkif.mdx
+++ b/apl/aggregation-function/topkif.mdx
@@ -8,7 +8,7 @@ The `topkif` aggregation in Axiom Processing Language (APL) allows you to identi
Use `topkif` when you need to focus on the most important filtered subsets of data, especially in log analysis, telemetry data, and monitoring systems. This aggregation helps you quickly zoom in on significant values without scanning the entire dataset.
-The `topkif` aggregation in APL is a statistical aggregation that returns estimated results. The estimation provides the benefit of speed at the expense of precision. This means that `topkif` is fast and light on resources even on large or high-cardinality datasets but does not provide completely accurate results.
+The `topkif` aggregation in APL is a statistical aggregation that returns estimated results. The estimation provides the benefit of speed at the expense of precision. This means that `topkif` is fast and light on resources even on large or high-cardinality datasets but doesn’t provide completely accurate results.
For completely accurate results, use the [top operator](/apl/tabular-operators/top-operator) together with a filter.
@@ -20,7 +20,7 @@ If you come from other query languages, this section explains how to adjust your
-Splunk SPL does not have a direct equivalent to the `topkif` function. You can achieve similar results by using the top command combined with a where clause, which is closer to using APL’s top operator with a filter. However, APL’s `topkif` provides a more optimized, estimated solution when you want speed and efficiency.
+Splunk SPL doesn’t have a direct equivalent to the `topkif` function. You can achieve similar results by using the top command combined with a where clause, which is closer to using APL’s top operator with a filter. However, APL’s `topkif` provides a more optimized, estimated solution when you want speed and efficiency.
```sql Splunk example
@@ -160,7 +160,7 @@ This query returns the top 5 cities generating the most GET HTTP requests.
# List of related aggregations
-- [topk](/apl/aggregation-function/topk): Returns the top `k` results without filtering. Use topk when you do not need to restrict your analysis to a subset.
+- [topk](/apl/aggregation-function/topk): Returns the top `k` results without filtering. Use topk when you don’t need to restrict your analysis to a subset.
- [top](/apl/tabular-operators/top-operator): Returns the top results based on a field with accurate results. Use top when precision is important.
- [sort](/apl/tabular-operators/sort-operator): Sorts the dataset based on one or more fields. Use sort if you need full ordered results.
- [extend](/apl/tabular-operators/extend-operator): Adds calculated fields to your dataset, useful before applying topkif to create new fields to rank.
diff --git a/apl/aggregation-function/variance.mdx b/apl/aggregation-function/variance.mdx
index e53aee50..14f26de2 100644
--- a/apl/aggregation-function/variance.mdx
+++ b/apl/aggregation-function/variance.mdx
@@ -3,7 +3,7 @@ title: variance
description: 'This page explains how to use the variance aggregation function in APL.'
---
-The `variance` aggregation function in APL calculates the variance of a numeric expression across a set of records. Variance is a statistical measurement that represents the spread of data points in a dataset. It's useful for understanding how much variation exists in your data. In scenarios such as performance analysis, network traffic monitoring, or anomaly detection, `variance` helps identify outliers and patterns by showing how data points deviate from the mean.
+The `variance` aggregation function in APL calculates the variance of a numeric expression across a set of records. Variance is a statistical measurement that represents the spread of data points in a dataset. It’s useful for understanding how much variation exists in your data. In scenarios such as performance analysis, network traffic monitoring, or anomaly detection, `variance` helps identify outliers and patterns by showing how data points deviate from the mean.
## For users of other query languages
diff --git a/apl/aggregation-function/varianceif.mdx b/apl/aggregation-function/varianceif.mdx
index ceffe7c6..35c6a6f9 100644
--- a/apl/aggregation-function/varianceif.mdx
+++ b/apl/aggregation-function/varianceif.mdx
@@ -139,5 +139,5 @@ This query calculates the variance in request durations for requests originating
## List of related aggregations
- [**avgif**](/apl/aggregation-function/avgif): Computes the average value of an expression for records that match a given condition. Use `avgif` when you want the average instead of variance.
-- [**sumif**](/apl/aggregation-function/sumif): Returns the sum of values that meet a specified condition. Use `sumif` when you're interested in totals, not variance.
+- [**sumif**](/apl/aggregation-function/sumif): Returns the sum of values that meet a specified condition. Use `sumif` when you’re interested in totals, not variance.
- [**stdevif**](/apl/aggregation-function/stdevif): Returns the standard deviation of values based on a condition. Use `stdevif` when you want to measure dispersion using standard deviation instead of variance.
\ No newline at end of file
diff --git a/apl/apl-features.mdx b/apl/apl-features.mdx
index ed213159..1509457c 100644
--- a/apl/apl-features.mdx
+++ b/apl/apl-features.mdx
@@ -104,7 +104,7 @@ keywords: ['axiom documentation', 'documentation', 'axiom', 'APL', 'axiom proces
| Datetime function | [unixtime_seconds_todatetime](/apl/scalar-functions/datetime-functions/unixtime-seconds-todatetime) | Converts second Unix timestamp to datetime. |
| Datetime function | [week_of_year](/apl/scalar-functions/datetime-functions/week-of-year) | Returns the ISO 8601 week number from a datetime expression. |
| Hash function | [hash_md5](/apl/scalar-functions/hash-functions#hash-md5) | Returns MD5 hash. |
-| Hash function | [hash_sha1](/apl/scalar-functions/hash-functions#hash-sha1) | Returns SHA1 hash. |
+| Hash function | [hash_sha1](/apl/scalar-functions/hash-functions#hash-sha1) | Returns SHA-1 hash. |
| Hash function | [hash_sha256](/apl/scalar-functions/hash-functions#hash-sha256) | Returns SHA256 hash. |
| Hash function | [hash_sha512](/apl/scalar-functions/hash-functions#hash-sha512) | Returns SHA512 hash. |
| Hash function | [hash](/apl/scalar-functions/hash-functions/hash) | Returns integer hash of input. |
@@ -199,8 +199,8 @@ keywords: ['axiom documentation', 'documentation', 'axiom', 'APL', 'axiom proces
| String function | [indexof](/apl/scalar-functions/string-functions#indexof) | Returns index of the first occurrence of a substring. |
| String function | [isascii](/apl/scalar-functions/string-functions/isascii) | Returns `true` if all characters in an input string are ASCII characters. |
| String function | [isempty](/apl/scalar-functions/string-functions#isempty) | Returns `true` if the argument is empty or null. |
-| String function | [isnotempty](/apl/scalar-functions/string-functions#isnotempty) | Returns `true` if the argument is not empty or null. |
-| String function | [isnotnull](/apl/scalar-functions/string-functions#isnotnull) | Returns `true` if the argument is not null. |
+| String function | [isnotempty](/apl/scalar-functions/string-functions#isnotempty) | Returns `true` if the argument isn’t empty or null. |
+| String function | [isnotnull](/apl/scalar-functions/string-functions#isnotnull) | Returns `true` if the argument isn’t null. |
| String function | [isnull](/apl/scalar-functions/string-functions#isnull) | Returns `true` if the argument is null. |
| String function | [parse_bytes](/apl/scalar-functions/string-functions#parse-bytes) | Parses byte-size string to number of bytes. |
| String function | [parse_csv](/apl/scalar-functions/string-functions#parse-csv) | Splits a CSV-formatted string into an array. |
diff --git a/apl/data-types/scalar-data-types.mdx b/apl/data-types/scalar-data-types.mdx
index 6abdd312..c2000172 100644
--- a/apl/data-types/scalar-data-types.mdx
+++ b/apl/data-types/scalar-data-types.mdx
@@ -46,7 +46,7 @@ Literals of type **datetime** have the syntax **datetime** (`value`), where a nu
| **Example** | **Value** |
| ------------------------------------------------------------ | --------------------------------------------------------------------------------- |
| **datetime(2019-11-30 23:59:59.9)** **datetime(2015-12-31)** | Times are always in UTC. Omitting the date gives a time today. |
-| **datetime(null)** | Check out our [null values](/apl/data-types/null-values) |
+| **datetime(null)** | Check out [null values](/apl/data-types/null-values) |
| **now()** | The current time. |
| **now(-timespan)** | now()-timespan |
| **ago(timespan)** | now()-timespan |
@@ -55,9 +55,7 @@ Literals of type **datetime** have the syntax **datetime** (`value`), where a nu
### Supported formats
-We support the **ISO 8601** format, which is the standard format for representing dates and times in the Gregorian calendar.
-
-### [ISO 8601](https://www.iso.org/iso-8601-date-and-time-format.html)
+Axiom supports the [ISO 8601](https://www.iso.org/iso-8601-date-and-time-format.html) format, the standard for representing dates and times in the Gregorian calendar.
| **Format** | **Example** |
| ------------------- | --------------------------- |
@@ -128,13 +126,13 @@ The **string** data type represents a sequence of zero or more [Unicode](https:/
There are several ways to encode literals of the **string** data type in a query text:
- Enclose the string in double-quotes(`"`): "This is a string literal. Single quote characters (') don’t require escaping. Double quote characters (\") are escaped by a backslash (\\)"
-- Enclose the string in single-quotes (`'`): Another string literal. Single quote characters (\') require escaping by a backslash (\\). Double quote characters (") do not require escaping.
+- Enclose the string in single-quotes (`'`): Another string literal. Single quote characters (\') require escaping by a backslash (\\). Double quote characters (") don’t require escaping.
In the two representations above, the backslash (`\`) character indicates escaping. The backslash is used to escape the enclosing quote characters, tab characters (`\t`), newline characters (`\n`), and itself (`\\`).
### Raw string literals
-Raw string literals are also supported. In this form, the backslash character (`\`) stands for itself, and does not denote an escape sequence.
+Raw string literals are also supported. In this form, the backslash character (`\`) stands for itself, and doesn’t denote an escape sequence.
- Enclosed in double-quotes (`""`): `@"This is a raw string literal"`
- Enclose in single-quotes (`'`): `@'This is a raw string literal'`
diff --git a/apl/guides/migrating-from-sql-to-apl.mdx b/apl/guides/migrating-from-sql-to-apl.mdx
index b38704be..d2885934 100644
--- a/apl/guides/migrating-from-sql-to-apl.mdx
+++ b/apl/guides/migrating-from-sql-to-apl.mdx
@@ -1,15 +1,15 @@
---
title: "Migrate from SQL to APL"
-description: "This guide will help you through migrating SQL to APL, helping you understand key differences and providing you with query examples."
+description: "This guide helps you migrate SQL to APL, helping you understand key differences and providing you with query examples."
sidebarTitle: SQL
keywords: ['axiom documentation', 'documentation', 'axiom', 'apl', 'sql', 'guide', 'migration guide', 'sql query']
---
## Introduction
-As data grows exponentially, organizations are continuously seeking more efficient and powerful tools to manage and analyze their data. The Query tab, which utilizes the Axiom Processing Language (APL), is one such service that offers fast, scalable, and interactive data exploration capabilities. If you are an SQL user looking to migrate to APL, this guide will provide a gentle introduction to help you make the transition smoothly.
+As data grows exponentially, organizations are continuously seeking more efficient and powerful tools to manage and analyze their data. The Query tab, which utilizes the Axiom Processing Language (APL), is one such service that offers fast, scalable, and interactive data exploration capabilities.
-**This tutorial will guide you through migrating SQL to APL, helping you understand key differences and providing you with query examples.**
+This tutorial helps you migrate from SQL to APL by explaining key differences and providing query examples.
## Introduction to Axiom Processing Language (APL)
@@ -38,7 +38,7 @@ While SQL and APL are query languages, there are some key differences to conside
- **Pipelining:** APL uses a pipelining model, much like the UNIX command line. You can chain commands together using the pipe (`|`) symbol, with each command operating on the results of the previous command. This makes it very easy to write complex queries.
-- **Easy to Learn:** APL is designed to be simple and easy to learn, especially for those already familiar with SQL. It does not require any knowledge of database schemas or the need to specify joins.
+- **Easy to Learn:** APL is designed to be simple and easy to learn, especially for those already familiar with SQL. It doesn’t require any knowledge of database schemas or the need to specify joins.
- **Scalability:** APL is a more scalable platform than SQL. This means that it can handle larger amounts of data.
@@ -580,7 +580,7 @@ WHERE LOWER(Method) LIKE 'get’';
## Take the First Step Today: Dive into APL
-The journey from SQL to APL might seem daunting at first, but with the right approach, it can become an empowering transition. It is about expanding your data query capabilities to leverage the advanced, versatile, and fast querying infrastructure that APL provides. In the end, the goal is to enable you to draw more value from your data, make faster decisions, and ultimately propel your business forward.
+The journey from SQL to APL might seem daunting at first, but with the right approach, it can become an empowering transition. It’s about expanding your data query capabilities to leverage the advanced, versatile, and fast querying infrastructure that APL provides. In the end, the goal is to enable you to draw more value from your data, make faster decisions, and ultimately propel your business forward.
Try converting some of your existing SQL queries to APL and observe the performance difference. Explore the Axiom Processing Language and start experimenting with its unique features.
diff --git a/apl/guides/migrating-from-sumologic-to-apl.mdx b/apl/guides/migrating-from-sumologic-to-apl.mdx
index 8abe6fe7..0f0397f8 100644
--- a/apl/guides/migrating-from-sumologic-to-apl.mdx
+++ b/apl/guides/migrating-from-sumologic-to-apl.mdx
@@ -528,8 +528,8 @@ In this section, we will identify version numbers that match numeric values 2, 3
## Making the Leap: Transform Your Data Analytics with APL
-As we've navigated through the process of migrating from Sumo Logic to APL, we hope you've found the insights valuable. The powerful capabilities of Axiom Processing Lnaguage are now within your reach, ready to empower your data analytics journey.
+As we’ve navigated through the process of migrating from Sumo Logic to APL, we hope you’ve found the insights valuable. The powerful capabilities of Axiom Processing Language are now within your reach, ready to empower your data analytics journey.
-Ready to take the next step in your data analytics journey? Dive deeper into APL and discover how it can unlock even more potential in your data. Check out our APL [learning resources](/apl/guides/migrating-from-sql-to-apl) and [tutorials](/apl/tutorial) to become proficient in APL, and join our [community forums](http://axiom.co/discord) to engage with other APL users. Together, we can redefine what’s possible in data analytics. Remember, the migration to APL is not just a change, it’s an upgrade. Embrace the change, because better data analytics await you.
+Ready to take the next step in your data analytics journey? Dive deeper into APL and discover how it can unlock even more potential in your data. Check out our APL [learning resources](/apl/guides/migrating-from-sql-to-apl) and [tutorials](/apl/tutorial) to become proficient in APL, and join our [community forums](http://axiom.co/discord) to engage with other APL users. Together, we can redefine what’s possible in data analytics. Remember, the migration to APL isn’t just a change, it’s an upgrade. Embrace the change, because better data analytics await you.
Begin your APL journey today!
\ No newline at end of file
diff --git a/apl/introduction.mdx b/apl/introduction.mdx
index 98af570a..4d765842 100644
--- a/apl/introduction.mdx
+++ b/apl/introduction.mdx
@@ -87,7 +87,7 @@ To quote the dataset or field in your APL query, enclose its name with quotation
For more information on rules about naming and quoting entities, see [Entity names](/apl/entities/entity-names).
-## What's next
+## What’s next
Check out the [list of sample queries](/apl/tutorial) or explore the supported operators and functions:
diff --git a/apl/query-statement/set-statement.mdx b/apl/query-statement/set-statement.mdx
index f5a3935c..7e17fb58 100644
--- a/apl/query-statement/set-statement.mdx
+++ b/apl/query-statement/set-statement.mdx
@@ -6,7 +6,7 @@ keywords: ['axiom documentation', 'documentation', 'axiom', 'set statement', 'st
The `set` statement is used to set a query option. Options enabled with the `set` statement only have effect for the duration of the query.
-The `set` statement specified will affect how your query is processed and the returned results.
+The `set` statement affects how your query is processed and the returned results.
## Syntax
@@ -16,7 +16,7 @@ set OptionName=OptionValue
## Strict types
-The `stricttypes` query option lets you specify only the exact type of the data type declaration needed in your query, or a **QueryFailed** error will be thrown.
+The `stricttypes` query option lets you specify only the exact type of the data type declaration needed in your query. Otherwise, it throws a **QueryFailed** error.
## Example
diff --git a/apl/scalar-functions/array-functions/array-reverse.mdx b/apl/scalar-functions/array-functions/array-reverse.mdx
index a2d5bef6..68437607 100644
--- a/apl/scalar-functions/array-functions/array-reverse.mdx
+++ b/apl/scalar-functions/array-functions/array-reverse.mdx
@@ -12,12 +12,12 @@ If you come from other query languages, this section explains how to adjust your
-In Splunk, reversing an array is not a built-in function, so you typically manipulate the data manually or use workarounds. In APL, `array_reverse` simplifies this process by reversing the array directly.
+In Splunk, reversing an array isn’t a built-in function, so you typically manipulate the data manually or use workarounds. In APL, `array_reverse` simplifies this process by reversing the array directly.
```sql Splunk example
-# SPL does not have a direct array_reverse equivalent.
+# SPL doesn’t have a direct array_reverse equivalent.
```
```kusto APL equivalent
diff --git a/apl/scalar-functions/array-functions/array-rotate-left.mdx b/apl/scalar-functions/array-functions/array-rotate-left.mdx
index df3193e6..102a84b8 100644
--- a/apl/scalar-functions/array-functions/array-rotate-left.mdx
+++ b/apl/scalar-functions/array-functions/array-rotate-left.mdx
@@ -12,7 +12,7 @@ If you come from other query languages, this section explains how to adjust your
-In APL, `array_rotate_left` allows for direct rotation within the array. Splunk SPL does not have a direct equivalent, so you may need to combine multiple SPL functions to achieve a similar rotation effect.
+In APL, `array_rotate_left` allows for direct rotation within the array. Splunk SPL doesn’t have a direct equivalent, so you may need to combine multiple SPL functions to achieve a similar rotation effect.
```sql Splunk example
diff --git a/apl/scalar-functions/array-functions/array-shift-left.mdx b/apl/scalar-functions/array-functions/array-shift-left.mdx
index 1300e124..24293b9b 100644
--- a/apl/scalar-functions/array-functions/array-shift-left.mdx
+++ b/apl/scalar-functions/array-functions/array-shift-left.mdx
@@ -32,7 +32,7 @@ In Splunk SPL, there is no direct equivalent to `array_shift_left`, but you can
-ANSI SQL does not have a native function equivalent to `array_shift_left`. Typically, you would use procedural SQL to write custom logic for this transformation. In APL, the `array_shift_left` function provides an elegant, concise solution.
+ANSI SQL doesn’t have a native function equivalent to `array_shift_left`. Typically, you would use procedural SQL to write custom logic for this transformation. In APL, the `array_shift_left` function provides an elegant, concise solution.
```sql SQL example
diff --git a/apl/scalar-functions/array-functions/array-shift-right.mdx b/apl/scalar-functions/array-functions/array-shift-right.mdx
index 561d5a84..47b186fd 100644
--- a/apl/scalar-functions/array-functions/array-shift-right.mdx
+++ b/apl/scalar-functions/array-functions/array-shift-right.mdx
@@ -33,7 +33,7 @@ In Splunk SPL, similar functionality might be achieved using custom code to rota
-ANSI SQL does not have a built-in function for shifting arrays. In SQL, achieving this would involve user-defined functions or complex subqueries. In APL, `array_shift_right` simplifies this operation significantly.
+ANSI SQL doesn’t have a built-in function for shifting arrays. In SQL, achieving this would involve user-defined functions or complex subqueries. In APL, `array_shift_right` simplifies this operation significantly.
```sql SQL example
diff --git a/apl/scalar-functions/array-functions/array-slice.mdx b/apl/scalar-functions/array-functions/array-slice.mdx
index 3c99176d..78f56d03 100644
--- a/apl/scalar-functions/array-functions/array-slice.mdx
+++ b/apl/scalar-functions/array-functions/array-slice.mdx
@@ -55,8 +55,8 @@ array_slice(array, start, end)
| Parameter | Description |
|-----------|-------------|
| `array` | The input array to slice. |
-| `start` | The starting index of the slice (inclusive). If negative, it is counted from the end of the array. |
-| `end` | The ending index of the slice (exclusive). If negative, it is counted from the end of the array. |
+| `start` | The starting index of the slice (inclusive). If negative, it’s counted from the end of the array. |
+| `end` | The ending index of the slice (exclusive). If negative, it’s counted from the end of the array. |
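A small hypothetical example of negative indexing, assuming the `print` operator and a `dynamic` array literal:

```kusto
print arr = dynamic([1, 2, 3, 4, 5])
// Start at index 1; the negative end index is counted from the end of the array.
| extend middle = array_slice(arr, 1, -2)
```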
### Returns
diff --git a/apl/scalar-functions/array-functions/array-split.mdx b/apl/scalar-functions/array-functions/array-split.mdx
index d4ac25e9..5d8bf05f 100644
--- a/apl/scalar-functions/array-functions/array-split.mdx
+++ b/apl/scalar-functions/array-functions/array-split.mdx
@@ -33,7 +33,7 @@ In Splunk SPL, array manipulation is achieved through functions like `mvzip` and
-ANSI SQL does not have built-in functions for directly splitting arrays. APL provides this capability natively, making it easier to handle array operations within queries.
+ANSI SQL doesn’t have built-in functions for directly splitting arrays. APL provides this capability natively, making it easier to handle array operations within queries.
```sql SQL example
diff --git a/apl/scalar-functions/array-functions/array-sum.mdx b/apl/scalar-functions/array-functions/array-sum.mdx
index 19adba0c..8d488626 100644
--- a/apl/scalar-functions/array-functions/array-sum.mdx
+++ b/apl/scalar-functions/array-functions/array-sum.mdx
@@ -28,7 +28,7 @@ In Splunk SPL, you might need to use commands or functions such as `mvsum` for s
-ANSI SQL does not natively support array operations like summing array elements. However, you can achieve similar results with `UNNEST` and `SUM`. In APL, `array_sum` simplifies this by handling array summation directly.
+ANSI SQL doesn’t natively support array operations like summing array elements. However, you can achieve similar results with `UNNEST` and `SUM`. In APL, `array_sum` simplifies this by handling array summation directly.
```sql SQL example
diff --git a/apl/scalar-functions/array-functions/bag-keys.mdx b/apl/scalar-functions/array-functions/bag-keys.mdx
index 1231e8e0..91437456 100644
--- a/apl/scalar-functions/array-functions/bag-keys.mdx
+++ b/apl/scalar-functions/array-functions/bag-keys.mdx
@@ -76,7 +76,7 @@ bag_keys(bag)
### Returns
-An array of type `string[]` containing the names of the keys in the dynamic object. If the input is not a dynamic object, the function returns `null`.
+An array of type `string[]` containing the names of the keys in the dynamic object. If the input isn’t a dynamic object, the function returns `null`.
## Use case examples
diff --git a/apl/scalar-functions/array-functions/isarray.mdx b/apl/scalar-functions/array-functions/isarray.mdx
index e15c3a37..c6e05c55 100644
--- a/apl/scalar-functions/array-functions/isarray.mdx
+++ b/apl/scalar-functions/array-functions/isarray.mdx
@@ -3,7 +3,7 @@ title: isarray
description: 'This page explains how to use the isarray function in APL.'
---
-The `isarray` function in APL checks whether a specified value is an array. Use this function to validate input data, handle dynamic schemas, or filter for records where a field is explicitly an array. It is particularly useful when working with data that contains fields with mixed data types or optional nested arrays.
+The `isarray` function in APL checks whether a specified value is an array. Use this function to validate input data, handle dynamic schemas, or filter for records where a field is explicitly an array. It’s particularly useful when working with data that contains fields with mixed data types or optional nested arrays.
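For illustration, a minimal sketch that checks one value, assuming the `print` operator and a `dynamic` literal:

```kusto
print value = dynamic([1, 2, 3])
| extend is_arr = isarray(value)   // true, because the value is an array
```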
## For users of other query languages
@@ -12,7 +12,7 @@ If you come from other query languages, this section explains how to adjust your
-In Splunk SPL, similar functionality is achieved by analyzing the data structure manually, as SPL does not have a direct equivalent to `isarray`. APL simplifies this task by providing the `isarray` function to directly evaluate whether a value is an array.
+In Splunk SPL, similar functionality is achieved by analyzing the data structure manually, as SPL doesn’t have a direct equivalent to `isarray`. APL simplifies this task by providing the `isarray` function to directly evaluate whether a value is an array.
```sql Splunk example
@@ -60,7 +60,7 @@ isarray(value)
| Parameter | Description |
|-----------|-----------------------------------------|
-| `value` | The value to check if it is an array. |
+| `value` | The value to check if it’s an array. |
### Returns
diff --git a/apl/scalar-functions/array-functions/pack-array.mdx b/apl/scalar-functions/array-functions/pack-array.mdx
index 28802729..10333812 100644
--- a/apl/scalar-functions/array-functions/pack-array.mdx
+++ b/apl/scalar-functions/array-functions/pack-array.mdx
@@ -3,7 +3,7 @@ title: pack_array
description: 'This page explains how to use the pack_array function in APL.'
---
-The `pack_array` function in APL creates an array from individual values or expressions. You can use this function to group related data into a single field, which can simplify handling and querying of data collections. It is especially useful when working with nested data structures or aggregating data into arrays for further processing.
+The `pack_array` function in APL creates an array from individual values or expressions. You can use this function to group related data into a single field, which can simplify handling and querying of data collections. It’s especially useful when working with nested data structures or aggregating data into arrays for further processing.
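A minimal sketch, assuming the `print` operator; the literal values are placeholders:

```kusto
// Packs three scalar values into a single array field.
print methods = pack_array('GET', 'POST', 'PUT')
```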
## For users of other query languages
diff --git a/apl/scalar-functions/array-functions/pack-dictionary.mdx b/apl/scalar-functions/array-functions/pack-dictionary.mdx
index 28723d96..dd7dc547 100644
--- a/apl/scalar-functions/array-functions/pack-dictionary.mdx
+++ b/apl/scalar-functions/array-functions/pack-dictionary.mdx
@@ -18,7 +18,7 @@ If you come from other query languages, this section explains how to adjust your
-While SPL doesn't have a direct equivalent of `pack_dictionary`, you can simulate similar behavior using the `eval` command and `mvzip` or `mvmap` to construct composite objects. In APL, `pack_dictionary` is a simpler and more declarative way to produce key-value structures inline.
+While SPL doesn’t have a direct equivalent of `pack_dictionary`, you can simulate similar behavior using the `eval` command and `mvzip` or `mvmap` to construct composite objects. In APL, `pack_dictionary` is a simpler and more declarative way to produce key-value structures inline.
```sql Splunk example
diff --git a/apl/scalar-functions/conversion-functions.mdx b/apl/scalar-functions/conversion-functions.mdx
index eeb4eda7..c0e20aff 100644
--- a/apl/scalar-functions/conversion-functions.mdx
+++ b/apl/scalar-functions/conversion-functions.mdx
@@ -68,7 +68,7 @@ In this example, the value of `newstatus` is the value of `status` because the `
### Future-proof queries
-In this example, the query is prepared for a field named `upcoming_field` that is expected to be added to the data soon. By using `ensure_field()`, logic can be written around this future field, and the query will work when the field becomes available.
+In this example, the query is prepared for a field named `upcoming_field` that’s expected to be added to the data soon. By using `ensure_field()`, logic can be written around this future field, and the query will work when the field becomes available.
```kusto
['sample-http-logs']
@@ -152,7 +152,7 @@ Converts the input to a value of type real. **(todouble() is an alternative word
### Returns
-If conversion is successful, the result is a value of type real. If conversion is not successful, the result returns false.
+If conversion is successful, the result is a value of type real. If conversion isn’t successful, the result returns false.
### Examples
@@ -244,7 +244,7 @@ Converts input to a hexadecimal string.
### Returns
-If conversion is successful, result will be a string value. If conversion is not successful, result will be false.
+If conversion is successful, the result is a string value. If conversion isn’t successful, the result is false.
### Examples
@@ -280,7 +280,7 @@ Converts input to long (signed 64-bit) number representation.
### Returns
-If conversion is successful, result will be a long number. If conversion is not successful, result will be false.
+If conversion is successful, the result is a long number. If conversion isn’t successful, the result is false.
### Examples
diff --git a/apl/scalar-functions/conversion-functions/toarray.mdx b/apl/scalar-functions/conversion-functions/toarray.mdx
index 9b45314a..faf85e06 100644
--- a/apl/scalar-functions/conversion-functions/toarray.mdx
+++ b/apl/scalar-functions/conversion-functions/toarray.mdx
@@ -31,7 +31,7 @@ print methods = dynamic(["GET", "POST", "PUT"])
-ANSI SQL does not support arrays natively. You typically store lists as JSON and use JSON functions to manipulate them. In APL, you can parse JSON into dynamic values and use `toarray` to convert those into arrays for further processing.
+ANSI SQL doesn’t support arrays natively. You typically store lists as JSON and use JSON functions to manipulate them. In APL, you can parse JSON into dynamic values and use `toarray` to convert those into arrays for further processing.
```sql SQL example
@@ -64,7 +64,7 @@ toarray(value)
### Returns
-An array containing the elements of the dynamic input. If the input is already an array, the result is identical. If the input is a property bag, it returns an array of values. If the input is not coercible to an array, the result is an empty array.
+An array containing the elements of the dynamic input. If the input is already an array, the result is identical. If the input is a property bag, it returns an array of values. If the input isn’t coercible to an array, the result is an empty array.
## Example
diff --git a/apl/scalar-functions/conversion-functions/todynamic.mdx b/apl/scalar-functions/conversion-functions/todynamic.mdx
index ebda6b7e..7d9e6805 100644
--- a/apl/scalar-functions/conversion-functions/todynamic.mdx
+++ b/apl/scalar-functions/conversion-functions/todynamic.mdx
@@ -63,7 +63,7 @@ todynamic(value)
### Returns
-A dynamic value. If the input is not a valid JSON string, the function returns `null`.
+A dynamic value. If the input isn’t a valid JSON string, the function returns `null`.
## Example
diff --git a/apl/scalar-functions/datetime-functions.mdx b/apl/scalar-functions/datetime-functions.mdx
index 9941c63c..93ec6af3 100644
--- a/apl/scalar-functions/datetime-functions.mdx
+++ b/apl/scalar-functions/datetime-functions.mdx
@@ -35,7 +35,7 @@ The table summarizes the datetime functions available in APL.
| [unixtime_seconds_todatetime](/apl/scalar-functions/datetime-functions/unixtime-seconds-todatetime) | Converts a Unix timestamp expressed in whole seconds to an APL `datetime` value. |
| [week_of_year](/apl/scalar-functions/datetime-functions/week-of-year) | Returns the ISO 8601 week number from a datetime expression. |
-We support the ISO 8601 format, which is the standard format for representing dates and times in the Gregorian calendar. [Check them out here](/apl/data-types/scalar-data-types#supported-formats)
+Axiom supports the ISO 8601 format, which is the standard format for representing dates and times in the Gregorian calendar. For more information, see [Supported formats](/apl/data-types/scalar-data-types#supported-formats).
## ago
diff --git a/apl/scalar-functions/datetime-functions/unixtime-microseconds-todatetime.mdx b/apl/scalar-functions/datetime-functions/unixtime-microseconds-todatetime.mdx
index ad847953..761db0bf 100644
--- a/apl/scalar-functions/datetime-functions/unixtime-microseconds-todatetime.mdx
+++ b/apl/scalar-functions/datetime-functions/unixtime-microseconds-todatetime.mdx
@@ -3,7 +3,7 @@ title: unixtime_microseconds_todatetime
description: 'This page explains how to use the unixtime_microseconds_todatetime function in APL.'
---
-`unixtime_microseconds_todatetime` converts a Unix timestamp that is expressed in whole microseconds since 1970-01-01 00:00:00 UTC to an APL `datetime` value.
+`unixtime_microseconds_todatetime` converts a Unix timestamp that’s expressed in whole microseconds since 1970-01-01 00:00:00 UTC to an APL `datetime` value.
Use the function whenever you ingest data that stores time as epoch microseconds (for example, JSON logs from NGINX or metrics that follow the StatsD line protocol). Converting to `datetime` lets you bin, filter, and visualize events with the rest of your time-series data.
diff --git a/apl/scalar-functions/datetime-functions/unixtime-milliseconds-todatetime.mdx b/apl/scalar-functions/datetime-functions/unixtime-milliseconds-todatetime.mdx
index c3c982d3..6db60f23 100644
--- a/apl/scalar-functions/datetime-functions/unixtime-milliseconds-todatetime.mdx
+++ b/apl/scalar-functions/datetime-functions/unixtime-milliseconds-todatetime.mdx
@@ -3,7 +3,7 @@ title: unixtime_milliseconds_todatetime
description: 'This page explains how to use the unixtime_milliseconds_todatetime function in APL.'
---
-`unixtime_milliseconds_todatetime` converts a Unix timestamp that is expressed in whole milliseconds since 1970-01-01 00:00:00 UTC to an APL `datetime` value.
+`unixtime_milliseconds_todatetime` converts a Unix timestamp that’s expressed in whole milliseconds since 1970-01-01 00:00:00 UTC to an APL `datetime` value.
Use the function whenever you ingest data that stores time as epoch milliseconds (for example, JSON logs from NGINX or metrics that follow the StatsD line protocol). Converting to `datetime` lets you bin, filter, and visualize events with the rest of your time-series data.
diff --git a/apl/scalar-functions/datetime-functions/unixtime-nanoseconds-todatetime.mdx b/apl/scalar-functions/datetime-functions/unixtime-nanoseconds-todatetime.mdx
index e86543fb..421956c8 100644
--- a/apl/scalar-functions/datetime-functions/unixtime-nanoseconds-todatetime.mdx
+++ b/apl/scalar-functions/datetime-functions/unixtime-nanoseconds-todatetime.mdx
@@ -3,7 +3,7 @@ title: unixtime_nanoseconds_todatetime
description: 'This page explains how to use the unixtime_nanoseconds_todatetime function in APL.'
---
-`unixtime_nanoseconds_todatetime` converts a Unix timestamp that is expressed in whole nanoseconds since 1970-01-01 00:00:00 UTC to an APL `datetime` value.
+`unixtime_nanoseconds_todatetime` converts a Unix timestamp that’s expressed in whole nanoseconds since 1970-01-01 00:00:00 UTC to an APL `datetime` value.
Use the function whenever you ingest data that stores time as epoch nanoseconds (for example, JSON logs from NGINX or metrics that follow the StatsD line protocol). Converting to `datetime` lets you bin, filter, and visualize events with the rest of your time-series data.
diff --git a/apl/scalar-functions/datetime-functions/unixtime-seconds-todatetime.mdx b/apl/scalar-functions/datetime-functions/unixtime-seconds-todatetime.mdx
index 1980f64d..ecf574ec 100644
--- a/apl/scalar-functions/datetime-functions/unixtime-seconds-todatetime.mdx
+++ b/apl/scalar-functions/datetime-functions/unixtime-seconds-todatetime.mdx
@@ -3,7 +3,7 @@ title: unixtime_seconds_todatetime
description: 'This page explains how to use the unixtime_seconds_todatetime function in APL.'
---
-`unixtime_seconds_todatetime` converts a Unix timestamp that is expressed in whole seconds since 1970-01-01 00:00:00 UTC to an APL [datetime value](/apl/data-types/scalar-data-types).
+`unixtime_seconds_todatetime` converts a Unix timestamp that’s expressed in whole seconds since 1970-01-01 00:00:00 UTC to an APL [datetime value](/apl/data-types/scalar-data-types).
Use the function whenever you ingest data that stores time as epoch seconds (for example, JSON logs from NGINX or metrics that follow the StatsD line protocol). Converting to `datetime` lets you bin, filter, and visualize events with the rest of your time-series data.
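A minimal sketch with a made-up epoch value, assuming the `print` operator:

```kusto
print epoch_seconds = 1700000000
// Converts whole seconds since 1970-01-01 00:00:00 UTC to a datetime value.
| extend event_time = unixtime_seconds_todatetime(epoch_seconds)
```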
diff --git a/apl/scalar-functions/ip-functions/has-any-ipv4.mdx b/apl/scalar-functions/ip-functions/has-any-ipv4.mdx
index ab8642da..ba413208 100644
--- a/apl/scalar-functions/ip-functions/has-any-ipv4.mdx
+++ b/apl/scalar-functions/ip-functions/has-any-ipv4.mdx
@@ -28,7 +28,7 @@ In Splunk, you typically use the `cidrmatch` or similar functions for working wi
-SQL does not natively support CIDR matching or IP address comparison out of the box. In APL, the `has_any_ipv4` function is designed to simplify these checks with concise syntax.
+SQL doesn’t natively support CIDR matching or IP address comparison out of the box. In APL, the `has_any_ipv4` function is designed to simplify these checks with concise syntax.
```sql SQL example
diff --git a/apl/scalar-functions/ip-functions/has-ipv4-prefix.mdx b/apl/scalar-functions/ip-functions/has-ipv4-prefix.mdx
index a98b4aea..7000fab5 100644
--- a/apl/scalar-functions/ip-functions/has-ipv4-prefix.mdx
+++ b/apl/scalar-functions/ip-functions/has-ipv4-prefix.mdx
@@ -3,7 +3,7 @@ title: has_ipv4_prefix
description: 'This page explains how to use the has_ipv4_prefix function in APL.'
---
-The `has_ipv4_prefix` function checks if an IPv4 address starts with a specified prefix. Use this function to filter or match IPv4 addresses efficiently based on their prefixes. It is particularly useful when analyzing network traffic, identifying specific address ranges, or working with CIDR-based IP filtering in datasets.
+The `has_ipv4_prefix` function checks if an IPv4 address starts with a specified prefix. Use this function to filter or match IPv4 addresses efficiently based on their prefixes. It’s particularly useful when analyzing network traffic, identifying specific address ranges, or working with CIDR-based IP filtering in datasets.
## For users of other query languages
diff --git a/apl/scalar-functions/ip-functions/has-ipv4.mdx b/apl/scalar-functions/ip-functions/has-ipv4.mdx
index 08b5aeff..4e21bf42 100644
--- a/apl/scalar-functions/ip-functions/has-ipv4.mdx
+++ b/apl/scalar-functions/ip-functions/has-ipv4.mdx
@@ -11,7 +11,7 @@ To use `has_ipv4`, ensure that IP addresses in the text are properly delimited w
- **Valid:** `192.168.1.1` in `"Requests from: 192.168.1.1, 10.1.1.115."`
- **Invalid:** `192.168.1.1` in `"192.168.1.1ThisText"`
-The function returns `true` if the IP address is valid and present in the text; otherwise, it returns `false`.
+The function returns `true` if the IP address is valid and present in the text. Otherwise, it returns `false`.
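A minimal sketch based on the valid example above, assuming a two-argument form (source text, IP address):

```kusto
// Returns true: the IP address is properly delimited in the text.
print found = has_ipv4('Requests from: 192.168.1.1, 10.1.1.115.', '192.168.1.1')
```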
## For users of other query languages
diff --git a/apl/scalar-functions/ip-functions/ipv4-is-in-any-range.mdx b/apl/scalar-functions/ip-functions/ipv4-is-in-any-range.mdx
index 56a774ef..cdb67af8 100644
--- a/apl/scalar-functions/ip-functions/ipv4-is-in-any-range.mdx
+++ b/apl/scalar-functions/ip-functions/ipv4-is-in-any-range.mdx
@@ -30,7 +30,7 @@ In Splunk SPL, you use `cidrmatch` to check if an IP belongs to a range. In APL,
-ANSI SQL does not have a built-in function for checking IP ranges. Instead, you use custom functions or comparisons. APL’s `ipv4_is_in_any_range` simplifies this by handling multiple CIDR blocks and ranges in a single function.
+ANSI SQL doesn’t have a built-in function for checking IP ranges. Instead, you use custom functions or comparisons. APL’s `ipv4_is_in_any_range` simplifies this by handling multiple CIDR blocks and ranges in a single function.
```sql SQL example
diff --git a/apl/scalar-functions/ip-functions/ipv4-is-private.mdx b/apl/scalar-functions/ip-functions/ipv4-is-private.mdx
index 71a428dd..7bbddbcd 100644
--- a/apl/scalar-functions/ip-functions/ipv4-is-private.mdx
+++ b/apl/scalar-functions/ip-functions/ipv4-is-private.mdx
@@ -83,7 +83,7 @@ ipv4_is_private(ip: string)
### Returns
- `true`: The input IP address is private.
-- `false`: The input IP address is not private.
+- `false`: The input IP address isn’t private.
## Use case example
diff --git a/apl/scalar-functions/ip-functions/ipv4-netmask-suffix.mdx b/apl/scalar-functions/ip-functions/ipv4-netmask-suffix.mdx
index 8d62f89b..56cb99c7 100644
--- a/apl/scalar-functions/ip-functions/ipv4-netmask-suffix.mdx
+++ b/apl/scalar-functions/ip-functions/ipv4-netmask-suffix.mdx
@@ -66,7 +66,7 @@ ipv4_netmask_suffix(ipv4address)
- Returns an integer representing the netmask suffix. For example, `24` for `192.168.1.1/24`.
- Returns the value `32` when the input IPv4 address doesn’t contain the suffix.
-- Returns `null` if the input is not a valid IPv4 address in CIDR notation.
+- Returns `null` if the input isn’t a valid IPv4 address in CIDR notation.
## Use case example
diff --git a/apl/scalar-functions/ip-functions/ipv6-compare.mdx b/apl/scalar-functions/ip-functions/ipv6-compare.mdx
index b9b25e22..86875f63 100644
--- a/apl/scalar-functions/ip-functions/ipv6-compare.mdx
+++ b/apl/scalar-functions/ip-functions/ipv6-compare.mdx
@@ -31,7 +31,7 @@ print comparison = ipv6_compare('2001:db8::1', '2001:db8::2')
-ANSI SQL does not natively support IPv6 comparisons. Typically, users must store IPv6 addresses as strings or binary values and write custom logic to compare them.
+ANSI SQL doesn’t natively support IPv6 comparisons. Typically, users must store IPv6 addresses as strings or binary values and write custom logic to compare them.
```sql SQL example
diff --git a/apl/scalar-functions/ip-functions/ipv6-is-in-range.mdx b/apl/scalar-functions/ip-functions/ipv6-is-in-range.mdx
index 595c3a38..9a8a051b 100644
--- a/apl/scalar-functions/ip-functions/ipv6-is-in-range.mdx
+++ b/apl/scalar-functions/ip-functions/ipv6-is-in-range.mdx
@@ -28,7 +28,7 @@ In Splunk SPL, IP range checking for IPv6 addresses typically requires custom sc
-ANSI SQL does not have native functions for CIDR range checks on IPv6 addresses. You typically rely on user-defined functions (UDFs) or external tooling. In APL, `ipv6_is_in_range` provides this capability out of the box.
+ANSI SQL doesn’t have native functions for CIDR range checks on IPv6 addresses. You typically rely on user-defined functions (UDFs) or external tooling. In APL, `ipv6_is_in_range` provides this capability out of the box.
```sql SQL example
-- Using a hypothetical UDF
diff --git a/apl/scalar-functions/ip-functions/ipv6-is-match.mdx b/apl/scalar-functions/ip-functions/ipv6-is-match.mdx
index 200a4f16..0d2d452a 100644
--- a/apl/scalar-functions/ip-functions/ipv6-is-match.mdx
+++ b/apl/scalar-functions/ip-functions/ipv6-is-match.mdx
@@ -14,7 +14,7 @@ If you come from other query languages, this section explains how to adjust your
-Splunk SPL does not have a dedicated function for matching IPv6 addresses against CIDR blocks. You typically use regular expressions or custom lookups to perform similar checks. In contrast, APL provides a built-in function that directly evaluates IPv6 CIDR membership.
+Splunk SPL doesn’t have a dedicated function for matching IPv6 addresses against CIDR blocks. You typically use regular expressions or custom lookups to perform similar checks. In contrast, APL provides a built-in function that directly evaluates IPv6 CIDR membership.
```sql Splunk example
@@ -31,7 +31,7 @@ Splunk SPL does not have a dedicated function for matching IPv6 addresses agains
-ANSI SQL does not have a standard function to check if an IPv6 address belongs to a subnet. You often implement this logic with string manipulation or rely on database-specific functions. APL simplifies this with `ipv6_is_match`, which accepts a full IPv6 address and a subnet in CIDR notation.
+ANSI SQL doesn’t have a standard function to check if an IPv6 address belongs to a subnet. You often implement this logic with string manipulation or rely on database-specific functions. APL simplifies this with `ipv6_is_match`, which accepts a full IPv6 address and a subnet in CIDR notation.
```sql SQL example
diff --git a/apl/scalar-functions/ip-functions/parse-ipv4.mdx b/apl/scalar-functions/ip-functions/parse-ipv4.mdx
index 8ec46b17..b81e7085 100644
--- a/apl/scalar-functions/ip-functions/parse-ipv4.mdx
+++ b/apl/scalar-functions/ip-functions/parse-ipv4.mdx
@@ -3,7 +3,7 @@ title: parse_ipv4
description: 'This page explains how to use the parse_ipv4 function in APL.'
---
-The `parse_ipv4` function in APL converts an IPv4 address and represents it as a long number. You can use this function to convert an IPv4 address for advanced analysis, filtering, or comparisons. It is especially useful for tasks like analyzing network traffic logs, identifying trends in IP address usage, or performing security-related queries.
+The `parse_ipv4` function in APL converts an IPv4 address and represents it as a long number. You can use this function to convert an IPv4 address for advanced analysis, filtering, or comparisons. It’s especially useful for tasks like analyzing network traffic logs, identifying trends in IP address usage, or performing security-related queries.
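A minimal sketch, assuming the `print` operator; the address is a placeholder:

```kusto
// Represents the IPv4 address as a long number for comparisons and sorting.
print ip_as_long = parse_ipv4('192.168.1.1')
```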
## For users of other query languages
@@ -12,7 +12,7 @@ If you come from other query languages, this section explains how to adjust your
-Splunk does not provide a direct function for converting an IPv4 address into a long number. However, you can achieve similar functionality using custom SPL expressions.
+Splunk doesn’t provide a direct function for converting an IPv4 address into a long number. However, you can achieve similar functionality using custom SPL expressions.
```sql Splunk example
@@ -28,7 +28,7 @@ Splunk does not provide a direct function for converting an IPv4 address into a
-SQL does not have a built-in function equivalent to `parse_ipv4`, but you can use bitwise operations to achieve a similar result.
+SQL doesn’t have a built-in function equivalent to `parse_ipv4`, but you can use bitwise operations to achieve a similar result.
```sql SQL example
diff --git a/apl/scalar-functions/mathematical-functions.mdx b/apl/scalar-functions/mathematical-functions.mdx
index 226a7f79..e39adb52 100644
--- a/apl/scalar-functions/mathematical-functions.mdx
+++ b/apl/scalar-functions/mathematical-functions.mdx
@@ -22,7 +22,7 @@ The table summarizes the mathematical functions available in APL.
| [gamma](#gamma) | Computes gamma function. |
| [isinf](#isinf) | Returns whether input is an infinite (positive or negative) value. |
| [isint](#isint) | Returns whether input is an integer (positive or negative) value |
-| [isnan](#isnan) | Returns whether input is Not-a-Number (NaN) value. |
+| [isnan](#isnan) | Returns whether input is a Not a Number (NaN) value. |
| [log](#log) | Returns the natural logarithm function. |
| [log10](#log10) | Returns the common (base-10) logarithm function. |
| [log2](#log2) | Returns the base-2 logarithm function. |
diff --git a/apl/scalar-functions/mathematical-functions/max-of.mdx b/apl/scalar-functions/mathematical-functions/max-of.mdx
index 2dadec4b..e56c66f1 100644
--- a/apl/scalar-functions/mathematical-functions/max-of.mdx
+++ b/apl/scalar-functions/mathematical-functions/max-of.mdx
@@ -3,7 +3,7 @@ title: max_of
description: 'This page explains how to use the max_of function in APL.'
---
-Use the `max_of` function in APL (Axiom Processing Language) to return the maximum value from a list of scalar expressions. You can use it when you need to compute the maximum of a fixed set of values within each row, rather than across rows like with [aggregation functions](/apl/aggregation-function/statistical-functions). It is especially useful when the values you want to compare come from different columns or are dynamically calculated within the same row.
+Use the `max_of` function in APL (Axiom Processing Language) to return the maximum value from a list of scalar expressions. You can use it when you need to compute the maximum of a fixed set of values within each row, rather than across rows like with [aggregation functions](/apl/aggregation-function/statistical-functions). It’s especially useful when the values you want to compare come from different columns or are dynamically calculated within the same row.
Use `max_of` when you want to:
@@ -19,7 +19,7 @@ If you come from other query languages, this section explains how to adjust your
-Splunk does not provide a direct function equivalent to `max_of`. However, you can use the `eval` command with nested `if` statements or custom logic to emulate similar functionality on a per-event basis.
+Splunk doesn’t provide a direct function equivalent to `max_of`. However, you can use the `eval` command with nested `if` statements or custom logic to emulate similar functionality on a per-event basis.
```sql Splunk example
@@ -35,7 +35,7 @@ extend max_value = max_of(a, b, c)
-ANSI SQL does not offer a built-in function like `max_of` to compute the maximum across expressions in a single row. Instead, you typically use `GREATEST`, which serves a similar purpose.
+ANSI SQL doesn’t offer a built-in function like `max_of` to compute the maximum across expressions in a single row. Instead, you typically use `GREATEST`, which serves a similar purpose.
```sql SQL example
diff --git a/apl/scalar-functions/mathematical-functions/rand.mdx b/apl/scalar-functions/mathematical-functions/rand.mdx
index 84ba9dca..37829ee5 100644
--- a/apl/scalar-functions/mathematical-functions/rand.mdx
+++ b/apl/scalar-functions/mathematical-functions/rand.mdx
@@ -3,7 +3,7 @@ title: rand
description: 'This page explains how to use the rand function in APL.'
---
-Use the `rand` function in APL to generate pseudo-random numbers. This function is useful when you want to introduce randomness into your queries. For example, to sample a subset of data, generate test data, or simulate probabilistic scenarios.
+Use the `rand` function in APL to generate pseudo-random numbers. This function is useful when you want to introduce randomness into your queries. For example, to sample a subset of data, generate test data, or simulate probabilistic scenarios.
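A minimal sampling sketch against the `['sample-http-logs']` dataset used elsewhere in these docs, assuming `rand()` returns a value between 0 and 1:

```kusto
['sample-http-logs']
| where rand() < 0.1   // keeps roughly 10% of events
```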
## For users of other query languages
diff --git a/apl/scalar-functions/mathematical-functions/set-difference.mdx b/apl/scalar-functions/mathematical-functions/set-difference.mdx
index 3dbd3f7b..db3d5d75 100644
--- a/apl/scalar-functions/mathematical-functions/set-difference.mdx
+++ b/apl/scalar-functions/mathematical-functions/set-difference.mdx
@@ -3,7 +3,7 @@ title: set_difference
description: 'This page explains how to use the set_difference function in APL.'
---
-Use the `set_difference` function in APL to compute the distinct elements in one array that are not present in another. This function helps you filter out shared values between two arrays, producing a new array that includes only the unique values from the first input array.
+Use the `set_difference` function in APL to compute the distinct elements in one array that aren’t present in another. This function helps you filter out shared values between two arrays, producing a new array that includes only the unique values from the first input array.
Use `set_difference` when you need to identify new or missing elements, such as:
@@ -72,7 +72,7 @@ set_difference(Array1, Array2)
### Returns
-An array that includes all values from `Array1` that are not present in `Array2`. The result does not include duplicates.
+An array that includes all values from `Array1` that aren’t present in `Array2`. The result doesn’t include duplicates.
## Example
@@ -95,6 +95,6 @@ Use `set_difference` to return the difference between two arrays.
## List of related functions
-- [set_difference](apl/scalar-functions/mathematical-functions/set-difference): Returns elements in the first array that are not in the second. Use it to find exclusions.
+- [set_difference](/apl/scalar-functions/mathematical-functions/set-difference): Returns elements in the first array that aren’t in the second. Use it to find exclusions.
- [set_has_element](/apl/scalar-functions/mathematical-functions/set-has-element): Tests whether a set contains a specific value. Prefer it when you only need a Boolean result.
- [set_union](/apl/scalar-functions/mathematical-functions/set-union): Returns the union of two or more sets. Use it when you need any element that appears in at least one set instead of every set.
\ No newline at end of file
diff --git a/apl/scalar-functions/mathematical-functions/set-has-element.mdx b/apl/scalar-functions/mathematical-functions/set-has-element.mdx
index a5c6b888..fd49e84d 100644
--- a/apl/scalar-functions/mathematical-functions/set-has-element.mdx
+++ b/apl/scalar-functions/mathematical-functions/set-has-element.mdx
@@ -3,7 +3,7 @@ title: set_has_element
description: 'This page explains how to use the set_has_element function in APL.'
---
-`set_has_element` returns true when a dynamic array contains a specific element and false when it does not. Use it to perform fast membership checks on values that you have already aggregated into a set with functions such as `make_set`.
+`set_has_element` returns true when a dynamic array contains a specific element and false when it doesn’t. Use it to perform fast membership checks on values that you have already aggregated into a set with functions such as `make_set`.
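A minimal sketch, assuming the `print` operator and a `dynamic` literal:

```kusto
print methods = dynamic(['GET', 'POST', 'PUT'])
| extend has_post = set_has_element(methods, 'POST')   // true
```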
## For users of other query languages
@@ -72,7 +72,7 @@ set_has_element(set, value)
### Returns
-A `bool` that is true when `value` exists in `set` and false otherwise.
+A `bool` that’s true when `value` exists in `set` and false otherwise.
## Example
diff --git a/apl/scalar-functions/mathematical-functions/set-intersect.mdx b/apl/scalar-functions/mathematical-functions/set-intersect.mdx
index 7d4b6391..130ef58d 100644
--- a/apl/scalar-functions/mathematical-functions/set-intersect.mdx
+++ b/apl/scalar-functions/mathematical-functions/set-intersect.mdx
@@ -14,7 +14,7 @@ If you come from other query languages, this section explains how to adjust your
-Splunk SPL does not have a direct equivalent to `set_intersect`, but you can achieve similar functionality using `mvfilter` with conditions based on a lookup or manually defined set. APL simplifies this process by offering a built-in array intersection function.
+Splunk SPL doesn’t have a direct equivalent to `set_intersect`, but you can achieve similar functionality using `mvfilter` with conditions based on a lookup or manually defined set. APL simplifies this process by offering a built-in array intersection function.
```sql Splunk example
@@ -33,7 +33,7 @@ print A=dynamic(['apple', 'banana', 'cherry']), B=dynamic(['banana', 'cherry', '
-ANSI SQL does not natively support array data types or set operations over arrays. To perform an intersection, you usually need to normalize the arrays using `UNNEST` or `JOIN`, which can be verbose. In APL, `set_intersect` performs this in a single step.
+ANSI SQL doesn’t natively support array data types or set operations over arrays. To perform an intersection, you usually need to normalize the arrays using `UNNEST` or `JOIN`, which can be verbose. In APL, `set_intersect` performs this in a single step.
```sql SQL example
diff --git a/apl/scalar-functions/mathematical-functions/set-union.mdx b/apl/scalar-functions/mathematical-functions/set-union.mdx
index a6bb0169..7fa620c0 100644
--- a/apl/scalar-functions/mathematical-functions/set-union.mdx
+++ b/apl/scalar-functions/mathematical-functions/set-union.mdx
@@ -3,9 +3,9 @@ title: set_union
description: 'This page explains how to use the set_union function in APL.'
---
-Use the `set_union` function in APL to combine two dynamic arrays into one, returning a new array that includes all distinct elements from both. The order of elements in the result is not guaranteed and may differ from the original input arrays.
+Use the `set_union` function in APL to combine two dynamic arrays into one, returning a new array that includes all distinct elements from both. The order of elements in the result isn’t guaranteed and may differ from the original input arrays.
-You can use `set_union` when you need to merge two arrays and eliminate duplicates. It is especially useful in scenarios where you need to perform set-based logic, such as comparing user activity across multiple sources, correlating IPs from different datasets, or combining traces or log attributes from different events.
+You can use `set_union` when you need to merge two arrays and eliminate duplicates. It’s especially useful in scenarios where you need to perform set-based logic, such as comparing user activity across multiple sources, correlating IPs from different datasets, or combining traces or log attributes from different events.
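A minimal sketch with placeholder arrays, assuming the `print` operator:

```kusto
print A = dynamic([1, 2, 3]), B = dynamic([3, 4, 5])
// Returns the distinct elements of both arrays; the order isn’t guaranteed.
| extend union_set = set_union(A, B)
```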
## For users of other query languages
diff --git a/apl/scalar-functions/metadata-functions/ingestion_time.mdx b/apl/scalar-functions/metadata-functions/ingestion_time.mdx
index 3a5cb8c4..5aec5058 100644
--- a/apl/scalar-functions/metadata-functions/ingestion_time.mdx
+++ b/apl/scalar-functions/metadata-functions/ingestion_time.mdx
@@ -37,7 +37,7 @@ Splunk provides the `_indextime` field, which represents when an event was index
-ANSI SQL does not have a standard equivalent to `ingestion_time`, since SQL databases typically do not distinguish ingestion time from event time. APL provides `ingestion_time` for observability-specific workflows where the arrival time of data is important.
+ANSI SQL doesn’t have a standard equivalent to `ingestion_time`, since SQL databases typically don’t distinguish ingestion time from event time. APL provides `ingestion_time` for observability-specific workflows where the arrival time of data is important.
```sql SQL example
@@ -64,7 +64,7 @@ ingestion_time()
### Parameters
-This function does not take any parameters.
+This function doesn’t take any parameters.
### Returns
diff --git a/apl/scalar-functions/pair-functions.mdx b/apl/scalar-functions/pair-functions.mdx
index cc123da2..2d26d8ea 100644
--- a/apl/scalar-functions/pair-functions.mdx
+++ b/apl/scalar-functions/pair-functions.mdx
@@ -14,7 +14,7 @@ keywords: ['axiom documentation', 'documentation', 'axiom', 'pair', 'parse_pair'
Each argument has a **required** section which is denoted with `required` or `optional`
-- If it’s denoted by `required` it means the argument must be passed into that function before it'll work.
+- If it’s denoted by `required`, it means the argument must be passed into that function before it’ll work.
- if it’s denoted by `optional` it means the function can work without passing the argument value.
## pair()
diff --git a/apl/scalar-functions/sql-functions.mdx b/apl/scalar-functions/sql-functions.mdx
index b0635c86..00e2365f 100644
--- a/apl/scalar-functions/sql-functions.mdx
+++ b/apl/scalar-functions/sql-functions.mdx
@@ -18,7 +18,7 @@ Analyzes an SQL statement and constructs a data model, enabling insights into th
### Limitations
-- It is mainly used for simple SQL queries. SQL statements like stored procedures, Windows functions, common table expressions (CTEs), recursive queries, advanced statistical functions, and special joins are not supported.
+- It’s mainly used for simple SQL queries. SQL statements like stored procedures, window functions, common table expressions (CTEs), recursive queries, advanced statistical functions, and special joins aren’t supported.
### Arguments
diff --git a/apl/scalar-functions/string-functions.mdx b/apl/scalar-functions/string-functions.mdx
index 785568c0..00f00da3 100644
--- a/apl/scalar-functions/string-functions.mdx
+++ b/apl/scalar-functions/string-functions.mdx
@@ -24,12 +24,12 @@ The table summarizes the string functions available in APL.
| [isascii](/apl/scalar-functions/string-functions/isascii) | Checks whether all characters in an input string are ASCII characters. |
| [isempty](#isempty) | Returns true if the argument is an empty string or is null. |
| [isnotempty](#isnotempty) | Returns true if the argument isn’t an empty string or a null. |
-| [isnotnull](#isnotnull) | Returns true if the argument is not null. |
+| [isnotnull](#isnotnull) | Returns true if the argument isn’t null. |
| [isnull](#isnull) | Evaluates its sole argument and returns a bool value indicating if the argument evaluates to a null value. |
| [parse_bytes](#parse-bytes) | Parses a string including byte size units and returns the number of bytes |
| [parse_json](#parse-json) | Interprets a string as a JSON value and returns the value as dynamic. |
| [parse_url](#parse-url) | Parses an absolute URL string and returns a dynamic object contains all parts of the URL. |
-| [parse_urlquery](#parse-urlquery) | Parses a url query string and returns a dynamic object contains the Query parameters. |
+| [parse_urlquery](#parse-urlquery) | Parses a URL query string and returns a dynamic object that contains the query parameters. |
| [quote](/apl/scalar-functions/string-functions/quote) | Returns a string representing the input enclosed in double quotes, with internal quotes and escape sequences handled appropriately. |
| [replace](#replace) | Replace all regex matches with another string. |
| [replace_regex](#replace-regex) | Replaces all regex matches with another string. |
@@ -154,7 +154,7 @@ Counts occurrences of a substring in a string. regex matches don’t.
### Returns
-The number of times that the search string can be matched in the dataset. Regex matches do not.
+The number of times that the search string can be matched in the dataset. Regex matches don’t overlap.
### Examples
@@ -319,7 +319,7 @@ format_bytes(8000000, 2, "MB", 10) == "8.00 MB"
## format_url
-Formats an input string into a valid URL. This function will return a string that is a properly formatted URL.
+Formats an input string into a valid URL. This function will return a string that’s a properly formatted URL.
### Arguments
@@ -415,7 +415,7 @@ isempty([value])
## isnotempty
-Returns `true` if the argument isn’t an empty string, and it isn’t null.
+Returns `true` if the argument isn’t an empty string, and it isn’t null.
### Examples
@@ -438,7 +438,7 @@ notempty([value]) -- alias of isnotempty
## isnotnull
-Returns `true` if the argument is not null.
+Returns `true` if the argument isn’t null.
### Examples
@@ -535,11 +535,11 @@ Interprets a string as a JSON value and returns the value as dynamic.
### Returns
-An object of type json that is determined by the value of json:
+An object of type json that’s determined by the value of json:
- If json is of type string, and is a properly formatted JSON string, then the string is parsed, and the value produced is returned.
-- If json is of type string, but it isn’t a properly formatted JSON string, then the returned value is an object of type dynamic that holds the original string value.
+- If json is of type string, but it isn’t a properly formatted JSON string, then the returned value is an object of type dynamic that holds the original string value.
### Examples
@@ -664,7 +664,7 @@ Replace all regex matches with another string.
### Returns
-- source after replacing all matches of regex with evaluations of rewrite. Matches do not overlap.
+- source after replacing all matches of regex with evaluations of rewrite. Matches don’t overlap.
### Examples
@@ -691,7 +691,7 @@ Replaces all regex matches with another string.
### Returns
-source after replacing all matches of regex with evaluations of rewrite. Matches do not overlap.
+source after replacing all matches of regex with evaluations of rewrite. Matches don’t overlap.
### Examples
@@ -846,7 +846,7 @@ project split_str = split("axiom_observability_monitoring", "_")
Concatenates between 1 and 64 arguments.
-If the arguments aren’t of string type, they'll be forcibly converted to string.
+If the arguments aren’t of string type, they’ll be forcibly converted to string.
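A minimal sketch, assuming the `print` operator; the non-string argument is converted to a string:

```kusto
print joined = strcat('status-', 200)   // 'status-200'
```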
### Arguments
@@ -890,7 +890,7 @@ strcat(argument1, argument2[, argumentN])
Concatenates between 2 and 64 arguments, with delimiter, provided as first argument.
-- If arguments aren’t of string type, they'll be forcibly converted to string.
+- If arguments aren’t of string type, they’ll be forcibly converted to string.
### Arguments
@@ -946,8 +946,8 @@ The function starts comparing the first character of each string. If they are eq
Returns an integral value indicating the relationship between the strings:
- When the result is 0: The contents of both strings are equal.
-- When the result is -1: the first character that does not match has a lower value in string1 than in string2.
-- When the result is 1: the first character that does not match has a higher value in string1 than in string2.
+- When the result is -1: the first character that doesn’t match has a lower value in string1 than in string2.
+- When the result is 1: the first character that doesn’t match has a higher value in string1 than in string2.
### Examples
@@ -1013,7 +1013,7 @@ project str_len = strlen("axiom")
Repeats given string provided amount of times.
-- In case if first or third argument is not of a string type, it will be forcibly converted to string.
+- If the first or third argument isn’t of a string type, it’ll be forcibly converted to string.
### Arguments
diff --git a/apl/scalar-functions/string-functions/indexof_regex.mdx b/apl/scalar-functions/string-functions/indexof_regex.mdx
index f13d2276..1b06cf4a 100644
--- a/apl/scalar-functions/string-functions/indexof_regex.mdx
+++ b/apl/scalar-functions/string-functions/indexof_regex.mdx
@@ -18,7 +18,7 @@ If you come from other query languages, this section explains how to adjust your
-Use `match()` in Splunk SPL to perform regular expression matching. However, `match()` returns a Boolean, not the match position. APL’s `indexof_regex` is similar to combining `match()` with additional logic to extract position, which is not natively supported in SPL.
+Use `match()` in Splunk SPL to perform regular expression matching. However, `match()` returns a Boolean, not the match position. APL’s `indexof_regex` is similar to combining `match()` with additional logic to extract position, which isn’t natively supported in SPL.
```sql Splunk example
@@ -35,7 +35,7 @@ Use `match()` in Splunk SPL to perform regular expression matching. However, `ma
-ANSI SQL does not have a built-in function to return the index of a regex match. You typically use `REGEXP_LIKE` for Boolean evaluation. `indexof_regex` provides a more direct and powerful way to find the exact match position in APL.
+ANSI SQL doesn’t have a built-in function to return the index of a regex match. You typically use `REGEXP_LIKE` for Boolean evaluation. `indexof_regex` provides a more direct and powerful way to find the exact match position in APL.
```sql SQL example
@@ -72,7 +72,7 @@ indexof_regex(string, match [, start [, occurrence [, length]]])
### Returns
-The function returns the position (starting at zero) where the pattern first matches within the string. If the pattern is not found, the result is `-1`.
+The function returns the position (starting at zero) where the pattern first matches within the string. If the pattern isn’t found, the result is `-1`.
The function returns `null` in the following cases:
diff --git a/apl/scalar-functions/string-functions/isascii.mdx b/apl/scalar-functions/string-functions/isascii.mdx
index 189a149e..a221f0e1 100644
--- a/apl/scalar-functions/string-functions/isascii.mdx
+++ b/apl/scalar-functions/string-functions/isascii.mdx
@@ -3,7 +3,7 @@ title: isascii
description: 'This page explains how to use the isascii function in APL.'
---
-Use the `isascii` function to check whether a string contains only ASCII characters. It returns `true` if every character in the input string belongs to the ASCII character set (i.e., character codes 0–127) and `false` otherwise.
+Use the `isascii` function to check whether a string contains only ASCII characters. It returns `true` if every character in the input string belongs to the ASCII character set (that is, character codes 0–127) and `false` otherwise.
The function is useful in scenarios where you want to detect non-ASCII text in logs, validate inputs for encoding compliance, or identify potential anomalies introduced by copy-pasted foreign characters or malformed input in user-submitted data.
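As a minimal sketch of such a filter, assuming a hypothetical dataset `['sample-http-logs']` with a string field `user_agent`:

```kusto
['sample-http-logs']
// Keep only rows whose user agent contains at least one non-ASCII character
| where isascii(user_agent) == false
| project _time, user_agent
```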
diff --git a/apl/scalar-functions/string-functions/regex_quote.mdx b/apl/scalar-functions/string-functions/regex_quote.mdx
index e6c25e2a..73e89af9 100644
--- a/apl/scalar-functions/string-functions/regex_quote.mdx
+++ b/apl/scalar-functions/string-functions/regex_quote.mdx
@@ -3,7 +3,7 @@ title: regex_quote
description: 'This page explains how to use the regex_quote function in APL.'
---
-Use the `regex_quote` function in APL when you need to safely insert arbitrary string values into regular expression patterns. This function escapes all special characters in the input string so that it is interpreted as a literal sequence, rather than as part of a regular expression syntax.
+Use the `regex_quote` function in APL when you need to safely insert arbitrary string values into regular expression patterns. This function escapes all special characters in the input string so that it’s interpreted as a literal sequence, rather than as part of a regular expression syntax.
`regex_quote` is especially useful when your APL query constructs regular expressions dynamically using user input or field values. Without escaping, strings like `.*` or `[a-z]` would behave like regex wildcards or character classes, potentially leading to incorrect results or vulnerabilities. With `regex_quote`, you can ensure the string is treated exactly as-is.
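A minimal sketch that demonstrates only the escaping itself, assuming a hypothetical dataset `['sample-http-logs']` with a string field `uri`:

```kusto
['sample-http-logs']
// literal_pattern treats regex metacharacters in uri (., ?, +, and so on) as plain text
| extend literal_pattern = regex_quote(uri)
| project uri, literal_pattern
```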
@@ -14,7 +14,7 @@ If you come from other query languages, this section explains how to adjust your
-In Splunk, the `re.escape()` function is not available natively in SPL, so you often handle escaping in external scripts or manually. In APL, `regex_quote` provides built-in support for quoting regular expression metacharacters.
+In Splunk, the `re.escape()` function isn’t available natively in SPL, so you often handle escaping in external scripts or manually. In APL, `regex_quote` provides built-in support for quoting regular expression metacharacters.
```sql Splunk example
diff --git a/apl/scalar-functions/type-functions/iscc.mdx b/apl/scalar-functions/type-functions/iscc.mdx
index 8ada48b2..21b93363 100644
--- a/apl/scalar-functions/type-functions/iscc.mdx
+++ b/apl/scalar-functions/type-functions/iscc.mdx
@@ -5,7 +5,7 @@ description: 'This page explains how to use the iscc function in APL to check if
Use the `iscc` function to determine whether a given string is a valid credit card number. This function checks the string against known credit card number patterns and applies a checksum verification (typically the Luhn algorithm) to validate the structure and integrity of the input.
-You can use `iscc` when analyzing logs that may contain sensitive data to detect accidental leakage of credit card information. It is also useful when filtering or sanitizing input data, monitoring suspicious behavior, or validating form submissions in telemetry data.
+You can use `iscc` when analyzing logs that may contain sensitive data to detect accidental leakage of credit card information. It’s also useful when filtering or sanitizing input data, monitoring suspicious behavior, or validating form submissions in telemetry data.
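As a minimal sketch of the leakage-detection case, assuming a hypothetical dataset `['sample-http-logs']` with a string field `payload`:

```kusto
['sample-http-logs']
// Surface rows whose payload value passes the credit card checksum validation
| where iscc(payload)
| project _time, payload
```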
## For users of other query languages
@@ -14,7 +14,7 @@ If you come from other query languages, this section explains how to adjust your
-Splunk SPL does not provide a built-in function for validating credit card numbers. To perform similar validation, you typically rely on regular expressions and manual checksum implementations using `eval` or custom search commands.
+Splunk SPL doesn’t provide a built-in function for validating credit card numbers. To perform similar validation, you typically rely on regular expressions and manual checksum implementations using `eval` or custom search commands.
```sql Splunk example
diff --git a/apl/scalar-functions/type-functions/isimei.mdx b/apl/scalar-functions/type-functions/isimei.mdx
index e146469b..8894b741 100644
--- a/apl/scalar-functions/type-functions/isimei.mdx
+++ b/apl/scalar-functions/type-functions/isimei.mdx
@@ -25,11 +25,10 @@ In APL, you can use `isimei` directly to check if a string is a valid IMEI numbe
```sql Splunk example
-... | eval is_imei=if(match(imei_field, "^[0-9]{15}$"), "true", "false") | where is_imei="true"
+| eval is_imei=if(match(imei_field, "^[0-9]{15}$"), "true", "false") | where is_imei="true"
````
```kusto APL equivalent
-...
| where isimei(imei_field)
```
diff --git a/apl/scalar-functions/type-functions/ismap.mdx b/apl/scalar-functions/type-functions/ismap.mdx
index 29b28ac6..31825e23 100644
--- a/apl/scalar-functions/type-functions/ismap.mdx
+++ b/apl/scalar-functions/type-functions/ismap.mdx
@@ -35,7 +35,7 @@ In Splunk SPL, you typically work with field types implicitly and rarely check i
-ANSI SQL does not natively support map types. If you use a platform that supports JSON or semi-structured data (such as PostgreSQL with `jsonb`, BigQuery with `STRUCT`, or Snowflake), you can simulate map checks using type inspection or schema introspection.
+ANSI SQL doesn’t natively support map types. If you use a platform that supports JSON or semi-structured data (such as PostgreSQL with `jsonb`, BigQuery with `STRUCT`, or Snowflake), you can simulate map checks using type inspection or schema introspection.
```sql SQL example
diff --git a/apl/scalar-functions/type-functions/isreal.mdx b/apl/scalar-functions/type-functions/isreal.mdx
index f046ec1c..e565b2bb 100644
--- a/apl/scalar-functions/type-functions/isreal.mdx
+++ b/apl/scalar-functions/type-functions/isreal.mdx
@@ -30,7 +30,7 @@ Splunk uses the `isnum` function to check whether a string represents a numeric
-ANSI SQL does not have a direct equivalent to `isreal`. You typically check for numeric values using `IS NOT NULL` and avoid known invalid markers manually. APL’s `isreal` abstracts this by directly checking if a value is a real number.
+ANSI SQL doesn’t have a direct equivalent to `isreal`. You typically check for numeric values using `IS NOT NULL` and avoid known invalid markers manually. APL’s `isreal` abstracts this by directly checking if a value is a real number.
```sql SQL example
diff --git a/apl/scalar-functions/type-functions/isstring.mdx b/apl/scalar-functions/type-functions/isstring.mdx
index 248db8b5..65ac6203 100644
--- a/apl/scalar-functions/type-functions/isstring.mdx
+++ b/apl/scalar-functions/type-functions/isstring.mdx
@@ -3,7 +3,7 @@ title: isstring
description: 'This page explains how to use the isstring function in APL.'
---
-Use the `isstring` function to determine whether a value is of type string. This function is especially helpful when working with heterogeneous datasets where field types are not guaranteed, or when ingesting data from sources with loosely structured or mixed schemas.
+Use the `isstring` function to determine whether a value is of type string. This function is especially helpful when working with heterogeneous datasets where field types aren’t guaranteed, or when ingesting data from sources with loosely structured or mixed schemas.
You can use `isstring` to:
- Filter rows based on whether a field is a string.
@@ -34,7 +34,7 @@ In Splunk SPL, type checking is typically implicit and not exposed through a ded
-ANSI SQL does not include a built-in `IS STRING` function. Instead, type checks usually rely on schema constraints, manual casting, or vendor-specific solutions. In contrast, APL offers `isstring` as a first-class function that returns a boolean indicating whether a value is of type string.
+ANSI SQL doesn’t include a built-in `IS STRING` function. Instead, type checks usually rely on schema constraints, manual casting, or vendor-specific solutions. In contrast, APL offers `isstring` as a first-class function that returns a boolean indicating whether a value is of type string.
```sql SQL example
@@ -72,7 +72,7 @@ isstring(value)
### Returns
-A `bool` value that is `true` if the input value is of type string, `false` otherwise.
+A `bool` value that’s `true` if the input value is of type string, `false` otherwise.
## Use case example
diff --git a/apl/scalar-functions/type-functions/isutf8.mdx b/apl/scalar-functions/type-functions/isutf8.mdx
index 41950341..08aa518f 100644
--- a/apl/scalar-functions/type-functions/isutf8.mdx
+++ b/apl/scalar-functions/type-functions/isutf8.mdx
@@ -3,7 +3,7 @@ title: isutf8
description: 'This page explains how to use the isutf8 function in APL.'
---
-Use the `isutf8` function to check whether a string is a valid UTF-8 encoded sequence. The function returns a boolean indicating whether the input conforms to UTF-8 encoding rules.
+Use the `isutf8` function to check whether a string is a valid UTF-8 encoded sequence. The function returns a boolean indicating whether the input conforms to UTF-8 encoding rules.
`isutf8` is useful when working with data from external sources such as logs, telemetry events, or data pipelines, where encoding issues can cause downstream processing to fail or produce incorrect results. By filtering out or isolating invalid UTF-8 strings, you can ensure better data quality and avoid unexpected behavior during parsing or transformation.
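As a minimal sketch of isolating invalid input, assuming a hypothetical dataset `['sample-http-logs']` with a string field `raw_message`:

```kusto
['sample-http-logs']
// Keep only rows whose raw_message isn’t valid UTF-8
| where isutf8(raw_message) == false
| project _time, raw_message
```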
@@ -14,7 +14,7 @@ If you come from other query languages, this section explains how to adjust your
-Splunk does not provide a built-in function to directly check if a string is valid UTF-8. Users typically rely on workarounds using field transformations or regex, which can be error-prone or incomplete. APL provides `isutf8` as a simple and reliable alternative.
+Splunk doesn’t provide a built-in function to directly check if a string is valid UTF-8. Users typically rely on workarounds using field transformations or regex, which can be error-prone or incomplete. APL provides `isutf8` as a simple and reliable alternative.
```sql Splunk example
diff --git a/apl/scalar-operators/logical-operators.mdx b/apl/scalar-operators/logical-operators.mdx
index a9997f3b..2f3ce6b7 100644
--- a/apl/scalar-operators/logical-operators.mdx
+++ b/apl/scalar-operators/logical-operators.mdx
@@ -14,6 +14,6 @@ The following logical operators are supported between two values of the `bool` t
| **Operator name** | **Syntax** | **meaning** | |
| ----------------- | ---------- | ----------------------------------------------------------------------------------------------------------------------- | --- |
| Equality | **==** | Returns `true` if both operands are non-null and equal to each other. Otherwise, `false`. |
-| Inequality | **!=** | Returns `true` if either one (or both) of the operands are null, or they are not equal to each other. Otherwise, `false`. |
+| Inequality | **!=** | Returns `true` if either one (or both) of the operands are null, or they aren’t equal to each other. Otherwise, `false`. |
| Logical and | **and** | Returns `true` if both operands are `true`. |
| Logical or | **or** | Returns `true `if one of the operands is `true`, regardless of the other operand. |
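As a minimal sketch combining these operators, assuming a hypothetical dataset `['sample-http-logs']` with string fields `method` and `status`:

```kusto
['sample-http-logs']
// Write operations that didn’t return a 200 status
| where (method == "POST" or method == "PUT") and status != "200"
```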
diff --git a/apl/tabular-operators/count-operator.mdx b/apl/tabular-operators/count-operator.mdx
index e5ae1327..492f5675 100644
--- a/apl/tabular-operators/count-operator.mdx
+++ b/apl/tabular-operators/count-operator.mdx
@@ -55,7 +55,7 @@ SELECT COUNT(*) FROM web_logs;
### Parameters
-The `count` operator does not take any parameters. It simply returns the number of records in the dataset or query result.
+The `count` operator doesn’t take any parameters. It simply returns the number of records in the dataset or query result.
### Returns
@@ -139,4 +139,4 @@ This query returns the number of HTTP requests that resulted in an error (HTTP s
- [extend](/apl/tabular-operators/extend-operator): The `extend` operator adds calculated fields to a dataset. You can use `extend` alongside `count` if you want to add additional calculated data to your query results.
- [project](/apl/tabular-operators/project-operator): The `project` operator selects specific fields from a dataset. While `count` returns the total number of records, `project` can limit or change which fields you see.
- [where](/apl/tabular-operators/where-operator): The `where` operator filters rows based on a condition. Use `where` with `count` to only count records that meet certain criteria.
-- [take](/apl/tabular-operators/take-operator): The `take` operator returns a specified number of records. You can use `take` to limit results before applying `count` if you're interested in counting a sample of records.
\ No newline at end of file
+- [take](/apl/tabular-operators/take-operator): The `take` operator returns a specified number of records. You can use `take` to limit results before applying `count` if you’re interested in counting a sample of records.
\ No newline at end of file
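As a minimal sketch of combining `take` and `count` as described above, assuming a hypothetical dataset `['sample-http-logs']` whose `status` field is stored as a string:

```kusto
['sample-http-logs']
// Count errors within a sample of 1,000 records
| take 1000
| where status == "500"
| count
```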
diff --git a/apl/tabular-operators/extend-operator.mdx b/apl/tabular-operators/extend-operator.mdx
index 6672f326..a4671b58 100644
--- a/apl/tabular-operators/extend-operator.mdx
+++ b/apl/tabular-operators/extend-operator.mdx
@@ -62,7 +62,7 @@ SELECT id, req_duration_ms, req_duration_ms * 1000 AS newField FROM logs;
The operator returns a copy of the original dataset with the following changes:
- Field names noted by `extend` that already exist in the input are removed and appended as their new calculated values.
-- Field names noted by `extend` that do not exist in the input are appended as their new calculated values.
+- Field names noted by `extend` that don’t exist in the input are appended as their new calculated values.
## Use case examples
@@ -137,5 +137,5 @@ This query creates a new field `status_category` that labels each HTTP request a
## List of related operators
-- [project](/apl/tabular-operators/project-operator): Use `project` to select specific fields or rename them. Unlike `extend`, it does not add new fields.
+- [project](/apl/tabular-operators/project-operator): Use `project` to select specific fields or rename them. Unlike `extend`, it doesn’t add new fields.
- [summarize](/apl/tabular-operators/summarize-operator): Use `summarize` to aggregate data, which differs from `extend` that only adds new calculated fields without aggregation.
\ No newline at end of file
diff --git a/apl/tabular-operators/extend-valid-operator.mdx b/apl/tabular-operators/extend-valid-operator.mdx
index b50fa8dd..b19dc3c9 100644
--- a/apl/tabular-operators/extend-valid-operator.mdx
+++ b/apl/tabular-operators/extend-valid-operator.mdx
@@ -5,11 +5,11 @@ description: 'This page explains how to use the extend-valid operator in APL.'
The `extend-valid` operator in Axiom Processing Language (APL) allows you to extend a set of fields with new calculated values, where these calculations are based on conditions of validity for each row. It’s particularly useful when working with datasets that contain missing or invalid data, as it enables you to calculate and assign values only when certain conditions are met. This operator helps you keep your data clean by applying calculations to valid data points, and leaving invalid or missing values untouched.
-This is a shorthand operator to create a field while also doing basic checking on the validity of the field. In many cases, additional checks are required and it is recommended in those cases a combination of an [extend](/apl/tabular-operators/extend-operator) and a [where](/apl/tabular-operators/where-operator) operator are used. The basic checks that Axiom preform depend on the type of the expression:
+This is a shorthand operator that creates a field while also doing basic checks on the validity of the field. In many cases, additional checks are required. In those cases, it’s recommended to use a combination of the [extend](/apl/tabular-operators/extend-operator) and [where](/apl/tabular-operators/where-operator) operators. The basic checks that Axiom performs depend on the type of the expression:
-- **Dictionary:** Check if the dictionary is not null and has at least one entry.
-- **Array:** Check if the arrat is not null and has at least one value.
-- **String:** Check is the string is not empty and has at least one character.
+- **Dictionary:** Check if the dictionary isn’t null and has at least one entry.
+- **Array:** Check if the array isn’t null and has at least one value.
+- **String:** Check if the string isn’t empty and has at least one character.
- **Other types:** The same logic as `tobool` and a check for true.
You can use `extend-valid` to perform conditional transformations on large datasets, especially in scenarios where data quality varies or when dealing with complex log or telemetry data.
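As a minimal sketch of the recommended `extend` plus `where` combination for the string case, assuming a hypothetical dataset `['sample-http-logs']` with a string field `uri`:

```kusto
['sample-http-logs']
// Keep only rows where uri is a non-empty string, then derive a new field from it
| where strlen(uri) > 0
| extend uri_length = strlen(uri)
```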
diff --git a/apl/tabular-operators/externaldata-operator.mdx b/apl/tabular-operators/externaldata-operator.mdx
index a97735a8..98aef3df 100644
--- a/apl/tabular-operators/externaldata-operator.mdx
+++ b/apl/tabular-operators/externaldata-operator.mdx
@@ -3,7 +3,7 @@ title: externaldata
description: 'This page explains how to use the externaldata operator in APL.'
---
-The `externaldata` operator in APL allows you to retrieve data from external storage sources, such as Azure Blob Storage, AWS S3, or HTTP endpoints, and use it within queries. You can specify the schema of the external data and query it as if it were a native dataset. This operator is useful when you need to analyze data that is stored externally without importing it into Axiom.
+The `externaldata` operator in APL allows you to retrieve data from external storage sources, such as Azure Blob Storage, AWS S3, or HTTP endpoints, and use it within queries. You can specify the schema of the external data and query it as if it were a native dataset. This operator is useful when you need to analyze data that’s stored externally without importing it into Axiom.
The `externaldata` operator currently supports external data sources with a file size of maximum 5 MB.
@@ -18,7 +18,7 @@ If you come from other query languages, this section explains how to adjust your
-Splunk does not have a direct equivalent to `externaldata`, but you can use `inputlookup` or `| rest` commands to retrieve data from external sources.
+Splunk doesn’t have a direct equivalent to `externaldata`, but you can use `inputlookup` or `| rest` commands to retrieve data from external sources.
```sql Splunk example
diff --git a/apl/tabular-operators/getschema-operator.mdx b/apl/tabular-operators/getschema-operator.mdx
index c47ff226..b4c67574 100644
--- a/apl/tabular-operators/getschema-operator.mdx
+++ b/apl/tabular-operators/getschema-operator.mdx
@@ -54,7 +54,7 @@ SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME =
### Parameters
-The `getschema` operator does not take any parameters.
+The `getschema` operator doesn’t take any parameters.
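Because there are no parameters, usage is a single pipe. A minimal sketch, assuming a hypothetical dataset `['sample-http-logs']`:

```kusto
['sample-http-logs']
// Returns one row per field with its name and type
| getschema
```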
### Returns
diff --git a/apl/tabular-operators/join-operator.mdx b/apl/tabular-operators/join-operator.mdx
index 0fa0c315..788cf94a 100644
--- a/apl/tabular-operators/join-operator.mdx
+++ b/apl/tabular-operators/join-operator.mdx
@@ -26,8 +26,8 @@ The kinds of join and their typical use cases are the following:
- `leftouter`: Returns all rows from the left dataset. If a match exists in the right dataset, the matching rows are included; otherwise, columns from the right dataset are `null`. Retains all data from the left dataset, enriching it with matching data from the right dataset.
- `rightouter`: Returns all rows from the right dataset. If a match exists in the left dataset, the matching rows are included; otherwise, columns from the left dataset are `null`. Retains all data from the right dataset, enriching it with matching data from the left dataset.
- `fullouter`: Returns all rows from both datasets. Matching rows are combined, while non-matching rows from either dataset are padded with `null` values. Combines both datasets while retaining unmatched rows from both sides.
-- `leftanti`: Returns rows from the left dataset that have no matches in the right dataset. Identifies rows in the left dataset that do not have corresponding entries in the right dataset.
-- `rightanti`: Returns rows from the right dataset that have no matches in the left dataset. Identifies rows in the right dataset that do not have corresponding entries in the left dataset.
+- `leftanti`: Returns rows from the left dataset that have no matches in the right dataset. Identifies rows in the left dataset that don’t have corresponding entries in the right dataset.
+- `rightanti`: Returns rows from the right dataset that have no matches in the left dataset. Identifies rows in the right dataset that don’t have corresponding entries in the left dataset.
- `leftsemi`: Returns rows from the left dataset that have at least one match in the right dataset. Only columns from the left dataset are included. Filters rows in the left dataset based on existence in the right dataset.
- `rightsemi`: Returns rows from the right dataset that have at least one match in the left dataset. Only columns from the right dataset are included. Filters rows in the right dataset based on existence in the left dataset.
@@ -53,7 +53,7 @@ The kinds of join and their typical use cases are the following:
- Use `inner` for standard joins where you need all matches.
- Use `leftouter` or `rightouter` when you need to retain all rows from one dataset.
-- Use `leftanti` or `rightanti` to find rows that do not match.
+- Use `leftanti` or `rightanti` to find rows that don’t match.
- Use `fullouter` for complete combinations of both datasets.
- Use `leftsemi` or `rightsemi` to filter rows based on existence in another dataset.
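As a minimal sketch of the `leftanti` case, assuming two hypothetical datasets `['sample-http-logs']` and `['blocked-ips']` that share a string field `ip`:

```kusto
['sample-http-logs']
// Keep only requests whose ip has no match in the blocked-ips dataset
| join kind=leftanti (['blocked-ips']) on ip
```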
diff --git a/apl/tabular-operators/limit-operator.mdx b/apl/tabular-operators/limit-operator.mdx
index 1dd3b098..f3ce56d3 100644
--- a/apl/tabular-operators/limit-operator.mdx
+++ b/apl/tabular-operators/limit-operator.mdx
@@ -3,9 +3,9 @@ title: limit
description: 'This page explains how to use the limit operator in APL.'
---
-The `limit` operator in Axiom Processing Language (APL) allows you to restrict the number of rows returned from a query. It is particularly useful when you want to see only a subset of results from large datasets, such as when debugging or previewing query outputs. The `limit` operator can help optimize performance and focus analysis by reducing the amount of data processed.
+The `limit` operator in Axiom Processing Language (APL) allows you to restrict the number of rows returned from a query. It’s particularly useful when you want to see only a subset of results from large datasets, such as when debugging or previewing query outputs. The `limit` operator can help optimize performance and focus analysis by reducing the amount of data processed.
-Use the `limit` operator when you want to return only the top rows from a dataset, especially in cases where the full result set is not necessary.
+Use the `limit` operator when you want to return only the top rows from a dataset, especially in cases where the full result set isn’t necessary.
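A minimal sketch of such a preview, assuming a hypothetical dataset `['sample-http-logs']`:

```kusto
['sample-http-logs']
// Return only the first 10 rows of the result
| limit 10
```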
## For users of other query languages
diff --git a/apl/tabular-operators/lookup-operator.mdx b/apl/tabular-operators/lookup-operator.mdx
index 5b414e88..054e3e6c 100644
--- a/apl/tabular-operators/lookup-operator.mdx
+++ b/apl/tabular-operators/lookup-operator.mdx
@@ -34,7 +34,7 @@ index=web_logs | lookup port_lookup port AS client_port OUTPUT service_name
-In ANSI SQL, `lookup` is similar to an `INNER JOIN`, where records from both tables are matched based on a common key. Unlike SQL, APL does not support other types of joins in `lookup`.
+In ANSI SQL, `lookup` is similar to an `INNER JOIN`, where records from both tables are matched based on a common key. Unlike SQL, APL doesn’t support other types of joins in `lookup`.
```sql SQL example
diff --git a/apl/tabular-operators/parse-operator.mdx b/apl/tabular-operators/parse-operator.mdx
index 52fece5d..0c37af4d 100644
--- a/apl/tabular-operators/parse-operator.mdx
+++ b/apl/tabular-operators/parse-operator.mdx
@@ -5,7 +5,7 @@ description: 'This page explains how to use the parse operator function in APL.'
The `parse` operator in APL enables you to extract and structure information from unstructured or semi-structured text data, such as log files or strings. You can use the operator to specify a pattern for parsing the data and define the fields to extract. This is useful when analyzing logs, tracing information from text fields, or extracting key-value pairs from message formats.
-You can find the `parse` operator helpful when you need to process raw text fields and convert them into a structured format for further analysis. It’s particularly effective when working with data that doesn't conform to a fixed schema, such as log entries or custom messages.
+You can find the `parse` operator helpful when you need to process raw text fields and convert them into a structured format for further analysis. It’s particularly effective when working with data that doesn’t conform to a fixed schema, such as log entries or custom messages.
## Importance of the parse operator
@@ -273,7 +273,7 @@ usa-acmeinc-3iou24
### Parse in relaxed mode
-The parse operator supports a relaxed mode that allows for more flexible parsing. In relaxed mode, Axiom treats the parsing pattern as a regular string and matches results in a relaxed manner. If some parts of the pattern are missing or do not match the expected type, Axiom assigns null values.
+The parse operator supports a relaxed mode that allows for more flexible parsing. In relaxed mode, Axiom treats the parsing pattern as a regular string and matches results in a relaxed manner. If some parts of the pattern are missing or don’t match the expected type, Axiom assigns null values.
This example parses the `log` field into four separate parts (`method`, `url`, `status`, and `responseTime`) based on a structured format. The extracted parts are projected as separate fields.
@@ -364,7 +364,7 @@ Log: PodStatusUpdate (podName=nginx-pod, namespace=default, phase=Running, start
When using the parse operator, consider the following best practices:
- Use appropriate parsing modes: Choose the parsing mode (simple, relaxed, regex) based on the complexity and variability of the data being parsed. Simple mode is suitable for fixed patterns, while relaxed and regex modes offer more flexibility.
-- Handle missing or invalid data: Consider how to handle scenarios where the parsing pattern does not match or the extracted values do not conform to the expected types. Use the relaxed mode or provide default values to handle such cases.
+- Handle missing or invalid data: Consider how to handle scenarios where the parsing pattern doesn’t match or the extracted values don’t conform to the expected types. Use the relaxed mode or provide default values to handle such cases.
- Project only necessary fields: After parsing, use the project operator to select only the fields that are relevant for further querying. This helps reduce the amount of data transferred and improves query performance.
- Use parse in combination with other operators: Combine parse with other APL operators like where, extend, and summarize to filter, transform, and aggregate the parsed data effectively.
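As a minimal sketch of simple mode, assuming a hypothetical dataset `['sample-http-logs']` whose `log` field follows a fixed `method url status responseTime` layout separated by single spaces:

```kusto
['sample-http-logs']
// Split the log line into four fields at the literal space separators
| parse log with method " " url " " status " " responseTime
| project method, url, status, responseTime
```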
diff --git a/apl/tabular-operators/project-away-operator.mdx b/apl/tabular-operators/project-away-operator.mdx
index 82014e63..8547b8fc 100644
--- a/apl/tabular-operators/project-away-operator.mdx
+++ b/apl/tabular-operators/project-away-operator.mdx
@@ -5,7 +5,7 @@ description: 'This page explains how to use the project-away operator function i
The `project-away` operator in APL is used to exclude specific fields from the output of a query. This operator is useful when you want to return a subset of fields from a dataset, without needing to manually specify every field you want to keep. Instead, you specify the fields you want to remove, and the operator returns all remaining fields.
-You can use `project-away` in scenarios where your dataset contains irrelevant or sensitive fields that you do not want in the results. It simplifies queries, especially when dealing with wide datasets, by allowing you to filter out fields without having to explicitly list every field to include.
+You can use `project-away` in scenarios where your dataset contains irrelevant or sensitive fields that you don’t want in the results. It simplifies queries, especially when dealing with wide datasets, by allowing you to filter out fields without having to explicitly list every field to include.
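As a minimal sketch, assuming a hypothetical dataset `['sample-http-logs']` with fields `user_agent` and `cookie_header` that you want to drop:

```kusto
['sample-http-logs']
// Return every field except the two excluded ones
| project-away user_agent, cookie_header
```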
## For users of other query languages
diff --git a/apl/tabular-operators/project-keep-operator.mdx b/apl/tabular-operators/project-keep-operator.mdx
index 034b5701..42994b4a 100644
--- a/apl/tabular-operators/project-keep-operator.mdx
+++ b/apl/tabular-operators/project-keep-operator.mdx
@@ -3,7 +3,7 @@ title: project-keep
description: 'This page explains how to use the project-keep operator function in APL.'
---
-The `project-keep` operator in APL is a powerful tool for field selection. It allows you to explicitly keep specific fields from a dataset, discarding any others not listed in the operator's parameters. This is useful when you only need to work with a subset of fields in your query results and want to reduce clutter or improve performance by eliminating unnecessary fields.
+The `project-keep` operator in APL is a powerful tool for field selection. It allows you to explicitly keep specific fields from a dataset, discarding any others not listed in the operator’s parameters. This is useful when you only need to work with a subset of fields in your query results and want to reduce clutter or improve performance by eliminating unnecessary fields.
You can use `project-keep` when you need to focus on particular data points, such as in log analysis, security event monitoring, or extracting key fields from traces.
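As a minimal sketch, assuming a hypothetical dataset `['sample-http-logs']` with the listed fields:

```kusto
['sample-http-logs']
// Return only these four fields and discard the rest
| project-keep _time, method, uri, status
```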
diff --git a/apl/tabular-operators/sample-operator.mdx b/apl/tabular-operators/sample-operator.mdx
index 0e3068ca..c76f7ac9 100644
--- a/apl/tabular-operators/sample-operator.mdx
+++ b/apl/tabular-operators/sample-operator.mdx
@@ -3,7 +3,7 @@ title: sample
description: 'This page explains how to use the sample operator function in APL.'
---
-The `sample` operator in APL psuedo-randomly selects rows from the input dataset at a rate specified by a parameter. This operator is useful when you want to analyze a subset of data, reduce the dataset size for testing, or quickly explore patterns without processing the entire dataset. The sampling algorithm is not statistically rigorous but provides a way to explore and understand a dataset. For statistically rigorous analysis, use `summarize` instead.
+The `sample` operator in APL pseudo-randomly selects rows from the input dataset at a rate specified by a parameter. This operator is useful when you want to analyze a subset of data, reduce the dataset size for testing, or quickly explore patterns without processing the entire dataset. The sampling algorithm isn’t statistically rigorous but provides a way to explore and understand a dataset. For statistically rigorous analysis, use `summarize` instead.
You can find the `sample` operator useful when working with large datasets, where processing the entire dataset is resource-intensive or unnecessary. It’s ideal for scenarios like log analysis, performance monitoring, or sampling for data quality checks.
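A rough sketch, under the assumption that the rate parameter is a fraction of rows between 0 and 1 (check the syntax section of this page for the exact form), against a hypothetical dataset `['sample-http-logs']`:

```kusto
['sample-http-logs']
// Pseudo-randomly keep roughly 5% of rows
| sample 0.05
```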
diff --git a/apl/tabular-operators/top-operator.mdx b/apl/tabular-operators/top-operator.mdx
index 5a249cff..8cc45d5d 100644
--- a/apl/tabular-operators/top-operator.mdx
+++ b/apl/tabular-operators/top-operator.mdx
@@ -3,7 +3,7 @@ title: top
description: 'This page explains how to use the top operator function in APL.'
---
-The `top` operator in Axiom Processing Language (APL) allows you to retrieve the top N rows from a dataset based on specified criteria. It is particularly useful when you need to analyze the highest values in large datasets or want to quickly identify trends, such as the highest request durations in logs or top error occurrences in traces. You can apply it in scenarios like log analysis, security investigations, or tracing system performance.
+The `top` operator in Axiom Processing Language (APL) allows you to retrieve the top N rows from a dataset based on specified criteria. It’s particularly useful when you need to analyze the highest values in large datasets or want to quickly identify trends, such as the highest request durations in logs or top error occurrences in traces. You can apply it in scenarios like log analysis, security investigations, or tracing system performance.
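As a minimal sketch of the highest-request-durations case, assuming a hypothetical dataset `['sample-http-logs']` with a numeric field `req_duration_ms`:

```kusto
['sample-http-logs']
// The 5 slowest requests
| top 5 by req_duration_ms desc
```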
## For users of other query languages
@@ -144,4 +144,4 @@ This query shows the top 3 most common HTTP status codes in security logs.
- [order](/apl/tabular-operators/order-operator): Use when you need full control over row ordering without limiting the number of results.
- [summarize](/apl/tabular-operators/summarize-operator): Useful when aggregating data over fields and obtaining summarized results.
-- [take](/apl/tabular-operators/take-operator): Returns the first N rows without sorting. Use when ordering is not necessary.
\ No newline at end of file
+- [take](/apl/tabular-operators/take-operator): Returns the first N rows without sorting. Use when ordering isn’t necessary.
\ No newline at end of file
diff --git a/apps/aws-privatelink.mdx b/apps/aws-privatelink.mdx
index dccc8ea3..5b893cd2 100644
--- a/apps/aws-privatelink.mdx
+++ b/apps/aws-privatelink.mdx
@@ -18,7 +18,7 @@ Axiom exposes AWS PrivateLink endpoints in the `us-east-1` AWS region. To route
1. Start creating a VPC endpoint. For more information, see the [AWS documentation](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html#create-interface-endpoint-aws).
1. In **Service category**, select **Other endpoint services**.
1. In **Service name**, enter `com.amazonaws.vpce.us-east-1.vpce-svc-05a64735cdf68866b` to establish AWS PrivateLink for `api.axiom.co`.
-1. Click **Verify service**. If this does not succeed, reach out to [Axiom Support](https://axiom.co/contact).
+1. Click **Verify service**. If this doesn’t succeed, reach out to [Axiom Support](https://axiom.co/contact).
1. Select the VPC and subnets that you want to connect to the Axiom VPC service endpoint. Ensure that **Enable DNS name** is turned on and the security group accepts inbound traffic on TCP port `443`.
1. Finish the setup and wait for the VPC endpoint to become available. This usually takes 10 minutes.
diff --git a/apps/cloudflare-logpush.mdx b/apps/cloudflare-logpush.mdx
index 16eafee1..bc74fc2d 100644
--- a/apps/cloudflare-logpush.mdx
+++ b/apps/cloudflare-logpush.mdx
@@ -1,6 +1,6 @@
---
title: 'Connect Axiom with Cloudflare Logpush'
-description: "Axiom gives you an all-at-once view of key Cloudflare Logpush metrics and logs, out of the box, with our dynamic Cloudflare Logpush dashboard."
+description: "Axiom gives you an all-at-once view of key Cloudflare Logpush metrics and logs, out of the box, with the dynamic Cloudflare Logpush dashboard."
overview: 'Service for pushing logs to storage services in real-time'
sidebarTitle: Cloudflare Logpush
keywords: ['axiom documentation', 'documentation', 'axiom', 'cloudflare logpush', 'requests', 'edge network', 'route trigger', 'cloudflare']
diff --git a/apps/cloudflare-workers.mdx b/apps/cloudflare-workers.mdx
index 591b7b73..06d91bc3 100644
--- a/apps/cloudflare-workers.mdx
+++ b/apps/cloudflare-workers.mdx
@@ -8,7 +8,7 @@ keywords: ['axiom documentation', 'documentation', 'axiom', 'cloudflare workers'
import Prerequisites from "/snippets/standard-prerequisites.mdx"
import ReplaceDatasetToken from "/snippets/replace-dataset-token.mdx"
-The Axiom Cloudflare Workers app provides granular detail about the traffic coming in from your monitored sites. This includes edge requests, static resources, client auth, response duration, and status. Axiom gives you an all-at-once view of key Cloudflare Workers metrics and logs, out of the box, with our dynamic Cloudflare Workers dashboard.
+The Axiom Cloudflare Workers app provides granular detail about the traffic coming in from your monitored sites. This includes edge requests, static resources, client auth, response duration, and status. Axiom gives you an all-at-once view of key Cloudflare Workers metrics and logs, out of the box, with the dynamic Cloudflare Workers dashboard.
The data obtained with the Axiom dashboard gives you better insights into the state of your Cloudflare Workers so you can easily monitor bad requests, popular URLs, cumulative execution time, successful requests, and more. The app is part of Axiom’s unified logging and observability platform, so you can easily track Cloudflare Workers edge requests alongside a comprehensive view of other resources in your Cloudflare Worker environments.
@@ -16,7 +16,7 @@ The data obtained with the Axiom dashboard gives you better insights into the st
Axiom Cloudflare Workers is an open-source project and welcomes your contributions. For more information, see the [GitHub repository](https://github.com/axiomhq/axiom-cloudflare-workers).
-## What is Cloudflare Workers
+## What’s Cloudflare Workers
[Cloudflare Workers](https://developers.cloudflare.com/workers/) is a serverless computing platform developed by Cloudflare. The Workers platform allows developers to deploy and run JavaScript code directly at the network edge in more than 200 data centers worldwide. This serverless architecture enables high performance, low latency, and efficient scaling for web apps and APIs.
diff --git a/apps/grafana.mdx b/apps/grafana.mdx
index e77d8dfd..67813e3d 100644
--- a/apps/grafana.mdx
+++ b/apps/grafana.mdx
@@ -9,7 +9,7 @@ keywords: ['axiom documentation', 'documentation', 'axiom', 'grafana', 'datasour
-## What is a Grafana data source plugin?
+## What’s a Grafana data source plugin?
Grafana is an open-source tool for time-series analytics, visualization, and alerting. It’s frequently used in DevOps and IT Operations roles to provide real-time information on system health and performance.
diff --git a/apps/lambda.mdx b/apps/lambda.mdx
index a6d70eee..17ba4f2d 100644
--- a/apps/lambda.mdx
+++ b/apps/lambda.mdx
@@ -35,7 +35,7 @@ These new zero-config dashboards help you spot and troubleshoot Lambda function
## Monitor Lambda functions and usage in Axiom
-Having real-time visibility into your function logs is important because any duration between sending your lambda request and the execution time can cause a delay and adds to customer-facing latency. You need to be able to measure and track your Lambda invocations, maximum and minimum execution time, and all invocations by function.
+Having real-time visibility into your function logs is important because any delay between sending your Lambda request and its execution adds to customer-facing latency. You need to be able to measure and track your Lambda invocations, maximum execution time, minimum execution time, and all invocations by function.
diff --git a/apps/netlify.mdx b/apps/netlify.mdx
index 8cf6515e..d3a57096 100644
--- a/apps/netlify.mdx
+++ b/apps/netlify.mdx
@@ -1,13 +1,13 @@
---
title: 'Connect Axiom with Netlify'
-description: 'Integrating Axiom with Netlify to get a comprehensive observability experience for your Netlify projects. This app will give you a better understanding of how your Jamstack apps are performing.'
+description: 'Integrating Axiom with Netlify to get a comprehensive observability experience for your Netlify projects. This gives you a better understanding of how your Jamstack apps are performing.'
overview: 'All-in-one platform for automating modern web projects'
sidebarTitle: Netlify
keywords: ['axiom documentation', 'documentation', 'axiom', 'netlify', log drains', 'jamstack', 'zero config observability']
logoId: 'netlify'
---
-Integrate Axiom with Netlify to get a comprehensive observability experience for your Netlify projects. This integration will give you a better understanding of how your Jamstack apps are performing.
+Integrate Axiom with Netlify to get a comprehensive observability experience for your Netlify projects. This integration gives you a better understanding of how your Jamstack apps are performing.
You can easily monitor logs and metrics related to your website traffic, serverless functions, and app requests. The integration is easy to set up, and you don’t need to configure anything to get started.
@@ -17,7 +17,7 @@ Axiom’s Netlify app is complete with a pre-built dashboard that gives you cont
Overall, the Axiom Netlify app makes it easy to monitor and optimize your Jamstack apps. However, note that this integration is only available for Netlify customers on enterprise-level plans where [Log Drains are supported](https://docs.netlify.com/monitor-sites/log-drains/).
-## What is Netlify
+## What’s Netlify
Netlify is a platform for building highly performant and dynamic websites, e-commerce stores, and web apps. Netlify automatically builds your site and deploys it across its global edge network.
@@ -27,64 +27,29 @@ The Netlify platform provides teams everything they need to take modern web proj
The log events from Axiom give you better insight into the state of your Netlify sites environment so that you can easily monitor traffic volume, website configurations, function logs, resource usage, and more.
-1. Simply login to your [Axiom account](https://app.axiom.co/), click on **Apps** from the **Settings** menu, select the **Netlify app** and click on **Install now**.
-
-
-
-
-
-- It’ll redirect you to Netlify to authorize Axiom.
-
-
-
-
-
-- Click **Authorize**, and then copy the integration token.
-
-
-
-
-
-2. Log into your **Netlify Team Account**, click on your site settings and select **Log Drains**.
-
-- In your log drain service, select **Axiom**, paste the integration token from Step 1, and then click **Connect**.
-
-
-
-
+1. Log in to your [Axiom account](https://app.axiom.co/), click **Apps** in the **Settings** menu, select the **Netlify app**, and then click **Install now**.
+1. When Axiom redirects you to Netlify, click **Authorize**, and then copy the integration token.
+1. Log in to your **Netlify Team Account**, select your site settings, and then select **Log Drains**.
+1. In your log drain service, select **Axiom**, paste the integration token you copied earlier, and then click **Connect**.
## App overview
### Traffic and function Logs
-With Axiom, you can instrument, and actively monitor your Netlify sites, stream your build logs, and analyze your deployment process, or use our pre-build Netlify Dashboard to get an overview of all the important traffic data, usage, and metrics. Various logs will be produced when users collaborate and interact with your sites and websites hosted on Netlify. Axiom captures and ingests all these logs into the `netlify` dataset.
-
-You can also drill down to your site source with our advanced query language and fork our dashboard to start building your own site monitors.
+With Axiom, you can instrument and actively monitor your Netlify sites, stream your build logs, and analyze your deployment process, or use the pre-built Netlify Dashboard to get an overview of all the important traffic data, usage, and metrics. Various logs are produced when users collaborate and interact with your sites and websites hosted on Netlify. Axiom captures and ingests all these logs into the `netlify` dataset.
-- Back in your Axiom datasets console you'll see all your traffic and function logs in your `netlify` dataset.
+You can also drill down to your site source with Axiom’s advanced query language and fork the dashboard to start building your own site monitors.
-
-
-
+Back in your Axiom datasets console, you see all your traffic and function logs in your `netlify` dataset.
### Live stream logs
Stream your sites and app logs live, and filter them to see important information.
-
-
-
-
### Zero-config dashboard for your Netlify sites
-Use our pre-build Netlify Dashboard to get an overview of all the important metrics. When ready, you can fork our dashboard and start building your own.
-
-
-
-
+Use the pre-built Netlify Dashboard to get an overview of all the important metrics. You can fork the dashboard and start building your own.
## Start logging Netlify Sites today
The Axiom Netlify integration allows you to monitor and log all of your sites and apps in one place. With the Axiom app, you can quickly detect site errors and get high-level insights into your Netlify projects.
-
-- We welcome ideas, feedback, and collaboration, join us in our [Discord Community](http://axiom.co/discord) to share them with us.
diff --git a/apps/vercel.mdx b/apps/vercel.mdx
index 97cb3165..09fd711d 100644
--- a/apps/vercel.mdx
+++ b/apps/vercel.mdx
@@ -40,7 +40,7 @@ For function logs, if you call `console.log`, `console.warn` or `console.error`
## Web vitals
-Axiom supports capturing and analyzing Web Vital data directly from your user’s browser without any sampling and with more data than is available elsewhere. It is perfect to pair with Vercel’s in-built analytics when you want to get really deep into a specific problem or debug issues with a specific audience (user-agent, location, region, etc).
+Axiom supports capturing and analyzing Web Vital data directly from your user’s browser without any sampling and with more data than is available elsewhere. It’s perfect to pair with Vercel’s built-in analytics when you want to dig deep into a specific problem or debug issues with a specific audience (user agent, location, region, and so on).
Web Vitals are only currently supported for Next.js websites. Expanded support is coming soon.
@@ -153,13 +153,13 @@ export { reportWebVitals } from 'next-axiom';
## Upgrade to Next.js 13 from Next.js 12
-If you plan on upgrading to Next.js 13, you'll need to make specific changes to ensure compatibility:
+If you plan on upgrading to Next.js 13, you’ll need to make specific changes to ensure compatibility:
- Upgrade the next-axiom package to version `1.0.0` or higher:
- Make sure any exported variables have the `NEXT_PUBLIC_` prefix, for example, `NEXT_PUBLIC_AXIOM_TOKEN`.
- In client components, use the `useLogger` hook instead of the `log` prop.
- For server-side components, you need to create an instance of the `Logger` and flush the logs before the component returns.
-- For Web Vitals tracking, you'll replace the previous method of capturing data. Remove the `reportWebVitals()` line and instead integrate the `AxiomWebVitals` component into your layout.
+- For Web Vitals tracking, you’ll replace the previous method of capturing data. Remove the `reportWebVitals()` line and instead integrate the `AxiomWebVitals` component into your layout.
## Vercel Function logs 4KB limit
diff --git a/console/intelligence/spotlight.mdx b/console/intelligence/spotlight.mdx
index 36151c89..8f2a7cf9 100644
--- a/console/intelligence/spotlight.mdx
+++ b/console/intelligence/spotlight.mdx
@@ -95,4 +95,4 @@ To dig deeper, iteratively refine your Spotlight analysis or jump to a view of m
```
1. Select the time period where errors spiked.
-1. Run Spotlight to identify if there's anything different about the selected errors.
+1. Run Spotlight to identify if there’s anything different about the selected errors.
diff --git a/doc-assets/shots/netlify-118.png b/doc-assets/shots/netlify-118.png
deleted file mode 100644
index 18882993..00000000
Binary files a/doc-assets/shots/netlify-118.png and /dev/null differ
diff --git a/doc-assets/shots/netlify-120.png b/doc-assets/shots/netlify-120.png
deleted file mode 100644
index 14218a06..00000000
Binary files a/doc-assets/shots/netlify-120.png and /dev/null differ
diff --git a/doc-assets/shots/netlify-27.png b/doc-assets/shots/netlify-27.png
deleted file mode 100644
index 515af10c..00000000
Binary files a/doc-assets/shots/netlify-27.png and /dev/null differ
diff --git a/doc-assets/shots/netlify-28.png b/doc-assets/shots/netlify-28.png
deleted file mode 100644
index 3ff23078..00000000
Binary files a/doc-assets/shots/netlify-28.png and /dev/null differ
diff --git a/doc-assets/shots/netlify-2c.png b/doc-assets/shots/netlify-2c.png
deleted file mode 100644
index f108c4f0..00000000
Binary files a/doc-assets/shots/netlify-2c.png and /dev/null differ
diff --git a/doc-assets/shots/netlify-6n.png b/doc-assets/shots/netlify-6n.png
deleted file mode 100644
index a1b84aef..00000000
Binary files a/doc-assets/shots/netlify-6n.png and /dev/null differ
diff --git a/doc-assets/shots/netlify-dash-7.png b/doc-assets/shots/netlify-dash-7.png
deleted file mode 100644
index 89f01833..00000000
Binary files a/doc-assets/shots/netlify-dash-7.png and /dev/null differ
diff --git a/docs.json b/docs.json
index 72a3345a..6ca3e0d0 100644
--- a/docs.json
+++ b/docs.json
@@ -255,7 +255,11 @@
"groups": [
{
"group": "Get started",
- "pages": ["apl/introduction", "apl/tutorial", "apl/apl-features"]
+ "pages": [
+ "apl/introduction",
+ "apl/tutorial",
+ "apl/apl-features"
+ ]
},
{
"group": "Functions",
@@ -709,7 +713,13 @@
}
},
"contextual": {
- "options": ["copy", "view", "chatgpt", "claude", "perplexity"]
+ "options": [
+ "copy",
+ "view",
+ "chatgpt",
+ "claude",
+ "perplexity"
+ ]
},
"styling": {
"eyebrows": "breadcrumbs"
diff --git a/get-help/faq.mdx b/get-help/faq.mdx
index 2b785842..6a279f54 100644
--- a/get-help/faq.mdx
+++ b/get-help/faq.mdx
@@ -5,9 +5,11 @@ sidebarTitle: 'FAQs'
keywords: ['axiom documentation', 'documentation', 'axiom', 'axiom', 'run axiom', 'retain data', 'faq', 'frwquently asked questions']
---
+{/* vale off */}
+
This page aims to offer a deeper understanding of Axiom. If you can’t find an answer to your questions, please feel free to [contact our team](https://axiom.co/contact).
-## What is Axiom?
+## What’s Axiom?
Axiom is a log management and analytics solution that reduces the cost and management overhead of logging as much data as you want.
@@ -57,7 +59,7 @@ For example, state of the art in logging is running stateful clusters that need
2. The choice needs to be made between hot and cold data, and also what is archived. Now your data is in 2-3 different places and queries can be fast or slow depending on where the data is
-The end result is needing to carefully consider all data that is ingested, and putting limits and/or sampling to control the DevOps and cost burden.
+The end result is needing to carefully consider all data that’s ingested, and putting limits and/or sampling to control the DevOps and cost burden.
### The ways Axiom is different
@@ -91,7 +93,7 @@ For more information, see [Pricing](https://axiom.co/pricing).
## Can I try Axiom for free?
-Yes. The Personal plan is free forever with a generous allowance. It is available to all customers.
+Yes. The Personal plan is free forever with a generous allowance. It’s available to all customers.
With unlimited users included, the Axiom Cloud plan starting at $25/month is a great choice for growing companies and for enterprise organizations who want to run a proof-of-concept.
diff --git a/getting-started-guide/glossary.mdx b/getting-started-guide/glossary.mdx
index f7f71668..56a91bb1 100644
--- a/getting-started-guide/glossary.mdx
+++ b/getting-started-guide/glossary.mdx
@@ -4,8 +4,12 @@ description: "The glossary explains the key concepts in Axiom."
sidebarTitle: Glossary
---
+{/* vale off */}
+
[A](#a) [B](#b) [C](#c) [D](#d) [E](#e) [F](#f) G H I K [L](#l) [M](#m) [N](#n) [O](#o) [P](#p) [Q](#q) [R](#r) S [T](#t) W X Y Z
+{/* vale on */}
+
## A
### Anomaly monitor
@@ -49,7 +53,7 @@ For more information, see [Introduction to APL](/apl/introduction).
### Bring Your Own Cloud (BYOC)
-BYOC is a deployment option and pricing plan. We deploy Axiom into your own AWS infrastructure, and you store and process data in your own environment.
+BYOC is a deployment option and pricing plan. Axiom deploys into your AWS infrastructure, and you store and process data in your own environment.
## C
diff --git a/getting-started-guide/quickstart-using-sample-data.mdx b/getting-started-guide/quickstart-using-sample-data.mdx
index 7e849073..9c5643bd 100644
--- a/getting-started-guide/quickstart-using-sample-data.mdx
+++ b/getting-started-guide/quickstart-using-sample-data.mdx
@@ -100,6 +100,6 @@ You created a simple dashboard that displays the number of requests for each ser
You created a monitor that automatically sends a notification to your email address if the number of requests taking longer than 5 ms is higher than 10,000 during a one minute period.
-## What's next
+## What’s next
To check out Axiom with a sample app, see [Get started with example app](/getting-started-guide/get-started-example-app).
\ No newline at end of file
diff --git a/guides/apex.mdx b/guides/apex.mdx
index 2a706dee..a9b3ad55 100644
--- a/guides/apex.mdx
+++ b/guides/apex.mdx
@@ -27,15 +27,15 @@ The Axiom Go SDK is an open-source project and welcomes your contributions. For
import adapter "github.com/axiomhq/axiom-go/adapters/apex"
```
- Alternatively, configure the adapter using [options](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/apex#Option) passed to the [New](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/apex#New) function:
+Alternatively, configure the adapter using [options](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/apex#Option) passed to the [New](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/apex#New) function:
- ```go
- handler, err := adapter.New(
- adapter.SetDataset("DATASET_NAME"),
- )
- ```
+```go
+handler, err := adapter.New(
+ adapter.SetDataset("DATASET_NAME"),
+)
+```
-
+
## Configure client
@@ -44,20 +44,20 @@ To configure the underlying client manually, choose one of the following:
- Use [SetClient](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/apex#SetClient) to pass in the client you have previously created with [Send data from Go app to Axiom](/guides/go).
- Use [SetClientOptions](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/apex#SetClientOptions) to pass [client options](https://pkg.go.dev/github.com/axiomhq/axiom-go/axiom#Option) to the adapter.
- ```go
- import (
- "github.com/axiomhq/axiom-go/axiom"
- adapter "github.com/axiomhq/axiom-go/adapters/apex"
- )
-
- // ...
-
- handler, err := adapter.New(
- adapter.SetClientOptions(
- axiom.SetPersonalTokenConfig("AXIOM_TOKEN"),
- ),
- )
- ```
+```go
+import (
+ "github.com/axiomhq/axiom-go/axiom"
+ adapter "github.com/axiomhq/axiom-go/adapters/apex"
+)
+
+// ...
+
+handler, err := adapter.New(
+ adapter.SetClientOptions(
+ axiom.SetPersonalTokenConfig("AXIOM_TOKEN"),
+ ),
+)
+```
The adapter uses a buffer to batch events before sending them to Axiom. Flush this buffer explicitly by calling [Close](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/apex#Handler.Close). For more information, see the [example in GitHub](https://github.com/axiomhq/axiom-go/blob/main/examples/apex/main.go).
diff --git a/guides/logrus.mdx b/guides/logrus.mdx
index be2ad0ab..7591416e 100644
--- a/guides/logrus.mdx
+++ b/guides/logrus.mdx
@@ -27,15 +27,15 @@ The Axiom Go SDK is an open-source project and welcomes your contributions. For
import adapter "github.com/axiomhq/axiom-go/adapters/logrus"
```
- Alternatively, configure the adapter using [options](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/logrus#Option) passed to the [New](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/logrus#New) function:
+Alternatively, configure the adapter using [options](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/logrus#Option) passed to the [New](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/logrus#New) function:
- ```go
- hook, err := adapter.New(
- adapter.SetDataset("DATASET_NAME"),
- )
- ```
+```go
+hook, err := adapter.New(
+ adapter.SetDataset("DATASET_NAME"),
+)
+```
-
+
## Configure client
@@ -44,20 +44,20 @@ To configure the underlying client manually, choose one of the following:
- Use [SetClient](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/logrus#SetClient) to pass in the client you have previously created with [Send data from Go app to Axiom](/guides/go).
- Use [SetClientOptions](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/logrus#SetClientOptions) to pass [client options](https://pkg.go.dev/github.com/axiomhq/axiom-go/axiom#Option) to the adapter.
- ```go
- import (
- "github.com/axiomhq/axiom-go/axiom"
- adapter "github.com/axiomhq/axiom-go/adapters/logrus"
- )
-
- // ...
-
- hook, err := adapter.New(
- adapter.SetClientOptions(
- axiom.SetPersonalTokenConfig("AXIOM_TOKEN"),
- ),
- )
- ```
+```go
+import (
+ "github.com/axiomhq/axiom-go/axiom"
+ adapter "github.com/axiomhq/axiom-go/adapters/logrus"
+)
+
+// ...
+
+hook, err := adapter.New(
+ adapter.SetClientOptions(
+ axiom.SetPersonalTokenConfig("AXIOM_TOKEN"),
+ ),
+)
+```
The adapter uses a buffer to batch events before sending them to Axiom. Flush this buffer explicitly by calling [Close](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/logrus#Hook.Close). For more information, see the [example in GitHub](https://github.com/axiomhq/axiom-go/blob/main/examples/logrus/main.go).
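As a rough sketch of the full setup, the following registers the hook with a Logrus logger and flushes the buffer on exit. The dataset name is a placeholder and `AXIOM_TOKEN` is assumed to be set in the environment; see the linked example in GitHub for the canonical version.

```go
package main

import (
	"log"

	"github.com/sirupsen/logrus"

	adapter "github.com/axiomhq/axiom-go/adapters/logrus"
)

func main() {
	// Create the hook. The dataset name is a placeholder.
	hook, err := adapter.New(
		adapter.SetDataset("DATASET_NAME"),
	)
	if err != nil {
		log.Fatal(err)
	}
	// Flush the buffered events before the program exits.
	defer hook.Close()

	// Attach the hook so every Logrus entry is also sent to Axiom.
	logger := logrus.New()
	logger.AddHook(hook)
	logger.WithField("mood", "hyped").Info("hello from logrus")
}
```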
diff --git a/guides/opentelemetry-cloudflare-workers.mdx b/guides/opentelemetry-cloudflare-workers.mdx
index 234e6181..6b3ac4e7 100644
--- a/guides/opentelemetry-cloudflare-workers.mdx
+++ b/guides/opentelemetry-cloudflare-workers.mdx
@@ -183,7 +183,7 @@ wrangler deploy
## View your app in Cloudflare Workers
-Once you've deployed your app using Wrangler, view and manage it through the Cloudflare dashboard. To see your Cloudflare Workers app, follow these steps:
+Once you’ve deployed your app using Wrangler, view and manage it through the Cloudflare dashboard. To see your Cloudflare Workers app, follow these steps:
- In your [Cloudflare dashboard](https://dash.cloudflare.com/), click **Workers & Pages** to access the Workers section. You see a list of your deployed apps.
diff --git a/guides/opentelemetry-dotnet.mdx b/guides/opentelemetry-dotnet.mdx
index 500640f3..ee388275 100644
--- a/guides/opentelemetry-dotnet.mdx
+++ b/guides/opentelemetry-dotnet.mdx
@@ -328,7 +328,7 @@ This is the core SDK for OpenTelemetry in .NET. It provides the foundational too
``
-This package allows apps to export telemetry data to the console. It is primarily useful for development and testing purposes, offering a simple way to view the telemetry data your app generates in real time.
+This package allows apps to export telemetry data to the console. It’s primarily useful for development and testing purposes, offering a simple way to view the telemetry data your app generates in real time.
### OpenTelemetry.Exporter.OpenTelemetryProtocol
diff --git a/guides/opentelemetry-go.mdx b/guides/opentelemetry-go.mdx
index 377cdc3a..deb7eb97 100644
--- a/guides/opentelemetry-go.mdx
+++ b/guides/opentelemetry-go.mdx
@@ -43,7 +43,7 @@ Before installing the OpenTelemetry dependencies, ensure your Go project is prop
### Initialize a Go module
-If your project is not already initialized as a Go module, run the following command in your project’s root directory. This step creates a `go.mod` file which tracks your project’s dependencies.
+If your project isn’t already initialized as a Go module, run the following command in your project’s root directory. This step creates a `go.mod` file which tracks your project’s dependencies.
```bash
go mod init
@@ -53,7 +53,7 @@ Replace `` with your project’s name or the GitHub repository path
### Manage dependencies
-After initializing your Go module, tidy up your project’s dependencies. This ensures that your `go.mod` file accurately reflects the packages your project depends on, including the correct versions of the OpenTelemetry libraries you'll be using.
+After initializing your Go module, tidy up your project’s dependencies. This ensures that your `go.mod` file accurately reflects the packages your project depends on, including the correct versions of the OpenTelemetry libraries you’ll be using.
Run the following command in your project’s root directory:
diff --git a/guides/send-logs-from-dotnet.mdx b/guides/send-logs-from-dotnet.mdx
index d4e8aa94..c154b0ea 100644
--- a/guides/send-logs-from-dotnet.mdx
+++ b/guides/send-logs-from-dotnet.mdx
@@ -80,7 +80,7 @@ public static class AxiomLogger
// Check the response status code
if (!response.IsSuccessStatusCode)
{
- // If the response is not successful, print the error details
+ // If the response isn’t successful, print the error details
var responseBody = await response.Content.ReadAsStringAsync();
Console.WriteLine($"Failed to send log: {response.StatusCode}\n{responseBody}");
}
diff --git a/guides/send-logs-from-laravel.mdx b/guides/send-logs-from-laravel.mdx
index 23280c7d..710c2b4e 100644
--- a/guides/send-logs-from-laravel.mdx
+++ b/guides/send-logs-from-laravel.mdx
@@ -122,11 +122,11 @@ return [
];
```
-At the start of the `logging.php` file in your Laravel project, you'll find some Monolog handlers like `NullHandler`, `StreamHandler`, and a few more. This shows that Laravel uses Monolog to help with logging, which means it can do a lot of different things with logs.
+At the start of the `logging.php` file in your Laravel project, you’ll find some Monolog handlers like `NullHandler`, `StreamHandler`, and a few more. This shows that Laravel uses Monolog to help with logging, which means it can do a lot of different things with logs.
### Default log channel
-The `default` configuration specifies the primary channel Laravel uses for logging. In our setup, this is set through the **`.env`** file with the **`LOG_CHANNEL`** variable, which you've set to **`axiom`**. This means that, by default, log messages will be sent to the Axiom channel, using the custom handler you've defined to send logs to the dataset.
+The `default` configuration specifies the primary channel Laravel uses for logging. In this setup, it’s set through the **`.env`** file with the **`LOG_CHANNEL`** variable, which you’ve set to **`axiom`**. This means that, by default, log messages are sent to the Axiom channel, using the custom handler you’ve defined to send logs to the dataset.
```bash
LOG_CHANNEL=axiom
@@ -305,7 +305,7 @@ class AxiomHandler extends AbstractProcessingHandler
## Creating the test controller
-In this section, we will demonstrate the process of verifying that your custom Axiom logger is properly set up and functioning within your Laravel app. To do this, we'll create a simple test controller with a method designed to send a log message using the Axiom channel. Following this, we'll define a route that triggers this logging action, allowing you to easily test the logger by accessing a specific URL in your browser or using a tool like cURL.
+This section shows how to verify that your custom Axiom logger is properly set up and working within your Laravel app. To do this, you’ll create a simple test controller with a method that sends a log message using the Axiom channel, and then define a route that triggers this logging action so you can test the logger by visiting a URL in your browser or using a tool like cURL.
Create a new controller called `TestController` within your `app/Http/Controllers` directory. In this controller, add a method named `logTest` . This method will use Laravel’s logging to send a test log message to your Axiom dataset. Here’s how you set it up:
@@ -370,7 +370,7 @@ With this route, navigating to `/test-log` on your Laravel app’s domain will e
## Run the app
-If you are running the Laravel app locally, to see your custom Axiom logger in action, you'll need to start your Laravel app. Open your terminal or command prompt, navigate to the root directory of your Laravel project, and run the following command:
+If you’re running the Laravel app locally, start it to see your custom Axiom logger in action. Open your terminal or command prompt, navigate to the root directory of your Laravel project, and run the following command:
```bash
php artisan serve
@@ -380,7 +380,7 @@ This command launches the built-in development server, making your app accessibl
## View the logs in Axiom
-Once you've set up your Laravel app with Axiom logging and sent test logs via our `TestController`, check your dataset. There, you'll find your logs categorized by levels like `debug`, `info`, `error`, and `warning`. This confirms everything is working and showcases Axiom’s capabilities in handling log data.
+Once you’ve set up your Laravel app with Axiom logging and sent test logs via the `TestController`, check your dataset. There, you’ll find your logs categorized by levels like `debug`, `info`, `error`, and `warning`. This confirms everything is working and showcases Axiom’s capabilities in handling log data.
@@ -388,4 +388,4 @@ Once you've set up your Laravel app with Axiom logging and sent test logs via ou
## Conclusion
-This guide has introduced you to integrating Axiom for logging in Laravel apps. You've learned how to create a custom logger, configure log channels, and understand the significance of log levels. With this knowledge, you’re set to track errors and analyze log data effectively using Axiom.
+This guide has introduced you to integrating Axiom for logging in Laravel apps. You’ve learned how to create a custom logger, configure log channels, and understand the significance of log levels. With this knowledge, you’re set to track errors and analyze log data effectively using Axiom.
diff --git a/guides/zap.mdx b/guides/zap.mdx
index ed1d2190..1cf4007c 100644
--- a/guides/zap.mdx
+++ b/guides/zap.mdx
@@ -27,15 +27,15 @@ The Axiom Go SDK is an open-source project and welcomes your contributions. For
import adapter "github.com/axiomhq/axiom-go/adapters/zap"
```
- Alternatively, configure the adapter using [options](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/zap#Option) passed to the [New](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/zap#New) function:
+Alternatively, configure the adapter using [options](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/zap#Option) passed to the [New](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/zap#New) function:
- ```go
- core, err := adapter.New(
- adapter.SetDataset("DATASET_NAME"),
- )
- ```
+```go
+core, err := adapter.New(
+ adapter.SetDataset("DATASET_NAME"),
+)
+```
-
+
## Configure client
@@ -44,20 +44,20 @@ To configure the underlying client manually, choose one of the following:
- Use [SetClient](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/zap#SetClient) to pass in the client you have previously created with [Send data from Go app to Axiom](/guides/go).
- Use [SetClientOptions](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/zap#SetClientOptions) to pass [client options](https://pkg.go.dev/github.com/axiomhq/axiom-go/axiom#Option) to the adapter.
- ```go
- import (
- "github.com/axiomhq/axiom-go/axiom"
- adapter "github.com/axiomhq/axiom-go/adapters/zap"
- )
-
- // ...
-
- core, err := adapter.New(
- adapter.SetClientOptions(
- axiom.SetPersonalTokenConfig("AXIOM_TOKEN"),
- ),
- )
- ```
+```go
+import (
+ "github.com/axiomhq/axiom-go/axiom"
+ adapter "github.com/axiomhq/axiom-go/adapters/zap"
+)
+
+// ...
+
+core, err := adapter.New(
+ adapter.SetClientOptions(
+ axiom.SetPersonalTokenConfig("AXIOM_TOKEN"),
+ ),
+)
+```
The adapter uses a buffer to batch events before sending them to Axiom. Flush this buffer explicitly by calling [Sync](https://pkg.go.dev/github.com/axiomhq/axiom-go/adapters/zap#WriteSyncer.Sync). For more information, see the [zap documentation](https://pkg.go.dev/go.uber.org/zap/zapcore#WriteSyncer) and the [example in GitHub](https://github.com/axiomhq/axiom-go/blob/main/examples/zap/main.go).
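As a rough sketch of the full setup, the following builds a zap logger on top of the adapter core and calls `Sync` on exit, which flushes the underlying write syncer. The dataset name is a placeholder and `AXIOM_TOKEN` is assumed to be set in the environment; see the linked example in GitHub for the canonical version.

```go
package main

import (
	"log"

	"go.uber.org/zap"

	adapter "github.com/axiomhq/axiom-go/adapters/zap"
)

func main() {
	// Create the core. The dataset name is a placeholder.
	core, err := adapter.New(
		adapter.SetDataset("DATASET_NAME"),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Build a zap logger on top of the Axiom core.
	logger := zap.New(core)
	// Sync flushes the buffered events before the program exits.
	defer func() { _ = logger.Sync() }()

	logger.Info("hello from zap", zap.String("mood", "hyped"))
}
```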
diff --git a/introduction.mdx b/introduction.mdx
index 7bd955f9..eec481e7 100644
--- a/introduction.mdx
+++ b/introduction.mdx
@@ -3,9 +3,9 @@ title: "What is Axiom?"
keywords: ["axiom", "eventdb", "console", "data platform"]
---
-Axiom is a data platform designed to efficiently collect, store, and analyze event and telemetry data at massive scale. At its core, Axiom combines a high-performance, proprietary data store with an intelligent Console—helping teams reach actionable insights from their data faster.
+Axiom is a data platform designed to efficiently collect, store, and analyze event and telemetry data at massive scale. At its core, Axiom combines a high-performance, proprietary data store with an intelligent Console, helping teams reach actionable insights from their data faster.
-Trusted by 30,000+ organizations—from high-growth startups to global enterprises.
+Trusted by 30,000+ organizations, from high-growth startups to global enterprises.
## Components
@@ -19,7 +19,7 @@ Robust, cost-effective, and scalable datastore specifically optimized for timest
* **Extreme compression:** Tuned storage format compresses data 25-50x, significantly reducing storage costs and ensuring data remains queryable at any time.
* **Serverless querying:** Axiom spins up ephemeral, serverless runtimes on-demand to execute queries efficiently, minimizing idle compute resources and costs.
-Learn more about Axiom’s architecture in the [architecture](/platform-overview/architecture) page.
+For more information, see [Axiom’s architecture](/platform-overview/architecture).
### Console
@@ -37,6 +37,6 @@ Intuitive web app built for exploration, visualization, and monitoring of your d
## Getting started
-* Learn more about Axiom’s [features](/platform-overview/features).
-* Explore the interactive demo [playground](https://play.axiom.co/).
-* Create your own [organization](https://app.axiom.co/register).
+* [Learn more about Axiom’s features](/platform-overview/features).
+* [Explore the interactive demo playground](https://play.axiom.co/).
+* [Create your own organization](https://app.axiom.co/register).
diff --git a/legal/cookies.mdx b/legal/cookies.mdx
index 4a26e497..36e3a118 100644
--- a/legal/cookies.mdx
+++ b/legal/cookies.mdx
@@ -1,6 +1,6 @@
---
-title: Cookie Policy
-description: "How we use cookies and similar tracking technologies on our websites."
+title: Cookie policy
+description: "How Axiom uses cookies and similar tracking technologies on its websites."
sidebarTitle: Cookies
---
diff --git a/legal/privacy.mdx b/legal/privacy.mdx
index 06549839..30cac220 100644
--- a/legal/privacy.mdx
+++ b/legal/privacy.mdx
@@ -1,7 +1,6 @@
---
title: Privacy policy
-description: "Learn how we collect, use, and protect your personal information and data."
-sidebarTitle: Privacy policy
+description: "Learn how Axiom collects, uses, and protects your personal information and data."
---
{/* vale off */}
diff --git a/legal/sla.mdx b/legal/sla.mdx
index bbcebf55..98e20db9 100644
--- a/legal/sla.mdx
+++ b/legal/sla.mdx
@@ -3,6 +3,8 @@ title: SLA
description: "Building event data infrastructure your team can trust."
---
+{/* vale off */}
+
Axiom is committed to delivering a reliable, performant, and secure data platform. We want our customers to have full confidence in the Axiom Service, backed by transparent commitments. This Service Quality document covers:
1. **Service availability** and corresponding SLAs
@@ -129,4 +131,4 @@ Axiom is not obligated to correct any Issue or issue that meets any of the follo
5. which is caused by Customer’s negligence, abuse, misapplication, or use of the Axiom Service other than as specified in the Documentation; or
6. which would be resolved by the Customer using an error correction or update regarding the Axiom Service.
-Customer acknowledges that new features may be added to the Axiom Service based on market demand and technological innovation. Accordingly, as Axiom develops enhanced versions of the Axiom Service, Axiom may cease to maintain and support older versions.
\ No newline at end of file
+Customer acknowledges that new features may be added to the Axiom Service based on market demand and technological innovation. Accordingly, as Axiom develops enhanced versions of the Axiom Service, Axiom may cease to maintain and support older versions.
diff --git a/legal/terms-of-service.mdx b/legal/terms-of-service.mdx
index 7944bb6f..3a0a77e9 100644
--- a/legal/terms-of-service.mdx
+++ b/legal/terms-of-service.mdx
@@ -1,7 +1,6 @@
---
title: Terms of service
-description: "The terms and conditions governing your use of our services and platform."
-sidebarTitle: Terms of service
+description: "The terms and conditions governing your use of Axiom services and platform."
---
{/* vale off */}
diff --git a/platform-overview/architecture.mdx b/platform-overview/architecture.mdx
index 781d49f6..f1a8ad9f 100644
--- a/platform-overview/architecture.mdx
+++ b/platform-overview/architecture.mdx
@@ -1,11 +1,11 @@
---
title: "Architecture"
keywords: ["architecture", "storage", "query", "ingestion", "compression"]
-description: "Technical deep-dive into Axiom's distributed architecture."
+description: "Technical deep-dive into Axiom’s distributed architecture."
---
-You don't need to understand any of the following material to get massive value from Axiom. As a fully managed data platform, Axiom just works. This technical deep-dive is intended for curious minds wondering: Why is Axiom different?
+You don’t need to understand any of the following material to get massive value from Axiom. As a fully managed data platform, Axiom just works. This technical deep-dive is intended for curious minds wondering: Why is Axiom different?
Axiom routes ingestion requests through a distributed edge layer to a cluster of specialized services that process and store data in a proprietary columnar format optimized for event data. Query requests are executed by ephemeral, serverless workers that operate directly on compressed data stored in object storage.
@@ -24,7 +24,7 @@ Data flows through a multi-layered ingestion system designed for high throughput
## Storage architecture
-Axiom's storage layer is built around a custom columnar format that achieves extreme compression ratios:
+Axiom’s storage layer is built around a custom columnar format that achieves extreme compression ratios:
**Columnar organization**: Events are decomposed into columns and stored using specialized encodings optimized for each data type. String columns use dictionary encoding, numeric columns use various compression schemes, and boolean columns use bitmap compression.
@@ -70,7 +70,7 @@ A background compaction system continuously optimizes storage efficiency:
- **Fieldspace**: Optimizes for specific field access patterns
- **Concat**: Simple concatenation for append-heavy workloads
-**Compression optimization**: During compaction, data is recompressed using more aggressive algorithms and column-specific optimizations that aren't feasible during real-time ingestion.
+**Compression optimization**: During compaction, data is recompressed using more aggressive algorithms and column-specific optimizations that aren’t feasible during real-time ingestion.
## System architecture
diff --git a/platform-overview/features.mdx b/platform-overview/features.mdx
index 50cd7f4c..73710d5f 100644
--- a/platform-overview/features.mdx
+++ b/platform-overview/features.mdx
@@ -1,6 +1,6 @@
---
title: "Features"
-description: "Comprehensive overview of Axiom's components, features, and capabilities across the platform."
+description: "Comprehensive overview of Axiom’s components, features, and capabilities across the platform."
keywords: ["capabilities", "features", "components", "eventdb", "console", "integrations", "api", "security", "compliance"]
mode: "wide"
---
@@ -10,7 +10,7 @@ mode: "wide"
| **Data Platform** | Deployment | Axiom Cloud | Axiom hosts and manages all infrastructure in its own cloud. |
| | | | |
| **EventDB** | - | - | Foundation of Axiom’s platform for ingesting, storing, and querying timestamped event data at scale. |
-| **EventDB** | Ingest | - | Ingest pipeline that is coordination-free, durable by default, and scales linearly without requiring Kafka or other heavy middleware. |
+| **EventDB** | Ingest | - | Ingest pipeline that’s coordination-free, durable by default, and scales linearly without requiring Kafka or other heavy middleware. |
| **EventDB** | Storage | - | Custom block-based format on object storage, with extreme compression (average 25×, up to 50× for more structured events) and efficient metadata. |
| **EventDB** | Query | - | Serverless ephemeral runtimes that spin up on demand to process queries, powered by the Axiom Processing Language (APL). |
| **EventDB** | Query | [APL (Axiom Processing Language)](/apl/introduction) | Powerful query language supporting filtering, aggregations, transformations, and specialized operators. |
@@ -25,7 +25,7 @@ mode: "wide"
| **Console** | Dashboards | - | Combine multiple visual elements (charts, tables, logs, etc.) onto a single page. |
| **Console** | Dashboards | [Elements](/dashboard-elements/overview) | Various chart types, log streams, notes, and more to tailor each dashboard. |
| **Console** | Dashboards | [Annotations](/query-data/annotate-charts) | Mark points in time or highlight events directly in your dashboards. |
-| **Console** | Monitors | [Threshold onitors](/monitor-data/threshold-monitors) | Checks if aggregated values exceed or fall below a predefined threshold (e.g., error counts > 100). |
+| **Console** | Monitors | [Threshold monitors](/monitor-data/threshold-monitors) | Checks if aggregated values exceed or fall below a predefined threshold (for example, error counts > 100). |
| **Console** | Monitors | [Match monitors](/monitor-data/match-monitors) | Triggers on specific log patterns or conditions for each event. |
| **Console** | Monitors | [Anomaly monitors](/monitor-data/anomaly-monitors) | Learns from historical data to detect unexpected deviations or spikes. |
| **Console** | Alerting | [Notifiers](/monitor-data/notifiers-overview) (Webhooks, Email, Slack, and more) | Sends notifications through various channels including email, chat platforms, incident management systems, and custom webhook integrations. |
@@ -45,7 +45,7 @@ mode: "wide"
| **APIs and CLI** | CLI | [Query from terminal](/restapi/query) | Execute queries in APL or simpler filters directly in a command-line session. |
| **APIs and CLI** | [Terraform Provider](https://registry.terraform.io/providers/axiomhq/axiom/latest) | - | Terraform provider for programmatically creating and managing Axiom resources including datasets, notifiers, monitors, and users. |
| | | | |
-| **Security and Compliance** | - | - | Axiom's data protection measures and compliance with major privacy/security frameworks. |
+| **Security and Compliance** | - | - | Axiom’s data protection measures and compliance with major privacy/security frameworks. |
| **Security and Compliance** | [Compliance](/platform-overview/security) | SOC 2 Type II, GDPR, CCPA, HIPAA | Meets industry standards for data handling and privacy. |
## Related links
diff --git a/platform-overview/roadmap.mdx b/platform-overview/roadmap.mdx
index f83f1258..5586efaa 100644
--- a/platform-overview/roadmap.mdx
+++ b/platform-overview/roadmap.mdx
@@ -27,7 +27,6 @@ As teams incorporate more AI capabilities into their own products, they need con
Learn more in the [AI engineering](/ai-engineering/overview) docs.
-
### Platform excellence and scale
Supporting ambitious builders requires a rock-solid and scalable foundation. Axiom continues to invest heavily in core performance, reliability, and capabilities of the Axiom platform to ensure it can handle the most demanding workloads.
@@ -47,7 +46,7 @@ Each feature of Axiom is in one of the following states:
- **End of life:** The feature is no longer available or supported. Axiom has sunset it in favor of newer solutions.
-Private and public preview features are experimental, are not guaranteed to work as expected, and may return unexpected query results. Please consider the risk you run when you use preview features against production workloads.
+Private and public preview features are experimental, aren’t guaranteed to work as expected, and may return unexpected query results. Consider the risks before you use preview features against production workloads.
Current private preview features:
diff --git a/query-data/traces.mdx b/query-data/traces.mdx
index beabb357..6b7371c8 100644
--- a/query-data/traces.mdx
+++ b/query-data/traces.mdx
@@ -135,7 +135,7 @@ In the waterfall view of traces, Axiom warns you about slow and fast spans. Thes
The span duration histogram can be useful in the following cases, among others:
- You look at a span and you’re not familiar with the typical behavior of the service that created it. You want to know if you look at something normal in terms of duration or an outlier. The histogram helps you determine if you look at an outlier and might drill down further.
-- You've found an outlier. You want to investigate and look at other outliers. The histogram shows you what the baseline is and what’s not normal in terms of duration. You want to filter for the outliers and see what they have in common.
+- You’ve found an outlier. You want to investigate and look at other outliers. The histogram shows you what the baseline is and what’s not normal in terms of duration. You want to filter for the outliers and see what they have in common.
- You want to see if there was a recent change in the typical duration for the selected span type.
To narrow the time range of the histogram, click and select an area in the histogram.
diff --git a/reference/cli.mdx b/reference/cli.mdx
index 5f90ecc3..e99b3296 100644
--- a/reference/cli.mdx
+++ b/reference/cli.mdx
@@ -254,4 +254,4 @@ axiom help auth status
**if you have questions, or any opinions you can [start an issue](https://github.com/axiomhq/cli/issues) on Axiom CLI’s open source repository.**
-**You can also visit our [Discord community](https://axiom.co/discord) to start or join a discussion. We'd love to hear from you!**
+**You can also visit our [Discord community](https://axiom.co/discord) to start or join a discussion. We’d love to hear from you!**
diff --git a/reference/datasets.mdx b/reference/datasets.mdx
index bbce2d9e..489e677f 100644
--- a/reference/datasets.mdx
+++ b/reference/datasets.mdx
@@ -75,7 +75,7 @@ In most cases, you can use `_time` and `_sysTime` interchangeably. The differenc
To create a dataset using the Axiom app, follow these steps:
-1. Click **Settings > Datasets**.
+1. Click **Settings > Datasets and views**.
1. Click **New dataset**.
1. Name the dataset, and then click **Add**.
@@ -93,7 +93,7 @@ You can import data to your dataset in one of the following formats:
To import data to a dataset, follow these steps:
-1. Click **Settings > Datasets**.
+1. Click **Settings > Datasets and views**.
1. In the list, find the dataset where you want to import data, and then click **Import** on the right.
1. Optional: Specify the timestamp field. This is only necessary if your data contains a timestamp field and it’s different from `_time`.
1. Upload the file, and then click **Import**.
@@ -108,7 +108,7 @@ Trimming a dataset deletes all data before the specified date.
To trim a dataset, follow these steps:
-1. Click **Settings > Datasets**.
+1. Click **Settings > Datasets and views**.
1. In the list, find the dataset that you want to trim, and then click **Trim dataset** on the right.
1. Specify the date before which you want to delete data.
1. Enter the name of the dataset, and then click **Trim**.
@@ -127,7 +127,7 @@ You can only vacuum fields once per day for each dataset.
To vacuum fields, follow these steps:
-1. Click **Settings > Datasets**.
+1. Click **Settings > Datasets and views**.
1. In the list, find the dataset where you want to vacuum fields, and then click **Vacuum fields** on the right.
1. Select the checkbox, and then click **Vacuum**.
@@ -145,7 +145,7 @@ No ingest usage associated with the shared dataset accrues to the receiving orga
To share a dataset with another Axiom organization:
1. Ensure you have the necessary privileges to share datasets. By default, only users with the Owner role can share datasets.
-1. Click **Settings > Datasets**.
+1. Click **Settings > Datasets and views**.
1. In the list, find the dataset that you want to share, and then click **Share dataset** on the right.
1. In the Sharing links section, click **+** to create a new sharing link.
1. Copy the URL and share it with the receiving user in the organization with which you want to share the dataset. For example, `https://app.axiom.co/s/dataset/{sharing-token}`.
@@ -157,7 +157,7 @@ Organizations can gain access to the dataset with an active sharing link. To dea
To delete a sharing link:
-1. Click **Settings > Datasets**.
+1. Click **Settings > Datasets and views**.
1. In the list, find the dataset, and then click **Share dataset** on the right.
1. To the right of the sharing link, click **Delete**.
1. Click **Delete sharing link**.
@@ -166,7 +166,7 @@ To delete a sharing link:
If your organization has previously shared a dataset with a receiving organization, and you want to remove the receiving organization’s access to the dataset, follow these steps:
-1. Click **Settings > Datasets**.
+1. Click **Settings > Datasets and views**.
1. In the list, find the dataset, and then click **Share dataset** on the right.
1. In the list, find the organization whose access you want to remove, and then click **Remove**.
1. Click **Remove access**.
@@ -176,7 +176,7 @@ If your organization has previously shared a dataset with a receiving organizati
If your organization has previously received access to a dataset from a sending organization, and you want to remove the shared dataset from your organization, follow these steps:
1. Ensure you have Delete permissions for the shared dataset.
-1. Click **Settings > Datasets**.
+1. Click **Settings > Datasets and views**.
1. In the list, click the shared dataset that you want to remove, and then click **Remove dataset**.
1. Enter the name of the dataset, and then click **Remove**.
@@ -196,7 +196,7 @@ When you specify a data retention period for a dataset that’s shorter than the
To change the data retention period for a dataset, follow these steps:
-1. Click **Settings > Datasets**.
+1. Click **Settings > Datasets and views**.
1. In the list, find the dataset for which you want to change the retention period, and then click **Edit dataset retention** on the right.
1. Enter a data retention period. The custom retention period must be greater than 0 days.
1. Click **Submit**.
@@ -209,6 +209,6 @@ Deleting a dataset deletes all data contained in the dataset.
To delete a dataset, follow these steps:
-1. Click **Settings > Datasets**.
+1. Click **Settings > Datasets and views**.
1. In the list, click the dataset that you want to delete, and then click **Delete dataset**.
1. Enter the name of the dataset, and then click **Delete**.
diff --git a/reference/field-restrictions.mdx b/reference/field-restrictions.mdx
index 749f4ac4..99d74351 100644
--- a/reference/field-restrictions.mdx
+++ b/reference/field-restrictions.mdx
@@ -5,7 +5,7 @@ sidebarTitle: Limits
keywords: ['axiom documentation', 'documentation', 'axiom', 'reference', 'settings', 'field restrictions', 'time stamp', 'time stamp field', 'limits', 'requirements', 'pricing', 'usage']
---
-{/* TODO: Rename this file it does not reflect the content. */}
+{/* TODO: Rename this file because it doesn’t reflect the content. */}
import IngestDataLimits from "/snippets/ingest-data-limits.mdx"
diff --git a/reference/query-hours.mdx b/reference/query-hours.mdx
index a44964aa..cf918a53 100644
--- a/reference/query-hours.mdx
+++ b/reference/query-hours.mdx
@@ -4,7 +4,7 @@ description: "This page explains how to calculate and manage query compute resou
keywords: ['query', 'gb hours']
---
-{/* TODO: Rename this file it does not reflect the content. */}
+{/* TODO: Rename this file because it doesn’t reflect the content. */}
Axiom measures the resources used to execute queries in terms of GB-hours.
diff --git a/reference/regions.mdx b/reference/regions.mdx
index 7e47bee3..5b8c71a0 100644
--- a/reference/regions.mdx
+++ b/reference/regions.mdx
@@ -4,7 +4,7 @@ description: 'This page explains how to work with Axiom based on your organizati
---
-Axiom will soon support a unified multi-region model to manage data across the US, EU (and beyond) from a single organization; contact the Axiom team to learn more.
+Axiom will soon support a unified multi-region model to manage data across the US, EU (and beyond) from a single organization. [Contact Axiom](https://www.axiom.co/contact) to learn more.
In Axiom, your organization can use one of the following regions:
diff --git a/reference/settings.mdx b/reference/settings.mdx
index 53024377..a68471f3 100644
--- a/reference/settings.mdx
+++ b/reference/settings.mdx
@@ -5,7 +5,7 @@ description: 'This section explains how to configure Role-Based Access Control (
keywords: ['rbac', 'api token', 'personal token', 'billing', 'dataset', 'integrations', 'teams', 'profile', 'user settings']
---
-{/* TODO: Rename this file it does not reflect the content. */}
+{/* TODO: Rename this file because it doesn’t reflect the content. */}
import AccessToDatasets from "/snippets/access-to-datasets.mdx"
diff --git a/restapi/api-limits.mdx b/restapi/api-limits.mdx
index ad8b2219..aa327b25 100644
--- a/restapi/api-limits.mdx
+++ b/restapi/api-limits.mdx
@@ -57,7 +57,7 @@ Alongside data volume limits, we also monitor the rate of ingest requests.
If an organization consistently sends an excessive number of requests per second,
far exceeding normal usage patterns, we reserve the right to suspend their ingest
to maintain system stability and ensure fair resource allocation for all users.
-To prevent exceeding these rate limits, it is highly recommended to use batching clients,
+To prevent exceeding these rate limits, use batching clients,
which can efficiently manage the number of requests by aggregating data before sending.
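As an illustration of client-side batching, here’s a minimal sketch using the Axiom Go client, which accepts many events in a single ingest request. It assumes `AXIOM_TOKEN` (and, for personal tokens, `AXIOM_ORG_ID`) are set in the environment; the dataset name and event fields are placeholders.

```go
package main

import (
	"context"
	"log"

	"github.com/axiomhq/axiom-go/axiom"
)

func main() {
	ctx := context.Background()

	// The client reads its configuration from the environment.
	client, err := axiom.NewClient()
	if err != nil {
		log.Fatal(err)
	}

	// Collect events locally and send them in one request instead of
	// issuing one request per event.
	batch := []axiom.Event{
		{"action": "signup", "user": "alice"},
		{"action": "login", "user": "bob"},
	}

	if _, err := client.IngestEvents(ctx, "DATASET_NAME", batch); err != nil {
		log.Fatal(err)
	}
}
```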
## Limits on ingested data
diff --git a/restapi/introduction.mdx b/restapi/introduction.mdx
index ba14ce14..c88a47dc 100644
--- a/restapi/introduction.mdx
+++ b/restapi/introduction.mdx
@@ -88,7 +88,7 @@ Below is a list of the types of data used within the Axiom API:
| **Map** | A data structure with a list of values assigned to a unique key. | \{ "key": "value" \} |
| **List** | A data structure with only a list of values separated by a comma. | ["value", 4567, 45.67] |
-## What's next
+## What’s next
- [Ingest data via API](/restapi/ingest)
- [Query data via API](/restapi/query)
\ No newline at end of file
diff --git a/send-data/aws-firelens.mdx b/send-data/aws-firelens.mdx
index 213b69cb..8781d0e6 100644
--- a/send-data/aws-firelens.mdx
+++ b/send-data/aws-firelens.mdx
@@ -26,7 +26,7 @@ Here’s a basic configuration for using FireLens with Fluent Bit to forward log
## Fluent Bit configuration for Axiom
-You'll typically define this in a file called `fluent-bit.conf`:
+You’ll typically define this in a file called `fluent-bit.conf`:
```ini
[SERVICE]
@@ -60,7 +60,7 @@ Read more about [Fluent Bit configuration here](/send-data/fluent-bit)
## ECS task definition with FireLens
-You'll want to include this within your ECS task definition, and reference the FireLens configuration type and options:
+You’ll want to include this within your ECS task definition, and reference the FireLens configuration type and options:
```json
{
diff --git a/send-data/aws-s3.mdx b/send-data/aws-s3.mdx
index 0d82c8ec..e6b6ad8f 100644
--- a/send-data/aws-s3.mdx
+++ b/send-data/aws-s3.mdx
@@ -143,10 +143,10 @@ The `.log` extension doesn't guarantee any specific format. Log files might cont
- Application-specific formats (Apache, Nginx, ELB, etc.)
- Custom formats with quoted strings and special characters
-The example code includes format detection for common formats, but you'll need to customize this based on your specific log structure.
+The example code includes format detection for common formats, but you’ll need to customize this based on your specific log structure.
#### Example: Custom parser for structured logs
-For logs with a specific structure (like AWS ELB logs), you have to implement a custom parser. Here's a simplified example:
+For logs with a specific structure (like AWS ELB logs), you have to implement a custom parser. Here’s a simplified example:
```py
import shlex
diff --git a/send-data/cribl.mdx b/send-data/cribl.mdx
index fe4264c5..67e5f0b5 100644
--- a/send-data/cribl.mdx
+++ b/send-data/cribl.mdx
@@ -73,7 +73,7 @@ In the Body Template, input `{{_raw}}`. This forwards the raw log event to Axiom
5. Save and enable the destination:
-After you've finished configuring the destination, save your changes and make sure the destination is enabled.
+After you’ve finished configuring the destination, save your changes and make sure the destination is enabled.
## Set up log forwarding from Cribl to Axiom using the Syslog destination
@@ -125,4 +125,4 @@ Open Cribl’s UI and navigate to **Destinations > Syslog**. Click on `+` Add Ne
4. Save and enable the destination
-After you've finished configuring the destination, save your changes and make sure the destination is enabled.
+After you’ve finished configuring the destination, save your changes and make sure the destination is enabled.
diff --git a/send-data/kubernetes.mdx b/send-data/kubernetes.mdx
index 0bc9a7ad..80849b00 100644
--- a/send-data/kubernetes.mdx
+++ b/send-data/kubernetes.mdx
@@ -391,35 +391,35 @@ metadata:
data:
fluent-bit.conf: |-
[SERVICE]
- Flush 1
- Log_Level debug
- Daemon off
- Parsers_File parsers.conf
- HTTP_Server On
- HTTP_Listen 0.0.0.0
- HTTP_Port 2020
+ Flush 1
+ Log_Level debug
+ Daemon off
+ Parsers_File parsers.conf
+ HTTP_Server On
+ HTTP_Listen 0.0.0.0
+ HTTP_Port 2020
[INPUT]
- Name tail
- Tag kube.*
- Path /var/log/containers/*.log
- Parser docker
- DB /var/log/flb_kube.db
- Mem_Buf_Limit 7MB
- Skip_Long_Lines On
- Refresh_Interval 10
+ Name tail
+ Tag kube.*
+ Path /var/log/containers/*.log
+ Parser docker
+ DB /var/log/flb_kube.db
+ Mem_Buf_Limit 7MB
+ Skip_Long_Lines On
+ Refresh_Interval 10
[FILTER]
- Name kubernetes
- Match kube.*
- Kube_URL https://kubernetes.default.svc:443
- Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
- Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
- Kube_Tag_Prefix kube.var.log.containers.
- Merge_Log On
- Merge_Log_Key log_processed
- K8S-Logging.Parser On
- K8S-Logging.Exclude Off
+ Name kubernetes
+ Match kube.*
+ Kube_URL https://kubernetes.default.svc:443
+ Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+ Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token
+ Kube_Tag_Prefix kube.var.log.containers.
+ Merge_Log On
+ Merge_Log_Key log_processed
+ K8S-Logging.Parser On
+ K8S-Logging.Exclude Off
[OUTPUT]
Name http
diff --git a/send-data/logstash.mdx b/send-data/logstash.mdx
index 0b5355b8..7cd37e03 100644
--- a/send-data/logstash.mdx
+++ b/send-data/logstash.mdx
@@ -223,7 +223,7 @@ output{
-This configuration creates a new event named `cloned_event` that is a clone of the original event.
+This configuration creates a new event named `cloned_event` that’s a clone of the original event.
## GeoIP filter plugin
diff --git a/send-data/methods.mdx b/send-data/methods.mdx
index dfaca285..bc204f3b 100644
--- a/send-data/methods.mdx
+++ b/send-data/methods.mdx
@@ -62,7 +62,7 @@ When you’re ready to send events continuously, Axiom supports a wide range of
| [Syslog Proxy](/send-data/syslog-proxy) | Forward syslog data with transformation |
| [Tremor](/send-data/tremor) | Event processing system for complex workflows |
| [Winston](/guides/winston) | Popular Node.js logging library |
-| [Zap](/guides/zap) | Uber's fast, structured Go logger |
+| [Zap](/guides/zap) | Uber’s fast, structured Go logger |
## Amazon Web Services (AWS)
@@ -94,7 +94,6 @@ The following examples show how to send data using OpenTelemetry from various la
| [OpenTelemetry Python](/guides/opentelemetry-python) | Python Flask/FastAPI example |
| [OpenTelemetry Ruby](/guides/opentelemetry-ruby) | Ruby on Rails with OpenTelemetry |
-
-If you don't see a method you're looking for, please [contact](https://www.axiom.co/contact) the Axiom team for support.
+If you don’t see a method you’re looking for, [contact the Axiom team](https://www.axiom.co/contact) for support.
\ No newline at end of file
diff --git a/send-data/reference-architectures.mdx b/send-data/reference-architectures.mdx
index 17f7db76..253dbc67 100644
--- a/send-data/reference-architectures.mdx
+++ b/send-data/reference-architectures.mdx
@@ -33,7 +33,7 @@ Most organizations find it easy to send data to Axiom. If you are having trouble
### Agent pattern
-In this model, a lightweight collector runs as an agent on every host or as a sidecar in every pod. It's responsible for collecting telemetry from applications on that specific node and forwarding it directly to Axiom.
+In this model, a lightweight collector runs as an agent on every host or as a sidecar in every pod. It’s responsible for collecting telemetry from applications on that specific node and forwarding it directly to Axiom.
**Best for:** Capturing rich, host-specific metadata (e.g., `k8s.pod.name`, `host.id`) and providing a resilient, decentralized collection point with local buffering.
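To make the agent pattern concrete, the sketch below shows the application side of that arrangement: an OpenTelemetry-instrumented Go service exporting traces to a node-local collector agent, which then forwards them to Axiom. The `localhost:4317` endpoint and the tracer name are illustrative assumptions, not values from this guide.

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	ctx := context.Background()

	// Export to the node-local collector agent; the agent forwards to Axiom.
	exporter, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint("localhost:4317"), // assumed agent address
		otlptracegrpc.WithInsecure(),                 // plaintext to the local agent only
	)
	if err != nil {
		log.Fatal(err)
	}

	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	defer func() { _ = tp.Shutdown(ctx) }()
	otel.SetTracerProvider(tp)

	// Application code then creates spans as usual.
	_, span := otel.Tracer("example").Start(ctx, "do-work")
	span.End()
}
```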
@@ -84,11 +84,11 @@ flowchart LR
## Tool recommendations
-Both the OpenTelemetry Collector and Vector are excellent choices that can be deployed in either an Agent or Aggregator pattern. The best choice depends on your team's existing ecosystem and primary use case.
+Both the OpenTelemetry Collector and Vector are excellent choices that can be deployed in either an Agent or Aggregator pattern. The best choice depends on your team’s existing ecosystem and primary use case.
### OpenTelemetry Collector
-The [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) is the CNCF-backed, vendor-neutral standard for collecting and processing telemetry. It's the ideal choice when your organization is standardizing on the OpenTelemetry framework for all signals (logs, metrics, and traces).
+The [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) is the CNCF-backed, vendor-neutral standard for collecting and processing telemetry. It’s the ideal choice when your organization is standardizing on the OpenTelemetry framework for all signals (logs, metrics, and traces).
**Use the OTel Collector when:**
* You are instrumenting your applications with OpenTelemetry SDKs.
@@ -170,7 +170,7 @@ Replace `SINK_ID` with the sink ID.
Both are first-class choices for sending data to Axiom. Your decision should be based on whether you need a unified OTel pipeline or a specialized, high-performance log processing tool.
-### What's next?
+### What’s next?
* Explore the complete range of options for sending data in the [Methods](/send-data/methods) page.
* For direct ingestion, see the [Axiom REST API](/restapi/introduction).
\ No newline at end of file
diff --git a/send-data/vector.mdx b/send-data/vector.mdx
index 0a126a90..e9ad112e 100644
--- a/send-data/vector.mdx
+++ b/send-data/vector.mdx
@@ -313,7 +313,7 @@ Example `vrl` file:
# Set time explicitly rather than allowing Axiom to default to the current time
. = set!(value: ., path: ["_time"], data: .timestamp)
-# Remove the original value as it is effectively a duplicate
+# Remove the original value as it’s effectively a duplicate
del(.timestamp)
```
diff --git a/styles/script.js b/styles/script.js
index 4fa40514..fa26b268 100644
--- a/styles/script.js
+++ b/styles/script.js
@@ -12,7 +12,7 @@ if (document.querySelectorAll("a")) {
}
}
-// Make top-left logo link to marketing site. The event listener is necessary to replace Mintlify's default event listener.
+// Make top-left logo link to marketing site. The event listener is necessary to replace Mintlify’s default event listener.
if (document.querySelector("#sidebar a")) {
const logoLink = document.querySelector("#sidebar a");
diff --git a/vale/styles/config/vocabularies/docs/accept.txt b/vale/styles/config/vocabularies/docs/accept.txt
index fd9b16fe..fcd027a6 100644
--- a/vale/styles/config/vocabularies/docs/accept.txt
+++ b/vale/styles/config/vocabularies/docs/accept.txt
@@ -31,6 +31,15 @@ Tailscale
tailnet
Firehose
k8s
+regex
+url
+functionality
+content type
+admin
+data are
+What is
+OAuth2
+Me
[Aa]xiom
[Dd]isable
diff --git a/vale/styles/docs/curly-quotation.yml b/vale/styles/docs/curly-quotation.yml
new file mode 100644
index 00000000..2c7168ec
--- /dev/null
+++ b/vale/styles/docs/curly-quotation.yml
@@ -0,0 +1,12 @@
+extends: substitution
+message: "Use '%s' instead of '%s'."
+level: suggestion
+action:
+ name: replace
+swap:
+ (\w+)'s: $1’s
+ (\w+)'ve: $1’ve
+ (\w+)'re: $1’re
+ (\w+)'d: $1’d
+ (\w+)'ll: $1’ll
+ (\w+)n't: $1n’t
\ No newline at end of file
diff --git a/vale/styles/docs/horizontal-line.yml b/vale/styles/docs/horizontal-line.yml
deleted file mode 100644
index 5531dc2f..00000000
--- a/vale/styles/docs/horizontal-line.yml
+++ /dev/null
@@ -1,7 +0,0 @@
-extends: existence
-message: "Don't use horizontal lines and empty paragraphs to structure content."
-nonword: true
-level: error
-scope: raw
-tokens:
- - '\n\n---'
\ No newline at end of file
diff --git a/vale/styles/docs/word-choice.yml b/vale/styles/docs/word-choice.yml
index 13387444..b6f4ff3d 100644
--- a/vale/styles/docs/word-choice.yml
+++ b/vale/styles/docs/word-choice.yml
@@ -10,5 +10,4 @@ swap:
(?:[Ee])xplore (?:page|view): Query tab
Data Explorer: Query tab
(?:[Ss])tream (?:page|view): Stream tab
- (?:[Cc])olumn: field
Explore tab: Query tab
\ No newline at end of file