Welcome to the NoSQLBench Quick Byte, the first session in a “Getting Started” series for
+NoSQLBench. This session introduces a new Cassandra Query Language (CQL) starter workload now
+available in version 5 of NoSQLBench.
Now, we are ready to run the cql-starter NoSQLBench scenario.
+
Locate NB5
+
Navigate via your local command line to where the nb5 binary was previously downloaded.
+
Verify
+
Ensure that issuing the following command identifies the workload used for this session.
+
./nb5 --list-workloads | grep cql-starter
+
+
Example output:
+
/activities/baselines/cql-starter.yaml
+
+
Optional step
+
An alternative is to copy the workload configuration listed below to your own local file in a
+folder of your choosing. You can name it whatever you like, as you will specify the absolute
+file path directly when issuing the scenario command.
+
CQL workload template
+
This YAML file is designed as a basic foundation for continuing to learn NoSQLBench
+capabilities as well as a starting point for customizing for your own testing needs.
+
You will notice that the number of cycles is minimal, to support local testing and to confirm that your configuration is constructed properly. When customizing these for real-world tests, the values can be set to millions or more! That is where the full power of NoSQLBench shines, generating critical metrics for analysis to make a system more robust.
Before running the NoSQLBench scenario, let’s take a look at the layout of the file. Most of this layout structure is shared by all NB5 workload files, so it reveals a large amount of the basics. A file like this is called a workload template.
+
Starting from the top of the workload template, the primary sections include:
+
+
Description - A way to describe what the workload does.
+
Scenarios - A set of named scenarios that capture the intent of the workload and define it in terms of the various blocks (e.g. schema, rampup, and main).
+
Params - Optional parameters of interest to reference for applying values.
+
Bindings - Named recipes for generated data. These are referenced in block operations.
+
Blocks - Where the labeled operations reside (e.g. schema, rampup, and main).
+
+
Schema - A block section where the schema is actually defined and created.
+
Rampup - A block section for data setup that becomes the backdrop for testing; it’s the density of data outside the metrics collected in the main block.
+
Main - A block section that is the target of metrics collection activities.
+
+
+
+
This may look overwhelming at first glance, but the magic of what can be done for load testing target resources becomes more apparent as settings are tweaked for various test cases.
+
Basic Operations
+
The workload operations in the cql-starter are quite basic, and this is on purpose. The intent
+is to focus on a simple set of read and write operations to understand how to work with
+NoSQLBench and Cassandra using basic, direct CQL.
+
Table and Keyspace
+
For the default scenario workload, a simple table named ‘cqlstarter’ will be created along with a keyspace named ‘starter’. There will be three fields for our table:
+
+
machine_id
+
message
+
time
+
+
The machine_id is a unique identifier type, the message field is a text type, and the time is a
+timestamp type.
+
Since the example is designed to be run locally, the Cassandra keyspace replication is defined
+using a SimpleStrategy with a replication factor of one.
+
WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '<<rf:1>>'}
+
+
Default scenario
+
For this session, the ‘default’ scenario is being used.
One may notice there is an ‘astra’ scenario included in the file with its own set of activities
+defined (e.g. -astra). References to astra are simply there to show how additional
+scenarios can be defined in a single workload file.
This illustrates how flexible and customizable the workload file can become. The names are customizable and can be tailored to describe the test case for any business or technical domain.
+
Bindings
+
During inserts, values for our three fields will come from the bindings section of the file. Basic examples are included in the cql-starter, but they illustrate how bindings supply values to be used by operations. Again, these are basic, just to illustrate how binding functions can be utilized.

Notice how we can reference text from a file to be used for our message value. Nothing fancy, but it illustrates how tests can leverage external information from files, decoupling input from the workload file itself. Think of this for things like secret token references and other values that need to be kept external to the workload file.
Note: The Discard() function is used to indicate a no-op as the initial message value. This may change in the future, but for now it is a necessity because bindings default to Long values; by default, the binding's value is 0L. This is why the rampup_message was included for illustration, as it uses a ToString() function to assign a string value.

When the workload is run after uncommenting the rampdown, selecting the content again using cqlsh returns a table that has been truncated.
+
Next Steps
+
Check out the NoSQLBench getting started section and the details of its capabilities for your next testing initiative. This includes a number of built-in workloads that you can start from for more advanced scenarios.
+
Want to contribute?
+
It’s worth mentioning, NoSQLBench is open source and we are looking for contributions to expand
+its features! Head on over to the contributions
+page to find out more.
+
We will continue to have more Quick Bytes for NoSQLBench in the near future.
Welcome to the NoSQLBench Quick Byte, the second session in a “Getting Started” series for NoSQLBench. This session introduces a new Http-Rest Starter workload now available in version 5 of NoSQLBench.
+
+
+
If you haven't heard of NoSQLBench, check out our introduction material here.
+
+
+
If you already have a foundation with NoSQLBench and would like to understand what's included in the most recent version, check out the release notes here.
+
+
+
If you would like to start with the first session in the NoSQLBench “Getting Started” series, check it out here.
+
+
+
This session illustrates the use of http-rest methods, using NoSQLBench v5, along with a Docker deployment of an open source data gateway called Stargate. For more information about Stargate, check out the repository here.
+
In comparison to the previous cql-starter, this starter focuses on http-rest interactions with the data gateway itself instead
+of via a CQL driver’s interaction with Cassandra.
+
Let’s get rolling and learn about http-rest operations!
Navigate to your local Stargate repository and execute the specified script.
+
cd ./stargate/docker-compose/cassandra-4.0/
+
+./start_cass_40_dev_mode.sh
+
+
Verify the Stargate services are started and healthy.
+
7d0c9076153c stargateio/graphqlapi:v2 "/usr/local/s2i/run" - Up About a minute (healthy)
+
+2757157aa423 stargateio/restapi:v2 "/usr/local/s2i/run" - Up About a minute (healthy)
+
+b0c00f0bdd56 stargateio/docsapi:v2 "/usr/local/s2i/run" - Up About a minute (healthy)
+
+1ab290e89dc6 stargateio/coordinator-4_0:v2 "./starctl" - Up 2 minutes (healthy)
+
+
Running the scenario
+
Now, we are ready to run the http-rest-starter NoSQLBench scenario.
+
Navigate to NB5 binary downloaded & identify workload
+
./nb5 --list-workloads | grep http-rest-starter
+
+
Example output:
+
/activities/baselines/http-rest-starter.yaml
+
+
Note: this scenario resides in the adapter-http parent directory for the repository.
+
Optional step
+
An alternative is to copy the workload configuration listed below to your own local file in a folder of your choosing. You can name it whatever you like, as you will specify the absolute file path directly when issuing the scenario command.
+
Workload file
+
This workload file is designed as a basic foundation for continuing to learn NoSQLBench capabilities as well as a starting point for customizing. You will notice the cycle values are minimal to support local testing. Adjust as needed for your own usage.
Before running the scenario, let’s take a look at the layout of the file. Most of this will be the same layout structure used in most workloads, so this example reveals a large amount of the foundational structure. In addition, this scenario introduces blocks with HTTP methods included, such as:
+
+
GET
+
POST
+
DELETE
+
+
Workload layout
+
As a review, the primary sections of a NoSQLBench workload file include:
+
+
Description - A textual description of what the workload does.
+
Scenarios - A set of named scenarios that capture the intent of the workload and define it in terms of the various blocks (e.g. schema, rampup, and main).
+
Params - Optional parameters of interest to reference for applying values.
+
Bindings - Named recipes for generated data. These are referenced in block operations.
+
Blocks - Where the labeled operations reside (e.g. schema, rampup, and main).
+
+
Schema - A block section where the schema is actually defined and created.
+
+
+
Rampup - Block section for data setup that becomes the backdrop for testing; it’s the density of data outside the metrics collected in the main block.
+
Main - Block section that is the target of metrics collection activities.
+
+
Testing operations
+
The workload operations in the http-rest-starter are quite basic in form, and this is intentional.
+
The intent is to focus on a simple set of read, write, and delete operations to understand how to work with
+NoSQLBench and Stargate (a data gateway) with http-rest operations.
+
Table and keyspace
+
For the default scenario, a simple table named http_rest_starter will be created with a keyspace named starter.
+
There will be two fields for our table, key and value, both with types of text.
Let’s break down the bindings to understand how they will be used as values in various operations.
+
+
request_id - represents a unique ID used when making the http-rest calls.
+
auto_gen_token - this binding uses a newly added function Token(), providing the generation of a
+token required by Stargate. If an auth_token value is specified, the rest of the values passed to the Token function are ignored, as the logic to generate
+a new token is not invoked. If the auth_token is not specified, the auth_uri can be specified along with the credentials used
+for requesting a token generation. Note that the last 3 arguments all have defaults when customizations aren't required.
+
seq_key and seq_value - are values generated for use by rampup write operations.
+
rw_key and rw_value - are values generated for use by the main read and write operations.
Here, the stargate_host indicates that we are targeting the localhost services running in Docker. The port and other URL specifics are included in each of the block operations.
+
Examine results
+
After the workload has been run, let’s take a look at the results.
+
docker container exec -it cass40-stargate-coordinator-1 sh
+
+
Stargate log activity
+
Here you can poke around at the system.log to view the operations that were executed when running the http-rest-starter.
+
cd /stargate/log
+
+tail -100 system.log
+
+
Next steps
+
More getting started?
+
Check out the NoSQLBench getting started section and the details of its capabilities for your next testing initiative. You can find the starter details here.
+
More Http-Rest adapter information?
+
There are a number of http-rest examples in NoSQLBench.
+In fact, they expand on the use of the Stargate data gateway covering topics such as:
+
+
Documents API
+
GraphQL (CQL first approach)
+
GraphQL (Schema first approach)
+
+
Want to contribute?
+
It’s worth mentioning, NoSQLBench is open source, and we are looking for contributions to expand its features!
+Head over to the contributions page to find out more.
+
Need more advanced scenarios?
+
There are a number of pre-built scenarios that exist here.
+
We will continue to have more Quick Bytes for NoSQLBench in the near future.
In order to keep the code base tidy, here are the coding standards we try to observe:
+
Unit Tests
+
Unit Test Coverage
+
We really like unit tests where it makes sense. It usually makes sense to write unit tests!
+However, we aren't chasing 100% code coverage. We do measure it, and we do intend to keep it
+from going down past some reasonable level. We will start to pull up the code coverage
+requirements for each module over time, but only to a reasonable level.
+
Unit Test Logging
+
Don't write to stdout or stderr (System.out / System.err) in your unit tests. Instead, use a logger, as in the sketch below.
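Here is a minimal sketch of what that can look like, assuming JUnit 5 and the Log4j2 API (the project's standard logging implementation, as noted under Logging Subsystems below); the class name and messages are purely illustrative:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.junit.jupiter.api.Test;

public class ExampleLoggingTest {

    // One static logger per test class keeps output quiet by default,
    // and visible when the console level is raised (e.g. with -v).
    private static final Logger logger = LogManager.getLogger(ExampleLoggingTest.class);

    @Test
    public void showsLoggerUsage() {
        int computed = 6 * 7;
        logger.debug("intermediate value was {}", computed);
        logger.info("finished checking the example invariant");
    }
}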
The logging subsystem is used for nearly all console IO with NoSQLBench. This is a drastic simplification for developers and maintainers. In general, users don't want to be notified of anything except when they do, and when they do, they can simply use -v to turn the default console logging level up from WARN to INFO and so on. This applies to tests as well. We can keep console IO low for unit tests and even use async logging in the background to speed up builds and keep the build output tidy for when you need to troubleshoot actual build errors.
+
Integrated Test Coverage
+
Since NB is a layered runtime design, we need to be specific when we are talking about
+integrated tests. Once we have stabilized the new integrated test harness, it will be better
+documented for contributors. This is a work in progress.
There are multiple ways to contribute to NoSQLBench. The project is growing, and it needs many
+hands to help it be as awesome as it can be. Ways to contribute include: improving the
+documentation, submitting bug reports, reproducing bugs, providing bugfixes, submitting ideas
+for new features, helping with design discussion, contributing drivers or other core features,
+or even improving the CI/CD in github actions.
+
There is also a need to get the word out and show new users how to use various features, so if
+this is something you particularly enjoy, please jump in.
+
Other than that, we're always happy to find ways for contributors to get engaged, so if you are
+interested but don't know how to get started, join us on our
+discord server and we can figure something out.
+
CLA or contributor covenants
+
NoSQLBench does not presently have a contributor covenant or CLA (Contributor Licensing Agreement),
+but this may be added if necessary. In general, we try to keep the friction low for new
+contributors. The main two things you have to agree to contribute to NoSQLBench are:
In your license header, use the project name nosqlbench as the copyright holder.[1]

[1]: If this is not agreeable to anybody, we will need to set up a CLA or something similar to ensure that the project can evolve as needed without chasing down copyright holders or replacing abandoned code in the future. We'd like to keep things simpler than this, but we will consider it if significant contributions justify a change.
The NoSQLBench project has a few dedicated builders. We would like to diversify the
+maintainer base of the project as well as build bridges to new communities in the NoSQL ecosystem.
+
This is good for the community and the maintainers, and helps foster trust in those that depend
+on the project.
+
If you are looking for an interesting project to work on, we're eager to work with you.
+Whether you are a seasoned builder or just starting out, there is a way for us to get you
+included into the project that is fun and satisfying for everyone.
All new changes to the project should be made in the form of a pull request. In certain cases,
+project maintainers may push directly in order to fix a CI/CD issue or similar, but otherwise
+everyone should expect to submit pull requests and get at least 1 maintainer approval before
+having their code merged.
+
Maintainers may make suggestions to your PR before it can be approved. This is done via the
+conversations feature directly in github. If there are required changes before approval, they
+will be described clearly with a request to make the changes before further review. If you are
+asked to make changes, all you have to do is refine the branch you submitted the PR from and
+push the changes up.
+
note: If you are unsure of your branch, and want to work on it further before review or merge,
+please submit it as a draft PR. This helps set expectations so that reviewers aren't studying
+incomplete submissions.
+
In order to make sure related issues are closed, you can add
+closing terms
+to the description.
+
When you are ready for review, move your PR from draft to "ready for review", and request one of
+the project maintainers for a review.
+
The rest of this page tells you what to expect from the maintainers during code review.
+
What is Accepted?
+
Generally speaking, any change which is non-trivial should already be discussed within the
project in order to make cooperation between contributors harmonious. Very trivial changes which are quick to review and self-evident as to what they fix or improve may be accepted with
+little pushback. Still, it is a courtesy to the rest of the developer community to document what
+you are working on in an issue and assign yourself to it first.
+
Licensing
+
All code submitted should have the APLv2 license header at the top, with the copyright set as
+
/*
+ * Copyright (c) 2022 nosqlbench
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
If this is not the case, then the build will fail. There is a handy way to fix this.
+After this happens, the license checker will automatically create some prototype files
+which include the changes needed. You can simply run a utility script in
scripts/accept_license.sh to write the updated files to the original names.
+
Coding Standards
+
Static Analysis
+
You should use a static analyzer if you have one. There is one that is used by the project for
+every build. It may flag significant issues in your PR, which may also keep it from being merged.
It is always a good idea to leave your PR in draft mode until you get a green light, as the CodeQL
+feedback will already give you something to fix if it finds anything.
+
Test Coverage
+
Any contribution of code should have an appropriate amount of built-in testing. This can be
+either as unit tests or as integrated tests. It is up to the judgement of the contributor what
+constitutes a sufficient level of testing, although more may be requested by code review.
+
The code coverage in unit and integrated tests will be improved over time. There may be
+build-time checks which warn you if the code coverage goes down with your added changes. This
+could cause it to fail during merge checks, so try to have a reasonable amount of test coverage
+in new code.
+
Dependencies
+
Be cautious about adding even more dependencies to the project. NoSQLBench is a rather large
+project due to the scope of what it does, including many driver modes. If you add dependencies
+without scanning for a suitable capability that is already in the project, it will cause
+dependency creep. This is bad. Try to help us keep the dependency tree well trimmed.
+
What is (generally) Accepted?
+
Incremental Improvements
+
Changes which are complementary or incremental to core NoSQLBench functionality are always good.
+As long as a change doesn't compromise existing functionality or otherwise compromise the
+integrity of the project it will generally be welcome.
+
Trivial Improvements
+
Changes which have already been discussed or are of such a trivial nature that they are
+self-describing don't require substantial ceremony. Use your best judgement here. If you are
+fixing a typo on the website, just do it. If you are adding a better description to some CI/CD
+wiring, just do it. Again, just use your best judgement about what should be discussed or agreed
+to beforehand.
+
Subsystem Enhancements
+
Changes which make the core of NoSQLBench easier, better, or more powerful for any user in the
NoSQL ecosystem are awesome. New drivers are awesome. This includes drivers from vendors. All
+changes will be subject to the platform standards and community guidelines, of course.
Large, surprising changes with no previous consensus or discussion may be declined with a request for discussion. This type of change is an unreasonable burden on any maintainer. If
+you want to make large changes, you must work together with the NoSQLBench builder community to
+make sure it fits well within the project, and is on-mission with the core outcomes of the project.
+
Untested or Untestable Code
+
You must design your contributions in a way that allows for testing.
This is partly documenting the current flow, partly aspirational. Within the first few releases
+of NB5, this should all be standard and reliable. This notice will be removed at that time.
+
Release Channels
+
We have a three track release system:
+
+
build artifacts are produced from every successful build on a branch.
+
This is a system of incremental promotion from build to preview to release, with documentation
+included. This allows us to preview new features and capabilities with the community in a safe
+way, only accepting previews which are up to release standards.
+
Repositories
+
Of course, managing this flow requires a few more repositories than usual, since we are hosting
+the docs sites on github pages. This means we need the main code repo for NoSQLBench, as well as
+a separate repo for each of the three docs sites:
All of the CI & CD build automation for NoSQLBench is hosted in github actions.
+
Release Criteria
+
Before a preview release can be promoted to a main release, the following must happen:
+
+
All high CVE alerts for potential issues must be addressed. We use multiple static analysis
+tools for this. Any high severity issues are required to be addressed, even if this means
+that they are flagged as a false positive after further review.
+
All integration tests must pass.
+
New functionality must be documented in the docs site. Ideally this includes a what's new update.
+
+
Release Notifications
+
When a new release is made, the community should be notified in the usual places.
This isn't active yet, but we intend to support:
Bugfixes, basic features, doc improvements, and any other change to the project that is
+well-defined, of reasonably small scope, and easily tested can be managed through issues and
+discussion on them. This is the most common way to make contributions to the project. Just
+follow the community guidelines in this section and you'll be making changes in no time.
+
Big Ideas
+
Ideas which require a degree of planning and design work should be discussed with the community
+and maintainers in discussions. There is an idea category
+that is already used for this. This is to ensure that everyone is on the same page about how
+something will work before work starts in earnest. Big ideas are awesome, but the hardest
+work of landing them successfully is in building consensus for how they should fundamentally work.
+
Prototypes
+
If you want to build a prototype of something and then present it to the project, we are always
+open to that. Sometimes a prototype is really the only way to effectively convey a vision. Still,
+doing this comes with certain risks. It's possible that your big idea is already in the works or
+in some form in the project already. If you aren't sure, it's always best to start a big idea
+conversation or approach us on discord to discuss it.
+
Issues and Assignment
+
If you are making contributions, it is important to get aligned with the other contributors
+before spending significant effort. One of the best ways to do this is to have an issue assigned
+to yourself. It can be frustrating for other developers to work on an issue that is already
+being solved.
You are using IntelliJ to work with the NoSQLBench code base.
+
You want IntelliJ to automatically apply the license files when needed on files you edit.
+
+
Steps
+
+
Enable the bundled copyright plugin
+
Under File | Settings | Editor | Copyright | Copyright Profiles, add one named 'aplv2'.
+
+
+
Set the copyright text (velocity template) to:
+
+
Copyright (c) $originalComment.match("Copyright \(c\) (\d+)", 1, "-", "$today.year")$today.year nosqlbench
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+
+
+
Set the Regex to detect copyright in comments to:
+
+
.*Licensed under the Apache License, Version 2.0.*
+
+
You can click the validate button to confirm that the velocity template works.
NoSQLBench has a sound architectural underpinning that makes the process of implementing new adapters as seamless and quick as possible. It provides a hierarchy of classes to extend and interfaces to implement, so you can avoid re-implementing the application infrastructure, which reduces both the complexity and the time involved in implementing a new adapter type. However, it is necessary to understand this existing architecture in order to use the provided building blocks effectively. The following is intended as a guide to developing new adapters to extend the functionality of NoSQLBench.
+
Prerequisites and Assumptions
+
+
It is assumed that the developer(s) have familiarized themselves with the NoSQLBench core concepts provided in the NoSQLBench documentation.
+
The ability to code in Java is required. NoSQLBench makes extensive use of lambda functionality. It is a requirement to understand and be comfortable using these principles to develop an adapter.
+
This guide is not specific to any particular client. It is assumed that the developer(s) have familiarity with the client they are developing an adapter for.
+
+
Internal Dependencies
+
The NoSQLBench project as a whole is composed of numerous modules and packages. For the purpose of custom adapter implementation, only two are required: adapters-api and nb-annotations.
The adapters-api and nb-annotations modules contain all of the base classes, interfaces and annotations required to implement a new adapter and flesh out its functionality.
+
+
Adapters - Every new adapter will, of course, include an Adapter class. This class will be a descendant of the BaseDriverAdapter class found here. NoSQLBench uses the Java Service Provider Interface (SPI) to define service providers. If you don’t know what that is, don't worry about it now. The tl;dr is that extending the BaseDriverAdapter and annotating your driver adapter class with the Service annotation will result in that class being added to META-INF/services under the specified class name, making it available as a service provider. NoSQLBench takes care of all the rest!
+
Ops - NoSQLBench is built around the ability to define operations (“Ops”) in a generic fashion (the op template) and translate them into actions. All of the basic plumbing for this is handled in the adapters-api module, from the yaml file loader to the definition of the various types of ops available to implement. Along with the Op types the adapters-api package defines the base classes and interfaces for Op Dispensers, which do the work of creating the Op, and the Op Mappers, which map the appropriate functionality based on the Op type presented. We’re going to get into the details of the available implementations in the next section.
+
+
Your First Adapter
+
+
+
Familiarize yourself with the native driver. You will need to know exactly what functionality you want to test and what the APIs for that functionality look like so you can plan for what your Op classes need to be able to do.
+
+
+
Create a new module in the root of the project using standard naming:
+
+
adapter-<typeName>, where <typeName> is the selector that will be used with driver=<typeName>, e.g. adapter-jdbc, adapter-pinecone, etc.
+
package naming should follow io.nosqlbench.adapter.<typeName>
+
A quick way to create a module is to copy an existing adapter's pom.xml into a new directory as pod.xml, modify the naming elements, and then rename pod.xml to pom.xml.
+
In most cases the only dependencies you should need are adapters-api and nb-annotations (as discussed in the previous section), as well as the api library published for the native driver you are adapting. If you find that you need to pull in additional dependencies, first verify whether they are included in another module already within the nb project to simplify the process.
+
Initially, do not add the module to the root pom.xml under modules. You can still build and test it on its own without requiring it to be part of the main build. Once it is ready to be included under the main build, add it to the modules list. At that time, also add it to the list of driver dependencies for the nb5 module, and it will be included in the runtime.
+
+
+
+
Implement your first Op.
+
+
Implement a POJO which implements one of the Op interfaces and represents a value type for a specific operation.
+
It is recommended to use RunnableOp by default. This means you only need to provide a run method to define whatever action this operation encapsulates. The op class itself should be extremely lightweight, as the logic for constructing the operation will take place in the dispenser. (See the sketch after this list.)
+
As a value type it must be repeatable.
+
It should capture the details of a single operation for diagnostics and debugging purposes. In database terminology, for example, this might be “insert”, or “delete”, or “update”.
+
+
+
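Below is a minimal sketch of such an op, using the hypothetical names MyOp and NativeClient (a stand-in for whatever native driver handle your adapter wraps); imports from the adapters-api module are omitted:

// Hypothetical value type for a single write operation.
// (RunnableOp comes from the adapters-api module; import omitted here.)
public class MyOp implements RunnableOp {

    private final NativeClient client; // hypothetical native driver handle
    private final String key;
    private final String value;

    public MyOp(NativeClient client, String key, String value) {
        this.client = client;
        this.key = key;
        this.value = value;
    }

    @Override
    public void run() {
        // Keep the op lightweight: all construction logic lives in the dispenser,
        // so run() only performs the one operation this value type represents.
        client.write(key, value);
    }
}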
+
Implement a minimal Adapter Space.
+
+
Stub a POJO which can hold instances of your native driver. This is your context, or adapter space.
+
This class should contain any logic needed to establish connectivity as well as any type of initialization required for the native driver.
+
The constructor of the adapter space should take a String and an NBConfiguration instance as its parameters. Any variables needed to initialize the environment should be accessible to the adapter space through the NBConfiguration instance. The adapter space class must define what these variables are by implementing the static getConfigModel() method. See existing adapter space implementations for details on this. A brief sketch follows this list.
+
+
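A minimal sketch, again using hypothetical names (MyAdapterSpace, NativeClient) and omitting adapters-api imports; the getConfigModel() body is left as a stub since the configuration builder calls are best copied from an existing adapter space:

// Hypothetical adapter space holding one native client instance per named space.
public class MyAdapterSpace {

    private final String spaceName;
    private final NativeClient client; // hypothetical native driver handle

    public MyAdapterSpace(String spaceName, NBConfiguration cfg) {
        this.spaceName = spaceName;
        // Read any options declared in getConfigModel() from cfg here,
        // then establish connectivity with the native driver.
        this.client = new NativeClient();
    }

    public NativeClient getClient() {
        return client;
    }

    public static NBConfigModel getConfigModel() {
        // Declare the configuration options this space understands here.
        // See an existing adapter space implementation for the builder API.
        return ConfigModel.of(MyAdapterSpace.class).asReadOnly();
    }
}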
+
+
Implement the DriverAdapter.
+
+
This class must extend the BaseDriverAdapter class. As the BaseDriverAdapter class is a templated type, this implementation should use the 2 classes previously created, i.e.
+MyDriverAdapter extends BaseDriverAdapter<MyOp, MyAdapterSpace>
+
Add the @Service annotation that makes it available for runtime service lookup and late binding.
+@Service(value = DriverAdapter.class, selector = "nativedrivertype")
+
Minimally this class must implement 2 methods, both of which should be marked as overriding the base class (a combined sketch follows this list):
+
+
getOpMapper - this method will return the OpMapper specific to the types of Op classes that will be created for this driver. As we have not yet implemented this class, for now it can be stubbed to simply return null. The signature should look like
+public OpMapper<MyOp> getOpMapper()
+
getSpaceInitializer - this method accepts an NBConfiguration instance as an argument and will return a function that can be used to instantiate the previously defined adapter space. In its simplest form it will simply pass the configuration along to the constructor for the adapter space. The signature should look like
+public Function<String, ? extends MyAdapterSpace> getSpaceInitializer(NBConfiguration cfg)
+
+
+
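Putting the pieces so far together, a sketch of the adapter class might look like the following (the selector, class names, and omitted imports are all placeholders):

// Hypothetical driver adapter, discoverable at runtime via its selector
// (users would reference it with driver=mydriver).
@Service(value = DriverAdapter.class, selector = "mydriver")
public class MyDriverAdapter extends BaseDriverAdapter<MyOp, MyAdapterSpace> {

    @Override
    public OpMapper<MyOp> getOpMapper() {
        // Stubbed for now; filled in once the op mapper exists
        // (see the "wire your classes together" step below).
        return null;
    }

    @Override
    public Function<String, ? extends MyAdapterSpace> getSpaceInitializer(NBConfiguration cfg) {
        // In its simplest form, pass the configuration through to the
        // adapter space constructor.
        return spaceName -> new MyAdapterSpace(spaceName, cfg);
    }
}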
+
+
+
Implement an OpDispenser for this Op type.
+
+
This must extend BaseOpDispenser, which is a templated class. The definition might look like this:
+public class MyOpDispenser extends BaseOpDispenser<MyOp, MyAdapterSpace>
+
The constructor must accept at least a DriverAdapter and an Op of the specified type, and it must call the super constructor, which requires the DriverAdapter and Op to be provided.
+
The constructor for this class will need at least the function for retrieving the relevant Adapter Space to facilitate the creation of the Op to be dispensed, and a ParsedOp object which is the operation as derived from the source yaml/JSON file to be converted to the appropriate op type by the dispenser.
+
The OpDispenser must also override the apply method defined in the BaseOpDispenser. For each test cycle this apply method will be called, and the OpDispenser will need to return the created Op for that cycle. The typical pattern used in the implementation of the OpDispenser is that at the time of construction it defines a LongFunction to create the Op that is dispensed when the apply method is called. The apply method itself is minimal and applies the input value (the cycle) to this function and dispenses the Op returned. (See the sketch after this list.)
+
This is another place where it should be noted that in an adapter of any complexity there will usually be multiple Op types, each of which will have its own OpDispenser class responsible for dispensing only that type of Op. Don’t implement a hierarchy of Op types and dispense them through a single OpDispenser via dynamic binding.
+
+
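The following sketch shows that pattern for the hypothetical MyOp, with assumed ParsedOp accessor names (check the ParsedOp API for the exact methods) and omitted imports:

// Hypothetical dispenser: resolves per-field functions once at construction,
// then builds one MyOp per cycle in apply().
public class MyOpDispenser extends BaseOpDispenser<MyOp, MyAdapterSpace> {

    private final LongFunction<MyOp> opFunction;

    public MyOpDispenser(DriverAdapter<MyOp, MyAdapterSpace> adapter,
                         ParsedOp op,
                         LongFunction<MyAdapterSpace> spaceFunction) {
        super(adapter, op);
        // The field names "key" and "value" are illustrative; query the
        // ParsedOp for whatever fields your op type expects.
        LongFunction<String> keyFunc = op.getAsRequiredFunction("key", String.class);
        LongFunction<String> valueFunc = op.getAsRequiredFunction("value", String.class);
        this.opFunction = cycle -> new MyOp(
            spaceFunction.apply(cycle).getClient(),
            keyFunc.apply(cycle),
            valueFunc.apply(cycle));
    }

    @Override
    public MyOp apply(long cycle) {
        return opFunction.apply(cycle);
    }
}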
+
+
Implement an OpMapper to create the OpDispenser
+
+
The OpMapper has a very simple job in the case of having only a single OpDispenser type: it creates the OpDispenser. In more complicated use cases the OpMapper receives an Op and interrogates it to determine the appropriate OpDispenser to create (more on this below). A sketch follows this list.
+
Your class should implement the OpMapper interface, with the templated type once again being your Op. It then needs to override the apply(ParsedOp) method to return the OpDispenser. The signature should look something like this:
+public OpDispenser<? extends MyOp> apply(ParsedOp op)
+
At this point you can treat the apply method largely as a pass-through and simply return a new OpDispenser instance (although take a look at the existing implementations, you will probably want to emulate the use of the space cache to pass the dispenser the space object as well).
+
The constructor for this class doesn’t have any hard requirements, but if you look at the existing implementations you will notice most of them include as an argument a new class we haven’t talked about yet, the DriverSpaceCache. Not to worry, this is simply a cache to hold your associated context, the details of which are handled once again by the NoSQLBench plumbing. In the next step we will look at how this is used in the Adapter class.
+
+
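A sketch of the single-dispenser case, using the same hypothetical names and the space cache described above (real implementations usually derive the space name from the op itself):

// Hypothetical op mapper for the single-op-type case.
public class MyOpMapper implements OpMapper<MyOp> {

    private final DriverAdapter<MyOp, MyAdapterSpace> adapter;
    private final DriverSpaceCache<? extends MyAdapterSpace> spaceCache;

    public MyOpMapper(DriverAdapter<MyOp, MyAdapterSpace> adapter,
                      DriverSpaceCache<? extends MyAdapterSpace> spaceCache) {
        this.adapter = adapter;
        this.spaceCache = spaceCache;
    }

    @Override
    public OpDispenser<? extends MyOp> apply(ParsedOp op) {
        // For simplicity every op shares the "default" space here; real
        // implementations usually resolve the space name per op.
        LongFunction<MyAdapterSpace> spaceFunction = cycle -> spaceCache.get("default");
        return new MyOpDispenser(adapter, op, spaceFunction);
    }
}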
+
+
Wire your classes together and get it to compile cleanly.
+
+
Go back to the Adapter type created in step 5. Now you’re going to fill in that getOpMapper method. In the majority of implementations, the body simply constructs and returns the op mapper, handing it the space cache (and, where needed, the configuration) that the base adapter manages for you.
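Continuing the sketch from the earlier steps (method names on BaseDriverAdapter such as getSpaceCache() are assumptions to verify against the current code base):

@Override
public OpMapper<MyOp> getOpMapper() {
    // The space cache is created and held by BaseDriverAdapter; we only
    // need to hand it to the op mapper.
    DriverSpaceCache<? extends MyAdapterSpace> spaceCache = getSpaceCache();
    return new MyOpMapper(this, spaceCache);
}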
As pointed out earlier this “just works” because NoSQLBench already provides the plumbing behind these calls. The BaseDriverAdapter handles the necessary initialization and storage of the appropriate type of space cache and NB configuration, and these can simply be passed in to the constructor for your OpMapper class!
+
You should now have:
+
+
An op class that implements the RunnableOp interface and defines a run method that does something specific to the native driver you are working with.
+
An adapter space class that encapsulates the context of the native driver and implements any logic necessary for initialization.
+
A driver adapter class that contains the logic to instantiate both the op mapper class and the adapter space class.
+
An op dispenser class that both contains the logic to construct instances of your op type from a ParsedOp object and implements the apply method to return new instances of your op type.
+
An op mapper class, as returned by the driver adapter, that implements the apply method to accept a ParsedOp instance and return an instance of the previously defined op dispenser class.
+
+
+
+
+
+
Further Reading
+
Use Cases With Multiple Op Types
+
In more sophisticated use cases a single op type will not be sufficient, as in the canonical example of a database where a user may want to perform a number of different operations such as inserting new records, reading existing records, updating, deleting, etc. In these cases it is necessary to define more than a single Op class, with a separate class for each of these operations. There should also be a separate OpDispenser class defined for each Op type, with the OpMapper class creating the OpDispenser type appropriate for the operation being performed.
+
The Op classes should remain as compact as possible, implementing only the basic functionality they are ascribed. The OpDispenser classes remain similar in structure, with each creating a function specific to the Op type it is associated with, to be called by the apply method at the time when the Op needs to be created. The difference in implementation is largely confined to how the OpDispenser class interacts with the ParsedOp object it receives in its constructor. The ParsedOp represents a single operation as defined by the source configuration yaml file passed at runtime. The yaml might contain any arbitrary number of different ops, each of which will be interpreted and result in a ParsedOp to be passed to the Mapper and Dispenser at runtime. Each dispenser can query the ParsedOp for the existence of the fields it expects to find defined for instantiation of the Op type it is responsible for, and define the creation functionality based on what it finds to be present.
+
In the cases where multiple Op types are defined, an OpType enum class should also be provided. This allows the OpMapper class to use the TypeAndTarget functionality provided by the ParsedOp API. getTypeAndTarget is a method exposed by the ParsedOp API which allows the caller to pass in as arguments the class of the enum, the expected class the resulting function should return, the type name, and the value name, and in return receive a TypeAndTarget object containing the enum type identification and a target function that will return the value associated with the type. This target function can be thought of as providing the “key” for the Op type in question. For example, in the pinecone adapter every operation needs to specify the database index the operation will run against, and that index is captured as a field of the op definition.
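In the op mapper, that capability can be used along the following lines. This is only a sketch: MyOpType, MyReadOpDispenser, and MyWriteOpDispenser are hypothetical, the "type"/"target" field names are illustrative, and the TypeAndTarget field names should be verified against the current ParsedOp API:

// Hypothetical enum of the op types this adapter understands.
enum MyOpType { read, write }

// A multi-type op mapper branches on the enum returned by getTypeAndTarget.
public class MyMultiOpMapper implements OpMapper<MyOp> {

    private final DriverAdapter<MyOp, MyAdapterSpace> adapter;

    public MyMultiOpMapper(DriverAdapter<MyOp, MyAdapterSpace> adapter) {
        this.adapter = adapter;
    }

    @Override
    public OpDispenser<? extends MyOp> apply(ParsedOp op) {
        // Ask the ParsedOp which op type this template names, and get a function
        // that yields the associated value (e.g. an index name) per cycle.
        TypeAndTarget<MyOpType, String> mapped =
            op.getTypeAndTarget(MyOpType.class, String.class, "type", "target");
        return switch (mapped.enumId) {
            case read  -> new MyReadOpDispenser(adapter, op, mapped.targetFunction);
            case write -> new MyWriteOpDispenser(adapter, op, mapped.targetFunction);
        };
    }
}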
If you are working on the main nosqlbench code base, you don't have to rebuild everything. There
+are a couple basic methods you can use to focus just on the module you are developing.
+
Dev Iteration
+
Maven Install
+
For driver development, it's best to build and install the existing artifacts from main to your Maven repo.
+This puts all the libraries into your local maven artifact repo (typically under $HOME/.m2/repo)
+and then allows you to build only the specific module you are working in.
+
# ensure you have java 17 on your path
+java -version
+# OR you have a java 17 in $JAVA_HOME/bin/java -version
+$JAVA_HOME/bin/java -version
+cd nosqlbench
+mvn install
+
+
You can then run mvn only in the module that you need.
+
cd adapter-mynewdriver
+mvn verify
+
+
Module Testing
+
NBLIBDIR
+
Another way to save time is to build the nbr module, which is the NoSQLBench Runtime Module.
+This comes in jar form and executable form, but it is not built by default. To enable it, you'll
+need to activate its run profile. You can either provide -Pnbr to mvn commands like mvn -Pnbr install, or set it in your IDE's Maven profile toggles.
+
Once you have nbr built, you can use it along with any DriverAdapter jars by setting the
+NBLIBDIR environment variable.
+
When this is set, it is split by colon, just as the $PATH variable would be.
+For each path, relative to the current working directory,
+
+
If it is a directory, then each jar file found under it is added to the classpath.
+
If it is a zip file, then each jar file found within it is added to the classpath.
+
If it is a jar file, then it is added directly to the class path.
+
+
This allows you to provide your driver in a separate jar and invoke it with the NoSQLBench
+runtime just as if it were bundled.
With NoSQLBench 5 and newer, there are some compatibility standards that we observe for
+dependencies. This is to ensure that various drivers and their upstream dependencies can work
+harmoniously in one runtime.
+
Java LTS
+
The latest Java LTS release should be used for every new major version. In some cases, newer Java
+LTS releases can be pulled into a minor release after sufficient testing.
+
The middle version number indicates the Java platform standard. For example nb5 version
+5.17.0 uses Java 17, which is the latest LTS version. The NoSQLBench project will use
+this version across the board, including JVM and language features.
+
No Shading
+
Keeping the project modular and easy to build is essential. In the past, there were some driver
+modules which would not play well together in the same runtime due to JNI or JNA level artifact
conflicts. Guava, for example, was used variously across many drivers and was an unending source
+of dependency conflicts.
+
Going forward, any module or dependency which requires shading in order to work alongside the other (unshaded) modules will not be included in the project. This is a necessary minimum standard to protect the sanity of the code base as well as the developers who work on it.
+
JPMS Modular jars
+
Ideally, any dependencies which are added have the necessary minimum module information to
+function as a JPMS modular jar. This is not requiring the full adoption of JPMS. It is a
+fairly minor task to make your artifacts work this way.
+
Logging Subsystems
+
The standard logging implementation in NoSQLBench is Log4J2. This was used because of its
+extensive runtime configuration support. Further, API stub implementations for SLF4J and others
+are included, but only using the later (JPMS-friendly) versions. Included modules should not
+implement their own logging subsystem, but instead should either use Log4J2 directly, or SLF4J
+using a modern version.
+
Dependency Management
+
If you are building a new module, be sure you understand the project layout. Leverage the
+existing module structure to minimize new library dependencies. If you are using a common
+library, there is a good chance it is already available and under version management in the
+mvn-defaults/pom.xml file. If it is not, then scan for the dependency and hoist it up to the
+mvn-defaults level if that makes sense for multiple modules depending on it.
+
Abandoned Code
+
If code is contributed which needs to be maintained by a vendor or other party, there will be an
+expectation that the contributor is at least willing to do bug fixes or resolve other
+user-affecting issues. If a module is considered abandoned for a significant period of time, it
+may be removed from the project.
NoSQLBench is a large project. It has a few basic layers of functionality which nest together
+like a system of modules and interfaces:
+
+
Core APIs
+
+
Annotations and Processors
+
+
+
Core Runtime
+
Virtual Data Subsystem
+
Extensions and Plugins
+
Driver Adapters
+
+
SPI and Modules
+
Nearly all of the functionality in NoSQLBench is wired together at runtime through a service
+discovery mechanism. This allows modules to be bundled within the project and packaged together
+as needed. At runtime, when a particular set of components is called up to be used together, the
+runtime uses Java's SPI mechanism to discover and map them by a name, called a selector.
+
You will see this throughout the project in the form of a @Service(...) annotation. Any component within the code base which needs to be realized at runtime must have this annotation. The built-in annotation processors do the packaging work to put these into the standard service manifest format.
+
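As a rough illustration, a service declaration looks like the following (the class and selector names are hypothetical, and imports from the nb-annotations and adapters-api modules are omitted):

// Declares this adapter as a runtime-discoverable DriverAdapter service.
// Users select it by its selector name, e.g. driver=myexample.
@Service(value = DriverAdapter.class, selector = "myexample")
public class MyExampleDriverAdapter extends BaseDriverAdapter<MyExampleOp, MyExampleSpace> {
    // ... implementation as described in the adapter development guide ...
}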
Driver Adapters
+
If you are building a driver adapter, you don't need to understand the whole project structure.
+You simply need to implement the DriverAdapter interface. You only need the adapters-api
+module as an upstream dependency.
+
Insert Module Graph Here
+
This will be a visual of all the core modules.
+
(Java) Package Naming
+
All of the modules in the project follow a minimum structural standard. The internal package
+names must have the prefix io.nosqlbench.[module-name] where the module name has its
+non-words replaced with package boundaries.
+
This means that module named adapter-diag lives in a directory under the main project of the
+same name. It has a package named io.nosqlbench.adapter.diag in which all of its driver code
+resides.
+
This makes it easy to know what module any package is part of simply from the name.
NoSQLBench version 5 is packaged directly as a Linux binary named nb5 and as an executable Java 17 jar named nb5.jar. All releases are available at [NoSQLBench Releases]. The Linux binary is recommended, since it comes with its own JVM and eliminates the need to manage Java downloads.
+
Requirements
+
The nb5 binary requires Linux and a system with a working
+FUSE library. Most modern distributions
+have this out of the box.
+
nb5.jar is not particular about what system you run it on, as long as you have java 17 or newer.1
+
Download Scripts
+
Get the latest nb5 binary
+
# download the latest nb5 binary and make it executable
+curl -L -O https://github.com/nosqlbench/nosqlbench/releases/latest/download/nb5
+chmod +x nb5
+./nb5 --version
+
# download the latest nb5 jar
+curl -L -O https://github.com/nosqlbench/nosqlbench/releases/latest/download/nb5.jar
+java -jar nb5.jar --version
+
+
This documentation assumes you are using the Linux binary, initiating NoSQLBench commands with ./nb5. If you are using the jar, just replace ./nb5 with java -jar nb5.jar when running commands. If you are using the jar version, Java 17 is recommended, and will be required soon.
+
Running nb5
+
To run a simple built-in workload run:
+
./nb5 examples/bindings-basics
+
+
This runs a built-in scenario located in the workload template named 'bindings-basics'. The file that the scenario is defined in is called the workload template. The scenario is named default, so you don't even have to specify it here. But you could, as shown in the next example.
Here is a more detailed command which demonstrates how customizable nb5 is:
+
./nb5 examples/bindings-basics default \
    filename=exampledata.out \
    format=csv \
    cycles=10000 \
    rate=100 \
    --progress console:1s
+
+
Each line does something specific:
+
+
Starts the scenario named default from the workload template examples/bindings-basics.
+
Sets the filename parameter (part of the stdout driver) to exampledata.out.
+
Sets the output format (part of the stdout driver) to CSV.
+
Sets the number of cycles to run to 10000, short for 0..10000, which represents 0 through 9999.
+
Sets the cycle rate to 100 per second.
+
Tells nb5 to report activity progress to the console every second.
+
+
Dashboards
+
You can use --docker-metrics to stand up a live metrics dashboard at port 3000.
+
👉In order to use the --docker-metrics option, you need to have docker installed on your
+local system, and your user must have permissions to use it. Typically, this means that your user
has been added to the docker group with a command like sudo usermod -aG docker $USER.
+
Here is the above command, with built-in dashboarding enabled:
The version scheme for NoSQLBench is [major].[java-lts].[minor], so nb5 version 5.17.1 requires Java version 17, which is the latest LTS Java release.
You need a target system to run your test against. If you already have one, you can skip this
+section. This tutorial assumes you are testing against a CQL based system. If you need to start
+something up, you have some options:
👉 If you want to see system-level metrics from your cluster, it is possible to get these as
+well as Apache Cassandra level metrics by using the DSE Metrics Collector (if using DSE), or
+by setting up a metrics feed to the Prometheus instance in your local docker stack. You can
+read the
+DSE Metrics Collector docs.
+
Run an Astra Cluster
+
You can choose to run a serverless cluster through DataStax AstraDB for functional
+testing. For tips on how to set up an Astra DB instance, you can check out this
+[Astra Tutorial].
+
If you plan to follow along this tutorial using AstraDB, you will need to follow these steps:
+
+
+
Add a keyspace named 'baselines' to your Astra Database (this is because Astra does not
+support adding keyspaces through CQLSH), see the following for details:
+
+
+
+
In the connect menu of your Astra DB Instance, download your secure connect bundle and make note of its path.
+
+
+
In your organization settings, you need to generate a Read/Write token and make note of the Client ID and Client Secret.
+see below for details:
+
+
+
+
Configuring for AstraDB
+
The following config pattern is often helpful for configuring NB5 for Astra:
This allows you to keep the credentials in files so that they aren't exposed on the console or
+in log files. It is also convenient to put these into a subdirectory together when you are
+testing multiple systems, for example:
👉 Regardless of what form you are using for the tutorial, you'll want to keep these options
+handy. You may want to drop them in anywhere you see a ... below.
+
This example shows how you can call on a completely pre-built workload template by simply using it
+as the first argument. cql-keyvalue is actually a workload description hosted within the binary
+(or jar), at activities/cql-keyvalue.yaml. You do have to provide some endpoint and authentication
+details, but these should not be added to the workload template anyway.
+
This also captures some workflow for us. It takes care of:
+
+
Initializing your schema.
+
Loading background data to provide a realistic scenario.
+
Running a main activity against the background data.
+
+
How this works is explained in more depth throughout this guide. For now, just know that all of
+these details are completely open for you to change simply by modifying the workload template.
+
Discover Scenarios
+
To get a list of built-in scenarios run:
+
./nb5 --list-scenarios
+
+
If you want a simple list of workload templates which contain named scenarios, run:
+
./nb5 --list-workloads
+
+
These are distinct commands, because you can have multiple named scenarios in a given workload
template. The commands above scan all known sources (bundled within the runtime or locally on your
filesystem) and provide a list of available scenarios or their containing workload templates.
The example above works when we specify only the workload, because it has a default scenario built in.
+
👉 These commands will include workloads that were shipped with nb5 and workloads in your local
+directory. To learn more about how to design custom workloads see
+[Workloads 101].
+
You can also include the examples path prefix, which will show many more:
+
./nb5 --list-workloads --include examples
+
+
When learning about bindings, first-time users are encouraged to use the command above to
find plenty of examples for inspiration.
+
Compose scenarios
+
We could easily ask nb5 to run a different named scenario of our choosing by running this:
+
# Run a specific named scenario
+./nb5 cql-keyvalue astra ...
+
+# Run a specific step of a specific named scenario
+./nb5 cql-keyvalue astra.schema ...
+
+# Run a series of specific steps from specific named scenarios
+./nb5 cql-keyvalue astra.schema astra.rampup ...
+
+
If you don't specify which steps to run, they are all run serially, in the order they are defined.
+If you don't specify which named scenario to run, default is used.
+
Example Activities
+
The first examples above show you how to call whole scenarios, which contain multiple steps and
pre-configured defaults. You can also skip the named scenarios and invoke the activities they
contain directly.
+
+
You can skip the next part of this page if you want to just use built-in scenarios. If you
+want to know how to drill down to the steps and test them individually, continue on.
+
+
Create a schema
+
We will start by creating a simple schema in the database:
+
# ( We'll use SQL highlighting for our CQL )
+
CREATE KEYSPACE baselines
 WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'}
 AND durable_writes = true;

CREATE TABLE baselines.keyvalue (
    key text PRIMARY KEY,
    value text
);
+
+
From your command line, go ahead and execute the following command, replacing
the ... with the connection details for one of your database nodes. Alternatively, if using Astra,
use the options described in the test target section.
+
./nb5 run driver=cqld4 workload=cql-keyvalue tags=block:schema ...
+
+
This follows the basic command pattern of all nb5 commands. The first bare word is a command,
+and all assignments after it are parameters to that command. The run command is short for "run
+an activity".
+
Let's break down each of those command line options.
+
run tells NoSQLBench to run an activity.
+
driver=... is used to specify the activity type (driver). In this case
+we are using cqld4, which tells NoSQLBench to use the DataStax Java Driver
+and execute CQL statements against a database.
+
workload=... is used to specify the workload definition file that defines the activity. In
+this example, we use cql-keyvalue which is a pre-built workload that is packaged with
+NoSQLBench.
+
tags=block:schema tells NoSQLBench to run the op templates from a block that is
+tagged with block:schema. In this example, that is the DDL portion of the cql-keyvalue
+workload.
+
... should be the endpoint and authentication details that you used from the first example.
+
If you like, you can verify the result of this command by describing your
+keyspace in cqlsh or DataStax Studio with
+DESCRIBE KEYSPACE baselines.
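For instance, a quick check from cqlsh might look like this (host and connection details are
yours to fill in):

# verify the schema that the nb5 schema block created
cqlsh <host> -e 'DESCRIBE KEYSPACE baselines;'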
+
Load some data
+
Before running a test of typical access patterns where you want to capture the results, you
need something more interesting than an empty table to test against. For this, we use the
rampup activity.
+
Before sending our test writes to the database, we will use the stdout driver, so we can see
+what NoSQLBench is generating for CQL statements.
+
Go ahead and execute the following command:
+
./nb5 run driver=stdout workload=cql-keyvalue
+
+
You should see 10 statements like the following in your console.
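For illustration only, the statements take roughly this shape; the exact statement text and
values come from the cql-keyvalue template and its bindings, so your output will differ:

insert into baselines.keyvalue (key, value) values (0,<generated value>);
insert into baselines.keyvalue (key, value) values (1,<generated value>);
...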
NoSQLBench deterministically generates data, so the generated values will be the same from run to run.
+
Now we are ready to write some data to our database. Go ahead and execute the following from your command line:
+
./nb5 run driver=cqld4 workload=cql-keyvalue tags=block:rampup cycles=100k ... --progress console:1s
+
+
Note the differences between this and the command that we used to generate the schema.
+
tags=block:rampup is running the yaml block in cql-keyvalue that has only INSERT statements.
+
cycles=100k will run a total of 100,000 operations, in this case,
+100,000 writes. You will want to pick an appropriately large number of
+cycles in actual testing to make your main test meaningful.
+
👉 The cycles parameter is not just a quantity. It is a range of values.
+The cycles=n format is short for
+cycles=0..n, which makes cycles a zero-based range. For example,
+cycles=5 means that the activity will use cycles 0,1,2,3,4, but not 5. The
+reason for this is explained in detail in the Activity Parameters section.
+
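To make the range form concrete, here is a sketch of splitting rampup across two runs with
adjacent, non-overlapping cycle ranges (command shape reused from above; endpoint options
elided as ...):

./nb5 run driver=cqld4 workload=cql-keyvalue tags=block:rampup cycles=0..100000 ...
./nb5 run driver=cqld4 workload=cql-keyvalue tags=block:rampup cycles=100000..200000 ...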
These parameters are explained in detail in the section on Activity
+Parameters.
+
--progress console:1s will print the progression of the run to the
+console every 1 second.
Now that we have a base dataset of 100K rows in the database, we will run a mixed read/write
workload. By default, this runs 50% reads and 50% writes. This time we will add a -v option
for more context.
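A sketch of what that command might look like, assembled from the parameters explained below
(replace the ... with your endpoint and authentication options as before):

./nb5 run driver=cqld4 workload=cql-keyvalue tags=block:main \
  cycles=100k threads=50 cyclerate=5000 \
  --progress console:1s -v ...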
You can go ahead and paste your activity parameters on the end. nb5 will always parse out the
+global options (those with a dash) and leave your commands intact.
tags=block:main is using a new block in our workload template that contains both read and
+write queries.
+
threads=50 is an important one. The default for NoSQLBench is to run with a single thread.
+This is not adequate for workloads that will be running many operations, so threads is used as
+a way to increase concurrency on the client side.
+
cyclerate=5000 is used to control the operations per second that are initiated by NoSQLBench.
+This command line option is the primary means to rate limit the workload and here we are
+running at 5000 ops/sec.
+
Now What?
+
Note in the above output, we
+see Configured scenario log at logs/scenario_20230113_135200_029.log.
+
By default, NoSQLBench records the metrics from the run in this file. We will go into detail
about these metrics in the next section on example results.
We just ran a very simple workload against our database. In that example, we saw that nb5
+writes to a log file, and it is in that log file where the most basic form of metrics are displayed.
+
Log File Metrics
+
For our previous run, we saw that NoSQLBench was writing to a
+file like logs/scenario_20190812_154431_028.log
+
Even when you don't configure NoSQLBench to write its metrics to another location, it will
+periodically report all the metrics to the log file. At the end of a scenario, before NoSQLBench
+shuts down, it will flush the partial reporting interval again to the logs. This means you can
+always look in the logs for metrics information.
+
WARNING:
+If you look in the logs for metrics, be aware that the last report will only contain a partial
+interval of results. When looking at the last partial window, only metrics which average over time
+or which compute the mean for the whole test will be meaningful.
+
Below is a sample of the log that gives us our basic metrics. There is a lot to digest here;
for now, we will focus only on a subset of the most important metrics.
+
2019-08-12 15:46:00,274 INFO [main] i.e.c.ScenarioResult [ScenarioResult.java:48] -- BEGIN METRICS DETAIL --
+2019-08-12 15:46:00,294 INFO [main] i.e.c.ScenarioResult [Slf4jReporter.java:373] type=GAUGE, name=cql-keyvalue.cycles.config.burstrate, value=5500.0
+2019-08-12 15:46:00,295 INFO [main] i.e.c.ScenarioResult [Slf4jReporter.java:373] type=GAUGE, name=cql-keyvalue.cycles.config.cyclerate, value=5000.0
+2019-08-12 15:46:00,295 INFO [main] i.e.c.ScenarioResult [Slf4jReporter.java:373] type=GAUGE, name=cql-keyvalue.cycles.waittime, value=3898782735
+2019-08-12 15:46:00,298 INFO [main] i.e.c.ScenarioResult [Slf4jReporter.java:373] type=HISTOGRAM, name=cql-keyvalue.resultset-size, count=100000, min=0, max=1, mean=8.0E-5, stddev=0.008943914131967056, median=0.0, p75=0.0, p95=0.0, p98=0.0, p99=0.0, p999=0.0
+2019-08-12 15:46:01,703 INFO [main] i.e.c.ScenarioResult [ScenarioResult.java:56] -- END METRICS DETAIL --
+
+
The log contains lots of information on metrics, but this is obviously not the most desirable way
+to consume metrics from NoSQLBench.
+
We recommend that you use one of these methods, according to your environment or tooling available:
+
+
--docker-metrics with a local docker-based grafana dashboard (See the section on Docker Based
+Metrics)
+
--docker-metrics-at <addr> with a remote docker-based grafana dashboard that you have
+previously
+set up.
+
Send your metrics to a dedicated graphite server with --report-graphite-to graphitehost
+
Record your metrics to local CSV files with --report-csv-to my_metrics_dir
+
Record your metrics to HDR logs with --log-histograms my_hdr_metrics.log
+
Record your metrics to a running docker metrics system with --docker-metrics-at <host>
+
+
See the command line reference for details on how to route your metrics to a metrics collector or
+format of your preference.
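For example, recording metrics to local CSV files is a one-flag addition to the run command
used earlier (a sketch; endpoint options elided as ...):

./nb5 run driver=cqld4 workload=cql-keyvalue tags=block:main cycles=100k ... --report-csv-to my_metrics_dir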
A set of core metrics are provided for every workload that runs with NoSQLBench, regardless of the activity type and
+protocol used. This section explains each of these metrics and shows an example of them from the log file.
+
metric: result
+
This is the primary metric that should be used to get a quick idea of the throughput and latency for a given run. It
encapsulates the entire operation life cycle (i.e. bind, execute, and get the result back).
+
For this example we see that we averaged 3732 operations / second with 3.6ms 75th percentile latency and 23.9ms 99th
+percentile latency. Note the raw metrics are in microseconds. This duration_unit may change depending on how a user
+configures NoSQLBench, so always double-check it.
metric: result-success

This metric shows whether there were any errors during the run. You can confirm that the count is equal to the number of
cycles for the run if you are expecting or requiring zero failed operations.
+
Here we see that all 100k of our cycles succeeded. Note that the metrics for throughput and latency here are slightly
+different than the results metric simply because this is a separate timer that only includes operations which
+completed with no exceptions.
metric: resultset-size

For read workloads, this metric shows the size of the result set sent back to NoSQLBench from the server. This is useful to
confirm that you are reading rows that already exist in the database.
metric: tries

NoSQLBench will retry failures 10 times by default; this is configurable via the maxtries command line option for the
cql activity type. This metric shows a histogram of the number of tries that each operation required. In this example,
there were no retries, as the count is 100k.
Now that you've run NoSQLBench for the first time and seen what it does, you can choose what level
+of customization you want for further testing.
+
The sections below describe key areas that users typically customize when working with NoSQLBench.
+
Everyone who uses NoSQLBench will want to get familiar with the
+Core Concepts
+section. This is essential reading for new and experienced testers alike.
+
High-Level Users
+
Several canonical workloads are already baked into NoSQLBench for immediate use. If you simply
want to drive workloads without building a custom one, then you'll want to learn about
the available workloads and their options.
+
Workload Builders
+
If you want to use NoSQLBench to build a tailored workload that closely emulates what a specific
+application would do, then you can build a self-contained and portable YAML file that specifies all
+the details. You can specify the access patterns, data distributions, and more. This is
explained further in Workloads 101.
+
Built-In Sources
+
You can use the --list-workloads option to see all the built-in workloads, and then use the
+--copy <name> option to copy them out of the runtime into your local directory.
+These sources provide a wealth of examples to consider as you build your own workloads or customize
+existing ones.
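For example, to copy the bundled cql-keyvalue workload template into your current directory for
editing:

./nb5 --copy cql-keyvalue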
+
Scenario Developers
+
For advanced scenario designs, iterative testing models, or analysis methods, you can use ECMAScript
+to control the scenario from start to finish. This is an advanced feature that is not recommended
+for first-time users. If you need this feature and run into any issues, join us on the discord
+server and strike up a conversation!
NoSQLBench version 5 is packaged directly as a Linux binary named
+nb5
+and as an executable Java 17 jar named
+nb5.jar
+. All releases are available at
+[NoSQLBench Releases]. The Linux binary is
+recommended, since it comes with its own JVM and eliminates the need to manage
+Java downloads.
+
Requirements
+
The nb5 binary requires Linux and a system with a working
+FUSE library. Most modern distributions
+have this out of the box.
+
nb5.jar is not particular about what system you run it on, as long as you have Java 17 or newer.
+
Download Scripts
+
Get the latest nb5 binary
+
# download the latest nb5 binary and make it executable
+curl -L -O https://github.com/nosqlbench/nosqlbench/releases/latest/download/nb5
+chmod +x nb5
+./nb5 --version
+
# download the latest nb5 jar
+curl -L -O https://github.com/nosqlbench/nosqlbench/releases/latest/download/nb5.jar
+java -jar nb5.jar --version
+
+
This documentation assumes you are using the Linux binary, initiating NoSQLBench commands with
./nb5. If you are using the jar, just replace ./nb5 with java -jar nb5.jar when running
commands. The jar requires Java 17 or newer.
+
Running nb5
+
To run a simple built-in workload run:
+
./nb5 examples/bindings-basics
+
+
This runs a built-in scenario from the workload template named 'bindings-basics'. The
file that the scenario is defined in is called the workload template. The scenario is named
default, so you don't even have to specify it here, but you could, as shown below.
Here is a more detailed command which demonstrates how customizable nb5 is:
+
./nb5 examples/bindings-basics default \
  filename=exampledata.out \
  format=csv \
  cycles=10000 \
  rate=100 \
  --progress console:1s
+
+
Each line does something specific:
+
+
Starts the scenario named default from the workload template examples/bindings-basics.
+
Sets the filename parameter (part of the stdout driver) to exampledata.out.
+
Sets the output format (part of the stdout driver) to CSV.
+
Sets the number of cycles to run to 10000, short for 0..10000, which represents 0 through 9999.
+
Sets the cycle rate to 100 per second.
+
Tells nb5 to report activity progress to the console every second.
+
+
Dashboards
+
You can use --docker-metrics to stand up a live metrics dashboard at port 3000.
+
👉 In order to use the --docker-metrics option, you need to have docker installed on your
local system, and your user must have permissions to use it. Typically, this means that your user
has been added to the docker group with a command like sudo usermod -aG docker $USER.
+
Here is the above command, with built-in dashboarding enabled:
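./nb5 examples/bindings-basics default \
  filename=exampledata.out \
  format=csv \
  cycles=10000 \
  rate=100 \
  --progress console:1s \
  --docker-metrics

Only the final --docker-metrics option is new here; everything else is unchanged from the
command above.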
The version scheme for NoSQLBench is [major].[java-lts].[minor], so nb5 version 5.17.1
requires Java version 17, which is the latest LTS Java release.
👉 The docs are presently updated to support NoSQLBench v5.17. 👈
+
Welcome to NB5! This release represents a massive leap forward. There are so many improvements
+that should have gone into smaller releases along the way, but here we are. We've had our heads
+down, focusing on new APIs, porting drivers, and fixing bugs, but it's time to talk about the
+new stuff!
+
For those who are experienced NB5 users, this will have few (but some!) surprises. For
+those of you who are NB (4 or earlier) users, NB5 is a whole different kind of testing tool. The
+changes allow for a much more streamlined user and developer experience, while also offering
+additional capabilities never seen together in a systems testing tool.
+
Everything mentioned here will find its way into the main docs site before we're done.
+
We've taken some care to make sure that there is support for earlier workloads where at all
+possible. If we've missed something critical, please let us know, and we'll patch it up ASAP.
+
This is a narrative overview of changes for NB5 in general. Individual
+releases will have itemized code changes
+listed individually.
+
Artifacts
+
nb5
+
The main bundled artifact is now named nb5. This version of NoSQLBench is a
+significant departure from the previous limitations and conventions, so a new name was fitting.
+It also allows you to easily have both on your system if you are maintaining test harnesses.
+This is a combination of the NoSQLBench core runtime module nbr and all the bundled driver
+adapters which have been contributed to the project.
+
Packaging
+
The code base for nb5 is more modular and adaptable. The core runtime module nbr is now
+separate, including only the core diagnostic driver which is used in integration tests. This allows
+for leaner and meaner integration tests.
+
drivers
+
We've ported many drivers to the nb5 APIs. All CQL support is now provided by the
DataStax Java Driver for Apache Cassandra.
+In addition, multiple contributors are stepping up to provide new drivers for many systems
+across the NoSQL ecosystem.
+
Project
+
Significant changes were made for the benefit of both users and developers.
+
Team
+
We've expanded the developer team which maintains tools like NoSQLBench. This should allow us to
+make improvements faster, focus on users more, and bring more strategic capabilities to the project
+which can redefine how advanced testing is done.
+
WYSiWYG Docs
+
We've connected the integration and specification tests to the documentation in a way that
ties everything together. If the examples and integration tests that are used on this
+site fail, the build fails. Otherwise, the most recent examples are auto exported from the main
+code base to the docs site. This means that test coverage will improve examples in the docs,
+which will stay constantly up to date. Expect coverage of this method to improve with each
+release. Until we can say What You See Is What You Get across all nb5 functions and examples,
+we're not done yet.
+
Releases
+
Going forward we'll enforce stricter release criteria. Interim releases will be flagged as
+prerelease unless due diligence checks have been done and a peer review finds a prerelease
+suitable for promotion to a main release. Once flagged as a normal release, CI/CD tools can pick
+up the release from the github releases area automatically.
+
We have a set of release criteria which will be published to this site and used as a blueprint for
+releases going forward. More information on how releases are managed can be found in our
+Contributing section. This will include testing coverage,
+static
+analysis, and further integrated testing support.
+
Documentation
+
This doc site is a significant step up from the previous version. It is now more accessible,
+more standards compliant, and generally more user-friendly. The dark theme is highly usable.
+Syntax highlighting is much easier on the eyes, and page navigation works better! The starting
+point for this site was provided by the abridge theme by
+Jieiku.
+
Architecture
+
The impetus for a major new version of NoSQLBench came from user and developer needs. In
+order to provide a consistent user experience across a variety of testing needs, the core
+machinery needed an upgrade. The APIs for building drivers and features have been redesigned to
+this end, resulting in a vast improvement for all who use or maintain nb5 core or drivers.
+
These benefits include:
+
+
Vastly simplified driver contributor experience
+
Common features across all implemented DriverAdapters
+
Interoperability between drivers in the same scenario or activity
+
Standard core activity params across all drivers, like op=...
+
Standard metrics semantics across all drivers
+
Standard highly configurable error handler support
+
Standard op template features, like named start and stop timers
+
Standard diagnostic tools across all drivers, like dryrun=...
+
+
The amount of Standard you see in this list is directly related to the burden removed from
+both nb5 users and project contributors.
+
Some highlights of these will be described below, with more details in the user guide.
+
+
The error handlers mechanism is now fully generalized across all
+drivers.
+It is also chainable, with specific support for handling each error type with a specific chain of
+handlers, or
+simply assigning a default to all of them as before.
+
The rate limiter is more efficient. This should allow it to work better in some scenarios
+where inter-core contention was a limiting factor.
+
It is now possible to dry-run an activity with dryrun=op or similar. Each dryrun option goes
+a little further into a normal run so that incremental verification of workloads can be done.
+For example, the dryrun=op option uses all the logic of a normal execution, but it wraps
+the op implementation in a no-op. The results of this will tell you how fast the client can
+synthesize and dispatch operations when there is no op execution involved. The measurement
+will be conservative due to the extra wrapping layer.
+
Thread management within activities is now more efficient, more consistent, and more real-time.
+Polling calls were replaced with evented calls where possible.
+
Only op templates which are active (selected and have a positive ratio) are resolved at
activity initialization. This improves startup times for large workloads with subsets of
operations enabled.
+
Native drivers (like the CQL Java Driver) now have their driver instance and object graph
+cached, indexed by a named op field called space. By default, this is wired to return
+default, thus each unique adapter will use the same internal object graph for execution.
+This is how things worked for most drivers before. However, if the user specifies that the
+space should vary, then they simply assign it a binding. This allows for advanced driver
+testing across a number of client instances, either pseudo-randomly or in lock-step with
+specific access patterns. If you don't want to use this, then ignore it and everything works
+as it did before. But if you do, this is built-in to every driver by design.
+
The activity parameter driver simply sets the default adapter for an activity. You can set
+this per op template, and run a different driver for every cycle. This field must be static on
+each op template, however. This allows for mixed-mode workloads in the same op sequence.
+
Adapters can be loaded from external jars. This can help users who are building adapters and want
+to avoid building the full runtime just for iterative testing.
+
The phase loop has been removed.
+
Operations can now generate more operations associated with a cycle. This opens the door to
emulating more complex access patterns, such as client-side joins.
+
There is a distinct API for implementing dynamic activity params distinctly from
+initialization params.
+
+
Ergonomics
+
Console
+
+
ANSI color support in some places, such as in console logging patterns. The --ansi,
--console-pattern, and --logging-pattern options work together. If a non-terminal is
+detected on stdout, ANSI is automatically disabled.
+
The progress meter has been modified to show real-time, accurate, detailed numbers
+including operations in flight.
+
+
Discovery
+
+
Discovery of bundled assets is now more consistent, supported with a family of --list-...
+options.
+
+
Configuration
+
+
Drivers know what parameters they can be configured with. This allows for more
+intelligent and useful feedback to users around appropriate parameter usage. If you get a
+param name wrong, nb5 will likely suggest the next closest option.
+
S3 URLs should work in most places, including for loading workload templates. You only need to
+configure your local authentication first.
+
+
Templating
+
Much of the power of NB5 is illustrated in the new ways you can template workloads. This
+includes structured data, dynamic op configuration, and driver instancing, to name a few.
+
+
The structure of op templates (the YAML you write to simulate access patterns) has been
+standardized around a strict set of specification tests and examples. These are documented
+in-depth and tested against a specification with round-trip validation.
+
Now, JSON and Jsonnet are supported directly as workload template formats. Jsonnet allows you to
+see the activity params as external variables.
+
All workload template structure is now supported as maps, in addition to the other structural
+forms (previously called workload YAMLs). All of these forms automatically de-sugar into the
+canonical forms for the runtime to use. This follows the previous pattern of "If it does what
+it looks like, it is valid", but further allows simplification of workloads with inline
+naming of elements.
+
In addition to workload template structure, op templates also support arbitrary structure
+instead of just scalar or String values. This is especially useful for JSON payload modeling.
+This means that op templates now have a generalized templating mechanism that works for all
+data structures. You can reference bindings as before, but you can also create collections and
string templates by writing fields as they naturally occur, then adding {bindings} where you
need them.
+
All op template fields can be made dynamic if an adapter supports it. It is up to the adapter
+implementor to decide which op fields must be static.
+
Op template values auto-defer to configured values as static, then dynamic, and then
+configured from activity parameters as defaults. If an adapter supports a parameter at the
+activity level, and an op form supports the same field, then this occurs automatically.
+
Tags for basic workload template elements are provided gratis. You no longer need to specify the
+conventional tags. All op templates now have block: <blockname> and name: <blockname>--<name> tags added. This works with regexes in tag filtering.
+
Named scenarios now allow for nb5 <workload-file> <scenario-name>.<scenario-step> .... You can
+prototype and validate complex scenarios by sub-selecting the steps to execute.
+
You can use the op="..." activity parameter to specify a single-op workload on the
command line, as if you had read it from a workload YAML. This allows
for one-liner tests, streamlined integration, and other simple utility usage.
+
Binding recipes can now occur inline, as {{Identity()}}. This works with the op
+parameter above.
+
You can now set a minimum version of NoSQLBench to use for a workload. The min_version: "4.17.15"
property is checked starting from the most-significant number down. If there are new core
features that your workload depends on, you can use this to avoid ambiguous errors.
+
Template vars like <<name:value>> or TEMPLATE(name,value) can set defaults the first time they
+are seen. This means you don't have to update them everywhere. A nice way to handle this is to
+include them in the description once, since you should be documenting them anyway!
+
You can load JSON files directly. You can also load JSONNET files directly! If you need to
+sanity check your jsonnet rendering, you can use dryrun=jsonnet.
+
All workload template elements can have a description.
+
+
Misc Improvements
+
(some carry over from pre-nb5 features)
+
+
Argsfile support for allowing sticky parameters on a system.
+
Tag filters are more expressive, with regexes and conjunctions.
+
Some scenario commands now allow for regex-style globbing on activity alias names.
+
Startup logging now includes details on version, hardware config, and scenario commands for
+better diagnostics and bug reports.
+
The logging subsystem config is improved and standardized across the project.
+
Test output is now vectored exclusively through logging config.
+
Analysis methods are improved and more widely used.
+
+
Deprecations and Standards
+
+
NB5 depends on Java 17. Going forward, major versions will adopt the latest LTS java release.
+
Dependencies which require shading in order to play well with others are not supported. If you
+have a native driver or need to depend on a library which is not a good citizen, you can only
+use it with NB5 by using the external jar feature (explained elsewhere). This includes the
+previous CQL drivers which were the 1.9.* and 3.*.* version. Only CQL driver 4.* is
+provided in nb5.
+
Dependencies should behave as modular jars, according to the JPMS specification. This does not
mean they need to be JPMS modules, only that they get halfway there.
+
Log4J2 is the standard logging provider in the runtime for NoSQLBench. An SLF4J stub
+implementation is provided to allow clients which implement against the SLF4J API to work.
+
All new drivers added to the project are based on the Adapter API.
+
+
Works In Progress
+
+
These docs!
+
Bulk Loading efficiency for large tests
+
Linearized Op Modeling
+
+
We now have a syntax for designating fields to extract from op results. This is part of the
+support needed to make client-side joins and other patterns easy to emulate.
+
+
+
Rate Limiter v3
+
VictoriaMetrics Integration
+
+
Labeled metrics need to be fed to a victoria metrics docker via push. This approach will
+remove much of the pain involved in using prometheus as an ephemeral testing apparatus.
In general, our goals with NoSQLBench are to make the help systems and examples wrap around the
+users like a suit of armor, so that they feel capable of doing most things without having to ask for
+help. Please keep this in mind when looking for personal support from our community, and help us
+find those places where the docs are lacking. Maybe you can help us by adding some missing docs!
+
Doc Site
+
This site is intended to be the first and most useful form of documentation for NoSQLBench. It is
+hosted on a separate project repo on GitHub so that it can be owned and maintained by the NoSQLBench
+user and developer community.
+(Click the GitHub link [here] to go directly there.) If you
+see something here that should be updated or expanded, please submit an issue, reach out on discord,
+or even better, submit a pull request!
+
Discord Server
+
Our discord server is where users and developers can discuss anything about NoSQLBench and support
+each other. Please
+[join us] there if you are a new user of NoSQLBench!
+
Contributing
+
We welcome all builders to NoSQLBench to help us improve it. Whether you are focused on improving
+docs, building a new subsystem or developing a new scripting extension, there is room for everyone, and all are
+appreciated. We have a new contributing section
+if you are interested.
+
Documentation
+
The primary documentation for NoSQLBench has been moved to this site. While there are still some
command line help facilities, like nb5 help <driver>, this is the best place to read it.
+When nb5 is built, it automatically exports all the rendered docs content to this site.
+
This site contains much more documentation than you will want to browse on the command line.
+It is recommended that you look here first with the search function when possible.
+
Bug Fixes
+
If you think you have found a bug, please
+[file a bug report]
+. NoSQLBench is actively used within DataStax, and verified bugs will get attention as resources
+permit. We appreciate all feedback, no matter how detailed. However, bug reports which are more
+detailed, or which include steps to reproduce any issues will get attention first.
+
Feature Requests
+
If you would like to see something in NoSQLBench that is not there yet, please
+[submit a feature request]
+.
+
Documentation Requests
+
If you would like to see a specific NoSQLBench or testing topic added to the guidebook, please
+[request docs content]
+.
NoSQLBench is built on top of core concepts that have been scrutinized, replaced, refined, and
+hardened through several years of use by a diverse set of users.
+
This level of refinement is important when trying to find a way to express common patterns in what
+is often a highly fragmented practice. Testing is hard. Scale testing is hard. Distributed testing
+is hard. Combined, the challenge of executing realistic tests is often quite daunting to all but
+seasoned test engineers. To make this worse, existing tools have only skirmished with this problem
+enough to make dents, but none has tackled full-on the lack of conceptual building blocks.
+
This has to change. We need a set of testing concepts that can span across workloads and system
+types, and machinery to put these concepts to use. This is why it is important to focus on finding a
+useful and robust set of concepts to use as the foundation for the rest of the toolkit to be built
+on. Finding these building blocks is often one of the most difficult challenges in systems design.
+Once you find and validate a useful set of concepts, everything else gets easier.
+
We believe that the success that we've already had using NoSQLBench has been strongly tied to the core
+concepts. Some concepts used in NoSQLBench are shared below for illustration, but this is by no
+means an exhaustive list.
+
The Cycle
+
Cycles in NoSQLBench are whole numbers on a number line. Each operation in a NoSQLBench scenario is
+derived from a single cycle. It's a long value, representing a seed. The cycle determines not only which
+operation is selected for execution, but also what data will be attached and fed to it.
+
Cycles are specified as a closed-open [min,max) interval, known as slices in some languages. That
+is to say, the min value is included in the range, but the max value is not. This means that you can stack
+slices using common numeric reference points without overlaps or gaps, and that you can have exact
+awareness of what data is in your dataset, even incrementally.
+
You can think of a cycle as a single-valued coordinate system for data that lives adjacent to that
+number on the number line. In this way, virtual dataset functions are ways of converting coordinates
+into data.
+
In NoSQLBench, the cycle range determines both the total size of a workload and the specific set of
+operations which will be performed. Using the same cycle range is the same as specifying the same
+exact operations. This means that your tests can be completely deterministic (pseudo-random) and
+repeatable, even when they appear random or are shaped by density curves.
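As a sketch of what this enables, you could replay a specific slice of cycles through the
stdout driver to see exactly which operations (and generated data) those cycles produce:

# replay cycles 5000 through 5009 without touching a target system
./nb5 run driver=stdout workload=cql-keyvalue tags=block:main cycles=5000..5010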
+
The Activity
+
An activity is a multithreaded flywheel of statements in some sequence and ratio. Each activity
+runs over the numbers in a cycle range. An activity is specified as a series of op templates in some
+ratio and order. When an activity runs, it executes an efficient loop over specific operations with
+its own thread pool.
+
The Op Template
+
Each possible operation in an activity is provided by the user in a YAML or data structure driven
+template. The op templates are used to create efficient op dispensers in the runtime according to
+the mapping rules for a given driver.
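As a rough sketch only (the op name, binding recipes, and statement here are illustrative,
not taken from a bundled workload), an op template in YAML might look like this:

# minimal illustrative workload template with one op template
bindings:
  key: Identity(); ToString();
  value: Hashed(); ToString();
blocks:
  main:
    ops:
      insert-kv: |
        insert into baselines.keyvalue (key, value) values ({key},{value});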
+
The Driver Adapter
+
A driver adapter is a high level driver for a protocol which interfaces a native driver to the
+runtime machinery of NoSQLBench. It's like a statement-aware cartridge that knows how to take a
+basic op template and turn it into an operation for an activity to execute for a given cycle.
+
The Scenario
+
The scenario is a runtime session that holds activities while they run. A NoSQLBench scenario is
+responsible for aggregating global runtime settings, metrics reporting channels, log files, and so
+on. All activities run within a scenario, under the control of the scenario script.
+
The Scenario Script
+
Each scenario is governed by a central script. This script runs in a single-threaded manner, asynchronous from
+the activities, maintaining control over them. If necessary, the scenario script is automatically created for the user,
+and the user never knows it is there. If the user has advanced testing requirements, then they may take
+advantage of the scripting capability at such time. The scenario completes when the script exits, AND all
+activities are also complete. Shortcut forms of scripting are provided on the command line to address
+common variations.
In general, our goals with NoSQLBench are to make the help systems and examples wrap around the
+users like a suit of armor, so that they feel capable of doing most things without having to ask for
+help. Please keep this in mind when looking for personal support from our community, and help us
+find those places where the docs are lacking. Maybe you can help us by adding some missing docs!
+
Doc Site
+
This site is intended to be the first and most useful form of documentation for NoSQLBench. It is
+hosted on a separate project repo on GitHub so that it can be owned and maintained by the NoSQLBench
+user and developer community. (Click the GitHub link [here] to go directly there.) If you
+see something here that should be updated or expanded, please submit an issue, reach out on discord,
+or best of all, submit a pull request!
+
Discord Server
+
Our discord server is where users and developers can discuss anything about NoSQLBench and support
+each other. Please
+[join us] there if you are a new user of NoSQLBench!
+
Contributing
+
We welcome all builders to NoSQLBench to help us improve it. Whether you are focused on improving
+docs or building a new subsystem or scripting extension, there is room for everyone, and all are
+appreciated. Please read about
+[CONTRIBUTING]
+if you are interested.
+
Built-In Docs
+
NoSQLBench has some built-in docs which are available on the command line. You can see a list of
+built-in docs with the command:
+
nb5 help topics
+
+
or, to read any topic, simply use the command:
+
nb5 help <topic>
+
+
This doc site contains much more documentation than you will want to browse on the command line.
+It is recommended that you look here first with the search function when possible.
+
Bug Fixes
+
If you think you have found a bug, please
+[file a bug report]
+. NoSQLBench is actively used within DataStax, and verified bugs will get attention as resources
+permit. We appreciate all feedback, no matter how detailed. However, bug reports which are more
+detailed, or which include steps to reproduce any issues will get attention first.
+
Feature Requests
+
If you would like to see something in NoSQLBench that is not there yet, please
+[submit a feature request]
+.
+
Documentation Requests
+
If you would like to see a specific NoSQLBench or testing topic added to the guidebook, please
+[request docs content]
+.
Welcome to the documentation for NoSQLBench. This is a power tool that emulates real application
+workloads. This means that you can fast-track performance, sizing and data model testing without
+writing your own testing harness.
+
To get started right away, jump to the
+Getting Started section of the docs.
+
What is NoSQLBench?
+
NoSQLBench is a serious performance testing tool for the NoSQL ecosystem.
+
NoSQLBench brings advanced testing capabilities into one tool that are not found in other testing
+tools.
+
+
You can run common testing workloads directly from the command line within 5 minutes of reading this.
+
You can generate virtual data sets of arbitrary size, with deterministic data and statistically
+shaped values.
+
You can design custom workloads that emulate your application, contained in a single file, based
+on statement templates—no IDE or coding required.
+
You can immediately plot your results in a docker and grafana stack on Linux with a single command
+line option.
+
You can open the access panels when necessary and rewire the runtime behavior of NoSQLBench for advanced
+testing. This includes access to a full scripting environment with Javascript.
+
+
The core machinery of NoSQLBench has been built with attention to detail. It has been battle tested
+within DataStax as a way to help users validate their data models, baseline system performance, and
+qualify system designs for scale.
+
In short, NoSQLBench wishes to be a programmable power tool for performance testing. However, it is
+somewhat generic. It doesn't know directly about a particular type of system, or protocol. It simply
+provides a suitable machine harness in which to put your drivers and testing logic. If you know how
+to build a client for a particular kind of system, NoSQLBench will let you load it like a plugin and control
+it dynamically.
+
Initially, NoSQLBench was used for CQL testing, but we have seen this expanded over time by
+other users and vendors with drivers for a variety of systems. We would like to see this
+expanded further with contributions from others.
+
Origins
+
The code in this project comes from multiple sources. The procedural data generation capability was
+known before as Virtual Data Set. The core runtime and scripting harness was from the
+EngineBlock project. The CQL support was previously used within DataStax. In March 2020, DataStax
+and the project maintainers decided to put everything into one OSS project in order to make
+contributions and sharing easier for everyone. Thus, the new project name and structure was launched
+as NoSQLBench.io. NoSQLBench is an independent project that is primarily sponsored by DataStax.
+
We offer NoSQLBench as a new way of thinking about testing systems. It is not limited to testing
+only one type of system. It is our wish to build a community of users and practice around this
+project so that everyone in the NoSQL ecosystem can benefit from common concepts and understanding
+and reliable patterns of use.
+
Scalable User Experience
+
NoSQLBench endeavors to be valuable to all users. We do this by making it easy for you, our user, to
+do just what you need without worrying about the rest. If you need to do something simple, it should
+be simple to find the right settings and just do it. If you need something more sophisticated, then
+you should be able to find what you need with a reasonable amount of effort and no surprises.
+
That is the core design principle behind NoSQLBench. We hope you like it.
There are a few core design principles that steer the technical design of NoSQLBench. They are
+shared here in the hopes that they will help others understand what NoSQLBench is all about.
+
Respect for Users
+
While this sounds like a conduct level aspect, the focus here is on what this means in terms of
+design. Respect is absolutely part of the code of conduct governing this project as well.
+
What this means is that we try to build systems which respect users in general and in specific
+ways which can only be tackled through thoughtful design.
+
Durable Concepts
+
We focus on core concepts that stand the test of time, and as such give time back to users. We
+look for concepts which give back more in terms of clarity and reuse than they take away by
+indirection. We build patterns of use around these concepts which bring users together in
+common practice and understanding and move the testing ecosystem together as a whole.
+
Composable Systems
+
We build composable systems such that they can quickly be used in a pre-built form at a high
level. We make them reconfigurable by those who need to, so that they can be repurposed into
+something more contextual. In this way, we provide a sliding scale of user experience, where
+users' time is exchanged for incremental value in results.
+
High Fidelity
+
We build high-fidelity measurement tools and instruments into NoSQLBench, so that the results
+are not only useful, but repeatable and reproducible. We build efficiency into the NoSQLBench
+machinery, so that testing tools maintain headroom enough to make accurate measurements at speed.
NoSQLBench has enjoyed a history of unique innovation: driven by the vision of its users and
+builders, forged by the need for practical methods to test modern systems. This section covers a
+sampling of what makes NoSQLBench unique. Many of these features simply could not be found in other
+testing systems when they were needed. Thus, NoSQLBench took form as we solved them one after
+another in the same tool space. The result is a powerful runtime and system of components and
+concepts which can be adapted to a variety of testing needs.
+
Virtual Data Set
+
The Virtual Dataset capabilities within NoSQLBench allow you to generate data on the fly. There
+are many reasons for using this technique in testing, but it is often a topic that is overlooked or
+taken for granted.
+
This has multiple positive effects on the fidelity of a test:
+
+
It is much more efficient than interacting with storage systems and piping data around. In most
cases, even loading data from lightweight storage like NVMe will be more time-intensive than simply
+generating it.
+
As such, it leaves significant headroom on the table for introducing other valuable capabilities
+into the test system, like advanced rate metering, coordinated omission awareness, and more.
+
Changing the generated data is as easy as changing the recipe.
+
The efficiency of the client is often high enough to support single-client test setups without
+appreciable loss of capacity.
+
Because of modern procedural generation techniques, the variety and shape of data available is
+significant. Increasing the space of possibilities is a matter of adding new algorithms. There is
+no data bulk to manage.
+
Sophisticated test setups that are highly data dependent are portable. All you need is the test
+client. The building blocks for data generation are included, and many pre-built testing scenarios
+are already wired to use them.
+
It is straightforward to design incremental data generation schemes which produce monotonic
+identifiers, pseudo-random traversal over the values, or even statistically-shaped versions of
+incremental or pseudo-random values.
+
+
Additional details of this approach are explained below.
+
Industrial Strength
+
The algorithms used to generate data are based on advanced techniques in the realm of variate
+sampling. The authors have gone to great lengths to ensure that data generation is efficient and as
+much O(1) in processing time as possible.
+
For example, one technique that is used to achieve this is to initialize and cache data in high
+resolution look-up tables for distributions which may otherwise perform differently depending on
+their respective density functions. The existing Apache Commons Math libraries have been adapted
+into a set of interpolated Inverse Cumulative Distribution sampling functions. This means that you
+can use them all in the same place as you would a Uniform distribution, and once initialized, they
+sample with identical overhead. This means that by changing your test definition, you don't
+accidentally change the behavior of your test client; only the data, as intended.
+
A Purpose-Built Tool
+
Many other testing systems avoid building a dataset generation component. It's a tough problem to
+solve, so it's often just avoided. Instead, they use libraries like "faker" or other sources of data
+which weren't designed for testing at scale. Faker is well named, no pun intended. It was meant as a
+vignette and wire-framing library, not a source of test data for realistic results. If you are using
+a testing tool for scale testing and relying on a faker variant, then you will almost certainly get
+invalid results that do not represent how a system would perform in production.
+
The virtual dataset component of NoSQLBench is a library that was designed for high scale and
+realistic data streams. It uses the limits of the data types in the JVM to simulate high cardinality
+datasets which approximate production data distributions for realistic and reproducible results.
+
Deterministic
+
The data that is generated by the virtual dataset libraries is deterministic. This means that for a
+given cycle in a test, the operation that is synthesized for that cycle will be the same from one
+session to the next. This is intentional. If you want to perturb the test data from one session to
+the next, then you can most easily do it by simply selecting a different set of cycles as your
+basis.
+
This means that if you find something interesting in a test run, you can go back to it just by
+specifying the cycles in question. It also means that you aren't losing comparative value between
+tests with additional randomness thrown in. The data you generate will still look random to the
+human eye, but that doesn't mean that it won't be reproducible.
+
Statistically Shaped
+
If you want a normal distribution, you can have it simply by specifying Normal(50,10). The values
+drawn from this sampling function are deterministic AND normal. If you want another distribution,
+you can have it. All the distributions provided by the Apache Commons math libraries are supported.
+You can ask for a stream of floating point values 1 trillion values long, in any order. You can use
+discrete or continuous distributions, with whatever distribution parameters you need.
+
Best of Both Worlds
+
Some might worry that fully synthetic testing data is not realistic enough. The devil is in the
+details on these arguments, but suffice it to say that you can pick the level of real data you use
+as seed data with NoSQLBench. You don't have to choose between realism and agility. The procedural
+data generation approach allows you to have all the benefits of testing agility of low-entropy
+testing tools while retaining nearly all the benefits of real testing data.
+
For example, using the alias sampling method and a published US census (public domain) list of names
and surnames that occurred more than 100x, we can provide extremely accurate samples of names
+according to the published labels and weights. The alias method allows us to sample accurately
+in O(1) time from the entire dataset by turning a large number of weights into two uniform
+samples. You will simply not find a better way to sample realistic (US) names than this. (If you do,
+please file an issue!) Actually, any data set that you have in CSV form with a weight column can
+also be used this way, so you're not strictly limited to US census data.
+
Java Idiomatic Extension
+
The way that the virtual dataset component works allows Java developers to write any extension to
+the data generation functions simply in the form of Java functional interfaces. As long
+as they include the annotation processor and annotate their classes, they will show up in the
+runtime and be available to any workload by their class name.
+
Additionally, annotation based examples and annotation processing is used to hoist function docs
+directly into the published docs that go along with any version of NoSQLBench.
+
Binding Recipes
+
It is possible to stitch data generation functions together directly in a workload YAML. These are
+data-flow sketches of functions that can be copied and pasted between workload descriptions to share
+or remix data streams. This allows for the adventurous to build sophisticated virtual datasets that
+emulate nuances of real datasets, but in a form that takes up less space on the screen than this
+paragraph!
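A small sketch of what such recipes look like in a workload YAML (function names are drawn
from the bundled binding libraries; confirm availability in your version's bindings docs):

# illustrative binding recipes; each line chains data functions into a named stream
bindings:
  user_id: Uniform(0,1000000); ToString()
  score: Normal(50,10)
  full_name: FullNames()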
+
Portable Workloads
+
All the workloads that you can build with NoSQLBench are self-contained in a workload file. This
+is a statement-oriented configuration file that contains templates for the operations you want to
+run in a workload.
+
This defines part of an activity—the iterative flywheel part that is run directly within an
+activity type. This file contains everything needed to run a basic activity: a set of statements
+in some ratio. It can be used to start an activity, or as part of several activities within a
+scenario.
+
Standard YAML Format
+
The format for describing statements in NoSQLBench is generic, but in a particular way that is
+specialized around describing statements for a workload. That means that you can use the same YAML
+format to describe a workload for kafka as you can for Apache Cassandra or DSE.
+
The YAML structure has been tailored to describing statements, their data generation bindings, how
+they are grouped and selected, and the parameters needed by drivers, such as whether they should be
+prepared statements or not.
+
Furthermore, the YAML format allows for defaults and overrides with a very simple mechanism that reduces
+editing fatigue for frequent users.
+
You can also templatize document-wide macro parameters which are taken from the command line just
+like any other parameter. This is a way of templating a workload and making it multipurpose or
+adjustable on the fly.
+
Experimentation Friendly
+
Because the workload YAML format is generic across driver types, it is possible to ask one driver
+type to interpret the statements that are meant for another. This isn't generally a good idea, but
+it becomes extremely handy when you want to have a high level driver type like stdout
interpret the syntax of another driver like cql. When you do this, the stdout activity type
plays the statements to your console as they would be executed in CQL, data bindings and all.
+
This means you can empirically and directly demonstrate and verify access patterns, data skew, and
+other dataset details before you change back to cql mode and turn up the settings for a higher scale
+test. It takes away the guess work about what your test is actually doing, and it works for all
+drivers.
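
For example, you could preview the statements of a workload with the stdout driver before running
it for real (a sketch; the cycle count is arbitrary):

nb5 run driver=stdout workload=cql-starter cycles=10

Each rendered statement is printed to the console with its generated data bound in.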
+
Scripting Environment
+
The ability to write open-ended testing simulations is provided in NoSQLBench by means of a scripted
+runtime, where each scenario is driven from a control script that can do anything the user wants.
+
Dynamic Parameters
+
Some configuration parameters of activities are designed to be assignable while a workload is
running. This makes it possible to adjust things like threads, rates, and other workload dynamics in real time. The
+internal APIs work with the scripting environment to expose these parameters directly to scenario
+scripts. Drivers that are provided to NoSQLBench can also expose dynamic parameters in the same way
+so that anything can be scripted dynamically when needed.
+
Scripting Automatons
+
When a NoSQLBench scenario is running, it is under the control of a single-threaded script. Each
+activity that is started by this script is run within its own thread pool, simultaneously and
+asynchronously.
+
The control script has executive control of the activities, as well as full visibility into the
+metrics that are provided by each activity. The way these two parts of the runtime meet is through
+the service objects which are installed into the scripting runtime. These service objects provide a
+named access point for each running activity and its metrics.
+
This means that the scenario script can do something simple, like start activities and wait for them
+to complete, OR, it can do something more sophisticated like dynamically and iteratively scrutinize
+the metrics and make real-time adjustments to the workload while it runs.
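
As a rough sketch of what such a control script can look like (the service-object calls here
follow the NB5 scripting docs, but treat the exact names and parameters as assumptions to verify
against your version):

// start an activity asynchronously, let it run, then adjust it and wait for completion
scenario.start("driver=stdout workload=cql-starter alias=main cycles=1M");
scenario.waitMillis(2000);          // let the activity warm up
activities.main.threads = 8;        // adjust a dynamic parameter while it runs
scenario.awaitActivity("main");     // block until the activity completes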
+
Analysis Methods
+
Scripting automatons that do feedback-oriented analysis of a target system are called analysis
+methods in NoSQLBench. These are used for advanced testing scenarios. Advanced testers or
+researchers can build their own in a way that interacts with a live system with feedback and
+sampling times measured in seconds.
+
Command Line Scripting
+
The command line has the form of basic test commands and parameters. These commands get converted
directly into a scenario control script in the order they appear. The user can choose whether to stay
+in high level executive mode, with simple commands like nb5 test-scenario ..., or to drop
+directly into script design. They can look at the equivalent script for any command line by running
+--show-script. If you take the script that is dumped to console and run it, it will do exactly the
+same thing as if you hadn't even looked at it and just ran basic commands on the command line.
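
For example (illustrative):

nb5 run driver=stdout workload=cql-starter cycles=10 --show-script

This dumps the equivalent scenario script to the console so you can inspect it, reuse it, or edit
it into something more sophisticated.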
+
There are even ways to combine script fragments, full commands, and calls to scripts on the command
+line. Since each variant is merely a way of constructing scenario script, they all get composited
+together before the scenario script is run.
+
New users of NoSQLBench should start with the command line. Once a user is familiar with this,
+it is up to them whether to tap into the deeper functionality. If they don't need to know about
+scenario scripting, then they shouldn't have to learn about it to be effective. This is what we are
+calling a scalable user experience.
+
Compared to DSLs
+
Other tools may claim that their DSL makes scenario "simulation" easier. In practice, any DSL is
+generally dependent on a development tool to lay the language out in front of a user in a fluent
+way. This means that DSLs are almost always developer-targeted tools, and mostly useless for casual
+users who don't want to break out an IDE.
+
One of the things a DSL proponent may tell you is that the DSL shows you "all the things you can do!".
In practice, this is also a list of "all the things you can't do", because anything outside the DSL is
unavailable. This is not a win for the user.
+For DSL-based systems, the user is required to use the DSL, even when it interferes with the
+user's creative control. Most DSLs aren't rich enough to do much that is interesting from a
+simulation perspective.
+
In NoSQLBench, we don't force the user to use the programming abstractions except at a very surface
+level: the CLI. It is up to the user whether to open the secret access panel for the more
+advanced functionality. If they decide to do this, we give them a commodity language (ECMAScript),
+and we wire it into all the things they were already using. We don't take away their creative
+freedom by telling them what they can't do. This way, users can pick their level of investment and
+reward as best fits their individual needs, as it should be.
+
Scripting Extensions
+
Also mentioned under the section on modularity, it is relatively easy for a developer to add their
+own scripting extensions into NoSQLBench in the form of named service objects.
+
Modular Architecture
+
The internal architecture of NoSQLBench is modular throughout. Everything from the scripting
+extensions to data generation is enumerated at compile time into a service descriptor, and then
+discovered at runtime by the SPI mechanism in Java.
+
This means that extending and customizing bundles and features is quite manageable.
+
It also means that it is relatively easy to provide a suitable API for multi-protocol support. In
+fact, there are several drivers available in the current NoSQLBench distribution. You can list them
+out with nb5 --list-drivers, and you can get help on how to use each of them
+with nb5 help <driver name>.
+
This also is a way for us to encourage and empower other contributors to help develop the
+capabilities and reach of NoSQLBench. By encouraging others to help us build NoSQLBench modules and
+extensions, we can help more users in the NoSQL community at large.
+
High Fidelity Metrics
+
Since NoSQLBench has been built as a serious testing tool for all users, some attention was
necessary on the way metrics are used. More details follow...
+
Discrete Reservoirs
+
In NoSQLBench, we avoid the use of time-decaying metrics reservoirs. Internally, we use HDR
+reservoirs with discrete time boundaries. This is so that you can look at the min and max values and
+know that they apply accurately to the whole sampling window.
+
Metric Naming
+
All running activities have a symbolic alias that identifies them for the purposes of automation and
+metrics. If you have multiple activities running concurrently, they will have different names and
+will be represented distinctly in the metrics flow.
+
Precision and Units
+
By default, the internal HDR histogram reservoirs are kept at 4 digits of precision. All timers are
+kept at nanosecond resolution.
+
Metrics Reporting
+
Metrics can be reported via graphite as well as CSV, logs, HDR logs, and HDR stats summary CSV
+files.
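
For example (a sketch; verify the exact option names with nb5 --help, and treat the host and paths
here as illustrative):

nb5 run driver=cql workload=my_workload.yaml \
  --report-graphite-to graphite-host:2003 \
  --report-csv-to ./metrics \
  --log-histograms histodata.log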
+
Coordinated Omission
+
The metrics naming and semantics in NoSQLBench are set up so that you can have coordinated omission
metrics when they are appropriate, and nothing changes when they are not. This means that
the metric names and meanings remain stable in any case.
+
Particularly, NoSQLBench tries to avoid the term "latency" altogether as it is often overused and
+thus prone to confusing people.
+
Instead, the terms service time, wait time, and response time are used. These are abbreviated
+in metrics as servicetime, waittime, and responsetime.
+
The servicetime metric is the only one which is always present. When a rate limiter is used, then
+additionally waittime and responsetime are reported.
+
Advanced Testing Features
+
👉 Some features discussed here are only for advanced testing scenarios. First-time users should
+become familiar with the basic options first.
+
Hybrid Rate Limiting
+
Rate limiting is a complicated endeavor, if you want to do it well. The basic rub is that going fast
+means you have to be less accurate, and vice-versa. As such, rate limiting is a parasitic drain on
+any system. The act of rate limiting itself poses a limit to the maximum rate, regardless of the
+settings you pick. This occurs as a side effect of forcing your system to interact with some
+hardware notion of time passing, which takes CPU cycles that could be going to the thing you are
+limiting.
+
This means that in practice, rate limiters are often very featureless. It's daunting enough to need
+rate limiting, and asking for anything more than that is often wishful thinking. Not so in
+NoSQLBench.
+
The rate limiter in NoSQLBench provides a comparable degree of performance and accuracy to others
+found in the Java ecosystem, but it also has advanced features:
+
+
It allows a sliding scale between average rate limiting and strict rate limiting, called
bursting (see the sketch after this list).
+
It internally accumulates delay time, for coordinated omission friendly metrics which are separately tracked for
+each and every operation.
+
It is resettable and reconfigurable on the fly, including the bursting rate.
+
It provides its configured values in addition to performance data in metrics, capturing your rate
+limiter settings as a simple matter of metrics collection.
+
It comes with advanced scripting helpers which allow you to read data directly from histogram
+reservoirs, or control the reservoir window programmatically.
+
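For example (a sketch; the workload name is illustrative), a target rate and an optional burst
ratio can be set with the cyclerate parameter:

nb5 run driver=cql workload=my_workload.yaml cycles=1M cyclerate=5000
nb5 run driver=cql workload=my_workload.yaml cycles=1M cyclerate=5000,1.1

The first form targets an average of 5000 cycles per second. The second allows bursting up to 10%
above that rate when the workload has fallen behind, which is how the sliding scale between average
and strict rate limiting is expressed.
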
+
Flexible Error Handling
+
An emergent facility in NoSQLBench is the way that errors are handled within an activity. For
+example, with the CQL activity type, you are able to route error handling for any of the known
+exception types. You can count errors, you can log them. You can cause errored operations to
+auto-retry if possible, up to a configurable number of tries.
+
This means that, as a user, you get to decide what your test is about. Is it about measuring some
+nominal but anticipated level of errors due to intentional over-saturation? If so, then count the
+errors, and look at their histogram data for timing details within the available timeout.
+
Are you doing a basic stability test, where you want the test to error out for even the slightest
error? You can configure for that if you need (see the sketch below).
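
As a hypothetical illustration of that choice (the exact error-handler syntax is driver-specific;
see nb5 help for your driver for the authoritative forms):

nb5 run driver=cql workload=my_workload.yaml errors=count
nb5 run driver=cql workload=my_workload.yaml errors=stop

The first counts errors and keeps running; the second stops the activity at the first error, which
suits a strict stability test.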
+
Cycle Logging
+
It is possible to record the result status of each and every cycle in a NoSQLBench test run. If the
+results are mostly homogeneous, the RLE encoding of the results will reduce the output file down to
+a small fraction of the number of cycles. The errors are mapped to ordinals by error type, and these
+ordinals are stored into a direct RLE-encoded log file. For most testing where most of the results
+are simply success, this file will be tiny. You can also convert the cycle log into textual form for
+other testing and post-processing and vice-versa.
+
Op Sequencing
+
The way that operations are planned for execution in NoSQLBench is based on a stable ordering that
+is configurable. The statement forms are mixed together based on their relative ratios. The three
+schemes currently supported are round-robin with exhaustion (bucket), duplicate in order
+(concat), and a way to spread each statement out over the unit interval
+(interval). These account for most configuration scenarios without users having to micromanage
+their statement templates.
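
As a sketch of how ratios and sequencing fit together (op names, statements, and bindings here are
illustrative):

bindings:
  ident: Identity()
ops:
  read-op:
    stmt: "read {ident}\n"
    ratio: 3
  write-op:
    stmt: "write {ident}\n"
    ratio: 1

nb5 run driver=stdout workload=my_ops.yaml seq=bucket cycles=20

With seq=bucket, seq=concat, or seq=interval, the same 3:1 ratio is laid into the planned op
sequence using the corresponding scheme described above.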
With NB5, the cqld4 driver comes with a workload generator that can be used to generate a
+workload yaml from a CQL schema file.
+
Development on this workload generator is just starting, but it is already in a useful state.
+Eventually, workload observation and monitoring methods will be used to create workloads which
+more accurately emulate those in-situ.
+
Inputs & Outputs
+
You need the cql schema for whatever keyspaces, types, and tables you want to include.
+Optionally, you can provide a table stats file which is generated as
+nodetool tablestats > tablestats.
+
Note: The table stats file provides results only for a single node. As such, you will want to
+adjust a config parameter called partition_multiplier to improve accuracy of the generated
+workload. Further, the file only contains traffic and data details since the last time your node
+was restarted, thus it may not be representative of the overall character of your workload and data.
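
A sketch of the invocation, with the argument order inferred from the option descriptions below
(run nb5 cqlgen with no arguments or see its help for the authoritative usage):

nb5 cqlgen myschema.cql myworkload.yaml tablestats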
The first option is simply the name of the .cql file containing your schema.
+
The second is the name of the output yaml. The generator will not overwrite this file for you.
+
The third option is an optional table stats file created with nodetool tablestats >tablestats.
+If provided, the reads, writes, and estimated partition counts will be used to weight the
+workload to the tables and data sizes automatically.
+
+
For now, it is almost certain that you'll need to extract the configs and tailor them as
+described below. Then, when you run nb5 cqlgen ... the config files in the current directory will
+be used.
+
Workload Patterns
+
The initial version of the cql workload generator provides these defaults:
+
+
All keyspaces, tables, or types which are provided on the input are included in the workload.
+
All create syntax has "if not exists" added.
+
All drop syntax has "if exists" added.
+
All table DDL properties are ignored except for durable writes.
+
The default replication settings are as for local testing with SimpleStrategy. For
testing on a proper cluster with NetworkTopologyStrategy or in Astra, you'll need to modify this in
the configs explained below.
+
All UDTs are converted to blobs. This will be replaced by a layer which understands UDTs very
+soon.
+
Data bindings are created using the simplest possible binding recipes that work.
+
Cardinalities on partition-specific bindings are multiplied by 10. This presumes even data
+distribution, replication factor 3, and 30 nodes. This method will be improved in the future.
+
For the main phase, reads, writes, updates, and scans are included, 1 each.
+
+
reads select * with a fully qualified predicate.
+
writes will write to all named fields.
+
updates change all fields with a fully qualified predicate.
+
scan-10 will read up to 10 consecutive rows from a partition with a partially qualified
+predicate. This means the last clustering column is not included in the predicates. Single
+key (1 partition component) tables do not have a scan created for them.
+
When partition estimates are provided, all read and write statements have predicates for
+the last partition component modified to modulo by the estimated partition cardinality.
+
+
+
+
Fine Tuning
+
The generator uses two internal files for the purposes of setting defaults:
+
+
cqlgen.conf - a yaml formatted configuration file.
+
cqlgen-bindings.yaml - the default bindings by type.
+
+
Both of these files will be read from the internal nb5 resources unless you pull them into the
+local directory with these commands:
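
(The authoritative commands are listed in the cqlgen documentation; assuming the general nb5 --copy
mechanism applies to these bundled resources, they look like the following.)

nb5 --copy cqlgen.conf
nb5 --copy cqlgen-bindings.yaml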
The details of how to customize these files are included within them. The cqlgen-bindings.yaml
+file contains default bindings by type. If you get UnresolvedBindingsException when trying to
+generate a workload, then a binding for the type in question must be added to the
+cqlgen-bindings.yaml file.
+
The cqlgen.conf controls much of the internal wiring of the workload generator. Modifying it
+gives you the ability to enable and disable certain stages and behaviors, like:
+
+
obfuscating all keyspace, table, and column names
+
keyspaces to include by name
+
tables to exclude by traffic
+
setting the replication fields
+
default timeouts
+
block naming and construction (which type of operations are included in each)
+
+
These are mostly controlled by a series of processing phases known as transformers.
+Some transformers depend on others upstream, but if the data provided is not sufficient, they
will silently pass through.
+
This is a new feature of the NoSQLBench driver. If you are an early adopter, please reach out
+with ideas, or for requests and support as needed.
export-docs is a built-in app that allows NB5 to export docs for integration into other systems.
+During a build of NB5, this app is used to auto-inject much of the content into the doc site.
+Invoking it as nb5 export-docs creates (or overwrites) a file called exported-docs.zip,
+containing the markdown source files for binding functions, bundled apps, specifications, and so on.
+
Using this mechanism ensures that:
+
+
NB5 contributors who build drivers, bindings, or other modular pieces can use idiomatic forms of
+documentation, allowing them to work in one code base.
+
Contributors are encouraged to make integrated tests follow a literate programming pattern,
+so that they work as examples as well as verification tools.
+
Users of NB5 will never see broken documentation that is included in integrated tests, because any
failing test will prevent a build from being published or considered for a release.
+
+
This is a relatively new mechanism that will be improved further.
NB5 contains a bundled virtdata app which lets you verify bindings from the command line.
+It is useful for sanity checking values as well as getting a concurrent performance baseline for
+specific binding recipes.
+
diagnostic mode
+
The diagnose mode of virtdata can be used to explain binding resolution logic and potentially
how to fix a binding that doesn't resolve properly.
checkperf mode

To see the options for checkperf, you can simply run nb5 virtdata checkperf with no arguments, which gives you
+
ARGS: checkperf 'specifier' threads bufsize start end
+example: 'timeuuid()' 100 1000 0 10000
+specifier: A VirtData function specifier.
+threads: The number of concurrent threads to run.
+bufsize: The number of cycles to give each thread at a time.
+start: The start cycle for the test, inclusive.
+end: The end cycle for the test, exclusive.
+
+
Assuming you have a working binding recipe that you want to measure for
+concurrent performance, you can use the form below:
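
(A sketch, reconstructed from the usage text above and the example output below; the recipe,
thread count, and cycle range match that output.)

nb5 virtdata checkperf "Combinations('0-90-9'); ToInt();" 96 1000 0 1000000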
A single thread (the first thread) generates the values into a reference buffer.
+
The specified number of threads is started, and synchronized for simultaneous start.
+
Each thread is given successive batches of input values from the cycle range specified, in
+chunks of bufsize each. A dot . is printed to the console for each completed chunk.
+
Once all threads are complete, each checks their values against the reference buffer, and an
exception is thrown if any differences are found. (This would mean concurrency is affecting
+the values, which is not allowed for binding functions.)
+
After all chunks are generated and verified, statistics are displayed to the console:
+
+
output details
+
run data = [derived values in brackets]
specifier = 'Combinations('0-90-9'); ToInt();'
threads = 96
min = 0
max = 1000000
[count] = 1000000
buffersize = 1000
[totalGenTimeMs] = 2408.874399
[totalCmpTimeMs] = 2274.510746
[genPerMs] = 39852.638
[cmpPerMs] = 42206.879
[genPerS] = 39852638.245
[cmpPerS] = 42206879.070
+
+
This shows that on a 12 core (24 thread) system, around 40 million variates per second can be
generated from the above recipe (across all cores, of course).
+
In detail:
+
+
[totalGenTimeMs] tracks the total time the thread pool spent generating data, in milliseconds.
+
[totalCmpTimeMs] tracks the total time the thread pool spent cross-checking data across threads.
+
[genPerMs] and [cmpPerMs] show the calculated rates for generation and validation per
+millisecond, respectively.
+
[genPerS] and [cmpPerS] show the calculated rates for generation and validation per second,
+respectively.
+
+
interpretation
+
This example shows how effective variate generation can be. This doesn't mean that you can
easily simulate 40 million operations per second with this data. However, it does anecdotally indicate
the proportional load that generation puts on the system. For example, if you were generating
around 400K ops/s from a client with only this binding, it would be reasonable to estimate that variate
generation consumes around (400,000/40,000,000) of the system's cycles, or around 1%.
+
More realistic testing scenarios are likely to use proportionally more due to the amount of data
generation which is needed. Still, generating synthetic data makes for a more capable testing
harness because of the extra headroom you leave in your system for other necessary work, like
+managing a driver's connection pool or serdes on requests and responses.
NoSQLBench has a built-in library for the flexible management and expressive use of
+procedural generation libraries. This section explains the core concepts
+of this library, known as Virtual Data Set.
+
Basic Example
+
These functions can be stitched together in small recipes. When you give
+these mapping functions useful names in your workloads, they are called
+bindings.
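
A minimal sketch of such a pair of bindings, as they would appear in a workload YAML (the names
and recipes are illustrative):

bindings:
  user_id: HashRange(0,999999); ToString()
  full_name: FullNames()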
These are two bindings that you can use in your workloads. The names on the left
+are the binding names and the functions on the right are the binding recipes.
+Altogether, we just call them bindings.
+
Variates (Samples)
+
A numeric sample that is drawn from a distribution for the purpose
+of simulation or analysis is called a Variate.
+
Procedural Generation
+
Procedural generation is a category of algorithms and techniques which take
+a set or stream of inputs and produce an output in a different form or structure.
+While it may appear that procedural generation actually generates data, no output
+can come from a void. These techniques simply perturb a value in some stateful way,
+or map a coordinate system to another representation. Sometimes, both techniques are
+combined together.
+
Uniform Variate
+
A variate (sample) drawn from a uniform (flat) distribution is what we are used
+to seeing when we ask a system for a "random" value. These are often produced in
+one of two very common forms, either a register full of bits as with most hashing
+functions, or a floating point value between 0.0 and 1.0. (This is called the unit
+interval).
+
Uniform variates are not really random. Without careful attention to API usage,
+such random samples are not even unique from session to session. In many systems,
+the programmer has to be very careful to seed the random generator or they will
+get the same sequence of numbers every time they run their program. This turns out
+to be a useful property, and the random number generators that behave this way are
+usually called Pseudo-Random Number Generators, or PRNGs.
+
Apparently Random Variates
+
Uniform variates produced by PRNGs are not actually random, even though they may
+pass certain tests for randomness. The streams of values produced are nearly
+always measurably random by some meaningful standard. However, they can be
+used again in exactly the same way with the same initial seed.
+
Deterministic Variates
+
If you intentionally avoid randomizing the initial seed for a PRNG (for example, by not seeding
it with the current timestamp), then it gives you a way to replay a sequence.
+You can think of each initial seed as a bank of values which you can go back
+to at any time. However, when using stateful PRNGs as a way to provide these
+variates, your results will be order dependent.
+
Randomly Accessible Determinism
+
Instead of using a PRNG, it is possible to use a hash function instead. With a 64-bit
+register, you have 2^64 (2^63 in practice due to available implementations) possible
+values. If your hash function has high dispersion, then you will effectively
+get the same result of apparent randomness as well as deterministic sequences, even
+when you use simple sequences of inputs to your random() function. This allows
+you to access a random value in bucket 57, for example, and go back to it at any
+time and in any order to get the same value again.
+
Data Mapping Functions
+
The data mapping functions are the core building block of virtual data set.
+Data mapping functions are generally pure functions. This simply means that
+a generator function will always provide the same result given the same input.
+The parameters that you will see on some binding recipes are not representative
+of volatile state. These parameters are initializer values which are part of a
+function's definition. For example a Mod(5) will always behave like a Mod(5),
+as a pure function. But a Mod(7) will behave differently than a Mod(5), although
+each function will always produce its own stable result for a given input.
+
Combining RNGs and Data Mapping Functions
+
Because pure functions play such a key part in procedural generation techniques,
+the terms "data mapping function", "data mapper" and "data mapping library" will
+be more common in the library than "generator". Conceptually, mapping functions
+to not generate anything. It makes more sense to think of mapping data from one
+domain to another. Even so, the data that is yielded by mapping functions can
+appear quite realistic.
+
Because good RNGs do generally contain internal state, they aren't purely
functional. This means that in some cases -- those in which you need to have
random access to a virtual data set -- hash functions make more sense. This
+toolkit allows you to choose between the two in some cases. However, it
+generally favors using hashing and pure-function approaches where possible. Even
+the statistical curve simulations do this.
+
Bindings Template
+
It is often useful to have a template that describes a set of generator
+functions that can be reused across many threads or other application scopes. A
+bindings template is a way to capture the requested generator functions for
+re-use, with actual scope instantiation of the generator functions controlled by
+the usage point. For example, in a JEE app, you may have a bindings template in
+the application scope, and a set of actual bindings within each request (thread
+scope).
Collection functions allow you to construct Java Lists, Maps or Sets.
+These functions often take the form of a higher-order function, where
+the inner function definitions are called to determine the size of
+the collection, the individual values to be added, etc.
+
For each type of collection, there exists multiple forms which allow you to control how the provided
+function arguments are used to set the values into the collection.
+
Sized or Pair-wise functions
+
Any function in this category with Sized occurring in its name must be initialized with a sizing
+function as an argument. For example, ListSized(Mod(5),NumberNameToString()) will create a list
+which is sized by the first function -- a list between 0 and 4 elements in this case. With an input
+value of 3L, the resulting List will contain 3 elements. With an input of 7L, it will contain 2
+elements.
+
Alternately, when a function does not contain Sized in its name, the arguments provided are used
+as pair-wise mapping functions to the elements in the resulting collection.
+
Simply put, a Sized function will always require a sizing function as the first argument.
+
Stepped or Hashed or Same
+
Any function in this category which contains Stepped in its name will automatically increment the
+input value used for each element in the collection. For example
ListStepped(NumberNameToString(),NumberNameToString()) will always create a two-element List, but
+the inputs to the provided functions will be i+0, i+1, where i is the input value to the ListStepped
+function.
+
Alternately, any function in this category which contains Hashed in its name will automatically
+hash the input value used for each element. This is useful when you want to create values within a
collection that vary significantly with respect to their common seed value. For example,
+ListHashed(NumberNameToString(),NumberNameToString(),NumberNameToString()) will always provide a
+three element List with values that are not obviously related to each other. For each additional
+element added to the collection, the previous input is hashed, so there is a relationship, but it
will not be obvious nor discernible for most testing purposes.
+
If neither Stepped nor Hashed occurs in the function name, then every element function
+gets the exact value given to the main function.
+
Overview of functions
+
All of the useful collection binding functions follow the same basic patterns above.
+
List Functions

              Same Input          Stepped Input           Hashed Input
Pair-wise     ListFunctions(...)  ListStepped(...)        ListHashed(...)
Sized         ListSized(...)      ListSizedStepped(...)   ListSizedHashed(...)
+
+
The name ListFunctions(...) was chosen to avoid clashing with the existing List(...) function.
+
Set Functions
+
The values produced by the provided element functions for Sets are not checked for duplicates.
This means that you must ensure that your element functions yield distinct values to insert into
+the collection as it is being built if you want to have a particular cardinality of values in your
+collection. Overwrites are allowed, although they may not be intended in most cases.
+
              Same Input          Stepped Input           Hashed Input
Pair-wise     SetFunctions(...)   SetStepped(...)         SetHashed(...)
Sized         SetSized(...)       SetSizedStepped(...)    SetSizedHashed(...)
+
+
The name SetFunctions(...) was chosen to avoid clashing with the existing Set(...) function.
+
Map Functions
+
The keys produced by the provided key functions for Maps are not checked for duplicates.
This means that you must ensure that your key functions yield distinct keys to insert into
+the collection as it is being built if you want to have a particular cardinality of values in your
+collection. Overwrites are allowed, although they may not be intended in most cases.
+
              Same Input          Stepped Input           Hashed Input
Pair-wise     MapFunctions(...)   MapStepped(...)         MapHashed(...)
Sized         MapSized(...)       MapSizedStepped(...)    MapSizedHashed(...)
+
+
The name MapFunctions(...) was chosen to avoid clashing with the existing Map(...) function.
+
For the key and value functions provided to a Map function, they are taken as even-odd pairs (starting
+at zero). For sized functions, the last defined key function will be used for elements past
+the size of the key functions provided. The same is true for the value functions. For example,
+a call to MapSized(3,f(...),g(...),h(...)) will use f(...) and g(...) for the first key and value,
+but from that point forward will use h(...) for all keys and g(...) for all values.
+
HashedLineToStringList
+
Creates a List<String> from a list of words in a file.
+
+
long -> HashedLineToStringList(String: filename, int: minSize, int: maxSize) -> List
+
+
HashedLineToStringSet
+
Return a pseudo-randomly created Set from the values in the specified file.
+
+
long -> HashedLineToStringSet(String: filename, int: minSize, int: maxSize) -> Set
+
Create a set of words sized between 2 and 10 elements
+
+
+
+
HashedLineToStringStringMap
+
Create a String-String map from the specified file, ranging in size from 0 to the specified maximum.
+
+
long -> HashedLineToStringStringMap(String: paramFile, int: maxSize) -> Map<String,String>
+
+
HashedRangeToLongList
+
Create a list of longs.
+
+
long -> HashedRangeToLongList(int: minVal, int: maxVal, int: minSize, int: maxSize) -> List
+
+
Join
+
This takes any collection and concatenates the String representation with a specified delimiter.
+
+
Collection -> Join(String: delim) -> String
+
+
example:Join(',')
+
Concatenate the incoming collection with ','
+
+
+
+
List
+
Create a {@code List} from a long input based on two functions, the first to determine the list size, and the second to populate the list with object values. The input fed to the second function is incremented between elements.
+
To directly create Lists of Strings from the String version of the same mapping functions, simply use {@link
+StringList} instead.
+
This function is not recommended, given that the other List functions are more clear about how they construct values.
+This function may be removed in the next major release, but it will be retained as deprecated for now.
+
+
long -> List(Object: sizeFunc, Object: valueFunc) -> List
+
+
ListFunctions
+
Create a List from a long input based on a set of provided functions. As a 'Pair-wise' function, the size of the resulting collection is determined directly by the number of provided element functions. As neither a 'Stepped' nor a 'Hashed' function, the input value used by each element function is the same as that provided to the outer function.
+
+
long -> ListFunctions(Object[]...: funcs) -> List
+
+
ListHashed
+
Create a List from a long input based on a set of provided functions. As a 'Pair-wise' function, the size of the resulting collection is determined directly by the number of provided element functions. As a 'Hashed' function, the input value is hashed again before being used by each element function.
+
+
long -> ListHashed(Object[]...: funcs) -> List
+
+
ListSized
+
Create a List from a long input based on a set of provided functions. As a 'Sized' function, the first argument is a function which determines the size of the resulting list. Additional functions provided are used to generate the elements to add to the collection. If the size is larger than the number of provided functions, the last provided function is used repeatedly as needed. As neither a 'Stepped' nor a 'Hashed' function, the input value used by each element function is the same as that provided to the outer function.
+
+
+
long -> ListSized(Object: sizeFunc, Object[]...: funcs) -> List
Create a sized list of object values from each function output. If the size exceeds the number of element functions, the last function is reused till the end of the list.
+
+
+
+
long -> ListSized(int: size, Object[]...: funcs) -> List
+
+
+
ListSizedHashed
+
Create a List from a long input based on a set of provided functions. As a 'Sized' function, the first argument is a function which determines the size of the resulting list. Additional functions provided are used to generate the elements to add to the collection. If the size is larger than the number of provided functions, the last provided function is used repeatedly as needed. As a 'Hashed' function, the input value is hashed again before being used by each element function.
+
+
+
long -> ListSizedHashed(Object: sizeFunc, Object[]...: funcs) -> List
Create a sized, hashed list of object values from each function output. If the size exceeds the number of element functions, the last function is reused till the end of the list.
+
+
+
+
long -> ListSizedHashed(int: size, Object[]...: funcs) -> List
+
+
+
ListSizedStepped
+
Create a List from a long input based on a set of provided functions. As a 'Sized' function, the first argument is a function which determines the size of the resulting list. Additional functions provided are used to generate the elements to add to the collection. If the size is larger than the number of provided functions, the last provided function is used repeatedly as needed. As a 'Stepped' function, the input value is incremented before being used by each element function.
+
+
+
long -> ListSizedStepped(Object: sizeFunc, Object[]...: funcs) -> List
long -> ListSizedStepped(int: size, Object[]...: funcs) -> List
+
+
+
ListStepped
+
Create a List from a long input based on a set of provided functions. As a 'Pair-wise' function, the size of the resulting collection is determined directly by the number of provided element functions. As a 'Stepped' function, the input value is incremented before being used by each element function.
+
+
long -> ListStepped(Object[]...: funcs) -> List
+
+
Map
+
Create a {@code Map} from a long input based on three functions, the first to determine the map size, and the second to populate the map with key objects, and the third to populate the map with value objects. The long input fed to the second and third functions is incremented between entries. To directly create Maps with key and value Strings using the same mapping functions, simply use {@link StringMap} instead.
+
+
+
long -> Map(function.LongToIntFunction: sizeFunc, function.LongFunction: keyFunc, function.LongFunction: valueFunc) -> Map<Object,Object>
create a map of size 2, with a specific function for each key and each value
+
+
+
+
MapFunctions
+
Create a Map from a long input based on a set of provided key and value functions. Any duplicate entries produced by the key functions are elided. As a 'Pair-wise' function, the size of the resulting collection is determined directly by the number of provided element functions. Since this is a map, the functions come in pairs, each even numbered function is a key function and each odd numbered function is the corresponding value function. As neither a 'Stepped' nor a 'Hashed' function, the input value used by each key and value function is the same as that provided to the outer function.
+
+
long -> MapFunctions(Object[]...: funcs) -> Map<Object,Object>
+
Create a map of object values. Produces values like {'one':'one',1:1}.
+
+
+
+
MapHashed
+
Create a Map from a long input based on a set of provided key and value functions. Any duplicate entries produced by the key functions are elided. As a 'Pair-wise' function, the size of the resulting collection is determined directly by the number of provided element functions. Since this is a map, the functions come in pairs, each even numbered function is a key function and each odd numbered function is the corresponding value function. As a 'Hashed' function, the input value is hashed again before being used by each key and value function.
+
+
long -> MapHashed(Object[]...: funcs) -> Map<Object,Object>
+
Create a map of object values. Produces values like {'one':'one','4464361019114304900':'4464361019114304900'}.
+
+
+
+
MapSized
+
Create a Map from a long input based on a set of provided key and value functions. Any duplicate entries produced by the key functions are elided. As a 'Sized' function, the first argument is a function which determines the size of the resulting map. Additional functions provided are used to generate the elements to add to the collection, as in the pair-wise mode of {@link MapFunctions}. If the size is larger than the number of provided functions, the last provided function is used repeatedly as needed. (respectively for key functions as well as value functions) As neither a 'Stepped' nor a 'Hashed' function, the input value used by each key and value function is the same as that provided to the outer function.
+
+
+
long -> MapSized(Object: sizeFunc, Object[]...: funcs) -> Map<Object,Object>
Create a map of object values. Produces values like {'one':'one',1:1}.
+
+
+
+
long -> MapSized(int: size, Object[]...: funcs) -> Map<Object,Object>
+
+
+
MapSizedHashed
+
Create a Map from a long input based on a set of provided key and value functions. Any duplicate entries produced by the key functions are elided. As a 'Sized' function, the first argument is a function which determines the size of the resulting map. Additional functions provided are used to generate the elements to add to the collection, as in the pair-wise mode of {@link MapFunctions}. If the size is larger than the number of provided functions, the last provided function is used repeatedly as needed. (respectively for key functions as well as value functions) As a 'Hashed' function, the input value is hashed again before being used by each key and value function.
+
+
+
long -> MapSizedHashed(Object: sizeFunc, Object[]...: funcs) -> Map<Object,Object>
Create a map of object values. Produces values like {'one':'one',1:1}.
+
+
+
+
long -> MapSizedHashed(int: size, Object[]...: funcs) -> Map<Object,Object>
+
+
+
MapSizedStepped
+
Create a Map from a long input based on a set of provided key and value functions. Any duplicate entries produced by the key functions are elided. As a 'Sized' function, the first argument is a function which determines the size of the resulting map. Additional functions provided are used to generate the elements to add to the collection, as in the pair-wise mode of {@link MapFunctions}. If the size is larger than the number of provided functions, the last provided function is used repeatedly as needed. (respectively for key functions as well as value functions) As a 'Stepped' function, the input value is incremented before being used by each key or value function.
+
+
+
long -> MapSizedStepped(Object: sizeFunc, Object[]...: funcs) -> Map<Object,Object>
Create a map of object values. Produces values like {'one':'one',1:1}.
+
+
+
+
long -> MapSizedStepped(int: size, Object[]...: funcs) -> Map<Object,Object>
+
+
+
MapStepped
+
Create a Map from a long input based on a set of provided key and value functions. Any duplicate entries produced by the key functions are elided. As a 'Pair-wise' function, the size of the resulting collection is determined directly by the number of provided element functions. Since this is a map, the functions come in pairs, each even numbered function is a key function and each odd numbered function is the corresponding value function. As a 'Stepped' function, the input value is incremented before being used by each key or value function.
+
+
long -> MapStepped(Object[]...: funcs) -> Map<Object,Object>
+
Create a map of object values. Produces values like {'one':'one',1:1}.
+
+
+
+
Set
+
Create a {@code Set} from a long input based on two functions, the first to determine the set size, and the second to populate the set with object values. The input fed to the second function is incremented between elements.
+
To create Sets of Strings from the String version of the same mapping functions, simply use {@link StringSet}
+instead.
+
+
+
long -> Set(function.LongToIntFunction: sizeFunc, function.LongFunction: valueFunc) -> Set
+
+
example:Set(HashRange(3,7),Add(15L))
+
create a set between 3 and 7 elements of Long values
+
+
+
+
long -> Set(function.LongToIntFunction: sizeFunc, function.LongUnaryOperator: valueFunc) -> Set
+
+
+
long -> Set(function.LongToIntFunction: sizeFunc, function.LongToIntFunction: valueFunc) -> Set
+
+
+
long -> Set(function.LongFunction: sizeFunc, function.LongFunction: valueFunc) -> Set
+
+
+
long -> Set(function.LongFunction: sizeFunc, function.LongUnaryOperator: valueFunc) -> Set
+
+
+
long -> Set(function.LongFunction: sizeFunc, function.LongToIntFunction: valueFunc) -> Set
+
+
+
long -> Set(function.LongUnaryOperator: sizeFunc, function.LongFunction: valueFunc) -> Set
+
+
+
long -> Set(function.LongUnaryOperator: sizeFunc, function.LongUnaryOperator: valueFunc) -> Set
+
+
+
long -> Set(function.LongUnaryOperator: sizeFunc, function.LongToIntFunction: valueFunc) -> Set
+
+
+
SetFunctions
+
Create a Set from a long input based on a set of provided functions. Any duplicate values are elided. As a 'Pair-wise' function, the size of the resulting collection is determined directly by the number of provided element functions. As neither a 'Stepped' nor a 'Hashed' function, the input value used by each element function is the same as that provided to the outer function.
long -> SetFunctions(Object[]...: funcs) -> Set

Create a set of object values from each function output. Produces values like ['one'], as each function produces the same value.
+
+
+
+
SetHashed
+
Create a Set from a long input based on a set of provided functions. As a 'Pair-wise' function, the size of the resulting collection is determined directly by the number of provided element functions, assuming no duplicate values. As a 'Hashed' function, the input value is hashed again before being used by each element function.
long -> SetHashed(Object[]...: funcs) -> Set

Create a hashed set of object values from each function output, like ['2945182322382062539','text']
+
+
+
+
SetSized
+
Create a Set from a long input based on a set of provided functions. As a 'Sized' function, the first argument is a function which determines the size of the resulting set. Additional functions provided are used to generate the elements to add to the collection. If the size is larger than the number of provided functions, the last provided function is used repeatedly as needed. As neither a 'Stepped' nor a 'Hashed' function, the input value used by each element function is the same as that provided to the outer function.
+
+
+
long -> SetSized(Object: sizeFunc, Object[]...: funcs) -> Set
Create a sized set of object values, like ['one','text'], because 'text' is duplicated 4 times
+
+
+
+
long -> SetSized(int: size, Object[]...: funcs) -> Set
+
+
+
SetSizedHashed
+
Create a Set from a long input based on a set of provided functions. As a 'Sized' function, the first argument is a function which determines the size of the resulting set. Additional functions provided are used to generate the elements to add to the collection. If the size is larger than the number of provided functions, the last provided function is used repeatedly as needed. As a 'Hashed' function, the input value is hashed again before being used by each element function.
+
+
+
long -> SetSizedHashed(Object: sizeFunc, Object[]...: funcs) -> Set
Create a sized set of values like ['2945182322382062539', 'text', '37945690212757860', '287864597160630738', '3299224200079606887']
+
+
+
+
long -> SetSizedHashed(int: size, Object[]...: funcs) -> Set
+
+
+
SetSizedStepped
+
Create a Set from a long input based on a set of provided functions. As a 'Sized' function, the first argument is a function which determines the size of the resulting set. Additional functions provided are used to generate the elements to add to the collection. If the size is larger than the number of provided functions, the last provided function is used repeatedly as needed. As a 'Stepped' function, the input value is incremented before being used by each element function.
+
+
+
long -> SetSizedStepped(Object: sizeFunc, Object[]...: funcs) -> Set
long -> SetSizedStepped(int: size, Object[]...: funcs) -> Set
+
+
+
SetStepped
+
Create a Set from a long input based on a set of provided functions. As a 'Pair-wise' function, the size of the resulting collection is determined directly by the number of provided element functions, assuming no duplicate values. As a 'Stepped' function, the input value is incremented before being used by each element function.
long -> SetStepped(Object[]...: funcs) -> Set

StringList

Create a {@code List} from a long value, based on two functions, the first to determine the list size, and the second to populate the list with String values. The input fed to the second function is incremented between elements. Regardless of the object type provided by the second function, {@link Object#toString()} is used to get the value to add to the list.
+
To create Lists of any type of object simply use {@link List} with an specific value mapping function.
+
+
long -> StringList(function.LongToIntFunction: sizeFunc, function.LongFunction: valueFunc) -> List
+
+
example:StringList(HashRange(3,7),Add(15L))
+
create a list between 3 and 7 elements of String representations of Long values
+
+
+
+
StringMap
+
Create a {@code Map} from a long input based on three functions, the first to determine the map size, and the second to populate the map with key objects, and the third to populate the map with value objects. The long input fed to the second and third functions is incremented between entries. Regardless of the object type provided by the second and third functions, {@link Object#toString()} is used to determine the key and value to add to the map. To create Maps of any key and value types, simply use {@link Map} with an specific key and value mapping functions.
+
+
+
long -> StringMap(function.LongToIntFunction: sizeFunc, function.LongFunction: keyFunc, function.LongFunction: valueFunc) -> Map<String,String>
create a map of size 2, with a specific function for each key and each value
+
+
+
+
StringSet
+
Create a {@code Set} from a long based on two functions, the first to determine the set size, and the second to populate the set with String values. The input fed to the second function is incremented between elements. Regardless of the object type provided by the second function, {@link Object#toString()} is used to get the value to add to the list. To create Sets of any type of object simply use {@link Set} with a specific value mapping function.
+
+
+
long -> StringSet(function.LongToIntFunction: sizeFunc, function.LongFunction: valueFunc) -> Set
+
+
example:StringSet(HashRange(3,7),Add(15L))
+
create a set between 3 and 7 elements of String representations of Long values
+
+
+
+
long -> StringSet(function.LongToIntFunction: sizeFunc, function.LongUnaryOperator: valueFunc) -> Set
+
+
+
long -> StringSet(function.LongToIntFunction: sizeFunc, function.LongToIntFunction: valueFunc) -> Set
+
+
+
long -> StringSet(function.LongFunction<?>: sizeFunc, function.LongFunction: valueFunc) -> Set
+
+
+
long -> StringSet(function.LongFunction<?>: sizeFunc, function.LongUnaryOperator: valueFunc) -> Set
+
+
+
long -> StringSet(function.LongFunction<?>: sizeFunc, function.LongToIntFunction: valueFunc) -> Set
+
+
+
long -> StringSet(function.LongUnaryOperator: sizeFunc, function.LongFunction: valueFunc) -> Set
+
+
+
long -> StringSet(function.LongUnaryOperator: sizeFunc, function.LongUnaryOperator: valueFunc) -> Set
+
+
+
long -> StringSet(function.LongUnaryOperator: sizeFunc, function.LongToIntFunction: valueFunc) -> Set
Conversion functions simply allow values of one type
+to be converted to another type in an obvious way.
+
ByteBufferSizedHashed
+
Create a ByteBuffer from a long input based on a provided size function. As a 'Sized' function, the first argument is a function which determines the size of the resulting ByteBuffer. As a 'Hashed' function, the input value is hashed again before being used as value.
+
+
+
long -> ByteBufferSizedHashed(int: size) -> java.nio.ByteBuffer
+
+
example:ByteBufferSizedHashed(16)
+
Functionally identical to HashedToByteBuffer(16) but using dynamic sizing implementation
+
example:ByteBufferSizedHashed(HashRange(10, 14))
+
Create a ByteBuffer with variable limit (10 to 14)
+
+
+
+
long -> ByteBufferSizedHashed(Object: sizeFunc) -> java.nio.ByteBuffer
+
+
+
ByteBufferToHex
+
Convert the contents of the input ByteBuffer to a String as hexadecimal. This function is retained to avoid breaking previous workload definitions, but you should use {@link ToHexString} instead.
converts a double value to a string with only two digits after the decimal
+
+
+
+
DigestToByteBuffer
+
Computes the digest of the ByteBuffer on input and stores it in the output ByteBuffer. The digestTypes available are: MD2 MD5 SHA-1 SHA-224 SHA-256 SHA-384 SHA-512 SHA3-224 SHA3-256 SHA3-384 SHA3-512
+
+
+
long -> DigestToByteBuffer(String: digestType) -> java.nio.ByteBuffer
DoubleToFloat

Convert the input double value to the closest float value.
+
+
double -> DoubleToFloat() -> Float
+
+
EscapeJSON
+
Escape all special characters which are required to be escaped when found within JSON content according to the JSON spec
+
{@code
+\b Backspace (ascii code 08)
+\f Form feed (ascii code 0C)
+\n New line
+\r Carriage return
+\t Tab
+\" Double quote
+\\ Backslash character
+\/ Forward slash
+}
+
+
+
String -> EscapeJSON() -> String
+
+
Flow
+
Combine functions into one.
+
This function allows you to combine multiple other functions into one. This is often useful
+for constructing more sophisticated recipes, when you don't have the ability to use
+control flow or non-functional forms.
+
The functions will be stitched together using the same logic that VirtData uses when
+combining flows outside functions. That said, if the functions selected are not the right ones,
+then it is possible to end up with the wrong data type at the end. To remedy this, be sure
+to add input and output qualifiers, like long-> or ->String where
+appropriate, to ensure that VirtData selects the right functions within the flow.
+
+
long -> Flow(Object[]...: funcs) -> Object
+
+
Format
+
Apply the Java String.format method to an incoming object. @see Java 8 String.format(...) javadoc Note: This function can often be quite slow, so more direct methods are generally preferable.
+
+
Object -> Format(String: format) -> String
+
+
example:Format('Y')
+
Yield the formatted year from a Java date object.
+
+
+
+
HTMLEntityDecode
+
Decode HTML entities.
+
+
String -> HTMLEntityDecode() -> String
+
+
example:HTMLEntityDecode()
+
Decode/Unescape input from HTML4 valid to text.
+
+
+
+
HTMLEntityEncode
+
encode HTML Entities
+
+
String -> HTMLEntityEncode() -> String
+
+
example:HTMLEntityEncode()
+
Encode/Escape input into HTML4 valid entities.
+
+
+
+
LongToByte
+
Convert the input long value to a byte, with negative values masked away.
+
+
long -> LongToByte() -> Byte
+
+
LongToShort
+
Convert the input value from long to short.
+
+
long -> LongToShort() -> Short
+
+
ModuloToBigDecimal
+
Return a {@code BigDecimal} value as the result of modulo division with the specified divisor.
+
+
+
long -> ModuloToBigDecimal() -> java.math.BigDecimal
+
+
+
long -> ModuloToBigDecimal(long: modulo) -> java.math.BigDecimal
+
+
+
ModuloToBigInt
+
Return a {@code BigInteger} value as the result of modulo division with the specified divisor.
+
+
+
long -> ModuloToBigInt() -> java.math.BigInteger
+
+
+
long -> ModuloToBigInt(long: modulo) -> java.math.BigInteger
+
+
+
ModuloToBoolean
+
Return a boolean value as the result of modulo division with the specified divisor.
+
+
long -> ModuloToBoolean() -> Boolean
+
+
ModuloToByte
+
Return a byte value as the result of modulo division with the specified divisor.
+
+
long -> ModuloToByte(long: modulo) -> Byte
+
+
ModuloToShort
+
Return a short value as the result of modulo division with the specified divisor.
+
+
long -> ModuloToShort(long: modulo) -> Short
+
+
SnappyComp
+
Compress the input using snappy compression
+
+
String -> SnappyComp() -> java.nio.ByteBuffer
+
+
StringDateWrapper
+
This function wraps an epoch time in milliseconds into a String as specified in the format. The valid formatters are documented at @see DateTimeFormat API Docs
+
+
long -> StringDateWrapper(String: format) -> String
+
+
ToBase64String
+
Computes the Base64 representation of the byte image of the input long.
+
+
+
String -> ToBase64String() -> String
+
+
example:ToBase64String()
+
encode any input as Base64
+
+
+
+
UUID -> ToBase64String() -> String
+
+
example:ToBase64String()
+
Encode the bits of a UUID into a Base64 String
+
+
+
+
long -> ToBase64String() -> String
+
+
example:ToBase64String()
+
Convert the bytes of a long input into a base64 String
+
+
+
+
ToBigDecimal
+
Convert values to BigDecimals at configurable scale or precision.
+
ToBigDecimal(...) functions which take whole-numbered inputs may have
+a scale parameter or a custom MathContext, but not both. The scale parameter
+is not supported for String or Double input forms.
+
+
+
double -> ToBigDecimal() -> java.math.BigDecimal
+
+
example:ToBigDecimal()
+
Convert all double values to BigDecimal values with no limits (using MathContext.UNLIMITED)
notes: Convert all input values to BigDecimal values with a specific MathContext.
+The value for context can be one of UNLIMITED,
+DECIMAL32, DECIMAL64, DECIMAL128, or any valid configuration supported by
+{@link MathContext#MathContext(String)}, such as {@code "precision=32 roundingMode=CEILING"}.
+In the latter form, roundingMode can be any valid value for {@link RoundingMode}, like
+UP, DOWN, CEILING, FLOOR, HALF_UP, HALF_DOWN, HALF_EVEN, or UNNECESSARY.
Convert all long values to whole-numbered BigDecimal values
+
+
+
+
long -> ToBigDecimal(int: scale) -> java.math.BigDecimal
+
+
example:ToBigDecimal(0)
+
Convert all long values to whole-numbered BigDecimal values
+
example:ToBigDecimal(2)
+
Convert long 'pennies' BigDecimal with 2 digits after decimal point
+
+
+
+
long -> ToBigDecimal(String: context) -> java.math.BigDecimal
+
+
+
notes: Convert all input values to BigDecimal values with a specific MathContext. This form is only
+supported for scale=0, meaning whole numbers. The value for context can be one of UNLIMITED,
+DECIMAL32, DECIMAL64, DECIMAL128, or any valid configuration supported by
+{@link MathContext#MathContext(String)}, such as {@code "precision=32 roundingMode=CEILING"}.
+In the latter form, roundingMode can be any valid value for {@link RoundingMode}, like
+UP, DOWN, CEILING, FLOOR, HALF_UP, HALF_DOWN, HALF_EVEN, or UNNECESSARY.
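+
+
As a sketch of the MathContext form described in the notes above (the binding names are illustrative), the context string is passed verbatim:
+
bindings:
  amount: ToBigDecimal('DECIMAL64')  # whole-numbered BigDecimal under the DECIMAL64 context
  precise_amount: ToBigDecimal('precision=32 roundingMode=CEILING')
+
+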
Convert all int values to BigDecimal values with no limits (using MathContext.UNLIMITED)
+
+
+
+
int -> ToBigDecimal(String: context) -> java.math.BigDecimal
+
+
+
notes: Convert all input values to BigDecimal values with a specific MathContext.
+The value for context can be one of UNLIMITED,
+DECIMAL32, DECIMAL64, DECIMAL128, or any valid configuration supported by
+{@link MathContext#MathContext(String)}, such as {@code "precision=32 roundingMode=CEILING"}.
+In the latter form, roundingMode can be any valid value for {@link RoundingMode}, like
+UP, DOWN, CEILING, FLOOR, HALF_UP, HALF_DOWN, HALF_EVEN, or UNNECESSARY.
notes: Convert all input values to BigDecimal values with a specific MathContext. This form is only
+supported for scale=0, meaning whole numbers. The value for context can be one of UNLIMITED,
+DECIMAL32, DECIMAL64, DECIMAL128, or any valid configuration supported by
+{@link MathContext#MathContext(String)}, such as {@code "precision=32 roundingMode=CEILING"}.
+In the latter form, roundingMode can be any valid value for {@link RoundingMode}, like
+UP, DOWN, CEILING, FLOOR, HALF_UP, HALF_DOWN, HALF_EVEN, or UNNECESSARY.
notes: Convert the ByteBuffer's contents to a hex string upper or lower case.
+
+
+
+
ToInetAddress
+
Convert the input value to a {@code InetAddress}
+
+
long -> ToInetAddress() -> InetAddress
+
+
ToInt
+
Convert the input value to an int using a modulus remainder. If a scale is provided, then the value is wrapped at that value. Otherwise, {@link Integer#MAX_VALUE} is used.
+
+
+
Object -> ToInt() -> Integer
+
+
+
Double -> ToInt(int: scale) -> Integer
+
+
+
Double -> ToInt() -> Integer
+
+
+
double -> ToInt(int: scale) -> int
+
+
+
double -> ToInt() -> int
+
+
+
long -> ToInt() -> int
+
+
+
String -> ToInt() -> Integer
+
+
+
long -> ToInt(int: scale) -> int
+
+
example:ToInt(1000)
+
converts a long input value to an int between 0 and 999, inclusive
+
+
+
+
long -> ToInt() -> int
+
+
example:ToInt()
+
converts a long input value to an int between 0 and 2147483647, inclusive
+
+
+
+
ToJSON
+
Convert the input object to a JSON string with Gson.
+
+
Object -> ToJSON() -> String
+
+
ToJSONPretty
+
Convert the input object to a JSON string with Gson, with pretty printing enabled.
+
+
Object -> ToJSONPretty() -> String
+
+
ToLong
+
Convert the input value to a long.
+
+
+
Float -> ToLong(long: scale) -> Long
+
+
+
Float -> ToLong() -> Long
+
+
+
String -> ToLong() -> Long
+
+
+
double -> ToLong(long: scale) -> long
+
+
+
double -> ToLong() -> long
+
+
+
ToMD5ByteBuffer
+
Converts the byte image of the input long to an MD5 digest in ByteBuffer form. This usage is deprecated due to the weakness of the MD5 digest. Use DigestToByteBuffer with MD5 only when absolutely needed for existing NB tests; stronger digest algorithms (e.g. SHA-256) are recommended due to MD5's limitations.
+
+
long -> ToMD5ByteBuffer() -> java.nio.ByteBuffer
+
+
+
notes: Deprecated usage due to unsafe MD5 digest.
+Use the DigestToByteBuffer with alternatives other than MD5.
+
+
+
example:MD5ByteBuffer()
+
+
+
convert the input to an MD5 digest of its bytes
+
+
+
+
+
ToShort
+
Convert the input value to a short.
+
+
+
String -> ToShort() -> Short
+
+
+
Float -> ToShort() -> Short
+
+
+
Float -> ToShort(int: modulo) -> Short
+
+
+
long -> ToShort() -> Short
+
+
+
long -> ToShort(int: wrapat) -> Short
+
+
notes: This form allows for limiting the short values at a lower limit than Short.MAX_VALUE.
+@param wrapat The maximum value to return.
+
+
+
+
int -> ToShort() -> Short
+
+
+
int -> ToShort(int: scale) -> Short
+
+
+
double -> ToShort() -> Short
+
+
+
double -> ToShort(int: modulo) -> Short
+
+
+
ToString
+
Converts the input to the most obvious string representation with String.valueOf(...). Forms which accept a function will evaluate that function first and then apply String.valueOf() to the result.
+
+
+
Float -> ToString() -> String
+
+
+
long -> ToString() -> String
+
+
+
long -> ToString(function.LongUnaryOperator: f) -> String
+
+
+
long -> ToString(function.LongFunction<?>: f) -> String
+
+
+
long -> ToString(function.Function<Long,?>: f) -> String
+
+
+
long -> ToString(function.LongToIntFunction: f) -> String
+
+
+
long -> ToString(function.LongToDoubleFunction: f) -> String
+
+
+
long -> ToString(io.nosqlbench.virtdata.library.basics.shared.from_long.to_byte.LongToByte: f) -> String
+
+
+
long -> ToString(io.nosqlbench.virtdata.library.basics.shared.from_long.to_short.LongToShort: f) -> String
ToUUID
+
This function creates a non-random UUID in the type 4 version (Random). It always puts the same value in the MSB position of the UUID format. The input value is put in the LSB position. This function is suitable for deterministic testing of scenarios which depend on type 4 UUIDs, but without the mandated randomness that makes testing difficult. Just be aware that the MSB will always contain the value 0x0123456789ABCDEFL unless you specify a different long value to pre-fill it with.
+
+
+
long -> ToUUID() -> UUID
+
+
+
long -> ToUUID(long: msbs) -> UUID
+
+
+
URLDecode
+
URLDecode string data
+
+
String -> URLDecode(String: charset) -> String
+
+
notes: URLDecode any incoming string using the specified charset.
+
+
+
+
@param charset A valid character set name from {@link Charset}
+
+
+
example:URLDecode('UTF-16')
+
+
+
URLDecode using the UTF-16 charset.
+
+
+
String -> URLDecode() -> String
+
+
example:URLDecode()
+
URLDecode using the default UTF-8 charset.
+
+
+
+
URLEncode
+
URLEncode string data
+
+
String -> URLEncode(String: charset) -> String
+
+
notes: UrlEncode any incoming string using the specified charset.
+
+
+
+
@param charset A valid character set name from {@link Charset}
Functions in this category know about times and dates, datetimes, seconds or millisecond epoch times, and so forth.
+
Some of the functions in this category are designed to allow testing of UUID types which are usually designed to avoid
+determinism. This makes it possible to test systems which depend on UUIDs but which require determinism in test data.
+This is strictly for testing use. Breaking the universally-unique properties of UUIDs in production systems is a bad
+idea. Yet, in testing, this determinism is quite useful.
+
CqlDurationFunctions
+
Map a long value into a CQL Duration object based on a set of field functions.
+
+
+
long -> CqlDurationFunctions(Object: monthsFunc, Object: daysFunc, Object: nanosFunc) -> com.datastax.oss.driver.api.core.data.CqlDuration
+
+
notes: Create a CQL Duration object from the three provided field functions.
+@param monthsFunc A function that will yield the months value
+@param daysFunc A function that will yield the days value
+@param nanosFunc A function that will yield the nanos value
+
+
+
+
long -> CqlDurationFunctions(Object: daysFunc, Object: nanosFunc) -> com.datastax.oss.driver.api.core.data.CqlDuration
+
+
notes: Create a CQL Duration object from the two provided field functions. The months field is always set to
+zero in this form.
+@param daysFunc A function that will yield the days value
+@param nanosFunc A function that will yield the nanos value
+
+
+
+
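A hedged sketch of the three-argument form; the Mod field functions and the binding name are illustrative choices only.
+
bindings:
  window: CqlDurationFunctions(Mod(12L),Mod(30L),Mod(86400000000000L))  # months 0-11, days 0-29, nanos within one day
+
+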
CurrentEpochMillis
+
Provide the millisecond epoch time as given by
+
{@code System.currentTimeMillis()}
+
+
CAUTION: This does not produce deterministic test data.
+
+
long -> CurrentEpochMillis() -> long
+
+
DateRangeDuring
+
Takes an input as a reference point in epoch time, and converts it to a DateRange, with the bounds set to the lower and upper timestamps which align to the specified precision. You can use any of these precisions to control the bounds around the provided timestamp: millisecond, second, minute, hour, day, month, or year. If the zoneid is not specified, it defaults to "GMT". If the zoneid is set to "default", then the zoneid is set to the default for the JVM. Otherwise, the specified zone is used.
+
+
+
long -> DateRangeDuring(String: precision) -> com.datastax.dse.driver.api.core.data.time.DateRange
+
+
example:DateRangeDuring('millisecond')
+
Convert the incoming millisecond to an equivalent DateRange
+
example:DateRangeDuring('minute')
+
Convert the incoming millisecond to a DateRange for the minute in which the millisecond falls
+
+
+
+
long -> DateRangeDuring(String: precision, String: zoneid) -> com.datastax.dse.driver.api.core.data.time.DateRange
+
+
+
DateRangeFunc
+
Uses the precision and the two functions provided to create a DateRange. You can use any of these precisions to control the bounds around the provided timestamp: millisecond, second, minute, hour, day, month, or year. If the zoneid is not specified, it defaults to "GMT". If the zoneid is set to "default", then the zoneid is set to the default for the JVM. Otherwise, the specified zone is used.
+
+
+
long -> DateRangeFunc(String: precision, Object: lowerFunc, Object: upperFunc) -> com.datastax.dse.driver.api.core.data.time.DateRange
DateRangeOnOrAfter
+
Takes an input as a reference point in epoch time, and converts it to a DateRange, with the lower bound set to the lower bound of the precision and millisecond provided, and with no upper bound. You can use any of these precisions to control the bounds around the provided timestamp: millisecond, second, minute, hour, day, month, or year. If the zoneid is not specified, it defaults to "GMT". If the zoneid is set to "default", then the zoneid is set to the default for the JVM. Otherwise, the specified zone is used.
+
+
+
long -> DateRangeOnOrAfter(String: precision, String: zoneid) -> com.datastax.dse.driver.api.core.data.time.DateRange
+
+
+
long -> DateRangeOnOrAfter(String: precision) -> com.datastax.dse.driver.api.core.data.time.DateRange
+
+
example:DateRangeOnOrAfter('millisecond')
+
Convert the incoming millisecond to match any time on or after it
+
example:DateRangeOnOrAfter('minute')
+
Convert the incoming millisecond to match any time on or after the minute in which the millisecond falls
+
+
+
+
DateRangeOnOrBefore
+
Takes an input as a reference point in epoch time, and converts it to a DateRange, with the upper bound set to the upper bound of the precision and millisecond provided, and with no lower bound. You can use any of these precisions to control the bounds around the provided timestamp: millisecond, second, minute, hour, day, month, or year. If the zoneid is not specified, it defaults to "GMT". If the zoneid is set to "default", then the zoneid is set to the default for the JVM. Otherwise, the specified zone is used.
+
+
+
long -> DateRangeOnOrBefore(String: precision) -> com.datastax.dse.driver.api.core.data.time.DateRange
+
+
example:DateRangeOnOrBefore('millisecond')
+
Convert the incoming millisecond to match anything on or before it.
+
example:DateRangeOnOrBefore('minute')
+
Convert the incoming millisecond to match anything on or before the minute in which the millisecond falls
+
+
+
+
long -> DateRangeOnOrBefore(String: precision, String: zoneid) -> com.datastax.dse.driver.api.core.data.time.DateRange
+
+
+
DateRangeParser
+
Parses the DateRange format according to [Date Range Formatting](https://lucene.apache.org/solr/guide/6_6/working-with-dates.html#WorkingwithDates-DateRangeFormatting). When possible it is more efficient to use the other DateRange methods since they do not require parsing.
Convert inputs like '[1970-01-01T00:00:00 TO 1970-01-01T00:00:00]' into DateRanges
+
+
+
+
DateTimeParser
+
This function will parse a String containing a formatted date time, yielding a DateTime object. If no arguments are provided, then the format is set to "yyyy-MM-dd HH:mm:ss.SSSZ". For details on formatting options, see @see DateTimeFormat
notes: Initialize the parser with the given pattern. With this form, if any input fails to parse,
+or is null or empty, then an exception is thrown.
+@param dateTimePattern The pattern which represents the incoming format.
notes: Initialize the parser with the given pattern and default value. In this form, if any
+input fails to parse, then exceptions are suppressed and the default is provided instead.
+At initialization, the default is parsed as a sanity check.
+@param dateTimePattern The pattern which represents the incoming format.
+@param defaultTime An example of a formatted datetime string which is used as a default.
+
+
+
example:DateTimeParser('yyyy-MM-dd','1999-12-31')
+
+
+
parse any date in the yyyy-MM-dd format, or return the DateTime represented by 1999-12-31
+
+
+
+
+
EpochMillisToCqlLocalDate
+
Converts epoch millis to a java.time.LocalDate, which takes the place of the previous CQL-specific LocalDate. If a zoneid of 'default' is provided, then the time zone is set by the default for the JVM. If not, then a valid ZoneId is looked up. The no-args version uses GMT.
+
+
+
long -> EpochMillisToCqlLocalDate(String: zoneid) -> java.time.LocalDate
+
+
+
long -> EpochMillisToCqlLocalDate() -> java.time.LocalDate
+
+
example:EpochMillisToCqlLocalDate()
+
Yields the LocalDate for the millis in GMT
+
+
+
+
EpochMillisToJavaLocalDate
+
Converts epoch millis to a java.time.{@link LocalDate} object, using either the system default timezone or the timezone provided. If the specified ZoneId is not the same as the time base of the epoch millis instant, then conversion errors will occur. Short form ZoneId values like 'CST' can be used, although US Domestic names which specify the daylight savings hours are not supported. The full list of short Ids is at @see JavaSE ZoneId Ids. Any timezone specifier may be used which can be read by {@link ZoneId#of(String)}.
+
+
+
long -> EpochMillisToJavaLocalDate() -> java.time.LocalDate
+
+
example:EpochMillisToJavaLocalDate()
+
Yields the LocalDate for the system default ZoneId
+
+
+
+
long -> EpochMillisToJavaLocalDate(String: zoneid) -> java.time.LocalDate
+
+
example:EpochMillisToJavaLocalDate('ECT')
+
Yields the LocalDate for the ZoneId entry for 'Europe/Paris'
+
+
+
+
EpochMillisToJavaLocalDateTime
+
Converts epoch millis to a java.time.{@link LocalDateTime} object, using either the system default timezone or the timezone provided. If the specified ZoneId is not the same as the time base of the epoch millis instant, then conversion errors will occur. Short form ZoneId values like 'CST' can be used, although US Domestic names which specify the daylight savings hours are not supported. The full list of short Ids is at @see JavaSE ZoneId Ids. Any timezone specifier may be used which can be read by {@link ZoneId#of(String)}.
+
+
+
long -> EpochMillisToJavaLocalDateTime() -> java.time.LocalDateTime
+
+
example:EpochMillisToJavaLocalDateTime()
+
Yields the LocalDateTime for the system default ZoneId
+
+
+
+
long -> EpochMillisToJavaLocalDateTime(String: zoneid) -> java.time.LocalDateTime
+
+
example:EpochMillisToJavaLocalDateTime('ECT')
+
Yields the LocalDateTime for the ZoneId entry for 'Europe/Paris'
+
+
+
+
LongToLocalDateDays
+
Days since Jan 1st 1970
+
+
long -> LongToLocalDateDays() -> java.time.LocalDate
+
+
example:LongToLocalDateDays()
+
take the cycle number and turn it into a LocalDate based on days since 1970
+
+
+
+
StartingEpochMillis
+
This function sets the minimum long value to the equivalent unix epoch time in milliseconds. It simply adds the input value to this base value as determined by the provided time specifier. It wraps any overflow within this range as well.
+
+
long -> StartingEpochMillis(String: baseTimeSpec) -> long
+
+
example:StartingEpochMillis('2017-01-01 23:59:59')
+
add the millisecond epoch time of 2017-01-01 23:59:59 to all input values
+
+
+
+
StringDateWrapper
+
This function wraps an epoch time in milliseconds into a String as specified in the format. The valid formatters are documented at @see DateTimeFormat API Docs
+
+
long -> StringDateWrapper(String: format) -> String
+
+
ToCqlDuration
+
Convert the input double value into a CQL {@link CqlDuration} object, by setting months to zero, and using the fractional part as part of a day, assuming 24-hour days.
ToCqlDurationNanos
+
Convert the input value into a {@link CqlDuration} by reading the input as total nanoseconds, assuming 30-day months.
+
+
long -> ToCqlDurationNanos() -> com.datastax.oss.driver.api.core.data.CqlDuration
+
+
ToDate
+
Convert the input value to a {@code Date}, by multiplying and then dividing the input by the provided parameters.
+
+
+
long -> ToDate(int: millis_multiplier, int: millis_divisor) -> Date
+
+
example:ToDate(86400000,2)
+
produce two Date values per day
+
+
+
+
long -> ToDate(int: spacing) -> Date
+
+
example:ToDate(86400000)
+
produce a single Date() per day
+
+
+
+
long -> ToDate() -> Date
+
+
+
ToDateTime
+
Convert the input value to a {@code org.joda.time.DateTime}
+
+
+
long -> ToDateTime(int: spacing, int: repeat_count) -> org.joda.time.DateTime
+
+
+
long -> ToDateTime(String: spacing) -> org.joda.time.DateTime
+
+
+
long -> ToDateTime() -> org.joda.time.DateTime
+
+
+
ToEpochTimeUUID
+
Converts a long UTC timestamp in epoch millis form into a Version 1 TimeUUID according to RFC 4122. This means that only one unique value for a timeuuid can be generated for each epoch milli value, even though version 1 TimeUUIDs can normally represent up to 10000 distinct values per millisecond. If you need to access this level of resolution for testing purposes, use {@link ToFinestTimeUUID} instead. This method is to support simple mapping to natural timestamps as we often find in the real world.
+
For the variants that have a String argument in the constructor, this is
+a parsable datetime that is used as the base time for all produced values.
+Setting this allows you to set the start of the time range for all timeuuid
+values produced. All times are parsed for UTC. All times use ISO date ordering,
+meaning that the most significant fields always go before the others.
+
The valid formats, in joda specifier form are:
+
+
yyyy-MM-dd HH:mm:ss.SSSZ, for example: 2015-02-28 23:30:15.223
+
yyyy-MM-dd HH:mm:ss, for example 2015-02-28 23:30:15
+
yyyyMMdd'T'HHmmss.SSSZ, for example: 20150228T233015.223
+
yyyyMMdd'T'HHmmssZ, for example: 20150228T233015
+
yyyy-MM-dd, for example: 2015-02-28
+
yyyyMMdd, for example: 20150228
+
yyyyMM, for example: 201502
+
yyyy, for example: 2015
+
+
+
+
long -> ToEpochTimeUUID() -> UUID
+
+
+
notes: Create version 1 timeuuids with a per-host node and empty clock data.
+The node and clock components are seeded from network interface data. In this case,
+the clock data is not seeded uniquely.
+
+
+
example:ToEpochTimeUUID()
+
+
+
basetime 0, computed node data, empty clock data
+
+
+
+
+
long -> ToEpochTimeUUID(long: node) -> UUID
+
+
notes: Create version 1 timeuuids with a specific static node and empty clock data.
+This is useful for testing so that you can know that values are verifiable, even though
+in non-testing practice, you would rely on some form of entropy per-system to provide
+more practical dispersion of values over reboots, etc.
+
+
+
+
@param node a fixture value for testing that replaces node and clock bits
+
+
+
example:ToEpochTimeUUID(5234)
+
+
+
basetime 0, specified node data (5234), empty clock data
+
+
+
long -> ToEpochTimeUUID(long: node, long: clock) -> UUID
+
+
notes: Create version 1 timeuuids with a specific static node and specific clock data.
+This is useful for testing so that you can know that values are verifiable, even though
+in non-testing practice, you would rely on some form of entropy per-system to provide
+more practical dispersion of values over reboots, etc.
+
+
+
+
@param node a fixture value for testing that replaces node bits
+@param clock a fixture value for testing that replaces clock bits
+
+
+
example:ToEpochTimeUUID(31,337)
+
+
+
basetime 0, specified node data (31) and clock data (337)
+
+
+
long -> ToEpochTimeUUID(String: baseSpec) -> UUID
+
+
notes: Create version 1 timeuuids with a per-host node and empty clock data.
+The node and clock components are seeded from network interface data. In this case,
+the clock data is not seeded uniquely.
+
+
+
+
@param baseSpec - a string specification for the base time value
+
+
+
example:ToEpochTimeUUID('2017-01-01T23:59:59')
+
+
+
specified basetime, computed node data, empty clock data
+
+
+
long -> ToEpochTimeUUID(String: baseSpec, long: node) -> UUID
+
+
notes: Create version 1 timeuuids with a specific static node and empty clock data.
+This is useful for testing so that you can know that values are verifiable, even though
+in non-testing practice, you would rely on some form of entropy per-system to provide
+more practical dispersion of values over reboots, etc.
+
+
+
+
@param baseSpec - a string specification for the base time value
+@param node a fixture value for testing that replaces node and clock bits
+
+
+
example:ToEpochTimeUUID('2012',12345)
+
+
+
basetime at start of 2012, with node data 12345, empty clock data
+
+
+
long -> ToEpochTimeUUID(String: baseSpec, long: node, long: clock) -> UUID
+
+
notes: Create version 1 timeuuids with a specific static node and specific clock data.
+This is useful for testing so that you can know that values are verifiable, even though
+in non-testing practice, you would rely on some form of entropy per-system to provide
+more practical dispersion of values over reboots, etc.
+
+
+
+
@param baseSpec - a string specification for the base time value
+@param node a fixture value for testing that replaces node bits
+@param clock a fixture value for testing that replaces clock bits
ToFinestTimeUUID
+
Converts a count of 100ns intervals from 1582 Julian to a Type1 TimeUUID according to RFC 4122. This allows you to access the finest unit of resolution for the purposes of simulating a large set of unique timeuuid values. This offers 10000 times more unique values per ms than {@link ToEpochTimeUUID}. For the variants that have a String argument in the constructor, this is a parsable datetime that is used as the base time for all produced values. Setting this allows you to set the start of the time range for all timeuuid values produced. All times are parsed for UTC. All times use ISO date ordering, meaning that the most significant fields always go before the others. The valid formats, in joda specifier form are:
+
+
yyyy-MM-dd HH:mm:ss.SSSZ, for example: 2015-02-28 23:30:15.223
+
yyyy-MM-dd HH:mm:ss, for example 2015-02-28 23:30:15
+
yyyyMMdd'T'HHmmss.SSSZ, for example: 20150228T233015.223
+
yyyyMMdd'T'HHmmssZ, for example: 20150228T233015
+
yyyy-MM-dd, for example: 2015-02-28
+
yyyyMMdd, for example: 20150228
+
yyyyMM, for example: 201502
+
yyyy, for example: 2015
+
+
+
+
long -> ToFinestTimeUUID() -> UUID
+
+
+
notes: Create version 1 timeuuids with a per-host node and empty clock data.
+The node and clock components are seeded from network interface data. In this case,
+the clock data is not seeded uniquely.
+
+
+
example:ToFinestTimeUUID()
+
+
+
basetime 0, computed node data, empty clock data
+
+
+
+
+
long -> ToFinestTimeUUID(long: node) -> UUID
+
+
notes: Create version 1 timeuuids with a specific static node and empty clock data.
+This is useful for testing so that you can know that values are verifiable, even though
+in non-testing practice, you would rely on some form of entropy per-system to provide
+more practical dispersion of values over reboots, etc.
+
+
+
+
@param node a fixture value for testing that replaces node and clock bits
+
+
+
example:ToFinestTimeUUID(5234)
+
+
+
basetime 0, specified node data (5234), empty clock data
+
+
+
long -> ToFinestTimeUUID(long: node, long: clock) -> UUID
+
+
notes: Create version 1 timeuuids with a specific static node and specific clock data.
+This is useful for testing so that you can know that values are verifiable, even though
+in non-testing practice, you would rely on some form of entropy per-system to provide
+more practical dispersion of values over reboots, etc.
+
+
+
+
@param node a fixture value for testing that replaces node bits
+@param clock a fixture value for testing that replaces clock bits
+
+
+
example:ToFinestTimeUUID(31,337)
+
+
+
basetime 0, specified node data (31) and clock data (337)
+
+
+
long -> ToFinestTimeUUID(String: baseTimeSpec) -> UUID
+
+
notes: Create version 1 timeuuids with a per-host node and empty clock data.
+The node and clock components are seeded from network interface data. In this case,
+the clock data is not seeded uniquely.
+
+
+
+
@param baseTimeSpec - a string specification for the base time value
+
+
+
example:ToFinestTimeUUID('2017-01-01T23:59:59')
+
+
+
specified basetime, computed node data, empty clock data
+
+
+
long -> ToFinestTimeUUID(String: baseTimeSpec, long: node) -> UUID
+
+
notes: Create version 1 timeuuids with a specific static node and empty clock data.
+This is useful for testing so that you can know that values are verifiable, even though
+in non-testing practice, you would rely on some form of entropy per-system to provide
+more practical dispersion of values over reboots, etc.
+
+
+
+
@param baseTimeSpec - a string specification for the base time value
+@param node a fixture value for testing that replaces node and clock bits
+
+
+
example:ToFinestTimeUUID('2012',12345)
+
+
+
basetime at start of 2012, with node data 12345, empty clock data
+
+
+
long -> ToFinestTimeUUID(String: baseTimeSpec, long: node, long: clock) -> UUID
+
+
notes: Create version 1 timeuuids with a specific static node and specific clock data.
+This is useful for testing so that you can know that values are verifiable, even though
+in non-testing practice, you would rely on some form of entropy per-system to provide
+more practical dispersion of values over reboots, etc.
+
+
+
+
@param node a fixture value for testing that replaces node bits
+@param clock a fixture value for testing that replaces clock bits
+@param baseTimeSpec - a string specification for the base time value
ToJavaInstant
+
Convert the input epoch millisecond to a {@code Java Instant}, by multiplying and then dividing by the provided parameters. This is in contrast to the ToJodaInstant function which does the same thing using the Joda API type.
+
+
+
long -> ToJavaInstant(int: millis_multiplier, int: millis_divisor) -> java.time.Instant
+
+
example:ToJavaInstant(86400000,2)
+
produce two Instant values per day
+
+
+
+
long -> ToJavaInstant(int: spacing) -> java.time.Instant
+
+
example:ToJavaInstant(86400000)
+
produce a single Instant per day
+
+
+
+
long -> ToJavaInstant() -> java.time.Instant
+
+
+
ToJodaDateTime
+
Convert the input value to a {@code org.joda.time.DateTime}
+
+
+
long -> ToJodaDateTime(int: spacing, int: repeat_count) -> org.joda.time.DateTime
+
+
+
long -> ToJodaDateTime(String: spacing) -> org.joda.time.DateTime
+
+
+
long -> ToJodaDateTime() -> org.joda.time.DateTime
+
+
+
ToJodaInstant
+
Convert the input epoch millisecond to a {@code JodaTime Instant}, by multiplying and then dividing by the provided parameters. This is in contrast to the ToJavaInstant function which does the same thing, only using the Java API type.
+
+
+
long -> ToJodaInstant(int: millis_multiplier, int: millis_divisor) -> org.joda.time.Instant
+
+
example:ToJodaInstant(86400000,2)
+
produce two Instant values per day
+
+
+
+
long -> ToJodaInstant(int: spacing) -> org.joda.time.Instant
+
+
example:ToJodaInstant(86400000)
+
produce a single Instant per day
+
+
+
+
long -> ToJodaInstant() -> org.joda.time.Instant
+
+
+
ToLocalTime
+
Convert the input epoch millisecond to a {@code Java Instant}, by multiplying and then dividing by the provided parameters, then convert the result to a java {@link LocalTime}.
+
+
+
long -> ToLocalTime(int: millis_multiplier, int: millis_divisor, String: zoneid) -> java.time.LocalTime
+
+
+
long -> ToLocalTime(int: millis_multiplier, int: millis_divisor) -> java.time.LocalTime
+
+
example:ToLocalTime(86400000,2)
+
produce two LocalTime values per day
+
+
+
+
long -> ToLocalTime(int: spacing) -> java.time.LocalTime
+
+
example:ToLocalTime(86400000)
+
produce a single LocalTime per day
+
+
+
+
long -> ToLocalTime() -> java.time.LocalTime
+
+
+
ToMillisAtStartOfDay
+
Return the epoch milliseconds at the start of the day for the given epoch milliseconds.
+
+
+
long -> ToMillisAtStartOfDay() -> long
+
+
example:ToMillisAtStartOfDay()
+
return millisecond epoch time of the start of the day of the provided millisecond epoch time, assuming UTC
+
+
+
+
long -> ToMillisAtStartOfDay(String: timezoneId) -> long
+
+
example:ToMillisAtStartOfDay('America/Chicago')
+
return millisecond epoch time of the start of the day of the provided millisecond epoch time, using timezone America/Chicago
+
+
+
+
ToMillisAtStartOfHour
+
Return the epoch milliseconds at the start of the hour for the given epoch milliseconds.
+
+
+
long -> ToMillisAtStartOfHour() -> long
+
+
example:ToMillisAtStartOfHour()
+
return millisecond epoch time of the start of the hour of the provided millisecond epoch time, assuming UTC
+
+
+
+
long -> ToMillisAtStartOfHour(String: timezoneId) -> long
+
+
example:ToMillisAtStartOfHour('America/Chicago')
+
return millisecond epoch time of the start of the hour of the provided millisecond epoch time, using timezone America/Chicago
+
+
+
+
ToMillisAtStartOfMinute
+
Return the epoch milliseconds at the start of the minute for the given epoch milliseconds.
+
+
+
long -> ToMillisAtStartOfMinute() -> long
+
+
example:ToMillisAtStartOfMinute()
+
return millisecond epoch time of the start of the minute of the provided millisecond epoch time, assuming UTC
+
+
+
+
long -> ToMillisAtStartOfMinute(String: timezoneId) -> long
+
+
example:ToMillisAtStartOfMinute('America/Chicago')
+
return millisecond epoch time of the start of the minute of the provided millisecond epoch time, using timezone America/Chicago
+
+
+
+
ToMillisAtStartOfMonth
+
Return the epoch milliseconds at the start of the month for the given epoch milliseconds.
+
+
+
long -> ToMillisAtStartOfMonth() -> long
+
+
example:ToMillisAtStartOfMonth()
+
return millisecond epoch time of the start of the month of the provided millisecond epoch time, assuming UTC
+
+
+
+
long -> ToMillisAtStartOfMonth(String: timezoneId) -> long
+
+
example:ToMillisAtStartOfMonth('America/Chicago')
+
return millisecond epoch time of the start of the month of the provided millisecond epoch time, using timezone America/Chicago
+
+
+
+
ToMillisAtStartOfNamedWeekDay
+
Return the epoch milliseconds at the start of the most recent day that falls on the given weekday for the given epoch milliseconds, including the current day if valid.
+
+
+
long -> ToMillisAtStartOfNamedWeekDay() -> long
+
+
example:ToMillisAtStartOfNamedWeekDay()
+
return millisecond epoch time of the start of the most recent Monday (possibly the day-of) of the provided millisecond epoch time, assuming UTC
+
+
+
+
long -> ToMillisAtStartOfNamedWeekDay(String: weekday) -> long
return millisecond epoch time of the start of the most recent Saturday (possibly the day-of) of the provided millisecond epoch time, using timezone America/Chicago
+
+
+
+
ToMillisAtStartOfNextDay
+
Return the epoch milliseconds at the start of the day after the day for the given epoch milliseconds.
+
+
+
long -> ToMillisAtStartOfNextDay() -> long
+
+
example:ToMillisAtStartOfNextDay()
+
return millisecond epoch time of the start of next day (not including day-of) of the provided millisecond epoch time, assuming UTC
+
+
+
+
long -> ToMillisAtStartOfNextDay(String: timezoneId) -> long
+
+
example:ToMillisAtStartOfNextDay('America/Chicago')
+
return millisecond epoch time of the start of the next day (not including day-of) of the provided millisecond epoch time, using timezone America/Chicago
+
+
+
+
ToMillisAtStartOfNextNamedWeekDay
+
Return the epoch milliseconds at the start of the next day that falls on the given weekday for the given epoch milliseconds, not including the current day.
+
+
+
long -> ToMillisAtStartOfNextNamedWeekDay() -> long
+
+
example:ToMillisAtStartOfNextNamedWeekDay()
+
return millisecond epoch time of the start of the next Monday (not the day-of) of the provided millisecond epoch time, assuming UTC
+
+
+
+
long -> ToMillisAtStartOfNextNamedWeekDay(String: weekday) -> long
return millisecond epoch time of the start of the next Saturday (not the day-of) of the provided millisecond epoch time, using timezone America/Chicago
+
+
+
+
ToMillisAtStartOfSecond
+
Return the epoch milliseconds at the start of the second for the given epoch milliseconds.
+
+
+
long -> ToMillisAtStartOfSecond() -> long
+
+
example:ToMillisAtStartOfSecond()
+
return millisecond epoch time of the start of the second of the provided millisecond epoch time, assuming UTC
+
+
+
+
long -> ToMillisAtStartOfSecond(String: timezoneId) -> long
Diagnostic functions can be used to help you construct the right VirtData recipe.
+
Show
+
Show diagnostic values for the thread-local variable map.
+
+
+
long -> Show() -> String
+
+
example:Show()
+
Show all values in a json-like format
+
+
+
+
long -> Show(String[]...: names) -> String
+
+
example:Show('foo')
+
Show only the 'foo' value in a json-like format
+
example:Show('foo','bar')
+
Show the 'foo' and 'bar' values in a json-like format
+
+
+
+
Object -> Show() -> String
+
+
example:Show()
+
Show all values in a json-like format
+
+
+
+
Object -> Show(String[]...: names) -> String
+
+
example:Show('foo')
+
Show only the 'foo' value in a json-like format
+
example:Show('foo','bar')
+
Show the 'foo' and 'bar' values in a json-like format
+
+
+
+
ToLongFunction
+
Adapts any compatible {@link FunctionalInterface} type to a LongFunction, for use with higher-order functions, when they require a LongFunction as an argument. Some of the higher-order functions within this library specifically require a LongFunction as an argument, while some of the other functions are provided in semantically equivalent forms with compatible types which can't be converted directly or automatically by Java. In such cases, those types of functions can be wrapped with the forms described here in order to allow the inner and outer functions to work together.
+
+
+
long -> ToLongFunction(function.LongUnaryOperator: op) -> Object
+
+
+
long -> ToLongFunction(function.Function<Long,Long>: op) -> Object
+
+
+
long -> ToLongFunction(function.LongToIntFunction: op) -> Object
+
+
+
long -> ToLongFunction(function.LongToDoubleFunction: op) -> Object
+
+
+
long -> ToLongFunction(function.LongFunction<?>: func) -> Object
+
+
+
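As an illustrative sketch only, the adapter can wrap a sizing function for use by a higher-order function such as ByteBufferSizedHashed:
+
bindings:
  payload: ByteBufferSizedHashed(ToLongFunction(HashRange(128L,4096L)))  # buffers sized between 128 and 4096 bytes
+
+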
ToLongUnaryOperator
+
Adapts any compatible {@link FunctionalInterface} type to a LongUnaryOperator, for use with higher-order functions, when they require a LongUnaryOperator as an argument. Some of the higher-order functions within this library specifically require a LongUnaryOperator as an argument, while some of the other functions are provided in semantically equivalent forms with compatible types which can't be converted directly or automatically by Java. In such cases, those types of functions can be wrapped with the forms described here in order to allow the inner and outer functions to work together.
+
+
+
long -> ToLongUnaryOperator(function.LongFunction: f) -> long
+
+
+
long -> ToLongUnaryOperator(function.Function<Long,Long>: f) -> long
+
+
+
long -> ToLongUnaryOperator(function.LongUnaryOperator: f) -> long
+
+
+
TypeOf
+
Yields the class of the resulting type in String form.
All of the distributions that are provided in the Apache Commons Math
+project are supported here, in multiple forms.
+
Continuous or Discrete
+
These distributions break down into two main categories:
+
Continuous Distributions
+
These are distributions over real numbers like 23.4323, with continuity across the values. Each of the continuous
+distributions can provide samples that fall on an interval of the real number line. Continuous probability distributions
+include the Normal distribution, and the Exponential distribution, among many others.
+
Discrete Distributions
+
Discrete distributions, also known as integer distributions have only whole-number valued samples. These distributions
+include the Binomial distribution, the Zipf distribution, and the Poisson distribution, among others.
+
Hashed or Mapped
+
hashed samples
+
Generally, you will want to "randomly sample" from a probability distribution. This is handled automatically by the
+functions below if you do not override the defaults. The hash mode is the default sampling mode for probability
+distributions. This is accomplished by hashing the input before using the resulting value with the sampling curve.
+This is called the hash sampling mode by VirtData. You can put hash into the modifiers as explained below if you
+want to document it explicitly.
+
mapped samples
+
The method used to sample from these distributions depends on a mathematical function called the cumulative probability
+density function, or more specifically the inverse of it. Having this function computed over some interval allows one to
+sample the shape of a distribution progressively if desired. In other words, it allows for some percentile-like view
+of values within a given probability distribution. This mode of using the inverse cumulative density function is known
+as the map mode in VirtData, as it allows one to map a unit interval variate in a deterministic way to a density
+sampling curve. To enable this mode, simply pass map as one of the function modifiers for any function in this
+category.
+
Interpolated or Computed Samples
+
When sampling from mathematical models of probability densities,
+performance between different densities can vary drastically. This means
+that you may end up perturbing the results of your test in an unexpected
+way simply by changing parameters of your testing distributions. Even
+worse, some densities have painful corner cases in performance, like
+'Zipf', which can make tests unbearably slow and flawed as they chew up
+CPU resources.
+
NOTE:
+Functions like 'Zipf' can still take a long time to initialize for certain
+parameters. If you are seeing a workload that seems to hang while
+initializing, it might be computing complex integrals for large parameters
+of Zipf. We hope to pre-compute and cache these at a future time to avoid
+this type of impact. For now, just be aware that some parameters on some
+density curves can be expensive to compute even during initialization.
+
Interpolated Samples
+
For this reason, interpolation is built-in to these sampling functions. The default mode is interpolate. This
+means that the sampling function is pre-computed over 1000 equidistant points in the unit interval (0.0,1.0), and the
+result is shared among all threads as a look-up-table for interpolation. This makes all statistical sampling functions
+perform nearly identically at runtime (after initialization, a one time cost). This does have the minor side effect of a
+little loss in accuracy, but the difference is generally negligible for nearly all performance testing cases.
+
Infinite or Finite
+
For interpolated samples from continuous distributions, you also have the option of including or
+excluding infinite values which may occur in some distributions. If you want to include them,
+use infinite, or finite to explicitly avoid them (the default). Specifying 'infinite'
+doesn't guarantee that you will see +Infinity or -Infinity, only that they are allowed. The
+Normal distribution often contains -Infinity and +Infinity, for example, due to the function
+used to estimate its cumulative distribution. These values can often be valuable in finding
+corner cases which should be treated uniformly according to
+IEEE 754.
+
Clamp or Noclamp
+
For interpolated samples from continuous distributions, you also have the option of clamping the
+allowed values to the valid range for the integral data type used as input. To clamp the output
+values to the range (Long.MIN_VALUE,Long.MAX_VALUE) for long->double functions, or to (Integer.
+MIN_VALUE,Integer.MAX_VALUE) for int->double functions, specify clamp, which is also the default.
+To explicitly disable this, use noclamp. This is useful when you know the downstream functions
+will only work with a certain range of values without truncating conversions. When you are using
+double values natively on the downstream functions, use noclamp to avoid limiting the domain of
+values in your test data. (In this case, you might also consider infinite).
+
Computed Samples
+
Conversely, compute mode sampling calls the sampling function every time a sample is needed. This affords a little
+more accuracy, but is generally not preferable to the default interpolated mode. You'll know if you need computed
+samples. Otherwise, it's best to stick with interpolation so that you spend more time testing your target system and
+less time testing your data generation functions.
+
Input Range
+
All of these functions take a long as the input value for sampling. This is similar to how the unit interval (0.0,1.0)
+is used in mathematics and statistics, but more tailored to modern system capabilities. Instead of using the unit
+interval, we simply use the interval of all positive longs. This provides more compatibility with other functions in
+VirtData, including hashing functions. Internally, this value is automatically converted to a unit interval variate as
+needed to work well with the distributions from Apache Math.
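+
As a concrete, hedged illustration of the modifiers described above (the parameters and binding names are only examples), modifiers are passed as trailing string arguments:
+
bindings:
  response_time: Normal(100.0,15.0,'hash','interpolate','clamp')  # hashed, interpolated, clamped samples around 100
  rank: Zipf(10000,1.2,'map')  # mapped (percentile-style) sampling over 10000 elements
+
+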
ChiSquared
+
long -> ChiSquared(double: degreesOfFreedom, String[]...: mods) -> double
+
+
+
int -> ChiSquared(double: degreesOfFreedom, String[]...: mods) -> double
+
+
+
CoinFunc
+
This is a higher-order function which takes an input value, and flips a coin. The first parameter is used as the threshold for choosing a function. If the sample values derived from the input is lower than the threshold value, then the first following function is used, and otherwise the second is used. For example, if the threshold is 0.23, and the input value is hashed and sampled in the unit interval to 0.43, then the second of the two provided functions will be used. The input value does not need to be hashed beforehand, since the user may need to use the full input value before hashing as the input to one or both of the functions. This function will accept either a LongFunction or a {@link Function} or a LongUnaryOperator in either position. If necessary, use {@link function.ToLongFunction} to adapt other function forms to be compatible with these signatures.
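+
+
For instance, a hedged sketch of a coin-flip binding (the field name and the 25% threshold are illustrative); roughly a quarter of inputs route through the first function and the rest through the second.
+
bindings:
  label: CoinFunc(0.25, NumberNameToString(), ToString())
+
+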
ConstantContinuous
+
long -> ConstantContinuous(double: value, String[]...: mods) -> double
+
+
+
int -> ConstantContinuous(double: value, String[]...: mods) -> double
+
+
+
Enumerated
+
Creates a probability density given the values and optional weights provided, in "value:weight value:weight ..." form. The weight can be elided for any value to use the default weight of 1.0d. @see Commons JavaDoc: EnumeratedRealDistribution
+
+
+
int -> Enumerated(String: data, String[]...: mods) -> double
+
+
example:Enumerated('1 2 3 4 5 6')
+
a fair six-sided die roll
+
example:Enumerated('1:2.0 2 3 4 5 6')
+
an unfair six-sided die roll, where 1 has probability mass 2.0, and everything else has only 1.0
+
+
+
+
long -> Enumerated(String: data, String[]...: mods) -> double
+
+
example:Enumerated('1 2 3 4 5 6')
+
a fair 6-sided die
+
example:Enumerated('1:2.0 2 3 4 5:0.5 6:0.5')
+
an unfair 6-sided die, where ones are twice as likely, and fives and sixes are half as likely
TokenMapFileCycle
+
Utility function used for advanced data generation experiments.
+
+
int -> TokenMapFileCycle(String: filename, boolean: loopdata, boolean: ascending) -> long
+
+
TokenMapFileNextCycle
+
Utility function used for advanced data generation experiments.
+
+
int -> TokenMapFileNextCycle(String: filename, boolean: loopdata, boolean: ascending) -> long
+
+
TokenMapFileNextToken
+
Utility function used for advanced data generation experiments.
+
+
int -> TokenMapFileNextToken(String: filename, boolean: loopdata, boolean: ascending) -> long
+
+
TokenMapFileToken
+
Utility function used for advanced data generation experiments.
+
+
int -> TokenMapFileToken(String: filename, boolean: loopdata, boolean: ascending) -> long
+
+
TriangularStep
+
Compute a value which increases monotonically with respect to the cycle value.
+All values for f(X+(m>=0)) will be equal or greater than f(X). In effect, this
+means that with a sequence of monotonic inputs, the results will be monotonic as
+well as clustered. The values will approximate input/average, but will vary in frequency
+around a simple binomial distribution.
+
The practical effect of this is to be able to compute a sequence of values
+over inputs which can act as foreign keys, but which are effectively ordered.
+
Call for Ideas
+
Due to the complexity of generalizing this as a pure function over other distributions,
+this is the only function of this type for now. If you are interested in this problem
+domain and have some suggestions for how to extend it to other distributions, please
+join the project or let us know.
+
+
+
long -> TriangularStep(long: average, long: variance) -> long
+
+
example:TriangularStep(100,20)
+
Create a sequence of values where the average is 100, but the range of values is between 80 and 120.
+
example:TriangularStep(80,10)
+
Create a sequence of values where the average is 80, but the range of values is between 70 and 90.
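+
+
A minimal sketch of the foreign-key style usage described above; the binding names are illustrative.
+
bindings:
  order_id: Identity()
  customer_id: TriangularStep(100,20)  # monotonic ids that repeat for roughly 80-120 consecutive cycles
+
+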
These functions help combine other functions into higher-order functions when needed.
+
Expr
+
Allow for the use of arbitrary expressions according to the MVEL expression language. Variables that have been set by a Save function are available to be used in this function. The variable name cycle is reserved, and is always equal to the current input value. This is not the same in every case as the current cycle of an operation. It could be different if there are preceding functions which modify the input value.
+
+
+
long -> Expr(String: expr) -> int
+
+
+
long -> Expr(String: expr) -> long
+
+
+
long -> Expr(String: expr) -> String
+
+
+
long -> Expr(String: expr) -> UUID
+
+
+
int -> Expr(String: expr) -> int
+
+
+
double -> Expr(String: expr) -> double
+
+
+
long -> Expr(String: expr) -> Object
+
+
+
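As a hedged sketch (the expressions and binding names are illustrative), Expr can compute simple MVEL arithmetic over the reserved cycle variable:
+
bindings:
  shard: Expr('cycle % 16')  # cycle number modulo 16
  offset_ms: Expr('cycle * 1000')  # one thousand milliseconds per cycle
+
+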
IntFlow
+
Combine multiple IntUnaryOperators into a single function.
+
+
int -> IntFlow(function.IntUnaryOperator[]...: ops) -> int
+
+
LongFlow
+
Combine multiple LongUnaryOperators into a single function.
+
+
long -> LongFlow(function.LongUnaryOperator[]...: ops) -> long
+
+
example:LongFlow(Add(3),Mul(6))
+
Create an integer operator which adds 3 and multiplies the result by 6
+
+
+
+
StringFlow
+
Combine multiple String functions together into one function.
These functions have no particular category, so they ended up here by default.
+
Add
+
Adds a value to the input.
+
+
+
double -> Add(double: addend) -> double
+
+
+
long -> Add(int: addend) -> int
+
+
+
long -> Add(long: addend) -> long
+
+
+
int -> Add(int: addend) -> int
+
+
example:Add(23)
+
adds integer 23 to the input integer value
+
+
+
+
AddCycleRange
+
Adds a cycle range to the input, producing an increasing sawtooth-like output.
+
+
+
long -> AddCycleRange(int: maxValue) -> int
+
+
+
long -> AddCycleRange(int: minValue, int: maxValue) -> int
+
+
+
long -> AddCycleRange(long: maxValue) -> long
+
+
+
long -> AddCycleRange(long: minValue, long: maxValue) -> long
+
+
+
int -> AddCycleRange(int: maxValue) -> int
+
+
+
int -> AddCycleRange(int: minValue, int: maxValue) -> int
+
+
+
AddHashRange
+
Adds a pseudo-random value within the specified range to the input.
+
+
+
long -> AddHashRange(long: maxValue) -> long
+
+
+
long -> AddHashRange(long: minValue, long: maxValue) -> long
+
+
+
long -> AddHashRange(int: maxValue) -> int
+
+
+
long -> AddHashRange(int: minValue, int: maxValue) -> int
+
+
+
int -> AddHashRange(int: maxValue) -> int
+
+
+
int -> AddHashRange(int: minValue, int: maxValue) -> int
+
+
+
AlphaNumericString
+
Create an alpha-numeric string of the specified length, character-by-character.
+
+
long -> AlphaNumericString(int: length) -> String
+
+
ByteBufferSizedHashed
+
Create a ByteBuffer from a long input based on a provided size function. As a 'Sized' function, the first argument is a function which determines the size of the resulting ByteBuffer. As a 'Hashed' function, the input value is hashed again before being used as value.
+
+
+
long -> ByteBufferSizedHashed(int: size) -> java.nio.ByteBuffer
+
+
example:ByteBufferSizedHashed(16)
+
Functionally identical to HashedToByteBuffer(16) but using dynamic sizing implementation
+
example:ByteBufferSizedHashed(HashRange(10, 14))
+
Create a ByteBuffer with variable limit (10 to 14)
+
+
+
+
long -> ByteBufferSizedHashed(Object: sizeFunc) -> java.nio.ByteBuffer
+
+
+
CSVFrequencySampler
+
Takes a CSV with sample data and generates random values based on the relative frequencies of the values in the file. The CSV file must have headers which can be used to find the named columns. For example, take the following imaginary `animals.csv` file:
+
animal,count,country
puppy,1,usa
puppy,2,colombia
puppy,3,senegal
kitten,2,colombia
+
`CSVFrequencySampler('animals.csv', animal)` will return `puppy` or `kitten` randomly. `puppy` will be 3x more frequent than `kitten`. `CSVFrequencySampler('animals.csv', country)` will return `usa`, `colombia`, or `senegal` randomly. `colombia` will be 2x more frequent than `usa` or `senegal`. Use this function to infer frequencies of categorical values from CSVs.
+
+
long -> CSVFrequencySampler(String: filename, String: columnName) -> String
+
+
+
notes: Create a sampler of strings from the given CSV file. The CSV file must have plain CSV headers
+as its first line.
+@param filename The name of the file to be read into the sampler buffer
+@param columnName The name of the column to be sampled
+
+
example:CSVFrequencySampler('values.csv','modelno')
+
Read values.csv, count the frequency of values in 'modelno' column, and sample from this column proportionally
+
+
+
+
+
CSVSampler
+
This function is a toolkit version of the {@link WeightedStringsFromCSV} function. It is more capable and should be the preferred function for alias sampling over any CSV data. This sampler uses a named column in the CSV data as the value. This is also referred to as the labelColumn . The frequency of this label depends on the weight assigned to it in another named CSV column, known as the weightColumn .
+
Combining duplicate labels
+
When you have CSV data which is not organized around the specific identifier that you want to sample by, you can use some combining functions to tabulate these prior to sampling. In that case, you can use any of "sum", "avg", "count", "min", or "max" as the reducing function on the value in the weight column. If none are specified, then "sum" is used by default. All modes except "count" and "name" require a valid weight column to be specified.
+
+
sum, avg, min, max - takes the given stat for the weight of each distinct label
+
count - takes the number of occurrences of a given label as the weight
+
name - sets the weight of all distinct labels to 1.0d
+
+
Map vs Hash mode
+
As with some of the other statistical functions, you can use this one to pick through the sample values by using the map mode. This is distinct from the default hash mode. When map mode is used, the values will appear monotonically as you scan through the unit interval of all long values. (Specifically, 0L represents 0.0d in the unit interval on input, and Long.MAX_VALUE represents 1.0 on the unit interval.) This mode is only recommended for advanced scenarios and should otherwise be avoided. You will know if you need this mode.
notes: Build an efficient O(1) sampler for the given column values with respect to the weights,
+combining equal values by summing the weights.
+
+
+
+
@param labelColumn The CSV column name containing the value
+@param weightColumn The CSV column name containing a double weight
+@param data Sampling modes or file names. Any of map, hash, sum, avg, count are taken
+as configuration modes, and all others are taken as CSV filenames.
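+
+
Reusing the imaginary animals.csv file from the CSVFrequencySampler entry above, a hedged sketch of an alias-sampling binding could look like the following, with the parameter order taken from the notes above:
+
bindings:
  animal: CSVSampler('animal','count','animals.csv')  # sample 'animal' labels weighted by the 'count' column
+
+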
CharBufImage
+
Builds a shared text image in memory and samples from it pseudo-randomly with hashing. The characters provided can be listed like a string (abc123), or can include range specifiers like a hyphen (a-zA-Z0-9). These characters are used to build an image of the specified size in memory that is sampled from according to the size function. The extracted value is sized according to either a provided function, a size range, or otherwise the whole image. The image can be varied between tests if you want by specifying a seed value. If no seed value is specified, then the image length is used also as a seed.
+
+
+
long -> CharBufImage(int: size) -> java.nio.CharBuffer
+
+
notes: Shortcut constructor for building a simple text image
+from A-Z, a-z, 0-9 and a space, of the specified size.
+When this function is used, it always returns the full image
+if constructed in this way.
+@param size length in characters of the image.
+
+
+
+
long -> CharBufImage(Object: charsFunc, int: imgsize) -> java.nio.CharBuffer
+
+
notes: This is the same as {@link CharBufImage(Object,int,Object)} except that the
+extracted sample length is fixed to the buffer size, thus the function will
+always return the full buffer.
+@param charsFunc The function which produces objects, which toString() is used to collect their input
+@param imgsize The size of the CharBuffer to build at startup
+
+
+
+
long -> CharBufImage(Object: charsFunc, int: imgsize, Object: sizespec) -> java.nio.CharBuffer
+
+
notes: This is the same as {@link CharBufImage(Object, int, Object, long)} except that
+the seed is defaulted to 0L
+@param charsFunc The function which produces objects whose toString() output is used to build the image content
+@param imgsize The size of the CharBuffer to build at startup
+@param sizespec The specifier for how long samples should be. If this is a number, then it is static. If
+it is a function, then the size is determined for each call.
notes: Create a CharBuffer full of the contents of the results of calling a source
+function until it is full. Then allow it to be sampled with random extracts
+as determined by the sizespec.
+@param charsFunc The function which produces objects whose toString() output is used to build the image content
+@param imgsize The size of the CharBuffer to build at startup
+@param sizespec The specifier for how long samples should be. If this is a number, then it is static. If
+it is a function, then the size is determined for each call.
+@param seed A seed that can be used to change up the rendered content.
+
+
+
+
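As a sketch of how this might appear in a workload binding (the binding name and specific arguments are illustrative only, not taken from this reference):
+
bindings:
+  sample_text: CharBufImage('a-zA-Z0-9 ',100000,HashRange(10,20))
+
Assuming the three-argument form above, this builds a 100000-character image from the listed character ranges and yields extracts of 10 to 20 characters per cycle.
+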
CharBufferExtract
+
Create a CharBuffer from the first function, and then sample data from that buffer according to the size function. The initFunction can be given as simply a size, in which case ByteBufferSizedHash is used with Hex String conversion. If the size function yields a size larger than the available buffer size, then it is lowered to that size automatically. If it is lower, then a random offset is used within the buffer image. This function behaves slightly differently than most in that it creates and caches a source buffer during initialization.
+
+
long -> CharBufferExtract(Object: initFunc, Object: sizeFunc) -> java.nio.CharBuffer
+
+
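A minimal binding sketch for this function, assuming the initFunc is given as a simple size and the sizeFunc as a range (the binding name and values are illustrative):
+
bindings:
+  text_chunk: CharBufferExtract(1000000,HashRange(100,200))
+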
Clamp
+
Clamp the output values to be at least the minimum value and at most the maximum value.
clamp the output values to the range [1.0D, 9.0D], inclusive
+
+
+
+
long -> Clamp(long: min, long: max) -> long
+
+
example:Clamp(4L,400L)
+
clamp the output values in the range [4L,400L], inclusive
+
+
+
+
int -> Clamp(int: min, int: max) -> int
+
+
example:Clamp(1,100)
+
clamp the output values in the range [1,100], inclusive
+
+
+
+
Combinations
+
Convert a numeric value into a code according to ASCII printable characters. This is useful for creating various encodings using different character ranges, etc. This mapper can map over the sequences of character ranges providing every unique combination and then wrapping around to the beginning again. It can convert between character bases with independent radix in each position. Each position in the final string takes its values from a position-specific character set, described by the shorthand in the examples below. The constructor will throw an error if the number of combinations exceeds that which can be represented in a long value. (This is a very high number).
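A plausible binding sketch, assuming the single-argument form that takes a semicolon-delimited list of per-position character sets (this spec syntax is an assumption and should be confirmed against the constructor for this function):
+
bindings:
+  serial_code: Combinations('A-Z;A-Z;0-9;0-9;0-9')
+
Each position draws from its own character set, so consecutive inputs step through combinations such as 'AA000', 'AA001', and so on, wrapping around when exhausted.
+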
CycleRange
+
Yields a value within a specified range, which rolls over continuously.
+
+
+
long -> CycleRange(long: maxValue) -> long
+
+
+
long -> CycleRange(long: minValue, long: maxValue) -> long
+
+
+
long -> CycleRange(int: maxValue) -> int
+
+
+
long -> CycleRange(int: minValue, int: maxValue) -> int
+
+
+
int -> CycleRange(int: maxValue) -> int
+
+
+
notes: Sets the maximum value of the cycle range. The minimum defaults to 0.
+@param maxValue The maximum value in the cycle to be added.
+
+
+
example:CycleRange(34)
+
+
+
add a rotating value between 0 and 34 to the input
+
+
+
+
+
int -> CycleRange(int: minValue, int: maxValue) -> int
+
+
notes: Sets the minimum and maximum value of the cycle range.
+@param minValue minimum value of the cycle to be added.
+@param maxValue maximum value of the cycle to be added.
+
+
+
+
DelimFrequencySampler
+
Takes a CSV with sample data and generates random values based on the relative frequencies of the values in the file. The CSV file must have headers which can be used to find the named columns. For example, take the following imaginary `animals.csv` file:
+
animal,count,country
+puppy,1,usa
+puppy,2,colombia
+puppy,3,senegal
+kitten,2,colombia
+
`CSVFrequencySampler('animals.csv', animal)` will return `puppy` or `kitten` randomly. `puppy` will be 3x more frequent than `kitten`. `CSVFrequencySampler('animals.csv', country)` will return `usa`, `colombia`, or `senegal` randomly. `colombia` will be 2x more frequent than `usa` or `senegal`. Use this function to infer frequencies of categorical values from CSVs.
notes: Create a sampler of strings from the given delimited file. The delimited file must have plain headers
+as its first line.
+@param filename The name of the file to be read into the sampler buffer
+@param columnName The name of the column to be sampled
+@param delimiter The delimiter character used to separate columns in the file
Read values.csv, count the frequency of values in 'modelno' column, and sample from this column proportionally
+
+
+
+
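Putting the parameters above together, a binding sketch might look like the following (the file name, column name, and pipe delimiter are hypothetical):
+
bindings:
+  model: DelimFrequencySampler('values.csv','modelno','|')
+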
+
DirectoryLines
+
Read each line in each matching file in a directory structure, providing one line for each time this function is called. The files are sorted at the time the function is initialized, and each line is read in order. This function does not produce the same result per cycle value. It is possible that different cycle inputs will return different values if the cycles are not applied in strict order. Still, this function is useful for consuming input from a set of files as input to a test or simulation.
+
+
long -> DirectoryLines(String: basepath, String: namePattern) -> String
+
+
example:DirectoryLines('/var/tmp/bardata', '.*')
+
load every line from every file in /var/tmp/bardata
+
+
+
+
Discard
+
This function takes a long input and ignores it. It returns a generic object which is meant to be used as input to other functions which don't need a specific input.
+
+
long -> Discard() -> Object
+
+
Div
+
Divide the operand by a fixed value and return the result.
+
+
+
double -> Div(double: divisor) -> double
+
+
+
long -> Div(int: divisor) -> int
+
+
+
long -> Div(long: divisor) -> long
+
+
example:Div(42L)
+
divide all inputs by 42L
+
+
+
+
int -> Div(int: divisor) -> int
+
+
+
DivideToLongToString
+
This is equivalent to Div(...), but returns the result after String.valueOf(...). This function is also deprecated, as it is easily replaced by other functions.
+
+
long -> DivideToLongToString(long: divisor) -> String
+
+
ElapsedNanoTime
+
Provide the elapsed nano time since the process started. CAUTION: This does not produce deterministic test data.
+
+
long -> ElapsedNanoTime() -> long
+
+
EscapeJSON
+
Escape all special characters which are required to be escaped when found within JSON content according to the JSON spec
+
{@code
+\b Backspace (ascii code 08)
+\f Form feed (ascii code 0C)
+\n New line
+\r Carriage return
+\t Tab
+\" Double quote
+\\ Backslash character
+\/ Forward slash
+}
+
+
+
String -> EscapeJSON() -> String
+
+
FieldExtractor
+
Extracts out a set of fields from a delimited string, returning a string with the same delimiter containing only the specified fields.
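A sketch of a possible binding, assuming the constructor takes a single spec string naming the delimiter followed by the field positions to keep (this spec format is an assumption, not confirmed by a signature in this reference):
+
bindings:
+  partial_record: FieldExtractor('|,2,16')
+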
FixedValues
+
Yield one of the specified values, rotating through them as the input value increases.
+
+
+
long -> FixedValues(int[]...: values) -> int
+
+
+
long -> FixedValues(long[]...: values) -> double
+
+
example:FixedValues(3D,53D,73d)
+
Yield 3D, 53D, 73D, 3D, 53D, 73D, 3D, ...
+
+
+
+
long -> FixedValues(long[]...: values) -> long
+
+
example:FixedValues(3L,53L,73L)
+
Yield 3L, 53L, 73L, 3L, 53L, 73L, 3L, ...
+
+
+
+
FullHash
+
This uses the Murmur3F (64-bit optimized) version of Murmur3, not as a checksum, but as a simple hash. It doesn't bother pushing the high-64 bits of input, since it only uses the lower 64 bits of output. This version returns the value regardless of the sign bit. It does not return the absolute value, as {@link Hash} does.
+
+
long -> FullHash() -> long
+
+
Hash
+
This uses the Murmur3F (64-bit optimized) version of Murmur3, not as a checksum, but as a simple hash. It doesn't bother pushing the high-64 bits of input, since it only uses the lower 64 bits of output. It does, however, return the absolute value. This is to make it play nice with users and other libraries.
+
+
+
long -> Hash() -> long
+
+
+
int -> Hash() -> int
+
+
+
long -> Hash() -> int
+
+
+
HashInterval
+
Return a value within a range, pseudo-randomly, using interval semantics, where the range of values returned does not include the last value. This function behaves exactly like HashRange except for the exclusion of the last value. This allows you to stack intervals using known reference points without duplicating or skipping any given value. You can specify hash intervals as small as a single-element range, like (5,6), or as wide as the relevant data type allows.
+
+
+
int -> HashInterval(int: width) -> int
+
+
+
notes: Create a hash interval based on a minimum value of 0 and a specified width.
+@param width The maximum value, which is excluded.
+
+
+
example:HashInterval(4)
+
+
+
return values which could include 0, 1, 2, 3, but not 4
+
+
+
+
+
int -> HashInterval(int: minIncl, int: maxExcl) -> int
+
+
+
notes: Create a hash interval
+@param minIncl The minimum value, which is included
+@param maxExcl The maximum value, which is excluded
+
+
+
example:HashInterval(2,5)
+
+
+
return values which could include 2, 3, 4, but not 5
+
+
+
+
+
long -> HashInterval(int: width) -> int
+
+
+
notes: Create a hash interval based on a minimum value of 0 and a specified width.
+@param width The maximum value, which is excluded.
+
+
+
example:HashInterval(4)
+
+
+
return values which could include 0, 1, 2, 3, but not 4
+
+
+
+
+
long -> HashInterval(int: minIncl, int: maxExcl) -> int
+
+
+
notes: Create a hash interval
+@param minIncl The minimum value, which is included
+@param maxExcl The maximum value, which is excluded
+
+
+
example:HashInterval(2,5)
+
+
+
return values which could include 2, 3, 4, but not 5
+
+
+
+
+
long -> HashInterval(long: width) -> long
+
+
+
notes: Create a hash interval based on a minimum value of 0 and a specified width.
+@param width The maximum value, which is excluded.
+
+
+
example:HashInterval(4L)
+
+
+
return values which could include 0L, 1L, 2L, 3L, but not 4L
+
+
+
+
+
long -> HashInterval(long: minIncl, long: maxExcl) -> long
+
+
+
notes: Create a hash interval
+@param minIncl The minimum value, which is included
+@param maxExcl The maximum value, which is excluded
+
+
+
example:HashInterval(2L,5L)
+
+
+
return values which could include 2L, 3L, 4L, but not 5L
+
+
+
+
+
HashRange
+
Return a value within a range, pseudo-randomly. This is equivalent to returning a value within a range between 0 and some maximum value, but with a minimum value added. You can specify hash ranges as small as a single-element range, like (5,5), or as wide as the relevant data type allows.
+
+
+
int -> HashRange(int: width) -> int
+
+
+
int -> HashRange(int: minValue, int: maxValue) -> int
+
+
+
long -> HashRange(long: width) -> long
+
+
+
long -> HashRange(long: minValue, long: maxValue) -> long
+
+
+
long -> HashRange(int: width) -> int
+
+
example:HashRange(32L)
+
map the input to a number in the range 0-31, inclusive, of type int
+
+
+
+
long -> HashRange(int: minValue, int: maxValue) -> int
+
+
example:HashRange(35L,39L)
+
map the input to a number in the range 35-38, inclusive, of type int
+
+
+
+
HashRangeScaled
+
Return a pseudo-random value which can only be as large as the input times a scale factor, with a default scale factor of 1.0d
+
+
+
int -> HashRangeScaled(double: scalefactor) -> int
+
+
+
int -> HashRangeScaled() -> int
+
+
+
long -> HashRangeScaled(double: scalefactor) -> long
+
+
+
long -> HashRangeScaled() -> long
+
+
+
long -> HashRangeScaled(double: scalefactor) -> int
+
+
+
long -> HashRangeScaled() -> int
+
+
+
HashedByteBufferExtract
+
Create a ByteBuffer from the first function, and then sample data from that bytebuffer according to the size function. The initFunction can be given as simply a size, in which case ByteBufferSizedHash is used. If the size function yields a size larger than the available buffer size, then it is lowered to that size automatically. If it is lower, then a random offset is used within the buffer image. This function behaves slightly differently than most in that it creates and caches a source byte buffer during initialization.
+
+
long -> HashedByteBufferExtract(Object: initFunc, Object: sizeFunc) -> java.nio.ByteBuffer
+
+
HashedDoubleRange
+
Return a double value within the specified range. This function uses an intermediate long to arrive at the sampled value before conversion to double, thus providing a more linear sample at the expense of some precision at extremely large values.
+
+
long -> HashedDoubleRange(double: min, double: max) -> double
+
+
HashedFileExtractToString
+
Pseudo-randomly extract a section of a text file and return it according to some minimum and maximum extract size. The file is loaded into memory as a shared text image. It is then indexed into as a character buffer to find a pseudo-randomly sized fragment.
+
+
+
long -> HashedFileExtractToString(String: filename, int: minsize, int: maxsize) -> String
return a fragment from adventures.txt between 100 and 200 characters long
+
+
+
+
long -> HashedFileExtractToString(String: filename, Object: sizefunc) -> String
+
+
notes: Provide a size function for the fragment to be extracted. In this form, if the size function specifies a string
+size which is larger than the text image, it is truncated via modulo to fall within the text image size.
+
+
+
+
@param filename The file name to be loaded
+@param sizefunc A function which determines the size of the data to be loaded.
return a fragment from adventures.txt from a random offset, based on the size function provided.
+
+
HashedLineToInt
+
Return a pseudo-randomly selected integer value from a file of numeric values. Each line in the file must contain one parsable integer value.
+
+
long -> HashedLineToInt(String: filename) -> int
+
+
HashedLineToString
+
Return a pseudo-randomly selected String value from a single line of the specified file.
+
+
long -> HashedLineToString(String: filename) -> String
+
+
HashedLinesToKeyValueString
+
Generate a string in the format key1:value1;key2:value2;... from the words in the specified file, ranging in size between zero and the specified maximum.
+
+
long -> HashedLinesToKeyValueString(String: paramFile, int: maxsize) -> String
+
+
HashedLoremExtractToString
+
Provide a text extract from the full lorem ipsum text, between the specified minimum and maximum size.
+
+
long -> HashedLoremExtractToString(int: minsize, int: maxsize) -> String
+
+
HashedRangedToNonuniformDouble
+
This provides a random sample of a double in a range, without accounting for the non-uniform distribution of IEEE double representation. This means that values closer to high-precision areas of the IEEE spec will be weighted higher in the output. However, NaN and positive and negative infinity are filtered out via oversampling. Results are still stable for a given input value.
+
+
long -> HashedRangedToNonuniformDouble(long: min, long: max) -> double
+
+
HashedToByteBuffer
+
Hash a long input value into a byte buffer, at least length bytes long, but aligned on an 8-byte boundary.
+
+
long -> HashedToByteBuffer(int: lengthInBytes) -> java.nio.ByteBuffer
+
+
Identity
+
Simply returns the input value. This function intentionally does nothing.
+
+
long -> Identity() -> long
+
+
Interpolate
+
Return a value along an interpolation curve. This allows you to sketch a basic density curve and describe it simply with just a few values. The number of values provided determines the resolution of the internal lookup table that is used for interpolation. The first value is always the 0.0 anchoring point on the unit interval. The last value is always the 1.0 anchoring point on the unit interval. This means that in order to subdivide the density curve in an interesting way, you need to provide a few more values in between them. Providing two values simply provides a uniform sample between a minimum and maximum value. The input range of this function is, as many of the other functions in this library, based on the valid range of positive long values, between 0L and Long.MAX_VALUE inclusive. This means that if you want to combine interpolation on this curve with the effect of pseudo-random sampling, you need to put a hash function ahead of it in the flow. Developer Note: This is the canonical implementation of LERPing in NoSQLBench, so is heavily documented. Any other LERP implementations should borrow directly from this, embedding by default.
+
+
+
long -> Interpolate(double[]...: values) -> double
+
+
example:Interpolate(0.0d,100.0d)
+
return a uniform double value between 0.0d and 100.0d
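As the description notes, combining this with a hash yields pseudo-random sampling along the sketched density curve. A minimal binding sketch (the binding name and anchor values are illustrative):
+
bindings:
+  latency_ms: Hash(); Interpolate(0.0d,10.0d,100.0d)
+
With these anchors, roughly half of the sampled values fall between 0.0d and 10.0d, and the other half between 10.0d and 100.0d.
+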
create values like '<zero;three,six>', '<one;four,seven>', ...
+
+
+
+
ListTemplate
+
Create a {@code List} based on two functions, the first to determine the list size, and the second to populate the list with string values. The input fed to the second function is incremented between elements.
+
+
long -> ListTemplate(function.LongToIntFunction: sizeFunc, function.LongFunction: valueFunc) -> List
+
create a list between 3 and 7 elements, with number names as the values
+
+
+
+
LoadElement
+
Load a value from a map, based on the injected configuration. The map which is used must be named by the mapname. If the injected configuration contains a variable of this name which is also a Map, then this map is referenced and read by the provided variable name.
Load the variable 'varname' from a map named 'vars', or provide 'defaultvalue' if neither is provided
+
+
+
+
LongToString
+
Return the string representation of the provided long. @deprecated use ToString() instead
+
+
long -> LongToString() -> String
+
+
MatchFunc
+
Match any input with a regular expression, and apply the associated function to it, yielding the value. If no matches occur, then the original value is passed through unchanged. Patterns and functions are passed as even,odd pairs indexed from the 0th position. Instead of a function, a String value may be provided as the associated output value.
Append '-is-a-number' to every input which is a sequence of digits
+
+
+
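A sketch of how the example described above might be written, using ToString() to produce a digit string first (the digit pattern and use of Suffix are assumptions based on the description):
+
bindings:
+  tagged: ToString(); MatchFunc('[0-9]+',Suffix('-is-a-number'))
+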
+
MatchRegex
+
Match any input with a regular expression, and apply the associated regex replacement to it, yielding the value. If no matches occur, then the original value is passed through unchanged. Patterns and replacements are passed as even,odd pairs indexed from the 0th position. Back-references to matching groups are supported.
replace dashes with spaces in a 10-digit US phone number.
+
+
+
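A binding sketch matching the phone-number example above (the exact pattern is an assumption; back-references follow normal Java regex replacement syntax):
+
bindings:
+  phone: MatchRegex('([0-9]{3})-([0-9]{3})-([0-9]{4})','$1 $2 $3')
+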
+
Max
+
Return the maximum of either the input value or the specified max.
+
+
+
double -> Max(double: max) -> double
+
+
+
long -> Max(long: max) -> long
+
+
example:Max(42L)
+
take the value of 42L or the input, whichever is greater
+
example:Max(-42L)
+
take the value of -42L or the input, whichever is greater
+
+
+
+
int -> Max(int: max) -> int
+
+
+
Min
+
Return the minimum of either the input value or the specified minimum.
+
+
+
double -> Min(double: min) -> double
+
+
+
long -> Min(long: min) -> long
+
+
+
int -> Min(int: min) -> int
+
+
+
Mod
+
Return the result of modulo division by the specified divisor.
+
+
+
long -> Mod(long: modulo) -> long
+
+
+
int -> Mod(int: modulo) -> int
+
+
+
long -> Mod(int: modulo) -> int
+
+
+
ModuloCSVLineToString
+
Select a value from a CSV file line by modulo division against the number of lines in the file. The second parameter is the field name, and this must be provided in the CSV header line as written.
+
+
long -> ModuloCSVLineToString(String: filename, String: fieldname) -> String
+
load values for 'lat' from the CSV file myfile.csv.
+
+
+
+
ModuloLineToString
+
Select a value from a text file line by modulo division against the number of lines in the file.
+
+
long -> ModuloLineToString(String: filename) -> String
+
+
ModuloToInteger
+
Return an integer value as the result of modulo division with the specified divisor.
+
+
long -> ModuloToInteger(int: modulo) -> Integer
+
+
ModuloToLong
+
Return a long value as the result of modulo division with the specified divisor.
+
+
long -> ModuloToLong(long: modulo) -> long
+
+
Mul
+
Return the result of multiplying the specified value with the input.
+
+
+
double -> Mul(double: factor) -> double
+
+
+
long -> Mul(long: multiplicand) -> long
+
+
+
int -> Mul(int: addend) -> int
+
+
+
long -> Mul(int: multiplicand) -> int
+
+
+
Murmur3DivToLong
+
Yield a long value which is the result of hashing and modulo division with the specified divisor.
+
+
long -> Murmur3DivToLong(long: divisor) -> long
+
+
Murmur3DivToString
+
Yield a String value which is the result of hashing and modulo division with the specified divisor to long and then converting the value to String.
+
+
long -> Murmur3DivToString(long: divisor) -> String
+
+
NumberNameToString
+
Provides the spelled-out name of a number. For example, an input of 7 would yield "seven". An input of 4234 yields the value "four thousand two hundred thirty four". The maximum value is limited at 999,999,999.
+
+
long -> NumberNameToString() -> String
+
+
PartitionLongs
+
Split the value range of Java longs into a number of offsets, starting with Long.MIN_VALUE. This method makes it easy to construct a set of offsets for testing, or to limit the values used to a subset. The outputs will range from Long.MIN_VALUE (-2^63) up. This is not an exact emulation of token range splits in Apache Cassandra.
+
+
long -> PartitionLongs(int: partitions) -> long
+
+
Prefix
+
Add the specified prefix String to the input value and return the result.
+
+
String -> Prefix(String: prefix) -> String
+
+
example:Prefix('PREFIX:')
+
Prepend 'PREFIX:' to every input value
+
+
+
+
ReplaceAll
+
Replace all occurrences of the extant string with the replacement string.
Replace all occurrences of the regular expression with the replacement string. Note, this is much less efficient than using the simple ReplaceAll for most cases.
Replace all occurrences of 'o' or 'n' or 'e' with 'two'
+
+
+
+
Shuffle
+
This function provides a low-overhead shuffling effect without loading elements into memory. It uses a bundled dataset of pre-computed Galois LFSR shift register configurations, along with a down-sampling method to provide amortized virtual shuffling with minimal memory usage. Essentially, this guarantees that every value in the specified range will be seen at least once before the cycle repeats. However, since the order of traversal of these values is dependent on the LFSR configuration, some orders will appear much more random than others depending on where you are in the traversal cycle. This function *does* yield values that are deterministic.
+
+
+
long -> Shuffle(long: min, long: maxPlusOne) -> long
+
+
example:Shuffle(11,99)
+
Provide all values between 11 and 98 inclusive, in some order, then repeat
+
+
+
+
long -> Shuffle(long: min, long: maxPlusOne, int: bankSelector) -> long
+
+
example:Shuffle(11,99,3)
+
Provide all values between 11 and 98 inclusive, in some different (and repeatable) order, then repeat
+
+
+
+
SignedHash
+
This uses the Murmur3F (64-bit optimized) version of Murmur3, not as a checksum, but as a simple hash. It doesn't bother pushing the high-64 bits of input, since it only uses the lower 64 bits of output. Unlike the other hash functions, this one may return positive as well as negative values.
+
+
+
long -> SignedHash() -> long
+
+
+
int -> SignedHash() -> int
+
+
+
long -> SignedHash() -> int
+
+
+
StaticStringMapper
+
Return a static String value.
+
+
long -> StaticStringMapper(String: string) -> String
+
+
Suffix
+
Add the specified suffix String to the input value and return the result.
+
+
String -> Suffix(String: suffix) -> String
+
+
example:Suffix('--Fin')
+
Append '--Fin' to every input value
+
+
+
+
Template
+
Creates a template function which will yield a string which fits the template provided, with all occurrences of {} substituted pair-wise with the result of the provided functions. The number of {} entries in the template must strictly match the number of functions or an error will be thrown. If you need to include single quotes or other special characters, you may use a backslash "\" in your template. The objects passed must be functions of any of the following types:
+
+
LongUnaryOperator
+
IntUnaryOperator
+
DoubleUnaryOperator
+
LongFunction
+
IntFunction
+
DoubleFunction
+
Function<Long,?>
+
+
The result of applying the input value to any of these functions is converted to a String
+and then stitched together according to the template provided.
+
+
+
long -> Template(String: template, Object[]...: funcs) -> String
+
+
example:Template('{}-{}',Add(10),Hash())
+
concatenate input+10, '-', and a pseudo-random long
+
+
+
+
long -> Template(boolean: truncate, String: template, Object[]...: funcs) -> String
+
+
example:Template(true, '{}-{}', Add(10),Hash())
+
throws an error, as the Add(10) function causes a narrowing conversion for a long input
+
+
+
+
long -> Template(function.LongUnaryOperator: iterOp, String: template, function.LongFunction<?>[]...: funcs) -> String
+
+
notes: If an operator is provided, it is used to change the function input value in an additional way before each function.
+
+
+
+
@param iterOp A pre-generation value mapping function
+@param template A string template containing {} anchors
+@param funcs A varargs length of LongFunctions of any output type
+
ThreadNumToInteger
+
Matches a digit sequence in the current thread name and caches it in a thread local. This allows you to use any intentionally indexed thread factories to provide an analogue for concurrency. Note that once the thread number is cached, it will not be refreshed. This means you can't change the thread name and get an updated value.
+
+
long -> ThreadNumToInteger() -> Integer
+
+
ThreadNumToLong
+
Matches a digit sequence in the current thread name and caches it in a thread local. This allows you to use any intentionally indexed thread factories to provide an analogue for concurrency. Note that once the thread number is cached, it will not be refreshed. This means you can't change the thread name and get an updated value.
+
+
long -> ThreadNumToLong() -> long
+
+
ToBase64
+
Takes a bytebuffer and turns it into a base64 string
+
+
java.nio.ByteBuffer -> ToBase64() -> String
+
+
ToHashedUUID
+
This function provides a stable hashing of the input value to a version 4 (Random) UUID.
+
+
long -> ToHashedUUID() -> UUID
+
+
Trim
+
Trim the input value and return the result.
+
+
String -> Trim() -> String
+
+
WeightedLongs
+
Provides a long value from a list of weighted values. The total likelihood of any value to be produced is proportional to its relative weight in the total weight of all elements. This function automatically hashes the input, so the result is already pseudo-random.
+
+
long -> WeightedLongs(String: valuesAndWeights) -> Long
+
+
example:WeightedLongs('1:10;3:5;12345:1')
+
Yield 1 62.5% of the time, 3 31.25% of the time, and 12345 6.25% of the time
+
example:WeightedLongs('1,6,7')
+
Yield 1 33.3% of the time, 6 33.3% of the time, and 7 33.3% of the time
+
+
+
+
WeightedStrings
+
Allows for weighted elements to be used, such as a:0.25;b:0.25;c:0.5 or a:1;b:1.0;c:2.0. The unit weights are normalized to the cumulative sum internally, so it is not necessary for them to add up to any particular value.
+
+
long -> WeightedStrings(String: valuesAndWeights) -> String
+
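For instance, a binding sketch that yields 'on' a quarter of the time and 'off' three quarters of the time (the binding name, values, and weights are illustrative):
+
bindings:
+  device_state: WeightedStrings('on:0.25;off:0.75')
+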
+
WeightedStringsFromCSV
+
Provides sampling of a given field in a CSV file according to discrete probabilities. The CSV file must have headers which can be used to find the named columns for value and weight. The value column contains the string result to be returned by the function. The weight column contains the floating-point weight or mass associated with the value on the same line. All the weights are normalized automatically.
+
If there are multiple file names containing the same format, then they
+will all be read in the same way.
+
If the first word in the filenames list is 'map', then the values will not
+be pseudo-randomly selected. Instead, they will be mapped over in some
+other unsorted and stable order as input values vary from 0L to Long.MAX_VALUE.
+
Generally, you want to leave out the 'map' directive to get "random sampling"
+of these values.
+
This function works the same as the three-parametered form of WeightedStrings,
+which is deprecated in lieu of this one. Use this one instead.
notes: Create a sampler of strings from the given CSV file. The CSV file must have plain CSV headers
+as its first line.
+@param valueColumn The name of the value column to be sampled
+@param weightColumn The name of the weight column, which must be parsable as a double
+@param filenames One or more file names which will be read in to the sampler buffer
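A binding sketch based on the parameters above (the column names and file name are hypothetical):
+
bindings:
+  username: WeightedStringsFromCSV('name','weight','usernames.csv')
+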
These functions can generate null values. When using nulls in your binding recipes, ensure that you don't generate them
+in-line as inputs to other functions. This will lead to errors which interrupt your test. If you must use functions that
+generate null values, ensure that they are the only or last function in a chain.
+
If you need to mark a field to be undefined, but not set to null, then use the functions which know how to yield a
+VALUE.UNSET, which is a sigil constant within the VirtData runtime. These functions are correctly interpreted by
+conformant drivers like the SQL driver so that they will avoid injecting the named field into an operation if it has this
+special value.
+
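For example, a recipe sketch that leaves a field undefined roughly ten percent of the time, keeping the unset-producing function last in the chain as advised above (the binding name and threshold are illustrative):
+
bindings:
+  score: HashRange(0L,100L); UnsetIfGt(90L)
+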
NullIfCloseTo
+
Returns null if the input value is within range of the specified value.
NullOrLoad
+
Reads a long variable from the input, hashes and scales it to the unit interval 0.0d - 1.0d, then uses the result to determine whether to return a null object or a loaded value.
+
+
long -> NullOrLoad(double: ratio, String: varname) -> Object
+
+
NullOrPass
+
Reads a long variable from the thread local variable map, hashes and scales it to the unit interval 0.0d - 1.0d, then uses the result to determine whether to return a null object or the input value.
Unset
+
Always yields the VALUE.unset value, which signals to any consumers that the value provided should be considered undefined for any operation. This is distinct from functions which return a null, which is considered an actual value to be acted upon. It is deemed an error for any downstream user of this library to do anything with VALUE.unset besides explicitly acting like it wasn't provided. That is the point of VALUE.unset. The purpose of having such a value in this library is to provide a value type to help bridge between functional flows and imperative run-times. Without such a value, it would be difficult to simulate value streams in which some of the time values are set and other times they are not.
+
+
long -> Unset() -> Object
+
+
UnsetIfCloseTo
+
Yield VALUE.unset if the input value is close to the specified value by the sigma threshold. Otherwise, pass the input value along.
UnsetIfEmpty
+
Yield VALUE.unset if the input String is empty. Throws an error if the input value is null. Otherwise, passes the original value along.
+
+
String -> UnsetIfEmpty() -> Object
+
+
UnsetIfEq
+
Yield UNSET.value if the input value is equal to the specified value. Otherwise, pass the input value along.
+
+
+
double -> UnsetIfEq(double: compareto) -> Double
+
+
+
long -> UnsetIfEq(long: compareto) -> Object
+
+
+
UnsetIfGe
+
Yield VALUE.unset if the input value is greater than or equal to the specified value. Otherwise, pass the input value along.
+
+
+
long -> UnsetIfGe(long: compareto) -> Object
+
+
+
double -> UnsetIfGe(double: compareto) -> Object
+
+
+
UnsetIfGt
+
Yield UNSET.value if the input value is greater than the specified value. Otherwise, pass the input value along.
+
+
+
long -> UnsetIfGt(long: compareto) -> Object
+
+
+
double -> UnsetIfGt(double: compareto) -> Object
+
+
+
UnsetIfLe
+
Yield VALUE.unset if the input value is less than or equal to the specified value. Otherwise, pass the value along.
+
+
+
long -> UnsetIfLe(long: compareto) -> Object
+
+
+
double -> UnsetIfLe(double: compareto) -> Object
+
+
+
UnsetIfLt
+
Yield VALUE.unset if the provided value is less than the specified value, otherwise, pass the original value.
+
+
+
long -> UnsetIfLt(long: compareto) -> Object
+
+
+
double -> UnsetIfLt(double: compareto) -> Object
+
+
+
UnsetIfNullOrEmpty
+
Yields UNSET.value if the input value is null or empty. Otherwise, passes the original value along.
+
+
String -> UnsetIfNullOrEmpty() -> Object
+
+
UnsetIfStringEq
+
Yields UNSET.value if the input value is equal to the specified value. Throws an error if the input value is null. Otherwise, passes the original value along.
Yields UNSET.value if the input String is not equal to the specified String value. Throws an error if the input value is null. Otherwise, passes the original value along.
UnsetOrLoad
+
Reads a long variable from the input, hashes and scales it to the unit interval 0.0d - 1.0d, then uses the result to determine whether to return UNSET.value or a loaded value.
+
+
long -> UnsetOrLoad(double: ratio, String: varname) -> Object
+
+
UnsetOrPass
+
Reads a long variable from the thread local variable map, hashes and scales it to the unit interval 0.0d - 1.0d, then uses the result to determine whether to return UNSET.value or the input value.
Distance
+
Create a Distance generator which produces com.datastax.driver.dse.geometry.Distance objects.
+
+
+
long -> Distance(function.LongToDoubleFunction: xfunc, function.LongToDoubleFunction: yfunc, function.LongToDoubleFunction: rfunc) -> com.datastax.dse.driver.internal.core.data.geometry.Distance
+
+
+
long -> Distance(double: x, function.LongToDoubleFunction: yfunc, function.LongToDoubleFunction: rfunc) -> com.datastax.dse.driver.internal.core.data.geometry.Distance
+
+
+
long -> Distance(function.LongToDoubleFunction: xfunc, double: y, function.LongToDoubleFunction: rfunc) -> com.datastax.dse.driver.internal.core.data.geometry.Distance
+
+
+
long -> Distance(double: x, double: y, function.LongToDoubleFunction: rfunc) -> com.datastax.dse.driver.internal.core.data.geometry.Distance
+
+
+
long -> Distance(function.LongToDoubleFunction: xfunc, function.LongToDoubleFunction: yfunc, double: r) -> com.datastax.dse.driver.internal.core.data.geometry.Distance
+
+
+
long -> Distance(double: x, function.LongToDoubleFunction: yfunc, double: r) -> com.datastax.dse.driver.internal.core.data.geometry.Distance
+
+
+
long -> Distance(function.LongToDoubleFunction: xfunc, double: y, double: r) -> com.datastax.dse.driver.internal.core.data.geometry.Distance
+
+
+
long -> Distance(double: x, double: y, double: r) -> com.datastax.dse.driver.internal.core.data.geometry.Distance
+
+
+
LineString
+
Create a com.datastax.driver.dse.geometry.LineString
+
+
+
long -> LineString(function.LongToIntFunction: lenfunc, function.LongFunction<com.datastax.dse.driver.api.core.data.geometry.Point>: pointfunc) -> com.datastax.dse.driver.api.core.data.geometry.LineString
+
+
+
long -> LineString(function.LongToIntFunction: lenfunc, function.LongToDoubleFunction: xfunc, function.LongToDoubleFunction: yfunc) -> com.datastax.dse.driver.api.core.data.geometry.LineString
+
+
+
long -> LineString(int: len, function.LongFunction<com.datastax.dse.driver.api.core.data.geometry.Point>: pointfunc) -> com.datastax.dse.driver.api.core.data.geometry.LineString
+
+
+
Point
+
Create a Point generator which generates com.datastax.driver.dse.geometry.Point objects.
+
+
+
long -> Point(double: x, double: y) -> com.datastax.dse.driver.api.core.data.geometry.Point
+
+
+
long -> Point(double: x, function.LongToDoubleFunction: yfunc) -> com.datastax.dse.driver.api.core.data.geometry.Point
+
+
+
long -> Point(function.LongToDoubleFunction: xfunc, double: y) -> com.datastax.dse.driver.api.core.data.geometry.Point
+
+
+
long -> Point(function.LongToDoubleFunction: xfunc, function.LongToDoubleFunction: yfunc) -> com.datastax.dse.driver.api.core.data.geometry.Point
+
+
+
Polygon
+
Create a com.datastax.driver.dse.geometry.Polygon
+
+
+
long -> Polygon(function.LongToIntFunction: lenfunc, function.LongFunction<com.datastax.dse.driver.api.core.data.geometry.Point>: pointfunc) -> com.datastax.dse.driver.api.core.data.geometry.Polygon
+
+
+
long -> Polygon(function.LongToIntFunction: lenfunc, function.LongToDoubleFunction: xfunc, function.LongToDoubleFunction: yfunc) -> com.datastax.dse.driver.api.core.data.geometry.Polygon
+
+
+
long -> Polygon(int: len, function.LongFunction<com.datastax.dse.driver.api.core.data.geometry.Point>: pointfunc) -> com.datastax.dse.driver.api.core.data.geometry.Polygon
+
+
+
PolygonOnGrid
+
This function will return a polygon in the form of a rectangle from the specified grid system. The coordinates define the top left and bottom right coordinates in (x1,y1),(x2,y2) order, while the number of rows and columns divides these ranges into the unit-length for each square. x1 must be greater than x2. y1 must be less than y2. This grid system can be used to construct a set of overlapping grids such that the likelihood of overlap is somewhat easy to reason about. For example, if you create one grid system as a reference grid, then attempt to map another grid system which half overlaps the original grid, you can easily determine that half the time, a random rectangle selected from the second grid will overlap a rectangle from the first, for simple even-numbered grids and the expected uniform sampling on the internal coordinate selector functions.
Functions in this category are meant to provide easy grab-and-go functions that are tailored for real-world simulation.
+This library will grow over time. These functions are often built directly on top of other functions in the core
+libraries. However, they are provided here for simplicity in workload construction. They perform exactly the same as
+their longer-form equivalents.
+
Cities
+
Return a valid city name.
+
+
long -> Cities() -> String
+
+
example:Cities()
+
+
+
+
CitiesByDensity
+
Return a city name, weighted by population density.
+
+
long -> CitiesByDensity() -> String
+
+
example:CitiesByDensity()
+
+
+
+
CitiesByPopulation
+
Return a city name, weighted by total population.
+
+
long -> CitiesByPopulation() -> String
+
+
example:CitiesByPopulation()
+
+
+
+
Counties
+
Return a valid county name.
+
+
long -> Counties() -> String
+
+
example:Counties()
+
+
+
+
CountiesByDensity
+
Return a county name weighted by population density.
+
+
long -> CountiesByDensity() -> String
+
+
example:CountiesByDensity()
+
+
+
+
CountiesByPopulation
+
Return a county name weighted by total population.
+
+
long -> CountiesByPopulation() -> String
+
+
example:CountiesByPopulation()
+
+
+
+
CountryCodes
+
Return a valid country code.
+
+
long -> CountryCodes() -> String
+
+
example:CountryCodes()
+
+
+
+
CountryNames
+
Return a valid country name.
+
+
long -> CountryNames() -> String
+
+
example:CountryNames()
+
+
+
+
FirstNames
+
Return a pseudo-randomly sampled first name from the last US census data on first names occurring more than 100 times. Both male and female names are combined in this function.
+
+
+
long -> FirstNames() -> String
+
+
example:FirstNames()
+
select a random first name based on the chance of seeing it in the census data
+
+
+
+
long -> FirstNames(String: modifier) -> String
+
+
example:FirstNames('map')
+
select over the first names by probability as input varies from 1L to Long.MAX_VALUE
+
+
+
+
FullNames
+
Combines the FirstNames and LastNames functions into one that simply concatenates them with a space between. This function is a shorthand equivalent of {@code Template('{} {}', FirstNames(), LastNames())}
+
+
long -> FullNames() -> String
+
+
LastNames
+
Return a pseudo-randomly sampled last name from the last US census data on last names occurring more than 100 times.
+
+
+
long -> LastNames() -> String
+
+
example:LastNames()
+
select a random last name based on the chance of seeing it in the census data
+
+
+
+
long -> LastNames(String: modifier) -> String
+
+
example:LastNames('map')
+
select over the last names by probability as input varies from 1L to Long.MAX_VALUE
+
+
+
+
NumberNameToString
+
Provides the spelled-out name of a number. For example, an input of 7 would yield "seven". An input of 4234 yields the value "four thousand two hundred thirty four". The maximum value is limited at 999,999,999.
+
+
long -> NumberNameToString() -> String
+
+
StateCodes
+
Return a valid state code (abbreviation).
+
+
long -> StateCodes() -> String
+
+
example:StateCodes()
+
+
+
+
StateCodesByDensity
+
Return a state code (abbreviation), weighted by population density.
+
+
long -> StateCodesByDensity() -> String
+
+
example:StateCodesByDensity()
+
+
+
+
StateCodesByPopulation
+
Return a state code (abbreviation), weighted by population.
+
+
long -> StateCodesByPopulation() -> String
+
+
example:StateCodesByPopulation()
+
+
+
+
StateNames
+
Return a valid state name.
+
+
long -> StateNames() -> String
+
+
example:StateNames()
+
+
+
+
StateNamesByDensity
+
Return a state name, weighted by population density.
+
+
long -> StateNamesByDensity() -> String
+
+
example:StateNamesByDensity()
+
+
+
+
StateNamesByPopulation
+
Return a state name, weighted by total population.
+
+
long -> StateNamesByPopulation() -> String
+
+
example:StateNamesByPopulation()
+
+
+
+
TimeZones
+
Return a valid time zone name.
+
+
long -> TimeZones() -> String
+
+
example:TimeZones()
+
+
+
+
TimeZonesByDensity
+
Return a time zone name, weighted by population density.
+
+
long -> TimeZonesByDensity() -> String
+
+
example:TimeZonesByDensity()
+
+
+
+
TimeZonesByPopulation
+
Return a time zone name, weighted by population.
+
+
long -> TimeZonesByPopulation() -> String
+
+
example:TimeZonesByPopulation()
+
+
+
+
ToMD5ByteBuffer
+
Converts the byte image of the input long to an MD5 digest in ByteBuffer form. This usage is deprecated due to the unsafe MD5 digest; it is replaced by DigestToByteBuffer with MD5 when absolutely needed for existing NB tests. However, stronger digest algorithms (e.g. SHA-256) are recommended due to MD5's limitations.
+
+
long -> ToMD5ByteBuffer() -> java.nio.ByteBuffer
+
+
+
notes: Deprecated usage due to unsafe MD5 digest.
+Use the DigestToByteBuffer with alternatives other than MD5.
+
+
+
example:ToMD5ByteBuffer()
+
+
+
convert the input to an MD5 digest of its bytes
+
+
+
+
+
ZipCodes
+
Return a valid zip code.
+
+
long -> ZipCodes() -> String
+
+
example:ZipCodes()
+
+
+
+
ZipCodesByDensity
+
Return a zip code, weighted by population density.
Functions in the state category allow you to do things with side-effects in the function flow. Specifically, they allow
+you to save or load values of named variables to thread-local registers. These work best when used with non-async
+activities, since the normal statement grouping allows you to share data between statements in the sequence. It is not
+advised to use these with async activities.
+
When using these functions, be careful that you call them when needed. For example, if you have a named binding which
+will save a value, that action only occurs if some statement with this named binding is used.
+
For example, if you have account records and transaction records, where you want to save the account identifier to
+use within the transaction inserts, you must ensure that each account binding is used within the thread first.
+
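As a sketch of the save-then-load pattern described above (the binding and variable names are illustrative; the binding that saves must be used by a statement before the one that loads):
+
bindings:
+  account_id: HashRange(1L,1000000L); Save('account_id')
+  txn_account: Load('account_id')
+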
Clear
+
Clears the per-thread map which is used by the Expr function.
+
+
+
Object -> Clear() -> Object
+
+
+
notes: Clear all named entries from the per-thread map.
+
+
+
example:Clear()
+
+
+
clear all thread-local variables
+
+
+
+
+
Object -> Clear(String[]...: names) -> Object
+
+
+
notes: Clear the specified names from the per-thread map.
+@param names The names to be removed from the map.
+
+
+
example:Clear('foo')
+
+
+
clear the thread-local variable 'foo'
+
+
+
example:Clear('foo','bar')
+
+
+
clear the thread-local variables 'foo' and 'bar'
+
+
+
+
+
long -> Clear() -> long
+
+
+
notes: Clear all named entries from the per-thread map.
+
+
+
example:Clear()
+
+
+
clear all thread-local variables
+
+
+
+
+
long -> Clear(String[]...: names) -> long
+
+
+
notes: Clear the specified names from the per-thread map.
+@param names The names to be removed from the map.
+
+
+
example:Clear('foo')
+
+
+
clear the thread-local variable 'foo'
+
+
+
example:Clear('foo','bar')
+
+
+
clear the thread-local variables 'foo' and 'bar'
+
+
+
+
+
Load
+
Load a named value from the per-thread state map. The previous input value will be forgotten, and the named value will replace it before the next function in the chain.
+
+
+
String -> Load(String: name) -> String
+
+
example:Load('foo')
+
for the current thread, load a String value from the named variable
for the current thread, load a String value from the named variable, where the variable name is provided by a function, or the default value if the variable is not yet defined.
+
+
+
+
int -> Load(String: name) -> int
+
+
example:Load('foo')
+
for the current thread, load an int value from the named variable
+
+
+
+
int -> Load(String: name, int: defaultValue) -> int
+
+
example:Load('foo',42)
+
for the current thread, load an int value from the named variable, or return the default value if it is undefined.
+
+
+
+
int -> Load(function.Function<Object,Object>: nameFunc) -> int
+
+
example:Load(NumberNameToString())
+
for the current thread, load an int value from the named variable, where the variable name is provided by a function.
+
+
+
+
int -> Load(function.Function<Object,Object>: nameFunc, int: defaultValue) -> int
+
+
example:Load(NumberNameToString(),42)
+
for the current thread, load an int value from the named variable, where the variable name is provided by a function, or the default value if the named variable is undefined.
+
+
+
+
double -> Load(String: name) -> double
+
+
example:Load('foo')
+
for the current thread, load a double value from the named variable
for the current thread, load a double value from the named variable, where the variable name is provided by a function, or the default value if the named value is not yet defined.
+
+
+
+
long -> Load(String: name) -> long
+
+
example:Load('foo')
+
for the current thread, load a long value from the named variable
+
+
+
+
long -> Load(String: name, long: defaultValue) -> long
+
+
example:Load('foo', 423L)
+
for the current thread, load a long value from the named variable, or the default value if the variable is not yet defined
+
+
+
+
long -> Load(function.Function<Object,Object>: nameFunc) -> long
+
+
example:Load(NumberNameToString())
+
for the current thread, load a long value from the named variable, where the variable name is provided by a function.
+
+
+
+
long -> Load(function.Function<Object,Object>: nameFunc, long: defaultvalue) -> long
+
+
example:Load(NumberNameToString(),22L)
+
for the current thread, load a long value from the named variable, where the variable name is provided by a function, or the default value if the variable is not yet defined
+
+
+
+
long -> Load(String: name) -> Object
+
+
example:Load('foo')
+
for the current thread, load an Object value from the named variable
+
+
+
+
long -> Load(function.LongFunction: nameFunc) -> Object
+
+
example:Load(NumberNameToString())
+
for the current thread, load an Object value from the named variable, where the variable name is returned by the provided function
+
+
+
+
long -> Load(String: name, Object: defaultValue) -> Object
+
+
example:Load('foo','testvalue')
+
for the current thread, load an Object value from the named variable, or the default value if the variable is not yet defined.
+
+
+
+
long -> Load(function.LongFunction: nameFunc, Object: defaultValue) -> Object
+
+
example:Load(NumberNameToString(),'testvalue')
+
for the current thread, load an Object value from the named variable, where the variable name is returned by the provided function, or the default value if the variable is not yet defined.
+
+
+
+
Object -> Load(String: name) -> Object
+
+
example:Load('foo')
+
for the current thread, load an Object value from the named variable
for the current thread, load an Object value from the named variable, where the variable name is returned by the provided function, or the default value if the variable is not yet defined.
+
+
+
+
LoadDouble
+
Load a value from a named thread-local variable, where the variable name is fixed or a generated variable name from a provided function. If the named variable is not defined, then the default value is returned.
+
+
+
long -> LoadDouble(String: name) -> double
+
+
example:LoadDouble('foo')
+
for the current thread, load a double value from the named variable.
+
+
+
+
long -> LoadDouble(String: name, double: defaultValue) -> double
+
+
example:LoadDouble('foo',23D)
+
for the current thread, load a double value from the named variable, or the default value if the named variable is not defined.
+
+
+
+
long -> LoadDouble(function.LongFunction: nameFunc) -> double
+
+
example:LoadDouble(NumberNameToString())
+
for the current thread, load a double value from the named variable, where the variable name is provided by a function.
+
+
+
+
long -> LoadDouble(function.LongFunction: nameFunc, double: defaultValue) -> double
+
+
example:LoadDouble(NumberNameToString(),23D)
+
for the current thread, load a double value from the named variable, where the variable name is provided by a function, or the default value if the named variable is not defined.
+
+
+
+
Object -> LoadDouble(String: name) -> Double
+
+
example:LoadDouble('foo')
+
for the current thread, load a double value from the named variable.
for the current thread, load a double value from the named variable, where the variable name is provided by a function, or the default value if the named variable is not defined.
+
+
+
+
LoadFloat
+
Load a value from a named thread-local variable, where the variable name is fixed or a generated variable name from a provided function. If the named variable is not defined, then the default value is returned.
+
+
+
Object -> LoadFloat(String: name) -> Float
+
+
example:LoadFloat('foo')
+
for the current thread, load a float value from the named variable.
for the current thread, load a float value from the named variable, where the variable name is provided by a function, or the default value if the named variable is not defined.
+
+
+
+
long -> LoadFloat(String: name) -> Float
+
+
example:LoadFloat('foo')
+
for the current thread, load a float value from the named variable.
+
+
+
+
long -> LoadFloat(String: name, float: defaultValue) -> Float
+
+
example:LoadFloat('foo',23F)
+
for the current thread, load a float value from the named variable, or the default value if the named variable is not defined.
+
+
+
+
long -> LoadFloat(function.LongFunction: nameFunc) -> Float
+
+
example:LoadFloat(NumberNameToString())
+
for the current thread, load a float value from the named variable, where the variable name is provided by a function.
+
+
+
+
long -> LoadFloat(function.LongFunction: nameFunc, float: defaultValue) -> Float
+
+
example:LoadFloat(NumberNameToString(),23F)
+
for the current thread, load a float value from the named variable, where the variable name is provided by a function, or the default value if the named variable is not defined.
+
+
+
+
LoadInteger
+
Load a value from a named thread-local variable, where the variable name is fixed or a generated variable name from a provided function. If the named variable is not defined, then the default value is returned.
+
+
+
Object -> LoadInteger(String: name) -> Integer
+
+
example:LoadInteger('foo')
+
for the current thread, load an integer value from the named variable.
for the current thread, load an integer value from the named variable, where the variable name is provided by a function, or the default value if the named variable is not defined.
+
+
+
+
long -> LoadInteger(String: name) -> int
+
+
example:LoadInteger('foo')
+
for the current thread, load an integer value from the named variable.
+
+
+
+
long -> LoadInteger(String: name, int: defaultValue) -> int
+
+
example:LoadInteger('foo',42)
+
for the current thread, load an integer value from the named variable, or the default value if the named variable is not defined.
+
+
+
+
long -> LoadInteger(function.LongFunction: nameFunc) -> int
+
+
example:LoadInteger(NumberNameToString())
+
for the current thread, load an integer value from the named variable, where the variable name is provided by a function.
+
+
+
+
long -> LoadInteger(function.LongFunction: nameFunc, int: defaultValue) -> int
+
+
example:LoadInteger(NumberNameToString(),42)
+
for the current thread, load an integer value from the named variable, where the variable name is provided by a function, or the default value if the named variable is not defined.
+
+
+
+
LoadLong
+
Load a value from a named thread-local variable, where the variable name is fixed or a generated variable name from a provided function. If the named variable is not defined, then the default value is returned.
+
+
+
long -> LoadLong(String: name) -> long
+
+
example:LoadLong('foo',42L)
+
for the current thread, load a long value from the named variable.
+
+
+
+
long -> LoadLong(String: name, long: defaultValue) -> long
+
+
example:LoadLong('foo',42L)
+
for the current thread, load a long value from the named variable, or the default value if the named variable is not defined.
+
+
+
+
long -> LoadLong(function.LongFunction: nameFunc) -> long
+
+
example:LoadLong(NumberNameToString(),42L)
+
for the current thread, load a long value from the named variable, where the variable name is provided by a function.
+
+
+
+
long -> LoadLong(function.LongFunction: nameFunc, long: defaultValue) -> long
+
+
example:LoadLong(NumberNameToString(),42L)
+
for the current thread, load a long value from the named variable, where the variable name is provided by a function, or the default value if the named variable is not defined.
+
+
+
+
Object -> LoadLong(String: name) -> Long
+
+
example:LoadLong('foo',42L)
+
for the current thread, load a long value from the named variable.
+
+
+
+
Object -> LoadLong(String: name, long: defaultValue) -> Long
+
+
example:LoadLong('foo',42L)
+
for the current thread, load a long value from the named variable, or the default value if the named variable is not defined.
+
+
+
+
Object -> LoadLong(function.Function<Object,Object>: nameFunc) -> Long
+
+
example:LoadLong(NumberNameToString(),42L)
+
for the current thread, load a long value from the named variable, where the variable name is provided by a function.
+
+
+
+
Object -> LoadLong(function.Function<Object,Object>: nameFunc, long: defaultValue) -> Long
+
+
example:LoadLong(NumberNameToString(),42L)
+
for the current thread, load a long value from the named variable, where the variable name is provided by a function, or the default value if the named variable is not defined.
+
+
+
+
LoadString
+
Load a value from a named thread-local variable, where the variable name is fixed or a generated variable name from a provided function. If the named variable is not defined, then the default value is returned.
+
+
+
Object -> LoadString(String: name) -> String
+
+
example:LoadString('foo','examplevalue')
+
for the current thread, load a String value from the named variable.
for the current thread, load a String value from the named variable, where the variable name is provided by a function, or the default value if the named variable is not defined.
+
+
+
+
long -> LoadString(String: name) -> String
+
+
example:LoadString('foo','examplevalue')
+
for the current thread, load a String value from the named variable.
+
+
+
+
long -> LoadString(String: name, String: defaultValue) -> String
+
+
example:LoadString('foo','examplevalue')
+
for the current thread, load a String value from the named variable, or the default value if the named variable is not defined.
+
+
+
+
long -> LoadString(function.LongFunction: nameFunc) -> String
for the current thread, load a String value from the named variable, where the variable name is provided by a function, or the default value if the named variable is not defined.
+
+
+
+
NullOrLoad
+
Reads a long variable from the input, hashes and scales it to the unit interval 0.0d - 1.0d, then uses the result to determine whether to return null object or a loaded value.
+
+
long -> NullOrLoad(double: ratio, String: varname) -> Object
+
+
NullOrPass
+
Reads a long variable from the thread local variable map, hashes and scales it to the unit interval 0.0d - 1.0d, then uses the result to determine whether to return a null object or the input value.
Save the current input value at this point in the function chain to a thread-local variable name. The input value is unchanged, and available for the next function in the chain to use as-is.
+
+
+
long -> Save(String: name) -> long
+
+
example:Save('foo')
+
for the current thread, save the input object value to the named variable
+
+
+
+
long -> Save(function.LongFunction: nameFunc) -> long
+
+
example:Save(NumberNameToString())
+
for the current thread, save the current input object value to the named variable,where the variable name is provided by a function.
+
+
+
+
long -> Save(String: name) -> long
+
+
example:Save('foo')
+
save the current long value to the name 'foo' in this thread
+
+
+
+
long -> Save(function.Function<Object,Object>: nameFunc) -> long
+
+
example:Save(NumberNameToString())
+
save the current long value to the name generated by the function given.
+
+
+
+
String -> Save(String: name) -> String
+
+
example:Save('foo')
+
save the current String value to the name 'foo' in this thread
+
+
+
+
int -> Save(String: name) -> int
+
+
example:Save('foo')
+
save the current int value to the name 'foo' in this thread
+
+
+
+
int -> Save(function.Function<Object,Object>: nameFunc) -> int
+
+
example:Save(NumberNameToString())
+
save the current int value to a named variable in this thread,where the variable name is provided by a function.
+
+
+
+
SaveDouble
+
Save a value to a named thread-local variable, where the variable name is fixed or a generated variable name from a provided function. Note that the input type is not that suitable for constructing names, so this is more likely to be used in an indirect naming pattern like SaveDouble(Load('id'))
+
+
+
double -> SaveDouble(String: name) -> double
+
+
example:Save('foo')
+
save the current double value to the name 'foo' in this thread
+
+
+
+
long -> SaveDouble(String: name) -> double
+
+
example:Save('foo')
+
save the current double value to the name 'foo' in this thread
+
+
+
+
long -> SaveDouble(function.LongFunction: nameFunc) -> double
+
+
example:Save(NumberNameToString())
+
save a double value to a named variable in the current thread, where the variable name is provided by a function.
+
+
+
+
SaveFloat
+
Save a value to a named thread-local variable, where the variable name is fixed or a generated variable name from a provided function. Note that the input type is not that suitable for constructing names, so this is more likely to be used in an indirect naming pattern like SaveFloat(Load('id'))
+
+
+
long -> SaveFloat(String: name) -> Float
+
+
example:SaveFloat('foo')
+
save the current float value to a named variable in this thread.
+
+
+
+
long -> SaveFloat(function.LongFunction: nameFunc) -> Float
+
+
example:SaveFloat(NumberNameToString())
+
save the current float value to a named variable in this thread, where the variable name is provided by a function.
+
+
+
+
Float -> SaveFloat(String: name) -> Float
+
+
example:SaveFloat('foo')
+
save the current float value to a named variable in this thread.
+
+
+
+
SaveInteger
+
Save a value to a named thread-local variable, where the variable name is fixed or a generated variable name from a provided function. Note that the input type is not that suitable for constructing names, so this is more likely to be used in an indirect naming pattern like SaveInteger(Load('id'))
+
+
+
int -> SaveInteger(String: name) -> int
+
+
example:SaveInteger('foo')
+
save the current integer value to a named variable in this thread.
+
+
+
+
int -> SaveInteger(function.Function<Object,Object>: nameFunc) -> int
+
+
example:SaveInteger(NumberNameToString())
+
save the current integer value to a named variable in this thread, where the variable name is provided by a function.
+
+
+
+
long -> SaveInteger(String: name) -> int
+
+
example:SaveInteger('foo')
+
save the current integer value to a named variable in this thread.
+
+
+
+
long -> SaveInteger(function.LongFunction: nameFunc) -> int
+
+
example:SaveInteger(NumberNameToString())
+
save the current integer value to a named variable in this thread, where the variable name is provided by a function.
+
+
+
+
SaveLong
+
Save a value to a named thread-local variable, where the variable name is fixed or a generated variable name from a provided function. Note that the input type is not that suitable for constructing names, so this is more likely to be used in an indirect naming pattern like SaveLong(Load('id'))
+
+
+
long -> SaveLong(String: name) -> long
+
+
example:SaveLong('foo')
+
save the current long value to a named variable in this thread.
+
+
+
+
long -> SaveLong(function.Function<Object,Object>: nameFunc) -> long
+
+
example:SaveLong(NumberNameToString())
+
save the current long value to a named variable in this thread, where the variable name is provided by a function.
+
+
+
+
long -> SaveLong(String: name) -> long
+
+
example:SaveLong('foo')
+
save the current long value to a named variable in this thread.
+
+
+
+
long -> SaveLong(function.Function<Object,Object>: nameFunc) -> long
+
+
example:SaveLong(NumberNameToString())
+
save the current long value to a named variable in this thread, where the variable name is provided by a function.
+
+
+
+
SaveString
+
Save a value to a named thread-local variable, where the variable name is fixed or a generated variable name from a provided function. Note that the input type is not that suitable for constructing names, so this is more likely to be used in an indirect naming pattern like SaveString(Load('id'))
+
+
+
long -> SaveString(String: name) -> String
+
+
example:SaveString('foo')
+
save the current String value to a named variable in this thread.
+
+
+
+
long -> SaveString(function.LongFunction: nameFunc) -> String
+
+
example:SaveString(NumberNameToString())
+
save the current String value to a named variable in this thread, where the variable name is provided by a function.
+
+
+
+
String -> SaveString(String: name) -> String
+
+
example:SaveString('foo')
+
save the current String value to a named variable in this thread.
+
+
+
+
Show
+
Show diagnostic values for the thread-local variable map.
+
+
+
long -> Show() -> String
+
+
example:Show()
+
Show all values in a json-like format
+
+
+
+
long -> Show(String[]...: names) -> String
+
+
example:Show('foo')
+
Show only the 'foo' value in a json-like format
+
example:Show('foo','bar')
+
Show the 'foo' and 'bar' values in a json-like format
+
+
+
+
Object -> Show() -> String
+
+
example:Show()
+
Show all values in a json-like format
+
+
+
+
Object -> Show(String[]...: names) -> String
+
+
example:Show('foo')
+
Show only the 'foo' value in a json-like format
+
example:Show('foo','bar')
+
Show the 'foo' and 'bar' values in a json-like format
+
+
+
+
Swap
+
Load a named value from the per-thread state map. The previous input value will be stored in the named value, and the previously stored value will be returned. A default value to return may be provided in case there was no previously stored value under the given name.
+
+
+
Object -> Swap(String: name) -> Object
+
+
example:Swap('foo')
+
for the current thread, swap the input value with the named variable and return the named variable
+
+
+
+
long -> Swap(String: name) -> Object
+
+
example:Swap('foo')
+
for the current thread, swap the input value with the named variable and return the named variable
+
+
+
+
long -> Swap(String: name, Object: defaultValue) -> Object
+
+
example:Swap('foo','examplevalue')
+
for the current thread, swap the input value with the named variable and return the named variable, or return the default value if the named value is not defined.
+
+
+
+
long -> Swap(function.LongFunction: nameFunc) -> Object
+
+
example:Swap(NumberNameToString())
+
for the current thread, swap the input value with the named variable and return the named variable, where the variable name is generated by the provided function.
+
+
+
+
long -> Swap(function.LongFunction: nameFunc, Object: defaultValue) -> Object
+
+
example:Swap(NumberNameToString(),'examplevalue')
+
for the current thread, swap the input value with the named variable and return the named variable, where the variable name is generated by the provided function, or the default value if the named value is not defined.
+
+
+
+
long -> Swap(String: name) -> long
+
+
example:Swap('foo')
+
for the current thread, swap the input value with the named variable and return the named variable.
+
+
+
+
long -> Swap(String: name, long: defaultValue) -> long
+
+
example:Swap('foo',234L)
+
for the current thread, swap the input value with the named variable and return the named variable, or the default value if the named variable is not defined.
+
+
+
+
long -> Swap(function.LongFunction: nameFunc) -> long
+
+
example:Swap(NumberNameToString())
+
for the current thread, swap the input value with the named variable and return the named variable, where the variable name is generated by the provided function.
+
+
+
+
long -> Swap(function.LongFunction: nameFunc, long: defaultValue) -> long
+
+
example:Swap(NumberNameToString(), 234L)
+
for the current thread, swap the input value with the named variable and return the named variable, where the variable name is generated by the provided function, or the default value if the named variable is not defined.
+
+
+
+
ThreadNum
+
Matches a digit sequence in the current thread name and caches it in a thread local. This allows you to use any intentionally indexed thread factories to provide an analogue for concurrency. Note that once the thread number is cached, it will not be refreshed. This means you can't change the thread name and get an updated value.
+
+
+
long -> ThreadNum() -> int
+
+
+
long -> ThreadNum() -> long
+
+
+
UnsetOrLoad
+
Reads a long variable from the input, hashes and scales it to the unit interval 0.0d - 1.0d, then uses the result to determine whether to return UNSET.value or a loaded value.
+
+
long -> UnsetOrLoad(double: ratio, String: varname) -> Object
+
+
UnsetOrPass
+
Reads a long variable from the thread local variable map, hashes and scales it to the unit interval 0.0d - 1.0d, then uses the result to determine whether to return UNSET.value or the input value.
+
+
LongStats
+
Provide a moving aggregate (min,max,avg,sum,count) of long values presented. This allows for sanity checks on values during prototyping, primarily. Using it for other purposes in actual workloads is not generally desirable, as this does not produce purely functional (deterministic) values.
+
+
+
double -> LongStats(String: spec) -> double
+
+
notes: Given the specified statistic, provide an
+updated result for all the values presented to this function's input.
+@param spec One of 'min', 'max', 'count', 'avg', or 'sum'
+
+
notes: Given the specified statistic, a function, and whether to allow truncating conversions,
+provide an updated result for all the values produced by the provided function when
+given the input.
+@param spec One of 'min', 'max', 'count', 'avg', or 'sum'
+@param func Any function which can take a long or compatible input and produce a numeric value
+@param truncate Whether or not to allow truncating conversions (long to int for example)
TokenMapFileCycle
+
Utility function used for advanced data generation experiments.
+
+
int -> TokenMapFileCycle(String: filename, boolean: loopdata, boolean: ascending) -> long
+
+
TokenMapFileNextCycle
+
Utility function used for advanced data generation experiments.
+
+
int -> TokenMapFileNextCycle(String: filename, boolean: loopdata, boolean: ascending) -> long
+
+
TokenMapFileNextToken
+
Utility function used for advanced data generation experiments.
+
+
int -> TokenMapFileNextToken(String: filename, boolean: loopdata, boolean: ascending) -> long
+
+
TokenMapFileToken
+
Utility function used for advanced data generation experiments.
+
+
int -> TokenMapFileToken(String: filename, boolean: loopdata, boolean: ascending) -> long
+
+
TriangularStep
+
Compute a value which increases monotonically with respect to the cycle value.
+All values for f(X+(m>=0)) will be equal or greater than f(X). In effect, this
+means that with a sequence of monotonic inputs, the results will be monotonic as
+well as clustered. The values will approximate input/average, but will vary in frequency
+around a simple binomial distribution.
+
The practical effect of this is to be able to compute a sequence of values
+over inputs which can act as foreign keys, but which are effectively ordered.
+
Call for Ideas
+
Due to the complexity of generalizing this as a pure function over other distributions,
+this is the only function of this type for now. If you are interested in this problem
+domain and have some suggestions for how to extend it to other distributions, please
+join the project or let us know.
+
+
+
long -> TriangularStep(long: average, long: variance) -> long
+
+
example:TriangularStep(100,20)
+
Create a sequence of values where the average is 100, but the range of values is between 80 and 120.
+
example:TriangularStep(80,10)
+
Create a sequence of values where the average is 80, but the range of values is between 70 and 90.
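+
For example, a minimal binding sketch (the binding name is illustrative only) that uses this function to produce clustered, ordered, foreign-key style parent identifiers:
+
bindings:
+  # averages 100 per parent, ranging between 80 and 120, monotonic over monotonic inputs
+  parent_id: TriangularStep(100,20)
+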
This is the newly revamped driver for CQL which uses the DataStax OSS Driver version 4. As
+there was a significant restructuring of the APIs between CQL driver 4.x and previous versions, this
+driver is a clean and separate implementation which aims to use the features of version 4.x of the
+native driver directly as well as new internal NoSQLBench APIs.
+
This means that many features that advanced testers may have been used to (the syntactical sugar,
+the surfacing of advanced configuration properties in simple ways, and so on) will have to be
+redesigned to fit with version 4 of the driver. Most users who do basic testing with direct CQL
+syntax should see few issues, but advanced testers will need to consult this documentation
+specifically to understand the differences between cqld4 driver features and cql driver
+features.
+
Notably, these features need to be re-built on the cqld4 driver to bring it up to parity with
+previous advanced testing features:
+
+
verify
+
result set size metrics
+
explicit paging metrics
+
+
Configuration
+
The DataStax Java Driver 4.* has a much different configuration system than previous versions. For
+changing driver settings with this version it is highly recommended that users use the built-in
+driver settings and configuration file/profile options, just as they would for an application. This
+serves two goals: 1) to ensure that the settings you test with are portable from test environment to
+application, and 2) to allow you to configure driver settings directly, without depending on
+internal helper logic provided by NoSQLBench. This means that the driver options exposed are those
+provided by the low-level driver, thus removing another dependency from your test setup.
+
Config Sources
+
By using the option driverconfig, you can have as many configuration sources as you like, even
+mixing in JSON or remote URLs.
+
examples
+
Configure directly from a config file, or classpath resource:
+
# If this isn't found in the file system, the classpath will also be checked.
+nb5 ... driverconfig=myconfig.json
+
hosts & localdc - (required unless using scb) - Set the endpoint and local datacenter name
+for the driver.
+
+
example: host=mydsehost localdc=testdc1
+
+
+
port (optional for hosts) - Set the port to connect to for the CQL binary protocol.
+
driverconfig - (explained above) - set the configuration source for the driver.
+
username OR userfile - (optional, only one may be used) - If you need to specify a
+username but want to put it in a file instead, simply use the userfile=myfile option. It is
+not uncommon to say userfile=userfile.
+
+
+
password OR passfile - (optional, only one may be used) - If you need to specify a
+password but want to put it in a file instead, simply use the passfile=mypassfile option. It
+is not uncommon to say passfile=passfile.
+
showstmt - enable per-statement diagnostics which show as much of the statement as possible
+for the given statement type. WARNING - Do not use this for performance testing, only for
+diagnostics.
+
maxpages - configure the maximum number of pages allowed in a CQL result set. This is
+configured to maxpages=1 by default, so that users will be aware of any paging that occurs
+by default. If you expect and want to allow paging in your operation, then set this number
+higher. A synthetic exception is generated as UnexpectedPagingException by default when
+the number of pages exceeds maxpages.
+
+
Activity level Driver Config
+
The activity parameters which are provided by the driver are exposed as driver.<name>. Any
+configuration option that is specified this way will be applied directly to the driver through the
+type-safe configuration layer. For example, specifying driver.basic.request.timeout='2 seconds'
+has the same effect as setting basic.request.timeout in a driver configuration file.
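+
For instance, a minimal sketch of passing a driver option through as an activity parameter (the workload, host, and datacenter values are placeholders):
+
./nb5 run driver=cqld4 workload=<workload_yaml> hosts=<host> localdc=<datacenter> driver.basic.request.timeout='2 seconds'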
+
Backwards Compatibility with cql and cqld3
+
Many driver options were provided in a more convenient form for testing in previous CQL drivers with
+NoSQLBench. Due to the changes in driver 4.x, the implementation of these options had to change as
+well. Where possible, a backwards-compatible option helper was provided so that tests defined for
+cql and cqld3 drivers would just work with the cqld4 driver. In some cases, this simply was
+not possible as some options were no longer supported, or changed so much that there was no longer a
+direct mapping that would work consistently across versions. You can try to use the previous
+options, like pooling and so on. If the option is not supported as such, it will cause an error
+with an explanation. Otherwise, these helper options will simply set the equivalent options in the
+driver profile to achieve the same effect. As stated above, it is highly recommended that driver
+settings be captured in a configuration file and set with driverconfig=<file>.json
+
Statement Forms
+
The CQLd4 driver supports idiomatic usage of all the main statement APIs within the native Java
+driver. The syntax for specifying these types is simplified as well, using only a single
+type field which allows values of simple, prepared, raw, gremlin, fluent, and so on. The previous
+form of specifying type: cql and optional modifiers like prepared and
+parameterized is deprecated now, since all the forms are explicitly supported by a well-defined
+type name.
+
The previous form will work, but you will get a warning, as these should be deprecated going
+forward. It is best to use the forms in the examples below. The defaults and field names for the
+classic form have not changed.
+
CQLd4 Op Template Examples
+
ops:
+
+  # prepared statement
+  # allows for parameterization via bindings, and uses prepared statements internally
+  example-prepared-cql-stmt:
+    prepared: |
+      select one, two from buckle.myshoe where ...
+
+  # prepared statement (verbose form)
+  example-prepared-cql-stmt-verbose:
+    type: prepared
+    stmt: |
+      select one, two from buckle.myshoe where ...
+
+  # simple statement
+  # allows for parameterization via bindings, but does not use prepared statements internally
+  example-simple-cql-stmt:
+    simple: |
+      select three, four from knock.onthedoor where ...
+
+  # raw statement
+  # pre-renders the statement into a string, with no driver-supervised parameterization
+  # useful for testing variant DDL where some fields are not parameterizable
+  # NOTE: the raw form does its best to quote non-literals where needed, but you may
+  # have to inject single or double quotes in special cases.
+  example-raw-cql-stmt:
+    raw: |
+      create table if not exists {ksname}.{tblname} ...
+
+  # gremlin statement using the fluent API, as it would be written in a client application
+  example-fluent-graph-stmt:
+    fluent: >-
+      g.V().hasLabel("device").has("deviceid", UUID.fromString({deviceid}))
+    # if imports are not specified, the following is auto imported.
+    # if imports are specified, you must also provide the __ class if needed
+    imports:
+      - org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__
+
+  # gremlin statement using string API (not recommended)
+  example-raw-gremlin-stmt:
+    gremlin: >-
+      g.V().hasLabel("device").has("deviceid", UUID.fromString('{deviceid}'))
+
+
CQL Op Template - Optional Fields
+
If any of these are provided as op template fields or as op params, or as activity params, then they
+will have the described effect. The calls to set these parameters on an individual statement are
+only incurred if they are provided. Otherwise, defaults are used. These options can be applied to
+any of the statement forms above.
+
params:
+
+  # Set the consistency level for this statement
+  # For Astra, use only LOCAL_QUORUM
+  # Otherwise, one of
+  # ALL|EACH_QUORUM|QUORUM|LOCAL_QUORUM|ONE|TWO|THREE|LOCAL_ONE|ANY
+  cl: LOCAL_QUORUM
+  # or consistency_level: ...
+
+  # Set the serial consistency level for this statement.
+  # Note, you probably don't need this unless you are using LWTs
+  # SERIAL ~ QUORUM, LOCAL_SERIAL ~ LOCAL_QUORUM
+  scl: LOCAL_SERIAL
+  # or serial_consistency_level: ...
+
+  # Set a statement as idempotent. This is important for determining
+  # when ops can be trivially retried with no concern for unexpected
+  # mutation in the event that it succeeds multiple times.
+  # true or false
+  idempotent: false
+
+  # Set the timeout for the operation, from the driver's perspective,
+  # in seconds. "2 seconds" is the default, but DDL statements, truncate or drop
+  # statements will generally need more. If you want milliseconds, just use
+  # fractional seconds, like 0.500
+  timeout: 2.0
+
+  # Set the maximum number of allowed pages for this request before a
+  # UnexpectedPagingException is thrown.
+  maxpages: 1
+
+  # Set the LWT rebinding behavior for this statement. If set to true, then
+  # any statement result which was not applied will be retried with the
+  # conditional fields set to the currently visible values. This makes all LWT
+  # statements do another round trip of retrying (assuming the data doesn't
+  # match the preconditions) in order to test LWT performance.
+  retryreplace: true
+
+  # Set the number of retries allowed by the retryreplace option. This is set
+  # to 1 conservatively, as with the maxpages setting. This means that you will
+  # see an error if the first LWT retry after an unapplied change was not successful.
+  maxlwtretries: 1
+
+  ## The following options are meant for advanced testing scenarios only,
+  ## and are not generally meant to be used in typical application-level,
+  ## data mode, performance or scale testing. These expose properties
+  ## which should not be set for general use. These allow for very specific
+  ## scenarios to be constructed for core system-level testing.
+  ## Some of them will only work with specially provided bindings which
+  ## can provide the correct instantiated object type.
+
+  # replace the payload with a map of String->ByteBuffer for this operation
+  # type: Map<String, ByteBuffer>
+  custom_payload: ...
+
+  # set an instantiated ExecutionProfile to be used for this operation
+  # type: com.datastax.oss.driver.api.core.config.DriverExecutionProfile
+  execution_profile: ...
+
+  # set a named execution profile to be used for this operation
+  # type: String
+  execution_profile_name: ...
+
+  # set a resolved target node to be used for this operation
+  # type: com.datastax.oss.driver.api.core.metadata.Node
+  node: ...
+
+  # set the timestamp to be used as the "now" reference for this operation
+  # type: int
+  now_in_seconds: ...
+
+  # set the page size for this operation
+  # type: int
+  page_size: ...
+
+  # set the query timestamp for this operation (~ USING TIMESTAMP)
+  # type: long
+  query_timestamp:
+
+  # set the routing key for this operation, as a single bytebuffer
+  # type: ByteArray
+  routing_key: ...
+
+  # set the routing key for this operation as an array of bytebuffers
+  # type: ByteArray[]
+  routing_keys: ...
+
+  # set the routing token for this operation
+  # type: com.datastax.oss.driver.api.core.metadata.token.Token
+  routing_token: ...
+
+  # enable (or disable) tracing for this operation
+  # This should be used with great care, as tracing imposes overhead
+  # far and above most point queries or writes. Use it sparsely or only
+  # for functional investigation
+  # type: boolean
+  tracing: ...
+
+
Driver Cache
+
Like all driver adapters, the CQLd4 driver has the ability to use multiple low-level driver
+instances for the purposes of advanced testing. To take advantage of this, simply set a space
+parameter in your op templates, with a dynamic value.
+WARNING: If you use the driver cache feature, be aware that creating a large number of driver
+instances will be very expensive. Generally driver instances are meant to be initialized and then
+shared throughout the life-cycle of an application process. Thus, if you are doing multi-instance
+driver testing, it is best to use bindings functions for the space parameter which have bounded
+cardinality per host.
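+
As a minimal sketch (the binding name and statement are illustrative only), a bounded-cardinality space binding might look like this, giving at most ten client instances:
+
bindings:
+  # hypothetical binding: hashes the cycle into one of ten client instance names
+  space_id: HashRange(0,9); ToString()
+
+ops:
+  example-prepared-cql-stmt:
+    prepared: |
+      select one, two from buckle.myshoe where ...
+    space: "{space_id}"
+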
This is a diagnostic activity type. Its action simply reports the cycle number and the reporting delay between its scheduled reporting time and the current time. The reporting is interleaved between the threads, with the logical number of reports remaining constant regardless of the thread count.
NoSQLBench supports a variety of different operations. For operations like
+sending a query to a database, a native driver is typically used with the help of the
+DriverAdapter API. For basic operations, like writing the content of a templated
+message to stdout, no native driver is needed, although the mechanism of stdout
+is still implemented via the same Adapter API. In effect, if you want to
+allow NoSQLBench to understand your op templates in a new way, you add an Adapter
+and program it to interpret op templates in a specific way.
+
Each op template of an activity can be configured to use a specific adapter. The driver=...
+parameter sets the default adapter to use for all op templates in an activity. However,
+this can be overridden per op template with the driver field.
+
Discovering Driver Adapters
+
NoSQLBench comes with some drivers built-in. You can discover these by running:
+
nb5 --list-drivers
+
+
Each one comes with its own built-in documentation. It can be accessed with this command:
+
nb5 help <driver>
+
+
This section contains the per-driver documentation that you get when you run the above command.
+These driver docs are auto-populated when NoSQLBench is built, so they are exactly the same as
+you will see with the above command, only rendered in HTML.
+
External Adapter jars
+
It is possible to load an adapter from a jar at runtime. If the environment variable NBLIBDIR
+is set, it is taken as a library search path for jars, with entries separated by a colon. Each element
+of the path that exists is added to the classpath: if the element is a named .jar file, that jar is
+added; if it is a directory, then all jar files in that directory are added.
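+
A minimal sketch of loading external adapter jars this way (the paths are placeholders):
+
export NBLIBDIR=./lib:./adapters/my-adapter.jar
+nb5 --list-drivers
+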
The DynamoDB driver supports a basic set of commands as specified at
+Amazon DynamoDB Docs.
+
Activity Parameters
+
The activity parameters for this driver are properties for the underlying DynamoDB client.
+If any of these are not specified, then they are not applied to the client builder,
+and the native client default is used. An example invocation follows the parameter list below.
+
+
region - The region which the driver should connect to. This is the
+simplest way to configure the rest of the client, since defaults are
+automatically looked up for the region. This is the only option that is
+required.
+
endpoint - The endpoint for the region. Do not specify this if you have
+already specified region.
+
signing_region - The signing region for the client. You do not have
+to specify this if you specified region.
+
client_socket_timeout - adjust the default for the client session. (integer)
+
client_execution_timeout - adjust the default for the client session. (integer)
+
client_max_connections - adjust the default for the client session. (integer)
+
client_max_error_retry - adjust the default for the client session. (integer)
+
client_user_agent_prefix - adjust the default for the client session. (String)
+
client_consecutive_retries_before_throttling - adjust the default for
+the client session. (integer)
+
client_gzip - adjust the default for the client session. (boolean)
+
client_tcp_keepalive - adjust the default for the client session. (boolean)
+
client_disable_socket_proxy - adjust the default for the client session. (boolean)
+
client_so_send_size_hint - adjust the default for the client session. (integer)
+
client_so_recv_size_hint - adjust the default for the client session. (integer)
+
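A minimal sketch of an invocation (the workload file, thread, and cycle values are placeholders; region is the only required driver parameter):
+
nb5 run driver=dynamodb workload=<dynamodb_workload_yaml> region=us-east-1 threads=8 cycles=10000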
+
Op Templates
+
Specifically, the following commands are supported as of this release:
+
+
CreateTable
+
GetItem
+
PutItem
+
Query
+
DeleteTable
+
+
Examples
+
ops:
+
+  # the op name, used in logging and metrics
+  example-CreateTable:
+    # the type and target of the command
+    CreateTable: TEMPLATE(table,tabular)
+    # map of key structure for the table
+    Keys:
+      part: HASH
+      clust: RANGE
+    # attributes of the fields
+    Attributes:
+      part: S
+      clust: S
+    # either PROVISIONED or PAY_PER_REQUEST
+    BillingMode: PROVISIONED
+    # required for BillingMode: PROVISIONED
+    ReadCapacityUnits: "TEMPLATE(rcus,40000)"
+    # required for BillingMode: PROVISIONED
+    WriteCapacityUnits: "TEMPLATE(wcus,40000)"
+
+  example-PutItem:
+    # the type and target of the command
+    PutItem: TEMPLATE(table,tabular)
+    # A json payload
+    json: |
+      {
+        "part": "{part_layout}",
+        "clust": "{clust_layout}",
+        "data0": "{data0}"
+      }
+
+  example-GetItem:
+    # the type and target of the command
+    GetItem: TEMPLATE(table,tabular)
+    # the identifiers for the item to read
+    key:
+      part: "{part_read}"
+      clust: "{clust_read}"
+    ## optionally, set a projection
+    # projection: projection-spec
+    # optionally, override ConsistentRead defaults
+    ConsistentRead: true
+
+  example-Query:
+    # the type and target of the command
+    Query: TEMPLATE(table,tabular)
+    # The query key
+    key:
+      part: "{part_read}"
+      clust: "{clust_read}"
+    # optionally, override the default for ConsistentRead
+    ConsistentRead: true
+    # optionally, set a limit
+    Limit: "{limit}"
+    ## optionally, set a projection
+    # projection: projection-spec
+    ## optionally, set an exclusive start key
+    # ExclusiveStartKey: key-spec
+
+  example-DeleteTable:
+    # the type and target of the command
+    # the table identifier/name (string) to delete
+    DeleteTable: TEMPLATE(table,timeseries)
+
This driver allows you to make http requests using the native HTTP client
+that is bundled with the JVM. It supports free-form construction of
+requests.
+
You specify what a request looks like by providing a set of request
+parameters. They can be in either literal (static) form with no dynamic
+data binding, or they can each be in a string template form that draws
+from data bindings. Each cycle, a request is assembled from these
+parameters and executed.
+
Example Statements
+
The simplest possible statement form looks like this:
+
op: http://google.com/
+
+
Or, you can have a list:
+
# A list of statements
+ops:
+  - http://google.com/
+  - http://amazon.com/
+
+
Or you can template the values used in the URI, and even add ratios:
+
# A list of named statements with variable fields and specific ratios:
+ops:
+  - s1: http://google.com/search?query={query}
+    ratio: 3
+  - s2: https://www.amazon.com/s?k={query}
+    ratio: 2
+bindings:
+  query: >
+    WeightedStrings('function generator;backup generator;static generator');
+    UrlEncode();
+
+
You can even make a detailed request with custom headers and result
+verification conditions:
+
# Require that the result have a status code of 200-299 and that the body match the regex "OK, account id is .*"
+ops:
+  - get-from-google:
+      method: GET
+      uri: "https://google.com/"
+      version: "HTTP/1.1"
+      Content-Type: "application/json"
+      ok-status: "2[0-9][0-9]"
+      ok-body: "^(OK, account id is .*)$"
+
+
For those familiar with what an HTTP request looks like on the wire, the
+format below may be familiar. This isn't actually the content that is
+submitted, but it is recognized as a valid way to express the request
+parameters in a familiar and condensed form. A custom config parser makes
+this form available for those who want to emulate a well-known pattern:
+
ops:
+  - s1: |
+      GET https://google.com/ HTTP/1.1
+      Content-Type: application/json
+    ok-status: 2[0-9][0-9]
+    ok-body: ^(OK, account id is.*)$
+
+
Of course, in the above form, the response validators are still separate
+parameters.
+
Bindings
+
All request fields can be made dynamic with binding functions. To make a
+request that has all dynamic fields, you can do something like this:
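+
A minimal sketch of what such op templates might look like (the binding names scheme, host, port, path, query, and content_type are assumed to be defined under bindings and are illustrative only):
+
ops:
+  # condensed, wire-like form
+  dynamic-get-inline: |
+    GET {scheme}://{host}:{port}/{path}?{query} HTTP/1.1
+    Content-Type: {content_type}
+
+  # structured form, equivalent to the above
+  dynamic-get-fields:
+    method: GET
+    uri: "{scheme}://{host}:{port}/{path}?{query}"
+    version: HTTP/1.1
+    Content-Type: "{content_type}"
+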
The above two examples are semantically identical, only the format is
+different. Notice that the expansion of the URI is still captured in a
+field called uri, with all the dynamic pieces stitched together in the
+value. You can't use arbitrary request fields. Every request field must be
+one of (method, uri, version, body, ok-status, ok-body) or otherwise be
+capitalized to signify an HTTP header.
+
The HTTP RFCs do not require headers to be capitalized, but they are
+capitalized ubiquitously in practice, so we follow that convention here
+for clarity. Headers are in-fact case-insensitive, so any issues created
+by this indicate a non-conformant server/application implementation.
+
For URIs which are fully static (there are no dynamic fields), request
+generation will be much faster, since the request is fully built and
+cached at startup.
+
Request Fields
+
At a minimum, a URI must be provided. This is enough to build a
+request with. All other request fields are optional and have reasonable
+defaults:
+
+
uri - This is the URI that you might put into the URL bar of your
+browser. There is no default.
+Example: https://en.wikipedia.org/wiki/Leonhard_Euler
+If the uri contains a question mark '?' as a query delimiter, then all
+embedded sections which are contained within URLENCODE[[ ... ]]
+sections are preprocessed by the HTTP driver. This allows you to keep
+your test data in a recognizable form. This is done at startup, so there
+is no cost during the test run. As an added convenience, binding points
+which are within the encoded block will be preserved, so
+both https://en.wikipedia.org/URLENCODE[[wiki/]]{topic} and
+https://en.wikipedia.org/URLENCODE[[wiki/{topic}]] will yield the same
+configuration. For a terser form, you can use E[[...]]. You must also
+ensure that the values that are inserted at binding points are produced
+in a valid form for a URI. You can use the URLEncode()
+binding function where needed to achieve this.
+NOTE, if you are using dynamic values for the uri field, and
+a test value for cycle 0 includes neither URLENCODE[[ nor E[[,
+then it is skipped. You can override this with enable_urlencode: true.
+
method - An optional request method. If not provided, "GET" is
+assumed. Any method name will work here, even custom ones that are
+specific to a given target system. No validation is done for standard
+method names, as there is no way to know what method names may be valid.
+
version - The HTTP version to use. If this value is not provided,
+the default version for the Java HttpClient is used. If it is provided,
+it must be one of 'HTTP/1.1' or 'HTTP/2.0'.
+
body - The content of the request body, for methods which support
+it.
+
ok-status - An optional set of rules to verify that a response is
+valid. This is a simple comma or space separated list of integer status
+codes or a pattern which is used as a regex against the string form of a
+status code. If any characters other than digits spaces and commas are
+found in this value, then it is taken as a regex. If this is not
+provided, then any status code which is >=200 and <300 is considered
+valid.
+
ok-body - An optional regex pattern which will be applied to the
+body to verify that it is a valid response. If this is not provided,
+then content bodies are read, but any content is considered valid.
+
+
Any other statement parameter which is capitalized is taken as a request
+header. If additional fields are provided which are not included in the
+above list, or which are not capitalized, then an error is thrown.
+
Error Handling & Retries
+
By default, a request which encounters an exception is retried up to 10
+times. If you want to change this, set another value to the
+retries= activity parameters.
+
Presently, no determination is made about whether an errored
+response should be retryable, but it is possible to configure this if
+you have a specific exception type that indicates a retryable operation.
+
The HTTP driver is the first NB driver to include a completely
+configurable error handler chain. This is explained in the
+error-handlers topic. By default, the HTTP activity's error handler is
+wired to stop the activity for any error encountered.
+
SSL Support
+
SSL should work for any basic client request that doesn't need custom SSL
+configuration. If needed, more configurable SSL support will be added.
+
Client Behavior
+
TCP Sessions & Clients
+
Client instances are created for each unique space value. NoSQLBench
+provides a way for all driver adapters to instance native clients according
+to data from a binding. This is standardized under the op template parameter
+space, which is wired by default to the static value default. This means
+that each activity that uses the http driver shares a client instance across
+all threads by default. If you want to have a new http client per-thread,
+simply add a binding for space: ThreadNumToInteger() and reference it in
+an op template like space: {space}, OR use an inline op field in your op
+template like space: {(ThreadNumToInteger())}.
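+
Putting this together, a minimal op template sketch (the endpoint is a placeholder) that gives each thread its own client instance:
+
ops:
+  get-per-thread-client:
+    method: GET
+    uri: "https://example.com/"
+    space: "{(ThreadNumToInteger())}"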
+
You can use any binding function you want for the space op field. However,
+if you were to assign it something like "space: {(Identity())}" you would
+not have a good result, as you would be spinning up and caching a new http client
+instance for every single cycle.
+
Chunked encoding and web sockets
+
Presently, this driver only does basic request-response style requests.
+Thus, adding headers which take TCP socket control away from the
+HttpClient will likely yield inconsistent (or undefined)
+results. Support may be added for long-lived connections in a future
+release. However, chunked encoding responses are supported, although they
+will be received fully before being processed further. Connecting to a long-lived
+connection that streams chunked encoding responses indefinitely will have
+undefined results.
+
HTTP Activity Parameters
+
+
+
follow_redirects - default: normal - One of never, always, or
+normal. Normal redirects are those which do not redirect from HTTPS to
+HTTP.
+
+
+
diagnostics - default: none - synonym: diag
+example: diag=brief,1000 - print diagnostics for every 1000th cycle,
+including only brief details as explained below.
+
This setting is a selector for what level of verbosity you will get on
+the console. If you set this to diag=all, you'll get every request and
+response logged to console. This is only for verifying that a test is
+configured and to spot check services before running higher scale tests.
+
All the data shown in diagnostics is post-hoc, directly from the
+response provided by the internal HTTP client in the Java runtime.
+
If you want finer control over how much information diagnostics
+provides, you can specify a comma separated list of the below.
+
+
headers - show headers
+
stats - show basic stats of each request
+
data - show all of each response body
+
data10 - show only the first 10 characters of each response body
+this setting supersedes data
+
data100 - show only the first 100 characters of each response body
+this setting supersedes data10
+
data1000 - show only the first 1000 characters of each response body
+this setting supersedes data100
+
redirects - show details for interstitial requests which are made
+when the client follows a redirect directive like a location
+header
+
requests - show details for requests
+
responses - show details for responses
+
codes - shows explanatory details (high-level) of http response status codes
+
brief - Show headers, stats, requests, responses, and 10 characters
+
all - Show everything, including full payloads and redirects
+
a modulo - any number, like 3000 - causes the diagnostics to be
+reported only on this cycle modulo. If you set diag=300,brief
+then you will get the brief diagnostic output for every 300th
+response.
+
+
The requests, responses, and redirects settings work in combination.
+For example, if you specify responses and redirects, but not requests,
+then you will only see the response portion of all calls made by the
+client. All available filters layer together in this way.
+
+
+
timeout - default: forever - Sets the timeout of each request in
+milliseconds.
This JDBC driver leverages the Hikari Connection Pool for connection pooling and works with PostgreSQL®. It enables NoSQLBench-based workload generation and performance testing against any PostgreSQL-compatible database cluster, for example CockroachDB® or YugabyteDB® (YSQL API).
+
Executing JDBC Workload
+
The following is an example of invoking a JDBC workload.
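+
A minimal sketch of such an invocation (the workload file, cycle count, and connection values are placeholders):
+
<nb_cmd> run driver=jdbc workload=<jdbc_workload_yaml> threads=4 cycles=1000 url="jdbc:postgresql://" serverName=localhost portNumber=5432 databaseName=<database_name>
+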
In the above NB command, the following are JDBC driver-specific parameters:
+
+
url: URL of the database cluster. Default is jdbc:postgresql://.
+
serverName: Default is localhost.
+
portNumber: Default is 5432.
+
databaseName: The database name. The default is to connect to a database with the same name as the user name used to connect to the server.
+
+
Other NB engine parameters are straightforward:
+
+
driver: must be jdbc.
+
threads: depending on the workload type, the NB thread number determines how many clients will be created. All the clients will share the connections originating from the Hikari Connection Pool.
+
*.yaml: the NB jdbc scenario definition workload yaml file.
+
<nb_cmd>: is ./nb (using binary) or the java -jar nb5.jar.
+
+
Configuration
+
These are the main configuration types with which a query can be issued and the results processed, following the PostgreSQL® query pattern.
+
Config Sources
+
+
execute: This is to issue any DDL statements such as CREATE DATABASE|TABLE or DROP DATABASE|TABLE operations, which return nothing.
+
query: This is to issue DML statements such as a SELECT operation, which returns a ResultSet object to process.
+
update: This is to issue DML statements such as INSERT|UPDATE|DELETE operations, which return the number of rows affected by the operation.
+
+
Statement Forms
+
The syntax for specifying these types is simplified as well, using only a single type field which allows values of execute, query, & update,
+with the raw statement specified in the stmt field. Alternatively, one could use one of the types directly as the op field and provide the raw statement as its value.
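+
A minimal sketch of the three types (the table, column, and binding names are illustrative only):
+
ops:
+  # DDL via execute
+  create-table:
+    execute: |
+      CREATE TABLE IF NOT EXISTS demo_tbl (id BIGINT PRIMARY KEY, val TEXT)
+  # DML read via query
+  read-row:
+    query: |
+      SELECT id, val FROM demo_tbl WHERE id = {row_id}
+  # DML write via update
+  write-row:
+    update: |
+      INSERT INTO demo_tbl (id, val) VALUES ({row_id}, '{row_val}')
+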
Negative acknowledgement and acknowledgement timeout redelivery backoff policy
+
+
+
+
1.1. Issues Tracker
+
If you have issues or new requirements for this driver, please add them at the pulsar issues tracker.
+
2. Execute the NB Pulsar Driver Workload
+
In order to run a NB Pulsar driver workload, it follows a similar command format to other NB driver types, but it does have its own unique execution parameters. The general command has the following format:
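+
A minimal sketch of the command shape (all bracketed values are placeholders, and the exact set of driver parameters depends on the workload); the workload yaml file referenced by the command is organized into the sections described next:
+
<nb_cmd> run driver=pulsar threads=<thread_num> cycles=<cycle_count> config=<pulsar_config_properties_file> yaml=<pulsar_workload_yaml_file> service_url=<pulsar_service_url>
+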
description: This is an (optional) section in which to provide a general description of the Pulsar NB workload defined in this file.
+
bindings: This section defines all NB bindings that are required in all OpTemplate blocks
+
params: This section defines Document level configuration parameters that apply to all OpTemplate blocks.
+
blocks: This section defines the OpTemplate blocks that are needed to execute Pulsar specific workloads. Each OpTemplate block may contain multiple OpTemplates.
+
+
2.2. NB Pulsar Driver Configuration Parameters
+
The NB Pulsar driver configuration parameters can be set at 3 different levels:
+
+
Global level
+
Document level
+
+
The parameters at this level are those within a NB yaml file that impact all OpTemplates
+
+
+
Op level (or Cycle level)
+
+
The parameters at this level are those within a NB yaml file that are associated with each individual OpTemplate
+
+
+
+
Please NOTE that when a parameter is specified at multiple levels, the one at the lowest level takes precedence.
+
2.2.1. Global Level Parameters
+
The parameters at this level are those listed in the command line config properties file.
+
The NB Pulsar driver relies on Pulsar's Java Client API to complete its workloads, such as creating/deleting tenants/namespaces/topics, generating messages, creating producers to send messages, and creating consumers to receive messages. The Pulsar client API has different configuration parameters to control the execution behavior. For example, this document lists all possible configuration parameters for how a Pulsar producer can be created.
+
All these Pulsar "native" parameters are supported by the NB Pulsar driver, via the global configuration properties file (e.g. config.properties). An example of the structure of this file looks like below:
+
### Schema related configurations - MUST start with prefix "schema."
+#schema.key.type=avro
+#schema.key.definition=</path/to/avro-key-example.avsc>
+schema.type=avro
+schema.definition=</path/to/avro-value-example.avsc>
+
+### Pulsar client related configurations - MUST start with prefix "client."
+# http://pulsar.apache.org/docs/en/client-libraries-java/#client
+client.connectionTimeoutMs=5000
+client.authPluginClassName=org.apache.pulsar.client.impl.auth.AuthenticationToken
+client.authParams=
+# ...
+
+### Producer related configurations (global) - MUST start with prefix "producer."
+# http://pulsar.apache.org/docs/en/client-libraries-java/#configure-producer
+producer.sendTimeoutMs=
+producer.blockIfQueueFull=true
+# ...
+
+### Consumer related configurations (global) - MUST start with prefix "consumer."
+# http://pulsar.apache.org/docs/en/client-libraries-java/#configure-consumer
+consumer.subscriptionInitialPosition=Earliest
+consumer.deadLetterPolicy={"maxRedeliverCount":"5","retryLetterTopic":"public/default/retry","deadLetterTopic":"public/default/dlq","initialSubscriptionName":"dlq-sub"}
+consumer.ackTimeoutRedeliveryBackoff={"minDelayMs":"10","maxDelayMs":"20","multiplier":"1.2"}
+# ...
+
+
There are multiple sections in this file that correspond to different
+categories of the configuration parameters:
+
+
Pulsar Schema related settings:
+
+
All settings under this section start with the schema. prefix.
+
At the moment, there are 3 schema types supported
+
+
Default raw byte[]
+
Avro schema for the message payload
+
KeyValue based Avro schema for both message key and message payload
+
+
+
+
+
Pulsar Client related settings:
+
+
All settings under this section start with the client. prefix.
+
This section defines all configuration parameters that are related to defining a PulsarClient object.
+
For the Pulsar NB driver, Document level parameters can only be statically bound; and currently, the following Document level configuration parameters are supported (a minimal yaml sketch follows the list):
+
+
async_api (boolean):
+
+
When true, use async Pulsar client API.
+
+
+
use_transaction (boolean):
+
+
When true, use Pulsar transaction.
+
+
+
admin_delop (boolean):
+
+
When true, delete Tenants/Namespaces/Topics. Otherwise, create them.
+
Only applicable to administration related operations
+
+
+
seq_tracking (boolean):
+
+
When true, a sequence number is created as part of each message's properties
+
This parameter is used in conjunction with the next one in order to simulate abnormal message processing errors and then be able to detect such errors successfully.
+
+
+
seqerr_simu:
+
+
A list of error simulation types separated by comma (,)
+
Valid error simulation types
+
+
out_of_order: simulate message out of sequence
+
msg_loss: simulate message loss
+
msg_dup: simulate message duplication
+
+
+
+
+
e2e_starting_time_source:
+
+
Starting timestamp for end-to-end operation. When specified, will update the e2e_msg_latency histogram with the calculated end-to-end latency. The latency is calculated by subtracting the starting time from the current time. The starting time is determined from a configured starting time source. The unit of the starting time is milliseconds since epoch.
+
The possible values for e2e_starting_time_source:
+
+
message_publish_time : uses the message publishing timestamp as the starting time
+
message_event_time : uses the message event timestamp as the starting time
+
message_property_e2e_starting_time : uses a message property e2e_starting_time as the starting time.
+
+
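A minimal document-level params sketch (the values shown are illustrative only):
+
params:
+  async_api: "true"
+  use_transaction: "false"
+  seq_tracking: "true"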
+
+
+
+
3. NB Pulsar Driver OpTemplates
+
For the NB Pulsar driver, each OpTemplate has the following format:
Its value is mandatory and, depending on the actual identifier, can be one of the following:
+
+
Tenant name: for AdminTenant type
+
Namespace name: for AdminNamespace type, in the format "<tenant>/<namespace>"
+
Topic name: for the rest of the types, in the format [(persistent|non-persistent)://]<tenant>/<namespace>/<topic>,
+and is mandatory for each NB Pulsar operation type
+
+
Each Pulsar OpType may have optional Op specific parameters. Please refer to here for the example NB Pulsar YAML files for each OpType
+
4. Message Generation and Schema Support
+
4.1. Message Generation
+
A Pulsar message has three main components: message key, message properties, and message payload. Among them, message payload is mandatory when creating a message.
+
When running the "message producing" workload, the NB Pulsar driver is able to generate a message with its full content via the following OpTemplate level parameters:
+
+
msg_key: defines message key value
+
msg_property: defines message property values
+
msg_value: defines message payload value
+
+
Their actual values can be static or dynamic (as determined by NB data binding rules)
+
For msg_key, its value can be either
+
+
a plain text string, or
+
a JSON string that follows the specified "key" Avro schema (when KeyValue schema is used)
+
+
For msg_property, its value needs to be a JSON string that contains a list of key-value pairs. An example is shown below. Please NOTE that if the provided value is not a valid JSON string, the NB Pulsar driver will ignore it and treat the message as having no properties.
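+
A hypothetical sketch of such a value (the property names and values are placeholders):
+
{
+  "property1": "value1",
+  "property2": "value2"
+}
+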
For msg_value, its value can be either
+
+
a plain text string, or
+
a JSON string that follows the specified "value" Avro schema (when Avro schema or KeyValue schema is used)
+
+
4.2. Schema Support
+
The NB Pulsar driver supports the following Pulsar schema types:
+
+
Primitive schema types
+
Avro schema type (only for message payload - msg_value)
+
KeyValue schema type (with both key and value follows an Avro schema)
+
+
The following 2 global configuration parameters define the required schema type
+
+
schema.key.type: defines message key type
+
schema.type: defines message value type
+For both, if the parameter value is not specified, the default byte[]/BYTES type is used as the schema type. Otherwise, if it is specified as "avro", Avro is used as the schema type.
+
+
The following 2 global configuration parameters define the schema specification (ONLY needed when Avro is the schema type)
+
+
schema.key.definition: a file path that defines the message key Avro schema specification
+
schema.definition: a file path that defines the message value Avro schema specification
+The NB Pulsar driver will throw an error if the schema type is Avro but the schema specification definition file is not provided or is not valid.
This driver is similar to the NB Pulsar driver in that it allows NB-based workload generation and performance testing against a Pulsar cluster. It also follows a similar pattern to configure and connect to the Pulsar cluster for workload execution.
+
However, the major difference is that instead of simulating native Pulsar client workloads, the NB S4J driver allows simulating JMS oriented workloads (following JMS spec 2.0 and 1.1) to be executed on the Pulsar cluster. Under the hood, this is achieved through DataStax's [Starlight for JMS API](https://github.com/datastax/pulsar-jms).
+
2. Execute NB S4J Workload
+
The following is an example of executing a NB S4J workload (defined as pulsar_s4j.yaml)
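+
A minimal sketch of such a command (bracketed values are placeholders):
+
<nb_cmd> run driver=s4j yaml=pulsar_s4j.yaml config=<s4j_config_properties_file> threads=<thread_num> cycles=<cycle_num> num_conn=2 num_session=2 web_url=<pulsar_web_svc_url> service_url=<pulsar_svc_url>
+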
In the above NB CLI command, the S4J driver specific parameters are listed as below:
+
+
num_conn: the number of JMS connections to be created
+
num_session: the number of JMS sessions per JMS connection
+
+
Note that multiple JMS sessions can be created from one JMS connection, and they share the same connection characteristics.
+
+
+
session_mode: the session mode used when creating a JMS session
+
web_url: the URL of the Pulsar web service
+
service_url: the URL of the Pulsar native protocol service
+
(optional) strict_msg_error_handling: whether to do strict error handling
+
+
when true, Pulsar client error will not stop NB S4J execution
+
otherwise, any Pulsar client error will stop NB S4J execution
+
+
+
(optional) max_s4jop_time: maximum time (in seconds) to execute the actual S4J operations (e.g. message sending or receiving). If NB execution time is beyond this limit, each NB cycle is just a no-op. Please NOTE:
+
+
this is useful when controlled NB execution is needed with NB CLI scripting.
+
if this parameter is not specified or the value is 0, it means no time limitation. Every single NB cycle will trigger an actual S4J operation.
+
+
+
(optional) track_msg_cnt: When set to true (with default as false), the S4J driver will keep track of the confirmed response count for message sending and receiving.
+
+
Other NB engine parameters are straightforward:
+
+
driver: must be s4j
+
threads: depending on the workload type, the NB thread number determines how many producers or consumers will be created. All producers or consumers will share the available JMS connections and sessions
+
yaml: the NB S4J scenario definition yaml file
+
config: specify the file that contains the connection parameters used by the S4J API
+
+
3. NB S4J Driver Configuration Parameter File
+
The S4J API has a list of configuration options that can be found here: https://docs.datastax.com/en/fast-pulsar-jms/docs/1.1/pulsar-jms-reference.html#_configuration_options.
+
The NB S4J driver supports these configuration options via a config property file, an example of which is listed below. The configuration parameters in this file are grouped into several groups. The comments below explain how the grouping works.
+
###########
+# Overview: Starlight for JMS (S4J) API configuration items are listed at:
+# https://docs.datastax.com/en/fast-pulsar-jms/docs/1.1/pulsar-jms-reference.html#_configuration_options
+enableTransaction=true
+
+####
+# S4J API specific configurations (non Pulsar specific) - jms.***
+
+jms.enableClientSideEmulation=true
+jms.usePulsarAdmin=false
+#...
+
+#####
+# Pulsar client related configurations - client.***
+# - Valid settings: http://pulsar.apache.org/docs/en/client-libraries-java/#client
+#
+# - These Pulsar client settings (without the "client." prefix) will be
+# directly used as S4J configuration settings, on a 1-to-1 basis.
+#--------------------------------------
+# only relevant when authentication is enabled
+client.authPlugin=org.apache.pulsar.client.impl.auth.AuthenticationToken
+client.authParams=file:///path/to/authentication/jwt/file
+# only relevant when in-transit encryption is enabled
+client.tlsTrustCertsFilePath=/path/to/certificate/file
+#...
+
+#####
+# Producer related configurations (global) - producer.***
+# - Valid settings: http://pulsar.apache.org/docs/en/client-libraries-java/#configure-producer
+#
+# - These Pulsar producer settings (without "producer." prefix) will be collectively (as a map)
+# mapped to S4J connection setting of "producerConfig"
+#--------------------------------------
+producer.blockIfQueueFull=true
+# disable producer batching
+#producer.batchingEnabled=false
+#...
+
+#####
+# Consumer related configurations (global) - consumer.***
+# - Valid settings: http://pulsar.apache.org/docs/en/client-libraries-java/#configure-consumer
+#
+# - These Pulsar producer settings (without "consumer." portion) will be collectively (as a map)
+# mapped to S4J connection setting of "consumerConfig"
+#--------------------------------------
+#...
+
+
4. NB S4J Scenario Definition File
+
Like any NB scenario yaml file, the NB S4J yaml file is composed of 3 major components:
+
+
bindings: define NB bindings
+
params: define document level parameters
+
blocks: define various statement blocks. Each statement block represents one JMS workload type

4.1. Document Level Parameters

The parameters defined in this section apply to all statement blocks. An example of some common parameters that can be set at the document level is listed below:
+
+
temporary_dest: whether JMS workload is dealing with a temporary destination
+
dest_type: JMS destination type - queue or topic
+
+
params:
+ temporary_dest: "false"
+ dest_type: "<jms_destination_type>"
+ async_api: "true"
+ txn_batch_num: <number_of_message_ops_in_one_transaction>
+ blocking_msg_recv: <whether_to_block_when_receiving_messages>
+ shared_topic: <if_shared_topic_or_not> // only relevant when the destination type is a topic
+ durable_topic: <if_durable_topic_or_not> // only relevant when the destination type is a topic
+
+
Please NOTE that the above parameters don't have to be specified at the document level. If they're specified at the statement level, they only impact the statements within which they're specified.
+
4.2. NB S4J Workload Types
+
The NB S4J driver supports 2 types of JMS operations:
+
+
One for message producing/sending/publishing
+
+
this is identified by NB Op identifier MessageProduce
+
+
+
One for message consuming/receiving/subscribing
+
+
this is identified by NB Op identifier MessageConsume
+
+
+
+
4.2.1. Publish Messages to a JMS Destination, Queue or Topic
+
The NB S4J statement block for publishing messages to a JMS destination (either a queue or a topic) has the following format.
+
+
Optionally, you can specify the JMS headers (msg_header) and properties (msg_property) via valid JSON strings in key: value format.
+
The default message type (msg_type) is "byte". But optionally, you can specify other message types such as "text", "map", etc.
+
The message payload (msg_body) is the only mandatory field.
+
+
blocks:
  msg-produce-block:
    ops:
      op1:
        ## The value represents the destination (queue or topic) name
        MessageProduce: "mys4jtest_t"

        ## (Optional) JMS headers (in JSON format).
        msg_header: |
          {
            "<header_key>": "<header_value>"
          }

        ## (Optional) JMS properties, predefined or customized (in JSON format).
        msg_property: |
          {
            "<property1_key>": "<property_value1>",
            "<property2_key>": "<property_value2>"
          }

        ## (Optional) JMS message types, default to be BYTES.
        msg_type: "text"

        ## (Mandatory) JMS message body. Value depends on msg_type.
        msg_body: "{mytext_val}"
+
+
4.2.2. Receiving Messages from a JMS Destination, Queue or Topic
+
The generic NB S4J statement block for receiving messages from a JMS destination (either a queue or a topic) has the following format. All the statement-specific parameters are listed below.
+
+
msg_selector: Message selector string
+
no_local: Only applicable to a Topic as the destination. This allows a subscriber to inhibit the delivery of messages published by its own connection.
+
read_timeout: The timeout value for receiving a message from a destination
+
+
This setting only works if no_wait is false
+
If the read_timeout value is 0, it behaves the same as no_wait is true
+
+
+
no_wait: Whether to receive the next message immediately if one is available
+
msg_ack_ratio: the ratio of the received messages being acknowledged
+
slow_ack_in_sec: whether to simulate a slow consumer (pause before acknowledging after receiving a message)
+
+
value 0 means no simulation (consumer acknowledges right away)
+
+
+
negative ack/ack timeout/deadletter topic related settings
+
+
The settings here (as the scenario specific settings) will be merged with the
+
global settings in s4j_config.properties file
+
+
+
+
blocks:
  msg-consume-block:
    ops:
      op1:
        ## The value represents the destination (queue or topic) name
        MessageConsume: "mys4jtest_t"

        ## (Optional) client side message selector
        msg_selector: ""

        ## (Optional) No Local
        no_local: "true"

        ## (Optional) Read Timeout
        read_timeout: "10"

        ## (Optional) Receive message without wait
        no_wait: "true"

        ## (Optional) Message acknowledgement ratio
        msg_ack_ratio: "0.5"

        ## (Optional) Simulate slow consumer acknowledgement
        # must be a non-negative number; negative numbers will be treated as 0
        # 0 - means no simulation
        # positive value - the number of seconds to pause before acknowledgement
        slow_ack_in_sec: "0"

        #####
        ## (Optional) Statement level settings for Consumer
        #
        ## AckTimeout value (at least 1 second)
        consumer.ackTimeoutMillis: 1000

        ## DLQ policy
        consumer.deadLetterPolicy: '{ "maxRedeliverCount": "2" }'

        ## NegativeAck Redelivery policy
        consumer.negativeAckRedeliveryBackoff: |
          {
          }

        ## AckTimeout Redelivery policy
        consumer.ackTimeoutRedeliveryBackoff: |
          {
            "minDelayMs": "10",
            "maxDelayMs": "20",
            "multiplier": "1.2"
          }
+
This is an activity type which allows for the generation of data to stdout or a file. It reads the standard nosqlbench YAML format. It can read YAML activity files for any activity type that uses the curly brace token form in statements.
+
Example activity definitions
+
Run a stdout activity named 'stdout-test', with definitions from activities/stdout-test.yaml
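For instance (alias and cycle values here are illustrative assumptions), such an activity might be started as:

... driver=stdout alias=stdout-test yaml=stdout-test

... driver=stdout alias=stdout-test yaml=stdout-test cycles=10

... driver=stdout alias=stdout-test yaml=stdout-test cycles=10..20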
This last example shows that the cycle range is [inclusive..exclusive),
+to allow for stacking test intervals. This is standard across all
+activity types.
+
stdout ActivityType Parameters
+
+
filename - this is the name of the output file
+(defaults to "stdout", which actually writes to stdout, not the filesystem)
+
newline - whether to automatically add a missing newline to the end
+of any statements.
+default: true
+
format - which format to use. If provided, the format will override any statement formats provided by the YAML.
+valid values are (csv, readout, json, inlinejson, assignments, and diag)
+
+
When 'format=diag', then the internal construction logic for the binding is logged in detail and nosqlbench exits.
+This is useful for detailed diagnostics when you run into trouble, but not generally otherwise. This provides
+details that you may include in a bug report if you think there is a bindings bug.
+
+
+
bindings - This is a simple way to specify a filter for the names of bindings that you want to use.
+If this is 'doc', then all the document level bindings are used. If it is any other value, it is taken
+as a pattern (regex) to subselect a set of bindings by name. You can simply use the name of a binding
+here as well.
+default: doc
+
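For example (the workload name and the binding name pattern are placeholders), to render only the bindings whose names start with 'user', formatted as JSON:

... driver=stdout yaml=stdout-test format=json bindings=user.*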
+
Configuration
+
This activity type uses the uniform yaml configuration format.
+For more details on this format, please refer to the
+Standard YAML Format
+
Configuration Parameters
+
+
newline - If a statement has this param defined, then it determines
+whether or not to automatically add a missing newline for that statement
+only. If this is not defined for a statement, then the activity-level
+parameter takes precedence.
+
+
Statement Format
+
The statement format for this activity type is a simple string. Tokens between
+curly braces are used to refer to binding names, as in the following example:
+
ops:
  op1: "It is {minutes} past {hour}."
+
+
If you want to suppress the trailing newline that is automatically added, then
+you must either pass newline=false as an activity param, or specify it
+in the statement params in your config as in:
+
ops:
  op1:
    stmt: "It is {minutes} past {hour}."
    newline: false
+
+
Auto-generated statements
+
If no statement is provided, then the defined binding names are used as-is
+to create a CSV-style line format. The values are concatenated with
+comma delimiters, so a set of bindings like this:
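For example (binding names here are illustrative assumptions), a workload template containing only bindings such as:

bindings:
  alpha: Identity()
  beta: NumberNameToString()

would emit one line per cycle as if the statement template were "{alpha},{beta}".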
tcpclient acts like a client push version of stdout over TCP
+
The tcpclient driver is based on the behavior of the stdout driver. You configure the tcpclient driver in exactly the
+same way as the stdout driver, except for the additional parameters shown here.
+
The tcpclient driver connects to a configured host and port (a socket address). When a server is listening on that socket,
+then the data for each cycle is written via the socket just like stdout would write.
+
Examples
+
Run a stdout activity named 'stdout-test', with definitions from activities/stdout-test.yaml
+
... driver=tcpclient yaml=stdout-test
+
+
Driver Parameters
+
+
+
retry_delay - The internal retry frequency at which the internal cycle loop will attempt to add data to the
+buffer. This applies when the internal buffer is full and no clients are consuming data from it.
+
+
unit: milliseconds
+
default: 1000
+
dynamic: false
+
+
+
+
retries - The number of retries which the internal cycle loop will attempt before marking a row of output as
+failed.
+
+
default: 3
+
dynamic: false
+
+
+
+
ssl - boolean to enable or disable ssl
+
+
default: false
+
dynamic: false
+
+
To enable, specify the type of the SSL implementation with either jdk or openssl.

See the ssl help topic (nb5 help ssl) for more details.
+
+
+
host - this is the name or address of the host to connect to
+
+
default: localhost
+
dynamic: false
+
+
+
+
port - this is the port to connect to
+
+
default: 12345
+
dynamic: false
+
+
+
+
capacity - the size of the internal blocking queue
+
+
default: 10
+
unit: lines of output
+
dynamic: false
+
+
+
+
Statement Format
+
Refer to the help for the stdout driver for details.
tcpserver acts like a server push version of stdout over TCP
+
The tcpserver driver is based on the behavior of the stdout driver. You configure the tcpserver driver in exactly the
+same way as the stdout driver, except for the additional parameters shown here.
+
The tcpserver driver listens on a configured host and port (a socket address). When any clients are connected, the
+internal queue is buffered to them as long as there is data in it. For each cycle of data in the internal buffer, one of
+the connected clients will get it in unspecified order.
+
The driver activity will block as long as there are still messages in the queue (at most capacity entries). To ensure that the queue is emptied and the activity shuts down correctly, make sure that a client connects to the server to receive the queued messages.
+
If the buffer is primed with data when a client connects, it will get all of the data at once. After this, data is
+added to the buffer at whatever cyclerate the activity is configured for. If you add data to the buffer faster than you
+can consume it with connected clients, you will have a number of failed operations.
+
However, the opposite is not true. You should generally ensure that you can consume the data as fast as you provide it,
+and the error counts give you a relatively easy way to verify this. If you wish to disable this behavior, set the
+retries to a very high value. In this case, the tries metric will still give you some measure of internal buffer
+saturation.
+
Examples
+
Run a stdout activity named 'stdout-test', with definitions from activities/stdout-test.yaml
+
... driver=tcpserver yaml=stdout-test
+
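Because the activity blocks until the queued lines are consumed, a simple TCP client should be attached to the configured socket address; as an illustrative sketch (assuming the default localhost:12345 and that netcat is available):

nc localhost 12345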
+
Driver Parameters
+
+
+
retry_delay - The internal retry frequency at which the internal cycle loop will attempt to add data to the
+buffer. This applies when the internal buffer is full and no clients are consuming data from it.
+
+
unit: milliseconds
+
default: 1000
+
dynamic: false
+
+
+
+
retries - The number of retries which the internal cycle loop will attempt before marking a row of output as
+failed.
+
+
default: 3
+
dynamic: false
+
+
+
+
ssl - boolean to enable or disable ssl
+
+
default: false
+
dynamic: false
+
+
To enable, specify the type of the SSL implementation with either jdk or openssl.

See the ssl help topic (nb5 help ssl) for more details.
+
+
+
host - this is the name to bind to (local interface address)
+
+
default: localhost
+
dynamic: false
+
+
+
+
port - this is the port to listen on
+
+
default: 12345
+
dynamic: false
+
+
+
+
capacity - the size of the internal blocking queue
+
+
default: 10
+
unit: lines of output
+
dynamic: false
+
+
+
+
Statement Format
+
Refer to the help for the stdout driver for details.
Op templates are the recipes provided by users for an operation. These hold examples of payload
+data, metadata that configures the driver, timeout settings and so on.
+
The field name used in workload templates to represent operations can often be symbolic to users.
+For this reason, several names are allowed: ops, op, operations, statements, statement. It doesn't
+matter whether the value is provided as a map, list, or scalar. These all allow for the same
+level of templating. Map forms are preferred, since they include naming in a more streamlined
+structure. When you use list form, you have to provide the name as a separate field.
+
A name is automatically provided by the API when there is one missing.
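For example (a sketch based on the description above; the statement text is arbitrary), the map form carries the name in the key:

ops:
  op1: "example statement"

while the equivalent list form needs a separate name field:

ops:
  - name: op1
    stmt: "example statement"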
All the forms above merely show how you can structure op templates into common collection forms and
+have them be interpreted in a flexible yet obvious way.
+
However, all the properties described in templated_workloads.md
+can be attached directly to op templates too. This section contains a few examples to illustrate
+this at work.
+
detailed op template example
+
yaml:
+
ops:
  op1:
    name: special-op-name
    op: select * from ks1.tb1;
    bindings:
      binding1: NumberNameToString();
    tags:
      block: schema
    params:
      prepared: false
    description: This is just an example operation
+
+
json:
+
+{
+ "ops":{
+ "op1":{
+ "bindings":{
+ "binding1":"NumberNameToString();"
+ },
+ "description":"This is just an example operation",
+ "name":"special-op-name",
+ "op":"select * from ks1.tb1;",
+ "params":{
+ "prepared":false
+ },
+ "tags":{
+ "block":"schema"
+ }
+ }
+ }
+}
+
+
ops:
+
+[
+ {
+ "bindings":{
+ "binding1":"NumberNameToString();"
+ },
+ "description":"This is just an example operation",
+ "name":"block0--special-op-name",
+ "op":{
+ "stmt":"select * from ks1.tb1;"
+ },
+ "params":{
+ "prepared":false
+ },
+ "tags":{
+ "block":"schema",
+ "name":"block0--special-op-name",
+ "block":"block0"
+ }
+ }
+]
+
+
Property Layering
+
Properties that are provided at the top (doc) level become defaults for each nested layer (block or
+ops). Each named binding, param, or tag is automatically assigned to any contained layers which do
+not have one of the same name. When two layers contain the same named binding, param or tag, the
+inner-most scope decides the value seen at the op level.
+
block-level defaults and overrides
+
yaml:
+
tags:
  docleveltag: is-tagging-everything # applies to all operations in this case

bindings:
  binding1: Identity(); # will be overridden at the block level

params:
  prepared: true # set prepared true by default for all contained op templates

blocks:
  block-named-fred:
    bindings:
      binding1: NumberNameToString();
    tags:
      block: schema
    params:
      prepared: false
    description: This is just an example operation
    ops:
      op1:
        name: special-op-name
        op: select * from ks1.tb1;
+
+
This directory contains the testable specification for workload definitions used by NoSQLBench.
+All the content blocks in this section have been validated with the latest NoSQLBench build.
+
Usually, users will not need to delve too deeply into this section. It is useful as a detailed
+guide for contributors and driver developers. If you are using a driver which leaves you
+wondering what a good op template example looks like, then the driver needs better examples in
+its documentation!
+
Synopsis
+
There are two primary views of workload definitions that we care about:
+
+
The User View of op templates
+
+
Op templates are simply the schematic recipes for building an operation once you know the
+cycle it is for.
+
Op templates are provided by users in YAML or JSON or even directly via runtime API. This
+is called a workload template, which contains op templates.
+
Op templates can be provided with optional metadata which serve to label, group,
+parameterize or otherwise make the individual op templates more manageable.
+
A variety of forms are supported which are self-evident, but which allow users to have some flexibility in how they structure their YAML, JSON, or runtime collections. This specification is about how these various forms are allowed, and how they relate to a fully-qualified and de-normalized op template view.
+
+
+
The Developer View of the ParsedOp API. This is the view of an op template which presents the
+developer with a very high-level toolkit for building op synthesis functions.
+
+
Details
+
The documentation in this directory serves as a testable specification for all the above. It
+shows specific examples of all the valid op template forms in both YAML and JSON, as well as how
+the data is normalized to feed developer's view of the ParsedOp API.
+
Related Reading
+
If you want to understand the rest of this document, it is crucial that you have a working knowledge
+of the standard YAML format and several examples from the current drivers. You can learn this from
+the main documentation which demonstrates step-by-step how to build a workload. Reading further in
+this document will be most useful for core NB developers, or advanced users who want to know all
+the possible ways of building workloads.
All or part of a templated workload in yaml format.
+
The JSON equivalent as it would be loaded. This is cross-checked against the result of parsing
+the yaml into data.
+
The Workload API view of the same data rendered as a JSON data structure. This is cross-checked
+against the workload API's rendering of the loaded data.
+
+
To be matched by the testing layer, you must prefix each section with a format marker with emphasis,
+like this:
+
format:
+
body of example
+
+
Further, to match the pattern above, these must occur in sequences like the following, with no other
+intervening content. If the second fenced code section is a JSON array, then each object within
+it is compared pair-wise with the yaml structure as in a multi-doc scenario. The following
+example is actually tested along with the non-empty templates. It is valid because the second
+block is in array form, and thus compares 0 pair-wise elements.
+
yaml:
+
# some yaml here
+
+
json:
+
+[]
+
+
ops:
+
+[]
+
+
The above sequence of 6 contiguous markdown elements follows a recognizable pattern to the
+specification testing harness. The names above the sections are required to match and fenced
+code sections are required to follow each.
+
All the markdown files in this directory are loaded and scanned for this pattern, and all
+such sequences are verified each time NoSQLBench is built.
params - decorates operations with special configurations
+
tags - describes elements for filtering and grouping
+
op, ops, operations, statement, statements - defines op templates
+
blocks - groups any or all elements
+
+
+
Description
+
zero or one description fields:
+
The first line of the description represents the summary of the description in summary views.
+Otherwise, the whole value is used.
+
yaml:
+
description: |
  summary of this workload
  and more details
+
+
json:
+
+{
+ "description":"summary of this workload\nand more details\n"
+}
+
+
ops:
+
+[]
+
+
+
Scenarios
+
zero or one scenarios fields, containing one of the following forms
+
The way that you create macro-level workloads from individual stages is called named scenarios in
+NB. These are basically command line templates which can be invoked automatically by calling their
+name out on your command line. More details on their usage are in the workload construction guide.
+We're focused merely on the structural rules here.
For scenario steps which should not be overridable by user parameters on the command line, a double
+equals is used to lock the values for a given step without informing the user that their provided
+value was ignored. This can be useful in cases where there are multiple steps and some parameters
+should only be changeable for some steps.
+
yaml:
+
# The user is not allowed to change the value of the alias parameter; any value they provide
# for it is silently ignored rather than halting the scenario.
scenarios:
  default: run alias==first driver=diag cycles=10
+
For scenario steps which should not be overridable by user parameters on the command line, a triple
+equals is used to indicate that changing these parameters is not allowed. If a user tries to
+override a verbose locked parameter, an error is thrown and the scenario is not allowed to run. This
+can be useful when you want to clearly indicate that a parameter must remain as it is.
+
yaml:
+
# The user is not allowed to change the value for the alias parameter, and attempting to do so
# will cause an error to be thrown and the scenario halted.
scenarios:
  default: run alias===first driver=diag cycles=10
+
zero or one bindings fields, containing a map of named bindings recipes
+
Bindings are the functions which synthesize data for your operations. They are specified in recipes
+which are just function chains from the provided libraries.
zero or one params fields, containing a map of parameter names to values
+
Params are modifiers to your operations. They specify important details which are not part of the
+operation's command or payload, like consistency level, or timeout settings.
zero or one tags fields, containing a map of tag names and values
+
Tags are how you mark your operations for special inclusion into tests. They are basically naming
+metadata that lets you filter what type of operations you actually use. Further details on tags are
+in the workload construction guide.
+
yaml:
+
tags:
  block: main
+
+
json:
+
+{
+ "tags":{
+ "block":"main"
+ }
+}
+
+
ops:
+
+[]
+
+
+
Blocks
+
Blocks are used to logically partition a workload for the purposes of grouping, configuring or
+executing subsets and op sequences. Blocks can contain any of the defined elements above.
+Every op template within a block automatically gets a tag with the name 'block' and the value of
+the block name. This makes it easy to select a whole block at a time with a tag filter like
+tags=block:"schema.*".
+
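For instance (the workload file name here is a placeholder), selecting only the schema block from the command line might look like:

./nb5 run driver=stdout yaml=my_workload.yaml tags=block:"schema.*"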
Blocks are not recursive. You may not put a block inside another block.
All documents, blocks, and ops within a workload can have an assigned name. When map and list forms
+are both supported for entries, the map form provides the name. When list forms are used, an
+additional field named name can be used.
This document is focused on the basic properties that can be added to a templated workload. To see
+how they are combined together, see Op Templates Basics.
+
Payloads in NoSQLBench op templates can be of any type that you can create from bindings, string
+templates, data structure, or any combination thereof. Yet, the recipe format for these bindings is
+simple and direct. If you know what you want the op to do, then one of the supported forms here
+should get you there.
+
The payload of an operation is handled as any content which is assigned to the op field in the
+uniform op structure.
+
assigned ops as string
+
When you are using the command line, it is possible to specify op=... instead of providing a workload
+path. When you do that, a workload description is automatically derived for you on-the-fly.
+
nb run driver=stdout op='cycle number {{NumberNameToString}}'
+
The equivalent structure has the same effect as if you had created a workload description like this:
+
yaml:
+
ops:"cycle number '{{NumberNameToString}}'"
+
+
json:
+
+{
+ "ops":"cycle number '{{NumberNameToString}}'"
+}
+
A preferable way to add ops to a structure is in map form. This is also covered elsewhere, but it is important for examples further below, so we'll refresh it here:
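As a quick sketch (op names and statement text are placeholders), the map form looks like:

ops:
  op1: "statement one"
  op2: "statement two"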
When your operation takes on a non-statement form, you simply provide a map structure at the
+top-level of the op. Notice that the structurally normalized form shows the field values moved
+underneath the canonical 'op' field by default. This is because the structurally normalized
+op template form always has a map in the op field. The structural short-hand for creating an
+op template that is list based simply moves any list entries at the op template level down in
+the named 'op' field for you.
From the examples above, it's clear that you can use string and maps in your op fields. Here, we
+show how you can use arbitrary levels of structured types based on commodity collection types like
+List and Map for Java and JSON objects and arrays.
You can use either named binding point references like {userid} or binding definitions such
+as {{Template('user-{}',ToString())}}. The only exception to this rule is that you may not
+(yet) use dynamic values for keys within your structure.
params field at op field level is not treated as special
+
When you put your params within the op fields by name, alongside the other op fields, it is not
+treated specially. This is not disallowed, as there may be scenarios where this is otherwise a valid
+value. Further, params within the op field would not provide any benefit over simply having those
+named values in the op field directly, as this is consulted first for dynamic and static values.
params field at op name level is not treated as special
+
When you are using map-based op template names, and one of them has a name of 'params', it is treated
+just as any other op template name. The fields will not be recognized as param names and values,
+but as op template fields.
In the workload template examples, we show statements as being formed from a string value. This is a
+specific type of statement form, although it is possible to provide structured op templates as well.
+
The ParsedOp API is responsible for converting all valid op template forms into a consistent and
+unambiguous model. Thus, the rules for mapping the various forms to the command model must be
+precise. Those rules are the substance of this specification.
+
Op Synthesis
+
Executable operations are created on the fly by NoSQLBench via a process called Op Synthesis.
+This is done incrementally in stages. The following narrative describes this process in logical
+stages. (The implementation may vary from this, but it explains the effects, nonetheless.)
+
Everything here happens during activity initialization, before the activity starts running
+cycles:
+
+
Template Variable Expansion - If there are template variables, such as
+TEMPLATE(name,defaultval) or <<name:defaultval>>, then these are expanded
+according to their defaults and any overrides provided in the activity params. This is a macro
+substitution only, so the values are simply interposed into the character stream of the document.
+
Jsonnet Evaluation - If the source file was in jsonnet format (the extension was .jsonnet)
+then it is interpreted by sjsonnet, with all activity parameters available as external variables.
+
Structural Normalization - The workload template (yaml, json, or data structure) is loaded
+into memory and transformed into a standard format. This means taking various list and map
+forms at every level and converting them to a singular standard form in memory.
+
Auto-Naming - All elements which do not already have a name are assigned a simple name like
+block2 or op3.
+
Auto-Tagging - All op templates are given standard tag values under reserved tag names:
+
+
block: the name of the block containing the op template. For example: block2.
+
name: the name of the op template, prefixed with the block value and --. For example,
+block2--op1.
+
+
+
Property De-normalization - Default values for all the standard op template properties are
+copied from the doc to the block layer unless the same-named key exists. Then the same
+method is applied from the doc layer to the op template layer. At this point, the op
+templates are effectively an ordered list of data structures, each containing all necessary
+details for use.
+
Tag Filtering - The activity's tag param is used to filter all the op templates
+according to their tag map.
+
Bind Point and Capture Points - Each op template is now converted into a ParsedOp, which is
+a swiss-army knife of op template introspection and function generation. It is the direct
+programmatic API that driver adapters use in subsequent steps.
+
+
Any string sequence containing bind points, like "this has a {bindpoint}", is automatically
converted to a long -> string function.
+
Any direct references with no surrounding text like {bindpoint} are automatically
+converted to direct binding references.
+
Any other string form is cached as a static value.
+
The same process is applied to Lists and Maps, allowing structural templates which read
+like JSON with bind points in arbitrary places.
+
+
+
Op Mapping - Using the ParsedOp API, each op template is categorized by the active driver
+according to that driver's documented examples and type-matching rules. Once the op mapper
+determines what op type a user intended, it uses this information and the associated op
+fields to create an Op Dispenser.
+
Op Sequencing - The op dispensers are kept as an internal sequence, and installed into a
+LUT according to their ratios and the specified
+(or default) sequencer. By default, round-robin with bucket exhaustion is used. The ratios
+specified are used directly in the LUT.
+
+
When this is complete, you are left with an efficient lookup table which indexes into a set of
+OpDispensers. The length of this lookup table is called the sequence length, and that value is
+used, by default, to set the stride for the activity. This stride determines the size of
+per-thread cycle batching, effectively turning each sequence into a thread-safe set of
+operations which are serialized, and thus suitable for testing linearized operations with
+suitable dependency and error-handling mechanisms. (But wait, there's more!)
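
As an illustrative sketch (op names and ratios here are hypothetical), two op templates with ratios of 3 and 1 yield a lookup table of length 4 under the default sequencer:

ops:
  write-op:
    stmt: "write"
    ratio: 3
  read-op:
    stmt: "read"
    ratio: 1
# The resulting LUT contains write-op three times and read-op once, so the
# sequence length is 4, which also becomes the default stride for the activity.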
+
Special Cases
+
Drivers are assigned to op templates individually, meaning you can specify the driver within an
+op template, not even assigning a default for the activity. Further, certain drivers are able to
+fill in missing details for op templates, like the stdout driver which only requires bindings.
+
This means that there are distinct cases for configuration which are valid, and these are
+checked at initialization time:
+
+
A driver must be selected for each op template either directly or via activity params.
+
If the whole workload template provided does not include actual op templates AND a
+default driver is provided which can create synthetic op templates, it is given the raw
+workload template, incomplete as it is, and asked to provide op templates which have all
+normalization, naming, etc. already done. This is injected before the tag-filtering phase.
+
In any case that an actual non-zero list of op templates is provided and tag filtering removes
+them all, an error is thrown.
+
If, after tag filtering, no op templates are in the active list, an error is thrown.
+
+
The ParsedOp
+
The components of a fully-parsed op template (AKA a ParsedOp) are:
+
name
+
Each ParsedOp knows its name, which is simply the op template name that it was made from. This
+is useful for diagnostics, logging, and metrics.
+
description
+
Every named element of a workload may be given a description.
+
tags
+
Every op template has tags, even if they are auto-assigned from the block and op template names.
+If you assign explicit tags to an op template, the standard tags are still provided. Thus, it is
+an error to directly provide a tag named block or name.
+
bindings
+
Although bindings are usually defined as a workload-template-level property, they can also be
+provided directly as an op field property.
+
op fields
+
The op property of an op template or ParsedOp is the root of the op fields. This is a map of
+specific fields specified by the user.
+
static op fields
+
Some op fields are simply static values. Since these values are not generated per cycle, they are
+kept separate as reference data. Knowing which fields are static and which are not makes it
+possible for developers to optimize op synthesis.
+
dynamic op fields
+
Other fields may be specified as recipes, with the actual value to be filled-in once the cycle
+value is known. All such fields are known as dynamic op fields, and are provided to the op
+dispenser as a long function, where the input is always the cycle value and the output is a
+type-specific value as determined by the associated binding recipe.
+
bind points
+
This is how dynamic values are indicated. Each bind point in an op template results in some type of
+procedural generation binding. These can be references to named bindings elsewhere in the
+workload template, or they can be inline.
+
capture points
+
Names of result values to save, and the variable names they are to be saved as. The names represent
+the name as it would be found in the native driver's API, such as the name userid
+in select userid from .... In string form statements, users can specify that the userid should be
+saved as the thread-local variable named userid simply by tagging it
+like select [userid] from .... They can also specify that this value should be captured under a
+different name with a variation like select [userid as user_id] from .... This is the standard
+variable capture syntax for any string-based statement form.
+
params
+
A backwards-compatible feature called op params is still available. This is another root property within an op template which can be used to accessorize op fields. By default, any op field which is not explicitly rooted under the op property is put there anyway. This is also true when there is an explicit params property. However, if the op property is provided, then all non-reserved fields are given to the params property instead. If both the op and the params properties are specified, then no non-reserved op fields are allowed outside of these root values. Thus it is possible to still support params, but it is highly recommended that new driver developers avoid using this field, and instead allow all fields to be automatically anchored under the op property. This keeps configs terse and simple going forward.
+
Params may not be dynamic.
+
Mapping Rules
+
A ParsedOp does not necessarily describe a specific low-level operation to be performed by
+a native driver. It should do so, but it is up to the user to provide a valid op template
+according to the documented rules of op construction for that driver type. These rules should be
+clearly documented by the driver developer as examples in markdown that is required for every
+driver. With this documentation, users can use nb5 help <driver> to see exactly how
+to create op templates for a given driver.
+
String Form
+
Basic operations are made from a statement in some type of query language:
+
ops:
  - stringform: select [userid] from db.users where user='{username}';
    bindings:
      username: NumberNameToString()
+
+
Reserved op fields
+
The property names ratio, driver, and space are considered reserved by the NoSQLBench runtime.
+These are extracted and handled specially by the core runtime.
+
Base OpDispenser fields
+
The BaseOpDispenser, which will be required as the base implementation of any op
+dispenser going forward, provides cross-cutting functionality. These include start-timers,
+stop-timers, instrument, and likely will include more as future cross-driver functionality is
+added. These fields will be considered reserved property names.
+
Optimization
+
It should be noted that the op mapping process, where user intentions are mapped from op templates to
+op dispensers is not something that needs to be done quickly. This occurs at initialization
+time. Instead, it is more important to focus on user experience factors, such as flexibility,
+obviousness, robustness, correctness, and so on. Thus, priority of design factors in this part
+of NB is placed more on clear and purposeful abstractions and less on optimizing for speed. The
+clarity and detail which is conveyed by this layer to the driver developer will then enable
+them to focus on building fast and correct op dispensers. These dispensers are also constructed
+before the workload starts running, but are used at high speed while the workload is running.
+
In essence:
+
+
Any initialization code which happens before or in the OpDispenser constructor should not be
+concerned with careful performance optimization.
+
Any code which occurs within the OpDispenser#apply method should be as lightweight as is
+reasonable.
+
👉 The docs are presently updated to support NoSQLBench v5.17. 👈
+
Welcome to NB5! This release represents a massive leap forward. There are so many improvements
+that should have gone into smaller releases along the way, but here we are. We've had our heads
+down, focusing on new APIs, porting drivers, and fixing bugs, but it's time to talk about the
+new stuff!
+
For those who are experienced NB5 users, this will have few (but some!) surprises. For
+those of you who are NB (4 or earlier) users, NB5 is a whole different kind of testing tool. The
+changes allow for a much more streamlined user and developer experience, while also offering
+additional capabilities never seen together in a systems testing tool.
+
Everything mentioned here will find its way into the main docs site before we're done.
+
We've taken some care to make sure that there is support for earlier workloads where at all
+possible. If we've missed something critical, please let us know, and we'll patch it up ASAP.
+
This is a narrative overview of changes for NB5 in general. Individual
+releases will have itemized code changes
+listed individually.
+
Artifacts
+
nb5
+
The main bundled artifact is now named nb5. This version of NoSQLBench is a
+significant departure from the previous limitations and conventions, so a new name was fitting.
+It also allows you to easily have both on your system if you are maintaining test harnesses.
+This is a combination of the NoSQLBench core runtime module nbr and all the bundled driver
+adapters which have been contributed to the project.
+
Packaging
+
The code base for nb5 is more modular and adaptable. The core runtime module nbr is now
+separate, including only the core diagnostic driver which is used in integration tests. This allows
+for leaner and meaner integration tests.
+
drivers
+
We've ported many drivers to the nb5 APIs. All CQL support is now being provided by
+Datastax Java Driver for Apache Cassandra.
+In addition, multiple contributors are stepping up to provide new drivers for many systems
+across the NoSQL ecosystem.
+
Project
+
Significant changes were made for the benefit of both users and developers.
+
Team
+
We've expanded the developer team which maintains tools like NoSQLBench. This should allow us to
+make improvements faster, focus on users more, and bring more strategic capabilities to the project
+which can redefine how advanced testing is done.
+
WYSiWYG Docs
+
We've connected the integration and specification tests to the documentation in a way that
+ties everything together. If the examples and integration tests that are used on this
+site fail, the build fails. Otherwise, the most recent examples are auto exported from the main
+code base to the docs site. This means that test coverage will improve examples in the docs,
+which will stay constantly up to date. Expect coverage of this method to improve with each
+release. Until we can say What You See Is What You Get across all nb5 functions and examples,
+we're not done yet.
+
Releases
+
Going forward we'll enforce stricter release criteria. Interim releases will be flagged as
+prerelease unless due diligence checks have been done and a peer review finds a prerelease
+suitable for promotion to a main release. Once flagged as a normal release, CI/CD tools can pick
+up the release from the github releases area automatically.
+
We have a set of release criteria which will be published to this site and used as a blueprint for
+releases going forward. More information on how releases are managed can be found in our
+Contributing section. This will include testing coverage,
+static
+analysis, and further integrated testing support.
+
Documentation
+
This doc site is a significant step up from the previous version. It is now more accessible,
+more standards compliant, and generally more user-friendly. The dark theme is highly usable.
+Syntax highlighting is much easier on the eyes, and page navigation works better! The starting
+point for this site was provided by the abridge theme by
+Jieiku.
+
Architecture
+
The impetus for a major new version of NoSQLBench came from user and developer needs. In
+order to provide a consistent user experience across a variety of testing needs, the core
+machinery needed an upgrade. The APIs for building drivers and features have been redesigned to
+this end, resulting in a vast improvement for all who use or maintain nb5 core or drivers.
+
These benefits include:
+
+
Vastly simplified driver contributor experience
+
Common features across all implemented DriverAdapters
+
Interoperability between drivers in the same scenario or activity
+
Standard core activity params across all drivers, like op=...
+
Standard metrics semantics across all drivers
+
Standard highly configurable error handler support
+
Standard op template features, like named start and stop timers
+
Standard diagnostic tools across all drivers, like dryrun=...
+
+
The amount of Standard you see in this list is directly related to the burden removed from
+both nb5 users and project contributors.
+
Some highlights of these will be described below, with more details in the user guide.
+
+
The error handlers mechanism is now fully generalized across all
+drivers.
+It is also chainable, with specific support for handling each error type with a specific chain of
+handlers, or
+simply assigning a default to all of them as before.
+
The rate limiter is more efficient. This should allow it to work better in some scenarios
+where inter-core contention was a limiting factor.
+
It is now possible to dry-run an activity with dryrun=op or similar. Each dryrun option goes
+a little further into a normal run so that incremental verification of workloads can be done.
+For example, the dryrun=op option uses all the logic of a normal execution, but it wraps
+the op implementation in a no-op. The results of this will tell you how fast the client can
+synthesize and dispatch operations when there is no op execution involved. The measurement
+will be conservative due to the extra wrapping layer.
+
Thread management within activities is now more efficient, more consistent, and more real-time.
+Polling calls were replaced with evented calls where possible.
+
Only op templates which are active (selected and have a positive ratio) are resolved at
+activity initialization. This improves startup times for large workloads with subsets of
+operations enabled.
+
Native drivers (like the CQL Java Driver) now have their driver instance and object graph
+cached, indexed by a named op field called space. By default, this is wired to return
+default, thus each unique adapter will use the same internal object graph for execution.
+This is how things worked for most drivers before. However, if the user specifies that the
+space should vary, then they simply assign it a binding. This allows for advanced driver
+testing across a number of client instances, either pseudo-randomly or in lock-step with
+specific access patterns. If you don't want to use this, then ignore it and everything works
+as it did before. But if you do, this is built-in to every driver by design.
+
The activity parameter driver simply sets the default adapter for an activity. You can set
+this per op template, and run a different driver for every cycle. This field must be static on
+each op template, however. This allows for mixed-mode workloads in the same op sequence.
+
Adapters can be loaded from external jars. This can help users who are building adapters and want
+to avoid building the full runtime just for iterative testing.
+
The phase loop has been removed.
+
Operations can now generate more operations associated with a cycle. This opens the door to
+
There is a distinct API for implementing dynamic activity params distinctly from
+initialization params.
+
+
Ergonomics
+
Console
+
+
ANSI color support in some places, such as in console logging patterns. The --ansi and
+--console-pattern and --logging-pattern options work together. If a non-terminal is
+detected on stdout, ANSI is automatically disabled.
+
The progress meter has been modified to show real-time, accurate, detailed numbers
+including operations in flight.
+
+
Discovery
+
+
Discovery of bundled assets is now more consistent, supported with a family of --list-...
+options.
+
+
Configuration
+
+
Drivers know what parameters they can be configured with. This allows for more
+intelligent and useful feedback to users around appropriate parameter usage. If you get a
+param name wrong, nb5 will likely suggest the next closest option.
+
S3 Urls should work in most places, including for loading workload templates. You only need to
+configure your local authentication first.
+
+
Templating
+
Much of the power of NB5 is illustrated in the new ways you can template workloads. This
+includes structured data, dynamic op configuration, and driver instancing, to name a few.
+
+
The structure of op templates (the YAML you write to simulate access patterns) has been
+standardized around a strict set of specification tests and examples. These are documented
+in-depth and tested against a specification with round-trip validation.
+
Now, JSON and Jsonnet are supported directly as workload template formats. Jsonnet allows you to
+see the activity params as external variables.
+
All workload template structure is now supported as maps, in addition to the other structural
+forms (previously called workload YAMLs). All of these forms automatically de-sugar into the
+canonical forms for the runtime to use. This follows the previous pattern of "If it does what
+it looks like, it is valid", but further allows simplification of workloads with inline
+naming of elements.
+
In addition to workload template structure, op templates also support arbitrary structure
+instead of just scalar or String values. This is especially useful for JSON payload modeling.
+This means that op templates now have a generalized templating mechanism that works for all
+data structures. You can reference bindings as before, but you can also create collections and
+string templates by writing fields as they naturally occur, then adding {bindings} where you
+need.
+
All op template fields can be made dynamic if an adapter supports it. It is up to the adapter
+implementor to decide which op fields must be static.
+
Op template values auto-defer to configured values as static, then dynamic, and then
+configured from activity parameters as defaults. If an adapter supports a parameter at the
+activity level, and an op form supports the same field, then this occurs automatically.
+
Tags for basic workload template elements are provided gratis. You no longer need to specify the
+conventional tags. All op templates now have block: <blockname> and name: <blockname>--<name> tags added. This works with regexes in tag filtering.
+
Named scenarios now allow for nb5 <workload-file> <scenario-name>.<scenario-step> .... You can
+prototype and validate complex scenarios by sub-selecting the steps to execute.
+
You can use the op="..." activity parameter to specify a single-op workload on the command line, as if you had read it from a workload YAML. This allows for one-liner tests, streamlined integration, and other simple utility usage.
+
Binding recipes can now occur inline, as {{Identity()}}. This works with the op
+parameter above.
+
You can now set a minimum version of NoSQLBench to use for a workload. The min_version: "4.17.15" property is checked starting from the most-significant number down. If there are new core
+features that your workload depends on, you can use this to avoid ambiguous errors.
+
Template vars like <<name:value>> or TEMPLATE(name,value) can set defaults the first time they
+are seen. This means you don't have to update them everywhere. A nice way to handle this is to
+include them in the description once, since you should be documenting them anyway!
+
You can load JSON files directly. You can also load JSONNET files directly! If you need to
+sanity check your jsonnet rendering, you can use dryrun=jsonnet.
+
All workload template elements can have a description.
+
+
Misc Improvements
+
(some carry over from pre-nb5 features)
+
+
Argsfile support for allowing sticky parameters on a system.
+
Tag filters are more expressive, with regexes and conjunctions.
+
Some scenario commands now allow for regex-style globbing on activity alias names.
+
Startup logging now includes details on version, hardware config, and scenario commands for
+better diagnostics and bug reports.
+
The logging subsystem config is improved and standardized across the project.
+
Test output is now vectored exclusively through logging config.
+
Analysis methods are improved and more widely used.
+
+
Deprecations and Standards
+
+
NB5 depends on Java 17. Going forward, major versions will adopt the latest LTS java release.
+
Dependencies which require shading in order to play well with others are not supported. If you
+have a native driver or need to depend on a library which is not a good citizen, you can only
+use it with NB5 by using the external jar feature (explained elsewhere). This includes the
+previous CQL drivers which were the 1.9.* and 3.*.* version. Only CQL driver 4.* is
+provided in nb5.
+
Dependencies should behave as modular jars, according to JPMS specification. This does not
+mean they need to be JPMS modules, only that they get halfway there.
+
Log4J2 is the standard logging provider in the runtime for NoSQLBench. An SLF4J stub
+implementation is provided to allow clients which implement against the SLF4J API to work.
+
All new drivers added to the project are based on the Adapter API.
+
+
Works In Progress
+
+
These docs!
+
Bulk Loading efficiency for large tests
+
Linearized Op Modeling
+
+
We now have a syntax for designating fields to extract from op results. This is part of the
+support needed to make client-side joins and other patterns easy to emulate.
+
+
+
Rate Limiter v3
+
VictoriaMetrics Integration
+
+
Labeled metrics need to be fed to a victoria metrics docker via push. This approach will
+remove much of the pain involved in using prometheus as an ephemeral testing apparatus.
👉 The docs are presently updated to support NoSQLBench v5.17. 👈
+
Welcome to NB5! This release represents a massive leap forward. There are so many improvements
+that should have gone into smaller releases along the way, but here we are. We've had our heads
+down, focusing on new APIs, porting drivers, and fixing bugs, but it's time to talk about the
+new stuff!
+
For those who are experienced NB5 users, this will have few (but some!) surprises. For
+those of you who are NB (4 or earlier) users, NB5 is a whole different kind of testing tool. The
+changes allow for a much more streamlined user and developer experience, while also offering
+additional capabilities never seen together in a systems testing tool.
+
Everything mentioned here will find its way into the main docs site before we're done.
+
We've taken some care to make sure that there is support for earlier workloads where at all
+possible. If we've missed something critical, please let us know, and we'll patch it up ASAP.
+
This is a narrative overview of changes for NB5 in general. Individual
+releases will have their code changes itemized separately.
+
Artifacts
+
nb5
+
The main bundled artifact is now named nb5. This version of NoSQLBench is a
+significant departure from the previous limitations and conventions, so a new name was fitting.
+It also allows you to easily have both on your system if you are maintaining test harnesses.
+This is a combination of the NoSQLBench core runtime module nbr and all the bundled driver
+adapters which have been contributed to the project.
+
Packaging
+
The code base for nb5 is more modular and adaptable. The core runtime module nbr is now
+separate, including only the core diagnostic driver which is used in integration tests. This allows
+for leaner and meaner integration tests.
+
drivers
+
We've ported many drivers to the nb5 APIs. All CQL support is now provided by the
+DataStax Java Driver for Apache Cassandra.
+In addition, multiple contributors are stepping up to provide new drivers for many systems
+across the NoSQL ecosystem.
+
Project
+
Significant changes were made for the benefit of both users and developers.
+
Team
+
We've expanded the developer team which maintains tools like NoSQLBench. This should allow us to
+make improvements faster, focus on users more, and bring more strategic capabilities to the project
+which can redefine how advanced testing is done.
+
WYSIWYG Docs
+
We've connected the integration and specification tests to the documentation in a way that
+ties everything together. If the examples and integration tests that are used on this
+site fail, the build fails. Otherwise, the most recent examples are auto-exported from the main
+code base to the docs site. This means that test coverage will improve examples in the docs,
+which will stay constantly up to date. Expect coverage of this method to improve with each
+release. Until we can say What You See Is What You Get across all nb5 functions and examples,
+we're not done yet.
+
Releases
+
Going forward we'll enforce stricter release criteria. Interim releases will be flagged as
+prerelease unless due diligence checks have been done and a peer review finds a prerelease
+suitable for promotion to a main release. Once flagged as a normal release, CI/CD tools can pick
+up the release from the GitHub releases area automatically.
+
We have a set of release criteria which will be published to this site and used as a blueprint for
+releases going forward. More information on how releases are managed can be found in our
+Contributing section. This will include testing coverage, static analysis, and further
+integrated testing support.
+
Documentation
+
This doc site is a significant step up from the previous version. It is now more accessible,
+more standards compliant, and generally more user-friendly. The dark theme is highly usable.
+Syntax highlighting is much easier on the eyes, and page navigation works better! The starting
+point for this site was provided by the abridge theme by
+Jieiku.
+
Architecture
+
The impetus for a major new version of NoSQLBench came from user and developer needs. In
+order to provide a consistent user experience across a variety of testing needs, the core
+machinery needed an upgrade. The APIs for building drivers and features have been redesigned to
+this end, resulting in a vast improvement for all who use or maintain nb5 core or drivers.
+
These benefits include:
+
+
Vastly simplified driver contributor experience
+
Common features across all implemented DriverAdapters
+
Interoperability between drivers in the same scenario or activity
+
Standard core activity params across all drivers, like op=...
+
Standard metrics semantics across all drivers
+
Standard highly configurable error handler support
+
Standard op template features, like named start and stop timers
+
Standard diagnostic tools across all drivers, like dryrun=...
+
+
The amount of Standard you see in this list is directly related to the burden removed from
+both nb5 users and project contributors.
+
Some highlights of these will be described below, with more details in the user guide.
+
+
The error handlers mechanism is now fully generalized across all drivers. It is also
+chainable, with support for handling each error type with its own chain of handlers, or
+simply assigning a default handler chain to all of them as before.
+
The rate limiter is more efficient. This should allow it to work better in some scenarios
+where inter-core contention was a limiting factor.
+
It is now possible to dry-run an activity with dryrun=op or similar (see the sketch after
+this list). Each dryrun option goes a little further into a normal run so that incremental
+verification of workloads can be done. For example, the dryrun=op option uses all the logic of
+a normal execution, but it wraps the op implementation in a no-op. The results of this will
+tell you how fast the client can synthesize and dispatch operations when there is no op
+execution involved. The measurement will be conservative due to the extra wrapping layer.
+
Thread management within activities is now more efficient, more consistent, and more real-time.
+Polling calls were replaced with evented calls where possible.
+
Only op templates which are active (selected and have a positive ratio) are resolved at
+activity initialization. This improves startup times for large workloads with subsets of
+operations enabled.
+
Native drivers (like the CQL Java Driver) now have their driver instance and object graph
+cached, indexed by a named op field called space. By default, this is wired to return
+default, so each adapter uses a single internal object graph for execution.
+This is how things worked for most drivers before. However, if the user specifies that the
+space should vary, then they simply assign it a binding (a sketch follows this list). This
+allows for advanced driver testing across a number of client instances, either pseudo-randomly
+or in lock-step with specific access patterns. If you don't want to use this, then ignore it
+and everything works as it did before. But if you do, this is built in to every driver by design.
+
The activity parameter driver simply sets the default adapter for an activity. You can set
+this per op template, and run a different driver for every cycle. This field must be static on
+each op template, however. This allows for mixed-mode workloads in the same op sequence.
+
Adapters can be loaded from external jars. This can help users who are building adapters and want
+to avoid building the full runtime just for iterative testing.
+
The phase loop has been removed.
+
Operations can now generate more operations associated with a cycle. This opens the door to
+emulating patterns like client-side joins, as described under Works In Progress below.
+
There is now a distinct API for implementing dynamic activity params, separate from
+initialization params.
+
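As a rough sketch of the dry-run flow described above, a command like the following exercises
+op synthesis and dispatch without executing any operations. The driver name, workload, and
+cycle count are illustrative assumptions; dryrun=op is the only part being demonstrated.
+
./nb5 run driver=cql workload=cql-starter dryrun=op cycles=100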
+
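And as a minimal sketch of the space idea, an op template can bind the space field so that
+cycles rotate across a few cached client instances. The binding functions, op form, and table
+below are assumptions chosen for illustration, not a recommended workload.
+
bindings:
+  machine_id: ToHashedUUID();
+  client_no: HashRange(0L,3L); ToString();
+ops:
+  write_row:
+    prepared: |
+      insert into starter.cqlstarter (machine_id) values ({machine_id});
+    space: "{client_no}"  # rotate operations across four cached client/driver instances
+
+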
Ergonomics
+
Console
+
+
ANSI color support in some places, such as in console logging patterns. The --ansi,
+--console-pattern, and --logging-pattern options work together. If a non-terminal is
+detected on stdout, ANSI is automatically disabled.
+
The progress meter has been modified to show real-time, accurate, detailed numbers
+including operations in flight.
+
+
Discovery
+
+
Discovery of bundled assets is now more consistent, supported with a family of --list-...
+options (an example follows below).
+
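For instance, listing the available driver adapters follows the same pattern as the other
+discovery options; the exact set of --list-... options may vary by version.
+
./nb5 --list-drivers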
+
Configuration
+
+
Drivers know what parameters they can be configured with. This allows for more
+intelligent and useful feedback to users around appropriate parameter usage. If you get a
+param name wrong, nb5 will likely suggest the next closest option.
+
S3 URLs should work in most places, including for loading workload templates (see the sketch
+below). You only need to configure your local authentication first.
+
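As a sketch of the S3 case, the bucket and key below are placeholders, and this assumes your
+AWS credentials are already configured through the usual local mechanisms.
+
./nb5 run driver=cql workload=s3://my-bucket/workloads/cql-starter.yaml cycles=100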
+
Templating
+
Much of the power of NB5 is illustrated in the new ways you can template workloads. This
+includes structured data, dynamic op configuration, and driver instancing, to name a few.
+
+
The structure of op templates (the YAML you write to simulate access patterns) has been
+standardized around a strict set of specification tests and examples. These are documented
+in-depth and tested against a specification with round-trip validation.
+
Now, JSON and Jsonnet are supported directly as workload template formats. Jsonnet allows you to
+see the activity params as external variables.
+
All workload template structure is now supported as maps, in addition to the other structural
+forms (previously called workload YAMLs). All of these forms automatically de-sugar into the
+canonical forms for the runtime to use. This follows the previous pattern of "If it does what
+it looks like, it is valid", but further allows simplification of workloads with inline
+naming of elements.
+
In addition to workload template structure, op templates also support arbitrary structure
+instead of just scalar or String values. This is especially useful for JSON payload modeling.
+This means that op templates now have a generalized templating mechanism that works for all
+data structures. You can reference bindings as before, but you can also create collections and
+string templates by writing fields as they naturally occur, then adding {bindings} where you
+need them (see the workload sketch after this list).
+
All op template fields can be made dynamic if an adapter supports it. It is up to the adapter
+implementor to decide which op fields must be static.
+
Op template field values resolve as static values first, then as dynamic values, and then
+fall back to activity parameters as defaults. If an adapter supports a parameter at the
+activity level, and an op form supports the same field, then this occurs automatically.
+
Tags for basic workload template elements are provided gratis. You no longer need to specify
+the conventional tags. All op templates now have block: <blockname> and
+name: <blockname>--<name> tags added. This works with regexes in tag filtering.
+
Named scenarios now allow for nb5 <workload-file> <scenario-name>.<scenario-step> .... You can
+prototype and validate complex scenarios by sub-selecting the steps to execute (sketched after
+this list).
You can use the op="..." activity parameter to specify a single-op workload on the
+command line, as if you had read it from a workload YAML. This allows for one-liner tests,
+streamlined integration, and other simple utility usage (see the sketches after this list).
+
Binding recipes can now occur inline, as {{Identity()}}. This works with the op
+parameter above.
+
You can now set a minimum version of NoSQLBench to use for a workload. The
+min_version: "4.17.15" property is checked starting from the most-significant number down. If
+there are new core features that your workload depends on, you can use this to avoid ambiguous
+errors (see the workload sketch after this list).
+
Template vars like <<name:value>> or TEMPLATE(name,value) can set defaults the first time they
+are seen. This means you don't have to update them everywhere. A nice way to handle this is to
+include them in the description once, since you should be documenting them anyway!
+
You can load JSON files directly. You can also load JSONNET files directly! If you need to
+sanity check your jsonnet rendering, you can use dryrun=jsonnet.
+
All workload template elements can have a description.
+
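Here are two command-line sketches of the items above, using illustrative names and values. The
+first runs only one step of a named scenario (assuming the step is named schema); the second
+defines a single-op workload inline with an inline binding recipe, using the stdout driver only
+as a convenient stand-in.
+
./nb5 cql-starter default.schema
+
./nb5 run driver=stdout op='cycle={{Identity()}}' cycles=5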
+
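Pulling several of the items above together, here is a minimal workload-template sketch. The
+binding functions, the template variable names, and the structured document field are
+illustrative assumptions rather than a canonical workload.
+
min_version: "4.17.15"
+description: |
+  Minimal sketch. TEMPLATE(keycount,100) sets the default for keycount the first time it is
+  seen, so later references can omit it.
+bindings:
+  key: Mod(<<keycount>>L); ToString();
+  value: Identity(); ToString();
+ops:
+  send_doc:
+    document:              # a structured (map) op field instead of a single string value
+      id: "{key}"
+      body: "generated value {value}"
+
+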
Misc Improvements
+
(some carry over from pre-nb5 features)
+
+
Argsfile support for allowing sticky parameters on a system.
+
Tag filters are more expressive, with regexes and conjunctions (example after this list).
+
Some scenario commands now allow for regex-style globbing on activity alias names.
+
Startup logging now includes details on version, hardware config, and scenario commands for
+better diagnostics and bug reports.
+
The logging subsystem config is improved and standardized across the project.
+
Test output is now vectored exclusively through logging config.
+
Analysis methods are improved and more widely used.
+
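As a sketch of tag filtering against the auto-added tags described under Templating, the driver,
+workload, and cycle count here are illustrative; regex and conjunction forms are also supported,
+though not shown.
+
./nb5 run driver=cql workload=cql-starter tags='block:main' cycles=100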
+
Deprecations and Standards
+
+
NB5 depends on Java 17. Going forward, major versions will adopt the latest LTS java release.
+
Dependencies which require shading in order to play well with others are not supported. If you
+have a native driver or need to depend on a library which is not a good citizen, you can only
+use it with NB5 by using the external jar feature (explained elsewhere). This includes the
+previous CQL drivers, which were the 1.9.* and 3.*.* versions. Only CQL driver 4.* is
+provided in nb5.
+
Dependencies should behave as modular jars, according to the JPMS specification. This does not
+mean they need to be JPMS modules, only that they get halfway there.
+
Log4J2 is the standard logging provider in the runtime for NoSQLBench. An SLF4J stub
+implementation is provided to allow clients which implement against the SLF4J API to work.
+
All new drivers added to the project are based on the Adapter API.
+
+
Works In Progress
+
+
These docs!
+
Bulk Loading efficiency for large tests
+
Linearized Op Modeling
+
+
We now have a syntax for designating fields to extract from op results. This is part of the
+support needed to make client-side joins and other patterns easy to emulate.
+
+
+
Rate Limiter v3
+
VictoriaMetrics Integration
+
+
Labeled metrics need to be fed to a VictoriaMetrics Docker container via push. This approach
+will remove much of the pain involved in using Prometheus as an ephemeral testing apparatus.
+
+
+
+
etlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}},h:{docs:{},df:0,a:{docs:{},df:0,n:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}},r:{docs:{},df:0,s:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{},df:0,"=":{docs:{},df:0,'"':{docs:{},df:0,u:{docs:{},df:0,t:{docs:{},df:0,f:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:2}}}}}}}}}},e:{docs:{},df:0,c:{docs:{},df:0,k:{docs:{},df:0,_:{docs:{},df:0,l:{docs:{},df:0,a:{docs:{},df:0,n:{docs:{},df:0,g:{docs:{},df:0,u:{docs:{},df:0,a:{docs:{},df:0,g:{docs:{},df:0,e:{docs:{},df:0,"(":{docs:{},df:0,f:{docs:{},df:0,a:{docs:{},df:0,l:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}},e:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},r:{docs:{},df:0,o:{docs:{},df:0,m:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772}},df:1}}}}},i:{docs:{},df:0,t:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},e:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},l:{docs:{},df:0,a:{docs:{},df:0,s:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-images/":{tf:1.4142135623730951}},df:5,"=":{docs:{},df:0,'"':{docs:{},df:0,s:{docs:{},df:0,v:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}},i:{docs:{},df:0,c:{docs:{},df:0,k:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}},e:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{},df:0,".":{docs:{},df:0,w:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,t:{docs:{},df:0,e:{docs:{},df:0,_:{docs:{},df:0,p:{docs:{},df:0,o:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{},df:0,s:{docs:{},df:0,"(":{docs:{},df:0,d:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}}},o:{docs:{},df:0,s:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1,s:{docs:{},df:0,e:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{},df:0,c:{docs:{},df:0,h:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}},o:{docs:{},df:0,d:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2.6457513110645907},"https://abridge.netlify.app/overview-images/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:2.23606797749979}},df:3},y:{docs:{},df:0,3:{docs:{},df:0,3:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},l:{docs:{},df:0,o:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},m:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},m:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}}},o:{docs:{},df:0,n:{docs:{},df:0,m:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{},df:0,k:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}}
,df:1}}}}},s:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},u:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1}}},p:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-images/":{tf:1.7320508075688772}},df:2}}}},n:{docs:{},df:0,c:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,"(":{docs:{},df:0,d:{docs:{},df:0,o:{docs:{},df:0,c:{docs:{},df:0,u:{docs:{},df:0,m:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{},df:0,".":{docs:{},df:0,g:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{},df:0,e:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{},df:0,m:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{},df:0,b:{docs:{},df:0,y:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,"(":{docs:{},df:0,'"':{docs:{},df:0,s:{docs:{},df:0,e:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{},df:0,c:{docs:{},df:0,h:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,p:{docs:{},df:0,u:{docs:{},df:0,t:{docs:{},df:0,'"':{docs:{},df:0,")":{docs:{},df:0,".":{docs:{},df:0,v:{docs:{},df:0,a:{docs:{},df:0,l:{docs:{},df:0,u:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}},d:{docs:{},df:0,i:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},e:{docs:{},df:0,c:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,e:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}},f:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1,i:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1,".":{docs:{},df:0,t:{docs:{},df:0,o:{docs:{},df:0,m:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-math/":{tf:1}},df:2}}}}},":":{docs:{},df:0,":":{docs:{},df:0,g:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}},u:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}}}}},n:{docs:{},df:0,e:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}}}},s:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}},t:{docs:{},df:0,a:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/contact/":{tf:1}},df:1}},i:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2}},df:1}}},e:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-images/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:4,"/":{docs:{},df:0,f:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.4142135623730951}},df:1,s:{docs:{},df:0,".":{docs:{},df:0,s:{docs:{},df:0,v:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2}},df:1}}}}}}}}}},o:{docs:{},df:0,v:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,9:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1.7320508075688772}},df:1,".":{docs:{},df:0,f:{docs:{},
df:0,l:{docs:{},df:0,a:{docs:{},df:0,c:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}},m:{docs:{},df:0,p:{docs:{},df:0,3:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}},o:{docs:{},df:0,g:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}},w:{docs:{},df:0,a:{docs:{},df:0,v:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}},_:{docs:{},df:0,a:{docs:{},df:0,v:{docs:{},df:0,1:{docs:{},df:0,".":{docs:{},df:0,m:{docs:{},df:0,p:{docs:{},df:0,4:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1.4142135623730951}},df:1}}}}}}},v:{docs:{},df:0,p:{docs:{},df:0,9:{docs:{},df:0,".":{docs:{},df:0,w:{docs:{},df:0,e:{docs:{},df:0,b:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1.4142135623730951}},df:1}}}}}}}}}}}}}}}}}}}},x:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1,":":{docs:{},df:0,":":{docs:{},df:0,n:{docs:{},df:0,e:{docs:{},df:0,w:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}},r:{docs:{},df:0,o:{docs:{},df:0,l:{docs:{},df:0,s:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1.4142135623730951}},df:1}}}}}}}},v:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}},o:{docs:{},df:0,k:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1.4142135623730951},"https://abridge.netlify.app/privacy/":{tf:1}},df:4}}},p:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}},r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,e:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},r:{docs:{},df:0,e:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},u:{docs:{},df:0,r:{docs:{},df:0,t:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}},v:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}},r:{docs:{},df:0,e:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-code-blocks/":{tf:2.449489742783178},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:1.4142135623730951}},df:4,u:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}},s:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/about/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-abridge/":{tf:1},"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:4}},t:{docs:{},df:0,r:{docs:{},df:0,l:{docs:{},df:0,"+":{docs:{},df:0,a:{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,"+":{docs:{},df:0,d:{docs:{},df:0,e:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}}}}}}}},u:{docs:{"https://abridge.netlify.ap
p/overview-markdown-and-style/":{tf:1}},df:1,l:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{},df:0,c:{docs:{},df:0,u:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}},m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1},r:{docs:{},df:0,r:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},s:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},t:{docs:{},df:0,o:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}},d:{docs:{},df:0,3:{docs:{},df:0,c:{docs:{},df:0,3:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}},"=":{docs:{},df:0,'"':{docs:{},df:0,m:{docs:{},df:0,2:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}},2:{docs:{},df:0,7:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}},3:{docs:{},df:0,3:{docs:{},df:0,9:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}},6:{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}},5:{docs:{},df:0,6:{docs:{},df:0,5:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}},7:{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}},9:{docs:{},df:0,9:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}},a:{docs:{},df:0,i:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},n:{docs:{},df:0,d:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},t:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772},"https://abridge.netlify.app/privacy/":{tf:1}},df:2},e:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1},"https://abridge.netlify.app/privacy/":{tf:1}},df:2}},y:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}},e:{docs:{},df:0,c:{docs:{},df:0,o:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},f:{docs:{},df:0,a:{docs:{},df:0,u:{docs:{},df:0,l:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-code-blocks/":{tf:2.6457513110645907},"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-images/":{tf:1}},df:5}}}}},m:{docs:{},df:0,o:{docs:{"https://abridge.netlify.app/about/":{tf:1}},df:1,n:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}},s:{docs:{},df:0,c:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,p:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},i:{docs:{},df:0,g:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/about/":{tf:1}},df:1}},r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1}}}},i:{docs:{},df:0,e:{docs:{},df:0,"(":{docs:{},df:0,'"':{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,v:{docs:{},df:0,a:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}},f
:{docs:{},df:0,f:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:2}},df:2}}}},r:{docs:{},df:0,e:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.4142135623730951}},df:1}}}}}},s:{docs:{},df:0,p:{docs:{},df:0,l:{docs:{},df:0,a:{docs:{},df:0,y:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-images/":{tf:1},"https://abridge.netlify.app/overview-math/":{tf:1}},df:3}}}}}},j:{docs:{},df:0,a:{docs:{},df:0,n:{docs:{},df:0,g:{docs:{},df:0,o:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},o:{docs:{},df:0,c:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2,u:{docs:{},df:0,m:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{},df:0,"&":{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"/":{docs:{},df:0,t:{docs:{},df:0,i:{docs:{},df:0,t:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:2}}}}}}}}}}}}},".":{docs:{},df:0,c:{docs:{},df:0,r:{docs:{},df:0,e:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,e:{docs:{},df:0,e:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{},df:0,m:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{},df:0,"(":{docs:{},df:0,'"':{docs:{},df:0,d:{docs:{},df:0,i:{docs:{},df:0,v:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}}}},g:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{},df:0,e:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{},df:0,m:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{},df:0,b:{docs:{},df:0,y:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,"(":{docs:{},df:0,'"':{docs:{},df:0,s:{docs:{},df:0,e:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{},df:0,c:{docs:{},df:0,h:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,p:{docs:{},df:0,u:{docs:{},df:0,t:{docs:{},df:0,'"':{docs:{},df:0,")":{docs:{},df:0,".":{docs:{},df:0,v:{docs:{},df:0,a:{docs:{},df:0,l:{docs:{},df:0,u:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}}},u:{docs:{},df:0,g:{docs:{},df:0,g:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}}}},q:{docs:{},df:0,u:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,y:{docs:{},df:0,s:{docs:{},df:0,e:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{},df:0,o:{docs:{},df:0,r:{docs:{},df:0,"(":{docs:{},df:0,'"':{docs:{},df:0,m:{docs:{},df:0,a:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}}}}}}}}}}}}}}}}}}}}}}}}}},e:{docs:{},df:0,s:{docs:{},df:0,n:{docs:{},df:0,"'":{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:2.449489742783178}},df:1}}}}},l:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,s:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},t:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,d:{docs:{},df:0,i:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}}},l:{docs:{},df:0,o:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},o:{d
ocs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1}},u:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},n:{docs:{},df:0,"'":{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},u:{docs:{},df:0,r:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},x:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1,"&":{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"/":{docs:{},df:0,s:{docs:{},df:0,c:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,p:{docs:{},df:0,t:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}}}}}}}}}}}}}}},e:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2}},df:1,".":{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,n:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,h:{docs:{},df:0,t:{docs:{},df:0,m:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772}},df:1}}}}}}}}}},"^":{docs:{},df:0,"{":{docs:{},df:0,i:{docs:{},df:0,"\\":{docs:{},df:0,p:{docs:{},df:0,i:{docs:{},df:0,"}":{docs:{},df:0,"+":{docs:{},df:0,1:{docs:{},df:0,"=":{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1.7320508075688772}},df:1}}}}}}}}}}},a:{docs:{},df:0,c:{docs:{},df:0,h:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:2}},df:2}},s:{docs:{},df:0,i:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1}},df:3}}}},t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},d:{docs:{},df:0,i:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1.7320508075688772}},df:1,i:{docs:{},df:0,u:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}},f:{docs:{},df:0,f:{docs:{},df:0,e:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/privacy/":{tf:1}},df:1}}}}},i:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},l:{docs:{},df:0,e:{docs:{},df:0,m:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:2.23606797749979},"https://abridge.netlify.app/overview-rich-content/":{tf:1.4142135623730951}},df:2}}}}},i:{docs:{},df:0,f:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}}},m:{docs:{},df:0,a:{docs:{},df:0,i:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2,c:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2}},df:1}}}},s:{docs:{},df:0,u:{docs:{},df:0,b:{docs:{},df:0,j:{docs:{},df:0,e:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2}},df:1}}}}}}}}}},b:{docs:{},df:0,e:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1},"https://abridge.netlify.app/overview-images/":{tf:1}},df:3}}}},n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1
,a:{docs:{},df:0,b:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-math/":{tf:2.6457513110645907}},df:2}}},d:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1},"https://abridge.netlify.app/overview-math/":{tf:1.4142135623730951}},df:3,i:{docs:{},df:0,g:{docs:{},df:0,n:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}},g:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},i:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},o:{docs:{},df:0,u:{docs:{},df:0,g:{docs:{},df:0,h:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}}},t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}},o:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.7320508075688772}},df:1,s:{docs:{},df:0,s:{docs:{},df:0,e:{docs:{},df:0,q:{docs:{},df:0,u:{docs:{},df:0,o:{docs:{},df:0,d:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}},t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},q:{docs:{},df:0,u:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}}},r:{docs:{},df:0,u:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},s:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,p:{docs:{},df:0,e:{docs:{},df:0,c:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}},q:{docs:{},df:0,u:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}},t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.7320508075688772}},df:1,c:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1.7320508075688772}},df:1},u:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},v:{docs:{},df:0,e:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{},df:0,c:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,t:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}},r:{docs:{},df:0,y:{docs:{},df:0,b:{docs:{},df:0,o:{docs:{},df:0,d:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}}},x:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,a:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}},m:{docs:{},df:0,p:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1},"https://abridge.netlify.app/overview-math/":{tf:1.4142135623730951}},df:4}}}},c:{docs:{},df:0,e:{docs:{},df:0,p:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}},r:{docs:{},df:0,p:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}},e:{
docs:{},df:0,c:{docs:{},df:0,u:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}},r:{docs:{},df:0,u:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},i:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:2}}},p:{docs:{},df:0,a:{docs:{},df:0,n:{docs:{},df:0,d:{docs:{},df:0,t:{docs:{},df:0,a:{docs:{},df:0,b:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}},e:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}},l:{docs:{},df:0,i:{docs:{},df:0,c:{docs:{},df:0,i:{docs:{},df:0,t:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}},q:{docs:{},df:0,u:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}},t:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1.7320508075688772}},df:1}}},r:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1},"https://abridge.netlify.app/overview-math/":{tf:2.449489742783178}},df:2}}}}},f:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2.449489742783178}},df:1,"(":{docs:{},df:0,"\\":{docs:{},df:0,x:{docs:{},df:0,i:{docs:{},df:0,")":{docs:{},df:0,",":{docs:{},df:0,e:{docs:{},df:0,"^":{docs:{},df:0,"{":{docs:{},df:0,2:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}}}},"\\":{docs:{},df:0,",":{docs:{},df:0,e:{docs:{},df:0,"^":{docs:{},df:0,"{":{docs:{},df:0,2:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}}}}}}}}},x:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1.4142135623730951}},df:1}},a:{docs:{},df:0,c:{docs:{},df:0,e:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1,t:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},t:{docs:{"https://abridge.netlify.app/about/":{tf:1}},df:1}},i:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}},l:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1,b:{docs:{},df:0,a:{docs:{},df:0,c:{docs:{},df:0,k:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1,_:{docs:{},df:0,a:{docs:{},df:0,l:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}},p:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,h:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}}}}}},s:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1},"https://abridge.netlify.app/overview-math/":{tf:1}},df:4},t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1,"=":{docs:{},df:0,'"':{docs:{},df:0,i:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}},s:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-abridge/":{tf:1},"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:3,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},e:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,u:{docs:
{},df:0,r:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-math/":{tf:1}},df:2}}}},e:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1},l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}},r:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-images/":{tf:5.477225575051661}},df:1,s:{docs:{},df:0,"]":{docs:{},df:0,"(":{docs:{},df:0,f:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,s:{docs:{},df:0,".":{docs:{},df:0,s:{docs:{},df:0,v:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}}}}}}}}}}}}}},h:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1,"=":{docs:{},df:0,4:{docs:{},df:0,8:{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}},i:{docs:{},df:0,e:{docs:{},df:0,l:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}},g:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1}}},l:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:2},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-math/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:2}},df:4,n:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.4142135623730951}},df:1}}}},l:{docs:{},df:0,"=":{docs:{},df:0,'"':{docs:{},df:0,"#":{docs:{},df:0,8:{docs:{},df:0,f:{docs:{},df:0,1:{docs:{},df:0,f:{docs:{},df:0,1:{docs:{},df:0,d:{docs:{},df:0,'"':{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"&":{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,p:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,h:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}},f:{docs:{},df:0,f:{docs:{},df:0,f:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.4142135623730951}},df:1}}}}}}}},r:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-images/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:4}}}},l:{docs:{},df:0,a:{docs:{},df:0,c:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}},u:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}}}},m:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1,".":{docs:{},df:0,p:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{},df:0,l:{docs:{},df:0,n:{docs:{},df:0,"(":{docs:{},df:0,'"':{docs:{},df:0,h:{docs:{},df:0,e:{docs:{},df:0,l:{docs:{},df:0,l:{docs:{},df:0,o:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}}},n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1},o:{docs:{},df:0,l:{docs:{},df:0,d:{docs:{},df:0,a:{docs:{},df:0,b:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}}},l:{docs:{},df:0,o:{docs:{},df:0,w:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},o:{docs:{},df:0,t:{do
cs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},r:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,a:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:2.6457513110645907}},df:2}},u:{docs:{},df:0,l:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},u:{docs:{},df:0,n:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},r:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}},p:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,h:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1,"=":{docs:{},df:0,'"':{docs:{},df:0,o:{docs:{},df:0,v:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,9:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}}}}}}}}}},r:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{},df:0,m:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}}}}}},u:{docs:{},df:0,i:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.7320508075688772}},df:1}}}},u:{docs:{},df:0,g:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},n:{docs:{"https://abridge.netlify.app/about/":{tf:1}},df:1,c:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1,t:{docs:{},df:0,i:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2.6457513110645907}},df:1}}}}}},s:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,n:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}},w:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1,"=":{docs:{},df:0,6:{docs:{},df:0,4:{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}},g:{docs:{"https://abridge.netlify.app/about/":{tf:1}},df:1,e:{docs:{},df:0,n:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/about/":{tf:1}},df:1}}},s:{docs:{},df:0,t:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2.449489742783178}},df:1,e:{docs:{},df:0,'"':{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.7320508075688772}},df:1,";":{docs:{},df:0,"&":{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,b:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,r:{docs:{},df:0,e:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}},o:{docs:{},df:0,o:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}},s:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}}}}}}}}}}}}}},".":{docs:{},df:0,s:{docs:{},df:0,v:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2.8284271247461903}},df:1}}}},i:{docs:{},df:0,m:{docs:{},df:0,a:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2.449489742783178}},df:1,e:{docs:{},df:0,":":{docs:{},df:0,h:{docs:{},df:0,o:{docs:{},df:0,v:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.7320508075688772}},df:1}}}}}}}}}}}}}},t:{docs:{},df:0,p:{docs:{},df:0,l:{
docs:{},df:0,a:{docs:{},df:0,y:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,f:{docs:{},df:0,o:{docs:{},df:0,r:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}},s:{docs:{},df:0,p:{docs:{},df:0,e:{docs:{},df:0,c:{docs:{},df:0,i:{docs:{},df:0,a:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{},df:0,z:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,i:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,f:{docs:{},df:0,o:{docs:{},df:0,"(":{docs:{},df:0,g:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{},df:0,s:{docs:{},df:0,p:{docs:{},df:0,e:{docs:{},df:0,c:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}}}}}}}}}},i:{docs:{},df:0,f:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:2.449489742783178}},df:2,"(":{docs:{},df:0,s:{docs:{},df:0,o:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{},df:0,c:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,"=":{docs:{},df:0,"[":{docs:{},df:0,'"':{docs:{},df:0,o:{docs:{},df:0,v:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,9:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{},df:0,_:{docs:{},df:0,a:{docs:{},df:0,v:{docs:{},df:0,1:{docs:{},df:0,".":{docs:{},df:0,m:{docs:{},df:0,p:{docs:{},df:0,4:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}}}}}}}}},v:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}}},l:{docs:{},df:0,o:{docs:{},df:0,b:{docs:{},df:0,a:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-math/":{tf:2}},df:1}}}}},o:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1,a:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}},e:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1},k:{docs:{},df:0,u:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},o:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1},g:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2}},df:1,e:{docs:{},df:0,"'":{docs:{"https://abridge.netlify.app/about/":{tf:1}},df:1}}}}},p:{docs:{},df:0,h:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,f:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}}},s:{docs:{},df:0,e:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{},df:0,c:{docs:{},df:0,h:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}},t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2.8284271247461903},"https://abridge.netlify.app/overview-images/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:3.4641016151377544}},df:4,";":{docs:{},df:0,g:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{},df:0,a:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}},u:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},h:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2.23606797749979},"https://abridge.netlify.app/overview-images/":{tf:1.7320508075688772}},df:2,1:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},2:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,o:
{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},3:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},4:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},5:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},6:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},"=":{docs:{},df:0,4:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2.449489742783178}},df:1}}}},a:{docs:{},df:0,l:{docs:{},df:0,f:{docs:{"https://abridge.netlify.app/about/":{tf:1}},df:1}},p:{docs:{},df:0,p:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2}},df:1},y:{docs:{},df:0,'"':{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1,";":{docs:{},df:0,"&":{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,b:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,s:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}}}}}}}}}}}}}},".":{docs:{},df:0,s:{docs:{},df:0,v:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2}},df:1}}}},i:{docs:{},df:0,m:{docs:{},df:0,a:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.4142135623730951}},df:1,e:{docs:{},df:0,":":{docs:{},df:0,h:{docs:{},df:0,o:{docs:{},df:0,v:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}}}}}}}}},r:{docs:{},df:0,i:{docs:{},df:0,o:{docs:{},df:0,s:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}},s:{docs:{},df:0,_:{docs:{},df:0,v:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,f:{docs:{},df:0,i:{docs:{},df:0,e:{docs:{},df:0,d:{docs:{},df:0,_:{docs:{},df:0,e:{docs:{},df:0,m:{docs:{},df:0,a:{docs:{},df:0,i:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}}},e:{docs:{},df:0,a:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1}}},v:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}},i:{docs:{},df:0,g:{docs:{},df:0,h:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2.6457513110645907},"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:2,"=":{docs:{},df:0,'"':{docs:{},df:0,4:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.7320508075688772}},df:1}},8:{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1},9:{docs:{},df:0,'"':{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"&":{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}},l:{docs:{},df:0,l:{docs:{},df:0,o:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-images/":{tf:1.4142135623730951}},df:2}},p:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:2,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}},r:{docs:{},df:0,e:{docs:{"https:/
/abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:2.449489742783178}},df:2}},y:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}},i:{docs:{},df:0,g:{docs:{},df:0,h:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},l:{docs:{},df:0,i:{docs:{},df:0,g:{docs:{},df:0,h:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-abridge/":{tf:1},"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:4}}}}}}}},o:{docs:{},df:0,m:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}},r:{docs:{},df:0,i:{docs:{},df:0,z:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}},v:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2.23606797749979}},df:1,"&":{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"/":{docs:{},df:0,b:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"&":{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"/":{docs:{},df:0,d:{docs:{},df:0,i:{docs:{},df:0,v:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2}},df:1}}}}}}}}}}}}}}}}}}}}}}}}},r:{docs:{},df:0,e:{docs:{},df:0,f:{docs:{},df:0,"=":{docs:{},df:0,'"':{docs:{},df:0,h:{docs:{},df:0,t:{docs:{},df:0,t:{docs:{},df:0,p:{docs:{},df:0,s:{docs:{},df:0,":":{docs:{},df:0,"/":{docs:{},df:0,"/":{docs:{},df:0,m:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,c:{docs:{},df:0,s:{docs:{},df:0,".":{docs:{},df:0,e:{docs:{},df:0,x:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{},df:0,p:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{},df:0,".":{docs:{},df:0,c:{docs:{},df:0,o:{docs:{},df:0,m:{docs:{},df:0,'"':{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,d:{docs:{},df:0,a:{docs:{},df:0,s:{docs:{},df:0,h:{docs:{},df:0,b:{docs:{},df:0,o:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{},df:0,d:{docs:{},df:0,"&":{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"/":{docs:{},df:0,a:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772}},df:1}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}},t:{docs:{},df:0,m:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-abridge/":{tf:1},"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:5,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:2}}},5:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:2}}},t:{docs:{},df:0,p:{docs:{},df:0,s:{docs:{},df:0,":":{docs:{},df:0,"/":{docs:{},df:0,"/":{docs:{},df:0,a:{docs:{},df:0,b:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,g:{docs:{},df:0,e:{docs:{},df:0,".":{docs:{},df:0,n:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{},d
f:0,l:{docs:{},df:0,i:{docs:{},df:0,f:{docs:{},df:0,y:{docs:{},df:0,".":{docs:{},df:0,a:{docs:{},df:0,p:{docs:{},df:0,p:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}}}}}}},u:{docs:{},df:0,m:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2.8284271247461903}},df:1,i:{docs:{},df:0,t:{docs:{},df:0,y:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,5:{docs:{},df:0,5:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}}}}}}}}}}}},n:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},i:{docs:{},df:0,"'":{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1},v:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}},c:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-abridge/":{tf:1},"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:3}}},d:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1.7320508075688772}},df:3,"=":{docs:{},df:0,'"':{docs:{},df:0,r:{docs:{},df:0,e:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.4142135623730951}},df:1}},o:{docs:{},df:0,o:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.4142135623730951}},df:1}}}},s:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2}},df:1}}}}},e:{docs:{},df:0,b:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},f:{docs:{},df:0,r:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1.4142135623730951}},df:2}}}},m:{docs:{},df:0,a:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-images/":{tf:4.898979485566356},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:2}},df:3,e:{docs:{},df:0,"(":{docs:{},df:0,s:{docs:{},df:0,o:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{},df:0,c:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,"=":{docs:{},df:0,"[":{docs:{},df:0,'"':{docs:{},df:0,o:{docs:{},df:0,v:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,9:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}},"/":{docs:{},df:0,p:{docs:{},df:0,n:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772}},df:1}}}}}}},g:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.4142135623730951}},df:1,"(":{docs:{},df:0,s:{docs:{},df:0,r:{docs:{},df:0,c:{docs:{},df:0,"=":{docs:{},df:0,'"':{docs:{},df:0,".":{docs:{},df:0,"/":{docs:{},df:0,i:{docs:{},df:0,m:{docs:{},df:0,g:{docs:{},df:0,"/":{docs:{},df:0,f:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}}}}}}}},"/":{docs:{},df:0,o:{docs:{},df:0,v:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,v:{docs:{},df:0,i:{docs:{},df:0,e:{docs:{},df:0,w:{docs:{"https://abridge.netlify.app
/overview-images/":{tf:1}},df:1}}}}}}}}},f:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}}}}}}}},"/":{docs:{},df:0,f:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}}},h:{docs:{},df:0,o:{docs:{},df:0,v:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.4142135623730951}},df:1,e:{docs:{},df:0,r:{docs:{},df:0,"(":{docs:{},df:0,s:{docs:{},df:0,r:{docs:{},df:0,c:{docs:{},df:0,"=":{docs:{},df:0,"[":{docs:{},df:0,'"':{docs:{},df:0,".":{docs:{},df:0,"/":{docs:{},df:0,i:{docs:{},df:0,m:{docs:{},df:0,g:{docs:{},df:0,"/":{docs:{},df:0,f:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,s:{docs:{},df:0,".":{docs:{},df:0,s:{docs:{},df:0,v:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}}}}}}}}}}}}},"/":{docs:{},df:0,o:{docs:{},df:0,v:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,v:{docs:{},df:0,i:{docs:{},df:0,e:{docs:{},df:0,w:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}}}}}},f:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,s:{docs:{},df:0,".":{docs:{},df:0,s:{docs:{},df:0,v:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}}}},p:{docs:{},df:0,o:{docs:{},df:0,r:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:2}}},r:{docs:{},df:0,o:{docs:{},df:0,v:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}},n:{docs:{},df:0,c:{docs:{},df:0,l:{docs:{},df:0,u:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:2}}},t:{docs:{},df:0,u:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},d:{docs:{},df:0,e:{docs:{},df:0,x:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1,".":{docs:{},df:0,h:{docs:{},df:0,t:{docs:{},df:0,m:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}},i:{docs:{},df:0,v:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,u:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}},f:{docs:{},df:0,l:{docs:{},df:0,u:{docs:{},df:0,x:{docs:{},df:0,d:{docs:{},df:0,b:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772}},df:1,c:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}},o:{docs:{},df:0,r:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1},"https://abridge.netlify.app/privacy/":{tf:1}},df:2}}},t:{docs:{},df:0,y:{docs:{},df:0,"}":{docs:{},df:0,"^":{docs:{},df:0,"\\":{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,f:{docs:{},df:0,t:{docs:{},df:0,y:{docs:{},df:0,"\\":{docs:{},df:0,h:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1.4142135623730951}},df:1}}}}}}}}}}}}}}},i:{docs:{},df:0,t:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}}},l:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.7320508075688772},"https://abridge.netlify.app/over
view-math/":{tf:1.7320508075688772}},df:3}}},s:{docs:{},df:0,e:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},r:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}},p:{docs:{},df:0,i:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/about/":{tf:1}},df:1}}},t:{docs:{},df:0,e:{docs:{},df:0,a:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}},t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2}},df:1,"(":{docs:{},df:0,t:{docs:{},df:0,e:{docs:{},df:0,m:{docs:{},df:0,p:{docs:{},df:0,f:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}}}}}},_:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1.4142135623730951}},df:1,0:{docs:{},df:0,"^":{docs:{},df:0,1:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}}},e:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1},r:{docs:{},df:0,n:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}}},o:{docs:{},df:0,":":{docs:{},df:0,":":{docs:{},df:0,r:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,u:{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,"&":{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"(":{docs:{},df:0,")":{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}}}},p:{docs:{},df:0,s:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{},df:0,u:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}},s:{docs:{},df:0,_:{docs:{},df:0,a:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}},d:{docs:{},df:0,m:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}},o:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1},s:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{},df:0,"(":{docs:{},df:0,$:{docs:{},df:0,_:{docs:{},df:0,s:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,s:{docs:{},df:0,i:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{},df:0,"[":{docs:{},df:0,"'":{docs:{},df:0,p:{docs:{},df:0,f:{docs:{},df:0,a:{docs:{},df:0,_:{docs:{},df:0,t:{docs:{},df:0,o:{docs:{},df:0,k:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}}}}}},t:{docs:{},df:0,"'":{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},a:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{},df:0,c:{docs:{},df:0,s:{docs:{},df:0,b:{docs:{},df:0,o:{docs:{},df:0,l:{docs:{},df:0,d:{docs:{},df:0,c:{docs:{},df:0,o:{docs:{},df:0,d:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,k:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{},df:0,h:{docs:{},df:0,r:{docs:{},df:0,o:{docs:{},df:0,u:{docs:{},df:0,g:{docs:{},df:0,h:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1}}}}}}}}}}}}}}}}}}}}}}}}},t:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},e:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:2.449489742783178}},df:2}}}},j:{docs:{},df:0,a:{docs:{},df:0,k:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/about/":{tf:1}},df:1}},n:{docs:{"htt
ps://abridge.netlify.app/privacy/":{tf:1}},df:1},v:{docs:{},df:0,a:{docs:{},df:0,s:{docs:{},df:0,c:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,p:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:2}}}}}}}}},e:{docs:{},df:0,l:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1},y:{docs:{},df:0,":":{docs:{},df:0,":":{docs:{},df:0,a:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{},df:0,i:{docs:{},df:0,x:{docs:{},df:0,_:{docs:{},df:0,w:{docs:{},df:0,e:{docs:{},df:0,b:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}},i:{docs:{},df:0,e:{docs:{},df:0,i:{docs:{},df:0,k:{docs:{},df:0,u:{docs:{"https://abridge.netlify.app/about/":{tf:1}},df:1}}}}},o:{docs:{},df:0,h:{docs:{},df:0,n:{docs:{},df:0,5:{docs:{},df:0,9:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}},p:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}},s:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-abridge/":{tf:1},"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:3,o:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772}},df:1,b:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},k:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,e:{docs:{},df:0,x:{docs:{"https://abridge.netlify.app/overview-math/":{tf:3.605551275463989}},df:1,"(":{docs:{},df:0,b:{docs:{},df:0,l:{docs:{},df:0,o:{docs:{},df:0,c:{docs:{},df:0,k:{docs:{},df:0,"=":{docs:{},df:0,f:{docs:{},df:0,a:{docs:{},df:0,l:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}}},t:{docs:{},df:0,r:{docs:{},df:0,u:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}}}}}}}}}}}}},b:{docs:{"https://abridge.netlify.app/about/":{tf:1}},df:1,d:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},e:{docs:{},df:0,r:{docs:{},df:0,r:{docs:{},df:0,y:{docs:{},df:0,2:{docs:{},df:0,3:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}},y:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}},n:{docs:{},df:0,o:{docs:{},df:0,w:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},l:{docs:{},df:0,a:{docs:{},df:0,b:{docs:{},df:0,o:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},c:{docs:{},df:0,e:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},n:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1,"=":{docs:{},df:0,'"':{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,'"':{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:2}}}}}}}},u:{docs:{},df:0,a:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772}},df:1}}}}},r:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:2}},df:1,e:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}},s:{docs:{},df:0,t:{docs:{},df:0,_:{docs:{},df:0,l:{docs:{},df:0,o:{docs:{},df:0,g:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{"
https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}},e:{docs:{},df:0,m:{docs:{},df:0,a:{docs:{},df:0,i:{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,i:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}}}}}}}},t:{docs:{},df:0,e:{docs:{},df:0,m:{docs:{},df:0,p:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}},t:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}},u:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},y:{docs:{},df:0,o:{docs:{},df:0,u:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1},"https://abridge.netlify.app/overview-images/":{tf:1}},df:2}}}}},e:{docs:{},df:0,s:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}},v:{docs:{},df:0,e:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:2}}}},i:{docs:{},df:0,c:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/about/":{tf:1.4142135623730951}},df:1}}}},f:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{},df:0,i:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},g:{docs:{},df:0,h:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:2,h:{docs:{},df:0,o:{docs:{},df:0,u:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-abridge/":{tf:1},"https://abridge.netlify.app/overview-images/":{tf:2}},df:3}}}},w:{docs:{},df:0,e:{docs:{},df:0,i:{docs:{},df:0,g:{docs:{},df:0,h:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-abridge/":{tf:1},"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:3}}}}}}}}},n:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:2},k:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1}},s:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:2.23606797749979}},df:1,e:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}},o:{docs:{},df:0,a:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1},"https://abridge.netlify.app/overview-math/":{tf:2.23606797749979}},df:3}},c:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}}},o:{docs:{},df:0,p:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-rich-content/":{tf:1.4142135623730951}},df:2}},w:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,"(":{docs:{},df:0,e:{docs:{},df:0,m:{docs:{},df:0,a:{docs:{},df:0,i:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}},s:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}},t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2,";":{docs:{}
,df:0,"!":{docs:{},df:0,d:{docs:{},df:0,o:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{},df:0,y:{docs:{},df:0,p:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}}}}}}},"&":{docs:{},df:0,l:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}},"/":{docs:{},df:0,a:{docs:{},df:0,u:{docs:{},df:0,d:{docs:{},df:0,i:{docs:{},df:0,o:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}}}},b:{docs:{},df:0,o:{docs:{},df:0,d:{docs:{},df:0,y:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}}}}}}},d:{docs:{},df:0,i:{docs:{},df:0,v:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2}},df:1}}}}}},h:{docs:{},df:0,2:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}},e:{docs:{},df:0,a:{docs:{},df:0,d:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}}}}}},t:{docs:{},df:0,m:{docs:{},df:0,l:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}}}}}}},p:{docs:{},df:0,i:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{},df:0,e:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}}}}}},s:{docs:{},df:0,t:{docs:{},df:0,y:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.7320508075688772}},df:1}}}}}}}},v:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,e:{docs:{},df:0,o:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1.4142135623730951}},df:1}}}}}}}}},"?":{docs:{},df:0,p:{docs:{},df:0,h:{docs:{},df:0,p:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}},a:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772}},df:1,u:{docs:{},df:0,d:{docs:{},df:0,i:{docs:{},df:0,o:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}}}},b:{docs:{},df:0,o:{docs:{},df:0,d:{docs:{},df:0,y:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}}}}}}},d:{docs:{},df:0,i:{docs:{},df:0,v:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2.8284271247461903}},df:1,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1}},df:2}}}}}},f:{docs:{},df:0,i:{docs:{},df:0,g:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{},df:0,e:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"&":{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"/":{docs:{},df:0,f:{docs:{},df:0,i:{docs:{},df:0,g:{docs:{},df:0,u:{docs:{},df:0,r:{d
ocs:{},df:0,e:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}}}}},h:{docs:{},df:0,1:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,";":{docs:{},df:0,"—":{docs:{},df:0,"&":{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,h:{docs:{},df:0,6:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}}}}}}}}}},2:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"&":{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,b:{docs:{},df:0,u:{docs:{},df:0,t:{docs:{},df:0,t:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}},6:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},e:{docs:{},df:0,a:{docs:{},df:0,d:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}}}}}},t:{docs:{},df:0,m:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}}}},i:{docs:{},df:0,m:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:2,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}},o:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{},df:0,r:{docs:{},df:0,e:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}},m:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:2}}}},p:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,t:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{},df:0,"&":{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"/":{docs:{},df:0,p:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:2}}}}}}}}}}}}}}}}},i:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{},df:0,e:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1.4142135623730951}},df:1}}}}}}}}}},s:{docs:{},df:0,c:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,p:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1.4142135623730951}},df:1}}}}},o:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{},df:0,c:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:3.3166247903554}},df:1,e:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}}}},t:{docs:{},df:0,d:{docs:{},df:0,i:{docs:{},df:0,o:{docs:{},df:0,".":{docs:{},df:0,h:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}},y:{docs:{},df:0,l:{docs:{},df:0,e:{docs:
{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.7320508075688772}},df:1}}}}}}},v:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}},t:{docs:{},df:0,i:{docs:{},df:0,t:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,e:{docs:{},df:0,x:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{},df:0,p:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:2}}}}}}}}}}}}}}},v:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,e:{docs:{},df:0,o:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1.4142135623730951}},df:1,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1.7320508075688772}},df:1}}}}}}}}}},u:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}},m:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,i:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2.449489742783178},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2,l:{docs:{},df:0,i:{docs:{},df:0,b:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1,":":{docs:{},df:0,":":{docs:{},df:0,m:{docs:{},df:0,a:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,"(":{docs:{},df:0,")":{docs:{},df:0,".":{docs:{},df:0,a:{docs:{},df:0,w:{docs:{},df:0,a:{docs:{},df:0,i:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}},n:{docs:{},df:0,d:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,o:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/about/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-abridge/":{tf:1},"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1},"https://abridge.netlify.app/overview-images/":{tf:1.4142135623730951}},df:6}}}}}},g:{docs:{},df:0,o:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},i:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1}},df:2},u:{docs:{},df:0,f:{docs:{},df:0,a:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,v:{docs:{},df:0,o:{docs:{},df:0,l:{docs:{},df:0,k:{docs:{},df:0,s:{docs:{},df:0,w:{docs:{},df:0,a:{docs:{},df:0,g:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{},df:0,o:{docs:{},df:0,y:{docs:{},df:0,o:{docs:{},df:0,t:{docs:{},df:0,a:{docs:{},df:0,f:{docs:{},df:0,o:{docs:{},df:0,r:{docs:{},df:0,d:{docs:{},df:0,h:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{},df:0,d:{docs:{},df:0,a:{docs:{},df:0,c:{docs:{},df:0,h:{docs:{},df:0,e:{docs:{},df:0,v:{docs:{},df:0,r:{docs:{},df:0,o:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{},df:0,b:{docs:{},df:0,m:{docs:{},df:0,w:{docs:{},df:0,h:{docs:{},df:0,y:{docs:{},df:0,u:{docs:{},df:0,n:{docs:{},df:0,d:{docs:{},df:0,a:{docs:{},df:0,i:{docs:{},df:0,a:{docs:{},df:0,u:{docs:{},df:0,d:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,i:{docs:{},df:0,s:{docs:{},df:0,s:{docs:{},df:0,a:{docs:{},df:0,n:{docs:{},df:0,k:{docs:{},df:0,i:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,c:{docs:{},df:0,e:{docs:{},
df:0,d:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,l:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{},df:0,i:{docs:{},df:0,t:{docs:{},df:0,s:{docs:{},df:0,u:{docs:{},df:0,b:{docs:{},df:0,i:{docs:{},df:0,s:{docs:{},df:0,h:{docs:{},df:0,i:{docs:{},df:0,s:{docs:{},df:0,u:{docs:{},df:0,z:{docs:{},df:0,u:{docs:{},df:0,k:{docs:{},df:0,i:{docs:{},df:0,v:{docs:{},df:0,o:{docs:{},df:0,l:{docs:{},df:0,v:{docs:{},df:0,o:{docs:{},df:0,s:{docs:{},df:0,u:{docs:{},df:0,b:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{},df:0,u:{docs:{},df:0,m:{docs:{},df:0,a:{docs:{},df:0,z:{docs:{},df:0,d:{docs:{},df:0,a:{docs:{},df:0,j:{docs:{},df:0,a:{docs:{},df:0,g:{docs:{},df:0,u:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{},df:0,b:{docs:{},df:0,u:{docs:{},df:0,i:{docs:{},df:0,c:{docs:{},df:0,k:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{},df:0,x:{docs:{},df:0,u:{docs:{},df:0,s:{docs:{},df:0,g:{docs:{},df:0,m:{docs:{},df:0,c:{docs:{},df:0,p:{docs:{},df:0,o:{docs:{},df:0,r:{docs:{},df:0,s:{docs:{},df:0,c:{docs:{},df:0,h:{docs:{},df:0,e:{docs:{},df:0,c:{docs:{},df:0,a:{docs:{},df:0,d:{docs:{},df:0,i:{docs:{},df:0,l:{docs:{},df:0,l:{docs:{},df:0,a:{docs:{},df:0,c:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}},r:{docs:{},df:0,k:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,d:{docs:{},df:0,o:{docs:{},df:0,w:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:2.23606797749979}},df:2}}}}}},s:{docs:{},df:0,k:{docs:{},df:0,a:{docs:{},df:0,b:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},t:{docs:{},df:0,h:{docs:{"https://abridge.netlify.app/overview-math/":{tf:2.6457513110645907}},df:1,_:{docs:{},df:0,a:{docs:{},df:0,u:{docs:{},df:0,t:{docs:{},df:0,o:{docs:{},df:0,_:{docs:{},df:0,r:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-math/":{tf:2.23606797749979}},df:1}}}}}}}}}},e:{docs:{},df:0,m:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-math/":{tf:2.6457513110645907}},df:1}}}}}},x:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1.4142135623730951}},df:1}},b:{docs:{},df:0,":":{docs:{},df:0,1:{docs:{},df:0,2:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{},df:0,p:{docs:{},df:0,x:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}}}}}}}},d:{docs:{},df:0,5:{docs:{},df:0,"(":{docs:{},df:0,u:{docs:{},df:0,n:{docs:{},df:0,i:{docs:{},df:0,q:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,"(":{docs:{},df:0,'"':{docs:{},df:0,p:{docs:{},df:0,f:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}},e:{docs:{},df:0,a:{docs:{},df:0,s:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}}}},d:{docs:{},df:0,i:{docs:{},df:0,u:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},m:{docs:{},df:0,o:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1}}}},n:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,u:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},s:{docs:{},df:0,s:{docs:{},df:0,a:{docs:{},df:0,
g:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},t:{docs:{},df:0,a:{docs:{},df:0,p:{docs:{},df:0,h:{docs:{},df:0,y:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}},i:{docs:{},df:0,l:{docs:{},df:0,k:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},n:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},t:{docs:{"https://abridge.netlify.app/about/":{tf:1.4142135623730951}},df:1}},o:{docs:{},df:0,d:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1,r:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}},l:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{},df:0,r:{docs:{},df:0,u:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}},o:{docs:{},df:0,r:{docs:{},df:0,r:{docs:{},df:0,o:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}},u:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}},v:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}},p:{docs:{},df:0,3:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1},4:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}},u:{docs:{},df:0,t:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1.4142135623730951}},df:1}}},w:{docs:{},df:0,":":{docs:{},df:0,5:{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}}}},y:{docs:{},df:0,v:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,a:{docs:{},df:0,b:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{},df:0,1:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1},2:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}},n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1,".":{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,n:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,h:{docs:{},df:0,t:{docs:{},df:0,m:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}},a:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,e:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2,a:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},t:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},v:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.7320508075688772}},df:1}},b:{docs:{},df:0,s:{docs:{},df:0,p:{docs:{},df:0,";":{docs:{},df:0,s:{docs:{},df:0,o:{docs:{},df:0,r:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:2.449489742783178}},df:1}}}}}}}}},e:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,a:{docs:{},df:0,r:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/about/":{tf:1}},df:1}}}},c:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,s:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}}}}}}},e:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},s:{docs:{},df:0,t:{docs:{"
https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1}},w:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1}},df:3,".":{docs:{},df:0,u:{docs:{},df:0,p:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},x:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},i:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},o:{docs:{},df:0,a:{docs:{},df:0,u:{docs:{},df:0,t:{docs:{},df:0,o:{docs:{},df:0,p:{docs:{},df:0,a:{docs:{},df:0,u:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1}},df:1}}}}}}}},c:{docs:{},df:0,t:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}},n:{docs:{},df:0,s:{docs:{},df:0,e:{docs:{},df:0,q:{docs:{},df:0,u:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}},s:{docs:{},df:0,t:{docs:{},df:0,i:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}},t:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1.4142135623730951}},df:1}},e:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}},v:{docs:{},df:0,e:{docs:{},df:0,m:{docs:{},df:0,b:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},w:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-math/":{tf:1.7320508075688772}},df:3}},u:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,l:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:3.1622776601683795}},df:1}},m:{docs:{},df:0,b:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}}}},s:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},o:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1,b:{docs:{},df:0,s:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,v:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,o:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}}}}}}}}}},d:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,t:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,q:{docs:{},df:0,u:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}}}},f:{docs:{},df:0,f:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},i:{docs:{},df:0,c:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},s:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},g:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}},m:{docs:{},df:0,n:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,m:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-markdown-a
nd-style/":{tf:1}},df:1}}}}},n:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.4142135623730951}},df:1,c:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{},df:0,c:{docs:{},df:0,k:{docs:{},df:0,"=":{docs:{},df:0,'"':{docs:{},df:0,c:{docs:{},df:0,l:{docs:{},df:0,o:{docs:{},df:0,s:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,e:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{},df:0,c:{docs:{},df:0,h:{docs:{},df:0,"(":{docs:{},df:0,")":{docs:{},df:0,'"':{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"&":{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}},p:{docs:{},df:0,t:{docs:{},df:0,i:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:2.23606797749979},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:2},"https://abridge.netlify.app/overview-images/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.7320508075688772}},df:5}}}}},r:{docs:{},df:0,a:{docs:{},df:0,n:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1}}},d:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:2}}}},t:{docs:{},df:0,h:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1,w:{docs:{},df:0,i:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1}},df:2}}}}}}},u:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2,e:{docs:{},df:0,r:{docs:{},df:0,m:{docs:{},df:0,o:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}},p:{docs:{},df:0,u:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2.449489742783178},"https://abridge.netlify.app/overview-math/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-rich-content/":{tf:2}},df:3}}}}},v:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1},"https://abridge.netlify.app/overview-images/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:1.4142135623730951}},df:3,9:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1.4142135623730951}},df:1,".":{docs:{},df:0,f:{docs:{},df:0,l:{docs:{},df:0,a:{docs:{},df:0,c:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}},m:{docs:{},df:0,p:{docs:{},df:0,3:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}},w:{docs:{},df:0,a:{docs:{},df:0,v:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}},_:{docs:{},df:0,v:{docs:{},df:0,p:{docs:{},df:0,9:{docs:{},df:0,".":{docs:{},df:0,w:{docs:{},df:0,e:{docs:{},df:0,b:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1.4142135623730951}},df:1}}}}}}}}}}}}},a:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}},v:{docs:{},df:0,i:{docs:{},df:0,e:{docs:{},df:0,w:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}}}}},p:{docs:{"https://abridge.netlify.app/over
view-images/":{tf:1.4142135623730951}},df:1,"=":{docs:{},df:0,4:{docs:{},df:0,5:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.7320508075688772}},df:1}}},a:{docs:{},df:0,c:{docs:{},df:0,k:{docs:{},df:0,a:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}},d:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2}},df:1},g:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-images/":{tf:1},"https://abridge.netlify.app/overview-math/":{tf:3}},df:4,"'":{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}},i:{docs:{},df:0,n:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,e:{docs:{},df:0,_:{docs:{},df:0,b:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}}}}}}}}},r:{docs:{},df:0,a:{docs:{},df:0,g:{docs:{},df:0,r:{docs:{},df:0,a:{docs:{},df:0,p:{docs:{},df:0,h:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}},m:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1.4142135623730951}},df:1}}}},i:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,i:{docs:{"https://abridge.netlify.app/privacy/":{tf:1.4142135623730951}},df:1,c:{docs:{},df:0,u:{docs:{},df:0,l:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1.7320508075688772}},df:1}}}}}},n:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}},s:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1,w:{docs:{},df:0,o:{docs:{},df:0,r:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}},t:{docs:{},df:0,h:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2.8284271247461903},"https://abridge.netlify.app/overview-rich-content/":{tf:1.7320508075688772}},df:2}}},e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-math/":{tf:2.449489742783178}},df:1,c:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}},f:{docs:{},df:0,e:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:2}}}},i:{docs:{},df:0,t:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},s:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/privacy/":{tf:1}},df:1}}}}},h:{docs:{},df:0,p:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}},i:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1.4142135623730951}},df:1,k:{docs:{},df:0,e:{docs:{},df:0,"'":{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},1:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},l:{docs:{},df:0,a:{docs:{},df:0,c:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:2}},df:1}},n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1},y:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:2,e:{docs:{},df:0,r:{docs:{},df:0,l:{docs:{},df:
0,e:{docs:{},df:0,v:{docs:{},df:0,e:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}}}}},n:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}}}}},l:{docs:{},df:0,i:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1.4142135623730951}},df:1}}}},s:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1,e:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}}}}}}}},e:{docs:{},df:0,a:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}},i:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},p:{docs:{},df:0,g:{docs:{},df:0,s:{docs:{},df:0,q:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}},n:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}},o:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{},df:0,c:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/privacy/":{tf:1}},df:1}}}},r:{docs:{},df:0,e:{docs:{},df:0,c:{docs:{},df:0,u:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},s:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-images/":{tf:1},"https://abridge.netlify.app/posts/":{tf:1}},df:3,f:{docs:{},df:0,i:{docs:{},df:0,x:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},w:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},r:{docs:{},df:0,a:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{},df:0,i:{docs:{},df:0,c:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}}},t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},e:{docs:{},df:0,s:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,u:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772}},df:1}}}},v:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}},i:{docs:{},df:0,o:{docs:{},df:0,u:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}},i:{docs:{},df:0,m:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}}}},n:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1,"(":{docs:{},df:0,'"':{docs:{},df:0,c:{docs:{},df:0,o:{docs:{},df:0,u:{docs:{},df:0,l:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}}}}}},g:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{},df:0,p:{docs:{},df:0,l:{docs:{},df:0,a:{docs:{},df:0,y:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,f:{docs:{},df:0,o:{docs:{},df:0,r:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}},s:{docs:{},df:0,q:{docs:{},df:0,u:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{},df:0,e:{docs:{},df:0,"(":{docs:{},df:0,2:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}},f:{docs:{},df:0,"(":{docs:{},df:0,'"':{do
cs:{},df:0,h:{docs:{},df:0,e:{docs:{},df:0,l:{docs:{},df:0,l:{docs:{},df:0,o:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}},v:{docs:{},df:0,a:{docs:{},df:0,c:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/privacy/":{tf:1.4142135623730951}},df:1}}}}},o:{docs:{},df:0,c:{docs:{},df:0,e:{docs:{},df:0,d:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},f:{docs:{},df:0,i:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:2}}},g:{docs:{},df:0,r:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}},j:{docs:{},df:0,e:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}},p:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,t:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}},v:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:2}}}}},u:{docs:{},df:0,b:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{},df:0,s:{docs:{},df:0,h:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1}}}}},r:{docs:{},df:0,p:{docs:{},df:0,o:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}}}}},y:{docs:{},df:0,t:{docs:{},df:0,h:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1,3:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}},q:{docs:{},df:0,u:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,e:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},e:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1},i:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.7320508075688772}},df:1,a:{docs:{},df:0,n:{docs:{},df:0,i:{docs:{},df:0,m:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}},t:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},e:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},b:{docs:{},df:0,u:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},c:{docs:{},df:0,k:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}}},o:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.7320508075688772}},df:1}}}},r:{docs:{},df:0,a:{docs:{},df:0,n:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1},g:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},t:{docs:{},df:0,e:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},i:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},o:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}},e:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1,c:{docs:{},df:0,o:{docs:{},df:0,m:{docs:{},df:0,
m:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.7320508075688772}},df:1}}}}}}},d:{docs:{},df:0,u:{docs:{},df:0,c:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}},f:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},r:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,h:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},g:{docs:{},df:0,i:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}},u:{docs:{},df:0,l:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},l:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2.23606797749979}},df:1,i:{docs:{},df:0,a:{docs:{},df:0,b:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}},m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},n:{docs:{},df:0,d:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-math/":{tf:2.449489742783178}},df:1}}}},p:{docs:{},df:0,e:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2.8284271247461903}},df:1}}},l:{docs:{},df:0,a:{docs:{},df:0,c:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}},r:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1}}}},q:{docs:{},df:0,u:{docs:{},df:0,i:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1,e:{docs:{},df:0,_:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{},df:0,c:{docs:{},df:0,e:{docs:{},df:0,"(":{docs:{},df:0,"'":{docs:{},df:0,c:{docs:{},df:0,o:{docs:{},df:0,m:{docs:{},df:0,m:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{},df:0,".":{docs:{},df:0,p:{docs:{},df:0,h:{docs:{},df:0,p:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}}},r:{docs:{},df:0,i:{docs:{},df:0,b:{docs:{},df:0,u:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},o:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},u:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},s:{docs:{},df:0,p:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},u:{docs:{},df:0,l:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772}},df:1,s:{docs:{},df:0,c:{docs:{},df:0,l:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1,e:{docs:{},df:0,".":{docs:{},df:0,f:{docs:{},df:0,i:{docs:{},df:0,r:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{},df:0,c:{docs:{},df:0,h:{docs:{},df:0,i:{docs:{},df:0,l:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}},i:{docs:{},df:0,n:{docs:{},df:0,s:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,t:{docs:{},df:0,b:{docs:{},df:0,e:{docs:{},df:0,f:{docs:{},df:0,o:{docs:{},df:0,r:{docs:{},df:0,e:{docs:{},df:0,"(":{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}},o:{docs:{},df:0,u:{docs:{},df:0,t:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,h:{docs:{},df:0,t:{docs:{},df:0,m:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}
}}}}}}}}}}},t:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2.6457513110645907},"https://abridge.netlify.app/overview-rich-content/":{tf:2.23606797749979}},df:2}}}},v:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,u:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1}}}}},i:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,c:{docs:{},df:0,h:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2.449489742783178},"https://abridge.netlify.app/overview-rich-content/":{tf:3.4641016151377544}},df:2}}},o:{docs:{},df:0,b:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:1},o:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-images/":{tf:2.23606797749979},"https://abridge.netlify.app/overview-math/":{tf:1}},df:3}},u:{docs:{},df:0,n:{docs:{},df:0,d:{docs:{},df:0,"(":{docs:{},df:0,b:{docs:{},df:0,m:{docs:{},df:0,e:{docs:{},df:0,2:{docs:{},df:0,8:{docs:{},df:0,0:{docs:{},df:0,".":{docs:{},df:0,h:{docs:{},df:0,u:{docs:{},df:0,m:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},p:{docs:{},df:0,r:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,s:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}},t:{docs:{},df:0,e:{docs:{},df:0,m:{docs:{},df:0,p:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}},w:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}},u:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1,a:{docs:{},df:0,c:{docs:{},df:0,e:{docs:{},df:0,a:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.4142135623730951}},df:1}}}}}}}}},s:{docs:{},df:0,a:{docs:{},df:0,l:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{},df:0,a:{docs:{},df:0,n:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}},m:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2.8284271247461903},"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:2},p:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},p:{docs:{},df:0,i:{docs:{},df:0,c:{docs:{},df:0,i:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}},c:{docs:{},df:0,o:{docs:{},df:0,p:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}},r:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-abridge/":{tf:1},"https://abridge.netlify.app/overview-images/":{tf:1}},df:3}}},r:{docs:{},df:0,o:{docs:{},df:0,l:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}},e:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{},df:0,c:{docs:{},df:0,h:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}}}},c:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-st
yle/":{tf:1}},df:1}}}},o:{docs:{},df:0,n:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},r:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},t:{docs:{},df:0,i:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-math/":{tf:2.449489742783178}},df:2,_:{docs:{},df:0,n:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{},df:0,e:{docs:{},df:0,"]":{docs:{},df:0,"/":{docs:{},df:0,_:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,d:{docs:{},df:0,e:{docs:{},df:0,x:{docs:{},df:0,".":{docs:{},df:0,m:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}},e:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2,n:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}},l:{docs:{},df:0,e:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}}}},m:{docs:{},df:0,a:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-abridge/":{tf:1},"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:3}}}},n:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2,e:{docs:{},df:0,r:{docs:{},df:0,".":{docs:{},df:0,s:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,d:{docs:{},df:0,m:{docs:{},df:0,a:{docs:{},df:0,i:{docs:{},df:0,l:{docs:{},df:0,"(":{docs:{},df:0,"'":{docs:{},df:0,j:{docs:{},df:0,a:{docs:{},df:0,k:{docs:{},df:0,e:{docs:{},df:0,"@":{docs:{},df:0,e:{docs:{},df:0,x:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{},df:0,p:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{},df:0,".":{docs:{},df:0,c:{docs:{},df:0,o:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}}}}}}}}}}},t:{docs:{"https://abridge.netlify.app/privacy/":{tf:1.4142135623730951}},df:1,e:{docs:{},df:0,n:{docs:{},df:0,c:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}},o:{docs:{"https://abridge.netlify.app/about/":{tf:1}},df:1},q:{docs:{},df:0,u:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},o:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},r:{docs:{},df:0,i:{docs:{},df:0,a:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}}},v:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}},s:{docs:{},df:0,s:{docs:{},df:0,i:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}}}}},t:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:2.23606797749979},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-images/":{tf:2},"https://abridge.netlify.app/overview-rich-content/":{tf:1},"https://abridge.netlify.app/privacy/":{tf:1}},df:7,c:{docs:{},d
f:0,o:{docs:{},df:0,o:{docs:{},df:0,k:{docs:{},df:0,i:{docs:{},df:0,e:{docs:{},df:0,"(":{docs:{},df:0,"'":{docs:{},df:0,l:{docs:{},df:0,a:{docs:{},df:0,n:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}},u:{docs:{},df:0,p:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}}},v:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}},h:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.7320508075688772},"https://abridge.netlify.app/privacy/":{tf:1.4142135623730951}},df:2}}},i:{docs:{},df:0,f:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1,w:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,t:{docs:{},df:0,h:{docs:{},df:0,"=":{docs:{},df:0,4:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}},o:{docs:{},df:0,r:{docs:{},df:0,t:{docs:{},df:0,_:{docs:{},df:0,n:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}},c:{docs:{},df:0,o:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1},"https://abridge.netlify.app/overview-images/":{tf:2.23606797749979},"https://abridge.netlify.app/overview-math/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-rich-content/":{tf:2.449489742783178}},df:5}}}}},w:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:4}}},i:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,t:{docs:{},df:0,i:{docs:{},df:0,b:{docs:{},df:0,u:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},v:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},t:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,u:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},e:{docs:{"https://abridge.netlify.app/about/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-abridge/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-math/":{tf:2.23606797749979},"https://abridge.netlify.app/privacy/":{tf:1.4142135623730951}},df:5},i:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},x:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},z:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-images/":{tf:2.449489742783178},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:1.4142135623730951}},df:4}}},l:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},o:{docs:{},df:0,w:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}},m:{docs:{},df:0,a:{docs:{},df:0,l:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-
blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.7320508075688772}},df:2,e:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}},t:{docs:{},df:0,p:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1,l:{docs:{},df:0,i:{docs:{},df:0,b:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}},o:{docs:{},df:0,c:{docs:{},df:0,k:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1,".":{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,r:{docs:{},df:0,o:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}}}}}}}}}},f:{docs:{},df:0,t:{docs:{},df:0,t:{docs:{},df:0,a:{docs:{},df:0,b:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{},df:0,o:{docs:{},df:0,p:{docs:{},df:0,"=":{docs:{},df:0,4:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}},l:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/privacy/":{tf:1.4142135623730951}},df:1}},u:{docs:{},df:0,r:{docs:{},df:0,c:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:2.8284271247461903}},df:2}}}},p:{docs:{},df:0,e:{docs:{},df:0,c:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2,i:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1},f:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}}},n:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}}}},e:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}},r:{docs:{},df:0,c:{docs:{},df:0,h:{docs:{},df:0,i:{docs:{},df:0,c:{docs:{},df:0,i:{docs:{},df:0,u:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}}}}},q:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1},u:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{},df:0,e:{docs:{},df:0,"(":{docs:{},df:0,x:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},i:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,u:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}},r:{docs:{},df:0,c:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-images/":{tf:2}},df:2,"=":{docs:{},df:0,'"':{docs:{},df:0,f:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}},h:{docs:{},df:0,t:{docs:{},df:0,t:{docs:{},df:0,p:{docs:{},df:0,s:{docs:{},df:0,":":{docs:{},df:0,"/":{docs:{},df:0,"/":{docs:{},df:0,a:{docs:{},df:0,b:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,g:{docs:{},df:0,e:{docs:{},df:0,".":{docs:{},df:0,n:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{},df:0,f:{docs:{},df:0,y:{docs:{},df:0,".":{docs:{},df:0,a:{docs:{},df:0,p:{docs:{},df:0,p:{docs:{},df:0,"/":{docs:{},df:0,o:{docs:{},df:0,v:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,v:{docs:{},df:0,i:{docs:{},df:0,e:{docs:{},df:0,w:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:2.8284271
247461903}},df:2}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}},i:{docs:{},df:0,m:{docs:{},df:0,g:{docs:{},df:0,"/":{docs:{},df:0,f:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}}}}}},o:{docs:{},df:0,v:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,9:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}}}}}},s:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{},df:0,"=":{docs:{},df:0,'"':{docs:{},df:0,h:{docs:{},df:0,t:{docs:{},df:0,t:{docs:{},df:0,p:{docs:{},df:0,s:{docs:{},df:0,":":{docs:{},df:0,"/":{docs:{},df:0,"/":{docs:{},df:0,a:{docs:{},df:0,b:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,g:{docs:{},df:0,e:{docs:{},df:0,".":{docs:{},df:0,n:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{},df:0,f:{docs:{},df:0,y:{docs:{},df:0,".":{docs:{},df:0,a:{docs:{},df:0,p:{docs:{},df:0,p:{docs:{},df:0,"/":{docs:{},df:0,o:{docs:{},df:0,v:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,v:{docs:{},df:0,i:{docs:{},df:0,e:{docs:{},df:0,w:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1.7320508075688772}},df:1}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}},t:{docs:{},df:0,a:{docs:{},df:0,n:{docs:{},df:0,d:{docs:{},df:0,a:{docs:{},df:0,l:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}},r:{docs:{},df:0,t:{docs:{},df:0,_:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}},e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772}},df:1}}}},t:{docs:{},df:0,i:{docs:{},df:0,c:{docs:{"https://abridge.netlify.app/about/":{tf:1}},df:1}}},y:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},d:{docs:{},df:0,":":{docs:{},df:0,":":{docs:{},df:0,c:{docs:{},df:0,o:{docs:{},df:0,u:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}},i:{docs:{},df:0,o:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},o:{docs:{},df:0,r:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/privacy/":{tf:1}},df:1}}},r:{docs:{},df:0,"(":{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{},df:0,"(":{docs:{},df:0,h:{docs:{},df:0,u:{docs:{},df:0,m:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2.23606797749979}},df:1}}}}},t:{docs:{},df:0,e:{docs:{},df:0,m:{docs:{},df:0,p:{docs:{},df:0,f:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2.23606797749979}},df:1}}}}}}}}}},i:{docs:{},df:0,n:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:2.8284271247461903}},df:1}}}},u:{docs:{},df:0,f:{docs:{},df:0,f:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}}},y:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},u:{docs:{},df:0,b:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,l:{docs:{},df:0,i:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}},c:{docs:{},df:0,h:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},p:{docs:{"https://abridge.netlify.app/overview-markdo
wn-and-style/":{tf:1}},df:1,p:{docs:{},df:0,o:{docs:{},df:0,r:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1},"https://abridge.netlify.app/overview-math/":{tf:2.23606797749979},"https://abridge.netlify.app/overview-rich-content/":{tf:2.8284271247461903}},df:3}}}}},r:{docs:{},df:0,r:{docs:{},df:0,o:{docs:{},df:0,u:{docs:{},df:0,n:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:3}}}}}}},v:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-abridge/":{tf:1},"https://abridge.netlify.app/overview-images/":{tf:1}},df:3}},w:{docs:{},df:0,i:{docs:{},df:0,t:{docs:{},df:0,c:{docs:{},df:0,h:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}}}}},y:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{},df:0,a:{docs:{},df:0,x:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-abridge/":{tf:1},"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-math/":{tf:1}},df:5}}}}}},t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1,".":{docs:{},df:0,c:{docs:{},df:0,l:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,o:{docs:{},df:0,d:{docs:{},df:0,e:{docs:{},df:0,"(":{docs:{},df:0,"!":{docs:{},df:0,0:{docs:{},df:0,")":{docs:{},df:0,")":{docs:{},df:0,".":{docs:{},df:0,i:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}}},i:{docs:{},df:0,n:{docs:{},df:0,n:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,h:{docs:{},df:0,t:{docs:{},df:0,m:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}},a:{docs:{},df:0,b:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:2.449489742783178}},df:2},s:{docs:{},df:0,t:{docs:{},df:0,o:{docs:{},df:0,p:{docs:{},df:0,"=":{docs:{},df:0,4:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}},g:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:2.23606797749979},"https://abridge.netlify.app/overview-rich-content/":{tf:3.7416573867739413}},df:3},k:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:2}},df:1}},l:{docs:{},df:0,k:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},e:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,c:{docs:{},df:0,t:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},l:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}},m:{docs:{},df:0,p:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,e:{docs:{},df:0,r:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2.23606797749979}},df:1,e:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,9:{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}}}}}}}}}}}}},f:{docs
:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2}},df:1},l:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},r:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},x:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}}},h:{docs:{},df:0,e:{docs:{},df:0,m:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/about/":{tf:2.23606797749979},"https://abridge.netlify.app/overview-abridge/":{tf:2.449489742783178},"https://abridge.netlify.app/overview-code-blocks/":{tf:2.23606797749979},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:5,_:{docs:{},df:0,c:{docs:{},df:0,o:{docs:{},df:0,l:{docs:{},df:0,o:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}},s:{docs:{},df:0,"\\":{docs:{},df:0,a:{docs:{},df:0,b:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,g:{docs:{},df:0,e:{docs:{},df:0,"\\":{docs:{},df:0,c:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{},df:0,"\\":{docs:{},df:0,_:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,d:{docs:{},df:0,e:{docs:{},df:0,x:{docs:{},df:0,".":{docs:{},df:0,m:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}}}}}}}}}}}}}}}}},s:{docs:{},df:0,a:{docs:{},df:0,s:{docs:{},df:0,s:{docs:{},df:0,"\\":{docs:{},df:0,_:{docs:{},df:0,v:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,a:{docs:{},df:0,b:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,".":{docs:{},df:0,s:{docs:{},df:0,c:{docs:{},df:0,s:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}},i:{docs:{},df:0,r:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1},"https://abridge.netlify.app/privacy/":{tf:1.4142135623730951}},df:2}}},o:{docs:{},df:0,s:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}},r:{docs:{},df:0,e:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}},i:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},m:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2.23606797749979},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2,".":{docs:{},df:0,c:{docs:{},df:0,t:{docs:{},df:0,i:{docs:{},df:0,m:{docs:{},df:0,e:{docs:{},df:0,"(":{docs:{},df:0,l:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}},s:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{},df:0,e:{docs:{},df:0,p:{docs:{},df:0,"(":{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,v:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}},t:{docs:{},df:0,i:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1,e:{docs:{},df:0,"(":{docs:{},df:0,")":{docs:{},df:0,"+":{docs:{},df:0,e:{docs:{},df:0,m:{docs:{},df:0,a:{docs:{},df:0,i:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,v:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}
}}}},o:{docs:{},df:0,u:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}},s:{docs:{},df:0,t:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{},df:0,p:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772}},df:1}}}}}}},t:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951}},df:3,e:{docs:{},df:0,"=":{docs:{},df:0,'"':{docs:{},df:0,c:{docs:{},df:0,l:{docs:{},df:0,o:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}},r:{docs:{},df:0,e:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}},o:{docs:{},df:0,o:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}},s:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.4142135623730951}},df:1}}}}}}}},u:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{},df:0,i:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}},o:{docs:{},df:0,k:{docs:{},df:0,e:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}},m:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}},p:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}},r:{docs:{},df:0,a:{docs:{},df:0,c:{docs:{},df:0,k:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1}},df:2}}},i:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772}},df:1,g:{docs:{},df:0,g:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}}}},m:{docs:{},df:0,"(":{docs:{},df:0,s:{docs:{},df:0,a:{docs:{},df:0,f:{docs:{},df:0,e:{docs:{},df:0,p:{docs:{},df:0,o:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{},df:0,"(":{docs:{},df:0,"'":{docs:{},df:0,f:{docs:{},df:0,u:{docs:{},df:0,s:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,n:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}},u:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2.449489742783178},"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:2},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-math/":{tf:3.1622776601683795}},df:4}}},w:{docs:{},df:0,o:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1},"https://abridge.netlify.app/overview-images/":{tf:1}},df:2}},y:{docs:{},df:0,p:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:2}},df:3,"=":{docs:{},df:0,'"':{docs:{},df:0,a:{docs:{},df:0,u:{docs:{},df:0,d:{docs:{},df:0,i:{docs:{},df:0,o:{docs:{},df:0,"/":{docs:{},df:0,f:{docs:{},df:0,l:{docs:{},df:0,a:{docs:{},df:0,c:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}},m:{docs:{},df:0,p:{docs:{},df:0,3:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}},o:{docs:{},df:0,g:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/over
view-rich-content/":{tf:1}},df:1}}},w:{docs:{},df:0,a:{docs:{},df:0,v:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}}}}},b:{docs:{},df:0,u:{docs:{},df:0,t:{docs:{},df:0,t:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}},i:{docs:{},df:0,m:{docs:{},df:0,g:{docs:{},df:0,"/":{docs:{},df:0,w:{docs:{},df:0,e:{docs:{},df:0,b:{docs:{},df:0,p:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1.7320508075688772}},df:1}}}}}}}},m:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,h:{docs:{},df:0,"/":{docs:{},df:0,t:{docs:{},df:0,e:{docs:{},df:0,x:{docs:{},df:0,'"':{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,e:{docs:{},df:0,"^":{docs:{},df:0,"{":{docs:{},df:0,i:{docs:{},df:0,"\\":{docs:{},df:0,p:{docs:{},df:0,i:{docs:{},df:0,"}":{docs:{},df:0,"+":{docs:{},df:0,1:{docs:{},df:0,"=":{docs:{},df:0,0:{docs:{},df:0,"&":{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"/":{docs:{},df:0,s:{docs:{},df:0,c:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,p:{docs:{},df:0,t:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}},";":{docs:{},df:0,m:{docs:{},df:0,o:{docs:{},df:0,d:{docs:{},df:0,e:{docs:{},df:0,"=":{docs:{},df:0,d:{docs:{},df:0,i:{docs:{},df:0,s:{docs:{},df:0,p:{docs:{},df:0,l:{docs:{},df:0,a:{docs:{},df:0,y:{docs:{},df:0,'"':{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"\\":{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{},df:0,_:{docs:{},df:0,0:{docs:{},df:0,"^":{docs:{},df:0,1:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}},v:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,e:{docs:{},df:0,o:{docs:{},df:0,"/":{docs:{},df:0,m:{docs:{},df:0,p:{docs:{},df:0,4:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1.4142135623730951}},df:1}}},w:{docs:{},df:0,e:{docs:{},df:0,b:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1.4142135623730951}},df:1}}}}}}}}}}}}}}}},u:{docs:{},df:0,n:{docs:{},df:0,c:{docs:{},df:0,o:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}}},d:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-math/":{tf:1.7320508075688772}},df:3}}},i:{docs:{},df:0,q:{docs:{},df:0,u:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}},t:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{},df:0,v:{docs:{},df:0,e:{docs:{},df:0,l:{docs:{},df:0,"(":{docs:{},df:0,'"':{docs:{},df:0,p:{docs:{},df:0,l:{docs:{},df:0,a:{docs:{},df:0,y:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}},n:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{},df:0,e:{docs:{},df:0,"(":{docs:{},df:0,'"':{docs:{},df:0,p:{docs:{},df:0,l:{docs:{},df:0,a:{docs:{},df:0,y:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}},o:{docs:{},df:0,r:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},p:{docs:{},df:0,d:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":
{tf:1.4142135623730951}},df:1,e:{docs:{},df:0,_:{docs:{},df:0,t:{docs:{},df:0,i:{docs:{},df:0,m:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{},df:0,p:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1}}}}}}}}}}}}}}},r:{docs:{},df:0,l:{docs:{},df:0,"(":{docs:{},df:0,"'":{docs:{},df:0,f:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.4142135623730951}},df:1,s:{docs:{},df:0,".":{docs:{},df:0,s:{docs:{},df:0,v:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.4142135623730951}},df:1}}}}}}}}}},h:{docs:{},df:0,t:{docs:{},df:0,t:{docs:{},df:0,p:{docs:{},df:0,s:{docs:{},df:0,":":{docs:{},df:0,"/":{docs:{},df:0,"/":{docs:{},df:0,a:{docs:{},df:0,b:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,g:{docs:{},df:0,e:{docs:{},df:0,".":{docs:{},df:0,n:{docs:{},df:0,e:{docs:{},df:0,t:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{},df:0,f:{docs:{},df:0,y:{docs:{},df:0,".":{docs:{},df:0,a:{docs:{},df:0,p:{docs:{},df:0,p:{docs:{},df:0,"/":{docs:{},df:0,o:{docs:{},df:0,v:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,v:{docs:{},df:0,i:{docs:{},df:0,e:{docs:{},df:0,w:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.4142135623730951}},df:1}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}},i:{docs:{},df:0,m:{docs:{},df:0,g:{docs:{},df:0,"/":{docs:{},df:0,f:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1,s:{docs:{},df:0,".":{docs:{},df:0,s:{docs:{},df:0,v:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}}}}}}}}}}}}}}},s:{docs:{"https://abridge.netlify.app/about/":{tf:1},"https://abridge.netlify.app/overview-abridge/":{tf:2},"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-images/":{tf:2.23606797749979},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-math/":{tf:1},"https://abridge.netlify.app/overview-rich-content/":{tf:3.3166247903554},"https://abridge.netlify.app/privacy/":{tf:1}},df:8,a:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2.449489742783178},"https://abridge.netlify.app/overview-math/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-rich-content/":{tf:2}},df:3}},e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1,_:{docs:{},df:0,u:{docs:{},df:0,p:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}},r:{docs:{},df:0,"/":{docs:{},df:0,b:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{},df:0,"/":{docs:{},df:0,e:{docs:{},df:0,n:{docs:{},df:0,v:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}},t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}},v:{docs:{},df:0,a:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}},u:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2}},r:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772}},df:1,i:{docs:{},df:0,o:{docs:{},df:0,u:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},e:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{},df:0,a:{docs:{},df:0,m:{docs:{},df:0,e:{do
cs:{},df:0,n:{docs:{},df:0,i:{docs:{},df:0,m:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}}}}},n:{docs:{},df:0,i:{docs:{},df:0,t:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{},df:0,i:{docs:{},df:0,u:{docs:{},df:0,s:{docs:{},df:0,d:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}}}}},r:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:2,f:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}},i:{docs:{},df:0,d:{docs:{},df:0,e:{docs:{},df:0,o:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:2.23606797749979},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-rich-content/":{tf:3.605551275463989}},df:3,"(":{docs:{},df:0,s:{docs:{},df:0,o:{docs:{},df:0,u:{docs:{},df:0,r:{docs:{},df:0,c:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,"=":{docs:{},df:0,"[":{docs:{},df:0,'"':{docs:{},df:0,o:{docs:{},df:0,v:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,9:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{},df:0,_:{docs:{},df:0,a:{docs:{},df:0,v:{docs:{},df:0,1:{docs:{},df:0,".":{docs:{},df:0,m:{docs:{},df:0,p:{docs:{},df:0,4:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}}}}}}}}}}},e:{docs:{},df:0,w:{docs:{},df:0,b:{docs:{},df:0,o:{docs:{},df:0,x:{docs:{},df:0,"=":{docs:{},df:0,'"':{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}}}}},m:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1,e:{docs:{},df:0,o:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:2}},df:1}}}},m:{docs:{},df:0,"(":{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,"=":{docs:{},df:0,'"':{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,_:{docs:{},df:0,h:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1}},df:1}}}}}}}}}}}},o:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{},df:0,s:{docs:{},df:0,t:{docs:{},df:0,e:{docs:{},df:0,m:{docs:{},df:0,q:{docs:{},df:0,u:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}},o:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,b:{docs:{},df:0,u:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},r:{docs:{},df:0,o:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},u:{docs:{},df:0,p:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1,i:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{},df:0,s:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}}}}},w:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1.7320508075688772}},df:1,"=":{docs:{},df:0,6:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2.449489742783178}},df:1}}}},a:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1},"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1},"https://abridge.netlify.app/overview-images/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:5}},t:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{},df:0,m:{docs:{},df:0,e:{docs:{},df:0,l:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{"https://abridge
.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}}}},v:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}},e:{docs:{},df:0,b:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772}},df:1,m:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1},p:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}},i:{docs:{},df:0,d:{docs:{},df:0,t:{docs:{},df:0,h:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-images/":{tf:2.6457513110645907},"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:3,"=":{docs:{},df:0,'"':{docs:{},df:0,6:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-images/":{tf:2}},df:1}},4:{docs:{},df:0,0:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}}}},n:{docs:{},df:0,d:{docs:{},df:0,o:{docs:{},df:0,w:{docs:{},df:0,".":{docs:{},df:0,m:{docs:{},df:0,a:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772}},df:1}}}}}}}}},t:{docs:{},df:0,h:{docs:{},df:0,i:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.7320508075688772}},df:1}},o:{docs:{},df:0,u:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}}}}},o:{docs:{},df:0,r:{docs:{},df:0,l:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:2}},df:1}},m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}},r:{docs:{},df:0,a:{docs:{},df:0,p:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}},i:{docs:{},df:0,t:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}},x:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.4142135623730951}},df:1,'"':{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"&":{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"/":{docs:{},df:0,i:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"&":{docs:{},df:0,l:{docs:{},df:0,t:{docs:{},df:0,";":{docs:{},df:0,"/":{docs:{},df:0,b:{docs:{},df:0,u:{docs:{},df:0,t:{docs:{},df:0,t:{docs:{},df:0,o:{docs:{},df:0,n:{docs:{},df:0,"&":{docs:{},df:0,g:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}}}}}}}}}},"^":{docs:{},df:0,2:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1.4142135623730951}},df:1}},e:{docs:{},df:0,r:{docs:{},df:0,u:{docs:{},df:0,m:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},i:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1.4142135623730951}},df:1},m:{docs:{},df:0,l:{docs:{},df:0,n:{docs:{},df:0,s:{docs:{},df:0,"=":{docs:{},df:0,'"':{docs:{},df:0,h:{docs:{},df:0,t:{docs:{},df:0,t:{docs:{},df:0,p:{docs:{},df:0,":":{docs:{},df:0,"/":{docs:{},df:0,"/":{docs:{},df:0,w:{docs:{},df:0,w:{docs:{},df:0,w:{docs:{},df:0,".":{docs:{},df:0,w:{docs:{},df:0,3:{docs:{},df:0,".":{docs:{},df:0,o:{docs:{},df:0,r:{docs:{},df:0,g:{docs:{},df:0,"/":{docs:{},df:0,2:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{},df:0,0:{docs:{},df:0,"/":{docs:{},df:0,s:{docs:{},df:0,v:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}},n:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},"}":{docs:{},df:0,",":{docs:{},df:0,d:{docs:{},df:0,"\\":{do
cs:{},df:0,x:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}}}},"\\":{docs:{},df:0,",":{docs:{},df:0,d:{docs:{},df:0,"\\":{docs:{},df:0,x:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}}}}}}},y:{docs:{},df:0,e:{docs:{},df:0,l:{docs:{},df:0,l:{docs:{},df:0,o:{docs:{},df:0,w:{docs:{},df:0,l:{docs:{},df:0,a:{docs:{},df:0,b:{docs:{},df:0,t:{docs:{},df:0,o:{docs:{},df:0,o:{docs:{},df:0,l:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}}}}}}}}}}}},n:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},o:{docs:{},df:0,u:{docs:{},df:0,t:{docs:{},df:0,u:{docs:{},df:0,b:{docs:{"https://abridge.netlify.app/overview-embedded-youtube/":{tf:2}},df:1}}}}},t:{docs:{},df:0,"(":{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,"=":{docs:{},df:0,'"':{docs:{},df:0,t:{docs:{},df:0,h:{docs:{},df:0,e:{docs:{},df:0,_:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,_:{docs:{},df:0,h:{docs:{},df:0,e:{docs:{},df:0,r:{docs:{"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1}},df:1}}}}}}}}}}}}}}}}},z:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1},o:{docs:{},df:0,l:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/about/":{tf:1.4142135623730951},"https://abridge.netlify.app/overview-abridge/":{tf:2.23606797749979},"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1},"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1.7320508075688772},"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:7}},n:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1.7320508075688772}},df:1}}}}}},title:{root:{docs:{},df:0,a:{docs:{},df:0,b:{docs:{},df:0,r:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}}}}}},b:{docs:{},df:0,l:{docs:{},df:0,o:{docs:{},df:0,c:{docs:{},df:0,k:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}}}}},c:{docs:{},df:0,o:{docs:{},df:0,d:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:1}},n:{docs:{},df:0,t:{docs:{},df:0,a:{docs:{},df:0,c:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/contact/":{tf:1}},df:1}}},e:{docs:{},df:0,n:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}}}}},e:{docs:{},df:0,m:{docs:{},df:0,b:{docs:{},df:0,e:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1}},df:2}}}}},g:{docs:{},df:0,u:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}},i:{docs:{},df:0,m:{docs:{},df:0,a:{docs:{},df:0,g:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}},m:{docs:{},df:0,a:{docs:{},df:0,r:{docs:{},df:0,k:{docs:{},df:0,d:{docs:{},df:0,o:{docs:{},df:0,w:{docs:{},df:0,n:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}}},t:{docs:{},df:0,h:{docs:{},df:0,e:{docs:{},df:0,m:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}}}}}}},n:{docs:{},df:0,o:{docs:{},df:0,t:{docs:{},df:0,a:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/overview-math/":{tf:1}},df:1}}}}},p:{docs:{},df:0,o:{docs:{},df:0,l:{docs:{},df:0,i:{docs:{},df:0,c:
{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/privacy/":{tf:1}},df:1}}}},s:{docs:{},df:0,t:{docs:{"https://abridge.netlify.app/posts/":{tf:1}},df:1}}},r:{docs:{},df:0,i:{docs:{},df:0,v:{docs:{},df:0,a:{docs:{},df:0,c:{docs:{},df:0,i:{docs:{"https://abridge.netlify.app/privacy/":{tf:1}},df:1}}}}}}},r:{docs:{},df:0,i:{docs:{},df:0,c:{docs:{},df:0,h:{docs:{"https://abridge.netlify.app/overview-rich-content/":{tf:1}},df:1}}}},s:{docs:{},df:0,h:{docs:{},df:0,o:{docs:{},df:0,r:{docs:{},df:0,t:{docs:{},df:0,c:{docs:{},df:0,o:{docs:{},df:0,d:{docs:{"https://abridge.netlify.app/overview-images/":{tf:1}},df:1}}}}}}},t:{docs:{},df:0,y:{docs:{},df:0,l:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-markdown-and-style/":{tf:1}},df:1}}}}},t:{docs:{},df:0,h:{docs:{},df:0,e:{docs:{},df:0,m:{docs:{},df:0,e:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1},"https://abridge.netlify.app/overview-code-blocks/":{tf:1}},df:2}}}}},v:{docs:{},df:0,i:{docs:{},df:0,d:{docs:{},df:0,e:{docs:{},df:0,o:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1},"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1}},df:2}}},m:{docs:{},df:0,e:{docs:{},df:0,o:{docs:{"https://abridge.netlify.app/overview-embedded-vimeo/":{tf:1}},df:1}}}}},y:{docs:{},df:0,o:{docs:{},df:0,u:{docs:{},df:0,t:{docs:{},df:0,u:{docs:{},df:0,b:{docs:{"https://abridge.netlify.app/overview-embedded-youtube/":{tf:1}},df:1}}}}}},z:{docs:{},df:0,o:{docs:{},df:0,l:{docs:{},df:0,a:{docs:{"https://abridge.netlify.app/overview-abridge/":{tf:1}},df:1}}}}}}},documentStore:{save:!0,docs:{"https://abridge.netlify.app/":{body:"",id:"https://abridge.netlify.app/",title:""},"https://abridge.netlify.app/about/":{body:"This site provides a demo for the abridge theme for Zola the static site generator.\nAbridge was created by Jake G (jieiku) to be fast and lightweight, using semantic html, a class-light abridge.css, and No Mandatory JS.\nSome fun facts about the theme:\n\nPerfect score on Google's Lighthouse audit\nOnly ~6 KB of CSS before enabling the SVG CSS icons and syntax highlighting.\nNo mandatory JavaScript.\nNow with SEO!\n\nNearly half of the existing Zola MIT themes were inspiration for features and design of this theme.\nBoth the theme and this site are licensed under the MIT license.\n",id:"https://abridge.netlify.app/about/",title:"About"},"https://abridge.netlify.app/contact/":{body:"",id:"https://abridge.netlify.app/contact/",title:"Contact"},"https://abridge.netlify.app/overview-abridge/":{body:'Abridge is a fast and lightweight Zola theme using semantic html, only ~6kb css before svg icons and syntax highlighting, no mandatory JS, and perfect Lighthouse, YellowLabTools, and Observatory scores.\n\nFor quick setup copy the config.toml from the abridge theme into the root of your zola site, this will give you a base configuration with all config values used.\nYou can then edit or comment out the values in this file as necessary.\nYou should also uncomment out the line #theme = "abridge" in your root zola config.toml file. 
This tells your root zola site to use the abridge theme in the themes folder.\nYou can set the number of items that appear on the home page by editing themes\\abridge\\content\\_index.md file and adjusting paginate_by = 5\nYou can set the overal page width by editing themes\\abridge\\sass\\_variables.scss file, and adjusting these two lines:\n$mw:50% !default;// max-width\n$mb:1200px !default;// value at which to switch from fluid layout to using max-width\n\n',id:"https://abridge.netlify.app/overview-abridge/",title:"Abridge Zola Theme"},"https://abridge.netlify.app/overview-code-blocks/":{body:'This article shows various Code Blocks allowing to easily compare sublime themes.\nCode Blocks\nCode blocks.. ❤️ with automatic syntax highlighting ✨\nSee the docs for options.\nInline Code block\nIf we want, we can also specify inline code which is useful for the small stuff.\nrust\n1//! jelly-actix-web-starter - A starter template for actix-web projects that feels very Django-esque. Avoid the boring stuff and move faster.\n2\n3use jelly::actix_web;\n4use mainlib;\n5use std::io;\n6\n7#[actix_web::main]\n8async fn main() -> io::Result<()> {\n9 mainlib::main().await\n10}\n11\n12let context = Context::new();\n\nTOML\n1base_url = "https://abridge.netlify.app/"\n2title = "Abridge"\n3description = "Abridge is a fast and lightweight Zola theme using semantic html, abridge.css class-light CSS, and No Mandatory JS."\n4default_language = "en"\n5#theme = "abridge"\n6\n7build_search_index = true\n8minify_html = false\n9feed_filename = "atom.xml"\n10taxonomies = [\n11 {name = "categories", feed = true},\n12 {name = "tags", feed = true},\n13]\n\nhtml\n<!doctype html>\n<html lang="en">\n<head>\n <meta charset="utf-8">\n <title>Example HTML5 Document</title>\n</head>\n<body>\n <!--Main Content Area-->\n <p>Test</p>\n</body>\n</html>\n\njavascript\nfunction closeSearch() {//close the search displaying the regular page.\n const e = document.querySelector("main");\n e.innerHTML = window.main\n}\n\nfunction goSearch() {// on enter key or search icon click display results to the page.\n const e = document.querySelector("main");\n window.main || (window.main = e.innerHTML);\n var t = document.getElementById("suggestions"),\n n = ((ResultsClone = t.cloneNode(!0)).id = "results", document.createElement("div")),\n o = \'<h2><button type="button" title="Close Search" onclick="closeSearch()"><i class="svgs x"></i></button> Results For: \'.concat(document.getElementById("searchinput").value, "</h2>");\n return n.innerHTML = o, ResultsClone.insertBefore(n, ResultsClone.firstChild), e.innerHTML = ResultsClone.outerHTML, t.innerHTML = "", document.getElementById("searchinput").value = "", !1\n}! function() {\n // search function code goes here\n}\n\nphp\n<?php\n/**\n * Postfix Admin\n */\nrequire_once(\'common.php\');\n$CONF = Config::getInstance()->getAll();\n\nif ($_SERVER[\'REQUEST_METHOD\'] == "POST") {\n if (!isset($_SESSION[\'PFA_token\'])) {\n die("Invalid token (session timeout; refresh the page and try again?)");\n }\n $fUsername = trim(safepost(\'fUsername\'));\n if ($lang != check_language(false)) { # only set cookie if language selection was changed\n setcookie(\'lang\', $lang, time() + 60*60*24*30); # language cookie, lifetime 30 days\n }\n}\n\n$_SESSION[\'PFA_token\'] = md5(uniqid("pfa" . 
rand(), true));\n\n/* vim: set expandtab softtabstop=4 tabstop=4 shiftwidth=4: */\n\njson\n{\n "name": "Abridge Zola Theme",\n "short_name": "Abridge",\n "description": "Fast & Lightweight Zola Theme",\n "start_url": "/index.html",\n "scope": "/",\n "background_color": "#111111",\n "theme_color": "#222222",\n "display": "standalone",\n "icons": [\n {\n "src": "/android-chrome-192x192.png",\n "sizes": "192x192",\n "type": "image/png"\n },\n {\n "src": "/android-chrome-512x512.png",\n "sizes": "512x512",\n "type": "image/png"\n },\n {\n "src": "/android-chrome-192x192m.png",\n "sizes": "192x192",\n "type": "image/png",\n "purpose": "maskable"\n }\n ]\n}\n\nSQL\n-- jelly-actix-web-starter - Creates an accounts table, along with some associated helpers.\n\ncreate or replace function update_timestamp() returns trigger as $$\nbegin\n new.updated = now();\n return new;\nend;\n$$ language \'plpgsql\';\n\ncreate table if not exists accounts (\n id serial primary key,\n name text not null,\n email text not null unique,\n password text not null,\n profile jsonb not null default \'{}\',\n plan integer not null default 0,\n is_active boolean not null default true,\n is_admin boolean not null default false,\n has_verified_email boolean not null default false,\n last_login timestamp with time zone,\n created timestamp with time zone not null default now(),\n updated timestamp with time zone not null default now()\n);\n\ncreate unique index accounts_unique_lower_email_idx on accounts (lower(email));\n\ncreate trigger user_updated before insert or update on accounts\nfor each row execute procedure update_timestamp();\n\nLua\nfunction square(x)\n return x * x\nend\n\nprint(square(2)) -- prints \'4\'\n\nfunction getPlayerInformation()\n playerName = UnitName("player")\n playerLevel = UnitLevel("player")\n specId, specName = GetSpecializationInfo(GetSpecialization())\n\n return "Hey, I\'m " .. playerName .. " (Level " .. playerLevel .. "). I\'m currently in spec " .. specName .. 
"."\nend\n\nprint(getPlayerInformation())\n\nC\n#include <stdio.h>\nint main() {\n int a;\n /* actual initialization */\n a = 10;\n printf("Hello, World!");\n return 0;\n}\n\nC++\n// Your First C++ Program\n\n#include <iostream>\n\nint main() {\n int a;\n /* actual initialization */\n a = 10;\n std::cout << "Hello World!";\n return 0;\n}\n\nGo\npackage main\n\nimport "fmt"\n\nfunc main() {\n var myvariable1 = 20\n var myvariable2 = "hello world"\n fmt.Println("hello world")\n}\n\nPython\n#!/usr/bin/env python3\nimport smtplib, socket\nfrom influxdb import InfluxDBClient\n\nwhile True:\n send = 1\n later = time.time() + 25200\n iso = time.ctime(later)\n tempF = round(bme280.temperature * 1.8 + 29, 3) #C to F formula is +32, difference is to correct bme280 temperature offset\n humidity = round(bme280.humidity, 3)\n pressure = round(bme280.pressure, 3)\n # serialize data as JSON\n data = [\n {\n "measurement": measurement,\n "tags": {\n "location": location,\n },\n "time": iso,\n "fields": {\n "temperature" : tempF,\n "humidity": humidity,\n "pressure": pressure\n }\n }\n ]\n # Send the JSON data to InfluxDB\n try:\n client.write_points(data)\n except socket.error as e:\n print("Could Not Connect to InfluxDB!")\n if tempF > 90 and humidity > 55:\n emailSubject = "Humidity>55: " + str(int(humidity)) + "%H , Temperature>90: " + str(int(tempF)) + "F"\n emailContent = \'Humidity: \' + str(int(humidity)) + \'%H , Temperature: \' + str(int(tempF)) + \'F <a href="https://metrics.example.com">Dashboard</a>\'\n elif humidity > 55:\n emailSubject = "Humidity>55: " + str(int(humidity)) + "%H"\n emailContent = \'Humidity: \' + str(int(humidity)) + \'%H , Temperature: \' + str(int(tempF)) + \'F <a href="https://metrics.example.com">Dashboard</a>\'\n elif tempF > 90:\n emailSubject = "Temperature>90: " + str(int(tempF)) + "F"\n emailContent = \'Humidity: \' + str(int(humidity)) + \'%H , Temperature: \' + str(int(tempF)) + \'F <a href="https://metrics.example.com">Dashboard</a>\'\n else:\n send = 0\n if send == 1:\n try:\n if time.time() > lastEmailTime or abs(lastTemp-int(tempF)) > 1:\n lastEmailTime = time.time()+emailInterval\n lastTemp = int(tempF)\n sender.sendmail(\'jake@example.com\', emailSubject, emailContent)\n except socket.error as e:\n print("Could Not Connect to SMTP server!")\n time.sleep(interval)\n\n',id:"https://abridge.netlify.app/overview-code-blocks/",title:"Code Blocks and Themes"},"https://abridge.netlify.app/overview-embedded-vimeo/":{body:'Zola has many shortcodes, and new are easily added, this example shows vimeo.\nVimeo\nwith vm(id="id_here")\n\nid: the video id (mandatory)\nclass: a class to add to the <div> surrounding the iframe (optional)\nautoplay: when set to "true", the video autoplays on load (optional)\nloop: when set to "true", the video plays on a loop (optional)\nnoautopause: when set to "true", the video will not autopause (optional)\ntitle - set alt title for the iframe (optional, defaults to "Vimeo")\ncookie - set to "true" if you want tracking cookies, otherwise it defaults to false.\n\n\n\t\n\n',id:"https://abridge.netlify.app/overview-embedded-vimeo/",title:"Embedded Vimeo Videos"},"https://abridge.netlify.app/overview-embedded-youtube/":{body:'Zola has many shortcodes, and new are easily added, this example shows youtube.\nYoutube\nwith yt(id="the_id_here")\n\nid: the video id (mandatory)\nplaylist: the playlist id (optional)\nclass: a class to add to the <div> surrounding the iframe (optional)\nautoplay: when set to "true", the video autoplays on load 
This section delves deeper than the essential documentation. The goal of this section is to give
+insight into the design thinking and approach for some of the more esoteric-but-essential gears
+and cogs of NoSQLBench.
+
You might want to read this section if you want to learn as much as possible about NoSQLBench.
+There will still be some things not explained here, but if there are particular topics of
+interest, please file a request.
+
👉 If you find anything in this section which you believe should be in the user guide proper,
+please let us know.
An argsfile (Command Line Arguments File) is a simple text file which contains defaults for
+command-line arguments. You can use an args file to contain a set of global defaults that you
+want to use by default and automatically.
+
An option, --argsfile <path>, is used to specify an args file. You can use it like an instant
+import statement in the middle of a command line. There are three variants of this option.
+The --argsfile <path> variant will warn you if the file doesn't exist. If you want to load an
+argsfile if it exists, but avoid warnings if it doesn't, then use the --argsfile-optional <path>
+form. If you want to throw an error if the argsfile doesn't exist, then use
+the --argsfile-required <path> form.
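+
As a quick sketch, with a purely illustrative path, the three forms look like this when placed
+ahead of the rest of a command line:
+
./nb --argsfile ./my-args ...
+./nb --argsfile-optional ./my-args ...
+./nb --argsfile-required ./my-args ...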
+
Default argsfile
+
The default args file location is $NBSTATEDIR/argsfile. After the NBSTATEDIR environment variable or default is resolved, the default argsfile will be
+searched for in that directory.
+
$NBSTATEDIR is a mechanism for setting and finding the local state directory for NoSQLBench. It is
+a search path, delimited by colons, and allowing Java system properties and shell environment
+variables. When the NBSTATEDIR location is first needed, the paths are checked in order, and the
+first one found is used. If one is not found on the filesystem, the first expanded value is used to
+create the state directory.
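+
As a sketch only (the paths shown are illustrative, not shipped defaults), a colon-delimited
+search path could be provided through the shell environment before invoking nb:
+
export NBSTATEDIR="$HOME/.nosqlbench:/etc/nosqlbench"
+./nb ...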
+
If the default argsfile is present, it is loaded by nosqlbench when it starts even if you don't
+ask it to. That is, nosqlbench behaves as if your first set of command line arguments is
+
--argsfile-optional "$NBSTATEDIR/argsfile"
+
+
Just as with the NBSTATEDIR location, the argsfile can also be used like a search path. That is, if
+the value provided is a colon-delimited set of possible paths, the first one found (after variable
+expansion) will be used. If needed, the first expanded path will be used to create an argsfile when
+pinning options are used.
+
Args file format
+
An args file simply contains an argument on each line, like this:
+
--docker-metrics
+--annotate all
+--grafana-baseurl http://localhost:3000/
+
+
Pinning
+
Pin an option
+
It is possible to pin an option to the default args file by use of the --pin meta-option. This
+option will take the following command line argument and add it to the currently active args file.
+That means, if you use --pin --docker-metrics, then --docker-metrics is added to the args file.
+If there is an exact duplicate of the same option and value, then it is skipped, but if the option
+name is the same with a different value, then it is added at the end. This allows for options which
+may be called multiple times normally.
+
If the --pin option occurs after an explicit use of --argsfile <filename>, then the filename
+used in this argument is the one that is modified.
+
After the --pin option, the following argument is taken as a global option (one of the
+--with-double-dashes forms), along with any non-option values after it which are not commands
+(reserved words).
+
When the --pin option is used, it does not cause the pinned option to be excluded from the current
+command line call. The effects of the pinned option will take place in the current nosqlbench
+invocation just as they would without the --pin. However, when pinning global options when there
+are no commands on the command line, nosqlbench will not run a scenario, so this form is suitable
+for setting arguments.
+
Unpin an option
+
To reverse the effect of pinning an option, you simply use --unpin ....
+
The behavior of --unpin is slightly different than --pin. Specifically, an option which is unpinned
+will be removed from the arg list, and will not be used in the current invocation of nosqlbench
+after removal.
+
Further, you can specify --unpin --grafana-baseurl to unpin an option which normally has an
+argument, and all instances of that argument will be removed. If you want to unpin a specific
+instance of a multi-valued option, or one that can be specified more than once with different
+parameter values, then you must provide the value as well, as
+in --unpin --log-histograms 'histodata.log:.*:1m'
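+
Assuming the options above were previously pinned, unpinning them might look like this:
+
./nb --unpin --docker-metrics
+./nb --unpin --grafana-baseurl
+./nb --unpin --log-histograms 'histodata.log:.*:1m'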
+
Example
+
To simply set global defaults, you can run nosqlbench with a command line like this:
+
./nb --pin --docker-metrics-at metricsnode --pin --annotate all
+
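Assuming a fresh default args file, running the command above would leave it containing entries
+along these lines (exact formatting may differ by version):
+
--docker-metrics-at metricsnode
+--annotate all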
+
Compatibility
+
You should use the --pin and --unpin options to make any changes to the argsfile when integrating or
+automating workflows. This ensures that any changes to file format are handled for you by nb.
To configure a NoSQLBench Scenario to do something useful, you have to provide parameters to it.
+This can occur in one of several ways. This section is a guide on all the ways you can
+configure an nb5 scenario.
+
👉 Most users will not need to understand all the ways you can parameterize nb5. If you are doing
+lots of customization or scenario design, then the details of this section may be useful.
+Otherwise, the examples are a better starting point, particularly the built-in scenarios.
+
Global Options
+
The command line is used to configure both the overall runtime (logging, etc.) and
+individual scenarios or activities. Global options can be distinguished from scenario commands and
+their parameters because global options always start with a single (-) or double (--) hyphen.
+
You can co-mingle global options with scenario or activity params. They will be parsed out
+without disturbing the sequence of those other options.
+
Script Params
+
params object
+
When you run a scenario script directly, like nb5 script myscript.js, you can provide
+params to it, just as you can with an activity. These params are provided to the scripting
+environment in a service object called params.
+
template variables
+
Further, template variables are expanded in this script before evaluation just as with workload
+templates. So, you can use TEMPLATE(varname,value) or <<varname:value>> to create textual
+parameters which will be recognized on the command line.
+
Scenario Commands
+
Any command line argument which is not a global option (starting with a - or --) is a
+scenario command. These are all described in CLI Scripting.
+
Most of the time, when you are running scenario commands, they are being used to start, modify,
+or stop an activity. Scenario commands are all about managing activities. So, in practice,
+most scenario commands are activity commands.
+
Activity Params
+
When you run an activity in a scenario with run or start, every named parameter after the
+command is an activity param. Core Activity Params
+allow you to initialize your workload. There are a few ways that these parameters work together.
+
+
A default driver may be specified. This is the most common way to use nb5.
+
A workload or op may be specified. Either will provide a (possibly empty) list of op templates.
+
Each op template which is found will be interpreted with the selected driver, even if this is
+a locally assigned driver in the op template itself.
+
Each active driver, according to the active op templates, will enable new activity params
+above and beyond the core activity params. For example, by assigning the driver stdout to an
+activity or an op template directly, the param filename becomes available. This activity
+param will apply only to those activity instances for which it is valid. The other drivers
+will not see the parameter. In this way, a single set of activity params can be used to
+configure multiple drivers.
+
In special cases, when there are no op templates, and this wasn't because of tag filtering, a
+driver may synthesize ops according to its documented behavior. The stdout driver does
+this. These drivers are given a view of the raw workload template from which to build
+fully-qualified op templates.
+
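As a minimal sketch of how these rules combine (the workload name and parameter values are
+placeholders), assigning the stdout driver as the default makes its filename param available
+alongside the core activity params:
+
./nb5 run driver=stdout workload=my-workload cycles=10 filename=ops.txt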
+
👉 Depending on the driver param, additional activity params may become available.
+
👉 The driver param is actually assigned per op template.
+
Named Scenario Params
+
It is common to use the Named Scenarios
+feature to
+bundle up multiple activity workflows into a single command. When you do this, it is
+possible to apply params on the command line to the named scenario. In effect, this means you are
+applying a single set of parameters to possibly multiple activities, so there is a one-to-many
+relationship.
+
For this reason, the named scenarios have a form of parameter locking that allows you to drop
+your param overrides from the command line into exactly the places which are intended to be
+changeable. Double equals is a soft (quiet) lock, and triple equals is a hard (verbose) lock.
+Any parameter provided after a workflow will replace the same named parameters in the named
+scenario steps which are not locked.
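+
As a sketch (the scenario name, step name, and values are illustrative), a named scenario step
+might lock some params while leaving others open to override from the command line:
+
scenarios:
+  default:
+    main: run driver===stdout threads==1 cycles=100
+
./nb5 my-workload cycles=1000
+
In this hypothetical case, cycles is replaced by the command-line value, threads quietly keeps
+its soft-locked value, and an attempt to override the hard-locked driver would be reported
+verbosely.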
+
Op Template Fields
+
Activities, all activities, are based on a set of op templates. These are the
+YAML, json, jsonnet, or direct
+data structures which follow the workload definition
+standard. These schematic values are provided to the selected driver to be mapped to native
+operations.
+
Generally, you define all the schematic values (templates, bindings, other config properties)
+for an op template directly within the workload template. However, there are cases where it
+makes sense to also allow those op fields to be initialized with the activity params.
+
This is easy to enable for driver developers. All that is required is that the op field has a compatible
+config model entry that matches the name and type. This also allows the driver adapter to
+describe the parameter and indicate whether it is required or has defaults.
+
A good example of this is the consistency_level activity param for the cqld4 driver. If the
+user sets this value when the cqld4 driver is active, then all op templates will take their
+default from this.
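+
For instance, a sketch of that usage (the workload name and value shown are illustrative, not a
+recommendation):
+
./nb5 run driver=cqld4 workload=my-workload consistency_level=LOCAL_QUORUM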
+
👉 Depending on the driver param, op templates will be interpreted a certain way.
+
Standard Activity Params
+
Some parameters that can be specified for an activity are standardized in the NoSQLBench design.
+These include parameters like driver, alias, and threads. Find more info on standard
+params at Standard Activity Params.
+
Dynamic Activity Params
+
Some driver params are instrumented in the runtime to be dynamically changed during a scenario's
+execution. This means that a scenario script can assign a value to an activity parameter after
+the activity is started. Further, these assignments are treated like events which force the
+activity to observe any changes and modify its behavior in real time.
+
This is accomplished with a two-layer configuration model. The initial configuration for an
+activity is gated by a type model that knows what parameters are required, what the default
+values are, and so on. This configuration model is an aggregate of all the active drivers in
+an activity.
+
A second configuration model, called the reconfiguration model, can expose a set of the
+original config params for modification. In this way, driver developers can allow for a variety
+of dynamic behaviors which allow for advanced analysis of workloads without restarts. These
+parameters are not meant for casual use. They are generally used in advanced scripting scenarios.
+
Parameters that are dynamic should be documented as such where they are described.
+
Template Variables
+
If you need to provide general-purpose overrides to values in a workload template, then you
+may use a mechanism called template variables. These are just like activity parameters, but they
+are set via macro and can have defaults. This allows you to easily template workload properties
+in a way that is easy to override on the command line or via scripting.
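+
As a minimal sketch (the variable name and values are illustrative), either of the two equivalent
+forms below, placed anywhere in a workload template, declares a variable named keyspace with a
+default of baselines, which can then be overridden like any other param on the command line:
+
TEMPLATE(keyspace,baselines)
+<<keyspace:baselines>>
+
./nb5 run driver=cqld4 workload=my-workload keyspace=test
+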
diff --git a/user-guide/advanced-topics/performance-factoring/1param_2values.png b/user-guide/advanced-topics/performance-factoring/1param_2values.png
new file mode 100644
index 000000000..356d2ef77
Binary files /dev/null and b/user-guide/advanced-topics/performance-factoring/1param_2values.png differ
diff --git a/user-guide/advanced-topics/performance-factoring/1param_3values.png b/user-guide/advanced-topics/performance-factoring/1param_3values.png
new file mode 100644
index 000000000..6f3f37c47
Binary files /dev/null and b/user-guide/advanced-topics/performance-factoring/1param_3values.png differ
diff --git a/user-guide/advanced-topics/performance-factoring/2params_2values.png b/user-guide/advanced-topics/performance-factoring/2params_2values.png
new file mode 100644
index 000000000..10b4b3de4
Binary files /dev/null and b/user-guide/advanced-topics/performance-factoring/2params_2values.png differ
diff --git a/user-guide/advanced-topics/performance-factoring/3params_2values.png b/user-guide/advanced-topics/performance-factoring/3params_2values.png
new file mode 100644
index 000000000..8ad417e8c
Binary files /dev/null and b/user-guide/advanced-topics/performance-factoring/3params_2values.png differ
diff --git a/user-guide/advanced-topics/performance-factoring/3params_3values.png b/user-guide/advanced-topics/performance-factoring/3params_3values.png
new file mode 100644
index 000000000..0169902c2
Binary files /dev/null and b/user-guide/advanced-topics/performance-factoring/3params_3values.png differ
diff --git a/user-guide/advanced-topics/performance-factoring/3params_3values_contrast.png b/user-guide/advanced-topics/performance-factoring/3params_3values_contrast.png
new file mode 100644
index 000000000..2b727f1f8
Binary files /dev/null and b/user-guide/advanced-topics/performance-factoring/3params_3values_contrast.png differ
diff --git a/user-guide/advanced-topics/performance-factoring/4params_2values_tesseract1.png b/user-guide/advanced-topics/performance-factoring/4params_2values_tesseract1.png
new file mode 100644
index 000000000..5f16fae16
Binary files /dev/null and b/user-guide/advanced-topics/performance-factoring/4params_2values_tesseract1.png differ
diff --git a/user-guide/advanced-topics/performance-factoring/4params_2values_tesseract2.png b/user-guide/advanced-topics/performance-factoring/4params_2values_tesseract2.png
new file mode 100644
index 000000000..8239f9388
Binary files /dev/null and b/user-guide/advanced-topics/performance-factoring/4params_2values_tesseract2.png differ
diff --git a/user-guide/advanced-topics/performance-factoring/concepts/index.html b/user-guide/advanced-topics/performance-factoring/concepts/index.html
new file mode 100644
index 000000000..017090c52
--- /dev/null
+++ b/user-guide/advanced-topics/performance-factoring/concepts/index.html
@@ -0,0 +1,242 @@
+
Concepts
+
When a system has no headroom, it is unable to be responsive during hardware or infrastructure
+outages, load spikes, changes in system topology, capacity scaling activities, or other
+administrative activities such as external backups.
+
User-facing systems are more often operational systems, which require a degree of built-in
+headroom in order to absorb load spikes or events described above. These systems are designed
+and tuned more for operational responsiveness and resiliency under load and workload changes.
+They are also combined with other less-operational designs as a fronting layer to absorb and
+manage around limitations in classic system designs.
+
Measuring the capacity of an operational system without putting its operational headroom and
+responsiveness in focus is usually meaningless. We often measure the capacity of such systems as
+a reference point within which other, more realistic measurements may be taken.
+
Saturating Throughput
+
The throughput at which a system is fully utilizing its most critical resource and can go no
+faster. This is a measure of how much work a system is capable of. When a system is saturated,
+you will see the highest response times. This is typically measured in ops per second, sometimes
+abbreviated as "ops/s", "Kops/s", or simply "Kops".
+
Operational Headroom
+
At saturating throughput, a system has no operational headroom. Operational headroom is the
+fraction of system capacity (throughput) which remains in reserve to handle load spikes,
+topology changes, hardware failures, external backup loads, and so on.
+
Nominal Throughput
+
A production-like workload in an operational system is one which takes into account the
+operational headroom that it would normally be run with in production. This is absolutely
+essential in how findings are presented so that users and operators make safe choices about
+system sizing, tuning, and so on. Case in point: If you ran a cluster at 100% of its saturating
+capacity, it would not be able to absorb a node outage without affecting the load. The load
+(user-facing workload, ostensibly) would be compromised in some way, from a lowered aggregate
+capacity, which would also likely result in significantly higher response times. Systems which
+are affected by over-saturation in this way are often much more severely affected than new
+operators would expect, due to the combination of service time and capacity effects stacking in
+an unhealthy way. This is easy to demonstrate in practice, and made more lucid through the lens
+of Little's law.
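+
For reference, Little's law states that L = λ × W, where L is the average number of requests in
+flight, λ is throughput, and W is the average response time. With client concurrency roughly
+fixed, any loss of effective capacity (a smaller λ) must show up as a larger W, which is why an
+over-saturated system degrades in throughput and response time together.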
+
Thus, a key insight about operational systems is that they need to be deployed and tested
+with operational headroom as explained above. A typical approach for testing a distributed
+system with built-in fault management capabilities would be to characterize its saturation
+throughput and then factor in headroom depending on the topology of the system. For a 5 node
+cluster, for example, a nominal target of roughly 70% of saturating throughput leaves room to
+absorb the loss of a single node (about 20% of capacity) along with the recovery work that follows.
+
End-to-End Testing
+
To accurately gauge how a system performs (operationally) or behaves (functionally), you can
+either test it piece-wise -- subsystem by subsystem, or you can test it fully assembled in an
+integrated fashion. An end-to-end test takes this further to include the composed system from
+the end-user, access pathways and infrastructure, and all internal services or endpoints that
+the system under test may depend on. End-to-end testing comes with a few key trade-offs:
+
+
End-to-end testing provides the most coverage of any type of test. As such, it is often one
+of the first ways an operational system is tested in order to determine whether the system as
+a whole is operating and functioning correctly.
+
End-to-end testing does not provide a destructured view of the system. It doesn't focus the
+user's attention on the element which may be causing a test failure. Further analysis is required to
+figure out why errors occur.
+
+
In the very best system designs, failure modes manifest with specific reasons
+for why an error occurred, and specific details about where in the system it occurred. This
+does not carry over to performance testing very well.
+
Further, many systems intentionally obscure internal details for reasons of security or
+decoupling user experience from system details. This is common with cloud services for example.
+
+
+
End-to-end testing is a form of integration testing, where potentially large deployments or
+configurations are required in order to facilitate testing activities. This leaves a larger
+surface area for misconfiguration which is incongruent with production or customer-specific details.
+Where these deployment manifests may diverge from actual customer systems, results may be
+invalid, in the form of false-positives or false-negatives.
+
Testing end-to-end on real systems is feasible with the right supporting test apparatus and
+path addressing methods, such as multi-tenant features of a system. However,
+doing this reliably and accurately requires the provisioning and auditing logic for testing
+systems to be one and the same as what customers or operators use in order to ensure that
+there is no difference between what is tested and what is intended to be tested.
+
Physical topology and logical pathing are not always visible or explicitly addressable from
+the testing apparatus, meaning that testing a whole system across all flows may be impossible
+without special routing or flagging logic in testing operations. If these are not
+in-built mechanisms in the user-facing functionality, then adding them creates a side-path which
+breaks system coverage guarantees. This makes some end-to-end test scenarios probabilistic,
+in that physical traversal paths may only be hit a fraction of the time.
+
+
Scaling Math
+
In distributed systems testing, a common misconception is that you have to test your production
+loads at the same level as you would run them in production. This flies in the face of the
+design principles of linearly scalable systems. What is more important is that the effects of
+proportionality and congruence hold, in a durable way, so that observations on smaller systems
+can be used to make reliable predictions of larger systems. The methods used to do this are
+often described as scaling math in the NoSQLBench user community.
+
For example, suppose you have a workload that supports a line of business, with a known peak capacity
+requirement. With a provable presumption of linear scalability over nodes, you can do a test with 5
+nodes which will provide the baseline measurements needed to project node requirements for any
+given throughput. This example doesn't include the latency, but it can be extended to that once the
+throughput requirements are understood.
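+
As a purely hypothetical worked example of the arithmetic: if a 5-node test cluster saturates at
+50,000 ops/s (10,000 ops/s per node) and the headroom policy targets 70% of saturation, the
+nominal per-node capacity is 7,000 ops/s; a projected production load of 210,000 ops/s would then
+call for roughly 210,000 / 7,000 = 30 nodes, assuming the linear-scaling presumption holds and
+before any latency requirements are factored in.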
+
In fact, the principles and specific mechanisms of scale should be the key focus of any such
+studies. Once these are established as trustworthy, further predictions or analysis become much
+easier.
+
Detailed examples with latency factors included will be given elsewhere.
If you are using NoSQLBench to perform advanced testing or performance analysis, then this
+section is for you. It is meant to be a reference point and idea percolator for advanced testing methods,
+particularly those which can be streamlined or automated with NoSQLBench.
+
The key objective of performance factoring is to identify and measure factors in system
+performance using repeatable and reliable methods. These methods are often implied in
+common practice, but there are few places where they are explained in a useful way for new
+distributed systems engineers. They may involve numerical or analytical methods, but a cohesive
+view includes the basic workflow, orchestration and staging elements which make results valid.
+
One of the key strategic elements of NoSQLBench is the ability to codify those workflows and methods
+which have historically been unspecified and thus unreliable from one effort to the next.
+Methods and workflows which are useful will be demonstrated, with the express purpose of
+validating them with our user community before building them into NoSQLBench capabilities.
+
Consider this section as an on-ramp, an incremental sandbox, and an attempt to close the gap between
+advanced methods and automation for performance factoring.
There are a few workflows which we see routinely in performance testing. These workflows capture,
+or at least describe, the steps in a complete analysis method, and are the basis for the extant
+analysis-method scripts in NoSQLBench. This section describes some of these workflows, the steps
+involved, and the purposes of each step.
+
On the surface, many of these techniques may appear to be pretty basic. Descriptions like "find
+the highest throughput" are an egregious over-simplification of what is required to do this type
+of work accurately and precisely. This is important to understand for everyone who depends on the
+results of performance testing. Over-simplifying or rushing past these details can effectively
+invalidate the results to the point of being less useful than guesswork, particularly if the
+stakeholders presume a degree of methodical approach or empiricism which is not present.
+
System Preparation
+
+
Deploy testing apparatus, infrastructure, and target systems.
+
Document deployment details, including steps needed to redeploy the same type of system with
+the same provisioning levels, settings, topology, etc.
+
+
Once a system is deployed for testing, all the essential details of the system which define
+the test scenario should be captured.
+
Empirically, this is every detail of the system, but
+practically speaking it is often not possible or convenient to capture them all.
+A key strategy is to use well-defined reference points, like a system image, default
+configuration, hardware profile, etc., and then document only deltas from this initial state.
+This also emphasizes the value of testing a system as shipped or configured by default, since this
+is also a meaningful reference point for any other analysis.
+
Measure Saturating Throughput
+
To balance and inform how performance data is interpreted, it is crucial to consider the
+throughput and response (AKA latency) in contrast and in combination. In order to do this well,
+it is essential to determine the maximum rate at which a system can process requests for the
+workload under study. This is what measuring saturating throughput is all about.
+
The saturating throughput for a composed system may be limited by any component, including
+clients, infrastructure, servers or endpoints, proxy layers, storage subsystems, etc.
+By definition, it is dependent on each and every part of the composed system. However, it is
+almost always more dependent on one component than the others. When a given component is
+disproportionately utilized over the others, the system may not be considered well-balanced.
+When that component hits full saturation, limiting the throughput of the composed system, it is
+called a bottleneck. It is not always easy to define what constitutes a meaningful bottleneck
+when resource utilization is relatively even.
+
It is essential to understand the general state of balance of a system at saturation. If
+optimizing for throughput, then key metrics should include any skews in workload distribution
+over nodes, resources, or services. As well, within vertical resource profiles, such as within a
+node, serious imbalances may invalidate the purposes of a test. This all depends on the specific
+reason for running the test.
+
Yet, it is possible, in a well-balanced system, that many components are highly saturated
+together. In many cases, this is a desired state of balance. In a well-balanced system,
+appreciable speed-ups require creating more headroom (scaling up capacity) in all the components
+or subsystems. Once this is achieved in vertical resource profiles, simpler scale-out
+strategies become available, wherein you know each unit of capacity is representative of a unit of
+consumption for the given workload.
+
In practice, valid results are only possible when the target being tested is the limiting
+factor. Further, as the testing apparatus sees higher utilization (client-side or
+infrastructure), the fidelity of results decreases. The relationship between client saturation
+and measurement accuracy is not well-defined, but it is nearly always non-linear.
+For example, running the client system at 60% utilization will certainly increase the measured
+latency of the composed system compared to running the same test rate on a client which is only
+40% utilized. It is important to remember that when you are running a test, there is no way to
+only test the server. However, you can shift the measurements to be more descriptive of the
+test target by ensuring that the testing apparatus is over-provisioned as a rule.
+
Steps
+
+
Prepare target system, infrastructure, and client systems.
+
Instrument client system for basic metrics capture, including throughput and discrete latency
+histograms. (Avoid time-decaying or other leavening effects.)
+
+
(Advanced) Instrument each key subsystem, messaging layer, system boundary, and resource
+pool in the entire composed system.
+
(Advanced) Baseline key subsystems for capacity using automated benchmarking tools.
+
(Advanced) Verify or record performance congruity and coherence across tested systems.
+
+
+
Configure workload at sufficient concurrency. Minimum concurrency should keep all
+messaging paths primed at all times (transport, buffering requests, processing elements), with
+minimal over-commit. A good rule of thumb is to set concurrency to 2X estimated operational
+parallelism.
+
Method 1 - Run the workload at the full capacity of the client, adjusting the concurrency to
+find the local maximum in throughput. Adjust settings as needed to optimize throughput until
+further improvement is minimal.
+
Method 2 - Run the workload with a rate limiter on the client side as the limiting factor.
+Adjust the rate limit to find the local maximum in throughput. Adjust settings as needed to
+optimize throughput until further improvement is minimal. (A minimal scripted sketch of this
+method follows this list.)
+
Method 3 - Use an automated and iterative analysis method like findmax in order to
+streamline the testing time, and codify the analysis method for reproducible and specific
+results.
+
Method 4 - Use an automated and iterative analysis method like optimo in order to generalize
+over an n-dimensional parameter space which includes concurrency and other dynamic settings.
+
Record the result: maximum throughput and the settings required to achieve it.
+
(Advanced) Record the response curve of the system across key throughput stages.
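+
As a minimal sketch of what Method 2 might look like as a scenario script (the driver, workload,
+rate, and cycle counts here are placeholders, not a prescribed procedure):
+
// Sketch only: a single rate-limited probe. Repeat with adjusted cyclerate values to find the
// local maximum; the activity definition below is a placeholder for your own workload.
scenario.run('driver=cql workload=cql-starter tags=block:main threads=auto ' +
             'cycles=10M cyclerate=40000 alias=probe');
// 'cycles' is the per-cycle timer for the activity; see the scripting sections for metric details.
print('probe p99ms: ' + metrics.probe.cycles.snapshot.getP99ms());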
+
+
Measure Ideal Latency
+
Determine the latency of an operation under the best possible circumstances, i.e. all JIT
+compilation and cache warming are complete, indexes and compaction state are optimal, and so on.
The ultimate goal of a performance factoring effort is to establish a reliable model of how design
+choices determine system behavior.
+
Design Choices include any choice a system designer can make that affects system behavior,
+including hardware selection, network topology, data models and schema, configuration items,
+service provisioning levels, and so on. The set of valid choices constitutes the valid
+Parameter Space.
+
System Behavior includes functional and operational aspects -- what a system does logically as
+well as how it performs mechanically, including how resilient it is to unexpected events like hardware
+failure. The result of a study is a set of values describing these behaviors.
+
Parameters
+
In empirical terms, this means we are trying to map independent parameters to dependent results in a
+durable way. From a data science perspective, this simply means we're trying to create a model
+describing how a subset of parameters (those things in our control) are related to others (those
+things we presume to be consequent).
+
👉 An idealized parameter is represented as a variable which can hold a scalar value, or one which
+can be assigned a single value from a range of valid values.
+
👉 An idealized parameter space is an n-dimensional Euclidean space, where each dimension is
+represented as an idealized parameter, and each point in the space is represented by a set of
+parameter values, where a specific result or set of result data can be found.
+
Parameter Selection
+
Parameters are generally selected which are expected to have the highest impact on results, from
+prior knowledge of the problem domain. Intuition of which parameters matter most and how they
+manifest in results is often described as Mechanical Sympathy in distributed systems practice.
+
However, once there are more than 2 parameters in play, even the best mechanical sympathy starts
+to falter, particularly when non-linear relationships are present. It is only through empirical
+analysis methods that we can understand these results in practice. To put this in perspective,
+it is a challenge to merely visualize in a lucid way what a 4-dimensional manifold looks like at
+the extrema, much less predict it reliably.
+
So how do we ultimately arrive at a set of meaningful parameters? Without going through a full
+PCA analysis phase, we can use our intuition to get started. However, we don't simply create a
+long list of parameters, for reasons that will soon be obvious.
+
Parameter Ranges
+
Each parameter has a domain of valid values which can be defined. In the least helpful case, that
+domain is any valid number. In a more helpful case, we know which values are of practical import.
+
However, the parameter range might be selected for different reasons based on the anticipated
+effects of changing the values. Consider this schematic of a single parameter's range:
It could be said that we really only care about values which are practical in a real sense.
+This will often be the case, as we may know from mechanical sympathy that the values outside of
+this range are simply untenable for a clearly defined reason. In other cases, we may actually care
+more about the contrast between what has historically been considered practical vs impractical given
+new scenarios or information. This happens frequently with systems of scale as fundamental behaviors
+shift over time in response to subsystem optimization and new features. It is useful to think
+of each parameter in this way to choose useful starting values.
+
Further, the effect a parameter has in isolation may be counter-balanced by the extrema of
+another parameter's effect. For example, if you have two parameters which have a super-linear
+impact to a result, with one positively correlated and the other negatively correlated, then
+understanding the practical values at these extremes can still be quite useful. It will not
+always be obvious when you should extend the range of a parameter under test, but it can be
+explored by looking at 3 or more controlled results for that parameter.
+
Since each study takes resources and time, choosing wisely the initial book-end values for each of our
+parameters is essential.
+
Comparability
+
The phrase "all else being equal" is extremely important in how we conceptualize tests as
+parameters. Every detail about a specific system is presumed to be significant to the
+results. There is not a trivial way to establish only a subset of a few important details in such a
+way that we can forget the rest. Because of this, it is essential that testing systems be deployed
+in a way that preserves architectural congruence by default for everything except for the parameters
+which are the focus of a given study.
+
In other words, deploying systems for performance testing is something best done by reliable and
+automatic mechanisms. This is especially true for those studies which are comparative in nature.
+In practice, this is often done by selecting a baseline reference system that will be reused or
+deployed in cookie-cutter fashion for all tests. This includes all provisioning details.
+
Everything else which is not kept the same between performance studies is, by definition, a
+parameter of the test. This is a key principle in empirical testing which should not be
+overlooked.
+
Hidden Parameters
+
Often, it is presumed that only the intended parameters change between tests, and the study is
+framed that way, as in "how does the average field size affect the p99 latency?". However, it is
+also very common for systems to be deployed with whatever the current system defaults are when a
+subsequent comparative test is needed weeks or months later. Thus, it is imperative to select a
+reference system which is likely to be reproducible at that time with little complication. If this
+isn't possible, then whatever has changed between system vintages becomes a hidden parameter in
+your study. This is one of the most common reasons that empirical studies lose comparability over
+time.
+
Visual Examples
+
This section presents some visual examples of parameter spaces, with some accompanying
+mathematical expressions to illustrate parameter space growth.
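+
To make the growth concrete: if parameter $ i $ can take $ |V_i| $ distinct values, the number of
+distinct test points is $ N = \prod_{i=1}^{n} |V_i| $. Four parameters with five candidate values
+each already yield $ 5^4 = 625 $ combinations, each of which is a separate test run.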
+
One Parameter, One Value
+
+
👉 Each box represents a distinct combination of values for all included parameters. In this case,
+it is reduced to a single value for a single parameter.
+
One Parameter, Controlled Study
+
+
The dotted line indicates that there are a myriad of other factors which are in play for every
+performance study. These are generally hidden from view but they are always part of the test.
+
Every one which is not part of our study should be controlled (kept the same between
+studies). They need to be kept stable by using automatic and reliable test setup (deployment,
+configuration, data staging, ...) mechanisms.
+
One Parameter, Two Values
+
+
👉 Each parameter is represented as a schematic axis as in a Euclidean coordinate system. All
+lines oriented in the same direction should be considered part of the same axis.
+
As will be the case for every other example in this section, an edge connecting two nodes in the
+graph represents a change in a single parameter value.
The NoSQLBench runtime is a combination of a scripting runtime and a workload execution machine.
+This is not accidental. With this particular arrangement, it has been possible to build
+sophisticated tests across a variety of scenarios. In particular, logic which can observe and react
+to the system under test can be powerful. With this approach, it becomes possible to break away from
+the conventional run-interpret-adjust cycle which is all too often done by human hands.
+
The approach that enables this is based on a few key principles:
+
+
NoSQLBench is packaged by default for users who want to use pre-built testing configurations.
+
The whole runtime is modular and designed for composition.
+
The default testing configurations are assembled from these modular components as needed.
+
Users can choose to build their own testing configurations from these modules.
+
Moving from using pre-built configurations to custom configurations is an incremental
+process.
+
+
Design Motive
+
Why base the internal logic on a scripting engine?
+
The principles described above apply all the way to the scripting layer. Every NoSQLBench scenario
+is, after all, a script. For users who just need to run the pre-packaged configurations, it doesn't
+matter that a scripting engine is at the core. For others who need to create advanced testing logic,
+it is a crucial enabler. This feature allows them to build on the self-same concepts and components
+that other NoSQLBench users are already familiar with and using. This provides common ground that
+pays for itself in terms of usability, clarity, and a shared approach to testing at different levels
+of detail.
+
Machinery, Controls & Instruments
+
All the heavy lifting is left to Java and the core NoSQLBench runtime. This includes the iterative
+workloads that are meant to test the target system. This is combined with a control layer which is
+provided by GraalVM. This division of responsibility allows the high-level test logic to be "script"
+and the low-level activity logic to be "machinery". While the scenario script has the most control,
+it also is the least busy relative to activity workloads. The net effect is that you have the
+efficiency of the iterative test loads in conjunction with the open design palette of a first-class
+scripting language. You aren't having to buy test flexibility at the expense of testing speed or
+efficiency. You get the best of both worlds, working together.
+
Essentially, the drivers are meant to handle the workload-specific machinery. They also provide
+dynamic control points and parameters which are specific to each driver. This exposes a full feedback
+loop between a running scenario script and the activities that run under its control. The scenario
+is free to read the performance metrics from a live activity and make changes to it on the fly.
+
Getting Started
+
For users who want to tap into the programmatic power of NoSQLBench, it's easy to get started by
+using the --show-script option. For any normal command line that you might use with NoSQLBench,
+this option causes it to dump the scenario script to stdout and exit instead of running the
+scenario.
+
You can store this into a file with a .js extension, and then use a command line like
+
nb5 script myfile.js
+
+
to invoke it. This is exactly the same as running the original command line, only with a couple of
+extra steps that let you see what it is doing directly in the scenario script.
NoSQLBench may look like a simplistic runtime for the casual user. This is quite intentional. Yet, a
+serious amount of expressive power lies just below the surface for the adventuring tester.
+
Dynamic Parameters
+
Dynamic parameters are control variables which, when assigned to, cause an immediate change in the
+behavior of the runtime. Driver implementors have the option to make changes to activity
+parameters reactive within the driver. These parameters are thus able to respond to direct
+changes from within the scenario script. Additionally, some core parameters are dynamic.
+
Global Variables
+
scenario
+
This is the Scenario Controller object which manages the activity executors in the runtime. All
+the methods on this Java type are provided to the scripting environment directly.
+
Activity Parameters
+
activities.<alias>.<paramname>
+
Each activity parameter for a given activity alias is available at this name within the scripting
+environment. Thus, you can change the number of threads on an activity named foo (alias=foo) in the
+scripting environment by assigning a value to it as in activities.foo.threads=3. Any assignments
+take effect synchronously before the next line of the script continues executing.
+
metrics.<alias>.<metric name>
+
Each activity metric for a given activity alias is available at this name. This gives you access to
+the metrics objects directly. Some metrics objects have also been enhanced with wrapper logic to
+provide simple getters and setters, like
+.p99ms or .p99ns, for example.
+
Interaction with the NoSQLBench runtime and the activities therein is made easy by the above
+variables and objects. When an assignment is made to any of these variables, the changes are
+propagated to internal listeners. For changes to
+threads, the thread pool responsible for the affected activity adjusts the number of active
+threads (AKA slots). Other changes are further propagated directly to the thread harnesses and
+components which implement the ActivityType.
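+
For example, a minimal sketch of such a feedback loop might look like the following. The alias,
+driver, latency budget, and timing values are hypothetical placeholders.
+
// Sketch: start an activity, then periodically read its p99 and nudge the thread count.
var threads = 8;
scenario.start('driver=diag alias=foo cycles=100M threads=' + threads);
for (var i = 0; i < 10; i++) {
  scenario.waitMillis(10000);                      // sample every 10 seconds
  var p99 = metrics.foo.cycles.snapshot.p99ms;     // enhanced snapshot getter (see below)
  if (p99 < 10.0) {                                // hypothetical latency budget in ms
    threads = threads + 1;
    activities.foo.threads = threads;              // takes effect before the next line
  }
  print('threads=' + threads + ' p99ms=' + p99);
}
scenario.stop('foo');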
+
WARNING:
+Assignment to the workload and alias activity parameters has no special effect, as you can't
+change an activity to a different driver once it has been created.
+
You can make use of more extensive Java or JavaScript libraries as needed, mixing them with the
+runtime controls provided above.
+
Enhanced Metrics for Scripting
+
The metrics available in NoSQLBench are slightly different than the standard kit with dropwizard
+metrics. The key differences are:
+
HDR Histograms
+
All histograms use HDR histograms with four significant digits.
+
All histograms reset on snapshot, automatically keeping all data until you report the snapshot or
+access the snapshot via scripting. (see below).
+
The metric types that use histograms have been replaced with nicer versions for scripting. You don't
+have to do anything differently in your reporter config to use them. However, if you need to use the
+enhanced versions in your local scripting, you can. This means that Timer and Histogram types are
+enhanced. If you do not use the scripting extensions, then you will automatically get the standard
+behavior that you are used to, only with higher-resolution HDR and full snapshots for each report to
+your downstream metrics systems.
+
Scripting with Delta Snapshots
+
For both the timer and the histogram types, you can call getDeltaReader(), or access it simply as
+<metric>.deltaReader. When you do this, the delta snapshotting behavior is maintained until
+you use the deltaReader to access it. You can get a snapshot from the deltaReader by calling
+getDeltaSnapshot(10000), which causes the snapshot to be reset for collection, but retains a cache
+of the snapshot for any other consumer of getSnapshot() for that duration in milliseconds. If, for
+example, metrics reporters access the snapshot in the next 10 seconds, the reported snapshot will be
+exactly what was used in the script.
+
This is important for using local scripting methods and calculations with aggregate views
+downstream. It means that the histograms will match up between your local script output and your
+downstream dashboards, as they will both be using the same frame of data, when done properly.
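+
A minimal sketch of this usage (the activity alias 'foo' is hypothetical):
+
var reader = metrics.foo.cycles.deltaReader;     // switch this metric to delta snapshotting
var snapshot = reader.getDeltaSnapshot(10000);   // reset for collection, cache the frame for 10s
print('interval p99ms: ' + snapshot.getP99ms()); // reporters within 10s see the same frame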
+
Histogram Convenience Methods
+
All histogram snapshots have additional convenience methods for accessing every percentile in (P50,
+P75, P90, P95, P98, P99, P999, P9999) and every time unit in (s, ms, us, ns). For example,
+getP99ms() is supported, as is getP50ns(), and every other possible combination. This means that you
+can access the 99th percentile metric value in your scripts for activity foo as
+metrics.foo.cycles.snapshot.p99ms.
+
Control Flow
+
When a script is run, it has absolute control over the scenario runtime while it is active. Once the
+script reaches its end, however, it will only exit if all activities have completed. If you want to
+explicitly stop a script, you must stop all activities.
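+
For example, a script that has started long-running activities can end them explicitly before it
+returns, so that the scenario can exit (the alias name is hypothetical):
+
scenario.stop('main_activity'); // stop the named activity so the scenario can complete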
+
Strategies
+
You can use NoSQLBench in the classic form with run driver=<activity_type> param=value ... command
+line syntax. There are reasons, however, that you will sometimes want to customize and modify your
+scripts directly, such as:
+
+
Permute test variables to cover many sub-conditions in a test.
+
Automatically adjust load factors to identify the nominal capacity of a system.
+
Adjust rate of a workload in order to get a specific measurement of system behavior.
+
React to changes in test or target system state in order to properly sequence a test.
+
+
Script Input & Output
+
Internal buffers are kept for stdin, stdout, and stderr for the scenario script execution.
+These are logged to the logfile upon script completion, with markers showing the timestamp and file
+descriptor (stdin, stdout, or stderr) that each line was recorded from.
Extensions are injected into the scripting environment as plugins. They appear as service
+objects in the script environment under a name determined by the plugin.
+
This section describes some of the scripting extensions available.
+
csvmetrics
+
Allows a script to log some or all metrics to CSV files.
+
files
+
Allows for convenient read access to local files.
+
globalvars
+
Allows access to the shared variable state that can be populated from operations.
+
histologger
+
Allows script control of HDR histogram interval logging.
+
histostatslogger
+
Allows script control of histogram stats logging in CSV files.
+
http
+
Easily use http get and post in scripts.
+
optimos
+
Allows use of the BOBYQA optimizer in scripts.
+
scriptingmetrics
+
Allows you to create and append metrics within your scenario scripts.
When running a script, it is sometimes necessary to pass parameters to it in the same way
+that you would for an activity. For example, you might have a scenario script like this:
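+
// A minimal sketch of such a script; the driver and parameter values are placeholders only.
var activitydef = {
    'alias' : 'example',
    'driver' : 'diag',
    'cycles' : '100',
    'threads' : '1'
};
scenario.run(activitydef);
+
+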
This is what the script form of starting an activity might look like. It is
+simply passing a parameter map with the activity parameters to the scenario controller.
+
You might invoke it like this:
+
nb5 script myscript
+
+
Suppose that you want to allow the user to run such an activity by calling the script directly,
+but you also want to allow them to add their own parameters specifically to the
+activity.
+
NoSQLBench supports this type of flexibility by providing any command-line arguments to the
+script as a script object. It is possible to then combine the parameters that a user provides
+with any templated parameters in your script. You can make either one the primary, while allowing
+the other to backfill values. In either case, it's a matter of using helper methods that are
+baked into the command line parameters object.
+
To force parameters to specific values while allowing user command line parameters to backfill,
+use a pattern like this:
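+
// Sketch: force 'myparam' while letting other user-provided parameters pass through.
// The helper method name withOverrides() is an assumption about the params object's API.
var overrides = {
    'myparam' : 'forced_value'
};
scenario.run(params.withOverrides(overrides));
+
+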
This will force 'myparam' to the specified value irrespective of what the user has provided for
+that value, and will add the value if it is not present already.
+
To force unset a parameter, use a similar pattern, but with the value UNSET instead:
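+
// Sketch: remove 'myparam' from the effective parameters even if the user provided it.
// As above, withOverrides() is an assumed helper name on the params object.
var overrides = {
    'myparam' : 'UNSET'
};
scenario.run(params.withOverrides(overrides));
+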
How do you know what kind of client to run? How do you know how many clients to run? How do you see
+the result of running multiple clients? We attempt to answer these questions below.
+
Testing Asymmetry
+
When you are measuring system performance, certain precautions have to be taken in order to ensure
+that you are measuring what you intend. One of the most fundamental requirements is that the systems
+that generate load and measure results must be inherently more capable of scale than the system you
+are measuring. In other words, if the testing system is the bottleneck in the composed system, then
+you are effectively measuring your testing system and not the target.
+
This is a sliding scale. For instance, if your testing system's resources are mostly saturated,
+say "80% utilized", then you are likely still leaving some performance untapped, or at the very
+least, measuring higher response times than you would otherwise. This follows from the fact that
+clients are not real-time systems and must juggle threads and other parallel computing resources in
+order to service results from the target system. There is no simple or practical way to avoid this.
+
So, in order to have empirical results which are accurate with respect to the target of your tests,
+your client resources must not be over-utilized. In most cases, a client which uses less than 60% of
+its CPU and is not otherwise throttled by resource contention will provide generally accurate
+results.
+
In some testing systems, such as those which pipe around data in order to replay operations, you
+will find that it is more difficult to scale your client nodes up than with NoSQLBench. That is
+because moving serialized operations around and then consuming them in real time is simply much more
+work on the client system. This forces you into a situation where the load bank needs to be much
+larger to offset processing demands in the testing apparatus. Methods involving local IO or
+pre-processing will generally be much slower than those which operate almost entirely within the
+CPU-Memory complex. NoSQLBench's approach to synthesizing operations from recipes avoids much of
+this concern, or at least makes it easier to manage and scale.
+
Yet, the amount of client capability you need in order to run an accurate performance test still
+depends on how capable your test target is. Consider a target cluster comprised of 5 nodes. This may
+take a couple of test clients of the same basic hardware profile in order to adequately load the
+target system without saturating client resources. However, if those clients are saturating
+their CPUs, this is not enough.
+
Verify client overhead
+
When running serious tests, it is important that you look at your clients' system utilization
+metrics. The key metric to watch for is CPU utilization. If it goes over 50%, you may need more
+client resources in order to keep your metrics accurate. There are a couple ways to approach this.
+
Use a larger test node
+
If you need to scale up your test client capabilities, the simplest method is to just use a larger
+client node. This may also require you to increase the number of threads, but
+threads=auto will always size up reasonably to the available cores.
+
Add more client nodes
+
If you need, you can add more nodes to your test. There are a couple of strategies for doing this
+effectively.
+
+
If you want to run the same number of operations overall, you can split them over nodes. Simply
+change your cycles activity param so that it allocates a share of the cycles to each node. For
+example, a single-node test which uses cycles=1M can be split into two different ones which
+use cycles=500K and cycles=500K..1M. If you instead simply set the number to
+500K for each of them, you would be running the same exact operations twice.
+
If you are gathering metrics on a dashboard, you will want to alias the workload name. If you are
+using Named Scenarios as most users do, then you can simply copy the workload name to another.
+This allows you to receive distinct metrics by client. When tagged metrics are implemented, this
+will no longer be necessary.
+
+
A good rule of thumb to use is 1 client node for each 3 target nodes in a cluster. This will
+of course vary by workload and hardware, but it is a reasonable starting point.
+
It is also important to consider the section on Scaling Math
+when sizing your load bank.
This section touches on topics of using randomized data within NoSQLBench tests.
+
Benefits
+
The benefits of using procedural generation for the purposes of load testing are taken as a given in
+this section. For a more thorough discussion of the merits, please see the showcase
+section for the Virtual DataSet.
+
Basic Theory
+
In NoSQLBench, the data used for each operation is generated on the fly. However, the data is also
+deterministic by default. That means, for a given activity, any numbered cycle will produce the same
+operation from test to test, so long as the parameters are the same.
+
NoSQLBench runs each activity over a specific range of cycles. For each cycle, an operation is
+built from a template using that cycle as a seed. The cycle value determines which op template
+to use as well as all the payload values and/or configuration details for that operation.
+
The machinery which is responsible for dispatching operations to run is initialized before an
+activity starts to optimize performance while the activity is running.
+
Managing Variation
+
There are ways of selecting how much variation you have from one test scenario to another.
+
Sometimes you will want to run the same test with the same operations, access patterns, and data.
+This may be necessary in advanced testing scenarios, since it can make data or order-specific
+problems reproducible. The ability to run the same test between different target systems is
+extremely valuable.
+
Selecting Cycles
+
You can cause an activity to run a different set of operations simply by changing the cycle range
+used in the test.
+
For an activity that is configured with cycles=100M, 100 million independent cycles will be used.
+These cycles will be automatically apportioned to the client threads as needed until they are all
+used up.
+
If you want to run 100 million different cycles, all you have to do is specify a different set of
+seeds. This is as simple as specifying cycles=100M..200M, as the first example above is only
+short-hand for cycles=0..100M.
+
Selecting Bindings
+
The built-in workloads come with bindings which support the rampup and main testing activities
+appropriately. This means that the cycles for rampup will use a binding that lays data into a
+dataset incrementally, as you would build a log cabin. Each cycle adds to the data. The bindings are
+chosen for this effect so that the rampup activity is incremental with the cycle value.
+
The main activity bindings are selected differently. In the main activity, you don't want to
+access the data in order. To emulate a real workload, you need to select the data
+pseudo-randomly so that storage devices don't get to cheat with read-ahead (more than they
+would realistically) and so on. That means that the main activity bindings are also specifically
+chosen for the "random" access patterns that you might expect in some workloads.
+
The distinction between these two types of bindings should tell you something about the binding
+capabilities. You can really do whatever you want as long as you can stitch the right functions
+together to get there. Although the data produced by some functions (like Hash() for
+example) look random, they are not. They are, however, effectively random enough for most
+distributed systems performance testing.
+
If you need to add randomization to fields, it doesn't hurt to add an additional Hash() to the
+front. Just be advised that the same constructions from one bind point to the next will yield
+the same outputs, so season to taste.
In general, there is a triangular trade-off between service time, op rate, and data density:
+higher throughput, larger working sets, and lower (better) latency pull against one another.
+Reads (index traversals or "lookups") are generally more dependent on working set size than
+other operations. This is true for all modern databases. In general, read patterns always access
+indexes of some type in a random-access system. The performance of these indexes varies widely
+based on hardware, software, and data factors. Ideally, index performance tends toward
+$ \Omega(\log_2 n) $, as with binary search, which approximates the best performance generally
+attainable, except in special data-dependent cases where slightly better performance is possible
+but not generally reliable. No matter the underlying hardware, the cardinality of
+your indexable values will be a factor.
+
This is important to keep in mind, since it makes it very clear that you must test with
+realistic data -- enough data with enough variety and a characteristic distribution. However,
+you don't have to do this at the same scale as the system you are characterizing for.
+
The data or index density question can be addressed per unit, meaning per-node, per-core, or
+whatever the fundamental unit of deployed scale your system architecture offers.
+
Test small and extrapolate
+
It is neither necessary nor reasonable to test every system at production scale for the purposes of
+trusting its operational behavior. The basic principles used to build scalable systems allow for
+us to build scale models of these systems and verify the character of scaling itself. Nearly
+all scalable systems amortize work over time and space. There is always a per-unit way of
+measuring capacity such that the projected capacity is directly proportional to the scale of the
+system deployment. Identifying how the system capacity relates to the deployment footprint is
+essential, and depends on the primary scaling mechanisms of the system in question.
+
You want to be able to make reliable statements about the scale of some supposed system
+deployment from the data you collect in the smaller-scale test, like
+
+
If I add more data to the existing system, how does this manifest in terms of throughput or
+latency impact?
+
If I were to add more resources to the system, does it scale up linearly or is there otherwise a
+reliable estimate which can be used?
+
If a unit of scale is removed (like a node), what happens to the throughput and latency?
+
For a given increase in system resources across the board, how are the throughput and
+latency affected?
+
+
To answer these questions, you need to establish some basic formulae and verify them. What this
+means will depend directly on your system design.
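+
For example, one such formula (a sketch, assuming throughput is the capacity metric of interest) is
+$ T(k) \approx k \cdot T(1) $, where $ T(1) $ is the measured per-unit throughput and $ k $ is the
+number of deployed units. Verifying it means measuring $ T $ at two or more values of $ k $ and
+checking that the proportionality holds within an acceptable error.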
+
Test the extrapolation
+
Once you have established the formulae to estimate changes in capacity or performance based on
+changes in system topology, you need to change the scale of the system and run your test again to
+verify that the character of scaling holds. If it does not, then there is an important detail
+that needs to be discovered and added to the scaling math.
+
Trust your extrapolation
+
Once you've proven that your scaling math works, it's time to capture it in context. You need to
+document the details of the test, including workload, dataset, density, topology, system
+configuration, and so on. It is only by knowing how similar this system is to another supposed
+system that you can trust the data for other estimates.
+
A frequent mistake that technologists make when testing systems like this is using the data out
+of context, against a system where only a couple details have changed. While this might sound
+reasonable, once you go beyond two or three small changes, our ability to reliably predict
+systemic changes drops off very quickly. More advanced analysis might help to make this test data
+more portable, but it is certainly a fool's errand to try to intuit too far away from your known
+configuration.
+
By knowing when a system is congruent to the one you tested, you can know when to trust your
+scaling math. This is why it is critical that you document the circumstances of what and how you
+tested, so that in future situations you know if you have meaningfully relevant data.
+
Once you have the ability to extrapolate or interpolate (within reason) how system topology
+affects the operational behavior of your system, it's time to contextualize your data. It is fair to
+use the scaling math for other systems which are congruent to the one you tested.
This is a simple high-level overview of a scale-up testing method, for those who are new to it
+and need a basic blueprint.
+
Testing for scale is often a complicated business. Users often think that it is too expensive
+because a test at scale requires many systems. This is often not the case. You can establish
+the relationships between testing resources and results at any scale. Further, you can establish
+the character of scalability at small scales first and then go from there. This is often the
+best approach in terms of incremental results.
+
Start Small
+
When testing for scale, it is useful to establish your testing method at some arbitrarily small
+size of system and go from there. This allows you to prove out your testing apparatus and target
+system configuration in an affordable and easier-to-manage way. While a small system may not
+show you the scaled up performance that you want to measure, it gives you the first reference
+point you need in order to verify how a system scales.
+
Select Key Metrics
+
You can only establish the character of scaling by plotting multiple points between an
+independent and a dependent variable. At the highest level, the independent variable is
+"hardware" from a system scaling perspective, and "investment" from an TCO perspective. Most
+users will need to focus on how much hardware is needed to meet a given performance requirement,
+SLA, or throughput (or both). You must know the fundamental questions you are asking for the
+test to be framed and represented properly.
+
Metrics which are most often used include:
+
+
response time, one of (or all of)
+
+
p99 response time (historically called latency, but see the section on
+Timing Terms)
+
read-specific p99 response time, as read patterns in systems are often much more indicative
+of overall scaling character
+
+
+
Dataset Size
+
Saturating throughput
+
p99 response time at some nominal throughput
+
data density
+
+
Nominal vs. Saturating throughput
+
Operational systems are not run in production at 100% utilization. Properly managed, they are
+deployed to scale over the demand with some added headroom. This
+headroom is crucial to allow for self-healing or administrative events at the control plane layer
+while maintaining the workload at some acceptable level (throughput and latency).
+
Realistic measurement of a production-like system depends on knowing both what throughput that
+system is capable of and how it responds at a realistic loading level. Running a system in
+production at 10% utilization is not cost-effective. Conversely, running a system at 99%
+utilization is not effective for maintaining availability through variable load, hardware
+failures, or administrative actions. So how do you find the level of throughput to test at?
+
A basic recipe consists of:
+
+
Run the workload at a reasonable concurrency and no rate throttling and measure the
+saturating throughput.
+
Select a proportional loading level which represents what you would do in production. For
+example, for a headroom of 30%, you might select a loading level of 70%. (A worked example
+follows this list.)
+
Run your main workload at the proportional rate using client-side rate-limiting, and take
+your response time measurements from this run.
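+
For example (the numbers are illustrative only): if the measured saturating throughput is 50,000
+ops/s and you want 30% headroom, the nominal rate is 0.7 x 50,000 = 35,000 ops/s, which you would
+then apply with the client-side rate limiter (for example, cyclerate=35000) for the measured main
+run.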
+
+
Scale Up
+
Once you have established your metrics at nominal throughput on your baseline system, you are
+ready to scale up your test. For the scaling mode that you are testing, change an independent
+variable (add a node, add cores, or whatever constitutes a resource in your target system).
+
Repeat the test flow above with a different set of parameters. Once you get two sets of results,
+you have enough to start characterizing scalability.
+
In general, distributed systems which are designed to scale focus on maintaining a given
+response time character when the proportion of resources is congruent to the rate of requests.
+Other scaling modes are available, but details vary by system, so be clear about what you are
+testing and why.
+
Explain Your Results
+
It is crucial that you frame the fundamental premise of your test in any results.
+
This includes key details, like:
+
+
What questions you aim to answer - What is the purpose of the test?
+
A high-level outline of the testing method or workflow used.
+
Details on how metrics are measured
+(see Vantage Points)
+
The basic formulae (like the nominal vs saturating rates above)
+
A direct and uncomplicated visual of the fundamental relationship between
+the test parameters
+
Full details of the system and workload used for testing. Even if you don't provide this
+up-front, it has to be readily available on request for any result to be taken seriously.
A successful test of a system results in a set of measurements. However, there are many ways to take
+measurements and they all serve to answer different questions. Where you take your measurements also
+determines what you measure.
+
Consider the following diagram:
+
+
This diagram illustrates a prototypical set of services and their inner service dependencies. This
+view only shows synchronous calls to keep the diagram simple.
+
User Impact
+
The outer-most layer of the onion is what the user interacts with. In most modern services, this is
+the browser. As well, in most modern applications there is an active client-side component which acts
+as part of the composed application, with local page state being pseudo-persistent except for cache
+controls and full reloads. This highlights how far designers will go to make interactions "local"
+for users to avoid the cost of long request loops.
+
As such, the browser is subject to any response times included within the inner service layers.
+Still, the browser represents the outer-most and thus most authentic vantage point from which to
+measure user impact of service time. This is called the User View in the above diagram.
+
Looking Inward
+
Beyond the outer layer, you'll usually find more layers. In terms of what these layers are called
+("endpoint", "service", "web app", "app server"), there is a ton of subjectivity. Although the names
+change, the underlying mechanisms are generally the same. The naming conventions come more from
+local norms within a tech space or community of builders. One person's "App Server" is another's
+"RESTful endpoint". What is important to notice is how the layers form a cascade of dependencies down
+to some physical device which is responsible for storing data. This pattern will be durable in nearly
+every system you look at.
+
Between each layer is a type of messaging component. These are sometimes called "media" or
+"transport" in RFCs. Each connection between the layers carries with it a set of fundamental
+trade-offs that, if understood, can establish reasonably durable minimum and maximum response times
+in the realm of possibilities.
+
For example, a storage device that is using NVMe as the host bus will, all else being equal, perform
+better than one served by a SATA channel. The specifications for these "transports" say as much, but
+more importantly, real-world results back this up.
+
Understanding the connections between each layer of abstraction is essential. At a minimum, knowing
+the theoretical and practical limits of the technology at each layer is useful. Not to fear: a good
+testing setup can help you find these limits in specific terms.
+
Service Time Math
+
There will be a limit to how much data you can collect, and from which vantage points you can get it
+from. That means that sometimes you need to do some sleuthing with the data you have in order to
+tease out important details.
+
For example, say you have a good set of metrics for the app server in the diagram above. You know
+that the p95 service time is 121ms. Suppose you also know the p95 service time for the same calls
+at the DB layer. That is 32ms. If you don't know anything else about the calls, you can at least
+infer that the difference between these two layers is around 89ms (P95). That means that, for 5 out
+of every 100 operations, somewhere between your web app, your db driver, and your db service, you
+are spending at least 89ms doing something. This could be in the active processing, or in the
+passive transport of data -- the ethernet layer or otherwise. At least you can set book-end
+expectations between these layers.
+
Applied Principles
+
outside-in
+
Generally speaking, to understand how service times impact users, you want to measure from
+outer vantage points. To understand why the user sees these service times, you look at the inner
+layers.
+
detailed enough
+
When constructing layered views of your metrics, it is useful to add the elements you need and can
+instrument for metrics first. The above diagram goes to a degree of detail that may be too much to
+be useful in a practical analysis scenario. You could add placeholders to capture elements of the
+transport and inter-connections, additional internal subsystems of layers, etc. This is only useful
+if it helps tell an important story about the details of your system, i.e. details that you can use
+to take action for an improvement or to help you focus effort in the right place.
+
clear labeling
+
When you are capturing metrics, make sure that the nesting and vantage points are very clear to
+observers. A little detail in naming goes a long way to keeping operators honest with each other
+about what is actually happening in the system.
+
contextual views
+
As you learn to build operational views of systems, be sure to tailor them to the user-impacting
+services that your business is measured by. This starts on the outside of your system, and cuts
+through critical paths, focusing on those areas which have the highest variability in responsiveness
+or availability. It includes the details that need the most attention. You can't start from a rich
+dashboard of data that includes the kitchen sink to arrive at this. It is an art form that you must
+constantly practice in order to keep operational views relevant. Yes, there will be long-standing themes
+and objectives, but the more ephemeral factors need to be treated as such.
Often, terms used to describe latency can create confusion. In fact, the term latency is so
+overloaded in practice that it is not often useful by itself. Because of this, NoSQLBench will
+avoid using the term latency except in a specific way. Instead, the terms described in this
+section will be used.
+
NoSQLBench is a client-centric testing tool. The measurement of operations occurs on the client,
+without visibility to what happens in transport or on the server. This means that the client can
+see how long an operation takes, but it cannot see how much of the operational time is spent in
+transport and otherwise. This has a bearing on the terms that are adopted by NoSQLBench.
+
Some terms are anchored by the context in which they are used. Even a term like service time can
+be subjective: what is included in it depends on the perspective of the requester. The concept of
+service is universal, and every layer in a system can be seen as a service. Thus, the service time
+is defined by the vantage point of the requester. This is the perspective taken by NoSQLBench for
+the naming and semantics below.
+
responsetime
+
The duration of time a user has to wait for a response from the time they submitted the request.
+Response time is the duration of time from when a request was expected to start, to the time at
+which the response is finally seen by the user. A request is generally expected to start immediately
+when users make a request. For example, when a user enters a URL into a browser, they expect the
+request to start immediately when they hit enter.
+
In NoSQLBench, the response time for any operation can be calculated by adding its wait time and its
+service time together.
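+
+For example, using purely hypothetical numbers: if an operation waits 5 ms before it can start
+(its wait time) and then takes 12 ms to complete once started (its service time), its response
+time is 5 ms + 12 ms = 17 ms.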
+
waittime
+
The duration of time between when an operation is intended to start and when it actually starts on
+a client. This is also called scheduling delay in some places. Wait time occurs because clients
+are not able to make all requests instantaneously when expected. There is an ideal time at which the
+request would be made according to user demand. This ideal time is always earlier than the actual
+time in practice. When there is a shortage of resources of any kind that delays a client request,
+it must wait.
+
Wait time can accumulate when you are running something according to a dispatch rate, as with a rate
+limiter.
+
servicetime
+
The duration of time it takes a server or other system to fully process a request and send a
+response. From the perspective of a testing client, the system includes the infrastructure as
+well as remote servers. As such, the service time metrics in NoSQLBench include any operational time
+that is external to the client, including transport latency.
Short options, like '-v', represent simple options, like verbosity. Using multiples increases the
+level of the option, like '-vvv'.
+
Long options, like '--help', are top-level options that may only be used once. These modify general
+behavior, or allow you to get more details on how to use nb5.
+
All other options are either commands, or named arguments to commands.
+
+
Any single word without dashes is a command that will be converted into script form.
+
Any option that includes an equals sign is a named argument to the previous command.
+
+
The following example is a commandline with a command start, and two named arguments to that command.
+
nb5 start driver=diag alias=example
+
+
Discovery options
+
These options help you learn more about running nb5, and about the plugins that are present in your
+particular version.
+
Get a list of additional help topics that have more detailed documentation:
+
nb5 help topics
+
+
Provide specific help for the named activity type:
+
nb5 help <activity type>
+
+
List the available drivers:
+
--list-drivers
+
+
List the available scenarios:
+
--list-scenarios
+
+
List only the available workloads which contain the above scenarios:
+
--list-workloads
+
+
Copy a workload or other file to your local directory as a starting point:
+
--copy <name>
+
+
Provide the metrics that are available for scripting:
This is how you actually tell nb5 what scenario to run. Each of these commands appends script logic
+to the scenario that will be executed. These are considered commands and can occur in any order and
+quantity. The only rule is that arguments in the arg=value form apply to the preceding script
+or activity.
+
Add the named script file to the scenario, interpolating named parameters:
+
script <script file> [arg=value]...
+
+
Add the named activity to the scenario, interpolating named parameters:
+
activity [arg=value]...
+
+
General options
+
These options modify how the scenario is run.
+
Specify a directory for scenario log files:
+
--logs-dir <dirname>
+
+
Specify a limit on logfiles (old files will be purged):
+
--logs-max <count>
+
+
Specify the priority level of file logs:
+
--logs-level <level>
+
+
where <level> can be one of OFF, ERROR, WARN, INFO, DEBUG, TRACE, or ALL
+
Specify the message pattern used for the logfile (see the Log4j2 pattern layout reference below):
+
--logfile-pattern '%date %level [%thread] %logger{10} [%file:%line] %msg%n'
+--logfile-pattern 'VERBOSE'
+
+# See https://logging.apache.org/log4j/2.x/manual/layouts.html#Pattern_Layout
+# These shortcuts are allowed
+TERSE %8r %-5level [%t] %-12logger{0} %msg%n%throwable
+VERBOSE %d{DEFAULT}{GMT} [%t] %logger %-5level: %msg%n%throwable
+TERSE-ANSI %8r %highlight{%-5level} %style{%C{1.} [%t] %-12logger{0}} %msg%n%throwable
+VERBOSE-ANSI %d{DEFAULT}{GMT} [%t] %highlight{%logger %-5level}: %msg%n%throwable
+# ANSI variants are auto promoted for console if --ansi=enable
+# ANSI variants are auto demoted for logfile in any case
+
+
Explicitly enable or disable ANSI logging support
+(ANSI support is enabled if the TERM environment variable is defined):
+
--ansi=enabled
+--ansi=disabled
+
+
Specify a directory and enable CSV reporting of metrics:
+
--report-csv-to <dirname>
+
+
Specify the graphite destination and enable reporting:
+
--report-graphite-to <addr>[:<port>]
+
+
Specify the interval for graphite or CSV reporting in seconds:
+
--report-interval 10
+
+
Specify the metrics name prefix for graphite reporting:
+
Log basic histogram statistics to a CSV file, optionally filtered by a metric name pattern and
+reporting interval:
+
--log-histostats stats.csv
+--log-histostats 'stats.csv:.*' # same as above
+--log-histostats 'stats.csv:.*:1m' # with 1-minute interval
+--log-histostats 'stats.csv:.*specialmetrics:10s'
+
+
Adjust the HDR histogram precision:
+
--hdr-digits 3
+
+
The default is 3 digits, which creates 1000 equal-width histogram buckets for every named metric in
+every reporting interval. For longer-running tests, or for tests which require finer-grained
+precision in metrics, you can set this up to 4 or 5. Note that this only sets the global default.
+Each activity can also override this value with the hdr_digits parameter. Be aware that each
+increase in this number multiplies the amount of detail tracked on the client by 10x, so use
+caution.
+
Adjust the progress reporting interval:
+
--progress console:1m
+
+
or
+
--progress logonly:5m
+
+
👉 The progress indicator on console is provided by default unless logging levels are turned up
+or there is a script invocation on the command line.
+
If you want to add in classic time decaying histogram metrics for your histograms and timers, you
+may do so with this option:
+
--classic-histograms prefix
+--classic-histograms 'prefix:.*' # same as above
+--classic-histograms 'prefix:.*specialmetrics' # subset of names
+
+
Name the current session, for logfile naming, etc. By default, this will be "scenario-TIMESTAMP", and
+a logfile will be created for this name.
+
--session-name <name>
+
+
Enlist nosqlbench to stand up your metrics infrastructure using a local docker runtime:
+
--docker-metrics
+
+
When this option is set, nosqlbench will start graphite, prometheus, and grafana automatically on
+your local docker, configure them to work together, and configure itself to send metrics to that
+system automatically. It also imports a base dashboard for nosqlbench and configures grafana snapshot
+export to share with a central DataStax grafana instance (grafana can be found on localhost:3000
+with the default credentials admin/admin).
+
Console Options
+
Increase console logging levels: (Default console logging level is warning)
+
-v (info)
+-vv (debug)
+-vvv (trace)
+
+--progress console:1m (disables itself if -v options are used)
+
+
These levels affect only the console output level. Other logging level parameters affect logging
+to the scenario log, stored by default in logs/...
+
Show the version in long form, with artifact coordinates:
+
--version
+
+
Summary Reporting
+
The classic metrics logging format is used to report results into the logfile for every scenario.
+This format is not generally human-friendly, so by default a more readable summary report is
+provided to the console and/or to a specified summary file.
+
Examples:
+
# report to console if session ran more than 60 seconds
+--report-summary-to stdout:60
+
+# report to auto-named summary file for every session
+--report-summary-to _LOGS_/_SESSION_.summary
+
+# do both (the default)
+--report-summary-to stdout:60,_LOGS_/_SESSION_.summary
+
+
Values of stdout or stderr send summaries directly to the console, and any other pattern is
+taken as a file name.
+
You can use _SESSION_ and _LOGS_ to automatically name the file according to the current session
+name and log directory.
+
The reason for the optional timing parameter is to allow for results of short scenario runs to be
+squelched. Metrics for short runs are not generally accurate nor meaningful. Spamming the console
+with boilerplate in such cases is undesirable. If the minimum session length is not specified, it
+is assumed to be 0, meaning that a report will always show on that channel.
Sometimes you want to run a set of workloads in a particular order, or call other specific test
+setup logic in between activities or workloads. While the full scripting environment allows you
+to do this and more, it is not necessary to write JavaScript for every scenario.
+
For more basic setup and sequencing needs, you can achieve a fair degree of flexibility on the
+command line. A few key API calls are supported directly on the command line. This guide explains
+each of them, what they do, and how to use them together.
+
Script Construction
+
As the command line is parsed, from left to right, the scenario script is built in an internal
+scripting buffer. Once the command line is fully parsed, this script is executed. Each of the
+commands below is effectively a macro for a script fragment. It is important to remember that
+order matters.
+
Command line format
+
Newlines are not allowed when building scripts from the command line. As long as you follow the
+allowed forms below, you can simply string multiple commands together with spaces between.
+Single word options without leading dashes, like run, are scenario commands. Subsequent
+key=value style arguments are their named parameters. Named parameters which follow a scenario
+command apply to that command only.
+
Global options, meaning any argument that starts with a - or --, are applied when NoSQLBench
+starts up, before any scenario is run. These are automatically taken out of the list of what the
+scenario scripting sees.
+
Concurrency & Control
+
All activities operate in parallel with the scenario script, allowing them to run independently at
+their own pace, while the scenario controls and manages them like an automaton. The scenario concludes
+only when both the scenario script and the activities are finished.
+
Scenario Commands
+
start
+
example: start driver=<driver> alias=<alias> ...
+
You can start an activity with this command. At the time this command is evaluated, the activity is
+started, and the script continues without blocking. This is an asynchronous start of an activity. If
+you start multiple activities in this way, they will run concurrently.
+
The driver argument is required to identify which nb5 driver to run. The alias parameter is not
+strictly required, unless you want to be able to interact with the started activity later. In any
+case, it is a good idea to name all your activities with a meaningful alias.
+
await
+
example: await <alias>
+
Await the normal completion of an activity with the given alias. This causes the scenario script to
+pause while it waits for the named activity to finish. This does not tell the activity to stop. It
+simply puts the scenario script into a paused state until the named activity is complete.
+
run
+
example: run driver=<driver> alias=<alias> ...
+
Run an activity to completion, waiting until it is complete before continuing with the scenario
+script. It is effectively the same as issuing a start for the activity, followed immediately by an
+await on its alias.
+
stop
+
example: stop <alias>
+
Stop an activity with the given alias. This is synchronous, and causes the scenario to pause until
+the activity is stopped. This means that all threads for the activity have completed and signalled
+that they're in a stopped state. This command allows an activity to stop gracefully if possible.
+It waits for a number of seconds for all threads to come to a stopped state and will then resort
+to using forceStop if needed. Threads which are occupied blocking on remote timeouts or blocking
+behavior can prevent an activity from shutting down gracefully.
+
forcestop
+
syntax: forcestop <alias>
+
This is like the stop command, except that it doesn't allow the activity to shut down gracefully.
+This command immediately shuts down the thread pool for the given activity.
+
waitmillis
+
example: waitmillis <milliseconds>
+
Pause the scenario script for this many milliseconds. This doesn't affect any running activities
+directly. This is useful for controlling workload run duration, etc.
+
script
+
example: script <script file>
+
Add the contents of the named file to the scenario script buffer.
+
fragment
+
example: fragment <script text>
+
Add the contents of the next argument to the scenario script buffer.
+
An example CLI script
+
Any sequence of these commands, when strung together, constitutes a scenario script.
+An example of this is: ./nb5 start driver=stdout alias=a cycles=100K workload=cql-iot tags=block:main start driver=stdout alias=b cycles=200K workload=cql-iot tags=block:main waitmillis 10000 await a stop b
+
This is terribly confusing to look at, so the same command is usually broken across lines instead.
+A backslash at the end of a line allows you to insert a discarded newline, as long as there are no
+spaces after the backslash:
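+
./nb5 \
+  start driver=stdout alias=a cycles=100K workload=cql-iot tags=block:main \
+  start driver=stdout alias=b cycles=200K workload=cql-iot tags=block:main \
+  waitmillis 10000 \
+  await a \
+  stop b
+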
Here is a narrative of what happens for each line:
+
1. nb5 is invoked.
+
2. An activity named 'a' is started, with 100K cycles of work.
+
3. An activity named 'b' is started, with 200K cycles of work.
+
4. While these activities run, the scenario script waits for ten seconds.
+
5. The scenario blocks, waiting for activity 'a' to complete its 100K cycles.
+
6. Activity 'b' is immediately stopped.
+
+
After the stop command at the end of the scenario script, the whole scenario exits, because all
+activities are stopped or complete, and the script is complete.
Activity parameters are passed as named arguments for an activity, either on the command line
+or from a scenario script. On the command line, these take the form of
+
... <param>=<value> ...
+
+
Some activity parameters are universal in that they can be used with any driver type. These
+parameters are called core activity params. Only core activity parameters are documented here.
+When starting out, you want to familiarize yourself with these parameters.
+
👉 To see what activity parameters are valid for a given driver, see the documentation for that
+driver with nb5 help <driver>. Each driver comes with documentation that describes how
+to configure it with additional driver params as well as what op template forms it can understand.
+
Essential
+
The activity params described in this section are those which you will use almost all the time
+when configuring activities.
+
👉 If you aren't using one of these options with a run or start
+command, or otherwise in your named scenarios, double check that you aren't missing something
+important.
+
driver
+
+
driver=<driver>
+
default: unset
+
required: no
+
dynamic: no
+
+
Every activity can have a default driver. If provided, it will be used for any op template which
+does not have one directly assigned as an op field. If any op template in the workload ends up
+with no driver set, an error is thrown.
+
As each activity can have multiple op templates, and each op template can have its own driver,
+the available activity params for a workload are determined by the superset of valid params for
+all active drivers.
+
You can find out what drivers are available in nb5 with the --list-drivers option from
+discovery options. You can then get details on
+what each of these drivers allow with nb5 help <driver>.
+
The driver selection for an op template determines the valid constructions for that op template.
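+
For example, a minimal workload file using the stdout driver might look like the following sketch
+(the op and binding names here are illustrative, not taken from any bundled workload):
+
# test.yaml -- a minimal sketch assuming the stdout driver
+ops:
+  example-op:
+    driver: stdout
+    op: "cycle {mycycle}\n"
+bindings:
+  mycycle: Identity()
+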
If an activity were started up which references this file as workload=test.yaml, then all
+the activity params recognized by the stdout driver would be valid, in addition to the core
+activity params documented in this section.
+
examples
+
+
driver=stdout - set the default driver to stdout
+
+
workload
+
+
+
default: unset, required: one of workload= or op= or stmt=, dynamic: no
+
+
+
workload=<filename> where filename is a YAML, JSON, or Jsonnet file with a matching extension.
+If the extension is missing, then it is presumed to be a yaml file. Workload filenames are
+resolved on the local filesystem first, then from the files which are bundled into the
+nosqlbench binary or jar.
+
+
+
workload="<URL>" where URL with an valid
+scheme,
+like http, https, or
+S3.
+S3 support has been added directly, so you can use these URIs so long as you have a valid AWS
+configuration.
+
+
+
workload="<JSON Object>" where the param value is a JSON object starting with {. Escaping
+might be necessary for some characters when using this on the command line.
The workload param tells an activity where to load its workload template from (see the
+Workloads 101 tutorial). The workload template is a collection of op templates which
+are blueprints for the operations that an activity runs.
+
If the file is a Jsonnet file (by extension), then a local jsonnet interpreter will be run
+against it before being handled as above. Within this evaluation context, all provided activity
+parameters are available as external variables and accessible via the standard Jsonnet APIs,
+specifically std.extVar(str). For doing robust data
+type conversion, use std.parseJson(str) by default.
+
op
+
When the op param is provided, its contents are taken as an op template which
+consists of a string template only. This is equivalent to providing a workload which contains a
+single op with a single op field named stmt.
+
stmt
+
tags
+
+
tags=<filterspec>
+
default: unset
+
required: no
+
dynamic : no
+
+
Tags are used to filter the set of op templates presented for sequencing in an activity. Each op
+template has a set of tags, which include two auto-tags that are provided by the runtime:
+
+
block - the block name that contains the op template. All op templates are part of some
+block, even if they are configured at the root of a document. There is a virtual block named
+block0 which all of the root-level op templates are assigned to.
+
name - a unique name for the op template within the workload template. This is a
+concatenation of the block name, two dashes (--) and the base op template name. For example,
+an op in block0 with a base name of opexample2 would be block0--opexample2. This allows for
+regex matching that can be globally distinct within a workload.
+
+
The rules for tag filtering are explained in depth in the Op Tags section
+of the Workloads 101 tutorial.
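+
For instance, a common use is to select only the op templates in a given block. The exact filter
+expressions you need depend on how your workload template names its blocks and ops:
+
tags=block:main         # run only the op templates in the 'main' block
+tags='name:block0--.*'  # regex match against the auto-assigned op names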
+
threads
+
+
threads=<threads>
+
default: 1
+
required: no
+
dynamic: yes
+
+
You should set the threads parameter when you need to ramp up a workload.
+
This controls how many threads will be started up for an activity to run its cycles with.
+
default value : For now, the default is simply 1. Users must be aware of this setting and
+adjust it to a reasonable value for their workloads.
+
threads=auto : Set the number of threads to 10x the number of cores in your system. There is
+no distinction here between full cores and hardware threads. This is generally a reasonable
+number of threads to tap into the processing power of a client system.
+
threads=__x : When you set threads=5x or threads=10x, the number of threads will be
+set to that multiple of the logical CPUs in the local system.
+
A good rule of thumb for setting threads for maximum effect is to set it relatively high, such
+as 10x the vCPU count when running synchronous workloads (when not providing the async parameter),
+and 5x the vCPU count for async workloads. Variation in system dynamics makes it difficult to peg an
+ideal number, so experimentation is encouraged while you dial in your settings initially.
+
examples
+
+
threads=30x - set the number of threads in an activity to $30 * cores$.
+
threads=auto - set the number of threads in an activity to $10 * cores$.
+
threads=10 - set the number of threads in an activity to $10$.
+
+
cycles
+
+
cycles=<cycle count>
+
cycles=<cycle min>..<cycle max>
+
default: 1
+
required: no
+
dynamic: no
+
+
The cycles parameter determines the starting and ending point for an activity. It determines
+the range of values which will act as seed values for each operation. For each cycle of the
+activity, a statement is built from a statement template and executed as an operation.
+
For each cycle in an activity, the cycle is used as the input to the binding functions. This
+allows you to create numerical relationships between all of the data used in your activity.
+
If you do not set the cycles parameter, then it will automatically be set to the size of the
+sequence. The sequence is simply the length of the op sequence that is constructed from the
+active op templates and ratios in your activity.
+
You should set the cycles for every activity except for schema-like activities, or activities
+which you run just as a sanity check of active statements.
+
In the cycles=<cycle count> version, the count indicates the total number of cycles, and is
+equivalent to cycles=0..<cycle max>. In both cases, the max value is not the actual number
+of the last cycle. This is because all cycle parameters define a closed-open interval. In other
+words, the minimum value is either zero by default or the specified minimum value, but the
+maximum value is the first value not included in the interval. This means that you can easily
+stack intervals over subsequent runs while knowing that you will cover all logical cycles
+without gaps or duplicates. For example, given cycles=1000 and then cycles=1000..2000, and
+then cycles=2000..5K, you know that all cycles between 0 (inclusive) and 5000 (exclusive)
+have been specified.
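+
Expressed as successive invocations (with the driver and other params elided here), that stacking
+might look like:
+
nb5 run driver=... cycles=1000
+nb5 run driver=... cycles=1000..2000
+nb5 run driver=... cycles=2000..5K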
+
examples
+
+
cycles=500 - run an activity over cycles $[0,500)$, including $0$ and $499$, but not $500$.
+
cycles=20M - run an activity over cycles $[0,20000000)$.
+
cycles=2k..3k - run an activity over cycles $[2000,3000)$.
+
+
errors
+
+
errors=<error handler spec>
+
default: errors=stop
+
required: no
+
dynamic: no
+
+
This activity param allows you to specify what happens when an exception is thrown during
+execution of an operation (within a cycle). You can configure any named exception to be handled
+with any of the available handler verbs, in the order of your choosing.
+
👉 By default, any single error in any operation will cause your test to stop. This is not
+generally what you want to do for significant test scenarios.
+
You generally want to configure this so that you can run an activity as long as needed without a
+single error stopping the whole thing. However, it is important for users to know exactly how
+this is configured, so it is up to the user to set this appropriately.
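+
For example, a more lenient configuration which counts and warns about every error instead of
+stopping could look like this (using the verb-only shorthand described in the error handlers topic):
+
errors=counter,warn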
+
The detailed configuration of error handlers is covered in the
+error handlers topic.
+
Diagnostic
+
These params allow you to see more closely how an activity works for the purpose of
+troubleshooting or test verification.
+
dryrun
+
+
dryrun=<stepname>
+
default: unset
+
required: no
+
dynamic: no
+
+
This option is checked at various stages of activity initialization in order to modify the
+way an activity runs. Some of the dryrun options stop an activity and dump out a summary of some
+specific step. Others wrap normal mechanisms in a
+noop in order to exercise other parts of the
+machinery at full speed.
+
examples
+
+
dryrun=jsonnet - When rendering a jsonnet workload, dump the result to the console and exit.
+
dryrun=op - Wrap the operation in a noop, to measure core nb5 execution speed without
+invoking operations.
+
+
Metrics
+
alias
+
+
alias=<alias>
+
default: inferred from yaml, or 'UNSET'
+
required: no
+
dynamic: no
+
+
You should set the alias parameter when you have multiple activities, when you want to name
+metrics per-activity, or when you want to control activities via scripting.
+
Each activity can be given a symbolic name known as an alias. It is good practice to give all
+your activities an alias, since this determines the name used in logging, metrics, and even
+scripting control.
+
default value : The name of any provided YAML filename is used as the basis for the default
+alias. Otherwise, the activity type name is used. This is a convenience for simple test
+scenarios only.
+
If you are using the --docker-metrics option, then you may want to ensure that you are either
+using named scenarios or setting your activity to use the same alias expansion pattern that it
+does: WORKLOAD_SCENARIO_STEP. These values are not interpolated for you at this time.
+
examples
+
+
alias=mytest2 - set the activity alias to mytest2
+
alias=workload42_scenario20230101_step5 - conform to the current --docker-metrics expansion
+template
+
+
instrument
+
+
instrument=<boolean>
+
default: false
+
required: no
+
dynamic: no
+
+
This activity param allows you to set the default value for the
+instrument op field.
+
hdr_digits
+
+
hdr_digits=<num digits>
+
default: 4
+
required: no
+
dynamic: no
+
+
This parameter determines the number of significant digits used in all HDR
+histograms for metrics collected from this activity. The default of 4
+allows 4 significant digits, which means up to 10000 distinct histogram
+buckets per named metric, per histogram interval. This does not mean that
+there will be 10000 distinct buckets, but it means there could be if
+there is significant volume and variety in the measurements.
+
If you are running a scenario that creates many activities, then you can
+set hdr_digits=1 on some of them to save client resources.
+
Customization
+
cyclerate
+
+
cyclerate=<cycle per second>
+
cyclerate=<cycles per second>,<burst_ratio>
+
default: unset
+
required: no
+
dynamic: yes
+
+
The cyclerate parameter sets a maximum op rate for individual cycles within the activity,
+across the whole activity, irrespective of how many threads are active.
+
👉 The cyclerate is a rate limiter, and can thus only throttle an activity to be slower than it
+would otherwise run. Rate limiting is also an invasive element in a workload, and will always
+come at a cost. For extremely high throughput testing, consider carefully whether your testing
+would benefit more from concurrency-based throttling, such as adjusting the number of threads.
+
When the cyclerate parameter is provided, two additional metrics are tracked: the wait time and
+the response time. See Timing Terms Explained for more details.
+
When you try to set very high cyclerate values on systems with many cores, the performance will
+degrade. Be sure to use the dryrun features to test this if you think it is a limitation. You can
+always set the rate higher than the rate limiter can sustain. This is like telling it to
+get in the way and then get out of the way even faster. This is just the nature of this type of
+rate limiter.
+
There are plans to make the rate limiter adaptive across a wider variety of performance
+scenarios, which will improve this.
+
examples
+
+
cyclerate=1000 - set the cycle rate limiter to 1000 ops/s and a
+default burst ratio of 1.1.
+
cyclerate=1000,1.0 - same as above, but with burstrate set to 1.0
+(use it or lose it, not usually desired)
+
cyclerate=1000,1.5 - same as above, with burst rate set to 1.5 (aka
+50% burst allowed)
+
+
burst ratio
+
This is only an optional part of the cyclerate as shown in the examples above.
+If you do not specify it when you initialize a cyclerate, then it defaults to
+1.1. The burst ratio is only valid as part of a rate limit and can not be
+specified by itself.
+
+
default: 1.1
+
dynamic: yes
+
+
The NoSQLBench rate limiter provides a sliding scale between strict rate
+limiting and average rate limiting. The difference between them is
+controlled by a burst ratio parameter. When the burst ratio is 1.0
+(burst up to 100% relative rate), the rate limiter acts as a strict rate
+limiter, disallowing faster operations from using time that was previously
+forfeited by prior slower operations. This is a "use it or lose it" mode
+that means things like GC events can steal throughput from a running
+client as a necessary effect of losing time in a strict timing sense.
+
When the burst ratio is set to higher than 1.0, faster operations may
+recover lost time from previously slower operations. For example, a burst
+ratio of 1.3 means that the rate limiter will allow bursting up to 130% of
+the base rate, but only until the average rate is back to 100% relative
+speed. This means that any valleys created in the actual op rate of the
+client can be converted into plateaus of throughput above the strict rate,
+but only at a speed that fits within (op rate * burst ratio). This allows
+for workloads to approximate the average target rate over time, with
+controllable bursting rates. This ability allows for near-strict behavior
+while allowing clients to still track truer to rate limit expectations, so
+long as the overall workload is not saturating resources.
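+
As a concrete illustration of the arithmetic (the numbers are hypothetical):
+
cyclerate=1000,1.3
+# base rate:     1000 ops/s
+# burst ceiling: 1000 * 1.3 = 1300 ops/s
+# bursting above 1000 ops/s is only allowed while the average rate is
+# still below 1000 ops/s, i.e. while recovering previously lost time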
+
👉 The default burst ratio of 1.1 makes testing results slightly more stable
+on average, but can also hide some short-term slow-downs in system
+throughput. It is set at the default to fit most tester's expectations for
+averaging results, but it may not be strict enough for your testing
+purposes. However, a strict setting of 1.0 nearly always adds cold/startup
+time to the result, so if you are testing for steady state, be sure to
+account for this across test runs.
+
striderate
+
+
striderate=<strides per second>
+
striderate=<strides per second>,<burst_ratio>
+
default: unset
+
required: no
+
dynamic: yes
+
+
The striderate parameter allows you to limit the start of a stride
+according to some rate. This works almost exactly like the cyclerate
+parameter, except that it blocks a whole group of operations from starting
+instead of a single operation. The striderate can use a burst ratio just
+as the cyclerate.
+
This sets the target rate for strides. In NoSQLBench, a stride is a group
+of operations that are dispatched and executed together within the same
+thread. This is useful, for example, to emulate application behaviors in
+which some outside request translates to multiple internal requests. It is
+also a way to optimize a client runtime for more efficiency and
+throughput. The stride rate limiter applies to the whole activity
+irrespective of how many threads it has.
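+
For example, striderate=100 would limit the activity to starting at most 100 strides per second,
+with the default burst ratio of 1.1; the specific value here is purely illustrative.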
+
WARNING:
+When using the cyclerate and striderate options together, operations are
+delayed based on both rate limiters. If the relative rates are not
+synchronised with the size of a stride, then one rate limiter will
+artificially throttle the other. Thus, it usually doesn't make sense to
+use both of these settings in the same activity.
+
stride
+
+
stride=<stride>
+
default: same as op sequence length
+
required: no
+
dynamic: no
+
+
Usually, you don't want to provide a setting for stride, but it is still important to
+understand what it does. Within NoSQLBench, each time a thread needs to allocate a set of
+cycles to run, it takes a contiguous range of values from an activity-wide source, usually an
+atomic sequence. Thus, the stride is the unit of micro-batching within NoSQLBench. It also
+means that you can use stride to optimize a workload by setting the value higher than the
+default. For example, if you are running a single-statement workload at a very high rate, it
+doesn't make sense for threads to allocate one op at a time from a shared atomic value. You can
+simply set stride=1000 to cause (ballpark estimation) about 1000X less internal contention.
+The stride is initialized to the calculated sequence length. The sequence length is simply the
+number of operations in the op sequence that is planned from your active statements and their
+ratios.
+
You usually do not want to set the stride directly. If you do, make sure it is a multiple of
+what it would normally be set to if you need to ensure that sequences are not divided up
+differently. This can be important when simulating the access patterns of applications.
+
examples
+
+
stride=1000 - set the stride to 1000
+
+
seq
+
+
seq=<bucket|concat|interval>
+
default: seq=bucket
+
required: no
+
dynamic: no
+
+
The seq=<bucket|concat|interval> parameter determines the type of
+sequencing that will be used to plan the op sequence. The op sequence is a
+look-up-table that is used for each stride to pick statement forms
+according to the cycle offset. It is simply the sequence of statements
+from your YAML that will be executed, but in a pre-planned, and highly
+efficient form.
+
An op sequence is planned for every activity. With the default ratio on
+every statement as 1, and the default bucket scheme, the basic result is
+that each active statement will occur once in the order specified. Once
+you start adding ratios to statements, the most obvious thing that you
+might expect will happen: those statements will occur multiple times to
+meet their ratio in the op mix. You can customize the op mix further by
+changing the seq parameter to concat or interval.
+
👉 The op sequence is a look-up table of op templates, not
+individual statements or operations. Thus, the cycle still determines the
+uniqueness of an operation as you would expect. For example, if statement
+form ABC occurs 3x per sequence because you set its ratio to 3, then each
+of these would manifest as a distinct operation with fields determined by
+distinct cycle values.
+
There are three schemes to pick from:
+
bucket
+
This is a round-robin planner which draws operations from buckets in
+circular fashion, removing each bucket as it is exhausted. For example,
+the ratios A:4, B:2, C:1 would yield the sequence A B C A B A A. The
+ratios A:1, B:5 would yield the sequence A B B B B B.
+
concat
+
This simply takes each statement template as it occurs in order and
+duplicates it in place to achieve the ratio. The ratios above (A:4, B:2,
+C:1) would yield the sequence A A A A B B C for the concat sequencer.
+
interval
+
This is arguably the most complex sequencer. It takes each ratio as a
+frequency over a unit interval of time, and apportions the associated
+operation to occur evenly over that time. When two operations would be
+assigned the same time, then the order of appearance establishes
+precedence. In other words, statements appearing first win ties for the
+same time slot. The ratios A:4 B:2 C:1 would yield the sequence A B C A A
+B A. This occurs because, over the unit interval (0.0,1.0), A is assigned
+the positions A: 0.0, 0.25, 0.5, 0.75, B is assigned the
+positions B: 0.0, 0.5, and C is assigned position C: 0.0. These
+offsets are all sorted with a position-stable sort, and then the
+associated ops are taken as the order.
+
In detail, the rendering appears
+as 0.0(A), 0.0(B), 0.0(C), 0.25(A), 0.5(A), 0.5(B), 0.75(A), which
+yields A B C A A B A as the op sequence.
+
This sequencer is most useful when you want a stable ordering of operations
+from a rich mix of statement types, where each operation is spaced as
+evenly as possible over time, and where it is not important to control the
+cycle-by-cycle sequencing of statements.
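+
To make the three schemes concrete, here is a minimal sketch of op templates carrying the ratios
+used in the examples above (the op bodies are placeholders):
+
ops:
+  A:
+    op: "A\n"
+    ratio: 4
+  B:
+    op: "B\n"
+    ratio: 2
+  C:
+    op: "C\n"
+    ratio: 1
+
+# seq=bucket   -> A B C A B A A
+# seq=concat   -> A A A A B B C
+# seq=interval -> A B C A A B A
+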
Some op template fields are reserved by nb5. These are provided by the runtime, not any
+particular driver, and can be used in any op template.
+
👉 op fields can be defined at any level of a workload template with a param property. Op
+templates which do not have this op field by name will automatically inherit it.
+
General
+
driver
+
+
default: unset
+
required: yes, by op template or by activity params
+
dynamic: no
+
+
Each op template in an activity can use a specific driver. If this op field is not provided in
+the op template, then it is set by default from the activity params. If neither is set, an error
+is thrown.
+
Since each op template can have a unique driver, and each activity can have multiple op
+templates, each activity may have multiple drivers active while it is running. These drivers
+are instanced and shared between op templates which specify the same driver by name.
+
During activity initialization, all the drivers which are loaded by active op templates (those
+not filtered out) are consulted for valid activity params. Only params which are valid for at least
+one active driver will be allowed to be set on the activity. This includes core activity
+params.
+
space
+
+
default: "default"
+
required: yes, but set by default
+
dynamic: yes
+
+
The space is a named cache of driver state. For each driver, a cache of driver-specific "driver
+space" objects is kept. If the value is not set in the op template, then the effect is the same
+as all op templates sharing a single instance of a driver for a given name (where the name is
+the same for multiple op templates). However, if the user sets the space op field to
+a binding, then the driver will be virtualized over the names provided, allowing a given
+driver to be effectively multi-instanced within the activity.
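+
As a sketch of that idea (the binding function and field format here are illustrative, so check
+your driver's documentation for specifics), an op template could spread its operations across a
+small number of driver instances like this:
+
ops:
+  op1:
+    op: "..."
+    space: "{instance}"
+bindings:
+  instance: Mod(4); ToString();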
+
👉 Be careful with this op field! The way it works allows for quite advanced testing
+scenarios to be built with very minimal effort, compared to nearly all other approaches.
+However, if you set this op field to a binding function which produces high-cardinality values,
+you will be asking your client to create many instances of a native driver. This is not likely
+to end well for at least the client, and in some cases the server. This does present interesting
+stress testing scenarios, however!
+
When an activity is shutting down, it will automatically close out any driver spaces
+according to their own built-in shutdown logic, but not until the activity is complete. At
+present, there is no space cache expiry mechanism, but this can be added if someone
+needs it.
+
ratio
+
An op field called ratio can be specified on an op template to set the number of times this op
+will occur in the op sequence.
+
When an activity is initialized, all the active statements are combined into a sequence based
+on their relative ratios. By default, all op templates are initialized with a ratio of
+1 if none is specified by the user.
+
For example, consider the op templates below:
+
+ops:
+ s1:
+  op: "select foo,bar from baz where ..."
+  ratio: 1
+ s2:
+  op: "select bar,baz from foo where ..."
+  ratio: 2
+ s3:
+  op: "select baz,foo from bar where ..."
+  ratio: 3
+
+
If all ops are activated (there is no tag filtering), then the activity will be initialized
+with a sequence length of 6. In this case, the relative ratio of op "s3" will be 50% overall.
+If you filtered out the first op, then the sequence would be 5 operations long. In this case,
+the relative ratio of op "s3" would be 60% overall. It is important to remember that op ratios
+are always relative to the total sum of the active ops' ratios.
+
This op field works closely with the core activity
+parameter seq.
+
Instrumentation
+
instrument
+
By setting this to true, each named op template will be instrumented with a set of metrics, with
+the metric name derived from its op name.
+
For example, consider a workload template along the following lines, with two op templates named
+op1 and op2 and instrumentation enabled for both (the statement bodies are placeholders):
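+
# a sketch; statement bodies are placeholders
+params:
+  instrument: true
+ops:
+  op1:
+    op: "select foo,bar from baz where ..."
+  op2:
+    op: "select bar,baz from foo where ..."
+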
With instrument enabled for each of these ops, six additional metrics will be created:
+four timers named op1-success, op1-error, op2-success, and op2-error, and two
+histograms named op1-result-size and op2-result-size.
+
This is very useful for understanding performance dynamics of individual operations. However, be
+careful when enabling this for a large number of metrics (by setting it as a doc or block level
+param), especially when you are running with more than 3 significant digits of HDR histogram
+precision.
+
start-timers
+
stop-timers
+
These op fields allow a timer or set of timers to be started immediately before an operation is
+started and stopped immediately after another (or the same!) operation is completed. This allows
+you to instrument your access patterns with arbitrary timers across any number of operations.
+
These timers are started and stopped unconditionally, which means failed operations will be
+included. Be sure to correlate your metrics so you know what you are truly measuring.
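+
For example, consider three ops instrumented like the following sketch (the statement bodies are
+placeholders, and the exact list syntax for multiple timer names may vary):
+
ops:
+  op1:
+    op: "..."
+    start-timers: stanza1,stanza2
+  op2:
+    op: "..."
+    stop-timers: stanza1
+  op3:
+    op: "..."
+    stop-timers: stanza2
+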
In this case, before op1 is executed, a timer is started for stanza1 and stanza2. After op2 has
+been executed, successful or not, the timer for stanza1 will be stopped. After
+op3 has been executed, successful or not, the timer named stanza2 will be stopped.
+
These are treated just like any other timers, with a single named instance per activity, thus
+the measurements are an aggregate over all threads.
+
👉 The start/stop pairing of these named timers is tracked per-thread! There is no way to cross the
+streams, so measurements are coherent within serialized operations which represent real access
+patterns in a given application thread.
+
Result Verification
+
You can now verify results of operations using property-based assertions or result equality.
+These methods use a compiled script which has access to binding variables in the same way that
+op templates use them, as bind points like ... {mybinding} .... This means that you can write
+script for verification logic naturally. The verification script is parsed and compiled ahead of
+time, with full awareness of the bindings which need to be generated before per-cycle evaluation.
+
The verifier is implemented in groovy 4, and is thus compatible with typical Java forms. It also
+allows for some terse and simplified views for assertion-based testing. Consult the Groovy
+Language Documentation or
+the Groovy API docs for more details on the
+language.
+
verifier variables
+
Within the scripting environment of the verifier, you can access some pre-defined variables:
+
+
result - The result of the last operation. This value is provided optionally by different
+drivers, so if you are using a verifier, ensure that the driver adapter you are using is
+compatible
+
cycle - The cycle number associated with the op.
+
_parsed_op - The op template in full-parsed form. This can be used for things like naming
+or labeling data for metrics, or to make some verifier logic conditional on other fields.
+
bindings - any binding variables which are defined for your op template can be used. You
+reference these just as in op templates, like {mybindingvalue}. These are computed and
+injected per-cycle.
+
+
verifier
+
Using the result variable, you can make your assertion logic read like what it does. For
+example, if you want to verify that the result of an operation is equal to the string "this
+worked 42!", you can specify something like this:
+
ops:
+ op1:
+  stmt: "this worked 42!\n"
+  verifier: |
+   result.equals("this worked 42!\n");
+
+
The verifier allows you to use bindings in exactly the same format as your string-based op
+templates:
+
ops:
+ op1:
+  stmt: "this worked {numname}!\n"
+  verifier: |
+   result.equals("this worked ${numname}!\n");
+bindings:
+ numname: NumberNameToString();
+
+
This example doesn't do much like a real test would, since it is simply asserting that the
+result looks the way we know it should. However, this mechanism can be used in any scenario
+where you know a property or feature of a result that you can check for to verify correctness.
+
The verifier can be specified as a string, list, or map structure. In each case, the form is
+interpreted as a sequence of named verifiers. In the string or list forms, names are created for
+you. These names may be used in logging or other views needed to verify or troubleshoot the actual
+logic of your verifier script.
+
When multiple verifiers are supplied, they are executed each in turn. This means that errors will
+present distinctly when verifiers are separated for clarity.
+
All verifier execution contexts share the same compiled script for a given verifier code body, but
+each thread has its own instanced variable state, including results. However, the variables
+which were present after any verifier-init code are injected into the initial context for each
+instance.
+
expected-result
+
If you want to test with a more concise and declarative form, and your result content isn't complex,
+you can use the expected-result op field instead. This form allows you to prototype an object
+in declarative or literal form which can then be checked against a result using Java equals
+semantics. For example, to verify the same result as shown with the verifier above, but in a
+simpler form, you could do this:
+
ops:
+ op1:
+  stmt: "this worked {numname}!\n"
+  expected-result:"this worked "+{numname}+"!\n"
+  bindings:
+   numname: NumberNameToString();
+
+
Since the expected-result value is rendered by active code, you must treat it as code where the
+bind points are simply injected variables. This form can also use container types and inline or
+literal forms.
+
verifier-imports
+
For the verifier capabilities explained above, you may need to import symbols from packages in
+your runtime. This op field allows you to do so. These imports will apply equally to any per-cycle
+verification logic for the given op template, and only need to be specified once (per op template).
+
verifier-init
+
Sometimes you want to initialize your verifier logic once before you invoke it every cycle. Any
+verifier code provided in verifier-init fields is run exactly this way. The variable bindings
+which are created here are persisted and injected into every other verifier as such. This allows
+you to create instrumentation, for example.
NoSQLBench v5 has a standard error handler mechanism which is available to all drivers.
+
If no error handler is configured for an activity, then the default error handler is used:
+
errors=stop
+
+
This error handler is a modular and highly configurable error handler with a very basic set of
+defaults: If you don't configure it, then any error thrown by an activity will cause it to
+stop. This is indicated by the default errors=stop.
+
The default configuration is just a fail-fast default. It is the simplest possible error handler
+configuration.
+
Handler Verbs
+
The currently supported handler verbs, like stop above, are:
+
+
ignore - If an error matches this verb in a handler chain, then it will be silently ignored, and it
+will not be retried (combining ignore with retry, however, will cause it to be silently retried).
+
counter - Count each uniquely named error with a counter metric.
+
histogram - Track the session time of each uniquely named error with a histogram.
+
meter - Meter each uniquely named error with a meter metric.
+
retry - Mark the error as retryable. If an activity has retries available, the operation will
+be retried. By default, activities which allow ops to be retried will have maxtries=10.
+
stop - Allow the error to propagate through the stack to cause the activity to be stopped.
+
timer - Count, meter, and track session times of each uniquely named error with a timer metric,
+which combines the three forms above.
+
warn - Log a warning to the log with the error details.
+
code - Assign a specific result code to any matching errors. You can configure this as
+shorthand in a handler list as just a number: errors=RuntimeException:33,warn
+
+
You can use any of these verbs in any order in a handler list.
+
Chains of Handler Lists
+
The structure of a handler is a list of lists. In order to make this easier to discuss in terms of
+configuration, the top level is called the Handler Chain and the individual handler lists at each
+link in the chain are called Handler Verbs.
+
The first matching set of handler verbs is used. When an exception is thrown, the handler chain
+is walked until a matching handler is found. If none are found, then no handlers apply, which is
+the same as assigning ignore, which does nothing. If you want a default handler list to apply
+to otherwise-unmatched exceptions, be sure to add one at the end for anything matching .*.
+
The illustration below shows the handler chain structure for the handler configuration shown.
In this example, there are three handler configurations. When an error is thrown by an activity, the
+handler chain is called. When a matching handler is found by matching any of the error patterns
+against the exception name, that column (the handler list) is selected, and each handler in that
+list is applied in order to the error.
+
Specifically, the Java class name of the exception type is matched against the error patterns going
+left to right. The first one that matches selects the error handler list to use.
+
Error Handler Configuration
+
The default setting of errors=stop uses a shorthand form for specifying error handlers. The
+parameter name errors will become the universal activity parameter for configuring error handling
+going forward.
+
errors=stop is shorthand for errors=.*:stop. This is simply a single handler list which has the
+default wildcard error matcher.
+
A handler definition is thus comprised of the error matching patterns and the error handling verbs
+which should be applied when an error matches the patterns. If the error matching patterns are not
+provided, then the default wildcard pattern and delimiter .*: is automatically prepended.
+
Error Pattern Formats
+
An error pattern is simply a regular expression, although the characters are limited intentionally
+to a subset: [a-zA-Z0-9.-_*+]. Multiple patterns may be used, like Missing.*,RuntimeEx.*.
+
More customizable ways to map an error to a particular handler list may be provided if/when needed.
+
Handler Verb Formats
+
A specific handler list is defined as a set of handler verbs separated by commas. Alternately,
+handler verbs may be blocks of JSON or other standard NoSQLBench encoding formats, as long as they
+are protected by quotes:
+
# basic verb-only form
+counter,warn
+
+# using JSON
+"{\"handler\"=\"counter\"},{\"handler\"=\"warn\"}"
+
+# using simplified params form
+"handler=counter,handler=warn,handler=code code=42"
+
+
This shows that handler verbs are really just shorthand for more canonical object definitions which
+have their own properties. The handler property is the one that selects which handler implementation
+to use. Each handler implementation may have its own options. Those will be documented as they are
+added.
+
Building Handler Chains
+
To construct a handler entry, simply concatenate the error pattern to the verbs using a colon.
+
To have multiple handler entries, concatenate them in the order of your choosing with semicolons.
+
Examples
+
Count (in metrics) all occurrences of exceptions named Missing.* (anything matching the regex), but
+count and warn about anything matching the exception name RuntimeEx.*. Ignore everything else.
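+
Following the construction rules above, one way to express that configuration is (a sketch based
+on the documented verbs and pattern syntax):
+
errors=Missing.*:counter;RuntimeEx.*:counter,warn;.*:ignore
+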
NoSQLBench comes with a built-in helper to get you up and running quickly with client-side testing
+metrics. This functionality is based on docker, and a built-in method for bringing up a docker
+stack, automated by NoSQLBench.
+
WARNING:
+This feature requires that you have docker running on the local system and that your user is in a
+group that is allowed to manage docker. Using the --docker-metrics command will attempt to
+manage docker on your local system.
+
To ask NoSQLBench to stand up your metrics infrastructure using a local docker runtime, use this
+command line option with any other NoSQLBench commands:
+
--docker-metrics
+
+
When this option is set, NoSQLBench will start graphite, prometheus, and grafana automatically on
+your local docker, configure them to work together, and configure itself to send metrics to that
+system automatically.
+
Annotations
+
As part of this integration, the internal annotation facility for NoSQLBench is also pointed at the
+grafana instance. Several life-cycle events are reported, in both instant and span form. For
+example, when an activity is stopped, an annotation is recorded with its parameters, start time,
+end time, and so on. The built-in dashboards have support for toggling these annotations as a
+way to provide traceability to test scenarios and events.
+
Using a Remote Dashboard
+
If you have started a dashboard docker stack as described above, then you can also run clients
+in a mode where the metrics and annotations are forwarded to it. In order to do that, simply add
+this to your command line:
+
--docker-metrics-at <host>
+
Short options, like '-v', represent simple options, like verbosity. Using multiples increases the
+level of the option, like '-vvv'.
+
Long options, like '--help', are top-level options that may only be used once. These modify general
+behavior, or allow you to get more details on how to use nb5.
+
All other options are either commands, or named arguments to commands.
+
+
Any single word without dashes is a command that will be converted into script form.
+
Any option that includes an equals sign is a named argument to the previous command.
+
+
The following example is a commandline with a command start, and two named arguments to that command.
+
nb5 start driver=diag alias=example
+
+
Discovery options
+
These options help you learn more about running nb5, and about the plugins that are present in your
+particular version.
+
Get a list of additional help topics that have more detailed documentation:
+
nb5 help topics
+
+
Provide specific help for the named activity type:
+
nb5 help <activity type>
+
+
List the available drivers:
+
--list-drivers
+
+
List the available scenarios:
+
--list-scenarios
+
+
List only the available workloads which contain the above scenarios:
+
--list-workloads
+
+
Copy a workload or other file to your local directory as a starting point:
+
--copy <name>
+
+
Provide the metrics that are available for scripting:
This is how you actually tell nb5 what scenario to run. Each of these commands appends script logic
+to the scenario that will be executed. These are considered as commands, can occur in any order and
+quantity. The only rule is that arguments in the arg=value form will apply to the preceding script
+or activity.
+
Add the named script file to the scenario, interpolating named parameters:
+
script <script file> [arg=value]...
+
+
Add the named activity to the scenario, interpolating named parameters
+
activity [arg=value]...
+
+
General options
+
These options modify how the scenario is run.
+
Specify a directory for scenario log files:
+
--logs-dir <dirname>
+
+
Specify a limit on logfiles (old files will be purged):
+
--logs-max <count>
+
+
Specify the priority level of file logs:
+
--logs-level <level>
+
+
where <level> can be one of OFF, ERROR, WARN, INFO, DEBUG, TRACE, or ALL
--logfile-pattern '%date %level [%thread] %logger{10} [%file:%line] %msg%n'
+--logfile-pattern 'VERBOSE'
+
+# See https://logging.apache.org/log4j/2.x/manual/layouts.html#Pattern_Layout
+# These shortcuts are allowed
+TERSE %8r %-5level [%t] %-12logger{0} %msg%n%throwable
+VERBOSE %d{DEFAULT}{GMT} [%t] %logger %-5level: %msg%n%throwable
+TERSE-ANSI %8r %highlight{%-5level} %style{%C{1.} [%t] %-12logger{0}} %msg%n%throwable
+VERBOSE-ANSI %d{DEFAULT}{GMT} [%t] %highlight{%logger %-5level}: %msg%n%throwable
+# ANSI variants are auto promoted for console if --ansi=enable
+# ANSI variants are auto demoted for logfile in any case
+
+
Explicitly enable or disable ANSI logging support
+(ANSI support is enabled if the TERM environment variable is defined):
+
--ansi=enabled
+--ansi=disabled
+
+
Specify a directory and enable CSV reporting of metrics:
+
--report-csv-to <dirname>
+
+
Specify the graphite destination and enable reporting
+
--report-graphite-to <addr>[:<port>]
+
+
Specify the interval for graphite or CSV reporting in seconds:
+
--report-interval 10
+
+
Specify the metrics name prefix for graphite reporting:
--log-histostats stats.csv
+--log-histostats 'stats.csv:.*' # same as above
+--log-histostats 'stats.csv:.*:1m' # with 1-minute interval
+--log-histostats 'stats.csv:.*specialmetrics:10s'
+
+
Adjust the HDR histogram precision:
+
--hdr-digits 3
+
+
The default is 3 digits, which creates 1000 equal-width histogram buckets for every named metric in
+every reporting interval. For longer running test or for test which require a finer grain of
+precision in metrics, you can set this up to 4 or 5. Note that this only sets the global default.
+Each activity can also override this value with the hdr_digits parameter. Be aware that each
+increase in this number multiples the amount of detail tracked on the client by 10x, so use
+caution.
+
Adjust the progress reporting interval:
+
--progress console:1m
+
+
or
+
--progress logonly:5m
+
+
👉 The progress indicator on console is provided by default unless logging levels are turned up
+or there is a script invocation on the command line.
+
If you want to add in classic time decaying histogram metrics for your histograms and timers, you
+may do so with this option:
+
--classic-histograms prefix
+--classic-histograms 'prefix:.*' # same as above
+--classic-histograms 'prefix:.*specialmetrics' # subset of names
+
+
Name the current session, for logfile naming, etc. By default, this will be "scenario-TIMESTAMP", and
+a logfile will be created for this name.
+
--session-name <name>
+
+
Enlist nosqlbench to stand up your metrics infrastructure using a local docker runtime:
+
--docker-metrics
+
+
When this option is set, nosqlbench will start graphite, prometheus, and grafana automatically on
+your local docker, configure them to work together, and point nosqlbench to send metrics to that
+system automatically. It also imports a base dashboard for nosqlbench and configures grafana snapshot
+export to share with a central DataStax grafana instance (grafana can be found on localhost:3000
+with the default credentials admin/admin).
+
Console Options
+
Increase console logging levels: (Default console logging level is warning)
+
-v (info)
+-vv (debug)
+-vvv (trace)
+
+--progress console:1m (disables itself if -v options are used)
+
+
These levels affect only the console output level. Other logging level parameters affect logging
+to the scenario log, stored by default in logs/...
+
Show version, long form, with artifact coordinates.
+
--version
+
+
Summary Reporting
+
The classic metrics logging format is used to report results into the logfile for every scenario.
+This format is not generally human-friendly, so a better summary report is provided by default,
+written to the console and/or to a specified summary file.
+
Examples:
+
# report to console if session ran more than 60 seconds
+--report-summary-to stdout:60
+
+# report to auto-named summary file for every session
+--report-summary-to _LOGS_/_SESSION_.summary
+
+# do both (the default)
+--report-summary-to stdout:60,_LOGS_/_SESSION_.summary
+
+
Values of stdout or stderr send summaries directly to the console, and any other pattern is
+taken as a file name.
+
You can use _SESSION_ and _LOGS_ to automatically name the file according to the current session
+name and log directory.
+
The reason for the optional timing parameter is to allow for results of short scenario runs to be
+squelched. Metrics for short runs are not generally accurate nor meaningful. Spamming the console
+with boilerplate in such cases is undesirable. If the minimum session length is not specified, it
+is assumed to be 0, meaning that a report will always show on that channel.
All metrics collected from activities are recorded in nanoseconds and ops per second. All histograms
+are recorded with 3 digits of precision using HDR histograms unless otherwise modified with the
+global --hdr-digits option or the activity-specific hdr_digits activity param.
+
Metric Outputs
+
Metrics from a scenario run can be gathered in multiple ways:
+
+
In the log output
+
In CSV files
+
In HDR histogram logs
+
In Histogram Stats logs (CSV)
+
To a monitoring system via graphite
+
via the --docker-metrics option
+
remotely via the --docker-metrics-at <host> option
+
+
When --docker-metrics or --docker-metrics-at <host> methods are used, they take over the global
+options for the other methods. Apart from these, the other methods may generally be combined. The
+command line options for enabling these are documented in the command line docs,
+although some examples of these may be found below.
+
Metrics via Graphite
+
If you like to have all of your testing data in one place, then you may be interested in reporting
+your measurements to a monitoring system. For this, NoSQLBench includes a
+Metrics Library. Graphite reporting is baked in as the
+default reporter.
+
In order to enable graphite reporting, use one of these option formats:
+
--report-graphite-to <addr>
+--report-graphite-to <addr>:<port>
+
+
Core metrics use the prefix nosqlbench by default. You can override this with the
+--metrics-prefix option:
+
--metrics-prefix myclient.group5
+
+
Identifiers
+
Metrics associated with a specific activity will have the activity alias in their name. There is a
+set of core metrics which are always present regardless of the activity type. The names and types of
+additional metrics provided for each activity type vary.
+
Sometimes, an activity type will expose metrics on a per-statement basis, measuring over all
+invocations of a given statement as defined in the YAML. In these cases, you will see --
+separating the name components of the metric. At the most verbose, a metric name could take on the
+form like
+<activity>.<docname>--<blockname>--<statementname>--<metricname>, although this is rare when you
+name your statements, which is recommended. Just keep in mind that the double dash connects an
+activity's alias with named statements within
+that activity.
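+
As a purely illustrative sketch (the activity alias, document, block, and statement names below
+are made up), a fully qualified name for the standard result timer could look like:
+
example.myworkload--main--insert-user--result
+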
+
HDR Histograms
+
Recording HDR Histogram Logs
+
You can record details of histograms from any compatible metric (histograms and timers) with an
+option like this:
+
--log-histograms hdrdata.log
+
+
If you want to record only certain metrics in this way, then use this form:
+
--log-histograms 'hdrdata.log:.*suffix'
+
+
Notice that the option is enclosed in single quotes. This is because the second part of the option
+value is a regex. The
+'.*suffix' pattern matches any metric name that ends with "suffix". Effectively, leaving out the
+pattern is the same as using '.*', which matches all metrics. Any valid regex is allowed here.
+
Metrics may be included in multiple logs, but care should be taken not to overdo this. Keeping
+higher fidelity histogram reservoirs does come with a cost, so be sure to be specific in what you
+record as much as possible.
+
If you want to specify the recording interval, use this form:
+
--log-histograms 'hdrdata.log:.*suffix:5s'
+
+
If you want to specify the interval, you must use the third form above, although it is valid to
+leave the pattern empty, such as 'hdrdata.log::5s'.
+
Each interval specified will be tracked in a discrete reservoir in memory, so they will not
+interfere with each other in terms of accuracy.
+
Recording HDR Histogram Stats
+
You can also record basic snapshots of histogram data on a periodic interval just like above with
+HDR histogram logs. The option to do this is:
+
--log-histostats 'hdrstats.log:.*suffix:10s'
+
+
Everything works the same as for hdr histogram logging, except that the format is in CSV as shown in
+the example below:
+
#logging stats for session scenario-1479089852022
+#[Histogram log format version 1.0]
+#[StartTime: 1479089852.046 (seconds since epoch), Sun Nov 13 20:17:32 CST 2016]
+#Tag,Interval_Start,Interval_Length,count,min,p25,p50,p75,p90,p95,p98,p99,p999,p9999,max
+Tag=diag1.delay,0.457,0.044,1,16,31,31,31,31,31,31,31,31,31,31
+Tag=diag1.cycles,0.48,0.021,31,4096,8191,8191,8191,8191,8191,8191,8191,8191,8191,2097151
+Tag=diag1.delay,0.501,0.499,1,1,1,1,1,1,1,1,1,1,1,1
+Tag=diag1.cycles,0.501,0.499,498,1024,2047,2047,4095,4095,4095,4095,4095,4095,4095,4194303
+...
+
+
This includes the metric name (Tag), the interval start time and length (from the beginning of
+collection time), number of metrics recorded (count), minimum magnitude, a number of percentile
+measurements, and the maximum value. Notice that the format used is similar to that of the HDR
+logging, although instead of including the raw histogram data, common percentiles are recorded
+directly.
Labels are used to identify everything you can configure as a user and, as a result, to pin down
+exactly where your results come from. NoSQLBench expresses labels as details in metrics,
+annotations, log lines, error messages, and so on.
+
Everything is Named
+
Users interact directly with key components of NoSQLBench, such as scenarios, workload templates,
+op templates, and metrics. Whether configuring a component or analyzing results of a test, it is
+essential that all components are clearly identified in context. This means that users can
+configure key elements of a test by name, just as they can look up results in metrics views
+by name.
+
In cases where users do not provide a name for a component, element, or operation, a name is
+created for them based on the surrounding structure, so at least error messages, log lines, and
+other forms of output are specific and relatable to the configuration and workload. Users should
+not be wondering "Which op template does this error pertain to?", nor should they be wondering
+"How do I label the results of this test so that they don't get mixed up with other results?"
+The naming and labeling systems are there to provide clear and useful identifiers so this
+doesn't happen.
+
Everything is Labeled
+
The runtime context of a single operation in NoSQLBench has layers. An operation is executed for
+a given cycle, which is run within an activity, which is an independent process within a
+scenario, which executes with global options. This means that it is not sufficient to only know
+the name of an operation to isolate it from all others of the same name. You must also know
+which activity and which scenario it runs within. Imagine you are looking for the results of a
+specific test run within a dashboard, and there are multiple concurrent results available. The
+context of the operation is required information to be able to look at and use specific results.
+
Thus, the naming scheme in NoSQLBench is also extended to be used as a labeling system. For any
+key element that a user can interact with, it will know its label set. Each unique label set
+uniquely identifies a single and distinct component within the runtime.
+
Naming for all key elements is provided as a set of labels. Strictly speaking, the label
+set is unordered, however it is maintained in construction order by default to make
+reading and reasoning about the layers and nesting easier.
+
Everything is Hierarchic
+
There are various levels of labeling which are combined for each level of nesting in
+the runtime. At the outermost layer, there are fewer labels, with more specific labels added as
+you go deeper.
+
Runtime Layers
+
A comprehensive view of runtime layers is given below. Those which are called out like
+[this] are provided as standard labels for any metrics, logging, or error conditions as
+appropriate. The others may be added to specific context views when needed.
+
+
[session] - Each runtime session has a unique name by default. A session is a single
+invocation of NoSQLBench, including global options, scenario or workload selection, etc.
+
+
[scenario] - The scenario is the top-level process within the NoSQLBench runtime. It is
+represented by a scenario script which is either synthesized for the user or provided
+directly by the user. Its value is the selected scenario from a named scenario, or just the
+session name if invoked as an ad-hoc scenario.
+
+
[activity] - Within a scenario, zero or more activities will run. An activity is a
+separate iterative process that runs within a scenario. Activities operate on a set of
+inputs called cycles, by default an interval of long values.
+
+
thread - An activity has its own thread pool, and each thread operates on cycles from
+the activity's cycle source known as an input.
+
+
cycle range - Each thread in an activity iterates over a set of cycles as governed by
+the stride, which simply aligns the micro-batching size around a logically defined
+sequence.
+
+
cycle - Each activity thread iterates over the cycles in its current cycle
+range.
+
+
[op name] - For a given cycle, a specific deterministic operation is synthesized
+and executed within the owning thread, and any additional logic such as
+error handling is applied as specified by the user.
+
+
space - Each op can use a named context which remains stateful for the
+duration of the activity. This is handled by default for most testing scenarios,
+but can be customized for some powerful testing capabilities when needed.
+
+
Auxiliary Labels
+
While some of the elements above are labeled as a standard within the runtime, the results of
+testing may need to be queried, indexed, or aggregated by additional labels which explain the
+purpose or parameters of a specific test run. These are provided by users in accordance with
+how that particular layer is normally configured.
+
User-Provided Labels
+
Users can add additional labels to be added to the label set for every single element in a
+session by providing them on the command line.
+
Automatic Labels
+
Some labels are added for you automatically to describe the usage context of a session, etc.
+
appname
+
A default label of appname is always provided with a value of nosqlbench. This by itself
+uniquely identifies metrics which were produced by nosqlbench, which is useful in shared metrics
+settings.
+
workload
+
The name of the workload template which was used in the scenario, if any.
+
usermode
+
This is set to a value like named_scenario or script or adhoc depending on how the
+scenario was invoked.
+
step
+
The name of the related step from a named scenario if that is how the scenario was invoked.
+
alias
+
This is deprecated. Use activity in combination with other labels instead.
+
Labels vs Tags
+
There are two distinct facilities within NoSQLBench for identifying and managing op templates
+and their related downstream effects.
+
Labels
+
Labels are meant to be nominal for what something is, and are applied using a well-defined
+naming schema as a way to express in detail which consequences are related to which
+configurations, in detail, and over time.
+
Tags
+
Tags are meant to be useful filters for specifying which things should be used in a test,
+and are applied ad-hoc within workload templates for the purposes of customizing scenarios
+around specific operations to be included.
+
Each layer is configured and informed by a set of inputs. Each layer adds a specific set of
+labels, one or more.
+
Key APIs
+
The way that labels are supported in NB code is through a labeling API.
+
NBLabeledElement
+
NBLabeledElement is a property decorator. It signifies any element which has a set of labels
+known as NBLabels, accessed with the getLabels() method. In nearly every case, the labels for
+an element are computed on the fly by combining the parent labels with some named property of
+the current element. This is why many runtime types require an NBLabeledElement in their
+constructor as a requirement.
+
NBLabels
+
NBLabels provides convenient construction patterns to allow for composing more detailed label sets.
👉 When it comes to building workloads, the op template is where everything comes together.
+Understanding op templates in detail will help you bring other features of NoSQLBench into
+focus more quickly.
+
Each activity that you run in an nb5 scenario is built from operations. These operations are
+generated at the time they are needed.
+They need a blueprint for what an operation looks like, which we provide in the form of a
+simple data structure called an op template.
+
The op template is effectively a JSON object or a YAML map. It is simply a data structure.
+
De-sugaring
+
The op template structure is quite flexible in practice, and only requires you to be as detailed
+as necessary for a given situation.
+
For example, you can provide an op template which looks like this:
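+
The two snippets below are a minimal sketch of such a pair, assuming the conventions described
+later in this section: a bare string value is stored in a stmt op field, and the op name and
+statement text are illustrative.
+
# short form: the op template is just a string
+ops:
+  op1: select user_id from mytable where token={user_token}
+
+# fully qualified form of the same op template
+ops:
+  op1:
+    op:
+      stmt: select user_id from mytable where token={user_token}
+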
These both mean the same thing! NoSQLBench performs a structural de-sugaring and
+normalization, and then applies property de-normalization and overrides. At the end, it's as if
+you used a fully-qualified and detailed format, even if you didn't. The reason you might use
+the second form over the first is to provide additional op templates or properties when tailoring
+for more specific situations.
+
+If you want to know why or how nb5 does this, see the details below. Otherwise, let's skip to
+the next part!
+
+Workload Specification Design
+
Why?
+
Why? Because YAML is not fun for most people if we're being really honest. System
+designers and developers have a persistent habit of pushing their configuration problems into
+pop-markup formats. Yet, without a truly innovative and widely supported alternative, it's still
+not a bad choice. There are really not many practical alternatives that are portable,
+supported by many languages, and so ubiquitous that they are immediately recognized by most
+users. Further, it plays generally well with JSON, a proper subset and close runner-up, which
+is also extended by jsonnet.
+
Even so, there is still a problem to solve: NoSQLBench needs to support trivial testing
+scenarios with trivial configuration AND advanced testing scenarios with a detailed configuration.
+So, why not both?
+
+
Simple things should be simple, complex things should be possible.
+(Alan Kay)
+
+
So in terms of tooling, this provides a rich layering of tools which can scale from the
+trivial to the sophisticated.
+
How?
+
The rules for this mechanism are part of the nb5
+workload definition
+standard, which covers all the details and corner cases. The nb5 runtime handles all the
+structural processing for users and developers, so there is little ambiguity about valid or
+equivalent forms. This standard is elevated to a tested specification because it is part of the
+core nb5 code base, tested for validity with every build. You can only see a documented and
+working specification on this site.
+
The design principles used when building this standard include:
+
+
If it looks like what it does, and it does what it looks like, then it is valid.
+
If there is a reasonable chance of ambiguity, then disallow the pattern, or make the user be
+more specific.
+
+
+
Valid Forms
+
👉 The Workloads 101 tutorial is a great way to learn about op
+templates.
+
What determines if a given op template is valid or not depends on a couple of things: Can it be
+recognized according to the workload definition
+standard?
+Can it be
+recognized by the specified driver as a valid op template, according to the field names and values?
+
In general, you have three places to look for valid op templates. Here they are in order of
+preference:
+
+
The built-in workloads. Use the --list-workloads and --copy <workload> options to discover
+and copy out some examples. These are documented under
+discovery options.
+
The driver documentation. Each driver should provide clear examples that can be pasted right
+into a new workload if you want. Access the documentation for a specific driver with nb5 help <driver>.
Finally, the detailed workload definition specification,
+if you need, for example, to see all the possibilities. Developers will generally want to know
+what can be specified, but those who are just using nb5 will get by easily on the examples.
+
+
Template Form
+
Let's take a look at the longer example again, with line numbers this time:
+
1  bindings:
+2    binding1: Combinations('a-z')
+3  blocks:
+4    warmup-block:
+5      ops:
+6        op1:
+7          op:
+8            prepared: "example {binding1} body"
+
+
We see some of the surrounding workload template format, then the op template, and then the
+single op field prepared. Here is a line-by-line readout of each part:
+
1. At the root of the workload template, a bindings property sets global bindings.
+2. One global binding is defined as "Combinations('a-z')", with the name binding1.
+3. At the root of the workload template, the blocks property is defined.
+4. The first block is named warmup-block.
+5. The ops property for warmup-block is defined.
+6. The first op template is named op1. Everything under it is called the op template.
+7. The op template named op1 has an explicit op fields property.
+8. The first op field of the op template is named "prepared", and it is a string template.
+
+
Additionally, the string template above might be called a statement. It could just as well be
+select user_id from mytable where token={user_token} instead of example {binding1} body.
+When the entirety of an op template is passed as a string like in the very first example, that
+string is stored in an op field called stmt automatically.
+
The part of the string that looks like {binding1} is called a bind point. Bind points are the
+places where you will inject data into the template later to create a fully constructed
+synthetic operation. In this case binding1 is the bind point name. It matches up with a
+binding name above (also binding1), to create a full blueprint for constructing the whole
+string when you need it.
+
As for the longer example, you might notice that this is a fully mapped structure with no lists.
+That is, every property is basically a container for a collection of named elements. If you get
+lost in the layers, just remember that everything follows this pattern: From the root inward,
+the map keys mean property name > member name > property name > member name ...
+until the very end where leaf nodes are simply values.
+
Synthetic Op Values
+
The power of an op template comes from the fact that it is a real template. If your op template
+contains a string template select user_id from mytable where token={user_token}, you can't
+take it as it is and send it to the database. Either your client or your server will throw a
+syntax error on the {user_token} part. What nb5 is excellent at is working with a (nb5) driver to
+create a synthetic operation to execute. For native drivers, the nb5 driver (known in the API as
+a DriverAdapter) interprets the op template structure, and uses the native driver APIs to
+behave exactly as an application might. For other nb5 drivers, like stdout, something else may
+be done with the op template, like printing out the rendered op template in schematic form. No
+native driver is needed to do this.
+
Structure!
+
So far, you've seen a simple op field with a synthetic op value. The prefix {bindpoint} suffix
+form is a string template. But what about other forms? You can have any structural form for an
+op field, and it will be handled as a synthetic structure, including lists, maps and strings!
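+
As a rough sketch of the kind of op templates discussed next (the statement text, binding
+function, and prepared values are illustrative):
+
ops:
+  op1: "{{Combinations('a-z')}}"
+  op2:
+    op:
+      stmt: select user_id from mytable where token={user_token}
+    prepared: true
+  op3:
+    stmt: select user_id from mytable where token={user_token}
+    params:
+      prepared: true
+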
op1
+
The one named op1 looks like a string template, but it has no prefix nor suffix around the bind
+point. The double curly brace form removes the need to reference a binding by name: it is an
+anonymous binding function placed directly within the bind point. Further, when a string template
+consists of nothing but a single bind point like this, it is promoted to a direct binding. That
+means the type of value produced by the binding function is used directly; if it needs to be a
+string, it will be rendered as one anyway.
+
op2
+
The one named op2 shows that the op property of an op template has special significance. If
+you want to do anything beyond a trivial string binding, you can use this to explicitly set the
+root of the object used for the op fields. This allows for other properties of the op template
+to be stored separately and interpreted separately. Besides reserved op template properties
+(like tag, bindings, params, etc.), all other keys in the op template will be put in the
+params property. Thus, prepared is stored as if you had put it under the params block.
+This is unambiguous because of the explicit op definition.
+
op3
+
The third op template example shows the complementary scenario to having an explicit op property.
+The directly defined params property means that any non-reserved keyword will automatically be
+stored under an injected op field for you. You can think of the op
+property as the payload, and the params property as the metadata. For protocols and
+drivers that can distinguish between these, the distinction is meaningful. For those that don't,
+where the whole protocol is described within a JSON object for example, the params field is
+useless.
+
👉 When you have a trivial op structure with no need for params, you need to specify neither the
+op nor the params property, and all non-reserved keys will automatically be stored in the
+op. This is recommended as the convention for all new drivers. Usage of the params
+property is still supported, but should only be employed by driver developers when it is
+strictly necessary.
+
Synthesis!
+
All the op fields can be fully dynamic! However, it is not efficient for everything about an
+operation to be undetermined until cycle time. Therefore, driver developers will often require
+certain identifying op fields to be static for the purposes of determining op type. The rules
+for this are up to each driver. For example, with the cqld4 driver
+, you can
+specify that you want a raw, prepared, or other type of statement to be executed, but each op
+template must pick one. This is necessary to allow activities to pre-compute or pre-bake as much
+of the op synthesis logic as they can. This can be done much more efficiently if at least the
+type of operation doesn't change from cycle to cycle.
These options are used when you need to configure SSL for a driver. The configuration logic for
+SSL is centralized, and supports both the version of TLS which is shipped with the JVM, as well
+as the openssl version.
+
Whenever a driver indicates that it can be configured with SSL, and points you to
+the standard SSL options, this is the page it is referring to.
+
SSL Options
+
ssl
+
Specifies the type of the SSL implementation.
+
Disabled by default; possible values are jdk and openssl.
+
examples
+
+
ssl=jdk
+
ssl=openssl
+
+
The options available depend on which of these you choose. See the relevant sections below.
+
with ssl=jdk
+
tlsversion
+
Specifies the TLS version to use for SSL.
+
examples
+
+
tlsversion=TLSv1.2 (the default)
+
+
truststore
+
Specifies the path to the SSL truststore.
+
examples
+
+
truststore=file.truststore
+
+
tspass
+
Specifies the password for the SSL truststore.
+
examples
+
+
tspass=truststore_pass
+
+
keystore
+
Specifies the path to the SSL keystore.
+
examples
+
+
keystore=file.keystore
+
+
kspass
+
Specifies the password for the SSL keystore.
+
examples
+
+
kspass=keystore_pass
+
+
keyPassword
+
Specifies the password for the key.
+
examples
+
+
keyPassword=password
+
+
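Putting the jdk options together, an activity might be configured with SSL like this (a sketch;
+the driver, workload, and file names are illustrative):
+
nb5 run driver=cqld4 workload=cql-iot ssl=jdk tlsversion=TLSv1.2 truststore=file.truststore tspass=truststore_pass
+
+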
with ssl=openssl
+
+For the openssl type, a further set of certificate and key file options is available.
+
NoSQLBench comes with a set of standard metrics that are part of every driver. Each driver enhances
+the metrics available by adding their own metrics with the NoSQLBench APIs. This section explains
+what the standard metrics are, and how to interpret them.
+
read-input
+
Within NoSQLBench, a data stream provider called an Input is responsible for providing the actual
+cycle number that will be used by consumer threads. Because different Input implementations may
+perform differently, a separate metric is provided to track the performance in terms of client-side
+overhead. The read-input metric is a timer that only measures the time it takes for a given
+activity thread to read the input value, nothing more.
+
strides
+
A stride represents the work-unit for a thread within NoSQLBench. It allows a set of cycles to be
+logically grouped together for purposes of optimization -- or in some cases -- to simulate realistic
+client-side behavior over multiple operations. The stride is the number of cycles that will be
+allocated to each thread before it starts iterating on them.
+
The strides timer measures the time each stride takes, including all cycles within the stride.
+It starts measuring time before the cycle starts, and stops measuring after the last cycle in the
+stride has run.
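+
As a sketch (the alias and values are illustrative), the stride and thread count can be set as
+activity params, so that each thread below claims 100 cycles at a time and the strides timer
+measures each group of 100:
+
nb5 run driver=diag alias=stride_example cycles=10000 threads=10 stride=100
+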
+
cycles
+
Within NoSQLBench, each logical iteration of a statement is handled within a distinct cycle. A cycle
+represents an iteration of a workload. This corresponds to a single operation executed according to
+some statement definition.
+
The cycles metric is a timer that starts counting at the start of a cycle, before any specific
+activity behavior has control. It stops timing once the logical cycle is complete.
+
cycles.servicetime
+
Each cycle of an activity has a metric which measures its internal service time, measured from the
+moment the cycle starts processing to the moment it is fully complete. When rate limiters are
+used, this sub-name identifies the service time component as distinct from the wait time and
+response time described below.
+
*cycles.waittime
+
When a rate limiter is used, the waittime metric captures the notion of scheduling delay with
+respect to the requested rate. For example, if you specify a rate of 10 Kops/S, but at the 20 second
+mark, only 190Kops have completed, this represents one second of scheduling delay (10 Kops worth of
+operations at 10 Kops/S = 1 second). The cycles.waittime metric would thus indicate roughly 1 second
+of wait time, meaning the workload is falling behind schedule by that much, although it is reported in nanoseconds.
+
*cycles.responsetime
+
When a rate limiter is used, the responsetime metric combines the servicetime and waittime values to
+yield a computed responsetime. This is a measure of how long a user would have had to wait for an
+operation to complete based on some ideal schedule, as described by a rate limiter. In this way, a
+rate limiter acts as both a minimal and a maximal target. It is presumed that the composed system is
+fast enough to run at the limited rate, thus any slow-downs which cause the system to run
+effectively behind schedule represent a user-impacting effect.
+
result
+
👉 This metric is provided directly by drivers. All conforming driver implementations should provide
+this metric as described below.
+
Each operation's execution is tracked with the result timer. This timer is used to measure
+ALL operations, even those with errors.
+
result-success
+
👉 This metric is provided directly by drivers. All conforming driver implementations should provide
+this metric as described below.
+
For operations which completed successfully with no exception, a separate result-success timer is
+used. When your workload is running well, both the result and result-success timer count the
+same number and rate of operations. This provides a useful cross-check between metrics.
+
*-error
+
👉 This metric is provided directly by drivers. All conforming driver implementations should provide
+this metric as described below. This happens automatically when the standard error handler
+implementation is used.
+
When the error handler sees an exception, the name of the exception is converted to a metric name
+with -error as the suffix. There will be one of these metric names created for each unique
+exception that occurs within an activity.
There are a few built-in workloads which you may want to run. These can be run from a command
+without having to configure anything, or they can be tailored with their built-in parameters.
+
Finding Workloads
+
To find the built-in scenarios, ask NoSQLBench like this:
+
nb5 --list-workloads
+
+
This specifically lists the workloads which provide named scenarios. Only named scenarios are
+included. Workloads are contained in yaml files. If a yaml file is in the standard path and contains
+a root scenarios element, then it is included in the listing above.
+
Each of these scenarios has a set of parameters which can be changed on the command line.
+
Running Workloads
+
You can run them directly, by name with nb5 <workload> [<scenario>] [<params>...]. If not provided,
+scenario is assumed to be default.
+
For example, the cql-iot workload is listed with the above command, and can be executed like this:
+
# put your normal extra params in ... below, like hosts, for example
+nb5 cql-iot default ...
+
+# OR, with scenario name default
+nb5 cql-iot ...
+
+
You can add any parameters to the end, and these parameters will be passed automatically to each
+stage of the scenario as needed. Within the scenario, designers have the ability to lock parameters
+so that overrides are used appropriately.
+
Conventions
+
The built-in workloads follow a set of conventions so that they can be used interchangeably. This is
+more for users who are using the stages of these workloads directly, or for users who are designing
+new scenarios to be included in the built-ins.
+
Phases
+
Each built-in contains the following tags that can be used to break the workload up into uniform
+steps:
+
+
schema - selected with tags=block:schema
+
rampup - selected with tags=block:rampup
+
main - selected with tags=block:main
+
+
Parameters
+
Each built-in has a set of adjustable parameters which is documented below per workload. For
+example, the cql-iot workload has a sources parameter which determines the number of unique
+devices in the dataset.
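+
For example, such a parameter can be overridden on the command line (the value below is
+illustrative, and your normal extra params still go at the end):
+
nb5 cql-iot default sources=10000 ...
+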
It is recommended that you read through the examples in each of the design sections in order. This
+guide was designed to give you a detailed understanding of workload construction with NoSQLBench.
+The examples will also give you better insight into how NoSQLBench works at a fundamental level.
+
Workloads, Defined
+
Workloads in NoSQLBench are defined by a workload template. You can use workload templates to
+describe operations that you want to execute, using any available operation type. A workload
+template is usually provided in a YAML file according to the conventions and formats provided in
+this section. From here on, we'll simply call them workloads.
+
👉 Workload templates are basically blueprints for operations that you organize in whatever
+order and mix you need.
+
With NoSQLBench, a standard configuration format is provided that's used across all workloads.
+This makes it easy to specify op templates, parameters, data bindings, and tags. By default, we
+use YAML as our workload format, but you could just as easily use JSON. (YAML is a superset of
+JSON.) After all, workload templates are really collections of data structure templates.
+
This section describes the standard workload syntax in YAML and how to use it.
+
Multi-Protocol Support
+
You will notice that this guide is not overly CQL-specific. That is because NoSQLBench is a
+multi-protocol tool. All that is needed for you to use this guide with other protocols is a
+different driver parameter. Try to keep that in mind as you think about designing workloads.
+
Advice for new builders
+
Look for built-ins first
+
If you haven't yet run NoSQLBench with a built-in workload, then this section may not be necessary
+reading. It is possible that a built-in workload will do what you need. If not, please read on.
+
Review existing examples
+
The built-in workloads that are included with NoSQLBench are also easy to copy out as a starting
+point. You just need to use two commands:
+
# find a workload you want to copy
+nb5 --list-workloads
+
+# copy a workload to your local directory
+nb5 --copy cql-iot
+
+
Follow the conventions
+
The block names and other conventions demonstrated here represent a pretty common pattern. If
+you follow these patterns, your workloads will be more portable across environments. All the
+baselines workloads that we publish for NoSQLBench follow these conventions.
Op templates are data blueprints that represent a particular kind of operation. They are
+templates because they are used to create possibly many operations to be executed. For example,
+you may have only one op template that you use to drive a billion operations to a system under
+test, or you may have a myriad of different access patterns. In either case, all the
+operations that you want to execute must be defined in template form beforehand.
+
As templates, you indicate where the variable parts are filled in when needed. How this is done
+will be explained in the bindings section.
+
Simple statements
+
In essence, the config format is all about configuring operations. Every other element in the
+config format is in some way modifying or otherwise helping create operations to be used in an
+activity.
+
Op templates are the single most important part of a YAML config.
+
# a single operation
+ops:
+  - a single statement body
+
+
This is a valid activity YAML file in and of itself. It has a single op template.
+
It is up to the individual activity types like cql, or stdout to interpret the op template in
+some way. The example above is valid as an operation in the stdout activity, but it does not produce
+a valid CQL statement when used with the CQL activity type. The contents of the op can be provided
+as free-form text. If the op template is valid CQL, then the CQL activity type can use it without
+throwing an error. Each activity type determines what a statement means, and how it will be used.
+
Multiple Op Templates
+
You can specify multiple op templates:
+
ops:
+  - This is a statement, and the file format doesn't know how statements will be used!
+  - submit job {alpha} on queue {beta} with options {gamma};
+
+
YAML formatting prefixes
+
You can use the YAML pipe to put them on multiple lines, indented a little further in:
+
ops:
+  - |
+    This is a statement, and the file format doesn't
+    know how statements will be used!
+  - |
+    submit job {alpha} on queue {beta} with options {gamma};
+
ops:
+  - s1: |
+      This is a statement, and the file format doesn't
+      know how statements will be used!
+  - s2: |
+      submit job {alpha} on queue {beta} with options {gamma};
+
+
Actually, every op template in a YAML file has a name. If you don't provide one, then a name is
+auto-generated for the op template based on its position in the YAML file.
It is best to keep every workload template self-contained within a single YAML file, including
+schema, rampup, and the main steps in your testing workflow. These steps in a typical testing
+workflow are controlled by tags as described below.
+
👉 The step names described below have been adopted as a convention within the built-in workloads
+templates. It is strongly advised that new workload templates use the same naming scheme so
+that they are easier to re-use.
+
Automatic tags
+
To make tag filters more useful, every op template in NoSQLBench is given a set of automatic
+tags based on the block and op template names:
+
+
block: <blockname>
+
name: <blockname>--<op name>
+
+
For example, if you had a block named block42 and an op named op007, then you would be able
+to match it with tags=block:block42, or tags=name:block42--op007, or any regex which also
+matched. The difference between the first example and the second is this: There is only one op
+template which will have the name shown, but multiple statements could be in block42. [1]
+
👉 In previous versions of NoSQLBench, you had to add tags directly to your docs, blocks, or op
+templates. This is still supported if you need, but most cases will only require that you group
+statements together in named blocks. When used with the regex matching pattern demonstrated
+above, you get quite a bit of flexibility without having to create boilerplate tags everywhere.
+
A Standard Workflow
+
These steps are very commonly used by nb5 users. The standard test workflow described here is
+understood as lingua franca for seasoned NoSQLBench
+users.
+
Schema step
+
The schema step is simply where you create the necessary schema on your target system. For CQL,
+this generally consists of a keyspace and one or more table statements. There is no special
+schema layer in NoSQLBench. All statements executed are simply statements. This provides the
+greatest flexibility in testing since every activity type is allowed to control its DDL and DML
+using the same machinery.
+
The schema step is normally executed with defaults for most parameters. This means that
+operations will execute in the order specified in the workload template, serially, exactly once.
+This is a welcome side effect of how the initial parameters like cycles are set from the
+op templates which are activated by tagging.
+
The nb5 way of selecting all op templates in a block is to use the built-in block name in a tag
+filter, like this:
+
# select all op templates in the block named schema
+./nb5 ... tags=block:schema ...
+
+# select all op templates in all blocks that have a name matching the regex
+./nb5 ... tags='block:schema-.*'
+
+
Rampup step
+
When you run a performance test, it is very important to be aware of how much data is present.
+Higher density tests are more realistic for systems which accumulate data over time, or which
+accumulate a larger working set every day. The amount of data on the system you are testing
+should recreate a realistic amount of data that you would run in production.
+
The purpose of the rampup activity is to create the backdrop data on a target system that makes a
+test meaningful for some level of data density. Data density is normally discussed as average per
+node, but it is also important to consider distribution of data as it varies from the least dense to
+the most dense nodes in your target system.
+
Because it is useful to be able to add data to a target system in an incremental way, the bindings
+which are used with a rampup step may actually be different from the ones used for a main
+step. In most cases, you want the rampup step to create data in a way that incrementally adds to
+the working set. This allows you to add some data to a cluster with cycles=0..1M and then
+decide whether to continue adding data using the next contiguous range of cycles, with
+cycles=1M..2M and so on.
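+
A sketch of that incremental approach, reusing the tag filter convention shown above (the elided
+parts stand in for your usual connection and workload options):
+
# first pass: add the first million rows
+./nb5 ... tags=block:rampup cycles=0..1M
+
+# later: extend the dataset with the next contiguous range
+./nb5 ... tags=block:rampup cycles=1M..2M
+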
+
Main step
+
The main step of a performance testing scenario is the one during which you really care about
+recording the metrics. This is the actual test that everything else has prepared your system for.
+
You will want to run your main workload for a significant amount of time. This doesn't mean a
+long time, but it may. What is significant in terms of getting realistic results is a question
+of statistical significance. If you have a very small system which you can push into
+steady-state performance in 20 minutes, then 30 minutes may be enough testing time. However,
+most modern systems of scale, even with a few nodes, will take longer to get reasonably accurate
+measurements. It depends on how you are measuring.
+
[1] All block names must be unique and all op names within a block must be unique.
Procedural data generation is built-in to the NoSQLBench runtime by way of the
+Virtual Data Set library. This allows us to create named data generation recipes. These named
+recipes for generated data are called bindings. Procedural generation for test data has
+many benefits over shipping bulk test data around,
+including speed and deterministic behavior. With the Virtual Data Set approach, most of the hard
+work is already done for us. We just have to pull in the recipes we want.
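+
A bindings block like the one described next (matching the combined example further down this
+page) looks like this:
+
bindings:
+  alpha: Identity()
+  beta: NumberNameToString()
+  gamma: Combinations('0-9A-F;0-9;A-Z;_;p;r;o;')
+  delta: WeightedStrings('one:1;six:6;three:3;')
+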
This is a YAML map which provides names and function specifiers. The first binding is named alpha,
+and calls an Identity function that takes an input value and returns the same value. Together, the
+name and value constitute a binding named alpha. All four bindings together are called a bindings
+set.
+
The above bindings block is also a valid activity YAML, at least for the stdout activity type.
+The stdout activity can construct a statement template from the provided bindings if needed, so
+this is valid:
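+
For example (a sketch, assuming the bindings above are saved as stdout-test.yaml, the same file
+name used in the combined example below), this prints one line of generated values per cycle:
+
./nb5 run driver=stdout workload=stdout-test cycles=5
+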
The stdout activity type is ideal for experimenting with data generation
+recipes. It uses the default format=csv parameter, but it also supports formats like json,
+inlinejson, readout, and assignments.
+
This is all you need to provide a formulaic recipe for converting an ordinal value to a set of field
+values. Each time NoSQLBench needs to create a set of values as parameters to a statement, the
+binding functions are called with an input, known as the cycle. The functions produce a set of named
+values that, when combined with a statement template, can yield an individual statement for a
+database operation. In this way, each cycle represents a specific operation. Since the functions
+above are pure functions, the cycle number of an operation will always produce the same operation,
+thus making all NoSQLBench workloads that use pure functions deterministic.
+
In such output, the cycle numbers appear down the left, since the alpha binding's Identity()
+function simply returns the cycle value.
+
Binding Anchors
+
If you combine the op template section and the bindings sections above into one activity yaml, you
+get a slightly different result, as the bindings apply to the operations that are provided, rather
+than creating a default op template for all provided bindings. See the example below:
+
# stdout-test.yaml
+statements:
+  - |
+    This is a statement, and the file format doesn't
+    know how statements will be used!
+  - |
+    submit job {alpha} on queue {beta} with options {gamma};
+bindings:
+  alpha: Identity()
+  beta: NumberNameToString()
+  gamma: Combinations('0-9A-F;0-9;A-Z;_;p;r;o;')
+  delta: WeightedStrings('one:1;six:6;three:3;')
+
+
[test]$ ./nb5 run driver=stdout workload=stdout-test cycles=10
+This is a statement, and the file format doesn't
+know how statements will be used!
+submit job 1 on queue one with options 00B_pro;
+This is a statement, and the file format doesn't
+know how statements will be used!
+submit job 3 on queue three with options 00D_pro;
+This is a statement, and the file format doesn't
+know how statements will be used!
+submit job 5 on queue five with options 00F_pro;
+This is a statement, and the file format doesn't
+know how statements will be used!
+submit job 7 on queue seven with options 00H_pro;
+This is a statement, and the file format doesn't
+know how statements will be used!
+submit job 9 on queue nine with options 00J_pro;
+
+
There are a few things to notice here. First, the statements that are executed are automatically
+alternated between. If you had 10 different operations listed, they would all get their turn with 10
+cycles. Since there were two, each was run 5 times.
+
Also, the op templates that had named anchors acted as a template, whereas the other one was
+evaluated just as it was. In fact, they were both treated as templates, but one of them had no
+anchors.
+
One more minor but important detail is that the fourth binding delta was not referenced directly
+in the statements. Since the op templates did not pair up an anchor with this binding name, it was
+not used. No values were generated for it.
+
Bindings are templates for data generation, only to be used when necessary. Bindings that are
+defined nearby an op template are like a menu of data generation options. If the op template
+references those bindings with {named_anchors}, then the recipes will be used to construct data
+when that op template is selected for a specific cycle. The cycle number both selects the
+operation (via the op sequence) and also provides the input value as the initial input to the
+binding functions.
+
Further Details
+
A deeper explanation of binding concepts can be found in the Binding Concepts
+part of the Reference Section, where you will also find documentation about how to use the various
+binding functions that are available.
Op templates within a YAML can be accessorized with parameters. These are known as op params and
+are different from the parameters that you use at the activity level. They apply specifically to a
+statement template, and are interpreted by an activity type when the statement template is used to
+construct a native statement form.
+
For example, the op param ratio is used when an activity is initialized to construct the op
+sequence. In the cql activity type, the op parameter prepared is a boolean that can be used to
+designate when a CQL statement should be prepared or not.
+
As with the bindings, a params section can be added at the same level, setting additional parameters
+to be used with op templates. Again, this is an example of modifying or otherwise designating a
+specific type of op template, but always in a way specific to the activity type. Op params can be
+thought of as properties of an operation. As such, params don't really do much on their own,
+although they have the same basic map syntax as bindings:
+
Op Params Example
+
params:
+  ratio: 1
+
+
As with op templates, it is up to each activity type to interpret the provided op params in a useful
+way.
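+
As a sketch of how an op param such as ratio could be attached to a specific statement (the
+statement name and text are illustrative, and the statement properties form is described later in
+this guide):
+
statements:
+  - name: write-heavy
+    stmt: submit job {alpha} on queue {beta} with options {gamma};
+    params:
+      ratio: 5
+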
Tags are used to mark and filter groups of op templates for controlling which ones get used in a
+given scenario. Tags are generally free-form, but there is a set of conventions that can make your
+testing easier.
+
An example:
+
tags:
+  name: foxtrot
+  unit: bravo
+
+
Tag Filtering Rules
+
The tag filters provide a flexible set of conventions for filtering tagged statements. Tag filters
+are usually provided as an activity parameter when an activity is launched. The rules for tag
+filtering are:
+
+
If no conjugate is specified, all(...) is assumed. This is in keeping with the previous
+default. If you do specify a conjugate wrapper around the tag filter, it must be in one of
+these forms: all(...), any(...), or none(...).
+
If no tag filter is specified, then the op template matches.
+
A tag name predicate like tags=name asserts the presence of a specific tag name, regardless of
+its value.
+
A tag value predicate like tags=name:foxtrot asserts the presence of a specific tag name and a
+specific value for it.
+
A tag pattern predicate like tags=name:'fox.*' asserts the presence of a specific tag name and
+a value that matches the provided regular expression.
+
Multiple tag predicates may be specified as in tags=name:'fox.*',unit:bravo
+
+
+
If the all conjugate form is used (the default), then if any predicate fails to match a
+tagged element, then the whole tag filtering expression fails to match.
+
If the any conjugate form is used, then if all predicates fail to match a tagged element,
+then the whole tag filtering expression fails to match.
+
If the none conjugate form is used, then if any predicate matches a tagged element, the
+whole tag filtering expression fails to match.
+
Some examples, using a stdout-test workload tagged as shown above:
+
# no tag filter matches any
+[test]$ ./nb5 run driver=stdout workload=stdout-test
+I'm alive!
+
+# tag name assertion matches
+[test]$ ./nb5 run driver=stdout workload=stdout-test tags=name
+I'm alive!
+
+# tag name assertion does not match
+[test]$ ./nb5 run driver=stdout workload=stdout-test tags=name2
+02:25:28.158 [scenarios:001] ERROR i.e.activities.stdout.StdoutActivity - Unable to create a stdout statement if you have no active statements or bindings configured.
+
+# tag value assertion does not match
+[test]$ ./nb5 run driver=stdout workload=stdout-test tags=name:bravo
+02:25:42.584 [scenarios:001] ERROR i.e.activities.stdout.StdoutActivity - Unable to create a stdout statement if you have no active statements or bindings configured.
+
+# tag value assertion matches
+[test]$ ./nb5 run driver=stdout workload=stdout-test tags=name:foxtrot
+I'm alive!
+
+# tag pattern assertion matches
+[test]$ ./nb5 run driver=stdout workload=stdout-test tags=name:'fox.*'
+I'm alive!
+
+# tag pattern assertion does not match
+[test]$ ./nb5 run driver=stdout workload=stdout-test tags=name:'tango.*'
+02:26:05.149 [scenarios:001] ERROR i.e.activities.stdout.StdoutActivity - Unable to create a stdout statement if you have no active statements or bindings configured.
+
+# compound tag predicate matches every assertion
+[test]$ ./nb5 run driver=stdout workload=stdout-test tags='name=fox.*',unit=bravo
+I'm alive!
+
+# compound tag predicate does not fully match
+[test]$ ./nb5 run driver=stdout workload=stdout-test tags='name=fox.*',unit=delta
+11:02:53.490 [scenarios:001] ERROR i.e.activities.stdout.StdoutActivity - Unable to create a stdout statement if you have no active statements or bindings configured.
+
+# any(...) form will work as long as one of the tags match
+[test]$ ./nb5 run driver=stdout workload=stdout-test tags='any(name=fox.*,thisone:wontmatch)',unit=bravo
+I'm alive!
+
All the basic primitives described above (names, ops, bindings, params, tags) can be used to
+describe and parameterize a set of op templates in a yaml document. In some scenarios, however, you
+may need to structure your op templates in a more sophisticated way. You might want to do this if
+you have a set of common operational forms or op params that need to apply to many statements, or
+perhaps if you have several different groups of operations that need to be configured
+independently.
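+
A minimal sketch of what such a layout can look like (the block names, op text, and the overridden
+binding are illustrative):
+
bindings:
+  alpha: Identity()
+tags:
+  unit: bravo
+blocks:
+  block1:
+    ops:
+      op1: insert into table1 (id) values ({alpha});
+  block2:
+    bindings:
+      alpha: NumberNameToString()
+    ops:
+      op2: insert into table2 (name) values ({alpha});
+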
This shows a couple of important features of blocks. All blocks inherit defaults for bindings,
+params, and tags from the root document level. Any of these values that are defined at the base
+document level apply to all blocks contained in that document, unless specifically overridden within
+a given block.
The template forms available in NoSQLBench are very flexible. That means that there are multiple
+ways of expressing an operation. Thankfully, in most cases, the forms look like what they do, and
+most of the ways you can imagine constructing a statement or operation will simply work, as long as
+the required details are provided for which driver you are using.
+
Extended Forms
+
In recent versions of NoSQLBench, there is expanded support for how operations are specified. The
+role of the statement has been moved into a field of a uniform op template structure. In essence,
+the body of a statement is merely a part of an op template definition, one specifically having
+the field name stmt. However, all definitions which use the previous statement forms should be
+compatible with no significant changes. This is because the String type of value is taken as a
+shortcut for providing a more canonically defined op template containing a stmt field with
+that value. All other forms of statements are structural, and open up new ways to specify
+structured op templates. This greatly simplifies op templates which emulate structured data such
+as JSON, as the template can contain exact models of prototypical operations.
Whether a given form is a valid op template through the lens of any given driver is a different
+question. Essentially, if a driver recognizes it as valid, it is valid for that driver. If you are a
+seasoned user of NoSQLBench, you don't have to worry about this form interfering with your
+existing workloads, as the new functionality encapsulates what was there already with no changes
+required.
+
Statement Delimiting
+
Sometimes, you want to specify the text of a statement in different ways. Since statements are
+strings, the simplest way for small statements is in double quotes. If you need to express a much
+longer statement with special characters and newlines, then you can use YAML's literal block
+notation (signaled by the '|' character) to do so:
+
statements:
+  - |
+    This is a statement, and the file format doesn't
+    know how statements will be used!
+  - |
+    submit job {alpha} on queue {beta} with options {gamma};
+
+
Notice that the block starts on the following line after the pipe symbol. This is a very popular
+form in practice because it treats the whole block exactly as it is shown, except for the initial
+indentations, which are removed.
+
Statements in this format can be raw statements, statement templates, or anything that is
+appropriate for the specific activity type they are being used with. Generally, the statements
+should be thought of as a statement form that you want to use in your activity -- something that has
+placeholders for data bindings. These placeholders are called named anchors. The second line
+above is an example of a statement template, with anchors that can be replaced by data for each
+cycle of an activity.
+
There is a variety of ways to represent block statements, with folding, without, with the newline
+removed, with it retained, with trailing newlines trimmed or not, and so forth. For a more
+comprehensive guide on the YAML conventions regarding multi-line blocks, see
+YAML Spec 1.2, Chapter 8, Block Styles
+
Op Template Sequences
+
To provide a degree of flexibility to the user for statement definitions, multiple statements may be
+provided together as a sequence.
+
# a list of statements
+statements:
+ -"This a statement."
+ -"The file format doesn't know how statements will be used."
+ -"submit job {job} on queue {queue} with options {options};"
+
+
# an ordered map of statements by name
+statements:
+  name1: statement one
+  name2: "statement two"
+
+
In the first form, the names are provided automatically by the YAML loader. In the second form, they
+are specified as ordered map keys.
+
Statement Properties
+
You can also configure individual statements with named properties, using the statement
+properties form:
+
# a list of statements with properties
+statements:
+  - name: name1
+    stmt: statement one
+  - name: name2
+    stmt: statement two
+
+
This is the most flexible configuration format at the statement level. It is also the most verbose.
+Because this format names each property of the statement, it allows for other properties to be
+defined at this level as well. This includes all of the previously described configuration
+elements: name, bindings, params, tags, and additionally stmt. A detailed example follows:
+
statements:
 - name: foostmt
   stmt: "{alpha},{beta}\n"
   bindings:
    beta: Combinations('COMBINATIONS;')
   params:
    parm1: pvalue1
   tags:
    tag1: tvalue1
   freeparam3: a value, as if it were assigned under the params block.
+
+
In this case, the values for bindings, params, and tags take precedence, overriding those set
+by the enclosing block or document or activity when the names match. Parameters called free
+parameters are allowed here, such as
+freeparam3. These are simply values that get assigned to the params map once all other processing
+has completed.
+
Named Statement form
+
It is possible to mix the <name>: <statement> form, as in the example above for mapping
statements by name, so long as some specific rules are followed. An equivalent example is
sketched below, followed by the rules.
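A minimal sketch of this mixed form, reusing the foostmt statement from the earlier detailed
example (the specific bindings and values here are illustrative, not prescriptive):

statements:
 - foostmt: "{alpha},{beta}\n"
   parm1: pvalue1
   bindings:
    beta: Combinations('COMBINATIONS;')
   tags:
    tag1: tvalue1

Here foostmt is taken as both the statement name and the statement body, while parm1 is picked
up as a free parameter.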
You must avoid using both the name property and the initial
<name>: <statement> entry together. Doing so will cause an error to be thrown.

Do not use the <name>: <statement> form in combination with a
stmt: <statement> property. It is not possible for the loader to reliably detect when this
occurs, so use caution if you choose to mix these forms.
+
+
As explained above, parm1: pvalue1 is a free parameter, and is simply shorthand for setting
+values in the params map for the statement.
+
Named Statement Maps
+
By combining all the forms together with a map in the middle, we get this form, which allows for the
+enumeration of multiple statements, each with an obvious name, and a set of properties:
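A sketch of this form, again reusing foostmt and adding a hypothetical barstmt entry whose
properties are placeholders (it is referenced in the note further below):

statements:
 - foostmt:
    stmt: "{alpha},{beta}\n"
    parm1: pvalue1
    bindings:
     beta: Combinations('COMBINATIONS;')
    tags:
     tag1: tvalue1
 - barstmt:
    # hypothetical driver-specific properties; note that there is no 'stmt' body here
    optype: setvar
    parm2: pvalue2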
This form is arguably the easiest to read, but retains all the expressive power of the other forms
too. The distinction between this form and the named properties form is that the structure
underneath the first value is a map rather than a single value. Specifically, everything under the
'foostmt' name above is formatted as indented properties of it.
+
Here are the basic rules for using this form:
+
+
Each statement is indicated by a YAML list entry like '-'.
+
Each entry is a map with a single key. This key is taken as the statement name.
+
The properties of this map work exactly the same as for named properties above, but repeating the
+name will throw an error since this is ambiguous.
+
If the template is being used for CQL or another driver type which expects a 'stmt' property, it
+must be provided as an explicitly named 'stmt' property as in the foostmt example above.
+
+
Notice in the 'barstmt' example above that there is no "stmt" property. Some drivers have more
flexible op templates and may not require it. This is just a property name that was chosen to
represent the "main body" of a statement template in the shorter YAML forms. While the 'stmt'
property is required for drivers like CQL, which have a solid concept of a "statement body", it
isn't required for all driver types, as some build their operations from other properties.
+
Per-Statement Format
+
It is indeed possible to use any of the three statement formats within each entry of a statement
+sequence:
+
statements:
 - first statement body
 - second: second statement body
 - name: third
   stmt: third statement body
 - fourth: fourth statement body
   freeparam1: freeparamvalue1
   tags:
    type: preload
 - fifth:
    stmt: fifth statement body
    freeparam2: freeparamvalue2
    tags:
     tag2: tagvalue2
+
The above is valid NoSQLBench YAML, although a reader would need to know about the rules explained
+above in order to really make sense of it. For most cases, it is best to follow one format
+convention, but there is flexibility for overrides and naming when you need it. The main thing to
+remember is that the statement form is determined on an element-by-element basis for maximum
+flexibility.
+
Detailed Examples
+
The above examples are explained in detail below in JSON schematic form, to help users and
developers understand the structural rules:
+
statements:

 # ---------------------------------------------------------------------------------------

 # string form:
 # detected when the element is a single string value

 - first statement body

 # read as:
 # {
 #  name: 'stmt1', // a generated name is also added
 #  stmt: 'first statement body'
 # }

 # ---------------------------------------------------------------------------------------

 # named statement form:
 # detected when reading properties form and the first property name is not a reserved
 # word, like stmt, name, params, bindings, tags, ...

 - second: second statement body

 # read as:
 # {
 #  name: 'second',
 #  stmt: 'second statement body'
 # }

 # ---------------------------------------------------------------------------------------

 # properties form:
 # detected when the element is a map and the value of the first entry is not a map

 - name: third
   stmt: third statement body

 # read as:
 # {
 #  name: 'third',
 #  stmt: 'third statement body'
 # }

 # ---------------------------------------------------------------------------------------

 # properties form with free parameters:
 # detected when properties are used which are not reserved words.
 # Unrecognized words are pushed into the parameters map automatically.

 - fourth: fourth statement body
   freeparam1: freeparamvalue1
   tags:
    type: preload

 # read as:
 # {
 #  name: 'fourth',
 #  stmt: 'fourth statement body',
 #  params: {
 #   freeparam1: 'freeparamvalue1'
 #  },
 #  tags: {
 #   type: 'preload'
 #  }
 # }

 # ---------------------------------------------------------------------------------------

 # named statement maps:
 # detected when the element is a map and the only entry is a map.

 - fifth:
    stmt: fifth statement body
    freeparam2: freeparamvalue2
    tags:
     tag2: tagvalue2

 # read as:
 # {
 #  name: 'fifth',
 #  stmt: 'fifth statement body',
 #  params: {
 #   freeparam2: 'freeparamvalue2'
 #  },
 #  tags: {
 #   tag2: 'tagvalue2'
 #  }
 # }

 # ---------------------------------------------------------------------------------------
+
The YAML spec allows for multiple YAML documents to be concatenated in the same file with a
separator:
+
---
+
+
This offers an additional convenience when configuring activities.
+
Multi-Docs Example
+
If you want to parameterize or tag a set of operations with their own bindings, params, or
tags alongside another set of uniquely configured statements, you need only put them in
separate logical documents, separated by a triple-dash.
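As a rough sketch, such a file might look like the following. The binding functions and document
contents here are illustrative, chosen to roughly match the output of the run shown below:

[test]$ cat > stdout-test.yaml
bindings:
 docval: WeightedStrings('doc1.1:1;doc1.2:1;')
statements:
 - "doc1.form1 {docval}\n"
 - "doc1.form2 {docval}\n"
---
bindings:
 numname: NumberNameToString()
statements:
 - "doc2.number {numname}\n"
# EOF (control-D in your terminal)

Running this with the stdout driver interleaves operations from both documents: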
[test]$ ./nb5 run driver=stdout workload=stdout-test cycles=10
+doc1.form1 doc1.1
+doc1.form2 doc1.2
+doc2.number two
+doc1.form1 doc1.2
+doc1.form2 doc1.1
+doc2.number five
+doc1.form1 doc1.2
+doc1.form2 doc1.2
+doc2.number eight
+doc1.form1 doc1.1
+
+
This shows that you can use the power of blocks and tags together at one level and also allow op
+templates to be broken apart into a whole other level of partitioning if desired.
+
WARNING:
+The multi-doc support is there as a ripcord when you need it. However, it is strongly advised that
+you keep your YAML workloads simple to start and only use features like the multi-doc when you
+absolutely need it. For this, blocks are generally a better choice. See examples in the standard
+workloads.
All NoSQLBench YAML formats support a parameter macro format that applies before YAML processing
+starts. It is a basic macro facility that allows named anchors to be placed in the document as a
+whole:
+
Template Param Formats
+
Template params can be provided with a name and a default value in one of these forms:
+
<<varname:defaultval>>
+
+
or
+
TEMPLATE(varname,defaultval)
+
+
In this example, the name of the parameter is varname. It is given a default value of
defaultval. If an activity parameter named varname is provided, as in varname=barbaz, then this
whole expression will be replaced with barbaz. If none is provided, then the default value will
be used instead. Examples are shown further below.
+
Shared Namespace
+
You must ensure that your template params do not overlap with the names of other parameters if
+you want to avoid an error. NoSQLBench makes it possible for drivers to
+detect when unrecognized parameters are provided to a driver or op template. As such, when
+template parameters are accessed from configuration sources, they are also consumed. This is to
+ensure unambiguous usage of every parameter.
+
Template Param Examples
+
[test]$ cat > stdout-test.yaml
statements:
 - "<<linetoprint:MISSING>>\n"
# EOF (control-D in your terminal)

[test]$ ./nb5 run driver=stdout workload=stdout-test cycles=1
MISSING

[test]$ ./nb5 run driver=stdout workload=stdout-test cycles=1 linetoprint="THIS IS IT"
THIS IS IT
+
+
If an empty value is desired by default, then simply use an empty string in your template,
+like <<varname:>> or
+TEMPLATE(varname,).
Naming blocks and op templates provides a layered naming scheme for operations. It is
not usually important to name things except for documentation or metric
naming purposes.
+
If no names are provided, then names are automatically created for blocks
+and op templates. Op templates assigned at the document level are assigned
+to "block0". All other statements are named with the
+format doc#--block#--stmt#.
+
For example, the full name of a statement named stmt1 in block1 of the first document would
be doc1--block1--stmt1.
+
👉 If you anticipate wanting to get metrics for a specific statement in
+addition to the other metrics, then you will want to adopt the habit of
+naming all your op templates something basic and descriptive.
These scenario forms (string, list, or map) simply provide finesse for common editing habits, but
they are automatically read internally as a list of named steps. In the map form, the names are
used to name activities. The order is retained in all cases.
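A minimal sketch of the three forms, using illustrative scenario and step names (the stdout
driver, tags, and cycle counts here are only placeholders):

scenarios:
 # string form: a single command
 default: run driver=stdout cycles=10

 # list form: steps are auto-named 000, 001, ...
 longrun:
  - run driver=stdout cycles=100
  - run driver=stdout cycles=1000

 # map form: the keys name the steps (and thus the activities)
 staged:
  schema: run driver=stdout tags=block:schema
  main: run driver=stdout tags=block:main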
+
Scenario selection
+
When a named scenario is run, it is always named, so that it can be looked up in the list of named
+scenarios under your scenarios: property. The only exception to this is when an explicit scenario
+name is not found on the command line, in which case it is automatically assumed to be default.
+
Some examples may be more illustrative:
+
# runs the scenario named 'default' if it exists, or throws an error if it does not.
+nb5 myworkloads
+# or
+nb5 myworkloads default
+
+# runs the named scenario 'longrun' if it exists, or throws an error if it does not.
+nb5 myworkloads longrun
+
+# runs the named scenario 'longrun' if it exists, or throws an error if it does not.
+# this is simply the canonical form which is more verbose, but more explicit.
+nb5 scenario myworkloads longrun
+
+# run multiple named scenarios from one workload, and then some from another
+nb5 scenario myworkloads longrun default longrun scenario another.yaml name1 name2
+# In this form ^ you may have to add the explicit form to avoid conflicts between
+# workload names and scenario names. That's why the explicit form is provided, after all.
+
+
Workload selection
+
The examples above contain no reference to a workload (formerly called yaml).
+They don't need to, as they refer to themselves implicitly. You may add a workload=
+parameter to the command templates if you like, but this is never needed for basic use, and it is
+error-prone to keep the filename matched to the command template. Just leave it out by default.
+
However, if you are doing advanced scripting across multiple systems, you can provide
a workload= parameter explicitly in order to use another workload description in your test.
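For instance, a command template inside a named scenario might point one step at a different
workload file. This is only a sketch; the file name, tags, and other parameters are hypothetical:

scenarios:
 default:
  # hypothetical: run this step against a different workload description
  other: run driver=stdout workload=another-workload.yaml tags=block:main cycles=10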
+
👉 This is a powerful feature for workload automation and organization. However, it can get unwieldy
+quickly. Caution is advised for deep-linking too many scenarios in a workspace, as there is no
+mechanism for keeping them in sync when small changes are made.
+
Named Scenario Discovery
+
For named scenarios, there is a way for users to find all the named scenarios that are currently
bundled or in view of their current directory. A couple of simple rules must be followed by
scenario publishers in order to keep things simple:
+
+
Workload files in the current directory *.yaml are considered.
+
Workload files under the relative path activities/ with name *.yaml are considered.
+
The same rules are used when looking in the bundled NoSQLBench, so built-ins come along for the
+ride.
+
Any workload file that contains a scenarios: tag is included, but all others are ignored.
+
+
This doesn't mean that you can't use named scenarios for workloads in other locations. It simply
+means that when users use the --list-scenarios option, these are the only ones they will see
+listed.
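To see what is discoverable from your current directory (plus the bundled workloads), you can run:

nb5 --list-scenarios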
+
Parameter Overrides
+
You can override parameters that are provided by named scenarios. Any parameter that you specify on
+the command line after your workload and optional scenario name will be used to override or augment
+the commands that are provided for the named scenario.
+
This is powerful, but it also means that you can sometimes munge user-provided activity parameters
+on the command line with the named scenario commands in ways that may not make sense. To solve this,
+the parameters in the named scenario commands may be locked. You can lock them silently, or you can
+provide a verbose locking that will cause an error if the user even tries to adjust them.
+
Silent locking is provided with a form like param==value. Any silent locked parameters will reject
+overrides from the command line, but will not interrupt the user.
+
Verbose locking is provided with a form like param===value. Any time a user provides a parameter
on the command line for the named parameter, an error is thrown, and they are informed that this is
not possible. This level is provided for cases in which you would not want the user to be unaware of
an unset parameter which is germane and specific to the named scenario.
+
All other parameters provided by the user will take the place of the same-named parameters provided
in each command template, in the order they appear in the template. Any remaining parameters
provided by the user will be added to each of the command templates in the order they appear on the
command line.
+
This is a little counter-intuitive at first, but once you see some examples it should make sense.
+
Parameter Override Examples
+
Consider a simple workload with three named scenarios:
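The workload itself is not reproduced here, but a minimal sketch of what it might look like
follows. The file name, driver, bindings, and cycle counts are illustrative; the point is one
unlocked, one silently locked, and one verbosely locked cycles parameter:

# basics.yaml (hypothetical)
scenarios:
 s1: run driver=stdout cycles=10
 s2: run driver=stdout cycles==10
 s3: run driver=stdout cycles===10
bindings:
 c: Identity()
statements:
 - "cycle={c}\n"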
Silent Locking example

If you run the second scenario s2 with your own value for cycles=7, it does what the locked
parameter cycles==10 requires, without telling you that it is ignoring the value specified on your
command line.
Sometimes, this is appropriate, such as when specifying settings like threads== for schema
+activities.
+
Verbose Locking example
+
If you run the third scenario s3 with your own value for cycles=7, then you will get an error
telling you that this is not possible. Sometimes you want to make sure that the user knows a
parameter should not be changed, and that if they want to change it, they'll have to make their own
custom version of the scenario in question.
+
$ nb5 basics s3 cycles=7
+ERROR: Unable to reassign value for locked param 'cycles===7'
+$
+
+
Ultimately, it is up to the scenario designer to decide when to lock parameters for users. The
built-in workloads offer some examples of how to set these parameters so that the right values are
locked in place without bothering the user, while other values are made very clear in how they
should be set. Please look at these examples for inspiration when you need them.
+
Forcing Undefined Parameters
+
If you want to ensure that any parameter in a named scenario template remains unset in the generated
+scenario script, you can assign it a value of UNDEF. The locking behaviors described above apply to
+this one as well. Thus, for schema commands which rely on the default sequence length (which is
+based on the number of active statements), you can set cycles==UNDEF to ensure that when a user
+passes a cycles parameter, the schema activity doesn't break with too many cycles.
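A sketch of this pattern, assuming a CQL schema step in a hypothetical named scenario (the
driver, tags, and other parameters shown are illustrative):

scenarios:
 default:
  # cycles==UNDEF silently drops any user-provided cycles value for this step,
  # so the schema activity falls back to its default sequence length
  schema: run driver=cql tags=block:schema threads==1 cycles==UNDEF
  main: run driver=cql tags=block:main threads=auto cycles=10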
+
Automatic Parameters
+
Some parameters are already known due to the fact that you are using named scenarios.
+
workload
+
The workload parameter is, by default, set to the logical path (fully qualified workload name) of
+the yaml file containing the named scenario. However, if the command template contains this
+parameter, it may be overridden by users as any other parameter depending on the assignment
+operators as explained above.
+
alias
+
The alias parameter is, by default, set to the expanded name of WORKLOAD_SCENARIO_STEP, which
+means that each activity within the scenario has a distinct and symbolic name. This is important for
+distinguishing metrics from one another across workloads, named scenarios, and steps within a named
+scenario. The above words are interpolated into the alias as follows:
+
+
+
WORKLOAD - The simple name part of the fully qualified workload name. For example, with a
+workload (yaml path) of foo/bar/baz.yaml, the WORKLOAD name used here would be baz.
+
+
+
SCENARIO - The name of the scenario as provided on the command line.
+
+
+
STEP - The name of the step in the named scenario. If you used the list or string forms to provide
+a command template, then the steps are automatically named as a zero-padded number representing
+the step in the named scenario, starting from 000, per named scenario. (The numbers are not
+globally assigned)
+
+
+
Because it is important to have uniquely named activities for the sake of sane metrics and logging,
+any alias provided when using named scenarios which does not include the three tokens above will
+cause a warning to be issued to the user explaining why this is a bad idea.
+
👉 UNDEF is handled before alias expansion above, so it is possible to force the default activity
+naming behavior above with alias===UNDEF. This is generally recommended, and will inform users if
+they try to set the alias in an unsafe way.
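For example, a command template could verbosely lock the alias so that users cannot override the
default naming. This is only a sketch; the rest of the command is illustrative:

scenarios:
 default:
  main: run driver=stdout cycles=10 alias===UNDEF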
This section describes errors that you might see if you have a YAML loading issue, and what you can do to fix them.
+
Undefined Name-Statement Tuple
+
This exception is thrown when the statement body is not found in a statement definition in any of the supported formats.
+For example, the following block will cause an error:
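The offending definition is not shown here, but a sketch of a statement entry that triggers this
error (the names and values are illustrative) would be:

statements:
 - name: statement-foo
   params:
    aparam: avalue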
This is because name and params are reserved property names -- removed from the list of name-value pairs before free
+parameters are read. If the statement is not defined before free parameters are read, then the first free parameter is
+taken as the name and statement in name: statement form.
+
To correct this error, supply a statement property in the map, or simply replace the name: statement-foo entry with a
+statement-foo: statement body at the top of the map:
+
Either of these will work:
+
statements:
 - name: statement-foo
   stmt: statement body
   params:
    aparam: avalue
---
statements:
 - statement-foo: statement body
   params:
    aparam: avalue
+
+
In both cases, it is clear to the loader where the statement body should come from, and what (if any) explicit naming
+should occur.
+
Redefined Name-Statement Tuple
+
This exception is thrown when the statement name is defined in multiple ways. This is an explicit exception to avoid
+possible ambiguity about which value the user intended. For example, the following statements definition will cause an
+error:
+
statements:
 - name: name1
   name2: statement body
+
+
This is an error because the statement is not defined before free parameters are read, and the name: statement form
+includes a second definition for the statement name. In order to correct this, simply remove the separate name entry,
+or use the stmt property to explicitly set the statement body. Either of these will work:
+
statements:
 - name2: statement body
---
statements:
 - name: name1
   stmt: statement body
+
+
In both cases, there is only one name defined for the statement according to the supported formats.
+
YAML Parsing Error
+
This exception is thrown when the YAML format is not recognizable by the YAML parser. If you are not working from
+examples that are known to load cleanly, then please review your document for correctness according to the
+YAML Specification.
+
If you are sure that the YAML should load, then please
+submit a bug report with details on the type of YAML
+file you are trying to load.
+
YAML Construction Error
+
This exception is thrown when the YAML was loaded, but the configuration object was not able to be constructed from the
+in-memory YAML document. If this error occurs, it may be a bug in the YAML loader implementation. Please
+submit a bug report with details on the type of YAML
+file you are trying to load.
It is recommended that you read through the examples in each of the design sections in order. This
+guide was designed to give you a detailed understanding of workload construction with NoSQLBench.
+The examples will also give you better insight into how NoSQLBench works at a fundamental level.
+
Workloads, Defined
+
Workloads in NoSQLBench are defined by a workload template. You can use workload templates to
+describe operations that you want to execute, using any available operation type. A workload
+template is usually provided in a YAML file according to the conventions and formats provided in
+this section. From here on, we'll simply call them workloads.
+
👉 Workload templates are basically blueprints for operations that you organize in whatever
+order and mix you need.
+
With NoSQLBench, a standard configuration format is provided that's used across all workloads.
This makes it easy to specify op templates, parameters, data bindings, and tags. By default, we
use YAML as our workload format, but you could just as easily use JSON, since YAML is a superset
of JSON. After all, workload templates are really just collections of data structure templates.
+
This section describes the standard workload syntax in YAML and how to use it.
+
Multi-Protocol Support
+
You will notice that this guide is not overly CQL-specific. That is because NoSQLBench is a
+multi-protocol tool. All that is needed for you to use this guide with other protocols is a
+different driver parameter. Try to keep that in mind as you think about designing workloads.
+
Advice for new builders
+
Look for built-ins first
+
If you haven't yet run NoSQLBench with a built-in workload, then this section may not be necessary
+reading. It is possible that a built-in workload will do what you need. If not, please read on.
+
Review existing examples
+
The built-in workloads that are included with NoSQLBench are also easy to copy out as a starting
point. You just need to use two commands:
+
# find a workload you want to copy
+nb5 --list-workloads
+
+# copy a workload to your local directory
+nb5 --copy cql-iot
+
+
Follow the conventions
+
The block names and other conventions demonstrated here represent a pretty common pattern. If
you follow these patterns, your workloads will be more portable across environments. All of the
baseline workloads that we publish for NoSQLBench follow these conventions.