6 changes: 3 additions & 3 deletions RUNNING-DOCS-LOCALLY.md
@@ -14,9 +14,9 @@ To run these docs locally you'll need:

If you want to fully render all documentation locally you need to install the following plugins with `pip install`:

-* [glightbox](https://pypi.org/project/mkdocs-glightbox/0.1.0/)
-* [multirepo](https://pypi.org/project/mkdocs-multirepo/)
-* [redirects](https://pypi.org/project/mkdocs-redirects/)
+* [mkdocs-glightbox](https://pypi.org/project/mkdocs-glightbox/0.1.0/)
+* [mkdocs-multirepo](https://pypi.org/project/mkdocs-multirepo/)
+* [mkdocs-redirects](https://pypi.org/project/mkdocs-redirects/)
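
For example, you can install all three in one go with `pip install mkdocs-glightbox mkdocs-multirepo mkdocs-redirects`.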

You also need to sign up to the [Insiders Programme](https://squidfunk.github.io/mkdocs-material/insiders/).

28 changes: 8 additions & 20 deletions docs/apis/data-catalogue-api/intro.md
@@ -1,37 +1,25 @@
# Introduction

-The Data Catalogue HTTP API allows you to fetch data stored in the Quix
-platform. You can use it for exploring the platform, prototyping
-applications, or working with stored data in any language with HTTP
-capabilities.
+The Data Catalogue HTTP API allows you to fetch data stored in the Quix platform. You can use it for exploring the platform, prototyping applications, or working with stored data in any language with HTTP capabilities.

-The API is fully described in our [Swagger
-documentation](get-swagger.md). Read on for
-a guide to using the API, including real-world examples you can execute
-from your language of choice, or via the command line using `curl`.
+The API is fully described in our [Swagger documentation](get-swagger.md). Read on for a guide to using the API, including real-world examples you can invoke from your language of choice, or from the command line using `curl`.
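
For example, a minimal Python sketch of listing your streams might look like the following. The URL and token are placeholders, and the request shape shown is an assumption; the pages linked below and the Swagger reference are the authoritative details.

```python
import requests

# Placeholders -- substitute your workspace's Data Catalogue API URL
# and a valid PAT token (see authenticate.md and request.md).
BASE_URL = "https://your-data-catalogue-endpoint"
TOKEN = "YOUR_PAT_TOKEN"

# A POST with an empty JSON filter body is assumed here to mean
# "no filter"; see the Swagger documentation for the exact schema.
response = requests.post(
    f"{BASE_URL}/streams",
    headers={"Authorization": f"bearer {TOKEN}"},
    json={},
)
response.raise_for_status()
print(response.json())
```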

## Preparation

-Before using any of the endpoints, you’ll need to know how to
-[authenticate your requests](authenticate.md) and
-how to [form a typical request to the
-API](request.md).
+Before using any of the endpoints, you’ll need to know how to [authenticate your requests](authenticate.md) and how to [form a typical request to the API](request.md).

-You’ll also need to have some data stored in the Quix platform for API
-use to be meaningful. You can use any Source from our [Code Samples](../../platform/samples/samples.md) to do this using the Quix
-portal.
+You’ll also need to have some data stored in the Quix platform for API use to be meaningful. You can use any Source from our [Code Samples](../../platform/samples/samples.md) to do this using the Quix portal.

## Further documentation

-| | | |
-| ------------------------------------------------------------------ | ------------------ | ----------------------------------------- |
-| Documentation | Endpoint | Examples |
+| Documentation | Endpoint | Examples |
+| -------------------------------------------- | ------------------ | ----------------------------------------- |
| [Streams, paged](streams-paged.md) | `/streams` | Get all streams in groups of ten per page |
| [Streams, filtered](streams-filtered.md) | `/streams` | Get a single stream, by ID |
| | | Get only the streams with LapNumber data |
| [Streams & models](streams-models.md) | `/streams/models` | Get stream hierarchy |
| [Raw data](raw-data.md) | `/parameters/data` | Get all the `Speed` readings |
| | | Get `Speed` data between timestamps |
| [Aggregated data by time](aggregate-time.md) | `/parameters/data` | Downsample or upsample data |
| [Aggregated by tags](aggregate-tags.md) | `/parameters/data` | Show average Speed by LapNumber |
| [Tag filtering](filter-tags.md) | `/parameters/data` | Get data for just one Lap |
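
As an illustration of the `/parameters/data` examples listed above, a minimal Python sketch might look like this. The URL, token, and body fields are placeholder assumptions; see [Raw data](raw-data.md) for the actual request schema.

```python
import requests

BASE_URL = "https://your-data-catalogue-endpoint"  # placeholder
TOKEN = "YOUR_PAT_TOKEN"  # placeholder

# Illustrative request body: these field names are assumptions,
# not the authoritative schema -- raw-data.md defines the real shape.
body = {
    "numericParameters": [{"parameterName": "Speed"}],
}

response = requests.post(
    f"{BASE_URL}/parameters/data",
    headers={"Authorization": f"bearer {TOKEN}"},
    json=body,
)
response.raise_for_status()
print(response.json())
```
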
13 changes: 4 additions & 9 deletions docs/apis/streaming-writer-api/intro.md
@@ -1,14 +1,9 @@
# Introduction

-The Streaming Writer API allows you to stream data into the Quix
-platform via HTTP endpoints or SignalR. It’s an alternative to using our
-C\# and Python client libraries. You can use the Streaming Writer API from any
-HTTP-capable language.
-
-The API is fully documented in our [Swagger
-documentation](get-swagger.md). Read on for a
-guide to using the API, including real-world examples you can execute
-from your language of choice, or via the command line using curl.
+The Streaming Writer API allows you to stream data into the Quix platform via HTTP endpoints or SignalR. It’s an alternative to using our C# and Python client libraries. You can use the Streaming Writer API from any HTTP-capable language.
+
+The API is fully documented in our [Swagger documentation](get-swagger.md). Read on for a guide to using the API, including real-world examples you can invoke from your language of choice, or from the command line using `curl`.
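
For example, a minimal Python sketch of writing parameter data over HTTP might look like the following. The endpoint path, URL, token, and body field names are placeholder assumptions for illustration only; the Swagger documentation defines the actual API.

```python
import time

import requests

BASE_URL = "https://your-streaming-writer-endpoint"  # placeholder
TOKEN = "YOUR_PAT_TOKEN"  # placeholder

# Assumed endpoint shape for illustration: writing parameter data
# to a stream in a topic. Check the Swagger docs for the real paths.
url = f"{BASE_URL}/topics/your-topic/streams/your-stream/parameters/data"

payload = {
    "timestamps": [time.time_ns()],        # illustrative field names
    "numericValues": {"Speed": [123.4]},   # one reading for "Speed"
}

response = requests.post(
    url,
    headers={"Authorization": f"bearer {TOKEN}"},
    json=payload,
)
response.raise_for_status()
```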

## Preparation

2 changes: 1 addition & 1 deletion docs/index.md
@@ -117,7 +117,7 @@ Read more about the Quix Streams Client Library and APIs.

---

-Query historic time-series data in Quix using HTTP interface.
+Query historical time-series data in Quix using the HTTP interface.

[:octicons-arrow-right-24: Learn more](./apis/data-catalogue-api/intro.md)

6 changes: 3 additions & 3 deletions docs/platform/MLOps.md
@@ -19,18 +19,18 @@ a seamless journey from concept to production. The key steps are:
Any member of any team can quickly access data in the Catalogue without
support from software or regulatory teams.

-## Develop features in historic data
+## Develop features in historical data

Use Visualise to discover, segment, label and store significant features
in the catalogue.

-## Build & train models on historic data
+## Build & train models on historical data

Use Develop and Deploy to:

- Write model code in Python using your favourite IDE.

-- Train models on historic data.
+- Train models on historical data.

- Evaluate results against raw data and results from other models.

4 changes: 2 additions & 2 deletions docs/platform/definitions.md
@@ -22,7 +22,7 @@ Workspaces are collaborative. Multiple users, including developers, data scienti

## Project

-A set of code in Quix Platform that can be edited, compiled, executed, and deployed as one Docker image. Projects in Quix Platform are fully version controlled. You can also tag your code as an easy way to manage releases of your project.
+A set of code in Quix Platform that can be edited, compiled, run, and deployed as one Docker image. Projects in Quix Platform are fully version controlled. You can also tag your code as an easy way to manage releases of your project.

## Deployment

@@ -153,7 +153,7 @@ A [WebSockets API](../apis/streaming-reader-api/intro.md) used to stream any dat

### Data Catalogue API

-An [HTTP API](../apis/data-catalogue-api/intro.md) used to query historic data in the Data Catalogue. Most commonly used for dashboards, analytics and training ML models. Also useful to call historic data when running an ML model, or to call historic data from an external application.
+An [HTTP API](../apis/data-catalogue-api/intro.md) used to query historical data in the Data Catalogue. Most commonly used for dashboards, analytics, and training ML models. Also useful for fetching historical data when running an ML model, or from an external application.

### Portal API

2 changes: 1 addition & 1 deletion docs/platform/how-to/jupyter-nb.md
@@ -52,7 +52,7 @@ You need to be logged into the platform for this:

![how-to/jupyter-wb/connect-python.png](../../platform/images/how-to/jupyter-wb/connect-python.png)

-Copy the Python code to your Jupyter notebook and execute.
+Copy the Python code to your Jupyter notebook and run it.
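
The generated code handles authentication and the query for you. As a purely illustrative sketch of the general pattern (not the portal's actual output; the URL, token, and field names here are placeholders), you might load the results into a pandas DataFrame like this:

```python
import pandas as pd
import requests

# Placeholders -- the portal-generated code supplies the real values.
BASE_URL = "https://your-data-catalogue-endpoint"
TOKEN = "YOUR_PAT_TOKEN"

# Illustrative request body; field names are assumptions.
response = requests.post(
    f"{BASE_URL}/parameters/data",
    headers={"Authorization": f"bearer {TOKEN}"},
    json={"numericParameters": [{"parameterName": "Speed"}]},
)
response.raise_for_status()

# The response shape is assumed here; adapt to the actual payload.
df = pd.DataFrame(response.json())
df.head()  # in a notebook cell, this renders a preview of the data
```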

![how-to/jupyter-wb/jupyter-results.png](../../platform/images/how-to/jupyter-wb/jupyter-results.png)

2 changes: 1 addition & 1 deletion docs/platform/how-to/webapps/write.md
@@ -237,7 +237,7 @@ req.end();
```

In the preceding example, tags in the event data request are optional.
-Tags add context to your data points and help you to execute efficient
+Tags add context to your data points and help you to run efficient
queries over your data, much like using indexes in traditional databases.
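
As a purely illustrative sketch (shown here as a Python dict; the field names are assumptions, not the exact schema used by the request above), a tagged event value might look like this:

```python
# Illustrative only: field names are assumptions, not the exact schema.
event = {
    "id": "motion-detected",
    "timestamp": 1668680959000000000,  # nanoseconds since epoch
    "value": "shake",
    "tags": {                          # optional context for querying
        "device": "phone-42",
        "session": "test-run-1",
    },
}
```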

16 changes: 8 additions & 8 deletions docs/platform/tutorials/data-science/data-science.md
@@ -14,7 +14,7 @@ In other words, you will complete all the typical phases of a data science proje

- Store the data efficiently

-- Train some ML models with historic data
+- Train some ML models with historical data

- Deploy the ML models into production in real time

@@ -128,9 +128,9 @@ You now have a working real time stream of bike data. Now use the OpenWeather ac

## 4. View and store the data

-With Quix it's easy to visualize your data in a powerful and flexible way, you can see the real-time data and view historic data.
+With Quix it's easy to visualize your data in a powerful and flexible way: you can see real-time data and view historical data.

-At it's heart Quix is a real-time data platform, so if you want to see data-at-rest for a topic, you must turn on data persistence for that topic (You'll do this [below](#historic)).
+At its heart, Quix is a real-time data platform, so if you want to see data-at-rest for a topic, you must turn on data persistence for that topic (you'll do this [below](#historical)).

### Real-time

@@ -148,9 +148,9 @@ At it's heart Quix is a real-time data platform, so if you want to see data-at-r

If you don't see any streams or parameters, just wait a moment or two. The next time data arrives these lists will be automatically populated.

-### Historic
+### Historical

-In order to train a machine learning model we will need to store the data we are ingesting so that we start building a historic dataset. However topics are real time infrastructures, not designed for data storage. To solve this, Quix allows you to send the data going through a topic to an efficient real time database if you need it:
+In order to train a machine learning model, we need to store the data we are ingesting so that we can start building a historical dataset. However, topics are real-time infrastructure, not designed for data storage. To solve this, Quix allows you to send the data flowing through a topic to an efficient real-time database if you need it:

1. Navigate to the topics page using the left hand navigation

@@ -172,7 +172,7 @@ Follow the along and we'll show you how to get data out of Quix so you can train

We mentioned earlier in [Weather real time stream](#3-weather-real-time-stream) that free access to the OpenWeather API only allows us to consume new data every 30 minutes; therefore, at this point you will have a limited data set.

-You can leave the data consumption process running overnight or for a few days to gather more data, but for the time being there's no problem in continuing with the tutorial with your limited historic data.
+You can leave the data consumption process running overnight or for a few days to gather more data, but for now you can continue the tutorial with your limited historical data.

#### Get the data

@@ -204,15 +204,15 @@ You can leave the data consumption process running overnight or for a few days t

### Train the model

-At this point, you are generating historic data and know how to query it. You can train your ML models as soon as you've gathered enough data.
+At this point, you are generating historical data and know how to query it. You can train your ML models as soon as you've gathered enough data.

!!! example "Need help?"

Follow our "How to train an ML model" tutorial [here](../train-and-deploy-ml/train-ml-model.md)

We walk you through the process of getting the code to access the data (as described above), running the code in a Jupyter notebook, training the model and uploading your pickle file to Quix.

-However, it would take several weeks to accumulate enough historic data to train a model, so let's continue the tutorial with some pre-trained models we have provided. We've done it using the very same data flow you've just built, and can find the Jupyter notebook code we used [here](https://github.com/quixio/NY-bikes-tutorial/blob/1stversion/notebooks-and-sample-data/04%20-%20Train%20ML%20models.ipynb){target=_blank}.
+However, it would take several weeks to accumulate enough historical data to train a model, so let's continue the tutorial with some pre-trained models we have provided. We trained them using the very same data flow you've just built, and you can find the Jupyter notebook code we used [here](https://github.com/quixio/NY-bikes-tutorial/blob/1stversion/notebooks-and-sample-data/04%20-%20Train%20ML%20models.ipynb){target=_blank}.

## 6. Run the model

@@ -258,6 +258,6 @@ With the microservices for control and input processing deployed along with the

## Thanks

-Thanks for following the tutorial, hopefully you learnt something about Quix and had some fun doing it!
+Thanks for following the tutorial; hopefully you learned something about Quix and had some fun doing it!

If you need any help, run into difficulties, or just want to say hi, then please join our [Slack community](https://quix.io/slack-invite?_ga=2.132866574.1283274496.1668680959-1575601866.1664365365){target=_blank}.
2 changes: 1 addition & 1 deletion docs/platform/tutorials/eventDetection/crash-detection.md
@@ -1,6 +1,6 @@
# 2. Event Detection

-Our event detection pipeline is centered around this service, which executes an ML model to detect whether a vehicle has been involved in an accident.
+Our event detection pipeline is centered around this service, which runs an ML model to detect whether a vehicle has been involved in an accident.

In reality, our ML model was trained to detect the difference between a phone being shaken and one being used normally. You actually don’t have to use an ML model at all! There are various ways this service could have been written; for example, you could detect a change in speed, or use the speed together with another parameter to determine whether an event has occurred.

23 changes: 23 additions & 0 deletions docs/platform/tutorials/train-and-deploy-ml/conclusion.md
@@ -0,0 +1,23 @@
# Conclusion

In this tutorial you've learned how to use Quix to generate real-time data. You've also learned how to import that data into Jupyter Notebook using the Quix code generator. You then saw how to deploy your ML model to the Quix Platform, and visualize its output in real time.

![Data explorer](./images/visualize-result.png)

The objective of the tutorial was not to create the most accurate model, but to step you through the overall process and show you some of the useful features of Quix. It demonstrated one possible workflow, where you train your ML model in Jupyter Notebook, and showed the integration between Quix and Jupyter. It is also possible to train your ML model directly in Quix, on live data, or on persisted data using the [replay functionality](../../how-to/replay.md).

## Next Steps

Here are some suggested next steps to continue on your Quix learning journey:

* Visit the [Quix Code Samples GitHub](https://github.com/quixio/quix-samples){target=_blank}. If you build your own connectors and apps, you can contribute them back: fork our Code Samples repo and submit your code, updates, and ideas.

* [Sentiment analysis tutorial](../sentiment-analysis/index.md) - In this tutorial you learn how to build a sentiment analysis pipeline, capable of analyzing real-time chat.

* [Data science tutorial](../data-science/data-science.md) - In this tutorial you use data science to build a real-time bike availability pipeline.

What will you build? Let us know! We’d love to feature your project or use case in our [newsletter](https://www.quix.io/community/).

## Getting help

If you need any assistance, we're here to help in [The Stream](https://join.slack.com/t/stream-processing/shared_invite/zt-13t2qa6ea-9jdiDBXbnE7aHMBOgMt~8g){target=_blank}, our free Slack community. Introduce yourself and then ask any questions in `quix-help`.
43 changes: 43 additions & 0 deletions docs/platform/tutorials/train-and-deploy-ml/create-data.md
@@ -0,0 +1,43 @@
# Get your data

In this part of the tutorial, you obtain some real-time data to work with in the rest of the tutorial. You use a demo data source that generates Formula 1 race car data from a computer game, and this data becomes the basis for an ML model that predicts braking patterns.

## Create a persisted topic

To make things a little easier, first create a **persisted topic** to receive the generated data.

1. Log in to the Quix Portal.

2. Click `Topics` on the left-hand menu.

3. Click `Add new`, located at the top right.

4. Enter a topic name of `f1-data`.

5. Leave other values in the `Create new topic` dialog at their defaults.

6. Click `Done`. Now wait while the topic is created for you.

7. Once the topic has been created, click the persistence slider button to ensure your data is persisted, as shown in the following screenshot:

![Enable topic persistence](./images/enable-topic-persistence.png)

## Generate data from the demo data source

Now generate the actual data for use later in the tutorial by completing the following steps:

1. Click `Code Samples` on the left-hand sidebar.

2. Find the `Demo Data` source. This service streams F1 Telemetry data into a topic from a recorded game session.

3. Click the `Setup & deploy` button in the `Demo Data` panel.

4. You can leave `Name` as the default value.

5. Make sure `Topic` is set to `f1-data` and then click `Deploy`.

Once this service is deployed it will run as a [job](../../definitions.md#job) and generate real-time data to the `f1-data` topic, and this data will be persisted.

This data is retrieved later in this tutorial using Python code, generated for you by Quix, that calls the [Data Catalogue API](../../../apis/data-catalogue-api/intro.md).
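
As a preview of that step, here is an illustrative Python sketch of such a query. The URL, token, and body fields are placeholder assumptions; the code Quix generates for you will use the real endpoint and schema.

```python
import requests

BASE_URL = "https://your-data-catalogue-endpoint"  # placeholder
TOKEN = "YOUR_PAT_TOKEN"  # placeholder

# Illustrative body: ask for data from streams in the f1-data topic.
# The field name is an assumption -- the generated code uses the real schema.
response = requests.post(
    f"{BASE_URL}/parameters/data",
    headers={"Authorization": f"bearer {TOKEN}"},
    json={"topic": "f1-data"},
)
response.raise_for_status()
print(response.json())
```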

[Import data into Jupyter Notebook :material-arrow-right-circle:{ align=right }](./import-data.md)