
Commit

Merge e233b13 into 8d76dee
chicco785 committed Jan 28, 2021
2 parents 8d76dee + e233b13 commit bb722a8
Showing 37 changed files with 725 additions and 1,113 deletions.
4 changes: 1 addition & 3 deletions .travis.yml
Original file line number Diff line number Diff line change
@@ -10,7 +10,7 @@ install:
- pip install pipenv

before_script:
- pipenv install
- pipenv install --dev
- sudo service postgresql stop

script:
@@ -26,7 +26,5 @@ notifications:

env:
jobs:
- CRATE_VERSION=3.3.2 QL_PREV_IMAGE=smartsdk/quantumleap:0.5.1 PREV_CRATE=3.3.0
- CRATE_VERSION=4.0.12 QL_PREV_IMAGE=smartsdk/quantumleap:0.5.1 PREV_CRATE=3.3.5
- CRATE_VERSION=4.0.12 QL_PREV_IMAGE=smartsdk/quantumleap:0.7.5 PREV_CRATE=3.3.5
- CRATE_VERSION=4.1.4 QL_PREV_IMAGE=smartsdk/quantumleap:0.7.5 PREV_CRATE=4.0.12
12 changes: 5 additions & 7 deletions Pipfile
@@ -12,26 +12,24 @@ geocoder = "~=1.33"
geojson = "~=2.4"
geomet = "~=0.2"
gunicorn = "~=20.0.4"
influxdb = "~=4.0"
pg8000 = "==1.16.5"
pymongo = "~=3.4"
pytest = "~=3.0"
pytest-cov = "~=2.7.1"
coveralls = "~=2.0"
pytest-flask = "~=0.10"
python-dateutil = ">=2.7"
pyyaml = ">=4.2"
redis = "~=2.10"
requests = ">=2.20"
rethinkdb = "==2.3"
pickle-mixin = "==1.0.2"
pytest-lazy-fixture = "~=0.6.3"

# run `pipenv install --dev` to get the packages below in your env
[dev-packages]
aiohttp = "~=3.7"
matplotlib = "~=3.3"
pandas = "~=1.1"
pytest-lazy-fixture = "~=0.6.3"
pytest-flask = "~=0.10"
pytest = "~=3.0"
pytest-cov = "~=2.7.1"
coveralls = "~=2.0"

[requires]
python_version = "3.8.5"
701 changes: 410 additions & 291 deletions Pipfile.lock

Large diffs are not rendered by default.

13 changes: 6 additions & 7 deletions README.md
@@ -57,12 +57,13 @@ QuantumLeap supports both Crate DB and Timescale as time-series DB
backends but please bear in mind that at the moment we only support
the following versions:

* Crate backend: Crate DB version `3.3.*` (will be deprecated from QL `0.9` version) and `4.*`
* Timescale backend: Postgres version `10.*` or `11.*` +
- Crate backend: Crate DB version `4.1.*`
- Timescale backend: Postgres version `10.*` or `11.*` +
Timescale extension `1.3.*` + Postgis extension `2.5.*`.

PR #373 introduced basic support for NGSI-LD. In short this means that using
the current endpoint you are able to store NGSI-LD payloads with few caveats (see #398)
PR [#373](https://github.com/smartsdk/ngsi-timeseries-api/pull/373) introduced
basic support for NGSI-LD. In short, this means that using the current endpoint
you are able to store NGSI-LD payloads with a few caveats (see [#398](https://github.com/smartsdk/ngsi-timeseries-api/issues/398))

## Usage

@@ -85,9 +86,7 @@ additional documentation about QuantumLeap. Note that these guides could be
outdated (so could the official docs!), so we appreciate all efforts to keep
consistency.

- [SmartSDK Guided-tour](https://guided-tour-smartsdk.readthedocs.io/en/latest/)
- [FIWARE Step-by-step](https://fiware-tutorials.readthedocs.io/en/latest/time-series-data/index.html)
- [SmartSDK Recipes](https://smartsdk-recipes.readthedocs.io/en/latest/data-management/quantumleap/readme/)
- [Orchestra Cities Helm Charts](https://github.com/orchestracities/charts)

---
@@ -96,4 +95,4 @@ consistency.

QuantumLeap is licensed under the [MIT](LICENSE) License

© 2017-2020 Martel Innovate
© 2017-2021 Martel Innovate
4 changes: 1 addition & 3 deletions deps.env
@@ -2,14 +2,12 @@

export MONGO_VERSION=3.2
export ORION_VERSION=2.2.0

export INFLUX_VERSION=1.2.2
export RETHINK_VERSION=2.3.5
export CRATE_VERSION=4.1.4
export TIMESCALE_VERSION=1.7.1-pg12

export REDIS_VERSION=3

# Update this tag considering previous major/minor, not patches
# TODO once this version makes it into a release, QL version should be updated to 0.9, CRATE version to 4.0.12
export QL_PREV_IMAGE=smartsdk/quantumleap:0.7.5
export PREV_CRATE=3.3.5
2 changes: 0 additions & 2 deletions docs/manuals.ja/index.md
@@ -196,8 +196,6 @@ REST API を介したデータのクエリと取得が計画されています
"Grafana Home"
[grafana.pg]: http://docs.grafana.org/features/datasources/postgres/
"Grafana PostgreSQL Data Source"
[influx]: https://docs.influxdata.com/influxdb
"InfluxDB Documentation"
[ngsi-spec]: https://fiware.github.io/specifications/ngsiv2/stable/
"FIWARE-NGSI v2 Specification"
[orion]: https://fiware-orion.readthedocs.io
7 changes: 0 additions & 7 deletions docs/manuals/index.md
@@ -159,9 +159,6 @@ to use multiple time series databases. This design choice is justified
by the fact that a database product may be more suitable than
another depending on circumstances at hand. Currently QuantumLeap
can be used with both [CrateDB][crate] and [Timescale][timescale].
Experimental support is also available for [InfluxDB][influx] and
[RethinkDB][rethink] but development for these two back ends has
stalled so they are **not** usable at the moment.

The [Database Selection][ql-man.db-sel] section of this manual
explains how to configure QuantumLeap to use one of the available
@@ -293,8 +290,6 @@ As of today, the query caching stores:
"Grafana Home"
[grafana.pg]: http://docs.grafana.org/features/datasources/postgres/
"Grafana PostgreSQL Data Source"
[influx]: https://docs.influxdata.com/influxdb
"InfluxDB Documentation"
[ngsi-spec]: https://fiware.github.io/specifications/ngsiv2/stable/
"FIWARE-NGSI v2 Specification"
[ngsi-ld-spec]: https://www.etsi.org/deliver/etsi_gs/CIM/001_099/009/01.01.01_60/gs_CIM009v010101p.pdf
@@ -319,8 +314,6 @@
"NGSI-TSDB Specification"
[ql-tut]: https://fiware-tutorials.readthedocs.io/en/latest/time-series-data/
"FIWARE Tutorials - Time Series Data"
[rethink]: https://www.rethinkdb.com/
"RethinkDB Home"
[smartsdk.tour]: http://guided-tour-smartsdk.readthedocs.io/en/latest/
"SmartSDK Guided Tour"
[timescale]: https://www.timescale.com
45 changes: 45 additions & 0 deletions docs/manuals/user/troubleshooting.md
@@ -73,6 +73,51 @@ latency between the data writing and the time the data is available for reading.
on node B. So if you issue a query right after writing a message and QL
picks node B, probability to find the data you just pushed is even lower.

### Crate configuration and active shards

The `wait_for_active_shards` value only affects table (or partition) creation
and does not (directly) affect writes.
For a partitioned table, new partitions are created on the fly taking this
setting into account, so in that case the setting partly affects writes too.
If not all of the N shards defined for a new partition are active, the write
stalls until the replicas become active or an internal timeout of `30s` is
reached. Writes therefore get slow if, for example, the setting is greater
than `1` and replica shards are unassigned or unreachable due to a missing node.
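In CrateDB this threshold is exposed as the `write.wait_for_active_shards`
table setting, which can be changed at runtime. A minimal sketch, assuming a
hypothetical QuantumLeap entity table named `doc.etroom` (adapt to your schema):

```sql
-- Relax the write barrier, e.g. ahead of maintenance when replica shards
-- may be temporarily unassigned (table name is hypothetical):
ALTER TABLE doc.etroom SET ("write.wait_for_active_shards" = 1);

-- Restore the stricter setting once all nodes are back in the cluster:
ALTER TABLE doc.etroom SET ("write.wait_for_active_shards" = 'all');
```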

To avoid such slow writes (and possible data loss due to missing replicas)
when `wait_for_active_shards` is set to `>1` while doing a
[rolling upgrade](https://crate.io/docs/crate/howtos/en/latest/admin/rolling-upgrade.html),
`cluster.graceful_stop.min_availability` should be set to `full` and nodes must
be shut down gracefully. This ensures that primary **and** replica shards are
moved away from the node being shut down **before** that node stops.

Here's the scenario:

1. A node N1 holds a primary shard S with records r[1] to r[m + n].
1. Another node N2 holds S's replica shard, R, with records r[1] to r[m],
i.e. n records haven't been replicated yet.
1. N1 goes down.
1. Crate won't promote R to primary since it knows R is stale w.r.t. S.

The only way out of the impasse would be to manually force replica promotion.

If N1 goes down **before** the replication request was sent to the replica
shard on N2, the cluster will not promote the (stale) replica to primary
and thus won't process any new writes, resulting in red table health.
After the primary shard comes back (yellow/green health, writes possible again),
the missing operations are synced to the replica.
If the primary cannot be restarted (e.g. due to disk corruption), the replica
can be [forced](https://crate.io/docs/crate/reference/en/4.3/sql/statements/alter-table.html#alter-table-reroute-promote-replica)
to become the new primary. Of course the missing operations are then lost.
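Forcing the promotion looks roughly like the statement below (table name,
shard id and node name are hypothetical; the exact syntax is in the linked
Crate `ALTER TABLE` reference):

```sql
-- Promote the stale replica of shard 0 on node N2 to primary, explicitly
-- accepting that operations which never reached it are lost:
ALTER TABLE doc.etroom
  REROUTE PROMOTE REPLICA SHARD 0 ON 'N2'
  WITH (accept_data_loss = TRUE);
```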

If N1 goes down **after** the replication request was sent, the replica may
have processed the operation and can afterwards be promoted to primary.

See also [storage consistency](https://crate.io/docs/crate/reference/en/4.3/concepts/storage-consistency.html),
[resiliency concepts](https://crate.io/docs/crate/reference/en/4.3/concepts/resiliency.html)
and the [resiliency appendix](https://crate.io/docs/crate/reference/en/4.3/appendices/resiliency.html).

## Bug reporting

Bugs should be reported in the form of
2 changes: 0 additions & 2 deletions setup_dev_env.sh
@@ -26,8 +26,6 @@ export MONGO_HOST=$LH
export QL_HOST=$LH
export CRATE_HOST=$LH
export POSTGRES_HOST=$LH
export INFLUX_HOST=$LH
export RETHINK_HOST=$LH

export REDIS_HOST=$LH

2 changes: 1 addition & 1 deletion src/reporter/tests/test_1T1E1A.py
@@ -2,7 +2,7 @@
from datetime import datetime
from exceptions.exceptions import AmbiguousNGSIIdError
from reporter.tests.utils import insert_test_data, delete_test_data
from utils.common import assert_equal_time_index_arrays
from utils.tests.common import assert_equal_time_index_arrays
import dateutil.parser
import pytest
import requests
2 changes: 1 addition & 1 deletion src/reporter/tests/test_1T1ENA.py
@@ -2,7 +2,7 @@
from reporter.tests.utils import insert_test_data, delete_test_data
import pytest
import requests
from utils.common import assert_equal_time_index_arrays
from utils.tests.common import assert_equal_time_index_arrays
import dateutil.parser

entity_type = 'Room'
2 changes: 1 addition & 1 deletion src/reporter/tests/test_1TNE1A.py
@@ -1,7 +1,7 @@
from conftest import QL_URL
from datetime import datetime, timezone
from reporter.tests.utils import insert_test_data, delete_test_data
from utils.common import assert_equal_time_index_arrays
from utils.tests.common import assert_equal_time_index_arrays
import pytest
import requests
import dateutil.parser
2 changes: 1 addition & 1 deletion src/reporter/tests/test_1TNENA.py
@@ -1,7 +1,7 @@
from conftest import QL_URL
from datetime import datetime
from reporter.tests.utils import insert_test_data, delete_test_data
from utils.common import assert_equal_time_index_arrays
from utils.tests.common import assert_equal_time_index_arrays
import pytest
import requests
import dateutil.parser
2 changes: 1 addition & 1 deletion src/reporter/tests/test_Headers.py
@@ -1,6 +1,6 @@
from datetime import datetime
from conftest import QL_URL
from utils.common import assert_equal_time_index_arrays
from utils.tests.common import assert_equal_time_index_arrays
from reporter.tests.utils import delete_entity_type
import copy
import json
2 changes: 1 addition & 1 deletion src/reporter/tests/test_notify.py
@@ -1,6 +1,6 @@
from datetime import datetime, timezone
from conftest import QL_URL
from utils.common import assert_equal_time_index_arrays
from utils.tests.common import assert_equal_time_index_arrays
from reporter.tests.utils import delete_entity_type
import copy
import json
14 changes: 5 additions & 9 deletions src/tests/common.py
@@ -61,12 +61,8 @@ def check_orion_url():
assert res.ok, "{} not accessible. {}".format(ORION_URL, res.text)


def check_ql_url(is_old_ql_image=False):
if is_old_ql_image:
version_url = f"{QL_URL}/v2/version"
else:
version_url = f"{QL_URL}/version"
res = requests.get(version_url)
def check_ql_url():
res = requests.get("{}/version".format(QL_URL))
assert res.ok, "{} not accessible. {}".format(QL_URL, res.text)


@@ -97,9 +93,9 @@ def post_orion_subscriptions(entities):
"{}".format(r.text)


def load_data(is_old_ql_image=False):
def load_data():
check_orion_url()
check_ql_url(is_old_ql_image)
check_ql_url()

entities = create_entities()

@@ -211,4 +207,4 @@ def check_deleted_data(entities):


if __name__ == '__main__':
load_data(is_old_ql_image=True)
load_data()
2 changes: 1 addition & 1 deletion src/tests/test_bc.py
@@ -1,4 +1,4 @@
from src.tests.common import check_data, unload_data, create_entities, \
from tests.common import check_data, unload_data, create_entities, \
check_deleted_data


2 changes: 1 addition & 1 deletion src/tests/test_integration.py
@@ -1,4 +1,4 @@
from src.tests.common import load_data, check_data, unload_data, \
from tests.common import load_data, check_data, unload_data, \
check_deleted_data


2 changes: 1 addition & 1 deletion src/translators/benchmark.py
@@ -14,7 +14,7 @@


def benchmark(translator, num_types=10, num_ids_per_type=10, num_updates=10, use_time=False, use_geo=False):
from utils.common import create_random_entities, pick_random_entity_id
from utils.tests.common import create_random_entities, pick_random_entity_id

results = {}
entities = create_random_entities(num_types, num_ids_per_type, num_updates, use_time=use_time, use_geo=use_geo)
19 changes: 0 additions & 19 deletions src/translators/conftest.py

This file was deleted.

