Sort of. Please refer to the documentation on updating data for more information.
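As a minimal sketch (assuming a table like `temperatures` with `ts` as its time index), an update is typically performed by writing a new row whose time index and tag values match an existing row; whether the new field values overwrite the old ones depends on the table's merge behavior, so treat this as illustrative and check the update documentation for details:

```sql
-- Hypothetical example: the second insert carries the same time index
-- value as the first, so it replaces the earlier field value
-- (upsert semantics under the default merge behavior).
INSERT INTO temperatures (ts, temperature) VALUES ('2024-01-01 00:00:00', 20);
INSERT INTO temperatures (ts, temperature) VALUES ('2024-01-01 00:00:00', 25);
```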
Yes, it does. Please refer to the documentation on deleting data for more information.
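A hedged sketch (the table name and timestamp are placeholders for illustration): rows are removed with a standard `DELETE` statement, and filtering on the time index (and any tag columns) is the most efficient form:

```sql
-- Hypothetical example: delete the row matching this time index value.
DELETE FROM temperatures WHERE ts = '2024-01-01 00:00:00';
```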
Of course, you can set TTL for every table when creating it:
```sql
CREATE TABLE IF NOT EXISTS temperatures (
  ts TIMESTAMP TIME INDEX,
  temperature DOUBLE DEFAULT 10
) ENGINE=mito WITH (ttl='7d');
```
This sets the TTL of temperatures to seven days. You can refer to the ttl option of the CREATE TABLE statement here.
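To double-check that the option took effect, you can inspect the table definition (the exact output shape depends on your GreptimeDB version):

```sql
-- The ttl option should appear in the WITH(...) clause of the output.
SHOW CREATE TABLE temperatures;
```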
The answer is: it depends. GreptimeDB uses a columnar storage layout and compresses time-series data with best-in-class algorithms, selecting the most suitable compression algorithm based on each column's statistics and distribution. GreptimeDB will also provide rollups, which compress data more compactly at the cost of accuracy.
Therefore, GreptimeDB's compression ratio may range from 2x to several hundred times, depending on the characteristics of your data and whether you can tolerate accuracy loss.
GreptimeDB resolves this issue by:
- Sharding: It distributes the data and indexes across different region servers. Read the architecture of GreptimeDB.
- Smart indexing: It doesn't create an inverted index for every tag unconditionally, but chooses a proper index type based on the tag column's statistics and the query workload. Find more explanation in this blog.
- MPP: Besides the indexing capability, the query engine uses vectorized execution to run queries in parallel and in a distributed manner.
It doesn't yet, but we have a new project, GreptimeFlow, for it; please refer to the tracking issue.
Yes. GreptimeDB's data access layer is built on OpenDAL, which supports most kinds of object storage services. The data can be stored in cost-effective cloud storage services such as AWS S3 or Azure Blob Storage; please refer to the storage configuration guide here.
GreptimeDB also offers GreptimeCloud, a fully managed cloud service, to help you manage data in the cloud.
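As an illustrative sketch of an S3-backed setup (the section and key names below are assumptions based on a typical configuration; all values are placeholders, and the storage configuration guide is the authoritative reference):

```toml
# Hypothetical [storage] section of a GreptimeDB configuration file.
[storage]
type = "S3"
bucket = "my-greptimedb-bucket"     # placeholder bucket name
root = "data"                       # key prefix inside the bucket
access_key_id = "<access-key-id>"
secret_access_key = "<secret-access-key>"
region = "us-west-2"                # placeholder region
```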