
Catalog API #867

Closed

dantengsky opened this issue Jun 22, 2021 · 14 comments
Labels
C-feature Category: feature

Comments

@dantengsky
Member

dantengsky commented Jun 22, 2021

Summary

Currently the DataSource API provides the essential functionality for manipulating metadata, but as we move toward v0.5, some extra requirements should be fulfilled, including but not limited to:

  • periodically sync metadata with (from) the meta-store
  • metadata of databases and tables should be versioned, as we are going to provide time-travel features
  • expose a session-level catalog API so that we can keep uncommitted metadata invisible
  • provide a simple transaction API for per-statement, implicit transactions (a rough sketch follows below)
    Note: In v0.5 we are not going to support distributed/parallel insertion/deletion
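To make these requirements concrete, here is a rough sketch of what such a session-level catalog API could look like. All names and signatures are hypothetical, not the actual DataFuse code:

```rust
// Hypothetical sketch only -- names and signatures are illustrative,
// not the actual DataFuse code.
use std::sync::Arc;

/// Monotonically increasing metadata version, used for time travel.
pub type MetaVersion = u64;

/// Placeholder for the real schema type.
pub struct Schema;

pub struct TableMeta {
    pub name: String,
    pub schema: Arc<Schema>,
    pub version: MetaVersion,
}

pub type CatalogResult<T> = Result<T, String>;

/// Session-level view of the catalog: metadata written inside this
/// session's implicit (per-statement) transaction stays invisible to
/// other sessions until it is committed.
pub trait Catalog {
    /// Resolve a table; `version = None` means "latest committed".
    fn get_table(
        &self,
        db: &str,
        table: &str,
        version: Option<MetaVersion>,
    ) -> CatalogResult<Arc<TableMeta>>;

    /// Start the implicit transaction for a single statement.
    fn begin(&self) -> CatalogResult<Box<dyn CatalogTxn>>;
}

pub trait CatalogTxn {
    fn create_table(&mut self, db: &str, meta: TableMeta) -> CatalogResult<()>;

    /// Commit against the base version captured at `begin`;
    /// fails if another session has committed a conflicting change.
    fn commit(self: Box<Self>) -> CatalogResult<MetaVersion>;
}
```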
@dantengsky dantengsky added the C-feature Category: feature label Jun 22, 2021
@BohuTANG BohuTANG added this to the v0.5 milestone Jun 22, 2021
@drmingdrmer
Member

drmingdrmer commented Jun 22, 2021

  • metadata of databases and tables should be versioned, as we are going to provide time-travel features

Is a table's version history a singly linked list, or could it be a DAG?

v0 <- v1 <- v2 ...

or

 .--- v2 <----+- v3
|      |       |
v      v       |
v0 <- v1 <----'

@BohuTANG
Member

I think it's a single chain, like Git.

@leiysky
Collaborator

leiysky commented Jun 22, 2021

  • periodically sync metadata with (from) the meta-store

How does DDL work?

@drmingdrmer
Member

I think it's a single chain, like Git.

You're right. It doesn't need a DAG history in any case. I was overthinking it.

@dantengsky
Member Author

  • metadata of databases and tables should be versioned, as we are going to provide time-travel features

Is a table's version history a singly linked list, or could it be a DAG?

v0 <- v1 <- v2 ...

or

 .--- v2 <----+- v3
|      |       |
v      v       |
v0 <- v1 <----'

"v0 <- v1 <- v2 " is enought for time travel.

 .--- v2 <----+- v3
|      |       |
v      v       |
v0 <- v1 <----'

The above figure may describe:

  • both v2 and v1 concurrently commit on the base of v0, and v1 wins. v2 successfully resolves conflicts and commits later.
  • both v3 and v2 concurrently commit on the base of v1, and v2 wins. v3 successfully resolves conflicts and commits later.

The transaction operation log will contain an entry recording the base version (I might have forgotten this in the last discussion though ;-), but that's for post-audit/diagnostics; for time travel, a linear history is enough, I think.
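To illustrate the linear-history point (the types here are made up for the sketch): each committed version only needs to point to its predecessor, while the base version a transaction started from lives in the operation log purely for audit:

```rust
// Illustrative sketch only; names are hypothetical.

/// One committed metadata version in a strictly linear chain:
/// v0 <- v1 <- v2 ... which is all that time travel needs.
pub struct VersionEntry {
    pub version: u64,
    /// Predecessor in the linear chain (None for v0).
    pub prev: Option<u64>,
}

/// One entry in the transaction operation log, kept for
/// post-audit/diagnostics rather than for time travel.
pub struct TxnLogEntry {
    /// The version the transaction was based on when it started
    /// (e.g. v2 was based on v0, v3 was based on v1 in the figure above).
    pub base_version: u64,
    /// The version it produced after resolving any conflicts.
    pub committed_version: u64,
}
```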

@dantengsky
Member Author

  • periodically sync metadata with (from) the meta-store

How does DDL work?

Roughly speaking (sketch below):

  • DDL interpreters call the catalog API
  • the catalog API calls the metastore (via Flight service RPC)
  • the catalog syncs with the metastore
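A minimal sketch of that call path, with purely hypothetical names (CreateTablePlan, MetaStoreCatalog, and MetaFlightClient are not the real code):

```rust
// Illustrative only: these names do not exist in the codebase; the
// sketch just shows the path a DDL statement takes.

pub struct CreateTablePlan {
    pub db: String,
    pub table: String,
}

/// Hypothetical Flight RPC client for the meta-store.
pub struct MetaFlightClient;

impl MetaFlightClient {
    /// Sends the DDL to the meta-store over Flight RPC (stubbed here).
    pub fn create_table(&self, _db: &str, _table: &str) -> Result<u64, String> {
        Ok(1) // pretend the meta-store returns the new metadata version
    }
}

/// A catalog backed by the meta-store.
pub struct MetaStoreCatalog {
    flight_client: MetaFlightClient,
}

impl MetaStoreCatalog {
    /// 2. The catalog API calls the meta-store via the Flight client;
    /// 3. it then refreshes its local view with the returned version
    ///    (the periodic sync covers changes made by other nodes).
    pub fn create_table(&self, db: &str, table: &str) -> Result<u64, String> {
        let version = self.flight_client.create_table(db, table)?;
        // a real implementation would update the cached metadata here
        Ok(version)
    }
}

/// 1. The DDL interpreter only talks to the catalog API.
pub fn execute_create_table(
    catalog: &MetaStoreCatalog,
    plan: &CreateTablePlan,
) -> Result<u64, String> {
    catalog.create_table(&plan.db, &plan.table)
}
```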

Any specific concerns or suggestions (sincerely welcome)?

@leiysky
Collaborator

leiysky commented Aug 8, 2021

@dantengsky

I've just realized something, and I would appreciate it if you could give some feedback:

  1. Do we distinguish between immutable and mutable tables (or partitions)? It would be simple to do some optimizations on a static dataset, e.g. accurate statistics and an efficient cache policy.

  2. Shall we have a more fine-grained item classification, e.g. supporting Field and Function items, so that we can implement fine-grained privilege management and security assurance?

@dantengsky
Member Author

dantengsky commented Aug 8, 2021

@dantengsky

I've just realized something, and I would appreciate it if you could give some feedback:

It's a pleasure to discuss this with you (and all the kind people who care about DataFuse).

  1. Do we distinguish between immutable and mutable tables (or partitions)? It would be simple to do some optimizations on a static dataset, e.g. accurate statistics and an efficient cache policy.

Each version of a given table is immutable, including its schema, partitions, and other metadata. When the computation layer accesses a table without specifying a version, the latest version of the table is returned (see the small sketch below).

There might be some exceptions, like re-organizing a table (background merging); these parts are not fully settled, and we need some deeper thought here.

I agree with you that immutability will benefit the cache policy and statistics (at an affordable cost, if the statistics index we are trying to maintain is not "heavy").
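A tiny, hypothetical illustration of that resolution rule (not the real implementation):

```rust
// Hypothetical: how a lookup might resolve "no version given" to the
// latest committed snapshot. Each snapshot is an immutable value.
use std::collections::BTreeMap;

pub struct TableSnapshot {
    pub version: u64,
    pub schema_json: String, // stands in for schema/partitions/other metadata
}

pub fn resolve(
    snapshots: &BTreeMap<u64, TableSnapshot>, // keyed by version, ordered
    requested: Option<u64>,
) -> Option<&TableSnapshot> {
    match requested {
        Some(v) => snapshots.get(&v),           // time travel to a fixed version
        None => snapshots.values().next_back(), // default to the latest version
    }
}
```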

  2. Shall we have a more fine-grained item classification, e.g. supporting Field and Function items, so that we can implement fine-grained privilege management and security assurance?

I think we should have them eventually; at the very least, the ANSI information_schema needs them.
I am not sure whether we should provide "field/column"-level access control, but if we can cover that without sacrificing too much, it will be great.

As always, suggestions are sincerely welcome.

@jyizheng
Contributor

jyizheng commented Aug 8, 2021

What file format and table format do we want to support?
Parquet is a popular file format, and cost-based optimization may rely on it.
Is Iceberg the table format we want to support?

@leiysky
Collaborator

leiysky commented Aug 8, 2021

Thanks for the reply.

Each version of a given table is immutable, including its schema, partitions, and other metadata. When the computation layer accesses a table without specifying a version, the latest version of the table is returned.

Correct me if I'm wrong: each write operation, like inserting a single row with INSERT INTO t VALUES(...), will generate a unique version.

This scheme would bring serious overhead; that is, users would have to batch the records and ingest them with a bulk load.

How do you handle this case?

@leiysky
Collaborator

leiysky commented Aug 8, 2021

What file format and table format do we want to support?
Parquet is a popular file format, and cost-based optimization may rely on it.
Is Iceberg the table format we want to support?

Besides that, I have the same question.

Maybe we can have our own unified columnar format?

As https://www.firebolt.io/performance-at-scale describes, they transform external data (e.g. JSON, Parquet, Avro, CSV) into their own data format, F3, through an ETL pipeline.

@jyizheng
Contributor

jyizheng commented Aug 8, 2021

@leiysky I agree and have thought about this too. Maybe it is a good idea to borrow some ideas from ClickHouse to define our own file format. My impression is that ClickHouse is very good at filtering data so that queries can be fast. A Parquet file has some stats or summaries for the data it stores. We may want to extend a Parquet file's stats or summaries with the ClickHouse tricks.

@leiysky
Collaborator

leiysky commented Aug 8, 2021

@leiysky I agree and have thought about this too. Maybe it is a good idea to borrow some ideas from ClickHouse to define our own file format. My impression is that ClickHouse is very good at filtering data so that queries can be fast. A Parquet file has some stats or summaries for the data it stores. We may want to extend a Parquet file's stats or summaries with the ClickHouse tricks.

Yes, I think it's mainly because of the sparse index, which can help skip some data units (depending on the data distribution, the effect can be surprising) using min/max, bloom filters, or the like. It's convenient for us to build different indexes on a data file, even on demand, on the fly.

And AFAIK, ORC does have a sparse index in its data format, but I haven't done any benchmarks on it, so just FYI.

Anyway, we can choose a mainstream data format for now to bootstrap fuse-store, but eventually there should be a unified data format designed by ourselves.
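For concreteness, a tiny sketch of min/max pruning with such a sparse index (the structures are hypothetical):

```rust
// Hypothetical structures, only to illustrate min/max data skipping.

/// Per-block statistics kept in a sparse index.
pub struct BlockStats {
    pub block_id: u64,
    pub min: i64,
    pub max: i64,
}

/// Keep only the blocks whose [min, max] range could contain rows
/// matching `col = value`; everything else is skipped without ever
/// reading the underlying data file.
pub fn prune_blocks(stats: &[BlockStats], value: i64) -> Vec<u64> {
    stats
        .iter()
        .filter(|s| s.min <= value && value <= s.max)
        .map(|s| s.block_id)
        .collect()
}
```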

@dantengsky
Member Author

ClickHouse's MergeTree engine, and the way it organizes the index, is definitely a valuable reference, but IMHO I am afraid it is not "cloud-native" friendly; @zhang2014 @sundy-li may have more insightful comments on this.

We are considering using something like Iceberg's table format, which is more "batch write" friendly, as @leiysky mentioned; i.e. if small parts keep being ingested without batching, something like ClickHouse's "Too many parts.." error may happen (and the metadata grows with each small ingestion, which burdens the meta layer, as Iceberg/Uber mentioned).

Totally agree with you that the statistics of Parquet and ORC are a kind of sparse index.

But I am afraid the statistics of Parquet or ORC cannot be used directly, since the data files are supposed to be stored in object storage, which is not that efficient to access; the sparse index should be maintained by the meta service and, ideally, could be accessed by the computation layers as a normal data source (relation), so that we can do computations over the sparse index in parallel (or even in a distributed fashion, but that might be too much). We are also considering utilizing some fancy data structures to accelerate queries, like BloomRF, etc.

May I transfer this issue to a discussion? DataFuse is young, and constructive discussions like this are preferred.

@datafuselabs datafuselabs locked and limited conversation to collaborators Aug 8, 2021

This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →
