Since relying on IAM alone only works with solutions like MinIO, we can instead intercept S3 reads ourselves. For example, if our IAM policy allows reading all objects in a bucket (path), but object paths are prefixed with the tenant ID, we can intercept reads to that S3 bucket and ensure that only valid files are read (otherwise cancel the query).
DuckDB has an inherent issue here: we'd only be able to allow S3 querying against a single bucket at a time, unlike ClickHouse, which can use the S3 table engine to read from multiple private buckets in one query. For example, a tenant would not be able to join against data in their own private S3 bucket.
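As a minimal sketch of the interception step, the gateway could scan the query text for S3 URIs and cancel the query unless every path sits under the tenant's prefix. The bucket name and function are hypothetical; a production version would hook into DuckDB's planner or proxy the S3 endpoint rather than regex-scan SQL:

```python
import re

# Hypothetical shared bucket; object paths are assumed to be
# s3://analytics/<tenant_id>/... per the prefix convention above.
BUCKET = "analytics"

def validate_query_paths(sql: str, tenant_id: str) -> bool:
    """Return True only if every S3 object the query references lives
    under the tenant's own prefix; otherwise the query is cancelled."""
    allowed = f"s3://{BUCKET}/{tenant_id}/"
    # Pull every s3:// URI out of the query text.
    paths = re.findall(r"s3://[^\s'\")]+", sql)
    # all() over an empty list is True: queries touching no S3 paths pass.
    return all(p.startswith(allowed) for p in paths)

sql = "SELECT * FROM read_parquet('s3://analytics/tenant42/events.parquet')"
validate_query_paths(sql, "tenant42")  # True  -> run the query
validate_query_paths(sql, "tenant7")   # False -> cancel the query
```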
Because of the above, we could also achieve security by injecting AWS credentials into the final query, rather than keeping them in the session, so that `read_parquet` can't be abused.
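One hedged sketch of that injection: build the final query text by prepending per-query credential settings (DuckDB's httpfs extension exposes `s3_access_key_id` and `s3_secret_access_key` as settings), so the credentials travel with that one statement batch instead of living in shared session state. The helper name is hypothetical:

```python
def inject_credentials(sql: str, key_id: str, secret: str) -> str:
    """Prepend per-query S3 credential settings to the tenant's SQL so
    credentials exist only for this execution, not in the session."""
    # Guard against breaking out of the SET string literals.
    assert "'" not in key_id and "'" not in secret
    return (
        f"SET s3_access_key_id = '{key_id}';\n"
        f"SET s3_secret_access_key = '{secret}';\n"
        f"{sql}"
    )

final_sql = inject_credentials(
    "SELECT * FROM read_parquet('s3://analytics/tenant42/events.parquet');",
    "AKIAEXAMPLE",
    "examplesecretkey",
)
```

Running `final_sql` on a fresh connection per request (and discarding the connection afterwards) keeps scoped credentials out of any long-lived session.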