JDBC: Handle schema evolution #2714
Conversation
Force-pushed 243baf5 to 3b4e033
Thanks @singhpk234 for working on it. This is critical for a smooth Polaris upgrade! A must-have for 1.2. I will take a look soon.
Resolved review thread (outdated): ...onal-jdbc/src/main/java/org/apache/polaris/persistence/relational/jdbc/models/Converter.java
Resolved review thread (outdated): ...al-jdbc/src/main/java/org/apache/polaris/persistence/relational/jdbc/models/ModelEntity.java
Resolved review thread: ...rsistence/relational/jdbc/AtomicMetastoreManagerWithJdbcBasePersistenceImplV1SchemaTest.java
FeaturesConfiguration featureConfiguration) {
    var optimizedSiblingCheck = FeatureConfiguration.OPTIMIZED_SIBLING_CHECK;
    var errors = new ArrayList<Error>();
    if (Boolean.parseBoolean(featureConfiguration.defaults().get(optimizedSiblingCheck.key()))) {
It's fine to enable this when the schema version is greater than or equal to 2. In that case, we might need an input for the schema version, which I'm not sure is possible.
> It's fine to enable this when the schema version is greater than or equal to 2.

My understanding is that even when the schema version is 2, we don't know whether the table was migrated from v1 to v2?

> which I'm not sure is possible

We would not have a handle on the schema version here.
Any module can produce a ProductionReadinessCheck, I believe. If we did it in the JDBC persistence, would it be easier to access schema details?
I'd suggest checking for some values in the new columns and assuming the values are correct if present. It should catch the most common upgrade scenarios and also allow backfilling the values manually. WDYT?
> If we did it in the JDBC persistence, would it be easier to access schema details?

ACK

> I'd suggest checking for some values in the new columns and assuming the values are correct if present

This would still not guarantee that the backfill was done correctly. For example, a user went from v1 to v2 by adding the column to the entities table; when a table is created after that, we populate location_without_scheme, but for a subsequent table created under the namespace we still don't have enough information to tell whether all tables in the namespace have the value populated. Hence I'm cutting this config entirely. If it's helpful, I can add a config to bypass this readiness check?
> Any module can produce a ProductionReadinessCheck

That's cool! Yes, we still have to check the schema version; purely checking the column won't work. And we could make it simpler by relying just on the schema versions. In that case, the schema upgrade should include backfilling data if needed. If users just add the column without updating the schema version, sorry, we don't recommend this feature for them.
Before we have a migration tool, I think it's OK to block the feature universally, as it is risky to rely on it.
Re: guarantees: Polaris cannot guarantee that user actions are correct in any case. If a user touches the database, the user has to assume responsibility for data correctness.
Polaris should strive to help users detect and avoid common pitfalls (e.g. updating the schema without backfilling data). That we can do via the "production readiness check", I think.
> Polaris should strive to help users detect and avoid common pitfalls (e.g. updating the schema without backfilling data). That we can do via the "production readiness check", I think.

For this it's even trickier: let's say we bootstrap a new realm when we were at 1.0; we will automatically go to schema 2, and we don't have realm-specific keys here.
Hence this is a bit tricky IMHO; I think the check you suggested later might be the best way out.
flyrain left a comment:
Thanks a lot for working on it, @singhpk234 ! LGTM overall. Left some comments.
Force-pushed f6bf7e7 to 7813616
dimas-b left a comment:
Thanks for taking care of schema issues, @singhpk234 ! I had only a brief look. Will review more tomorrow. Overall LGTM 👍
if (Boolean.parseBoolean(featureConfiguration.defaults().get(optimizedSiblingCheck.key()))) {
      errors.add(
          Error.ofSevere(
              "This setting is not recommended for production environments as it may lead to incorrect behavior, due to missing data for location_without_scheme column in case of migrating from older Polaris versions."
Can we detect upgrades and only flag this error in upgraded environments?
I was thinking about it, but didn't find a way to detect the upgrade here, as it may happen on a deployment bootstrapped with the v1 schema and then upgraded to the v2 schema by adding the column. In that case the schema will contain v2 only, even though it's a migrated case; hence disabling it.
It looks like refining this check in upgrade situations requires some more thinking.
I propose to keep the "severe" error, but add another dedicated option for users to acknowledge the risks involved in OPTIMIZED_SIBLING_CHECK... kind of similar to ALLOW_INSECURE_STORAGE_TYPES.
I still believe this check belongs with the JDBC module since the risks come from there (EclipseLink is deprecated already).
> check belongs with the JDBC module since the risks come from there

I tried this, but the FeaturesConfiguration class is in the service runtime module, and JdbcProdReadiness is in the persistence module, which the service runtime module takes a dependency on; hence I couldn't add it there. Am I missing something?
Good point. Let's deal with this later, though.
Resolved review thread (outdated): persistence/relational-jdbc/src/main/resources/h2/schema-v0.sql
Resolved review thread (outdated): ...al-jdbc/src/main/java/org/apache/polaris/persistence/relational/jdbc/models/ModelEntity.java
public class AtomicMetastoreManagerWithJdbcBasePersistenceImplV1SchemaTest
    extends AtomicMetastoreManagerWithJdbcBasePersistenceImplTest {
  @Override
  public int schemaVersion() {
(optional) this may be more convenient to implement as @Nested tests under the JUnit5 framework (less top-level classes).
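For illustration, a minimal sketch of the @Nested arrangement this comment suggests; the class and method names here are hypothetical stand-ins, not the actual Polaris test classes:

```java
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;

class JdbcBasePersistenceSchemaTest {

  // Shared test body; concrete nested classes pick the schema version.
  abstract class SchemaVersionCases {
    abstract int schemaVersion();

    @Test
    void roundTripsEntities() {
      // exercise the persistence layer against schemaVersion() here
    }
  }

  @Nested
  class V1Schema extends SchemaVersionCases {
    @Override
    int schemaVersion() {
      return 1;
    }
  }

  @Nested
  class V2Schema extends SchemaVersionCases {
    @Override
    int schemaVersion() {
      return 2;
    }
  }
}
```

JUnit 5 runs the inherited @Test methods once per @Nested class, so both schema versions are covered without extra top-level classes.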
Resolved review thread (outdated): persistence/relational-jdbc/src/main/resources/h2/schema-v0.sql
dimas-b left a comment:
JDBC Persistence changes LGTM. I only have concerns with the "readiness" checks.
I think "severe" is too strong for this check in its current form. It will cause a startup failure and will require a global suppression flag, which will suppress all other possible severe issues.
I propose to make a JDBC-specific check in RelationalJdbcProductionReadinessChecks.
- On startup (when the check is called)
- For each realm where `OPTIMIZED_SIBLING_CHECK` is enabled:
  - Run a `SELECT` for all namespaces, table-like and view-like entities with `location_without_scheme` being `NULL` (limit 1)
  - If found, produce a severe error.

Since this `SELECT` is not covered by an index, it may be expensive. Add a feature flag to disable it (for users who know what's involved). WDYT?
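A rough sketch, under plain JDBC, of the probe proposed above. The ENTITIES table, the realm_id column, and the LIMIT 1 syntax are assumptions based on the discussion, and the entity-type filter (namespaces, table-like, view-like) is omitted for brevity:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

final class SiblingCheckProbe {

  /** Returns true if some entity in the realm is missing its backfilled value. */
  static boolean hasUnbackfilledEntities(Connection connection, String realmId)
      throws SQLException {
    // LIMIT 1 short-circuits the scan in the common (fully backfilled) case.
    String sql =
        "SELECT 1 FROM ENTITIES WHERE realm_id = ? "
            + "AND location_without_scheme IS NULL LIMIT 1";
    try (PreparedStatement stmt = connection.prepareStatement(sql)) {
      stmt.setString(1, realmId);
      try (ResultSet rs = stmt.executeQuery()) {
        // Any matching row means the backfill is incomplete: flag a severe error.
        return rs.next();
      }
    }
  }
}
```

The feature flag mentioned above would simply skip calling this probe for users who accept the risk of an unindexed scan.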
I'd be ok with deferring the "readiness" check to another PR, by the way.
dimas-b left a comment:
All Java changes LGTM. I still have a concern with the error message and readiness checks.
I propose to use a two-flag approach for now (prompt users to do due diligence with data migration) and improve error detection / move the check to the JDBC module later (after 1.2.0).
I'd prefer to avoid using the term "production". Correctness issues apply to all environments.
How about: "This setting should be used with care and only enabled in new realms. Enabling it in previously used realms may lead to incorrect behavior, due to missing data for the location_without_scheme column. Set the ALLOW_OPTIMIZED_SIBLING_CHECK flag to acknowledge this warning and enable Polaris to start."
Then, I think we could add another feature flag ALLOW_OPTIMIZED_SIBLING_CHECK as a user-level safety. If ALLOW_OPTIMIZED_SIBLING_CHECK is true, we do not flag this as an error.
WDYT?
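A minimal sketch of that two-flag gating, assuming the proposed ALLOW_OPTIMIZED_SIBLING_CHECK acknowledgement flag; the Error record is a stand-in for the Polaris readiness-check error type, and the real FeaturesConfiguration wiring is simplified to a plain map:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

final class SiblingCheckReadiness {

  // Stand-in for the real readiness-check Error type.
  record Error(String message) {
    static Error ofSevere(String message) {
      return new Error(message);
    }
  }

  static List<Error> check(Map<String, String> featureDefaults) {
    var errors = new ArrayList<Error>();
    boolean optimizedEnabled =
        Boolean.parseBoolean(featureDefaults.get("OPTIMIZED_SIBLING_CHECK"));
    boolean riskAcknowledged =
        Boolean.parseBoolean(featureDefaults.get("ALLOW_OPTIMIZED_SIBLING_CHECK"));
    // Only flag the severe error when the risky flag is on AND unacknowledged,
    // so users who did their data-migration due diligence can still start up.
    if (optimizedEnabled && !riskAcknowledged) {
      errors.add(
          Error.ofSevere(
              "OPTIMIZED_SIBLING_CHECK may misbehave in realms migrated from older "
                  + "schemas; set ALLOW_OPTIMIZED_SIBLING_CHECK to acknowledge this."));
    }
    return errors;
  }
}
```

This mirrors the ALLOW_INSECURE_STORAGE_TYPES pattern mentioned earlier: a targeted acknowledgement instead of a global suppression of all severe issues.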
ACK! Sounds reasonable to me, let me push the changes :)
Force-pushed 233e9b1 to 4d3733b
Resolved review thread (outdated): persistence/relational-jdbc/src/main/resources/h2/schema-v0.sql
dimas-b left a comment:
LGTM 👍 Thanks for your work on this, @singhpk234 !
Thank you so much for your reviews, everyone! Really appreciate your inputs :)
* Build: remove code to post-process generated Quarkus jars (apache#2667)
  Before Quarkus 3.28, the Quarkus generated jars used the "current" timestamp for all ZIP entries, which made the jars not reproducible. Since Quarkus 3.28, the generated jars use a fixed timestamp for all ZIP entries, so the custom code is no longer necessary. This PR depends on Quarkus 3.28.
* Update docker.io/jaegertracing/all-in-one Docker tag to v1.74.0 (apache#2751)
* Updating metastore documentation with Aurora postgres example (apache#2706)
  * added Aurora postgres to metastore documentation
* Service: Add events for APIs awaiting API changes (apache#2712)
* fix(enhancement): add .idea, .vscode, .venv to top level .gitignore (apache#2718)
* Fix javadocs of `PolarisPrincipal.getPrincipalRoles()` (apache#2752)
* fix(enhancement): squash commits (apache#2643)
* fix(deps): update dependency io.smallrye.config:smallrye-config-core to v3.14.1 (apache#2755)
* Extract interface for RequestIdGenerator (apache#2720)
  Summary of changes:
  1. Extracted an interface from `RequestIdGenerator`.
  2. The `generateRequestId` method now returns a `Uni<String>` in case custom implementations need to perform I/O or other blocking calls during request ID generation.
  3. Also addressed comments in apache#2602.
* JDBC: Handle schema evolution (apache#2714)
* Deprecate legacy management endpoints for removal (apache#2749)
  * Deprecate LegacyManagementEndpoints for removal
* Add PolarisResolutionManifestCatalogView.getResolvedCatalogEntity helper (apache#2750)
  This centralizes some common code and simplifies some test setups.
* Enforce that S3 credentials are vended when requested (apache#2711)
  This is a follow-up change to apache#2672 striving to improve user-facing error reporting for S3 storage systems without STS.
  * Add property to `AccessConfig` to indicate whether the backing storage integration can produce credentials.
  * Add a check to `IcebergCatalogHandler` (leading to 400) that storage credentials are vended when requested and the backend is capable of vending credentials in principle.
  * Update `PolarisStorageIntegrationProviderImpl` to indicate that FILE storage does not support credential vending (requesting credential vending with FILE storage does not produce any credentials and does not flag an error, which matches current Polaris behaviour).
  * Only those S3 systems where STS is not available (or disabled / not permitted) are affected.
  * Other storage integrations are not affected by this PR.
* [Catalog Federation] Ignore JIT entities when deleting federated catalogs, add integration test for namespace/table-level RBAC (apache#2690)
  When enabling table/namespace level RBAC in a federated catalog, JIT entities will be created during privilege grant. In the short term, we should ignore them when dropping the catalog. In the long term, we will clean up those entities when deleting the catalog. This will be the first step towards JIT entity clean-up:
  1. Ignore JIT entities when dropping a federated catalog (orphan entities)
  2. Register tasks / in-place cleanup of JIT entities during catalog drop
  3. Add new functionality to PolarisMetastoreManager to support atomic deletion of unused JIT entities during revoke
  4. Global Garbage Collector to clean up unreachable entities (entities with a non-existing catalog path/parent)
* SigV4 Auth Support for Catalog Federation - Part 3: Service Identity Info Injection (apache#2523)
  This PR introduces service identity management for SigV4 Auth Support for Catalog Federation. Unlike user-supplied parameters, the service identity represents the identity of the Polaris service itself and should be managed by Polaris.
  * Service Identity Injection
  * Return injected service identity info in response
  * Use AwsCredentialsProvider to retrieve the credentials
  * Move some logic to ServiceIdentityConfiguration
  * Rename ServiceIdentityRegistry to ServiceIdentityProvider
  * Rename ResolvedServiceIdentity to ServiceIdentityCredential
  * Simplify the logic and add more tests
  * Use SecretReference and fix some small issues
  * Disable Catalog Federation
* Update actions/stale digest to 5f858e3 (apache#2758)
* Service: RealmContextFilter test refactor (apache#2747)
* Update dependency software.amazon.awssdk:bom to v2.35.0 (apache#2760)
* Update apache/spark Docker tag to v3.5.7 (apache#2727)
* Update eric-maynard Team entry (apache#2763)
  I'm no longer affiliated with Snowflake, so we should update this page accordingly.
* Refactor resolutionManifest handling in PolarisAdminService (apache#2748)
  - remove the mutable `resolutionManifest` field in favor of letting the "authorize" methods return their `PolarisResolutionManifest`
  - replace "find" helpers with "get" helpers that have built-in error handling
* Implement Finer Grained Operations and Privileges For Update Table (apache#2697)
  This implements finer grained operations and privileges for update table in a backwards compatible way as discussed on the mailing list. The idea is that all the existing privileges and operations will work and continue to work even after this change (i.e. TABLE_WRITE_PROPERTIES will still ensure update table is authorized even after these changes). However, because Polaris will now be able to identify each operation within an UpdateTable request and has a privilege model with inheritance that maps to each operation, users will now have the option of restricting permissions at a finer level if desired.
* [Python CLI][CI Failure] Pin pydantic version to < 2.12.0 to fix CI failure (apache#2770)
* Delete ServiceSecretReference (apache#2768)
* JDBC: Fix Bootstrap with schema options (apache#2762)
* Site: Add puppygraph integration (apache#2753)
* Update Changelog with finer grained authz (apache#2775)
* Add Arguments to Various Event Records (apache#2765)
* Update immutables to v2.11.5 (apache#2776)
* Client: add support for policy management (apache#2701)
  Implementation for policy management via Polaris CLI (apache#1867). Here is the subcommands-to-API mapping:
  - attach - PUT /polaris/v1/{prefix}/namespaces/{namespace}/policies/{policy-name}/mappings
  - create - POST /polaris/v1/{prefix}/namespaces/{namespace}/policies/{policy-name}/mappings
  - delete - DELETE /polaris/v1/{prefix}/namespaces/{namespace}/policies/{policy-name}
  - detach - POST /polaris/v1/{prefix}/namespaces/{namespace}/policies/{policy-name}/mappings
  - get - GET /polaris/v1/{prefix}/namespaces/{namespace}/policies/{policy-name}
  - list - GET /polaris/v1/{prefix}/namespaces/{namespace}/policies (the default for the `list` operation), or GET /polaris/v1/{prefix}/applicable-policies when the `--applicable` option is provided
  - update - PUT /polaris/v1/{prefix}/namespaces/{namespace}/policies/{policy-name}
* Update dependency com.google.cloud:google-cloud-storage-bom to v2.58.1 (apache#2764)
* Update dependency org.jboss.weld:weld-junit5 to v5.0.3.Final (apache#2777)
* Update the LICENSE and NOTICE files in the runtime (apache#2779)
* SigV4 Auth Support for Catalog Federation - Part 4: Connection Credential Manager (apache#2759)
  This PR introduces a flexible credential management system for Polaris. Building on Part 3's service identity management, this system combines Polaris service identities with user-provided authentication parameters to generate credentials for remote catalog access. The core of this PR is the new ConnectionCredentialVendor interface, which:
  - Generates connection credentials by combining the service identity with user auth parameters
  - Supports different authentication types (AWS SIGV4, AZURE Entra, GCP IAM) through CDI; currently only SigV4 is supported
  - Provides on-demand credential generation
  - Enables easy extension for new authentication types
  In the long term, we should move the storage credential management logic out of PolarisMetastoreManager; PolarisMetastoreManager should only provide persistence interfaces.
* Extract IcebergCatalog.getAccessConfig to a separate class AccessConfigProvider (apache#2736)
  This PR extracts the credential vending entrypoint getAccessConfig from IcebergCatalog into a new centralized AccessConfigProvider class, decoupling credential generation from catalog implementations. The old SupportsCredentialVending is removed in this PR upon discussion.
* Update immutables to v2.11.6 (apache#2780)
* Enhance Release docs (apache#2787)
* Spark: Remove unnecessary dependency (apache#2789)
* Update Pull Request Template (apache#2788)
* Freeze 1.2 change log (apache#2783)
* [Catalog Federation] Enable Credential Vending for Passthrough Facade Catalog (apache#2784)
  This PR introduces credential vending support for passthrough-facade catalogs. When creating a passthrough-facade catalog, the configuration currently requires two components: StorageConfig, which specifies the storage info for the remote catalog, and ConnectionInfo, which defines connection parameters for the underlying remote catalog. With this change, the StorageConfig is now also used to vend temporary credentials for user requests. Credential vending honors table-level RBAC policies to determine whether to issue read-only or read-write credentials, ensuring access control consistency with Polaris authorization semantics. A new test case validates the credential vending workflow, verifying both read and write credential vending. Note: the remote catalog referenced by the passthrough-facade does not need to support IRC.
* Site: Add docs for catalog federation (apache#2761)
* Python client: update CHANGELOG.MD for recent changes (apache#2796)
* Python client: remove Python 3.9 support (apache#2795)
* Update dependency software.amazon.awssdk:bom to v2.35.5 (apache#2799)
* FIX REG tests with cloud providers (apache#2793)
* [Catalog Federation] Block credential vending for remote tables outside allowed location list (apache#2791)
* Correct invalid example in management service OpenAPI spec (apache#2801)
  The `example` was incorrectly placed as a sibling of `$ref` within a `schema` object in `polaris-management-service.yml`. According to the OpenAPI specification, properties that are siblings of a `$ref` are ignored. This was causing a `NullPointerException` in OpenAPI Generator v7.13.0+ due to a change in how examples are processed. The generator now expects all `examples` to be valid and non-empty, and a misplaced `example` can lead to a null reference when the generator tries to access it (we are not yet using v7.13.0+, thus not a problem at the moment). This commit moves the `example` to be a sibling of the `schema` object, which is the correct placement according to the OpenAPI specification. Reference error when using a newer version of openapi-generator-cli:
  ```
  openapi-generator-cli generate -i spec/polaris-catalog-service.yaml -g python -o client/python --additional-properties=packageName=polaris.catalog --additional-properties=apiNameSuffix="" --skip-validate-spec --additional-properties=pythonVersion=3.13 --ignore-file-override /local/client/python/.openapi-generator-ignore
  ...
  Exception: Cannot invoke "io.swagger.v3.oas.models.examples.Example.getValue()" because the return value of "java.util.Map.get(Object)" is null
    at org.openapitools.codegen.DefaultGenerator.processOperation(DefaultGenerator.java:1606)
    at org.openapitools.codegen.DefaultGenerator.processPaths(DefaultGenerator.java:1474)
    at org.openapitools.codegen.DefaultGenerator.generateApis(DefaultGenerator.java:663)
    at org.openapitools.codegen.DefaultGenerator.generate(DefaultGenerator.java:1296)
    at org.openapitools.codegen.cmd.Generate.execute(Generate.java:535)
    at org.openapitools.codegen.cmd.OpenApiGeneratorCommand.run(OpenApiGeneratorCommand.java:32)
    at org.openapitools.codegen.OpenAPIGenerator.main(OpenAPIGenerator.java:66)
  Caused by: java.lang.NullPointerException: Cannot invoke "io.swagger.v3.oas.models.examples.Example.getValue()" because the return value of "java.util.Map.get(Object)" is null
    at org.openapitools.codegen.utils.ExamplesUtils.unaliasExamples(ExamplesUtils.java:75)
    at org.openapitools.codegen.DefaultCodegen.unaliasExamples(DefaultCodegen.java:2343)
    at org.openapitools.codegen.DefaultCodegen.fromResponse(DefaultCodegen.java:4934)
    at org.openapitools.codegen.DefaultCodegen.fromOperation(DefaultCodegen.java:4575)
    at org.openapitools.codegen.DefaultGenerator.processOperation(DefaultGenerator.java:1574)
    ... 6 more
  ```
* Update dependency io.opentelemetry:opentelemetry-bom to v1.55.0 (apache#2804)
* Update dependency io.micrometer:micrometer-bom to v1.15.5 (apache#2806)
* Bump version for python deps (apache#2800)
  * bump version for python deps
  * Update openapi-generator-cli from 7.11.0.post0 to 7.12.0
  * Pin poetry version
* Update dependency io.projectreactor.netty:reactor-netty-http to v1.2.11 (apache#2809)
* [Catalog Federation] Add Connection Credential Vendors for Other Auth Types (apache#2782)
  This change is a prerequisite for enabling connection credential caching. By making PolarisCredentialManager the central entry point for obtaining connection credentials, we can introduce caching cleanly and manage all credential flows in a consistent way.
* Last merged commit 6b957ec

---------

Co-authored-by: Mend Renovate <bot@renovateapp.com>
Co-authored-by: fabio-rizzo-01 <fabio.rizzocascio@jpmorgan.com>
Co-authored-by: Adnan Hemani <adnan.h@berkeley.edu>
Co-authored-by: Artur Rakhmatulin <artur.rakhmatulin@gmail.com>
Co-authored-by: Alexandre Dutra <adutra@apache.org>
Co-authored-by: Prashant Singh <35593236+singhpk234@users.noreply.github.com>
Co-authored-by: Christopher Lambert <xn137@gmx.de>
Co-authored-by: Dmitri Bourlatchkov <dmitri.bourlatchkov@gmail.com>
Co-authored-by: Honah (Jonas) J. <honahx@apache.org>
Co-authored-by: Rulin Xing <xjdkcsq3@gmail.com>
Co-authored-by: Eric Maynard <eric.maynard+oss@snowflake.com>
Co-authored-by: Travis Bowen <122238243+travis-bowen@users.noreply.github.com>
Co-authored-by: Jaz Ku <jsku@dons.usfca.edu>
Co-authored-by: Yong Zheng <yongzheng0809@gmail.com>
Co-authored-by: JB Onofré <jbonofre@apache.org>
Co-authored-by: Yufei Gu <yufei@apache.org>
About the change
Cases this PR solves:
- location_without_scheme doesn't exist (v1 tables)
- location_without_scheme has missing data
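For illustration only, a sketch of how the JDBC persistence can branch on the detected schema version so both cases above stay safe; the class, method names, and column mapping here are hypothetical, not the actual Polaris ModelEntity code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

final class EntityColumnMapper {

  static Map<String, Object> toColumns(String name, String location, int schemaVersion) {
    var columns = new LinkedHashMap<String, Object>();
    columns.put("name", name);
    columns.put("location", location);
    if (schemaVersion >= 2) {
      // Only the v2+ schema has this column; writing it against a v1 table
      // would fail with an unknown-column error.
      columns.put("location_without_scheme", stripScheme(location));
    }
    return columns;
  }

  private static String stripScheme(String location) {
    // e.g. "s3://bucket/path" -> "bucket/path"; kept deliberately simple here.
    int idx = location.indexOf("://");
    return idx < 0 ? location : location.substring(idx + 3);
  }
}
```

Reads follow the same pattern: on a v1 schema the column is simply not selected, and on v2 a NULL value signals missing backfill, which is what the readiness-check discussion above probes for.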