JDBC: Fix Bootstrap with schema options #2762
Conversation
facf804 to 339657e (force-push)
...lational-jdbc/src/main/java/org/apache/polaris/persistence/relational/jdbc/DatabaseType.java
polaris-core/src/main/java/org/apache/polaris/core/persistence/bootstrap/SchemaOptions.java
runtime/admin/src/main/java/org/apache/polaris/admintool/BootstrapCommand.java
StandardInputOptions stdinOptions;

@CommandLine.ArgGroup(exclusive = false, heading = "File Input Options:%n")
FileInputOptions fileOptions;
I'm still not sure we end up with the right delineation of options.
I think --print-credentials is conceptually applicable to all cases that involve root credentials (regardless of the source of credentials).
Also --realm is applicable to all cases. A user may want a sub-set of realms from a file or from --credential (which may come from a large env. var, etc.).
Rather than treating --credential and --credentials-file as mutually exclusive, I'd prefer the tool to merge RootCredentialsSet from all sources.
In the end all options are applicable and we default --realm to all realms within the merged RootCredentialsSet.
I think that would be nicer from the end user POV. WDYT?
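A minimal sketch of that merge, assuming RootCredentialsSet can be viewed as a realm-to-credentials map (hypothetical types; the real Polaris classes may differ), with later sources overriding earlier ones:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: each source (--credentials-file, --credential, env var)
// yields a realm -> credentials map; merge them with later sources winning.
class RootCredentialsMergeSketch {
  static Map<String, String> merge(List<Map<String, String>> sources) {
    Map<String, String> merged = new LinkedHashMap<>();
    for (Map<String, String> source : sources) {
      merged.putAll(source); // later sources override earlier ones on conflict
    }
    return merged;
  }
}
```

With this shape, --realm could then default to the merged map's key set.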
Conceptually I think you're right, though merging conflicting entries across --credential and --credentials-file is nontrivial, so I think it's okay if we want to leave that edge case out.
Agree; IMHO I'd also prefer to take this up in a follow-up. The realm handling in particular would be tricky: what if, for a realm, --credential had just the client ID and --credentials-file had the secret? Do we merge? Please let me know what you think.
follow up is ok from my POV 👍
flyrain left a comment:
Hi @singhpk234, thanks a lot for fixing it! Left some comments. Keep in mind these are my personal opinions; I'm open to discussion on these comments.
polaris-core/src/main/java/org/apache/polaris/core/persistence/bootstrap/SchemaOptions.java
...-jdbc/src/main/java/org/apache/polaris/persistence/relational/jdbc/DatasourceOperations.java
...lational-jdbc/src/main/java/org/apache/polaris/persistence/relational/jdbc/DatabaseType.java
...rc/main/java/org/apache/polaris/persistence/relational/jdbc/JdbcMetaStoreManagerFactory.java
int requestedSchemaVersion = getSchemaVersion(bootstrapOptions);
Preconditions.checkState(
    (requestedSchemaVersion == schemaVersion)
        || (schemaVersion == 0 || requestedSchemaVersion == -1),
In case schemaVersion is 0, I think we have to make sure requestedSchemaVersion is 1.
The version we want to bootstrap should be schemaVersion when requestedSchemaVersion is -1. The current logic will always go to v3. Would this be problematic?
switch (schemaOptions.schemaVersion()) {
case null -> schemaSuffix = "schema-v3.sql";
case 1 -> schemaSuffix = "schema-v1.sql";
case 2 -> schemaSuffix = "schema-v2.sql";
case 3 -> schemaSuffix = "schema-v3.sql";
Wait, what exactly are we trying to do here? If the metastore is already bootstrapped with any schema version, shouldn't bootstrapping fail? We don't support "upgrades" (e.g. type changes).
If the metastore is already bootstrapped with any schema version, shouldn't bootstrapping fail
Our SQL scripts are CREATE IF NOT EXISTS-style. So when, say, someone bootstraps a new realm while other realms are already bootstrapped with a lower version, running with another version just bumps the schema to the upgraded version without doing anything else — for example, some realms in v1 and a new realm added in v2. Hence this handling; am I missing something?
In case schemaVersion is 0, I think we have to make sure requestedSchemaVersion is 1.
schemaVersion 0 covers two cases:
- Realms bootstrapped with 1.0: they are at v1, so I guess they should stay at v1.
- No bootstrapped realms: it doesn't matter which version they get, so we can go to requestedSchemaVersion.
requestedSchemaVersion -1 covers two cases:
- Already bootstrapped realms: the schema should stay at the current schema.
- No realms bootstrapped: it doesn't matter which version they get, so we can go to the most recent.
Nice catch on the cases. I think we just need to know whether we ever ran bootstrap; if yes, then:
- schemaVersion 0 means v1
- requestedSchemaVersion -1 means the current schema
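This case analysis can be sketched as a standalone version of the resolution rules (a hypothetical sketch drawn from this thread, not the actual getRealmBootstrapSchemaVersion implementation; the error-on-mismatch behavior and the LATEST constant are assumptions):

```java
// Hypothetical sketch of the version-resolution rules discussed above.
// currentSchemaVersion == 0 means a pre-versioning (1.0) schema, i.e. v1.
// requiredSchemaVersion == -1 means "auto-detect".
class SchemaVersionResolutionSketch {
  static final int LATEST = 3;

  static int resolve(
      int currentSchemaVersion, int requiredSchemaVersion, boolean hasAlreadyBootstrappedRealms) {
    if (hasAlreadyBootstrappedRealms) {
      // A 1.0-era schema has no version row; treat the deduced 0 as v1.
      int effectiveCurrent = currentSchemaVersion == 0 ? 1 : currentSchemaVersion;
      if (requiredSchemaVersion == -1) {
        return effectiveCurrent; // stay on the schema already in place
      }
      if (requiredSchemaVersion != effectiveCurrent) {
        throw new IllegalStateException(
            "Requested schema v" + requiredSchemaVersion
                + " but existing realms are at v" + effectiveCurrent);
      }
      return effectiveCurrent;
    }
    // Truly fresh start: auto-detect defaults to the latest schema.
    return requiredSchemaVersion == -1 ? LATEST : requiredSchemaVersion;
  }
}
```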
...rc/main/java/org/apache/polaris/persistence/relational/jdbc/JdbcMetaStoreManagerFactory.java
0375836 to f531e4b (force-push)
MERGE INTO version (version_key, version_value)
KEY (version_key)
- VALUES ('version', 2);
+ VALUES ('version', 3);
spent last night chasing test failures due to this :'(
It seems like a copy-paste error. I guess we could add a test to validate it. We could probably do that in another PR.
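A lightweight guard against this kind of copy-paste slip could parse the bundled schema script and assert its version row (a hypothetical helper; the real scripts and test wiring are assumptions):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper for such a test: pull the version written by a schema
// script's "VALUES ('version', N)" row so it can be asserted against the
// schema version the script's filename claims.
class SchemaScriptVersionSketch {
  private static final Pattern VERSION_ROW =
      Pattern.compile("VALUES\\s*\\('version',\\s*(\\d+)\\)");

  static int versionIn(String schemaSql) {
    Matcher m = VERSION_ROW.matcher(schemaSql);
    if (!m.find()) {
      throw new IllegalArgumentException("no version row found in schema script");
    }
    return Integer.parseInt(m.group(1));
  }
}
```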
f531e4b to 0f9f66a (force-push)
...lational-jdbc/src/main/java/org/apache/polaris/persistence/relational/jdbc/DatabaseType.java
  default -> throw new IllegalArgumentException("Unknown schema version " + schemaVersion);
}
ClassLoader classLoader = DatasourceOperations.class.getClassLoader();
return classLoader.getResourceAsStream(this.getDisplayName() + "/" + schemaSuffix);
nit: getClass().getResource(shortName) is preferable IMHO
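For context, the two lookup styles differ mainly in how paths are resolved (an illustrative sketch; the resource paths are made up):

```java
import java.io.InputStream;

// Illustrative sketch of the two resource-lookup styles discussed in the nit.
class ResourceLookupSketch {
  InputStream viaClassLoader(String displayName, String schemaSuffix) {
    // ClassLoader lookups are always absolute (no leading slash).
    return ResourceLookupSketch.class
        .getClassLoader()
        .getResourceAsStream(displayName + "/" + schemaSuffix);
  }

  InputStream viaClass(String displayName, String schemaSuffix) {
    // Class-relative lookup: without a leading "/" the path is resolved against
    // this class's package; a leading "/" makes it absolute again.
    return getClass().getResourceAsStream("/" + displayName + "/" + schemaSuffix);
  }
}
```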
...-jdbc/src/main/java/org/apache/polaris/persistence/relational/jdbc/DatasourceOperations.java
 * @throws IllegalStateException if the combination of parameters represents an invalid state.
 */
public static int getRealmBootstrapSchemaVersion(
    int currentSchemaVersion, int requiredSchemaVersion, boolean hasAlreadyBootstrappedRealms) {
Having the hasAlreadyBootstrappedRealms parameter makes the logic in this method's body hard to follow, as it depends on external factors... Can we fold hasAlreadyBootstrappedRealms into this method?
IIUC, the feedback is to infer hasAlreadyBootstrappedRealms inside the function. That would mean passing datasourceOperations here and making an API call to infer the same, and the if/else would still remain the same; am I missing something?
The present form makes testing easy and at the same time keeps the logic that infers whether any of the realms are bootstrapped in JdbcMetaStoreManagerFactory, hence I wrote it this way. Please let me know your thoughts considering the above.
It's more about overall confusion 😅
We're trying to figure out what version of the DDL script to run when bootstrap is called.
We check the version table and the existence of some other table and the bootstrap options.
However, from my POV, the big question is whether to run the DDL at all.
If tables exist and already contain realm A in schema X, then someone bootstraps realm B in schema Y, why would we (automatically) run DDL for Y and affect realm A?
I'd think we should deduce the current schema version (X) and if X != Y error out (or require a new "upgrade" flag in SchemaOptions).
If tables exist and already contain realm A in schema X, then someone bootstraps realm B in schema Y, why would we (automatically) run DDL for Y and affect realm A?
This is because the schema table is not realm-specific :( Meaning if I had a realm at version 1 and then bootstrap with version 2, the schema version is set globally to 2.
Especially since we now have schemas (for example, from 1.0) that don't record a version at all, we deduce the value as 0, which means yes, we did bootstrap, but this 0 means v1.
I'd prefer to have hasAlreadyBootstrappedRealms as an input rather than embedded within this method. It makes the tests much easier by passing a boolean.
Please consider my previous comments non-blocking
...al-jdbc/src/main/java/org/apache/polaris/persistence/relational/jdbc/JdbcBootstrapUtils.java
...rc/main/java/org/apache/polaris/persistence/relational/jdbc/JdbcMetaStoreManagerFactory.java
flyrain left a comment:
Thanks for working on it! LGTM!
private static final Logger LOGGER = LoggerFactory.getLogger(DatasourceOperations.class);

// PG STATUS CODES
private static final String CONSTRAINT_VIOLATION_SQL_CODE = "23505";
Not a blocker: do we need to handle a different code for H2? It would be nice to do so, but it's not related to this PR.
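One hedged way to accommodate backend-specific codes would be to hang the expected SQLState off DatabaseType (a sketch only; "23505" is the standard SQLSTATE for unique_violation, and the H2 value here is an assumption that should be verified against H2's error-code documentation):

```java
import java.sql.SQLException;

// Sketch: let each DatabaseType declare its constraint-violation SQLState
// instead of a single PG-specific constant. Values are assumptions.
class ConstraintViolationSketch {
  enum DatabaseType {
    POSTGRES("23505"),
    H2("23505"); // assumed; verify against H2's actual duplicate-key SQLState

    private final String uniqueViolationState;

    DatabaseType(String uniqueViolationState) {
      this.uniqueViolationState = uniqueViolationState;
    }

    boolean isConstraintViolation(SQLException e) {
      return uniqueViolationState.equals(e.getSQLState());
    }
  }
}
```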
if (fallbackOnDoesNotExist && datasourceOperations.isRelationDoesNotExist(e)) {
  return SchemaVersion.MINIMUM.getValue();
}
LOGGER.error("Failed to load schema version due to {}", e.getMessage(), e);
Minor: Do we need this error message given that we throw right after it?
} else {
  // A truly fresh start. Default to v3 for auto-detection, otherwise use the specified
  // version.
  return requiredSchemaVersion == -1 ? 3 : requiredSchemaVersion;
LGTM, we could improve it by having a variable like latestSchemaVersion, so that we don't have to change this method every time we update the schema version.
Ack, let me add an ENUM in a follow-up pr
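Such an enum might look roughly like this (a sketch for the follow-up; the codebase already has a SchemaVersion with a MINIMUM, so the names and values below are assumptions, not the actual Polaris type):

```java
// Sketch of a schema-version enum with a LATEST alias, so resolution code and
// the script-name switch never hard-code "3". Names/values are assumptions.
enum SchemaVersionSketch {
  V1(1),
  V2(2),
  V3(3);

  static final SchemaVersionSketch MINIMUM = V1;
  static final SchemaVersionSketch LATEST = V3;

  private final int value;

  SchemaVersionSketch(int value) {
    this.value = value;
  }

  int getValue() {
    return value;
  }

  String scriptName() {
    return "schema-v" + value + ".sql";
  }
}
```

Adding a new schema version would then be a one-line change (a new constant plus bumping LATEST).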
Thank you for the reviews @flyrain @dimas-b @eric-maynard !
* Build: remove code to post-process generated Quarkus jars (apache#2667) Before Quarkus 3.28, the Quarkus generated jars used the "current" timestamp for all ZIP entries, which made the jars not-reproducible. Since Quarkus 3.28, the generated jars use a fixed timestamp for all ZIP entries, so the custom code is no longer necessary. This PR depends on Quarkus 3.28.
* Update docker.io/jaegertracing/all-in-one Docker tag to v1.74.0 (apache#2751)
* Updating metastore documentation with Aurora postgres example (apache#2706)
* Service: Add events for APIs awaiting API changes (apache#2712)
* fix(enhancement): add .idea, .vscode, .venv to top level .gitignore (apache#2718)
* Fix javadocs of `PolarisPrincipal.getPrincipalRoles()` (apache#2752)
* fix(enhancement): squash commits (apache#2643)
* fix(deps): update dependency io.smallrye.config:smallrye-config-core to v3.14.1 (apache#2755)
* Extract interface for RequestIdGenerator (apache#2720) Summary of changes: 1. Extracted an interface from `RequestIdGenerator`. 2. The `generateRequestId` method now returns a `Uni<String>` in case custom implementations need to perform I/O or other blocking calls during request ID generation. 3. Also addressed comments in apache#2602.
* JDBC: Handle schema evolution (apache#2714)
* Deprecate legacy management endpoints for removal (apache#2749)
* Add PolarisResolutionManifestCatalogView.getResolvedCatalogEntity helper (apache#2750): this centralizes some common code and simplifies some test setups
* Enforce that S3 credentials are vended when requested (apache#2711) This is a follow-up change to apache#2672 striving to improve user-facing error reporting for S3 storage systems without STS.
  * Add property to `AccessConfig` to indicate whether the backing storage integration can produce credentials.
  * Add a check to `IcebergCatalogHandler` (leading to 400) that storage credentials are vended when requested and the backend is capable of vending credentials in principle.
  * Update `PolarisStorageIntegrationProviderImpl` to indicate that FILE storage does not support credential vending (requesting credential vending with FILE storage does not produce any credentials and does not flag an error, which matches current Polaris behaviour).
  * Only those S3 systems where STS is not available (or disabled / not permitted) are affected.
  * Other storage integrations are not affected by this PR.
* [Catalog Federation] Ignore JIT entities when deleting federated catalogs, add integration test for namespace/table-level RBAC (apache#2690) When enabling table/namespace level RBAC in federated catalog, JIT entities will be created during privilege grant. In the short term, we should ignore them when dropping the catalog. In the long term, we will clean-up those entities when deleting the catalog. This will be the first step towards JIT entity clean-up:
  1. Ignore JIT entities when dropping federated catalog (orphan entities)
  2. Register tasks/in-place cleanup JIT entities during catalog drop
  3. Add new functionality to PolarisMetastoreManager to support atomic delete non-used JIT entities during revoke.
  4. Global Garbage Collector to clean-up unreachable entities (entities with non-existing catalog path/parent)
* SigV4 Auth Support for Catalog Federation - Part 3: Service Identity Info Injection (apache#2523) This PR introduces service identity management for SigV4 Auth Support for Catalog Federation. Unlike user-supplied parameters, the service identity represents the identity of the Polaris service itself and should be managed by Polaris.
  * Service Identity Injection
  * Return injected service identity info in response
  * Use AwsCredentialsProvider to retrieve the credentials
  * Move some logic to ServiceIdentityConfiguration
  * Rename ServiceIdentityRegistry to ServiceIdentityProvider
  * Rename ResolvedServiceIdentity to ServiceIdentityCredential
  * Simplify the logic and add more tests
  * Use SecretReference and fix some small issues
  * Disable Catalog Federation
* Update actions/stale digest to 5f858e3 (apache#2758)
* Service: RealmContextFilter test refactor (apache#2747)
* Update dependency software.amazon.awssdk:bom to v2.35.0 (apache#2760)
* Update apache/spark Docker tag to v3.5.7 (apache#2727)
* Update eric-maynard Team entry (apache#2763) I'm no longer affiliated with Snowflake, so we should update this page accordingly
* Refactor resolutionManifest handling in PolarisAdminService (apache#2748)
  - remove mutable `resolutionManifest` field in favor of letting the "authorize" methods return their `PolarisResolutionManifest`
  - replace "find" helpers with "get" helpers that have built-in error handling
* Implement Finer Grained Operations and Privileges For Update Table (apache#2697) This implements finer grained operations and privileges for update table in a backwards compatible way as discussed on the mailing list. The idea is that all the existing privileges and operations will work and continue to work even after this change (i.e. TABLE_WRITE_PROPERTIES will still ensure update table is authorized even after these changes). However, because Polaris will now be able to identify each operation within an UpdateTable request and has a privilege model with inheritance that maps to each operation, users will now have the option of restricting permissions at a finer level if desired.
* [Python CLI][CI Failure] Pin pydantic version to < 2.12.0 to fix CI failure (apache#2770)
* Delete ServiceSecretReference (apache#2768)
* JDBC: Fix Bootstrap with schema options (apache#2762)
* Site: Add puppygraph integration (apache#2753)
* Update Changelog with finer grained authz (apache#2775)
* Add Arguments to Various Event Records (apache#2765)
* Update immutables to v2.11.5 (apache#2776)
* Client: add support for policy management (apache#2701) Implementation for policy management via Polaris CLI (apache#1867). Here are the subcommands to API mapping:
  - attach - PUT /polaris/v1/{prefix}/namespaces/{namespace}/policies/{policy-name}/mappings
  - create - POST /polaris/v1/{prefix}/namespaces/{namespace}/policies/{policy-name}/mappings
  - delete - DELETE /polaris/v1/{prefix}/namespaces/{namespace}/policies/{policy-name}
  - detach - POST /polaris/v1/{prefix}/namespaces/{namespace}/policies/{policy-name}/mappings
  - get - GET /polaris/v1/{prefix}/namespaces/{namespace}/policies/{policy-name}
  - list - GET /polaris/v1/{prefix}/namespaces/{namespace}/policies (this is the default for the `list` operation), or GET /polaris/v1/{prefix}/applicable-policies when the `--applicable` option is provided
  - update - PUT /polaris/v1/{prefix}/namespaces/{namespace}/policies/{policy-name}
* Update dependency com.google.cloud:google-cloud-storage-bom to v2.58.1 (apache#2764)
* Update dependency org.jboss.weld:weld-junit5 to v5.0.3.Final (apache#2777)
* Update the LICENSE and NOTICE files in the runtime (apache#2779)
* SigV4 Auth Support for Catalog Federation - Part 4: Connection Credential Manager (apache#2759) This PR introduces a flexible credential management system for Polaris. Building on Part 3's service identity management, this system combines Polaris service identities with user-provided authentication parameters to generate credentials for remote catalog access.
  The core of this PR is the new ConnectionCredentialVendor interface, which:
  - Generates connection credentials by combining service identity with user auth parameters
  - Supports different authentication types (AWS SIGV4, AZURE Entra, GCP IAM) through CDI; currently only supports SigV4
  - Provides on-demand credential generation
  - Enables easy extension for new authentication types
  In the long term, we should move the storage credential management logic out of PolarisMetastoreManager; PolarisMetastoreManager should only provide persistence interfaces.
* Extract IcebergCatalog.getAccessConfig to a separate class AccessConfigProvider (apache#2736) This PR extracts credential vending entrypoint getAccessConfig from IcebergCatalog into a new centralized AccessConfigProvider class, decoupling credential generation from catalog implementations. The old SupportsCredentialVending is removed in this PR upon discussion
* Update immutables to v2.11.6 (apache#2780)
* Enhance Release docs (apache#2787)
* Spark: Remove unnecessary dependency (apache#2789)
* Update Pull Request Template (apache#2788)
* Freeze 1.2 change log (apache#2783)
* [Catalog Federation] Enable Credential Vending for Passthrough Facade Catalog (apache#2784) This PR introduces credential vending support for passthrough-facade catalogs. When creating a passthrough-facade catalog, the configuration currently requires two components: StorageConfig, which specifies the storage info for the remote catalog, and ConnectionInfo, which defines connection parameters for the underlying remote catalog. With this change, the StorageConfig is now also used to vend temporary credentials for user requests. Credential vending honors table-level RBAC policies to determine whether to issue read-only or read-write credentials, ensuring access control consistency with Polaris authorization semantics. A new test case validates the credential vending workflow, verifying both read and write credential vending.
  Note: the remote catalog referenced by the passthrough-facade does not need to support IRC
* Site: Add docs for catalog federation (apache#2761)
* Python client: update CHANGELOG.MD for recent changes (apache#2796)
* Python client: remove Python 3.9 support (apache#2795)
* Update dependency software.amazon.awssdk:bom to v2.35.5 (apache#2799)
* FIX REG tests with cloud providers (apache#2793)
* [Catalog Federation] Block credential vending for remote tables outside allowed location list (apache#2791)
* Correct invalid example in management service OpenAPI spec (apache#2801) The `example` was incorrectly placed as a sibling of `$ref` within a `schema` object in `polaris-management-service.yml`. According to the OpenAPI specification, properties that are siblings of a `$ref` are ignored. This was causing a `NullPointerException` in OpenAPI Generator v7.13.0+ due to a change in how examples are processed. The generator now expects all `examples` to be valid and non-empty, and a misplaced `example` can lead to a null reference when the generator tries to access it (we are not yet using v7.13.0+, thus not a problem at the moment). This commit moves the `example` to be a sibling of the `schema` object, which is the correct placement according to the OpenAPI specification. Reference error when using a newer version of openapi-generator-cli:
  ```
  openapi-generator-cli generate -i spec/polaris-catalog-service.yaml -g python -o client/python --additional-properties=packageName=polaris.catalog --additional-properties=apiNameSuffix="" --skip-validate-spec --additional-properties=pythonVersion=3.13 --ignore-file-override /local/client/python/.openapi-generator-ignore
  ...
  Exception: Cannot invoke "io.swagger.v3.oas.models.examples.Example.getValue()" because the return value of "java.util.Map.get(Object)" is null
      at org.openapitools.codegen.DefaultGenerator.processOperation(DefaultGenerator.java:1606)
      at org.openapitools.codegen.DefaultGenerator.processPaths(DefaultGenerator.java:1474)
      at org.openapitools.codegen.DefaultGenerator.generateApis(DefaultGenerator.java:663)
      at org.openapitools.codegen.DefaultGenerator.generate(DefaultGenerator.java:1296)
      at org.openapitools.codegen.cmd.Generate.execute(Generate.java:535)
      at org.openapitools.codegen.cmd.OpenApiGeneratorCommand.run(OpenApiGeneratorCommand.java:32)
      at org.openapitools.codegen.OpenAPIGenerator.main(OpenAPIGenerator.java:66)
  Caused by: java.lang.NullPointerException: Cannot invoke "io.swagger.v3.oas.models.examples.Example.getValue()" because the return value of "java.util.Map.get(Object)" is null
      at org.openapitools.codegen.utils.ExamplesUtils.unaliasExamples(ExamplesUtils.java:75)
      at org.openapitools.codegen.DefaultCodegen.unaliasExamples(DefaultCodegen.java:2343)
      at org.openapitools.codegen.DefaultCodegen.fromResponse(DefaultCodegen.java:4934)
      at org.openapitools.codegen.DefaultCodegen.fromOperation(DefaultCodegen.java:4575)
      at org.openapitools.codegen.DefaultGenerator.processOperation(DefaultGenerator.java:1574)
      ... 6 more
  ```
* Update dependency io.opentelemetry:opentelemetry-bom to v1.55.0 (apache#2804)
* Update dependency io.micrometer:micrometer-bom to v1.15.5 (apache#2806)
* Bump version for python deps (apache#2800)
  * bump version for python deps
  * Update openapi-generatr-cli from 7.11.0.post0 to 7.12.0
  * Pin poetry version
* Update dependency io.projectreactor.netty:reactor-netty-http to v1.2.11 (apache#2809)
* [Catalog Federation] Add Connection Credential Vendors for Other Auth Types (apache#2782) This change is a prerequisite for enabling connection credential caching. By making PolarisCredentialManager the central entry point for obtaining connection credentials, we can introduce caching cleanly and manage all credential flows in a consistent way.
* Last merged commit 6b957ec
---------
Co-authored-by: Mend Renovate <bot@renovateapp.com>
Co-authored-by: fabio-rizzo-01 <fabio.rizzocascio@jpmorgan.com>
Co-authored-by: Adnan Hemani <adnan.h@berkeley.edu>
Co-authored-by: Artur Rakhmatulin <artur.rakhmatulin@gmail.com>
Co-authored-by: Alexandre Dutra <adutra@apache.org>
Co-authored-by: Prashant Singh <35593236+singhpk234@users.noreply.github.com>
Co-authored-by: Christopher Lambert <xn137@gmx.de>
Co-authored-by: Dmitri Bourlatchkov <dmitri.bourlatchkov@gmail.com>
Co-authored-by: Honah (Jonas) J. <honahx@apache.org>
Co-authored-by: Rulin Xing <xjdkcsq3@gmail.com>
Co-authored-by: Eric Maynard <eric.maynard+oss@snowflake.com>
Co-authored-by: Travis Bowen <122238243+travis-bowen@users.noreply.github.com>
Co-authored-by: Jaz Ku <jsku@dons.usfca.edu>
Co-authored-by: Yong Zheng <yongzheng0809@gmail.com>
Co-authored-by: JB Onofré <jbonofre@apache.org>
Co-authored-by: Yufei Gu <yufei@apache.org>
About the change
TODO
While testing, I found that each QuarkusMainLauncher launches a new PG container; I'm thinking of a fix to reuse it in order to test an already-bootstrapped schema.
Note: I don't think it's a 1.2 blocker since we can't use this, but I would love to know other community members' takes.