Error while accessing Cloudbeaver #3021
Hi @andres-chavez-bi You can use environment variables. It will also skip the initial server configuration.
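For reference, a minimal sketch of what that can look like in a Kubernetes Deployment. The `CB_*` variable names below are the ones used in CloudBeaver's docker-compose examples; verify them against your CloudBeaver version, and the resource names are made up for the example:

```yaml
# Hypothetical Deployment excerpt: pre-seed the admin account via environment
# variables instead of a mounted config file. Variable names follow CloudBeaver's
# docker-compose examples -- check them against your CloudBeaver version.
containers:
  - name: cloudbeaver
    image: dbeaver/cloudbeaver:24.2.1
    env:
      - name: CB_SERVER_NAME
        value: "CloudBeaver"
      - name: CB_ADMIN_NAME
        value: "cbadmin"
      - name: CB_ADMIN_PASSWORD
        valueFrom:
          secretKeyRef:          # keep credentials out of the manifest itself
            name: cloudbeaver-admin
            key: password
```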
Hi @EvgeniaBzzz my goal is to have a CloudBeaver server running in an OpenShift environment. I am currently testing with a predefined admin user; yes, at some point I'd like to test and check SAML authentication for the users (including the admins). For now, though, a predefined admin is fine for me.
The Admin should be defined in a separate config, not in the main `cloudbeaver.conf`.
Please note that the initial data will only be applied during the server's initial startup.
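For illustration, the split can look like the snippets below. The `initialDataConfiguration` key and the `adminName`/`adminPassword` fields follow the sample configs shipped with CloudBeaver, but the paths and values here are placeholders, not taken from this thread:

```json
{
    "database": {
        "initialDataConfiguration": "conf/initial-data.conf"
    }
}
```

And the referenced `conf/initial-data.conf` holds the predefined admin, applied only on the very first startup:

```json
{
    "adminName": "cbadmin",
    "adminPassword": "<initial-password>"
}
```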
Hi @EvgeniaBzzz I finally got my config correctly initialized. I used the cloudbeaver.conf file, pointing to the self-created init file for the admin credentials, like you mentioned. Here's my config:
But now I'm facing a DB initialization error:
Here's my init DB config (from the file I created):
Thank you so much for your help!!
@andres-chavez-bi Which CloudBeaver version are you using?
Hi @EvgeniaBzzz I'm using 24.2.1.
I think the problem is my PV retain policy; I need to check on that, because from what I see, my Helm chart is correctly removing the PVC when uninstalling, but when redeploying the data seems to still be there (as shown in the error). Would it make sense to skip initialization for objects that already exist (or overwrite them if the config has changed)? It might help in scenarios where this happens, not only in my case but in all K8s and OpenShift deployments. Since the CloudBeaver DB is in bootstrap mode, you still need to initialize it even if objects are still there, but it's just a suggestion. I will look into it and come back with feedback. Thanks again.
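A sketch of what the suggestion could look like (illustrative only, not CloudBeaver's actual upgrade code): a migration step could check the JDBC metadata before adding a column, so re-running the bootstrap against an already-initialized database doesn't fail with a duplicate-column error. Table and column names here are hypothetical:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public final class IdempotentMigration {

    /**
     * Adds a column only if it is not already present, so the migration can
     * safely run again on a database that was initialized by an earlier
     * deployment. Table/column names are illustrative, not CloudBeaver's schema.
     */
    static void addColumnIfMissing(Connection conn, String table,
                                   String column, String definition) throws SQLException {
        // Ask the driver whether the column already exists
        try (ResultSet rs = conn.getMetaData().getColumns(null, null, table, column)) {
            if (rs.next()) {
                return; // column already exists -- skip instead of failing
            }
        }
        try (Statement st = conn.createStatement()) {
            st.executeUpdate("ALTER TABLE " + table + " ADD COLUMN " + column + " " + definition);
        }
    }
}
```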
Hi @EvgeniaBzzz so I've checked, and our PVs and PVCs have a RetainPolicy set to "Delete". I also did a test: I opened the container and found that the trace file was there; I deleted it and tried to run the "run_server.sh" script, and it failed again with the same error (duplicate columns). I've also tried to delete the DB data file so maybe it could be recreated, but no luck there either. I'm uploading the trace file for you to check the error. I'm also attaching evidence that the PV is not available after the deployment is deleted (uninstalled using Helm). Lastly, I've also updated to CloudBeaver 24.2.2 and the same happens. Is there anything else we can try? Thanks!
@andres-chavez-bi Could you please try to start the server in a new workspace? You could also try to delete the whole workspace directory.
Hi @EvgeniaBzzz since these are tests, I am deleting the OpenShift (K8s) components every time before I deploy the application, so the data folder and all components of the application are deleted every time. I have also deleted the .data folder entirely, as mentioned in my previous comment, and ran the run_server script, and the issue is still present. I have also tried to deploy the server in a different namespace to avoid duplicates of any kind, and the issue persists; here's the snippet of the logs from the new namespace, and as you can see, it's also complaining about duplicate columns. Again, DB corruption should not be possible, since I delete all my components for every test to avoid any duplicity of any kind. I've also checked that my env variables are not interfering with any configuration. Do you have any other suggestions? Do I need to provide more information to help find the issue? Thanks!
Hello @EvgeniaBzzz I have tried to use SQLite as the database, and the error is exactly the same (I'm using CloudBeaver 24.2.2).
I think the issue might be somewhere else. Can you help me out? Is there something I can check to narrow down what it might be? I'm attaching the full logs again for you to review.
We need more time to investigate your case.
Hello. |
Hello @LonwoLonwo I will check on this and come back to you with the results. Thanks!
Hi @LonwoLonwo the issue persists. I have created a new deployment on a completely new K8s configuration. I'm attaching the error log, the server settings, and the init-db settings as well: cloudbeaver_init_db_settings.txt. I'm also uninstalling and deleting all K8s components that belong to the namespace, and no PVC is left with persistence. I can send the configuration of the K8s components if needed.
Hi @LonwoLonwo @EvgeniaBzzz I've prepared a report on the issue, maybe it helps.

**Issue: Database Initialization and Schema Update Conflicts in Containerized Environments**

**Problem Description**

When deploying CloudBeaver 24.3.1 in containerized environments (particularly Kubernetes/OpenShift), the application fails to start when configuration files are mounted as volumes. The specific error occurs during database initialization/schema upgrade:

**Root Cause Analysis**

The issue stems from ambiguity in the initialization process:
**Proposed Solutions**

**Solution 1: Distinct Initialization States**

Modify the initialization logic to clearly separate different database states:

```java
public void initialize() throws DBException {
    DatabaseState state = determineDatabaseState();
    switch (state) {
        case FRESH_INSTALL:
            performFreshInitialization();
            break;
        case EXISTING_DATABASE:
            handleExistingDatabase();
            break;
        case PARTIAL_INITIALIZATION:
            handleIncompleteInitialization();
            break;
    }
}

private DatabaseState determineDatabaseState() {
    // Check actual database files and tables,
    // not just configuration presence
    // (detection logic intentionally elided in this sketch)
    throw new UnsupportedOperationException("sketch only");
}
```

Benefits:
**Solution 2: Initialization Lock File System**

Implement robust file-based state tracking:

```java
import java.io.IOException;
import java.nio.file.*;

public class DatabaseStateManager {
    private static final String STATE_FILE = ".db_state";

    public enum DBState {
        UNINITIALIZED,
        INITIALIZING,
        INITIALIZED,
        NEEDS_UPGRADE
    }

    public void markState(DBState state) throws IOException {
        // Write the state to a temp file, then move it into place atomically
        Path temp = Paths.get(STATE_FILE + ".tmp");
        Files.writeString(temp, state.name());
        Files.move(temp, Paths.get(STATE_FILE), StandardCopyOption.ATOMIC_MOVE);
    }

    public DBState getCurrentState() throws IOException {
        // A missing state file means the database was never initialized
        Path stateFile = Paths.get(STATE_FILE);
        if (!Files.exists(stateFile)) {
            return DBState.UNINITIALIZED;
        }
        return DBState.valueOf(Files.readString(stateFile).trim());
    }
}
```

Benefits:
**Solution 3: Configuration-Based Control**

Add explicit configuration options:

```json
{
    "database": {
        "initialization": {
            "mode": "AUTO|FRESH|SKIP|FORCE",
            "allowUpgrade": true|false,
            "strictMode": true|false
        }
    }
}
```

Benefits:
**Recommended Approach**

We recommend implementing a combination of Solutions 1 and 2:

This would provide:

**Test Cases to Consider**

**Impact on Current Deployments**

Would be happy to provide more detailed technical specifications or proof-of-concept code if needed.
Thank you @andres-chavez-biex, we'll take a look!
Hello @EvgeniaBzzz the only way I managed to get the server to work was with env vars. I had to create a ConfigMap and load it in my Deployment, and it worked. So passing a configuration with either cloudbeaver.conf or .cloudbeaver.runtime.conf is not working, since the DB starts the initialization process, as suspected.
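A minimal sketch of such a ConfigMap-based setup (resource names are made up for the example; the `CB_*` variables follow CloudBeaver's docker-compose examples and should be verified against your version):

```yaml
# Hypothetical ConfigMap carrying CloudBeaver settings as environment variables
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloudbeaver-env
data:
  CB_SERVER_NAME: "CloudBeaver"
  CB_ADMIN_NAME: "cbadmin"
---
# Deployment excerpt: inject the ConfigMap into the container
# (the admin password would normally come from a Secret instead)
spec:
  template:
    spec:
      containers:
        - name: cloudbeaver
          image: dbeaver/cloudbeaver:24.2.2
          envFrom:
            - configMapRef:
                name: cloudbeaver-env
```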
Hi, @andres-chavez-biex!
Describe the bug
The server pod is running without any issues; there are no errors in the logs to suggest that anything failed. However, the login functionality is not working, even when using the credentials in the config. So the pod/container starts and asks for a login, but then it says the login failed, specifically indicating that the user has no password and that the credentials are invalid.
Here are the credentials:
Also, this is the response I get in the GQL console, which I understand is expected, since I can't log in in the first place.
I'm attaching my settings and logs as files as well, so you can check the bootstrapping and other logs.