Add missing hibernate_sequence to database schema #405
Conversation
@copilot and this appears in the browser:
@copilot this is the file I upload:
... The error persisted because the schema change only applies to fresh database installations: the PostgreSQL init scripts only run when the database is first created. I've added an auto-migration in commit e5d0972 that runs on every container startup and creates the `hibernate_sequence` if it is missing. To apply the fix, restart the containers:

```
docker compose down
docker compose --profile development up -d
```

The entrypoint script will now run the migration automatically.
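For reference, a minimal sketch of what such an idempotent startup migration could look like; the actual contents of `docker/03-migration-hibernate-sequence.sql` may differ, and the start/increment values here are assumptions:

```
-- Sketch of a startup migration (assumed contents; the real script may differ).
-- CREATE SEQUENCE IF NOT EXISTS is a no-op when the sequence already exists,
-- so it is safe to run on every container start.
CREATE SEQUENCE IF NOT EXISTS hibernate_sequence START WITH 1 INCREMENT BY 1;
```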
@copilot is it possible that the NEXUS file I sent you is invalid? It has quite complex syntax. Maybe compare it with this simplified file and see if Mesquite responds differently:
The missing `hibernate_sequence` error is independent of the file contents, so file validity is unlikely to be the cause. After restarting your containers with the migration fix (commit e5d0972), the sequence will be created. If you then see new errors when uploading either file, those would be separate issues, possibly related to Mesquite parsing the complex NEXUS syntax. Could you confirm whether you've restarted the containers since my last commit? If so and you're still seeing errors, please share the new error message; it would help determine whether there are additional issues beyond the missing sequence.
@copilot I did a clean rebuild and am now getting this, for this file:
... Good news: the `hibernate_sequence` error is resolved. The new error is a different issue, `could not load an entity: [org.cipres.treebase.domain.matrix.MatrixRow#1]` with `Bad value for type long`, which likely comes from rows written under the old schema that are still in the existing data volume.
For a truly clean start, you may need to delete the postgres data volume:

```
docker compose down
docker volume rm treebase_postgres-data
docker compose --profile development up -d
```

This will create a fresh database with the correct schema, including `hibernate_sequence`.
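Once the containers are back up, one way to confirm the sequence was actually created is a standard PostgreSQL catalog query (nothing TreeBASE-specific):

```
-- Returns one row if the sequence exists in the current database
SELECT sequence_schema, sequence_name
FROM information_schema.sequences
WHERE sequence_name = 'hibernate_sequence';
```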
@copilot I started from an empty database, so I don't think this is because of previous data. I suspect that, again, there is a data type mismatch. Could you go through the types that the ORM is trying to insert and compare them with the types of the database columns it is targeting? This has been a recurrent problem before, so I suspect it is happening again. Thanks!
I've analyzed the types on both sides of the mapping:
MATRIXROW table (schema) vs. MatrixRow entity (Java): a potential mismatch was found in one of the mapped columns.
ROWSEGMENT: the embedded RowSegment mapping also needs checking.
The error might actually be caused by the data being inserted rather than by the column definitions themselves.
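To make the comparison concrete, the database side can be dumped with a standard catalog query; this only shows the column types as PostgreSQL sees them, and the Java side still has to be read from the MatrixRow entity mapping:

```
-- Column types for MATRIXROW as PostgreSQL reports them;
-- compare data_type/udt_name against the Hibernate mapping for MatrixRow.
SELECT column_name, data_type, udt_name, is_nullable
FROM information_schema.columns
WHERE table_name = 'matrixrow'
ORDER BY ordinal_position;
```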
@copilot multiple questions: (1) where should I look in the logs for the root cause, and (2) what connection details do I use to reach the PostgreSQL database via MCP?
1. Logs to check: the Tomcat logs should contain the root cause after `Caused by: org.postgresql.util.PSQLException:`, which will show the actual data type error.
2. PostgreSQL MCP connection details: since the PostgreSQL container maps port 5432 to your host, you can connect directly from Claude desktop using localhost.
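As a first sanity check after connecting, something like the following could be run; the primary key column name `matrixrow_id` is an assumption (adjust it to whatever the schema actually uses), and the id 1 comes from the "could not load an entity" error quoted in this thread:

```
-- Confirm the connection works
SELECT version();

-- Inspect the row Hibernate fails to load (assumed PK column name)
SELECT * FROM matrixrow WHERE matrixrow_id = 1;
```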
The schema now matches the ORM. The fresh rebuild picked up the corrected init scripts, so the `matrixrow` table now has the expected 6 columns. Please try uploading your NEXUS file again; with the correct schema in place, the character data should insert without the previous type error.
@copilot I don't think this removal of the volumes cleared out all the data, because I can see my old submission still in there.
The schema query you ran earlier shows the correct 6 columns with proper types, which means the database WAS recreated from scratch. If you're seeing old submissions, there are a few possibilities, for example seed data from the init scripts or stale browser/session state.
To verify the database is truly fresh, run:

```
SELECT COUNT(*) FROM submission;
SELECT COUNT(*) FROM study;
```

If these return 0 (or only the counts from the init data), the volume was cleared. The old submission might be cached in your browser session. Did the NEXUS file upload work this time?
@copilot
The counts you pasted confirm that the database was properly recreated. The second study/submission is likely from your recent upload attempt. Did the NEXUS file upload succeed this time? The schema is now correct, so the character state strings should be stored properly in the matrix element columns.
@copilot No, it isn't. I "cleared" everything, but then when I logged back in, my old submission in progress was still there and I could upload immediately without creating the study and citation data. Hypothesis: the Docker compose setup is NOT storing the data in treebase_postgres-data but inside the postgres image.
The volume configuration is correct (the data lives in the named volume, not inside the postgres image). What you're seeing is the seed submission created by the init scripts:

```
INSERT INTO SUBMISSION(SUBMISSION_ID, VERSION, SUBMISSIONNUMBER, CREATEDATE, STUDY_ID, USER_ID, TEST)
VALUES(1, 0, '1', CURRENT_DATE, 1, 1, 1);
```

This creates a test submission belonging to user_id=1. When you log in as that user, it appears as "your" submission in progress. To verify this is seed data and not persisted data, check:

```
SELECT submission_id, submissionnumber, createdate FROM submission;
```

If the only row is submission_id=1 with today's date, it is the seed data. More importantly: did the NEXUS file upload succeed with the corrected schema?
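If it helps, ownership can be checked in a single query; this assumes the foreign key is `submission.user_id` (as in the seed INSERT above) and that the account table is `"user"` with `user_id` and `username` columns, as queried later in this thread:

```
-- Which account owns each submission? Seed data should belong to user_id = 1.
SELECT s.submission_id, s.submissionnumber, s.createdate, u.user_id, u.username
FROM submission s
JOIN "user" u ON u.user_id = s.user_id
ORDER BY s.submission_id;
```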
@copilot No, look, there are 2 submissions in the database: one is probably from the seed data created at initialization; the other is still there from a failure to wipe. I can only see the second one when I log in under my account (user_id=2).
I understand the concern. The init script only creates one submission (user_id=1). If there are two, we need to investigate. Can you run these queries to get the details:

```
SELECT submission_id, user_id, createdate, submissionnumber FROM submission ORDER BY submission_id;
SELECT user_id, username FROM "user" ORDER BY user_id;
```

This will show us which user each submission belongs to and when it was created.
Also, please confirm: when you ran the volume removal, did Docker actually report the volume as removed? The correct schema (6 columns) proves the SQL init scripts ran, but I want to understand where the second submission came from.
@copilot I am 100% sure that the second submission is the one I made last night. It has dummy data in it. The abstract says "The notes for your study are not for the public, but are there for your own benefit and for communicating with TreeBASE staff once your submission status is ready to be made public." I know that I copied and pasted this from the website last night. WE ARE NOT WIPING THE VOLUME.
I see the issue! The output shows the web-dev container is still running. This means the plain `docker compose down` did not include the development profile, so that container was never stopped and the volume could not be fully released. Fix: include the profile when stopping:

```
docker compose --profile development down -v
docker compose --profile development up -d
```

Or stop ALL containers first:

```
docker stop $(docker ps -q)
docker compose down -v
docker compose --profile development up -d
```

The still-running web-dev container maintains a connection to the database, which might explain why data persists or gets recreated unexpectedly.
If the volume still appears in the Docker GUI after showing "Removed", there might be multiple volumes or the GUI is caching. Try these commands to force a complete cleanup:

```
# Stop ALL docker containers
docker stop $(docker ps -aq)

# Remove all stopped containers
docker rm $(docker ps -aq)

# List all volumes to see what exists
docker volume ls | grep treebase

# Explicitly remove the postgres volume
docker volume rm treebase_postgres-data

# Verify it's gone
docker volume ls | grep treebase

# Now start fresh
docker compose --profile development up -d
```

If the volume still won't go away, the `volume ls` output will show what is actually left. Alternatively, a nuclear option:

```
docker system prune -a --volumes
```

Warning: this removes ALL unused containers, images, and volumes system-wide.
@copilot ok, we did a nuke and then I went through a new account creation and submission procedure. Here's what I see: This makes sense: submission 1 is created by the init script and belongs to user 1 (Piel). Submission 2 is what I did manually and belongs to user 4 (me, the account I just created). However, the upload failed:
... The database is now properly fresh (the submissions confirm this). The upload failure when loading the MatrixRow entity (`Bad value for type long`) suggests a type mismatch in one of the related tables. Could you run these queries to verify them:

```
SELECT column_name, data_type, udt_name
FROM information_schema.columns
WHERE table_name = 'matrixelement'
ORDER BY ordinal_position;

SELECT column_name, data_type, udt_name
FROM information_schema.columns
WHERE table_name = 'rowsegment'
ORDER BY ordinal_position;
```

The `matrixelement` and `rowsegment` tables are the next candidates for a type mismatch. Also, can you check the Tomcat logs for the full stack trace and share the lines after `Caused by: org.postgresql.util.PSQLException:`?
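If it's easier, the same information for all three tables involved can be pulled in one go; this is just a combined variant of the queries above, using the same information_schema view:

```
-- Column types for the tables involved in loading a MatrixRow,
-- to spot bigint vs. varchar (long vs. string) mismatches against the ORM.
SELECT table_name, column_name, data_type, udt_name
FROM information_schema.columns
WHERE table_name IN ('matrixrow', 'matrixelement', 'rowsegment')
ORDER BY table_name, ordinal_position;
```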
- Added `hibernate_sequence` to `TBASE2_POSTGRES_CREATION.sql` for new database installations
- Added `docker/03-migration-hibernate-sequence.sql`
- Updated `docker/entrypoint-dev.sh` to run migrations on every container startup
- Updated `docker-compose.yml` to mount the migration script

Summary
This PR fixes the "relation hibernate_sequence does not exist" error that occurs during NEXUS file uploads.
Changes
- Added `CREATE SEQUENCE hibernate_sequence;` to the schema creation SQL

Current Status
- The `hibernate_sequence` issue is fixed
- The `matrixrow` schema now has the expected 6 columns
- Remaining error: `could not load an entity: [org.cipres.treebase.domain.matrix.MatrixRow#1]` with `Bad value for type long`
- Next step: check the `matrixelement` and `rowsegment` table schemas to identify additional type mismatches

How it works
On every container startup, `docker/entrypoint-dev.sh` runs the migration script, which creates `hibernate_sequence` if it does not already exist.
Testing
After pulling this change, simply restart the containers with `docker compose down` followed by `docker compose --profile development up -d`.
The migration will automatically create the missing sequence.
Original prompt
When I upload a file, I see this in the tomcat log: