Unable to generate cohorts: BadSqlGrammar error #1055
Comments
@PRijnbeek, I extracted the cohort SQL via Atlas into the attached txt file. Would you be able to run it (after replacing the tokens with your schema names) and let us know if you see the error? To answer your question about new tables: you can look at the output of /WebAPI/ddl/results and see if there are any table definitions there that do not match your results schema.
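The table-matching check described above can be scripted. Here is a minimal Python sketch (not part of WebAPI; the regex and the example table names are simplifications) that extracts table names from a DDL dump such as the /WebAPI/ddl/results output and diffs them against the tables actually present in a results schema:

```python
import re

def tables_in_ddl(ddl_sql: str) -> set:
    """Extract table names from CREATE TABLE statements in a DDL dump."""
    pattern = re.compile(
        r"CREATE\s+TABLE\s+(?:IF\s+NOT\s+EXISTS\s+)?([@\w.]+)",
        re.IGNORECASE,
    )
    # Keep only the bare table name, dropping any schema qualifier
    # (including @results_schema-style tokens used by WebAPI templates).
    return {m.split(".")[-1].lower() for m in pattern.findall(ddl_sql)}

def missing_tables(ddl_sql: str, existing: set) -> set:
    """Tables the DDL defines that the results schema does not have."""
    return tables_in_ddl(ddl_sql) - {t.lower() for t in existing}

# Example with made-up table names:
ddl = """
CREATE TABLE @results_schema.cohort (cohort_definition_id INT);
CREATE TABLE @results_schema.cohort_inclusion_result (mode_id INT);
"""
print(missing_tables(ddl, {"cohort"}))  # -> {'cohort_inclusion_result'}
```

In practice you would feed in the concatenated DDL files and a table list queried from information_schema.tables for your results schema.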
@chrisknoll I do not see an error when I run that SQL directly in DataGrip. Is this the exact same SQL that is run? I remember you mentioned earlier that some extra SQL is added to the SQL available in the export tab, correct? The error I get is from org.springframework.jdbc; I do not think DataGrip uses that, so I am not sure this rules out SQL that the JDBC driver cannot process.

There are many new tables in /WebAPI/ddl/results, which I have now all added. Do I also need to check whether certain tables were changed, or is that never the case? How would I actually know that I have to do this for a new WebAPI release? Maybe this should be on the Wiki or in the release notes?

After adding the results tables that were missing I still have the same error. Is there a way to debug Atlas and see where, and with what part of the query, this error is generated? It first runs for a while doing the initial queries and then gives the error. Thanks
There are additional statistics created during cohort generation. I can produce the full SQL for you after we check something else: there were modifications made to three of the results stats tables, as described here:
To address your points:
Thanks @chrisknoll, that change in the results schema did the trick :) since we were moving from 2.4 to 2.6. Indeed, I did not see that in the WebAPI release notes. I looked at the Atlas release notes (considering it is the main application, of which WebAPI is a communication layer), but I probably should have double-checked the WebAPI release notes as well. However, I did search here: so I also looked here: It is quite challenging to keep everything up to date, I fully understand that; maybe we should leave the OHDSI wiki altogether and only use the GitHub Wiki as the main documentation point?

Yes, it has always been a manual step. I was thinking about this a bit, though: maybe we can write a simple R package that picks up all the SQL files in the current /WebAPI/ddl/results and compares them with a results schema. If they are not identical it could flag this or add the missing tables. Something like:

checkResultSchema(resultSchemaName, connectionDetails, updateSchemas = TRUE, webApiVersion = "2.6.0")

You could then run this over all the database versions. I am not sure how you are doing this internally with all the many databases you have, so maybe there is already a smarter way to do this. Thanks for your valuable help, Chris!
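The helper proposed in the comment above could work roughly as follows. This is a hypothetical Python sketch of the same idea (the function name, parameters, and table names are invented for illustration; a real tool would read the expected table list from the /WebAPI/ddl/results output and the actual list from the database):

```python
def check_result_schema(expected_tables, actual_tables, update=False):
    """Compare the tables a WebAPI release expects against a results schema.

    Returns the set of missing tables. If update is True, also returns
    placeholder DDL stubs a caller could fill in and run.
    """
    missing = {t.lower() for t in expected_tables} - {t.lower() for t in actual_tables}
    if not missing:
        print("Results schema is up to date.")
    else:
        print("Missing tables:", ", ".join(sorted(missing)))
    if update:
        # Placeholder: a real tool would emit the actual DDL for each table.
        return missing, [f"-- CREATE TABLE {t} ..." for t in sorted(missing)]
    return missing

# Hypothetical usage with made-up table names:
expected = {"cohort", "cohort_censor_stats", "cohort_inclusion_result"}
actual = {"cohort"}
check_result_schema(expected, actual)
```

Running this over each database in a multi-database setup after a WebAPI upgrade would catch exactly the kind of schema drift discussed in this issue.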
If others run into the same problem, please review the "Migration Notes" that appear under the WebAPI releases, as shown for v2.6:
Expected behavior
Success.
Actual behavior
Failed.
I have a multi-database setup that worked with the previous WebAPI and Atlas versions.
Error message:
error.txt
Steps to reproduce behavior
Running on Postgres. Cohort (json):
cohort.txt
How can I best proceed to find the issue? Can I get the actual full SQL that is executed from somewhere? The error field in cohort_generation_info has a max length that does not allow the full error message; maybe we should increase this field size? Would it help to do this manually now, or is the text length also limited somewhere else in the code?
Are there maybe new tables that need to be added in the individual database schemas that I am not aware of?