
[Suggested Feature] Analyze White Rabbit scan document to detect which vocabulary(ies) are present in a source table #46

Open
aguynamedryan opened this issue Apr 23, 2015 · 1 comment

Comments

@aguynamedryan
Contributor

CDMv5 introduced the idea of “domains”. Domains dictate which table(s) a concept should live in, and these domain assignments can be surprising. For instance, HCPCS code G0248 has been assigned the domain of “Observation”, meaning that this HCPCS code does not generate a procedure_occurrence but is instead stored in the observation table. As another example, the ICD-9 code V53.2 (fitting of hearing aid) maps to the domain of “Procedure”, meaning the ICD-9 code generates a procedure_occurrence, not a condition_occurrence.

It would be nice to lessen the cognitive burden on our ETLers and provide them with some automated guidance on how their source data might best be mapped to the CDM. For example, when working on the SynPUF ETL, we completely forgot that some ICD-9 codes generate procedure_occurrences. We had to go back, draw a bunch of arrows, and come up with a lot of new logic long after we thought we had finished the spec and our minds had moved on to other matters.

If we matched the values for each column in White Rabbit’s scan report against the values in concept.concept_code, we could see whether the column consistently matches codes from a particular vocabulary. This would give us very helpful information that we could use to automate some of the work done in Rabbit in a Hat.

For instance, in the SynPUF data, we could take White Rabbit’s values from the dx1 column in the inpatient file and try to match them against concept.concept_code. We’d find that those values all match ICD-9 concepts. From that we could reasonably infer that inpatient.dx1 is an ICD-9 column. And since the CDMv5 vocabulary tells us that ICD-9 codes are associated not only with the Condition domain, but with Observation, Measurement, and Procedure as well, we could then have RiaH automatically draw arrows not only between the inpatient table and condition_occurrence, but between inpatient and measurement, observation, and procedure_occurrence as well.
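
To make this concrete, here is a minimal sketch of what that detection pass might look like. Everything here is hypothetical: the class and method names are illustrative, not part of WhiteRabbit's actual scan-report API, and the code-to-vocabulary map would be loaded from the concept table.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: classify a scanned column by matching its values
// against concept.concept_code. Names are illustrative, not WhiteRabbit's API.
public class VocabularyDetector {

    // concept_code -> the vocabulary_ids it appears in, loaded from the concept table.
    private final Map<String, List<String>> codeToVocabularies;

    public VocabularyDetector(Map<String, List<String>> codeToVocabularies) {
        this.codeToVocabularies = codeToVocabularies;
    }

    // Returns the vocabulary matched by at least `threshold` of the column's
    // values (e.g. 0.95), or null if no vocabulary is a consistent match.
    public String detectVocabulary(List<String> columnValues, double threshold) {
        Map<String, Integer> hits = new HashMap<>();
        for (String value : columnValues) {
            List<String> vocabularies = codeToVocabularies.get(value);
            if (vocabularies == null) {
                continue; // value matches no known concept_code
            }
            for (String vocabulary : vocabularies) {
                hits.merge(vocabulary, 1, Integer::sum);
            }
        }
        for (Map.Entry<String, Integer> entry : hits.entrySet()) {
            if ((double) entry.getValue() / columnValues.size() >= threshold) {
                return entry.getKey(); // e.g. "ICD9CM" for inpatient.dx1
            }
        }
        return null;
    }
}
```

A threshold below 100% seems wise, since real source columns usually contain a few junk values that won't match any concept_code.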

We can do this for every vocabulary we find in each source table, drawing arrows between source tables and their potential target tables. Not only can we make the table-level mappings, but we can also link the *_concept_id and *_source_value fields to the source column.
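
Once a column's vocabulary is detected, the domain-to-target-table step could be little more than a lookup, as in the sketch below. The hard-coded map is purely illustrative; in practice the domain list for a vocabulary would come from querying the vocabulary tables, and the table/field names from CDM metadata.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch: given the domains a vocabulary's codes map into,
// propose CDM target tables plus the concept_id/source_value fields to link.
public class DomainTargets {

    // domain -> {target table, *_concept_id field, *_source_value field}
    static final Map<String, String[]> DOMAIN_TO_TARGET = Map.of(
        "Condition",   new String[]{"condition_occurrence", "condition_concept_id", "condition_source_value"},
        "Procedure",   new String[]{"procedure_occurrence", "procedure_concept_id", "procedure_source_value"},
        "Observation", new String[]{"observation", "observation_concept_id", "observation_source_value"},
        "Measurement", new String[]{"measurement", "measurement_concept_id", "measurement_source_value"}
    );

    // Proposes one arrow per domain: source column -> target table and fields.
    static void proposeArrows(String sourceTable, String sourceColumn, List<String> domains) {
        for (String domain : domains) {
            String[] target = DOMAIN_TO_TARGET.get(domain);
            if (target == null) {
                continue;
            }
            System.out.printf("%s.%s -> %s (%s, %s)%n",
                sourceTable, sourceColumn, target[0], target[1], target[2]);
        }
    }

    public static void main(String[] args) {
        // Per the ICD-9 example above: one detected column, four proposed arrows.
        proposeArrows("inpatient", "dx1",
            List.of("Condition", "Procedure", "Observation", "Measurement"));
    }
}
```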

Combine this feature with #35 and RiaH begins to become a tool that guides an ETLer through the mapping process, rather than requiring the ETLer to intimately know the source data, the CDM, and the domains assigned to each source vocabulary. Instead, RiaH informs the ETLer about the relationships they must consider. Plus, it will help avoid situations like the one we encountered, where we completely ignored a set of relationships because we didn’t realize which domains ICD-9 mapped into.

@aguynamedryan
Contributor Author

Also, if we see a column that contains cost-related information, we could draw an arrow between the source table and the appropriate cost table(s).
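
A simple way to prototype that could be a column-name heuristic like the sketch below; the keyword list is an assumption for illustration, not anything White Rabbit currently does.

```java
import java.util.List;

// Hypothetical heuristic: flag columns whose names suggest monetary amounts
// so RiaH could propose an arrow from the source table to the cost table(s).
public class CostColumnHeuristic {

    // Assumed keyword list; a real version might also inspect the scanned values.
    private static final List<String> COST_HINTS =
        List.of("cost", "paid", "charge", "payment", "copay", "deductible", "amt");

    static boolean looksLikeCostColumn(String columnName) {
        String lower = columnName.toLowerCase();
        return COST_HINTS.stream().anyMatch(lower::contains);
    }

    public static void main(String[] args) {
        System.out.println(looksLikeCostColumn("clm_pmt_amt")); // true (SynPUF-style name)
        System.out.println(looksLikeCostColumn("dx1"));         // false
    }
}
```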

janblom added a commit that referenced this issue Feb 28, 2024
…dd tests, various improvements (#408)

* Use and configure license-maven-plugin (org.honton.chas)

* First setup of distribution verification integration test

* Use Java 17 for compilation, updates of test dependencies, update license validation config

* Update comment on CacioTest annotation

* Cleanup

* Add generating fat jars for WhiteRabbit and RabbitInAHat; lock hsqldb version for Java 1.8

* Enforce Java 1.8 for distributed dependencies

* Update main.yml

Project now requires Java 17 to build. Should still produce Java 8 (1.8) compatible artifacts, though.

* Bump org.apache.avro:avro from 1.11.2 to 1.11.3 in /rabbit-core

Bumps org.apache.avro:avro from 1.11.2 to 1.11.3.

---
updated-dependencies:
- dependency-name: org.apache.avro:avro
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

* Use jdk8 classifier for hsqldb 2.7.x

* Exclude older version of hsqldb

* Fix image crop when using stem table

* Update stem table image

* Decrease size of table panel when using stem table.

Without this change, the table panel is always taller than
needed (when using the stem table), because the stem table is
counted as one of the items in the components list. It is,
however, shown separately at the top, which is already accounted
for by the stem table margin.

* Add snowflake support (#37)

* Refactor RichConnection into separate classes, and add an abstraction for the JDBC connection. Implement a Snowflake connection with this abstraction

* Add unit tests for SnowflakeConnector

* Added Snowflake support for SourceDataScan; added minimal test for it; some refactorings to move database responsibility to rabbit-core/databases

* Move more database details to rabbit-core/databases

* Clearer name for method

* Ignore snowflake.env

* Create PostgreSQL container in the TestContainers way

* Refactored Snowflake tests + a bit of documentation

* Fix Snowflake test for Java 17, and make it into an automated integration test instead of a unit test

* Remove duplicate postgresql test

* Make TestContainers based database tests into automated integration tests

* Suppress some warnings when generating fat jars

* Let automatic integration tests fail when Docker is not available

* Allow explicit skipping of Snowflake integration tests

* Added tests for Snowflake, delimited text files

* Switch to fully verifying the scan results against a reference version (v0.10.7)

* Working integration test for Snowflake, and some refactorings

* Some proper logging, small code improvements and cleanup

* Remove unused interface

* Added tests, some changes to support testing

* Make automated test work reliably (way too many changes, sorry)

* Rudimentary support for Snowflake authenticator parameter (untested)

* review xmlbeans dependencies, remove conflict

* extend integration test for distribution

* Restructuring database configuration. Work in progress, but unit and integration tests all OK

* Restructuring database configuration 2/x. Still work in progress, but unit and integration tests all OK

* Restructuring database configuration 3/x. Still work in progress, but unit and integration tests all OK

* Restructuring database configuration 4/x. Still work in progress, but unit and integration tests all OK

* Restructuring database configuration 5/x. Still work in progress, but unit and integration tests all OK

* Restructuring database configuration 6/x. Still work in progress, but unit and integration tests all OK

* Restructuring database configuration 7/x. Still work in progress, but unit and integration tests all OK

* Intermezzo: get rid of the package naming error (upper case R in whiteRabbit)

* Intermezzo: code cleanup

* Snowflake is now working from the GUI. And many small refactorings, like logging instead of printing to stdout/stderr

* Refactor DbType into an enum, get rid of DBChoice

* Move DbType and DbSettings classes into configuration subpackage

* Avoid using a manually destructured DbSettings object when creating a RichConnection object

* Code cleanup, remove unneeded Snowflake references

* Refactoring, code cleanup

* More refactoring, code cleanup

* More refactoring, code cleanup and documentation

* Make sure that order of databases in pick list in GUI is the same as before, and enforce completeness of that list in a test

* Add/update copyright headers

* Add line to verify that a tooltip is shown for a DBConnectionInterface implementing class

* Test distribution for Snowflake JDBC issue with Java 17

* cleanup of build files

* Add verification that all JDBC drivers are in the distributed package

* Add/improve error reporting for Snowflake

* Disable screenshot taker in GuiTestExtension, hoping that that is what blocks the build on GitHub. Fingers crossed

* Better(?) naming for database interface and implementing class

* Use our own GUITestExtension class

---------

Co-authored-by: Jan Blom <janblom@thehyve.nl>

* Add mysql test (#38)

* Fixed a bug in the comparison for sort; let the comparison report all differences before failing

* Allow the user to specify the port for a MySQL server

* Add tests for a MySQL source database

* Add sas test (#39)

* Add automated regression tests for SAS files

* Fix problems with comparisons of test results to references

* Create bypass for value mismatch that only shows up in GitHub Actions so far

* Create bypass for value mismatch that only shows up in GitHub Actions so far, 2nd

* Pom updates to enable building on MacOS

* Prepare release (#40)

* Add warehouse/database handling to StorageHandler class

* Show stdout/stderr from distribution verification when there are errors

* Pom updates to enable building on MacOS

* Update dependencies as far as possible without code changes

* Update README.md

---------

Co-authored-by: Jan Blom <janblom@thehyve.nl>

* Update whiterabbit/src/main/java/org/ohdsi/whiterabbit/WhiteRabbitMain.java

The sample size should start disabled, as the calculateNumericStats checkbox is unchecked by default.

Co-authored-by: Maxim Moinat <maximmoinat@gmail.com>

* Fixes from windows (#41)

* Fix problems blocking verification on Windows

* Avoid using bind mounts for TestContainers, copy files instead

* Remove file copy (was for debugging purposes)

* Oracle Tests: use the actual TestContainer hostname/ip address instead of localhost

* Remove debug print statement and stale imports

* Remove commented code

---------

Co-authored-by: Jan Blom <janblom@thehyve.nl>

* Use The Hyve fork of the caciocavallo project (#42)

* Use The Hyve fork of the caciocavallo project (Swing virtual graphics environment for testing) until the parent project has been fixed for JDK 18+

* Use updated cacio-tta version, should run fine when headless

* For development, JDK versions 17-21 are supported

* Update docs (#44)

* Update documentation for Snowflake

* Add Snowflake.ini example file

* Add password field in Snowflake example

* Bump org.apache.commons:commons-compress in /rabbit-core (#47)

Bumps org.apache.commons:commons-compress from 1.25.0 to 1.26.0.

---
updated-dependencies:
- dependency-name: org.apache.commons:commons-compress
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Bump org.postgresql:postgresql from 42.7.1 to 42.7.2 in /rabbit-core (#46)

Bumps [org.postgresql:postgresql](https://github.com/pgjdbc/pgjdbc) from 42.7.1 to 42.7.2.
- [Release notes](https://github.com/pgjdbc/pgjdbc/releases)
- [Changelog](https://github.com/pgjdbc/pgjdbc/blob/master/CHANGELOG.md)
- [Commits](https://github.com/pgjdbc/pgjdbc/commits)

---
updated-dependencies:
- dependency-name: org.postgresql:postgresql
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Turn on mvn verify in github (#45)

* Turn on mvn verify in github

* Disable Oracle integration tests in GitHub workflow as they tend to generate a timeout

* Fix typo

* Avoid starting containers for Snowflake when configuration is not present

* Switch cacio back to parent project (#48)

* Use cacio-tta 1.18 instead of patched fork

* Remove license exception for The Hyve fork of cacio-tta

* Remove bigquery jars (#49)

* Initial setup for BigQuery integration test (WIP)

* Initial setup for BigQuery integration test (WIP)

* Removed BigQuery JDBC jars, added test to confirm it missing, and the way to make it work

* Make Teradata JDBC dependency a normal Maven repo dependency. Include a basic tester, and remove lib directory

* Rename Teradata test; it only tests getting table names, and does not perform a scan

* Add basic connection test for MS Sql Server

* Round up removing BigQuery and Teradata (licences do not allow redistribution). Better feedback to user, added to documentation

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Jan Blom <janblom@thehyve.nl>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Spayralbe <stefan@thehyve.nl>
Co-authored-by: Maxim Moinat <maximmoinat@gmail.com>