- Capture Reconcile metadata in delta tables for dashboards (#369). In this release, changes have been made to improve version control management, reduce repository size, and enhance build times. A new directory, `spark-warehouse/`, has been added to the Git ignore file to prevent unnecessary files from being tracked and included in the project. The `WriteToTableException` class has been added to the `exception.py` file to raise an error when a runtime exception occurs while writing data to a table. A new `ReconCapture` class has been implemented in the `reconcile` package to capture and persist reconciliation metadata in delta tables. The `recon` function has been updated to initialize this new class, passing in the required parameters. Additionally, a new file, `recon_capture.py`, has been added to the reconcile package, which implements the `ReconCapture` class responsible for capturing metadata related to data reconciliation. The `recon_config.py` file has been modified to introduce a new class, `ReconcileProcessDuration`, and restructure the classes `ReconcileOutput`, `MismatchOutput`, and `ThresholdOutput`. The commit also captures reconcile metadata in delta tables for dashboards in the context of unit tests in the `test_execute.py` file and includes a new file, `test_recon_capture.py`, to test the reconcile capture functionality of the `ReconCapture` class; a sketch of the persistence pattern follows.
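To make the persistence pattern concrete, here is a minimal sketch of how a `ReconCapture`-style class might append run metadata to a Delta table. The table name, fields, and method names are illustrative assumptions, not remorph's actual implementation.

```python
from dataclasses import dataclass, asdict
from pyspark.sql import Row, SparkSession


@dataclass
class ReconcileProcessDuration:
    """Illustrative stand-in for the duration metadata."""
    start_ts: str
    end_ts: str


class ReconCapture:
    """Sketch: persist one reconciliation run as a row in a Delta table."""

    def __init__(self, recon_id: str, spark: SparkSession,
                 table: str = "main.reconcile.runs"):  # assumed table name
        self.recon_id = recon_id
        self.spark = spark
        self.table = table

    def store(self, status: str, duration: ReconcileProcessDuration) -> None:
        row = Row(recon_id=self.recon_id, status=status, **asdict(duration))
        # Delta is the default table format on Databricks runtimes.
        df = self.spark.createDataFrame([row])
        df.write.format("delta").mode("append").saveAsTable(self.table)
```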
- Expand translation of Snowflake `expr` (#351). In this release, the translation of the `expr` category in the Snowflake language has been significantly expanded, addressing uncovered grammar areas, incorrect interpretations, and duplicates. The `subquery` is now excluded as a valid `expr`, and new case classes such as `NextValue`, `ArrayAccess`, `JsonAccess`, `Collate`, and `Iff` have been added to the `Expression` class. These changes improve the comprehensiveness and accuracy of the Snowflake parser, allowing for a more flexible and accurate translation of various operations. Additionally, the `SnowflakeExpressionBuilder` class has been updated to handle previously unsupported cases, enhancing the parser's ability to parse Snowflake SQL expressions.
- Fixed Oracle missing datatypes (#333). In the latest release, the Oracle class of the Tokenizer in the open-source library has undergone a fix to address missing datatypes. Previously, the KEYWORDS mapping lacked Token entries for several Oracle datatype keywords, which left those datatypes unsupported. This issue has been resolved, and the `test_schema_compare.py` file has been updated to ensure that all Oracle datatypes, including `LONG`, `NCLOB`, `ROWID`, `UROWID`, `ANYTYPE`, `ANYDATA`, `ANYDATASET`, `XMLTYPE`, `SDO_GEOMETRY`, `SDO_TOPO_GEOMETRY`, and `SDO_GEORASTER`, are now mapped to the `TEXT` TokenType. This improvement enhances the compatibility of the code with Oracle datatypes and increases the reliability of the schema comparison functionality, as demonstrated by the test function `test_schema_compare`, which now returns `is_valid` as `True` and a count of 0 for `is_valid = false` in the resulting dataframe; the sketch below shows the general tokenizer pattern.
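For context, this is the general shape of such a fix in a sqlglot-style dialect: the tokenizer's `KEYWORDS` map is extended so Oracle-specific datatype keywords tokenize as `TEXT`. A minimal sketch against sqlglot's public API; remorph's actual change may differ in placement and coverage.

```python
from sqlglot.dialects.oracle import Oracle
from sqlglot.tokens import TokenType


class CustomOracle(Oracle):
    """Oracle dialect whose tokenizer recognizes the extra datatypes."""

    class Tokenizer(Oracle.Tokenizer):
        KEYWORDS = {
            **Oracle.Tokenizer.KEYWORDS,
            # Map Oracle-only datatypes to TEXT so tokenizing cannot fail.
            "LONG": TokenType.TEXT,
            "NCLOB": TokenType.TEXT,
            "ROWID": TokenType.TEXT,
            "UROWID": TokenType.TEXT,
            "XMLTYPE": TokenType.TEXT,
            "SDO_GEOMETRY": TokenType.TEXT,
        }
```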
- Fixed the recon_config functions to handle null values (#399). In this release, the recon_config functions have been enhanced to manage null values and provide more flexible column mapping for reconciliation purposes. A `__post_init__` method has been added to certain classes to convert specified attributes to lowercase and handle null values. A new helper method, `_get_is_string`, has been introduced to determine if a column is of string type. Additionally, new functions such as `get_tgt_to_src_col_mapping_list`, `get_layer_tgt_to_src_col_mapping`, `get_src_to_tgt_col_mapping_list`, and `get_layer_src_to_tgt_col_mapping` have been added to retrieve column mappings, enhancing the overall functionality and robustness of the reconciliation process. These improvements will benefit software engineers by ensuring more accurate and reliable configuration handling, as well as providing more flexibility in mapping source and target columns during reconciliation; the sketch below illustrates the `__post_init__` pattern.
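A minimal sketch of the `__post_init__` normalization pattern described above, using a hypothetical mapping class; the real recon_config classes carry more attributes.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ColumnMapping:
    """Sketch: normalize casing and tolerate nulls at construction time."""
    source_name: str
    target_name: Optional[str] = None

    def __post_init__(self) -> None:
        # Lowercase so later comparisons are case-insensitive.
        self.source_name = self.source_name.lower()
        # A missing target falls back to the source name instead of None.
        self.target_name = (self.target_name or self.source_name).lower()


mapping = ColumnMapping(source_name="CUST_ID")
assert mapping.target_name == "cust_id"
```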
- Improve Exception handling (#392). The commit titled `Improve Exception Handling` enhances error handling in the project, addressing issues #388 and #392. Changes include refactoring the `create_adapter` method in the `DataSourceAdapter` class, updating method arguments in test functions, and adding new methods in the `test_execute.py` file for better test doubles. The `DataSourceAdapter` class is replaced with the `create_adapter` function, which takes the same arguments and returns an instance of the appropriate `DataSource` subclass based on the provided `engine` parameter. The diff also modifies the behavior of certain test methods to raise more specific and accurate exceptions. Overall, these changes improve exception handling, streamline the codebase, and provide clearer error messages for software engineers.
- Introduced morph_sql and morph_column_expr functions for inline transpilation and validation (#328). Two new classes, `TranspilationResult` and `ValidationResult`, have been added to the config module of the remorph package to store the results of transpilation and validation. The `morph_sql` and `morph_column_expr` functions have been introduced to support inline transpilation and validation of SQL code and column expressions. A new class, `Validator`, has been added to the validation module to handle validation, and the `validate_format_result` method within this class has been updated to return a `ValidationResult` object. The `_query` method has also been added to the class, which executes a given SQL query and returns a tuple containing a boolean indicating success, any exception message, and the result of the query. Unit tests for these new functions have been updated to ensure proper functionality; a sketch of the result objects follows.
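To illustrate the result-object pattern these functions return, here is a minimal sketch; the field names are assumptions based on the description above, not the exact remorph definitions.

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional, Tuple


@dataclass
class TranspilationResult:
    """Sketch: what inline transpilation hands back to the caller."""
    transpiled_sql: list
    parse_error_count: int = 0


@dataclass
class ValidationResult:
    """Sketch: the outcome of validating a transpiled statement."""
    validated_sql: str
    exception_msg: Optional[str] = None


def run_query(executor: Callable[[str], Any],
              sql: str) -> Tuple[bool, Optional[str], Optional[Any]]:
    """Return the (ok, error_message, result) triple the `_query` method
    is described as producing."""
    try:
        return True, None, executor(sql)
    except Exception as e:
        return False, str(e), None


ok, err, rows = run_query(lambda sql: [(1,)], "SELECT 1")
assert ok and err is None and rows == [(1,)]
```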
- Output for the reconcile function (#389). A new function `get_key_form_dialect` has been added to the `config.py` module, which takes a `Dialect` object and returns the corresponding key used in the `SQLGLOT_DIALECTS` dictionary. Additionally, the `MorphConfig` dataclass has been updated to include a new attribute, `__file__`, which sets the filename to `config.yml`. The `get_dialect` function remains unchanged. Two new exceptions, `WriteToTableException` and `InvalidInputException`, have been introduced, and the existing `DataSourceRuntimeException` has been modified in the same module to improve error handling. The `execute.py` file's reconcile function has undergone several changes, including adding imports for `InvalidInputException`, `ReconCapture`, and `generate_final_reconcile_output` from the `recon_exception` and `recon_capture` modules, and modifying the `ReconcileOutput` type. The `hash_query.py` file's reconcile function has been updated to include a new `_get_with_clause` method, which returns a `Select` object for a given DataFrame, and the `build_query` method has been updated to include a new query construction step using the `with_clause` object. The `threshold_query.py` file's reconcile function's output has been updated to include query and logger statements, a new method allowing user transformations on threshold aliases, and the dialect specified in the sql method. A new `generate_final_reconcile_output` function has been added to the `recon_capture.py` file, which generates a reconcile output given a recon_id and a SparkSession. New classes and dataclasses, including `SchemaReconcileOutput`, `ReconcileProcessDuration`, `StatusOutput`, `ReconcileTableOutput`, and `ReconcileOutput`, have been introduced in the `reconcile/recon_config.py` file. The `tests/unit/reconcile/test_execute.py` file has been updated to include new test cases for the `recon` function, including tests for different report types and scenarios, such as data, schema, and all report types, exceptions, and incorrect report types. A new test case, `test_initialise_data_source`, has been added to test the `initialise_data_source` function, and the `test_recon_for_wrong_report_type` test case has been updated to expect an `InvalidInputException` when an incorrect report type is passed to the `recon` function. The `test_reconcile_data_with_threshold_and_row_report_type` test case has been added to test the `reconcile_data` method of the `Reconciliation` class with a row report type and threshold options. Overall, these changes improve the functionality and robustness of the reconcile process by providing more fine-grained control over the generation of the final reconcile output and better handling of exceptions and errors; a sketch of the dialect reverse lookup follows.
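The `get_key_form_dialect` lookup described above is essentially a reverse dictionary search; a minimal sketch under that assumption, with an illustrative stand-in for the dictionary contents:

```python
SQLGLOT_DIALECTS = {  # illustrative subset of the real mapping
    "snowflake": "Snowflake",
    "oracle": "Oracle",
}


def get_key_form_dialect(dialect) -> str:
    """Return the SQLGLOT_DIALECTS key whose value matches `dialect`."""
    for key, value in SQLGLOT_DIALECTS.items():
        if value == dialect:
            return key
    raise ValueError(f"Unknown dialect: {dialect!r}")


assert get_key_form_dialect("Oracle") == "oracle"
```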
- Threshold Source and Target query builder (#348). In this release, we've introduced a new method, `build_threshold_query`, that constructs a customizable threshold query based on a table's partition, join, and threshold columns configuration. The method identifies the necessary columns, applies specified transformations, and includes a WHERE clause based on the filter defined in the table configuration. The resulting query is then converted to a SQL string using the dialect of the source database. Additionally, we've updated the test file for the threshold query builder in the reconcile package, including refactoring of function names and updated assertions for query comparison. We've added two new test methods: `test_build_threshold_query_with_single_threshold` and `test_build_threshold_query_with_multiple_thresholds`. These changes enhance the library's functionality, providing a more robust and customizable threshold query builder, and improve test coverage for various configurations and scenarios.
- Unpack nested alias (#336). This release introduces a significant update to the `lca_utils.py` file, addressing the limitation of not handling nested aliases in window expressions and where clauses, which resolves issue #334. The `unalias_lca_in_select` method has been implemented to recursively parse nested selects and unalias lateral column aliases, thereby identifying and handling unsupported lateral column aliases. This method is utilized in the `check_for_unsupported_lca` method to handle unsupported lateral column aliases in the input SQL string. Furthermore, the `test_lca_utils.py` file has undergone changes, impacting several test functions and introducing two new ones, `test_fix_nested_lca` and `test_fix_nested_lca_with_no_scope`, to ensure the code's reliability and accuracy. These updates improve the library's functionality and test coverage; the sketch below shows the recursive-walk idea.
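To illustrate the recursive idea (not remorph's exact implementation), here is a sketch built on sqlglot's expression tree, which the project uses: it walks every SELECT, including nested ones, and reports aliases that are reused as columns in the same scope.

```python
import sqlglot
from sqlglot import expressions as exp


def find_lateral_column_aliases(sql: str, dialect: str = "snowflake"):
    """Sketch: aliases defined in a SELECT and referenced again as columns."""
    tree = sqlglot.parse_one(sql, read=dialect)
    found = []
    for select in tree.find_all(exp.Select):  # covers nested selects too
        aliases = {e.alias for e in select.expressions
                   if isinstance(e, exp.Alias)}
        for column in select.find_all(exp.Column):
            if column.name in aliases:
                found.append(column.name)
    return found


print(find_lateral_column_aliases("SELECT a + 1 AS b, b * 2 AS c FROM t"))
# ['b']
```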
- Added `Configure Secrets` support to `databricks labs remorph configure-secrets` CLI command (#254). The `Configure Secrets` feature has been implemented in the `databricks labs remorph` CLI, specifically for the new `configure-secrets` command. This addition allows users to establish scopes and secrets within their Databricks Workspace, enhancing security and control over resource access. The implementation includes a new `recon_config_utils.py` file in the `databricks/labs/remorph/helpers` directory, which contains classes and methods for managing Databricks Workspace secrets. Furthermore, the `ReconConfigPrompts` helper class has been updated to handle prompts for selecting sources, entering secret scope names, and handling overwrites. The CLI command has also been updated with a new `configure_secrets` function and corresponding tests to ensure correct functionality; a sketch of the underlying SDK calls follows.
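Under the hood, this kind of feature rests on the Databricks SDK's secrets API; a minimal sketch with illustrative scope and key names (authentication is taken from the environment):

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.errors import ResourceAlreadyExists


def configure_secret(scope: str, key: str, value: str) -> None:
    """Create the scope if needed, then store the secret in it."""
    ws = WorkspaceClient()
    try:
        ws.secrets.create_scope(scope)
    except ResourceAlreadyExists:
        pass  # reuse the existing scope
    ws.secrets.put_secret(scope, key, string_value=value)


configure_secret("remorph_snowflake", "sfPassword", "****")
```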
- Added handling for invalid alias usage by manipulating the AST (#219). The recent commit addresses the issue of invalid alias usage in SQL queries by manipulating the Abstract Syntax Tree (AST). It introduces a new method, `unalias_lca_in_select`, which unaliases Lateral Column Aliases (LCA) in the SELECT clause of a query. The `AliasInfo` class is added to manage aliases more effectively, with attributes for the name, the expression, and a flag indicating whether the alias name is the same as a column name. Additionally, the `execute.py` file is modified to check for unsupported LCA using the `lca_utils.check_for_unsupported_lca` method, improving the system's robustness when handling invalid aliases. Test cases are also added in the new file, `test_lca_utils.py`, to validate the behavior of the `check_for_unsupported_lca` function, ensuring that SQL queries are correctly formatted for the Snowflake dialect and avoiding errors due to invalid alias usage.
- Added support for `databricks labs remorph generate-lineage` CLI command (#238). A new CLI command, `databricks labs remorph generate-lineage`, has been added to generate lineage for input SQL files, taking the source dialect, input, and output directories as arguments. The command uses existing logic to generate a directed acyclic graph (DAG) and then creates a DOT file in the output directory using the DAG. The new command is supported by the new functions `_generate_dot_file_contents` and `lineage_generator`, and by methods in the `RootTableIdentifier` and `DAG` classes. The command has been manually tested and includes unit tests, with plans for adding integration tests in the future. The commit also includes a new method, `temp_dirs_for_lineage`, and updates to the `configure_secrets_databricks` method to handle a new source type, "databricks". The command handles invalid input and raises appropriate exceptions; the sketch below shows the DOT-generation step.
- Custom Oracle tokenizer (#316). In this release, the remorph library has been updated to enhance its handling of Oracle databases. A custom Oracle tokenizer has been developed to map the `LONG` datatype to text (string) in the tokenizer, allowing for more precise parsing and manipulation of `LONG` columns in Oracle databases. The Oracle dialect in the configuration file has also been updated to utilize the new custom Oracle tokenizer. Additionally, the Oracle class from the snow module has been imported and integrated into the Oracle dialect. These improvements enable the remorph library to manage Oracle databases more efficiently, with a particular focus on improving the handling of the `LONG` datatype. The commit also includes updates to test files in the `functional/oracle/test_long_datatype` directory, which ensure the proper conversion of the `LONG` datatype to text. Furthermore, a new test file has been added to the `tests/unit/snow` directory, which checks for compatibility with Oracle's long data type. These changes enhance the library's compatibility with Oracle databases, ensuring accurate handling and manipulation of the `LONG` datatype in both Oracle SQL and Databricks SQL (see the tokenizer sketch under #333 above for the general pattern).
- Removed strict source dialect checks (#284). In the latest release, the `transpile` and `generate_lineage` functions in `cli.py` have undergone changes to allow for greater flexibility in source dialect selection. Previously, only the `snowflake` and `tsql` dialects were supported, but now any source dialect supported by SQLGLOT can be used, controlled by the `SQLGLOT_DIALECTS` dictionary. Providing an unsupported source dialect will result in a validation error. Additionally, the input and output folder paths for the `generate_lineage` function are now validated against the file system to ensure their existence and validity. In the `install.py` file of the `databricks/labs/remorph` package, the source dialect selection has been updated to use `SQLGLOT_DIALECTS.keys()`, replacing the previous hardcoded list. This change allows for more flexibility in selecting the source dialect. Furthermore, recent updates to various test functions in the `test_install.py` file suggest that the source selection process has been modified, possibly indicating the addition of new sources or a change in source identification. These modifications provide greater flexibility in testing and potentially in the actual application; a sketch of the validation pattern follows.
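The dialect check reduces to membership in that dictionary; a minimal sketch, with an illustrative subset standing in for `SQLGLOT_DIALECTS`:

```python
SQLGLOT_DIALECTS = {"snowflake": ..., "tsql": ..., "oracle": ...}  # subset


def validate_source_dialect(source: str) -> str:
    """Accept any key known to SQLGLOT_DIALECTS; fail fast otherwise."""
    if source not in SQLGLOT_DIALECTS:
        supported = ", ".join(sorted(SQLGLOT_DIALECTS))
        raise ValueError(
            f"Unsupported source dialect {source!r}; expected one of: {supported}"
        )
    return source


validate_source_dialect("oracle")      # passes
# validate_source_dialect("postgres")  # raises ValueError
```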
- Set Catalog, Schema from default Config (#312). A new feature has been added to our open-source library that allows users to specify the `catalog` and `schema` configuration options as part of the `transpile` command-line interface (CLI). If these options are not provided, the `transpile` function in the `cli.py` file will now set them to the values specified in `default_config`. This ensures that a default catalog and schema are used if they are not explicitly set by the user. The `labs.yml` file has been updated to reflect these changes, with the addition of the `catalog-name` and `schema-name` options to the `commands` object. The `default` property of the validation flag has also been updated to `true`, indicating that the validation step is skipped by default. These changes provide increased flexibility and ease of use for users of the `transpile` functionality.
- Support for Null safe equality join for databricks generator (#280). In this release, we have implemented support for a null-safe equality join in the Databricks generator, addressing issue #280. This feature introduces the use of the `<=>` operator in the generated SQL code instead of the `is not distinct from` syntax to ensure accurate comparisons when NULL values are present in the columns being joined. The Generator class has been updated with a new method, `NullSafeEQ`, which takes in an expression and returns the binary version of the expression using the `<=>` operator. The preprocess method in the Generator class has also been modified to include this new functionality. Note that this change may require users to update their existing code to align with the new syntax in the Databricks environment. With this enhancement, the Databricks generator can now perform null-safe equality joins, producing consistent results regardless of the presence of NULL values in the join conditions; a sketch of such a generator override follows.
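In a sqlglot-style generator, emitting a binary operator is a one-line override; a hedged sketch of what a null-safe equality override could look like (remorph's actual Databricks generator may differ):

```python
from sqlglot import exp, parse_one
from sqlglot.dialects.databricks import Databricks


class NullSafeDatabricks(Databricks):
    """Sketch: render null-safe equality as `<=>`."""

    class Generator(Databricks.Generator):
        def nullsafeeq_sql(self, expression: exp.NullSafeEQ) -> str:
            # Emit `a <=> b` instead of `a IS NOT DISTINCT FROM b`.
            return self.binary(expression, "<=>")


tree = parse_one("SELECT * FROM a JOIN b ON a.id IS NOT DISTINCT FROM b.id")
print(tree.sql(dialect=NullSafeDatabricks))
```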
- Added serverless validation using lsql library (#176). A `WorkspaceClient` object is now created with the `product` name and `product_version`, and the corresponding `cluster_id` or `warehouse_id` is passed as `sdk_config` in the `MorphConfig` object; a sketch of this wiring follows.
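A minimal sketch of that wiring, with illustrative IDs; the `MorphConfig` usage shown in the comment is an assumption based on the description above.

```python
from databricks.sdk import WorkspaceClient

# Auth is resolved from the environment (profile, env vars, or notebook).
ws = WorkspaceClient(product="remorph", product_version="0.x")
sdk_config = {"warehouse_id": "abc123"}  # or {"cluster_id": "..."}
# config = MorphConfig(sdk_config=sdk_config, ...)  # hypothetical usage
```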
- Enhanced install script to enforce usage of a warehouse or cluster when `skip-validation` is set to `False` (#213). In this release, the installation process has been enhanced to mandate the use of a warehouse or cluster when the `skip-validation` parameter is set to `False`. This change has been implemented across various components, including the install script and the `transpile` and `get_sql_backend` functions. Additionally, new pytest fixtures and methods have been added to improve test configuration and resource management during testing. Unit tests have been updated to enforce usage of a warehouse or cluster when the `skip-validation` flag is set to `False`, ensuring proper resource allocation and improving the validation process. This development focuses on promoting a proper setup and usage of the system, guiding new users towards a correct configuration and improving the overall reliability of the tool.
- Patch subquery with json column access (#190). The open-source library has been updated with new functionality to modify how subqueries with JSON column access are handled in the `snowflake.py` file. This change includes the addition of a check for an opening parenthesis after the `FROM` keyword to detect and break loops when a subquery is found, as opposed to a table name. This improvement enhances the handling of complex subqueries and JSON column access, making the code more robust and adaptable to different query structures. Additionally, a new test method, `test_nested_query_with_json`, has been introduced in the `tests/unit/snow/test_databricks.py` file to test the behavior of nested queries involving JSON column access when using a Snowflake dialect. This new method validates the expected output of a specific nested query when it is transpiled to Snowflake's SQL dialect, allowing for more comprehensive testing of JSON column access and type casting in Snowflake dialects. The existing `test_delete_from_keyword` method remains unchanged.
- Snowflake `UPDATE FROM` to Databricks `MERGE INTO` implementation (#198).
- Use Runtime SQL backend in Notebooks (#211). In this update, the `db_sql.py` file in the `databricks/labs/remorph/helpers` directory has been modified to support the use of the Runtime SQL backend in Notebooks. This change includes the addition of a new `RuntimeBackend` class in the `backends` module and an import statement for `os`. The `get_sql_backend` function now returns a `RuntimeBackend` instance when the `DATABRICKS_RUNTIME_VERSION` environment variable is present, allowing for more efficient and secure SQL statement execution in Databricks notebooks. Additionally, a new test case for the `get_sql_backend` function has been added to ensure the correct behavior of the function in various runtime environments. These enhancements improve SQL execution performance and security in Databricks notebooks and increase the project's versatility for different use cases; a sketch of the backend selection follows.
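The selection logic amounts to an environment check; a minimal sketch using the lsql backends (the warehouse fallback and its arguments are assumptions):

```python
import os

from databricks.labs.lsql.backends import (
    RuntimeBackend,
    SqlBackend,
    StatementExecutionBackend,
)


def get_sql_backend(ws, warehouse_id: str) -> SqlBackend:
    """Prefer the in-notebook runtime; fall back to a SQL warehouse."""
    if "DATABRICKS_RUNTIME_VERSION" in os.environ:
        return RuntimeBackend()  # executes on the notebook's own cluster
    return StatementExecutionBackend(ws, warehouse_id)
```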
- Added Issue Templates for bugs, feature and config (#194). Two new issue templates have been added to the project's GitHub repository to improve issue creation and management. The first template, located in `.github/ISSUE_TEMPLATE/bug.yml`, is for reporting bugs and prompts users to provide detailed information about the issue, including the current and expected behavior, steps to reproduce, relevant log output, and a sample query. The second template, added under the path `.github/ISSUE_TEMPLATE/config.yml`, is for configuration-related issues and includes support contact links for general Databricks questions and Remorph documentation, as well as fields for specifying the operating system and software version. A new issue template for feature requests, named "Feature Request", has also been added, providing a structured format for users to submit requests for new functionality for the Remorph project. These templates will help streamline the issue creation process, improve the quality of information provided, and make it easier for the development team to quickly identify and address bugs and feature requests.
- Added Databricks Source Adapter (#185). In this release, the project has been enhanced with several new features for the Databricks Source Adapter. A new `engine` parameter has been added to the `DataSource` class, replacing the original `source` parameter. The `_get_secrets` and `_get_table_or_query` methods have been updated to use the `engine` parameter for key naming and to handle queries containing a `select` statement differently. A Databricks Source Adapter for Oracle databases has been introduced, which includes a new `OracleDataSource` class that provides functionality to connect to an Oracle database using JDBC. A Databricks Source Adapter for Snowflake has also been added, featuring the `SnowflakeDataSource` class that handles data reading and schema retrieval from Snowflake. The `DatabricksDataSource` class has been updated to handle data reading and schema retrieval from Databricks, including a new `get_schema_query` method that generates the query to fetch the schema based on the provided catalog and table name. Exception handling for reading data and fetching schema has been implemented for all new classes. These changes provide increased flexibility for working with various data sources, improved code maintainability, and better support for different use cases; a sketch of the schema-query construction follows.
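A `get_schema_query`-style helper typically just templates an information-schema lookup; a hedged sketch (the column list and the legacy-metastore branch are assumptions):

```python
def get_schema_query(catalog: str, schema: str, table: str) -> str:
    """Sketch: build the query that fetches a table's column schema."""
    if catalog and catalog != "hive_metastore":
        return (
            f"SELECT column_name, data_type "
            f"FROM {catalog}.information_schema.columns "
            f"WHERE lower(table_schema) = '{schema.lower()}' "
            f"AND lower(table_name) = '{table.lower()}' "
            f"ORDER BY ordinal_position"
        )
    # The legacy metastore has no information_schema; describe instead.
    return f"DESCRIBE TABLE {schema}.{table}"


print(get_schema_query("main", "sales", "orders"))
```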
- Added Threshold Query Builder (#188). In this release, the open-source library has added a Threshold Query Builder feature, which includes several changes to the existing functionality in the data source connector. A new import statement adds the `re` module for regular expressions, and new parameters have been added to the `read_data` and `get_schema` abstract methods. The `_get_jdbc_reader_options` method has been updated to accept an `options` parameter of type `JdbcReaderOptions`, and a new static method, `_get_table_or_query`, has been added to construct the table or query string based on the provided parameters. Additionally, a new class, `QueryConfig`, has been introduced in the `databricks.labs.remorph.reconcile` package to configure queries for data reconciliation tasks. A new abstract base class, `QueryBuilder`, has been added to the `query_builder.py` file, along with `HashQueryBuilder` and `ThresholdQueryBuilder` classes to construct SQL queries for generating hash values and selecting columns based on threshold values, transformation rules, and filtering conditions. These changes aim to enhance the functionality of the data source connector, add modularity, customizability, and reusability to the query builder, and improve data reconciliation tasks; the sketch below shows the abstract-base-class shape.
- Added snowflake connector code (#177). In this release, the open-source library has been updated to add a Snowflake connector for data extraction and schema manipulation. The changes include the addition of the `SnowflakeDataSource` class, which is used to read data from Snowflake using PySpark, and has methods for getting the JDBC URL, reading data with and without JDBC reader options, getting the schema, and handling exceptions. A new constant, `SNOWFLAKE`, has been added to the `SourceDriver` enum in `constants.py`, which represents the Snowflake JDBC driver class. The code modifications include updating the constructor of the `DataSource` abstract base class to include a new parameter, `scope`, and updating the `_get_secrets` method to accept a `key_name` parameter instead of `key`. Additionally, a test file, `test_snowflake.py`, has been added to test the functionality of the `SnowflakeDataSource` class. This release also updates the `pyproject.toml` file to version-lock dependencies like black, ruff, and isort, and modifies the coverage report configuration to exclude certain files and lines from coverage checks. These changes were completed by Ravikumar Thangaraj and SundarShankar89; a sketch of the PySpark JDBC read pattern appears after the next item.
- `remorph reconcile` baseline for Query Builder and Source Adapter for oracle as source (#150).
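A hedged sketch of the PySpark JDBC read pattern such a `SnowflakeDataSource` relies on; the URL and options are illustrative, while the driver class name is Snowflake's standard JDBC driver:

```python
from pyspark.sql import DataFrame, SparkSession

SNOWFLAKE_DRIVER = "net.snowflake.client.jdbc.SnowflakeDriver"


def read_snowflake(spark: SparkSession, url: str, query: str,
                   options: dict = None) -> DataFrame:
    """Read the result of `query` from Snowflake over JDBC."""
    reader = (spark.read.format("jdbc")
              .option("url", url)
              .option("driver", SNOWFLAKE_DRIVER)
              .option("query", query))
    for key, value in (options or {}).items():
        reader = reader.option(key, value)  # e.g. fetchsize, numPartitions
    return reader.load()
```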
Dependency updates:
- Bump sqlglot from 22.4.0 to 22.5.0 (#175).
- Updated databricks-sdk requirement from <0.22,>=0.18 to >=0.18,<0.23 (#178).
- Updated databricks-sdk requirement from <0.23,>=0.18 to >=0.18,<0.24 (#189).
- Bump actions/checkout from 3 to 4 (#203).
- Bump actions/setup-python from 4 to 5 (#201).
- Bump codecov/codecov-action from 1 to 4 (#202).
- Bump softprops/action-gh-release from 1 to 2 (#204).
- Added Pylint Checker (#149). This diff adds a Pylint checker to the project, which is used to enforce a consistent code style, identify potential bugs, and check for errors in the Python code. The configuration for Pylint includes various settings, such as a line length limit, the maximum number of arguments for a function, and the maximum number of lines in a module. Additionally, several plugins have been specified to load, which add additional checks and features to Pylint. The configuration also includes settings that customize the behavior of Pylint's naming conventions checks and handle various types of code constructs, such as exceptions, logging statements, and import statements. By using Pylint, the project can help ensure that its code is of high quality, easy to understand, and free of bugs. This diff includes changes to various files, such as cli.py, morph_status.py, validate.py, and several SQL-related files, to ensure that they adhere to the desired Pylint configuration and best practices for code quality and organization.
- Fixed edge case where column name is same as alias name (#164). A recent commit has introduced fixes for edge cases related to conflicts between column names and alias names in SQL queries, addressing issues #164 and #130. The `check_for_unsupported_lca` function has been updated with two helper functions, `_find_aliases_in_select` and `_find_invalid_lca_in_window`, to detect aliases with the same name as a column in a SELECT expression and to identify invalid lateral column aliases (LCAs) in window functions, respectively. The `find_windows_in_select` function has been refactored and renamed to `_find_windows_in_select` for improved code readability. The `transpile` and `parse` functions in the `sql_transpiler.py` file have been updated with try-except blocks to handle cases where a column name matches the alias name, preventing errors or exceptions such as `ParseError`, `TokenError`, and `UnsupportedError`. A new unit test, `test_query_with_same_alias_and_column_name`, has been added to verify the fix, passing a SQL query with a subquery having a column alias `ca_zip` that is also used as a column name in the same query, confirming that the function correctly handles the scenario where a column name conflicts with an alias name.
- `TO_NUMBER` without `format` edge case (#172). The `TO_NUMBER without format edge case` commit introduces changes to address an unsupported usage of the `TO_NUMBER` function in the Databricks SQL dialect when the `format` parameter is not provided. The new implementation introduces the constants `PRECISION_CONST` and `SCALE_CONST` (set to 38 and 0, respectively) as default values for the `precision` and `scale` parameters. These changes ensure Databricks SQL dialect requirements are met by modifying the `_to_number` method to incorporate these constants. An `UnsupportedError` will now be raised when `TO_NUMBER` is called without a `format` parameter, improving error handling and ensuring users are aware of the required `format` parameter. Test cases have been added for the `TO_DECIMAL`, `TO_NUMERIC`, and `TO_NUMBER` functions with format strings, covering cases where the format is taken from table columns. The commit also ensures that an error is raised when `TO_DECIMAL` is called without a format parameter; a sketch of the default-precision handling follows.
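A hedged sketch of the behavior described above: fall back to `DECIMAL(38, 0)` where a default is permitted, and raise when `TO_NUMBER` itself lacks a format (the helper is illustrative, not remorph's exact `_to_number`):

```python
PRECISION_CONST = 38
SCALE_CONST = 0


class UnsupportedError(Exception):
    """Raised when a conversion cannot be expressed without a format."""


def to_number_sql(value: str, fmt: str = None, strict: bool = True) -> str:
    """Render a Snowflake TO_NUMBER-style call for Databricks SQL."""
    if fmt is not None:
        return f"TO_NUMBER({value}, {fmt})"
    if strict:
        raise UnsupportedError("TO_NUMBER requires a format parameter")
    # TO_DECIMAL / TO_NUMERIC fall back to the documented defaults.
    return f"CAST({value} AS DECIMAL({PRECISION_CONST}, {SCALE_CONST}))"


print(to_number_sql("col1", strict=False))  # CAST(col1 AS DECIMAL(38, 0))
```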
Dependency updates:
- Bump sqlglot from 21.2.1 to 22.0.1 (#152).
- Bump sqlglot from 22.0.1 to 22.1.1 (#159).
- Updated databricks-labs-blueprint[yaml] requirement from ~=0.2.3 to >=0.2.3,<0.4.0 (#162).
- Bump sqlglot from 22.1.1 to 22.2.0 (#161).
- Bump sqlglot from 22.2.0 to 22.2.1 (#163).
- Updated databricks-sdk requirement from <0.21,>=0.18 to >=0.18,<0.22 (#168).
- Bump sqlglot from 22.2.1 to 22.3.1 (#170).
- Updated databricks-labs-blueprint[yaml] requirement from <0.4.0,>=0.2.3 to >=0.2.3,<0.5.0 (#171).
- Bump sqlglot from 22.3.1 to 22.4.0 (#173).
- Added conversion logic for Try_to_Decimal without format (#142).
- Identify Root Table for folder containing SQLs (#124).
- Install Script (#106).
- Integration Test Suite (#145).
Dependency updates:
- Updated databricks-sdk requirement from <0.20,>=0.18 to >=0.18,<0.21 (#143).
- Bump sqlglot from 21.0.0 to 21.1.2 (#137).
- Bump sqlglot from 21.1.2 to 21.2.0 (#147).
- Bump sqlglot from 21.2.0 to 21.2.1 (#148).
- Added support for WITHIN GROUP for ARRAY_AGG and LISTAGG functions (#133).
- Fixed Merge "INTO" for delete from syntax (#129).
- Fixed `DATE TRUNC` parse errors (#131).
- Patched Logger function call during wheel file (#135).
- Patched extra call to root path (#126).
Dependency updates:
- Updated databricks-sdk requirement from ~=0.18.0 to >=0.18,<0.20 (#134).
- Added `test_approx_percentile` and `test_trunc` test cases (#98).
- Updated contributing/developer guide (#97).
- Added baseline for Databricks CLI frontend (#60).
- Added custom Databricks dialect test cases and lateral struct parsing (#77).
- Extended Snowflake to Databricks functions coverage (#72, #69).
- Added `databricks labs remorph transpile` documentation for installation and usage (#73).
Dependency updates:
- Bump sqlglot from 20.8.0 to 20.9.0 (#83).
- Updated databricks-sdk requirement from ~=0.17.0 to ~=0.18.0 (#90).
- Bump sqlglot from 20.9.0 to 20.10.0 (#91).
Initial commit