Improve SqlTypeName to support more types and also improve error handling #824

Conversation
Codecov Report

```
@@            Coverage Diff             @@
##             main     #824      +/-   ##
==========================================
- Coverage   74.88%   74.86%   -0.03%
==========================================
  Files          71       71
  Lines        3588     3588
  Branches      748      748
==========================================
- Hits         2687     2686       -1
+ Misses        771      768       -3
- Partials      130      134       +4
```
```rust
"BINARY" => Ok(SqlTypeName::BINARY),
"VARBINARY" => Ok(SqlTypeName::VARBINARY),
"CHAR" => Ok(SqlTypeName::CHAR),
"VARCHAR" | "STRING" => Ok(SqlTypeName::VARCHAR),
```
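The string-to-type mapping being reviewed can be sketched as a minimal standalone function. This is a hedged sketch: the enum here is a tiny hypothetical subset, and the real `SqlTypeName` in dask-sql has many more variants and a different error type.

```rust
// Hedged sketch of the mapping under review. The real SqlTypeName
// enum has many more variants; this subset just shows the shape.
#[derive(Debug, PartialEq)]
enum SqlTypeName {
    BINARY,
    VARBINARY,
    CHAR,
    VARCHAR,
}

fn from_string(s: &str) -> Result<SqlTypeName, String> {
    match s {
        "BINARY" => Ok(SqlTypeName::BINARY),
        "VARBINARY" => Ok(SqlTypeName::VARBINARY),
        "CHAR" => Ok(SqlTypeName::CHAR),
        // "STRING" is accepted as an alias for VARCHAR
        "VARCHAR" | "STRING" => Ok(SqlTypeName::VARCHAR),
        // Unknown names now yield an Err instead of panicking
        _ => Err(format!("Cannot determine SQL type name for '{s}'")),
    }
}
```

Note that a plain exact-string match like this still rejects parameterized forms such as `VARCHAR(65535)`, which is the failure reported below.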
I've just tried these changes with my own tests.
I can now use "string" types, but varchars w/ a defined limit still fail:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/opt/conda/envs/rapids/lib/python3.9/site-packages/dask_sql/context.py", line 238, in create_table
    dc = InputUtil.to_dc(
  File "/opt/conda/envs/rapids/lib/python3.9/site-packages/dask_sql/input_utils/convert.py", line 68, in to_dc
    table = filled_get_dask_dataframe(input_item)
  File "/opt/conda/envs/rapids/lib/python3.9/site-packages/dask_sql/input_utils/convert.py", line 57, in <lambda>
    filled_get_dask_dataframe = lambda *args: cls._get_dask_dataframe(
  File "/opt/conda/envs/rapids/lib/python3.9/site-packages/dask_sql/input_utils/convert.py", line 90, in _get_dask_dataframe
    return plugin.to_dc(
  File "/opt/conda/envs/rapids/lib/python3.9/site-packages/dask_sql/input_utils/hive.py", line 69, in to_dc
    column_information = {
  File "/opt/conda/envs/rapids/lib/python3.9/site-packages/dask_sql/input_utils/hive.py", line 70, in <dictcomp>
    col: sql_to_python_type(SqlTypeName.fromString(col_type.upper()))
RuntimeError: Internal("Cannot determine SQL type name for 'VARCHAR(65535)'")
```
I have now pushed changes to support VARCHAR(n) and some other parameterized types.
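The core of handling a parameterized type is reducing something like `VARCHAR(65535)` or `DECIMAL(10, 2)` to its base name before matching. The PR description says the actual fix uses the SQL parser; the helper below is only a simplified string-based sketch of the idea, and `base_type_name` is a hypothetical name, not the function in the PR.

```rust
// Hedged sketch: strip the parenthesized parameters from a SQL
// type string so that "VARCHAR(65535)" matches like "VARCHAR".
// The real fix in this PR uses the SQL parser rather than string
// slicing, which also lets it recover n, p, and s.
fn base_type_name(s: &str) -> &str {
    match s.find('(') {
        // Keep everything before the '(' and drop trailing spaces,
        // so "VARCHAR (20)" also normalizes to "VARCHAR".
        Some(idx) => s[..idx].trim_end(),
        None => s,
    }
}
```

A parser-based approach is preferable in practice because it validates the parameters (e.g. that `DECIMAL(p, s)` has numeric precision and scale) instead of silently discarding them.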
TIL that you cannot reference PyMethods from Rust unit tests (at least, not without additional environment changes). Tests should now be passing.
- Improved `SqlTypeName::from_string()` to support more types, using the SQL parser to parse parameterized types such as `VARCHAR(n)` and `DECIMAL(p, s)`
- Replaced `todo!` and `unimplemented!` with `Err` and made corresponding changes in call sites
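The second change swaps panicking macros for recoverable errors. As a hedged illustration of that pattern (the function and type names below are hypothetical, not dask-sql's actual API), compare a `todo!`-style fallthrough with one that returns `Err`:

```rust
// Hedged sketch of replacing a panicking fallthrough with Err.
// "to_arrow_name" and its mapping are illustrative only.
fn to_arrow_name(sql_type: &str) -> Result<&'static str, String> {
    match sql_type {
        "VARCHAR" => Ok("Utf8"),
        "BIGINT" => Ok("Int64"),
        // Previously this arm might have been `_ => todo!()`,
        // which aborts the whole process on any unknown input.
        // Returning Err lets callers surface a catchable error
        // (e.g. the RuntimeError seen in the traceback above).
        other => Err(format!("unsupported SQL type: {other}")),
    }
}
```

The corresponding call-site change is that callers must now propagate or handle the `Result` (e.g. with `?`) instead of assuming an infallible conversion.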