maxlength metadata not working with pyspark #137
I made sure the table doesn't exist, then ran the following. The `maxlength` metadata is ignored and the column is created with the `character varying(256)` type. Any ideas?
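(The original snippet is not shown; below is a hedged reconstruction of the failing in-place pattern described in the reply that follows. The DataFrame contents, `jdbc_url`, and `s3_temp_dir` are hypothetical placeholders.)

```python
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext()
sqlContext = SQLContext(sc)

# A single string column whose values exceed the 256-character default.
df = sqlContext.createDataFrame([("x" * 300,)], ["name"])

# In-place mutation: this only touches the Python-side copy of the schema,
# so the JVM DataFrame (and therefore the Redshift writer) never sees it
# and the table is still created with character varying(256).
df.schema.fields[0].metadata = {"maxlength": 2048}

# Hypothetical connection settings, not taken from the thread.
jdbc_url = "jdbc:redshift://host:5439/db?user=u&password=p"
s3_temp_dir = "s3n://example-bucket/tmp/"

(df.write
   .format("com.databricks.spark.redshift")
   .option("url", jdbc_url)
   .option("dbtable", "example_table")
   .option("tempdir", s3_temp_dir)
   .mode("error")
   .save())
```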
Hi @dokipen, this is a known issue which stems from limitations in PySpark's column metadata APIs. In order to change a column's metadata you need to create a new DataFrame that carries the new metadata; modifying it in-place like this won't work. For this reason, the example in the README uses the Scala API. It seems like you might be able to set column metadata from PySpark by building a modified schema and using it to create a new DataFrame from the existing one's RDD. This limitation is documented at the bottom of the "Configuring the maximum size of string columns" section in the README, although I suppose I could make that caveat more prominent. See also: #54 (comment). I'm open to the idea of adding new APIs to make this easier.
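In PySpark terms, a minimal sketch of that create-a-new-DataFrame workaround might look like this (the column name and length are illustrative, not taken from the thread):

```python
from pyspark.sql.types import StructField, StructType

# Rebuild the schema so the target column carries the maxlength metadata.
new_fields = [
    StructField(f.name, f.dataType, f.nullable, {"maxlength": 2048})
    if f.name == "name" else f
    for f in df.schema.fields
]

# A fresh DataFrame whose JVM-side schema actually contains the metadata;
# writing this one should produce character varying(2048) in Redshift.
df_with_meta = sqlContext.createDataFrame(df.rdd, StructType(new_fields))
```

Writing `df_with_meta` with the same writer options as before should then create the wider column.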
Thanks, sorry I didn't read more carefully. FYI, both workarounds worked.
I have gone ahead and updated the README to make this caveat a little clearer: ed75de1. Therefore, I'm going to close this issue for now. When Spark expands its language support for column metadata operations, I'll be sure to update the README to include examples in other languages.
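As an example of what that expanded support later looked like: newer PySpark releases added a `metadata` keyword to `Column.alias` (around Spark 2.2, as an assumption), which reduces the workaround to a one-liner:

```python
from pyspark.sql.functions import col

# Assumes a PySpark version whose Column.alias accepts a metadata kwarg.
df_with_meta = df.withColumn(
    "name", col("name").alias("name", metadata={"maxlength": 2048}))
```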
@dokipen What were the workarounds that worked?
I don't remember at this point. |