Describe the bug
Postgres has a limitation of 64 chars for table names. Generated pre-aggregation table names can reach this limit:
```
        Schema        |                              Name                               | Type  |  Owner
----------------------+-----------------------------------------------------------------+-------+----------
 stb_pre_aggregations | impressions_advertiser_count_by_day20200601_hnkacipl_vhthxxhc_1 | table | postgres
 stb_pre_aggregations | impressions_advertiser_count_by_day20200701_wzp3uyld_qnhk3tsr_1 | table | postgres
 stb_pre_aggregations | impressions_publisher_count_by_day20200601_kmpd5gs3_x0zn2l0v_15 | table | postgres
 stb_pre_aggregations | impressions_publisher_count_by_day20200701_1zpylxe_uma3nxf1_159 | table | postgres
 stb_pre_aggregations | requests_fill_rate_by_day20200601_5ypc2raf_gevo3jro_15954312366 | table | postgres
 stb_pre_aggregations | requests_fill_rate_by_day20200701_jsp0kf3_d4klpmk1_159572190821 | table | postgres
```
In this case, the trailing timestamp gets truncated and is no longer valid, which causes errors during pre-aggregation computations.
Maybe generate a fixed-length hash for the (cube, preAggregationName) part to avoid reaching this limit?
Version: [0.19.39]
hey @lvauvillier! Thanks for posting this one. There's a similar issue #86.
As a workaround you can use `sqlAlias` for member names: https://cube.dev/docs/cube#parameters-sql-alias
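For reference, a minimal sketch of that workaround (the cube name, SQL, and alias below are made up for illustration); `sqlAlias` shortens the name used when generating table names:

```javascript
// Hypothetical cube definition; only sqlAlias is the point here.
cube(`ImpressionsAdvertiserCountByDay`, {
  sql: `SELECT * FROM impressions`,

  // The short alias is used in generated pre-aggregation table names,
  // keeping them under Postgres's identifier length limit.
  sqlAlias: `imp_adv`,

  measures: {
    count: {
      type: `count`,
    },
  },
});
```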