ISS-268999: add aliasing to solve Ambiguous column reference#219
Merged
shriram-devrev merged 1 commit into main on Mar 16, 2026
zaidjan-devrev approved these changes Mar 14, 2026
ISS-268999
Add SQL column qualification for TableSchema measures and dimensions
Summary
- Qualifies TableSchema measure/dimension SQL expressions by prepending the table name (e.g. customer_id → orders.customer_id)
- Uses DuckDB's json_serialize_sql / json_deserialize_sql for proper AST-based parsing and serialization, with batched processing per table for performance
- Exposes qualifyTableSchemas() wrappers from both meerkat-node and meerkat-browser

Architecture
The implementation is split into four layers:
Data flow (per table)
1. qualifyTableSchemasSql shallow-copies the input TableSchema and collects all measures and dimensions whose SQL doesn't contain {MEERKAT}. placeholders
2. The collected expressions are combined into SELECT expr1, expr2, ... and sent to DuckDB's json_serialize_sql in one call. The returned AST has query_location metadata stripped at parse time, and the individual ParsedExpression[] are extracted from the select_list
3. qualifySqlExpressionColumnsBatch walks each parsed expression AST in-memory. Single-part COLUMN_REF nodes (e.g. ['customer_id']) are rewritten to ['tableName', 'customer_id']. Already-qualified refs (['orders', 'id']), lambda-bound variables, and whitespace-containing identifiers are skipped
4. The qualified ParsedExpression[] are wrapped in a synthetic SELECT statement with unique batch aliases (__meerkat_batch_expr_0__, etc.), passed to DuckDB's json_deserialize_sql in one call, and the returned SQL string is split back into individual expressions using the alias markers
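The parse-time stripping (step 2) and the in-memory qualification walk (step 3) can be sketched as below. This is an illustrative sketch, not the actual meerkat code: the node shape (`type`, `column_names`, `children`) is an assumption modelled on DuckDB's json_serialize_sql output, and the function names are hypothetical.

```typescript
type AstNode = { [key: string]: unknown };

// Iterative, in-place walk over every object node in the AST.
function walk(root: AstNode, visit: (node: AstNode) => void): void {
  const stack: AstNode[] = [root];
  while (stack.length > 0) {
    const node = stack.pop()!;
    visit(node);
    for (const value of Object.values(node)) {
      const children = Array.isArray(value) ? value : [value];
      for (const child of children) {
        if (child && typeof child === "object") stack.push(child as AstNode);
      }
    }
  }
}

// DuckDB adds query_location metadata during parsing that breaks
// json_deserialize_sql; delete it everywhere, once, at parse time.
function stripQueryLocations(root: AstNode): void {
  walk(root, (node) => {
    delete node["query_location"];
  });
}

// Rewrite single-part COLUMN_REF nodes (['customer_id']) to
// ['tableName', 'customer_id']. Already-qualified refs and
// whitespace-containing identifiers are skipped; the real walk also
// skips lambda-bound variables, omitted here for brevity.
function qualifyColumnRefs(root: AstNode, tableName: string): void {
  walk(root, (node) => {
    if (node["type"] !== "COLUMN_REF") return;
    const names = node["column_names"];
    if (
      Array.isArray(names) &&
      names.length === 1 &&
      typeof names[0] === "string" &&
      !/\s/.test(names[0])
    ) {
      node["column_names"] = [tableName, names[0]];
    }
  });
}

// A miniature parsed expression standing in for DuckDB output:
// sum(customer_id) with one bare ref and one already-qualified ref.
const expr: AstNode = {
  type: "FUNCTION",
  function_name: "sum",
  query_location: 7,
  children: [
    { type: "COLUMN_REF", column_names: ["customer_id"], query_location: 11 },
    { type: "COLUMN_REF", column_names: ["orders", "id"] },
  ],
};

stripQueryLocations(expr);
qualifyColumnRefs(expr, "orders");
// children[0].column_names is now ["orders", "customer_id"];
// children[1] keeps its existing qualification.
```

Using one shared iterative walk keeps the stripping and qualification passes in-place and avoids recursion-depth issues on deeply nested expressions.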
Key design decisions

- TableSchema objects are never mutated; a shallow copy is returned
- One json_serialize_sql + one json_deserialize_sql call per table (not per expression)
- If no COLUMN_REF was qualified, the serialize step is skipped entirely
- Runs json_serialize_sql / json_deserialize_sql queries directly, without modifying the shared ast-serializer / ast-deserializer modules
- query_location stripping: DuckDB adds query_location metadata during parsing that breaks json_deserialize_sql. This is stripped once at parse time via an iterative in-place walk
- Expressions containing {MEERKAT}. are skipped, since they aren't valid SQL until placeholder replacement

Consumer usage
Node
Browser
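The Node and Browser usage snippets were elided in this extract. Independent of the entry point, the batch-alias round trip from step 4 of the data flow can be sketched as follows; only the __meerkat_batch_expr_N__ alias format comes from the description above, while the helper names and the marker-based splitting strategy are assumptions for illustration.

```typescript
// Hypothetical helpers illustrating the batch-alias round trip.
const batchAlias = (i: number) => `__meerkat_batch_expr_${i}__`;

// Wrap already-qualified expressions in one synthetic SELECT so a
// single DuckDB json_deserialize_sql call can render all of them.
function wrapInBatchSelect(exprs: string[]): string {
  const list = exprs.map((e, i) => `${e} AS ${batchAlias(i)}`).join(", ");
  return `SELECT ${list}`;
}

// Split the rendered SELECT back into individual expressions by
// scanning for the alias markers. Assumes the expressions themselves
// never contain a marker, which the reserved __meerkat_...__ format
// is meant to guarantee for user SQL.
function splitBatchSelect(sql: string, count: number): string[] {
  let rest = sql.replace(/^SELECT\s+/i, "");
  const out: string[] = [];
  for (let i = 0; i < count; i++) {
    const marker = ` AS ${batchAlias(i)}`;
    const end = rest.indexOf(marker);
    out.push(rest.slice(0, end).trim());
    rest = rest.slice(end + marker.length).replace(/^\s*,\s*/, "");
  }
  return out;
}

const exprs = ["sum(orders.amount)", "orders.customer_id"];
const batched = wrapInBatchSelect(exprs);
// "SELECT sum(orders.amount) AS __meerkat_batch_expr_0__, ..."
const roundTripped = splitBatchSelect(batched, exprs.length);
// roundTripped deep-equals exprs
```

Batching all of a table's expressions through one synthetic SELECT is what keeps the design at one serialize and one deserialize call per table rather than per expression.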