4 changes: 2 additions & 2 deletions datafusion/core/tests/dataframe/dataframe_functions.rs
@@ -1310,8 +1310,8 @@ async fn test_count_wildcard() -> Result<()> {
@r"
Sort: count(*) ASC NULLS LAST [count(*):Int64]
Projection: count(*) [count(*):Int64]
Aggregate: groupBy=[[test.b]], aggr=[[count(Int64(1)) AS count(*)]] [b:UInt32, count(*):Int64]
TableScan: test [a:UInt32, b:UInt32, c:UInt32]
Aggregate: groupBy=[[test.b]], aggr=[[count(Int64(1)) AS count(*)]] [test.b:UInt32, count(*):Int64]
TableScan: test [test.a:UInt32, test.b:UInt32, test.c:UInt32]
Member

Does the test already tell us the qualifier?

Member Author

For TableScan it does. However, the schema printing code is the same for every plan node, and for many of them it's not much less clear. Without this change, the plan printout is incomplete and insufficient to understand the plan.

Contributor

Maybe we can special-case the schema printing code to have a version that skips the qualifiers in cases where the qualifier is always the same 🤔
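
A rough sketch of what such a special case could look like -- not part of this PR; the helper name display_df_schema_compact and the dedup-based "all qualifiers equal" check are purely illustrative:

```rust
use std::fmt::Write;

use datafusion_common::DFSchema;

/// Hypothetical variant of the schema printer: drop the qualifiers when every
/// field carries the same one, otherwise fall back to fully qualified names.
fn display_df_schema_compact(schema: &DFSchema) -> String {
    // All fields share one qualifier iff the deduplicated qualifier list has
    // at most one entry (consecutive dedup is enough for the "all equal" test).
    let mut qualifiers: Vec<_> = schema.iter().map(|(qualifier, _)| qualifier).collect();
    qualifiers.dedup();
    let single_qualifier = qualifiers.len() <= 1;

    let mut out = String::from("[");
    for (idx, (qualifier, field)) in schema.iter().enumerate() {
        if idx > 0 {
            out.push_str(", ");
        }
        // Only print the qualifier when it actually disambiguates anything
        if !single_qualifier {
            if let Some(q) = qualifier {
                let _ = write!(out, "{q}.");
            }
        }
        let nullable = if field.is_nullable() { ";N" } else { "" };
        let _ = write!(out, "{}:{:?}{}", field.name(), field.data_type(), nullable);
    }
    out.push(']');
    out
}
```

With something like this, the single-relation TableScan above would keep printing [a:UInt32, b:UInt32, c:UInt32], while plans that mix relations would still show the qualifiers.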

Member Author

Could that be confusing? If some qualifiers are printed but others are not, the projections without qualifiers will look as if they did not have any, which is a different state from the one where they all share the same qualifier.

Contributor

I was thinking more about how redundant this line is now.

It goes from

-          TableScan: test [a:UInt32, b:UInt32, c:UInt32]
+          TableScan: test [test.a:UInt32, test.b:UInt32, test.c:UInt32]

That is, the qualifier "test" is now repeated 4 times. It will be even worse when there are:

  1. long qualifiers like "my_really_obnoxiously_long_table_name"
  2. multiple columns selected, since each column gets the same table name prepended

For a TableScan there can be, by definition, only a single relation, so prepending the relation name to every expression just makes the plan harder to read.

More generally, when there is only one relation in the query, as is the case for many queries, I think adding a qualifier to all expressions makes the plans harder to read, not easier.



Member Author

More generally, when there is only one relation in the query, as is the case for many queries, I think adding a qualifier to all expressions makes the plans harder to read, not easier.

Agreed.
But also, single-table queries are not the ones we should optimize EXPLAIN output for.
They are a subset of all queries, and that subset is naturally simpler than the general case, where there is no limit on the number of source tables.

");

Ok(())
92 changes: 46 additions & 46 deletions datafusion/core/tests/dataframe/mod.rs
@@ -1863,7 +1863,7 @@ async fn with_column_renamed_join() -> Result<()> {
assert_snapshot!(
df_renamed.logical_plan(),
@r"
Projection: t1.c1 AS AAA, t1.c2, t1.c3, t2.c1, t2.c2, t2.c3
Projection: t1.c1 AS t1.AAA, t1.c2, t1.c3, t2.c1, t2.c2, t2.c3
Contributor

This doesn't seem right to me -- the alias shouldn't have a qualifier on it, should it? AAA doesn't come from the t1 relation; it is created in the outer query.

Member Author

I honestly have no idea where the t1. comes from, or what should be here.

Limit: skip=0, fetch=1
Sort: t1.c1 ASC NULLS FIRST, t1.c2 ASC NULLS FIRST, t1.c3 ASC NULLS FIRST, t2.c1 ASC NULLS FIRST, t2.c2 ASC NULLS FIRST, t2.c3 ASC NULLS FIRST
Inner Join: t1.c1 = t2.c1
@@ -1878,15 +1878,15 @@

assert_snapshot!(
df_renamed.clone().into_optimized_plan().unwrap(),
@r###"
Projection: t1.c1 AS AAA, t1.c2, t1.c3, t2.c1, t2.c2, t2.c3
@r"
Projection: t1.c1 AS t1.AAA, t1.c2, t1.c3, t2.c1, t2.c2, t2.c3
Sort: t1.c1 ASC NULLS FIRST, t1.c2 ASC NULLS FIRST, t1.c3 ASC NULLS FIRST, t2.c1 ASC NULLS FIRST, t2.c2 ASC NULLS FIRST, t2.c3 ASC NULLS FIRST, fetch=1
Inner Join: t1.c1 = t2.c1
SubqueryAlias: t1
TableScan: aggregate_test_100 projection=[c1, c2, c3]
SubqueryAlias: t2
TableScan: aggregate_test_100 projection=[c1, c2, c3]
"###
"
);

let df_results = df_renamed.collect().await?;
@@ -3606,12 +3606,12 @@ async fn join_with_alias_filter() -> Result<()> {
let actual = formatted.trim();
assert_snapshot!(
actual,
@r###"
Projection: t1.a, t2.a, t1.b, t1.c, t2.b, t2.c [a:UInt32, a:UInt32, b:Utf8, c:Int32, b:Utf8, c:Int32]
Inner Join: t1.a + UInt32(3) = t2.a + UInt32(1) [a:UInt32, b:Utf8, c:Int32, a:UInt32, b:Utf8, c:Int32]
TableScan: t1 projection=[a, b, c] [a:UInt32, b:Utf8, c:Int32]
TableScan: t2 projection=[a, b, c] [a:UInt32, b:Utf8, c:Int32]
"###
@r"
Projection: t1.a, t2.a, t1.b, t1.c, t2.b, t2.c [t1.a:UInt32, t2.a:UInt32, t1.b:Utf8, t1.c:Int32, t2.b:Utf8, t2.c:Int32]
Contributor

I think it is an improvement for the Projection and Inner Join here to have the qualifiers on them -- that makes them less ambiguous when there are potentially multiple relations

Inner Join: t1.a + UInt32(3) = t2.a + UInt32(1) [t1.a:UInt32, t1.b:Utf8, t1.c:Int32, t2.a:UInt32, t2.b:Utf8, t2.c:Int32]
TableScan: t1 projection=[a, b, c] [t1.a:UInt32, t1.b:Utf8, t1.c:Int32]
TableScan: t2 projection=[a, b, c] [t2.a:UInt32, t2.b:Utf8, t2.c:Int32]
"
);

let results = df.collect().await?;
@@ -3651,14 +3651,14 @@ async fn right_semi_with_alias_filter() -> Result<()> {
let actual = formatted.trim();
assert_snapshot!(
actual,
@r###"
RightSemi Join: t1.a = t2.a [a:UInt32, b:Utf8, c:Int32]
Projection: t1.a [a:UInt32]
Filter: t1.c > Int32(1) [a:UInt32, c:Int32]
TableScan: t1 projection=[a, c] [a:UInt32, c:Int32]
Filter: t2.c > Int32(1) [a:UInt32, b:Utf8, c:Int32]
TableScan: t2 projection=[a, b, c] [a:UInt32, b:Utf8, c:Int32]
"###
@r"
RightSemi Join: t1.a = t2.a [t2.a:UInt32, t2.b:Utf8, t2.c:Int32]
Projection: t1.a [t1.a:UInt32]
Filter: t1.c > Int32(1) [t1.a:UInt32, t1.c:Int32]
TableScan: t1 projection=[a, c] [t1.a:UInt32, t1.c:Int32]
Filter: t2.c > Int32(1) [t2.a:UInt32, t2.b:Utf8, t2.c:Int32]
TableScan: t2 projection=[a, b, c] [t2.a:UInt32, t2.b:Utf8, t2.c:Int32]
"
);

let results = df.collect().await?;
@@ -3698,13 +3698,13 @@ async fn right_anti_filter_push_down() -> Result<()> {
let actual = formatted.trim();
assert_snapshot!(
actual,
@r###"
RightAnti Join: t1.a = t2.a Filter: t2.c > Int32(1) [a:UInt32, b:Utf8, c:Int32]
Projection: t1.a [a:UInt32]
Filter: t1.c > Int32(1) [a:UInt32, c:Int32]
TableScan: t1 projection=[a, c] [a:UInt32, c:Int32]
TableScan: t2 projection=[a, b, c] [a:UInt32, b:Utf8, c:Int32]
"###
@r"
RightAnti Join: t1.a = t2.a Filter: t2.c > Int32(1) [t2.a:UInt32, t2.b:Utf8, t2.c:Int32]
Projection: t1.a [t1.a:UInt32]
Filter: t1.c > Int32(1) [t1.a:UInt32, t1.c:Int32]
TableScan: t1 projection=[a, c] [t1.a:UInt32, t1.c:Int32]
TableScan: t2 projection=[a, b, c] [t2.a:UInt32, t2.b:Utf8, t2.c:Int32]
"
);

let results = df.collect().await?;
@@ -4382,12 +4382,12 @@ async fn unnest_with_redundant_columns() -> Result<()> {
let actual = formatted.trim();
assert_snapshot!(
actual,
@r###"
Projection: shapes.shape_id [shape_id:UInt32]
Unnest: lists[shape_id2|depth=1] structs[] [shape_id:UInt32, shape_id2:UInt32;N]
Aggregate: groupBy=[[shapes.shape_id]], aggr=[[array_agg(shapes.shape_id) AS shape_id2]] [shape_id:UInt32, shape_id2:List(Field { name: "item", data_type: UInt32, nullable: true, dict_id: 0, dict_is_ordered: false, metadata: {} });N]
TableScan: shapes projection=[shape_id] [shape_id:UInt32]
"###
@r#"
Projection: shapes.shape_id [shapes.shape_id:UInt32]
Unnest: lists[shape_id2|depth=1] structs[] [shapes.shape_id:UInt32, shape_id2:UInt32;N]
Aggregate: groupBy=[[shapes.shape_id]], aggr=[[array_agg(shapes.shape_id) AS shape_id2]] [shapes.shape_id:UInt32, shape_id2:List(Field { name: "item", data_type: UInt32, nullable: true, dict_id: 0, dict_is_ordered: false, metadata: {} });N]
TableScan: shapes projection=[shape_id] [shapes.shape_id:UInt32]
"#
);

let results = df.collect().await?;
@@ -5748,11 +5748,11 @@ async fn test_alias() -> Result<()> {
.into_unoptimized_plan()
.display_indent_schema()
.to_string();
assert_snapshot!(plan, @r###"
SubqueryAlias: table_alias [a:Utf8, b:Int32, one:Int32]
Projection: test.a, test.b, Int32(1) AS one [a:Utf8, b:Int32, one:Int32]
TableScan: test [a:Utf8, b:Int32]
"###);
assert_snapshot!(plan, @r"
SubqueryAlias: table_alias [table_alias.a:Utf8, table_alias.b:Int32, table_alias.one:Int32]
Projection: test.a, test.b, Int32(1) AS one [test.a:Utf8, test.b:Int32, one:Int32]
TableScan: test [test.a:Utf8, test.b:Int32]
");

// Select over the aliased DataFrame
let df = df.select(vec![
@@ -5822,10 +5822,10 @@ async fn test_alias_empty() -> Result<()> {
.into_unoptimized_plan()
.display_indent_schema()
.to_string();
assert_snapshot!(plan, @r###"
SubqueryAlias: [a:Utf8, b:Int32]
TableScan: test [a:Utf8, b:Int32]
"###);
assert_snapshot!(plan, @r"
SubqueryAlias: [.a:Utf8, .b:Int32]
TableScan: test [test.a:Utf8, test.b:Int32]
");

assert_snapshot!(
batches_to_sort_string(&df.select(vec![col("a"), col("b")])?.collect().await.unwrap()),
@@ -5857,12 +5857,12 @@ async fn test_alias_nested() -> Result<()> {
.into_optimized_plan()?
.display_indent_schema()
.to_string();
assert_snapshot!(plan, @r###"
SubqueryAlias: alias2 [a:Utf8, b:Int32, one:Int32]
SubqueryAlias: alias1 [a:Utf8, b:Int32, one:Int32]
Projection: test.a, test.b, Int32(1) AS one [a:Utf8, b:Int32, one:Int32]
TableScan: test projection=[a, b] [a:Utf8, b:Int32]
"###);
assert_snapshot!(plan, @r"
SubqueryAlias: alias2 [alias2.a:Utf8, alias2.b:Int32, alias2.one:Int32]
SubqueryAlias: alias1 [alias1.a:Utf8, alias1.b:Int32, alias1.one:Int32]
Projection: test.a, test.b, Int32(1) AS one [test.a:Utf8, test.b:Int32, one:Int32]
TableScan: test projection=[a, b] [test.a:Utf8, test.b:Int32]
");

// Select over the aliased DataFrame
let select1 = df
24 changes: 12 additions & 12 deletions datafusion/core/tests/sql/explain_analyze.rs
@@ -182,9 +182,9 @@ async fn csv_explain_plans() {
actual,
@r"
Explain [plan_type:Utf8, plan:Utf8]
Projection: aggregate_test_100.c1 [c1:Utf8View]
Filter: aggregate_test_100.c2 > Int64(10) [c1:Utf8View, c2:Int8, c3:Int16, c4:Int16, c5:Int32, c6:Int64, c7:Int16, c8:Int32, c9:UInt32, c10:UInt64, c11:Float32, c12:Float64, c13:Utf8View]
TableScan: aggregate_test_100 [c1:Utf8View, c2:Int8, c3:Int16, c4:Int16, c5:Int32, c6:Int64, c7:Int16, c8:Int32, c9:UInt32, c10:UInt64, c11:Float32, c12:Float64, c13:Utf8View]
Projection: aggregate_test_100.c1 [aggregate_test_100.c1:Utf8View]
Filter: aggregate_test_100.c2 > Int64(10) [aggregate_test_100.c1:Utf8View, aggregate_test_100.c2:Int8, aggregate_test_100.c3:Int16, aggregate_test_100.c4:Int16, aggregate_test_100.c5:Int32, aggregate_test_100.c6:Int64, aggregate_test_100.c7:Int16, aggregate_test_100.c8:Int32, aggregate_test_100.c9:UInt32, aggregate_test_100.c10:UInt64, aggregate_test_100.c11:Float32, aggregate_test_100.c12:Float64, aggregate_test_100.c13:Utf8View]
Contributor

This is a good example of a plan that is, in my mind, much less readable after this change.

Member Author

Is this because all fields are qualified and all have the same qualifier?

TableScan: aggregate_test_100 [aggregate_test_100.c1:Utf8View, aggregate_test_100.c2:Int8, aggregate_test_100.c3:Int16, aggregate_test_100.c4:Int16, aggregate_test_100.c5:Int32, aggregate_test_100.c6:Int64, aggregate_test_100.c7:Int16, aggregate_test_100.c8:Int32, aggregate_test_100.c9:UInt32, aggregate_test_100.c10:UInt64, aggregate_test_100.c11:Float32, aggregate_test_100.c12:Float64, aggregate_test_100.c13:Utf8View]
"
);
//
@@ -253,9 +253,9 @@ async fn csv_explain_plans() {
actual,
@r"
Explain [plan_type:Utf8, plan:Utf8]
Projection: aggregate_test_100.c1 [c1:Utf8View]
Filter: aggregate_test_100.c2 > Int8(10) [c1:Utf8View, c2:Int8]
TableScan: aggregate_test_100 projection=[c1, c2], partial_filters=[aggregate_test_100.c2 > Int8(10)] [c1:Utf8View, c2:Int8]
Projection: aggregate_test_100.c1 [aggregate_test_100.c1:Utf8View]
Filter: aggregate_test_100.c2 > Int8(10) [aggregate_test_100.c1:Utf8View, aggregate_test_100.c2:Int8]
TableScan: aggregate_test_100 projection=[c1, c2], partial_filters=[aggregate_test_100.c2 > Int8(10)] [aggregate_test_100.c1:Utf8View, aggregate_test_100.c2:Int8]
"
);
//
@@ -399,9 +399,9 @@ async fn csv_explain_verbose_plans() {
actual,
@r"
Explain [plan_type:Utf8, plan:Utf8]
Projection: aggregate_test_100.c1 [c1:Utf8View]
Filter: aggregate_test_100.c2 > Int64(10) [c1:Utf8View, c2:Int8, c3:Int16, c4:Int16, c5:Int32, c6:Int64, c7:Int16, c8:Int32, c9:UInt32, c10:UInt64, c11:Float32, c12:Float64, c13:Utf8View]
TableScan: aggregate_test_100 [c1:Utf8View, c2:Int8, c3:Int16, c4:Int16, c5:Int32, c6:Int64, c7:Int16, c8:Int32, c9:UInt32, c10:UInt64, c11:Float32, c12:Float64, c13:Utf8View]
Projection: aggregate_test_100.c1 [aggregate_test_100.c1:Utf8View]
Filter: aggregate_test_100.c2 > Int64(10) [aggregate_test_100.c1:Utf8View, aggregate_test_100.c2:Int8, aggregate_test_100.c3:Int16, aggregate_test_100.c4:Int16, aggregate_test_100.c5:Int32, aggregate_test_100.c6:Int64, aggregate_test_100.c7:Int16, aggregate_test_100.c8:Int32, aggregate_test_100.c9:UInt32, aggregate_test_100.c10:UInt64, aggregate_test_100.c11:Float32, aggregate_test_100.c12:Float64, aggregate_test_100.c13:Utf8View]
TableScan: aggregate_test_100 [aggregate_test_100.c1:Utf8View, aggregate_test_100.c2:Int8, aggregate_test_100.c3:Int16, aggregate_test_100.c4:Int16, aggregate_test_100.c5:Int32, aggregate_test_100.c6:Int64, aggregate_test_100.c7:Int16, aggregate_test_100.c8:Int32, aggregate_test_100.c9:UInt32, aggregate_test_100.c10:UInt64, aggregate_test_100.c11:Float32, aggregate_test_100.c12:Float64, aggregate_test_100.c13:Utf8View]
"
);
//
@@ -470,9 +470,9 @@ async fn csv_explain_verbose_plans() {
actual,
@r"
Explain [plan_type:Utf8, plan:Utf8]
Projection: aggregate_test_100.c1 [c1:Utf8View]
Filter: aggregate_test_100.c2 > Int8(10) [c1:Utf8View, c2:Int8]
TableScan: aggregate_test_100 projection=[c1, c2], partial_filters=[aggregate_test_100.c2 > Int8(10)] [c1:Utf8View, c2:Int8]
Projection: aggregate_test_100.c1 [aggregate_test_100.c1:Utf8View]
Filter: aggregate_test_100.c2 > Int8(10) [aggregate_test_100.c1:Utf8View, aggregate_test_100.c2:Int8]
TableScan: aggregate_test_100 projection=[c1, c2], partial_filters=[aggregate_test_100.c2 > Int8(10)] [aggregate_test_100.c1:Utf8View, aggregate_test_100.c2:Int8]
"
);
//
10 changes: 9 additions & 1 deletion datafusion/expr/src/expr.rs
@@ -3456,7 +3456,15 @@ pub const UNNEST_COLUMN_PREFIX: &str = "UNNEST";
impl Display for Expr {
fn fmt(&self, f: &mut Formatter) -> fmt::Result {
match self {
Expr::Alias(Alias { expr, name, .. }) => write!(f, "{expr} AS {name}"),
Expr::Alias(Alias {
expr,
relation,
name,
..
}) => match relation {
None => write!(f, "{expr} AS {name}"),
Some(relation) => write!(f, "{expr} AS {relation}.{name}"),
},
Expr::Column(c) => write!(f, "{c}"),
Expr::OuterReferenceColumn(_, c) => {
write!(f, "{OUTER_REFERENCE_COLUMN_PREFIX}({c})")
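
For reference, a minimal sketch of how the changed Display arm behaves -- the Alias::new call and the t1/AAA names are assumptions for illustration, not code from this PR:

```rust
use datafusion_common::TableReference;
use datafusion_expr::{col, expr::Alias, Expr};

fn main() {
    // No relation on the alias: renders as `c1 AS AAA`, same as before this change
    let plain = col("c1").alias("AAA");
    println!("{plain}");

    // Relation attached to the alias: with this change it renders as
    // `c1 AS t1.AAA`, which is the `t1.AAA` discussed in the review thread above
    let qualified = Expr::Alias(Alias::new(
        col("c1"),
        Some(TableReference::bare("t1")),
        "AAA",
    ));
    println!("{qualified}");
}
```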
42 changes: 35 additions & 7 deletions datafusion/expr/src/logical_plan/display.rs
@@ -31,7 +31,7 @@ use crate::dml::CopyTo;
use arrow::datatypes::Schema;
use datafusion_common::display::GraphvizBuilder;
use datafusion_common::tree_node::{TreeNodeRecursion, TreeNodeVisitor};
use datafusion_common::{Column, DataFusionError};
use datafusion_common::{Column, DFSchema, DataFusionError};
use serde_json::json;

/// Formats plans with a single line per node. For example:
@@ -72,11 +72,7 @@ impl<'n> TreeNodeVisitor<'n> for IndentVisitor<'_, '_> {
write!(self.f, "{:indent$}", "", indent = self.indent * 2)?;
write!(self.f, "{}", plan.display())?;
if self.with_schema {
write!(
self.f,
" {}",
display_schema(&plan.schema().as_ref().to_owned().into())
)?;
write!(self.f, " {}", display_df_schema(plan.schema().as_ref()))?;
}

self.indent += 1;
@@ -92,7 +88,7 @@ impl<'n> TreeNodeVisitor<'n> for IndentVisitor<'_, '_> {
}
}

/// Print the schema in a compact representation to `buf`
/// Print the schema in a compact representation
///
/// For example: `foo:Utf8` if `foo` can not be null, and
/// `foo:Utf8;N` if `foo` is nullable.
@@ -135,6 +131,38 @@ pub fn display_schema(schema: &Schema) -> impl fmt::Display + '_ {
Wrapper(schema)
}

/// Print the schema in a compact representation.
/// Similar to `display_schema`, but includes field qualifiers if any.
pub fn display_df_schema(schema: &DFSchema) -> impl fmt::Display + '_ {
struct Wrapper<'a>(&'a DFSchema);

impl fmt::Display for Wrapper<'_> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "[")?;
for (idx, (qualifier, field)) in self.0.iter().enumerate() {
if idx > 0 {
write!(f, ", ")?;
}
let nullable_str = if field.is_nullable() { ";N" } else { "" };
write!(
f,
"{}{}:{:?}{}",
if let Some(q) = qualifier {
format!("{q}.")
} else {
"".to_string()
},
field.name(),
field.data_type(),
nullable_str
)?;
}
write!(f, "]")
}
}
Wrapper(schema)
}

/// Formats plans for graphical display using the `DOT` language. This
/// format can be visualized using software from
/// [`graphviz`](https://graphviz.org/)
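
A small usage sketch of the new display_df_schema next to the existing display_schema -- the import path and the t1 example schema are assumptions, not part of this diff:

```rust
use arrow::datatypes::{DataType, Field, Schema};
use datafusion_common::{DFSchema, Result};
use datafusion_expr::logical_plan::display::display_df_schema;

fn main() -> Result<()> {
    let schema = Schema::new(vec![
        Field::new("a", DataType::UInt32, false),
        Field::new("b", DataType::Utf8, true),
    ]);
    // Qualify every field with the relation name `t1`
    let df_schema = DFSchema::try_from_qualified_schema("t1", &schema)?;

    // display_schema on the plain Arrow schema keeps the old, unqualified form:
    //   [a:UInt32, b:Utf8;N]
    // display_df_schema includes the qualifier, matching the updated snapshots:
    //   [t1.a:UInt32, t1.b:Utf8;N]
    println!("{}", display_df_schema(&df_schema));
    Ok(())
}
```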
14 changes: 7 additions & 7 deletions datafusion/expr/src/logical_plan/plan.rs
@@ -1595,8 +1595,8 @@ impl LogicalPlan {
/// // Format using display_indent_schema
/// let display_string = format!("{}", plan.display_indent_schema());
///
/// assert_eq!("Filter: t1.id = Int32(5) [id:Int32]\
/// \n TableScan: t1 [id:Int32]",
/// assert_eq!("Filter: t1.id = Int32(5) [t1.id:Int32]\
/// \n TableScan: t1 [t1.id:Int32]",
/// display_string);
/// ```
pub fn display_indent_schema(&self) -> impl Display + '_ {
Expand Down Expand Up @@ -4270,11 +4270,11 @@ mod tests {
let plan = display_plan()?;

assert_snapshot!(plan.display_indent_schema(), @r"
Projection: employee_csv.id [id:Int32]
Filter: employee_csv.state IN (<subquery>) [id:Int32, state:Utf8]
Subquery: [state:Utf8]
TableScan: employee_csv projection=[state] [state:Utf8]
TableScan: employee_csv projection=[id, state] [id:Int32, state:Utf8]
Projection: employee_csv.id [employee_csv.id:Int32]
Filter: employee_csv.state IN (<subquery>) [employee_csv.id:Int32, employee_csv.state:Utf8]
Subquery: [employee_csv.state:Utf8]
TableScan: employee_csv projection=[state] [employee_csv.state:Utf8]
TableScan: employee_csv projection=[id, state] [employee_csv.id:Int32, employee_csv.state:Utf8]
");
Ok(())
}