ESQL: Union Types Support #107545
Conversation
Hi @craigtaverner, I've created a changelog YAML for you.
Left a round of comments.
 * then there could still be some FieldAttributes that contain unresolved MultiTypeEsFields.
 * These need to be converted back to actual UnresolvedAttribute in order for validation to generate appropriate failures.
 */
private static class UnresolveUnionTypes extends AnalyzerRules.AnalyzerRule<LogicalPlan> {
This rule tries to roll back something that either shouldn't have occurred in the first place or that is ignored by the verifier.
If there's no AbstractConvertFunction, MultiTypeFields can be considered unsupported in projections and unsupported inside expressions (see EsqlProject and the Verifier).
This rule does something that used to be done by the ResolveRefs code. We have a catch-22 situation where ResolveUnionTypes depends on ResolveRefs to run first, but also relies on InvalidMappedField (or currently MultiTypeEsField.UnresolvedField) to not be converted into an UnresolvedAttribute (which ResolveRefs also does). So that conversion needed to be removed from ResolveRefs and moved after ResolveUnionTypes. This is that new location. This code also did something much more complex before, also editing the contents of EsRelation and the contained EsIndex, but that was getting very messy, and only needed to handle regression tests around unsupported types. I had three choices:
- Continue to make UnresolveUnionTypes more complex (I got all but one regression test passing, so just needed to add nested resolution of Object fields)
- Find a way to get `MultiTypeEsField.UnresolvedField` to pass through Plan serialization (this is the route I took, but it looks hacky, and I see your comment on that already). This allowed me to reduce this class a lot (you see the simplified version here).
- Merge `MultiTypeEsField.UnresolvedField` with `InvalidMappedField` so we avoid the plan serialisation hack. This is my new preferred approach, and I see you suggest this approach too in another comment.

But in none of these cases does this class completely disappear. Somewhere we need to recognise InvalidMappedField as an unresolved type. That used to happen in ResolveRefs, and now happens here. Are you suggesting an alternative approach/location? Verifying EsqlProject? I can investigate that on Monday. I've looked through the Verifier a few times, so I can imagine possibilities there.
Thanks - it's a minor point, but you could incorporate the rule as part of the UnionType rules in a second pass.
That is, convert all InvalidMappedFields that are not wrapped in a conversion function to unresolved attributes.
Again, minor.
of(
    EsField.class,
    MultiTypeEsField.UnresolvedField.class, // This extends InvalidMappedField with information needed only during Analysis
    PlanNamedTypes::writeInvalidMappedField,
    PlanNamedTypes::readInvalidMappedField
),
This might be causing more problems than it tries to fix. Allow AbstractConvertFunction to work on InvalidMappedField and replace them with MultiType/ConvertedEsField.
Agreed. It did actually fix a bunch of things, because `UnresolveUnionTypes` was becoming quite complex in order to get regressions to pass, and putting this here made it much simpler. However, if I instead combine the classes `InvalidMappedField` and `MultiTypeEsField.UnresolvedField` into one class, that same simplification occurs, without this hack. I noticed a lot of failures in CI that I could not reproduce locally, but this particular hack feels like a likely source of the failures.
Also, in case it was not clear, this only existed in the plan when there was no AbstractConvertFunction, so the remaining InvalidMappedField needed to be serialised in the plan to support existing behaviour around unsupported types.
 * If type resolution is not possible, due to the plan not containing explicit type conversion functions, then this class will
 * be used to communicate the type resolution failure, since its parent class is InvalidMappedField.
 */
public static class UnresolvedField extends InvalidMappedField {
Why not use InvalidMappedField directly, since that is already picked up by the analyzer/verifier?
Since InvalidMappedField was in QL, I did not want to touch it, but I've been coming round more and more to the idea that I really should. I think it will make things much simpler, and avoid the awkward PlanNamedTypes hack.
This conversation has come full circle now, due to the recent reviews asking to not edit QL directly. After the above comment, I switched to editing QL directly. There were two QL classes we edited: `FieldAttribute`, where the equals and hashCode were updated to include the underlying field so query plan re-writing would work (this could be seen as an oversight in QL code, so perhaps even a bug-fix), and the above-mentioned `InvalidMappedField`. An attempt to completely extricate these two from QL failed (dependencies increased scope dramatically, turning a moderate PR into a monster). A second attempt at a compromise, making ESQL versions that kept the simple name and extended the QL versions, worked for both classes, but was quite messy for `FieldAttribute`. After discussing with the team, we decided to keep the ESQL port of `InvalidMappedField`, but revert to the direct edit of `FieldAttribute`. The thinking here is that the upcoming split of QL and ESQL would be slightly facilitated by not having any edits in `InvalidMappedField`, while the edit to `FieldAttribute` we would probably want to keep anyway.
Left a round of comments.
Pinging @elastic/es-analytical-engine (Team:Analytics)
After some recent refinements, and the addition of the unit tests in
Hi Craig, I did a couple of tests, but I think I need some guidance to understand which types/functions this PR covers, because I couldn't manage to make it work...
I did the following:
- created two indexes, `test1` and `test2`, each with a single `@timestamp` field, of type `long` and `date` respectively
- tried a few queries with empty indexes:
  - `from test* | eval x = to_long(@timestamp) | keep x`: all good
  - `from test* | eval x = to_string(@timestamp) | keep x`: all good
  - `from test* | eval x = to_date(@timestamp) | keep x`: `verification_exception`, `Found 1 problem\nline 1:31: Cannot use field [@timestamp] due to ambiguities being mapped as [2] incompatible types: [date] in [test2], [long] in [test1]`
Then I added one record per index
{"index": {"_index":"test1"}}
{"@timestamp":10000000}
{"index": {"_index":"test2"}}
{"@timestamp":"2022-05-06T12:01:00.000Z"}
and tried the same queries again:
- `from test* | eval x = to_long(@timestamp) | keep x`: `null_pointer_exception`, `Cannot invoke \"Object.hashCode()\" because \"pk\" is null`
- `from test* | eval x = to_string(@timestamp) | keep x`: same as above
- `from test* | eval x = to_date(@timestamp) | keep x`: `verification_exception`, `Found 1 problem\nline 1:31: Cannot use field [@timestamp] due to ambiguities being mapped as [2] incompatible types: [date] in [test2], [long] in [test1]`
- `from test* | eval x = to_ip(@timestamp) | keep x`: `esql_illegal_argument_exception`, `illegal data type [long]` (suppressed `null_pointer_exception` as above)
I also tried `IP` vs `long` and I got some results with `to_string()` but not with other functions.
So a couple of questions:
- does this cover all the types? I didn't find any type-specific code in the PR, so I assumed it was the case.
- is it supposed to handle partially invalid conversions? E.g. if I have an IP and a date and I'm only interested in the date, can I do it?
[edit] it should be `to_datetime()` (`to_date()` does not exist); probably the validation order makes it trip on the incompatible types before it realizes that the function name is wrong. That makes the message a bit confusing, but it's probably a minor problem.
The above queries, with `to_datetime()`, return the same error as `to_long()`.
The issues with @timestamp have been fixed in 6fb0622dc43070aa3c71c425d405ac273bf43d45, and more tests added in that and other commits to cover this case. The related issue regarding exactly which error message to return can be dealt with later, perhaps in this PR, or perhaps in another.
I feel bad for the github user
I left a few comments. I suggested reworking how row-by-row loading is modeled with the BlockBuilder
s inside of the value loaders themselves. This feels like it's much cleaner to think about. I think we can make it compatible with a shared loader if we need to. But for now the copying seems fine because this is never the hottest path.
} catch (IOException e) {
    throw new UncheckedIOException(e);
} finally {
    if (success == false) {
        Releasables.closeExpectNoException(blocks);
    }
}
return page.appendBlocks(blocks);
👍
for (int r = 0; r < rowStrideReaders.size(); r++) {
    RowStrideReaderWork work = rowStrideReaders.get(r);
    work.reader.read(doc, storedFields, work.builder);
    storedFields.advanceTo(doc);
👍
builders[f] = fields[f].info.type.newBlockBuilder(docs.getPositionCount(), blockFactory);
fieldTypeBuilders[f] = fields[f].info.type.newBlockBuilder(docs.getPositionCount(), blockFactory);
builders[f] = new Block.Builder[shardContexts.size()];
converters[f] = new BlockLoader[shardContexts.size()];
Could we keep the `BlockBuilder` and converter inside the block loader itself? That'd rework the row-stride block loader somewhat, but I think that'd make this more readable.
I remember originally trying to combine these, but seem to remember it causing hassles with the ColumnAtATimeReader. This was before I really got the row-stride reader working, so perhaps it is time for another attempt.
I'd be ok delaying it to a followup too.
I realise a key difference between what you suggested and what I tried (and was thinking of trying again). You said to put the builder and converter inside the loader. Instead, I had tried to put the converter inside the builder. This is because we will always have both a builder and a loader (the builder is made by the loader once additional information is known, using code like `field.loader.builder(loaderBlockFactory, docs.count())`), so merging the loader and builder seems unworkable. Right now the converter is the only thing at play. It is currently implemented by the loader, since it does not need any of the information in the builder. But this does lead to lines like this:

```java
Block build() {
    return (Block) loader.convert(builder.build());
}
```

If I put the converter into the builder, we can hide it inside the `build()` method call, simplifying things slightly. That was what I was trying before, and could try again.
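To make the trade-off concrete, here is a minimal sketch of the converter-inside-builder idea, using simplified stand-in types; the `Block`, `ConvertingBuilder`, and converter names below are illustrative, not the real compute-engine API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class ConvertingBuilderSketch {
    // Stand-in for a column of values; the real Block is far richer.
    interface Block {
        List<Object> values();
    }

    record SimpleBlock(List<Object> values) implements Block {}

    // A builder that owns its converter, so the conversion happens inside
    // build() instead of at every loader.convert(builder.build()) call site.
    static class ConvertingBuilder {
        private final List<Object> raw = new ArrayList<>();
        private final Function<Object, Object> converter;

        ConvertingBuilder(Function<Object, Object> converter) {
            this.converter = converter;
        }

        ConvertingBuilder append(Object value) {
            raw.add(value);
            return this;
        }

        // Callers never see the unconverted block.
        Block build() {
            List<Object> converted = raw.stream().map(converter).toList();
            return new SimpleBlock(converted);
        }
    }

    public static void main(String[] args) {
        // A long -> string conversion, standing in for TO_STRING.
        Block block = new ConvertingBuilder(String::valueOf)
            .append(10000000L)
            .append(42L)
            .build();
        System.out.println(block.values()); // prints [10000000, 42]
    }
}
```

The point of the sketch is only the shape: the converter travels with the builder, so the `(Block) loader.convert(...)` cast disappears from call sites.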
@@ -0,0 +1,582 @@
singleIndexIp
I'd like to replace the `IT_tests_only` suffix with a check against a `feature`, similar to how we do version testing. We're already going to add `feature`s for BWC skipping; we can add a skip in the hand-rolled CSV testing infrastructure too. I did it for one example here: #108313
Would you be ok doing the same on this one?
OK. I've done this now, will push soon.
@Override
public final String toString() {
    return "TypeConvertingBlockLoader[delegate=" + delegate + "]";
Could you include the evaluator too?
Done
return super.equals(obj) && Objects.equals(path, ((FieldAttribute) obj).path);
return super.equals(obj)
    && Objects.equals(path, ((FieldAttribute) obj).path)
    && Objects.equals(field, ((FieldAttribute) obj).field);
Modifying this one scares me a bit.
Instead of modifying this, we should create a copy in the esql project.
This may involve some yak shaving, but I'm happy to help with that!
I have minor comments around the logistics and code dependency - such as moving the QL classes like FieldAttribute, EsRelation and InvalidMappedField to ESQL.
An alternative would be to have some workaround code in ESQL that adds/maintains that information outside of QL (even though it's ugly) and then get rid of that during the QL migration, so as not to postpone this PR any longer.
👍
/**
 * Support multiple field mappings if appropriate conversion function is used (union types)
 */
public static final NodeFeature UNION_TYPES = new NodeFeature("esql.union_types");
Unrelated to this PR - we need to come up with a naming strategy for the features so we can reason about when they were added without having to open this file, which is already too long.
I propose a `YY_MM.esql.feature_name` pattern, with the date prefix used in the variable name as well:
`NodeFeature 24_05_UNION_TYPES = new NodeFeature("24_05.esql.union_types")`
This sounds like a solution to a problem we suspect we might get in future, but do not (yet) suffer from, the issue of the creation of too many NodeFeatures. I think there are many solutions to that problem, like regular removal of older NodeFeature instances and references to them, since they should only really matter for BWC and rollover of recent releases. Or some other solution... let's not make it part of this PR (or any feature PR).
I'm with @craigtaverner on this. I'd prefer not to adopt a new pattern here. I'm not really sure what the pattern should be either. Or if we need one.
@@ -153,6 +157,7 @@ public Set<NodeFeature> getFeatures() {
    ST_CONTAINS_WITHIN,
    ST_DISJOINT,
    STRING_LITERAL_AUTO_CASTING,
    UNION_TYPES,
And add the features here in reverse order, so the latest features are at the top (that's because in time, the early features become wildly spread).
That's pretty reasonable.
Maybe just stick this one on top and leave a comment saying add new ones on top.
This is just for those reading - it's an unordered Set - but we humans do read it in order.
So far it seems everyone, including me, has been appending to the end. So we're talking about reversing the order of the complete list? I can do that, although I don't fully understand the reason for this. The term 'wildly spread' does not clarify things for me.
@Override
public boolean equals(Object obj) {
    if (super.equals(obj) == false) {
        return false;
    }
    if (obj instanceof MultiTypeEsField other) {
        return super.equals(other) && indexToConversionExpressions.equals(other.indexToConversionExpressions);
    }
    return false;
}

@Override
public int hashCode() {
    return Objects.hash(super.hashCode(), indexToConversionExpressions);
}
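As a side note on why equals and hashCode should cover the same fields: plan re-writing relies on hash-based collections, which in turn rely on the equals/hashCode contract. A tiny self-contained sketch, using a hypothetical stand-in record rather than the real MultiTypeEsField:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class EqualsHashCodeSketch {
    // Stand-in for a field carrying per-index conversion info; a record
    // generates equals/hashCode over all components, which is the property
    // the hand-written methods above provide.
    record UnionField(String name, Map<String, String> indexToConversion) {}

    public static void main(String[] args) {
        UnionField a = new UnionField("@timestamp", Map.of("test1", "to_long"));
        UnionField b = new UnionField("@timestamp", Map.of("test1", "to_long"));
        UnionField c = new UnionField("@timestamp", Map.of("test2", "to_datetime"));

        Set<UnionField> dedup = new HashSet<>();
        dedup.add(a);
        dedup.add(b); // equal to a (same name and conversion map), so deduplicated
        dedup.add(c); // differs in the conversion map, so kept separately
        System.out.println(dedup.size()); // prints 2
    }
}
```

If the conversion map were left out of equals/hashCode, `a` and `c` would collapse into one entry and the per-index conversions would be silently conflated.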
Nit: move these methods to the bottom since they're non-essential.
Done
Clone this to ESQL and add the modifications there.
OK. I'll discuss this with @alex-spies and do it together with the FieldAttribute changes, since both are really part of the 'split from QL' project.
Done for InvalidMappedField, but we decided to keep the QL changes in FieldAttribute. Moving that was a much bigger job, and it was argued that these changes are an improvement anyway.
    return true;
}

@Override
public int hashCode() {
    return Objects.hash(index, esSourceOptions, frozen);
    return Objects.hash(index, attrs, esSourceOptions, frozen);
`EsRelation` was already ported to `esql` by Nhat - I think this fix should be applied to `esql`'s copy of that, not here.
Curiously, in the QL version, it used attrs in the hashCode, but not in the equals method. My change adds it to equals. Nhat's version removed it from hashCode. I'm curious if his change fixed something else, that I will now break.
Turns out the changes I made were no longer necessary, and only needed for an earlier version of union-types. I kept just one clarifying comment.
Note, this extends the QL version of InvalidMappedField, so is not a complete port. This is necessary because of the intertwining of QL IndexResolver and EsqlIndexResolver. Once those classes are disentangled, we can completely break InvalidMappedField from QL and make it a forbidden type.
So as to remove any edits to QL code, we extend FieldAttribute in the ESQL code with the changes required, which is simply to include the `field` in the hashCode and equals methods.
This reverts commit 168c6c7.
Also removed an unused method from earlier union-types work where we kept the NodeId during re-writing (which we no longer do).
…o not work in mixed clusters
…lities do not work in mixed clusters" This reverts commit 56d58be.
I left a couple of comments on tests; probably it just needs a little fix on the validation side (or to be made a bit more resilient to invalid conversions) [edit] and a bit more logic to avoid leaking the newly added fields. [/edit]
Apart from that, technically it works, so it LGTM, but please wait for a final approval from @costin on the general approach.
As we discussed, there are two aspects that don't completely convince me, but as long as we are aware of them and we all agree, I'm good with them as well:
- having `MultiTypeEsField` contain `Expression`s does not feel natural (though it's functional to the solution we adopted)
- doing the conversion at extraction time could be expensive, and since we can no longer decide where to push the execution of the conversion, we cannot speculate on it in the optimizer

I would have preferred a solution where you:
- replace the original multi-type field with multiple (normal) fields, one per type
- extract the new fields normally, just multiplexing on the right type (and using NullBlocks where the types don't match, which should be pretty efficient in terms of both memory and performance)
- replace the conversion function with an optimized version of `COALESCE(newField1, to_xy(newField2), ...)`, that can be further optimized (and sometimes maybe completely removed) by the optimizers
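For illustration, under that alternative the rewrite could look roughly like the following sketch, where `$timestamp_date` and `$timestamp_long` are hypothetical synthetic per-type fields (null-filled in the indices where the type does not match) rather than anything this PR actually produces:

```
FROM test*
| EVAL x = COALESCE($timestamp_date, TO_DATETIME($timestamp_long))
| KEEP x
```

The optimizer could then push down or even eliminate the `COALESCE` when only one branch can be non-null.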
required_capability: union_types

FROM sample_data*
| WHERE TO_LONG(@timestamp) < 1698068014937 AND TO_STRING(client_ip) LIKE "172.21.2.16?"
If you have a chance, please add a few tests where you convert the same field multiple times to different types, e.g.
| EVAL x = to_string(@timestamp), y = to_datetime(@timestamp), z= to_long(@timestamp)
I tested it and it works fine btw, but it's worth having the coverage ;-)
And I think conversions to partially incompatible types (e.g. `to_ip(keyword_or_date)`) have some validation problems. Nothing critical, but it's worth checking.
Probably in these cases we can just return null (probably we should, at least when one of the types is compatible with the conversion), but we could also fail the validation phase with a meaningful error message, up to you.
One more thing that just came to my mind, with multiple indexes having different `@timestamp` fields:
- `from test* | drop @timestamp` works just fine
- `from test* | eval x = to_datetime(@timestamp), y = to_string(@timestamp) | drop x, y, @timestamp` returns a verification exception:
{
"error": {
"root_cause": [
{
"type": "verification_exception",
"reason": "Found 2 problems\nline 1:1: Cannot use field [@timestamp] due to ambiguities being mapped as [3] incompatible types: [datetime] in [test2], [keyword] in [test3, test4], [long] in [test1]\nline 1:87: Reference [@timestamp] is ambiguous (to disambiguate use quotes or qualifiers); matches any of [line 1:1 [@timestamp], line 1:35 [@timestamp], line 1:62 [@timestamp]]"
}
],
"type": "verification_exception",
"reason": "Found 2 problems\nline 1:1: Cannot use field [@timestamp] due to ambiguities being mapped as [3] incompatible types: [datetime] in [test2], [keyword] in [test3, test4], [long] in [test1]\nline 1:87: Reference [@timestamp] is ambiguous (to disambiguate use quotes or qualifiers); matches any of [line 1:1 [@timestamp], line 1:35 [@timestamp], line 1:62 [@timestamp]]"
},
"status": 400
}
I think it's due to the fact that now you have multiple `@timestamp` fields.
Also, `from test* | eval x = to_datetime(@timestamp), y = to_string(@timestamp)` actually returns multiple `@timestamp` fields (with different types), including the original `unsupported` one. I don't think it's what we want... we should at least implicitly drop them after the renaming, otherwise basic operations like a DROP will become difficult.
Answers to the three comments above:
- I've added tests like this now, creating multiple columns from a single field, each with different types
- The error generated by using the wrong convert function is indeed cryptic. This turned out to be a bug, and the error is being generated from the evaluator method at runtime, instead of the resolve method at planning time. I've added a new test, and am working to fix this so we generate the correct error at planning (validation) time.
- The error generated looks like what I would have expected, but I agree that dropping this column should work as it does with the unsupported column in the previous case. I'll investigate.
> so we generate the correct error at planning (validation) time.

I wonder if it makes sense to just short-circuit the wrong conversions, returning null. It would allow managing field name collisions where a single conversion cannot be applied (e.g. when the dataset contains useful data + garbage).
But the third issue is more subtle. The idea was to generate a new field with the same name as the old one; the old one would satisfy use cases where we expect to see `unsupported` and `null`, while the new one would satisfy the use cases where we expect the type to be converted. This seems to have worked great for a wide number of cases, but fails on the `DROP field` command, which I had not tested. It seems like we need to update `DROP` to handle this case. I don't want to remove the new field, because it still covers the case where we do not DROP, and we expect to see the extra field.
I think the new fields have to be dropped anyway, even without a DROP. Consider the following:
PUT index1
{
"mappings": {
"properties": {
"@timestamp": {
"type": "long"
},
"name": {
"type": "keyword"
}
}
}
}
PUT index2
{
"mappings": {
"properties": {
"@timestamp": {
"type": "date"
},
"name": {
"type": "keyword"
}
}
}
}
from test* | eval x = to_datetime(@timestamp)
{
"columns": [
{
"name": "@timestamp",
"type": "unsupported"
},
{
"name": "name",
"type": "keyword"
},
{
"name": "@timestamp",
"type": "date"
},
{
"name": "x",
"type": "date"
}
],
"values": []
}
We have two @timestamp
fields now, that is unexpected
OK, I've written a few more tests, and indeed if we don't explicitly type KEEP
we do end up with multiple columns with the same name, as you suspected. So likely the fix to that will also fix the DROP
error. All my existing tests used KEEP
to ensure predictable column ordering, but it turns out they had the side effect of hiding this issue.
I don't want to remove the new field, because it still covers the case where we do not DROP, and we expect to see the extra field.
As a user, I expect to see only the value of x in this case; the extra @timestamp field is an implementation detail that should not appear in the result, IMHO.
It is fixed in the latest commits, including tests that assert the new fields are not in the final output.
I might have missed it, but I don't see the basic test (see my previous comment):
FROM sample_data
EVAL as_ip = TO_IP(client_ip), as_string = TO_STRING(client_ip)
or
FROM sample_data
EVAL as_ts = TO_DATETIME(@timestamp), as_string = TO_STRING(@timestamp)
This class is both great and scary at the same time.
This issue still hasn't been addressed - is it a leftover or is blocked by a dependency?
I commented on this elsewhere. We discussed this in the team meeting last week and decided to keep this change. I commented on this last week in other places in this PR, but can summarize things here too. The TLDR is that there were two QL classes I had modified:
InvalidMappedField
- Relatively large change, but purely additive, so it does not change QL behaviour at all.
- Moving this to ESQL was not too complex.
- We decided to move it to QL, so that later work on moving all classes to ESQL would be less messy.
FieldAttribute
- Tiny change that everyone agreed was probably an improvement for both QL and ESQL, and no one could think of a reason it should not be done in QL itself.
- Attempts to move this to ESQL were very complex.
- We decided to leave the change in QL, both because it was complex to move and because it was perceived as an improvement to FieldAttribute anyway.
I had a chat with Alex, who was willing to delay his major QL->ESQL port work to wait for union-types to merge. But I said he did not have to wait, and I will handle the massive merge conflict when it comes. In this case, the FieldAttribute in QL will not need the change.
I don't see the basic test I mentioned some time ago that shows the union type and its various facets at the same time:
FROM sample_data
EVAL as_ip = TO_IP(client_ip), as_string = TO_STRING(client_ip)
Same for @timestamp.
These tests were written in the file 140_union_types.yml, where it was easier to create more than just two index mappings, and also where I could write negative tests (tests asserting that correct errors were thrown). The tests covered combinations of date, IP and long types, and used functions like TO_IP and TO_LONG, separately and together, but did not cover @timestamp conversion. Instead that was covered in union_types.csv-spec using the mapping sample_data_ts.json, where I combined TO_DATETIME with TO_IP. I also have tests with TO_STRING, and nested functions like TO_STRING(TO_IP(...)).
I just noticed that one difference between the tests I wrote and what you are asking for is the creation of multiple columns from the same original field. I'm writing a test for that now.
OK. I've pushed a test that creates multiple columns from the same field.
Both IP and Date are tested
And added more tests
This fix simply removes the original field already at the EsRelation level, which covers all test cases but has the side effect that the final field is no longer unsupported/null when the alias does not overwrite the field with the same name. This is not exactly the correct semantic intent: the original field name should be unsupported/null unless the user explicitly overwrote the name with `field=TO_TYPE(field)`, which effectively deletes the old field anyway.
This also fixes the issue with the previous fix that incorrectly reported the converted type for the original field.
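The conversion the analyzer injects can be sketched in plain Python (illustrative only, not Elasticsearch code; the index and field names follow the earlier example): each index contributes its own converter for the union-typed field, so all values reach the rest of the plan as a single type.

```python
# Illustrative sketch of per-index conversion for a union-typed @timestamp:
# a long (epoch millis) in index1 and a date string in index2. Applying
# TO_DATETIME resolves both to one type before any downstream function runs.
from datetime import datetime, timezone

converters = {
    "index1": lambda v: datetime.fromtimestamp(v / 1000, tz=timezone.utc),
    "index2": lambda v: datetime.fromisoformat(v),
}

rows = [
    ("index1", 1698069727000),                # epoch millis
    ("index2", "2023-10-23T13:55:01+00:00"),  # ISO date string
]
converted = [converters[index](raw) for index, raw in rows]
# Every value is now a timezone-aware datetime, regardless of source index.
assert all(isinstance(value, datetime) for value in converted)
```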
Now that the QL code has been ported to esql.core, we can make the edits directly in InvalidMappedField instead of having one class extend the other.
Thanks for the updates - 👍 that the majority of this PR is made up of tests.
newIndexWriterConfig().setMergePolicy(NoMergePolicy.INSTANCE).setMaxBufferedDocs(IndexWriterConfig.DISABLE_AUTO_FLUSH)
    )
) {
    for (int d = 0; d < size; d++) {
This code is highly specific to this test - it would be useful to have a description of what the docs look like since there's no templating or external file to look at.
Maintaining a 2K file is going to be a challenge - it's over half of the PR.
In time we should try and externalize the field extractor creation - but that's for another time.
required_capability: metadata_fields

FROM sample_data, sample_data_str METADATA _index
| EVAL host_ip = TO_STRING(TO_IP(client_ip))
👍
required_capability: metadata_fields

FROM sample_data, sample_data_str METADATA _index
| WHERE STARTS_WITH(TO_STRING(client_ip), "172.21.2")
👍
FROM sample_data* METADATA _index
| EVAL @timestamp = TO_DATETIME(@timestamp), client_ip = TO_IP(client_ip)
| KEEP _index, @timestamp, client_ip, event_duration, message
👍
// Otherwise drop the converted attributes after the alias function, as they are only needed for this function, and
// the original version of the attribute should still be seen as unconverted.
plan = dropConvertedAttributes(plan, unionFieldAttributes);
This should not be needed - let's talk more on why this happens.
This was the fix to one of the issues that Luigi reported. Without this, there were multiple fields with the same name in the output: the original field with unsupported type (and null values) and the converted field (or many converted fields, if the same field was converted multiple times) with their converted values. But in reality all the converted fields are immediately aliased to whatever was written in the query, so the internal converted fields need to be removed.
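The effect of dropping the synthetic attributes can be sketched in plain Python (a hypothetical representation, not the actual FieldAttribute class): once the EVAL alias has consumed a converted attribute, it is filtered out so it does not surface as an extra column.

```python
# Sketch of dropping synthetic converted attributes after aliasing.
# Attributes are modeled as (name, type, synthetic) triples; the names
# mirror the sample_data examples in this thread.

def drop_converted(attributes):
    """Keep only non-synthetic attributes for the output schema."""
    return [attr for attr in attributes if not attr[2]]

attributes = [
    ("client_ip", "unsupported", False),  # original multi-typed field
    ("client_ip", "ip", True),            # synthetic, consumed by the alias
    ("host_ip", "keyword", False),        # the alias produced by the EVAL
]
assert drop_converted(attributes) == [
    ("client_ip", "unsupported", False),
    ("host_ip", "keyword", False),
]
```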
Started looking at the PR, and the first thing that jumped out at me (I stopped reviewing at that point, to have both a hopefully useful review and a quick one) was the new FieldAttributes that get created, more specifically in the Analyzer: FieldAttribute resolvedAttr = new FieldAttribute(source, null, field.getName(), field, null, Nullability.TRUE, id, false); (note the null parent argument).
The hierarchy of fields in the QL ecosystem is tricky, but that's not the point. I think you shouldn't ignore that "parent" constructor argument, and should think of this as a whole new use case where the fields that can be converted are sub-fields of other fields.
An example below:
test1
{
  "mappings": {
    "properties": {
      "obj": {
        "properties": {
          "keyword": { "type": "keyword" },
          "integer": { "type": "integer" }
        }
      },
      "keyword": { "type": "keyword" },
      "integer": { "type": "integer" }
    }
  }
}
test2
{
  "mappings": {
    "properties": {
      "obj": {
        "properties": {
          "keyword": { "type": "boolean" },
          "integer": { "type": "version" }
        }
      },
      "keyword": { "type": "boolean" },
      "integer": { "type": "version" }
    }
  }
}
Test data
{ "index": {"_id": 1, "_index":"test1"} }
{ "obj.keyword": "true", "obj.integer": 100, "keyword": "true", "integer": 100 }
{ "index": {"_id": 2, "_index":"test1"} }
{ "obj.keyword": "US", "obj.integer": 20, "keyword": "US", "integer": 20 }
{ "index": {"_id": 12, "_index":"test2"} }
{ "obj.keyword": "true", "obj.integer": "50", "keyword": "true", "integer": "50" }
{ "index": {"_id": 22, "_index":"test2"} }
{ "obj.keyword": false, "obj.integer": "1.2.3", "keyword": false, "integer": "1.2.3" }
Query to test:
from test* metadata _id | eval v = to_version(obj.integer), s = to_string(obj.keyword) | keep v, s, _id
and even simpler
from test* metadata _id | eval s = to_string(obj.keyword) | keep s, _id
fails with
NullPointerException: Cannot invoke "org.elasticsearch.xpack.esql.core.expression.Expression.dataType()" because "convertExpr" is null
at org.elasticsearch.xpack.esql.type.MultiTypeEsField.resolveFrom(MultiTypeEsField.java:57)
at org.elasticsearch.xpack.esql.analysis.Analyzer$ResolveUnionTypes.resolvedMultiTypeEsField(Analyzer.java:1038)
at org.elasticsearch.xpack.esql.analysis.Analyzer$ResolveUnionTypes.resolveConvertFunction(Analyzer.java:1004)
at org.elasticsearch.xpack.esql.analysis.Analyzer$ResolveUnionTypes.lambda$doRule$0(Analyzer.java:949)
while the following one works:
from test* metadata _id | eval s = to_string(keyword) | keep s, _id
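One plausible reading of the reviewer's point can be sketched in plain Python (a hypothetical representation, not the actual QL field hierarchy): a sub-field like "obj.keyword" has "obj" as its parent, and a replacement FieldAttribute built with a null parent loses that link, which is what the NullPointerException above points at.

```python
# Sketch of the parent link for dotted sub-field names (hypothetical helper,
# not an Elasticsearch API): top-level fields have no parent, while
# "obj.keyword" is a child of "obj".

def parent_of(field_name):
    """Return the parent path of a dotted field name, or None at top level."""
    return field_name.rsplit(".", 1)[0] if "." in field_name else None

assert parent_of("obj.keyword") == "obj"
assert parent_of("keyword") is None  # top-level field: no parent to lose
```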
If the query sources multiple indexes, and the same field exists in multiple indexes with different types, this would normally fail the query. However, if the query includes a conversion function to resolve the field to a single type before it is used in other functions or aggregations, then this should work.
The following query works in this third prototype:
The client_ip field is an IP in the sample_data index, but a keyword in the sample_data_str index.
The first prototype did stuff to the drivers to create an index-specific DriverContext to use during field evaluator construction, so that the conversion function would be index/type aware. However, that abuses the idea of multi-threaded drivers. So the second prototype took a new approach: re-plan the logical plan to extract the converter from the EVAL expressions, setting them as resolved (claiming the input type is already the converted type), and store the converter in the EsRelation for later use in physical planning. This third prototype takes this further by replacing the conversion function with a new FieldAttribute. And both old and new FieldAttributes exist in parallel, so that the logic around handling unsupported fields is not changed.
Fixes #100603
Tasks to do:
- LoadFromMany and loadFromSingleLeaf
- Conversions to Date are not working
- Without KEEP we get multiple columns with the same name