
DRILL-8188: Convert HDF5 format to EVF2 #2515

Merged · 7 commits merged into apache:master on Jan 15, 2024

Conversation

@luocooong (Member) commented Apr 4, 2022

DRILL-8188: Convert HDF5 format to EVF2

Description

Use EVF V2 instead of the old V1.

Includes two bug fixes in the V2 framework:

1. An error caused by projecting an unprojected column in an array object
2. An IndexOutOfBoundsException when adding a column

Note: these items were fixed in the context of DRILL-8375.

Documentation

N/A

Testing

  1. Use the CI.
  2. Add tests for the bug fixes.

@luocooong added the bug, refactoring (PR related to code refactoring) and dependencies labels on Apr 4, 2022
@luocooong luocooong self-assigned this Apr 4, 2022
@cgivre (Contributor) left a comment

LGTM +1. I'll wait for @paul-rogers review for the final word on the other changes, but the HDF5 reader looks good to me.

@paul-rogers (Contributor) left a comment

@luocooong, thanks for doing this conversion. I'm glad to see you are getting to be an expert in this part of Drill!

The review has a few minor comments and a few questions. Other than that, nice work!

// Case for datasets of greater than 2D
// These are automatically flattened
buildSchemaFor2DimensionalDataset(dataSet);
{ // Opens an HDF5 file
Contributor:

Is the nested block necessary? It is handy when we want to reuse variable names (as sometimes occurs in tests), but is it needed here?

Member Author:

An interesting thing: with EVF2, the batch reader only needs a constructor to start all of the initialization operations. But that means writing a lot of code in that one function: initializing the final variables, opening files, defining the schema, and so on. So I used code blocks to group these actions and make them easier to read.
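
A minimal sketch of that pattern (hypothetical constructor signature and contents, just to illustrate the grouping):

public HDF5BatchReader(HDF5ReaderConfig readerConfig, EasySubScan scan,
                       FileSchemaNegotiator negotiator) {
  { // Opens the HDF5 file
    // hdfFile = ...; open the input stream and wrap it
  }
  { // Defines the table schema, when it is known up front
    // negotiator.tableSchema(builder.buildSchema(), false);
  }
  { // Builds the loader and obtains the column writers
    // loader = negotiator.build();
  }
}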

Contributor:

I guess some of these could become private methods but it's a minor point for me.

@@ -173,7 +173,7 @@ public ColumnHandle insert(int posn, ColumnMetadata col) {
   }

   public ColumnHandle insert(ColumnMetadata col) {
-    return insert(insertPoint++, col);
+    return insert(insertPoint == -1 ? size() : insertPoint++, col);
Contributor:

What was the bug here? I'm wondering if there is something else broken, and the above is a workaround for that other bug.

Member Author:

When I fixed the following error, I received a new one: dynamically create a repeated list field (successfully) in next(), and that field may then be assigned an index value of -1.
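
For context, the fixed method with the guard spelled out (the comments are my reading of the semantics, not from the original source):

public ColumnHandle insert(ColumnMetadata col) {
  // insertPoint == -1 means no insert position has been recorded yet,
  // which can happen when a column is created dynamically in next().
  // In that case, fall back to appending at the end of the schema.
  return insert(insertPoint == -1 ? size() : insertPoint++, col);
}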

@@ -189,7 +189,7 @@ private void insertColumn(ColumnMetadata col) {
     switch (mode) {
       case FIRST_READER_SCHEMA:
       case READER_SCHEMA:
-        if (schema.projectionType() != ProjectionType.ALL) {
+        if (schema.projectionType() != ProjectionType.ALL && !col.isArray()) {
Contributor:

Why is an array column special here? I think this is trying to say that, if the array is not projected, it should not have been created. There are dummy structures used instead. This fix suggests that there is a bug somewhere other than here.

Member Author (@luocooong) commented Apr 16, 2022:

Let's look at this case:

  1. The batch reader receives the projected columns, from SELECT path, data_type, file_name, in the constructor.
  2. The schema is TupleSchema [ProjectedColumn [path (LATE:REQUIRED)], ProjectedColumn [data_type (LATE:REQUIRED)], ProjectedColumn [file_name (LATE:REQUIRED)]].
  3. ProjectionType = 'SOME'.
  4. The batch reader creates a new repeated list column in next().
  5. The projection is applied to the schema:

ColumnHandle existing = schema.find(colSchema.name());
if (existing == null) {
  insertColumn(colSchema);
}

  6. insertColumn() catches and throws the IllegalStateException.
  7. Reading the format data fails.

In EVF1, creating repeated list fields dynamically is allowed and does not throw such exceptions.
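
For illustration, a sketch of the dynamic column creation in step 4 (API usage from memory; exact package paths may differ by Drill version):

// Declare a repeated list (2D array) column after reading has started.
TupleMetadata listSchema = new SchemaBuilder()
    .addRepeatedList("int_list")
      .addArray(MinorType.INT)
    .resumeSchema()
    .buildSchema();

RowSetLoader rowWriter = loader.writer();
// Under SELECT path, data_type, file_name this column is unprojected,
// so EVF2 should hand back a dummy writer rather than materialize a
// vector; the bug was that the column reached insertColumn() instead.
int index = rowWriter.addColumn(listSchema.metadata("int_list"));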

Contributor:

This error check is correct: the problem must be in the ResultSetLoader. This error says that the ResultSetLoader is trying to add a materialized column, but that column is not projected. The result set loader should have created a dummy column and not passed the column along to this mechanism.

.addRepeatedList("int_list")
.addArray(MinorType.INT)
.resumeSchema()
.addRepeatedList("long_list")
Contributor:

Thanks for testing this. The two messiest vector types in Drill are the repeated Map and the repeated list. Repeated list has many, many problems. It isn't even well defined in SQL, since its types can change from row to row.

I wonder, why do we need a repeated list for this reader? Because we want a 2D array? What would a Drill user do with a 2D array?

Member Author:

These tests verify the above fix. Would it be better to split them into a new test class?

.addNullable(DATASET_DATA_TYPE_NAME, MinorType.VARCHAR)
.addNullable(DIMENSIONS_FIELD_NAME, MinorType.VARCHAR);

negotiator.tableSchema(builder.buildSchema(), false);
Contributor:

In this path we are telling EVF2 the schema to use. The false argument says we are free to add columns later.
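
In other words (a short sketch; the second parameter is the isComplete flag):

// false: the schema is open; the reader may still add columns later
// via rowWriter.addColumn(...).
negotiator.tableSchema(builder.buildSchema(), false);
// Passing true instead would promise a fixed schema; defining a new
// column afterwards would then be an error.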

Contributor:

It would be nice to have the GitHub comment above this one as a comment above line 214 in the code.

Dataset dataSet = hdfFile.getDatasetByPath(readerConfig.defaultPath);
dimensions = dataSet.getDimensions();

loader = negotiator.build();
Contributor:

In this path, we did not tell EVF2 about the schema. This means EVF2 expects us to discover columns as we go along. Is this the intent? The comment can be read as saying either "Drill can obtain the schema NOW" or "Drill can obtain the schema LATER".

But, it is odd, within the same reader, to provide a schema down one path, and not in the other path. It almost seems that there are two distinct readers in this case.


@paul-rogers (Contributor):

This PR is getting a bit complex with the bug or two that it uncovered. Let me explain a bit about how EVF2 works. There are two cases: wildcard projection (SELECT *) and explicit projection (SELECT a, b, c). The way EVF2 works is different in these two cases.

Then, for each reader, there are three other cases. The reader might know all its columns before the file is even opened. The PCAP reader is an example: all PCAP files have the same schema, so we don't need to look at the file to know the schema. The second case is files where we can learn the schema when opening the file. Parquet and CSV are examples: we can learn the Parquet schema from the file metadata, and the CSV schema from the headers. The last case is where we don't know the schema until we read each row. JSON is the best example.

So, now we have six cases to consider. This is why EVF2 is so complex!

For the wildcard, EVF2 "discovers" columns as the reader creates them: either via the up-front schema, or as the reader reads data. In JSON, for example, we can discover a new column at any time. Once a column is added, EVF2 will automatically fill in null values if values are missing. In the extreme case, it can fill in nulls for an entire batch. Because of the wildcard, all discovered columns are materialized and added to the result set. If reading JSON, and a column does not appear until the third batch, then the first two won't contain that column, but the third batch will have a schema change and will include the column. This can cause a problem for operators such as joins, sorts, or aggregations that have to store a collection of rows; not all of them can handle a schema change.

Now, for the explicit schema case, EVF2 knows what columns the user wants: those in the list. EVF2 waits as long as it can, hoping the reader will provide the columns. Again, the reader can provide them up front, before the first record, or as the read proceeds (as in JSON). As the reader provides each column, EVF2 has to decide: do we need that column? If so, we create a vector and a column writer: we materialize the column. If the column is not needed, EVF2 creates a dummy column writer.

Now the interesting part. Suppose we get to the end of the first batch, the query wants column c, and the reader has never defined column c. What do we do? In this case, we have to make something up. Historically, Drill would make up a nullable INT with all-null values. EVF added the ability to specify the type for such columns, and we use that. If a provided schema is available, then the user tells us the type.

Now we get to another interesting part. What if we guessed, say, Varchar, but the column later shows up as a JSON array? We're stuck: we can't go back and redo the old batches. We end up with a "hard" schema change. Bad things happen unless the query is really simple. This is the fun of Drill's schemaless system.

With that background, we can try to answer your question. The answer is: it depends. If the reader says, "hey Mr. EVF2, here is the full schema I will read, I promise not to discover more columns", then EVF2 will throw an exception if later you say, "ha! just kidding. Actually, I discovered another one." I wonder if that's what is happening here.

If, however, the reader left the schema open, and said, "here are the columns I know about now, but I might find more later", then EVF2 will expect more columns, and will handle them as above: materialize them if they are projected or if we have a wildcard, provide a dummy writer if we have explicit projection and the column is not projected.

In this PR, we have two separate cases in the reader constructor.

  • In the if path, we define a "reader schema" and reserve the right to add more columns later. That's what the false argument to tableSchema() means.
  • In the else path, we define no schema at all: we never call tableSchema().

This means the reader is doing two entirely different things. In the if case, we define the schema and we just ask for column writers by name. In the else case, we don't define a schema, and we have to define the column when we ask for the column writers.

This seems horribly complicated! I wonder, are we missing logic in the then case? Or, should there be two distinct readers, each of which implements one of the above cases?
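
To make the two paths concrete, a hedged sketch of the constructor logic as I read it (the condition and the builder variable are my reconstruction, not the actual code):

if (readerConfig.defaultPath == null) {
  // "if" path: the metadata schema is known up front; declare it now.
  // false leaves the schema open to later additions.
  negotiator.tableSchema(builder.buildSchema(), false);
} else {
  // "else" path: no tableSchema() call at all; the dataset's columns
  // are discovered and defined via addColumn() as reading proceeds.
}
loader = negotiator.build();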

@paul-rogers (Contributor):

Found the bug. It is in ColumnBuilder which seems to be missing code to handle an unprojected repeated list. This bug then caused the other "bugs" that we discussed in the review: those bits of code are working as they should. The problem is that the result set loader is materializing a vector when it should not. It will take some time to remember how all this stuff works. Stay tuned.

@luocooong (Member Author):

> Found the bug. It is in ColumnBuilder which seems to be missing code to handle an unprojected repeated list. This bug then caused the other "bugs" that we discussed in the review: those bits of code are working as they should. The problem is that the result set loader is materializing a vector when it should not. It will take some time to remember how all this stuff works. Stay tuned.

@paul-rogers Great! Thank you for the quick work.
Following the end of the most recent discussion, I completely rejected my previous code revision, guessing that the unprojected handling might have been lost. Actually, I've added this function locally, but I'm not sure it's correct. Would you mind checking mine before you submit the new revision?

@cgivre (Contributor) commented May 26, 2022

Hi @luocooong, thank you for this PR. Where are we in terms of getting it merged?

@luocooong (Member Author):

Hi @cgivre, thank you for paying attention to this PR.
The pull request cannot be merged yet: Paul is going to re-review the V2 section code, and we're going to fix the bugs above in the framework.

@jnturton jnturton marked this pull request as draft July 11, 2022 08:30
@jnturton (Contributor):

Converted to draft to prevent merging.

@cgivre (Contributor) commented Nov 2, 2022

Hey @luocooong @paul-rogers I hope all is well. I wanted to check in on this PR to see where we are. At this point, nearly all the other format plugins have been converted to EVF V2.

The other outstanding ones are the image format and LTSV. I'd really like to see this merged so that we can remove the EVF V1 code.

Do you think we could get this ready to go soon?

@cgivre (Contributor) commented Jan 9, 2024

I think I hosed the version control somehow.... This PR should only modify a few files in the HDF5 reader.

@paul-rogers (Contributor):

It seems you did this work on top of the master with my unsquashed commits. When you try to push, those commits come along for the ride. I think you should grab the latest master, then rebase your branch on it.

Plan B is to a) grab the latest master, and b) create a new branch that cherry-picks the commit(s) you meant to add.

If even this doesn't work, then I'll clean up this branch for you since I created the mess in the first place...

@cgivre cgivre removed the bug label Jan 9, 2024
@cgivre (Contributor) commented Jan 9, 2024

@paul-rogers I attempted to fix. I kind of suck at git, so I think it's more or less correct now, but there was probably a better way to do this.

@jnturton (Contributor) commented Jan 9, 2024

> @paul-rogers I attempted to fix. I kind of suck at git, so I think it's more or less correct now, but there was probably a better way to do this.

I think you still want something like

git pull --rebase upstream master
git push --force-with-lease

@jnturton (Contributor) commented Jan 9, 2024

I see Git's "patch contents already upstream" feature doesn't automatically clean up the unwanted commits. I've dropped them manually in a new branch in my fork and now suggest

git reset --hard origin/master
git pull --rebase https://github.com/jnturton/drill.git 8188-hdf5-evf2
git push --force # to luocooong's fork

@cgivre (Contributor) commented Jan 10, 2024

@jnturton I did as you suggested. Would you mind please taking a look?

@jnturton (Contributor):

> @paul-rogers I attempted to fix. I kind of suck at git, so I think it's more or less correct now, but there was probably a better way to do this.

Just working through the review comments that @paul-rogers left (the ones unrelated to the needed functionality that was missing from EVF2).

@paul-rogers (Contributor):

Did the recent EVF revisions allow the tests for this PR to pass? Is there anything that is still missing? Also, did the excitement over my botched merge settle down and are we good now?

@cgivre (Contributor) commented Jan 11, 2024

> Did the recent EVF revisions allow the tests for this PR to pass? Is there anything that is still missing? Also, did the excitement over my botched merge settle down and are we good now?

All the unit tests pass... whether that means that everything is working is another question. This plugin has a decent number of tests, so I'd feel pretty good.



}

private HDF5DataWriter buildWriter(MinorType dataType) {
switch (dataType) {
/*case GENERIC_OBJECT:
return new HDF5EnumDataWriter(hdfFile, writerSpec, readerConfig.defaultPath);*/
/* case GENERIC_OBJECT:
Contributor:

I think we should have an explanatory comment if we're going to keep commented out code.

@jnturton jnturton merged commit 04289f0 into apache:master Jan 15, 2024
8 checks passed