
Proposed change in AvroSchema to handle circular references. #445

Open
wants to merge 1 commit into master

Conversation

wernerdaehn

https://issues.apache.org/jira/browse/PARQUET-129
@rdblue

This is a suggestion only at the moment and is not complete. I need help completing the feature (see below); it should not be much work for a more experienced person, I hope.
I have added a Patient.asvc schema to the resource folder as a complex example.

Note:

  • The code has one TODO.
  • It still needs the matching changes to actually write the records into the correct fields.
  • The reader is not impacted, only the writing.
  • There is no backward-compatibility issue: such circular schemas are currently not possible at all, and nothing changes for non-circular schemas.
  • There is no issue with Spark etc., as the reader would still return regular fields; nothing Spark cannot handle.

I tried to be as conservative as possible so as not to break anything; however, I had to change the GroupType and MessageType classes to allow creating them first and adding fields second. That makes sense anyway: you do not always have the complete list of fields at the point in time the object is instantiated.
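The two-phase construction mentioned above can be sketched like this (a minimal illustration with invented names; this is not the actual parquet-mr GroupType/MessageType API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a group type that can be instantiated empty and have
// its fields added afterwards. This two-phase construction lets a parent
// group exist before its children are resolved, which is exactly what
// handling circular references needs.
class MutableGroupType {
    final String name;
    private final List<String> fields = new ArrayList<>();

    MutableGroupType(String name) { this.name = name; }

    // Fields are added after construction, so a child can already refer
    // back to this group while the field list is still being built.
    void addField(String field) { fields.add(field); }

    @Override
    public String toString() { return name + fields; }
}

public class TwoPhaseDemo {
    public static void main(String[] args) {
        // Phase 1: create the group so children can already refer to it.
        MutableGroupType person = new MutableGroupType("Person");
        // Phase 2: add the fields once they are known.
        person.addField("firstname: STRING");
        person.addField("children: INT64"); // pointer back into Person
        System.out.println(person);
    }
}
```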

Just to define the term "circular references" correctly, we have the following scenarios:

  1. SchemaN is defined in one element and reused in a parallel field, e.g. there are two fields, buildingaddress and postaladdress; the first defines the record type ADDRESS, the second reuses it. --> no problem.
  2. SchemaN has a field that is of type SchemaN --> a direct circular reference, causing a stack overflow.
  3. SchemaN has a field that has a field that... has a field of type SchemaN --> an indirect circular reference, causing a stack overflow.

The way to resolve this is to check, before creating a new GroupType for SchemaN, whether such a schema has already been created in one of the parents. It should trigger only for cases 2 and 3, not for case 1 above.
When that happens, instead of creating the full GroupType for this field, the field becomes an INT64 pointer, and the SchemaN definition gets an additional INT64 field holding the reference pointer.

So you end up with rows coming from the first occurrence of that schema, all having that INT64 empty, plus rows coming from the child columns of type SchemaN, with the INT64 set.
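The ancestor check described above could be sketched roughly as follows (class and method names are invented for illustration; this is not the parquet-mr converter):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// A toy record schema: field values are either another Rec or a primitive
// type name such as "STRING".
class Rec {
    final String name;
    final Map<String, Object> fields = new LinkedHashMap<>();

    Rec(String name) { this.name = name; }
}

public class CycleCheck {
    // Before converting a nested record, look for its name in the chain of
    // ancestor records. A hit means case 2 or 3 (a true cycle), so an INT64
    // pointer is emitted instead of recursing. Reuse in a sibling branch
    // (case 1) never appears in the ancestor chain and expands normally.
    static List<String> convert(Rec rec, Deque<String> ancestors, List<String> out) {
        ancestors.push(rec.name);
        for (Map.Entry<String, Object> f : rec.fields.entrySet()) {
            if (f.getValue() instanceof Rec child) {
                if (ancestors.contains(child.name)) {
                    // circular reference: replace the nested group with a pointer
                    out.add(f.getKey() + ": INT64 (pointer to " + child.name + ")");
                } else {
                    convert(child, ancestors, out);
                }
            } else {
                out.add(f.getKey() + ": " + f.getValue());
            }
        }
        ancestors.pop();
        return out;
    }

    public static void main(String[] args) {
        Rec person = new Rec("Person");
        person.fields.put("firstname", "STRING");
        person.fields.put("children", person); // direct cycle, case 2
        System.out.println(convert(person, new ArrayDeque<>(), new ArrayList<>()));
    }
}
```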

Example for Case 2:
Person: {firstname STRING, lastname STRING, [children: Person]}
would be turned into
Person: {firstname STRING, lastname STRING, [children: INT64], __ID INT64}

My example Patient.asvc AvroSchema is converted into a nice Parquet schema, that is working.
What is missing:

  1. SchemaN might not be an array in the initial definition but just a single nested structure. In that case we cannot add more records to it, so it has to be turned into a repeated schema. I am not sure I understand everything about the Parquet model, hence I would like you to do that part.
  2. The entire data writing has to be implemented.

The reason I have not implemented the writer is for the following questions:

  • That __ID column: can we reuse an internal pointer for it? Something that would make joins faster by jumping to the record directly instead of scanning the column? Not important for Spark, but it might be interesting later.
  • When creating the Parquet schema, can we add properties to tell the writer where to add the record? Currently that can be derived from the AvroSchema, but it requires building the tree in the writer a second time; I would like to avoid that.

@rdblue
Contributor

rdblue commented Jan 5, 2018

@wernerdaehn, thanks for working on this.

The approach here appears to be to replace nested fields with an ID and then add a list of the circular type to the type itself. For example, if we had this tree class:

class Node:
  Node left;
  Node right;

That would get turned into this version:

class NestedNode:
  long __ID;
  long left__ID;
  long right__ID;

class Node:
  long __ID;
  long left__ID;
  long right__ID;
  list<NestedNode>;

Is that right?

@wernerdaehn
Author

You have a root record called Node. It has a field called left of the same type, hence that field's datatype is changed to long. The same happens for the field right. And because the root record now has to store multiple elements, it needs to be turned into an array.
Hence the result would be:

class Node:
  long __ID;
  long left;  // datatype Node replaced with the ID and with the given ID the data can be found in the parent
  long right; // datatype Node replaced with the ID and with the given ID the data can be found in the parent

list<Node>; // As the class Node is a single entry only but will have multiple rows now, it needs to be repeatable

(Your example is a bit weird because there is no actual payload, which makes it hard to read.)

@wernerdaehn
Author

wernerdaehn commented Jan 6, 2018

A better explanation: whenever a child (= left Node, right Node in your example) has the same datatype as a parent, the child holds the instance pointer, and the actual data is stored in the parent as a new record identified by that pointer.
So instead of storing the data there, you store a pointer and store the record elsewhere.
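That pointer-and-record-elsewhere idea can be illustrated on the data side (the row format, field names, and ID assignment here are assumptions for illustration, not the actual writer):

```java
import java.util.ArrayList;
import java.util.List;

// A recursive in-memory tree, as in the Node example above, but with a
// payload field so the rows are readable.
class TreeNode {
    String payload;
    TreeNode left, right;

    TreeNode(String payload) { this.payload = payload; }
}

public class FlattenDemo {
    // Depth-first flattening: each node becomes one flat row, and the left/
    // right fields store the child's __ID pointer instead of the nested
    // record. Returns the __ID assigned to this node, or -1 for no child.
    static long flatten(TreeNode n, List<String> rows, long[] nextId) {
        if (n == null) return -1;
        long id = nextId[0]++;
        long left = flatten(n.left, rows, nextId);
        long right = flatten(n.right, rows, nextId);
        rows.add("__ID=" + id + " payload=" + n.payload
                + " left=" + left + " right=" + right);
        return id;
    }

    public static void main(String[] args) {
        TreeNode root = new TreeNode("root");
        root.left = new TreeNode("L");
        List<String> rows = new ArrayList<>();
        flatten(root, rows, new long[]{0});
        rows.forEach(System.out::println);
    }
}
```

Every node lands in the same repeated group, and a reader can reassemble the tree by joining the pointer columns against __ID.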

@jinyius
Contributor

jinyius commented Jun 9, 2022

Any updates here?
