
[BEAM-10139][BEAM-10140] Add cross-language support for Java SpannerIO with python wrapper #12611

Merged
merged 6 commits into from
Nov 16, 2020

Conversation

pjotrekk

@pjotrekk pjotrekk commented Aug 18, 2020

What is left:

  • transaction support (in the future, I suppose)
  • Is representing Mutation as a Row OK? To discuss.
  • There is a lot of duplicated code in the struct -> row and row -> struct translation. I would be grateful for advice on how to deal with it (if that is possible in this case).
  • SpannerWriteResult is replaced with PDone, thus FailureMode must be FAIL_FAST.
  • To read from Spanner, a Schema needs to be added to the configuration. The Schema can be constructed from a Struct while running the pipeline, but I don't know how to pass the RowCoder to the pipeline otherwise. Whether validation of schema equality is needed is also up for discussion.
  • I didn't generify Read and Write because there are ReadAll, Transaction, etc., and this way is much less complicated. ReadRows and WriteRows shouldn't be used outside cross-language transforms.

Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Choose reviewer(s) and mention them in a comment (R: @username).
  • Format the pull request title like [BEAM-XXX] Fixes bug in ApproximateQuantiles, where you replace BEAM-XXX with the appropriate JIRA issue, if applicable. This will automatically link the pull request to the issue.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make review process smoother.

Post-Commit Tests Status (on master branch): Jenkins build-status badge table for the Go, Java, Python, and XLang SDKs across the Dataflow, Flink, Samza, Spark, and Twister2 runners (badge images omitted).

Pre-Commit Tests Status (on master branch): Jenkins build-status badges for Java, Python, Go, and Website, in portable and non-portable variants (badge images omitted).

See .test-infra/jenkins/README for trigger phrase, status, and link of all Jenkins jobs.

GitHub Actions Tests Status (on master branch): Build python source distribution and wheels (badge omitted).

See CI.md for more information about GitHub Actions CI.

@pjotrekk
Author

@TheNeuralBit I know it's merciless of me to give you such a big PR to review, but I think you're the most up-to-date person on rows and schemas :) There are some unit tests and TODOs left, but overall I think it's almost complete. The integration tests work well on FlinkRunner.

@TheNeuralBit TheNeuralBit self-requested a review August 18, 2020 18:18
@TheNeuralBit
Member

No worries I'm happy to help review :) It might take me a few days to get to it though.

Regarding testing: we could consider adding a Spanner instance to apache-beam-testing for integration testing; I'd suggest raising it on dev@ if you want to pursue it. I also just came across https://cloud.google.com/spanner/docs/emulator which could be a good option too. It's a Docker container that starts up an in-memory version of Spanner to test against.
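A minimal sketch of pointing client code at the emulator: the `SPANNER_EMULATOR_HOST` environment variable is the emulator's documented hook, and 9010 is its default gRPC port (the helper function here is illustrative, not part of any library):

```python
import os

# The Cloud Spanner emulator listens on gRPC port 9010 by default;
# client libraries pick it up from this environment variable.
os.environ["SPANNER_EMULATOR_HOST"] = "localhost:9010"

def spanner_endpoint() -> str:
    """Return the endpoint client libraries would connect to."""
    return os.environ.get("SPANNER_EMULATOR_HOST", "spanner.googleapis.com")
```

With the variable set, integration tests can run against the in-memory emulator instead of a real apache-beam-testing instance.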

@pjotrekk
Author

pjotrekk commented Aug 19, 2020

Regarding testing: we could consider adding a Spanner instance to apache-beam-testing for integration testing; I'd suggest raising it on dev@ if you want to pursue it. I also just came across https://cloud.google.com/spanner/docs/emulator which could be a good option too. It's a Docker container that starts up an in-memory version of Spanner to test against.

@TheNeuralBit Great advice as always! I tried to find something like this emulator on Docker Hub but without success. I managed to use this emulator successfully; it has much better support than LocalStack has for AWS.

Few comments about this PR:

I am almost certain that the Schema doesn't have to be sent as proto in Read but I didn't come up with anything else.

Another issue is representing the Mutation - for now it's a Row containing 4 fields: operation, table, rows and key_set. It works quite well, but I wonder whether I can do it better.
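The four-field Mutation-as-Row layout described here can be sketched with plain NamedTuples (class and field names are illustrative, not the PR's actual types):

```python
from typing import List, NamedTuple, Optional

class ExampleRow(NamedTuple):
    id: int
    name: str

# A mutation modeled as a row: one operation/table pair plus either
# the rows to write or the key set identifying rows to delete.
class MutationRow(NamedTuple):
    operation: str                       # e.g. "insert", "delete"
    table: str
    rows: Optional[List[ExampleRow]]     # payload for writes
    key_set: Optional[List[ExampleRow]]  # keys for deletes

insert = MutationRow("insert", "users", [ExampleRow(1, "a")], None)
```

For a write, rows is populated and key_set is None; a delete would be the opposite.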

I erased SpannerWriteResult and return PDone for now - I don't see a way to keep it without including Spanner dependencies in java core. Because of that, the failure mode is FAIL_FAST and I didn't include it in the configuration params.

Transactions are not supported because they require a ptransform to be transferred. I suppose it's doable though and it could be a good future improvement.

FYI - I'll be OOO the next week so there is absolutely no haste :)

@pjotrekk pjotrekk force-pushed the spanner-xlang branch 2 times, most recently from e1d9001 to 2839b8e Compare August 20, 2020 13:12
@pjotrekk pjotrekk changed the title [BEAM-10131][BEAM-10140] Add cross-language support for Java SpannerIO with python wrapper [BEAM-10139][BEAM-10140] Add cross-language support for Java SpannerIO with python wrapper Aug 20, 2020
@codecov

codecov bot commented Aug 31, 2020

Codecov Report

Merging #12611 (1c43284) into master (3d6cc0e) will decrease coverage by 0.04%.
The diff coverage is 56.73%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master   #12611      +/-   ##
==========================================
- Coverage   82.48%   82.44%   -0.05%     
==========================================
  Files         455      456       +1     
  Lines       54876    54975      +99     
==========================================
+ Hits        45266    45324      +58     
- Misses       9610     9651      +41     
Impacted Files Coverage Δ
sdks/python/apache_beam/io/gcp/spanner.py 56.73% <56.73%> (ø)
...eam/runners/interactive/interactive_environment.py 89.45% <0.00%> (-0.36%) ⬇️
sdks/python/apache_beam/io/iobase.py 83.75% <0.00%> (-0.29%) ⬇️
...hon/apache_beam/runners/worker/bundle_processor.py 94.07% <0.00%> (-0.27%) ⬇️
...ks/python/apache_beam/runners/worker/sdk_worker.py 89.47% <0.00%> (-0.16%) ⬇️
...runners/interactive/display/pcoll_visualization.py 85.26% <0.00%> (-0.08%) ⬇️
...beam/runners/portability/local_job_service_main.py 0.00% <0.00%> (ø)
sdks/python/apache_beam/runners/common.py 89.20% <0.00%> (+0.44%) ⬆️
.../python/apache_beam/transforms/periodicsequence.py 98.24% <0.00%> (+1.75%) ⬆️

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 02a1cd2...6370a87. Read the comment docs.

Member

@TheNeuralBit TheNeuralBit left a comment

Thanks for the contribution Piotr! Sorry it took me until after you were back from OOO to get to this :P

I have a few high-level comments and questions


List<Schema.Field> fields = schema.getFields();
Row.FieldValueBuilder valueBuilder = null;
// TODO: Remove this null-checking once nullable fields are supported in cross-language
Member

What is the issue here? Nullable fields should be supported in cross-language

Author

NullableCoder is not a standard coder, as was mentioned here: https://issues.apache.org/jira/browse/BEAM-10529?jql=project%20%3D%20BEAM%20AND%20text%20~%20%22nullable%20python%22
So I suppose the only way to support null values is not to set them.
I noticed that when I tried to read a null field from a Spanner table. But I may be wrong.

Member

@TheNeuralBit TheNeuralBit Sep 3, 2020

Hm so it should be supported. RowCoder encodes nulls for top-level fields separately so there's no need for NullableCoder. NullableCoder is only used when you have a nullable type in a container type, e.g. ARRAY<NULLABLE INT>. This wasn't supported in Python until recently - #12426 should have fixed it though.
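The scheme Brian describes - encoding nullness for top-level fields separately rather than wrapping each field in a NullableCoder - can be illustrated with a toy encoder (a deliberate simplification of the real RowCoder wire format, fixed-width ints only):

```python
from typing import List, Optional

def encode_row(values: List[Optional[int]]) -> bytes:
    # One leading byte per field: 1 if the field is null, else 0,
    # followed by the non-null values themselves.
    nulls = bytes(1 if v is None else 0 for v in values)
    payload = b"".join(v.to_bytes(8, "big") for v in values if v is not None)
    return nulls + payload

def decode_row(data: bytes, n_fields: int) -> List[Optional[int]]:
    nulls, payload = data[:n_fields], data[n_fields:]
    out: List[Optional[int]] = []
    offset = 0
    for flag in nulls:
        if flag:
            out.append(None)
        else:
            out.append(int.from_bytes(payload[offset:offset + 8], "big"))
            offset += 8
    return out
```

Because the null flags live outside the field encodings, the per-field coders never need to handle None themselves.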

Author

I'm not sure where my message has gone, but I wrote that nulls come through with no problems; I had just used ImmutableMap, which does not allow null values. Replacing it with java.util.HashMap solved the issue.

public ReadRows(Read read, Schema schema) {
  super("Read rows");
  this.read = read;
  this.schema = schema;
Member

It would be really great if SpannerIO.ReadRows could determine the schema at pipeline construction time so the user doesn't have to specify it. In SpannerIO.Read#expand we require the user to specify either a query or a list of columns:

if (getReadOperation().getQuery() != null) {
  // TODO: validate query?
} else if (getReadOperation().getTable() != null) {
  // Assume read
  checkNotNull(
      getReadOperation().getColumns(),
      "For a read operation SpannerIO.read() requires a list of "
          + "columns to set with withColumns method");
  checkArgument(
      !getReadOperation().getColumns().isEmpty(),
      "For a read operation SpannerIO.read() requires a"
          + " list of columns to set with withColumns method");
} else {
  throw new IllegalArgumentException(
      "SpannerIO.read() requires configuring query or read operation.");
}

In both cases we're very close to a schema. We just need to analyze the query and/or get the output types for the projected columns. I looked into it a little bit, but I'm not quite sure of the best way to use the Spanner client to look up the schema. The only thing I could figure out was to start a read and look at the type of ResultSet#getCurrentRowAsStruct, which seems less than ideal.

CC @nielm who's done some work with SpannerIO recently - do you have any suggestions for a way to determine the types of the Structs that SpannerIO.Read will produce?
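The "start a read and inspect the first row" idea can be sketched in plain Python (the sample row dict and type mapping are hypothetical stand-ins for ResultSet#getCurrentRowAsStruct):

```python
from typing import Any, Dict, List, Tuple

def infer_schema(sample_row: Dict[str, Any]) -> List[Tuple[str, type]]:
    # Map each column of one fetched row to its Python type; this is the
    # essence of peeking at the first Struct to recover a schema.
    return [(name, type(value)) for name, value in sample_row.items()]

schema = infer_schema({"id": 1, "name": "alice", "score": 2.5})
```

As the thread notes, the drawback is that the query effectively runs once just to discover the schema, then again to read the data.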

Member

We could also punt on this question and file a jira with a TODO here. I recognize this is a little out of scope for BEAM-10139, BEAM-10140.

Author

I'd really like to do it in this PR, but the only thing that comes to mind is to do what you said - perform the read request with the client and then read the schema. The obvious disadvantage is that the Spanner query will be executed twice. From what I researched, adding a LIMIT 1 to the end of the query will not improve performance, so that is not an option for huge result sets.

Member

I can reach out to the Spanner team to see if there's a good way to do this, I'll let you know if I learn anything. For now we can just plan on a jira and a TODO

Contributor

I don't see any good solution here...
When reading an entire table, it could be possible to read the table's schema first and determine what types the columns are, but this does not work for a query, as the query's output columns may not correspond to table columns.

Adding LIMIT 1 would only work for simple queries; anything with joins, GROUP BY, or ORDER BY will require the majority of the query to be executed before a single row is returned.

So the only solution I can see is for the caller to specify the row Schema, as you do here.

Member

@TheNeuralBit TheNeuralBit Oct 8, 2020

It seems like it should be possible to analyze the query and determine the output schema; SqlTransform and JdbcIO both do this.

I got a similar response from my internal queries though, it doesn't look like there's a good way to do this with the Spanner client

Author

@pjotrekk pjotrekk Oct 8, 2020

Thank you @nielm ! I thought about the LIMIT approach, but then I found the same arguments against it.

It appears there exists a JDBC client for Spanner: https://cloud.google.com/spanner/docs/jdbc-drivers . I'll try to figure out whether I can use it.

There is ResultSetMetadata in Spanner's REST API (https://cloud.google.com/spanner/docs/reference/rest/v1/ResultSetMetadata), but at the end of the day it requires at least partially fetching the data.

I would leave it for another PR, though, as it supposedly requires moving SchemaUtils from io/jdbc to some more general place (extensions/sql?). Also, as far as I can see, the Struct type is mapped to String/Varchar, as mentioned in the FAQ, so it may not be the best option:

    The Cloud Spanner STRUCT data type is mapped to a SQL VARCHAR data type, accessible through
    this driver as String types. All other types have appropriate mappings.

)


class WriteToSpanner(ExternalTransform):
Member

It looks like there's already a native SpannerIO in the Python SDK in apache_beam/io/gcp/experimental/spannerio.py. Are we planning on removing that one? Should the API for this one be compliant with that one?

Author

I can try to make the API compliant with the native one. I think it'd be valuable for Beam to compare the performance of both IOs and then decide which one to keep.

Member

Yeah that makes sense. There's definitely still value in adding this even if we end up preferring the native Python one, since we can use it from the Go SDK in the future.

Contributor

Probably it makes sense to converge into one implementation. I'd prefer the Java implementation (hence cross-language) since it's been around for longer and is used by many users. We have to make sure that the cross-language version works for all runners before the native version can be removed. For example, the cross-language version will not work for current production Dataflow (Runner v1), and we have to confirm that it works adequately for Dataflow Runner v2.

@pjotrekk
Author

pjotrekk commented Sep 3, 2020

@TheNeuralBit I've upgraded it a bit.

  • Checking schema equality is redundant because it will throw an exception with a good message anyway (class cast failure or unknown column). Also, it's possible to just add row.addFieldValues(Map<String, Object> values) and depend on the casts following the schema.
  • I managed to unify the addArray and addIterable code duplication with somewhat ugly casts (@SuppressWarnings("unchecked") was needed), but I don't think it can be easily achieved otherwise.
  • Nothing comes to mind for removing the duplication in the addIterableToMutationBuilder and addIterableToStructBuilder methods. These are unrelated classes (Struct.Builder and Mutation.WriteBuilder). Maybe my Java knowledge is insufficient here. I could make an interface that simulates setInt64Array, setStructArray, etc., but it would be even more boilerplate.
  • I unified the APIs of both Python Spanner IOs a bit. Not everything could be done 1:1, but the corresponding keywords were changed, as were the positions of positional arguments.
  • Nulls now work with no problems - I had used ImmutableMap.Builder, which doesn't allow null values. I changed it to a normal HashMap and now it's OK.

Member

@TheNeuralBit TheNeuralBit left a comment

This is looking pretty good overall, my biggest hangup is over the API for the Write transform. I suggested an alternative approach in a comment.

Also FYI - I'm going to be out of the office starting tomorrow (Friday), and back next Thursday. If you get blocked on this before then it may make sense to ask Cham to take a look in the meantime.

)


class WriteToSpanner(ExternalTransform):
Member

Yeah that makes sense. There's definitely still value in adding this even if we end up preferring the native Python one, since we can use it from the Go SDK in the future.

[('id', int), ('name', unicode)])
coders.registry.register_coder(ExampleRow, coders.RowCoder)

mutation_creator = MutationCreator('table', ExampleRow, 'ExampleMutation')
Member

Overall I think it makes a lot of sense to use Rows for the Mutations, with a nested Row for the data, but this API is pretty tricky. Could you look into adding a separate PTransform (or multiple PTransforms) for converting the Rows to mutations? I think an API like this should be possible:

pc = ... #some PCollection with a schema

pc | RowToMutation.insert('table')
     | WriteToSpanner(...)

OR 

pc | RowToMutation.insertOrUpdate('table')
     | WriteToSpanner(...)

OR

pc | RowToMutation.delete('table')
     | WriteToSpanner(...)

The PTransform would be able to look at the element_type of the input PCollection and create a mutation type that wraps it in the expand method. There's not a lot of examples of logic like this in the Python SDK (yet) the only one I know of is here:

def expand(self, pcoll):
  columns = [
      name for name, _ in named_fields_from_element_type(pcoll.element_type)
  ]
  return pcoll | self._batch_elements_transform | beam.Map(
      lambda batch: pd.DataFrame.from_records(batch, columns=columns))

That way the user wouldn't need to pass the type they're planning on using to MutationCreator. What do you think of that?
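A stdlib-only sketch of the wrapping such an expand method could perform (names are illustrative; the real transform would use named_fields_from_element_type and Beam's schema machinery rather than raw NamedTuples):

```python
from typing import List, NamedTuple

def make_mutation_type(row_type: type, operation: str, is_delete: bool = False):
    # Build a NamedTuple that wraps the user's row type together with the
    # operation, mirroring what a RowToMutation.expand could emit after
    # inspecting the input PCollection's element_type.
    payload = ("keyset", List[row_type]) if is_delete else ("row", row_type)
    return NamedTuple(
        f"{row_type.__name__}{operation.capitalize()}Mutation",
        [("operation", str), ("table", str), payload],
    )

class ExampleRow(NamedTuple):
    id: int
    name: str

InsertMutation = make_mutation_type(ExampleRow, "insert")
m = InsertMutation("insert", "users", ExampleRow(1, "a"))
```

The key point is that the mutation type is derived from the element type at expand time, so the user never names it explicitly.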

Author

@pjotrekk pjotrekk Sep 4, 2020

That way we lose the possibility of mixing different kinds of mutations. I don't imagine any sane usage of mixed insert/delete, as the order is not guaranteed, so I agree that removing this assumption is justified.

Since we will always map rows to mutations first anyway, it would be good to enclose the row-to-mutation mapping inside WriteToSpanner. How about an API like this?

pc.with_output_types(CustomRow) | WriteToSpanner(...).insert(table)
pc.with_output_types(CustomRow) | WriteToSpanner(...).delete(table)
pc.with_output_types(List[CustomRow]) | WriteToSpanner(...).delete(table)

It's not consistent with ReadFromSpanner(...) but I think it's better than forcing the user to call RowToMutation each time.
To be more consistent I could do something like ReadFromSpanner(...).from_table(table) and ReadFromSpanner(...).from_sql(sql_query)
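The chained-call API proposed here can be sketched as a minimal builder (shape only; a real transform would expand to the cross-language write rather than just storing configuration):

```python
class WriteToSpanner:
    """Sketch of a write transform configured via chained operation calls."""

    def __init__(self, project: str, instance: str, database: str):
        self.project, self.instance, self.database = project, instance, database
        self.operation = None
        self.table = None

    def _op(self, operation: str, table: str) -> "WriteToSpanner":
        # Record the single operation/table pair; returning self gives the
        # WriteToSpanner(...).insert(table) chaining from the proposal.
        self.operation, self.table = operation, table
        return self

    def insert(self, table: str) -> "WriteToSpanner":
        return self._op("insert", table)

    def delete(self, table: str) -> "WriteToSpanner":
        return self._op("delete", table)

write = WriteToSpanner("my-project", "my-instance", "my-db").insert("users")
```

Because each pipeline application carries exactly one operation, mixing insert and delete in one write is ruled out by construction.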

@pjotrekk
Author

pjotrekk commented Sep 4, 2020

@chamikaramj Brian asked me to ask you for further review as he is going OOO this week. I'd be grateful :)
If you won't have time until Thursday then this PR can wait; there is no haste with it.
I've changed the API of WriteToSpanner to use WriteToSpanner(config).insert(table) etc. instead of MutationCreator.

@chamikaramj
Contributor

cc: @allenpradeep @nielm

@chamikaramj
Contributor

In addition to Brian's review, @allenpradeep or @nielm can you briefly look at Java SpannerIO changes here ?

@pjotrekk
Author

@TheNeuralBit @nielm @allenpradeep ping

@pjotrekk
Author

pjotrekk commented Oct 6, 2020

@nielm Could you take a look at this thread? #12611 (comment)

@TheNeuralBit
Member

Sorry for dropping the ball on this @piotr-szuberski. I'll look over the changes to the Python API this week

Member

@TheNeuralBit TheNeuralBit left a comment

The way WriteTransform is written it's not possible to mix mutations that perform different operations. If we're going to have that limitation I think this could be simplified if we just had a separate xlang transform for each write operation, and send just the field values over the xlang boundary. Then the Java external transforms would be responsible for making the appropriate Mutation for each operation.

That would remove the need to construct a NamedTuple in RowToMutation.expand.

If we do want to keep using Mutations-as-Rows over xlang there will need to be more work on the type system. The types for row and keyset should really be a union of the relevant types for each table that might be written to. Unfortunately, I'm not sure Python schemas are mature enough for users to be able to express this well. (Alternatively we might express mutations as a logical type, that uses table/operation as a key for the union).

I think what we should do for now is just have separate xlang transforms for each write operation (beam:external:java:spanner:{delete,insert,update,...}). We can file a follow-on jira to add a generic beam:external:java:spanner:write that will allow mixing mutations with various operations and tables, and note that its blocked on support for unions of structs in Python/portable schemas. Does that sound reasonable?
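The per-operation URN scheme can be sketched as a simple lookup (the exact operation list beyond delete/insert/update is an assumption, filling the "..." in the comment above):

```python
# Assumed set of Spanner write operations, one xlang transform per entry.
WRITE_OPERATIONS = ("insert", "update", "replace", "insert_or_update", "delete")

def spanner_write_urn(operation: str) -> str:
    # One cross-language transform URN per write operation, following the
    # beam:external:java:spanner:{operation} pattern quoted above.
    if operation not in WRITE_OPERATIONS:
        raise ValueError(f"unknown Spanner write operation: {operation}")
    return f"beam:external:java:spanner:{operation}"
```

A later generic beam:external:java:spanner:write, as the comment suggests, would replace this dispatch once unions of structs are expressible.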

Two review threads on sdks/python/apache_beam/io/gcp/spanner.py (outdated, resolved).
[
    ('operation', unicode),
    ('table', unicode),
    ('keyset', List[row_type]) if is_delete else ('row', row_type),
Member

We should make sure this works when schemas are specified via beam.Row as well, right now I think this will only work with the NamedTuple style.

You could use element_type = named_tuple_from_schema(schema_from_element_type(pcoll.element_type)) to make sure element_type is a NamedTuple that you can use here (it might be worth adding a convenience function for that pattern).

def named_tuple_from_schema(schema):

def schema_from_element_type(element_type): # (type) -> schema_pb2.Schema

Author

Done. I'm not sure whether you meant to add that convenience function to schemas.py; I'm leaving it in spanner.py for now.

@TheNeuralBit
Member

I think most of my comments in that review are actually not relevant any more if we go down the path of separate xlang transforms per operation.

@pjotrekk
Author

@TheNeuralBit

@pjotrekk pjotrekk force-pushed the spanner-xlang branch 2 times, most recently from 7aef88a to 46e0f1a Compare October 27, 2020 14:31
@pjotrekk
Author

pjotrekk commented Nov 9, 2020

@TheNeuralBit I promise that this is the last big review from me. I've only recently realized how much work I've made you do. Better late than never, I guess!

Member

@TheNeuralBit TheNeuralBit left a comment

This is moving towards what I had in mind, but I think we should just avoid having a concept of mutations on the Python side for now.

@pjotrekk pjotrekk force-pushed the spanner-xlang branch 2 times, most recently from 9efa8c7 to 7228622 Compare November 12, 2020 14:49
@pjotrekk
Author

Run Python 3.7 PostCommit

Member

@TheNeuralBit TheNeuralBit left a comment

LGTM, thank you @piotr-szuberski, and sorry for the incredibly long review cycle!

I just have one last request, which is to try to eliminate or minimize the places where we're suppressing the nullness warnings.

- https://beam.apache.org/roadmap/portability/

For more information specific to Flink runner see:
- https://beam.apache.org/documentation/runners/flink/
Member

This information is getting duplicated across a lot of docstrings. It looks like #13317 will actually add similar information to the programming guide. I think we should re-write all these docstrings to refer to that once it's complete.

Author

I agree - it refers to all the existing xlang transforms, so it'll be done in another PR?

Member

Yeah it can be done in another PR. Filed BEAM-11269 to track this.


@SuppressWarnings({
"nullness" // TODO(https://issues.apache.org/jira/browse/BEAM-10402)
})
Member

Could you try to address any lingering nullness errors here and in the other files that have it suppressed? If there are any intractable issues we could consider a smaller @SuppressWarnings blocks around a few functions, but in general we should make sure that new classes pass the null checker.

Author

Done. Oh, it was quite painful, as all of the row getters return a @Nullable value. Especially since checkNotNull doesn't work with the checker, and there is even no way to null-check inside a helper function (only if (var == null) { throw new NullPointerException("Null var"); } seems to work).

It doesn't even work with chained functions, as in this example:

@Nullable Object var = new Object();
if (var != null) {
  someObject.doSth().doChained(var); // checker doesn't understand that var was checked for nullness
}

So it's quite unfriendly. In general I'm really excited about dealing with the NPE problem, but for now it adds a lot of complexity and reduces contributor friendliness. But I guess it's worth it, especially once the checker gets smarter and works with the Guava checks and chained functions (if that's even possible?).

@pjotrekk
Author

Run Python 3.7 PostCommit

@pjotrekk pjotrekk force-pushed the spanner-xlang branch 3 times, most recently from d05e0e4 to aa73ae1 Compare November 14, 2020 04:52
@pjotrekk
Author

Run Python 3.7 PostCommit

@TheNeuralBit
Member

Looks good, merging now. Thanks for all your work on this @piotr-szuberski :)

@TheNeuralBit TheNeuralBit merged commit 2f2ffda into apache:master Nov 16, 2020
@pjotrekk
Author

Looks good, merging now. Thanks for all your work on this @piotr-szuberski :)

Thank you too for your reviews! :)
