
WIP: Issue 283 - cluster codec artifact #287

Merged
merged 23 commits into develop from feature/issue-283-cluster-codec-artifact on Jan 8, 2020

Conversation

@eutkin (Contributor) commented Jan 2, 2020

Extract metadata/message codec into a separate artifact.

The plugin provides implementations of the following interfaces:

  • io.scalecube.cluster.transport.api.MessageCodec
  • io.scalecube.cluster.metadata.MetadataEncoder
  • io.scalecube.cluster.metadata.MetadataDecoder

So far only a plugin has been written that provides an implementation of these interfaces using Jackson.

eutkin and others added 4 commits Jan 1, 2020
@eutkin eutkin requested a review from artem-v Jan 2, 2020
eutkin added 2 commits Jan 2, 2020
artem-v (Contributor) left a comment

Really cool. Thank you. Would you mind creating another implementation of the cluster codec based on protostuff? It's relatively easy -- we have a protostuff codec in the scalecube-services project.

if (buffer.remaining() == 0) {
  return Collections.emptyMap();
}
return this.delegate.readValue(buffer.array(), TYPE);

@segabriel (Member) commented Jan 4, 2020

Right now the type Map<String, String> is hardcoded; what about other custom metadata types, can it be more generic? Something like passing the needed type in and using it in the mapper:

return mapper.readValue(buffer.array(), mapper.getTypeFactory().constructType(type));
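
For illustration, a minimal sketch of what such a type-aware decode could look like under that suggestion (the class and method names below are hypothetical, not the PR's actual API):

import com.fasterxml.jackson.databind.ObjectMapper;

import java.io.IOException;
import java.lang.reflect.Type;
import java.nio.ByteBuffer;
import java.util.Collections;

public class TypeAwareMetadataDecoder {

  private final ObjectMapper mapper = new ObjectMapper();

  /** Decodes the buffer into the caller-supplied target type instead of a hardcoded Map<String, String>. */
  public Object decode(ByteBuffer buffer, Type type) throws IOException {
    if (buffer.remaining() == 0) {
      return Collections.emptyMap(); // mirrors the current behavior for empty metadata
    }
    // Build a JavaType from the requested Type and let Jackson bind to it.
    return mapper.readValue(buffer.array(), mapper.getTypeFactory().constructType(type));
  }
}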

@eutkin (Author, Contributor) commented Jan 4, 2020

@segabriel, I committed, please take a look.

@segabriel (Member) commented Jan 4, 2020

Hm, probably let's revert your last commit.
Maybe we don't need to break the current API; we could just enable putting a type identifier into the JSON as a property ("@class": "java.util.HashMap"). I mean something like this:

mapper.enableDefaultTyping(DefaultTyping.NON_FINAL, As.PROPERTY);

and then in the decode phase you need:

Object value = mapper.readValue(json.getBytes(), Object.class);

PS: it's our default impl, so if someone wants to improve such a mechanism they should set it explicitly.
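
For illustration, a minimal round-trip sketch of the default-typing approach described above (assumes a plain ObjectMapper; note that newer Jackson versions prefer activateDefaultTyping with a PolymorphicTypeValidator over the deprecated enableDefaultTyping):

import com.fasterxml.jackson.annotation.JsonTypeInfo.As;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectMapper.DefaultTyping;

import java.util.HashMap;
import java.util.Map;

public class DefaultTypingRoundTrip {

  public static void main(String[] args) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    // Embed the concrete class into the JSON as an "@class" property for non-final types.
    mapper.enableDefaultTyping(DefaultTyping.NON_FINAL, As.PROPERTY);

    Map<String, String> metadata = new HashMap<>();
    metadata.put("role", "seed");

    // Encode: the resulting JSON object carries "@class": "java.util.HashMap".
    byte[] json = mapper.writeValueAsBytes(metadata);

    // Decode without knowing the target type up front.
    Object value = mapper.readValue(json, Object.class);
    System.out.println(value.getClass().getName() + ": " + value);
  }
}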

*/
public class JacksonMetadataEncoder implements MetadataEncoder, MetadataDecoder {

  private static final TypeReference<Map<String, String>> TYPE =

@artem-v (Contributor) commented Jan 4, 2020

<String, String> is too narrow a typing. We want Type in general, or Class<?>. Looks like we need to add a new method to the metadata decoder interface. Hold on, I will create a separate PR branched from this one.

@segabriel (Member) commented Jan 4, 2020

@artem-v WDYT about #287 (comment)
Adding Type or Class<?> to the arguments will bring more redundant complexity, because custom implementations will have to deal with this type, check it, adjust to it, etc. And IMHO lots of custom implementations of the interface will just ignore this parameter.
All encoding tools can persist enough type information to deserialize the target later. For instance: SBE has templateId, protostuff has a schema, and Jackson can do it via a type id - https://github.com/FasterXML/jackson-docs/wiki/JacksonPolymorphicDeserialization#instance-type-information-type-metadata

@ronenhamias (Member) commented Jan 4, 2020

50% of users will not use metadata.
40% of users will use <String, String> as metadata.
10% will consider customizing metadata.

We need to allow it to be customized, but without creating complexity.

@artem-v (Contributor) commented Jan 4, 2020

@ronenhamias
On the seed node metadata is null, on other nodes you have metadata. <String, String> didn't work for us; it's good for a pet project, but not for something more or less real.

@artem-v (Contributor) commented Jan 4, 2020

@segabriel
Regarding SBE: true (though what kind of maniac would use scalecube-cluster with SBE). Regarding Jackson: true (with certain gymnastics you can extract the @type/@class info). Regarding protostuff: not true, you have to know the type/class. Regarding Gson: not true, it doesn't preserve type info.

@ronenhamias (Member) commented Jan 5, 2020

Quoting @artem-v: "On the seed node metadata is null, on other nodes you have metadata. <String, String> didn't work for us; it's good for a pet project, but not for something more or less real."

So in that case this ability to change metadata decoders is a risky feature, as cluster decoders might be inconsistent and error prone. I think it's better to have one metadata codec of <String, bytes>, and the user can handle the bytes as they want (a rough sketch follows below). And since this <String, bytes> decoder/encoder exists by default on all nodes, there is no need to separate packages at all.
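
A rough sketch of the shape being proposed here, assuming metadata is fixed to a map of string keys to raw bytes (all names below are illustrative, not an actual scalecube API):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class RawBytesMetadataExample {

  public static void main(String[] args) {
    // Metadata is always Map<String, ByteBuffer>; the cluster only ships the bytes around.
    Map<String, ByteBuffer> metadata = new HashMap<>();
    metadata.put("endpoint", ByteBuffer.wrap("host:4801".getBytes(StandardCharsets.UTF_8)));

    // The receiving side decides how to interpret the bytes.
    ByteBuffer buffer = metadata.get("endpoint");
    byte[] bytes = new byte[buffer.remaining()];
    buffer.duplicate().get(bytes);
    System.out.println(new String(bytes, StandardCharsets.UTF_8));
  }
}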

artem-v (Contributor) left a comment

The metadata decoder has a design flaw (not your fault). I will create another branch.

@ronenhamias (Member) commented Jan 4, 2020

If we separate the codecs into other artifacts, then we need to create another package for scalecube-cluster that contains all the defaults.

That way a normal user can use scalecube-cluster (all in one), or choose to take the bits and pieces and compose a customized scalecube-cluster from nuts and bolts.

Most users will want the default to start with.
Similar to Netty, which ships an -all delivery package, or one can take just the codecs.

@ronenhamias (Member) commented Jan 4, 2020

Another question: are you guys sure about this SPI thing? My feeling is it's a little over-complicated, WDYT?

@segabriel (Member) commented Jan 4, 2020

@ronenhamias
It will not be as complicated as you said. It will look like:

...
<dependency>
  <groupId>io.scalecube</groupId>
  <artifactId>scalecube-cluster</artifactId>
  <version>x.y.z</version>
</dependency>
<dependency>
  <groupId>io.scalecube</groupId>
  <artifactId>scalecube-cluster-codec-jackson</artifactId>
  <version>x.y.z</version>
</dependency>
...

Sure, without the last dependency the user will get an exception when the cluster tries to read/write metadata.
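
For context, a generic sketch of how such an SPI lookup with a classpath-provided implementation typically works via java.util.ServiceLoader (the MetadataCodec interface and loader below are hypothetical, not the project's actual types):

import java.nio.ByteBuffer;
import java.util.ServiceLoader;

/** Hypothetical SPI interface; the PR's real interfaces live in io.scalecube.cluster.metadata. */
interface MetadataCodec {
  ByteBuffer encode(Object metadata);
  Object decode(ByteBuffer buffer);
}

public final class MetadataCodecLoader {

  private MetadataCodecLoader() {}

  /** Returns the first codec registered via META-INF/services, e.g. a Jackson-based one. */
  public static MetadataCodec loadMetadataCodec() {
    for (MetadataCodec codec : ServiceLoader.load(MetadataCodec.class)) {
      return codec;
    }
    // Without a codec artifact (e.g. scalecube-cluster-codec-jackson) on the classpath, fail fast.
    throw new IllegalStateException("No MetadataCodec implementation found on the classpath");
  }
}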

@artem-v (Contributor) commented Jan 4, 2020

Another question: are you guys sure about this SPI thing? My feeling is it's a little over-complicated, WDYT?

Kinda agree with you. Who needs Gson? Who needs protostuff? We're a little library for cluster membership. OK. But that doesn't mean I have to pack Jackson (and its friends) into scalecube-cluster. There still must be a possibility to plug in a custom codec (what if a client wants not textual JSON but binary JSON like MessagePack or Smile or even BSON; it's very realistic). With that said, depending on any 3rd-party implementation (such as Jackson) in sc-cluster is bad.

I propose the following:

  • come up with new requirements for metadata and transport messages -- they must be Externalizable; this gives us default codec implementations (a minimal sketch follows below). Done at #290.
  • keep the ability to provide custom codec implementations (let's go with @segabriel's idea and establish a convention that we deal only with codecs that can preserve type info; if a codec can't, it is not supported (goodbye protostuff)). Done at #289.
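
For illustration, a minimal sketch of a metadata class satisfying the Externalizable requirement from the first bullet (the class and fields are made up, not part of this PR):

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

public class NodeMetadata implements Externalizable {

  private String serviceName;
  private int port;

  /** A public no-arg constructor is required for Externalizable. */
  public NodeMetadata() {}

  public NodeMetadata(String serviceName, int port) {
    this.serviceName = serviceName;
    this.port = port;
  }

  @Override
  public void writeExternal(ObjectOutput out) throws IOException {
    // The class fully controls its own wire format, so no 3rd-party codec is needed.
    out.writeUTF(serviceName);
    out.writeInt(port);
  }

  @Override
  public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
    serviceName = in.readUTF();
    port = in.readInt();
  }
}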
artem-v added 7 commits Jan 5, 2020
…c-artifact-CR1arvy

[CR][287] Feature/issue 283 cluster codec artifact
@artem-v (Contributor) commented Jan 6, 2020

I propose the following:

  • come up with new requirements for metadata and transport messages -- they must be Externalizable; this gives us default codec implementations. Done at #290.

Finished #290, it's ready for review. You can give the client just scalecube-cluster without specific codec dependencies or mandatory programmatic settings (i.e. validation of the configuration would pass). Plus, you can still provide custom codecs either by modifying the pom or programmatically.

…c-artifact-CR2arvy

[CR287] Feature/issue 283 cluster codec artifact
segabriel (Member) left a comment

👍

artem-v added 2 commits Jan 8, 2020
@artem-v artem-v merged commit 00a8c84 into develop Jan 8, 2020
2 of 3 checks passed:

  • Codacy/PR Quality Review: Not up to standards. This pull request quality could be better.
  • continuous-integration/travis-ci/pr: The Travis CI build passed.
  • continuous-integration/travis-ci/push: The Travis CI build passed.
@artem-v artem-v deleted the feature/issue-283-cluster-codec-artifact branch Jan 8, 2020
io-scalecube-ci added a commit that referenced this pull request Jan 8, 2020
…c-artifact

WIP: Issue 283 - cluster codec artifact [skip ci] prepare release v2.4.10-RC2
io-scalecube-ci added a commit that referenced this pull request Jan 8, 2020
…c-artifact

WIP: Issue 283 - cluster codec artifact [skip ci] prepare for next development iteration