
Kafka codec #4950

Merged (33 commits) on May 6, 2019

Conversation

@adamkotwasinski (Contributor) opened this pull request:

Description: Placeholder for the Kafka codec; the API is inspired on some level by the Mongo codec (though the internals partially follow Redis).
Risk Level: medium
Testing: manual so far, by setting up a Kafka cluster & Kafka clients (I am keeping the filter code, which is relatively trivial, out of this PR)
Recommended (?) order of reading:

  • codec.h - abstract codecs (there will be one each for Kafka requests & responses - to be used in the filter's onData/onWrite methods)
  • request_codec.h / .cc - Kafka request encoder & decoder - the decoder notifies listeners (similar to Mongo) and uses a parser loop to handle complex messages
  • parser.h - the parser (the name isn't perfect) is a stateful object that keeps the current buffering state and can tell whether it is done (returns a message) or should be replaced (nextParser); see the sketch after this list
  • (now jump a bit to the bottom) kafka_types.h - Kafka data types
  • kafka_protocol.h - constants coming from Kafka (request-only for now)
  • serialization.h - how char* gets translated into Kafka types + a composite buffer template (that combines N buffers and then combines their results into one return object) and the encoder
  • kafka_request.h / .cc:
    ** request header (4 fields present in every message)
    ** request context keeping that header, passed along the parser chain mentioned above
    ** request mapper (basically a map from request_type x request_version -> parser)
    ** message start parser (consumes the message length)
    ** header parser (consumes the request header)
    ** some request parsers (keys 0-9, with versions up to Kafka 0.11)
    ** (at the bottom) UnknownRequest for requests that couldn't be recognised by the mapper
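(For orientation, a minimal sketch of the parser interface and loop described above, based on the parse() signature quoted later in this review; the ParseResponse accessor names are hypothetical simplifications:)

// Sketch only: PURE is Envoy's macro for "= 0"; the ParseResponse accessors
// (hasNextParser/nextParser/hasMessage/message) are hypothetical names.
class Parser {
public:
  virtual ~Parser() = default;

  // Consumes bytes, advancing `buffer` and decreasing `remaining`; the result
  // says whether to keep this parser, switch to the next one, or take a message.
  virtual ParseResponse parse(const char*& buffer, uint64_t& remaining) PURE;
};

// The decoder's parser loop then looks roughly like:
//   while (remaining > 0) {
//     ParseResponse result = parser->parse(buffer, remaining);
//     if (result.hasNextParser()) { parser = result.nextParser(); }
//     if (result.hasMessage()) { notifyListenersOf(result.message()); }
//   }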

Tests:

  • serialization_test - trivial Kafka types, checked both ways: feeding the whole buffer at once & trickling one byte at a time (to ensure the buffer keeps its state properly - compare with the Redis filter, which appears to do the same); see the sketch after this list
  • kafka_request_test - mapper testing; the details of the parse loop: start parser -> header parser -> specific parser
  • request_codec_test - full tests for each currently implemented request type/version combination
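(A minimal sketch of the one-byte trickle pattern described above, as a GTest; Int32Deserializer and the feed/ready/get names follow the deserializer interface discussed later in this thread, so treat the exact signatures as assumptions:)

#include <cstdint>

#include "gtest/gtest.h"

TEST(SerializationTest, ShouldDeserializeWhenFedOneByteAtATime) {
  Int32Deserializer testee; // hypothetical deserializer under test
  const char data[] = {0x00, 0x00, 0x00, 0x2A}; // 42, big-endian as on the wire
  for (const char& byte : data) {
    const char* ptr = &byte;
    uint64_t remaining = 1;
    testee.feed(ptr, remaining); // state must survive across partial feeds
  }
  EXPECT_TRUE(testee.ready());
  EXPECT_EQ(42, testee.get());
}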

@mattklein123 self-assigned this on Nov 3, 2018

stale bot commented Nov 11, 2018

This pull request has been automatically marked as stale because it has not had activity in the last 7 days. It will be closed in 7 days if no further activity occurs. Please feel free to give a status update now, ping for review, or re-open when it's ready. Thank you for your contributions!

@stale bot added the "stale" label (stalebot believes this issue/PR has not been touched recently) on Nov 11, 2018
@mattklein123 (Member) commented:

Sorry for the delay. I'm still planning on reviewing this. I'm behind.

@stale bot removed the "stale" label on Nov 11, 2018
* Composes several buffers into one.
* The returned value is constructed via { buffer1.get(), buffer2.get() ... }
*/
template <typename RT, typename...> class CompositeBuffer;
@adamkotwasinski (Contributor, Author) commented Nov 13, 2018:

I just learned I can remove these specializations with something similar to

template <class... Ts> struct tuple {};

template <class T, class... Ts>
struct tuple<T, Ts...> : tuple<Ts...> {
  tuple(T t, Ts... ts) : tuple<Ts...>(ts...), tail(t) {}
  T tail;
};

not 100% sure though, as I might run into problems creating the result in get()

Member commented:

In general we don't have this level of template programming in the code base, as it's incredibly hard to read. Can you try to figure out a way to do this without so many templates? If it's not obvious can you describe what you are trying to do?

@adamkotwasinski (Contributor, Author) commented Nov 16, 2018:

The most-offending CompositeBuffer deserializer template has been removed and replaced with explicit domain deserializers.
All other deserializers are subclasses of the Deserializer<T> interface and deserialize into an instance of T.

The encoding template functions have been documented. In general I wanted a single method that works as encode(string), encode(int8), encode(bool) instead of having encodeString/encodeInt8/encodeBool, as IMHO separate names make it harder to write generic code; see the sketch below.
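(A minimal sketch of the two styles being compared, using std::vector<char> as a stand-in for the real output buffer; all names here are illustrative:)

#include <cstdint>
#include <string>
#include <vector>

// One overloaded entry point: templated callers can write encode(value, out)
// no matter what the field's type is.
uint32_t encode(int8_t value, std::vector<char>& out);
uint32_t encode(bool value, std::vector<char>& out);
uint32_t encode(const std::string& value, std::vector<char>& out);

// Generic code then composes naturally:
template <typename T> uint32_t encodeField(const T& field, std::vector<char>& out) {
  return encode(field, out); // overload resolution picks the right encoder
}

// The rejected alternative (encodeInt8/encodeBool/encodeString) would force
// per-type dispatch inside every generic caller like encodeField.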

Member commented:

Sounds good. Let's see how it looks on the next pass.

@mattklein123 (Member) commented:

@adamkotwasinski sorry for the really long delay. I have time to review this now. This is too much code to review in a single PR. Can you potentially cut this down to whatever infrastructure is required for a few Kafka messages and an incomplete codec and associated tests? Then we can review just that to make sure we are on the same page and go from there?

@adamkotwasinski (Contributor, Author) commented:

@mattklein123 Done! I left only one request type, but it's complex enough to show how versioning / structure changes are handled.

@mattklein123 (Member) left a review comment:

This is so exciting! I did a first, very high level pass. I apologize in advance, but in the beginning I'm going to be focusing on a bunch of readability things, which will ultimately help me drill down and then do a better real code review, so please have patience while we get through this "annoying" stuff; I promise everything will come out better in the end. I left a bunch of high level readability comments, but for this next round I would really like to focus on:

  1. Having doc comments in all header files
  2. Starting to split up some of the large files where applicable
  3. Figuring out how to reduce some of the more extreme cases of template programming. In general we don't use templates that heavily in Envoy. Of course we do sometimes, but some of the cases here, though impressive, had my eyes bleeding. 😉 (If it turns out that it's really impossible to do without hard core templates, please make sure the code is extremely well commented and we can discuss.)

Anyway, thanks for working on this and I promise I will be much more responsive with reviews from here on out.

@@ -65,6 +65,7 @@ EXTENSIONS = {
"envoy.filters.network.echo": "//source/extensions/filters/network/echo:config",
"envoy.filters.network.ext_authz": "//source/extensions/filters/network/ext_authz:config",
"envoy.filters.network.http_connection_manager": "//source/extensions/filters/network/http_connection_manager:config",
"envoy.filters.network.kafka": "//source/extensions/filters/network/kafka:config",
Member commented:

Please revert this change and also remove any config library from this PR.

namespace NetworkFilters {
namespace Kafka {

// from http://kafka.apache.org/protocol.html#protocol_api_keys
Member commented:

Can we strip this down in this PR to just the request types we are implementing?

namespace Kafka {

// from http://kafka.apache.org/protocol.html#protocol_api_keys
enum RequestType : INT16 {
Member commented:

nit: enum class

@adamkotwasinski (Contributor, Author) commented:

but then I cannot use RequestType::OffsetFetch as an INT16 in my OffsetFetchRequest constructors ... or am I missing something?
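(For reference, a sketch of the explicit cast an enum class would require at each such use site; not code from this PR:)

// 'enum class' removes the implicit conversion to the underlying type, so
// every numeric use needs an explicit cast.
int16_t api_key = static_cast<int16_t>(RequestType::OffsetFetch);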

Member commented:

Can't you just use int16_t? All I'm saying is to use the system types, which are really clear. I don't see the point in the typedef when we use these types all over the code base.

namespace NetworkFilters {
namespace Kafka {

// === REQUEST PARSER MAPPING (REQUEST TYPE => PARSER) =========================
Member commented:

nit: Can we remove comments like these? We don't do this elsewhere. The names of functions, namespaces, etc. should generally be self-describing. Same elsewhere.

namespace NetworkFilters {
namespace Kafka {

typedef int8_t INT8;
Member commented:

Do these typedefs provide any value? Can we just kill them all and use the direct types? For some of the ones down at the bottom, if you want those to save typing, use CamelCase such as NullableBytes.

@adamkotwasinski (Contributor, Author) commented:

Sure!
My rationale was that I wanted to use "Kafka types" such as INT8 for things coming from Kafka itself, e.g. a request's api_version is INT16; though I could have made an explicit typedef uint16_t api_version too.

Member commented:

I would just avoid typedefs for the integer cases. int8_t is pretty specific for example.


Buffer::OwnedImpl data_buffer;
INT32 data_len = encoder.encode(message, data_buffer); // encode data computing data length
encoder.encode(data_len, output_); // encode data length into result
output_.add(data_buffer); // copy data into result
Member commented:

Can we move the buffer to avoid the copy? More generally, would it be faster to precompute the size before encoding so we can encode directly into the output?
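(A sketch of the first suggestion, assuming Buffer::Instance::move(), which transfers slices between buffers instead of copying:)

Buffer::OwnedImpl data_buffer;
int32_t data_len = encoder.encode(message, data_buffer); // encode data, computing data length
encoder.encode(data_len, output_);                       // encode data length into result
output_.move(data_buffer);                               // transfer slices instead of copying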

@adamkotwasinski (Contributor, Author) commented:

Will do.
I was considering that, but then it needs a method on each request type (& substructure) that trivially returns something along the lines of

size_t FetchRequest::size() const {
  return sizeof(field1) + sizeof(field2) + substructure.size();
}

@adamkotwasinski (Contributor, Author) commented:

I would like to defer this problem until we decide how we are actually going to serialize/deserialize the data. Let's first settle how we are going to manage the encode/decode operations; computeSize will be yet another operation that follows precisely the same pattern (after all, our requests/responses are trivial trees - so let's decide on the tooling to traverse the trees first).
Makes sense?

Member commented:

Yup that's fine, I would just put a TODO comment to remind us of a potential optimization later.


void RequestDecoder::onData(Buffer::Instance& data) {
uint64_t num_slices = data.getRawSlices(nullptr, 0);
Buffer::RawSlice slices[num_slices];
Member commented:

Can you merge master? I think you need to use the new VLA macros that were added recently.
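(Presumably this refers to something like the STACK_ARRAY macro; a sketch under that assumption, with the exact macro name and usage to be confirmed against master:)

void RequestDecoder::onData(Buffer::Instance& data) {
  uint64_t num_slices = data.getRawSlices(nullptr, 0);
  STACK_ARRAY(slices, Buffer::RawSlice, num_slices); // replaces the raw C VLA
  data.getRawSlices(slices.begin(), num_slices);
  // ...
}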

RequestHeader request_header_;
};

// === OFFSET COMMIT (8) =======================================================
Member commented:

In general, I think this code would benefit from starting to split it into multiple files. Can we potentially have a file for base objects/classes, and then a file per message type since they seem to be fairly large? I think that would make the code easier to read and review as it grows.

@adamkotwasinski (Contributor, Author) commented:

Naming them request_type.h and putting them in a messages subdirectory.
I envision one header containing both the request & response for the same type.

return consumed;
}
bool ready() const { return partitions_.ready(); }
OffsetCommitTopic get() const { return {topic_.get(), partitions_.get()}; }
@adamkotwasinski (Contributor, Author) commented Nov 16, 2018:

I think it starts to be pretty obvious here that OffsetCommitPartitionV0Buffer, OffsetCommitPartitionV1Buffer, OffsetCommitTopicV0Buffer, OffsetCommitTopicV1Buffer & co. are very similar to each other: each is composed of 2, 3 or 4 sub-deserializers, takes their deserialization results, and bundles them up into the resulting T from Deserializer<T> via a { ... } constructor.

I think we can attack it in a few different ways:
a) keep doing the same stuff, and just cope with the code bloat
b) introduce a new class CompositeDeserializer that takes a vector (tuple?) of sub-deserializers, something like

template <typename T>
class CompositeDeserializer : public Deserializer<T> {
private:
  // I would be storing Deserializer<int8> next to Deserializer<string> here,
  // so I might need a pointer to some generic base, I guess.
  std::vector<DeserializerBasePtr> delegates_;

public:
  // Same thing as before: ready when the last deserializer is full.
  bool ready() const { return delegates_.back()->ready(); }

  size_t feed(const char*& buffer, uint64_t& remaining) {
    size_t result = 0;
    for (auto& delegate : delegates_) {
      result += delegate->feed(buffer, remaining);
    }
    return result;
  }

  T get() const {
    // get the result from each delegate into a vector<void*> ...
    // return T{results}; // needs a constructor that downcasts each element
    //                    // from void* back to its proper type
  }
};

What I don't like here is that I have void* flowing around, requiring downcasting, and then depending on the constructor to fill in the request's (or other sub-element's) fields properly.

c) common superclasses looking like "a deserializer composed of 2 deserializers":

template <typename T, typename Delegate1T, typename Delegate2T>
class CompositeDeserializerComposedOf2Deserializers {
private:
  Delegate1T d1_;
  Delegate2T d2_;

public:
  bool ready() const { return d2_.ready(); }
  size_t feed(const char*& buffer, uint64_t& remaining) {
    return d1_.feed(buffer, remaining) + d2_.feed(buffer, remaining);
  }
  T get() const { return {d1_.get(), d2_.get()}; }
};

drawback: it's the template solution back again ;)

d) bonus consideration: make OffsetCommitRequest abstract and add concrete OffsetCommitRequestV0, OffsetCommitRequestV1 classes - it could alleviate some of the problems present in b), as the constructor would always know that field 0 is e.g. the topic name, instead of needing to check api_version first

Member commented:

Hmm yeah, I think I'm starting to see the issue here, agree that b) is not great. I'm not completely opposed to templates, if you think it's the best way. I would pick what you think is best for now (with an eye towards trying to avoid complex templates if possible) and just make sure all of the header files are really well commented. Then I can take another pass? Sound good?

@adamkotwasinski (Contributor, Author) commented:

Sure! Thank you - I think it's most visible when we actually have N semi-similar classes; then the benefit of a (templated) superclass becomes apparent.

@adamkotwasinski (Contributor, Author) commented:

The composite deserializers have been added in serialization_composite.h and are tested in serialization_test.cc.
Replaced the OffsetCommit deserializers & the request header deserializer (also composed of 4 delegates).

class OffsetCommitTopicV0ArrayBuffer
: public ArrayDeserializer<OffsetCommitTopic, OffsetCommitTopicV0Buffer> {};
// Deserializes bytes into OffsetCommitRequest (api version 0): group_id, topics (v0)
class OffsetCommitRequestV0Deserializer
: public CompositeDeserializerWith2Delegates<OffsetCommitRequest, StringDeserializer, OffsetCommitTopicV0ArrayBuffer> {};

@adamkotwasinski (Contributor, Author) commented Nov 19, 2018:

one can write:

class OffsetCommitRequestV0Deserializer
    : public CompositeDeserializerWith2Delegates<
          OffsetCommitRequest,
          StringDeserializer,
          ArrayDeserializer<
              OffsetCommitTopic,
              CompositeDeserializerWith2Delegates<
                  OffsetCommitTopic,
                  StringDeserializer,
                  ArrayDeserializer<
                      OffsetCommitPartition,
                      CompositeDeserializerWith3Delegates<
                          OffsetCommitPartition,
                          Int32Deserializer,
                          Int64Deserializer,
                          NullableStringDeserializer>>>>> {};

but I think it makes it even harder to understand?

Member commented:

Yes, that is not good. At this point, please just make the code as readable as you think it can be, while trying to remove egregious templates. When this is done I will take another pass and offer different suggestions if I see them. It will be much easier to do this with all the other changes I have asked for, such as comments, etc.

@adamkotwasinski (Contributor, Author) commented Nov 20, 2018:

The remaining big-template classes will be present only in messages/*.h files, as we need to define the structure somehow.

The end code is actually pretty similar to what the Java client code looks like ( https://github.com/apache/kafka/blob/2.1/clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java#L58 ), but Java, lacking templates, is forced to use raw Objects flying around and (nicely hidden) downcasts, just like here: https://github.com/apache/kafka/blob/2.1/clients/src/main/java/org/apache/kafka/common/protocol/types/Type.java#L211

Other changes like comments, splits, renames etc. have already been applied.

Member commented:

@adamkotwasinski OK so the PR is ready for another pass?

@adamkotwasinski (Contributor, Author) commented:

Yes

Member commented:

Can you fix DCO before I start reviewing again?

@adamkotwasinski (Contributor, Author) commented:

done

@repokitteh-read-only commented:

🙀 Error while processing event:

evaluation error
error: finished: error from server: {module load error GET https://api.github.com/repos/repokitteh/modules/contents/assign.star: 401 Bad credentials [] map[]}
🐱

Caused by: #4950 was synchronize by adamkotwasinski.


adamkotwasinski pushed commits:

- remove unnecessary request type constants
- remove garbage comments
- move operator<< helper to separate header
- move requests (currently only offset_fetch) to separate header
- remove unnecessary typedefs
- add missing documentation
- improve GeneratorMap

- properly access buffer slices
- remove CompositeDeserializer and replace it with expanded classes
- documentation

- some renames
- more comments
- missing include

@mattklein123 (Member) left a review comment:

Nice, flushing next round of comments. Will get into the serialization headers next.

/wait

namespace NetworkFilters {
namespace Kafka {

// abstract codecs for requests and responses
Member commented:

nit: del, obvious

* Kafka message decoder
* @tparam MT message type (Kafka request or Kafka response)
*/
template <typename MT> class MessageDecoder {
Member commented:

nit: s/MT/MessageType, same below.

namespace NetworkFilters {
namespace Kafka {

// functions present in this header are used by request / response objects to print their fields
Member commented:

nit: del

Also, this seems general and not only for Kafka. Can we move this into https://github.com/envoyproxy/envoy/tree/master/test/test_common somewhere? Perhaps https://github.com/envoyproxy/envoy/blob/master/test/test_common/printers.h? Not sure of best place.

* Kafka request type identifier (int16_t value present in header of every request)
* @see http://kafka.apache.org/protocol.html#protocol_api_keys
*/
enum RequestType : int16_t {
Member commented:

enum class

@adamkotwasinski (Contributor, Author) commented Nov 27, 2018:

Can I keep it this way?
Making an enum class out of the request type will require adding another value for "unknown request", and possibly forces us to keep two fields in UnknownRequest: a RequestType holding the unknown marker, and an int16_t with the real value that was received.

The other idea would be to make N constexprs like

constexpr int16_t PRODUCE_REQUEST_TYPE{0};
...
constexpr int16_t OFFSET_COMMIT_REQUEST_TYPE{8};

extremely related: https://stackoverflow.com/questions/1965249/how-to-write-a-java-enum-like-class-with-multiple-data-fields-in-c
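(A sketch of the two-field shape described above; the Unknown value and field names are hypothetical, not from this PR:)

enum class RequestType : int16_t { Produce = 0, /* ... */ OffsetCommit = 8, Unknown = -1 };

struct UnknownRequest {
  RequestType type_{RequestType::Unknown}; // always the marker value
  int16_t received_api_key_;               // the value actually seen on the wire
};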

@@ -0,0 +1,25 @@
#pragma once

#include <vector>
Member commented:

not used

namespace Kafka {

/**
* Represents a sequence of characters or null. For non-null strings, first the length N is given as
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

From a higher layer code perspective, does the user care about encoding the length inside this string? Don't they just want to operate on a string or nothing? What is the intention of these types? IMO the encoding is an implementation detail. What is actually contained in these types? Can we clarify? Same for below.

@adamkotwasinski (Contributor, Author) commented:

Good point! I can remove that in this header; this stuff is going to be present in deserialization.h anyway (as that is where I want to state why I'm parsing the way I am).
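(A sketch of the value-only shape the comment argues for, assuming absl::optional; the alias name follows the CamelCase naming suggested earlier in this review:)

#include <string>

#include "absl/types/optional.h"

// The type carries only the value; the length-prefix encoding stays an
// implementation detail of the (de)serializers. The alias name is an assumption.
using NullableString = absl::optional<std::string>;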

* @param remaining remaining data in buffer, will be updated by parser
* @return parse status - decision what should be done with current parser (keep/replace)
*/
virtual ParseResponse parse(const char*& buffer, uint64_t& remaining) PURE;
Member commented:

Small readability comment: I would have a struct ParseState or something like that, that contains the buffer pointer and remaining length, and then pass that by reference for modification. Alternatively and possible better, is it possible to just pass an absl::string_view and modify the string view (or return a new one)?
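(A sketch of the first suggestion; the struct and field names are hypothetical:)

// Bundle the parse cursor into one struct that parsers mutate in place.
struct ParseState {
  const char* buffer;  // current read position
  uint64_t remaining;  // bytes left
};

class Parser {
public:
  virtual ~Parser() = default;
  virtual ParseResponse parse(ParseState& state) PURE; // PURE is Envoy's "= 0"
};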

Member commented:

Ping on this comment?

* Consumes INT32 bytes as request length and updates the context with that value
* @return RequestHeaderParser instance to process request header
*/
ParseResponse parse(const char*& buffer, uint64_t& remaining);
Member commented:

For overridden functions:

  1. Use the override keyword
  2. Don't duplicate the doc comment from the interface header
  3. Precede the overrides with a comment similar to // Extensions::NetworkFilters::Kafka::Parser

Please audit for this elsewhere.
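(Putting the three points together, the quoted declaration would become:)

// Extensions::NetworkFilters::Kafka::Parser
ParseResponse parse(const char*& buffer, uint64_t& remaining) override;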

* @param BT deserializer type corresponding to request class (should be subclass of
* Deserializer<RT>)
*/
template <typename RT, typename BT> class RequestParser : public Parser {
Member commented:

nit: Spell out RT, BT, etc. same elsewhere.

public:
virtual ~RequestCallback() = default;

virtual void onMessage(MessageSharedPtr) PURE;
Member commented:

doc comment

namespace NetworkFilters {
namespace Kafka {

/**
Member commented:

Why are these templates actually needed? AFAICT all they do is string together multiple feed calls. Can't feed be an interface method, and then there can be a class that takes a list/vector of feeders and generically does what all of this template code does?

@adamkotwasinski (Contributor, Author) commented Nov 26, 2018:

I think what you described is basically point b) of #4950 (comment)

The feed part is not a problem, but I'm running into problems with constructing the results in get().
If I keep a vector of Deserializers, then their return type is going to have to be something like void* or anything sufficiently generic, and then I'd need to convert these results into the Request's constructor arguments.

Basically I would want to have something like

std::vector<Deserializer<?>> delegates_; // '?' - there is no common result type

ReturnType get() const {
  return { delegates_[0].get(), delegates_[1].get(), delegates_[2].get(), /* ... */ };
}

As a minimum, feed will be changed to be more generic.

When it comes to get, I will take a look into the possibility of having an array instead of a vector - that way I think I'll be able to do some templating to change array<N>{ deserializer1, ... } into an argument list; see the sketch below.
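(A sketch of that templating, using std::index_sequence over a tuple of delegates; entirely illustrative, not code from this PR:)

#include <cstddef>
#include <tuple>
#include <utility>

// Each delegate keeps its concrete type, so get() can expand the results
// directly into ResultType's constructor - no void* erasure needed.
template <typename ResultType, typename... Delegates> class CompositeDeserializer {
public:
  ResultType get() const { return getImpl(std::index_sequence_for<Delegates...>{}); }

private:
  template <std::size_t... Is> ResultType getImpl(std::index_sequence<Is...>) const {
    return {std::get<Is>(delegates_).get()...};
  }

  std::tuple<Delegates...> delegates_;
};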

Member commented:

OK I see the problem. Alright do what you can and I will take another pass in the next round. Sorry for not seeing the get() issue.

@adamkotwasinski (Contributor, Author) commented:

/retest

@repokitteh-read-only commented:

🔨 rebuilding ci/circleci: release (failed build)
🔨 rebuilding ci/circleci: coverage (failed build)

🐱

Caused by: a #4950 (comment) was created by @adamkotwasinski.


@mattklein123 (Member) left a review comment:

Thanks, is this ready to go other than figuring out why coverage is failing? Maybe try merging master? If that doesn't work how can we help debug?

/wait

// TODO(adamkotwasinski) discuss capturing the data as-is, and simply putting it back
// this would add ability to forward unknown types of requests in cluster-proxy
/**
* It is impossible to encode unknown request, as it is only a placeholder.
Member commented:

So this can never happen in practice? If so should it be NOT_IMPLEMENTED?

@adamkotwasinski (Contributor, Author) commented:

/wait
I'm afraid it might be something happening due to gcovr not handling the python-generated files properly; I will investigate, but it might take some time.

@repokitteh-read-only commented:

🔨 rebuilding ci/circleci: coverage (failed build)

🐱

Caused by: a #4950 (comment) was created by @adamkotwasinski.


…et picked up by gcovr

@adamkotwasinski (Contributor, Author) commented:

The coverage should be fixed now, but I still need to address the 100% coverage requirement & the other comments.
/wait

adamkotwasinski pushed commits:

- Put request parse failures in separate objects;
- Simplify message hierarchy;
- Remove message.h and make Encoder/Parser/ParseResponse templated to support Response objects in future

@adamkotwasinski (Contributor, Author) commented:

/wait

@adamkotwasinski (Contributor, Author) commented:

/retest