Kafka codec #4950

Open · adamkotwasinski wants to merge 30 commits into base: master from adamkotwasinski:codec
@adamkotwasinski commented Nov 2, 2018

Description: Placeholder for the Kafka codec; the API is inspired on some level by the Mongo codec (the internals partially by the Redis one, though)
Risk Level: medium
Testing: so far manual, by setting up a Kafka cluster & Kafka clients (I keep the filter code separately; it's relatively trivial and not part of this PR)

Recommended (?) order of reading:

  • codec.h - abstract codecs (there will be one each for Kafka requests & responses - to be used in the onData/onWrite filter methods)
  • request_codec.h / .cc - Kafka request encoder & decoder - the decoder notifies listeners (similar to Mongo) and uses a parser loop to handle complex messages
  • parser.h - a parser (the name isn't perfect) is a stateful object that keeps the current buffering state and can tell if it's done (returning a message) or should be replaced (nextParser) - see the sketch after this list
  • (now jump a bit to the bottom) kafka_types.h - Kafka data types
  • kafka_protocol.h - constants coming from Kafka (request-only for now)
  • serialization.h - how char* gets translated into Kafka types, plus a composite buffer template (that combines N buffers and then combines their results into one return object) and the encoder
  • kafka_request.h / .cc:
    ** request header (the 4 fields present in every message)
    ** request context, keeping that header, which is passed along the parser chain mentioned above
    ** request mapper (basically a mapping from request_type x request_version -> parser)
    ** message start parser (consumes the message length)
    ** header parser (consumes the request header)
    ** some request parsers (keys 0-9, with versions up to Kafka 0.11)
    ** (at the bottom) UnknownRequest, for requests that couldn't be recognised by the mapper
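
To make the parser-loop description above concrete, here is a minimal sketch of the abstract parser interface; the parse() signature matches the PR, but the exact ParseResponse fields are an assumption made for illustration:

#include <cstdint>
#include <memory>

// Minimal sketch only. Parser follows the PR's description; ParseResponse's
// fields are assumed, and the PURE macro is assumed to be Envoy's (= 0).
class Message;
typedef std::shared_ptr<Message> MessageSharedPtr;
class Parser;
typedef std::shared_ptr<Parser> ParserSharedPtr;

// Tells the parse loop what to do after a parse() call: keep feeding the
// current parser, switch to next_parser_, or surface a finished message_.
struct ParseResponse {
  ParserSharedPtr next_parser_;
  MessageSharedPtr message_;
};

class Parser {
public:
  virtual ~Parser() = default;
  // Consumes bytes, advancing `buffer` and decreasing `remaining` as it goes.
  virtual ParseResponse parse(const char*& buffer, uint64_t& remaining) PURE;
};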

Tests:

  • serialization_test - trivial Kafka types, checked in both ways: feed the whole buffer at once & trickle it one byte at a time (to ensure state is kept by the buffer properly - compare with the Redis filter, which appears to be doing the same); a sketch of the latter follows this list
  • kafka_request_test - mapper testing; the details of the parse loop: start parser -> header parser -> specific parser
  • request_codec_test - full tests for each of the currently implemented request type/version combos
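
A hypothetical sketch of the byte-by-byte feeding mentioned above (Int32Deserializer and the feed()/ready()/get() shape follow the PR; the exact test code is illustrative, not the PR's):

#include "gtest/gtest.h"

TEST(SerializationTest, ShouldDeserializeWhenFedByteByByte) {
  Int32Deserializer testee;
  const char data[] = "\x00\x00\x04\xd2"; // 1234 as a big-endian int32
  const char* ptr = data;
  for (int i = 0; i < 4; ++i) {
    EXPECT_FALSE(testee.ready()); // state must be kept across partial feeds
    uint64_t remaining = 1;       // expose only a single byte per call
    testee.feed(ptr, remaining);  // advances ptr, decreases remaining
  }
  EXPECT_TRUE(testee.ready());
  EXPECT_EQ(1234, testee.get());
}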

@mattklein123 self-assigned this Nov 3, 2018

@stale (bot) commented Nov 11, 2018

This pull request has been automatically marked as stale because it has not had activity in the last 7 days. It will be closed in 7 days if no further activity occurs. Please feel free to give a status update now, ping for review, or re-open when it's ready. Thank you for your contributions!

The stale bot added the stale label Nov 11, 2018

@mattklein123 (Member) commented Nov 11, 2018

Sorry for the delay. I'm still planning on reviewing this. I'm behind.

The stale bot removed the stale label Nov 11, 2018

/**
 * Composes several buffers into one.
 * The returned value is constructed via { buffer1.get(), buffer2.get() ... }
 */
template <typename RT, typename...> class CompositeBuffer;

@adamkotwasinski (Author), Nov 13, 2018

I just learned I can remove these specializations with something similar to

template <class... Ts> struct tuple {};

template <class T, class... Ts>
struct tuple<T, Ts...> : tuple<Ts...> {
  tuple(T t, Ts... ts) : tuple<Ts...>(ts...), tail(t) {}
  T tail;
};

not 100% sure though, as I might run across problems creating the result in get()

@mattklein123 (Member), Nov 14, 2018

In general we don't have this level of template programming in the code base, as it's incredibly hard to read. Can you try to figure out a way to do this without so many templates? If it's not obvious can you describe what you are trying to do?

@adamkotwasinski (Author), Nov 16, 2018

The most-offending CompositeBuffer deserializer template has been removed and replaced with explicit domain deserializers.
All other deserializers are subclasses of Deserializer<T> interface and deserialize into instance of T.

The encoding template functions have been documented - in general I wanted a single method that could work as encode(string), encode(int8), encode(bool), instead of having encodeString/encodeInt8/encodeBool, as IMHO separate names make it harder to write generic code.
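
A sketch of the overload-based approach described above (signatures are assumed for illustration, not the PR's exact API):

#include <cstdint>
#include <string>
#include "envoy/buffer/buffer.h"

// One encode() overload per Kafka type, instead of encodeString/encodeInt8/...
uint32_t encode(const std::string& arg, Buffer::Instance& dst);
uint32_t encode(int8_t arg, Buffer::Instance& dst);
uint32_t encode(bool arg, Buffer::Instance& dst);

// Generic code can then serialize any field without naming its concrete type.
template <typename T> uint32_t encodeField(const T& field, Buffer::Instance& dst) {
  return encode(field, dst);
}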

@mattklein123 (Member), Nov 16, 2018

Sounds good. Let's see how it looks on the next pass.

@mattklein123 (Member) commented Nov 13, 2018

@adamkotwasinski sorry for the really long delay. I have time to review this now. This is too much code to review in a single PR. Can you potentially cut this down to whatever infrastructure is required for a few Kafka messages and an incomplete codec and associated tests? Then we can review just that to make sure we are on the same page and go from there?

@adamkotwasinski (Author) commented Nov 14, 2018

@mattklein123 Done! Left only one request type, but it's complex enough to see how the versioning / changing structure is handled.

@mattklein123 (Member) left a comment

This is so exciting! I did a first very high level pass. I apologize in advance, but in the beginning I'm going to be focusing on a bunch of readability things which will ultimately help me drill down and then do a better real code review, so please have patience while we get through this "annoying" stuff - I promise everything will come out better in the end. I left a bunch of high level readability comments, but for this next round I would really like to focus on:

  1. Having doc comments in all header files
  2. Starting to split up some of the large files where applicable
  3. Figuring out how to reduce some of the more extreme cases of template programming. In general we don't use templates that heavily in Envoy. Of course we do sometimes, but some of the cases here, though impressive, had my eyes bleeding. 😉 (If it turns out that it's really impossible to do without hard-core templates, please make sure the code is extremely well commented and we can discuss.)

Anyway, thanks for working on this and I promise I will be much more responsive with reviews from here on out.

@@ -65,6 +65,7 @@ EXTENSIONS = {
"envoy.filters.network.echo": "//source/extensions/filters/network/echo:config",
"envoy.filters.network.ext_authz": "//source/extensions/filters/network/ext_authz:config",
"envoy.filters.network.http_connection_manager": "//source/extensions/filters/network/http_connection_manager:config",
"envoy.filters.network.kafka": "//source/extensions/filters/network/kafka:config",

@mattklein123 (Member), Nov 14, 2018

Please revert this in this PR and also remove any config library from this PR

namespace NetworkFilters {
namespace Kafka {

// from http://kafka.apache.org/protocol.html#protocol_api_keys

@mattklein123 (Member), Nov 14, 2018

Can we strip this down in this PR to just the request types we are implementing?

namespace Kafka {

// from http://kafka.apache.org/protocol.html#protocol_api_keys
enum RequestType : INT16 {

@mattklein123 (Member), Nov 14, 2018

nit: enum class

@adamkotwasinski (Author), Nov 16, 2018

but then I cannot use RequestType::OffsetFetch as INT16 in my OffsetFetchRequest constructors... or am I missing something?

@mattklein123 (Member), Nov 16, 2018

Can't you just use int16_t? All I'm saying is to use the system types, which are really clear. I don't see the point in the typedef when we use these types all over the code base.

namespace NetworkFilters {
namespace Kafka {

// === REQUEST PARSER MAPPING (REQUEST TYPE => PARSER) =========================

@mattklein123 (Member), Nov 14, 2018

nit: Can we remove comments like these? We don't do this elsewhere - the names of functions, namespaces, etc. should generally be self-describing. Same elsewhere.

namespace NetworkFilters {
namespace Kafka {

typedef int8_t INT8;

@mattklein123 (Member), Nov 14, 2018

Do these typedefs provide any value? Can we just kill them all and use the direct types? For some of the ones down at the bottom, if you want those to save typing, use CamelCase such as NullableBytes.

@adamkotwasinski (Author), Nov 14, 2018

Sure!
My rationale was that I wanted to use "Kafka types" such as INT8 for things coming from Kafka itself, e.g. a request's api_version is INT16 - but I could have made an explicit typedef int16_t api_version too.

@mattklein123 (Member), Nov 14, 2018

I would just avoid typedefs for the integer cases. int8_t is pretty specific for example.

Resolved review thread: source/extensions/filters/network/kafka/parser.h

Buffer::OwnedImpl data_buffer;
INT32 data_len = encoder.encode(message, data_buffer); // encode data computing data length
encoder.encode(data_len, output_); // encode data length into result
output_.add(data_buffer); // copy data into result

@mattklein123 (Member), Nov 14, 2018

Can we move the buffer to avoid the copy? More generally, would it be faster to precompute the size before encoding so we can encode directly into the output?

@adamkotwasinski (Author), Nov 14, 2018

Will do.
I was considering that, but then it needs a method for each request type (& substructure) that trivially returns something along the lines of

size_t FetchRequest::size() const {
  return sizeof(field1) + sizeof(field2) + substructure.size();
}

@adamkotwasinski (Author), Nov 15, 2018

I would like to defer this problem until we decide how we are actually going to serialize/deserialize the data. First let's settle how we are going to manage the encode/decode operations; computeSize will be yet another operation that follows precisely the same pattern (after all, our requests/responses are trivial trees - so let's decide on the tooling to traverse the trees first).
Makes sense?

@mattklein123 (Member), Nov 15, 2018

Yup that's fine, I would just put a TODO comment to remind us of a potential optimization later.


void RequestDecoder::onData(Buffer::Instance& data) {
uint64_t num_slices = data.getRawSlices(nullptr, 0);
Buffer::RawSlice slices[num_slices];

@mattklein123 (Member), Nov 14, 2018

Can you merge master? I think you need to use the new VLA macros that were added recently.

RequestHeader request_header_;
};

// === OFFSET COMMIT (8) =======================================================

@mattklein123 (Member), Nov 14, 2018

In general, I think this code would benefit from starting to split it into multiple files. Can we potentially have a file for base objects/classes, and then a file per message type since they seem to be fairly large? I think that would make the code easier to read and review as it grows.

@adamkotwasinski (Author), Nov 16, 2018

Naming them request_type.h and putting them in a messages subdirectory.
I envision one header containing both the request & the response for the same type.


return consumed;
}
bool ready() const { return partitions_.ready(); }
OffsetCommitTopic get() const { return {topic_.get(), partitions_.get()}; }

@adamkotwasinski (Author), Nov 16, 2018

I think it starts to be pretty obvious here that OffsetCommitPartitionV0Buffer, OffsetCommitPartitionV1Buffer, OffsetCommitTopicV0Buffer, OffsetCommitTopicV1Buffer & co. are very similar to each other: each is composed of 2, 3 or 4 sub-deserializers, takes their deserialization results, and just bundles them up into the resulting T of Deserializer<T> via a { ... } constructor.

I think we can attack it in a few different ways:
a) keep doing the same stuff, and just cope with the code bloat
b) introduce a new class CompositeDeserializer that would take a vector (tuple?) of sub-deserializers, something like

template <typename T>
class CompositeDeserializer : public Deserializer<T> {
private:
  // I would be storing Deserializer<int8> next to Deserializer<string> here,
  // so I might need a pointer (to some common base) I guess
  std::vector<DeserializerPtr> delegates_;

public:
  // same thing: ready when the last deserializer is full
  bool ready() const { return delegates_.back()->ready(); }

  size_t feed(const char*& buffer, uint64_t& remaining) {
    size_t result = 0;
    for (auto& d : delegates_) {
      result += d->feed(buffer, remaining);
    }
    return result;
  }

  T get() const {
    std::vector<void*> results = ...; // get the result from each delegate and put them in a vector
    return T{results};                // needs a constructor that would downcast each element from void* to the proper type
  }
};

What I don't like here is that I have void* values flowing around, requiring downcasting, and then depending on a constructor to fill in the request's (or other subelement's) fields properly.

c) Common superclasses looking like "a deserializer composed of 2 deserializers":

template <typename T, typename Delegate1T, typename Delegate2T>
class CompositeDeserializerComposedOf2Deserializers : public Deserializer<T> {
private:
  Delegate1T d1;
  Delegate2T d2;

public:
  bool ready() const { return d2.ready(); }
  size_t feed(const char*& buffer, uint64_t& remaining) {
    size_t consumed = d1.feed(buffer, remaining);
    consumed += d2.feed(buffer, remaining);
    return consumed;
  }
  T get() const { return {d1.get(), d2.get()}; }
};

drawbacks: it's the template solution back again ;)

d) bonus consideration: make OffsetCommitRequest abstract and add concrete OffsetCommitRequestV0, OffsetCommitRequestV1 classes - it could alleviate some of the problems present in b), as the constructor would always know that field 0 is e.g. the topic name, instead of needing to check api_version first.

@mattklein123 (Member), Nov 16, 2018

Hmm yeah, I think I'm starting to see the issue here; agreed that b) is not great. I'm not completely opposed to templates if you think it's the best way. I would pick what you think is best for now (with an eye towards avoiding complex templates if possible) and just make sure all of the header files are really well commented. Then I can take another pass? Sound good?

@adamkotwasinski (Author), Nov 16, 2018

Sure! Thank you - I think it's most visible when we actually have N semi-similar classes; then the benefit of a (templated) superclass becomes apparent.

@adamkotwasinski (Author), Nov 19, 2018

The composite deserializers have been added in serialization_composite.h and are tested in serialization_test.cc.
Replaced the OffsetCommit deserializers & the request header deserializer (also composed of 4 delegates).

class OffsetCommitTopicV0ArrayBuffer
: public ArrayDeserializer<OffsetCommitTopic, OffsetCommitTopicV0Buffer> {};
// Deserializes bytes into OffsetCommitRequest (api version 0): group_id, topics (v0)
class OffsetCommitRequestV0Deserializer
: public CompositeDeserializerWith2Delegates<OffsetCommitRequest, StringDeserializer, OffsetCommitTopicV0ArrayBuffer> {};

@adamkotwasinski (Author), Nov 19, 2018

one can write:

class OffsetCommitRequestV0Deserializer
    : public CompositeDeserializerWith2Delegates<
          OffsetCommitRequest,
          StringDeserializer,
          ArrayDeserializer<
              OffsetCommitTopic,
              CompositeDeserializerWith2Delegates<
                  OffsetCommitTopic,
                  StringDeserializer,
                  ArrayDeserializer<
                      OffsetCommitPartition,
                      CompositeDeserializerWith3Delegates<
                          OffsetCommitPartition,
                          Int32Deserializer,
                          Int64Deserializer,
                          NullableStringDeserializer>>>>> {};

but I think it makes it even harder to understand?

@mattklein123 (Member), Nov 20, 2018

Yes, that is not good. At this point, please just make the code as readable as you think it can be, while trying to remove egregious templates. When this is done I will take another pass and offer different suggestions if I see them. It will be much easier to do this with all the other changes I have asked for, such as comments, etc.

@adamkotwasinski (Author), Nov 20, 2018

The remaining big-template classes will be present only in the messages/*.h files, as we need to define the structure somehow.

The end code is actually pretty similar to what the Java client code looks like ( https://github.com/apache/kafka/blob/2.1/clients/src/main/java/org/apache/kafka/common/requests/OffsetCommitRequest.java#L58 ), but Java, lacking templates, forces them to use raw Objects flying around and (nicely hidden) downcasts, just like here: https://github.com/apache/kafka/blob/2.1/clients/src/main/java/org/apache/kafka/common/protocol/types/Type.java#L211

Other changes like comments, splits, renames etc. have already been applied.

@mattklein123 (Member), Nov 20, 2018

@adamkotwasinski OK so the PR is ready for another pass?


@mattklein123 (Member), Nov 20, 2018

Can you fix DCO before I start reviewing again?

@repokitteh (bot) commented Nov 20, 2018

🙀 Error while processing event: evaluation error

error: finished: error from server: {module load error GET https://api.github.com/repos/repokitteh/modules/contents/assign.star: 401 Bad credentials [] map[]}
🐱

Caused by: #4950 was synchronized by adamkotwasinski.

adamkotwasinski added some commits Sep 25, 2018

WIP: Kafka codec
Signed-off-by: Adam Kotwasinski <adam.kotwasinski@gmail.com>
Fix compile error after rebases
Signed-off-by: Adam Kotwasinski <adam.kotwasinski@gmail.com>
Remove all request types except OffsetCommit v0..v1 (for review)
Signed-off-by: Adam Kotwasinski <adam.kotwasinski@gmail.com>
Remove bytes buffers (for review - unused in these requests)
Signed-off-by: Adam Kotwasinski <adam.kotwasinski@gmail.com>
Apply review fixes
- remove unnecessary request type constants
- remove garbage comments
- move operator<< helper to separate header
- move requests (currently only offset_fetch) to separate header
- remove unnecessary typedefs
- add missing documentation
- improve GeneratorMap

Signed-off-by: Adam Kotwasinski <adam.kotwasinski@gmail.com>
Apply review fixes
- properly access buffer slices
- remove CompositeDeserializer and replace it with expanded classes
- documentation

Signed-off-by: Adam Kotwasinski <adam.kotwasinski@gmail.com>
Introduce composite deserializers for 2, 3, 4 delegates
Signed-off-by: Adam Kotwasinski <adam.kotwasinski@gmail.com>
Review fixes:
- some renames
- more comments
- missing include

Signed-off-by: Adam Kotwasinski <adam.kotwasinski@gmail.com>

@adamkotwasinski force-pushed the adamkotwasinski:codec branch from 8e81089 to 8cb67ee Nov 20, 2018

@mattklein123 (Member) left a comment

Nice, flushing next round of comments. Will get into the serialization headers next.

/wait

namespace NetworkFilters {
namespace Kafka {

// abstract codecs for requests and responses

@mattklein123 (Member), Nov 26, 2018

nit: del, obvious

/**
 * Kafka message decoder
 * @tparam MT message type (Kafka request or Kafka response)
 */
template <typename MT> class MessageDecoder {

@mattklein123 (Member), Nov 26, 2018

nit: s/MT/MessageType, same below.

namespace NetworkFilters {
namespace Kafka {

// functions present in this header are used by request / response objects to print their fields

@mattklein123 (Member), Nov 26, 2018

nit: del

Also, this seems general and not only for Kafka. Can we move this into https://github.com/envoyproxy/envoy/tree/master/test/test_common somewhere? Perhaps https://github.com/envoyproxy/envoy/blob/master/test/test_common/printers.h? Not sure of best place.

/**
 * Kafka request type identifier (int16_t value present in header of every request)
 * @see http://kafka.apache.org/protocol.html#protocol_api_keys
 */
enum RequestType : int16_t {

@mattklein123 (Member), Nov 26, 2018

enum class

@adamkotwasinski (Author), Nov 27, 2018

Can I keep it this way?
Making an enum class out of the request type would require adding another value for "unknown request", and possibly force us to keep two fields in UnknownRequest: a RequestType for the unknown value and then an int16_t with the real value that was received.

The other idea would be to make N constexprs like

constexpr int16_t PRODUCE_REQUEST_TYPE{0};
...
constexpr int16_t OFFSET_COMMIT_REQUEST_TYPE{8};

extremely related: https://stackoverflow.com/questions/1965249/how-to-write-a-java-enum-like-class-with-multiple-data-fields-in-c

@@ -0,0 +1,25 @@
#pragma once

#include <vector>

@mattklein123 (Member), Nov 26, 2018

not used

namespace Kafka {

/**
* Represents a sequence of characters or null. For non-null strings, first the length N is given as

@mattklein123 (Member), Nov 26, 2018

From a higher layer code perspective, does the user care about encoding the length inside this string? Don't they just want to operate on a string or nothing? What is the intention of these types? IMO the encoding is an implementation detail. What is actually contained in these types? Can we clarify? Same for below.

@adamkotwasinski (Author), Nov 26, 2018

Good point! I can remove that in this header; this stuff is going to be present in deserialization.h anyway (as that is where I want to state why I'm parsing the way I am).

* @param remaining remaining data in buffer, will be updated by parser
* @return parse status - decision what should be done with current parser (keep/replace)
*/
virtual ParseResponse parse(const char*& buffer, uint64_t& remaining) PURE;

@mattklein123 (Member), Nov 26, 2018

Small readability comment: I would have a struct ParseState or something like that, which contains the buffer pointer and remaining length, and then pass that by reference for modification. Alternatively, and possibly better: is it possible to just pass an absl::string_view and modify the string view (or return a new one)?
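
A sketch of what the two suggestions might look like (hypothetical, not the PR's code):

struct ParseState {
  const char* buffer; // current read position
  uint64_t remaining; // bytes left in the current slice
};

class Parser {
public:
  virtual ~Parser() = default;
  // variant 1: bundle pointer + length into one struct passed by reference
  virtual ParseResponse parse(ParseState& state) PURE;
  // variant 2: shrink an absl::string_view as bytes are consumed
  // virtual ParseResponse parse(absl::string_view& data) PURE;
};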

@mattklein123 (Member), Mar 4, 2019

Ping on this comment?

* Consumes INT32 bytes as request length and updates the context with that value
* @return RequestHeaderParser instance to process request header
*/
ParseResponse parse(const char*& buffer, uint64_t& remaining);

@mattklein123 (Member), Nov 26, 2018

For overridden functions:

  1. Use the override keyword
  2. Don't duplicate the doc comment from the interface header
  3. Precede the overrides with a comment similar to // Extensions::NetworkFilters::Kafka::Parser

Please audit for this elsewhere.
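
A sketch of the requested convention (the class name follows the PR's start parser; the exact placement is illustrative):

class RequestStartParser : public Parser {
public:
  // Extensions::NetworkFilters::Kafka::Parser
  ParseResponse parse(const char*& buffer, uint64_t& remaining) override;
};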

* @param BT deserializer type corresponding to request class (should be subclass of
* Deserializer<RT>)
*/
template <typename RT, typename BT> class RequestParser : public Parser {

@mattklein123 (Member), Nov 26, 2018

nit: Spell out RT, BT, etc. same elsewhere.

public:
virtual ~RequestCallback() = default;

virtual void onMessage(MessageSharedPtr) PURE;

@mattklein123 (Member), Nov 26, 2018

doc comment


namespace NetworkFilters {
namespace Kafka {

/**

@mattklein123 (Member), Nov 26, 2018

Why are these templates actually needed? AFAICT all they do is string together multiple feed calls. Can't feed be an interface method, and then there can be a class that takes a list/vector of feeders and generically does what all of this template code does?

@adamkotwasinski (Author), Nov 26, 2018

I think what you described is basically point b) of #4950 (comment).

The feed part is not a problem, but I'm getting problems with constructing the results in get().
If I keep a vector of Deserializers, then their return type is going to have to be something like void* or anything sufficiently generic, and then I'd need to convert these results into the Request's constructor arguments.

Basically I would want to have something like

std::vector<Deserializer<?>> delegates_;

ReturnType get() const {
  return { delegates_[0].get(), delegates_[1].get(), delegates_[2].get(), ... };
}

As a minimum, feed will be changed to be more generic.

When it comes to get, I will take a look into the possibility of having an array instead of a vector - this way I think I'll be able to do some templating to change array<N>{ deserializer1, ... } into an argument list; see the sketch below.
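
A hypothetical sketch of that templating, using a std::tuple of delegates and std::index_sequence to expand the per-delegate results into a constructor argument list (names are illustrative, not the PR's):

#include <cstddef>
#include <tuple>
#include <utility>

template <typename ResponseType, typename... Deserializers>
class TupleDeserializer {
public:
  ResponseType get() const {
    return getImpl(std::index_sequence_for<Deserializers...>{});
  }

private:
  template <std::size_t... Is>
  ResponseType getImpl(std::index_sequence<Is...>) const {
    // expands to { std::get<0>(delegates_).get(), std::get<1>(delegates_).get(), ... }
    return {std::get<Is>(delegates_).get()...};
  }

  std::tuple<Deserializers...> delegates_;
};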

@mattklein123 (Member), Nov 26, 2018

OK I see the problem. Alright do what you can and I will take another pass in the next round. Sorry for not seeing the get() issue.


Reorganize test code
Signed-off-by: Adam Kotwasinski <adam.kotwasinski@gmail.com>


Create separate test class for each of Kafka tests; add missing formatting

Signed-off-by: Adam Kotwasinski <adam.kotwasinski@gmail.com>


Put Kafka tests in dedicated namespaces to avoid duplicate mock classes when running coverage builds; some renames

Signed-off-by: Adam Kotwasinski <adam.kotwasinski@gmail.com>



@adamkotwasinski (Author) commented Mar 27, 2019

/wait


Kick CI
Signed-off-by: Adam Kotwasinski <adam.kotwasinski@gmail.com>


adamkotwasinski added some commits Mar 28, 2019

Kick CI
Signed-off-by: Adam Kotwasinski <adam.kotwasinski@gmail.com>
Kick CI
Signed-off-by: Adam Kotwasinski <adam.kotwasinski@gmail.com>
@adamkotwasinski (Author) commented Mar 29, 2019

/retest

@repokitteh (bot) commented Mar 29, 2019

🔨 rebuilding ci/circleci: release (failed build)
🔨 rebuilding ci/circleci: coverage (failed build)

🐱

Caused by: a #4950 (comment) was created by @adamkotwasinski.


@mattklein123 (Member) left a comment

Thanks - is this ready to go, other than figuring out why coverage is failing? Maybe try merging master? If that doesn't work, how can we help debug?

/wait

// TODO(adamkotwasinski) discuss capturing the data as-is, and simply putting it back
// this would add ability to forward unknown types of requests in cluster-proxy
/**
* It is impossible to encode an unknown request, as it is only a placeholder.

@mattklein123 (Member), Apr 2, 2019

So this can never happen in practice? If so should it be NOT_IMPLEMENTED?



@adamkotwasinski (Author) commented Apr 15, 2019

/wait
I'm afraid it might be something happening due to gcovr not handling the Python-generated files properly; I will investigate, but it might take some time.


@adamkotwasinski (Author) commented Apr 18, 2019

/retest

@repokitteh (bot) commented Apr 18, 2019

🔨 rebuilding ci/circleci: coverage (failed build)

🐱

Caused by: a #4950 (comment) was created by @adamkotwasinski.

