
Must be able to bind to different types in trigger functions #44

Closed
5 of 8 tasks
fbeltrao opened this issue Apr 2, 2019 · 16 comments
fbeltrao (Contributor) commented Apr 2, 2019

The trigger implementation has been tested using KafkaEventData and string as parameter types.
We must implement the following scenarios:

POCO (importance: high)

  • If the POCO class implements ISpecificRecord, the Avro deserialiser should be set when creating the KafkaListener.
  • If the POCO class implements IMessage (the Google.Protobuf contract), the Protobuf deserialiser should be set when creating the KafkaListener.

byte[] (importance: high)

  • Allows deserialisation to be implemented directly in the function.

IGenericRecord (importance: low)

  • If an Avro schema was provided, the Avro deserialiser should be set during KafkaListener creation; reading the fields is then implemented directly in the function.

string (importance: very low)

  • If an Avro schema was provided, we should return a JSON representation of the object (currently it only goes 1 level deep)
  • If a Protobuf contract was supplied, we could return a JSON representation of the object (currently it only goes 1 level deep)

support key, partition, offset, timestamp and topic as stand-alone parameters (importance: medium)

  • In single dispatch
  • In multi dispatch
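Taken together, the scenarios above amount to supporting function signatures along these lines (a rough sketch, not the final design: MyAvroRecord and MyProtoMessage are hypothetical POCO types, and the attribute arguments follow the shape used elsewhere in this thread):

```csharp
// POCO implementing ISpecificRecord => Avro deserialiser set on the KafkaListener
[KafkaTrigger("LocalBroker", "users", ConsumerGroup = "grp")] MyAvroRecord user

// POCO implementing IMessage => Protobuf deserialiser set on the KafkaListener
[KafkaTrigger("LocalBroker", "users", ConsumerGroup = "grp")] MyProtoMessage message

// byte[] => raw payload, deserialisation implemented inside the function
[KafkaTrigger("LocalBroker", "users", ConsumerGroup = "grp")] byte[] rawEvent

// stand-alone metadata parameters alongside the value (single dispatch)
[KafkaTrigger("LocalBroker", "users", ConsumerGroup = "grp")] string value,
    string key, int partition, long offset, DateTime timestamp, string topic
```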
fbeltrao added this to the P0 milestone Apr 2, 2019
fbeltrao (Contributor, Author) commented Apr 2, 2019

@brandonh-msft and @ryancrawcour, please submit your feedback about this item. Once we agree, Brandon can start.

brandonh-msft (Member):

If I'm understanding the byte[] part correctly... this means that if they use a byte array, we want the function developer to implement the creation of KafkaEventData objects?

ryancrawcour (Contributor):

> @brandonh-msft and @ryancrawcour please submit your feedback about this item. Once we agreed, Brandon can start.

Looks good to me.

ryancrawcour (Contributor):

> if i'm understanding the byte[] part... this means if they use byte array, we want the function developer to implement the creating of KafkaEventData objects?

this is my understanding too.
@fbeltrao, was this your intention?

ryancrawcour added the question ("Further information is requested") label Apr 2, 2019
fbeltrao (Contributor, Author) commented Apr 2, 2019

Kafka stores the messages as byte[]. The intention is to get the raw data coming from the topic and do all the required deserialisation in the function.

The following code does that:

[FunctionName(nameof(ByteArrayUser))]
public static void ByteArrayUser(
    [KafkaTrigger("LocalBroker", "users", ConsumerGroup = "azfunc_byte_array", ValueType = typeof(byte[]))] KafkaEventData[] kafkaEvents,
    ILogger logger)
{
    foreach (var kafkaEvent in kafkaEvents)
    {
        logger.LogInformation($"users message has length {((byte[])kafkaEvent.Value).Length}");
    }
}

The idea would be to be able to replace the parameter definition as follows:

[KafkaTrigger("LocalBroker", "users", ConsumerGroup = "azfunc_byte_array")] byte[][] rawKafkaEvents

Does it make sense?
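To make the intent concrete: with a byte[][] binding, deserialisation would live entirely in user code. A sketch of such a function, assuming UTF-8 JSON payloads, a hypothetical UserRecord type, and Newtonsoft.Json for the parsing:

```csharp
[FunctionName(nameof(RawBatchUser))]
public static void RawBatchUser(
    [KafkaTrigger("LocalBroker", "users", ConsumerGroup = "azfunc_byte_array")] byte[][] rawKafkaEvents,
    ILogger logger)
{
    foreach (var raw in rawKafkaEvents)
    {
        // the extension hands over the raw bytes; the function owns the decoding
        var json = System.Text.Encoding.UTF8.GetString(raw);
        var user = Newtonsoft.Json.JsonConvert.DeserializeObject<UserRecord>(json);
        logger.LogInformation($"Received user {user.Name}");
    }
}
```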

fbeltrao (Contributor, Author) commented Apr 2, 2019

It is OK for me if you don't think there is value here. The more I think about it, the more I have the impression that this issue is a nice-to-have, as there is already a way to get things working using KafkaEventData.

This issue is about making it developer friendly.

brandonh-msft (Member) commented Apr 2, 2019

Well, the above code does and doesn't do that. If you look at #43, we never implemented what we actually do with the byte array, so you don't actually get any event data out of the byte stream. This is the piece lacking definition; but if we're saying "if you choose byte[], you need to provide the implementation", that could prove difficult given the structure of bindings.

fbeltrao (Contributor, Author) commented Apr 2, 2019

I agree, that constructor shouldn't be there.

The way the KafkaEventData will be built when ValueType = typeof(byte[]) is the following:

new KafkaEventData(new ConsumeResultWrapper<TKey, TValue>(consumeResult));
  // TValue => byte[]
  // ctor will set KafkaEventData.Value to IConsumeResultData.Value which is a byte[]

https://github.com/Microsoft/azure-functions-kafka-extension/blob/master/src/Microsoft.Azure.WebJobs.Extensions.Kafka/Listeners/KafkaListener.cs#L285

https://github.com/Microsoft/azure-functions-kafka-extension/blob/master/src/Microsoft.Azure.WebJobs.Extensions.Kafka/KafkaEventData.cs#L26
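For reference, the construction path described above can be sketched roughly like this (a simplified paraphrase of the linked source, not a copy; the exact members of IConsumeResultData are an assumption beyond Value):

```csharp
// simplified sketch of how KafkaEventData picks up the consumed value
public class KafkaEventData
{
    public object Key { get; set; }
    public object Value { get; set; }

    // the ctor copies the consumed value through; when TValue == byte[],
    // Value therefore ends up holding the raw payload bytes
    public KafkaEventData(IConsumeResultData consumeResult)
    {
        this.Key = consumeResult.Key;     // Key member assumed for illustration
        this.Value = consumeResult.Value;
    }
}
```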

fbeltrao (Contributor, Author) commented Apr 2, 2019

I will check if I can remove that ctor.

brandonh-msft removed the question ("Further information is requested") label Apr 2, 2019
brandonh-msft added 4 commits to brandonh-msft/azure-functions-kafka-extension that referenced this issue Apr 2, 2019
brandonh-msft (Member) commented Apr 2, 2019

> The way the KafkaEventData will be built in case the ValueType = typeof(byte[]) is the following:
>
> new KafkaEventData(new ConsumeResultWrapper<TKey, TValue>(consumeResult));
>   // TValue => byte[]
>   // ctor will set KafkaEventData.Value to IConsumeResultData.Value which is a byte[]

So IConsumeResultData.Value is always byte[]?

fbeltrao (Contributor, Author) commented Apr 3, 2019

No, it will be of the TValue type with which the KafkaListener&lt;TKey, TValue&gt; was created.

brandonh-msft (Member):

Got it. So you were just saying that the only case where KafkaEventData.Value will end up as byte[] is the case where it gets set as part of the ConsumeResultWrapper. 👍

brandonh-msft (Member):

Does this mean that ✅ 3 in this issue is moot, then?

fbeltrao (Contributor, Author) commented Apr 3, 2019

Correct. It should be simple.
What could be a bit tricky is handling byte[] for a single-dispatch consumer and byte[][] for a batch consumer.
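The two dispatch modes would differ only in the parameter's arity (hypothetical signatures, following the pattern used earlier in this thread):

```csharp
// single dispatch: the binder delivers one raw message per invocation
[KafkaTrigger("LocalBroker", "users", ConsumerGroup = "grp")] byte[] rawEvent

// multi dispatch: the binder delivers a batch per invocation; note byte[][],
// which the binding logic must distinguish from a single byte[] value
[KafkaTrigger("LocalBroker", "users", ConsumerGroup = "grp")] byte[][] rawEvents
```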

brandonh-msft added a commit to brandonh-msft/azure-functions-kafka-extension that referenced this issue Apr 3, 2019
ryancrawcour (Contributor):

Remember folks … MVP first, then we add the "developer friendly" nice-to-haves once developers start using it and requesting them. Hopefully by then we'll have a community of open source developers who will submit their own PRs :)

brandonh-msft added 3 commits to brandonh-msft/azure-functions-kafka-extension that referenced this issue Apr 3, 2019
brandonh-msft added a commit that referenced this issue Apr 4, 2019
* Remove KafkaEventData(byte[]) ctor
Add to sample function example of custom deserialisation
Add to sample function binding to strings

* adding logging to configProvider during converter operations

* task 3 in #44 doesn't require any additional work; added sample handlers to demonstrate

* adding logging to configProvider during converter operations

* Adding tests to prove pt3 of #44
ryancrawcour (Contributor):

What's the status of this issue? Closed, or still being worked on?

@brandonh-msft & @fbeltrao
