
Context.Emit -> Context.MustEmit #241

Closed
gm42 opened this issue Mar 31, 2020 · 6 comments

gm42 commented Mar 31, 2020

I just noticed that Context.Emit(...) calls Context.Fail(err error), which in turn panics.

I would mention in the GoDoc of Fail that it panics, and I would have named Emit as MustEmit, following Go's practice of prefixing panicking methods with Must.

What are your thoughts about this? Perhaps you could include these improvements in the next major version.
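
For reference, the Go convention mentioned above looks like this in the standard library: regexp.Compile returns an error, while regexp.MustCompile panics on an invalid pattern. A minimal, self-contained illustration (not from the thread):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// regexp.Compile returns the error for the caller to handle...
		re, err := regexp.Compile(`\d+`)
		if err != nil {
			// handle the invalid pattern
			return
		}
		fmt.Println(re.MatchString("abc123")) // true

		// ...while regexp.MustCompile panics on an invalid pattern.
		// The Must prefix signals "panics instead of returning an error".
		re2 := regexp.MustCompile(`\d+`)
		fmt.Println(re2.FindString("abc123")) // "123"
	}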

frairon commented Mar 31, 2020

Hi @gm42,
well, technically you're right about the Go practice of prefixing panicking functions with Must.
However, the reason that Emit panics is to stop processing the current message immediately. The user shouldn't be concerned that Emit is actually panicking; they should rely on the fact that the processor will shut down immediately on any error.
Processing a message is like a transaction: either all Emits/Stores succeed and the message is committed upstream, or it fails. There's no reason to name it MustEmit, because the idea is not to provide a version where failure is tolerated.
If you do need that behavior, use an external Emitter and handle the error differently; but then you're leaving the transactional semantics of the callback, so if the Emit fails, there's no way to get the input message again.
Cheers!
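
As a rough sketch of the transactional callback semantics described above (not from the thread; it assumes goka's Context API, a placeholder brokers variable, and hypothetical topic names):

	proc, err := goka.NewProcessor(brokers, goka.DefineGroup("group",
		goka.Input("input-topic", new(codec.String), func(ctx goka.Context, msg interface{}) {
			// Emit and SetValue do not return errors: if either fails,
			// the context fails the message and the processor shuts down,
			// so the input offset is never committed upstream.
			ctx.Emit("output-topic", ctx.Key(), msg)
			ctx.SetValue(msg)
		}),
		goka.Output("output-topic", new(codec.String)),
		goka.Persist(new(codec.String)),
	))
	if err != nil {
		// handle processor creation error
	}
	if err := proc.Run(context.Background()); err != nil {
		// the processor stopped, e.g. because an Emit inside the callback failed
	}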

gm42 commented Mar 31, 2020

> The user shouldn't be concerned that Emit is actually panicking; they should rely on the fact that the processor will shut down immediately on any error.

It is if I don't want the process to panic; that would force me to wrap code with recover(), which is also not good practice. I am reporting here that GoDoc is expected to make it clear when code panics :)

> There's no reason to name it MustEmit, because the idea is not to provide a version where failure is tolerated.

I always thought the Must prefix ought to be used regardless of whether a non-Must version exists, but what you say makes sense.

> If you do need that behavior, use an external Emitter and handle the error differently; but then you're leaving the transactional semantics of the callback.

I would not know how to accomplish the same functionality as the callback with an external Emitter, but thanks for your reply.

frairon commented Apr 1, 2020

As said, the panic in Emit is not the user's concern here. If it fails, the processor shuts down; that's goka's contract. Prefixing it with Must actually suggests that the user might recover if needed, but that shouldn't be done, so I think it would be misleading.
If Emit fails, there's either something wrong with the Codec or with the Kafka connection, and in both cases there's no safe way to continue.

If you do need to continue, here's a version where you could use an external Emitter.

	// create the emitter outside the processor, so emit errors stay with the caller
	// instead of shutting the processor down
	emitter, err := goka.NewEmitter(brokers, targetTopic, new(codec.String))
	// handle error
	proc, err := goka.NewProcessor(brokers, goka.DefineGroup("group",
		goka.Input("input-topic", new(codec.String), func(ctx goka.Context, msg interface{}) {
			// ...
			// EmitSync blocks until the message is delivered and returns the error
			err := emitter.EmitSync("key", "some-value")
			if err != nil {
				// handle emit error
			}
			// ...
		}),
	))
	// handle error
	if err := proc.Run(context.Background()); err != nil {
		// handle processor error
	}

But to be honest, that's a pattern we have never used. The only scenario I can imagine where it would be useful is when a processor emits messages into a different Kafka cluster that might be unavailable.

gm42 commented Apr 1, 2020

> As said, the panic in Emit is not the user's concern here. If it fails, the processor shuts down; that's goka's contract.

We are going in circles :) I am saying that "the processor shutting down" does not equate to crashing a Go process with a panic. That, at the very least, needs to be pointed out in the GoDoc.

> Prefixing it with Must actually suggests that the user might recover if needed, but that shouldn't be done, so I think it would be misleading.

Prefixing it with Must is not an invitation to wrap it in recover(); it just signals that this version of the function can panic instead of returning an error. I already wrote in my previous comment that I agree with you: since there is no version returning an error (not on the Context), you do not need the Must* version.

> If Emit fails, there's either something wrong with the Codec or with the Kafka connection, and in both cases there's no safe way to continue.

I understand that; however, these are not bugs in the code. The package is simply designed to panic in case of network issues, which I also understand and accept. I just think it could be better communicated that "in case of network issues some calls (not all) will panic".

> If you do need to continue, here's a version where you could use an external Emitter.

Thanks for the example. So it looks like using the Context in the callback is not mandatory?

> But to be honest, that's a pattern we have never used. The only scenario I can imagine where it would be useful is when a processor emits messages into a different Kafka cluster that might be unavailable.

Yes, when designing circuit breakers and service readiness we need to take this into account. It does not matter how often it happens; we need to cover that specific case.

Right now, as I see it, we cannot wait for a cluster to become available again, so the service using goka must have some circuit breaker to avoid restarting many times while waiting for the cluster to come back up.
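
A rough sketch of the restart guard discussed above (not from the thread): re-create and re-run the processor with a capped, growing delay so the service does not crash-loop while the cluster is unreachable. Here brokers and graph (a goka.GroupGraph from DefineGroup) are placeholders, and the backoff values are illustrative:

	// re-create and re-run the processor with a capped exponential delay
	// instead of letting the whole service crash-loop while Kafka is down
	backoff := time.Second
	for {
		proc, err := goka.NewProcessor(brokers, graph)
		if err != nil {
			log.Fatalf("creating processor: %v", err) // configuration error, not worth retrying
		}
		if err := proc.Run(context.Background()); err == nil {
			break // Run returned nil: shutdown was requested deliberately
		} else {
			log.Printf("processor stopped: %v; restarting in %v", err, backoff)
		}
		time.Sleep(backoff)
		if backoff < time.Minute {
			backoff *= 2 // capped exponential backoff
		}
	}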

frairon added a commit that referenced this issue Jul 9, 2020
* Co-authored-by: frairon <franz.eichhorn@lovoo.com>
Co-authored-by: R053NR07 <jan.bickel@lovoo.com>
* bugfix in shutdown/rebalance: correctly closing joins
* run update/request/response stats in own goroutine
* fix rebalancing by adding a copartitioning rebalance strategy
* updated readme for configuration, added changelog
* Open Storage in PartitionTable when performing Setup
* return trackOutput if stats are nil
* view.get fixed for uninitialized view; added lots of godoc; fixed many linter errors; added Open call when creating storage
* context stats tracking: use queueing mechanism to avoid race conditions
* Add simpleBackoff and add backoff options for processor and view
* added strings to streams helper
* #249 view example
* issue #249: migration guide, #241 panic documentation of context
* #248 exposing partition table's connection state to view
* Migration: describe offsetbug
* partition_table: implement autoreconnect in recovery mode
* bugfix goroutine-leak in statsloop in partition table
* #248: refactored state merger, bugfix race condition when shutting down the signal/observers, created example
* bugfix partition state notification
* remove change in example, updated changelog
* fix readme example and add readme as example
* restore 1-simplest fix, remove readme-example
Co-authored-by: Jan Bickel <jan.bickel@lovoo.com>

frairon commented Oct 10, 2020

Hi @gm42, we have improved the documentation a lot to describe which functions might panic, see here.
Does that resolve the issue for you?

gm42 commented Oct 10, 2020

That is much better now, thanks!

gm42 closed this as completed Oct 10, 2020