
Custom receive timeout for nack with SQS using JMS #75

Open
enVolt opened this issue Feb 5, 2019 · 15 comments

Comments

@enVolt

enVolt commented Feb 5, 2019

The behavior of negative acknowledgment is to change the visibility timeout of the received message to 0, and the value of NACK_TIMEOUT is not configurable when creating the SQS connection factory for JMS.

https://github.com/awslabs/amazon-sqs-java-messaging-lib/blob/master/src/main/java/com/amazon/sqs/javamessaging/acknowledge/NegativeAcknowledger.java#L99

When a message is received and processing fails (the listener method throws an exception), the message is immediately received again. In most cases, the message could be processed successfully after a certain delay.

Is it possible to configure the library not to change the visibility timeout, so that it respects the queue's default visibility timeout configuration?

PS - I originally asked on Stackoverflow
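For context, here is a minimal sketch of what the hard-coded nack amounts to, assuming the AWS SDK for Java v1 that the library builds on (the class name, queue URL, and receipt handle are placeholders, and the library actually issues these calls in batches):

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class NackEffectIllustration {
    public static void main(String[] args) {
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
        String queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"; // placeholder
        String receiptHandle = "...";                                                  // placeholder

        // A visibility timeout of 0 makes the message deliverable again immediately;
        // this hard-coded 0 is what this issue asks to make configurable.
        sqs.changeMessageVisibility(queueUrl, receiptHandle, 0);
    }
}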

@enVolt
Author

enVolt commented Feb 25, 2019

Anyone?

@robertpila

I am having the same issue. This solution or the ability to use the SQS Default Visibility Timeout would be nice.

@cmyker

cmyker commented Aug 16, 2019

Same here, immediate redelivery on NACK results in the message quickly ending up in the dead-letter queue

@AlexRosier

AlexRosier commented Oct 10, 2019

We are having the same problem. Using the default that is set on SQS seems to be better behavior.
This seems to have been done in #72, but the pull request is closed

@vghero

vghero commented Nov 29, 2019

Same here. Any news on this?

@cmyker

cmyker commented Jan 30, 2020

Workaround on Stack Overflow: https://stackoverflow.com/a/55187272/1079659

@adressin

adressin commented Apr 9, 2020

Thanks for the workaround @cmyker! The downside is that it affects all change-visibility requests... but it solves my immediate issue. +1 for incorporating an option into the library
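For readers who don't follow the link, here is a rough sketch of an interception-style workaround along those lines, assuming the AWS SDK for Java v1 request-handler hook; the handler class name and the 30-second value are illustrative, and the linked answer's exact code may differ:

import com.amazonaws.AmazonWebServiceRequest;
import com.amazonaws.handlers.RequestHandler2;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.ChangeMessageVisibilityBatchRequest;
import com.amazonaws.services.sqs.model.ChangeMessageVisibilityRequest;

// Rewrites any ChangeMessageVisibility(Batch) request that carries a 0-second
// timeout (the library's nack) to a custom delay. As noted above, this affects
// ALL change-visibility requests made through this client, not just nacks.
public class NackVisibilityTimeoutHandler extends RequestHandler2 {

    private static final int NACK_VISIBILITY_SECONDS = 30; // illustrative value

    @Override
    public AmazonWebServiceRequest beforeExecution(AmazonWebServiceRequest request) {
        if (request instanceof ChangeMessageVisibilityRequest) {
            ChangeMessageVisibilityRequest cmv = (ChangeMessageVisibilityRequest) request;
            if (cmv.getVisibilityTimeout() != null && cmv.getVisibilityTimeout() == 0) {
                cmv.setVisibilityTimeout(NACK_VISIBILITY_SECONDS);
            }
        } else if (request instanceof ChangeMessageVisibilityBatchRequest) {
            ((ChangeMessageVisibilityBatchRequest) request).getEntries().stream()
                    .filter(e -> e.getVisibilityTimeout() != null && e.getVisibilityTimeout() == 0)
                    .forEach(e -> e.setVisibilityTimeout(NACK_VISIBILITY_SECONDS));
        }
        return request;
    }
}

The handler would then be registered on the SQS client that is handed to the JMS connection factory, for example:

        AmazonSQS sqs = AmazonSQSClientBuilder.standard()
                .withRequestHandlers(new NackVisibilityTimeoutHandler())
                .build();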

@sedooe

sedooe commented Oct 16, 2020

See my answer here; I believe this is the most practical way to keep the visibilityTimeout set on the queue itself if you're using Spring JMS.

This default behavior of the library is pretty strange, though. If it was going to set the timeout to 0 for every failed message by default, then why do we have the visibilityTimeout attribute on the queue itself in the first place?

@AlexandreGuidin

AlexandreGuidin commented Nov 13, 2020

For those who use spring-jms: I have just tested @sedooe's solution in my local environment with ElasticMQ, and it worked.

import javax.jms.JMSException;
import javax.jms.Session;

import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class CustomJmsListenerContainerFactory extends DefaultJmsListenerContainerFactory {

    @Override
    protected DefaultMessageListenerContainer createContainerInstance() {
        // Hand Spring the custom container below instead of the default one.
        return new CustomMessageListenerContainer();
    }
}

public class CustomMessageListenerContainer extends DefaultMessageListenerContainer {

    public CustomMessageListenerContainer() {
        super();
    }

    @Override
    protected void rollbackOnExceptionIfNecessary(Session session, Throwable ex) throws JMSException {
        // Do nothing: skipping Session.recover() here means the SQS JMS library never
        // sends the negative acknowledgment that resets the visibility timeout to 0,
        // so a failed message only reappears after the queue's visibility timeout expires.
    }

    @Override
    protected void rollbackIfNecessary(Session session) throws JMSException {
        super.rollbackIfNecessary(session);
    }
}

And then:

        CustomJmsListenerContainerFactory factory = new CustomJmsListenerContainerFactory();

I didn't like it too much, but it solves my problem.
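For completeness, here is a sketch of how the custom factory might be wired into a Spring configuration, using the SQSConnectionFactory from amazon-sqs-java-messaging-lib; the bean name, client construction, and @Configuration class are assumptions on top of the snippet above:

import com.amazon.sqs.javamessaging.ProviderConfiguration;
import com.amazon.sqs.javamessaging.SQSConnectionFactory;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JmsConfig {

    @Bean
    public CustomJmsListenerContainerFactory jmsListenerContainerFactory() {
        SQSConnectionFactory connectionFactory = new SQSConnectionFactory(
                new ProviderConfiguration(),
                AmazonSQSClientBuilder.defaultClient());

        CustomJmsListenerContainerFactory factory = new CustomJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        // @JmsListener methods that reference this factory will now run inside
        // CustomMessageListenerContainer, so a listener exception no longer triggers
        // the immediate-redelivery nack.
        return factory;
    }
}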

@AlexandreGuidin

I think Spring has much to learn from Akka/Alpakka when it comes to messaging

https://doc.akka.io/docs/alpakka/current/sqs.html

@sedooe

sedooe commented Nov 13, 2020

I think Spring has much to learn from Akka/Alpakka when it comes to messaging

https://doc.akka.io/docs/alpakka/current/sqs.html

I'm glad it worked for you, but this strange behavior comes from amazon-sqs-java-messaging-lib and has nothing to do with Spring. 🙂

@AlexandreGuidin

I think Spring has much to learn from Akka/Alpakka when it comes to messaging
https://doc.akka.io/docs/alpakka/current/sqs.html

I'm glad it worked for you, but this strange behavior comes from amazon-sqs-java-messaging-lib and has nothing to do with Spring. 🙂

Sorry, my mistake.
I was so frustrated that I wasn't thinking straight, haha.

@vghero

vghero commented Aug 16, 2021

More than 2 years later, still having to use the workaround? :)

@danmoorenhs

This default behavior of the library is pretty strange though. If you were going to set it to 0 for every failed message by default, then why do we have visibilityTimeout attribute on the queue itself in the first place?

This is exactly what is perplexing us.

We have actually been relying on the visibilityTimeout to give us a simple backoff strategy. Only today have we realised that if there is an error downstream, or in handling a particular message, this forcing of zero wait means the retries will effectively DoS the downstream service.

At first and second look this seems very wrong :(

@jahlbornG

I too am saddened that there is no fix for this yet. We process messages that involve calls to external APIs with rate limiting. When messages are retried due to rate-limit errors, the immediate retries only exacerbate the problem and increase the chances that the message will quickly exhaust all of its retries and fail completely.

I investigated some of the options above. @sedooe's solution of overriding the rollbackOnExceptionIfNecessary() method is fairly simple; however, when I looked into the recover() method that it skips, I noticed that the unacked messages won't be cleaned out. I'm a little worried that this could cause issues over time.

Ultimately, we are forced to go the route of forking the library and directly modifying the negative-ack logic for now. Would love to see this issue fixed.
