Add max and ttl for response maps #761

Merged: 2 commits merged into master from expire on Oct 3, 2018

Conversation

derekcollison (Member):

Adds the ability to expire, and to limit the number of, response map entries for accounts mapping request/reply, for the case where the response is not sent.

Signed-off-by: Derek Collison <derek@nats.io>

/cc @nats-io/core

Signed-off-by: Derek Collison <derek@nats.io>

@coveralls commented Oct 2, 2018:


Coverage increased (+0.1%) to 92.097% when pulling 364f3fe on expire into fd3e5fd on master.

server/const.go (outdated):
DEFAULT_MAX_ACCOUNT_AE_RESPONSE_MAPS = 100000

// DEFAULT_TTL_AE_RESPONSE_MAP
DEFAULT_TTL_AE_RESPONSE_MAP = 60 * time.Second * 10 // 10 minutes

Member:

Can use 10 * time.Minute to define the duration too.
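
For illustration, the same value written with the suggested form (a sketch only, not the actual commit):

// DEFAULT_TTL_AE_RESPONSE_MAP
DEFAULT_TTL_AE_RESPONSE_MAP = 10 * time.Minute // still 10 minutes, stated directly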

derekcollison (Author):

Ah yes, thanks.

Signed-off-by: Derek Collison <derek@nats.io>
a.mu.RUnlock()

if numOver <= 0 {
    return

Member:

That seems wrong: there is a race where adding a service will see that pruning is still true and not trigger the goroutine. I believe the decision to return should be mutually exclusive with the decision to start pruning.

derekcollison (Author):

Many maps can be removed or added by the time this function executes, so we can't really make any assumptions.

Member:

But you see what I meant, right? If service maps are added while this goroutine executes, and before it returns and sets pruning to false, then those added ones will not be pruned until a new one is added that starts the goroutine again.

derekcollison (Author):

I believe I coded it such that if we are over the max, or any entries are past the ttl, we will keep pruning. We do not hold the lock exclusively while we are pruning. We only exit when we are below the max, and we check the ttl first.

Member:

See above: you check that numOver <= 0 (meaning that we are below the limit) and then you return. There is a time window between that check and the moment pruning is set back to false. At the same time, when services are added, we have this code (under lock):

if a.nae > a.maxnae && !a.prunning {
    a.prunning = true
    go a.pruneAutoExpireResponseMaps()
}

My argument is that it is possible that we are now over the limit again and do not have a goroutine that will do the pruning (since the previous one is about to exit). Granted, as soon as a new addition (over the limit) is made, the pruning routine will be started.
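
To make the window concrete, here is a minimal, self-contained sketch of the two paths being discussed. Only a.nae, a.maxnae, a.prunning, the RLock around the numOver check, and the addition snippet above come from this thread; the struct layout, the addResponseMap name, and everything else are assumptions for illustration.

package main

import "sync"

// Account is a stand-in carrying only the fields mentioned in this thread.
type Account struct {
    mu       sync.RWMutex
    nae      int  // current number of auto-expire response maps
    maxnae   int  // configured maximum
    prunning bool // true while a pruning goroutine is running
}

// addResponseMap is a hypothetical stand-in for the service-add path quoted above.
func (a *Account) addResponseMap() {
    a.mu.Lock()
    a.nae++
    if a.nae > a.maxnae && !a.prunning {
        a.prunning = true
        go a.pruneAutoExpireResponseMaps()
    }
    a.mu.Unlock()
}

// pruneAutoExpireResponseMaps shows only the exit path under discussion.
func (a *Account) pruneAutoExpireResponseMaps() {
    defer func() {
        a.mu.Lock()
        a.prunning = false
        a.mu.Unlock()
    }()

    a.mu.RLock()
    numOver := a.nae - a.maxnae
    a.mu.RUnlock()

    if numOver <= 0 {
        // The window: a concurrent addResponseMap can push nae over maxnae
        // right here, see prunning == true, skip starting a new goroutine,
        // and only afterwards does the deferred reset run, leaving no
        // pruner active until the next addition over the limit.
        return
    }
    // ... pruning work elided ...
}

func main() {}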

derekcollison (Author):

Yes, there could be such a condition. I felt it was better not to hold the lock for the entire processing time, but I could do that. It may impact request/response latency more than the current approach, though probably not by much unless there are a large number of entries. WDYT? Simplify and hold the lock throughout?

Member:

I think something like this would be enough? Remove the defer at the top of the function, and then:

a.mu.Lock()
numOver := a.nae - a.maxnae
if numOver <= 0 {
    a.prunning = false
    a.mu.Unlock()
    return
}
a.mu.Unlock()

derekcollison (Author):

That will not handle any ttl-expired entries unless we are also over the limit. Although we do not call it on that condition yet, I wanted the code to be flexible enough to handle it. The test for this exercises both the max and the ttl conditions.
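
For illustration, an exit check that also covers the ttl case could look roughly like this (a sketch; expiredResponseMaps is a hypothetical helper, not something from the PR):

a.mu.Lock()
numOver := a.nae - a.maxnae
numExpired := a.expiredResponseMaps(time.Now()) // hypothetical: count of entries past their ttl
if numOver <= 0 && numExpired == 0 {
    a.prunning = false
    a.mu.Unlock()
    return
}
a.mu.Unlock()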

server/auth.go (review thread resolved)

@kozlovic (Member) commented Oct 3, 2018:

@derekcollison Can always iterate later. LGTM.

derekcollison merged commit 871644b into master on Oct 3, 2018.
derekcollison deleted the expire branch on October 3, 2018 at 15:36.

derekcollison (Author):

OK, will change prunning to pruning on master.
