purgeStale skips purge due to expiration ceil (assumption) making cache use stale values #7
Ah, I was able to reproduce it; this answers a puzzle in #3 that'd been bugging me for a while now. Fixed in 1.1.0. Here's a little rate limiter I whipped up to reproduce:

```ts
import TTLCache from './'
import type { Options as TTLCacheOptions } from './'

export interface Options<K> extends TTLCacheOptions<K, number> {
  maxHits: number
}

export class RateLimiter<K> extends TTLCache<K, number> {
  readonly maxHits: number

  constructor (options: Options<K>) {
    options.updateAgeOnGet = false
    options.noUpdateTTL = true
    super(options)
    if (!options.maxHits || typeof options.maxHits !== 'number' || options.maxHits <= 0) {
      throw new TypeError('must specify a positive number of max hits allowed within the period')
    }
    this.maxHits = options.maxHits
  }

  // call limiter.hit(key) and it'll return true if it's allowed,
  // or false if it should be rejected.
  hit (key: K) {
    const value = (this.get(key) || 0) + 1
    this.set(key, value)
    return value < this.maxHits
  }
}
```

Example:

```ts
const rl = new RateLimiter<string>({ ttl: 100, maxHits: 10 })
const run = () => {
  const interval = setInterval(() => {
    const allowed = rl.hit('test')
    console.log(Date.now(), allowed, rl.get('test'))
    if (!allowed) {
      console.error('> > > > > hit rate limit')
      clearInterval(interval)
      setTimeout(run, 1)
    }
  }, 10)
}
run()
```
By the way, though, this isn't a very clever rate limiter, so I wouldn't make it too load-bearing. Because it resets the whole count when the TTL expires, you could easily do stuff like this, assuming a rate limit of 100 hits every minute: land 100 hits just before the window expires, wait for the reset, and immediately land 100 more, getting roughly double the intended rate in a couple of seconds around the boundary.
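To make that boundary-burst weakness concrete, here's a tiny self-contained sketch. The `FixedWindow` class is hypothetical (not part of ttlcache), just the same counter-reset shape as the limiter above, with the clock passed in so it can be simulated deterministically:

```typescript
// Minimal fixed-window counter: one count per window, reset wholesale
// when the window rolls over -- the same shape as the limiter above.
class FixedWindow {
  private count = 0
  private windowStart = 0
  constructor (private windowMs: number, private max: number) {}

  // `now` is injected so we can simulate time instead of sleeping.
  hit (now: number): boolean {
    if (now - this.windowStart >= this.windowMs) {
      this.windowStart = now
      this.count = 0 // the wholesale reset is the weak spot
    }
    this.count++
    return this.count <= this.max
  }
}

// 100 hits/minute allowed; burst 100 hits at t=59.5s, 100 more at t=60.5s.
const fw = new FixedWindow(60_000, 100)
let allowed = 0
for (let i = 0; i < 100; i++) if (fw.hit(59_500)) allowed++
for (let i = 0; i < 100; i++) if (fw.hit(60_500)) allowed++
console.log(allowed) // 200 -- double the quota, within one second of wall-clock time
```

Both bursts land inside different counting windows, so all 200 hits are accepted even though they arrive one second apart.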
Here's one that uses the TTLCache's auto-purge to keep a time-series of hits, by keeping a TTLCache for each key the limiter knows about. Bit more CPU and memory usage, but probably still not too bad.

```ts
import type { Options as TTLCacheOptions } from './'
import TTLCache from './'

export interface Options {
  window: number
  max: number
}

interface RLEntryOptions extends TTLCacheOptions<number, boolean> {
  onEmpty: () => any
}

class RLEntry extends TTLCache<number, boolean> {
  onEmpty: () => any
  constructor (options: RLEntryOptions) {
    super(options)
    this.onEmpty = options.onEmpty
  }
  purgeStale () {
    const ret = super.purgeStale()
    if (this.size === 0 && ret) {
      this.onEmpty()
    }
    return ret
  }
}

class RateLimiter<K> extends Map<K, TTLCache<number, boolean>> {
  window: number
  max: number
  constructor (options: Options) {
    super()
    this.window = options.window
    this.max = options.max
  }
  hit (key: K) {
    const c = super.get(key) || new RLEntry({
      ttl: this.window,
      onEmpty: () => this.delete(key),
    })
    this.set(key, c)
    if (c.size > this.max) {
      // rejected, too many hits within window
      return false
    }
    c.set(performance.now(), true)
    return true
  }
  count (key: K) {
    const c = super.get(key)
    return c ? c.size : 0
  }
}
```

All these examples licensed under the same ISC license as this repo, but truly don't expect any support if they have bugs, lol
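The same sliding-window idea can be sketched without TTLCache at all, using a plain array of hit timestamps per key, pruned on each access. This `SlidingWindow` class is an illustration of the technique, not drop-in code from the repo:

```typescript
// Sliding-window limiter keeping raw hit timestamps per key.
// Same spirit as the TTLCache-per-key version above: a hit is
// rejected if `max` hits already landed within the last `window` ms.
class SlidingWindow<K> {
  private hits = new Map<K, number[]>()
  constructor (private window: number, private max: number) {}

  hit (key: K, now: number = Date.now()): boolean {
    // drop timestamps that have aged out of the window
    const ts = (this.hits.get(key) || []).filter(t => now - t < this.window)
    if (ts.length >= this.max) {
      this.hits.set(key, ts)
      return false
    }
    ts.push(now)
    this.hits.set(key, ts)
    return true
  }

  count (key: K, now: number = Date.now()): number {
    return (this.hits.get(key) || []).filter(t => now - t < this.window).length
  }
}

// With a 60s window and max 100, the boundary burst no longer works:
const sw = new SlidingWindow<string>(60_000, 100)
let ok = 0
for (let i = 0; i < 100; i++) if (sw.hit('k', 59_500)) ok++
for (let i = 0; i < 100; i++) if (sw.hit('k', 60_500)) ok++
console.log(ok) // 100 -- the second burst is rejected, since 100 hits are still in the window
```

The trade-off is the same one noted above: storing a timestamp per hit costs more memory than a single counter, but the window slides continuously instead of resetting wholesale.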
I am trying to set up an API rate limiter using ttlcache. The idea is that requests keep coming in, and I count them using some key, e.g. `API_TOKEN_xxxx`. I do not refresh the TTL and let the entries expire, so the value resets automatically. And yet I am hitting rate limit errors. I've set the limit to 3 and am sending 2 requests per second; here is the log:

```
purgeStale, exp > n 8470 8468.831500053406
```

<- here we skip resetting the data, and it seems the next purge will be queued upon the next `set`. And no checks are done while getting a value from the cache. Thank you very much for your work.