
chore!: bump lru-cache to 10.x & increase node engine #695

Closed
H4ad wants to merge 1 commit

Conversation

@H4ad (Contributor) commented Apr 7, 2024

Trying to improve/solve the dedupe issue in the CLI: npm/cli#7350

I'm not sure this is a good idea, since semver is used by a lot of projects and bumping node-lru-cache requires bumping engines.node to >=16, which drops a lot of Node versions at once.
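
For reference, the change described here would amount to something like the following in package.json (a sketch only; the exact version ranges in the actual diff may differ):

{
  "dependencies": {
    "lru-cache": "^10.0.0"
  },
  "engines": {
    "node": ">=16"
  }
}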

@H4ad requested a review from a team as a code owner April 7, 2024 14:24
@wraithgar (Member)

This would need to be part of #501 since changing engines is a semver major.

@domoritz

Node 16 is not supported anymore, so maybe this doesn't need to be a breaking change?

@wraithgar (Member)

Changing the engines to drop support for any node version, supported or not, is a breaking change for a package.

@domoritz

Okay, a major version bump makes sense then.

@mbtools (Contributor) commented Apr 11, 2024

This package doesn't use any fancy features of lru-cache. It's just caching up to 1000 parses.

How about we get rid of the dependency here and replace it with a simple class based on Map?

class LRUCache {
    constructor(max = 0) {
        if (!Number.isInteger(max) || max < 0)
            throw new TypeError('max must be a nonnegative integer');

        this.max = max;
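        // Map preserves insertion order, so the first key is always the least recently used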
        this.map = new Map();
    }

    get(key) {
        const value = this.map.get(key);
        if (value === undefined)
            return undefined;
        else {
            // Remove the key from the map and add it to the end
            this.map.delete(key);
            this.map.set(key, value);
            return value;
        }
    }

    has(key) {
        return this.map.has(key);
    }

    delete(key) {
        if (this.map.has(key)) {
            this.map.delete(key);
            return true;
        }
        else
            return false;
    }

    set(key, value) {
        // Refresh recency: drop any existing entry before re-inserting,
        // so updated keys move to the end (most recently used) position
        this.delete(key);

        if (value !== undefined) {
            // If the cache is full, evict the least recently used (oldest) item
            if (this.map.size >= this.max) {
                const firstKey = this.map.keys().next().value;
                this.delete(firstKey);
            }

            this.map.set(key, value);
        }

        return this;
    }

    clear() {
        this.map.clear();
    }

    capacity() {
        return this.max;
    }

    size() {
        return this.map.size;
    }

    *entries() {
        for (const [key, value] of this.map)
            yield [key, value];
    }
}

export default LRUCache;
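
For illustration only, wiring this class into a parse cache might look roughly like the sketch below (expensiveParse and the module path are placeholders, not semver's actual internals):

import LRUCache from './lru-cache.js';

// Stand-in for the real parse routine (illustrative only)
const expensiveParse = (version) => ({ raw: version });

// Cache up to 1000 parse results, keyed by the raw input string
const cache = new LRUCache(1000);

function cachedParse(version) {
    let parsed = cache.get(version);
    if (parsed === undefined) {
        parsed = expensiveParse(version);
        cache.set(version, parsed);
    }
    return parsed;
}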

I'm happy to provide a PR with 100% test coverage if you like.

PS: You could also argue whether LRU is actually the best caching strategy. FIFO might be much faster for the majority of cases. But that's a different story.
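
For comparison, here is a minimal sketch of what a FIFO variant of the same Map-based approach could look like; the only real difference is that get() does no recency bookkeeping (purely illustrative, not part of the proposal):

class FIFOCache {
    constructor(max = 1000) {
        this.max = max;
        this.map = new Map();
    }

    get(key) {
        // No delete/re-insert: entries age out strictly in insertion order
        return this.map.get(key);
    }

    set(key, value) {
        if (!this.map.has(key) && this.map.size >= this.max) {
            // Evict the oldest entry (Map iterates in insertion order)
            this.map.delete(this.map.keys().next().value);
        }
        this.map.set(key, value);
        return this;
    }
}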

@wraithgar (Member)

> I'm happy to provide a PR with 100% test coverage if you like.

On one hand, it does seem like a simple Map cache is all we need and should be easy to implement.

On the other hand, "why don't you just" and "how hard can it be?" are usually hard lessons once one goes down the path of answering them.

If it's not too much trouble, please do make that PR. I think that will inform our decision a lot better than an academic discussion would.

@H4ad (Contributor, Author) commented Apr 24, 2024

Closing in favor of #697

@H4ad closed this Apr 24, 2024
@H4ad deleted the chore/bump-lru-cache branch April 24, 2024 15:59