
Use container's RAM limit set by deploy resources limit #4686

Open
iamriajul opened this issue Jun 10, 2024 · 7 comments
Labels
bug Something isn't working as expected spike A spike is where we need to do investigation before we could estimate the effort to fix

Comments

@iamriajul

iamriajul commented Jun 10, 2024

Describe the bug
I was running into sudden Meilisearch crashes. The same data set ran fine on another server (an EC2 instance with 2 GiB RAM) that had no container resource limits, but on this new 32 GiB bare-metal server I deployed Meilisearch with a container resource limit of 8 GiB RAM.

To Reproduce

  1. Deploy Meilisearch in a Docker Swarm environment with a resources limit (memory limit):
    deploy:
      resources:
        limits:
          memory: 8G # 1/4 of the host's available RAM
  2. docker exec -it {containerId} sh
  3. meilisearch --help
    The output shows the default maximum indexing memory Meilisearch will use. In my case it defaulted to 20 GiB even though the container has a memory limit of 8 GiB (see the check below).
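The limit the container actually sees can be compared with what meilisearch --help reports by reading the cgroup filesystem from inside the container. A minimal check, assuming cgroup v2 (on cgroup v1 the file is /sys/fs/cgroup/memory/memory.limit_in_bytes):

    # memory limit Docker applied to the container, in bytes
    cat /sys/fs/cgroup/memory.max

    # the host's total RAM (MemTotal), which is what Meilisearch's default is derived from
    head -n 1 /proc/meminfo

    # the default max indexing memory Meilisearch computed
    meilisearch --help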

Expected behavior
It should detect the maximum memory available to the container (the limit) and then automatically use 2/3 of that limit (roughly 5.3 GiB for an 8 GiB limit) instead of 2/3 of the host's total memory.

Workaround
As a workaround I had to redeploy Meilisearch with the MEILI_MAX_INDEXING_MEMORY: '6 GiB' environment variable so that it stays within the container's RAM limit.
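A minimal compose sketch of this workaround, assuming the same 8G limit as above (the service name and image tag are illustrative):

    services:
      meilisearch:
        image: getmeili/meilisearch:v1.8.1
        environment:
          # keep indexing memory well below the container's 8 GiB limit
          MEILI_MAX_INDEXING_MEMORY: '6 GiB'
        deploy:
          resources:
            limits:
              memory: 8G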

Meilisearch version:
v1.8.1 (docker)

Additional context
Architecture: x64
Docker Version: 26.1.3
Docker Mode: Swarm
Server: bare metal, 6 cores / 12 threads, 32 GiB RAM, 512 NVMe storage

@iamriajul
Author

There is an open issue about this in the sysinfo repo: GuillaumeGomez/sysinfo#207

@curquiza curquiza changed the title Respect Container's Memory Limit Set By Deploy Resources Limit. Use container's RAM limit set by deploy resources limit Jun 10, 2024
@curquiza curquiza added the bug Something isn't working as expected label Jun 10, 2024
@curquiza
Member

Hello @iamriajul

Thanks for the report. This is indeed an issue we know about.

For example, when using K8s, by default Meilisearch uses:

  • the RAM of the node
  • the CPU of the pod

The workaround is indeed to set your limit manually.
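For K8s, a minimal container spec sketch of that manual workaround (names and values are illustrative, reusing the 8 GiB limit / 6 GiB figures from the report above):

    containers:
      - name: meilisearch
        image: getmeili/meilisearch:v1.8.1
        env:
          # set explicitly, because the default is computed from the node's total RAM
          - name: MEILI_MAX_INDEXING_MEMORY
            value: '6 GiB'
        resources:
          limits:
            memory: 8Gi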

@pkruithof

@curquiza

The workaround is indeed to set your limit manually.

I was going to report an issue, but I found this comment, so I'm asking here instead: we're trying out Meilisearch in our K8s cluster and found that its memory usage keeps increasing over time, up to the point where it gets restarted by K8s. We've set the MEILI_MAX_INDEXING_MEMORY environment variable to 1300Mb (the memory resources are set to 1000Mi/1.95Gi), but memory usage still shows a sawtooth graph:

[screenshot: memory usage graph showing a repeating sawtooth pattern]

The --help output shows that it's picking up the variable at least:

      --max-indexing-memory <MAX_INDEXING_MEMORY>
          Sets the maximum amount of RAM Meilisearch can use when indexing. By default, Meilisearch uses no more than two thirds of available memory
          
          [env: MEILI_MAX_INDEXING_MEMORY=1300Mb]
          [default: "10.42 GiB"]

I would have expected the line to drop — or at least stabilise — around the 1.4G mark. Is this not working correctly, or are we doing something wrong here?

@irevoire irevoire added the spike A spike is where we need to do investigation before we could estimate the effort to fix label Jun 19, 2024
@iamriajul
Author

@pkruithof which version of Meilisearch are you on? There's a memory leak issue with 1.8 and up, so I'm using 1.7.6 for now instead.

@pkruithof

Ah, I didn't know that. We're indeed using version 1.8. Do you have more information about the leak and if/when it will be fixed? Maybe we'll have to downgrade for now as well.

@Kerollmops
Member

Hey @pkruithof 👋

@iamriajul is right, there is a leak in v1.8.0 and v1.8.1. We are investigating it in this Discord channel and on this GitHub issue. At this point we think we have found the root cause and will deploy a fix soon.

Sorry for the inconvenience 😞

@curquiza
Member

v1.8.3 with the fix has been released @iamriajul
