n8n is taking a lot of RAM memory #7939
Comments
Hey @pablorq, n8n loads things into memory while working, so I would expect usage to grow, and to be higher the more items you are processing, but the memory should drop back down afterwards. Can you check the Portainer memory stats while your workflow is running to see whether the memory is being released? I would expect it to grow and then decrease a few seconds after the workflow has finished. Or can you check the container stats using the docker command to see what that shows? It could be that the 1 GB is reserved but only 300 MB is actually being used. |
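For anyone wanting to automate that check, the same numbers `docker stats` shows can be polled from a small script. Below is a minimal sketch using the Docker SDK for Python (an assumption: the `docker` package is installed, and the container name "n8n" is hypothetical, so adjust it to the actual service name in the stack). Note that on cgroup v1 hosts the reported usage includes page cache, so it can read higher than what `docker stats` displays.

```python
# Sketch: poll a container's memory usage via the Docker SDK for Python.
# Assumptions: the docker package is installed and a container named "n8n" exists.
import time

import docker

client = docker.from_env()
container = client.containers.get("n8n")  # hypothetical container name

for _ in range(10):
    stats = container.stats(stream=False)          # one-shot stats snapshot
    mem = stats.get("memory_stats", {})
    used_mib = mem.get("usage", 0) / 1024 / 1024   # bytes -> MiB
    limit_mib = mem.get("limit", 0) / 1024 / 1024
    print(f"memory: {used_mib:.0f} MiB used / {limit_mib:.0f} MiB limit")
    time.sleep(5)
```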
Hi @Joffcom! Maybe I didn't express myself correctly: this data was taken when no workflow was running. I was only speculating about what could be happening when I mentioned the workflow that uses Google Sheets to move a lot of data, because it's the only one I can relate to high memory use. But maybe it's something else. To give you more detail, I have 7 workflows that activate on a timer (RSS trigger or schedule trigger), most of them every 3 hours and 2 of them every hour. Most of the time the sequence is "run, check everything is OK, end", which takes a minute or so. The memory consumption I'm reporting happens when no workflow is running. In fact, it's now at 1.5 GB. This is more info from inside the container:
Important info: Now I've noticed that I have 3 "forever running" executions (issue #7754). I don't know if that is related; I didn't check it when I first opened this issue. I'm still on v1.16.0. I'm going to upgrade and see how everything works. |
Hey @pablorq, I was just looking at my own instance, which has about 40 active workflows, and I have a very different graph. This is after running 2948 workflows since the last restart, which I think was yesterday. This is n8n running in Portainer with a Postgres database as well. Out of interest, if you start up a new instance that doesn't add the extra node package and just runs like normal, do you also see the same usage? Can you also share the environment variables and Portainer settings you have configured? For example, in Portainer have you set any of the Runtime & Resources options or Capabilities? |
Hi @Joffcom! I've upgraded today to v1.18.2 and so far (a couple of hours) it seems pretty good. This is my Portainer stack editor (docker-compose style). It's also n8n + Postgres (I would prefer MariaDB, by the way):

```yaml
# n8n with persistent volume Postgres
# https://raw.githubusercontent.com/n8n-io/n8n/master/docker/compose/withPostgres/docker-compose.yml
version: '3.8'

volumes:
  db_data:
  main_data:

services:
  postgres:
    image: postgres:11
    restart: always
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_NON_ROOT_USER=${POSTGRES_NON_ROOT_USER}
      - POSTGRES_NON_ROOT_PASSWORD=${POSTGRES_NON_ROOT_PASSWORD}
    volumes:
      - db_data:/var/lib/postgresql/data
      - ${N8N_LOCAL_FILES}/n8n-config/init-data.sh:/docker-entrypoint-initdb.d/init-data.sh
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -h localhost -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 5s
      timeout: 5s
      retries: 10

  main:
    image: docker.n8n.io/n8nio/n8n:${N8N_VERSION}
    # image: docker.n8n.io/n8nio/n8n
    restart: always
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=${POSTGRES_DB}
      - DB_POSTGRESDB_USER=${POSTGRES_NON_ROOT_USER}
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_NON_ROOT_PASSWORD}
      - GENERIC_TIMEZONE=Europe/Madrid
      - TZ=Europe/Madrid
      - NODE_FUNCTION_ALLOW_EXTERNAL=*
    ports:
      - 5678:5678
    links:
      - postgres
    volumes:
      - main_data:/home/node/.n8n
      - ${N8N_LOCAL_FILES}:/files
    depends_on:
      postgres:
        condition: service_healthy
    user: root
    entrypoint: sh -c
    command:
      - |
        # show the user and working directory the container starts with
        whoami
        pwd
        # install the extra packages used by a Code node
        npm install -g @mozilla/readability jsdom
        echo "done install"
        # start n8n as the node user in the background, then keep the container alive
        su node -c "tini -s -- /docker-entrypoint.sh &"
        sleep infinity
    extra_hosts:
      # Access host localhost
      - "local.host:host-gateway"
```

One question: what do you mean by Runtime & Resources options or Capabilities? I think the only thing configured is the timezone. |
Hey @pablorq, Runtime & Resources and Capabilities are options you would set in the Portainer UI. We do support MariaDB at the moment as well but we will be removing support for it in the future. |
I see those options in the Portainer documentation, but I don't have them in the container configuration. (Using latest available Portainer v2.19). In the host setup, I have the following options disabled and greyed out:
It seems that these options are only for agent-based hosts, but I'm using Portainer without an agent, connecting directly to the Docker socket (Linux). In summary, we can assume those options are at their defaults. |
Now I have 2 forever running executions and this is the container status:
After 7 days it's now using more than 600 MB, and it keeps increasing. I don't really know if this has anything to do with the forever-running executions; maybe it's something else. Any ideas or tests to try? |
Hey @pablorq, I am still not seeing this, so maybe it is linked to forever-running executions 🤔 Did you try using a Docker image without the extra run commands you are using, so it is exactly the same as what we ship? One of the things I noticed this morning when taking a quick look was the memory leak reports on the jsdom package going back to 2013. If you are using jsdom a lot, there is a chance that some of your issues could be coming from that. |
Hi @Joffcom! I'm doing more tests. As you point out in your last message, maybe the jsdom library, and the extra commands used to install it, affect things in some way. So I've duplicated the stack (copying the DB):
The first picture is good: n8n uses 200 MB vs 340 MB for n8n-jsdom. No FRE (Forever Running Execution) in n8n-jsdom. After roughly 24 hours the picture is different: n8n at 380 MB and n8n-jsdom at 240 MB. No FRE in n8n-jsdom. After 24 more hours, the picture is completely different: n8n at 580 MB and n8n-jsdom at 240 MB. 2 FRE workflows in n8n-jsdom. More info: The n8n stack has only the standard n8n nodes plus the n8n-nodes-rss-feed-trigger community node (I have to update this, I know), and I'm using Code nodes in JavaScript and Python. In the n8n-jsdom stack only 3 workflows are activated, and all of them call the workflow that uses a Code node with the jsdom library. This is the log file from the n8n stack: n8n-mem-231214-1429.log In the log file I can see some errors and repeated queries for waiting executions, but I don't know whether this is what's causing the memory to increase. Any ideas? |
That is interesting, as I am still not seeing the same issue and there are not a lot of other reports of it; I was sure the issue was likely to be down to the problems in those other packages. I guess to dig into this further we will need to start looking at your workflows in more detail to see what is going on. What is the frequency of your workflow runs, and how many workflows are there? |
Well, in fact the n8n-jsdom stack handles memory better (it keeps memory usage low) than the n8n stack with the "unmodified" n8n Docker image. Some info about the workflows in the n8n stack:
The active workflows run with schedule trigger:
As you can see, this is a very simple scheme with 4 actively used workflows, and 2 "sub-workflows" called by another workflow that runs on a long activation interval. I think there's plenty of time to clean up and reduce the memory usage. Today I've added another sub-workflow that is called at the end of all 6 used workflows to log the memory usage, to try to find out whether this is related to a specific workflow. I'll let you know when I have more info. |
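(Not the actual sub-workflow used here, but for illustration, one way such a memory-logging step could work from inside the container, e.g. run by an Execute Command node or a cron job, is to read the cgroup accounting files. The paths below are assumptions that depend on whether the host uses cgroup v1 or v2.)

```python
# Illustrative memory-logging sketch, run inside the n8n container.
# Assumption: one of the standard cgroup memory files is mounted and readable.
from datetime import datetime, timezone
from pathlib import Path

CANDIDATES = [
    Path("/sys/fs/cgroup/memory.current"),                # cgroup v2
    Path("/sys/fs/cgroup/memory/memory.usage_in_bytes"),  # cgroup v1
]

def container_memory_mib():
    """Return the container's current memory usage in MiB, or None if unavailable."""
    for path in CANDIDATES:
        if path.exists():
            return int(path.read_text()) / 1024 / 1024
    return None

if __name__ == "__main__":
    mem = container_memory_mib()
    stamp = datetime.now(timezone.utc).isoformat()
    print(f"{stamp} memory={mem:.0f} MiB" if mem is not None else f"{stamp} memory=unknown")
```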
Memory log check: Well, this was very fast! I restarted the stack at 11:21 (2023-12-16T10:21:53.728Z) and some of the workflows ran at 12:00. Memory log data:
Between n1 and n2 there's not much difference. (I'm using "n" because I can't use the hash symbol to refer to the order in the log file.) Between n2 and n3 there are 50 MB more, but between n3 and n4 there are 90 MB more! And it keeps increasing: between n4 and n5, another 50 MB. It seems like all this memory is not released after the workflows finish. This is the n8n log: n8n-mem-231216-1229.log And this is an image of the execution that increased memory by 90 MB: All the Code nodes run Python code. There is a Google Sheets node that gets 7000+ rows, but then only 6 items are selected to work with in the rest of the workflow. More info: The n1 and n2 workflows from the memory log had no Code nodes and don't seem to increase the memory. The n3, n4 and n5 workflows had Code nodes with Python code. Could it be related to the Python implementation? Comparing with n8n-jsdom: The duplicated stack called n8n-jsdom has the same workflows, but only 3 of them are activated; those activated in n8n-jsdom are disabled in the n8n stack. In the n8n-jsdom stack the running workflows do have Code nodes, but only 2 of them use Python (only 2 Code nodes with Python). Moreover, those 3 workflows are activated by an RSS trigger rather than a schedule trigger, whereas in the n8n stack they run on a regular hourly (or 3-hourly, for 1 workflow) basis. If each Python Code node leaves some memory in use, the heavier and more frequent use in the n8n stack could explain the difference in memory usage between the two twin stacks. Don't you think? |
Hey @pablorq, It could be related to the Python implementation. We know the Code node can cause a memory increase, but typically it is freed up, as seen on my own system. I don't use the Python option, however, so that is a good find that we could maybe have discovered sooner if we had the workflows. When I am back in January I will do some testing with the Code node using Python to see if this can be reproduced. |
Hi @Joffcom, I was doing some more tests and I think this is the main issue: the Code node using Python. To test it, I've created this workflow that runs 10 Code nodes with the default Python code, the one that adds a field with a number: Memory_eater.json Memory log:
Can you reproduce this behavior? (By the way, it would be nice to have a real Python node, instead of one running on top of JavaScript.) |
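For reference, the default Python snippet in the Code node (the one duplicated ten times in Memory_eater.json) looks roughly like the sketch below; the exact boilerplate may differ between n8n versions. Each execution of it goes through Pyodide, the Python-on-WebAssembly runtime the node uses.

```python
# Approximate default "Run Once for All Items" Python code of the n8n Code node.
# The node wraps this body in a function, which is why a bare `return` is valid here.
# It loops over the incoming items and adds one numeric field to each item's JSON.
for item in _input.all():
    item.json.myNewField = 1

return _input.all()
```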
Hey @pablorq, I will give it a test. We know that using the Code node already increases memory because it creates a sandbox each time, but running your workflow I can see the increase I would expect and then the decrease in memory usage. It looks like some improvements can still be made there, as it doesn't clean up as nicely as I would have expected, but it does appear to be doing it. I will get a dev ticket created to see if we can change this. It did drop down again during the standard garbage collection process as well, so generally it is working as it should, but it just isn't ideal
Our internal ticket for this is |
Yes, I can see the clean-up, but it seems that something remains, and what remains accumulates, so the memory keeps increasing. This is a report of the memory usage over the last 4 days, where the clean-ups can be seen, but also how, little by little, the memory increases: At first the memory was about 100 MB, which is ideal. Then it starts to grow and clean up, but not by the same amount, leaving some memory "used". Then, this morning, something strange happened: at 07:00 the memory was at 770 MB, at 08:00 at 550 MB, and at 09:00 at 280 MB, all while running workflows without Python Code nodes. Then a workflow with some Python Code nodes ran and the memory went from that 280 MB to 730 MB, back on the previous memory "growth" line. I have no idea what that was. Maybe the clean-up has some kind of buffer and a new Python run brings that buffer back? I don't know. This is the memory usage log: |
Hey @pablorq, That matches what I said in my reply earlier today. Hopefully this will be picked up soon and worked on. |
Seeing this same issue here. Python code node related |
I've been investigating this memory leak issue with Python code nodes and have a fix that should fully resolve it. Based on the detailed testing from @pablorq and reports from others, I was able to replicate and address the root cause. The root cause is in how Pyodide (our Python-in-browser implementation) manages memory: When Python code executes through Pyodide, it creates WebAssembly memory allocations that aren't fully released afterward. Even when JavaScript references are cleared and Python's garbage collector runs, portions of this WASM heap remain allocated. I've implemented a Worker approach that completely resolves the issue. Here's a comparison of the memory patterns:
The Worker solution creates an isolated environment for each Python execution and fully terminates it afterwards, ensuring all WebAssembly memory is properly reclaimed. This is significantly more effective than the other approaches I tested, including manual garbage collection and removing the singleton pattern. I've submitted a PR with this fix (#13648), with a tested workflow inside. Waiting for your feedback. Interactive charts with comparison: https://n8n-memory-git-master-nikitas-projects-2b098508.vercel.app/
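The actual fix lives in n8n's Node.js code, but the isolate-and-terminate pattern it describes is easy to illustrate with a rough Python analogy (an analogy only, not the PR's implementation): run the leaky work in a short-lived child process and let the OS reclaim its entire heap when it exits, so nothing accumulates in the long-lived parent.

```python
# Rough analogy of the isolate-and-terminate pattern, using a Python subprocess
# as a stand-in for the Node.js Worker used in the actual fix. The memory the
# child allocates is returned to the OS when the child exits.
import multiprocessing as mp

def run_user_code(conn):
    leaky = list(range(10_000_000))  # simulate allocations the runtime never frees
    conn.send(sum(leaky[:10]))       # send back only the small result
    conn.close()

def execute_isolated():
    parent_conn, child_conn = mp.Pipe()
    proc = mp.Process(target=run_user_code, args=(child_conn,))
    proc.start()
    result = parent_conn.recv()
    proc.join()  # the child exits here and its heap is reclaimed by the OS
    return result

if __name__ == "__main__":
    for i in range(5):
        print(f"run {i}: result={execute_isolated()}")  # parent memory stays flat
```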
Describe the bug
The n8n process is using a lot of RAM, in this case 1.1 GB, plus 41.7 GB of virtual memory.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Low memory consumption when idle. If a lot of memory is used during a workflow run, it should be released when the run ends.
Environment (please complete the following information):
Additional context

Here are some screenshots.
Process list sorted by memory:
n8n process details:

I did a fresh install of the same Docker stack, without any workflows, and the memory consumption seems to be low. This makes me think that it could be one (or several) workflows taking a lot of memory, or that successive runs keep increasing the memory without releasing it.
In any case the memory should be freed when no workflow is running.