
ERROR: Pyodide already fatally failed and can no longer be used. #8909

Closed
rajubish opened this issue Mar 16, 2024 · 13 comments
@rajubish

Bug Description

Getting the error below during workflow execution.

Error: Pyodide already fatally failed and can no longer be used.
at Object.get [as runPythonAsync] (/usr/lib/node_modules/n8n/node_modules/pyodide/pyodide.asm.js:9:103459)
at PythonSandbox.runCodeInPython (/usr/lib/node_modules/n8n/node_modules/n8n-nodes-base/nodes/Code/PythonSandbox.ts:62:18)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at PythonSandbox.runCodeAllItems (/usr/lib/node_modules/n8n/node_modules/n8n-nodes-base/nodes/Code/PythonSandbox.ts:47:27)
at Object.execute (/usr/lib/node_modules/n8n/node_modules/n8n-nodes-base/nodes/Code/Code.node.ts:135:14)
at Workflow.runNode (/usr/lib/node_modules/n8n/node_modules/n8n-workflow/src/Workflow.ts:1332:8)
at /usr/lib/node_modules/n8n/node_modules/n8n-core/src/WorkflowExecute.ts:1046:29
at /usr/lib/node_modules/n8n/node_modules/n8n-core/src/WorkflowExecute.ts:1722:11

To Reproduce

  1. Create the workflow
  2. Execute it
  3. The error appears

Expected behavior

The error occurs when the workflow is executed automatically (scheduled execution).

Operating System

Ubuntu 20

n8n Version

1.27.2

Node.js Version

v20.10.0

Database

PostgreSQL

Execution mode

main (default)

@Joffcom
Member

Joffcom commented Mar 16, 2024

Hey @rajubish,

Can you share the Python you are trying to run?

@rajubish
Author

rajubish commented Mar 17, 2024

Hi @Joffcom

Below is the Python code executed through the n8n workflow.

import json

tenant_data = []
for item in _input.all():
    tenant_ids = item.json["tenant_ids"]
    print(type(tenant_ids))
    if tenant_ids:
        for tenantid, val in tenant_ids.items():
            tenant_data.append({"tenant_id": tenantid})
    print(tenant_data)
    print(type(tenant_data))
return tenant_data
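
For reference, here is a minimal sketch of the same loop that first converts a possible Pyodide JsProxy into a plain Python dict, the way the workflow's Tenant_Companies node does with pyodide.ffi and .to_py(); whether tenant_ids actually arrives as a JsProxy here is an assumption, not something confirmed in this thread.

import pyodide.ffi  # ships with the Pyodide runtime used by the Python Code node

tenant_data = []
for item in _input.all():  # _input is the n8n Code node's input helper
    tenant_ids = item.json["tenant_ids"]
    # Nested JS objects may arrive as a JsProxy; convert to a real dict before iterating
    if isinstance(tenant_ids, pyodide.ffi.JsProxy):
        tenant_ids = tenant_ids.to_py()
    if tenant_ids:
        for tenant_id in tenant_ids:  # iterate the dict keys
            tenant_data.append({"tenant_id": tenant_id})
return tenant_data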

@janober
Member

janober commented Mar 17, 2024

To avoid duplication of work: this question was also posted to the forum:
https://community.n8n.io/t/workflow-getting-error-on-execution/42705

Please do not post twice in the future! It just causes unnecessary work for us. Thanks.

@rajubish
Author

Understood. Could you please help resolve this error?

@rajubish
Author

Any update?

@Joffcom
Member

Joffcom commented Mar 18, 2024

Hey @rajubish,

Typically we only work Monday to Friday during Berlin office hours, but occasionally we will post in our own time outside of those hours. In this case I have only just reached a stage where I am ready to look into this.

Just running the Python has not reproduced the same issue. Can you share the JSON input data as well?

@rajubish
Author

Hi @Joffcom

Thanks for your response.
Here is the JSON input data.
[ { "tenant_ids": { "241996091870937089": "1", "243599992579686401": "1", "255284649314484225": "1" } } ]

Below is the workflow JSON.
{ "name": "Notification Schedule", "nodes": [ { "parameters": { "rule": { "interval": [ { "field": "minutes", "minutesInterval": 2 } ] } }, "id": "6d89c467-5782-4078-a5a1-59346a48de4c", "name": "Schedule Trigger", "type": "n8n-nodes-base.scheduleTrigger", "typeVersion": 1.1, "position": [ 260, 460 ] }, { "parameters": { "batchSize": "=1", "options": {} }, "id": "a4afad13-94cf-433b-bc74-e8111d8a416c", "name": "Loop Over Items", "type": "n8n-nodes-base.splitInBatches", "typeVersion": 3, "position": [ 900, 460 ] }, { "parameters": {}, "id": "501c626a-6942-4faf-b07e-985e5e1535eb", "name": "Replace Me", "type": "n8n-nodes-base.noOp", "typeVersion": 1, "position": [ 1160, 460 ] }, { "parameters": { "operation": "get", "propertyName": "tenant_ids", "key": "notification_partners", "keyType": "hash", "options": {} }, "id": "46631f9f-a119-4323-9a80-bc45b5fa11de", "name": "Redis", "type": "n8n-nodes-base.redis", "typeVersion": 1, "position": [ 480, 460 ], "credentials": { "redis": { "id": "UM5oeJgpCEmuArcc", "name": "Redis account" } } }, { "parameters": { "method": "POST", "url": "http://10.51.112.8:5678/webhook/2960855a-ff6d-48cb-b2d9-77faf5c78523", "sendHeaders": true, "headerParameters": { "parameters": [ { "name": "X-TENANT-ID", "value": "={{ $('Tenant').item.json[\"tenant_id\"] }}" } ] }, "sendBody": true, "specifyBody": "json", "jsonBody": "={ \"company_ids\": \"{{ $json[\"companies\"] }}\" }", "options": { "timeout": 10000 } }, "id": "f2199032-304f-4e9b-bc20-09c8db0426dc", "name": "HTTP Request", "type": "n8n-nodes-base.httpRequest", "typeVersion": 4.1, "position": [ 2020, 460 ] }, { "parameters": { "operation": "get", "propertyName": "company_ids", "key": "=notification_partners_{{ $json[\"tenant_id\"] }}", "keyType": "hash", "options": {} }, "id": "857d98bb-ae0d-49bb-9fd7-31241e367739", "name": "Redis1", "type": "n8n-nodes-base.redis", "typeVersion": 1, "position": [ 1580, 460 ], "credentials": { "redis": { "id": "UM5oeJgpCEmuArcc", "name": "Redis account" } } }, { "parameters": { "language": "python", "pythonCode": "import json\n\ntenant_id = []\nfor item in _input.all():\n print(\" item\", item.json[\"tenant_id\"])\n tenant_id = item.json[\"tenant_id\"]\n\nprint(\" tenant_id \", tenant_id)\nreturn {\"tenant_id\": tenant_id}" }, "id": "bb20f97d-e715-471b-97ca-f86613175350", "name": "Tenant", "type": "n8n-nodes-base.code", "typeVersion": 2, "position": [ 1360, 460 ] }, { "parameters": { "language": "python", "pythonCode": "import json\n\ntenant_data = []\nfor item in _input.all():\n tenant_ids = item.json[\"tenant_ids\"]\n if tenant_ids:\n for tenantid, val in tenant_ids.items():\n tenant_data.append({\"tenant_id\": tenantid})\nreturn tenant_data" }, "id": "0acc523e-2ca0-46ce-bb2d-f97a98837fe2", "name": "Tenants", "type": "n8n-nodes-base.code", "typeVersion": 2, "position": [ 680, 460 ] }, { "parameters": { "language": "python", "pythonCode": "import json\nimport pyodide.ffi\n\ncompanies = []\nfor item in _input.all():\n company_ids = item.json[\"company_ids\"]\n if company_ids and isinstance(company_ids, pyodide.ffi.JsProxy):\n company_ids = company_ids.to_py()\n print(type(company_ids))\n if company_ids:\n for companyid, val in company_ids.items():\n companies.append(companyid)\n print(companies)\n print(type(companies))\nif companies:\n return {\"companies\": companies}\nelse:\n return []" }, "id": "c4eccbe2-527e-416a-9bd3-c5be2bed6075", "name": "Tenant_Companies", "type": "n8n-nodes-base.code", "typeVersion": 2, "position": [ 1780, 460 ] } ], "pinData": {}, "connections": { "Schedule 
Trigger": { "main": [ [ { "node": "Redis", "type": "main", "index": 0 } ] ] }, "Loop Over Items": { "main": [ [], [ { "node": "Replace Me", "type": "main", "index": 0 } ] ] }, "Replace Me": { "main": [ [ { "node": "Loop Over Items", "type": "main", "index": 0 }, { "node": "Tenant", "type": "main", "index": 0 } ] ] }, "Redis": { "main": [ [ { "node": "Tenants", "type": "main", "index": 0 } ] ] }, "Redis1": { "main": [ [ { "node": "Tenant_Companies", "type": "main", "index": 0 } ] ] }, "Tenant": { "main": [ [ { "node": "Redis1", "type": "main", "index": 0 } ] ] }, "Tenants": { "main": [ [ { "node": "Loop Over Items", "type": "main", "index": 0 } ] ] }, "Tenant_Companies": { "main": [ [ { "node": "HTTP Request", "type": "main", "index": 0 } ] ] } }, "active": true, "settings": { "executionOrder": "v1" }, "versionId": "b8b68756-8149-4074-a011-1b586588894a", "meta": { "instanceId": "502c49c4a56b1c43a539c577bb99b2be3ca5acbe45ccd229ff14c53ab891b275" }, "id": "UtMRYFmttChIW7aO", "tags": [] }

@Joffcom
Member

Joffcom commented Mar 18, 2024

Hey @rajubish,

This appears to be working as expected for me.


Running your workflow up to the second Redis node is also OK. Can you provide a self-contained workflow that reproduces this issue? Also, are you using the Docker image, running from npm, or doing something else?

@rajubish
Author

rajubish commented Mar 18, 2024

Hi @Joffcom

If I run it manually it works fine without any issue, but if it's triggered from the scheduler then it fails.


Thanks

@rajubish
Author

@Joffcom
I am running n8n using npm on an Ubuntu machine.

@Joffcom
Member

Joffcom commented Mar 19, 2024

Hey @rajubish,

Running it on a schedule doesn't appear to be causing an issue for me either. Can you try running n8n in a container, as recommended, and see if that has the same issue?
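
For reference, a minimal sketch of running n8n in a container, roughly following the n8n Docker quickstart (the volume name n8n_data is just an example):

docker volume create n8n_data
docker run -it --rm --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n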

@rajubish
Author

Thanks @Joffcom
As per your recommendation, I will deploy n8n in a container and will update you.

@Joffcom
Member

Joffcom commented May 15, 2024

Moving this to closed for now, as we are not able to reproduce it, so I suspect it is a local issue.

Joffcom closed this as not planned on May 15, 2024.