
Base Repo OOMs on Fly.io #170

Closed

graysonhicks opened this issue Apr 1, 2023 · 7 comments · Fixed by #183

Comments

@graysonhicks

Have you experienced this bug with the latest version of the template?

yes

Steps to Reproduce

Clone the repo and follow the README all the way through the Fly.io deployment steps.

For the Postgres create step, choose the Development - Single node, 1x shared CPU, 256MB RAM, 1GB disk configuration.

Link the DB to the app, etc.

When finally committing and pushing, the Deploy action fails with exit code 137.

[screenshot: Deploy action failing with exit code 137]

Expected Behavior

I would expect the base Blues Stack to be able to deploy on the Fly.io free tier, especially as it is heavily documented and encouraged in the docs.

Actual Behavior

Deploy fails with an out-of-memory error.

@graysonhicks
Author

I have attempted a paid plan, but without running machines I can't rescale the apps. I think starting over completely and choosing a higher configuration (multiple CPUs, 4GB RAM) may solve it, but if that's the case it should be documented, and I'm not sure how that would affect pricing on Fly.io.

@smucode

smucode commented Apr 13, 2023

I had the same issue; I managed to fix it by running fly scale memory 512.

https://fly.io/docs/apps/scale-machine/
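For example (a minimal sketch; my-blues-stack is a placeholder for your actual Fly app name):

  fly scale memory 512 -a my-blues-stack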

@graysonhicks
Author

Yeah, unfortunately mine was in a broken state where the app never successfully deployed, so I couldn't scale the memory for it and it was stuck at 256. The only solution was to delete the Fly app and start over, choosing the 512 MB memory option to begin with. I think that is okay, but it should be documented.

@kinggoesgaming

kinggoesgaming commented Apr 27, 2023

Stuck on this as well...

Is the memory update applicable to the DB machine or the website machines (staging/production)?

My assumption is the DB, but I want to confirm before I blow something up.

EDIT: figured it out, it's for the staging/prod website machines.
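For reference, a sketch of scaling both (the app names below are placeholders; the template's README typically has you create a production app plus a matching -staging app):

  fly scale memory 512 -a my-blues-stack
  fly scale memory 512 -a my-blues-stack-staging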

@kwigley
Contributor

kwigley commented May 2, 2023

I've had success setting the following in fly.toml:

[deploy]
  release_command = "bash ./scripts/migrate.sh"

and creating a script that looks like this:

#!/bin/bash

# Allocate a 512 MB swapfile so the 256 MB machine has enough headroom
# for `prisma migrate deploy` to finish.
fallocate -l 512M /swapfile
chmod 0600 /swapfile
mkswap /swapfile
echo 10 > /proc/sys/vm/swappiness
swapon /swapfile
echo 1 > /proc/sys/vm/overcommit_memory

# Run the migration with swap available.
npx prisma migrate deploy

related: https://community.fly.io/t/prisma-sqlite-causes-an-out-of-memory-error-on-deploy/11039
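A variation on the same idea, in case your image boots through a start script that runs the migration before the server: the swap setup can live there instead of in a separate release command. This is a sketch only; the script below and the npm run start line are assumptions about your setup, not the template's actual files:

  #!/bin/sh
  # Hypothetical start script: set up swap, run the migration, then start the app.
  set -ex

  # Same swap trick as above, so `prisma migrate deploy` survives on 256 MB.
  fallocate -l 512M /swapfile
  chmod 0600 /swapfile
  mkswap /swapfile
  echo 10 > /proc/sys/vm/swappiness
  swapon /swapfile
  echo 1 > /proc/sys/vm/overcommit_memory

  npx prisma migrate deploy
  npm run start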

@mcansh
Contributor

mcansh commented May 3, 2023

@kwigley if you want to open a PR with those changes, I would gladly merge it :)

@seve

seve commented Apr 15, 2024

hmm, I'm seeing OOM again on a fresh app:

<--- Last few GCs --->
  [324:0x67f6850]     8687 ms: Mark-sweep (reduce) 252.9 (258.2) -> 252.7 (258.9) MB, 108.9 / 0.0 ms  (+ 47.4 ms in 15 steps since start of marking, biggest step 14.1 ms, walltime since start of marking 169 ms) (average mu = 0.555, current mu = 0.284) allocation failure; scavenge might not succeed
  [324:0x67f6850]     8889 ms: Mark-sweep (reduce) 253.7 (258.9) -> 253.7 (259.9) MB, 197.1 / 0.0 ms  (average mu = 0.335, current mu = 0.026) allocation failure; scavenge might not succeed
  <--- JS stacktrace --->
  FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
   1: 0xb9a330 node::Abort() [npm exec prisma migrate deploy]
   2: 0xaa07ee  [npm exec prisma migrate deploy]
   3: 0xd71ed0 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [npm exec prisma migrate deploy]
   4: 0xd72277 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [npm exec prisma migrate deploy]
   5: 0xf4f635  [npm exec prisma migrate deploy]
   6: 0xf61b0d v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [npm exec prisma migrate deploy]
   7: 0xf3c1fe v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [npm exec prisma migrate deploy]
   8: 0xf3d5c7 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [npm exec prisma migrate deploy]
   9: 0xf1db40 v8::internal::Factory::AllocateRaw(int, v8::internal::AllocationType, v8::internal::AllocationAlignment) [npm exec prisma migrate deploy]
  10: 0xf155b4 v8::internal::FactoryBase<v8::internal::Factory>::AllocateRawWithImmortalMap(int, v8::internal::AllocationType, v8::internal::Map, v8::internal::AllocationAlignment) [npm exec prisma migrate deploy]
  11: 0xf17868 v8::internal::FactoryBase<v8::internal::Factory>::NewRawOneByteString(int, v8::internal::AllocationType) [npm exec prisma migrate deploy]
  12: 0x1057999 v8::internal::JsonParser<unsigned short>::MakeString(v8::internal::JsonString const&, v8::internal::Handle<v8::internal::String>) [npm exec prisma migrate deploy]
  13: 0x10596b6 v8::internal::JsonParser<unsigned short>::ParseJsonValue() [npm exec prisma migrate deploy]
  14: 0x105a19f v8::internal::JsonParser<unsigned short>::ParseJson() [npm exec prisma migrate deploy]
  15: 0xdf7983 v8::internal::Builtin_JsonParse(int, unsigned long*, v8::internal::Isolate*) [npm exec prisma migrate deploy]
  16: 0x1710839  [npm exec prisma migrate deploy]
  ./scripts/migrate.sh: line 9:   324 Aborted                 npx prisma migrate deploy
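If swap alone isn't enough on a fresh 256 MB machine, one thing to try (the values below are illustrative and assume the current Machines-style fly.toml schema) is pinning the VM size in fly.toml so new machines come up with more memory:

  [[vm]]
    memory = "512mb"
    cpu_kind = "shared"
    cpus = 1

or re-running fly scale memory 512 against the affected app as described earlier in the thread.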
