Kubernetes - exit code 139 (no log). #830
Comments
I believe there is an issue with alpine (musl) + openssl at the moment (see #676) which affects Octane. If this is the same issue, please close this one. It appears the workaround is to use Debian-based images instead.
We have the same problem with the Debian image, @withinboredom.
Ok, 139 usually means it was killed by the kernel (SIGSEGV, over memory limits, etc.), so I'd check that the memory limits are sane or not applicable.
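A quick way to confirm what the 139 encodes, as a sketch (my-app-pod is a hypothetical pod name; adjust to your deployment):

    # 139 = 128 + 11, i.e. the container died with signal 11 (SIGSEGV)
    kubectl get pod my-app-pod \
      -o jsonpath='{.status.containerStatuses[0].lastState.terminated.exitCode}'

    # check what requests/limits are actually applied to the container
    kubectl get pod my-app-pod \
      -o jsonpath='{.spec.containers[0].resources}'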
We removed any request/limit from the pod. I can put the manifest here if needed, it's no problem:

I can try again to test Debian instead of Alpine; maybe third time's the charm xD
Also check that you have enough memory on the host. There should be something in dmesg if the kernel killed it.
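On the host side, a hedged sketch of where to look (run on the Kubernetes node itself, not in the pod):

    # kernel ring buffer: segfaults and OOM kills show up here
    dmesg -T | grep -i -E 'segfault|oom|killed process'

    # on systemd hosts, kernel messages are also in the journal
    journalctl -k | grep -i -E 'segfault|oom'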
This is the max I can get from the kubelet about this pod: that's why we are so confused, we can't get anything from anywhere.
Restarted with the Debian version; this is the last log I get:

but it still dies with error 139.
You're going to need to look at kernel-level logs, which live on the host and have nothing to do with Kubernetes. That's the last bet to figure out what is happening. Looking at your deployment though, I do see something that might be the problem. Try adding this to your container spec:

    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        add:
          - NET_BIND_SERVICE   # needed to bind ports below 1024
        drop:
          - ALL
The Docker images have the cap built in: https://github.com/dunglas/frankenphp/blob/main/Dockerfile#L93 and they need the cap at runtime, especially if running as non-root.
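A way to verify the capability survived into your image (path taken from the linked Dockerfile; output format varies by libcap version):

    # inside the container
    getcap /usr/local/bin/frankenphp
    # expect a line mentioning cap_net_bind_service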
At a certain point I had this configured following that, but I probably lost that piece of configuration when I reverted lots of things. In the meantime I'm searching the AWS EC2 ARM node for the kernel logs, because I can't find them.
Interesting :) are you using a build cache? It should be insta-build if the code didn't change. @dunglas just did a pretty epic talk about some of that: https://dunglas.dev/2024/05/containerization-tips-and-tricks-for-php-apps/ if you are using GitHub Actions.
Thanks @withinboredom, dmesg doesn't have anything about the memory problem; maybe I'm looking in the wrong place.

About the CI you're right, I forgot I had already run that; it took 40s xD
You could look in the host's system journal as well. I also see that you're using regular PHP to start up artisan... dunno if that has any effect. Maybe change your entrypoint:
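The original snippet isn't shown here; as a hedged sketch only, an entrypoint that hands the process straight to Octane (the port and --admin-port value are illustrative, not taken from the reporter's setup) could look like:

    #!/bin/sh
    set -e

    # exec makes Octane PID 1 so it receives signals directly,
    # instead of being wrapped by a shell
    exec php artisan octane:start \
      --server=frankenphp \
      --host=0.0.0.0 \
      --port=8000 \
      --admin-port=2019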
If I'm reading right though, it works locally but not in Kubernetes? You also mentioned that the k8s machines are ARM? I wonder if there is something architecture-specific going on there...
Locally it works without problems (Mac M1), but on Kubernetes with ARM nodes it doesn't. :/
You could also set up a debug deployment following https://mercure.rocks/docs/hub/debug, which also works for FrankenPHP. You can also use gdb to grab a stack trace. Sadly, nothing is standing out to me about your setup and I'm out of "is it plugged in" type debugging. Time to get our hands dirty, so to speak.
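A hedged sketch of grabbing a backtrace with gdb (the Caddyfile path matches the official image's default at the time; adjust if yours differs):

    # inside the container (install gdb first, e.g. apt-get install -y gdb)
    gdb --args frankenphp run --config /etc/caddy/Caddyfile
    # at the (gdb) prompt:
    #   run     # start the server
    #   bt      # print the backtrace after the segfault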
That was pretty easy; we can do that at runtime if we stop Argo :D Where would you suggest starting first?
I would start by running your entrypoint manually. Maybe you'll get more information. It's mostly standard debugging from this point, though the tools might be unfamiliar (if coming from a PHP-only background), or you may be a bit rusty if you haven't run gdb in a while.
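One way to do that, sketched with hypothetical names (my-app, /entrypoint.sh): keep the pod alive with a no-op command, then run the entrypoint by hand.

    # make the container sleep instead of starting the app
    kubectl patch deployment my-app --type='json' -p='[
      {"op": "add", "path": "/spec/template/spec/containers/0/command",
       "value": ["sleep", "infinity"]}
    ]'

    # get a shell in the new pod and run the entrypoint manually
    kubectl exec -it deploy/my-app -- sh
    # then, inside: sh /entrypoint.sh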
ok, finally something I can work with @withinboredom

I should create this folder in the Dockerfile since it's missing, but having added that folder manually now, nothing changes compared to the usual behaviour.
If you want, you can shoot me an email (landers dot robert at gmail) and maybe we can hop on a call? Might be faster. I have some free time this afternoon (around 4:30 pm CEDT or later).
I've enabled an emptyDir for /tmp since it was missing at the moment.
We are getting somewhere now:

With a brutal 777 we are now getting:
If you want to start Caddy as the www-data user, you need to make Caddy's /data and /config directories writable by that user. The reason you are seeing the last error is probably because there is no writable location for Caddy's state.
Thanks @AlliBalliBaba. About Caddy we have this step: but nothing changed; we are kinda burned out xD
Have you looked at that php file to determine where it is trying to write the PID file?
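A hedged way to hunt for that path without reading every file (assuming Octane is the thing writing the PID; the vendor path is the standard Composer location):

    # run from the application root inside the container
    grep -rn "pid" vendor/laravel/octane/src/ | head -n 20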
I can't understand where it is trying to write.

Do you have any idea?
If you're in the container, just slap a var_dump or logging statement (or throw an exception) in the file to dump the path.
I think we have made some actual progress:

Those are now the only logs we get before the 139.
Running the software with gdb apparently doesn't cause a crash (on Debian), but it also won't progress or give any error.
We made some progress: the pod is now working; the problem was this opcache part (a way to double-check opcache settings is sketched below). Now we are fighting Caddy/the webserver, because instead of serving app/public it goes to www, and for obvious reasons the app doesn't work; but this config in Caddy seems to get ignored:
@withinboredom do you have any idea why it's ignoring this even though it's in the right position?
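On the opcache remark above: a minimal sketch for comparing the effective opcache configuration between the working local image and the failing pod (nothing here is taken from the reporter's setup):

    # inside each container; compare the two outputs
    php -i | grep -i opcache | head -n 30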
Does your entrypoint tell it to load the Caddyfile?
We edited the entrypoint and now it has the correct entry file, yes, but nonetheless the default answer from the URL is /www; even if I do a port-forward I get localhost/www.
Ah, that's weird, but that smells like an app configuration issue. You could always stick a debug statement in there.
If I manually put localhost/index.php in the browser it works correctly; that's why I thought it was a Caddy problem that I can't understand, or something going on under the hood. :/
I guess I don't understand the problem :) Are you being redirected to /www?
Exactly.
What is the status code? Could it be cached?
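A cache-free way to see the actual status code, as a sketch (port 8000 is whatever you port-forward to):

    # -I does a HEAD request and prints only the response headers
    curl -sI http://localhost:8000/ | head -n 10
    # a 301/302 here means the server itself is issuing the redirect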
At the moment there is no cache enabled; since we delete the pod and recreate it, the first call goes directly to this:
I meant: is the redirect cached in your browser? The browser caches permanent redirects (301) forever.
Oh sorry, I misunderstood that. The answer is no: the app is not starting correctly because nginx can't find the app and returns a 503. Also, our colleagues get the same error when they try to port-forward.
Yeah, I don't see anything that stands out in your Caddyfile. Could nginx be the one doing the redirect?
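One way to isolate that, sketched with hypothetical names: talk to the pod directly so nginx never sees the request.

    # terminal 1: forward straight to the pod, bypassing the ingress
    kubectl port-forward pod/my-app-pod 8000:8000

    # terminal 2: compare this response with the one through nginx
    curl -sI http://localhost:8000/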
The only thing I can think of is that maybe you have a redirect configured somewhere in the app.
You might need something like this in your Caddyfile:

but I can't remember Laravel specifics off the top of my head.
ok, this I honestly don't understand: why would I need this if it's not mentioned in the FrankenPHP documentation? Is it because of Octane? But looking at the Octane docs for FrankenPHP, it doesn't state anywhere to use this component/piece.
You probably don't need it. I've needed it when testing various dev cases... but if it isn't documented, then you likely don't need it :) Sorry to confuse you.
This is the Octane Caddyfile:

Technically it's correct too, so now I have a real doubt:
Just an "is it plugged in" kind of question: but did you verify the caddyfile in the container is the correct one? |
If I don't say which Caddyfile to use, it should use the Octane one from the FrankenPHP default configuration; or at least that's my understanding at the moment.
ok, I got the app working.

but instead it needs to have:

With this edit the app is working. I mean, I have a blocked-mixed-content error, but I can fight that; it's not the whole app anymore :D
What happened?
Hello, we tried to switch from a normal laravel/fpm configuration to a frankenphp/octane one. The first step was configuring our local environment to work, and that was almost flawless:
and our entrypoint configuration is pretty simple to be honest:
and all this config works correctly; locally I can test the software and everything works fine (I love this).
But oh boy, when we tried to configure our staging environment (same image as production, you will see that in the Dockerfile), everything went crazy, from Caddy to Octane to PHP.
We use a rootless environment with www-data as our main user; the Caddyfile is pretty simple too:
and to complete everything, the entrypoint for production, which is also pretty standard/straightforward:
I don't like having to force --admin-port, but I can't make the command work without it, and the debug log level was needed to try to understand something of what was happening, to be honest.

We can't make this work on Kubernetes with a barebones deployment, and we don't understand what we are doing wrong, since the only error we get is a 139 (we can see that from kubectl describe pod).

By the look of this, is there something wrong we are doing that I'm missing?
Thank you for your time.
Build Type
Docker (Alpine)
Worker Mode
No
Operating System
GNU/Linux
CPU Architecture
aarch64
PHP configuration
PHPINFO from the local environment (same Dockerfile).
Relevant log output
No response