
Cannot stop container-management service #74

Open
sophokles73 opened this issue Apr 12, 2023 · 2 comments
Labels
bug Something isn't working

Comments


sophokles73 commented Apr 12, 2023

Describe the bug
I am running the latest 0.0.6 image for x86_64 and am unable to restart (or stop) the container-management service in order to pick up changes I made to some deployment manifest files.

To Reproduce
Steps to reproduce the behaviour:

  1. Start up the Leda image in QEMU.
  2. Log in as root.
  3. Wait for systemctl status container-management to report that the service is up and running.
  4. Restart the service using systemctl restart container-management.
  5. Wait forever ...
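While debugging, the restart step above can be guarded with a timeout so a hung stop job does not block the shell indefinitely. This is a hedged sketch: the 30-second limit and the follow-up diagnostic commands are illustrative suggestions, not part of the original report.

```shell
# Guard the restart with a timeout; `timeout` exits with status 124
# if the command does not finish within the limit.
timeout 30 systemctl restart container-management \
  || echo "restart did not complete within 30s; inspecting"

# Illustrative follow-up diagnostics: list queued systemd jobs and
# show recent log output for the unit.
systemctl list-jobs || true
journalctl -u container-management --no-pager -n 20 || true
```

The `|| true` guards keep the snippet usable in scripts running under `set -e` even when a command fails.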

Expected behaviour
The container-management service stops and restarts within a few seconds.

Leda Version (please complete the following information):

  • Version: 0.0.6 as downloaded from GitHub releases
  • Machine: qemux86_64
  • Connectivity: transparent internet access
@sophokles73 sophokles73 added the bug Something isn't working label Apr 12, 2023
@sophokles73 sophokles73 changed the title Cannot stop container-maangement service Cannot stop container-management service Apr 12, 2023
sophokles73 (Author) commented:

I killed the QEMU process and started Leda up again. The changes I had made to the manifest files were picked up, and I am now also able to shut down Leda properly.

mikehaller (Contributor) commented:

I have witnessed the same or a similar problem on a Raspberry Pi: systemctl restart container-management would hang indefinitely. In that case, the device was under heavy load and running on a possibly broken SD card (CRC errors when trying to back it up).

We were using a Kuksa.VAL Seat Service example container reconfigured with physical CAN bus access, host networking, and privileged mode. It is unclear what the root cause was, but we could no longer shut down the system properly. The Kanto container-management socket interface no longer came up (hence kantui and kanto-cm were unable to connect to the container-management daemon).

Stopping the service did not work properly either.
The file /run/container-management/netns/default was locked, but it is unclear whether this is related.
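For diagnosing this state, here is a hedged sketch of checks one might run. Only the /run/container-management/netns/default path is confirmed in this report; listing the directory rather than a specific socket file avoids assuming the socket's exact name. Sending SIGKILL to the unit is suggested purely as a last resort when the stop job hangs.

```shell
# Inspect the runtime directory the report mentions; the exact socket
# filename is not confirmed in this issue, so list the directory instead.
ls -l /run/container-management/ 2>/dev/null \
  || echo "no /run/container-management directory"

# If kanto-cm cannot connect, the daemon socket is likely down; as a
# last resort, force-kill the hung unit instead of waiting on the stop job.
systemctl kill --signal=SIGKILL container-management || true
systemctl status container-management --no-pager || true
```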
