
NodeJS hot reload issue #261

Closed · tcastelly opened this issue Apr 20, 2022 · 143 comments
Labels: enhancement (New feature or request)

Comments

@tcastelly commented Apr 20, 2022

Kindly jump to #261 (comment) for the most recent update.


Hello,

I switched from Docker Desktop to Colima and noticed an issue with the file watcher on bind volumes. I created a reproduction repository: https://github.com/shenron/colima-hot-reload-issue

The container does not see when a file has been updated from the host, but if I touch the file from inside the container it works. I guess it's linked to the volume mount.


I'm using a MacBook Pro M1 Max with:

colima version 0.3.4
git commit: 5a4a70481ca8d1e794677f22524e3c1b79a9b4ae

runtime: docker
arch: aarch64
client: v20.10.14
server: v20.10.11
limactl version 0.9.2
@ggoodman commented May 3, 2022

The way colima binds host directories into the guest VM means that certain filesystem events are not observed / generated.

You can work around this by using --legacy-watch in nodemon, which falls back to polling instead of the fsevents integration.
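
For example (a sketch; the entrypoint path src/index.ts is hypothetical):

npx nodemon --legacy-watch src/index.ts

Many chokidar-based watchers can be forced into polling mode the same way with an environment variable, e.g. CHOKIDAR_USEPOLLING=true npm run dev.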

@tcastelly (Author)

Hello, thank you for your answer.

Legacy watch is a workaround, but unfortunately in my use case it's very slow and uses a lot of CPU. It's not ideal, but for now, when I finish editing my code I "touch" the entrypoint of the application:

docker exec -ti $CONTAINER_ID touch /usr/src/app/src/index.ts
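
One way to automate that from the host is a small watch loop, e.g. with fswatch (a hedged sketch; the watched path and $CONTAINER_ID are placeholders):

fswatch -o ./src | while read _; do
  docker exec $CONTAINER_ID touch /usr/src/app/src/index.ts
done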

@luiz290788

I'm facing the same issue with ts-node-dev. Is this an issue that will eventually be solved by colima, or will we never receive the same events when using colima?

@abiosoft (Owner)

> Is this an issue that will eventually be solved by colima, or will we never receive the same events when using colima?

I am working on a workaround taking a similar approach to #261 (comment) while leveraging inotify.

I paused due to some issues with the Go library at the time, but I will revisit it.

@matthewdickinson

>> Is this an issue that will eventually be solved by colima, or will we never receive the same events when using colima?
>
> I am working on a workaround taking a similar approach to #261 (comment) while leveraging inotify.
>
> I paused due to some issues with the Go library at the time, but I will revisit it.

It would be great to have this feature completed. Is there a work-in-progress branch that I could help continue or test?

@alikhanich

Hi, is there any progress on that? If you need some help, I can push our development team to help you. We need this so our frontend team can work using colima; Docker for Mac is not an option for us.

@abiosoft (Owner)

@djirik support for virtiofs mounts is imminent and will be part of the next release.

It has been merged into master upstream in Lima and is currently being integrated and tested in Colima; the early results have been quite impressive.

Virtiofs is the same tech used by Docker Desktop for its fast volume speeds.
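
Once on a build with virtiofs support, the active mount type can be verified from the status output (examples appear later in this thread):

colima status
# look for: mountType: virtiofs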

@alikhanich

@abiosoft thank you very much

@alikhanich

@abiosoft Hi, do you have an idea of when it will be available?

@abiosoft (Owner) commented Dec 13, 2022

@djirik within the next week or two #497 (comment).

You can actually test it now if you can build Colima from source (and brew install --HEAD lima); it's on the support-vz branch.

brew install --HEAD colima
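
Once the HEAD builds are installed, an instance using the new stack can be started with flags along these lines (a sketch; defaults may differ between versions):

colima start --vm-type vz --mount-type virtiofs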

@alikhanich commented Dec 13, 2022

@abiosoft hi again, I've successfully switched to virtiofs on macOS 13. However, nodemon still does not see updated files.

colima status
INFO[0000] colima is running using macOS Virtualization.Framework
INFO[0000] arch: aarch64
INFO[0000] runtime: docker
INFO[0000] mountType: virtiofs
INFO[0000] socket: unix:///Users/hacker/.colima/default/docker.sock

--legacy-watch for nodemon doesn't seem to help at all.
By the way, we are using the Nuxt framework; maybe it is related to that?

@abiosoft (Owner)

> @abiosoft hi again, I've successfully switched to virtiofs on macOS 13. However, nodemon still does not see updated files.

@djirik hmmm 🤔 did you reset, i.e. colima delete && colima start?

Is your project something that I can try to reproduce? I have tested with a number of development environments that provide live reload and it has worked fine.

@abiosoft (Owner) commented Dec 13, 2022

@djirik also, kindly confirm lima --version is v0.14.0?

@alikhanich

@abiosoft yes, I have reset it. Unfortunately, I cannot share the source code, as it is under NDA.

limactl version 0.14.0

I can ask the frontend team if they can make a sample project without production code.

@alikhanich

@abiosoft I have recreated colima again just to be sure. It seems to have changed things a bit: now I'm getting a lot of permission/access-rights errors. I will fix them and return with more feedback.

@alikhanich

@abiosoft File permissions are hell with virtiofs. I can't even launch a node project with volume mounts, and I cannot figure out how to set user:group in the compose file.

@abiosoft (Owner)

> @abiosoft File permissions are hell with virtiofs. I can't even launch a node project with volume mounts, and I cannot figure out how to set user:group in the compose file.

Do you have an idea of the user ID in the container? You can check by running id -u in the container.
What docker image are you using?
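
For reference, a quick way to check (a sketch; <service> is a placeholder, and node:16.17.1-alpine is the image from the compose setup below):

docker compose exec <service> id -u
docker run --rm node:16.17.1-alpine id -u   # the image default; official node images run as root (uid 0) and bundle a `node` user with uid 1000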

@alikhanich commented Dec 15, 2022

version: '3.9'
services:
  nuxt:
    build:
      dockerfile: ../docker/nuxt/Dockerfile
      context: ./source
      target: node_base
    image: ${NUXT_IMAGE_NAME:-localhost/nuxt}:${NUXT_IMAGE_TAG:-latest}
    # user: $DEV_LOCAL_UID:$DEV_LOCAL_GID # here I've tried 501:20, 1000:1000, 501:1000, 0:0
    restart: 'no'
    environment:
      FORCE_COLOR: '1'
      NPM_TOKEN: $NPM_TOKEN
    entrypoint: sh
    command:
      - -xec
      - |-
        npm run npm-auth-repo --loglevel verbose
        npm install --loglevel verbose
        npm run dev --loglevel verbose
    volumes:
      - ./source:/app
      - ./volumes/nuxt/HOME:/home/node

And the Dockerfile:

FROM node:16.17.1-alpine AS node_base
WORKDIR /app

ENV HOST 0.0.0.0

EXPOSE 3000

Every time, I get either exit code 243 from node or "permission denied" when creating files in ./volumes/nuxt/HOME.
I've also tried chmod +rwx on the HOME dir without effect.

@abiosoft (Owner) commented Dec 15, 2022

@djirik as an unrecommended stopgap, the permission denied error should go away if you run chmod -R 777 ./volumes/nuxt/HOME.

Give me a moment, let me try to reproduce your error using the node:16.17.1-alpine image.

@abiosoft (Owner)

@djirik out of curiosity, why is /home/node getting mounted?

@alikhanich

@abiosoft I've tried with 777, but it didn't have any effect. Yes, /home/node is mounted. The .npm dir is created and after that it says permission denied.

@abiosoft (Owner)

@djirik I am unable to reproduce your permission issues. You can send me an email at the email on my GitHub profile and we can schedule a session to troubleshoot this.

As for the hot reload issue, I can confirm it is not yet fixed with virtiofs. Apparently, I was testing with a dotnet project, where it works fine, and thereby assumed it catered for all other environments (including the node stack) as well.

@codingluke commented Dec 15, 2022

I can confirm that on my nodejs project, too, mounted volumes are not "touched" in the container. I encounter no permission issues though.

  • I reset colima with colima delete && colima start
  • lima --version => limactl version 0.14.1

@alikhanich commented Dec 16, 2022

@codingluke what is your node version? Can you please share your Dockerfile and docker-compose?

@alikhanich commented Dec 16, 2022

@abiosoft I have sent you an email.

@codingluke commented Dec 16, 2022

@djirik I use the latest 18.12.1. It is a docusaurus/marp project.


@alikhanich
Copy link

@codingluke I get it now: you are not mounting the sources and NODE_HOME into the container, which is why you don't have permission issues, I guess.

@codingluke commented Dec 16, 2022

Correct, and I would recommend mounting only the part which is edited locally. I don't see the need to share node_modules and co. For persistence I would use docker volumes. Of course, there might be reasons to do it differently; a sketch of this layout follows below.
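
A minimal sketch of that layout (service and volume names are hypothetical):

services:
  app:
    image: node:18-alpine
    volumes:
      - ./src:/app/src                   # bind-mount only the locally edited code
      - node_modules:/app/node_modules   # named volume: persisted inside the VM, never synced to the host
volumes:
  node_modules: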

@alikhanich

@abiosoft I have skipped mounting /home/node, as it turned out to be unnecessary, but hot reload is still not working :(.

The only working behavior is this:

  1. update the file on the host
  2. touch the file in colima ssh
  3. the refresh is triggered

Can we steal something from Docker Desktop so it would work?

abiosoft added the enhancement (New feature or request) label on Dec 20, 2022
@abiosoft (Owner) commented Apr 7, 2023

> And set up an inotify watcher with:
>
> apt-get update
> apt-get -y install inotify-tools
> cd /app
> inotifywait -m -e modify "."

@michaeldiscala you should not need to do anything extra; no additional step is needed after colima start --mount-inotify.

Can you kindly try again, then tail the daemon log and share the output?

tail -f ~/.colima/default/daemon/daemon.log

Thanks.

@michaeldiscala

@abiosoft Sure!

The log that I see is as follows:

Received signal 15
time="2023-04-10T10:58:27-07:00" level=info msg="- - - - - - - - - - - - - - -"
time="2023-04-10T10:58:27-07:00" level=info msg="daemon started by colima"
time="2023-04-10T10:58:27-07:00" level=info msg="Run `/usr/bin/pkill -F /Users/mdiscala/.colima/default/daemon/daemon.pid` to kill the daemon"
time="2023-04-10T10:58:27-07:00" level=info msg="waiting for VM to start" context=inotify
time="2023-04-10T10:58:27-07:00" level=info msg="waiting 5 secs for VM" context=inotify
time="2023-04-10T10:58:27-07:00" level=info msg="Using search domains: [hsd1.ca.comcast.net]"
time="2023-04-10T10:58:27-07:00" level=info msg="waiting for clients..."
on_accept(): vmnet_return_t VMNET_INVALID_ARGUMENT
vmnet_write: Undefined error: 0
time="2023-04-10T10:58:33-07:00" level=info msg="waiting 5 secs for VM" context=inotify
time="2023-04-10T10:58:41-07:00" level=info msg="waiting 5 secs for VM" context=inotify
time="2023-04-10T10:58:47-07:00" level=info msg="waiting 5 secs for VM" context=inotify
time="2023-04-10T10:58:52-07:00" level=info msg="VM started" context=inotify
time="2023-04-10T10:58:57-07:00" level=error msg="error listing containers: error running [lima docker ps -q], output: \"Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\", err: \"exit status 1\"" context=inotify
time="2023-04-10T10:59:48-07:00" level=info msg="syncing inotify event for /Users/mdiscala/Development/panorama/inotifytest/file1 " context=inotify

This error message stood out to me:

time="2023-04-10T10:58:57-07:00" level=error msg="error listing containers: error running [lima docker ps -q], output: \"Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\", err: \"exit status 1\"" context=inotify

When I try to run lima docker ps -q I get:

FATA[0000] instance "default" does not exist, run `limactl start default` to create a new instance

and when I do limactl list I get:

❯ limactl list
NAME      STATUS     SSH                VMTYPE    ARCH      CPUS    MEMORY    DISK     DIR
colima    Running    127.0.0.1:52385    vz        x86_64    4       12GiB     60GiB    ~/.lima/colima

colima list gives:

❯ colima list
PROFILE    STATUS     ARCH      CPUS    MEMORY    DISK     RUNTIME    ADDRESS
default    Running    x86_64    4       12GiB     60GiB    docker     192.168.107.2

I'm not sure if that error message is a red herring or not, but it seems to be happening because lima docker ps -q runs within the default instance, but mine is named colima.

Interestingly, it does show that the daemon is noticing the events; they're just not making it into my container 🤔

@abiosoft (Owner)

> I'm not sure if that error message is a red herring or not, but it seems to be happening because lima docker ps -q runs within the default instance, but mine is named colima.

The error is expected initially, until the startup is complete. Also, the context is preserved (via env vars); it is not running in the default instance, the log is simply showing the commands being used.

What kind of project are you running?

@michaeldiscala

> The error is expected initially, until the startup is complete. Also, the context is preserved (via env vars); it is not running in the default instance, the log is simply showing the commands being used.

Gotcha! Thanks for explaining.

> What kind of project are you running?

For this test, I'm just running a one-off container started with docker run --rm -it --entrypoint bash --mount type=bind,source="$(pwd)",target=/app debian:bullseye-slim and then using the inotify tools to check for messages. Is that what you mean by project?

I also checked and confirmed that I'm seeing this for both qemu + sshfs and vz + virtiofs (I was wondering if using virtiofs might be contributing).

@abiosoft (Owner)

@michaeldiscala this solution is not actually propagating inotify events from the host; it's a workaround that simulates the behaviour, which is probably why the inotify tools are not picking it up.

In fact, the goal is to provide the "hot reload" developer experience, rather than inotify events.

@michaeldiscala commented Apr 11, 2023

Ah gotcha @abiosoft, thanks for explaining! Looking through the patch itself some more, I realized that you're passing the event through by running a chmod on the file.

When I update my test to listen to all inotify events (instead of just modification events like in my original run):

apt-get update
apt-get -y install inotify-tools
cd /app
inotifywait -m  "."

I see events coming through as expected:

Setting up watches.
Watches established.
./ ATTRIB file.rb
./ ATTRIB file.rb

It's just that all of the modifications come in as ATTRIB events instead of MODIFY events. I think that should still work with our hot-reloading systems, so I will flag it for our team to investigate. Thanks so much for all the development effort there and for helping me understand what I was seeing.
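
In other words, watchers that subscribe to attrib events will see the simulated changes, e.g.:

inotifywait -m -e attrib "."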

@eilskn commented Apr 21, 2023

Hello guys, can someone help me? I got an error; colima 0.5.4 is already installed.

[screenshot of the error]

@matleh commented Apr 21, 2023

@rffensick you need the HEAD version of colima (brew install --HEAD colima).

@eilskn commented Apr 21, 2023

@matleh thank you!

@eilskn commented Apr 21, 2023

After updating and trying to start all of the containers in my project, almost all of them exit with error 139.

I can share more information if you tell me what to do.

[screenshots of the failing containers]

@eilskn commented Apr 21, 2023

This happens when I start colima with the command: colima start --profile amd64 --cpu 4 --memory 8 --mount-inotify

It works with the arch profile, though.

@jlholm commented May 1, 2023

@abiosoft #668 resolved hot reloading for me, thank you for your hard work!

@abiosoft (Owner) commented May 1, 2023

With the amount of positive feedback received so far, a release will be made soon.

Thanks.

@niklaswolf

Works on my machine, too.
Great work, looking forward to having this in the new release :)

@jklmnop commented May 18, 2023

I am using the HEAD of Colima and it still doesn't seem to be working for me. I can edit files in the container and my watchers will detect the changes, but not if I edit files in my IDE. Any assistance is appreciated!

[screenshot of the watcher output]

Here is my config:

# Number of CPUs to be allocated to the virtual machine.
# Default: 2
cpu: 4

# Size of the disk in GiB to be allocated to the virtual machine.
# NOTE: changing this has no effect after the virtual machine has been created.
# Default: 60
disk: 100

# Size of the memory in GiB to be allocated to the virtual machine.
# Default: 2
memory: 6

# Architecture of the virtual machine (x86_64, aarch64, host).
# Default: host
arch: host

# Container runtime to be used (docker, containerd).
# Default: docker
runtime: docker

# Kubernetes configuration for the virtual machine.
kubernetes:
  # Enable kubernetes.
  # Default: false
  enabled: false

  # Kubernetes version to use.
  # This needs to exactly match a k3s version https://github.com/k3s-io/k3s/releases
  # Default: latest stable release
  version: v1.25.4+k3s1

  # Disable k3s features [coredns servicelb traefik local-storage metrics-server].
  # All features are enabled unless disabled.
  #
  # EXAMPLE - disable traefik and metrics-server
  # disable: [traefik, metrics-server]
  #
  # Default: [traefik]
  disable:
    - traefik

# Auto-activate on the Host for client access.
# Setting to true does the following on startup
#  - sets as active Docker context (for Docker runtime).
#  - sets as active Kubernetes context (if Kubernetes is enabled).
# Default: true
autoActivate: true

# Network configurations for the virtual machine.
network:
  # Assign reachable IP address to the virtual machine.
  # NOTE: this is currently macOS only and ignored on Linux.
  # Default: false
  address: false

  # Custom DNS resolvers for the virtual machine.
  #
  # EXAMPLE
  # dns: [8.8.8.8, 1.1.1.1]
  #
  # Default: []
  dns:
    - 8.8.8.8

  # DNS hostnames to resolve to custom targets using the internal resolver.
  # This setting has no effect if a custom DNS resolver list is supplied above.
  # It does not configure the /etc/hosts files of any machine or container.
  # The value can be an IP address or another host.
  #
  # EXAMPLE
  # dnsHosts:
  #   example.com: 1.2.3.4
  dnsHosts: {}

  # Network driver to use (slirp, gvproxy), (requires vmType `qemu`)
  #   - slirp is the default user mode networking provided by Qemu
  #   - gvproxy is an alternative to VPNKit based on gVisor https://github.com/containers/gvisor-tap-vsock
  # Default: gvproxy
  driver: gvproxy

# ===================================================================== #
# ADVANCED CONFIGURATION
# ===================================================================== #

# Forward the host's SSH agent to the virtual machine.
# Default: false
forwardAgent: false

# Docker daemon configuration that maps directly to daemon.json.
# https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file.
# NOTE: some settings may affect Colima's ability to start docker. e.g. `hosts`.
#
# EXAMPLE - disable buildkit
# docker:
#   features:
#     buildkit: false
#
# EXAMPLE - add insecure registries
# docker:
#   insecure-registries:
#     - myregistry.com:5000
#     - host.docker.internal:5000
#
# Colima default behaviour: buildkit enabled
# Default: {}
docker: {}

# Virtual Machine type (qemu, vz)
# NOTE: this is macOS 13 only. For Linux and macOS <13.0, qemu is always used.
#
# vz is macOS virtualization framework and requires macOS 13
#
# Default: qemu
vmType: vz

# Utilise rosetta for amd64 emulation (requires m1 mac and vmType `vz`)
# Default: false
rosetta: true

# Volume mount driver for the virtual machine (virtiofs, 9p, sshfs).
#
# virtiofs is limited to macOS and vmType `vz`. It is the fastest of the options.
#
# 9p is the recommended and the most stable option for vmType `qemu`.
#
# sshfs is faster than 9p but the least reliable of the options (when there are lots
# of concurrent reads or writes).
#
# Default: virtiofs (for vz), sshfs (for qemu)
mountType: virtiofs

# Propagate inotify file events to the VM.
# NOTE: this is experimental.
mountInotify: true

# The CPU type for the virtual machine (requires vmType `qemu`).
# Options available for host emulation can be checked with: `qemu-system-$(arch) -cpu help`.
# Instructions are also supported by appending to the cpu type e.g. "qemu64,+ssse3".
# Default: host
cpuType: ""

# For a more general purpose virtual machine, Ubuntu container is optionally provided
# as a layer on the virtual machine.
# The underlying virtual machine is still accessible via `colima ssh --layer=false` or running `colima` in
# the Ubuntu session.
#
# Default: false
layer: false

# Custom provision scripts for the virtual machine.
# Provisioning scripts are executed on startup and therefore need to be idempotent.
#
# EXAMPLE - script executed as root
# provision:
#   - mode: system
#     script: apk add htop vim
#
# EXAMPLE - script executed as user
# provision:
#   - mode: user
#     script: |
#       [ -f ~/.provision ] && exit 0;
#       echo provisioning as $USER...
#       touch ~/.provision
#
# Default: []
provision: []

# Modify ~/.ssh/config automatically to include a SSH config for the virtual machine.
# SSH config will still be generated in ~/.colima/ssh_config regardless.
# Default: true
sshConfig: true

# Configure volume mounts for the virtual machine.
# Colima mounts user's home directory by default to provide a familiar
# user experience.
#
# EXAMPLE
# mounts:
#   - location: ~/secrets
#     writable: false
#   - location: ~/projects
#     writable: true
#
# Colima default behaviour: $HOME and /tmp/colima are mounted as writable.
# Default: []
mounts: []

# Environment variables for the virtual machine.
#
# EXAMPLE
# env:
#   KEY: value
#   ANOTHER_KEY: another value
#
# Default: {}
env: {}

@ploxiln commented Sep 22, 2023

I'm testing colima 0.5.5 with the experimental --mount-inotify option, and it is not quite working for my use case, because all changes appear to be mode/attribute changes. BTW, I'm testing with vm-type=vz, mount-type=virtiofs, on macOS 13.5.2 with an M1 Pro.

/ # inotifywait -m -r /mnt/app
Setting up watches.  Beware: since -r was given, this may take a while!
Watches established.
/mnt/app/example_app_drf/ ATTRIB urls.py
/mnt/app/example_app_drf/ ATTRIB new.py
/mnt/app/ ATTRIB supernew.txt
/mnt/app/ ATTRIB wsgi.py
^C

My setup (with which I currently use Docker Desktop ...) happens to use reflex, which (reasonably?) seems to ignore these attribute/mode changes, waiting for an actual file-contents change:

$ reflex --verbose -- sh -c "echo RESTART"
Globals set at commandline
| --verbose (-v) 'true' (default: 'false')
+---------
Reflex from [commandline]
| ID: 0
| Inverted regex match: "(^|/)\\.git/"
...
| (Implicitly matching all non-excluded files)
| Substitution symbol {}
| Command: [sh -c echo RESTART]
+---------

[info] fsnotify event: "./wsgi.py": CHMOD
[info] fsnotify event: "./wsgi.py": CHMOD
[info] fsnotify event: "./verynewfile.py": CHMOD
^CInterrupted (interrupt). Cleaning up children...

... I wonder if this is part of the design of how the fs events are simulated/triggered inside the VM? Is it hard to change?

(I also notice that "rm" (unlink) and "touch" (ctime) are not propagated, but that is probably OK for my use-case ...)


Here's the Docker Desktop inotify event emulation to compare:

/ # inotifywait -m -r /mnt/app
Setting up watches.  Beware: since -r was given, this may take a while!
Watches established.
/mnt/app/ CREATE verynewfile2.py
/mnt/app/ MODIFY verynewfile2.py
/mnt/app/ ATTRIB wsgi.py
/mnt/app/ MODIFY wsgi.py
/mnt/app/example_app_drf/healthcheck/ CREATE views.py
/mnt/app/example_app_drf/healthcheck/ MODIFY views.py
/mnt/app/example_app_drf/healthcheck/ CREATE views.py
/mnt/app/example_app_drf/healthcheck/ ATTRIB views.py
/mnt/app/example_app_drf/healthcheck/ MODIFY views.py
/mnt/app/ MODIFY wsgi.py

and for reflex (slightly modified command but doesn't matter):

$ reflex --verbose -- sh -c 'echo RESTART'
...

[info] fsnotify event: "./wsgi.py": WRITE
[00] RESTART
[info] fsnotify event: "./verynewfile4.py": CREATE
[info] fsnotify event: "./verynewfile4.py": WRITE
[00] RESTART
[info] fsnotify event: "example_app_drf/healthcheck/views.py": REMOVE
[info] fsnotify event: "example_app_drf/healthcheck/views.py": WRITE
[info] fsnotify event: "example_app_drf/healthcheck/views.py": CREATE
[info] fsnotify event: "example_app_drf/healthcheck/views.py": CHMOD
[info] fsnotify event: "example_app_drf/healthcheck/views.py": WRITE
[00] RESTART

Anyway, many thanks for what you've made with colima thus far, and I hope this report may help either improve or clarify the inotify feature in the future 😁

@F9Uf commented Nov 10, 2023

Hi, I have the same issue as @ploxiln. It doesn't work when I use reflex in a docker-compose command.

@chrismith-equinix

Hi @abiosoft - Is --mount-inotify expected to work with Kubernetes enabled?

@abiosoft (Owner)

> Hi @abiosoft - Is --mount-inotify expected to work with Kubernetes enabled?

@chrismith-equinix docker containers with mounted volumes are monitored, so it should work if the Kubernetes pod has a mounted volume.
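
For illustration, a pod spec whose container mounts a host directory (a hedged sketch; names and the path are hypothetical, and the path must fall under a directory Colima mounts into the VM):

apiVersion: v1
kind: Pod
metadata:
  name: dev-app
spec:
  containers:
    - name: app
      image: node:18-alpine
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: src
          mountPath: /app
  volumes:
    - name: src
      hostPath:
        path: /Users/me/projects/app   # resolves through the VM's mount of the host home directory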

@chrismith-equinix

Great, thanks @abiosoft ... We are using the containerd runtime instead of docker. Should it work with containerd images as well?

@abiosoft (Owner) commented Nov 28, 2023

> Great, thanks @abiosoft ... We are using the containerd runtime instead of docker. Should it work with containerd images as well?

Having looked at the implementation, I can see that only the default namespace is being monitored for containerd. That needs to be fixed to monitor all namespaces, including k8s.io.
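
For reference, containerd namespaces can be listed from inside the VM, e.g. via nerdctl (a sketch; the exact invocation may vary by setup):

colima ssh -- sudo nerdctl namespace ls
colima ssh -- sudo nerdctl --namespace k8s.io ps   # Kubernetes workloads live in the k8s.io namespace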

@abiosoft (Owner)

@chrismith-equinix can you kindly create another issue for that?

Thanks.

@abiosoft (Owner)

I will be closing this issue, as I think it has outlived its purpose.
Hot reload now works for most users, and inotify is now enabled by default.

Anyone is free to create new issue(s) for the remaining use cases that are yet to be covered.

Thanks to everyone who has contributed so far.

@chrismith-equinix

@abiosoft - My workloads are not currently running in the default namespace, and I can see that it doesn't seem to be working. I will test it in the default namespace first, then create an issue for this.

Thanks for looking into this and the timely response. It is appreciated.
