This repository was archived by the owner on Aug 18, 2025. It is now read-only.
Closed
96 changes: 96 additions & 0 deletions guides/inotify-watch-limits.md
@@ -0,0 +1,96 @@
---
title: Inotify Watch Limit Increase
description: Learn how to increase the inotify watcher limit.
---

> **Review comment:** Suggest titling this "Increasing the inotify Watch Limit" or "Increasing the File Watcher Limit" (I think `inotify_add_watch` is typically styled in lowercase, as it's the name of a system call).
>
> Suggested change:
>
> ```diff
> -title: Inotify Watch Limit Increase
> +title: Increasing the inotify Watch Limit
> ```
>
> More generally, though, it seems we may want to have this be part of a larger page about tuning options (or, more specifically, kernel tuning options) for Coder?

When running a webpack project, you may encounter an error similar to the
following:

> **Review comment:** This is not Webpack-specific, so I would suggest making this more generic.
>
> Suggested change:
>
> ```diff
> -When running a webpack project, you may encounter an error similar to the
> +With some applications and tools, including Webpack or code-server, you may encounter an error similar to the
> ```

```text
Watchpack Error (watcher): Error: ENOSPC: System limit for number of file
watchers reached, watch '/some/path'
```

This results from a low number of [inotify
watches](https://confluence.jetbrains.com/display/IDEADEV/Inotify+Watches+Limit)
combined with high node usage, causing the log stream to fail.

> **Review comment:** Suggest adding more info about what the inotify mechanism is for, what it does, and the impact of changing the watch limit.
>
> Suggested change:
>
> ```diff
> -This results from a low number of [inotify
> +The [inotify facility](https://en.wikipedia.org/wiki/Inotify) allows programs to monitor files and directories for changes, so that they will receive an event immediately whenever a user or program modifies the file or directory. Because the inotify mechanism requires some kernel resources (memory and processor) per watched file, there is a limit on the number of registered file watchers. For large projects with many files and folders, programs may hit this limit, resulting in the above error message.
> ```
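Before changing anything, it can help to confirm the node's current limit and see which processes hold the most watches. This is a sketch using standard Linux procfs paths only; no Coder-specific tooling is assumed:

```shell
# Current per-user inotify watch limit on this node:
cat /proc/sys/fs/inotify/max_user_watches

# Rough count of inotify watches held per process, highest first.
# Each line starting with "inotify" in a process's fdinfo files
# describes one registered watch.
for p in /proc/[0-9]*; do
  n=$(cat "$p"/fdinfo/* 2>/dev/null | grep -c '^inotify')
  if [ "$n" -gt 0 ]; then
    echo "$n $(cat "$p/comm" 2>/dev/null)"
  fi
done | sort -rn | head
```

If one process dominates the count, reducing its watch registrations (for example, by excluding large directories from watching) may be preferable to raising the limit.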

## Resolution

Increase the number of file watchers. Because the setting quantifying the number
of file watchers isn't namespaced, you'll need to raise the maximum number at
the node level.

> **Review comment:** Suggested change:
>
> ```diff
> -Increase the number of file watchers. Because the setting quantifying the number
> +If you encounter the file watcher limit, there are two potential resolutions: either reduce the number of file watcher registrations, or increase the maximum file watcher limit. We recommend attempting to reduce the file watcher registrations first, because increasing the number of file watches may result in high processor utilization.
> +For code-server, you may exclude some files, such as those belonging to third-party project dependencies, by modifying the `files.watcherExclude` setting. Please see the user guides for the software registering the watches to learn more.
> ```
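On a single machine, or when testing on one node directly, the same setting can be raised with `sysctl`. This is a sketch, assuming root access and a distribution that reads drop-in files from `/etc/sysctl.d`; the value 524288 matches the one used in the DaemonSet below:

```shell
# Raise the limit for the running kernel (takes effect immediately,
# but does not survive a reboot). Requires root.
sysctl -w fs.inotify.max_user_watches=524288

# Persist the setting across reboots via a sysctl drop-in file:
echo 'fs.inotify.max_user_watches=524288' | tee /etc/sysctl.d/99-inotify.conf
sysctl --system
```

In a Kubernetes cluster, however, running this by hand on every node does not scale, which is where the DaemonSet approach comes in.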

One way to do this is to use a daemonset in the cluster with a privileged
container to set the maximum number of file watchers:

> **Review comment:** DaemonSet is a proper noun, so it should be capitalized, and we should stylize it as indicated in the official Kubernetes documentation.
>
> Suggested change:
>
> ```diff
> -One way to do this is to use a daemonset in the cluster with a privileged
> +One way to do this is to use a DaemonSet in the cluster with a privileged
> ```

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: more-fs-watchers
namespace: kube-system
labels:
app: more-fs-watchers
k8s-app: more-fs-watchers
spec:
selector:
matchLabels:
k8s-app: more-fs-watchers
template:
metadata:
labels:
name: more-fs-watchers
k8s-app: more-fs-watchers
spec:
hostNetwork: true
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

@scsmithr Do these really need to share host namespaces? I don't believe so, since sysfs is being mounted read-write. We shouldn't recommend things that don't meet Kubernetes' security recommendations in our docs

Especially since this is privileged and running as a DaemonSet, we should try to restrict things as much as possible: https://kubernetes.io/docs/concepts/security/pod-security-standards/

hostPID: true
hostIPC: true
initContainers:
- command:
- sh
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

@scsmithr How come this uses sh -c instead of directly running the sysctl command?

Copy link

@jawnsy jawnsy Mar 19, 2021

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I tested this and it works - there's only so much we can do since we need a privileged container to access /proc/sys, but I at least tried to cut down on the host namespaces we were sharing into the DaemonSet pods:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: more-fs-watchers
  namespace: kube-system
  labels:
    app: more-fs-watchers
    k8s-app: more-fs-watchers
spec:
  selector:
    matchLabels:
      k8s-app: more-fs-watchers
  template:
    metadata:
      labels:
        name: more-fs-watchers
        k8s-app: more-fs-watchers
      annotations:
        seccomp.security.alpha.kubernetes.io/defaultProfileName: runtime/default
        apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      initContainers:
        - name: sysctl
          image: alpine:3
          command:
            - sysctl
            - -w
            - fs.inotify.max_user_watches=16384
          resources:
            requests:
              cpu: 10m
              memory: 1Mi
            limits:
              cpu: 100m
              memory: 5Mi
          securityContext:
            # We need to run as root in a privileged container to modify
            # /proc/sys on the host (for sysctl)
            runAsUser: 0
            privileged: true
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL
      containers:
        - name: pause
          image: k8s.gcr.io/pause:3.5
          command:
            - /pause
          resources:
            requests:
              cpu: 10m
              memory: 1Mi
            limits:
              cpu: 100m
              memory: 5Mi
          securityContext:
            runAsNonRoot: true
            runAsUser: 65535
            allowPrivilegeEscalation: false
            privileged: false
            readOnlyRootFilesystem: true
            capabilities:
              drop:
                - ALL
      terminationGracePeriodSeconds: 5

- -c
- sysctl -w fs.inotify.max_user_watches=524288;
image: alpine:3.6
imagePullPolicy: IfNotPresent
name: sysctl
resources: {}
securityContext:
privileged: true
volumeMounts:
- name: sys
mountPath: /sys
containers:
- resources:
requests:
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

@scsmithr we should set requests/limits too. The pod should be set with a read-only rootfs for security. And I suggest we use the sleep command instead of tail -f?

I am actually not sure if we need privileges other than the ability to write to sysfs? We should check this...

cpu: 0.01
image: alpine:3.6
name: sleepforever
command: ["tail"]
args: ["-f", "/dev/null"]
volumes:
- name: sys
hostPath:
path: /sys
```
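Separately, as one of the review comments suggests, you can often avoid hitting the limit entirely by excluding bulky directories from being watched. For code-server and VS Code, this is the `files.watcherExclude` setting; a sketch for `settings.json` (the excluded paths are examples, adjust for your project):

```json
{
  "files.watcherExclude": {
    "**/.git/objects/**": true,
    "**/node_modules/**": true,
    "**/dist/**": true
  }
}
```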

## Notes

- DaemonSets without node selectors will persist on the cluster and will run on
  *every* node. Every new node that you spin up will also have an associated
  DaemonSet pod spun up to ensure that the sysctl command runs on every node.
- You can **delete** the DaemonSet by running:

  ```console
  kubectl delete daemonset -n kube-system more-fs-watchers
  ```

  This command removes all related DaemonSet pods, and no further pods will be
  spun up.

## Helpful Resources

> **Review comment:** Suggested change:
>
> ```diff
> -## Helpful Resources
> +## See Also
> ```
>
> We can't guarantee these will be helpful 😉
- [Setting Sysctls for a
  Pod](https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/#setting-sysctls-for-a-pod)
- [Increasing the Amount of `inotify`
  Watchers](https://github.com/guard/listen/blob/master/README.md#increasing-the-amount-of-inotify-watchers)

> **Review comment on the "Setting Sysctls for a Pod" link:** I don't think this actually helps in this case, because these sysctls are not namespaced and therefore not on the allow list.
1 change: 1 addition & 0 deletions manifest.json
@@ -120,6 +120,7 @@
{ "path": "./guides/custom-env.md" },
{ "path": "./guides/gitconfig.md" },
{ "path": "./guides/helm-charts.md" },
{ "path": "./guides/inotify-watch-limits.md" },
{ "path": "./guides/macos-keybinding.md" },
{ "path": "./guides/mobile-development.md" },
{ "path": "./guides/nightly-releases.md" },