Ubuntu 14.04 js-error in the app #2082

Closed
skomaroh opened this Issue May 8, 2014 · 23 comments

skomaroh commented May 8, 2014

I successfully compiled the .deb package without any errors, then installed it.
Atom opens, but I receive the following error at startup:

TypeError: Unable to watch path
at HandleWatcher.start (/usr/share/atom/resources/app/node_modules/pathwatcher/lib/main.js:65:29)
at new HandleWatcher (/usr/share/atom/resources/app/node_modules/pathwatcher/lib/main.js:29:12)
at new PathWatcher (/usr/share/atom/resources/app/node_modules/pathwatcher/lib/main.js:119:30)
at Object.exports.watch (/usr/share/atom/resources/app/node_modules/pathwatcher/lib/main.js:178:12)
at Config.module.exports.Config.observeUserConfig (/usr/share/atom/resources/app/src/config.js:90:109)
at Config.module.exports.Config.load (/usr/share/atom/resources/app/src/config.js:67:19)
at Atom.module.exports.Atom.startEditorWindow (/usr/share/atom/resources/app/src/atom.js:342:19)
at Object.<anonymous> (/usr/share/atom/resources/app/src/window-bootstrap.js:14:8)
at Object.<anonymous> (/usr/share/atom/resources/app/src/window-bootstrap.js:20:4)
at Module._compile (module.js:455:26)
at Object.Module._extensions..js (module.js:473:10)
at Module.load (/usr/share/atom/resources/app/node_modules/coffee-script/lib/coffee-script/register.js:45:36)
at Function.Module._load (module.js:311:12)
at Module.require (module.js:363:17)
at require (module.js:379:17)
at window.onload (file:///usr/share/atom/resources/app/static/index.js:20:5)

What could be causing the problem?

denzp commented May 8, 2014

Could you run the following command?

sudo sysctl fs.inotify.max_user_watches=32768

If this command helps, you can make the fix persistent. Try running this via sudo:

echo 32768 > /proc/sys/fs/inotify/max_user_watches
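
(A quick way to check the limit currently in effect, before and after the change; a minimal sketch assuming the standard sysctl tool from procps:)

# Show the current inotify watch limit
sysctl fs.inotify.max_user_watches
# Equivalent, reading the value straight from /proc
cat /proc/sys/fs/inotify/max_user_watches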
piranna commented May 8, 2014

@denzp, I had the same error and your trick worked, thanks! :-) Could you explain it (just so I know what happened)?

denzp commented May 8, 2014

@piranna, I think Ubuntu 14.04 either reduced the default maximum number of "watches" or simply watches more files itself than previous releases did. I ran into the same issue right after upgrading to the new Ubuntu release.

piranna commented May 8, 2014

Two days ago it worked flawlessly, so maybe it's now watching more files...

skomaroh commented May 8, 2014

Thanks, I'll try it.
I have no problem on my local machine.

raelgc commented May 13, 2014

Thanks, this fixed a white (and blank) window that I got on Ubuntu on the second launch.

kevinsawicki (Member) commented May 16, 2014

Closing this out since it looks like @denzp's fix works for this problem, thanks 👍

dutgcom commented May 30, 2014

Maybe the following info will help someone:

I work with 4-5 projects simultaneously, and the error persisted even with "fs.inotify.max_user_watches=32768" in my sysctl.conf.

I increased it to 300,000 but still had problems.
The issue disappeared when I changed the value to 3,000,000 (3 million).

So if the recommendation didn't help, try increasing the value much further before trying to solve it another way.

seveibar commented Jun 15, 2014

For some reason @denzp's fix wasn't persistent across a reboot for me. I managed to get it to stick by editing /etc/sysctl.conf and adding the following line at the end of the file:

fs.inotify.max_user_watches=32768
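
(To apply a value added to /etc/sysctl.conf without rebooting; a small sketch assuming the stock sysctl tool:)

# Reload /etc/sysctl.conf so the new limit takes effect immediately
sudo sysctl -p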
piranna commented Jul 1, 2014

This seems like a temporary solution, since it's happening here again even after applying @denzp's fix. I believe the problem is related to having Dropbox running, since it also makes inotify watch files. Maybe Atom could prevent this by disabling the functionality on failure and trying to re-enable it later during some filesystem I/O operation, instead of plainly crashing...

fbrundu commented Jul 14, 2014

The problem is here again. @Dvporg's recommendation works.

lacouf commented Sep 10, 2014

Where is the +1 icon for this fix?

envygeeks (Contributor) commented Sep 14, 2014

If you have to set the limit to more than 100,000 you should really check what is going on in your system (just sayin'), because each watch can use upwards of 1 KB of memory on 64-bit. At 3 million, that allows up to 3 GB of unswappable memory to be consumed, which you might eventually notice. You can see what is eating up all your watches with:

for foo in /proc/*/fd/*; do readlink -f $foo; done |grep inotify |cut -d/ -f3 |xargs -I '{}' -- ps --no-headers -o '%P %U %c' -p '{}' |uniq -c |sort -nr
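
(That one-liner counts inotify file descriptors per process rather than individual watches. A rough sketch for counting the watches themselves, assuming a kernel recent enough to expose "inotify wd:" entries in /proc/*/fdinfo and run as root so all processes are visible:)

# Count actual inotify watches per process by counting the "inotify wd:" lines
# in each file descriptor's fdinfo entry.
for fdinfo in /proc/*/fdinfo/*; do
    count=$(grep -c '^inotify' "$fdinfo" 2>/dev/null)
    if [ "${count:-0}" -gt 0 ]; then
        pid=$(echo "$fdinfo" | cut -d/ -f3)
        echo "$count $(ps --no-headers -o comm -p "$pid")"
    fi
done | sort -nr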
goffreder commented Nov 10, 2014

@envygeeks What is the second number in the output of your command? I've run a few tests and seen that this number increases every time I open Atom in a different folder. I can confirm, though, that the problem is related to Dropbox: when I start the system, Dropbox notifies me to increase fs.inotify.max_user_watches and restart it, and then if I open Atom it raises the exception...

envygeeks (Contributor) commented Nov 10, 2014

@goffreder the first is the count, the second is the PPID (the parent PID). That should really be a lowercase p rather than a capital, because you are more interested in the PID itself, so in the ps part of the command change "%P" to "%p".
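
(For reference, the earlier one-liner with that change applied, so the second column shows the PID of the process holding the inotify descriptors:)

for foo in /proc/*/fd/*; do readlink -f $foo; done |grep inotify |cut -d/ -f3 |xargs -I '{}' -- ps --no-headers -o '%p %U %c' -p '{}' |uniq -c |sort -nr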

treb0r commented Nov 12, 2014

For the record, I have also had problems with Atom on Ubuntu 14.10 and it turned out that the copy.com client was the culprit. I stopped it from running on login, rebooted and Atom is now fine again.

piranna commented Nov 12, 2014

Dropbox, copy.com, and similar tools use inotify to know when a file has changed on the local filesystem so they can upload it. Since inotify is also used by Atom, and inotify has a limit on the number of files it can watch, there's a conflict.


envygeeks (Contributor) commented Nov 12, 2014

There is no conflict; there is a lack of knowing how to optimize inotify watches. That can sometimes be hard, so these tools take the easy way out and recursively watch everything down the path, instead of searching and optimizing by only watching directories, which in some cases can turn what would be 100,000 watches into 10 watches (an exaggeration, but it makes the point quite well).
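
(As an illustration of the difference, using inotifywait from the inotify-tools package; /path/to/project is just a placeholder:)

# A single, non-recursive watch on a directory already reports events
# for the files directly inside it:
inotifywait -m /path/to/project

# Recursive mode instead registers one watch per subdirectory in the whole tree,
# which is where the large watch counts come from:
inotifywait -m -r /path/to/project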

envygeeks (Contributor) commented Nov 12, 2014

To clarify my comment, so it doesn't look like I'm saying Atom or Dropbox are bad at programming: when you watch a directory you have to sort and filter everything on your own, which can mean a significant amount of extra programming, so it's often easier to just create all the individual watches, which can simplify your code drastically in some cases.

guss77 commented Jun 22, 2015

@envygeeks: The problem, as shown above, is not that there are too many "watch instances" (which is what would happen if the programmer created individual watches as you suggest), but that there are too many "watches" - that is, watched files.

Also, from a programmer's point of view, creating individual watches for specific files, when I need to watch more than a couple of files in the same directory, is significantly more annoying and cumbersome to write and debug than just watching the entire directory and filtering for only the files I'm interested in - and this is where the problem lies.

I'm using another Electron-based application, and running it through strace I can see it setting up 5 different watch instances just on /etc. As /etc on my system has more than 250 files, Electron just ate up more than 1000 "watches" for the heck of it - I'm not even sure why it sets up watches on /etc in the first place, given that it already sets up individual watches for /etc/hosts and the few other /etc files it makes sense to watch.

To me this looks like a bug, and a serious one - until operating systems start shipping with a higher default fs.inotify.max_user_watches limit, and with modern desktop environments already consuming their fair share of watches and /etc growing in size on modern systems, adding (multiple) watches for /etc is not a responsible approach.
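
(A sketch of how this kind of observation can be reproduced, assuming strace is installed; replace atom with whichever Electron-based binary you want to inspect:)

# Trace only the inotify-related syscalls to see which paths end up being watched
strace -f -e trace=inotify_init,inotify_init1,inotify_add_watch atom 2>&1 | grep inotify_add_watch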

batmat commented Jun 24, 2016

Using Ansible, in case someone is interested, one can simply do:

- name: "Atom: Put a higher number for fs.inotify.max_user_watches"
  become: yes
  lineinfile: dest=/etc/sysctl.conf line="fs.inotify.max_user_watches=32768" insertafter=EOF

HTH

aidanharris referenced this issue Jul 19, 2016

Closed

Linux Crash #23

cweagans commented Sep 12, 2016

@batmat FYI, Ansible has a sysctl module.

- name: "Atom: Set higher value for fs.inotify.max_user_watches"
  sysctl:
    name: "fs.inotify.max_user_watches"
    value: 65536
    state: present
lock bot commented Apr 5, 2018

This issue has been automatically locked since there has not been any recent activity after it was closed. If you can still reproduce this issue in Safe Mode then please open a new issue and fill out the entire issue template to ensure that we have enough information to address your issue. Thanks!

lock bot locked and limited conversation to collaborators Apr 5, 2018
