
Not able to remove file watcher #115

Closed
gaplyk opened this issue Jan 28, 2016 · 4 comments · Fixed by #203

Comments

@gaplyk

gaplyk commented Jan 28, 2016

I need to remove a file watcher. I call Remove(fileName) when an event fires, to stop receiving further events for that file.

But this Remove call is not working: it waits on ignoreLinux to remove the watcher, and that never happens.
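For context, a minimal sketch of the pattern being described: calling Remove from inside the same loop that reads watcher.Events. The file path and the "stop after the first event" logic are placeholders for illustration; the import path matches the one in the stack trace below.

```go
package main

import (
	"log"

	"gopkg.in/fsnotify.v1"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Placeholder path; stands in for whatever file is being watched.
	const file = "/tmp/example.txt"
	if err := watcher.Add(file); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case ev := <-watcher.Events:
			log.Println("event:", ev)
			// Stop watching this file after the first event. With the
			// fsnotify version discussed in this issue, Remove can block
			// here forever, because it waits for work that only this
			// goroutine (the one draining watcher.Events) can complete.
			if err := watcher.Remove(file); err != nil {
				log.Println("remove:", err)
			}
			return
		case err := <-watcher.Errors:
			log.Println("error:", err)
		}
	}
}
```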

@Tasssadar

Happened to me too, relevant stack trace:

goroutine 60 [select, 1 minutes]:
gopkg.in/fsnotify%2ev1.(*Watcher).readEvents(0xc82031e000)
        src/gopkg.in/fsnotify.v1/inotify.go:265 +0xa00
created by gopkg.in/fsnotify%2ev1.NewWatcher
       gopkg.in/fsnotify.v1/inotify.go:60 +0x3e2

goroutine 61 [semacquire, 1 minutes]:
sync.runtime_Syncsemacquire(0xc820300850)
       src/runtime/sema.go:241 +0x201
sync.(*Cond).Wait(0xc820300840)
       src/sync/cond.go:63 +0x9b
gopkg.in/fsnotify%2ev1.(*Watcher).Remove(0xc82031e000, 0xc820482230, 0x4e, 0x0, 0x0)
        src/gopkg.in/fsnotify.v1/inotify.go:157 +0x2db
...

The problem is that goroutine 61 is also the one that reads from the Events channel, which means no new events can come in and free the condition in Remove(). Not sure if that's a bug per se, but definitely something that should be documented (but I honestly haven't checked if it is, so if yes, the joke's on me ^^)
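One possible workaround while Remove and the Events reader share a goroutine (a sketch, not something the library documents) is to issue the Remove from a separate goroutine, so the event loop keeps draining watcher.Events and the event that unblocks Remove can be delivered. Assuming the same imports as the sketch above, and a hypothetical helper name:

```go
// removeAsync is a hypothetical helper, not part of the fsnotify API: it
// calls Remove from its own goroutine so the goroutine reading
// watcher.Events can keep draining events, which is what eventually lets
// Remove return.
func removeAsync(w *fsnotify.Watcher, path string) {
	go func() {
		if err := w.Remove(path); err != nil {
			log.Println("remove:", err)
		}
	}()
}
```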

@nathany
Contributor

nathany commented Feb 23, 2016

I wonder if this is similar to the issues and fixes for kqueue on BSD (#105), except for inotify.

@jiangytcn

@nathany and @gaplyk I ran into the same issue, and I'm curious: do I have to manually remove the watcher for a file/directory when that file/directory is removed, or does the fsnotify library handle this automatically?

@gaplyk
Author

gaplyk commented Aug 23, 2016

I gave up on this in the end and went with a different workaround that doesn't use fsnotify.

nathany added the bug label Aug 26, 2016
aarondl added a commit to aarondl/fsnotify that referenced this issue Mar 29, 2017
Several people have reported this issue where if you are using a
single goroutine to watch for fs events and you call Remove in
that goroutine it can deadlock. The cause for this is that the Remove
was made synchronous by PR fsnotify#73. The reason for this was to try and
ensure that maps were no longer leaking.

In that PR, IN_IGNORE was used as the event to ensure map cleanup.
This worked fine when Remove() was called and the next event was
IN_IGNORE, but when a different event was received the main goroutine
that's supposed to be reading from the Events channel would be stuck
waiting for the sync.Cond, which would never be hit because the select
would then block waiting for someone to receive the non-IN_IGNORE event
from the channel so it could proceed to process the IN_IGNORE event that
was waiting in the queue. Deadlock :)

Removing the synchronization then created two nasty races where Remove
followed by Remove would error unnecessarily, and one where Remove
followed by an Add could result in the maps being cleaned up AFTER the
Add call which means the inotify watch is active, but our maps don't
have the values anymore. It then becomes impossible to delete the
watches via the fsnotify code since it checks its local data before
calling InotifyRemove.

This code attempts to use IN_DELETE_SELF as a means to know when a watch
was deleted as part of an unlink(). That means that we didn't delete the
watch via the fsnotify lib and we should clean up our maps since that
watch no longer exists. This allows us to clean up the maps immediately
when calling Remove since we no longer try to synchronize cleanup
using IN_IGNORE as the sync point.

- Fix fsnotify#195
- Fix fsnotify#123
- Fix fsnotify#115
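To make the mechanism in the commit message concrete, here is a schematic, self-contained reproduction of the deadlock shape it describes. This is not fsnotify's actual code; it only mimics the structure: a producer that must deliver a pending event on an unbuffered channel before it can signal a sync.Cond, and a would-be consumer that instead blocks waiting on that cond.

```go
// Schematic reproduction of the deadlock described above. NOT fsnotify's
// code; it just has the same shape: the goroutine that should be receiving
// from an unbuffered channel instead waits on a sync.Cond, while the only
// goroutine that could signal that cond is blocked sending on the channel.
// Running this will typically abort with
// "fatal error: all goroutines are asleep - deadlock!".
package main

import "sync"

func main() {
	events := make(chan string) // unbuffered, like watcher.Events
	var mu sync.Mutex
	cond := sync.NewCond(&mu)

	go func() {
		// The "readEvents" side: it must hand off the pending (non-IN_IGNORE)
		// event before it ever reaches the point of signalling the cond.
		events <- "some event" // blocks forever: the receiver is stuck below
		cond.L.Lock()
		cond.Signal()
		cond.L.Unlock()
	}()

	// The "Remove" side, called from the goroutine that was supposed to be
	// receiving from events: it waits for a signal that can never come.
	cond.L.Lock()
	cond.Wait()
	cond.L.Unlock()
}
```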
markbates pushed a commit that referenced this issue Mar 29, 2017