100% cpu usage when using notify within tokio library #501

Open

TommyLike opened this issue Jul 4, 2023 · 1 comment
Labels
A-bug B-unconfirmed Needs verification (by maintainer or testing party) os-mac

Comments


TommyLike commented Jul 4, 2023

System details

  • OS/Platform name and version:
    macos with intel chip
  • Rust version (if building from source): rustc --version:
    rustc 1.65.0 (897e37553 2022-11-02)
  • Notify version (or commit hash if building from git):
    notify = { version = "6.0.0", default-features = false, features = ["macos_kqueue"] }

What you did (as detailed as you can)

I'm trying to use notify together with tokio in my project; the reload logic is mostly based on the async monitor example. The code is as follows:

pub fn watch(&self, cancel_token: CancellationToken) -> Result<()> {
    let (tx, mut rx) = mpsc::channel(10);
    let watch_file = self.path.clone();
    let config = self.config.clone();
    let mut watcher: RecommendedWatcher = RecommendedWatcher::new(
        move |result: std::result::Result<Event, Error>| {
            tx.blocking_send(result).expect("Failed to send event");
        },
        notify::Config::default().with_poll_interval(Duration::from_secs(5)),
    )
    .expect("configure file watch failed to setup");
    watcher
        .watch(Path::new(watch_file.as_str()), RecursiveMode::NonRecursive)
        .expect("failed to watch configuration file");
    tokio::spawn(async move {
        loop {
            tokio::select! {
                _ = cancel_token.cancelled() => {
                    info!("cancel token received, will quit configuration watcher");
                    break;
                }
                event = rx.recv() => {
                    match event {
                        Some(Ok(Event {
                            kind: notify::event::EventKind::Modify(_),
                            ..
                        })) => {
                            info!("server configuration changed ...");
                            config.write().unwrap().refresh().expect("failed to write configuration file");
                        }
                        Some(Err(e)) => error!("watch error: {:?}", e),
                        // Note: get message from here infinitely
                        _ => {}
                    }
                }
            }
        }
    });
    Ok(())
}

The issue is that when the watch function is enabled, CPU consumption sits at 100% and the tokio::select! branch is triggered infinitely.
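For comparison, here is a minimal sketch of the same plumbing that forwards events into a Tokio unbounded channel so the callback never blocks. The make_tokio_watcher helper name is just for illustration, and this is not claimed to fix the kqueue behaviour; it only shows the shape of the tokio integration:

use std::path::Path;

use notify::{Event, RecommendedWatcher, RecursiveMode, Watcher};
use tokio::sync::mpsc;

// Hypothetical helper: forward notify events into a Tokio channel.
// Assumes notify 6.x and tokio 1.x.
fn make_tokio_watcher() -> notify::Result<(
    RecommendedWatcher,
    mpsc::UnboundedReceiver<notify::Result<Event>>,
)> {
    let (tx, rx) = mpsc::unbounded_channel();
    let watcher = RecommendedWatcher::new(
        move |res: notify::Result<Event>| {
            // UnboundedSender::send never blocks, so it is safe to call
            // from notify's (non-async) callback thread.
            let _ = tx.send(res);
        },
        notify::Config::default(),
    )?;
    Ok((watcher, rx))
}

#[tokio::main]
async fn main() -> notify::Result<()> {
    let (mut watcher, mut rx) = make_tokio_watcher()?;
    // "config.toml" is a placeholder path.
    watcher.watch(Path::new("config.toml"), RecursiveMode::NonRecursive)?;
    while let Some(res) = rx.recv().await {
        match res {
            Ok(event) => println!("event: {event:?}"),
            Err(e) => eprintln!("watch error: {e:?}"),
        }
    }
    Ok(())
}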

What you expected

This is when the watch function is disabled:
(screenshot of process CPU usage)
And this is when the watch function is enabled:
(screenshot of process CPU usage at 100%)

What happened

@0xpr03 0xpr03 added A-bug os-mac B-unconfirmed Needs verification (by maintainer or testing party) labels Jul 6, 2023

MorningLit commented Sep 6, 2023

To add on to this, could the examples folder include a good working example for Tokio?
Currently there is the futures async example, but for those of us who are weaker with async/await, it's not obvious how to implement the equivalent in a Tokio project.

Edit: found a working implementation for debounce using Tokio https://stackoverflow.com/questions/76797906/is-there-some-way-to-make-notify-debounce-watcher-async

Edit 2: found a better, actually working debounce implementation using Tokio:

use std::path::Path;
use std::time::Duration;

use debounced::Debounced;
use futures::channel::mpsc::Receiver;
use futures::StreamExt;
use notify::{Config, Event, RecommendedWatcher, RecursiveMode, Watcher};

fn async_watcher() -> notify::Result<(RecommendedWatcher, Receiver<notify::Result<Event>>)> {
    let (mut tx, rx) = futures::channel::mpsc::channel(1024);
    let watcher = RecommendedWatcher::new(
        move |res| {
            if let Err(e) = tx.try_send(res) {
                // error handling (e.g. log the dropped event)
            }
        },
        Config::default(),
    )?;
    Ok((watcher, rx))
}

pub async fn watch_file<P: AsRef<Path>>(
    path: P,
    filename: P,
) -> Result<(), anyhow::Error> {
    let path = path.as_ref();
    let filename = filename.as_ref();
    let (mut watcher, rx) = async_watcher()?;
    // Debounced (from the `debounced` crate) wraps the event stream and only
    // yields once the stream has been quiet for the given duration.
    let mut debounced_rx = Debounced::new(rx, Duration::from_secs(5));
    watcher.watch(path, RecursiveMode::NonRecursive)?;

    while let Some(res) = debounced_rx.next().await {
        match res {
            Ok(event) => {
                // process the event
            }
            Err(e) => {
                // error handling
            }
        }
    }
    Ok(())
}
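For reference, a minimal caller for the watch_file function above might look like this; the paths are placeholders for whatever you actually want to watch:

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    // Hypothetical paths; substitute the directory and file you care about.
    watch_file("./config", "./config/app.toml").await
}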

I've tried notify's debouncer-mini and debouncer-full and they behaved in ways that I did not expect. My understanding of debounce is that it fires an event after x seconds of no further activity, but in practice it looks like it just delays sending the event by x seconds and does not actually debounce.

I recommend using this library's debounce, https://docs.rs/debounced/latest/debounced/, together with futures' mpsc channel, to achieve the desired effect.

Edit 3:
Just realised that this method is only good if you want to debounce a single stream of events. If you are watching multiple files, either create one watcher per file or find some other method, e.g. per-path debouncing on the consumer side (see the sketch below) 🤷
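
One rough sketch of such an approach, assuming events arrive over a Tokio unbounded channel as in the earlier sketch in this thread (debounce_per_path is just an illustrative name): group events by path and only report a path once it has been quiet for the debounce window.

use std::collections::HashMap;
use std::path::PathBuf;
use std::time::{Duration, Instant};

use notify::Event;
use tokio::sync::mpsc::UnboundedReceiver;

// Hypothetical per-path debouncer: a path is reported only after `window`
// has elapsed with no further events for it. `window` must be non-zero.
async fn debounce_per_path(
    mut rx: UnboundedReceiver<notify::Result<Event>>,
    window: Duration,
) {
    let mut last_seen: HashMap<PathBuf, Instant> = HashMap::new();
    let mut tick = tokio::time::interval(window / 2);
    loop {
        tokio::select! {
            maybe = rx.recv() => {
                match maybe {
                    Some(Ok(event)) => {
                        // Record the latest event time for every path involved.
                        let now = Instant::now();
                        for path in event.paths {
                            last_seen.insert(path, now);
                        }
                    }
                    Some(Err(e)) => eprintln!("watch error: {e:?}"),
                    None => break, // watcher dropped, channel closed
                }
            }
            _ = tick.tick() => {
                // Flush paths that have been quiet for at least `window`.
                let now = Instant::now();
                let ready: Vec<PathBuf> = last_seen
                    .iter()
                    .filter(|(_, t)| now.duration_since(**t) >= window)
                    .map(|(p, _)| p.clone())
                    .collect();
                for path in ready {
                    last_seen.remove(&path);
                    println!("debounced change: {}", path.display());
                }
            }
        }
    }
}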
