Feature: requestIdleCallback(cb) #2543

Closed
isaacs opened this issue Aug 25, 2015 · 19 comments
Labels
feature request: Issues that request new features to be added to Node.js.

Comments

@isaacs
Contributor

isaacs commented Aug 25, 2015

This seems interesting and useful, and probably wouldn't be terrifically hard to add to node: https://w3c.github.io/requestidlecallback/

It just landed in Chrome Canary: https://plus.google.com/+IlyaGrigorik/posts/bPNjgMwcMKs

Could be worth waiting a bit to see how people find it in browsers, or if V8 has any sort of special support for it in upcoming versions.

@Fishrock123 added the feature request label on Aug 25, 2015
@Fishrock123
Member

cc @ofrobots

@benjamingr
Member

Why not userland? What information about server load isn't available to userland, and if it isn't available, wouldn't it be better to expose that instead and let userland solutions evolve (at least at first)?

@isaacs
Contributor Author

isaacs commented Aug 26, 2015

@benjamingr By definition, it's impossible for userland JS to know when V8 isn't busy doing anything, because when it's running userland JS, it's busy doing that.

You could sort of fake it with setTimeouts or by writing a C++ addon that uses the v8::IdleTimeout stuff, but it would be tricky to get exactly the same logic as in Chrome.
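
For illustration, a rough userland fake along those lines might look like this (a hypothetical fakeIdleCallback based on timer lag; it's only a heuristic and not the same logic as Chrome's):

```js
// Rough approximation: treat the loop as "idle enough" when a zero-delay
// timer fires with very little lag. The 1 ms threshold is arbitrary.
function fakeIdleCallback(cb, threshold = 1) {
  const scheduled = Date.now();
  setTimeout(() => {
    const lag = Date.now() - scheduled;
    if (lag <= threshold) {
      cb();                              // loop looked quiet, run the work now
    } else {
      fakeIdleCallback(cb, threshold);   // loop was busy, try again later
    }
  }, 0);
}

// Usage: defer non-urgent work until the loop looks quiet.
fakeIdleCallback(() => console.log('probably idle-ish'));
```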

@benjamingr
Member

Well, if you schedule a macrotask (setTimeout) wouldn't it run after node is "no longer busy"?

How do we define idle? Idle in a UI setting is different from idle on a server. I definitely see the use case for "running low priority background work" in node, but what do we consider idle here?

@pmq20
Contributor

pmq20 commented Aug 26, 2015

@isaacs What is v8::IdleTimeout?

@pmq20
Contributor

pmq20 commented Aug 26, 2015

@benjamingr I searched the code in vain for a ready-to-use definition of idle-ness. libuv has a uv_idle_t but it's not what we are looking for:

Despite the name, idle handles will get their callbacks called on every loop iteration,
not when the loop is actually "idle".

Therefore I think we need to first extend libuv to provide an API that could register callbacks for when the loop is really "idle".

@bnoordhuis
Member

A browser's event loop works at discrete intervals so there is a useful definition of "idle": no more work for this quantum, next quantum hasn't started yet, remaining time is idle time.

Node's event loop however is continuous and it's only truly idle when it's waiting for timers to expire and nothing else. As soon as there is any I/O involved, it can go from idle to busy at any time without warning.

How do browsers process requestIdleCallback() when there is an in-progress XMLHttpRequest?

@pmq20
Contributor

pmq20 commented Aug 26, 2015

@bnoordhuis I think there is one more idle situation on the server: when I/O is in progress, i.e. when the event loop is polling for I/O but there are no pending callbacks yet.

@bnoordhuis
Member

@pmq20 Waiting for I/O is not idleness in the "predictable quantum of time" definition from the W3C document. Outside of the "timers only" scenario, node is very much like Chuck Norris in that it never sleeps, it just waits.

(Yes, I slipped a Chuck Norris joke in there. I apologize for nothing.)

Tangential: the concept of idle callbacks has some relation with the idle GC feature that was removed three years ago in commit d607d85.

The idea was to run the garbage collector proactively at quiescent times but it never worked all that well and for pretty much the same reasons that make requestIdleCallback() tricky.

@isaacs
Contributor Author

isaacs commented Aug 27, 2015

@bnoordhuis That's a good point.

Doing GC at those times turned out to be a bad idea, I do recall.

I'd just as soon tell people to do "long-running work" using child procs (or web workers in browsers). It makes more sense in browsers, since you do have some kinds of work that must block the same thread as the UI (updating the DOM, etc.), and doing that when idle is less likely to impact the user.

If there's no way to do this in a way that makes sense, then that's reasonable. But I expect that someone will ask for it at some point.
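
For reference, the child-process route might look roughly like this (my own sketch; './heavy-work.js' is a hypothetical script that does the heavy lifting and reports back with process.send()):

```js
// Offload long-running work to a child process instead of waiting for
// "idle" time on the main thread.
const { fork } = require('child_process');

const worker = fork('./heavy-work.js');     // hypothetical worker script
worker.send({ task: 'crunch-numbers' });    // hand the job off
worker.on('message', (result) => {
  console.log('background work finished:', result);
});
```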

@tjconcept
Contributor

Tangential: the concept of idle callbacks has some relation with the idle GC feature that was removed three years ago in commit d607d85.

Is it related to what was later described here? http://v8project.blogspot.dk/2015/08/getting-garbage-collection-for-free.html

@Fishrock123
Member

Closing since it doesn't seem very feasible. Maybe we can re-evaluate in the future.

@sebmarkbage

sebmarkbage commented May 31, 2017

Let me describe my use case and maybe that can inform a plausible design.

Marshaling between workers is too expensive and adds too much overhead, especially when you have lots of little memoized data shared between multiple parts of the app. It is also difficult to break apart the app at a reasonable seam that is stable over time. The easiest way to get good scheduling is to break it apart into small slices of work, which is why the I/O model of Node is good to begin with. This is why we rearchitected React (Fiber) to use cooperative scheduling on the client, powered by requestIdleCallback. It lets us gather all input from all the I/O at the beginning of the frame and then use the rest of the available time to prioritize and do the highest priority work.

In my Node use case I have an HTTP request/response model where I need to fetch some data from the backend on demand. Different segments of the response have dependencies on different data from the backend. The segments can be computed out of order, but each segment has a priority associated with it. If I have data available for one of the lower priority segments but not for a higher priority segment, I don't want to block. I want to use the idle CPU time to compute some of the lower priority segments.

E.g.:

Request data for Segment 1.
Request data for Segment 2.
Data returns for Segment 2.
Callback fires for Segment 2. Process Segment 2.
Data returns for Segment 1.
Callback fires for Segment 1. Process Segment 1.

If the data for Segment 1 returns just after I've started processing Segment 2, then I've probably made the wrong choice, because now Segment 1 gets delayed a bit even though it has higher priority. However, in most cases it's a better use of time. My goal is to minimize these cases.

If I understand the Node architecture well enough, it is possible to get into situations like this:

Request data for Segment 1.
Request data for Segment 2.
Request data for Segment 3.
Data returns for Segment 2.
Callback fires for Segment 2. Process Segment 2.
Data returns for Segment 3. (Still in the middle of processing Segment 2.)
Data returns for Segment 1. (Still in the middle of processing Segment 3.)
Callback fires for Segment 3. Process Segment 3.
Callback fires for Segment 1. Process Segment 1.

In this case we have already received data for Segment 1 by the time we start processing Segment 3, but because the callbacks fire in the order the data returned, I can't change the priority.

I'd like to have a way to wait for all my I/O processing to be done and build up my own priority queue:

Request data for Segment 1.
Request data for Segment 2.
Request data for Segment 3.
Data returns for Segment 2.
Callback fires for Segment 2. Add to priority queue. Request Idle.
Idle: Process Segment 2.
Data returns for Segment 3. (Still in the middle of processing Segment 2.)
Data returns for Segment 1. (Still in the middle of processing Segment 3.)
Callback fires for Segment 3. Add to priority queue. Request Idle.
Callback fires for Segment 1. Add to priority queue.
Idle: Process Segment 1. Process Segment 3. (Processing occurs in priority order in this batch of work.)

So, for my use case it's not quite requestIdleCallback because there is no "deadline" associated with the callback. Instead, it's just used as a way to schedule something after we've exhausted everything already queued in the buffers.

EDIT: I've been told setImmediate might do this already. Is that correct?
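
For concreteness, the kind of scheduling I'm after might be sketched like this (enqueueSegment and processSegment are hypothetical helpers; setImmediate is used as the "after the currently queued I/O callbacks" hook):

```js
// Completed segments go into a priority queue instead of being processed
// inside their I/O callbacks; the queue is drained one segment at a time
// via setImmediate, so later-arriving higher-priority data can jump ahead.
const queue = [];
let drainScheduled = false;

function enqueueSegment(segment) {
  queue.push(segment);
  queue.sort((a, b) => a.priority - b.priority); // lower number = higher priority
  if (!drainScheduled) {
    drainScheduled = true;
    setImmediate(drain);
  }
}

function drain() {
  drainScheduled = false;
  if (queue.length === 0) return;
  processSegment(queue.shift());   // hypothetical: compute/render one segment
  if (queue.length > 0) {
    drainScheduled = true;
    setImmediate(drain);           // yield back to I/O between segments
  }
}
```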

@Fishrock123
Member

What you're asking for seems to be higher-level than just requestIdleCallback?

I do not think that setImmediate() directly does what you're asking for. If it does, I probably do not understand your comment.

setImmediate() fires once the event loop has done all* processing in a given turn. That is, when libuv is done flushing any items it may have queued due to previous backlog, and at the end of the uv_run part of the literal "event" loop.

Thing is, that isn't indicative of I/O being idle, or really of whether the CPU would be idle if an Immediate wasn't called. It is entirely possible for the event loop to fire practically, or entirely, consecutively. To be a bit more clear, all realistic "idle time" is when libuv is waiting during uv_run, which it does if there are no I/O notifications ready to be returned into callbacks. So by the next time we hit uv_run again, we don't know whether it will immediately call a callback or not, or how long it may wait.

You can do pseudo priority queuing by utilizing the call order of the following, which fire in this order: process.nextTick(), setImmediate(), setTimeout(fn, 1). (See the small demo after the note below.)

  1. nextTick() will run at the end of the current async callback from libuv, but with a new call stack.
  2. setImmediate() will act as described above.
  3. setTimeout(fn, 1) will most likely fire at the beginning of the next loop turn or the one after that, or after the timeout if there were no I/O notifications.

*Note: I think it is possible for an add-on to fire "immediate-like" events after Immediates, in the "immediate" ("check") phase.
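
A small demo of that ordering, scheduling all three from inside an I/O callback so they are queued within the same turn:

```js
const fs = require('fs');

fs.readFile(__filename, () => {
  setTimeout(() => console.log('3: setTimeout(fn, 1)'), 1);
  setImmediate(() => console.log('2: setImmediate'));
  process.nextTick(() => console.log('1: process.nextTick'));
});
// Prints 1, then 2, then 3: nextTick runs at the end of the current callback,
// setImmediate runs in this turn's "check" phase, and the timer waits for a
// later loop turn.
```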

@Fishrock123
Member

Do keep in mind there is no regularly scheduled "active" work in Node.js, like a render frame, to differentiate from "idle" time. Work may come in at any time, including while we would be calling an "idle" callback.

@sebmarkbage

sebmarkbage commented May 31, 2017

@Fishrock123 What if, when there are pending "idle" callbacks, uv_run is called with the UV_RUN_NOWAIT flag? If nothing pending ran, it runs one "idle" callback and then tries uv_run again. The missing piece right now is that libuv doesn't expose ran_pending to the caller.

That way, if the queue is empty, that's considered an "idle" period. It may only be idle for a fraction of time, but it would otherwise just be waiting.

@Bessonov

Bessonov commented Jun 3, 2021

The following scenario interests me: I have some Kafka consumers that do some work like database queries. Because of their reactive nature there is no need to run them immediately, especially if other work, like processing an HTTP request, can/should be done. I think a very good definition of requestIdleCallback for this use case is (borrowed from @pmq20):

but there are no [directly] pending callbacks yet

I see waiting for I/O, timers and so on not as "direct" callbacks. They may happen in the future, but they are not in the processing queue right now. So, if the queue is empty, I would like to execute my low priority callback. I would probably also treat every callback/promise spawned inside that callback as low priority work and execute it only if the queue is empty. If some timer fires, or I/O waiting is over, or new work comes in while my consumer callback is executing, well, I don't care about this new situation; the new event should be scheduled after my callback as usual. It would be nice if there were some way to preempt my callback, but I'm not sure the complexity is worth it, or whether it's possible at all.
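
Roughly what I would like to write, sketched with setImmediate as a stand-in (onKafkaMessage and saveToDatabase are hypothetical; setImmediate only approximates "the queue is empty" and does not know whether the poll phase would otherwise be idle):

```js
// Low priority work is parked in its own queue and only run via setImmediate,
// so callbacks already queued in this turn (HTTP handlers, timers) go first.
const lowPriority = [];

function scheduleLowPriority(task) {
  lowPriority.push(task);
  setImmediate(runOneLowPriorityTask);
}

function runOneLowPriorityTask() {
  const task = lowPriority.shift();
  if (task) task();   // e.g. a database query triggered by a consumer message
}

// Hypothetical consumer hook:
// onKafkaMessage((message) => scheduleLowPriority(() => saveToDatabase(message)));
```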

@voxpelli

voxpelli commented Jun 3, 2021

@Bessonov Suggestion: rather than resurrecting old closed issues, open a new issue and reference this one; you will probably have more success that way 🙂

@SilentImp

SilentImp commented Apr 22, 2024

Maybe someone will find it useful: a class that visualizes Rapier (2D/3D) physics debug rendering in THREE.js, with an update method for the animation loop:

import * as THREE from 'three';

export class DebugController {
    #scene;
    #physicsWorld;
    #lines = null;
    #dimensions;

    
    /**
     * Constructor for DebugController class.
     *
     * @param {RAPIER.World} physics - The Rapier physics world to visualize.
     * @param {THREE.Scene} scene - The THREE.js scene to draw debug lines into.
     * @param {2 | 3} dimensions - The dimensionality of the physics world (default is 2).
     * @return {void} Calls the update method once to draw the initial debug lines.
     */
    constructor({
        physics,
        scene,
        dimensions = 2,
    }) {
        this.#dimensions = dimensions;
        this.#scene = scene;
        this.#physicsWorld = physics;
        this.update();
    }

    /**
     * Initialize debug lines
     */
    #init = () => {
        let material = new THREE.LineBasicMaterial({
            color: 0xffffff,
            vertexColors: true
        });
        let geometry =  new THREE.BufferGeometry();
        this.#lines = new THREE.LineSegments(geometry, material);
        this.#scene.add(this.#lines);
    }

    /**
     * Update debug lines
     * You will need to call this method every frame in your scene loop
     */
    update = () => {
        if (this.#lines === null) this.#init();
        const { vertices, colors } = this.#physicsWorld.debugRender();
        this.#lines.geometry.setAttribute('position', new THREE.BufferAttribute(vertices, this.#dimensions));
        this.#lines.geometry.setAttribute('color', new THREE.BufferAttribute(colors, 4));
    }

    /**
     * Remove debug lines in non-blocking way
     * You will need to call this method when you want to switch off debug mode
     */
    clean = () => {
        this.#lines.material.visible = false;
        if (typeof requestIdleCallback === 'function') {
            requestIdleCallback(this.#removeHandler);
        } else {
            setTimeout(this.#removeHandler, 1);
        }
        
    }

    /**
     * Remove debug lines
     */
    #removeHandler = () => {
        this.#removeElement(this.#lines);
        this.#lines = null;
    }

    
    /**
     * Remove the given object3D and its associated resources from the scene.
     *
     * @param {THREE.Object3D} object3D - The object3D to be removed.
     * @return {boolean} Returns `true` if the object3D was successfully removed, `false` otherwise.
     */
    #removeElement(object3D) {
        if (!(object3D instanceof THREE.Object3D)) return false;
        if (object3D.geometry) object3D.geometry.dispose();
        if (object3D.material) {
            if (object3D.material instanceof Array) {
                object3D.material.forEach(material => material.dispose());
            } else {
                object3D.material.dispose();
            }
        }
        object3D.removeFromParent();
        return true;
    }

}
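
Usage might look something like this (scene, world, renderer and camera are assumed to be set up elsewhere; world.step() is the usual Rapier step call):

```js
const debug = new DebugController({ physics: world, scene, dimensions: 3 });

function animate() {
  requestAnimationFrame(animate);
  world.step();                     // advance the physics simulation
  debug.update();                   // redraw the debug lines for this frame
  renderer.render(scene, camera);
}
animate();

// Later, when debug mode is switched off:
// debug.clean();
```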
