Improve large expansion sets #1324
Conversation
I like the solution here with the caching. I had a similar need recently for throttling the number of in-flight promises and came up with this solution, which might be inspiration for a less intrusive approach to the batching. The benefit here is that it allows us to limit the number of in-process workers, but doesn't block on all of them completing, so we can ensure the workers are always kept filled.

```typescript
/** Throttle number of in flight promises. */
export class InFlightPromiseThrottler {
  private inFlightPromises: Promise<unknown>[] = [];
  private promiseLimit: number;

  public constructor(promiseLimit: number) {
    this.promiseLimit = promiseLimit;
  }

  public async run<T>(promiseFactory: () => Promise<T>): Promise<T> {
    while (this.inFlightPromises.length >= this.promiseLimit) {
      await Promise.race(this.inFlightPromises);
    }
    const promise = promiseFactory();
    this.inFlightPromises.push(promise);
    return promise.finally(() => {
      const index = this.inFlightPromises.indexOf(promise);
      if (index !== -1) {
        this.inFlightPromises.splice(index, 1);
      }
    });
  }
}
```

Used like:

```typescript
// Somewhere in a higher scope
const workerPromiseThrottler = new InFlightPromiseThrottler(8);

// When we actually do the worker run
const result = await workerPromiseThrottler.run(() => {
  // async code calling piscina.run
});
```
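A minimal standalone sketch (not from the PR) shows the suggested throttler in action: a driver launches 20 short fake tasks through a limit of 4 and tracks peak concurrency. The task durations, the limit of 4, and the `demo` function are illustrative values, not anything from the codebase.

```typescript
// The suggested throttler, plus a small driver that checks concurrency
// never exceeds the configured limit.
class InFlightPromiseThrottler {
  private inFlightPromises: Promise<unknown>[] = [];

  public constructor(private promiseLimit: number) {}

  public async run<T>(promiseFactory: () => Promise<T>): Promise<T> {
    // Wait until a slot frees up before starting the next task.
    while (this.inFlightPromises.length >= this.promiseLimit) {
      await Promise.race(this.inFlightPromises);
    }
    const promise = promiseFactory();
    this.inFlightPromises.push(promise);
    return promise.finally(() => {
      const index = this.inFlightPromises.indexOf(promise);
      if (index !== -1) {
        this.inFlightPromises.splice(index, 1);
      }
    });
  }
}

async function demo(): Promise<{ peak: number; count: number }> {
  const throttler = new InFlightPromiseThrottler(4);
  let active = 0;
  let peak = 0;
  const tasks = Array.from({ length: 20 }, (_, i) =>
    throttler.run(async () => {
      active += 1;
      peak = Math.max(peak, active);
      await new Promise((resolve) => setTimeout(resolve, 5)); // fake work
      active -= 1;
      return i;
    }),
  );
  const results = await Promise.all(tasks);
  return { peak, count: results.length };
}

demo().then(({ peak, count }) => console.log(`peak=${peak} count=${count}`));
```

Because `run` awaits `Promise.race` whenever the limit is reached, new tasks start as soon as any one in-flight task finishes, rather than waiting for a whole batch to drain.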
Hi Dylan, thank you for the suggestion! Very cool approach and I will give it a try. I hope all is well with you.
Looks good to me. I saw your demo as well and it seemed like it was working well. My only question: do we need to enforce that PromiseThrottler is a singleton?

I am using it as a singleton export, but there is a ticket I created to use dependency injection, so it won't be a singleton in the future.
* The user can specify the number of workers and how much heap size each worker can have.
Implement worker options to tailor worker behavior to an application's needs, optimizing resource utilization.
Works better than the original chunking I implemented
Introduce a TypeScript transpiling cache to significantly reduce build times for both expansion sets and sequence expansion; the first build remains an upfront cost. Implement chunking to prevent overloading the worker queue with excessive jobs, which effectively manages worker resources and prevents runaway consumption of CPU and heap.
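The PR text doesn't show the cache itself, but a content-hash keyed cache along these lines captures the idea; `transpileCached` and the injected `transpile` function are hypothetical names standing in for whatever transpiler call the server actually makes.

```typescript
import { createHash } from "crypto";

// Cache of transpiled output, keyed by a SHA-256 hash of the source text,
// so identical authoring logic is only transpiled once.
const transpileCache = new Map<string, string>();

function transpileCached(
  source: string,
  transpile: (src: string) => string,
): string {
  const key = createHash("sha256").update(source).digest("hex");
  const hit = transpileCache.get(key);
  if (hit !== undefined) {
    return hit; // cache hit: skip the expensive transpile entirely
  }
  const output = transpile(source);
  transpileCache.set(key, output);
  return output;
}
```

Hashing the source (rather than keying on a file path) means edits to the logic invalidate the entry automatically, while repeated builds of unchanged logic pay nothing after the first run.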
dc9a5fe
to
3b39cd4
Compare
Description
Part 1 of 2 for #1025
This PR addresses a stability issue where creating large expansion sets caused server crashes due to excessive resource consumption by worker processes. Here's a summary of the improvements:
Problem:
Solution:
By default, I am spinning up 8 workers and giving each 1 GB of heap.
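Expressed as a Piscina pool configuration, those defaults would look roughly like the sketch below. The option names are Piscina's documented `maxThreads` and the Node.js worker `resourceLimits`; the worker filename is a placeholder, not the actual path from this repository.

```typescript
import Piscina from "piscina";

// Hypothetical pool matching the PR's defaults: at most 8 worker threads,
// each capped at ~1 GB of old-generation heap.
const pool = new Piscina({
  filename: "./worker.js", // placeholder worker script path
  maxThreads: 8,           // cap on concurrent worker threads
  resourceLimits: {
    maxOldGenerationSizeMb: 1024, // ~1 GB heap per worker
  },
});
```

Capping per-worker heap means a runaway transpile job takes down one worker rather than the whole server process.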
Verification
I was able to create an expansion set with 73 pieces of authored logic without a crash. Before the fix, 20+ would crash the server.
Future work
Implement a single background worker that transpiles the authoring logic while the server is idle and caches the results. This would help with the huge upfront cost, e.g. 70+ expansion logics take about 13 minutes. This is CPU-bound, so a beefier machine would help.
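The idle-prewarm idea above could be sketched as a background loop that drains a queue of pending sources into a cache whenever the server has spare time. Everything here is hypothetical: `startIdlePrewarm`, the queue, and the injected `transpile` function are illustrative names, and a real implementation would likely run in a dedicated worker rather than a timer.

```typescript
type Transpile = (source: string) => string;

// Periodically transpile one queued source into the cache, paying the
// CPU cost off the hot path so later expansion runs find a warm cache.
function startIdlePrewarm(
  queue: string[],
  cache: Map<string, string>,
  transpile: Transpile,
  intervalMs = 50,
): NodeJS.Timeout {
  return setInterval(() => {
    const source = queue.shift();
    if (source !== undefined && !cache.has(source)) {
      cache.set(source, transpile(source));
    }
  }, intervalMs);
}
```

Draining one item per tick keeps the prewarm work interruptible, so foreground requests are never starved for long.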