
Conversation

@XiNiHa commented Sep 8, 2025

This PR fixes the issue of SSG generating nothing. The bug occurred because the finalizer that stops the server ran immediately after the effect that starts the server completed; the fix moves the finalizer into the main generate task.
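To make the lifecycle change concrete, here is a minimal, hypothetical sketch in effect-ts terms. The startServer, stopServer, and renderAllPages helpers are stand-ins invented for illustration, not the repository's actual API; the point is only where the finalizer's scope closes.

import { Effect } from "effect"

// Hypothetical stand-ins for the real start/stop/render logic (illustration only).
const startServer = (port: number) => Effect.log(`starting SSG server on port ${port}`)
const stopServer = (port: number) => Effect.log(`stopping SSG server on port ${port}`)
const renderAllPages = Effect.log("rendering all pages")

// Before (broken): the finalizer is registered inside a scope that closes as soon
// as the server has started, so the server is stopped before any page is rendered.
const startServerScoped = (port: number) =>
  Effect.gen(function* () {
    yield* startServer(port)
    yield* Effect.addFinalizer(() => stopServer(port).pipe(Effect.orDie))
  }).pipe(Effect.scoped) // scope ends here -> finalizer runs immediately

// After (fixed): the finalizer lives in the main generate task, so the server
// stays up for the whole generation and is stopped only at the very end.
const generate = Effect.gen(function* () {
  const port = 3000
  yield* startServer(port)
  yield* Effect.addFinalizer(() => stopServer(port).pipe(Effect.orDie))
  yield* renderAllPages
}).pipe(Effect.scoped)

export { startServerScoped, generate }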

@gemini-code-assist bot left a comment

Summary of Changes

Hello @XiNiHa, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical bug preventing successful static site generation by correcting the lifecycle management of the SSG server. The core change involves ensuring the server remains operational throughout the content generation phase, thereby allowing the SSG process to complete as intended. This fix enables the system to reliably produce static site outputs.

Highlights

  • Issue Resolution: This PR resolves an issue where the Static Site Generation (SSG) process was failing to generate any output because the server responsible for rendering was being prematurely terminated.
  • Server Lifecycle Management: The finalizer responsible for stopping the SSG server has been relocated from the individual server spawning effect to the main generation task. This ensures the server remains active for the entire duration of the SSG generation process.
  • Code Clean-up: Redundant server termination logic within the server-runner.worker.ts file has been removed, streamlining the server management code.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize your Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in sharing feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request aims to fix a bug where SSG servers were being terminated prematurely. The approach of moving the cleanup finalizer to a higher-level scope is correct in principle. However, the new implementation introduces a critical bug by using a worker pool for stateful tasks, which can lead to servers not being cleaned up correctly. My review provides a detailed explanation of the issue and a suggested refactoring to correctly manage the server worker lifecycle using effect-ts patterns.

Comment on lines 134 to 136
yield* Effect.addFinalizer(() =>
  Effect.all(
    serverPorts.map((port) => serverPool.executeEffect(new StopServerMessage({ port }))),
  ).pipe(Effect.orDie)
)

Severity: critical

This new finalizer correctly moves the server shutdown logic to the end of the entire SSG process. However, its implementation has a critical flaw when used with a worker pool, and it relies on a buggy StopServer handler in the worker.

1. Worker Pool State Issue (Critical)

A Worker.Pool treats its workers as stateless and interchangeable. When you call serverPool.executeEffect(new StopServerMessage({ port })), any available worker may handle the request. The problem is that the server process state (the serverProcess handle in server-runner.worker.ts) is local to the specific worker that started the server. There is no guarantee that the stop message for a given port will be routed to the correct worker.

This can lead to servers not being stopped at all, or the wrong server being stopped.

2. Buggy StopServer Handler

Additionally, the StopServer handler in server-runner.worker.ts has an incorrect signature (StopServer: () => ...). It doesn't accept the message payload, so it cannot know which port to stop. This handler needs to be fixed to StopServer: ({ port }: { port: number }) => ....
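For illustration only, the following is a minimal, hypothetical sketch of worker-local server state; it is not the repository's actual server-runner.worker.ts, is not tied to any particular worker-runner API, and the handler-map shape and the PORT environment variable are assumptions made for the example. It shows why the process handle is invisible to other pool workers and what the corrected StopServer signature provides.

import { spawn, type ChildProcess } from "node:child_process"

// Hypothetical worker-local state: a pool peer has no access to this map, which
// is why routing a StopServerMessage through Worker.Pool may reach the wrong worker.
const serverProcesses = new Map<number, ChildProcess>()

const handlers = {
  StartServer: ({ serverPath, port }: { serverPath: string; port: number }) => {
    // Assumption for the sketch: the server reads its port from the PORT env var.
    const child = spawn(process.execPath, [serverPath], {
      env: { ...process.env, PORT: String(port) },
      stdio: "inherit",
    })
    serverProcesses.set(port, child)
  },

  // Corrected shape: the handler accepts the message payload so it knows which
  // server to stop, unlike the broken `StopServer: () => ...` signature.
  StopServer: ({ port }: { port: number }) => {
    serverProcesses.get(port)?.kill()
    serverProcesses.delete(port)
  },
}

export { handlers }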

Suggested Solution

For stateful tasks where each worker manages a long-lived resource, it's better to manage the workers manually instead of using a pool. This ensures that you can communicate with the specific worker responsible for a resource.

I suggest refactoring this part to create and manage a list of workers explicitly. Here is an example of how you could refactor generate.ts:

// In generate.ts, instead of creating a pool for servers:

// 1. Create an array of workers manually
const serverWorkers = yield* Effect.all(
  Array.from({ length: optimalWorkers }, () => Worker.make<ServerMessage>())
).pipe(
  Effect.provide(Layer.mergeAll(
    createServerSpawner(serverRunnerPath),
    NodeWorker.layerManager,
  )),
)

// 2. Start servers, one per worker, associating them by index
yield* Effect.all(
  serverWorkers.map((worker, i) => {
    const port = serverPorts[i]!
    return worker.executeEffect(new StartServerMessage({ serverPath, port }))
  }),
  { concurrency: 'unbounded' },
)

// 3. The finalizer now correctly targets each worker to stop its server
yield* Effect.addFinalizer(() =>
  Effect.all(
    serverWorkers.map((worker, i) => {
      const port = serverPorts[i]!
      return worker.executeEffect(new StopServerMessage({ port }))
    }),
    { concurrency: 'unbounded' },
  ).pipe(Effect.orDie)
)

This approach ensures each server is managed by a dedicated worker, and the stop command is sent to the correct one. You would also need to fix the StopServer handler signature in server-runner.worker.ts as part of this change.

@XiNiHa force-pushed the fix/ssg-server-finalizer branch 5 times, most recently from 30e169c to bc4f182 on September 8, 2025 19:02
@XiNiHa force-pushed the fix/ssg-server-finalizer branch from bc4f182 to 60de6b1 on September 8, 2025 19:02
@jasonkuhrt (Owner) commented

Thanks @XiNiHa, I pushed a fix to main. However, if you want to rebase and find further improvements to make, that would be welcomed!

@jasonkuhrt closed this Sep 9, 2025
@XiNiHa deleted the fix/ssg-server-finalizer branch September 9, 2025 15:37