Ensure static assets coming from slices have the slice name prefix in dest URL #18

Closed
wants to merge 9 commits

Conversation

@jodosha (Member) commented Nov 14, 2023

Problems

Given I have two files with the same name but in two different locations:

  • app/assets/images/logo.png
  • slices/admin/assets/images/logo.png

When I run bundle exec hanami assets compile, I get the following result:

⚡ tree public
public
├── assets
│   ├── # ...
│   ├── logo-29F2002D.png
│   └── logo-C1EF77E4.png
└── assets.json
⚡ cat public/assets.json
// ...
  "logo.png": {
    "url": "/assets/logo-29F2002D.png"
  }

Problem 1: The logo.png coming from the admin slice is outside the public/assets/admin directory.
Problem 2: The manifest entry for one of the two logo.png files is overwritten in public/assets.json.

Fix

If a static file comes from a slice, prefix its destination path with the slice name.

⚡ tree public
public
├── assets
│   ├── admin
│   │   ├── # ...
│   │   └── logo-29F2002D.png
│   ├── # ...
│   └── logo-C1EF77E4.png
└── assets.json
⚡ cat public/assets.json
// ...
  "logo.png": {
    "url": "/assets/logo-C1EF77E4.png"
  },
  "admin/logo.png": {
    "url": "/assets/admin/logo-29F2002D.png"
  }

Solution 1: The logo file from the admin slice is now placed under public/assets/admin/.
Solution 2: The assets manifest reports both files.
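For reference, here's a minimal sketch of the destination mapping described above. The destinationFor helper and the single-level path handling are illustrative only (not the actual plugin code), and fingerprinting is omitted for brevity:

const path = require("path");

// Map a source asset path to its public destination, prefixing the slice
// name when the file comes from a slice. Nested slices and content hashes
// are intentionally left out of this sketch.
function destinationFor(sourcePath) {
  const sliceMatch = sourcePath.match(/^slices\/([^/]+)\/assets\/(?:[^/]+\/)?(.+)$/);

  if (sliceMatch) {
    const [, sliceName, fileName] = sliceMatch;
    return path.join("public", "assets", sliceName, fileName);
  }

  // App-level assets keep their current destination
  return path.join("public", "assets", path.basename(sourcePath));
}

// destinationFor("app/assets/images/logo.png")          => "public/assets/logo.png"
// destinationFor("slices/admin/assets/images/logo.png") => "public/assets/admin/logo.png"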

@jodosha added the bug ("Something isn't working") label on Nov 14, 2023
@jodosha added this to the v2.1.0 milestone on Nov 14, 2023
@jodosha self-assigned this on Nov 14, 2023
@timriley force-pushed the fix/prefix-static-assets-with-slice-name branch from 0408163 to 48343c1 on November 14, 2023 19:04
I've just rotated this one 90 degrees to make it a different image
@timriley force-pushed the fix/prefix-static-assets-with-slice-name branch from 48343c1 to 9a7a5e6 on November 14, 2023 19:07
@parndt: comment marked as resolved.

@jodosha (Member, Author) commented Nov 15, 2023

@parndt Thanks! Could you please verify that the app.css from the admin slice is referencing the right CSS?

@KonnorRogers (Contributor) left a comment:

Don't mind me, just driving by taking a peek!

Comment on lines +60 to +62
for (const build of builds) {
  await esbuild.build(build).catch(errorHandler);
}
@KonnorRogers (Contributor) commented:

This actually won't run in parallel as you're expecting. This will run builds one by one.

If you truly wanted parallel builds you could do this:

Suggested change
- for (const build of builds) {
-   await esbuild.build(build).catch(errorHandler);
- }
+ await Promise.allSettled(
+   builds.map(async (build) => await esbuild.build(build).catch(errorHandler))
+ )

Reply:

I tried this; it's fast, but it didn't seem to keep the manifest file valid JSON at all times.

Reply (Member):

In this case, I think we'd need to add a mutex around the part that updates the manifest file.

src/index.ts Outdated
Comment on lines 32 to 35
const ctx = await esbuild.context(esbuildOptions);
await ctx.watch().catch(errorHandler);

return ctx;
@KonnorRogers (Contributor) commented:

Perhaps a silly question: why are multiple builds only run for the production build, but not the watcher build?

Reply (Member):

@KonnorRogers I think this is Luca's next step. We definitely need similar behavior for watch mode.

dist/index.js Outdated (resolved)
@timriley (Member) commented:

@parndt Thanks for the feedback! I've now applied both of your fixes to this branch.

@timriley (Member) commented Nov 27, 2023

I think we have the bones of a workable approach here! A few thoughts on what I think we'd want to consider as part of finishing this off.

I'm going to go into detail on each of these, but I don't think each one represents work we have to do now. I'll thumbs up the areas that I think we should tackle, and thumbs down those that I think we can live without for now.

Let me know what you think of the ideas below. I'm keen to hear your thoughts!

User control over per-slice esbuild options 👍🏼

If we're invoking esbuild once per slice, then I think we should allow the user a hook to customise their esbuild options per slice.

To do this, we can call the user-provided esbuildOptionsFn once for each slice, passing in the slice's name and/or root directory as options. This would allow the user to do something like this:

await assets.run({
  esbuildOptionsFn: (args, esbuildOptions, slice) => {
    if (slice.name == "admin") {
      // customise esbuildOptions for admin
    }

    return esbuildOptions;
  }
});

To do this properly, however, I think we need better slice discovery.

Full slice discovery 👍🏼

One of the shortfalls of our new assets solution ever since we created it is that it looks for slices in the root of slices/ only.

In practice, this is where most slices will be found, but because we do technically support nested slices (e.g. slices/a/slices/b/), I think it would be good if we could find a reasonable way to detect those slices too.

One way could be to implement a parallel version of our slice discovery code (which is to recursively scan for slices/ directories, as deep as the app's existing file structure allows). This would be straightforward enough to implement, but it would mean duplicating logic from our canonical, Ruby-based slice loading.

Another way could be to leverage this Ruby-based slice loading in the first place. We could, e.g., have our hanami-cli assets commands fetch the known slices from the Hanami app directly, and pass them as another argument when running the config/assets.js script, something like this:

npm run assets -- --slices=main,admin,admin/nested,admin/nested/more

This would mean the JS code would then be able to look directly for each of those slices' assets directories, rather than having to manage a full filesystem crawl.
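As a rough sketch of that direction (the --slices flag name and the parsing below are illustrative, not a settled interface), the JS side could derive each slice's assets directory straight from the passed names, including nested ones:

// Hypothetical parsing of a --slices=main,admin,admin/nested argument
const slicesArg = process.argv.find((arg) => arg.startsWith("--slices=")) || "--slices=";
const sliceNames = slicesArg.split("=")[1].split(",").filter(Boolean);

// "admin"        => "slices/admin/assets"
// "admin/nested" => "slices/admin/slices/nested/assets"
const sliceAssetDirs = sliceNames.map(
  (name) => `slices/${name.split("/").join("/slices/")}/assets`
);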

This approach would mean that we maintain just a single place for locating slice directories, and it would simplify our JS code somewhat. It would also mean that our JS code would respect Hanami's conditional slice loading (via HANAMI_SLICES or config.slices) and only build assets for the slices that a user has activated, which is a win for consistent, integrated behaviour across our developer experience.

Another nice side effect of this is that it would then become possible for the user to explicitly call hanami assets compile with their own slice options, allowing them to build a single slice's assets only, e.g. hanami assets compile --slices=admin.

An alternative could be to invert the arrangement and have the JS code itself run a hanami command to request a list of slices, but that feels needlessly circular, since the JS code is already being invoked by a Hanami command in the first place.

Respect per-slice assets config? 👎🏼

I know I mentioned up top the idea of passing the slice details to our top-level config/assets.js, but another option we should at least pause to think about is the idea of preferring per-slice config/assets.js files, if they exist. This would mirror our current ability to have separate config files per slice. If such a file is found, we would need to load it (somehow?) and use it in preference to the main file.

I think this would be straightforward to implement if we were using forked processes to run our multiple esbuild invocations, since we could just call the appropriate config/assets.js file for each invocation.

With our current implementation, however, this feels trickier, since by the time our assets command is running, we're already "inside" the top-level config/assets.js file, and shifting ourselves into another one at that point doesn't really make sense given the current self-executing nature of these files.

I think we would be OK in deciding to give this a miss, but I want to share this thought for completeness. It's certainly one argument in favour of some entirely separate process managing multiple esbuild invocations as completely separate processes, but that would come with its own set of challenges, I'm sure.

Besides, if we implement the first approach of passing the slice details to the esbuildOptionsFn, we already have a "good enough" method of allowing per-slice asset configuration in the meantime.

Add mutex around manifest writes 👍🏼

If we want to keep a single manifest file and properly parallelise our esbuild invocations (per @KonnorRogers' suggestion), we'll need to put a mutex around the writes to that file.

This feels like a good thing to do.
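As a sketch of what that could look like (the mergeIntoManifest helper and the manifest shape are illustrative, not the plugin's actual internals), writes can be serialised through a promise chain acting as a simple mutex:

const fs = require("fs/promises");

// Each write waits for the previous one to finish before reading and
// rewriting the manifest, so parallel builds can't interleave their writes.
let manifestLock = Promise.resolve();

function mergeIntoManifest(manifestPath, newEntries) {
  manifestLock = manifestLock.then(async () => {
    let manifest = {};

    try {
      manifest = JSON.parse(await fs.readFile(manifestPath, "utf-8"));
    } catch {
      // First writer: start from an empty manifest
    }

    await fs.writeFile(manifestPath, JSON.stringify({ ...manifest, ...newEntries }, null, 2));
  });

  return manifestLock;
}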

Many manifest files? 👎🏼

The alternative to the above would be to write separate manifests, one per slice.

This would have the advantage of allowing each slice to have truly independent assets, and reinforce our idea of slices being independent units. It would also make referencing assets feel a little "nicer" inside each slice, since no slice name prefix would be required. So inside an admin slice, you'd be able to do stylesheet_tag("app") instead of stylesheet_tag("admin/app"): there would be no need for the admin/ prefix for any of that slice's assets. The niceness here would be especially applicable in the case of nested slices, since in those cases, the prefix would be even longer, e.g. "foo/bar/app" for a Foo::Bar::Slice.

The flip side to this is that it would make sharing assets across slices much harder. The way we'd implement this would be to register a distinct "assets" component per slice, which loads the slice's relevant manifest file, meaning all asset helpers used within that slice would be aware of that slice's assets only. Getting access to e.g. the app-level assets from a slice wouldn't be possible using our standard helpers. To achieve this, the user would need to configure their slice to explicitly import the app's assets component, which would make it available via "app.assets", and then wire that up into the view context so that it's available to use to get at the asset URLs. And at that point, they'd still have to work with those asset URLs directly, rather than through our helpers.

I think we could do more here to make our Ruby-level assets code smarter about cross-slice asset access, but that all represents more work, at a time when we don't actually have much community feedback about how people would like to use our assets (in addition to further delaying our release).

Given this, I think that creating a single manifest file is probably our best approach on balance. We already have it (mostly) working, and it affords our users the most flexibility, at the cost of a little bit of noise via the asset path prefixes.

Implement multiple invocations for watch mode 👍🏼

Almost goes without saying, but I wanted to say it just in case ;)

In fact, I would recommend we try this next, before any of the other items above, just to make sure that our parallel esbuild approach will even work in watch mode. If for any reason that turns out to be impossible, we might need to reconsider this approach entirely.
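If the parallel approach does hold up, watch mode could mirror the compile path by creating one esbuild context per build and watching them all. A sketch only, assuming the same builds array and errorHandler used in the snippets above, inside an async function:

// One long-lived context per build config; esbuild then handles
// incremental rebuilds for each of them independently.
const contexts = await Promise.all(builds.map((build) => esbuild.context(build)));

await Promise.all(contexts.map((ctx) => ctx.watch().catch(errorHandler)));

return contexts;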

@timriley marked this pull request as draft on January 8, 2024 21:00
@timriley (Member) commented Jan 8, 2024

I'm shifting this PR back to draft. We're not likely to merge this one.

I'm going to be taking a different approach, managing multiple esbuild processes from the Ruby CLI command. This will probably look a lot like the "Many manifest files" section I wrote above.

@timriley closed this on Feb 27, 2024