Ensure static assets coming from slices have the slice name prefix in dest URL #18
Conversation
force-pushed from 0408163 to 48343c1
I've just rotated this one 90 degrees to make it a different image
force-pushed from 48343c1 to 9a7a5e6
@parndt Thanks! Could you please verify if the …
Don't mind me, just driving by taking a peek!
```js
for (const build of builds) {
  await esbuild.build(build).catch(errorHandler);
}
```
This actually won't run in parallel as you're expecting; it will run the builds one by one.
If you truly want parallel builds, you could do this:
```diff
- for (const build of builds) {
-   await esbuild.build(build).catch(errorHandler);
- }
+ await Promise.allSettled(
+   builds.map(async (build) => await esbuild.build(build).catch(errorHandler))
+ )
```
I tried this. It's fast, but it didn't seem to reliably keep the manifest file as valid JSON at all times.
In this case, I think we'd need to add a mutex around the part that updates the manifest file.
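For illustration, a minimal promise-chain mutex in Node.js could look like this (a sketch only; `withManifestLock`, `manifestPath`, and `newEntries` are assumed names, not the plugin's actual internals):

```js
import fs from "node:fs/promises";

// Serialise all manifest updates through a single promise chain.
let manifestLock = Promise.resolve();

function withManifestLock(fn) {
  const run = manifestLock.then(fn, fn);
  manifestLock = run.catch(() => {}); // keep the chain alive even if fn throws
  return run;
}

// Each esbuild invocation merges its entries into the manifest under the lock.
async function updateManifest(manifestPath, newEntries) {
  await withManifestLock(async () => {
    const current = await fs.readFile(manifestPath, "utf8").catch(() => "{}");
    const manifest = { ...JSON.parse(current), ...newEntries };
    await fs.writeFile(manifestPath, JSON.stringify(manifest, null, 2));
  });
}
```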
src/index.ts (outdated)

```js
const ctx = await esbuild.context(esbuildOptions);
await ctx.watch().catch(errorHandler);

return ctx;
```
Perhaps a silly question: why are multiple builds only run for the production build, but not the watch build?
@KonnorRogers I think this is Luca's next step. We definitely need similar behavior for watch mode.
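For illustration, a watch-mode equivalent of the per-build loop might look something like this (a sketch only, assuming the same `builds` array the production path uses):

```js
// One esbuild context (and file watcher) per build configuration.
const contexts = await Promise.all(
  builds.map(async (build) => {
    const ctx = await esbuild.context(build);
    await ctx.watch().catch(errorHandler);
    return ctx;
  })
);

return contexts;
```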
@parndt Thanks for the feedback! I've now applied both of your fixes to this branch.
I think we have the bones of a workable approach here! A few thoughts on what I think we'd want to consider as part of finishing this off. I'm going to go into detail on each of these, but I don't think each one represents work we have to do now. I'll thumbs up the areas that I think we should tackle, and thumbs down those that I think we can live without for now. Let me know what you think of the ideas below. I'm keen to hear your thoughts!

User control over per-slice esbuild options 👍🏼

If we're invoking esbuild once per slice, then I think we should allow the user a hook to customise their esbuild options per slice. To do this, we can call the user-provided `esbuildOptionsFn` with the slice as an additional argument:

```js
await assets.run({
  esbuildOptionsFn: (args, esbuildOptions, slice) => {
    if (slice.name == "admin") {
      // customise esbuildOptions for admin
    }

    return esbuildOptions;
  }
});
```

To do this properly, however, I think we need better slice discovery.

Full slice discovery 👍🏼

One of the shortfalls we've had with our new assets solution ever since we created it is that it only looks for slices in the root of `slices/`. In practice, this is where most slices will be found, but because we do technically support nested slices (e.g. a slice kept inside another slice's own `slices/` directory), this isn't exhaustive.

One way could be to implement a parallel implementation of our slice discovery code (which is to recursively scan for nested `slices/` directories) on the JS side. Another way could be to leverage Hanami's Ruby-based slice loading in the first place. We could e.g. have our hanami-cli assets commands fetch the known slices from the Hanami app directly, and pass them as another argument when running the JS assets code, as sketched below.
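For example (the flag name and payload shape here are hypothetical, purely to illustrate the idea), the JS side could receive the slice list directly:

```js
import { parseArgs } from "node:util";

// Hypothetical invocation from the Hanami CLI:
//   node config/assets.js --slices '["slices/admin","slices/main"]'
const { values } = parseArgs({
  options: { slices: { type: "string" } },
  strict: false,
});

const sliceDirs = values.slices ? JSON.parse(values.slices) : [];

// With the slices known up front, each assets directory is a fixed path,
// so no recursive filesystem crawl is needed.
const assetDirs = sliceDirs.map((dir) => `${dir}/assets`);
```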
This would mean the JS code would be able to look directly in each of those slices' assets directories, rather than having to manage a full filesystem crawl. This approach would mean that we maintain just a single place for locating slice directories, and it would simplify our JS code somewhat. It would also mean that our JS code would respect Hanami's conditional slice loading.

Another nice side effect of this is that it would then become possible for the user to explicitly control which slices have their assets compiled.

An alternative could be to invert the arrangement and have the JS code itself run a hanami command to request a list of slices, but that feels needlessly circular, since the JS code is already being invoked by a Hanami command in the first place.

Respect per-slice assets config? 👎🏼

I know I mentioned up top the idea of passing the slice details to our top-level `esbuildOptionsFn`; a step beyond that would be to respect a separate assets config inside each slice.

I think this would be straightforward to implement if we were using forked processes to run our multiple esbuild invocations, since we could just call the appropriate per-slice config. With our current implementation, however, this feels more tricky, since by the time our assets command is running, we're already "inside" the top-level assets config.

I think we would be OK in deciding to give this a miss, but I want to share this thought for completeness. It's certainly one argument in favour of some entirely separate process managing multiple esbuild invocations as completely separate processes, but that would come with its own set of challenges, I'm sure. Besides, if we implement the first approach of passing the slice details to the `esbuildOptionsFn`, users will still have a way to customise esbuild behaviour per slice.

Add mutex around manifest writes 👍🏼

If we want to keep a single manifest file and properly parallelise our esbuild invocations (per @KonnorRogers' suggestion), we'll need to put a mutex around the writes to that file. This feels like a good thing to do.

Many manifest files? 👎🏼

The alternative to the above would be to write separate manifests, one per slice. This would have the advantage of allowing each slice truly independent assets, and would reinforce our idea of slices as independent units. It would also make referencing assets feel a little "nicer" inside each slice, since no slice name prefix would be required: inside an admin slice, you'd refer to the logo as `logo.png` rather than `admin/logo.png`. The flip side is that it would make sharing assets across slices much harder.

How we'd implement this would be to register a distinct assets setup for each slice. I think we could do more here to make our Ruby-level assets code smarter about cross-slice asset access, but that all represents more work at a time when we don't actually have much community-provided feedback about how people would like to use our assets (in addition to further delaying our release).

Given this, I think that creating a single manifest file is probably our best approach on balance. We already have it (mostly) working, and it affords our users the most flexibility, at the cost of a little bit of noise via the asset path prefixes.

Implement multiple invocations for watch mode 👍🏼

As noted in the review comments above, we'll want the same per-slice esbuild invocations in watch mode too.
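To picture the "Many manifest files" idea above concretely, the compiled output might end up looking something like this (purely illustrative, not a settled layout):

```
public
├── assets
│   ├── logo-29F2002D.png        # app asset
│   └── admin
│       ├── logo-C1EF77E4.png    # admin slice asset
│       └── assets.json          # the admin slice's own manifest
└── assets.json                  # the app's manifest
```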
I'm shifting this PR back to draft. We're not likely to merge this one. I'm going to be taking a different approach, managing multiple esbuild processes from the Ruby CLI command. This will probably look a lot like the "Many manifest files" section I wrote above.
Problems

Given I have two files with the same name but in two different locations:

- `app/assets/images/logo.png`
- `slices/admin/assets/images/logo.png`

When I run `bundle exec hanami assets compile`, I get the following result:

```
⚡ tree public
public
├── assets
│   ├── # ...
│   ├── logo-29F2002D.png
│   └── logo-C1EF77E4.png
└── assets.json
```

Problem 1: The `logo.png` coming from the `admin` slice is outside the `public/assets/admin` directory.

Problem 2: One of the two `logo.png` entries is overwritten in `public/assets.json`.

Fix

If a static file comes from a slice, prefix its destination path with the slice name.

Solution 1: The logo file from the `admin` slice is found at `public/assets/admin/logo.png`.

Solution 2: The assets manifest reports both files.
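A minimal sketch of the destination-path mapping this fix implies (illustrative only; `destPath` is a made-up helper, and the real build also appends the content hash shown in the tree above):

```js
import path from "node:path";

// Map a source asset path to its destination path under public/assets,
// prefixing the slice name when the asset comes from a slice.
function destPath(sourcePath) {
  const sliceMatch = sourcePath.match(/^slices\/([^/]+)\/assets\//);
  const fileName = path.basename(sourcePath);
  return sliceMatch ? `${sliceMatch[1]}/${fileName}` : fileName;
}

destPath("app/assets/images/logo.png");          // => "logo.png"
destPath("slices/admin/assets/images/logo.png"); // => "admin/logo.png"
```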