[core] Move the internal packages from docs/packages #35305
Conversation
The new structure makes sense to me 👍 I would leave it to the infra team to do the final review :)
@oliviertassinari @mbrookes As the location of the feedback package will change, are there any changes required to the deployment of the lambda on AWS? Or is it done manually?
@michaldudak It's deployed from the local machine, so the package location doesn't matter.
```js
  return false;
};

module.exports = {
```
I notice this is CJS so it can be used in next.config.js, but it is also imported in application code as ESM. We could convert to next.config.mjs and author this module in ESM.
I haven't yet converted the scripts that use it to ESM. It is indeed imported in a few places, but only in TypeScript files that are transpiled to CJS. After I convert the scripts to ESM, I can update this config file as well.
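For reference, a minimal sketch of what that conversion could look like, assuming the file keeps roughly the shape shown in the excerpt above — the file names and exported identifiers here are placeholders, not the actual module:

```js
// Hypothetical sketch only — file and export names are placeholders.

// config.mjs: the same module authored as ESM instead of CJS.
export function isExcluded() {
  return false;
}

export const docsConfig = {
  // ...same options the CJS module currently exports
};

// next.config.mjs: Next.js also accepts an ESM config file, so the shared
// module can be consumed with a regular import instead of require():
//
//   import { docsConfig } from './config.mjs';
//
//   export default {
//     trailingSlash: docsConfig.trailingSlash ?? false, // reuse shared values as before
//   };
```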
I'm late to the party, but it makes sense. Keeping things flat, with an appropriate prefix convention, sounds easier to reason about, e.g. we plan to move https://github.com/mui/mui-x/tree/ac685cda3ed6ba19557246b4d580b60e34bc32dc/packages/grid one level up. These two packages were introduced in #22885 and #26774, nothing to report there. One thing I wonder about is the private vs. public aspect.
I suspect that it could be great to:
|
This is a good point. Having internal packages separate could help readers know which context they are in.
I was actually thinking about the very opposite: removing as many dependencies from the root as possible (even to the point of having all dependencies within yarn workspaces). This approach is recommended by Yarn (it warns when you try to install dependencies at the workspace root). It also promotes building more modular packages and would help if we ever wanted to move packages between repos (see https://yarnpkg.com/features/workspaces#what-does-it-mean-to-be-a-workspace).
My personal preference is also the opposite. I very much prefer smaller packages with a single responsibility and well-defined exports.
@michaldudak Do you have a specific problem in mind to solve? It feels a bit like the monolith vs. micro-services tradeoff. What I learned trying this move to micro-services in the past (at a small scale, ~10 services) is that the default should be a monolith, and nothing should be split until there is a clear problem. By "clear problem", I mean, for instance, a hardware constraint on how much memory a dependency takes to load. The rest, like coding style, is subjective and can already be solved with a clear folder and file organization.
This raises the question of what problem we were trying to solve when we introduced yarn workspaces. It seems to have started with fca1061, but I can't remember. The problems yarn workspaces seem to solve today:
It's probably a rare event. I think that it's better not to anticipate it.
Reduce the cognitive load by having small, self-contained packages.
From my experience, the default should be a modular monolith, which is a single deployment unit built with logically separate components. That's because when you actually need to split, breaking up a tangled monolith is usually a hard and expensive exercise. We see this problem today when other teams try to use the tools that are tied to the Core repo.
Having small encapsulated packages encourages having a clear organization of folders and makes it harder to import arbitrary stuff from files across the whole repo, as was the case with our docs generation scripts.
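As a rough illustration of the difference meant here (package, file, and export names are hypothetical, not the actual layout):

```js
// Before: a docs generation script reaching into an arbitrary file elsewhere in the repo.
// The relative path is illustrative.
const { getHeaders } = require('../../docs/packages/markdown/parseMarkdown');

// After: consuming a well-defined entry point of a small, encapsulated package.
// The package name and export are illustrative.
const { getHeaders: getHeadersFromPackage } = require('@mui/internal-markdown');
```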
I don't know the reasons you had back then, but it seems that workspaces (or similar concepts in other package managers) have become a de facto standard for working with monorepos.
For me, it's usually to link packages together locally, and to have a basic task runner that is aware of dependency topology (though it's sometimes a bit limited in that regard).
@michaldudak I feel that a focus on clearer folder structures and API signatures would already be a great opportunity. I feel that sharing code between repositories breaks right now because of this: the abstraction levels are wrong, either too high or too low. But OK, I can agree with the idea that new package.json files are valuable once the codebase is so large that a single person can't own the update of its dependencies. I suspect that we have room to grow the engineering team significantly before we reach this type of threshold (maybe 2x, because we have no one doing this full-time yet).
My fear is that each new package.json creates overhead and makes it harder to make impactful changes. Is there an issue with importing arbitrary files across the repository? For files that are meant to be private, I agree (maybe they need a naming convention).
After integrating the docs in Toolpad, these were the biggest pain points I found for sharing code between repositories:
For me, these are the reasons why sharing code between the repos is hard: the connection is brittle and requires careful manual curation. I think that's why splitting into more packages doesn't scale well at the moment. Each new package needs to be introduced at multiple levels in multiple repos in this setup. If we solve this, then I feel like it doesn't matter much how many or how few packages we split our logic into.

One potentially bold solution could be to automatically publish all internal workspace packages to the GitHub registry on every commit in the core repo. We'd use a version tag that's based on the git SHA1. Then, in dependent repos, you could install real packages with build output and dependencies based on any commit in the core repo. The GitHub registry would only be for internal use; we'd still do manual publishing to the npm registry for the weekly release.
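A rough sketch of what that per-commit publish could look like — the package list, version scheme, and registry/auth setup here are assumptions, not a worked-out design:

```js
// publish-canary.js — hypothetical sketch: publish every internal workspace package
// to a private registry with a version derived from the current commit SHA.
// Assumes it runs from the repo root in CI after a successful build.
const { execSync } = require('child_process');
const fs = require('fs');
const path = require('path');

const sha = execSync('git rev-parse --short HEAD').toString().trim();
const version = `0.0.0-canary.${sha}`; // illustrative version scheme

// The package list is an assumption; it could also be read from the
// "workspaces" field of the root package.json.
const internalPackages = ['packages/feedback', 'packages/markdown'];

for (const dir of internalPackages) {
  const pkgPath = path.join(dir, 'package.json');
  const pkg = JSON.parse(fs.readFileSync(pkgPath, 'utf8'));
  pkg.version = version;
  fs.writeFileSync(pkgPath, JSON.stringify(pkg, null, 2));

  // --registry points at the GitHub Packages registry; auth is assumed to be
  // configured via .npmrc in CI.
  execSync('npm publish --registry=https://npm.pkg.github.com', {
    cwd: dir,
    stdio: 'inherit',
  });
}
```

A dependent repo could then depend on a package at `0.0.0-canary.<sha>` (with the GitHub registry configured in its .npmrc) instead of pointing at a git URL of the core repo.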
Agree, it's not great. Maybe having a script that merges all the dependencies from @mui/monorepo into the repository that depends on it would already be better. Another idea: we could create a "shadow" of the packages in the monorepo (assuming we have a few), meaning that we would create empty folders to represent each of the packages of the monorepo and have their dependencies isolated in those folders.
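To make the first idea concrete, a minimal sketch of such a merge script, assuming @mui/monorepo is already resolvable from node_modules; the merge policy (into devDependencies, last write wins) is an assumption:

```js
// merge-monorepo-deps.js — hypothetical sketch: copy the dependencies declared by
// @mui/monorepo into the consuming repository's devDependencies so the two stay
// in sync. Conflict handling is deliberately naive.
const fs = require('fs');
const path = require('path');

const monorepoPkg = JSON.parse(
  fs.readFileSync(path.resolve('node_modules/@mui/monorepo/package.json'), 'utf8'),
);
const localPkgPath = path.resolve('package.json');
const localPkg = JSON.parse(fs.readFileSync(localPkgPath, 'utf8'));

localPkg.devDependencies = {
  ...localPkg.devDependencies,
  // Last write wins; a real script would need a policy for version conflicts.
  ...monorepoPkg.dependencies,
  ...monorepoPkg.devDependencies,
};

fs.writeFileSync(localPkgPath, JSON.stringify(localPkg, null, 2) + '\n');
console.log('Merged @mui/monorepo dependencies into package.json');
```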
I think that we can solve this by abstracting these aliases into a module, so updates are more transparent for other repositories. Each update like mui/mui-x#7849 can be a source of learning for us to adapt the APIs and fix what's not correctly abstracted.
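As a sketch of that kind of abstraction (module path, export name, and alias keys are all hypothetical), the monorepo could export the alias map itself, so dependent repos pick up alias changes automatically when they bump the monorepo instead of copying them by hand:

```js
// webpackAliases.js — hypothetical module exported by the monorepo.
// Dependent repos call it instead of hard-coding each alias themselves.
const path = require('path');

function getMonorepoAliases(monorepoRoot) {
  return {
    // Alias keys are illustrative; the real list would mirror whatever the
    // docs infra currently expects.
    '@mui/docs': path.join(monorepoRoot, 'packages/mui-docs/src'),
    '@mui/markdown': path.join(monorepoRoot, 'packages/markdown'),
  };
}

module.exports = { getMonorepoAliases };

// In a dependent repo's next.config.js / webpack config (also illustrative):
//
//   const { getMonorepoAliases } = require('@mui/monorepo/webpackAliases');
//   config.resolve.alias = {
//     ...config.resolve.alias,
//     ...getMonorepoAliases(path.dirname(require.resolve('@mui/monorepo/package.json'))),
//   };
```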
The automation makes sense; it could solve the npm dependency issues where things go out of sync. For building in the monorepo vs. building in the depending repo, the tradeoff isn't clear to me. I think that the closer we consume the raw source, the better, because it will feel more like we are in the same repository, e.g. it helps enforce that the build pipelines are the same. So we could wait for the engineering team to become too large before we need to isolate more.

Overall, I think that the pain with the dependencies (1.) is low compared to the friction with the APIs we expose being wrong. I'm saying this based on the last few updates of the monorepo: mui/mui-x#7849, mui/mui-x#7676, mui/mui-x#7307, etc.; the main "friction for changes" seems to be about the breaking changes. Maybe each time we introduce a breaking change to the infra in the monorepo and merge it, the same person should immediately do the same with the other repos to sync with the breaking changes. It would reinforce the culture that there is one codebase but multiple repositories.
This is a follow-up to #35244 (comment)
Yarn docs explicitly state that nested workspaces are not supported. This PR moves all the packages from /docs/packages to the /packages directory. It also moves the language configuration to docs/config to make it more accessible.