Added solution for EKS #122
Conversation
app: micro-app-1
name: micro-app-1-deployment
spec:
  replicas: 5
I do not understand why we need to scale the instances of a frontend application (even a microfrontend) if it executes in the user's browser. Scaling itself is related to the microfrontend approach!
Is there any specific reason (unknown to me) for scaling the nginx instance with the microfrontend app?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Please check my question. Thanks!
The documentation is correct and very detailed.
In my opinion we're losing a bit of the focus of the problem statement.
This solution explains, in my eyes, how to deploy multiple apps with different URLs.
From the title my expectation would've been how to do microfrontends.
I'm missing, for example, the coordination. How are the Angular apps integrated? How does a navigation page look, for example: does every Angular app need to implement and maintain its own navigation page to the other pages?
Consider developing a large enterprise application having numerous modules. It makes sense to go with a microservices approach for the backend. It also makes sense to follow a similar approach for the frontend. There are so many elements in the UI that each element can be developed in isolation by a dedicated team. For example, an e-commerce application will have a listing page for the products, a products page to view a single product, a cart element to manage all the products selected by the user, a checkout page to finally purchase the products, etc. Each element has its own identity, that is, it serves an independent purpose within the entire application. You can see how the different elements can be developed independently. You can also partition the application vertically and have dedicated teams for each vertical slice, taking care of its frontend, backend, database and deployment, rather than simply having frontend and backend teams as usual.

There are different approaches to micro-frontends. They usually try to resolve the challenge of how to put together your independently developed elements and present them as a single application to the end user. One approach is to create reusable custom elements by following the https://developer.mozilla.org/en-US/docs/Web/Web_Components[Web Components] standard. In this approach you have different teams develop the custom elements they are responsible for and put them together in an encapsulating application which is deployed as a single unit, for example using a docker container. You can follow https://github.com/devonfw/devon4ts/wiki/guide-angular-elements[this guide] to develop custom elements in Angular. One challenge with this approach is that when the demand for one of your micro-frontend apps increases, you have to increase the number of containers serving your entire application. For example, considering the e-commerce application we mentioned earlier, the products page might get more hits than the checkout page. But even then you have to scale out the containers serving the entire application. This defeats the purpose of developing each element of your application independently.

Another approach, which resolves the above challenge, is to deploy each micro-frontend app in its own container and serve them along different routes of your domain using Ingress. Given the above scenario, when the demand for the products page gets higher, you can scale out the containers serving only the products page, rather than scaling out your entire application.
I would shorten this heavily.
IMHO we should write a short abstract about the solution at the beginning: what you are solving, and how.
This text is full of alternatives, and in the end I'm more confused about what is coming next.
I would only write 1-2 sentences about what a microfrontend is and link to a website for details (e.g. https://micro-frontends.org/).
Then describe the solution we propose.
At the end of this article you could create a chapter "Alternatives" where you describe the alternative solutions and their constraints.
Kubernetes is a popular container orchestration tool, which helps in managing numerous containers working in parallel. One of its most useful benefits is scaling out your containers when demand is high.

AWS EKS is a managed Kubernetes service provided by Amazon. With EKS, you don't need to install, operate, or maintain the Kubernetes control plane or the worker nodes. EKS provides high availability for both worker and master nodes. The control plane instances run across several Availability Zones. EKS detects and replaces unhealthy nodes automatically, and also provides scalability and security to applications.
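For illustration (this is not part of the PR), a cluster like the one described can be provisioned with eksctl from a config file. A minimal sketch — the cluster name, region, instance type and node count are assumptions, not values from this solution:

```
# Hypothetical eksctl cluster config; all names and sizes are assumptions.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: micro-frontends-cluster
  region: eu-central-1
nodeGroups:
  - name: default-workers
    instanceType: t3.medium
    desiredCapacity: 2
```

This would be applied with `eksctl create cluster -f cluster.yaml`.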
I would just say that we're using a managed Kubernetes in AWS (AWS EKS).
Put links to the product pages of Kubernetes and EKS. I would not describe them (most of the visitors will know them).
=== Dockerize your micro apps

You start by dockerizing your micro apps, creating a Dockerfile for each of them. Your Dockerfiles will look like the following if the apps are developed in Angular:

```
# Stage 1: build the Angular app
FROM node AS ui-build
WORKDIR /usr/src/app
COPY micro-app-1/ ./micro-app-1/
RUN cd micro-app-1 && npm install @angular/cli && npm install && npm run build

# Stage 2: serve the build output with nginx
FROM nginx:alpine

COPY ./.nginx/nginx.conf /etc/nginx/nginx.conf

# Remove the default nginx index page
RUN rm -rf /usr/share/nginx/html/*

# Copy the build output from stage 1
COPY --from=ui-build /usr/src/app/micro-app-1/dist/ /usr/share/nginx/html

EXPOSE 80

ENTRYPOINT ["nginx", "-g", "daemon off;"]
```

Now you build your docker images using the `docker build` command.
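The Dockerfile above copies a `.nginx/nginx.conf` into the image, but the PR does not show its contents. A minimal sketch of what such a file could look like — the fallback to `index.html` is needed so Angular's client-side routing works on page reloads (paths and details here are assumptions, not taken from the PR):

```
# Hypothetical minimal nginx.conf for serving the Angular build output.
events {}
http {
  include /etc/nginx/mime.types;
  server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;
    location / {
      # Fall back to index.html so Angular's client-side routes resolve
      try_files $uri $uri/ /index.html;
    }
  }
}
```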
I'm asking myself if we could maybe also use external sources for this complete docker part, as it's nothing special to deploy Angular with nginx in docker (e.g. https://dev.to/oneofthedevs/docker-angular-nginx-37e4).
```
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: micro-app-1
  name: micro-app-1-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: micro-app-1
  template:
    metadata:
      labels:
        app: micro-app-1
    spec:
      containers:
      - image: image-repository-path/micro-app-1:v1
        name: micro-app-1
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: micro-app-1-service
  labels:
    run: micro-app-1
spec:
  ports:
  - port: 80
  selector:
    app: micro-app-1
```

This specifies the deployment configuration for one of your many micro apps. The `Deployment` element specifies the docker image of your micro app to use and the number of replicas it should run, along with some other attributes. The `Service` element specifies how to reach your app. Right now it cannot be reached from outside of your cluster. To reach your app through a URL, you need to configure Ingress:

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: hostname.com
    http:
      paths:
      - path: /path-1
        backend:
          serviceName: micro-app-1-service
          servicePort: 80
      - path: /path-2
        backend:
          serviceName: micro-app-2-service
          servicePort: 80
```
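The per-app scaling argument from the introduction (scale only the products page when its traffic grows) could also be made automatic with a HorizontalPodAutoscaler. This is not part of the PR; a minimal sketch, assuming a metrics server is installed in the cluster and reusing the deployment name from above:

```
# Hypothetical HPA for one micro app; thresholds and limits are assumptions.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: micro-app-1-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: micro-app-1-deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

Each micro app would get its own HPA, so only the deployment under load is scaled out.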
Isn't this a usual kubernetes deployment? Can we reference this somehow?
I agree with the suggested changes. It makes sense to externalize the docker and kubernetes yaml files, and I can also write a bit more about how the angular apps can be integrated, and have an "Alternatives" chapter at the end like you suggested. But the user story was to define a use case to deploy angular apps in AWS EKS, and I came up with the use case of microfrontends. I think with the suggested changes in place, the focus will be more on microfrontends and less on EKS. If that is fine then I can go ahead with your suggestions. |
I don't really know the background of the story and therefore would like to have a second opinion on this @SchettlerKoehler . |
Yes, that is exactly how I understood this user story (finding the problem for a solution). |
This issue is now stale for a while. From my perspective (as stated above) this solution is searching for the problem of a given solution, which does not really bring a benefit. Therefore, I would like to close this PR without merging it. |
Hello everyone. As far as I understood, this solution should've been about hosting static resources (in the concrete task an Angular app) on Kubernetes (in the concrete ticket EKS). I would suggest splitting this up into two topics:
The first one should focus on how we can serve static resources. Is Kubernetes a possible and good solution for this? Where do we store the static resources — really in a container? What mechanisms of Kubernetes can we use? What alternatives do we have, and do we consider them better than Kubernetes (e.g. a single hosted nginx or cloud storage solutions)? The second one is a bigger topic that should be considered outside of this PR. I'm now out of office for the next 4 weeks and of course I'm happy if you start working on this without me, but I'll engage in the discussion as soon as I'm back.
Closing this PR so that the 2 split topics can be freshly looked into. |