The Car-migo application is awesome! It will revolutionize the way you get around. It is eco-friendly and kind to your wallet.
- Back-end (server)
- Spring Boot 3.3.4
- Java 17
- Maven 3.9.9
- Redis
- Docker
- GitHub Actions
- JWT
- BCrypt
- Swagger
- Actuator
- Checkstyle
- Front-end (ui)
- React 18.3.1
- TypeScript
- JavaScript
- Vite
- Node.js 20
- Material UI
- TanStack
- Axios
- Yup
- Formik
- Zustand
- Vitest
- ESLint
- Jest
- MSW
- Database
- Hibernate
- PostgreSQL
- H2 Database
- pgAdmin
- Flyway
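If you want to build the project locally rather than through Docker, a quick way to compare your toolchain against the versions listed above (the loop below is just a convenience sketch):

```shell
# Print the first line of each tool's version output; report missing tools
# instead of aborting.
for tool in "java -version" "node -v" "docker --version"; do
  if command -v "${tool%% *}" >/dev/null 2>&1; then
    printf '%s: %s\n' "${tool%% *}" "$($tool 2>&1 | head -n 1)"
  else
    printf '%s: not installed\n' "${tool%% *}"
  fi
done
```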
It is a match-making system for drivers and passengers. You can either advertise journeys or query them.
Drivers can create journeys while passengers can book them.
Once inside the car share, you can enjoy the ride and make new friends. Come on board and experience this new lifestyle.
There are six types of users:
- Staged: the user has created an account but has not yet confirmed their email.
- Active: the user has access to all non-admin endpoints.
- Suspended: the user can only access their profile and update some information.
- Locked out: the user cannot access any endpoints because of 5 failed attempts to log into their account.
- Admin: the user can access admin and non-admin endpoints, as well as the actuator endpoints.
- Dev: the user can access their profile, update some information, and access the actuator endpoints.
It is great for the environment, since less CO2 will be released into the atmosphere. Moreover, with less traffic in our cities, emergency vehicles can respond to emergencies more rapidly; there is less noise pollution and there are fewer road accidents, and you can make new friends to top it off. The application is not about profit but about car sharing, so passengers pay the driver a fair amount towards fuel costs.
- Docker
From a Linux-based terminal, navigate to the root of this project (assuming you have cloned it) and run:
./run-app.sh
This script builds a jar file via the Maven package phase, using the embedded Maven Wrapper.
It also builds the Docker images and spins up the necessary containers:
- `car-migo_ui`: the front-end implementation, the website application.
- `car-migo_server`: the back-end implementation, which holds the business logic.
- `car-migo_postgres`: the application database.
- `car-migo_redis`: the application database cache, whose time to live (TTL) is 3 hours.
- `car-migo_flyway`: the database version control.
- `car-migo_pgadmin`: the database client. More details below.
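One way to confirm the containers are all up is to filter `docker ps` by their shared name prefix:

```shell
# Show name, status and ports of every container whose name contains "car-migo".
docker ps --filter "name=car-migo" --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
```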
Visit http://localhost:8086/v1/health to ensure the server is running as expected.
There is a heartbeat to verify whether other consumed services are up: http://localhost:8086/v1/heartbeat.
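Both checks can also be run from a terminal, assuming the containers are up:

```shell
# -i prints the response status and headers along with the body.
curl -i http://localhost:8086/v1/health
curl -i http://localhost:8086/v1/heartbeat
```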
Furthermore, it will automatically open http://localhost:8087/home in your default browser. 🎉
To stop and remove the containers, run:
docker compose down
This is the chosen database client. To interact with it, go to http://localhost:8000/ from your browser to open the pgAdmin 4 UI and enter:
- Email = admin@car-migo.com
- Password = password

Once inside pgAdmin 4, click on Add New Server. From the dialog box, enter:
- From the General tab, give it any name, maybe car-migo.
- From the Connection tab:
  - Host name/address = host.docker.internal
  - Port = 5432
  - Maintenance database = postgres
  - Username = admin
  - Password = password
- Leave the rest as it is and Save.
Then, from the left panel, navigate to Servers > car-migo > Databases > carmigo > Schemas > public > Tables.
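If you prefer a terminal over pgAdmin, a psql shell inside the Postgres container works too. The container name, username and database below are taken from the sections above, so adjust them if your setup differs:

```shell
# \dt lists the tables in the current schema.
docker exec -it car-migo_postgres psql -U admin -d carmigo -c '\dt'
```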
This is the Entity Relationship Diagram (ERD) for the application.
This script restarts the containers. It also gives you the option to recreate the PostgreSQL and pgAdmin volumes.
Send a POST request to http://localhost:8086/v1/login with the following JSON body:
{
"email": "jake.sully@example.com",
"password": "Pass1234!"
}
Here is the cURL command:
curl -iL 'http://localhost:8086/v1/login' \
-H 'Content-Type: application/json' \
--data-raw '{
"email": "jake.sully@example.com",
"password": "Pass1234!"
}'
By the way, Jake Sully is our ADMIN. You can find more users to play with in migrations/local-data-seed/V1000.1___local_data_seed.sql.
Additionally, follow the link to the Postman Collection, which contains all the application APIs plus some extra admin endpoints. There are also these Postman Environments.
The response to the request above will contain a JSON Web Token (JWT), which you should pass with every subsequent HTTP request as a Bearer token. For example:
curl -L 'http://localhost:8086/v1/users/profile' \
-H 'Authorization: Bearer {paste-token-here}'
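If you have jq installed, you can capture the token and reuse it in one step. Note that the response field name below (accessToken) is a guess, not something confirmed by this project; inspect the actual login response to find where the token lives.

```shell
# Log in and reuse the returned token in one go.
# NOTE: ".accessToken" is a hypothetical field name -- check the real
# login response to see where the JWT is returned.
TOKEN=$(curl -sL 'http://localhost:8086/v1/login' \
  -H 'Content-Type: application/json' \
  --data-raw '{"email": "jake.sully@example.com", "password": "Pass1234!"}' \
  | jq -r '.accessToken')

curl -L 'http://localhost:8086/v1/users/profile' \
  -H "Authorization: Bearer $TOKEN"
```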
The OpenAPI 3.0 Specification is implemented. Here are the endpoints:
- http://localhost:8086/swagger-ui/index.html
- http://localhost:8086/v3/api-docs
- http://localhost:8086/v3/api-docs.yaml (automatically downloads its yaml file)
- http://localhost:8086/v3/api-docs/swagger-config
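To keep a local copy of the spec while the server is running, for example:

```shell
# Download the OpenAPI spec to a local file.
curl -o car-migo-api.yaml http://localhost:8086/v3/api-docs.yaml
```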
Open endpoints, i.e., no credentials needed:
- http://localhost:8086/v1/users/create
- http://localhost:8086/v1/login
- http://localhost:8086/v1/journeys/calculateDistance
- http://localhost:8086/v1/journeys/search
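For example, the search endpoint can be called without any credentials; its required query parameters (if any) are documented in the Swagger UI:

```shell
# Open endpoints accept requests without an Authorization header.
curl -i 'http://localhost:8086/v1/journeys/search'
```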
GitHub Actions is triggered every time code is pushed or a pull request is opened against the main
branch.
The workflow builds the application and runs tests using the Apache Maven Wrapper and Vitest. It also scans the code and produces a security report using CodeQL Analysis.
GitHub Actions is also scheduled to run once a week: every Monday at 7am UTC.
Additionally, two Docker images, car-migo_ui and car-migo_server, are automatically built and pushed to the Docker Hub repository when code is merged into the main branch.
Run these to download them:
docker pull techtinkerer/car-migo_server
docker pull techtinkerer/car-migo_ui
And in case you want to run these two (without the database):
docker run -p 8086:8086 -d techtinkerer/car-migo_server
docker run -p 8087:8087 -d techtinkerer/car-migo_ui
This is how I would deploy the application to Amazon Web Services (AWS):
Users interact with the application via a web interface (UI). Their requests are routed to the AWS infrastructure.
Route 53: This is AWS’s DNS (Domain Name System) service that routes user requests to the appropriate resources. In this case, it directs the traffic to CloudFront through WAF.
Web Application Firewall: AWS WAF is a security feature that helps protect the application from common web exploits, such as SQL injection and Cross-Site Scripting (XSS). It filters out malicious traffic before it reaches CloudFront and other downstream services.
AWS CloudFront: This is a Content Delivery Network (CDN) that caches and delivers static and dynamic content to users from nearby edge locations, improving speed and reducing the latency of delivering assets stored in the S3 bucket.
Simple Storage Service: An AWS S3 bucket is used to store static assets such as images, media, CSS, JavaScript and other static files which are served to users through CloudFront.
Application Load Balancer: The ALB distributes incoming traffic across multiple instances of the application running in Elastic Container Service (ECS). It ensures that no single service is overwhelmed with traffic and improves the scalability and availability of the application.
Elastic Container Service (ECS) Fargate (UI): This is where the user interface (UI) part of the application is running. It hosts the frontend of the application in containers, ensuring that the UI is served efficiently to users.
Elastic Container Service (ECS) Fargate (Server): This is the backend or server side of the application, where business logic, API calls, and other server-side processing occur.
Elastic Container Registry (ECR): This is where the application’s container images (both UI and Server) are stored. The ECS services pull these container images from ECR to deploy the latest versions of the application.
Aurora Database & ElastiCache Redis: AWS Aurora is a managed relational database, while ElastiCache is the caching service. Together they allow the application to manage data efficiently while maintaining high performance and availability.
The application’s backend (server) interacts with the Aurora database to perform read and write operations related to user data and transactions and with ElastiCache to cache frequently accessed data, reducing the load on Aurora and providing faster response times for users.
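As a rough sketch of one deployment step, the published Docker Hub images could be re-tagged and pushed to ECR so the ECS Fargate services can pull them. The account ID, region, and repository names below are placeholders, and the commands assume the AWS CLI v2 is installed and configured:

```shell
# Placeholders -- substitute your own AWS account ID and region.
AWS_ACCOUNT=123456789012
AWS_REGION=eu-west-1

# Authenticate Docker against the ECR registry.
aws ecr get-login-password --region "$AWS_REGION" \
  | docker login --username AWS --password-stdin "$AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com"

# Re-tag the Docker Hub image and push it to ECR (repeat for car-migo_ui).
docker tag techtinkerer/car-migo_server "$AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com/car-migo_server:latest"
docker push "$AWS_ACCOUNT.dkr.ecr.$AWS_REGION.amazonaws.com/car-migo_server:latest"
```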