Purpose: This standalone guide walks developers or testers through deploying, configuring, and using the Uni-Com application.
This project follows a Model-View-Controller (MVC) design pattern and utilizes various AWS services to enable a scalable, secure, and AI-integrated platform. The architecture is divided into three main components:
- Handles user registration and authentication.
- Uses a Cognito User Pool to manage user credentials and metadata.
- A serverless pre-signup Lambda trigger validates that the user's email belongs to the `@rit.edu` domain before allowing sign-up; invalid emails are rejected.
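The guide does not include the trigger's source, but a Cognito pre-signup trigger of this kind could be sketched as follows (the handler name and error message are illustrative assumptions):

```python
ALLOWED_DOMAIN = "rit.edu"

def lambda_handler(event, context):
    """Cognito pre-signup trigger: reject emails outside the allowed domain.

    Cognito passes the pending user's attributes in
    event["request"]["userAttributes"]; raising an exception from a
    pre-signup trigger causes Cognito to reject the sign-up.
    """
    email = event["request"]["userAttributes"].get("email", "")
    domain = email.rsplit("@", 1)[-1].lower()
    if domain != ALLOWED_DOMAIN:
        raise Exception(f"Sign-up is restricted to @{ALLOWED_DOMAIN} addresses")
    # Returning the event unchanged lets the sign-up proceed.
    return event
```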
- Acts as the API layer between frontend and backend.
- Handles token-based authentication and routes requests for creating, updating, and deleting posts.
- Dockerized Python 3.12 backend application.
- Requirements installed and CA certificate bundle added for secure DocumentDB connections.
- Exposes the application on port `8080`.
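The container setup described above could look roughly like the following Dockerfile (the base image tag, file paths, and start command are assumptions for illustration, not the project's actual Dockerfile; the CA bundle URL is AWS's published global bundle for DocumentDB/RDS TLS connections):

```dockerfile
# Illustrative sketch only -- not the project's actual Dockerfile.
FROM python:3.12-slim

WORKDIR /app

# Install Python dependencies.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the CA certificate bundle needed for TLS connections to DocumentDB.
ADD https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem /app/global-bundle.pem

COPY . .

EXPOSE 8080
CMD ["python", "app.py"]
```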
- Amazon CloudFront distributes static content (HTML, JS, CSS) from S3 with low latency.
- Users access images via temporary pre-signed S3 URLs.
- Scans user-submitted post descriptions for offensive or inappropriate text.
- Approved content is stored in the database.
- Analyzes uploaded images to detect unsafe, explicit, or violent content.
- Images passing moderation are allowed in posts.
- Converts text descriptions into semantic vector embeddings.
- Enables context-aware semantic search based on vector similarity.
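The similarity ranking behind such a search can be illustrated with plain cosine similarity (the actual embedding model and DocumentDB vector index are not shown here; the function names and two-dimensional vectors below are made-up examples):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_vec, posts, top_k=3):
    """Rank posts by similarity of their stored embedding to the query embedding."""
    scored = [(cosine_similarity(query_vec, p["embedding"]), p["id"]) for p in posts]
    scored.sort(reverse=True)
    return [post_id for _, post_id in scored[:top_k]]
```

In production the ranking is done by the database's vector index rather than in application code, but the ordering criterion is the same.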
- Isolated cloud network with the following setup (configurable):
  - CIDR Block: `10.0.0.0/16`
  - 2 Private Subnets: `10.0.4.0/24`, `10.0.5.0/24`
  - 2 Public Subnets: `10.0.1.0/24`, `10.0.2.0/24`
  - Availability Zones: `us-east-2a`, `us-east-2b`
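The subnet layout above can be sanity-checked with Python's standard `ipaddress` module, confirming that each subnet falls inside the VPC CIDR block and that no two subnets overlap (the subnet labels are illustrative):

```python
import ipaddress
from itertools import combinations

vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = {
    "public-a":  ipaddress.ip_network("10.0.1.0/24"),
    "public-b":  ipaddress.ip_network("10.0.2.0/24"),
    "private-a": ipaddress.ip_network("10.0.4.0/24"),
    "private-b": ipaddress.ip_network("10.0.5.0/24"),
}

# Every subnet must lie within the VPC CIDR block.
assert all(s.subnet_of(vpc) for s in subnets.values())

# No two subnets may overlap.
assert not any(a.overlaps(b) for a, b in combinations(subnets.values(), 2))
```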
- DocumentDB is hosted in a private subnet for security.
- Located in public subnets to allow private subnets to access the internet securely.
- Balances internal traffic between API Gateway and ECS tasks.
- VPC Link enables traffic ingress from API Gateway to the internal ALB.
- Backend is deployed using AWS Fargate to eliminate the need for server provisioning.
- Task Definition:
- Specifies container details like image URI, resources, and environment variables.
- ECS Service:
- Manages task lifecycle and autoscaling based on CPU and memory utilization.
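A Fargate task definition of the sort described above takes roughly this shape (the family name, CPU/memory values, image URI, and environment variable are placeholders, not the project's actual definition):

```json
{
  "family": "uni-com-backend",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "containerDefinitions": [
    {
      "name": "uni-com-api",
      "image": "<account-id>.dkr.ecr.us-east-2.amazonaws.com/uni-com:latest",
      "portMappings": [{ "containerPort": 8080 }],
      "environment": [
        { "name": "DOCDB_HOST", "value": "<documentdb-endpoint>" }
      ]
    }
  ]
}
```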
- Used for storing user information and post content.
- Selected over DynamoDB to support vector indexing for semantic search.
Before deploying the application via GitHub Actions, users must complete an initial setup that includes configuring GitHub secrets and variables. This setup ensures secure access to AWS resources and proper configuration of the deployment environment.
To ensure a clean deployment and teardown of the project infrastructure, it is essential to store Terraform state files in an S3 bucket during the deployment (terraform apply) stage. Without storing the state files, any infrastructure provisioned during deployment cannot be properly destroyed using the GitHub Actions Terraform destroy workflow.
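A typical S3 remote-state configuration looks like the following (the bucket name, key, and region here are placeholders; the project's workflow may instead inject these values at `terraform init` time via `-backend-config`):

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"  # your state bucket's unique name
    key    = "uni-com/terraform.tfstate"
    region = "us-east-2"
  }
}
```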
To create an empty S3 bucket for this purpose, follow these steps:
- Log in to the AWS Management Console.
- Navigate to the S3 service dashboard.
- Click the Create bucket button.
- Enter a unique name for the bucket.
- Click Create bucket to finalize creation.
- Record the bucket name, as it will be required when setting up GitHub Variables.
For successful deployment, the following GitHub secrets must be created:
| Secret Name | Description |
|---|---|
| `AWS_ACCESS_KEY` | The access key for your AWS IAM user with appropriate deployment permissions |
| `AWS_SECRET_KEY` | The secret access key paired with your AWS access key ID |
| `EMBEDDINGS_API_KEY` | The API key used for embedding service authentication (provided to the user) |
To create each secret, follow the steps listed below:
- Navigate to the project's GitHub repository.
- Click Settings → Secrets and variables → Actions.
- Select New repository secret.
- Enter the secret name and value.
- Click Add secret.
NOTE: The embeddings API key will be provided to the user.
For successful deployment, the following GitHub variables must be created:
| Variable Name | Description |
|---|---|
| `AWS_REGION` | The AWS region where your resources will be deployed (e.g., `us-east-1`) |
| `TF_STATE_BUCKET` | The S3 bucket name for storing Terraform state files. This bucket must already exist in your AWS account |
To create each variable, follow the steps listed below:
- Navigate to the project's GitHub repository.
- Click Settings → Secrets and variables → Actions.
- Select the Variables tab.
- Click New repository variable.
- Enter the variable name and value.
- Click Add variable.
NOTE: For `TF_STATE_BUCKET`, make sure to use the same name given to the S3 bucket created in step 1 (AWS S3 bucket for state files).
This project repository includes a single reusable workflow (Deploy Uni-Com) which can be triggered manually through GitHub Actions. The workflow description is provided in .github/workflows/main.yml.
Deploy Uni-Com contains two jobs:
- Terraform_apply – provisions all AWS resources, builds the frontend and deploys the application.
- Terraform_destroy – tears down the provisioned infrastructure.
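A minimal shape for such a manually triggered workflow is sketched below; the real `.github/workflows/main.yml` will differ in its inputs and steps, and this fragment only illustrates a `workflow_dispatch` choice between the two jobs:

```yaml
name: Deploy Uni-Com
on:
  workflow_dispatch:
    inputs:
      action:
        description: "Terraform action to run"
        type: choice
        options: [Terraform_apply, Terraform_destroy]

jobs:
  terraform_apply:
    if: ${{ inputs.action == 'Terraform_apply' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... terraform init/apply, frontend build and upload, etc.

  terraform_destroy:
    if: ${{ inputs.action == 'Terraform_destroy' }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... terraform init/destroy
```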
Before running the Actions workflow, ensure all GitHub secrets and variables are properly configured. To deploy the application, follow the steps listed below:
- On the GitHub repository page, navigate to Actions → Deploy Uni-Com → Run workflow.
- Choose Terraform_apply from the drop-down.
- Click Run workflow.
- Navigate to the All workflows tab, where a new workflow run will appear, and click on it.
- Click Terraform Apply.
- Upon a successful apply, the Cognito login URL, logout URL, CloudFront distribution domain, and API Gateway URL will be displayed.
To destroy all provisioned infrastructure, follow the steps listed below:
- Go to Actions → Deploy Uni-Com → Run workflow again.
- Select Terraform_destroy from the dropdown.
- Click Run workflow.
- Navigate to the All workflows tab, where a new workflow run will appear.
- Upon successful completion, the provisioned AWS infrastructure will be destroyed.
We've created a short demonstration video on how to use the application which can be found here.


