How I built an Immich server backed by S3
Immich is an open source project for managing photo albums. Recently, my son's cloud photo account was filling up, and the provider was pushing a paid higher storage tier. Rather than pay for their storage, we downloaded all his photos and uploaded them to an S3 bucket. Now I wanted to give him better access to the pictures.
NOTE: I found it too expensive, in S3 requests and CloudTrail data events, to use S3 as the backend. This project works, but the costs make it infeasible. I am now using a larger EBS volume and syncing the S3 bucket data to the local disk hourly.
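The hourly sync is straightforward with the AWS CLI. A minimal sketch (the bucket name and local path are placeholders, not my actual script):

```shell
# Hypothetical cron entry (crontab -e): sync the bucket to local disk hourly.
# 0 * * * * aws s3 sync s3://bucket-1 /home/ec2-user/photos >> /var/log/s3-sync.log 2>&1

# The underlying command: copy new and changed objects from the bucket
# down to the local directory.
aws s3 sync s3://bucket-1 /home/ec2-user/photos
```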
S3 is the cheapest and safest storage available on AWS. If I ran the Immich server on EC2 and paid for block storage, I would need around 200GB minimum, and that would cost $16 per month in gp3 storage alone, without snapshots (in the Ohio Region). The same storage in the S3 Standard tier costs $4.60. S3 is also multi-AZ with many 9's of redundancy. And I can automate the simple addition of files (S3 objects) via other scripts.
I started with a t4g.micro instance for the install process. For installation, this worked fine, but the Docker containers wouldn't run. To get the system up and running, I used a t4g.medium. Once everything was set up, I downgraded to a t4g.small. The small instance works, but I see that it pauses at times; this could be network throttling or just the time it takes to communicate with S3 for the photo storage.
Note: Throughout these examples, I refer to bucket-1 and bucket-2. These are also used in the mount-s3 file for automatic mounting during boot. They are fake bucket names; replace them with your own.
AWS released mountpoint-s3. I'm using this to mount the S3 buckets I need.
At first, I was getting an error from `mkdir` during `docker compose`. With some googling, I found on Stack Overflow that I needed to allow other users access to the directories.
First, I needed to uncomment `user_allow_other` in `/etc/fuse.conf`. Then, when using the `mount-s3` command (the command created from mountpoint-s3), I needed to include `--allow-root`, since Docker is running as root.
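Putting the two fixes together, the mount step looks roughly like this (the bucket name and mount path are placeholders):

```shell
# One-time: allow FUSE mounts to be shared with other users,
# by uncommenting user_allow_other in /etc/fuse.conf.
sudo sed -i 's/^#user_allow_other/user_allow_other/' /etc/fuse.conf

# Mount the bucket; --allow-root lets the root user (and therefore Docker)
# access a mount point owned by ec2-user.
mount-s3 bucket-1 /home/ec2-user/mnt/bucket-1 --allow-root
```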
**Experimental**: I'm still testing and verifying this works as expected.
I even created a startup script, which is still a work in progress.
- You will need to update `bucket-1` and/or `bucket-2`.
- If you are not using Amazon Linux, you may need to update the USER that the process runs as.
- You may want a different directory for your mount points.
- In order for this to start before Docker (needed if Immich on Docker will be the R/W store), I needed to update `/lib/systemd/system/docker.service` to add `mount-s3` to the end of the `After` and `Requires` lines.
- I placed the `mount-s3` file in `/etc/init.d/` and made sure it's executable: `chmod +x /etc/init.d/mount-s3`
- And enabled the script: `sudo systemctl enable mount-s3`
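As a rough sketch, the `docker.service` edit and the script install from the steps above look like this (the `sed` expressions assume the stock unit file layout, and the script is assumed to be in the current directory):

```shell
# Append mount-s3 to the After= and Requires= lines so Docker waits for the mounts.
sudo sed -i -E 's/^(After=.*)/\1 mount-s3/; s/^(Requires=.*)/\1 mount-s3/' \
  /lib/systemd/system/docker.service
sudo systemctl daemon-reload

# Install the mount-s3 script, make it executable, and enable it at boot.
sudo cp mount-s3 /etc/init.d/mount-s3
sudo chmod +x /etc/init.d/mount-s3
sudo systemctl enable mount-s3
```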
TODO: `mount-s3` is still starting after the Docker containers. Therefore, on reboot, I need to stop the Docker containers, run `mount-s3`, and then restart the Docker containers. I need to dive deeper into the systemd startup order.
**Update**: The latest version of my `mount-s3` script includes a stop of the Docker containers before the S3 mounts and then a start of the Docker containers after the S3 mounts. This isn't pretty, but it's working.
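The core of that stop/mount/start logic is roughly the following (the compose file path is a placeholder, and I'm sketching the commands rather than reproducing the actual script):

```shell
#!/bin/sh
# Sketch: stop the containers, mount the buckets, then start the containers again.

COMPOSE=/home/ec2-user/immich/docker-compose.yml  # placeholder path

docker compose -f "$COMPOSE" down

# bucket-1 is the read/write upload store; bucket-2 is the read-only library.
mount-s3 bucket-1 /home/ec2-user/mnt/bucket-1 --allow-root
mount-s3 bucket-2 /home/ec2-user/mnt/bucket-2 --allow-root --read-only

docker compose -f "$COMPOSE" up -d
```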
To configure Immich to use the S3 mount points, I needed to:

- Adjust the `.env` file:

  `UPLOAD_LOCATION=/home/ec2-user/mnt/bucket-1`

  This tells Immich to store all uploads in that directory, which is mounted by `mount-s3` to `bucket-1`.
- I also had another bucket that I wanted to mount read-only. For that, I needed to adjust the `docker-compose.yml` file:

  `- /home/ec2-user/mnt/bucket-2:/mnt/bucket-2:ro`

  In the volumes section, this line tells Docker to mount the `bucket-2` mount point from `mount-s3` as `/mnt/bucket-2` and to make it read-only.
In the Immich External Library Setup, I added /mnt/bucket-2 as a scan path to find new photos.
In order to support the library scanning more efficiently, I upgraded the instance from t4g.small to c6g.xlarge for a few hours. While the instance costs more, once the scanning is complete, I'll shut down and resize back down to the t4g.small to support the app and website usage.
I use Tailscale for my personal networking needs. This means there are no Security Groups needed for my devices to gain access to the Immich server. I even renamed the device on the Tailscale network to Immich, so I can connect to the server by a simple name.
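For reference, joining the tailnet and naming the node can be done in one step (the hostname here is my choice; renaming in the admin console works too):

```shell
# Join the tailnet; with MagicDNS enabled, the device is then
# reachable from my other devices simply as "immich".
sudo tailscale up --hostname=immich
```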
- I'm considering running the EC2 instance as a spot instance so I can pay less
- I've started up my t4g.small via a spot request from an AMI I created from one of the previous instances. I'm using a persistent spot request, so if the instance is reclaimed by AWS, it should restart another instance with the same disks. I'm also using an interruption behavior of stop. I don't expect such a small instance to be reclaimed by AWS, so I expect this should work. In any case, I have a daily backup of the instance to keep the containers safe. The photo data is all stored on S3.
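The spot request itself can be sketched with the AWS CLI (the AMI ID and key name are placeholders, not my actual values):

```shell
# Persistent spot request: if AWS reclaims the instance, the request stays
# open, and the "stop" interruption behavior preserves the EBS volumes
# so the same disks come back when capacity returns.
aws ec2 request-spot-instances \
  --type persistent \
  --instance-interruption-behavior stop \
  --launch-specification '{
    "ImageId": "ami-0123456789abcdef0",
    "InstanceType": "t4g.small",
    "KeyName": "my-key"
  }'
```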
- I also need to monitor the S3 request costs to see if this style deployment makes sense
- So far, the major scan cost nearly $11 in S3 requests (3.7M requests). There are 76k objects totaling around 63GB.
- The next day's daily scan cost another nearly $6.50 (2.1M requests).
- In addition to S3, because of the requests, there was an additional cost in CloudTrail of $6.80 ($4.23 day 1 and $2.57 day 2).
- At this point, I have reduced the scan to monthly to save these costs.
- I need to determine if the costs make sense to continue the project or look for alternatives.