A control system for the buttons and in-room LEDs in MrBeast's Ages 1 - 100 Fight For $500,000 video.
There's a 'leader' app that runs on a 'farmer' node and controls the challenge mechanics: it manages state (e.g. how many rooms voted for a certain choice) and allows round configuration. 'Room' apps running on the individual room nodes manage each room's state and transmit button inputs to the leader.
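At a high level, each room node just needs to get button presses to the leader over the network. Here's a minimal sketch of that flow (the endpoint path and payload are assumptions for illustration, not the apps' actual API):

```python
# Hypothetical room -> leader reporting flow; endpoint path and payload
# shape are assumptions, not the repo's actual API.
import requests

LEADER_URL = "http://10.0.0.1:5000"  # placeholder leader address

def report_button_press(room_id: int, button: str) -> bool:
    """POST one button press to the leader; True if the leader accepted it."""
    resp = requests.post(
        f"{LEADER_URL}/api/vote",  # hypothetical endpoint
        json={"room": room_id, "button": button},
        timeout=2,  # fail fast so the room app stays responsive
    )
    return resp.ok

if __name__ == "__main__":
    report_button_press(room_id=42, button="red")
```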
My behind-the-scenes video about the 1-100 challenge contains a lot more detail.
There's also a blog post: 100 SBCs, Python Flask, and two NUCs for MrBeast.
The Leader app (inside `leader-app/`) runs on a central server that manages state, provides output for a display, and provides controls to manage state (e.g. starting/ending a round, advancing to a new round).
The Leader app is a Flask app built with Python.
To develop it locally, run:
- `cd leader-app`
- `pipenv shell` (requires `pipenv`; install with `pip3 install pipenv`)
- `pip install -r requirements.txt`
- Initialize the database: `python3 init_db.py` (a rough sketch of what such a script does follows below)
- Run app: `FLASK_APP=app FLASK_DEBUG=true flask run` (add `--host=0.0.0.0` to make it accessible over the network)
Visit the app at http://127.0.0.1:5000
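`init_db.py` itself isn't reproduced here, but for orientation, a Flask + SQLite init script of this kind typically just (re)creates the schema. A minimal sketch, with an assumed table layout:

```python
# Minimal sketch of an init_db.py-style script; the real schema is in the
# repo, so the table and column names here are assumptions.
import sqlite3

connection = sqlite3.connect("database.db")
with connection:
    connection.executescript("""
        DROP TABLE IF EXISTS votes;
        CREATE TABLE votes (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            room_id INTEGER NOT NULL,
            choice TEXT NOT NULL,
            created TIMESTAMP DEFAULT CURRENT_TIMESTAMP
        );
    """)
connection.close()
```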
The Countdown app (inside `countdown-app/`) runs on a central server that manages state, provides output for a display, and provides controls to manage state (e.g. setting the time interval for a button press, resetting timers).
The Countdown app is a Flask app built with Python.
To develop it locally, run:
- `cd countdown-app`
- `pipenv shell` (requires `pipenv`; install with `pip3 install pipenv`)
- `pip install -r requirements.txt`
- Initialize the database: `python3 init_db.py`
- Run app: `FLASK_APP=app FLASK_DEBUG=true flask run` (add `--host=0.0.0.0` to make it accessible over the network)
Visit the app at http://127.0.0.1:5000
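The countdown mechanics boil down to per-room deadlines that button presses push back. A sketch of that bookkeeping (names and structure are assumptions, not the app's actual code):

```python
# Hypothetical countdown bookkeeping: a button press restarts a room's
# timer at the configured interval. Not the repo's actual implementation.
import time

PRESS_INTERVAL = 60.0  # seconds per press; the real app makes this configurable

deadlines: dict[int, float] = {}  # room_id -> deadline (monotonic seconds)

def handle_button_press(room_id: int) -> None:
    """Reset the room's countdown from now."""
    deadlines[room_id] = time.monotonic() + PRESS_INTERVAL

def expired_rooms() -> list[int]:
    """Rooms whose countdown has run out."""
    now = time.monotonic()
    return [room for room, deadline in deadlines.items() if now >= deadline]
```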
The Leader and Countdown apps will run on the main server NUC, with a hot spare backup server available should the need arise.
The `automation/farmer-control.yml` file contains the Ansible playbook to set up the server, install the app, and run it.
Make sure you have Ansible installed on a machine on the same network: `pip3 install ansible`.
Then make sure the leader and spare's IP addresses are both entered in the `[leader]` section of the `hosts.ini` file. For SSH authentication, the private key is available inside the Notion doc; you should add it to your `ssh` keychain with `ssh-add ~/path/to/private_key`.
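For reference, the relevant part of `hosts.ini` might look like this (the addresses below are placeholders, not the real IPs):

```ini
# Placeholder addresses; the real IPs belong in the actual hosts.ini.
[leader]
10.0.100.1
10.0.100.2
```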
Then run the Ansible playbook:

```
ansible-playbook farmer-control.yml
```
For testing, bring up a Docker Ubuntu container with:

```
docker run -d --volume=/sys/fs/cgroup:/sys/fs/cgroup:rw --cgroupns=host --privileged --name farmer geerlingguy/docker-ubuntu2204-ansible:latest /usr/sbin/init
```

Then set the hostname line for the farmer to:

```
farmer ansible_connection=community.general.docker role=leader
```

Then run the playbook:

```
ansible-playbook farmer-control.yml
```
To initialize (or reset) the database, run the Ansible playbook:

```
ansible-playbook farmer-reset-database.yml
```
To manually initialize the database (e.g. the first time you run the application in production), log into the server and run:

```
# For leader app
docker exec beast-challenge_leader_1 python3 init_db.py

# For countdown app
docker exec countdown-app_countdown_1 python3 init_db.py
```
If you want to test things on a Potato running Armbian instead of a NUC running Ubuntu, you can do that too! Just change the `[farmer]` section inside `hosts.ini` to have a line for the Potato where you want the server running.

Run the Ansible `farmer-control.yml` playbook, initialize the database, and away you go!
If you want to plug an HDMI display into the Potato and use Firefox to browse the web UI, you can install the LXDE Desktop (Armbian doesn't come with a desktop environment out of the box):

```
sudo apt install lxdm vanilla-gnome-desktop firefox
```
If you get a popup asking you to select a default display manager, choose `gdm3` then continue. See this post for more info.
Note: During testing, some things need tweaking depending on your setup. For example, one Le Potato I was using to demo some button functionality didn't have the relay HAT attached. The default 'Live Colors' configuration resulted in an exception, because the Potato couldn't find the I2C relay HAT to control! So... you might have to do a little Python spelunking if you want to do things out of the norm. In my case, I just had to disable 'Live Colors' in that test round.
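One defensive pattern for that situation (a sketch only, assuming the `smbus2` library and the HAT's default address; the actual scripts may handle it differently) is to probe the I2C bus at startup and make lighting a no-op if the HAT is missing:

```python
# Hypothetical guard for running without the relay HAT; assumes smbus2,
# I2C bus 1, and the EP-0099's default address (0x10).
try:
    from smbus2 import SMBus
    bus = SMBus(1)
    bus.read_byte(0x10)  # probe the HAT's assumed address
    HAT_PRESENT = True
except (ImportError, OSError):
    HAT_PRESENT = False  # no HAT or no I2C bus: skip live color control

def set_live_color(channel: int, on: bool) -> None:
    if not HAT_PRESENT:
        return  # no-op so a HAT-less test round doesn't crash
    bus.write_byte_data(0x10, channel, 0xFF if on else 0x00)
```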
The Room app (inside `room-app/`) runs on the SBC set up in each of the 100 rooms to run the room controls.
The app controls the following:
- Buttons and Button LEDs (GPIO digital inputs)
- RGBW LED light strip control (GPIO digital outputs)
To deploy the app, see the Automation for Controlling the Potatoes section below.
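For a rough idea of how the button input and button LED output fit together, here's a minimal sketch in the style of `gpiozero` (which targets the Raspberry Pi; on Le Potato a libgpiod-based equivalent is needed, and the pin numbers here are placeholders, not the repo's values):

```python
# Sketch of button/LED wiring logic using gpiozero; pin numbers and the
# reporting call are assumptions for illustration.
from signal import pause
from gpiozero import Button, LED

button = Button(17)    # GPIO digital input (assumed pin)
button_led = LED(27)   # GPIO digital output (assumed pin)

def on_press():
    button_led.on()  # light the button LED
    # report_button_press(...) would notify the leader here

def on_release():
    button_led.off()

button.when_pressed = on_press
button.when_released = on_release
pause()  # keep the script alive waiting for GPIO events
```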
The 52Pi EP-0099 Relay is a 4-channel I2C-controlled relay HAT that works with Le Potato. We bought it for two reasons:
- It is easy to install (as a HAT)
- It was available on short notice
The relays used are `HK4100F-DC5V-SHG`, and according to the datasheet they can only handle 3A at 30V, so they are not rated for the current we'll be drawing. Because of that, we daisy-chained another set of relays rated at 10A at 30V. The relays are controlled via code in the Room app scripts.
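Driving the EP-0099 from Python comes down to a single I2C write per relay, along the lines of 52Pi's published example (address `0x10`, one register per relay channel); treat those constants as assumptions if your HAT revision differs:

```python
# Sketch of switching the EP-0099's relays over I2C; address and register
# scheme are taken from 52Pi's example and may vary by HAT revision.
from smbus2 import SMBus

DEVICE_ADDR = 0x10  # EP-0099 default I2C address

def set_relay(channel: int, on: bool) -> None:
    """Switch relay 1-4 on or off."""
    if channel not in (1, 2, 3, 4):
        raise ValueError("EP-0099 has relay channels 1-4")
    with SMBus(1) as bus:  # I2C bus 1 on the HAT header
        bus.write_byte_data(DEVICE_ADDR, channel, 0xFF if on else 0x00)

if __name__ == "__main__":
    set_relay(1, True)  # energize relay 1
```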
There is also a convenient `light.py` script which allows for setting a room color directly on the device, e.g. `./light.py white`. Note that you may need to temporarily stop the lighting control script: `sudo systemctl stop light-control`.
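A `light.py`-style helper can boil down to mapping a color name onto relay channels. A self-contained sketch (the color-to-channel assignments here are guesses, not the script's actual mapping):

```python
#!/usr/bin/env python3
# Hypothetical light.py-style helper; the real script lives in room-app,
# and the color-to-channel assignments below are guesses.
import sys
from smbus2 import SMBus

DEVICE_ADDR = 0x10  # EP-0099 default I2C address

COLORS = {
    "off": (), "red": (1,), "green": (2,), "blue": (3,), "white": (4,),
}

def set_relay(channel: int, on: bool) -> None:
    with SMBus(1) as bus:
        bus.write_byte_data(DEVICE_ADDR, channel, 0xFF if on else 0x00)

def main() -> None:
    color = sys.argv[1] if len(sys.argv) > 1 else "off"
    channels = COLORS.get(color)
    if channels is None:
        sys.exit(f"Unknown color: {color}")
    for channel in (1, 2, 3, 4):
        set_relay(channel, channel in channels)

if __name__ == "__main__":
    main()
```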
The `automation/` directory contains Ansible configuration for managing both the main server (`farmer`) and the fleet of 100 room nodes (`potatoes`, sometimes referred to as `spuds`). We have to use something like Ansible because managing 100 nodes by hand would be insane.

Make sure you have Ansible installed on a machine on the same network: `pip3 install ansible`.
For first-time setup of a new Le Potato (assuming you've already booted it and set up the `admin` user account following Armbian's wizard), do the following:

```
cd automation
ansible-playbook spud-control.yml -k -K -e '{"run_upgrades": true}'
```

- Enter the default `admin` password (and then press enter to re-use it for `BECOME`).
- Wait for the playbook to complete.
For future runs, assuming you have the private key in your agent (`ssh-add [path-to-key]`), you can just run the following:

```
ansible-playbook spud-control.yml
```
The playbook is configured to be idempotent, so we should be able to run it live if we need to quickly patch all 100 rooms!
There are a variety of maintenance tasks in the maintenance playbook:
```
# Reboot spuds:
ansible-playbook spud-maintain.yml -e '{"spud_reboot":true}'

# Stop all services on the spuds:
ansible-playbook spud-maintain.yml -e '{"service_stop":true}'

# Start all services on the spuds:
ansible-playbook spud-maintain.yml -e '{"service_start":true}'
```
Assuming either Ubuntu Desktop or Ubuntu Server is installed on the Farmer, make sure you have SSH access, and install your SSH key on the `beast-admin` or `admin` account. Then run the Ansible playbook to set it up:

```
ansible-playbook farmer-control.yml
```
You may need to add `-K` the first time the playbook runs, to supply the sudo password (since by default Ubuntu doesn't allow passwordless sudo).
If you need to switch from the `leader` app to `countdown` (or vice-versa), run the `switch-modes.yml` playbook. For example, if the `leader` app is running and you would like to switch to `countdown`:

```
ansible-playbook switch-modes.yml -e challenge_mode=countdown
```
- Live round is open and accepts multiple votes: make sure multiple votes can be made per room.
- Live round is open and doesn't accept multiple votes: make sure only the first vote per room is accepted.
- Live round is closed: make sure no votes are accepted.
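Those three cases reduce to a small acceptance rule. Here's a sketch of the logic under test (names are assumptions; the real logic lives in the Leader app):

```python
# Hypothetical vote-acceptance rule matching the checklist above.
def accept_vote(round_open: bool, allow_multiple: bool,
                votes: dict[int, list[str]], room_id: int, choice: str) -> bool:
    """Record and accept the vote only if the round's rules allow it."""
    if not round_open:
        return False  # closed round: no votes accepted
    if not allow_multiple and votes.get(room_id):
        return False  # only the first vote per room counts
    votes.setdefault(room_id, []).append(choice)
    return True

votes: dict[int, list[str]] = {}
assert accept_vote(True, False, votes, 1, "red")       # first vote accepted
assert not accept_vote(True, False, votes, 1, "blue")  # duplicate rejected
assert not accept_vote(False, True, votes, 2, "red")   # closed round rejected
```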
GPLv3 or later (as of version 4.0.0)
All prior versions are not released under an open source license and are provided for historical context only.
All rights are reserved for any MrBeast trademarks and references; they must be removed if redistributing or re-using this software outside of the MrBeast organization.