Improving the metro riding experience via real-time ambient feedback about car capacity.

Built at the Google NYC Intern Hackathon 2018.

Metronome

Improving the train riding experience for all New Yorkers!

With over 5.5 million daily riders, the NYC metro system is both a blessing and a headache. How many times have you watched an empty subway car roll past you as the train pulls into the station, only to be greeted by a jam-packed car where you're standing? How many times have you walked down the platform hoping to end up next to an emptier car?

Our team wanted to help New Yorkers enjoy a less crowded, more efficient commute. There are only a few seconds to act before each train leaves the station, but plenty of wait time on the platform, so we set out to give commuters real-time ambient feedback about the capacity of the cars on their upcoming train.

Ideal State

Mock-up of our ideal state

We envision a metro experience in which LED strips along the platform edge signal in real time how full each car of the incoming train is. The same status could also be shown on in-station displays or even integrated into Google Maps.

Implementation

Architecture Diagram

For our prototype, we created a Flask webapp with routes that simulate the moment the subway car doors close. A button on the webapp triggers the security cameras (simulated with Raspberry Pis fitted with camera modules) to take a picture and send it to the Google Cloud Vision API, which detects the number of faces. This gives us a rough estimate of how full each car is. The estimate is then pushed to a Firebase database for historical storage.
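The capture-and-count step can be sketched roughly as follows. The function names, the seated-capacity figure, and the clamping logic are our illustrative assumptions here; the actual prototype code lives in train_car_pi_code.py:

```python
def count_faces(image_bytes):
    """Ask the Google Cloud Vision API how many faces appear in a snapshot."""
    # Deferred import so the helper below stays usable without the SDK
    # (pip install google-cloud-vision, with credentials configured).
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()
    response = client.face_detection(image=vision.Image(content=image_bytes))
    return len(response.face_annotations)

def estimate_fullness(face_count, seated_capacity=42):
    """Turn a raw face count into a 0.0-1.0 fullness estimate.

    Faces are only a proxy: riders facing away from the camera are
    missed, so the raw ratio can under- or overshoot; clamp to [0, 1].
    """
    return min(max(face_count / seated_capacity, 0.0), 1.0)
```

The clamp matters because face detection is noisy: a crowded car can produce more detected faces than seats, and we only need a relative "how full" signal, not an exact head count.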

The frontend of our webapp is the display that could be mounted on screens at the platform to provide an additional visual cue.

We used Socket.IO to trigger the Pi code that takes the pictures; once a new estimate comes back, the same channel tells the Flask frontend to update its UI automatically.
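That round trip could be wired up along these lines, assuming Flask-SocketIO on the server side. The event names and payload shape below are hypothetical stand-ins, not the exact ones from our code:

```python
def make_capacity_event(car_id, fullness):
    """Payload the server broadcasts whenever a Pi reports a new estimate."""
    return {"car": car_id, "fullness": round(fullness, 2)}

def wire_up(socketio):
    """Attach handlers to a flask_socketio.SocketIO instance."""

    @socketio.on("doors_closing")       # fired by the webapp button
    def trigger_cameras(_msg):
        socketio.emit("take_picture")   # every train-car Pi listens for this

    @socketio.on("capacity_report")     # a Pi finished its Vision API call
    def relay(msg):
        # A server-level emit with no room broadcasts to every connected
        # client, so the Flask frontend updates without a page refresh.
        socketio.emit("capacity_update",
                      make_capacity_event(msg["car"], msg["fullness"]))
```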

Raspberry Pis

On the platform side (simulated with a Raspberry Pi driving an LED strip), we continually poll Firebase, parse the latest capacity data, and decide what color each LED should display.
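One polling pass might look like this. The color thresholds and the two injected helpers (`fetch_capacities`, `set_led`) are illustrative assumptions; the real loop is in platform_pi_code.py:

```python
GREEN, YELLOW, RED = (0, 255, 0), (255, 191, 0), (255, 0, 0)

def fullness_to_color(fullness):
    """Map a 0.0-1.0 fullness estimate to an (R, G, B) LED color."""
    if fullness < 0.4:
        return GREEN    # plenty of room
    if fullness < 0.75:
        return YELLOW   # getting crowded
    return RED          # jam-packed

def refresh_leds(fetch_capacities, set_led):
    """One polling pass.

    fetch_capacities() returns {car_index: fullness} (e.g. from Firebase);
    set_led(index, color) drives one LED on the platform strip.
    """
    for car, fullness in fetch_capacities().items():
        set_led(car, fullness_to_color(fullness))
```

Passing the Firebase fetch and the LED driver in as functions keeps the color logic testable on a laptop, without a Pi or network access.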