This tool scrapes on‑screen data from your phone app mirrored on your Intel Mac (e.g., via QuickTime Player with your iPhone connected by USB, or another mirroring tool). It uses OCR (optical character recognition) to read numbers/text from the app’s UI and logs them to a CSV.
⚠️ This does not read raw EEG/BLE data from the headset. It extracts whatever the app displays on screen.
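Conceptually, the pipeline is capture → preprocess → OCR → log. A minimal sketch of the preprocessing idea (thresholding a grayscale frame to pure black/white, which generally OCRs better than a raw screenshot), using only numpy and assuming an 8-bit grayscale array:

```python
import numpy as np

def binarize(gray: np.ndarray, thresh: int = 128) -> np.ndarray:
    """Threshold an 8-bit grayscale frame to pure black/white.
    High-contrast binary images are usually easier for Tesseract to read."""
    return np.where(gray > thresh, 255, 0).astype(np.uint8)

# Example: a tiny synthetic 2x3 "frame"
frame = np.array([[10, 200, 130], [250, 40, 128]], dtype=np.uint8)
print(binarize(frame).tolist())  # → [[0, 255, 255], [255, 0, 0]]
```

The actual script's preprocessing may differ (e.g., adaptive thresholding via OpenCV); this only illustrates the principle.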
- Install Homebrew (if needed): https://brew.sh
- Install Tesseract OCR:
brew install tesseract
- Create a virtual environment (recommended):
python3 -m venv venv
source venv/bin/activate
- Install Python deps:
pip install mss opencv-python pytesseract numpy pandas
- Start mirroring your phone (e.g., QuickTime Player → New Movie Recording → pick your iPhone as the camera).
- Open Terminal in this folder and run:
python3 serenibrain_ocr_logger.py
- A screenshot appears. Drag to select the area where the app shows the values you want. Press ENTER to confirm.
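Under the hood, the selected region is typically stored as (x, y, w, h) and each captured frame is cropped with a NumPy slice; a minimal sketch (variable names are illustrative, not necessarily the script's own):

```python
import numpy as np

def crop(frame: np.ndarray, roi: tuple) -> np.ndarray:
    """Crop a frame (H x W array) to the region of interest (x, y, w, h)."""
    x, y, w, h = roi
    return frame[y:y + h, x:x + w]

frame = np.arange(25).reshape(5, 5)          # fake 5x5 "screenshot"
print(crop(frame, (1, 2, 3, 2)).tolist())    # → [[11, 12, 13], [16, 17, 18]]
```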
- Press `s` to start logging. Press `o` to toggle OCR mode (digits vs text).
- Watch the “OCR Preview” window; it overlays the recognized text.
- Data is appended to `ocr_output.csv` in this folder. Columns: `timestamp`, `raw_text`, `parsed_numbers`.
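The log can be inspected afterwards with pandas; a minimal sketch using simulated contents (real runs would read `ocr_output.csv`, and the exact `parsed_numbers` formatting may differ):

```python
import io
import pandas as pd

# Simulated log contents standing in for ocr_output.csv
csv_text = """timestamp,raw_text,parsed_numbers
2024-01-01 12:00:00,Focus 72,[72.0]
2024-01-01 12:00:01,Focus 74,[74.0]
"""
df = pd.read_csv(io.StringIO(csv_text), parse_dates=["timestamp"])
print(df["raw_text"].tolist())  # → ['Focus 72', 'Focus 74']
```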
- Increase the app’s font size or zoom in the mirrored window.
- Ensure high contrast (dark text on light background, or vice versa).
- If you only need numbers, keep digits mode (default) for better accuracy.
- If values jitter, select a more stable part of the UI or add averaging post‑hoc in pandas.
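The post-hoc averaging suggestion can be sketched with a pandas rolling mean over the parsed values (the series below is illustrative; real `parsed_numbers` cells may need splitting or conversion first):

```python
import pandas as pd

values = pd.Series([72.0, 95.0, 71.0, 73.0, 70.0])  # jittery OCR readings
smoothed = values.rolling(window=3, min_periods=1).mean()
print(smoothed.round(2).tolist())  # → [72.0, 83.5, 79.33, 79.67, 71.33]
```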
- Open the script and tweak `preprocess()` if your UI has special colors/backgrounds.
- Add label‑specific regex in `parse_numbers()` if the app shows known metric names.
- Set `SAVE_FRAMES=True` to archive periodic crops for debugging.
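A hedged sketch of what a label-specific regex might look like (the labels "Focus" and "Calm" are hypothetical examples, not necessarily the app's actual metric names):

```python
import re

def parse_labeled_numbers(raw_text: str) -> dict:
    """Extract known metric labels and their numeric values from OCR text."""
    pattern = re.compile(r"(Focus|Calm)\s*[:=]?\s*(\d+(?:\.\d+)?)")
    return {label: float(value) for label, value in pattern.findall(raw_text)}

print(parse_labeled_numbers("Focus: 72 Calm 65.5"))  # → {'Focus': 72.0, 'Calm': 65.5}
```

Anchoring numbers to known labels like this filters out stray digits that OCR picks up elsewhere in the frame.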
If you need true data streaming (e.g., raw EEG, band power, etc.), the best route is a vendor SDK or documented BLE services.