First vertical run #45

@fabianabarca

Description

Requisites

Flow

Databús

Interfaces

  • API: /run endpoint to start a "run" (an instance of a trip). Note: schema in the API docs (databus.yml, OpenAPI).
  • MQTT: transit/vehicle/<vehicle_id>/position topic with location pings (one every 7 s). Note: schema based on the /position REST API endpoint (databus.yml, AsyncAPI).

Note: the OpenAPI schema must be generated automatically with DRF Spectacular @Kroenenn.
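As a rough sketch, a position ping could be built like this. The topic and the 7 s cadence come from the description above; the payload field names are assumptions modeled on the /position REST endpoint, and the authoritative schema is the AsyncAPI document (databus.yml):

```python
import json
import time


def build_position_message(vehicle_id: str, latitude: float,
                           longitude: float, speed: float) -> tuple[str, str]:
    """Build the MQTT topic and JSON payload for one location ping.

    Field names are assumptions; the real schema lives in databus.yml (AsyncAPI).
    """
    topic = f"transit/vehicle/{vehicle_id}/position"
    payload = json.dumps({
        "vehicle_id": vehicle_id,
        "latitude": latitude,
        "longitude": longitude,
        "speed": speed,
        "timestamp": int(time.time()),
    })
    return topic, payload


# A publisher (e.g. with paho-mqtt) would then call
# client.publish(topic, payload) every 7 s.
```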

Processes

  • register_run: receives POST /run and results in an update of the transit system state (SADD runs:in_progress run_id via update_system_state).
  • ingest_telemetry: receives messages from transit/vehicle/<vehicle_id>/position and updates the location via update_system_state.
  • update_system_state: updates the vehicle state (XADD vehicles:ABC123:locations * latitude 37.7749 longitude -122.4194 timestamp 1713881558 speed 45). Notes: all necessary formatting and computation happens here.
  • build_gtfs_realtime: triggered by Celery Beat; gets the runs in progress (SMEMBERS runs:in_progress), then grabs the latest data for each (XREVRANGE vehicles:ABC123:locations + - COUNT 1) and builds and publishes the feed as .pb and .json. Notes: no formatting or computation here, just copy, pack, and build. Emits: publication_assertion on topic gtfs:realtime.
  • save_gtfs_feed_messages: triggered by publication_assertion (tasks.save_gtfs_feed_messages.delay()), gets the .pb, uses gtfs-io to transform it to PyArrow models, and stores it as Parquet.
  • end_run: receives a simulated API call to end the run (PATCH /run/<run_id>); the orchestrator then asks update_system_state to exclude the run from the system state and initiates the run lifecycle management process.
  • manage_run_lifecycle: gets the run's data and saves the "traces" to the database (not executed in this first vertical).
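Since update_system_state is the single place where state writes and formatting happen, it could be sketched like this. The key names and commands follow the ones quoted above; the event envelope and the client wiring are assumptions:

```python
RUNS_KEY = "runs:in_progress"


def location_key(vehicle_id: str) -> str:
    """Stream key for a vehicle's locations, e.g. vehicles:ABC123:locations."""
    return f"vehicles:{vehicle_id}:locations"


def update_system_state(r, event: dict) -> None:
    """Apply one event to the shared Redis state.

    `r` is a redis-py-style client; `event` is a hypothetical envelope.
    All formatting and computation is meant to happen here, so callers
    (register_run, ingest_telemetry, end_run) pass raw data only.
    """
    kind = event["kind"]
    if kind == "run_registered":      # from register_run (POST /run)
        r.sadd(RUNS_KEY, event["run_id"])
    elif kind == "position":          # from ingest_telemetry (MQTT)
        r.xadd(location_key(event["vehicle_id"]), {
            "latitude": event["latitude"],
            "longitude": event["longitude"],
            "timestamp": event["timestamp"],
            "speed": event["speed"],
        })
    elif kind == "run_ended":         # from end_run (PATCH /run/<run_id>)
        r.srem(RUNS_KEY, event["run_id"])
```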
```mermaid
flowchart TD
    rr[register-run]
    er{end-run?}
    it[ingest-telemetry]
    us1[update-system-state]
    us2[update-system-state]
    us3[update-system-state]
    bg[build-gtfs-realtime]
    sg[save-gtfs-feed-messages]
    mr[manage-run-lifecycle]
    rr --> us1
    us1 --> er
    er -- "no" --> it
    it --> us2
    us2 --> er
    er -- "no / every 15 s" --> bg
    bg --> sg
    sg --> er
    er -- "yes" --> us3
    us3 --> mr
```
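The .json variant that build_gtfs_realtime publishes could be packed like this. The field names follow the standard GTFS Realtime JSON mapping, while the shape of `latest` (the newest stream entry per run, as returned by the XREVRANGE ... + - COUNT 1 step) is an assumption:

```python
def build_vehicle_positions_json(latest: dict, feed_timestamp: int) -> dict:
    """Pack the latest location of each in-progress run into a
    GTFS Realtime style VehiclePositions dict (no computation, just packing).

    latest: {run_id: {"vehicle_id", "latitude", "longitude",
                      "speed", "timestamp"}}
    """
    return {
        "header": {"gtfsRealtimeVersion": "2.0", "timestamp": feed_timestamp},
        "entity": [
            {
                "id": run_id,
                "vehicle": {
                    "vehicle": {"id": loc["vehicle_id"]},
                    "position": {
                        "latitude": loc["latitude"],
                        "longitude": loc["longitude"],
                        "speed": loc["speed"],
                    },
                    "timestamp": loc["timestamp"],
                },
            }
            for run_id, loc in latest.items()
        ],
    }
```

The .pb variant would be assembled analogously with the gtfs-realtime-bindings protobuf classes.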

URLs

  • [domain]/api/run
  • [domain]/api/run/<run_id>
  • tcp://[domain]:1883
  • [domain]/gtfs/schedule/feed.zip
  • [domain]/gtfs/realtime/vehicle_positions.pb (.json)

Infobús

Interfaces

  • WebSocket: wss://[domain]/ws/route/<route_id>

Processes

  • gtfs_realtime_polling: every N seconds, requests new feed messages from the URL provided for each GTFSProvider. For every TransitSystem and every GTFSProvider: poll with gtfs-io and keep the result as pure Python dataclasses. Backup: store the blob in Redis. The WebSocket logic acts on the Python objects that gtfs-io returns.
  • publish_gtfs_realtime_updates: publishes the updates to the WebSocket endpoints.
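A small sketch of how publish_gtfs_realtime_updates could fan the feed out per route, so each /ws/route/<route_id> group receives only its own vehicles. The entity layout follows the GTFS Realtime JSON mapping; the actual channel-layer send is omitted:

```python
def group_updates_by_route(entities: list) -> dict:
    """Group VehiclePosition entities by route so each WebSocket group
    /ws/route/<route_id> gets only its own updates.

    Assumes entities in the GTFS Realtime JSON mapping, where the route
    is at entity["vehicle"]["trip"]["routeId"]; entities without a trip
    are skipped.
    """
    by_route: dict = {}
    for entity in entities:
        route_id = entity.get("vehicle", {}).get("trip", {}).get("routeId")
        if route_id is not None:
            by_route.setdefault(route_id, []).append(entity)
    return by_route
```

Each key of the result then maps one-to-one to a WebSocket group to broadcast to.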
```mermaid
flowchart TD
    TS[TransitSystem]
    GP[GTFSProvider]
    FE[Feed]
    A[Agency]
    R[Routes]
    S[Stops]

    TS --> GP
    GP --> FE
    FE --> A
    FE --> R
    FE --> S
```
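Since gtfs_realtime_polling keeps data as pure Python dataclasses, the hierarchy in the diagram above could be sketched roughly as follows; all field names beyond the diagram's entities are assumptions:

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Feed:
    """One GTFS feed: an agency plus its routes and stops."""
    agency: str
    routes: list[str] = field(default_factory=list)
    stops: list[str] = field(default_factory=list)


@dataclass
class GTFSProvider:
    """A provider polled every N seconds for new feed messages."""
    url: str
    feed: Feed | None = None


@dataclass
class TransitSystem:
    """Top-level container: one transit system, many providers."""
    name: str
    providers: list[GTFSProvider] = field(default_factory=list)
```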
