- Start the Broker the easy way
- Stop the Broker the easy way
- Start/Stop Components Individually
- Options for Ingesting Alerts
Use this document to run a broker instance manually (typically a Testing instance).
See also:

- test-an-instance
- ../components/auto-scheduler and ../components/night-conductor, to schedule a (typically Production) instance to run automatically each night.
The easiest way to start the broker is to hijack the ../components/auto-scheduler by sending a START message to its Pub/Sub topic, attaching an attribute that indicates the topic to be ingested (or none). In addition to cueing up all broker components, the cue-response checker will run and log* the status of each component. (*See View and Access Resources: Cloud Scheduler <broker/run-a-broker-instance/view-resources:Cloud Scheduler>, and note that the checks are on a time delay of up to several minutes.)
Prerequisite: Make sure the VMs are stopped (see View and Access Resources: Compute Engine VMs <broker/run-a-broker-instance/view-resources:Compute Engine VMs>, which includes a code sample).
Command line:
# replace with your broker instance's keywords:
survey=ztf
testid=mytest
topic="${survey}-cue_night_conductor-${testid}"
cue=START
attr=KAFKA_TOPIC=NONE # leave consumer VM off; e.g., when using consumer simulator
# attr=topic_date=yyyymmdd # start the consumer and ingest the topic with date yyyymmdd
gcloud pubsub topics publish "$topic" --message="$cue" --attribute="$attr"
Python:
from pgb_utils import pubsub as pgbps
# replace with your broker instance's keywords:
survey='ztf'
testid='mytest'
topic = f'{survey}-cue_night_conductor-{testid}'
cue = b'START'
attr = {'KAFKA_TOPIC': 'NONE'} # leave consumer VM off; e.g., when using consumer simulator
# attr={'topic_date': 'yyyymmdd'} # start the consumer and ingest the topic with date yyyymmdd
pgbps.publish(topic, cue, attrs=attr)
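The two attribute forms above steer what happens to the consumer VM. A minimal sketch of that decision logic (hypothetical helper, not actual broker code; the real handling lives in night conductor) may clarify the difference:

```python
def consumer_action(attrs):
    """Describe the consumer action implied by a cue message's attributes."""
    if attrs.get('KAFKA_TOPIC') == 'NONE':
        # leave the consumer VM off; e.g., when using the consumer simulator
        return 'leave consumer VM off'
    if 'topic_date' in attrs:
        # start the consumer and ingest the topic with this date
        return f"ingest topic for date {attrs['topic_date']}"
    return 'default behavior'

print(consumer_action({'KAFKA_TOPIC': 'NONE'}))     # leave consumer VM off
print(consumer_action({'topic_date': '20210331'}))  # ingest topic for date 20210331
```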
The easiest way to stop the broker is to hijack the ../components/auto-scheduler by sending an END message to its Pub/Sub topic. In addition to stopping all broker components, the cue-response checker will run and log* the status of each component. (*See View and Access Resources: Cloud Scheduler <broker/run-a-broker-instance/view-resources:Cloud Scheduler>, and note that the checks are on a time delay of up to several minutes.)
Command line:
# replace with your broker instance's keywords:
survey=ztf
testid=mytest
topic="${survey}-cue_night_conductor-${testid}"
cue=END
gcloud pubsub topics publish "$topic" --message="$cue"
Python:
from pgb_utils import pubsub as pgbps
# replace with your broker instance's keywords:
survey='ztf'
testid='mytest'
topic = f'{survey}-cue_night_conductor-{testid}'
cue = b'END'
pgbps.publish(topic, cue)
Here are some options for starting and stopping broker components individually:
Generally
Use night conductor's scripts. In most cases, you can simply call a shell script and pass in a few variables. See especially those called by:
- vm_startup.sh at the code path broker/night_conductor/vm_startup.sh
- start_night.sh at the code path broker/night_conductor/start_night/start_night.sh
- end_night.sh at the code path broker/night_conductor/end_night/end_night.sh
Night Conductor
- Instead of hijacking the auto-scheduler, you can start/stop the broker by controlling night-conductor directly. See Example: Use night-conductor to start/end the night <use-night-conductor-to-start-end-night>.
Cloud Functions
- update/redeploy: run the deploy_cloud_fncs.sh script; see View and Access Resources: Cloud Functions <broker/run-a-broker-instance/view-resources:Cloud Functions>
Dataflow
- start/update/stop jobs: see View and Access Resources: Dataflow jobs <broker/run-a-broker-instance/view-resources:Dataflow jobs>
VMs
- start/stop: see View and Access Resources: Compute Engine VMs <broker/run-a-broker-instance/view-resources:Compute Engine VMs>
You have three options to get alerts into the broker. Production instances typically use #1; testing instances typically use #3.
1. Connect to a live stream. Obviously, this can only be done at night when there is a live stream to connect to. If there are no alerts in the topic, the consumer will poll repeatedly for available topics and begin ingesting when its assigned topic becomes active. See Kafka Topic Syntax below.
2. Connect to a stream from a previous night (within the last 7 days). This is not recommended, since alerts will flood into the broker as the consumer ingests as fast as it can. For ZTF, you can check ztf.uw.edu/alerts/public/; tar files larger than 74 (presumably in bytes) indicate dates with >0 alerts. See also: Kafka Topic Syntax below.
3. Use the consumer simulator to control the flow of alerts into the broker. See consumer-simulator for details. When starting the broker, use the metadata attribute KAFKA_TOPIC=NONE to leave the consumer VM off.
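The wait-and-ingest behavior described in option 1 amounts to a polling loop: query the available topics, and start ingesting once the assigned topic appears. A minimal sketch (hypothetical names; the real consumer is a Kafka client running on the VM):

```python
import time

def wait_for_topic(list_topics, assigned_topic, poll_interval=1.0, max_polls=10):
    """Poll `list_topics()` until `assigned_topic` is active, then return True.

    `list_topics` is a callable returning the currently active Kafka topics;
    it stands in for a Kafka metadata query.
    """
    for _ in range(max_polls):
        if assigned_topic in list_topics():
            return True  # topic is active; begin ingesting
        time.sleep(poll_interval)
    return False  # gave up before the topic appeared

# Simulated broker: the topic becomes active on the third poll.
calls = {'n': 0}
def fake_list_topics():
    calls['n'] += 1
    return ['ztf_20210331_programid1'] if calls['n'] >= 3 else []

assert wait_for_topic(fake_list_topics, 'ztf_20210331_programid1', poll_interval=0.0)
```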
Topic name syntax:
- ZTF: ztf_yyyymmdd_programid1, where yyyymmdd is replaced with the date.
- DECAT: decat_yyyymmdd_2021A-0113, where yyyymmdd is replaced with the date.
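For example, the two patterns above can be filled in from a date like so (hypothetical helper functions, shown only to make the yyyymmdd substitution concrete):

```python
from datetime import date

def ztf_topic(day):
    """Kafka topic name for ZTF alerts on the given date."""
    return f"ztf_{day:%Y%m%d}_programid1"

def decat_topic(day):
    """Kafka topic name for DECAT (program 2021A-0113) on the given date."""
    return f"decat_{day:%Y%m%d}_2021A-0113"

print(ztf_topic(date(2021, 3, 31)))    # ztf_20210331_programid1
print(decat_topic(date(2021, 3, 31)))  # decat_20210331_2021A-0113
```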