5 changes: 0 additions & 5 deletions Dockerfile.activator
@@ -3,13 +3,8 @@ FROM ghcr.io/lsst-dm/prompt-proto-base:${BASE_TAG}
ENV PYTHONUNBUFFERED True
ENV APP_HOME /app
ENV PROMPT_PROTOTYPE_DIR $APP_HOME
ARG RUBIN_INSTRUMENT
ARG PUBSUB_VERIFICATION_TOKEN
ARG PORT
ARG CALIB_REPO
ARG IMAGE_BUCKET
ARG IMAGE_TIMEOUT
ARG BUCKET_TOPIC
WORKDIR $APP_HOME
COPY python/activator activator/
COPY pipelines pipelines/
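
With these build arguments gone, the same configuration is expected to reach the container at run time instead; the playbook changes below list the corresponding environment variables. A minimal local sketch, with hypothetical image name, tag, and values (in deployment these are set on the service definition, not passed by hand):

.. code-block:: sh

   # A sketch only: image name and variable values are placeholders.
   docker build -f Dockerfile.activator --build-arg BASE_TAG=latest -t activator .
   docker run -e RUBIN_INSTRUMENT=HSC -e IMAGE_BUCKET=rubin-pp -e IMAGE_TIMEOUT=20 activator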
14 changes: 3 additions & 11 deletions doc/playbook.rst
@@ -154,7 +154,7 @@ To create or edit the Cloud Run service in the Google Cloud Console:

* There are also five optional parameters:

-* IMAGE_TIMEOUT: timeout in seconds to wait for raw image, default 50 sec.
+* IMAGE_TIMEOUT: timeout in seconds to wait after expected script completion for raw image arrival, default 20 sec.
* LOCAL_REPOS: absolute path (in the container) where local repos are created, default ``/tmp``.
* USER_APDB: database user for the APDB, default "postgres"
* USER_REGISTRY: database user for the registry database, default "postgres"
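
These settings can also be applied from the command line instead of the console; a sketch using ``gcloud``, with a hypothetical service name:

.. code-block:: sh

   # Hypothetical service name; the variables and defaults are those documented above.
   gcloud run services update prompt-proto-service \
       --update-env-vars IMAGE_TIMEOUT=20,LOCAL_REPOS=/tmp,USER_APDB=postgres,USER_REGISTRY=postgres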
@@ -228,7 +228,7 @@ It includes the following required environment variables:

The following environment variables are optional:

-* IMAGE_TIMEOUT: timeout in seconds to wait for raw image, default 50 sec.
+* IMAGE_TIMEOUT: timeout in seconds to wait after expected script completion for raw image arrival, default 20 sec.
* LOCAL_REPOS: absolute path (in the container) where local repos are created, default ``/tmp``.
* USER_APDB: database user for the APDB, default "postgres"
* USER_REGISTRY: database user for the registry database, default "postgres"
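
When running the service outside its normal deployment for debugging, the same variables can simply be exported in the shell; a minimal local-testing sketch that mirrors the defaults listed above:

.. code-block:: sh

   # Local-testing sketch: these values are just the documented defaults.
   export IMAGE_TIMEOUT=20
   export LOCAL_REPOS=/tmp
   export USER_APDB=postgres
   export USER_REGISTRY=postgres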
@@ -310,15 +310,7 @@ tester
``python/tester/upload.py`` and ``python/tester/upload_hsc_rc2.py`` are scripts that simulate the CCS image writer.
They can be run from ``rubin-devl``, but require the user to install the ``confluent_kafka`` package in their environment.

-You must have a profile set up for the ``rubin-pp`` bucket (see `Buckets`_, above), and must set the ``KAFKA_CLUSTER`` environment variable.
-Run:
-
-.. code-block:: sh
-
-   kubectl get service -n kafka prompt-processing-kafka-external-bootstrap
-
-and look up the ``EXTERNAL-IP``; set ``KAFKA_CLUSTER=<ip>:9094``.
-The IP address is fixed, so you should only need to look it up once.
+You must have a profile set up for the ``rubin-pp`` bucket (see `Buckets`_, above).
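
The `Buckets`_ section has the authoritative profile instructions; as a rough sketch, assuming an AWS-style credentials file, the profile amounts to an entry like the following (the keys are placeholders):

.. code-block:: sh

   # Hypothetical profile entry; take the real endpoint and keys from the Buckets section.
   cat >> ~/.aws/credentials <<EOF
   [rubin-pp]
   aws_access_key_id = <access key>
   aws_secret_access_key = <secret key>
   EOF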

Install the prototype code, and set it up before use:

4 changes: 0 additions & 4 deletions python/tester/upload.py
@@ -1,7 +1,6 @@
import dataclasses
import itertools
import logging
-import os
import random
import re
import sys
@@ -31,9 +30,6 @@ class Instrument:
EXPOSURE_INTERVAL = 18
SLEW_INTERVAL = 2

-# Kafka server
-kafka_cluster = os.environ["KAFKA_CLUSTER"]
-

logging.basicConfig(
format="{levelname} {asctime} {name} - {message}",
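A practical consequence of these deletions: ``KAFKA_CLUSTER`` no longer has to be set before running the tester, matching the playbook change above. A sketch of an invocation (arguments elided; see the script's usage):

.. code-block:: sh

   # KAFKA_CLUSTER is no longer read by the script, so it need not be exported.
   python python/tester/upload.py ...
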
3 changes: 0 additions & 3 deletions python/tester/upload_hsc_rc2.py
@@ -20,7 +20,6 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>.

import logging
-import os
import random
import sys
import tempfile
@@ -38,8 +37,6 @@
EXPOSURE_INTERVAL = 18
SLEW_INTERVAL = 2

-# Kafka server
-kafka_cluster = os.environ["KAFKA_CLUSTER"]

logging.basicConfig(
format="{levelname} {asctime} {name} - {message}",