This repository has been archived by the owner on Dec 17, 2023. It is now read-only.

Running the project

Raul Piraces Alastuey edited this page Jun 3, 2023 · 12 revisions

Building and running rsslay is easy, and there are several options to do so! The options are the following:

Running the relay from the source

Requirements

  • gcc is required, since the project uses CGO (more info)
  • Go v1.20.0

Build & run

  1. Clone this repository (or fork it).

  2. Set the SECRET environment variable (a random string to be used to generate virtual private keys).

  3. Set the following environment variables (values may differ per environment):

    export CGO_ENABLED=1
    export GOARCH=amd64
    export GOOS=linux
  4. Proceed to build the binary with the following command:

    go build -ldflags="-s -w -linkmode external -extldflags '-static'" -o ./rsslay cmd/rsslay/main.go

    Or if you prefer make, just execute make 👾

  5. Run the relay!

    ./rsslay

Note: it will create a local database file to store the currently known RSS feed URLs.
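Since the relay exits on startup if mandatory configuration is missing, a small pre-flight check can save a failed launch. The sketch below is not part of rsslay itself; `require_env` is a hypothetical helper, and the example values are placeholders for the mandatory variables documented below:

```shell
#!/bin/sh
# require_env NAME: fail with an error if the named variable is unset or empty.
require_env() {
  eval "val=\${$1:-}"
  if [ -z "$val" ]; then
    echo "error: $1 must be set" >&2
    return 1
  fi
}

# SECRET and MAIN_DOMAIN_NAME are the mandatory variables; example values here.
SECRET="${SECRET:-example-random-secret}"
MAIN_DOMAIN_NAME="${MAIN_DOMAIN_NAME:-rsslay.example.com}"

require_env SECRET && require_env MAIN_DOMAIN_NAME && echo "environment OK"
```

If the check prints "environment OK", you can go ahead and start `./rsslay`.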

Environment variables used

  • SECRET: mandatory, a random string to be used to generate virtual private keys.
  • DB_DIR: path with filename where the database should be created, defaults to .\db\rsslay.sqlite.
  • DEFAULT_PROFILE_PICTURE_URL: default profile picture URL for feeds that don't have an image.
  • REPLAY_TO_RELAYS: set to true if you want to send the fetched events to other relays defined in RELAYS_TO_PUBLISH_TO (default is false).
  • RELAYS_TO_PUBLISH_TO: comma-separated list of relays to re-publish events to, in the format wss://[URL],wss://[URL2] where URL and URL2 are URLs of valid relays (default is empty).
  • DEFAULT_WAIT_TIME_BETWEEN_BATCHES: default time to wait between sending batches of requests to other relays in milliseconds (default 60000, 60 seconds)
  • DEFAULT_WAIT_TIME_FOR_RELAY_RESPONSE: default time to wait for relay response for possible auth event in milliseconds (default is 3000, 3 seconds).
  • MAX_EVENTS_TO_REPLAY: maximum number of events to send to a relay in RELAYS_TO_PUBLISH_TO in a batch
  • ENABLE_AUTO_NIP05_REGISTRATION: enables NIP-05 automatic registration for all feed profiles in the format [URL]@[MAIN_DOMAIN_NAME] where URL is the main URL for the feed and MAIN_DOMAIN_NAME the below environment variable (default false)
  • MAIN_DOMAIN_NAME: mandatory, main domain name where this relay will be available. Used in the UI, and for NIP-05 purposes when ENABLE_AUTO_NIP05_REGISTRATION is enabled.
  • OWNER_PUBLIC_KEY: public key to show at the /.well-known/nostr.json endpoint by default mapped as the domain owner (_@[MAIN_DOMAIN_NAME] where MAIN_DOMAIN_NAME is the environment variable)
  • MAX_SUBROUTINES: maximum number of subroutines to maintain for replaying events to other relays (prevents crashes; default 20).
  • INFO_RELAY_NAME: relay name to inform for NIP-11 requests (defaults to "rsslay")
  • INFO_CONTACT: contact URI (schemes mailto or https to provide users with a means of contact) to inform for NIP-11 requests (defaults to "~")
  • MAX_CONTENT_LENGTH: maximum number of characters to limit the description to (defaults to 250 characters max, not counting title, links and comments link). It is recommended not to exceed 15,000 characters due to some known relays' defaults (which may vary by relay operators' configurations). See this issue comment.
  • LOG_LEVEL: minimum log level to show in the console output when running the project. Must be one of the following strings: "DEBUG", "INFO", "WARN", "ERROR", "FATAL". Levels are ordered from more verbose (less relevant/severe) to less verbose (more relevant/severe). It's implemented with hashicorp/logutils; see its README for more technical information.
  • DELETE_FAILING_FEEDS: set to true to delete from the database when a feed is detected invalid or not reachable during parsing (default false).
  • NITTER_INSTANCES: set of comma-separated hostnames that host Nitter instances to pull feeds from if one instance fails (acting as a pool of instances).
  • REDIS_CONNECTION_STRING: if you want to use Redis instead of the in-memory cache, set this connection string in the format redis://USER:PASS@DOMAIN:PORT; Redis will then be used for caching parsed feeds.
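Putting several of these together, a hypothetical .env file might look like the sketch below. All values are illustrative only, and every variable except SECRET and MAIN_DOMAIN_NAME can be omitted to fall back to its default:

```shell
# Example .env for rsslay — values are placeholders, replace as needed
SECRET=replace-with-a-long-random-string
MAIN_DOMAIN_NAME=rsslay.example.com
DEFAULT_PROFILE_PICTURE_URL=https://rsslay.example.com/assets/feed.png
REPLAY_TO_RELAYS=true
RELAYS_TO_PUBLISH_TO=wss://relay1.example.com,wss://relay2.example.com
MAX_EVENTS_TO_REPLAY=50
LOG_LEVEL=INFO
```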

Running with Docker

The Docker image for this project is published in GitHub Packages and Docker Hub, so you can pull the image directly from either registry.

Nevertheless, you can always clone the git repository and build and run it from source yourself.

From the published releases

  1. Pull the image from GitHub or Docker Hub (both are the same):
    # From GitHub (you can change the tag to a specific version)
    docker pull ghcr.io/piraces/rsslay:latest
    # From DockerHub (you can change the tag to a specific version)
    docker pull piraces/rsslay:latest
  2. Copy the .env.sample file to .env and replace the variable values as needed.
  3. Run the final image!
    # This will run in "detached" mode exposing the port 8080, change the port as needed
    # In case you downloaded the image from DockerHub
    docker run -d --env-file .env -p 8080:8080 --name rsslay piraces/rsslay:latest
    # If you downloaded the image from GitHub
    docker run -d --env-file .env -p 8080:8080 --name rsslay ghcr.io/piraces/rsslay:latest
  4. Now you can access the instance at localhost:8080 (or whichever port you chose).
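For a longer-lived deployment, the docker run command above can also be expressed as a docker-compose.yml. This is a sketch only; the image tag, port mapping and restart policy are assumptions matching the example above:

```yaml
# docker-compose.yml — equivalent to the `docker run` example above
services:
  rsslay:
    image: ghcr.io/piraces/rsslay:latest
    container_name: rsslay
    env_file: .env
    ports:
      - "8080:8080"
    restart: unless-stopped
```

Start it with `docker compose up -d` from the directory containing the file and your .env.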

Directly from the repository

Note: you can skip steps 2 and 3 below and directly run:

docker build github.com/piraces/rsslay -t rsslay
  1. Make sure you have already installed Docker.
  2. Clone this repository (or fork it).
  3. Perform a docker build:
    docker build . -t rsslay
  4. Copy the .env.sample file to .env and replace the variable values as needed.
  5. Run the final image!
    # This will run in "detached" mode exposing the port 8080, change the port as needed
    docker run -d --env-file .env -p 8080:8080 --name rsslay rsslay
  6. Now you can access the instance at localhost:8080 (or whichever port you chose).