First, set up environment variables and secrets.

- Copy `.env.template` to `.env` and populate values. Make sure to use the instructions here to generate the `ANON_KEY` and `SERVICE_ROLE_KEY`.
- The `search` container's init script will create an application API key. Retrieve this value using `curl`, then populate it for `SEARCH_APPLICATION_API_KEY` in `.env` (see the extraction sketch after this list):

  ```shell
  curl -s -X GET "http://localhost:7700/keys" \
    -H "Authorization: Bearer ${SEARCH_MASTER_API_KEY}" \
    -H "Content-Type: application/json"
  ```

- Secrets are also stored in the database. You might encounter an issue with initializing them on a fresh start; try running a search before you add an item.
- Set up a PostHog account and populate the environment variables `NEXT_PUBLIC_POSTHOG_KEY` and `NEXT_PUBLIC_POSTHOG_HOST`.
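If `jq` is available, the application key can be pulled straight out of that response. A minimal sketch, assuming the standard Meilisearch `GET /keys` response shape (a `results` array of key objects with `name` and `key` fields):

```shell
# List every key's name alongside its value; copy the application key's
# value into SEARCH_APPLICATION_API_KEY in .env.
curl -s -X GET "http://localhost:7700/keys" \
  -H "Authorization: Bearer ${SEARCH_MASTER_API_KEY}" \
  -H "Content-Type: application/json" \
  | jq -r '.results[] | "\(.name): \(.key)"'
```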
To start the local development environment:

- Run `docker compose up -d` to start containers in detached mode.
- Start file syncing with `docker compose watch`.
- Navigate to `http://localhost:3000` to view the application.
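To confirm the stack came up cleanly, you can check container status and tail logs. A quick sketch; the `web` service name comes from `docker-compose.yml`, and other service names may differ:

```shell
# Show each service's state and health.
docker compose ps

# Follow logs for the web app service.
docker compose logs -f web
```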
To develop locally using the Chrome extension, first make sure that the API hostname is correctly set in `src/chrome/background.js`, `src/chrome/content.js`, and `src/chrome/manifest.json`.

- For local development, this should be set to `http://localhost:3000/*`.
- For production, this should be set to `https://curi.ooo/*`.
The manifest should contain something like:

```json
"content_scripts": [
  {
    "matches": [
      "http://localhost:3000/*"
    ],
    "js": [
      "content.js"
    ]
  }
],
```
Next, install the extension in development mode. In Chrome, open `chrome://extensions`. Click on "Load unpacked" and select the `src/chrome` directory.

Finally, update `NEXT_PUBLIC_CHROME_EXTENSION_ID` in `.env` based on the Chrome extension ID shown on the `chrome://extensions` page.
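For example (the ID below is a placeholder; Chrome generates a unique 32-character ID for each unpacked extension):

```shell
# .env — hypothetical value; copy the real ID from chrome://extensions.
NEXT_PUBLIC_CHROME_EXTENSION_ID=aaaabbbbccccddddeeeeffffgggghhhh
```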
To install the local eslint plugins, run:

```shell
cd src/web/eslint-local-rules
npm run build
```
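Once built, the rules should be picked up by the web app's usual lint run. The script name below is an assumption, not confirmed by this repo:

```shell
# Hypothetical: run the web app's lint script to exercise the local rules.
cd src/web
npm run lint
```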
With the development environment running, navigate to `http://localhost:8000` to view the Supabase dashboard. Use the `DASHBOARD_USERNAME` and `DASHBOARD_PASSWORD` environment variables to sign in.

Saved Markdown content can be viewed on the dashboard's Storage page.
To connect to the local Postgres database:

- Run `docker exec -it db bash`.
- Then, run `psql -U postgres`.
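From there, standard `psql` usage applies; for example (assuming the default `postgres` database):

```shell
# Inside the db container: list the tables in the public schema.
psql -U postgres -c '\dt'
```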
To clear the database, run:

```shell
docker compose down -v
rm -r docker/volumes/db/data
```
To set up the production deployment:

- Set up Supabase app.
- Set up Vercel app.
  - Include the environment variables from the `web` service in `docker-compose.yml`.
  - For the `POSTGRES_URL` variable, use the "Transaction pooler" Supabase Postgres URL.
  - Also include `SEARCH_MASTER_API_KEY` (at least 16 bytes) and `SEARCH_APPLICATION_API_KEY` (a UUID v4); see the key-generation sketch after this list.
- Copy the prod env variables locally with `vercel env pull .env.prod`.
- Run database migrations against the production database using `DOTENV_CONFIG_PATH=/path/to/.env.prod npm run db:migrate`.
- Set up a Google Cloud project with Google Auth Platform configured for a web application. Copy the generated client ID and client secret into Supabase's Google auth provider, then copy the Supabase auth callback URL into the "Authorized redirect URIs" field.
- Configure the "URL Configuration" site URL and redirect settings in Supabase Auth with the app URL.
  - Site URL should be `$HOSTNAME/auth/callback?next=%2Fhome`.
  - Redirect URLs should include `$HOSTNAME/*`.
- Configure the Supabase storage settings.
  - Create a bucket `items`. Set it to be public with the allowed MIME type `text/markdown`.
  - Create a new policy on the `items` bucket from scratch. Title it "Allow read access for everyone", allow the `SELECT` operation for all roles, and keep the default policy definition `bucket_id = 'items'`.
  - Create a new policy on the `items` bucket. Title it "Allow authenticated to upload", allow the `INSERT` and `UPDATE` operations for the `authenticated` role, and keep the default policy definition.
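One way to generate those two search keys locally (a sketch, assuming `openssl` and `uuidgen` are installed):

```shell
# SEARCH_MASTER_API_KEY: must be at least 16 bytes.
openssl rand -base64 24

# SEARCH_APPLICATION_API_KEY: a UUID v4.
uuidgen
```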
- Set up a Meilisearch instance on your cloud provider.
  - Use the dev environment: `docker exec -it dev zsh`.
  - Authenticate using `gcloud auth application-default login`.
  - Run `terraform plan` to verify the correct resources will be created, then run `terraform apply`.
  - Create an A (Address) DNS record for `terraform output`'s `gke.ip_address` under the subdomain of `SEARCH_EXTERNAL_ENDPOINT_URL`.
  - Set up `cert-manager` on the cluster using `script/deploy-certs.sh`.
  - Then run `script/deploy-volumes.sh [staging|prod]` to deploy a persistent volume to store the search index.
  - Then run `script/deploy.sh [staging|prod]` to deploy the search application.
  - It may take a while for the certificate to be issued. You can check the status of the `gateway`, `certificate`, and `challenge` resources, as well as the logs of the `cert-manager` pod, to check progress.
  - Run `/data/search/init.sh [staging|prod]` to initialize the search application.
  - Populate `SEARCH_APPLICATION_API_KEY` after using the master API key to retrieve its value, as in the sketch after this list.
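Retrieving that value mirrors the local `curl` call from earlier. A sketch, assuming the deployed instance is reachable over HTTPS at the `SEARCH_EXTERNAL_ENDPOINT_URL` hostname:

```shell
# List API keys on the deployed Meilisearch instance; copy the application
# key's "key" field into SEARCH_APPLICATION_API_KEY.
curl -s -X GET "https://${SEARCH_EXTERNAL_ENDPOINT_URL}/keys" \
  -H "Authorization: Bearer ${SEARCH_MASTER_API_KEY}" \
  -H "Content-Type: application/json"
```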