
Conversation

github-actions[bot]

Deploy to Production

Latest changes from main are ready for production deployment.

Approve this PR to start the build; the build will be triggered automatically on the prod branch.


Auto-generated by main-to-prod workflow

h0lybyte and others added 9 commits September 2, 2025 17:15
* chore: fix couple of flaky tests (supabase#1517)

* fix: Improve runtime setup logic (supabase#1511)

Clean up the runtime.exs logic to be more organized and easier to maintain

* fix: runtime setup error (supabase#1520)

---------

Co-authored-by: Eduardo Gurgel <eduardo.gurgel@supabase.io>
Co-authored-by: Filipe Cabaço <filipe@supabase.io>
* fix: runtime setup error (supabase#1520)

* fix: use primary instead of replica on rename_settings_field (supabase#1521)

---------

Co-authored-by: Filipe Cabaço <filipe@supabase.io>
Co-authored-by: Eduardo Gurgel <eduardo.gurgel@supabase.io>
Co-authored-by: Bradley Haljendi <5642609+Fudster@users.noreply.github.com>
* fix: runtime setup error (supabase#1520)

* fix: use primary instead of replica on rename_settings_field (supabase#1521)

* feat: upgrade cowboy & ranch (supabase#1523)

* fix: Fix GenRpc to not try to connect to nodes that are not alive (supabase#1525)

* fix: enable presence on track message (supabase#1527)

Currently the user would need to have presence enabled from the beginning of the channel. This change lets users enable presence later in the flow by sending a track message, which enables presence messages for them.

* fix: set cowboy active_n=100 as cowboy 2.12.0 (supabase#1530)

cowboy 2.13.0 changed the default to active_n=1; this restores the active_n=100 default used by cowboy 2.12.0
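
With the Plug.Cowboy adapter, the old value can be restored through `protocol_options`; a minimal sketch, assuming the endpoint is configured in config/runtime.exs (app and endpoint names are illustrative):

```elixir
# Restores the cowboy 2.12.0 behaviour after the 2.13.0 default change
# (active_n: 100 -> 1). App and endpoint names are illustrative.
config :realtime, RealtimeWeb.Endpoint,
  http: [
    port: 4000,
    protocol_options: [active_n: 100]
  ]
```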

* fix: provide error_code metadata on RealtimeChannel.Logging (supabase#1531)

* feat: disable UTF8 validation on websocket frames (supabase#1532)

Currently all text frames are handled as JSON, which already requires valid UTF-8
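
A minimal sketch of where such a flag lives, assuming a raw `:cowboy_websocket` handler; the module is illustrative, not the project's actual transport:

```elixir
defmodule MyWebsocketHandler do
  @behaviour :cowboy_websocket

  @impl true
  def init(req, state) do
    # validate_utf8 is a cowboy_websocket option (default: true); disabling it
    # skips UTF-8 validation of incoming text frames, since JSON decoding will
    # reject invalid UTF-8 anyway.
    {:cowboy_websocket, req, state, %{validate_utf8: false}}
  end

  @impl true
  def websocket_handle({:text, json}, state), do: {:reply, {:text, json}, state}
  def websocket_handle(_frame, state), do: {:ok, state}

  @impl true
  def websocket_info(_info, state), do: {:ok, state}
end
```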

* fix: move DB setup to happen after Connect.init (supabase#1533)

This change reduces the impact of a slow DB setup on other tenants that landed on the same partition and are trying to connect at the same time

* fix: handle wal bloat (supabase#1528)

Verify that the replication connection is able to reconnect when faced with WAL bloat issues

* feat: replay realtime.messages (supabase#1526)

A new index was created on (inserted_at DESC, topic) WHERE private IS TRUE AND extension = 'broadcast'

The hardcoded limit is 25 for now.
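
A sketch of what such an index could look like as an Ecto migration; the table, schema prefix, and column list are assumptions, not the actual migration:

```elixir
defmodule Realtime.Repo.Migrations.AddMessagesReplayIndex do
  use Ecto.Migration

  def change do
    # Partial index to serve replay queries on private broadcast messages,
    # newest first. Names and prefix are illustrative.
    create index("messages", ["inserted_at DESC", "topic"],
             prefix: "realtime",
             where: "private IS TRUE AND extension = 'broadcast'"
           )
  end
end
```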

* feat: gen_rpc pub sub adapter (supabase#1529)

Add a PubSub adapter that uses gen_rpc to send messages to other nodes.

It uses :gen_rpc.abcast/3 instead of :erlang.send/2

The adapter works very similarly to the PG2 adapter. It consists of
multiple workers that forward to the local node using PubSub.local_broadcast.

The worker is chosen based on the sending process, just like the PG2 adapter does.
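
A minimal sketch of that dispatch path, with illustrative module and worker names (not the adapter's actual implementation):

```elixir
defmodule MyGenRpcPubSub.Broadcast do
  # Fan a message out to every node's local PubSub, picking the worker from
  # the sending pid the same way the PG2 adapter does.
  def broadcast(pubsub, topic, message, broadcast_pool_size) do
    worker = :erlang.phash2(self(), broadcast_pool_size)
    worker_name = :"my_gen_rpc_pubsub_#{worker}"

    # Remote delivery via gen_rpc instead of :erlang.send/2.
    :gen_rpc.abcast(Node.list(), worker_name, {:forward_to_local, pubsub, topic, message})

    # Local delivery stays in-process via Phoenix.PubSub.
    Phoenix.PubSub.local_broadcast(pubsub, topic, message)
  end
end
```

Each remote worker would then receive `{:forward_to_local, ...}` and call `Phoenix.PubSub.local_broadcast/3` on its own node.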

The number of workers is controlled by `:pool_size` or `:broadcast_pool_size`.
This distinction exists because Phoenix.PubSub uses `:pool_size` to
define how many partitions the PubSub registry will use. It's possible
to control them separately by using `:broadcast_pool_size`
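
A hedged example of how the two options might be set when starting PubSub; the adapter module and values are illustrative:

```elixir
children = [
  {Phoenix.PubSub,
   name: Realtime.PubSub,
   adapter: MyGenRpcPubSub,
   # partitions of the PubSub registry (Phoenix.PubSub's own option)
   pool_size: 10,
   # number of gen_rpc broadcast workers, tuned independently
   broadcast_pool_size: 20}
]
```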

* fix: ensure message id doesn't raise on non-map payloads (supabase#1534)
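
A tiny sketch of the defensive shape implied here; the module and function are hypothetical:

```elixir
defmodule MessageId do
  # Only maps can carry an "id" key; anything else yields nil instead of raising.
  def extract(payload) when is_map(payload), do: Map.get(payload, "id")
  def extract(_non_map), do: nil
end
```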

* fix: match error on Connect (supabase#1536)



---------

Co-authored-by: Eduardo Gurgel Pinho <eduardo.gurgel@supabase.io>

* feat: websocket max heap size configuration (supabase#1538)

* fix: set max process heap size to 500MB instead of 8GB
* feat: set websocket transport max heap size

WEBSOCKET_MAX_HEAP_SIZE can be used to configure it
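
A sketch of how the variable could be wired through, assuming a config key read at runtime and a `Process.flag/2` call in the transport process; key names and the default are illustrative:

```elixir
# config/runtime.exs
config :realtime, :websocket_max_heap_size,
  String.to_integer(System.get_env("WEBSOCKET_MAX_HEAP_SIZE", "6000000"))

# In the websocket transport process (value is in machine words):
max_heap = Application.get_env(:realtime, :websocket_max_heap_size)

Process.flag(:max_heap_size, %{
  size: max_heap,
  kill: true,          # terminate this process if it exceeds the cap
  error_logger: true   # and emit an error report when that happens
})
```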

* fix: update gen_rpc to fix gen_rpc_dispatcher issues (supabase#1537)

Issues:

* A single gen_rpc_dispatcher can become a bottleneck if connecting takes some time
* Many calls can land on the dispatcher even though the node is already gone. If we don't validate the node, gen_rpc keeps trying to connect until it times out instead of quickly giving up because the node is not actively connected (see the sketch below).
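
The second bullet amounts to a guard like the following before dispatching; a hypothetical sketch, not gen_rpc's actual code:

```elixir
defmodule SafeRpc do
  # Only dispatch to nodes that are actually connected; otherwise fail fast
  # instead of letting the dispatcher wait for a connect timeout.
  def call(node, mod, fun, args, timeout) do
    if node == node() or node in Node.list() do
      :gen_rpc.call(node, mod, fun, args, timeout)
    else
      {:error, :node_not_connected}
    end
  end
end
```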

* fix: improve ErlSysMon logging for processes (supabase#1540)

Include initial_call, ancestors, registered_name, message_queue_len and total_heap_size

Also bump the long_schedule and long_gc thresholds
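
A minimal sketch of that kind of system monitor (module name and thresholds are illustrative; ancestors come from the process dictionary):

```elixir
defmodule MyErlSysMon do
  use GenServer
  require Logger

  def start_link(_opts), do: GenServer.start_link(__MODULE__, [], name: __MODULE__)

  @impl true
  def init(_) do
    # Bumped long_schedule / long_gc thresholds, in milliseconds.
    :erlang.system_monitor(self(), [{:long_schedule, 250}, {:long_gc, 250}])
    {:ok, []}
  end

  @impl true
  def handle_info({:monitor, pid, event, info}, state) when is_pid(pid) do
    # Enrich the report with the extra keys listed above; :dictionary carries
    # the :"$ancestors" entry.
    details =
      Process.info(pid, [
        :initial_call,
        :registered_name,
        :message_queue_len,
        :total_heap_size,
        :dictionary
      ])

    Logger.warning("ErlSysMon #{inspect(event)} #{inspect(info)}: #{inspect(details)}")
    {:noreply, state}
  end

  def handle_info(_other, state), do: {:noreply, state}
end
```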

* fix: make pubsub adapter configurable (supabase#1539)

---------

Co-authored-by: Filipe Cabaço <filipe@supabase.io>
Co-authored-by: Eduardo Gurgel <eduardo.gurgel@supabase.io>
Co-authored-by: Bradley Haljendi <5642609+Fudster@users.noreply.github.com>
Fudster merged commit 7128d69 into prod on Sep 25, 2025
2 of 3 checks passed