
Unable to open Sentry with Safari, possibly related to Content-Encoding: gzip #2285

Closed
jap opened this issue Jul 20, 2023 · 24 comments · Fixed by #2297 or #2455

Comments

@jap

jap commented Jul 20, 2023

Self-Hosted Version

23.7.0

CPU Architecture

x86_64

Docker Version

24.0.2

Docker Compose Version

2.18.1

Steps to Reproduce

Go to the sentry main page with the latest Safari browser (16.5.1).

Expected Result

I was expecting a styled and working login page.

Actual Result

It doesn't work. Browser Console shows:

[Screenshot: Safari console error while loading app.js, 2023-07-20]

This appears to be related to app.js being served with Content-Encoding: gzip, but Safari not honouring that header and trying to parse the still-gzipped data as JavaScript. Chrome, for example, handles the same response fine.

Note that we're running Sentry behind Caddy, which may be interfering with headers.

Event ID

No response

@jap
Author

jap commented Jul 20, 2023

Note that this issue only popped up after upgrading to 23.7.0, previous versions had no issues.

@azaslavsky
Contributor

Is there any way to connect directly to the Sentry instance without Caddy in the middle, to see if the headers are still incorrect?

@jap
Author

jap commented Jul 23, 2023

Yes; I've just found a local instance that is directly accessible and has the same issue. Trying it out with Safari and using its "copy as curl" and executing that (with an additional -v and --output) gives:

curl from safari
$ curl -v --output app.js.gz 'http://192.168.121.33:8999/_static/dist/sentry/entrypoints/app.js' -X 'GET' -H 'Accept: */*' -H 'Pragma: no-cache' -H 'Cookie: sc=<redacted>; sentrysid=<redacted>' -H 'Referer: http://192.168.121.33:8999/auth/login/sentry/' -H 'Cache-Control: no-cache' -H 'Host: 192.168.121.33:8999' -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.5.2 Safari/605.1.15' -H 'Accept-Language: en-GB,en;q=0.9' -H 'Accept-Encoding: gzip, deflate' -H 'Connection: keep-alive'
Note: Unnecessary use of -X or --request, GET is already inferred.
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying 192.168.121.33:8999...
* Connected to 192.168.121.33 (192.168.121.33) port 8999 (#0)
> GET /_static/dist/sentry/entrypoints/app.js HTTP/1.1
> Host: 192.168.121.33:8999
> Accept: */*
> Pragma: no-cache
> Cookie: sc=<redacted>; sentrysid=<redacted>
> Referer: http://192.168.121.33:8999/auth/login/sentry/
> Cache-Control: no-cache
> User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.5.2 Safari/605.1.15
> Accept-Language: en-GB,en;q=0.9
> Accept-Encoding: gzip, deflate
> Connection: keep-alive
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx
< Date: Sun, 23 Jul 2023 12:15:45 GMT
< Content-Type: application/javascript
< Content-Length: 20121
< Connection: keep-alive
< Content-Disposition: inline; filename="app.js.gz"
< Last-Modified: Mon, 17 Jul 2023 22:09:09 GMT
< Content-Encoding: gzip
< Vary: Accept-Encoding
< Access-Control-Allow-Origin: *
< Cache-Control: max-age=0, must-revalidate
< X-Frame-Options: deny
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< 
{ [641 bytes data]
100 20121  100 20121    0     0   375k      0 --:--:-- --:--:-- --:--:--  436k
* Connection #0 to host 192.168.121.33 left intact
$ file app.js.gz 
app.js.gz: gzip compressed data, max compression, from Unix, original size modulo 2^32 61614

so it looks like it is serving the right thing (a gzipped file, with the correct Content-Encoding and Content-Type headers), but Safari is choking on it.
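For anyone repeating this check without the `file` utility: the test above boils down to looking for the two gzip magic bytes at the start of the payload. A minimal sketch (the helper names are mine, not part of Sentry):

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"

def is_gzipped(payload: bytes) -> bool:
    """True if the payload still carries the gzip magic bytes, i.e. the
    client did not decompress the Content-Encoding: gzip body."""
    return payload[:2] == GZIP_MAGIC

def decode_body(payload: bytes) -> bytes:
    """Decompress the body if it is still gzip-encoded, else return as-is."""
    if is_gzipped(payload):
        return gzip.decompress(payload)
    return payload

# A compressed stand-in for app.js: gzip on the wire, readable once decoded.
raw = b"console.log('hello');"
wire = gzip.compress(raw)
assert is_gzipped(wire) and not is_gzipped(raw)
assert decode_body(wire) == raw
```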

Doing the same with Chrome gives:

curl from Chrome
$ curl -v --output app.js 'http://192.168.121.33:8999/_static/dist/sentry/entrypoints/app.js'   -H 'Accept: */*'   -H 'Accept-Language: en-US,en;q=0.9,nl;q=0.8'   -H 'Cache-Control: no-cache'   -H 'Connection: keep-alive'   -H 'Cookie: sc=<redacted>; sentrysid=<redacted>'   -H 'DNT: 1'   -H 'Pragma: no-cache'   -H 'Referer: http://192.168.121.33:8999/auth/login/sentry/'   -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36'   -H 'sec-gpc: 1'   --compressed   --insecure
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0*   Trying 192.168.121.33:8999...
* Connected to 192.168.121.33 (192.168.121.33) port 8999 (#0)
> GET /_static/dist/sentry/entrypoints/app.js HTTP/1.1
> Host: 192.168.121.33:8999
> Accept-Encoding: deflate, gzip
> Accept: */*
> Accept-Language: en-US,en;q=0.9,nl;q=0.8
> Cache-Control: no-cache
> Connection: keep-alive
> Cookie: sc=<redacted>; sentrysid=<redacted>
> DNT: 1
> Pragma: no-cache
> Referer: http://192.168.121.33:8999/auth/login/sentry/
> User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36
> sec-gpc: 1
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx
< Date: Sun, 23 Jul 2023 12:19:02 GMT
< Content-Type: application/javascript
< Content-Length: 20121
< Connection: keep-alive
< Content-Disposition: inline; filename="app.js.gz"
< Last-Modified: Mon, 17 Jul 2023 22:09:09 GMT
< Content-Encoding: gzip
< Vary: Accept-Encoding
< Access-Control-Allow-Origin: *
< Cache-Control: max-age=0, must-revalidate
< X-Frame-Options: deny
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< 
{ [641 bytes data]
100 20121  100 20121    0     0   500k      0 --:--:-- --:--:-- --:--:--  614k
* Connection #0 to host 192.168.121.33 left intact
$ file app.js
app.js: ASCII text, with very long lines (61535)

Note that the Chrome-generated curl command includes --compressed, which makes curl decompress the data before writing it out, but the payload on the wire is the same.

Trying this with Firefox yields the same results as Chrome.

(Upon further inspection, sentry.css is also mangled in a similar way. Because the app.js resource fails to load, no other compressed resources are pulled in.)

@laurikari

Also seeing this with the latest self-hosted version. It seems to me that the files may be gzip compressed twice.

@laurikari

Upon further inspection, nope, the response is gzip compressed only once.

Safari trips up on the Content-Disposition response header, which has apparently appeared in the most recent Sentry version.

I'm serving Sentry via nginx, so adding this made the site work with Safari again:

  proxy_hide_header Content-Disposition;
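For what it's worth, the problematic combination can be spotted mechanically: a response that is gzip-encoded *and* carries a Content-Disposition filename ending in `.gz`. A rough heuristic sketch (the function name is mine):

```python
def triggers_safari_bug(headers: dict) -> bool:
    """Heuristic: flag responses that pair Content-Encoding: gzip with a
    Content-Disposition filename ending in .gz -- the combination that
    trips Safari up here. Header lookup is case-insensitive."""
    h = {k.lower(): v for k, v in headers.items()}
    if h.get("content-encoding", "").lower() != "gzip":
        return False
    return ".gz" in h.get("content-disposition", "").lower()

# The headers observed in the curl output above:
assert triggers_safari_bug({
    "Content-Encoding": "gzip",
    "Content-Disposition": 'inline; filename="app.js.gz"',
})
# With the header hidden by the proxy, nothing to trip on:
assert not triggers_safari_bug({"Content-Encoding": "gzip"})
```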

@jap
Author

jap commented Jul 24, 2023

Thanks @laurikari , can confirm a similar workaround works with Caddy
(that is, adding:

header -Content-Disposition

to the caddyfile snippet for our sentry host.)
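For reference, a fuller (hypothetical) Caddyfile sketch with that workaround in place; the hostname and upstream address are placeholders, not from my actual config:

```caddy
sentry.example.com {
    # Drop the Content-Disposition header Django adds, which trips up Safari.
    header -Content-Disposition
    reverse_proxy localhost:9000
}
```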

@azaslavsky
Contributor

Our 23.7.0 release upgraded Django to 3.2.20, which is causing issues elsewhere as well. Based on my reading of a similar bug report, this appears to be caused by the Django change too. Will confer with the team and post a fix shortly.

@azaslavsky
Contributor

We've managed to replicate this on our dogfood self-hosted instance. Currently working on getting a fix up.

@azaslavsky
Contributor

So we weren't able to replicate the fix on our local instance (Safari version 16.5). We tried both proxy_hide_header Content-Disposition; alone and with add_header Content-Disposition 'inline'; to no avail. Is there anything else you did to your nginx setup when resolving the bug? We could also go the route of explicitly adding the Content-Encoding for Safari, but that feels like a bigger and riskier change, so I want to make sure there isn't a simpler solution you've seen.

@laurikari

Hmm, adding proxy_hide_header Content-Disposition; was the only change I did to get this to work.

If I remove it, the issue reappears.

I did have to empty caches (or do a shift-reload) after changing the nginx config, otherwise Safari uses cached values for the CSS and JS files. Latest Safari on macOS arm64 (16.5.2 18615.2.9.11.10).

@azaslavsky
Contributor

Ah, I generally use Chrome for development and misunderstood how caches are reset in Safari 🤦. Looks good on my machine as well; pushing the fix now.

azaslavsky added a commit that referenced this issue Jul 25, 2023
With release 23.7.0, the upstream sentry@23.7.0 upgraded Django to
3.2.20, a breaking change for that dependency. This introduced a number
of bugs into self-hosted, including one where the Content-Disposition
response header is now set by Django in a manner that is illegible to
Safari, as documented in https://code.djangoproject.com/ticket/31082.

This change modifies the default nginx config to override this header
with the simpler `inline` setting, allowing Safari to properly
decompress gzipped resources like CSS and Javascript bundles.

Fixes #2285
azaslavsky added a commit that referenced this issue Jul 25, 2023
azaslavsky added a commit that referenced this issue Jul 25, 2023
@azaslavsky
Contributor

For future reference, the upstream Django 3 bug where they say they won't fix it :/ https://code.djangoproject.com/ticket/31082

azaslavsky added a commit that referenced this issue Jul 25, 2023
@azaslavsky
Contributor

This fix is currently being merged. It will be included in the 23.7.1 release I am currently preparing.

azaslavsky added a commit that referenced this issue Jul 25, 2023
@soer7022

I pulled the latest release and ran ./install.sh, but I still have the same issue... Could this be caused by having a Traefik load balancer in front?

I have modified the docker-compose.yml:

x-restart-policy: &restart_policy
  restart: unless-stopped
x-depends_on-healthy: &depends_on-healthy
  condition: service_healthy
x-depends_on-default: &depends_on-default
  condition: service_started
x-healthcheck-defaults: &healthcheck_defaults
  # Avoid setting the interval too small, as docker uses much more CPU than one would expect.
  # Related issues:
  # https://github.com/moby/moby/issues/39102
  # https://github.com/moby/moby/issues/39388
  # https://github.com/getsentry/self-hosted/issues/1000
  interval: "$HEALTHCHECK_INTERVAL"
  timeout: "$HEALTHCHECK_TIMEOUT"
  retries: $HEALTHCHECK_RETRIES
  start_period: 10s
x-sentry-defaults: &sentry_defaults
  <<: *restart_policy
  image: sentry-self-hosted-local
  # Set the platform to build for linux/arm64 when needed on Apple silicon Macs.
  platform: ${DOCKER_PLATFORM:-}
  build:
    context: ./sentry
    args:
      - SENTRY_IMAGE
  depends_on:
    redis:
      <<: *depends_on-healthy
    kafka:
      <<: *depends_on-healthy
    postgres:
      <<: *depends_on-healthy
    memcached:
      <<: *depends_on-default
    smtp:
      <<: *depends_on-default
    snuba-api:
      <<: *depends_on-default
    snuba-consumer:
      <<: *depends_on-default
    snuba-outcomes-consumer:
      <<: *depends_on-default
    snuba-sessions-consumer:
      <<: *depends_on-default
    snuba-transactions-consumer:
      <<: *depends_on-default
    snuba-subscription-consumer-events:
      <<: *depends_on-default
    snuba-subscription-consumer-transactions:
      <<: *depends_on-default
    snuba-replacer:
      <<: *depends_on-default
    symbolicator:
      <<: *depends_on-default
    vroom:
      <<: *depends_on-default
  entrypoint: "/etc/sentry/entrypoint.sh"
  command: ["run", "web"]
  environment:
    PYTHONUSERBASE: "/data/custom-packages"
    SENTRY_CONF: "/etc/sentry"
    SNUBA: "http://snuba-api:1218"
    VROOM: "http://vroom:8085"
    # Force everything to use the system CA bundle
    # This is mostly needed to support installing custom CA certs
    # This one is used by botocore
    DEFAULT_CA_BUNDLE: &ca_bundle "/etc/ssl/certs/ca-certificates.crt"
    # This one is used by requests
    REQUESTS_CA_BUNDLE: *ca_bundle
    # This one is used by grpc/google modules
    GRPC_DEFAULT_SSL_ROOTS_FILE_PATH_ENV_VAR: *ca_bundle
    # Leaving the value empty to just pass whatever is set
    # on the host system (or in the .env file)
    SENTRY_EVENT_RETENTION_DAYS:
    SENTRY_MAIL_HOST:
    SENTRY_MAX_EXTERNAL_SOURCEMAP_SIZE:
    # Set this value if you plan on using the Suggested Fix Feature
    OPENAI_API_KEY:
  volumes:
    - "sentry-data:/data"
    - "./sentry:/etc/sentry"
    - "./geoip:/geoip:ro"
    - "./certificates:/usr/local/share/ca-certificates:ro"
x-snuba-defaults: &snuba_defaults
  <<: *restart_policy
  depends_on:
    clickhouse:
      <<: *depends_on-healthy
    kafka:
      <<: *depends_on-healthy
    redis:
      <<: *depends_on-healthy
  image: "$SNUBA_IMAGE"
  environment:
    SNUBA_SETTINGS: self_hosted
    CLICKHOUSE_HOST: clickhouse
    DEFAULT_BROKERS: "kafka:9092"
    REDIS_HOST: redis
    UWSGI_MAX_REQUESTS: "10000"
    UWSGI_DISABLE_LOGGING: "true"
    # Leaving the value empty to just pass whatever is set
    # on the host system (or in the .env file)
    SENTRY_EVENT_RETENTION_DAYS:
services:
  smtp:
    <<: *restart_policy
    image: tianon/exim4
    hostname: "${SENTRY_MAIL_HOST:-}"
    volumes:
      - "sentry-smtp:/var/spool/exim4"
      - "sentry-smtp-log:/var/log/exim4"
  memcached:
    <<: *restart_policy
    image: "memcached:1.6.21-alpine"
    command: ["-I", "${SENTRY_MAX_EXTERNAL_SOURCEMAP_SIZE:-1M}"]
    healthcheck:
      <<: *healthcheck_defaults
      # From: https://stackoverflow.com/a/31877626/5155484
      test: echo stats | nc 127.0.0.1 11211
  redis:
    <<: *restart_policy
    image: "redis:6.2.12-alpine"
    healthcheck:
      <<: *healthcheck_defaults
      test: redis-cli ping
    volumes:
      - "sentry-redis:/data"
    ulimits:
      nofile:
        soft: 10032
        hard: 10032
  postgres:
    <<: *restart_policy
    # Using the same postgres version as Sentry dev for consistency purposes
    image: "postgres:14.5"
    healthcheck:
      <<: *healthcheck_defaults
      # Using default user "postgres" from sentry/sentry.conf.example.py or value of POSTGRES_USER if provided
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres}"]
    command:
      [
        "postgres",
        "-c",
        "wal_level=logical",
        "-c",
        "max_replication_slots=1",
        "-c",
        "max_wal_senders=1",
      ]
    environment:
      POSTGRES_HOST_AUTH_METHOD: "trust"
    entrypoint: /opt/sentry/postgres-entrypoint.sh
    volumes:
      - "sentry-postgres:/var/lib/postgresql/data"
      - type: bind
        read_only: true
        source: ./postgres/
        target: /opt/sentry/
  zookeeper:
    <<: *restart_policy
    image: "confluentinc/cp-zookeeper:5.5.7"
    environment:
      ZOOKEEPER_CLIENT_PORT: "2181"
      CONFLUENT_SUPPORT_METRICS_ENABLE: "false"
      ZOOKEEPER_LOG4J_ROOT_LOGLEVEL: "WARN"
      ZOOKEEPER_TOOLS_LOG4J_LOGLEVEL: "WARN"
      KAFKA_OPTS: "-Dzookeeper.4lw.commands.whitelist=ruok"
    ulimits:
      nofile:
        soft: 4096
        hard: 4096
    volumes:
      - "sentry-zookeeper:/var/lib/zookeeper/data"
      - "sentry-zookeeper-log:/var/lib/zookeeper/log"
      - "sentry-secrets:/etc/zookeeper/secrets"
    healthcheck:
      <<: *healthcheck_defaults
      test:
        ["CMD-SHELL", 'echo "ruok" | nc -w 2 localhost 2181 | grep imok']
  kafka:
    <<: *restart_policy
    depends_on:
      zookeeper:
        <<: *depends_on-healthy
    image: "confluentinc/cp-kafka:5.5.7"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9092"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: "1"
      KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS: "1"
      KAFKA_LOG_RETENTION_HOURS: "24"
      KAFKA_MESSAGE_MAX_BYTES: "50000000" #50MB or bust
      KAFKA_MAX_REQUEST_SIZE: "50000000" #50MB on requests apparently too
      CONFLUENT_SUPPORT_METRICS_ENABLE: "false"
      KAFKA_LOG4J_LOGGERS: "kafka.cluster=WARN,kafka.controller=WARN,kafka.coordinator=WARN,kafka.log=WARN,kafka.server=WARN,kafka.zookeeper=WARN,state.change.logger=WARN"
      KAFKA_LOG4J_ROOT_LOGLEVEL: "WARN"
      KAFKA_TOOLS_LOG4J_LOGLEVEL: "WARN"
    ulimits:
      nofile:
        soft: 4096
        hard: 4096
    volumes:
      - "sentry-kafka:/var/lib/kafka/data"
      - "sentry-kafka-log:/var/lib/kafka/log"
      - "sentry-secrets:/etc/kafka/secrets"
    healthcheck:
      <<: *healthcheck_defaults
      test: ["CMD-SHELL", "nc -z localhost 9092"]
      interval: 10s
      timeout: 10s
      retries: 30
  clickhouse:
    <<: *restart_policy
    image: clickhouse-self-hosted-local
    build:
      context: ./clickhouse
      args:
        BASE_IMAGE: "${CLICKHOUSE_IMAGE:-}"
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    volumes:
      - "sentry-clickhouse:/var/lib/clickhouse"
      - "sentry-clickhouse-log:/var/log/clickhouse-server"
      - type: bind
        read_only: true
        source: ./clickhouse/config.xml
        target: /etc/clickhouse-server/config.d/sentry.xml
    environment:
      # This limits Clickhouse's memory to 30% of the host memory
      # If you have high volume and your search return incomplete results
      # You might want to change this to a higher value (and ensure your host has enough memory)
      MAX_MEMORY_USAGE_RATIO: 0.3
    healthcheck:
      test: [
          "CMD-SHELL",
          # Manually override any http_proxy envvar that might be set, because
          # this wget does not support no_proxy. See:
          # https://github.com/getsentry/self-hosted/issues/1537
          "http_proxy='' wget -nv -t1 --spider 'http://localhost:8123/' || exit 1",
        ]
      interval: 10s
      timeout: 10s
      retries: 30
  geoipupdate:
    image: "maxmindinc/geoipupdate:v5.1.1"
    # Override the entrypoint in order to avoid using envvars for config.
    # Futz with settings so we can keep mmdb and conf in same dir on host
    # (image looks for them in separate dirs by default).
    entrypoint:
      ["/usr/bin/geoipupdate", "-d", "/sentry", "-f", "/sentry/GeoIP.conf"]
    volumes:
      - "./geoip:/sentry"
  snuba-api:
    <<: *snuba_defaults
  # Kafka consumer responsible for feeding events into Clickhouse
  snuba-consumer:
    <<: *snuba_defaults
    command: consumer --storage errors --auto-offset-reset=latest --max-batch-time-ms 750 --no-strict-offset-reset
  # Kafka consumer responsible for feeding outcomes into Clickhouse
  # Use --auto-offset-reset=earliest to recover up to 7 days of TSDB data
  # since we did not do a proper migration
  snuba-outcomes-consumer:
    <<: *snuba_defaults
    command: consumer --storage outcomes_raw --auto-offset-reset=earliest --max-batch-time-ms 750 --no-strict-offset-reset
  # Kafka consumer responsible for feeding session data into Clickhouse
  snuba-sessions-consumer:
    <<: *snuba_defaults
    command: consumer --storage sessions_raw --auto-offset-reset=latest --max-batch-time-ms 750 --no-strict-offset-reset
  # Kafka consumer responsible for feeding transactions data into Clickhouse
  snuba-transactions-consumer:
    <<: *snuba_defaults
    command: consumer --storage transactions --consumer-group transactions_group --auto-offset-reset=latest --max-batch-time-ms 750 --no-strict-offset-reset
  snuba-replays-consumer:
    <<: *snuba_defaults
    command: consumer --storage replays --auto-offset-reset=latest --max-batch-time-ms 750 --no-strict-offset-reset
  snuba-replacer:
    <<: *snuba_defaults
    command: replacer --storage errors --auto-offset-reset=latest --no-strict-offset-reset
  snuba-subscription-consumer-events:
    <<: *snuba_defaults
    command: subscriptions-scheduler-executor --dataset events --entity events --auto-offset-reset=latest --no-strict-offset-reset --consumer-group=snuba-events-subscriptions-consumers --followed-consumer-group=snuba-consumers --delay-seconds=60 --schedule-ttl=60 --stale-threshold-seconds=900
  snuba-subscription-consumer-sessions:
    <<: *snuba_defaults
    command: subscriptions-scheduler-executor --dataset sessions --entity sessions --auto-offset-reset=latest --no-strict-offset-reset --consumer-group=snuba-sessions-subscriptions-consumers --followed-consumer-group=sessions-group --delay-seconds=60 --schedule-ttl=60 --stale-threshold-seconds=900
  snuba-subscription-consumer-transactions:
    <<: *snuba_defaults
    command: subscriptions-scheduler-executor --dataset transactions --entity transactions --auto-offset-reset=latest --no-strict-offset-reset --consumer-group=snuba-transactions-subscriptions-consumers --followed-consumer-group=transactions_group --delay-seconds=60 --schedule-ttl=60 --stale-threshold-seconds=900
  snuba-profiling-profiles-consumer:
    <<: *snuba_defaults
    command: consumer --storage profiles --auto-offset-reset=latest --max-batch-time-ms 1000 --no-strict-offset-reset
  snuba-profiling-functions-consumer:
    <<: *snuba_defaults
    command: consumer --storage functions_raw --auto-offset-reset=latest --max-batch-time-ms 1000 --no-strict-offset-reset
  symbolicator:
    <<: *restart_policy
    image: "$SYMBOLICATOR_IMAGE"
    volumes:
      - "sentry-symbolicator:/data"
      - type: bind
        read_only: true
        source: ./symbolicator
        target: /etc/symbolicator
    command: run -c /etc/symbolicator/config.yml
  symbolicator-cleanup:
    <<: *restart_policy
    image: symbolicator-cleanup-self-hosted-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: "$SYMBOLICATOR_IMAGE"
    command: '"55 23 * * * gosu symbolicator symbolicator cleanup"'
    volumes:
      - "sentry-symbolicator:/data"
  web:
    <<: *sentry_defaults
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.web.rule=Host(`mydomain.com`)"
      - "traefik.http.routers.web.entrypoints=websecure"
      - "traefik.http.routers.web.tls.certresolver=myresolver"
    ulimits:
      nofile:
        soft: 4096
        hard: 4096
    healthcheck:
      <<: *healthcheck_defaults
      test:
        - "CMD"
        - "/bin/bash"
        - "-c"
        # Courtesy of https://unix.stackexchange.com/a/234089/108960
        - 'exec 3<>/dev/tcp/127.0.0.1/9000 && echo -e "GET /_health/ HTTP/1.1\r\nhost: 127.0.0.1\r\n\r\n" >&3 && grep ok -s -m 1 <&3'
  cron:
    <<: *sentry_defaults
    command: run cron
  worker:
    <<: *sentry_defaults
    command: run worker
  events-consumer:
    <<: *sentry_defaults
    command: run consumer ingest-events --consumer-group ingest-consumer
  attachments-consumer:
    <<: *sentry_defaults
    command: run consumer ingest-attachments --consumer-group ingest-consumer
  transactions-consumer:
    <<: *sentry_defaults
    command: run consumer ingest-transactions --consumer-group ingest-consumer
  ingest-replay-recordings:
    <<: *sentry_defaults
    command: run consumer ingest-replay-recordings --consumer-group ingest-replay-recordings
  ingest-profiles:
    <<: *sentry_defaults
    command: run consumer --no-strict-offset-reset ingest-profiles --consumer-group ingest-profiles
  post-process-forwarder-errors:
    <<: *sentry_defaults
    command: run consumer post-process-forwarder-errors --consumer-group post-process-forwarder --synchronize-commit-log-topic=snuba-commit-log --synchronize-commit-group=snuba-consumers
  post-process-forwarder-transactions:
    <<: *sentry_defaults
    command: run consumer post-process-forwarder-transactions --consumer-group post-process-forwarder --synchronize-commit-log-topic=snuba-transactions-commit-log --synchronize-commit-group transactions_group
  subscription-consumer-events:
    <<: *sentry_defaults
    command: run consumer events-subscription-results --consumer-group query-subscription-consumer
  subscription-consumer-transactions:
    <<: *sentry_defaults
    command: run consumer transactions-subscription-results --consumer-group query-subscription-consumer
  sentry-cleanup:
    <<: *sentry_defaults
    image: sentry-cleanup-self-hosted-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: sentry-self-hosted-local
    entrypoint: "/entrypoint.sh"
    command: '"0 0 * * * gosu sentry sentry cleanup --days $SENTRY_EVENT_RETENTION_DAYS"'
  nginx:
    <<: *restart_policy
    ports:
      - "$SENTRY_BIND:80/tcp"
    image: "nginx:1.25.1-alpine"
    volumes:
      - type: bind
        read_only: true
        source: ./nginx
        target: /etc/nginx
      - sentry-nginx-cache:/var/cache/nginx
    depends_on:
      - web
      - relay
  relay:
    <<: *restart_policy
    image: "$RELAY_IMAGE"
    volumes:
      - type: bind
        read_only: true
        source: ./relay
        target: /work/.relay
      - type: bind
        read_only: true
        source: ./geoip
        target: /geoip
    depends_on:
      kafka:
        <<: *depends_on-healthy
      redis:
        <<: *depends_on-healthy
      web:
        <<: *depends_on-healthy
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.relay.rule=Host(`mydomain.com`) && PathPrefix(`/api/store/`, `/api/{id:[1-9]\\d*/}`)"
      - "traefik.http.routers.relay.entrypoints=websecure"
      - "traefik.http.routers.relay.tls.certresolver=myresolver"
  vroom:
    <<: *restart_policy
    image: "$VROOM_IMAGE"
    environment:
      SENTRY_KAFKA_BROKERS_PROFILING: "kafka:9092"
      SENTRY_KAFKA_BROKERS_OCCURRENCES: "kafka:9092"
      SENTRY_BUCKET_PROFILES: file://localhost//var/lib/sentry-profiles
      SENTRY_SNUBA_HOST: "http://snuba-api:1218"
    volumes:
      - sentry-vroom:/var/lib/sentry-profiles
    depends_on:
      kafka:
        <<: *depends_on-healthy
  vroom-cleanup:
    <<: *restart_policy
    image: vroom-cleanup-self-hosted-local
    build:
      context: ./cron
      args:
        BASE_IMAGE: "$VROOM_IMAGE"
    entrypoint: "/entrypoint.sh"
    environment:
      # Leaving the value empty to just pass whatever is set
      # on the host system (or in the .env file)
      SENTRY_EVENT_RETENTION_DAYS:
    command: '"0 0 * * * find /var/lib/sentry-profiles -type f -mtime +$SENTRY_EVENT_RETENTION_DAYS -delete"'
    volumes:
      - sentry-vroom:/var/lib/sentry-profiles

volumes:
  # These store application data that should persist across restarts.
  sentry-data:
    external: true
  sentry-postgres:
    external: true
  sentry-redis:
    external: true
  sentry-zookeeper:
    external: true
  sentry-kafka:
    external: true
  sentry-clickhouse:
    external: true
  sentry-symbolicator:
    external: true
  # This volume stores profiles and should be persisted.
  # Not being external will still persist data across restarts.
  # It won't persist if someone does a docker compose down -v.
  sentry-vroom:
  # These store ephemeral data that needn't persist across restarts.
  # That said, volumes will be persisted across restarts until they are deleted.
  sentry-secrets:
  sentry-smtp:
  sentry-nginx-cache:
  sentry-zookeeper-log:
  sentry-kafka-log:
  sentry-smtp-log:
  sentry-clickhouse-log:

My traefik docker compose is:

version: "3.3"

services:

  traefik:
    image: "traefik:v2.9"
    container_name: "traefik"
    network_mode: "host"
    restart: "unless-stopped"
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
      - "--certificatesresolvers.myresolver.acme.email=myemail@email.com"
      - "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
    ports:
      - "443:443"
      - "8080:8080"
    volumes:
      - "./letsencrypt:/letsencrypt"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"

Any ideas?

@jap
Author

jap commented Jul 26, 2023

Did you shift-reload? Safari may be caching some things.

@soer7022

I have tried deleting all cache, and reload even going as far as to restart my Mac. Still no luck...

@github-actions github-actions bot locked and limited conversation to collaborators Aug 12, 2023
@hubertdeng123
Member

Looks like this hacky workaround is creating issues:
#2377

@zKoz210
Contributor

zKoz210 commented Sep 22, 2023

@hubertdeng123

After the release of 23.9.1, this problem began to reproduce again. Can we bring this hack back or come up with other ways to make Safari work properly?

@soer7022

It's been broken since mid-July...

@frame

frame commented Sep 22, 2023

If you use Apache as frontend proxy, you can workaround this issue by adding this to your vhost config:

Header unset Content-Disposition

(requires mod_headers)
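In context, a hypothetical vhost sketch; the ServerName and upstream address are placeholders:

```apache
# Assumes mod_proxy, mod_proxy_http, and mod_headers are enabled.
<VirtualHost *:443>
    ServerName sentry.example.com
    ProxyPass / http://localhost:9000/
    ProxyPassReverse / http://localhost:9000/
    # Drop the Content-Disposition header that trips up Safari.
    Header unset Content-Disposition
</VirtualHost>
```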

@hubertdeng123
Member

Unfortunately, we don't have a good solution right now. This doesn't appear to be an issue specifically with our code, but with the way Safari handles the headers Django 3 sets. This hack can't come back because it caused a bigger issue: attachments were no longer downloaded properly as a result of the workaround. You are free to use the hacky workaround in your own instance if you'd like.

@mwarkentin
Member

Looking at fixing this for our single tenant deployments, and it seems like we can fix it specifically for our static files without touching anything else like so:

location /_static/ {
  proxy_hide_header Content-Disposition;
}
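For context, a hypothetical server block showing where that location fits; the server name and upstream are placeholders. Attachment downloads outside /_static/ keep their Content-Disposition header:

```nginx
server {
    listen 443 ssl;
    server_name sentry.example.com;

    # Static assets: strip the header that trips up Safari.
    location /_static/ {
        proxy_hide_header Content-Disposition;
        proxy_pass http://localhost:9000;
    }

    # Everything else (including attachments) is proxied untouched.
    location / {
        proxy_pass http://localhost:9000;
    }
}
```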

mwarkentin added a commit that referenced this issue Oct 6, 2023
Fixes #2285

Applies the same `proxy_hide_header Content-Disposition;`, but only to paths that start with `_static/` which should avoid the issue introduced previously with attachment downloads.
azaslavsky pushed a commit that referenced this issue Oct 7, 2023
* Update nginx.conf

Fixes #2285

Applies the same `proxy_hide_header Content-Disposition;`, but only to paths that start with `_static/` which should avoid the issue introduced previously with attachment downloads.

* Update nginx/nginx.conf

Co-authored-by: Amin Vakil <info@aminvakil.com>

---------

Co-authored-by: Amin Vakil <info@aminvakil.com>
@yozshujar

Sentry 23.10.0 same issue on safari 17

@himynameisjonas

Sentry 23.10.0 same issue on safari 17

It works for me with 23.10.0 in Safari, both version 16 and 17. Might be a browser cache issue or something

gersmann added a commit to gersmann/charts that referenced this issue Oct 27, 2023
There's a bug in Sentry which prevents Safari from rendering assets (JS).

getsentry/self-hosted#2285

It was fixed for the self-hosted deploy by adding a 'proxy_hide_header' statement. 

getsentry/self-hosted@ab9dbbd
Mokto pushed a commit to sentry-kubernetes/charts that referenced this issue Oct 31, 2023
* fix: hide content-disposition header on /static for Safari

* fix: tabs / whitespace

* chore: update chart version
@github-actions github-actions bot locked and limited conversation to collaborators Nov 3, 2023
tcorej pushed a commit to vectary/sentry-self-hosted that referenced this issue Apr 20, 2024
tcorej pushed a commit to vectary/sentry-self-hosted that referenced this issue Apr 20, 2024