
[13.x] Add optional disk storage for large SQS queue payloads#59734

Open
Orrison wants to merge 4 commits into laravel:13.x from Orrison:sqs-disk-extended

Conversation

Contributor

@Orrison commented Apr 16, 2026

When using the SQS queue driver, sending a job with a payload that exceeds the queue's maximum message size causes AWS to reject it with an InvalidParameterValue error:

One or more parameters are invalid. Reason: Message must be shorter than 262144 bytes.

Note: AWS did increase the SQS max from 256 KiB to 1 MiB in August 2025, but large payloads can still hit this limit.

This adds native support for automatically offloading large payloads to a configured filesystem disk (e.g. S3) and sending a small pointer through SQS instead. The worker then fetches the full payload from disk when processing the job.

This is a well-known strategy that AWS recommends for working around the limitation, and Java and Python implementations of it already exist: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-managing-large-messages.html

This is fully opt-in and backwards compatible; existing SQS users are unaffected. The feature is controlled by a new extended_store_options block in the SQS queue connection config, with enabled defaulting to false:

'sqs' => [
    // ...existing config...
    'extended_store_options' => [
        'enabled' => env('SQS_STORE_ENABLED', false),
        'disk' => env('SQS_STORE_DISK', 's3'),
        'prefix' => env('SQS_STORE_PREFIX', ''),
        'always' => false,
        'cleanup' => true,
    ],
],
  • enabled - Turn the feature on/off. Defaults to false.
  • disk - Which filesystem disk to store payloads on (e.g. s3).
  • prefix - Path prefix for stored payload files on the disk.
  • always - When true, store every payload to disk regardless of size (useful for consistency).
  • cleanup - When true, delete the disk file after the job is successfully processed.
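Assuming the env variable names shown in the config block above, enabling the feature for an environment might look like this in .env (the disk and prefix values here are purely illustrative):

```ini
SQS_STORE_ENABLED=true
SQS_STORE_DISK=s3
SQS_STORE_PREFIX=sqs-payloads
```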

How it works

  • On push: if the payload exceeds 1 MB (or always is enabled), the payload is written to disk at {prefix}/{uuid}.json and SQS receives a pointer message containing {"@pointer": "path/to/file.json"}.
  • On pop: SqsJob::getRawBody() detects the @pointer key, fetches the real payload from disk, and caches it for the lifetime of the job.
  • On delete: if cleanup is enabled and the message was a pointer, the disk file is removed.
  • On clear: if cleanup is enabled and a prefix is configured, the entire prefix directory is removed from disk.
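The push-side step above could be sketched roughly as follows. This is an illustrative sketch based only on the description, not the PR's actual code; the method name resolvePayload and the way options are passed in are assumptions:

```php
// Illustrative sketch only: names and structure are assumptions based on
// the description above, not the PR's actual implementation.
use Illuminate\Support\Facades\Storage;
use Illuminate\Support\Str;

protected function resolvePayload(string $payload, array $options): string
{
    $limit = 1024 * 1024; // 1 MB threshold, per the description above

    if (! ($options['always'] ?? false) && strlen($payload) <= $limit) {
        return $payload; // small payloads pass through to SQS unchanged
    }

    // Write the full payload to the configured disk at {prefix}/{uuid}.json...
    $path = ltrim(($options['prefix'] ?? '').'/'.Str::uuid().'.json', '/');

    Storage::disk($options['disk'] ?? 's3')->put($path, $payload);

    // ...and send only a small pointer message through SQS.
    return json_encode(['@pointer' => $path]);
}
```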

The @pointer key begins with an @, which cannot appear as a PHP class property name, so it cannot collide with keys in normal job payloads.
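The pop-side resolution in SqsJob::getRawBody() might look like the following minimal sketch. The caching property and the disk accessor here are assumed names for illustration, not necessarily what the PR uses:

```php
// Illustrative sketch: property and helper names are assumptions.
use Illuminate\Support\Facades\Storage;

public function getRawBody()
{
    // Cache the resolved payload for the lifetime of the job so the
    // disk is read at most once.
    if ($this->cachedRawBody !== null) {
        return $this->cachedRawBody;
    }

    $body = $this->job['Body'];
    $decoded = json_decode($body, true);

    // Pointer messages look like {"@pointer": "path/to/file.json"}.
    if (is_array($decoded) && isset($decoded['@pointer'])) {
        $body = Storage::disk($this->extendedStoreDisk())
            ->get($decoded['@pointer']);
    }

    return $this->cachedRawBody = $body;
}
```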

@Orrison marked this pull request as draft April 16, 2026 23:09
Orrison added 4 commits April 16, 2026 21:20
Signed-off-by: Kevin Ullyott <ullyott.kevin@gmail.com>
Signed-off-by: Kevin Ullyott <ullyott.kevin@gmail.com>
Signed-off-by: Kevin Ullyott <ullyott.kevin@gmail.com>
Signed-off-by: Kevin Ullyott <ullyott.kevin@gmail.com>
@Orrison force-pushed the sqs-disk-extended branch from 79eb26e to 1dcb06d April 17, 2026 01:21
@Orrison marked this pull request as ready for review April 17, 2026 01:27
@Orrison changed the title from "Add optional disk storage for large SQS queue payloads" to "[13.x] Add optional disk storage for large SQS queue payloads" Apr 17, 2026
@devenjahnke

This feature would be a welcome addition to the framework! We're currently evaluating adoption of SQS as our queue driver, and this would alleviate one of our concerns with doing so.
