feat(dropbox): move media upload processing from LV jobs to cron task #534
Conversation
feat(dropbox-client): manage http 429 on upload
📝 Walkthrough

Adds a DB-backed PendingMediaUpload workflow: new entity, repository, migrations, service processing, scheduled Artisan command(s), Dropbox token auto-refresh and a 429-aware client, provider/kernel wiring, config changes, and unit tests.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Scheduler as Scheduler/Cron
    participant Cmd as ProcessPendingMediaUploadsCommand
    participant Svc as SummitService
    participant Repo as PendingMediaUploadRepository
    participant DB as Database
    participant FileUtil as FileUtils
    participant StoragePub as PublicStorageStrategy
    participant StoragePriv as PrivateStorageStrategy
    participant Dropbox as RetryAfterDropboxClient

    Scheduler->>Cmd: trigger
    activate Cmd
    Cmd->>Svc: processPendingMediaUploads(max_retries)
    deactivate Cmd
    activate Svc
    Svc->>Repo: resetStuckProcessingRows()
    Repo->>DB: UPDATE STATUS_PROCESSING -> STATUS_PENDING
    DB-->>Repo: rows_affected
    Svc->>Repo: getPendingUploads()
    Repo->>DB: SELECT WHERE status = STATUS_PENDING
    DB-->>Repo: pending_list
    loop for each pending upload
        Svc->>Repo: set STATUS_PROCESSING, increment attempts
        Repo->>DB: persist change
        Svc->>FileUtil: download(publicPath) -> tempFile
        FileUtil-->>Svc: tempFilePath
        Svc->>StoragePub: save(tempFile) -> publicPath
        Svc->>StoragePriv: save(tempFile) -> privatePath
        StoragePriv->>Dropbox: upload (429-aware retry)
        alt upload success
            Svc->>Repo: set STATUS_COMPLETED, set processed_date
        else error
            Svc->>Repo: set STATUS_ERROR or STATUS_PENDING, set error_message
        end
        Svc->>FileUtil: cleanup tempFile / remote temp
    end
    Svc->>Repo: deleteCompletedOlderThan(days=7)
    Repo->>DB: DELETE ...
    DB-->>Repo: deleted_count
    Svc-->>Cmd: return stats {processed, errors}
    deactivate Svc
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 5 passed
📘 OpenAPI / Swagger preview ➡️ https://OpenStackweb.github.io/summit-api/openapi/pr-534/ (this page is automatically updated on each push to this PR)
Actionable comments posted: 18
🧹 Nitpick comments (2)
app/Jobs/ProcessMediaUpload.php (1)
26-30: Consider marking for concrete removal timing. The deprecation note is helpful, but without a target version or removal ticket it's easy to lose track of. If you keep the job around for in-flight queue drainage, mention the cutoff commit/deployment (or link the ClickUp task) so the future cleanup can be scheduled deterministically.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/Jobs/ProcessMediaUpload.php` around lines 26 - 30, Update the deprecation block on the ProcessMediaUpload class to include a concrete removal target (e.g., a specific release version or commit hash) or a tracking ticket ID so future cleanup is deterministic; locate the docblock above the ProcessMediaUpload class definition and replace the vague “Can be removed in a future cleanup” line with a precise marker such as “REMOVE_BY: vX.Y.Z / commit <hash> / ClickUp `#12345`” and, if relevant, add a short note that it remains only for draining in-flight queue jobs.

app/Services/Model/Imp/PresentationService.php (1)
1153-1162: Consider extracting pending-upload creation into one helper. The `PendingMediaUpload` field mapping is duplicated in add/update paths; a private helper would keep future fields/status changes consistent.

♻️ Proposed refactor shape
```diff
+    private function createPendingMediaUpload(
+        Summit $summit,
+        $mediaUploadType,
+        PresentationMediaUpload $mediaUpload,
+        FileUploadInfo $fileInfo
+    ): void {
+        $pendingUpload = new PendingMediaUpload();
+        $pendingUpload->setSummitId($summit->getId());
+        $pendingUpload->setMediaUploadTypeId($mediaUploadType->getId());
+        $pendingUpload->setPublicPath($mediaUpload->getPath(IStorageTypesConstants::PublicType));
+        $pendingUpload->setPrivatePath($mediaUpload->getPath(IStorageTypesConstants::PrivateType));
+        $pendingUpload->setFileName($fileInfo->getFileName());
+        $pendingUpload->setTempFilePath($fileInfo->getFilePath());
+        $pendingUpload->setStatus(PendingMediaUpload::STATUS_PENDING);
+        $this->pending_media_upload_repository->add($pendingUpload);
+    }
```

Also applies to: 1277-1286
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/Services/Model/Imp/PresentationService.php` around lines 1153 - 1162, The PendingMediaUpload population is duplicated; extract a private helper (e.g., buildPendingMediaUpload or createPendingMediaUpload) that accepts parameters like $summit, $mediaUploadType, $mediaUpload, $fileInfo and returns a fully populated PendingMediaUpload (calls setSummitId, setMediaUploadTypeId, setPublicPath, setPrivatePath, setFileName, setTempFilePath, setStatus). Replace the inline construction in the current add path and the other occurrence (the add/update block that also calls pending_media_upload_repository->add) with calls to this helper and then call pending_media_upload_repository->add($pendingUpload) as before so all field mappings remain consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/Console/Commands/ProcessPendingMediaUploadsCommand.php`:
- Around line 66-87: The handle method in ProcessPendingMediaUploadsCommand
currently swallows failures by not returning an exit code; update handle (the
method that calls $this->summit_service->processPendingMediaUploads()) to
explicitly return 0 on success (after the info log that prints $delta and
$stats) and return 1 in the catch block (after calling Log::warning($ex) and
$this->error($ex->getMessage())) so the Artisan command signals failure to
schedulers/cron.
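A minimal sketch of the suggested shape, assuming the service call and logging statements described in the prompt above (exact method and property names may differ in the actual command):

```php
public function handle(): int
{
    try {
        $start = time();
        $stats = $this->summit_service->processPendingMediaUploads();
        $delta = time() - $start;
        $this->info(sprintf('Done in %d seconds, stats %s', $delta, json_encode($stats)));
        return 0; // explicit success exit code
    } catch (\Exception $ex) {
        Log::warning($ex);
        $this->error($ex->getMessage());
        return 1; // non-zero so schedulers/cron observe the failure
    }
}
```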
In `@app/Console/Commands/ReconcileMediaUploadsCommand.php`:
- Around line 65-103: Change the handle() signature to return int and ensure the
method returns 0 on success and 1 on failure: add a return type declaration to
handle(): int, then after the successful completion (after the final
$this->info(...) call) return 0; and inside the catch (Exception $ex) block,
after logging and $this->error(...), return 1; (references: handle(),
reconcileMediaUploadsToPrivateStorage(), $this->info(), $this->error(), the
catch block that currently logs $ex).
In `@app/Console/Commands/TestDropboxTokenCommand.php`:
- Around line 55-63: The code prints a portion of the Dropbox access token
(accessed via $tokenService->getToken() into $accessToken) which can leak
secrets; remove any logging of the token or its prefix and instead log a
non-sensitive success message (e.g., "Access token obtained successfully") in
the TestDropboxTokenCommand handling code, eliminating the substr($accessToken,
0, 12) usage and any direct or partial token output while preserving the
empty-token check and exit behavior.
In `@app/Console/Kernel.php`:
- Around line 58-59: The schedule for ProcessPendingMediaUploadsCommand uses
->withoutOverlapping()->onOneServer() at 5-minute cadence but a stuck chunk
(given RetryAfterDropboxClient::DEFAULT_RETRY_AFTER_SECONDS = 300 and
maxUploadChunkRetries = 5) can hold the lock much longer; update the scheduler
invocation for ProcessPendingMediaUploadsCommand (and the other affected
scheduled entry) to pass an explicit expiry to withoutOverlapping (e.g.,
->withoutOverlapping(30), noting that Laravel's expiry argument is in minutes) so the lock will expire after a safe upper bound (30 minutes) if a run is killed mid-sleep.
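For reference, Laravel's withoutOverlapping() takes the lock expiry in minutes (the framework default is 1440, i.e. 24 hours), so a 30-minute cap would look roughly like this sketch (the command signature is an assumption):

```php
// app/Console/Kernel.php, inside schedule(); command signature assumed
$schedule->command('summit:process-pending-media-uploads')
    ->everyFiveMinutes()
    ->withoutOverlapping(30) // lock expires after 30 minutes if a run dies mid-sleep
    ->onOneServer();
```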
In `@app/Repositories/Summit/DoctrinePendingMediaUploadRepository.php`:
- Around line 68-82: resetStuckProcessingRows currently uses p.last_edited to
detect stale STATUS_PROCESSING rows (in resetStuckProcessingRows) which can
reclaim rows still being processed; add a durable in-flight marker instead of
relying on last_edited: introduce a ProcessingStartedAt (datetime) column on the
PendingMediaUpload model, set ProcessingStartedAt when
processPendingMediaUploads transitions a row to
PendingMediaUpload::STATUS_PROCESSING, and change resetStuckProcessingRows to
compare ProcessingStartedAt against the stale threshold (or alternatively use a
DB-level advisory lock check) so only genuinely abandoned jobs are reverted;
update any place that sets STATUS_PROCESSING (e.g., processPendingMediaUploads)
to populate/clear ProcessingStartedAt and document the new assumption if you
prefer the lighter documentation-only approach.
- Around line 49-52: getOrderMappings() currently returns an indexed array which
breaks lookups in Order::apply2Query() because it expects keys by field name;
update DoctrinePendingMediaUploadRepository::getOrderMappings() to return an
associative array mapping sortable field names (e.g., 'id', 'created') to their
corresponding DQL/column expressions (the same shape as getFilterMappings() in
other Doctrine repositories) so that isset($mappings[$order->getField()])
succeeds and ordering is applied.
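A sketch of the associative shape being asked for (the alias and sortable field names are assumed to mirror other Doctrine repositories in the codebase):

```php
protected function getOrderMappings()
{
    // keys must match the field names Order::apply2Query() looks up
    return [
        'id'      => 'p.id',
        'created' => 'p.created',
    ];
}
```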
- Around line 87-101: The deleteCompletedOlderThan method currently uses
setMaxResults on a DQL DELETE which is ignored; change the implementation in
deleteCompletedOlderThan (involving getEntityManager and PendingMediaUpload) to
first SELECT the IDs of PendingMediaUpload matching status =
PendingMediaUpload::STATUS_COMPLETED and processed_date < $cutoff_date with
setMaxResults($limit), collect those IDs, then run a DELETE WHERE id IN (...)
(or use a parameterized DQL delete by id list) to remove only that batch; ensure
you handle the empty-ID case (return 0) and keep using the same cutoff_date
logic and return the number of deleted rows.
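The select-then-delete pattern could look roughly like the following (query-builder style and field names are assumptions; getSingleColumnResult() requires a reasonably recent Doctrine ORM):

```php
public function deleteCompletedOlderThan(int $days, int $limit): int
{
    $em = $this->getEntityManager();
    $cutoff_date = (new \DateTime('now', new \DateTimeZone('UTC')))
        ->modify(sprintf('-%d days', $days));

    // DQL DELETE ignores setMaxResults, so fetch the batch of ids first
    $ids = $em->createQueryBuilder()
        ->select('p.id')
        ->from(PendingMediaUpload::class, 'p')
        ->where('p.status = :status')
        ->andWhere('p.processed_date < :cutoff_date')
        ->setParameter('status', PendingMediaUpload::STATUS_COMPLETED)
        ->setParameter('cutoff_date', $cutoff_date)
        ->setMaxResults($limit)
        ->getQuery()
        ->getSingleColumnResult();

    if (count($ids) === 0) return 0;

    // delete only that batch by id
    return $em->createQuery(
        sprintf('DELETE FROM %s p WHERE p.id IN (:ids)', PendingMediaUpload::class)
    )->setParameter('ids', $ids)->execute();
}
```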
In `@app/Services/FileSystem/Dropbox/AutoRefreshingDropBoxTokenService.php`:
- Around line 37-44: The constructor currently calls refreshAccessToken() but
does nothing if that initial refresh fails and the underlying HTTP call has no
timeout; change __construct to propagate failure (throw a specific exception)
when refreshAccessToken() cannot obtain a token so the service fails fast rather
than living with an empty token, and update refreshAccessToken() to set a
bounded timeout on the external token request (e.g., via your HTTP client
options) and handle/throw errors on non-200 responses; also ensure getToken()
returns a valid token or throws if none is available.
In `@app/Services/FileSystem/Dropbox/RetryAfterDropboxClient.php`:
- Line 37: The DEFAULT_RETRY_AFTER_SECONDS constant in RetryAfterDropboxClient
is set to 300s which is too high and conflicts with tests expecting a 1s
default; change the value to a lower sensible default (e.g., 30 or 60 seconds)
to match Dropbox guidance and test expectations, update
RetryAfterDropboxClient::DEFAULT_RETRY_AFTER_SECONDS accordingly, and adjust any
related tests in RetryAfterDropboxClientTest.php if they rely on a specific
default so they remain consistent with the new constant.
- Around line 83-84: The code currently casts the Retry-After header string
directly to int which yields 0 for HTTP-date values; change the logic in
RetryAfterDropboxClient (where $retryAfter is computed from
$exception->getResponse()->getHeaderLine('Retry-After')) to first check if the
header is numeric and use (int) when it is, otherwise parse it as an HTTP-date
(e.g. via strtotime) and compute seconds = max(0, parsed_timestamp - time()); if
parsing fails or results in non-positive value fall back to
self::DEFAULT_RETRY_AFTER_SECONDS, and ensure $retryAfter is an int before use.
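The numeric-vs-HTTP-date branching described above could be sketched as a small helper (the helper name is hypothetical):

```php
private function parseRetryAfter(string $header): int
{
    if (is_numeric($header)) {
        $seconds = (int) $header; // delay-seconds form, e.g. "120"
    } else {
        // HTTP-date form, e.g. "Wed, 21 Oct 2026 07:28:00 GMT"
        $ts = strtotime($header);
        $seconds = $ts === false ? 0 : max(0, $ts - time());
    }
    return $seconds > 0 ? $seconds : self::DEFAULT_RETRY_AFTER_SECONDS;
}
```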
- Around line 35-104: The tests expect 429 Retry-After behavior for direct
endpoint calls, but RetryAfterDropboxClient only implements that logic in
uploadChunk; add matching retry-after + jitter handling to the public methods
contentEndpointRequest and rpcEndpointRequest in RetryAfterDropboxClient (and
reuse the same constants and jitter logic used in uploadChunk:
DEFAULT_RETRY_AFTER_SECONDS, MIN_JITTER_MS, MAX_JITTER_MS) so they catch
RequestException/ClientException, read Retry-After, sleep with jitter, rewind
any stream if needed, and retry up to the same $this->maxUploadChunkRetries
semantics; alternatively, if that behavior is not desired, remove/adjust the
failing test cases in tests/Unit/Services/RetryAfterDropboxClientTest.php
instead of changing the class.
In `@app/Services/Model/Imp/SummitService.php`:
- Around line 4332-4334: The catch block in SummitService.php that currently
does "catch (\Exception $ex) { Log::error($ex); }" swallows processor-level
failures; after logging the exception you must re-throw it so the calling
command can fail. Edit the catch in the relevant method inside the SummitService
class to call Log::error(...) and then throw $ex (or rethrow using throw;) so
setup/query/reset failures propagate instead of returning success.
- Around line 4291-4298: The temp file cleanup is currently executed inside the
$this->tx_service->transaction(...) closure (where PendingMediaUpload status is
set), which runs before the DB commit; move the call to
\Libs\Utils\FileUtils::cleanLocalAndRemoteFile($localPath,
$pending_upload->getTempFilePath()) out of the transaction closure so it only
runs after the transaction completes successfully—i.e., keep the status update
and setProcessedDate(...) inside the transaction (using
$this->tx_service->transaction and the PendingMediaUpload methods),
capture/return any needed identifiers or paths from the closure, then call
FileUtils::cleanLocalAndRemoteFile(...) afterward and handle/log errors from
cleanup without rolling back DB changes.
- Around line 4244-4316: The exception handler currently marks every failed
upload as PendingMediaUpload::STATUS_ERROR and cleans both local and remote temp
files, preventing retries; update the catch block in
SummitService::processPendingMediaUploads to check
$pending_upload->getAttempts() (or equivalent) against its max retries (e.g.
$pending_upload->getMaxRetries()) and if attempts < max then set status back to
PendingMediaUpload::STATUS_PENDING (instead of STATUS_ERROR) and only delete the
local temp copy (call FileUtils::cleanLocalFile($localPath) or equivalent)
inside the tx_service->transaction; if attempts >= max, then set STATUS_ERROR
and setErrorMessage($ex->getMessage()) and clean both local and remote temp
files as before.
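A sketch of the attempt-aware catch block (names such as getMaxRetries() and the FileUtils helpers are taken from the prompt and may differ in the actual code):

```php
// inside the catch of processPendingMediaUploads, within tx_service->transaction(...)
if ($pending_upload->getAttempts() < $pending_upload->getMaxRetries()) {
    // transient failure: keep the remote temp copy so the next cron tick can retry
    $pending_upload->setStatus(PendingMediaUpload::STATUS_PENDING);
    FileUtils::cleanLocalFile($localPath);
} else {
    // retries exhausted: record the error and clean up both copies
    $pending_upload->setStatus(PendingMediaUpload::STATUS_ERROR);
    $pending_upload->setErrorMessage($ex->getMessage());
    FileUtils::cleanLocalAndRemoteFile($localPath, $pending_upload->getTempFilePath());
}
```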
In `@tests/Unit/Services/PresentationServiceMediaUploadTest.php`:
- Around line 49-193: The tests currently only set up mocks but never call
PresentationService::addMediaUploadTo or ::updateMediaUploadFrom so the
persist()->once() expectations and the important regression check
(ProcessMediaUpload::dispatch not called) are never exercised; update the tests
to instantiate (or partially mock) PresentationService so you actually invoke
addMediaUploadTo(...) and updateMediaUploadFrom(...) with the prepared
$presentation, $fileInfo, $mediaUpload, and tx_service, ensure
tx_service->transaction() invokes the callback (e.g., via andReturnUsing) while
Registry::getManager('model') returns your $em so the $em->persist(...)
expectation is met, and explicitly mock/spy ProcessMediaUpload::dispatch (or the
dispatching mechanism used) to assert it is never called in
testProcessMediaUploadJobNotDispatched; alternatively, if you cannot construct
the service, remove these placeholder tests and consolidate into a single
integration test that verifies DB row creation and absence of dispatch.
In `@tests/Unit/Services/ProcessPendingMediaUploadsTest.php`:
- Around line 46-287: The tests mock EntityManager and expect raw DQL calls, but
SummitService::processPendingMediaUploads() uses IPendingMediaUploadRepository
methods (resetStuckProcessingRows, getPendingUploads, deleteCompletedOlderThan)
and private properties (pending_media_upload_repository, summit_repository) plus
a protected tx_service; update the tests to mock IPendingMediaUploadRepository
instead of EM, set expectations on resetStuckProcessingRows(10),
getPendingUploads(), and deleteCompletedOlderThan(7,1000) and return the
appropriate PendingMediaUpload mocks, and inject that mock into the
SummitService instance; also set the protected tx_service and private
summit_repository/pending_media_upload_repository on the real SummitService
object using reflection (or the service constructor/setters if available) so the
service reads the injected mocks during processPendingMediaUploads(), and remove
all expectations on $em->createQuery(...) / DQL patterns.
In `@tests/Unit/Services/RetryAfterDropboxClientTest.php`:
- Around line 142-147: The test is asserting sleep counts after a call that is
expected to throw, so the assertion is never reached; wrap the throwing call in
a try/finally so the finally block always runs and performs the Sleep
assertions. Specifically, in
RetryAfterDropboxClientTest::testContentEndpointRequest wrap the
$client->contentEndpointRequest(...) invocation in try { ... } finally {
Sleep::assertSleptTimes(4); } (keep the existing
$this->expectException(ClientException::class) outside/above the try), and do
the same pattern in testContentEndpointRequestNon429Exception replacing the
finally assertion with Sleep::assertNeverSlept(); ensure you do not catch and
swallow the exception (do not add a catch that prevents rethrow).
- Around line 45-227: The tests call
RetryAfterDropboxClient::contentEndpointRequest but that method is not
overridden to implement 429 retry/sleep logic (only uploadChunk is), and the
tests also assume a 1s default whereas DEFAULT_RETRY_AFTER_SECONDS is 300;
update the tests to target the actual retrying code or align constants: either
(A) add an override of contentEndpointRequest in RetryAfterDropboxClient that
implements the same retry/Retry-After + jitter behavior (using a 1-second
default constant if you intend the tests to expect ~1s) and then keep the
current tests, or (B) change the tests to exercise uploadChunk (create the
proper stream/chunk inputs) and assert sleeps using DEFAULT_RETRY_AFTER_SECONDS
(300s) plus jitter ranges; reference contentEndpointRequest, uploadChunk,
RetryAfterDropboxClient, and DEFAULT_RETRY_AFTER_SECONDS to locate where to
implement or adjust the tests and constants.
---
Nitpick comments:
In `@app/Jobs/ProcessMediaUpload.php`:
- Around line 26-30: Update the deprecation block on the ProcessMediaUpload
class to include a concrete removal target (e.g., a specific release version or
commit hash) or a tracking ticket ID so future cleanup is deterministic; locate
the docblock above the ProcessMediaUpload class definition and replace the vague
“Can be removed in a future cleanup” line with a precise marker such as
“REMOVE_BY: vX.Y.Z / commit <hash> / ClickUp `#12345`” and, if relevant, add a
short note that it remains only for draining in-flight queue jobs.
In `@app/Services/Model/Imp/PresentationService.php`:
- Around line 1153-1162: The PendingMediaUpload population is duplicated;
extract a private helper (e.g., buildPendingMediaUpload or
createPendingMediaUpload) that accepts parameters like $summit,
$mediaUploadType, $mediaUpload, $fileInfo and returns a fully populated
PendingMediaUpload (calls setSummitId, setMediaUploadTypeId, setPublicPath,
setPrivatePath, setFileName, setTempFilePath, setStatus). Replace the inline
construction in the current add path and the other occurrence (the add/update
block that also calls pending_media_upload_repository->add) with calls to this
helper and then call pending_media_upload_repository->add($pendingUpload) as
before so all field mappings remain consistent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: fdba4aff-1958-43af-9cb2-0eff9771604d
📒 Files selected for processing (20)
- app/Console/Commands/ProcessPendingMediaUploadsCommand.php
- app/Console/Commands/ReconcileMediaUploadsCommand.php
- app/Console/Commands/TestDropboxTokenCommand.php
- app/Console/Kernel.php
- app/Jobs/ProcessMediaUpload.php
- app/Models/Foundation/Summit/PendingMediaUpload.php
- app/Models/Foundation/Summit/Repositories/IPendingMediaUploadRepository.php
- app/Repositories/RepositoriesProvider.php
- app/Repositories/Summit/DoctrinePendingMediaUploadRepository.php
- app/Services/FileSystem/Dropbox/AutoRefreshingDropBoxTokenService.php
- app/Services/FileSystem/Dropbox/DropboxServiceProvider.php
- app/Services/FileSystem/Dropbox/RetryAfterDropboxClient.php
- app/Services/Model/ISummitService.php
- app/Services/Model/Imp/PresentationService.php
- app/Services/Model/Imp/SummitService.php
- config/filesystems.php
- database/migrations/model/Version20260421150000.php
- tests/Unit/Services/PresentationServiceMediaUploadTest.php
- tests/Unit/Services/ProcessPendingMediaUploadsTest.php
- tests/Unit/Services/RetryAfterDropboxClientTest.php
```php
$accessToken = $tokenService->getToken();

if (empty($accessToken)) {
    $this->error(' Token service returned an empty access token.');
    return 1;
}

$this->info(' Access token obtained: ' . substr($accessToken, 0, 12) . '...');
```
Do not print access-token material.
Even a prefix can leak sensitive token metadata into shell history or CI logs.
🛡️ Proposed fix

```diff
-        $this->info(' Access token obtained: ' . substr($accessToken, 0, 12) . '...');
+        $this->info(' Access token obtained.');
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```php
$accessToken = $tokenService->getToken();

if (empty($accessToken)) {
    $this->error(' Token service returned an empty access token.');
    return 1;
}

$this->info(' Access token obtained.');
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/Console/Commands/TestDropboxTokenCommand.php` around lines 55 - 63, The
code prints a portion of the Dropbox access token (accessed via
$tokenService->getToken() into $accessToken) which can leak secrets; remove any
logging of the token or its prefix and instead log a non-sensitive success
message (e.g., "Access token obtained successfully") in the
TestDropboxTokenCommand handling code, eliminating the substr($accessToken, 0,
12) usage and any direct or partial token output while preserving the
empty-token check and exit behavior.
```php
\App\Console\Commands\ProcessPendingMediaUploadsCommand::class,
\App\Console\Commands\TestDropboxTokenCommand::class,
```
Scheduling looks correct; one consideration on overlap timing.
withoutOverlapping()->onOneServer() at a 5-minute cadence is appropriate. Note that with the current RetryAfterDropboxClient::DEFAULT_RETRY_AFTER_SECONDS = 300 combined with maxUploadChunkRetries = 5, a single stuck chunk could hold the cron lock for ~25 minutes per invocation. Consider passing ->withoutOverlapping($expiresAt) with an explicit expiry (e.g., 30 minutes) to avoid a stale lock if a run is killed mid-sleep. Related: see the comment on DEFAULT_RETRY_AFTER_SECONDS in RetryAfterDropboxClient.php.
Also applies to: 110-110
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/Console/Kernel.php` around lines 58 - 59, The schedule for
ProcessPendingMediaUploadsCommand uses ->withoutOverlapping()->onOneServer() at
5-minute cadence but a stuck chunk (given
RetryAfterDropboxClient::DEFAULT_RETRY_AFTER_SECONDS = 300 and
maxUploadChunkRetries = 5) can hold the lock much longer; update the scheduler
invocation for ProcessPendingMediaUploadsCommand (and the other affected
scheduled entry) to pass an explicit expiry to withoutOverlapping (e.g.,
->withoutOverlapping(30), noting that Laravel's expiry argument is in minutes) so the lock will expire after a safe upper bound (30 minutes) if a run is killed mid-sleep.
this runs on a single k8s pod
@smarcet, understood — with a single pod onOneServer() is effectively a no-op for lock contention between instances. The only residual edge case would be if the pod is evicted or OOM-killed mid-run: Laravel's default withoutOverlapping() lock TTL is 24 hours, so subsequent cron ticks would be skipped until it naturally expires. If that scenario is acceptable for this workload, the current setup is fine. Happy to close this comment if you're satisfied with the behaviour.
ʕ •ᴥ•ʔ
```php
    $this->expectException(ClientException::class);
    $client->contentEndpointRequest('/test/endpoint', ['arg' => 'value'], 'body');

    // Verify sleep was called 4 times (not on the 5th/final attempt before throwing)
    Sleep::assertSleptTimes(4);
}
```
Sleep::assertSleptTimes(4) after a throwing call is unreachable.
$client->contentEndpointRequest(...) on line 143 is expected to throw (expectException(ClientException::class) on line 142). Once it throws, no statement on lines 145–146 in the same test method executes, so Sleep::assertSleptTimes(4) never runs — the test only verifies that the exception propagates, not that 4 sleeps occurred.
🛠️ Move the assertion into a try/finally so it actually runs

```diff
-        $this->expectException(ClientException::class);
-        $client->contentEndpointRequest('/test/endpoint', ['arg' => 'value'], 'body');
-
-        // Verify sleep was called 4 times (not on the 5th/final attempt before throwing)
-        Sleep::assertSleptTimes(4);
+        try {
+            $client->contentEndpointRequest('/test/endpoint', ['arg' => 'value'], 'body');
+            $this->fail('Expected ClientException was not thrown');
+        } catch (ClientException $e) {
+            // Verify sleep was called 4 times (not on the 5th/final attempt before throwing)
+            Sleep::assertSleptTimes(4);
+        }
```

The same pattern applies to testContentEndpointRequestNon429Exception (lines 177-181), though assertNeverSlept() happens to be vacuously true there.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/Unit/Services/RetryAfterDropboxClientTest.php` around lines 142 - 147,
The test is asserting sleep counts after a call that is expected to throw, so
the assertion is never reached; wrap the throwing call in a try/finally so the
finally block always runs and performs the Sleep assertions. Specifically, in
RetryAfterDropboxClientTest::testContentEndpointRequest wrap the
$client->contentEndpointRequest(...) invocation in try { ... } finally {
Sleep::assertSleptTimes(4); } (keep the existing
$this->expectException(ClientException::class) outside/above the try), and do
the same pattern in testContentEndpointRequestNon429Exception replacing the
finally assertion with Sleep::assertNeverSlept(); ensure you do not catch and
swallow the exception (do not add a catch that prevents rethrow).
Force-pushed from 646134d to f8174ae
Force-pushed from f8174ae to 62d4c72
Actionable comments posted: 4
🧹 Nitpick comments (3)
app/Models/Foundation/Summit/PendingMediaUpload.php (2)
75-79: Consider preventing duplicate in-flight rows at the DB level. Nothing in the entity or table prevents two STATUS_PENDING/STATUS_PROCESSING rows from existing for the same PresentationMediaUploadID (see the updateMediaUploadFrom flow). A partial/functional unique constraint is not supported in MySQL, but you can approximate it by deleting prior pending rows on every create (as suggested in PresentationService), or by adding a unique index on PresentationMediaUploadID and reusing one row via upsert. Purely a defense-in-depth suggestion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/Models/Foundation/Summit/PendingMediaUpload.php` around lines 75 - 79, The entity allows multiple in-flight rows for the same PresentationMediaUploadID, which can cause duplicate pending/processing entries; update the PendingMediaUpload creation flow to enforce uniqueness by either 1) deleting any existing rows for the same PresentationMediaUploadID with status in {STATUS_PENDING, STATUS_PROCESSING} before inserting a new PendingMediaUpload (adjust the create method used by PresentationService/updateMediaUploadFrom to perform the delete-and-insert), or 2) add a unique index on PresentationMediaUploadID at the DB level and change the create path to perform an upsert/reuse of the existing PendingMediaUpload row (modify the repository/save logic to handle update-on-conflict); reference the PendingMediaUpload entity, its $status field and constants STATUS_PENDING/STATUS_PROCESSING, and the PresentationService updateMediaUploadFrom flow when making the change.
22-30: Consider a composite (Status, Created) index for the cron query pattern. DoctrinePendingMediaUploadRepository::getPendingUploads() does WHERE p.status = :pending ORDER BY p.created ASC. The current IDX_Status single-column index filters on status but leaves the ORDER BY to a filesort once the STATUS_PENDING set grows. A composite index on (Status, Created) would allow index-ordered scans and is a near-free win for this hot path.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@app/Models/Foundation/Summit/PendingMediaUpload.php` around lines 22 - 30, The query in DoctrinePendingMediaUploadRepository::getPendingUploads() filters by status and orders by created, so replace the current single-column ORM\Index (name: IDX_Status) on Status in the PendingMediaUpload entity with a composite index on (Status, Created) to enable index-ordered scans; update the ORM\Index annotation on class PendingMediaUpload to include both columns (Status and Created) and adjust the index name (e.g., IDX_Status_Created) so the DB uses the composite index for WHERE p.status = :pending ORDER BY p.created ASC.

database/migrations/model/Version20260421200000.php (1)
40-45: down() is not idempotent and will error if re-run or if the column was never added. If up() partially applied (e.g., column added but FK failed), re-running down() may fail on the first DROP FOREIGN KEY. Consider guarding with existence checks via $schema (which is already passed in and currently unused; the PHPMD warning is correct here even if we typically ignore it). Low priority; Doctrine migrations usually run atomically, so this is optional.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@database/migrations/model/Version20260421200000.php` around lines 40 - 45, The down() migration unconditionally drops a foreign key, index and column and will error if re-run or if those objects don't exist; update down(Schema $schema) to first check $schema->hasTable('PendingMediaUpload') then $table = $schema->getTable('PendingMediaUpload') and guard each operation with existence checks (e.g. $table->hasForeignKey('FK_PendingMediaUpload_PresentationMediaUpload'), $table->hasIndex('IDX_PresentationMediaUploadID'), $table->hasColumn('PresentationMediaUploadID')) before calling addSql to DROP FOREIGN KEY FK_PendingMediaUpload_PresentationMediaUpload, DROP INDEX IDX_PresentationMediaUploadID, and DROP COLUMN PresentationMediaUploadID so the method becomes idempotent and safe if partially applied or re-run.
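A guarded `down()` along the lines the prompt describes could look like the sketch below. The table, FK, index, and column names come from the review text; the class skeleton and base-class usage are assumptions, not the migration's actual code.

```php
<?php
// Sketch only: idempotent down() using existence checks on the passed Schema,
// so a partial up() or a repeated down() does not error on missing objects.
use Doctrine\DBAL\Schema\Schema;
use Doctrine\Migrations\AbstractMigration;

final class Version20260421200000 extends AbstractMigration
{
    public function down(Schema $schema): void
    {
        if (!$schema->hasTable('PendingMediaUpload')) return;
        $table = $schema->getTable('PendingMediaUpload');

        if ($table->hasForeignKey('FK_PendingMediaUpload_PresentationMediaUpload'))
            $this->addSql('ALTER TABLE PendingMediaUpload DROP FOREIGN KEY FK_PendingMediaUpload_PresentationMediaUpload');

        if ($table->hasIndex('IDX_PresentationMediaUploadID'))
            $this->addSql('DROP INDEX IDX_PresentationMediaUploadID ON PendingMediaUpload');

        if ($table->hasColumn('PresentationMediaUploadID'))
            $this->addSql('ALTER TABLE PendingMediaUpload DROP COLUMN PresentationMediaUploadID');
    }
}
```

Note that `$schema` reflects the schema state at the time the migration runs, which is exactly what makes the guards useful after a partial apply.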
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/Repositories/Summit/DoctrinePendingMediaUploadRepository.php`:
- Around line 107-117: The bulk DQL DELETE in deleteByMediaUpload currently
binds a PresentationMediaUpload entity to p.media_upload which fails because
DELETE only accepts scalar identifiers; change the DQL to use a scalar parameter
(e.g., :mediaUploadId) and set that parameter to $mediaUpload->getId(), and keep
the statuses parameter as-is (refer to deleteByMediaUpload,
PresentationMediaUpload, and PendingMediaUpload::STATUS_* for locating the code
and constants).
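The scalar-parameter fix the prompt asks for can be sketched as follows; the method and constant names are taken from the review, but the entity namespace in the DQL string and the exact method body are illustrative assumptions.

```php
// Sketch only: bulk DQL DELETE bound to the scalar id, not the
// PresentationMediaUpload entity (bulk DELETE only accepts scalar identifiers).
public function deleteByMediaUpload(PresentationMediaUpload $media_upload): void
{
    $this->getEntityManager()
        ->createQuery(
            'DELETE FROM models\summit\PendingMediaUpload p ' .
            'WHERE p.media_upload = :mediaUploadId AND p.status IN (:statuses)'
        )
        ->setParameter('mediaUploadId', $media_upload->getId()) // scalar id, not the entity
        ->setParameter('statuses', [
            PendingMediaUpload::STATUS_PENDING,
            PendingMediaUpload::STATUS_PROCESSING,
        ])
        ->execute();
}
```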
In `@app/Services/FileSystem/Dropbox/DropboxAdapter.php`:
- Around line 38-39: Remove the raw exception log inside DropboxAdapter::getUrl
by deleting the Log::warning($ex) call and keep only the structured
Log::warning(sprintf("DropboxAdapter::getUrl %s code %s", $ex->getMessage(),
$ex->dropboxCode)); ensure the single remaining warning provides the descriptive
message and Dropbox error code; if you want richer context later, add the
exception as a context array (e.g., Log::warning($message, ['exception' =>
$ex])) in a separate refactor.
In `@app/Services/Model/Imp/PresentationService.php`:
- Around line 1343-1347: There is a race between deleteMediaUpload
(markAsDeleted + pending_media_upload_repository->deleteByMediaUpload +
$presentation->removeMediaUpload) and an in-flight cron upload; to fix, in
deleteMediaUpload first query pending_media_upload_repository for rows for this
mediaUpload and: if any STATUS_PENDING, remove their temp_file_path(s)
explicitly and then delete those rows before marking storage deleted; if any
STATUS_PROCESSING, do not immediately markAsDeleted but instead set a
cooperative cancel flag on those rows (e.g.,
pending_media_upload_repository->setCancelled or add a cancelled boolean) so the
cron can check between its transition to STATUS_PROCESSING and the actual copy
and abort/clean up, and ensure the cron worker checks this cancel flag and
removes temp files when it aborts; alternatively, take a DB row lock around the
pending row check+delete to prevent the cron from progressing while you perform
the delete; update resetStuckProcessingRows/cron logic to respect the new cancel
flag and to remove temp_file_path for pending rows.
- Around line 1278-1289: Before inserting the new PendingMediaUpload row, remove
any existing pending/processing rows for the same PresentationMediaUpload to
prevent racing/supersede issues: call the repository method that deletes by
media upload (e.g.
$this->pending_media_upload_repository->deleteByMediaUpload($mediaUpload) or a
dedicated supersede method) just prior to creating/adding the new
PendingMediaUpload in the block that builds the $pendingUpload (this addresses
existing PendingMediaUpload rows with STATUS_PENDING or STATUS_PROCESSING and
ensures only the fresh tempFilePath/fileName will be processed).
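The cooperative-cancel idea in the two comments above can be sketched like this. Every name not quoted in the review (`isCancelled`, `getTempFilePath`, the repository `delete`) is hypothetical; this is a shape for the cron worker's check, not the service's actual code.

```php
// Hypothetical cron-worker fragment honoring a cancel flag set by
// deleteMediaUpload / a superseding re-upload.
$pending->setStatus(PendingMediaUpload::STATUS_PROCESSING);
$em->flush();

// ... prepare the transfer (download, validation, etc.) ...

$em->refresh($pending);          // re-read row state just before the expensive copy
if ($pending->isCancelled()) {   // flag set by a concurrent delete or re-upload
    @unlink($pending->getTempFilePath()); // clean up the orphaned temp file
    $pendingRepository->delete($pending); // drop the row instead of uploading
    return;
}
// only now copy the temp file to public/private storage
```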
---
Nitpick comments:
In `@app/Models/Foundation/Summit/PendingMediaUpload.php`:
- Around line 75-79: The entity allows multiple in-flight rows for the same
PresentationMediaUploadID, which can cause duplicate pending/processing entries;
update the PendingMediaUpload creation flow to enforce uniqueness by either 1)
deleting any existing rows for the same PresentationMediaUploadID with status in
{STATUS_PENDING, STATUS_PROCESSING} before inserting a new PendingMediaUpload
(adjust the create method used by PresentationService/updateMediaUploadFrom to
perform the delete-and-insert), or 2) add a unique index on
PresentationMediaUploadID at the DB level and change the create path to perform
an upsert/reuse of the existing PendingMediaUpload row (modify the
repository/save logic to handle update-on-conflict); reference the
PendingMediaUpload entity, its $status field and constants
STATUS_PENDING/STATUS_PROCESSING, and the PresentationService
updateMediaUploadFrom flow when making the change.
- Around line 22-30: The query in
DoctrinePendingMediaUploadRepository::getPendingUploads() filters by status and
orders by created, so replace the current single-column ORM\Index (name:
IDX_Status) on Status in the PendingMediaUpload entity with a composite index on
(Status, Created) to enable index-ordered scans; update the ORM\Index annotation
on class PendingMediaUpload to include both columns (Status and Created) and
adjust the index name (e.g., IDX_Status_Created) so the DB uses the composite
index for WHERE p.status = :pending ORDER BY p.created ASC.
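In Doctrine annotation style, the index swap described above might look like the sketch below. The table, column, and index names come from the review; the rest of the class is elided, and the real entity's mapping style (annotations vs. attributes) is an assumption.

```php
use Doctrine\ORM\Mapping as ORM;

/**
 * Sketch: composite (Status, Created) index replacing the single-column
 * IDX_Status, so WHERE status = :pending ORDER BY created ASC can be
 * served by an index-ordered scan.
 *
 * @ORM\Table(name="PendingMediaUpload", indexes={
 *     @ORM\Index(name="IDX_Status_Created", columns={"Status", "Created"})
 * })
 * @ORM\Entity
 */
class PendingMediaUpload
{
    // ...existing fields ($status, $created, media_upload association)...
}
```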
In `@database/migrations/model/Version20260421200000.php`:
- Around line 40-45: The down() migration unconditionally drops a foreign key,
index and column and will error if re-run or if those objects don't exist;
update down(Schema $schema) to first check
$schema->hasTable('PendingMediaUpload') then $table =
$schema->getTable('PendingMediaUpload') and guard each operation with existence
checks (e.g.
$table->hasForeignKey('FK_PendingMediaUpload_PresentationMediaUpload'),
$table->hasIndex('IDX_PresentationMediaUploadID'),
$table->hasColumn('PresentationMediaUploadID')) before calling addSql to DROP
FOREIGN KEY FK_PendingMediaUpload_PresentationMediaUpload, DROP INDEX
IDX_PresentationMediaUploadID, and DROP COLUMN PresentationMediaUploadID so the
method becomes idempotent and safe if partially applied or re-run.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ecb04d13-4052-4cb4-b60c-eaa3c4a6b572
📒 Files selected for processing (8)
- app/Console/Kernel.php
- app/Models/Foundation/Summit/PendingMediaUpload.php
- app/Models/Foundation/Summit/Repositories/IPendingMediaUploadRepository.php
- app/Repositories/Summit/DoctrinePendingMediaUploadRepository.php
- app/Services/FileSystem/Dropbox/DropboxAdapter.php
- app/Services/FileSystem/Dropbox/RetryAfterDropboxClient.php
- app/Services/Model/Imp/PresentationService.php
- database/migrations/model/Version20260421200000.php
✅ Files skipped from review due to trivial changes (1)
- app/Services/FileSystem/Dropbox/RetryAfterDropboxClient.php
🚧 Files skipped from review as they are similar to previous changes (2)
- app/Console/Kernel.php
- app/Models/Foundation/Summit/Repositories/IPendingMediaUploadRepository.php
```php
Log::warning($ex);
Log::warning(sprintf("DropboxAdapter::getUrl %s code %s", $ex->getMessage(), $ex->dropboxCode));
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Description: Inspect Laravel/Monolog versions and existing exception logging style.
printf '\nComposer package versions:\n'
if [ -f composer.lock ]; then
  jq -r '.packages[] | select(.name=="laravel/framework" or .name=="monolog/monolog" or .name=="psr/log") | "\(.name) \(.version)"' composer.lock
fi
printf '\nNearby raw exception logging patterns:\n'
rg -n -C2 'Log::(debug|info|notice|warning|error|critical|alert|emergency)\(\$[a-zA-Z_][a-zA-Z0-9_]*\)' --type=php
```

Repository: OpenStackweb/summit-api
Length of output: 50380
Remove the redundant raw exception log on line 38.
Line 38 logs the exception object directly, which duplicates the more detailed warning on line 39. Remove the raw log and keep only the structured message with the Dropbox error code.
Proposed fix

```diff
- Log::warning($ex);
  Log::warning(sprintf("DropboxAdapter::getUrl %s code %s", $ex->getMessage(), $ex->dropboxCode));
```

Note: Raw exception logging is common throughout the codebase. If adopting structured exception context across the project is desired (e.g., `Log::warning($message, ['exception' => $ex])`), consider a broader refactoring initiative to ensure consistent patterns.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
- Log::warning($ex);
  Log::warning(sprintf("DropboxAdapter::getUrl %s code %s", $ex->getMessage(), $ex->dropboxCode));
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/Services/FileSystem/Dropbox/DropboxAdapter.php` around lines 38 - 39,
Remove the raw exception log inside DropboxAdapter::getUrl by deleting the
Log::warning($ex) call and keep only the structured
Log::warning(sprintf("DropboxAdapter::getUrl %s code %s", $ex->getMessage(),
$ex->dropboxCode)); ensure the single remaining warning provides the descriptive
message and Dropbox error code; if you want richer context later, add the
exception as a context array (e.g., Log::warning($message, ['exception' =>
$ex])) in a separate refactor.
📘 OpenAPI / Swagger preview ➡️ https://OpenStackweb.github.io/summit-api/openapi/pr-534/ This page is automatically updated on each push to this PR.

@romanetar please re review
Actionable comments posted: 2
♻️ Duplicate comments (2)
app/Services/Model/Imp/PresentationService.php (2)
1281-1294: ⚠️ Potential issue | 🟠 Major
Don’t use row deletion as cancellation for in-flight uploads.
If an existing row is already `STATUS_PROCESSING`, the cron worker may still finish uploading the old temp file after this re-upload creates a new pending row. Use a cooperative cancel flag/status check or a DB lock around the processing transition and transfer.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/Services/Model/Imp/PresentationService.php` around lines 1281 - 1294, The current code unconditionally calls pending_media_upload_repository->deleteByMediaUpload(...) which can remove a row that is STATUS_PROCESSING and let the cron still finish the old upload; instead, stop deleting rows and implement a cooperative cancel or safe transition: modify the pending-media logic around PendingMediaUpload to set a cancel flag or status (e.g., setStatus(PendingMediaUpload::STATUS_CANCELLED) or setCancelled(true)) for existing pending rows unless their status is STATUS_PROCESSING, and update the cron worker's processing transition to acquire a DB lock or check-for-cancel before and during transfer (use row versioning/for-update or a status check against STATUS_PROCESSING/STATUS_CANCELLED) so in-flight jobs are not raced by new uploads; replace deleteByMediaUpload(...) invocation with an update that only cancels non-processing rows and ensure process code honors the cancel flag/status.
1349-1350: ⚠️ Potential issue | 🟠 Major
Deleting the pending row still doesn’t stop an active cron upload.
A worker that already moved the row to processing can still write to the private/public paths after `markAsDeleted()` runs. Also ensure pending temp files are cleaned before removing rows.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/Services/Model/Imp/PresentationService.php` around lines 1349 - 1350, Before removing the pending DB row via pending_media_upload_repository->deleteByMediaUpload($mediaUpload), first mark the media upload as deleted/cancelled (call markAsDeleted($mediaUpload) or set its status to a terminal state) so background cron workers will skip writing, then remove any pending temp files/paths associated with the upload (delete files from the private/public temp paths stored on $mediaUpload) and only after both status is set and temp files cleaned call pending_media_upload_repository->deleteByMediaUpload($mediaUpload).
🧹 Nitpick comments (1)
app/Services/FileSystem/Dropbox/AutoRefreshingDropBoxTokenService.php (1)
89-94: Minor: guard against malformed JSON body.
`json_decode(..., true)` returns `null` on invalid JSON; the `isset($data['access_token'])` check handles this safely, but the error log ("response missing access_token") will be misleading for a parse failure. Consider distinguishing the two cases or using `JSON_THROW_ON_ERROR` to let the `catch` block log the actual decode error.
♻️ Proposed tweak

```diff
- $data = json_decode($response->getBody()->getContents(), true);
-
- if (!isset($data['access_token'])) {
-     Log::error('AutoRefreshingDropBoxTokenService: response missing access_token.');
+ $data = json_decode($response->getBody()->getContents(), true, 512, JSON_THROW_ON_ERROR);
+
+ if (!is_array($data) || !isset($data['access_token'])) {
+     Log::error('AutoRefreshingDropBoxTokenService: response missing access_token.');
      return false;
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/Services/FileSystem/Dropbox/AutoRefreshingDropBoxTokenService.php` around lines 89 - 94, The log message in AutoRefreshingDropBoxTokenService currently treats invalid JSON and a missing access_token the same; update the token-refreshing method to detect JSON parse errors separately (e.g., use json_decode with JSON_THROW_ON_ERROR inside a try/catch or check json_last_error()) and log the decode exception or error message when parsing fails, otherwise keep the existing "response missing access_token" log when decoding succeeds but access_token is absent; ensure you reference the same method where $data = json_decode($response->getBody()->getContents(), true) is called so the catch/logging behavior is adjusted accordingly.
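The distinction the nitpick draws can be shown in a standalone fragment: `JSON_THROW_ON_ERROR` separates "body is not JSON" from "JSON is valid but has no access_token". This is an illustration, not the service's actual code; the function name is made up.

```php
<?php
// Distinguish a parse failure from a well-formed response missing the token.
function extractAccessToken(string $body): ?string
{
    try {
        $data = json_decode($body, true, 512, JSON_THROW_ON_ERROR);
    } catch (JsonException $e) {
        // parse failure: log the real decode error, not a misleading message
        error_log('token refresh: invalid JSON body: ' . $e->getMessage());
        return null;
    }
    if (!is_array($data) || !isset($data['access_token'])) {
        error_log('token refresh: response missing access_token');
        return null;
    }
    return $data['access_token'];
}
```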
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/Console/Commands/ReconcileMediaUploadsCommand.php`:
- Around line 69-86: The command currently uses intval() and empty() which can
coerce invalid CLI args (e.g., "abc" -> 0) or treat "0" as absent; update
ReconcileMediaUploadsCommand::handle to explicitly validate that $summit_id and
(if provided) $media_upload_type_id are positive integers before calling
$this->summit_service->reconcileMediaUploadsToPrivateStorage: parse the raw
arguments, ensure they match a positive-integer pattern (reject "0", non-digits,
negatives), and throw an \InvalidArgumentException with a clear message if
validation fails; only cast to int and pass values (or null for omitted optional
type) after validation so the service receives correct scopes.
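The strict-validation rule above (reject "0", negatives, and non-digit strings before casting) can be sketched as a small helper; the function name and signature are hypothetical, not part of the command.

```php
<?php
// Hypothetical helper: validate a CLI argument as a positive integer,
// allowing null only for optional arguments, before any int cast.
function requirePositiveInt(string $name, ?string $raw, bool $optional = false): ?int
{
    if ($raw === null || $raw === '') {
        if ($optional) return null;
        throw new InvalidArgumentException("$name is required");
    }
    // ctype_digit rejects "-5", "abc", "1.5"; the <= 0 check rejects "0"
    if (!ctype_digit($raw) || (int)$raw <= 0) {
        throw new InvalidArgumentException("$name must be a positive integer, got '$raw'");
    }
    return (int)$raw;
}
```

Inside `handle()`, something like this would replace the `intval()`/`empty()` coercion before calling `reconcileMediaUploadsToPrivateStorage`.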
In `@app/Services/Model/Imp/PresentationService.php`:
- Around line 1153-1154: The call and comment that delete pending rows for a
newly-created transient media upload should be removed: in addMediaUploadTo,
delete the line invoking
$this->pending_media_upload_repository->deleteByMediaUpload($mediaUpload) and
remove its preceding comment, since $mediaUpload is created by factory and has
no ID (transient) so the delete is a no-op; leave the rest of the method and any
subsequent flush/persist logic unchanged.
---
Duplicate comments:
In `@app/Services/Model/Imp/PresentationService.php`:
- Around line 1281-1294: The current code unconditionally calls
pending_media_upload_repository->deleteByMediaUpload(...) which can remove a row
that is STATUS_PROCESSING and let the cron still finish the old upload; instead,
stop deleting rows and implement a cooperative cancel or safe transition: modify
the pending-media logic around PendingMediaUpload to set a cancel flag or status
(e.g., setStatus(PendingMediaUpload::STATUS_CANCELLED) or setCancelled(true))
for existing pending rows unless their status is STATUS_PROCESSING, and update
the cron worker's processing transition to acquire a DB lock or check-for-cancel
before and during transfer (use row versioning/for-update or a status check
against STATUS_PROCESSING/STATUS_CANCELLED) so in-flight jobs are not raced by
new uploads; replace deleteByMediaUpload(...) invocation with an update that
only cancels non-processing rows and ensure process code honors the cancel
flag/status.
- Around line 1349-1350: Before removing the pending DB row via
pending_media_upload_repository->deleteByMediaUpload($mediaUpload), first mark
the media upload as deleted/cancelled (call markAsDeleted($mediaUpload) or set
its status to a terminal state) so background cron workers will skip writing,
then remove any pending temp files/paths associated with the upload (delete
files from the private/public temp paths stored on $mediaUpload) and only after
both status is set and temp files cleaned call
pending_media_upload_repository->deleteByMediaUpload($mediaUpload).
---
Nitpick comments:
In `@app/Services/FileSystem/Dropbox/AutoRefreshingDropBoxTokenService.php`:
- Around line 89-94: The log message in AutoRefreshingDropBoxTokenService
currently treats invalid JSON and a missing access_token the same; update the
token-refreshing method to detect JSON parse errors separately (e.g., use
json_decode with JSON_THROW_ON_ERROR inside a try/catch or check
json_last_error()) and log the decode exception or error message when parsing
fails, otherwise keep the existing "response missing access_token" log when
decoding succeeds but access_token is absent; ensure you reference the same
method where $data = json_decode($response->getBody()->getContents(), true) is
called so the catch/logging behavior is adjusted accordingly.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a4c5d374-5bbb-4c7d-943a-689d228f0090
📒 Files selected for processing (11)
- app/Console/Commands/ProcessPendingMediaUploadsCommand.php
- app/Console/Commands/ReconcileMediaUploadsCommand.php
- app/Console/Kernel.php
- app/Repositories/Summit/DoctrinePendingMediaUploadRepository.php
- app/Services/FileSystem/Dropbox/AutoRefreshingDropBoxTokenService.php
- app/Services/FileSystem/Dropbox/DropboxAdapter.php
- app/Services/FileSystem/Dropbox/RetryAfterDropboxClient.php
- app/Services/Model/Imp/PresentationService.php
- app/Services/Model/Imp/SummitService.php
- tests/Unit/Services/ProcessPendingMediaUploadsTest.php
- tests/Unit/Services/RetryAfterDropboxClientTest.php
✅ Files skipped from review due to trivial changes (1)
- app/Services/Model/Imp/SummitService.php
🚧 Files skipped from review as they are similar to previous changes (6)
- app/Services/FileSystem/Dropbox/DropboxAdapter.php
- app/Console/Kernel.php
- app/Console/Commands/ProcessPendingMediaUploadsCommand.php
- app/Services/FileSystem/Dropbox/RetryAfterDropboxClient.php
- tests/Unit/Services/ProcessPendingMediaUploadsTest.php
- tests/Unit/Services/RetryAfterDropboxClientTest.php
Force-pushed from 6f0c60f to 4f07f71 (Compare)
ref: https://app.clickup.com/t/86b9hh9y6
Summary by CodeRabbit
- New Features
- Bug Fixes / Maintenance
- Database
- Tests
Tests