Conversation
…o OT-240-feature/transcode-design-implement
Walkthrough

JobContext propagation restructures the pipeline execution path (work directory creation → disk check → command execution → playlist generation → upload), S3 storage and client (S3Client) support is added, and the ffmpeg runtime is installed in the final container image.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as "Client"
    participant Orchestrator as "JobOrchestrator"
    participant DiskGuard as "DiskSpaceGuard"
    participant Executor as "CommandPipelineExecutor"
    participant S3 as "S3VideoStorage"
    participant Probe as "Probe/Planner"
    participant FFmpeg as "FfmpegExecutor"
    participant Playlist as "MasterPlaylistGenerator"
    Client->>Orchestrator: TranscodeMessage
    Orchestrator->>Orchestrator: create workDir/outputDir
    Orchestrator->>DiskGuard: check(workDir, fileSize)
    DiskGuard-->>Orchestrator: ok / StorageException
    Orchestrator->>Executor: execute(DownloadCommand, JobContext)
    Executor->>S3: download(sourceKey, workDir)
    S3-->>Executor: file saved (Path)
    Executor-->>Orchestrator: done
    Orchestrator->>Executor: execute(ProbeCommand, JobContext)
    Executor->>Probe: probe(inputFile)
    Probe-->>Executor: ProbeResult
    loop each TranscodeCommand
        Orchestrator->>Executor: execute(TranscodeCommand, JobContext)
        Executor->>Probe: plan(command, ProbeResult)
        Probe-->>Executor: TranscodeProfile
        Executor->>FFmpeg: execute(inputFile, outputDir, profile)
        FFmpeg-->>Executor: transcode complete
    end
    Orchestrator->>Playlist: generate(outputDir, resolutionList)
    Playlist-->>Orchestrator: playlist generated
    Orchestrator->>S3: upload(outputDir, destinationPrefix)
    S3-->>Orchestrator: upload complete with path
    Orchestrator-->>Client: completion notification
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

Suggested reviewers
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/transcoder/src/main/java/com/ott/transcoder/job/JobOrchestrator.java (1)
116-133: ⚠️ Potential issue | 🟠 Major

The Files.walk() stream is never closed, which can leak resources.

The Stream<Path> returned by Files.walk() wraps directory-traversal resources and, per the Java documentation, must be closed. In the current code, if an exception is thrown during sorted() or forEach(), the stream is never closed and file handles can leak.

🐛 Suggested fix using try-with-resources

```diff
 private void cleanUp(Path workDir) {
     try {
         if (Files.exists(workDir)) {
-            Files.walk(workDir)
-                .sorted(Comparator.reverseOrder())
-                .forEach(path -> {
-                    try {
-                        Files.deleteIfExists(path);
-                    } catch (IOException ignored) {
-                        // deletion failures are logged and ignored (they do not affect job success)
-                    }
-                });
+            try (var pathStream = Files.walk(workDir)) {
+                pathStream
+                    .sorted(Comparator.reverseOrder())
+                    .forEach(path -> {
+                        try {
+                            Files.deleteIfExists(path);
+                        } catch (IOException ignored) {
+                            // deletion failures are logged and ignored (they do not affect job success)
+                        }
+                    });
+            }
             log.info("작업 디렉토리 정리 완료 - {}", workDir);
         }
     } catch (IOException e) {
         log.warn("작업 디렉토리 정리 중 오류 발생 (무시함) - {}", workDir, e);
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/transcoder/src/main/java/com/ott/transcoder/job/JobOrchestrator.java` around lines 116 - 133, In JobOrchestrator.cleanUp, Files.walk(workDir) must be used inside a try-with-resources to ensure the returned Stream<Path> is closed even on exceptions; change the method to open the stream with try (Stream<Path> stream = Files.walk(workDir)) { stream.sorted(Comparator.reverseOrder()).forEach(...) } and keep the existing per-path deleteIfExists try/catch and the outer Files.exists and logging/exception handling intact so resources are always released while preserving deletion behavior and logs.
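The suggested cleanUp shape can be exercised on a scratch directory. Below is a minimal, self-contained sketch (the class name and temp-directory layout are illustrative, not the PR's code) showing Files.walk inside try-with-resources so the stream is closed even when traversal or deletion throws:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;

public class CleanUpDemo {

    // Recursively deletes workDir; the stream returned by Files.walk is closed
    // even if sorting or deletion throws, because it is opened in try-with-resources.
    static void cleanUp(Path workDir) {
        try {
            if (Files.exists(workDir)) {
                try (var pathStream = Files.walk(workDir)) {
                    pathStream
                        .sorted(Comparator.reverseOrder()) // children before parents
                        .forEach(path -> {
                            try {
                                Files.deleteIfExists(path);
                            } catch (IOException ignored) {
                                // deletion failures do not affect job success
                            }
                        });
                }
            }
        } catch (IOException e) {
            // cleanup errors are logged and ignored in the real orchestrator
        }
    }

    public static void main(String[] args) throws IOException {
        Path workDir = Files.createTempDirectory("transcode-work");
        Files.createDirectories(workDir.resolve("outputs/720p"));
        Files.writeString(workDir.resolve("outputs/720p/media.m3u8"), "#EXTM3U");
        cleanUp(workDir);
        System.out.println(Files.exists(workDir)); // false once cleanup succeeds
    }
}
```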
🧹 Nitpick comments (1)
apps/transcoder/Dockerfile (1)
7-7: ffmpeg should be installed without recommended packages.

The apt-get install -y ffmpeg on line 7 also pulls in recommended packages, which needlessly inflates the runtime image size and vulnerability surface. Adding --no-install-recommends here keeps the install minimal and is the safer choice.

Suggested change

```diff
-RUN apt-get update && apt-get install -y ffmpeg && rm -rf /var/lib/apt/lists/*
+RUN apt-get update \
+    && apt-get install -y --no-install-recommends ffmpeg \
+    && rm -rf /var/lib/apt/lists/*
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/transcoder/Dockerfile` at line 7, The RUN line in the Dockerfile installs ffmpeg with recommended packages which increases image size and attack surface; update the RUN command that performs "apt-get install -y ffmpeg" to install ffmpeg without recommended packages (add the --no-install-recommends flag) while keeping the existing apt-get update and cleanup (rm -rf /var/lib/apt/lists/*) so the image stays minimal.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@apps/transcoder/src/main/java/com/ott/transcoder/command/CommandExtractor.java`:
- Around line 19-24: The extracted command list can be empty for inputs below
360p, causing JobOrchestrator.extractCommand()/JobOrchestrator to proceed and
generate an invalid master playlist with zero renditions; fix by validating and
enforcing at least one rendition: either (A) in JobOrchestrator immediately
after calling CommandExtractor.extractCommand(), check if commandList.isEmpty()
and reject the job with an error, or (B) change CommandExtractor (the loop
building TranscodeCommand from Resolution.values()) to guarantee a fallback
TranscodeCommand (e.g., always include the lowest allowed Resolution such as
360p or the input's native resolution) when all resolutions are skipped; ensure
you reference the Resolution enum, TranscodeCommand constructor, and
masterPlaylistGenerator.generate() path so downstream code never receives an
empty resolutionList.
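Option (A) above — rejecting a job whose rendition list came out empty — can be sketched in isolation. The Resolution values and method names below are simplified stand-ins for the project's types, not its actual code:

```java
import java.util.ArrayList;
import java.util.List;

public class RenditionFallbackDemo {

    // Simplified stand-in for the project's Resolution enum (heights are illustrative).
    enum Resolution {
        P360(360), P480(480), P720(720), P1080(1080);
        final int height;
        Resolution(int height) { this.height = height; }
    }

    // Builds the rendition list, skipping any resolution that would upscale the input.
    static List<Resolution> extract(int inputHeight) {
        List<Resolution> commands = new ArrayList<>();
        for (Resolution r : Resolution.values()) {
            if (r.height <= inputHeight) {
                commands.add(r);
            }
        }
        return commands;
    }

    // Validation at the call site: an empty ladder must fail the job instead of
    // flowing into an invalid zero-rendition master playlist.
    static List<Resolution> extractOrFail(int inputHeight) {
        List<Resolution> commands = extract(inputHeight);
        if (commands.isEmpty()) {
            throw new IllegalStateException("no rendition possible for input height " + inputHeight);
        }
        return commands;
    }

    public static void main(String[] args) {
        System.out.println(extractOrFail(720)); // ladder capped at the input height
        try {
            extractOrFail(240); // below the lowest rung: the job is rejected cleanly
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```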
In
`@apps/transcoder/src/main/java/com/ott/transcoder/inspection/DiskSpaceGuard.java`:
- Around line 24-33: DiskSpaceGuard.check currently treats fileSize==0 as 0
required space allowing all jobs through; change the method to validate fileSize
> 0 and fail fast: at the start of DiskSpaceGuard.check(Path workDir, long
fileSize) add a guard that throws a StorageException (use an appropriate
TranscodeErrorCode, e.g., a new or existing error for unknown/invalid size) with
a clear message like "파일 크기 미확인" when fileSize <= 0; this ensures callers must
provide a real file size (or obtain it before calling) instead of bypassing the
disk-space check.
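The fail-fast guard described above can be illustrated without the project's StorageException/TranscodeErrorCode types; stdlib exceptions and the headroom multiplier below are stand-ins, not the project's real values:

```java
public class DiskGuardDemo {

    static final double MULTIPLIER = 2.5; // assumed headroom factor, not the project's value

    // Fail-fast validation: a missing or invalid file size must not silently
    // reduce the required space to zero and let every job through.
    static long requiredSpace(long fileSize) {
        if (fileSize <= 0) {
            throw new IllegalArgumentException("file size unknown or invalid: " + fileSize);
        }
        return (long) Math.ceil(fileSize * MULTIPLIER);
    }

    static void check(long usableSpace, long fileSize) {
        long required = requiredSpace(fileSize);
        if (usableSpace < required) {
            throw new IllegalStateException(
                "insufficient disk space: need " + required + ", have " + usableSpace);
        }
    }

    public static void main(String[] args) {
        check(10_000_000_000L, 1_000_000_000L); // 2.5 GB required, passes
        try {
            check(10_000_000_000L, 0L); // unknown size must be rejected, not waved through
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```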
In `@apps/transcoder/src/main/java/com/ott/transcoder/job/JobOrchestrator.java`:
- Around line 64-65: The disk-space check in JobOrchestrator currently calls
diskSpaceGuard.check(workDir, message.fileSize() != null ? message.fileSize() :
0L), which lets a null TranscodeMessage.fileSize bypass validation; either make
fileSize non-nullable or perform an explicit safety validation here: update
TranscodeMessage.fileSize to be `@NotNull` (and ensure RabbitTranscodeListener
validates incoming messages), or change JobOrchestrator to reject/null-check the
message and throw/log/ack-nack when fileSize == null, or substitute a
conservative required size (e.g., treat null as unknown and use a safe minimum
requiredSpace or compute from file metadata) before calling
DiskSpaceGuard.check; reference symbols: JobOrchestrator, DiskSpaceGuard.check,
TranscodeMessage.fileSize, RabbitTranscodeListener.
In
`@apps/transcoder/src/main/java/com/ott/transcoder/pipeline/hls/MasterPlaylistGenerator.java`:
- Around line 29-36: The master playlist is being built from Resolution defaults
(Resolution.getVideoBitrate(), getWidth(), getHeight()) which can differ from
the actual rendered renditions (bitrate capped in TranscodePlanner and scaled
width computed in ProcessBuilderFfmpegExecutor, plus audio bitrate not
included), so update MasterPlaylistGenerator to use the final rendition metadata
(actual bitrate including audio, and actual output width/height) instead of
Resolution values when building the EXT-X-STREAM-INF BANDWIDTH and RESOLUTION
lines and the rendition URI (media.m3u8); locate where
resolution.getVideoBitrate()/getWidth()/getHeight()/getKey() are used in
MasterPlaylistGenerator and replace them with the corresponding fields from the
final Rendition or RenditionMetadata object produced by the transcode
planning/execution pipeline.
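Building the master playlist from final rendition metadata rather than Resolution defaults might look like the sketch below. The Rendition record and its fields are hypothetical stand-ins for whatever the planning/execution pipeline actually produces:

```java
import java.util.List;

public class MasterPlaylistDemo {

    // Hypothetical final-rendition metadata: actual encoded values, not ladder defaults.
    record Rendition(String key, int width, int height, int videoBitrate, int audioBitrate) {
        int bandwidth() { return videoBitrate + audioBitrate; } // BANDWIDTH must include audio
    }

    // Emits EXT-X-STREAM-INF entries from what was actually rendered:
    // capped bitrate, scaled width/height, and the rendition's media playlist URI.
    static String generate(List<Rendition> renditions) {
        StringBuilder sb = new StringBuilder("#EXTM3U\n");
        for (Rendition r : renditions) {
            sb.append("#EXT-X-STREAM-INF:BANDWIDTH=").append(r.bandwidth())
              .append(",RESOLUTION=").append(r.width()).append('x').append(r.height())
              .append('\n')
              .append(r.key()).append("/media.m3u8\n");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // a 720p rendition whose bitrate was capped by the planner and width scaled by ffmpeg
        String playlist = generate(List.of(new Rendition("720p", 1280, 720, 2_800_000, 128_000)));
        System.out.print(playlist);
    }
}
```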
In
`@apps/transcoder/src/main/java/com/ott/transcoder/queue/TranscodeMessage.java`:
- Around line 13-14: TranscodeMessage currently allows a nullable Long fileSize
which enables bypassing DiskSpaceGuard; change the DTO to require a non-null
primitive long (or otherwise enforce non-null) by updating the TranscodeMessage
field/constructor/signature (e.g., replace Long fileSize with long fileSize and
remove any nullable annotations or Optional usage) so instances cannot be
created without a fileSize; update places that construct TranscodeMessage (and
adjust JobOrchestrator.java usage that currently does message.fileSize() != null
? message.fileSize() : 0L) to pass a real size and remove the fallback to 0, and
coordinate/verify external Lambda publishers now always include fileSize.
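A record with a primitive long plus a compact-constructor guard makes an invalid fileSize unrepresentable, which is the thrust of the suggestion above. A minimal sketch — the field names other than fileSize are illustrative, not the project's DTO:

```java
public class TranscodeMessageDemo {

    // Sketch of the DTO: fileSize is a primitive long (cannot be null) and the
    // compact constructor rejects non-positive values at construction time.
    record TranscodeMessage(String originUrl, String destinationPrefix, long fileSize) {
        TranscodeMessage {
            if (fileSize <= 0) {
                throw new IllegalArgumentException("fileSize must be positive: " + fileSize);
            }
        }
    }

    public static void main(String[] args) {
        TranscodeMessage ok = new TranscodeMessage("videos/raw/a.mp4", "videos/hls/a", 1_048_576L);
        System.out.println(ok.fileSize());
        try {
            new TranscodeMessage("videos/raw/b.mp4", "videos/hls/b", 0L);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

With this shape, JobOrchestrator no longer needs the `message.fileSize() != null ? ... : 0L` fallback at all.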
In
`@apps/transcoder/src/main/java/com/ott/transcoder/storage/LocalVideoStorage.java`:
- Line 26: 현재 LocalVideoStorage와 S3VideoStorage가 둘 다 `@ConditionalOnProperty`(name
= "storage.provider", havingValue = "local" / "s3")로만 등록되어 있어 STORAGE_PROVIDER
환경변수가 없으면 VideoStorage 빈이 생성되지 않습니다; 수정으로 LocalVideoStorage의 클래스
어노테이션(`@ConditionalOnProperty`(name = "storage.provider", havingValue = "local"))에
matchIfMissing = true를 추가하여 기본값을 로컬로 설정하고, 관련 클래스명(LocalVideoStorage,
S3VideoStorage)과 소비자 클래스(JobOrchestrator, 인터페이스 VideoStorage)를 확인해 빈 등록이 정상
동작하도록 보장하세요.
In
`@apps/transcoder/src/main/java/com/ott/transcoder/storage/S3VideoStorage.java`:
- Around line 94-95: S3VideoStorage.upload() currently returns only the
destinationPrefix while VideoStorage.upload() should return the full storage
location like LocalVideoStorage does; update S3VideoStorage.upload() to build
and return the full S3 URI (e.g. "s3://{bucket}/{destinationPrefix}" or
comparable canonical path) so its return value semantics match LocalVideoStorage
and what the DB expects, making sure to use the existing bucket and
destinationPrefix variables (and update any log if you want to log the same full
URI).
- Around line 76-82: When calling s3Client.putObject(...) (the block using
PutObjectRequest.builder() and RequestBody.fromFile(file)), detect the file
extension from s3Key or file name and set the Content-Type on the
PutObjectRequest: use "application/vnd.apple.mpegurl" (or
"application/x-mpegURL") for ".m3u8" files and "video/mp2t" for ".ts" files;
leave other uploads unchanged. Update the PutObjectRequest.builder() call to
include .contentType(...) with the chosen MIME type before .build() so
S3/CloudFront serves HLS manifests and segments with the correct types.
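Both S3VideoStorage fixes above — extension-based Content-Type and returning the full s3:// URI — reduce to two small pure functions. The method names here are assumptions for illustration, not the project's API:

```java
public class S3UploadHelpersDemo {

    // Maps HLS file extensions to MIME types; anything else gets a generic default.
    static String resolveContentType(String key) {
        if (key.endsWith(".m3u8")) return "application/vnd.apple.mpegurl";
        if (key.endsWith(".ts"))   return "video/mp2t";
        return "application/octet-stream";
    }

    // Returns the full storage location, matching LocalVideoStorage's semantics,
    // instead of just the bare prefix.
    static String storageUri(String bucket, String destinationPrefix) {
        return "s3://" + bucket + "/" + destinationPrefix;
    }

    public static void main(String[] args) {
        System.out.println(resolveContentType("videos/hls/a/720p/media.m3u8"));
        System.out.println(resolveContentType("videos/hls/a/720p/seg_000.ts"));
        System.out.println(storageUri("my-ott-bucket", "videos/hls/a"));
    }
}
```

resolveContentType would be called per file before PutObjectRequest.builder()...contentType(...), and storageUri would be the upload() return value.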
In `@apps/transcoder/src/main/resources/application.yml`:
- Around line 50-52: The YAML injects empty strings for STORAGE_OUTPUT_DIR and
AWS_S3_BUCKET which prevents LocalVideoStorage's Java default and early S3
validation; update the application.yml so that output-dir uses a real default
(e.g. a valid path fallback instead of `${STORAGE_OUTPUT_DIR:}`) or remove the
empty-default pattern so LocalVideoStorage (class LocalVideoStorage / its Java
default java.io.tmpdir + '/ott-storage') receives a usable value at boot, and
remove the trailing colon for s3.bucket (i.e. do not inject an empty
`${AWS_S3_BUCKET:}`) so the service fails fast if AWS_S3_BUCKET is not set.
In `@docker-compose.yml`:
- Around line 129-132: Transcoder service S3 auth fails with temporary
credentials because DefaultCredentialsProvider requires AWS_SESSION_TOKEN; add
the AWS_SESSION_TOKEN environment variable alongside AWS_REGION, AWS_S3_BUCKET,
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the docker-compose env block for
the transcoder service so that STS/SSO temporary credentials are passed through;
ensure the env key is named AWS_SESSION_TOKEN and propagated wherever the
transcoder container is started.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: a8e567f3-f9fe-4b72-b062-a56e9f18eb73
📒 Files selected for processing (21)
- apps/transcoder/Dockerfile
- apps/transcoder/build.gradle
- apps/transcoder/src/main/java/com/ott/transcoder/command/CommandExtractor.java
- apps/transcoder/src/main/java/com/ott/transcoder/command/TranscodeCommand.java
- apps/transcoder/src/main/java/com/ott/transcoder/exception/retryable/StorageException.java
- apps/transcoder/src/main/java/com/ott/transcoder/inspection/DiskSpaceGuard.java
- apps/transcoder/src/main/java/com/ott/transcoder/job/JobContext.java
- apps/transcoder/src/main/java/com/ott/transcoder/job/JobOrchestrator.java
- apps/transcoder/src/main/java/com/ott/transcoder/pipeline/CommandPipeline.java
- apps/transcoder/src/main/java/com/ott/transcoder/pipeline/CommandPipelineExecutor.java
- apps/transcoder/src/main/java/com/ott/transcoder/pipeline/hls/HLSTranscodePipeline.java
- apps/transcoder/src/main/java/com/ott/transcoder/pipeline/hls/MasterPlaylistGenerator.java
- apps/transcoder/src/main/java/com/ott/transcoder/pipeline/hls/TranscodeCommandPipeline.java
- apps/transcoder/src/main/java/com/ott/transcoder/pipeline/hls/TranscodePlanner.java
- apps/transcoder/src/main/java/com/ott/transcoder/queue/TranscodeMessage.java
- apps/transcoder/src/main/java/com/ott/transcoder/queue/rabbit/RabbitTranscodeListener.java
- apps/transcoder/src/main/java/com/ott/transcoder/storage/LocalVideoStorage.java
- apps/transcoder/src/main/java/com/ott/transcoder/storage/S3VideoStorage.java
- apps/transcoder/src/main/resources/application.yml
- docker-compose.yml
- modules/infra-s3/src/main/java/com/ott/infra/s3/config/S3ClientConfig.java
💤 Files with no reviewable changes (1)
- apps/transcoder/src/main/java/com/ott/transcoder/pipeline/hls/HLSTranscodePipeline.java
```yaml
      AWS_REGION: ${AWS_REGION}
      AWS_S3_BUCKET: ${AWS_S3_BUCKET}
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
```
🧩 Analysis chain
🏁 Scripts executed:

```shell
# Find and examine docker-compose.yml
find . -name "docker-compose.yml" -type f | head -5

# Search for S3ClientConfig to understand credential provider usage
find . -name "S3ClientConfig.java" -type f

# Search for AWS_SESSION_TOKEN references in the codebase
rg "AWS_SESSION_TOKEN" --type yml --type yaml --type java --type json -n

# Check docker-compose.yml content around lines 129-132
sed -n '120,140p' ./docker-compose.yml

# Read S3ClientConfig.java to understand credential provider
cat ./modules/infra-s3/src/main/java/com/ott/infra/s3/config/S3ClientConfig.java

# Search for AWS_SESSION_TOKEN with correct rg syntax
rg "AWS_SESSION_TOKEN" -n

# Check for .env files or environment files
find . -name ".env*" -o -name "*.env" | head -10

# Search for how AWS credentials are used in transcoder service
find . -path "*/transcoder*" -name "*.java" -o -name "*.yml" -o -name "*.yaml" | grep -E "(transcoder|Transcoder)" | head -10

# Check if there are other docker-compose files or environment config
find . -name "docker-compose*.yml" -o -name "docker-compose*.yaml"

# Check how transcoder application uses AWS credentials
find ./apps/transcoder -name "*.java" -type f | head -20 | xargs grep -l "AWS_" 2>/dev/null

# Search for DefaultCredentialsProvider usage across the codebase
rg "DefaultCredentialsProvider" -A 3 -B 3 -n

# Check if there's any documentation or configuration about credentials
find . -name "README*" -o -name "*.md" | xargs grep -l "AWS\|credential\|STS\|SSO" 2>/dev/null | head -5
```
S3 authentication fails in environments that require temporary AWS credentials.

When reading AWS credentials from environment variables, DefaultCredentialsProvider requires AWS_SESSION_TOKEN. If this variable is not passed through, S3 authentication for the transcoder service fails in environments that use STS/SSO-based temporary credentials. The current configuration only works with long-lived access keys (IAM users).

Suggested fix

```diff
   AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
   AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
+  AWS_SESSION_TOKEN: ${AWS_SESSION_TOKEN}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
      AWS_REGION: ${AWS_REGION}
      AWS_S3_BUCKET: ${AWS_S3_BUCKET}
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
      AWS_SESSION_TOKEN: ${AWS_SESSION_TOKEN}
```
Actionable comments posted: 4
🧹 Nitpick comments (2)
apps/transcoder/src/main/java/com/ott/transcoder/inspection/DiskSpaceGuard.java (1)
9-10: Remove unused imports

IOException and Files are imported but never used in the code.

♻️ Suggested fix

```diff
-import java.io.IOException;
-import java.nio.file.Files;
 import java.nio.file.Path;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/transcoder/src/main/java/com/ott/transcoder/inspection/DiskSpaceGuard.java` around lines 9 - 10, Remove the unused imports in DiskSpaceGuard: delete the import lines for java.io.IOException and java.nio.file.Files since they are not referenced anywhere in the DiskSpaceGuard class; ensure only required imports remain and run a build/IDE organize imports to confirm no other unused imports exist.

apps/transcoder/src/main/java/com/ott/transcoder/constant/IngestJobConstant.java (1)
51-53: Prevent the constants holder from being instantiated.

With the current public constructor, new S3VideoStorageConstant() is possible. Making line 52 private expresses the intent more clearly.

♻️ Suggested fix

```diff
 public static final class S3VideoStorageConstant {
-    public S3VideoStorageConstant() {
+    private S3VideoStorageConstant() {
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/transcoder/src/main/java/com/ott/transcoder/constant/IngestJobConstant.java` around lines 51 - 53, The nested class S3VideoStorageConstant is a constants holder but currently has a public constructor allowing instantiation; change its constructor visibility to private (make the S3VideoStorageConstant() constructor private and keep it empty) so the class cannot be instantiated and clearly signals its intent as a static constant holder.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@apps/transcoder/src/main/java/com/ott/transcoder/command/CommandExtractor.java`:
- Around line 28-30: The current fallback in CommandExtractor forces adding new
TranscodeCommand(Resolution.P360) into commandList which re-enables upscaling
and defeats the earlier isUpscaleFor(...) filter; instead, change this branch to
not force 360p: either throw a controlled exception to mark the job as failed or
select the original/native resolution/profile (do not use Resolution.P360) and
add that TranscodeCommand; ensure you update the commandList handling and any
callers that expect a non-empty list so the job fails cleanly or uses the native
profile rather than upscaling.
In
`@apps/transcoder/src/main/java/com/ott/transcoder/exception/TranscodeErrorCode.java`:
- Line 35: Typo in the enum constant name FILE_SIZE_ZEOR should be
FILE_SIZE_ZERO; rename the enum constant in TranscodeErrorCode (change
FILE_SIZE_ZEOR to FILE_SIZE_ZERO with same code/message) and update all
references (e.g., DiskSpaceGuard.java and any other usages) to use
TranscodeErrorCode.FILE_SIZE_ZERO to keep callers compiling and behavior
unchanged.
In
`@apps/transcoder/src/main/java/com/ott/transcoder/storage/S3VideoStorage.java`:
- Around line 65-90: The code uploads files directly to destinationPrefix (using
s3Client.putObject with s3Key built from destinationPrefix) which can leave a
partial HLS set on failure; change the upload flow to first upload to a
temporary prefix (e.g., tempPrefix = destinationPrefix + "/.tmp/" + uuid) by
writing files using the same s3Client.putObject logic against that tempPrefix
(adjust localDir.relativize -> temp key), then once all files succeed perform an
atomic promotion step (copy objects from tempPrefix to destinationPrefix and
delete temp objects) or write a manifest/marker object at destinationPrefix only
after successful uploads; ensure failures trigger cleanup of uploaded temp
objects (in catch/finally) so no partial artifacts remain.
- Around line 40-53: S3VideoStorage.download is using sourceKey directly as the
S3 object key (passed into GetObjectRequest.key()), but callers like
JobOrchestrator.handle pass TranscodeMessage.originUrl which can be a full S3
URL produced by S3PresignService.toObjectUrl; that causes NoSuchKey. Fix by
normalizing/validating sourceKey at the start of S3VideoStorage.download: detect
if sourceKey is a full URL (schemes like s3/http/https), parse it (e.g., via
java.net.URI) and extract the raw object key (the path portion after the bucket,
trimming any leading '/'), otherwise treat it as already-raw; then pass the
normalized rawKey into GetObjectRequest.builder().key(rawKey). Also add a small
unit test for S3VideoStorage.download that covers both raw keys and full
presigned URLs.
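The key normalization described above can be sketched with java.net.URI. The handling of s3://, virtual-hosted, and path-style URLs below is a best-effort illustration (it assumes an object key never itself begins with the bucket name followed by a slash):

```java
import java.net.URI;

public class S3KeyNormalizerDemo {

    // Accepts either a raw object key or a full URL and returns the raw key.
    // For s3:// and virtual-hosted-style HTTPS URLs the whole path is the key;
    // path-style URLs (https://s3.<region>.amazonaws.com/<bucket>/<key>) still
    // carry the bucket in the path, so it is stripped.
    static String normalize(String sourceKey, String bucket) {
        if (!sourceKey.contains("://")) {
            return sourceKey; // already a raw key
        }
        String path = URI.create(sourceKey).getPath(); // query string (presign params) is dropped
        if (path.startsWith("/")) {
            path = path.substring(1);
        }
        if (path.startsWith(bucket + "/")) {
            path = path.substring(bucket.length() + 1);
        }
        return path;
    }

    public static void main(String[] args) {
        System.out.println(normalize("videos/raw/a.mp4", "my-bucket"));
        System.out.println(normalize("s3://my-bucket/videos/raw/a.mp4", "my-bucket"));
        System.out.println(normalize(
            "https://my-bucket.s3.ap-northeast-2.amazonaws.com/videos/raw/a.mp4?X-Amz-Expires=300",
            "my-bucket"));
    }
}
```

The normalized key would then be passed into GetObjectRequest.builder().key(...).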
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 4d56103b-d05f-42ba-a32a-e72fcac4176a
📒 Files selected for processing (9)
- apps/transcoder/src/main/java/com/ott/transcoder/command/CommandExtractor.java
- apps/transcoder/src/main/java/com/ott/transcoder/constant/IngestJobConstant.java
- apps/transcoder/src/main/java/com/ott/transcoder/exception/TranscodeErrorCode.java
- apps/transcoder/src/main/java/com/ott/transcoder/exception/TranscodeException.java
- apps/transcoder/src/main/java/com/ott/transcoder/exception/retryable/RetryableException.java
- apps/transcoder/src/main/java/com/ott/transcoder/exception/retryable/StorageException.java
- apps/transcoder/src/main/java/com/ott/transcoder/inspection/DiskSpaceGuard.java
- apps/transcoder/src/main/java/com/ott/transcoder/storage/S3VideoStorage.java
- apps/transcoder/src/main/resources/application.yml
🚧 Files skipped from review as they are similar to previous changes (2)
- apps/transcoder/src/main/resources/application.yml
- apps/transcoder/src/main/java/com/ott/transcoder/exception/retryable/StorageException.java
```java
try (Stream<Path> fileStream = Files.walk(localDir)) {
    List<Path> fileList = fileStream.filter(Files::isRegularFile).toList();

    for (Path file : fileList) {
        String relativePath = localDir.relativize(file).toString().replace("\\", "/");
        String s3Key = destinationPrefix + "/" + relativePath;
        String contentType = resolveContentType(file);

        s3Client.putObject(
                PutObjectRequest.builder()
                        .bucket(bucket)
                        .key(s3Key)
                        .contentType(contentType)
                        .build(),
                RequestBody.fromFile(file)
        );

        log.debug("S3 업로드 - {}, contentType: {}", s3Key, contentType);
    }
} catch (IOException e) {
    throw new StorageException(TranscodeErrorCode.STORAGE_FAILED,
            "S3 업로드 중 파일 탐색 실패 - " + localDir, e);
} catch (SdkException e) {
    throw new StorageException(TranscodeErrorCode.STORAGE_FAILED,
            "S3 업로드 실패 - bucket: " + bucket + ", prefix: " + destinationPrefix, e);
}
```
Uploading directly to the final prefix can leave a broken HLS set behind on retry.

From line 70 the code writes directly to the final path, and lines 87-89 wrap failures in a retryable exception. If the upload fails after only some files have been written, incomplete objects remain under the same prefix and can mix with the output of the next retry. For artifacts like HLS, where the file set must be complete as a unit, upload to a temporary prefix and expose it only on completion, or clean up the uploaded objects on failure.
Once the command table is added and per-command retry logic is written, partial success will come into play, so I'll address this in the next PR.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@apps/transcoder/src/main/java/com/ott/transcoder/inspection/DiskSpaceGuard.java`:
- Around line 24-30: The DiskSpaceGuard.check method currently multiplies
fileSize by multiplier (field multiplier) without validating multiplier, so a
configured value <= 0 bypasses the disk-space check; before computing
requiredSpace in DiskSpaceGuard.check, assert multiplier > 0 and if not throw a
StorageException (introduce or use an appropriate TranscodeErrorCode such as
INVALID_DISK_MULTIPLIER) to fail fast and prevent the guard being disabled by
misconfiguration.
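Validating the multiplier once at construction keeps the guard from being disabled by misconfiguration, as the finding suggests. A minimal sketch, with stdlib exceptions standing in for StorageException/TranscodeErrorCode:

```java
public class GuardConfigDemo {

    static final class DiskSpaceGuard {
        private final double multiplier;

        // Validate configuration once, at construction, so a multiplier <= 0
        // cannot silently zero out the required-space computation later.
        DiskSpaceGuard(double multiplier) {
            if (multiplier <= 0) {
                throw new IllegalArgumentException("invalid disk multiplier: " + multiplier);
            }
            this.multiplier = multiplier;
        }

        long requiredSpace(long fileSize) {
            return (long) Math.ceil(fileSize * multiplier);
        }
    }

    public static void main(String[] args) {
        DiskSpaceGuard guard = new DiskSpaceGuard(1.5);
        System.out.println(guard.requiredSpace(1_000L)); // 1500
        try {
            new DiskSpaceGuard(0.0); // misconfiguration fails fast at startup
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```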
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 87784b42-2aba-45ca-b52b-2ccedd733a03
📒 Files selected for processing (2)
- apps/transcoder/src/main/java/com/ott/transcoder/exception/TranscodeErrorCode.java
- apps/transcoder/src/main/java/com/ott/transcoder/inspection/DiskSpaceGuard.java
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/transcoder/src/main/java/com/ott/transcoder/exception/TranscodeErrorCode.java
```java
 * @param resolutionList list of resolutions to include
 * @return path of the generated master.m3u8
 */
public Path generate(Path outputDir, List<Resolution> resolutionList) {
```
A thought while reading: the functions tied to this pipeline are the master URL generator and the Planner — would keeping them in a single file be a bad idea? Their roles differ, but they strike me as utility functions used by this one pipeline. If pipelines for thumbnails, subtitles, and so on are added later and each extra feature also gets its own file, I personally think it will become hard to navigate!
📝 Work done

📷 Screenshots

[S3 upload]

☑️ Checklist

#️⃣ Related issue

closes #137

💬 Review notes

0. Description

Everything up to the start of the two tasks below is done.
I confirmed that the source is downloaded from S3, processed, and uploaded back to S3.

1. Post Processing
2. TranscodeMessage change
Summary by CodeRabbit

Release notes

New features

Improvements

Other