[OT-240] [FEAT]: HLS transcoding work #147

Merged
phonil merged 16 commits into develop from
OT-240-feature/transcode-design-implement
Mar 11, 2026

Conversation

@phonil
Contributor

@phonil phonil commented Mar 11, 2026

📝 Work done

Describe what was done in this PR

  • Added S3 video storage
  • Added DiskSpaceGuard
  • Added FFmpeg installation to the Docker image
  • Fixed paths related to transcoded files

📷 Screenshots

[S3 upload]
image

  • The paths still need to be fixed.

☑️ Checklist

Please confirm the checklist

  • Did the tests pass?
  • Did you resolve any conflicts?
  • Did you file the issue?
  • Did you add labels?

#️⃣ Related issue

e.g. # issue number

closes #137

💬 Review requests

If there is anything you would especially like reviewers to look at, write it here

e.g. Is this way of handling exceptions okay? / Please look closely at the ~~ part

0. Overview

  • State management
  • Architecture design
    Work is complete up to the point where those two tasks begin.
    I confirmed the full flow: the original is downloaded from S3, processed, and the result is uploaded back to S3.

1. Post Processing

  • The pipeline's post-processing currently lives in the Orchestrator; it needs to move into each pipeline.

2. TranscodeMessage changes

  • In the next task I will change it to match the data the Lambda can send.

Summary by CodeRabbit

Release notes

  • New features

    • Added AWS S3 storage support: videos can be uploaded and downloaded.
    • The Docker image now includes the FFmpeg runtime, making encoding tools available inside the container.
  • Improvements

    • Strengthened disk-space validation before transcoding with a file-size-based check.
    • Master playlist generation is now dynamic: entries are generated from resolution attributes, simplifying the logic and improving consistency.
  • Other

    • Added AWS-related environment variables and configuration.

@phonil phonil self-assigned this Mar 11, 2026
@coderabbitai

coderabbitai Bot commented Mar 11, 2026

Walkthrough

The pipeline execution path is restructured around JobContext propagation (create work directory → disk check → run commands → generate playlist → upload), S3 storage support with an S3Client bean is added, and the ffmpeg runtime is installed in the final container image.

Changes

Cohort / File(s) Summary
Docker and build dependencies
apps/transcoder/Dockerfile, apps/transcoder/build.gradle, modules/infra-s3/src/main/java/com/ott/infra/s3/config/S3ClientConfig.java
Install ffmpeg in the final image, add the AWS SDK BOM and s3 dependency, add an S3Client Spring bean
Command and entity changes
apps/transcoder/src/main/java/com/ott/transcoder/command/CommandExtractor.java, apps/transcoder/src/main/java/com/ott/transcoder/command/TranscodeCommand.java
Add filtering of upscale resolutions with a 360p fallback when the filtered list is empty; make TranscodeCommand's resolution field private final and add a logger
Pipeline signatures and execution path
apps/transcoder/src/main/java/com/ott/transcoder/pipeline/CommandPipeline.java, apps/transcoder/src/main/java/com/ott/transcoder/pipeline/CommandPipelineExecutor.java
Add a JobContext parameter to the execute signatures of CommandPipeline and the executor, and propagate it
JobContext and orchestration
apps/transcoder/src/main/java/com/ott/transcoder/job/JobContext.java, apps/transcoder/src/main/java/com/ott/transcoder/job/JobOrchestrator.java
Add the JobContext record; run the DiskSpaceGuard check after creating the work directory; pass JobContext when executing commands; collect the transcoded resolutions, then generate and upload the master playlist
HLS pipeline and planner changes
apps/transcoder/src/main/java/com/ott/transcoder/pipeline/hls/HLSTranscodePipeline.java (deleted), .../TranscodeCommandPipeline.java, .../TranscodePlanner.java, .../MasterPlaylistGenerator.java
Delete HLSTranscodePipeline; TranscodeCommandPipeline now drives the ffmpeg invocation from JobContext; TranscodePlanner is reworked around a single resolution; MasterPlaylistGenerator drops the Variant map in favor of dynamic generation based on Resolution
Disk checks and exception model
apps/transcoder/src/main/java/com/ott/transcoder/inspection/DiskSpaceGuard.java, apps/transcoder/src/main/java/com/ott/transcoder/exception/retryable/StorageException.java, apps/transcoder/src/main/java/com/ott/transcoder/exception/TranscodeException.java, apps/transcoder/src/main/java/com/ott/transcoder/exception/retryable/RetryableException.java, apps/transcoder/src/main/java/com/ott/transcoder/exception/TranscodeErrorCode.java
Change the signature to DiskSpaceGuard.check(workDir, fileSize); throw StorageException when the file size is zero or disk space is insufficient; add single-argument constructors to the exceptions and remove the 3-argument one; add the FILE_SIZE_ZERO error code
Message, listener, and storage implementations
apps/transcoder/src/main/java/com/ott/transcoder/queue/TranscodeMessage.java, apps/transcoder/src/main/java/com/ott/transcoder/queue/rabbit/RabbitTranscodeListener.java, apps/transcoder/src/main/java/com/ott/transcoder/storage/LocalVideoStorage.java, apps/transcoder/src/main/java/com/ott/transcoder/storage/S3VideoStorage.java, apps/transcoder/src/main/resources/application.yml, docker-compose.yml, apps/transcoder/src/main/java/com/ott/transcoder/constant/IngestJobConstant.java
Add fileSize to TranscodeMessage; update the JobOrchestrator import path; gate LocalVideoStorage with ConditionalOnProperty; add a new S3VideoStorage implementation (download/upload, exception mapping, MIME detection); add S3/AWS settings to application.yml and docker-compose; add MIME constants

Sequence Diagram(s)

sequenceDiagram
    participant Client as "Client"
    participant Orchestrator as "JobOrchestrator"
    participant DiskGuard as "DiskSpaceGuard"
    participant Executor as "CommandPipelineExecutor"
    participant S3 as "S3VideoStorage"
    participant Probe as "Probe/Planner"
    participant FFmpeg as "FfmpegExecutor"
    participant Playlist as "MasterPlaylistGenerator"

    Client->>Orchestrator: TranscodeMessage
    Orchestrator->>Orchestrator: create workDir/outputDir
    Orchestrator->>DiskGuard: check(workDir, fileSize)
    DiskGuard-->>Orchestrator: ok / StorageException
    Orchestrator->>Executor: execute(DownloadCommand, JobContext)
    Executor->>S3: download(sourceKey, workDir)
    S3-->>Executor: saved file (Path)
    Executor-->>Orchestrator: done
    Orchestrator->>Executor: execute(ProbeCommand, JobContext)
    Executor->>Probe: probe(inputFile)
    Probe-->>Executor: ProbeResult
    loop each TranscodeCommand
        Orchestrator->>Executor: execute(TranscodeCommand, JobContext)
        Executor->>Probe: plan(command, ProbeResult)
        Probe-->>Executor: TranscodeProfile
        Executor->>FFmpeg: execute(inputFile, outputDir, profile)
        FFmpeg-->>Executor: transcode done
    end
    Orchestrator->>Playlist: generate(outputDir, resolutionList)
    Playlist-->>Orchestrator: playlist generated
    Orchestrator->>S3: upload(outputDir, destinationPrefix)
    S3-->>Orchestrator: upload complete, returns path
    Orchestrator-->>Client: completion notification

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

Suggested reviewers

  • marulog
  • arlen02-01
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 11.11% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The PR title '[OT-240] [FEAT]: HLS 트랜스코딩 작업' clearly describes the main change in this PR, the HLS transcoding implementation.
Linked Issues check ✅ Passed The PR meets all the core goals of #137: implementing the transcoding behavior, S3 storage, the disk-space check, completing the pipeline structure, and installing FFmpeg.
Out of Scope Changes check ✅ Passed All changes fall within the scope of the HLS transcoding implementation, and no unrelated features were added.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/transcoder/src/main/java/com/ott/transcoder/job/JobOrchestrator.java (1)

116-133: ⚠️ Potential issue | 🟠 Major

The stream returned by Files.walk() is never closed, which can leak resources.

Files.walk() returns a Stream<Path> that wraps directory-traversal resources and, per the Java documentation, must be closed. In the current code, if sorted() or forEach() throws, the stream is never closed and file handles can leak.

🐛 Suggested fix using try-with-resources
 private void cleanUp(Path workDir) {
     try {
         if (Files.exists(workDir)) {
-            Files.walk(workDir)
-                    .sorted(Comparator.reverseOrder())
-                    .forEach(path -> {
-                        try {
-                            Files.deleteIfExists(path);
-                        } catch (IOException ignored) {
-                            // 삭제 실패는 로그만 남기고 무시 (작업 성공 여부에 지장 없음)
-                        }
-                    });
+            try (var pathStream = Files.walk(workDir)) {
+                pathStream
+                        .sorted(Comparator.reverseOrder())
+                        .forEach(path -> {
+                            try {
+                                Files.deleteIfExists(path);
+                            } catch (IOException ignored) {
+                                // 삭제 실패는 로그만 남기고 무시 (작업 성공 여부에 지장 없음)
+                            }
+                        });
+            }
             log.info("작업 디렉토리 정리 완료 - {}", workDir);
         }
     } catch (IOException e) {
         log.warn("작업 디렉토리 정리 중 오류 발생 (무시함) - {}", workDir, e);
     }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/transcoder/src/main/java/com/ott/transcoder/job/JobOrchestrator.java`
around lines 116 - 133, In JobOrchestrator.cleanUp, Files.walk(workDir) must be
used inside a try-with-resources to ensure the returned Stream<Path> is closed
even on exceptions; change the method to open the stream with try (Stream<Path>
stream = Files.walk(workDir)) {
stream.sorted(Comparator.reverseOrder()).forEach(...) } and keep the existing
per-path deleteIfExists try/catch and the outer Files.exists and
logging/exception handling intact so resources are always released while
preserving deletion behavior and logs.
🧹 Nitpick comments (1)
apps/transcoder/Dockerfile (1)

7-7: ffmpeg should be installed without recommended packages.

The apt-get install -y ffmpeg on line 7 pulls in recommended packages as well, needlessly inflating the runtime image size and vulnerability surface. Adding --no-install-recommends keeps the install minimal, which is safer here.

Example change
-RUN apt-get update && apt-get install -y ffmpeg && rm -rf /var/lib/apt/lists/*
+RUN apt-get update \
+    && apt-get install -y --no-install-recommends ffmpeg \
+    && rm -rf /var/lib/apt/lists/*
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/transcoder/Dockerfile` at line 7, The RUN line in the Dockerfile
installs ffmpeg with recommended packages which increases image size and attack
surface; update the RUN command that performs "apt-get install -y ffmpeg" to
install ffmpeg without recommended packages (add the --no-install-recommends
flag) while keeping the existing apt-get update and cleanup (rm -rf
/var/lib/apt/lists/*) so the image stays minimal.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/transcoder/src/main/java/com/ott/transcoder/command/CommandExtractor.java`:
- Around line 19-24: The extracted command list can be empty for inputs below
360p, causing JobOrchestrator.extractCommand()/JobOrchestrator to proceed and
generate an invalid master playlist with zero renditions; fix by validating and
enforcing at least one rendition: either (A) in JobOrchestrator immediately
after calling CommandExtractor.extractCommand(), check if commandList.isEmpty()
and reject the job with an error, or (B) change CommandExtractor (the loop
building TranscodeCommand from Resolution.values()) to guarantee a fallback
TranscodeCommand (e.g., always include the lowest allowed Resolution such as
360p or the input's native resolution) when all resolutions are skipped; ensure
you reference the Resolution enum, TranscodeCommand constructor, and
masterPlaylistGenerator.generate() path so downstream code never receives an
empty resolutionList.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/inspection/DiskSpaceGuard.java`:
- Around line 24-33: DiskSpaceGuard.check currently treats fileSize==0 as 0
required space allowing all jobs through; change the method to validate fileSize
> 0 and fail fast: at the start of DiskSpaceGuard.check(Path workDir, long
fileSize) add a guard that throws a StorageException (use an appropriate
TranscodeErrorCode, e.g., a new or existing error for unknown/invalid size) with
a clear message like "파일 크기 미확인" when fileSize <= 0; this ensures callers must
provide a real file size (or obtain it before calling) instead of bypassing the
disk-space check.

In `@apps/transcoder/src/main/java/com/ott/transcoder/job/JobOrchestrator.java`:
- Around line 64-65: The disk-space check in JobOrchestrator currently calls
diskSpaceGuard.check(workDir, message.fileSize() != null ? message.fileSize() :
0L), which lets a null TranscodeMessage.fileSize bypass validation; either make
fileSize non-nullable or perform an explicit safety validation here: update
TranscodeMessage.fileSize to be `@NotNull` (and ensure RabbitTranscodeListener
validates incoming messages), or change JobOrchestrator to reject/null-check the
message and throw/log/ack-nack when fileSize == null, or substitute a
conservative required size (e.g., treat null as unknown and use a safe minimum
requiredSpace or compute from file metadata) before calling
DiskSpaceGuard.check; reference symbols: JobOrchestrator, DiskSpaceGuard.check,
TranscodeMessage.fileSize, RabbitTranscodeListener.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/pipeline/hls/MasterPlaylistGenerator.java`:
- Around line 29-36: The master playlist is being built from Resolution defaults
(Resolution.getVideoBitrate(), getWidth(), getHeight()) which can differ from
the actual rendered renditions (bitrate capped in TranscodePlanner and scaled
width computed in ProcessBuilderFfmpegExecutor, plus audio bitrate not
included), so update MasterPlaylistGenerator to use the final rendition metadata
(actual bitrate including audio, and actual output width/height) instead of
Resolution values when building the EXT-X-STREAM-INF BANDWIDTH and RESOLUTION
lines and the rendition URI (media.m3u8); locate where
resolution.getVideoBitrate()/getWidth()/getHeight()/getKey() are used in
MasterPlaylistGenerator and replace them with the corresponding fields from the
final Rendition or RenditionMetadata object produced by the transcode
planning/execution pipeline.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/queue/TranscodeMessage.java`:
- Around line 13-14: TranscodeMessage currently allows a nullable Long fileSize
which enables bypassing DiskSpaceGuard; change the DTO to require a non-null
primitive long (or otherwise enforce non-null) by updating the TranscodeMessage
field/constructor/signature (e.g., replace Long fileSize with long fileSize and
remove any nullable annotations or Optional usage) so instances cannot be
created without a fileSize; update places that construct TranscodeMessage (and
adjust JobOrchestrator.java usage that currently does message.fileSize() != null
? message.fileSize() : 0L) to pass a real size and remove the fallback to 0, and
coordinate/verify external Lambda publishers now always include fileSize.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/storage/LocalVideoStorage.java`:
- Line 26: LocalVideoStorage and S3VideoStorage are both registered solely via
`@ConditionalOnProperty`(name = "storage.provider", havingValue = "local" / "s3"),
so if the STORAGE_PROVIDER environment variable is absent no VideoStorage bean is
created; fix this by adding matchIfMissing = true to the class annotation on
LocalVideoStorage (`@ConditionalOnProperty`(name = "storage.provider", havingValue
= "local")) so local becomes the default, and check the related classes
(LocalVideoStorage, S3VideoStorage) and their consumers (JobOrchestrator, the
VideoStorage interface) to make sure bean registration works as expected.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/storage/S3VideoStorage.java`:
- Around line 94-95: S3VideoStorage.upload() currently returns only the
destinationPrefix while VideoStorage.upload() should return the full storage
location like LocalVideoStorage does; update S3VideoStorage.upload() to build
and return the full S3 URI (e.g. "s3://{bucket}/{destinationPrefix}" or
comparable canonical path) so its return value semantics match LocalVideoStorage
and what the DB expects, making sure to use the existing bucket and
destinationPrefix variables (and update any log if you want to log the same full
URI).
- Around line 76-82: When calling s3Client.putObject(...) (the block using
PutObjectRequest.builder() and RequestBody.fromFile(file)), detect the file
extension from s3Key or file name and set the Content-Type on the
PutObjectRequest: use "application/vnd.apple.mpegurl" (or
"application/x-mpegURL") for ".m3u8" files and "video/mp2t" for ".ts" files;
leave other uploads unchanged. Update the PutObjectRequest.builder() call to
include .contentType(...) with the chosen MIME type before .build() so
S3/CloudFront serves HLS manifests and segments with the correct types.

In `@apps/transcoder/src/main/resources/application.yml`:
- Around line 50-52: The YAML injects empty strings for STORAGE_OUTPUT_DIR and
AWS_S3_BUCKET which prevents LocalVideoStorage's Java default and early S3
validation; update the application.yml so that output-dir uses a real default
(e.g. a valid path fallback instead of `${STORAGE_OUTPUT_DIR:}`) or remove the
empty-default pattern so LocalVideoStorage (class LocalVideoStorage / its Java
default java.io.tmpdir + '/ott-storage') receives a usable value at boot, and
remove the trailing colon for s3.bucket (i.e. do not inject an empty
`${AWS_S3_BUCKET:}`) so the service fails fast if AWS_S3_BUCKET is not set.

In `@docker-compose.yml`:
- Around line 129-132: Transcoder service S3 auth fails with temporary
credentials because DefaultCredentialsProvider requires AWS_SESSION_TOKEN; add
the AWS_SESSION_TOKEN environment variable alongside AWS_REGION, AWS_S3_BUCKET,
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the docker-compose env block for
the transcoder service so that STS/SSO temporary credentials are passed through;
ensure the env key is named AWS_SESSION_TOKEN and propagated wherever the
transcoder container is started.

---

Outside diff comments:
In `@apps/transcoder/src/main/java/com/ott/transcoder/job/JobOrchestrator.java`:
- Around line 116-133: In JobOrchestrator.cleanUp, Files.walk(workDir) must be
used inside a try-with-resources to ensure the returned Stream<Path> is closed
even on exceptions; change the method to open the stream with try (Stream<Path>
stream = Files.walk(workDir)) {
stream.sorted(Comparator.reverseOrder()).forEach(...) } and keep the existing
per-path deleteIfExists try/catch and the outer Files.exists and
logging/exception handling intact so resources are always released while
preserving deletion behavior and logs.

---

Nitpick comments:
In `@apps/transcoder/Dockerfile`:
- Line 7: The RUN line in the Dockerfile installs ffmpeg with recommended
packages which increases image size and attack surface; update the RUN command
that performs "apt-get install -y ffmpeg" to install ffmpeg without recommended
packages (add the --no-install-recommends flag) while keeping the existing
apt-get update and cleanup (rm -rf /var/lib/apt/lists/*) so the image stays
minimal.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: a8e567f3-f9fe-4b72-b062-a56e9f18eb73

📥 Commits

Reviewing files that changed from the base of the PR and between f290ec2 and fe25a8a.

📒 Files selected for processing (21)
  • apps/transcoder/Dockerfile
  • apps/transcoder/build.gradle
  • apps/transcoder/src/main/java/com/ott/transcoder/command/CommandExtractor.java
  • apps/transcoder/src/main/java/com/ott/transcoder/command/TranscodeCommand.java
  • apps/transcoder/src/main/java/com/ott/transcoder/exception/retryable/StorageException.java
  • apps/transcoder/src/main/java/com/ott/transcoder/inspection/DiskSpaceGuard.java
  • apps/transcoder/src/main/java/com/ott/transcoder/job/JobContext.java
  • apps/transcoder/src/main/java/com/ott/transcoder/job/JobOrchestrator.java
  • apps/transcoder/src/main/java/com/ott/transcoder/pipeline/CommandPipeline.java
  • apps/transcoder/src/main/java/com/ott/transcoder/pipeline/CommandPipelineExecutor.java
  • apps/transcoder/src/main/java/com/ott/transcoder/pipeline/hls/HLSTranscodePipeline.java
  • apps/transcoder/src/main/java/com/ott/transcoder/pipeline/hls/MasterPlaylistGenerator.java
  • apps/transcoder/src/main/java/com/ott/transcoder/pipeline/hls/TranscodeCommandPipeline.java
  • apps/transcoder/src/main/java/com/ott/transcoder/pipeline/hls/TranscodePlanner.java
  • apps/transcoder/src/main/java/com/ott/transcoder/queue/TranscodeMessage.java
  • apps/transcoder/src/main/java/com/ott/transcoder/queue/rabbit/RabbitTranscodeListener.java
  • apps/transcoder/src/main/java/com/ott/transcoder/storage/LocalVideoStorage.java
  • apps/transcoder/src/main/java/com/ott/transcoder/storage/S3VideoStorage.java
  • apps/transcoder/src/main/resources/application.yml
  • docker-compose.yml
  • modules/infra-s3/src/main/java/com/ott/infra/s3/config/S3ClientConfig.java
💤 Files with no reviewable changes (1)
  • apps/transcoder/src/main/java/com/ott/transcoder/pipeline/hls/HLSTranscodePipeline.java

Comment thread apps/transcoder/src/main/java/com/ott/transcoder/storage/S3VideoStorage.java Outdated
Comment thread apps/transcoder/src/main/resources/application.yml Outdated
Comment thread docker-compose.yml
Comment on lines +129 to +132
AWS_REGION: ${AWS_REGION}
AWS_S3_BUCKET: ${AWS_S3_BUCKET}
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Find and examine docker-compose.yml
find . -name "docker-compose.yml" -type f | head -5

Repository: OpenTheTaste/backend

Length of output: 121


🏁 Script executed:

# Search for S3ClientConfig to understand credential provider usage
find . -name "S3ClientConfig.java" -type f

Repository: OpenTheTaste/backend

Length of output: 140


🏁 Script executed:

# Search for AWS_SESSION_TOKEN references in the codebase
rg "AWS_SESSION_TOKEN" --type yml --type yaml --type java --type json -n

Repository: OpenTheTaste/backend

Length of output: 91


🏁 Script executed:

# Check docker-compose.yml content around lines 129-132
sed -n '120,140p' ./docker-compose.yml

Repository: OpenTheTaste/backend

Length of output: 748


🏁 Script executed:

# Read S3ClientConfig.java to understand credential provider
cat ./modules/infra-s3/src/main/java/com/ott/infra/s3/config/S3ClientConfig.java

Repository: OpenTheTaste/backend

Length of output: 769


🏁 Script executed:

# Search for AWS_SESSION_TOKEN with correct rg syntax
rg "AWS_SESSION_TOKEN" -n

Repository: OpenTheTaste/backend

Length of output: 46


🏁 Script executed:

# Check for .env files or environment files
find . -name ".env*" -o -name "*.env" | head -10

Repository: OpenTheTaste/backend

Length of output: 46


🏁 Script executed:

# Search for how AWS credentials are used in transcoder service
find . -path "*/transcoder*" -name "*.java" -o -name "*.yml" -o -name "*.yaml" | grep -E "(transcoder|Transcoder)" | head -10

Repository: OpenTheTaste/backend

Length of output: 875


🏁 Script executed:

# Check if there are other docker-compose files or environment config
find . -name "docker-compose*.yml" -o -name "docker-compose*.yaml"

Repository: OpenTheTaste/backend

Length of output: 163


🏁 Script executed:

# Check how transcoder application uses AWS credentials
find ./apps/transcoder -name "*.java" -type f | head -20 | xargs grep -l "AWS_" 2>/dev/null

Repository: OpenTheTaste/backend

Length of output: 48


🏁 Script executed:

# Search for DefaultCredentialsProvider usage across the codebase
rg "DefaultCredentialsProvider" -A 3 -B 3 -n

Repository: OpenTheTaste/backend

Length of output: 3538


🏁 Script executed:

# Check if there's any documentation or configuration about credentials
find . -name "README*" -o -name "*.md" | xargs grep -l "AWS\|credential\|STS\|SSO" 2>/dev/null | head -5

Repository: OpenTheTaste/backend

Length of output: 75


S3 authentication fails in environments that need temporary AWS credentials.

When DefaultCredentialsProvider reads AWS credentials from environment variables, it also requires AWS_SESSION_TOKEN. If that variable is not passed through, S3 authentication in the transcoder service fails in environments that use STS/SSO-based temporary credentials. The current configuration only works with long-lived access keys (IAM users).

Suggested fix
       AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
       AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
+      AWS_SESSION_TOKEN: ${AWS_SESSION_TOKEN}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
AWS_REGION: ${AWS_REGION}
AWS_S3_BUCKET: ${AWS_S3_BUCKET}
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
AWS_REGION: ${AWS_REGION}
AWS_S3_BUCKET: ${AWS_S3_BUCKET}
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
AWS_SESSION_TOKEN: ${AWS_SESSION_TOKEN}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docker-compose.yml` around lines 129 - 132, Transcoder service S3 auth fails
with temporary credentials because DefaultCredentialsProvider requires
AWS_SESSION_TOKEN; add the AWS_SESSION_TOKEN environment variable alongside
AWS_REGION, AWS_S3_BUCKET, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the
docker-compose env block for the transcoder service so that STS/SSO temporary
credentials are passed through; ensure the env key is named AWS_SESSION_TOKEN
and propagated wherever the transcoder container is started.

@phonil phonil added the feat (new feature implementation) label Mar 11, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (2)
apps/transcoder/src/main/java/com/ott/transcoder/inspection/DiskSpaceGuard.java (1)

9-10: Remove unused imports

IOException and Files are imported but never used in the code.

♻️ Suggested fix
-import java.io.IOException;
-import java.nio.file.Files;
 import java.nio.file.Path;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/inspection/DiskSpaceGuard.java`
around lines 9 - 10, Remove the unused imports in DiskSpaceGuard: delete the
import lines for java.io.IOException and java.nio.file.Files since they are not
referenced anywhere in the DiskSpaceGuard class; ensure only required imports
remain and run a build/IDE organize imports to confirm no other unused imports
exist.
apps/transcoder/src/main/java/com/ott/transcoder/constant/IngestJobConstant.java (1)

51-53: A constants holder should not be instantiable.

With the current public constructor, new S3VideoStorageConstant() is possible. Making line 52 private expresses the intent more clearly.

♻️ Suggested fix
     public static final class S3VideoStorageConstant {
-        public S3VideoStorageConstant() {
+        private S3VideoStorageConstant() {
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/constant/IngestJobConstant.java`
around lines 51 - 53, The nested class S3VideoStorageConstant is a constants
holder but currently has a public constructor allowing instantiation; change its
constructor visibility to private (make the S3VideoStorageConstant() constructor
private and keep it empty) so the class cannot be instantiated and clearly
signals its intent as a static constant holder.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/transcoder/src/main/java/com/ott/transcoder/command/CommandExtractor.java`:
- Around line 28-30: The current fallback in CommandExtractor forces adding new
TranscodeCommand(Resolution.P360) into commandList which re-enables upscaling
and defeats the earlier isUpscaleFor(...) filter; instead, change this branch to
not force 360p: either throw a controlled exception to mark the job as failed or
select the original/native resolution/profile (do not use Resolution.P360) and
add that TranscodeCommand; ensure you update the commandList handling and any
callers that expect a non-empty list so the job fails cleanly or uses the native
profile rather than upscaling.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/exception/TranscodeErrorCode.java`:
- Line 35: Typo in the enum constant name FILE_SIZE_ZEOR should be
FILE_SIZE_ZERO; rename the enum constant in TranscodeErrorCode (change
FILE_SIZE_ZEOR to FILE_SIZE_ZERO with same code/message) and update all
references (e.g., DiskSpaceGuard.java and any other usages) to use
TranscodeErrorCode.FILE_SIZE_ZERO to keep callers compiling and behavior
unchanged.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/storage/S3VideoStorage.java`:
- Around line 65-90: The code uploads files directly to destinationPrefix (using
s3Client.putObject with s3Key built from destinationPrefix) which can leave a
partial HLS set on failure; change the upload flow to first upload to a
temporary prefix (e.g., tempPrefix = destinationPrefix + "/.tmp/" + uuid) by
writing files using the same s3Client.putObject logic against that tempPrefix
(adjust localDir.relativize -> temp key), then once all files succeed perform an
atomic promotion step (copy objects from tempPrefix to destinationPrefix and
delete temp objects) or write a manifest/marker object at destinationPrefix only
after successful uploads; ensure failures trigger cleanup of uploaded temp
objects (in catch/finally) so no partial artifacts remain.
- Around line 40-53: S3VideoStorage.download is using sourceKey directly as the
S3 object key (passed into GetObjectRequest.key()), but callers like
JobOrchestrator.handle pass TranscodeMessage.originUrl which can be a full S3
URL produced by S3PresignService.toObjectUrl; that causes NoSuchKey. Fix by
normalizing/validating sourceKey at the start of S3VideoStorage.download: detect
if sourceKey is a full URL (schemes like s3/http/https), parse it (e.g., via
java.net.URI) and extract the raw object key (the path portion after the bucket,
trimming any leading '/'), otherwise treat it as already-raw; then pass the
normalized rawKey into GetObjectRequest.builder().key(rawKey). Also add a small
unit test for S3VideoStorage.download that covers both raw keys and full
presigned URLs.

---

Nitpick comments:
In
`@apps/transcoder/src/main/java/com/ott/transcoder/constant/IngestJobConstant.java`:
- Around line 51-53: The nested class S3VideoStorageConstant is a constants
holder but currently has a public constructor allowing instantiation; change its
constructor visibility to private (make the S3VideoStorageConstant() constructor
private and keep it empty) so the class cannot be instantiated and clearly
signals its intent as a static constant holder.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/inspection/DiskSpaceGuard.java`:
- Around line 9-10: Remove the unused imports in DiskSpaceGuard: delete the
import lines for java.io.IOException and java.nio.file.Files since they are not
referenced anywhere in the DiskSpaceGuard class; ensure only required imports
remain and run a build/IDE organize imports to confirm no other unused imports
exist.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 4d56103b-d05f-42ba-a32a-e72fcac4176a

📥 Commits

Reviewing files that changed from the base of the PR and between fe25a8a and a8c9fc8.

📒 Files selected for processing (9)
  • apps/transcoder/src/main/java/com/ott/transcoder/command/CommandExtractor.java
  • apps/transcoder/src/main/java/com/ott/transcoder/constant/IngestJobConstant.java
  • apps/transcoder/src/main/java/com/ott/transcoder/exception/TranscodeErrorCode.java
  • apps/transcoder/src/main/java/com/ott/transcoder/exception/TranscodeException.java
  • apps/transcoder/src/main/java/com/ott/transcoder/exception/retryable/RetryableException.java
  • apps/transcoder/src/main/java/com/ott/transcoder/exception/retryable/StorageException.java
  • apps/transcoder/src/main/java/com/ott/transcoder/inspection/DiskSpaceGuard.java
  • apps/transcoder/src/main/java/com/ott/transcoder/storage/S3VideoStorage.java
  • apps/transcoder/src/main/resources/application.yml
🚧 Files skipped from review as they are similar to previous changes (2)
  • apps/transcoder/src/main/resources/application.yml
  • apps/transcoder/src/main/java/com/ott/transcoder/exception/retryable/StorageException.java

Comment on lines +65 to +90
try (Stream<Path> fileStream = Files.walk(localDir)) {
    List<Path> fileList = fileStream.filter(Files::isRegularFile).toList();

    for (Path file : fileList) {
        String relativePath = localDir.relativize(file).toString().replace("\\", "/");
        String s3Key = destinationPrefix + "/" + relativePath;
        String contentType = resolveContentType(file);

        s3Client.putObject(
            PutObjectRequest.builder()
                .bucket(bucket)
                .key(s3Key)
                .contentType(contentType)
                .build(),
            RequestBody.fromFile(file)
        );

        log.debug("S3 업로드 - {}, contentType: {}", s3Key, contentType);
    }
} catch (IOException e) {
    throw new StorageException(TranscodeErrorCode.STORAGE_FAILED,
        "S3 업로드 중 파일 탐색 실패 - " + localDir, e);
} catch (SdkException e) {
    throw new StorageException(TranscodeErrorCode.STORAGE_FAILED,
        "S3 업로드 실패 - bucket: " + bucket + ", prefix: " + destinationPrefix, e);
}

⚠️ Potential issue | 🟠 Major

Uploading directly to the final prefix can leave a broken HLS set behind on retry.

Starting at line 70, the code writes straight to the final path, and lines 87-89 wrap failures in a retryable exception. If the job fails after only some files have been uploaded, incomplete objects remain under the same prefix and can mix with the output of the next retry. For HLS, where integrity matters at the level of the whole file set, the flow needs to either upload to a temporary prefix and expose it only on completion, or clean up the already-uploaded objects on failure.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/transcoder/src/main/java/com/ott/transcoder/storage/S3VideoStorage.java`
around lines 65 - 90, The code uploads files directly to destinationPrefix
(using s3Client.putObject with s3Key built from destinationPrefix) which can
leave a partial HLS set on failure; change the upload flow to first upload to a
temporary prefix (e.g., tempPrefix = destinationPrefix + "/.tmp/" + uuid) by
writing files using the same s3Client.putObject logic against that tempPrefix
(adjust localDir.relativize -> temp key), then once all files succeed perform an
atomic promotion step (copy objects from tempPrefix to destinationPrefix and
delete temp objects) or write a manifest/marker object at destinationPrefix only
after successful uploads; ensure failures trigger cleanup of uploaded temp
objects (in catch/finally) so no partial artifacts remain.
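The key layout for the staged-upload flow suggested above could be planned roughly like this; `StagedUploadPlan` and its method names are hypothetical helpers, and the promotion step itself (CopyObject + DeleteObject per staged key) is omitted:

```java
import java.util.UUID;

public class StagedUploadPlan {

    // Staging area kept under the destination, separated by a random token,
    // e.g. "videos/hls/42/.tmp/<uuid>".
    public static String stagingPrefix(String destinationPrefix, String token) {
        return destinationPrefix + "/.tmp/" + token;
    }

    // Where a file is first uploaded.
    public static String stagedKey(String destinationPrefix, String token, String relativePath) {
        return stagingPrefix(destinationPrefix, token) + "/" + relativePath;
    }

    // Where it is promoted once every upload in the set has succeeded.
    public static String finalKey(String destinationPrefix, String relativePath) {
        return destinationPrefix + "/" + relativePath;
    }

    public static void main(String[] args) {
        String token = UUID.randomUUID().toString();
        System.out.println(stagedKey("videos/hls/42", token, "720p/index.m3u8"));
        System.out.println(finalKey("videos/hls/42", "720p/index.m3u8"));
    }
}
```

On failure, only keys under the staging prefix need cleanup, so a partially uploaded HLS set never becomes visible at the final prefix.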

Contributor Author

Once the command table is added and per-command retry logic is in place, partial success will become a valid state, so I'll address this in the next PR.

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/transcoder/src/main/java/com/ott/transcoder/inspection/DiskSpaceGuard.java`:
- Around line 24-30: The DiskSpaceGuard.check method currently multiplies
fileSize by multiplier (field multiplier) without validating multiplier, so a
configured value <= 0 bypasses the disk-space check; before computing
requiredSpace in DiskSpaceGuard.check, assert multiplier > 0 and if not throw a
StorageException (introduce or use an appropriate TranscodeErrorCode such as
INVALID_DISK_MULTIPLIER) to fail fast and prevent the guard being disabled by
misconfiguration.
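A minimal sketch of the fail-fast check this prompt asks for; the real `DiskSpaceGuard` uses the project's `StorageException` and `TranscodeErrorCode`, so plain Java types stand in here and all names are illustrative:

```java
import java.io.File;

public class DiskSpaceGuardSketch {

    private final double multiplier;

    public DiskSpaceGuardSketch(double multiplier) {
        this.multiplier = multiplier;
    }

    // Rejects a misconfigured multiplier instead of silently disabling
    // the guard (fileSize * 0 would make every check pass).
    public long requiredSpace(long fileSize) {
        if (multiplier <= 0) {
            throw new IllegalStateException(
                "disk-space multiplier must be > 0 but was " + multiplier);
        }
        return (long) (fileSize * multiplier);
    }

    public boolean hasEnoughSpace(File workDir, long fileSize) {
        return workDir.getUsableSpace() >= requiredSpace(fileSize);
    }
}
```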

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 87784b42-2aba-45ca-b52b-2ccedd733a03

📥 Commits

Reviewing files that changed from the base of the PR and between a8c9fc8 and 13f5320.

📒 Files selected for processing (2)
  • apps/transcoder/src/main/java/com/ott/transcoder/exception/TranscodeErrorCode.java
  • apps/transcoder/src/main/java/com/ott/transcoder/inspection/DiskSpaceGuard.java
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/transcoder/src/main/java/com/ott/transcoder/exception/TranscodeErrorCode.java

Collaborator

@marulog marulog left a comment


Great work 👍

 * @param resolutionList the list of resolutions to include
 * @return path to the generated master.m3u8
 */
public Path generate(Path outputDir, List<Resolution> resolutionList) {
Collaborator


A thought while reading: the functions tied to this pipeline are the master URL generator and the Planner.
Would keeping them in a single file be a bad idea?
Their responsibilities differ, but they both strike me as utility functions used by this pipeline.

If pipelines for thumbnails, subtitles, and so on are added later and each additional
feature also gets its own file, I personally think it will become hard to navigate!

Collaborator

@marulog marulog left a comment


Great work~

@phonil phonil merged commit c1023af into develop Mar 11, 2026
1 check passed
@phonil phonil deleted the OT-240-feature/transcode-design-implement branch April 4, 2026 09:39

Labels

feat: new feature implementation

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[OT-240] [FEAT]: Add per-command transcoding behavior

2 participants