
[OT-123][FEAT]: Separate common tasks and implement the transcoding pipeline #89

Merged
phonil merged 13 commits into develop from
OT-123-feature/transcode-task
Mar 3, 2026

Conversation

@phonil
Contributor

@phonil phonil commented Mar 1, 2026

Please read the Notion page below before reading the code!

https://www.notion.so/3-1-2-3162753972b7808b8a17fd8a166898f3

📝 Work done

Describe the work done in this PR.

  • Added Validate/Analyze steps inside the Inspector
  • Added a transcode execution-planning stage (basic implementation)
  • Added an orchestration class between the listener and the pipeline

📷 Screenshots

☑️ Checklist

Please go through the checklist.

  • Did the tests pass?
  • Did you resolve merge conflicts?
  • Did you file the issue?
  • Did you add the labels?

#️⃣ Related issue

ex) # issue number

closes #69

💬 Review requests

If there is anything you would particularly like reviewers to focus on, write it here.

ex) Is this way of handling exceptions okay? / Please look closely at ~~.

1. Please review the flow rather than the implementation details!

  • The fine-grained implementation is only at a "barely runs" level.
  • The details will therefore need to change.
  • The flow currently works end to end, from publishing a message to the queue through checking the job output.

2. Command

  • The pipeline class carries a Command prefix.
  • As I see it, a command is one unit of work on a video; it will eventually be classified into transcoding, thumbnails, subtitles, and so on.
  • Since only the transcoding job exists for now, Command is left unimplemented; I only added comments.
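For illustration, the Command abstraction sketched in the bullets above might eventually take a shape like the following. This is only a sketch: the PR intentionally leaves Command unimplemented, and the interface and class names here are hypothetical, not taken from the code.

```java
import java.nio.file.Path;

// Hypothetical sketch: one Command per unit of work on a video
// (transcode, thumbnail, subtitle, ...). Names are illustrative.
interface Command {
    String name();

    void execute(Path inputFile, Path workDir) throws Exception;
}

final class TranscodeCommand implements Command {
    @Override
    public String name() {
        return "transcode";
    }

    @Override
    public void execute(Path inputFile, Path workDir) {
        // Would delegate to the HLS transcoding pipeline here.
    }
}
```

With a shape like this, the orchestrator could iterate over a list of commands per video instead of hardcoding the transcoding step.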

Summary by CodeRabbit

Release notes

  • New features

    • Added a video transcoding system supporting adaptive-bitrate streaming (multi-resolution: 360p, 720p, 1080p)
    • Added tracking of user onboarding completion
  • Improvements

    • Improved authentication cookie management and CORS support
    • Expanded the admin and user authentication API documentation

@phonil phonil self-assigned this Mar 1, 2026
@phonil phonil added the feat (new feature implementation) label Mar 1, 2026
@coderabbitai

coderabbitai Bot commented Mar 1, 2026

Walkthrough

The transcoding pipeline was redesigned around a JobOrchestrator, with file validation, probing, resolution planning, and FFmpeg execution split into separate components. API authentication/authorization logic was also consolidated into CookieUtil, the CORS configuration and OpenAPI documentation were improved, and a field tracking member onboarding state was added to the domain.

Changes

Cohort / File(s) Summary
Transcoder core orchestration and validation
JobOrchestrator, Inspector, DiskSpaceGuard, FileValidator, StreamValidator
Consolidated the transcoding pipeline into JobOrchestrator and split the pre-flight checks (disk space, file existence, stream safety) into dedicated validation components.
Transcoder probing and metadata
ProbeResult, FfprobeExecutor, ProcessBuilderFfprobeExecutor
Encapsulated ffprobe output in a ProbeResult record and added a ProcessBuilder-based FFprobe implementation responsible for media metadata extraction.
Transcoder FFmpeg execution engine
FfmpegExecutor, ProcessBuilderFfmpegExecutor, TranscodeProfile
Abstracted FFmpeg execution behind an interface and added a ProcessBuilder-based implementation plus per-resolution encoding settings (TranscodeProfile).
Transcoder HLS pipeline and planning
CommandPipeline, HlsTranscodePipeline, TranscodePlanner, MasterPlaylistGenerator
Redesigned the pipeline API to take a pre-prepared input file and probe result, and switched to dynamic resolution planning (TranscodePlanner) with sequential profile execution.
Transcoder messaging and configuration
MessageListener, RabbitTranscodeListener, RabbitConfig, application.yml
Updated the message listener contract to declare Exception, changed RabbitTranscodeListener to delegate to JobOrchestrator, and added ffprobe settings to application.yml.
API-Admin authentication and OpenAPI documentation
AdminAuthApi, AdminAuthController, AdminLoginRequest, AdminLoginResponse, AdminTokenResponse
Added OpenAPI schema type annotations, consolidated cookie handling through CookieUtil, and fleshed out the documentation for the login, token-refresh, and logout endpoints.
API-User authentication and OAuth2 handling
AuthController, CustomOAuth2UserService, OAuth2SuccessHandler, KakaoAuthService
Consolidated cookie management through CookieUtil, changed new-member detection to be based on onboarding completion, and simplified the OAuth2 success handler's redirect logic.
API security configuration (Admin & User)
apps/api-admin/src/main/java/com/ott/api_admin/config/SecurityConfig, apps/api-user/src/main/java/com/ott/api_user/config/SecurityConfig
Enabled CORS for localhost and the production domains, tidied the authorization rules, and set sessions to stateless.
Shared security utilities and domain
CookieUtil, Member, V3__member_onboarding_completed.sql
Added CookieUtil, responsible for creating and deleting HTTP response cookies, added an onboardingCompleted field to the Member entity, and added the corresponding DB migration.
Transcoder storage documentation
VideoStorage
Cleaned up the class Javadoc by removing mentions of implementation details.

Sequence Diagram(s)

sequenceDiagram
    participant RabbitMQ
    participant RabbitTranscodeListener
    participant JobOrchestrator
    participant Inspector
    participant FileValidator
    participant FfprobeExecutor
    participant StreamValidator
    participant VideoStorage
    participant TranscodePlanner
    participant FfmpegExecutor
    participant MasterPlaylistGenerator

    RabbitMQ->>RabbitTranscodeListener: TranscodeMessage
    RabbitTranscodeListener->>JobOrchestrator: handle(message)

    JobOrchestrator->>JobOrchestrator: Create temporary work directory
    JobOrchestrator->>VideoStorage: download(sourceUrl, inputFile)

    JobOrchestrator->>Inspector: inspect(inputFile)
    Inspector->>FileValidator: validate(inputFile)
    FileValidator-->>Inspector: ✓ validated
    Inspector->>FfprobeExecutor: probe(inputFile)
    FfprobeExecutor-->>Inspector: ProbeResult
    Inspector->>StreamValidator: validate(probeResult)
    StreamValidator-->>Inspector: ✓ validated
    Inspector-->>JobOrchestrator: ProbeResult

    JobOrchestrator->>TranscodePlanner: plan(probeResult)
    TranscodePlanner-->>JobOrchestrator: List<TranscodeProfile>

    loop For each TranscodeProfile
        JobOrchestrator->>FfmpegExecutor: execute(inputFile, workDir, profile)
        FfmpegExecutor-->>JobOrchestrator: media.m3u8 path
    end

    JobOrchestrator->>MasterPlaylistGenerator: generate(resolutions)
    MasterPlaylistGenerator-->>JobOrchestrator: master.m3u8

    JobOrchestrator->>VideoStorage: upload(workDir)
    VideoStorage-->>JobOrchestrator: ✓ upload complete

    JobOrchestrator->>JobOrchestrator: Clean up temporary work directory

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • marulog
  • arlen02-01
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 38.60% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The PR title clearly summarizes the main changes and accurately reflects the core goal: separating common tasks and implementing the transcoding pipeline.
Linked Issues check ✅ Passed The PR satisfies the goals of linked issue #69 (task separation, pipeline composition, common-task development). The required components, including the Inspector, the orchestration class, and TranscodePlanner, are implemented.
Out of Scope Changes check ✅ Passed All code changes fall within the scope of the transcoding pipeline implementation. The API admin/user module changes are supporting work such as the CookieUtil abstraction and CORS configuration, improvements required for the pipeline.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 7

🧹 Nitpick comments (7)
apps/transcoder/src/main/java/com/ott/transcoder/inspection/probe/execution/processbuilder/ProcessBuilderFfprobeExecutor.java (2)

66-68: Restore the interrupt status when handling InterruptedException

Catching InterruptedException without restoring the thread's interrupt status means interrupt-handling logic higher up the call stack can be silently skipped.

♻️ Restore the interrupt status
-} catch (IOException | InterruptedException e) {
+} catch (IOException e) {
+    throw new RuntimeException("ffprobe 실행 실패 - input: " + inputFile, e);
+} catch (InterruptedException e) {
+    Thread.currentThread().interrupt();
     throw new RuntimeException("ffprobe 실행 실패 - input: " + inputFile, e);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/inspection/probe/execution/processbuilder/ProcessBuilderFfprobeExecutor.java`
around lines 66 - 68, The catch block in ProcessBuilderFfprobeExecutor that
handles "IOException | InterruptedException e" must restore the thread interrupt
status when InterruptedException occurs; split the multi-catch into separate
catches (one for IOException, one for InterruptedException) or detect
InterruptedException, call Thread.currentThread().interrupt() to restore the
interrupt flag, then rethrow the RuntimeException with the original exception as
the cause (preserving the existing message "ffprobe 실행 실패 - input: " +
inputFile) so upstream interrupt handling can occur.

47-68: Potential deadlock from the process output read ordering

Calling process.getInputStream().readAllBytes() before waitFor() can deadlock: if the stdout buffer fills up, the child process blocks. Typical ffprobe output is small, so the problem is rare, but it can be a risk for files with a large number of streams.

♻️ Read the output on a separate thread
 ProcessBuilder pb = new ProcessBuilder(command);
 pb.redirectErrorStream(true);
 Process process = pb.start();

-String output = new String(process.getInputStream().readAllBytes());
+// Collect the output on a separate thread (avoids stdout buffer overflow)
+CompletableFuture<String> outputFuture = CompletableFuture.supplyAsync(() -> {
+    try {
+        return new String(process.getInputStream().readAllBytes());
+    } catch (IOException e) {
+        return "";
+    }
+});

 boolean finished = process.waitFor(2, TimeUnit.MINUTES);
 if (!finished) {
     process.destroyForcibly();
     throw new RuntimeException("ffprobe 타임아웃 - input: " + inputFile);
 }
+
+String output = outputFuture.get(10, TimeUnit.SECONDS);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/inspection/probe/execution/processbuilder/ProcessBuilderFfprobeExecutor.java`
around lines 47 - 68, The current implementation in
ProcessBuilderFfprobeExecutor calls process.getInputStream().readAllBytes()
before waiting for process termination which can deadlock if the child fills
stdout; instead start a background reader to continuously consume the process
output (or use an Executor/CompletableFuture) and collect it into a StringBuffer
before or while waiting, then call process.waitFor(2, TimeUnit.MINUTES), handle
timeout by destroyForcibly(), and after the process ends get the collected
output to pass to parseJson(input/output) and to include in error messages;
ensure InterruptedException/IOException handling still wraps with "ffprobe 실행 실패
- input: " + inputFile.
apps/transcoder/src/main/java/com/ott/transcoder/JobOrchestrator.java (1)

66-79: Suggested improvement to cleanup exception handling

Ignoring the IOException from Files.deleteIfExists(path) makes delete failures hard to diagnose. A single failed delete should not abort the whole cleanup, but logging the failure count or paths helps with debugging.

♻️ Log delete failures
 Files.walk(workDir)
         .sorted(Comparator.reverseOrder())
         .forEach(path -> {
-            try { Files.deleteIfExists(path); } catch (IOException ignored) {}
+            try {
+                Files.deleteIfExists(path);
+            } catch (IOException e) {
+                log.debug("파일 삭제 실패 - path: {}, error: {}", path, e.getMessage());
+            }
         });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/transcoder/src/main/java/com/ott/transcoder/JobOrchestrator.java` around
lines 66 - 79, In cleanUp(Path workDir) change the current swallowing of
IOException inside the Files.walk(...).forEach lambda to record and log
individual delete failures: wrap Files.deleteIfExists(path) in a try/catch that
on exception increments a shared counter (e.g., AtomicInteger failedDeletes) and
logs a warn including the failed path and exception (use log.warn("Failed to
delete {}:", path, e)); after the walk completes, log a summary (e.g., number of
failed deletes) and include that summary in the existing success/failure logs;
keep using Files.walk and Comparator.reverseOrder() but avoid suppressing
exceptions silently.
apps/transcoder/src/main/java/com/ott/transcoder/pipeline/hls/TranscodePlanner.java (1)

115-120: Possible inconsistency in formatBitrate() output

Bitrates below 1 Mbps (e.g., 999,000 bps) are returned as "999000", while values of 1 Mbps and above come back in the "2400k" form. FFmpeg accepts both, but for consistency it is better to use the k suffix for all values.

♻️ Consistent bitrate format
 private String formatBitrate(long bps) {
-    if (bps >= 1_000_000) {
-        return (bps / 1_000) + "k";
-    }
-    return bps + "";
+    // FFmpeg prefers k units; values below 1k stay as raw bps
+    if (bps >= 1_000) {
+        return (bps / 1_000) + "k";
+    }
+    return bps + "";
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/pipeline/hls/TranscodePlanner.java`
around lines 115 - 120, formatBitrate currently returns raw bps for values under
1_000_000 and "Nk" for larger values, causing inconsistent formats; change
formatBitrate to always return a kilobit string with a "k" suffix (convert
bits/sec to kbps) so outputs like 999000 become "999k" and 2400000 becomes
"2400k"; perform the conversion using appropriate rounding (e.g., Math.round(bps
/ 1000.0) or integer division with bias) and update the single method
formatBitrate to build and return the unified "<value>k" string.
apps/transcoder/src/main/java/com/ott/transcoder/ffmpeg/execution/processbuilder/ProcessBuilderFfmpegExecutor.java (3)

73: Recommend making the timeout configurable

The 30-minute timeout is hardcoded. Injecting it via @Value, like segmentDuration, would let it be tuned per environment.

♻️ Suggested change

Add the configuration field:

    @Value("${transcoder.ffmpeg.timeout-minutes:30}")
    private int timeoutMinutes;

Update the call site:

-        boolean finished = process.waitFor(30, TimeUnit.MINUTES);
+        boolean finished = process.waitFor(timeoutMinutes, TimeUnit.MINUTES);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/ffmpeg/execution/processbuilder/ProcessBuilderFfmpegExecutor.java`
at line 73, Add a configurable timeout for FFmpeg by injecting a new `@Value`
field (e.g., private int timeoutMinutes with default 30) into the
ProcessBuilderFfmpegExecutor class and replace the hardcoded 30 in the call that
sets finished (the line using process.waitFor(30, TimeUnit.MINUTES)) to use
TimeUnit.MINUTES and the injected timeoutMinutes variable; ensure the injected
property key matches the suggestion (transcoder.ffmpeg.timeout-minutes) and that
the variable is used wherever the fixed 30-minute wait is currently applied.

76-80: Limited debugging information on failure

FFmpeg output is logged only at DEBUG level, which can make production failures hard to diagnose. Consider appending the last N lines of output to the exception message on failure, or logging the output at WARN/ERROR level when a failure occurs.
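One way to keep the last lines of FFmpeg output available for error reporting, without buffering the whole stream, is a small bounded buffer filled while the output is streamed to the DEBUG log. A sketch only; the class name and wiring are assumptions, not part of this PR.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: retain only the last N output lines so a failure message can
// include recent FFmpeg output. Name (TailBuffer) is hypothetical.
final class TailBuffer {
    private final int capacity;
    private final Deque<String> lines = new ArrayDeque<>();

    TailBuffer(int capacity) {
        this.capacity = capacity;
    }

    void add(String line) {
        if (lines.size() == capacity) {
            lines.removeFirst(); // drop the oldest line
        }
        lines.addLast(line);
    }

    String joined() {
        return String.join("\n", lines);
    }
}
```

The executor's read loop would call add(line) next to the existing log.debug call, and the buffer's joined() output could be appended to the RuntimeException message in the timeout and non-zero-exit branches.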

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/ffmpeg/execution/processbuilder/ProcessBuilderFfmpegExecutor.java`
around lines 76 - 80, When FFmpeg fails in ProcessBuilderFfmpegExecutor (where
the RuntimeException is thrown for non‑zero exit codes and timeouts), include
FFmpeg's recent output in the failure path: capture the process output buffer
(stdout/stderr) and either append the last N lines to the RuntimeException
message for the timeout and non‑zero exit cases (include resolutionKey and
exitCode) or emit a WARN/ERROR log with that buffered output before throwing;
update the exception/logging in the timeout branch and the exitCode != 0 branch
so the final error message contains the relevant FFmpeg output for debugging.

46-57: Collection variable name needs the List suffix

Per the coding guidelines, collection variable names should carry a List suffix.

♻️ Suggested change
-        List<String> command = List.of(
+        List<String> commandList = List.of(
                 ffmpegPath, "-i", inputFile.toString(),
                 "-vf", "scale=-2:" + profile.height(),
                 "-c:v", profile.videoCodec(), "-preset", profile.preset(),
                 "-c:a", profile.audioCodec(), "-b:a", profile.audioBitrate(),
                 "-b:v", profile.videoBitrate(),
                 "-f", "hls",
                 "-hls_time", String.valueOf(segmentDuration),
                 "-hls_list_size", "0",
                 "-hls_segment_filename", segmentPattern,
                 playlistPath.toString()
         );
 
-        log.info("FFmpeg 실행 - resolution: {}, command: {}", resolutionKey, String.join(" ", command));
+        log.info("FFmpeg 실행 - resolution: {}, command: {}", resolutionKey, String.join(" ", commandList));
 
-        ProcessBuilder processBuilder = new ProcessBuilder(command);
+        ProcessBuilder processBuilder = new ProcessBuilder(commandList);

As per coding guidelines, "Collection variable names with List suffix".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/ffmpeg/execution/processbuilder/ProcessBuilderFfmpegExecutor.java`
around lines 46 - 57, Rename the collection variable `command` to a
`List`-suffixed name (e.g., `commandList`) in ProcessBuilderFfmpegExecutor to
follow the "collection variable names with List suffix" guideline: update the
declaration List<String> command = List.of(...) to List<String> commandList =
List.of(...) and adjust all subsequent references within the class/method (e.g.,
any uses in methods that build or execute the ProcessBuilder) to use
`commandList` so compilation and behavior remain unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/transcoder/src/main/java/com/ott/transcoder/ffmpeg/execution/processbuilder/ProcessBuilderFfmpegExecutor.java`:
- Around line 64-77: The code in ProcessBuilderFfmpegExecutor that starts the
Process (processBuilder.start()) must ensure the Process is always cleaned up if
an exception occurs while reading output or waiting; wrap the process lifecycle
in try { start, read from process.getInputStream() } finally { ensure process
streams are closed and call process.destroyForcibly() if still alive and
waitFor() to reap it } so that any IOException during the BufferedReader loop
cannot leave a zombie process; update the block around process, reader, and
process.waitFor(...) to use try/finally (or try-with-resources for streams) and
perform safe destroy/wait in the finally.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/inspection/probe/execution/processbuilder/ProcessBuilderFfprobeExecutor.java`:
- Around line 91-101: The code may NPE when format is null: guard uses of the
local variable format in ProcessBuilderFfprobeExecutor (the block computing
duration and videoBitrate) by checking if format != null before calling
format.has("duration") or format.path("bit_rate"); compute duration = (format !=
null && format.has("duration")) ? format.get("duration").asDouble() : 0.0 and
set videoBitrate = (videoStream.has("bit_rate")) ?
videoStream.get("bit_rate").asLong() : ((format != null) ?
format.path("bit_rate").asLong(0) : 0L) so
parseFps(videoStream.path("r_frame_rate").asText("0/1")) stays unchanged but all
accesses to format are safe.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/inspection/validation/DiskSpaceGuard.java`:
- Around line 23-25: Implement DiskSpaceGuard.check(Path originPath) to compute
the origin file size (e.g., Files.size(originPath)), multiply by the class
multiplier (the multiplier field used for required space calculation), obtain
available/usable space for the origin path's FileStore
(Files.getFileStore(originPath).getUsableSpace()), compare usable >= required,
and throw a clear runtime exception (e.g., InsufficientDiskSpaceException or
IllegalStateException) when usable is insufficient; ensure you handle
IOExceptions from Files APIs and treat them as failure to validate (propagate or
wrap with the same exception) so callers cannot silently proceed.
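The disk-space check described above could be sketched as follows. The class shape, the `multiplier` field, and the exception types are assumptions based on the prompt, not the PR's actual code.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the suggested DiskSpaceGuard.check implementation.
final class DiskSpaceGuard {
    private final long multiplier;

    DiskSpaceGuard(long multiplier) {
        this.multiplier = multiplier;
    }

    // Pure comparison, kept separate so it is easy to test.
    static boolean hasEnoughSpace(long fileSizeBytes, long multiplier, long usableBytes) {
        return usableBytes >= fileSizeBytes * multiplier;
    }

    void check(Path originPath) {
        try {
            long fileSize = Files.size(originPath);
            long usable = Files.getFileStore(originPath).getUsableSpace();
            if (!hasEnoughSpace(fileSize, multiplier, usable)) {
                throw new IllegalStateException(
                        "Insufficient disk space: need " + fileSize * multiplier
                                + " bytes, usable " + usable);
            }
        } catch (IOException e) {
            // Failure to measure is treated as a validation failure, not a pass.
            throw new UncheckedIOException("Disk space check failed for " + originPath, e);
        }
    }
}
```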

In
`@apps/transcoder/src/main/java/com/ott/transcoder/inspection/validation/FileValidator.java`:
- Around line 117-120: The RIFF-only check in FileValidator (the block that
inspects the byte[] header) is too broad; update the condition to require at
least 12 bytes and verify both the "RIFF" magic at header[0..3] and the form
type "AVI " at header[8..11] before returning "AVI" (i.e., check header length
>= 12 and that header[8]=='A', header[9]=='V', header[10]=='I', header[11]=='
'), keeping comparisons appropriate for the header byte[] in Java.
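A minimal sketch of the stricter RIFF/AVI check described above (class and method names are hypothetical): require at least 12 bytes, the "RIFF" magic at bytes 0-3, and the "AVI " form type at bytes 8-11, so WAV and other RIFF containers no longer match.

```java
// Stricter check: "RIFF" magic plus the "AVI " form type.
final class AviHeader {
    static boolean isAvi(byte[] header) {
        return header != null && header.length >= 12
                && header[0] == 'R' && header[1] == 'I' && header[2] == 'F' && header[3] == 'F'
                && header[8] == 'A' && header[9] == 'V' && header[10] == 'I' && header[11] == ' ';
    }
}
```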

In
`@apps/transcoder/src/main/java/com/ott/transcoder/inspection/validation/StreamValidator.java`:
- Around line 15-22: The comment lists "6. 손상 감지 (메타데이터 불완전)" but the
StreamValidator.validate method does not check for corrupted/incomplete
metadata; update the code to match the comment or vice versa. Either remove the
item from the class-level comment or, preferably, implement a metadata-integrity
check inside StreamValidator.validate: verify required fields (e.g., codec,
width, height, duration, frameRate) are present and within sane ranges, detect
null/NaN/zero or wildly inconsistent values, and return/throw the same
validation failure path used by other checks (so locate StreamValidator.validate
and add the check before other validations) so the validator reports
"corrupted/incomplete metadata" consistently.
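A possible shape for the metadata-integrity check described above. The record fields here are assumed for illustration and will not match the real ProbeResult record exactly.

```java
// Hypothetical metadata sanity check: required fields present and in sane ranges.
record VideoMeta(String codec, int width, int height, double duration, double fps) {}

final class MetadataCheck {
    static boolean isComplete(VideoMeta meta) {
        return meta.codec() != null && !meta.codec().isBlank()
                && meta.width() > 0 && meta.height() > 0
                && meta.duration() > 0 && !Double.isNaN(meta.duration())
                && meta.fps() > 0;
    }
}
```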

In `@apps/transcoder/src/main/java/com/ott/transcoder/queue/MessageListener.java`:
- Line 11: The current method signature void listen(TranscodeMessage message)
throws Exception forces callers to treat all failures identically; change the
contract on MessageListener.listen to throw a domain-specific checked exception
(e.g. TranscodeException) or a narrow set of exceptions that differentiate
retryable vs non-retryable failures (for example TranscodeException for
business/retryable errors and ValidationException/NonRetryableException for
validation/state errors), and update any implementors to wrap/translate
IllegalStateException, UncheckedIOException, RuntimeException into the
appropriate domain exception so RabbitMQ retry logic can distinguish and act
accordingly.
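One possible shape for the domain exception suggested above. The TranscodeException name comes from the suggestion; the single retryable flag (instead of separate exception subclasses) is an assumption made for brevity.

```java
// Sketch: a checked domain exception that lets RabbitMQ retry logic
// distinguish retryable from non-retryable failures.
class TranscodeException extends Exception {
    private final boolean retryable;

    TranscodeException(String message, boolean retryable, Throwable cause) {
        super(message, cause);
        this.retryable = retryable;
    }

    boolean isRetryable() {
        return retryable;
    }
}
```

Implementors would wrap IllegalStateException, UncheckedIOException, and other runtime failures into this type, marking validation errors non-retryable and transient I/O errors retryable.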

In
`@apps/transcoder/src/main/java/com/ott/transcoder/queue/rabbit/RabbitTranscodeListener.java`:
- Around line 21-29: The listener method RabbitTranscodeListener.listen
currently lets exceptions propagate (triggering Spring AMQP requeue); change it
to catch expected business exceptions around
jobOrchestrator.handle(TranscodeMessage) and handle them locally: for
non-transient errors (invalid format, unsupported codec) log the error and
return without rethrowing so the message is considered consumed, for
transient/errors you want retried either rethrow or throw an
AmqpRejectAndDontRequeueException to send to a DLQ; alternatively document
adding a RetryTemplate/@Retryable or DLQ configuration if you prefer broker-side
handling instead of in-listener handling.


ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6d20882 and c9dfa02.

📒 Files selected for processing (23)
  • apps/transcoder/src/main/java/com/ott/transcoder/JobOrchestrator.java
  • apps/transcoder/src/main/java/com/ott/transcoder/config/RabbitConfig.java
  • apps/transcoder/src/main/java/com/ott/transcoder/ffmpeg/TranscodeProfile.java
  • apps/transcoder/src/main/java/com/ott/transcoder/ffmpeg/execution/FfmpegExecutor.java
  • apps/transcoder/src/main/java/com/ott/transcoder/ffmpeg/execution/processbuilder/ProcessBuilderFfmpegExecutor.java
  • apps/transcoder/src/main/java/com/ott/transcoder/inspection/Inspector.java
  • apps/transcoder/src/main/java/com/ott/transcoder/inspection/probe/ProbeResult.java
  • apps/transcoder/src/main/java/com/ott/transcoder/inspection/probe/execution/FfprobeExecutor.java
  • apps/transcoder/src/main/java/com/ott/transcoder/inspection/probe/execution/processbuilder/ProcessBuilderFfprobeExecutor.java
  • apps/transcoder/src/main/java/com/ott/transcoder/inspection/validation/DiskSpaceGuard.java
  • apps/transcoder/src/main/java/com/ott/transcoder/inspection/validation/FileValidator.java
  • apps/transcoder/src/main/java/com/ott/transcoder/inspection/validation/StreamValidator.java
  • apps/transcoder/src/main/java/com/ott/transcoder/pipeline/CommandPipeline.java
  • apps/transcoder/src/main/java/com/ott/transcoder/pipeline/hls/HlsTranscodePipeline.java
  • apps/transcoder/src/main/java/com/ott/transcoder/pipeline/hls/MasterPlaylistGenerator.java
  • apps/transcoder/src/main/java/com/ott/transcoder/pipeline/hls/TranscodePlanner.java
  • apps/transcoder/src/main/java/com/ott/transcoder/queue/MessageListener.java
  • apps/transcoder/src/main/java/com/ott/transcoder/queue/rabbit/RabbitTranscodeListener.java
  • apps/transcoder/src/main/java/com/ott/transcoder/storage/VideoStorage.java
  • apps/transcoder/src/main/java/com/ott/transcoder/transcode/.gitkeep
  • apps/transcoder/src/main/java/com/ott/transcoder/transcode/FfmpegExecutor.java
  • apps/transcoder/src/main/java/com/ott/transcoder/transcode/processbuilder/ProcessBuilderFfmpegExecutor.java
  • apps/transcoder/src/main/resources/application.yml
💤 Files with no reviewable changes (3)
  • apps/transcoder/src/main/java/com/ott/transcoder/transcode/FfmpegExecutor.java
  • apps/transcoder/src/main/java/com/ott/transcoder/transcode/processbuilder/ProcessBuilderFfmpegExecutor.java
  • apps/transcoder/src/main/java/com/ott/transcoder/storage/VideoStorage.java

Comment on lines +64 to +77

        Process process = processBuilder.start();

        try (BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                log.debug("[FFmpeg] {}", line);
            }
        }

        boolean finished = process.waitFor(30, TimeUnit.MINUTES);
        if (!finished) {
            process.destroyForcibly();
            throw new RuntimeException("FFmpeg 타임아웃 - resolution: " + resolutionKey);
        }

⚠️ Potential issue | 🟠 Major

Possible process resource leak

If an exception is thrown while reading the output, the Process is never cleaned up and a zombie process can be left behind. Wrap the block in try-finally so the process is terminated even on the exception path.

🔒️ Suggested fix
         Process process = processBuilder.start();
 
-        try (BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
-            String line;
-            while ((line = reader.readLine()) != null) {
-                log.debug("[FFmpeg] {}", line);
+        try {
+            try (BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
+                String line;
+                while ((line = reader.readLine()) != null) {
+                    log.debug("[FFmpeg] {}", line);
+                }
             }
-        }
 
-        boolean finished = process.waitFor(30, TimeUnit.MINUTES);
-        if (!finished) {
-            process.destroyForcibly();
-            throw new RuntimeException("FFmpeg 타임아웃 - resolution: " + resolutionKey);
+            boolean finished = process.waitFor(30, TimeUnit.MINUTES);
+            if (!finished) {
+                process.destroyForcibly();
+                throw new RuntimeException("FFmpeg 타임아웃 - resolution: " + resolutionKey);
+            }
+        } catch (Exception e) {
+            process.destroyForcibly();
+            throw e;
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/ffmpeg/execution/processbuilder/ProcessBuilderFfmpegExecutor.java`
around lines 64 - 77, The code in ProcessBuilderFfmpegExecutor that starts the
Process (processBuilder.start()) must ensure the Process is always cleaned up if
an exception occurs while reading output or waiting; wrap the process lifecycle
in try { start, read from process.getInputStream() } finally { ensure process
streams are closed and call process.destroyForcibly() if still alive and
waitFor() to reap it } so that any IOException during the BufferedReader loop
cannot leave a zombie process; update the block around process, reader, and
process.waitFor(...) to use try/finally (or try-with-resources for streams) and
perform safe destroy/wait in the finally.
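The fix described above amounts to a guard around the whole process lifecycle. A minimal standalone sketch, assuming a POSIX shell is available; the class name, timeout, and command are illustrative, not the project's actual executor:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.concurrent.TimeUnit;

public class GuardedProcessRunner {

    // Runs a command, draining stdout; guarantees the child is reaped
    // even if reading the output or waiting throws.
    public static int run(long timeoutMinutes, String... command)
            throws IOException, InterruptedException {
        Process process = new ProcessBuilder(command).redirectErrorStream(true).start();
        try {
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(process.getInputStream()))) {
                while (reader.readLine() != null) {
                    // drain output so the child cannot block on a full pipe
                }
            }
            if (!process.waitFor(timeoutMinutes, TimeUnit.MINUTES)) {
                throw new IllegalStateException("process timed out");
            }
            return process.exitValue();
        } finally {
            if (process.isAlive()) {
                process.destroyForcibly();
                process.waitFor(); // reap to avoid a zombie
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("exit=" + run(1, "sh", "-c", "echo ok"));
    }
}
```

The finally block only destroys the process when it is still alive, so the normal path pays no extra cost.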

Comment on lines +91 to +101
JsonNode format = root.get("format"); // null 가능성 존재

double duration = format.has("duration")
? format.get("duration").asDouble()
: 0.0;

double fps = parseFps(videoStream.path("r_frame_rate").asText("0/1"));

long videoBitrate = videoStream.has("bit_rate")
? videoStream.get("bit_rate").asLong()
: format.path("bit_rate").asLong(0);

⚠️ Potential issue | 🟠 Major

Possible NPE when format is null

The comment notes that root.get("format") can return null, yet format.has("duration") and format.path("bit_rate") are then accessed without a null check. Some media files may carry no format information at all.

🐛 Add a null check
 JsonNode format = root.get("format"); // null 가능성 존재

-double duration = format.has("duration")
+double duration = (format != null && format.has("duration"))
         ? format.get("duration").asDouble()
         : 0.0;

 double fps = parseFps(videoStream.path("r_frame_rate").asText("0/1"));

 long videoBitrate = videoStream.has("bit_rate")
         ? videoStream.get("bit_rate").asLong()
-        : format.path("bit_rate").asLong(0);
+        : (format != null ? format.path("bit_rate").asLong(0) : 0L);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
JsonNode format = root.get("format"); // null 가능성 존재
double duration = format.has("duration")
? format.get("duration").asDouble()
: 0.0;
double fps = parseFps(videoStream.path("r_frame_rate").asText("0/1"));
long videoBitrate = videoStream.has("bit_rate")
? videoStream.get("bit_rate").asLong()
: format.path("bit_rate").asLong(0);
JsonNode format = root.get("format"); // null 가능성 존재
double duration = (format != null && format.has("duration"))
? format.get("duration").asDouble()
: 0.0;
double fps = parseFps(videoStream.path("r_frame_rate").asText("0/1"));
long videoBitrate = videoStream.has("bit_rate")
? videoStream.get("bit_rate").asLong()
: (format != null ? format.path("bit_rate").asLong(0) : 0L);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/inspection/probe/execution/processbuilder/ProcessBuilderFfprobeExecutor.java`
around lines 91 - 101, The code may NPE when format is null: guard uses of the
local variable format in ProcessBuilderFfprobeExecutor (the block computing
duration and videoBitrate) by checking if format != null before calling
format.has("duration") or format.path("bit_rate"); compute duration = (format !=
null && format.has("duration")) ? format.get("duration").asDouble() : 0.0 and
set videoBitrate = (videoStream.has("bit_rate")) ?
videoStream.get("bit_rate").asLong() : ((format != null) ?
format.path("bit_rate").asLong(0) : 0L) so
parseFps(videoStream.path("r_frame_rate").asText("0/1")) stays unchanged but all
accesses to format are safe.
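The null-guard pattern itself can be shown without Jackson; here a plain Map stands in for the ffprobe JSON tree (an assumption purely for illustration), with the absent "format" section being exactly the NPE case flagged above:

```java
import java.util.Map;

public class NullSafeMetadata {

    // The "format" section may be missing entirely; default to 0.0 in that case
    // instead of dereferencing a null section.
    public static double duration(Map<String, Map<String, String>> root) {
        Map<String, String> format = root.get("format"); // may be null
        if (format == null || !format.containsKey("duration")) {
            return 0.0;
        }
        return Double.parseDouble(format.get("duration"));
    }

    public static void main(String[] args) {
        System.out.println(duration(Map.of()));                                   // no format section
        System.out.println(duration(Map.of("format", Map.of("duration", "12.5"))));
    }
}
```

With Jackson the same effect is achieved by guarding every access to the nullable node, as in the suggested diff.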

Comment on lines +23 to +25
public void check(Path originPath) {

}

⚠️ Potential issue | 🟠 Major

The disk-space check is currently a no-op.

check(...) on lines 23-25 is empty, so it always passes. In this state a low-disk condition cannot be caught up front and will surface as a mid-download/mid-transcode failure instead. At minimum, compare the required space (origin size × multiplier) against the usable space and throw an explicit failure.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/inspection/validation/DiskSpaceGuard.java`
around lines 23 - 25, Implement DiskSpaceGuard.check(Path originPath) to compute
the origin file size (e.g., Files.size(originPath)), multiply by the class
multiplier (the multiplier field used for required space calculation), obtain
available/usable space for the origin path's FileStore
(Files.getFileStore(originPath).getUsableSpace()), compare usable >= required,
and throw a clear runtime exception (e.g., InsufficientDiskSpaceException or
IllegalStateException) when usable is insufficient; ensure you handle
IOExceptions from Files APIs and treat them as failure to validate (propagate or
wrap with the same exception) so callers cannot silently proceed.

Comment on lines +117 to +120
// AVI: "RIFF"
if (header[0] == 'R' && header[1] == 'I' && header[2] == 'F' && header[3] == 'F') {
return "AVI";
}

⚠️ Potential issue | 🟡 Minor

The AVI magic-byte check is too broad.

Right now any file starting with RIFF is classified as AVI, so non-video RIFF-family formats can be misdetected. It is safer to also verify the form type ("AVI " at offset 8~11) in addition to RIFF.

🔧 Suggested fix
-        // AVI: "RIFF"
-        if (header[0] == 'R' && header[1] == 'I' && header[2] == 'F' && header[3] == 'F') {
-            return "AVI";
-        }
+        // AVI: "RIFF" + form type "AVI " (offset 8~11)
+        if (header[0] == 'R' && header[1] == 'I' && header[2] == 'F' && header[3] == 'F'
+                && header[8] == 'A' && header[9] == 'V' && header[10] == 'I' && header[11] == ' ') {
+            return "AVI";
+        }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/inspection/validation/FileValidator.java`
around lines 117 - 120, The RIFF-only check in FileValidator (the block that
inspects the byte[] header) is too broad; update the condition to require at
least 12 bytes and verify both the "RIFF" magic at header[0..3] and the form
type "AVI " at header[8..11] before returning "AVI" (i.e., check header length
>= 12 and that header[8]=='A', header[9]=='V', header[10]=='I', header[11]=='
'), keeping comparisons appropriate for the header byte[] in Java.
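The tightened check can be sketched as a small sniffer; the class and helper names are illustrative:

```java
import java.nio.charset.StandardCharsets;

public class MagicBytes {

    // Reports AVI only when both the RIFF magic (offset 0) and the
    // "AVI " form type (offset 8) match, so other RIFF containers
    // such as WAV are not misdetected.
    public static String detect(byte[] header) {
        if (header == null || header.length < 12) {
            return "UNKNOWN";
        }
        if (matches(header, 0, "RIFF") && matches(header, 8, "AVI ")) {
            return "AVI";
        }
        return "UNKNOWN";
    }

    private static boolean matches(byte[] header, int offset, String magic) {
        byte[] expected = magic.getBytes(StandardCharsets.US_ASCII);
        for (int i = 0; i < expected.length; i++) {
            if (header[offset + i] != expected[i]) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        byte[] avi = "RIFF\0\0\0\0AVI LIST".getBytes(StandardCharsets.US_ASCII);
        byte[] wav = "RIFF\0\0\0\0WAVEfmt ".getBytes(StandardCharsets.US_ASCII);
        System.out.println(detect(avi)); // AVI
        System.out.println(detect(wav)); // UNKNOWN
    }
}
```

The 12-byte length guard also protects against truncated headers, which the original single-magic check did not consider.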

Comment on lines +15 to +22
* 검증 항목:
* 1. 비디오 스트림 존재 (오디오만 있는 파일 차단)
* 2. 비디오 코덱 지원 여부
* 3. duration 유효성 (0초, 비정상적으로 긴 영상)
* 4. 해상도 범위 (너무 작거나 너무 큰 영상)
* 5. 프레임레이트 이상 감지
* 6. 손상 감지 (메타데이터 불완전)
*/

⚠️ Potential issue | 🟡 Minor

The documented validation items do not match the implementation.

Lines 15-22 list item 6, corruption detection (incomplete metadata), but the current validate implementation (lines 48-85) does not cover it. Either align the comment with the actual behavior or add the check.

Also applies to: 48-85

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/inspection/validation/StreamValidator.java`
around lines 15 - 22, The comment lists "6. 손상 감지 (메타데이터 불완전)" but the
StreamValidator.validate method does not check for corrupted/incomplete
metadata; update the code to match the comment or vice versa. Either remove the
item from the class-level comment or, preferably, implement a metadata-integrity
check inside StreamValidator.validate: verify required fields (e.g., codec,
width, height, duration, frameRate) are present and within sane ranges, detect
null/NaN/zero or wildly inconsistent values, and return/throw the same
validation failure path used by other checks (so locate StreamValidator.validate
and add the check before other validations) so the validator reports
"corrupted/incomplete metadata" consistently.

public interface MessageListener {

void listen(TranscodeMessage message);
void listen(TranscodeMessage message) throws Exception;

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# 1) MessageListener 인터페이스와 listen 메서드 정의 확인
echo "=== MessageListener 인터페이스 정의 ==="
find . -type f -name "MessageListener.java" | head -5
if [ -f "apps/transcoder/src/main/java/com/ott/transcoder/queue/MessageListener.java" ]; then
  cat -n "apps/transcoder/src/main/java/com/ott/transcoder/queue/MessageListener.java"
fi

# 2) MessageListener 구현체 찾기
echo -e "\n=== MessageListener 구현체 ==="
rg -t java "implements\s+MessageListener" -A 20 --max-count=10

# 3) 구현체에서 실제로 throw하는 예외 타입 확인
echo -e "\n=== listen 메서드 구현에서 throw하는 예외 ==="
rg -t java "void\s+listen\s*\(" -A 30 -B 2

# 4) 호출부에서 listen 메서드 호출 확인
echo -e "\n=== listen 메서드 호출 ==="
rg -t java "\.listen\s*\(" -B 2 -A 5 --max-count=10

# 5) 예외 처리 패턴 확인 (catch 문에서 특정 예외 타입 처리)
echo -e "\n=== listen 호출 주변 예외 처리 ==="
rg -t java "\.listen\s*\(" -B 5 -A 10 | grep -E "(catch|throws|Exception)" -B 2 -A 2

Repository: OpenTheTaste/backend

Length of output: 4076


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# 1) JobOrchestrator 인터페이스/클래스 찾기
echo "=== JobOrchestrator 정의 ==="
find . -type f -name "*JobOrchestrator*" -path "*/java/*"

# 2) handle 메서드 구현 확인
echo -e "\n=== JobOrchestrator.handle() 메서드 ==="
rg -t java "JobOrchestrator" -A 3 -B 3 --max-count=5

# 3) handle 메서드의 throw 선언
echo -e "\n=== handle 메서드 시그니처 ==="
rg -t java "void\s+handle\s*\(" -A 1 -B 2

# 4) 프로젝트의 커스텀 Exception 찾기
echo -e "\n=== 커스텀 Exception 클래스 ==="
find . -type f -name "*Exception.java" -path "*/java/*" | grep -E "(transcoder|domain)" | head -10

# 5) 실제로 throw 되는 예외 타입들
echo -e "\n=== throw하는 예외 패턴 ==="
rg -t java "throw\s+new\s+\w+Exception" apps/transcoder/ --max-count=20

# 6) RabbitTranscodeListener 전체 파일 확인
echo -e "\n=== RabbitTranscodeListener 전체 내용 ==="
if [ -f "apps/transcoder/src/main/java/com/ott/transcoder/queue/rabbit/RabbitTranscodeListener.java" ]; then
  cat -n "apps/transcoder/src/main/java/com/ott/transcoder/queue/rabbit/RabbitTranscodeListener.java"
fi

Repository: OpenTheTaste/backend

Length of output: 3498


🏁 Script executed:

#!/bin/bash
set -euo pipefail

# 1) JobOrchestrator 파일 크기 확인
echo "=== JobOrchestrator.java 파일 크기 ==="
wc -l apps/transcoder/src/main/java/com/ott/transcoder/JobOrchestrator.java

# 2) JobOrchestrator 파일 내용
echo -e "\n=== JobOrchestrator 파일 내용 ==="
cat -n apps/transcoder/src/main/java/com/ott/transcoder/JobOrchestrator.java

# 3) transcoder 모듈 내 모든 Exception 검색
echo -e "\n=== transcoder 모듈 내 Exception 관련 코드 ==="
rg -t java "Exception|throw|catch" apps/transcoder/src/main/java/ --max-count=30

# 4) JobOrchestrator와 RabbitTranscodeListener 간의 에러 처리 경로
echo -e "\n=== VideoStorage, DiskSpaceGuard 등의 handle 관련 메서드 ==="
rg -t java "public.*void.*\(.*TranscodeMessage" apps/transcoder/src/main/java/ -A 5

Repository: OpenTheTaste/backend

Length of output: 12641


Narrow the contract with domain-specific exceptions so the RabbitMQ retry logic is explicit.

throws Exception forces the caller (RabbitMQ) to treat every exception identically. The code actually throws a mix of IllegalStateException, UncheckedIOException, RuntimeException, and others, so retryability cannot be decided per case. Unifying these under a domain exception such as TranscodeException, or at least separating validation failures (IllegalStateException) from system errors (wrapped IOException), would improve maintainability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/transcoder/src/main/java/com/ott/transcoder/queue/MessageListener.java`
at line 11, The current method signature void listen(TranscodeMessage message)
throws Exception forces callers to treat all failures identically; change the
contract on MessageListener.listen to throw a domain-specific checked exception
(e.g. TranscodeException) or a narrow set of exceptions that differentiate
retryable vs non-retryable failures (for example TranscodeException for
business/retryable errors and ValidationException/NonRetryableException for
validation/state errors), and update any implementors to wrap/translate
IllegalStateException, UncheckedIOException, RuntimeException into the
appropriate domain exception so RabbitMQ retry logic can distinguish and act
accordingly.
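One possible shape for the narrowed contract, with hypothetical names (TranscodeException does not exist in the codebase yet, and which causes count as transient is an assumption):

```java
public class TranscodeExceptions {

    // Hypothetical domain exception: broker-facing code can branch on
    // retryability instead of treating every Exception the same way.
    public static class TranscodeException extends Exception {
        private final boolean retryable;

        public TranscodeException(String message, boolean retryable, Throwable cause) {
            super(message, cause);
            this.retryable = retryable;
        }

        public boolean isRetryable() {
            return retryable;
        }
    }

    // Translates low-level failures into the domain exception.
    public static TranscodeException translate(Exception e) {
        if (e instanceof IllegalStateException) {
            // validation/state errors: retrying the same message cannot succeed
            return new TranscodeException("validation failed", false, e);
        }
        // I/O and other system errors are assumed transient here
        return new TranscodeException("system error", true, e);
    }

    public static void main(String[] args) {
        System.out.println(translate(new IllegalStateException("bad codec")).isRetryable()); // false
        System.out.println(translate(new RuntimeException("network")).isRetryable());        // true
    }
}
```

The listener would then catch TranscodeException and requeue (or send to a DLQ) based on isRetryable().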

Comment on lines 21 to 29
@Override
@RabbitListener(queues = RabbitConfig.QUEUE_NAME)
public void listen(TranscodeMessage message) {
log.info("트랜스코딩 요청 수신 - mediaId: {}, originUrl: {}", message.mediaId(), message.originUrl());
try {
commandPipeline.execute(message.mediaId(), message.originUrl());
} catch (Exception e) {
// 예외를 삼켜 requeue를 방지 / DLQ 구성 후 AmqpRejectAndDontRequeueException으로 교체
log.error("트랜스코딩 처리 실패, 메시지 폐기 - mediaId: {}", message.mediaId(), e);
}
public void listen(TranscodeMessage message) throws Exception {
log.info("작업 요청 수신 - mediaId: {}, originUrl: {}", message.mediaId(), message.originUrl());

jobOrchestrator.handle(message);

log.info("작업 요청 처리 완료 - mediaId: {}, originUrl: {}", message.mediaId(), message.originUrl());
}

⚠️ Potential issue | 🟠 Major

Risk of infinite requeue when exceptions propagate

If a @RabbitListener method propagates an exception, Spring AMQP's default behavior nacks the message and puts it back on the queue. For permanent errors (invalid file format, unsupported codec, etc.) this can turn into an infinite retry loop.

Handle it in one of the following ways:

  1. Handle business exceptions inside the listener, log them, and complete message consumption
  2. Configure a Dead Letter Queue to isolate failed messages
  3. Cap the retry count with RetryTemplate or @Retryable
🔧 Suggested exception handling
 `@Override`
 `@RabbitListener`(queues = RabbitConfig.QUEUE_NAME)
-public void listen(TranscodeMessage message) throws Exception {
+public void listen(TranscodeMessage message) {
     log.info("작업 요청 수신 - mediaId: {}, originUrl: {}", message.mediaId(), message.originUrl());

-    jobOrchestrator.handle(message);
-
-    log.info("작업 요청 처리 완료 - mediaId: {}, originUrl: {}", message.mediaId(), message.originUrl());
+    try {
+        jobOrchestrator.handle(message);
+        log.info("작업 요청 처리 완료 - mediaId: {}, originUrl: {}", message.mediaId(), message.originUrl());
+    } catch (Exception e) {
+        log.error("작업 처리 실패 - mediaId: {}, error: {}", message.mediaId(), e.getMessage(), e);
+        // TODO: 실패 상태를 DB에 기록하거나 Dead Letter Queue로 전송
+    }
 }
📝 Committable suggestion


Suggested change
@Override
@RabbitListener(queues = RabbitConfig.QUEUE_NAME)
public void listen(TranscodeMessage message) {
log.info("트랜스코딩 요청 수신 - mediaId: {}, originUrl: {}", message.mediaId(), message.originUrl());
try {
commandPipeline.execute(message.mediaId(), message.originUrl());
} catch (Exception e) {
// 예외를 삼켜 requeue를 방지 / DLQ 구성 후 AmqpRejectAndDontRequeueException으로 교체
log.error("트랜스코딩 처리 실패, 메시지 폐기 - mediaId: {}", message.mediaId(), e);
}
public void listen(TranscodeMessage message) throws Exception {
log.info("작업 요청 수신 - mediaId: {}, originUrl: {}", message.mediaId(), message.originUrl());
jobOrchestrator.handle(message);
log.info("작업 요청 처리 완료 - mediaId: {}, originUrl: {}", message.mediaId(), message.originUrl());
}
`@Override`
`@RabbitListener`(queues = RabbitConfig.QUEUE_NAME)
public void listen(TranscodeMessage message) {
log.info("작업 요청 수신 - mediaId: {}, originUrl: {}", message.mediaId(), message.originUrl());
try {
jobOrchestrator.handle(message);
log.info("작업 요청 처리 완료 - mediaId: {}, originUrl: {}", message.mediaId(), message.originUrl());
} catch (Exception e) {
log.error("작업 처리 실패 - mediaId: {}, error: {}", message.mediaId(), e.getMessage(), e);
// TODO: 실패 상태를 DB에 기록하거나 Dead Letter Queue로 전송
}
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/transcoder/src/main/java/com/ott/transcoder/queue/rabbit/RabbitTranscodeListener.java`
around lines 21 - 29, The listener method RabbitTranscodeListener.listen
currently lets exceptions propagate (triggering Spring AMQP requeue); change it
to catch expected business exceptions around
jobOrchestrator.handle(TranscodeMessage) and handle them locally: for
non-transient errors (invalid format, unsupported codec) log the error and
return without rethrowing so the message is considered consumed, for
transient/errors you want retried either rethrow or throw an
AmqpRejectAndDontRequeueException to send to a DLQ; alternatively document
adding a RetryTemplate/@Retryable or DLQ configuration if you prefer broker-side
handling instead of in-listener handling.

@phonil phonil changed the title [FEAT]: 공통 작업 분리 및 트랜스코딩 파이프라인 구현 [OT-123][FEAT]: 공통 작업 분리 및 트랜스코딩 파이프라인 구현 Mar 1, 2026
Collaborator

@marulog marulog left a comment


Nice work~

profileList.add(profile);
}

if (profileList.isEmpty()) {
Collaborator


The Notion improvement notes describe keeping the original resolution, but the current code's approach (forced upscaling to the minimum resolution) seems more appropriate.

The Resolution enum currently has only three values, and the UI only allows watching at those fixed resolutions.
If a quality the service does not offer ends up stored, that is a problem, so it would also be good to add logic that rejects unsupported resolutions at the transcoding pipeline.

Contributor Author

@phonil phonil Mar 2, 2026


Sounds good! I've noted it in Notion~


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
apps/api-admin/src/main/java/com/ott/api_admin/auth/controller/AdminAuthController.java (1)

56-57: ⚠️ Potential issue | 🔴 Critical

The /back-office/reissue endpoint is exposed via permitAll(): token reissue, which requires authentication, is reachable without any credentials, which is a security risk.

In apps/api-admin/src/main/java/com/ott/api_admin/config/SecurityConfig.java lines 51-52, /back-office/reissue is configured with permitAll(). Token reissue should only be possible for users authenticated with a valid refresh token; allowing unauthenticated access contradicts the intent of a token-reissue endpoint.

  • /back-office/logout (lines 75-87) is correctly protected (the catch-all rule anyRequest().hasAnyRole("ADMIN", "EDITOR") requires the ADMIN/EDITOR role).
  • The /back-office/reissue configuration should be fixed to require authentication (e.g., move it under the anyRequest() rule or require authentication explicitly).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/auth/controller/AdminAuthController.java`
around lines 56 - 57, 현재 SecurityConfig에서 "/back-office/reissue"가 permitAll()으로
열려 있어 토큰 재발급이 무인증으로 가능하므로, 해당 경로의 보안 설정을 수정해 인증을 요구하도록 변경하세요: SecurityConfig 내
HttpSecurity 설정에서 antMatchers("/back-office/reissue").permitAll()을 찾아
antMatchers("/back-office/reissue").authenticated()로 바꾸거나, 더 엄격히 하려면
antMatchers("/back-office/reissue").hasAnyRole("ADMIN","EDITOR")로 변경해
AdminAuthController.reissue 엔드포인트가 반드시 인증된(또는 지정된 역할을 가진) 사용자만 접근하도록 만드세요; 변경 후
기존의 anyRequest().hasAnyRole("ADMIN","EDITOR") 규칙과 충돌이 없는지 확인하세요.
🧹 Nitpick comments (1)
apps/api-user/src/main/java/com/ott/api_user/config/SecurityConfig.java (1)

100-105: Hardcoding CORS origins is a maintenance/operations risk when switching environments.

Rather than fixing the values on lines 100-102 in code, externalizing them as a configuration list lets staging/production be switched without code changes.

♻️ Suggested refactor
-        config.setAllowedOriginPatterns(List.of(
-                "http://localhost:*",
-                "https://www.openthetaste.cloud"));
+        config.setAllowedOriginPatterns(allowedOriginPatternsList);
// 예: application.yml
// app:
//   allowed-origin-patterns:
//     - "http://localhost:*"
//     - "https://www.openthetaste.cloud"

Also applies to: 108-108

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/api-user/src/main/java/com/ott/api_user/config/SecurityConfig.java`
around lines 100 - 105, The hardcoded CORS origins in SecurityConfig (calls to
config.setAllowedOriginPatterns and related CORS settings) should be
externalized to configuration instead of being inlined; add a configuration
property (e.g., app.allowedOriginPatterns as a List in
application.yml/properties), bind it into SecurityConfig via `@Value` or a
`@ConfigurationProperties-backed` POJO, and replace the literal List.of(...) used
in config.setAllowedOriginPatterns(...) with the injected property; likewise
move any other hardcoded CORS lists (config.setAllowedMethods,
config.setAllowedHeaders) to configurable properties so environments
(staging/production) can be changed without code edits.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/api-admin/src/main/java/com/ott/api_admin/auth/dto/request/AdminLoginRequest.java`:
- Around line 16-21: The `@Schema` type strings on AdminLoginRequest's fields are
using "String" which violates OpenAPI 3.0; update the `@Schema` annotations for
the email and password fields to use lowercase types (e.g., type = "string") so
they conform to OpenAPI 3.0; specifically edit the `@Schema` on the email field
and the `@Schema` on the password field in the AdminLoginRequest class to replace
"String" with "string".

In
`@apps/api-admin/src/main/java/com/ott/api_admin/auth/dto/response/AdminLoginResponse.java`:
- Around line 19-23: The `@Schema` annotations on AdminLoginResponse use
non-standard OpenAPI types ("Long", "String"); update the annotations for
memberId and role: change memberId's type to "integer" and add format = "int64"
(or remove type and set schema = `@Schema`(type="integer", format="int64") as
supported), and change role's type to "string" (e.g., `@Schema`(type="string")),
keeping the descriptions and examples unchanged so Swagger/OpenAPI and client
generators interpret memberId and role correctly.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/auth/dto/response/AdminTokenResponse.java`:
- Around line 14-18: The `@Schema` type values in AdminTokenResponse are using
"String" (uppercase) which is not OpenAPI-compliant; update the `@Schema`
annotations on the accessToken and refreshToken fields in class
AdminTokenResponse to use lowercase "string" instead of "String" so the
generated OpenAPI schema and downstream client code are valid.

In `@apps/api-user/src/main/java/com/ott/api_user/config/SecurityConfig.java`:
- Around line 70-73: The current SecurityConfig uses
.anyRequest().hasRole("MEMBER") which blocks users who hold only EDITOR or ADMIN
roles; update the authorization rule in SecurityConfig so that endpoints that
should be accessible to elevated roles use .anyRequest().hasAnyRole("MEMBER",
"EDITOR", "ADMIN") or a more specific matcher (e.g., replace
.anyRequest().hasRole("MEMBER") with
.anyRequest().hasAnyRole("MEMBER","EDITOR","ADMIN") or add separate
antMatchers/authorizeHttpRequests rules) to align with api-admin's
.anyRequest().hasAnyRole("ADMIN","EDITOR") pattern and ensure ADMIN/EDITOR
accounts can access appropriate endpoints.

In
`@modules/common-security/src/main/java/com/ott/common/security/util/CookieUtil.java`:
- Around line 11-31: The CookieUtil methods addCookie and deleteCookie currently
set ResponseCookie.secure(false) which allows cookies to be sent over insecure
HTTP; change these to set the Secure flag for production by replacing
.secure(false) with a secure(true) usage or making the secure value
environment-configurable (e.g., use a boolean from configuration or an
isProduction() check) so that addCookie and deleteCookie send cookies with
Secure=true in production while still allowing non-secure in local tests via
config.

---

Outside diff comments:
In
`@apps/api-admin/src/main/java/com/ott/api_admin/auth/controller/AdminAuthController.java`:
- Around line 56-57: "/back-office/reissue" is currently open via permitAll() in
SecurityConfig, so token reissue is possible without authentication; change the
security configuration for this path to require authentication: in the
HttpSecurity setup inside SecurityConfig, find
antMatchers("/back-office/reissue").permitAll() and replace it with
antMatchers("/back-office/reissue").authenticated(), or, more strictly,
antMatchers("/back-office/reissue").hasAnyRole("ADMIN","EDITOR"), so that the
AdminAuthController.reissue endpoint is reachable only by authenticated users
(or the designated roles); after the change, verify there is no conflict with
the existing anyRequest().hasAnyRole("ADMIN","EDITOR") rule.

---

Nitpick comments:
In `@apps/api-user/src/main/java/com/ott/api_user/config/SecurityConfig.java`:
- Around line 100-105: The hardcoded CORS origins in SecurityConfig (calls to
config.setAllowedOriginPatterns and related CORS settings) should be
externalized to configuration instead of being inlined; add a configuration
property (e.g., app.allowedOriginPatterns as a List in
application.yml/properties), bind it into SecurityConfig via `@Value` or a
`@ConfigurationProperties-backed` POJO, and replace the literal List.of(...) used
in config.setAllowedOriginPatterns(...) with the injected property; likewise
move any other hardcoded CORS lists (config.setAllowedMethods,
config.setAllowedHeaders) to configurable properties so environments
(staging/production) can be changed without code edits.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c9dfa02 and ab3f069.

📒 Files selected for processing (18)
  • apps/api-admin/src/main/java/com/ott/api_admin/auth/controller/AdminAuthApi.java
  • apps/api-admin/src/main/java/com/ott/api_admin/auth/controller/AdminAuthController.java
  • apps/api-admin/src/main/java/com/ott/api_admin/auth/dto/request/AdminLoginRequest.java
  • apps/api-admin/src/main/java/com/ott/api_admin/auth/dto/response/AdminLoginResponse.java
  • apps/api-admin/src/main/java/com/ott/api_admin/auth/dto/response/AdminTokenResponse.java
  • apps/api-admin/src/main/java/com/ott/api_admin/config/SecurityConfig.java
  • apps/api-user/src/main/java/com/ott/api_user/auth/controller/AuthController.java
  • apps/api-user/src/main/java/com/ott/api_user/auth/oauth2/CustomOAuth2UserService.java
  • apps/api-user/src/main/java/com/ott/api_user/auth/oauth2/handler/OAuth2SuccessHandler.java
  • apps/api-user/src/main/java/com/ott/api_user/auth/service/KakaoAuthService.java
  • apps/api-user/src/main/java/com/ott/api_user/bookmark/controller/BookmarkAPI.java
  • apps/api-user/src/main/java/com/ott/api_user/config/SecurityConfig.java
  • apps/api-user/src/main/java/com/ott/api_user/member/service/MemberService.java
  • apps/transcoder/src/main/java/com/ott/transcoder/validate/.gitkeep
  • apps/transcoder/src/main/java/com/ott/transcoder/worker/.gitkeep
  • modules/common-security/src/main/java/com/ott/common/security/util/CookieUtil.java
  • modules/domain/src/main/java/com/ott/domain/member/domain/Member.java
  • modules/infra-db/src/main/resources/db/migration/V3__member_onboarding_completed.sql

Comment on lines +16 to 21
@Schema(type= "String",description = "관리자 이메일", example = "admin@ott.com")
private String email;

@NotBlank
@Schema(description = "비밀번호", example = "password123")
@Schema(type="String", description = "비밀번호", example = "password123")
private String password;

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

OpenAPI 3.0 @Schema type field specification uppercase lowercase string

💡 Result:

In OpenAPI 3.0, the Schema Object’s type value must be a string (not an array of types), and it uses the JSON Schema primitive type names, i.e. the valid values are:

  • string
  • number
  • integer
  • boolean
  • array
  • object

These type names are effectively case-sensitive in tooling/validators, so you should use the lowercase forms above (e.g., type: string, not type: String / STRING). [1], [2], [3]

If you’re setting this via Swagger annotations like @Schema(type=...), pass the same lowercase strings (e.g., @Schema(type = "string")). [1], [2]

Sources: OpenAPI 3.0.3 spec (data types + Schema Object type) [1], Swagger (OpenAPI 3.0 data types examples) [2], validation guidance noting case-sensitive comparison [3].


Fix the @Schema(type=...) values to lowercase per the OpenAPI 3.0 standard.

In the OpenAPI 3.0 specification, type must be one of the lowercase forms string, number, integer, boolean, array, object. "String" (uppercase) violates the standard and can cause tooling validation and compatibility problems.

🔧 Suggested fix
-    `@Schema`(type= "String",description = "관리자 이메일", example = "admin@ott.com")
+    `@Schema`(type = "string", description = "관리자 이메일", example = "admin@ott.com")
     private String email;

-    `@Schema`(type="String", description = "비밀번호", example = "password123")
+    `@Schema`(type = "string", description = "비밀번호", example = "password123")
     private String password;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/auth/dto/request/AdminLoginRequest.java`
around lines 16 - 21, The `@Schema` type strings on AdminLoginRequest's fields are
using "String" which violates OpenAPI 3.0; update the `@Schema` annotations for
the email and password fields to use lowercase types (e.g., type = "string") so
they conform to OpenAPI 3.0; specifically edit the `@Schema` on the email field
and the `@Schema` on the password field in the AdminLoginRequest class to replace
"String" with "string".
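Since the allowed set is small and case-sensitive, it can even be enforced mechanically; a tiny illustrative validator (the class name is hypothetical, not part of any Swagger tooling):

```java
import java.util.Set;

public class OpenApiTypeCheck {

    // OpenAPI 3.0 Schema Object allows exactly these lowercase type names.
    private static final Set<String> ALLOWED =
            Set.of("string", "number", "integer", "boolean", "array", "object");

    public static boolean isValidType(String type) {
        return ALLOWED.contains(type); // case-sensitive on purpose
    }

    public static void main(String[] args) {
        System.out.println(isValidType("string")); // true
        System.out.println(isValidType("String")); // false
    }
}
```

A check like this could run in a unit test over the annotation values to keep "String"/"Long" from slipping back in.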

Comment on lines +19 to 23
@Schema(type = "Long", description = "회원 ID", example = "1")
private Long memberId;

@Schema(description = "회원 역할", example = "ADMIN")
@Schema(type= "String", description = "회원 역할", example = "ADMIN")
private String role;

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

OpenAPI 3.x Schema Object type allowed values specification

💡 Result:

OpenAPI 3.0.x (Schema Object type)

  • type MUST be a string; multiple types via an array are not supported. [1]
  • The supported primitive/data types are based on JSON Schema (Wright Draft 00): string, number, integer, boolean, array, object. [2]
  • null is not a supported type; use nullable: true instead. [2]

OpenAPI 3.1.x (Schema Object type)

  • OpenAPI 3.1 Schema Objects are a superset of JSON Schema Draft 2020-12, and data types include: null, boolean, object, array, number, string, integer. [3]
  • Because it follows JSON Schema, type can be a string or an array of strings (to allow multiple types). [4]

Sources: OpenAPI 3.0.1 spec [1][2], OpenAPI 3.1.1 spec [3], JSON Schema reference on type [4].


🌐 Web query:

Swagger v3 @Schema type parameter allowed values Long String

💡 Result:

In swagger-annotations (OpenAPI/Swagger v3), @Schema(type = "...") must be an OpenAPI Schema “type” string, not a Java type like Long or String. The valid basic values are:

  • string
  • number
  • integer
  • boolean
  • array
  • object  [2]

The annotation itself states typemust be a valid type per the OpenAPI Specification.” [1]

So, for your cases:

  • Java Long → use:

    `@Schema`(type = "integer", format = "int64")

    (int64 is the OpenAPI format commonly used for 64-bit integers / long.) [2]

  • Java String → use:

    `@Schema`(type = "string")

    [2]


The type values in the @Schema annotations fall outside the OpenAPI standard type set.

type = "Long" and type = "String" are not values defined by the OpenAPI spec. Fix them to the standard types (integer + int64, string) to guarantee compatibility with Swagger doc generation and client code generators.

🔧 Suggested fix
-    `@Schema`(type = "Long", description = "회원 ID", example = "1")
+    `@Schema`(type = "integer", format = "int64", description = "회원 ID", example = "1")
     private Long memberId;

-    `@Schema`(type= "String", description = "회원 역할", example = "ADMIN")
+    `@Schema`(type = "string", description = "회원 역할", example = "ADMIN")
     private String role;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`apps/api-admin/src/main/java/com/ott/api_admin/auth/dto/response/AdminLoginResponse.java`
around lines 19 - 23, the @Schema annotations on AdminLoginResponse use
non-standard OpenAPI types ("Long", "String"); update the annotations for
memberId and role: change memberId's type to "integer" and add format = "int64"
(or remove type and set schema = @Schema(type="integer", format="int64") as
supported), and change role's type to "string" (e.g., @Schema(type="string")),
keeping the descriptions and examples unchanged so Swagger/OpenAPI and client
generators interpret memberId and role correctly.
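For reference, once the annotation uses the standard types, springdoc would emit a schema fragment along these lines (a sketch; the component name assumes springdoc's default of using the DTO class name):

```yaml
AdminLoginResponse:
  type: object
  properties:
    memberId:
      type: integer
      format: int64
      description: 회원 ID
      example: 1
    role:
      type: string
      description: 회원 역할
      example: ADMIN
```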

Comment on lines +14 to 18
@Schema(type = "String", description = "accessToken", example = "122Wjxf@djx1jcmxsizkds2fj-dsm2.dzj2")
private String accessToken;

@Schema(type = "String", description = "refreshToken", example = "eym122Wjxf@djx1jcmxsizkds2fj-dsm2.dzj2")
private String refreshToken;

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🌐 Web query:

OpenAPI 3.0 specification string type format and schema annotation

💡 Result:

OpenAPI 3.0: string type + format

In OAS 3.0, a string schema is declared with:

type: string
format: <optional>
  • format is an open string-valued property: you may use values not defined by the spec (e.g., email, uuid, uri), and tools that don’t recognize a format may treat it as just type: string. [1], [2]
  • Built-in OAS 3.0 string formats include: byte (base64), binary (octets), date (RFC3339 full-date), date-time (RFC3339 date-time), and password (UI hint). [1], [2]

Example:

id:
  type: string
  format: uuid

String validation keywords (schema constraints)

OAS 3.0 Schema Object supports JSON-Schema-derived constraints including:

  • minLength, maxLength
  • pattern (regex; should follow ECMA-262 Edition 5.1 dialect)
  • enum [1], [2]

Example:

ssn:
  type: string
  pattern: '^\d{3}-\d{2}-\d{4}$'

“Schema annotations” (documentation-oriented fields)

In OAS 3.0, the Schema Object supports documentation/annotation-style fields such as:

  • title
  • description (CommonMark allowed)
  • default (must conform to the schema’s type)
  • example
  • plus schema metadata flags like readOnly, writeOnly, deprecated, and nullable. [1]

🌐 Web query:

Springdoc OpenAPI @Schema type parameter uppercase lowercase string

💡 Result:

In springdoc-openapi (Swagger/OpenAPI annotations), @Schema(type = ...) expects an OpenAPI Schema type value, which is lowercase—e.g. type = "string" (not "String"). OpenAPI primitive types are string, integer, number, boolean, array, object. [1]

Example:

@Schema(type = "string")
private String name;

If you’re migrating from Swagger 2 annotations, note that @ApiModelProperty(dataType=...) maps to @Schema(type=...) (same idea: OpenAPI “type” values like "string"). [2]

Sources: [1] [3]


🌐 Web query:

Swagger @Schema type String vs string annotation specification

💡 Result:

In OpenAPI/Swagger, the Schema Object’s type is the OpenAPI primitive type name, not a Java class name—so it should be lowercase:

  • Valid OpenAPI type values include: string, integer, number, boolean, array, object (case-sensitive). Using String is not a valid OpenAPI schema type. [1], [2]
  • The Swagger @Schema(type = "...") annotation explicitly says the value must be a valid type per the OpenAPI Specification—so use @Schema(type = "string"), not "String". [3]

If your goal is “this is a Java String”, prefer one of these instead of type:

  • Omit type and let the library infer it from the Java field, or
  • Use @Schema(implementation = String.class) when you need to force the Java model type. [3]

Rule of thumb: type="string" describes the wire format in the OpenAPI document; String.class describes the Java implementation type. [1], [3]

Sources:
[1] Swagger/OpenAPI data types (type: string, etc.)
[2] OpenAPI schema type validation rule (case-sensitive allowed values)
[3] io.swagger.v3.oas.annotations.media.Schema javadoc (“Must be a valid type per the OpenAPI Specification”)


Please unify the token schema types in this response DTO to the OpenAPI standard.

@Schema(type = "String") needs to change to @Schema(type = "string"). The OpenAPI 3.0 standard defines schema types in lowercase (string, integer, number, etc.), and Springdoc follows that spec. The Java class name String (uppercase) is not a valid OpenAPI type, so document generation and client code generation tools may fail on it.

🔧 Suggested fix
-    @Schema(type = "String", description = "accessToken", example = "122Wjxf@djx1jcmxsizkds2fj-dsm2.dzj2")
+    @Schema(type = "string", description = "accessToken", example = "122Wjxf@djx1jcmxsizkds2fj-dsm2.dzj2")
     private String accessToken;

-    @Schema(type = "String", description = "refreshToken", example = "eym122Wjxf@djx1jcmxsizkds2fj-dsm2.dzj2")
+    @Schema(type = "string", description = "refreshToken", example = "eym122Wjxf@djx1jcmxsizkds2fj-dsm2.dzj2")
     private String refreshToken;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`apps/api-admin/src/main/java/com/ott/api_admin/auth/dto/response/AdminTokenResponse.java`
around lines 14 - 18, The `@Schema` type values in AdminTokenResponse are using
"String" (uppercase) which is not OpenAPI-compliant; update the `@Schema`
annotations on the accessToken and refreshToken fields in class
AdminTokenResponse to use lowercase "string" instead of "String" so the
generated OpenAPI schema and downstream client code are valid.

Comment on lines +70 to +73
/*
역할이 MEMBER인 유저만 그 외 EndPoint 접근 가능하도록 설정
*/
.anyRequest().hasRole("MEMBER")

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find and examine SecurityConfig.java
find . -type f -name "SecurityConfig.java" | head -20

Repository: OpenTheTaste/backend

Length of output: 213


🏁 Script executed:

#!/bin/bash
# Search for role definitions and configurations
rg -nP --type=java '(enum|class)\s+\w*Role|ROLE_(MEMBER|USER|EDITOR|ADMIN)|authorities|GrantedAuthority' --max-count=50

Repository: OpenTheTaste/backend

Length of output: 4933


🏁 Script executed:

#!/bin/bash
# Check the actual SecurityConfig.java content around lines 70-73
find . -type f -name "SecurityConfig.java" -exec head -100 {} \; -print

Repository: OpenTheTaste/backend

Length of output: 8625


Restricting api-user's default access policy to the single MEMBER role can break the permission split.

With .anyRequest().hasRole("MEMBER") on line 73, and the current role structure where each user holds exactly one of MEMBER, EDITOR, or ADMIN, EDITOR and ADMIN accounts are blocked from every endpoint. For consistency with api-admin's .anyRequest().hasAnyRole("ADMIN", "EDITOR") pattern, api-user should also explicitly allow all roles that need access.

🔧 Suggested fix
-                        .anyRequest().hasRole("MEMBER")
+                        .anyRequest().hasAnyRole("MEMBER", "EDITOR", "ADMIN")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `apps/api-user/src/main/java/com/ott/api_user/config/SecurityConfig.java`
around lines 70 - 73, The current SecurityConfig uses
.anyRequest().hasRole("MEMBER") which blocks users who hold only EDITOR or ADMIN
roles; update the authorization rule in SecurityConfig so that endpoints that
should be accessible to elevated roles use .anyRequest().hasAnyRole("MEMBER",
"EDITOR", "ADMIN") or a more specific matcher (e.g., replace
.anyRequest().hasRole("MEMBER") with
.anyRequest().hasAnyRole("MEMBER","EDITOR","ADMIN") or add separate
antMatchers/authorizeHttpRequests rules) to align with api-admin's
.anyRequest().hasAnyRole("ADMIN","EDITOR") pattern and ensure ADMIN/EDITOR
accounts can access appropriate endpoints.
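The role check itself reduces to a prefix match against granted authorities: hasRole("X") succeeds when the user holds the authority "ROLE_X". A minimal plain-Java sketch (RoleCheckDemo and its helper are hypothetical, mimicking Spring Security's semantics) shows why a single-role ADMIN user fails the current rule but passes hasAnyRole:

```java
import java.util.Set;

public class RoleCheckDemo {

    // Mimics Spring Security: hasRole("X") / hasAnyRole(...) succeed when the
    // user's granted authorities contain "ROLE_" + the requested role name.
    public static boolean hasAnyRole(Set<String> authorities, String... roles) {
        for (String role : roles) {
            if (authorities.contains("ROLE_" + role)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Single-role model from the review: an admin holds only ROLE_ADMIN.
        Set<String> admin = Set.of("ROLE_ADMIN");

        // .anyRequest().hasRole("MEMBER") rejects the admin:
        System.out.println(hasAnyRole(admin, "MEMBER"));                    // false

        // .anyRequest().hasAnyRole("MEMBER", "EDITOR", "ADMIN") admits them:
        System.out.println(hasAnyRole(admin, "MEMBER", "EDITOR", "ADMIN")); // true
    }
}
```

An alternative design is a role hierarchy (ADMIN > EDITOR > MEMBER) via Spring Security's RoleHierarchy bean, which keeps hasRole("MEMBER") as written while still letting elevated roles through.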

Comment on lines +11 to +31
public void addCookie(HttpServletResponse response, String name, String value, int maxAge) {
    ResponseCookie cookie = ResponseCookie.from(name, value)
            // .domain("openthetaste.cloud") // 로컬 테스트 시 주석처리!!!
            .httpOnly(true) // JS 접근 차단 -> 크로스 사이트 스크립트 공격 대비
            .secure(false) // HTTPS 요청만 허용
            .path("/") // 모든 경로로 전송
            .maxAge(maxAge)
            .sameSite("Lax") // 크로스 사이트에 대해서 쿠키 전송 허용
            .build();
    response.addHeader(HttpHeaders.SET_COOKIE, cookie.toString());
}

public void deleteCookie(HttpServletResponse response, String name) {
    ResponseCookie cookie = ResponseCookie.from(name, "")
            // .domain("openthetaste.cloud") // 로컬 테스트 시 주석처리!!!
            .httpOnly(true)
            .secure(false)
            .path("/")
            .maxAge(0)
            .sameSite("Lax")
            .build();

⚠️ Potential issue | 🟠 Major

Token cookies with Secure disabled are dangerous in a production environment.

Lines 15 and 27 set .secure(false), so authentication cookies can also be sent over non-HTTPS connections. Because this utility lives in a shared module, the impact surface is large.

🔧 Suggested fix
+import org.springframework.beans.factory.annotation.Value;
@@
 @Component
 public class CookieUtil {
+
+    @Value("${app.security.cookie.secure:true}")
+    private boolean secureCookie;
@@
     public void addCookie(HttpServletResponse response, String name, String value, int maxAge) {
         ResponseCookie cookie = ResponseCookie.from(name, value)
@@
-                .secure(false) // HTTPS 요청만 허용
+                .secure(secureCookie) // production default: true
                 .path("/")  // 모든 경로로 전송
@@
     public void deleteCookie(HttpServletResponse response, String name) {
         ResponseCookie cookie = ResponseCookie.from(name, "")
@@
-                .secure(false)
+                .secure(secureCookie)
                 .path("/")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`modules/common-security/src/main/java/com/ott/common/security/util/CookieUtil.java`
around lines 11 - 31, The CookieUtil methods addCookie and deleteCookie
currently set ResponseCookie.secure(false) which allows cookies to be sent over
insecure HTTP; change these to set the Secure flag for production by replacing
.secure(false) with a secure(true) usage or making the secure value
environment-configurable (e.g., use a boolean from configuration or an
isProduction() check) so that addCookie and deleteCookie send cookies with
Secure=true in production while still allowing non-secure in local tests via
config.
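The suggested @Value("${app.security.cookie.secure:true}") default can then be overridden per environment; a sketch of the matching configuration (the app.security.cookie.secure key is an assumption taken from the diff above, not an existing property):

```yaml
# application.yml — production keeps the secure default (true)
app:
  security:
    cookie:
      secure: true

---
# local profile — allow plain-HTTP cookies for local testing only
spring:
  config:
    activate:
      on-profile: local
app:
  security:
    cookie:
      secure: false
```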

@phonil phonil merged commit 5ae8653 into develop Mar 3, 2026
1 check passed
@phonil phonil deleted the OT-123-feature/transcode-task branch April 4, 2026 09:40

Labels

feat New feature implementation

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[OT-123][FEAT]: Separate transcoding tasks

2 participants