
[OT-311] [CHORE]: Apply Grafana templates and add required configuration #168

Closed
arlen02-01 wants to merge 13 commits into develop from
OT-311-chore/grafana-template

Conversation

@arlen02-01
Contributor

@arlen02-01 arlen02-01 commented Mar 14, 2026

📝 Work Summary

Describe the work done in this PR.

  • Applied Grafana templates and added the required configuration

☑️ Checklist

Please confirm the checklist.

  • Did the tests pass?
  • Did you resolve conflicts?
  • Did you register the issue?
  • Did you add labels?

#️⃣ Related Issue

e.g., # issue number
close #167

Summary by CodeRabbit

Release Notes

  • New Features

    • Multipart upload support for contents and short-form videos (per-part presigned URLs, upload complete/abort APIs) and related admin APIs
    • New endpoints for fetching multipart part URLs and completing uploads
    • Grafana dashboards for JVM and logs, plus a node-exporter target added to Prometheus
    • node-exporter install/start steps integrated into the deployment workflows
  • Improvements

    • Stronger upload request validation (required-field checks added)
    • Improved S3 presign and multipart upload flow, and updated monitoring network configuration
    • Updated frontend/monitoring default settings
  • Bug Fixes

    • Multipart sessions are now aborted safely when an upload fails, and related error codes were added

@coderabbitai
Copy link
Copy Markdown

coderabbitai Bot commented Mar 14, 2026

Walkthrough

Adds multipart S3 upload APIs and tooling (create, part URL retrieval, complete, abort), along with node-exporter target discovery, SSM storage, and remote deployment, and expands the Prometheus/Grafana configuration (scrape jobs, dashboards, networks).

Changes

Cohort / File(s) | Summary
CI / deployment workflows
.github/workflows/deploy-ai.yml, .github/workflows/deploy-ec2-docker.yml, .github/workflows/deploy-monitoring.yml
Adds NODE_EXPORTER SSM parameter input/resolution, target-instance IP/SG discovery logic, conditional creation of a port 9100 ingress rule scoped to the monitoring SG, and steps that remove, pull, and run the node-exporter container on the target.
Prometheus / Grafana / compose
apps/monitoring/prometheus/prometheus*.yml, apps/monitoring/grafana/provisioning/dashboards/json/*, apps/monitoring/docker-compose.yml, docker-compose.yml
Adds node-exporter/machine-node-exporter scrape jobs to Prometheus, adds template substitution variables, adds/updates several Grafana dashboard files, and changes networks, environment variables, and the Prometheus host port mapping.
Upload DTOs added
apps/api-admin/.../upload/dto/request/MultipartUploadCompleteRequest.java, apps/api-admin/.../upload/dto/response/MultipartUploadPartUrlResponse.java
Adds a multipart-complete request record and a part-upload URL response record (with validation and Swagger annotations).
Controllers: contents and short-form endpoints
apps/api-admin/.../content/controller/BackOfficeContentsApi.java, .../BackOfficeContentsController.java, .../shortform/controller/BackOfficeShortFormApi.java, .../BackOfficeShortFormController.java
Adds POST /{id}/upload/complete and GET /{id}/upload/parts endpoints plus request/response wrapping.
Response / mapper changes
apps/api-admin/.../content/dto/response/ContentsUploadResponse.java, .../shortform/dto/response/ShortFormUploadResponse.java, .../mapper/BackOfficeContentsMapper.java, .../shortform/mapper/BackOfficeShortFormMapper.java
Removes originUploadUrl, adds originUploadId, originTotalPartCount, and originPartSizeBytes fields, and updates mapper signatures and call sites.
Service logic: contents and short-form
apps/api-admin/.../content/service/BackOfficeContentsService.java, apps/api-admin/.../shortform/service/BackOfficeShortFormService.java
Adds multipart upload complete/part-retrieval APIs, origin objectKey validation, multipart abort attempts on mid-flight failures, logging (@Slf4j), and stronger permission/validation logic.
UploadHelper and S3 infrastructure
apps/api-admin/.../upload/support/UploadHelper.java, modules/infra-s3/.../S3PresignService.java, modules/infra-s3/.../S3PresignerConfig.java
Cleans up image-upload method names; adds multipart planning, part URL generation, complete, and abort APIs with related records/types; injects S3Client into S3PresignService and implements the multipart operations; changes the S3PresignerConfig creation pattern.
Validation and error codes
modules/common-web/src/main/java/com/ott/common/web/exception/ErrorCode.java
Adds new error codes: CONTENTS_ORIGIN_OBJECT_KEY_MISMATCH (B207), SHORTFORM_ORIGIN_OBJECT_KEY_MISMATCH (B208), ETAG_LIST_INVALID (B209).
Stronger request DTO validation
apps/api-admin/.../content/dto/request/ContentsUploadRequest.java, .../shortform/dto/request/ShortFormUploadRequest.java
Adds @NotNull to the duration and videoSize fields (existing @PositiveOrZero kept).
Monitoring dashboard files
apps/monitoring/grafana/provisioning/dashboards/json/*
Adds new JVM and Loki dashboard JSON files and updates the titles/disabled status of some existing dashboards (content and templates included).

Sequence Diagram

sequenceDiagram
    participant Client
    participant Controller as BackOffice<br/>Controller
    participant Service as BackOffice<br/>Service
    participant Presign as S3Presign<br/>Service
    participant S3 as AWS S3

    rect rgba(100, 200, 150, 0.5)
    Note over Client,S3: Multipart part URL retrieval flow
    Client->>Controller: GET /{id}/upload/parts(objectKey, uploadId, page, size)
    Controller->>Service: get*OriginUploadPartUrls(...)
    Service->>Service: Validate objectKey / permissions / total part count
    Service->>Presign: getMultipartPartUrls(objectKey, uploadId, totalPartCount, page, size)
    Presign->>S3: presign each part internally
    S3-->>Presign: presigned URLs
    Presign-->>Service: PageResponse<PartUrl>
    Service-->>Controller: PageResponse
    Controller-->>Client: 200 + URL list
    end

    rect rgba(150, 200, 200, 0.5)
    Note over Client,S3: Multipart upload completion flow
    Client->>Controller: POST /{id}/upload/complete(objectKey, uploadId, parts)
    Controller->>Service: complete*OriginUpload(...)
    Service->>Service: Validate objectKey / ETag list
    Service->>Presign: completeMultipartUpload(objectKey, uploadId, partETags)
    Presign->>S3: CompleteMultipartUploadRequest
    S3-->>Presign: completion response
    Presign-->>Service: success
    Service-->>Controller: Void
    Controller-->>Client: 200 OK
    end
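
For reference, a minimal client-side sketch of the flow in the diagram above, written in Java. This is illustrative only: the host, bearer token, placeholder upload ID, and the single-part simplification are assumptions and not part of this PR, while the endpoint paths and JSON field names follow the API described here.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical back-office client walking through the two new endpoints.
public class MultipartUploadFlowSketch {

    private static final HttpClient HTTP = HttpClient.newHttpClient();
    private static final String BASE = "https://admin.example.com";      // assumed host
    private static final String TOKEN = "Bearer <access-token>";         // assumed auth header

    public static void main(String[] args) throws Exception {
        long contentsId = 1L;
        String objectKey = "contents/1/origin/video.mp4";
        String uploadId = "example-upload-id";                           // taken from the upload-create response

        // 1) Fetch a page of presigned part URLs: GET /{contentsId}/upload/parts
        HttpRequest partUrlsReq = HttpRequest.newBuilder(URI.create(
                        BASE + "/contents/" + contentsId + "/upload/parts"
                                + "?objectKey=" + objectKey + "&uploadId=" + uploadId + "&page=0&size=100"))
                .header("Authorization", TOKEN)
                .GET()
                .build();
        String partUrlPageJson = HTTP.send(partUrlsReq, HttpResponse.BodyHandlers.ofString()).body();
        // Parse partUrlPageJson with a JSON library; each element carries a partNumber and its presigned URL.

        // 2) PUT each part body straight to S3 via its presigned URL and keep the returned ETag header.
        HttpRequest putPartReq = HttpRequest.newBuilder(URI.create(
                        "https://example-bucket.s3.amazonaws.com/contents/1/origin/video.mp4?partNumber=1")) // presigned URL from step 1
                .PUT(HttpRequest.BodyPublishers.ofByteArray(new byte[16 * 1024 * 1024]))  // placeholder 16 MiB part body
                .build();
        String eTag = HTTP.send(putPartReq, HttpResponse.BodyHandlers.discarding())
                .headers().firstValue("ETag").orElseThrow();              // S3 returns the ETag already wrapped in quotes

        // 3) Complete the upload: POST /{contentsId}/upload/complete with the collected part ETags.
        String completeBody = "{\"objectKey\":\"" + objectKey + "\",\"uploadId\":\"" + uploadId + "\","
                + "\"parts\":[{\"partNumber\":1,\"eTag\":" + eTag + "}]}";
        HttpRequest completeReq = HttpRequest.newBuilder(URI.create(
                        BASE + "/contents/" + contentsId + "/upload/complete"))
                .header("Authorization", TOKEN)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(completeBody))
                .build();
        System.out.println(HTTP.send(completeReq, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}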

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested labels

deploy

Suggested reviewers

  • phonil
  • marulog
🚥 Pre-merge checks | ✅ 1 | ❌ 4

❌ Failed checks (3 warnings, 1 inconclusive)

Check name Status Explanation Resolution
Title check ⚠️ Warning The PR title refers only to applying the Grafana template, but the actual changes span the multipart upload feature, monitoring configuration, and the Grafana template, so the title does not accurately reflect the main changes. Broaden the PR title so it covers the multipart upload feature, the monitoring/Prometheus configuration, and the Grafana template work.
Out of Scope Changes check ⚠️ Warning Issue #167 only asks for applying the Grafana template, but this PR contains wide-ranging changes outside that scope: the multipart upload API implementation (BackOfficeContentsApi/Controller, BackOfficeShortFormApi/Controller, UploadHelper, S3PresignService, etc.), monitoring/Prometheus configuration (workflows, config files), and node-exporter deployment logic. Split the multipart-upload changes into a separate PR and narrow this PR to the Grafana template work (including the monitoring configuration).
Docstring Coverage ⚠️ Warning Docstring coverage is 7.14%, which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
Linked Issues check ❓ Inconclusive Issue #167 states only the goal of applying the Grafana template and the required configuration, but the PR additionally contains wide-ranging changes such as the multipart upload API, the S3 presign service, and monitoring/Prometheus configuration, so alignment with the requirements cannot be clearly judged. Confirm whether the scope of issue #167 matches the actual PR changes, or verify that the additional changes are tracked in separate issues.
✅ Passed checks (1 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
  • 📝 Generate docstrings (stacked PR)
  • 📝 Generate docstrings (commit on current branch)
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch OT-311-chore/grafana-template
📝 Coding Plan
  • Generate coding plan for human review comments

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@arlen02-01
Contributor Author

@CodeRabbit check whether you have already reviewed this code, and if not, do it now. Review the conventions and the code itself, and post each as a separate comment.

@coderabbitai

coderabbitai Bot commented Mar 14, 2026

@arlen02-01 I'll run a full code review of PR #168. The convention check and the code review will each be posted as separate comments.


🧠 Learnings used
Learnt from: phonil
Repo: OpenTheTaste/backend PR: 0
File: :0-0
Timestamp: 2026-02-27T04:09:55.496Z
Learning: When reviewing PRs in the OpenTheTaste/backend repository, follow coderabbit/coderabbit-guidelines.md in the repo and leave feedback as "one comment per review item".
✅ Actions performed

Full review triggered.

@arlen02-01 arlen02-01 added the chore (changes to configuration files such as .gitignore, .yml) label Mar 14, 2026
@arlen02-01 arlen02-01 self-assigned this Mar 14, 2026
@coderabbitai

coderabbitai Bot commented Mar 14, 2026

@arlen02-01 I triggered a full review in my previous response. Since that review may not have finished yet, I'll run the full review once more. The convention check and the code review will be posted as separate comments.

Review request: please provide the convention check and the code review as separate comments.


🧠 Learnings used
Learnt from: phonil
Repo: OpenTheTaste/backend PR: 0
File: :0-0
Timestamp: 2026-02-27T04:09:55.496Z
Learning: When reviewing PRs in the OpenTheTaste/backend repository, follow coderabbit/coderabbit-guidelines.md in the repo and leave feedback as "one comment per review item".
✅ Actions performed

Full review triggered.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 8

🧹 Nitpick comments (12)
.github/workflows/deploy-ai.yml (2)

146-154: Security group rule creation errors are ignored

|| true swallows all errors. The InvalidPermission.Duplicate error raised when the rule already exists is safe to ignore, but genuine failures such as missing permissions or a wrong group ID are silenced along with it. Consider inspecting stderr or ignoring only that specific error code.

♻️ Suggested improvement
-            aws ec2 authorize-security-group-ingress \
-              --region "$AWS_REGION" \
-              --group-id "$MACHINE_SG_ID" \
-              --protocol tcp \
-              --port 9100 \
-              --source-group "$MONITORING_SG_ID" >/dev/null 2>&1 || true
+            if ! aws ec2 authorize-security-group-ingress \
+              --region "$AWS_REGION" \
+              --group-id "$MACHINE_SG_ID" \
+              --protocol tcp \
+              --port 9100 \
+              --source-group "$MONITORING_SG_ID" 2>&1 | grep -q "InvalidPermission.Duplicate"; then
+              echo "Security group rule may have failed or already exists"
+            fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/deploy-ai.yml around lines 146 - 154, Currently the aws
ec2 authorize-security-group-ingress call for MONITORING_SG_ID -> MACHINE_SG_ID
is silencing all errors with "|| true"; change this so you run the aws command
and capture its stderr/exit code, then only ignore the call when the error text
or error code indicates "InvalidPermission.Duplicate" (existing rule), but
surface/exit on other errors (e.g., UnauthorizedOperation,
InvalidGroup.NotFound). Locate the shell block that references MONITORING_SG_ID,
MACHINE_SG_ID and the aws ec2 authorize-security-group-ingress invocation,
remove the unconditional "|| true", and add a conditional check that
greps/parses stderr (or inspects the AWS CLI JSON error code) to permit only the
Duplicate-permission case while failing for other errors.

125-135: Only the first security group is used in the lookup

SecurityGroups[0].GroupId selects only the first security group. If multiple security groups are attached to the instance, the ingress rule may be added to an unintended group. Filtering the security group by a specific tag or name is safer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/deploy-ai.yml around lines 125 - 135, the MONITORING_SG_ID
and MACHINE_SG_ID lookups always use SecurityGroups[0].GroupId, so if an
instance has multiple SGs attached the rule can be added to the wrong SG; fix:
have the describe-instances call return all group IDs via
SecurityGroups[*].GroupId, or (preferably) change the logic to filter security
groups by tag/name so a specific SG is selected. For example, take the array
from a SecurityGroups[*].GroupId query and pick the correct SG by your criteria
(tag/name), or use aws ec2 describe-security-groups --filters "Name=tag:Name,Values=..."/
"Name=group-name,Values=..." to resolve MONITORING_SG_ID and MACHINE_SG_ID to
the exact security groups (refs: MONITORING_SG_ID, MACHINE_SG_ID, SecurityGroups[0].GroupId).
.github/workflows/deploy-monitoring.yml (1)

110-114: Recommend also validating the scrape-target format (host:port).

Lines 110-114 only guard against empty values, so a value in the wrong format would only fail later, when Prometheus starts.

Example of adding format validation
           if [ -z "$USER_API_TARGET" ] || [ "$USER_API_TARGET" = "None" ] || \
              [ -z "$ADMIN_API_TARGET" ] || [ "$ADMIN_API_TARGET" = "None" ] || \
              [ -z "$TRANSCODER_TARGET" ] || [ "$TRANSCODER_TARGET" = "None" ] || \
              [ -z "$NODE_EXPORTER_TARGET" ] || [ "$NODE_EXPORTER_TARGET" = "None" ] || \
              [ -z "$MACHINE_NODE_EXPORTER_TARGET" ] || [ "$MACHINE_NODE_EXPORTER_TARGET" = "None" ]; then
             echo "One or more scrape targets are empty. Check SSM parameter values." >&2
             exit 1
           fi
+
+          is_host_port() {
+            [[ "$1" =~ ^[^[:space:]:]+:[0-9]{1,5}$ ]]
+          }
+
+          for target in \
+            "$USER_API_TARGET" \
+            "$ADMIN_API_TARGET" \
+            "$TRANSCODER_TARGET" \
+            "$NODE_EXPORTER_TARGET" \
+            "$MACHINE_NODE_EXPORTER_TARGET"; do
+            if ! is_host_port "$target"; then
+              echo "Invalid scrape target format: $target (expected host:port)" >&2
+              exit 1
+            fi
+          done
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/deploy-monitoring.yml around lines 110 - 114, the current conditional only
checks for empty/"None" values for variables USER_API_TARGET, ADMIN_API_TARGET,
TRANSCODER_TARGET, NODE_EXPORTER_TARGET, MACHINE_NODE_EXPORTER_TARGET; add
host:port format validation for each variable (e.g., using a regex like
host:port) so malformed targets fail early. Update the existing if-block (the
condition that references USER_API_TARGET, ADMIN_API_TARGET, TRANSCODER_TARGET,
NODE_EXPORTER_TARGET, MACHINE_NODE_EXPORTER_TARGET) to also test that each
variable matches a host:port pattern (or run a separate check immediately after)
and exit with an error if any do not match. Ensure the validation uses a robust
shell-compatible check (bash regex or grep -E) and includes a clear error
message naming the offending variable.
apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java (2)

266-276: Suggestion: rework the exception handling inside forEach

Throwing from inside forEach works, but in a functional style it reads as a side effect. For readability, consider a traditional for-loop, or validate with anyMatch and throw separately.

♻️ Example of improving the validation logic
-        //ETAG List 유효성 검증
-        Set<Integer> seenPartNumbers = new HashSet<>();
-        normalizedParts.forEach(part -> {
-            if (part.partNumber() <= 0 || !StringUtils.hasText(part.eTag()) || !seenPartNumbers.add(part.partNumber())) {
-                throw new BusinessException(ErrorCode.ETAG_LIST_INVALID);
-            }
-        });
+        //ETAG List 유효성 검증
+        Set<Integer> seenPartNumbers = new HashSet<>();
+        for (MultipartPartETag part : normalizedParts) {
+            if (part.partNumber() <= 0 || !StringUtils.hasText(part.eTag()) || !seenPartNumbers.add(part.partNumber())) {
+                throw new BusinessException(ErrorCode.ETAG_LIST_INVALID);
+            }
+        }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java`
around lines 266 - 276, The current validation throws inside a forEach
(normalizedParts.forEach ...) which causes side-effects in a functional context;
replace it with a clear imperative loop or a stream predicate to validate first
and then throw. For example, iterate over normalizedParts with a traditional
for-loop (or use normalizedParts.stream().anyMatch(...)) to detect if any
MultipartPartETag has partNumber() <= 0, blank eTag() (StringUtils.hasText), or
a duplicate partNumber using the seenPartNumbers Set, and if such a case is
found throw new BusinessException(ErrorCode.ETAG_LIST_INVALID); ensure you
reference MultipartPartETag, normalizedParts, seenPartNumbers and ErrorCode in
the updated code.

342-363: Missing resource cleanup when multipart upload creation fails

If an exception occurs while constructing MultipartUploadFileResult after the createMultipartUpload call, the multipart session created in S3 can be left orphaned. In practice the record constructor is unlikely to throw, but a defensive try-catch is worth considering.

♻️ Suggested defensive exception handling
 private MultipartUploadFileResult createVideoMultipartUpload(
         String resourceRoot,
         Long resourceId,
         String fileName,
         Integer fileSizeKb
 ) {
     String sanitizedFileName = sanitizeFileName(fileName);
     String objectKey = buildObjectKey(resourceRoot, resourceId, "origin", sanitizedFileName);
     String contentType = resolveVideoContentType(sanitizedFileName);
     String objectUrl = s3PresignService.toObjectUrl(objectKey);

     MultipartUploadPlan multipartUploadPlan = getMultipartPlan(fileSizeKb);
     String uploadId = s3PresignService.createMultipartUpload(objectKey, contentType);

+    try {
         return new MultipartUploadFileResult(
                 objectKey,
                 objectUrl,
                 uploadId,
                 multipartUploadPlan.totalPartCount(),
                 multipartUploadPlan.partSizeBytes()
         );
+    } catch (RuntimeException ex) {
+        abortMultipartUpload(objectKey, uploadId);
+        throw ex;
+    }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java`
around lines 342 - 363, createVideoMultipartUpload can leave an orphaned S3
multipart session if an exception happens after
s3PresignService.createMultipartUpload; wrap the block that calls
createMultipartUpload and constructs the MultipartUploadFileResult in a
try-catch, capture uploadId (ensure it's non-null) and on any exception call
s3PresignService.abortMultipartUpload(objectKey, uploadId) (or equivalent abort
method) to clean up, then rethrow the exception so callers still see the
failure.
apps/api-admin/src/main/java/com/ott/api_admin/content/dto/response/ContentsUploadResponse.java (1)

34-35: Recommend cleaning up the Swagger example value

The example value of originPartSizeBytes contains explanatory text (<-(16mb)). That reads awkwardly when rendered as the example in Swagger UI; describing the unit in the description field is more appropriate.

♻️ Recommended fix
-        `@Schema`(type = "Long", description = "파트별 크기", example = "16777216   <-(16mb)")
+        `@Schema`(type = "Long", description = "파트별 크기 (바이트 단위, 기본 16MB)", example = "16777216")
         long originPartSizeBytes
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/dto/response/ContentsUploadResponse.java`
around lines 34 - 35, Remove the explanatory text from the `@Schema` example for
the field originPartSizeBytes and instead include the unit in the description:
update the annotation on ContentsUploadResponse.originPartSizeBytes so
example="16777216" (numeric only) and change/append the description to mention
the unit (e.g., "파트별 크기 (바이트)" or "파트별 크기, 단위: 바이트"). Ensure the field remains
of type long and only the `@Schema` annotation values are adjusted.
apps/api-admin/src/main/java/com/ott/api_admin/content/mapper/BackOfficeContentsMapper.java (1)

51-75: Consider refactoring as the parameter count grows

toContentsUploadResponse now takes 10 parameters. It works as-is, but for future maintainability consider a parameter object or passing MediaCreateUploadResult directly.

♻️ Refactoring example
// Option 1: pass MediaCreateUploadResult directly
public ContentsUploadResponse toContentsUploadResponse(
        Long contentsId,
        UploadHelper.MediaCreateUploadResult result
) {
    return new ContentsUploadResponse(
            contentsId,
            result.posterObjectKey(),
            result.thumbnailObjectKey(),
            result.originObjectKey(),
            result.masterPlaylistObjectKey(),
            result.posterUploadUrl(),
            result.thumbnailUploadUrl(),
            result.originUploadId(),
            result.originTotalPartCount(),
            result.originPartSizeBytes()
    );
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/mapper/BackOfficeContentsMapper.java`
around lines 51 - 75, toContentsUploadResponse currently takes 10 primitive
parameters which harms maintainability; change its signature to accept a single
parameter object (e.g., UploadHelper.MediaCreateUploadResult result) plus
contentsId, or an explicit DTO, and map fields from that object inside
toContentsUploadResponse (refer to method name toContentsUploadResponse and
class ContentsUploadResponse) so callers pass the result object instead of ten
separate args; update all call sites to construct/pass the
MediaCreateUploadResult (or new param object) and map its getters
(posterObjectKey(), thumbnailObjectKey(), originObjectKey(),
masterPlaylistObjectKey(), posterUploadUrl(), thumbnailUploadUrl(),
originUploadId(), originTotalPartCount(), originPartSizeBytes()) when creating
the ContentsUploadResponse.
apps/api-admin/src/main/java/com/ott/api_admin/shortform/dto/response/ShortFormUploadResponse.java (1)

28-35: Recommend cleaning up the Swagger example value

As in ContentsUploadResponse, the example value of originPartSizeBytes contains explanatory text. Apply the same fix for consistency.

♻️ Recommended fix
-        `@Schema`(type = "Long", description = "파트별 크기", example = "16777216   <-(16mb)")
+        `@Schema`(type = "Long", description = "파트별 크기 (바이트 단위, 기본 16MB)", example = "16777216")
         long originPartSizeBytes
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/shortform/dto/response/ShortFormUploadResponse.java`
around lines 28 - 35, The Swagger example for originPartSizeBytes in
ShortFormUploadResponse includes explanatory text ("16777216   <-(16mb)");
update the `@Schema` annotation for originPartSizeBytes (in class
ShortFormUploadResponse) to use a clean example value (e.g. "16777216") without
the parenthetical comment so it matches the format used in
ContentsUploadResponse.
apps/api-admin/src/main/java/com/ott/api_admin/content/controller/BackOfficeContentsController.java (1)

92-93: Missing blank line between methods

There is no blank line between the completeContentsUpload and getContentsUploadPartUrls methods. Add one for readability.

🔧 Suggested fix
         return ResponseEntity.ok(SuccessResponse.of(null));
     }
+
     `@Override`
     `@GetMapping`("/{contentsId}/upload/parts")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/controller/BackOfficeContentsController.java`
around lines 92 - 93, Add a blank line between the two methods
completeContentsUpload(...) and getContentsUploadPartUrls(...) in
BackOfficeContentsController to improve readability; locate the method
declarations for completeContentsUpload and getContentsUploadPartUrls and insert
a single empty line separating the two method blocks.
apps/api-admin/src/main/java/com/ott/api_admin/shortform/service/BackOfficeShortFormService.java (1)

262-288: @Transactional(readOnly = true) may be semantically inappropriate

completeShortFormOriginUpload is marked readOnly = true, yet it calls uploadHelper.completeMultipartUpload(), an external write that completes the S3 multipart upload.

Since no DB state is modified there is no functional problem, but the method's intent and its transaction annotation do not match, which can be confusing. Consider dropping the readOnly attribute or removing the @Transactional annotation altogether.

🔧 Suggested fix
-    `@Transactional`(readOnly = true)
+    `@Transactional`
     public void completeShortFormOriginUpload(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/shortform/service/BackOfficeShortFormService.java`
around lines 262 - 288, The method completeShortFormOriginUpload is annotated
with `@Transactional`(readOnly = true) but calls
uploadHelper.completeMultipartUpload (an external write operation), which makes
the readOnly flag misleading; update the method signature by removing readOnly
(e.g., change to `@Transactional`) or remove the `@Transactional` annotation
entirely so the transactional metadata accurately reflects that the method
performs non-read-only work—adjust the annotation on
completeShortFormOriginUpload accordingly.
modules/infra-s3/src/main/java/com/ott/infra/s3/service/S3PresignService.java (2)

110-123: Please guarantee part ordering inside this method.

Right now the input order is passed straight into CompletedMultipartUpload, so ordering is guaranteed only in one place, apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java lines 257-276. For a public infra method it is safer to sort by partNumber ascending internally, and naming the collections partETagList/completedPartList also matches the convention.

Example fix
-    public void completeMultipartUpload(String objectKey, String uploadId, List< MultipartPartETag> partETags) {
+    public void completeMultipartUpload(String objectKey, String uploadId, List<MultipartPartETag> partETagList) {
         try {
-            List<CompletedPart> completedParts = partETags.stream()
+            List<CompletedPart> completedPartList = partETagList.stream()
+                    .sorted(Comparator.comparingInt(MultipartPartETag::partNumber))
                     .map(part -> CompletedPart.builder()
                             .partNumber(part.partNumber())
                             .eTag(part.eTag())
                             .build())
                     .toList();
 
             CompletedMultipartUpload completedMultipartUpload = CompletedMultipartUpload.builder()
-                    .parts(completedParts)
+                    .parts(completedPartList)
                     .build();

An import of java.util.Comparator needs to be added.

As per coding guidelines, "Collection variable names with List suffix".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@modules/infra-s3/src/main/java/com/ott/infra/s3/service/S3PresignService.java`
around lines 110 - 123, In S3PresignService.completeMultipartUpload ensure the
incoming partETags are sorted by partNumber before building CompletedPart
objects and CompletedMultipartUpload: rename the input collection to
partETagList and the mapped collection to completedPartList for consistency,
sort partETagList by Comparator.comparingInt( MultipartPartETag::partNumber )
(ascending) before mapping to
CompletedPart.builder().partNumber(...).eTag(...).build(), and add the
java.util.Comparator import; this guarantees internal ordering regardless of
callers.

162-166: Prefer not to expose MultipartPartETag as a nested type of the service.

apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java lines 257-276 currently construct new S3PresignService.MultipartPartETag(...), which couples the apps layer to the infra service implementation type. Extracting this record into a separate DTO or port type keeps the module boundary cleaner.

As per coding guidelines, "Module boundaries: common-web/domain/infra/apps".
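
A minimal sketch of what the extraction could look like; the target package name is an assumption:

// Hypothetical standalone DTO in an infra-visible package (package name assumed),
// so apps-layer code depends on a plain record rather than on S3PresignService's nested type.
package com.ott.infra.s3.dto;

public record MultipartPartETag(int partNumber, String eTag) {
}

UploadHelper would then build new MultipartPartETag(part.partNumber(), part.eTag()) via a plain import, and S3PresignService.completeMultipartUpload would accept a List of the extracted record; keeping the record name and fields identical means call sites stay unchanged.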

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@modules/infra-s3/src/main/java/com/ott/infra/s3/service/S3PresignService.java`
around lines 162 - 166, Move the nested record MultipartPartETag out of
S3PresignService and into its own public DTO class (e.g., MultipartPartETag) in
an infra-visible package so other modules can depend on the DTO instead of the
service implementation; update S3PresignService to return/consume the new DTO
type and change the usage site (UploadHelper) to import the new DTO rather than
referencing S3PresignService.MultipartPartETag, keeping the class name and
fields identical to avoid API changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/controller/BackOfficeContentsApi.java`:
- Around line 139-145: The completeContentsUpload controller method in
BackOfficeContentsApi is missing the Authentication authentication parameter
required for back-office role checks; add an Authentication authentication
parameter to the completeContentsUpload signature (matching how
BackOfficeShortFormApi.completeShortFormUpload is declared) so the service layer
can validate EDITOR permissions, and update any implementing controller class to
accept and pass this Authentication through to the service layer/authorization
checks.
- Around line 165-180: The API method
BackOfficeContentsApi.getContentsUploadPartUrls is missing the Authentication
parameter, preventing authorization checks; add an Authentication authentication
parameter to the method signature for getContentsUploadPartUrls (matching
BackOfficeShortFormApi.getShortFormUploadPartUrls), update any corresponding
annotations/imports, and ensure the controller/service path that implements
getContentsUploadPartUrls accepts and uses this Authentication to perform the
same permission checks as other backoffice endpoints.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/controller/BackOfficeContentsController.java`:
- Around line 77-92: The endpoint
BackOfficeContentsController.completeContentsUpload is missing
Authentication-based authorization, allowing EDITORs to complete uploads for
other users; add an Authentication parameter to completeContentsUpload, extract
the caller's user id/roles (as done in
BackOfficeShortFormController.completeShortFormUpload), and either pass that
identity to BackOfficeContentsService.completeContentsOriginUpload or perform an
ownership check before calling the service so the service can enforce the same
OWNER/EDITOR check (matching logic in
BackOfficeShortFormService.completeShortFormOriginUpload) to ensure only the
uploader/authorized editor may complete the upload.
- Around line 93-105: The getContentsUploadPartUrls endpoint is missing the
Authentication parameter so EDITOR permission and uploader ownership cannot be
verified; update BackOfficeContentsController.getContentsUploadPartUrls to
accept an Authentication (or `@AuthenticationPrincipal`) parameter, perform the
same EDITOR/ownership check as other endpoints (or pass the Authentication
through) before calling
backOfficeContentsService.getContentsOriginUploadPartUrls, and if you change the
service signature ensure
backOfficeContentsService.getContentsOriginUploadPartUrls accepts the
Authentication (or a validated user id) so the service can verify uploader
ownership.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/service/BackOfficeContentsService.java`:
- Around line 229-236: The code recalculates totalPartCount via
uploadHelper.getMultipartPartCount(contents.getVideoSize()) when serving part
URLs, which can diverge if server multipart settings changed; modify
BackOfficeContentsService to use a stable source: read and use a persisted
originTotalPartCount from the Contents entity (e.g., add/get
originTotalPartCount on Contents) or accept a client-supplied totalPartCount
from the upload creation response and pass it into
uploadHelper.getMultipartPartUrls, and then call
uploadHelper.getMultipartPartUrls(objectKey, uploadId, originTotalPartCount,
page, size) instead of recalculating to avoid mismatch.

In `@apps/monitoring/docker-compose.yml`:
- Around line 52-56: The monitoring compose uses an external network named
backend_default via the backend_shared network definition (external: true);
ensure that backend_default exists by either adding an explicit networks entry
for backend_default in the root docker-compose.yml (create the network there so
it's available to child stacks) or change the monitoring stack to not reference
an external network and create its own network; update the networks
configuration so backend_shared references a real network (backend_default) that
is created before or within the root compose.

In `@apps/monitoring/grafana/provisioning/dashboards/json/New dashboard-1772584885701.json.disabled`:
- Line 261: Several dashboard "title" string values contain mojibake (e.g., the
entries currently set to "?쒕퉬???듭떖 吏??", "API ?붿껌 泥섎━??(RPS)", "JVM
硫붾え由??ъ슜??(app蹂?", "?쒕퉬???곹깭 ?먭? (UP)", "CPU ?ъ슜瑜?(app蹂?", "紐⑤땲?곕쭅 ??쒕낫??");
open the JSON and replace each garbled value in the "title" fields with the
correct Korean text (or an English equivalent) and save the file encoded as
UTF-8 (no BOM) so characters are preserved; search for the exact garbled strings
above to locate every occurrence and update them consistently before re-enabling
the dashboard file.

In
`@modules/infra-s3/src/main/java/com/ott/infra/s3/service/S3PresignService.java`:
- Around line 136-146: Modify abortMultipartUpload to catch
NoSuchUploadException from software.amazon.awssdk.services.s3.model and treat it
as a no-op, and change the existing catch for SdkException to log the error
(using the class logger) instead of throwing an IllegalStateException so the
original caller's error path isn't obscured; locate the abortMultipartUpload
method that calls s3Client.abortMultipartUpload(...), add an import for
NoSuchUploadException and the logger, add a specific catch
(NoSuchUploadException ex) { /* no-op */ } followed by catch (SdkException ex) {
logger.warn("Failed to abort multipart upload for key={}, uploadId={}",
objectKey, uploadId, ex); }.

---

Nitpick comments:
In @.github/workflows/deploy-ai.yml:
- Around line 146-154: Currently the aws ec2 authorize-security-group-ingress
call for MONITORING_SG_ID -> MACHINE_SG_ID is silencing all errors with "||
true"; change this so you run the aws command and capture its stderr/exit code,
then only ignore the call when the error text or error code indicates
"InvalidPermission.Duplicate" (existing rule), but surface/exit on other errors
(e.g., UnauthorizedOperation, InvalidGroup.NotFound). Locate the shell block
that references MONITORING_SG_ID, MACHINE_SG_ID and the aws ec2
authorize-security-group-ingress invocation, remove the unconditional "|| true",
and add a conditional check that greps/parses stderr (or inspects the AWS CLI
JSON error code) to permit only the Duplicate-permission case while failing for
other errors.
- Around line 125-135: the MONITORING_SG_ID and MACHINE_SG_ID lookups always use
SecurityGroups[0].GroupId, so if an instance has multiple SGs attached the rule
can be added to the wrong SG; fix: have the describe-instances call return all
group IDs via SecurityGroups[*].GroupId, or (preferably) filter security groups
by tag/name so a specific SG is selected. For example, take the array from a
SecurityGroups[*].GroupId query and pick the correct SG by tag/name, or use aws
ec2 describe-security-groups --filters "Name=tag:Name,Values=..."/
"Name=group-name,Values=..." to resolve MONITORING_SG_ID and MACHINE_SG_ID to
the exact security groups (refs: MONITORING_SG_ID, MACHINE_SG_ID, SecurityGroups[0].GroupId).

In @.github/workflows/deploy-monitoring.yml:
- Around line 110-114: the current conditional only checks for empty/"None" values for variables
USER_API_TARGET, ADMIN_API_TARGET, TRANSCODER_TARGET, NODE_EXPORTER_TARGET,
MACHINE_NODE_EXPORTER_TARGET; add host:port format validation for each variable
(e.g., using a regex like host:port) so malformed targets fail early. Update the
existing if-block (the condition that references USER_API_TARGET,
ADMIN_API_TARGET, TRANSCODER_TARGET, NODE_EXPORTER_TARGET,
MACHINE_NODE_EXPORTER_TARGET) to also test that each variable matches a
host:port pattern (or run a separate check immediately after) and exit with an
error if any do not match. Ensure the validation uses a robust shell-compatible
check (bash regex or grep -E) and includes a clear error message naming the
offending variable.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/controller/BackOfficeContentsController.java`:
- Around line 92-93: Add a blank line between the two methods
completeContentsUpload(...) and getContentsUploadPartUrls(...) in
BackOfficeContentsController to improve readability; locate the method
declarations for completeContentsUpload and getContentsUploadPartUrls and insert
a single empty line separating the two method blocks.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/dto/response/ContentsUploadResponse.java`:
- Around line 34-35: Remove the explanatory text from the `@Schema` example for
the field originPartSizeBytes and instead include the unit in the description:
update the annotation on ContentsUploadResponse.originPartSizeBytes so
example="16777216" (numeric only) and change/append the description to mention
the unit (e.g., "파트별 크기 (바이트)" or "파트별 크기, 단위: 바이트"). Ensure the field remains
of type long and only the `@Schema` annotation values are adjusted.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/mapper/BackOfficeContentsMapper.java`:
- Around line 51-75: toContentsUploadResponse currently takes 10 primitive
parameters which harms maintainability; change its signature to accept a single
parameter object (e.g., UploadHelper.MediaCreateUploadResult result) plus
contentsId, or an explicit DTO, and map fields from that object inside
toContentsUploadResponse (refer to method name toContentsUploadResponse and
class ContentsUploadResponse) so callers pass the result object instead of ten
separate args; update all call sites to construct/pass the
MediaCreateUploadResult (or new param object) and map its getters
(posterObjectKey(), thumbnailObjectKey(), originObjectKey(),
masterPlaylistObjectKey(), posterUploadUrl(), thumbnailUploadUrl(),
originUploadId(), originTotalPartCount(), originPartSizeBytes()) when creating
the ContentsUploadResponse.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/shortform/dto/response/ShortFormUploadResponse.java`:
- Around line 28-35: The Swagger example for originPartSizeBytes in
ShortFormUploadResponse includes explanatory text ("16777216   <-(16mb)");
update the `@Schema` annotation for originPartSizeBytes (in class
ShortFormUploadResponse) to use a clean example value (e.g. "16777216") without
the parenthetical comment so it matches the format used in
ContentsUploadResponse.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/shortform/service/BackOfficeShortFormService.java`:
- Around line 262-288: The method completeShortFormOriginUpload is annotated
with `@Transactional`(readOnly = true) but calls
uploadHelper.completeMultipartUpload (an external write operation), which makes
the readOnly flag misleading; update the method signature by removing readOnly
(e.g., change to `@Transactional`) or remove the `@Transactional` annotation
entirely so the transactional metadata accurately reflects that the method
performs non-read-only work—adjust the annotation on
completeShortFormOriginUpload accordingly.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java`:
- Around line 266-276: The current validation throws inside a forEach
(normalizedParts.forEach ...) which causes side-effects in a functional context;
replace it with a clear imperative loop or a stream predicate to validate first
and then throw. For example, iterate over normalizedParts with a traditional
for-loop (or use normalizedParts.stream().anyMatch(...)) to detect if any
MultipartPartETag has partNumber() <= 0, blank eTag() (StringUtils.hasText), or
a duplicate partNumber using the seenPartNumbers Set, and if such a case is
found throw new BusinessException(ErrorCode.ETAG_LIST_INVALID); ensure you
reference MultipartPartETag, normalizedParts, seenPartNumbers and ErrorCode in
the updated code.
- Around line 342-363: createVideoMultipartUpload can leave an orphaned S3
multipart session if an exception happens after
s3PresignService.createMultipartUpload; wrap the block that calls
createMultipartUpload and constructs the MultipartUploadFileResult in a
try-catch, capture uploadId (ensure it's non-null) and on any exception call
s3PresignService.abortMultipartUpload(objectKey, uploadId) (or equivalent abort
method) to clean up, then rethrow the exception so callers still see the
failure.

In
`@modules/infra-s3/src/main/java/com/ott/infra/s3/service/S3PresignService.java`:
- Around line 110-123: In S3PresignService.completeMultipartUpload ensure the
incoming partETags are sorted by partNumber before building CompletedPart
objects and CompletedMultipartUpload: rename the input collection to
partETagList and the mapped collection to completedPartList for consistency,
sort partETagList by Comparator.comparingInt( MultipartPartETag::partNumber )
(ascending) before mapping to
CompletedPart.builder().partNumber(...).eTag(...).build(), and add the
java.util.Comparator import; this guarantees internal ordering regardless of
callers.
- Around line 162-166: Move the nested record MultipartPartETag out of
S3PresignService and into its own public DTO class (e.g., MultipartPartETag) in
an infra-visible package so other modules can depend on the DTO instead of the
service implementation; update S3PresignService to return/consume the new DTO
type and change the usage site (UploadHelper) to import the new DTO rather than
referencing S3PresignService.MultipartPartETag, keeping the class name and
fields identical to avoid API changes.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: f93294c6-fea1-48e3-b65f-14fa382f01f4

📥 Commits

Reviewing files that changed from the base of the PR and between 8aabc55 and eeee19e.

📒 Files selected for processing (32)
  • .github/workflows/deploy-ai.yml
  • .github/workflows/deploy-ec2-docker.yml
  • .github/workflows/deploy-monitoring.yml
  • apps/api-admin/src/main/java/com/ott/api_admin/content/controller/BackOfficeContentsApi.java
  • apps/api-admin/src/main/java/com/ott/api_admin/content/controller/BackOfficeContentsController.java
  • apps/api-admin/src/main/java/com/ott/api_admin/content/dto/request/ContentsUploadRequest.java
  • apps/api-admin/src/main/java/com/ott/api_admin/content/dto/response/ContentsUploadResponse.java
  • apps/api-admin/src/main/java/com/ott/api_admin/content/mapper/BackOfficeContentsMapper.java
  • apps/api-admin/src/main/java/com/ott/api_admin/content/service/BackOfficeContentsService.java
  • apps/api-admin/src/main/java/com/ott/api_admin/shortform/controller/BackOfficeShortFormApi.java
  • apps/api-admin/src/main/java/com/ott/api_admin/shortform/controller/BackOfficeShortFormController.java
  • apps/api-admin/src/main/java/com/ott/api_admin/shortform/dto/request/ShortFormUploadRequest.java
  • apps/api-admin/src/main/java/com/ott/api_admin/shortform/dto/response/ShortFormUploadResponse.java
  • apps/api-admin/src/main/java/com/ott/api_admin/shortform/mapper/BackOfficeShortFormMapper.java
  • apps/api-admin/src/main/java/com/ott/api_admin/shortform/service/BackOfficeShortFormService.java
  • apps/api-admin/src/main/java/com/ott/api_admin/upload/dto/request/MultipartUploadCompleteRequest.java
  • apps/api-admin/src/main/java/com/ott/api_admin/upload/dto/response/MultipartUploadPartUrlResponse.java
  • apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java
  • apps/monitoring/docker-compose.yml
  • apps/monitoring/grafana/provisioning/dashboards/json/New dashboard-1772584885701.json.disabled
  • apps/monitoring/grafana/provisioning/dashboards/json/jvm-micrometer-4701.json
  • apps/monitoring/grafana/provisioning/dashboards/json/loki-logs-15324.json
  • apps/monitoring/grafana/provisioning/dashboards/json/node-exporter-full-1860.json
  • apps/monitoring/grafana/provisioning/dashboards/json/rabbitmq-overview-10991.json
  • apps/monitoring/grafana/provisioning/dashboards/json/spring-boot-3x-20727.json
  • apps/monitoring/prometheus/prometheus.prod.yml
  • apps/monitoring/prometheus/prometheus.prod.yml.tpl
  • apps/monitoring/prometheus/prometheus.yml
  • docker-compose.yml
  • modules/common-web/src/main/java/com/ott/common/web/exception/ErrorCode.java
  • modules/infra-s3/src/main/java/com/ott/infra/s3/config/S3PresignerConfig.java
  • modules/infra-s3/src/main/java/com/ott/infra/s3/service/S3PresignService.java

Comment on lines +139 to +145
ResponseEntity<SuccessResponse<Void>> completeContentsUpload(
@Parameter(description = "대상 콘텐츠 ID", required = true, example = "1")
@PathVariable("contentsId") Long contentsId,

@Parameter(description = "멀티파트 업로드 완료 요청 바디", required = true)
@Valid @RequestBody MultipartUploadCompleteRequest request
);


⚠️ Potential issue | 🟠 Major

Authentication parameter missing from completeContentsUpload

Compared with BackOfficeShortFormApi.completeShortFormUpload (lines 160-167), this endpoint is missing the Authentication authentication parameter. The service layer needs the Authentication object to validate EDITOR permissions.

Per the coding guidelines, files under apps/api-admin/** must enforce back-office permission and role constraints, and missing route protection must be flagged as P0.

🔧 Suggested fix
     ResponseEntity<SuccessResponse<Void>> completeContentsUpload(
             `@Parameter`(description = "대상 콘텐츠 ID", required = true, example = "1")
             `@PathVariable`("contentsId") Long contentsId,

             `@Parameter`(description = "멀티파트 업로드 완료 요청 바디", required = true)
-            `@Valid` `@RequestBody` MultipartUploadCompleteRequest request
+            `@Valid` `@RequestBody` MultipartUploadCompleteRequest request,
+            Authentication authentication
     );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/controller/BackOfficeContentsApi.java`
around lines 139 - 145, The completeContentsUpload controller method in
BackOfficeContentsApi is missing the Authentication authentication parameter
required for back-office role checks; add an Authentication authentication
parameter to the completeContentsUpload signature (matching how
BackOfficeShortFormApi.completeShortFormUpload is declared) so the service layer
can validate EDITOR permissions, and update any implementing controller class to
accept and pass this Authentication through to the service layer/authorization
checks.

Comment on lines +165 to +180
ResponseEntity<SuccessResponse<PageResponse<MultipartUploadPartUrlResponse>>> getContentsUploadPartUrls(
@Parameter(description = "대상 콘텐츠 ID", required = true, example = "1")
@PathVariable("contentsId") Long contentsId,

@Parameter(description = "S3 object key", required = true, example = "contents/1/origin/video.mp4")
@RequestParam("objectKey") String objectKey,

@Parameter(description = "S3 multipart upload ID", required = true)
@RequestParam("uploadId") String uploadId,

@Parameter(description = "페이지 번호(0부터 시작)", required = true, example = "0")
@RequestParam(value = "page", defaultValue = "0") Integer page,

@Parameter(description = "페이지 크기", required = true, example = "100")
@RequestParam(value = "size", defaultValue = "100") Integer size
);


⚠️ Potential issue | 🟠 Major

Authentication parameter missing from getContentsUploadPartUrls

Likewise, unlike BackOfficeShortFormApi.getShortFormUploadPartUrls (lines 187-203), this endpoint is missing the Authentication authentication parameter, so the permission check that prevents an EDITOR from accessing another user's contents cannot be performed.

🔧 Suggested fix
     ResponseEntity<SuccessResponse<PageResponse<MultipartUploadPartUrlResponse>>> getContentsUploadPartUrls(
             `@Parameter`(description = "대상 콘텐츠 ID", required = true, example = "1")
             `@PathVariable`("contentsId") Long contentsId,

             `@Parameter`(description = "S3 object key", required = true, example = "contents/1/origin/video.mp4")
             `@RequestParam`("objectKey") String objectKey,

             `@Parameter`(description = "S3 multipart upload ID", required = true)
             `@RequestParam`("uploadId") String uploadId,

             `@Parameter`(description = "페이지 번호(0부터 시작)", required = true, example = "0")
             `@RequestParam`(value = "page", defaultValue = "0") Integer page,

             `@Parameter`(description = "페이지 크기", required = true, example = "100")
-            `@RequestParam`(value = "size", defaultValue = "100") Integer size
+            `@RequestParam`(value = "size", defaultValue = "100") Integer size,
+            Authentication authentication
     );
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
ResponseEntity<SuccessResponse<PageResponse<MultipartUploadPartUrlResponse>>> getContentsUploadPartUrls(
@Parameter(description = "대상 콘텐츠 ID", required = true, example = "1")
@PathVariable("contentsId") Long contentsId,
@Parameter(description = "S3 object key", required = true, example = "contents/1/origin/video.mp4")
@RequestParam("objectKey") String objectKey,
@Parameter(description = "S3 multipart upload ID", required = true)
@RequestParam("uploadId") String uploadId,
@Parameter(description = "페이지 번호(0부터 시작)", required = true, example = "0")
@RequestParam(value = "page", defaultValue = "0") Integer page,
@Parameter(description = "페이지 크기", required = true, example = "100")
@RequestParam(value = "size", defaultValue = "100") Integer size
);
ResponseEntity<SuccessResponse<PageResponse<MultipartUploadPartUrlResponse>>> getContentsUploadPartUrls(
`@Parameter`(description = "대상 콘텐츠 ID", required = true, example = "1")
`@PathVariable`("contentsId") Long contentsId,
`@Parameter`(description = "S3 object key", required = true, example = "contents/1/origin/video.mp4")
`@RequestParam`("objectKey") String objectKey,
`@Parameter`(description = "S3 multipart upload ID", required = true)
`@RequestParam`("uploadId") String uploadId,
`@Parameter`(description = "페이지 번호(0부터 시작)", required = true, example = "0")
`@RequestParam`(value = "page", defaultValue = "0") Integer page,
`@Parameter`(description = "페이지 크기", required = true, example = "100")
`@RequestParam`(value = "size", defaultValue = "100") Integer size,
Authentication authentication
);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/controller/BackOfficeContentsApi.java`
around lines 165 - 180, The API method
BackOfficeContentsApi.getContentsUploadPartUrls is missing the Authentication
parameter, preventing authorization checks; add an Authentication authentication
parameter to the method signature for getContentsUploadPartUrls (matching
BackOfficeShortFormApi.getShortFormUploadPartUrls), update any corresponding
annotations/imports, and ensure the controller/service path that implements
getContentsUploadPartUrls accepts and uses this Authentication to perform the
same permission checks as other backoffice endpoints.

Comment on lines +77 to +92
@Override
@PostMapping("/{contentsId}/upload/complete")
public ResponseEntity<SuccessResponse<Void>> completeContentsUpload(
@PathVariable("contentsId") Long contentsId,
@Valid @RequestBody MultipartUploadCompleteRequest request
) {
backOfficeContentsService.completeContentsOriginUpload(
contentsId,
request.objectKey(),
request.uploadId(),
request.parts().stream()
.map(part -> new UploadHelper.MultipartPartETag(part.partNumber(), part.eTag()))
.toList()
);
return ResponseEntity.ok(SuccessResponse.of(null));
}


⚠️ Potential issue | 🔴 Critical

Missing EDITOR permission check - authorization bypass possible

Compared with BackOfficeShortFormController.completeShortFormUpload, this endpoint has no Authentication parameter and does not pass one to the service call.

BackOfficeShortFormService.completeShortFormOriginUpload verifies that an EDITOR can only access content they uploaded themselves, but the contents endpoint lacks that check, so an EDITOR could complete another user's contents upload.

Per the coding guidelines, missing route protection under apps/api-admin/** is P0.

🔧 Suggested fix
     `@Override`
     `@PostMapping`("/{contentsId}/upload/complete")
     public ResponseEntity<SuccessResponse<Void>> completeContentsUpload(
             `@PathVariable`("contentsId") Long contentsId,
-            `@Valid` `@RequestBody` MultipartUploadCompleteRequest request
+            `@Valid` `@RequestBody` MultipartUploadCompleteRequest request,
+            Authentication authentication
     ) {
         backOfficeContentsService.completeContentsOriginUpload(
                 contentsId,
                 request.objectKey(),
                 request.uploadId(),
                 request.parts().stream()
                         .map(part -> new UploadHelper.MultipartPartETag(part.partNumber(), part.eTag()))
-                        .toList()
+                        .toList(),
+                authentication
         );
         return ResponseEntity.ok(SuccessResponse.of(null));
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/controller/BackOfficeContentsController.java`
around lines 77 - 92, The endpoint
BackOfficeContentsController.completeContentsUpload is missing
Authentication-based authorization, allowing EDITORs to complete uploads for
other users; add an Authentication parameter to completeContentsUpload, extract
the caller's user id/roles (as done in
BackOfficeShortFormController.completeShortFormUpload), and either pass that
identity to BackOfficeContentsService.completeContentsOriginUpload or perform an
ownership check before calling the service so the service can enforce the same
OWNER/EDITOR check (matching logic in
BackOfficeShortFormService.completeShortFormOriginUpload) to ensure only the
uploader/authorized editor may complete the upload.

Comment on lines +93 to +105
@Override
@GetMapping("/{contentsId}/upload/parts")
public ResponseEntity<SuccessResponse<PageResponse<MultipartUploadPartUrlResponse>>> getContentsUploadPartUrls(
@PathVariable("contentsId") Long contentsId,
@RequestParam("objectKey") String objectKey,
@RequestParam("uploadId") String uploadId,
@RequestParam(value = "page", defaultValue = "0") Integer page,
@RequestParam(value = "size", defaultValue = "100") Integer size
) {
return ResponseEntity.ok(
SuccessResponse.of(backOfficeContentsService.getContentsOriginUploadPartUrls(contentsId, objectKey, uploadId, page, size))
);
}


⚠️ Potential issue | 🔴 Critical

The same permission check is missing from getContentsUploadPartUrls

This endpoint also lacks the Authentication parameter, so the EDITOR permission check cannot be performed. The service layer needs the authentication information to verify uploader ownership.

🔧 Suggested fix
     `@Override`
     `@GetMapping`("/{contentsId}/upload/parts")
     public ResponseEntity<SuccessResponse<PageResponse<MultipartUploadPartUrlResponse>>> getContentsUploadPartUrls(
             `@PathVariable`("contentsId") Long contentsId,
             `@RequestParam`("objectKey") String objectKey,
             `@RequestParam`("uploadId") String uploadId,
             `@RequestParam`(value = "page", defaultValue = "0") Integer page,
-            `@RequestParam`(value = "size", defaultValue = "100") Integer size
+            `@RequestParam`(value = "size", defaultValue = "100") Integer size,
+            Authentication authentication
     ) {
         return ResponseEntity.ok(
-                SuccessResponse.of(backOfficeContentsService.getContentsOriginUploadPartUrls(contentsId, objectKey, uploadId, page, size))
+                SuccessResponse.of(backOfficeContentsService.getContentsOriginUploadPartUrls(contentsId, objectKey, uploadId, page, size, authentication))
         );
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/controller/BackOfficeContentsController.java`
around lines 93 - 105, The getContentsUploadPartUrls endpoint is missing the
Authentication parameter so EDITOR permission and uploader ownership cannot be
verified; update BackOfficeContentsController.getContentsUploadPartUrls to
accept an Authentication (or `@AuthenticationPrincipal`) parameter, perform the
same EDITOR/ownership check as other endpoints (or pass the Authentication
through) before calling
backOfficeContentsService.getContentsOriginUploadPartUrls, and if you change the
service signature ensure
backOfficeContentsService.getContentsOriginUploadPartUrls accepts the
Authentication (or a validated user id) so the service can verify uploader
ownership.

Comment on lines +229 to 236
int totalPartCount = uploadHelper.getMultipartPartCount(contents.getVideoSize());
PageResponse<UploadHelper.MultipartUploadPartUrl> partUrlPage = uploadHelper.getMultipartPartUrls(
objectKey,
uploadId,
totalPartCount,
page,
size
);


⚠️ Potential issue | 🟡 Minor

Potential mismatch from recalculating the part count

totalPartCount is recomputed on every call via uploadHelper.getMultipartPartCount(contents.getVideoSize()). If the server settings (multipartDefaultPartSizeBytes, multipartMaxParts) change between upload creation and the part-URL request, the computed part count can differ.

Consider persisting originTotalPartCount on the Contents entity, or having the client include the value it received in the creation response in its request.
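
A rough sketch of the persisted-count option, assuming Contents gains an originTotalPartCount column with a matching accessor (the field and accessor names are hypothetical):

// Hypothetical excerpt of BackOfficeContentsService: reuse the part count persisted
// when the multipart upload was created instead of recomputing it on each call.
private PageResponse<UploadHelper.MultipartUploadPartUrl> loadPartUrls(
        Contents contents, String objectKey, String uploadId, Integer page, Integer size) {
    int totalPartCount = contents.getOriginTotalPartCount();   // assumed persisted field + accessor
    return uploadHelper.getMultipartPartUrls(objectKey, uploadId, totalPartCount, page, size);
}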

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/service/BackOfficeContentsService.java`
around lines 229 - 236, The code recalculates totalPartCount via
uploadHelper.getMultipartPartCount(contents.getVideoSize()) when serving part
URLs, which can diverge if server multipart settings changed; modify
BackOfficeContentsService to use a stable source: read and use a persisted
originTotalPartCount from the Contents entity (e.g., add/get
originTotalPartCount on Contents) or accept a client-supplied totalPartCount
from the upload creation response and pass it into
uploadHelper.getMultipartPartUrls, and then call
uploadHelper.getMultipartPartUrls(objectKey, uploadId, originTotalPartCount,
page, size) instead of recalculating to avoid mismatch.
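
A minimal sketch of the persisted-count option, assuming originTotalPartCount is stored on the Contents entity at creation time (the getter name is hypothetical):

// Sketch: use the part count persisted when the upload was created
// instead of recomputing it from videoSize and the current server settings.
int expectedPartCount = contents.getOriginTotalPartCount(); // hypothetical accessor

PageResponse<UploadHelper.MultipartUploadPartUrl> partUrlPage = uploadHelper.getMultipartPartUrls(
        objectKey,
        uploadId,
        expectedPartCount,
        page,
        size
);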

Comment thread apps/monitoring/docker-compose.yml
Comment on lines +136 to +146
public void abortMultipartUpload(String objectKey, String uploadId) {
try {
s3Client.abortMultipartUpload(
AbortMultipartUploadRequest.builder()
.bucket(bucket)
.key(objectKey)
.uploadId(uploadId)
.build()
);
} catch (SdkException ex) {
throw new IllegalStateException("Failed to abort multipart upload.", ex);

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -name "S3PresignService.java" -type f

Repository: OpenTheTaste/backend

Length of output: 143


🏁 Script executed:

find . -name "UploadHelper.java" -type f

Repository: OpenTheTaste/backend

Length of output: 145


🏁 Script executed:

wc -l ./modules/infra-s3/src/main/java/com/ott/infra/s3/service/S3PresignService.java

Repository: OpenTheTaste/backend

Length of output: 147


🏁 Script executed:

sed -n '130,155p' ./modules/infra-s3/src/main/java/com/ott/infra/s3/service/S3PresignService.java

Repository: OpenTheTaste/backend

Length of output: 1041


🏁 Script executed:

wc -l ./apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java

Repository: OpenTheTaste/backend

Length of output: 149


🏁 Script executed:

sed -n '285,305p' ./apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java

Repository: OpenTheTaste/backend

Length of output: 694


🏁 Script executed:

sed -n '1,50p' ./modules/infra-s3/src/main/java/com/ott/infra/s3/service/S3PresignService.java

Repository: OpenTheTaste/backend

Length of output: 2306


🏁 Script executed:

sed -n '260,310p' ./apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java

Repository: OpenTheTaste/backend

Length of output: 1999


🏁 Script executed:

sed -n '250,330p' ./apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java

Repository: OpenTheTaste/backend

Length of output: 3002


🏁 Script executed:

grep -n "abortMultipartUpload" ./apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java

Repository: OpenTheTaste/backend

Length of output: 213


🏁 Script executed:

cat -n ./modules/infra-s3/src/main/java/com/ott/infra/s3/service/S3PresignService.java | grep -i "logger\|log\|import.*log"

Repository: OpenTheTaste/backend

Length of output: 134


🏁 Script executed:

grep -n "import.*log\|import.*Logger\|private.*log" ./modules/infra-s3/src/main/java/com/ott/infra/s3/service/S3PresignService.java

Repository: OpenTheTaste/backend

Length of output: 46


🏁 Script executed:

sed -n '1,80p' ./apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java

Repository: OpenTheTaste/backend

Length of output: 3163


🏁 Script executed:

rg "abortMultipartUpload" --context 10 ./apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java

Repository: OpenTheTaste/backend

Length of output: 828


🏁 Script executed:

rg "NoSuchUploadException" --type java

Repository: OpenTheTaste/backend

Length of output: 46


🏁 Script executed:

python3 << 'EOF'
# Check AWS SDK v2 for NoSuchUploadException availability
import json
response = """
The NoSuchUploadException is available in software.amazon.awssdk.services.s3.model package.
It's part of the AWS SDK for Java 2.x.
"""
print(response)
EOF

Repository: OpenTheTaste/backend

Length of output: 196


🏁 Script executed:

grep -n "try\|catch\|s3PresignService.abortMultipartUpload" ./apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java | head -50

Repository: OpenTheTaste/backend

Length of output: 271


🏁 Script executed:

rg "abortMultipartUpload" --context 15 ./apps/api-admin/

Repository: OpenTheTaste/backend

Length of output: 12997


🏁 Script executed:

grep -rn "catch\|try" ./apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java | grep -A 10 -B 5 "294"

Repository: OpenTheTaste/backend

Length of output: 46


🏁 Script executed:

sed -n '150,250p' ./apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java

Repository: OpenTheTaste/backend

Length of output: 4164


🏁 Script executed:

sed -n '100,120p' ./apps/api-admin/src/main/java/com/ott/api_admin/shortform/service/BackOfficeShortFormService.java

Repository: OpenTheTaste/backend

Length of output: 1038


🏁 Script executed:

rg "uploadHelper.abortMultipartUpload" -B 5 -A 10 ./apps/api-admin/src/main/java/com/ott/api_admin/shortform/service/BackOfficeShortFormService.java

Repository: OpenTheTaste/backend

Length of output: 879


🏁 Script executed:

sed -n '1,30p' ./apps/api-admin/src/main/java/com/ott/api_admin/shortform/service/BackOfficeShortFormService.java

Repository: OpenTheTaste/backend

Length of output: 1722


Abort-failure handling can mask the original upload error

Treat NoSuchUploadException as a no-op and only log other SdkExceptions. Right now a failed abort throws IllegalStateException, which can overwrite the original cause of the upload failure. When this method is called from the error-recovery paths of service-layer classes such as BackOfficeShortFormService and BackOfficeContentsService, the original error risks being hidden.

Example fix
+        } catch (NoSuchUploadException ex) {
+            // Upload session already aborted or expired - no-op
+            return;
         } catch (SdkException ex) {
-            throw new IllegalStateException("Failed to abort multipart upload.", ex);
+            log.warn("Failed to abort multipart upload. objectKey={}, uploadId={}", objectKey, uploadId, ex);
         }

This requires importing software.amazon.awssdk.services.s3.model.NoSuchUploadException and a logger.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@modules/infra-s3/src/main/java/com/ott/infra/s3/service/S3PresignService.java`
around lines 136 - 146, Modify abortMultipartUpload to catch
NoSuchUploadException from software.amazon.awssdk.services.s3.model and treat it
as a no-op, and change the existing catch for SdkException to log the error
(using the class logger) instead of throwing an IllegalStateException so the
original caller's error path isn't obscured; locate the abortMultipartUpload
method that calls s3Client.abortMultipartUpload(...), add an import for
NoSuchUploadException and the logger, add a specific catch
(NoSuchUploadException ex) { /* no-op */ } followed by catch (SdkException ex) {
logger.warn("Failed to abort multipart upload for key={}, uploadId={}",
objectKey, uploadId, ex); }.
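
A minimal sketch of the suggested shape, assuming an SLF4J logger (for example via Lombok's `@Slf4j`) is available on S3PresignService; NoSuchUploadException does live in software.amazon.awssdk.services.s3.model:

import software.amazon.awssdk.core.exception.SdkException;
import software.amazon.awssdk.services.s3.model.AbortMultipartUploadRequest;
import software.amazon.awssdk.services.s3.model.NoSuchUploadException;

public void abortMultipartUpload(String objectKey, String uploadId) {
    try {
        s3Client.abortMultipartUpload(
                AbortMultipartUploadRequest.builder()
                        .bucket(bucket)
                        .key(objectKey)
                        .uploadId(uploadId)
                        .build()
        );
    } catch (NoSuchUploadException ex) {
        // Upload session already aborted or expired - nothing left to clean up.
    } catch (SdkException ex) {
        // Log instead of throwing so a failed abort cannot mask the original upload error.
        log.warn("Failed to abort multipart upload. objectKey={}, uploadId={}", objectKey, uploadId, ex);
    }
}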


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 8

♻️ Duplicate comments (1)
apps/api-admin/src/main/java/com/ott/api_admin/content/service/BackOfficeContentsService.java (1)

198-209: ⚠️ Potential issue | 🟠 Major

The origin upload can be completed even with parts missing

The expected part count is never compared here before handing off to completeMultipartUpload. UploadHelper.completeMultipartUpload only checks for duplicate/empty eTags, so a list with missing parts can pass. In that case S3 assembles only the supplied parts and a corrupted origin video is finalized. If possible, persist originTotalPartCount at creation time and validate against it.

🔧 Suggested fix
+        int expectedPartCount = uploadHelper.getMultipartPartCount(contents.getVideoSize());
+        if (parts.size() != expectedPartCount
+                || parts.stream().anyMatch(part -> part.partNumber() < 1 || part.partNumber() > expectedPartCount)) {
+            throw new BusinessException(ErrorCode.ETAG_LIST_INVALID);
+        }
+
         uploadHelper.completeMultipartUpload(objectKey, uploadId, parts);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/service/BackOfficeContentsService.java`
around lines 198 - 209, The method completeContentsOriginUpload currently
forwards whatever parts are provided to uploadHelper.completeMultipartUpload
without ensuring no parts are missing; update completeContentsOriginUpload to
retrieve the expected part count from the Contents entity (e.g.,
contents.getOriginTotalPartCount() or the stored originTotalPartCount field),
compare it against parts.size(), and if they differ throw a BusinessException
with an appropriate error code (e.g.,
ErrorCode.CONTENTS_ORIGIN_PART_COUNT_MISMATCH) before calling
uploadHelper.completeMultipartUpload; this ensures only a complete set of parts
is accepted and prevents assembling a corrupted origin object.
🧹 Nitpick comments (11)
apps/monitoring/prometheus/prometheus.prod.yml (1)

43-55: Placeholders are committed in a non-template file

prometheus.prod.yml still contains the __NODE_EXPORTER_TARGET__ and __MACHINE_NODE_EXPORTER_TARGET__ placeholders. Since the deploy-monitoring.yml workflow generates this file from the .tpl, the committed prometheus.prod.yml is either unused or a source of confusion.

Recommend versioning only the .tpl file and adding prometheus.prod.yml to .gitignore, or removing the placeholders from the committed file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitoring/prometheus/prometheus.prod.yml` around lines 43 - 55,
prometheus.prod.yml currently contains template placeholders
__NODE_EXPORTER_TARGET__ and __MACHINE_NODE_EXPORTER_TARGET__ which are
generated from the .tpl by the deploy-monitoring.yml workflow and therefore
should not be committed as a consumer-facing file; either remove the
placeholders and commit a valid production-ready scrape target config or stop
committing prometheus.prod.yml by adding it to .gitignore and ensuring only the
source .tpl file remains under version control (verify deploy-monitoring.yml
still produces the runtime file from the .tpl).
.github/workflows/deploy-ai.yml (1)

125-135: Only a single security group is queried

SecurityGroups[0].GroupId fetches only the first security group. If an instance has multiple security groups attached, the ingress rule for the node-exporter port (9100) may not be added to the right group.

This is fine if the current infrastructure uses one security group per instance, but it will need attention as the setup grows.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/deploy-ai.yml around lines 125 - 135, The current queries
for MONITORING_SG_ID and MACHINE_SG_ID only select SecurityGroups[0].GroupId
(first SG) which misses additional attached groups; change the --query to return
all security group IDs (use
Reservations[0].Instances[0].SecurityGroups[].GroupId or join(...) to get a
space/comma list) for both MONITORING_SG_ID and MACHINE_SG_ID, then iterate over
the resulting list (or split the join) and apply the node-exporter ingress
update to each group instead of a single group so every attached SG receives the
rule.
apps/monitoring/prometheus/prometheus.yml (1)

16-16: Check whether the app and application labels are redundant

An application label with the same value as the existing app label was added. It looks intended for Grafana dashboard compatibility, but prometheus.prod.yml and prometheus.prod.yml.tpl do not add the application label, so local and production environments are inconsistent.

Also applies to: 28-28, 40-40

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitoring/prometheus/prometheus.yml` at line 16, The prometheus scrape
config has both labels "app" and "application" with the same value, but
production templates are missing "application", causing inconsistency; make the
labels consistent across environments by either removing the redundant
"application" label here or adding the same "application" label (set to the same
value as "app") to the production templates, and ensure all scrape configs use
the identical label naming convention ("app" and/or "application") so Grafana
dashboards work the same locally and in prod.
modules/infra-s3/src/main/java/com/ott/infra/s3/service/S3PresignService.java (1)

110-117: A guard on the partETags input would make failure causes clearer

Line 112 streams the list immediately, so a null input produces an NPE. Since this is a public method, a minimal fail-fast check is recommended.

Suggested patch
     public void completeMultipartUpload(String objectKey, String uploadId, List< MultipartPartETag> partETags) {
+        if (partETags == null || partETags.isEmpty()) {
+            throw new IllegalArgumentException("partETags must not be null or empty.");
+        }
         try {
             List<CompletedPart> completedParts = partETags.stream()
                     .map(part -> CompletedPart.builder()
                             .partNumber(part.partNumber())
                             .eTag(part.eTag())
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@modules/infra-s3/src/main/java/com/ott/infra/s3/service/S3PresignService.java`
around lines 110 - 117, The public method completeMultipartUpload in
S3PresignService currently streams partETags without guarding against null,
causing an NPE; add a fail-fast validation at the start of
completeMultipartUpload to check that partETags is not null (and optionally not
empty) and throw a clear IllegalArgumentException (or NullPointerException with
a descriptive message) if the input is invalid, then proceed to map partETags to
CompletedPart as before (referencing the method completeMultipartUpload,
parameter partETags, and the CompletedPart.builder() mapping to locate the
code).
apps/monitoring/grafana/provisioning/dashboards/json/jvm-micrometer-4701.json (1)

2841-2841: Scope the label-values query to a metric

label_values(application) currently looks up the application label across all metrics, which can pull in irrelevant values and add cardinality pressure. Scoping it to a metric actually used by the JVM dashboard, i.e. label_values(jvm_memory_used_bytes, application), is recommended. This is the pattern Grafana's documentation suggests, and it matters even more because some implementations such as Google Managed Prometheus may not support the metric-less form.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/monitoring/grafana/provisioning/dashboards/json/jvm-micrometer-4701.json`
at line 2841, The Prometheus label_values query currently uses a metric-less
form "label_values(application)" which can return irrelevant labels and increase
cardinality; update the query to scope it to the relevant JVM metric used on
this panel (e.g., change "label_values(application)" to
"label_values(jvm_memory_used_bytes, application)") so the label values are
constrained to that metric and compatible with implementations like Google
Managed Prometheus.
apps/monitoring/grafana/provisioning/dashboards/json/loki-logs-15324.json (1)

28-31: Better not to bring Graph (old) panels into a new dashboard as-is

Keeping a legacy graph panel dependency turns this JSON into its own migration point on the next Grafana upgrade. Converting it to timeseries panels now is the safer option, if feasible.

Also applies to: 124-124, 147-147

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitoring/grafana/provisioning/dashboards/json/loki-logs-15324.json`
around lines 28 - 31, The review flags that the panel with id "graph" and name
"Graph (old)" should be migrated from the legacy "graph" panel to a modern
"timeseries" panel to avoid future Grafana migration pain; update the dashboard
JSON entries that define the panel (look for id "graph" and name "Graph (old)"
and the similar occurrences noted) by replacing the legacy panel type with a
"timeseries" panel configuration, map the existing queries, axes, legends and
display options to their equivalents in the timeseries schema, remove
legacy-specific fields, and validate the resulting JSON in Grafana to ensure
visuals and queries still render correctly.
apps/api-admin/src/main/java/com/ott/api_admin/content/dto/response/ContentsUploadResponse.java (1)

28-35: Multipart upload metadata fields look right

Replacing originUploadUrl with the three multipart-related fields is appropriate for large-file upload support.

One improvement: the example value on Line 34 contains stray whitespace and an inline comment ("16777216 <-(16mb)"). Cleaning it up for consistency with the other fields is recommended.

💡 Clean up the example format
-        `@Schema`(type = "Long", description = "파트별 크기", example = "16777216   <-(16mb)")
+        `@Schema`(type = "Long", description = "파트별 크기(바이트)", example = "16777216")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/dto/response/ContentsUploadResponse.java`
around lines 28 - 35, The Schema example for the originPartSizeBytes field in
ContentsUploadResponse contains stray whitespace and an inline comment
("16777216   <-(16mb)"); update the `@Schema` annotation on originPartSizeBytes to
use a clean numeric example (e.g., "16777216") matching the other fields' style
so the example contains only the plain number without comments or extra spaces.
apps/api-admin/src/main/java/com/ott/api_admin/upload/dto/request/MultipartUploadCompleteRequest.java (1)

22-24: Collection variable name is missing the List suffix

Per the coding guidelines, collection variables should use the List suffix. Renaming parts to partList is recommended.

♻️ Suggested change
         `@Schema`(description = "업로드된 파트 eTag 목록")
         `@NotEmpty`
-        List<@Valid `@NotNull` PartETagRequest> parts
+        List<@Valid `@NotNull` PartETagRequest> partList

As per coding guidelines: "Collection variable names with List suffix"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/upload/dto/request/MultipartUploadCompleteRequest.java`
around lines 22 - 24, Rename the collection field parts in
MultipartUploadCompleteRequest to partList to comply with the "List" suffix
guideline; update the field declaration List<@Valid `@NotNull` PartETagRequest>
parts -> partList (preserve `@Schema`, `@NotEmpty` and other annotations), and then
update all related accessors (getParts/setParts or builder/constructor
parameters), any `@JsonProperty` or serialization names if present, and all usages
across the codebase to reference partList instead of parts to keep
API/serialization behavior unchanged.
apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java (1)

257-285: Apply the List suffix to the collection parameter name

Per the coding guidelines, renaming partETags to partETagList is recommended.

Also, the ETag validation logic (Lines 271-276) that throws from inside a forEach does work, but stream-based validation would be more explicit.

♻️ Improve the parameter name and validation logic
     public void completeMultipartUpload(
             String objectKey,
             String uploadId,
-            List<MultipartPartETag> partETags
+            List<MultipartPartETag> partETagList
     ) {
-        if (!StringUtils.hasText(objectKey) || !StringUtils.hasText(uploadId) || partETags == null || partETags.isEmpty()) {
+        if (!StringUtils.hasText(objectKey) || !StringUtils.hasText(uploadId) || partETagList == null || partETagList.isEmpty()) {
             throw new BusinessException(ErrorCode.ETAG_LIST_INVALID);
         }

-        List<MultipartPartETag> normalizedParts = partETags.stream()
+        List<MultipartPartETag> normalizedPartList = partETagList.stream()
                 .sorted(Comparator.comparingInt(MultipartPartETag::partNumber))
                 .toList();

         //ETAG List 유효성 검증
         Set<Integer> seenPartNumbers = new HashSet<>();
-        normalizedParts.forEach(part -> {
-            if (part.partNumber() <= 0 || !StringUtils.hasText(part.eTag()) || !seenPartNumbers.add(part.partNumber())) {
-                throw new BusinessException(ErrorCode.ETAG_LIST_INVALID);
-            }
-        });
+        boolean hasInvalidPart = normalizedPartList.stream()
+                .anyMatch(part -> part.partNumber() <= 0 
+                        || !StringUtils.hasText(part.eTag()) 
+                        || !seenPartNumbers.add(part.partNumber()));
+        if (hasInvalidPart) {
+            throw new BusinessException(ErrorCode.ETAG_LIST_INVALID);
+        }

As per coding guidelines: "Collection variable names with List suffix"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java`
around lines 257 - 285, Rename the collection parameter partETags to
partETagList in the method completeMultipartUpload and update all usages
(including the normalizedParts creation and the call to
s3PresignService.completeMultipartUpload); replace the in-place forEach
validation block that throws inside a lambda with explicit stream-based checks:
validate no null/empty eTag via normalizedParts.stream().anyMatch(...) and
validate positive partNumber and duplicates via
normalizedParts.stream().map(MultipartPartETag::partNumber) with a duplicate
detection (e.g., collecting to a set or using anyMatch with frequency), and
throw BusinessException(ErrorCode.ETAG_LIST_INVALID) when any check fails so the
validation is clearer and side-effect free.
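
If a fully side-effect-free variant is preferred, a sketch along these lines (using the names quoted above) would also satisfy the guideline:

// Sketch: validate part numbers and eTags without mutating a set inside the stream.
boolean hasInvalidPart = partETagList.stream()
        .anyMatch(part -> part.partNumber() <= 0 || !StringUtils.hasText(part.eTag()));

long distinctPartNumbers = partETagList.stream()
        .map(MultipartPartETag::partNumber)
        .distinct()
        .count();

if (hasInvalidPart || distinctPartNumbers != partETagList.size()) {
    throw new BusinessException(ErrorCode.ETAG_LIST_INVALID);
}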
apps/api-admin/src/main/java/com/ott/api_admin/shortform/dto/response/ShortFormUploadResponse.java (1)

28-35: Multipart metadata structure is consistent with ContentsUploadResponse

The multipart upload fields follow the same pattern as ContentsUploadResponse.

The example value on Line 34 has the same formatting issue as ContentsUploadResponse ("16777216 <-(16mb)"). Cleaning it up for consistency is recommended.

💡 Clean up the example format
-        `@Schema`(type = "Long", description = "파트별 크기", example = "16777216   <-(16mb)")
+        `@Schema`(type = "Long", description = "파트별 크기(바이트)", example = "16777216")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/shortform/dto/response/ShortFormUploadResponse.java`
around lines 28 - 35, The multipart metadata example format in
ShortFormUploadResponse is inconsistent with ContentsUploadResponse; change the
originPartSizeBytes `@Schema`(example = "...") value from the comment-laden
"16777216   <-(16mb)" to a plain numeric form (e.g., "16777216"), and likewise
align the example format of the originUploadId and originTotalPartCount fields
with ContentsUploadResponse; locate the class ShortFormUploadResponse and the
fields originPartSizeBytes, originTotalPartCount, and originUploadId and update
their `@Schema`(example=...) values.
apps/api-admin/src/main/java/com/ott/api_admin/shortform/service/BackOfficeShortFormService.java (1)

133-140: Safer to centralize the EDITOR ownership check in a helper

The same permission branch appears in four places, so a policy change could easily miss one of them. Consolidating it into a shared method such as validateEditorOwnership(Media media, Authentication authentication) reduces drift when new endpoints are added.

♻️ Refactoring example
-        Long memberId = (Long) authentication.getPrincipal();
-        boolean isEditor = authentication.getAuthorities().stream()
-                .anyMatch(authority -> Role.EDITOR.getKey().equals(authority.getAuthority()));
-        if (isEditor && !media.getUploader().getId().equals(memberId)) {
-            throw new BusinessException(ErrorCode.FORBIDDEN);
-        }
+        validateEditorOwnership(media, authentication);
private void validateEditorOwnership(Media media, Authentication authentication) {
    Long memberId = (Long) authentication.getPrincipal();
    boolean isEditor = authentication.getAuthorities().stream()
            .anyMatch(authority -> Role.EDITOR.getKey().equals(authority.getAuthority()));
    if (isEditor && !media.getUploader().getId().equals(memberId)) {
        throw new BusinessException(ErrorCode.FORBIDDEN);
    }
}

Also applies to: 274-279, 303-308, 338-343

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/shortform/service/BackOfficeShortFormService.java`
around lines 133 - 140, Extract the duplicated "EDITOR ownership" check into a
single private helper method named validateEditorOwnership(Media media,
Authentication authentication) inside BackOfficeShortFormService that performs
the existing logic (retrieve memberId from authentication, compute isEditor via
Role.EDITOR.getKey() matching authority, and throw new
BusinessException(ErrorCode.FORBIDDEN) when an editor does not own the media)
and then replace each duplicated block that currently performs this check with a
single call to validateEditorOwnership(media, authentication) (these are the
blocks currently using Media, Authentication, Role.EDITOR, BusinessException,
and ErrorCode.FORBIDDEN).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/api-admin/src/main/java/com/ott/api_admin/shortform/service/BackOfficeShortFormService.java`:
- Around line 263-287: completeShortFormOriginUpload currently forwards the
supplied parts to uploadHelper.completeMultipartUpload without checking that the
number of parts matches the original expected part count, allowing incomplete
multipart uploads to be finalized; before calling
uploadHelper.completeMultipartUpload, read the stored expected part count from
the ShortForm (e.g., shortForm.getOriginTotalPartCount() or the
originTotalPartCount field), compare it to parts.size(), and if they differ
throw a BusinessException with an appropriate ErrorCode (create
SHORTFORM_ORIGIN_PART_COUNT_MISMATCH if missing) so only requests with the full
expected part list proceed to uploadHelper.completeMultipartUpload.

In
`@apps/monitoring/grafana/provisioning/dashboards/json/jvm-micrometer-4701.json`:
- Line 2048: Replace the misspelled legendFormat value "commited" with the
correct spelling "committed" wherever it appears in the dashboard JSON; search
for the JSON key "legendFormat" (e.g., the occurrences around the current lines
that contain "commited") and update the string to "committed" so legend labels
and searchability are fixed.
- Around line 2109-2110: The dashboard row uses "repeat": "persistence_counts"
but that template variable does not exist (defined variables are application,
instance, jvm_buffer_pool, jvm_memory_pool_heap, jvm_memory_pool_nonheap), so
row repetition is ignored; fix by either changing the repeat value to one of the
existing template variables (e.g., "instance") or add a new templating entry
named "persistence_counts" to the dashboard templating section so the "repeat"
key has a matching variable.
- Around line 2-10: The dashboard declares an input "__inputs" entry named
"DS_PROMETHEUS" but every panel hardcodes datasource objects like
{"type":"prometheus","uid":"prometheus"}, causing breakage if the UID differs;
fix by updating all panel datasource references to use the templated input
variable (replace hardcoded datasource objects with
{"type":"prometheus","uid":"${DS_PROMETHEUS}"} or the Grafana variable
${DS_PROMETHEUS} as used by other provisioning JSON), or alternatively remove
the unused "__inputs" DS_PROMETHEUS declaration and add a comment documenting
the hardcoded UID dependency; search for occurrences of "datasource": and the
panels that contain uid "prometheus" to apply the consistent change.
- Around line 14-37: This dashboard uses schemaVersion 14 and deprecated panel
types ("graph" id "graph" and "singlestat" id "singlestat") that are
incompatible with Grafana 12.4.0; fix by opening the dashboard in Grafana
12.4.0, re-exporting it (which updates schemaVersion to the current export
version) and replacing the removed panel types with the modern equivalents
(create new panels in the UI to replace "graph" with a Time series/Timeseries
panel and "singlestat" with a Stat panel), then commit the newly exported JSON
to replace the existing jvm-micrometer-4701.json content so provisioning will
load a compatible dashboard.

In `@apps/monitoring/grafana/provisioning/dashboards/json/loki-logs-15324.json`:
- Around line 266-277: The Pod variable's Prometheus query currently uses
container_network_receive_bytes_total which omits pods without network traffic;
update the variable (the object with "name": "pod" and "refId":
"Prometheus-pod-Variable-Query") to use kube_pod_info{namespace=~"$namespace"}
in the label_values call so it returns all pods in the selected namespace (e.g.,
label_values(kube_pod_info{namespace=~"$namespace"}, pod)); keep includeAll,
multi and the label unchanged.

In `@apps/monitoring/grafana/provisioning/dashboards/json/New`
dashboard-1772584885701.json.disabled:
- Line 261: Several "title" JSON fields contain corrupted Korean text (e.g.
"?쒕퉬???듭떖 吏�??") producing invalid control characters and causing JSON parse
failures; fix by replacing each garbled "title" value with the correct Korean
string (restore from original source or translate to the intended label), save
the file encoded as UTF-8 (no BOM), and re-run JSON validation; target the JSON
"title" keys where the garbled sequences appear and ensure all string values are
valid UTF-8 without control characters.

In
`@modules/infra-s3/src/main/java/com/ott/infra/s3/service/S3PresignService.java`:
- Around line 89-96: Validate AWS S3's partNumber range (1~10000) at the start of
createUploadPartPresignedUrl and throw a clear exception when it is out of range:
at the entry of S3PresignService.createUploadPartPresignedUrl(String objectKey,
String uploadId, int partNumber), throw an IllegalArgumentException (or whatever
exception matches the project's conventions) with a readable message when
partNumber < 1 or partNumber > 10000, then pass the value to the existing
UploadPartRequest.builder() as before (see the sketch after this list).

---

Duplicate comments:
In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/service/BackOfficeContentsService.java`:
- Around line 198-209: The method completeContentsOriginUpload currently
forwards whatever parts are provided to uploadHelper.completeMultipartUpload
without ensuring no parts are missing; update completeContentsOriginUpload to
retrieve the expected part count from the Contents entity (e.g.,
contents.getOriginTotalPartCount() or the stored originTotalPartCount field),
compare it against parts.size(), and if they differ throw a BusinessException
with an appropriate error code (e.g.,
ErrorCode.CONTENTS_ORIGIN_PART_COUNT_MISMATCH) before calling
uploadHelper.completeMultipartUpload; this ensures only a complete set of parts
is accepted and prevents assembling a corrupted origin object.

---

Nitpick comments:
In @.github/workflows/deploy-ai.yml:
- Around line 125-135: The current queries for MONITORING_SG_ID and
MACHINE_SG_ID only select SecurityGroups[0].GroupId (first SG) which misses
additional attached groups; change the --query to return all security group IDs
(use Reservations[0].Instances[0].SecurityGroups[].GroupId or join(...) to get a
space/comma list) for both MONITORING_SG_ID and MACHINE_SG_ID, then iterate over
the resulting list (or split the join) and apply the node-exporter ingress
update to each group instead of a single group so every attached SG receives the
rule.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/dto/response/ContentsUploadResponse.java`:
- Around line 28-35: The Schema example for the originPartSizeBytes field in
ContentsUploadResponse contains stray whitespace and an inline comment
("16777216   <-(16mb)"); update the `@Schema` annotation on originPartSizeBytes to
use a clean numeric example (e.g., "16777216") matching the other fields' style
so the example contains only the plain number without comments or extra spaces.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/shortform/dto/response/ShortFormUploadResponse.java`:
- Around line 28-35: The multipart metadata example format in ShortFormUploadResponse
is inconsistent with ContentsUploadResponse; change the originPartSizeBytes
`@Schema`(example = "...") value from the comment-laden "16777216   <-(16mb)" to a
plain numeric form (e.g., "16777216"), and likewise align the example format of the
originUploadId and originTotalPartCount fields with ContentsUploadResponse; locate
the class ShortFormUploadResponse and the fields originPartSizeBytes,
originTotalPartCount, and originUploadId and update their `@Schema`(example=...)
values.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/shortform/service/BackOfficeShortFormService.java`:
- Around line 133-140: Extract the duplicated "EDITOR ownership" check into a
single private helper method named validateEditorOwnership(Media media,
Authentication authentication) inside BackOfficeShortFormService that performs
the existing logic (retrieve memberId from authentication, compute isEditor via
Role.EDITOR.getKey() matching authority, and throw new
BusinessException(ErrorCode.FORBIDDEN) when an editor does not own the media)
and then replace each duplicated block that currently performs this check with a
single call to validateEditorOwnership(media, authentication) (these are the
blocks currently using Media, Authentication, Role.EDITOR, BusinessException,
and ErrorCode.FORBIDDEN).

In
`@apps/api-admin/src/main/java/com/ott/api_admin/upload/dto/request/MultipartUploadCompleteRequest.java`:
- Around line 22-24: Rename the collection field parts in
MultipartUploadCompleteRequest to partList to comply with the "List" suffix
guideline; update the field declaration List<@Valid `@NotNull` PartETagRequest>
parts -> partList (preserve `@Schema`, `@NotEmpty` and other annotations), and then
update all related accessors (getParts/setParts or builder/constructor
parameters), any `@JsonProperty` or serialization names if present, and all usages
across the codebase to reference partList instead of parts to keep
API/serialization behavior unchanged.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java`:
- Around line 257-285: Rename the collection parameter partETags to partETagList
in the method completeMultipartUpload and update all usages (including the
normalizedParts creation and the call to
s3PresignService.completeMultipartUpload); replace the in-place forEach
validation block that throws inside a lambda with explicit stream-based checks:
validate no null/empty eTag via normalizedParts.stream().anyMatch(...) and
validate positive partNumber and duplicates via
normalizedParts.stream().map(MultipartPartETag::partNumber) with a duplicate
detection (e.g., collecting to a set or using anyMatch with frequency), and
throw BusinessException(ErrorCode.ETAG_LIST_INVALID) when any check fails so the
validation is clearer and side-effect free.

In
`@apps/monitoring/grafana/provisioning/dashboards/json/jvm-micrometer-4701.json`:
- Line 2841: The Prometheus label_values query currently uses a metric-less form
"label_values(application)" which can return irrelevant labels and increase
cardinality; update the query to scope it to the relevant JVM metric used on
this panel (e.g., change "label_values(application)" to
"label_values(jvm_memory_used_bytes, application)") so the label values are
constrained to that metric and compatible with implementations like Google
Managed Prometheus.

In `@apps/monitoring/grafana/provisioning/dashboards/json/loki-logs-15324.json`:
- Around line 28-31: The review flags that the panel with id "graph" and name
"Graph (old)" should be migrated from the legacy "graph" panel to a modern
"timeseries" panel to avoid future Grafana migration pain; update the dashboard
JSON entries that define the panel (look for id "graph" and name "Graph (old)"
and the similar occurrences noted) by replacing the legacy panel type with a
"timeseries" panel configuration, map the existing queries, axes, legends and
display options to their equivalents in the timeseries schema, remove
legacy-specific fields, and validate the resulting JSON in Grafana to ensure
visuals and queries still render correctly.

In `@apps/monitoring/prometheus/prometheus.prod.yml`:
- Around line 43-55: prometheus.prod.yml currently contains template
placeholders __NODE_EXPORTER_TARGET__ and __MACHINE_NODE_EXPORTER_TARGET__ which
are generated from the .tpl by the deploy-monitoring.yml workflow and therefore
should not be committed as a consumer-facing file; either remove the
placeholders and commit a valid production-ready scrape target config or stop
committing prometheus.prod.yml by adding it to .gitignore and ensuring only the
source .tpl file remains under version control (verify deploy-monitoring.yml
still produces the runtime file from the .tpl).

In `@apps/monitoring/prometheus/prometheus.yml`:
- Line 16: The prometheus scrape config has both labels "app" and "application"
with the same value, but production templates are missing "application", causing
inconsistency; make the labels consistent across environments by either removing
the redundant "application" label here or adding the same "application" label
(set to the same value as "app") to the production templates, and ensure all
scrape configs use the identical label naming convention ("app" and/or
"application") so Grafana dashboards work the same locally and in prod.

In
`@modules/infra-s3/src/main/java/com/ott/infra/s3/service/S3PresignService.java`:
- Around line 110-117: The public method completeMultipartUpload in
S3PresignService currently streams partETags without guarding against null,
causing an NPE; add a fail-fast validation at the start of
completeMultipartUpload to check that partETags is not null (and optionally not
empty) and throw a clear IllegalArgumentException (or NullPointerException with
a descriptive message) if the input is invalid, then proceed to map partETags to
CompletedPart as before (referencing the method completeMultipartUpload,
parameter partETags, and the CompletedPart.builder() mapping to locate the
code).
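
As a concrete illustration of the partNumber guard requested in the createUploadPartPresignedUrl item above, a minimal sketch under the assumption that a plain IllegalArgumentException fits the project's conventions; the 1-10000 limit is AWS S3's documented range for multipart part numbers:

// Hypothetical guard to call at the top of createUploadPartPresignedUrl.
static void requireValidPartNumber(int partNumber) {
    // S3 multipart uploads only accept part numbers in the 1..10000 range.
    if (partNumber < 1 || partNumber > 10_000) {
        throw new IllegalArgumentException(
                "partNumber must be between 1 and 10000 but was " + partNumber);
    }
}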

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: b901e473-fd0a-48ef-8ea5-c4ce591dc224

📥 Commits

Reviewing files that changed from the base of the PR and between 8aabc55 and eeee19e.

📒 Files selected for processing (32)
  • .github/workflows/deploy-ai.yml
  • .github/workflows/deploy-ec2-docker.yml
  • .github/workflows/deploy-monitoring.yml
  • apps/api-admin/src/main/java/com/ott/api_admin/content/controller/BackOfficeContentsApi.java
  • apps/api-admin/src/main/java/com/ott/api_admin/content/controller/BackOfficeContentsController.java
  • apps/api-admin/src/main/java/com/ott/api_admin/content/dto/request/ContentsUploadRequest.java
  • apps/api-admin/src/main/java/com/ott/api_admin/content/dto/response/ContentsUploadResponse.java
  • apps/api-admin/src/main/java/com/ott/api_admin/content/mapper/BackOfficeContentsMapper.java
  • apps/api-admin/src/main/java/com/ott/api_admin/content/service/BackOfficeContentsService.java
  • apps/api-admin/src/main/java/com/ott/api_admin/shortform/controller/BackOfficeShortFormApi.java
  • apps/api-admin/src/main/java/com/ott/api_admin/shortform/controller/BackOfficeShortFormController.java
  • apps/api-admin/src/main/java/com/ott/api_admin/shortform/dto/request/ShortFormUploadRequest.java
  • apps/api-admin/src/main/java/com/ott/api_admin/shortform/dto/response/ShortFormUploadResponse.java
  • apps/api-admin/src/main/java/com/ott/api_admin/shortform/mapper/BackOfficeShortFormMapper.java
  • apps/api-admin/src/main/java/com/ott/api_admin/shortform/service/BackOfficeShortFormService.java
  • apps/api-admin/src/main/java/com/ott/api_admin/upload/dto/request/MultipartUploadCompleteRequest.java
  • apps/api-admin/src/main/java/com/ott/api_admin/upload/dto/response/MultipartUploadPartUrlResponse.java
  • apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java
  • apps/monitoring/docker-compose.yml
  • apps/monitoring/grafana/provisioning/dashboards/json/New dashboard-1772584885701.json.disabled
  • apps/monitoring/grafana/provisioning/dashboards/json/jvm-micrometer-4701.json
  • apps/monitoring/grafana/provisioning/dashboards/json/loki-logs-15324.json
  • apps/monitoring/grafana/provisioning/dashboards/json/node-exporter-full-1860.json
  • apps/monitoring/grafana/provisioning/dashboards/json/rabbitmq-overview-10991.json
  • apps/monitoring/grafana/provisioning/dashboards/json/spring-boot-3x-20727.json
  • apps/monitoring/prometheus/prometheus.prod.yml
  • apps/monitoring/prometheus/prometheus.prod.yml.tpl
  • apps/monitoring/prometheus/prometheus.yml
  • docker-compose.yml
  • modules/common-web/src/main/java/com/ott/common/web/exception/ErrorCode.java
  • modules/infra-s3/src/main/java/com/ott/infra/s3/config/S3PresignerConfig.java
  • modules/infra-s3/src/main/java/com/ott/infra/s3/service/S3PresignService.java

Comment on lines +263 to +287
public void completeShortFormOriginUpload(
Long shortFormId,
String objectKey,
String uploadId,
List<UploadHelper.MultipartPartETag> parts,
Authentication authentication
) {
ShortForm shortForm = shortFormRepository.findWithMediaAndUploaderByShortFormId(shortFormId)
.orElseThrow(() -> new BusinessException(ErrorCode.SHORT_FORM_NOT_FOUND));

Media media = shortForm.getMedia();
Long memberId = (Long) authentication.getPrincipal();
boolean isEditor = authentication.getAuthorities().stream()
.anyMatch(authority -> Role.EDITOR.getKey().equals(authority.getAuthority()));
if (isEditor && !media.getUploader().getId().equals(memberId)) {
throw new BusinessException(ErrorCode.FORBIDDEN);
}

Series series = null;
Contents contents = null;
uploadHelper.validateOriginObjectKey(
objectKey,
shortForm.getOriginUrl(),
ErrorCode.SHORTFORM_ORIGIN_OBJECT_KEY_MISMATCH
);

uploadHelper.completeMultipartUpload(objectKey, uploadId, parts);

⚠️ Potential issue | 🟠 Major

The short-form origin upload can also be completed with parts missing

Here too the expected part count is not validated before calling completeMultipartUpload. UploadHelper.completeMultipartUpload does not check part contiguity, so a request with missing parts can finalize a corrupted origin. Persisting originTotalPartCount at creation time and comparing against it is the safest option.

🔧 Suggested fix
+        int expectedPartCount = uploadHelper.getMultipartPartCount(shortForm.getVideoSize());
+        if (parts.size() != expectedPartCount
+                || parts.stream().anyMatch(part -> part.partNumber() < 1 || part.partNumber() > expectedPartCount)) {
+            throw new BusinessException(ErrorCode.ETAG_LIST_INVALID);
+        }
+
         uploadHelper.completeMultipartUpload(objectKey, uploadId, parts);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/shortform/service/BackOfficeShortFormService.java`
around lines 263 - 287, completeShortFormOriginUpload currently forwards the
supplied parts to uploadHelper.completeMultipartUpload without checking that the
number of parts matches the original expected part count, allowing incomplete
multipart uploads to be finalized; before calling
uploadHelper.completeMultipartUpload, read the stored expected part count from
the ShortForm (e.g., shortForm.getOriginTotalPartCount() or the
originTotalPartCount field), compare it to parts.size(), and if they differ
throw a BusinessException with an appropriate ErrorCode (create
SHORTFORM_ORIGIN_PART_COUNT_MISMATCH if missing) so only requests with the full
expected part list proceed to uploadHelper.completeMultipartUpload.

Comment on lines +2 to +10
"__inputs": [
{
"name": "DS_PROMETHEUS",
"label": "Prometheus",
"description": "",
"type": "datasource",
"pluginId": "prometheus",
"pluginName": "Prometheus"
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "[1] Dashboard datasource UIDs in use"
python - <<'PY'
import json
from pathlib import Path

p = Path("apps/monitoring/grafana/provisioning/dashboards/json/jvm-micrometer-4701.json")
d = json.loads(p.read_text())

uids = set()
def walk(x):
    if isinstance(x, dict):
        ds = x.get("datasource")
        if isinstance(ds, dict) and ds.get("uid"):
            uids.add(ds["uid"])
        for v in x.values():
            walk(v)
    elif isinstance(x, list):
        for i in x:
            walk(i)

walk(d)
print("dashboard datasource uids:", sorted(uids))
PY

echo
echo "[2] Provisioned Prometheus datasource UID candidates in repo"
rg -n --glob '*.yml' --glob '*.yaml' --glob '*.json' 'type:\s*prometheus|uid:\s*prometheus|name:\s*prometheus'

Repository: OpenTheTaste/backend

Length of output: 377


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "[1] Check if DS_PROMETHEUS input is referenced in dashboard"
python - <<'PY'
import json
from pathlib import Path

p = Path("apps/monitoring/grafana/provisioning/dashboards/json/jvm-micrometer-4701.json")
d = json.loads(p.read_text())

# Check __inputs
inputs = d.get("__inputs", [])
print("__inputs defined:")
for inp in inputs:
    print(f"  - {inp.get('name')}: {inp.get('label')}")

# Search for any reference to DS_PROMETHEUS variable
def search_references(x, path=""):
    references = []
    if isinstance(x, dict):
        for k, v in x.items():
            new_path = f"{path}.{k}" if path else k
            if isinstance(v, str) and "DS_PROMETHEUS" in v:
                references.append((new_path, v))
            references.extend(search_references(v, new_path))
    elif isinstance(x, list):
        for i, item in enumerate(x):
            new_path = f"{path}[{i}]"
            references.extend(search_references(item, new_path))
    return references

refs = search_references(d)
print(f"\nReferences to 'DS_PROMETHEUS': {len(refs)} found")
for path, value in refs[:10]:  # Show first 10
    print(f"  {path}: {value[:80]}")
PY

echo
echo "[2] Sample panel datasource definition from lines around 52, 87, 2834"
python - <<'PY'
import json
from pathlib import Path

p = Path("apps/monitoring/grafana/provisioning/dashboards/json/jvm-micrometer-4701.json")
with open(p, 'r') as f:
    lines = f.readlines()
    
# Show context around reported lines
for line_num in [52, 87, 2834]:
    if line_num <= len(lines):
        print(f"\n--- Lines {max(1, line_num-2)} to {min(len(lines), line_num+2)} ---")
        for i in range(max(0, line_num-3), min(len(lines), line_num+2)):
            print(f"{i+1:4d}: {lines[i]}", end="")
PY

Repository: OpenTheTaste/backend

Length of output: 1039


The DS_PROMETHEUS input is declared but unused, and every panel hardcodes the UID.

DS_PROMETHEUS is declared in __inputs, but that declaration is the only place it is referenced in the whole dashboard. Instead, every panel's datasource is set directly to {"type": "prometheus", "uid": "prometheus"} (Lines 52, 87, 2834, etc.).

It works now because the provisioned Prometheus UID in this environment happens to be "prometheus", but moving to an environment with a different UID would leave every panel showing No data. Unify on one of the following:

  • use the ${DS_PROMETHEUS} variable in the panels and make use of __inputs
  • or remove the unused input and document the hardcoded-UID dependency
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/monitoring/grafana/provisioning/dashboards/json/jvm-micrometer-4701.json`
around lines 2 - 10, The dashboard declares an input "__inputs" entry named
"DS_PROMETHEUS" but every panel hardcodes datasource objects like
{"type":"prometheus","uid":"prometheus"}, causing breakage if the UID differs;
fix by updating all panel datasource references to use the templated input
variable (replace hardcoded datasource objects with
{"type":"prometheus","uid":"${DS_PROMETHEUS}"} or the Grafana variable
${DS_PROMETHEUS} as used by other provisioning JSON), or alternatively remove
the unused "__inputs" DS_PROMETHEUS declaration and add a comment documenting
the hardcoded UID dependency; search for occurrences of "datasource": and the
panels that contain uid "prometheus" to apply the consistent change.

Comment on lines +14 to +37
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "4.6.5"
},
{
"type": "panel",
"id": "graph",
"name": "Graph",
"version": ""
},
{
"type": "datasource",
"id": "prometheus",
"name": "Prometheus",
"version": "1.0.0"
},
{
"type": "panel",
"id": "singlestat",
"name": "Singlestat",
"version": ""
}
],

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "[1] Dashboard metadata"
python - <<'PY'
import json
from pathlib import Path
p = Path("apps/monitoring/grafana/provisioning/dashboards/json/jvm-micrometer-4701.json")
d = json.loads(p.read_text())
print("__requires:", d.get("__requires"))
print("schemaVersion:", d.get("schemaVersion"))
PY

echo
echo "[2] Repository Grafana version candidates (image/tag/env)"
rg -n --glob '*.yml' --glob '*.yaml' --glob '*.json' 'grafana/grafana|GF_|grafanaVersion|image:\s*.*grafana'

Repository: OpenTheTaste/backend

Length of output: 1198


🏁 Script executed:

# Check if jvm-micrometer-4701.json is provisioned
echo "[1] Provisioning config references"
rg -n 'jvm-micrometer-4701|provisioning/dashboards' docker-compose.yml apps/monitoring/docker-compose.yml

echo
echo "[2] All Grafana dashboard JSONs and their schemaVersions"
fd -e json -p 'provisioning/dashboards' | while read f; do
  schema=$(python -c "import json; print(json.load(open('$f')).get('schemaVersion', 'N/A'))" 2>/dev/null)
  echo "$f: schemaVersion=$schema"
done

echo
echo "[3] Graph and Singlestat panel support in Grafana 12.4.0"
rg -i 'graph.*deprecated|singlestat.*deprecated|graph.*removed|singlestat.*removed' --type md --type txt

Repository: OpenTheTaste/backend

Length of output: 857


🏁 Script executed:

# Check Grafana provisioning directory for auto-loading behavior
echo "[1] Provisioning directory and file references"
ls -la apps/monitoring/grafana/provisioning/dashboards/json/

echo
echo "[2] Check if there's a dashboards.yml or dashboards provisioning config"
find apps/monitoring/grafana/provisioning -name "*.yml" -o -name "*.yaml" | xargs cat 2>/dev/null

echo
echo "[3] Verify panel count in jvm-micrometer-4701.json"
python - <<'PY'
import json
from pathlib import Path

with open("apps/monitoring/grafana/provisioning/dashboards/json/jvm-micrometer-4701.json") as f:
    d = json.load(f)
    
panels = d.get("panels", [])
print(f"Total panels: {len(panels)}")

panel_types = {}
for p in panels:
    ptype = p.get("type", "unknown")
    panel_types[ptype] = panel_types.get(ptype, 0) + 1

print("Panel types distribution:")
for ptype, count in sorted(panel_types.items()):
    print(f"  {ptype}: {count}")
PY

Repository: OpenTheTaste/backend

Length of output: 1440


🌐 Web query:

Grafana graph panel removed version history

💡 Result:

Grafana doesn’t keep per-panel (“Graph panel”) version history. Version history is tracked at the dashboard level (which implicitly includes panel changes).

  • To view/restore history: Dashboard → Edit → Settings (gear) → Versions tab. Grafana saves a copy on every dashboard save, and you can diff/restore versions there. [1]
  • If it feels “removed,” you may be looking for it while editing a panel: in some distributions (notably Amazon Managed Grafana), dashboard settings can’t be opened while you’re in the panel editor, so you must exit panel editing first. [2]

If by “graph panel removed” you meant Graph (old): it was an Angular visualization and has been deprecated as Grafana moved to React; the replacement is Time series. [3]

Sources: [1] [2] [3]


🌐 Web query:

Grafana singlestat panel removed version

💡 Result:

Grafana’s built-in Singlestat panel was deprecated in Grafana 7.0 and removed in Grafana 8.0 (replaced by the Stat visualization). [1][2]

If you still need it on Grafana 8.0+, you can install the standalone grafana-singlestat-panel plugin. [3]

Sources:
[1] Grafana docs – Stat visualization (notes on deprecation/removal) (grafana.com)
[2] Grafana v8.0 “What’s new” – Singlestat panel discontinued (grafana.com)
[3] Singlestat panel migration guide / plugin note for Grafana 8.0+ (deepwiki.com)


A schemaVersion 14 Grafana dashboard is not compatible with Grafana 12.4.0.

The Singlestat panel was removed in Grafana 8.0 and the Graph panel is no longer supported out of the box, so this dashboard will not render correctly on the Grafana 12.4.0 currently in use. Since jvm-micrometer-4701.json is auto-loaded by the provisioning directory configuration, it should be re-exported from the current Grafana version (12.4.0).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/monitoring/grafana/provisioning/dashboards/json/jvm-micrometer-4701.json`
around lines 14 - 37, This dashboard uses schemaVersion 14 and deprecated panel
types ("graph" id "graph" and "singlestat" id "singlestat") that are
incompatible with Grafana 12.4.0; fix by opening the dashboard in Grafana
12.4.0, re-exporting it (which updates schemaVersion to the current export
version) and replacing the removed panel types with the modern equivalents
(create new panels in the UI to replace "graph" with a Time series/Timeseries
panel and "singlestat" with a Stat panel), then commit the newly exported JSON
to replace the existing jvm-micrometer-4701.json content so provisioning will
load a compatible dashboard.

"hide": false,
"interval": "",
"intervalFactor": 2,
"legendFormat": "commited",

⚠️ Potential issue | 🟡 Minor

Please fix the legend typo (commited).

The legendFormat typos at Line 2048 and Line 2182 hurt searchability and readability.

✏️ Suggested fix
-              "legendFormat": "commited",
+              "legendFormat": "committed",
...
-              "legendFormat": "commited",
+              "legendFormat": "committed",

Also applies to: 2182-2182

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/monitoring/grafana/provisioning/dashboards/json/jvm-micrometer-4701.json`
at line 2048, Replace the misspelled legendFormat value "commited" with the
correct spelling "committed" wherever it appears in the dashboard JSON; search
for the JSON key "legendFormat" (e.g., the occurrences around the current lines
that contain "commited") and update the string to "committed" so legend labels
and searchability are fixed.

Comment on lines +2109 to +2110
"repeat": "persistence_counts",
"repeatIteration": null,

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

python - <<'PY'
import json
from pathlib import Path

p = Path("apps/monitoring/grafana/provisioning/dashboards/json/jvm-micrometer-4701.json")
d = json.loads(p.read_text())

vars_ = {v.get("name") for v in d.get("templating", {}).get("list", []) if v.get("name")}
print("templating vars:", sorted(vars_))

for i, row in enumerate(d.get("rows", []), start=1):
    rep = row.get("repeat")
    if rep and rep not in vars_:
        print(f"[INVALID] row#{i} title={row.get('title')} repeat={rep}")
PY

Repository: OpenTheTaste/backend

Length of output: 250


Row repetition does not work because the repeat variable is undefined.

repeat: "persistence_counts" on Line 2109 does not exist in the template variable list. The only defined template variables are application, instance, jvm_buffer_pool, jvm_memory_pool_heap, and jvm_memory_pool_nonheap. As a result the row-repeat setting is ignored and the dashboard does not behave as intended.

🔧 Suggested fix
-      "repeat": "persistence_counts",
+      "repeat": null,
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-      "repeat": "persistence_counts",
-      "repeatIteration": null,
+      "repeat": null,
+      "repeatIteration": null,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/monitoring/grafana/provisioning/dashboards/json/jvm-micrometer-4701.json`
around lines 2109 - 2110, The dashboard row uses "repeat": "persistence_counts"
but that template variable does not exist (defined variables are application,
instance, jvm_buffer_pool, jvm_memory_pool_heap, jvm_memory_pool_nonheap), so
row repetition is ignored; fix by either changing the repeat value to one of the
existing template variables (e.g., "instance") or add a new templating entry
named "persistence_counts" to the dashboard templating section so the "repeat"
key has a matching variable.

Comment on lines +266 to +277
"definition": "label_values(container_network_receive_bytes_total{namespace=~\"$namespace\"},pod)",
"description": null,
"error": null,
"hide": 0,
"includeAll": true,
"label": "Pod",
"multi": true,
"name": "pod",
"options": [],
"query": {
"query": "label_values(container_network_receive_bytes_total{namespace=~\"$namespace\"},pod)",
"refId": "Prometheus-pod-Variable-Query"

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, check if the file exists and get its total lines
wc -l apps/monitoring/grafana/provisioning/dashboards/json/loki-logs-15324.json

Repository: OpenTheTaste/backend

Length of output: 141


🏁 Script executed:

# Read lines around 266-277 to verify the snippet
sed -n '260,290p' apps/monitoring/grafana/provisioning/dashboards/json/loki-logs-15324.json

Repository: OpenTheTaste/backend

Length of output: 971


🏁 Script executed:

# Check line 240 area for the namespace variable definition
sed -n '230,250p' apps/monitoring/grafana/provisioning/dashboards/json/loki-logs-15324.json

Repository: OpenTheTaste/backend

Length of output: 630


🏁 Script executed:

# Search for all occurrences of kube_pod_info and container_network_receive_bytes_total in the file
echo "=== kube_pod_info occurrences ==="
grep -n "kube_pod_info" apps/monitoring/grafana/provisioning/dashboards/json/loki-logs-15324.json
echo ""
echo "=== container_network_receive_bytes_total occurrences ==="
grep -n "container_network_receive_bytes_total" apps/monitoring/grafana/provisioning/dashboards/json/loki-logs-15324.json

Repository: OpenTheTaste/backend

Length of output: 510


Making the Pod variable depend on a traffic metric leaves pods whose logs cannot be queried.

container_network_receive_bytes_total only includes pods that have generated network traffic, so pods with no traffic or short-lived pods are missing from the dropdown. For a Loki log dashboard this is critical: even when logs for such a pod exist, they cannot be filtered. Switching to kube_pod_info, which the namespace variable at Line 240 already uses, makes every pod selectable.

🔧 Suggested change
-        "definition": "label_values(container_network_receive_bytes_total{namespace=~\"$namespace\"},pod)",
+        "definition": "label_values(kube_pod_info{namespace=~\"$namespace\"},pod)",
...
-          "query": "label_values(container_network_receive_bytes_total{namespace=~\"$namespace\"},pod)",
+          "query": "label_values(kube_pod_info{namespace=~\"$namespace\"},pod)",
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"definition": "label_values(container_network_receive_bytes_total{namespace=~\"$namespace\"},pod)",
"description": null,
"error": null,
"hide": 0,
"includeAll": true,
"label": "Pod",
"multi": true,
"name": "pod",
"options": [],
"query": {
"query": "label_values(container_network_receive_bytes_total{namespace=~\"$namespace\"},pod)",
"refId": "Prometheus-pod-Variable-Query"
"definition": "label_values(kube_pod_info{namespace=~\"$namespace\"},pod)",
"description": null,
"error": null,
"hide": 0,
"includeAll": true,
"label": "Pod",
"multi": true,
"name": "pod",
"options": [],
"query": {
"query": "label_values(kube_pod_info{namespace=~\"$namespace\"},pod)",
"refId": "Prometheus-pod-Variable-Query"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitoring/grafana/provisioning/dashboards/json/loki-logs-15324.json`
around lines 266 - 277, The Pod variable's Prometheus query currently uses
container_network_receive_bytes_total which omits pods without network traffic;
update the variable (the object with "name": "pod" and "refId":
"Prometheus-pod-Variable-Query") to use kube_pod_info{namespace=~"$namespace"}
in the label_values call so it returns all pods in the selected namespace (e.g.,
label_values(kube_pod_info{namespace=~"$namespace"}, pod)); keep includeAll,
multi and the label unchanged.

"id": 6,
"panels": [],
"title": "서비스 핵심 지표",
"title": "?쒕퉬???듭떖 吏€??,

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Purpose: verify that this file is syntactically valid JSON
python - <<'PY'
import json
from pathlib import Path
import sys

p = Path("apps/monitoring/grafana/provisioning/dashboards/json/New dashboard-1772584885701.json.disabled")
text = p.read_text(encoding="utf-8")
try:
    json.loads(text)
    print("OK: valid JSON")
except Exception as e:
    print(f"INVALID_JSON: {e}")
    sys.exit(1)
PY

Repository: OpenTheTaste/backend

Length of output: 138


🏁 Script executed:

#!/bin/bash
# Check the content of line 261 and the surrounding lines
echo "=== Lines around 261 ==="
sed -n '259,263p' "apps/monitoring/grafana/provisioning/dashboards/json/New dashboard-1772584885701.json.disabled" | cat -A

echo ""
echo "=== Lines around 656 ==="
sed -n '654,658p' "apps/monitoring/grafana/provisioning/dashboards/json/New dashboard-1772584885701.json.disabled" | cat -A

echo ""
echo "=== All mentioned lines (261, 356, 476, 544, 640, 656) ==="
for line_num in 261 356 476 544 640 656; do
  echo "Line $line_num:"
  sed -n "${line_num}p" "apps/monitoring/grafana/provisioning/dashboards/json/New dashboard-1772584885701.json.disabled" | cat -A
done

Repository: OpenTheTaste/backend

Length of output: 1087


File encoding corruption: the Korean text in several title fields is garbled, so JSON parsing fails.

The title values on lines 261, 356, 476, 544, 640, and 656 contain control characters because of corrupted UTF-8 encoding, so JSON parsing fails (Invalid control character). Even though the file is .disabled, it can still cause problems when re-enabled or during validation, so the titles should be restored to the correct Korean text.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/monitoring/grafana/provisioning/dashboards/json/New dashboard-1772584885701.json.disabled`
at line 261, Several "title" JSON fields
contain corrupted Korean text (e.g. "?쒕퉬???듭떖 吏�??") producing invalid control
characters and causing JSON parse failures; fix by replacing each garbled
"title" value with the correct Korean string (restore from original source or
translate to the intended label), save the file encoded as UTF-8 (no BOM), and
re-run JSON validation; target the JSON "title" keys where the garbled sequences
appear and ensure all string values are valid UTF-8 without control characters.

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (2)
apps/api-admin/src/main/java/com/ott/api_admin/content/service/BackOfficeContentsService.java (1)

209-210: ⚠️ Potential issue | 🟠 Major

Do not recalculate the expected multipart part count; pin it to the value captured at creation time.

Recomputing uploadHelper.getMultipartPartCount(contents.getVideoSize()) at Line 209 and Line 230 can diverge from originTotalPartCount if the server's multipart settings change after the upload was created, breaking completion and part URL retrieval.

Also applies to: 230-236
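
For illustration, a minimal method-level sketch of the suggested direction, assuming the Contents entity (or an upload-session record) persists the part count captured at creation; getOriginTotalPartCount() is a hypothetical accessor, not an existing API:

// Sketch only: completion reuses the part count recorded when the upload was created,
// instead of recalculating it from videoSize with the current multipart settings.
// Assumes a persisted creation-time value; getOriginTotalPartCount() is hypothetical.
public void completeContentsOriginUpload(Long contentsId, String objectKey, String uploadId,
                                         List<UploadHelper.MultipartPartETag> parts) {
    Contents contents = contentsRepository.findById(contentsId)
            .orElseThrow(() -> new BusinessException(ErrorCode.CONTENTS_NOT_FOUND));

    uploadHelper.validateOriginObjectKey(objectKey, contents.getOriginUrl(),
            ErrorCode.CONTENTS_ORIGIN_OBJECT_KEY_MISMATCH);

    int totalPartCount = contents.getOriginTotalPartCount(); // creation-time value, not recomputed
    uploadHelper.completeMultipartUpload(objectKey, uploadId, totalPartCount, parts);
}

If the part count is not persisted today, storing it alongside originUploadId at creation time achieves the same pinning.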

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/service/BackOfficeContentsService.java`
around lines 209 - 210, The code recalculates multipart expected part count via
uploadHelper.getMultipartPartCount(contents.getVideoSize()) at completion and
when generating part URLs, which can mismatch if server multipart settings
changed; instead use the original part count recorded at upload creation (e.g.,
contents.getOriginTotalPartCount() or the stored originTotalPartCount field) and
pass that into uploadHelper.completeMultipartUpload(objectKey, uploadId,
originTotalPartCount, parts) and the corresponding presigned URL generation call
(the same place around lines 230-236) so both completion and URL retrieval use
the creation-time part count rather than recomputing from videoSize.
apps/api-admin/src/main/java/com/ott/api_admin/shortform/service/BackOfficeShortFormService.java (1)

287-289: ⚠️ Potential issue | 🟠 Major

Recalculating the expected part count from the current settings can diverge from the upload session.

At Line 287 and Line 317 the expected part count is recomputed from videoSize plus the current settings, so completion and part URL retrieval can fail if those settings no longer match the ones used at creation time. Use the originTotalPartCount from the creation response as a fixed value.

Also applies to: 317-323

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/shortform/service/BackOfficeShortFormService.java`
around lines 287 - 289, The code currently recalculates the expected part count
via uploadHelper.getMultipartPartCount(shortForm.getVideoSize()) when completing
the multipart upload (and similarly in the part-URL retrieval path around the
block that uses uploadHelper and parts), which can mismatch the original session
if settings changed; instead, use the fixed originTotalPartCount returned at
creation time (preserve and pass through the creation response field
originTotalPartCount) when calling
uploadHelper.completeMultipartUpload(objectKey, uploadId, originTotalPartCount,
parts) and when building/validating part URLs so the same originalTotalPartCount
is consistently used in BackOfficeShortFormService rather than recalculating
from shortForm.getVideoSize().
🧹 Nitpick comments (1)
apps/api-admin/src/main/java/com/ott/api_admin/content/service/BackOfficeContentsService.java (1)

187-192: Surface multipart abort failures as a traceable operational signal.

At Line 187-192, recording an abort failure only as a warn log makes it easy to miss accumulating orphan multipart uploads. Adding an error-level log, a metric, and an alert is recommended to keep this visible to operations.
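
A rough sketch of that direction (not the project's actual code), assuming a Micrometer MeterRegistry is injected into the service; the abort method name on uploadHelper and the metric name are illustrative only:

// Sketch only: escalate abort failures from warn to error and record a counter so
// orphaned multipart uploads become visible on dashboards and in alerts.
try {
    uploadHelper.abortMultipartUpload(
            mediaCreateUploadResult.originObjectKey(),
            mediaCreateUploadResult.originUploadId());
} catch (Exception abortException) {
    log.error("Multipart abort failed. objectKey={}, uploadId={}",
            mediaCreateUploadResult.originObjectKey(),
            mediaCreateUploadResult.originUploadId(),
            abortException);
    // Identifiers stay in the log rather than metric tags to avoid unbounded tag cardinality.
    meterRegistry.counter("backoffice.multipart.abort.failure").increment();
}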

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/service/BackOfficeContentsService.java`
around lines 187 - 192, The abort multipart catch in BackOfficeContentsService
currently only logs with log.warn using
mediaCreateUploadResult.originObjectKey()/originUploadId(); change this to emit
an error-level alert and a metric so orphan multipart failures are visible:
replace or augment the log.warn with log.error including the exception and
context, and add a metric/increment call (e.g., via your application's
MeterRegistry or Metrics.increment("backoffice.multipart.abort.failure"))
including tags for objectKey and uploadId; ensure the same catch block that
references mediaCreateUploadResult.originObjectKey() and originUploadId() also
triggers any configured alerting/monitoring hook so operators get notified.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/service/BackOfficeContentsService.java`:
- Around line 198-244: Add Authentication-based owner/EDITOR checks to the two
new methods completeContentsOriginUpload and getContentsOriginUploadPartUrls in
BackOfficeContentsService: retrieve the current principal/Authentication, verify
the caller has the required back-office role or is the content owner (compare
Authentication user id with contents.getOwnerId/getCreatedBy or equivalent) and
enforce EDITOR-specific restrictions (reject if caller lacks ownership/EDITOR
rights) before calling uploadHelper.validateOriginObjectKey or any upload
operations; throw the existing BusinessException/ErrorCode for unauthorized
access to match other short-form service checks.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/shortform/service/BackOfficeShortFormService.java`:
- Around line 178-185: The null-check bug: comparisons like
request.mediaType().equals(MediaType.SERIES) can NPE if mediaType is null;
change to constant-first equals checks (e.g.,
MediaType.SERIES.equals(request.mediaType()) and
MediaType.CONTENTS.equals(request.mediaType())) in BackOfficeShortFormService
where you currently call request.mediaType().equals(...), and do the same
replacement for the other occurrence around the resolveContents usage (the block
that throws BusinessException(ErrorCode.INVALID_SHORTFORM_TARGET)); keep
existing logic (fetch series via seriesRepository.findById and contents via
resolveContents) but perform the safe equals checks to avoid NPEs.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java`:
- Around line 62-64: sanitizeFileName currently calls fileName.trim()
unconditionally which causes an NPE for null input; update the method
(sanitizeFileName) to first validate fileName for null/blank and throw the
project's consistent business exception (or return a controlled error result)
instead of letting a NullPointerException propagate. Ensure the null check
happens before any calls to fileName.trim() or fileName.lastIndexOf('.') and use
the same exception type/format used elsewhere in UploadHelper or the upload flow
so error handling remains consistent.
- Around line 236-255: Add a safe upper bound for the requested page size to
prevent generating excessive presigned URLs: introduce a MAX_PAGE_SIZE constant
and validate that size is between 1 and MAX_PAGE_SIZE (instead of only >0) at
the start of the method that contains objectKey, uploadId, totalPartCount, page,
size (the method that calls buildMultipartPartUploadUrls); if size is out of
range return or throw BusinessException(ErrorCode.INVALID_INPUT) (consistent
with existing validation) or alternatively clamp size = Math.min(size,
MAX_PAGE_SIZE) before computing totalPage, startPartNumber/endPartNumber and
calling buildMultipartPartUploadUrls so the subsequent URL generation is
limited.

---

Duplicate comments:
In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/service/BackOfficeContentsService.java`:
- Around line 209-210: The code recalculates multipart expected part count via
uploadHelper.getMultipartPartCount(contents.getVideoSize()) at completion and
when generating part URLs, which can mismatch if server multipart settings
changed; instead use the original part count recorded at upload creation (e.g.,
contents.getOriginTotalPartCount() or the stored originTotalPartCount field) and
pass that into uploadHelper.completeMultipartUpload(objectKey, uploadId,
originTotalPartCount, parts) and the corresponding presigned URL generation call
(the same place around lines 230-236) so both completion and URL retrieval use
the creation-time part count rather than recomputing from videoSize.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/shortform/service/BackOfficeShortFormService.java`:
- Around line 287-289: The code currently recalculates the expected part count
via uploadHelper.getMultipartPartCount(shortForm.getVideoSize()) when completing
the multipart upload (and similarly in the part-URL retrieval path around the
block that uses uploadHelper and parts), which can mismatch the original session
if settings changed; instead, use the fixed originTotalPartCount returned at
creation time (preserve and pass through the creation response field
originTotalPartCount) when calling
uploadHelper.completeMultipartUpload(objectKey, uploadId, originTotalPartCount,
parts) and when building/validating part URLs so the same originalTotalPartCount
is consistently used in BackOfficeShortFormService rather than recalculating
from shortForm.getVideoSize().

---

Nitpick comments:
In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/service/BackOfficeContentsService.java`:
- Around line 187-192: The abort multipart catch in BackOfficeContentsService
currently only logs with log.warn using
mediaCreateUploadResult.originObjectKey()/originUploadId(); change this to emit
an error-level alert and a metric so orphan multipart failures are visible:
replace or augment the log.warn with log.error including the exception and
context, and add a metric/increment call (e.g., via your application's
MeterRegistry or Metrics.increment("backoffice.multipart.abort.failure"))
including tags for objectKey and uploadId; ensure the same catch block that
references mediaCreateUploadResult.originObjectKey() and originUploadId() also
triggers any configured alerting/monitoring hook so operators get notified.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 2fc24e06-cb93-47bd-ab93-f1d06aedd4ad

📥 Commits

Reviewing files that changed from the base of the PR and between eeee19e and d4aa2d1.

📒 Files selected for processing (3)
  • apps/api-admin/src/main/java/com/ott/api_admin/content/service/BackOfficeContentsService.java
  • apps/api-admin/src/main/java/com/ott/api_admin/shortform/service/BackOfficeShortFormService.java
  • apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java

Comment on lines +198 to 244
@Transactional(readOnly = true)
public void completeContentsOriginUpload(Long contentsId, String objectKey, String uploadId, List<UploadHelper.MultipartPartETag> parts) {
Contents contents = contentsRepository.findById(contentsId)
.orElseThrow(() -> new BusinessException(ErrorCode.CONTENTS_NOT_FOUND));

uploadHelper.validateOriginObjectKey(
objectKey,
contents.getOriginUrl(),
ErrorCode.CONTENTS_ORIGIN_OBJECT_KEY_MISMATCH
);

mediaTagLinker.linkTags(media, request.categoryId(), request.tagIdList());
int totalPartCount = uploadHelper.getMultipartPartCount(contents.getVideoSize());
uploadHelper.completeMultipartUpload(objectKey, uploadId, totalPartCount, parts);
}

@Transactional(readOnly = true)
public PageResponse<MultipartUploadPartUrlResponse> getContentsOriginUploadPartUrls(
Long contentsId,
String objectKey,
String uploadId,
Integer page,
Integer size
) {
Contents contents = contentsRepository.findById(contentsId)
.orElseThrow(() -> new BusinessException(ErrorCode.CONTENTS_NOT_FOUND));

// 임시로 해당 위치로 삽입 상태 관리 픽스 후 추후 변경 예정
eventPublisher.publishEvent(new AiTaggingRequestedEvent(media.getId(), request.description()));
uploadHelper.validateOriginObjectKey(
objectKey,
contents.getOriginUrl(),
ErrorCode.CONTENTS_ORIGIN_OBJECT_KEY_MISMATCH
);

return backOfficeContentsMapper.toContentsUploadResponse(
contentsId,
mediaCreateUploadResult.posterObjectKey(),
mediaCreateUploadResult.thumbnailObjectKey(),
mediaCreateUploadResult.originObjectKey(),
mediaCreateUploadResult.masterPlaylistObjectKey(),
mediaCreateUploadResult.posterUploadUrl(),
mediaCreateUploadResult.thumbnailUploadUrl(),
mediaCreateUploadResult.originUploadUrl()
int totalPartCount = uploadHelper.getMultipartPartCount(contents.getVideoSize());
PageResponse<UploadHelper.MultipartUploadPartUrl> partUrlPage = uploadHelper.getMultipartPartUrls(
objectKey,
uploadId,
totalPartCount,
page,
size
);

List<MultipartUploadPartUrlResponse> dataList = partUrlPage.getDataList().stream()
.map(part -> new MultipartUploadPartUrlResponse(part.partNumber(), part.uploadUrl()))
.toList();

return PageResponse.toPageResponse(partUrlPage.getPageInfo(), dataList);
}

⚠️ Potential issue | 🔴 Critical

The contents origin upload APIs have no EDITOR ownership/permission check.

The new methods at Line 198-244 perform no Authentication-based check, so weak route protection would let callers bypass the EDITOR restrictions. Aligning with the owner-verification path already used by the short-form service is the safer option.
As per coding guidelines, code under apps/api-admin/** must verify back-office role/permission constraints and prevent bypasses of EDITOR restrictions.
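
For illustration only, a sketch of an owner/EDITOR guard that both new methods could call before any upload operation; AdminPrincipal, AdminRole, getCreatedBy() and ErrorCode.FORBIDDEN are placeholder names to be replaced with whatever the short-form service already uses:

// Sketch only: reject EDITORs that do not own the contents before touching the upload.
// AdminPrincipal, AdminRole, getCreatedBy() and ErrorCode.FORBIDDEN are placeholders.
private void validateContentsUploadAccess(AdminPrincipal principal, Contents contents) {
    boolean isEditor = AdminRole.EDITOR.equals(principal.role());
    boolean isOwner = contents.getCreatedBy() != null
            && contents.getCreatedBy().equals(principal.id());

    if (isEditor && !isOwner) {
        throw new BusinessException(ErrorCode.FORBIDDEN);
    }
}

Called at the top of completeContentsOriginUpload and getContentsOriginUploadPartUrls, this would mirror the owner check the short-form service already applies.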

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/content/service/BackOfficeContentsService.java`
around lines 198 - 244, Add Authentication-based owner/EDITOR checks to the two
new methods completeContentsOriginUpload and getContentsOriginUploadPartUrls in
BackOfficeContentsService: retrieve the current principal/Authentication, verify
the caller has the required back-office role or is the content owner (compare
Authentication user id with contents.getOwnerId/getCreatedBy or equivalent) and
enforce EDITOR-specific restrictions (reject if caller lacks ownership/EDITOR
rights) before calling uploadHelper.validateOriginObjectKey or any upload
operations; throw the existing BusinessException/ErrorCode for unauthorized
access to match other short-form service checks.

Comment on lines +178 to 185
if (request.mediaType().equals(MediaType.SERIES)) {
series = seriesRepository.findById(request.originId())
.orElseThrow(() -> new BusinessException(ErrorCode.SERIES_NOT_FOUND));
} else if (request.mediaType().equals(MediaType.CONTENTS)) {
contents = resolveContents(request.originId());
} else {
throw new BusinessException(ErrorCode.INVALID_SHORTFORM_TARGET);
}

⚠️ Potential issue | 🟠 Major

If request.mediaType() is null, the NPE surfaces as a 500.

Line 178 and Line 349 call request.mediaType().equals(...), so a null input produces an NPE instead of a business exception. Switch to constant-first comparisons.

Example fix
-        if (request.mediaType().equals(MediaType.SERIES)) {
+        if (MediaType.SERIES.equals(request.mediaType())) {
             series = seriesRepository.findById(request.originId())
                     .orElseThrow(() -> new BusinessException(ErrorCode.SERIES_NOT_FOUND));
-        } else if (request.mediaType().equals(MediaType.CONTENTS)) {
+        } else if (MediaType.CONTENTS.equals(request.mediaType())) {
             contents = resolveContents(request.originId());
         } else {
             throw new BusinessException(ErrorCode.INVALID_SHORTFORM_TARGET);
         }

Also applies to: 349-356

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/shortform/service/BackOfficeShortFormService.java`
around lines 178 - 185, The null-check bug: comparisons like
request.mediaType().equals(MediaType.SERIES) can NPE if mediaType is null;
change to constant-first equals checks (e.g.,
MediaType.SERIES.equals(request.mediaType()) and
MediaType.CONTENTS.equals(request.mediaType())) in BackOfficeShortFormService
where you currently call request.mediaType().equals(...), and do the same
replacement for the other occurrence around the resolveContents usage (the block
that throws BusinessException(ErrorCode.INVALID_SHORTFORM_TARGET)); keep
existing logic (fetch series via seriesRepository.findById and contents via
resolveContents) but perform the safe equals checks to avoid NPEs.

Comment on lines 62 to 64
public String sanitizeFileName(String fileName) {
String trimmed = fileName.trim();
int extensionDelimiterIndex = trimmed.lastIndexOf('.');

⚠️ Potential issue | 🟠 Major

sanitizeFileName throws an NPE on null input.

Line 63 calls fileName.trim() directly, so a null value falls through as a 500. Handle it consistently with a business exception.

Example fix
 public String sanitizeFileName(String fileName) {
+        if (!StringUtils.hasText(fileName)) {
+            throw new BusinessException(ErrorCode.INVALID_INPUT);
+        }
         String trimmed = fileName.trim();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java`
around lines 62 - 64, sanitizeFileName currently calls fileName.trim()
unconditionally which causes an NPE for null input; update the method
(sanitizeFileName) to first validate fileName for null/blank and throw the
project's consistent business exception (or return a controlled error result)
instead of letting a NullPointerException propagate. Ensure the null check
happens before any calls to fileName.trim() or fileName.lastIndexOf('.') and use
the same exception type/format used elsewhere in UploadHelper or the upload flow
so error handling remains consistent.

Comment on lines +236 to +255
if (!StringUtils.hasText(objectKey) || !StringUtils.hasText(uploadId) || totalPartCount <= 0 || page < 0 || size <= 0) {
throw new BusinessException(ErrorCode.INVALID_INPUT);
}

int totalPage = (totalPartCount + size - 1) / size;
if (page >= totalPage) {
return PageResponse.toPageResponse(PageInfo.toPageInfo(page, totalPage, size), List.of());
}

int startPartNumber = (page * size) + 1;
int endPartNumber = Math.min(startPartNumber + size - 1, totalPartCount);

// 페이징(start - end)범위의 url 생성
List<MultipartUploadPartUrl> dataList = buildMultipartPartUploadUrls(objectKey, uploadId, startPartNumber, endPartNumber);

return PageResponse.toPageResponse(
PageInfo.toPageInfo(page, totalPage, size),
dataList
);
}

⚠️ Potential issue | 🟠 Major

The part URL lookup has no upper bound on size, so excessive presign generation is possible.

The validation at Line 236 only checks size > 0, so a request with a very large value can generate a flood of URLs, creating performance and cost risk. Adding a safe upper bound is recommended.

Example fix
+    private static final int MAX_MULTIPART_PAGE_SIZE = 200;
...
-        if (!StringUtils.hasText(objectKey) || !StringUtils.hasText(uploadId) || totalPartCount <= 0 || page < 0 || size <= 0) {
+        if (!StringUtils.hasText(objectKey)
+                || !StringUtils.hasText(uploadId)
+                || totalPartCount <= 0
+                || page < 0
+                || size <= 0
+                || size > MAX_MULTIPART_PAGE_SIZE) {
             throw new BusinessException(ErrorCode.INVALID_INPUT);
         }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if (!StringUtils.hasText(objectKey) || !StringUtils.hasText(uploadId) || totalPartCount <= 0 || page < 0 || size <= 0) {
throw new BusinessException(ErrorCode.INVALID_INPUT);
}
int totalPage = (totalPartCount + size - 1) / size;
if (page >= totalPage) {
return PageResponse.toPageResponse(PageInfo.toPageInfo(page, totalPage, size), List.of());
}
int startPartNumber = (page * size) + 1;
int endPartNumber = Math.min(startPartNumber + size - 1, totalPartCount);
// 페이징(start - end)범위의 url 생성
List<MultipartUploadPartUrl> dataList = buildMultipartPartUploadUrls(objectKey, uploadId, startPartNumber, endPartNumber);
return PageResponse.toPageResponse(
PageInfo.toPageInfo(page, totalPage, size),
dataList
);
}
private static final int MAX_MULTIPART_PAGE_SIZE = 200;
// ... other code ...
if (!StringUtils.hasText(objectKey)
|| !StringUtils.hasText(uploadId)
|| totalPartCount <= 0
|| page < 0
|| size <= 0
|| size > MAX_MULTIPART_PAGE_SIZE) {
throw new BusinessException(ErrorCode.INVALID_INPUT);
}
int totalPage = (totalPartCount + size - 1) / size;
if (page >= totalPage) {
return PageResponse.toPageResponse(PageInfo.toPageInfo(page, totalPage, size), List.of());
}
int startPartNumber = (page * size) + 1;
int endPartNumber = Math.min(startPartNumber + size - 1, totalPartCount);
// 페이징(start - end)범위의 url 생성
List<MultipartUploadPartUrl> dataList = buildMultipartPartUploadUrls(objectKey, uploadId, startPartNumber, endPartNumber);
return PageResponse.toPageResponse(
PageInfo.toPageInfo(page, totalPage, size),
dataList
);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@apps/api-admin/src/main/java/com/ott/api_admin/upload/support/UploadHelper.java`
around lines 236 - 255, Add a safe upper bound for the requested page size to
prevent generating excessive presigned URLs: introduce a MAX_PAGE_SIZE constant
and validate that size is between 1 and MAX_PAGE_SIZE (instead of only >0) at
the start of the method that contains objectKey, uploadId, totalPartCount, page,
size (the method that calls buildMultipartPartUploadUrls); if size is out of
range return or throw BusinessException(ErrorCode.INVALID_INPUT) (consistent
with existing validation) or alternatively clamp size = Math.min(size,
MAX_PAGE_SIZE) before computing totalPage, startPartNumber/endPartNumber and
calling buildMultipartPartUploadUrls so the subsequent URL generation is
limited.

@arlen02-01 arlen02-01 closed this Mar 14, 2026

Labels

chore Changes to configuration files and similar (.gitignore, .yml, etc.)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[OT-311] [CHORE]: 그라파나 탬플릿 적용 및 필요 설정 추가

1 participant