Conversation
… config
v1.3.0 default was `pro`; v1.4.0 flips to `standard` per 10-expert panel
(3 APPROVE / 6 MODIFY / 1 REJECT-subset — Christensen/Kim-Mauborgne/Taleb
unanimous on JTBD + Blue Ocean + Antifragility grounds).
Changes:
settings.json defaultProfile: "pro" → "standard"
All 3 profiles gain `stack` block: db engine, file location, containerize flag,
migration command. Engineering lead consumes these during scaffold.
All 3 profiles gain `profile_escalation` block with two-tier signal system:
- hard_require_signals: payments/PHI/PII/auth-provider → FORCE upgrade
(security-engineer CP-1 — false-assurance mitigation)
- soft_suggest_categories: compliance/multi-tenant/B2B/scale → AskUserQuestion
- min_distinct_categories: 2 (devops-architect CP-2 — category vector, not keyword count)
- confidence_threshold: 0.8 (standard→pro), 0.9 (pro→max)
Schema (pf-profile.schema.json Draft-07) extended additively:
- `stack` block optional
- `profile_escalation` block optional
All existing profiles still validate.
Next phases:
O — generate templates reflecting standard's sqlite+dockerless stack
P — scripts/recommend-profile.sh implements the categorical scoring
Q — pre-flight hook + decision ledger for escalation
R — CI tests + docs + LESSON 0.10
Panel references:
security-engineer: CP-1 hard-require signals
devops-architect: CP-2 category vector + CP-1 additive upgrade
backend-architect: CP-1 schema-lint (enforced in Phase O)
requirements-analyst: scope fence — keyword MVP only, no ML
root-cause-analyst: reuse existing `escalation` config rather than new subsystem
Christensen / Drucker / Godin / Kim-Mauborgne / Collins / Taleb:
business framing — "9/143 engaged, pro wakes 45, max wakes 143" messaging
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…ss MVP)
New templates in assets/ consumed by Engineering lead when profile=standard:
prisma.schema.standard.template
- provider="sqlite", url=env("DATABASE_URL") — rewritten at scaffold to
`file:~/.preview-forge/<project>/dev.db` (OUTSIDE repo tree per
security-engineer CP-2). Model uses String, NOT enum, NOT @db.JsonB
so schema is Postgres-portable on graduation.
gitignore.standard.template
- *.db, *.db-wal, *.db-shm, *.sqlite*, prisma/*.db, .env variants
- Defense-in-depth even though DB file lives outside repo — blocks
misplaced dev copies or WAL sidecar leaks.
README.standard.template
- "⚠ DEV-ONLY SCAFFOLD" banner up top (frontend-architect concern:
better-sqlite3 sync blocks SSR on Vercel/Netlify — must be loud).
- "Local run (30 seconds)" section — npm install + db:push + dev.
- "Graduation path" section documents `bash scripts/graduate.sh pro`.
graduate.sh.template (devops-architect CP-1 — additive, not regen)
- Only writes NEW artifacts: Dockerfile, docker-compose.yml, .dockerignore,
.github/workflows/deploy.yml (for max). Never touches existing
package.json, app code, or schema models.
- Prisma datasource swap: provider "sqlite" → "postgresql".
- Runs schema-lint FIRST — aborts if non-portable features found.
scripts/standard-schema-lint.py (backend-architect CP-1)
- Detects non-portable Prisma features: enum blocks, @db.JsonB,
raw SQL with Postgres-specific casts.
- Exit 2 with line:number + fix suggestion on violation.
- Exit 0 + ✓ confirmation on portable schema.
- Test fixtures pass: portable schema exits 0, unportable exits 2
with enum + JsonB violations reported at correct line numbers.
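The lint contract above (exit 0 on a portable schema, exit 2 with line:number + fix suggestion on violation) can be sketched in Python. The regexes and fix texts here are illustrative assumptions, not the actual standard-schema-lint.py:

```python
import re

# Minimal sketch of the non-portable-feature detector described above.
# Patterns and fix suggestions are illustrative, not the real script's.
VIOLATIONS = [
    (re.compile(r"^\s*enum\s+\w+\s*\{"), "enum block: use String + app-level validation"),
    (re.compile(r"@db\.JsonB"), "@db.JsonB: use String with JSON serialization"),
]

def lint_schema(text: str) -> list[tuple[int, str]]:
    """Return (line_number, fix_suggestion) for each non-portable feature."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern, fix in VIOLATIONS:
            if pattern.search(line):
                hits.append((lineno, fix))
    return hits

# Exit-code contract from the description: 0 = portable, 2 = violations found.
def exit_code(text: str) -> int:
    return 2 if lint_schema(text) else 0
```

A portable schema yields an empty hit list; violations are reported per line, matching the "correct line numbers" assertion in the fixtures.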
No hook wired yet — Phase P/Q will invoke schema-lint post-scaffold via
the Engineering lead's Bash step. CI test added in Phase R.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…ring)
Pre-flight profile recommender consumed by M1 Run Supervisor. Scans
idea.json for enterprise signals, emits JSON with action=hard-require|ask|hint|none.
Two-tier signal system (security-engineer CP-1):
HARD_REQUIRE (forces upgrade, no user dismiss):
payments stripe, pci, subscription, 결제, 구독
phi_healthcare hipaa, phi, ehr, patient record, 의료, 환자
pii_storage personally identifiable, 주민등록, 개인정보 저장
auth_provider saml provider, oidc provider, identity provider
SOFT_SUGGEST (AskUserQuestion, user may decline):
compliance soc2, iso27001, audit log, 감사로그
multi_tenant multi-tenant, tenancy, 멀티테넌트
enterprise_b2b enterprise sso, b2b saas, 엔터프라이즈
scale high-volume, realtime streaming, 대용량
Decision logic (devops-architect CP-2 — category vector, not keyword count):
any HARD hit → action=hard-require
distinct_categories >= min + score >= threshold → action=ask
distinct_categories >= min + score < threshold → action=hint
else → action=none
Score = (hard × 0.5) + (soft × 0.2), capped at 1.0.
Policy (min_distinct_categories, confidence_threshold, upgrade_to) comes
from current profile's .profile_escalation block, so each profile tunes
its own escalation curve (standard→pro stricter than pro→max).
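The decision logic and score formula above can be sketched as follows. `decide` and its counters are hypothetical names, and the real recommend-profile.sh is a bash script, not Python; this only mirrors the documented rules:

```python
# Sketch of the categorical decision logic described above. Assumes
# `hard`/`soft` are counts of distinct category hits in each tier.
def score(hard: int, soft: int) -> float:
    # Score = (hard × 0.5) + (soft × 0.2), capped at 1.0
    return min(1.0, hard * 0.5 + soft * 0.2)

def decide(hard: int, soft: int, min_categories: int = 2,
           threshold: float = 0.8) -> str:
    if hard > 0:
        return "hard-require"        # forced upgrade, no user dismiss
    distinct = hard + soft           # category vector size, not keyword count
    if distinct >= min_categories:
        return "ask" if score(hard, soft) >= threshold else "hint"
    return "none"
```

With the defaults shown, a soft-only idea needs all four soft categories (4 × 0.2 = 0.8) to reach "ask"; two categories land at "hint", which matches the single-soft-below-min and multi-tenant+compliance test cases in spirit.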
Security (v1.3.0 Gemini lesson applied):
- JSON read via python stdin, never shell-interpolated
- No `python3 -c "...$VAR..."` anywhere with user data
- Injection canary fixture in test matrix
- bash-3 (macOS) array-safety: declare -a + set +u around appends
Bilingual EN + KO banks. JP/CN stubs TODO v1.5 (quality-engineer flag).
8/8 self-test cases pass:
benign / stripe-only / PII+stripe / multi-tenant+compliance / KO /
single-soft-below-min / empty / raw-text-healthcare + injection canary
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…ation
system-architect CP-1 + quality-engineer "replay safety":
New hook `hooks/escalation-ledger.py` records user responses to profile-
escalation prompts in `~/.preview-forge/escalation-history.json`.
Signal-hash identity (sha256 of sorted category list, 16-hex) means
"same signals" = "same decision", not "same prompt text". Decisions
persist across runs so declining SOC2 upgrade once doesn't nag every
new run with SOC2 ideas.
Suppression policy (quality-engineer fuzz matrix):
- declined within 24h window + same signal_hash → suppress reprompt
- accepted any time → user already upgraded, no suppression needed
- older than 24h → safe to re-prompt (maybe user changed mind)
Subcommands:
record <hash> <current> <recommended> <response> <run_id>
lookup <hash> exit 1 if no history
replay_safe <hash> exit 0 safe to prompt, 1 suppress
hash <cat1,cat2,...> utility: compute signal_hash
M1 Run Supervisor pre-flight integration (`agents/meta/run-supervisor.md`):
Step 7: profile resolve with v1.4 default=standard + one-time
stderr notice for users upgrading from v1.3 (refactoring-expert CP)
Step 8: surface-type detection (unchanged)
Step 9: NEW — profile escalation check
hard-require → force upgrade AskUserQuestion (no dismiss)
ask → replay_safe gate + AskUserQuestion
hint → static hint in /pf:status (no prompt)
none → no-op
Step 10: Blackboard init with escalation_action column
Atomic file write (tmpfile + rename) so interrupted runs don't corrupt
the ledger. 200-entry cap prevents unbounded growth.
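The atomic tmpfile+rename write with the 200-entry cap can be sketched like this (the path and function names are assumed for illustration):

```python
import json
import os
from pathlib import Path

# Sketch of the atomic-write-with-cap pattern described above;
# the real hook's file layout may differ.
MAX_ENTRIES = 200

def save_ledger(rows: list[dict], path: Path) -> None:
    path.parent.mkdir(parents=True, exist_ok=True)
    rows = rows[-MAX_ENTRIES:]            # cap prevents unbounded growth
    tmp = path.with_suffix(".tmp")
    tmp.write_text(json.dumps(rows, indent=2))
    os.replace(tmp, path)                 # rename is atomic on POSIX
```

Because `os.replace` swaps the file in one step, an interrupted run leaves either the old ledger or the new one, never a half-written file.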
8/8 self-test cases pass:
empty-lookup / hash-determinism / empty-replay-safe / record-decline /
suppress-after-decline / lookup-after-record / different-signals-ok /
accepted-not-suppressed
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
CI (.github/workflows/ci.yml):
- defaultProfile="standard" assertion (guards against accidental revert)
- recommend-profile: 9-case matrix (8 classifications + injection canary)
- escalation-ledger: 8 cases (hash determinism + replay safety + TTL + signal isolation)
- standard-schema-lint: portable/unportable fixtures
- Python hook compile list expanded: escalation-ledger + standard-schema-lint
verify-plugin.sh:
- Hook count 5 → 6 (adds escalation-ledger)
- Asset count 4 → 8 (adds 4 standard-profile templates)
- 46/46 checks pass locally
README.md:
- Profiles table gains DB + Container columns
- "Profile escalation (v1.4+)" subsection documents hard-require vs soft-suggest
- Quick Install snippet shows default=standard with explicit stack implications
CHANGELOG.md: full 1.4.0 entry documenting the breaking default change plus the new assets, recommender, ledger, and schema extensions.
LESSON 0.10 (memory/LESSONS.md): "기본값은 첫-실행 성공을 좌우한다" ("Defaults determine first-run success"). Documents the Collins/Drucker gate (measure before flip), security-engineer's false-assurance mitigation via the HARD_REQUIRE tier, devops-architect's category-vector scoring, and refactoring-expert's one-time stderr notice for upgraders.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Walkthrough
Changes the default profile from "pro" to "standard", providing a local-first SQLite-based development stack. A new profile escalation mechanism analyzes signals to suggest, or force, an upgrade to a higher profile when needed. Scripts, templates, and configuration files are added to support this.
Changes
Sequence diagram
sequenceDiagram
    participant User
    participant RunSupervisor as Run Supervisor
    participant Recommender as recommend-profile.sh
    participant Ledger as escalation-ledger.py
    participant System
    User->>RunSupervisor: Start project (current profile)
    RunSupervisor->>Recommender: Pass signal data
    Recommender->>Recommender: Keyword matching & signal check
    Recommender->>Recommender: Load profile policy (confidence_threshold)
    Recommender-->>RunSupervisor: JSON recommendation<br/>(action: hard-require/ask/hint/none)
    alt hard-require signal
        RunSupervisor->>User: Forced-upgrade question
        User->>RunSupervisor: Response
        RunSupervisor->>Ledger: Record (signal_hash, response, timestamp)
        Ledger->>Ledger: Save ledger (atomic)
    else ask signal (threshold met)
        RunSupervisor->>Ledger: replay_safe query<br/>(signal_hash lookup)
        Ledger-->>RunSupervisor: Recent record & 24h status
        alt no decline within 24 hours
            RunSupervisor->>User: Profile suggestion
            User->>RunSupervisor: Response
            RunSupervisor->>Ledger: Record
            Ledger->>Ledger: Save ledger
        else declined within 24 hours
            RunSupervisor->>RunSupervisor: Suppress suggestion
        end
    else hint signal
        RunSupervisor->>System: Show hint in /pf:status
    else none
        RunSupervisor->>System: No escalation
    end
    RunSupervisor->>System: Proceed with profile
Estimated code review effort: 🎯 4 (complex) | ⏱️ ~45 min
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Code Review
This pull request updates the plugin to version 1.4.0, flipping the default profile to standard for a faster, local-first MVP experience using SQLite and removing the Docker requirement for initial runs. It introduces a profile escalation system that analyzes project ideas for enterprise signals to recommend profile upgrades, supported by a new decision ledger and a graduation script for additive profile elevation. Feedback includes addressing a sed portability issue on macOS, improving keyword matching robustness in the recommendation script using grep -w, and implementing file locking in the escalation ledger to prevent race conditions during concurrent writes.
# Replace datasource provider + url (use env DATABASE_URL — same env var,
# different contents: file:dev.db → postgres://...)
sed -i.bak -E 's|provider\s*=\s*"sqlite"|provider = "postgresql"|' prisma/schema.prisma
The \s shorthand for whitespace is a GNU extension and is not supported by the default sed on macOS (BSD sed). This will cause the provider replacement to fail silently on macOS, leaving the project with the incorrect database provider after graduation. Use [[:space:]] for better portability.
sed -i.bak -E 's|provider[[:space:]]*=[[:space:]]*"sqlite"|provider = "postgresql"|' prisma/schema.prisma
# Signal banks (bilingual EN + KO). Quality-engineer pushed for explicit
# JP/CN stubs — empty for v1.4.0, flagged TODO.
HARD_PAYMENTS=("stripe" "pci " "payment processing" "billing flow" "subscription" "결제" "구독" "청구")
The keyword "pci " includes a trailing space to avoid partial matches, but this approach is fragile as it fails if the term is followed by punctuation (e.g., "PCI.") or appears at the end of a line. It is recommended to remove the trailing space and use the -w (word-regexp) flag with grep in the matching logic.
Suggested change:
-HARD_PAYMENTS=("stripe" "pci " "payment processing" "billing flow" "subscription" "결제" "구독" "청구")
+HARD_PAYMENTS=("stripe" "pci" "payment processing" "billing flow" "subscription" "결제" "구독" "청구")
SOFT_COMPLIANCE=("soc2" "iso27001" "audit log" "compliance" "hipaa compliance" "감사로그" "감사 로그" "컴플라이언스")
SOFT_TENANT=("multi-tenant" "multitenant" "tenancy" "workspace isolation" "organization-scoped" "멀티테넌트" "멀티 테넌트")
SOFT_B2B=("enterprise sso" "b2b saas" "procurement" "rfp " "enterprise buyer" "엔터프라이즈" "기업용")
The keyword "rfp " includes a trailing space which can lead to missed detections if the term is followed by punctuation. Removing the space and using grep -w for whole-word matching is a more robust approach.
Suggested change:
-SOFT_B2B=("enterprise sso" "b2b saas" "procurement" "rfp " "enterprise buyer" "엔터프라이즈" "기업용")
+SOFT_B2B=("enterprise sso" "b2b saas" "procurement" "rfp" "enterprise buyer" "엔터프라이즈" "기업용")
local text="$1"
shift
for kw in "$@"; do
  if printf '%s' "$text" | grep -qi -- "$kw" 2>/dev/null; then
Using grep -w (or --word-regexp) is a more robust way to match keywords as whole words. It correctly handles word boundaries (punctuation, start/end of line) and avoids false positives from substrings without needing manual trailing spaces in the keyword list.
Suggested change:
-  if printf '%s' "$text" | grep -qi -- "$kw" 2>/dev/null; then
+  if printf '%s' "$text" | grep -qiw -- "$kw" 2>/dev/null; then
def save_ledger(rows: list[dict]) -> None:
    LEDGER_DIR.mkdir(parents=True, exist_ok=True)
    # Write atomically via tmpfile+rename
    tmp = LEDGER_FILE.with_suffix(".tmp")
    tmp.write_text(json.dumps(rows, indent=2))
    tmp.replace(LEDGER_FILE)
The save_ledger function, when used in conjunction with load_ledger in cmd_record, creates a non-atomic read-modify-write cycle. If multiple processes attempt to record a decision simultaneously, updates could be lost. While the use of replace ensures file integrity, it does not prevent lost updates. Consider using a file locking mechanism (e.g., fcntl.flock) to synchronize access to the ledger file.
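One way to implement the suggested locking, sketched under the assumption of a POSIX host (`fcntl` is unavailable on Windows); the function name and sidecar lock-file layout are illustrative, not the hook's actual API:

```python
import fcntl
import json
from pathlib import Path

# Hold an exclusive flock on a sidecar lock file across the whole
# read-modify-write cycle so concurrent writers can't lose updates.
def record_with_lock(ledger: Path, entry: dict) -> None:
    ledger.parent.mkdir(parents=True, exist_ok=True)
    lock = ledger.with_suffix(".lock")
    with open(lock, "w") as lf:
        fcntl.flock(lf, fcntl.LOCK_EX)    # blocks other writers
        try:
            rows = json.loads(ledger.read_text()) if ledger.exists() else []
            rows.append(entry)
            tmp = ledger.with_suffix(".tmp")
            tmp.write_text(json.dumps(rows, indent=2))
            tmp.replace(ledger)           # atomic publish, lock still held
        finally:
            fcntl.flock(lf, fcntl.LOCK_UN)
```

The rename stays atomic for readers; the lock only serializes writers, so it addresses exactly the lost-update window the review describes.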
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 3b36e43765
'upgrade_to': e.get('upgrade_to', 'pro'),
'confidence_threshold': e.get('confidence_threshold', 0.8),
'min_distinct_categories': e.get('min_distinct_categories', 2)
Read escalation signal sets from profile policy
profile_escalation is parsed here with only upgrade_to, confidence_threshold, and min_distinct_categories, but hard_require_signals/soft_suggest_categories are never loaded, so detection still uses the global hard-coded banks for every profile. That causes incorrect enforcement for non-standard profiles (for example, --profile=pro still hard-requires upgrade on Stripe text even though pro.json limits hard_require_signals to phi_healthcare), which can force unnecessary profile escalation.
- **v1.4+ default-change notice**: if the user did not previously pass `--profile=pro` explicitly, print "pf: default profile changed standard←pro (v1.4.0). See README for profile comparison." once to stderr on the first run (refactoring-expert CP). The `~/.preview-forge/default-notice-shown` file prevents duplicate output.
8. **Surface-type detection** (v1.3+): run `scripts/detect-surface.sh < runs/<id>/idea.json` and store the result in `runs/<id>/surface.json`. The Engineering lead consults it during stack selection (rest-first → nestia / ui-first → Next.js 16 / hybrid → both).
9. **Blackboard initialization**: create `runs/r-<ts>/blackboard.db` + initial row: `(run.pre_flight_passed, ts, cwd, cli_ver, profile, surface)`.
9. **Profile escalation check** (v1.4+): run `scripts/recommend-profile.sh < runs/<id>/idea.json $(cat runs/<id>/.profile)` and store the result in `runs/<id>/profile-recommendation.json`.
Call recommend-profile with explicit /dev/stdin
The documented pre-flight command passes only one positional argument after stdin redirection, but recommend-profile.sh expects arg1 to be an input path and arg2 to be the profile. With this invocation, the profile name is treated as a filename (cat "$INPUT") and the command exits non-zero, so the escalation check step fails instead of producing profile-recommendation.json.
local text="$1"
shift
for kw in "$@"; do
  if printf '%s' "$text" | grep -qi -- "$kw" 2>/dev/null; then
Match escalation keywords on word boundaries
Keyword matching uses plain substring search (grep -qi) for each token, so short terms like "phi" in the hard-signal list match unrelated words (e.g., "Delphi") and trigger phi_healthcare hard-require. In standard profile this creates false forced upgrades for benign ideas; matching should use word boundaries or stricter tokenization for sensitive signals.
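The word-boundary matching the review asks for could look like this in Python terms (a sketch of the idea, not the script's actual bash implementation; the function name is hypothetical):

```python
import re

# Boundary-aware matching: plain substring search lets "phi" fire on
# "Delphi"; lookarounds require a non-word char (or edge) on both sides.
def hits_word(text: str, keyword: str) -> bool:
    pattern = rf"(?<!\w){re.escape(keyword)}(?!\w)"
    return re.search(pattern, text, re.IGNORECASE) is not None
```

Lookarounds (rather than `\b`) keep multi-token keywords like "multi-tenant" matching, since the boundary check only applies at the keyword's outer edges.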
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
plugins/preview-forge/agents/meta/run-supervisor.md (1)
31-41: ⚠️ Potential issue | 🟡 Minor: Align the pre-flight description with the v1.4 flow.
Line 32 still says "7-step", but the current list has 10 steps. Also, the default-change notice on line 41 should apply only to implicit-default users who did not pass an explicit --profile, so that users who explicitly chose --profile=standard don't receive an unnecessary upgrade notice.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/preview-forge/agents/meta/run-supervisor.md` around lines 31 - 41, update the pre-flight doc to reflect the actual 10 steps (change "7-step" to "10-step") and adjust the Profile resolve paragraph so the default-change notice only triggers for implicit-default users (i.e., when no explicit --profile flag was provided); keep the logic that parses --profile into PF_PROFILE, resolves against settings.json.pf.defaultProfile (defaulting to "standard" v1.4+), writes the chosen profile to runs/<id>/.profile, and gate the one-time stderr message "pf: default profile changed standard←pro (v1.4.0)..." behind a check that the user did not supply --profile and that ~/.preview-forge/default-notice-shown is absent (create that file after showing the notice).
plugins/preview-forge/schemas/pf-profile.schema.json (1)
5-5: ⚠️ Potential issue | 🟡 Minor: Update the default profile in the schema description.
Line 5 still says `pro (default, balanced)`, which conflicts with the v1.4 switch to the `standard` default. Tools or doc generators reading the schema may expose the wrong default value.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/preview-forge/schemas/pf-profile.schema.json` at line 5, the schema description still says "pro (default, balanced)" which conflicts with v1.4 switching the default to "standard"; update the "description" value in pf-profile.schema.json (the description property for the Preview Forge run profile) to reflect that "standard" is now the default (e.g., change "pro (default, balanced)" to "standard (default, balanced)" and ensure the three profile names "standard", "pro", and "max" remain correctly described).
README.md (1)
256-262: ⚠️ Potential issue | 🟡 Minor: The verification check count doesn't match v1.4.
Line 261 still reports `34/34 checks`, but this PR's changelog and goals state that `verify-plugin.sh` was expanded to `46/46`. Update the install-verification docs to the same number.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 256 - 262, the example output string `34/34 checks` in the README's "Verify install" section is inconsistent with this PR's changes (verify-plugin.sh expanded to `46/46`); find that string and update `34/34 checks` to `46/46 checks`, and if needed adjust the wording around the script name `scripts/verify-plugin.sh` so the verification example reads consistently.
🧹 Nitpick comments (2)
scripts/recommend-profile.sh (2)
88-97: `grep -qi` interprets keywords as BRE regexes; `-F` or `-w` recommended.
The current dictionaries contain no regex metacharacters, so there is no immediate problem, but there are risks:
- If keywords containing `.`, `(`, `+`, `?` are added later (e.g., the JP/CN TODO expansion, or formal trademark names), their meaning changes.
- category_hit forks a new grep process per keyword; switching to fixed-string + word-boundary matching at the same time lets PCI and RFP match exactly without the trailing-space hack in "pci ".
♻️ Suggested diff
 category_hit() {
   local text="$1"
   shift
   for kw in "$@"; do
-    if printf '%s' "$text" | grep -qi -- "$kw" 2>/dev/null; then
+    # -F: fixed-string (no regex metachar surprises in future dict entries)
+    # -w: word-boundary match (removes need for "pci " / "rfp " trailing-space hacks)
+    if printf '%s' "$text" | grep -qiFw -- "$kw" 2>/dev/null; then
       return 0
     fi
   done
   return 1
 }
With `-w`, hyphenated keywords like "multi-tenant" still match, since grep treats `-` as a non-word character. Just double-check that the CI classification cases include both multi-tenant and multitenant.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/recommend-profile.sh` around lines 88 - 97, The category_hit function uses grep -qi which treats keywords as BRE regexes; change the invocation in category_hit to use fixed-string and word-boundary matching (e.g., grep -F -w -i or equivalent) so keywords are matched literally and as whole words (avoiding regex meta-character surprises) and avoid forking unsafe regexes per keyword; update category_hit to call grep with -F -w -i on "$kw" and verify CI test cases cover hyphenated tokens like "multi-tenant" vs "multitenant" since -w treats - as a non-word char.
114-156: Duplication/inefficiency cleanup (optional).
- The HARD_N/SOFT_N reassignments at lines 149-150 are identical to lines 114-115. The arrays don't change in between, so they can be removed.
- Lines 140-142 call `python3 -c` three times just to parse the same POLICY JSON three times. Fetching all three values in one pass via read + environment variables/eval saves three processes.
No behavior change, so this can wait until you have a quiet moment.
♻️ Example diff (single POLICY parse)
-UPGRADE_TO=$(python3 -c "import json,sys; print(json.loads(sys.argv[1])['upgrade_to'])" "$POLICY")
-THRESHOLD=$(python3 -c "import json,sys; print(json.loads(sys.argv[1])['confidence_threshold'])" "$POLICY")
-MIN_CATEGORIES=$(python3 -c "import json,sys; print(json.loads(sys.argv[1])['min_distinct_categories'])" "$POLICY")
+eval "$(python3 -c "
+import json, sys
+p = json.loads(sys.argv[1])
+print(f\"UPGRADE_TO={p['upgrade_to']}\")
+print(f\"THRESHOLD={p['confidence_threshold']}\")
+print(f\"MIN_CATEGORIES={p['min_distinct_categories']}\")
+" "$POLICY")"
Note: POLICY is produced by an internal json.dumps, so eval is safe here, but for strictness a read + NUL-delimiter approach is more robust.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/recommend-profile.sh` around lines 114 - 156, The script duplicates HARD_N/SOFT_N assignments and inefficiently calls python3 three times to parse POLICY; remove the second HARD_N=${`#HARD_HITS`[@]} and SOFT_N=${`#SOFT_HITS`[@]} reassignment, and replace the three separate python3 -c calls that set UPGRADE_TO, THRESHOLD, and MIN_CATEGORIES with a single parse (e.g., one python3 -c that loads the POLICY JSON and prints the three values in a safe delimiter) then split/assign those three outputs to UPGRADE_TO, THRESHOLD, MIN_CATEGORIES; keep references to the same symbols (POLICY, UPGRADE_TO, THRESHOLD, MIN_CATEGORIES, HARD_N, SOFT_N, SCORE) so the rest of the logic is unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/ci.yml:
- Around line 358-364: The CI step currently treats any non-zero exit as success
by using "&& ... || true" with the line invoking "python3
scripts/standard-schema-lint.py /tmp/unportable.prisma 2>/dev/null", so it
doesn't verify the linter returned the required exit code 2; change the check to
capture the command's exit status and assert it equals 2 (e.g., run the linter,
store "$?", and test "[ $status -eq 2 ]" failing the job otherwise) so only the
specific contract "2 = non-portable feature detected" passes.
In `@plugins/preview-forge/agents/meta/run-supervisor.md`:
- Around line 43-48: The pre-flight permission conflict arises because the spec
requires running recommend-profile.sh and escalation-ledger.py during pre-flight
(see "recommend-profile.sh" and "escalation-ledger.py" in steps 9–10) while a
later rule forbids Bash execution (line ~96/allowed_scope), leaving unclear
whether M1 must run these checks or delegate them; fix by explicitly choosing
one of two approaches and updating the doc: either allow read-only, non-mutating
shell/script execution for pre-flight (document what "read-only" means and
whitelist recommend-profile.sh and escalation-ledger.py) or declare that M1 must
only delegate profile/escalation work to a subtask/agent (add a delegation flow
and required API for running those scripts elsewhere), and update the lines
referenced (the pre-flight steps and the Bash prohibition/allowed_scope) to
reflect the chosen approach so the execution path is unambiguous.
In `@plugins/preview-forge/assets/gitignore.standard.template`:
- Around line 6-12: The SQLite ignore list is missing rollback journal patterns;
update the SQLite section in gitignore.standard.template to also ignore rollback
journals by adding patterns such as *.db-journal and *.sqlite-journal (or a
broader *-journal) alongside the existing *.db, *.sqlite3-journal entries so
files like dev.db-journal and any *.sqlite-journal are excluded from the repo.
- Around line 17-20: The current patterns still allow `.env.*` variants such as `.env.production` and `.env.staging`, leaving a secret-leak risk; in plugins/preview-forge/assets/gitignore.standard.template, remove the existing block (".env", ".env.local", ".env.*.local", "!.env.example") and replace it with entries that block `.env` and all variants via `.env.*`, keeping only `!.env.example` as the exception (e.g., replace with ".env", ".env.*", "!.env.example").
In `@plugins/preview-forge/assets/graduate.sh.template`:
- Around line 82-90: The compose template currently contains hard-coded DB credentials (DATABASE_URL: postgres://pf:pf@db:5432/pf and the db service's POSTGRES_USER/POSTGRES_PASSWORD/POSTGRES_DB), which risks tripping secret scanners; remove these values and require environment-variable substitution instead: delete the DATABASE_URL and the db service's POSTGRES_* entries from the template, or replace the values with variable references such as ${DATABASE_URL}, ${POSTGRES_USER}, ${POSTGRES_PASSWORD}, ${POSTGRES_DB}, and add a .env.example file at the project root documenting only the required variable names with non-secret placeholder values; also add a note in the template comments or README instructing users to supply real secrets at runtime so no secrets are hard-coded at deploy time.
- Around line 52-69: The Dockerfile is installing only production deps in the
deps stage which causes the builder stage's npm run build to fail; update the
flow so the deps stage runs full install (remove --omit=dev) e.g. use npm ci to
install all dependencies, let the builder stage run npm run build (in the
builder stage that uses the node_modules from deps), and after build prune
dev-only packages before finalizing the runtime image (use npm prune --omit=dev
or equivalent) so the final stage contains only production deps; reference the
Docker stages names deps, builder and the npm commands npm ci, npm run build and
npm prune --omit=dev.
In `@plugins/preview-forge/assets/prisma.schema.standard.template`:
- Around line 18-21: The template currently documents %%PF_DB_URL%% as a
tilde-based path but Prisma SQLite does not expand '~'; update the engineering
scaffold that generates %%PF_DB_URL%% so it expands the user's home directory
(e.g. using os.homedir() or $HOME) and writes DATABASE_URL in the .env as an
absolute URL like
DATABASE_URL="file:/home/username/.preview-forge/<project>/dev.db"; also update
the prisma template comment around url = env("DATABASE_URL") to explicitly
instruct the scaffold to perform this expansion (referencing %%PF_DB_URL%%,
DATABASE_URL, and the url = env("DATABASE_URL") line).
In `@plugins/preview-forge/hooks/escalation-ledger.py`:
- Around line 50-54: The signal_hash function currently risks different hashes
for the same logical category set and returns only 16 hex chars; instead
normalize and deduplicate before hashing and return the full SHA-256 hex digest:
update signal_hash to map categories to a canonical form (e.g., strip and
lower-case), convert to a set to remove duplicates, sort that set, join with the
existing delimiter ("\x1f"), compute hashlib.sha256(...).hexdigest() and return
the full 64-character hex string (keep the function name signal_hash and the
delimiter approach so callers remain compatible).
- Around line 69-99: The ledger append in cmd_record is racy: load_ledger →
append → save_ledger can lose concurrent writes and using a fixed ".tmp" name
can collide; fix by serializing writers with an exclusive file lock around the
read/modify/write sequence (acquire lock before calling load_ledger and hold it
until save_ledger returns), and make save_ledger use a unique temporary filename
(e.g., include pid/timestamp/uuid when creating tmp from
LEDGER_FILE.with_suffix) then atomic replace LEDGER_FILE; update references to
LEDGER_FILE, LEDGER_DIR, save_ledger, load_ledger and the tmp variable
accordingly and ensure the lock is released on all error paths.
In `@plugins/preview-forge/profiles/pro.json`:
- Around line 6-12: The _note in the "stack" object incorrectly claims "postgres
+ redis" while the graduation compose template only provisions Postgres; update
the "stack" object (keys: "stack", "_note", and optionally "db_file_location")
so the note accurately reflects the template — either add Redis to the
graduation compose/template that generates containers or change the "_note" text
to mention only "postgres" (and adjust "db_file_location" phrasing if needed) to
make the profile consistent with the actual graduate template.
In `@plugins/preview-forge/schemas/pf-profile.schema.json`:
- Around line 165-213: The schema allows empty objects for the optional "stack"
and "profile_escalation" blocks so runtime code that reads fields like stack.db
or profile_escalation.upgrade_to / confidence_threshold /
min_distinct_categories can break; add "required" arrays to the "stack" object
(e.g., require at least "db") and to the "profile_escalation" object (require
"upgrade_to", "confidence_threshold", and "min_distinct_categories" or any other
fields your runtime expects) so that if the block is present the necessary keys
are validated.
In `@plugins/preview-forge/settings.json`:
- Around line 7-13: In load_active_profile() in
plugins/preview-forge/hooks/cost-regression.py update the hardcoded "pro"
fallback to "standard": change s.get("pf", {}).get("defaultProfile", "pro") to
use "standard", and replace any assignments that set name = "pro" on exception
or None (the two occurrences currently at the exception/None handling) to name =
"standard" so the function aligns with PF_DEFAULT_PROFILE in settings.json; keep
the rest of load_active_profile() logic unchanged.
In `@README.md`:
- Around line 70-83: The README currently describes the pro profile DB
inconsistently; align the quick-start and table to match the actual stack in
plugins/preview-forge/profiles/pro.json by changing the pro profile text to a
single, consistent description (e.g., "SQLite → Postgres" or "Postgres + Docker"
depending on that JSON's dev/prod stack). Update the /pf:new example comment for
"real project" and the Profiles table row for **pro** so both reflect the same
DB wording and include "Docker" if the profile's prod config requires it; ensure
the term matches the stack in plugins/preview-forge/profiles/pro.json.
- Around line 92-98: The documented contradiction is that the precedence between "Hard-require" and "Categorical scoring (≥2 distinct signal categories)" is unclear; reword the passage to state explicitly that "Hard-require (Stripe / PII / HIPAA / auth-provider) bypasses the category floor and will force upgrade on a single matching hard signal", followed by "Categorical scoring floor (≥2 distinct signal categories) applies to Soft-suggest and Hint only", making clear that Hard-require is the exception; keep the related keywords (e.g., Hard-require, Soft-suggest, Hint, Categorical scoring, AskUserQuestion, ~/.preview-forge/escalation-history.json, /pf:status) as-is so the context and behaviors (forced upgrade, AskUserQuestion prompting, 24h anti-nagging, status messages) are preserved.
In `@scripts/recommend-profile.sh`:
- Around line 74-82: HARD_PAYMENTS and SOFT_B2B contain entries with trailing
spaces ("pci " and "rfp ") which cause missed matches for common variants like
"PCI-DSS", "PCI." or "RFP-driven"; update the entries in HARD_PAYMENTS and
SOFT_B2B to remove the trailing spaces or replace those literal tokens with
word-boundary-aware patterns (e.g., use regex tokens like \bpci\b and \brfp\b or
ensure the matching call uses grep -w) so the intent to match whole words is
preserved and no legitimate cases are silently missed.
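The difference can be illustrated in Python regex terms (the script itself uses grep; this only demonstrates the matching semantics):

```python
import re

def space_suffix_hit(token, text):
    # Old approach: literal "pci " with a trailing space
    return token in text.lower()

def word_boundary_hit(word, text):
    # New approach: whole-word, boundary-aware match
    return re.search(rf"\b{re.escape(word)}\b", text, re.IGNORECASE) is not None

sample = "We target PCI-DSS compliance."
print(space_suffix_hit("pci ", sample))   # False: a hyphen follows, not a space
print(word_boundary_hit("pci", sample))   # True: \b matches before the hyphen
```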
- Around line 56-70: The current python snippet that sets IDEA_TEXT will fall
back to lowercasing the entire raw JSON when the four checked fields are all
empty, causing key names to be scanned; change the logic in the python block
that produces IDEA_TEXT so it distinguishes parse failure from an empty-value
parse: on successful json.loads produce a string of concatenated values
(recursively extracting all leaf values or at least from an expanded
allowed-fields list like description/summary/goals) and emit that (possibly
empty), but on parse exception emit a special marker (or non-empty string) that
signals failure so the shell fallback lowercasing of the raw JSON only runs when
parsing actually failed; update the code that reads IDEA_TEXT to rely on that
marker and avoid using the raw JSON when parsing succeeded but returned no
values (use the extracted values or empty string instead).
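One way to sketch the requested parse-failed marker (names here are illustrative, not the script's actual identifiers):

```python
import json

PARSE_FAILED = "\x00parse-failed"  # illustrative sentinel the shell can test for

def extract_idea_text(raw):
    """Return lowered leaf values on success (possibly empty), or the sentinel
    on a parse error -- so raw-JSON fallback only happens when parsing failed."""
    def leaves(node):
        if isinstance(node, dict):
            for v in node.values():
                yield from leaves(v)
        elif isinstance(node, list):
            for v in node:
                yield from leaves(v)
        elif isinstance(node, str):
            yield node
    try:
        doc = json.loads(raw)
    except Exception:
        return PARSE_FAILED
    return " ".join(leaves(doc)).lower()

# Key names are no longer scanned: only the value "none" survives.
print(extract_idea_text('{"compliance_notes": "none"}'))   # none
print(extract_idea_text("not json") == PARSE_FAILED)       # True
```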
- Around line 43-53: The script currently calls cat "$INPUT" without validating
the file, so unreadable/missing files cause immediate exit due to set -euo
pipefail and produce no JSON; update the branch that handles non-stdin input to
first test readability with [[ -r "$INPUT" ]] and if that fails set JSON to a
minimal {"action":"none"} (and echo it to stdout) instead of attempting cat,
preserving the INPUT/JSON variables and ensuring the script still emits the
fallback JSON and exits normally; reference the INPUT variable and the JSON
output generation logic when making the change.
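The suggested guard, mirrored in Python terms (the script is bash; this just shows the contract — never crash, always emit JSON):

```python
import json, os, sys

def read_input(path):
    """Readability check before reading; unreadable files yield fallback JSON."""
    if path == "/dev/stdin":
        return sys.stdin.read()
    if not os.access(path, os.R_OK):           # the [[ -r "$INPUT" ]] check
        return json.dumps({"action": "none"})  # documented always-exit-0 contract
    with open(path) as f:
        return f.read()

print(read_input("/no/such/file"))  # {"action": "none"}
```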
In `@scripts/standard-schema-lint.py`:
- Around line 41-44: The current detector only looks for `$executeRaw` template
usage (regex around "$executeRaw(Unsafe)?") so it misses `$queryRaw`, generic
raw query patterns and Postgres casts inside migration SQL files
(prisma/migrations/**/*.sql) or app code; update the scanner to also search for
`$queryRaw` and generic raw SQL callsites (e.g., raw string/TaggedTemplate
patterns used by Prisma client), and add an additional pass to glob-scan
prisma/migrations/**/*.sql (and optionally app code paths) for Postgres-only
casts like ::tsvector, ::jsonb, ::uuid, or otherwise explicitly document that
this script only analyzes schema.prisma and does not guarantee raw-SQL blocking
outside that scope. Ensure changes reference the existing regex/check logic that
currently matches "$executeRaw(Unsafe)?" and the logic that emits the "raw SQL
with Postgres-specific cast" warning so the new patterns produce the same
warning behavior.
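A sketch of the widened pattern set — the regexes below are assumptions based on the comment, not the script's literal source:

```python
import re

RAW_CALL = re.compile(r"\$(?:execute|query)Raw(?:Unsafe)?")     # was $executeRaw only
PG_CAST  = re.compile(r"::(?:tsvector|jsonb|uuid|interval)\b")  # Postgres-only casts

app_line = 'await prisma.$queryRawUnsafe("SELECT 1")'
sql_line = "SELECT payload::jsonb FROM events;"  # e.g. from prisma/migrations/**/*.sql

print(bool(RAW_CALL.search(app_line)))  # True
print(bool(PG_CAST.search(sql_line)))   # True
```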
---
Outside diff comments:
In `@plugins/preview-forge/agents/meta/run-supervisor.md`:
- Around line 31-41: Update the pre-flight doc to reflect the actual 10 steps
(change “7-step” to “10-step”) and adjust the Profile resolve paragraph so the
default-change notice only triggers for implicit-default users (i.e., when no
explicit --profile flag was provided); keep the logic that parses --profile into
PF_PROFILE, resolves against settings.json.pf.defaultProfile (defaulting to
"standard" v1.4+), writes the chosen profile to runs/<id>/.profile, and gate the
one-time stderr message "pf: default profile changed standard←pro (v1.4.0)..."
behind a check that the user did not supply --profile and that
~/.preview-forge/default-notice-shown is absent (create that file after showing
the notice).
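The gating described above, as a sketch (the marker path comes from the comment; the function name is illustrative):

```python
import os

def should_show_default_notice(explicit_profile_flag, marker_path):
    """Notice only for implicit-default users, and only once."""
    if explicit_profile_flag:        # user passed --profile: never notify
        return False
    if os.path.exists(marker_path):  # already shown earlier
        return False
    # Caller prints the one-time stderr notice, then records that it was shown:
    open(marker_path, "w").close()
    return True
```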
In `@plugins/preview-forge/schemas/pf-profile.schema.json`:
- Line 5: The schema description still says "pro (default, balanced)" which
conflicts with v1.4 switching the default to "standard"; update the
"description" value in pf-profile.schema.json (the description property for the
Preview Forge run profile) to reflect that "standard" is now the default (e.g.,
change "pro (default, balanced)" to "standard (default, balanced)" and ensure
the three profile names "standard", "pro", and "max" remain correctly
described).
In `@README.md`:
- Around line 256-262: The example output string `34/34 checks` in the README's "Verify install" section is inconsistent with this PR's changes (verify-plugin.sh was expanded to `46/46`); find that string and update `34/34 checks` to `46/46 checks`, and if needed adjust the verification example so the wording stays consistent with the script name `scripts/verify-plugin.sh`.
---
Nitpick comments:
In `@scripts/recommend-profile.sh`:
- Around line 88-97: The category_hit function uses grep -qi which treats
keywords as BRE regexes; change the invocation in category_hit to use
fixed-string and word-boundary matching (e.g., grep -F -w -i or equivalent) so
keywords are matched literally and as whole words (avoiding regex meta-character
surprises) and avoid forking unsafe regexes per keyword; update category_hit to
call grep with -F -w -i on "$kw" and verify CI test cases cover hyphenated
tokens like "multi-tenant" vs "multitenant" since -w treats - as a non-word
char.
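The hyphen caveat the comment warns about, sketched with a Python analog of `grep -F -w -i` semantics:

```python
import re

def fixed_whole_word(kw, text):
    """Literal keyword, whole-word, case-insensitive (grep -F -w -i analog)."""
    return re.search(rf"(?<!\w){re.escape(kw)}(?!\w)", text, re.IGNORECASE) is not None

print(fixed_whole_word("multi-tenant", "Multi-tenant SaaS"))  # True
print(fixed_whole_word("tenant", "multi-tenant SaaS"))        # True: '-' is a non-word char
print(fixed_whole_word("multi-tenant", "multitenant SaaS"))   # False: no literal hyphen
```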
- Around line 114-156: The script duplicates HARD_N/SOFT_N assignments and
inefficiently calls python3 three times to parse POLICY; remove the second
HARD_N=${#HARD_HITS[@]} and SOFT_N=${#SOFT_HITS[@]} reassignment, and replace
the three separate python3 -c calls that set UPGRADE_TO, THRESHOLD, and
MIN_CATEGORIES with a single parse (e.g., one python3 -c that loads the POLICY
JSON and prints the three values in a safe delimiter) then split/assign those
three outputs to UPGRADE_TO, THRESHOLD, MIN_CATEGORIES; keep references to the
same symbols (POLICY, UPGRADE_TO, THRESHOLD, MIN_CATEGORIES, HARD_N, SOFT_N,
SCORE) so the rest of the logic is unchanged.
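The single-parse idea could look like this (the tab delimiter and key names are assumed from the comment, not taken from the script):

```python
import json

POLICY = '{"upgrade_to": "pro", "confidence_threshold": 0.8, "min_distinct_categories": 2}'

# One json.loads instead of three python3 -c invocations; the shell side can
# split the output with: IFS=$'\t' read -r UPGRADE_TO THRESHOLD MIN_CATEGORIES
p = json.loads(POLICY)
line = "\t".join(str(p[k]) for k in
                 ("upgrade_to", "confidence_threshold", "min_distinct_categories"))
print(line)
```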
📒 Files selected for processing (18): .github/workflows/ci.yml, README.md, plugins/preview-forge/CHANGELOG.md, plugins/preview-forge/agents/meta/run-supervisor.md, plugins/preview-forge/assets/README.standard.template, plugins/preview-forge/assets/gitignore.standard.template, plugins/preview-forge/assets/graduate.sh.template, plugins/preview-forge/assets/prisma.schema.standard.template, plugins/preview-forge/hooks/escalation-ledger.py, plugins/preview-forge/memory/LESSONS.md, plugins/preview-forge/profiles/max.json, plugins/preview-forge/profiles/pro.json, plugins/preview-forge/profiles/standard.json, plugins/preview-forge/schemas/pf-profile.schema.json, plugins/preview-forge/settings.json, scripts/recommend-profile.sh, scripts/standard-schema-lint.py, scripts/verify-plugin.sh
```dockerfile
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/public ./public
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
```
The builder stage must install devDependencies.
If the node_modules installed at line 55 with npm ci --omit=dev is reused as-is in the builder stage at lines 59-61, Next.js's npm run build fails: the Next.js build requires the TypeScript compiler and devDependencies such as typescript and @types/node. The correct flow is to install the full dependency set first, complete the build, and then remove the unneeded packages with npm prune --omit=dev.
🐳 Suggested fix
```diff
 FROM node:20-alpine AS deps
 WORKDIR /app
 COPY package*.json ./
-RUN npm ci --omit=dev
+RUN npm ci
 FROM node:20-alpine AS builder
 WORKDIR /app
 COPY --from=deps /app/node_modules ./node_modules
 COPY . .
 RUN npm run build
+RUN npm prune --omit=dev
 FROM node:20-alpine
 WORKDIR /app
 ENV NODE_ENV=production
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@plugins/preview-forge/assets/graduate.sh.template` around lines 52 - 69, The
Dockerfile is installing only production deps in the deps stage which causes the
builder stage's npm run build to fail; update the flow so the deps stage runs
full install (remove --omit=dev) e.g. use npm ci to install all dependencies,
let the builder stage run npm run build (in the builder stage that uses the
node_modules from deps), and after build prune dev-only packages before
finalizing the runtime image (use npm prune --omit=dev or equivalent) so the
final stage contains only production deps; reference the Docker stages names
deps, builder and the npm commands npm ci, npm run build and npm prune
--omit=dev.
```bash
set -euo pipefail

INPUT="${1:-/dev/stdin}"
CURRENT_PROFILE="${2:-${PF_PROFILE:-standard}}"
PLUGIN_ROOT="${CLAUDE_PLUGIN_ROOT:-}"

if [[ "$INPUT" == "/dev/stdin" ]]; then
  JSON=$(cat)
else
  JSON=$(cat "$INPUT")
fi
```
Recommend adding input-file validation (minor).
At lines 45-53, when $INPUT is not /dev/stdin the script runs cat "$INPUT" unconditionally, without checking that the file exists or is readable. Even though the script's own documentation (line 37) states "Exit codes: 0 always", set -euo pipefail (line 43) means a file error terminates the script immediately with no JSON output at all.
To fix:
- first check readability with [[ -r "$INPUT" ]]
- on failure, emit an action=none JSON to stdout
This preserves the documented contract (always exit 0 + JSON output) and lets the Run Supervisor handle recommendations consistently.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/recommend-profile.sh` around lines 43 - 53, The script currently
calls cat "$INPUT" without validating the file, so unreadable/missing files
cause immediate exit due to set -euo pipefail and produce no JSON; update the
branch that handles non-stdin input to first test readability with [[ -r
"$INPUT" ]] and if that fails set JSON to a minimal {"action":"none"} (and echo
it to stdout) instead of attempting cat, preserving the INPUT/JSON variables and
ensuring the script still emits the fallback JSON and exits normally; reference
the INPUT variable and the JSON output generation logic when making the change.
```bash
IDEA_TEXT=$(printf '%s' "$JSON" | python3 -c "
import json, sys
try:
    d = json.loads(sys.stdin.read())
    parts = [str(d.get('text','')), str(d.get('idea','')), str(d.get('title','')), str(d.get('pitch',''))]
    print(' '.join(p for p in parts if p).lower())
except Exception:
    # Treat raw input as idea text.
    sys.stdin.seek(0) if sys.stdin.seekable() else None
    print('')
" 2>/dev/null || true)

if [[ -z "$IDEA_TEXT" ]]; then
  IDEA_TEXT=$(printf '%s' "$JSON" | tr '[:upper:]' '[:lower:]')
fi
```
When JSON parsing succeeds but every field is empty, the fallback to raw JSON can produce false positives.
The python block at lines 56-66 reads only the four fields text/idea/title/pitch; if all are empty strings, IDEA_TEXT="" and lines 68-70 lowercase and scan the entire original JSON, key names included. For example, if the user passes {"description": "...", "compliance_notes": "none"}:
- the key names description / compliance_notes match the "compliance" keyword in SOFT_COMPLIANCE
- an unintended action=hint/ask can fire
Suggested improvement: when the JSON did parse, recursively extract only the values for the fallback, and fall back to raw text only when parsing actually failed. Alternatively, widen the supported field list (description, summary, goals, etc.).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/recommend-profile.sh` around lines 56 - 70, The current python
snippet that sets IDEA_TEXT will fall back to lowercasing the entire raw JSON
when the four checked fields are all empty, causing key names to be scanned;
change the logic in the python block that produces IDEA_TEXT so it distinguishes
parse failure from an empty-value parse: on successful json.loads produce a
string of concatenated values (recursively extracting all leaf values or at
least from an expanded allowed-fields list like description/summary/goals) and
emit that (possibly empty), but on parse exception emit a special marker (or
non-empty string) that signals failure so the shell fallback lowercasing of the
raw JSON only runs when parsing actually failed; update the code that reads
IDEA_TEXT to rely on that marker and avoid using the raw JSON when parsing
succeeded but returned no values (use the extracted values or empty string
instead).
External review on PR #7. Applied systematically; test matrix grew.

🔴 HIGH — graduate.sh GNU-only sed (Gemini)
Before: sed 's|provider\s*=\s*"sqlite"|...|' — \s is GNU extension.
After: sed '...provider[[:space:]]*=[[:space:]]*"sqlite"...'
macOS BSD sed now matches portably.

🟡 P1 — recommend-profile.sh ignored profile-declared signal sets (Codex)
Before: script had hardcoded HARD=[payments,phi,pii,auth] SOFT=[comp,tenant,b2b,scale]. profile.hard_require_signals / soft_suggest_categories were loaded but never applied, so pro and standard behaved identically (bug).
After: script detects ALL 8 categories, then FILTERS through profile's hard_require_signals / soft_suggest_categories sets. Categories in neither are ignored.
Result: pro profile correctly treats Stripe as no-op (payments handled safely at pro's stack) while still hard-requiring PHI. CI test asserts pro-ignores-stripe + pro-hard-requires-hipaa.

🟡 P1 — run-supervisor.md invocation syntax wrong (Codex)
Before: `recommend-profile.sh < idea.json $(cat .profile)` — stdin redirect leaves arg1 as profile name; script expected arg1=input path.
After: `recommend-profile.sh /dev/stdin "$(cat .profile)" < idea.json` — arg1=/dev/stdin, arg2=profile. Bilingual doc comment added.

🟡 P2 — "phi" substring matched "Delphi"/"morphism" (Codex)
Before: plain `grep -qi` — false hard-require on unrelated words.
After: EN_WORD_HIT uses `grep -iwE` word-boundary, EN_MULTIWORD_HIT wraps in non-word-char boundaries `(^|[^a-z0-9])...($|[^a-z0-9])`. Korean keywords use substring (no POSIX word boundary for CJK).
CI: "Delphi programming with morphism patterns" → action:none assertion.

🟡 Medium — trailing-space keyword trick (Gemini)
Before: "pci ", "rfp " with trailing spaces to avoid partial matches — fails if followed by punctuation.
After: removed trailing spaces; word-boundary match handles boundaries correctly.
CI: "PCI compliance for card processing." → action:hard-require assertion.

🟡 Medium — ledger non-atomic read-modify-write (Gemini)
Before: concurrent `record` calls could lose entries.
After: fcntl advisory lock on ~/.preview-forge/escalation-history.lock around load→append→save. No-op on Windows (no fcntl).
CI: 5-way concurrent `record` with & assertion that all 5 entries persisted.

Keyword bank restructure: split EN/KO per category so matchers can use the right strategy (word-boundary for ASCII, substring for CJK). Multi-word EN phrases get a third matcher with non-word-char boundaries.

Also fixed ordering bug exposed during debug: POLICY / HARD_SET / SOFT_SET were loaded AFTER the partition loop, so HARD_HITS was always empty. Moved policy load before partition.

Test matrix growth:
recommend-profile: 9 → 10 (+ profile-filter + word-boundary + PCI without space)
escalation-ledger: 8 → 9 (+ 5-way concurrent write)
verify-plugin: 46/46 unchanged.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
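The advisory-lock fix for the ledger can be sketched like this (POSIX-only; the file layout and entry shape are assumptions, not the hook's exact code):

```python
import fcntl, json, os

def record_with_lock(ledger_path, entry, lock_path):
    """Serialize load -> append -> save so concurrent records cannot lose entries."""
    with open(lock_path, "a") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)  # blocks until any in-flight writer finishes
        try:
            entries = []
            if os.path.exists(ledger_path):
                with open(ledger_path) as f:
                    entries = json.load(f)
            entries.append(entry)
            tmp = ledger_path + ".tmp"
            with open(tmp, "w") as f:
                json.dump(entries, f)
            os.replace(tmp, ledger_path)  # atomic rename, as before
        finally:
            fcntl.flock(lock, fcntl.LOCK_UN)
```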
After Phase T addressed Gemini/Codex first pass, CodeRabbit did a deeper
sweep. All legitimate findings applied. CI matrix assertions strengthened.
🟠 Major fixes:
1. Prisma schema tilde expansion
Before: `file:~/.preview-forge/<project>/dev.db` in DATABASE_URL — but
Prisma does NOT expand ~. App would fail silently (connect to literal
dir named "~").
After: Template comments updated; Engineering scaffold writes
ABSOLUTE path via $HOME expansion at scaffold time into .env.
standard.json stack.db_file_location: ~/ → $HOME/.
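A scaffold-time sketch of the absolute-path rewrite (the project name below is hypothetical):

```python
import os

# Prisma does NOT expand "~", so the scaffold must bake an absolute path
# into the DATABASE_URL it writes to .env.
project = "demo-app"  # hypothetical
db_path = os.path.join(os.path.expanduser("~"), ".preview-forge", project, "dev.db")
env_line = f'DATABASE_URL="file:{db_path}"'
print(env_line)
```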
2. Agent scope conflict
Before: run-supervisor.md Bash=none, but pre-flight steps 8-9 invoke
recommend-profile.sh + escalation-ledger.py via Bash. Contradiction.
After: allowed_scope lists explicit read-only pre-flight scripts
(detect-surface, recommend-profile, preview-cache key|get, pre-flight,
escalation-ledger, cost-regression). Destructive Bash still blocked
by Rule 6.
3. SQLite rollback journal missing from gitignore
Before: *.db-wal + *.db-shm covered WAL mode only. SQLite's default
rollback mode leaves *.db-journal and *.sqlite-journal sidecars.
After: gitignore.standard.template covers all 4 sidecars + .sqlite*.
4. .env variants too permissive
Before: .env, .env.local, .env.*.local — but .env.production,
.env.staging, .env.development slipped through and could leak secrets.
After: .env.* with !.env.example exception. Block-all-then-allow.
5. docker-compose hardcoded credentials
Before: `POSTGRES_PASSWORD: pf` literal in compose.yml — secret
scanner tripwire + production-copy risk.
After: env_file: [.env] on both services. graduate.sh emits
.env.example with `change-me-before-deploy` placeholder. Compose
reads from .env (gitignored).
6. Schema nested-required fields missing
Before: `stack: {}` and `profile_escalation: {}` passed schema
validation, but runtime expected upgrade_to / confidence_threshold /
min_distinct_categories / hard_require_signals / soft_suggest_categories.
After: schema requires those fields inside the nested blocks.
All 3 existing profile files validate (they were already populated).
7. signal_hash truncated to 16-hex + no normalisation
Before: sha256(...).hexdigest()[:16] with raw category list. Case
variance ("COMPLIANCE" vs "compliance") and duplicates produced
different hashes, bypassing replay suppression.
After: full 64-hex sha256 over sorted({c.lower() for c in cats if c}).
CI asserts case+dedup invariance AND 64-char length.
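The normalisation in fix 7, sketched (the "," join separator is an assumption; only the lowercase/dedupe/sort/full-digest behavior comes from the text above):

```python
import hashlib

def signal_hash(categories):
    """Full 64-hex sha256 over sorted, lowercased, de-duplicated categories."""
    canon = ",".join(sorted({c.lower() for c in categories if c}))
    return hashlib.sha256(canon.encode("utf-8")).hexdigest()

a = signal_hash(["COMPLIANCE", "multi_tenant"])
b = signal_hash(["multi_tenant", "compliance", "compliance"])
print(a == b, len(a))  # True 64
```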
🟡 Minor fixes:
8. Pro profile description: removed "redis" reference (graduate.sh only
generates postgres service; redis claim was aspirational).
9. README pro-profile DB cell: "SQLite → Postgres" → "Postgres (dev-prod
parity)" for consistency with settings.json stack.db="postgres".
10. standard-schema-lint: expanded pattern coverage
- Added $queryRaw / $queryRawUnsafe (was only $executeRaw)
- Added ::interval Postgres cast
- Added @db.Xml/Citext/Inet/Macaddr/Bit/VarBit (Postgres-only types)
- New: scans prisma/migrations/*.sql for Postgres casts (raw SQL in
migration files also breaks portability, previously uncaught).
Test matrix growth:
escalation-ledger: 9 → 10 (+ case/dedup invariance + 64-char length)
All other matrices stable.
verify-plugin: 46/46 (unchanged).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…t minor)
CodeRabbit noticed the negative schema-lint test used `&& { FAIL } || true`
which treats ANY non-zero exit as "success" — linter crashes, usage errors,
or import errors would falsely pass the negative case.
Now explicitly captures exit code and asserts -eq 2 per the script's
contract (2 = non-portable feature detected). Positive case asserts -eq 0.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 14fe872e59
```bash
EN_HARD_PHI=("hipaa" "ehr" "healthcare")
EN_HARD_PII=("gdpr storage" "personally identifiable" "social security" "passport number")
```
Detect PII/PHI acronyms in hard-require signal bank
The hard-signal categories are described as non-dismissible for sensitive domains, but the English keyword arrays omit the literal acronyms pii and phi. Inputs like “store PII for onboarding” or “PHI retention workflow” currently fall through to action: none, so standard runs can skip mandatory escalation for explicitly sensitive data handling. Add these acronym tokens (with the existing word-boundary matcher) so explicit PII/PHI mentions reliably trigger hard-require behavior.
```python
if not plugin_root:
    print(json.dumps(default)); sys.exit(0)
p = os.path.join(plugin_root, 'profiles', f'{profile_name}.json')
```
Preserve profile-specific policy when plugin root is missing
If CLAUDE_PLUGIN_ROOT is unset, the script always returns one global default policy (upgrade_to: pro plus standard hard/soft sets), regardless of the current profile. In a pro run this can produce action: hard-require with recommended: pro for enterprise text, which sends callers into a forced-upgrade path with no actual profile change and prevents intended pro→max routing. The fallback should be derived from profile_name (or error out) rather than hardcoding the standard default policy for every profile.
```python
latest = matches[-1]
age = int(time.time()) - int(latest.get("timestamp", 0))

if latest.get("user_response") == "declined" and age < SUPPRESSION_WINDOW_SECONDS:
```
Suppress replay prompts for declined_twice responses
The ledger entry contract documents user_response values including declined_twice, but replay suppression only checks for declined. If a caller records declined_twice, replay_safe returns safe-to-prompt immediately and the same signal set can re-prompt within the 24h window, undermining the anti-nagging behavior. Include declined_twice in the suppression check (or normalize/validate responses at write time).
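The suggested inclusive check, sketched (function name illustrative; the window constant matches the documented 24h ledger behavior):

```python
import time

SUPPRESSION_WINDOW_SECONDS = 24 * 3600  # 24h anti-nagging window

def replay_suppressed(latest, now=None):
    """Both decline variants suppress re-prompting inside the window."""
    now = int(time.time()) if now is None else now
    age = now - int(latest.get("timestamp", 0))
    return (latest.get("user_response") in ("declined", "declined_twice")
            and age < SUPPRESSION_WINDOW_SECONDS)

now = int(time.time())
print(replay_suppressed({"user_response": "declined_twice", "timestamp": now - 3600}, now))  # True
print(replay_suppressed({"user_response": "declined", "timestamp": now - 2 * 86400}, now))   # False
```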
Summary
v1.4.0 flips the default profile from `pro` to `standard`, makes standard a true local-first MVP (Next.js + SQLite + no Docker — `npm install && npm run dev` in seconds), and adds a pre-flight profile escalation system that routes enterprise-signal ideas to the right profile before tokens are spent.
Phases: N → O → P → Q → R (one commit per phase).
Why (10-expert panel outcome: 3 APPROVE / 6 MODIFY / 1 REJECT-subset)
Christensen/Kim-Mauborgne/Taleb unanimous APPROVE on hackathon JTBD + local-first Blue Ocean + Docker-removal antifragility. Collins gate met (standard cost ceiling measured). Refactoring / security / backend / devops / quality / system / requirements panelists all voted MODIFY with specific tightening — every concern addressed below. Root-cause-analyst's REJECT-subset ("reuse existing `escalation` config instead of building new subsystem") accepted: this PR extends the existing `profile_escalation` block rather than creating a parallel system.

What changes
1. Default profile `pro` → `standard`

- settings.json: `pf.defaultProfile = "standard"` (was `"pro"`)
- Migration (no `--profile` ever used): one-time stderr notice "pf: default profile changed standard←pro (v1.4.0)" + README link. Suppressed via `~/.preview-forge/default-notice-shown`.
- `--profile=pro` / `--profile=max` still work unchanged — explicit users unaffected.

2. Local-first MVP templates (standard only)
- `assets/prisma.schema.standard.template`: `provider="sqlite"`, DB URL → `~/.preview-forge/<project>/dev.db` (OUTSIDE repo, security-engineer CP-2). String columns, no enum / no `@db.JsonB`, for Postgres portability.
- `assets/gitignore.standard.template`: `*.db`, `*.db-wal`, `*.db-shm`, `*.sqlite*` — defense-in-depth even though the DB is outside the repo.
- `assets/README.standard.template`
- `assets/graduate.sh.template`: `bash scripts/graduate.sh pro` writes Dockerfile + compose.yml + Postgres datasource without regenerating app code (devops-architect CP-1).
- `scripts/standard-schema-lint.py`: flags `@db.JsonB` / Postgres-specific raw SQL. Exit 2 with line:number + fix suggestion. Called by graduate.sh before any conversion.

3. Profile escalation (pre-flight)
- `scripts/recommend-profile.sh`: bilingual EN+KO categorical signal scorer (devops-architect CP-2: category vector, not raw keyword count).
  - Hard-require categories: `payments` (Stripe, PCI, subscription, 결제, 구독), `phi_healthcare` (HIPAA, PHI, EHR, 의료, 환자), `pii_storage` (personally identifiable, 주민등록, 개인정보), `auth_provider` (SAML provider, OIDC provider, SSO host)
  - Soft-suggest categories: `compliance`, `multi_tenant`, `enterprise_b2b`, `scale`
  - Output: `{action: "hard-require" | "ask" | "hint" | "none", recommended, score, signals: {...}}`
- `hooks/escalation-ledger.py`: decision ledger at `~/.preview-forge/escalation-history.json`. Sha256 signal_hash keys, 24h suppression window on declined decisions, 200-entry cap, atomic write (tmpfile+rename). `record / lookup / replay_safe / hash` subcommands.
  - Example: if the user declined `[compliance, multi_tenant]` 10h ago, the same-hash prompt is suppressed on next run.
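The scorer's category-vector decision and the ledger's hash keys can be sketched as follows. Function names are illustrative, order-insensitivity of the hash is an assumption, and the real scorer additionally applies the configured confidence thresholds:

```python
import hashlib

def signal_hash(categories: list[str]) -> str:
    """Deterministic sha256 ledger key for a set of matched categories.
    Sorting makes it order-insensitive, so [compliance, multi_tenant]
    and [multi_tenant, compliance] map to the same ledger entry."""
    return hashlib.sha256(",".join(sorted(categories)).encode()).hexdigest()

def recommend(hard: list[str], soft: list[str],
              min_distinct_categories: int = 2) -> str:
    """Category-vector decision: any hard signal forces an upgrade;
    soft signals only ask once enough *distinct* categories match."""
    if hard:
        return "hard-require"
    if len(set(soft)) >= min_distinct_categories:
        return "ask"
    if soft:
        return "hint"
    return "none"

print(recommend(hard=["payments"], soft=["compliance"]))        # hard-require
print(recommend(hard=[], soft=["compliance", "multi_tenant"]))  # ask
print(recommend(hard=[], soft=["scale"]))                       # hint
```

Counting distinct categories rather than raw keyword hits means ten mentions of "scale" still only produce a static hint, while two different enterprise categories trigger an AskUserQuestion.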
hard-require→ AskUserQuestion (no dismiss), force upgrade, record asforcedask→ replay_safe gate → AskUserQuestion → record responsehint→ static hint in/pf:status, no promptnone→ no-op4. Schema (additive, v1.3.0 profiles still validate)
`pf-profile.schema.json` gains optional `stack` and `profile_escalation` blocks. All 3 profiles updated.

Panelist concerns — resolution table
- `~/.preview-forge/<project>/dev.db` outside repo + .gitignore sidecar (Phase O)
- `defaultProfile` history in settings.json (Phase N)
- `profile_escalation` schema block; doesn't build a new subsystem (Phase N)
- `/pf:status` output advertises "9/143 active, pro wakes 45, max wakes 143" framing

CI matrices added
`verify-plugin.sh`: 46/46 checks pass locally (was 45 in v1.3.0).

Test plan
- `plugin.json` + `marketplace.json` (metadata + plugins[0])
- `bash scripts/recommend-profile.sh <<<'{"text":"Stripe SaaS with SOC2"}'` → `action: hard-require`, `signals.hard_require: [payments]`, `signals.soft_suggest: [compliance]`
- `python3 plugins/preview-forge/hooks/escalation-ledger.py hash "compliance,multi_tenant"` is deterministic
- No `--profile` flag → uses standard automatically, stderr shows the default-change notice once

Rollout
- `/plugin update pf@two-weeks-team` on existing installs → pulls v1.4.0
- `/pf:new` without explicit profile → uses standard + shows one-time stderr notice

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Summary by CodeRabbit
Release Notes
New Features
Documentation
Tests