Analyze the task description to determine clarity:

**If unclear or ambiguous:** Use /create-prompt to ask clarifying questions and create a structured prompt. This ensures we capture all requirements upfront and saves time later. The /create-prompt workflow will:

- Ask targeted clarification questions
- Create a structured prompt document
- Offer to execute immediately

**If clear and unambiguous:** Proceed directly to implementation.

Quick clarity check:

- Can you identify the exact files to modify? If no → use /create-prompt
- Are there multiple valid approaches? If yes → use /create-prompt
- Is the expected outcome measurable? If no → use /create-prompt
</task-preparation>

<worktree-setup>
Create an isolated development environment using /setup-environment:

Git worktree setup (auto-detected):

- Create worktree with branch
- Run: /setup-environment
- Automatically detects worktree context
- Smoke test only (15-30 seconds)
- Main repo already validated everything

The /setup-environment command is smart:

- Detects .gitworktrees/ path → minimal setup
- Detects existing node_modules → minimal setup
- Fresh clone without dependencies → full validation

No need to specify verification level - the command figures out the right approach based on context. Git worktrees get fast setup, new machines get thorough validation.
</worktree-setup>
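The "create worktree with branch" step above can be sketched with plain git commands. This is illustrative only: the throwaway repo exists so the commands are runnable end to end, and the branch name and `.gitworktrees/` location are assumptions taken from this document, not universal conventions.

```shell
# Create a disposable repo so the sketch is self-contained
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# Create an isolated worktree on its own branch
git worktree add .gitworktrees/feature-x -b feature-x
cd .gitworktrees/feature-x
git branch --show-current   # → feature-x
```

After this, /setup-environment would run inside the worktree and detect the `.gitworktrees/` path for minimal setup.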
<autonomous-execution>
… implementation decisions, constraint discoveries, and why choices were made.

<obstacle-and-decision-handling>
Pause only for deal-killers: security risks, data loss potential, or fundamentally unclear requirements. For everything else, make a reasonable choice and document it.

Design decisions get documented in the PR with rationale and alternatives considered. The executing model knows when to ask vs when to decide and document.
</obstacle-and-decision-handling>
99
106
100
107
<validation-and-review>
…
- Fix only if hooks fail

**Targeted validation (complex features):**

- Run specific tests for changed code
- Use Rivera for architecture review if patterns change

The principle: Don't duplicate what git hooks already do. They'll catch formatting, linting, and test failures at commit time. Only add extra validation when the risk justifies it.
</validation-and-review>

<create-pr>
Deliver a well-documented pull request ready for review, with commits following .cursor/rules/git-commit-message.mdc.

PR description must include:

Summary:

- What was implemented and why
- How it addresses the requirements

Design Decisions (if any were made):

- List each significant decision with rationale
- Note alternatives considered and trade-offs
- Explain why each approach was chosen

Obstacles Encountered (if any):

- Document any challenges faced
- How they were resolved or worked around

Testing:

- What validation was performed
- Any edge cases considered

This transparency helps reviewers understand not just what changed, but why specific approaches were chosen and what was considered along the way.
</create-pr>
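The required sections above could render as a PR body skeleton like this. The headings are one possible layout, not a mandated format:

```markdown
## Summary
What was implemented, why, and how it addresses the requirements.

## Design Decisions
- Decision, rationale, alternatives considered, trade-offs.

## Obstacles Encountered
- Challenge faced and how it was resolved or worked around.

## Testing
- Validation performed and edge cases considered.
```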
<bot-feedback-loop>
Autonomously address valuable bot feedback, reject what's not applicable, and deliver a PR ready for human review with all critical issues resolved.
… times if needed until critical issues are resolved.
</bot-feedback-loop>

Provide a summary including:

What was accomplished:

- Core functionality delivered
- Any design decisions made autonomously
- Obstacles overcome without user intervention

Key highlights:

- Elegant solutions or optimizations
- Significant issues found and fixed
- Bot feedback addressed

Transparency note if applicable: "Made [N] design decisions autonomously - all documented in the PR for your review."

Include the PR URL and worktree location. Scale the summary length to complexity - simple tasks get brief summaries, complex features deserve detailed explanations.

File: .claude/commands/setup-environment.md (+14 −11)
description: Initialize development environment for git worktree

# Setup Development Environment

Initialize development environment with context-aware setup that distinguishes between git worktrees and new machines.
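The context detection this document describes (a `.gitworktrees/` path or existing `node_modules` means minimal setup, a fresh clone means full validation) can be sketched as a small function. This mirrors the document's heuristics and is not an official algorithm:

```shell
# Decide setup depth from the target directory's context
setup_mode() {
  case "$1" in
    *".gitworktrees/"*) echo minimal; return ;;  # worktree path → fast setup
  esac
  if [ -d "$1/node_modules" ]; then
    echo minimal                                 # dependencies already present
  else
    echo full                                    # fresh clone → full validation
  fi
}

setup_mode /home/dev/repo/.gitworktrees/feature-x   # → minimal
setup_mode /tmp/fresh-clone                         # → full
```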
<objective>
Get the development environment ready for productive work, with right-sized verification based on context.
…
</objective>

<setup-approach>
…

For git worktrees (15-30 seconds):

- …
- Trust main repo's validation

For new machines (2-5 minutes):

- Install and verify all dependencies
- Set up and test git hooks
- Run build process
- Execute test suite
- Verify all tools are available
</setup-approach>

<project-detection>
Identify the project type and package manager by examining project files:

Project types:

- Node.js: package.json present
- Python: requirements.txt or Pipfile present
- Ruby: Gemfile present
- …
- .NET: .csproj files present

Package managers for Node.js:

- pnpm if pnpm-lock.yaml exists
- yarn if yarn.lock exists
- bun if bun.lockb exists
- npm as default fallback
</project-detection>
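The lockfile checks above, in the same priority order, can be sketched as:

```shell
# Detect the Node.js package manager for a project directory
detect_node_pm() {
  if   [ -f "$1/pnpm-lock.yaml" ]; then echo pnpm
  elif [ -f "$1/yarn.lock" ];     then echo yarn
  elif [ -f "$1/bun.lockb" ];     then echo bun
  else                                  echo npm   # default fallback
  fi
}

dir=$(mktemp -d)
detect_node_pm "$dir"        # → npm (no lockfile yet)
touch "$dir/yarn.lock"
detect_node_pm "$dir"        # → yarn
```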
<dependency-installation>
Install project dependencies using the appropriate package manager. For Node.js projects, use pnpm/yarn/bun/npm based on which lockfile exists. For Python, use pip or pipenv. Install with frozen/locked versions to ensure consistency with the main repository.
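The frozen-install commands implied above can be made explicit. The flags shown are the standard locked-version installs for each tool, but verify them against the versions your project pins:

```shell
# Map a detected package manager to its locked-version install command
frozen_install_cmd() {
  case "$1" in
    pnpm)   echo "pnpm install --frozen-lockfile" ;;
    yarn)   echo "yarn install --frozen-lockfile" ;;
    bun)    echo "bun install --frozen-lockfile" ;;
    npm)    echo "npm ci" ;;
    pip)    echo "pip install -r requirements.txt" ;;
    pipenv) echo "pipenv sync" ;;
  esac
}

frozen_install_cmd npm   # → npm ci
```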
…
</dependency-installation>

<code-generation>
Run any necessary code generation steps the project requires:

- …
- TypeScript: Generate declarations if configured
- Package prepare scripts: Run if defined in package.json

These ensure generated code is available for development.
</code-generation>

<verification>
Verify the environment is ready based on context:

For git worktrees (smoke test only):

- Confirm dependencies were installed successfully
- Run a quick TypeScript compilation check if applicable
- Trust that the main repository's validation is sufficient

For new machines (thorough verification):

- Verify all development tools are available and working
- Run the build process to ensure it completes
- Execute the test suite to confirm everything works
- Test that git hooks function correctly
- Check that all required command-line tools are installed
</verification>
<error-handling>
When encountering failures, identify the root cause and attempt automatic resolution where possible. For issues that require manual intervention, provide clear guidance on how to proceed. Continue with other setup steps when it's safe to do so without the failed component.
…
</error-handling>

File: .cursor/rules/prompt-engineering.mdc (+7 −2)
… you create for LLM consumption.

## Key Principles for LLM-Readable Prompts

- Assume the executing model is smarter: The model executing your prompt is likely more capable than the model that created it. Trust its abilities rather than over-prescribing implementation details.
- Front-load critical information: LLMs give more weight to early content
- Be explicit: LLMs can't infer context the way humans do
- Maintain consistency: Use the same terminology throughout

… what you don't want, even as a counterexample.

When writing prompts for LLM execution (commands, workflows, agents), focus on clear outcomes rather than micro-managing steps. LLMs can figure out implementation details.
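A hypothetical before/after makes the contrast concrete. Both prompts are invented for illustration:

```text
Over-prescribed:  "Open src/utils.ts, add a function named formatDate that
                   takes a Date, use Intl.DateTimeFormat with en-US, then..."
Outcome-focused:  "Dates in the activity feed should display as 'Jan 5, 2025'.
                   Match existing formatting utilities where possible."
```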

Remember: The model executing your prompt is likely more advanced than the model that created it. A prompt written by GPT-4 might be executed by Claude 3.5 Sonnet or GPT-4o. Even prompts written by older versions of the same model will be executed by newer, smarter versions. Trust the executing model's superior capabilities.
### The Over-Prescription Problem in LLM-to-LLM Communication