LCORE-2235: feature design tooling improvements #1725
Conversation
Per the LCORE-1736 review of the Epic-rework approach (Option D), the Proposed JIRAs section now uses heading hierarchy to group JIRAs by Epic:

- `## Proposed JIRAs` — section heading (H2, unchanged)
- `### Epic: <name>` — Epic boundary (H3, NEW)
- `#### LCORE-???? <title>` — child JIRA stub (H4, demoted from H3)

Each `### Epic:` heading is followed by prose (Goals, optional Scope and Success criteria) that becomes the filed Epic's description. The existing kickoff Story + step-definitions Task + third TODO stub are now H4 children under a default `### Epic: Implement TODO feature` block, demoted from their previous H3 level.

The intro to the Proposed JIRAs section explains the new shape and notes that single-Epic features can keep one Epic block while larger features can group by aspect (Implementation, Docs, Tests, ...). A commented-out multi-Epic example (`### Epic: Documentation` with a child JIRA stub) sits at the bottom of the section to illustrate the multi-Epic shape; users uncomment and adapt as needed.

The `## Proposed incidental JIRAs` section keeps its flat `### LCORE-` shape — incidentals are by nature ungrouped findings that don't belong under any one Epic.

This commit only updates the template. The corresponding `file-jiras.sh` parser update (which actually consumes the new shape) follows in a separate commit. Branch will be renamed to lcore-XXXX-feature-design-tooling-improvements once the umbrella tooling JIRA is filed.
…outing

Implements the Epic-rework agreed in LCORE-1736 review (Option D): the spike doc carries `### Epic: <name>` H3 sub-sections with prose + `#### LCORE-???? <title>` H4 children. The script parses the heading hierarchy, generates one Epic file per group plus child files with `<!-- parent_epic_file: <stub> -->` metadata, and during filing routes each child to its parent Epic's filed key.

Parser changes (the embedded Python heredoc):

- Strips multi-line HTML comment blocks before parsing, so commented-out template examples (e.g., the multi-Epic example block at the bottom of the Proposed JIRAs section in spike-template.md) don't leak into the parsed output.
- Locates the Proposed JIRAs section, accepting both H1 and H2 headings (older spikes used H1 throughout — see the conversation-compaction reference).
- Locates an optional Proposed incidental JIRAs section (same H1/H2 flexibility).
- Detects `### Epic: <name>` boundaries within Proposed JIRAs. For each Epic block, extracts prose (Goals/Scope) between the Epic heading and the first H4 child, and parses H4 children as `#### LCORE-???? title`.
- Backward-compat: if no `### Epic:` boundaries are present, falls back to the legacy flat shape (H3 LCORE-???? children grouped under a single auto-generated Epic). The auto-Epic name is derived from the spike doc's parent directory name, as before.
- Recognizes already-filed real keys in headings (e.g., `### LCORE-1569: Add token estimation`) alongside placeholders (`### LCORE-???? ...`), and preserves real keys in the generated child files. Filing such files sends a PUT (update) instead of a POST (create) — useful for re-syncing spike docs whose JIRAs are already filed.
- Incidental tickets get `<!-- incidental: true -->` metadata; they are filed under FEATURE_TICKET directly, with no Epic parent.
- The metadata file `.meta.json` now records `epic_count`, `jira_count`, `incidental_count`, and `parse_mode` (`epic_grouped` | `legacy_flat` | `empty`).
Bash-side filing changes:

- New helpers `get_parent_epic_file` and `is_incidental` read the new metadata comments.
- `show_summary` displays per-file parents: each child shows its parent Epic's filed key (or `(unfiled: <stub>)` if the Epic hasn't been filed yet); incidentals show `<feature-ticket> (incidental)`; Epics show the feature ticket as parent.
- `file_ticket` routes children to their `parent_epic_file`'s filed key. If a child's parent Epic has not been filed yet, the script errors clearly and tells the user to file the Epic first. Incidentals go straight to FEATURE_TICKET. Legacy-mode children (no `parent_epic_file` metadata) still use the global EPIC_KEY flow with the existing `ensure_epic_key` prompt as fallback.
- The spike-to-Epic "Informs" link is now created against the FIRST filed Epic (was: only Epic). This preserves existing single-Epic behavior while supporting multi-Epic filings.

Smoke-tested against docs/design/conversation-compaction/conversation-compaction-spike.md (legacy flat shape with H1 headings, mixed real/placeholder keys): the parser detects 7 real keys (LCORE-1569..LCORE-1575), 2 placeholders (kickoff + step-defs), and 1 auto-generated Epic; backward-compat works.

Subsequent commits will add a structure linter and update the /file-jiras skill text to document the new convention. Branch will be renamed to lcore-XXXX-feature-design-tooling-improvements once the umbrella tooling JIRA is filed.
…ction

Inline structure check that runs immediately after parsing. Emits warnings to stderr for likely mistakes; reserves an error (non-zero exit) for the unparseable case (zero JIRAs).

Checks implemented:

- **Mixed shape** (warning): both `### Epic:` boundaries and flat `### LCORE-...` H3 stubs in the same Proposed JIRAs section. The flat ones won't be parsed under any Epic; either demote them to `#### LCORE-...` under an Epic heading or remove the Epic boundaries.
- **Empty Epic** (warning): an `### Epic:` block with no `#### LCORE-...` H4 children. The Epic would be filed but have nothing under it.
- **Duplicate JIRA title** (warning): two child JIRAs with the same title (case-insensitive, after stripping the LCORE-... prefix). The parser uses titles for filename generation and could collide.
- **Zero JIRAs** (error): no children parsed at all from the section. Exits non-zero so the user sees the failure clearly rather than silently filing nothing.

Smoke-tested:

- conversation-compaction-spike.md (legacy flat, mixed real keys + 2 placeholders): no warnings, parses cleanly.
- spike-template.md (Epic-grouped with the multi-Epic example commented out): no warnings, parses cleanly (the comment block is stripped).
- Synthetic spike with one Epic + one flat `### LCORE`: warning fires.
- Synthetic spike with an empty Epic: warning fires.
- Synthetic spike with no JIRAs: error fires, exit code 1.
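The warning-vs-error split above can be sketched as a small lint pass. A hypothetical shape, not the shipped code: the `lint_structure` name and its input types (`epics` mapping Epic name to child titles, `flat_children` for stray H3 stubs) are assumptions for illustration.

```python
import sys


def lint_structure(epics: dict, flat_children: list) -> int:
    """Warn to stderr on likely mistakes; only zero JIRAs is fatal
    (non-zero return), mirroring the checks listed above."""
    def warn(msg):
        print(f"warning: {msg}", file=sys.stderr)

    if epics and flat_children:
        warn("mixed shape: flat '### LCORE-' stubs alongside '### Epic:' blocks")
    for name, children in epics.items():
        if not children:
            warn(f"empty Epic: '{name}' has no '#### LCORE-' children")
    titles = [t.lower() for kids in epics.values() for t in kids]
    titles += [t.lower() for t in flat_children]
    seen = set()
    for t in titles:
        if t in seen:
            warn(f"duplicate JIRA title: '{t}'")
        seen.add(t)
    if not titles:
        print("error: no JIRAs parsed from the section", file=sys.stderr)
        return 1
    return 0
```

Keeping warnings on stderr leaves stdout clean for the parsed-file summary, which matters once the script runs under `--parse-only` in CI.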
…howto

/file-jiras skill (`.claude/commands/file-jiras.md`): adds a "Spike doc shape the parser expects" section showing the new `### Epic:` + `#### LCORE-` heading hierarchy, with examples. Documents:

- The `parent_epic_file` metadata that routes children at filing time.
- Backward-compat for the legacy flat shape (auto-generated Epic).
- Real-key preservation (`### LCORE-1569: ...` → PUT/update).
- Incidental tickets file under FEATURE_TICKET directly.
- Linter behaviors (warnings vs error-with-non-zero-exit).
- Multi-Epic filing order (Epic before its children, repeat per Epic).

The `echo "quit"` hack instruction stays for now (will be replaced with `--parse-only` in a later commit).

`howto-run-a-spike.md` step 5: adds a bullet documenting the Epic-grouping convention so authors know what shape the spike-template demonstrates and what the script consumes.
Replaces the previous `echo "quit"` hack the /file-jiras skill relied on to get parse-and-exit behavior. The new flag is a clean short-circuit:

- `--parse-only` (alias `--dry-run`) — skip the interactive filing loop and the Jira credentials check. The parser writes the parsed files to the output directory, prints a one-line summary listing them, and exits 0.
- `ensure_jira_credentials` is now skipped when `--parse-only` is set, so the script can run in CI, in pre-commit hooks, or via an agent that doesn't have credentials configured.
- Help text updated to mention the new flag and its use cases.
- /file-jiras skill (`.claude/commands/file-jiras.md`) step 1 updated to use `--parse-only` instead of the `echo "quit"` hack.

Smoke-tested against docs/design/conversation-compaction/conversation-compaction-spike.md: parses 1 Epic + 9 JIRAs (legacy flat mode), writes 10 files, exits 0 with no Jira API calls attempted.
Adds an optional `--comments` flag to fetch-jira.sh that also fetches and prints the ticket's comment thread. Comments often carry decisions or context that the description doesn't capture (e.g., scope negotiations in standup recap comments); having the script surface them avoids the manual JIRA web-UI step.

Implementation:

- New flag `--comments` parsed at the front of the argument list (before any positional ticket key), so existing positional invocations keep working unchanged.
- When set, `fetch_ticket` makes a second curl call to /rest/api/3/issue/<key>/comment and passes both responses to the Python heredoc.
- The ADF text extractor used for ticket descriptions has been hoisted to top level in the Python heredoc so it can be reused for comment bodies (which are also ADF documents).
- Comments are printed after subtasks, with an author display name + date (YYYY-MM-DD) header and the body indented under it. Plain-string comment bodies (rare) are printed verbatim.

Show-help text updated to document the new flag and its purpose.

/spike skill: notes that `--comments` should be passed when scope decisions or context likely live in the ticket's comment thread.

contributing_guide.md: updates the CLI tools listing to show the `[--comments]` flag.

Smoke-tested against LCORE-1311 (which has 2 real comments by Ondrej Metelka and Anxhela Coba): comments rendered with author+date headers, body extracted from ADF, full thread visible inline. Without the flag, output is unchanged from the prior behavior.
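The hoisted ADF text extractor mentioned above might look like the following minimal sketch. It handles only `text` nodes and paragraph-like containers; real ADF has more node types (marks, mentions, media), and the `adf_to_text` name is an assumption, not the script's actual function name.

```python
def adf_to_text(node) -> str:
    """Flatten an ADF document (nested dicts/lists of content
    nodes) into plain text.  Reusable for both the ticket
    description and comment bodies, which share the ADF shape."""
    if isinstance(node, str):
        return node
    if isinstance(node, list):
        return "".join(adf_to_text(n) for n in node)
    if isinstance(node, dict):
        if node.get("type") == "text":
            return node.get("text", "")
        text = adf_to_text(node.get("content", []))
        # Block-level nodes end with a newline for readability.
        if node.get("type") in ("paragraph", "heading"):
            text += "\n"
        return text
    return ""
```

Because the same function accepts either a whole `doc` node or a single comment body, hoisting it to top level is enough to share it between the two curl responses.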
Adds optional `--linked-depth N` flag that recurses N levels into the
ticket's subtasks, linked issues, and parent= JQL children. Default 0
(unchanged behavior: lists related-ticket keys/summaries, doesn't
fetch them). Capped at 3 to prevent runaway fetches.
Implementation:
- Flag parsing accepts non-negative integers up to 3, validated up
front with clear error messages.
- `fetch_ticket` accepts a depth parameter; after printing the main
ticket body it extracts subtask + linked-issue keys from the parsed
data and calls `parent= JQL` for parent-relation children, then
recurses into each at `depth - 1`.
- Module-level `FETCHED_KEYS` tracks already-seen keys (space-delimited
with substring matching). Tickets reachable via multiple paths are
fetched at most once. Cycle-safe.
- Output uses indentation (`indent` parameter, two spaces per level)
so nested ticket sections are visually distinguished.
- At depth 0 the bottom-of-script JQL search runs as before (legacy
behavior: print the children list as a flat summary). At depth > 0
the recursive fetch_ticket already pulled them in, so the bottom
search is skipped to avoid duplication.
/spike skill: mentions `--linked-depth N` for spike kickoff context
gathering — typically `--linked-depth 1` is the right value at the
start of a spike, fetching the feature ticket plus all immediate
relations in one call.
contributing_guide.md: updates the CLI tools listing.
Smoke-tested against LCORE-1311:
- depth 0: 1 ticket fetched (matches prior behavior).
- depth 1: 3 tickets fetched (LCORE-1311 + LCORE-1314 spike (linked)
+ LCORE-1631 epic (parent-relation child)). Cycle protection
confirmed: no duplicates.
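The depth-limited, cycle-safe recursion described above reduces to a small DFS. A sketch under stated assumptions: `graph` stands in for the JIRA API calls (mapping a key to its subtasks + linked issues + parent-relation children), and a Python set plays the role of the bash `FETCHED_KEYS` variable.

```python
def fetch_recursive(key, graph, depth, seen=None, indent=0):
    """Fetch `key`, then recurse into its relations at depth-1.
    `seen` ensures tickets reachable via multiple paths are
    fetched at most once, which also makes cycles safe."""
    if seen is None:
        seen = set()
    if key in seen:
        return []
    seen.add(key)
    out = [("  " * indent) + key]  # two spaces per nesting level
    if depth > 0:
        for rel in graph.get(key, []):
            out += fetch_recursive(rel, graph, depth - 1, seen, indent + 1)
    return out
```

With a cyclic graph (spike links back to the feature ticket), the `seen` check is what terminates the recursion rather than the depth cap alone.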
Adds markdown-table → ADF-table conversion in the parsed-doc → ADF
builder. Previously, markdown tables in spike-doc JIRA bodies were
flattened to plain-text paragraphs with literal `|` and `---` characters
in the JIRA description — readable but unstructured. Now they render
as native ADF tables in JIRA.
Implementation:
- New `is_table_paragraph(para)` — detects GitHub-flavored markdown
tables: paragraph starts with `|` and the second line is a separator
matching `^\|[\s\-:|]+\|?\s*$` (i.e., `|---|---|` or `|:---|---:|`).
- New `parse_table_row(line, cell_kind)` — splits a `| a | b | c |`
line into a `tableRow` ADF node, with each cell wrapping a paragraph.
- New `parse_table(para)` — full table parser. First line is header
row (cells of type `tableHeader`); separator line is dropped;
remaining lines are body rows (cells of type `tableCell`). Returns
an ADF table node with the standard `attrs`
(isNumberColumnEnabled=false, layout=default).
- `parse_block(para)` checks `is_table_paragraph` first and dispatches
to `parse_table` when a table is detected. Falls through to existing
heading / list / codeblock / paragraph parsing otherwise.
Edge cases handled:
- Empty cells produce `{"type": "paragraph", "content": []}` rather
than a malformed empty cell.
- Inline markdown inside cells (bold / code / links) goes through
`parse_inline` like any other paragraph.
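The three helpers above can be sketched roughly as follows, using the separator regex quoted in the commit message. This is an illustrative simplification: cells here become plain `text` nodes rather than going through `parse_inline`, and the cell `attrs` are left empty.

```python
import re

SEPARATOR = re.compile(r"^\|[\s\-:|]+\|?\s*$")


def is_table_paragraph(para: str) -> bool:
    """GitHub-flavored markdown table: first line starts with '|'
    and the second line is a |---|---| separator."""
    lines = para.splitlines()
    return (len(lines) >= 2 and lines[0].lstrip().startswith("|")
            and bool(SEPARATOR.match(lines[1].strip())))


def parse_table_row(line: str, cell_kind: str) -> dict:
    """Split '| a | b |' into a tableRow node; each cell wraps a
    paragraph.  Empty cells get content [] (no malformed cell)."""
    cells = [c.strip() for c in line.strip().strip("|").split("|")]
    return {"type": "tableRow", "content": [
        {"type": cell_kind, "attrs": {}, "content": [
            {"type": "paragraph",
             "content": [{"type": "text", "text": c}] if c else []}]}
        for c in cells]}


def parse_table(para: str) -> dict:
    """Header row -> tableHeader cells; separator dropped;
    remaining rows -> tableCell cells."""
    lines = para.splitlines()
    rows = [parse_table_row(lines[0], "tableHeader")]
    rows += [parse_table_row(l, "tableCell") for l in lines[2:]]
    return {"type": "table",
            "attrs": {"isNumberColumnEnabled": False, "layout": "default"},
            "content": rows}
```

Dispatching on `is_table_paragraph` before the heading/list/codeblock checks keeps non-table paragraphs on the existing path unchanged.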
Smoke-tested with a 4-row table (3 cols, 1 header + 3 body, including
an empty cell): generated ADF JSON structurally matches the
Atlassian ADF spec — header row uses `tableHeader`, body rows use
`tableCell`, all wrap paragraphs, all required `attrs` present.
Real-JIRA round-trip test (update a sandbox ticket, verify rendering,
revert) intentionally NOT done in this commit — the user explicitly
authorized that test and will perform it when first using the feature.
If JIRA rejects the payload, the bug is in this commit.
…cross subshells

Two pre-existing bugs in ensure_epic_key() that bit a user filing tickets under an existing Epic via the option-2 prompt path:

1. **Invisible prompts.** ensure_epic_key writes the "Choice (1/2/3):" and "Epic key:" prompts to stdout via `echo`/`printf`. The interactive `file` command calls file_ticket inside a `key=$(file_ticket "$f")` capture, which swallows stdout into the `$key` variable. Net effect: the user types blindly into the `read -r choice < /dev/tty` call without seeing the prompt. They guess "2" + their Epic key correctly and filing works, but the captured prompt text spills out under "Done:" at the end of the file-loop. Fix: route all user-facing prompt echoes to stderr. The `read` still reads from /dev/tty (the controlling terminal), unaffected. The same fix is applied to the "Invalid choice" error.

2. **EPIC_KEY doesn't survive subshells.** The same `$(file_ticket ...)` capture means any variable assignment (including `EPIC_KEY="LCORE-1631"` from the option-2 prompt) is scoped to the subshell and discarded when it exits. The parent shell still sees `EPIC_KEY=""`. The next file in the file-loop re-runs ensure_epic_key and re-prompts the user for the SAME Epic. Tedious and error-prone. Fix: persist EPIC_KEY to a small state file at `$JIRA_DIR/.epic-key` after the option-2 prompt OR after option 1 files an Epic. ensure_epic_key reads from the state file on entry, skipping the prompt if a value is present. The parent shell's file-loop tail also reads from the state file so show_summary reflects the right parent in the Parent column. The state file is auto-cleaned on re-parse (`rm -rf $JIRA_DIR` removes it), so cross-session staleness is not a concern for the typical flow (one feature per JIRA_DIR).

The same persistence also applies to file_ticket's direct-Epic-filing path (when a file has `<!-- type: Epic -->`): EPIC_KEY captured from file_single_ticket's return value is now written to the state file in the same way.
Smoke-tested: bash syntax OK; --parse-only still works (3+9 parse against conversation-compaction-spike.md unchanged). End-to-end interactive flow (option-2 path) verified manually in the LCORE-1673 amend + LCORE-2230 step-defs filing that motivated this fix.
Walkthrough

This PR enhances the JIRA tooling to support richer ticket fetching and Epic-based JIRA organization. It adds comment and recursive issue fetching to fetch-jira.sh, rewrites the spike-doc parser to support Epic grouping with multi-Epic filing, introduces an offline parse-only mode, and updates documentation.

Changes

JIRA tooling enhancements: recursive fetching, Epic-based grouping, and parse-only mode
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In @.claude/commands/file-jiras.md:
- Around line 23-38: The fenced code block that shows the spike doc structure is
missing a language identifier; update the opening triple backticks for that
block (the line that currently reads "```" immediately before "## Proposed
JIRAs") to include "markdown" (i.e., change it to "```markdown") so markdownlint
passes and syntax highlighting is enabled; no other content changes required.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 12b1dd89-3a7f-48d0-9cc6-93a5b75e00df
📒 Files selected for processing (7)
- .claude/commands/file-jiras.md
- .claude/commands/spike.md
- dev-tools/fetch-jira.sh
- dev-tools/file-jiras.sh
- docs/contributing/howto-run-a-spike.md
- docs/contributing/templates/spike-template.md
- docs/contributing_guide.md
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (9)
- GitHub Check: build-pr
- GitHub Check: E2E: library mode / ci / group 3
- GitHub Check: E2E: server mode / ci / group 1
- GitHub Check: E2E: library mode / ci / group 1
- GitHub Check: E2E: server mode / ci / group 3
- GitHub Check: E2E Tests for Lightspeed Evaluation job
- GitHub Check: E2E: server mode / ci / group 2
- GitHub Check: E2E: library mode / ci / group 2
- GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
🧰 Additional context used
🪛 markdownlint-cli2 (0.22.1)
.claude/commands/file-jiras.md
[warning] 23-23: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (23)
docs/contributing/templates/spike-template.md (1)

- 87-207: LGTM!

dev-tools/file-jiras.sh (13)

- 49-96: LGTM!
- 112-125: LGTM!
- 173-184: LGTM!
- 246-320: LGTM!
- 325-404: LGTM!
- 406-427: LGTM!
- 429-508: LGTM!
- 543-571: LGTM!
- 586-653: LGTM!
- 778-856: LGTM!
- 957-1034: LGTM!
- 1036-1046: LGTM!
- 1128-1161: LGTM!

.claude/commands/file-jiras.md (1)

- 18-96: LGTM!

dev-tools/fetch-jira.sh (5)

- 20-75: LGTM!
- 85-98: LGTM!
- 105-244: LGTM!
- 251-294: LGTM!
- 297-340: LGTM!

.claude/commands/spike.md (1)

- 39-50: LGTM!

docs/contributing_guide.md (1)

- 176-176: LGTM!

docs/contributing/howto-run-a-spike.md (1)

- 117-126: LGTM!
…nced code block

Addresses CodeRabbit feedback on PR lightspeed-core#1725 (`.claude/commands/file-jiras.md` line 23): the fenced code block showing the spike-doc Proposed-JIRAs shape was missing a language identifier, triggering markdownlint MD040 (fenced-code-language). Adds `markdown` after the opening triple backticks. No content change inside the block.
Description
Tooling-side companion to LCORE-1736 (feature design process improvements). This implements the items described in LCORE-2235's scope:
- `file-jiras.sh`: Epic-grouping parser (heading hierarchy: `### Epic: <name>` + `#### LCORE-???? <title>`), multi-Epic filing routing, structure linter, `--parse-only` flag, ADF table emission for markdown tables, and bug fixes for invisible prompts + EPIC_KEY persistence across subshells.
- `fetch-jira.sh`: `--comments` flag, `--linked-depth N` flag.
- `/file-jiras` skill, `/spike` skill, `contributing_guide.md`, `howto-run-a-spike.md` updated to document the conventions and flag usage.

Backward-compatible with the legacy flat shape — old spike docs without `### Epic:` boundaries still parse via the auto-Epic fallback.

Verified:

- `conversation-compaction-spike.md` (legacy flat, mixed real/placeholder keys): clean parse, 1 Epic + 9 JIRAs.
- `--parse-only`: end-to-end run without credentials, no API calls.
- `--comments`: fetched LCORE-1311's 2 real comments correctly.
- `--linked-depth 1`: recursed into LCORE-1311 → LCORE-1314 + LCORE-1631.
- `.epic-key` state file persists across `$(file_ticket)` subshells so subsequent tickets in the same file-loop don't re-prompt.

Type of change
Tools used to create PR
Identify any AI code assistants used in this PR (for transparency and review context)
Related Tickets & Documents
Checklist before requesting a review
Testing
This PR changes
`dev-tools/` shell scripts and the skill / howto markdown files that document them. There is no Python source code path affected; no unit/integration test suite applies.

Manual verifications performed (also listed in the Description):

- `dev-tools/file-jiras.sh --spike-doc docs/design/conversation-compaction/conversation-compaction-spike.md --feature-ticket LCORE-1311 --parse-only`. Expected: parses 1 Epic + 9 JIRAs in `legacy_flat` mode, exits 0.
- `--comments` flag — run `dev-tools/fetch-jira.sh --comments 1311`. Expected: output includes a `Comments (N):` block with author + date + body of each comment.
- `--linked-depth` flag — run `dev-tools/fetch-jira.sh --linked-depth 1 1311`. Expected: 3 `===` markers (LCORE-1311 + LCORE-1314 spike + LCORE-1631 epic), indented children.
- `parent_epic_file` metadata: run `dev-tools/file-jiras.sh` interactively, type `file 0`, answer the visible Epic prompt with option 2 + a fake key, observe that `file 1` does NOT re-prompt and `$JIRA_DIR/.epic-key` contains the key.

Summary by CodeRabbit
Release Notes
New Features
Documentation