Merged
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
anthroos added a commit that referenced this pull request on Mar 23, 2026
Ensures Claude Code always follows the search-before/add-after pattern when working in the openexp directory. Includes Q-learning params (do not change), dual-repo workflow, and architecture overview.

Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
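The search-before/add-after discipline described above can be sketched in a few lines. This is an illustrative stand-in only: `run_task`, `do_work`, and the in-memory store are hypothetical, and the `search_memory` / `add_memory` functions merely mimic the MCP tools of the same names rather than binding to them.

```python
# Hypothetical sketch of the search-before/add-after pattern.
# Function names mirror the MCP tools (search_memory, add_memory)
# but the implementations here are illustrative stand-ins.

MEMORY_STORE: list[str] = []

def search_memory(query: str) -> list[str]:
    # Stand-in for the real search_memory MCP tool: naive substring match.
    return [m for m in MEMORY_STORE if query.lower() in m.lower()]

def add_memory(note: str) -> None:
    # Stand-in for the real add_memory MCP tool.
    MEMORY_STORE.append(note)

def run_task(task: str, do_work) -> str:
    # 1. Search before: surface prior experience relevant to the task.
    prior = search_memory(task)
    # 2. Do the work, with prior context available.
    result = do_work(task, prior)
    # 3. Add after: record what happened so future searches can find it.
    add_memory(f"{task}: {result}")
    return result
```

The point of the wrapper shape is that the search and the write are structural, not optional: work cannot run without first consulting memory, and cannot finish without writing back.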
anthroos added a commit that referenced this pull request on Apr 26, 2026
* docs: v2 redesign freeze + Claude Design brief

Architecture freeze for the 2026-04-26 redesign — supersedes the PR #17 pivot and the PR #34 Experience Library labels. Core principle: no pre-labeling. Capture trajectories raw, grade only terminal outcomes (school-style 0–1). Q-learning removed entirely after 8 months of mean Q = 0.006.

Two artifacts:
- docs/redesign-2026-04-26.md — internal architecture freeze with epics E1–E6, schema, pilot plan, decision log
- docs/claude-design-brief.md — self-contained brief for a parallel Claude Design workstream covering site rewrite, promotion, and funding submissions

No code changes. Code-level cleanup (E6) follows after E1–E5 prove themselves.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* docs: scope Claude Design brief to landing page only

The original brief over-scoped the parallel agent into three streams (landing page + promotion + funding). Founder voice and funder-specific numbers don't outsource well. Trim to a single deliverable — one static HTML landing page replacing welabeldata.com/openexp/. Promotion (HN, essays, DMs) and funding applications stay with the founder.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* docs: warn Claude Design not to pull from outdated README or v1 site

The GitHub README and the existing welabeldata.com/openexp/ landing page both still describe the v1 Q-learning architecture removed on 2026-04-26. Without an explicit warning, a fresh agent will pull facts from those sources and reintroduce the deprecated framing. Add a "Sources of Truth" section to the brief naming both as out of date and instructing the agent to ask the founder rather than pull from the repo.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* docs: add anonymize + extract_experience system prompts

Two-step pipeline for producing shareable OpenExp experience artifacts from raw trajectory data:
1. prompts/anonymize.md — strips PII (names, companies, amounts, dates, URLs) while preserving structural features (sequence, timing, channel, actor roles, reasoning chains). Output: a clean anonymized trajectory YAML.
2. prompts/extract_experience.md — wraps an anonymized trajectory with terminal outcome + grade + applies_when hint into a single shareable artifact. Explicitly avoids interpretation, lessons, or pre-labels — preserves the trajectory verbatim and lets the reader (and their Claude) interpret it.

These are the user-facing core of the v2 product: any user can paste raw data plus these prompts into their Claude Code and produce a publishable experience without any custom infrastructure.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* experiences: first seed — defense-tech inbound acquisition (closed_won)

First end-to-end run of the v2 pipeline (anonymize.md → extract_experience.md) on a real 57-day acquisition trajectory. Outcome: closed_won, graded 1.0 by the author.

Three files:
- trajectory.anonymized.yaml — 26-step ordered timeline
- experience.yaml — wrapper with applies_when, terminal, searchable_summary
- README.md — human-readable face for the marketplace

Known pilot limitations (do not block ship):
- Anonymization is conservative, but identification by combination remains possible for a reader with strong domain knowledge.
- Some step content carries author observation that could be separated into explicit author_intent / author_hypothesis fields in future schema iterations.

Treat as v0 — published to validate the pipeline shape, not as a gold-standard reference.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* experiences: scrub identifying labels from metadata

Recurring leak pattern: anonymizing body text but leaving identifying labels in directory names, file headers, and README titles. Same kind of de-anonymization risk as a leaked body string.

Changes:
- Rename experiences/d49e0997-<name>/ → experiences/d49e0997/
- README title: drop vertical-specific framing → neutral "Inbound Acquisition with Free Pilot"
- Drop "defense-tech / UAV" framing throughout README + searchable_summary
- Drop the specific volume "4,500 frames" → "a sample dataset"
- Drop references to internal CRM record IDs in file header comments
- experience.yaml header: drop the counterparty profile description

Methodology lesson hardcoded in author memory: anonymization must cover everything a reader sees — directory names, filenames, IDs, README titles, comments, commit messages — not only body text.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* docs: rewrite README for v2

Drop the Q-learning narrative entirely. Re-anchor on the v2 question: "How did this happen?" — capture trajectories raw, grade only when reality returns its verdict, no pre-labeling.

Replace:
- "Q-learning memory that gets smarter" → hippocampus framing
- 5 reward paths table → removed
- 3-layer Q architecture (action/hypothesis/fit) → removed
- Experience Library context/actions/outcome/lesson labels → removed (treated as a pre-labeling violation per redesign-2026-04-26)
- 16 MCP tools → 5 (search_memory, add_memory, log_prediction, log_outcome, memory_stats)

Add:
- "What It Is Not" — explicit on the Q-learning removal and Mem0/Zep positioning
- "No pre-labeling" methodological core
- Two-prompt publishing pipeline (anonymize.md + extract_experience.md)
- "Publishing an Experience" section with a seed d49e0997 schema example
- Honest "Status" section: pilot, 1 seed, no marketplace UI yet, schema may iterate

Length: 522 lines → 178. The old README is archived in git history.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* docs: mark v1-era docs as stale

Prepend a STALE banner to seven docs that describe the pre-2026-04-26 architecture (Q-learning, 5 reward paths, Experience Library context/actions/outcome/lesson labels). Each banner points to docs/redesign-2026-04-26.md and the new README for the current state. Banner-only; original content is preserved for git-blame and archive value. Full rewrites of these docs are deferred to follow-up PRs.

Marked stale:
- docs/architecture.md
- docs/how-it-works.md
- docs/storage-system.md
- docs/product-page-content.md
- docs/experience-library.md
- docs/benchmark-results.md
- docs/reward-audit-2026-04-08.md

Kept current (no banner):
- docs/redesign-2026-04-26.md (the freeze)
- docs/claude-design-brief.md (v2 product framing)
- docs/configuration.md (env vars stable)
- docs/decision-extraction.md (still active in code)
- docs/experiences.md (sales/dealflow YAML configs still shipped)
- docs/user-stories-mcp-tools.md (recent, mostly v2-aligned)

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

* experiences: skill architecture + per-step indexing + dog-food notes

Three additions wrapping up the v2 first-seed arc, delivered across five files.

1. experiences/d49e0997/SKILL.md — Claude Code entry point for the seed pack. Specifies when to invoke (inbound + free pilot + local e-signing), how to use the trajectory + steps + grade, and what NOT to do (no attribution-stripping, no fabrication, no de-anonymization).
2. experiences/d49e0997/steps.indexable.jsonl — per-step records (24 lines, one JSON object per step) for retrieval. Generated from trajectory.anonymized.yaml. Used by the user's Claude to surface step-level pattern matches in addition to experience-level summary matches.
3. docs/skill-architecture.md — naming convention: openexp:<author-handle>:<experience-slug>. Filesystem layout (storage = UUID dir; install = skill-named dir). Two-layer identity: author public, counterparty anonymized. Install + invoke flows. Open questions (trust, discovery, conflict resolution) flagged for future work.
4. docs/dogfood-2026-04-26.md — honest record of the first self-test of the publishing pipeline. Three queries against the live Qdrant: 2/3 surfaced the seed via summary-only indexing; 1/3 (a pattern query about silence + re-engagement) missed. Per-step indexing brought the missed query to #3 in the top 5. Format matters: structural-fact-led content beat metadata-led content from the same source. Pre-labeled Trajektory experiences (PR #34) still beat us on retrieval — a deliberate trade-off, not a bug.
5. README.md — new "Install as a Claude Code skill" subsection under "Publishing an Experience". Documents the namespaced skill form and points to docs/skill-architecture.md for full detail.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>

---------

Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com>
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
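The per-step indexing described in the commit above can be sketched as a small conversion: one JSON line per trajectory step, each carrying the parent experience id so a step-level retrieval hit can be resolved back to the full trajectory. The field names below (steps, day, channel, content) and the sample data are assumptions for illustration, not the real trajectory.anonymized.yaml schema.

```python
import json

# Illustrative trajectory, standing in for a parsed
# trajectory.anonymized.yaml. Field names are assumptions.
trajectory = {
    "experience_id": "d49e0997",
    "steps": [
        {"day": 1, "channel": "email", "content": "inbound inquiry received"},
        {"day": 9, "channel": "call", "content": "free pilot proposed"},
        {"day": 57, "channel": "email", "content": "contract signed"},
    ],
}

def to_jsonl(traj: dict) -> str:
    # Emit one JSON object per step; the parent experience_id and the
    # step index make each line independently retrievable.
    lines = []
    for i, step in enumerate(traj["steps"]):
        record = {"experience_id": traj["experience_id"], "step": i, **step}
        lines.append(json.dumps(record))
    return "\n".join(lines)
```

Indexing each line separately is what lets a narrow pattern query (e.g. about one step's behavior) match without the whole-experience summary having to mention it.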
anthroos added a commit that referenced this pull request on Apr 27, 2026
Co-authored-by: Ivan Pasichnyk <ivanpasichnyk@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
anthroos added a commit that referenced this pull request on Apr 27, 2026
anthroos added a commit that referenced this pull request on Apr 27, 2026
anthroos added a commit that referenced this pull request on Apr 28, 2026
Summary
Add QDRANT_API_KEY to the README.md and docs/configuration.md config tables.

🤖 Generated with Claude Code
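A client would typically pick the documented variable up from the environment. This is a minimal sketch only: QDRANT_API_KEY comes from the config tables above, while QDRANT_URL and its localhost default are assumptions for illustration.

```python
import os

def qdrant_settings() -> dict:
    # QDRANT_API_KEY is the variable documented in the config tables;
    # a missing key (None) would correspond to an unauthenticated
    # local instance. QDRANT_URL and its default are assumed here.
    return {
        "url": os.environ.get("QDRANT_URL", "http://localhost:6333"),
        "api_key": os.environ.get("QDRANT_API_KEY"),
    }
```

Keeping the key out of the repo and in the environment is the usual reason it appears only in a config table rather than in committed code.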