
feat: Source Intake / Fetch Pipeline v0.1 (URL extraction, fetch, cache, provider integration) #23

Merged
sasazaki1994 merged 4 commits into main from codex/add-source-intake-and-fetch-pipeline-v0.1 on May 7, 2026

Conversation

sasazaki1994 (Owner) commented May 7, 2026

Motivation

  • Add a lightweight Source Intake / Fetch pipeline before provider invocation so research-topic URLs can be normalized, safety-checked, cached, fetched when needed, and passed as compact source candidates to providers.
  • Improve grounding for mock/OpenAI providers while preventing unsafe fetches (SSRF) and reusing existing SourceCacheEntry / SourceFetchSnapshot models without DB migrations.

Description

  • Add spec and acceptance artifacts: specs/source-intake-and-fetching.md and acceptance/source-intake-and-fetching.feature and update specs/README.md and acceptance/README.md.
  • New types and intake implementation: src/types/source-intake.ts, src/server/analysis/source-intake/extract-urls.ts, and src/server/analysis/source-intake/source-intake-service.ts that extract http/https URLs, trim trailing punctuation, normalize/validate, dedupe by normalized URL, call resolveSourceCacheForUrl, and build SourceCandidate[].
  • Pipeline integration: run buildSourceIntakeFromQuestion(question) in createAnalysisRunFromProvider and pass sourceCandidates via the non-breaking addition GenerateAnswerGraphInput.sourceCandidates?: SourceCandidate[].
  • Provider changes: the mock provider uses up to the first three sourceCandidates as sources when present; the OpenAI provider injects a compact source-candidate context into the user prompt (label/url/content_type/excerpt) while preserving the structured output schema.
  • Fetch/excerpt adjustments: fetch-source-snapshot.ts now strips noscript as well as script/style and raises excerpt cap to 2000 chars; fetch behavior, safety checks, snapshot creation, and SourceCacheEntry.latest* updates reuse existing code paths.
  • No Prisma schema changes or migrations were added; persisted metadata uses existing SourceCacheEntry / SourceFetchSnapshot models.
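
The extraction step described above (find http/https URLs, trim trailing punctuation, dedupe) can be sketched roughly as follows. This is an illustrative approximation, not the actual extract-urls.ts implementation; the regex and the punctuation set are assumptions:

```typescript
// Sketch of the intake extraction step: find http/https URLs in free text,
// trim trailing punctuation, and dedupe. Illustrative only; the real
// extract-urls.ts may differ in its regex and punctuation handling.
const URL_PATTERN = /https?:\/\/[^\s<>"')\]]+/g;
const TRAILING_PUNCTUATION = /[.,;:!?]+$/;

export function extractUrls(text: string): string[] {
  const seen = new Set<string>();
  const urls: string[] = [];
  for (const match of text.match(URL_PATTERN) ?? []) {
    const trimmed = match.replace(TRAILING_PUNCTUATION, "");
    if (!seen.has(trimmed)) {
      seen.add(trimmed);
      urls.push(trimmed);
    }
  }
  return urls;
}
```

Deduplicating on the trimmed string means a URL cited mid-sentence ("…see https://example.com/a,") and at sentence end ("…https://example.com/a.") collapses to one candidate.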

Testing

  • Ran pnpm lint (passed) and pnpm typecheck (passed) to ensure code quality and type safety.
  • Ran unit tests with pnpm test where all tests passed (142 passed, 0 failed).
  • Built the app with pnpm build which completed successfully; pnpm exec prisma validate failed in this CI run due to missing DATABASE_URL environment variable (local/CI env constraint), not schema issues.
  • Verified provider integration tests: mock provider accepts sourceCandidates and OpenAI provider prompt includes compact source context in tests added/updated under tests/.

Codex Task

Summary by CodeRabbit

Release Notes

  • New Features

    • Automatically extract HTTP/HTTPS URLs from research topics, with normalization and caching
    • Implement URL safety checks (blocking localhost and private IPs)
    • Improve resilience to URL fetch failures (processing continues while error information is retained)
  • Documentation

    • Add a specification for the source intake / fetch pipeline
  • Tests

    • Add acceptance tests for the source intake feature
    • Add unit tests for URL extraction, normalization, and safety checks
  • Chores

    • Improve the Node.js setup and database verification steps in the CI workflow
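
A URL safety check of the kind noted in the release notes (blocking localhost and private IPs) might look something like this minimal sketch. The function name is hypothetical, and a string-level check like this is incomplete on its own; a production guard must also resolve DNS and re-check each redirect hop:

```typescript
// Minimal SSRF guard sketch: reject non-http(s) schemes, localhost, and
// literal private/loopback/link-local IPv4 addresses. Hypothetical helper;
// a real check must also resolve DNS and validate every redirect target.
export function isBlockedUrl(rawUrl: string): boolean {
  let url: URL;
  try {
    url = new URL(rawUrl);
  } catch {
    return true; // unparseable URLs are rejected outright
  }
  if (url.protocol !== "http:" && url.protocol !== "https:") return true;
  const host = url.hostname.toLowerCase();
  if (host === "localhost" || host === "[::1]" || host.endsWith(".localhost")) return true;
  // Literal private / loopback / link-local IPv4 ranges
  const m = host.match(/^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/);
  if (m) {
    const [a, b] = [Number(m[1]), Number(m[2])];
    if (a === 10 || a === 127) return true;            // 10.0.0.0/8, loopback
    if (a === 192 && b === 168) return true;           // 192.168.0.0/16
    if (a === 172 && b >= 16 && b <= 31) return true;  // 172.16.0.0/12
    if (a === 169 && b === 254) return true;           // link-local / cloud metadata
  }
  return false;
}
```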


coderabbitai Bot commented May 7, 2026


Warning

Rate limit exceeded

@sasazaki1994 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 31 minutes and 18 seconds before requesting another review.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: e3576fa3-4b4d-4c46-b94e-b43990b41ca2

📥 Commits

Reviewing files that changed from the base of the PR and between 6031533 and eb0b72c.

📒 Files selected for processing (6)
  • acceptance/source-intake-and-fetching.feature
  • specs/source-intake-and-fetching.md
  • src/server/analysis/create-analysis-run-from-provider.ts
  • src/server/analysis/providers/mock-answer-graph-provider.ts
  • src/server/analysis/providers/openai-answer-graph-provider.ts
  • src/server/analysis/source-intake/source-intake-service.ts

Walkthrough

This PR implements the source intake pipeline — URL extraction, normalization, caching, and safety validation — and generates a compacted list of source candidates that is injected into providers. It updates the mock and OpenAI providers to accept the new input shape, and adds pnpm version pinning and Prisma migration verification to the CI environment.

Changes

Source intake and fetching pipeline

  • Data types and contracts — src/types/source-intake.ts, src/types/answer-graph-generation.ts: introduces the SourceCandidate type containing the normalized URL, original URL, label, excerpt, content metadata, and fetch error message. SourceIntakeResult defines the lists of candidates and ignored URLs. Adds an optional sourceCandidates?: SourceCandidate[] field to GenerateAnswerGraphInput.
  • URL extraction and intake processing — src/server/analysis/source-intake/extract-urls.ts, src/server/analysis/source-intake/source-intake-service.ts: extractUrls() extracts http/https URLs from text and removes trailing punctuation. buildSourceIntakeFromQuestion() extracts URLs, performs cache lookups, normalization, and safety checks, derives hostname-based labels, and returns a SourceIntakeResult.
  • Provider integration — src/server/analysis/create-analysis-run-from-provider.ts, src/server/analysis/providers/mock-answer-graph-provider.ts, src/server/analysis/providers/openai-answer-graph-provider.ts: createAnalysisRunFromProvider builds the source intake and passes a GenerateAnswerGraphInput containing sourceCandidates to the provider. The mock provider normalizes candidates into its input shape with mapSourceCandidates(). The OpenAI provider includes candidates in the prompt via buildSourceCandidateContext() and instructs the model to use the candidate URLs.
  • Excerpt size increase — src/server/analysis/fetch-source-snapshot.ts: raises MAX_EXCERPT_CHARS from 1200 to 2000.
  • Specs and acceptance tests — specs/source-intake-and-fetching.md, specs/README.md, acceptance/source-intake-and-fetching.feature, acceptance/README.md: adds the source intake / fetch pipeline specification covering purpose, cache TTL rules, provider integration, and error handling. Adds Cucumber scenarios testing URL extraction, cache reuse, safety filtering, provider input, and fetch error handling.
  • Test suite — tests/source-intake.test.ts, tests/create-mock-run.test.ts, tests/mock-answer-graph-provider.test.ts, tests/openai-answer-graph-provider.test.ts: adds unit tests for URL extraction, URL normalization, and safety checks. Updates existing provider tests to validate the sourceCandidates input. Confirms in the OpenAI provider tests that the source candidate context is included in the prompt.
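
The spec's cache TTL rule is controlled by the TRACEMAP_SOURCE_CACHE_TTL_DAYS environment variable; a defensive way to read such a value might look like the following sketch. The default constant and the clamping bounds here are illustrative assumptions, not the repository's actual configuration:

```typescript
// Sketch of defensive env parsing for a TTL-in-days setting. The default
// value and the upper bound are illustrative assumptions.
const DEFAULT_TTL_DAYS = 7;   // hypothetical default
const MAX_TTL_DAYS = 3650;    // hypothetical upper bound (10 years)

export function readCacheTtlDays(raw: string | undefined): number {
  if (raw === undefined || raw.trim() === "") return DEFAULT_TTL_DAYS;
  const parsed = Number(raw);
  // Reject non-numeric, fractional, negative, or out-of-range values.
  if (!Number.isInteger(parsed) || parsed < 0 || parsed > MAX_TTL_DAYS) {
    return DEFAULT_TTL_DAYS;
  }
  return parsed; // 0 would mean "cache disabled" under this sketch's convention
}
```

Falling back to a default (rather than throwing) keeps a misconfigured environment from taking the intake pipeline down; a warning log at the fallback site would make the misconfiguration visible.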

CI environment and toolchain

  • pnpm version management — .github/workflows/ci.yml: all jobs (install-and-static-checks, database-checks, e2e) run corepack enable and activate pnpm@10.18.2, pinning the pnpm version explicitly.
  • Database migration verification — .github/workflows/ci.yml: adds prisma migrate deploy and prisma migrate status steps to the database-checks job. A fresh-DB verification step creates a tracemap_verify database, applies the migrations, and verifies them.

Estimated code review effort

🎯 3 (moderate) | ⏱️ ~25 minutes

Possibly related PRs

  • sasazaki1994/TraceMap#7: Both PRs modify the OpenAI answer-graph provider. The retrieved PR adds the provider; this PR updates it to accept and format source candidates.
  • sasazaki1994/TraceMap#20: Both PRs modify src/server/analysis/create-analysis-run-from-provider.ts. This PR adds source intake / sourceCandidates handling; the retrieved PR adds run computation logic.
  • sasazaki1994/TraceMap#16: Both PRs modify the OpenAI answer-graph provider prompt. This PR adds the source candidate context and candidate-preference instructions; the retrieved PR rewrites the prompt wording to "Investigation Mission" / "Research topic".

🐇 A rabbit's bounce, a leap of URLs,
searching the cache, screening the safe ones through,
compacted candidates passed to providers —
intake pipeline, guardian of knowledge,
sources held in trust, errors held tight ✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning — Docstring coverage is 27.27%, below the required threshold of 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)

  • Description Check — ✅ Passed — Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed — The PR title accurately and specifically describes the main change (implementing the Source Intake / Fetch Pipeline) and fully matches the purpose of the PR.
  • Linked Issues Check — ✅ Passed — Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check — ✅ Passed — Check skipped because no linked issues were found for this pull request.


Warning

Review ran into problems

🔥 Problems

Git: Failed to clone repository. Please run the @coderabbitai full review command to re-trigger a full review. If the issue persists, set path_filters to include or exclude specific files.




@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 8

🧹 Nitpick comments (4)
src/server/analysis/providers/mock-answer-graph-provider.ts (1)

176-200: ⚡ Quick win

mapSourceCandidates is called twice.

Line 176 calls mapSourceCandidates(normalizedInput) twice, which is redundant. Store the result in a variable once and reuse it.

♻️ Suggested fix
+  const mappedSources = mapSourceCandidates(normalizedInput);
   return {
     kind: "success",
     payload: {
       ...
-      sources: mapSourceCandidates(normalizedInput).length > 0 ? mapSourceCandidates(normalizedInput) : [
+      sources: mappedSources.length > 0 ? mappedSources : [
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/server/analysis/providers/mock-answer-graph-provider.ts` around lines 176
- 200, The code calls mapSourceCandidates(normalizedInput) twice when building
the sources array; compute it once into a local variable (e.g., const mapped =
mapSourceCandidates(normalizedInput)) before the object construction and then
use that variable in the ternary expression for sources to avoid redundant work
and ensure consistency when referencing mapped candidates across the sources
logic.
src/server/analysis/providers/openai-answer-graph-provider.ts (1)

207-212: 💤 Low value

The return statement's indentation doesn't match the function body.

The return on line 212 sits at column 0 and is not aligned with the rest of buildSourceCandidateContext.

🔧 Suggested fix
-return `\n\nAvailable source candidates:\n${lines.join("\n")}`;
+  return `\n\nAvailable source candidates:\n${lines.join("\n")}`;
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/server/analysis/providers/openai-answer-graph-provider.ts` around lines
207 - 212, The closing return in buildSourceCandidateContext is mis-indented
(it's at column 0) causing inconsistent function body formatting; align the
return that produces the "Available source candidates" string with the rest of
the function block so it sits at the same indentation level as other statements
in buildSourceCandidateContext (ensure the `return \n\nAvailable source
candidates:\n${lines.join("\n")}` line is indented inside the function scope and
matches the indentation of the preceding `return `${index + 1}...` mapping or
the function's other statements).
specs/source-intake-and-fetching.md (2)

60-65: ⚡ Quick win

Please add cache-related test requirements.

The test requirements do not mention the cache functionality. Since the source cache strategy (lines 38-43) is an important part of the spec, consider adding the following test cases:

  • Cache hit / cache miss scenarios
  • TTL expiry behavior (re-fetching from a stale cache)
  • Correct persistence to the SourceCacheEntry and SourceFetchSnapshot models
  • Behavior of the TRACEMAP_SOURCE_CACHE_TTL_DAYS environment variable

This would guarantee that the cache strategy is implemented as intended.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@specs/source-intake-and-fetching.md` around lines 60 - 65, Test requirements
are missing cache-related cases: add unit/integration tests covering cache hit
and cache miss flows for fetch-source, TTL expiry behavior that forces re-fetch
from stale cache, persistence and shape checks for the SourceCacheEntry and
SourceFetchSnapshot models, and behavior driven by the
TRACEMAP_SOURCE_CACHE_TTL_DAYS env var to ensure TTL is applied; reference the
fetch-source code paths and the SourceCacheEntry/SourceFetchSnapshot types when
implementing these tests so they validate caching, stale/refresh logic, and
env-driven TTL changes.

47-47: ⚡ Quick win

Please define constraints for adding the source context to the OpenAI provider prompt.

When the OpenAI provider appends the compact source context to the prompt, the spec does not state the following constraints:

  • The maximum number of sources included in the prompt (and what to do when the token limit would be exceeded)
  • The priority / ordering when there are multiple sources
  • The truncation strategy when the source context does not fit within the token limit

Without these, the implementation risks unexpected token-limit errors or inappropriate source selection.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@specs/source-intake-and-fetching.md` at line 47, Specify exact constraints
for the OpenAI provider when appending compact source context
(label/url/content_type/excerpt) to prompts: define a maximum number of sources
to include and a token budget per prompt (e.g., max_sources and
max_context_tokens), describe the ordering/priority rule for multiple sources
(e.g., sort by relevance score then recency using the same ranking codepath that
produces source relevance), and add a truncation strategy when the combined
source context exceeds the token budget (e.g., drop lowest-priority sources
first, then truncate excerpts with an ellipsis preserving
label/url/content_type, and include a warning token like “[TRUNCATED]”);
document how to compute token usage (using the tokenizer used by OpenAI) and
fallback behavior (if even one source excerpt exceeds per-source token limit,
truncate the excerpt to per-source limit and still include
label/url/content_type).
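
A cap-and-truncate approach along the lines suggested in this comment could be sketched as follows. The constants, the candidate shape, and the line format are illustrative assumptions, not the repository's actual buildSourceCandidateContext:

```typescript
// Sketch: build a compact source-candidate context with a hard cap on the
// number of sources and on per-source excerpt length. Constants and the
// candidate shape are assumptions for illustration.
interface SourceCandidate {
  label: string;
  url: string;
  contentType?: string;
  excerpt?: string;
}

const MAX_SOURCES = 3;
const MAX_EXCERPT_CHARS_IN_PROMPT = 300;

export function buildSourceContext(candidates: SourceCandidate[]): string {
  if (candidates.length === 0) return "";
  const lines = candidates.slice(0, MAX_SOURCES).map((c, index) => {
    let excerpt = c.excerpt ?? "";
    if (excerpt.length > MAX_EXCERPT_CHARS_IN_PROMPT) {
      // Keep label/url/content_type intact; truncate only the excerpt.
      excerpt = excerpt.slice(0, MAX_EXCERPT_CHARS_IN_PROMPT) + " [TRUNCATED]";
    }
    return `${index + 1}. ${c.label} <${c.url}> (${c.contentType ?? "unknown"}): ${excerpt}`;
  });
  return `\n\nAvailable source candidates:\n${lines.join("\n")}`;
}
```

A character cap is only a proxy for a token budget; a stricter implementation would count tokens with the model's tokenizer and drop lowest-priority sources first.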
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@acceptance/source-intake-and-fetching.feature`:
- Line 27: The acceptance scenario uses a mixed Japanese/English phrase ("And
raw HTML全文 is not passed to provider"); replace it with a consistent English
phrase such as "And raw full HTML is not passed to provider" (or "And the raw
full HTML is not passed to the provider") in the acceptance scenario text, and
mirror the same wording change in the corresponding spec under specs/ so both
specs/ and acceptance/ are kept in sync per the guideline to start from spec and
update both directories.

In `@specs/source-intake-and-fetching.md`:
- Line 36: Model names are used inconsistently in the document. Check the places where it is unclear whether "SourceSnapshot" or "SourceFetchSnapshot" is meant (especially lines 35-36 and the Data model strategy section) and unify them to the correct model name. Search for every occurrence of the names in question; if the two are distinct models, spell out each one's definition (purpose, fields, storage location), and if one is a typo, replace every instance with "SourceFetchSnapshot" (or whichever name is correct) for consistency.
- Line 42: The "fetch success history" criterion in the spec is ambiguous, so add a concrete definition to the "only fetch when stale / missing" clause: state explicitly that success means HTTP status 200-299 only (e.g., the condition satisfying isFetchSuccess), spell out the retry policy when a 404 or 5xx response was cached (e.g., a cached 404 is not re-fetched; a 5xx is retried after a short interval or with a shortened TTL), define how a present-but-empty body (empty or zero-length content) is treated (counted as success, or re-fetched), and record these rules in the spec as the "fetch success history" criteria.
- Line 13: The spec line "URL 正規化(tracking params 除去、hash 除去、host/protocol
小文字化)" lacks a concrete definition of which query parameters count as "tracking
params"; update the spec to list the exact removal rules by name (e.g.,
utm_source, utm_medium, utm_campaign, utm_term, utm_content, gclid, fbclid, _ga,
_gid, _gat, mc_cid, mc_eid, _openstat, etc.) and/or a pattern-based rule (e.g.,
remove keys matching /^utm_/i or known analytics keys), state whether matching
is case-insensitive, clarify that keys are removed entirely (key and value),
define behavior for repeated keys and ordering, and include a canonical set (or
allow configurable list) referenced in the "URL 正規化" section so implementations
(URL 正規化, tracking params 除去, hash 除去, host/protocol 小文字化) can apply the same
deterministic filter.
- Line 40: The handling of TRACEMAP_SOURCE_CACHE_TTL_DAYS is undefined, so add validation rules and a fallback to the spec: the value must be an integer with a minimum of 0 (0 days disables the cache) and a maximum of, say, 3650 (upper bound = 10 years); on non-numeric, fractional, negative, or over-limit values, fall back to a default constant (e.g., DEFAULT_TRACEMAP_SOURCE_CACHE_TTL_DAYS) and emit a warning log. Also note that this validation belongs in the configuration read path (environment variable parsing / settings initialization).
- Around line 54-59: The Security constraints section lacks validation rules for HTTP redirect targets. Add to the spec a redirect policy (whether redirects are followed), a requirement that every redirect target undergo the same Security constraints validation, an explicit maximum redirect count (e.g., 5), and DNS rebinding countermeasures. Concretely, add wording to the Security constraints section stating that when following redirects, each hop's URL is verified so that its scheme and resolved IP do not fall in the block list (localhost / private / link-local / metadata, etc.), aborting the chain on validation failure; that DNS is re-resolved on each redirect and the resulting IP re-checked; that a maximum redirect count is configured (e.g., 5); and, where possible, a consistency check between the hostname and the finally resolved IP, or a requirement to use public DNS.

In `@src/server/analysis/create-analysis-run-from-provider.ts`:
- Around line 65-69: buildSourceIntakeFromQuestion sits outside the try/catch, so a failure in resolveSourceCacheForUrl or similar can leak an exception upward and leave analysis_runs stuck in "processing". To fix: wrap the buildSourceIntakeFromQuestion call in a try/catch, and on failure fall back to a default SourceIntakeResult with empty candidates (e.g., { candidates: [] }) so processing continues and provider.generateAnswerGraph({ question, sourceCandidates: sourceIntake.candidates }) is always executed; also import the required type/name (SourceIntakeResult).

In `@src/server/analysis/source-intake/source-intake-service.ts`:
- Around line 11-16: The call to resolveSourceCacheForUrl inside the rawUrls
loop can throw and will abort buildSourceIntakeFromQuestion; wrap each call to
resolveSourceCacheForUrl(rawUrl) in a try/catch inside the loop (the loop that
builds ignoredUrls) so failures for a single URL are caught, push an entry into
ignoredUrls with { url: rawUrl, reason: (error.message || String(error)) } on
catch, and continue to the next rawUrl; keep the existing handling for
result.kind === "invalid" unchanged.
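
The per-URL resilience described in the last comment above can be sketched like this. resolveSourceCacheForUrl is a stand-in stub here (the real service performs a DB-backed cache lookup), and the candidates/ignoredUrls shapes are simplified:

```typescript
// Sketch of the resilient intake loop: a failure for one URL is recorded in
// ignoredUrls and does not abort processing of the remaining URLs.
// resolveSourceCacheForUrl is a stand-in stub for illustration.
type ResolveResult = { kind: "ok"; url: string } | { kind: "invalid" };

function resolveSourceCacheForUrl(rawUrl: string): ResolveResult {
  if (rawUrl.includes("boom")) throw new Error("cache lookup failed");
  if (!rawUrl.startsWith("https://")) return { kind: "invalid" };
  return { kind: "ok", url: rawUrl };
}

export function buildIntake(rawUrls: string[]) {
  const candidates: string[] = [];
  const ignoredUrls: { url: string; reason: string }[] = [];
  for (const rawUrl of rawUrls) {
    try {
      const result = resolveSourceCacheForUrl(rawUrl);
      if (result.kind === "invalid") {
        ignoredUrls.push({ url: rawUrl, reason: "invalid" });
        continue;
      }
      candidates.push(result.url);
    } catch (error) {
      // One bad URL must not abort the whole intake.
      ignoredUrls.push({
        url: rawUrl,
        reason: error instanceof Error ? error.message : String(error),
      });
    }
  }
  return { candidates, ignoredUrls };
}
```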

---

Nitpick comments:
In `@specs/source-intake-and-fetching.md`:
- Around line 60-65: Test requirements are missing cache-related cases: add
unit/integration tests covering cache hit and cache miss flows for fetch-source,
TTL expiry behavior that forces re-fetch from stale cache, persistence and shape
checks for the SourceCacheEntry and SourceFetchSnapshot models, and behavior
driven by the TRACEMAP_SOURCE_CACHE_TTL_DAYS env var to ensure TTL is applied;
reference the fetch-source code paths and the
SourceCacheEntry/SourceFetchSnapshot types when implementing these tests so they
validate caching, stale/refresh logic, and env-driven TTL changes.
- Line 47: Specify exact constraints for the OpenAI provider when appending
compact source context (label/url/content_type/excerpt) to prompts: define a
maximum number of sources to include and a token budget per prompt (e.g.,
max_sources and max_context_tokens), describe the ordering/priority rule for
multiple sources (e.g., sort by relevance score then recency using the same
ranking codepath that produces source relevance), and add a truncation strategy
when the combined source context exceeds the token budget (e.g., drop
lowest-priority sources first, then truncate excerpts with an ellipsis
preserving label/url/content_type, and include a warning token like
“[TRUNCATED]”); document how to compute token usage (using the tokenizer used by
OpenAI) and fallback behavior (if even one source excerpt exceeds per-source
token limit, truncate the excerpt to per-source limit and still include
label/url/content_type).

In `@src/server/analysis/providers/mock-answer-graph-provider.ts`:
- Around line 176-200: The code calls mapSourceCandidates(normalizedInput) twice
when building the sources array; compute it once into a local variable (e.g.,
const mapped = mapSourceCandidates(normalizedInput)) before the object
construction and then use that variable in the ternary expression for sources to
avoid redundant work and ensure consistency when referencing mapped candidates
across the sources logic.

In `@src/server/analysis/providers/openai-answer-graph-provider.ts`:
- Around line 207-212: The closing return in buildSourceCandidateContext is
mis-indented (it's at column 0) causing inconsistent function body formatting;
align the return that produces the "Available source candidates" string with the
rest of the function block so it sits at the same indentation level as other
statements in buildSourceCandidateContext (ensure the `return \n\nAvailable
source candidates:\n${lines.join("\n")}` line is indented inside the function
scope and matches the indentation of the preceding `return `${index + 1}...`
mapping or the function's other statements).
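
A deterministic tracking-parameter filter of the kind requested for the spec's URL normalization section might be sketched as follows. The key list mirrors the examples given in the review comment and should be treated as an illustrative, configurable set:

```typescript
// Sketch of URL normalization: drop the fragment and strip known tracking
// parameters (key and value, all repeated instances, case-insensitively).
// The WHATWG URL parser already lowercases the scheme and hostname on parse.
const TRACKING_KEYS = new Set([
  "gclid", "fbclid", "_ga", "_gid", "_gat", "mc_cid", "mc_eid", "_openstat",
]);

export function normalizeUrl(rawUrl: string): string {
  const url = new URL(rawUrl);
  url.hash = ""; // drop the fragment
  for (const key of [...url.searchParams.keys()]) {
    const lower = key.toLowerCase();
    if (lower.startsWith("utm_") || TRACKING_KEYS.has(lower)) {
      url.searchParams.delete(key); // removes all repeated instances of the key
    }
  }
  return url.toString();
}
```

Deduplicating candidates on this normalized form makes "same page, different tracking junk" collapse to one cache entry.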

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 34e41816-1526-41b5-9f0c-142ff2fe2c16

📥 Commits

Reviewing files that changed from the base of the PR and between a45cd6c and 6031533.

📒 Files selected for processing (17)
  • .github/workflows/ci.yml
  • acceptance/README.md
  • acceptance/source-intake-and-fetching.feature
  • specs/README.md
  • specs/source-intake-and-fetching.md
  • src/server/analysis/create-analysis-run-from-provider.ts
  • src/server/analysis/fetch-source-snapshot.ts
  • src/server/analysis/providers/mock-answer-graph-provider.ts
  • src/server/analysis/providers/openai-answer-graph-provider.ts
  • src/server/analysis/source-intake/extract-urls.ts
  • src/server/analysis/source-intake/source-intake-service.ts
  • src/types/answer-graph-generation.ts
  • src/types/source-intake.ts
  • tests/create-mock-run.test.ts
  • tests/mock-answer-graph-provider.test.ts
  • tests/openai-answer-graph-provider.test.ts
  • tests/source-intake.test.ts
