Conversation
- Introduced a new fetch tool in `src/mastra/tools/fetch.tool.ts` for web content fetching and markdown conversion.
- Implemented search across multiple providers: DuckDuckGo, Google, and Bing.
- Added support for Google News RSS fetching.
- Included URL validation and sanitization to ensure safe fetching.
- Created a new React component for the MCP A2A page in `app/chat/mcp-a2a/page.tsx` to manage MCP servers and tools.
- Developed a workspace management page in `app/chat/workspaces/page.tsx` to handle file browsing and skill display.
- Updated exports in `src/mastra/tools/index.ts` to include the new fetch tool.
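The summary mentions sanitization and markdown conversion, but the implementation isn't shown in this excerpt. Below is a minimal, dependency-free sketch of that pipeline shape only; the real `sanitizeHtml`/`htmlToMarkdown` helpers presumably use a proper HTML parser, and these regex passes are purely illustrative:

```typescript
// Illustrative only: strip dangerous blocks first, then convert a few
// common tags to markdown, then drop whatever tags remain.
function sanitizeHtml(html: string): string {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, '')
    .replace(/<style[\s\S]*?<\/style>/gi, '')
    .replace(/<!--[\s\S]*?-->/g, '')
}

function htmlToMarkdown(html: string): string {
  return sanitizeHtml(html)
    .replace(/<h1[^>]*>([\s\S]*?)<\/h1>/gi, '# $1\n\n')
    .replace(/<h2[^>]*>([\s\S]*?)<\/h2>/gi, '## $1\n\n')
    .replace(/<a[^>]*href="([^"]*)"[^>]*>([\s\S]*?)<\/a>/gi, '[$2]($1)')
    .replace(/<p[^>]*>([\s\S]*?)<\/p>/gi, '$1\n\n')
    .replace(/<[^>]+>/g, '') // drop any remaining tags
    .replace(/\n{3,}/g, '\n\n')
    .trim()
}
```

A real converter also has to handle lists, tables, nested markup, and entity decoding, which is why tools in this space typically lean on an HTML parser rather than regexes.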
**Reviewer's Guide (Sourcery)**

Adds a production-oriented fetch/search tool with markdown output and RE2-based URL filtering, expands the Mastra query hooks for workspace/sandbox/MCP/A2A operations, and introduces new UI pages to explore workspaces, MCP servers/tools, and A2A agents, wiring everything into the side navigation and documentation/memory.

Sequence diagram for the new WorkspacesPage workspace/sandbox interactions:

```mermaid
sequenceDiagram
    actor User
    participant WorkspacesPage
    participant MastraHooks as useMastraQuery
    participant ReactQuery as ReactQueryHooks
    participant MastraClient as mastraClient
    participant WorkspaceAPI as Workspace
    User->>WorkspacesPage: Navigate to /chat/workspaces
    WorkspacesPage->>MastraHooks: useMastraQuery()
    MastraHooks-->>WorkspacesPage: { useWorkspaces, useSandboxFiles, useSandboxReadFile, useWorkspaceSkills }
    Note over WorkspacesPage: Fetch workspace list
    WorkspacesPage->>ReactQuery: useWorkspaces()
    ReactQuery->>MastraClient: getWorkspaces()
    MastraClient-->>ReactQuery: { workspaces }
    ReactQuery-->>WorkspacesPage: workspaces data
    Note over WorkspacesPage: Derive activeWorkspaceId and file tree
    WorkspacesPage->>ReactQuery: useSandboxFiles(activeWorkspaceId, "/", true)
    ReactQuery->>MastraClient: getWorkspace(activeWorkspaceId)
    MastraClient->>WorkspaceAPI: listFiles(path="/", recursive=true)
    WorkspaceAPI-->>MastraClient: WorkspaceFsListResponse
    MastraClient-->>ReactQuery: files
    ReactQuery-->>WorkspacesPage: files data
    User->>WorkspacesPage: Select file in FileTree
    WorkspacesPage->>ReactQuery: useSandboxReadFile(activeWorkspaceId, selectedFilePath, "utf-8")
    ReactQuery->>MastraClient: getWorkspace(activeWorkspaceId)
    MastraClient->>WorkspaceAPI: readFile(path, encoding)
    WorkspaceAPI-->>MastraClient: WorkspaceFsReadResponse
    MastraClient-->>ReactQuery: file content
    ReactQuery-->>WorkspacesPage: content
    Note over WorkspacesPage: Fetch workspace skills
    WorkspacesPage->>ReactQuery: useWorkspaceSkills(activeWorkspaceId)
    ReactQuery->>MastraClient: getWorkspace(activeWorkspaceId)
    MastraClient->>WorkspaceAPI: listSkills()
    WorkspaceAPI-->>MastraClient: ListSkillsResponse
    MastraClient-->>ReactQuery: skills data
    ReactQuery-->>WorkspacesPage: skills
    WorkspacesPage-->>User: Render file tree, code viewer, skills list
```
Sequence diagram for the new McpA2APage MCP server and A2A agent interactions:

```mermaid
sequenceDiagram
    actor User
    participant McpA2APage
    participant MastraHooks as useMastraQuery
    participant ReactQuery as ReactQueryHooks
    participant MastraClient as mastraClient
    participant MCPAPI as MCP
    participant AgentsAPI as Agents
    participant A2AAPI as A2A
    User->>McpA2APage: Navigate to /chat/mcp-a2a
    McpA2APage->>MastraHooks: useMastraQuery()
    MastraHooks-->>McpA2APage: { useMcpServers, useMcpServerTools, useAgents, useA2ACard }
    Note over McpA2APage: Load MCP servers
    McpA2APage->>ReactQuery: useMcpServers({ page:0, perPage:50 })
    ReactQuery->>MastraClient: getMcpServers(params)
    MastraClient->>MCPAPI: listServers(params)
    MCPAPI-->>MastraClient: McpServerListResponse
    MastraClient-->>ReactQuery: servers
    ReactQuery-->>McpA2APage: servers data
    Note over McpA2APage: Load tools for active server
    McpA2APage->>ReactQuery: useMcpServerTools(activeServerId)
    ReactQuery->>MastraClient: getMcpServerTools(serverId)
    MastraClient->>MCPAPI: listTools(serverId)
    MCPAPI-->>MastraClient: McpServerToolListResponse
    MastraClient-->>ReactQuery: tools
    ReactQuery-->>McpA2APage: serverTools
    Note over McpA2APage: Load agents list
    McpA2APage->>ReactQuery: useAgents()
    ReactQuery->>MastraClient: getAgents()
    MastraClient->>AgentsAPI: listAgents()
    AgentsAPI-->>MastraClient: agents[]
    MastraClient-->>ReactQuery: agents
    ReactQuery-->>McpA2APage: agents data
    Note over McpA2APage: Load A2A card for active agent
    McpA2APage->>ReactQuery: useA2ACard(activeAgentId)
    ReactQuery->>MastraClient: getA2A(agentId)
    MastraClient->>A2AAPI: getCard()
    A2AAPI-->>MastraClient: AgentCard
    MastraClient-->>ReactQuery: card
    ReactQuery-->>McpA2APage: a2aCard
    McpA2APage-->>User: Render MCP tools and A2A agent card
```
Updated class diagram for the fetchTool module:

```mermaid
classDiagram
    class FetchToolContext {
        <<interface>>
        +string userAgent
        +number timeout
        +string userId
        +string workspaceId
    }
    class FetchToolError {
        +string code
        +number statusCode
        +string url
        +constructor(message, code, statusCode, url)
    }
    class ValidationUtils {
        <<static>>
        +boolean validateUrl(url)
    }
    class FetchTool {
        <<tool>>
        +execute(inputData, context)
        +onInputStart()
        +onInputDelta()
        +onInputAvailable()
        +onOutput()
    }
    class FetchToolInputSchema {
        <<zodSchema>>
        +url
        +query
        +searchProvider
        +searchVertical
        +maxResults
        +includeContent
        +timeout
        +userAgent
        +contentContext
        +includeUrlPatterns
        +excludeUrlPatterns
    }
    class FetchToolOutputSchema {
        <<zodSchema>>
        +mode
        +query
        +url
        +markdown
        +results
        +metadata
    }
    class SearchResult {
        +string title
        +string url
        +string snippet
    }
    class HttpFetch {
        <<function>>
        +httpFetch(url, options)
    }
    class FetchHelpers {
        <<module>>
        +buildRequestHeaders(userAgent)
        +sanitizeHtml(html)
        +htmlToMarkdown(html)
        +compileRe2Patterns(patterns)
        +passesRe2Filters(value, include, exclude)
        +dedupeResults(results)
        +normalizeUrl(rawUrl)
        +isNewsQuery(query)
        +applyContentWindow(markdown, window)
        +resolveContentWindow(input)
        +extractDuckDuckGoResults(html)
        +extractGoogleResults(html)
        +extractBingResults(html)
        +searchDuckDuckGo(options)
        +searchGoogle(options)
        +searchBing(options)
        +searchGoogleNewsRss(options)
        +fetchPageAsMarkdown(options)
    }
    class ObservabilitySpan {
        <<from @mastra/core/observability>>
        +update(data)
        +end()
        +error(options)
    }
    class TracingContext {
        <<from @mastra/core/observability>>
    }
    class RequestContext {
        <<from @mastra/core/request-context>>
    }
    class Writer {
        <<toolWriter>>
        +custom(event)
    }
    class Error
    FetchToolError --|> Error
    FetchToolContext --|> RequestContext
    FetchTool ..> FetchToolInputSchema : uses
    FetchTool ..> FetchToolOutputSchema : uses
    FetchTool ..> FetchHelpers : calls
    FetchTool ..> FetchToolError : throws
    FetchTool ..> FetchToolContext : cast requestContext
    FetchTool ..> ObservabilitySpan : tracing span
    FetchTool ..> TracingContext : tracingContext
    FetchTool ..> Writer : progress events
    FetchTool ..> HttpFetch : network calls
    FetchHelpers ..> SearchResult : returns
    FetchHelpers ..> HttpFetch : uses
    FetchHelpers ..> ValidationUtils : uses
```
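The class diagram lists `normalizeUrl` and `dedupeResults` among the FetchHelpers without showing their bodies. The following is a hypothetical sketch of how such helpers might behave; the tracking-parameter list and trailing-slash handling are assumptions, not the repository's actual logic:

```typescript
interface SearchResult {
  title: string
  url: string
  snippet?: string
}

// Hypothetical normalizer: strips common tracking params and a trailing
// slash so near-identical links from different providers compare equal.
function normalizeUrl(rawUrl: string): string {
  const u = new URL(rawUrl)
  for (const p of ['utm_source', 'utm_medium', 'utm_campaign', 'ref']) {
    u.searchParams.delete(p)
  }
  const path = u.pathname.replace(/\/+$/, '') // "/a/" and "/a" compare equal
  return `${u.protocol}//${u.host.toLowerCase()}${path}${u.search}`
}

function dedupeResults(results: SearchResult[]): SearchResult[] {
  const seen = new Set<string>()
  const out: SearchResult[] = []
  for (const r of results) {
    let key = r.url
    try {
      key = normalizeUrl(r.url)
    } catch {
      // Unparseable URL: fall back to deduping on the raw string.
    }
    if (!seen.has(key)) {
      seen.add(key)
      out.push(r)
    }
  }
  return out
}
```

Keeping the first occurrence (rather than the last) preserves provider ordering, which matters if providers are queried in a preference order.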
File-Level Changes
🤖 Hi @ssdeanx, I've received your request, and I'm working on it now! You can track my progress in the logs for more details.
**Caution: Review failed.** The pull request is closed. ℹ️ Recent review info: ⚙️ Configuration used: Organization UI Review profile: ASSERTIVE; Plan: Pro. 📒 Files selected for processing: 12.
**Summary by CodeRabbit**

**Walkthrough**

This PR introduces workspace and MCP/A2A management UI pages with integrated query hooks, adds a production-grade web fetch tool with multi-provider search integration to the research agent, and expands the Mastra query hook system to support workspace, sandbox, MCP, and A2A operations alongside associated cache keys and mutations.
Sequence diagram(s):

```mermaid
sequenceDiagram
    participant User
    participant WorkspacePage as Workspace UI
    participant MastraQuery as useMastraQuery Hooks
    participant MastraClient as Mastra Client
    participant Backend as Backend API
    User->>WorkspacePage: Select workspace
    WorkspacePage->>MastraQuery: useWorkspaces()
    MastraQuery->>MastraClient: fetch workspaces
    MastraClient->>Backend: GET /workspaces
    Backend-->>MastraClient: workspace list
    MastraClient-->>MastraQuery: return data
    MastraQuery-->>WorkspacePage: workspaces state updated
    User->>WorkspacePage: Select file path
    WorkspacePage->>MastraQuery: useWorkspaceFiles(workspaceId)
    MastraQuery->>MastraClient: fetch files
    MastraClient->>Backend: GET /workspaces/{id}/files
    Backend-->>MastraClient: file tree
    MastraClient-->>MastraQuery: return data
    MastraQuery-->>WorkspacePage: file content displayed
```
```mermaid
sequenceDiagram
    participant Agent
    participant FetchTool as fetchTool
    participant Providers as Search Providers
    participant HTML as HTML Processor
    participant Result as Result Formatter
    Agent->>FetchTool: Execute(query/url)
    FetchTool->>Providers: Query DuckDuckGo, Google, Bing
    Providers-->>FetchTool: Raw results
    FetchTool->>HTML: Sanitize & convert to markdown
    HTML-->>FetchTool: Cleaned markdown
    FetchTool->>Result: Deduplicate, aggregate
    Result-->>FetchTool: Final result set
    FetchTool-->>Agent: Return structured output
```
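The diagram above shows the tool querying several providers and aggregating results, but the orchestration itself is not included in this excerpt. One plausible shape, assuming a simple ordered-fallback strategy (the real tool may race or merge providers differently):

```typescript
interface SearchResult {
  title: string
  url: string
  snippet?: string
}

type Provider = (query: string) => Promise<SearchResult[]>

// Hypothetical orchestration: try each provider in preference order,
// falling back when one fails or returns nothing, then cap the output.
async function searchWithFallback(
  query: string,
  providers: Provider[],
  maxResults: number,
): Promise<SearchResult[]> {
  for (const provider of providers) {
    try {
      const results = await provider(query)
      if (results.length > 0) return results.slice(0, maxResults)
    } catch {
      // Provider failed (network error, HTML layout change): try the next.
    }
  }
  return []
}
```

An ordered fallback keeps latency low on the happy path; a merging strategy (querying all providers concurrently and deduplicating) trades latency for recall.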
```mermaid
sequenceDiagram
    participant User
    participant McpPage as MCP/A2A UI
    participant MastraQuery as useMcpServers/Agents Hooks
    participant MastraClient as Mastra Client
    participant McpServers as MCP Servers
    User->>McpPage: Load page
    McpPage->>MastraQuery: useMcpServers()
    MastraQuery->>MastraClient: fetch servers
    MastraClient->>McpServers: Query MCP server registry
    McpServers-->>MastraClient: server list
    MastraClient-->>MastraQuery: return data
    MastraQuery-->>McpPage: servers populated
    User->>McpPage: Select server
    McpPage->>MastraQuery: useMcpServerTools(serverId)
    MastraQuery->>MastraClient: fetch tools
    MastraClient->>McpServers: GET /servers/{id}/tools
    McpServers-->>MastraClient: tool list
    MastraClient-->>MastraQuery: return data
    MastraQuery-->>McpPage: tools & agent card displayed
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes.
**Summary of Changes (Gemini Code Assist)**

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces a powerful new web fetching and search tool, significantly enhancing the application's ability to acquire and process external data. Complementing this, new user interfaces have been developed for managing MCP (Model Context Protocol) servers, Agent-to-Agent interactions, and workspace files, providing users with improved visibility and control over their agent environments and associated data.
Changelog
🤖 I'm sorry @ssdeanx, but I was unable to process your request. Please see the logs for more details.
Hey - I've found 2 issues and left some high-level feedback:

- The workspace vs. sandbox hooks in `use-mastra-query.ts` are almost identical; consider extracting a shared factory/helper to avoid duplicated query/mutation logic and keep future changes in sync.
- Both `app/chat/workspaces/page.tsx` and `app/chat/mcp-a2a/page.tsx` assume non-empty server/workspace/agent lists; it would be safer to gate the selects and dependent hooks on loading/error/empty states instead of defaulting to the first element or an empty string.
- In `WorkspacesPage`, the filesystem payload is treated as `unknown` and then heuristically mapped via `entries`/`items`/`files`; wiring this to the concrete `WorkspaceFsListResponse` type (or a dedicated transformer) would make the file-tree mapping more robust to backend shape changes.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The workspace vs sandbox hooks in `use-mastra-query.ts` are almost identical; consider extracting a shared factory/helper to avoid duplicated query/mutation logic and keep future changes in sync.
- Both `app/chat/workspaces/page.tsx` and `app/chat/mcp-a2a/page.tsx` assume non-empty server/workspace/agent lists; it would be safer to gate the selects and dependent hooks on loading/error/empty states instead of defaulting to the first element or an empty string.
- In `WorkspacesPage`, the filesystem payload is treated as `unknown` and then heuristically mapped via `entries/items/files`; wiring this to the concrete `WorkspaceFsListResponse` type (or a dedicated transformer) would make the file-tree mapping more robust to backend shape changes.
## Individual Comments
### Comment 1
<location path="app/chat/components/main-sidebar.tsx" line_range="309-314" />
<code_context>
+ {!(!(agent.provider ?? agent.modelId)) && (
</code_context>
<issue_to_address>
**suggestion:** Simplify boolean checks for provider/modelId to improve readability.
`!(!(agent.provider ?? agent.modelId))` and `Boolean(agent.provider)` are much less readable than `!!(agent.provider || agent.modelId)` and `agent.provider && ...`. Please revert to the more idiomatic forms to keep the logic easy to scan and maintain.
Suggested implementation:
```typescript
{!!(agent.provider || agent.modelId) && (
```
```typescript
{agent.provider && `${agent.provider} • `}
```
</issue_to_address>
### Comment 2
<location path="src/mastra/tools/fetch.tool.ts" line_range="179-188" />
<code_context>
+ mode: 'head-tail',
+}
+
+function compileRe2Patterns(patterns?: string[]) {
+ const compiled: Array<InstanceType<typeof RE2Ctor>> = []
+ for (const pattern of patterns ?? []) {
+ try {
+ if (typeof pattern === 'string' && pattern.trim().length > 0) {
+ compiled.push(new RE2Ctor(pattern))
+ }
+ } catch (error) {
+ log.warn('Invalid RE2 pattern ignored', {
+ pattern,
+ error: error instanceof Error ? error.message : String(error),
+ })
+ }
+ }
+ return compiled
+}
+
</code_context>
<issue_to_address>
**suggestion:** Surface invalid RE2 patterns back into tool output or tracing to aid debugging.
Currently invalid RE2 patterns are only logged with `log.warn` and then ignored, which makes it hard for callers to understand why their include/exclude filters are not taking effect. Please also surface invalid pattern details in span metadata or in `providerDiagnostics`/output metadata so consumers and monitoring can detect misconfiguration without relying solely on logs.
Suggested implementation:

```typescript
type Re2PatternDiagnostic = {
  pattern: string
  error: string
}

function compileRe2Patterns(patterns?: string[]): {
  compiled: Array<InstanceType<typeof RE2Ctor>>
  invalidPatterns: Re2PatternDiagnostic[]
} {
  const compiled: Array<InstanceType<typeof RE2Ctor>> = []
  const invalidPatterns: Re2PatternDiagnostic[] = []
  for (const pattern of patterns ?? []) {
    try {
      if (typeof pattern === 'string' && pattern.trim().length > 0) {
        compiled.push(new RE2Ctor(pattern))
      }
    } catch (error) {
      const message = error instanceof Error ? error.message : String(error)
      log.warn('Invalid RE2 pattern ignored', {
        pattern,
        error: message,
      })
      invalidPatterns.push({ pattern, error: message })
    }
  }
  return { compiled, invalidPatterns }
}
```
To fully surface invalid RE2 patterns as requested, you should also:
1. Update all call sites of `compileRe2Patterns` to handle the new return type:
- Replace usages like:
- `const includePatterns = compileRe2Patterns(config.includePatterns)`
with:
- `const { compiled: includePatterns, invalidPatterns: includePatternErrors } = compileRe2Patterns(config.includePatterns)`
2. Propagate `invalidPatterns` into:
- Span metadata (e.g., `span.setAttribute('fetch.invalidRe2Patterns', invalidPatterns)` or similar, depending on your tracing abstraction).
- Tool/provider diagnostics or output metadata (e.g., append to a `providerDiagnostics` array or include in a `debug`/`meta` field returned by the tool).
3. If you have a shared diagnostics type, align `Re2PatternDiagnostic` with it or remove the local type and reuse the shared one.
4. Consider aggregating invalid patterns from multiple call sites into a single diagnostics payload associated with the tool invocation, so consumers can easily see all misconfigurations for a given request.
</issue_to_address>
Inline review context, `app/chat/components/main-sidebar.tsx` (the hunk Comment 1 refers to):

```tsx
{!(!(agent.provider ?? agent.modelId)) && (
  <div className="mt-1.5 flex items-center gap-2">
    <div className="flex h-5 items-center rounded-md border border-primary/20 bg-primary/10 px-1.5 text-[9px] font-bold uppercase tracking-wider text-primary">
      <CpuIcon className="mr-1 size-3" />
      <span>
        {/* was: {agent.provider && `${agent.provider} • `} */}
        {(Boolean(agent.provider)) && `${agent.provider} • `}
```
**Code Review (Gemini)**

This pull request introduces a versatile fetch/search tool and accompanying UI pages for workspace and MCP/A2A management, featuring a new `fetch.tool.ts` with multi-provider search capabilities and new pages in `app/chat/`. While the new functionality is impressive, the fetch tool lacks critical security controls, making it vulnerable to Server-Side Request Forgery (SSRF) and Denial-of-Service (DoS) attacks due to unvalidated URL inputs and unrestricted response sizes. These security issues must be addressed before production deployment. Furthermore, improvements are needed regarding code duplication in the new React Query hooks and the fragility of the web-scraping implementation in the fetch tool, to enhance long-term maintainability and robustness.
```typescript
const page = await fetchPageAsMarkdown({
  url: inputData.url,
  timeout,
  userAgent,
  contentWindow,
})
```
The fetch tool accepts a user-provided URL and fetches its content without validating the destination. This allows an attacker to perform Server-Side Request Forgery (SSRF) by pointing the tool to internal services, loopback addresses, or cloud metadata endpoints (e.g., http://169.254.169.254/latest/meta-data/).
To remediate this, implement a validation step that blocks internal IP ranges and sensitive hostnames. You should also ensure that only http and https protocols are allowed.
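A minimal sketch of such a guard, assuming nothing about the repository's actual `ValidationUtils`; the hostname list and IPv4 ranges below are illustrative, and a production guard also needs DNS re-resolution and IPv6 handling:

```typescript
// Hypothetical SSRF guard: allow only http(s), reject loopback/private/
// link-local IPv4 ranges and well-known internal hostnames.
const BLOCKED_HOSTNAMES = new Set(['localhost', 'metadata.google.internal'])

function isPrivateIpv4(host: string): boolean {
  const m = host.match(/^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/)
  if (!m) return false
  const [a, b] = [Number(m[1]), Number(m[2])]
  return (
    a === 10 ||
    a === 127 ||
    (a === 172 && b >= 16 && b <= 31) ||
    (a === 192 && b === 168) ||
    (a === 169 && b === 254) // link-local, incl. cloud metadata 169.254.169.254
  )
}

function isSafeFetchUrl(raw: string): boolean {
  let u: URL
  try {
    u = new URL(raw)
  } catch {
    return false
  }
  if (u.protocol !== 'http:' && u.protocol !== 'https:') return false
  const host = u.hostname.toLowerCase()
  if (BLOCKED_HOSTNAMES.has(host)) return false
  if (isPrivateIpv4(host)) return false
  // Note: a complete guard must also resolve the hostname and re-check the
  // resolved address (to defeat DNS rebinding) and handle IPv6 (e.g. ::1).
  return true
}
```

The check has to run not only on the initial URL but also on every redirect target, since a public URL can 302 to an internal address.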
Inline review context, `use-mastra-query.ts` (workspace and sandbox hooks):

```typescript
const useWorkspaceInfo = (id: string) =>
  useQuery<WorkspaceInfoResponse, Error>({
    queryKey: mastraQueryKeys.workspaces.info(id),
    queryFn: () => mastraClient.getWorkspace(id).info(),
    enabled: !!id,
  })

const useWorkspaceFiles = (
  workspaceId: string,
  path = '/',
  recursive = false
) =>
  useQuery<WorkspaceFsListResponse, Error>({
    queryKey: mastraQueryKeys.workspaces.files(workspaceId, path, recursive),
    queryFn: () => mastraClient.getWorkspace(workspaceId).listFiles(path, recursive),
    enabled: !!workspaceId,
  })

const useWorkspaceReadFile = (
  workspaceId: string,
  path: string,
  encoding = 'utf-8'
) =>
  useQuery<WorkspaceFsReadResponse, Error>({
    queryKey: mastraQueryKeys.workspaces.file(workspaceId, path, encoding),
    queryFn: () => mastraClient.getWorkspace(workspaceId).readFile(path, encoding),
    enabled: !!workspaceId && !!path,
  })

const useWorkspaceStat = (workspaceId: string, path: string) =>
  useQuery<WorkspaceFsStatResponse, Error>({
    queryKey: mastraQueryKeys.workspaces.stat(workspaceId, path),
    queryFn: () => mastraClient.getWorkspace(workspaceId).stat(path),
    enabled: !!workspaceId && !!path,
  })

const useWorkspaceSearch = (workspaceId: string, params: WorkspaceSearchParams) =>
  useQuery<WorkspaceSearchResponse, Error>({
    queryKey: mastraQueryKeys.workspaces.search(workspaceId, params),
    queryFn: () => mastraClient.getWorkspace(workspaceId).search(params),
    enabled: !!workspaceId && !!params?.query,
  })

const useWorkspaceSkills = (workspaceId: string) =>
  useQuery<ListSkillsResponse, Error>({
    queryKey: mastraQueryKeys.workspaces.skills(workspaceId),
    queryFn: () => mastraClient.getWorkspace(workspaceId).listSkills(),
    enabled: !!workspaceId,
  })

const useWorkspaceSearchSkills = (
  workspaceId: string,
  params: SearchSkillsParams
) =>
  useQuery<SearchSkillsResponse, Error>({
    queryKey: mastraQueryKeys.workspaces.searchSkills(workspaceId, params),
    queryFn: () => mastraClient.getWorkspace(workspaceId).searchSkills(params),
    enabled: !!workspaceId,
  })

// Workspace Mutations
const useWorkspaceWriteFileMutation = (workspaceId: string) =>
  useMutation({
    mutationFn: (params: {
      path: string
      content: string
      options?: { encoding?: 'utf-8' | 'base64'; recursive?: boolean }
    }) =>
      mastraClient
        .getWorkspace(workspaceId)
        .writeFile(params.path, params.content, params.options),
    onSuccess: async () => {
      await queryClient.invalidateQueries({
        queryKey: mastraQueryKeys.workspaces.all,
      })
    },
  })

const useWorkspaceDeleteMutation = (workspaceId: string) =>
  useMutation({
    mutationFn: (params: {
      path: string
      options?: { recursive?: boolean; force?: boolean }
    }) =>
      mastraClient
        .getWorkspace(workspaceId)
        .delete(params.path, params.options),
    onSuccess: async () => {
      await queryClient.invalidateQueries({
        queryKey: mastraQueryKeys.workspaces.all,
      })
    },
  })

const useWorkspaceMkdirMutation = (workspaceId: string) =>
  useMutation({
    mutationFn: (params: { path: string; recursive?: boolean }) =>
      mastraClient
        .getWorkspace(workspaceId)
        .mkdir(params.path, params.recursive),
    onSuccess: async () => {
      await queryClient.invalidateQueries({
        queryKey: mastraQueryKeys.workspaces.all,
      })
    },
  })

const useWorkspaceIndexMutation = (workspaceId: string) =>
  useMutation({
    mutationFn: (params: WorkspaceIndexParams) =>
      mastraClient.getWorkspace(workspaceId).index(params),
    onSuccess: async () => {
      await queryClient.invalidateQueries({
        queryKey: mastraQueryKeys.workspaces.all,
      })
    },
  })

// --- SANDBOX (separate frontend hooks) ---

const useSandboxInfo = (workspaceId: string) =>
  useQuery<WorkspaceInfoResponse, Error>({
    queryKey: mastraQueryKeys.sandbox.info(workspaceId),
    queryFn: () => mastraClient.getWorkspace(workspaceId).info(),
    enabled: !!workspaceId,
  })

const useSandboxFiles = (
  workspaceId: string,
  path = '/',
  recursive = false
) =>
  useQuery<WorkspaceFsListResponse, Error>({
    queryKey: mastraQueryKeys.sandbox.files(workspaceId, path, recursive),
    queryFn: () => mastraClient.getWorkspace(workspaceId).listFiles(path, recursive),
    enabled: !!workspaceId,
  })

const useSandboxReadFile = (
  workspaceId: string,
  path: string,
  encoding = 'utf-8'
) =>
  useQuery<WorkspaceFsReadResponse, Error>({
    queryKey: mastraQueryKeys.sandbox.file(workspaceId, path, encoding),
    queryFn: () => mastraClient.getWorkspace(workspaceId).readFile(path, encoding),
    enabled: !!workspaceId && !!path,
  })

const useSandboxStat = (workspaceId: string, path: string) =>
  useQuery<WorkspaceFsStatResponse, Error>({
    queryKey: mastraQueryKeys.sandbox.stat(workspaceId, path),
    queryFn: () => mastraClient.getWorkspace(workspaceId).stat(path),
    enabled: !!workspaceId && !!path,
  })

const useSandboxSearch = (workspaceId: string, params: WorkspaceSearchParams) =>
  useQuery<WorkspaceSearchResponse, Error>({
    queryKey: mastraQueryKeys.sandbox.search(workspaceId, params),
    queryFn: () => mastraClient.getWorkspace(workspaceId).search(params),
    enabled: !!workspaceId && !!params?.query,
  })

const useSandboxWriteFileMutation = (workspaceId: string) =>
  useMutation({
    mutationFn: (params: {
      path: string
      content: string
      options?: { encoding?: 'utf-8' | 'base64'; recursive?: boolean }
    }) =>
      mastraClient
        .getWorkspace(workspaceId)
        .writeFile(params.path, params.content, params.options),
    onSuccess: async () => {
      await queryClient.invalidateQueries({ queryKey: mastraQueryKeys.sandbox.all })
    },
  })

const useSandboxDeleteMutation = (workspaceId: string) =>
  useMutation({
    mutationFn: (params: {
      path: string
      options?: { recursive?: boolean; force?: boolean }
    }) =>
      mastraClient
        .getWorkspace(workspaceId)
        .delete(params.path, params.options),
    onSuccess: async () => {
      await queryClient.invalidateQueries({ queryKey: mastraQueryKeys.sandbox.all })
    },
  })

const useSandboxMkdirMutation = (workspaceId: string) =>
  useMutation({
    mutationFn: (params: { path: string; recursive?: boolean }) =>
      mastraClient.getWorkspace(workspaceId).mkdir(params.path, params.recursive),
    onSuccess: async () => {
      await queryClient.invalidateQueries({ queryKey: mastraQueryKeys.sandbox.all })
    },
  })

const useSandboxIndexMutation = (workspaceId: string) =>
  useMutation({
    mutationFn: (params: WorkspaceIndexParams) =>
      mastraClient.getWorkspace(workspaceId).index(params),
    onSuccess: async () => {
      await queryClient.invalidateQueries({ queryKey: mastraQueryKeys.sandbox.all })
    },
  })
```
There's significant code duplication between the `useWorkspace...` hooks and the `useSandbox...` hooks. The implementations are identical, differing only in the query-key group used (`mastraQueryKeys.workspaces` vs. `mastraQueryKeys.sandbox`). This makes the code harder to maintain, as any change needs to be applied in two places.

Consider refactoring this by creating a factory function that generates these hooks, taking the query-key group as an argument. This would eliminate the duplication and improve maintainability.
Example:

```ts
const createWorkspaceHooks = (
  keyGroup: typeof mastraQueryKeys.workspaces | typeof mastraQueryKeys.sandbox
) => {
  const useInfo = (id: string) =>
    useQuery<WorkspaceInfoResponse, Error>({
      queryKey: keyGroup.info(id),
      queryFn: () => mastraClient.getWorkspace(id).info(),
      enabled: !!id,
    });
  // ... other hooks for files, readFile, etc.
  return { useInfo /* , ... */ };
};

// Then you could use it like this:
const { useInfo: useWorkspaceInfo /* , ... */ } = createWorkspaceHooks(mastraQueryKeys.workspaces);
const { useInfo: useSandboxInfo /* , ... */ } = createWorkspaceHooks(mastraQueryKeys.sandbox);
```

```ts
function extractDuckDuckGoResults(html: string): SearchResult[] {
  const $ = cheerio.load(html)
  const out: SearchResult[] = []

  $('a.result__a').each((_i, el) => {
    const anchor = $(el)
    const title = anchor.text().trim()
    const href = anchor.attr('href') ?? ''
    if (href.trim().length === 0) {
      return
    }

    let resolvedUrl = href
    try {
      const urlObj = new URL(href, 'https://duckduckgo.com')
      const uddg = urlObj.searchParams.get('uddg')
      resolvedUrl =
        typeof uddg === 'string' && uddg.trim().length > 0
          ? decodeURIComponent(uddg)
          : urlObj.href
    } catch {
      // Keep original href
    }

    const snippet =
      anchor.closest('.result').find('.result__snippet').text().trim() ||
      undefined

    if (ValidationUtils.validateUrl(resolvedUrl)) {
      out.push({ title, url: resolvedUrl, snippet })
    }
  })

  return out
}
```
```ts
function extractGoogleResults(html: string): SearchResult[] {
  const $ = cheerio.load(html)
  const out: SearchResult[] = []

  $('a[href^="/url?q="]').each((_i, el) => {
    const anchor = $(el)
    const href = anchor.attr('href') ?? ''
    const title = anchor.text().trim()
    if (href.trim().length === 0) {
      return
    }

    try {
      const parsed = new URL(`https://www.google.com${href}`)
      const target = parsed.searchParams.get('q') ?? ''
      if (!ValidationUtils.validateUrl(target)) {
        return
      }

      const snippet =
        anchor
          .closest('div')
          .parent()
          .find('span,div')
          .first()
          .text()
          .trim() || undefined

      out.push({
        title: title.length > 0 ? title : target,
        url: target,
        snippet,
      })
    } catch {
      // Skip malformed results
    }
  })

  return out
}

function extractBingResults(html: string): SearchResult[] {
  const $ = cheerio.load(html)
  const out: SearchResult[] = []

  $('li.b_algo').each((_i, el) => {
    const node = $(el)
    const linkEl = node.find('h2 a').first()
    const url = linkEl.attr('href') ?? ''
    if (!ValidationUtils.validateUrl(url)) {
      return
    }

    const title = linkEl.text().trim()
    const snippet = node.find('p').first().text().trim() || undefined

    out.push({
      title: title.length > 0 ? title : url,
      url,
      snippet,
    })
  })

  return out
}
```
The functions for extracting search results (`extractDuckDuckGoResults`, `extractGoogleResults`, `extractBingResults`) rely on scraping HTML content using cheerio selectors. This approach is brittle and prone to breaking whenever the search engines update their HTML structure, which can happen frequently and without notice. This could cause the tool to fail or return incorrect data.
For a more robust solution, consider using official search APIs if available. If scraping is the only option, it would be good to add prominent comments warning about the fragility of this implementation and potentially add more robust error handling or monitoring to detect when scraping fails.
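One lightweight way to add the suggested monitoring is to wrap each extractor so that "non-empty HTML, zero parsed results" gets logged as a likely selector-drift event. This is only a sketch: `SearchResult` mirrors the shape quoted above, while `withScrapeDriftWarning` and the `log` object are hypothetical stand-ins for the project's own helpers and logger.

```typescript
// Sketch only: `SearchResult` mirrors the PR's type; `log` is a stand-in
// for whatever logger the tool already uses.
type SearchResult = { title: string; url: string; snippet?: string }

const log = {
  warn: (msg: string, meta?: Record<string, unknown>) =>
    console.warn(msg, meta ?? {}),
}

// Wraps a scraping extractor so that "non-empty HTML but zero results"
// is surfaced as a likely selector-drift event instead of failing silently.
function withScrapeDriftWarning(
  provider: string,
  extract: (html: string) => SearchResult[]
): (html: string) => SearchResult[] {
  return (html: string) => {
    const results = extract(html)
    if (html.trim().length > 0 && results.length === 0) {
      log.warn('Scrape returned no results; selectors may be stale', {
        provider,
        htmlChars: html.length,
      })
    }
    return results
  }
}
```

Each provider extractor could then be registered as e.g. `withScrapeDriftWarning('duckduckgo', extractDuckDuckGoResults)` without changing any call sites.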
```ts
const response = await httpFetch(options.url, {
  method: 'GET',
  timeout: options.timeout,
  responseType: 'text',
  headers,
})

if (!response.ok) {
  throw new FetchToolError(
    `HTTP ${response.status}: ${response.statusText}`,
    'HTTP_ERROR',
    response.status,
    options.url
  )
}

const html = await response.text()
```
The tool does not limit the size of the HTTP response it fetches. An attacker can provide a URL that returns a very large response, causing the server to exhaust its memory when reading the body into a string and parsing it with JSDOM. This can lead to a Denial of Service (DoS) by crashing the server process.
To remediate this, set a maximum response size limit (e.g., 5 MB) in the `httpFetch` call using the `maxContentLength` option or by checking the `Content-Length` header before processing the body.
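A minimal sketch of the `Content-Length` pre-check half of that remediation. The 5 MB cap, the `RESPONSE_TOO_LARGE` code, and the `FetchToolError` shape are illustrative assumptions based on the snippet above; note that a missing or lying `Content-Length` still requires a streaming read with a byte counter.

```typescript
// Assumed cap; the real limit would live in the tool's config.
const MAX_RESPONSE_BYTES = 5 * 1024 * 1024

// Stand-in for the error class used in the quoted snippet.
class FetchToolError extends Error {
  constructor(
    message: string,
    public code: string,
    public status?: number,
    public url?: string
  ) {
    super(message)
  }
}

// Rejects responses whose declared Content-Length exceeds the cap before
// the body is buffered. Only handles the honest-server case.
function assertWithinSizeLimit(
  headers: Record<string, string | undefined>,
  url: string,
  maxBytes: number = MAX_RESPONSE_BYTES
): void {
  const declared = Number(headers['content-length'] ?? '0')
  if (Number.isFinite(declared) && declared > maxBytes) {
    throw new FetchToolError(
      `Response too large: ${declared} bytes (limit ${maxBytes})`,
      'RESPONSE_TOO_LARGE',
      undefined,
      url
    )
  }
}
```

The check would run between receiving the response headers and calling `response.text()`.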
```markdown
- 2) Run `get_errors` on the exact files being edited (not project-wide).
- 3) Fix reported issues.
- 4) Run `get_errors` again on those same files to verify clean state.
- 🌐 When unsure about framework/API behavior while editing UI pages, use internet research tools first (`#web`, `#websearch`, or `fetch_webpage`) and then apply fixes.
```
The instruction mentions `fetch_webpage`, but the new tool added in this PR is `fetchTool`. To avoid confusion and ensure the instructions are accurate, it would be better to use the actual tool ID.
```diff
- - 🌐 When unsure about framework/API behavior while editing UI pages, use internet research tools first (`#web`, `#websearch`, or `fetch_webpage`) and then apply fixes.
+ - 🌐 When unsure about framework/API behavior while editing UI pages, use internet research tools first (`#web`, `#websearch`, or `fetchTool`) and then apply fixes.
```
```diff
  <CpuIcon className="mr-1 size-3" />
  <span>
-   {agent.provider && `${agent.provider} • `}
+   {(Boolean(agent.provider)) && `${agent.provider} • `}
```
```diff
  {/* Description section */}
  <div className="p-4 pt-3">
-   {agent.description ? (
+   {(agent.description) ? (
```
```tsx
  <li className="text-sm text-muted-foreground">No skills found.</li>
) : (
  skills.map((skill, idx) => (
    <li key={`${skill.name ?? 'skill'}-${idx}`} className="rounded-md border p-2">
```
Using the list index `idx` as part of a React key is not ideal, as it can lead to issues if the list is re-ordered. The skill object likely has a unique `id` property. Using `skill.id` would provide a stable identity for each item, which is the recommended practice.
```diff
- <li key={`${skill.name ?? 'skill'}-${idx}`} className="rounded-md border p-2">
+ <li key={skill.id ?? `${skill.name ?? 'skill'}-${idx}`} className="rounded-md border p-2">
```
```ts
}

function sanitizeHtml(html: string): string {
  const dom = new JSDOM(String(html), {
```
```ts
try {
  const urlObj = new URL(href, 'https://duckduckgo.com')
  const uddg = urlObj.searchParams.get('uddg')
  resolvedUrl =
    typeof uddg === 'string' && uddg.trim().length > 0
      ? decodeURIComponent(uddg)
      : urlObj.href
} catch {
  // Keep original href
}
```
The `try...catch` block here is empty, which silently swallows any errors that might occur during URL parsing or decoding, making debugging difficult. The same issue exists in `extractGoogleResults` on line 364. It's better to at least log the error for easier troubleshooting.
```ts
} catch (error) {
  log.warn('Failed to resolve DuckDuckGo URL', {
    href,
    error: error instanceof Error ? error.message : String(error),
  });
  // Keep original href
}
```
Pull request overview
Adds a new Mastra `fetchTool` for URL fetching + web/news search with markdown conversion, and introduces new chat UI pages for MCP/A2A and workspace browsing that consume the expanded Mastra client hooks.
Changes:
- Added `src/mastra/tools/fetch.tool.ts` and exported it via the tools index; wired it into `researchAgent`.
- Introduced new chat pages: `/chat/workspaces` (workspace files/skills) and `/chat/mcp-a2a` (MCP servers/tools + A2A card).
- Extended frontend Mastra query hooks/query keys for workspaces/sandbox and MCP/A2A consumption (plus documentation/memory-bank updates).
Reviewed changes
Copilot reviewed 12 out of 12 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| src/mastra/tools/index.ts | Exports the new fetchTool from the tools barrel. |
| src/mastra/tools/fetch.tool.ts | Implements fetch/search + HTML sanitization + markdown conversion + RE2 filtering. |
| src/mastra/agents/researchAgent.ts | Adds fetchTool to the agent toolset and updates tool-selection guidance. |
| src/components/ai-elements/tools/types.ts | Adds FetchUITool type for UI tool rendering. |
| lib/hooks/use-mastra-query.ts | Adds workspace/sandbox hooks + MCP/A2A hooks and query key structure. |
| app/chat/workspaces/page.tsx | New workspace browser page (file tree + file viewer + skills list). |
| app/chat/mcp-a2a/page.tsx | New MCP/A2A page for browsing MCP servers/tools and viewing A2A agent cards. |
| app/chat/components/main-sidebar.tsx | Adds navigation links to the new pages and tweaks agent card rendering logic. |
| memory-bank/progress.md | Documents recent hook cleanup/expansion work. |
| memory-bank/activeContext.md | Records current context around workspace/sandbox + MCP/A2A hook integration. |
| lib/AGENTS.md | Updates lib directory documentation to reflect new hooks. |
| .github/copilot-instructions.md | Updates Copilot instructions around targeted error checking for page edits. |
```ts
    return parsed.protocol === 'http:' || parsed.protocol === 'https:'
  } catch {
    return false
  }
}
```
`ValidationUtils.validateUrl()` only checks the protocol, which still allows fetching localhost/private-network URLs (e.g. `http://127.0.0.1`, `http://10.x.x.x`) and enables SSRF against internal services/metadata endpoints. Please add host-level restrictions (at least block localhost + RFC1918 + link-local + IPv6 loopback) and/or support an allowlist similar to web-scraper-tool's `WEB_SCRAPER_ALLOWED_DOMAINS` gating before calling `httpFetch()`.
```diff
-    return parsed.protocol === 'http:' || parsed.protocol === 'https:'
-  } catch {
-    return false
-  }
-}
+    if (parsed.protocol !== 'http:' && parsed.protocol !== 'https:') {
+      return false
+    }
+    const hostname = parsed.hostname
+    if (!hostname || this.isPrivateHostname(hostname)) {
+      return false
+    }
+    return true
+  } catch {
+    return false
+  }
+}
+
+private static isPrivateHostname(hostname: string): boolean {
+  const lower = hostname.toLowerCase()
+  // Block localhost and subdomains like foo.localhost
+  if (lower === 'localhost' || lower.endsWith('.localhost')) {
+    return true
+  }
+  // Block IPv6 loopback
+  if (lower === '::1' || lower === '0:0:0:0:0:0:0:1') {
+    return true
+  }
+  // Detect simple IPv4 literals
+  const ipv4Pattern = /^(\d{1,3}\.){3}\d{1,3}$/
+  if (ipv4Pattern.test(hostname)) {
+    const parts = hostname.split('.').map((part) => Number(part))
+    // Reject obviously invalid octets as unsafe
+    if (parts.some((octet) => Number.isNaN(octet) || octet < 0 || octet > 255)) {
+      return true
+    }
+    const [a, b] = parts
+    // 127.0.0.0/8 – loopback
+    if (a === 127) {
+      return true
+    }
+    // 10.0.0.0/8 – RFC1918 private
+    if (a === 10) {
+      return true
+    }
+    // 172.16.0.0/12 – RFC1918 private
+    if (a === 172 && b >= 16 && b <= 31) {
+      return true
+    }
+    // 192.168.0.0/16 – RFC1918 private
+    if (a === 192 && b === 168) {
+      return true
+    }
+    // 169.254.0.0/16 – link-local
+    if (a === 169 && b === 254) {
+      return true
+    }
+  }
+  return false
+}
```
```ts
    return {
      markdown: `${sliced}\n\n---\n_Truncated by content window (head mode)_`,
      originalChars,
      outputChars: sliced.length,
      truncated: true,
    }
  }

  if (window.mode === 'tail') {
    const sliced = source.slice(-window.maxChars).trim()
    return {
      markdown: `_Truncated by content window (tail mode)_\n---\n\n${sliced}`,
      originalChars,
      outputChars: sliced.length,
```
`applyContentWindow()` reports `outputChars` as `sliced.length` in head/tail modes, but the returned markdown includes additional truncation banner text. This makes the metadata inconsistent with the actual output. Update `outputChars` to reflect the final markdown length (including the truncation notice) or rename the field if it's meant to measure only the excerpt length.
```diff
-    return {
-      markdown: `${sliced}\n\n---\n_Truncated by content window (head mode)_`,
-      originalChars,
-      outputChars: sliced.length,
-      truncated: true,
-    }
-  }
-
-  if (window.mode === 'tail') {
-    const sliced = source.slice(-window.maxChars).trim()
-    return {
-      markdown: `_Truncated by content window (tail mode)_\n---\n\n${sliced}`,
-      originalChars,
-      outputChars: sliced.length,
+    const markdown = `${sliced}\n\n---\n_Truncated by content window (head mode)_`
+    return {
+      markdown,
+      originalChars,
+      outputChars: markdown.length,
+      truncated: true,
+    }
+  }
+
+  if (window.mode === 'tail') {
+    const sliced = source.slice(-window.maxChars).trim()
+    const markdown = `_Truncated by content window (tail mode)_\n---\n\n${sliced}`
+    return {
+      markdown,
+      originalChars,
+      outputChars: markdown.length,
```
```ts
searchProvider: z
  .enum(['duckduckgo', 'google', 'bing', 'all'])
  .optional()
  .describe('Search backend. No fallback is applied.'),
searchVertical: z
  .enum(['web', 'news', 'auto'])
  .optional()
  .describe(
    'Search vertical. auto detects news-like queries and adds Google News RSS for reliability.'
  ),
```
The input schema describes `searchProvider` as "No fallback is applied", but the implementation does fallback/aggregation (e.g. provider `all` runs multiple providers, errors are caught per-provider, and `searchVertical` set to `auto`/`news` adds Google News RSS regardless). Please update the `.describe(...)` text (and potentially the tool description) to match actual behavior so tool callers aren't misled.
```ts
export const fetchTool = createTool({
  id: 'fetch',
  description:
    'Production fetch/search tool with RE2 filtering and markdown output. No fallback, no file writes.',
  inputSchema: fetchToolInputSchema,
  outputSchema: fetchToolOutputSchema,
  onInputStart: ({ toolCallId, messages, abortSignal }) => {
```
This PR introduces a new production-critical tool (`fetchTool`) with substantial parsing/search logic, but there are no accompanying tests under `src/mastra/tools/tests/` (the repo has extensive tool test coverage already). Please add unit tests that mock `httpFetch()` to cover: direct URL fetch, search aggregation/dedupe/filtering, content window truncation modes, and provider error handling.
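Not every case needs the `httpFetch()` mock: the aggregation/dedupe piece, for instance, is testable as a pure function. A sketch of one such starting point — `dedupeResults` and its URL-normalization rule are hypothetical, not the tool's actual implementation:

```typescript
// `SearchResult` mirrors the shape used in the PR's extractors.
type SearchResult = { title: string; url: string; snippet?: string }

// De-duplicates aggregated results by URL, ignoring case and a trailing
// slash so that https://ex.com/ and https://EX.com collapse to one entry.
function dedupeResults(results: SearchResult[]): SearchResult[] {
  const seen = new Set<string>()
  const out: SearchResult[] = []
  for (const r of results) {
    const key = r.url.replace(/\/+$/, '').toLowerCase()
    if (!seen.has(key)) {
      seen.add(key)
      out.push(r)
    }
  }
  return out
}
```

A unit test can exercise this directly, reserving the `httpFetch()` mock for the end-to-end fetch and provider-error paths.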
```tsx
{folders.map((folder) => (
  <FileTreeFolder key={folder.path} path={folder.path} name={folder.name} />
))}
{plainFiles.map((file) => (
  <FileTreeFile key={file.path} path={file.path} name={file.name} />
))}
```
The `FileTree` is rendered with all `FileTreeFolder` and `FileTreeFile` nodes as flat siblings, but `FileTreeFolder` expects nested children to display hierarchy. As-is, folders will expand to empty content and nested files won't appear under their parents. Consider building a nested tree structure from the returned paths and rendering folders with their child folders/files.
```diff
- {folders.map((folder) => (
-   <FileTreeFolder key={folder.path} path={folder.path} name={folder.name} />
- ))}
- {plainFiles.map((file) => (
-   <FileTreeFile key={file.path} path={file.path} name={file.name} />
- ))}
+ {(() => {
+   type FolderInfo = {
+     folder: WorkspaceFileNode
+     childFolderPaths: string[]
+     files: WorkspaceFileNode[]
+   }
+   const folderMap = new Map<string, FolderInfo>()
+   // Initialize map with all folders
+   for (const folder of folders) {
+     folderMap.set(folder.path, {
+       folder,
+       childFolderPaths: [],
+       files: [],
+     })
+   }
+   const rootFolderPaths: string[] = []
+   const rootFiles: WorkspaceFileNode[] = []
+   const getParentPath = (path: string): string | null => {
+     const lastSlashIndex = path.lastIndexOf('/')
+     if (lastSlashIndex === -1) {
+       return null
+     }
+     return path.slice(0, lastSlashIndex)
+   }
+   // Link folders to their parents
+   for (const folder of folders) {
+     const parentPath = getParentPath(folder.path)
+     if (parentPath && folderMap.has(parentPath)) {
+       const parentInfo = folderMap.get(parentPath)!
+       parentInfo.childFolderPaths.push(folder.path)
+     } else {
+       rootFolderPaths.push(folder.path)
+     }
+   }
+   // Assign files to their parent folders or root
+   for (const file of plainFiles) {
+     const parentPath = getParentPath(file.path)
+     if (parentPath && folderMap.has(parentPath)) {
+       const parentInfo = folderMap.get(parentPath)!
+       parentInfo.files.push(file)
+     } else {
+       rootFiles.push(file)
+     }
+   }
+   const renderFolder = (folderPath: string): JSX.Element | null => {
+     const info = folderMap.get(folderPath)
+     if (!info) return null
+     return (
+       <FileTreeFolder
+         key={info.folder.path}
+         path={info.folder.path}
+         name={info.folder.name}
+       >
+         {info.childFolderPaths.map((childPath) => renderFolder(childPath))}
+         {info.files.map((file) => (
+           <FileTreeFile
+             key={file.path}
+             path={file.path}
+             name={file.name}
+           />
+         ))}
+       </FileTreeFolder>
+     )
+   }
+   return (
+     <>
+       {rootFolderPaths.map((folderPath) => renderFolder(folderPath))}
+       {rootFiles.map((file) => (
+         <FileTreeFile
+           key={file.path}
+           path={file.path}
+           name={file.name}
+         />
+       ))}
+     </>
+   )
+ })()}
```
```ts
const [selectedFilePath, setSelectedFilePath] = useState<string>('')
const readFileResult = useSandboxReadFile(
  activeWorkspaceId,
  selectedFilePath,
  'utf-8'
)
```
Selecting a `FileTreeFolder` triggers `onSelect(path)` (by design of `FileTreeFolder`), and this page wires `onSelect` directly to `setSelectedFilePath`. That means clicking a folder will attempt `useSandboxReadFile(..., selectedFilePath)` on a directory path. Filter selections to files only (e.g. ignore folder paths, or track selected node type) to avoid erroneous `readFile` calls and confusing UI state.
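One way to express the "files only" filter is a small pure guard keyed off the sandbox listing. `resolveFileSelection` and `knownFilePaths` are hypothetical names for illustration, not part of the page's current code:

```typescript
// Only paths that the sandbox listing reports as plain files are selectable;
// folder paths resolve to null and should leave the selection unchanged.
function resolveFileSelection(
  path: string,
  knownFilePaths: ReadonlySet<string>
): string | null {
  return knownFilePaths.has(path) ? path : null
}
```

The page's handler would then read roughly `onSelect={(p) => { const f = resolveFileSelection(p, filePathSet); if (f) setSelectedFilePath(f) }}`, so `useSandboxReadFile` only ever sees file paths.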
```diff
  {!(!(agent.provider ?? agent.modelId)) && (
    <div className="mt-1.5 flex items-center gap-2">
      <div className="flex h-5 items-center rounded-md border border-primary/20 bg-primary/10 px-1.5 text-[9px] font-bold uppercase tracking-wider text-primary">
        <CpuIcon className="mr-1 size-3" />
        <span>
-         {agent.provider && `${agent.provider} • `}
+         {(Boolean(agent.provider)) && `${agent.provider} • `}
          {agent.modelId}
        </span>
```
The new condition `!(!(agent.provider ?? agent.modelId))` changes behavior vs the previous `!!(agent.provider || agent.modelId)`. With the nullish-coalescing version, an empty-string `provider` will prevent the badge from showing even when `modelId` is present. Use a truthy check (`agent.provider || agent.modelId`) or explicitly check each field so the model badge still renders when only `modelId` is set.
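The recommended truthy check, written out as a pure predicate for clarity (the `AgentBadgeInfo` type and helper name are assumptions about the agent shape, not existing code):

```typescript
type AgentBadgeInfo = { provider?: string; modelId?: string }

// Truthy `||` treats empty strings as absent, so an agent with only a
// modelId still shows the badge — unlike the `??`-based condition, where
// provider: '' short-circuits and hides it.
function shouldShowModelBadge(agent: AgentBadgeInfo): boolean {
  return Boolean(agent.provider || agent.modelId)
}
```

The JSX condition would then be `{shouldShowModelBadge(agent) && (...)}`.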
- `src/mastra/tools/fetch.tool.ts` for web content fetching and markdown conversion.
- `app/chat/mcp-a2a/page.tsx` to manage MCP servers and tools.
- `app/chat/workspaces/page.tsx` to handle file browsing and skill display.
- `src/mastra/tools/index.ts` to include the new fetch tool.

Summary by Sourcery
Introduce a new fetch/search tool with markdown output, expand Mastra frontend hooks for workspaces, sandbox, MCP, and A2A, and add new UI pages and navigation for managing workspaces and MCP/A2A integrations.
New Features:
Bug Fixes:
Enhancements:
Documentation: