Merged
15 changes: 10 additions & 5 deletions CLAUDE.md
@@ -64,7 +64,7 @@ make ci # Runs: fmt → vet → lint → test-unit → test-race → secu
- **Timezone Support**: Offline utilities + calendar integration ✅
- **Email Signing**: GPG/PGP email signing (RFC 3156 PGP/MIME) ✅
- **AI Chat**: Web-based chat interface using locally installed AI agents ✅
- **Credential Storage**: System keyring (see below)
- **Credential Storage**: System keyring for secrets; file-backed grant cache for non-secret grant metadata (see below)
- **Web UI**: Air - browser-based interface (localhost:7365)

**Details:** See `docs/ARCHITECTURE.md`
@@ -84,13 +84,16 @@ make ci # Runs: fmt → vet → lint → test-unit → test-race → secu

---

## Credential Storage (Keyring)
## Credential Storage

Credentials stored in system keyring (service: `"nylas"`) via `nylas auth config`.
Secrets are stored in the system keyring (service: `"nylas"`) via `nylas auth config`.
Grant metadata and the local default grant are stored outside the keyring in the grant cache.

**Key files:** `internal/ports/secrets.go` (constants), `internal/adapters/keyring/` (implementation), `internal/app/auth/config.go` (setup)
**Key files:** `internal/ports/secrets.go` (constants), `internal/adapters/keyring/` (secret storage), `internal/adapters/grantcache/` (grant metadata cache), `internal/app/auth/config.go` (setup)

**Keys:** `client_id`, `api_key`, `client_secret`, `org_id`, `grants`, `default_grant`, `grant_token_<id>`
**Secret keys:** `client_id`, `api_key`, `client_secret`, `org_id`

**Grant cache:** non-secret grant ID, email, provider, and default grant at `filepath.Join(os.UserCacheDir(), "nylas", "grants.json")`

**Disable keyring:** `NYLAS_DISABLE_KEYRING=true` (falls back to encrypted file at `~/.config/nylas/`)

@@ -110,9 +113,11 @@ Credentials stored in system keyring (service: `"nylas"`) via `nylas auth config
- `internal/ports/output.go` - OutputWriter interface for pluggable formatting
- `internal/adapters/output/` - Table, JSON, YAML, Quiet output adapters
- `internal/httputil/` - HTTP response helpers (WriteJSON, LimitedBody, DecodeJSON)
- `internal/adapters/grantcache/` - File-backed local grant metadata/default cache
- `internal/adapters/gpg/` - GPG/PGP email signing service (2026)
- `internal/adapters/mime/` - RFC 3156 PGP/MIME message builder (2026)
- `internal/chat/` - AI chat interface with local agent support (2026)
- `internal/webguard/` - Shared localhost web UI request guards
- `internal/cli/setup/` - First-time setup wizard (`nylas init`)

**Full inventory:** `docs/ARCHITECTURE.md`
2 changes: 1 addition & 1 deletion README.md
@@ -149,7 +149,7 @@ Step-by-step tutorials on [cli.nylas.com](https://cli.nylas.com/guides):

## Configuration

Credentials are stored in your system keyring (macOS Keychain, Linux Secret Service, Windows Credential Manager). Nothing is written to plain-text files.
Credentials are stored in your system keyring (macOS Keychain, Linux Secret Service, Windows Credential Manager). Non-secret grant metadata, such as account email/provider and the local default grant, is cached separately for fast local lookup.

```bash
nylas auth status # Check what's configured
7 changes: 5 additions & 2 deletions docs/ARCHITECTURE.md
@@ -18,6 +18,7 @@ internal/
ai/ # AI providers (Claude, OpenAI, Groq, Ollama)
analytics/ # Focus optimizer, meeting scorer
keyring/ # Secret storage
grantcache/ # Non-secret local grant metadata/default cache
config/ # Configuration validation
mcp/ # MCP proxy server
slack/ # Slack API client
@@ -26,6 +27,7 @@ internal/
browser/ # Browser automation
tunnel/ # Cloudflare tunnel
webhookserver/ # Webhook server
webguard/ # Shared localhost web UI request guards
cli/ # CLI commands
common/ # Shared helpers (client, context, errors, flags, format, html, timeutil)
admin/ # API key management
@@ -182,14 +184,15 @@ url := qb.BuildURL(baseURL)
- `utilities.go` - Utilities interface
- `webhook_server.go` - Webhook server interface

3. **Adapters** (`internal/adapters/`) - 12 adapter directories
3. **Adapters** (`internal/adapters/`) - 13 adapter directories

| Adapter | Files | Purpose |
|---------|-------|---------|
| `nylas/` | 94 | Nylas API client (messages, calendars, contacts, events) |
| `ai/` | 24 | AI clients (Claude, OpenAI, Groq, Ollama), email analyzer |
| `analytics/` | 14 | Focus optimizer, conflict resolver, meeting scorer |
| `keyring/` | 6 | Credential storage (system keyring, file-based) |
| `keyring/` | 8 | Secret storage (system keyring, encrypted file fallback) |
| `grantcache/` | 2 | Non-secret local grant metadata/default cache |
| `mcp/` | 8 | MCP proxy server for AI assistants |
| `slack/` | 21 | Slack API client (channels, messages, users) |
| `config/` | 5 | Configuration validation |
5 changes: 3 additions & 2 deletions docs/COMMANDS.md
@@ -439,8 +439,9 @@ nylas webhook pubsub delete <channel-id> --yes
```bash
nylas webhook test send <webhook-url> # Send test payload
nylas webhook test payload [trigger-type] # Generate test payload
nylas webhook server # Start local webhook server
nylas webhook server --port 8080 --tunnel cloudflared # With public tunnel
nylas webhook server # Interactive preflight (offers cloudflared tunnel)
nylas webhook server --no-tunnel # Loopback-only (skip preflight)
nylas webhook server --port 8080 --tunnel cloudflared --secret xxx # Public tunnel + HMAC verify
```

**Details:** `docs/commands/webhooks.md`
31 changes: 23 additions & 8 deletions docs/commands/webhooks.md
@@ -22,20 +22,35 @@ nylas webhook pubsub delete <channel-id> --yes
Start a local webhook server for development and testing:

```bash
# Start server (local only)
# Interactive: detects cloudflared and prompts to enable a public tunnel.
# (Nylas can't deliver webhooks to localhost, so a tunnel is needed to
# receive real events.)
nylas webhook server

# Start with public tunnel (cloudflared required)
nylas webhook server --tunnel cloudflared
# Skip the prompt and run loopback-only (useful for local curl tests
# or non-interactive environments)
nylas webhook server --no-tunnel

# Custom port
nylas webhook server --port 8080 --tunnel cloudflared
# Start with public tunnel (cloudflared required) + signature verification
nylas webhook server --tunnel cloudflared --secret your-webhook-secret

# Custom port with a tunnel
nylas webhook server --port 8080 --tunnel cloudflared --secret your-webhook-secret
```

**Install cloudflared:**
When `--tunnel` is set, `--secret` is required (or pass `--allow-unsigned`
to opt out explicitly). The interactive preflight will prompt for a
secret inline when you accept the tunnel; leaving it empty opts into
unsigned mode.

**Cloudflared install:**

On macOS, the preflight will offer to run `brew install cloudflared` for
you when cloudflared isn't on `PATH`. On other platforms, see:

```bash
brew install cloudflared # macOS
# Or download from: github.com/cloudflare/cloudflared
brew install cloudflared # macOS (manual)
# Linux/Windows: https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup/installation/
```

### TUI Webhook Server
12 changes: 7 additions & 5 deletions docs/security/overview.md
@@ -14,25 +14,26 @@ nylas auth config # Configure API credentials (stored securely)

### Keyring Storage

Credentials are stored in the system keyring under service name `"nylas"`:
Secrets are stored in the system keyring under service name `"nylas"`:

| Key | Constant | Description |
|-----|----------|-------------|
| `client_id` | `ports.KeyClientID` | Nylas Application/Client ID |
| `api_key` | `ports.KeyAPIKey` | Nylas API key (Bearer auth) |
| `client_secret` | `ports.KeyClientSecret` | Provider OAuth secret (Google/Microsoft) |
| `org_id` | `ports.KeyOrgID` | Nylas Organization ID |
| `grants` | `grantsKey` | JSON array of grant info (ID, email, provider) |
| `default_grant` | `defaultGrantKey` | Default grant ID for CLI operations |
| `grant_token_<id>` | `ports.GrantTokenKey()` | Per-grant access tokens |

Grant IDs, emails, providers, and the local default grant are non-secret metadata.
They are stored in the grant cache at `filepath.Join(os.UserCacheDir(), "nylas", "grants.json")`.
The keyring remains secrets-only.

### Implementation Files

| File | Purpose |
|------|---------|
| `internal/ports/secrets.go` | Key constants (`KeyClientID`, `KeyAPIKey`, etc.) |
| `internal/adapters/keyring/keyring.go` | System keyring implementation |
| `internal/adapters/keyring/grants.go` | Grant storage (`grants`, `default_grant`) |
| `internal/adapters/grantcache/cache.go` | File-backed non-secret grant metadata/default cache |
| `internal/app/auth/config.go` | `SetupConfig()` saves credentials to keyring |

### Platform Backends
@@ -53,6 +54,7 @@ NYLAS_DISABLE_KEYRING=true # Force encrypted file store (useful for testing/CI
Non-sensitive settings stored in `~/.config/nylas/config.yaml`:
- Region (us/eu)
- Callback port
- Local default grant mirror

---

89 changes: 89 additions & 0 deletions internal/adapters/ai/base_client.go
@@ -173,3 +173,92 @@ func FallbackStreamChat(ctx context.Context, req *domain.ChatRequest, chatFunc f
}
return callback(resp.Content)
}

// openAICompatibleResponse is the shared shape of /v1/chat/completions
// responses across providers that speak the OpenAI API surface (OpenAI,
// Groq, Together, Anyscale, etc.). Kept private to this package.
type openAICompatibleResponse struct {
Choices []struct {
Message struct {
Role string `json:"role"`
Content string `json:"content"`
ToolCalls []struct {
ID string `json:"id"`
Type string `json:"type"`
Function struct {
Name string `json:"name"`
Arguments string `json:"arguments"`
} `json:"function"`
} `json:"tool_calls,omitempty"`
} `json:"message"`
} `json:"choices"`
Model string `json:"model"`
Usage struct {
PromptTokens int `json:"prompt_tokens"`
CompletionTokens int `json:"completion_tokens"`
TotalTokens int `json:"total_tokens"`
} `json:"usage"`
}

// OpenAICompatibleChat performs a chat request against any provider that
// implements the OpenAI /v1/chat/completions surface. provider is used to
// label the response and shape error messages.
//
// Callers (OpenAIClient, GroqClient, …) should validate IsConfigured before
// calling this method.
func (b *BaseClient) OpenAICompatibleChat(ctx context.Context, provider string, req *domain.ChatRequest, tools []domain.Tool) (*domain.ChatResponse, error) {
body := map[string]any{
"model": b.GetModel(req.Model),
"messages": ConvertMessagesToMaps(req.Messages),
}
if req.MaxTokens > 0 {
body["max_tokens"] = req.MaxTokens
}
if req.Temperature > 0 {
body["temperature"] = req.Temperature
}
if len(tools) > 0 {
body["tools"] = ConvertToolsOpenAIFormat(tools)
body["tool_choice"] = "auto"
}

headers := map[string]string{
"Authorization": "Bearer " + b.apiKey,
}

var raw openAICompatibleResponse
if err := b.DoJSONRequestAndDecode(ctx, "POST", "/chat/completions", body, headers, &raw); err != nil {
return nil, err
}
if len(raw.Choices) == 0 {
return nil, fmt.Errorf("no response from %s", provider)
}

resp := &domain.ChatResponse{
Content: raw.Choices[0].Message.Content,
Model: raw.Model,
Provider: provider,
Usage: domain.TokenUsage{
PromptTokens: raw.Usage.PromptTokens,
CompletionTokens: raw.Usage.CompletionTokens,
TotalTokens: raw.Usage.TotalTokens,
},
}
for _, tc := range raw.Choices[0].Message.ToolCalls {
var args map[string]any
if err := json.Unmarshal([]byte(tc.Function.Arguments), &args); err != nil {
// The model emitted a tool-call with malformed JSON arguments.
// Silently dropping it would leave the caller wondering why
// `len(ToolCalls)` is short — return the parse error so the
// scheduler can decide whether to retry or surface it.
return nil, fmt.Errorf("model tool-call %q has invalid JSON arguments: %w",
tc.Function.Name, err)
}
resp.ToolCalls = append(resp.ToolCalls, domain.ToolCall{
ID: tc.ID,
Function: tc.Function.Name,
Arguments: args,
})
}
return resp, nil
}
87 changes: 4 additions & 83 deletions internal/adapters/ai/groq_client.go
@@ -2,7 +2,6 @@ package ai

import (
"context"
"encoding/json"
"fmt"

"github.com/nylas/cli/internal/domain"
@@ -48,92 +47,14 @@ func (c *GroqClient) Chat(ctx context.Context, req *domain.ChatRequest) (*domain
return c.ChatWithTools(ctx, req, nil)
}

// ChatWithTools sends a chat request with function calling.
// ChatWithTools sends a chat request with function calling. Groq exposes the
// OpenAI /v1/chat/completions surface, so this delegates to the shared
// pipeline.
func (c *GroqClient) ChatWithTools(ctx context.Context, req *domain.ChatRequest, tools []domain.Tool) (*domain.ChatResponse, error) {
if !c.IsConfigured() {
return nil, fmt.Errorf("groq API key not configured")
}

// Prepare Groq request (OpenAI-compatible format)
groqReq := map[string]any{
"model": c.GetModel(req.Model),
"messages": ConvertMessagesToMaps(req.Messages),
}

if req.MaxTokens > 0 {
groqReq["max_tokens"] = req.MaxTokens
}

if req.Temperature > 0 {
groqReq["temperature"] = req.Temperature
}

// Tools support
if len(tools) > 0 {
groqReq["tools"] = ConvertToolsOpenAIFormat(tools)
groqReq["tool_choice"] = "auto"
}

// Send request using base client
var groqResp struct {
Choices []struct {
Message struct {
Role string `json:"role"`
Content string `json:"content"`
ToolCalls []struct {
ID string `json:"id"`
Type string `json:"type"`
Function struct {
Name string `json:"name"`
Arguments string `json:"arguments"`
} `json:"function"`
} `json:"tool_calls,omitempty"`
} `json:"message"`
} `json:"choices"`
Model string `json:"model"`
Usage struct {
PromptTokens int `json:"prompt_tokens"`
CompletionTokens int `json:"completion_tokens"`
TotalTokens int `json:"total_tokens"`
} `json:"usage"`
}

headers := map[string]string{
"Authorization": "Bearer " + c.apiKey,
}

if err := c.DoJSONRequestAndDecode(ctx, "POST", "/chat/completions", groqReq, headers, &groqResp); err != nil {
return nil, err
}

if len(groqResp.Choices) == 0 {
return nil, fmt.Errorf("no response from Groq")
}

response := &domain.ChatResponse{
Content: groqResp.Choices[0].Message.Content,
Model: groqResp.Model,
Provider: "groq",
Usage: domain.TokenUsage{
PromptTokens: groqResp.Usage.PromptTokens,
CompletionTokens: groqResp.Usage.CompletionTokens,
TotalTokens: groqResp.Usage.TotalTokens,
},
}

// Convert tool calls if present
for _, tc := range groqResp.Choices[0].Message.ToolCalls {
var args map[string]any
if err := json.Unmarshal([]byte(tc.Function.Arguments), &args); err == nil {
response.ToolCalls = append(response.ToolCalls, domain.ToolCall{
ID: tc.ID,
Function: tc.Function.Name,
Arguments: args,
})
}
}

return response, nil
return c.OpenAICompatibleChat(ctx, "groq", req, tools)
}

// StreamChat streams chat responses.