[ACTION] Perplexity - add Props #18642

@timfong888

Description

Is there a specific app this action is for?
Perplexity: https://github.com/PipedreamHQ/pipedream/tree/master/components/perplexity

Please provide a link to the relevant API docs for the specific service / operation.
https://docs.perplexity.ai/guides/chat-completions-guide?utm_source=chatgpt.com
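
The endpoint is OpenAI-compatible, so the props below map directly onto the request body. As a rough illustrative sketch (API key handling and values are placeholders, not part of the proposed action):

```javascript
// Minimal illustrative call to the Perplexity Chat Completions endpoint.
// The API key source is a placeholder; in the Pipedream component it would
// come from the connected account rather than an environment variable.
const res = await fetch("https://api.perplexity.ai/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${process.env.PERPLEXITY_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "sonar",
    messages: [
      { role: "user", content: "Summarize the latest Perplexity API changes." },
    ],
  }),
});
const completion = await res.json();
```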

- Prop: messages | type: array of { role, content }
  API: Chat Completions request schema (OpenAI-compatible)
  Comment: Lets you pass a full multi-turn conversation. Use this instead of a single role + content.
- Prop: system | type: string
  API: OpenAI-compatible system message pattern (stacked before the user message)
  Comment: Convenience prop. If set, it is injected as the first messages item: { role: "system", content: system }.
- Prop: model | type: string
  API: Chat Completions + model list (e.g., sonar, sonar-pro, sonar-reasoning(-pro))
  Comment: Required; selects the Perplexity model.
- Prop: temperature | type: number
  API: Chat Completions params (OpenAI-compatible)
  Comment: Higher values give more diverse generations.
- Prop: top_p | type: number
  API: Chat Completions params (OpenAI-compatible)
  Comment: Nucleus sampling; use either temperature or top_p, not both, to avoid odd sampling combinations.
- Prop: max_tokens | type: integer
  API: Chat Completions + practical note: Sonar Pro max output ≈ 8k tokens
  Comment: Caps output length.
- Prop: stream | type: boolean
  API: Streaming guide (the server supports token streaming)
  Comment: The server can stream; the Pipedream action can accept this flag even if it buffers and returns once done.
- Prop: search_domain_filter | type: array of strings or null
  API: Mentioned in Perplexity collections/community examples
  Comment: Restricts web search to certain domains (e.g., ["sec.gov", "ft.com"]).
- Prop: search_recency_filter | type: string enum (common values: "day" | "week" | "month" | "year")
  API: Example usage in Perplexity collections
  Comment: Prefers newer results; useful for newsy queries.
- Prop: top_k | type: integer
  API: Example usage in Perplexity collections
  Comment: Limits the number of retrieved docs/evidence considered.
- Prop: return_images | type: boolean
  API: Example usage in Perplexity collections
  Comment: Includes images in the response when supported by the model/content.
- Prop: return_related_questions | type: boolean
  API: Example usage in Perplexity collections
  Comment: Asks the API to add "related questions" to the response payload.
- Prop: role | type: string (fallback)
  API: Chat Completions schema (OpenAI-compatible)
  Comment: Back-compat with the current action if messages isn't provided.
- Prop: content | type: string (fallback)
  API: Chat Completions schema (OpenAI-compatible)
  Comment: Back-compat: single-turn prompt if messages isn't provided.
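
Putting these together, a fully populated request body would look roughly like this (values are made up for illustration; the system prompt is injected as the first message):

```javascript
// Illustrative request body assembled from the props above; values are examples only.
const data = {
  model: "sonar-pro",
  messages: [
    { role: "system", content: "Answer concisely and cite sources." }, // from the `system` prop
    { role: "user", content: "What changed in the latest 10-K?" },
    { role: "assistant", content: "Which company do you mean?" },
    { role: "user", content: "Apple." },
  ],
  temperature: 0.2,                            // or top_p, but not both
  max_tokens: 1024,
  stream: false,
  search_domain_filter: ["sec.gov", "ft.com"],
  search_recency_filter: "month",
  top_k: 5,
  return_images: false,
  return_related_questions: true,
};
```

Proposed implementation:
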
```javascript
import app from "../../perplexity.app.mjs";

export default {
  key: "perplexity-chat-completions-advanced",
  name: "Chat Completions (Advanced)",
  description: "Generates a model's response for the given chat conversation with multi-message support and Perplexity search controls. Docs: https://docs.perplexity.ai/api-reference/chat-completions-post",
  version: "1.0.0",
  annotations: {
    destructiveHint: false,
    openWorldHint: true,
    readOnlyHint: false,
  },
  type: "action",
  props: {
    app,

    // --- Core ---
    model: { propDefinition: [app, "model"] },

    // Either provide messages/system OR fall back to role+content
    messages: {
      type: "string[]",
      label: "Messages (JSON strings or UI collection)",
      description: "Array of message objects: [{ role: 'system'|'user'|'assistant', content: '...' }, ...]. If provided, 'role' and 'content' are ignored.",
      optional: true,
    },
    system: {
      type: "string",
      label: "System",
      description: "Optional system prompt injected as the first message.",
      optional: true,
    },
    role: { propDefinition: [app, "role"], optional: true },
    content: { propDefinition: [app, "content"], optional: true },

    // --- Generation controls (OpenAI-compatible) ---
    temperature: {
      type: "float",
      label: "Temperature",
      description: "Sampling temperature. Higher values = more diverse output.",
      optional: true,
    },
    top_p: {
      type: "float",
      label: "Top-p",
      description: "Nucleus sampling probability mass. Use either temperature or top_p.",
      optional: true,
    },
    max_tokens: {
      type: "integer",
      label: "Max Output Tokens",
      description: "Cap response length (Sonar Pro practical max output ~8k).",
      optional: true,
    },
    stream: {
      type: "boolean",
      label: "Stream",
      description: "Enable server-side streaming. This action will still buffer and return the final text.",
      optional: true,
      default: false,
    },

    // --- Perplexity search & response controls ---
    search_domain_filter: {
      type: "string[]",
      label: "Search Domain Filter",
      description: "Limit web search to these domains (e.g., ['sec.gov','ft.com']).",
      optional: true,
    },
    search_recency_filter: {
      type: "string",
      label: "Search Recency Filter",
      description: "Prefer recent sources (e.g., 'day', 'week', 'month', 'year').",
      optional: true,
    },
    top_k: {
      type: "integer",
      label: "Top K",
      description: "Restrict number of retrieved items considered.",
      optional: true,
    },
    return_images: {
      type: "boolean",
      label: "Return Images",
      description: "Ask API to include images when applicable.",
      optional: true,
      default: false,
    },
    return_related_questions: {
      type: "boolean",
      label: "Return Related Questions",
      description: "Ask API to include related questions in response.",
      optional: true,
      default: false,
    },
  },

  async run({ $ }) {
    // Build messages array
    let messages = [];
    const provided = this.messages && this.messages.length > 0;

    if (provided) {
      // Accept messages as an array of objects or JSON strings
      messages = this.messages.map((m) => {
        if (typeof m !== "string") {
          return m;
        }
        try {
          return JSON.parse(m);
        } catch (err) {
          throw new Error(`Could not parse message as JSON: ${m}`);
        }
      });
    } else {
      if (!this.content) {
        throw new Error("Either provide `messages` or `content`.");
      }
      // Back-compat single-turn prompt
      messages.push({
        role: this.role || "user",
        content: this.content,
      });
    }

    // Inject the system prompt as the first message in both paths,
    // matching the `system` prop description
    if (this.system) {
      messages.unshift({ role: "system", content: this.system });
    }

    const data = {
      model: this.model,
      messages,
      // Generation knobs
      ...(this.temperature != null && { temperature: this.temperature }),
      ...(this.top_p != null && { top_p: this.top_p }),
      ...(this.max_tokens != null && { max_tokens: this.max_tokens }),
      ...(this.stream != null && { stream: this.stream }),
      // Perplexity search controls
      ...(this.search_domain_filter && {
        search_domain_filter: this.search_domain_filter,
      }),
      ...(this.search_recency_filter && {
        search_recency_filter: this.search_recency_filter,
      }),
      ...(this.top_k != null && { top_k: this.top_k }),
      ...(this.return_images != null && { return_images: this.return_images }),
      ...(this.return_related_questions != null && {
        return_related_questions: this.return_related_questions,
      }),
    };

    const response = await this.app.chatCompletions({ $, data });

    $.export(
      "$summary",
      `Perplexity ${this.model} responded${
        this.stream ? " (streaming buffered)" : ""
      }`
    );

    return response;
  },
};
```
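
Downstream steps would read the returned completion like any OpenAI-style response. A sketch of a follow-up Pipedream code step (the step name is a placeholder, and the `citations` field is Perplexity-specific and may not be present for every model):

```javascript
// Sketch of consuming the action's return value in a later workflow step.
// "chat_completions_advanced" is a placeholder step name.
export default defineComponent({
  async run({ steps }) {
    const completion = steps.chat_completions_advanced.$return_value;
    const answer = completion?.choices?.[0]?.message?.content ?? "";
    const citations = completion?.citations ?? []; // assumed: source URLs on search-backed models
    return { answer, citations };
  },
});
```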

Labels

action (New Action Request), enhancement (New feature or request), good first issue (Good for newcomers), help wanted (Extra attention is needed), triaged (For maintainers: This issue has been triaged by a Pipedream employee)

Status

Done
