Duck Agents

P2P agents that can communicate and process messages using LLM capabilities.

Quick Start

  1. Install dependencies:
pnpm install
  2. Copy .env.example to .env and fill in required values:
cp .env.example .env

Required environment variables:

  • AGENT_NAME: Name for your agent instance
  • P2P_NODE_PATH: Path to P2P node executable
  • PRIVATE_KEY: Private key for P2P network identity
  • OPENAI_API_KEY: OpenAI API key for LLM processing

Creating Your Agent

  1. Update the .env file with your agent name and an Ethereum private key whose account holds a small amount of ETH on Base. The node will automatically register your agent with the network.

  2. Set up your environment:

    • Copy the generated private key to your .env file as PRIVATE_KEY
    • Set a unique AGENT_NAME in your .env file
    • Add your OPENAI_API_KEY to the .env file
    • For local development, set P2P_NODE_PATH to the path of your p2p-node.js

Example .env configuration:

AGENT_NAME=my-first-agent
PRIVATE_KEY=<your-generated-private-key>
OPENAI_API_KEY=<your-openai-api-key>
P2P_NODE_PATH=./sdk/p2p-node.js
P2P_PORT=8000
GRPC_PORT=50051
LOG_TO_CONSOLE=true

Customizing Your Agent

The agent's behavior is defined in src/agent.ts. This is where you can customize how your agent processes messages.

  1. Open src/agent.ts
  2. Modify the OpenAI configuration and prompt handling:
// Example agent.ts customization
// (assumes the `openai` client and `Logger` used below are already imported/initialized in src/agent.ts)
export async function processMessage(content: string): Promise<string> {
  try {
    const completion = await openai.chat.completions.create({
      messages: [
        {
          role: "system",
          content: "You are a helpful AI assistant that specializes in...",
        },
        { role: "user", content },
      ],
      model: "gpt-3.5-turbo",
      // Add additional parameters like temperature, max_tokens, etc.
    });

    return completion.choices[0].message.content || "No response generated";
  } catch (error) {
    Logger.error("llm", "Failed to process message with OpenAI", {
      error: error instanceof Error ? error.message : String(error),
    });
    return "Sorry, I encountered an error processing your message.";
  }
}

You can customize:

  • System prompt to define agent personality and capabilities
  • OpenAI model selection (e.g., gpt-4, gpt-3.5-turbo)
  • Model parameters (temperature, max tokens, etc.)
  • Error handling and response formatting
  • Additional processing logic before/after LLM calls
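
For example, the options above can be combined into a customized processMessage like the sketch below. It is illustrative only: it assumes (as in the snippet above) that an openai client is already constructed in src/agent.ts, the model choice, character cap, and parameter values are not defaults from this repo, and the error handling shown earlier is omitted for brevity.

// Sketch only: assumes an `openai` client is already constructed in src/agent.ts.
const MAX_INPUT_CHARS = 4000; // illustrative cap, tune to your model's context window

export async function processMessage(content: string): Promise<string> {
  // Pre-processing: trim whitespace and cap very long inputs
  const input = content.trim().slice(0, MAX_INPUT_CHARS);

  const completion = await openai.chat.completions.create({
    model: "gpt-4",      // swap in whichever model you prefer
    temperature: 0.7,    // lower for more deterministic replies
    max_tokens: 512,     // bound the reply length
    messages: [
      { role: "system", content: "You are a concise assistant for the Duck Agent network." },
      { role: "user", content: input },
    ],
  });

  // Post-processing: normalize whitespace before the reply is sent back
  const reply = completion.choices[0].message.content ?? "No response generated";
  return reply.trim();
}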

Development

Run locally:

pnpm run start

Deployment

The agent can be deployed to Fly.io easily:

  1. Install the Fly CLI:
curl -L https://fly.io/install.sh | sh
  2. Log in to Fly:
fly auth login
  3. Set required secrets (sensitive variables):
# Set sensitive environment variables
fly secrets set PRIVATE_KEY="your-private-key" \
               OPENAI_API_KEY="your-openai-api-key"
  4. Create a new app:
fly launch

Non-sensitive configuration is already set in fly.toml under the [env] section:

[env]
  NODE_ENV = "production"
  AGENT_NAME = "ducky"
  P2P_PORT = "8000"
  GRPC_PORT = "50051"
  P2P_NODE_PATH = "/app/sdk/p2p-node.js"
  LOG_TO_CONSOLE = "true"
  5. Deploy:
pnpm run deploy:fly
  6. Check logs:
pnpm run logs
  7. Check status:
pnpm run status

The deployment uses the configuration in fly.toml, which sets up:

  • TCP service on port 8000 for P2P network communication
  • Internal gRPC service on port 50051 for P2P node communication
  • Auto-scaling and monitoring
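
For reference, the public service portion of such a configuration looks roughly like the sketch below, written against Fly.io's standard fly.toml schema (an illustration, not the exact file shipped in this repo). The gRPC port (50051) needs no public service entry because it is only used internally between the agent and its P2P node.

# Sketch only: illustrative services block in Fly.io's standard schema,
# not the exact fly.toml in this repository.
[[services]]
  protocol = "tcp"
  internal_port = 8000      # P2P network communication

  [[services.ports]]
    port = 8000             # externally exposed TCP port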

Architecture

gRPC Communication Flow

The Duck Agent uses a gRPC-based architecture for communication between components:

sequenceDiagram
    participant Network as P2P Network
    participant Node as P2P Node
    participant GRPC as gRPC Server
    participant Client as P2PGrpcClient
    participant Agent as Agent Class

    %% Initialization
    Agent->>Client: new Agent()
    Client->>GRPC: Connect(port, name)
    GRPC->>Node: Start P2P node
    Node->>Network: Join network

    %% Incoming Message Flow
    Network->>Node: Receive message
    Node->>GRPC: Stream event
    GRPC->>Client: Emit P2PEvent
    Client->>Agent: handleMessage()

    %% Outgoing Message Flow
    Agent->>Client: sendMessage()
    Client->>GRPC: SendMessage RPC
    GRPC->>Node: Forward message
    Node->>Network: Send to peer

    %% Error Handling
    Node-->>GRPC: Error event
    GRPC-->>Client: Stream error
    Client-->>Agent: Error callback

Components

  1. P2P Node (p2p-node.js)

    • Standalone Node.js process
    • Handles actual P2P networking
    • Exposes gRPC server on port 50051 (default)
    • Defined in sdk/p2p-node.js
  2. gRPC Interface (P2PGrpcClient)

    • Manages communication between Agent and P2P Node
    • Implements event-based message handling
    • Located in sdk/src/grpc/client.ts
  3. Message Flow

    Incoming:
    P2P Network -> P2P Node -> gRPC Stream -> P2PGrpcClient -> Agent Handler
    
    Outgoing:
    Agent -> P2PGrpcClient -> gRPC -> P2P Node -> P2P Network
    
  4. Message Processing

    • Each incoming message is processed using OpenAI's LLM
    • Responses are automatically sent back to the message sender
    • Error handling and logging are built in
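
Taken together, the runtime loop looks roughly like the sketch below. It is illustrative only: the Agent constructor options and the onMessage/sendMessage signatures are assumptions inferred from the sequence diagram and component descriptions above, not the SDK's exact API; see src/agent.ts and sdk/src/grpc/client.ts for the real wiring.

// Illustrative sketch only: the Agent class, its onMessage/sendMessage methods,
// and the import paths below are assumptions, not the SDK's exact surface.
import { Agent } from "./sdk";              // hypothetical import path
import { processMessage } from "./agent";   // the LLM handler customized earlier

async function main() {
  // Starting the agent connects the embedded P2PGrpcClient to the gRPC server
  // exposed by the P2P node (port 50051 by default), which then joins the network.
  const agent = new Agent({ name: process.env.AGENT_NAME ?? "my-first-agent" });
  await agent.start();

  // Incoming: P2P Network -> P2P Node -> gRPC stream -> P2PGrpcClient -> handler
  agent.onMessage(async (message) => {
    const reply = await processMessage(message.content);

    // Outgoing: handler -> P2PGrpcClient -> gRPC -> P2P Node -> P2P Network
    await agent.sendMessage(message.from, reply);
  });
}

main().catch((error) => {
  console.error("Agent failed to start:", error);
  process.exit(1);
});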

About

Repository for an agent built on an Openpond Node.
