Team chat, reimagined. Built with Next.js 15, Convex, and Clerk.
- Real-time messaging with channels and direct messages
- File attachments and image uploads
- Reactions, replies, and message pinning
- Workspace management with role-based access
- Typing indicators and read receipts
- Node.js 18+ or Bun
- A Clerk account for authentication
- A Convex account for the backend
- Clone the repo:

  ```shell
  git clone https://github.com/your-username/portal.git
  cd portal
  ```
- Install dependencies:

  ```shell
  bun install
  ```
- Copy the environment file and configure it:

  ```shell
  cp .env.example .env.local
  ```
  Optional environment variables:

  - `NEXT_PUBLIC_DATABUDDY_CLIENT_ID` - Your Databuddy client ID (for analytics)
  - `NEXT_PUBLIC_POSTHOG_KEY` - Your PostHog key (for event tracking)
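  For example, the optional keys above would look like this in `.env.local` (the values here are placeholders, not real credentials):

  ```shell
  # Optional analytics configuration (placeholder values)
  NEXT_PUBLIC_DATABUDDY_CLIENT_ID=your-databuddy-client-id
  NEXT_PUBLIC_POSTHOG_KEY=phc_your-posthog-key
  ```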
- Start Convex (in a separate terminal):

  ```shell
  bun run dev:convex
  ```
- Start the development server:

  ```shell
  bun run dev
  ```
- Framework: Next.js 15 (App Router)
- Backend: Convex
- Auth: Clerk
- Styling: Tailwind CSS
- Analytics: PostHog (optional), Databuddy (optional)
Portal includes built-in analytics proxy endpoints to prevent ad-blockers from interfering with the optional analytics services (PostHog and Databuddy). These proxies are protected by the proxy handler (`proxy.ts`), which implements:
- Default limit: 100 requests per minute per IP address
- No-IP fallback limit: 10 requests per minute (development only)
- Window: Rolling 1-minute window
- Headers: Standard rate limit headers (`X-RateLimit-*`) included in responses
- Response: `429 Too Many Requests` when the quota is exceeded, with a `Retry-After` header
- Storage:
  - Development: In-memory store (resets on server restart)
  - Production: Redis-backed counters recommended (e.g., Upstash, Vercel KV)
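The in-memory variant can be sketched as a rolling window of request timestamps per client. This is an illustrative sketch only; the constant names mirror the limits above, but the actual `proxy.ts` implementation may differ:

```typescript
// Illustrative in-memory rolling-window limiter (development only:
// state is per-process and resets on restart).
const RATE_LIMIT_WINDOW_MS = 60_000; // rolling 1-minute window
const RATE_LIMIT_MAX_REQUESTS = 100; // default per-IP limit

// Timestamps of recent requests, keyed by client IP.
const hits = new Map<string, number[]>();

function checkRateLimit(key: string) {
  const now = Date.now();
  const windowStart = now - RATE_LIMIT_WINDOW_MS;
  // Drop timestamps that have aged out of the window.
  const recent = (hits.get(key) ?? []).filter((t) => t > windowStart);
  const allowed = recent.length < RATE_LIMIT_MAX_REQUESTS;
  if (allowed) recent.push(now);
  hits.set(key, recent);
  return {
    allowed,
    remaining: Math.max(0, RATE_LIMIT_MAX_REQUESTS - recent.length),
    // When the oldest hit expires, one slot frees up again.
    resetTime: recent.length > 0 ? recent[0] + RATE_LIMIT_WINDOW_MS : now,
  };
}
```

Tracking individual timestamps (rather than a single counter) is what makes the window rolling: the limit applies to any 60-second span, not to fixed clock-aligned minutes.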
Missing IP Headers:
- Production: Requests without valid IP headers (`x-forwarded-for` or `x-real-ip`) are rejected with `400 Bad Request` to prevent shared-bucket exhaustion
- Development: A stricter shared rate-limit bucket (10 req/min) is used to allow local testing while preventing abuse
- This prevents malicious clients from stripping IP headers to exhaust a shared "unknown" quota
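The missing-IP policy above can be sketched as follows. The header names are the standard proxy headers; the `rateLimitKey` helper and the shared dev-bucket name are illustrative, and the real `proxy.ts` logic may differ:

```typescript
// Sketch of the missing-IP policy described above.
type HeaderMap = Record<string, string | undefined>;

// Prefer x-forwarded-for (the first hop is the original client),
// falling back to x-real-ip.
function resolveClientIp(headers: HeaderMap): string | null {
  const forwarded = headers["x-forwarded-for"];
  if (forwarded) return forwarded.split(",")[0].trim();
  return headers["x-real-ip"] ?? null;
}

// Returns the rate-limit bucket key, or null when the request must be
// rejected with 400 Bad Request (production requests with no client IP).
function rateLimitKey(headers: HeaderMap, isProduction: boolean): string | null {
  const ip = resolveClientIp(headers);
  if (ip) return ip;
  return isProduction ? null : "dev-shared-no-ip"; // stricter 10 req/min bucket in dev
}
```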
The middleware implements an allowlist for known analytics endpoints:
PostHog endpoints (`/ingest/*`):

- `/ingest/static/*` - Static assets
- `/ingest/decide`, `/ingest/e`, `/ingest/batch`, `/ingest/capture`, `/ingest/engage`, `/ingest/track` - Event endpoints
- `/ingest/i/v0/e` - V0 event endpoint
Databuddy endpoints (`/db-ingest/*`):

- `/db-ingest/api/*` - API endpoints
- `/db-ingest/*.js` - JavaScript assets
- `/db-ingest/*.json` - JSON config files
Requests to non-allowlisted paths return `403 Forbidden`.
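As a sketch, the allowlist can be expressed as a set of path patterns. The patterns below are reconstructed from the lists above for illustration; the real `ALLOWED_INGEST_PATHS` in `proxy.ts` may be broader or stricter:

```typescript
// Illustrative reconstruction of the analytics-path allowlist.
const ALLOWED_INGEST_PATHS: RegExp[] = [
  /^\/ingest\/static\/.*/, // PostHog static assets
  /^\/ingest\/(decide|e|batch|capture|engage|track)$/, // PostHog event endpoints
  /^\/ingest\/i\/v0\/e$/, // PostHog V0 event endpoint
  /^\/db-ingest\/api\/.*/, // Databuddy API endpoints
  /^\/db-ingest\/[^/]+\.(js|json)$/, // Databuddy JS assets and JSON config
];

// A request is proxied only if its pathname matches some allowlisted pattern;
// everything else gets a 403.
function isAllowedPath(pathname: string): boolean {
  return ALLOWED_INGEST_PATHS.some((pattern) => pattern.test(pathname));
}
```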
All proxy requests are logged in structured JSON format with:
- Timestamp, IP address (or "unknown" if missing), user agent
- Request method and full URL
- Status (`allowed`, `rate_limited`, `blocked_invalid_path`, `blocked_missing_ip`, `rate_limited_no_ip`, `allowed_no_ip`)
- Additional details (rate limit remaining, blocked path, etc.)
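A log line with these fields might be produced like this. The field names and the `formatProxyLog` helper are illustrative, not necessarily the exact schema used by the proxy:

```typescript
// Sketch of the structured JSON log line described above.
type ProxyLogStatus =
  | "allowed"
  | "rate_limited"
  | "blocked_invalid_path"
  | "blocked_missing_ip"
  | "rate_limited_no_ip"
  | "allowed_no_ip";

interface ProxyLogEntry {
  ip: string | null;
  userAgent: string;
  method: string;
  url: string;
  status: ProxyLogStatus;
  details?: Record<string, unknown>;
}

// One JSON object per request; "unknown" stands in for a missing IP.
function formatProxyLog(entry: ProxyLogEntry): string {
  return JSON.stringify({
    timestamp: new Date().toISOString(),
    ...entry,
    ip: entry.ip ?? "unknown",
  });
}
```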
Operational Trade-offs:

- In-memory rate limiting: Fast and simple for development, but doesn't scale horizontally and loses state on restart. For production, migrate to Redis-backed storage (example below).
- Path allowlist maintenance: Adding new analytics endpoints requires updating the `ALLOWED_INGEST_PATHS` patterns in `proxy.ts`. This prevents arbitrary proxy usage but adds maintenance overhead.
- Performance impact: The proxy handler runs on analytics routes (`/ingest/*` and `/db-ingest/*`). The overhead is minimal (<1ms per request) for allowlist checks and in-memory rate limiting.
- DDoS protection: The rate limit provides basic protection but may need tuning based on legitimate traffic patterns. Monitor `rate_limited` logs to identify if limits are too restrictive.
For production deployments, replace in-memory storage with Redis:
```typescript
// Example using Vercel KV (or Upstash Redis).
// RATE_LIMIT_WINDOW_MS and RATE_LIMIT_MAX_REQUESTS are the same constants
// used by the in-memory limiter (60 000 ms and 100 requests).
import { kv } from "@vercel/kv";

async function checkRateLimit(key: string) {
  const now = Date.now();
  // Fixed-window counter: one Redis key per (client, window) pair.
  const windowKey = `${key}:${Math.floor(now / RATE_LIMIT_WINDOW_MS)}`;
  const count = await kv.incr(windowKey);
  if (count === 1) {
    // First hit in this window: expire the key when the window ends.
    await kv.expire(windowKey, Math.ceil(RATE_LIMIT_WINDOW_MS / 1000));
  }
  return {
    allowed: count <= RATE_LIMIT_MAX_REQUESTS,
    remaining: Math.max(0, RATE_LIMIT_MAX_REQUESTS - count),
    resetTime: (Math.floor(now / RATE_LIMIT_WINDOW_MS) + 1) * RATE_LIMIT_WINDOW_MS,
  };
}
```

Monitoring setup:
- Set up alerts for high `rate_limited`, `blocked_invalid_path`, or `blocked_missing_ip` counts
- Track P99 latency for `/ingest/*` and `/db-ingest/*` routes
- Monitor rate limit effectiveness with analytics on blocked requests
- Watch for `allowed_no_ip` logs in production (they should not occur with a proper deployment)
MIT