A flexible server-side generation service that runs on Deno Deploy. Generate dynamic web pages from markdown content using AI (Google's Gemini 2.5 Flash), with streaming support.
- 🚀 Deno Deploy Ready: Built to run on Deno Deploy
- 🤖 AI-Powered: Uses the Vercel AI SDK with Google's Gemini 2.5 Flash
- 📝 Markdown Support: Describe content in simple markdown
- 🎨 Style Influence: Control output style with brand guidelines and reference images
- 🏷️ SEO-Friendly: Add title and description metadata via YAML front matter
- 🌊 Streaming Responses: Real-time content generation
- 🔍 Context-Aware: LLM has full access to HTTP headers and request variables
- ⚡ Fast & Flexible: Easy to change prompts and source data
- 💾 Smart Caching: Configurable caching to reduce API calls and improve performance
- Deno installed (v1.37+)
- Google AI API Key (get one at ai.google.dev)
- Clone the repository:
git clone https://github.com/PaulKinlan/ssgen.git
cd ssgen
- Create a .env file with your API key:
cp .env.example .env
# Edit .env and add your GOOGLE_GENERATIVE_AI_API_KEY
- Run the server:
deno task dev
The server will start on http://localhost:8000
The server can automatically serve markdown files from the content/ directory:
# Visit /about or /about.html to see content from content/about.md
curl http://localhost:8000/about
# Visit /contact or /contact.html to see content from content/contact.md
curl http://localhost:8000/contact
To add your own pages:
- Create a markdown file in the content/ directory (e.g., content/services.md)
- Access it at /services or /services.html
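The routing behind this is simple: the request path is normalized and looked up as a markdown file. Below is a minimal TypeScript sketch of that mapping; the helper name and fallback behavior are illustrative rather than the exact code in main.ts.

```typescript
// Minimal sketch of mapping a request path to a file in content/.
// loadContentForPath is a hypothetical helper; main.ts may differ.
async function loadContentForPath(pathname: string): Promise<string | null> {
  // "/about" and "/about.html" both resolve to content/about.md
  const slug = pathname.replace(/^\//, "").replace(/\.html$/, "") || "index";
  try {
    return await Deno.readTextFile(`./content/${slug}.md`);
  } catch {
    return null; // no matching file; fall back to the default markdown
  }
}
```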
Make a GET request to generate content from default markdown:
curl http://localhost:8000/
Send markdown content via query parameter:
curl "http://localhost:8000/?content=$(cat examples/sample-content.md | jq -sRr @uri)"Or via POST request:
curl -X POST http://localhost:8000/ \
-H "Content-Type: application/json" \
-d '{
"content": "# Hello World\nThis is my markdown content.",
"prompt": "Convert this to beautiful HTML",
"systemPrompt": "You are a web design expert."
}'
Using the Helper Script:
For easier POST request management, use the included ssgen-post.sh script:
# Use content from a file
./ssgen-post.sh -f examples/sample-content.md
# Provide inline content
./ssgen-post.sh -c "# Hello World\n\nThis is my content."
# Custom prompt and system prompt
./ssgen-post.sh -f examples/blog-post.md \
-p "Create a modern blog layout" \
-s "You are a web design expert"
# Save output to file
./ssgen-post.sh -f examples/sample-content.md -o output.html
# Show all options
./ssgen-post.sh --help
You can customize both the system prompt and user prompt:
curl -X POST http://localhost:8000/ \
-H "Content-Type: application/json" \
-d '{
"content": "# My Blog\n\nWelcome to my blog!",
"systemPrompt": "You are an expert in creating modern, responsive web designs with Tailwind CSS.",
"prompt": "Create a beautiful landing page with a hero section and modern styling."
}'
You can specify which Gemini model to use via query parameter (GET) or request body (POST):
GET Request:
curl "http://localhost:8000/?model=gemini-2.5-pro&content=Hello+World"POST Request:
curl -X POST http://localhost:8000/ \
-H "Content-Type: application/json" \
-d '{
"content": "# My Content",
"model": "gemini-2.5-flash-lite",
"prompt": "Generate a simple HTML page"
}'
Available models include:
- gemini-2.5-flash (default)
- gemini-2.5-flash-lite (lighter, faster)
- gemini-2.5-pro (most powerful)
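Internally, picking the model for a request comes down to falling back to the default when the parameter is absent. The sketch below illustrates that resolution; the allow-list check is an assumption, not necessarily how main.ts validates the value.

```typescript
// Hypothetical per-request model resolution; DEFAULT_MODEL mirrors main.ts.
const DEFAULT_MODEL = "gemini-2.5-flash";
const KNOWN_MODELS = new Set([
  "gemini-2.5-flash",
  "gemini-2.5-flash-lite",
  "gemini-2.5-pro",
]);

function resolveModel(requested?: string | null): string {
  // Unknown or missing values fall back to the default model.
  return requested && KNOWN_MODELS.has(requested) ? requested : DEFAULT_MODEL;
}

// e.g. for a GET request:
// const model = resolveModel(new URL(req.url).searchParams.get("model"));
```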
You can control the visual style of generated HTML by providing brand guidelines and/or reference images in the YAML front matter of your markdown files:
---
style:
  brand: brands/modern-tech.md
  image: images/style-reference.png
---
Or use multiple style configurations:
---
style:
  - brand: brands/company-brand.md
  - image: images/design-inspiration.png
---
Brand Guidelines:
- Create markdown files in the ./brands/ or ./content/ directories
- Include information about colors, typography, layout principles, and design philosophy
- The brand guidelines are added to the system prompt and influence the AI's design decisions
Reference Images:
- Place images in the ./images/, ./content/, or ./assets/ directories
- Supported formats: PNG, JPEG, GIF, WebP, SVG
- Images are sent to the AI (multimodal) as visual style inspiration
- The AI analyzes the images and applies similar design aesthetics to the generated HTML
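To make the multimodal flow concrete, here is a hedged sketch of passing a reference image to the model with the Vercel AI SDK. The image path and prompt text are placeholders and the exact wiring in main.ts may differ; the message shape (a text part plus an image part) is the SDK's standard multimodal format.

```typescript
// Sketch: send the markdown plus a reference image as one multimodal message.
import { streamText } from "npm:ai";
import { google } from "npm:@ai-sdk/google";

const imageBytes = await Deno.readFile("./images/style-reference.png");

const result = streamText({
  model: google("gemini-2.5-flash"),
  system: "You are a web design expert. Match the visual style of the attached image.",
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "# My Page\n\nContent here" },
        { type: "image", image: imageBytes }, // visual style inspiration
      ],
    },
  ],
});
// result.textStream yields the generated HTML as it is produced.
```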
Example brand guidelines file (brands/modern-tech.md):
# Modern Tech Brand Guidelines
## Color Palette
- Primary: Deep Blue (#1a365d)
- Secondary: Electric Cyan (#00d4ff)
- Accent: Vibrant Purple (#9333ea)
## Typography
- Headings: Bold, modern sans-serif
- Body: Clean, readable fonts with good spacing
## Design Principles
- Minimalism with plenty of white space
- Modern design patterns with subtle shadows
- Responsive and accessible
Example usage:
# Serve content with style configuration
curl http://localhost:8000/style-brand-example
# Or via POST with inline content
curl -X POST http://localhost:8000/ \
-H "Content-Type: application/json" \
-d '{
"content": "---\nstyle:\n brand: brands/modern-tech.md\n---\n# My Page\n\nContent here"
}'See the content/style-*-example.md files and brands/ directory for complete examples.
The LLM automatically receives information about the request:
- HTTP method
- Full URL
- User-Agent
- Client IP (from X-Forwarded-For or X-Real-IP headers)
- All HTTP headers
This allows the AI to generate personalized content based on the request context.
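A rough sketch of how that request information could be gathered into text for the prompt is shown below; the exact wording and field order ssgen uses are not guaranteed to match.

```typescript
// Illustrative: summarize the incoming request so the model can personalize output.
function describeRequest(req: Request): string {
  const allHeaders = [...req.headers.entries()]
    .map(([name, value]) => `${name}: ${value}`)
    .join("\n");
  const clientIp = req.headers.get("x-forwarded-for") ??
    req.headers.get("x-real-ip") ?? "unknown";

  return [
    `HTTP method: ${req.method}`,
    `URL: ${req.url}`,
    `User-Agent: ${req.headers.get("user-agent") ?? "unknown"}`,
    `Client IP: ${clientIp}`,
    `Headers:\n${allHeaders}`,
  ].join("\n");
}
```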
You can include YAML front matter at the beginning of your markdown files to add metadata and customize behavior:
---
title: "My Page Title"
description: "A description for SEO and social sharing"
prompt: "custom-prompt.md"
---
# Your Markdown Content
The rest of your content goes here...
Supported Fields:
- title (optional): Page title that will be included in the <title> tag of the generated HTML
- description (optional): Page description that will be included in a <meta name="description"> tag
- prompt (optional): Custom prompt for the AI. Can be either:
  - An inline string: prompt: "Create a modern, minimalist design"
  - A file path relative to the prompts/ directory:
    - prompt: "custom-prompt.md" (resolves to prompts/custom-prompt.md)
    - prompt: "/custom-prompt.md" (also resolves to prompts/custom-prompt.md)
    - prompt: "subdir/file.md" (resolves to prompts/subdir/file.md)
- cache (optional): Configure caching behavior for this specific content:
  - enabled: Enable or disable caching (boolean)
  - ttl: Cache time-to-live in seconds (number)
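As a rough illustration of how these fields could be consumed, the sketch below parses the front matter with the Deno standard library and resolves a file-based prompt. Whether ssgen uses this exact module or resolution logic is an assumption; the field names match the list above.

```typescript
// Parse front matter and resolve the optional prompt reference (illustrative).
import { extract } from "jsr:@std/front-matter@1/yaml";

interface PageMeta {
  title?: string;
  description?: string;
  prompt?: string;
  cache?: { enabled?: boolean; ttl?: number };
}

const raw = await Deno.readTextFile("./content/my-page.md");
// attrs holds the metadata; body is the markdown without the front matter block.
const { attrs, body } = extract<PageMeta>(raw);

// A prompt ending in .md is treated as a file under prompts/; otherwise it is inline.
const prompt = attrs.prompt?.endsWith(".md")
  ? await Deno.readTextFile(`./prompts/${attrs.prompt.replace(/^\//, "")}`)
  : attrs.prompt;
```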
Examples:
See the examples/with-metadata.md file for a complete example of using metadata:
curl http://localhost:8000/?content="$(cat examples/with-metadata.md | jq -sRr @uri)"Or place a file with metadata in the content/ directory:
# content/my-page.md
---
title: "Welcome to My Site"
description: "Learn about our services and offerings"
cache:
  enabled: true
  ttl: 7200 # Cache for 2 hours
---
# My Content
...
Then access it at /my-page or /my-page.html.
Cache Configuration Example:
You can disable caching for specific content or set a custom TTL:
---
title: "Real-time Dashboard"
cache:
  enabled: false # Disable caching for this page
---
# Live Data
This page shows real-time information...
Or set a longer cache duration for static content:
---
title: "Company History"
cache:
  ttl: 86400 # Cache for 24 hours
---
# Our Story
Founded in 1990...
Generate content from markdown.
Query Parameters (GET):
- content (optional): Markdown content to process
- prompt (optional): User prompt for the LLM
- systemPrompt (optional): System prompt to configure LLM behavior
- model (optional): Model to use (e.g., "gemini-2.5-flash", "gemini-2.5-flash-lite", "gemini-2.5-pro"). Defaults to "gemini-2.5-flash"
Request Body (POST):
{
"content": "Markdown content here",
"prompt": "Optional user prompt",
"systemPrompt": "Optional system prompt",
"model": "Optional model name (e.g., gemini-2.5-flash)"
}
Response: Streaming text response with generated content.
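Because the body streams, a client can render output as it arrives. Here is a minimal Deno/TypeScript sketch of consuming the stream; any HTTP client that reads a streamed body works the same way.

```typescript
// Consume the streamed response chunk by chunk.
const res = await fetch("http://localhost:8000/", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    content: "# Hello World\nThis is my markdown content.",
    prompt: "Convert this to beautiful HTML",
  }),
});

for await (const chunk of res.body!) {
  await Deno.stdout.write(chunk); // print generated HTML as it streams in
}
```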
Health check endpoint.
Response:
OK
- Install the Deno Deploy CLI:
deno install --allow-all --no-check -r -f https://deno.land/x/deploy/deployctl.ts
- Deploy your project:
deployctl deploy --project=your-project-name main.ts
- Set environment variables in the Deno Deploy dashboard:
- GOOGLE_GENERATIVE_AI_API_KEY: Your Google AI API key
- GOOGLE_GENERATIVE_AI_API_KEY (required): Your Google AI API key
- PORT (optional): Server port, defaults to 8000
- CACHE_ENABLED (optional): Enable or disable Cache-Control headers, defaults to true
- CACHE_TTL (optional): Cache-Control max-age in seconds, defaults to 3600 (1 hour)
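A small sketch of reading these variables with their documented defaults follows; the parsing details are illustrative, and the variable names are the ones listed above.

```typescript
// Read configuration from the environment, applying the documented defaults.
const PORT = Number(Deno.env.get("PORT") ?? "8000");
const CACHE_ENABLED = (Deno.env.get("CACHE_ENABLED") ?? "true") !== "false";
const CACHE_TTL = Number(Deno.env.get("CACHE_TTL") ?? "3600"); // seconds
const API_KEY = Deno.env.get("GOOGLE_GENERATIVE_AI_API_KEY");

if (!API_KEY) {
  throw new Error("GOOGLE_GENERATIVE_AI_API_KEY is required");
}
```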
You can now specify which model to use on a per-request basis using the model parameter in query strings (GET) or request body (POST). This allows you to dynamically choose the model without changing code:
# Use gemini-2.5-pro for a specific request
curl "http://localhost:8000/?model=gemini-2.5-pro&content=Hello"The default model is gemini-2.5-flash. Available models include:
- gemini-2.5-flash (default)
- gemini-2.5-flash-lite (lighter, faster)
- gemini-2.5-pro (most powerful)
To change the default model for all requests, edit the DEFAULT_MODEL constant in main.ts:
const DEFAULT_MODEL = "gemini-2.5-flash"; // Change this to any supported model
Thanks to the Vercel AI SDK, you can also easily switch to other providers (OpenAI, Anthropic, etc.) by changing the import and model initialization.
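As a hedged sketch of that provider switch (the package names are the official AI SDK providers, but how main.ts structures the call is an assumption): only the import and the model factory change, while the streaming call stays the same.

```typescript
// Swap the model provider by changing the import and model factory.
import { streamText } from "npm:ai";
// import { google } from "npm:@ai-sdk/google"; // current provider
import { openai } from "npm:@ai-sdk/openai"; // alternative provider

const result = streamText({
  // model: google("gemini-2.5-flash"),
  model: openai("gpt-4o-mini"),
  system: "You are a web design expert.",
  prompt: "# Hello World\n\nConvert this markdown into a styled HTML page.",
});

// Return the stream to the client unchanged.
const response = result.toTextStreamResponse();
```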
ssgen sets appropriate Cache-Control headers on all responses to enable browser and CDN caching of generated content.
How it works:
- All responses include a Cache-Control header based on your configuration
- When caching is enabled, responses include Cache-Control: public, max-age={ttl}
- When caching is disabled, responses include Cache-Control: no-cache, no-store, must-revalidate
- This allows browsers and CDNs (like Cloudflare or Deno Deploy) to cache the generated HTML
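A minimal sketch of how that header value could be derived from the effective settings (global defaults, optionally overridden per page) is shown below; the helper shape is illustrative.

```typescript
// Build the Cache-Control value from the effective cache settings.
interface CacheConfig {
  enabled: boolean;
  ttl: number; // seconds
}

function cacheControlHeader({ enabled, ttl }: CacheConfig): string {
  return enabled
    ? `public, max-age=${ttl}`
    : "no-cache, no-store, must-revalidate";
}

// e.g. attach it to the generated response:
const headers = new Headers({
  "Content-Type": "text/html; charset=utf-8",
  "Cache-Control": cacheControlHeader({ enabled: true, ttl: 3600 }),
});
```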
Global Configuration:
Set default cache behavior via environment variables:
# Enable or disable Cache-Control headers (default: true)
CACHE_ENABLED=true
# Set max-age in seconds (default: 3600 = 1 hour)
CACHE_TTL=3600
Per-Content Configuration:
Override cache behavior for specific content files using YAML front matter:
---
title: "My Page"
cache:
  enabled: true # Enable/disable caching for this page
  ttl: 7200 # Cache for 2 hours (overrides CACHE_TTL)
---
# Content here...
Use cases:
- Static content: Set long cache times (e.g., ttl: 86400 for 24 hours)
- Frequently updated content: Set short cache times (e.g., ttl: 300 for 5 minutes)
- Dynamic/real-time content: Disable caching (enabled: false)
Example:
# Check the Cache-Control header
curl -I http://localhost:8000/about
# Response includes: Cache-Control: public, max-age=3600
See the examples/ directory for sample markdown files:
- sample-content.md: A portfolio page example
- blog-post.md: A blog post example
- with-metadata.md: Example showing YAML front matter with title and description metadata
See the content/ directory for style influence examples:
- style-brand-example.md: Using brand guidelines
- style-image-example.md: Using reference images
- style-combined-example.md: Combining both approaches
For detailed documentation on the style influence feature, see STYLE_EXAMPLES.md.
Run the development server with auto-reload:
deno task dev
MIT License - see LICENSE file for details
Contributions are welcome! Please feel free to submit a Pull Request.