A complete TypeScript/JavaScript implementation of OpenAI's Harmony response format for structured conversational AI interactions. This library provides 100% API compatibility with the Rust implementation, enabling structured conversations, multi-channel outputs, tool integration, and real-time streaming support.
This library does NOT include an AI model. It provides the conversation formatting and parsing layer that works with models supporting the Harmony protocol. You need:
- OpenAI API access or compatible model endpoint
- A model that understands Harmony formatting (`<|start|>`, `<|message|>`, `<|end|>` tokens)
- Integration with your preferred inference provider (OpenAI, Azure, local servers)
What this library does:
```
Your App → Harmony Protocol → [Format Tokens] → OpenAI Model → [Response Tokens] → Harmony Protocol → Structured Output
```
This library provides a complete TypeScript/JavaScript implementation of the Harmony response format used by OpenAI's open-weight model series (gpt-oss). It enables parsing and rendering of structured conversations with support for:
- Multiple communication channels (analysis, commentary, final)
- Tool calling and function integration
- Reasoning effort control
- Streaming token parsing
- System and developer instructions
- Minimal Dependencies (only tiktoken for tokenization)
- Full TypeScript Support with complete type safety and IntelliSense
- Memory Efficient streaming parser for real-time processing
- High Performance tokenization with tiktoken integration
- Thread Safe operations for concurrent usage
- Comprehensive Error Handling with typed exceptions
- Multiple Output Formats (Text, Markdown, HTML, JSON, CSV)
- Channel-Specific Rendering (analysis, commentary, final)
- Custom Formatting Options (labels, truncation, timestamps)
- Streaming UI Support with incremental updates
- Export Capabilities for analytics and documentation
- CSS-Ready HTML with semantic classes
- Multiple Encoding Support (o200k_base, custom encodings)
- Extensible Tool System with namespace-based organization
- Configurable Channel Routing with filtering options
- Role-based Validation with automatic message sorting
- Custom Tool Integration with JSON schema validation
- Real-Time Streaming with delta content updates
- 100% API Compatibility with the Rust implementation
- All Special Tokens supported (200006, 200008, 200007, etc.)
- Multi-Channel System for structured reasoning workflows
- Tool Calling Framework with built-in namespaces (Browser, Python, Functions)
- System Content Builder with fluent API design
- Message Validation with conversation state management
- 5 Comprehensive Examples with real-world usage patterns
- Complete Documentation with step-by-step guides
- Migration Guide from Rust implementation
- Integration Guides for OpenAI, Azure, local models
- Troubleshooting Guide with common solutions
- Performance Best Practices and optimization tips
```bash
# npm
npm install harmony-protocol

# yarn
yarn add harmony-protocol

# pnpm
pnpm add harmony-protocol
```
Requirements:
- Node.js ≥ 18.0.0
- TypeScript ≥ 5.0.0 (for TypeScript projects)
```typescript
import { loadHarmonyEncoding, Message, Conversation, Role } from 'harmony-protocol';

// 1. Load encoding
const encoding = loadHarmonyEncoding();

// 2. Create conversation
const conversation = Conversation.fromMessages([
  Message.system('You are a helpful assistant.'),
  Message.user('What is 2 + 2?')
]);

// 3. Get tokens ready for your OpenAI model
const tokens = encoding.renderConversationForCompletion(conversation, Role.ASSISTANT);

// 4. Send tokens to OpenAI, get response tokens back
// const responseTokens = await openai.complete(tokens);

// 5. Parse response back to structured messages
// const messages = encoding.parseMessagesFromCompletionTokens(responseTokens, Role.ASSISTANT);
```
```typescript
import {
  loadHarmonyEncoding,
  Message,
  Conversation,
  Role,
  Channel,
  createSystemContent
} from 'harmony-protocol';

const encoding = loadHarmonyEncoding();

// Create a system message with multi-channel support
const systemContent = createSystemContent()
  .withIdentity('Expert Mathematics Tutor')
  .withRequiredChannels(['analysis', 'final'])
  .withReasoningEffort('high')
  .build();

// Build conversation
const conversation = Conversation.fromMessages([
  Message.system(systemContent),
  Message.user('Solve: What is the derivative of x² + 3x + 2?')
]);

// Get tokens for your OpenAI model
const tokens = encoding.renderConversationForCompletion(conversation, Role.ASSISTANT);

// After getting response from OpenAI:
// const messages = encoding.parseMessagesFromCompletionTokens(responseTokens, Role.ASSISTANT);
```
```typescript
import { StreamableParser, loadHarmonyEncoding, Role } from 'harmony-protocol';

const encoding = loadHarmonyEncoding();
const parser = new StreamableParser(encoding, Role.ASSISTANT);

// Connect to your OpenAI streaming endpoint
async function handleStreamingResponse(streamResponse: ReadableStream) {
  const reader = streamResponse.getReader();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    // Process each text chunk as it arrives
    parser.processText(new TextDecoder().decode(value));

    // Get real-time content delta for UI updates
    const delta = parser.getLastContentDelta();
    if (delta) {
      console.log('New content:', delta);
      // Update your UI immediately (updateStreamingUI is your app's callback)
      updateStreamingUI(delta);
    }
  }

  // Get final parsed messages
  const messages = parser.intoMessages();
  console.log('Complete conversation:', messages);
}
```
```typescript
import {
  loadHarmonyEncoding,
  createSystemContent,
  createToolDescription,
  createToolNamespace,
  createBrowserToolNamespace,
  createPythonToolNamespace,
  Message,
  Conversation,
  Role
} from 'harmony-protocol';

const encoding = loadHarmonyEncoding();

// Create custom tools
const mathTools = [
  createToolDescription('calculate', 'Perform mathematical calculations', {
    type: 'object',
    properties: {
      expression: { type: 'string', description: 'Math expression to evaluate' },
      precision: { type: 'number', description: 'Decimal places', default: 2 }
    },
    required: ['expression']
  }),
  createToolDescription('plot_function', 'Plot mathematical functions', {
    type: 'object',
    properties: {
      function: { type: 'string', description: 'Function to plot (e.g., x^2 + 2x + 1)' },
      xRange: { type: 'array', items: { type: 'number' }, description: '[min, max] for x-axis' }
    },
    required: ['function']
  })
];

// Create tool namespaces
const mathNamespace = createToolNamespace('math', 'Mathematical computation tools', mathTools);
const browserNamespace = createBrowserToolNamespace();
const pythonNamespace = createPythonToolNamespace();

// Build comprehensive system content
const systemContent = createSystemContent()
  .withIdentity('Advanced AI Assistant with Tool Access')
  .withRequiredChannels(['analysis', 'commentary', 'final'])
  .withReasoningEffort('high')
  .withTools(mathNamespace)
  .withTools(browserNamespace)
  .withTools(pythonNamespace)
  .build();

const conversation = new Conversation([
  Message.system(systemContent),
  Message.user('Calculate the roots of 2x² + 5x - 3 = 0 and plot the function')
]);

// Model will now have access to all defined tools
const tokens = encoding.renderConversationForCompletion(conversation, Role.ASSISTANT);
```
```typescript
import OpenAI from 'openai';
import { loadHarmonyEncoding, Message, Conversation, Role } from 'harmony-protocol';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const encoding = loadHarmonyEncoding();

async function completeWithHarmony(userMessage: string) {
  // 1. Create Harmony conversation
  const conversation = Conversation.fromMessages([
    Message.system('You are a helpful assistant. Use the analysis channel for reasoning.'),
    Message.user(userMessage)
  ]);

  // 2. Render to tokens
  const tokens = encoding.renderConversationForCompletion(conversation, Role.ASSISTANT);

  // 3. Send to OpenAI (note: this is conceptual - OpenAI doesn't directly accept token arrays)
  // In practice, you'd need a compatible model that accepts Harmony-formatted text
  const harmonyText = encoding.decode(tokens);
  const response = await openai.completions.create({
    model: 'your-harmony-compatible-model', // Substitute a model that understands Harmony formatting
    prompt: harmonyText,
    max_tokens: 1000,
    stream: false
  });

  // 4. Parse response back to structured messages
  const responseText = response.choices[0].text || '';
  const responseTokens = encoding.encode(responseText);
  const messages = encoding.parseMessagesFromCompletionTokens(responseTokens, Role.ASSISTANT);

  return messages;
}

// Usage
const result = await completeWithHarmony('Explain quantum computing in simple terms');
console.log('Analysis:', result.find(m => m.channel === 'analysis')?.content);
console.log('Final Answer:', result.find(m => m.channel === 'final')?.content);
```
```typescript
import {
  HarmonyRenderer,
  StreamingRenderer,
  renderToText,
  renderToMarkdown,
  renderToHTML,
  renderFinalOnly,
  renderSummary,
  renderChannel,
  Channel
} from 'harmony-protocol';

// `conversation` is a Conversation instance, e.g. from the earlier examples

// Quick rendering functions
const textOutput = renderToText(conversation);
const markdownOutput = renderToMarkdown(conversation);
const htmlOutput = renderToHTML(conversation);

// Advanced rendering with custom options
const renderer = new HarmonyRenderer({
  showChannels: true,
  showRoles: true,
  channelLabels: {
    analysis: '🤔 Thinking',
    commentary: '💭 Context',
    final: '💬 Answer'
  },
  maxContentLength: 200,
  includeTimestamps: true
});

const rendered = renderer.renderConversation(conversation);
console.log('Text:', rendered.text);
console.log('Markdown:', rendered.markdown);
console.log('HTML:', rendered.html);

// Channel-specific rendering
const finalOnly = renderFinalOnly(conversation); // User-facing content only
const analysisContent = renderChannel(conversation, Channel.ANALYSIS);

// Streaming UI support (newConversation is the latest conversation state)
const streamingRenderer = new StreamingRenderer();
const updates = streamingRenderer.renderIncremental(newConversation);
if (updates.hasChanges) {
  updateUI(updates.newContent); // updateUI is your app's render callback
}

// Conversation analytics
const summary = renderSummary(conversation);
console.log(summary); // "Conversation contains 5 messages across 3 channels..."

// Export formats
const structured = rendered.structured;
const csvExport = structured.messages.map(m =>
  `"${m.role}","${m.channel}","${m.content}",${m.originalLength}`
).join('\n');
```
```typescript
import {
  HarmonyError,
  ParseError,
  RenderError,
  ValidationError
} from 'harmony-protocol';

// `conversation`, `encoding`, and `responseTokens` come from the earlier examples
try {
  // Validate conversation before processing
  const validation = conversation.validate();
  if (!validation.isValid) {
    throw new ValidationError(`Invalid conversation: ${validation.errors.join(', ')}`);
  }

  const tokens = encoding.renderConversationForCompletion(conversation, Role.ASSISTANT);
  const messages = encoding.parseMessagesFromCompletionTokens(responseTokens, Role.ASSISTANT);
} catch (error) {
  if (error instanceof ParseError) {
    console.error('Parsing failed:', error.message);
  } else if (error instanceof RenderError) {
    console.error('Rendering failed:', error.message);
  } else if (error instanceof ValidationError) {
    console.error('Validation failed:', error.message);
  } else {
    console.error('Unexpected error:', error);
  }
}
```
```typescript
import { StreamableParser, loadHarmonyEncoding, Role } from 'harmony-protocol';

const encoding = loadHarmonyEncoding('o200k_base');
const parser = new StreamableParser(encoding, Role.ASSISTANT);

// In practice, responseTokens would come from your OpenAI model's streaming API
const responseTokens = [200006, 1234, 5678]; // These would be from OpenAI

// Process tokens as they arrive from the model
for (const token of responseTokens) {
  parser.process(token);

  // Get content delta for real-time streaming UI updates
  const delta = parser.getLastContentDelta();
  if (delta) {
    process.stdout.write(delta); // Show new content to user immediately
  }
}

// Get final structured messages after streaming is complete
const messages = parser.intoMessages();
console.log(`\nParsed ${messages.length} messages from model output`);
```
The Harmony format structures conversations using special tokens:
```
<|start|>system<|message|>You are ChatGPT, a large language model trained by OpenAI.
Knowledge cutoff: 2024-06
Reasoning: medium
# Valid channels: analysis, commentary, final. Channel must be included for every message.<|end|>
<|start|>user<|message|>What is 2 + 2?<|end|>
<|start|>assistant<|channel|>analysis<|message|>I need to perform a simple arithmetic calculation.<|end|>
<|start|>assistant<|channel|>final<|message|>2 + 2 equals 4.<|end|>
```
The library supports multiple communication channels for organized model outputs:
- analysis: Internal reasoning and analysis
- commentary: Model explanations and meta-commentary
- final: User-facing final responses
Channels can be configured as required, and the library automatically drops analysis content once a final response is complete (analysis dropping).
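To make channel routing concrete, here is a minimal sketch that tags assistant messages with channels and extracts only the user-facing content. It assumes `Channel.FINAL` exists alongside the `Channel.ANALYSIS` member used elsewhere in this README, and that `Message.assistant()` accepts a plain string:

```typescript
import { Conversation, Message, Channel, renderFinalOnly } from 'harmony-protocol';

// A model response typically spans several channels.
const conversation = Conversation.fromMessages([
  Message.user('What is 15% of 80?'),
  // Internal reasoning, never shown to end users
  Message.assistant('15% of 80 = 0.15 * 80 = 12.').withChannel(Channel.ANALYSIS),
  // The user-facing answer
  Message.assistant('15% of 80 is 12.').withChannel(Channel.FINAL)
]);

// Render only the final channel for display.
console.log(renderFinalOnly(conversation));
```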
- Browser Tools: Web browsing, search, and content extraction
- Python Tools: Code execution environment
- Function Tools: Custom function definitions
```typescript
import { createToolDescription } from 'harmony-protocol';

const customTool = createToolDescription(
  'weather',
  'Gets current weather for a location',
  {
    type: 'object',
    properties: {
      location: { type: 'string' },
      units: { type: 'string', enum: ['celsius', 'fahrenheit'] }
    },
    required: ['location']
  }
);
```
```
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│     Message     │    │    Encoding     │    │    Streaming    │
│  Conversation   │◄──►│ HarmonyEncoding │◄──►│ StreamableParser│
│     Types       │    │ Token Handling  │    │ Real-time Parse │
└─────────────────┘    └─────────────────┘    └─────────────────┘
         ▲                      ▲                      ▲
         │                      │                      │
         ▼                      ▼                      ▼
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│ Tool Integration│    │    Tiktoken     │    │   TypeScript    │
│   Namespaces    │    │  Tokenization   │    │   Type Safety   │
│   Validation    │    │    Encoding     │    │   Full Typing   │
└─────────────────┘    └─────────────────┘    └─────────────────┘
```
Token | ID | Purpose
---|---|---
`<\|start\|>` | 200006 | Begins a message header
`<\|message\|>` | 200008 | Ends the header and begins the message content
`<\|end\|>` | 200007 | Ends a message
`<\|channel\|>` | 200005 | Introduces the channel name in a message header
`<\|call\|>` | 200012 | Signals a tool call (stop token)
`<\|return\|>` | 200002 | Signals the model has finished responding (stop token)
`<\|constrain\|>` | 200003 | Declares the content type of a tool-call payload
- Context Window: 1,048,576 tokens (1M)
- Max Action Length: 524,288 tokens (512K)
- Type Safe: Full TypeScript support
- Memory Efficient: Token reuse and streaming parsing
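To stay within these limits, count the rendered prompt before dispatching it to the model. A minimal sketch, assuming `renderConversationForCompletion()` returns an array of token IDs as in the quick-start example:

```typescript
import { loadHarmonyEncoding, Conversation, Message, Role } from 'harmony-protocol';

const CONTEXT_WINDOW = 1_048_576; // 1M-token context window

const encoding = loadHarmonyEncoding();
const conversation = Conversation.fromMessages([
  Message.system('You are a helpful assistant.'),
  Message.user('Summarize this report...')
]);

// Measure the rendered prompt before sending it to the model.
const tokens = encoding.renderConversationForCompletion(conversation, Role.ASSISTANT);
if (tokens.length > CONTEXT_WINDOW) {
  throw new Error(`Prompt is ${tokens.length} tokens; limit is ${CONTEXT_WINDOW}`);
}
```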
```bash
# Clone and setup
git clone https://github.com/terraprompt/harmony-protocol-js
cd harmony-protocol-js
npm install && npm run build

# Run individual examples
npm run example:basic      # Core functionality demonstration
npm run example:streaming  # Real-time streaming parser
npm run example:tools      # Tool integration and custom functions
npm run example:channels   # Multi-channel workflow patterns
npm run example:rendering  # Output formatting and rendering options
```
Example | Purpose | Key Features
---|---|---
`basic-usage.ts` | Core library functionality | Message creation, conversation rendering, token handling
`streaming-parser.ts` | Real-time processing | Streaming token processing, delta updates, error handling
`tool-integration.ts` | Tool system | Custom tools, namespaces, browser/Python tools
`channel-management.ts` | Multi-channel workflows | Channel routing, analysis dropping, conversation filtering
`output-rendering.ts` | Output formatting | Text/Markdown/HTML rendering, custom formatting, export options
- OpenAI Integration - Using with OpenAI API
- Azure OpenAI Integration - Azure OpenAI Service setup
- Local Models - Ollama, vLLM, and local inference
- Streaming UIs - Building real-time chat interfaces
- Error Handling - Comprehensive error management
Class | Purpose | Key Methods
---|---|---
`Message` | Individual conversation messages | `system()`, `user()`, `assistant()`, `withChannel()`
`Conversation` | Message collections | `fromMessages()`, `validate()`, `getStats()`, `filter()`
`HarmonyEncoding` | Token encoding/decoding | `renderConversation()`, `parseMessages()`, `countTokens()`
`StreamableParser` | Real-time parsing | `process()`, `getLastContentDelta()`, `intoMessages()`
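A short sketch of how these classes compose; the return shape of `getStats()` is an assumption for illustration:

```typescript
import { loadHarmonyEncoding, Conversation, Message, Role } from 'harmony-protocol';

const encoding = loadHarmonyEncoding();
const conversation = Conversation.fromMessages([
  Message.system('You are a helpful assistant.'),
  Message.user('Hello!')
]);

// Inspect the conversation (the shape of the stats object is illustrative).
console.log(conversation.getStats());

// Render through the encoding and measure the prompt.
const tokens = encoding.renderConversationForCompletion(conversation, Role.ASSISTANT);
console.log('Prompt length:', tokens.length, 'tokens');
```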
Class/Function | Purpose | Output Formats
---|---|---
`HarmonyRenderer` | Advanced rendering | Text, Markdown, HTML, Structured
`StreamingRenderer` | Incremental updates | Real-time UI support
`renderToText()` | Quick text output | Plain text with formatting
`renderToMarkdown()` | Documentation format | Markdown with headers
`renderToHTML()` | Web interfaces | HTML with CSS classes
Function | Purpose | Built-in Tools
---|---|---
`createSystemContent()` | System configuration | Fluent API builder
`createToolDescription()` | Define tools | JSON schema validation
`createBrowserToolNamespace()` | Web browsing | Search, navigate, extract
`createPythonToolNamespace()` | Code execution | Execute, install packages
Type | Values | Usage
---|---|---
`Role` | System, Developer, User, Assistant, Tool | Message attribution
`Channel` | Final, Analysis, Commentary | Response organization
`ReasoningEffort` | Low, Medium, High | System configuration
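These types plug directly into the builders shown earlier. A brief sketch reusing only calls that appear elsewhere in this README:

```typescript
import { createSystemContent, Message, Channel } from 'harmony-protocol';

// Channel and ReasoningEffort values configure the system message...
const systemContent = createSystemContent()
  .withRequiredChannels(['analysis', 'final']) // Channel values
  .withReasoningEffort('high')                 // ReasoningEffort value
  .build();

// ...while Role is implied by the Message factory and Channel routes the output.
const answer = Message.assistant('The answer is 4.').withChannel(Channel.FINAL);
```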
```bash
# Run all tests
npm test

# Build the library
npm run build

# Run linting
npm run lint

# Format code
npm run format
```
Q: Import errors when using the library
```typescript
// ❌ This might fail
import { Message } from 'harmony-protocol/dist/message';

// ✅ Use this instead
import { Message } from 'harmony-protocol';
```
Q: Tiktoken encoding errors
```bash
# If you see tiktoken errors, try:
npm install tiktoken@latest

# For Node.js compatibility issues:
npm install --save-dev @types/node@latest
```
Q: TypeScript compilation issues
Ensure your `tsconfig.json` includes:

```json
{
  "compilerOptions": {
    "moduleResolution": "node",
    "esModuleInterop": true,
    "allowSyntheticDefaultImports": true
  }
}
```
Q: Large conversation performance
```typescript
// For large conversations, use streaming or filtering
const finalOnly = renderFinalOnly(conversation); // Faster
const streaming = new StreamingRenderer();       // Memory efficient
```
- Always validate conversations before encoding
- Use appropriate rendering format for your use case
- Handle errors gracefully with typed exceptions
- Cache rendered output when conversations don't change
- Use streaming for real-time interfaces
- Filter channels based on audience needs
- Reuse `HarmonyEncoding` instances
- Use `StreamableParser` for large responses
- Enable analysis dropping for production
- Cache rendered outputs
- Use `renderFinalOnly()` for user interfaces
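For example, encoding reuse and output caching might be wired up as below. This is a minimal sketch: the revision counter is an application-level convention for detecting conversation changes, not a library feature.

```typescript
import { loadHarmonyEncoding, renderFinalOnly, Conversation } from 'harmony-protocol';

// Load the encoding once per process and share it; construction loads
// tiktoken tables and is comparatively expensive.
export const sharedEncoding = loadHarmonyEncoding();

// Naive render cache: reuse rendered output until the conversation changes.
// The caller bumps `revision` whenever it appends a message.
const renderCache = new Map<string, string>();

export function cachedFinalRender(
  conversationId: string,
  revision: number,
  conversation: Conversation
): string {
  const key = `${conversationId}:${revision}`;
  let output = renderCache.get(key);
  if (output === undefined) {
    output = renderFinalOnly(conversation);
    renderCache.set(key, output);
  }
  return output;
}
```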
- 📖 Complete Documentation - Comprehensive guides
- 🔍 Examples - Working code samples
- 🐛 GitHub Issues - Bug reports and questions
- 📚 API Reference - Detailed method documentation
We welcome contributions! Please see our contributing guidelines:
- Fork the repository and create a feature branch
- Add comprehensive tests for new functionality
- Update documentation for any API changes
- Follow TypeScript best practices and existing code style
- Submit a pull request with clear description
```bash
git clone https://github.com/terraprompt/harmony-protocol-js
cd harmony-protocol-js
npm install
npm run build
npm test
```
New examples are always welcome! Follow the existing pattern:

- Create a `.ts` file in `examples/`
- Add comprehensive comments
- Include error handling
- Add to `package.json` scripts
- Document in `docs/examples.md`
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
- OpenAI for creating the Harmony response format
- Rust Implementation for providing the specification
- tiktoken maintainers for tokenization support
- TypeScript Team for excellent tooling
- Open Source Community for contributions and feedback
This is a reverse-engineered implementation for educational and research purposes. It is not affiliated with or endorsed by OpenAI. The Harmony protocol specification may change as OpenAI continues development.