
WhisperScale Conversational Cloud Backend

WhisperScale packages a stateless chatbot workflow on serverless primitives so you can launch conversational experiences without wrangling infrastructure.

Why WhisperScale

  • Elastic by design: Functions scale with traffic bursts while keeping costs near-zero when idle.
  • Stateful memory layer: MongoDB persists session transcripts and enrichment data for follow-up messages.
  • Assistant-ready: Ships with a Watson Assistant integration path but stays agnostic so you can swap in any NLU provider.
  • Deployment in minutes: A single manifest describes all functions, actions, and bindings for a predictable rollout.

Architecture Snapshot

The stack relies on three lightweight building blocks:

  1. Event gateway (HTTP or messaging) triggers the assistant function.
  2. The function forwards user utterances to the configured NLU service and enriches replies.
  3. The mongodb function stores context records, enabling continuity across sessions.

Consult doc/source/images/architecture.jpg for a visual overview.
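The flow above can be sketched as an OpenWhisk-style action. This is a minimal illustration, not the repository's actual code: the helper names (callNLU, enrich) and parameter names (utterance, sessionId) are assumptions, and the NLU call is stubbed out where a real deployment would call Watson Assistant or another provider.

```javascript
// Placeholder for the configured NLU service; a real deployment would
// issue an HTTP request to Watson Assistant or an equivalent provider.
async function callNLU(utterance) {
  return { intent: 'greeting', reply: `You said: ${utterance}` };
}

// Attach enrichment data before the reply is persisted and returned.
function enrich(nluResult, sessionId) {
  return { ...nluResult, sessionId, timestamp: Date.now() };
}

// Entry point invoked by the event gateway (HTTP or messaging trigger).
async function main(params) {
  const utterance = params.utterance || '';
  const sessionId = params.sessionId || 'anonymous';
  const nluResult = await callNLU(utterance);
  return enrich(nluResult, sessionId);
}

module.exports = { main };
```

In a deployed sequence, the enriched result would then be handed to the mongodb action so the context record survives across sessions.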

Getting Started

  1. Provision IBM Cloud Functions (or another OpenWhisk-compatible runtime), Watson Assistant, and MongoDB instances in the same region.
  2. Copy src/assistant.js and src/mongodb.js into your deployment workspace and replace the placeholder configuration values with credentials injected through environment variables.
  3. Update manifest.yml with your package name, credentials, and sequence wiring.
  4. Deploy with ibmcloud fn deploy --manifest manifest.yml or an equivalent OpenWhisk deployment command.
  5. Test by sending a sample payload to the published HTTP endpoint and monitor logs to verify the conversation flow.
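The manifest wiring in step 3 might look like the following sketch. Every name here (the whisperscale package, action names, input keys) is a placeholder assumption; check the actual schema against the repository's manifest.yml before deploying.

```yaml
# Illustrative manifest sketch -- package, action, and credential names
# are placeholders, not the repository's actual manifest.yml.
packages:
  whisperscale:
    actions:
      assistant:
        function: src/assistant.js
        runtime: nodejs:18
        inputs:
          ASSISTANT_API_KEY: $ASSISTANT_API_KEY
      mongodb:
        function: src/mongodb.js
        runtime: nodejs:18
        inputs:
          MONGODB_URI: $MONGODB_URI
    sequences:
      conversation:
        actions: assistant, mongodb
        web: true
```

Keeping credentials as `$VARIABLE` references lets the deployment command resolve them from the environment instead of committing them to the manifest.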

Operational Tips

  • Rotate API keys regularly and inject them through the platform’s secret manager.
  • Enable MongoDB authentication and network rules; avoid embedding credentials in source files.
  • Add automated tests for new intents to ensure fulfillment logic keeps pace with the dialog.

License

This project is licensed under the Apache 2.0 License. Refer to LICENSE for the full text.
