Starter Monorepo

Monorepo is amazing!

[Project's branding image]

Overview

This is a base monorepo starter template to kick-start your beautifully organized project, whether it's a fullstack project, a monorepo of multiple libraries and applications, or even just one API server with its related infrastructure deployment and utilities.

Out of the box, the included apps form a fullstack project: a frontend Nuxt 4 app, a main backend using Hono, and a backend-convex Convex app.

  • General APIs, such as authentication, are handled by the main backend, which is designed to be serverless-compatible and can be deployed anywhere, allowing for the best possible latency, performance, and cost, according to your needs.
  • backend-convex is a modular, add-in backend, utilized to power components like AI Chat.

It is recommended to use an AI Agent (Roo Code recommended) to help you set up the monorepo according to your needs; see Utilities.

What's inside?

Overview of the tech

⏩ This template is powered by Turborepo.

😊 Out-of-the-box, this repo is configured for an SSG frontend Nuxt app and a backend Hono app that serves as the main API, optimizing for cost and simplicity.

  • The starter kit is still configured for 100% SSR support:
    simply change apps/frontend's build script to nuxt build to enable SSR builds, as sketched below.
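For reference, the change is a one-liner in apps/frontend/package.json; a sketch, assuming the SSG script was nuxt generate:

```jsonc
// apps/frontend/package.json (sketch)
{
  "scripts": {
    "build": "nuxt build" // was "nuxt generate" (assumed) for SSG output
  }
}
```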

🌩️ SST Ion, an Infrastructure-as-Code solution, with powerful Live development.

  • SST is 100% opt-in, by using sst CLI commands yourself, like sst dev;
    simply remove the sst dependency and sst.config.ts if you want to use another solution.
  • Currently only the backend app is configured; it deploys a Lambda with a Function URL enabled, roughly as sketched below.
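A minimal sketch of what such an sst.config.ts can look like; the app name and handler path are hypothetical, not the template's actual values:

```ts
/// <reference path="./.sst/platform/config.d.ts" />

// Minimal sketch (not the template's actual config): deploy the backend
// as a Lambda with a Function URL enabled.
export default $config({
  app() {
    return { name: 'starter-monorepo', home: 'aws' }
  },
  async run() {
    new sst.aws.Function('Backend', {
      handler: 'apps/backend/src/index.handler', // hypothetical path
      url: true, // enable the Lambda Function URL
    })
  },
})
```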

🔐 Comes with a starter kit for Kinde's typescript-sdk, see: /apps/backend/api/auth

  • Add your env variables, activate the auth routes, profit$
  • Please note that by default the backend comes with a cookie-based session manager, which has great DX and security and does not require an external database (which also means great performance). But as the backend is decoupled from Nuxt's SSR server, it will not work well with SSR (the session/auth state is not shared).
    So if you use SSR, you could use the official Nuxt Kinde module, or implement your own way to manage the session at apps/backend/src/middlewares/session.ts. A rough sketch of the cookie-based approach follows below.
    • If you have a good session manager implementation, a PR is greatly appreciated!
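A minimal sketch, not the template's actual code, of how a cookie-backed session manager for the Kinde typescript-sdk can be wired up with Hono's cookie helpers; the SessionManager shape should be checked against the SDK version in use:

```ts
// Sketch of a cookie-backed session manager matching the shape of the
// Kinde typescript-sdk's SessionManager interface (verify against your SDK version).
import type { Context } from 'hono'
import { deleteCookie, getCookie, setCookie } from 'hono/cookie'

export function cookieSessionManager(c: Context) {
  return {
    async getSessionItem(key: string) {
      return getCookie(c, key) ?? null
    },
    async setSessionItem(key: string, value: unknown) {
      const serialized = typeof value === 'string' ? value : JSON.stringify(value)
      setCookie(c, key, serialized, { httpOnly: true, secure: true, sameSite: 'Lax' })
    },
    async removeSessionItem(key: string) {
      deleteCookie(c, key)
    },
    async destroySession() {
      // A real implementation would track and clear every Kinde session key here.
    },
  }
}
```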

💯 JS is always TypeScript where possible.

Highlight Features / Components

AI / LLM Chat

Work started on 2025-06-12 for the T3 Chat Cloneathon competition, with no prior AI SDK or chat-streams experience, but I think I did an amazing job 🫡!

The focus of the project is broader adoption, prioritizing easy-to-access UI/UX; bleeding-edge features like workflows are a low priority. Still, advanced per-model capabilities and fine-tuning are expected to be elegantly supported via the model's interface. #48

A super efficient and powerful, yet friendly LLM Chat system, featuring:

  • Business-ready: supports a hosted provider whose billing you control.
  • Supports other add-in BYOK providers, like OpenAI, OpenRouter,...
  • Seamless authentication integration with the main backend.
  • Beautiful syntax highlighting 🌈.
  • Thread branching, freezing, and sharing.
  • Real-time, multi-agent, multi-user support ¹.
    • Invite your families and friends, and play with the Agents together in real-time.
    • Or maybe invite your colleagues, and brainstorm together with the help and power of AI.
  • Resumable and multi-streams ¹.
    • Ask follow-up questions while the previous one isn't done; the model is able to pick up whatever is currently available 🍳🍳.
    • Multiple users can send messages at the same time 😲😲.
  • Easy and private: guest, anonymous usage supported.
    • Your dad can just join and chat with just a link share 😉, no setup needed.
  • Mobile-friendly.
  • Fully internationalized, with AI-powered translations and smooth switching between languages.
  • Blazingly fast ⚡ with local caching and optimistic updates.
  • Designed to be scalable
    • Things are isolated, and common interfaces are defined and utilized where possible; there are no tightly coupled hacks that prevent future scaling. Things just work, elegantly.

    • Any AI provider that is compatible with the @ai-sdk interface can be added in a few lines of code (see the sketch after this list); I just don't want to bloat the UI by adding all of them.

*1: Currently, the "stream" received when resuming, or by other real-time users in the same thread, is implemented via a custom polling mechanism rather than SSE. This was intentionally chosen for a more minimal infrastructure setup and wider hosting support, so smaller user groups can easily host their own version; it is still very performant and efficient.

  • There is boilerplate code for SSE resume support; you can simply add a pub-sub to the backend and switch to using SSE resume in ChatInterface.
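To give a feel for what "a few lines of code" means for providers, here is a hedged sketch of exposing an OpenAI-compatible endpoint through the @ai-sdk provider interface; where the provider gets registered in this app's model interface is project-specific, and the env variable name is hypothetical:

```ts
// Hedged sketch: any OpenAI-compatible endpoint can be used via @ai-sdk/openai.
import { createOpenAI } from '@ai-sdk/openai'

const myProvider = createOpenAI({
  apiKey: process.env.MY_PROVIDER_API_KEY, // hypothetical env variable
  baseURL: 'https://api.example.com/v1', // any OpenAI-compatible endpoint
})

// `myProvider('some-model-id')` then yields a model usable with
// `streamText` / `generateText` from the `ai` package.
```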

Apps and Libraries

frontend: a Nuxt app, compatible with v4 structure.

  • By default, the frontend's /api/* routes are proxied to the backendUrl.
  • The rpcApi plugin will call the /api/* proxy if they're on the same domain but different ports (e.g. 127.0.0.1).
    • This mimics a production environment where the static frontend and the backend live on the same domain at /api, which is the most efficient configuration for CloudFront + Lambda Function URL, or Cloudflare Workers.

    • If the frontend and backend are on different domains, the backend will be called directly, without the proxy.
    • This can be configured in the frontend's app.config.ts (see the sketch after this list).
  • By default, the Convex app is not enabled in development; to enable it, change the root dev script from dev:noConvex to dev:full.
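A sketch of what such a setting in apps/frontend/app.config.ts might look like; the exact keys should be verified against the template, backendUrl is taken from the proxy note above:

```ts
// apps/frontend/app.config.ts — a sketch; exact keys may differ.
export default defineAppConfig({
  backendUrl: 'https://api.example.com', // called directly when on a different domain
})
```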

Local packages

  • @local/locales: a shared central locales/i18n data library powered by spreadsheet-i18n.
    • 🌐✨🤖 AUTOMATIC localization with AI, powered by lingo.dev, just pnpm run i18n.
    • 🔄️ Hot-reload and automatic-reload supported, changes are reflected in apps (frontend, backend) instantly.
  • @local/common: a shared library that can contain constants, functions, types.
  • @local/common-vue: a shared library that can contain components, constants, functions, types for vue-based apps.
  • tsconfig: tsconfig.jsons used throughout the monorepo.
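Workspace packages resolve like regular dependencies, so usage in an app is an ordinary import; the exported names below are hypothetical:

```ts
// Hypothetical imports — the actual export names live in each package.
import { someSharedHelper } from '@local/common'
import { SomeSharedComponent } from '@local/common-vue'
```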

Utilities

This Turborepo has some additional tools already set up for you:

  • 🧐 ESLint + stylistic formatting rules (antfu)
  • 📚 A few more goodies like:
    • lint-staged pre-commit hook
    • 🤖 Initialization prompt for AI Agents to modify the monorepo according to your needs.
      • To start, open the chat with your AI Agent, and include the INIT_PROMPT.md file in your prompt.

Build

To build all apps and packages, run the following command:
pnpm run build

Develop

To develop all apps and packages, run the following command:
pnpm run dev

For local development environment variables / secrets, create a copy of .env.dev as .env.dev.local.

  • The AI Agent will help you create the .env.dev.local files if you use the AI initialization prompt.
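For example:

cp .env.dev .env.dev.local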

Deploy

  • Some more deployment notes:
    • To enable deployment with Convex in production, simply rename the _deploy script to deploy in the backend-convex app, run the deploy script once manually to get Convex's production URL, then set it as the NUXT_PUBLIC_CONVEX_URL env variable in the frontend's .env.prod file or as a CI / build machine env variable.
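Once the production deployment exists, the frontend env entry looks like the following; the exact URL is printed by the Convex deploy, the value below is a placeholder:

NUXT_PUBLIC_CONVEX_URL=https://<your-deployment>.convex.cloud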

Notes

import ordering

Imports should not be separated by empty lines, and should be sorted automatically by ESLint.

Dev with SSL

The project comes with a localcert SSL certificate at locals/common/dev to enable HTTPS for local development, generated with mkcert. You can install mkcert, generate your own certificate, and replace it, or install localcert.crt into your trusted CAs to remove the untrusted-SSL warning.
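If you generate your own, the mkcert flow is roughly as follows (the hostnames are examples, pick whatever your dev setup uses):

mkcert -install
mkcert localhost 127.0.0.1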

Remote Caching

Turborepo can use a technique known as Remote Caching to share cache artifacts across machines, enabling you to share build caches with your team and CI/CD pipelines.

By default, Turborepo will cache locally. To enable Remote Caching you will need an account with Vercel. If you don't have an account you can create one, then enter the following commands:

npx turbo login

This will authenticate the Turborepo CLI with your Vercel account.

Next, you can link your Turborepo to your Remote Cache by running the following command from the root of your Turborepo:

npx turbo link

Useful Links

Learn more about the power of Turborepo in the official documentation at turbo.build.
