NeuroSearch is an open-source AI search engine that combines the power of large language models with real-time web search to provide accurate, cited answers. Built with Next.js and powered by Groq's ultra-fast inference, it delivers streaming responses with source attribution.
💡 Want to learn how to build this? Check out the official tutorial!
- 🔍 Real-time Web Search - Leverages Exa.ai for high-quality, relevant search results
- ⚡ Ultra-fast Inference - Powered by Groq's LPU technology for near-instant responses
- 📝 Streaming Responses - Answers stream in real-time for a smooth user experience
- 📚 Source Attribution - Displays top search results alongside AI-generated answers
- 🔄 Related Questions - Automatically suggests follow-up questions based on your query
- 📊 Observability - Integrated with Helicone for monitoring and analytics
- 🎨 Modern UI - Clean, responsive interface built with Tailwind CSS
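On the client, streaming comes down to reading the response body incrementally as chunks arrive. A minimal sketch of that pattern, assuming a plain Fetch API response (`readStream` and `onChunk` are illustrative names, not the app's actual identifiers):

```typescript
// Read a streamed text response chunk by chunk, invoking a callback
// for each decoded piece so the UI can render text as it arrives.
async function readStream(
  response: Response,
  onChunk: (text: string) => void
): Promise<string> {
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let full = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const text = decoder.decode(value, { stream: true });
    full += text;
    onChunk(text);
  }
  return full;
}
```

In practice the Vercel AI SDK handles this wiring, but the underlying mechanism is the same `ReadableStream` reader loop.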
| Category | Technology |
|---|---|
| Framework | Next.js 16 (App Router, Turbopack) |
| UI Library | React 19 |
| Styling | Tailwind CSS 3.4 |
| Language | TypeScript 5 |
| LLM Inference | Groq AI |
| LLM Models | OpenAI gpt-oss-120b & Qwen qwen3-32b |
| Search API | Exa.ai |
| AI SDK | Vercel AI SDK 5 |
| Observability | Helicone |
| Analytics | Plausible |
```mermaid
graph LR
    A[User Question] --> B[Exa Search API]
    B --> C[Top 9 Results]
    C --> D[Process Top 5 Sources]
    D --> E[LLM Context Window]
    A --> E
    E --> F[gpt-oss-120b]
    F --> G[Streaming Answer]
    D --> H[qwen3-32b]
    H --> I[3 Related Questions]
```
- Search - User's question is sent to Exa.ai to retrieve the top 9 relevant results
- Process - Text is extracted from the top 5 sources to keep the combined context within token limits
- Generate - Combined context and question are sent to gpt-oss-120b for answer generation
- Stream - Response is streamed back to the user in real-time
- Suggest - Qwen qwen3-32b generates 3 related questions using the top 2 sources
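The Process step above can be sketched as a small helper that trims the result set and assembles the LLM context. The names (`SearchResult`, `buildContext`) and the character-based cap are illustrative assumptions, not the app's actual code:

```typescript
interface SearchResult {
  title: string;
  url: string;
  text: string;
}

// Keep only the top N sources and cap each one's extracted text so the
// combined context stays within the model's token budget (a rough
// character-based cap stands in for real tokenization here).
function buildContext(
  results: SearchResult[],
  maxSources = 5,
  maxCharsPerSource = 4000
): string {
  return results
    .slice(0, maxSources)
    .map(
      (r, i) =>
        `[${i + 1}] ${r.title} (${r.url})\n${r.text.slice(0, maxCharsPerSource)}`
    )
    .join("\n\n");
}
```

The numbered `[1]`…`[5]` prefixes give the model stable handles for citing sources in its answer.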
| Service | Purpose | Sign Up |
|---|---|---|
| Groq | LLM inference | Get API Key |
| Exa | Search API | Get API Key |
| Helicone | Observability | Get API Key |
- Clone the repository

  ```bash
  git clone https://github.com/yourusername/neurosearch.git
  cd neurosearch
  ```

- Install dependencies

  ```bash
  npm install
  # or
  yarn install
  ```

- Configure environment variables

  Create a `.env.local` file in the root directory:

  ```bash
  GROQ_API_KEY=your_groq_api_key
  EXA_API_KEY=your_exa_api_key
  HELICONE_API_KEY=your_helicone_api_key
  ```
Note: Helicone integration is automatic. All Groq requests are routed through Helicone for observability—no additional configuration needed.
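As an illustration of that routing, the client options might be assembled roughly as below. Treat the proxy base URL and header name as assumptions to verify against Helicone's Groq integration docs; `heliconeGroqOptions` is a hypothetical helper:

```typescript
// Build OpenAI-compatible client options that route Groq traffic
// through Helicone's proxy. Env var names match .env.local above.
function heliconeGroqOptions(env: Record<string, string | undefined>) {
  return {
    baseURL: "https://groq.helicone.ai/openai/v1", // assumed proxy URL
    apiKey: env.GROQ_API_KEY,
    defaultHeaders: {
      "Helicone-Auth": `Bearer ${env.HELICONE_API_KEY}`, // assumed header
    },
  };
}
```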
- Start the development server

  ```bash
  npm run dev
  ```

- Open in browser

  Navigate to [http://localhost:3000](http://localhost:3000)
```
neurosearch/
├── app/
│   ├── api/                     # API routes
│   │   ├── getAnswer/           # Answer generation endpoint
│   │   ├── getSimilarQuestions/ # Related questions endpoint
│   │   └── getSources/          # Source fetching endpoint
│   ├── globals.css              # Global styles
│   ├── layout.tsx               # Root layout
│   └── page.tsx                 # Main page
├── components/                  # React components
│   ├── Answer.tsx               # Answer display
│   ├── Hero.tsx                 # Landing section
│   ├── InputArea.tsx            # Search input
│   ├── Sources.tsx              # Source cards
│   └── ...
├── public/                      # Static assets
├── utils/                       # Utilities & types
└── ...
```
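The getSources route could look roughly like the following sketch. It assumes Exa's REST search endpoint (`POST https://api.exa.ai/search` with an `x-api-key` header); the project may use the exa-js SDK instead, so treat the request shape as illustrative:

```typescript
// Build the JSON body for an Exa search request: top 9 results,
// with extracted page text included so sources can feed the LLM context.
function buildExaRequest(query: string, numResults = 9) {
  return {
    query,
    numResults,
    contents: { text: true },
  };
}

// Hypothetical route handler (app/api/getSources/route.ts).
export async function POST(req: Request): Promise<Response> {
  const { question } = await req.json();
  const exaRes = await fetch("https://api.exa.ai/search", {
    method: "POST",
    headers: {
      "content-type": "application/json",
      "x-api-key": process.env.EXA_API_KEY ?? "",
    },
    body: JSON.stringify(buildExaRequest(question)),
  });
  const { results } = await exaRes.json();
  return Response.json(results);
}
```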
- Smart Tokenization - Implement tokenizer to optimize source content within token limits
- Regenerate Feature - Allow users to regenerate answers
- Enhanced Citations - Numbered in-text citations with source linking
- Share Answers - Generate shareable links for search results
- Auto-scroll - Smooth scrolling during streaming (especially mobile)
- SPA Navigation - Migrate to page-based routing to fix hard refresh issues
- Caching & Rate Limiting - Integrate Upstash Redis
- Advanced RAG - Implement keyword search and question rephrasing
- Authentication - Add Clerk auth with PostgreSQL/Prisma for user sessions
This project was inspired by:
- Perplexity - AI-powered answer engine
- You.com - AI search with citations
- Lepton Search - Open-source AI search
This project is licensed under the MIT License - see the LICENSE file for details.
MontaCoder
- GitHub: @montacoder
⭐ Star this repo if you found it helpful!
Made with ❤️ and Groq