A knowledge navigator for AI research papers that uses natural language processing to interpret search queries and find relevant papers from arXiv and Semantic Scholar.
Screenshot of the AI Research Navigator showing search results with the LLM's interpretation
Screenshot showing the detailed search interface with natural language query processing
- Natural Language Search: Use everyday language to search for research papers
- LLM-Powered Query Understanding: Leverages Ollama's local LLM to interpret search queries
- Multiple Data Sources: Searches both arXiv and Semantic Scholar
- Comprehensive Results: Fetches up to 100 results from each source with pagination
- Paper Details: View abstracts, authors, affiliations, and download links
- Graceful Fallback: Works even without LLM by using regex-based query parsing
- Frontend: Next.js with TypeScript and React
- Styling: Tailwind CSS
- LLM Integration: Ollama (local LLM, defaults to llama3)
- API Integration:
- ArXivTool from beeai-framework for arXiv access
- Custom implementation for Semantic Scholar API
- Query Processing: Natural language processing with local LLM
- Node.js 18+ and npm
- Ollama (optional, for enhanced query processing)
1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/knowledge-navigator-tool.git
   cd knowledge-navigator-tool
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. (Optional) Install Ollama for enhanced query processing:

   - Download and install Ollama
   - Pull the llama3 model:

     ```bash
     ollama pull llama3
     ```

4. Start the development server:

   ```bash
   npm run dev
   ```

5. Open http://localhost:3000 in your browser

Note: If using Ollama, ensure it's running in the background before starting the app.
1. Query Processing:
   - User enters a natural language query
   - If Ollama is available, the query is sent to the local LLM for interpretation
   - The LLM extracts structured information (author, affiliation, topic, etc.)
   - If Ollama is not available, a fallback regex-based parser extracts information
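The regex fallback can be quite small. A minimal TypeScript sketch of such a parser — the `ParsedQuery` shape, patterns, and filler-word list are illustrative assumptions, not the project's actual implementation:

```typescript
// Hypothetical fallback parser used when no local LLM is reachable.
interface ParsedQuery {
  author?: string;
  topic: string;
}

function parseQueryFallback(query: string): ParsedQuery {
  // Capture a "by <Capitalized Name>" clause as the author, if present.
  const authorMatch = query.match(/\bby\s+([A-Z][a-z]+(?:\s+[A-Z][a-z]+)*)/);
  // Strip the author clause and common filler words to approximate the topic.
  const topic = query
    .replace(/\bby\s+[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*/, "")
    .replace(/\b(papers?|about|on|find|search for)\b/gi, "")
    .replace(/\s+/g, " ")
    .trim();
  return { author: authorMatch?.[1], topic };
}
```

Regex parsing is inherently brittle compared to the LLM path (it misses lowercase names, affiliations, and paraphrases), which is why it is positioned as a graceful fallback rather than the primary parser.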
2. Search Execution:
   - The processed query is used to search arXiv and Semantic Scholar
   - Results are combined, deduplicated, and sorted by relevance
   - Ethiopian affiliations are detected when present
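The combine-and-deduplicate step can be sketched as follows, assuming papers from both sources are matched on a normalized title and carry a relevance score (the `Paper` shape and field names are assumptions for illustration):

```typescript
// Illustrative merge of results from the two sources, deduplicating
// by normalized title and sorting by relevance.
interface Paper {
  title: string;
  source: "arxiv" | "semanticscholar";
  relevance: number;
}

function mergeResults(arxiv: Paper[], s2: Paper[]): Paper[] {
  const seen = new Map<string, Paper>();
  for (const paper of [...arxiv, ...s2]) {
    // Normalize so minor case/punctuation differences still match.
    const key = paper.title.toLowerCase().replace(/[^a-z0-9]/g, "");
    if (!seen.has(key)) seen.set(key, paper); // keep the first occurrence
  }
  // Highest relevance first.
  return [...seen.values()].sort((a, b) => b.relevance - a.relevance);
}
```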
3. Results Display:
   - Papers are displayed with title, authors, and affiliations
   - Users can expand entries to view abstracts and access links
   - Pagination allows browsing through large result sets
The application uses Ollama to run LLMs locally for query processing:
- Default Model: llama3
- Host: http://localhost:11434
- Purpose: Interprets natural language queries and extracts structured search parameters
- Fallback: If Ollama is not available, the application uses regex-based parsing
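A hedged sketch of what this call might look like, using Ollama's documented `/api/generate` endpoint with `format: "json"` to request well-formed JSON; the extraction prompt and output schema here are illustrative assumptions:

```typescript
// Sketch: ask a local Ollama instance to turn a natural-language query
// into structured search parameters.
const OLLAMA_HOST = "http://localhost:11434";

function buildOllamaRequest(query: string) {
  return {
    model: "llama3",
    prompt:
      `Extract author, affiliation, and topic from this research query ` +
      `and reply with JSON only: "${query}"`,
    stream: false,     // get a single response instead of a token stream
    format: "json",    // ask Ollama to constrain output to valid JSON
  };
}

async function interpretQuery(query: string): Promise<unknown> {
  const res = await fetch(`${OLLAMA_HOST}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildOllamaRequest(query)),
  });
  const data = await res.json();
  // Ollama wraps the model's text in the `response` field.
  return JSON.parse(data.response);
}
```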
Uses the ArXivTool from beeai-framework to search for papers on arXiv.
Uses a custom implementation to search for papers on Semantic Scholar.
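For reference, a minimal sketch of querying the public Semantic Scholar Graph API (`/graph/v1/paper/search`), including the `offset`/`limit` parameters used for pagination; the simplified result handling is an assumption about the project's code:

```typescript
// Sketch: paginated search against the Semantic Scholar Graph API.
const S2_BASE = "https://api.semanticscholar.org/graph/v1/paper/search";

function buildSemanticScholarUrl(topic: string, offset = 0, limit = 100): string {
  const params = new URLSearchParams({
    query: topic,
    fields: "title,abstract,authors,externalIds",
    offset: String(offset),
    limit: String(limit),
  });
  return `${S2_BASE}?${params.toString()}`;
}

async function searchSemanticScholar(topic: string, offset = 0) {
  const res = await fetch(buildSemanticScholarUrl(topic, offset));
  if (!res.ok) throw new Error(`Semantic Scholar error: ${res.status}`);
  const json = await res.json();
  return json.data; // results are returned under the `data` field
}
```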
- Change LLM Model: Edit the model name in `src/lib/queryProcessor.ts`
- Adjust Result Limits: Modify the `maxResults` parameters in the tool implementations
- Add More Sources: Implement additional tools following the pattern in `src/lib/tools/`
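As a sketch of what a new source tool might look like, here is a hypothetical Crossref tool; the `SearchTool` interface and `toPaperResult` helper are assumed shapes for illustration, not the project's actual API, though the Crossref endpoint and response layout are the public ones:

```typescript
// Assumed shape of a pluggable search tool.
interface SearchTool {
  name: string;
  search(query: string, maxResults: number): Promise<PaperResult[]>;
}

interface PaperResult {
  title: string;
  authors: string[];
  url: string;
}

// Map one Crossref work record to the common result shape.
function toPaperResult(item: any): PaperResult {
  return {
    title: item.title?.[0] ?? "(untitled)",
    authors: (item.author ?? []).map((a: any) => `${a.given} ${a.family}`),
    url: item.URL,
  };
}

// Hypothetical additional source following the same pattern.
class CrossrefTool implements SearchTool {
  name = "crossref";

  async search(query: string, maxResults: number): Promise<PaperResult[]> {
    const url =
      `https://api.crossref.org/works?query=${encodeURIComponent(query)}` +
      `&rows=${maxResults}`;
    const res = await fetch(url);
    const json = await res.json();
    // Crossref nests results under message.items.
    return json.message.items.map(toPaperResult);
  }
}
```

A new tool written this way can be merged into the existing result pipeline alongside the arXiv and Semantic Scholar results.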
- beeai-framework for the ArXivTool
- Ollama for local LLM capabilities
- arXiv and Semantic Scholar for research paper data