Add DeepSeek and LM Studio providers, reasoning capabilities, and UI improvements #654
Closed
haddadrm wants to merge 12 commits into ItzCrazyKns:master
Conversation
This PR adds several improvements to Perplexica:
- Integrate DeepSeek and LM Studio AI providers
- Add message processing utilities for improved handling
- Implement a reasoning panel for message actions
- Add logging functionality to the UI
- Update configurations and dependencies
Refactored the message handling and added a configurable delay feature:
1. Created AlternatingMessageValidator (renamed from MessageProcessor):
- Focused on handling alternating message patterns
- Made it model-agnostic with a configuration-driven approach
- Kept the core validation logic intact
2. Created ReasoningChatModel (renamed from DeepSeekChat):
- Made it generic for any model with reasoning/thinking capabilities
- Added a configurable streaming delay parameter (streamDelay)
- Implemented the delay logic in the streaming process
3. Updated the DeepSeek provider:
- Now uses ReasoningChatModel for deepseek-reasoner with a 50ms delay
- Uses the standard ChatOpenAI for deepseek-chat
- Added a clear distinction between models that need reasoning capabilities
4. Updated references in metaSearchAgent.ts:
- Changed the import from messageProcessor to alternatingMessageValidator
- Updated function calls to use the new validator
The configurable delay lets you control the speed of token generation, which can help with the issue you were seeing. The delay is set to 20ms by default for the deepseek-reasoner model, but you can adjust this value in the deepseek.ts provider file to find the optimal speed. This refactoring maintains all the existing functionality while making the code more maintainable and future-proof. The separation of concerns between message validation and model implementation will make it easier to add support for other models with similar requirements.
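The delay logic in the streaming process can be pictured as a small wrapper around token emission. A minimal sketch, assuming a token stream is available as an array — the function name and shape here are illustrative, not the actual ReasoningChatModel code:

```typescript
// Illustrative sketch only: emit tokens with a configurable pause between
// them, mirroring the streamDelay behaviour described above.
async function* streamWithDelay(
  tokens: string[],
  streamDelay: number, // milliseconds between token emissions; 0 disables the delay
): AsyncGenerator<string> {
  for (const token of tokens) {
    if (streamDelay > 0) {
      await new Promise<void>((resolve) => setTimeout(resolve, streamDelay));
    }
    yield token;
  }
}
```

A higher streamDelay slows the visible token rate without changing what is emitted, which is why it can be tuned freely in config.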
1. Search functionality:
- Added a search box with a search icon and "Search your threads..." placeholder
- Real-time filtering of threads as you type
- Clear button (X) when text is entered
2. Thread count display:
- Added "You have X threads in Perplexica" below the search box
- Only shows in normal mode (hidden during selection)
3. Multiple-delete functionality:
- "Select" button in the top right below the search box
- Checkboxes that appear on hover and when in selection mode
- Selection mode header showing count and actions
- When in selection mode, shows "X selected thread(s)" on the left
- Action buttons (Select all, Cancel, Delete Selected) on the right
- Delete Selected button disabled when no threads are selected
- Confirmation dialog using the new BatchDeleteChats component
4. Terminology update: changed all instances of "chats" to "threads" throughout the interface
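The real-time filtering described above could look roughly like this — the Thread shape and function name are assumptions for illustration, not Perplexica's actual code:

```typescript
// Hypothetical sketch of real-time thread filtering: an empty query shows
// every thread; otherwise match the title case-insensitively.
interface Thread {
  id: string;
  title: string;
}

function filterThreads(threads: Thread[], query: string): Thread[] {
  const q = query.trim().toLowerCase();
  if (q === "") return threads; // empty search box shows all threads
  return threads.filter((t) => t.title.toLowerCase().includes(q));
}
```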
Enhanced the Discover section with personalization features and category navigation.
1. Backend enhancements
1.1. Database schema updates
- Added a user preferences table to store user category preferences
- Set the default preferences to AI and Technology
1.2. Category-based search
- Created a comprehensive category system with specialized search queries for each category
- Implemented 12 categories: AI, Technology, Current News, Sports, Money, Gaming, Weather, Entertainment, Art & Culture, Science, Health, and Travel
- Each category searches relevant websites with appropriate keywords
- Updated the search sources for each category with more reputable websites
1.3. New API endpoints
- Enhanced the main /discover endpoint to support category filtering and preference-based content
- Added /discover/preferences endpoints for getting and saving user preferences
2. Frontend improvements
2.1. Category navigation bar
- Added a horizontal scrollable category bar at the top of the Discover page
- The active category is highlighted with the primary color, with smooth scrolling animation via right/left buttons
- The "For You" category shows personalized content based on saved preferences
2.2. Personalization feature
- Added a Settings button in the top-right corner
- Implemented a personalization modal that allows users to select their preferred categories
- Implemented a checkbox grid for 12 major languages, letting users select multiple preferred languages for the results
- Updated the backend to filter search results by the selected language
- Preferences are saved to the backend and persist between sessions
2.3. UI enhancements
- Improved layout with better spacing and transitions
- Added hover effects for better interactivity
- Ensured the design is responsive across different screen sizes
How it works
- Users can click on category tabs to view news specific to that category
- The "For You" tab shows a personalized feed based on the user's saved preferences
- Users can customize their preferences by clicking the Settings icon and selecting categories and preferred language(s)
- When preferences are saved, the "For You" feed automatically updates to reflect them
These improvements make the Discover section more engaging and personalized, allowing users to easily find content that interests them across a wide range of categories.
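The preference-based "For You" filtering could be sketched as below. The Preferences and Article shapes are assumptions made for illustration; the actual schema lives in the PR's backend code:

```typescript
// Hypothetical sketch of "For You" filtering: keep articles that match a
// saved category and, when languages are selected, a saved language.
interface Preferences {
  categories: string[];
  languages: string[];
}

interface Article {
  title: string;
  category: string;
  language: string;
}

function forYouFeed(articles: Article[], prefs: Preferences): Article[] {
  return articles.filter(
    (a) =>
      prefs.categories.includes(a.category) &&
      // an empty language selection means "any language"
      (prefs.languages.length === 0 || prefs.languages.includes(a.language)),
  );
}
```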
Additional tweaks
Restructured the Discover page to prevent the entire page from refreshing when selecting categories or updating settings.
1. Component separation
Split the page into three main components:
- DiscoverHeader: contains the title, settings button, and category navigation
- DiscoverContent: contains the grid of articles with its own loading state
- PreferencesModal: manages the settings modal with temporary state
2. Optimized rendering
- Used React.memo for all components to prevent unnecessary re-renders
- Each component only receives the props it needs
- The header remains stable while only the content area updates
3. Improved loading states
3.1. Added separate loading states:
- Initial loading for the first page load
- Content-only loading when changing categories or preferences
- Loading spinners now only appear in the content area when changing categories
3.2. Better state management
- Main state is managed in the parent component
- The modal uses temporary state that only updates the main state after saving
- Clear separation of concerns between components
These changes create a more polished user experience where the header and sidebar remain stable while only the content area refreshes when needed. The page now feels more responsive and app-like, rather than refreshing entirely on every interaction.
Made the stream delay configurable for the reasoning models using the ReasoningChatModel custom class.
1. Added the STREAM_DELAY parameter to the sample.config.toml file:
[MODELS.DEEPSEEK]
API_KEY = ""
STREAM_DELAY = 20 # Milliseconds between token emissions for reasoning models (higher = slower, 0 = no delay)
2. Updated the Config interface in src/config.ts to include the new parameter:
DEEPSEEK: {
  API_KEY: string;
  STREAM_DELAY: number;
};
3. Added a getter function in src/config.ts to retrieve the configured value:
export const getDeepseekStreamDelay = () =>
  loadConfig().MODELS.DEEPSEEK.STREAM_DELAY || 20; // Default to 20ms if not specified
4. Updated the deepseek.ts provider to use the configured stream delay:
const streamDelay = getDeepseekStreamDelay();
logger.debug(`Using stream delay of ${streamDelay}ms for ${model.id}`);

// Then using it in the model configuration
model: new ReasoningChatModel({
  // ...other params
  streamDelay,
}),
5. This implementation provides several benefits:
- User-configurable: users can now adjust the stream delay without modifying code
- Descriptive naming: the parameter name STREAM_DELAY clearly indicates its purpose
- Documented: the comment in the config file explains what the parameter does
- Fallback default: if not specified, it defaults to 20ms
- Logging: added debug logging to show the configured value when loading models
To adjust the stream delay, users can simply modify the STREAM_DELAY value in their config.toml file. Higher values slow down token generation (making it easier to read in real time), while lower values speed it up. Setting it to 0 disables the delay entirely.
Properly formatted provider names in the dropdown menus:
1. Created a formatProviderName utility function in ui/lib/utils.ts that:
- Contains a comprehensive mapping of provider keys to their properly formatted display names
- Handles current providers like "openai" → "OpenAI" and "lm_studio" → "LM Studio"
- Includes future-proofing for many additional providers like NVIDIA, OpenRouter, Mistral AI, etc.
- Provides a fallback formatting mechanism for any unknown providers (replacing underscores with spaces and capitalizing each word)
2. Updated both dropdown menus in the settings page to use this function:
- The Chat Model Provider dropdown now displays properly formatted names
- The Embedding Model Provider dropdown uses the same formatting
This is a purely aesthetic change: the app now shows provider names with the capitalization and spacing of their official branding, such as "OpenAI", "LM Studio", and "DeepSeek" instead of "Openai", "Lm_studio", and "Deepseek". The internal values and functionality remain unchanged since only the display labels were modified.
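A minimal sketch of such a utility, assuming the mapping-plus-fallback design described above (the sample mapping is abbreviated; the real function in ui/lib/utils.ts covers many more providers):

```typescript
// Sample mapping of provider keys to branded display names.
const PROVIDER_DISPLAY_NAMES: Record<string, string> = {
  openai: "OpenAI",
  lm_studio: "LM Studio",
  deepseek: "DeepSeek",
};

function formatProviderName(key: string): string {
  if (PROVIDER_DISPLAY_NAMES[key]) return PROVIDER_DISPLAY_NAMES[key];
  // Fallback: replace underscores with spaces and capitalize each word.
  return key
    .split("_")
    .map((word) => word.charAt(0).toUpperCase() + word.slice(1))
    .join(" ");
}
```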
Removed duplicate files: deepseekChat.ts in favor of reasoningChatModel.ts, and messageProcessor.ts in favor of alternatingMessageValidator.ts.
- Removed src/lib/deepseekChat.ts as it was duplicative; all functionality is now handled by reasoningChatModel.ts, and no imports or references to deepseekChat.ts remain in the codebase
- Removed src/utils/messageProcessor.ts as it was duplicative; all functionality is now handled by alternatingMessageValidator.ts, and no imports or references to messageProcessor.ts remain in the codebase
Owner
Hey man, really sorry, I was busy merging the backend and frontend. Can you update your branch with the latest changes from mine? Once that's done, we can merge it.
Contributor
Author
Hey, sorry, been busy myself. Conflicts all resolved; I'll open a new PR and I hope you'll get around to checking it before more conflicts pop up 🤞 Reasoning for DeepSeek & Claude in a later PR though.