
Feat/hardware awareness#33

Merged
bernoussama merged 10 commits into main from feat/hardware-awareness
May 30, 2025

Conversation

@bernoussama
Owner

@bernoussama bernoussama commented May 30, 2025

This pull request introduces several enhancements and new features, focusing on hardware detection capabilities, support for OpenAI-compatible providers, and improved async patterns documentation. Additionally, dependencies were updated to support new functionality. Below is a breakdown of the most important changes grouped by theme.

Hardware Detection Enhancements:

  • Added getHardwareInfo and getDetailedHardwareInfo functions in src/helpers/hardware.ts to provide detailed hardware information, including CPU, memory, architecture, and GPU details. GPU detection leverages the systeminformation library with fallback mechanisms for unsupported platforms.
  • Updated getSystemInfo in src/lib/ai.ts to include hardware information for Linux systems, integrating the new getHardwareInfo function.

OpenAI-Compatible Provider Support:

  • Introduced a new provider, openaiCompatible, in src/lib/config.ts with support for custom base URLs and optional API key authentication.
  • Added logic in src/lib/ai.ts to support OpenAI-compatible models, including default configurations, benchmarking, and API key handling. [1] [2]
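As a rough sketch of the optional-API-key handling: the env var name `OPENAI_COMPATIBLE_API_KEY` and the localhost default come from the PR text, but the helper itself is hypothetical.

```typescript
interface OpenAICompatibleSettings {
  baseUrl: string;
  apiKey?: string;
}

// Local servers typically need no key; hosted OpenAI-compatible services may.
// `env` is injectable so the logic is testable without touching process.env.
function resolveOpenAICompatibleSettings(
  baseUrl: string = 'http://localhost:8000/v1',
  env: Record<string, string | undefined> = process.env,
): OpenAICompatibleSettings {
  const apiKey = env['OPENAI_COMPATIBLE_API_KEY'];
  return apiKey ? { baseUrl, apiKey } : { baseUrl };
}
```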

Async Patterns Documentation:

  • Created a new document, docs/async-patterns.md, detailing best practices for handling async functions in TypeScript, including recommended patterns like async/await and fire-and-forget, as well as less desirable approaches like .then().
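The patterns that document contrasts can be illustrated along these lines (a generic sketch, not the document's exact examples):

```typescript
// Recommended: async/await with explicit error handling at the call site.
async function loadWithFallback(load: () => Promise<string>): Promise<string> {
  try {
    return await load();
  } catch {
    return 'fallback';
  }
}

// Fire-and-forget: start background work without awaiting it, but attach a
// .catch so a rejection never becomes an unhandled promise rejection.
function fireAndForget(task: () => Promise<void>): void {
  void task().catch((err) => {
    console.error('background task failed:', err);
  });
}
```

Bare `.then()` chains are the discouraged third option: they scatter error handling and make control flow harder to follow than either pattern above.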

Dependency Updates:

  • Added the systeminformation package to package.json and pnpm-lock.yaml to enable GPU detection functionality. [1] [2]

Minor Improvements:

  • Updated the version in package.json from 1.0.10 to 1.0.11-0 to reflect the introduction of new features.

Summary by CodeRabbit

  • New Features

    • Added support for OpenAI-compatible API endpoints with customizable base URLs and optional API key handling.
    • Introduced system hardware detection, providing detailed CPU, memory, architecture, and GPU information for enhanced system insights.
  • Documentation

    • Added new guides covering async programming patterns in TypeScript and cross-platform hardware detection usage and best practices.
  • Chores

    • Updated dependencies and version number to reflect recent changes.

  • …dd getHardwareInfo() function to detect CPU, memory, architecture, and GPU - Use systeminformation library for detailed GPU information including vendor, model, and VRAM - Implement graceful fallbacks when systeminformation is not available - Add proper TypeScript support with HardwareInfo interface - Include comprehensive error handling and cross-platform support - Add documentation for hardware detection capabilities
  • …async approach for hardware detection - Add getHardwareInfo() for immediate basic info - Add getDetailedHardwareInfo() for async detailed GPU info - Update SystemInfo interface to include hardware property - Add comprehensive async patterns documentation - Fix TypeScript errors related to Promise<string> vs string mismatch
  • …dwareInfo() and getDetailedHardwareInfo() to async functions for improved GPU info retrieval - Clean up whitespace for better readability
  • …to use async/await for hardware info, improved SystemInfo interface to include detailed hardware properties, and adjusted output formatting for better clarity.
  • …openaiCompatible to SUPPORTED_PROVIDERS with custom base URL support - Update validation logic to handle openaiCompatible as API-key-free provider - Add openaiCompatible case in getModelFromConfig function - Add openaiCompatible to defaultModels, models export, and benchmark models - Configure default model as gpt-3.5-turbo with localhost:8000 baseURL - Supports any OpenAI-compatible API endpoint with custom base URL configuration
  • …d OPENAI_COMPATIBLE_API_KEY environment variable support - Update promptApiKey to handle optional API key with user confirmation - Add API key parameter to createOpenAICompatible configuration - Update editApiKey function to handle openaiCompatible special case - Maintain backward compatibility for local servers without authentication - Support hosted OpenAI-compatible services that require API keys
  • …i.ts and config.ts - Removed unnecessary blank lines and enhanced the conditional output for system information retrieval.
  • …conditional output for system information retrieval by removing unnecessary blank lines while maintaining clarity.
@coderabbitai

coderabbitai bot commented May 30, 2025

Caution

Review failed

The pull request is closed.

Walkthrough

This update introduces documentation on async patterns and hardware detection, adds a hardware info helper, and integrates hardware info into system details. It adds a new "openaiCompatible" provider with optional API key logic, updates configuration handling for this provider, and bumps the package version while adding the "systeminformation" dependency.

Changes

File(s) | Change Summary
--- | ---
docs/async-patterns.md, docs/hardware-detection.md | Added documentation on async patterns in TypeScript and cross-platform hardware detection module usage and design.
src/helpers/hardware.ts | New module: exports async functions to gather CPU, memory, architecture, and GPU info, using "systeminformation" when available.
src/lib/ai.ts | getSystemInfo is now async and includes hardware info; added and integrated "openaiCompatible" provider and model logic.
src/lib/config.ts | Added "openaiCompatible" to supported providers, with optional API key logic and interactive prompts for API key requirement.
src/commands/config.ts | Refined API key editing logic for the "openaiCompatible" provider, with special handling for optional API key scenarios.
package.json | Updated version to "1.0.11-0"; added "systeminformation" dependency.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant CLI
    participant Config
    participant ModelProvider
    participant HardwareHelper

    User->>CLI: Select "openaiCompatible" provider
    CLI->>Config: promptApiKey('openaiCompatible')
    Config->>User: Inform about optional API key
    User->>Config: Confirm if API key is needed
    alt API key required
        Config->>User: Prompt for API key
        User->>Config: Enter API key
        Config->>CLI: Return API key
    else API key not required
        Config->>CLI: Return undefined
    end
    CLI->>ModelProvider: Initialize with config and (optional) API key

    User->>CLI: Request system info
    CLI->>HardwareHelper: getHardwareInfo()
    HardwareHelper->>systeminformation: (if available) get GPU info
    HardwareHelper->>CLI: Return hardware info
    CLI->>User: Display system info with hardware details

Possibly related PRs

  • bernoussama/lazyshell#10: Adds configuration file feature with interactive prompts and persistent storage for AI provider and API key management, related to provider and API key handling.
  • bernoussama/lazyshell#32: Adds support for "lmstudio" provider and improves error handling, related to provider support and configuration logic.

Poem

In the warren of code, a new path appears,
Async and hardware, the future now nears.
With "openaiCompatible", options abound,
Optional keys let new models be found.
🐰 System info hops in, so clever and bright—
This update’s a carrot, and all feels just right!


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b52fcad and 5aeb061.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (7)
  • docs/async-patterns.md (1 hunks)
  • docs/hardware-detection.md (1 hunks)
  • package.json (2 hunks)
  • src/commands/config.ts (1 hunks)
  • src/helpers/hardware.ts (1 hunks)
  • src/lib/ai.ts (9 hunks)
  • src/lib/config.ts (3 hunks)



@gemini-code-assist gemini-code-assist bot left a comment


Hello @bernoussama, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello team, Gemini here with a summary of this pull request titled "Feat/hardware awareness". This PR introduces several significant enhancements, primarily focusing on adding hardware detection capabilities, integrating support for OpenAI-compatible API providers, and providing new documentation on handling asynchronous patterns in TypeScript. It also includes necessary dependency updates to support the new features.

Highlights

  • Hardware Detection: New functions (getHardwareInfo, getDetailedHardwareInfo) have been added to detect CPU, memory, architecture, and GPU details. GPU detection uses the systeminformation library with a fallback if it's not available or fails.
  • OpenAI-Compatible Provider: A new openaiCompatible provider has been introduced, allowing users to connect to any API endpoint that follows the OpenAI API specification. This includes support for custom base URLs and optional API key authentication.
  • Async Patterns Documentation: A new markdown document (docs/async-patterns.md) has been added to guide developers on best practices for working with async functions in TypeScript, covering async/await, dual sync/async versions, and the fire-and-forget pattern.
  • System Info Enhancement: The existing getSystemInfo function has been updated to include the newly detected hardware information, specifically for Linux systems, making this context available for AI prompts.
  • Dependency Update: The systeminformation package has been added as a dependency to enable detailed hardware (especially GPU) detection.

Changelog

  • docs/async-patterns.md
    • Added new documentation explaining different patterns for handling async functions in TypeScript, including recommended approaches (async/await, dual sync/async) and patterns to avoid (.then()).
    • Includes an example related to the new hardware detection feature.
  • docs/hardware-detection.md
    • Added new documentation detailing the hardware detection module.
    • Explains features (CPU, Memory, Arch, GPU), usage examples, GPU detection specifics (using systeminformation and fallback), installation requirements, platform support, error handling, and TypeScript types.
  • package.json
    • Bumped version from 1.0.10 to 1.0.11-0.
    • Added systeminformation package as a dependency (^5.27.1).
  • pnpm-lock.yaml
    • Added lockfile entry for the new systeminformation dependency.
  • src/commands/config.ts
    • Updated the editApiKey function to specifically handle the openaiCompatible provider, clarifying that the API key is optional for this provider and checking for environment variables first.
  • src/helpers/hardware.ts
    • Added a new file containing functions for hardware detection.
    • Implemented getGpuInfo using dynamic import of systeminformation with fallback logic for platforms or failures.
    • Implemented getHardwareInfo and getDetailedHardwareInfo to gather CPU, memory, architecture, and GPU information.
  • src/lib/ai.ts
    • Imported hardware detection functions and types.
    • Added an optional hardware field to the SystemInfo interface.
    • Made getSystemInfo asynchronous to accommodate the async hardware detection call.
    • Integrated getHardwareInfo call into getSystemInfo for Linux systems.
    • Added logic to getModelFromConfig to support the new openaiCompatible provider, handling base URL and API key configuration.
    • Added openaiCompatible to the default model ID mapping.
    • Added openaiCompatible-gpt to the benchmark models list.
    • Updated the systemPrompt to include detailed hardware information (CPU, Memory, GPU) and conditionally include distribution and package manager details.
  • src/lib/config.ts
    • Added openaiCompatible to the SUPPORTED_PROVIDERS list with its configuration details (name, description, env var, default model/base URL, supports custom base URL).
    • Updated validateConfig to consider openaiCompatible as a provider that doesn't strictly require an API key for validation.


Hardware found with code,
Async waits, a heavy load,
AI speaks, now free.


@bernoussama bernoussama merged commit 6a87dc5 into main May 30, 2025
7 of 8 checks passed
@bernoussama bernoussama deleted the feat/hardware-awareness branch May 30, 2025 23:37

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces hardware awareness features, OpenAI-compatible provider support, and documentation for async patterns. The changes are well-structured and enhance the application's capabilities. However, some areas could benefit from refinement to improve maintainability and performance.

Summary of Findings

  • Duplicated Hardware Information Function: The getDetailedHardwareInfo function in src/helpers/hardware.ts duplicates the functionality of getHardwareInfo. Consider removing the duplicate function and calling getHardwareInfo directly.
  • Potential Blocking Module Initialization: The await call at the top level of src/lib/ai.ts can block module initialization. Evaluate whether the hardware information is critical for initial load and consider lazy-loading or deferring the call if not.
  • Improve Hardware Information Display: The template literals in src/lib/ai.ts will output undefined when hardware information is unavailable. Consider providing default values for a cleaner output.
  • Improve Readability of Conditional Checks: The conditional checks in src/commands/config.ts and src/lib/config.ts could be extracted into separate functions or constants to improve readability and maintainability. I did not comment on this for all instances due to review settings.

Merge Readiness

The pull request introduces valuable features and is generally well-implemented. However, the potential blocking module initialization and duplicated hardware information function should be addressed before merging. Additionally, improving the display of hardware information and readability of conditional checks would enhance the overall quality of the code. I am unable to approve this pull request, and recommend that these changes be reviewed and approved by the appropriate team members.

- Distribution: ${osInfo.distro || 'N/A'}
- Package Manager: ${osInfo.packageManager || 'N/A'}
- Hardware:
- CPU: ${osInfo.hardware?.cpu}


high

Using await at the top level of the module can block the module's initialization, potentially impacting startup time. Consider whether fetching hardware info is critical for initial module load, or if it can be deferred or loaded asynchronously after the module is initialized. If the hardware information is not critical for the initial load, consider lazy-loading or deferring the getSystemInfo call.

// Consider lazy-loading or deferring the getSystemInfo call. Example:
let osInfo: SystemInfo | null = null;
async function getOsInfo(): Promise<SystemInfo> {
  if (!osInfo) {
    osInfo = await getSystemInfo();
  }
  return osInfo;
}
// const systemPrompt = dedent`...${(await getOsInfo()).platform}...`;


Comment on lines +137 to +139
// Handle providers that don't support API keys at all
if (!providerInfo.envVar && config.provider !== 'openaiCompatible') {
await print(chalk.yellow(`${providerInfo.name} doesn't require an API key.`));


medium

Consider extracting the condition !providerInfo.envVar && config.provider !== 'openaiCompatible' into a separate function or a well-named constant to improve readability and maintainability. This makes the intent clearer.


Suggested change
- // Handle providers that don't support API keys at all
- if (!providerInfo.envVar && config.provider !== 'openaiCompatible') {
-   await print(chalk.yellow(`${providerInfo.name} doesn't require an API key.`));
+ const providerDoesNotNeedApiKey = (providerInfo: any, provider: string) => !providerInfo.envVar && provider !== 'openaiCompatible';
+ if (providerDoesNotNeedApiKey(providerInfo, config.provider)) {

Comment on lines +73 to +91
// Async version for detailed hardware info including GPU
export async function getDetailedHardwareInfo(): Promise<HardwareInfo> {
try {
const cpu = os.cpus()[0]?.model || 'Unknown CPU';
const memory = `${Math.round(os.totalmem() / 1024 / 1024 / 1024)} GB`;
const arch = os.arch();
const gpu = await getGpuInfo();

return { cpu, memory, arch, gpu };
} catch (error) {
console.error('Error getting detailed hardware info:', error);
return {
cpu: 'Unknown CPU',
memory: 'Unknown Memory',
arch: 'Unknown Architecture',
gpu: 'Unknown GPU',
};
}
}


medium

The getDetailedHardwareInfo function is essentially a duplicate of getHardwareInfo. It would be more efficient to have a single function with a flag to determine the level of detail, or refactor to avoid redundant code. Consider removing this function and calling getHardwareInfo directly.

// Remove the getDetailedHardwareInfo function
// Call getHardwareInfo directly where detailed info is needed
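If the duplicate were removed as suggested, one minimal shape would be to alias the detailed name to the single implementation. This is illustrative only; `getHardwareInfo` below is a stand-in for the real helper in src/helpers/hardware.ts.

```typescript
// Stand-in for the real helper in src/helpers/hardware.ts.
async function getHardwareInfo(): Promise<{ arch: string }> {
  return { arch: process.arch };
}

// Existing importers of getDetailedHardwareInfo keep working,
// with only one body left to maintain.
const getDetailedHardwareInfo = getHardwareInfo;
```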


Comment on lines 282 to +285
return lmstudio(modelId);
},
openaiCompatible: (modelId: string = 'gpt-3.5-turbo', baseUrl: string = 'http://localhost:8000/v1') => {
const openaiCompatible = createOpenAICompatible({


medium

When hardware information is unavailable, the template literals will output undefined for CPU, Memory and GPU. Consider providing a default value (e.g., 'N/A') to display when the hardware properties are null or undefined. This will provide a cleaner output.

- Hardware:
  - CPU: ${osInfo.hardware?.cpu || 'N/A'}
  - Memory: ${osInfo.hardware?.memory || 'N/A'}
  - GPU: ${osInfo.hardware?.gpu || 'N/A'}


Comment on lines +160 to +161
// Ollama, LM Studio, and OpenAI Compatible don't require an API key (can work locally)
if (config.provider === 'ollama' || config.provider === 'lmstudio' || config.provider === 'openaiCompatible') {


medium

Consider extracting the condition config.provider === 'ollama' || config.provider === 'lmstudio' || config.provider === 'openaiCompatible' into a separate function or a well-named constant to improve readability and maintainability. This makes the intent clearer.


Suggested change
- // Ollama, LM Studio, and OpenAI Compatible don't require an API key (can work locally)
- if (config.provider === 'ollama' || config.provider === 'lmstudio' || config.provider === 'openaiCompatible') {
+ const providerDoesNotNeedApiKey = (provider: string) => provider === 'ollama' || provider === 'lmstudio' || provider === 'openaiCompatible';
+ if (providerDoesNotNeedApiKey(config.provider)) {
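The extracted predicate the reviewer suggests could look roughly like this (a sketch; the constant name is an assumption):

```typescript
// Providers that can run against local servers without an API key.
const KEYLESS_PROVIDERS: ReadonlySet<string> = new Set([
  'ollama',
  'lmstudio',
  'openaiCompatible',
]);

function providerRequiresApiKey(provider: string): boolean {
  return !KEYLESS_PROVIDERS.has(provider);
}
```

A set keeps the membership check in one place, so adding the next keyless provider is a one-line change instead of another clause in an if condition.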
