ENG-859: Fix SDK binary generation and permissions #2261


Open
wants to merge 47 commits into base: next

Conversation

devin-ai-integration[bot] (Contributor)

ENG-859: Fix SDK binary generation and permissions

This PR fixes an issue where binary files referenced in package.json were not generated during the build process, causing pnpm to fail to create binary symlinks in node_modules/.bin.

Changes

  1. For integrations.do:

    • Modified tsup.config.ts to preserve shebang line with banner configuration
  2. For sdk.do:

    • Added postbuild script to set executable permissions on dist/bin.js
    • Fixed tsconfig.json configuration to properly compile bin.js
    • Updated index.ts to export bin.js
  3. For workflows.do:

    • Added postbuild script to set executable permissions on dist/index.js
  4. For standardize-sdk-packages.js:

    • Extended script to handle binary files for all SDKs, not just apis.do
    • Improved binary file detection and permission setting
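For the integrations.do change, preserving the shebang via tsup's `banner` option might look like the sketch below. The entry point name and output format are assumptions for illustration, not taken from the PR; only the `banner` mechanism is what the change description names.

```typescript
// tsup.config.ts — hypothetical sketch. tsup strips a source file's shebang
// during bundling, so the banner re-emits it at the top of the output,
// keeping dist/bin.js directly executable as a CLI entry point.
import { defineConfig } from 'tsup'

export default defineConfig({
  entry: ['src/bin.ts'], // assumed entry point
  format: ['esm'],       // assumed output format
  banner: {
    js: '#!/usr/bin/env node',
  },
})
```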

Testing

Verified that binary files are properly generated and have executable permissions by:

  • Building all three SDKs
  • Installing them as dependencies in a test project
  • Confirming binary symlinks are correctly created in node_modules/.bin/
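The verification steps above can be sketched as a standalone Node script. This is a minimal simulation, not the PR's actual test setup: it writes a stand-in bin file, applies the executable bit the way a `postbuild` chmod step would, and creates a symlink the way pnpm links into node_modules/.bin (executable bits apply on POSIX systems; the file and link names are hypothetical).

```typescript
import { chmodSync, lstatSync, mkdtempSync, symlinkSync, writeFileSync } from 'node:fs'
import { tmpdir } from 'node:os'
import { join } from 'node:path'

const dir = mkdtempSync(join(tmpdir(), 'bin-check-'))

// Stand-in for dist/bin.js: shebang line first, as the tsup banner preserves it.
const bin = join(dir, 'bin.js')
writeFileSync(bin, '#!/usr/bin/env node\nconsole.log("ok")\n')
chmodSync(bin, 0o755) // what a postbuild `chmod +x dist/bin.js` step does

// Stand-in for the node_modules/.bin symlink pnpm creates.
const link = join(dir, 'my-cli')
symlinkSync(bin, link)

const isExecutable = (lstatSync(bin).mode & 0o111) !== 0
const isSymlink = lstatSync(link).isSymbolicLink()
console.log(isExecutable, isSymlink)
```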

Link to Devin run: https://app.devin.ai/sessions/dbb5ba15f1ba4ca0813ef95bb6f7941d

Requested by: Nathan Clevenger (nateclev@gmail.com)

AggressivelyMeows and others added 30 commits May 3, 2025 19:24
… route

- Improved response format handling based on model output schema.
- Added model parsing logic to dynamically adjust response format for OpenAI and Google models.
- Refactored schema alteration logic to ensure proper type inference and handling of properties.
- Updated language model types to remove unnecessary provider endpoint flexibility.
- Changed the LLM provider base URL to point to the testing endpoint.
- Improved response handling by adjusting conditions for output schema and response format.
- Added logic to handle JSON output schema more effectively.
Co-Authored-By: Samuel Lippert <samuel@driv.ly>
- Added `@vitest/ui` dependency to `sdks/llm.do/package.json`.
- Enhanced model handling in `app/(apis)/llm/chat/completions/route.ts` by integrating `convertIncomingSchema`.
- Updated model definitions in `pkgs/language-models/src/models.js` and `pkgs/language-models/src/models.d.ts` to include new models and properties.
- Improved test coverage in `sdks/llm.do/test/provider.test.ts` for structured outputs and tool interactions.
- Cleaned up console logs and adjusted test descriptions for clarity.
- Introduced `LLMCompatibleRequest` type to support additional model options in `app/(apis)/llm/chat/completions/route.ts`.
- Updated error handling with a new `ErrorResponse` function for better response management.
- Modified message processing to ensure compatibility with the AI SDK by adjusting message roles.
- Enhanced model retrieval logic to incorporate model options in `getModel`.
- Improved test coverage in `sdks/llm.do/test/provider.test.ts` for new functionalities and error conditions.
- Cleaned up unused action constants in `sdks/actions.do/src/constants.ts`.
- Updated TypeScript configurations and parser logic for better model filtering.
AggressivelyMeows and others added 17 commits May 14, 2025 20:19
- Refactored error handling in `app/(apis)/llm/chat/completions/route.ts` to utilize a try-catch block for improved error management.
- Introduced `AIToolRedirectError` type in `pkgs/ai-providers/src/ai.ts` to handle missing app connections with detailed error responses.
- Updated model handling logic to support both streaming and non-streaming responses, ensuring compatibility with various request formats.
- Adjusted model definitions in `pkgs/language-models/src/models.js` to reflect updated throughput and latency metrics for several models.
- Added a new model, "DeepSeek: DeepSeek V3 0324," with comprehensive details for integration into the system.
- Commented out the `createHooksQueuePlugin` in `payload.config.ts` for future reference.
- Enhanced the LLM processing in `app/(apis)/llm/chat/completions/route.ts` by adding an `onTool` callback for analytics tracking of tool usage.
- Improved error handling in the LLM processing to provide more informative responses for missing app connections.
- Updated model definitions in `pkgs/language-models/src/models.js` to reflect changes in throughput and latency metrics, including the addition of new models and adjustments to existing ones.
- Removed outdated model entries from the changelog in `pkgs/language-models/changelog.md`.
- Improved the model writing process in `pkgs/language-models/generate/build-models.ts` to check for existing models before writing, allowing for better tracking of added and removed models.
- Updated model definitions in `pkgs/language-models/src/models.js` to reflect changes in throughput and latency metrics, ensuring accuracy in performance data.
- Cleared the `ACTION_NAMES` array in `sdks/actions.do/src/constants.ts` to reset action constants for future use.
- Adjusted type handling in `sdks/llm.do/src/index.ts` for better compatibility with request initialization.
- Updated the AI provider logic in `pkgs/ai-providers/src/ai.ts` to improve tool handling and connection management, ensuring better user experience when accessing tools.
- Enhanced error handling in `app/(apis)/llm/chat/completions/route.ts` to provide clearer feedback for missing models.
- Refined model definitions in `pkgs/language-models/src/models.js` to correct throughput and latency metrics, ensuring accurate performance data.
- Adjusted model resolution in `pkgs/ai-providers/src/provider.ts` to accommodate new options for better configuration management.
- Updated model slugs and descriptions for clarity and consistency across the application.
…onents

- Updated the model initialization in `app/(apis)/llm/chat/completions/route.ts` to include model options for improved configuration.
- Modified the chat component in `app/(sites)/sites/gpt.do/components/chat.tsx` to pass model options, specifically for tool management.
- Adjusted the AI provider logic in `pkgs/ai-providers/src/ai.ts` to utilize model options during model resolution.
- Added logging in `pkgs/ai-providers/src/provider.ts` to track resolved models and their options for better debugging.
…p (ENG-858)

Co-Authored-By: Nathan Clevenger <nateclev@gmail.com>
fix: Add defensive coding and error handling to domain processing loop (ENG-858)
Co-Authored-By: Nathan Clevenger <nateclev@gmail.com>

linear bot commented May 16, 2025

ENG-859 Fix SDK errors


vercel bot commented May 16, 2025

The latest updates on your projects:

  • ai: ❌ Failed (updated May 16, 2025 9:09am UTC)

devin-ai-integration[bot] (Contributor, Author)

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring
