Implement image resize search with bisect #2056
Conversation
Add optional client-side image compression for LLM API size limits:
**New Property**:
- `maxSizeBytes?: number` - Optional max size before compression
- Only compresses if set and file exceeds limit
- Falls back to original if compression fails
**Compression Strategy**:
- Uses Canvas API (OffscreenCanvas for performance)
- Tries 9 progressive size/quality combinations:
- 2048px @ 85% quality → 800px @ 50% quality
- Stops when under size limit
- Logs compression results for debugging
**Implementation**:
- `_compressImage()`: Robust compression with fallback
- Preserves original filename
- Updates size metadata to reflect compressed size
- JPEG output for broad compatibility
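For orientation, a single resize-and-encode pass along these lines might look like the sketch below. The `encodeAtSettings` name, scaling math, and error handling are illustrative, not the component's actual `_compressImage()`:

```typescript
// Sketch: one resize-and-encode pass with OffscreenCanvas, preserving the
// original filename. Names and details are illustrative.
async function encodeAtSettings(
  original: File,
  maxDim: number,
  quality: number,
): Promise<File> {
  const bitmap = await createImageBitmap(original);
  const scale = Math.min(1, maxDim / Math.max(bitmap.width, bitmap.height));
  const width = Math.max(1, Math.round(bitmap.width * scale));
  const height = Math.max(1, Math.round(bitmap.height * scale));

  const canvas = new OffscreenCanvas(width, height);
  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2D canvas context unavailable");
  ctx.drawImage(bitmap, 0, 0, width, height);
  bitmap.close();

  // JPEG output for broad compatibility.
  const blob = await canvas.convertToBlob({ type: "image/jpeg", quality });
  return new File([blob], original.name, { type: "image/jpeg" });
}
```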
**Use Case**:
The Anthropic vision API has a 5MB limit per image. Setting `maxSizeBytes={4_500_000}` ensures images are compressed automatically before upload.
**Example**:
```typescript
<ct-image-input
maxSizeBytes={4500000}
multiple
maxImages={5}
buttonText="📷 Scan Signs"
/>
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Update JSX type definitions for new compression property
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Replace linear scan through 9 predefined quality/dimension combinations with an intelligent binary search approach:
- Binary search on quality values (0.5-0.95) for each dimension level
- Tries dimensions in descending order: 2048, 1600, 1200, 800
- Returns as soon as optimal compression is found
- More efficient: typically 3-4 compressions per dimension vs 9 total
- Better quality: finds optimal quality dynamically instead of using predefined values

This reduces compression time while maintaining or improving output quality.
Move the binary search image compression algorithm from ct-image-input component into a reusable utility module:
- Created packages/ui/src/v2/utils/image-compression.ts with:
  - compressImage() function with configurable options
  - formatFileSize() helper function
  - CompressionResult interface with detailed metadata
  - CompressionOptions interface for customization

Benefits:
- Reusability: Other components can now use image compression
- Testability: Logic can be tested independently
- Separation of concerns: UI component focuses on presentation
- Maintainability: Algorithm improvements benefit all consumers
- Type safety: Proper TypeScript interfaces and return types

The ct-image-input component now delegates to the utility while maintaining the same compression behavior and logging.
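As a rough sketch of what that surface might look like (only `compressImage`, `formatFileSize`, `CompressionOptions`, `CompressionResult`, and `maxSizeBytes` come from the commit message; the individual option/result fields and size thresholds below are assumptions):

```typescript
// Sketch of the utility's public surface. Field names other than
// maxSizeBytes, and the formatting thresholds, are illustrative assumptions.
export interface CompressionOptions {
  maxSizeBytes: number;      // size budget, e.g. 4_500_000
  dimensions?: number[];     // tried in descending order, e.g. [2048, 1600, 1200, 800]
  minQuality?: number;       // e.g. 0.5
  maxQuality?: number;       // e.g. 0.95
  qualityTolerance?: number; // stop bisecting when the interval shrinks below this
}

export interface CompressionResult {
  file: File;                // compressed file, original filename preserved
  quality: number;           // JPEG quality that was selected
  maxDimension: number;      // dimension level that succeeded
  originalSize: number;      // bytes before compression
  compressedSize: number;    // bytes after compression
}

export function formatFileSize(bytes: number): string {
  if (bytes < 1024) return `${bytes} B`;
  if (bytes < 1024 * 1024) return `${(bytes / 1024).toFixed(1)} KB`;
  return `${(bytes / (1024 * 1024)).toFixed(1)} MB`;
}
```

A caller like `ct-image-input` would then invoke something along the lines of `compressImage(file, { maxSizeBytes })` and keep the original file if compression fails, matching the fallback behavior described in the PR description.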
1 issue found across 4 files
Prompt for AI agents (1 issue)
Understand the root cause of the following issue and fix it.
<file name="packages/ui/src/v2/utils/image-compression.ts">
<violation number="1" location="packages/ui/src/v2/utils/image-compression.ts:124">
The quality search is inverted: we return null if the high-quality blob is too large and we move `high` down on success, so we never probe lower JPEG qualities to satisfy the size budget and always converge on the worst quality/dimension.</violation>
</file>
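For reference, a "highest quality that fits" bisect over the 0.5–0.95 range would look roughly like the sketch below. It reuses the `compressAtSettings`, `low`/`high`/`mid`, and `bestBlob`/`bestQuality` names that appear later in this thread, but it is a sketch of the conventional direction, not the PR's code:

```typescript
// Sketch only: the conventional direction for a size-budget bisect.
// `compressAtSettings(maxDim, quality)` stands for the canvas encode step;
// defaults mirror the 0.5-0.95 range from the commit message.
async function bisectQuality(
  compressAtSettings: (maxDim: number, quality: number) => Promise<Blob>,
  maxDim: number,
  maxSizeBytes: number,
  minQuality = 0.5,
  maxQuality = 0.95,
  qualityTolerance = 0.05,
): Promise<{ blob: Blob; quality: number } | null> {
  let low = minQuality;
  let high = maxQuality;
  let bestBlob: Blob | null = null;
  let bestQuality = minQuality;

  while (high - low > qualityTolerance) {
    const mid = (low + high) / 2;
    const blob = await compressAtSettings(maxDim, mid);
    if (blob.size <= maxSizeBytes) {
      // Fits the budget: remember it and probe higher quality.
      bestBlob = blob;
      bestQuality = mid;
      low = mid;
    } else {
      // Too large: probe lower quality (more compression).
      high = mid;
    }
  }

  if (bestBlob) return { blob: bestBlob, quality: bestQuality };

  // No midpoint fit (or the interval started within tolerance): probe the
  // endpoints, preferring the higher quality.
  for (const q of [high, low]) {
    const blob = await compressAtSettings(maxDim, q);
    if (blob.size <= maxSizeBytes) return { blob, quality: q };
  }
  return null; // caller falls back to the next smaller dimension level
}
```

Inverting those branches converges on the worst quality/dimension instead, which is the behavior the reviewer describes.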
1 issue found across 1 file (reviewed changes from recent commits).
Prompt for AI agents (1 issue)
Understand the root cause of the following issue and fix it.
<file name="packages/ui/src/v2/utils/image-compression.ts">
<violation number="1" location="packages/ui/src/v2/utils/image-compression.ts:138">
The updated comment claims we "try going higher" after a successful compression, but the code still sets `high = mid;`, which moves toward lower quality (more compression). Please adjust the comment to reflect the actual behavior or update the code accordingly.</violation>
</file>
packages/ui/src/v2/utils/image-compression.ts (context):

```typescript
const blob = await compressAtSettings(maxDim, mid);

if (blob.size <= maxSizeBytes) {
  // This quality works, try going higher (less compression)
```
The updated comment claims we "try going higher" after a successful compression, but the code still sets `high = mid;`, which moves toward lower quality (more compression). Please adjust the comment to reflect the actual behavior or update the code accordingly.
Prompt for AI agents
Address the following comment on packages/ui/src/v2/utils/image-compression.ts at line 138:
<comment>The updated comment claims we "try going higher" after a successful compression, but the code still sets `high = mid;`, which moves toward lower quality (more compression). Please adjust the comment to reflect the actual behavior or update the code accordingly.</comment>
<file context>
@@ -135,12 +135,12 @@ export async function compressImage(
if (blob.size <= maxSizeBytes) {
- // This quality works, try lower quality
+ // This quality works, try going higher (less compression)
bestBlob = blob;
bestQuality = mid;
</file context>
Suggested change:

```diff
- // This quality works, try going higher (less compression)
+ // This quality works, try lower quality (more compression)
```
✅ Addressed in 8550b2a
1 issue found across 1 file (reviewed changes from recent commits).
Prompt for AI agents (1 issue)
Understand the root cause of the following issue and fix it.
<file name="packages/ui/src/v2/utils/image-compression.ts">
<violation number="1" location="packages/ui/src/v2/utils/image-compression.ts:154">
This fallback now returns the minimum quality when no midpoints succeed, so configurations where maxQuality - minQuality <= qualityTolerance will always return minQuality even though higher qualities may satisfy the size constraint.</violation>
</file>
packages/ui/src/v2/utils/image-compression.ts (context):

```typescript
}

// Otherwise, try with the low quality setting one more time
const finalBlob = await compressAtSettings(maxDim, low);
```
This fallback now returns the minimum quality when no midpoints succeed, so configurations where maxQuality - minQuality <= qualityTolerance will always return minQuality even though higher qualities may satisfy the size constraint.
Prompt for AI agents
Address the following comment on packages/ui/src/v2/utils/image-compression.ts at line 154:
<comment>This fallback now returns the minimum quality when no midpoints succeed, so configurations where maxQuality - minQuality <= qualityTolerance will always return minQuality even though higher qualities may satisfy the size constraint.</comment>
<file context>
@@ -150,10 +150,10 @@ export async function compressImage(
- // Otherwise, try with the high quality setting one more time
- const finalBlob = await compressAtSettings(maxDim, high);
+ // Otherwise, try with the low quality setting one more time
+ const finalBlob = await compressAtSettings(maxDim, low);
if (finalBlob.size <= maxSizeBytes) {
- return { blob: finalBlob, quality: high };
</file context>
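To make the flagged edge case concrete, the hypothetical configuration below never enters the bisect loop at all, so a fallback that only re-encodes at the low endpoint settles for the minimum quality. This assumes a `compressImage(file, options)` signature and the option names from the earlier sketch:

```typescript
import { compressImage } from "./image-compression";

// Hypothetical options illustrating the reviewer's edge case: the quality
// interval (0.95 - 0.90) equals the tolerance, so the loop condition
// `high - low > qualityTolerance` is false from the start and no midpoints
// are probed. A fallback that only re-encodes at the low endpoint then
// always reports quality 0.9, even when an encode at 0.95 would already
// fit under maxSizeBytes.
async function compressForUpload(file: File) {
  return compressImage(file, {
    maxSizeBytes: 4_500_000,
    minQuality: 0.9,
    maxQuality: 0.95,
    qualityTolerance: 0.05,
  });
}
```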
✅ Addressed in 59ac38f
No issues found across 1 file
* Add maxSizeBytes auto-compression to ct-image-input
Add optional client-side image compression for LLM API size limits:
**New Property**:
- `maxSizeBytes?: number` - Optional max size before compression
- Only compresses if set and file exceeds limit
- Falls back to original if compression fails
**Compression Strategy**:
- Uses Canvas API (OffscreenCanvas for performance)
- Tries 9 progressive size/quality combinations:
- 2048px @ 85% quality → 800px @ 50% quality
- Stops when under size limit
- Logs compression results for debugging
**Implementation**:
- `_compressImage()`: Robust compression with fallback
- Preserves original filename
- Updates size metadata to reflect compressed size
- JPEG output for broad compatibility
**Use Case**:
The Anthropic vision API has a 5MB limit per image. Setting `maxSizeBytes={4_500_000}` ensures images are compressed automatically before upload.
**Example**:
```typescript
<ct-image-input
maxSizeBytes={4500000}
multiple
maxImages={5}
buttonText="📷 Scan Signs"
/>
```
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Add maxSizeBytes to CTImageInputAttributes types
Update JSX type definitions for new compression property
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Optimize image compression with binary search algorithm
Replace linear scan through 9 predefined quality/dimension combinations
with an intelligent binary search approach:
- Binary search on quality values (0.5-0.95) for each dimension level
- Tries dimensions in descending order: 2048, 1600, 1200, 800
- Returns as soon as optimal compression is found
- More efficient: typically 3-4 compressions per dimension vs 9 total
- Better quality: finds optimal quality dynamically instead of using
predefined values
This reduces compression time while maintaining or improving output quality.
* Refactor: Extract image compression logic into utility module
Move the binary search image compression algorithm from ct-image-input
component into a reusable utility module:
- Created packages/ui/src/v2/utils/image-compression.ts with:
- compressImage() function with configurable options
- formatFileSize() helper function
- CompressionResult interface with detailed metadata
- CompressionOptions interface for customization
Benefits:
- Reusability: Other components can now use image compression
- Testability: Logic can be tested independently
- Separation of concerns: UI component focuses on presentation
- Maintainability: Algorithm improvements benefit all consumers
- Type safety: Proper TypeScript interfaces and return types
The ct-image-input component now delegates to the utility while
maintaining the same compression behavior and logging.
* Fix lint
* Fix logic so it works at runtime
* Trim console.log output
* Clarify intended logic
* Fix logic
* Respond to feedback
---------
Co-authored-by: Alex Komoroske <jkomoros@gmail.com>
Co-authored-by: Claude <noreply@anthropic.com>
Summary by cubic
Adds optional client-side image compression to ct-image-input using a binary search over quality and progressive resize to keep images under a max size. Introduces a new maxSizeBytes attribute (default 5MB) and updates the UI to show formatted, compressed sizes.
Written for commit 59ac38f.