Update from task bf424451-b0dd-4673-9379-10b865be8617#2

Merged
lof310 merged 1 commit into main from library-documentation-update-e8617 on Mar 29, 2026

Update from task bf424451-b0dd-4673-9379-10b865be8617#2
lof310 merged 1 commit intomainfrom
library-documentation-update-e8617

Conversation

@lof310 (Owner) commented Mar 29, 2026

This PR was created by qwen-chat coder for task bf424451-b0dd-4673-9379-10b865be8617.

Key features implemented:
- Updated TransformerConfig to support per-layer configurations for attention, FFN, normalization, and positional encoding types
- Added resolve_layer_config utility function to handle per-layer module instantiation
- Enhanced TransformerBlock to initialize different module types per layer based on configuration lists
- Implemented Vision Transformer (ViT) compatibility with patch embedding, CLS token, and positional embedding support
- Updated API documentation to include new positional embedding types (ALiBi, PartialRoPE)
- Modified Transformer main class to accept patch_size and img_size parameters for image processing
- Updated .gitignore for cleaner file exclusion patterns
- Incremented package version to 0.5.0 with new imports in __init__.py
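The `resolve_layer_config` utility mentioned above could plausibly work by broadcasting a single spec to every layer while validating explicit per-layer lists. The following is a minimal sketch of that idea; the function signature and broadcast semantics are assumptions for illustration, not the package's actual implementation.

```python
def resolve_layer_config(value, num_layers):
    """Resolve a config entry into one spec per layer.

    `value` may be a single spec (e.g. the string "alibi") that is
    broadcast to every layer, or a list/tuple giving one spec per layer,
    whose length must match `num_layers`.
    """
    if isinstance(value, (list, tuple)):
        if len(value) != num_layers:
            raise ValueError(
                f"expected {num_layers} per-layer entries, got {len(value)}"
            )
        return list(value)
    # Single spec: apply it uniformly across all layers.
    return [value] * num_layers
```

With this shape, `resolve_layer_config("rope", 3)` yields `["rope", "rope", "rope"]`, while `resolve_layer_config(["rope", "alibi", "rope"], 3)` passes the list through unchanged.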

The changes enable flexible per-layer architectural variations and extend the transformer to support multimodal inputs including images.
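To make the ViT front-end concrete, here is a small NumPy sketch of non-overlapping patch extraction, the step that turns an image into a token sequence before linear projection and CLS-token prepending. The `patchify` helper and its layout are hypothetical, used only to illustrate how `patch_size` and `img_size` relate to sequence length.

```python
import numpy as np

def patchify(img, patch_size):
    """Split a (C, H, W) image into flattened non-overlapping patches.

    Returns an array of shape (num_patches, C * patch_size * patch_size),
    i.e. the raw patch tokens a ViT embedding layer would then project
    to the model dimension.
    """
    c, h, w = img.shape
    assert h % patch_size == 0 and w % patch_size == 0, "image must tile evenly"
    nh, nw = h // patch_size, w // patch_size
    # Carve height and width into (nh, patch_size) and (nw, patch_size) blocks,
    # then reorder so each patch becomes one contiguous row.
    patches = img.reshape(c, nh, patch_size, nw, patch_size)
    patches = patches.transpose(1, 3, 0, 2, 4).reshape(nh * nw, -1)
    return patches
```

For example, an 8x8 RGB image with `patch_size=4` yields 4 patches of 48 values each, so the transformer would see a sequence of 4 patch tokens (plus the CLS token).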
@lof310 lof310 merged commit c95f1d7 into main Mar 29, 2026
@lof310 lof310 deleted the library-documentation-update-e8617 branch March 29, 2026 18:00
