Added the vision_transformer.py file which fixes the issue #13326 #13331
base: master
Conversation
Automated review generated by algorithms-keeper. If there's any problem regarding this review, please open an issue about it.
algorithms-keeper commands and options

algorithms-keeper actions can be triggered by commenting on this PR:

- @algorithms-keeper review: trigger the checks for only the added pull request files
- @algorithms-keeper review-all: trigger the checks for all the pull request files, including the modified files. As we cannot post review comments on lines not part of the diff, this command will post all the messages in one comment.

NOTE: Commands are in beta, so this feature is restricted to members or owners of the organization.
computer_vision/vision_tranformer.py
Outdated
        embed_dim (int): Dimension of embedding
    """

    def __init__(self, img_size: int = 224, patch_size: int = 16, in_channels: int = 3, embed_dim: int = 768):
Please provide a return type hint for the function __init__. If the function does not return a value, please provide the type hint as: def function() -> None:
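The requested fix is mechanical. A minimal sketch of the annotation, using a simplified stand-in class rather than the PR's actual module (no torch dependency; the class body here is illustrative, not the PR's code), could look like:

```python
class PatchEmbedding:
    """Simplified stand-in for the PR's patch-embedding class."""

    # __init__ always returns None, so the requested hint is `-> None`
    def __init__(self, img_size: int = 224, patch_size: int = 16) -> None:
        self.img_size = img_size
        self.patch_size = patch_size
```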
            stride=patch_size
        )

    def forward(self, x: Tensor) -> Tensor:
As there is no test file in this pull request nor any test function or class in the file computer_vision/vision_tranformer.py, please provide a doctest for the function forward. Please provide a descriptive name for the parameter: x.
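Both requests can be illustrated together. The helper below is hypothetical (it does not appear in the PR): it uses descriptive parameter names and carries a doctest of the kind the bot asks for, based on the ViT patch-count arithmetic:

```python
def num_patches(image_size: int, patch_size: int) -> int:
    """Count the non-overlapping patches that tile a square image,
    as in the ViT patch-embedding step.

    >>> num_patches(224, 16)
    196
    >>> num_patches(32, 16)
    4
    """
    return (image_size // patch_size) ** 2
```

For the actual forward method, the same pattern applies: rename x to something like input_tensor and show an input/output shape pair in the docstring.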
        dropout (float): Dropout rate
    """

    def __init__(self, embed_dim: int = 768, num_heads: int = 12, dropout: float = 0.0):
Please provide a return type hint for the function __init__. If the function does not return a value, please provide the type hint as: def function() -> None:
        self.proj = nn.Linear(embed_dim, embed_dim)
        self.proj_dropout = nn.Dropout(dropout)

    def forward(self, x: Tensor) -> Tensor:
As there is no test file in this pull request nor any test function or class in the file computer_vision/vision_tranformer.py, please provide a doctest for the function forward. Please provide a descriptive name for the parameter: x.
computer_vision/vision_tranformer.py
Outdated
        dropout (float): Dropout rate
    """

    def __init__(self, embed_dim: int = 768, mlp_ratio: float = 4.0, dropout: float = 0.0):
Please provide a return type hint for the function __init__. If the function does not return a value, please provide the type hint as: def function() -> None:
        # Initialize weights
        self._init_weights()

    def _init_weights(self):
As there is no test file in this pull request nor any test function or class in the file computer_vision/vision_tranformer.py, please provide a doctest for the function _init_weights. Please provide a return type hint for the function _init_weights. If the function does not return a value, please provide the type hint as: def function() -> None:
        # Initialize linear layers
        self.apply(self._init_linear_weights)

    def _init_linear_weights(self, module):
As there is no test file in this pull request nor any test function or class in the file computer_vision/vision_tranformer.py, please provide a doctest for the function _init_linear_weights. Please provide a return type hint for the function _init_linear_weights. If the function does not return a value, please provide the type hint as: def function() -> None: Please provide a type hint for the parameter: module.
        if module.bias is not None:
            nn.init.zeros_(module.bias)

    def forward(self, x: Tensor) -> Tensor:
As there is no test file in this pull request nor any test function or class in the file computer_vision/vision_tranformer.py, please provide a doctest for the function forward. Please provide a descriptive name for the parameter: x.
        return x


def create_vit_model(
As there is no test file in this pull request nor any test function or class in the file computer_vision/vision_tranformer.py, please provide a doctest for the function create_vit_model.
def count_parameters(model: nn.Module) -> int:
As there is no test file in this pull request nor any test function or class in the file computer_vision/vision_tranformer.py, please provide a doctest for the function count_parameters.
for more information, see https://pre-commit.ci
I have resolved all the issues. Can you please merge my PR, which addresses issue #13326, and assign me to it under Hacktoberfest 2025?
Describe your change:
I have added the vision_transformer.py file requested in issue #13326: Add Vision Transformer code for image classification. Can you assign me to this under Hacktoberfest 2025?
Checklist: