This paper proposes a new architecture, the Holistic Representation Guided Attention Network: a transformer-inspired text recognition model that outperforms SAR in both accuracy and speed.
We should implement this model, but the impressive speed results should be treated with caution (an 8× speedup over SAR): the experiments were run on a GPU, and this model is highly parallelizable (no recurrence). Is the new model as fast on CPU?
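The caveat above can be illustrated with a toy micro-benchmark: a non-recurrent decoder computes all time steps in one batched matmul, while a recurrent decoder (like SAR's) must run the steps sequentially because each hidden state depends on the previous one. The gap between the two depends heavily on how much parallel hardware is available, which is why a GPU speedup may shrink on CPU. This sketch is purely illustrative (NumPy stand-ins, made-up sizes), not the paper's or SAR's actual decoder:

```python
# Illustrative sketch only: toy "parallel" vs "recurrent" decoding in NumPy.
# Sizes, weights, and both decode functions are hypothetical stand-ins,
# not the architectures from the paper.
import time
import numpy as np

T, D = 32, 256  # sequence length and hidden size (illustrative values)
rng = np.random.default_rng(0)
x = rng.standard_normal((T, D))
W = rng.standard_normal((D, D))

def parallel_decode(x, W):
    # All T time steps at once: a single (T, D) @ (D, D) matmul,
    # which the hardware/BLAS can parallelize freely.
    return np.tanh(x @ W)

def recurrent_decode(x, W):
    # One step at a time: step t depends on the hidden state from step t-1,
    # so the T matvecs cannot be batched together.
    h = np.zeros(D)
    out = []
    for t in range(T):
        h = np.tanh(x[t] @ W + h)
        out.append(h)
    return np.stack(out)

t0 = time.perf_counter()
y_par = parallel_decode(x, W)
t_par = time.perf_counter() - t0

t0 = time.perf_counter()
y_rec = recurrent_decode(x, W)
t_rec = time.perf_counter() - t0

print(f"parallel: {t_par * 1e3:.3f} ms, recurrent: {t_rec * 1e3:.3f} ms")
```

Running something like this on both a CPU and a GPU backend (e.g. swapping NumPy for a GPU tensor library) would show how much of the claimed 8× speedup survives without massive parallelism.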