Google page crawler problems #7
Thanks for reporting this @Humbi1992. Many of the methods in this Agent plugin are just wrappers around the jenssegers/agent tool. Further up the chain, its `isRobot` method relies on JayBizzle/Crawler-Detect. This Agent plugin requires the latest stable version of jenssegers/agent, so first make sure you are up to date. You can check whether each of your installed packages is up to date using these commands:
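Assuming a standard Composer-managed install, the usual commands for checking and updating the dependency would be:

```shell
# Show the installed version of the underlying agent library
composer show jenssegers/agent

# List all installed packages that have newer versions available
composer outdated

# Update just the agent library and its dependencies
composer update jenssegers/agent --with-dependencies
```

If `composer outdated` lists jenssegers/agent or jaybizzle/crawler-detect, updating may pull in a newer crawler database that already recognises the agent in question.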
I'm going to see if I can replicate your circumstance though, as the Google page crawler seems like an obvious thing Crawler-Detect should be able to detect. As it stands, this is just a long-winded way of me saying I'm probably not going to be able to help. Sorry. You may need to raise an issue with Crawler-Detect instead, asking them to include the "Google Page Crawler" user agent in their database; it may have been updated by Google. Obviously, if I'm misunderstanding something, please let me know. Happy to look into it.
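For anyone wanting to verify where the detection actually fails, the `isRobot` call ultimately ends up in JayBizzle's CrawlerDetect library. A minimal standalone check (assuming jaybizzle/crawler-detect is installed via Composer) looks like this; the UA string below is the standard Googlebot one, not the exact string from this report:

```php
<?php
require 'vendor/autoload.php';

use Jaybizzle\CrawlerDetect\CrawlerDetect;

$detect = new CrawlerDetect();

// isCrawler() matches the supplied UA string against the library's
// crawler database; true for a Googlebot UA.
var_dump($detect->isCrawler(
    'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)'
));
```

Running this against the exact user agent string your logs show for the "Google Page Crawler" would confirm whether the gap is in Crawler-Detect's database or in this plugin's wrapper.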
- Composer dependency versions

### Added
- [Issue 7](#7) - You can add bespoke user agents to a config file, which will allow the `check` method to pass. This will help in circumstances where bots using uncommon agent strings were failing for one reason or another.

### Fixed
- [Issue 6](#6) - Empty user agents now return false.
- In some cases where the user agent version was 0 but the user agent name was correct, the `check` method would return false. Likewise, when the version was 0 but the user agent name was incorrect, the `check` method would return true. This mismatch of rules is now working as expected.
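The bespoke user agent feature described above would typically be driven by a plugin config override. The sketch below assumes a Craft-style `config/agent.php` file and a hypothetical `bespokeUserAgents` key; check the plugin's README for the actual file location and setting name:

```php
<?php
// config/agent.php — hypothetical override file for the Agent plugin.
// The 'bespokeUserAgents' key name is an assumption, not confirmed
// against the plugin's actual config schema.
return [
    // User agent strings (or substrings) that should always pass
    // the plugin's "check" method, even if the underlying
    // jenssegers/agent detection misses them.
    'bespokeUserAgents' => [
        'Google Page Crawler',
    ],
];
```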
Hi @Humbi1992, you can now add user agent exceptions to resolve the problem you were having. The query for 'ie' returning true when it shouldn't have was related to the same bug. All fixed now.
Good Morning
We use your plugin to detect whether a user visits our page with the Internet Explorer browser.
To check this, we do the following:
{% if craft.agent.check('ie') and not craft.agent.isRobot() %}
{% include '_partials/not-supported.twig' %}
{% else %}
{% include '_partials/navigation.twig' %}
{% block body %}
{% endblock %}
{% include '_partials/footer.twig' %}
{% endif %}
This works fine for the Internet Explorer detection, but the Google page crawler gets into this if-clause too, so the page is not indexed by Google.
How can we fix this problem?
Please let me know if you need further details.
Kind regards