
Deploying on shinyapps.io when crawlers are blocked via robots.txt #11

Open
gadenbuie opened this issue May 26, 2020 · 0 comments
Labels: priority: low (Low priority), status: blocked (Blocked by another issue or problem)

Comments

@gadenbuie (Owner)

shinyapps.io uses a root-level, non-configurable robots.txt file to block all web crawlers. Any web crawlers that respect the robots.txt specification will not show a preview for your app. This is why preview cards won't appear as expected on Twitter.
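For reference, a blanket crawler block in robots.txt takes roughly this form (a sketch of the typical rule; the exact contents of the shinyapps.io file may differ):

```
User-agent: *
Disallow: /
```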

There are good reasons for this default behavior. Primarily, blocking web crawlers ensures that your app isn't started up (incurring costs and hours against your quota) just to render a preview card. On the other hand, most social media sites implement some sort of caching mechanism, so allowing preview crawlers would likely result in only occasional app start-ups.

At the moment, I am not aware of any way to configure or override this robots.txt file. It is served globally at the root level of <user_name>.shinyapps.io, so an app-specific robots.txt stored at www/robots.txt has no effect.
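If you want to verify this for your own account, you can fetch the file from the account root. A minimal sketch in Python, where `example-user` stands in for your shinyapps.io account name:

```python
import urllib.request

# The robots.txt is served at the account root, not per app,
# so this one file applies to every app deployed under the account.
url = "https://example-user.shinyapps.io/robots.txt"
with urllib.request.urlopen(url) as resp:
    print(resp.read().decode("utf-8"))
```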

I'll update this issue if these settings become configurable, or if anyone reports a work-around.
