
Adding a robots.txt to stop search engines from crawling yui.github.com/yui2/ #1

Closed
wants to merge 2 commits into from

2 participants

@triptych

We don't want search engines to crawl outdated docs. After we turn off docs on YDN we want the YUI3 docs to rise to the top of the results. This robots.txt file helps prevent old YUI 2.x docs from showing up in search results.

@triptych

/cc @ericf Need this to prevent yui.github.io/yui2 from being crawled by search engines

@ericf
Owner

I don't think this is a good idea. If people are still using YUI 2, then we want them to be able to find the documentation. I'd prefer that we solve this problem by placing a banner across the top of all the archived docs HTML pages.

@triptych

Ok, btw the banner across the top of the archived docs will be ready to go soon. yui/yui2#14

@triptych triptych closed this
Showing with 2 additions and 0 deletions.
  1. +2 −0  robots.txt
2  robots.txt
@@ -0,0 +1,2 @@
+User-agent: *
+Disallow: /yui2/
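
The two rules in the diff above can be checked locally with Python's standard-library `urllib.robotparser`; this is a sketch for verification only (the paths passed to `can_fetch` are illustrative, not taken from the PR):

```python
# Verify the robots.txt rules from the diff using the stdlib parser.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /yui2/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Well-behaved crawlers are blocked from anything under /yui2/ ...
print(parser.can_fetch("*", "/yui2/docs/index.html"))  # False
# ... while the rest of the site remains crawlable.
print(parser.can_fetch("*", "/index.html"))  # True
```

Note that `Disallow` is advisory: it only stops compliant crawlers from fetching those pages, which is why a visible deprecation banner (as @ericf suggests) reaches users that robots.txt cannot.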