Adding a robots.txt to stop search engines from crawling #1

wants to merge 2 commits into from

2 participants


We don't want search engines to crawl outdated docs. After we turn off the docs on YDN, we want the YUI3 docs to rise to the top of the results. This robots.txt file helps prevent the old YUI 2.x docs from showing up in search results.


/cc @ericf Need this to prevent the archived docs from being crawled by search engines.


I don't think this is a good idea. If people are still using YUI 2, then we want them to be able to find the documentation. I'd prefer that we solve this problem by placing a banner across the top of all the archived docs HTML pages.


Ok, btw the banner across the top of the archived docs will be ready to go soon. yui/yui2#14

@triptych closed this
Showing with 2 additions and 0 deletions.
robots.txt (+2 −0)
@@ -0,0 +1,2 @@
+User-agent: *
+Disallow: /yui2/
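
The two lines above ask every compliant crawler (`User-agent: *`) to skip anything under the `/yui2/` path while leaving the rest of the site crawlable. A minimal sketch of how a standards-following crawler would interpret these directives, using Python's standard `urllib.robotparser` (the doc paths below are hypothetical examples, not actual YUI URLs):

```python
import urllib.robotparser

# Parse the same two directives this PR adds to robots.txt.
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",     # the rule applies to every crawler
    "Disallow: /yui2/",  # block everything under /yui2/
])

# Anything under /yui2/ is disallowed for all user agents...
print(rp.can_fetch("*", "/yui2/docs/index.html"))  # False
# ...while paths outside /yui2/ remain crawlable.
print(rp.can_fetch("*", "/yui3/docs/index.html"))  # True
```

Note that robots.txt is advisory: well-behaved crawlers honor it, but it does not remove already-indexed pages, which is part of why a visible deprecation banner was preferred here.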