Commits on Aug 4, 2014
  1. @alrra

    Change project architecture

    alrra authored
    * Move all the H5BP source files into the `src/` directory and remove
      all external components that can be fetched via `npm`, namely: the
      Apache Server Configs, jQuery, and Normalize.css.
    
    * Add `package.json`, and move to using `npm` for managing dependencies
      (for more information about `npm`, see: https://www.npmjs.org/doc/).
    
    * Add a `gulp`-based build script that allows us to automatically create
      the distribution files, as well as an archive that can then be attached
      to the release - https://github.com/blog/1547-release-your-software
      (for more information about gulp, see: http://gulpjs.com/). A minimal
      sketch of the resulting setup is included after this list.
    
    * Add other miscellaneous files to help us in our development process:
    
       * `.editorconfig` - to define and maintain consistent coding styles
                           http://editorconfig.org/
    
       * `.jshintrc`     - to specify JSHint configuration options
                           http://www.jshint.com/docs/
    
       * `.travis.yml`   - to specify Travis CI configuration options
                           http://docs.travis-ci.com/
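
    Below is a minimal sketch of what such a setup could look like. The
    exact npm package names, version ranges, file contents, and the `build`
    task name are illustrative assumptions, not the contents of the
    committed files.

      package.json (illustrative):

        {
          "name": "html5-boilerplate",
          "version": "0.0.0",
          "devDependencies": {
            "apache-server-configs": "^2.0.0",
            "gulp": "^3.8.0",
            "jquery": "^1.11.0",
            "normalize.css": "^3.0.0"
          }
        }

      gulpfile.js (illustrative, gulp 3 API):

        var gulp = require('gulp');

        // Copy the H5BP source files into the distribution directory;
        // archiving and other release steps would build on top of this.
        gulp.task('build', function () {
            return gulp.src('src/**/*')
                       .pipe(gulp.dest('dist/'));
        });

      .editorconfig (illustrative):

        root = true

        [*]
        charset = utf-8
        end_of_line = lf
        indent_style = space
        indent_size = 4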
    
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    
    These changes:
    
      * automate some of the tedious work (e.g.: updating the external
        components, updating some of the inline content such as version
        numbers, etc.)
    
      * allow us to experiment more (e.g.: create different builds of H5BP,
        builds that can contain different components)
    
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    
    Ref h5bp/html5-boilerplate#1563
    Close h5bp/html5-boilerplate#1563
Commits on Jan 15, 2014
  1. @alrra

    Add `Disallow:` to `robots.txt`

    alrra authored
    The addition of `Disallow:` is made in order to comply with the
    following (a minimal compliant file is sketched after the list below):
    
      * the `robots.txt` specification (http://www.robotstxt.org/), which
        specifies that: "At least one Disallow field needs to be present
        in a record"
      * what is suggested in the documentation of most of the major search
        engines, e.g.:
    
          - Baidu:  http://www.baidu.com/search/robots_english.html
          - Google: https://developers.google.com/webmasters/control-crawl-index/docs/getting_started
                    http://www.youtube.com/watch?v=P7GY1fE5JQQ
          - Yandex: help.yandex.com/webmaster/controlling-robot/robots-txt.xml
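
    A minimal `robots.txt` that satisfies the above, and matches the
    behaviour described in this commit, could look like this (the comment
    wording below is illustrative):

          # www.robotstxt.org/

          # Allow crawling of all content
          User-agent: *
          Disallow: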
    
    Besides the addition specified above, this commit also:
    
      * adds a comment making it clear to everyone that the directives from
        the `robots.txt` file allow all content on the site to be crawled
      * updates the URL to `www.robotstxt.org`, as `robotstxt.org` doesn't
        quite work:
    
          curl -LsS robotstxt.org
          curl: (7) Failed connect to robotstxt.org:80; Operation timed out
    
    Close #1487.
Commits on Feb 28, 2012
  1. @alrra
Commits on Feb 3, 2012
  1. @alrra