This repository has been archived by the owner on Jun 12, 2019. It is now read-only.

Add ATDIS feed
henare committed Mar 14, 2016
1 parent 8353edd commit 8c31428
Showing 3 changed files with 56 additions and 62 deletions.
11 changes: 2 additions & 9 deletions Gemfile
@@ -1,10 +1,3 @@
-# It's easy to add more libraries or choose different versions. Any libraries
-# specified here will be installed and made available to your morph.io scraper.
-# Find out more: https://morph.io/documentation/ruby
-
-source 'https://rubygems.org'
+source "https://rubygems.org"
 
-ruby "2.0.0"
-
-gem "scraperwiki", git: "https://github.com/openaustralia/scraperwiki-ruby.git", branch: "morph_defaults"
-gem "mechanize"
+gem 'atdisplanningalertsfeed', github: 'planningalerts-scrapers/atdisplanningalertsfeed'
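The new Gemfile pulls the feed helper straight from GitHub rather than from rubygems.org. The `github:` option is Bundler shorthand for a full `git:` source; on the Bundler 1.x in use here it expands to the `git://` protocol, which is why the Gemfile.lock below records a `git://` remote. Spelled out, the shorthand is roughly equivalent to:

```ruby
# Long form of the `github:` shorthand. Bundler 1.x expanded `github:`
# to the git:// protocol, matching the remote recorded in Gemfile.lock.
gem "atdisplanningalertsfeed",
    git: "git://github.com/planningalerts-scrapers/atdisplanningalertsfeed.git"
```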
80 changes: 50 additions & 30 deletions Gemfile.lock
@@ -1,47 +1,67 @@
 GIT
-  remote: https://github.com/openaustralia/scraperwiki-ruby.git
-  revision: fc50176812505e463077d5c673d504a6a234aa78
-  branch: morph_defaults
+  remote: git://github.com/planningalerts-scrapers/atdisplanningalertsfeed.git
+  revision: ce20de6801cf4131a0440602059ff63414d73a6c
   specs:
-    scraperwiki (3.0.1)
-      httpclient
-      sqlite_magic
+    atdisplanningalertsfeed (0.0.1)
+      atdis
+      scraperwiki-morph
 
 GEM
   remote: https://rubygems.org/
   specs:
-    domain_name (0.5.24)
+    activemodel (4.2.6)
+      activesupport (= 4.2.6)
+      builder (~> 3.1)
+    activesupport (4.2.6)
+      i18n (~> 0.7)
+      json (~> 1.7, >= 1.7.7)
+      minitest (~> 5.1)
+      thread_safe (~> 0.3, >= 0.3.4)
+      tzinfo (~> 1.1)
+    atdis (0.3.13)
+      activemodel
+      multi_json (~> 1.7)
+      rest-client
+      rgeo-geojson
+    builder (3.2.2)
+    domain_name (0.5.20160309)
       unf (>= 0.0.5, < 1.0.0)
     http-cookie (1.0.2)
       domain_name (~> 0.5)
-    httpclient (2.6.0.1)
-    mechanize (2.7.3)
-      domain_name (~> 0.5, >= 0.5.1)
-      http-cookie (~> 1.0)
-      mime-types (~> 2.0)
-      net-http-digest_auth (~> 1.1, >= 1.1.1)
-      net-http-persistent (~> 2.5, >= 2.5.2)
-      nokogiri (~> 1.4)
-      ntlm-http (~> 0.1, >= 0.1.1)
-      webrobots (>= 0.0.9, < 0.2)
-    mime-types (2.5)
-    mini_portile (0.6.2)
-    net-http-digest_auth (1.4)
-    net-http-persistent (2.9.4)
-    nokogiri (1.6.6.2)
-      mini_portile (~> 0.6.0)
-    ntlm-http (0.1.1)
-    sqlite3 (1.3.10)
-    sqlite_magic (0.0.3)
+    httpclient (2.7.1)
+    i18n (0.7.0)
+    json (1.8.3)
+    mime-types (2.99.1)
+    minitest (5.8.4)
+    multi_json (1.11.2)
+    netrc (0.11.0)
+    rest-client (1.8.0)
+      http-cookie (>= 1.0.2, < 2.0)
+      mime-types (>= 1.16, < 3.0)
+      netrc (~> 0.7)
+    rgeo (0.5.3)
+    rgeo-geojson (0.4.2)
+      rgeo (~> 0.5)
+    scraperwiki (3.0.2)
+      httpclient
+      sqlite_magic
+    scraperwiki-morph (0.1.1)
+      scraperwiki
+    sqlite3 (1.3.11)
+    sqlite_magic (0.0.6)
       sqlite3
+    thread_safe (0.3.5)
+    tzinfo (1.2.2)
+      thread_safe (~> 0.1)
     unf (0.1.4)
       unf_ext
-    unf_ext (0.0.7.1)
-    webrobots (0.1.1)
+    unf_ext (0.0.7.2)
 
 PLATFORMS
   ruby
 
 DEPENDENCIES
-  mechanize
-  scraperwiki!
+  atdisplanningalertsfeed!
+
+BUNDLED WITH
+   1.10.6
27 changes: 4 additions & 23 deletions scraper.rb
@@ -1,25 +1,6 @@
-# This is a template for a Ruby scraper on morph.io (https://morph.io)
-# including some code snippets below that you should find helpful
+#!/usr/bin/env ruby
+Bundler.require
 
-# require 'scraperwiki'
-# require 'mechanize'
-#
-# agent = Mechanize.new
-#
-# # Read in a page
-# page = agent.get("http://foo.com")
-#
-# # Find somehing on the page using css selectors
-# p page.at('div.content')
-#
-# # Write out to the sqlite database using scraperwiki library
-# ScraperWiki.save_sqlite(["name"], {"name" => "susan", "occupation" => "software developer"})
-#
-# # An arbitrary query against the database
-# ScraperWiki.select("* from data where 'name'='peter'")
+url = "https://da.kiama.nsw.gov.au/atdis/1.0"
 
-# You don't have to do things with the Mechanize or ScraperWiki libraries.
-# You can use whatever gems you want: https://morph.io/documentation/ruby
-# All that matters is that your final data is written to an SQLite database
-# called "data.sqlite" in the current working directory which has at least a table
-# called "data".
+ATDISPlanningAlertsFeed.save(url)
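The rewritten scraper delegates everything to `ATDISPlanningAlertsFeed.save`, which fetches the council's paginated ATDIS JSON feed and writes the applications to the morph.io SQLite database. As a rough sketch of what an ATDIS consumer does with one page of the feed (the nesting follows the ATDIS 1.0 specification's `response`/`application`/`info`/`reference` structure; the sample payload, helper name, and chosen output fields are illustrative, not the gem's actual internals):

```ruby
require "json"

# Illustrative single page of an ATDIS 1.0 feed. A real feed serves
# paginated JSON like this from an endpoint under the base URL.
SAMPLE_PAGE = <<~JSON
  {
    "response": [
      {
        "application": {
          "info": {
            "dat_id": "DA-2016-0042",
            "description": "New dwelling",
            "lodgement_date": "2016-03-01T00:00:00Z"
          },
          "reference": {
            "more_info_url": "https://da.kiama.nsw.gov.au/application?id=DA-2016-0042"
          }
        }
      }
    ]
  }
JSON

# Map one page of ATDIS JSON to flat rows of the kind a scraper saves.
def records_from_page(json)
  JSON.parse(json)["response"].map do |item|
    app = item["application"]
    {
      "council_reference" => app["info"]["dat_id"],
      "description"       => app["info"]["description"],
      "date_received"     => app["info"]["lodgement_date"][0, 10],
      "info_url"          => app["reference"]["more_info_url"],
    }
  end
end

records = records_from_page(SAMPLE_PAGE)
puts records.first["council_reference"]  # => DA-2016-0042
```

In the real gem the fetch loop, pagination, and the `ScraperWiki.save_sqlite` call are all handled internally, which is why the scraper shrinks to two effective lines.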
