update README and add new scripts for Michael Kalus aka darkness black and white barcode
rtanglao committed Apr 23, 2011
1 parent 3d424a4 commit 534efbf
Showing 4 changed files with 123 additions and 2 deletions.
16 changes: 14 additions & 2 deletions README.md
@@ -10,7 +10,7 @@
* ImageMagick installed
* [free flickr api key](http://www.flickr.com/services/apps/create/apply)

## Method
## penmachine Method

1. How to get the metadata of all the pm photos on Flickr (you need to create a file called flickr.conf with your Flickr API key)
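flickr.conf is read with the parseconfig gem (see getdblackandwhitephotos.rb), which expects plain key = value lines, so a one-liner like this is enough (the key shown is a placeholder):

echo "api_key = YOUR_FLICKR_API_KEY" > flickr.conf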

@@ -30,7 +30,7 @@
2. How to retrieve the 4622 photos with height >= 720

mkdir HD_PICS; cd HD_PICS
./download720.rb < ../pm.photos.16.april2011.stdout 2>download720.16april2011.stderr
../download720.rb < ../pm.photos.16.april2011.stdout 2>download720.16april2011.stderr

3. Resize the images to be 1 pixel wide and 720 pixels high (for some reason, a bug, this converted only 4616 images)
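A minimal sketch of such a resize, assuming ImageMagick's mogrify and the downloaded JPEGs in the current directory (not necessarily the exact command that was used):

mogrify -resize '1x720!' *.jpg

(The ! forces the exact 1x720 geometry instead of preserving the aspect ratio.)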

@@ -59,3 +59,15 @@
5. The culmination: pmbarcode2.png

montage -geometry +0+0 -tile x1 pmbarcode1000.png pmbarcode2000.png pmbarcode3000.png pmbarcode4000.png pmbarcode5000.png pmbarcode2.png
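(-geometry +0+0 butts the strips together with no spacing, and -tile x1 lays the five intermediate strips out in a single row, so the result is one continuous barcode.)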

## darkness method

(too tired to give an exhaustive HOWTO like I did for penmachine)

1. Get the Flickr metadata

./getdblackandwhitephotos.rb 1>darkness.photos.21april2011.stdout 2>darkness.photos.21april2011.stderr

2. Download the photos that are 720 pixels or higher (using the original resolution; that's 990 photos as of April 21, 2011)

../download720FromOriginal.rb < ../darkness.photos.21april2011.stdout 2>darkness.download720FromOriginls.16april2011.stderr
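The remaining steps are presumably the same resize-and-montage as steps 3-5 of the penmachine method; a rough sketch, assuming ImageMagick and a hypothetical output filename:

mogrify -resize '1x720!' *.jpg
montage -geometry +0+0 -tile x1 *.jpg darknessbarcode.png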
55 changes: 55 additions & 0 deletions download720FromOriginal.rb
@@ -0,0 +1,55 @@
#!/usr/bin/env ruby
require 'json'
require 'pp'
require 'curb'
# requires the serialized Flickr JSON file on $stdin or given on the command line, then
# downloads the Flickr original size ("url_o") for photos with height >= 720

$file_number = 1

def fetch_1_at_a_time(urls)

easy = Curl::Easy.new
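# follow HTTP redirects when fetching each image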
easy.follow_location = true

urls.each do|url|
easy.url = url
filename = sprintf("%04d", $file_number)+".jpg"
$file_number += 1
$stderr.print "filename:'#{filename}'"
$stderr.print "url:'#{url}' :"
File.open(filename, 'wb') do|f|
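# stream the response body straight to disk, printing one '=' per progress callback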
easy.on_progress {|dl_total, dl_now, ul_total, ul_now| $stderr.print "="; true }
easy.on_body {|data| f << data; data.size }
easy.perform
$stderr.puts "=> '#{filename}'"
end
end
end

ARGF.each_line do |line|
serializedJSON = line
flickr_data_page = JSON.parse(serializedJSON)
total = flickr_data_page["photos"]["total"].to_i
total_pages = flickr_data_page["photos"]["pages"].to_i
page = flickr_data_page["photos"]["page"].to_i
$stderr.printf "Total photos to download:%d page:%d of:%d\n", total, page, total_pages

total_to_download_for_this_page = 0
if page == total_pages
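# the last page holds only the remainder (note: this is 0 when the total is an exact multiple of 250)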
total_to_download_for_this_page = total % 250 # 250 per page
else
total_to_download_for_this_page = 250
end
url_index = 0
urls = []
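# keep only original-size ("url_o") URLs for photos at least 720 pixels tall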
0.upto(total_to_download_for_this_page - 1) do |i|
if flickr_data_page["photos"]["photo"][i].has_key?("url_o")
if flickr_data_page["photos"]["photo"][i]["height_o"].to_i >= 720
urls[url_index] = flickr_data_page["photos"]["photo"][i]["url_o"]
url_index += 1
end
end
end
fetch_1_at_a_time(urls)
end
54 changes: 54 additions & 0 deletions getdblackandwhitephotos.rb
@@ -0,0 +1,54 @@
#!/usr/bin/env ruby
require 'json'
require 'net/http'
require 'pp'
require 'time'
require 'uri'
require 'parseconfig'

flickr_config = ParseConfig.new('flickr.conf').params
api_key = flickr_config['api_key']
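# Flickr NSID of the account being searched (darkness, aka Michael Kalus)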
user_id = "41931500@N00"

content_type = "1" # photos only
sort = "date-taken-asc"
per_page = "250" # geo photos limited by flickr to only 250 per page
extras="original_format,date_taken,geo,tags,o_dims,views,url_sq,url_t,url_s,url_m,url_o,url_z,url_l" # get all the meta data!
tags="blackwhite"
page = 1

def getResponse(url)

http = Net::HTTP.new("api.flickr.com",80)

request = Net::HTTP::Get.new(url)
resp = http.request(request)
if resp.code != "200"
$stderr.puts "Error: #{resp.code} from:#{url}"
raise JSON::ParserError # This is a kludge. Should return a proper exception instead!
end

result = JSON.parse(resp.body)
return result
end

photos_to_retrieve = 250
first_page = true
photos_per_page = 0
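# page through the search results until every matching photo has been fetched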
while photos_to_retrieve > 0
search_url = "/services/rest/?method=flickr.photos.search&api_key="+api_key+
"&format=json&nojsoncallback=1&content_type="+content_type+
"&per_page="+per_page+"&user_id="+user_id+"&tags="+tags+
"&extras="+extras+"&sort="+sort+"&page="+page.to_s
photos_on_this_page = getResponse(search_url)
if first_page
first_page = false
photos_per_page = photos_on_this_page["photos"]["perpage"].to_i
photos_to_retrieve = photos_on_this_page["photos"]["total"].to_i - photos_per_page
else
photos_to_retrieve -= photos_per_page
end
page += 1
$stderr.puts photos_on_this_page
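# emit one JSON page per line on stdout, the format download720FromOriginal.rb expects on stdin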
print JSON.generate(photos_on_this_page), "\n"
end
