Comparing changes

  • 17 commits
  • 7 files changed
  • 0 commit comments
  • 1 contributor
Commits on Apr 02, 2012
@houshuang Fixing split words like construc- tivism in skim.rb 9d4f1d9
@houshuang Improving bulletlist to not remove "and" if list is line-based cf3d705
Commits on Apr 03, 2012
@houshuang Made sure all links in RSS feed are to online version, not localhost cdc12c4
Commits on Apr 04, 2012
@houshuang Speed-up and improved formatting
I only generate a citation once, store it in item[:cit], and reuse it a number of times.
Also, only publications which have notes or clippings are now listed on the main bibliography - all are still listed on the author/keyword pages etc.
Names of publications/authors/keywords are no longer printed, but individual and total times are.
Changed bibliography to use Dokuwiki tables instead of HTML (makes links work better local and remote).
Some reformatting for legibility.
61568ad
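The speed-up comes mostly from formatting each citation only once: the APA string is generated in the main pass, cached on the BibTeX item as item[:cit], and reused when the author, keyword and journal pages are written. A minimal standalone sketch of the pattern (simplified, not the actual script; format_citation below is just a stand-in for the expensive CiteProc.process call, and the sample item is made up):

# Sketch of the caching pattern introduced in bibtex-batch.rb (not the actual script).
def format_citation(item)
  sleep 0.05                       # stand-in for the slow CiteProc/APA formatting
  "#{item[:author]} (#{item[:year]}). #{item[:title]}."
end

items = [
  { key: "scardamalia2004knowledge", author: "Scardamalia, M.", year: 2004, title: "Knowledge building" }
]

# Main pass: format each citation once and cache it on the item.
items.each { |item| item[:cit] = format_citation(item) }

# Later passes (author/keyword/journal pages) just reuse item[:cit] instead of reformatting.
items.each { |item| puts "| #{item[:key]} | #{item[:cit]} |" }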
@houshuang Quick formatting change fa2a83e
@houshuang bibtex-batch RSS avoids duplicates 5b4f9ea
@houshuang Enable updating of RSS entries
Now, if you try to add a page to the RSS feed, and the page has already been added, it instead updates its entry.
Growl message is also updated to reflect this.
0e36d37
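The update logic keys RSS entries on their link: if an entry with the same URL is already in the holding file it is replaced in place, otherwise the new one is appended, and the result drives which Growl message is shown. A rough standalone sketch of that idea (the real code is the rss_entries.map! block in add_to_rss in dokuwiki.rb below; the URL here is made up):

# Simplified sketch of the update-or-append behaviour in add_to_rss (not the actual code).
def upsert_entry(rss_entries, new_entry)
  exists = false
  rss_entries.map! do |entry|
    if entry[:link] == new_entry[:link]
      exists = true
      new_entry                         # replace the stale entry for this URL
    else
      entry
    end
  end
  rss_entries << new_entry unless exists
  exists                                # caller picks the Growl message from this
end

entries = [{ link: "http://example.org/wiki/ref:some_page", title: "old title" }]  # made-up URL
updated = upsert_entry(entries, { link: "http://example.org/wiki/ref:some_page", title: "new title" })
puts(updated ? "Article updated" : "Article added to feed")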
Commits on Apr 08, 2012
@houshuang Adding bibliography selector using Pashua in new global.rb (JSON must be regenerated)

New file global.rb holds global keyboard macros. Cmd+. invokes Pashua selector with bibliography list (citekey + title). Reconfigured json cache to include title (json has to be regenerated)
6d322c0
@houshuang Check if an item has a title, otherwise don't list it 58836ca
Commits on Apr 09, 2012
@houshuang Moving anystyle-import from anystyle-import.rb to global.rb f2f60bd
@houshuang Small change in bulletlist to account for (1) entries e746a7d
@houshuang First iteration of cleanup script - does a bunch of tests and outputs to HTML page 34825ed
@houshuang Making sure it doesn't crash if text cannot be exported from PDF 1465999
@houshuang Various improvements to cleanup - show last 7 days 0e323f4
@houshuang Added command for deleting all related pages - after showing confirmation box 8e8137b
@houshuang Added new pubs last 7 days to cleanup 5adc0a9
@houshuang Added numbers of publications for this week's stats in cleanup a2da690
anystyle-import.rb (24 changed lines)
@@ -1,24 +0,0 @@
-# encoding: UTF-8
-$:.push(File.dirname($0))
-require 'open-uri'
-require 'utility-functions'
-
-# grab from clipboard, either look up DOI through API, or
-# use anystyle parser to convert text to bibtex. Paste to clipboard.
-
-def lookup_doi(doi)
- doi = doi.downcase.remove(/doi[:>]/,'http://','dx.doi.org/').strip
- url = "http://dx.doi.org/#{doi}"
- return open(url, "Accept" => "text/bibliography; style=bibtex").read
-end
-
-search = pbpaste
-if search.strip.downcase[0..2] == "doi"
- bibtex = lookup_doi(search)
- growl "Failure", "DOI lookup not successful" unless bibtex
-else
- require 'anystyle/parser'
- search = search.gsub("-\n", "").gsub("\n", " ")
- bibtex = Anystyle.parse(search, :bibtex).to_s
-end
-pbcopy(cleanup_bibtex_string(bibtex))
bibtex-batch.rb (131 changed lines)
@@ -35,6 +35,8 @@ def make_rss_feed
version = "2.0" # ["0.9", "1.0", "2.0"]
+ urls = Array.new
+
content = RSS::Maker.make(version) do |m|
m.channel.title = Wiki_title
m.channel.link = Internet_path
@@ -42,24 +44,38 @@ def make_rss_feed
m.items.do_sort = true # sort items by date
rss_entries.each do |entry|
+ next if urls.index(entry[:link]) # avoid duplicate content (should not be possible, but just in case)
+ urls << entry[:link]
+
i = m.items.new_item
i.title = entry[:title]
+ puts "--#{i.title}"
i.link = entry[:link]
i.date = entry[:date]
i.description = Sanitize.clean( entry[:description], Sanitize::Config::RELAXED )
end
end
- File.write(Wiki_path + "/feed.xml", content)
+ # make sure all links point to the online version, not the localhost
+ feedcontent = content.to_s.gsub(Internet_path, Server_path)
+
+ File.write(Wiki_path + "/feed.xml", feedcontent)
end
+timetot = Time.now
+timetmp = Time.now
+
puts "Making RSS feed"
+
make_rss_feed
-puts "RSS feed complete (will be updated next time you sync with server)"
+
+puts "RSS feed complete (will be updated next time you sync with server) (#{Time.now - timetmp} s.)"
+timetmp = Time.now
puts "Parsing BibTeX"
b = BibTeX.parse(File.read(Bibliography))
b.parse_names
-puts "Initial parse complete"
+puts "Initial parse complete (#{Time.now - timetmp} s.)"
+timetmp = Time.now
out1 = ''
out2 = ''
out3 = ''
@@ -75,8 +91,10 @@ def make_rss_feed
counter[:notes] = 0
counter[:clippings] = 0
counter[:images] = 0
+
puts "Starting secondary parse"
b.each do |item|
+ next unless try { item.title } # fragment, doesn't need its own listing
ax = []
if item.respond_to? :author
item.author.each do |a|
@@ -99,13 +117,17 @@ def make_rss_feed
end
cit = CiteProc.process item.to_citeproc, :style => :apa
+ cit.gsub!(/Retrieved from(.+?)$/, '')
+ item[:cit] = cit
year = (defined? item.year) ? item.year.to_s : "n.d."
if year == "n.d." and cit.match(/\((....)\)/)
year = $1
end
- json[item.key.to_s] = [namify(ax), year, cit]
+ json[item.key.to_s] = [namify(ax), year, cit, item.title]
+
hasfiles = Array.new
- hasfiles[4]=""
+ hasfiles[2] = '' # ensure that array is filled even if some fields are empty, for alignment
+
if File.exists?("#{Wiki_path}/data/pages/ref/#{item.key}.txt")
counter[:hasref] += 1
if File.exists?("#{Wiki_path}/data/pages/clip/#{item.key}.txt") || File.exists?("#{Wiki_path}/data/pages/kindle/#{item.key}.txt")
@@ -119,18 +141,15 @@ def make_rss_feed
if File.exists?("#{Wiki_path}/data/pages/notes/#{item.key}.txt")
counter[:notes] += 1
hasfiles[0] = "N"
- out1 << "<tr><td><a href = 'ref:#{item.key}'>#{item.key}</a></td><td>#{hasfiles.join("</td><td>&nbsp;")}</td><td>#{cit}</td></tr>\n"
- elsif hasfiles[1] == "C"
- out2 << "<tr><td><a href = 'ref:#{item.key}'>#{item.key}</a></td><td>#{hasfiles.join("</td><td>&nbsp;")}</td><td>#{cit}</td></tr>\n"
- else
- out3 << "<tr><td><a href = 'ref:#{item.key}'>#{item.key}</a></td><td>#{hasfiles.join("</td><td>&nbsp;")}</td><td>#{cit}</td></tr>\n"
+ end
+ txt = "| [#:ref:#{item.key}|#{item.key}] | #{hasfiles.join(" | ")} |#{cit}|\n"
+ if hasfiles[0] == "N"
+ out1 << txt
+ elsif hasfiles[1] == "C"
+ out2 << txt
end
-
- else
- counter[:noref] += 1
- out4 << "<tr><td>#{item.key}</td><td>#{hasfiles.join("</td><td>&nbsp;")}</td><td>#{cit}</td></tr>\n"
end
# mark as read if notes exist
@@ -140,16 +159,22 @@ def make_rss_feed
end
+puts "Finished secondary parse, generating main bibliography (#{Time.now - timetmp} s.)"
+timetmp = Time.now
-puts "Finished secondary parse, generating main bibliography"
-
-out = "h1. Bibliography\n\nDownload [[http://dl.dropbox.com/u/1341682/Bibliography.bib|entire BibTeX file]]. Also see bibliography by [[abib:start|author]] or by [[kbib:start|keyword]].\n\nPublications that have their own pages are listed on top, and hyperlinked. Most of these also have clippings and many have key ideas.\n\nStatistics: Totally **#{counter[:hasref] + counter[:noref]}** publications, and **#{counter[:hasref]}** publications have their own wikipages. Of these, **#{counter[:images]}** with notes (key ideas) **(N)**, **#{counter[:clippings]}** with highlights (imported from Kindle or Skim) **(C)**, and **#{counter[:images]}** with images (imported from Skim) **(I)** and.<html><table>"
+out = "h1. Bibliography\n\nDownload [[http://dl.dropbox.com/u/1341682/Bibliography.bib|entire BibTeX file]].
+Also see bibliography by [[abib:start|author]] or by [[kbib:start|keyword]].\n\nPublications that have their
+own pages are listed on top, and hyperlinked. Most of these also have clippings and many have key ideas.\n\n
+Statistics: Totally **#{counter[:hasref] + counter[:noref]}** publications, and **#{counter[:hasref]}**
+publications have their own wikipages. Of these, **#{counter[:images]}** with notes (key ideas) **(N)**,
+**#{counter[:clippings]}** with highlights (imported from Kindle or Skim) **(C)**, and **#{counter[:images]}**
+with images (imported from Skim) **(I)**\n\n"
#dt.document.save
File.open("#{Wiki_path}/lib/plugins/dokuresearchr/json.tmp","w"){|f| f << JSON.fast_generate(json)}
-out << out1 << out2 << out3 << out4 << "</table></html>"
+out << out1 << out2 << out3
File.open("#{Wiki_path}/data/pages/bib/bibliography.txt", 'w') {|f| f << out}
###############################################
@@ -170,11 +195,10 @@ def make_rss_feed
out = "h2. #{author}'s publications\n\n"
sort_pubs(pubs).each do |i|
item = b[i]
- cit = CiteProc.process item.to_citeproc, :style => :apa
if File.exists?("#{Wiki_path}/data/pages/ref/#{item.key}.txt")
- out1 << "| [#:ref:#{item.key}] | #{cit}|#{pdfpath(item.key)}|\n"
+ out1 << "| [#:ref:#{item.key}|#{item.key}] | #{item[:cit]}|#{pdfpath(item.key)}|\n"
else
- out2 << "| #{item.key} | #{cit}|#{pdfpath(item.key)}|\n"
+ out2 << "| #{item.key} | #{item[:cit]}|#{pdfpath(item.key)}|\n"
end
end
@@ -182,7 +206,6 @@ def make_rss_feed
authorname = clean_pagename(author)
authorlisted << [authorname,author,pubs.size]
File.open("#{Wiki_path}/data/pages/abib/#{authorname}.txt", 'w') {|f| f << out}
- puts author
end
File.open("#{Wiki_path}/data/pages/abib/start.txt","w") do |f|
@@ -198,44 +221,46 @@ def make_rss_feed
end
###############################################
# generate individual files for each keyword
-
+puts "Finished (#{Time.now - timetmp} s.)"
if keywordopt
+ timetmp = Time.now
puts "Generating individual files for each keyword"
-keywordslisted = Array.new
-keywords.each do |keyword, pubs|
- out =''
- out1 = ''
- out2 =''
- out = "h2. Publications with keyword \"#{keyword}\"\n\n"
- sort_pubs(pubs).each do |i|
- item = b[i]
- cit = CiteProc.process item.to_citeproc, :style => :apa
- if File.exists?("#{Wiki_path}/data/pages/ref/#{item.key}.txt")
- out1 << "| [#:ref:#{item.key}] | #{cit}| #{pdfpath(item.key)} |\n"
- else
- out2 << "| #{item.key} | #{cit} | #{pdfpath(item.key)}|\n"
+ keywordslisted = Array.new
+ keywords.each do |keyword, pubs|
+ out =''
+ out1 = ''
+ out2 =''
+ out = "h2. Publications with keyword \"#{keyword}\"\n\n"
+ sort_pubs(pubs).each do |i|
+ item = b[i]
+ if File.exists?("#{Wiki_path}/data/pages/ref/#{item.key}.txt")
+ out1 << "| [#:ref:#{item.key}|#{item.key}] | #{item[:cit]}| #{pdfpath(item.key)} |\n"
+ else
+ out2 << "| #{item.key} | #{item[:cit]} | #{pdfpath(item.key)}|\n"
+ end
end
- end
- out << out1 << out2
- kwname = keyword.gsub(/[\,\.\/ ]/,"_").downcase
- keywordslisted << [kwname,keyword,pubs.size]
- File.open("#{Wiki_path}/data/pages/kbib/#{kwname}.txt", 'w') {|f| f << out}
- puts kwname
-end
+ out << out1 << out2
+ kwname = keyword.gsub(/[\,\.\/ ]/,"_").downcase
+ keywordslisted << [kwname,keyword,pubs.size]
+ File.open("#{Wiki_path}/data/pages/kbib/#{kwname}.txt", 'w') {|f| f << out}
+ end
-File.open("#{Wiki_path}/data/pages/kbib/start.txt","w") do |f|
- f << "h1. List of publication keywords\n\n"
- keywordslisted.sort {|x,y| y[2].to_i <=> x[2].to_i}.each do |ax|
- f << "|[##{ax[0]}|#{ax[1]}]|#{ax[2]}|\n"
+ File.open("#{Wiki_path}/data/pages/kbib/start.txt","w") do |f|
+ f << "h1. List of publication keywords\n\n"
+ keywordslisted.sort {|x,y| y[2].to_i <=> x[2].to_i}.each do |ax|
+ f << "|[##{ax[0]}|#{ax[1]}]|#{ax[2]}|\n"
+ end
end
-end
+ puts "Finished (#{Time.now - timetmp} s.)"
+
end
###############################################
# generate individual files for each journal with more than five cits.
if journalopt
+ timetmp = Time.now
puts "Generating individual files for each journal"
authorlisted = Array.new
@@ -244,18 +269,16 @@ def make_rss_feed
out1 = ''
out2 =''
author = axx.strip
- p pubs, pubs.size
next unless pubs.size > 5
# only generates individual author pages for authors with full names. this is because I want to deduplicate author names
# when you import bibtex, you get many different spellings etc.
out = "h2. Publications in #{author}\n\n"
sort_pubs(pubs).each do |i|
item = b[i]
- cit = CiteProc.process item.to_citeproc, :style => :apa
if File.exists?("#{Wiki_path}/data/pages/ref/#{item.key}.txt")
- out1 << "| [#:ref:#{item.key}] | #{cit}|#{pdfpath(item.key)}|\n"
+ out1 << "| [#:ref:#{item.key}|#{item.key}] | #{item[:cit]}|#{pdfpath(item.key)}|\n"
else
- out2 << "| #{item.key} | #{cit}|#{pdfpath(item.key)}|\n"
+ out2 << "| #{item.key} | #{item[:cit]}|#{pdfpath(item.key)}|\n"
end
end
@@ -263,7 +286,6 @@ def make_rss_feed
authorname = clean_pagename(author)
authorlisted << [authorname,author,pubs.size]
File.open("#{Wiki_path}/data/pages/jbib/#{authorname}.txt", 'w') {|f| f << out}
- puts author
end
end
@@ -276,6 +298,7 @@ def make_rss_feed
end
f << "| [##{ax[0]}|#{ax[1]}] | #{apage} |#{ax[2]}|\n"
end
+ puts "Finished (#{Time.now - timetmp} s.)"
end
@@ -293,3 +316,5 @@ def make_rss_feed
# end
# end
# File.open("#{Wiki_path}/data/pages/bib/needs_key_ideas.txt","w") {|f| f << out}
+
+puts "All tasks finished. Total time #{Time.now - timetot} s."
cleanup.rb (63 changed lines)
@@ -0,0 +1,63 @@
+# encoding: UTF-8
+
+# performing various cleanup functions
+
+$:.push(File.dirname($0))
+require 'utility-functions'
+require 'appscript'
+include Appscript
+
+def bname(ary)
+ ary.map {|f| File.basename(f).remove(".txt")}
+end
+
+refs = bname(Dir[Wiki_path + "/data/pages/ref/*.txt"])
+skimg = bname(Dir[Wiki_path + "/data/pages/skimg/*.txt"])
+clips = bname(Dir[Wiki_path + "/data/pages/clip/*.txt"])
+notes = bname(Dir[Wiki_path + "/data/pages/notes/*.txt"])
+kindle = bname(Dir[Wiki_path + "/data/pages/kindle/*.txt"])
+
+refs_week = bname(`find /wiki/data/pages/ref/*.txt -mtime -7`.split("\n"))
+notes_week = bname(`find /wiki/data/pages/notes/*.txt -mtime -7`.split("\n"))
+
+notes_short_week = []
+notes_week.each {|f| notes_short_week << f unless File.size("#{Wiki_path}/data/pages/notes/#{f}.txt") > 500}
+
+# check off all the notes in BibDesk, takes a few seconds
+# notes.each do |n|
+# bibdesk_publication = try { app("BibDesk").document.search({:for =>n})[0] }
+# bibdesk_publication.fields["Notes"].value.set("1") if bibdesk_publication
+# end
+
+puts "<html><head><title>Researchr cleanup script report</title></head><body>"
+puts "<h1>Researchr cleanup script report</h1>"
+
+this = notes_week - notes_short_week
+puts "<h2>New publications added last 7 days with decent-sized notes (#{this.size})</h2>"
+this.each {|a| puts "<li><a href='#{Internet_path}/ref:#{a}'>#{a}</a></li>"}
+
+puts "<h2>New publications added last 7 days with brief notes (#{notes_short_week.size})</h2>"
+(notes_short_week).each {|a| puts "<li><a href='#{Internet_path}/ref:#{a}'>#{a}</a></li>"}
+
+this = refs_week - notes_week
+puts "<h2>New publications added last 7 days without notes (#{this.size})</h2>"
+this.each {|a| puts "<li><a href='#{Internet_path}/ref:#{a}'>#{a}</a></li>"}
+
+puts "<hr><h2>Notes pages without ref page</h2>"
+(notes - refs).each {|a| puts "<li><a href='#{Internet_path}/notes:#{a}'>#{a}</a></li>"}
+
+puts "<h2>Clipping pages without ref page</h2>"
+(clips - refs).each {|a| puts "<li><a href='#{Internet_path}/clip:#{a}'>#{a}</a></li>"}
+
+puts "<h2>Image pages without ref page</h2>"
+(skimg - refs).each {|a| puts "<li><a href='#{Internet_path}/skimg:#{a}'>#{a}</a></li>"}
+
+puts "<h2>Kindle pages without ref page</h2>"
+(kindle - refs).each {|a| puts "<li><a href='#{Internet_path}/kindle:#{a}'>#{a}</a></li>"}
+
+puts "<h2>Ref pages with no sub-pages</h2>"
+(refs - (skimg + clips + notes + kindle)).each {|a| puts "<li><a href='#{Internet_path}/ref:#{a}'>#{a}</a></li>"}
+
+# how to do a union of two arrays?
+#puts "<h2>Kindle pages that also has clipping page</h2>"
+#(kindle ).each {|a| puts "<li><a href='#{Internet_path}/kindle:#{a}'>#{a}</a></li>"}
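On the commented-out question at the end ("how to do a union of two arrays?"): in Ruby, Array#| is union and Array#& is intersection, and for "Kindle pages that also have a clipping page" intersection is what is wanted. A tiny illustration with made-up citekeys, not part of the diff:

kindle = %w[smith2010 jones2011 lee2012]   # made-up citekeys
clips  = %w[jones2011 lee2012 brown2009]
puts(kindle & clips)          # intersection: pages present in both lists
puts((kindle | clips).size)   # union size, for completeness => 4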
dokuwiki.rb (67 changed lines)
@@ -140,8 +140,12 @@ def import_bibtex
# is executed (Ctrl+Alt+Cmd+F)
def add_to_rss
require 'open-uri'
+ require 'cgi'
+
fname = Wiki_path + "/rss-temp"
+
internalurl = cururl.split("/").last
+ url = "#{Internet_path}/#{internalurl}"
# load existing holding file, or start form scratch
if File.exists?(fname)
@@ -164,13 +168,35 @@ def add_to_rss
/\<div class\=\"plugin\_include\_content\ plugin\_include\_\_kindle(.+?)\<\/div\>/m
)
-
title = page_contents.scan(/\<h1(.+?)id(.+?)>(.+)\<(.+?)\<\/h1\>/)[0][2]
- rss_entries << {:title => title, :date => Time.now, :link => "#{Internet_path}/#{internalurl}", :description => contents}
+ title = CGI.unescapeHTML(title)
+
+ entry_contents = {:title => title, :date => Time.now, :link => url, :description => contents}
+
+ exists = false
+
+ rss_entries.map! do |entry|
+ if entry[:link] == url
+ exists = true
+ entry_contents
+ else
+ entry
+ end
+ end
+
+ unless exists
+ rss_entries << entry_contents
+ end
+
+ rss_entries = rss_entries.drop(1) if rss_entries.size > 15
- rss_entries = rss_entries.drop(1) if rss_entries.size > 10
File.write(fname, Marshal::dump(rss_entries))
- growl("Article added to feed", "'#{title}' added to RSS feed")
+
+ if exists
+ growl("Article updated", "Article #{title} updated")
+ else
+ growl("Article added to feed", "'#{title}' added to RSS feed")
+ end
end
# pops up dialogue box, asking where to send text, takes selected text (or just link, if desired) and inserts at the bottom
@@ -254,7 +280,7 @@ def bulletlist
splt = "\n"
elsif a.scan(")").size > a.scan("(").size + 2
splt = ")"
- a.gsub!(/[, ]*\d+\)/,")")
+ a.gsub!(/[, (]*\d+\)/,")")
elsif a.scan(";").size > 1
splt = ";"
elsif a.scan(".").size > 2
@@ -269,7 +295,8 @@ def bulletlist
splits = a.split(splt)
- if splits.last.index(" and ") # deal with situation where the last two items are delimited with "and"
+ # deal with situation where the last two items are delimited with "and", but not for line shift or 1) 2) kind of lists
+ if splits.last.index(" and ") && !(splt == "\n" || splt == ")")
x,y = splits.last.split(" and ")
splits.pop
splits << x
@@ -367,6 +394,34 @@ def newauthor
`open "http://localhost/wiki/a:#{page}?do=edit"`
end
+# removes current page and all related pages (ref, skimg etc) after confirmation
+def delete
+ require 'pashua'
+ include Pashua
+ config = <<EOS
+ *.title = Delete this page?
+ cb.type = text
+ cb.text = This action will delete this page, and all related pages (ref:, notes:, skimg:, kindle:, etc). Are you sure?
+ cb.width = 220
+ db.type = cancelbutton
+ db.label = Cancel
+EOS
+ pagetmp = pashua_run config
+ exit if pagetmp['db'] == "1"
+
+ page = cururl.split(":").last.downcase
+
+ directories = %w[ref notes skimg kindle]
+ paths = directories.map {|f| "#{Wiki_path}/data/pages/#{f}/#{page}.txt"}
+
+ c = 0
+ paths.each do |f|
+ c += 1 if try { File.delete(f) }
+ end
+
+ growl "#{c} pages deleted"
+end
+
#### Running the right function, depending on command line input ####
@chrome = Appscript.app('Google Chrome')
global.rb (58 changed lines)
@@ -0,0 +1,58 @@
+# encoding: UTF-8
+$:.push(File.dirname($0))
+require 'utility-functions'
+
+# Contains keyboard related functionality which can be invoked from any publication
+
+# triggered through Cmd+. shows a Pashua list of all references with titles
+# upon selection, a properly formatted citation like [@scardamalia2004knowledge] is inserted
+def bib_selector
+ require 'pashua'
+ include Pashua
+
+ bib = json_bib
+
+ config = "
+ *.title = researchr
+ cb.type = combobox
+ cb.completion = 2
+ cb.label = Insert a citation
+ cb.width = 800
+ cb.tooltip = Choose from the list or enter another name
+ db.type = cancelbutton
+ db.label = Cancel
+ db.tooltip = Closes this window without taking action"
+
+ # create list of citations
+ out = ''
+ json_bib.sort.each do |a|
+ out << "cb.option = #{a[0]}: #{a[1][3][0..90]}\n"
+ end
+
+ # show dialogue
+ pagetmp = pashua_run config + out
+
+ exit if pagetmp['cancel'] == 1
+
+ /^(?<citekey>.+?)\:/ =~ pagetmp['cb'] # extract citekey from citekey + title string
+
+ pbcopy("[@#{citekey}]")
+end
+
+# grab from clipboard, either look up DOI through API, or
+# use anystyle parser to convert text to bibtex. Paste to clipboard.
+def anystyle_parse
+ search = pbpaste
+ if search.strip.downcase[0..2] == "doi"
+ bibtex = doi_to_bibtex(search)
+ growl "Failure", "DOI lookup not successful" unless bibtex
+ else
+ require 'anystyle/parser'
+ search = search.gsub("-\n", "").gsub("\n", " ")
+ bibtex = Anystyle.parse(search, :bibtex).to_s
+ end
+
+ pbcopy(cleanup_bibtex_string(bibtex))
+end
+
+send *ARGV unless ARGV == []
skim.rb (8 changed lines)
@@ -84,7 +84,7 @@ def format(type, text, page)
type = $1
text = ''
else
- text << line # just add the text
+ text << line.gsub(/([a-zA-Z])\- ([a-zA-Z])/, '\1\2') # just add the text (mend split words)
alltext << line
end
end
@@ -96,10 +96,12 @@ def format(type, text, page)
`/usr/local/bin/pdftotext "#{filename}"`
ftlines = `wc "#{PDF_path}/#{Citekey}.txt"`.split(" ")[1].to_f
`rm "#{PDF_path}/#{Citekey}.txt"`
- percentage = ntlines/ftlines*100
@out << process(type, text, page) # pick up the last annotation
- outfinal = "h2. Highlights (#{percentage.to_i}%)\n\n" + @out.join('')
+
+ percentage_text = ftlines.to_i > 0 ? " (#{(ntlines/ftlines*100).to_i}%)" : ""
+
+ outfinal = "h2. Highlights#{percentage_text}\n\n" + @out.join('')
File.write("/tmp/skimtmp", outfinal)
`/wiki/bin/dwpage.php -m 'Automatically extracted from Skim' commit /tmp/skimtmp 'clip:#{Citekey}'`
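For reference, this is what the split-word repair from the Apr 02 commit ("Fixing split words like construc- tivism") does to a highlight line; a small illustration, not part of the diff:

line = "This draws on social construc- tivism and distributed cogni- tion."
puts line.gsub(/([a-zA-Z])\- ([a-zA-Z])/, '\1\2')
# prints: This draws on social constructivism and distributed cognition.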
utility-functions.rb (2 changed lines)
@@ -384,7 +384,7 @@ def add_to_jsonbib(citekey)
year = $1 if year == "n.d." and cit.match(/\((....)\)/)
json = JSON.parse(File.read(JSON_path))
- json[item.key.to_s] = [namify(ax), year, cit]
+ json[item.key.to_s] = [namify(ax), year, cit, item.title]
File.write(JSON_path, JSON.fast_generate(json) )
end
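The extra title field is what the new bib_selector in global.rb depends on: each cached JSON entry becomes [names, year, citation, title], the Pashua combobox shows "citekey: title", and the citekey is pulled back out of the selection with a named-capture regex. A small sketch of that round trip (field values below are made up; only the entry shape and the regex come from the diff):

json_bib = {
  "scardamalia2004knowledge" => [
    "Scardamalia",                                   # namify(ax): author names
    "2004",                                          # year
    "Scardamalia, M. (2004). Knowledge building.",   # formatted APA citation (made up)
    "Knowledge building"                             # the new title field (made up)
  ]
}

# One combobox option per entry, "citekey: title" with the title cut at 90 characters...
options = json_bib.sort.map { |key, fields| "#{key}: #{fields[3][0..90]}" }

# ...and the citekey is extracted back out of whatever the user picked.
/^(?<citekey>.+?)\:/ =~ options.first
puts "[@#{citekey}]"   # => [@scardamalia2004knowledge]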
