  • 17 commits
  • 7 files changed
  • 0 comments
  • 1 contributor
Apr 02, 2012
Stian Håklev Fixing split words like construc- tivism in skim.rb 9d4f1d9
Stian Håklev Improving bulletlist to not remove "and" if list is line-based cf3d705
Stian Håklev Made sure all links in RSS feed are to online version, not localhost cdc12c4
Apr 04, 2012
Stian Håklev Speed-up and improved formatting
Each citation is now generated only once and stored in item[:cit], where it is reused a number of times.
The main bibliography now only lists publications which have notes or clippings - all publications are still listed on the author/keyword pages.
Names of publications/authors/keywords are no longer printed, but individual and total times are.
Changed the bibliography to use DokuWiki tables instead of HTML (makes links work better locally and remotely).
Some reformatting for legibility.
61568ad
Stian Håklev Quick formatting change fa2a83e
Stian Håklev bibtex-batch RSS avoids duplicates 5b4f9ea
Stian Håklev Enable updating of RSS entries
Now, if you try to add a page to the RSS feed, and the page has already been added, it instead updates its entry.
The Growl message is also updated to reflect this.
0e36d37
Apr 08, 2012
Stian Håklev Adding bibliography selector using Pashua in new global.rb (JSON must be regenerated)

New file global.rb holds global keyboard macros. Cmd+. invokes Pashua selector with bibliography list
(citekey + title). Reconfigured json cache to include title (json has to be regenerated)
6d322c0
Stian Håklev Check if an item has a title, otherwise don't list it 58836ca
Stian Håklev Moving anystyle-import from anystyle-import.rb to global.rb f2f60bd
Stian Håklev Small change in bulletlist to account for (1) entries e746a7d
Stian Håklev First iteration of cleanup script - does a bunch of tests and outputs to HTML page
34825ed
Stian Håklev Making sure it doesn't crash if text cannot be exported from PDF 1465999
Stian Håklev Various improvements to cleanup - show last 7 days 0e323f4
Stian Håklev Added command for deleting all related pages - after showing confirmation box
8e8137b
Stian Håklev Added new pubs last 7 days to cleanup 5adc0a9
Stian Håklev Added numbers of publications for this week's stats in cleanup a2da690
anystyle-import.rb  (24 changed lines)
@@ -1,24 +0,0 @@
-# encoding: UTF-8
-$:.push(File.dirname($0))
-require 'open-uri'
-require 'utility-functions'
-
-# grab from clipboard, either look up DOI through API, or
-# use anystyle parser to convert text to bibtex. Paste to clipboard.
-
-def lookup_doi(doi)
-  doi = doi.downcase.remove(/doi[:>]/,'http://','dx.doi.org/').strip
-  url = "http://dx.doi.org/#{doi}"
-  return open(url, "Accept" => "text/bibliography; style=bibtex").read
-end
-
-search = pbpaste
-if search.strip.downcase[0..2] == "doi"
-  bibtex = lookup_doi(search)
-  growl "Failure", "DOI lookup not successful" unless bibtex
-else
-  require 'anystyle/parser'
-  search = search.gsub("-\n", "").gsub("\n", " ")
-  bibtex = Anystyle.parse(search, :bibtex).to_s
-end
-pbcopy(cleanup_bibtex_string(bibtex))
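The removed lookup_doi relies on DOI content negotiation: asking dx.doi.org for a DOI with an Accept header of text/bibliography; style=bibtex returns a ready-made BibTeX record (presumably the same logic now sits behind doi_to_bibtex, which global.rb calls below). A minimal standalone sketch of that request; the DOI below is only a placeholder:

require 'open-uri'

# Content negotiation against the DOI resolver; substitute a real article DOI.
doi = "10.1000/182"
bibtex = open("http://dx.doi.org/#{doi}",
              "Accept" => "text/bibliography; style=bibtex").read
puts bibtex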
bibtex-batch.rb  (131 changed lines)
@@ -35,6 +35,8 @@ def make_rss_feed

  version = "2.0" # ["0.9", "1.0", "2.0"]

+  urls = Array.new
+
  content = RSS::Maker.make(version) do |m|
    m.channel.title = Wiki_title
    m.channel.link = Internet_path
@@ -42,24 +44,38 @@ def make_rss_feed
    m.items.do_sort = true # sort items by date

    rss_entries.each do |entry|
+      next if urls.index(entry[:link]) # avoid duplicate content (should not be possible, but just in case)
+      urls << entry[:link]
+
      i = m.items.new_item
      i.title = entry[:title]
+      puts "--#{i.title}"
      i.link = entry[:link]
      i.date = entry[:date]
      i.description = Sanitize.clean( entry[:description], Sanitize::Config::RELAXED )
    end
  end

-  File.write(Wiki_path + "/feed.xml", content)
+  # make sure all links point to the online version, not the localhost
+  feedcontent = content.to_s.gsub(Internet_path, Server_path)
+
+  File.write(Wiki_path + "/feed.xml", feedcontent)
end

+timetot = Time.now
+timetmp = Time.now
+
puts "Making RSS feed"
+
make_rss_feed
-puts "RSS feed complete (will be updated next time you sync with server)"
+
+puts "RSS feed complete (will be updated next time you sync with server) (#{Time.now - timetmp} s.)"
+timetmp = Time.now
puts "Parsing BibTeX"
b = BibTeX.parse(File.read(Bibliography))
b.parse_names
-puts "Initial parse complete"
+puts "Initial parse complete (#{Time.now - timetmp} s.)"
+timetmp = Time.now
out1 = ''
out2 = ''
out3 = ''
@@ -75,8 +91,10 @@ def make_rss_feed
counter[:notes] = 0
counter[:clippings] = 0
counter[:images] = 0
+
puts "Starting secondary parse"
b.each do |item|
+  next unless try { item.title } # fragment, doesn't need its own listing
  ax = []
  if item.respond_to? :author
    item.author.each do |a|
@@ -99,13 +117,17 @@ def make_rss_feed
  end

  cit = CiteProc.process item.to_citeproc, :style => :apa
+  cit.gsub!(/Retrieved from(.+?)$/, '')
+  item[:cit] = cit
  year = (defined? item.year) ? item.year.to_s : "n.d."
  if year == "n.d." and cit.match(/\((....)\)/)
    year = $1
  end
-  json[item.key.to_s] = [namify(ax), year, cit]
+  json[item.key.to_s] = [namify(ax), year, cit, item.title]
+
  hasfiles = Array.new
-  hasfiles[4]=""
+  hasfiles[2] = '' # ensure that array is filled even if some fields are empty, for alignment
+
  if File.exists?("#{Wiki_path}/data/pages/ref/#{item.key}.txt")
    counter[:hasref] += 1
    if File.exists?("#{Wiki_path}/data/pages/clip/#{item.key}.txt") || File.exists?("#{Wiki_path}/data/pages/kindle/#{item.key}.txt")
@@ -119,18 +141,15 @@ def make_rss_feed
    if File.exists?("#{Wiki_path}/data/pages/notes/#{item.key}.txt")
      counter[:notes] += 1
      hasfiles[0] = "N"
-      out1 << "<tr><td><a href = 'ref:#{item.key}'>#{item.key}</a></td><td>#{hasfiles.join("</td><td>&nbsp;")}</td><td>#{cit}</td></tr>\n"
-    elsif hasfiles[1] == "C"
-      out2 << "<tr><td><a href = 'ref:#{item.key}'>#{item.key}</a></td><td>#{hasfiles.join("</td><td>&nbsp;")}</td><td>#{cit}</td></tr>\n"
-    else
-      out3 << "<tr><td><a href = 'ref:#{item.key}'>#{item.key}</a></td><td>#{hasfiles.join("</td><td>&nbsp;")}</td><td>#{cit}</td></tr>\n"
+    end

+    txt = "| [#:ref:#{item.key}|#{item.key}] | #{hasfiles.join(" | ")} |#{cit}|\n"

+    if hasfiles[0] == "N"
+      out1 << txt
+    elsif hasfiles[1] == "C"
+      out2 << txt
    end
-
-  else
-    counter[:noref] += 1
-    out4 << "<tr><td>#{item.key}</td><td>#{hasfiles.join("</td><td>&nbsp;")}</td><td>#{cit}</td></tr>\n"
  end

  # mark as read if notes exist
@@ -140,16 +159,22 @@ def make_rss_feed

end

+puts "Finished secondary parse, generating main bibliography (#{Time.now - timetmp} s.)"
+timetmp = Time.now

-puts "Finished secondary parse, generating main bibliography"
-
-out = "h1. Bibliography\n\nDownload [[http://dl.dropbox.com/u/1341682/Bibliography.bib|entire BibTeX file]]. Also see bibliography by [[abib:start|author]] or by [[kbib:start|keyword]].\n\nPublications that have their own pages are listed on top, and hyperlinked. Most of these also have clippings and many have key ideas.\n\nStatistics: Totally **#{counter[:hasref] + counter[:noref]}** publications, and **#{counter[:hasref]}** publications have their own wikipages. Of these, **#{counter[:images]}** with notes (key ideas) **(N)**, **#{counter[:clippings]}** with highlights (imported from Kindle or Skim) **(C)**, and **#{counter[:images]}** with images (imported from Skim) **(I)** and.<html><table>"
+out = "h1. Bibliography\n\nDownload [[http://dl.dropbox.com/u/1341682/Bibliography.bib|entire BibTeX file]].
+Also see bibliography by [[abib:start|author]] or by [[kbib:start|keyword]].\n\nPublications that have their
+own pages are listed on top, and hyperlinked. Most of these also have clippings and many have key ideas.\n\n
+Statistics: Totally **#{counter[:hasref] + counter[:noref]}** publications, and **#{counter[:hasref]}**
+publications have their own wikipages. Of these, **#{counter[:images]}** with notes (key ideas) **(N)**,
+**#{counter[:clippings]}** with highlights (imported from Kindle or Skim) **(C)**, and **#{counter[:images]}**
+with images (imported from Skim) **(I)**\n\n"

#dt.document.save

File.open("#{Wiki_path}/lib/plugins/dokuresearchr/json.tmp","w"){|f| f << JSON.fast_generate(json)}

-out << out1 << out2 << out3 << out4 << "</table></html>"
+out << out1 << out2 << out3
File.open("#{Wiki_path}/data/pages/bib/bibliography.txt", 'w') {|f| f << out}

###############################################
@@ -170,11 +195,10 @@ def make_rss_feed
  out = "h2. #{author}'s publications\n\n"
  sort_pubs(pubs).each do |i|
    item = b[i]
-    cit = CiteProc.process item.to_citeproc, :style => :apa
    if File.exists?("#{Wiki_path}/data/pages/ref/#{item.key}.txt")
-      out1 << "| [#:ref:#{item.key}] | #{cit}|#{pdfpath(item.key)}|\n"
+      out1 << "| [#:ref:#{item.key}|#{item.key}] | #{item[:cit]}|#{pdfpath(item.key)}|\n"
    else
-      out2 << "| #{item.key} | #{cit}|#{pdfpath(item.key)}|\n"
+      out2 << "| #{item.key} | #{item[:cit]}|#{pdfpath(item.key)}|\n"
    end
  end

@@ -182,7 +206,6 @@ def make_rss_feed
  authorname = clean_pagename(author)
  authorlisted << [authorname,author,pubs.size]
  File.open("#{Wiki_path}/data/pages/abib/#{authorname}.txt", 'w') {|f| f << out}
-  puts author
end

File.open("#{Wiki_path}/data/pages/abib/start.txt","w") do |f|
@@ -198,44 +221,46 @@ def make_rss_feed
end
###############################################
# generate individual files for each keyword
-
+puts "Finished (#{Time.now - timetmp} s.)"
if keywordopt
+  timetmp = Time.now
  puts "Generating individual files for each keyword"

-keywordslisted = Array.new
-keywords.each do |keyword, pubs|
-  out =''
-  out1 = ''
-  out2 =''
-  out = "h2. Publications with keyword \"#{keyword}\"\n\n"
-  sort_pubs(pubs).each do |i|
-    item = b[i]
-    cit = CiteProc.process item.to_citeproc, :style => :apa
-    if File.exists?("#{Wiki_path}/data/pages/ref/#{item.key}.txt")
-      out1 << "| [#:ref:#{item.key}] | #{cit}| #{pdfpath(item.key)} |\n"
-    else
-      out2 << "| #{item.key} | #{cit} | #{pdfpath(item.key)}|\n"
+  keywordslisted = Array.new
+  keywords.each do |keyword, pubs|
+    out =''
+    out1 = ''
+    out2 =''
+    out = "h2. Publications with keyword \"#{keyword}\"\n\n"
+    sort_pubs(pubs).each do |i|
+      item = b[i]
+      if File.exists?("#{Wiki_path}/data/pages/ref/#{item.key}.txt")
+        out1 << "| [#:ref:#{item.key}|#{item.key}] | #{item[:cit]}| #{pdfpath(item.key)} |\n"
+      else
+        out2 << "| #{item.key} | #{item[:cit]} | #{pdfpath(item.key)}|\n"
+      end
    end
-  end

-  out << out1 << out2
-  kwname = keyword.gsub(/[\,\.\/ ]/,"_").downcase
-  keywordslisted << [kwname,keyword,pubs.size]
-  File.open("#{Wiki_path}/data/pages/kbib/#{kwname}.txt", 'w') {|f| f << out}
-  puts kwname
-end
+    out << out1 << out2
+    kwname = keyword.gsub(/[\,\.\/ ]/,"_").downcase
+    keywordslisted << [kwname,keyword,pubs.size]
+    File.open("#{Wiki_path}/data/pages/kbib/#{kwname}.txt", 'w') {|f| f << out}
+  end

-File.open("#{Wiki_path}/data/pages/kbib/start.txt","w") do |f|
-  f << "h1. List of publication keywords\n\n"
-  keywordslisted.sort {|x,y| y[2].to_i <=> x[2].to_i}.each do |ax|
-    f << "|[##{ax[0]}|#{ax[1]}]|#{ax[2]}|\n"
+  File.open("#{Wiki_path}/data/pages/kbib/start.txt","w") do |f|
+    f << "h1. List of publication keywords\n\n"
+    keywordslisted.sort {|x,y| y[2].to_i <=> x[2].to_i}.each do |ax|
+      f << "|[##{ax[0]}|#{ax[1]}]|#{ax[2]}|\n"
+    end
  end
-end
+  puts "Finished (#{Time.now - timetmp} s.)"
+
end
###############################################
# generate individual files for each journal with more than five cits.

if journalopt
+  timetmp = Time.now
  puts "Generating individual files for each journal"

authorlisted = Array.new
@@ -244,18 +269,16 @@ def make_rss_feed
  out1 = ''
  out2 =''
  author = axx.strip
-  p pubs, pubs.size
  next unless pubs.size > 5
  # only generates individual author pages for authors with full names. this is because I want to deduplicate author names
  # when you import bibtex, you get many different spellings etc.
  out = "h2. Publications in #{author}\n\n"
  sort_pubs(pubs).each do |i|
    item = b[i]
-    cit = CiteProc.process item.to_citeproc, :style => :apa
    if File.exists?("#{Wiki_path}/data/pages/ref/#{item.key}.txt")
-      out1 << "| [#:ref:#{item.key}] | #{cit}|#{pdfpath(item.key)}|\n"
+      out1 << "| [#:ref:#{item.key}|#{item.key}] | #{item[:cit]}|#{pdfpath(item.key)}|\n"
    else
-      out2 << "| #{item.key} | #{cit}|#{pdfpath(item.key)}|\n"
+      out2 << "| #{item.key} | #{item[:cit]}|#{pdfpath(item.key)}|\n"
    end
  end

@@ -263,7 +286,6 @@ def make_rss_feed
  authorname = clean_pagename(author)
  authorlisted << [authorname,author,pubs.size]
  File.open("#{Wiki_path}/data/pages/jbib/#{authorname}.txt", 'w') {|f| f << out}
-  puts author
end
end

@@ -276,6 +298,7 @@ def make_rss_feed
    end
    f << "| [##{ax[0]}|#{ax[1]}] | #{apage} |#{ax[2]}|\n"
  end
+  puts "Finished (#{Time.now - timetmp} s.)"
end


@@ -293,3 +316,5 @@ def make_rss_feed
#   end
# end
# File.open("#{Wiki_path}/data/pages/bib/needs_key_ideas.txt","w") {|f| f << out}
+
+puts "All tasks finished. Total time #{Time.now - timetot} s."
cleanup.rb  (63 changed lines)
@@ -0,0 +1,63 @@
+# encoding: UTF-8
+
+# performing various cleanup functions
+
+$:.push(File.dirname($0))
+require 'utility-functions'
+require 'appscript'
+include Appscript
+
+def bname(ary)
+  ary.map {|f| File.basename(f).remove(".txt")}
+end
+
+refs = bname(Dir[Wiki_path + "/data/pages/ref/*.txt"])
+skimg = bname(Dir[Wiki_path + "/data/pages/skimg/*.txt"])
+clips = bname(Dir[Wiki_path + "/data/pages/clip/*.txt"])
+notes = bname(Dir[Wiki_path + "/data/pages/notes/*.txt"])
+kindle = bname(Dir[Wiki_path + "/data/pages/kindle/*.txt"])
+
+refs_week = bname(`find /wiki/data/pages/ref/*.txt -mtime -7`.split("\n"))
+notes_week = bname(`find /wiki/data/pages/notes/*.txt -mtime -7`.split("\n"))
+
+notes_short_week = []
+notes_week.each {|f| notes_short_week << f unless File.size("#{Wiki_path}/data/pages/notes/#{f}.txt") > 500}
+
+# check off all the notes in BibDesk, takes a few seconds
+# notes.each do |n|
+#   bibdesk_publication = try { app("BibDesk").document.search({:for =>n})[0] }
+#   bibdesk_publication.fields["Notes"].value.set("1") if bibdesk_publication
+# end
+
+puts "<html><head><title>Researchr cleanup script report</title></head><body>"
+puts "<h1>Researchr cleanup script report</h1>"
+
+this = notes_week - notes_short_week
+puts "<h2>New publications added last 7 days with decent-sized notes (#{this.size})</h2>"
+this.each {|a| puts "<li><a href='#{Internet_path}/ref:#{a}'>#{a}</a></li>"}
+
+puts "<h2>New publications added last 7 days with brief notes (#{notes_short_week.size})</h2>"
+(notes_short_week).each {|a| puts "<li><a href='#{Internet_path}/ref:#{a}'>#{a}</a></li>"}
+
+this = refs_week - notes_week
+puts "<h2>New publications added last 7 days without notes (#{this.size})</h2>"
+this.each {|a| puts "<li><a href='#{Internet_path}/ref:#{a}'>#{a}</a></li>"}
+
+puts "<hr><h2>Notes pages without ref page</h2>"
+(notes - refs).each {|a| puts "<li><a href='#{Internet_path}/notes:#{a}'>#{a}</a></li>"}
+
+puts "<h2>Clipping pages without ref page</h2>"
+(clips - refs).each {|a| puts "<li><a href='#{Internet_path}/clip:#{a}'>#{a}</a></li>"}
+
+puts "<h2>Image pages without ref page</h2>"
+(skimg - refs).each {|a| puts "<li><a href='#{Internet_path}/skimg:#{a}'>#{a}</a></li>"}
+
+puts "<h2>Kindle pages without ref page</h2>"
+(kindle - refs).each {|a| puts "<li><a href='#{Internet_path}/kindle:#{a}'>#{a}</a></li>"}
+
+puts "<h2>Ref pages with no sub-pages</h2>"
+(refs - (skimg + clips + notes + kindle)).each {|a| puts "<li><a href='#{Internet_path}/ref:#{a}'>#{a}</a></li>"}
+
+# how to do a union of two arrays?
+#puts "<h2>Kindle pages that also has clipping page</h2>"
+#(kindle ).each {|a| puts "<li><a href='#{Internet_path}/kindle:#{a}'>#{a}</a></li>"}
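On the question in the closing comment: listing Kindle pages that also have a clipping page is an intersection rather than a union, and Ruby arrays offer both as operators. A minimal sketch with made-up page names:

kindle = ["smith2010learning", "jones2011design"]
clips  = ["jones2011design", "lee2009cscl"]

puts kindle & clips   # => jones2011design    (intersection: pages in both lists)
puts kindle | clips   # union of the two lists, duplicates removed
puts kindle - clips   # => smith2010learning  (difference, as used elsewhere in the script)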
dokuwiki.rb  (67 changed lines)
@@ -140,8 +140,12 @@ def import_bibtex
# is executed (Ctrl+Alt+Cmd+F)
def add_to_rss
  require 'open-uri'
+  require 'cgi'
+
  fname = Wiki_path + "/rss-temp"
+
  internalurl = cururl.split("/").last
+  url = "#{Internet_path}/#{internalurl}"

  # load existing holding file, or start form scratch
  if File.exists?(fname)
@@ -164,13 +168,35 @@ def add_to_rss
    /\<div class\=\"plugin\_include\_content\ plugin\_include\_\_kindle(.+?)\<\/div\>/m
  )

-
  title = page_contents.scan(/\<h1(.+?)id(.+?)>(.+)\<(.+?)\<\/h1\>/)[0][2]
-  rss_entries << {:title => title, :date => Time.now, :link => "#{Internet_path}/#{internalurl}", :description => contents}
+  title = CGI.unescapeHTML(title)
+
+  entry_contents = {:title => title, :date => Time.now, :link => url, :description => contents}
+
+  exists = false
+
+  rss_entries.map! do |entry|
+    if entry[:link] == url
+      exists = true
+      entry_contents
+    else
+      entry
+    end
+  end
+
+  unless exists
+    rss_entries << entry_contents
+  end
+
+  rss_entries = rss_entries.drop(1) if rss_entries.size > 15

-  rss_entries = rss_entries.drop(1) if rss_entries.size > 10
  File.write(fname, Marshal::dump(rss_entries))
-  growl("Article added to feed", "'#{title}' added to RSS feed")
+
+  if exists
+    growl("Article updated", "Article #{title} updated")
+  else
+    growl("Article added to feed", "'#{title}' added to RSS feed")
+  end
end

# pops up dialogue box, asking where to send text, takes selected text (or just link, if desired) and inserts at the bottom
@@ -254,7 +280,7 @@ def bulletlist
    splt = "\n"
  elsif a.scan(")").size > a.scan("(").size + 2
    splt = ")"
-    a.gsub!(/[, ]*\d+\)/,")")
+    a.gsub!(/[, (]*\d+\)/,")")
  elsif a.scan(";").size > 1
    splt = ";"
  elsif a.scan(".").size > 2
@@ -269,7 +295,8 @@ def bulletlist

  splits = a.split(splt)

-  if splits.last.index(" and ") # deal with situation where the last two items are delimited with "and"
+  # deal with situation where the last two items are delimited with "and", but not for line shift or 1) 2) kind of lists
+  if splits.last.index(" and ") && !(splt == "\n" || splt == ")")
    x,y = splits.last.split(" and ")
    splits.pop
    splits << x
@@ -367,6 +394,34 @@ def newauthor
  `open "http://localhost/wiki/a:#{page}?do=edit"`
end

+# removes current page and all related pages (ref, skimg etc) after confirmation
+def delete
+  require 'pashua'
+  include Pashua
+  config = <<EOS
+  *.title = Delete this page?
+  cb.type = text
+  cb.text = This action will delete this page, and all related pages (ref:, notes:, skimg:, kindle:, etc). Are you sure?
+  cb.width = 220
+  db.type = cancelbutton
+  db.label = Cancel
+EOS
+  pagetmp = pashua_run config
+  exit if pagetmp['db'] == "1"
+
+  page = cururl.split(":").last.downcase
+
+  directories = %w[ref notes skimg kindle]
+  paths = directories.map {|f| "#{Wiki_path}/data/pages/#{f}/#{page}.txt"}
+
+  c = 0
+  paths.each do |f|
+    c += 1 if try { File.delete(f) }
+  end
+
+  growl "#{c} pages deleted"
+end
+
#### Running the right function, depending on command line input ####

@chrome = Appscript.app('Google Chrome')
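The update-or-append behaviour added to add_to_rss hinges on rss_entries.map! swapping in the fresh entry hash wherever the link already matches, and only appending when nothing matched. A self-contained sketch of that pattern; the URL and titles are invented:

url         = "http://example.org/wiki/ref:smith2010learning"
rss_entries = [{:link => url, :title => "old title"}]
new_entry   = {:link => url, :title => "new title", :date => Time.now}

exists = false
rss_entries.map! do |entry|
  if entry[:link] == url
    exists = true
    new_entry   # replace the stale entry in place
  else
    entry       # keep everything else untouched
  end
end
rss_entries << new_entry unless exists

puts exists ? "Article updated" : "Article added to feed"   # => Article updated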
global.rb  (58 changed lines)
@@ -0,0 +1,58 @@
+# encoding: UTF-8
+$:.push(File.dirname($0))
+require 'utility-functions'
+
+# Contains keyboard related functionality which can be invoked from any publication
+
+# triggered through Cmd+. shows a Pashua list of all references with titles
+# upon selection, a properly formatted citation like [@scardamalia2004knowledge] is inserted
+def bib_selector
+  require 'pashua'
+  include Pashua
+
+  bib = json_bib
+
+  config = "
+  *.title = researchr
+  cb.type = combobox
+  cb.completion = 2
+  cb.label = Insert a citation
+  cb.width = 800
+  cb.tooltip = Choose from the list or enter another name
+  db.type = cancelbutton
+  db.label = Cancel
+  db.tooltip = Closes this window without taking action"
+
+  # create list of citations
+  out = ''
+  json_bib.sort.each do |a|
+    out << "cb.option = #{a[0]}: #{a[1][3][0..90]}\n"
+  end
+
+  # show dialogue
+  pagetmp = pashua_run config + out
+
+  exit if pagetmp['cancel'] == 1
+
+  /^(?<citekey>.+?)\:/ =~ pagetmp['cb']  # extract citekey from citekey + title string
+
+  pbcopy("[@#{citekey}]")
+end
+
+# grab from clipboard, either look up DOI through API, or
+# use anystyle parser to convert text to bibtex. Paste to clipboard.
+def anystyle_parse
+  search = pbpaste
+  if search.strip.downcase[0..2] == "doi"
+    bibtex = doi_to_bibtex(search)
+    growl "Failure", "DOI lookup not successful" unless bibtex
+  else
+    require 'anystyle/parser'
+    search = search.gsub("-\n", "").gsub("\n", " ")
+    bibtex = Anystyle.parse(search, :bibtex).to_s
+  end
+
+  pbcopy(cleanup_bibtex_string(bibtex))
+end
+
+send *ARGV unless ARGV == []
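The final line dispatches the first command-line argument as a method name via send, so the keyboard macros presumably invoke the script as something like ruby global.rb bib_selector. A toy sketch of the same idiom; the greet method is invented for illustration:

# Save as example.rb and run:  ruby example.rb greet world
def greet(name)
  puts "Hello, #{name}"
end

# ARGV[0] names the method to call, any further arguments are passed through.
send(*ARGV) unless ARGV.empty?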
skim.rb  (8 changed lines)
@@ -84,7 +84,7 @@ def format(type, text, page)
      type = $1
      text = ''
    else
-      text << line  # just add the text
+      text << line.gsub(/([a-zA-Z])\- ([a-zA-Z])/, '\1\2')  # just add the text (mend split words)
      alltext << line
    end
  end
@@ -96,10 +96,12 @@ def format(type, text, page)
  `/usr/local/bin/pdftotext "#{filename}"`
  ftlines = `wc "#{PDF_path}/#{Citekey}.txt"`.split(" ")[1].to_f
  `rm "#{PDF_path}/#{Citekey}.txt"`
-  percentage = ntlines/ftlines*100

  @out << process(type, text, page)  # pick up the last annotation
-  outfinal = "h2. Highlights (#{percentage.to_i}%)\n\n" + @out.join('')
+
+  percentage_text = ftlines.to_i > 0 ? " (#{(ntlines/ftlines*100).to_i}%)" : ""
+
+  outfinal = "h2. Highlights#{percentage_text}\n\n" + @out.join('')
  File.write("/tmp/skimtmp", outfinal)
  `/wiki/bin/dwpage.php -m 'Automatically extracted from Skim' commit /tmp/skimtmp 'clip:#{Citekey}'`
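The added gsub is what mends hyphen-split words such as the "construc- tivism" example from the commit message: it joins any letter, hyphen, space, letter sequence back together. A quick sketch on an invented line of text:

line = "construc- tivism and socio- cultural theory"
puts line.gsub(/([a-zA-Z])\- ([a-zA-Z])/, '\1\2')
# => constructivism and sociocultural theory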
 
utility-functions.rb  (2 changed lines)
@@ -384,7 +384,7 @@ def add_to_jsonbib(citekey)
  year = $1 if year == "n.d." and cit.match(/\((....)\)/)

  json = JSON.parse(File.read(JSON_path))
-  json[item.key.to_s] = [namify(ax), year, cit]
+  json[item.key.to_s] = [namify(ax), year, cit, item.title]
  File.write(JSON_path, JSON.fast_generate(json) )
end
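With this change each JSON cache entry carries four fields, with the new title at index 3, which is what bib_selector in global.rb reads back as a[1][3]. A sketch of one entry with made-up values:

require 'json'

# Hypothetical cache entry; real keys and values come from the BibTeX file.
json = { "smith2010learning" =>
         ["Smith", "2010", "Smith, J. (2010). Learning by doing.", "Learning by doing"] }
puts JSON.fast_generate(json)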
 

No commit comments for this range
