0.4.0 - geodict, geoparser fixes

1 parent a921352 commit 5cc640c6be6b36138ebfa8365d19a70c9da2a5ab @corajr corajr committed Mar 12, 2013
@@ -6,18 +6,18 @@ Paper Machines is an open-source extension for the [Zotero](http://www.zotero.or
This project is a collaboration between historian [Jo Guldi](http://www.joguldi.com) and digital ethnomusicologist [Chris Johnson-Roberson](http://www.chrisjr.org), graciously supported by Google Summer of Code, the William F. Milton Fund, and [metaLAB @ Harvard](http://metalab.harvard.edu/).
-**NOTE:** Paper Machines now bundles Jython 2.7a2 to ensure broader compatibility. If you encounter problems using the extension, please create a Github issue describing what operating system and version of Java you have installed, as well as the nature of the issue.
+**NOTE:** Paper Machines now bundles Jython 2.7a2 to ensure broader compatibility. If you encounter problems using the extension, please create an issue describing your operating system, the version of Java you have installed, and the nature of the issue.
## Prerequisites
-In order to run Paper Machines, you will need the following (Java should be installed automatically on Mac OS X 10.6-10.7; if you are running Mac OS 10.8, please download it from the link below):
+In order to run Paper Machines, you will need the following (Java should be installed automatically on Mac OS X 10.6-10.7. If you are running Mac OS 10.8, please download it from the link below):
* [Zotero](http://www.zotero.org/) with PDF indexing tools installed (see the Search pane of Zotero's Preferences)
* a corpus of documents with full text PDF/HTML and high-quality metadata (recommended: at least 1,000 for topic modeling purposes)
* Java ([download page](http://java.com/en/download/index.jsp))
## Installation
-Paper Machines should work either in Zotero for Firefox or Zotero Standalone. To install, you must download the <a href="https://github.com/downloads/chrisjr/papermachines/papermachines-0.4.0pre2.xpi">XPI file</a>. If you wish to use the extension in the Standalone version, right-click on the link and save the XPI file in your Downloads folder. Then, in Zotero Standalone, go to the Tools menu -> Add-Ons. Select the gear icon at the right, then "Install Add-On From File." Navigate to your Downloads folder (or wherever you have saved the XPI file) and open it.
+Paper Machines should work either in Zotero for Firefox or Zotero Standalone. To install, you must download the <a href="http://www.papermachines.org/download/papermachines-0.4.0.xpi">XPI file</a>. If you wish to use the extension in the Standalone version, right-click on the link and save the XPI file in your Downloads folder. Then, in Zotero Standalone, go to the Tools menu -> Add-Ons. Select the gear icon at the right, then "Install Add-On From File." Navigate to your Downloads folder (or wherever you have saved the XPI file) and open it.
## Usage
To begin, right-click (control-click for Mac) on the collection you wish to analyze and select "Extract Texts for Paper Machines." Once the extraction process is complete, this right-click menu will offer several different processes that may be run on a collection, each with an accompanying visualization. Once these processes have been run, selecting "Export Output of Paper Machines..." will allow you to choose which visualizations to export.
@@ -42,19 +42,16 @@ Creates a CSV file with place name, latitude/longitude, the Zotero item ID numbe
Annotates files using the DBpedia Spotlight service, providing a look at what named entities (people, places, organizations, etc.) are mentioned in the texts. Entities are scaled according to the frequency of their occurrence.
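For illustration, here is a minimal sketch of the kind of request this step makes; the endpoint URL, parameters, and response keys are assumptions about the public DBpedia Spotlight REST API, not code taken from Paper Machines itself:

```python
# Hypothetical sketch of a DBpedia Spotlight annotation call (Python 2.7);
# the endpoint, parameters, and JSON keys are assumptions, and Paper Machines
# performs this step internally.
import json, urllib, urllib2

def spotlight_annotate(text, confidence=0.4):
    params = urllib.urlencode({'text': text.encode('utf-8', 'ignore'),
                               'confidence': confidence})
    req = urllib2.Request("http://spotlight.dbpedia.org/rest/annotate?" + params,
                          headers={'Accept': 'application/json'})
    resources = json.load(urllib2.urlopen(req)).get('Resources', [])
    # Each resource pairs a DBpedia URI with the surface form matched in the text.
    return [(r.get('@URI'), r.get('@surfaceForm')) for r in resources]

print spotlight_annotate(u"Mozart was born in Salzburg.")
```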
### Topic Modeling
-Shows the proportional prevalence of different "topics" (collections of words likely to co-occur) in the corpus, by time or by subcollection. This uses the [MALLET](http://mallet.cs.umass.edu) package to perform [latent Dirichlet allocation](http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation), and by default displays the 5 most "coherent" topics, based on a metric devised by [Mimno et al.](http://www.cs.princeton.edu/~mimno/papers/mimno-semantic-emnlp.pdf) A variety of topic model parameters can be specified before the model is created. The default values should be suitable for general purpose use, but they may be adjusted to produce a better model.
+Shows the proportional prevalence of different "topics" (collections of words likely to co-occur) in the corpus, by time or by subcollection. This uses the [MALLET](http://mallet.cs.umass.edu) package to perform [latent Dirichlet allocation](http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation), and by default displays the 5 most "coherent" topics, based on a metric devised by [Mimno et al.](http://www.cs.princeton.edu/~mimno/papers/mimno-semantic-emnlp.pdf) A variety of topic model hyperparameters can be specified before the model is created.
After the model is generated, clicking "Save" in the display will open a new window showing the graph free of interactive controls; this window may be saved as an ".SVG" file or captured via screenshot. In the original window, the current selection of topics, search terms, and time scale will also be preserved as a permalink; please bookmark this if you wish to return to a specific view with the interactive controls intact.
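To make the hyperparameters concrete, the sketch below shows roughly how an equivalent model could be trained by calling MALLET directly; the paths and parameter values are placeholders, and this is not how Paper Machines invokes MALLET internally:

```python
# Hypothetical sketch of training an LDA topic model with the MALLET
# command-line tool from Python 2.7; paths and parameter values are placeholders.
import subprocess

MALLET = "/path/to/mallet/bin/mallet"  # placeholder path to the MALLET launcher

# Import a directory of plain-text documents, keeping word order and
# removing a standard stoplist.
subprocess.check_call([MALLET, "import-dir",
                       "--input", "extracted_texts",
                       "--output", "corpus.mallet",
                       "--keep-sequence", "--remove-stopwords"])

# Train the model; --optimize-interval lets MALLET re-estimate the alpha/beta
# hyperparameters every 10 iterations instead of keeping them fixed.
subprocess.check_call([MALLET, "train-topics",
                       "--input", "corpus.mallet",
                       "--num-topics", "50",
                       "--num-iterations", "1000",
                       "--optimize-interval", "10",
                       "--output-topic-keys", "topic_keys.txt",
                       "--output-doc-topics", "doc_topics.txt"])
```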
#### JSTOR Data For Research
The topic model can be supplemented with datasets from [JSTOR Data For Research](http://dfr.jstor.org/). You must first [register](http://dfr.jstor.org/accounts/register/) for an account, after which you may search for additional articles based on keywords, years of publication, specific journals, and so on. Once the search is to your liking, go to the Dataset Requests menu at the upper right and click "Submit New Request." Check the "Citations" and "Word Counts" boxes, select CSV output format, and enter a short job title that describes your query. Once you click "Submit Job", you will be taken to a history of your submitted requests. You will be e-mailed once the dataset is complete. Click "Download (#### docs)" in the Full Dataset column, and a zip file timestamped with the request time will be downloaded. This file (or several files from related queries) may then be incorporated into a model by selecting "By Time (With JSTOR DFR)" in the Topic Modeling submenu of Paper Machines. Multiple dataset zips will be merged and duplicates discarded before analysis begins; be warned that this may take a considerable amount of time (roughly 15-30 minutes) before any progress is shown.
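As a rough picture of the merge-and-deduplicate step, the sketch below combines the citation lists from several DFR zips and drops repeated document IDs; the internal file names and the "id" column are assumptions, not a format guaranteed by JSTOR or used verbatim by Paper Machines:

```python
# Hypothetical sketch of merging several JSTOR DFR downloads (Python 2.7);
# the citations file name pattern and the "id" column are assumptions.
import csv, zipfile

def merge_dfr_citations(zip_paths):
    seen_ids = set()
    merged = []
    for path in zip_paths:
        with zipfile.ZipFile(path) as z:
            for name in z.namelist():
                if not name.lower().endswith("citations.csv"):
                    continue
                for row in csv.DictReader(z.open(name)):
                    doc_id = row.get("id")
                    if doc_id and doc_id not in seen_ids:
                        seen_ids.add(doc_id)
                        merged.append(row)
    return merged

# merged = merge_dfr_citations(["2013-03-01_request.zip", "2013-03-05_request.zip"])
```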
-### Classification
-This allows you to train the computer to infer the common features of the documents under each subcollection; subsequently, a set of texts in a different folder can be sorted automatically based on this training. At the moment, the probability distribution for each text is given in plain text; the ability to automatically generate a new collection according to this sorting is forthcoming.
-
### Preferences
-Currently, the language stoplist in use, types of data to extract, default parameters for topic modeling, and an experimental periodical import feature (intended for PDFs with OCR and correct metadata) may be adjusted in the preference pane.
+Currently, the language stoplist in use, types of data to extract, and default parameters for topic modeling may be adjusted in the preference pane. Any custom stopwords may be added to the "Stop Words" pane, one per line, to help eliminate irrelevant terms from your data.
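A minimal sketch of how a one-word-per-line stoplist of this kind is typically applied (the file name and whitespace-based tokenization here are simplifications, not the extension's exact behavior):

```python
# Hypothetical sketch of applying a one-word-per-line stoplist (Python 2.7);
# the file name and the simple tokenizer are placeholders.
import codecs, re

def load_stopwords(path):
    with codecs.open(path, 'r', encoding='utf-8') as f:
        return set(line.strip().lower() for line in f if line.strip())

def remove_stopwords(text, stopwords):
    tokens = re.findall(r"\w+", text.lower(), re.UNICODE)
    return [t for t in tokens if t not in stopwords]

stops = load_stopwords("stopwords_en.txt")  # placeholder file name
print remove_stopwords(u"the quick brown fox jumps over the lazy dog", stops)
```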
## Acknowledgements
Special thanks to [Matthew Battles](http://metalab.harvard.edu/people/) for providing space, guidance, and support for me at metaLAB. My gratitude also to the creators of all the open-source projects and services upon which this project relies:
@@ -1,48 +1,88 @@
#!/usr/bin/env python2.7
import sys, os, json, logging, traceback, base64, time, codecs, urllib, urllib2
-from xml.etree import ElementTree as ET
+from collections import defaultdict
+from lib.classpath import classPathHacker
+
import textprocessor
class Geoparser(textprocessor.TextProcessor):
"""
- Geoparsing using Europeana service (experimental)
+ Geoparsing using Pete Warden's geodict
"""
- def _basic_params(self):
- self.name = "geoparser"
- self.dry_run = False
- self.require_stopwords = False
-
- def annotate(self, text):
- values = {'freeText': text[0:10000].encode('utf-8', 'ignore')}
- data = urllib.urlencode(values)
- req = urllib2.Request("http://europeana-geo.isti.cnr.it/geoparser/geoparsing/freeText", data)
- response = urllib2.urlopen(req)
- annotation = response.read()
- return annotation
-
- def get_places(self, xml_string):
- xml_string = xml_string.replace("\n", " ")
- elem = ET.fromstring(xml_string)
- annotated = elem.find('annotatedText')
-
- current_length = 0
- for entity in annotated.getiterator():
- if entity.tag == 'PLACE':
- place = {"name": entity.text, "entityURI": entity.get("entityURI"), "latitude": entity.get("latitude"), "longitude": entity.get("longitude")}
- if entity.text is not None:
- reference = [current_length, current_length + len(entity.text)]
- current_length += len(entity.text)
- if entity.tail is not None:
- current_length += len(entity.tail)
- yield place, reference
- else:
- if entity.text is not None:
- current_length += len(entity.text)
- if entity.tail is not None:
- current_length += len(entity.tail)
+ def get_containing_paragraph(self, text, match):
+ start = match[0]
+ end = match[1]
+ chars_added = 0
+ c = text[start]
+ while c != '\n' and chars_added < 50 and start > 0:
+ start -= 1
+ chars_added += 1
+ c = text[start]
+
+ chars_added = 0
+ end = min(len(text) - 1, end)
+ c = text[end]
+
+ while c != '\n' and chars_added < 50 and end < len(text):
+ c = text[end]
+ end += 1
+ chars_added += 1
+
+ return text[start:end]
+
+ def contexts_from_geoparse_obj(self, geoparse_obj, filename):
+ contexts_obj = defaultdict(list)
+ with codecs.open(filename, 'rU', encoding='utf-8') as f:
+ text = f.read()
+
+ for entityURI, matchlist in geoparse_obj.get("references", {}).iteritems():
+ for match in matchlist:
+ paragraph = self.get_containing_paragraph(text, match)
+ geonameid = entityURI.split('/')[-1]
+ contexts_obj[geonameid].append(paragraph)
+
+ contexts_json = filename.replace(".txt", "_contexts.json")
+ contexts_obj = dict(contexts_obj)
+ with file(contexts_json, 'w') as f:
+ json.dump(contexts_obj, f)
+ return contexts_obj
+
+ def get_places(self, string, find_func):
+ try:
+ geodict_locations = find_func(string)
+ for location in geodict_locations:
+ found_tokens = location['found_tokens']
+ start_index = found_tokens[0]['start_index']
+ end_index = found_tokens[len(found_tokens)-1]['end_index']
+ name = string[start_index:(end_index+1)]
+ geonameid = found_tokens[0].get('geonameid', None)
+ entityURI = "http://sws.geonames.org/" + str(geonameid) if geonameid else None
+ geotype = found_tokens[0]['type'].lower()
+ lat = found_tokens[0]['lat']
+ lon = found_tokens[0]['lon']
+
+ if entityURI is None:
+ continue
+
+ place = {"name": name, "entityURI": entityURI, "latitude": lat, "longitude": lon, "type": geotype}
+ reference = [start_index, end_index]
+ yield place, reference
+ except:
+ logging.error(traceback.format_exc())
def run_geoparser(self):
+ import __builtin__
+ jarLoad = classPathHacker()
+ sqlitePath = os.path.join(self.cwd, "lib", "geodict", "sqlite-jdbc-3.7.2.jar")
+ jarLoad.addFile(sqlitePath)
+
+ import lib.geodict.geodict_config
+
+ self.database_path = os.path.join(self.cwd, "lib", "geodict", "geodict.db")
+
+ from lib.geodict.geodict_lib import GeodictParser
+
geo_parsed = {}
places_by_entityURI = {}
@@ -57,11 +97,14 @@ def run_geoparser(self):
self.update_progress()
file_geoparsed = filename.replace(".txt", "_geoparse.json")
+ contexts_json = filename.replace(".txt", "_contexts.json")
if os.path.exists(file_geoparsed):
try:
geoparse_obj = json.load(file(file_geoparsed))
if "places_by_entityURI" in geoparse_obj:
+ if not os.path.exists(contexts_json):
+ self.contexts_from_geoparse_obj(geoparse_obj, filename)
continue
else:
os.remove(file_geoparsed)
@@ -75,24 +118,25 @@ def run_geoparser(self):
id = self.metadata[filename]['itemID']
str_to_parse = self.metadata[filename]['place']
last_index = len(str_to_parse)
- str_to_parse += codecs.open(filename, 'r', encoding='utf8').read()[0:(48000 - last_index)] #50k characters, shortened by initial place string
+ str_to_parse += codecs.open(filename, 'rU', encoding='utf8').read()
city = None
places = set()
- xml_filename = filename.replace('.txt', '_geoparse.xml')
+ json_filename = filename.replace('.txt', '_geodict.json')
- if not os.path.exists(xml_filename):
- annotation = self.annotate(str_to_parse)
- with codecs.open(xml_filename, 'w', encoding='utf8') as xml_file:
- xml_file.write(annotation.decode('utf-8'))
+ if not os.path.exists(json_filename):
+ parser = GeodictParser(self.database_path)
+ places_found = list(self.get_places(str_to_parse, parser.find_locations_in_text))
+ with codecs.open(json_filename, 'w', encoding='utf8') as json_file:
+ json.dump(places_found, json_file)
else:
- with codecs.open(xml_filename, 'r', encoding='utf8') as xml_file:
- annotation = xml_file.read()
+ with codecs.open(json_filename, 'r', encoding='utf8') as json_file:
+ places_found = json.load(json_file)
- for place, reference in self.get_places(annotation):
+ for (place, reference) in places_found:
entityURI = place["entityURI"]
- geoparse_obj['places_by_entityURI'][entityURI] = {'name': place["name"], 'type': 'unknown', 'coordinates': [place["longitude"], place["latitude"]]}
+ geoparse_obj['places_by_entityURI'][entityURI] = {'name': place["name"], 'type': place["type"], 'coordinates': [place["longitude"], place["latitude"]]}
if reference[0] < last_index:
city = entityURI
@@ -133,7 +177,10 @@ def run_geoparser(self):
geoparse_obj['places'] = list(places)
geoparse_obj['city'] = city
- json.dump(geoparse_obj, file(file_geoparsed, 'w'))
+ with file(file_geoparsed, 'w') as f:
+ json.dump(geoparse_obj, f)
+ if not os.path.exists(contexts_json):
+ self.contexts_from_geoparse_obj(geoparse_obj, filename)
time.sleep(0.2)
except (KeyboardInterrupt, SystemExit):
raise
@@ -42,7 +42,7 @@ def process(self):
title = os.path.basename(filename)
itemID = self.metadata[filename]['itemID']
year = self.metadata[filename]['year']
- text = codecs.open(filename, 'r', encoding='utf-8', errors='ignore').read()
+ text = codecs.open(filename, 'rU', encoding='utf-8', errors='replace').read()
maximum_length = len(text)
for entityURI, ranges in geoparse_obj["references"].iteritems():
@@ -65,7 +65,6 @@ def process(self):
logging.info(traceback.format_exc())
params = {"CSVPATH": csv_output_filename}
-# "CSVFILEURL": "file://" + urllib.pathname2url(os.path.dirname(csv_output_filename))}
self.write_html(params)
logging.info("finished")
@@ -1,5 +1,6 @@
#!/usr/bin/env python2.7
import sys, os, json, logging, traceback, base64, time, codecs
+from collections import defaultdict
import cPickle as pickle
import geoparser
@@ -29,12 +30,23 @@ def process(self):
linksByYear = {}
itemIDToYear = {}
places = {}
+ contexts = defaultdict(dict)
- for rowdict in self.parse_csv(csv_input):
- validEntityURIs.add(rowdict["entityURI"])
+ try:
+ for rowdict in self.parse_csv(csv_input):
+ validEntityURIs.add(rowdict["entityURI"])
+ except:
+ logging.error(traceback.format_exc())
+ sys.exit(1)
+
+ if len(validEntityURIs) == 0: #empty csv file
+ os.remove(csv_input)
+ logging.error("Geoparser output file was empty!")
+ sys.exit(1)
for filename in self.files:
file_geoparsed = filename.replace(".txt", "_geoparse.json")
+ contexts_json = filename.replace(".txt", "_contexts.json")
if os.path.exists(file_geoparsed):
try:
geoparse_obj = json.load(file(file_geoparsed))
@@ -71,6 +83,13 @@ def process(self):
if itemID not in linksByYear[year][edge]:
linksByYear[year][edge][itemID] = 0
linksByYear[year][edge][itemID] += 1
+ if os.path.exists(contexts_json):
+ with file(contexts_json) as f:
+ contexts_obj = json.load(f)
+ else:
+ contexts_obj = self.contexts_from_geoparse_obj(geoparse_obj, filename)
+ for geonameid, paragraphs in contexts_obj.iteritems():
+ contexts[geonameid].update({itemID: paragraphs})
except:
logging.info(traceback.format_exc())
@@ -104,7 +123,8 @@ def process(self):
"ENDDATE": max(linksByYear.keys()),
"ENTITYURIS": places,
"YEARS": years,
- "LINKS_BY_YEAR": groupedLinksByYear
+ "LINKS_BY_YEAR": groupedLinksByYear,
+ "CONTEXTS": dict(contexts)
}
self.write_html(params)
@@ -0,0 +1,91 @@
+import jsqlite3, string, StringIO
+import geodict_config
+
+def get_database_connection():
+    db=jsqlite3.connect(geodict_config.database+'.db')
+    cursor=db.cursor()
+    return cursor
+
+def get_cities(pulled_word,current_word,country_code,region_code):
+    cursor = get_database_connection()
+    select = 'SELECT * FROM cities WHERE last_word=?'
+    values = (pulled_word, )
+    if country_code is not None:
+        select += ' AND country=?'
+
+    if region_code is not None:
+        select += ' AND region_code=?'
+
+    # There may be multiple cities with the same name, so pick the one with the largest population
+    select += ' ORDER BY population;'
+    # Unfortunately tuples are immutable, so I have to use this logic to set up the correct ones
+    if country_code is None and region_code is None:
+        values = (current_word, )
+    elif country_code is not None and region_code is None:
+        values = (current_word, country_code)
+    elif country_code is None and region_code is not None:
+        values = (current_word, region_code)
+    else:
+        values = (current_word, country_code, region_code)
+
+    values = [v.lower() for v in values]
+
+    cursor.execute(select, values)
+    candidate_rows = cursor.fetchall()
+    # print candidate_rows
+
+    name_map = {}
+    for candidate_row in candidate_rows:
+        # print candidate_row
+        candidate_dict = get_dict_from_row(cursor, candidate_row)
+        # print candidate_dict
+        name = candidate_dict['city'].lower()
+        name_map[name] = candidate_dict
+    return name_map
+
+# Converts the result of a SQLite fetch into an associative dictionary, rather than a numerically indexed list
+def get_dict_from_row(cursor, row):
+    d = {}
+    for idx,col in enumerate(cursor.description):
+        d[col[0]] = row[idx]
+    return d
+
+# Functions that look at a small portion of the text, and try to identify any location identifiers
+
+# Caches the countries and regions tables in memory
+
+def setup_countries_cache():
+    countries_cache = {}
+    cursor = get_database_connection()
+    select = 'SELECT * FROM countries;'
+    cursor.execute(select)
+    candidate_rows = cursor.fetchall()
+
+    for candidate_row in candidate_rows:
+        candidate_dict = get_dict_from_row(cursor, candidate_row)
+        last_word = candidate_dict['last_word'].lower()
+        if last_word not in countries_cache:
+            countries_cache[last_word] = []
+        countries_cache[last_word].append(candidate_dict)
+    return countries_cache
+
+def setup_regions_cache():
+    regions_cache = {}
+    cursor = get_database_connection()
+    select = 'SELECT * FROM regions;'
+    cursor.execute(select)
+    candidate_rows = cursor.fetchall()
+
+    for candidate_row in candidate_rows:
+        candidate_dict = get_dict_from_row(cursor, candidate_row)
+        last_word = candidate_dict['last_word'].lower()
+        if last_word not in regions_cache:
+            regions_cache[last_word] = []
+        regions_cache[last_word].append(candidate_dict)
+    return regions_cache
+
+def is_initialized(name):
+    cursor = get_database_connection()
+    cursor.execute("SELECT COUNT(*) FROM sqlite_master WHERE name = ?;",[name])
+    return cursor.fetchone()[0] > 0
+