
Code and slides for my PyCon-AU 2010 talk.

0 parents commit 8ea6ab5c3173af7ac6a0436ee50fa535fa0e338e @malcolmt committed Jun 27, 2010
1 .gitignore
@@ -0,0 +1 @@
+*.pyc
32 LICENSE.txt
@@ -0,0 +1,32 @@
+All the code in this package is licensed as follows. This is the standard "new
+BSD" license (see http://www.opensource.org/licenses/bsd-license.php) and is
+the same license that is used for Django itself.
+
+ --o----------o--
+
+Copyright (c) 2010, Malcolm Tredinnick <malcolm.tredinnick@gmail.com>
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+ * Redistributions of source code must retain the above copyright notice,
+ this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright notice,
+ this list of conditions and the following disclaimer in the documentation
+ and/or other materials provided with the distribution.
+ * The name of Malcolm Tredinnick may not be used to endorse or promote
+ products derived from this software without specific prior written
+ permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
+ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
1 Powerhouse/.gitignore
@@ -0,0 +1 @@
+phm.sqlite*
11 Powerhouse/README.txt
@@ -0,0 +1,11 @@
+The Powerhouse Museum (http://www.powerhousemuseum.com/ ) in Sydney is an interesting collection of Australian contemporary and historical memorabilia. Well worth a visit if you've got half a day to kill in Sydney some time.
+
+The dataset used in this talk is a textual summary of all the items in the museum's collection and can be downloaded from http://www.powerhousemuseum.com/collection/database/download.php . The data is licensed under a Creative Commons Attribution-ShareAlike 2.5 Australia license. This particular tutorial used a version that was downloaded on June 22, 2010, but there's nothing particularly special about that copy.
+
+When "python manage.py syncdb --noinput" is run to initialise the database, an admin user with username "admin" and password "admin" will automatically be created. The current settings use a local SQLite database. However, for speed of import when I was developing this, I used an SQLite database on a memory-backed filesystem (on Linux). I ran:
+
+ mount -t tmpfs -o uid=500,gid=500 tmpfs /home/malcolm/store
+ # Changed settings.py to put the database in ~/store/phm.sqlite
+ PYTHONPATH=.. DJANGO_SETTINGS_MODULE=settings ./load.py \
+ categories.txt phm_collection.txt
+
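
Once the loader has run, a quick sanity check from the Django shell is worth doing. This is only a sketch (the model names come from phm_collection/models.py further down, and the counts depend on which snapshot of the dump you downloaded):

    python manage.py shell
    >>> from phm_collection import models
    >>> models.Item.objects.count()                  # total items imported
    >>> models.Category.objects.count()              # distinct (title-cased) categories
    >>> models.Item.objects.filter(weight__isnull=False).count()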
0 Powerhouse/__init__.py
No changes.
20 Powerhouse/fixtures/initial_data.json
@@ -0,0 +1,20 @@
+[
+ {
+ "fields": {
+ "date_joined": "2010-06-26 22:25:27",
+ "email": "admin@example.com",
+ "first_name": "",
+ "groups": [],
+ "is_active": true,
+ "is_staff": true,
+ "is_superuser": true,
+ "last_login": "2010-06-26 22:25:27",
+ "last_name": "",
+ "password": "sha1$04476$609f9936b74baf755e52232702d6e41a9087111b",
+ "user_permissions": [],
+ "username": "admin"
+ },
+ "model": "auth.user",
+ "pk": 1
+ }
+]
1 Powerhouse/initial-auth.json
@@ -0,0 +1 @@
+[{"pk": 1, "model": "auth.user", "fields": {"username": "malcolm", "first_name": "", "last_name": "", "is_active": true, "is_superuser": true, "is_staff": true, "last_login": "2010-06-26 22:19:31", "groups": [], "user_permissions": [], "password": "sha1$866d0$d80c50e6b68c3ed24fa9122c3f8fa775c06ef6e2", "email": "malcolm.tredinnick@gmail.com", "date_joined": "2010-06-26 22:19:31"}}]
11 Powerhouse/manage.py
@@ -0,0 +1,11 @@
+#!/usr/bin/env python
+from django.core.management import execute_manager
+try:
+ import settings # Assumed to be in the same directory.
+except ImportError:
+ import sys
+ sys.stderr.write("Error: Can't find the file 'settings.py' in the directory containing %r. It appears you've customized things.\nYou'll have to run django-admin.py, passing it your settings module.\n(If the file settings.py does indeed exist, it's causing an ImportError somehow.)\n" % __file__)
+ sys.exit(1)
+
+if __name__ == "__main__":
+ execute_manager(settings)
0 Powerhouse/phm_collection/__init__.py
No changes.
13 Powerhouse/phm_collection/admin.py
@@ -0,0 +1,13 @@
+from django.contrib import admin
+
+from phm_collection import models
+
+class ItemAdmin(admin.ModelAdmin):
+ raw_id_fields = ["categories"]
+
+class CategoryAdmin(admin.ModelAdmin):
+ ordering = ["name"]
+
+admin.site.register(models.Category, CategoryAdmin)
+admin.site.register(models.Item, ItemAdmin)
+
145 Powerhouse/phm_collection/load.py
@@ -0,0 +1,145 @@
+#!/usr/bin/env python
+"""
+Loads the Powerhouse Museum data dump (a tab-separated file) into the data
+models.
+"""
+
+import csv
+import decimal
+import re
+import sys
+
+from django.conf import settings
+
+from phm_collection import models
+
+
+# Input file encoding
+ENCODING = "iso-8859-1"
+
+weight_pattern = re.compile(r"(\d+(?:\.\d+)?) (kg|g(?:m?))$")
+dimension_pattern = re.compile(r"(\d*(?:\.\d+)?) (m|mm|cm)(?:\.|\\)?$")
+
+def main(argv=None):
+ assert settings.DEBUG == False, \
+ "It's a really bad idea to run this importer with DEBUG = True!"
+ if argv is None:
+ argv = sys.argv
+ category_map = import_categories(argv[1])
+ import_records(argv[2], category_map)
+
+def import_categories(filename):
+ """
+ Creates Category records for each category name in filename (a file with
+ one name per line). Each name is normalised to the title case version of
+ the name, to remove some duplicates.
+
+ Can safely be run more than once on the same file, as any existing names
+ are skipped in the import phase.
+
+ Returns a dictionary mapping names to object_ids, which avoids extra
+ database lookups when items are imported.
+ """
+ category_map = dict(models.Category.objects.values_list("name", "id"))
+ names = set(line.strip().title() for line in open(filename).readlines()
+ if line)
+ names.difference_update(category_map.keys())
+ current = 0
+ for name in names:
+ obj = models.Category.objects.create(name=name)
+ category_map[name] = obj.id
+ current += 1
+ if current % 100 == 0:
+ print "Imported %d categories." % current
+ return category_map
+
+def import_records(filename, category_map):
+ """
+ Import item records, skipping over any that have already been imported.
+ """
+ existing = set(models.Item.objects.values_list("record_id", "title"))
+ fieldnames = (
+ "record_id",
+ "title",
+ "reg_number",
+ "description",
+ "marks",
+ "prod_date",
+ "provenance_prod",
+ "provenance_historical",
+ "categories",
+ "permalink",
+ "height",
+ "width",
+ "depth",
+ "diameter",
+ "weight",
+ "license_info",
+ )
+ convert_fields = (
+ "record_id",
+ "title",
+ "reg_number",
+ "description",
+ "marks",
+ "provenance_prod",
+ "provenance_historical",
+ "license_info",
+ )
+ reader = csv.DictReader(open(filename), fieldnames, delimiter="\t",
+ quoting=csv.QUOTE_NONE)
+ reader.next()
+ current = 0
+ for entry in reader:
+ if (entry["record_id"], entry["title"].decode(ENCODING)) in existing:
+ # Avoids multiple imports of the same data.
+ continue
+
+ # DEBUG:
+ # import pprint
+ # pprint.pprint(entry)
+ # print
+
+ for name in ("height", "width", "depth", "diameter"):
+ entry[name] = normalise_dimension(entry[name], current)
+ entry["weight"] = normalise_weight(entry["weight"], current)
+ for key in convert_fields:
+ entry[key] = entry[key].decode(ENCODING)
+ categories = entry.pop("categories")
+ item = models.Item.objects.create(**entry)
+
+ category_names = [c.strip().title() for c in categories.split("|") if c]
+ category_ids = [category_map[name] for name in category_names]
+ item.categories = category_ids
+
+ current += 1
+ if current % 500 == 0:
+ print "Handled %d." % current
+
+def normalise_dimension(value, count):
+ value = value.strip().lower()
+ if not value:
+ return None
+ match = dimension_pattern.match(value)
+ assert match is not None, "Bad dimension in entry %d: %s" % (count, value)
+ dimension = decimal.Decimal(match.group(1))
+ if match.group(2) == "m":
+ dimension *= 1000
+ if match.group(2) == "cm":
+ dimension *= 10
+ return dimension
+
+def normalise_weight(value, count):
+ value = value.strip().lower()
+ if not value:
+ return None
+ match = weight_pattern.match(value)
+ assert match is not None, "Bad weight in entry %d: %s" % (count, value)
+ weight = decimal.Decimal(match.group(1))
+ if match.group(2) == "gm":
+ weight /= 1000
+ return weight
+
+if __name__ == "__main__":
+ sys.exit(main())
+
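
The two normalisers convert everything to the units the model fields expect (millimetres for dimensions, kilograms for weight). A rough illustration of the intended conversions, run from a "manage.py shell" session so the Django settings are configured (the sample strings below are invented, not taken from the dump):

    >>> from phm_collection.load import normalise_dimension, normalise_weight
    >>> normalise_dimension("2.5 cm", 0)    # centimetres -> millimetres
    Decimal('25.0')
    >>> normalise_dimension("1 m", 0)       # metres -> millimetres
    Decimal('1000')
    >>> normalise_weight("250 gm", 0)       # grams -> kilograms
    Decimal('0.25')
    >>> normalise_dimension("", 0) is None  # blank values become NULL
    True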
40 Powerhouse/phm_collection/models.py
@@ -0,0 +1,40 @@
+from django.db import models
+
+class Category(models.Model):
+ name = models.CharField(max_length=50, unique=True)
+
+ class Meta:
+ # pylint: disable-msg=W0232
+ verbose_name_plural = "categories"
+
+ def __unicode__(self):
+ return self.name
+
+
+class Item(models.Model):
+ # Sadly, record_id is *not* unique in the source data.
+ record_id = models.CharField("Record ID", max_length=10)
+ title = models.CharField("Object Title", max_length=500)
+ reg_number = models.CharField("Registration Number", max_length=25,
+ blank=True)
+ description = models.TextField(blank=True)
+ marks = models.TextField(blank=True)
+ # Production date is a bit too freeform to parse sensibly at the moment.
+ prod_date = models.CharField("Production Date", max_length=30, blank=True)
+ provenance_prod = models.TextField("Provenance (Production)", blank=True)
+ provenance_historical = models.TextField("Provenance (History)", blank=True)
+ categories = models.ManyToManyField(Category)
+ permalink = models.URLField("Persistent Link", verify_exists=False)
+ height = models.PositiveIntegerField("Height in mm", blank=True, null=True)
+ width = models.PositiveIntegerField("Width in mm", blank=True, null=True)
+ depth = models.PositiveIntegerField("Depth in mm", blank=True, null=True)
+ diameter = models.PositiveIntegerField("Diameter in mm", blank=True,
+ null=True)
+ weight = models.DecimalField("Weight in kg", max_digits=13,
+ decimal_places=5, blank=True, null=True)
+ # This contains raw HTML; pass through "safe" filter in templates.
+ license_info = models.TextField()
+
+ def __unicode__(self):
+ return self.title
+
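
With the many-to-many categories relation in place, per-category drill-downs are simple ORM queries. A small sketch ("Ceramics" is a made-up category name here; substitute whatever exists in your import):

    >>> from phm_collection import models
    >>> ceramics = models.Category.objects.get(name="Ceramics")
    >>> ceramics.item_set.count()
    >>> ceramics.item_set.filter(height__gt=500)    # items taller than 500 mm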
9 Powerhouse/phm_urls.py
@@ -0,0 +1,9 @@
+from django.conf.urls.defaults import *
+from django.contrib import admin
+
+admin.autodiscover()
+
+urlpatterns = patterns('',
+ (r'^admin/', include(admin.site.urls)),
+)
+
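
Since only the admin URLs are hooked up, browsing the imported collection happens through the admin interface. Roughly:

    ./manage.py runserver
    # then visit http://127.0.0.1:8000/admin/ and log in with the
    # admin/admin user created from the initial_data.json fixture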
54 Powerhouse/settings.py
@@ -0,0 +1,54 @@
+import os
+
+DEBUG = False
+TEMPLATE_DEBUG = DEBUG
+
+SITE_DIR = os.path.abspath(os.path.dirname(__file__))
+
+DATABASES = {
+ "default": {
+ "ENGINE": "django.db.backends.sqlite3",
+ "NAME": os.path.join(SITE_DIR, "phm.sqlite"),
+ #"NAME": "/home/malcolm/tmpfs/phm.sqlite",
+ }
+}
+
+TIME_ZONE = "Australia/Sydney"
+
+SITE_ID = 1
+
+# Make this unique, and don't share it with anybody.
+SECRET_KEY = "y$2!w=_bo-owlf8^@w$$@10*aywwi9thfi0!9+=-rbc=ic5-nm"
+
+# List of callables that know how to import templates from various sources.
+TEMPLATE_LOADERS = (
+ "django.template.loaders.filesystem.Loader",
+ "django.template.loaders.app_directories.Loader",
+)
+
+MIDDLEWARE_CLASSES = (
+ "django.middleware.common.CommonMiddleware",
+ "django.contrib.sessions.middleware.SessionMiddleware",
+ "django.middleware.csrf.CsrfViewMiddleware",
+ "django.contrib.auth.middleware.AuthenticationMiddleware",
+ "django.contrib.messages.middleware.MessageMiddleware",
+)
+
+ROOT_URLCONF = "phm_urls"
+
+TEMPLATE_DIRS = (
+ # Don't forget to use absolute paths, not relative paths.
+)
+
+INSTALLED_APPS = (
+ "django.contrib.auth",
+ "django.contrib.contenttypes",
+ "django.contrib.sessions",
+ "django.contrib.sites",
+ "django.contrib.messages",
+ "django.contrib.admin",
+ "phm_collection",
+)
+
+FIXTURE_DIRS = [os.path.join(SITE_DIR, "fixtures")]
+
8 README.txt
@@ -0,0 +1,8 @@
+The two Django projects here were used to illustrate data importing techniques
+in my talk "Displaying Australian datasets with Django" at PyCon-AU, 27 June,
+2010.
+
+All original text and data is released under the Creative Commons Attribution-ShareAlike 3.0 Australia license. Refer to http://creativecommons.org/licenses/by-sa/3.0/au/ . As far as attribution for reuse goes, it suffices to use my name (Malcolm Tredinnick) as the author.
+
+All code is licensed under the new-style BSD license (the same license as Django), as described in the LICENSE.txt file.
+
15 Rivers/README.txt
@@ -0,0 +1,15 @@
+This project contains code and models for importing data about Australia's river basins. It uses data from 1997, updated slightly over the years up to 2004. I downloaded the version I used on June 22, 2010.
+
+Data is available from https://www.ga.gov.au/products/servlet/controller?event=GEOCAT_DETAILS&catno=42343 and is licensed under the Creative Commons Attribution 3.0 Australia license (http://creativecommons.org/licenses/by/3.0/au/). The *_shp.zip file is a 13 MB download and that's the one I was using here (SHP datafiles are easy to use with GeoDjango).
+
+That license applies to the Australian Government Geoscience data (which isn't included in this package for space reasons). My own original code and text are licensed under the terms in the README.txt file in the parent directory.
+
+I used a PostGIS database to load the data:
+
+ createdb -T postgis_template rivers
+ ./manage.py syncdb --noinput
+ cd river_basins
+ PYTHONPATH=.. DJANGO_SETTINGS_MODULE=settings ./load.py <shapefile_dir>
+
+The code here provides a way to load the data and view it in the admin.
+
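
After the load, the data can be poked at from the Django shell as well as the admin. A small sketch (the field names come from river_basins/models.py below; area and perimeter are stored in whatever units the source shapefiles use):

    ./manage.py shell
    >>> from river_basins import models
    >>> models.RBasinPolygon.objects.count()
    >>> for basin in models.RBasinPolygon.objects.order_by("-area")[:5]:
    ...     print basin.bname, basin.area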
0 Rivers/__init__.py
No changes.
20 Rivers/fixtures/initial_data.json
@@ -0,0 +1,20 @@
+[
+ {
+ "fields": {
+ "date_joined": "2010-06-26 22:25:27",
+ "email": "admin@example.com",
+ "first_name": "",
+ "groups": [],
+ "is_active": true,
+ "is_staff": true,
+ "is_superuser": true,
+ "last_login": "2010-06-26 22:25:27",
+ "last_name": "",
+ "password": "sha1$04476$609f9936b74baf755e52232702d6e41a9087111b",
+ "user_permissions": [],
+ "username": "admin"
+ },
+ "model": "auth.user",
+ "pk": 1
+ }
+]
11 Rivers/manage.py
@@ -0,0 +1,11 @@
+#!/usr/bin/env python
+from django.core.management import execute_manager
+try:
+ import settings # Assumed to be in the same directory.
+except ImportError:
+ import sys
+ sys.stderr.write("Error: Can't find the file 'settings.py' in the directory containing %r. It appears you've customized things.\nYou'll have to run django-admin.py, passing it your settings module.\n(If the file settings.py does indeed exist, it's causing an ImportError somehow.)\n" % __file__)
+ sys.exit(1)
+
+if __name__ == "__main__":
+ execute_manager(settings)
0 Rivers/river_basins/__init__.py
No changes.
7 Rivers/river_basins/admin.py
@@ -0,0 +1,7 @@
+from django.contrib.gis import admin
+
+from river_basins import models
+
+admin.site.register([models.RBasinPolygon, models.RBasinChain,
+ models.RBasinPoint], admin.GeoModelAdmin)
+
47 Rivers/river_basins/load.py
@@ -0,0 +1,47 @@
+#!/usr/bin/env python
+"""
+Import river basin data into models.
+"""
+
+import os
+import sys
+
+from django.contrib.gis import utils
+
+from river_basins import models
+
+
+VERBOSE = True
+
+FILES = (
+ "rbasin_polygon.shp",
+ "rbasin_chain.shp",
+ "rbasin_point.shp",
+)
+
+def main(argv=None):
+ if argv is None:
+ argv = sys.argv
+
+ directory = argv[1]
+ layermapping = utils.LayerMapping(models.RBasinPolygon,
+ os.path.join(directory, FILES[0]),
+ models.rbasinpolygon_mapping,
+ transform=False, encoding="iso-8859-1")
+ layermapping.save(strict=True, verbose=VERBOSE)
+
+ layermapping = utils.LayerMapping(models.RBasinChain,
+ os.path.join(directory, FILES[1]),
+ models.rbasinchain_mapping,
+ transform=False, encoding="iso-8859-1")
+ layermapping.save(strict=True, verbose=VERBOSE)
+
+ layermapping = utils.LayerMapping(models.RBasinPoint,
+ os.path.join(directory, FILES[2]),
+ models.rbasinpoint_mapping,
+ transform=False, encoding="iso-8859-1")
+ layermapping.save(strict=True, verbose=VERBOSE)
+
+if __name__ == "__main__":
+ sys.exit(main())
+
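
The three LayerMapping calls differ only in the model, the shapefile name and the mapping dictionary, so main() could equally be written as a loop over those triples. A possible restructuring (a sketch, not part of the committed code):

    LAYERS = (
        (models.RBasinPolygon, "rbasin_polygon.shp", models.rbasinpolygon_mapping),
        (models.RBasinChain, "rbasin_chain.shp", models.rbasinchain_mapping),
        (models.RBasinPoint, "rbasin_point.shp", models.rbasinpoint_mapping),
    )

    def main(argv=None):
        if argv is None:
            argv = sys.argv
        directory = argv[1]
        for model, filename, mapping in LAYERS:
            layermapping = utils.LayerMapping(model,
                    os.path.join(directory, filename), mapping,
                    transform=False, encoding="iso-8859-1")
            layermapping.save(strict=True, verbose=VERBOSE)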
121 Rivers/river_basins/models.py
@@ -0,0 +1,121 @@
+"""
+Models for importing the river basin data. All the basic model fields and
+mapping dictionaries were generated using the "ogrinspect" management command.
+
+I ran something along the lines of:
+
+ ./manage.py ogrinspect --decimal true --mapping --srid -1 \
+ rbasin_chain.shp RBasinChain >> river_basins/models.py
+
+(repeat two more times for rbasin_point.shp and rbasin_polygon.shp).
+
+"""
+
+from django.contrib.gis.db import models
+
+# FIXME: All the PolygonFields should have srid=-1, but that isn't known to
+# proj by default, so we ignore the projection for now.
+
+class RBasinPolygon(models.Model):
+ area = models.DecimalField(max_digits=31, decimal_places=15)
+ perimeter = models.DecimalField(max_digits=31, decimal_places=15)
+ aus_field = models.DecimalField(max_digits=11, decimal_places=0)
+ aus_id = models.DecimalField(max_digits=11, decimal_places=0)
+ f_code = models.CharField(max_length=12)
+ bname = models.CharField(max_length=30)
+ bnum = models.CharField(max_length=5)
+ rname = models.CharField(max_length=35)
+ rnum = models.CharField(max_length=5)
+ dname = models.CharField(max_length=25)
+ dnum = models.CharField(max_length=5)
+ centroid_y = models.DecimalField(max_digits=31, decimal_places=15)
+ centroid_x = models.DecimalField(max_digits=31, decimal_places=15)
+ geom = models.PolygonField()
+
+ objects = models.GeoManager()
+
+ def __unicode__(self):
+ return self.bname
+
+
+class RBasinPoint(models.Model):
+ area = models.DecimalField(max_digits=31, decimal_places=15)
+ perimeter = models.DecimalField(max_digits=31, decimal_places=15)
+ aus_field = models.DecimalField(max_digits=11, decimal_places=0)
+ aus_id = models.DecimalField(max_digits=11, decimal_places=0)
+ f_code = models.CharField(max_length=12)
+ bname = models.CharField(max_length=30)
+ bnum = models.CharField(max_length=5)
+ rname = models.CharField(max_length=35)
+ rnum = models.CharField(max_length=5)
+ dname = models.CharField(max_length=25)
+ dnum = models.CharField(max_length=5)
+ geom = models.PointField()
+
+ objects = models.GeoManager()
+
+ def __unicode__(self):
+ return self.bname
+
+
+class RBasinChain(models.Model):
+ fnode_field = models.DecimalField(max_digits=11, decimal_places=0)
+ tnode_field = models.DecimalField(max_digits=11, decimal_places=0)
+ lpoly_field = models.DecimalField(max_digits=11, decimal_places=0)
+ rpoly_field = models.DecimalField(max_digits=11, decimal_places=0)
+ length = models.DecimalField(max_digits=31, decimal_places=15)
+ aus_field = models.DecimalField(max_digits=11, decimal_places=0)
+ aus_id = models.DecimalField(max_digits=11, decimal_places=0)
+ f_code = models.CharField(max_length=12)
+ geom = models.LineStringField()
+
+ objects = models.GeoManager()
+
+ def __unicode__(self):
+ return self.f_code
+
+
+rbasinchain_mapping = {
+ 'fnode_field' : 'FNODE_',
+ 'tnode_field' : 'TNODE_',
+ 'lpoly_field' : 'LPOLY_',
+ 'rpoly_field' : 'RPOLY_',
+ 'length' : 'LENGTH',
+ 'aus_field' : 'AUS_',
+ 'aus_id' : 'AUS_ID',
+ 'f_code' : 'F_CODE',
+ 'geom' : 'LINESTRING',
+}
+
+rbasinpolygon_mapping = {
+ 'area' : 'AREA',
+ 'perimeter' : 'PERIMETER',
+ 'aus_field' : 'AUS_',
+ 'aus_id' : 'AUS_ID',
+ 'f_code' : 'F_CODE',
+ 'bname' : 'BNAME',
+ 'bnum' : 'BNUM',
+ 'rname' : 'RNAME',
+ 'rnum' : 'RNUM',
+ 'dname' : 'DNAME',
+ 'dnum' : 'DNUM',
+ 'centroid_y' : 'CENTROID_Y',
+ 'centroid_x' : 'CENTROID_X',
+ 'geom' : 'POLYGON',
+}
+
+rbasinpoint_mapping = {
+ 'area' : 'AREA',
+ 'perimeter' : 'PERIMETER',
+ 'aus_field' : 'AUS_',
+ 'aus_id' : 'AUS_ID',
+ 'f_code' : 'F_CODE',
+ 'bname' : 'BNAME',
+ 'bnum' : 'BNUM',
+ 'rname' : 'RNAME',
+ 'rnum' : 'RNUM',
+ 'dname' : 'DNAME',
+ 'dnum' : 'DNUM',
+ 'geom' : 'POINT',
+}
+
8 Rivers/river_urls.py
@@ -0,0 +1,8 @@
+from django.conf.urls.defaults import *
+from django.contrib.gis import admin
+
+admin.autodiscover()
+
+urlpatterns = patterns('',
+ (r'^admin/', include(admin.site.urls)),
+)
71 Rivers/settings.py
@@ -0,0 +1,71 @@
+import os
+
+DEBUG = True
+TEMPLATE_DEBUG = DEBUG
+
+SITE_DIR = os.path.abspath(os.path.dirname(__file__))
+
+ADMINS = (
+ # ('Your Name', 'your_email@domain.com'),
+)
+
+MANAGERS = ADMINS
+
+DATABASES = {
+ 'default': {
+ 'ENGINE': 'django.contrib.gis.db.backends.postgis',
+ 'NAME': 'rivers',
+ }
+}
+
+# Local time zone for this installation. Choices can be found here:
+# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name
+# although not all choices may be available on all operating systems.
+# On Unix systems, a value of None will cause Django to use the same
+# timezone as the operating system.
+# If running in a Windows environment this must be set to the same as your
+# system time zone.
+TIME_ZONE = 'Australia/Sydney'
+
+# Language code for this installation. All choices can be found here:
+# http://www.i18nguy.com/unicode/language-identifiers.html
+LANGUAGE_CODE = 'en-us'
+
+SITE_ID = 1
+
+# Make this unique, and don't share it with anybody.
+SECRET_KEY = 'oo-bzn5*(x500%n2)p2hx#v*72829xkqbfech^$phcazfu%yy0'
+
+# List of callables that know how to import templates from various sources.
+TEMPLATE_LOADERS = (
+ 'django.template.loaders.filesystem.Loader',
+ 'django.template.loaders.app_directories.Loader',
+)
+
+MIDDLEWARE_CLASSES = (
+ 'django.middleware.common.CommonMiddleware',
+ 'django.contrib.sessions.middleware.SessionMiddleware',
+ 'django.middleware.csrf.CsrfViewMiddleware',
+ 'django.contrib.auth.middleware.AuthenticationMiddleware',
+ 'django.contrib.messages.middleware.MessageMiddleware',
+)
+
+ROOT_URLCONF = 'river_urls'
+
+TEMPLATE_DIRS = (
+ # Don't forget to use absolute paths, not relative paths.
+)
+
+INSTALLED_APPS = (
+ 'django.contrib.auth',
+ 'django.contrib.contenttypes',
+ 'django.contrib.sessions',
+ 'django.contrib.sites',
+ 'django.contrib.messages',
+ 'django.contrib.admin',
+ 'django.contrib.gis',
+ 'river_basins',
+)
+
+FIXTURE_DIRS = [os.path.join(SITE_DIR, "fixtures")]
+
310 pylint.rc
@@ -0,0 +1,310 @@
+# lint Python modules using external checkers.
+#
+# This is the main checker controlling the other ones and the reports
+# generation. It is itself both a raw checker and an astng checker in order
+# to:
+# * handle message activation / deactivation at the module level
+# * handle some basic but necessary stats data (number of classes, methods...)
+#
+[MASTER]
+
+# Specify a configuration file.
+#rcfile=
+
+# Python code to execute, usually for sys.path manipulation such as
+# pygtk.require().
+#init-hook=
+
+# Profiled execution.
+profile=no
+
+# Add <file or directory> to the black list. It should be a base name, not a
+# path. You may set this option multiple times.
+ignore=CVS
+
+# Pickle collected data for later comparisons.
+persistent=yes
+
+# List of plugins (as comma separated values of python modules names) to load,
+# usually to register additional checkers.
+load-plugins=
+
+
+[MESSAGES CONTROL]
+
+# Enable only checker(s) with the given id(s). This option conflicts with the
+# disable-checker option
+#enable-checker=
+
+# Enable all checker(s) except those with the given id(s). This option
+# conflicts with the enable-checker option
+#disable-checker=
+
+# Enable all messages in the listed categories (IRCWEF).
+#enable-msg-cat=
+
+# Disable all messages in the listed categories (IRCWEF).
+disable-msg-cat=I
+
+# Enable the message(s) with the given id(s).
+#enable-msg=
+
+# Disable the message(s) with the given id(s).
+disable-msg=W0142,W0704,C0111,R0902,R0903,R0904
+
+
+[REPORTS]
+
+# Set the output format. Available formats are text, parseable, colorized, msvs
+# (visual studio) and html
+output-format=text
+
+# Include message's id in output
+include-ids=yes
+
+# Put messages in a separate file for each module / package specified on the
+# command line instead of printing them on stdout. Reports (if any) will be
+# written in a file name "pylint_global.[txt|html]".
+files-output=no
+
+# Tells whether to display a full report or only the messages
+reports=no
+
+# Python expression which should return a note less than 10 (10 is the highest
+# note). You have access to the variables error, warning, refactor, convention
+# and statement, which respectively contain the number of messages in each
+# category and the total number of statements analyzed. This is used by the
+# global evaluation report (R0004).
+evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)
+
+# Add a comment according to your evaluation note. This is used by the global
+# evaluation report (R0004).
+comment=no
+
+# Enable the report(s) with the given id(s).
+#enable-report=
+
+# Disable the report(s) with the given id(s).
+#disable-report=
+
+
+# try to find bugs in the code using type inference
+#
+[TYPECHECK]
+
+# Tells whether missing members accessed in mixin class should be ignored. A
+# mixin class is detected if its name ends with "mixin" (case insensitive).
+ignore-mixin-members=yes
+
+# List of classes names for which member attributes should not be checked
+# (useful for classes with attributes dynamically set).
+ignored-classes=SQLObject
+
+# When zope mode is activated, add a predefined set of Zope acquired attributes
+# to generated-members.
+zope=no
+
+# List of members which are set dynamically and missed by pylint inference
+# system, and so shouldn't trigger E0201 when accessed.
+generated-members=REQUEST,acl_users,aq_parent
+
+
+# checks for :
+# * doc strings
+# * modules / classes / functions / methods / arguments / variables name
+# * number of arguments, local variables, branches, returns and statements in
+# functions, methods
+# * required module attributes
+# * dangerous default values as arguments
+# * redefinition of function / method / class
+# * uses of the global statement
+#
+[BASIC]
+
+# Required attributes for module, separated by a comma
+required-attributes=
+
+# Regular expression which should only match functions or classes name which do
+# not require a docstring
+no-docstring-rgx=__.*__
+
+# Regular expression which should only match correct module names
+module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$
+
+# Regular expression which should only match correct module level names
+const-rgx=(([A-Z_][A-Z0-9_]*)|(__.*__)|(_?[a-zA-Z0-9_]+))$
+
+# Regular expression which should only match correct class names
+class-rgx=[A-Z_][a-zA-Z0-9]+$
+
+# Regular expression which should only match correct function names
+function-rgx=[a-z_][a-z0-9_]{2,30}$
+
+# Regular expression which should only match correct method names
+method-rgx=[a-z_][a-z0-9_]{2,30}$|setUp$|tearDown$
+
+# Regular expression which should only match correct instance attribute names
+attr-rgx=[a-z_][a-z0-9_]{2,30}$
+
+# Regular expression which should only match correct argument names
+argument-rgx=[a-z_][a-z0-9_]{2,30}$
+
+# Regular expression which should only match correct variable names
+variable-rgx=[a-z_][a-z0-9_]{2,30}$
+
+# Regular expression which should only match correct list comprehension /
+# generator expression variable names
+inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$
+
+# Good variable names which should always be accepted, separated by a comma
+good-names=i,j,k,ex,Run,_
+
+# Bad variable names which should always be refused, separated by a comma
+bad-names=foo,bar,baz,toto,tutu,tata
+
+# List of builtins function names that should not be used, separated by a comma
+bad-functions=map,filter,apply,input
+
+
+# checks for
+# * unused variables / imports
+# * undefined variables
+# * redefinition of variable from builtins or from an outer scope
+# * use of variable before assignment
+#
+[VARIABLES]
+
+# Tells whether we should check for unused import in __init__ files.
+init-import=no
+
+# A regular expression matching names used for dummy variables (i.e. not used).
+dummy-variables-rgx=_|dummy
+
+# List of additional names supposed to be defined in builtins. Remember that
+# you should avoid to define new builtins when possible.
+additional-builtins=
+
+
+# checks for sign of poor/misdesign:
+# * number of methods, attributes, local variables...
+# * size, complexity of functions, methods
+#
+[DESIGN]
+
+# Maximum number of arguments for function / method
+max-args=5
+
+# Argument names that match this expression will be ignored. Default to name
+# with leading underscore
+ignored-argument-names=_.*
+
+# Maximum number of locals for function / method body
+max-locals=15
+
+# Maximum number of return / yield for function / method body
+max-returns=6
+
+# Maximum number of branch for function / method body
+max-branchs=12
+
+# Maximum number of statements in function / method body
+max-statements=50
+
+# Maximum number of parents for a class (see R0901).
+max-parents=7
+
+# Maximum number of attributes for a class (see R0902).
+max-attributes=7
+
+# Minimum number of public methods for a class (see R0903).
+min-public-methods=2
+
+# Maximum number of public methods for a class (see R0904).
+max-public-methods=20
+
+
+# checks for :
+# * methods without self as first argument
+# * overridden methods signature
+# * access only to existent members via self
+# * attributes not defined in the __init__ method
+# * supported interfaces implementation
+# * unreachable code
+#
+[CLASSES]
+
+# List of interface methods to ignore, separated by a comma. This is used for
+# instance to not check methods defined in Zope's Interface base class.
+ignore-iface-methods=isImplementedBy,deferred,extends,names,namesAndDescriptions,queryDescriptionFor,getBases,getDescriptionFor,getDoc,getName,getTaggedValue,getTaggedValueTags,isEqualOrExtendedBy,setTaggedValue,isImplementedByInstancesOf,adaptWith,is_implemented_by
+
+# List of method names used to declare (i.e. assign) instance attributes.
+defining-attr-methods=__init__,__new__,setUp
+
+
+# checks for
+# * external modules dependencies
+# * relative / wildcard imports
+# * cyclic imports
+# * uses of deprecated modules
+#
+[IMPORTS]
+
+# Deprecated modules which should not be used, separated by a comma
+deprecated-modules=regsub,string,TERMIOS,Bastion,rexec
+
+# Create a graph of every (i.e. internal and external) dependencies in the
+# given file (report R0402 must not be disabled)
+import-graph=
+
+# Create a graph of external dependencies in the given file (report R0402 must
+# not be disabled)
+ext-import-graph=
+
+# Create a graph of internal dependencies in the given file (report R0402 must
+# not be disabled)
+int-import-graph=
+
+
+# checks for:
+# * warning notes in the code like FIXME, XXX
+# * PEP 263: source code with non ascii character but no encoding declaration
+#
+[MISCELLANEOUS]
+
+# List of note tags to take in consideration, separated by a comma.
+notes=FIXME,XXX,TODO
+
+
+# checks for similarities and duplicated code. This computation may be
+# memory / CPU intensive, so you should disable it if you experience some
+# problems.
+#
+[SIMILARITIES]
+
+# Minimum lines number of a similarity.
+min-similarity-lines=4
+
+# Ignore comments when computing similarities.
+ignore-comments=yes
+
+# Ignore docstrings when computing similarities.
+ignore-docstrings=yes
+
+
+# checks for :
+# * unauthorized constructions
+# * strict indentation
+# * line length
+# * use of <> instead of !=
+#
+[FORMAT]
+
+# Maximum number of characters on a single line.
+max-line-length=80
+
+# Maximum number of lines in a module
+max-module-lines=1000
+
+# String used as indentation unit. This is usually " " (4 spaces) or "\t" (1
+# tab).
+indent-string=' '
BIN slides.odp
Binary file not shown.
BIN slides.pdf
Binary file not shown.
