posterous.py
============
A very simple (and incomplete) wrapper around the posterous API
that's available at http://posterous.com/api
Currently (v0.1) it only supports reading sites & posts and creating new posts.

Sample usage:
>>> import posterous
>>> p = posterous.Posterous('username', 'password')

# Get the user's sites
>>> sites = p.get_sites()
>>> for s in sites:
...   print s.name
...
testsite1
testsite2
testsite3

# Get the user's posts
>>> posts = p.get_posts(site_id=sites[0].id, num_posts=3, page=2)
>>> for post in posts:
...   print post.url
...
http://post.ly/2345
http://post.ly/3456
http://post.ly/4567

# Create a new post
>>> post = p.new_post(title="I love Posterous")
>>> post.body = "Do you love it too?"
>>> image = open("jellyfish.png", "rb").read()
>>> post.add_media(quiet=True, file=image, url=["http://example.com/song.mp3", "http://example.com/story.pdf"])
>>> post.source = "Posterous Python wrapper"
>>> post.save()

backup-posterous.py
=====================
A simple script that iterates over all your posterous sites and
downloads all posts from all sites (including all media).
It's currently not very clever and simply downloads everything.
It is, however, clever enough to make multiple calls to the API
should the number of posts for a site exceed the given batch size
(50 by default), so that it really does fetch all posts for a site.
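
Roughly, the batching amounts to paging through get_posts() until a batch comes
back smaller than the batch size. A minimal sketch of that loop, written against
the wrapper usage shown above (names such as BATCH_SIZE and api are illustrative,
not necessarily what the script uses internally):

    import posterous

    BATCH_SIZE = 50  # same default as the --batch-size option below

    # 'username' and 'password' are placeholders
    api = posterous.Posterous('username', 'password')

    for site in api.get_sites():
        posts = []
        page = 1
        while True:
            # Fetch one page of posts; get_posts() is used as in the sample above.
            batch = api.get_posts(site_id=site.id, num_posts=BATCH_SIZE, page=page)
            posts.extend(batch)
            if len(batch) < BATCH_SIZE:
                break  # last (possibly partial) batch reached
            page += 1
        # ... the posts (and their media) would then be written to disk ...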


user@host:~$ python backup-posterous.py --help
Usage: backup-posterous.py [options]

Options:
    -h, --help            show this help message and exit
    -u USERNAME, --username=USERNAME
                          Email address associated with posterous account
    -p PASSWORD, --password=PASSWORD
                          Password associated with posterous account
    -f FOLDER, --folder=FOLDER
                          Folder to store backup data in (Beware, if it exists,
                          data may be overwritten). Defaults to backup/
    -s SITE_ID, --site-id=SITE_ID
                          Only query site with this id
    -b BATCH_SIZE, --batch-size=BATCH_SIZE
                          The number of posts to get per API call, default is 50
    -d, --debug           Debug output
    -v, --verbose         Verbose output (overrides -d)
    -q, --quiet           Quiet output (overrides -v and -d)
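
A typical invocation, backing up a single site into a custom folder, might look
like this (the credentials and site id below are placeholders):

    user@host:~$ python backup-posterous.py -u you@example.com -p secret -s 1234 -f mybackup/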

  
  
Inside the given <folder> it creates a file/folder structure
like the following:
  /{options.folder}
      /{site.hostname}
          site-{site.hostname}.json            <-- all information about the site
          {post-slug}.json                     <-- body, comments & everything else
          {post-slug}_media{num}.{media_type}
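
The paths can be thought of as being assembled roughly like this (an illustrative
sketch only; attribute names such as site.hostname and post.slug mirror the layout
above and may not match the script's actual variables):

    import os

    def backup_paths(folder, site, post, num, media_type):
        # One directory per site, named after its hostname.
        site_dir = os.path.join(folder, site.hostname)
        site_json = os.path.join(site_dir, 'site-%s.json' % site.hostname)
        # One JSON file per post, plus one file per attached media item.
        post_json = os.path.join(site_dir, '%s.json' % post.slug)
        media_file = os.path.join(site_dir, '%s_media%s.%s' % (post.slug, num, media_type))
        return site_json, post_json, media_file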