
activitystreams-unofficial


This is a library and REST API that fetches and converts social network data between a wide variety of formats: the Facebook, Google+, Instagram, and Twitter native APIs; ActivityStreams; microformats2 HTML and JSON; Atom; and more.

You can try it out with these interactive demos:

It's part of a suite of projects that implement the OStatus federation protocols for the major social networks. The other projects include portablecontacts-, salmon-, webfinger-, and ostatus-unofficial.

License: This project is placed in the public domain.


The library and REST API are both based on the OpenSocial Activity Streams service.

Let's start with an example. This code using the library:

from activitystreams_unofficial import twitter

is equivalent to this HTTP GET request:

Both return the authenticated user's Twitter stream, i.e. tweets from the people they follow. Here's the JSON output:

  {
    "itemsPerPage": 10,
    "startIndex": 0,
    "totalResults": 12,
    "items": [{
        "verb": "post",
        "id": ",2013:374272979578150912",
        "url": "",
        "content": "Getting stuff for barbecue tomorrow. No ribs left! Got some nice tenderloin though. (@ Metro Plus Famille Lemay)",
        "actor": {
          "username": "evanpro",
          "displayName": "Evan Prodromou",
          "description": "Prospector.",
          "url": ""
        },
        "object": {
          "tags": [{
              "url": "",
              "startIndex": 113,
              "length": 22,
              "objectType": "article"
            }, ...]
        }
      }, ...]
  }

The request parameters are the same for both, and all are optional: USER_ID is a source-specific id or @me for the authenticated user; GROUP_ID may be @all, @friends (currently identical to @all), @self, or @search; APP_ID is currently ignored, so best practice is to use @app as a placeholder.

Paging is supported via the startIndex and count parameters. They're self-explanatory, and described in detail in the OpenSearch spec and OpenSocial spec.
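As a sketch of how a client might page through results with those parameters (the fetch callable here is a stand-in for a real HTTP request, and the field names come from the example output above):

```python
def fetch_all(fetch, count=10):
    """Page through an OpenSocial response using startIndex and count.

    fetch is a callable taking (start_index, count) and returning the
    decoded JSON envelope; here it stands in for a real HTTP request.
    """
    items = []
    start = 0
    while True:
        resp = fetch(start, count)
        items.extend(resp['items'])
        if start + len(resp['items']) >= resp['totalResults'] or not resp['items']:
            break
        start += len(resp['items'])
    return items

# Simulated source with 12 activities, served at most 10 per page.
ALL = [{'id': str(i)} for i in range(12)]

def fake_fetch(start_index, count):
    return {
        'startIndex': start_index,
        'itemsPerPage': count,
        'totalResults': len(ALL),
        'items': ALL[start_index:start_index + count],
    }

print(len(fetch_all(fake_fetch)))  # all 12 activities, fetched in two pages
```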

When using the GROUP_ID @search (for platforms that support it — currently Twitter and Instagram), provide a search string via the q parameter. The API is loosely based on the OpenSearch spec, the OpenSocial Core Container spec, and the OpenSocial Core Gadget spec.
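A search request might be assembled like this; the host here is a placeholder, not a real endpoint, and the path layout is an assumption based on the USER_ID/GROUP_ID/APP_ID elements described above:

```python
from urllib.parse import urlencode

def search_url(host, user_id, q, fmt='json'):
    # Placeholder host; path follows the USER_ID/GROUP_ID/APP_ID
    # elements described above, with @search as the group id.
    return 'https://%s/%s/@search/@app/?%s' % (
        host, user_id, urlencode({'q': q, 'format': fmt}))

print(search_url('example.appspot.com', '@me', 'barbecue'))
# https://example.appspot.com/@me/@search/@app/?q=barbecue&format=json
```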

Output data is JSON Activity Streams 1.0 objects wrapped in the OpenSocial envelope, which puts the activities in the top-level items field as a list and adds the itemsPerPage, totalResults, etc. fields.
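Unwrapping that envelope is just a matter of reading those top-level fields. A minimal sketch, using the field names from the example output above:

```python
import json

envelope = json.loads("""
{
  "itemsPerPage": 10,
  "startIndex": 0,
  "totalResults": 12,
  "items": [{"verb": "post", "content": "hello world",
             "actor": {"displayName": "Evan Prodromou"}}]
}
""")

activities = envelope['items']       # list of ActivityStreams activities
total = envelope['totalResults']     # total available on the server
for activity in activities:
    print(activity['actor']['displayName'], activity['verb'])
```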

Most Facebook requests and all Twitter, Google+, and Instagram requests will need OAuth access tokens. If you're using Python on Google App Engine, oauth-dropins is an easy way to add OAuth client flows for these sites. Otherwise, here are the sites' authentication docs: Facebook, Google+, Instagram, Twitter.

If you get an access token and pass it along, it will be used to sign and authorize the underlying requests to the source providers. See the demos on the REST API endpoints above for examples.
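For illustration, appending a token as a query parameter might look like the sketch below. The parameter name access_token is an assumption here; the exact names vary per site (Twitter, for instance, uses an OAuth token key and secret pair), so check each site's auth docs and the demos above.

```python
from urllib.parse import urlencode

def with_token(url, token):
    # 'access_token' is an illustrative parameter name, not necessarily
    # what each site's endpoint actually expects.
    sep = '&' if '?' in url else '?'
    return url + sep + urlencode({'access_token': token})

print(with_token('https://example.appspot.com/@me/@friends/@app/', 'XYZ'))
# https://example.appspot.com/@me/@friends/@app/?access_token=XYZ
```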

Using the REST API

The endpoints above all serve the OpenSocial Activity Streams REST API. Request paths are of the form:


All query parameters are optional. FORMAT may be json (the default), xml, or atom; the latter two both return Atom. The rest of the path elements and query params are described above.
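A minimal sketch of assembling such a request path from the elements described above; the exact layout is an assumption based on those parameters, so check it against the live endpoints:

```python
def request_path(user_id='@me', group_id='@all', app_id='@app',
                 activity_id=None, format='json'):
    # Assemble the path elements described above; an optional trailing
    # activity id narrows the request to a single activity.
    parts = [user_id, group_id, app_id]
    if activity_id:
        parts.append(activity_id)
    return '/' + '/'.join(parts) + '/?format=' + format

print(request_path(group_id='@friends', format='atom'))
# /@me/@friends/@app/?format=atom
```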

Errors are returned with the appropriate HTTP response code, e.g. 401 for Unauthorized or 403 for Forbidden, with details in the response body.

To use the REST API in an existing ActivityStreams client, you'll need to hard-code exceptions for the domains you want to use, and redirect HTTP requests to the corresponding endpoint above.
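One way to sketch that redirect, with hypothetical endpoint hosts standing in for the real ones:

```python
from urllib.parse import urlparse, urlunparse

# Hypothetical endpoint hosts; substitute the real REST API endpoints.
ENDPOINTS = {
    'twitter.com': 'twitter-endpoint.example.com',
    'facebook.com': 'facebook-endpoint.example.com',
    'instagram.com': 'instagram-endpoint.example.com',
}

def redirect(url):
    """Rewrite a silo URL's host to its REST API endpoint, leaving
    other URLs untouched."""
    parsed = urlparse(url)
    host = ENDPOINTS.get(parsed.netloc.removeprefix('www.'))
    return urlunparse(parsed._replace(netloc=host)) if host else url

print(redirect('https://twitter.com/@me/@friends/@app/'))
# https://twitter-endpoint.example.com/@me/@friends/@app/
```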

Using the library

See the example above for a quick start guide.

Clone or download this repo into a directory named activitystreams_unofficial (note the underscore instead of dash). Each source works the same way. Import the module for the source you want to use, then instantiate its class by passing the HTTP handler object. The handler should have a request attribute for the current HTTP request.

The useful methods are get_activities() and get_actor(); the latter returns the current authenticated user (if any). See the individual method docstrings for details. All return values are Python dicts of decoded ActivityStreams JSON.
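Since everything is plain decoded JSON, downstream code needs no special types. For example, filtering activities by verb (field names from the example output above; the sample data here is made up):

```python
activities = [
    {'verb': 'post',
     'content': 'Getting stuff for barbecue tomorrow.',
     'actor': {'displayName': 'Evan Prodromou'}},
    {'verb': 'share',
     'content': 'something else',
     'actor': {'displayName': 'Someone Else'}},
]

# Keep only "post" activities, as (author, content) pairs.
posts = [(a['actor']['displayName'], a['content'])
         for a in activities if a['verb'] == 'post']
print(posts)
# [('Evan Prodromou', 'Getting stuff for barbecue tomorrow.')]
```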

The microformats2.*_to_html() functions are also useful for rendering ActivityStreams objects as nicely formatted HTML.

Future work

We'd love to add more sites! Off the top of my head, YouTube, Tumblr, Sina Weibo, Qzone, and RenRen would be good candidates. Implementing a new site is a good way to get started: it's pretty self-contained, and the existing sites are good examples to follow. It's a decent amount of work, though, so you'll be familiar with the whole project by the end.


Pull requests are welcome! Feel free to ping me with any questions.

Most dependencies are included as git submodules. Be sure to run git submodule update --init --recursive after cloning this repo.

This ActivityStreams validator is useful for manual testing.

You can run the unit tests locally. If you send a pull request, please include (or update) a test for the new functionality if possible!

The tests require the App Engine SDK. They look for it in the GAE_SDK_ROOT environment variable, /usr/local/google_appengine, or ~/google_appengine, in that order.
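That lookup order can be sketched as:

```python
import os

def find_gae_sdk():
    """Return the first App Engine SDK directory that exists, checking
    the GAE_SDK_ROOT environment variable, /usr/local/google_appengine,
    then ~/google_appengine, in that order."""
    candidates = [os.environ.get('GAE_SDK_ROOT'),
                  '/usr/local/google_appengine',
                  os.path.expanduser('~/google_appengine')]
    for path in candidates:
        if path and os.path.isdir(path):
            return path
    return None

print(find_gae_sdk())
```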

Note the app.yaml.* files, one for each App Engine app id. To work on or deploy a specific app id, symlink app.yaml to its file. Likewise, if you add a new site, you'll need to add a corresponding file.

To deploy:

git co -- app.yaml && ./ && \
rm -f app.yaml && ln -s app.yaml.twitter app.yaml && \
  ~/google_appengine/ --oauth2 update . && \
rm -f app.yaml && ln -s app.yaml.facebook app.yaml && \
  ~/google_appengine/ --oauth2 update . && \
rm -f app.yaml && ln -s app.yaml.instagram app.yaml && \
  ~/google_appengine/ --oauth2 update . && \
git co -- app.yaml

To deploy facebook-atom, twitter-atom, and instagram-atom after an activitystreams-unofficial change:

foreach s (facebook twitter instagram)
  cd ~/src/$s-atom/activitystreams && gu && git submodule update && \
    cd .. && ~/google_appengine/ --oauth2 update .
end

Related work

Gnip is by far the most complete project in this vein. It similarly converts social network data to ActivityStreams and supports many more source networks. Unfortunately, it's commercial, there's no free trial or self-serve signup, and plans start at $500.

DataSift looks broadly similar, except they offer self-serve, pay-as-you-go billing, and they use their own proprietary output format instead of ActivityStreams. They're also aimed more at data mining as opposed to individual user access.

Cliqset's FeedProxy used to do this kind of format translation, but unfortunately it and Cliqset died.

Facebook used to officially support ActivityStreams, but that's also dead.

There are a number of products that download your social network data, normalize it, and let you query and visualize it. SocialSafe and ThinkUp are two of the most mature. There's also the lifelogging/lifestream aggregator vein of projects that pull data from multiple source sites. Storytlr is a good example. It doesn't include Facebook, Google+, or Instagram, but does include a number of smaller source sites. There are lots of others, e.g. the Lifestream WordPress plugin. Unfortunately, these are generally aimed at end users, not developers, and don't usually expose libraries or REST APIs.

On the open source side, there are many related projects. php-mf2-shim adds microformats2 to Facebook and Twitter's raw HTML. sockethub is a similar "polyglot" approach, but more focused on writing than reading.


TODO

  • https kwarg to get_activities() etc. that converts all http links to https
  • convert most of the per-site tests to testdata tests