activitystreams-unofficial

About

This is a library and REST API that fetches and converts social network data between a wide variety of formats:

  • Facebook, Google+, Instagram, and Twitter native APIs
  • ActivityStreams
  • microformats2 HTML and JSON
  • Atom

You can try it out with these interactive demos:

  • http://facebook-activitystreams.appspot.com/
  • http://twitter-activitystreams.appspot.com/
  • http://instagram-activitystreams.appspot.com/

It's part of a suite of projects that implement the OStatus federation protocols for the major social networks. The other projects include portablecontacts-, salmon-, webfinger-, and ostatus-unofficial.

License: This project is placed in the public domain.

Using

The library and REST API are both based on the OpenSocial Activity Streams service.

Let's start with an example. This code using the library:

from activitystreams_unofficial import twitter
...
tw = twitter.Twitter(ACCESS_TOKEN_KEY, ACCESS_TOKEN_SECRET)
tw.get_activities(group_id='@friends')

is equivalent to this HTTP GET request:

https://twitter-activitystreams.appspot.com/@me/@friends/@app/
  ?access_token_key=ACCESS_TOKEN_KEY&access_token_secret=ACCESS_TOKEN_SECRET

They return the authenticated user's Twitter stream, i.e. tweets from the people they follow. Here's the JSON output:

{
  "itemsPerPage": 10,
  "startIndex": 0,
  "totalResults": 12,
  "items": [{
      "verb": "post",
      "id": "tag:twitter.com,2013:374272979578150912",
      "url": "http://twitter.com/evanpro/status/374272979578150912",
      "content": "Getting stuff for barbecue tomorrow. No ribs left! Got some nice tenderloin though. (@ Metro Plus Famille Lemay) http://t.co/b2PLgiLJwP",
      "actor": {
        "username": "evanpro",
        "displayName": "Evan Prodromou",
        "description": "Prospector.",
        "url": "http://twitter.com/evanpro"
      },
      "object": {
        "tags": [{
            "url": "http://4sq.com/1cw5vf6",
            "startIndex": 113,
            "length": 22,
            "objectType": "article"
          }, ...]
      }
    }, ...],
  ...
}

The request parameters are the same for both, and all are optional:

  • USER_ID is a source-specific id, or @me for the authenticated user.
  • GROUP_ID may be @all, @friends (currently identical to @all), @self, or @search.
  • APP_ID is currently ignored; best practice is to use @app as a placeholder.

Paging is supported via the startIndex and count parameters. They're self-explanatory, and described in detail in the OpenSearch spec and OpenSocial spec.

When using the GROUP_ID @search (for platforms that support it — currently Twitter and Instagram), provide a search string via the q parameter. The API is loosely based on the OpenSearch spec, the OpenSocial Core Container spec, and the OpenSocial Core Gadget spec.
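
For example, a Twitter search for the second page of ten results might look like this (the query value is only an illustration; token parameters as above):

https://twitter-activitystreams.appspot.com/@me/@search/@app/
  ?q=barbecue&startIndex=10&count=10&access_token_key=ACCESS_TOKEN_KEY&access_token_secret=ACCESS_TOKEN_SECRET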

Output data is JSON Activity Streams 1.0 objects wrapped in the OpenSocial envelope, which puts the activities in the top-level items field as a list and adds the itemsPerPage, totalResults, etc. fields.

Most Facebook requests and all Twitter, Google+, and Instagram requests will need OAuth access tokens. If you're using Python on Google App Engine, oauth-dropins is an easy way to add OAuth client flows for these sites. Otherwise, here are the sites' authentication docs: Facebook, Google+, Instagram, Twitter.

If you get an access token and pass it along, it will be used to sign and authorize the underlying requests to the source providers. See the demos on the REST API endpoints above for examples.
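
As a minimal sketch (using the requests library, with placeholder token values), this fetches the same Twitter stream via the REST API and walks the decoded ActivityStreams items:

import requests

# Placeholder credentials; get real ones from Twitter's OAuth flow (or oauth-dropins).
params = {
    'access_token_key': 'ACCESS_TOKEN_KEY',
    'access_token_secret': 'ACCESS_TOKEN_SECRET',
}

resp = requests.get('https://twitter-activitystreams.appspot.com/@me/@friends/@app/',
                    params=params)
resp.raise_for_status()

envelope = resp.json()
for activity in envelope['items']:  # ActivityStreams activities, as in the example above
    print(activity['url'], activity.get('content', ''))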

Using the REST API

The endpoints above all serve the OpenSocial Activity Streams REST API. Request paths are of the form:

/USER_ID/GROUP_ID/APP_ID/ACTIVITY_ID?startIndex=...&count=...&format=FORMAT&access_token=...

All query parameters are optional. FORMAT may be json (the default), xml, or atom; the latter two both return Atom. The rest of the path elements and query params are described above.
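
For example, this request (with placeholder token values) would return the authenticated user's own Twitter posts as an Atom feed:

https://twitter-activitystreams.appspot.com/@me/@self/@app/
  ?format=atom&count=10&access_token_key=ACCESS_TOKEN_KEY&access_token_secret=ACCESS_TOKEN_SECRET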

Errors are returned with the appropriate HTTP response code, e.g. 403 for unauthorized requests, with details in the response body.

To use the REST API in an existing ActivityStreams client, you'll need to hard-code exceptions for the domains you want to use, e.g. facebook.com, and redirect HTTP requests to the corresponding endpoint above.

Using the library

See the example above for a quick start guide.

Clone or download this repo into a directory named activitystreams_unofficial (note the underscore instead of dash). Each source works the same way. Import the module for the source you want to use, then instantiate its class by passing the HTTP handler object. The handler should have a request attribute for the current HTTP request.

The useful methods are get_activities() and get_actor(); the latter returns the current authenticated user (if any). See the individual method docstrings for details. All return values are Python dicts of decoded ActivityStreams JSON.
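
As a rough sketch (the start_index and count keyword argument names are assumptions based on the REST API's paging parameters; check the docstrings for the exact signatures):

from activitystreams_unofficial import twitter

tw = twitter.Twitter('ACCESS_TOKEN_KEY', 'ACCESS_TOKEN_SECRET')

# The authenticated user, as a decoded ActivityStreams actor dict.
actor = tw.get_actor()
print(actor.get('displayName'))

# group_id values mirror the REST API's GROUP_ID; the paging kwargs are assumed
# to mirror the startIndex/count query parameters described above.
activities = tw.get_activities(group_id='@self', start_index=0, count=10)
for activity in activities:
    print(activity.get('url'))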

The microformats2.*_to_html() functions are also useful for rendering ActivityStreams objects as nicely formatted HTML.
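
For example, something along these lines; object_to_html() is an assumption based on the *_to_html() naming above, so check the module for the exact function names and signatures:

from activitystreams_unofficial import microformats2, twitter

tw = twitter.Twitter('ACCESS_TOKEN_KEY', 'ACCESS_TOKEN_SECRET')
activity = tw.get_activities(group_id='@self')[0]

# Render the activity's object as microformats2 HTML.
# (object_to_html() is assumed here; see the module docstrings.)
html = microformats2.object_to_html(activity['object'])
print(html)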

Future work

We'd love to add more sites! Off the top of my head, YouTube, Tumblr, WordPress.com, Sina Weibo, Qzone, and RenRen would be good candidates. Implementing a new site is a good way to get started: it's pretty self-contained and the existing sites are good examples to follow, but it's a decent amount of work, so you'll be familiar with the whole project by the end.

Development

Pull requests are welcome! Feel free to ping me with any questions.

Most dependencies are included as git submodules. Be sure to run git submodule update --init --recursive after cloning this repo.

An ActivityStreams validator is useful for manual testing.

You can run the unit tests with alltests.py. If you send a pull request, please include (or update) a test for the new functionality if possible!

The tests require the App Engine SDK. They look for it in the GAE_SDK_ROOT environment variable, /usr/local/google_appengine, or ~/google_appengine, in that order.

Note the app.yaml.* files, one for each App Engine app id. To work on or deploy a specific app id, symlink app.yaml to its app.yaml.xxx file. Likewise, if you add a new site, you'll need to add a corresponding app.yaml.xxx file.

To deploy:

git checkout -- app.yaml && ./alltests.py && \
rm -f app.yaml && ln -s app.yaml.twitter app.yaml && \
  ~/google_appengine/appcfg.py --oauth2 update . && \
rm -f app.yaml && ln -s app.yaml.facebook app.yaml && \
  ~/google_appengine/appcfg.py --oauth2 update . && \
rm -f app.yaml && ln -s app.yaml.instagram app.yaml && \
  ~/google_appengine/appcfg.py --oauth2 update . && \
git checkout -- app.yaml

To deploy facebook-atom, twitter-atom, and instagram-atom after an activitystreams-unofficial change:

#!/bin/tcsh
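# Note: 'gu' below is a local alias in the author's setup (presumably a git
# pull/update); substitute your own update command if needed.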
foreach s (facebook twitter instagram)
  cd ~/src/$s-atom/activitystreams && gu && git submodule update && \
    cd .. && ~/google_appengine/appcfg.py --oauth2 update .
end

Related work

Gnip is by far the most complete project in this vein. It similarly converts social network data to ActivityStreams and supports many more source networks. Unfortunately, it's commercial, there's no free trial or self-serve signup, and plans start at $500.

DataSift looks like broadly the same thing, except they offer self-serve, pay as you go billing, and they use their own proprietary output format instead of ActivityStreams. They're also aimed more at data mining as opposed to individual user access.

Cliqset's FeedProxy used to do this kind of format translation, but unfortunately it and Cliqset died.

Facebook used to officially support ActivityStreams, but that's also dead.

There are a number of products that download your social network data, normalize it, and let you query and visualize it. SocialSafe and ThinkUp are two of the most mature. There's also the lifelogging/lifestream aggregator vein of projects that pull data from multiple source sites. Storytlr is a good example. It doesn't include Facebook, Google+, or Instagram, but does include a number of smaller source sites. There are lots of others, e.g. the Lifestream WordPress plugin. Unfortunately, these are generally aimed at end users, not developers, and don't usually expose libraries or REST APIs.

On the open source side, there are many related projects. php-mf2-shim adds microformats2 to Facebook and Twitter's raw HTML. sockethub is a similar "polyglot" approach, but more focused on writing than reading.

TODO

  • https kwarg to get_activities() etc. that converts all http links to https
  • convert most of the per-site tests to testdata tests