Python script to keep track of website changes; sends email notifications on updates and/or provides an RSS feed

To specify which parts of a website should be monitored, XPath selectors (e.g. "//h1"), CSS selectors (e.g. "h1"), and regular expressions can be used (just choose the tools you like!).

MailWebsiteChanges is related to PageMonitor for Chrome and AlertBox / Check4Change for Firefox. However, instead of living in your web browser, it can be run independently from the command line / bash and installed as a simple cron job on your Linux server.

This is Open Source -- so please contribute eagerly! ;-)


Configuration is done by creating a configuration file and placing it in the program folder. Some examples:

Website definitions

sites = [

         {'name': 'example-css',
          'parsers': [uri(uri='', contenttype='html'),
                      css(contentcss='div')]},

         {'name': 'example-xpath',
          'parsers': [uri(uri='', contenttype='html'),
                      xpath(contentxpath='//div[contains(concat(\' \', normalize-space(@class), \' \'), \' package-version-header \')]')]},

         {'name': 'my-script',
          'parsers': [command(command='/home/user/', contenttype='text')]}
]


  • parameters:

    • name
      name of the entry, used as an identifier when sending email notifications
    • receiver (optional)
      Overrides global receiver specification.
  • parameters for the URL receiver:

    • uri
      URI of the website
    • contenttype (optional; default: 'html')
      content type, e.g., 'xml'/'html'/'text'.
    • enc (optional; default: 'utf-8')
      Character encoding of the website, e.g., 'utf-8' or 'iso-8859-1'.
    • userAgent (optional)
      Defines the user agent string, e.g.,
      'userAgent': 'Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:49.0) Gecko/20100101 Firefox/49.0'
    • accept (optional)
      Defines the accept string, e.g.,
      'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
  • parameters for the Command receiver

    • command
      The command to execute; its output is used as the content to monitor.
    • contenttype (optional; default: 'text')
      content type, e.g., 'xml'/'html'/'text'.
    • enc (optional; default: 'utf-8')
      Character encoding of the website, e.g., 'utf-8' or 'iso-8859-1'.
  • parameters for the XPath parser:

    • contentxpath
      XPath expression for the content sections to extract
    • titlexpath (optional)
      XPath expression for the title sections to extract
  • parameters for the CSS parser:

    • contentcss
      CSS expression for the content sections to extract
    • titlecss (optional)
      CSS expression for the title sections to extract
  • parameters for the RegEx parser:

    • contentregex
      Regular expression for content parsing
    • titleregex (optional)
      Regular expression for title parsing
  • We collect some XPath/CSS snippets in the Snippet collection; please feel free to add your own definitions!

  • The --dry-run="shortname" option is useful for validating and fine-tuning a site definition before enabling it.

  • If you would like to keep the data stored in a different place than the working directory, you can set a corresponding path in the configuration file.
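In rough terms, a uri receiver followed by an xpath parser behaves like the sketch below — a simplified illustration with urllib and lxml; the function names fetch/extract are made up and not the script's actual code:

```python
# Illustration only: what a uri receiver plus an xpath parser roughly do.
# The userAgent/accept options map to plain HTTP request headers.
import urllib.request
from lxml import html

def fetch(uri, user_agent, accept, enc='utf-8'):
    req = urllib.request.Request(uri, headers={'User-Agent': user_agent,
                                               'Accept': accept})
    with urllib.request.urlopen(req) as response:
        return response.read().decode(enc)

def extract(content, contentxpath):
    tree = html.fromstring(content)
    return [html.tostring(node, encoding='unicode')
            for node in tree.xpath(contentxpath)]

# The extraction step applied to canned markup (no network needed):
sample = '<div class="a package-version-header b">2.0.1</div>'
matches = extract(sample, "//div[contains(concat(' ', normalize-space(@class), "
                          "' '), ' package-version-header ')]")
```

The concat/normalize-space idiom in the example-xpath definition above matches an element whose class attribute contains a given class name, even when other classes are present.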


Mail settings

enableMailNotifications = True   #enable/disable notification messages; if set to False, only send error messages
maxMailsPerSession = -1   #max. number of mails to send per session; ignored when set to -1
subjectPostfix = 'A website has been updated!'

sender = ''
smtphost = ''
useTLS = True
smtpport = 587
smtpusername = sender
smtppwd = 'mypassword'
receiver = ''   # set to '' to also disable notifications in case of errors (not recommended)
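The settings above plug straightforwardly into the standard library's smtplib and email modules; the following sketch shows one plausible way a notification could be assembled and sent (the subject layout and the function names are illustrative, not the script's actual code):

```python
# Sketch: build and send a notification mail from settings like those above.
import smtplib
from email.mime.text import MIMEText

def build_message(name, diff, sender, receiver, subjectPostfix):
    # 'name' is the site entry's identifier; 'diff' is the detected change.
    msg = MIMEText(diff, 'plain', 'utf-8')
    msg['Subject'] = '[%s] %s' % (name, subjectPostfix)  # illustrative layout
    msg['From'] = sender
    msg['To'] = receiver
    return msg

def send_message(msg, smtphost, smtpport, smtpusername, smtppwd, useTLS=True):
    with smtplib.SMTP(smtphost, smtpport) as server:
        if useTLS:
            server.starttls()
        server.login(smtpusername, smtppwd)
        server.send_message(msg)
```

With useTLS = True and smtpport = 587 as above, the connection starts in plain text and is upgraded via STARTTLS before login.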

RSS Feeds

If you prefer to use the RSS feature, you just have to specify the path of the feed file that should be generated by the script (e.g., rssfile = 'feed.xml') and then point your webserver to that file. You can also invoke the script, which implements a very basic webserver.

enableRSSFeed = True   #enable/disable RSS feed

rssfile = 'feed.xml'
maxFeeds = 100
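For orientation, this is the kind of RSS 2.0 document the rssfile setting refers to. The sketch below writes a minimal feed with the standard library; the channel title/link and the overall layout are illustrative, and the script's real feed will differ in detail:

```python
# Sketch: write a minimal RSS 2.0 feed file, capped at maxFeeds items.
import xml.etree.ElementTree as ET
from email.utils import formatdate

def write_feed(rssfile, items, maxFeeds=100):
    rss = ET.Element('rss', version='2.0')
    channel = ET.SubElement(rss, 'channel')
    ET.SubElement(channel, 'title').text = 'MailWebsiteChanges Feed'   # illustrative
    ET.SubElement(channel, 'description').text = 'Website change notifications'
    ET.SubElement(channel, 'link').text = 'http://localhost/'          # illustrative
    for name, content in items[:maxFeeds]:        # honor the maxFeeds cap
        item = ET.SubElement(channel, 'item')
        ET.SubElement(item, 'title').text = name
        ET.SubElement(item, 'description').text = content
        ET.SubElement(item, 'pubDate').text = formatdate()  # RFC 2822 date
    ET.ElementTree(rss).write(rssfile, encoding='utf-8', xml_declaration=True)

write_feed('feed.xml', [('example-css', 'content changed')])
```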

Program execution

To setup a job that periodically runs the script, simply attach something like this to your /etc/crontab:

0 8-22/2    * * *   root	/usr/bin/python3 /usr/bin/mwc

This will run the script every two hours between 8am and 10pm.

If you prefer invoking the script with an alternate configuration file, simply pass its name as an argument, e.g., mwc --config=my_alternate_config.


Requires Python 3, lxml, and cssselect. For Ubuntu 12.04, type:

  • sudo apt-get install python3 python3-dev python3-setuptools libxml2 libxslt1.1 libxml2-dev libxslt1-dev python-libxml2 python-libxslt1
  • sudo easy_install3 pip
  • sudo pip-3.2 install lxml cssselect

For Ubuntu 14.04, type:

  • sudo apt-get install python3-lxml python3-pip
  • sudo pip3 install cssselect