Comparing changes

base fork: hynek/homepage
base: 8552f062d0
...
head fork: hynek/homepage
compare: 1324bfd645
  • 2 commits
  • 8 files changed
  • 0 commit comments
  • 1 contributor
12 _assets/css/style.css
@@ -25,7 +25,7 @@ pre {
body {
color:#444;
-font-size:16px;
+font-size:18px;
line-height:1;
margin:36px auto;
max-width:960px;
@@ -113,7 +113,7 @@ font-style:italic;
}
p.meta {
color:#999;
-font-size:16px;
+font-size:18px;
font-style:italic;
padding-bottom:42px;
}
@@ -158,13 +158,13 @@ padding:0 0 3px;
}
p {
-font-size:16px;
-line-height:24px;
+font-size:18px;
+line-height:27px;
max-width:540px;
}
article li {
-line-height:24px;
+line-height:27px;
}
aside {
@@ -259,7 +259,7 @@ blockquote p {
color:#888!important;
font-size:16px!important;
font-style:italic;
-line-height:24px!important;
+line-height:27px!important;
text-align:justify;
}
4 _posts/2005-11-22-pcap-format-for-logs.md
@@ -35,7 +35,7 @@ While the header mentioned above appears only once at the beginning of a log fil
Not too exciting: simply the timestamp of the packet, the length that is contained in the log, and the real length. So it’s possible to save only a certain number of bytes of every packet.
-The following data of each packet depends heavily on the specified format. I’m going to show how I did it for `DLT_LINUX_SLL`: First, we need just another header (again, for each packet):
+The following data of each packet depends heavily on the specified format. I’m going to show how I did it for `DLT_LINUX_SLL`: first, we need just another header (again, for each packet):
~~~ {c}
@@ -50,4 +50,4 @@ The following data of each packet depends heavily of the specified format. I’m
These fields resemble those from `struct sockaddr_ll`, which is returned with each packet, so it’s just about 4 assignments and one `memcpy()`. The definition of this struct can be deduced from the pcap man page in the explanation of `DLT_LINUX_SLL`. I guess the other types should be similarly straightforward.
-After this, the packet can finally be dumped to the file.
+After this, the packet can finally be dumped to the file.
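The two headers discussed here can be sketched in a few lines of Python. This is a hypothetical illustration based on the standard pcap record format and the documented `DLT_LINUX_SLL` pseudo-header layout, not the post’s original C code:

~~~ {py}
import struct

# pcap per-packet record header: timestamp (sec/usec), captured length,
# original length -- four uint32s, little-endian as written on x86.
def record_header(ts_sec, ts_usec, incl_len, orig_len):
    return struct.pack("<IIII", ts_sec, ts_usec, incl_len, orig_len)

# DLT_LINUX_SLL pseudo-header: the sockaddr_ll-like fields (packet
# type, ARPHRD_* hardware type, address length, address, protocol),
# all big-endian per the pcap link-type documentation.
def sll_header(pkttype, hatype, halen, addr, protocol):
    return struct.pack(">HHH8sH", pkttype, hatype, halen, addr, protocol)

rec = record_header(1132655000, 0, 60, 60)
sll = sll_header(0, 1, 6, b"\xaa\xbb\xcc\xdd\xee\xff\x00\x00", 0x0800)
~~~

A packet record is then `rec + sll + data`, appended after the one-time global file header.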
6 _posts/2011-04-27-twisted-sybase.md
@@ -4,7 +4,7 @@ tags: [python]
title: 'Twisted Sybase SQL Anywhere'
---
-Using the [official sqlanydb](https://code.google.com/p/sqlanydb/) driver for Python together with Twisted’s adbapi produces not-so-occasional crashes as of today (sqlanydb 1.0.2, Twisted 11.0.0). Apparently, the official SQL Anywhere drivers aren’t thread-safe. It cost me several days to figure out because I was searching for the fault in my own code, so I hope to spare you some pain.
+Using the [official sqlanydb](https://code.google.com/p/sqlanydb/) driver for Python together with Twisted’s adbapi produces not-so-occasional crashes as of today (sqlanydb 1.0.2, Twisted 11.0.0). Apparently, the official SQL Anywhere drivers aren’t thread-safe. It cost me several days to figure out because I was searching for the fault in my own code, so I hope to spare you some pain.
Basically, there are two possible solutions to avoid the crashes (manifesting themselves as aborts/SIGABRTs; gdb revealed segmentation faults inside the binary-only driver): limit the database pool to one thread by supplying `cp_max=1` when constructing `adbapi.ConnectionPool`. Unfortunately, that means only *one* database query can run at a time because the driver isn’t asynchronous.
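The effect of `cp_max=1` is to funnel every driver call through a single thread. The same idea can be illustrated in plain modern Python (a hypothetical stand-in for the concept, not Twisted’s actual adbapi code):

~~~ {py}
from concurrent.futures import ThreadPoolExecutor

class SingleThreadPool(object):
    """Funnel every call to a non-thread-safe driver through exactly
    one worker thread -- the effect of cp_max=1 on a ConnectionPool."""
    def __init__(self):
        self._worker = ThreadPoolExecutor(max_workers=1)

    def run_query(self, fn, *args):
        # Returns a Future; queries queue up and run strictly one at
        # a time, so the driver never sees concurrent calls.
        return self._worker.submit(fn, *args)

pool = SingleThreadPool()
result = pool.run_query(lambda x: x * 2, 21).result()  # 42
~~~

The price is the same as with `cp_max=1`: queries serialize instead of running concurrently.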
@@ -33,8 +33,8 @@ At the moment, it seems pretty stable but developing with it is a pain as databa
On a related note I learned the hard way that the latest pyodbc (as of writing 2.1.8) and RHEL/CentOS 5’s ancient unixODBC (2.2.11, released in March 2005) don’t play along very well. At least in the case of using it together with SQL Anywhere, the queries simply hang.
-Updating unixODBC to 2.3 (2.2.14 works too; it’s the default for e.g. RHEL 6 or Ubuntu Natty Narwhal) fixed the problem but introduces problems with either LD_LIBRARY_PATH juggling or broken dependencies in packages like FreeTDS or Erlang that depend on unixODBC. Sometimes computers just aren’t fun.
+Updating unixODBC to 2.3 (2.2.14 works too; it’s the default for e.g. RHEL 6 or Ubuntu Natty Narwhal) fixed the problem but introduces problems with either LD_LIBRARY_PATH juggling or broken dependencies in packages like FreeTDS or Erlang that depend on unixODBC. Sometimes computers just aren’t fun.
**Update:** In production I encountered another problem after a few hours of runtime: some part of the unixODBC/SQL Anywhere driver stack hogs semaphores and at some point runs out of them. My advice is not to use SQL Anywhere if possible.
-**Update 2:** I’ve been contacted by a Sybase engineer who says the leaked-semaphore issue has been fixed in SQL Anywhere “12.0.1 build 3713 or higher” – I can’t test it myself but I’ll give it the benefit of the doubt.
+**Update 2:** I’ve been contacted by a Sybase engineer who says the leaked-semaphore issue has been fixed in SQL Anywhere “12.0.1 build 3713 or higher” – I can’t test it myself but I’ll give it the benefit of the doubt.
14 _posts/2012-01-31-going-static.md
@@ -21,7 +21,7 @@ Wordpress is a great way to get quickly decent yet feature rich pages into the i
If I want to write an article, I just want to start my Markdown editor and start writing, no matter whether I have Internet access or not. When I’m done, I want to run some deploy script and be done.
-I’m happy to announce that I’ve achieved my goal. If I want to blog, I fire up [MultiMarkdown Composer](http://multimarkdown.com/) (writing) and [Marked](http://markedapp.com/) (neat preview), and as soon as I feel the article is ready, I start a shell script that consists of two commands: one to compile my site (takes less than half a second) and one to rsync the data to my web host (depends on the bandwidth but never takes more than 2–3 seconds).
+I’m happy to announce that I’ve achieved my goal. If I want to blog, I fire up [MultiMarkdown Composer](http://multimarkdown.com/) (writing) and [Marked](http://markedapp.com/) (neat preview), and as soon as I feel the article is ready, I start a shell script that consists of two commands: one to compile my site (takes less than half a second) and one to rsync the data to my web host (depends on the bandwidth but never takes more than 2–3 seconds).
So how did I achieve this bliss?
@@ -29,11 +29,11 @@ So how did I achieve this bliss?
I tried several static site generators and I don’t want to throw dirt at any of them – let’s just say I found most of them overcomplicated and/or abandoned. In the end I was pretty happy with the Python-based **[mynt](http://mynt.mirroredwhite.com/)**.
-It’s pretty much bare bones but it does exactly what I need: templating using [Jinja2](http://jinja.pocoo.org/docs/) and blogging in [Markdown](http://daringfireball.net/projects/markdown/). Everything works straightforwardly and it never got in my way, although it’s still pretty fresh (herb pun intended).
+It’s pretty much bare bones but it does exactly what I need: templating using [Jinja2](http://jinja.pocoo.org/docs/) and blogging in [Markdown](http://daringfireball.net/projects/markdown/). Everything works straightforwardly and it never got in my way, although it’s still pretty fresh (herb pun intended).
-So I took my Wordpress theme called [Space](http://getspace.org/) (which I paid actual money for; that’s why I haven’t open-sourced my blog [yet]), fleshed out new, simplified templates and rewrote my HTML posts in Markdown. The last step ended up being the hardest part.
+So I took my Wordpress theme called [Space](http://getspace.org/) (which I paid actual money for; that’s why I haven’t open-sourced my blog [yet]), fleshed out new, simplified templates and rewrote my HTML posts in Markdown. The last step ended up being the hardest part.
-For previewing purposes I simply put the mynt command as my make command in vim so it’s always at my fingertips:
+For previewing purposes I simply put the mynt command as my make command in vim so it’s always at my fingertips:
~~~ {vim}
@@ -44,7 +44,7 @@ and serve the directory using Twisted’s integrated web server:
twistd -n web --path _site --port 8000
-Any other web server should work too, except for Python’s [SimpleHTTPServer](http://docs.python.org/library/simplehttpserver.html), which has to be run from the target directory and doesn’t seem to cope well with mynt’s habit of deleting the target directory before creating new files.
+Any other web server should work too, except for Python’s [SimpleHTTPServer](http://docs.python.org/library/simplehttpserver.html), which has to be run from the target directory and doesn’t seem to cope well with mynt’s habit of deleting the target directory before creating new files.
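As an aside from a later vantage point (this postdates the post): Python 3.7’s `http.server` grew a `directory` parameter that sidesteps the run-from-the-target-directory limitation entirely:

~~~ {py}
import functools
import http.server
import pathlib
import socketserver
import tempfile
import threading
import urllib.request

# Serve an arbitrary directory without chdir'ing into it
# (the `directory` parameter appeared in Python 3.7).
site = tempfile.mkdtemp()
pathlib.Path(site, "index.html").write_text("hello")

Handler = functools.partial(
    http.server.SimpleHTTPRequestHandler, directory=site)

with socketserver.TCPServer(("127.0.0.1", 0), Handler) as httpd:
    port = httpd.server_address[1]
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    body = urllib.request.urlopen(
        "http://127.0.0.1:%d/index.html" % port).read()
    httpd.shutdown()
~~~

Because the handler is bound to the directory rather than the process’s working directory, mynt deleting and recreating `_site` is less of a problem.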
### Bells n’ Whistles ###
@@ -52,10 +52,10 @@ For stats I went for a self-hosted installation of [Piwik](http://piwik.org/) wh
If I wanted to add comments – which I [don’t](http://mattgemmell.com/2011/11/29/comments-off/) – I could achieve it easily using [Disqus](http://disqus.com/).
-That’s basically it: some semantic HTML5, one for-loop for the archive and a purchased CSS file. If you want a more complex setup, I suggest that of [Steve Losh](http://stevelosh.com/blog/2010/01/moving-from-django-to-hyde/) who went for the much more potent – but also more complex – [Hyde](https://github.com/lakshmivyas/hyde/).
+That’s basically it: some semantic HTML5, one for-loop for the archive and a purchased CSS file. If you want a more complex setup, I suggest that of [Steve Losh](http://stevelosh.com/blog/2010/01/moving-from-django-to-hyde/) who went for the much more potent – but also more complex – [Hyde](https://github.com/lakshmivyas/hyde/).
### What I’m still missing ###
I’d really like to use [MultiMarkdown](http://fletcherpenney.net/multimarkdown/), which has nice features like footnotes, and mynt still seems to have some rough edges – e.g. for some reason it doesn’t copy my favicon.ico if it’s in the main directory.
-Unfortunately, hyde and [Octopress](http://octopress.org/) are far too complex and [blogofile](http://www.blogofile.com/) is abandonware – so I don’t have much choice here.
+Unfortunately, hyde and [Octopress](http://octopress.org/) are far too complex and [blogofile](http://www.blogofile.com/) is abandonware – so I don’t have much choice here.
2  _posts/2012-02-25-celery-and-sybase.md
@@ -6,7 +6,7 @@ title: 'Celery and Sybase SQL Anywhere'
In our newest installation of “why you should not use Sybase SQL Anywhere” I’d like to report the newest problem I had to solve: for some reason, I couldn’t connect using [sqlanydb](http://code.google.com/p/sqlanydb/) from [Celery](http://celeryproject.org/) tasks.
-I had code that worked fine if run from plain Python, but barfed from a Celery worker task with the ominous error:
+I had code that worked fine if run from plain Python, but barfed from a Celery worker task with the ominous error:
~~~
OperationalError: Failed to initialize connection object
~~~
26 _posts/2012-04-23-python-deployment-anti-patterns.md
@@ -27,7 +27,7 @@ Every time someone whines about lack of support for Python 2.4 in recent package
If you’re serious about using Python you should be prepared to roll your own RPMs/DEBs. We’re running even [RHEL 4](http://en.wikipedia.org/wiki/RHEL#Life_Cycle_Dates) on some of our servers; but we’re a Python company so we use the best thing we can get – even if it means extra work.
-We also have to compile our own Apaches and MySQLs for our customer servers (we don’t use any of them for our own systems, but our customers demand a solid [LAMP](http://en.wikipedia.org/wiki/LAMP_\(software_bundle\)) stack) because we need that fine-grained control. Why should Python be an exception? Rolling your own DEB/RPM is a lot less of a nuisance than writing code for Python < 2.6.
+We also have to compile our own Apaches and MySQLs for our customer servers (we don’t use any of them for our own systems, but our customers demand a solid [LAMP](http://en.wikipedia.org/wiki/LAMP_\(software_bundle\)) stack) because we need that fine-grained control. Why should Python be an exception? Rolling your own DEB/RPM is a lot less of a nuisance than writing code for Python < 2.6.
This also works both ways. It’s totally possible that you have some mission-critical web app that isn’t compatible with any Python newer than 2.4. Are you going to install a single server with an ancient OS just to accommodate it? Key infrastructure _must not_ be dictated by third parties.
@@ -37,7 +37,7 @@ On the other hand I’m **not** saying that you _have_ to compile Python yoursel
Gentlepeople, if you’re deploying software, *always* use [virtualenv](http://docs.python-guide.org/en/latest/dev/virtualenvs/). Actually, the same goes for local development – look into [virtualenvwrapper](http://www.doughellmann.com/projects/virtualenvwrapper/), which makes handling them a breeze. So never install into your global site-packages! The only exception is the aforementioned virtualenv – which in turn installs [pip](http://www.pip-installer.org/) in each environment it creates.
-Test your software against certain versions of packages, pin them using `pip freeze`, and be confident that an identical Python environment is just a `pip install -r requirements.txt` away. For the record, I split up my requirements files; more on that in the next installment.
+Test your software against certain versions of packages, pin them using `pip freeze`, and be confident that an identical Python environment is just a `pip install -r requirements.txt` away. For the record, I split up my requirements files; more on that in the next installment.
Also, use real version pinning like `package==1.3`. Don’t do `package>=1.3`; it _will_ bite you eventually, just as it has bitten me and many others.
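A fully pinned requirements file in the spirit of this advice might look like this (package names and versions are hypothetical examples):

~~~
# requirements.txt -- exact pins; reproduce with `pip install -r requirements.txt`
Django==1.4
SQLAlchemy==0.7.6
psycopg2==2.4.4
~~~

Every entry uses `==`, so the environment you tested is the environment you deploy.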
@@ -50,7 +50,7 @@ First of all, there’s no reason to succumb to a dictate of your distribution w
1. If I write and test software, I do it against certain packages. Packages tend to change APIs, introduce bugs, etc.
2. My software is supposed to run on any UNIXy platform as long as the Python it’s written against is present.
-What if the next Ubuntu ships with a different [SQLAlchemy](http://www.sqlalchemy.org/) by default? Do I have to fix all my applications before upgrading our servers? Or what if I need to deploy an app to an older server? Do I have to rewrite it so it runs with older packages? I prefer not to.
+What if the next Ubuntu ships with a different [SQLAlchemy](http://www.sqlalchemy.org/) by default? Do I have to fix all my applications before upgrading our servers? Or what if I need to deploy an app to an older server? Do I have to rewrite it so it runs with older packages? I prefer not to.
I really wish the Linux distributions wouldn’t ship anything more than the Python interpreter and virtualenv. Anything else just encourages bad behavior.
@@ -58,26 +58,26 @@ The only good they may be doing is automatically updating packages with security
### Don’t run your daemons in a tmux/screen ###
-It seems to be part of everyone’s evolution to do it, so be the first one to skip it!
+It seems to be part of everyone’s evolution to do it, so be the first one to skip it!
Yes, [tmux](http://tmux.sourceforge.net/) is full of awesome (and wayyy better than screen), but _please_ don’t just ssh onto your host and start the service in a tmux or screen. You have nothing that brings the daemon back up if it crashes. You can’t restart it on 10 servers without ssh’ing into 10 servers, attaching to the screen and Ctrl-C’ing it. Granted, it’s easy in the beginning, but it doesn’t scale and lacks basic features that simple-to-use tools have to offer.
-My favorite one is [supervisord](http://supervisord.org/). A definition for a service looks as simple as:
+My favorite one is [supervisord](http://supervisord.org/). A definition for a service looks as simple as:
~~~
[program:yourapp]
command=/path/to/venv/bin/gunicorn_django --config deploy/gunicorn-config.py settings/production.py
user=yourapp
directory=/apps/yourapp
~~~
-You add the file to `/etc/supervisor/conf.d/`, run a `supervisorctl update` and your service is up and running. It’s a no-brainer and much easier than juggling rc.d scripts. Crash recovery and optional web interface included.
+You add the file to `/etc/supervisor/conf.d/`, run a `supervisorctl update` and your service is up and running. It’s a no-brainer and much easier than juggling rc.d scripts. Crash recovery and optional web interface included.
-### Configuration is not part of the application ###
+### Configuration is not part of the application ###
-Your production configuration doesn’t belong in the (same) source repository. There are configuration management tools like [Puppet](http://puppetlabs.com/) or [Chef](http://www.opscode.com/chef/) that do exactly that for you.
+Your production configuration doesn’t belong in the (same) source repository. There are configuration management tools like [Puppet](http://puppetlabs.com/) or [Chef](http://www.opscode.com/chef/) that do exactly that for you.
-**Just better and more reliably.** While installing the configuration, Puppet can make sure that the directories always have certain permissions. Configuration templates make it perfect for mass deployments. Some service IP changed? Just fix it in Puppet’s repo and deploy the changes. Eventually all services will catch up. If you want, you can always trigger a run, for example using a simple [Fabric](http://docs.fabfile.org/) script.
+**Just better and more reliably.** While installing the configuration, Puppet can make sure that the directories always have certain permissions. Configuration templates make it perfect for mass deployments. Some service IP changed? Just fix it in Puppet’s repo and deploy the changes. Eventually all services will catch up. If you want, you can always trigger a run, for example using a simple [Fabric](http://docs.fabfile.org/) script.
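Such a Puppet-managed configuration file might be declared like this (a hypothetical sketch; the paths, names and template are made up):

~~~ {puppet}
file { '/etc/yourapp/production.ini':
  ensure  => file,
  owner   => 'yourapp',
  mode    => '0640',
  # Filled from a template, so a changed service IP is one edit away.
  content => template('yourapp/production.ini.erb'),
}
~~~

Puppet enforces owner and mode on every run, which is exactly the “directories always have certain permissions” guarantee mentioned above.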
-But don’t use Fabric for actual deployments! This is the perfect example of the battle between “simple” and “easy”. At first it’s easier to put everything inside the repo and run a Fabric script that does a `git pull` and restarts your daemon. In the long run, you’ll regret it like many before you did.
+But don’t use Fabric for actual deployments! This is the perfect example of the battle between “simple” and “easy”. At first it’s easier to put everything inside the repo and run a Fabric script that does a `git pull` and restarts your daemon. In the long run, you’ll regret it like many before you did.
Just to stress this point: I love Fabric and couldn’t live without it. But it’s not the right tool for orchestrating deployments – that’s where Puppet and Chef step in.
@@ -85,7 +85,7 @@ Just to stress this point: I love Fabric and couldn’t live without. But it’s
Many people go for [Apache](http://httpd.apache.org/) and [mod_wsgi](http://code.google.com/p/modwsgi/) by default, because everybody has already heard about Apache.
-To me, Apache feels like a [big ball of mud](http://www.laputan.org/mud/) and I find the modular combination of [gunicorn](http://gunicorn.org/) or [uwsgi](http://projects.unbit.it/uwsgi/) together with [nginx](http://www.nginx.com/) much more pleasing and easier to control.
+To me, Apache feels like a [big ball of mud](http://www.laputan.org/mud/) and I find the modular combination of [gunicorn](http://gunicorn.org/) or [uwsgi](http://projects.unbit.it/uwsgi/) together with [nginx](http://www.nginx.com/) much more pleasing and easier to control.
YMMV, but have a look around before you settle.
@@ -93,9 +93,9 @@ YMMV, but have a look around before you settle.
I don’t claim that I’ve discovered the sorcerer’s stone; however, I’ve developed a system for us that has proved solid and simple in _the long run_.
-The trick is to build a debian package (it can be done using RPMs just as well) with the application and the whole virtualenv inside. The configuration goes into Puppet, and Puppet also takes care that the respective servers always have the latest version of the DEB.
+The trick is to build a debian package (it can be done using RPMs just as well) with the application and the whole virtualenv inside. The configuration goes into Puppet, and Puppet also takes care that the respective servers always have the latest version of the DEB.
-The advantage is that such a DEB is totally self-contained and doesn’t require build tools or libraries on the target servers; paired with a solid Puppet configuration, it makes consistent deployments over a wide range of hosts easy, fast and reliable. But you have to do your homework first.
+The advantage is that such a DEB is totally self-contained and doesn’t require build tools or libraries on the target servers; paired with a solid Puppet configuration, it makes consistent deployments over a wide range of hosts easy, fast and reliable. But you have to do your homework first.
If you find this approach intriguing, make sure you check out [my article][native] where I describe it!
44 _posts/2012-05-03-python-app-deployment-with-native-packages.md
@@ -5,15 +5,15 @@ title: 'Python Application Deployment with Native Packages'
---
After I’ve told you [what not to do][anti-patterns], I’d like to introduce you to the method we use to deploy a wide variety of services.
-First of all: I’m truly humbled by the interest and feedback my [last article][anti-patterns] has sparked. It’s so easy to forget that something you do on a daily basis might actually be this interesting to others. Thanks again for the votes, praise, traffic and constructive feedback!
+First of all: I’m truly humbled by the interest and feedback my [last article][anti-patterns] has sparked. It’s so easy to forget that something you do on a daily basis might actually be this interesting to others. Thanks again for the votes, praise, traffic and constructive feedback!
### Preamble & Disclaimer ###
-I understand your expectations are high, given the amount of feedback I have gotten. However, deployment is a highly individual process and turnkey solutions aren’t really possible and/or useful. So in contrast to the previous article, this one is an *advanced topic* for a more advanced breed of DevOps. In other words: you should already have some deployment experience under your belt to really benefit from this article.
+I understand your expectations are high, given the amount of feedback I have gotten. However, deployment is a highly individual process and turnkey solutions aren’t really possible and/or useful. So in contrast to the previous article, this one is an *advanced topic* for a more advanced breed of DevOps. In other words: you should already have some deployment experience under your belt to really benefit from this article.
-To avoid excessive length, I’ll assume you’re at least loosely familiar with Fabric. If not, please try to get a rough grasp of it [first][fab-docs]. Also, I won’t be able to dive into Puppet or Chef. Please use this article as a starting point – it won’t end up being all-encompassing. I sincerely hope to get you started though, if you consider doing this kind of deployment.
+To avoid excessive length, I’ll assume you’re at least loosely familiar with Fabric. If not, please try to get a rough grasp of it [first][fab-docs]. Also, I won’t be able to dive into Puppet or Chef. Please use this article as a starting point – it won’t end up being all-encompassing. I sincerely hope to get you started though, if you consider doing this kind of deployment.
-To reap all of the benefits, you’ll need to run a [private debian repository][private-debian] server for your packages. That’s not hard, but it takes some effort. Fortunately, you can avoid running your own debian repository and still gain most of the advantages: a debian package (or rpm package for that matter) can also be installed by hand using `dpkg -i your-package.deb` (`rpm -Uhv your-package.rpm`).
+To reap all of the benefits, you’ll need to run a [private debian repository][private-debian] server for your packages. That’s not hard, but it takes some effort. Fortunately, you can avoid running your own debian repository and still gain most of the advantages: a debian package (or rpm package for that matter) can also be installed by hand using `dpkg -i your-package.deb` (`rpm -Uhv your-package.rpm`).
### Why Native Packages at All? ###
@@ -21,13 +21,13 @@ Both in public discussions as well as privately by mail, one of the most frequen
> what’s wrong with Fabric+git-pull?
- So let me clarify that first.
+ So let me clarify that first.
**It doesn’t scale.** As soon as you have more than a single deployment target, it quickly becomes a hassle to pull changes, check dependencies and restart the daemon on every single server. A new version of [Django] is out? Great, fetch it on every single server. A new version of [psycopg2]? Awesome, compile it on each of `n` servers.
**It’s hard to integrate with Puppet/Chef.** It’s easy to tell [Puppet] “on server X, keep package foo-bar up-to-date or keep it at a specific version!” That’s a one-liner. Try that while babysitting git and pip.
-**You have to install build tools on target servers.** GCC and development files don’t belong on production servers. Not only are lightweight systems easier to manage and faster to set up, it’s also a security feature: many attacks require a working C compiler.
+**You have to install build tools on target servers.** GCC and development files don’t belong on production servers. Not only are lightweight systems easier to manage and faster to set up, it’s also a security feature: many attacks require a working C compiler.
**It can leave your app in an inconsistent state.** Sometimes `git pull` fails halfway through because of network problems, or pip times out while installing dependencies because [PyPI][pypi] went away (I heard that happens occasionally *cough*). Your app at this point is – put simply – broken.
@@ -35,7 +35,7 @@ Both in public discussions as well as privately by mail, one of the most frequen
On the other hand, deploying using self-contained native packages makes the update of an app a near-atomic, predictable operation. Rollbacks can be done easily by installing an older package version. You always _know_ in what state your application is right now. You need to update an app on many servers? Build once, let Puppet deploy everywhere. No compiling of any dependencies, no compilers or development packages at all on production servers.
-Some of the problems mentioned above can be mitigated by running a [private PyPI][private-pypi] server – which you _should_ do anyway. Nevertheless, in the grand picture, that’s just a short-term hack. I’d like to seize the opportunity at this point to plug a great alternative to the official PyPI: [crate.io].
+Some of the problems mentioned above can be mitigated by running a [private PyPI][private-pypi] server – which you _should_ do anyway. Nevertheless, in the grand picture, that’s just a short-term hack. I’d like to seize the opportunity at this point to plug a great alternative to the official PyPI: [crate.io].
That said, if you have one app on one server and you know it will never change (although people tend to err here), feel free to keep it simple until you have a real need. That’s the reason why I gave context about my work in the [previous article][anti-patterns]. Some points may be anti-patterns, however you may get away with them if your situation is different from mine.
@@ -43,13 +43,13 @@ That said, if you have one app on one server and you know it will never change (
Before I dig into the actual packaging code (I will later in this article, I promise!), let me show you the end result using a simple [Twisted] application which is our [whois] server for [ICANN] domains (like .com).
-Every application [we][vrmd] deploy has one “[fabfile.py][fab-docs]” that describes the build process, one “[requirements.txt]” containing all of its run-time requirements, and “postinst” and “postrm” scripts. The latter are [debian/Ubuntu specific][debian-scripts] and are executed after an installation/update and after an uninstallation or before an update (please note that there are more possible scripts and we use them too, but I’m trying to keep this article simple). After months of refining, all of them look really simple.
+Every application [we][vrmd] deploy has one “[fabfile.py][fab-docs]” that describes the build process, one “[requirements.txt]” containing all of its run-time requirements, and “postinst” and “postrm” scripts. The latter are [debian/Ubuntu specific][debian-scripts] and are executed after an installation/update and after an uninstallation or before an update (please note that there are more possible scripts and we use them too, but I’m trying to keep this article simple). After months of refining, all of them look really simple.
At the top level, all I do to build a new debian package of the app I’m currently working on is `fab deb`. Typically, this run takes from 30 seconds to 2.5 minutes – depending on the number of dependencies that have to be processed before the actual packaging.
-To deploy it to our repositories, I do a `fab push`. From now on, it can be installed using `aptitude install <app>` on our servers that carry the necessary apt configuration. That’s also where it gets picked up as soon as Puppet realizes the packages on the production servers are out of date. Usually I trigger at least the first server with a `puppet agent --test` or `aptitude update && aptitude dist-upgrade <app>`.
+To deploy it to our repositories, I do a `fab push`. From now on, it can be installed using `aptitude install <app>` on our servers that carry the necessary apt configuration. That’s also where it gets picked up as soon as Puppet realizes the packages on the production servers are out of date. Usually I trigger at least the first server with a `puppet agent --test` or `aptitude update && aptitude dist-upgrade <app>`.
-Let’s start going into more details with **fabfile.py**:
+Let’s start going into more details with **fabfile.py**:
~~~ {py}
@@ -80,11 +80,11 @@ Let’s start going into more details with **fabfile.py**:
That’s all the programmatic information required to build a deb package. The instantiation of `Deployment` makes sure all `build_deps` are present and remembers to set `run_deps` as package dependencies. In this case we need “libpq-dev” for compiling [psycopg2] while building; when deployed, “[supervisor]” is necessary for supervising (_duh_!) the daemon.
-`Deployment.prepare_app()` creates the necessary directories on the build server, checks out the desired branch (`None` means current), creates a [virtualenv] and populates it with the dependencies from requirements.txt. As a bonus it also fixes the shebangs (“`#!`”) of all scripts in the virtualenv to point to the correct Python path on the target system.
+`Deployment.prepare_app()` creates the necessary directories on the build server, checks out the desired branch (`None` means current), creates a [virtualenv] and populates it with the dependencies from requirements.txt. As a bonus it also fixes the shebangs (“`#!`”) of all scripts in the virtualenv to point to the correct Python path on the target system.
-Now, `Deployment.build_deb()` takes the whole app _including the virtualenv_, packages it using [fpm] and downloads it to my local host. The version of the package is the build number – which is just the latest package version in our Ubuntu repositories plus one. Finally, `Deployment.push_to_repo()` takes the now-local debian package and pushes it to our mirrors.
+Now, `Deployment.build_deb()` takes the whole app _including the virtualenv_, packages it using [fpm] and downloads it to my local host. The version of the package is the build number – which is just the latest package version in our Ubuntu repositories plus one. Finally, `Deployment.push_to_repo()` takes the now-local debian package and pushes it to our mirrors.
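Since the diff elides the actual fabfile, here is a rough, hypothetical skeleton of the three-step workflow just described (all names and commands are made up; only the shape mirrors the text):

~~~ {py}
# Hypothetical skeleton of the Deployment workflow described above;
# each method just composes the shell commands the text mentions.
class Deployment(object):
    def __init__(self, app, build_deps, run_deps):
        self.app = app
        self.build_deps = build_deps  # checked on the build server
        self.run_deps = run_deps      # become the deb's dependencies

    def prepare_app(self, branch=None):
        # checkout, create a virtualenv, install requirements.txt
        return [
            "git checkout %s" % (branch or "HEAD"),
            "virtualenv /build/%s/venv" % self.app,
            "/build/%s/venv/bin/pip install -r requirements.txt" % self.app,
        ]

    def build_deb(self, version):
        # package app + virtualenv with fpm; run_deps become -d flags
        deps = " ".join("-d %s" % d for d in self.run_deps)
        return "fpm -s dir -t deb -n %s -v %s %s /build/%s" % (
            self.app, version, deps, self.app)

d = Deployment("whois", build_deps=["libpq-dev"], run_deps=["supervisor"])
cmd = d.build_deb(42)
~~~

The real implementation would run these commands on the build server via Fabric rather than returning them as strings.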
-Want a more involved example? Here’s a [Django] app including JavaScript minification, [LESS] compilation, i18n translation and several sub-apps:
+Want a more involved example? Here’s a [Django] app including JavaScript minification, [LESS] compilation, i18n translation and several sub-apps:
~~~ {py}
@@ -123,7 +123,7 @@ Want a more involved example? Here’s a [Django] app including JavaScript mini
~~~
-Most of it should be rather obvious. I’d just like to point out `Deployment.add_cache_busting()`, which looks for a special string in the supplied files and replaces it with the package version. This makes crazy expiration headers for CSS and JS files [possible][cache-busting].
+Most of it should be rather obvious. I’d just like to point out `Deployment.add_cache_busting()`, which looks for a special string in the supplied files and replaces it with the package version. This makes crazy expiration headers for CSS and JS files [possible][cache-busting].
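A minimal sketch of the cache-busting idea – the placeholder token `__CACHE_BUST__` is made up for illustration; the real helper looks for its own special string:

```python
import re

def add_cache_busting(text, version):
    # Replace the placeholder with the package version so browsers re-fetch
    # static files exactly when a new package is deployed.
    return re.sub(r"__CACHE_BUST__", version, text)

css_ref = '<link href="/static/app.css?v=__CACHE_BUST__">'
print(add_cache_busting(css_ref, "42"))  # <link href="/static/app.css?v=42">
```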
So, what about “postinst” and “postrm”? Let’s start with **postrm**, which is really simple:
@@ -177,9 +177,9 @@ Basically [debian boilerplate][debian-scripts] only. Yep, _we_ just tell supervi
In this case we have two interesting lines (9 & 10). First we recreate the virtualenv so that it matches the Python installation on the target system. After that, we start the daemon. The additional call to virtualenv does _not_ alter the actual packages but instead only adjusts the binaries – it’s the right way to do it and was suggested by virtualenv’s original author – [Ian Bicking][ian] – to me.
-At this point, all the necessary configuration files are already in place thanks to [Puppet], even before this script has been run.
+At this point, all the necessary configuration files are already in place thanks to [Puppet], even before this script has been run.
-These two scripts and how they interact with configuration management might be the most critical point of the deployments as some packages need special treatment. For example, Pylons and Pyramid apps have to be `python setup.py install`’ed. Also, depending on your uptime/HA needs, you may not be able to just stop, install and start again – although it’s usually much faster than than a `git pull && pip install -U requirements.txt`.
+These two scripts and how they interact with configuration management might be the most critical point of the deployments as some packages need special treatment. For example, Pylons and Pyramid apps have to be `python setup.py install`’ed. Also, depending on your uptime/HA needs, you may not be able to just stop, install and start again – although it’s usually much faster than a `git pull && pip install -U requirements.txt`.
I’ve made your mouth water even more, and the article is too long already. So let’s move on quickly.
@@ -191,9 +191,9 @@ I’ll show you parts of my implementation and the reasoning behind it. In the l
We use dedicated VMs for building packages for certain OSs. On these, we expect a user called “buildbot” with no special privileges, virtualenv 1.7 (the version _is_ important because we rely on the `--no-site-packages` default) and [fpm] (which we unfortunately have to install by hand via gem at the moment).
-We use “vrmd” as a prefix for paths of our apps (for example “/vrmd/whois”) as well for packages (for example “vrmd-whois“).
+We use “vrmd” as a prefix for paths of our apps (for example “/vrmd/whois”) as well as for packages (for example “vrmd-whois”).
-Every app has its own user with the same name as the app and owns a home directory in “/vrmd/app-name“. This contains _at least_ the virtualenv (for example “/vrmd/whois/venv”) and the app itself (for example “/vrmd/whois/whois” – this “double whois” is necessary as some apps need more than one directory for code or static files).
+Every app has its own user with the same name as the app and owns a home directory in “/vrmd/app-name”. This contains _at least_ the virtualenv (for example “/vrmd/whois/venv”) and the app itself (for example “/vrmd/whois/whois” – this “double whois” is necessary as some apps need more than one directory for code or static files).
An example:
@@ -293,8 +293,8 @@ Now let’s look at one of the two key methods: the one that builds the whole vi
I believe this code is mostly self-explanatory. Here are some less obvious points:
- `git_clone()` is just a helper function that does a `git clone` from our repo.
-- The git calls aren’t obvious but the variables help should explaining them: Line 7 determines the current branch (which is used by default), line 13 finds the latest commit id (which is used for the package description).
-- The shebang fix uses a [backreference][back-ref] `\1` to put the binary name – that it captured before – at the end of the correct path.
+- The git calls aren’t obvious, but the variables should help explain them: line 7 determines the current branch (which is used by default), line 13 finds the latest commit id (which is used for the package description).
+- The shebang fix uses a [backreference][back-ref] `\1` to put the binary name – that it captured before – at the end of the correct path.
I may expand the explanations here if I get the feeling that something is unclear. But I think it’s straightforward.
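The shebang fix from the second bullet can be sketched like this – both path prefixes are illustrative, not the real build layout:

```python
import re

# The build-time virtualenv path gets baked into every script's shebang;
# re-anchor the captured binary name (backreference \1) at the target path.
BUILD_BIN = "/home/buildbot/build/whois/venv/bin/"   # example path
TARGET_BIN = "/vrmd/whois/venv/bin/"                 # example path

def fix_shebang(line):
    # Capture the binary name after the build prefix and put it back
    # at the end of the correct target path via \1.
    return re.sub(
        r"^#!%s(\S+)" % re.escape(BUILD_BIN),
        "#!%s\\1" % TARGET_BIN,
        line,
    )

print(fix_shebang("#!/home/buildbot/build/whois/venv/bin/python"))
# -> #!/vrmd/whois/venv/bin/python
```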
@@ -328,7 +328,7 @@ Now there’s only one thing left, the actual packaging of the deb. Which is –
get(rv.split('"')[-2], 'deploy/debian/%(basename)s')
~~~
-One last convention I have to mention: every app has a sub-directory called “deploy” containing another directory called “debian“. This contains the aforementioned “postrm” and “postinst” and that’s why the debian directory is pulled to the build directory in lines 5–7. These files are later referenced using `--after-remove` and `--after-install`. By the way, building a RPMs is just a matter of changing the fpm call from `-t deb` to `-t rpm` and adjusting “postrm” and “postinst” to RedHat standards.
+One last convention I have to mention: every app has a sub-directory called “deploy” containing another directory called “debian”. This contains the aforementioned “postrm” and “postinst” and that’s why the debian directory is pulled to the build directory in lines 5–7. These files are later referenced using `--after-remove` and `--after-install`. By the way, building an RPM is just a matter of changing the fpm call from `-t deb` to `-t rpm` and adjusting “postrm” and “postinst” to RedHat standards.
The only “magic” in this method is `rv.split('"')[-2]`, which makes total sense if you know that fpm returns a string like
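The extraction is easy to verify in isolation; the output line below only approximates the shape of fpm’s message (a Ruby-style hash whose path value is the last double-quoted token), it is not fpm’s verbatim wording:

```python
# The path is the last double-quoted token, so splitting on '"' puts it
# second-to-last in the resulting list.
rv = 'Created package {:path=>"vrmd-whois_20_amd64.deb"}'  # illustrative output

parts = rv.split('"')   # ['Created package {:path=>', 'vrmd-whois_20_amd64.deb', '}']
package = parts[-2]     # the quoted path
print(package)          # vrmd-whois_20_amd64.deb
```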
@@ -340,7 +340,7 @@ One particularity I really like is the package description:
> Automated build. Branch: master Commit: deadbeef
-Using the commit id, we can later check the git history for the exact specifics of this package, and what has happened since then.
+Using the commit id, we can later check the git history for the exact specifics of this package, and what has happened since then.
### Epilogue ###
10 about/index.html
@@ -20,10 +20,10 @@
<h3>Tech</h3>
- <p>I prefer UNIX-like operating systems, especially OS X &amp; GNU/Linux. I love programming and basically started hacking since I got my first computer in the early 1990’s</p>
- <p>Currently, my tools of trade are C, Python and (Java|Coffee)Script. But I’ve already done serious projects in nearly every major language out there. My favorite frameworks are Twisted, Pyramid and Django – depending on the problem to be solved.</p>
+ <p>I prefer UNIX-like operating systems, especially OS X &amp; GNU/Linux. I love programming and basically started hacking since I got my first computer in the early 1990s.</p>
+ <p>Currently, my tools of trade are C, Python and (Java|Coffee)Script. But I’ve already done serious projects in nearly every major language out there. My favorite frameworks are Twisted, Pyramid and Django – depending on the problem to be solved.</p>
<p>When it comes to design of any kind, I strongly believe that minimalism always wins. I love beautiful typography though.</p>
- <p>My private projects can be found on <a href="https://github.com/hynek" rel="author">GitHub</a>. Most of my current Open Source efforts are to make CPython (i.e. the current mainstream Python interpreter) more awesome – repos related to that kind of work can be found at <a href="https://bitbucket.org/hynek" rel="author">BitBucket</a> (damn you, Mercurial).</p>
+ <p>My private projects can be found on <a href="https://github.com/hynek" rel="author">GitHub</a>. Most of my current Open Source efforts are to make CPython (i.e. the current mainstream Python interpreter) more awesome – repos related to that kind of work can be found at <a href="https://bitbucket.org/hynek" rel="author">BitBucket</a> (damn you, Mercurial).</p>
<h3>Photography</h3>
@@ -33,11 +33,11 @@
<h3>Health</h3>
<p>I try to live a primal/paleo life as coined by <a href="http://www.marksdailyapple.com/">Mark Sisson</a> and <a href="http://robbwolf.com/">Robb Wolf</a>. Yes it works, I lost weight and got much stronger and healthier without hunger and hours in the gym.</p>
- <p>After having helped many people to lose weight, Nils and I launched a German site about the Primal life so others can benefit from our knowledge. Thus if you’re fluent in German and want to get slimmer, stronger and healthier, make sure to visit <a href="http://grok-on.de/">us.</a></p>
+ <p>After having helped many people to lose weight, Nils and I launched a German site about the Primal life so others can benefit from our knowledge. Thus if you’re fluent in German and want to get slimmer, stronger and healthier, make sure to visit <a href="http://grok-on.de/">us</a>.</p>
<h3>Colophon</h3>
- <p>These pages are completely static and compiled using <a href="http://mynt.mirroredwhite.com/">mynt</a>. The source is at <a href="https://github.com/hynek/homepage">GitHub</a>. The design is based on the <a href="http://getspace.org/">Space theme</a>. HTML/<a href="http://jinja.pocoo.org/">Jinja</a> are done in <a href="http://vim.org/">vim</a>, markdown prose in <a href="http://multimarkdown.com/">MultiMarkdown Composer</a> and <a href="http://markedapp.com/">Marked</a>. Statistics are gathered using a self-hosted <a href="http://piwik.org/">Piwik</a>. Comments are <a href="http://mattgemmell.com/2011/11/29/comments-off/">off</a>.</p>
+ <p>These pages are completely static and compiled using <a href="http://mynt.mirroredwhite.com/">mynt</a>. The source is at <a href="https://github.com/hynek/homepage">GitHub</a>. The design is based on the <a href="http://getspace.org/">Space theme</a>. HTML/<a href="http://jinja.pocoo.org/">Jinja</a> are done in <a href="http://vim.org/">vim</a>, markdown prose in <a href="http://multimarkdown.com/">MultiMarkdown Composer</a> and <a href="http://markedapp.com/">Marked</a>. Statistics are gathered using a self-hosted <a href="http://piwik.org/">Piwik</a>. Comments are <a href="http://mattgemmell.com/2011/11/29/comments-off/">off</a>.</p>
</article>
</section>
