Merge pull request #49 from makkoncept/master
import remaining blog posts from the labs repository. #43
kgodey committed Mar 27, 2019
2 parents 919c903 + 81e063c commit 427452e
Showing 276 changed files with 6,566 additions and 0 deletions.
76 changes: 76 additions & 0 deletions content/tech-blog-archives/32-to-64bit-remotely/contents.lr
@@ -0,0 +1,76 @@
title: 32 to 64 bit remotely
---
categories:
Debian
software
---
author: nkinkade
---
body:

A couple of months ago I [posted here](http://labs.creativecommons.org/2008/04/03/varnish-cache-at-cc/) about some of our experiences with [Varnish Cache](http://varnish.projects.linpro.no/) as an HTTP accelerator. By and large I have been very impressed with Varnish. We even found that it had the unexpected benefit of acting as a buffer in front of Apache, preventing Apache from getting overwhelmed with too many slow requests. Apache would get wedged once it had reached its MaxClients limit, whereas Varnish seems to happily queue up thousands of requests even if the backend (Apache) is going slowly.
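
For a sense of what that setup looks like in practice, here is a minimal sketch of a varnishd invocation in front of Apache (the port numbers and storage file are illustrative, not our actual configuration):

varnishd -a :80 -b 127.0.0.1:8080 -s file,/var/lib/varnish/storage.bin,512M

Varnish answers on port 80 and forwards cache misses to the Apache backend, which is moved to listen only on 127.0.0.1:8080; everything beyond that (VCL, storage sizing) is per-site tuning.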

However, after a while we started running into other problems with Varnish, and I found the probable answer in [a bug report](http://varnish.projects.linpro.no/ticket/85) at the Varnish site. It turns out that Varnish was written with a 64 bit system in mind. That isn't to say that it won't work nicely on a 32 bit system, just that you had better not expect high server load, or else you'll start running into resource limitations in a hurry. This left us with two options: move to 64 bit, or ditch Varnish for something like Squid. Seeing as I was loath to do the latter, we decided to go 64 bit, which in any case is another logical step into the 21st century.

The problem was that our servers are co-located in data centers around the country. We didn't want to hassle with reprovisioning all of them. [Asheesh](http://creativecommons.org/about/people/#83) did the first remote conversion based on [some outdated document](http://www.underhanded.org/papers/debian-conversion/remotedeb.html) he found on remotely converting from Red Hat Linux to Debian. It went well and we haven't had a single problem on that converted machine since. Varnish loves 64 bit.

I have now converted two more machines, and this last time I documented the steps I took. I post them here for future reference and in the hope that they may help someone else. Note that these steps are somewhat specific to Debian Linux, but the concepts should be generally applicable to any UNIX-like system. There are no real instructions below, so you just have to infer the method from the steps. See the [aforementioned article](http://www.underhanded.org/papers/debian-conversion/remotedeb.html) for more verbose, though dated, explanations. **BE WARNED** that if you make a mistake and don't have some [lovely rescue method](http://www.serverbeach.com/products/rapid_rescue.php) then you may be forced to call your hosting company to salvage the wreckage:

* [ssh server]
* aptitude install linux-image-amd64
* reboot
* [ssh server]
* sudo su -
* aptitude install debootstrap # if not already installed
* swapoff -a
* sfdisk -l /dev/sda # to determine swap partition, /dev/sda5 in this case
* mke2fs -j /dev/sda5
* mount /dev/sda5 /mnt
* cfdisk /dev/sda # set /dev/sda5 to type 83 (Linux)
* debootstrap --arch amd64 etch /mnt http://http.us.debian.org/debian
* mv /mnt/etc /mnt/etc.LOL
* cp -a /etc /mnt/
* mv /mnt/boot /mnt/boot.LOL
* cp -a /boot /mnt/ # this is really just so that the dpkg post-install hooks don't issue lots of warnings about things not being in /boot that it expects.
* chroot /mnt
* aptitude update
* aptitude dist-upgrade
* aptitude install locales
* dpkg-reconfigure locales # optional (I selected All locales, default UTF-8)
* aptitude install ssh sudo grub vim # and any other things you want
* aptitude install linux-image-amd64
* vi /etc/fstab # change /dev/sda5 to mount on / and comment out old swap entry
* mkdir /home/nkinkade # just so I have a home, not necessary really
* exit # get out of chroot
* vi /boot/grub/menu.lst # change root= of default option from sda6 to sda5
* reboot
* [ssh server]
* sudo su -
* mount /dev/sda6 /mnt
* chroot /mnt
* dpkg --get-selections > ia32_dpkg_selections
* exit
* mv /home /home.LOL
* cp -a /mnt/home /
* mv /root /root.LOL
* cp -a /mnt/root /
* mkdir /mnt/ia32
* mv /mnt/* /mnt/ia32
* mv /mnt/.* /mnt/ia32
* cp -a bin boot dev etc etc.LOL home initrd initrd.img lib lib64 media opt root sbin srv tmp usr var vmlinuz /mnt
* mkdir /mnt/proc /mnt/sys
* vi /mnt/etc/fstab # make /dev/sda6 be mounted on / again, leave swap commented out
* vi /boot/grub/menu.lst # change the default boot option back to root=/dev/sda6
* reboot
* [ssh server]
* sudo su -
* mkswap /dev/sda5
* vi /etc/fstab (uncomment swap line)
* swapon -a
* dpkg --set-selections < /ia32/ia32_dpkg_selections
* apt-get dselect-upgrade # step through all the questions about changed /etc/files, etc.
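
Not part of the original checklist, but a quick sanity check after that final reboot to confirm the machine really is running 64 bit:

* uname -m # should now report x86_64
* dpkg --print-architecture # should report amd64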



---
pub_date: 2008-07-15
@@ -0,0 +1,27 @@
title: 64 bit woes (almost) cleared up
---
categories:
Debian
---
author: nkinkade
---
body:

As I mentioned in a [recent post](http://labs.creativecommons.org/2008/07/15/32-to-64bit-remotely/), we have upgraded our servers to 64 bit. All of them are now running [amd64](http://www.debian.org/ports/amd64/) for [Debian](http://debian.org). The first three servers were upgraded remotely, but we noticed that a few applications were constantly dying due to segmentation faults. There was some speculation that this was a strange consequence of the remote upgrade process, so we upgraded the 4th server by reprovisioning it with [Server Beach](http://serverbeach.com) as a 64 bit system, cleanly installed from scratch.

Well, it turned out that even the cleanly installed 64 bit system was having problems. So I installed the [GNU Debugger](http://www.gnu.org/software/gdb/), which I had never actually used before. I attached it to one of the processes that was having a problem, and what should immediately reveal itself but:

> (gdb) c
> Continuing.
> [New Thread 1090525536 (LWP 16948)]
>
> Program received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 1082132832 (LWP 16865)]
> 0x00002aaaaacfcd91 in tidySetErrorSink () from /usr/lib/libtidy-0.99.so.0
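
(For the record, attaching gdb to an already-running process is just a matter of pointing it at the PID; the PID below is only an example, and "c" tells it to continue until the process hits the fault.)

> $ gdb -p 16865
> (gdb) c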
[Nathan Yergler](http://creativecommons.org/about/people#31) made a few changes to [cc.engine](http://code.creativecommons.org/viewsvn/cc.engine/branches/production/), the application that was having the problem (it is based on [Zope](http://www.zope.org/)), to remove its dependencies on libtidy, and the segfaults ceased. We haven't had the time to debug libtidy itself, but it would seem that there was some incompatibility between the version we had installed and a 64 bit system.

We are still having a problem with [cgit](http://hjemli.net/git/cgit/about/) segfaulting, and that is the next thing to look into ... 1 down, 1 to go.

---
pub_date: 2008-08-02
23 changes: 23 additions & 0 deletions content/tech-blog-archives/a-farewell/contents.lr
@@ -0,0 +1,23 @@
title: A Farewell
---
categories:
Interns
liblicense
summer
---
author: tannewt
---
body:

Well, today is my last day. I've thoroughly enjoyed my time here at CC. Thanks to everyone here at the office and elsewhere who I interacted with this summer. I've met a lot of innovative, hardworking folks and they all deserve big kudos. While I've been here I've created liblicense, which had been conceived for a while by Jon. I think it's at a point where inclusion into content platforms is now the key. We've done three releases, but technically four. None of this would have been possible without the help of Jason Kivlighn. Back at the University of Washington (UW) he and I were roommates, and we will be living in the same house this fall. Jason is a super coder who finished his own Google Summer of Code project about halfway through the summer. Instead of just fizzling out he helped me by revamping parts of liblicense and writing modules to support all of the formats we currently support. I wouldn't have gotten nearly as far without Jason's help. Before I bid farewell and move on, let me recap a bit of what I did.

Well, my primary project was liblicense. Basically, I was handed down the idea of tracking licenses on 'the desktop'. I arrived to find tons of UI mockups created by Rebecca for my implementing pleasure. Before I began implementing the UIs I spent the first three or four weeks conceptualizing and implementing the core liblicense library. The first (and second) release was on July 13. After this release I began focusing on Gnome desktop integration. I really enjoyed seeing a frontend to the hard work I did on the backend. Hopefully, more will be done. At the same time as I did the Gnome integration, Jason mirrored my efforts in KDE4. After the release of the Gnome stuff and library changes on July 30, I moved on to Sugar integration. I've since finished the Sugar integration and the files are available here: http://cctools.svn.sourceforge.net/viewvc/cctools/liblicense-sugar/trunk/ . We'll probably never officially release it due to its exclusivity to the Sugar interface. I'm proud of where liblicense has come. Like I've written before, I think that license awareness is important and liblicense is a small, important step in the right direction.

My other project which I volunteered to help with was the LiveContent liveCD. When I got here Tim was assigned the task of producing the new Creative Commons LiveCD with a target of LinuxWorld (August 8th) as a release date. After a week or so here I offered to technically advise the project. While my intention was to only be a reference I quickly became technically responsible for the CD and its creation. Although this was not what I had bargained for, it was well worth it. To create the CD we utilized livecd-creator, a Fedora LiveCD creator. It was a challenge for me to adapt to a different packaging system (from portage for Gentoo to rpm for Fedora). Additionally, although livecd-creator is a step in the right direction (like liblicense) it did not fulfill all of our needs and resulted in some hacky scripts to build the CD. To counter those frustrations I added an easter egg to the CD which I will not disclose. :-D Having Tim orchestrating the entire project also reduced my stress level quite a bit. However, it is quite nerve-racking creating a LiveCD to be duplicated 1000 times and pushed by both Creative Commons and Fedora. All in all, it turned out great. The next version(s), which I will not be a part of, should be even better.

I'm moving on. While I've enjoyed my time here at CC, I'll not be continuing on these projects. I've got a somewhat short attention span for projects and it has run dry for both LiveContent and liblicense. Over the last few days I've been brain dumping so that Asheesh can pick up where I left off. This move should be good for the projects because Asheesh is a very smart and driven person. You should check out his current project jsWidget. Since I'm done, I'll be driving home to Washington (the state not the district) tomorrow morning. When I get there I'll be enjoying home and working on educational materials for teaching Python in the intro Computer Science course at UW. After that I plan on focusing on my own projects.

Lastly, I'd like to thank everyone here at CC. Alex, our graphic designer, has been a huge help on both liblicense (he wrote the Ruby bindings) and LiveContent (he did the sweet packaging). Nathan Yergler's flexibility allowed me to switch between projects at will and thus keep stress to a minimum. All of the other CC interns, Tim, Rebecca, Cameron and Thierry, made this summer great because we bonded as a group as we worked hard and hung out at various events. Finally, Jon has been an immense help by entertaining my questions, dealing with my grumpiness and encouraging me on both of my projects. As always, there are many others who made this a great experience, but the folks mentioned are those who I worked with day-to-day. Thanks, everyone. I hope that what I started while here at CC sees much use in the future. Cheers.

---
pub_date: 2007-08-24
24 changes: 24 additions & 0 deletions content/tech-blog-archives/a-patchy-web-server/contents.lr
@@ -0,0 +1,24 @@
title: A patchy web server
---
categories:
development
opensource
---
author: asheesh
---
body:

The tech team here is examining how we can better organize our web servers and web sites. If you are a DNS sleuth, you may have noticed that most of our web sites are served from a server called "apps.creativecommons.org." But there are some problems with this setup: for example, some of our sites are lots of little static files (like the images site, i.creativecommons.org), and some serve CPU-intensive web pages (like wiki.creativecommons.org). We can handle many, many more requests per second to i.creativecommons.org than to wiki.creativecommons.org.

In order to prevent Apache from overloading the computer it's running on, we have to limit the maximum number of web pages it can serve at once. If a thousand people at once were requesting our images, that would be no problem, but we couldn't really handle a thousand requests to our wiki at once. So we have to set the web server to the limit that can be handled by the most difficult site - otherwise, when more people start editing our wiki at once than we can handle, service will degrade really terribly for everyone, even the clients just getting images from i.creativecommons.org!
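
As a rough illustration (the number is invented and the Debian config path is assumed), the knob in question on our Apache 2.x prefork setup is MaxClients, and a single value has to cover every vhost:

# the limit lives in the prefork MPM section of the Apache config, e.g.:
#   <IfModule mpm_prefork_module>
#       MaxClients 150
#   </IfModule>
grep -rn MaxClients /etc/apache2/

Whatever value we choose has to be small enough that a burst of wiki requests cannot swamp the box, which makes it far smaller than what the static image vhost could comfortably sustain.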

We knew the above from experience, but we didn't really have data on which web sites ("vhosts") were using up the most server resources. At first I thought I'd turn to Apache's powerful logging system. I wanted to know how much server time was elapsing between the start of a request and when the server was ready to respond to it. Alas, [mod_log_config](http://httpd.apache.org/docs/2.0/mod/mod_log_config.html) only lets you log the total time elapsed from start of request to end of request, which means that large files or users on slow links can distort the picture. A large image file going slowly to a modem user doesn't actually preclude us from sending another few hundred such files at the same time. On the other hand, if a wiki page is taking 5 seconds before it is ready to be sent to the user, those 5 seconds will be even longer if someone else requests another wiki page. Measuring the time from start of request to the beginning of the response seemed a good (albeit rough) way to measure actual server-side resources needed to respond.
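
For reference, what stock mod_log_config can give you is the total time per request, for example (Apache 2.0 syntax; the format name and log path are made up for this sketch):

# in the Apache config:
#   LogFormat "%v %h \"%r\" %>s %T" vhost_timing
#   CustomLog /var/log/apache2/vhost_timing.log vhost_timing

Here %v is the vhost and %T is the request-to-last-byte duration in seconds, which is exactly the figure that slow clients and big files distort.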

I did notice that [mod_headers](http://httpd.apache.org/docs/2.0/mod/mod_headers.html) could send a message to the client telling him exactly what I wanted to record! So I patched Apache to have mod_headers pass its information down to mod_log_config, and then I could log how much time elapsed from the start of the request to the beginning of the response.

Every once in a while, but as recently as this past weekend (at [Super Happy Dev House](http://superhappydevhouse.org/SuperHappyDevHouse19), even!), I hear people say, "Open source is great, but it's not as if anyone actually uses the source to fix problems they encounter." In this case, software freedom was more than theoretical; it meant the ability to make the software tell us things about itself that help us better serve the community we exist to serve.

You all can see (and distribute under the same terms as Apache) the [patch and some quick reporting scripts](http://cctools.svn.sourceforge.net/svnroot/cctools/log_analysis/vhost_effort/) I slapped together. Feel free to email me with any questions!

---
pub_date: 2007-08-13
@@ -0,0 +1,13 @@
title: Adobe XMP Toolkit 4.1.1 -- under BSD
---
categories:
xmp
---
author: ml
---
body:

Adobe's [XMP Toolkit](http://www.adobe.com/devnet/xmp/) is now available under the BSD license!

---
pub_date: 2007-05-13
@@ -0,0 +1,23 @@
title: Asheesh's liblicense interview
---
categories:
---
author: steren
---
body:

Following the [liblicense 0.8 announcement](http://creativecommons.org/weblog/entry/8748), I recently made a video interview of Asheesh about his work on [liblicense](http://wiki.creativecommons.org/Liblicense).

Watch it [here](http://blip.tv/file/1142312/).

In his demo, Asheesh uses liblicense twice:

* in the online photo gallery to read and write metadata
* in the Eye Of Gnome plug-in to read license metadata



This shows that liblicense is now mature enough to be used by your applications.

---
pub_date: 2008-08-12
@@ -0,0 +1,52 @@
title: Asterisk on a $20/mo. Linode, part 2.
---
categories:
---
author: nkinkade
---
body:

I mentioned in my recent post about running Asterisk on a $20/month Linode that I would try to follow up with a review of the steps necessary to actually get it working. This isn't going to be a detailed review, but just a more or less bulleted list of steps to take.

In the previous post I said that I had run into a kernel-package bug ([#508487](http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=508487)) that was preventing me from successfully building a Xen kernel with make-kpkg. So I installed the version from unstable, in which the bug had been fixed. This step may not be necessary at some point:

# vi /etc/apt/sources.list
[change deb-src to point to an unstable repository]
# apt-get update
# mkdir kernel-package && cd kernel-package
# apt-get source kernel-package
# apt-get build-dep kernel-package
# dpkg-buildpackage -rfakeroot -uc -b
# dpkg -i kernel-package_12.025_all.deb

Now to build the kernel.

# cd /usr/src
# apt-get source linux-image-`uname -r`
# cd linux-2.6-2.6.26
# aptitude install linux-patch-debian-2.6.26
# /usr/src/kernel-patches/all/2.6.26/apply/debian -a amd64 -f xen
# make menuconfig
[ _Processor type & features_ -> _Timer frequency_ -> _1000 HZ_ ]
# make-kpkg clean
# make-kpkg --initrd kernel_image
[wait a good while for the kernel to compile]
# mv /lib/modules/2.6.26/kernel/ /root/kernel.old.old
# dpkg -i ../linux-xen0-2.6.26_2.6.26-10.00.Custom_amd64.deb
# update-initramfs -c -k 2.6.26

Now to build the [Zaptel](http://www.voip-info.org/wiki/view/DAHDI) (DAHDI) kernel modules. It would normally be just a few steps, but there were some other problems regarding references to the RTC in zaptel-sources. You can find more information about this at the [voip-info.org wiki](http://www.voip-info.org/wiki/view/Asterisk+timer+ztdummy), most relevantly under the heading " _zaptel and xen-kernel 2.6.26-1-xen-686 in Debian Lenny_ ":

# apt-get install zaptel-source
# cd /usr/src
# vi modules/zaptel/kernel/ztdummy.c
[comment out _#define USE_RTC_ lines]
# m-a prepare
# m-a build zaptel
# m-a install zaptel
# modprobe ztdummy
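
Not from my original notes, but a quick way to confirm the dummy timer module actually loaded:

# lsmod | grep ztdummy
[ztdummy should appear in the module list; Asterisk then has a timing source even with no Zaptel hardware present]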

I think those are the basic steps I took. The actual path was much less clean, as I hit bugs and went back and forth. It's possible I have missed a step or two in there, or that the way I went about things wasn't ideal or even correct. Of course, all of this also presupposes that you have already configured your Linode to boot from a custom local kernel instead of the default Linode kernel. [Instructions](http://library.linode.com/advanced/pv-grub-howto) on how to do this can be found at linode.com.

---
pub_date: 2010-01-20
18 changes: 18 additions & 0 deletions content/tech-blog-archives/atom-license-extension/contents.lr
@@ -0,0 +1,18 @@
title: Atom License Extension
---
categories:
atom
rss
standards
syndication
---
author: ml
---
body:

Thanks to the work of James Snell, the [Atom License Extension has been approved for publishing as an Experimental RFC](http://www.snellspace.com/wp/?p=649).
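
In practice the extension amounts to an entry-level or feed-level link element, roughly `<link rel="license" href="http://creativecommons.org/licenses/by/3.0/" />` (the license URI here is just an example).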

Read about CC license support in RSS 1.0, RSS 2.0, and Atom 1.0 on our [wiki page about syndication](http://wiki.creativecommons.org/Syndication).

---
pub_date: 2007-05-04