From 8fe745092d18f28f4f49038a4246757ddda0e409 Mon Sep 17 00:00:00 2001 From: Gerold Date: Tue, 3 Jul 2012 18:43:02 +0800 Subject: [PATCH] completed formatting and comments:true for all 2008 posts --- ...-subversion-on-centos-with-cpanel.markdown | 8 +- ...install-wordpress-with-subversion.markdown | 3 + source/_posts/2008-06-12-down-time.markdown | 24 +- ...host-sign-of-strenght-or-weakness.markdown | 28 +-- ...-mcrypt-module-via-dynamic-module.markdown | 1 + ...on-centos-rhel-system-with-cpanel.markdown | 163 ++++++++------ ...g-aes-encryption-in-the-front-end.markdown | 60 ++--- .../2008-06-25-centos-52-upgrade.markdown | 1 + ...ere-is-no-usernamehost-registered.markdown | 12 +- .../2008-07-04-the-website-is-down.markdown | 1 + ...-07-account-do-not-show-up-in-whm.markdown | 10 +- ...-apache-wont-start-with-error-965.markdown | 11 +- ...the-mysql-replication-master-host.markdown | 30 +-- .../2008-07-25-happy-sysadmin-day.markdown | 1 + ...esktop-in-gnome-222-fedora-core-9.markdown | 7 +- ...l-filebrowser-configuration-issue.markdown | 6 +- ...peedcabling-system-admin-olympics.markdown | 4 +- ...re-after-apache-rebuild-in-cpanel.markdown | 1 + ...using-the-slow-query-log-in-mysql.markdown | 23 +- ...-when-mysql-starts-counting-sheep.markdown | 15 +- ...nagios-plugin-check_hparray-error.markdown | 40 ++-- ...g-all-the-scripts-in-etccrondaily.markdown | 5 +- ...-transfer-to-cost-of-video-played.markdown | 62 +---- ...lding-the-mysql-replication-slave.markdown | 9 +- ...14-pci-dss-compliance-for-dummies.markdown | 1 + ...tion-on-linux-server-with-ioncube.markdown | 56 ++--- ...i-virus-software-alone-not-enough.markdown | 211 ++---------------- ...-usb-mass-storage-buffer-io-error.markdown | 12 +- ...-10-23-homegrown-mysql-monitoring.markdown | 21 +- ...-dos-attacks-becoming-more-common.markdown | 3 +- ...isp-stop-a-40-gigabit-ddos-attack.markdown | 3 +- ...2008-11-17-server-and-backup-woes.markdown | 17 +- ...ck-and-the-defense-in-depth-layer.markdown | 35 +-- ...custom-configuration-to-httpdconf.markdown | 76 ++++--- .../2008-11-25-upgrading-to-trac-011.markdown | 1 + .../_posts/2008-12-01-black-monday.markdown | 35 +-- ...lements-source-port-randomization.markdown | 1 + ...-on-black-monday-and-black-friday.markdown | 2 +- ...08-12-09-vsftpd-logging-timestamp.markdown | 1 + ...timization-for-network-throughput.markdown | 11 +- ...change-the-timezone-on-rhelcentos.markdown | 1 + 41 files changed, 375 insertions(+), 637 deletions(-) diff --git a/source/_posts/2008-06-04-install-subversion-on-centos-with-cpanel.markdown b/source/_posts/2008-06-04-install-subversion-on-centos-with-cpanel.markdown index 5a44ba1..cb8ff07 100644 --- a/source/_posts/2008-06-04-install-subversion-on-centos-with-cpanel.markdown +++ b/source/_posts/2008-06-04-install-subversion-on-centos-with-cpanel.markdown @@ -1,4 +1,5 @@ --- +comments: true published: true date: '2008-06-04 05:38:20' layout: post @@ -13,13 +14,14 @@ author: gerold-mercadero To install Subversion (SVN) login to your server (shell) and execute: +``` yum install subversion +``` If `perl-URI` is not installed edit `/etc/yum.conf` and remove `perl*` then execute: +``` yum install perl-URI +``` Restore `perl*` on `/etc/yum.conf`. 
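For context, the `perl*` entry lives on the `exclude=` line that cPanel typically keeps in `/etc/yum.conf`; the exact package list varies per install, but the round trip looks roughly like this (a sketch, not the literal line on your server):

```
grep ^exclude /etc/yum.conf
# exclude=courier* exim* httpd* mod_ssl* mysql* perl* ruby*   (example output)
# temporarily delete perl* from that line, then:
yum install perl-URI
# finally add perl* back to the exclude line
```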
- - -
diff --git a/source/_posts/2008-06-04-install-wordpress-with-subversion.markdown b/source/_posts/2008-06-04-install-wordpress-with-subversion.markdown
index 42550dc..d8311c2 100644
--- a/source/_posts/2008-06-04-install-wordpress-with-subversion.markdown
+++ b/source/_posts/2008-06-04-install-wordpress-with-subversion.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
date: '2008-06-04 05:11:20'
layout: post
slug: install-wordpress-with-subversion
@@ -13,6 +14,8 @@ author: gerold-mercadero

To install Wordpress with Subversion you need shell access to your server and Subversion installed. To start, log in to your server (shell), go to the directory where you want to install Wordpress, and execute this command:

+```
svn co http://svn.automattic.com/wordpress/trunk/ .
+```

Copy `wp-config-sample.php` to `wp-config.php`. Edit `wp-config.php` and enter your database details. Access your blog URL to complete the installation.
diff --git a/source/_posts/2008-06-12-down-time.markdown b/source/_posts/2008-06-12-down-time.markdown
index 41d3b3f..5ef140f 100644
--- a/source/_posts/2008-06-12-down-time.markdown
+++ b/source/_posts/2008-06-12-down-time.markdown
@@ -1,4 +1,5 @@
---
+comments: true
date: '2008-06-12 14:29:26'
layout: post
slug: down-time
@@ -14,32 +15,23 @@ published: true
author: andrew-kucharski
---

-Any host or system admin hates the words "its not working" or "its down". When I see a big site take a hit or is town there are two reactions that I generally have:
-
- * Site Down - those poor bastards (system admins) their life must suck right now
-
- * Site Down - SEE!!! It happens to them too! No one is perfect! No one is immune!
+Any host or system admin hates the words *it's not working* or *it's down*. When I see a big site take a hit or go down there are two reactions that I generally have:

+* Site Down - those poor bastards (system admins), their lives must suck right now
+* Site Down - SEE!!! It happens to them too! No one is perfect! No one is immune!

Here is a story that elicited these feelings in today's Slashdot stories:

-
->
-`+--------------------------------------------------------------------+
+```
++--------------------------------------------------------------------+
| US Amazon.com Website Down For Over 1 Hour |
| from the there-goes-the-bottom-line dept. |
| posted by ScuttleMonkey on Friday June 06, @15:10 (The Internet) |
| [http://tech.slashdot.org/article.pl?sid=08/06/06/199211 ](http://tech.slashdot.org/comments.pl?sid=08/06/06/199211) |
+--------------------------------------------------------------------+
+```
-
-CorporalKlinger writes "CNET News is reporting that Amazon's US website, Amazon.com, has been unreachable since 10:30 AM PDT today. As of posting, visiting www.amazon.com produces an 'Http/1.1 Service Unavailable' message. According to CNET, "Based on last quarter's revenue of $4.13 billion, a full-scale global outage would cost Amazon more than [0]$31,000 per minute on average." Some of Amazon's international websites still appear to be working, and some pages on the US Amazon.com site load if accessed using HTTPS instead of HTTP."
-
-Discuss this story at:
- [http://tech.slashdot.org/comments.pl?sid=08/06/06/199211](http://tech.slashdot.org/comments.pl?sid=08/06/06/199211) `
+CorporalKlinger writes *CNET News is reporting that Amazon's US website, Amazon.com, has been unreachable since 10:30 AM PDT today. As of posting, visiting www.amazon.com produces an 'Http/1.1 Service Unavailable' message.
According to CNET, "Based on last quarter's revenue of $4.13 billion, a full-scale global outage would cost Amazon more than $31,000 per minute on average." Some of Amazon's international websites still appear to be working, and some pages on the US Amazon.com site load if accessed using HTTPS instead of HTTP.*
+Discuss this story at: [http://tech.slashdot.org/comments.pl?sid=08/06/06/199211](http://tech.slashdot.org/comments.pl?sid=08/06/06/199211)
diff --git a/source/_posts/2008-06-13-utilizing-google-apps-email-as-a-web-host-sign-of-strenght-or-weakness.markdown b/source/_posts/2008-06-13-utilizing-google-apps-email-as-a-web-host-sign-of-strenght-or-weakness.markdown
index 715f981..6677f4d 100644
--- a/source/_posts/2008-06-13-utilizing-google-apps-email-as-a-web-host-sign-of-strenght-or-weakness.markdown
+++ b/source/_posts/2008-06-13-utilizing-google-apps-email-as-a-web-host-sign-of-strenght-or-weakness.markdown
@@ -14,44 +14,24 @@ tags:
- hosting
author: andrew-kucharski
published: true
+comments: true
---

PrometHost offers best-of-class hosting for websites, databases, and other services such as email, domain registration, spam filtering, etc. We have recently made a decision to focus on three things for our hosting offering:

-- security
-- reliability
-- scalability
-
+* security
+* reliability
+* scalability

Performance is obviously important, and we help with performance on a case-by-case basis.

As with most businesses, we use third party tools and solutions to put together our offering. The value that we bring with our hosting offering depends on the processes and decisions along with the quality of implementation. We make decisions to use a reliable Data Center and provider of primary and secondary power and IP. We make decisions on hardware choices (HP, CISCO, etc), software platforms (RedHat, cPanel, etc.) and software services (srsPlus, Netcleanse, etc). In terms of email – we chose an email server, spam service and configuration options. With the constant battle against spam bots, phishing scams, and other more serious threats to data security, we have decided to look for the best-in-class email service to offer to our clients so that we can focus on our core offerings.

We have looked at a number of options: installing network filtering solutions such as [Barracuda](http://www.barracudanetworks.com/ns/?L=en) and [Sonicwall](http://www.sonicwall.com/us/6887.html?CMP=KNC-8WO858768998&_kk=barracuda&_kt=16f2ca5f-e9e2-4f4c-91d3-476e6b9850cd), beefing up our Netcleanse solution, and using on-server open source solutions such as SpamAssassin. However, no option stood out as well as Google Apps.

[Google Apps](http://www.google.com/a/help/intl/en/var_1c.html) is not only seamless to integrate, both on the back end and for the customer, but it also comes with an impressive set of tools (Calendar, Docs, etc.) which are extremely valuable to anyone running a business, virtual or not.

We made this decision several months before this Slashdot story appeared: ["Large webhost urges customers to use Gmail."](http://tech.slashdot.org/article.pl?sid=08/05/27/137229&from=rss)

Although there were some negative comments, I think the Slashdot community understood and supported the decision.
We stand by it and will always offer our customers an alternative to Gmail, but we will recommend it as it is the best-of-class option at the moment.

The decision was not easy as we really like to manage as many things as we can, and "outsourcing" this seemed like giving up on it, but business-wise this was one of the best decisions we made. Fighting spam is frustrating, and if there is a better free solution out there, most of your customers will not notice a difference whether they use POP or IMAP, except maybe less spam.
diff --git a/source/_posts/2008-06-14-installing-php-mcrypt-module-via-dynamic-module.markdown b/source/_posts/2008-06-14-installing-php-mcrypt-module-via-dynamic-module.markdown
index 06d5ade..9c13351 100644
--- a/source/_posts/2008-06-14-installing-php-mcrypt-module-via-dynamic-module.markdown
+++ b/source/_posts/2008-06-14-installing-php-mcrypt-module-via-dynamic-module.markdown
@@ -13,6 +13,7 @@ tags:
- php
author: marius-ducea
published: true
+comments: true
---

Considering that you have an existing php installation compiled from sources, it is very easy to add _new php modules_ afterwards, outside the php binary. This can be achieved without any interruption of the web services, and requires just the following steps:
diff --git a/source/_posts/2008-06-18-install-ffmpeg-ffmpeg-php-and-audio-binaries-on-centos-rhel-system-with-cpanel.markdown b/source/_posts/2008-06-18-install-ffmpeg-ffmpeg-php-and-audio-binaries-on-centos-rhel-system-with-cpanel.markdown
index c435ce8..e790e28 100644
--- a/source/_posts/2008-06-18-install-ffmpeg-ffmpeg-php-and-audio-binaries-on-centos-rhel-system-with-cpanel.markdown
+++ b/source/_posts/2008-06-18-install-ffmpeg-ffmpeg-php-and-audio-binaries-on-centos-rhel-system-with-cpanel.markdown
@@ -1,4 +1,5 @@
---
+comments: true
date: '2008-06-18 05:08:29'
layout: post
slug: install-ffmpeg-ffmpeg-php-and-audio-binaries-on-centos-rhel-system-with-cpanel
@@ -16,140 +17,158 @@ published: true

FFMPEG is an open source application that allows you to convert video and audio files easily between a variety of different formats. It supports most industry-standard codecs and can convert from one file format to another quickly and easily. This guide is intended for the installation of ffmpeg, ffmpeg-php, mplayer, mencoder, the lame mp3 encoder, flvtool2, libVorbis, and libogg, and was tested on CentOS5 and RHEL3 systems with Cpanel.
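Just to show what all of this buys you: once the stack below is in place, a basic conversion is a one-liner (file names here are hypothetical; FLV only accepts certain audio sample rates, hence the explicit flag):

```
# transcode an AVI into a web-friendly FLV
ffmpeg -i clip.avi -ar 22050 clip.flv
```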
**To start:**
-
-`cd /usr/src`
+```
+cd /usr/src
+```

**Download MPlayer and Codecs, FlvTool2, Lame, and ffmpeg-php:**

**MPlayer codecs** ([check for latest release](http://www3.mplayerhq.hu/MPlayer/releases/codecs/)):
-
-`wget http://www3.mplayerhq.hu/MPlayer/releases/codecs/essential-20071007.tar.bz2`
-
+```
+wget http://www3.mplayerhq.hu/MPlayer/releases/codecs/essential-20071007.tar.bz2
+```
**MPlayer** ([check for latest release](http://www3.mplayerhq.hu/MPlayer/releases/)):
-
-`wget http://www4.mplayerhq.hu/MPlayer/releases/MPlayer-1.0rc2.tar.bz2`
-
+```
+wget http://www4.mplayerhq.hu/MPlayer/releases/MPlayer-1.0rc2.tar.bz2
+```
**FlvTool2** ([check for latest release](http://rubyforge.org/frs/?group_id=1096)):
-
-`wget http://rubyforge.org/frs/download.php/17497/flvtool2-1.0.6.tgz`
-
+```
+wget http://rubyforge.org/frs/download.php/17497/flvtool2-1.0.6.tgz
+```
**Lame MP3 Encoder** ([check for latest release](http://sourceforge.net/projects/lame/)):
-
-`wget http://easynews.dl.sourceforge.net/sourceforge/lame/lame-3.97.tar.gz`
-
+```
+wget http://easynews.dl.sourceforge.net/sourceforge/lame/lame-3.97.tar.gz
+```
**FFMPEG-PHP** ([check for latest release](http://sourceforge.net/projects/ffmpeg-php/)):
-
-`wget http://nchc.dl.sourceforge.net/sourceforge/ffmpeg-php/ffmpeg-php-0.5.3.1.tbz2<`
-
+```
+wget http://nchc.dl.sourceforge.net/sourceforge/ffmpeg-php/ffmpeg-php-0.5.3.1.tbz2
+```

**Extract downloaded files:**
-
-`tar -zxvf flvtool2-1.0.6.tgz
+```
+tar -zxvf flvtool2-1.0.6.tgz
tar -zxvf lame-3.97.tar.gz
bunzip2 essential-20071007.tar.bz2; tar xvf essential-20071007.tar
bunzip2 MPlayer-1.0rc2.tar.bz2; tar xvf MPlayer-1.0rc2.tar
bunzip2 ffmpeg-php-0.5.3.1.tbz2; tar xvf ffmpeg-php-0.5.3.1.tar
-`
+```

**Create and populate the Codecs directory:**
-
-`mkdir /usr/local/lib/codecs/
+```
+mkdir /usr/local/lib/codecs/
mv essential-20071007/* /usr/local/lib/codecs/
chmod -Rf 755 /usr/local/lib/codecs/
-`
+```

**Install Subversion and Ruby using Yum:**
-
-`yum install subversion` (You may need to open SVN port 3690)
-`yum install ruby` (If you’re on cPanel you can alternatively use _/scripts/installruby_)
-`yum install ncurses-devel`
+```
+yum install subversion
+```
+_You may need to open SVN port 3690_
+```
+yum install ruby
+```
+_If you’re on cPanel you can alternatively use `/scripts/installruby`_
+```
+yum install ncurses-devel
+```

**Get FFMPEG and MPlayer from SVN:**
-
-`svn checkout svn://svn.mplayerhq.hu/ffmpeg/trunk ffmpeg
-svn checkout svn://svn.mplayerhq.hu/mplayer/trunk mplayer`
-
+```
+svn checkout svn://svn.mplayerhq.hu/ffmpeg/trunk ffmpeg
+svn checkout svn://svn.mplayerhq.hu/mplayer/trunk mplayer
+```
**Install LAME:**
-
-`cd /usr/src/lame-3.97
+```
+cd /usr/src/lame-3.97
./configure && make && make install
-`
+```

**Install libOgg and libVorbis:**
-
-`yum install libogg.i386 libvorbis.i386 libvorbis-devel.i386`
+```
+yum install libogg.i386 libvorbis.i386 libvorbis-devel.i386
+```

**Install flvtool2:**
-
-`cd /usr/src/flvtool2-1.0.6/
+```
+cd /usr/src/flvtool2-1.0.6/
ruby setup.rb config
ruby setup.rb setup
ruby setup.rb install
-`
+```

**Install MPlayer:**
-
-`cd /usr/src/MPlayer-1.0rc2
-./configure && make && make install`
-
+```
+cd /usr/src/MPlayer-1.0rc2
+./configure && make && make install
+```
**Install ffMPEG:**
-
-`cd /usr/src/ffmpeg/
+```
+cd /usr/src/ffmpeg/
mkdir tmp
chmod 777 tmp
export TMPDIR=./tmp
./configure --enable-libmp3lame --enable-libvorbis --disable-mmx --enable-shared
-_ echo '#define HAVE_LRINTF 1' >> config.h
+echo '#define HAVE_LRINTF 1' >> config.h
make && make install
export TMPDIR=/tmp
-`
+```

**Finalize the codec setup:**
-`ln -s /usr/local/lib/libavformat.so.50 /usr/lib/libavformat.so.50
+```
+ln -s /usr/local/lib/libavformat.so.50 /usr/lib/libavformat.so.50
ln -s /usr/local/lib/libavcodec.so.51 /usr/lib/libavcodec.so.51
ln -s /usr/local/lib/libavutil.so.49 /usr/lib/libavutil.so.49
ln -s /usr/local/lib/libmp3lame.so.0 /usr/lib/libmp3lame.so.0
ln -s /usr/local/lib/libavformat.so.51 /usr/lib/libavformat.so.51
-`
+```

**You may get an error about a library path not being found, if so, run:**
-`export LD_LIBRARY_PATH=/usr/local/lib`
+```
+export LD_LIBRARY_PATH=/usr/local/lib
+```

-_**Install FFMPEG-PHP:**_
+**Install FFMPEG-PHP:**
-`cd /usr/src/ffmpeg-php-0.5.0/
+```
+cd /usr/src/ffmpeg-php-0.5.3.1/
phpize
./configure && make && make install
ln -s /usr/local/bin/ffmpeg /usr/bin/ffmpeg
ln -s /usr/local/bin/mplayer /usr/bin/mplayer
-`
+```

**Add the extension to php.ini (find the correct php.ini file):**
-`[ffmpeg]
+```
+[ffmpeg]
extension_dir=/usr/local/lib/php/extensions/no-debug-non-zts-20060613/
extension=ffmpeg.so
-`
+```

-_**Restart Apache and check that the module is loaded in PHP:**_
+**Restart Apache and check that the module is loaded in PHP:**
-`/etc/init.d/httpd restart`
+```
+/etc/init.d/httpd restart
+```

-_**Test** _ffmpeg_ **from command line and if you get this errors: **_
+**Test** `ffmpeg` **from the command line, and if you get this error:**
+```
+ffmpeg: error while loading shared libraries: libavformat.so.51:...
+```

-`ffmpeg: error while loading shared libraries: libavformat.so.51:...`

**execute:** `echo "/usr/local/lib" >>/etc/ld.so.conf ` **and reload the library cache with** `ldconfig -v`

**Verify the ffmpeg installation:**
+```
+php -r 'phpinfo();' | grep ffmpeg
+```

**If you get the following results then FFMPEG and all its components are installed correctly:**
+```
+ffmpeg
ffmpeg support (ffmpeg-php) => enabled
ffmpeg-php version => 0.5.3.1
ffmpeg-php gd support => enabled
ffmpeg.allow_persistent => 0 => 0
-ffmpeg.show_warnings => 0 => 0`
+ffmpeg.show_warnings => 0 => 0
+```
diff --git a/source/_posts/2008-06-20-implementing-aes-encryption-in-the-front-end.markdown b/source/_posts/2008-06-20-implementing-aes-encryption-in-the-front-end.markdown
index beeec9c..a30a1c5 100644
--- a/source/_posts/2008-06-20-implementing-aes-encryption-in-the-front-end.markdown
+++ b/source/_posts/2008-06-20-implementing-aes-encryption-in-the-front-end.markdown
@@ -1,6 +1,7 @@
---
author: pim-van-der-wal
published: true
+comments: true
date: '2008-06-20 08:24:56'
layout: post
slug: implementing-aes-encryption-in-the-front-end
@@ -22,58 +23,37 @@ This post describes a way to implement data encryption in the front-end of an ap

First off we need to decide what type of encryption to use. [AES](http://en.wikipedia.org/wiki/Advanced_Encryption_Standard) is an encryption standard that has been endorsed by the NSA and is generally available for most programming languages. It is actually the same as the [Rijndael](http://en.wikipedia.org/wiki/Rijndael) encryption scheme but with a couple of limitations.
Rijndael describes a two-way encryption scheme which allows us to encrypt and decrypt data using an encryption key and a random generator. The random generator is used to make sure that when we're encrypting two identical values the resulting encrypted values will be different. Since we're encrypting credit card data like the expiration date this is actually a very desirable feature. The random value is called the [Initialization Vector](http://en.wikipedia.org/wiki/Initialization_vector) (I.V.) and will be stored with the encrypted data. To successfully encrypt and decrypt data we will need information stored in 3 different locations: - - - - 1. The encryption key which is stored on the web server in a plain text file - - - 2. The encryption key hash function which is stored in the front-end code - - - 3. The encrypted data and the I.V. which are both stored in the database - +1. The encryption key which is stored on the web server in a plain text file +2. The encryption key hash function which is stored in the front-end code +3. The encrypted data and the I.V. which are both stored in the database Both the encryption key and the hash function are needed to encrypt data. To decrypt data the I.V. and the encrypted data are also needed of course. If any of these items is not available decryption can not take place. The hash function for the encryption key is used to make sure that the key cannot be compromised by just retrieving the key file. The key in the key file needs to be encrypted with an [MD5](http://en.wikipedia.org/wiki/Md5) based function to be useful. For the second part I will describe how to encrypt and decrypt data using this method in PHP. The main concern here is to use only functions and parameters that are compatible with the Java front-end. As it turns out some PHP functions are not quite as generic as they appear at first glance. The [crypt()](http://us.php.net/manual/en/function.crypt.php) function is a good example of this. Although the crypt() function has MD5 functionality it is by no means generic. The value is encrypted several times and a random value (a salt) is inserted in several locations. With these variations being buried in the PHP source code it is hard to match the same encryption in Java. On the other hand the [md5()](http://us.php.net/manual/en/function.md5.php) function does provide a straightforward implementation of MD5 encryption. It is advisable to apply MD5 several times and insert a hard coded value into the string. This is similar to what the crypt() function does but by doing it ourselves we can make sure of compatibility between the platforms as well as add some security by making it custom. After encryption the key is ready for use and it is time to encrypt or decrypt the data. PHP comes with the [mcrypt](http://us.php.net/mcrypt) library that supports Rijndael encryption with I.V's (installation of the mcrypt library is described in a previous post). The limitation of using the mcrypt library is that the length of the encryption key and the length of the I.V. must be identical. Furthermore there is no way to specify what method of padding to use. When data is encrypted it will always result in blocks of data of the same size. If the data does not fill up the block completely it is padded with empty data. The default padding method is [TBC](http://www.gnu.org/software/gnu-crypto/manual/api/gnu/crypto/pad/TBC.html). 
The final code for encrypting a string will look something like this:
-
-    $iv = mcrypt_create_iv(mcrypt_get_iv_size(MCRYPT_RIJNDAEL_256,
-    MCRYPT_MODE_CBC), MCRYPT_RAND);
-    $encrypted_string = mcrypt_encrypt(MCRYPT_RIJNDAEL_256,
-    $key, $input_string, MCRYPT_MODE_CBC, $iv);
-
+```
+$iv = mcrypt_create_iv(mcrypt_get_iv_size(MCRYPT_RIJNDAEL_256, MCRYPT_MODE_CBC), MCRYPT_RAND);
+$encrypted_string = mcrypt_encrypt(MCRYPT_RIJNDAEL_256, $key, $input_string, MCRYPT_MODE_CBC, $iv);
+```

The final part of this post will deal with how to implement the same encryption in Java. The first problem we run into is trying to implement this with the standard Java encryption extensions. First of all, the 256-bit version of Rijndael is not supported without the Unlimited Strength Jurisdiction Policy files. These can be downloaded from the main [JDK download page](http://java.sun.com/javase/downloads/index.jsp) at the Sun web site. But even with these files installed we still have some problems, like no support for initialization vectors. This means we have to turn to alternative libraries like the [gnu-crypto library](http://www.gnu.org/software/gnu-crypto). This library contains everything we need for compatibility. It allows for the use of initialization vectors, TBC padding and different cypher block modes. We have not touched upon the use of [cypher block modes](http://en.wikipedia.org/wiki/Block_cipher_modes_of_operation) but they define how blocks of data are encrypted. Which mode to use is largely dependent on the type of data that will be encrypted and it's definitely outside the scope of this post to delve into that. Originally we started using ECB mode with the PHP front-end but found that that mode is incompatible with the gnu-crypto implementation in Java. Instead we now use the CBC mode. The Java code is limited to the use of the encryption function but will need to use the IPad class afterwards to add the correct padding.
-
-    IMode mode = ModeFactory.getInstance("CBC", "Rijndael", 32);
-    Map attributes = new HashMap();
-    attributes.put(IMode.KEY_MATERIAL, key);
-    attributes.put(IMode.CIPHER_BLOCK_SIZE, new Integer(32));
-    attributes.put(IMode.STATE, new Integer(IMode.ENCRYPTION));
-    attributes.put(IMode.IV, iv);
-    mode.init(attributes);
-
+```
+IMode mode = ModeFactory.getInstance("CBC", "Rijndael", 32);
+Map attributes = new HashMap();
+attributes.put(IMode.KEY_MATERIAL, key);
+attributes.put(IMode.CIPHER_BLOCK_SIZE, new Integer(32));
+attributes.put(IMode.STATE, new Integer(IMode.ENCRYPTION));
+attributes.put(IMode.IV, iv);
+mode.init(attributes);
+```

Some final tips:
-
- * Insert a special string with each encrypted string to make it possible to determine whether a string has been encrypted or not.
-
- * Calculate how big the data will be based on the block sizes and adjust the database columns accordingly. Using 256 bits encryption is very secure but it results in a large block size which is very inefficient for short data.
-
+* Insert a special string with each encrypted string to make it possible to determine whether a string has been encrypted or not.
+* Calculate how big the data will be based on the block sizes and adjust the database columns accordingly. Using 256-bit encryption is very secure but it results in a large block size which is very inefficient for short data.
+* If you're upgrading your application to use encryption provide the developers with some easy functions to insert and upgrade the encrypted data. diff --git a/source/_posts/2008-06-25-centos-52-upgrade.markdown b/source/_posts/2008-06-25-centos-52-upgrade.markdown index 29c3e60..5f274d4 100644 --- a/source/_posts/2008-06-25-centos-52-upgrade.markdown +++ b/source/_posts/2008-06-25-centos-52-upgrade.markdown @@ -1,5 +1,6 @@ --- published: true +comments: true author: marius-ducea date: '2008-06-25 15:46:57' layout: post diff --git a/source/_posts/2008-06-27-mysql-error-1449-there-is-no-usernamehost-registered.markdown b/source/_posts/2008-06-27-mysql-error-1449-there-is-no-usernamehost-registered.markdown index be956db..5323900 100644 --- a/source/_posts/2008-06-27-mysql-error-1449-there-is-no-usernamehost-registered.markdown +++ b/source/_posts/2008-06-27-mysql-error-1449-there-is-no-usernamehost-registered.markdown @@ -1,6 +1,7 @@ --- author: pim-van-der-wal published: true +comments: true date: '2008-06-27 07:53:31' layout: post slug: mysql-error-1449-there-is-no-usernamehost-registered @@ -18,11 +19,12 @@ tags: The error message in the title occurs in a combination of circumstances. If you have 2 MySQL databases, one master and one slave in a replicating setup and you use triggers you may encounter this error. Although it is not necessary for the slave database to have the same users as the master for replication to work it is required for triggers to work. The triggers get fired at both the master and the slave and use the username of the user that caused the trigger to execute. So if user user@localhost is present at the master then the same user needs to be added to the slave. The privileges don't have to be the same as long as the user has enough access on the slave to complete the trigger. The biggest problem with this is that it causes replication to stop dead in its tracks. After adding the user to the slave database the replication needs to be restarted with the following MySQL command: - -`START SLAVE;` - +``` +START SLAVE; +``` After that check for errors with the following MySQL command: - -`SHOW SLAVE STATUS\G` +``` +SHOW SLAVE STATUS +``` Depending on how long replication has been suspended it may take some time for the slave to catch up. This will not be shown in the replication status. 
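To make that concrete, here is a minimal sketch of the fix (the user, password, and database names are made up; use whatever account actually exists on the master):

```
-- on the slave: recreate the account that the trigger runs as on the master
CREATE USER 'appuser'@'localhost' IDENTIFIED BY 'secret';
-- the privileges only need to cover what the trigger body touches
GRANT SELECT, INSERT, UPDATE ON appdb.* TO 'appuser'@'localhost';
-- then restart replication and verify
START SLAVE;
SHOW SLAVE STATUS;
```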
diff --git a/source/_posts/2008-07-04-the-website-is-down.markdown b/source/_posts/2008-07-04-the-website-is-down.markdown
index bd0887f..5ca5a43 100644
--- a/source/_posts/2008-07-04-the-website-is-down.markdown
+++ b/source/_posts/2008-07-04-the-website-is-down.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: marius-ducea
date: '2008-07-04 10:02:43'
layout: post
diff --git a/source/_posts/2008-07-07-account-do-not-show-up-in-whm.markdown b/source/_posts/2008-07-07-account-do-not-show-up-in-whm.markdown
index 9cb1762..c2637b7 100644
--- a/source/_posts/2008-07-07-account-do-not-show-up-in-whm.markdown
+++ b/source/_posts/2008-07-07-account-do-not-show-up-in-whm.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: max-veprinsky
date: '2008-07-07 06:58:48'
layout: post
@@ -14,11 +15,8 @@ tags:
- whm
---

-**Issue: Hosting account do not show up in WHM
-**
+**Issue: Hosting accounts do not show up in WHM**

-Cause: /etc/trueuserdomains file which holds a list of account on the server is empty
-Solution: restore contents of /etc/trueuserdomains and update cpanel with /scripts/upcp
+*Cause*: the `/etc/trueuserdomains` file, which holds the list of accounts on the server, is empty

- - linuxsysadminblog.com: linsysad
+*Solution*: restore the contents of `/etc/trueuserdomains` and update cPanel with `/scripts/upcp`
diff --git a/source/_posts/2008-07-07-apache-wont-start-with-error-965.markdown b/source/_posts/2008-07-07-apache-wont-start-with-error-965.markdown
index bd7bb93..538eeac 100644
--- a/source/_posts/2008-07-07-apache-wont-start-with-error-965.markdown
+++ b/source/_posts/2008-07-07-apache-wont-start-with-error-965.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: max-veprinsky
date: '2008-07-07 07:04:42'
layout: post
@@ -13,12 +14,12 @@ tags:
- apace
---

-**Issue: Apache does not start, gives the following error
-**
+**Issue: Apache does not start, gives the following error**

+```
965 File size limit exceeded$HTTPD -DSSL
+```
-Cause: Apache Log or Domain Log file exceeded 2GB
-Solution: Rotate offending log file by tar/gzip'ing
+*Cause*: an Apache log or domain log file exceeded 2GB

+*Solution*: rotate the offending log file by tar/gzip'ing it
diff --git a/source/_posts/2008-07-24-changing-the-mysql-replication-master-host.markdown b/source/_posts/2008-07-24-changing-the-mysql-replication-master-host.markdown
index 1e195d4..1bac550 100644
--- a/source/_posts/2008-07-24-changing-the-mysql-replication-master-host.markdown
+++ b/source/_posts/2008-07-24-changing-the-mysql-replication-master-host.markdown
@@ -1,6 +1,7 @@
---
author: pim-van-der-wal
published: true
+comments: true
date: '2008-07-24 10:09:22'
layout: post
slug: changing-the-mysql-replication-master-host
@@ -17,23 +18,24 @@ tags:

Here's a little post on how to change the master database host that the replication slave in a MySQL replication setup uses. This can happen if there is a change in network addresses or when you want to switch over to using a different internal network. The main thing to keep in mind is that when you change the replication user host on the slave, MySQL will think that you are now replicating from a different database. Therefore we need to force MySQL to continue replicating the changes where it left off before the change.
First we need to create a new user on the master database server with the new IP address like this:
-
-`GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.0.1' IDENTIFIED BY 'password';`
-
+```
+GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.0.1' IDENTIFIED BY 'password';
+```
The rest of the operations are on the replication slave. We need to stop the replication and figure out where to restart the replication after changing the user:
-
-`STOP SLAVE;
-SHOW SLAVE STATUS\G`
+```
+STOP SLAVE;
+SHOW SLAVE STATUS;
+```
From the slave status pick out the 2 settings for Master_Log_File and Read_Master_Log_Pos. Those items will look something like this:
-
-`Master_Log_File: mysql-bin.000101
+```
+Master_Log_File: mysql-bin.000101
Read_Master_Log_Pos: 591523680
-`
+```
Those settings need to be used in the following statement:
+```
+CHANGE MASTER TO MASTER_HOST='10.0.0.1', MASTER_LOG_FILE='mysql-bin.000101', MASTER_LOG_POS=591523680;
+START SLAVE;
+```

-`CHANGE MASTER TO MASTER_HOST='10.0.0.1', MASTER_LOG_FILE='mysql-bin.000101', MASTER_LOG_POS=591523680;
-START SLAVE;`
-
-The "start slave" command starts the replication again. Make sure you do a "show slave status" afterwards to make sure that replication is running again without errors.
+The `start slave` command starts the replication again. Make sure you do a `show slave status` afterwards to make sure that replication is running again without errors.
diff --git a/source/_posts/2008-07-25-happy-sysadmin-day.markdown b/source/_posts/2008-07-25-happy-sysadmin-day.markdown
index 5f71c64..3fadac3 100644
--- a/source/_posts/2008-07-25-happy-sysadmin-day.markdown
+++ b/source/_posts/2008-07-25-happy-sysadmin-day.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: marius-ducea
date: '2008-07-25 15:26:37'
layout: post
diff --git a/source/_posts/2008-07-29-virtual-desktop-in-gnome-222-fedora-core-9.markdown b/source/_posts/2008-07-29-virtual-desktop-in-gnome-222-fedora-core-9.markdown
index 35f19cf..67af323 100644
--- a/source/_posts/2008-07-29-virtual-desktop-in-gnome-222-fedora-core-9.markdown
+++ b/source/_posts/2008-07-29-virtual-desktop-in-gnome-222-fedora-core-9.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: max-veprinsky
date: '2008-07-29 09:11:15'
layout: post
@@ -21,8 +22,9 @@ Trying to figure out how to put your second monitor and dual port graphics card

Since I want the virtual desktop to span both screens horizontally I need to specify a virtual screen of 2880x1200. I arrived at this number by adding the pixel widths of both displays together (1280+1600) and taking the pixel height of the biggest monitor (1200).

-In xorg.conf I added "Virtual 2880 1200" in the Section "Screen"
+In `xorg.conf` I added `"Virtual 2880 1200"` in the Section `"Screen"`

+```
Section "Screen"
Identifier "Screen0"
Device     "Videocard0"
@@ -32,5 +34,6 @@ Viewport   0 0
Depth     24
Virtual   2880 1200
EndSubSection
+```

-Now restart Xorg by pressing Ctrl-Alt-Backspace and run gnome-display-properties to adjust the position of the screen (left of right) and set the proper resolution. Be sure to uncheck "Mirror Screen"
+Now restart Xorg by pressing `Ctrl-Alt-Backspace` and run gnome-display-properties to adjust the position of the screen (left or right) and set the proper resolution.
Be sure to uncheck `"Mirror Screen"`
diff --git a/source/_posts/2008-08-05-drupal-filebrowser-configuration-issue.markdown b/source/_posts/2008-08-05-drupal-filebrowser-configuration-issue.markdown
index 558ec18..0ea0a1d 100644
--- a/source/_posts/2008-08-05-drupal-filebrowser-configuration-issue.markdown
+++ b/source/_posts/2008-08-05-drupal-filebrowser-configuration-issue.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: gerold-mercadero
date: '2008-08-05 11:46:25'
layout: post
@@ -12,10 +13,11 @@ categories:
---

We have experienced a strange issue with our Drupal filebrowser module installation. We’re getting repeated error messages about the open_basedir restriction being in effect.
-_
+
+```
# warning: file_exists() [function.file-exists]: open_basedir restriction in effect. File(/file-folder.png) is not within the allowed path(s): (/home/username:/usr/lib/php:/usr/local/lib/php:/tmp) in /home/username/public_html/modules/filebrowser/filebrowser.module on line 338.
# warning: file_exists() [function.file-exists]: open_basedir restriction in effect. File(/file-default.png) is not within the allowed path(s): (/home/username:/usr/lib/php:/usr/local/lib/php:/tmp) in /home/username/public_html/modules/filebrowser/filebrowser.module on line 338.
-_
+```

After several checks on our account setup we found the issue was caused by our own configuration of the filebrowser module. There are only two values to set for Filebrowser: “Root directory” and “Icons directory”. Since we forgot to set the value for “Icons directory” and left it empty, this caused the open_basedir restriction issue. Because the module couldn't find the location of the icons directory, it blamed the open_basedir restriction settings.
diff --git a/source/_posts/2008-08-07-speedcabling-system-admin-olympics.markdown b/source/_posts/2008-08-07-speedcabling-system-admin-olympics.markdown
index bd0912e..039d8b9 100644
--- a/source/_posts/2008-08-07-speedcabling-system-admin-olympics.markdown
+++ b/source/_posts/2008-08-07-speedcabling-system-admin-olympics.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: andrew-kucharski
date: '2008-08-07 14:37:23'
layout: post
@@ -13,8 +14,7 @@ tags:
- funny
---

-Funny. First appeared as "[Nerdlympics, Summer 2008"](http://www.itworld.com/tech-society/54088/nerdlympics-summer-2008) in IT world.
-
+Funny. First appeared as [*Nerdlympics, Summer 2008*](http://www.itworld.com/tech-society/54088/nerdlympics-summer-2008) in ITworld.

> Speedcabling is a physical competition of sorts ...
Computer scientist and artist Steven Schkolne launched the first speedcabling contest earlier this year, and you can watch the thrilling final here
diff --git a/source/_posts/2008-08-25-tomcat-failure-after-apache-rebuild-in-cpanel.markdown b/source/_posts/2008-08-25-tomcat-failure-after-apache-rebuild-in-cpanel.markdown
index 26615b0..e35fd29 100644
--- a/source/_posts/2008-08-25-tomcat-failure-after-apache-rebuild-in-cpanel.markdown
+++ b/source/_posts/2008-08-25-tomcat-failure-after-apache-rebuild-in-cpanel.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: marius-ducea
date: '2008-08-25 03:33:08'
layout: post
diff --git a/source/_posts/2008-08-26-enabling-and-using-the-slow-query-log-in-mysql.markdown b/source/_posts/2008-08-26-enabling-and-using-the-slow-query-log-in-mysql.markdown
index ce76e37..0093247 100644
--- a/source/_posts/2008-08-26-enabling-and-using-the-slow-query-log-in-mysql.markdown
+++ b/source/_posts/2008-08-26-enabling-and-using-the-slow-query-log-in-mysql.markdown
@@ -1,6 +1,7 @@
---
author: pim-van-der-wal
published: true
+comments: true
date: '2008-08-26 17:39:10'
layout: post
slug: enabling-and-using-the-slow-query-log-in-mysql
@@ -16,21 +17,21 @@ tags:
- tuning
---

-By special request here is a post about the MySQL slow query log. MySQL has a wonderful feature that lets you keep track of all queries that took longer than a certain time to complete. To enable it simply add the following line to your my.cnf file:
-
-`log-slow-queries = [path to the log file]`
-
+By special request here is a post about the MySQL slow query log. MySQL has a wonderful feature that lets you keep track of all queries that took longer than a certain time to complete. To enable it simply add the following line to your `my.cnf` file:
+```
+log-slow-queries = [path to the log file]
+```
Secondly it is very useful to specify the minimum amount of time a query should take before being considered a slow query. Again, simply add the following line to your `my.cnf` file:
-
-`long_query_time = [minimum time in seconds for query]`
-
+```
+long_query_time = [minimum time in seconds for query]
+```
Unfortunately you do have to restart MySQL for this to catch on, but once you do you will have a very powerful and simple tool for optimizing your queries. After letting it run for a while and using the application that is accessing it, open the log file and you will see entries like this:
-
-`# Time: 080826 15:33:48
+```
+# Time: 080826 15:33:48
# User@Host: testuser[testuser] @ [10.0.0.1]
# Query_time: 2 Lock_time: 0 Rows_sent: 0 Rows_examined: 628951
-select * from customers where customers_email_address = 'test@test.com' and customers_sites_id = '1';`
-
+select * from customers where customers_email_address = 'test@test.com' and customers_sites_id = '1';
+```
As you can see the limit has been set pretty low (1 second to be exact) because the Query_time is 2 seconds. Still, most simple queries should not last more than a fraction of a second. The complete query is shown here, along with the number of rows that MySQL examined to get the result. The fact that the number is pretty high (628951) means that no index was used. The next step is to take this query and run the EXPLAIN command on it to verify whether an index was used or not. If that is the problem then the simple solution is to add an index for this column if this query is used often.

The second way to extract useful information from the slow query log is to look for repeated queries.
This will mainly occur in web applications where the user can hit the refresh button to restart the query. If the first one did not complete quickly enough you can bet the second one won't fare any better while the first one is still running. Seeing the same query appearing several times is a good sign that the query needs to be optimized.
diff --git a/source/_posts/2008-09-04-when-mysql-starts-counting-sheep.markdown b/source/_posts/2008-09-04-when-mysql-starts-counting-sheep.markdown
index 2022f16..d1224e7 100644
--- a/source/_posts/2008-09-04-when-mysql-starts-counting-sheep.markdown
+++ b/source/_posts/2008-09-04-when-mysql-starts-counting-sheep.markdown
@@ -1,6 +1,7 @@
---
author: pim-van-der-wal
published: true
+comments: true
date: '2008-09-04 07:41:00'
layout: post
slug: when-mysql-starts-counting-sheep
@@ -20,9 +21,8 @@ tags:

We encountered a situation recently where the number of connections to our MySQL database started creeping up slowly but steadily. Strangely enough all the connections were in sleep mode and the database was not being stressed. The number of connections first reached 30, where it normally stays below 20, and started triggering our monitoring scripts. After another hour the number of connections had reached 40 and this trend continued. All connections were coming from the same server, which was visible in the show processlist output:
-
-`+---------+------------+-------------------+-------------+
+```
++---------+------------+-------------------+-------------+
| Id      | User       | Host              | Command     |
+---------+------------+-------------------+-------------+
|      26 | repl       | 10.0.0.23:32795   | Binlog Dump |
| 1091130 | sfront     | 10.0.0.32:39273   | Sleep       |
| 1092442 | sfront     | 10.0.0.32:42023   | Sleep       |
| 1106425 | sfront     | 10.0.0.222:38971  | Query       |
-+---------+------------+-------------------+-------------+`
++---------+------------+-------------------+-------------+
+```

The MySQL documentation defines the sleep mode as:
-
-> The thread is waiting for the client to send a new statement to it.
-
+```
+The thread is waiting for the client to send a new statement to it.
+```

The application running on the offending server was a Java application and luckily the only application on that server. A new module in this application was monitoring a setting in the database in a loop with 5-second breaks. The problem was caused by inefficient use of database connections. In each loop a new connection would be opened, a new prepared statement created and a new resultset generated. None of these were explicitly closed by the application. Now, because Java has a built-in garbage collector these objects would be cleaned up automatically in time, but apparently the rate of creating new connections was just slightly higher than the rate of garbage collection, so the number of connections rose by about 10 per hour. Although our maximum number of connections is set rather high this would not have caused a problem for another couple of days, but we would eventually have maxed out.
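For what it's worth, the fix on the application side is straightforward; here is a minimal sketch (the JDBC URL, credentials and query are made up) that closes the connection, statement and result set at the end of every polling iteration instead of leaving them to the garbage collector:

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Poll a database setting every 5 seconds without leaking connections.
public class SettingPoller {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver"); // pre-JDBC4 driver registration
        while (true) {
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://10.0.0.1/appdb", "sfront", "secret");
            try {
                PreparedStatement ps = conn.prepareStatement(
                        "SELECT value FROM settings WHERE name = 'refresh_interval'");
                ResultSet rs = ps.executeQuery();
                if (rs.next()) {
                    System.out.println("setting = " + rs.getString(1));
                }
                rs.close(); // close in reverse order of creation
                ps.close();
            } finally {
                conn.close(); // without this, the connection lingers in Sleep mode
            }
            Thread.sleep(5000);
        }
    }
}
```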
diff --git a/source/_posts/2008-09-11-nagios-plugin-check_hparray-error.markdown b/source/_posts/2008-09-11-nagios-plugin-check_hparray-error.markdown
index e6bf677..1f72e35 100644
--- a/source/_posts/2008-09-11-nagios-plugin-check_hparray-error.markdown
+++ b/source/_posts/2008-09-11-nagios-plugin-check_hparray-error.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: gerold-mercadero
date: '2008-09-11 18:22:37'
layout: post
@@ -18,43 +19,44 @@ We have [check_hparray plugin](http://www.nagiosexchange.org/cgi-bin/page.cgi?g=

Yesterday we set up another HP server and installed NRPE, check_hparray, and hpacucli, using the same process as on previous installations. NRPE worked fine locally and from the Nagios server as the local disk space check was configured properly, but when we tried check_hparray we got "check_hparray Error". This error can have different causes, like an invalid slot value or permission problems when executing the hpacucli command.

We reviewed our setup and installations and we have the same settings (based on the setup of other servers).

-We run check_hparray from NRPE we got the error:
+When we ran check_hparray from NRPE we got the error:

-_ [root@web161 nagios]# /usr/local/nagios/libexec/check_nrpe -H localhost -c check_raid
- check_hparray Error.
-_
+```
+[root@web161 nagios]# /usr/local/nagios/libexec/check_nrpe -H localhost -c check_raid
+check_hparray Error.
+```

-and it worked fine if run check_hparray command directly:
+and it worked fine if we ran the `check_hparray` command directly:

-_ [root@web161 nagios]# /usr/local/nagios/libexec/check_hparray -s 1
- RAID OK - (Smart Array P400 in Slot 1 array A logicaldrive 1 (546.8 GB, RAID 1+0, OK))
-_
+```
+[root@web161 nagios]# /usr/local/nagios/libexec/check_hparray -s 1
+RAID OK - (Smart Array P400 in Slot 1 array A logicaldrive 1 (546.8 GB, RAID 1+0, OK))
+```

Both of the commands above were tested using the root and nagios users, with the same results. Then we enabled the NRPE DEBUG option to get details on the problem:

edit: `/usr/local/nagios/etc/nrpe.cfg`

+```
+# DEBUGGING OPTION
+# This option determines whether or not debugging messages are logged to the
+# syslog facility.
+# Values: 0=debugging off, 1=debugging on
+debug=1
+```

and by looking at the system logs we saw the problem with our sudo:

+```
+Sep 10 04:11:35 hostname sudo: root : TTY=pts/2 ; PWD=/var/log ; USER=root ; COMMAND=/usr/sbin/hpacucli controller slot=1 ld all show
+Sep 10 04:12:55 hostname sudo: nagios : sorry, you must have a tty to run sudo ; TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/sbin/hpacucli controller slot=1 ld all show
+```

and the solution was to comment out the following line in the `/etc/sudoers` file:

+```
+Defaults requiretty
+```
diff --git a/source/_posts/2008-09-18-run-parts-not-running-all-the-scripts-in-etccrondaily.markdown b/source/_posts/2008-09-18-run-parts-not-running-all-the-scripts-in-etccrondaily.markdown
index 8993e48..6f37767 100644
--- a/source/_posts/2008-09-18-run-parts-not-running-all-the-scripts-in-etccrondaily.markdown
+++ b/source/_posts/2008-09-18-run-parts-not-running-all-the-scripts-in-etccrondaily.markdown
@@ -1,6 +1,7 @@
---
author: pim-van-der-wal
published: true
+comments: true
date: '2008-09-18 17:22:02'
layout: post
slug: run-parts-not-running-all-the-scripts-in-etccrondaily
@@ -14,10 +15,8 @@ tags:
- run-parts
---

-Okay, I realize that this is a simple one but it caught me by surprise nonetheless. Most of our scheduled scripts are placed in the /etc/cron.daily, /etc/cron.weekly and /etc/cron.monthly directories. After migrating some of these scripts to another server it turned out they were not being executed. After some research it came up that run-parts (or at least some implementations of it) was just skipping the files that had . in it. Our CentOS server was executing them perfectly but the Debian server just skipped them. The man page mentions it by omission:
+Okay, I realize that this is a simple one but it caught me by surprise nonetheless. Most of our scheduled scripts are placed in the `/etc/cron.daily`, `/etc/cron.weekly` and `/etc/cron.monthly` directories. After migrating some of these scripts to another server it turned out they were not being executed. After some research it came up that run-parts (or at least some implementations of it) was just skipping the files that had a . in their names. Our CentOS server was executing them perfectly but the Debian server just skipped them. The man page mentions it by omission:

> If the --lsbsysinit option is not given then the names must consist entirely of upper and lower case letters, digits, underscores, and hyphens.

To check what scripts will be executed call run-parts with the --test option.
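For example, with a hypothetical script name, the skip is easy to see on a Debian-style run-parts:

```
run-parts --test /etc/cron.daily   # backup.sh is silently missing from the list
mv /etc/cron.daily/backup.sh /etc/cron.daily/backup
run-parts --test /etc/cron.daily   # now it shows up and will be executed
```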
diff --git a/source/_posts/2008-10-06-calculating-videos-on-site-to-bandwidth-or-aggregate-transfer-to-cost-of-video-played.markdown b/source/_posts/2008-10-06-calculating-videos-on-site-to-bandwidth-or-aggregate-transfer-to-cost-of-video-played.markdown
index 333fa65..d97bf4b 100644
--- a/source/_posts/2008-10-06-calculating-videos-on-site-to-bandwidth-or-aggregate-transfer-to-cost-of-video-played.markdown
+++ b/source/_posts/2008-10-06-calculating-videos-on-site-to-bandwidth-or-aggregate-transfer-to-cost-of-video-played.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: andrew-kucharski
date: '2008-10-06 22:42:04'
layout: post
@@ -16,75 +17,45 @@ tags:
- video
---

-A client had asked us a very innocent question about bandwidth usage "What will my average cost of video download be?". This question lead me down a path of assumptions and translations of different bandwidth terminologies which would make sense to someone without having to learn how to convert binary and calculate video compressions.  This article is first of an outline of how to get from a technical measurement to a non technical answer.
+A client asked us a very innocent question about bandwidth usage: *What will my average cost of video download be?* This question led me down a path of assumptions and translations between different bandwidth terminologies that would make sense to someone without having to learn how to convert binary values or calculate video compression.  This article is the first of an outline of how to get from a technical measurement to a non-technical answer.

**Bitrate**

The assumptions for calculating the bandwidth of the video are the following:

- * pixel hight
- * pixel width
- * frames speed
- * compression or movement
- * length of video (time)
+* Size of average video, which depends on:
+  * pixel height
+  * pixel width
+  * frame rate
+  * compression or movement
+  * length of video (time)

We'll assume video bitstream is understood; there are lots of resources on the web which cover this topic, most widely [adobe itself](http://livedocs.adobe.com/flash/9.0/flvencoder/wwhelp/wwhimpl/common/html/wwhelp.htm?context=LiveDocs_NoParts&file=FLV_01.html), as well as some [old school](http://sorenson-usa.com/vbe/index.html) sources.

-_Let's assume that our video bitrate is 440kbs and audio 128kbs, making the whole a 568kbs. _
+Let's assume that our video bitrate is 440kbps and audio 128kbps, making the whole 568kbps.

For an explanation of [KiloBytes vs. kilobits vs. Kibibyte](http://www.lyberty.com/encyc/articles/kb_kilobytes.html)s vs. Monkeybytes see the Lyberty blog or one of these [nifty](http://www.valkaryn.net/bwcalc/) [calculators](http://www.ibeast.com/content/tools/band-calc.asp) on the web.

**Site bandwidth usage**

-Once we have our "average" video size, lets consider the usage of the site, as the videos are embedded in a webiste, whose pages also incorporate images, javascript, css, etc, adding to the bandiwdth requirements. So what we need to determine next is some baseline by which we can determine the download of videos considering pageviews, visitors, etc.
+Once we have our *average* video size, let's consider the usage of the site: the videos are embedded in a website whose pages also incorporate images, javascript, css, etc., adding to the bandwidth requirements. So what we need to determine next is some baseline by which we can estimate the number of videos downloaded, considering pageviews, visitors, etc.

If our baseline is visitors to the site, then we need to assume the number of pages viewed and videos watched per visit.
If it's going to be determined by pageviews, then we need to understand the number of videos watched per page.  This is a huge assumption as we assume each video preloads.

Our assumptions here would therefore be:

* Number of videos watched per visitor
* average page size outside of video
* average page views per visitor
* average videos viewed per page

The easiest number to assume here, as it requires no assumptions but rather a guess, is:

* Number of visitors

-_In our case, for simplicty of calculation, lets assume our video lenght is 3600 seconds and we'll just add a overhead percentage penalty on pages encapsulating the video of about 45%._
+In our case, for simplicity of calculation, let's assume our video length is 3600 seconds and we'll just add an overhead percentage penalty of about 45% on pages encapsulating the video.

**Bandwidth Cost**

@@ -98,30 +69,19 @@ I am not taking into consideration the "unmetered" offers which are in fact thro

To assume:

* Cost of Bandwidth
  * based on tiered cost which is based on usage
  * based on total data transfer
  * based on connection

Once we have our assumptions, the rest is relatively easy.  To calculate total monthly bandwidth use the following formula:

-[video encoding rate] x [average video length] x [daily uniques] x [assumed views per unique] x [30] x [overhead bandwidth addon]
+> [video encoding rate] x [average video length] x [daily uniques] x [assumed views per unique] x [30] x [overhead bandwidth addon]

Then back out your cost of bandwidth, either fixed or connection based:

-[total monthly bandwidth] x [usage cost]
+> [total monthly bandwidth] x [usage cost]

This last calculation is simplified; we shall devote another article to the continuation.
diff --git a/source/_posts/2008-10-09-rebuilding-the-mysql-replication-slave.markdown b/source/_posts/2008-10-09-rebuilding-the-mysql-replication-slave.markdown
index 05059e3..9f264be 100644
--- a/source/_posts/2008-10-09-rebuilding-the-mysql-replication-slave.markdown
+++ b/source/_posts/2008-10-09-rebuilding-the-mysql-replication-slave.markdown
@@ -1,5 +1,6 @@
---
author: pim-van-der-wal
+comments: true
published: true
date: '2008-10-09 14:16:26'
layout: post
@@ -22,10 +23,8 @@ We are using a replication setup for our databases. The master database takes ca

Recently our master database server decided it was time for a break and spontaneously rebooted. Since the server immediately came back up and started all services correctly, production was only interrupted for a couple of minutes. A little later alerts started coming in that the replication lag was growing. It turns out the replication slave had gotten confused and was processing old replication logs. This means that transactions from a month ago were being processed again. This could be measured in several ways. The lag was showing a number of seconds equivalent to about a month. This meant that the records it was trying to insert already existed and we were getting the 1062 MySQL error. This means that the primary key already exists and the new insert cannot complete. This stops replication dead in its tracks. The second way to see that old transactions are being processed is to look at when the duplicate records were originally inserted. This is not always possible, but in this case there was a timestamp in the records.

Our conclusion was that we needed to take two actions.
First of all we needed to rebuild the replication slave database and restart replication from scratch. To do this you have to lock the entire database while making a mysqldump (FLUSH TABLES WITH READ LOCK). This would lock out users for 10 to 15 minutes, which is unacceptable during the daytime, so we decided to execute this later in the evening when fewer users were online. The second action was to let the replication catch up so that we would have a reasonably consistent database in the meantime. This was done by adding the following line to the MySQL configuration file:
-
-`slave-skip-errors=1062`
-
+```
+slave-skip-errors=1062
+```
After that the replication slave server was restarted and the database caught up in 2 hours of processing. The reason we considered this an intermediate solution is that we don't delete many records. The applications execute mainly updates and inserts to modify data. As long as the latest updates are executed last the data will be in decent shape. After that we rebuilt the complete slave database at a more convenient time and all was well again...
diff --git a/source/_posts/2008-10-14-pci-dss-compliance-for-dummies.markdown b/source/_posts/2008-10-14-pci-dss-compliance-for-dummies.markdown
index 7d4cb9d..0f9a119 100644
--- a/source/_posts/2008-10-14-pci-dss-compliance-for-dummies.markdown
+++ b/source/_posts/2008-10-14-pci-dss-compliance-for-dummies.markdown
@@ -1,6 +1,7 @@
---
author: pim-van-der-wal
published: true
+comments: true
date: '2008-10-14 13:37:16'
layout: post
slug: pci-dss-compliance-for-dummies
diff --git a/source/_posts/2008-10-15-zendoptimizer-installation-on-linux-server-with-ioncube.markdown b/source/_posts/2008-10-15-zendoptimizer-installation-on-linux-server-with-ioncube.markdown
index c9e7f90..5143b87 100644
--- a/source/_posts/2008-10-15-zendoptimizer-installation-on-linux-server-with-ioncube.markdown
+++ b/source/_posts/2008-10-15-zendoptimizer-installation-on-linux-server-with-ioncube.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: gerold-mercadero
date: '2008-10-15 08:46:16'
layout: post
@@ -15,32 +16,33 @@ tags:

ZendOptimizer installation is very short and can be done with the following steps:

-1.)  Get the latest release from **[Zend](http://www.zend.com/en/products/guard/downloads)**, available file format is  "tar.gz" (ex:  ZendOptimizer-3.3.0a-linux-glibc23-x86_64.tar.gz).
+1.)  Get the latest release from **[Zend](http://www.zend.com/en/products/guard/downloads)**; the available file format is *tar.gz* (ex: `ZendOptimizer-3.3.0a-linux-glibc23-x86_64.tar.gz`).

-2.)  Extract the tarball at /usr/local/src/.
-
-_ cd /usr/local/src/_
-_tar xzpf ZendOptimizer-3.3.0a-linux-glibc23-x86_64.tar.gz_
+2.)  Extract the tarball in `/usr/local/src/`.
+```
+cd /usr/local/src/
+tar xzpf ZendOptimizer-3.3.0a-linux-glibc23-x86_64.tar.gz
+```

3.)  Go to the extracted Zend directory and run the install script to launch the Zend Optimizer installer.  Follow the instructions.
The second action was to let the replication catch up so that we would have a reasonably consistent database in the meantime. This was done by adding the following line to the MySQL configuration file:

-`slave-skip-errors=1062`
-
+```
+slave-skip-errors=1062
+```

After that the replication slave server was restarted and the database caught up in 2 hours of processing. The reason we considered this an intermediate solution is that we don't delete many records. The applications execute mainly updates and inserts to modify data. As long as the latest updates are executed last, the data will be in decent shape. After that we rebuilt the complete slave database at a more convenient time and all was well again.

diff --git a/source/_posts/2008-10-14-pci-dss-compliance-for-dummies.markdown b/source/_posts/2008-10-14-pci-dss-compliance-for-dummies.markdown
index 7d4cb9d..0f9a119 100644
--- a/source/_posts/2008-10-14-pci-dss-compliance-for-dummies.markdown
+++ b/source/_posts/2008-10-14-pci-dss-compliance-for-dummies.markdown
@@ -1,6 +1,7 @@
---
author: pim-van-der-wal
published: true
+comments: true
date: '2008-10-14 13:37:16'
layout: post
slug: pci-dss-compliance-for-dummies

diff --git a/source/_posts/2008-10-15-zendoptimizer-installation-on-linux-server-with-ioncube.markdown b/source/_posts/2008-10-15-zendoptimizer-installation-on-linux-server-with-ioncube.markdown
index c9e7f90..5143b87 100644
--- a/source/_posts/2008-10-15-zendoptimizer-installation-on-linux-server-with-ioncube.markdown
+++ b/source/_posts/2008-10-15-zendoptimizer-installation-on-linux-server-with-ioncube.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: gerold-mercadero
date: '2008-10-15 08:46:16'
layout: post
@@ -15,32 +16,33 @@ tags:

The ZendOptimizer installation is very short and can be done with the following steps:

-1.)  Get the latest release from **[Zend](http://www.zend.com/en/products/guard/downloads)**, available file format is  "tar.gz" (ex:  ZendOptimizer-3.3.0a-linux-glibc23-x86_64.tar.gz).
+1.)  Get the latest release from **[Zend](http://www.zend.com/en/products/guard/downloads)**; the available file format is *tar.gz* (ex: `ZendOptimizer-3.3.0a-linux-glibc23-x86_64.tar.gz`).

-2.)  Extract the tarball at /usr/local/src/.
-
-_ cd /usr/local/src/_
-_tar xzpf ZendOptimizer-3.3.0a-linux-glibc23-x86_64.tar.gz_
+2.)  Extract the tarball at `/usr/local/src/`.
+```
+cd /usr/local/src/
+tar xzpf ZendOptimizer-3.3.0a-linux-glibc23-x86_64.tar.gz
+```

3.)  Go to the extracted Zend directory and run the install script to launch the Zend Optimizer installer.  Follow the instructions.

+```
+cd ZendOptimizer-3.3.0a-linux-glibc23-x86_64/
+./install
+```

-_cd ZendOptimizer-3.3.0a-linux-glibc23-x86_64_/
-_./install_

-After the installation most likely Apache will not start even if ZendOptimizer said that it restarted successfully, and you will get this error if you check PHP version (_php -v_):
-
-_**PHP Fatal error:  [ionCube Loader] The Loader must appear as the first entry in the php.ini file in Unknown on line 0
-**_
-The usual scenario is that the ZendOptimizer includes were added at the end of php.ini file while the IonCube on the include directory at /etc/php.d/ioncube-loader.ini.
+After the installation, Apache will most likely not start even if ZendOptimizer said that it restarted successfully, and you will get this error when you check the PHP version (`php -v`):
+```
+PHP Fatal error:  [ionCube Loader] The Loader must appear as the first entry in the php.ini file in Unknown on line 0
+```
+The usual scenario is that the ZendOptimizer includes were added at the end of the `php.ini` file while the ionCube loader is included from the include directory at `/etc/php.d/ioncube-loader.ini`.

-You have two options to fix the issue:
+**You have two options to fix the issue**:

**Option 1.  Place both the IonCube and ZendOptimizer includes in the php.ini file.**

-Move the contents of _/etc/php.d/ioncube-loader.ini_ and put it into _/etc/php.ini_, above the ZendOptimizer lines.  Sample code:
-**_zend_extension=/usr/lib64/php4/php_ioncube_loader_lin_4.3_x86_64.so
+Move the contents of `/etc/php.d/ioncube-loader.ini` into `/etc/php.ini`, above the ZendOptimizer lines.  Sample code:
+```
+zend_extension=/usr/lib64/php4/php_ioncube_loader_lin_4.3_x86_64.so

[Zend]
zend_extension_manager.optimizer=/usr/local/Zend/lib/Optimizer-3.3.0
@@ -48,18 +50,22 @@ zend_extension_manager.optimizer_ts=/usr/local/Zend/lib/Optimizer_TS-3.3.0
zend_optimizer.version=3.3.0a
zend_extension=/usr/local/Zend/lib/ZendExtensionManager.so
zend_extension_ts=/usr/local/Zend/lib/ZendExtensionManager_TS.so
-_**
-Restart Apache ( _/etc/init.d/httpd restart_ ).
+```
+
+Restart Apache ( `/etc/init.d/httpd restart` ).

**Option 2.  Place the ZendOptimizer includes in PHP's include directory.**

-Create a  .ini file (_ex: zendoptimizer.ini_) inside PHP's include directory at _/etc/php.d/_, and move the ZendOptimizer lines (sample below) from _/etc/php.ini_.
-_**[Zend]
+Create a `.ini` file (ex: `zendoptimizer.ini`) inside PHP's include directory at `/etc/php.d/`, and move the ZendOptimizer lines (sample below) from `/etc/php.ini`.
+```
+[Zend]
zend_extension_manager.optimizer=/usr/local/Zend/lib/Optimizer-3.3.0
zend_extension_manager.optimizer_ts=/usr/local/Zend/lib/Optimizer_TS-3.3.0
zend_optimizer.version=3.3.0a
zend_extension=/usr/local/Zend/lib/ZendExtensionManager.so
zend_extension_ts=/usr/local/Zend/lib/ZendExtensionManager_TS.so
-**_
-PHP includes are loaded in alphabetical order so be sure to name your ZendOptimizer include file like _zendoptimizer.ini_ so that this will be loaded after _ioncube-loader.ini_.
-Restart Apache ( _/etc/init.d/httpd restart_ ).
+```
+
+PHP includes are loaded in alphabetical order, so be sure to name your ZendOptimizer include file something like `zendoptimizer.ini` so that it is loaded after `ioncube-loader.ini`.
+
+Restart Apache ( `/etc/init.d/httpd restart` ).
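Once Apache is back up it is worth confirming that both extensions actually load, and in the right order; a quick check (the exact version strings vary by release):

```
# a fatal error here means the load order is still wrong; otherwise the
# version banner should mention both loaders
php -v 2>&1 | grep -i -E 'ioncube|zend'
```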
diff --git a/source/_posts/2008-10-20-anti-virus-software-alone-not-enough.markdown b/source/_posts/2008-10-20-anti-virus-software-alone-not-enough.markdown
index d85fda6..13d9973 100644
--- a/source/_posts/2008-10-20-anti-virus-software-alone-not-enough.markdown
+++ b/source/_posts/2008-10-20-anti-virus-software-alone-not-enough.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: max-veprinsky
date: '2008-10-20 17:55:18'
layout: post
@@ -20,220 +21,44 @@ As a part of PCI compliance and general Internet usage safe practices a company

Of the 20+ anti-virus clients available today I will review the free AVG and Avast! clients, with which I have spent some time testing and working. Neither client introduced any sluggishness to the system and both are fairly transparent in day-to-day operation. The test rig is an average consumer grade PC with 1 GB of system memory, a Pentium 4 at 2.66GHz and an 80GB disk, running Windows XP with Service Pack 2.

Functionality / protection

-  *
-AVG Basic: file scanning, spyware, POP/IMAP
-  *
-Avast! Personal: file scanning, POP/IMAP, webmail, IM, P2P, network
+* AVG Basic: file scanning, spyware, POP/IMAP
+* Avast! Personal: file scanning, POP/IMAP, webmail, IM, P2P, network

Scan speed (measuring initial filesystem scan of 23.8GB)

-  *
-AVG Basic: 1H 57M
-  *
-Avast! Personal: 33M
+* AVG Basic: 1H 57M
+* Avast! Personal: 33M

Resident memory usage

-  *
-AVG Basic: 50MB
-  *
-Avast! Personal: 60MB
+* AVG Basic: 50MB
+* Avast! Personal: 60MB

Next we will look at the virus detection performance of these 2 clients as well as other more popular variants.

Detection rates against Windows viruses, macros, worms, scripts, backdoors, trojans and other malware:

-  *
-AVG: 98.3 %
-  *
-Kaspersky: 97.8
-  *
-Avast!: 97.2
-  *
-Symantec: 96.9%
-  *
-McAfee: 93.6 %
+* AVG: 98.3%
+* Kaspersky: 97.8%
+* Avast!: 97.2%
+* Symantec: 96.9%
+* McAfee: 93.6%

Detection rates against highly polymorphic viruses (W32/Bakaver.A, W32/Etap.D, W32/Insane.A, W32/Stepan.E, W32/Tuareg.H, W32/Zelly.A, W32/Zmist.B and W32/Zmist.D):

-  *
-AVG: 81.1%
-  *
-Kaspersky: 100%
-  *
-Avast!: 89.1%
-  *
-Symantec: 100%
-  *
-McAfee: 99.7%
+* AVG: 81.1%
+* Kaspersky: 100%
+* Avast!: 89.1%
+* Symantec: 100%
+* McAfee: 99.7%

Source: http://www.av-comparatives.org/seiten/ergebnisse/report17.pdf

No single anti-virus product offers complete protection against Internet threats. A combination of anti-virus software and an embedded-object blocker in the web browser can provide a fairly comprehensive solution. Unlike the past, when nearly all virus infections occurred via e-mail attachments containing malicious executable files, macros embedded in MS Office documents or even non-executable files carrying virus payloads, today a user can easily become infected simply by visiting a website. Unknown to the user, embedded JavaScript applets and Flash containers can infect a system simply by being loaded in the web browser. Most often these embedded objects do not have a place on the screen or are called from hidden frames. In this situation anti-virus software would not stop such objects from loading, so an embedded script blocker is required.
The best and easiest-to-use plug-in, available for Mozilla Firefox (and browsers based on it), is called NoScript. This add-on's default behaviour is to block all embedded items in the browser, letting the user decide which to allow.

-Examples:
+*Examples*:

[http://www.symantec.com/security_response/writeup.jsp?docid=2003-090514-4048-99](http://www.symantec.com/security_response/writeup.jsp?docid=2003-090514-4048-99)
[http://www.symantec.com/security_response/writeup.jsp?docid=2002-011011-3021-99](http://www.symantec.com/security_response/writeup.jsp?docid=2002-011011-3021-99)

diff --git a/source/_posts/2008-10-22-usb-mass-storage-buffer-io-error.markdown b/source/_posts/2008-10-22-usb-mass-storage-buffer-io-error.markdown
index 835355e..1f96a44 100644
--- a/source/_posts/2008-10-22-usb-mass-storage-buffer-io-error.markdown
+++ b/source/_posts/2008-10-22-usb-mass-storage-buffer-io-error.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: max-veprinsky
date: '2008-10-22 18:57:50'
layout: post
@@ -13,18 +14,17 @@ categories:

While running rsync backups to a USB external disk I noticed an issue where the backup job would stop and /var/log/messages would spit out errors like:

+```
Buffer I/O error on device sdb1, logical block 102630553
scsi 3:0:0:0: rejecting I/O to dead device
sd 6:0:0:0: Device not ready: <6>: Current: sense key: Not Ready
Add. Sense: Logical unit not ready, initializing command required
-
-
+```

These errors are NOT an indication of a bad disk, but rather of what seems like a bug in the Mass Storage driver: after a period of inactivity the hard disk will go into a STANDBY state while the USB interface stays active and does not report the disk state to the OS, thus creating a disconnect between the OS and the physical drive. The next disk access request does not bring the disk out of STANDBY mode and the OS thinks the device is dead. The STANDBY mode is a low-energy state which also stops the platters from spinning.  While there are workarounds that involve udev rules, the simplest solution is to disable or increase the timeout of STANDBY mode. To turn off the STANDBY timer issue the following command:

-`sdparm --clear STANDBY -S `
+```
+sdparm --clear STANDBY -S
+```

Keep in mind that this will disable the STANDBY timer and will not spin down the platters.

diff --git a/source/_posts/2008-10-23-homegrown-mysql-monitoring.markdown b/source/_posts/2008-10-23-homegrown-mysql-monitoring.markdown
index 3ce94aa..ace66d5 100644
--- a/source/_posts/2008-10-23-homegrown-mysql-monitoring.markdown
+++ b/source/_posts/2008-10-23-homegrown-mysql-monitoring.markdown
@@ -1,6 +1,7 @@
---
author: pim-van-der-wal
published: true
+comments: true
date: '2008-10-23 07:46:49'
layout: post
slug: homegrown-mysql-monitoring
@@ -22,15 +23,19 @@ tags:

If you can't do it with a shell script it usually ain't worth doin', right? Of course the number and quality of monitoring tools available to sysadmins has gone up dramatically. Thanks to Nagios and other great tools it's pretty easy to keep track of what's going on and where, and to get notified pretty quickly. But sometimes you just want to monitor a couple of things first to see if they are actually worth monitoring. In my case it started out as a temporary thing to keep track of a recurring problem. The number of MySQL connections would max out from time to time and we needed to be alerted very quickly if this happened, of course.
So we let crontab run a shell script every minute which just executes one command:

-`mysql -e "show processlist;" > job.tmp`
-
+```
+mysql -e "show processlist;" > jobs.tmp
+```

This command will let you track all user connections at that moment. Of course a lot can happen in a minute and there's lots of stuff you won't catch, but problems that are growing will manifest themselves here. To distill the number of running connections:

-`CONNS=`cut ${SCRIPTS_DIR}/jobs.tmp -f5 | grep "Query" | wc -l | cut -f1 -d"/"``
-
+```
+CONNS=`cut ${SCRIPTS_DIR}/jobs.tmp -f5 | grep "Query" | wc -l | cut -f1 -d"/"`
+```

The number of locked queries:

-`LOCKED=`cut ${SCRIPTS_DIR}/jobs.tmp -f7 | grep "Locked" | wc -l | cut -f1 -d"/"``
-
+```
+LOCKED=`cut ${SCRIPTS_DIR}/jobs.tmp -f7 | grep "Locked" | wc -l | cut -f1 -d"/"`
+```

The longest running query:

-`LONGRUN=`grep "Query" ${SCRIPTS_DIR}/jobs.tmp | cut -f6 | sort -n | tail -1``
-
+```
+LONGRUN=`grep "Query" ${SCRIPTS_DIR}/jobs.tmp | cut -f6 | sort -n | tail -1`
+```

In all these cases a simple if statement will mail the processlist in case a threshold is passed (a minimal sketch follows below). No better way to get your attention than your mailbox filling up with an alert per minute. Besides, seeing the processlist at the time things started going wrong can be very useful in identifying the culprit.
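For example (the threshold and address are placeholders):

```
# mail the captured processlist when too many queries are running at once
THRESHOLD=100
if [ "$CONNS" -gt "$THRESHOLD" ]; then
    mail -s "MySQL running queries: $CONNS" admin@example.com < ${SCRIPTS_DIR}/jobs.tmp
fi
```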
diff --git a/source/_posts/2008-11-10-denial-of-service-dos-attacks-becoming-more-common.markdown b/source/_posts/2008-11-10-denial-of-service-dos-attacks-becoming-more-common.markdown
index 88682db..5f60fc4 100644
--- a/source/_posts/2008-11-10-denial-of-service-dos-attacks-becoming-more-common.markdown
+++ b/source/_posts/2008-11-10-denial-of-service-dos-attacks-becoming-more-common.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: andrew-kucharski
date: '2008-11-10 14:56:10'
layout: post
@@ -16,10 +17,8 @@ tags:

Within a week I had seen the term DoS, or Denial of Service, mentioned in two mainstream publications: the WSJ, and more recently in this article - [Internet Attacks Grow More Potent](http://www.nytimes.com/2008/11/10/technology/internet/10attacks.html?th&emc=th) - by the New York Times.

> “We’re definitely seeing more targeted attacks toward e-commerce sites,” said Danny McPherson, chief security officer for Arbor Networks. “Most enterprises are connected to the Internet with a one-gigabit connection or less. Even a two-gigabit D.D.O.S. attack will take them offline.”

We have some experience with this of course; luckily none of our customers was the target of a DoS, but if anyone on your network or even in your datacenter is, you will most likely notice latency at best and a DoS of your own at worst. Most of the research I have seen on how to deal with a DoS attack points to the helplessness of the smaller DCs and the hardware they have to deal with this.  They mostly have to work with the downstream providers to help them filter out the traffic.

diff --git a/source/_posts/2008-11-12-can-your-isp-stop-a-40-gigabit-ddos-attack.markdown b/source/_posts/2008-11-12-can-your-isp-stop-a-40-gigabit-ddos-attack.markdown
index de6a021..acfede7 100644
--- a/source/_posts/2008-11-12-can-your-isp-stop-a-40-gigabit-ddos-attack.markdown
+++ b/source/_posts/2008-11-12-can-your-isp-stop-a-40-gigabit-ddos-attack.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: andrew-kucharski
date: '2008-11-12 15:33:06'
layout: post
@@ -16,9 +17,7 @@ tags:

The [2008 Internet Security Report](http://asert.arbornetworks.com/2008/11/2008-worldwide-infrastructure-security-report/) put out by Arbor Networks has this eye-popping blurb...

> **Attacks Now Exceed 40 Gigabits** ...the largest DDoS attacks have now grown a hundredfold to break the 40 gigabit barrier this year. The growth in attack size continues to significantly outpace the corresponding increase in underlying transmission speed and ISP infrastructure investment

![](http://asert.arbornetworks.com/uploads/2008/11/attacksize2008.png)

diff --git a/source/_posts/2008-11-17-server-and-backup-woes.markdown b/source/_posts/2008-11-17-server-and-backup-woes.markdown
index bcc43ea..e4d934f 100644
--- a/source/_posts/2008-11-17-server-and-backup-woes.markdown
+++ b/source/_posts/2008-11-17-server-and-backup-woes.markdown
@@ -1,6 +1,7 @@
---
author: pim-van-der-wal
published: true
+comments: true
date: '2008-11-17 08:31:58'
layout: post
slug: server-and-backup-woes
@@ -25,35 +26,27 @@ tags:
---

Looking back it seems like most posts on this blog are helpful tips and not reports of problems we encountered. Not that we don't have any problems, but we mostly report our solutions instead of the actual problems. Of course now and again a problem comes along that doesn't have a solution ready to copy-paste into a blog post. A week ago a wrong modification in a shell script resulted in the deletion of a good number of files before we caught it. The command below ended up being run with 2 empty variables:

-`rm -fr ${DIR}/${SUBDIR}`
-
+```
+rm -fr ${DIR}/${SUBDIR}
+```

_Hint: add the following alias for all users to prevent this: `alias rm='rm --preserve-root'` (see also the defensive sketch at the end of this post)_

We were lucky in two ways. First off, this was not a production server, just a development and testing server; and secondly, the databases and web sites on that server were unaffected. That's where the good news ended and Murphy's Law kicked in. A couple of days before, we had found that our backup server had a corrupt filesystem on its RAID array. Since we did not have enough space available on other servers to hold all the backups, we temporarily suspended (you guessed it) the backups of the development and testing server.

### To undelete or not to undelete

To get back up and running we immediately closed off access to the server and considered how we could recover the deleted files. Unfortunately undeleting files on an ext3 file system can only be done under certain circumstances. If the deleted files are still opened by some process the lsof utility can help, as is documented on some web sites (just Google "ext3 undelete lsof"), but for larger-scale undeletes the first step is to create an image of the partition in question. That image can then be searched for inode entries, which can be very useful for finding specific files.
However, if you want to perform a more general undelete this method is a lot less useful because the file names will not be recovered. Apart from the limited usefulness that creating this image would yield, it would have taken several hours to complete, during which development and testing would be at a standstill. We decided not to do this and take our losses instead.

It is important to note what data we were losing at that point. Among the missing directories were some binary directories (/usr/sbin and such) which were easily recoverable by copying them from similarly configured servers. The most important missing data was the version control repository and a custom scripts directory. All the history of changes in the repository was lost but the latest state of the code was easily restored. We copied the latest code from the developer who had last performed a complete update (which is a part of the daily development process) and put that code into the repository again. Since the versions did not match up anymore after that (all code versions were reinitialized) all developers had to retrieve the complete set of code files again and copy their latest versions over it to keep working. Although this is definitely a loss for us, the impact is limited by the fact that we keep copies of all released code. These copies were unaffected on the server in question but are also present on other servers. If need be we can go through that history to track down a change, but the comments are gone and it's not a process the developers can do themselves.

### Rebooting the server

After all this we were left with one task: rebooting that server. Since we did not know exactly what got deleted this might give us some severe problems. This was scheduled for a quiet night with several system admins present. Unfortunately our hand was forced when a change in the iptables configuration caused a kernel panic. Rebooting the server revealed several more problems, the main one being the privileges on the /tmp directory. This resulted in Apache not being able to write session info there and MySQL not being able to write temporary data either. This was quickly solved of course. Without going into too many details, the final action we took was to update our Cpanel. This reinstalled many missing scripts and binaries.

I bet you're wondering why we don't use off-site backups. Well, we do actually. The problem is that this involves copying many gigabytes over a limited line, so we made a selection of what needed to be copied and we focused mainly on our production servers. The main purpose of our off-site backups is to recover production servers in case our data center becomes unavailable.

### Conclusions

It's been an annoying experience and it's hard to draw positive lessons from Mr. Murphy's teachings, but all in all it could have been a lot worse. Production was not down or affected and even the testing and development impact was pretty limited. The main things on our agenda after this are to review our backup strategy for essential locations and to review the use of root privileges on our servers. Although we use non-root users most of the time, there are tasks that are made a lot quicker by changing to root. We all know the danger of this and need to be a lot more aware of it.
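Incidentally, the accidental `rm` above could also have been stopped inside the script itself; bash parameter expansion can refuse to run when a variable is empty. A defensive sketch, using the variable names from the example:

```
#!/bin/bash
# ${VAR:?message} aborts the script with an error instead of silently
# expanding to an empty string, so this can never become "rm -fr /"
rm -fr -- "${DIR:?DIR is empty}/${SUBDIR:?SUBDIR is empty}"
```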
diff --git a/source/_posts/2008-11-20-denial-of-service-attack-and-the-defense-in-depth-layer.markdown b/source/_posts/2008-11-20-denial-of-service-attack-and-the-defense-in-depth-layer.markdown
index 2caa0c7..1b9f558 100644
--- a/source/_posts/2008-11-20-denial-of-service-attack-and-the-defense-in-depth-layer.markdown
+++ b/source/_posts/2008-11-20-denial-of-service-attack-and-the-defense-in-depth-layer.markdown
@@ -1,10 +1,10 @@
---
author: rachel-jaro
+comments: true
published: true
date: '2008-11-20 08:20:29'
layout: post
slug: denial-of-service-attack-and-the-defense-in-depth-layer
-status: publish
title: Denial of Service Attack and the Defense-in-Depth Layer
wordpress_id: '114'
categories:
@@ -27,60 +27,29 @@ _Application_: For the purpose of this article, application refers to an interac

In a [DoS attack](http://linuxsysadminblog.com/2008/11/denial-of-service-dos-attacks-becoming-more-common/), a vulnerability in one layer is a vulnerability in the other layers; unlike other kinds of attacks, it can strike at any level. Microsoft has explained the [network, host and application threats and countermeasures](http://msdn.microsoft.com/en-us/library/aa302418.aspx#c02618429_004) well. The following are lists of appropriate actions to address DoS:

**Network Layer**

* Apply the latest service packs.
* Harden the TCP/IP stack by applying the appropriate registry settings to increase the size of the TCP connection queue, decrease the connection establishment period, and employ dynamic backlog mechanisms to ensure that the connection queue is never exhausted (for the Linux equivalent, see the sysctl sketch at the end of this post).
* Use a network Intrusion Detection System (IDS) because these can automatically detect and respond to SYN attacks.

**Host Layer**

* Configure your applications, services, and operating system with denial of service in mind.
* Stay current with patches and security updates.
* Harden the TCP/IP stack against denial of service.
* Make sure your account lockout policies cannot be exploited to lock out well-known service accounts.
* Make sure your application is capable of handling high volumes of traffic and that thresholds are in place to handle abnormally high loads.
* Review your application's failover functionality.
* Use an IDS that can detect potential denial of service attacks.

**Application Layer**

* Thoroughly validate all input data at the server.
* Use exception handling throughout your application's code base.
* Apply the latest patches.

**Physical Layer**

This is the one layer that is usually not given proper attention. The problem with our community is the lack of information regarding security issues. Help educate common users about this kind of attack; an innocent employee or community can be used to execute a DDoS attack. Again, security must be practiced in all areas of operations. A DoS attack can take any form. With DiD, if one layer fails then another layer will likely hold.
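The registry settings mentioned under the network layer are Windows-specific; on a Linux host the equivalent TCP/IP stack hardening is exposed through sysctl. A sketch with illustrative, untuned values:

```
# answer with SYN cookies when the connection queue fills up under a SYN flood
sysctl -w net.ipv4.tcp_syncookies=1
# enlarge the queue of half-open (SYN_RECV) connections
sysctl -w net.ipv4.tcp_max_syn_backlog=4096
# retransmit SYN-ACKs fewer times so half-open entries expire sooner
sysctl -w net.ipv4.tcp_synack_retries=2
```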
diff --git a/source/_posts/2008-11-25-cpanel-adding-custom-configuration-to-httpdconf.markdown b/source/_posts/2008-11-25-cpanel-adding-custom-configuration-to-httpdconf.markdown
index b6a7ca5..c3574cd 100644
--- a/source/_posts/2008-11-25-cpanel-adding-custom-configuration-to-httpdconf.markdown
+++ b/source/_posts/2008-11-25-cpanel-adding-custom-configuration-to-httpdconf.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: gerold-mercadero
date: '2008-11-25 09:39:23'
layout: post
@@ -19,61 +20,72 @@ Recently, Cpanel implemented their standard way of adding custom changes to virt

Here are the two common situations of adding custom changes:

-**1.)  Changes added inside a **
+**1.)  Changes added inside a `<VirtualHost>`**
+
This is very simple as we only need to create a single file that will contain our changes.  But we need to understand the correct location in which to place this file so that our changes will be read properly.

+There are several cases of adding these changes and below are some samples and the corresponding directory where to put them:
+
+* One virtualhost (either SSL or standard)
+Apache1/SSL:  `/usr/local/apache/conf/userdata/ssl/1/<user>/<domain>/<name>.conf`
+Apache1/Standard:   `/usr/local/apache/conf/userdata/std/1/<user>/<domain>/<name>.conf`
+Apache2/SSL:   `/usr/local/apache/conf/userdata/ssl/2/<user>/<domain>/<name>.conf`
+Apache2/Standard:   `/usr/local/apache/conf/userdata/std/2/<user>/<domain>/<name>.conf`

-There are several cases of adding these changes and below are some samples and the coresponding directory where to put them:
+* All virtualhosts (both SSL and standard)

-- One virtualhosts (either SSL or standard)
-Apache1/SSL:  _/usr/local/apache/conf/userdata/ssl/1///.conf_
-Apache1/Standard:   _/usr/local/apache/conf/userdata/std/1///.conf_
-Apache2/SSL:   _/usr/local/apache/conf/userdata/ssl/2///.conf_
-Apache2/SSL:   _/usr/local/apache/conf/userdata/std/2///.conf_
+Apache 1/2:   `/usr/local/apache/conf/userdata/<name>.conf`

--  All virtualhosts (both SSL and standard)
-Apache 1/2:   _/usr/local/apache/conf/userdata/.conf_
+* All SSL virtualhosts or all Standard virtualhosts

--  All SSL virtualhost or all Standard virtualhost
-Apache 1/2 - all SSL:  _/usr/local/apache/conf/userdata/ssl/.conf_
-Apache 1/2 - all Standard:   _/usr/local/apache/conf/userdata/std/.conf_
+Apache 1/2 - all SSL:  `/usr/local/apache/conf/userdata/ssl/<name>.conf`
+Apache 1/2 - all Standard:   `/usr/local/apache/conf/userdata/std/<name>.conf`

* If you need to put the above changes for a specific Apache version you can put them this way:

-Apache 1 - all SSL:   _/usr/local/apache/conf/userdata/ssl/1/.conf_
-Apache 2 - all Standard:  _/usr/local/apache/conf/userdata/std/2/.conf_
+Apache 1 - all SSL:   `/usr/local/apache/conf/userdata/ssl/1/<name>.conf`
+Apache 2 - all Standard:  `/usr/local/apache/conf/userdata/std/2/<name>.conf`

The same process is followed for subdomains; for example, in one of my implementations I added a custom virtualhost for a subdomain to take effect on standard (http), so I put it in this directory:

`/usr/local/apache/conf/userdata/std/2/gerold/mysubdomain.gerold.com/custom.conf`

Take note that you also need to create the directories like *ssl*, *std*, *1*, *2*, or *mysubdomain.gerold.com* in order to have the correct directory structure/path.
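For example, creating the directory structure and include file for the subdomain above by hand might look like this (the user, domain and directive are illustrative):

```
# create the expected path, then drop the custom directives into it
mkdir -p /usr/local/apache/conf/userdata/std/2/gerold/mysubdomain.gerold.com
cat > /usr/local/apache/conf/userdata/std/2/gerold/mysubdomain.gerold.com/custom.conf <<'EOF'
# any vhost-context directives can go here, e.g.:
ServerSignature Off
EOF
```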
You can verify that your custom changes were added correctly using this command:

-**_/scripts/verify_vhost_includes_**
+```
+/scripts/verify_vhost_includes
+```
+
+**Then, update the include files**:

-Then, update the include files:

For changes concerning a single account/user you can use this command:

-**_ /scripts/ensure_vhost_includes --user=_**
+```
+/scripts/ensure_vhost_includes --user=<username>
+```

And for all users run:

-**_/scripts/ensure_vhost_includes --all-users_**
+```
+/scripts/ensure_vhost_includes --all-users
+```

-And finally, restart Apache (**_/etc/init.d/httpd restart_**)
+And finally, restart Apache (`/etc/init.d/httpd restart`)

-**2.)  Changes added outside a **
+**2.)  Changes added outside a `<VirtualHost>`**

Adding custom changes outside of a virtualhost can be done in different ways, like creating templates or using the Include Editor. In my example, I will discuss using the Include Editor as I usually use this on some of our client sites.

**Cpanel has three ways to place our custom changes using the Include Editor, these are:**

-- **Pre-Main Include** - this is placed at the top of the httpd.conf file
-Location:  _/etc/httpd/conf/includes/pre_main_1.conf_
-- **Pre-VirtualHost Include** - codes in this file are added before the first Vhost configuration
-Location:  _/etc/httpd/conf/includes/pre_virtualhost_1.conf_
-- **Post-VirtualHost Include** - codes in this file are added at the end of httpd.conf
-Location:  _/etc/httpd/conf/includes/post_virtualhost_1.conf_
+* **Pre-Main Include** - this is placed at the top of the httpd.conf file
+Location:  `/etc/httpd/conf/includes/pre_main_1.conf`
+* **Pre-VirtualHost Include** - directives in this file are added before the first Vhost configuration
+Location:  `/etc/httpd/conf/includes/pre_virtualhost_1.conf`
+* **Post-VirtualHost Include** - directives in this file are added at the end of httpd.conf
+Location:  `/etc/httpd/conf/includes/post_virtualhost_1.conf`

So to add our changes we can go to WHM:  `Main >> Service Configuration >> Apache Setup >> Include Editor`, and select where you want to place your custom changes (*pre-main, pre-vhost, or post-vhost*).
You can also directly edit the files (*pre_main_1.conf, pre_virtualhost_1.conf, post_virtualhost_1.conf*) located at `/etc/httpd/conf/includes/`.
Finally, restart Apache (`/etc/init.d/httpd restart`) for changes to take effect.
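Whenever you restart after changing these includes, it is worth syntax-checking the merged configuration first; on a Cpanel build the Apache binary usually lives under `/usr/local/apache/bin` (the path may vary):

```
# parse httpd.conf plus all includes and report errors without touching the running server
/usr/local/apache/bin/httpd -t
```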
**NOTE: For a complete reference please refer to [Cpanel Docs](http://www.cpanel.net/support/docs/ea/ea3/customdirectives.html).**

diff --git a/source/_posts/2008-11-25-upgrading-to-trac-011.markdown b/source/_posts/2008-11-25-upgrading-to-trac-011.markdown
index f3d928f..01ee4d9 100644
--- a/source/_posts/2008-11-25-upgrading-to-trac-011.markdown
+++ b/source/_posts/2008-11-25-upgrading-to-trac-011.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: marius-ducea
date: '2008-11-25 06:16:59'
layout: post

diff --git a/source/_posts/2008-12-01-black-monday.markdown b/source/_posts/2008-12-01-black-monday.markdown
index 8ee0331..334c87e 100644
--- a/source/_posts/2008-12-01-black-monday.markdown
+++ b/source/_posts/2008-12-01-black-monday.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: andrew-kucharski
date: '2008-12-01 23:38:57'
layout: post
@@ -16,46 +17,18 @@ tags:

> E-commerce Report; Several of the most popular online merchants have struggled to cope with the heavy holiday traffic, a survey shows.

Ok, that was [NYTimes 2003](http://query.nytimes.com/gst/fullpage.html?res=9807E7DA103CF936A25751C1A9659C8B63) – what about now?   This season there have already been several announcements of ecommerce snafus.  The Sears.com site shut down for a couple of hours and there have even been rumors of apple.com having problems. Although Snopes.com busts the claim of Cyber Monday as the busiest shopping day dollar-wise, over the next five weeks eCommerce sites will experience their heaviest loads.

One interesting phenomenon that has emerged in the last few years is a sort of blending of the Black Friday and Cyber Monday concepts:  the day that produces the most web traffic to online retail sites is Thanksgiving Day itself, as avid shoppers use the Internet to plan their strategies for Black Friday weekend sales at brick and mortar stores:

> Matt Tahtam, a spokesman for Hitwise, a company that tracks 100 of the largest online retailers, says there's another trend that's emerged over the last few holiday seasons: the greatest amount of online traffic (searching and visiting, though not necessarily buying) happening on turkey day itself.

What is the biggest E-Shopping Day in History?  According to this Forbes article ([http://www.forbes.com/2006/11/29/cyber-monday-shopping-record-biz-cx_tvr_1129shop.html](http://www.forbes.com/2006/11/29/cyber-monday-shopping-record-biz-cx_tvr_1129shop.html)), Cyber Monday, tagged as the unofficial start of the online holiday shopping season, saw a 26% gain in sales from the same day in 2005 to $608 million, according to industry tracker comScore Networks. The result, beating expectations, marked the single biggest shopping day in e-commerce history.

So what should a pragmatic system admin do about this?

-  * We don’t over sell your servers
-
-  * Dont run bandwidth, CPU or any aspect of hosting at or near capacity
-
-  * Monitor our servers and distribute load to make sure that when the big spike happens, we’re ready and your site stays up.
+* We don’t oversell your servers
+* Don’t run bandwidth, CPU or any aspect of hosting at or near capacity
+* Monitor our servers and distribute load to make sure that when the big spike happens, we’re ready and your site stays up.
diff --git a/source/_posts/2008-12-02-how-to-check-if-your-dns-server-implements-source-port-randomization.markdown b/source/_posts/2008-12-02-how-to-check-if-your-dns-server-implements-source-port-randomization.markdown
index 9cba09a..1061dff 100644
--- a/source/_posts/2008-12-02-how-to-check-if-your-dns-server-implements-source-port-randomization.markdown
+++ b/source/_posts/2008-12-02-how-to-check-if-your-dns-server-implements-source-port-randomization.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: marius-ducea
date: '2008-12-02 06:11:40'
layout: post

diff --git a/source/_posts/2008-12-03-why-is-there-a-system-change-freeze-especially-on-black-monday-and-black-friday.markdown b/source/_posts/2008-12-03-why-is-there-a-system-change-freeze-especially-on-black-monday-and-black-friday.markdown
index 08c2ed8..a93b229 100644
--- a/source/_posts/2008-12-03-why-is-there-a-system-change-freeze-especially-on-black-monday-and-black-friday.markdown
+++ b/source/_posts/2008-12-03-why-is-there-a-system-change-freeze-especially-on-black-monday-and-black-friday.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: andrew-kucharski
date: '2008-12-03 18:53:34'
layout: post
@@ -24,7 +25,6 @@ I think this story convinced me that doing production work these days on the bus

This story: [Microsoft Says Sorry For Black Friday Cashback Outage](http://www.efluxmedia.com/news_Microsoft_Says_Sorry_For_Black_Friday_Cashback_Outage_30408.html)

> For Internet users, [Black Friday](http://www.efluxmedia.com/news_Microsoft_Says_Sorry_For_Black_Friday_Cashback_Outage_30408.html#) was supposed to be about buying and cashing back, but [Microsoft’s](http://www.efluxmedia.com/news_Microsoft_Says_Sorry_For_Black_Friday_Cashback_Outage_30408.html#) Live Search cashback machine apparently broke down just as customers “barged in” to make some early morning purchases. According to a blog posting, the unexpected outage occurred due to a significant spike in traffic, which caused the system to go down for several hours. It took quite a while for it to come back to life, but apparently that was related to investigating the issue and rebuilding and deploying the [databases](http://www.efluxmedia.com/news_Microsoft_Says_Sorry_For_Black_Friday_Cashback_Outage_30408.html#) and indexes that support Microsoft Live Search Cashback.

diff --git a/source/_posts/2008-12-09-vsftpd-logging-timestamp.markdown b/source/_posts/2008-12-09-vsftpd-logging-timestamp.markdown
index 6fbc89c..fbdaed0 100644
--- a/source/_posts/2008-12-09-vsftpd-logging-timestamp.markdown
+++ b/source/_posts/2008-12-09-vsftpd-logging-timestamp.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: marius-ducea
date: '2008-12-09 03:23:41'
layout: post

diff --git a/source/_posts/2008-12-10-mysql-query-optimization-for-network-throughput.markdown b/source/_posts/2008-12-10-mysql-query-optimization-for-network-throughput.markdown
index cae8794..d81f6a2 100644
--- a/source/_posts/2008-12-10-mysql-query-optimization-for-network-throughput.markdown
+++ b/source/_posts/2008-12-10-mysql-query-optimization-for-network-throughput.markdown
@@ -1,6 +1,7 @@
---
author: pim-van-der-wal
published: true
+comments: true
date: '2008-12-10 10:42:46'
layout: post
slug: mysql-query-optimization-for-network-throughput
@@ -19,18 +20,18 @@ tags:

It's a bit of a long title for a blog post, but the point I want to make is that not every query optimization is aimed at making the query faster.
As a case in point, we have a client that has a web shop and their network traffic between the web servers and the database servers has been sizable, to say the least. There was a good amount of old code that probably worked pretty well when the shop just started out and had small amounts of data. Now that the shop has grown, certain queries suddenly don't perform so well anymore.

-On any weekday around lunch time network traffic between web and data servers was around 80Mb/s. On a 100Mb/s connection that is dangerously high and to address this we've been in the process of modifying queries that do a SELECT *. There really hardly ever is a reason to do a SELECT * except when you have very flexible code that automatically deals with extra columns. That was not the case with this application.
+On any weekday around lunch time network traffic between web and data servers was around 80Mb/s. On a 100Mb/s connection that is dangerously high, and to address this we've been in the process of modifying queries that do a `SELECT *`. There is hardly ever a reason to do a `SELECT *` except when you have very flexible code that automatically deals with extra columns. That was not the case with this application.

On Cyber Monday we found that traffic for this shop was touching 100Mb/s and web site performance really suffered. As a temporary measure we switched the database server to a 1Gb/s connection, but both web servers stayed on 100Mb/s connections. Looking at the slow query log and mtop revealed a ton of similar queries:

-`
-# User@Host: username[username] @  [10.0.0.123]
-# Query_time: 2  Lock_time: 0  Rows_sent: 1948  Rows_examined: 2047
-SELECT * FROM sites, sites_to_bundle WHERE sites_to_bundle.sites_id = sites.sites_id AND sites.site_name ='shop1';`
+```
+# User@Host: username[username] @  [10.0.0.123]
+# Query_time: 2  Lock_time: 0  Rows_sent: 1948  Rows_examined: 2047
+SELECT * FROM sites, sites_to_bundle WHERE sites_to_bundle.sites_id = sites.sites_id AND sites.site_name ='shop1';
+```

In and of themselves these queries look pretty harmless. 2000 rows is not that much; maybe more than is needed, but not enough to choke up the network either. The problem turns out to be that this query was executed several times for each page in the checkout process. The database server was not overloaded since it had the whole result set in its cache, but it had to transfer 2MB of data for each request. When a developer investigated, it turned out that only one two-digit value of the entire result set was used. We rewrote the query (along the lines of the sketch below) and pushed it to production that same day. Yes, I know that a previous blog post tells us not to do that, but in this case I'm glad we did. Network traffic dropped below 10Mb/s and the web site was flying.
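The rewritten query itself is not shown here; a plausible sketch, assuming the needed two-digit value lived in a hypothetical column named `bundle_id`, would select just that one column:

```
# fetch only the single column the page actually uses instead of SELECT *
mysql -e "SELECT b.bundle_id FROM sites s JOIN sites_to_bundle b ON b.sites_id = s.sites_id WHERE s.site_name = 'shop1'"
```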
Below is the Cacti graph that shows that difference. Shortly after 15:00 we implemented the optimization and traffic dropped dramatically.

-[caption id="attachment_162" align="alignnone" width="500" caption="Network traffic decrease after query optimization"][![](http://linuxsysadminblog.com/images/2008/12/network_traffic_improvement.png)](http://linuxsysadminblog.com/images/2008/12/network_traffic_improvement.png)[/caption]
+[![Network traffic decrease after query optimization](http://linuxsysadminblog.com/images/2008/12/network_traffic_improvement.png)](http://linuxsysadminblog.com/images/2008/12/network_traffic_improvement.png)

diff --git a/source/_posts/2008-12-16-howto-change-the-timezone-on-rhelcentos.markdown b/source/_posts/2008-12-16-howto-change-the-timezone-on-rhelcentos.markdown
index a5c821c..22927a1 100644
--- a/source/_posts/2008-12-16-howto-change-the-timezone-on-rhelcentos.markdown
+++ b/source/_posts/2008-12-16-howto-change-the-timezone-on-rhelcentos.markdown
@@ -1,5 +1,6 @@
---
published: true
+comments: true
author: marius-ducea
date: '2008-12-16 05:05:17'
layout: post