Category Archives: linux

Let’s face it, “500 internal server error” from apache is about the most annoying, unspecific thing you run into on a linux box. It could be anything (and usually is) and the logging associated with it is next to useless. The only thing worse is Perl’s unspecified error logging. :p

So…how do you find out what is wrong? Simple.

Run the following command from another server/location:

$ telnet your.server.com 80
Connected to your.server.com.
Escape character is '^]'.

It will give you a blank prompt after that escape character line. Type the following:

GET / HTTP/1.1

hit return ONCE.

Now, go to the server where the website is hosted and grep the netstat output for the IP of the machine you ran telnet from:

netstat -natp | grep your.client.ip

You should get back something like:

tcp 0 0 ::ffff:blah.blah.blah:80 ESTABLISHED 25051/httpd

That bit just before the httpd is what you want. That is the process id of the apache process you are connected to. Now run:

strace -s 6666 -p 25051

Where the 25051 is the number that was actually in your output. In case you are wondering, the -s sets the number of characters each line can be. If you don’t set this, you end up with truncated lines that make it nearly impossible to tell what is really going on. So I just do the -s and a large number to be safe.

Now go back to your other window and, just under your GET command, type:

Host: your.server.com

Hit enter twice and then go watch the output in strace.

Now, I know what you are thinking. I thought the exact same thing the first time I ever tried to use strace. OMG WHAT THE HELL DOES ALL OF THAT MEAN??? Strace output can be VERY wall of text. Just take a deep breath and then actually look at what it is telling you. Strace shows you every call the process makes. Every file it opens and reads. Everything it did is recorded right there, so if you start at where the process dies and move backwards, you can generally put it all together. It just requires taking the time to read each line and try to understand what it is telling you.
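If the live output scrolls by too fast to read, you can also write the trace to a file and dig through it afterwards. A sketch, reusing the placeholder PID 25051 from above:

```shell
# Write the trace to a file instead of the terminal (-o), still with
# long lines (-s); 25051 is the placeholder PID from the netstat output
strace -s 6666 -o /tmp/httpd.trace -p 25051

# Failed file opens and permission problems are behind most 500 errors,
# so search the saved trace for the common errno names:
grep -E 'ENOENT|EACCES|EPERM' /tmp/httpd.trace
```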

Trust me, once you get the hang of it? This will become the most valuable tool you have for troubleshooting “what the hell is apache doing???” issues and other obscure problems where a process isn’t doing what you think it should be, but you don’t get any relevant errors to point you in the right direction. Strace is easily one of my favourite tools. Live it, love it, use it.

If you’ve ever had a system drift pretty wide on the time, you are aware that ntp can’t correct the time after a certain amount of drift (by default, ntpd simply gives up once the offset gets too large). I’ve found this to be a particular problem on some systems after a reboot, where the time never gets manually set, so it stays off kilter and just keeps drifting more and more.

On “Redhat” flavor boxes, you can edit

/etc/sysconfig/ntpd
and change

OPTIONS="-u ntp:ntp -p /var/run/"

to be

OPTIONS="-x -u ntp:ntp -p /var/run/"

That -x is a very tiny change, but it has a huge effect. When you stop/start ntp (or it starts on a reboot of your system), it does the equivalent of

ntpdate -u time.server.of.choice

i.e., forcing the manual update against your chosen time server. No more manually fixing drift that has gotten too wide. From a reboot, the time is set to a value that ntp can then automatically keep updated moving forward.

Try running

service ntpd restart

and you’ll see it do the manual time update.

# service ntpd restart
Shutting down ntpd: [ OK ]
ntpd: Synchronizing with time server: [ OK ]
Syncing hardware clock to system time [ OK ]
Starting ntpd: [ OK ]

Category: advice, linux, ntp

To add a new tmpfs mount in a specific location:

mkdir -p /path/to/directory/
mount -t tmpfs -o size=512M,mode=0744 tmpfs /path/to/directory

and then add the following to /etc/fstab:

tmpfs /path/to/directory tmpfs size=512M,mode=0744 0 0

The potential returns this can give for something like your session files (if you’re still using files and not, say, memcache) or cache files are pretty significant. As always, test in your own environment before rolling out into production. And in case you weren’t aware, a reboot will wipe the contents of that folder…after all, it is a RAM disk, not actual file storage.
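If you want to see the RAM-backed behaviour without needing root, most distros already mount a tmpfs at /dev/shm (path assumed here), so you can poke at one directly:

```shell
# /dev/shm is usually already a tmpfs; writing there is writing to RAM
echo "hello from ram" > /dev/shm/tmpfs-demo.txt
cat /dev/shm/tmpfs-demo.txt

# Confirm the filesystem type really is tmpfs (-t filters by fs type)
df -t tmpfs /dev/shm

rm /dev/shm/tmpfs-demo.txt
```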

To grow the size of an existing tmpfs:

mount -o remount,size=2G /path/to/directory

And don’t forget to modify /etc/fstab if you want it to be permanent!
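For example, if you grew it to 2G, bump the size in the fstab entry to match (keeping whatever mode option you chose when you first mounted it):

```
tmpfs /path/to/directory tmpfs size=2G,mode=0744 0 0
```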

Category: file systems, linux

Update: 09/03/2012

Lsync has been added to the EPEL repository, so those of you using Redhat flavors can do:

yum install epel-release
yum install lsyncd

to get lsync installed even more easily than with the guide below.

I’ve mentioned Lsync here before, but I figured I could do a bit more documentation on this service.

Lsync is a very handy software package for doing near-instantaneous updates of files/directories from a central location out to many external locations. Think star topology. It works particularly well in the cloud, where you tend to find large numbers of web servers that need the same content across them at any given moment. It will NOT work if you cannot isolate your application to doing uploads to a single server in such a load balanced configuration. You were warned. That said, as long as all your file uploads are going to a single server, lsync can keep your other web servers current with, at most, a 20 second delay.

All that being said…here is how you install it.


Category: linux, replication

This is a short and easy way to track down the source of a compromised PHP script that is spamming out of your system.

Change the /etc/php.ini sendmail path to:

sendmail_path = /usr/local/bin/sendmail-php -t -i

and then create that file with the following contents:


#!/bin/sh

logger -p mail.info sendmail-php: site=${HTTP_HOST}, client=${REMOTE_ADDR}, script=${SCRIPT_NAME}, filename=${SCRIPT_FILENAME}, docroot=${DOCUMENT_ROOT}, pwd=${PWD}, uid=${UID}, user=$(whoami)

/usr/sbin/sendmail -t -i $*

chmod +x that file and restart apache.

Now when a php script is called and sends a piece of mail, it is logged via syslog (on Redhat flavors, check /var/log/maillog or /var/log/messages) and you can then trace back what is legitimate and what isn’t. Spammers tend to be very…brute force…so it is usually pretty apparent, fairly quickly, which script is being abused.

Category: linux, php, spam

Ok, folks…

I realize that you are operating a business. I realize that keeping costs low is the name of the game. Honest, I do.

That being said? You need to understand something. Black Friday/Cyber Monday is not a surprise holiday that was announced 2 days ago. You’ve known this day was coming for months. You know it and I know it. So when you call me 24 hours before Black Friday, desperate to scale out your solution NOW NOW NOW because your site is going to tank otherwise?

Well, you have no one to blame but yourself. Sure, I’m going to do my best to help you. That’s what I do, but you need to understand how many clients I have. How many clients I have that think just like you do, all of whom are calling me 24 hours before Black Friday to make everything better.

In your own best business interests, plan to scale out your configuration 2-3 weeks before Black Friday and take it back down 2 weeks or so after NYE. Just put that in your budget every year. Understand that it is the name of the game. Because when you and thousands of others push administrators like me to rush your scale-out inside a 24 hour window with no real opportunity to test?

Well, you’re just inviting trouble when the traffic begins to flood in.

Scale early, test often.

Just some friendly advice from a guy who has watched companies like yours make the same mistake year after year after year.

Category: advice, linux

Using lsync to keep your code current between machines, but it has suddenly stopped and you can’t figure out why?

The problem is most likely the fact that inotify has a limit set on the number of files it can watch:

cat /proc/sys/fs/inotify/max_user_watches

If the number of files lsync needs to watch exceeds that number, lsync stops updating. Increase this value and restart lsync.
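A sketch of raising the limit (164480 is a hypothetical value, so size it to your own file count):

```shell
# Check the current ceiling (8192 is a common default)
cat /proc/sys/fs/inotify/max_user_watches

# Raise it for the running kernel (needs root)
sysctl fs.inotify.max_user_watches=164480

# Persist the new value across reboots
echo "fs.inotify.max_user_watches=164480" >> /etc/sysctl.conf
```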

Category: linux

It is hard to believe that it was 10 years ago that I started with my current employer. 10 years in the same job is a lifetime in the current “stay in a job for a year or two and move on” culture, particularly in the tech market. What can I say? I love my job. I love the “fireman” aspect of it; the sense of being under the gun that I experience on a nightly basis:

“Hey Alex, there’s 20 seconds on the clock and you’ve got a down server running some application you’ve never even heard of….what are you gonna do?”

*cracks knuckles*

One of the things that makes me love my job so much is the huge variety of things I have to deal with. No day is ever the same. Ever. Personalities, issues, new trends and a whole gamut of other things: they all change from day to day. I thought that an interesting way to celebrate my 10-year anniversary would be to create a list of things I’ve “learned” in that time that don’t change.

To start, I am going to look at the things that you, the person dealing with your systems administrator, should know and/or keep in mind:


Category: linux

Ran into an interesting thing I hadn’t seen about php before.

Client wanted to install the pecl module for MongoDB. It should have been a pretty straightforward process:

pecl install mongo

Nope! It immediately returned the error:

Fatal error: Allowed memory size of 8388608 bytes exhausted (tried to allocate 23040 bytes)

Strange, given that the php memory_limit value was considerably higher than that.

As it turns out, pecl doesn’t use the default php.ini file. Who knew? Thanks to another blogger, I was able to quickly resolve the problem by running:

peardev install pecl/mongo

Figured I’d help up the google hits on this one for the next person.

Category: linux

Here’s how you pass all ssh connections through a single bastion host:

  • First, set up passwordless ssh between you and the bastion. There’s a billion and one guides on google for how to do this, so I’m going to skip this step and assume you know this one.
  • Then you will need to edit the file /home/username/.ssh/config and inside of that file, put the following details:

    Host nameorip.of.bastion.server
    StrictHostKeyChecking no
    User username
    ProxyCommand none
    ControlMaster auto
    ControlPath ~/.ssh/master-%r@%h:%p
    Host *
    ProxyCommand ssh -qax nameorip.of.bastion.server 'nc -w 600 %h %p'

  • Once that is done, you save the file.

Ta dah! All of your ssh connections will now be routed through that host.
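With the config in place, connections just work with no extra flags. The hostname below is a hypothetical internal machine sitting behind the bastion:

```shell
# This connection is now transparently routed through the bastion
ssh username@internal.example.com

# Thanks to the ControlMaster/ControlPath lines, the bastion link is
# opened once and shared, so subsequent logins start noticeably faster
```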

Now, you may want to flip back and forth (like I do). The way I did this was to create two files:

/home/username/.ssh/config.on
/home/username/.ssh/config.off

Inside of config.on is what I detailed above. Inside of config.off is what the config file was set to before I set the above up (ie, not passing through the bastion). I then modified /home/username/.bash_profile and set the following two aliases:

alias baston='cp -f /home/username/.ssh/config.on /home/username/.ssh/config'
alias bastoff='cp -f /home/username/.ssh/config.off /home/username/.ssh/config'

Then reload your profile:

source /home/username/.bash_profile

Now when I want to route my connections through the bastion, I type “baston” and then move ahead. To disable routing it through the bastion, I type “bastoff”.

Simple, but like I said, I had never really learned how to do this previously due to a lack of real need on my part until recently. As usual, it was easy peasy once I actually looked into how it is done.

Category: linux

