A few weekends ago, with the threat of a hurricane looming, I did the most logical thing: brought all my sites down as I fumbled through switching around my directory structure and installing nginx.
Some background: This site (and presentation.js, valley summit, and others) are all hosted on one 256MB slice at Slicehost running Ubuntu 10.04. I followed the assorted articles to get myself started, but mostly just left default configuration in place, including the directory structure they use (which I grew to dislike). That also meant I installed apache. This was by choice, I wanted something fast & easy to set up, and that I was already vaguely familiar with. So, there we go - I had a server! I shared my success, went to bed, and the next day it had almost completely seized up.
Apache vs nginx
I never did measure exactly how much memory the average apache process took, but this article asserts that it’s about 25-50MB in size. The reason is that each process is bundled with PHP, so it doesn’t need to talk to anything external to process your PHP pages. This would be great if all the files you’re serving were PHP, but for most sites the majority of content delivered is images, stylesheets, and javascript files - and the apache process serving each of those still has PHP embedded.
Nginx, by contrast, doesn’t include PHP - in fact, to get PHP working, you need to run PHP as a separate FastCGI process and have nginx pass requests to it.
Why? Apache is a memory hog. This is well known, and discussed in an article I failed to read (configuring the apache MPM). The issue was that I was allowing more processes to be created than I had available memory for. After a lot of tweaking I ended up with (in my opinion) ridiculously low numbers. I was only allowing, I think, 5 simultaneous connections? That is, across three+ sites, two of which are fairly well-trafficked. So, needless to say (if you were here before, you know) my server was pretty freaking slow.
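For the curious, the knobs in question live in Apache’s prefork MPM configuration. This is a sketch of the sort of low-memory settings I mean - the numbers below are illustrative, not my exact values:

```apache
# Apache 2.2 prefork MPM -- illustrative low-memory values for a 256MB box
<IfModule mpm_prefork_module>
    StartServers          2
    MinSpareServers       2
    MaxSpareServers       4
    MaxClients            5    # simultaneous connections; caps total apache RAM
    MaxRequestsPerChild 500    # recycle children so leaks can't grow unbounded
</IfModule>
```

With each child weighing in at 25-50MB, MaxClients is what decides whether apache fits in your memory or eats it all.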
What now? At this point I had a decision to make: upgrade to more memory, so apache wouldn’t choke my system; move hosts & start over (this was made more appealing by the fact that I’d misunderstood what Slicehost’s move to Rackspace meant); or keep banging on it till it worked. Obviously the last option wasn’t working so well, and I really couldn’t justify spending more money on my site right now, so I opted for the middle option, minus the host-move - starting over on the same slice with different server software. I spent a little while researching server software, and decided on nginx (my other option was lighttpd, and I don’t really have a clear reason why one won over the other).
And now, the work starts.
Installing nginx was actually really simple. I stopped apache, then (remember, I’m on Ubuntu 10.04):
apt-get install nginx
and I was in business… kind of. Since I was running multiple sites, I had virtual hosts set up. The next thing I needed to do was configure that. (In reality, the next thing I did was move a half-dozen directories, create new users & groups, and set up some ACLs that are still not what I want — but for the sake of this post, I’m going to go straight to the logical next step, not the step that keeps everything not-working for 24hrs).
Re-creating your virtual hosts
In apache, you have a sites-enabled folder with files that probably look like this:
<VirtualHost *:80>
    # Admin email, Server Name (domain name) and any aliases
    ServerAdmin you@site.net
    ServerName www.site.net
    ServerAlias site.net
    # Index file and Document Root (where the public files are located)
    DirectoryIndex index.php index.html
    DocumentRoot /path/to/site/files
</VirtualHost>
What this is doing is saying that when someone accesses site.net (or www.site.net), it will display the files in /path/to/site/files. The equivalent for nginx would be:
server {
    listen 80;
    server_name site.net *.site.net;
    index index.html;
    root /path/to/site/files;

    location / {
        try_files $uri $uri/;
    }
}
The server { ... } block is basically the equivalent of the <VirtualHost *:80> ... </VirtualHost> block. You tell nginx to watch for site.net (or any subdomain of site.net) and, when it matches, to serve the files found in /path/to/site/files. The try_files line is pretty neat - if nginx can’t find the initially requested file, it tries the next entry - so in this case, if you type a directory name but forget the trailing slash, it will still find it.
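As an aside, this same try_files idea is what later makes WordPress permalinks work: you add one more fallback to the front controller. A minimal sketch - the /index.php?$args fallback is the standard WordPress pattern, not something in the config above:

```nginx
location / {
    # Try the exact file, then a directory, then hand the
    # request (with its query string) to WordPress's index.php.
    try_files $uri $uri/ /index.php?$args;
}
```

Without that last entry, any pretty-permalink URL would just 404, since no real file matches it on disk.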
The location of these configuration files differs per system; in my case they are in /etc/nginx. nginx.conf is where the main configuration is stored - I ended up leaving it as-is. The last line is important, though:
include /etc/nginx/conf.d/*;
This includes all files in the conf.d directory, so you can use it like sites-enabled in apache. I created new files for each site, and will go over them more in detail after the next step.
But I need PHP. So there’s still another step to come. In my aside on Apache vs Nginx, I mentioned that nginx doesn’t come embedded with PHP like Apache - so we need to install FastCGI. I followed the relevant bits of this post, which led me through installing FastCGI & creating a startup script, which spawns a few FastCGI processes. Since that post doesn’t exist anymore, you should be able to follow this & grab the content for php-fastcgi from this gist.
apt-get install php5-cgi
nano /etc/init.d/php-fastcgi  # grab the content for this file from the other post
chmod +x /etc/init.d/php-fastcgi
I did change a few things from their script, as I was having some issues and had read that they could be solved by using a UNIX socket rather than a TCP connection, so the top 5 lines of my script look like:
#!/bin/bash
BIND=/tmp/php.socket
USER=www-data
PHP_FCGI_CHILDREN=5
PHP_FCGI_MAX_REQUESTS=50
I also changed the children & requests, because I wanted to reduce memory use as much as possible. And I’ve been running this for about 3 weeks with no issues.
So, once you’ve grabbed that script, you need to run it to start the processes.
/etc/init.d/php-fastcgi start
To make this script run on startup, use the following command.
update-rc.d php-fastcgi defaults
Now I’ve got PHP, and I’ve got nginx, but they’re not working together. So I’ve got to go back to the nginx configuration and tell it to send anything PHP to FastCGI. I went through each file in /etc/nginx/conf.d/ and replaced all my previous configurations with includes of restrictions.conf and wordpress.conf, grabbed from this codex post. Now the file for redradar looks like:
server {
    server_name redradar.net me.redradar.net;
    root /path/to/files;
    include /etc/nginx/global/restrictions.conf;
    include /etc/nginx/global/wordpress.conf;
}
Finally, in wordpress.conf is the directive that sends PHP to FastCGI. I used the example config almost exactly, only removing some of the extra comments & caching rules (as I don’t currently have caching enabled - that’s a project for another weekend).
And I’m done. I’d be lying, though, if I didn’t mention that all this took place over a few days, with my sites all being intermittently down while I ran through tutorial after tutorial, googling the various errors I was getting when images failed to load or PHP wouldn’t work… Apparently the WordPress Codex was the place to look all along, but I didn’t understand where everything went until I’d been mucking around for a while.
I hope this helps anyone else who decides to make this switch - it’s really not that hard, if you actually do your research first.
7/1/12 I’ve since upgraded to a 512MB slice, and turned on caching with APC & Batcache, which was stupidly simple (since APC was already installed).