DISCLAIMER: I'm not a "sysadmin" by any stretch of the imagination, but I've spent my fair share of time dealing with DNS, networks, server configuration, automation, HTTP-related stuff and so on, so I know my way around things like this. I'm sure some of it would have been a lot easier for someone else, but hey – it works.
ASSUMPTIONS: I’m going to assume you know about Amazon EC2 and S3 and some of the terminology involved therein, so if not, please go read up a little on that first.
So — when I was looking for a new home for FeedBlendr, I wanted something that would be extremely scalable, because I have high hopes (obviously), and it’s part of a much bigger puzzle for me, so the scalability side of things was important. In this sort of application, the biggest issue with scaling and load has been processor time and memory, since my system spends a lot of time downloading feeds from the ‘net and then holding them in memory while it’s blending them and re-ordering them and whatnot. My main issue is not database “bandwidth”, it’s “web processing power”. With that in mind, here’s what I’ve done.
- Right now, my database remains on DreamHost (outside of Amazon entirely)
- I have a relatively dynamic setup where I can launch a new EC2 instance based on my own customized AMI. When it boots, it grabs a copy of the latest "distribution" of my web app from S3, installs it, and then sends me an email (and an SMS) to let me know it's ready to roll and can be added to DNS if I want it to be part of my main cluster (there's a rough sketch of this startup process just after this list).
- I have 2 instances (servers) running in Amazon, configured using round-robin DNS to handle/balance the requests involved in powering FeedBlendr.
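To make that launch process a little more concrete, here's a rough sketch of the kind of startup script an instance runs – not my exact script, and the bucket name, package name and email address are just placeholders:

    #!/bin/bash
    # Sketch of an instance startup script; bucket/package/address are placeholders.
    BUCKET="my-deploy-bucket"
    PACKAGE="feedblendr-dist.tar.gz"
    NOTIFY="me@example.com"

    # Grab the latest distribution package from S3
    # (a public object here; a private bucket would need s3curl.pl or similar)
    cd /tmp
    curl -s -O "https://s3.amazonaws.com/$BUCKET/$PACKAGE"

    # Extract it and run the deployment script contained inside it
    tar xzf "$PACKAGE"
    cd feedblendr-dist
    ./deploy.sh

    # Look up this instance's public IP from the EC2 metadata service and send a notification
    IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
    echo "New instance ready at $IP - add it to DNS to put it in the cluster" \
      | mail -s "EC2 instance ready: $IP" "$NOTIFY"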
Setting Up an AMI
My first task (once I'd gotten myself set up with the EC2 tools) was to actually set up my own Amazon Machine Image (AMI). This is your "server", if you like – operating system and all. I worked from a Fedora Core 6 base image that someone had shared on the Amazon Developer Forums, so that was a good starting point for me. Basically, this is what I did:
- Got the AMI running, then logged into it (logging in with shared certificates/keys rather than passwords was new for me, but I got it sorted out)
- Did a bunch of yum update and yum install runs to install the things I needed (Apache, PHP etc – there's a sketch of this just after the list)
- Configured everything to work how I wanted. Remember to use name-based VirtualHost configuration on your images, because you don't know what IP they'll have when they come online (unless you want to factor that into your launch procedure somehow)
- While doing all of this, I kept track of the process I needed to go through to actually install the codebase that runs FeedBlendr (and some other things), which permissions needed to be changed, etc.
- Built out the deployment process/scripts and did some iterative testing to make sure it worked etc.
- Deployed 2 instances, added their IPs to my DNS service and switched everything over to being hosted by Amazon — EASY! Right?
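For reference, the base setup on the image boils down to something like this (stock Fedora package names – your list will differ):

    # Bring the base image up to date and install the web stack
    yum -y update
    yum -y install httpd php php-mysql

    # Make sure Apache starts now and on every boot
    chkconfig httpd on
    service httpd start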
Custom Deployment Process
So I think what makes my process a little interesting is my deployment process. Rather than install and configure my complete application on my server, then take a snapshot of that and bundle it up as my AMI, I opted for a process where my AMI doesn’t actually contain my code at all. What happens is that the AMI is configured as a relatively barebones Apache+PHP system, capable of serving anything. When it launches, it calls a few very simple commands, which grab a package from S3, then extract it and execute a script contained within it.
That script does all the magic. It handles relocating files to where they need to be, fixing permissions, creating symbolic links and so on. It does everything it needs to do to deploy my entire system (including 2 websites, the custom feed-handling core, a WordPress installation etc) in about 15 lines of bash script. A rough sketch of the idea is below.
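To give you a feel for it, the script is conceptually along these lines – the directory names and paths here are made up for illustration, not my real layout:

    #!/bin/bash
    # deploy.sh - sketch of the script shipped inside the distribution package.
    # All paths and names below are illustrative placeholders.
    set -e

    DIST_DIR=$(pwd)
    WEB_ROOT=/var/www

    # Move the web apps into place
    cp -R "$DIST_DIR/feedblendr" "$WEB_ROOT/"
    cp -R "$DIST_DIR/blog" "$WEB_ROOT/"

    # Fix ownership/permissions so Apache can write where it needs to
    chown -R apache:apache "$WEB_ROOT/feedblendr"
    chmod -R 775 "$WEB_ROOT/feedblendr/cache"

    # Create relative symlinks so shared pieces resolve wherever the tree lives
    cd "$WEB_ROOT/blog"
    ln -sf ../feedblendr/core core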
Why go to all the trouble of having this de-coupled AMI/deployment process? Simple: I work on the code for FeedBlendr a lot, and it's undergoing pretty constant revisions. I realized very quickly that making AMIs, uploading them into S3, registering them, etc… sucks. It's slow, it's tedious, and I wanted to do it as little as possible. This way, I don't have to make a new AMI every time I change my code; I just make a new distro package, throw it in S3, then launch a new instance and it's got it all running. I can also "re-launch" an instance that's already running (to save me dealing with DNS) by running a simple script which goes through the same fetch-and-deploy process as when my instances first start up, pulling down the new code and overwriting whatever is currently running.
Dealing with DNS
DNS will come up pretty quickly as an issue if you’re working with EC2 – obviously. You launch an instance, you get a new IP. Close it down, launch a new one. New IP. Problem.
The short answer is just to get yourself a custom DNS account somewhere. I'm using DynDNS, but they may not be the best. One specific problem I have with them is that there's no programmatic way to update a hostname that's configured with round-robin load balancing. I have 2 IPs allocated to the same domain name (feedblendr.com), so I can't use any of their clients to add/remove/change IPs for that host. That's something I specifically want to be able to do (have instances automatically jump into my round-robin and start balancing load), so if you know of a provider that supports this, let me know! ZoneEdit might be another option, and I know there are all sorts of other providers out there as well.
Set up your hostname in your new DNS service and configure it with a low TTL (Time To Live), since you want to be able to change the authoritative IPs for your host quickly in case an instance goes away. I have mine set to 300 seconds, but you might even want to go shorter (if your provider will allow it). On DynDNS, the Custom DNS service that enables all of this costs $25 per year, per host. Not too bad.
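Once the records are in place, it's easy to sanity-check the TTL and the round-robin A records from the command line with dig:

    # Show the A records for the host along with their TTL (the second column)
    dig feedblendr.com A +noall +answer

    # Or just list the IPs currently in the round-robin
    dig +short feedblendr.com A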
Now you're in a position to add and remove IPs for that host, balance load, and shift requests to a new instance as required. Always remember – instances in EC2 are transient! They may disappear and never come back.
Deployment Distributions
If you’re wondering how I build my packages for distribution purposes – here’s the deal:
- I use subversion as my code repository/version control system, so everything is in there and up to date at all times (hopefully :-p)
- I love make – it's capable of some really cool things, so I use it here and there to automate some project-management-related tasks
- I already used make to do local testing (handling exporting from SVN and then setting up permissions/links etc within the project), so it made sense to extend that process to my deployment packages.
- I can check everything into SVN, then go to my "extras" directory and type make ec2-distro, and that's it
- It exports all the sub-projects that make up everything that will be deployed on the server, sets up permissions within the scope of the project, creates some internal symlinks (relative file-paths, of course) and then tars it all up. From there, it uses s3curl.pl to send a copy up into S3 in a pre-defined location, and then it's done (see the sketch just after this list).
- That package is what gets downloaded to instances and deployed when they launch.
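If it helps, the guts of that ec2-distro target amount to something like the following shell steps – the repository URLs, bucket and file names here are placeholders, and I'm assuming s3curl.pl's usual --id/--put options:

    #!/bin/bash
    # Rough equivalent of what "make ec2-distro" does; URLs and names are placeholders.
    set -e

    BUILD=/tmp/feedblendr-dist
    BUCKET="my-deploy-bucket"
    PACKAGE="feedblendr-dist.tar.gz"

    # Export a clean copy of each sub-project from Subversion (no .svn directories)
    rm -rf "$BUILD" && mkdir -p "$BUILD"
    svn export http://svn.example.com/feedblendr/trunk "$BUILD/feedblendr"
    svn export http://svn.example.com/blog/trunk "$BUILD/blog"

    # Set project-internal permissions and relative symlinks
    chmod -R 775 "$BUILD/feedblendr/cache"
    ln -sf ../feedblendr/core "$BUILD/blog/core"

    # Tar it all up and push the package to a pre-defined location in S3
    tar czf "/tmp/$PACKAGE" -C /tmp feedblendr-dist
    s3curl.pl --id=mykeys --put="/tmp/$PACKAGE" -- "https://s3.amazonaws.com/$BUCKET/$PACKAGE"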
Challenges I Faced/Face
It's not all smooth sailing. I have had, and continue to have, some things I'm not entirely happy with in this process and in my experience with EC2. Here are a couple of specific ones, in no particular order:
- DNS: I'm not entirely happy with my DNS setup. Right now, if an instance disappears, recovery relies on me noticing and removing its IP from my DNS entries, and then there's some amount of time before that change propagates. I plan on trying to improve this by figuring out some sort of heartbeat-based monitoring of my instances, possibly using Nagios or something like that. I wanted to use something like WeoCEO, but I didn't hear back from the guys there in the timeframe I was working under, so I had to go it alone.
- Shared Filesystems: I had hoped to make use of the promising S3DFS system, which is supposed to give you a fast-access (through lots of internal caching) shared filesystem, backed onto S3 but accessible as a normal, local filesystem (via FUSE). Here's the kicker: it's meant to let multiple instances access the same filesystem simultaneously. I had hoped to use it to have my instances share a cache repository between them to improve the performance of my caching backend, so that both instances wouldn't end up downloading the same content right after each other because of the round-robin. To make a long story short, there were performance problems that meant it wasn't an option. BUT! I've been in touch with the developers, and they're working on a beta right now which should address my problems, so I'm hoping to try it out again and use it in the future.
- Web Stats: I used to use Analog/Webalizer-type tools to look at my server logs, but with multiple instances serving content that starts to get difficult, unless you're willing to log to a central server or write something custom to merge the logs. Rather than do that, I installed Google Analytics on my site, so I now get centralized stats from that, but it doesn't cover my non-JavaScript content (e.g. any feed accesses). Luckily I log those details myself, but now it's more important that I build some good tools for peering into that data.
- Hosting a Database: I've read all sorts of interesting posts about hosting databases within EC2, but something about it just makes me uneasy 🙂 Call me old-fashioned, but I'd like to know that my database is hosted on a machine that's not going to disappear if it crashes. I suppose it's just one more level of true redundancy to deal with, right? I haven't figured out master-master replication (which seems to be an obvious requirement for that) yet, so I'm not 100% happy with my database situation just now.
- Keeping My AMI Generic: Because I wanted to modify my AMI as little as possible over time, I actually ended up moving my PHP and Apache configuration files into my distribution package as well. I have a directory called "extras" which contains things generally related to deployment, including a vhosts.conf file and a php.ini. During deployment, these files are copied into place on the server and then Apache is automatically restarted (there's a rough sketch of that step just after this list). This lets me customize my Apache configuration (including RewriteRules etc) without having to modify the AMI.
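That config step is just a couple of extra lines at the tail end of the deployment script, roughly like this (the destination paths assume a stock Fedora layout – adjust for your own):

    # Drop the shipped Apache/PHP configs into place, then restart Apache
    # so they take effect (paths here assume a stock Fedora layout)
    cp extras/vhosts.conf /etc/httpd/conf.d/vhosts.conf
    cp extras/php.ini /etc/php.ini

    # A graceful restart picks up the new config without dropping in-flight requests
    apachectl graceful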
Handy Tools for EC2/S3
Here are a couple of tools I found useful in this process, which might help you out as well:
- s3curl.pl — a really handy little Perl script that you can use as a cURL wrapper for doing command-line requests against S3. Great because it handles the complex authentication stuff; you just give it your access keys and it takes care of things so that you can use it basically the same way you would use cURL on the command line (there's an example just after this list).
- S3 Browser — a very cool, lightweight and simple tool for checking out what you have in S3 buckets (and uploading/downloading/deleting things)
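For example, a typical s3curl.pl upload and download look roughly like this, assuming its usual --id/--put options and a credentials entry named "mykeys" in its ~/.s3curl config file:

    # Upload (PUT) a package to a bucket using the "mykeys" credentials
    s3curl.pl --id=mykeys --put=feedblendr-dist.tar.gz -- \
      https://s3.amazonaws.com/my-deploy-bucket/feedblendr-dist.tar.gz

    # Fetch the same object (a plain GET; anything after -- is passed through to curl)
    s3curl.pl --id=mykeys -- \
      https://s3.amazonaws.com/my-deploy-bucket/feedblendr-dist.tar.gz -o feedblendr-dist.tar.gz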
That will do for now – please ask in the comments if you have any questions and I’ll answer them here and/or revise this post to reflect new information.
Cheers — Beau