Preparing your application for serving static content efficiently
(Reposted from the Empower Campaigns blog)
Most web applications have static content such as CSS, JS, images, etc. To save on bandwidth costs and improve load times for your users, it's a good idea to tell the client to cache these items for a long period of time. But even though your static content changes infrequently, when it does, the cached content needs to be invalidated. In addition, high traffic sites often want to go a step further and take advantage of a CDN to offload static content delivery. Below I'll share the method we used here at Empower Campaigns to accomplish both of these things.
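A common way to handle both cache invalidation and CDN-friendliness (this is a sketch with illustrative names, not necessarily the exact method from the full post) is to embed each file's modification time in its URL. Clients and CDNs can then cache forever, because a changed file automatically gets a new URL:

```php
<?php
// Hypothetical helper: append the file's mtime as a cache-busting query
// string, so far-future Expires headers can be used safely.
function asset_url($path, $docroot)
{
    $file = rtrim($docroot, '/') . '/' . ltrim($path, '/');
    $version = is_file($file) ? filemtime($file) : 0;
    return '/' . ltrim($path, '/') . '?v=' . $version;
}

// Usage in a template:
// echo '<link rel="stylesheet" href="' . asset_url('css/site.css', $docroot) . '">';
```

Pointing the URLs at a CDN hostname instead of `/` works the same way, since the version survives in the query string (some CDNs need to be configured to treat the query string as part of the cache key).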
Organizing PHP Batch Jobs
This week at work I got the chance to address the growing number of batch-oriented CLI scripts for our main web application. While they weren't quite unmanageable yet, they were heading in that direction. There was too much common code, especially around bootstrapping the application and parsing options. Also, the location of the scripts didn't really make sense: ./bin/bar.php, ./cron/foo.php, etc. So I decided to carve out some time and clean it up.
The goals were pretty straightforward:
- Everything must use the application's model layer. This is mostly so that the built in caching will be consistent, but also to enforce that all data access goes through the same code.
- Centralize all CLI option parsing, application bootstrapping, error handling, and multi-tenant logic (this is a multi-tenant SaaS application).
- Keep the jobs themselves very simple.
With the above in mind, I ended up splitting things up into 3 parts:
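As an illustration of the goals above, a shared base class can own option parsing, error handling, and the per-tenant loop, while each job stays very simple. The class and tenant names here are hypothetical, not the post's actual code:

```php
<?php
// Hypothetical sketch of a shared job runner: one place owns option
// parsing, error handling, and the multi-tenant loop; each job only
// implements execute().
abstract class CliJob
{
    // Run the job for every tenant, or just one if --tenant=NAME is given.
    public function run(array $argv)
    {
        $only = null;
        foreach ($argv as $arg) {
            if (strpos($arg, '--tenant=') === 0) {
                $only = substr($arg, strlen('--tenant='));
            }
        }
        foreach ($this->tenants($only) as $tenant) {
            try {
                $this->execute($tenant);
            } catch (Exception $e) {
                // Centralized error handling: one tenant's failure
                // shouldn't abort the run for the rest.
                fwrite(STDERR, "[$tenant] " . $e->getMessage() . "\n");
            }
        }
    }

    // In the real application this list would come from the model layer.
    protected function tenants($only)
    {
        $all = array('tenant_a', 'tenant_b');
        return $only === null ? $all : array($only);
    }

    // The job-specific work; runs once per tenant.
    abstract protected function execute($tenant);
}
```

A concrete job then reduces to a small subclass with a single `execute()` method, and every script shares the same bootstrap and option handling.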
PHP step debugging in VIM
When debugging problems in PHP, most of the time it's easiest to just add var_dump($foo); exit; in the middle of your script and see the contents of $foo right in your browser. But if you have to do much more than that, this approach gets cumbersome pretty quickly. I've recently been using step debugging for problems that are harder to track down. It lets me examine the state of things all the way through the execution of a request, line by line, or by skipping ahead to breakpoints. This process also gives you more insight into everything else happening in a request, which can be useful when you're using frameworks or other third-party code in your application.
2010 Year in Review
Welcome to my second annual Year in Review. Lots of changes and exciting things happened this year, including a space shuttle launch, a house purchase, hearing damage, a job change, traveling to New Zealand, and shoulder surgery.
Using a dnscache proxy to get faster AppleTV movie downloads and still use OpenDNS
Since buying the new AppleTV a few months ago, I've been disappointed in how long you have to wait for your movie download to be ready for viewing. I get about 10Mbps download speeds through Comcast and use OpenDNS, so shouldn't downloads take less than 15-20 hours? Changing from HD to standard definition has helped a bit, but you still have to come back later to watch your movie.
Today I came across this story on Slashdot that identifies the problem: Akamai, the CDN that Apple uses to distribute its content, does geolocation on the DNS request to determine the IP of the server you should download your movie from. By using a third party like OpenDNS or Google for your DNS service, you'll get the IP of the server closest to that DNS server, not the closest one to you. Makes sense; this is how many CDNs work.
Since I have a home Linux server doing NAT, DHCP, and some file serving, I figured I would just proxy my DNS through dnscache to solve this problem. The idea is to send only Akamai-related DNS requests through my ISP's servers and everything else through OpenDNS. And it works! I'm watching Inception, in HD, minutes after downloading it. Here's what I did to get it working.
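In outline, the dnscache side of such a setup might look like the sketch below: djbdns's dnscache picks upstream servers per domain from files in its servers/ directory, with servers/@ as the default (with FORWARDONLY=1 set in the service's env directory, those IPs are treated as recursive forwarders rather than root servers). The ISP resolver IPs are placeholders, and a temp directory stands in for the real service root, which typically lives at /service/dnscache/root:

```shell
# Sketch only: use a temp dir instead of the real service root
# (typically /service/dnscache/root on a djbdns install).
DNSCACHE_ROOT=$(mktemp -d)
mkdir -p "$DNSCACHE_ROOT/servers"

# Default upstream for everything: the OpenDNS anycast resolvers.
printf '208.67.222.222\n208.67.220.220\n' > "$DNSCACHE_ROOT/servers/@"

# Exception: akamai.net lookups go to the ISP's resolvers so Akamai
# geolocates you, not OpenDNS (placeholder IPs -- substitute your ISP's).
printf '75.75.75.75\n75.75.76.76\n' > "$DNSCACHE_ROOT/servers/akamai.net"
```

After restarting dnscache and pointing the LAN's DHCP-issued DNS at it, Akamai hostnames resolve to nearby servers while everything else still goes through OpenDNS.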