Archive for the ‘Web’ Category

What's going on with Squid-2 and Squid-3?

January 10, 2008

A few people have asked me what the deal is with Squid-2 and Squid-3.

“Why are you developing on Squid-2 when Squid-3 is now out?”

“Should I upgrade to Squid-3 now that it's released?”

I’m focusing on Squid-2 for a few reasons, namely:

  • It's what people running high-traffic sites are currently running, and Squid-3 doesn't work at all for them;
  • I was fed up waiting for Squid-3 to be released and to become mature enough for users to migrate to before I started my performance work. I gave up about 12 months ago and began planning out the work that's currently going on; and
  • I'm personally much more familiar with the Squid-2 codebase than the Squid-3 codebase.

So what exactly am I doing to Squid-2? Well, I’m doing all the things to Squid-2 which I personally believe we should’ve done in the C++ Squid-3 branch before all the “new stuff” was added. You can find it all at . A summary of what I’m doing in this first round:

  • I’m taking a very sharp scalpel to the codebase and removing all of the extra data copies and buffering which is going on;
  • I’m reworking the buffer management so arbitrary sized data buffers can be used, rather than fixed 4k buffers for network/disk traffic;
  • I’m reworking the String interface to use reference counting and to reference underlying buffers, saving on memcpy() and malloc() calls, cutting down on the amount of transient memory used to handle requests, and dropping CPU and memory bus utilisation quite dramatically (there’s a sketch of the idea just after this list);
  • I’m reworking the dataflow between server->store and store->client to use the above reference counted buffers, so data isn’t memcpy()’ed between layers, again dropping CPU and memory bus utilisation;
  • And I’m going to break out as much of the code as possible into external libraries with well-understood dependencies, in preparation for documentation, unit testing and further profiling.
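To make the buffer and String rework concrete, here's a minimal sketch of what a reference-counted buffer might look like in C. The names and layout are illustrative only – this is not the actual Squid-2 API – but the principle is the same: consumers share one underlying allocation and bump a count instead of copying.

#include <stdlib.h>
#include <string.h>

/* Illustrative types only; not the real Squid-2 structures. */
typedef struct {
    char  *data;     /* one underlying allocation, shared by all users */
    size_t len;
    int    refcount;
} buf_t;

static buf_t *buf_create(const char *src, size_t len) {
    buf_t *b = malloc(sizeof(*b));
    b->data = malloc(len);
    memcpy(b->data, src, len);
    b->len = len;
    b->refcount = 1;
    return b;
}

/* "Copying" the buffer is now just taking another reference:
 * no malloc(), no memcpy(). */
static buf_t *buf_ref(buf_t *b) {
    b->refcount++;
    return b;
}

static void buf_unref(buf_t *b) {
    if (--b->refcount == 0) {
        free(b->data);
        free(b);
    }
}

Handing data from server->store and store->client then becomes a matter of passing references around rather than copying payloads between layers, which is where the CPU and memory bus savings come from.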

My aim is to fix whatever bugs show up in Squid-2.7 and then in Squid-2.HEAD (which already includes some of the above). I’ll then start bringing my changes across once they’ve been tested and found stable. The plan is to have the bulk of the above done within the next month or so, get it into Squid-2.HEAD, and concentrate on making it stable before I continue tidying up the dataflow and restructuring the ugly bits of code.

What's this mean for Squid-3? The Squid-3 guys are doing some great work with things such as ICAP and IPv6, and I hope they'll gain more experience with their codebase over the next 12 months or so. I'm certainly not bringing ICAP support into Squid-2 until I've reworked the dataflow and tidied up the code enough for ICAP to sit comfortably in the data pipeline, rather than being bolted onto the side and hooking into strange places where it shouldn't. (I may bring IPv6 into Squid-2 soon though!)

Hopefully my work and theirs will culminate in the development of the next major Squid version over the next 12 to 24 months. There’s a long way to go, though, and my main aim here is to get faster, better and shinier code out to the majority of Squid users now so they can benefit from the development, rather than repeating the four-odd-year gap between Squid-2.5 and Squid-2.6. Users hated that.

So what's it mean for you?

  • If you want supported ICAP services, try out Squid-3.
  • Squid-2.X will continue being developed over the next 12 months as time permits, so don’t feel like you have to move to Squid-3.
  • If you feel adventurous, try out Squid-2.7. Initial reports are that it’s stable and slightly less CPU intensive.
  • Squid-2.7 is the first version to include changes that allow caching of YouTube and Microsoft Updates content. It doesn’t do it out of the box, but the support is there, and I’ll be publishing test rules soon to let people start caching this stuff.
  • If you feel really adventurous then try out Squid-2.HEAD and report back if you have any issues. It should be even less CPU intensive, but only under certain workloads.

How cachable is Google (part 2): YouTube content

November 17, 2007

YouTube is one of the banes of the small-upstream network administrator. The flash files are megabytes in size, and a popular video can be downloaded by half the people in an office or student residential college in one afternoon.

It is, at the present time, very difficult to cache. Let’s see why.

I’ve seen two different methods employed to serve the actual flash media files. The first method involves fetching from servers; the second involves fetching from IP addresses in Google IP space.

The first method is very simple: the URL form is:

XXX is the POP name; YYY is, I’m guessing, either a server or a cluster name.

This is pretty standard stuff – and If-Modified-Since requests seem to be handled badly too! The query string “?” in the URL makes it uncachable to Squid by default, even though it’s a flash video that’s probably not going to change very often.

The second method involves a bit more work. First the video is requested from a Google server. This server then issues an HTTP 302 reply pointing the client at a changing IP address. The request looks somewhat like this:

Again, the “?” query string. Again, the origin server, but this time encoded in the URL. Finally, not only are If-Modified-Since requests not handled correctly, the replies include ETags, and requests with an If-None-Match revalidation still return the whole object! Aiee!
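To illustrate (the URL path, address and ETag value below are made up for the example), a revalidation against one of these servers looks something like:

GET /get_video?video_id=EXAMPLE HTTP/1.1
Host: 203.0.113.1
If-None-Match: "9f42beef"

HTTP/1.1 200 OK
ETag: "9f42beef"
Content-Length: 8388608

…followed by the entire flash object, where a tiny bodyless “304 Not Modified” is what a cache would hope for.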

So how to cache it?

Firstly, you have to cache replies to URLs with a “?” in them. It would be nice if they handled If-Modified-Since and If-None-Match requests correctly when the object hasn’t been modified – revalidation is cheap, and it’s basically free bandwidth. They could set revalidation to happen, say, even every 30 minutes – they’re already handling all the full requests for all the content, so the request rate would stay the same but the bandwidth requirements should drop.
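For comparison, here’s roughly what sane revalidation would look like (header values illustrative). The origin serves the object once with expiry information, then answers conditionals with a tiny 304:

HTTP/1.1 200 OK
Content-Type: video/x-flv
Cache-Control: max-age=1800
Last-Modified: Sat, 17 Nov 2007 00:00:00 GMT

(30 minutes later, the cache revalidates)

GET /get_video?video_id=EXAMPLE HTTP/1.1
If-Modified-Since: Sat, 17 Nov 2007 00:00:00 GMT

HTTP/1.1 304 Not Modified

Every revalidation is a request they can count, but the multi-megabyte body stays out of the reply.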

The URLs also have to be rewritten, much as is done to cache Google Maps content. The “canonical” URL form will then reference a “video” regardless of which server the client asks for it.

Now, how do you do this in Squid? I’ve got some beta code to do this, and it’s in the Squid-2 development tree. Take a look here for some background information. It works around the multiple-URLs-referencing-the-same-file problem, but it unfortunately won’t work around their broken HTTP/1.1 validation code. If they fixed that, then YouTube might become something which network administrators stop asking to filter.
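For the curious, here’s roughly how the beta code is wired up in squid.conf. The directive names are from the Squid-2 development tree and may change, and /usr/local/bin/storeurl.pl is a hypothetical helper that reads URLs on STDIN and writes the canonical “storage” URL on STDOUT:

# Illustrative sketch: rewrite the *storage* key (not the request)
# for YouTube content, using a hypothetical helper script.
acl store_rewrite_list dstdomain .youtube.com
storeurl_rewrite_program /usr/local/bin/storeurl.pl
storeurl_access allow store_rewrite_list
storeurl_access deny all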

(ObNote: the second method uses lighttpd as the serving software, and it replies with an HTTP/1.1 reply regardless of whether the request was HTTP/1.0 or HTTP/1.1. Grr!)

How cachable is Google (part 1): Google Maps

November 16, 2007

I’m looking at how cachable Google content is, with an eye to making Squid cache some of it better. Contrary to popular belief, a lot of the Google content (that I’ve seen!) is dynamically generated “static” content – images, videos – which could be cached but unfortunately isn’t.

Google Maps works by breaking up the “map” into multiple square tiled images. Any compositing that occurs (eg maps on top of a satellite image) is done by the browser, not dynamically generated by Google.

We’ll take one image URL as an example:

A few things to notice:

  1. The first part of the hostname – kh3 – can and does change (I’ve seen kh0 through kh3). As far as I can tell, all the tiles can be fetched from each of these servers. This is done to increase concurrency in the browser: the Javascript will select one of four servers for each tile, so the per-server concurrency limit is reached against multiple servers (ie, N times the limit) rather than just one.
  2. The query string is a 1:1 mapping between query and tile, regardless of which keyhole server it’s coming from.
  3. The use of a query string negates all possible caching, even though…
  4. …the CGI returns Expires and Last-Modified headers!

Now, the reply headers (via a local Squid):

HTTP/1.0 200 OK
Content-Type: image/jpeg
Expires: Sat, 15 Nov 2008 02:44:29 GMT
Last-Modified: Fri, 17 Dec 2004 04:58:08 GMT
Server: Keyhole Server 2.4
Content-Length: 15040
Date: Fri, 16 Nov 2007 02:44:29 GMT
Age: 531
X-Cache: HIT from violet.local
Via: 1.0 violet.local:3128 (squid/2.HEAD-CVS)
Proxy-Connection: close

The server returns a Last-Modified header and an Expires header; but as the URL has a query identifier in it (ie, the “?”), plenty of caches and, I’m guessing, some browsers will not cache the response, regardless of the actual cachability of the content. See RFC2068 13.9 and RFC2616 13.9. It’s unfortunate, but it’s what we have to deal with.

Finally, assuming the content is cached, it will need to be periodically revalidated via an If-Modified-Since request. Unfortunately the keyhole server doesn’t handle IMSes correctly, always returning a 200 OK with the entire object body. This means that revalidation will always fail and the entire object will be fetched in the reply.

So how to fix it?

Well, by default (and for historical reasons!) Squid will not cache anything with “cgi-bin” or “?” in the path. That’s for a couple of reasons: firstly, replies from HTTP/1.0 servers with no expiry information shouldn’t be cached if they may have come from a CGI (and URLs with “?” in them generally have); and secondly, intermediate proxies in the path may “hide” the version of the origin server, so you never quite know whether it was HTTP/1.0 or not.
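That default is implemented in squid.conf with the familiar QUERY ACL:

# Squid's historical default: don't cache, and don't fetch via a
# parent, anything that looks like CGI output.
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY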

Secondly, since the same content can come from one of four servers:

  • You’ve got a 1 in 4 chance that you’ll get the same Google host for a given tile; and
  • You’ll end up caching the same tile data four times.

I’m working on Squid to work around these shortcomings. Ideally Google could fix the second one by not using query-strings but instead using URL paths with correct cachability information and handling IMS, eg:

might become:

That response would be cachable (assuming that they didn’t vary the order of the query parameters!) and browsers/caches would be able to handle that without modification.

I’ve got a refresh pattern to cache that content, but it’s still a work in progress. Here’s an example:

refresh_pattern ^ftp:       1440  20%  10080
refresh_pattern ^gopher:    1440   0%   1440
refresh_pattern cgi-bin        0   0%      0
refresh_pattern \?             0   0%   4320
refresh_pattern .              0  20%   4320

I then remove the “cache deny QUERY” line and simply allow caching of everything; the refresh_patterns then determine which URLs shouldn’t be cached when no expiry information is given (ie, if a URL with cgi-bin or “?” in the path returns expiry information, Squid will cache it).

[UPDATE: We have now merged the results of Adrian’s work here into Squid-2.7 and 3.1+. The new required refresh_patterns are:

refresh_pattern ^ftp:               1440  20%  10080
refresh_pattern ^gopher:            1440   0%   1440
refresh_pattern -i (/cgi-bin/|\?)      0   0%      0
refresh_pattern .                      0  20%   4320

hierarchy_stoplist cgi-bin ?
]

It’d then be nice if the keyhole server handled IMS requests correctly!

Secondly, Squid needs to be taught that certain URLs are “equivalent” for the purposes of cache storage and retrieval. I’m working on a patch which will take a URL like this:

Match on the URL via a regular expression, eg:


And map that to a fixed URL regardless of the keyhole server number, eg:

The idea, of course, is that there won’t ever be a valid, normally fetched URL whose host part ends in .SQUIDINTERNAL, and thus we can use it as an “internal identifier” for local storage lookups.

This way we can request the tile from any kh server under any country domain, so the following URLs would be equivalent from the point of view of caching:

It’s important to note here that the content is still fetched from the requested host; it’s just stored in the cache under a different URL.
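As a purely illustrative example of the idea (this is not the patch’s actual syntax), the mapping might be expressed as a regular expression match and a fixed replacement:

# match:     ^http://kh[0-9]+\.google\.[a-z.]+/(.*)$
# store as:  http://keyhole.google.SQUIDINTERNAL/\1

Any khN server under any country domain then maps to the same storage URL, so the tile is cached exactly once.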

I’ll next talk about caching Google Images and, finally, how to cache YouTube.

Why even bother making cachable content?

September 8, 2007

I see so many sites pop up in Squid logs which seem to go out of their way to defeat any attempt at caching. I’m not sure why, but I’m going to try and cover a few points here.

  1. I want to know exactly how many bits I’m shipping! This is especially prevalent in the American internet scene. Everyone’s about shipping bits. The more bits you ship, the “better” you are. (There’s some talk about the “number of prefixes you advertise” also being linked to how “big” your network is; or maybe people are just lazy about aggregating their BGP announcements. I digress.) Sure, if you graph your outbound links this is true. But you can use HTTP tricks to know exactly how many requests you’re handling without shifting the whole object out. Just mark the objects “must revalidate” rather than immediately expired, so the web cache always revalidates via an If-Modified-Since request. You’ll get the IMS and can send back a “not modified” reply; you can then synthesise a graph based on what you would be serving (see the example exchange after this list). Voila, free bits. This can be quite substantial if you have lots and lots of images on your site.
  2. I want to know how many people are accessing my site! This is definitely a left-over from the 90s, and even then the problem was solved. If you absolutely, positively need to know about page impressions, just embed a non-cachable 1×1 transparent GIF somewhere where it won’t slow the page rendering down. Leave the rest of the site cachable. Really though, these days people should just use JavaScript and cookies (a la the Google “urchin”) if they want accurate “people” and “impression” counts. Trying to do it based on page accesses and unique IPs just isn’t going to cut it.
  3. I don’t want people to cache the data; they have to log in first! You can tell proxy caches that they must first revalidate the authentication information with the origin server before serving out content. You can have your cake and eat it too.
  4. Making my content cachable is too damned hard! How do I know what headers to send, when and where? It’s not all that difficult. Mark Nottingham’s Caching Tutorial covers a lot of useful information about building cachable websites. You can keep control of your authenticated content and push out more content than you’re actually buying transit for.
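Here’s what the trick from point 1 looks like on the wire (header values are illustrative). The origin serves the object once as cachable-but-always-revalidated:

HTTP/1.1 200 OK
Content-Type: image/png
Cache-Control: public, max-age=0, must-revalidate
Last-Modified: Sat, 08 Sep 2007 10:00:00 GMT

(on every later hit, the cache revalidates)

GET /images/logo.png HTTP/1.1
If-Modified-Since: Sat, 08 Sep 2007 10:00:00 GMT

HTTP/1.1 304 Not Modified

Each 304 is a countable request in your logs, but no object body is shipped.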

Just remember a few simple rules:

  • Don’t hide static content behind query URLs (ie, stuff with a “?” in them) – caches just won’t bother trying to cache them (unless, of course, they’re built by me. But then, I am pretty evil.) Amusingly, I see plenty of websites which hide all of their images and flash videos – entirely static content – behind a CGI script with a “?” in the path. Just imagine what it’d be like to be able to push five or ten times the amount of content to clients behind proxy caches.
  • Don’t be afraid to ask for help in how to optimise your site for forward caching. Heck, even asking on the squid-users mailing list will probably get you sorted out without too much trouble.
  • There are people behind proxy caches – the developing world for one, but there are plenty of caches to be found in schools, offices, wired buildings, wireless mesh networks and the like. Bandwidth isn’t free and never will be. You might be able to buy a 40Gbit pipe to your favourite transit provider in North America, but that won’t help people in South Africa or Australia, where international bandwidth is still expensive and will remain so for the foreseeable future. And yes, we like watching YouTube as much as the next person.

Squid-2.6.STABLE16 is out!

September 6, 2007

Henrik has released Squid-2.6.STABLE16. This resolves a number of bugs, including a crash bug introduced in Squid-2.6.STABLE15.

The changeset list explains what’s changed; the release page includes downloads and other useful stuff. Don’t forget to read the release notes if you’re updating from 2.5 to 2.6!

And don’t forget the Squid-2.6 Configuration Manager!

Reverse Proxying with Squid

September 3, 2007

A Squid user posted about their little “CDN” installation to speed up their content delivery to the clients of a particular ISP.

You can read more about it here.

Blocking Ads in Squid

August 29, 2007

One of the more bandwidth-intensive “features” of the Web is the proliferation of ad images and flash media, which have a nasty habit of wasting bandwidth and increasing loading times.

Squid has been able to filter ads and other unwanted media for a number of years. Various articles have been written covering exactly how it’s done, so I won’t repeat the how-to here.

The original method involved the “redirector”. A redirector was simply an external program which would read in URLs on STDIN and spit out “alternate” URLs on STDOUT. This could be used for a number of things – the initial use being to rewrite URLs when using Squid as a web server accelerator – but people quickly realised they could rewrite “ad” URLs to filter them out.
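As a toy illustration of that protocol (the ad hostname and replacement URL are placeholders I’ve made up), a minimal redirector in C might look like:

#include <stdio.h>
#include <string.h>

/* Squid writes one request per line on STDIN, roughly:
 *   URL client-ip/fqdn ident method
 * We reply with a replacement URL, or an empty line to
 * leave the URL untouched. */
int main(void) {
    char line[8192];
    while (fgets(line, sizeof(line), stdin) != NULL) {
        if (strstr(line, "ads.example.com") != NULL)
            fputs("http://127.0.0.1/empty.gif\n", stdout);
        else
            fputs("\n", stdout);
        fflush(stdout); /* Squid expects one unbuffered reply per request */
    }
    return 0;
}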

Another method is to simply build a text file with identified ad content URLs and hostnames and simply deny the traffic. This is simple but can scale poorly if you try filtering thousands of URLs against regular expression matches.
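A sketch of that approach, assuming a locally maintained file of ad server hostnames (one domain per line):

# /etc/squid/ad_domains.txt might contain lines like .doubleclick.net
acl ads dstdomain "/etc/squid/ad_domains.txt"
http_access deny ads

dstdomain matching doesn’t use regular expressions, which is why this scales so much better than a pile of url_regex ACLs.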

Finally, another method involves the more recent “external ACL” helper. This is an external program which can be passed a variety of information about a request (URL, client IP, authenticated username, arbitrary HTTP headers and ident, to name a few – it’s very customisable!) and spit back a YES or a NO, with an optional message. Content can then be filtered by simply denying access to it, though the helper currently doesn’t let you return modified content. One of the most popular uses of the external ACL helper is actually to implement ACL groups from sources like LDAP/Windows Active Directory.
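A sketch of the wiring, assuming a hypothetical helper at /usr/local/bin/ad_check that reads one line per request and answers OK or ERR:

# Pass the destination host and URL path to the helper.
external_acl_type ad_check children=5 %DST %PATH /usr/local/bin/ad_check
acl ads external ad_check
http_access deny ads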

How you do it is up to you. Here are a few links explaining what’s involved.

Proxying with Squid: A User’s Perspective

July 17, 2007

Someone pointed me at a write-up of a quick HOWTO for various Squid tasks – basic refresh_patterns for controlling cacheability of files, filetypes and web URLs; remote refreshing; a performance review; and an example reverse accelerator setup.

I think it’s a nice high-level introduction to using Squid as a website accelerator.

New website is up!

May 15, 2007

The new website is up at . Please report issues via the Squid Bugzilla. (Obviously, feel free to email us or comment here if the website is so broken you can’t use the Bugzilla.)

Request for some help: CSS template magic!

May 10, 2007

I’m not really a “web developer” and although I can drive style sheets, I’m not a “CSS” hacker. So here’s my first request: could someone please give me a hand adapting the CSS from the new Squid site into a WordPress-happy theme? I’d love to have the Squid blog(s) themed similarly to the website.

Oh, and if someone could give Kinkie a hand adapting the new CSS to the wiki, we’d be forever grateful.