Archive for the ‘Web’ Category

Squid-3.2: Pragma, Cache-Control, no-cache versus storage

October 16, 2012

The no-cache setting in HTTP has always been a misunderstood beastie. The instinctive reaction of developers everywhere is to believe that it prevents caching, or cache handling, or some such myth.

This is not true.

By definition it merely forces caches to revalidate existing content before use (i.e. it tells the proxy to “be ultra, super-duper conservative. Do not send anything from cache without first contacting the server to double-check it.”).

When sent on a client (browser) request:

  • Pragma:no-cache instructs HTTP/1.0 caches to revalidate any cached response before using it.
  • Cache-Control:no-cache instructs HTTP/1.1 caches to revalidate any cached response before using it.
  • Pragma:no-cache only works for HTTP/1.1 caches when Cache-Control is missing.
  • All other values of Pragma are undefined and are to be ignored.

When sent on a server response:

  • Pragma in all its forms has no meaning whatsoever. It must be ignored.
  • Cache-Control:no-cache instructs HTTP/1.1 caches to revalidate this response every time it is re-used.

If you read those bullet points above very carefully, you will notice that at no point is storage mentioned. None whatsoever. The closest they get is saying what to do with already-stored content (revalidate it). In fact the HTTP/1.1 specification goes as far as to say explicitly that responses with no-cache MAY be stored – provided the revalidation is done as above.
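
For illustration, here is an entirely hypothetical exchange showing how an HTTP/1.1 cache may store a no-cache response yet still revalidate it on every reuse:

# Origin response: MAY be stored, despite no-cache
HTTP/1.1 200 OK
Cache-Control: no-cache
ETag: "abc123"

# Before reusing the stored copy for any later client request,
# the cache sends a conditional request to the origin:
GET /page HTTP/1.1
Host: www.example.com
If-None-Match: "abc123"

# The origin confirms the copy is unchanged; the cache serves it
HTTP/1.1 304 Not Modified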

no-cache in Squid

The well-known Squid versions of the past have all been HTTP/1.0 compliant and advertised themselves as HTTP/1.0 software. These proxies both looked for Pragma:no-cache headers and obeyed them:

  • Squid being HTTP/1.0 meant that Pragma took precedence over Cache-Control.
  • Due to the lack of full HTTP/1.1 revalidation in very old versions, Squid has traditionally treated no-cache in either header as if it were Cache-Control:no-store.
  • Due to some old server software, Pragma:no-cache on responses was treated as a mistaken form of Cache-Control:no-store.

Starting with version 3.2, Squid advertises itself as HTTP/1.1 software and attempts to fully support the HTTP/1.1 specification. This is a game changer.

All of the above is about to be up-ended: assumptions can be thrown away, and some funky cool proxy behaviour allowed to take place.

Hiding in the background is the instruction that Pragma only applies when Cache-Control is missing from a request. We can ignore it – almost completely. When we do have to pay attention, we only need to notice the no-cache value, and can treat it as if we had received Cache-Control:no-cache.

The other change is a potential game changer: the object being transferred is stored now and revalidated later.

Some implications of storing no-cache responses:

  • servers can utilize 304 responses instead of generating new content, saving a lot of bandwidth and CPU cycles.
  • all those configuration hacks for ignoring or stripping no-cache are no longer needed (one example is shown after this list). Also, the harm they do will become more visible, as they cause the revalidation to be skipped.
  • the cache HIT ratio can potentially rise above 50% for forward proxies. As a side effect of the HIT-counting market, a large portion of web traffic uses no-cache instead of no-store or private. That portion is cacheable, but until now Squid has been dropping it.
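
For example, this sort of squid.conf workaround (the domain pattern here is hypothetical) can now be retired. It forces no-cache responses into the cache, but at the cost of silently skipping the revalidation that makes storing them safe:

# Old hack: force caching of no-cache responses (no longer needed in 3.2,
# where the ignore-no-cache option has been removed)
refresh_pattern -i example\.com/ 60 20% 4320 ignore-no-cache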

Before the marketing department panics about the end of the world, let’s be clear on one important point:

revalidation means every client request will still reach the end server doing HIT counting, traffic control, whatever – but in a way which allows 304 bandwidth optimization on the responses.

Do not expect a sudden rise of TCP_HIT in the proxy logs, though. It is more likely to show up as TCP_REFRESH_HIT, or the nasty TCP_REFRESH_MODIFIED/TCP_REFRESH_MISS which is produced by broken web applications always sending out “new” but unchanged content.

Happy Eyeballs

July 14, 2012

Geoff Huston wrote up a very interesting analysis of the RFC 6555 “Happy Eyeballs” features being added to web browsers recently.

As these features reach the mainstream stable browser releases and more people begin using them, Squid in the role of intercepting proxy is starting to face the same issues mentioned for CGN gateways, and for all the same reasons. Whether you are operating an existing interception proxy or installing a new one, this is one major new feature of the modern web which needs to be taken into account when provisioning the network and Squid socket/FD resources.

Squid operating as a forward proxy does not face this issue, as each browser only opens a limited number of connections to the proxy. Although Firefox’s implementation of the “Happy Eyeballs” algorithm appears to have been instrumental in uncovering a certain major bug in Squid’s new connection handling recently.

A Squid Implementation

For those interested, Squid-3.2 does implement by default a variation of the “Happy Eyeballs” algorithm.

DNS lookups are now performed in parallel, as opposed to serially as they were in 3.1. As a result the maximum DNS lookup time is reduced from the sum of the A and AAAA response times to the larger of the two.

TCP connection attempts are still run in serial, but where older versions of Squid interspersed a DNS lookup with each set of TCP attempts, the new 3.2 code identifies all the possible destinations first and tries each individual address until a working connection is found. Retries under the new version are also limited per-address, where in the older versions each retry meant a full DNS result set of addresses was re-tried.

As a result, dns_timeout is separated from connect_timeout, which now fully controls only one individual TCP connection handshake.
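
The two can therefore be tuned independently in squid.conf. A minimal sketch, with illustrative values only:

# Maximum time to wait for DNS answers (A and AAAA queries run in parallel)
dns_timeout 30 seconds

# Maximum time for each individual TCP connection handshake to one address
connect_timeout 15 seconds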

Proxying HTTPS for an internal service

June 18, 2011

Since version 2.6 changed the way http_port worked and let Squid service multiple different types of traffic simultaneously, people have been struggling with one setup which should, to all outward appearances, be quite simple.

I’m speaking of the scenario where you have a proxy serving as both a forward-proxy gateway for the internal LAN users and as a reverse-proxy gateway for some SSL secured internal services (an HTTPS internal site).

Both setups are essentially simple. For the reverse-proxy you set up an origin cache_peer with SSL certificate options, and perhaps an https_port to receive external traffic. For the forward-proxy you set up the users’ browsers to contact the proxy for their HTTP and HTTPS requests, perhaps with NAT interception to force those who refuse.
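
A rough sketch of the two roles in one squid.conf (hostnames, ports and certificate paths are all hypothetical):

# Forward-proxy port for the LAN users' browsers
http_port 3128

# Reverse-proxy port receiving external HTTPS traffic
https_port 443 accel cert=/etc/squid/site.pem defaultsite=internal.example.com

# The SSL-secured internal origin service
cache_peer internal.example.com parent 443 0 no-query originserver ssl name=internalSSL
acl internalSite dstdomain internal.example.com
cache_peer_access internalSSL allow internalSite
cache_peer_access internalSSL deny all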

Then you discover that Squid can’t seem to relay requests from internal users to your internal peer. You get warnings about clientNegotiateSSL failing on plain HTTP requests, even though it may appear the user was opening HTTPS properly to contact it.

The problem is that when relaying through a known proxy, browsers wrap their SSL request inside a CONNECT tunnel setup request – which is plain-text HTTP. Squid passes this on intact to any cache_peers you have configured, even the origin one which is expecting SSL. Squid may do the right thing and wrap it in a second layer of SSL, but that just makes things worse, as the server at the other end gets this weird CONNECT request it can’t do anything with.

Until recently the only fix has been to set up a bypass so that internal LAN users don’t use the proxy when visiting the internal HTTPS site. This works perfectly for user access, but does cause problems for the recording and accounting systems, which now have to track two sets of logs and filter proxy-relayed requests out of one.

Alternatively, you could set the LAN DNS to point users at the reverse-proxy port and find some way to avoid forwarding loops, either by bypassing Squid as above or by disabling the loop detection.

Both alternatives have the same problems at best. In the worst case, the second one opens some security vulnerabilities by ignoring loops.

In Squid-3.1 we have trialled two possible ways to fix this whole situation.

The first attempt was to simply not relay CONNECT to peers with the origin type configured. This failed with a few unwanted side effects. One was that Squid would look up the DNS and go directly to that server – fine for most, but not all Squid installations have split-DNS available. Or Squid could relay it to a non-origin peer instead, possibly halfway round the world, with worse lag effects than a little extra calculation handling the logs.

The second attempt, which we are currently running with in 3.1.12 and later, is to strip the CONNECT header and connect the tunnel straight to the peer – but only when the peer port matches the intended destination of that tunnel and your access controls permit it for selection.

  • The port restriction is there as a simple check that the service is likely to match protocols, even if we can’t be sure which.
  • Traffic to that internal service does go through the proxy, so traffic accounting only has to handle the proxy logs.
  • Requests from LAN clients use the clients’ SSL certificates instead of the cache_peer configured ones.

This last point is one which can bite or confuse. If you have LAN users in this type of scenario and require all contact with the internal service to use the proxy-configured certificates, you will still need to configure those clients with the old methods.

Enjoy. And as always, if you have better ideas or problems, please let us know.

Continuous Integration

August 18, 2009

For the last few years there has been slow, growing improvement to the testing and QA Squid is subjected to. This last week has seen the construction and rollout of a full-scale build farm to replace some of our simple internal testing. Robert Collins covers the growth process in his blog.

Here is the initial release notice:

Hi, a few of us devs have been working on getting a build-test environment up and running. We’re still doing fine tuning on it, but the basic facility is working.

We’d love it if users of squid, both individuals and corporates, would consider contributing a test machine to the buildfarm.

The build farm is at http://build.squid-cache.org/ with docs about it at http://wiki.squid-cache.org/BuildFarm.

What we’d like is to have enough machines available to run test builds that we can avoid having last-minute scrambles to fix things at releases.

If you have some spare bandwidth and CPU cycles you can easily volunteer.

We don’t need test slaves to be on all the time – if they aren’t on they won’t run tests, but they will when they come on. We’d prefer machines that are always on over sometimes-on ones.

We only do test builds on volunteer machines after a ‘master’ job has passed on the main server. This avoids using resources up when something is clearly busted in the main source code.

Each version of squid we test takes about 150MB on disk when idle, and when a test is going on up to twice that (because of the build test scripts).

We currently test:

  • 2.HEAD
  • 3.0
  • 3.1
  • 3.HEAD

I suspect we’ll add 2.7 to that list, so I guess we’ll use about 750MB of disk if a given slave is testing all those versions.

Hudson, our build test software, can balance out the machines though – if we have two identical platforms they will each get some of the builds to test.

So, if your favorite operating system is not currently represented in the build farm, please let us know – drop a mail here or to noc @ squid-cache.org – we’ll be delighted to hear from you, and it will help ensure that squid is building well on your OS!

-Rob

That just about covers everything. Hardware and build software requirements are listed in the build farm page.

Life of a Beta

July 11, 2009

From early inception when the developers have nothing but dreams for it.  Through the coding and arguments about what should be included and how. Through the alpha testing with its harrowing hours pondering obscure code from last decade. Even the odd period of panic as security bugs are whispered about behind closed doors. Such is the early life of software.

Two weeks ago word went out that 3.1 was reaching end-game.

This part of the release lifecycle seems to be going well. Packages are appearing, very slowly, as QA throws demanding eyes on the code and makes us actually fix things. Don’t be fooled by the packages already out; they have been in QA for a few months to get this far. On that note:

NetBSD, Gentoo, Ubuntu, FreeBSD and RedHat already have packages ready and available for at least testing use if you know where to look (i.e. the links right there might be a good start).

Debian has a bit more QA to go as of this writing, but the maintainer tells me there will be packages out soon.

OpenBSD and Mac turned out at the last minute to be running split-stack IPv6 implementations (for security, apparently). All the documentation read in two years had left the impression this was a Windows XP anachronism (and who runs XP Pro on a server?), so support was delayed and delayed. The OpenBSD maintainer and someone interested from the Mac side are working with me on closing that gap in the features.

There may be more OSes with 3.1 packages. I’ve only begun working my way down the distrowatch.org popularity list to see which do and who to contact. Squid apparently has bundles on over 600 OSes.

If you know who does the official packaging for your OS and whether there are 3.1 packages ready, please do me a favor and mention it. I’m seeking the web page where the squid (or squid3/squid30/squid31) package information can be found, and also the place where distro bug reports about Squid might end up.

Release 3.1

November 4, 2008

Kinkie pointed out Linus Torvalds’ blog today to the rest of us here working on Squid. As the release maintainer for Squid-3 this year I kind of agree: it’s a sad time to be cutting a new version. For me it’s more a reflection that for all the high hopes we have for this new release, we had the same or similar hopes for the earlier one, just 12 months ago now.

On that sad note, yes, it’s finally happened. 3.0 has aged into a full-blown stable package. Most of a month and no new bugs. The perfect time for something shiny and new for the neo-tech fanclub. And so, with that for an intro, we are go for 3.1!

3.1 is available for beta testing in the form of 3.1.0.1. See the Release Notes for the finer details of what has changed.

This release has gained from the experiences of 3.0 and 2.6, starting from a much more stable base of code than the initial 3.0 did. 3.0 had a long period of years with few active developers, an interminably long period of testing releases, and in hindsight a premature birth.

Alongside the code this release has a wider collaboration with active users. For the first time in many years we held a Developer meeting that included Users. We who were there certainly took in a lot of feedback from all sides. I hope those users who talked to us can see in this release that their comments, even those made in passing, have been listened to and worked on.

The small comment from one user, when asked what their biggest itch with Squid was – “we don’t like these being called STABLE, when it’s obvious they are not” – has led to the most notable change made to 3.1. That comment, and similar feelings from others, led us into discussions on the release naming and numbering. From which we have produced 3.1.0.1 – the second milestone point of the branch we are calling 3.1, where the developers have everything done and working for us.

No more DEVEL, PRE, or RC; no more premature labels guessing when things might be STABLE. Just 3.1.0.1. Further testing from the rest of you will show whether anyone can consider it stable, unstable, usable, or as buggy as raw earth.

From the developers; We use it. We love it. Try it, and see for yourselves.

Some of the stuff you will find there:

  • a lot of small changes aimed towards easier use and configuration (three cheers to those who nagged long and hard for this).
  • a lot of network RFC compliance extensions, making 3.1 much more capable of meeting modern network needs. The future still holds improvements, but 3.1 is definitely better in many respects than everything that came before.
  • a lot of things to make Squid a better experience for your own users. More seamless network recovery tricks than ever before. We have even tagged along behind the international localization bandwagon in our own way, to make the errors Squid does have to show both pretty and readable.

Sadly, careful readers will notice a section of the Release Notes labeled “Regressions against 2.7”. Yes, those of you who moved to 2.7 because you needed some brand new feature there may still have trouble migrating up to 3.1. What we have done is to port as many of the 2.6 features and fixes as we could. A few did not make it in time, but will be coming in 3.2, alongside the features added as experimental in 2.7.

On the overview:

  • 2.5 has disappeared over the horizon into the long dark night of obsolescence.
  • 2.6 is itself officially aging out now. Supported, but the developers’ first response is “can you try something newer?”.
  • 2.7 is being maintained for the few extremely high-performance accelerator setups, but in general the Squid-2 sequence is aging out for us developers.
  • 3.0 has reached a point of stability, though it is not fully featured.
  • 3.1 is available for testing as the next step up. You should be planning to migrate to 3.1 or a later release.

If there are any features holding you to Squid-2, or any issues you find while testing Squid-3, speak up; we rely on your input to choose the most needed features for porting.

Thank you all, and enjoy your use of Squid 3.1.

Chunked Decoding

April 29, 2008

We have been getting a growing number of reports and bugs from people using Squid 3.0, described as “squid producing a blank page”, where bypassing Squid apparently works.

Sounds familiar to some, yes? I’m bringing it up now because while it is an old problem, it’s not the TCP issue Adrian wrote about earlier – which, incidentally, can have exactly the same visible effects for end-users, and which you should also check for if you find this isn’t your problem.

This ‘new’ issue is caused by certain widely-used web servers, which shall remain nameless and unadvertised by me, that always respond with HTTP/1.1 chunked encoding of pages.

Servers are explicitly forbidden from sending that particular encoding type to software announcing itself as HTTP/1.0 (such as Squid). But the broken server is doing it anyway!

Ironically, the authors use this server on their own help and support website. So those who are having this problem both see it as a Squid problem, and can’t find any solution the authors may have posted anyway.

How to tell if this is your problem?

Use squidclient to make a web request that bypasses the Squid proxy. It sends out an HTTP/1.0 request and should get a page back. If the headers of the response include “Transfer-Encoding: chunked”, there is your problem.
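
For example (the hostname and path are hypothetical; run it from a machine whose traffic is not intercepted by the proxy):

# Fetch the page directly from the origin server; squidclient sends HTTP/1.0
squidclient -h www.example.com -p 80 /some/page.html

# Then look for this header in the response:
#   Transfer-Encoding: chunked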

This is currently only an issue in Squid 2.5 or earlier and in 3.0, which is still closely modeled on 2.5.

The solutions are varied depending on your capabilities.

Simplest for some will be to just bypass squid for those domains.

[ UPDATE: (thanks Michael Graham)

Apparently several people are having success with simply dropping the Accept-Encoding header on requests to certain of these broken servers, by adding this to their squid.conf:

# Fix broken sites by removing Accept-Encoding header
acl broken dstdomain ...
request_header_access Accept-Encoding deny broken

NP: don't forget to remove it again when you upgrade out of 3.0

]

Next best is to use peer-routing to divert those domains’ requests to a Squid 2.6 (or, if you are feeling experimental, a 3.1 build).
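
A minimal sketch of that diversion (the peer hostname and domain are hypothetical):

# Relay requests for the broken domains via a neighbouring Squid 2.6
acl chunky dstdomain .broken-server.example.com
cache_peer squid26.example.com parent 3128 0 no-query
cache_peer_access squid26.example.com allow chunky
cache_peer_access squid26.example.com deny all
never_direct allow chunky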

If it’s a serious issue and you are accelerating for one of these broken web servers, then you will need to stick with Squid 2.6 until 3.1 is available for production use.

Why does it work for 2.6 and 3.1 but not 3.0?

Well, things are a bit messy; I’ll have to write it up one day. Suffice to say that 3.1 has a lot more of the HTTP/1.1 support that chunked encoding/decoding was intended for. But 2.6 needed it a bit earlier, so a version of the decoding (only!) was done to fit 2.6’s needs, solving this same issue for high-performance users earlier last year.

The 3.0 code is just different enough that it would need a whole new back-port project to get it going well. The time and work that would take is being used instead to get 3.1 out faster – which should be within a month of this writing, so procrastinating could solve the problem for you.

[UPDATE: Thanks to the Gentoo Project for their back-porting work, this will be available from 3.0.STABLE16-RC1.]

Squid-2 performance work: graph #1

January 23, 2008

What’s going on with Squid-2 and Squid-3?

January 10, 2008

A few people have asked me what the deal is with Squid-2 and Squid-3.

“Why are you developing on Squid-2 when Squid-3 is now out?”

“Should I upgrade to Squid-3 now that it’s released?”

I’m focusing on Squid-2 for a few reasons, namely:

  • It’s what people running high-traffic sites are currently running, and Squid-3 doesn’t work at all for them;
  • I was fed up waiting for Squid-3 to be released and to become mature enough for users to migrate to before starting my performance work. I gave up about 12 months ago and began planning out the work that’s currently going on.
  • I’m personally much more familiar with the Squid-2 codebase than the Squid-3 codebase.

So what exactly am I doing to Squid-2? Well, I’m doing all the things to Squid-2 which I personally believe we should’ve done in the C++ Squid-3 branch before all the “new stuff” was added. You can find it all at http://devel.squid-cache.org/changesets/squid/s27_adri.html. A summary of what I’m doing in this first round:

  • I’m taking a very sharp scalpel to the codebase and removing all of the extra data copies and buffering which is going on;
  • I’m reworking the buffer management so arbitrary sized data buffers can be used, rather than fixed 4k buffers for network/disk traffic;
  • I’m reworking the Strings interface to use reference counting and reference underlying buffers, saving on memcpy() and malloc() calls, cutting down on the amount of transient memory used to handle requests and dropping the CPU and memory bus utilisation quite dramatically;
  • I’m reworking the dataflow between server->store and store->client to use the above reference counted buffers, so data isn’t memcpy()’ed between layers, again dropping CPU and memory bus utilisation;
  • And I’m going to break out as much of the code into external libraries with well-understood dependencies, as preparation for documentation, unit testing and further profiling.

My aim is to fix whatever bugs show up in Squid-2.7 and then in Squid-2.HEAD (which has some of the above included already). I’ll then start bringing across my changes as they’ve been tested and found stable. My aim is to have the bulk of the above done within the next month or so, get it into Squid-2.HEAD, and concentrate on making it stable before I continue tidying up the dataflow and restructuring the ugly bits of code.

What does this mean for Squid-3? The Squid-3 guys are doing some great work with things such as ICAP and IPv6, and I hope that they’ll gain more experience with their codebase over the next 12 months or so. I’m certainly not bringing ICAP support into Squid-2 until I’ve reworked the dataflow and tidied up the code enough for ICAP to sit comfortably in the data pipeline, rather than have it bolted onto the side and hooking into strange places where it shouldn’t. (I may bring IPv6 into Squid-2 soon though!)

Hopefully my work and their work will culminate in the development of the next major Squid version over the next 12 to 24 months. There’s a long way to go, though, and my main aim here is to get faster, better and shinier code out to the majority of Squid users now so they can benefit from the development, rather than repeating the 4-odd year gap between Squid-2.5 and Squid-2.6. Users hated that.

So what does it mean for you?

  • If you want to try out Squid-3, or if you want supported ICAP services, then try it out.
  • Squid-2.X will continue being developed over the next 12 months as time permits, so don’t feel like you have to move to Squid-3.
  • If you feel adventurous, try out Squid-2.7. Initial reports are that it’s stable and slightly less CPU intensive.
  • Squid-2.7 is the first version to include changes to allow Youtube and Microsoft Updates caching. It doesn’t do it out of the box, but the support is there, and I’ll be publishing test rules soon to let people start caching this stuff.
  • If you feel really adventurous then try out Squid-2.HEAD and report back if you have any issues. It should be even less CPU intensive, but only under certain workloads.

How cacheable is Google (part 2) – Youtube content

November 17, 2007

Youtube is (one of) the banes of small-upstream network administrators. The flash files are megabytes in size, and a popular video can be downloaded by half the people in the office or student residential college in one afternoon.

It is, at the present time, very difficult to cache. Let’s see why.

There are actually two different methods that I’ve seen employed to serve the actual flash media files. The first method involves fetching from youtube.com servers; the second involves fetching from IP addresses in Google IP space.

The first method is very simple: the URL form is:

http://XXX-YYY.XXX.youtube.com/get_video?video_id=VIDEO_ID

XXX is the POP name; YYY is, I’m guessing, either a server or a cluster name.

This is pretty standard stuff – and If-Modified-Since requests seem to be handled badly too! The query string “?” in the URL makes it uncacheable to Squid by default, even though it’s a flash video and probably not going to change very often.

The second method involves a bit more work. First the video is requested from a Google server. This server then issues an HTTP 302 reply pointing the client at a changing IP address. The redirected request looks somewhat like this:

http://74.125.15.83/get_video?video_id=HrLFb47QHi0&origin=dal-v37.dal.youtube.com

Again, the “?” query string. Again, the origin, but this time it’s encoded in the URL. Finally, not only are If-Modified-Since requests not handled correctly, the replies include ETags, and requests with If-None-Match revalidation still return the whole object! Aiee!

So how to cache it?

Firstly, you have to try to cache replies to URLs containing a “?”. It would be nice if they handled If-Modified-Since and If-None-Match requests correctly when the object hasn’t been modified – revalidation is cheap, and it’s basically free bandwidth. They could set the revalidation to be, say, after even 30 minutes – they’re already handling all the full requests for all the content, so the request rate would stay the same but the bandwidth requirements should drop.
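
A rough squid.conf sketch of that first step (patterns and times are illustrative only):

# The default config refuses to cache query URLs; relax or narrow these lines:
#acl QUERY urlpath_regex cgi-bin \?
#cache deny QUERY

# Then give the video responses a cache lifetime despite the query string
refresh_pattern -i get_video\? 60 50% 43200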

The URLs also have to be rewritten, much as is done to cache Google Maps content. The “canonical” form of the URL will then reference a “video” regardless of which server the client is asking.

Now, how do you do this in Squid? I’ve got some beta code to do this, and it’s in the Squid-2 development tree. Take a look here for some background information. It works around the multiple-URLs-referencing-the-same-file problem, but unfortunately it won’t work around their broken HTTP/1.1 validation code. If they fixed that, then Youtube might become something which network administrators stop asking to filter.
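
A sketch of how such a rewrite helper is wired up (the helper path is hypothetical; the helper reads request URLs on stdin and writes back a canonical “store” URL, one per line):

# Rewrite the varying Youtube video URLs to one canonical storage key
acl youtube dstdomain .youtube.com
storeurl_rewrite_program /usr/local/squid/bin/store_url_rewrite
storeurl_access allow youtube
storeurl_access deny all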

(ObNote: the second method uses lighttpd as the serving software, and it replies with an HTTP/1.1 reply regardless of whether the request was HTTP/1.0 or HTTP/1.1. Grr!)

