Archive for the ‘Web’ Category

Squid-3.2 mythbusting NAT

December 19, 2014

One of the more frequently mentioned “problems” with Squid-3.2 since its release is a change in how it handles NAT failures.

The Myths

“Squid used to work when I NAT traffic to it from my router.”

“Squid used to work with one port when I configure the browser and NAT traffic.”

The Reality

No. Squid up until 3.1 would silence the NAT errors and treat the router as if it were the client browser. Any differences between the Host header and the requested URL were also completely ignored. All of this was invisible to you, the administrator, hidden at debug levels not normally shown, and several of the problems went completely unrecorded as well.

This last part seems to be behind a fair bit of skepticism about whether the problem we solved really exists. Nothing was showing up in the logs before, and even a full packet trace from Squid or a gateway server would not reveal how JavaScript hijacks a client browser into passing internal documents to an external attacker.

What Changed?

Squid-3.2 finally added Host header validation to protect against this nasty behaviour and a few 0-day attacks exploiting the CVE-2009-0801 security vulnerability, which has been a known flaw with NAT interception by HTTP proxies for at least 12 years now. This meant two things had to change:

* ignoring NAT errors had to stop. We can’t rely on validation results if the TCP details are already known to be corrupted.

* traffic directly from a browser to a NAT intercept port on Squid had to stop. There is no way to distinguish a NAT lookup error from a lookup that finds no entry. A pity really, but that is what we are forced to work with.

During testing we uncovered a lot of really quite nasty behaviour by various clients, and indeed some public services set up to take advantage of the NAT bug as if it were a feature. On the whole a lot of client software sends a Host header with values strangely unrelated to what is being requested of the proxy. All of this had to go. Or did it? For most of the year or so of testing it was banned completely and an HTTP error message returned for any such garbage. But in the end it was clear that we had to let it through somehow, or we would be pitting Squid against the biggest players on the Internet … yeah.

(Almost-) Final Results

Squid-3.2 and later will reject traffic where NAT lookups fail on an intercept port. This includes NAT done on external devices and browsers directly sending proxy requests to the Squid intercept port. At the very least, accurate reporting about the traffic and what it contains is the critical factor. If we let this traffic into Squid it would seriously compromise the validation results and also allow malicious clients to attack internal network resources with a large measure of anonymity. Neither of which is acceptable for a proxy trusted with controlling user traffic.

The best practice guideline for some years has been that NAT MUST be done on the Squid device. Squid-3.2 now enforces it as a basic requirement. If you are one of the network administrators running into this requirement change, please investigate the Policy Routing functionality of the device you originally had doing the NAT.
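A minimal sketch of that arrangement in squid.conf, using the conventional port numbers (adjust for your network); the NAT REDIRECT/DNAT rule on this same box then targets the intercept port:

# Port for browsers explicitly configured to use the proxy.
http_port 3128

# Port receiving only packets NATed on this same box.
# Never point browsers directly at this port.
http_port 3129 intercept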

Squid-3.2 and later will accept invalid Host headers and produce a response to the client, but they will not cache the untrustworthy transaction results. They will contact the same server which the client TCP connection would have originally reached had Squid not been there (visible as ORIGINAL_DST in the logs). Notice that this is now deserving of the name “transparent proxy” far more than any previous intercepting Squid. Even so, NAT interception is still only half-transparent, with the server being able to easily identify the proxy’s existence.
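For those who want to tune the new behaviour, Squid-3.2 exposes a couple of relevant squid.conf directives. A sketch showing what I believe are the 3.2 defaults (check the release notes before relying on them):

# 'on' rejects requests whose Host header fails validation outright,
# instead of relaying them without caching.
host_verify_strict off

# Permit unverifiable intercepted requests to be relayed to the
# original client destination IP (logged as ORIGINAL_DST).
client_dst_passthru on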

The loss of caching is intended to be a temporary solution until we can properly implement per-client caching of objects. As the saying goes, there is nothing so permanent as the temporary – Squid still contains this workaround several series later. Help with this work is very welcome.

The situation when upstream peers are involved is also quite dangerous. For now we have had to permit Squid to pass the traffic to peers, which opens all multi-hop proxy systems to the same vulnerabilities that were previously possible with a single intercepting proxy. The solution is also going to require substantially more work and be some years away. For the meanwhile, it is a good idea to avoid passing intercepted traffic to cache_peer when possible.

More information on the problems, the log entries, when they occur, and what can be done about each is detailed on the Squid wiki’s Host header forgery page.

Zero-Sized Replies from Windows Servers

April 30, 2014

During the last few months there have again been a number of bug reports and queries from administrators seeing Zero Sized Reply error pages being produced by Squid 3.2 and later.

These “errors” are produced when Squid sends an HTTP request and then something out in the network goes wrong and the TCP connection gets severed while Squid is still waiting for the start of the HTTP response to arrive. As you can imagine this is a little vague, because that “something” is any one of a large set of potential networking problems.

Investigation of the usual old culprits (ECN, Window Scaling, PMTUd, and CONNECT proxying) ruled them out, leaving us mostly in the dark.

Testing without the proxy appeared to work fine, as did small short transactions even through the proxy, leaving us more than a little confused.

The most common theme this time seems to be Windows-based SSL/TLS services with recent but not top-of-the-line software versions; IIS or SharePoint on Server 2008 and 2010, for example.

Daniel Beschorner has done some investigating and reported this:

Since Squid 3.2 the SSL flag SSL_OP_ALL is no longer enabled by default in Squid. It enables different workarounds in the OpenSSL library.

Windows / IIS seems to get confused by empty packets (to mitigate the BEAST attack) sent from OpenSSL in TLS 1.0.

So the possibilities are either re-enabling those OpenSSL workarounds for Squid’s outgoing TLS connections, or getting the TLS implementation fixed on the server side.
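For those stuck behind a server they cannot fix, one workaround, assuming an OpenSSL-built Squid, is to turn those workarounds back on for Squid’s outgoing TLS connections (reverse-proxy setups would use the analogous ssloptions= parameter on the relevant cache_peer line):

# Re-enable the OpenSSL bundle of 'harmless' bug workarounds
# (SSL_OP_ALL) that Squid-3.2 stopped setting by default.
sslproxy_options ALL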

We have also had remarkably similar problem reports about iTunes servers. That one is still unconfirmed and unresolved.

Squid-3.2: Pragma, Cache-Control, no-cache versus storage

October 16, 2012

The no-cache setting in HTTP has always been a misunderstood beastie. The instinctive reaction for developers everywhere is to believe that it prevents caching, cache handling, or some such myth.

This is not true.

By definition it merely forces caches to revalidate existing content before use (i.e. it tells the proxy to “be ultra, super-duper conservative. Do not send anything from cache without first contacting the server to double check it.”).

When sent on a client (browser) request:

  • Pragma:no-cache instructs HTTP/1.0 caches to revalidate any cached response before using it.
  • Cache-Control:no-cache instructs HTTP/1.1 caches to revalidate any cached response before using it.
  • Pragma:no-cache only works for HTTP/1.1 caches when Cache-Control is missing.
  • all other values of Pragma are undefined and to be ignored.

When sent on a server response:

  • Pragma in all its forms has no meaning whatsoever. It must be ignored.
  • Cache-Control:no-cache instructs HTTP/1.1 caches to revalidate this response every time it is re-used.

If you read those bullet points above very carefully you will notice that at no point is storage mentioned. Not once. The closest they get is saying what to do with already-stored content (revalidate it). In fact the HTTP/1.1 specification goes as far as to say explicitly that responses with no-cache MAY be stored, provided the revalidation is done as above.

no-cache in Squid

The well-known Squid versions of the past have all been HTTP/1.0 compliant and advertised themselves as HTTP/1.0 software. These proxies looked for Pragma:no-cache headers and obeyed them:

  • Squid being HTTP/1.0 meant that Pragma took precedence over Cache-Control.
  • Due to the lack of full HTTP/1.1 revalidation support in very old versions, Squid has traditionally treated no-cache in either header as if it were Cache-Control:no-store.
  • Due to some old server software, Pragma:no-cache on responses was treated as a mistaken form of Cache-Control:no-store.

Starting with version 3.2, Squid advertises and attempts to fully support the HTTP/1.1 specification. This is a game changer.

All of the above is about to be up-ended; assumptions can be thrown away and some funky cool proxy behaviour allowed to take place.

Hiding in the background is the instruction that Pragma only applies when Cache-Control is missing from a request. We can ignore it almost completely. When we do have to pay attention, we only need to notice the no-cache value and can treat it as if we had received Cache-Control:no-cache.

The other change is a potential game changer: the object being transferred is stored now and revalidated later.

Some implications from storing no-cache responses:

  • servers can utilize 304 responses instead of generating new content, saving a lot of bandwidth and CPU cycles.
  • all those configuration hacks for ignoring or stripping no-cache are no longer needed (see the example after this list). Also, the harm they do will become more visible as revalidation is skipped.
  • the cache HIT ratio can potentially rise above 50% for forward proxies. As a side effect of the hit-counting market, a large portion of web traffic uses no-cache instead of no-store or private. That large portion is cacheable, but until now Squid has been dropping it.
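By way of illustration, the kind of hack that can now be deleted looked like the hypothetical line below; Squid-3.2 drops the ignore-no-cache option entirely, since storing and then revalidating is the new standard behaviour:

# Old hack from older Squid releases: treat no-cache responses as storable.
# Delete on upgrade; Squid-3.2 stores and correctly revalidates them anyway.
refresh_pattern -i \.example\.com/ 0 20% 4320 ignore-no-cache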

Before the marketing department panics about the end of the world, let’s be clear on one important point:

revalidation means every client request will still reach the end server doing HIT counting, traffic control, whatever – but in a way which allows 304 bandwidth optimization on the responses.

Do not expect a sudden rise of TCP_HIT in the proxy logs, though. It is more likely to show up as TCP_REFRESH_HIT, or as the nasty TCP_REFRESH_MODIFIED/TCP_REFRESH_MISS which are produced by broken web applications that always send out new but unchanged content.

Happy Eyeballs

July 14, 2012

Geoff Huston wrote up a very interesting analysis of the RFC 6555 “Happy Eyeballs” features being added to web browsers recently.

As these features reach the mainstream stable browser releases and more people begin using them, Squid in the role of intercepting proxy is starting to face the same issues mentioned for CGN gateways, and for all the same reasons. Whether you are operating an existing interception proxy or installing a new one, this is one major new feature of the modern web which needs to be taken into account when provisioning the network and Squid socket/FD resources.

Squid operating as a forward proxy does not face this issue, as each browser only opens a limited number of connections to the proxy. Although Firefox’s implementation of the “Happy Eyeballs” algorithm appears to have been instrumental in uncovering a certain major bug in Squid’s new connection handling recently.

A Squid Implementation

For those interested, Squid-3.2 does implement by default a variation of the “Happy Eyeballs” algorithm.

DNS lookups are now performed in parallel, as opposed to serially as they were in 3.1. As a result the worst-case DNS lookup time is reduced from the sum of the A and AAAA response times to the maximum of the two.

TCP connection attempts are still run serially, but where older versions of Squid interspersed a DNS lookup with each set of TCP attempts, the new 3.2 code identifies all the possible destinations first and tries each individual address until a working connection is found. Retries under the new version are also now limited per-address, where in the older versions each retry meant a full DNS result set of addresses was re-tried.

As a result, dns_timeout is separated from connect_timeout, which now controls only one individual TCP connection handshake.
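In squid.conf terms the separation looks like this; the values shown are only illustrative, not recommendations:

# Bounds each DNS lookup; A and AAAA queries now run in parallel.
dns_timeout 30 seconds

# Now bounds a single TCP connection attempt to one address.
connect_timeout 1 minute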

Proxying HTTPS for an internal service

June 18, 2011

Since version 2.6 changed the way http_port worked and let Squid service multiple different types of traffic simultaneously, people have been struggling with one setup which should, to all outward appearances, be quite simple.

I’m speaking of the scenario where you have a proxy serving as both a forward-proxy gateway for the internal LAN users and as a reverse-proxy gateway for some SSL secured internal services (an HTTPS internal site).

Both setups are essentially simple. For the reverse-proxy you set up an origin cache_peer with SSL certificate options, and perhaps an https_port to receive external traffic. For the forward-proxy you set up the users’ browsers to contact the proxy for their HTTP and HTTPS requests, perhaps with NAT interception to force those who refuse.
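A minimal sketch of the two halves living in one squid.conf (the hostname, address, and certificate paths here are hypothetical):

# Reverse-proxy half: receive HTTPS for the internal service.
https_port 443 accel cert=/etc/squid/site.pem key=/etc/squid/site.key defaultsite=intranet.example.lan
cache_peer 192.168.1.10 parent 443 0 no-query originserver ssl name=intranetSSL
acl intranet dstdomain intranet.example.lan
cache_peer_access intranetSSL allow intranet
cache_peer_access intranetSSL deny all

# Forward-proxy half: LAN browsers send HTTP and HTTPS (CONNECT) here.
http_port 3128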

Then you discover that Squid can’t seem to relay requests from internal users to your internal peer. You get warnings about clientNegotiateSSL failing on plain HTTP requests, even though it may appear the user was opening HTTPS properly to contact it.

The problem is that when relaying through a known proxy, browsers wrap their SSL request inside a CONNECT tunnel setup request, which is plain-text HTTP. Squid passes this intact on to any cache_peer you have configured, even the origin one which is expecting SSL. Squid may do the right thing and wrap it in a second layer of SSL, but that just makes things worse, as the server at the other end gets this weird CONNECT request it can’t do anything with.

Until recently the only fix has been to set up a bypass so that internal LAN users don’t use the proxy when visiting the internal HTTPS site. That works perfectly for user access, but does cause problems on the recording and accounting systems, which now have to track two sets of logs and filter proxy-relayed requests out of one.

Alternatively, you could set the LAN DNS to point users at the reverse-proxy port and figure out some way to avoid forwarding loops, either by bypassing Squid as above or by disabling the loop detection.

Both alternatives have the same problems at best. In the worst case, the second opens some security vulnerabilities by ignoring loops.

In Squid-3.1 we have trialled two possible ways to fix this whole situation.

The first attempt was simply not to relay CONNECT to peers with the origin type configured. This failed with a few unwanted side effects. One was that Squid would look up the destination in DNS and go directly to that server; fine for most, but not all Squid installations have split-DNS available. Or Squid could relay it to a non-origin peer instead, possibly halfway around the world, with worse lag effects than a little extra calculation handling the logs.

The second attempt, which we are currently running with in 3.1.12 and later, is to strip the CONNECT header and connect the tunnel straight to the peer, but only when the peer port matches the intended destination of that tunnel and your access controls permit it for selection.

  • The port restriction is there as a simple check that the services are likely to match protocols, even if we can’t be sure which.
  • Traffic to that internal service does go through the proxy, so traffic accounting only has to handle the proxy logs.
  • Requests from LAN clients use the clients’ SSL certificates instead of the cache_peer configured ones.

This last point is one which can bite or confuse. If you have LAN users in this type of scenario and require all contact with the internal service to use the proxy-configured certificates, you will still need to configure those clients with the old methods.

Enjoy. And as always, if you have better ideas or problems please let us know.

Continuous Integration

August 18, 2009

For the last few years there has been a slowly growing improvement to the testing and QA Squid is subjected to. This last week has seen the construction and rollout of a full-scale build farm to replace some of our simple internal testing. Robert Collins covers the growth process in his blog.

Here is the initial release notice:

Hi, a few of us devs have been working on getting a build-test environment up and running. We’re still doing fine tuning on it but the basic facility is working.

We’d love it if users of squid, both individuals and corporates, would consider contributing a test machine to the buildfarm.

The build farm is at http://build.squid-cache.org/ with docs about it at http://wiki.squid-cache.org/BuildFarm.

What we’d like is to have enough machines that are available to run test builds, that we can avoid having last-minute scrambles to fix things at releases.

If you have some spare bandwidth and CPU cycles you can easily volunteer.

We don’t need test slaves to be on all the time – if they aren’t on they won’t run tests, but they will when they come on. We’d prefer machines that are always on over sometimes-on ones.

We only do test builds on volunteer machines after a ‘master’ job has passed on the main server. This avoids using up resources when something is clearly busted in the main source code.

Each version of squid we test takes about 150MB on disk when idle, and when a test is going on up to twice that (because of the build test scripts).

We currently test:

  • 2.HEAD
  • 3.0
  • 3.1
  • 3.HEAD

I suspect we’ll add 2.7 to that list, so I guess we’ll use about 750MB of disk if a given slave is testing all those versions.

Hudson, our build test software, can balance out the machines though – if we have two identical platforms they will each get some of the builds to test.

So, if your favorite operating system is not currently represented in the build farm, please let us know – drop a mail here or to noc @ squid-cache.org – we’ll be delighted to hear from you, and it will help ensure that squid is building well on your OS!

-Rob

That just about covers everything. Hardware and build software requirements are listed on the build farm page.


Life of a Beta

July 11, 2009

From early inception when the developers have nothing but dreams for it.  Through the coding and arguments about what should be included and how. Through the alpha testing with its harrowing hours pondering obscure code from last decade. Even the odd period of panic as security bugs are whispered about behind closed doors. Such is the early life of software.

Two weeks ago word went out that 3.1 was reaching end-game.

This part of the release lifecycle seems to be going well. Packages are appearing, slowly, as QA throws demanding eyes on the code and makes us actually fix things. Don’t be fooled by the packages already out; they have been in QA for a few months to get this far. On that note:

NetBSD, Gentoo, Ubuntu, FreeBSD and RedHat already have packages ready and available for at least testing use, if you know where to look (i.e. the links right there might be a good start).

Debian has a bit more QA to go as of this writing, but the maintainer tells me there will be packages out soon.

OpenBSD and Mac turned out at the last minute to be running split-stack IPv6 implementations (for security, apparently). All the documentation read in two years had left the impression this was a Windows XP anachronism (and who runs XP Pro on a server?), so support was delayed and delayed. The OpenBSD maintainer and someone interested from the Mac side are working with me on closing that gap in the features.

There may be more OS with 3.1 packages. I’ve only begun working my way down the distrowatch.org popularity list to see which OS have them and who to contact. Squid has bundles on over 600 OS, apparently.

If you know who does the official packaging for your OS and whether there are 3.1 packages ready, please do me a favor and mention it. I’m seeking the web page where the squid (or squid3/squid30/squid31) package information can be found, and also the place where distro bug reports about Squid might end up.

Release 3.1

November 4, 2008

Kinkie pointed out Linus Torvalds’ blog today to the rest of us here working on Squid. As the release maintainer for Squid-3 this year I kind of agree; it’s a sad time to be cutting a new version. For me it’s more a reflection that, for all the high hopes we have for this new release, we had the same or similar hopes for the earlier one, just 12 months ago now.

On that sad note, yes, it’s finally happened. 3.0 has aged into a full-blown stable package. Most of a month and no new bugs. The perfect time for something shiny and new for the neo-tech fanclub. And so with that for an intro, we are go for 3.1!

3.1 is available for beta testing in the form of 3.1.0.1. See the Release Notes for the finer details of what has changed.

This release has gained from the experiences of 3.0 and 2.6, starting from a much more stable code base than its predecessor did. 3.0 had long years with few active developers, an interminably long period of testing releases, and in hindsight a premature birth.

Alongside the code, this release has had wider collaboration with active users. For the first time in many years we held a developer meeting that included users. Those of us who were there certainly took in a lot of feedback from all sides. I hope the users who talked to us can see in this release that their comments, even those made in passing, have been listened to and worked on.

The small comment from one user when asked what their biggest itch with Squid was, “we don’t like these being called STABLE, when it’s obvious they are not”, has led to the most notable change made in 3.1. That comment, and similar feelings from others, led us into discussions on release naming and numbering. From those we have produced 3.1.0.1: the second milestone point of the branch we are calling 3.1, where the developers have everything done and working for us.

No more DEVEL, PRE, or RC; no more premature labels guessing when things might be STABLE. Just 3.1.0.1. Further testing from the rest of you will show whether anyone can consider it stable, unstable, usable, or as buggy as raw earth.

From the developers: we use it, we love it. Try it and see for yourselves.

Some of the stuff you will find there is:

  • a lot of small changes aimed towards easier use and configuration (three cheers to those who nagged long and hard for this).
  • a lot of network RFC compliance extensions, making 3.1 much more capable of meeting modern network needs. The future still holds improvements, but 3.1 is definitely better in many respects than everything that came before.
  • a lot of things to make Squid a better experience for your own users. More seamless network recovery tricks than ever before. We have even tagged along behind the international localization bandwagon, in our own way, to make the errors Squid does have to show both pretty and readable.

Sadly, careful readers will notice a section of the Release Notes labeled “Regressions against 2.7”. Yes, those of you who moved to 2.7 because you needed some brand-new feature there may still have trouble migrating up to 3.1. What we have done is port as many of the 2.6 features and fixes as we could. A few did not make it in time, but will be coming in 3.2, alongside the features added as experimental in 2.7.

On the overview:

  • 2.5 has disappeared over the horizon into the long dark night of obsolescence.
  • 2.6 is itself officially aging out now. Supported, but the developers’ first response is “can you try something newer?”.
  • 2.7 is being maintained for the few extremely high-performance accelerator setups. But in general the Squid-2 series is aging out for us developers.
  • 3.0 has reached a point of stability, though it is not fully featured.
  • 3.1 is available for testing as the next step up. You should be planning to migrate to 3.1 or a later release.

If there are any features holding you to Squid-2, or even issues you find when testing Squid-3, speak up; we rely on your input to choose the most needed features for porting.

Thank you all, and enjoy your use of Squid 3.1.

Chunked Decoding

April 29, 2008

We have been getting a growing number of reports and bugs from people using Squid 3.0, described as “squid producing a blank page” while bypassing Squid apparently works.

Sounds familiar to some, yes? I’m bringing it up now because while it is an old problem, it’s not the TCP issues Adrian wrote about earlier, which incidentally can have exactly the same visible effects for end-users. You should check for those as well if you find your problem is not this one.

This ‘new’ issue is caused by certain widely-used web servers, which shall remain nameless and unadvertised by me, that always respond with HTTP/1.1 chunked encoding of pages.

Servers are explicitly forbidden from sending that particular encoding to software announcing itself as HTTP/1.0 (such as Squid). But the broken server is doing it anyway!

Ironically, the authors use this server on their own help and support website. So those who are having this problem both see it as a Squid problem and can’t find or see any solution the authors may have posted anyway.

How to tell if this is your problem?

Use squidclient to make a web request that bypasses the Squid proxy, as sketched below. It will send out an HTTP/1.0 request and should get a page back. If the headers of the response include “Transfer-Encoding: chunked”, there is your problem.
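Something along these lines, with www.example.com standing in for the suspect server (squidclient talks HTTP/1.0 by default):

squidclient -h www.example.com -p 80 http://www.example.com/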

This is currently only an issue in Squid 2.5 or earlier and in 3.0, which is still heavily modeled on 2.5.

The solutions are varied depending on your capabilities.

Simplest for some will be to just bypass squid for those domains.

[ UPDATE: (thanks Michael Graham)

Apparently several people are having success with simply dropping the Accept-Encoding header on requests to certain of these broken servers, by adding this to their squid.conf:

# Fix broken sites by removing Accept-Encoding header
acl broken dstdomain …
request_header_access Accept-Encoding deny broken

NP: don’t forget to remove it again when you upgrade out of 3.0

]

Next best is to use peer-routing to divert requests for those domains to a Squid 2.6 (or, if you are feeling experimental, a 3.1 build), as sketched below.
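A hypothetical example of that diversion, with the peer name and domain made up for illustration:

# Send requests for the broken domains via a Squid-2.6 parent
# which can decode the chunked replies.
acl broken dstdomain .broken.example.com
cache_peer squid26.example.lan parent 3128 0 no-query
cache_peer_access squid26.example.lan allow broken
cache_peer_access squid26.example.lan deny all
never_direct allow broken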

If it’s a serious issue and you are accelerating for one of these broken web servers, then you will need to stick with Squid 2.6 until 3.1 is available for production use.

Why does it work for 2.6 and 3.1 but not 3.0?

Well, things are a bit messy; I’ll have to write it up properly one day. Suffice to say that 3.1 has a lot more HTTP/1.1 support, which is where the chunked encoding/decoding was intended to live. But 2.6 needed it a bit earlier, so a version of the decoding (only!) was done to fit 2.6’s needs, solving this same issue for high-performance users earlier last year.

The 3.0 code is just different enough that it would need a whole new back-port project to get it going well. The time and work that would take is being used instead to get 3.1 out faster, which should be within a month of this writing, so procrastinating could solve the problem for you.

[UPDATE: Thanks to the Gentoo Project for their back-porting work, this will be available from 3.0.STABLE16-RC1.]

Squid-2 performance work: graph #1

January 23, 2008

[graph #1 image]
