Since version 2.6 changed the way http_port works and let Squid service several different types of traffic simultaneously, people have been struggling with one setup which, to all outward appearances, should be quite simple.
I’m speaking of the scenario where you have a proxy serving as both a forward-proxy gateway for the internal LAN users and as a reverse-proxy gateway for an SSL-secured internal service (an internal HTTPS site).
Both setups are essentially simple. For the reverse-proxy you set up an origin cache_peer with SSL certificate options, and perhaps an https_port to receive the external traffic. For the forward-proxy you set up users’ browsers to contact the proxy for their HTTP and HTTPS requests, perhaps with NAT interception to force those who refuse.
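As a rough sketch only (the host name, address, ports and certificate paths below are placeholders, not taken from any real setup), the two halves might look something like this:

```
# reverse-proxy half: accept HTTPS from outside and relay it to the internal origin server
https_port 443 accel cert=/etc/squid/proxy-cert.pem key=/etc/squid/proxy-key.pem defaultsite=internal.example.com
cache_peer 192.0.2.10 parent 443 0 no-query originserver ssl name=internalSSL

# forward-proxy half: accept browser traffic from the LAN
http_port 3128
# and, if needed, NAT-intercepted traffic from clients which refuse to use the proxy
# http_port 3129 intercept
```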
Then you discover that Squid cannot seem to relay requests from internal users to your internal peer. You get warnings about clientNegotiateSSL failing on plain HTTP requests, even though the user appears to be opening HTTPS properly to contact it.
The problem is that when relaying through a known proxy, browsers wrap their SSL traffic inside a CONNECT tunnel-setup request, which is plain-text HTTP. Squid passes this on intact to any cache_peers you have configured, even the origin one which is expecting SSL. Squid may even do the “right” thing and wrap it in a second layer of SSL, but that just makes things worse: the server at the other end receives a strange CONNECT request it cannot do anything with.
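For illustration, a browser which has been told to use the proxy opens the connection to the internal site (host name hypothetical) with something along these lines, and only starts the SSL handshake after the proxy replies with 200 Connection established:

```
CONNECT internal.example.com:443 HTTP/1.1
Host: internal.example.com:443
```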
Until recently the only fix has been to set up a bypass so that internal LAN users don’t use the proxy when visiting the internal HTTPS site. This works perfectly for user access, but it does cause problems for the recording and accounting systems, which now have to track two sets of logs and filter proxy-relayed requests out of one.
Alternatively, you could point the LAN DNS at the reverse-proxy port and find some way to avoid forwarding loops, either by bypassing Squid as above or by disabling its loop detection.
Both alternatives have the same problems at best. At worst, the second one opens up security vulnerabilities by ignoring forwarding loops.
In Squid-3.1 we have trialled two possible ways to fix this whole situation.
The first attempt was simply not to relay CONNECT to peers configured as origin servers. This failed with a few unwanted side effects. One was that Squid would look the destination up in DNS and go directly to that server; fine for most installations, but not every Squid has split-DNS available. Alternatively, Squid could relay the request to a non-origin peer instead, possibly halfway round the world, with worse lag effects than a little extra work handling the logs.
The second attempt, which we are currently running with in 3.1.12 and later, is to strip the CONNECT header and connect the tunnel straight to the peer, but only when the peer port matches the intended destination of the tunnel and your access controls permit that peer to be selected. A configuration sketch follows the notes below.
- The port restriction is there as a simple check that the two services are likely to be speaking the same protocol, even if we can’t be sure which one.
- Traffic to the internal service does go through the proxy, so traffic accounting only has to handle the proxy logs.
- Requests from LAN clients use the clients’ own SSL certificates instead of the ones configured on the cache_peer.
This last point is one which can bite or confuse. If you have LAN users in this type of scenario and require all contact with the internal service to use the proxy-configured certificates, you will still need to configure those clients with the old methods.
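Assuming the cache_peer from the earlier sketch, the extra pieces needed on the forward-proxy side are only about peer selection: the tunnel has to be permitted to select the origin peer, and discouraged from going direct, so the new behaviour can kick in. Something along these lines (names are again placeholders, not a definitive configuration):

```
# the origin peer from the reverse-proxy sketch above, listening on port 443
# cache_peer 192.0.2.10 parent 443 0 no-query originserver ssl name=internalSSL

acl internalSite dstdomain internal.example.com

# allow requests for the internal site (including CONNECT tunnels) to select that peer
cache_peer_access internalSSL allow internalSite
cache_peer_access internalSSL deny all

# and stop them going direct, so they actually reach the peer through the proxy
never_direct allow internalSite
```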
Enjoy. And as always, if you have better ideas or problems please let us know.