Re: [Captive-portals] Arguments against (any) Capport "API"
On 7 April 2017 at 10:44, David Bird <[email protected]> wrote:
> the more familiar "boingo.com" in the FQDN
You mean boıngo.com ? Looks legit.
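To make the homograph concrete: that first "ı" is U+0131 (dotless i), so it is a completely different domain from boingo.com, which the ASCII-compatible (Punycode) form makes visible. A minimal sketch in Python, using its IDNA codec:

```python
# Sketch: "boıngo.com" (with U+0131, dotless i) renders almost
# identically to "boingo.com" but is a distinct domain; its IDNA/ACE
# form on the wire is an xn-- label, not "boingo.com".
spoof = "bo\u0131ngo.com"      # looks like boingo.com in many fonts
ace = spoof.encode("idna")     # ASCII-compatible encoding used in DNS

print(ace)                     # an xn--... label, clearly not boingo.com
print(ace != b"boingo.com")    # True: different domain entirely
```

Browsers that show the xn-- form (or restrict mixed-script labels) blunt this particular trick, but a portal page can still display whatever text it likes.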
On the larger subject, as a browser person, the real reason for
sandboxing is - I believe - privacy. One basic security assumption
for the web is that it is easy to cause a user to visit a site. A
captive portal isn't special in that regard, so I don't credit claims
that sandboxing is a security measure.
The credible reason is that you don't want a user to be tracked (or
de-anonymized) across points of network connection. That is
definitely a credible story. You don't want the network using cookies
set by a portal in network A being read by a portal at the same origin
in network B when you just took somewhat extraordinary steps to ensure
that your MAC address was different in both networks.
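The sandboxing idea amounts to partitioning portal state by network, so that same-origin cookies set on network A are simply not visible on network B. A rough sketch of that partitioning (all names here are illustrative, not any browser's actual API):

```python
# Sketch: a cookie jar partitioned by network identity. A portal at the
# same origin on two different networks gets two independent cookie
# sets, so it cannot use cookies to link the two visits.
class PartitionedCookieJar:
    def __init__(self):
        # (network_id, origin) -> {cookie_name: value}
        self._jars = {}

    def set_cookie(self, network_id, origin, name, value):
        self._jars.setdefault((network_id, origin), {})[name] = value

    def get_cookies(self, network_id, origin):
        return dict(self._jars.get((network_id, origin), {}))

jar = PartitionedCookieJar()
# The portal on network A sets a tracking cookie...
jar.set_cookie("net-A", "https://portal.example", "track", "u123")
# ...but the same origin, reached from network B, sees nothing.
print(jar.get_cookies("net-B", "https://portal.example"))  # {}
print(jar.get_cookies("net-A", "https://portal.example"))  # {'track': 'u123'}
```

This only helps, of course, if the network identity used as the partition key is not itself stable across the user's MAC randomization.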
That might not sound like much, and it is trivially defeated (see
below). The same user likely visits the same websites from both
locations, but the captive portal has a unique ability to correlate
network-level information (e.g., MAC) with persistent state. Random
sites on the internet don't have the same access.
The way to defeat this is to wait for an unencrypted HTTP session to
pass. You can observe tracking cookies and use them to de-anonymize
users. If there are no tracking cookies, then "header enrichment" can
be used to implant a cookie. We learned at the last meeting that this
is one reason that portals defeat detection: so they can fall back on
this technique.
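To illustrate the passive variant: a gateway that sees cleartext HTTP only needs to pull a persistent cookie out of passing requests and record which MAC it arrived with. A minimal sketch (the cookie name, MACs, and site are all made up for illustration):

```python
# Sketch: how a network element observing cleartext HTTP can correlate
# a network-level identifier (here, a MAC address) with a persistent
# tracking cookie, linking visits across MAC randomization.
def extract_cookie(raw_request: bytes, name: str):
    """Pull one cookie value out of a cleartext HTTP request."""
    for line in raw_request.split(b"\r\n"):
        if line.lower().startswith(b"cookie:"):
            for pair in line.split(b":", 1)[1].split(b";"):
                k, _, v = pair.strip().partition(b"=")
                if k == name.encode():
                    return v.decode()
    return None

seen = {}  # tracking-cookie value -> set of MACs observed with it

def observe(mac: str, raw_request: bytes):
    uid = extract_cookie(raw_request, "uid")
    if uid is not None:
        seen.setdefault(uid, set()).add(mac)

req = (b"GET / HTTP/1.1\r\n"
       b"Host: news.example\r\n"
       b"Cookie: uid=abc123; theme=dark\r\n\r\n")
observe("aa:aa:aa:aa:aa:aa", req)  # network A, randomized MAC #1
observe("bb:bb:bb:bb:bb:bb", req)  # network B, randomized MAC #2
print(seen["abc123"])  # both MACs now linked to the same user
```

The active variant (header enrichment) is the same bookkeeping, except the gateway first injects the identifier into a cleartext request or response when no usable cookie is present.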
If the entire web were to use HTTPS exclusively, this method might
stop working. Or users would have to restrict their cleartext
browsing to a sandbox. (We've discussed shorter cookie lifetimes for
cleartext origins on the web, but the usability concerns are basically
insurmountable right now.)