
Re: [Captive-portals] Arguments against (any) Capport "API"



On Apr 6, 2017 8:46 PM, "Martin Thomson" <[email protected]> wrote:
On 7 April 2017 at 10:44, David Bird <[email protected]> wrote:
> the more familiar "boingo.com" in the FQDN


You mean boıngo.com? Looks legit.
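
(For anyone who missed the trick: that is an IDN homograph, with U+0131
LATIN SMALL LETTER DOTLESS I standing in for "i". A quick Python check,
illustrative only, makes the substitution visible:)

    import unicodedata

    legit = "boingo.com"
    spoof = "bo\u0131ngo.com"  # renders almost identically in many fonts

    print(legit == spoof)  # False: different code points
    for ch in set(spoof) - set(legit):
        # 0x131 LATIN SMALL LETTER DOTLESS I
        print(hex(ord(ch)), unicodedata.name(ch))
    print(spoof.encode("idna"))  # the punycode (xn--...) form exposes it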


On the larger subject, as a browser person, the real reason for
sandboxing is - I believe - privacy.  One basic security assumption
for the web is that it is easy to cause a user to visit a site.  A
captive portal isn't special in that regard, so I don't credit claims
that sandboxing is a security measure.

Yes, that (all the privacy and security risks) is true of public access, generally. (Why are captive portal networks extra suspicious?) Browsers themselves might behave differently on a 'capport compliant' device, in that they would rely on the OS detection. But the browser could (and can today) simply ask the user 'want to connect to this captive portal?' and let the user decide. It is pretty safe if it is HTTPS and a URL the user is comfortable with.


The credible reason is that you don't want a user to be tracked (or
de-anonymized) across points of network connection.  That is
definitely a credible story.  You don't want cookies set by a portal
in network A being read by a portal at the same origin
in network B when you just took somewhat extraordinary steps to ensure
that your MAC address was different in both networks.

That doesn't sound like much.  And it is trivially defeated (see
below).  The same user likely visits the same websites from both
locations, but the captive portal has a unique ability to correlate
network-level information (e.g., MAC) with persistent state.  Random
sites on the internet don't have the same access.
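
To make that concrete, here is a hypothetical sketch of the correlation
a portal operator is uniquely positioned to do (the function and field
names are invented, not any portal's actual code):

    from collections import defaultdict

    # cookie_id -> set of (network, mac) observations
    observations = defaultdict(set)

    def on_portal_visit(cookie_id, network, client_mac):
        """Record the network-layer identity seen alongside the cookie."""
        observations[cookie_id].add((network, client_mac))

    # The same browser cookie shows up behind two randomized MACs:
    on_portal_visit("abc123", "network-A", "02:aa:bb:cc:dd:01")
    on_portal_visit("abc123", "network-B", "02:aa:bb:cc:dd:02")

    # Both MACs now link to one user, despite MAC randomization.
    print(observations["abc123"])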

The way to defeat this is to wait for an unencrypted HTTP session to
pass.  You can observe tracking cookies and use them to de-anonymize
users. If there are no tracking cookies, then "header enrichment" can
be used to implant a cookie.  We learned at the last meeting that this
is one reason that portals defeat detection: so they can fall back on
this technique.
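
A rough sketch of what that enrichment amounts to on the wire (the
header name here is made up; real enrichment headers vary by operator):

    def enrich_request(raw, subscriber_id):
        """Inject an identifying header into a cleartext HTTP request."""
        head, sep, body = raw.partition(b"\r\n\r\n")
        lines = head.split(b"\r\n")
        # If a tracking cookie is already present, just observe it.
        if any(l.lower().startswith(b"cookie:") for l in lines[1:]):
            return raw
        # Otherwise, implant an identifier the far end can persist.
        lines.append(b"X-Subscriber-Id: " + subscriber_id.encode())
        return b"\r\n".join(lines) + sep + body

    req = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
    print(enrich_request(req, "user-42").decode())

None of this survives HTTPS, which is exactly the point below.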

I agree, these are all issues with public access, generally. As far as defeating detection because they *want* the user's background traffic, well, they sorta have a point: they could be offering access with no portal, and the user (or apps) would be none the wiser. One benefit of the ICMP method is that defeating detection isn't possible - if (and only if) the NAS complies with the RFC and responds appropriately to blocked traffic - with or without RFC 7710 support.
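
A rough sketch of probing for that signal (assuming a NAS that rejects
blocked flows with an ICMP destination unreachable, e.g. "communication
administratively prohibited", type 3 code 13; uses scapy, needs
raw-socket privileges, and the probe target is arbitrary):

    from scapy.all import IP, TCP, ICMP, sr1

    def probe_captive(target="example.com"):
        """Return True if the network signalled that the flow is blocked."""
        syn = IP(dst=target) / TCP(dport=80, flags="S")
        resp = sr1(syn, timeout=2, verbose=False)
        return bool(
            resp is not None
            and resp.haslayer(ICMP)
            and resp[ICMP].type == 3   # destination unreachable
            and resp[ICMP].code == 13  # administratively prohibited
        )

    print("captive:", probe_captive())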

Applications that use cleartext protocols in the background on public access networks should stop doing that! The OS should have a 'public access mode' that stops cleartext apps from working (to make app developers learn the hard way)... Whether the user wants to use secure connections on public access networks is more of a judgement call, but they did select the SSID; I think their intention to use the network, if they can, is clear.

If the entire web were to use HTTPS exclusively, this method might
stop working.  Or users would have to restrict their cleartext
browsing to a sandbox.  (We've discussed shorter cookie lifetimes for
cleartext origins on the web, but the usability concerns are basically
insurmountable right now.)

Absolutely, browsers should take measures to help users avoid cleartext protocols on *any* public access network.

I think it is also true that browsers (and the OS) shouldn't care about, and shouldn't prevent users from accessing, secured resources freely available on public access networks - whether outside any captive portal (which they do prevent today) or within the captive portal walled garden (not always true today).