
Another Strike against Domain Fronting

In 2014, domain fronting became the newest obfuscation technique for covert, difficult-to-censor communication. Even today, the meek pluggable transport serves ~400GB of Tor traffic each day, at a cost of ~$3,000/month.

The basic technique is to make an HTTPS connection to the CDN directly and, once the encryption has begun, make the HTTP request to the actual backing site instead. Since many CDNs use the same “front-end cache” servers for incoming requests to all of the different sites they host, there is a disconnect between the software terminating TLS and the web server routing requests to where they need to go.
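As a minimal sketch of the idea (with hypothetical hostnames, and assuming both sites sit behind the same CDN): the TLS handshake, and the SNI field a censor can observe, name an innocuous front domain, while the Host header naming the real destination travels inside the encrypted channel.

    import http.client
    import ssl

    FRONT = "allowed-front.example.com"   # hypothetical front domain; what DNS/SNI reveal
    HIDDEN = "blocked-site.example.com"   # hypothetical backing site on the same CDN

    ctx = ssl.create_default_context()
    conn = http.client.HTTPSConnection(FRONT, 443, context=ctx)
    # The Host header is sent inside the encrypted channel, so only the CDN's
    # front-end sees which of its hosted sites the request is really for.
    conn.request("GET", "/", headers={"Host": HIDDEN})
    resp = conn.getresponse()
    print(resp.status, resp.read()[:200])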

Even as the technique became widely adopted in 2014-2015, its demise was already predicted, with practitioners in the censorship-circumvention community focused on how long it could be made to last before the next mechanism had to be found. This prediction rested on two points:

  1. The CDN companies would find themselves in a politically difficult position, supporting circumvention while also maintaining relationships with the censoring countries.
  2. The technique has security and cost implications that make it unattractive for both the CDNs and the practitioners.

We’ve since seen both of these predictions come true.

Cloudflare explicitly doesn’t support this mechanism of circumvention, and coincidentally has major Chinese partnerships and has worked to deploy into China. Google has also limited the technique at times as it has struggled with abuse (although this is moot in China, since Google’s cloud doesn’t operate there as a CDN).

In terms of cost, the most notable incident is the “Great Cannon”, which targeted not only GitHub, as widely reported, but also directed a significant amount of traffic to Amazon-hosted pages run by GreatFire, a dissident news organization, costing them significant amounts of money. GreatFire had been providing a free browser that operated by proxying all traffic through domain fronting. Due to a separate and less-reported Chinese DDoS, they ended up with a monthly bill of several tens of thousands of dollars and had to shut the service down.

The latest strike against domain fronting is a post a couple of weeks back from Cobalt Strike noting that the technique is also gaining adoption for malware command-and-control. This abuse case will further discourage CDNs from allowing the practice to continue, since there will now be many legitimate Western voices actively calling on them to stop. Enterprises attempting to track threats on their networks and CDN customers who don’t want to be blamed for attacks will both begin putting more pressure on the CDNs to remove the ability for different domains to be intermixed, and we should expect to see a continued drop in the willingness of providers to offer such a service.

Thoughts on IPv6 Measurement

About five years ago, two projects, ZMap and Masscan, helped shift the way many researchers think about the Internet. Both tools provide a highly optimized code path for sending packets and collecting replies, allowing a researcher with moderate resources to attempt connections to every computer on the IPv4 Internet in about an hour.
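For flavor, here is a toy connect()-based probe of an example range in Python. This is nothing like ZMap or Masscan internally – they send stateless raw SYN packets at line rate rather than keeping per-connection state – but it shows the probe-and-collect pattern they optimize.

    import asyncio

    async def probe(ip, port=80, timeout=1.0):
        """Return ip if a TCP connection to ip:port succeeds, else None."""
        try:
            _, writer = await asyncio.wait_for(asyncio.open_connection(ip, port), timeout)
            writer.close()
            return ip
        except (OSError, asyncio.TimeoutError):
            return None

    async def main():
        # Documentation range used as a stand-in; substitute hosts you may scan.
        targets = [f"192.0.2.{i}" for i in range(1, 255)]
        results = await asyncio.gather(*(probe(ip) for ip in targets))
        print([ip for ip in results if ip])

    asyncio.run(main())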

These techniques are widely applied to monitor the Internet-scale security of services, with prominent examples including censys.io, scans.io, and shodan.io. For the security community, they have become a first step in reconnaissance, allowing hackers to find origin IPs masked by CDNs, unadvertised points of presence, and vulnerable hosts within an organization.

While the core of the Internet and the services we actively choose to connect to remain staunchly IPv4, the networks that many end hosts sit on are adopting IPv6 more rapidly, responding to the exhaustion and density of the IPv4 address space.

This fall, a new round of research has focused on what is possible for the enumeration and exploration of the IPv6 address space. ‘You can -J reject but you can’t hide’ was presented at CCC, focusing on spidering DNS records to learn of active IPv6 addresses registered within the DNS system. Earlier in the fall, there were several sessions at IMC thinking about IPv6. Most notable was “Entropy/IP – uncovering entropy in IPv6”, which looks at how addresses are allocated in practice as seen by Akamai at the core of the network. In addition, IPv6 was the focus of a couple of WIP sessions, expressing thoughts on discovering hosts through progressive ICMP probing, as well as the continued exploration of what’s actually happening in the core as seen by Akamai.

Where does this growing understanding of wide-scale IPv6 usage take us?

  • Enumeration of candidate addresses is a new first step that will be needed for anything beyond a single prefix. Even then, scanning within a single organizational prefix can be considered an active brute-force attack, rather than the relatively ‘harmless’ reconnaissance of IPv4 scanning.
  • There are many potential sources to interact with for enumeration, including DNS records, observed network traffic, and default low-byte addresses like prefix::1. The Entropy/IP paper points out that shodan.io has already been observed adding itself as a member of the NTP pool to harvest candidate IPv6 addresses for scanning.
  • Address generation for many hosts is not fully random, embedding a MAC address, IPv4 address, or other non-random information. This can be used to discover a subset of hosts more efficiently, though still not at Internet scale (for example, 2^32 attempts to look for hosts of a specific brand within a 2^64 network address space; see the sketch after this list). Even so, scanning would still send several gigabytes of traffic to an individual network. Non-random addresses tend to be associated with servers and routers more often than with end clients.
  • Discovery of network topology is possible by enumerating where error responses to guessed addresses come back from. This doesn’t allow for discovery of individual machines either.
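As a concrete illustration of the non-random-address point above, here is a hedged sketch of enumerating EUI-64-style SLAAC addresses for a single (made-up) vendor OUI within an example /64. The prefix and OUI are illustrative, and modern hosts increasingly use privacy addresses that defeat exactly this shortcut.

    import ipaddress
    import itertools

    PREFIX = ipaddress.IPv6Network("2001:db8:1234:5678::/64")  # example prefix
    OUI = bytes.fromhex("001a2b")                              # hypothetical vendor OUI

    def eui64_candidates(oui, prefix):
        # SLAAC EUI-64: flip the universal/local bit of the first MAC octet
        # and insert ff:fe between the OUI and the NIC-specific bytes.
        for nic in itertools.product(range(256), repeat=3):
            iid = bytes([oui[0] ^ 0x02]) + oui[1:3] + b"\xff\xfe" + bytes(nic)
            yield prefix.network_address + int.from_bytes(iid, "big")

    # Only 2^24 candidates per vendor OUI, instead of 2^64 for the whole /64.
    for addr in itertools.islice(eui64_candidates(OUI, PREFIX), 3):
        print(addr)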

What do we do about it?

There will probably not be a shodan.io for IPv6 in the same way there is for IPv4. Instead, much of the wide-scale scanning of the IPv6 network will be performed through reflection from hosts discovered through their participation in other active services, for instance BitTorrent, NTP, or DNS.

Conversely, the number of vulnerable IPv6 hosts will keep growing, because they can exist for much longer before anyone finds them. This will likewise increase the value that can be extracted through scanning – both for hackers and for academics looking at Internet dynamics. We can expect to see a marketplace for addresses observed passively by ISPs, the network core, and passive services.

It’s also worth watching the watchers here: which providers are “selling me out,” so to speak? It would be worth building honeypots to observe which services and servers leak client information and lead to probing and the potential compromise of end hosts.
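A minimal sketch of such a honeypot, under assumed setup: configure a throwaway, never-advertised IPv6 address on an interface, seed it into exactly one service (say, the NTP pool or the BitTorrent DHT), and log every connection. Any prober that shows up must have learned the address from that one service.

    import socket
    from datetime import datetime, timezone

    LISTEN_ADDR = "2001:db8::dead:beef"  # example address; must be configured on an interface
    LISTEN_PORT = 80

    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    sock.bind((LISTEN_ADDR, LISTEN_PORT))
    sock.listen(16)
    while True:
        conn, addr = sock.accept()
        src, port = addr[0], addr[1]
        # Every connection here is unsolicited: the prober learned this
        # address from whatever service we seeded it into.
        print(f"{datetime.now(timezone.utc).isoformat()} probe from [{src}]:{port}")
        conn.close()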

Internet Censorship 2016

We have reached the end of 2016, as well as the annual CCC congress in Germany. I had the exciting chance to speak together with Philipp Winter on the shifting landscape of Internet censorship in 2016. The talk followed mostly the same format as last year’s, calling out the continuing normalization and ubiquity of censorship around the world.

I left congress once again energized to work on system infrastructure advancing the Internet community in the face of these existential threats.

Slides from the talk are on this site.
A writeup (in German) is on the Netzpolitik blog.

Thoughts on China’s Updated Cyber-security Regulations

On Monday, China ratified updated cybersecurity legislation that will enter into effect next June. The policy regulates a number of aspects of the Chinese Internet: what data companies need to keep on domestic servers, the interaction between companies and the government, and the interaction between companies and Chinese users.

Notably, when considering the impact on the Internet, the law includes:

  • Network operators are expected to record network security incidents and store logs for at least six months (Article 21).
    The punishment for refusing to keep logs is a fine of up to US$10,000 for the operator, and up to US$5,000 for the responsible person.
  • Services must require real-identity information for network access, telecom service, domain registration, blogging, or IM (Article 24).
    The punishment for failing to require identity is up to US$100,000 and suspension of operations.
  • Network operators must provide support to the government for national security and crime investigations (Article 28).
  • If a service discovers prohibited user-generated content, it must remove it, save logs, and report to the government (Article 47).
    The punishment for failing to do so is up to US$100,000 and closure of the website.

The concerns from foreign companies seem to center on a couple of things. The first is a fairly vague classification of ‘critical infrastructure’, which explicitly includes power, water, and other infrastructure elements, but also refers to services needed for public welfare and national security. Any such service is subject to additional monitoring requirements and must keep all data on the mainland. Companies are worried they could be classified as critical services, and that there aren’t clear guidelines about how to avoid or limit their risk of becoming subject to those additional regulations.

The other main concern is the fairly ambiguous obligation to support national security investigations by the government. There aren’t really any limits in place on how much the government can request from services, which could extend to requiring back doors or performing significant technical analysis without compensation.

My impression is that these regulations aren’t much of a surprise within China, and they are unlikely to change much about how smaller companies and individuals already experience Internet management.

Satellite

I’m excited to present Satellite, a network measurement project I’ve been working on over the last couple years, at USENIX ATC next month.

Satellite aims to understand shared CDN behavior and automatically detect censorship by regularly querying open DNS resolvers around the world.
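The core measurement loop can be sketched as follows (a hedged illustration, not Satellite’s actual code; the resolver IPs here are well-known public resolvers rather than the open resolvers Satellite discovers): ask many resolvers for the same name and compare the answer sets.

    import dns.resolver  # pip install dnspython

    OPEN_RESOLVERS = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]  # example resolvers
    NAME = "example.com"

    answers = {}
    for ip in OPEN_RESOLVERS:
        res = dns.resolver.Resolver(configure=False)
        res.nameservers = [ip]
        res.lifetime = 3.0  # seconds before giving up on a resolver
        try:
            answers[ip] = sorted(rr.address for rr in res.resolve(NAME, "A"))
        except Exception as exc:
            answers[ip] = f"error: {exc}"

    # Divergent answer sets across resolvers hint at CDN geo-mapping or tampering.
    for ip, ans in answers.items():
        print(ip, ans)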

For example, we can watch the trends in censorship in Iran using only a single, external machine.

The data for Satellite is posted publicly each week, and will shortly be merged into the OONI data set to help provide better baselines for what behavior should be occurring.

SP3

I started running a public SP3 server today. It’s a small side project I’ve hacked together over the last couple of weeks to make it easier for people to play with packet spoofing. The server works similarly to a public proxy, but with a trade-off: while it won’t send high volumes of traffic, it will let you send arbitrary IPv4 packets from any source address you want.
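For illustration, this is what sending a spoofed packet looks like locally with scapy (addresses are from documentation ranges): it requires root and an upstream network that doesn’t filter spoofed sources per BCP 38, which is exactly the constraint SP3 relays around.

    from scapy.all import IP, UDP, send  # pip install scapy

    # Illustrative addresses; the source is forged.
    pkt = IP(src="198.51.100.7", dst="203.0.113.9") / UDP(sport=4000, dport=4000) / b"hello"
    send(pkt)  # requires root; the receiver sees 198.51.100.7 as the sender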

There are a few fun applications that need this capability that I’ve been thinking of: helping with NAT holepunching of TCP connections; characterizing firewall routing policies; and providing cover traffic in circumvention protocols. I think there are others as well, so I wanted to start running a server to see what people come up with.

The code is on GitHub.