
Another Strike against Domain Fronting

In 2014, domain fronting became the newest obfuscation technique for covert, difficult-to-censor communication. Even today, the meek pluggable transport serves ~400GB of Tor traffic each day, at a cost of ~$3,000/month.

The basic technique is to make an HTTPS connection to the CDN directly, naming an innocuous front domain, and then, once the encryption has begun, send the HTTP request for the actual backing site instead. Since many CDNs use the same “front-end cache” servers for incoming requests to all of the different sites they host, there is a disconnect between the software terminating TLS, which sees only the front domain, and the web server routing requests to where they need to go, which sees the real Host header.
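
As a minimal sketch of the mechanism in Go (the domain names here are placeholders, not a real front/backend pair on any particular CDN), the split between what the censor sees and what the CDN sees comes down to the SNI value versus the Host header:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// The censor sees only a TLS connection to the front domain (via SNI)...
	tr := &http.Transport{
		TLSClientConfig: &tls.Config{ServerName: "front.cdn-example.com"},
	}

	// ...while the Host header, sent after encryption begins, names the
	// actual backing site that the CDN's front-end cache routes to.
	req, err := http.NewRequest("GET", "https://front.cdn-example.com/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "hidden.example.net"

	resp, err := tr.RoundTrip(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}
```

Whether the request actually reaches the backing site depends entirely on the CDN continuing to route mismatched SNI and Host values, which is exactly the behavior now under pressure.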

Even as the technique became widely adopted in 2014-2015, its demise was already predicted, with practitioners in the censorship circumvention community focused on how long it could be made to last until the next mechanism was found. This prediction rested on two points:

  1. The CDN companies would find themselves in a politically difficult position, caught between supporting circumvention and maintaining relationships with the censoring countries.
  2. The technique has security and cost implications that make it unattractive for both the CDNs and the practitioners.

We’ve seen both of these predictions borne out.

Cloudflare explicitly doesn’t support this mechanism of circumvention, and coincidentally has major Chinese partnerships and has worked to deploy into China. Google has also limited the technique at times as it has struggled with abuse (although this is moot in China, since Google’s cloud doesn’t operate there as a CDN).

In terms of cost, the most notable incident is the “Great Cannon”, which targeted not only GitHub, as widely reported, but also directed a significant amount of traffic to Amazon-hosted pages run by GreatFire, a dissident news organization, costing them real money. GreatFire had been providing a free browser that operated by proxying all traffic through domain fronting. Due to a separate and less-reported Chinese DDoS, they ended up with a monthly bill of several tens of thousands of dollars and had to shut down the service.

The latest strike against domain fronting is a post from a couple of weeks back by Cobalt Strike noting that the technique is also gaining adoption for malware command-and-control (C&C). This abuse case gives CDNs a further incentive to stop allowing the practice, since there will now be many legitimate Western voices actively calling on them to do so. Enterprises attempting to track threats on their networks and CDN customers who don’t want to be blamed for attacks will both begin putting more pressure on the CDNs to remove the ability for different domains to be intermixed, and we should expect a continued drop in providers’ willingness to offer such a service.

First-party Google Analytics

Third-party analytics services are suffering from the growing prevalence of ad blocking, tracking protection, and the trend of minimizing connections and requests. From a site owner’s perspective, however, usage information remains important for measuring site growth.

My expectation is that we are already on the curve where ads and tracking software will be more tightly integrated into websites, making it significantly more difficult for clients to disambiguate “good” and “bad” scripts, a distinction mostly made today from the URL.

Google already provides the tools needed to relay analytics traffic through an intermediate server, and it took under an hour to put together a proof of concept that removes the last third-party requests required when viewing this page. In essence, my server proxies all the requests that would normally go to Google and adds a couple of extra parameters to identify the real client.

The modified loading script for Google Analytics and the corresponding nginx configuration to make my server a relay are here.
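
For illustration only (my actual setup is the nginx configuration linked above), a rough Go sketch of the same relaying idea looks like this; the `/collect` path and port are arbitrary, and the `uip` query parameter is the Measurement Protocol's IP-override field used to pass the real client address along:

```go
package main

import (
	"log"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
)

// clientIP extracts the visitor's address from the incoming connection.
func clientIP(r *http.Request) string {
	host, _, err := net.SplitHostPort(r.RemoteAddr)
	if err != nil {
		return r.RemoteAddr
	}
	return host
}

func main() {
	// Forward hits upstream to Google's collection endpoint.
	target, _ := url.Parse("https://www.google-analytics.com")
	proxy := httputil.NewSingleHostReverseProxy(target)

	orig := proxy.Director
	proxy.Director = func(r *http.Request) {
		orig(r)
		r.Host = target.Host
		// Every hit now originates from this relay, so tell Google who the
		// real client was via the IP-override parameter.
		q := r.URL.Query()
		q.Set("uip", clientIP(r))
		r.URL.RawQuery = q.Encode()
	}

	// The loading script on the page points at this first-party path
	// instead of www.google-analytics.com.
	http.Handle("/collect", proxy)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```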

Thoughts on China’s Updated Cyber-security Regulations

On Monday, China ratified updated cybersecurity legislation that will take effect next June. The policy regulates a number of aspects of the Chinese Internet: what data companies need to keep on domestic servers, the interaction between companies and the government, and the interaction between companies and Chinese users.

Notably, when considering the impact on the Internet, the law includes:

  • Network operators are expected to record network security incidents and store logs for at least 6 months (Article 21).
    The punishment for refusing to keep logs is a fine of up to 10,000 USD for the operator, and up to 5,000 USD for the responsible person.
  • Services must require real-identity information for network access, telecom service, domain registration, blogging, or IM (Article 24).
    The punishment for failing to require identity is up to 100,000 USD and suspension of operations.
  • Network operators must provide support to the government for national security and crime investigations (Article 28).
  • If a service discovers prohibited user-generated content, it must remove it, save logs, and report to the government (Article 47).
    The punishment for this is up to 100,000 USD and closure of the website.

The concerns from foreign companies seem to center on a couple of things. The first is the fairly vague classification of ‘critical infrastructure’, which explicitly includes power, water, and other infrastructure elements, but also refers to services needed for public welfare and national security. Any such service gets additional monitoring requirements and needs to keep all data on the mainland. Companies are worried they could be classified as a critical service, and that there aren’t clear guidelines on how to avoid or limit the risk of becoming subject to those additional regulations.

The other main concern is the fairly ambiguous requirement to support national security investigations by the government. There don’t appear to be any real limits on how much the government can request from services, which could include requiring them to add back doors, or to perform significant technical analysis without compensation.

My impression is that these regulations aren’t much of a surprise within China, and they are unlikely to change how smaller companies and individuals already experience Internet management.

Watch your PAC

In the last week at Black Hat / DEF CON, two groups looked deeply at one of the lesser-known mechanisms for network policy, Proxy AutoConfig. (In particular, badWPAD by Maxim and Crippling HTTPS with unholy PAC by SafeBreach.)

Proxy AutoConfig (PAC) is a mechanism used by many organizations to configure an advanced policy for connecting to the Internet. A PAC file is JavaScript that dynamically determines how different connections should be made and which proxy they should use. In particular, international companies with satellite offices often find the PAC system useful for routing some traffic through a corporate proxy for compliance or geographical reasons, while other traffic is routed directly to the Internet.

These two talks both focus on what a malicious individual could do to attack the standard, and each finds an interesting line of attack. The first is that the PAC file is allowed to make DNS requests in determining how to proxy connections, and in many browsers it sees the full URL being accessed rather than only the domain. This means that even when the user is communicating with a remote server over HTTPS, the local network can learn the full URL being visited. The second has to do with how computers discover PAC files on their local network: by looking up a well-known name and fetching a file called `wpad.dat`, which anyone who can answer that lookup gets to supply.
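
To make the second vector concrete, here is a small Go sketch of what DNS-based WPAD discovery amounts to from the client's side, assuming the network's search domain makes a bare `wpad` name resolvable; whoever answers that lookup gets to hand the client its proxy policy:

```go
package main

import (
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// A vulnerable client resolves the bare name "wpad" (expanded through the
	// DHCP/DNS search domain) and fetches its proxy policy from whoever answers.
	addrs, err := net.LookupHost("wpad")
	if err != nil {
		fmt.Println("no wpad host on this network:", err)
		return
	}
	fmt.Println("wpad resolves to", addrs)

	resp, err := http.Get("http://wpad/wpad.dat")
	if err != nil {
		fmt.Println("could not fetch wpad.dat:", err)
		return
	}
	defer resp.Body.Close()
	pac, _ := io.ReadAll(resp.Body)
	fmt.Printf("PAC file (%d bytes):\n%s\n", len(pac), pac)
}
```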

While there is certainly the potential for an attacker to target a victim through these technologies, they are more accessible and arguably more valuable to an ISP or state-level actor interested in passive surveillance. This explicit policy for connectivity is not inherently more invasive than policies employed by many ISPs already, and could likely be deployed on many networks without consumer push-back as a performance enhancement for better caching. It is also appropriate for targeted surveillance, since vulnerability can be determined passively.

The viability of surveillance through WPAD and PAC is a bit of a mixed bag. Most ISPs already use DHCP and set a “search domain”, which will result in a recognizable request for proxy information from vulnerable clients. While organizations often require all clients to enable discovery, this is not true of many consumer machines. Unfortunately, some versions of Windows have proxy discovery enabled by default.

Nmap, the tool used for network exploration and often pitched at facilitating network attacks, already has support for WPAD. In contrast, network status and monitoring tools like Netalyzr and OONI do not yet monitor local proxy status and won’t provide any indication of malicious behavior.

Stunning

I’ve started to dive once again into the mess of connection establishment. Network address translation (NAT) is a reality today for most Internet users, and poses a significant hurdle to creating user-to-user (or peer-to-peer) connections. NAT is the process your router uses to provide multiple internal (192.168.x.x) addresses that are all visible as only a single external address on the Internet. The challenge this creates is that if someone outside wants to connect to your computer, they have to figure out how to get the router to send their traffic to you, rather than dropping it or sending it to another computer on your network.

Without configuring your router to add a ‘port forwarding’ rule, it isn’t supposed to do this, so many of the connection establishment procedures are really ways to trick your NAT into forwarding traffic without realizing what’s happening.

There are two main transport protocols on the Internet today: UDP and TCP. UDP is stateless: each “packet” of data is its own self-contained message. In contrast, TCP represents a longer “stream” of data – many messages sent with an explicit ordering. TCP connections are much harder to trick routers into establishing, and there has been little work there.

The current generation of p2p systems is led by high-bandwidth applications that want to offload traffic from central servers in order to save on bandwidth costs. Good examples are Google Hangouts and other VoIP (voice over IP) traffic.

These systems establish a channel for sending UDP traffic between two computers, both behind NAT routers, using a system called ICE (Interactive Connectivity Establishment). This is a complex dance with multiple sub-protocols used to try several different ways of establishing connectivity and tricking the routers.

One of the key systems used by ICE is a publicly visible server that speaks a protocol called STUN. STUN servers provide a way for a client to open a UDP connection through its router to a server that is known to be able to receive messages, and then learn what that connection looks like from outside its router. The client can then share that external view of its connection with another peer, which may be able to send messages to the same external address and port and have them forwarded back to the client.
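
As a rough illustration of that exchange (this is not my library, and `stun.l.google.com:19302` is just one example of a public server), a STUN binding request is only a 20-byte header, and the external address comes back in an XOR-MAPPED-ADDRESS attribute:

```go
package main

import (
	"crypto/rand"
	"encoding/binary"
	"fmt"
	"net"
)

func main() {
	conn, err := net.Dial("udp", "stun.l.google.com:19302")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Binding Request: type 0x0001, zero-length body, magic cookie,
	// and a random 96-bit transaction ID.
	req := make([]byte, 20)
	binary.BigEndian.PutUint16(req[0:], 0x0001)
	binary.BigEndian.PutUint32(req[4:], 0x2112A442)
	rand.Read(req[8:20])
	conn.Write(req)

	resp := make([]byte, 1500)
	n, err := conn.Read(resp)
	if err != nil {
		panic(err)
	}

	// Walk the attributes looking for XOR-MAPPED-ADDRESS (0x0020).
	for off := 20; off+4 <= n; {
		attrType := binary.BigEndian.Uint16(resp[off:])
		attrLen := int(binary.BigEndian.Uint16(resp[off+2:]))
		if attrType == 0x0020 && attrLen >= 8 {
			v := resp[off+4:]
			port := binary.BigEndian.Uint16(v[2:]) ^ 0x2112
			addr := binary.BigEndian.Uint32(v[4:]) ^ 0x2112A442
			fmt.Printf("external address: %s:%d\n",
				net.IPv4(byte(addr>>24), byte(addr>>16), byte(addr>>8), byte(addr)), port)
			return
		}
		off += 4 + attrLen
		if pad := attrLen % 4; pad != 0 { // attributes are 32-bit aligned
			off += 4 - pad
		}
	}
	fmt.Println("no XOR-MAPPED-ADDRESS in response")
}
```

Everything past this point – gathering candidates, pairing them up, and running connectivity checks – is where the real complexity of ICE lives.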

One unfortunate aspect of this situation is that the complexity of these systems has led to very few implementations. That is a shame, since libraries making it easy to reuse these techniques would allow more p2p systems to keep working on the modern Internet without forcing users to manually configure their routers.

I’ve started work on a standalone Go implementation of the ICE connectivity stack. Over the weekend I reached the first milestone – the library can create a STUN connection and learn the external appearance of the connection as reported by the STUN server.

Satellite

I’m excited to present Satellite, a network measurement project I’ve been working on over the last couple years, at USENIX ATC next month.

Satellite takes a look at understanding shared CDN behaviors and automatically detecting censorship by regularly querying open DNS resolvers around the world.
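
The core measurement is simple. As a hedged sketch (the `203.0.113.7` resolver stands in for an open resolver inside a network of interest, and `example.com` for a monitored domain), Go's resolver can be pointed at an arbitrary server and the answers compared against a trusted baseline:

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// resolveVia asks one specific open resolver for a domain's addresses.
func resolveVia(resolver, domain string) ([]string, error) {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, "udp", resolver+":53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	return r.LookupHost(ctx, domain)
}

func main() {
	domain := "example.com"
	// A well-known public resolver as a baseline, and an open resolver
	// inside the network we want to observe.
	for _, resolver := range []string{"8.8.8.8", "203.0.113.7"} {
		ips, err := resolveVia(resolver, domain)
		fmt.Printf("%s via %s -> %v %v\n", domain, resolver, ips, err)
	}
}
```

Satellite's real pipeline does this at scale, across many resolvers and many domains; the point here is only that a divergent answer from inside a censored network is visible from a single external vantage point.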

For example, we can watch the trends in censorship in Iran using only a single, external machine.

The data for Satellite is posted publicly each week, and will shortly be merged into the OONI data set to help provide better baselines for what behavior should be occurring.

Contextualizing RedStar OS

At the 2015 Chaos Communication Congress, Florian and Niklaus presented an analysis of Red Star OS 3.0, the system which leaked online a year ago.

In their talk they provide technical backing for several observations about the system which have gained some press attention. The first is that the operating system is designed without obvious backdoors and does a reasonable job on security, which implies it is aimed at a serious, internal market. The second is that the system tracks accessed content, a technique also known as digital watermarking. This can be seen as a malicious attempt to control users of the system, which is the dominant interpretation in the press. However, it’s worth pointing out that this interpretation depends on a lot of context, which we don’t have, about how the system is actually used.

We know that RedStar is developed by KCC, the Korea Computer Center, which is one of the large government technology labs. We also know that part of KCC’s business has been industrial contract work; they’ve run external branches intermittently and work with foreign clients. So far, as pointed out in Florian’s talk, the only computers observed to run Red Star are some of the publicly Internet-facing servers run in the country, like naenara.com.kp. It is not unreasonable to expect that these servers are operated by KCC as a contract service for the relevant entities.

First, I want to take a somewhat skeptical look at the purpose of this watermarking. I’ll admit that it absolutely introduces the capacity for surveillance, but I think in this case that’s a largely irrelevant point from a human rights perspective. To begin with, this OS, as far as we know, is only being used in industrial settings. We’ve seen older versions of RedStar in e-libraries and show computer labs around the country, but so far version three has not been deployed to these semi-public machines. Computers available in stores that would be bought for personal ownership universally run Windows, and that’s also what we see on the personal laptops of the PUST students. The surveillance chain insinuated in the talk assumes that most machines are running the new OS, which is absolutely not the case.

Instead, we can see this development as a reaction to two things we know to be pressing issues in the country. The first is the ability to clean up after viruses that have spread through an industrial network. KCC also develops its own antivirus software, and students at PUST often express concern about malware and about securing systems against attacks from foreign state-level actors. This seems like a reasonable concern, given that such attacks have been admitted to. Having lineage on files passed around on USB sticks lets you find which other computers on your network have been infected. The second, in the same vein, is digital auditing within an office, and here the watermarking is no more intrusive than the practices commonly in place in most global companies. To put it succinctly: the capability is one we ourselves use and know to have value – but we’re scared of its potential for misuse, though we haven’t seen evidence of that yet.

Recently, Joshua Stanton made the claim that this evidence of watermarking in RedStar should cause us to reconsider current academic engagement with the country. In particular, he points to a long-standing interaction with Syracuse University. The cited report on this collaboration mentions:

Areas of particular interest included a secure fax program (this is now being marketed through a Japanese company), machine translation programs, digital copyright and watermarking programs, and graphics communication via personal digital assistants.

One trap this line of reasoning falls into is the common perception that North Korea is all one entity, somehow all working malevolently together to subvert whatever assistance is provided. In reality, the country, like any other, has many different organizations and bureaus, with different groups jockeying for power and substantial bureaucracy. The fact that the report mentions PIC, a rival computing center, is probably enough to indicate that the Syracuse interaction wasn’t attached to KCC. Several other arguments can be made to separate this instance from the observed watermarking:

  1. The actual collaboration, as noted in the same report, was on systems assurance. As a computer scientist, I’m willing to say that digital watermarking is not in that scope.
  2. Students at Kim Chaek get a standard undergraduate computer science education, but have no exposure to Linux programming. The curriculum that the Kim Chaek graduates I’ve interacted with have gone through only covers programming in a Windows environment.
  3. Information on digital watermarking is easily accessible on the Internet. There’s no reason to expect that the US academics had any more knowledge, or conveyed anything better, than the books and online resources that KCC could easily access and translate on its own.

I’m a strong believer in these arguments, and they cause me to remain in support of the Syracuse-style (and PUST, for that matter) interactions with university students in Pyongyang. I think there is a strong personal benefit in building these relationships. Without engagement, it’s really hard to change perceptions. These are some of the rare opportunities we have to reach the future middle-class and well-connected people of Pyongyang and give them something more personal to associate with the US than just an evil government. In addition, these interactions are how the rest of the world learns about the state of technology in the country and is even able to have the conversation about whether Red Star is a surveillance tool.

SP3

I started running a public sp3 server today. It’s a small side project I’ve hacked together over the last couple of weeks to make it easier for people to play with packet spoofing. The server works similarly to a public proxy, with the trade-off that while it won’t send high volumes of traffic, it will allow you to send arbitrary IPv4 packets from any source address you want.

There are a few fun applications needing this capability that I’ve been thinking of: helping with NAT hole-punching of TCP connections, characterizing firewall routing policies, and providing cover traffic in circumvention protocols. I think there are others as well, so I wanted to start running a server to see what people come up with.
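
To give a sense of what “arbitrary IPv4 packets” means in practice, here is a sketch that assembles a spoofed-source UDP packet by hand in Go; the addresses are documentation placeholders, and sp3’s actual submission API isn’t shown here, since this only builds the bytes:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"net"
)

// checksum computes the standard Internet checksum over b.
func checksum(b []byte) uint16 {
	var sum uint32
	for i := 0; i+1 < len(b); i += 2 {
		sum += uint32(binary.BigEndian.Uint16(b[i:]))
	}
	if len(b)%2 == 1 {
		sum += uint32(b[len(b)-1]) << 8
	}
	for sum>>16 != 0 {
		sum = (sum >> 16) + (sum & 0xffff)
	}
	return ^uint16(sum)
}

// spoofedUDP builds an IPv4+UDP packet claiming to come from src.
func spoofedUDP(src, dst net.IP, sport, dport uint16, payload []byte) []byte {
	udp := make([]byte, 8+len(payload))
	binary.BigEndian.PutUint16(udp[0:], sport)
	binary.BigEndian.PutUint16(udp[2:], dport)
	binary.BigEndian.PutUint16(udp[4:], uint16(len(udp)))
	copy(udp[8:], payload) // a zero UDP checksum means "not computed" over IPv4

	ip := make([]byte, 20+len(udp))
	ip[0] = 0x45                                        // version 4, 20-byte header
	binary.BigEndian.PutUint16(ip[2:], uint16(len(ip))) // total length
	ip[8] = 64                                          // TTL
	ip[9] = 17                                          // protocol: UDP
	copy(ip[12:16], src.To4())                          // the spoofed source address
	copy(ip[16:20], dst.To4())
	binary.BigEndian.PutUint16(ip[10:], checksum(ip[:20]))
	copy(ip[20:], udp)
	return ip
}

func main() {
	pkt := spoofedUDP(net.ParseIP("198.51.100.9"), net.ParseIP("203.0.113.4"),
		4000, 53, []byte("hello"))
	fmt.Printf("%d-byte spoofed packet ready for a sender that can emit it\n", len(pkt))
}
```

A normal socket won’t send this, of course – the point of sp3 is to provide a sender that will.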

The code is on GitHub.