Tag Archives: tech

A whirlwind trip to Beirut

Through a series of unlikely events, I found myself with the opportunity to visit Beirut for a week in early March of 2018. It was a great experience, and it challenged many of the stereotypes I had developed about both the Middle East and life in proximity to conflict zones.

The most striking aspect of Lebanon to me was the presence and handling of the refugee situation in the area. Lebanon has long had a significant area of refugee camps in the south for those displaced by the conflict in Palestine. More recently, a sizable refugee population has entered the country fleeing the Syrian conflict. Today, Lebanon hosts more refugees per capita than any other country, which is a source of conflict and tension in many parts of the country.

Camps, at least the dense clusters of refugees we see pictured in Western news, do not reflect the reality I found in Lebanon. At least in the portion of the eastern countryside I saw, refugees are situated in small clusters of a few families at the edges of existing towns and cities. While shelter construction is rushed, as families arrive and quickly need places to stay, there’s significant local variability in how much time and what resources are available to construct more livable dwellings. On the ground, the competence and workload of the local NGOs and community members is probably the biggest factor in outcomes. The structures I saw had power, TVs, and Android phones charging.

I was caught off guard, in a good way, by the urban population center of Beirut. First, Beirut continues to exist as a melting pot of many different ethnicities and cultures. Second, there was a general tolerance and liberalism that exceeded what I’ve seen in the UAE or Pakistan. Third, that liberalism translated into a much less pervasive security apparatus than I was expecting given the location and the strife in the region. I needed to provide a passport as identification for hotels, but did not need it for travel within the country, and did not need to show ID for access to school campuses or businesses. Part of that is white privilege, but in general there was no infrastructure to support any meaningful restriction of movement or exclusion of groups from public areas.

I was likewise surprised by the seeming ease with which people were able to travel between Lebanon and Syria. For the demo day of a Syrian entrepreneurship bootcamp, a number of spectators traveled to Beirut for the day from Damascus. The general sentiment I heard from several Lebanese was that the country is generally safe, but that as you get toward the edges, it’s preferable to travel with someone from the area who knows people. It’s often non-obvious, but traveling with someone who already has relationships with those in the region seems to be the accepted way of keeping situations defused.

In terms of connectivity, much of the country’s strain comes from the surrounding conflict, which has left it without solid terrestrial links to the rest of the world. Most Internet traffic is instead routed through an undersea cable to Cyprus, which limits the overall capacity available to the country. In turn, this leads to relatively expensive fixed-line Internet pricing, with many people opting for mobile Internet; mobile connections can often be cheaper and faster than the DSL providers. In rural areas, I heard of communities sharing mobile connections through hotspots or tethering to a connected phone.

One of the signs I found heartening was that at the makerspace in Beirut, there were members with Tor Project and Internet activism stickers on their laptops. The ability to openly express support for those causes is a great sign that civil society is able to function without significant pressure on that front.


I’m very excited to have two talks at CCC at the end of the month. The bulk of accepted talks can be seen and voted on at the CCC “halfnarp”.

The first talk is on the Internet in Cuba. It expands upon the talk I presented at IMC last month to provide additional color on what Internet access is really like in Cuba, and what the community there is doing to create LANs and other alternatives to the official but expensive ETECSA service.

The second talk looks again at technology in Pyongyang. Since 2014, there have been a number of talks about the largely closed-off tech ecosystem there, but we continue to get only a few glimpses of what’s going on, and visibility is becoming harder as broader tensions rise. My goal is to propose a path toward more, rather than less, transparency, because it is a really fascinating place.

The talks should both be recorded, and might even be streamed. If you’re one of the (I hear it could be up to 16,000) participants, I hope to see you in Leipzig!

Accessing gnome-keyring on a Mac

One of the more common password managers in Linux environments is the gnome-keyring, which is split into a service (gnome-keyring-daemon) and a user interface (most commonly, seahorse).

After a bit of fiddling over the last couple of weeks, I got this system compiled and running on a Mac with only a little bit of pain.

On the off chance that it saves some pain for someone trying to do the same thing, here are the basic steps I needed to take:

brew install autoconf automake dbus gettext gnome-icon-theme gobject-introspection gtk+3 gtk-doc intltool libffi libgcrypt libtool p11-kit pkg-config vala
brew install libsecret --with-vala

mkdir keyring-buildenv
cd keyring-buildenv

mkdir /usr/local/opt/seahorse

git clone https://github.com/GNOME/gcr
cd gcr
wget https://gist.githubusercontent.com/willscott/fb5d50eba8a2fda17b7ead7d6e6ed98d/raw/5dcdc33f617e1196d5b365dda6b3b8e798f6b644/0001-patch-for-osx-compilation.patch
git apply 0001-patch-for-osx-compilation.patch
automake -a
PATH=/usr/local/opt/gettext/bin/:$PATH ./configure --enable-valgrind=no --enable-vala=yes --disable-nls --prefix=/usr/local/opt/seahorse
make install

cd ..
git clone https://github.com/GNOME/gnome-keyring
cd gnome-keyring
automake -a
PATH=/usr/local/opt/gettext/bin/:$PATH PKG_CONFIG_PATH=/usr/local/opt/libffi/lib/pkgconfig/:/usr/local/opt/seahorse/lib/pkgconfig/ ./configure --disable-valgrind --without-libcap-ng --disable-doc --disable-pam --disable-ssh-agent --disable-selinux --disable-p11-tests --disable-nls --prefix=/usr/local/opt/seahorse
make install

cd ..
git clone https://github.com/GNOME/seahorse
cd seahorse
automake -a
PATH=/usr/local/opt/gettext/bin/:$PATH PKG_CONFIG_PATH=/usr/local/opt/libffi/lib/pkgconfig/:/usr/local/opt/seahorse/lib/pkgconfig/ ./configure --disable-ldap --disable-hkp --disable-sharing --disable-ssh --disable-pkcs11 --prefix=/usr/local/opt/seahorse/
make install

To run, you’ll need to start these components connected by a D-Bus instance.
The following script seems to accomplish this:


#!/bin/sh
# Directory holding the D-Bus socket.
HERE=$(cd "$(dirname "$0")" && pwd)

# dbus session
dbus-daemon --session --nofork --address=unix:path=$HERE/unix_listener &
DPID=$!

# keyring daemon
GSETTINGS_SCHEMA_DIR=/usr/local/opt/seahorse/share/glib-2.0/schemas/ DBUS_SESSION_BUS_ADDRESS=unix:path=$HERE/unix_listener ./gnome-keyring/gnome-keyring-daemon --start --foreground &
KPID=$!

# prompter
GSETTINGS_SCHEMA_DIR=/usr/local/opt/seahorse/share/glib-2.0/schemas/ DBUS_SESSION_BUS_ADDRESS=unix:path=$HERE/unix_listener ./gcr/gcr-prompter &
GPID=$!

# user interface (runs until quit)
GSETTINGS_SCHEMA_DIR=/usr/local/opt/seahorse/share/glib-2.0/schemas/ DBUS_SESSION_BUS_ADDRESS=unix:path=$HERE/unix_listener ./seahorse/seahorse

# cleanup
kill $GPID
kill $KPID
kill $DPID


Last week I talked briefly about the state of open Internet measurement for network anomalies at IETF 98. This was my first time attending an IETF meeting in person, and it was very useful for getting a better understanding of how to navigate the standards process, how it’s used by others, and what value can be gained from it.

A couple highlights that I took away from the event:

There’s concern throughout the IETF about solving the privacy leaks in the existing protocols for general web access. Three major points in the protocol stack need to be addressed and are under discussion as part of this effort. The first is coming up with a successor to DNS that provides confidentiality; this, I think, is going to be the most challenging point. The second is coming up with an SNI equivalent that doesn’t send the requested domain in plain text. The third is adapting the current public Certificate Transparency process to provide confidentiality for the specific domains that are issued certificates, while maintaining the accountability provided by the system.

Confidential DNS

There are two proposals with traction for encrypting DNS that I’m aware of. Neither fully solves the problem, but both provide reasonable ways forward. The first is DNSCrypt, a protocol with support from entities like Yandex and Cloudflare. It maintains a stateless UDP protocol, and encrypts requests and responses against server and client keys. There are working client proxies for most platforms (although installation on mobile is hacky) and a set of running providers. The other alternative, which was represented at IETF and seems to be preferred by the standards community, is DNS over TLS. The benefit here is that there’s no new protocol, meaning less code needs to be audited to gain confidence in the security properties of the system. There are some working servers and client proxies available for this as well, but the community seems more fragmented, unfortunately.

The eventual problem that isn’t yet addressed is that you still need to trust some remote party with your DNS queries: neither proposal changes the underlying system, in which the work of DNS resolution is performed by someone chosen by the local network. Current proxies allow the client to choose this party instead, but that doesn’t remove the trust issue, doesn’t work well with captive portals, and doesn’t scale to widespread deployment. It also doesn’t prevent that third party from tracking the chain of DNS requests made by the client and getting a pretty good idea of what the client is doing.
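To make the DNS over TLS option concrete, here’s a minimal sketch of what a client actually sends: a standard DNS wire-format query, carried inside a TLS session with the two-byte length prefix that RFC 7858 borrows from DNS over TCP. The resolver address and query name are just illustrative choices, and the final network-facing function is an untested sketch:

```python
import socket
import ssl
import struct

def build_dns_query(name, qtype=1, txid=0x1234):
    """Build a standard DNS wire-format query (RFC 1035). qtype=1 is an A record."""
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)  # RD flag set, 1 question
    qname = b"".join(bytes([len(label)]) + label.encode() for label in name.split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN

def frame_for_tls(msg):
    """DNS over TLS (RFC 7858) reuses DNS-over-TCP framing: a 2-byte length prefix."""
    return struct.pack(">H", len(msg)) + msg

def query_over_tls(name, resolver="1.1.1.1"):
    """Send a query to a DNS-over-TLS resolver on port 853 (requires network access)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((resolver, 853), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=resolver) as tls:
            tls.sendall(frame_for_tls(build_dns_query(name)))
            (length,) = struct.unpack(">H", tls.recv(2))
            return tls.recv(length)  # raw DNS response, same wire format as the query
```

Note that the query inside the tunnel is unchanged from classic DNS, which is exactly why the trust problem above survives: the resolver still sees every name you ask about.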

Hidden SNI

SNI, or Server Name Indication, is a step at the beginning of an HTTPS connection where the client tells the server which domain it wants to talk to. This is a critical part of the protocol, because it allows a single IP address to host HTTPS servers for multiple domains. Unfortunately, it also allows the network to detect and potentially block requests at domain, rather than IP, granularity.
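To see why this leaks, it helps to look at the bytes: the server_name extension (RFC 6066) carries the hostname unencrypted in the ClientHello, so any on-path observer can read it before any encryption starts. Below is a rough sketch that builds and parses the extension body; the hostname is an arbitrary example:

```python
import struct

def build_sni_extension(hostname):
    """Build the body of a TLS server_name extension (RFC 6066) — plaintext bytes."""
    name = hostname.encode()
    entry = b"\x00" + struct.pack(">H", len(name)) + name  # type 0 = host_name
    return struct.pack(">H", len(entry)) + entry  # server_name_list length prefix

def parse_sni_extension(body):
    """What an on-path observer can do: read the hostname straight out of the hello."""
    (list_len,) = struct.unpack(">H", body[:2])
    entries, pos = body[2:2 + list_len], 0
    while pos < len(entries):
        name_type = entries[pos]
        (name_len,) = struct.unpack(">H", entries[pos + 1:pos + 3])
        if name_type == 0:  # host_name entry
            return entries[pos + 3:pos + 3 + name_len].decode()
        pos += 3 + name_len
    return None
```

The round trip works with no keys involved at all, which is the whole problem: the name is simply not protected.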

Proposals for encrypting the SNI have been around for a couple of years. Unfortunately, they did not get included in TLS 1.3, which means it will be a while before the next iteration of the standard provides another opportunity to include this update.

The good news was that there seems to be continued interest in figuring out ways to protect the SNI of client requests, though I’m not aware of a current concrete proposal.

Certificate Transparency Privacy

Certificate Transparency is an addition to the HTTPS ecosystem that enforces additional accountability in the certificate authority system. It requires certificate authorities (CAs) to publish a public log of all certificates they issue, so that third parties can audit the list and make sure certificates haven’t been secretly mis-issued. While a great feature for accountability and web security, it also opens an additional channel through which the list of domains with SSL certificates can be enumerated. This includes internal or private domains that the owner would like to keep obscure.
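The auditability comes from a Merkle tree over all logged certificates, as specified in RFC 6962: leaf and interior hashes use distinct one-byte prefixes so the two can’t be confused. Here’s a simplified sketch of that hashing (assuming a power-of-two number of entries; the real algorithm handles arbitrary sizes by splitting at the largest power of two):

```python
import hashlib

def leaf_hash(entry):
    """RFC 6962 Merkle leaf hash: SHA-256 over a 0x00 prefix plus the log entry."""
    return hashlib.sha256(b"\x00" + entry).digest()

def node_hash(left, right):
    """Interior node hash: SHA-256 over a 0x01 prefix plus both child hashes."""
    return hashlib.sha256(b"\x01" + left + right).digest()

def tree_head(entries):
    """Merkle tree head over a power-of-two-sized list of entries (simplified)."""
    level = [leaf_hash(e) for e in entries]
    while len(level) > 1:
        level = [node_hash(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```

Because every certificate must appear as a leaf to be auditable, anyone walking the log can enumerate the domain names — which is exactly the privacy channel at issue.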

As Google and others have moved to require CT logging from all authorities as a condition of browser certificate validity, this issue is again at the fore. There’s been work on addressing the problem, including a cryptographic proposal and the IETF proposal for domain label redaction, which seems to be advancing through the standards process.

There remains some way to go to migrate to protocols that provide real protection against a malicious network, but there’s willingness and work to get there, which is at least a start.

Another Strike against Domain Fronting

In 2014, domain fronting became the newest obfuscation technique for covert, difficult-to-censor communication. Even today, the meek pluggable transport serves ~400GB of Tor traffic each day, at a cost of ~$3,000/month.

The basic technique is to make an HTTPS connection to the CDN directly, and then, once the encryption has begun, make the HTTP request to the actual backing site instead. Since many CDNs use the same front-end cache servers for incoming requests to all of the different sites they host, there is a disconnect between the software handling SSL and the routing web server proxying requests to where they need to go.
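As a rough sketch of that disconnect, the fragment below separates what the network observes (a DNS lookup, TLS connection, and SNI pointed at the front domain) from what travels only inside the encrypted tunnel (the Host header naming the real destination). The domain names are hypothetical, and many CDNs now reject such mismatched requests:

```python
import http.client

def fronting_plan(front_domain, hidden_domain):
    """Summarize what each observer sees: the network sees only front_domain
    (in DNS, SNI, and the certificate); the Host header naming the real
    destination is visible only to the CDN's front-end after decryption."""
    return {"sni": front_domain, "host_header": hidden_domain}

def fronted_request(front_domain, hidden_domain, path="/"):
    """Perform a fronted request (requires network; hypothetical domains).
    The CDN's routing layer dispatches on Host, not on the TLS-layer name."""
    plan = fronting_plan(front_domain, hidden_domain)
    conn = http.client.HTTPSConnection(front_domain)  # TLS handshake + SNI to the front
    conn.request("GET", path, headers={"Host": plan["host_header"]})
    return conn.getresponse()
```

The censor’s dilemma follows directly: blocking the fronted site means blocking the front domain, and with it every other site the CDN serves from the same front-end.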

Even as the technique became widely adopted in 2014-2015, its demise was already predicted, with practitioners in the censorship circumvention community focused on how long it could be made to last until the next mechanism was found. This prediction rested on two points:

  1. The CDN companies would find themselves in a politically difficult position, since they are now supporting circumvention while also maintaining relationships with the censoring countries.
  2. The technique has security and cost implications that make it unattractive for both the CDNs and the practitioners.

We’ve seen both of these predictions mature.

Cloudflare explicitly doesn’t support this mechanism of circumvention, and coincidentally has major Chinese partnerships and has worked to deploy into China. Google has also limited the technique at times as they have struggled with abuse (although this is moot in China, since the Google cloud doesn’t operate there as a CDN).

In terms of cost, the most notable incident is the “Great Cannon”, which targeted not only GitHub, as widely reported, but also directed a significant amount of traffic at Amazon-hosted pages run by GreatFire, a dissident news organization, costing them significant amounts of money. GreatFire had been providing a free browser that operated by proxying all traffic through domain fronting. Due to a separate and less-reported Chinese “DDoS”, they ended up with a monthly bill of several tens of thousands of dollars and had to shut down the service.

The latest strike against domain fronting comes in posts by Cobalt Strike and FireEye showing that the technique is also gaining adoption for malware command-and-control (C&C). This abuse case will further discourage CDNs from allowing the practice to continue, since there will now be many legitimate Western voices actively calling on them to stop. Enterprises attempting to track threats on their networks, and CDN customers wanting not to be blamed for attacks, will both begin putting more pressure on the CDNs to remove the ability for different domains to be intermixed, and we should expect to see a continued drop in providers’ willingness to offer such a service.