For as long as you code
“The first laptop designed to be your first laptop” SHARP.
I think Facebook is a serious problem for our democracy (and possibly our species), but I also worry that many of the regulatory responses to the company just enshrine its dominance, by imposing conditions it can grudgingly comply with but which new competitors could not.
That’s why I think that we should regulate Facebook…but carefully. Get it right and we can nerf it down to the point where it no longer dominates our society. Get it wrong and we will crown it emperor everlasting.
But sometimes, changes to Facebook that go awry aren’t the result of some hard-to-foresee unintended consequence - sometimes, it’s just Facebook engaging in lethal fuckery and/or stupidity.
This is one of those times. Facebook has observed that in the final days before an election, unethical political operatives can push disinformation in the form of political ads that aren’t removed until after the election - after the damage has been done.
Their response is to ban political ads for the 7 days before the election, which sounds fine - until you realize that the only way local election officials can get last-minute information in front of voters is by buying ads.
That’s because FB has no other facility for allowing election officials to announce late-breaking info about polling places, mail-in votes, and other nonpartisan, factual information that helps people vote.
If you’re an election official - or even the Census - and you want to put something into the feeds of people in your area, you have to buy an ad - worse, FB calls that a “political ad.”
And of course, this election is full of late-breaking info: changes to in-person voting due to pandemics, changes to postal voting due to USPS sabotage, changes to polling rules due to dozens of voting rights lawsuits.
To that, add the climate emergency: the Connecticut primary was disrupted by Hurricane Isaias and officials had to spend thousands to tell voters about new rules in light of the crisis.
Like I said, some changes to FB rules could just make things worse - for example, proposals to abolish or weaken CDA230 will just create a world where Facebook’s terrible moderators make more stupid calls, removing legit material.
While making it legally and commercially impossible to operate a FB rival with better rules on harassment, hate speech, etc - AND snuffing out any hope of forcing FB to interoperate or federate with rivals.
(We can order FB to police its users’ actions, or we can order FB to allow third parties to connect to its service, but we can’t do both, because they can’t do both.)
But this isn’t one of those thorny problems. FB has two glaringly obvious ways to solve the problem of last-minute political ad disinfo:
- Allow election officials to put messages in voters’ feeds without buying ads; or
- Exempt election officials from the 7-day ban on political ads.
I know we should never attribute to malice that which can be attributed to incompetence, but honestly, this is such A-grade fuckery it’s hard not to go with “both.”
Still one of the illest visuals 🙌🏾
Super important for our sex worker friends out there!
While the US is conveniently turning its spying focus toward Huawei, this came out yesterday (11 February 2020). The account identifies the CIA officers who ran the program and the company executives entrusted to execute it. It traces the origin of the venture as well as the internal conflicts that nearly derailed it. It describes how the United States and its allies exploited other nations’ gullibility for years, taking their money and stealing their secrets.
Last October, two Amazon employees – Maren Costa (UX designer) and Jamie Kowalski (software engineer) – spoke on the record to the Washington Post about their employer’s complicity in the climate crisis, including the provision of cloud computing services to energy companies in search of new sources of fossil fuels.
Amazon threatened to fire them. Rather than shutting up, the two employees recruited fellow members of Amazon Employees for Climate Justice to publish 357 on-the-record, attributed condemnations of Amazon’s climate policies from current Amazon tech workers.
It’s the latest installment in the tech worker uprising, in which tech workers are realizing that the high demand for their skills and the massive talent shortage give them incredible leverage over their employers. Tech workers are a critical part of the fight for a better world, because they can both hold their employers to account and provide accurate assessments of the culture, choices and decisions that feed into our current tech landscape.
Facebook provides a suite of turnkey app-building tools for Android that are widely used among the most popular Google Play apps, with billions of combined installs; naturally, these tools create incredibly data-hungry defaults in the apps that incorporate them, so that even before you do anything with an app, it has already snaffled up a titanic amount of data, tied it into your Google Ad ID (which is recycled by Facebook to join up data from different sources) and sent it to Facebook.
Needless to say, the GDPR made these practices radioactively illegal, but despite two years’ warning that the GDPR was coming into effect last spring, Facebook dragged another six months out before updating its tools, and these updates still have not propagated to all the apps in Google Play.
The data harvested from phones – including, for example, which Bible verses you read using a King James Bible app, and which searches you made on Kayak – is added to your “shadow profile”, and no one (outside of Facebook) knows for sure how that’s used.
You can practice a little self-defense, but it’s cumbersome: root your phone and you can block all network traffic to *.facebook.com; you can also reset your Ad ID and disaggregate the data coming off your phone. I’ve had a poke around but can’t find a tool that resets the Ad ID every 10 seconds – please leave a comment if you know of one.
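The hosts-file blocking mentioned above can be sketched in a few lines. This is a minimal illustration only: it assumes a rooted device whose /etc/hosts you can overwrite, and the domain list here is illustrative, not exhaustive.

```python
# Illustrative (not exhaustive) list of Facebook-owned domains to null-route.
# On a rooted Android phone you'd append these lines to /etc/hosts so the
# apps' tracking calls resolve to nowhere.
FB_DOMAINS = [
    "facebook.com",
    "graph.facebook.com",
    "connect.facebook.net",
]

def hosts_block_entries(domains):
    """Return /etc/hosts lines that send each domain to 0.0.0.0."""
    return [f"0.0.0.0 {d}" for d in domains]

for line in hosts_block_entries(FB_DOMAINS):
    print(line)
```

Note this only blocks direct connections by hostname; apps that hard-code IP addresses or use their own DNS resolver would sail right past it.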
Frederike Kaltheuner and Christopher Weatherhead from Privacy International gave an outstanding talk on the subject at the Chaos Communications Congress in Leipzig last month; an accompanying paper gives more detail, including methods.
Kaltheuner and Weatherhead were able to gain insight into the apps’ behavior by rooting an Android phone and installing a man-in-the-middle proxy that used forged certificates to intercept and decrypt data on its way to Facebook. Ominously, none of the apps they tested used certificate pinning (let alone certificate transparency) to detect/prevent this kind of man-in-the-middle activity.
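Certificate pinning, which none of the tested apps used, boils down to shipping a known fingerprint with the app and refusing any certificate that doesn't match - which is exactly what defeats a forged-certificate proxy like the one the researchers used. A toy sketch of the idea (the "certificate" bytes here are fake placeholders, not real DER data):

```python
import base64
import hashlib

def cert_matches_pin(der_cert: bytes, pinned_b64: str) -> bool:
    """Compare the SHA-256 digest of the presented certificate against
    the base64-encoded fingerprint the app shipped with."""
    digest = hashlib.sha256(der_cert).digest()
    return base64.b64encode(digest).decode() == pinned_b64

# Stand-in bytes for the real server's DER-encoded certificate.
server_cert = b"-----fake der bytes for illustration-----"
pin = base64.b64encode(hashlib.sha256(server_cert).digest()).decode()

# The legitimate server's certificate matches the pin...
print(cert_matches_pin(server_cert, pin))        # True
# ...but a man-in-the-middle proxy's forged certificate does not.
print(cert_matches_pin(b"forged mitm cert", pin))  # False
```

In practice apps pin the public key rather than the whole certificate (so the cert can be rotated without an app update), but the rejection logic is the same.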
It’s not clear whether the same conduct is present in apps in Apple’s App Store; Apple uses unique Ad IDs that are similar to Google/Android’s and could be exploited in the same way. However, Apple’s DRM is designed to make this kind of research much harder. I hope the Privacy International researchers take a crack at it: perhaps they could use the simulated, cloud-based iOS devices used for developer testing.