The Chinese Supreme People’s Court has just released a report on a “mobile court” pilot program that has been running since March to manage procedures in civil legal disputes through the WeChat social media platform: litigants are prompted by an AI chatbot “judge” (with a judicial avatar) to state their cases, and the evidence is entered into the blockchain.
The cases are actually adjudicated by a human judge, with the automated system used to manage all the procedures to gather evidence and testimony prior to the judgment.
A separate “cybercourt” program has been running in Hangzhou since 2017 to settle online trade disputes, copyright claims, and product liability claims.
The Chinese Supreme Court touts the system as a means of streamlining justice by automating routine processes, which allows judges to focus on adjudicating cases, ensuring that litigants get their disputes resolved in timely fashion.
The AFP article on the measure is incredibly confusing, calling the chatbot an AI and blending descriptions of several programs; I had to read it three times before I realized that this was primarily a document collection assistant to make it easier for judges to review evidence, and not an “AI” that was rendering judgments. Also, it accepts at face value the seemingly entirely superfluous inclusion of blockchain technology, which appears to exist solely for purposes of buzzword-compliance.
[Omoyele Sowore is a Nigerian journalist and owner of the independent media outlet Sahara Reporters; shortly after the election of President Buhari, Sowore was arrested under the country’s anti-cyberstalking laws for “causing insult, enmity, hatred and ill-will on the person of the President of the Federal Republic of Nigeria.” He’s still in jail, where he has been tortured. His case has attracted condemnation from US senators and solidarity from PEN. My EFF colleague Cindy Cohn, who met Sowore through her work on the Bowoto case, prosecuting Chevron for a mass murder in service to oil exploration, wrote a post, crossposted below, about how overbroad, sloppy harassment and stalking bills can be weaponized. -Cory]
EFF has long been concerned that—unless carefully drafted and limited—cyberstalking laws can be misused to criminalize political speech. In fact, earlier this year we celebrated a federal court decision in Washington State in the United States that tossed out an overbroad cyberstalking law. In that case, the law had been used to silence a protester who used strong language and persistence in criticizing a public official. EFF filed an amicus brief in the case cautioning that such laws could be easily misused, and the court agreed with us.
Now the problem has occurred in a high-profile political case in Nigeria. Just this week the Nigerian government formally filed “cyberstalking” charges against Omoyele Sowore, a longtime political activist and publisher of the respected Sahara Reporters online news agency. Sowore had organized political protests in Nigeria under the hashtag #RevolutionNow and conducted media interviews in support of his protest. He was detained along with another organizer between early August and late September before being granted bail. He reports that he has been beaten and denied access to his family and, for a while, denied access to an attorney.
The charges make clear that this prosecution is a misuse of the overbroad cyberstalking statute, passed in 2015. They state that Sowore committed cyberstalking by: “knowingly sent messages by means of press interview granted on ‘arise Television’ network which you knew to be false for the purpose of causing insult, enmity, hatred and ill-will on the person of the President of the Federal Republic of Nigeria.”
That’s it. The prosecution claims that you can “cyberstalk” the President by going on TV and saying allegedly false things about him with a goal of causing “insult” or “ill-will.” This is obviously a misuse of the law and flatly inconsistent with freedom of expression under both Nigerian and international law. The President of Nigeria is a public figure and criticisms of his policies should be strongly protected. Instead, this prosecution appears to be a textbook case of a poorly drafted law being misused for political purposes.
Similar problems exist with the claim of “treason,” which is also based solely on Sowore’s protest activities and the use of the “#RevolutionNow” slogan. There appears to be a similar political agenda behind the final charges for “financial crimes,” based on Sowore allegedly moving funds between his organization’s own bank accounts.
Freedom of expression is a cherished, internationally recognized human right. Nigeria is party to the International Covenant on Civil and Political Rights, and additionally recognizes the right to free expression in its 1999 Constitution under section 39(1). Yet on its face, Nigeria’s constitution (section 45.1) also allows many exceptions to freedom of expression that can essentially eviscerate the right, unless carefully interpreted. It’s up to the courts and the prosecutors to protect freedom of expression and interpret any exceptions narrowly and carefully, and up to the legislature not to pass laws that can be so easily misused.
We hope that the judges and prosecutors of Nigeria recognize the problem in applying this cyberstalking law to prosecute a political activist. Nigeria has a long and proud tradition of peaceful but powerful political protest. Such protests are key to a functioning democracy. Protecting core and longstanding human rights such as freedom of expression, especially when that expression is aimed at convincing the public on a political matter, is the obligation of a modern government. If Nigeria is to uphold its international human rights obligations as well as its own traditions, these charges against Sowore and his co-defendant should be dropped immediately.
That’s Louis Rossmann, a repair technician and YouTuber, who went viral recently for railing against Apple. Apple purposely charges a lot for repairs and you either have to pay up or buy a new device. That’s because Apple withholds necessary tools and information from outside repair shops. And to think, we were just so close to change.
Reblog if you:
- Have an iPhone and are in need of repairs
- Have a friend with that problem
- Hate Apple and are more than happy to spite them in some way
No one will know which it is
This guy inspired me to repair my own macbook. First of all, you should know that I am not… like, I have to look up HOW to look up what my computer specifications are. Tech, that ware either soft or hard, is not a subject in which I experience comfort or competence.
But my puppy peed on my keyboard, and I asked the apple store, or the fucking mac cafe, or the godsdamn Computer House Chill Zone or whatever cute ass name they have for their bullshit store, and they said it would be TWELVE HUNDRED DOLLARS TO REPLACE MY KEYBOARD. I’m not even exaggerating.
So I asked the internet, well how hard IS it to repair? And I saw this guy’s video, and while I am no techie, I AM fueled by spite, so I was all “oh, they do that shit on purpose specifically so they can charge me $1200 bucks or make me buy a new computer hunh? FUCK THEM” and I bought all the tools I needed for about $25 and I bought all the parts I needed for about another $25 and I watched a few tutorial videos, and I replaced my own keyboard.
So, once you are doing the actual deed, it becomes pretty obvious that they are finding creative ways to make this much harder than it has to be on purpose. One thing that stood out to me is, instead of all the tiny screws being the same size, there are about two dozen very slightly different sizes. They could easily be all the same size, or like, two sizes at most, but no.
These mother fuckers will take a panel that screws into place and they’ll use a different size screw for each corner. They are so close that you almost cannot tell them apart visually, but they each will only screw into the matching corner. Like, it’s a pretty clear “fuck you” to anyone trying to do repairs.
anyway, this guy is also fueled by spite, and doing holy work, and I have mad respect
This is awesome. Man is doing good ass deeds 24/7 because he’s giving people control.
How dare you not leave a link to his channel, this guy is the savior of the modern world.
After Trump’s tax-cuts and forgiveness program, Apple repatriated $260 billion it had stashed in offshore tax havens (or, more truthfully, had funneled through offshore tax-havens to buy onshore financial products that were notionally held offshore); this made Apple the leading beneficiary of the Trump tax forgiveness program.
Apple used that money to continue its streak of record-setting stock buybacks, with which the company gooses its share price and allows investors to cash out, diverting money from worker compensation and R&D to financial engineering.
Apple’s stock-buybacks are so aggressive that they have lured in Berkshire Hathaway, famous for “patient investing” – Apple CEO Tim Cook initially touted this as vindication that the company still had the confidence of “value investors,” until Berkshire CEO Warren Buffett clarified that his stake in Apple was based on the expectation that the company would continue to use financial engineering to reward investors who brought nothing to the table except the ability to move share prices.
Cook has since suggested that the buybacks will create public value because of the capital gains that Apple investors will pay when they cash out – but of course, Trump’s tax cuts offer massively preferential tax rates for people who earn money through capital gains, shifting the US tax burden onto waged workers who earn their money by making things that other people use.
The Googler Uprising was a string of employee actions within Google over a series of issues related to ethics and business practices, starting with the company’s AI project for US military drones, then its secretive work on a censored/surveilling search tool for use in China; then the $80m payout to Android founder Andy Rubin after he was accused of multiple sexual assaults.
Tens of thousands of Google employees participated in the uprising, including 20,000 who walked off the job in February. The activist Google employees moved from victory to victory, including the ouster of a transphobic, racist, xenophobic ideologue who had been appointed to Google’s “AI Ethics” board.
Two key organizers, Meredith Whittaker and Claire Stapleton, publicly accused the company of targeting them for retaliation in April (to enormous internal uproar).
Now, Whittaker has resigned (on the thirteenth anniversary of her employment with Google), along with Celie O’Neil-Hart, who had been global head of trust and transparency marketing at YouTube Ads, and Google News Labs’ Erica Anderson.
In Whittaker’s farewell note to her colleagues, she calls on them to “unionize — in a way that works,” “protect conscientious objectors and whistleblowers,” “demand to know what you’re working on, and how it’s used” and “build solidarity with those beyond the company.” She says that Google’s entry into “new markets” like “healthcare, fossil fuels, city development and governance, transportation, and beyond…is gaining significant and largely unchecked power to impact our world (including in profoundly dangerous ways, such as accelerating the extraction of fossil fuels and the deployment of surveillance technology).”
Whittaker will devote her work to AI Now, the group she co-founded to build and promulgate critical, ethical frameworks for AI research. I wish her the best.
Whittaker is a friend and colleague of mine, and I volunteer on the advisory board for Simply Secure, a nonprofit she founded.
We are all going to die.
SHA-256 is an algorithm that takes a digital input of any length and returns a string of 256 bits (typically written as 64 hexadecimal digits). It’s a one-way algorithm, which means there’s no known way to practically recover the input from the output. As far as anyone knows, no two different inputs have ever produced the same output, which means the hash of an input is a reliable, unique digital fingerprint.
In this 6-minute video, Matthew Weathers explains why SHA-256 is “useful for digital signatures, cryptography, authentication, and is a central part of the Bitcoin protocol.”
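Those properties are easy to see for yourself with Python’s standard-library hashlib (the input strings here are just examples):

```python
import hashlib

# Any input, of any length, yields 256 bits (64 hex digits).
digest = hashlib.sha256(b"hello").hexdigest()
print(digest)       # 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
print(len(digest))  # 64

# Change a single character and the output is completely different -
# the "avalanche effect" that makes the hash a reliable fingerprint.
print(hashlib.sha256(b"hello!").hexdigest())
```

Note that you can verify the first digest against any other SHA-256 implementation: the same input always produces the same fingerprint.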
Never ever turn off your phone: rethinking security culture in the era of big data analysis.
Back in the ’80s, if you were a pissed off anarchist that wanted to burn down a building, you probably checked your home for listening devices and made a plan. If you were the same kind of pissed off anarchist in the late ’90s, you turned off your phone and encrypted your online traffic. In the 2020s we’re gonna have to change our strategies once again. Intelligence gathering has adapted and so we must adapt too.
To get a head start at this, let’s look at how big data analysis is being used. To do this, we’ll need to talk about 3 things: metadata, patterns and networks. Those sound boring and complicated but I’m not a techy and I won’t bore you with tech language, I’ll keep it as easy as I can.
Metadata: In the context of online activity, ‘content’ means ‘the message you send’ and ‘metadata’ means ‘everything other than the content’. So, for example, if you send your friend a text about lunch, the content might be “Let’s go out for lunch” and the metadata might be “Message sent at 01/04/2018 11.32 from phone 0478239055 to phone 079726823 using Signal”.
This information is registered by your phone even if the app encrypts your actual message. Your metadata is very badly protected by technology and very badly protected by the law. No matter which country you are in, most of your metadata is freely available to intelligence agencies regardless of whether you are a suspect in anything.
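To make the split concrete, here’s a toy sketch in Python (every name and value below is invented for illustration) of what travels alongside an end-to-end-encrypted message:

```python
# Toy illustration (invented values): the content is encrypted,
# but the metadata is visible to anyone watching the network.
message = {
    "content": "<encrypted blob>",  # "Let's go out for lunch" - unreadable
    "metadata": {
        "sent_at": "2018-04-01T11:32:00",
        "from_phone": "0478239055",
        "to_phone": "079726823",
        "app": "Signal",
    },
}

# An observer who never breaks the encryption still learns
# who talked to whom, when, and with which app.
print(message["metadata"]["from_phone"], "->", message["metadata"]["to_phone"])
```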
Patterns: Whether you realize it or not, your metadata has a pattern. If you have a daily job you might have a very consistent pattern, if you do not your pattern might be more flexible but you have a pattern. If someone wanted to know the rhythm of your day, they could very easily do so because your pattern is in the metadata.
For example: Maybe you use the wifi at your favourite bar on most Sunday nights until about midnight, you wake up around 10 AM and check your Signal, you use your public transport card to get to class every Monday afternoon and you spend on average 1 hour on Tumblr twice a day. All this is part of your pattern.
Networks: You have online networks. Your facebook friends, the people in your phone address book, the dropbox you share with coworkers, everyone who bought online tickets to the same punk show you attended, the people using the same wifi points as you. Take your networks, combine them with other people’s networks, and clusters reveal themselves. Your work community, your family, your activist scene, etc.
If you are in an anarchist community that will probably be abundantly clear from all your minor network connections like going to the same band and knowing the same people as other anarchists. Even if you never liked an anarchist facebook page or pressed ‘going’ on an anarchist facebook event, your network is hard to hide.
Now, let’s say you commit a crime,
the kind that would result in some serious research. Let’s say that on Sunday night 3 AM, you and your friends go out and burn down a nazi’s house. It’s obvious that anarchists did it but there are no other clues. You use traditional style security culture: you burn your notes, you are careful not to communicate about your plans near technology and you do not leave physical traces.
But because you committed the crime that night, your metadata will vary strongly from your usual rhythm: you stay at your usual bar until 2 AM to wait for your friends, you do not wake up at 10 AM, so you do not check your Signal or Tumblr until 1 PM. You do not go to class. Your metadata pattern is very different from your usual pattern. The metadata patterns of your friends are different too. If one of you is clumsy, they might generate a super suspicious metadata signal like a phone being switched off at 2.30 AM and activated at 4 AM. You wouldn’t be the first.
If I wanted to solve this crime using data analysis, what I would do is:
- let a piece of software run a pattern analysis of the local anarchist scene to generate the 300 people most connected to the anarchist scene.
- let a second piece of software analyse the metadata patterns of those 300 people over the last months and identify the biggest metadata variations around that Sunday night, as well as any outright suspicious metadata activity.
- eliminate pattern variations with an obvious cause or an obvious alibi (people who are on vacation, people who are in the hospital, people who lost their job, etc).
- do in-depth research into the ones that remain.
Which is how, out of a massive amount of people that I couldn’t possibly all listen to at the same time, I could quickly identify a few to monitor closely. This is how I could find and catch you.
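As a rough sketch of that filtering step (all names, numbers, and the two-standard-deviation threshold here are invented for illustration), flagging whoever deviates most from their own habits takes only a few lines of Python:

```python
from statistics import mean, stdev

# Invented data: for each person, the hour of their first phone activity
# on the past several Mondays, plus the Monday after the fire.
baseline = {
    "suspect_a": [10, 10, 9, 10, 11],  # habitually online around 10 AM
    "suspect_b": [8, 9, 8, 8, 9],
}
observed = {"suspect_a": 13, "suspect_b": 8}  # first activity that Monday

def anomaly_score(history, value):
    """How many standard deviations the observed value sits from habit."""
    sd = stdev(history) or 1.0  # guard against a perfectly rigid routine
    return abs(value - mean(history)) / sd

# Flag anyone more than 2 standard deviations off their own pattern.
flagged = [who for who, hour in observed.items()
           if anomaly_score(baseline[who], hour) > 2.0]
print(flagged)  # ['suspect_a']
```

The point of the sketch is that no message content is needed anywhere: the routine itself is the evidence.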
So, now what?
If traditional security culture doesn’t protect us as well as it used to, how do we adapt? Well, I don’t have all the answers but for a start, I’d say: know your network + know your pattern.
In the case of the crime above: leave the bar at midnight, return home and put your phone on your bedside table. Check the apps you check before going to bed and set your alarm to 10AM. Return to the bar without your phone. Commit the crime. Wake up at 10AM and check your Signal. Drag yourself to class or ask a comrade to make the trip with your travel card and do not use technology in your home while the comrade is taking your travel card to class. Stick to your pattern. Never ever turn off your phone.
You might also be able to manipulate your network but that seems much harder to do. Not having a smartphone and dropping out of all social activity online is a big commitment. Knowing your data pattern and making sure your data pattern doesn’t look out of the ordinary? Much less commitment.
Some of the old rules will still apply: don’t talk about a crime around devices with microphones, don’t brag after a successful action, etc. Other rules, like ‘turn off your phone when planning an illegal act’, need to change because their metadata looks too out of the ordinary. No one switches off their phone anymore. We look suspicious as fuck when we do.
This is just one idea on how we could update our security culture. There are probably other people with other, better ideas about updating our security culture. If we start the conversation, we may get somewhere.
Finally: we need to keep adapting.
As technology changes, more information is becoming available, including data we have very little control over. Smart TVs and ads in public spaces that listen to every word we say, and the tone of our voice when we say it, are examples. Data analysis projects are currently using license plate reading software on security footage to map the travel patterns of cars. A lot suggests they may soon be ready to do the same with face recognition, at which point the presence of our face in public space becomes part of our metadata. More information means more accurate data analysis. Our metadata may soon be too vast and too complex to completely map and mirror. Which means we will need to adapt our countermeasures if we want to hide something.
How do we keep it all under the radar? I don’t know. But let’s try to figure this shit out. These are some first thoughts about what security culture should look like in the age of modern big data analysis and I’d be very happy for any insights from comrades that have some thoughts on this.
Also: feel free to distribute and rework these words without credit.
You can work silence into patterns though. Like, say for example you go on your computer to do stuff for hours- or read stuff, or do homework. Start turning off your phone randomly. It will just look like you’re developing habits. Or heck, make your phone die in a place without a wall plug.
Also periodically go through permissions, delete apps that use voice permissions that shouldn’t, download apps to force permissions.
To be honest, I think if you turned off your phone regularly you would stick out because the only people who do that are activists. Your own pattern would be consistent but it’d be so different from the general population that it would raise eyebrows. So it is both more effective and less work to simply leave your phone on but make sure it is not with you when you plan and do actions.
The FBI has been known to use “blackout” periods as signs of when to try and associate criminal activity (for an actual reference, I remember they used it in the Israel Keyes murders). It’s way easier to rely on your pattern and replicate it than to try and create a blackout habit with no pattern, not to mention you’re really just opening up the potential for them to associate you with other crimes related to your other blackout patterns if you end up going that route.
cyberpunk resistance organization /
but this is FUCKING IMPORTANT
A thing to note though here is that while altering your patterns could raise suspicions, talking about potential crimes around a listening device ACTIVELY gives the pigs CONCRETE evidence. If you do organizing, they know you do organizing. Just make a habit of having phones off/away for even the most innocuous shit. When/if they eventually start being able to get convictions on suspicion alone it’ll be time to just fully reassess how security culture works, but for now it’s more important to not have concrete evidence of what you’re actually saying at (whatever) than maintaining consistent data patterns.
That said, if you are going to turn your phone off to talk abt stuff TAKE THE BATTERY OUT, as microphones can be accessed remotely even when the phone is “off.” If you can’t take out the battery, take the dog for a walk and leave your phone at home.
Letting your phone battery just… drain when you’re out and about every now and then is probably a good precaution moving forward but honestly AVOIDING GIVING EVIDENCE is way more vital than whether you raise suspicion.