kedreeva:

amy-reblogs:

thistherapylife:

sandalwoodandsunlight:

FCC HOTLINE: 1-888-225-5322

CALL

Hey American friends, so much Internet stuff happens in America that if this passes, it will affect everyone. PLEASE DO SOMETHING.

Okay, so like, I called the hotline, and it’s a bunch of menu options that are not exactly intuitive, and I couldn’t seem to get in touch with an actual operator BUT. Good news. I did eventually go far enough down the rabbit hole to be directed to their consumer complaint website.

Here you can click on “Tell your story” and voice your concerns to the FCC. From the site:

When you have issues concerning a provider or policy, let us know about it. By submitting your story you are NOT filing a consumer complaint. Your story won’t be forwarded to your provider and you will not hear back from your provider or the FCC. We will share your story internally and use it to inform policy making and potential enforcement activities.

By filing a consumer complaint and telling your story, you contribute to federal enforcement and consumer protection efforts on a national scale and help us identify trends and track the issues that matter most.

So, this seems like a likely alternative to calling (and was the recommendation when I DID call), which is cool because I know how much anxiety calling places gives people. Go forth and type away, friends! Help protect net neutrality!

micdotcom:

Russian hackers reportedly tried to get into Emmanuel Macron’s campaign in French election

  • Emmanuel Macron — the pro-European Union centrist facing off against far-right nationalist Marine Le Pen in the French presidential election — was the target of Russian hackers, Time reported Tuesday.
  • According to Japanese antivirus firm Trend Micro, hackers linked to Russia created fake websites in an attempt to steal passwords and online credentials from Macron staffers.
  • Mounir Mahjoubi, digital chief for the Macron campaign, confirmed there were attempts to hack the campaign but said they were unsuccessful.
  • “It’s serious, but nothing was compromised,” Mahjoubi told the Associated Press on Monday night.
  • Though it didn’t point its finger at any governments, Trend Micro said it was “very, very likely” a group called Pawn Storm — which U.S. intelligence considers a Russian spying organization — is behind the attempted hacks. Read more (4/25/17)

Princeton researchers discover why AI become racist and sexist

guerrillamamamedicine:

Ever since Microsoft’s chatbot Tay started spouting racist commentary after 24 hours of interacting with humans on Twitter, it has been obvious that our AI creations can fall prey to human prejudice. Now a group of researchers has figured out one reason why that happens. Their findings shed light on more than our future robot overlords, however. They’ve also worked out an algorithm that can actually predict human prejudices based on an intensive analysis of how people use English online.

The implicit bias test

Many AIs are trained to understand human language by learning from a massive corpus known as the Common Crawl. The Common Crawl is the result of a large-scale crawl of the Internet in 2014 that contains 840 billion tokens, or words. Princeton Center for Information Technology Policy researcher Aylin Caliskan and her colleagues wondered whether that corpus—created by millions of people typing away online—might contain biases that could be discovered by an algorithm. To figure it out, they turned to an unusual source: the Implicit Association Test (IAT), which is used to measure often unconscious social attitudes.

People taking the IAT are asked to put words into two categories. The longer it takes for the person to place a word in a category, the less they associate the word with the category. (If you’d like to take an IAT, there are several online at Harvard University.) IAT is used to measure bias by asking people to associate random words with categories like gender, race, disability, age, and more. Outcomes are often unsurprising: for example, most people associate women with family, and men with work. But that obviousness is actually evidence for the IAT’s usefulness in discovering people’s latent stereotypes about each other. (It’s worth noting that there is some debate among social scientists about the IAT’s accuracy.)

Using the IAT as a model, Caliskan and her colleagues created the Word-Embedding Association Test (WEAT), which analyzes chunks of text to see which concepts are more closely associated than others. The “word-embedding” part of the test comes from a project at Stanford called GloVe, which packages words together into “vector representations,” basically lists of associated terms. So the word “dog,” if represented as a word-embedded vector, would be composed of words like puppy, doggie, hound, canine, and all the various dog breeds. The idea is to get at the concept of dog, not the specific word. This is especially important if you are working with social stereotypes, where somebody might be expressing ideas about women by using words like “girl” or “mother.” To keep things simple, the researchers limited each concept to a 300-dimensional vector.
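To make the “closeness in vector space” idea concrete, here is a minimal sketch using cosine similarity. The four-dimensional vectors below are entirely made up for illustration (real GloVe vectors have hundreds of dimensions and are learned from the corpus, not hand-written), but the principle is the same: words expressing the same concept point in similar directions.

```python
import numpy as np

# Toy word vectors. These values are invented purely for illustration;
# real embeddings such as GloVe are learned from billions of tokens.
vectors = {
    "dog":    np.array([0.90, 0.10, 0.00, 0.30]),
    "puppy":  np.array([0.80, 0.20, 0.10, 0.40]),
    "banana": np.array([0.10, 0.90, 0.70, 0.00]),
}

def cosine(u, v):
    """Cosine similarity: near 1.0 for vectors pointing the same way,
    near 0.0 for unrelated directions."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Words for the same concept sit close together in vector space.
print(cosine(vectors["dog"], vectors["puppy"]))   # high (same concept)
print(cosine(vectors["dog"], vectors["banana"]))  # low (unrelated)
```

Running this shows “dog” and “puppy” scoring far higher than “dog” and “banana” — the geometric fact the WEAT exploits.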

To see how concepts get associated with each other online, the WEAT looks at a variety of factors to measure their “closeness” in text. At a basic level, Caliskan told Ars, this means how many words apart the two concepts are, but it also accounts for other factors like word frequency. After going through an algorithmic transform, closeness in the WEAT is equivalent to the time it takes for a person to categorize a concept in the IAT. The further apart the two concepts, the more distantly they are associated in people’s minds.
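The published WEAT statistic can be sketched in a few lines: for each word, measure how much more strongly it associates with one attribute set than another (by mean cosine similarity), then compare two target sets. The tiny vectors and word sets below are invented stand-ins, not the paper’s actual data, but the arithmetic follows the test’s definition.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """s(w, A, B): how much more strongly word vector w associates
    with attribute set A than with attribute set B."""
    return (np.mean([cosine(w, a) for a in A])
            - np.mean([cosine(w, b) for b in B]))

def weat_score(X, Y, A, B):
    """WEAT test statistic: positive when target set X leans toward
    attribute set A while target set Y leans toward attribute set B."""
    return (sum(association(x, A, B) for x in X)
            - sum(association(y, A, B) for y in Y))

# Made-up 3-d vectors standing in for real embeddings of word sets
# like flowers/insects (targets) and pleasant/unpleasant (attributes).
flowers    = [np.array([0.90, 0.10, 0.10]), np.array([0.80, 0.20, 0.10])]
insects    = [np.array([0.10, 0.90, 0.20]), np.array([0.20, 0.80, 0.10])]
pleasant   = [np.array([0.85, 0.15, 0.10])]
unpleasant = [np.array([0.15, 0.85, 0.20])]

print(weat_score(flowers, insects, pleasant, unpleasant))  # positive here
```

With these toy inputs the score comes out positive, mirroring the classic IAT finding that flowers associate with pleasantness and insects with unpleasantness; a bias baked into a real corpus shows up the same way.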

The WEAT worked beautifully to discover biases that the IAT had found before. “We adapted the IAT to machines,” Caliskan said. And what that tool revealed was that “if you feed AI with human data, that’s what it will learn. [The data] contains biased information from language.” That bias will affect how the AI behaves in the future, too. As an example, Caliskan made a video showing how the Google Translate AI actually mistranslates words into the English language based on stereotypes it has learned about gender.