Silicon Valley increasingly using AI and algorithms to screen "offensive" content
12-01-2016, 06:39 PM
Facebook developing artificial intelligence to flag offensive live videos
Quote:
Facebook Inc is working on automatically flagging offensive material in live video streams, building on a growing effort to use artificial intelligence to monitor content, said Joaquin Candela, the company’s director of applied machine learning.
The social media company has been embroiled in a number of content moderation controversies this year, from facing international outcry after removing an iconic Vietnam War photo due to nudity, to allowing the spread of fake news on its site.
Facebook has historically relied mostly on users to report offensive posts, which are then checked by Facebook employees against company "community standards." Decisions on especially thorny content issues that might require policy changes are made by top executives at the company.
Candela told reporters that Facebook increasingly was using artificial intelligence to find offensive material. It is “an algorithm that detects nudity, violence, or any of the things that are not according to our policies,” he said. The company already had been working on using automation to flag extremist video content, as Reuters reported in June.
https://www.yahoo.com/news/facebook-deve...nance.html
Google Tweaking Top Stories Algorithm To Filter Out Fake News?
Quote:
It looks like Google is or has worked on tweaking the top stories algorithm to filter out fake news sites, or rather, promote higher quality news sites from showing in that box. This is around Google's efforts to not show fake news including banning some AdSense publishers.
The BBC reported earlier this month, an interview with Google's CEO Sundar Pichai who said "there should just be no situation in which fake news gets distributed." In fact, he added that Google would "make our algorithms better," to make sure to "drive news towards more trusted sources."
https://www.seroundtable.com/google-top-...23053.html
And then once their computers identify "offensive" or "fake" content (e.g. ROK, Breitbart, Alex Jones, etc.), they try to starve it of advertising money...
Facebook and Google promise to cut off fake news websites from advertising
Quote:
Facebook and Google have pledged to ban websites that peddle fake news from their advertising services after the world’s two most popular websites were accused of spreading false and incendiary articles about the US presidential election.
Facebook, which has faced a storm of criticism in the last week over its role in Donald Trump’s victory, added fake news websites to its list of banned adverts, which also prohibits ads for guns, gambling and spy cameras, on Monday night.
https://www.theguardian.com/technology/2...-algorithm
Even worse, the way they "teach" the AI what bad content looks like is by feeding it "safe" sites like the NY Times...
Quote:
Jigsaw, a subsidiary of parent company Alphabet is certainly trying, building open-source AI tools designed to filter out abusive language. A new feature from Wired describes how the software has been trained on some 17 million comments left underneath New York Times stories, along with 13,000 discussions on Wikipedia pages. This data is labeled and then fed into the software — called Conversation AI — which begins to learn what bad comments look like.
According to the report, Google says Conversation AI can identify abuse with "more than 92 percent certainty and a 10 percent false-positive rate" when compared to the judgements of a human panel.
http://www.theverge.com/2016/9/21/129987...ls-to-spot
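For anyone wondering what "labeled data is fed into the software, which begins to learn what bad comments look like" actually means in practice: it's ordinary supervised text classification. Here's a minimal sketch using a toy naive Bayes classifier. The comments and labels are made-up examples, not the NYT/Wikipedia corpus, and Conversation AI itself is far more sophisticated; this just shows the basic idea of learning "bad" from whatever examples the trainers choose to label.

```python
# Minimal sketch of the supervised pipeline described above: labeled
# comments go in, a model learns which words signal a "bad" comment.
# All data here is a toy stand-in for the real training corpus.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label). Returns per-label word counts and example totals."""
    counts = {}         # label -> Counter of word occurrences
    totals = Counter()  # label -> number of training examples
    for text, label in examples:
        counts.setdefault(label, Counter()).update(tokenize(text))
        totals[label] += 1
    return counts, totals

def classify(counts, totals, text):
    """Naive Bayes with add-one smoothing; returns the more likely label."""
    vocab = set(w for c in counts.values() for w in c)
    n = sum(totals.values())
    best_label, best_score = None, float("-inf")
    for label, wc in counts.items():
        score = math.log(totals[label] / n)          # class prior
        denom = sum(wc.values()) + len(vocab)        # smoothed denominator
        for w in tokenize(text):
            score += math.log((wc[w] + 1) / denom)   # per-word likelihood
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy training set: "clean" stands in for comments moderators approved,
# "abusive" for comments they removed.
data = [
    ("thanks for the thoughtful article", "clean"),
    ("interesting point well argued", "clean"),
    ("you are an idiot shut up", "abusive"),
    ("idiot go away nobody cares", "abusive"),
]
counts, totals = train(data)
print(classify(counts, totals, "shut up idiot"))       # abusive
print(classify(counts, totals, "thoughtful article"))  # clean
```

The point of the sketch: the model has no concept of "offensive" beyond whatever its labeled examples encode. Swap in a different set of approved sources and you get a different definition of "bad."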
The future will be AI systems using corporate media as the gold standard; anything that doesn't adhere to that narrative will be removed from Google/Facebook and starved of cash. This is a clear escalation of their effort to maintain the status quo.
Roosh
http://www.rooshv.com