

AI could show when the next genocide is about to happen by tracking hate speech
#1


http://motherboard.vice.com/read/this-al...ource=mbfb

Take note, Nostradamus: There could be another proselytizer of potential violence on the market, except this one has the potential to be more accurate. And it speaks in 1s and 0s.

Researchers at the iHub Data Lab in Kenya are building an algorithm that has the potential to show early warning signs of violence across the world. Called Umati, or “crowd” in Swahili, the program monitors dangerous speech on Twitter and Facebook. Experts say that inflammatory speech can foreshadow ethnic violence, and even genocide. The algorithm is expected to be released for public use at the beginning of 2016, and the team will first use the algorithm in South Sudan, home to a brutal civil war that has displaced more than 2.2 million people.

“I am a believer that AI will someday be better able to track hate speech than humans, although I don't know how long it will take,” Sidney Ochieng, a project coordinator of Umati, told me. “Much of speech relies around context, which is hard to code, but humans also have biases when we monitor hate speech.”

The Umati algorithm scavenges the internet in search of a “bag of words”, or key-phrases which a human inputs. The list is like a rap-sheet of hate-speech—words regarding tribe, nationality, gender, and sexual orientation. With its collection of bottom-feeder language on the web, the algorithm takes into account how influential the speaker is and how hateful the speech is and gives it a ranking.

Basically, the user inputs a bunch of phrases, the algorithm searches for them, and then ranks the results by how potentially dangerous the speech is. Think of it as pressing "Ctrl+F" on Twitter and Facebook, then ranking each post by the number of hits it gets. Then a human goes through and double-checks.
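That "Ctrl+F plus ranking" description can be sketched in a few lines. This is a hypothetical illustration, not Umati's actual code; the field names (`text`, `author_followers`) and the influence weighting (log of follower count) are assumptions made here for the example:

```python
import math

def rank_posts(posts, bag_of_words):
    """Hypothetical sketch of the ranking described above -- NOT Umati's code.
    posts: dicts with 'text' and 'author_followers' (assumed field names).
    bag_of_words: phrase -> hatefulness weight, supplied by a human."""
    ranked = []
    for post in posts:
        text = post["text"].lower()
        # The "Ctrl+F" step: count weighted hits for each watched phrase.
        score = sum(weight * text.count(phrase)
                    for phrase, weight in bag_of_words.items())
        if score > 0:
            # Scale by speaker influence (here: log of follower count).
            score *= 1 + math.log10(1 + post["author_followers"])
            ranked.append((score, post["text"]))
    # Highest-scoring posts go to a human reviewer for the double-check step.
    return sorted(ranked, reverse=True)

posts = [
    {"text": "lovely weather today", "author_followers": 50},
    {"text": "they are cockroaches", "author_followers": 100000},
]
flagged = rank_posts(posts, {"cockroach": 3.0})
```

The influence multiplier is the only part that goes beyond plain keyword counting: the same phrase from a widely followed account outranks the same phrase from an obscure one.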

Because Umati tracks online sentiment and dangerous speech, the accuracy of the algorithm’s ability to detect ethnic violence depends on the connection between online speech and offline action in a society.

“Dangerous speech has preceded episodes of terrible intergroup violence, over and over in many different contexts,” said Susan Benesch, a faculty associate of the Berkman Center for Internet and Society at Harvard University.

In Rwanda the traditional use of the word “Inyenzi”, or cockroach, was a description of Tutsi militia members who carried out cross-border raids on the Rwandan army. Like the bug, the Tutsi militia were hard to eliminate. But after its repeated use on radio stations in the run-up to the 1994 genocide, the meaning of “Inyenzi” changed to encompass all Tutsi, militia or not. A human would have easily picked up that “Inyenzi” became a keyword for dangerous speech, but an algorithm might have had more trouble.

Much of speech depends on the context of words, and programming an algorithm to decipher new niche phrases like “Inyenzi” isn’t easy. The hope is that the Umati algorithm will be able to self-learn and analyze large networks to identify dangerous speech before humans can in the future. But for now, Umati will rely on humans to select the “bag of words” when they unveil the algorithm in South Sudan.

Social media in South Sudan has exacerbated divides between ethnic groups, according to the United States Institute of Peace, and the radio stations in the country have reportedly urged men to rape women based on perceived ethnic and political loyalties.

There is a potential that the algorithm could be used by some, specifically authoritarian governments, to vilify members of the political opposition. Because hate speech laws in some countries are arbitrary, they can be used as a pretext for arresting members of the opposition. Yet there is no proof yet that dangerous speech directly causes violence.

“We have to be careful not to claim a direct causal connection between speech online and violence,” Benesch told me. “Someone might be more likely to commit violence because of something he heard, but we can't say that he did the violence only because of what he heard.”

In other words, if authoritarian governments—or anyone—used the algorithm as a surveillance tool, its utility would be limited. In 2013, a previous version of Umati actually found that there was little connection between online threats and real world action in the run up to the presidential election. In an effort to avoid ethnic violence that broke out after elections in 2007, users on Twitter and Facebook regularly confronted and condemned dangerous speech, one of the best ways to counter its effects.

Yet that phenomenon might just be unique to Kenya at that time. Looking ahead to the algorithm's release, Ochieng, the project coordinator, mused about the uses journalists or political campaigns might find for Umati: “It would be interesting to track online sentiment and measure the effect of Donald Trump.”
#2

At least they are tracking hate speech in actual violent places like Sudan, not "hate speech" that exists only in the minds of spoiled Canadian feminists.

"Imagine" by HCE | Hitler reacts to Battle of Montreal | An alternative use for squid that has never crossed your mind before
#3

Minority Report in real life. But without Tom Cruise or Meagan Good.

Why is Trump in this article at all? Kicking out illegal immigrants is now called genocide?
#4

Quote: (09-26-2015 01:53 PM)Virtus Wrote:  

....
Social media in South Sudan has exacerbated divides between ethnic groups, according to the United States Institute of Peace, and the radio stations in the country have reportedly urged men to rape women based on perceived ethnic and political loyalties.
....

Social media.

It will bring us closer together, they say.

It will make us well connected to each other, they say.

Human beings have a tendency to recede back to our lowest common denominator. Instead of connecting, we use social media to further divide, kill, rape, and murder each other. No surprise here, I guess.


Quote: (09-26-2015 01:53 PM)Virtus Wrote:  

In other words, if authoritarian governments—or anyone—used the algorithm as a surveillance tool, its utility would be limited. In 2013, a previous version of Umati actually found that there was little connection between online threats and real world action in the run up to the presidential election.


SJWs/feminists will use this, and other versions of it, to track, silence, and cause job loss for anybody who goes against their "liberal agenda". Mark my words. They will use it to track keywords of masculinity, pickup artists, anti-gay and anti-pluralgender comments, etc., mapping it out and connecting it all over the place. Especially in classrooms; universities will have it to track "potential rapists", potential "racists", etc.
#5

This could be really bad. People with access can use it against any idea, and then use it to target the people who hold that idea. Goodbye, free speech.

"In the name of stopping a potential 'genocide' trolling teenagers will be imprisoned and sent to re-education camp and diversity/empathy training."

Chicago Tribe.

My podcast with H3ltrsk3ltr and Cobra.

Snowplow is uber deep cover as an alpha dark triad player red pill awoken gorilla minded narc cop. -Kaotic
#6

There are already programs like this in the works; governments will pay top dollar for algorithms that can de-anonymize supposedly anonymous posters based on writing and spelling patterns.

Soon it won't be safe to have dissenting opinions even on the internet.
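The de-anonymization idea mentioned here, matching writers by spelling and phrasing habits, is known as stylometry. A toy sketch of one common technique, character n-gram profiles compared by cosine similarity, is below; this is illustrative only, with made-up sample sentences, and real systems use far richer features:

```python
from collections import Counter
import math

def ngram_profile(text, n=3):
    """Character n-gram counts: habitual misspellings show up here."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two Counter profiles."""
    dot = sum(a[g] * b[g] for g in a if g in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

known = ngram_profile("I definately think its alot worse then before")
anon1 = ngram_profile("Its definately alot better then I thought")    # same quirks
anon2 = ngram_profile("It is definitely a lot better than I thought")  # corrected

# The sample sharing the writer's habitual misspellings scores as more similar.
print(cosine(known, anon1) > cosine(known, anon2))  # True
```

The point of using character n-grams rather than whole words is exactly what makes this unnerving: consistent misspellings like "definately" or "alot" become part of the fingerprint instead of noise.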
#7

When I was in Los Angeles once for work, I was at this bank of grills at a luxury apartment complex. There was this really big fat guy, who seemed to be a ragingly insecure Littledark type, rattling on about his son and his work.

I don't remember what exactly made him rich, but he went on to mention how he bought this software company that tracks people on social media for certain key phrases. He landed a government contract with the NSA and apparently was making bank. I don't know why, but for some reason he had the most punchable face.

He then went on to talk about his 8-year-old son going from agent to agent, bragging to this other guy he was with.

To this day, I remember his face. I never said anything to them, but just listened. If there is ever a revolution, I hope he's on the losing side of it.
#8

This technology could only possibly work backwards, not forwards. Hindsight is 20/20: looking back upon a genocide we can see all the factors that led up to it, and compare them to factors present in other genocides, but this is more art than science. Each one is going to be unique; each is going to have its own custom set of phrases and unique triggers (just off the top of my head: the collapse of the coffee market due to a drought could easily be the proximate trigger for a genocide, the spark that ignites the other tensions).

At best this technology will predict upcoming incidents in ongoing genocidal violence; for instance, it could be used to monitor terrorist networks and flag potential 'storms' that will be occurring.

But as for predicting black swans? Not a chance in hell.

For instance, one of the phrases I'm hearing in a lot of places lately (and have even started using myself) is "...better start warming up the ovens...". It's generally made in response to extreme degeneracy, the sort of stuff that puts Sodom and Gomorrah to shame: child-sized, rubber, FtM transsexual pseudo-penises (seriously), or sexual deviancy being written about in mainstream publications.

This phrase is very significant - but not in the way your typical Leftoid would assume. I strongly suspect this phrase is predicting a wave of partially-genocidal violence with whites as the victims. Rather than a threat, it's uttered by the sensible people who see what's going on and know that things are getting hot; that it's time for like-minded people to start preparing for winter, and banding together into teams. It's not the precursor for a genocide against the immigrants, but a response to their upcoming genocide against the natives.
#9

Quote: (09-27-2015 09:14 AM)Aurini Wrote:  

This technology could only possibly work backwards, not forwards. Hindsight is 20/20; looking back upon a genocide we can see all the factors that led up to it, and compare them to factors present in other genocides, but this is more art than science.

....

But as for predicting black swans? Not a chance in hell.

Stronger African states are still grappling with asymmetrical warfare of the Islamic type (suicide bombings, primarily). However, they have mastered the art of suppressing the ethno-destructive tendencies that dogged the continent in the 1980s and 1990s.

I wouldn't underestimate the power of such technology. The article doesn't explain how it works in detail. It's more of a data-mining tool used to extrapolate from a massive pool of online data (not just Twitter, but everything from e-mails to Facebook to SMS) to predict hotspots for potential outbursts of ethnic violence. Then there are government agents in the 'right place' at the 'right time' to make sure these outbursts don't actually turn into orgies of violence.

It's as simple as having plain-clothes police officers around an area to stop an anti-[insert hated ethnic group] riot turning into a full-on pogrom.
#10

The concept of anti-white, anti-Christian genocide is being implemented through seemingly benign replacement birthrates. Sharia zones are set up; Christians are threatened and driven out; LGBTQs are terrified; and men with up to four wives are moved in and supported on the various dole programs of liberal Western welfare systems. Abandoned business locations are taken over, and only Sharia-compliant businesses are patronized. Diversity policies in hiring and education are not only laughed at, they are forbidden. Once-prosperous places like Beirut, Lebanon, once the Paris of the Middle East, were converted in as little as three decades from peaceful, diverse, multi-ethnic societies into radical Islamist ones, outbirthed by Palestinian refugees and now run by Hamas and Hezbollah, Iran's much more subtle (no public beheadings) but equally ruthless version of the Saudi Sunni-supported ISIS/ISIL.

What happened in Beirut is now occurring in Londonistan, Swedistan, Norwegistan, Paristan, Berlinistan and greater Eurostania.

The liberal, Marxist, feminized, self-loathing European Union will within 50 years be subsumed into a Greater Shariastan of the European continent, with small pockets of remaining ethnic white Orthodox Christian Europeans in Eastern Europe, and perhaps Switzerland and Italy, the latter of which will go full mafia-underground with Vatican blessings to fight Sharia fire with ancient Roman rules. The only possible hope for Europa is that Christian soldiers unite under the flags of their respective Churches, band together to oust the Marxist-feminist, global-bankster-controlled governments, and revert to the determination of King Ferdinand and Queen Isabella to drive the Islamist invaders and other non-Christians from their lands.

I suspect this AI engine was NOT programmed for sustained, systematic birthrate-genocide detection; history, however, has been.
#11

^^ That post was completely unrelated to anything on this thread.
#12

This is great for preventing conflicts in third world countries.

Years ago, I would think of this as an unequivocal success.

Now I'm just worried about how the NSA and EU are going to utilize it to silence dissent.
#13

Quote:

The Umati algorithm scavenges the internet in search of a “bag of words”, or key-phrases which a human inputs.

This is a technique commonly taught in the first week of a beginner's course on data science. This tutorial shows how to characterize movie reviews with "positive" and "negative" words:
https://www.kaggle.com/c/word2vec-nlp-tu...g-of-words

EDIT: What I am saying is that this is about as close to AI as the average guy is to five hundred rep points on RVF.
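For reference, the technique that tutorial teaches, reduced to a toy, stdlib-only sketch (not the tutorial's actual code, which uses scikit-learn): turn each document into word counts, then score it against hand-picked word lists, the same kind of human-supplied lexicon a system like Umati starts from.

```python
from collections import Counter

def bag_of_words(text, vocabulary):
    """The core trick: a document becomes a vector of word counts."""
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocabulary]

# Hand-picked sentiment lexicons (illustrative, chosen for this example).
POSITIVE = {"great", "wonderful", "excellent"}
NEGATIVE = {"awful", "terrible", "boring"}

def classify(review):
    """Label a review by counting lexicon hits -- no learning involved."""
    words = review.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score >= 0 else "negative"

print(bag_of_words("great great film", ["great", "terrible"]))  # [2, 0]
print(classify("a great and excellent film"))    # positive
print(classify("terrible pacing, boring plot"))  # negative
```

Which is the point: there is no intelligence here, artificial or otherwise, just counting words someone already decided were bad.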

If you're going to try, go all the way. There is no other feeling like that. You will be alone with the gods, and the nights will flame with fire. You will ride life straight to perfect laughter. It's the only good fight there is.

Disable "Click here to Continue"

My Testosterone Adventure: Part I | Part II | Part III | Part IV | Part V

Quote:
if it happened to you it’s your fault, I got no sympathy and I don’t believe your version of events.
#14

Well, damn, this project is doomed to fail because ghetto thugs can't even spell English words properly and constantly invent new lingo, so the algorithm is just never gonna catch the real perpetrators of violent crime.

#15

The funny thing is feminists are positively brimming with hate speech.

It's gonna be a lolfest when this tracker points straight at modern feminists.
#16

It would be funny if they use it in Western societies...and it concludes that white men are the most likely to be genocided because of SJWs.

They'd be puzzled as to how this "glitch" could have occurred.