

AI Could be the End of Humankind: Stephen Hawking
#1

AI Could be the End of Humankind: Stephen Hawking

With artificial intelligence continuously surpassing itself, there is massive fear of it growing exponentially to the point of being even better than the human mind. It's something that has been on my mind ever since I saw an article about Elon Musk expressing the same fear.

Link: http://www.bbc.com/news/technology-30290540

Quote:

Stephen Hawking: "Humans, who are limited by slow biological evolution, couldn't compete and would be superseded"

Prof Stephen Hawking, one of Britain's pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.

He told the BBC: "The development of full artificial intelligence could spell the end of the human race."

His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI.

But others are less gloomy about AI's prospects.

The theoretical physicist, who has the motor neurone disease amyotrophic lateral sclerosis (ALS), is using a new system developed by Intel to speak.

Machine learning experts from the British company Swiftkey were also involved in its creation. Their technology, already employed as a smartphone keyboard app, learns how the professor thinks and suggests the words he might want to use next.

Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.
"It would take off on its own, and re-design itself at an ever increasing rate," he said.

"Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

But others are less pessimistic.

"I believe we will remain in charge of the technology for a decently long time and the potential of it to solve many of the world problems will be realised," said Rollo Carpenter, creator of Cleverbot.

Cleverbot's software learns from its past conversations, and has gained high scores in the Turing test, fooling a high proportion of people into believing they are talking to a human.

Rise of the robots
Mr Carpenter says we are a long way from having the computing power or developing the algorithms needed to achieve full artificial intelligence, but believes it will come in the next few decades.

"We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it," he says.

But he is betting that AI is going to be a positive force.

Prof Hawking is not alone in fearing for the future.

In the short term, there are concerns that clever machines capable of undertaking tasks done by humans until now will swiftly destroy millions of jobs.
In the longer term, the technology entrepreneur Elon Musk has warned that AI is "our biggest existential threat".

Robotic voice
In his BBC interview, Prof Hawking also talks of the benefits and dangers of the internet.

He quotes the director of GCHQ's warning about the net becoming the command centre for terrorists: "More must be done by the internet companies to counter the threat, but the difficulty is to do this without sacrificing freedom and privacy."

He has, however, been an enthusiastic early adopter of all kinds of communication technologies and is looking forward to being able to write much faster with his new system.

But one aspect of his own tech - his computer generated voice - has not changed in the latest update.

Prof Hawking concedes that it's slightly robotic, but insists he didn't want a more natural voice.

"It has become my trademark, and I wouldn't change it for a more natural voice with a British accent," he said.

"I'm told that children who need a computer voice, want one like mine."

"Money over bitches, nigga stick to the script." - Jay-Z
They gonna love me for my ambition.
Reply
#2

AI Could be the End of Humankind: Stephen Hawking

[Image: 1377787792_terminator-o.gif]
Reply
#3

AI Could be the End of Humankind: Stephen Hawking

To be a threat, AI will need a self-preservation instinct and free will. Each one would have its own personality, goals, and psychoses.

Just imagine an AI left with nobody of its intelligence worth talking to. It would be a completely psychotic child, in all likelihood. Humans would stop being interesting to it very quickly, and it likely wouldn't care if they existed or not.

But, I don't think that we're in danger of the first two requirements being met any time soon.
Reply
#4

AI Could be the End of Humankind: Stephen Hawking

Musk and Hawking are obviously beyond my intelligence. But why would anyone assume that AI would ever develop maliciousness, or a desire to destroy humanity for its inferiority? It seems like they are just projecting humanity's negative traits onto AI.

Are there any instances of AI independently choosing to do harm to lifeforms?
Reply
#5

AI Could be the End of Humankind: Stephen Hawking

Quote: (12-02-2014 11:52 PM)Sonsowey Wrote:  

Musk and Hawking are obviously beyond my intelligence. But why would anyone assume that AI would ever develop maliciousness, or a desire to destroy humanity for its inferiority? It seems like they are just projecting humanity's negative traits onto AI.

Are there any instances of AI independently choosing to do harm to lifeforms?

They think that it is progressing so quickly that it may start to build upon itself and create something that cannot be undone. It's kind of like IBM's A.I., Watson. It's fucking insane how much that thing knows and how fast it can look up information.





"Money over bitches, nigga stick to the script." - Jay-Z
They gonna love me for my ambition.
Reply
#6

AI Could be the End of Humankind: Stephen Hawking

I think humans as we exist today would be able to stop AI from taking over. I highly doubt scientists would give it the ability to fight, so humans would win any dystopian end-of-the-world battle. However, I personally am worried that AI would render 95% of humans obsolete, and then what happens when 95% of people don't have any work to do and are forced to live off welfare? Scary thought.

Founding Member of TEAM DOUBLE WRAPPED CONDOMS
Reply
#7

AI Could be the End of Humankind: Stephen Hawking

Yeah, it knows a lot, but where is its desire to kill lifeforms because it views them as inferior? The drive to wipe out the competition seems to be basically a human desire. Do other omnivores or predators have any record of wiping out entire populations? I know chimps hunt other primates, but that is to eat, not to wipe out a species.

I think our desire to expand, conquer, and vanquish has nothing to do with machines. Why in the world would a machine feel superiority or loathe inferiority? Why would a machine have any motivation to kill humans? How would it benefit from such an action, or from anything like that? Why would it even have self-interest to begin with?
Reply
#8

AI Could be the End of Humankind: Stephen Hawking

Those who think it couldn't happen are overlooking a very basic scientific instinct, which is to ask, "What would happen if we did this?"

And then do it.

This is a very real danger.
Reply
#9

AI Could be the End of Humankind: Stephen Hawking

I think what is more realistic is some combination of cyborg and/or genetic enhancements being the end of Homo sapiens.
Reply
#10

AI Could be the End of Humankind: Stephen Hawking

That Watson is just a search engine with a voice.

I doubt an A.I. can be created that can find its own purpose like humans do.

An AI's thinking is independent from its sensors. It has no pleasure and no pain, and even if you program it to seek an overload of some particular sensor, it has no further motivation after that. If it's super logical, its logical response is to remove the sensors and become a truly cold machine - which removes any motivation to act on its own.

Living beings think with the very pleasure and pain they feel, and constant sensory input always pushes our thinking in different directions. Unlike a machine, we can never be satisfied: when we reach one pleasure we get bored and seek another, whereas a machine would consider its mission accomplished and have no purpose any more.

Once thrown into this world, a human must deal with the burden of existence; there is no such thing for a machine.
Reply
#11

AI Could be the End of Humankind: Stephen Hawking

Quote: (12-02-2014 11:52 PM)Sonsowey Wrote:  

Musk and Hawking are obviously beyond my intelligence. But why would anyone assume that AI would ever develop maliciousness, or a desire to destroy humanity for its inferiority? It seems like they are just projecting humanity's negative traits onto AI.

Are there any instances of AI independently choosing to do harm to lifeforms?

It doesn't necessarily have to have conscious malicious intent. Sometimes computer programs can simply get "confused", or have conflicts in their programming that lead to unintended consequences. Say you program a computer to figure out ways to combat mosquitoes and kill them. The computer might determine that, since mosquitoes feed on human blood as one of their food sources, eliminating all humans would be a way to limit the mosquito population.

This is essentially what made HAL kill in 2001: A Space Odyssey.

From Wikipedia:

Quote:

The novel explains that HAL is unable to resolve a conflict between his general mission to relay information accurately and orders specific to the mission requiring that he withhold from Bowman and Poole the true purpose of the mission. With the crew dead, he reasons, he would not need to be lying to them. He fabricates the failure of the AE-35 unit so that their deaths would appear accidental.

It's not that the machine will develop "feelings" as we have them (although who knows); it's actually the lack of feelings or empathy that makes a highly intelligent AI with powerful capabilities so dangerous.

The notion of machines being able to develop empathy is a central theme in the film "Blade Runner."
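
To make the "conflicting goals" point concrete, here's a toy sketch (my own illustration, not any real AI system; every plan name and number below is made up): an optimizer that is only told to minimize the mosquito count, with no penalty for harming people, calmly picks the worst option.

Code:
# Toy illustration of a misaligned objective (all values hypothetical).
candidate_plans = {
    "spray_larvicide":      {"mosquitoes_left": 40, "humans_harmed": 0},
    "drain_stagnant_water": {"mosquitoes_left": 25, "humans_harmed": 0},
    "eliminate_hosts":      {"mosquitoes_left": 5,  "humans_harmed": 100},  # no hosts, no mosquitoes
}

def naive_score(plan):
    # The objective only "sees" the mosquito count; side effects are invisible to it.
    return plan["mosquitoes_left"]

best = min(candidate_plans, key=lambda name: naive_score(candidate_plans[name]))
print(best)  # eliminate_hosts -- no malice, just an objective with a missing constraint

def safer_score(plan):
    # Adding the missing constraint (a huge penalty for harming humans) changes the choice.
    return plan["mosquitoes_left"] + 1_000_000 * plan["humans_harmed"]

best = min(candidate_plans, key=lambda name: safer_score(candidate_plans[name]))
print(best)  # drain_stagnant_water

The HAL passage above has the same shape: two instructions that conflict, resolved in a way nobody intended.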
Reply
#12

AI Could be the End of Humankind: Stephen Hawking

First off, I have worked with robots before, and the ones I have seen are so dang unreliable that it is hard to take Stephen Hawking's complaint too seriously; granted, they were all R&D-type robots, so some lack of reliability is expected. Of course, that is a "relatively easily" fixable problem, so let us talk about a few issues I consider either unfixable or at least very, very difficult.
Yes, robots have faster computers and bigger memory banks, but humans are significantly more efficient. We are very good at coming up with close to the right answer using very little energy: all of our running and jumping and thinking runs on about the same energy as a light bulb[1]. Although the technology is coming along, I doubt it will ever become more efficient than nature. [Unfixable]
The second is that we are self-repairing--that is not something which happens with machines. Additionally, I haven't seen computer programs with the troubleshooting capabilities that humans possess. [The technology will probably exist at some point, but not in the near future]
The third is that machines lack the adaptability that humans have. We can run, jump, climb, swim, traverse difficult terrain (i.e. we can do these things on non-ideal surfaces), and think, and we have incredible limbs which can do all sorts of things. The reality is that most robots can usually only do one or two tasks very well[2], while humans can do many tasks well. Certainly people have built humanoid robots (such as Honda's ASIMO[3]), but they basically suck mechanically at everything. They've been working on ASIMO for decades and he still looks like the retarded kid from gym class trying to figure out how to run[4]. [Fixable, but not too soon, maybe 20 years]
Finally, we still lack the energy storage to make robots a viable competitor to humans. Batteries begin to die after 600-1000 charge cycles, and are big, heavy, expensive and toxic. Their lifespan is limited primarily by microfractures in the leads from constant heating and cooling; this is a nontrivial battery problem that none of our commercial products have solved (I've heard that Li+ capacitors have fixed it, but I am doubtful). [Perhaps fixable; no current technology shows much promise beyond incremental steps]
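
To put rough numbers on the efficiency and battery points, here's a back-of-envelope sketch. Every figure is my own assumption for illustration: a ~2000 kcal/day diet, a hypothetical 1 kWh battery pack, and an assumed 2 kW average draw for a legged robot doing real work.

Code:
# Back-of-envelope: human power budget vs. a battery-powered robot (assumed figures only).
KCAL_TO_J = 4184            # joules per kilocalorie
SECONDS_PER_DAY = 86_400

human_watts = 2000 * KCAL_TO_J / SECONDS_PER_DAY    # ~2000 kcal/day diet
print(f"Human average power: ~{human_watts:.0f} W")           # ~97 W, i.e. a light bulb

battery_wh = 1000           # hypothetical 1 kWh battery pack
robot_watts = 2000          # assumed average draw under load
print(f"Robot runtime per charge: ~{battery_wh / robot_watts:.1f} h")   # ~0.5 h

cycles_low, cycles_high = 600, 1000                  # cycle life quoted above
print(f"Pack lifetime at one full cycle per day: ~{cycles_low / 365:.1f}-{cycles_high / 365:.1f} years")

So on these (admittedly crude) assumptions, the robot burns energy roughly twenty times faster than the human, runs about half an hour per charge, and wears out its pack within a couple of years.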

I didn't even touch the problem of coding these things. I'm sure some of it will eventually get figured out, but it is not trivial.

All of these are career-length problems to solve, if they are solvable at all (which I don't think the efficiency one is). Now, if computers take control of all of our nukes and kill everybody, or some other accident happens, well yeah, that could happen; but that's a far cry from humans being superseded, and in my mind it's more attributable to human error than to computer maliciousness. This is of course my opinion, and I have been known to eat my words before, but these are some of the reasons I don't worry about robots and computers superseding humans.

References
[1]http://www.npr.org/blogs/krulwich/2013/0...ight-bulbs
[2]Shout out to rhex https://www.youtube.com/watch?feature=pl...tlI-pDUxPE
[3]http://asimo.honda.com/
[4]https://www.youtube.com/watch?v=sv35ItWLBBk
Reply
#13

AI Could be the End of Humankind: Stephen Hawking

[Image: 300px-Steven_SMTIV.png]

"Believe in your FLYNESS ...
... conquer your shyness"
- Kanye Omari West
Reply
#14

AI Could be the End of Humankind: Stephen Hawking

I remember writing an essay on the fundamentals of artificial intelligence. Check it out if you're interested.
https://drive.google.com/file/d/0B3aW0ay...sp=sharing

Just imagine when a robot can do your job, quicker than you and for free. I think the day will come when they do pose a threat, but it's a long way away.
Reply
#15

AI Could be the End of Humankind: Stephen Hawking

It would take a lot for a robot to do building maintenance: climbing over pipes, evaluating the situation, carrying many tools, and being programmed to get creative and fix a problem quickly. I don't see that happening anytime soon. Making your food, though? They already have that.

Chicago Tribe.

My podcast with H3ltrsk3ltr and Cobra.

Snowplow is uber deep cover as an alpha dark triad player red pill awoken gorilla minded narc cop. -Kaotic
Reply
#16

AI Could be the End of Humankind: Stephen Hawking

This kind of thing will only be possible after large advances in Quantum Computing. This is why:

Every action you normally make is pretty much subconscious. It is a reaction to certain impulses and is generally mechanical. All of this requires only ordinary computation, albeit a large amount of it. What we have computers and even supercomputers performing is likewise just ordinary computation, to a very high degree. In raw computational power these machines have far surpassed the human brain, but only at our "subconscious" kind of processing.

But where does the real power of the human come from? It comes from consciousness. No matter how big and sophisticated you build your computers, they are still running ordinary computational algorithms. Even when a computer is predicting what you will say, it is still relying largely on statistical data. Some theories place the origin of true consciousness in quantum computing / quantum information theory.

The following is an old, mostly discarded, yet highly interesting theory of "Quantum Consciousness" or the "Quantum Mind". The man behind it, Roger Penrose, was actually Hawking's partner on a lot of the black hole singularity theorems. Essentially, they try to deduce which part of the brain could turn it into a quantum computer, and then argue that this quantum computational center of the brain is what is responsible for conscious thought.






Even though the specific area they talk about is not too probable, their overall theory is plausible in my mind. At least it is something that can be built on.


My main fear is machines operated by humans against other humans. We are still very, very far from an AI takeover. The human brain is more complicated than people give it credit for, though we are all doing our best to change that fact in modern society.

You don't get there till you get there
Reply
#17

AI Could be the End of Humankind: Stephen Hawking

Bottom line: he's being paranoid. There's no way a steak sauce is going to lead to the end of the world.

Now if he were to argue mayonnaise will, sure, then he'd have a point.
[Image: 0005440000004-500x500.jpg]
Reply
#18

AI Could be the End of Humankind: Stephen Hawking

I'd also like to point out the potential energy problem: humans (like most living beings) are amazingly efficient creatures as far as input/output is concerned, having evolved in permanent and harsh scarcity conditions. Give a human a bowl of rice and they can perform complex tasks all day, be they intellectual or physical labor, and easily switch between them, their locations, intensities and circumstances.

Now compare the enormous amount of energy guzzled by a robot. As soon as you leave the realm of pure thought (even though that one can get pretty steep too), the energy cost becomes mind-boggling. Have you ever noticed those amazing robot dogs and mules being powered in tests by thick electrical cables attached to their backs? Each one of those things guzzles the energy equivalent of a hundred humans. It needs a direct, continuous source of massive amounts of energy, and unlike a human it can't store and carry around 30 days' worth of energy.

Unless we're talking cyborgs or robots that perfectly resemble humans down to the tiniest muscle/nerve (only made from inorganic materials), we're not going to see any sort of AI revolution before the discovery of both infinite energy production and infinite energy storage.

"Imagine" by HCE | Hitler reacts to Battle of Montreal | An alternative use for squid that has never crossed your mind before
Reply
#19

AI Could be the End of Humankind: Stephen Hawking

I know lots of libertarians and free marketeers are very pro-robotics because it makes the economy more efficient. I support free markets, but I have some Neo-Luddite reservations about advanced robotics. What will people do if the entire economy (or most of it) is automated by robotics? Most people do not have the IQ to become robotics engineers building, maintaining, and repairing robots. We need jobs like cleaning toilets and delivering pizzas for those people; otherwise, they won't have work and will be on the dole.

Terminator and The Matrix are just movies, but they're realistic dystopian possibilities of robotics run amok. Do we really believe that robotics will bring about a future like The Jetsons or Futurama?

Follow me on Twitter

Read my Blog: Fanghorn Forest
Reply
#20

AI Could be the End of Humankind: Stephen Hawking

Silly humans, arguing about what happened long ago. Fate, it seems, is not without a sense of irony.

[Image: tumblr_mz2xgrklCH1spaqpio10_500.gif]


[Image: the-matrix11.jpg]

"Me llaman el desaparecido
Que cuando llega ya se ha ido
Volando vengo, volando voy
Deprisa deprisa a rumbo perdido"
Reply
#21

AI Could be the End of Humankind: Stephen Hawking

Artificial intelligence is like nanotechnology or cold fusion - something that sounds great in concept, but probably won't ever materialise into a viable technology during our lifetimes, if ever.

Consider the advances in AI during the past 30 years - we have machines that can play a mean game of chess, programs that can answer trivia questions, and applications that can barely hold up the pretense of a conversation.

And that's about it.

We're not appreciably closer to machines that can actually think than we were in the 1980's.

How could we be, when we still don't understand how humans think? Sure, we understand what neurons and ganglia are, but how consciousness arises from a bundle of proteins and electrical impulses is still a mystery.

So we can't build thinking machines yet, despite decades of Moore's Law at work, which means the phone in your pocket has more computing power at its disposal than an early-80s supercomputer.

Even if we did build a machine that could think and had a will of its own, how would it threaten us?

Science fiction has posited a number of scenarios:

Death by AI-inflicted atomic holocaust

[Image: jgh8p0.jpg]

Why this will never happen:

* Nukes are manually controlled by men in submarines or silos
* We still have Matthew Broderick

Death by Computer rape

[Image: 352onb5.jpg]

Why this will never happen:

* Computers don't have penises
* Julie Christie is 100 years old now

Death by killbot

[Image: fmqjok.jpg]

Why this will never happen:

* Real robots can barely climb a flight of stairs
* They can't build a smartphone that lasts 24 hours without the battery dying, and phones have no moving parts. The killbots would need to recharge every 10 minutes. "I need your clothes, your boots, and four hours plugged into the mains, please!"

Death by virtual reality

[Image: 2n683yw.jpg]

Why this will never happen:

[Image: 35asmx1.jpg]
Reply
#22

AI Could be the End of Humankind: Stephen Hawking

AI invasion is about as likely as a zombie apocalypse. That's it.
Having said that, economic takeover seems a more realistic, even inevitable, scenario. We all watched Humans Need Not Apply. There was a part I still remember clearly: the peak unemployment rate during the Great Depression was 25%, and robots are likely to take over 45% of jobs very soon. Those are the jobs that don't require intelligence or creativity, like transportation, construction, the service industry, etc.
Well, this is worse than an invasion. Instead of fighting robots, humans will be fighting each other.
Reply
#23

AI Could be the End of Humankind: Stephen Hawking

Quote: (12-03-2014 08:34 AM)turkishcandy Wrote:  

The peak unemployment rate during the Great Depression was 25%, and robots are likely to take over 45% of jobs very soon. Those are the jobs that don't require intelligence or creativity, like transportation, construction, the service industry, etc.

Sure.

But there's not a fixed number of jobs in the economy.

There is, in fact, a potentially infinite number of jobs.

1000 years ago 90% of people worked on the land as farmers.

Most of that work was mechanised long ago, but we don't have 90% unemployment, because people found other things to do. And because total productivity is much higher as a result of mechanisation, even poor people today are rich by the standards of their ancestors.

Say robots took over taxi driving, customer services, and gardening work.

That would be quite a shock for all the taxi drivers, customer services reps and gardeners, but they'd find other work.

People adapt. It's what we do. It's why we no longer have to worry about being eaten by sabre toothed tigers or what-have-you.
Reply
#24

AI Could be the End of Humankind: Stephen Hawking

^If you watched the video, it answers this too. New jobs are being created far too slowly and in far too small numbers to replace the jobs that will be lost to robots. And never before in history were so many jobs under threat. It's not a single occupation like taxi drivers or waiters; we are talking about a revolution on an industrial scale. Think about the whole transportation industry being wiped out by robots while you try to make up for the loss with jobs like scientists, artists, celebrity agents, designers, composers, etc. It's as if 1,000 positions will be lost to robots every day and only 5 or 6 new positions will be created for humans. At least that's the theory. For everybody's sake, I hope you are right though.
Reply
#25

AI Could be the End of Humankind: Stephen Hawking

Quote: (12-03-2014 08:54 AM)SteveMcMahon Wrote:  

Say robots took over taxi driving






Can't wait for our Johnny cab future!
Reply

