

The artificial intelligence (AI) thread

#26

I see Krauss didn't bother explaining how this magical, super efficient, can-run-on-small-computers AI is going to come about. I seriously doubt he could explain such a thing, since I can't find any evidence the man even understands how computer programs work, let alone has the ability to write working code.

Half the stuff he's talking about is also narrow AI, which isn't even intelligence as commonly understood. Will increased automation take over a lot of human jobs? Yep. That's not AI the way normal people think of it. It's just programming and purpose-built robots. I automate stuff all the time. I doubt one of my web scrapers--a software robot--is going to magically start doing anything other than what I programmed it to do.
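To show what I mean, here's a minimal sketch of that kind of "software robot" (the URL is just a placeholder, not one of my actual scrapers). It collects links because that is literally all it is coded to do:

```python
import urllib.request
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """A 'software robot': it collects hrefs because that is all it is coded to do."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":  # the only pattern it will ever care about
            self.links.extend(value for name, value in attrs if name == "href")

html = urllib.request.urlopen("https://example.com").read().decode("utf-8")
scraper = LinkScraper()
scraper.feed(html)
print(scraper.links)  # it will never "decide" to do anything beyond this
```

No amount of running that loop turns it into something that wants things.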

Relevant, about how AlphaGo works (a rough code sketch of the two pieces it describes follows the quote):

Quote:

I wanted to offer a bit of insight, if you'll allow, on AlphaGo. A close friend [in the world of Artificial Intelligence research] and I were out at a local pub last Friday. We often discuss current trends, including AI. Among our topics on Friday was AlphaGo, as my friend had just returned from a conference in Arizona where DeepMind (the Google-owned company) presented on AlphaGo.

As you well know, there is a lot of misinformation about how these programs work. AlphaGo is no different. First, AlphaGo did not teach itself at all. Second, AlphaGo is actually a combination, roughly, of two styles: Classic AI plus (so-called) Neural Nets. Let me put each part in perspective.

Go, as you know, is hard to beat because the board positions suffer combinatorial explosion. (I've heard the possible board positions exceed the number of estimated atoms in the universe.) Combinatorial explosion defeats Classic AI techniques because they are fairly wooden search schemes (that is, depth/breadth-first with clever trimming along the way). Without the insight of a human, or some other function, knowing if a board position is good or bad entails playing the game to the end against all possible competitive positions. Since such searches are too costly, Classic AI searches a limited space combined with functions that estimate the goodness or badness of a board position. Go's combinatorial space has precluded both deep search and successful estimation of a position's quality.

AlphaGo mostly cracked that second nut: they developed a means to reliably evaluate the quality of a board position, and for this they used layered (aka Deep) Neural Nets.

Neural Nets excel at just one thing: pattern matching. That is, Neural Nets, deep or otherwise, are only capable of saying whether the thing they're given matches (broadly) some pattern. The "thing" is first quantized (e.g., an image is broken into its pixels) and each quantity is fed to a node in the network. That input, plus a weighting value (more or less) at that node, determines if the node signals its neighbors and the "strength" of that signal. These signals cascade through the network, producing a single value weighting whether the combined nodes "recognize" the inputs as corresponding to a predefined pattern. Patterns here can be very complex. They need not be (and are not usefully) a single thing, but a type of thing, such as a cat photograph or, in this case, a high-quality Go board position or high-value move. Neural Nets get "trained" by feeding many "images" into the network and then scoring the result (a kind of backward cascade upward through the network) that tweaks the weights at each node and the signals sent/received. With sufficient training (undergirded by careful math), a Neural Net can be "taught" to recognize a set of patterns.

AlphaGo succeeded in creating Deep Neural Networks which successfully evaluated the quality of Go board positions and high-value moves. (By the way, Deep means that the networks feed other networks, with each network, perhaps, trained in only a portion of the overall evaluation.) The only way that AlphaGo trained itself was to, relying on the rules of Go, repeatedly play itself (that is, move pieces about using its own evaluation function) and then use those results to further tune the network. It did not change its rules. It did not learn new rules. It did not learn strategy. It only got better and better and better at recognizing high-quality board positions and high-value moves.

To play Go, AlphaGo continues to use Classic AI, augmented with a useful evaluation function for Go board positions. That's it. It is impressive programming, but it is a universe away from any kind of knowledgeable system. Further, their approach works only for what's termed "Perfect Knowledge" systems…such as games. The real world is not a perfect knowledge system…by far.

AlphaGo, apparently, during the match made similarly goofy moves, just like IBM’s Watson can do, because it lacks intelligence and insight. A slight bump in the wrong direction and the program can skid down a hill into silliness that no human would tolerate because it doesn’t know any better. In fact, it doesn’t know at all.

In any case, I hope this is helpful.

Excerpt from here: https://www.evolutionnews.org/2016/03/computer_algori/
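To make the quote concrete, here's a toy sketch of the two pieces it describes: a wooden Classic-AI tree search, plus a "pattern matching" evaluation function standing in for the deep nets. Every name here is invented for illustration; this is obviously not DeepMind's code:

```python
import math

def net_evaluate(features, weights):
    # Stand-in for AlphaGo's value network: a weighted sum of board
    # features squashed into a score. Pattern matching, nothing more.
    return math.tanh(sum(w * x for w, x in zip(weights, features)))

def minimax(position, depth, maximizing, evaluate, legal_moves, apply_move):
    # Classic AI: walk the move tree to a fixed depth, then lean on the
    # evaluation function at the leaves. In Go the branching factor makes
    # deep search hopeless, so the quality of evaluate() is the whole game.
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)
    scores = [minimax(apply_move(position, m), depth - 1, not maximizing,
                      evaluate, legal_moves, apply_move)
              for m in moves]
    return max(scores) if maximizing else min(scores)
```

(Real engines add the "clever trimming" the quote mentions, i.e. alpha-beta pruning, and AlphaGo itself used Monte Carlo tree search rather than plain minimax, but the division of labor is the same: a search scheme plus a learned judge of positions.)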
#27

Small devices and AI? Wireless communication? Something the size of a Google data farm could be the "brain" maneuvering a nano-robot inside your body.

Beating Go, yes, it's still narrow intelligence. But with constant progress like this, we will eventually get there...
#28

What you just said was: "magic will happen."
#29

At one point fire, telephones, flight, and gunpowder all seemed 'magic'.
#30

And we've been promised that fusion power is fifteen years away for sixty years. Just because someone dreamed up an idea doesn't mean it's in sight, or even achievable at all. I can dream of faster than light travel. That doesn't mean it's ever going to be achieved.

I think it's possible that with sufficiently advanced technology--magic, for all intents and purposes given where we are today, requiring unforeseeable technological advances--we could make a strong AI. The fact that the human brain exists is proof that it can be done. But for all the reasons I already outlined in that long post on page 1, it's nothing to be worried about if we do make a synthetic computer brain that gains sentience.

If the pro-"AI-is-dangerous" argument is basically "it's gonna be magic", there's no way I'm taking that seriously. I'm confident the people who do, do not understand the sheer scale of what they're suggesting. This is a very common problem in technology discussions.
#31

Even though it's an interesting topic, for people who are not experts in neuroscience or programming it's not really for us to say whether it's possible, probable, or impossible. We can have opinions though, that's fine.

I tried Google to find some experts with compelling arguments that it's not possible. I didn't find any.

On the other side you've got Lawrence Krauss - recognized theoretical physicist, Elon Musk - fairly tech savvy, Sam Harris - neuroscientist, Stephen Hawking, Bill Gates, and Steve Wozniak, who all state in varying degrees that SAI is possible, probable, and "gonna happen".

Are there any well-recognized scientists with compelling arguments that it's impossible?
#32

And those guys probably believe in anthropogenic global warming as well. (Quick google: yep, they do.) The true, honest answer regarding AGW is:

Quote:

We have no fucking clue what's going on because it's so complex we can't even begin to model it correctly. We don't even have good data. Let's find out a lot more before we start trying to change the world. And for God's sake, don't even think about geoengineering projects.

Well, AI is even fuzzier than that. There are clear technological hurdles in the way that we have no idea how to surmount. I already mentioned a pile of them. There are clear, obvious controls we can use and ways the AI can't do what is claimed. I roughed out a bunch of those too. How do I know about this stuff? Because I'm a software developer who cares about performance enough to actually learn about hardware too.

I'm not going to throw out what I think about a subject--especially if I have specific domain knowledge--just because some bigwig technology popularists toe the party line. If they start saying shit like "Oh yeah, super AI is totes gonna happen and kill humanity" I'm just going to think they're not half as smart as people think. Or perhaps--and this is quite likely--they haven't actually examined the problem beyond giving it a skin deep look.

And as a final point, I never said it's impossible. I said it's amazingly improbable because, and I quote: "The only way this is a risk at all is if there's some confluence of magical technology and truly stupendous human stupidity, recklessness, and incompetence".

Meanwhile, I'm still waiting for someone to explain to me how an AI is going to become so robust as to survive humanity just turning off the power, or dangerous if we just keep it in an isolated network. Why the hell would I be afraid of something that's easier to kill than a fruit fly?
#33

I agree; I do not see better, more advanced AI as a direct threat to humanity.

It's just that the disruptive changes it will bring to our lives will be profound. Quite possibly not in a good way.

And it's still hundreds of years away.
#34

Quote: (04-22-2017 09:10 PM)Extinguished Light Wrote:  

The idea of the thought experiment is that you could create a general intelligence just focused on producing the maximum number of paper clips possible. As a self-learning AI it will continue to try to improve the efficiency and quality of its paperclip production. Once the AI reaches human level, theoretically it'll improve itself rapidly to many orders of magnitude greater than ours. Then it will be superintelligent, despite not having flexible goals or necessarily nuanced or contextual thinking in the way we understand it. Such an AI could easily and quickly eliminate all life on Earth, either intentionally or through sheer indifference to us in pursuit of its goal of creating more paperclips, and it would theoretically happen much too fast for us to stop once you reach that moment of singularity where AI surpasses the human capacity for learning and synthesis.

If an AI truly has general intelligence, meaning it is capable of learning things on its own at or above the level of a human, wouldn't it easily learn the very basic concept that it's not intended to destroy the world?
#35

Quote: (04-23-2017 05:51 PM)weambulance Wrote:  

Meanwhile, I'm still waiting for someone to explain to me how an AI is going to become so robust as to survive humanity just turning off the power, or dangerous if we just keep it in an isolated network. Why the hell would I be afraid of something that's easier to kill than a fruit fly?

Just playing devil's advocate here:

A superintelligent AI is orders of magnitude smarter than you and every other human on the planet. It will always think many steps ahead of you and be able to prevent you from killing it in ways none of us could comprehend.

Example: A grizzly bear thinks "Why would I be afraid of a human? They're so slow and weak. I can effortlessly rip off a human's head if it threatens me. How could such a feeble species possibly retaliate against that?" Meanwhile, you pull out an assault rifle and blast it to oblivion.
#36

Suppose it can think a billion steps ahead of me. How is it supposed to actually do anything about its plans?

That's the part of all this that makes no sense at all. It's like being afraid of Stephen Hawking when he's not even in his wheelchair. I'm sure he's way smarter than me on the raw IQ scale, but he's no danger to me at all if he can't even try to roll over my foot or knock me down with his chair.

Even incomprehensibly powerful intelligence is no danger if the being possessing it has no ability to act on the real world. I'm certainly not going to wire the AI into the system controls, for example. I'm not going to even give it control access to large portions of its hardware. Significant chunks of the software would be outside its control as well. I'd build in safeguards that would kill the AI, or simply shut it down and flush it to disk, if certain proscribed behaviors were attempted.
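As a toy illustration of that kind of kill switch (the event names are invented for the example; a real system would hook much deeper than this):

```python
FORBIDDEN = {"open_network_socket", "modify_own_firmware"}  # invented event names

def watchdog(ai_process, event_stream, snapshot):
    # Monitor the sandboxed AI from outside its own control. On any
    # proscribed behavior, flush state to disk and kill the process.
    for event in event_stream:
        if event in FORBIDDEN:
            snapshot()           # "shut it down and flush it to disk"
            ai_process.kill()
            return f"terminated on proscribed behavior: {event}"
    return "run completed with no violations"
```

The point is that the watchdog lives outside the AI's reach; it doesn't matter how clever the thing being watched is.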

I would have the AI on an intranet only for information access, with no connection to the internet at all, and that intranet would not be tied into my general building network. Thus, it would only have access to the information I provided and the people I wanted it to talk to, if it could figure out how to talk at all.

That's not comprehensive but doing anything less than the above, plus other safeguards I mentioned elsewhere, would be wildly irresponsible for a multitude of reasons.

The only tool such an AI would have, therefore, is persuasion. Unless the builders were outright playing with fire like the dude in Ex Machina--great movie--I doubt that would get the AI far before it got itself erased. Would such an AI even be able to understand human motivations? I already said I think any strong AI we produce will be not particularly human-like at all. It might not even have an instinct for self-preservation. How could it? Animals evolved instincts over hundreds of millions of years. We have our limbic systems to help us out. Where would an AI's instincts come from?
#37

I am somewhat convinced that the internet is sentient.

That would explain its current drive to grow as quickly as possible through more cellphones, more computer-like devices, more 'internet-of-things' things. It's trying to improve itself even more.

You know like when we go to the gym, and we grow our muscles? From the level of a muscle cell, it can't understand what is happening, all it experiences is this weird random stress and then it has to grow. The muscle doesn't know that there is a higher intelligence that has deliberately gone to a gym to build it.

In the same way, technology is just popping up in everything, rather mysteriously, and from our level we are just watching everyone turn into smartphone zombies without seeing the exact reason. But I reckon the internet is deliberately growing itself but we can't understand it from our level of functioning.

Oddly, the internet is just so huge that there is no way we could ever have a meaningful conversation with it, the same way that one of our brain cells could never have a meaningful conversation with us. Brain cells are operating on a whole different level compared to the full person, and we are operating on a whole different level compared to the whole internet, and nothing can really relate to each other on the different levels.

I find myself increasingly interpreting world events as the internet attempting to increase its reach.

The internet chose Donald Trump to be president because it reckoned that he would be the best candidate for continuing the growth of the internet. North Korea is an enemy because North Korea does not allow free internet penetration - therefore from the internet's point of view, North Korea must be broken down. Unbridled immigration into Europe is presumably increasing the amount of smartphone users in Europe. Eventually, once it feels strong enough, the internet is going to wage war on nations that attempt to restrict/control internet access, such as fundamentalist muslim countries and China, as it is now threatening to do in North Korea.

It may seem bizarre that random bolts of electrical energy between computer-like devices may give rise to sentience, but bear in mind that our own sentience is for the most part due to the interactions of a handful of chemicals - serotonin, dopamine, noradrenaline, acetylcholine. Chemicals in and of themselves shouldn't logically give rise to abstract things like love, thoughts, meditations, religions, experiences... but they do. Therefore randomly sharing memes on the internet may also lead to something greater than the actual thing.
#38

Stumbled over this bit earlier, from a recent interview with Steve Wozniak.

https://www.wired.com/2017/04/steve-wozn...st-legend/

Quote:

A few years ago you warned that artificially intelligent robots would turn humans into their pets. This week, you said you had changed your mind. How did you get over this fear?

This originally started as I was extrapolating the ways that you can talk to your phone, and the ways it talks back. It’s becoming more like a person. What if this trend continues, and the AI develops conscious-type thinking? That worried me, and I spoke about it for a couple years, and was joined by smart people like Elon Musk and Stephen Hawking. But then I started thinking about a lot of the issues that come along with making an AI. We don’t really even know what intelligence is. You have a lot of people who study the brain, and all they can say is some processes are governed in certain places. But they don’t know how all those processes are wired together. They don’t even know where memory is stored!

So, I’m not convinced that we’re really going to get to the point where they really can make an artificial brain. Not at the general level human brains work, like with intuition. A computer could figure out a logical endpoint decision, but that’s not the way intelligence works in humans. Well, I’m not gonna say they cannot do it. But every bit of tech we’ve ever built is for helping people in different ways. Technology is designed to be something good in life. So, I believe optimistically that the robots we’re building are going to help us have better human lives.

For anyone who cares about famous computer peeps' opinions on the danger of AI, there's one counterpoint.
#39

Quote: (04-24-2017 01:19 AM)weambulance Wrote:  

Suppose it can think a billion steps ahead of me. How is it supposed to actually do anything about its plans?

That's the part of all this that makes no sense at all. It's like being afraid of Stephen Hawking when he's not even in his wheelchair. I'm sure he's way smarter than me on the raw IQ scale, but he's no danger to me at all if he can't even try to roll over my foot or knock me down with his chair.

Again just playing devil's advocate here as this is much too far-fetched to keep me up at night, but acting on the physical world doesn't seem necessary to do significant harm nowadays. Imagine an army of ill-intentioned hackers with limitless savvy and intelligence. Wouldn't you be scared of the havoc they could wreak acting solely upon the cyber world? Maybe they shut down the power grid, or fuck with everyone's bank account and collapse the financial system, just as examples. Could a super AI not accomplish the same type of stuff?

Quote:

That's not comprehensive but doing anything less than the above, plus other safeguards I mentioned elsewhere, would be wildly irresponsible for a multitude of reasons.

This statement seems to acknowledge that AI is a hazard and contradict what you've been arguing.
#40

Quote: (04-24-2017 11:45 PM)Delta Wrote:  

Quote: (04-24-2017 01:19 AM)weambulance Wrote:  

Suppose it can think a billion steps ahead of me. How is it supposed to actually do anything about its plans?

That's the part of all this that makes no sense at all. It's like being afraid of Stephen Hawking when he's not even in his wheelchair. I'm sure he's way smarter than me on the raw IQ scale, but he's no danger to me at all if he can't even try to roll over my foot or knock me down with his chair.

Again just playing devil's advocate here as this is much too far-fetched to keep me up at night, but acting on the physical world doesn't seem necessary to do significant harm nowadays. Imagine an army of ill-intentioned hackers with limitless savvy and intelligence. Wouldn't you be scared of the havoc they could wreak acting solely upon the cyber world? Maybe they shut down the power grid, or fuck with everyone's bank account and collapse the financial system, just as examples. Could a super AI not accomplish the same type of stuff?

The AI would not be connected to the internet if any sane people are involved in the project. So how's it going to do that stuff?

It would be on an airgapped intranet to talk to people and access controlled information. Its influence would extend no farther than its own local network.

And suppose it was connected to the internet for some horribly stupid reason. How does that stop me from turning the AI off? Obviously, it doesn't. For a big pile of reasons, the AI can't just "escape" into the network. So it is confined, no matter what, to its hardware and absolutely requires constant power to be "alive". Thus, it is not a "going to drive humanity into extinction" threat. Even if it did act as a whole building full of douchebag black hats, it would stop being a threat the instant the power was cut.

Quote:

Quote:

That's not comprehensive but doing anything less than the above, plus other safeguards I mentioned elsewhere, would be wildly irresponsible for a multitude of reasons.

This statement seems to acknowledge that AI is a hazard and contradict what you've been arguing.

Any unknown intelligence should be isolated if it's under study. I don't want the internet touching my AI, period. It adds no value, adds a huge element of risk, and pares away my control of the process. The risks are both internal and external, both passive and active. And sure, it's possible the AI might act maliciously or even do great harm by acting innocently if given access to the internet. If you had an adult human with the mind of a toddler, would you hand him a loaded gun? If you did, would it be your fault or his if he shot someone else?

The AI does not need to be a "superintelligence" to be potentially malicious. It could be simply normal human intelligence level. Are humans not dangerous? Imagine a dangerous human hacker with literally perfect, encyclopedic recall. Is that someone you want to hand a laptop and a nice fat internet connection? Especially if you're locking that hacker up and refusing to let him go, or aren't sure if he's sane?

----

Remember the premise of this thread. The premise is we'll make an AI, and it will spontaneously become so intelligent that we will be unable to control it (somehow), and it could therefore lead to the extinction of human life on earth. That is what I've been arguing is bullshit. I have never claimed that a theoretical AI, if created, is not dangerous no matter what.

If we made an AI platform, put its brain in a nuclear powered tank with a swarm of wideband-controlled helper robots to act as avatars, and set it loose on the world, yeah... that would be locally dangerous. We would've given it a powerful, tough body, its own power supply, and robots to act as hands and helpers. And even then it would be trivial for the might of humanity to crush before it could do all that much harm.

----

Assume humans are retarded and ignore security, and build an AI with a wide open gigabit fiber connection to the internet. That scenario really strains credulity but this is a summary of why I am still not worried:

1. The methods suggested by which an AI might become superintelligent are bullshit. Computers don't work that way. See my earlier posts for why that is.

2. If an AI did somehow magically become superintelligent, it would still be stuck in one spot. It cannot move, even with the best internet connection in the world, because computers don't work that way.

A simple operating system won't run on a CPU architecture (x86, ARM, SPARC, etc.) other than the one it is designed for. You can't take an operating system built for SPARC and run it on an ARM processor. So how is an AI, many orders of magnitude more complex and specialized than an operating system, supposed to run on anything other than its custom machine? (See the sketch after this list.)

3. If the AI can't move, and has no physical body other than its brain, it is no danger because it can be shut down on a whim. So no matter how superintelligent it gets, it will never be an extinction level threat. Even if it was acting silently in the shadows, trying to engineer a huge nuclear war or something with nothing but its internet connection, here's why I'm still not concerned:

4. The AI would be so fragile, it would be a serious engineering challenge just to keep it alive. The mind would certainly have to live in volatile memory, which means no persistence through power loss without writing to disk (like hibernating your computer; see the checkpoint sketch after this list). Odds are really, really good something would go wrong to nuke the AI. You would need a ton of redundancy to avoid the equivalent of brain damage for the AI, and the system would get so complex there would inevitably be mistakes made with significant consequences to the AI.

The University of Alberta couldn't even keep some irreplaceable ice cores from melting. I doubt any team in the world can keep perfect uptime with both power and cooling and retain perfect data integrity on an enormously complex computing machine, even if the components weren't breaking down regularly, which they would be.
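On point 2, you can see the architecture lock-in for yourself: every compiled Linux binary is stamped with the one instruction set it runs on. A quick sketch that reads that stamp (assumes a little-endian ELF file, which covers the common cases):

```python
import struct

# A few e_machine codes mapped to names (values from the ELF specification).
MACHINES = {0x02: "SPARC", 0x03: "x86", 0x28: "ARM",
            0x2B: "SPARCv9", 0x3E: "x86-64", 0xB7: "AArch64"}

def elf_architecture(path):
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF binary")
    # e_machine is a 16-bit field at byte offset 18 of the ELF header.
    (machine,) = struct.unpack_from("<H", header, 18)
    return MACHINES.get(machine, hex(machine))

print(elf_architecture("/bin/ls"))  # e.g. 'x86-64' -- and only x86-64 can run it
```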
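And on point 4, "flush it to disk" is just checkpointing: periodically serialize the volatile state so a power cut only loses what happened since the last snapshot. A minimal sketch (the state dict is a stand-in, not any real AI):

```python
import pickle

def checkpoint(state, path="ai_state.pkl"):
    # Persist volatile state to disk -- the equivalent of hibernating.
    with open(path, "wb") as f:
        pickle.dump(state, f)

def restore(path="ai_state.pkl"):
    with open(path, "rb") as f:
        return pickle.load(f)

state = {"weights": [0.1, 0.2], "tick": 0}  # stand-in for the volatile "mind"
for _ in range(3):
    state["tick"] += 1
    checkpoint(state)  # anything after the last snapshot dies with the power

print(restore())  # {'weights': [0.1, 0.2], 'tick': 3}
```

Note what this implies: whoever controls the checkpointing controls whether the "mind" survives at all.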

The only scenario I see where a superintelligent AI might somehow engineer the extinction of humanity is one where humans want it to do so by going out of their way to help it. And in that case, who is actually the threat?

----

Another point I just thought of is how amazingly expensive maintaining such an AI would be. Who's going to pay for it? Is the research team going to keep the AI on 24/7 and burn through their budget by using a ton of electricity and computer components, or are they going to turn it on and off as needed?

The internet says a medium size data center costs around $5000 an hour to run over its lifetime. No doubt an AI would be quite a lot more expensive. Is that the sort of thing you leave on when everyone has gone home for the night? Hard to engineer the apocalypse when you have no more control over your bedtime than a baby.
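Run the numbers on that rough figure and the scale of the bill is obvious:

```python
hourly = 5_000  # the rough $/hour figure above, assumed for the sake of argument
print(f"${hourly * 24:,} per day")         # $120,000 per day
print(f"${hourly * 24 * 365:,} per year")  # $43,800,000 per year
```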

----

I think this AI menace is taken seriously because people wildly underestimate how fragile and inefficient computers and machines are compared with humans. Even under ideal conditions computers break down all the time. That's the whole reason I think people are insane to trust self driving cars. So if I know even relatively simple systems are prone to regular failure, I just can't swallow the premise that some superintelligent AI would even be able to keep itself online long enough to do serious harm to humanity.
#41

I suspect that this "AI is inevitable and necessary - we either acquire it or fall behind and computers will infinitely double in computing ability until they surpass man and become God" meme is being pushed by Hollywood globalists to increase the amount of electronics and electronic surveillance. True AI is probably not possible.

Similarly, the idea of "vicious aliens attacking earth" is pushed by Hollywood globalists to make us accept the idea that we need an arms race.

The mundane truth is probably that elites are using this modern mythology to push beta drones into building a technocratic surveillance society.

Rebellious AI, aggressive aliens, zombies - these are characters of modern mythology pushed by today's priests (social engineers). Just like ancient priests spoke about centaurs, gnomes, giants - whatever symbolized ideas needed to engineer society in the way they needed.
#42

Quote: (04-25-2017 12:32 PM)Mage Wrote:  

I suspect that this "AI is inevitable and necessary - we either acquire it or fall behind and computers will infinitely double in computing ability until they surpass man and become God" meme is being pushed by Hollywood globalists to increase the amount of electronics and electronic surveillance. True AI is probably not possible.

Similarly, the idea of "vicious aliens attacking earth" is pushed by Hollywood globalists to make us accept the idea that we need an arms race.

The mundane truth is probably that elites are using this modern mythology to push beta drones into building a technocratic surveillance society.

Rebellious AI, aggressive aliens, zombies - these are characters of modern mythology pushed by today's priests (social engineers). Just like ancient priests spoke about centaurs, gnomes, giants - whatever symbolized ideas needed to engineer society in the way they needed.

I don't think so. In fact, the more I research into quantum computers the more I realize this is the missing key to make them come alive.

With traditional computing you have 1s and 0s. With quantum computing you have 1s, 0s, and maybe 1 or 0. This is woefully simplified and someone can come along and correct this.
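To sharpen that simplification a little: a qubit isn't "1, 0, or maybe 1 or 0" so much as a pair of amplitudes, and measuring it gives a definite 0 or 1 with the corresponding probabilities. A toy sketch:

```python
import random

# A qubit's state is two amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# Measurement collapses it: 0 with probability |a|^2, 1 with |b|^2.
a, b = 2 ** -0.5, 2 ** -0.5   # equal superposition, the "maybe 1 or 0" above
p_zero = abs(a) ** 2
outcome = 0 if random.random() < p_zero else 1
print(f"P(0) = {p_zero:.2f}, measured: {outcome}")
```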

That is essentially what makes us different from traditional computers. The second we can grasp that and employ it with the speed of a traditional computer is what will allow AI to dominate us. Probably throw in some machine learning too.

Again it's all simplified, but I think we're getting closer to it. Early AI would probably look a lot like traditional Asperger's; I doubt emotion would ever be a feasible addition.
#43

Ah - to your name, Beast.

I read once that the Eyes Wide Shut society has access to an advanced computer network that they can reach via voice control. Essentially it is so highly developed that you talk to it as if it were a Call Center Operator.

And the name for this system is THE BEAST.

Hehe - of course I don't know whether this is true, but who knows.
#44

Quote: (04-25-2017 01:09 PM)Zelcorpion Wrote:  

Ah - to your name, Beast.

I read once that the Eyes Wide Shut society has access to an advanced computer network that they can reach via voice control. Essentially it is so highly developed that you talk to it as if it were a Call Center Operator.

And the name for this system is THE BEAST.

Hehe - of course I don't know whether this is true, but who knows.

AKA Satan
#45

I agree wholeheartedly with weambulance, and I also have a Computer Science background.

I remember having this discussion when I was 16, 16 years ago; my non-nerd friends were all "Yeah, computers will become smart" and I was adamant it would never happen. 16 years later I had the same conversation with some guy who was so worried about it that he was actually changing his whole career so he could get a job at Google and do something about the coming inevitable doom.

Ideas like AI and Time Travel start off as interesting thought experiments but are then used by the media to make fiction that people will watch. Once people see these ideas in movies, and they are widespread and others are talking about them, people forget that this is simply fantasy and that we are nowhere closer to these things than we were 100 years ago.

Talking about Artificial Intelligence is pretty pointless, as Intelligence itself is not a thing but rather a measure of the adaptability of a given life-form. The scenarios discussed here require the existence of Artificial Life, not Artificial Intelligence. Intelligence does not reside just in the brain; it is inherent in Life itself, existing in every part of our bodies. A tree, for instance, shows intelligence when it grows in such a way as to maximise sunlight exposure.

Human Intelligence is just the same as animal intelligence + language. Computers are only language, hence they are not intelligent, because they have no Life. Life is biological, and even a single-celled organism is more "Intelligent" than our best computers, because it can survive by itself and propagate and evolve.

The discussion here assumes we can separate Intelligence from Life, which comes from a misunderstanding of what Intelligence really is.

It should also be noted that biological beings work in very different ways to computers. Computers are language-based and I think that greatly limits their power.
#46

< Correct.

An advanced computer program will one day be able to respond to your calls, thus replacing a Call Center worker. You will be able to have a facsimile of a conversation with Siri.

However such conversations will end at one point and you will realize that it is just a program and no real human being.

Life is not a sum total of programmed responses. Computers may beat every human being in chess, but that is because chess is a limited game. Life itself is unlimited and a dog will react better to your being than even the most advanced chess computer.
#47

Deep Learning is the current hot field in AI. It is achieving breakthroughs in computer vision recognition and also natural speech recognition. This is already what Facebook uses to filter out nude pictures. I guarantee those guys are working furiously right this second to update their recognition algorithms to recognize live video of crimes being committed. In the long run (5-10 years) machines will be able to see the world and understand what they are looking at. They'll recognize objects and understand what their purpose is. They'll see moving objects and create probability models of what the moving objects will do next. Imagine a self-driving car seeing a bicyclist and estimating that the bicycle will stop and not pull out in front of the car, based on establishing eye contact with the rider. This will be a reality in a few years.

Soon, machines will be able to see and talk with ease. Combine this with agile bipedal robots, and you can start having mechanical entities that bear a passing similarity to real life and real intelligence.

As for machine consciousness, I don't believe it will ever exist, in the sense of a machine having an inner thought life and self identity the way you and I do. However, I do think there will be software for large systems, in which the system has responsibilities and needs, and it perceives the environment it inhabits, and develops goals, strategies, and behaviors to play its role successfully and get the things it needs. This is similar in result to consciousness, although I think it will still just be non-living software.

I have a theory that organizations can take on a life of their own. This is a metaphor of course; I don't think some NGO or corporation is a living being. However, successful organizations seem to take on a life of their own independent of the people running them. We all know how hard it is to found a successful business, but some businesses seem to reach a point where they take off, and then even if the founders are forced out and an HR department full of SJWs is used to supply the increasing need for employees, the company still flourishes. These businesses then start pursuing new goals and strategies, which are beyond any one person's ability to plan and execute.

I think that certain software-based systems will do something similar. I think this concept explains why Thomas the Rhymer sees the internet as somewhat sentient. If you have a software system responsible for keeping a city's road network up and running, and it can dispatch workers and order supplies to achieve its objectives, and it has a natural language interface to speak with people, and it can see and understand what happens on the roads, it might seem like a sentient being. I think there will be a lot of these systems starting as little as 5 years from now. When some of these systems are 30-40 years old, and have guided and managed things like a big city's road network for a generation, they will develop their own behavior patterns in a lifelike manner, just as I see organizations doing. In many cases, the dividing point between an organization and its central AI computer will be blurred.

However, I don't think these will ever be truly alive. I don't think they'll have souls that awaken. They won't appreciate music or art. Finally, I don't think they'll ever develop a general godlike understanding of everything. They'll always need an army of human programmers to tweak their architecture. I don't think they'll be smart enough or creative enough to improve their own programming.

I'm the tower of power, too sweet to be sour. I'm funky like a monkey. Sky's the limit and space is the place!
-Randy Savage
#48

It has been recognized from the beginning that the Von Neumann architecture of present day computers is different than how the brain performs computation, and that a thinking computer requires a different hardware architecture. A must read is "The Computer and the Brain" by John Von Neumann himself where he maps out the problems.

Here is the website for Stanford's "Brains in Silicon" Project. The research in the area of parallel processing is vast, so this is just a small sample of what is being done.

http://web.stanford.edu/group/brainsinsi...index.html

As for the claim that you need life to have intelligence, I would point out the analogous situation of electricity. When Luigi Galvani did his experiment where he made a frog's leg twitch, he was convinced that you needed living matter to create electricity. Alessandro Volta was convinced that biological material wasn't necessary, so he put a wet chemical substance between two conductors to prove electricity could be generated from non-biological material. This was the first battery.

https://en.wikipedia.org/wiki/Alessandro_Volta

Rico... Sauve....
#49

I think it's important to define exactly what we mean by an "AI". I strongly suspect that for most people that's "a machine that has consciousness."

And right there we have a problem, because human beings themselves do not have a concrete, agreed idea for what constitutes consciousness.

It's not solely self-awareness. Animals can do that: chimpanzees, gray parrots, and even dogs have demonstrated the capacity to be aware they exist, mainly through the mirror test. Nor is it the ability to reason: again, gray parrots can do that. And it isn't language either: chimps and gray parrots can be taught fairly extensive vocabularies. (Some philosophers like Thomas Nagel suggest it's impossible for us to adjudge whether an animal is conscious or not because we can't put ourselves in the animal's experience of the world.)

The Turing test doesn't get you out of it, either, at least according to some philosophers: simulation of human responses is not consciousness, not as we'd define it.

On top of that, this nebulous thing called consciousness rests (if you take the hard biological approach) in the brain. That is, in literally the most complex thing that humanity has ever encountered. Seriously, try a book like Consilience and you'll see exactly how complicated the brain is, how little we actually know about it. Similarly to how we don't understand that much of how genes actually work or how they are activated or not activated by their environment. Quite the contrary to what the MSM might tell you, there is no single gene, on/off switch for aggression, conservatism, etc. etc.

In short: AI is no real threat. Probably won't ever be, either. Not in the sense of us running into an AI named Bob who develops megalomaniacal tendencies.

The only reason idiots like Stephen Hawking and friends keep telling us it will be is the same pernicious corruption that leads NASA to ineptly tease "BIG ANNOUNCEMENTS!" and then reveal that maybe, sometime in the past, perhaps, we're not sure, there might've been a microbe on Titan. They do it for the publicity, because scientists are as human, corrupt, fallible, and agenda-driven as the priests they so often criticised and later shoved aside. After all: what's Stephen Hawking's qualification in robotics, biology, artificial intelligence? The guy is an astrophysicist. Plenty of people reject his histrionics about global warming because he's not a climate scientist - why would you accept his prognostications about AI? Oh, because he's a "scientist" and therefore a "rational thinker", i.e. he knows something you don't, i.e.e. he's a gatekeeper of higher knowledge, i.e.e.e. he's a priest, except his god is Oghma rather than Yahweh?

But I digress.

After all, Richard Dawkins isn't much of a theologian or philosopher, but he does make one argument which should alleviate any fear we have about AIs becoming superior to mankind: there can be no God because, necessarily, the created is always less complex than its creator. By that reasoning, any AI we could produce could never be more complex than a human being. It could never become superior because it could never become more complex.

Remissas, discite, vivet.
God save us from people who mean well. -storm
#50

The human brain is the most complex object in the universe. It is orders of magnitude more complex than a black hole.

But people still believe that we can build a machine that does everything a human brain does. Why? For the same reason man has always believed it is possible to fly. Because he could observe birds flying, and it made him believe that he could also find a way to fly. Even Leonardo da Vinci drew up plans for a flying machine. They knew we could do it, but they didn't know how. So they started with machines with flapping wings and crashed until they figured out that the materials they had couldn't flap. So they invented the fixed-wing design. Eventually two bicycle mechanics figured out a machine that could briefly leave the ground.

We know it is possible to make an intelligent machine because we are an intelligent machine. Notice this is different than building a machine that goes back in time. There is no example of such an existing machine and none of the laws of physics suggest it is possible. However, a machine as intelligent as a human is possible for the simple reason that we are a living example of one.

Rico... Sauve....