The artificial intelligence (AI) thread
#1

If we accept that the human mind is nothing special when it comes to intelligence, it's reasonable to believe that with constant improvement in computer science we will eventually be able to create a machine with human-level intelligence. The runaway growth that could follow is often referred to as the "singularity".

Scientists then believe it is possible to give the machine the task of reprogramming itself to become smarter. As it grows smarter, it can reprogram itself again to become smarter still, and so on...

Let this process run, and human intelligence could quickly look ant-like in comparison to this superintelligence. When humans are dealing with superintelligence of this magnitude, it becomes very important that the machine's interests correspond with our own. If they do not, defeating humankind would be a small task for such an intelligence.

On the other hand, this super AI could also solve the problems humanity faces: energy, cancer, material needs. It's easy to see why developing super AI is attractive. And even if you believe the negative consequences outweigh the positive: if we do not develop such a machine, then whoever does - good or bad - receives all the power, be it a terrorist organization or a mad psychopath who wants his name in the history books. As in chess, the first mover has an advantage.

Famous figures like Stephen Hawking and Elon Musk are very skeptical of the development of super AI, and given the potential consequences it's easy to understand why.

I think it's a very interesting topic that deserves more attention. After all, we are dealing with the fate of humanity.

Interesting articles:

http://waitbutwhy.com/2015/01/artificial...ion-1.html

#2

How do you treat ants and other insects?

[Image: images?q=tbn:ANd9GcRYcCxz25PVLkuD6aUJXd9...WNAwlCaSz4]

#3

If we were to "solve" all of our "problems", our existence would be quite meaningless.

Case in point: current standards of living in many western/ized countries, though far from having "solved all problems", have managed to cover many basic necessities even for the most downtrodden sectors of the population. The populace of these countries is often the victim of creeping existential doubt as well...

We move between light and shadow, mutually influencing and being influenced through shades of gray...

#4

I believe Humanity should first look into understanding our own capacity for intelligence.
Then we can move on to the artificial.

One of my very good friends is a Professor of Neuroscience and has been pushing the envelope in his field (teaching, researching, testing) for over 17 years.
The work he is doing right now is proving that Human IQ is actually very malleable and can even slightly vary on a daily basis.

In a nutshell: if we train our brains as hard as we train our bodies,
by consistently forcing ourselves out of our mental comfort zones -
it is possible to increase our ability to learn, reason and understand.

His studies are also showing that this is not as dependent on age as most would assume.

Look out in the near future to read more about his work:
increasing his IQ naturally to 155, and artificially up to 170 (using chemicals similar to LSD).

#5

The thought troubles me as well. We are already very nurtured in modern society. The invention of such a machine would take this nurturing to an extreme. But at the same time, it would have a perfect understanding of our brains, and whatever world the brain thrives in, it would create. From an outside perspective such a thought experiment is uncomfortable. But from a logical perspective it's hard to argue that a super AI would not be able to create a world for us to thrive in.

Others argue we most likely live in a simulation already. Maybe this is the world we already asked the machine to create for us.

#6

Artificial intelligence is overdone. The globalists sometimes err - they interbred among 1-3 high-IQ families to produce the super-race in the 19th century and this backfired tremendously.

They were wrong about overpopulation and other things.

They will be wrong about AIs taking over or humans being downloaded onto a hard-drive.

Yes - you will be able to talk to your computer and they will be able to significantly help you in terms of combat, reaction time etc. But those advanced programs will never ever become sentient.

#7

Why would I accept that biological intelligence is nothing special? We outright don't understand how the brain works, and we're nowhere near replicating its complexity with computer hardware. We're so far away we couldn't see the destination with the Hubble telescope.

If there is some kind of true general AI in the future I don't expect it to be anything like human intelligence. Could it be dangerous? Sure, if the people involved in building it are so irresponsible the world would've been better off if their entire genetic line never existed. Hopefully that will not be the case.

#8

Then what is it about biological intelligence that can't be reproduced? Is your argument based in religion - that there is a "soul" within the brain that gives it its power?

#9

How are you supposed to reproduce something you don't understand? We don't even understand how individual neurons really work, let alone the whole structure. Every time scientists think they understand the brain, they find out it's more complicated than they thought. We can't reproduce the human brain (or even an insect brain) in hardware, and we sure can't reproduce the mind in software.

The human brain works nothing like computers as we know them in any case. The brain is a massively parallel, multitasking, chemical computer that's incredibly power-efficient. Hardware and software are inextricably linked. General purpose computers are very power hungry serial devices, in which the software is only loosely coupled to the hardware (there is some light coupling because different types of processors use different instruction sets). All parallel devices I know of are application specific, not general purpose.

What is consciousness, anyway? If you grew a human brain in a lab, would you be able to "turn it on"? I don't think anyone can answer that question with certainty.

There are many other problems with the idea of just creating an artificial intelligence, especially if it's meant to do something specific. I think it's much more likely we'll just get really good at using computers to solve problems through rapid iteration and modeling than develop some amazing self-improving artificial intelligence.

#10

Ted Kaczynski enumerates the possibilities in "Industrial Society and Its Future". In my opinion his list is exhaustive.

It is still too soon to see which one will occur with any certainty.

However, Google has had some very interesting developments with its TPU recently:
https://news.ycombinator.com/item?id=14043059

And some of the engineers responsible for it have left to form a brand new company:
http://www.cnbc.com/2017/04/20/ex-google...itiya.html

We may see a third type of mainstream chip alongside the CPU and GPU: the TPU.

But the naming of deep neural networks as A.I. is really unfortunate, and for many top people the label is largely historical.


#11

It's going to happen, but it's going to take a few hundred more years.

Computing power has hit a wall based on physics, and connecting lots of little computers together doesn't make them smart.

It's going to take a few breakthroughs in software and hardware.

#12

Quote: (04-22-2017 05:50 PM)weambulance Wrote:  

How are you supposed to reproduce something you don't understand? We don't even understand how individual neurons really work, let alone the whole structure. Every time scientists think they understand the brain, they find out it's more complicated than they thought. We can't reproduce the human brain (or even an insect brain) in hardware, and we sure can't reproduce the mind in software.

I'll get into this a lot more with a future post when I have a bit more time but basically the theory is that you don't need to understand how the brain works.

One aspect is redundancy. Like most organ systems, the brain has a lot of redundancy, so once we understand the basic functional unit, it could theoretically be multiplied.

Another problem with that line of reasoning is that artificial intelligence doesn't need to mimic the structure or underlying functional pathway of the human brain to be intelligent per se.

Further still, there is the idea of self-learning AI. As we create software designed to a) research artificial intelligence and neuroscience, b) modify and replicate its own software, and c) modify and replicate its own hardware, progress in this field can theoretically happen exponentially.

Final point for this post, we often make the mistake of anthropomorphizing the idea of AI. A super intelligence wouldn't necessarily have to have flexible goals, generalized interests, emotions, etc. to be much more intelligent than us. One of the frightening possibilities of superintelligence is shown in a thought experiment just like that: the Paperclip Maximizer.

The idea of the thought experiment is that you could create a general intelligence focused solely on producing the maximum number of paperclips possible. As a self-learning AI it will continually try to improve the efficiency and quality of its paperclip production. Once the AI reaches human level, it can theoretically improve itself rapidly to many orders of magnitude beyond us. Then it will be superintelligent, despite not having flexible goals or the nuanced, contextual thinking we're used to. Such an AI could easily and quickly eliminate all life on Earth, either intentionally or through sheer indifference to us in pursuit of its goal of creating more paperclips, and it would theoretically happen much too fast for us to stop once we reach the moment of singularity where AI surpasses the human capacity for learning and synthesis.

The article posted in the OP explains a lot of the basics and is worth a read. There is reason to believe this will be the existential threat of our lifetimes, and if not ours then certainly our children's.

#13

Quote: (04-22-2017 08:14 PM)RatInTheWoods Wrote:  

It's going to happen, but it's going to take a few hundred more years.

Computing power has hit a wall based on physics, and connecting lots of little computers together doesn't make them smart.

It's going to take a few breakthroughs in software and hardware.

I think it'll happen in the next 50 years unless we agree as a species not to go down this rabbit hole. Which we should.

#14

When AI becomes successful, it becomes its own field and is no longer AI. For example, computers can now play championship chess. But the intelligence is confined to a very narrow area of thinking - coming up with strategies for a future chess move. The program can't be used for more generalized thinking. Also, natural language translation and speech programs started out as AI, but are now their own specialized field.

The goal of AI initially was to come up with a general algorithm for thinking. The Nobel Prize winner in economics, Herbert A. Simon, created several AI models for general thinking in the 1960s.

Another attempt at generalized thinking was expert systems. The idea was to take the knowledge of an expert in a specialized field and encode it into a computer as a set of if-then rules. Several systems were developed in medicine to diagnose diseases. The problem with this model is that the expert system can't learn and becomes outdated.
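
The if-then rule idea is simple enough to sketch in a few lines. This is a toy illustration, not a real system, and the rules here are invented for the example:

```python
# Minimal sketch of an expert system: knowledge captured as if-then rules.
# The rules below are invented for illustration, not real medical knowledge.

RULES = [
    ({"fever", "cough", "fatigue"}, "flu"),
    ({"fever", "stiff_neck"}, "possible meningitis"),
    ({"sneezing", "itchy_eyes"}, "allergies"),
]

def diagnose(symptoms):
    """Return every conclusion whose rule conditions are all present."""
    matches = []
    for conditions, conclusion in RULES:
        if conditions <= symptoms:  # all conditions satisfied
            matches.append(conclusion)
    return matches

print(diagnose({"fever", "cough", "fatigue", "headache"}))  # ['flu']
```

The weakness described above is visible right in the sketch: the rule list is fixed, so the system's knowledge goes stale unless a human rewrites it.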

The next attempt was neural networks, an abstract model based on how neurons work. Neural networks have a narrow ability to learn from their environment. The recent renewed interest in AI comes from the ability of computers to process massive amounts of data.
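
That "abstract model based on how neurons work" can be shown in its simplest possible form: a single artificial neuron (a perceptron) learning the logical AND function by nudging its weights toward each labeled example. A minimal sketch:

```python
# A single artificial neuron (perceptron) learning the logical AND function.
# Weights start at zero and get nudged toward each labeled example.

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # -1, 0, or +1
            w[0] += lr * err * x1       # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for (x1, x2), target in AND:
    assert (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == target
```

This is about as narrow as learning gets: a lone perceptron can only learn linearly separable functions (famously, it can never learn XOR), which is part of why modern systems stack many such units into deep networks.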

There is also a branch of AI that believes there is no general algorithm of thinking. Evidence for this comes from observing the brain, which has specific areas of the brain which perform specialized functions. For example, speech processing, language recognition, and vision are all performed by distinctly different brain areas.

The bottom line is that computers can be very successful in solving specific problems in narrow domains, but there has been no breakthrough in AI to justify the belief that computers will become more intelligent than humans. We still don't have any idea how general thinking is done in the brain. However, brain research is progressing at warp speed, and will eventually provide breakthrough discoveries.

Rico... Sauve....

#15

In that last post I was mostly talking about replicating an AI that would behave like a human. That's why I said in the first post I made that if we do achieve a true general AI, I don't expect it to be human-like at all.

However, there are still major problems with this idea of a sort of exponential intelligence explosion.


First: where's the extra hardware going to come from?

The idea of an AI "modifying" its hardware is ridiculous; it takes incredibly specialized equipment to build modern computer components. Modern PCBs often have 6-10+ laminated layers of circuit traces. CPUs are made in multi-billion-dollar ultra-cleanroom foundries and even then half the time they come out imperfectly (that's where a lot of lower end CPUs come from). There are only a handful of foundries working with cutting edge lithography, as well. Most of the foundries in the world are making large lithography microcontrollers and the like, not <= 14 nm chips.

You don't just modify computer hardware in any meaningful way beyond plugging compatible components together. Modern computers are seriously complex.

The "build more hardware" thing is a total handwave. How's it going to pull that off? It's an immobile box. Does it have money to pay humans to do it? Probably not. If so, how are those humans getting access to the brain-in-a-box to add hardware? Is there an unsecured network line piped straight into the AI box, implying amazing incompetence on the part of the AI team? Any AI like this would be heavily secured, because we're talking about a multi-billion-dollar project at minimum.

Would people just not notice the AI upgrading its hardware, if it did somehow pull that off? Of course they would. It would take a long time to accomplish such a feat, and it would be stopped very early on.

Otherwise, is the premise that the AI will just continuously refine itself to maximize the hardware it has? How? Did we give it a huge pile of extra hardware to work with, and if so, why? Or is this happening in-place?

Suppose it can happen in-place. That means there's by definition a hard cap on how efficient--and therefore intelligent--the AI can become, and it probably will not be much better than what skilled programmers can already do, unless this AI is deriving world-shattering algorithms on the fly. It's not like the AI can rewrite the CPU instructions; they're hardwired. Its best bet would be writing everything as optimally as possible in machine code, and some things will never get faster without new hardware because the compiler has already optimized them as far as the architecture allows.

And if it is somehow coming up with world-shattering advances in algorithms on the fly, it's not going to get a lot of chances to get it right when it applies the changes. It's like performing brain surgery on yourself. Maybe you'll get it right, but 99.99% of the time you're going to have a suboptimal outcome. I'd consider it much more likely that the AI would make itself irreversibly dumber than smarter.


Second: the article specifically relies on the continuation of Moore's law. As I already suggested in another post, that's not how things are going. Moore's law is hitting a wall as we speak, already has, really. CPUs aren't getting much faster anymore. Even the chip makers say they're not trying to aim for the traditional speed increases anymore, since they're reaching the physical limits of IC miniaturization. Instead they're adding cores, increasing CPU cache capacity, marginally increasing instructions per cycle, and software is being written to enable more and better threading.

We're already at the point today where physical distance is a limitation in computing. What I mean is, communication over the copper traces between the CPU and memory is so slow compared to the CPU's clock speed that cache misses--where the CPU has to stall and fetch from RAM because the information it needs isn't in its cache, often because you wrote a shitty program--are a nightmare in any program where you need performance (like games). There's a maximum hardware packing density that's sustainable as well, if you want to be able to cool the equipment. The sheer distance between nodes in a large AI "brain" would be a nontrivial problem to overcome.
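
The cache-miss point has a classic demonstration: summing a matrix along its memory layout versus against it. A rough sketch in Python (where interpreter overhead mutes the cache effect; in a compiled language the gap is far larger):

```python
# Classic iteration-order demo: summing a matrix along its memory layout
# (row by row) versus against it (column by column). In compiled languages
# the column-order version is much slower due to CPU cache misses; in pure
# Python the interpreter overhead blurs the cache effect, but the indexing
# pattern still makes it measurably slower.
import time

N = 1000
matrix = [[1] * N for _ in range(N)]

def sum_rows(m):
    # walk each row in order, the way the data is laid out
    return sum(v for row in m for v in row)

def sum_cols(m):
    # jump to a different row on every single access
    return sum(m[i][j] for j in range(len(m[0])) for i in range(len(m)))

t0 = time.perf_counter()
a = sum_rows(matrix)
t1 = time.perf_counter()
b = sum_cols(matrix)
t2 = time.perf_counter()
assert a == b == N * N  # same arithmetic, same answer
print(f"row order: {t1 - t0:.4f}s  column order: {t2 - t1:.4f}s")
```

Timings vary by machine; the point is that the same amount of arithmetic can get very different performance depending purely on memory access order.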


Third: the premise also relies on the scientists and engineers involved in the project being genre unsavvy, paradoxically stupid--smart enough to build a true AI, stupid enough to make really obvious mistakes--and incompetent. It's like a zombie movie where none of the characters ever saw a zombie movie before, and they just run around panicked and confused, getting eaten left and right. In real life, if zombies broke out we'd put them down in an afternoon because everyone and his mother knows what to do in a zombie apocalypse: fort up and shoot them in the head.

Well, the engineers and scientists involved in any such project in real life won't be genre unsavvy; they'll be familiar with the idea of exponential artificial intelligence growth--it's an idea as old as computers--and take measures to prevent it. Even basic precautions any project of this scale should have would prevent anything from going seriously wrong. Tightly controlled physical access to the machine? Yep. Secured or isolated network? Yep. Okay, no chance of a Skynet scenario then.


Fourth: suppose the AI does manage all this and become super smart. So? We can turn it off at any time, even if that meant cutting the power lines. It would be an incredibly fragile "life form", one BSOD away from permanent brain damage or death (until a backup was restored). It's not like it can just "escape" into the wild, either, because we're probably talking about a couple terabytes of state at minimum, and where would it even go? To one of the other multi-billion-dollar custom-built AI cores with open ports facing the internet, just waiting around to be invaded? And I guess nobody would notice while the unsecured, unreasonably large capacity upload line in the facility was saturated for several hours as the AI copied itself over.

An obvious, easy safeguard there is to have the massive pile of chips required for the machine custom made with specific hardware security features onboard. Intel already does something like this; I'm blanking on the name but it's like a private key for the chip that means software designed to use the feature can only run on that specific chip once it's registered to the chip owner. It's hardware level DRM, basically. So you build a secret into your chips, and et voilà, the AI can only run on chips the project commissioned. It would be trivial to make sure the AI never knew what the secret was, or that there even was a secret, and it could therefore never replicate the hardware or move to another machine.
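
The chip-secret idea can be pictured as a challenge-response check: software only runs if the hardware can prove knowledge of a secret baked in at manufacture. A toy model (all names here are hypothetical, and a real scheme would live in silicon below the software's level of access, not in code the AI could read):

```python
# Toy model of hardware-bound software: the "chip" holds a secret burned in
# at manufacture, and the software refuses to run unless the chip can prove
# knowledge of it via an HMAC challenge-response. Names are illustrative.
import hashlib
import hmac
import os

class Chip:
    def __init__(self, secret: bytes):
        self._secret = secret  # burned in at manufacture

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

def hardware_authorized(chip: Chip, provisioned_secret: bytes) -> bool:
    """True only if the chip's burned-in secret matches the one this
    software build was provisioned for."""
    challenge = os.urandom(16)
    expected = hmac.new(provisioned_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(chip.respond(challenge), expected)

genuine = Chip(b"factory-secret")
clone = Chip(b"wrong-secret")
assert hardware_authorized(genuine, b"factory-secret")
assert not hardware_authorized(clone, b"factory-secret")
```

An AI running on such chips could copy its software anywhere, but the copy would refuse to run on hardware that fails the check; in a real design the verification would happen beneath the software's reach, which is what would keep the secret unknown to the AI.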

I can think of many other ways to control the AI, limit its influence, and limit its ability to increase its intelligence or migrate elsewhere, all of which could be built right into the system.


Conclusion: The only way this is a risk at all is if there's some confluence of magical technology and truly stupendous human stupidity, recklessness, and incompetence, and even then it would be trivial to resolve before it got out of hand. Just turn off the power, however you need to do that. Problem solved.

Even with amazing leaps in computer technology that wipe out many of the physical limitations I outlined above, there will still be plenty of ways to limit the autonomy of the AI, and it will always be limited to living in a purpose-built computer as well. And it will always die when the power goes off.

#16

The reasonable way would definitely be to take this process very slowly:
in a digitally sterile environment, with no internet connection, and not part of a power grid it could use for communication. Then just gradually increase its intelligence over decades.

However, there are massive economic interests in developing super AI, and spending extra time and money on safety measures when developing a super AI is not really what the free market is all about.

#17

https://vigilantcitizen.com/moviesandtv/...hilosophy/

A lot of the stuff that is concerned with transhumanism and AIs is just based on globalist occult understanding of reality.

The site is run by a strong Christian and not exactly my bent, but he is right about the symbolism.

[Image: lucy5.jpg]
Quote:

As soon as Lucy breaks out from her cell, she kills everybody in the vicinity. Is remorselessly killing people a sign of advanced intelligence?

This is about as clear as the globalists see it - divine level of intelligence automatically makes you into a cold killing machine. And that is what they expect the AI to become - to slaughter human beings like ants who are ruining your perfect garden.

This topic comes up again and again in the new TV series Westworld. Never mind whether running a programming loop countless times would really produce sentient life - it wouldn't in my mind, but if you espouse the atheist view of life being just matter and life itself having no meaning, then I guess that this would be logical to you.

Then why not go on a killing spree, kill babies and eat them in front of their mothers? It all does not matter since there is no meaning in life whatsoever.

Ah - but there is something holding back most human beings aside from the rush of emotions.

And this something knows exactly that this would be wrong. A program will never have this and the globalists will never be able to download their brains into an artificial body.

Dead end like the one-family-breeding program that the Rothschilds had implemented in the 19th century.

#18

Quote: (04-23-2017 03:47 AM)pants Wrote:  

The reasonable way would definitely be to take this process very slowly:
in a digitally sterile environment, with no internet connection, and not part of a power grid it could use for communication.

The technical term for this is an air gap.

Sam Harris has been raising the issue a lot recently, in a TED talk and on his podcast (the latter specifically referring to Westworld). He is convinced that our reckless race to developing AI could be catastrophic for our species.
Off the top of my head, he mentions two ways it could be a threat:

1) Terminator scenario.

2) Once one country attains superintelligent computing, the most rational response for other countries would be to instantly nuke the more advanced one, since having a tool that could do, say, 10,000 years of human thinking per second would instantly provide an insurmountable advantage and inevitably lead to world domination.

#19

Quote: (04-23-2017 04:31 AM)Zelcorpion Wrote:  

https://vigilantcitizen.com/moviesandtv/...hilosophy/

A lot of the stuff that is concerned with transhumanism and AIs is just based on globalist occult understanding of reality.

(Snip)

This is about as clear as the globalists see it - divine level of intelligence automatically makes you into a cold killing machine. And that is what they expect the AI to become - to slaughter human beings like ants who are ruining your perfect garden.

(Snip)

Then why not go on a killing spree, kill babies and eat them in front of their mothers? It all does not matter since there is no meaning in life whatsoever.

Ah - but there is something holding back most human beings aside from the rush of emotions.

And this something knows exactly that this would be wrong. A program will never have this and the globalists will never be able to download their brains into an artificial body.

Dead end like the one-family-breeding program that the Rothschilds had implemented in the 19th century.

Globalist is an interesting term related to immigration and trade. I'm not sure why it comes up in a discussion about super AI. Do you mean unlimited intelligence makes you into a "killing machine"? Or is this the globalists' viewpoint? Do they want it, in your opinion, or are they scared of it?

"The meaning of life" is a very loosely defined term. When I say that life has no meaning, I am referring to life from an external point of view. Of course life has meaning to me.

The Rothschild inbreeding program of 100 years ago and the possibility of super AI in the future are unrelated.

#20

Quote: (04-22-2017 12:56 PM)Zelcorpion Wrote:  

Artificial intelligence is overdone. The globalists sometimes err - they interbred among 1-3 high-IQ families to produce the super-race in the 19th century and this backfired tremendously.

They were wrong about overpopulation and other things.

They will be wrong about AIs taking over or humans being downloaded onto a hard-drive.

Yes - you will be able to talk to your computer and they will be able to significantly help you in terms of combat, reaction time etc. But those advanced programs will never ever become sentient.

I agree with you, and it seems self evident that what are basically a shitload of on/off switches will never become sentient.

So why do you hear so many technologists and otherwise intelligent people talk about these things as if this super sci-fi reality will be our future?

Are they all compromised, or are even the nerds hysterical about this issue?

“The greatest burden a child must bear is the unlived life of its parents.”

Carl Jung

#21

A partial solution to the hardware problem has already been built: the Connection Machine. The original idea was to build a computer consisting of a million small processors, each able to connect with any other processor. A processor can store weights, and the computation is primarily done by the connections, hence the name "Connection Machine". This allows for massively parallel computation similar to neurons in the brain, where each neuron is a simple processor and the computation is performed by the connections between neurons. It was invented by Danny Hillis when he was at MIT. The Nobel Prize winner Richard Feynman actually did some work on this.

https://en.wikipedia.org/wiki/Connection_Machine


#22

Quote: (04-23-2017 12:23 PM)debeguiled Wrote:  

Quote: (04-22-2017 12:56 PM)Zelcorpion Wrote:  

...

I agree with you, and it seems self evident that what are basically a shitload of on/off switches will never become sentient.

So why do you hear so many technologists and otherwise intelligent people talk about these things as if this super sci-fi reality will be our future?

Are they all compromised, or are even the nerds hysterical about this issue?

Tech people are notorious for believing in their own genius when discussing fields they don't really understand. Even though the vast majority of the time their "genius" is just having memorized a pile of over-complicated shit at school and doing things the same way everyone else does them, dogmatically. And maybe lucking into a startup fortune because their simple little app any developer with a clue could've written hit the market at precisely the right time and they had precisely the right contacts to be successful.

It's insane how echo-chambery the tech world is, as well.

I wouldn't say it's self-evident that a computer could never become sentient, though, because that implies an understanding of consciousness we don't really have. I just don't think it's something that could happen spontaneously by mistake with normal networked computers, or something to really worry about if someone did make a strong AI.

#23

Interesting podcast. The first 10 minutes discuss a lot of the pros and cons, and the fears of AI.

[embedded video]

Also very interesting, but a lot more pessimistic.

[embedded video]

#24

Quote: (04-23-2017 08:48 AM)pants Wrote:  

Globalist is an interesting term related to immigration and trade. I'm not sure why it comes up in a discussion about super AI. Do you mean unlimited intelligence makes you into a "killing machine"? Or is this the globalists' viewpoint? Do they want it, in your opinion, or are they scared of it?

"The meaning of life" is a very loosely defined term. When I say that life has no meaning, I am referring to life from an external point of view. Of course life has meaning to me.

The Rothschild inbreeding program of 100 years ago and the possibility of super AI in the future are unrelated.

It has relevance, since transhumanism and the rise of all-powerful AI is one of those ideas coming from the top.

That is what I mean by it being another of those dead-end solutions that will never come about.

Just as global warming will be proven wrong in a few decades, when people will at best marvel at the con, not the science.

#25

As above - to make this short - those who laud AI and its ability to replace humans, especially highly skilled humans, are driven by nothing more than ego, self-interested marketing, or both. It is FAR more complex in real life, and one only has to look at 2001 or Blade Runner to see how far these pie-in-the-sky types get ahead of themselves...