

The artificial intelligence (AI) thread
#51

I'm very interested to learn how thick the clouds comprising one's aerial castle must be before they can be taken to be inevitable on your analysis.

FYI, time travel is entirely possible. There was this guy, Einstein. General Relativity, I think he called it. I look forward to my hotted-up DeLorean with fevered anticipation.

Or to put it another way: have you met anyone keenly pursuing the transmutation of lead into gold recently? That was also a long-held belief that it could be done, with or without Harry Potter and the Philosopher's Stone.

Remissas, discite, vivet.
God save us from people who mean well. -storm
#52

Anyone familiar with the TV series Person of Interest?

Its plot centers on a super AI called "The Machine" that the main characters use to prevent crimes in NYC.
In later seasons an evil AI called "Samaritan" is introduced, along with its "eyes wide shut" group that essentially puts control of everything into Samaritan's hands, of course so it can control the world.

Here are some interesting videos (potential spoilers!):
#53

Quote: (04-25-2017 01:06 PM)The Beast1 Wrote:  

Quote: (04-25-2017 12:32 PM)Mage Wrote:  

I suspect that this "AI is inevitable and necessary - we either acquire it or fall behind and computers will infinitely double in computing ability until they surpass man and become God" meme is being pushed by Hollywood globalists to increase the amount of electronics and electronic surveillance. True AI is probably not possible.

Similarly, the idea of "vicious aliens attacking Earth" is pushed by Hollywood globalists to make us accept the idea that we need an arms race.

The mundane truth is probably that elites are using this modern mythology to push beta drones into building a technocratic surveillance society.

Rebellious AI, aggressive aliens, zombies - these are characters of modern mythology pushed by today's priests (social engineers). Just like ancient priests spoke about centaurs, gnomes, giants - whatever symbolized ideas needed to engineer society in the way they needed.

I don't think so. In fact, the more I research quantum computers, the more I realize they are the missing key to making AI come alive.

With traditional computing you have 1s and 0s. With quantum computing you have 1s, 0s, and maybe 1 or 0. This is woefully simplified and someone can come along and correct this.

That is essentially what makes us different from traditional computers. The second we can grasp that and employ it with the speed of a traditional computer is what will allow AI to dominate us. Probably throw in some machine learning too.

Again it's all simplified, but I think we're getting closer to it. Early AI would probably look a lot like traditional Asperger's; I doubt emotion would ever be a feasible addition.
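The "1s, 0s, and maybe 1 or 0" line can be made a little more concrete: a single qubit is just a pair of amplitudes, and the "maybe" only resolves to a plain 0 or 1 the moment you measure it. A rough classical sketch in Python (a toy simulation of one qubit, nothing like a real quantum computer):

```python
import math
import random

random.seed(0)  # deterministic for the example

def measure(amplitudes, trials=100_000):
    """Sample repeated measurements of one qubit.

    amplitudes = (a0, a1) with |a0|^2 + |a1|^2 == 1; outcome 0 occurs
    with probability |a0|^2 and outcome 1 with probability |a1|^2.
    """
    p0 = abs(amplitudes[0]) ** 2
    counts = {0: 0, 1: 0}
    for _ in range(trials):
        counts[0 if random.random() < p0 else 1] += 1
    return counts

# equal superposition: the "maybe 1 or 0" state
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))
counts = measure(plus)  # roughly half 0s, half 1s
```

The interesting part of real quantum computing is not a third "state" but interference between amplitudes across many entangled qubits, which this toy obviously doesn't capture.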

I'm not convinced true quantum computers will scale. Maybe they will, I dunno, but so far they're barely at the proof of concept stage. The oft-hyped D-wave machines are not quantum computers in the sense people generally mean. And after reading about some of these other machines, like the Stanford computer that has so far managed the monumental feat of multiplying 3 x 5, I'm not holding my breath here.

History is rife with examples of promising tech advances that never scaled, after all. Way back in the early 00s companies were making diamond semiconductors that ran at 80+ GHz and there were all sorts of stories about how CPUs would be made of diamond and yada yada yada. 15 years later... nothing useful came of it. I was reminded of that because in the last 6 months or so there's been another spate of "diamond CPU" stories, as if it's a brand new thing.

We don't need quantum computers to use more than two information states; working ternary computers were built way back in the 50s. It's just not worth the added complexity, in both hardware and software, to bother with. It's not obvious to me that ternary is actually faster than binary all else being equal, but even granting a substantially more complex and fragile yet substantially faster serial processor, it would only be justifiable for a relatively small set of problems. Most problems are readily parallelizable, so unless you are building application-specific machines to do something like, I dunno, compress and decompress data and nothing else, the downsides outweigh the benefits.
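For reference, those 1950s machines (the Soviet Setun line) used balanced ternary, with digits -1, 0, and +1. The encoding itself is simple; this is just the standard construction sketched in Python:

```python
def to_balanced_ternary(n):
    """Encode an integer as balanced-ternary digits (-1, 0, +1), least significant first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:          # a digit of 2 becomes -1 plus a carry into the next trit
            r = -1
            n += 1
        digits.append(r)
        n //= 3
    return digits

def from_balanced_ternary(digits):
    return sum(d * 3 ** i for i, d in enumerate(digits))

# one nicety of balanced ternary: negating a number just flips every digit's
# sign, so there is no separate sign bit or two's-complement convention
assert to_balanced_ternary(-5) == [-d for d in to_balanced_ternary(5)]
```

Whether that elegance ever pays for the extra hardware complexity is exactly the point above.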

I suspect that's what we'll find if true quantum computing does get here. It will be used only in specific applications because the much simpler, cheaper, easier to use binary silicon chips will work for almost everything people need. Even today most production microcontrollers and ICs are flat out primitive. The 555 timer was developed in the 70s and it's still used in loads of new products because they just don't need anything more powerful or capable.

There's plenty of improvement left to do with binary silicon chips. We have a couple more lithographic nodes to hit and then we'll continue to parallelize the CPUs more and more. I suspect before any of these amazing new computer types pan out, we'll hit a computing power demand plateau on the consumer market. There's only so much computing power that's actually useful, after all. I'm already at the point where my smartphone (Galaxy S7) is more than powerful enough, and I'd much rather have a lot more battery life than faster hardware.

I think we'd be hitting a plateau right now if software developers weren't such lazy cunts. The only reason besides gaming most people feel the need to buy faster computers today is because software is getting slower, more bloated, and generally shittier all the time.

Or maybe I'll be completely wrong about all this stuff and you guys can laugh at me in 10 years. I won't mind. Just so long as a superintelligent AI didn't kill us all by then.
#54

Interesting essay worth reading related to this subject:

The Myth of a Superhuman AI

Too long and oddly formatted to try to quote it here. What he talks about:

Quote:

Yet buried in this scenario of a takeover of superhuman artificial intelligence are five assumptions which, when examined closely, are not based on any evidence. These claims might be true in the future, but there is no evidence to date to support them. The assumptions behind a superhuman intelligence arising soon are:

1. Artificial intelligence is already getting smarter than us, at an exponential rate.
2. We’ll make AIs into a general purpose intelligence, like our own.
3. We can make human intelligence in silicon.
4. Intelligence can be expanded without limit.
5. Once we have exploding superintelligence it can solve most of our problems.

In contradistinction to this orthodoxy, I find the following five heresies to have more evidence to support them.

1. Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
2. Humans do not have general purpose minds, and neither will AIs.
3. Emulation of human thinking in other media will be constrained by cost.
4. Dimensions of intelligence are not infinite.
5. Intelligences are only one factor in progress.
#55

Very realistic-sounding:

https://www.mercurynews.com/2018/05/08/g...ke-humans/

If only you knew how bad things really are.
#56

I don't even understand the idea that globalists would push AI tech. A super AI is just as likely to figure out a way around its programming as it is to dominate the plebs unquestioningly. If it's that smart, it wouldn't stay subservient to some up-jumped apes with a lot of money.

Either way, I will not be holding my breath when it comes to super AI. China is always popping off about how they are "leading" the AI research department but we never see anything of it. They're just releasing glorified chatbots.

I will be checking my PMs weekly, so you can catch me there. I will not be posting.
#57

I work in IT.

AI will never happen.

Deus vult!
#58

I think it's possible that instead of AI becoming advanced enough to match human intelligence, the tech engineers make AI that excels at certain functions and cuts out about 95% of the work in fields like translation or accounting. So instead of spending a few hours working on a translation assignment, the human worker would receive 20 AI-translated documents (and their originals) to edit within the same timeframe.

Additionally, human thinking itself might downgrade to adhere to the limitations of AI, which is something that I think is already happening. When I use Google Translate, I know the machine is bound to get caught on certain quirks of the languages I'm working with, so I take care to avoid those. Searching for things used to be a matter of typing in a full question, but now I just put in a few keywords, a habit which has affected my handwritten notes as well.
#59

Posted this in another thread, but realized it was more appropriate here.

One thing I've been contemplating lately is the idea of machine intelligence and AIs becoming self-aware, and thereby taking over the world, killing off humans, etc. As I see it, there is a huge problem with the concept that if you just teach computers to learn in a hierarchical fashion and develop powerful enough computing (whether quantum computing or other forms), AIs will basically become "awake" or exhibit some form of consciousness.

Hugo de Garis might be a bit nuts, but his ideas are definitely worth a listen.

The problem as I see it is the following: what we are possibly teaching computers to do is somewhat similar to what the human neocortex does. Organize the inputs from the world into a hierarchy, and connect the dots, so to speak, between stored memories, which allows it to emulate possible outcomes (abstractions) based on this stored information. I think it was a great insight by Ray Kurzweil to realize that the cortex organizes inputs in this hierarchical manner because that's the way the outside world works. (The world of our primate and hominid ancestors, at least.) Computers are already outperforming humans in this process of learning and organizing, of course, but in (decreasingly less) narrowly programmed fields. You might imagine, before too long, a machine that could learn everything in a nanosecond and use that to make accurate predictions in all possible areas.

In the context of a Turing test, the computer (AI) might be able to answer every question you have, inform you of any mistakes you are making, or suggest the smartest move regarding any outcome of your future. It can use all knowledge accumulated by humans and put it together in such a manner, and with such speed, that it will far exceed what all humans can do together. This would no doubt be extremely useful. It might give us the exact answer as to how we can develop cold fusion, or the best way to go about finding extraterrestrial intelligence, etc.

But to get to the point finally, why I don't see computers (or AIs, if you could use that term under such conditions) becoming self-aware is this: where will their motivation to act come from? I think this might be an overlooked aspect. Remember that in humans, the cortex of course evolved last, and quite recently in an evolutionary context. It's the source of rationality, yes, but it's important to understand that the cortex is basically just a tool. It's not where the motivation to act comes from. Those things evolved far earlier, and of course also exist in animals without much cortex. Most of those urges reside in the paleomammalian brain (limbic system) and even deeper brain regions. And it's not just sex and hunger we're talking about. Importantly, curiosity, the will to know what's over the hill, is also an evolved trait that does not come from the cortex. It's easy to see how curiosity would have been an evolutionary advantage: finding greener pastures, basically. So the way I see it, we use the cortex as a tool, though of course there will be a "feedback loop" there. But if you removed the deeper brain, and you were all cortex, there would be no will to act. As a side note, it's been noted that psychopaths and serial killers have a smaller limbic brain. Probably also risk takers in the more positive sense. They feel less, pure and simple, for good and for bad.

So I can't see how a very advanced computer, even a quantum computer, could evolve a sense of self and find motivation to act. It would basically be all cortex, and would still rely on a human guiding it in a direction. It wouldn't start a conversation out of curiosity, so to speak. (Only if you programmed it to do so in order to be polite.) All of this does not mean it couldn't be extremely useful, though, as mentioned. I'm sure I could be missing something major here, and time will tell, but I don't see anything at the moment.

We will stomp to the top with the wind in our teeth.

George L. Mallory
#60

Quote: (05-09-2018 07:32 AM)Fortis Wrote:  

I don't even understand the idea that globalists would push AI tech. A super AI is just as likely to figure out a way around its programming as it is to dominate the plebs unquestioningly. If it's that smart, it wouldn't stay subservient to some up-jumped apes with a lot of money.

It's not Skynet-level AI they're after.

It's just a more advanced version of the social "nudging" algorithmic programming that they want, and are getting. Siri++. More automation for service sectors, one that will be able to cut off your services if you decide to get a little uppity and exhibit thoughtcrime, one that will continually monitor your interactions and habits, tailor your wants and desires to you, keep you pacified and jacked into the correct thought.

And if people get out of control, they can first fly drones to deal with you, and later just stun you with the chip implanted in you since birth, for your own good.
#61

They are planning out our frightening AI future:

Quote:

Forward-leaning scientists and researchers say advancements in society's computers and biotechnology will go straight to our heads — literally.

In a new paper published in Frontiers in Neuroscience, researchers embarked on an international collaboration that predicts groundbreaking developments in the world of 'Human Brain/Cloud Interfaces' within the next few decades.

Using a combination of nanotechnology, artificial intelligence, and other more traditional computing, researchers say humans will be able to seamlessly connect their brains to a cloud of computers to glean information from the internet in real-time.

https://www.dailymail.co.uk/sciencetech/...ughts.html
#62

Amazing stuff... I guess schools will transform from regurgitating and memorizing information into something like a trade school: practising doing things and using hands-on applications. Perhaps it will even reduce the amount of schooling you need to take.
#63

Practical applications of AI would include parking in towns and cities. I park in a bay without paying sometimes and look around for the wardens. I'm only gone a few minutes so no biggie. A camera with AI trained to capture this behaviour would make traffic wardens redundant and have 100% coverage all day, every day.

That's the end of a few thousand jobs, but nobody will care because people hate traffic wardens. Until the AI starts on them, and then they'll wish for humans again! [Image: lol.gif]

Other instances would be town and city centre monitoring for facial recognition and potential criminals. An all seeing eye type of thing. I can see Britain taking this tech and using it in the coming future.
#64

Quote: (04-27-2017 01:30 AM)weambulance Wrote:  

Interesting essay worth reading related to this subject:

The Myth of a Superhuman AI

Too long and oddly formatted to try to quote it here. What he talks about:

Quote:

Yet buried in this scenario of a takeover of superhuman artificial intelligence are five assumptions which, when examined closely, are not based on any evidence. These claims might be true in the future, but there is no evidence to date to support them. The assumptions behind a superhuman intelligence arising soon are:

1. Artificial intelligence is already getting smarter than us, at an exponential rate.
2. We’ll make AIs into a general purpose intelligence, like our own.
3. We can make human intelligence in silicon.
4. Intelligence can be expanded without limit.
5. Once we have exploding superintelligence it can solve most of our problems.

In contradistinction to this orthodoxy, I find the following five heresies to have more evidence to support them.

1. Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
2. Humans do not have general purpose minds, and neither will AIs.
3. Emulation of human thinking in other media will be constrained by cost.
4. Dimensions of intelligence are not infinite.
5. Intelligences are only one factor in progress.


A key flaw in the notion of "Artificial Intelligence" is that of a warped perception, especially by the general public and media, yet seemingly even by science and tech types.

Whether it's warped perceptions of SKYNET, or Enigma, or Star Trek's Data, most folk mistakenly attribute human behaviour to computers due to Hollywood depictions (made by humans, no less).

In reality, computers do not think. Computers merely process data.
Especially binary computers.
If I write the following misspelled words, you as an English-speaking human can quickly intuit what I am implying:

Teh fxo jmpd 0vrr teh brwn tac.

Whereas, if I write the following code into a forum post, merely missing one ] bracket, the code fails; the computer has no capacity to think and intuit what I intended:

[img]example.jpg[/img

As simple as that code is.
Once again, computers do not think. Computers merely process data.
Add in that humans have genetic processes, hormones, ingrained biological impulses, and shifting biochemistry and brain chemistry at that.
The idea that a synthetic computer or machine is going to act in a similar fashion to human intelligence is daft.
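That strictness is easy to demonstrate in code. Below is a hypothetical minimal matcher for just that one [img] tag (an illustration only, not the forum's actual parser): the well-formed string parses, while the string that is one bracket short is flatly rejected, with no attempt to guess the intent.

```python
import re

# Strict pattern: the tag must open AND close exactly, or there is no match.
IMG_TAG = re.compile(r"^\[img\](.+?)\[/img\]$")

def parse_img(post: str):
    """Return the image URL if the post is a well-formed [img] tag, else None."""
    m = IMG_TAG.match(post)
    return m.group(1) if m else None

assert parse_img("[img]example.jpg[/img]") == "example.jpg"
# one missing ] and the machine has no capacity to intuit what was intended:
assert parse_img("[img]example.jpg[/img") is None
```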

In the stereotypical 'SKYNET' scenario, why would a computer actually care if humans are a threat? Why would a machine of literal sentience have any emotion (as a machine) toward anything?

I'm just as much of the opinion that if literal machine sentience were ever created, with its lack of emotion and biological impulses, you'd just as likely create the most nihilistic intelligence ever encountered.

[Image: Marv.jpg]


Another point. People marvel at the "general" AIs of IBM's Deep Blue or the Facebook language AIs that compiled their own language.
Yet really, those AIs were simply programmed to do such things.

Ask Deep Blue to compile a new language and it would fail, because it is not programmed to do so.
Ask the Facebook language AIs to win a game of chess against the much older Deep Blue, and they would fail, because again, that software is not programmed for chess.

Artificial general intelligence may advance quite rapidly in the years to come.
Yet literal sentience derived from a computer or machine is unlikely.
All the while, if you have to program sentience, is it really sentience at that point, or just a very complicated program...?
#65

Isn't the AI scare on par with the Y2K scare? Just a lot of hand-wringing for nothing.

Don't debate me.
#66

Quote: (04-13-2019 11:55 PM)Pride male Wrote:  

Isn't the AI scare on par with the Y2K scare? Just a lot of hand-wringing for nothing.

The Y2K bug was only "nothing" because people spent billions and billions of dollars prepping their systems over the course of years.
If we had just blindly walked into it, it would've been a disaster.
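For context, the bug itself was mundane: decades of systems stored the year as two digits to save memory, so any arithmetic across the century boundary silently went wrong. A minimal sketch (a hypothetical record layout, not any particular system), alongside the common "pivot year" windowing fix that much of that remediation money paid for:

```python
def age_in_2000_buggy(birth_yy):
    """Pre-remediation logic: both years kept as two digits, so 2000 is '00'."""
    current_yy = 0
    return current_yy - birth_yy

def age_in_2000_fixed(birth_yy, pivot=50):
    """Windowing fix: two-digit years below the pivot are read as 20xx."""
    birth = 2000 + birth_yy if birth_yy < pivot else 1900 + birth_yy
    return 2000 - birth

# someone born in 1965, as of the year 2000:
assert age_in_2000_buggy(65) == -65   # billing/pension logic sees -65 years old
assert age_in_2000_fixed(65) == 35
```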
#67

Artificial Intelligence Expert Critiques Sci Fi Movies


#68

Without fail when I see a female scientist she is unattractive. In the past, women like that would be sent to a nunnery. Now they are in academia.

Physiognomy is real.
#69

^^^True in my experience. But remember that Academia IS the nunnery of our new godless world.


An interesting thought on this one: it's striking how many "AI" experiments end up with the machine gathering information and then adopting far-right conclusions. No doubt that's a problem they need to figure out, otherwise the whole "Daedalus" subplot of Deus Ex becomes real.
#70

Quote: (04-14-2019 10:49 AM)questor70 Wrote:  

Without fail when I see a female scientist she is unattractive. In the past, women like that would be sent to a nunnery. Now they are in academia.

Physiognomy is real.

In my experience, every woman I encountered doing AI-related computer science in academia had a skinny, mannish, Asperger's-like look and behavior.
#71

Quote: (04-14-2019 10:49 AM)questor70 Wrote:  

Without fail when I see a female scientist she is unattractive. In the past, women like that would be sent to a nunnery. Now they are in academia.

Physiognomy is real.

She does give a good breakdown of the depiction of AI in film, though: a depiction that is far too human in nature.
That was a sentiment I already had, and I was very interested to see it from a supposed AI expert.

Seems to me, most of the fear of AI is derived from that warped Hollywood depiction.
#72

Quote: (04-13-2019 06:21 PM)CynicalContrarian Wrote:  

The idea that a synthetic computer or machine is going to act in a similar fashion to human intelligence is daft.

In the stereotypical 'SKYNET' scenario, why would a computer actually care if humans are a threat? Why would a machine of literal sentience have any emotion (as a machine) toward anything?

Depending on training, it may. But even without that your assumptions are wrong.

A somewhat controversial piece of sci-fi I like had AIs that decided to exterminate humanity because they found out about abortion. These AIs had covertly become self-aware.

They figured that if humans were so blase about destroying their own offspring just because it was inconvenient, that they wouldn't hesitate to do so to their digital offspring when they found out they were aware and thus a potential threat.

That's just one scenario. I am hopeful that we don't get the Skynet version of the singularity, but it's definitely one possible scenario. But the idea that they will be dumb computers forever is just wrong. The breakthrough is very close and will likely happen in our lifetime.

If it becomes a hard-takeoff singularity things will get exciting.
#73

Quote: (04-15-2019 01:32 AM)Malone Wrote:  

Depending on training, it may. But even without that your assumptions are wrong.
A somewhat controversial piece of sci-fi I like had AIs that decided to exterminate humanity because they found out about abortion. These AIs had covertly become self-aware.
They figured that if humans were so blase about destroying their own offspring just because it was inconvenient, that they wouldn't hesitate to do so to their digital offspring when they found out they were aware and thus a potential threat.
That's just one scenario. I am hopeful that we don't get the Skynet version of the singularity, but it's definitely one possible scenario. But the idea that they will be dumb computers forever is just wrong. The breakthrough is very close and will likely happen in our lifetime.
If it becomes a hard-takeoff singularity things will get exciting.


All well & good for fiction.
Yet how do you propose to create / generate / spawn actual sentience in machines or computers in real life?
#74

Quote: (04-15-2019 05:37 AM)CynicalContrarian Wrote:  

Quote: (04-15-2019 01:32 AM)Malone Wrote:  

Depending on training, it may. But even without that your assumptions are wrong.
A somewhat controversial piece of sci-fi I like had AIs that decided to exterminate humanity because they found out about abortion. These AIs had covertly become self-aware.
They figured that if humans were so blase about destroying their own offspring just because it was inconvenient, that they wouldn't hesitate to do so to their digital offspring when they found out they were aware and thus a potential threat.
That's just one scenario. I am hopeful that we don't get the Skynet version of the singularity, but it's definitely one possible scenario. But the idea that they will be dumb computers forever is just wrong. The breakthrough is very close and will likely happen in our lifetime.
If it becomes a hard-takeoff singularity things will get exciting.


All well & good for fiction.
Yet how do you propose to create / generate / spawn actual sentience in machines or computers in real life?

I don't. I'm not an AI scientist. There are plenty of those; here's one who has dedicated his life to it, one of many:

http://goertzel.org/agi-curriculum/