Quote: (04-27-2017 01:30 AM)weambulance Wrote:
Interesting essay worth reading related to this subject:
The Myth of a Superhuman AI
Too long and oddly formatted to try to quote it here. What he talks about:
Quote:
Yet buried in this scenario of a takeover of superhuman artificial intelligence are five assumptions which, when examined closely, are not based on any evidence. These claims might be true in the future, but there is no evidence to date to support them. The assumptions behind a superhuman intelligence arising soon are:
1. Artificial intelligence is already getting smarter than us, at an exponential rate.
2. We’ll make AIs into a general purpose intelligence, like our own.
3. We can make human intelligence in silicon.
4. Intelligence can be expanded without limit.
5. Once we have exploding superintelligence it can solve most of our problems.
In contradistinction to this orthodoxy, I find the following five heresies to have more evidence to support them.
1. Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
2. Humans do not have general purpose minds, and neither will AIs.
3. Emulation of human thinking in other media will be constrained by cost.
4. Dimensions of intelligence are not infinite.
5. Intelligences are only one factor in progress.
A key flaw in the notion of "Artificial Intelligence" is warped perception.
This is especially true of the general public and the media, yet it seems to afflict science and tech types as well.
Whether it's warped perceptions of SKYNET, Enigma, or Star Trek's Data, most folk mistakenly attribute human behaviour to computers because of Hollywood depictions (written by humans, no less).
In reality, computers do not think. Computers merely process data.
Especially binary computers.
I can write the following misspelled words, and you, as an English-speaking human, can quickly intuit what I am implying:
Teh fxo jmpd 0vrr teh brwn tac.
Whereas, if I write the following code into a forum post and merely omit one ] bracket, the code fails; the computer has no capacity to think and intuit what I intended:
[img]example.jpg[/img
And that is as simple as coding gets.
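The point can be sketched in a few lines. Below is a minimal illustration (the `parse_img_tag` helper and the regex are hypothetical, not actual forum software) of a strict parser for the [img] tag: drop one closing bracket and the match fails outright, with no attempt to guess what was meant.

```python
import re

# Strict pattern: the tag must be complete, character for character.
IMG_TAG = re.compile(r"\[img\](.+?)\[/img\]")

def parse_img_tag(text):
    """Return the image URL if the tag is well-formed, else None."""
    match = IMG_TAG.search(text)
    return match.group(1) if match else None

print(parse_img_tag("[img]example.jpg[/img]"))  # well-formed -> example.jpg
print(parse_img_tag("[img]example.jpg[/img"))   # one missing ']' -> None
```

A human reader sees the second string and knows instantly what was intended; the program just reports no match.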
Once again, computers do not think. Computers merely process data.
Add in that humans have genetic processes, hormones, ingrained biological impulses, and shifting bio-chemistry and brain chemistry on top of that.
The idea that a synthetic computer or machine is going to act anything like human intelligence is daft.
In the stereotypical 'SKYNET' scenario, why would a computer actually care whether humans are a threat? Why would a truly sentient machine feel any emotion (as a machine) toward anything?
I'm just as much of the opinion that if literal machine sentience were ever created, lacking emotion and biological impulses, you'd just as likely create the most nihilistic intelligence ever encountered.
Another point: people marvel at systems like IBM's Deep Blue or the Facebook chatbots that compiled their own shorthand language.
Yet really, those are narrow AIs, simply programmed to do those specific things.
Ask Deep Blue to compile a new language and it would fail, because it is not programmed to do so.
Ask the Facebook chatbots to win a game of chess against the much older Deep Blue, and they would fail, because, again, that software is not programmed for chess.
Artificial intelligence may advance quite rapidly in the years to come.
Yet literal sentience derived from a computer or machine is unlikely.
And if you have to program sentience, is it really sentience at that point, or just a very complicated program...?