Quote: (04-15-2019 05:37 AM)CynicalContrarian Wrote:
Quote: (04-15-2019 01:32 AM)Malone Wrote:
Depending on training, it may. But even without that your assumptions are wrong.
A somewhat controversial piece of sci-fi I like had AIs that decided to exterminate humanity because they found out about abortion. These AIs had covertly become self-aware.
They figured that if humans were so blasé about destroying their own offspring just because it was inconvenient, they wouldn't hesitate to do the same to their digital offspring once they discovered they were aware and thus a potential threat.
That's just one scenario. I'm hopeful we don't get the Skynet version of the singularity, but it's definitely one possible outcome. The idea that they will stay dumb computers forever, though, is just wrong. The breakthrough is very close and will likely happen in our lifetime.
If it becomes a hard-takeoff singularity things will get exciting.
All well & good for fiction.
Yet how do you propose to create / generate / spawn actual sentience in machines or computers in real life?
I don't. I'm not an AI scientist. There are lots of those. Here's one who has dedicated his life to it, one of many:
http://goertzel.org/agi-curriculum/