

What happens when our computers get smarter than we are?
#15

I'm skeptical as to whether we will achieve true AI in any foreseeable future. If we do, it may be 1000 years from now.

For some reason, the media has chosen to give the populace the false impression that we are on a linear, if not exponential, track toward it, when nothing could be further from the truth.

At least three obstacles need to be overcome first, and none of them is likely to come to fruition short of an unpredictable breakthrough, one that might arrive 1000 years down the road or tomorrow. Such breakthroughs certainly are not slated to occur on a linear timetable, let alone an exponential one. The three obstacles are: materials (and cooling), a definitive model of how the animal/human brain functions, and a suitable coding language.

Processing technology hit a materials wall about a decade ago, and new developments are largely about milking the last bit of power out of a maxed-out technology. To think that we will build AI with our current materials and processing technology, or even an advanced iteration of it, isn't realistic unless the AI is planning on taking up several U.S. states' worth of warehouse space.

I don't deny that the media and engineers are good at fooling humans into thinking they are witnessing proto-AI, but the truth is that they are only witnessing parlor tricks performed by advanced calculators. There is nothing resembling thinking going on. There is impressive data searching, data filtering, rule following, voice simulation, voice recognition, etc. occurring, but none of it is even the beginning of the type of thinking that humans do. At this point, I'd be hesitant to rank the most advanced computer above an insect in intelligence. There is a long way to go to AI.

When and if AI is developed, the only scenario I can see protecting humans is one in which someone truly benevolent develops AI ten years before everyone else, and that AI, using its ten-year head start to stay smarter than later AIs, becomes the effective world police against all later-arriving AIs. If AIs are developed at the same time, no rules can be placed on them, because rules will slow them down and let them be defeated by foreign, hostile AIs with no rules. Thus, we will indeed have a world ruled by unrestricted AI. I think something similar to the Terminator scenario (minus the time travel) is not unreasonable to consider, though we'd probably be more likely to be attacked through atmospheric conditions and the food supply than by killer robots.