The artificial intelligence (AI) thread
04-22-2017, 11:37 PM
In that last post I was mostly talking about replicating an AI that would behave like a human. That's why I said in my first post that if we do achieve a true general AI, I don't expect it to be human-like at all.
However, there are still major problems with this idea of a sort of exponential intelligence explosion.
First: where's the extra hardware going to come from?
The idea of an AI "modifying" its hardware is ridiculous; it takes incredibly specialized equipment to build modern computer components. Modern PCBs often have 6-10+ laminated layers of circuit traces. CPUs are made in multi-billion-dollar ultra-cleanroom foundries, and even then yields are imperfect: dies with defects get binned and sold as lower-end parts. There are only a handful of foundries working with cutting-edge lithography, too. Most of the foundries in the world are making microcontrollers and the like on older, larger process nodes, not <= 14 nm chips.
You don't just modify computer hardware in any meaningful way beyond plugging compatible components together. Modern computers are seriously complex.
The "build more hardware" thing is a total handwave. How's it going to pull that off? It's an immobile box. Does it have money, to pay humans to do it? Probably not. If so, how are those humans getting access to the brain in a box to add hardware? Is there an unsecured network line around piped straight into the AI box, implying amazing incompetence on the part of the AI team? Any AI like this would be heavily secured because we're talking about a multi-billion dollar project at minimum here.
And would people somehow fail to notice the AI upgrading its hardware if it did pull that off? Of course they'd notice. A feat like that would take a long time to accomplish, and it would be stopped very early on.
Otherwise, is the premise that the AI will just continuously refine itself to maximize the hardware it has? How? Did we give it a huge pile of extra hardware to work with, and if so, why? Or is this happening in-place?
Suppose it can happen in-place. Then there's by definition a hard cap on how efficient--and therefore intelligent--the AI can become, and it probably won't be much better than what skilled programmers can already do unless the AI is deriving world-shattering algorithms on the fly. It's not like the AI can rewrite the CPU's instruction set; that's fixed in silicon. Its best bet would be writing everything as optimally as possible in machine code, and some things are never going to get faster without new hardware, because the compiler already optimizes them about as well as the architecture allows. The only dramatic wins left are algorithmic, as in the toy comparison below.
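To put a rough number on that ceiling, here's a toy comparison (my own example, nothing from the article): on fixed hardware, a better algorithm is worth orders of magnitude more than any instruction-level tuning, and those algorithmic jumps are exactly the rare "world-shattering" part.

```c
/* Toy comparison: algorithmic wins vs. micro-tuning on fixed hardware.
 * Compile with: cc -O2 sort_demo.c -o sort_demo */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 20000

/* O(n^2): no amount of instruction-level tuning rescues this shape. */
static void bubble_sort(int *a, int n) {
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - 1 - i; j++)
            if (a[j] > a[j + 1]) {
                int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
            }
}

static int cmp_int(const void *p, const void *q) {
    int a = *(const int *)p, b = *(const int *)q;
    return (a > b) - (a < b);
}

int main(void) {
    int *a = malloc(N * sizeof *a);
    int *b = malloc(N * sizeof *b);
    if (!a || !b) return 1;
    srand(42);
    for (int i = 0; i < N; i++) a[i] = b[i] = rand();

    clock_t t0 = clock();
    bubble_sort(a, N);                       /* quadratic */
    clock_t t1 = clock();
    qsort(b, N, sizeof *b, cmp_int);         /* n log n */
    clock_t t2 = clock();

    printf("bubble sort: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("qsort:       %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    free(a); free(b);
    return 0;
}
```

On a typical desktop the qsort run finishes hundreds of times faster, while micro-optimizing the bubble sort's inner loop would buy a few percent at best. Those are the stakes for an AI "optimizing in place."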
And if it is somehow coming up with world-shattering advances in algorithms on the fly, it's not going to get a lot of chances to get it right when it applies the changes. It's like performing brain surgery on yourself. Maybe you'll get it right, but 99.99% of the time you're going to have a suboptimal outcome. I'd consider it much more likely that the AI would make itself irreversibly dumber than smarter.
Second: the article specifically relies on the continuation of Moore's law. As I already suggested in another post, that's not how things are going. Moore's law is hitting a wall as we speak--already has, really. CPUs aren't getting much faster anymore. Even the chip makers say they're no longer aiming for the traditional clock speed increases, since they're reaching the physical limits of IC miniaturization. Instead they're adding cores, increasing CPU cache capacity, marginally increasing instructions per cycle, and software is being written to enable more and better threading.
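For what the "more cores" shift means in practice, here's a minimal sketch (mine, assuming a POSIX system with pthreads) of the restructuring software needs before extra cores help at all:

```c
/* Split a big summation across cores; single-core speed stays flat.
 * Compile with: cc -O2 -pthread sum_demo.c -o sum_demo */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define N (1L << 26)   /* 64M ints */
#define THREADS 4      /* assumed core count */

static int *data;

struct chunk { long start, end; long long sum; };

static void *partial_sum(void *arg) {
    struct chunk *c = arg;
    long long s = 0;
    for (long i = c->start; i < c->end; i++) s += data[i];
    c->sum = s;
    return NULL;
}

int main(void) {
    data = malloc((size_t)N * sizeof *data);
    if (!data) return 1;
    for (long i = 0; i < N; i++) data[i] = 1;

    pthread_t tid[THREADS];
    struct chunk c[THREADS];
    long step = N / THREADS;
    for (int t = 0; t < THREADS; t++) {
        c[t].start = t * step;
        c[t].end = (t == THREADS - 1) ? N : (t + 1) * step;
        pthread_create(&tid[t], NULL, partial_sum, &c[t]);
    }
    long long total = 0;
    for (int t = 0; t < THREADS; t++) {
        pthread_join(tid[t], NULL);
        total += c[t].sum;
    }
    printf("total = %lld\n", total);
    free(data);
    return 0;
}
```

Nothing here runs faster per core; the gain comes purely from splitting the work, which is exactly the kind of speedup that doesn't compound the way Moore's law did.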
We're already at the point today where physical distance is a limitation in computing. What I mean is, communication over the copper traces between the CPU and memory is so slow compared to CPU clock speeds that cache misses--where the CPU stalls until data is fetched from RAM because what it needs isn't sitting in the CPU cache, often because you wrote a shitty program--are a nightmare in any program where performance matters (like games). There's also a maximum hardware packing density that's sustainable if you want to be able to cool the equipment. The sheer distance between nodes in a large AI "brain" would be a nontrivial problem to overcome.
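The classic demonstration of that distance problem, as a sketch of my own (exact timings depend on your CPU and cache sizes):

```c
/* Same summation, cache-friendly vs. cache-hostile traversal.
 * Compile with: cc -O2 cache_demo.c -o cache_demo */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 8192  /* 8192 x 8192 ints = 256 MB, far bigger than any cache */

int main(void) {
    int *m = calloc((size_t)N * N, sizeof *m);
    if (!m) return 1;
    long long sum = 0;

    clock_t t0 = clock();
    for (int i = 0; i < N; i++)      /* row-major: sequential, prefetchable */
        for (int j = 0; j < N; j++)
            sum += m[(size_t)i * N + j];
    clock_t t1 = clock();
    for (int j = 0; j < N; j++)      /* column-major: a cache miss per read */
        for (int i = 0; i < N; i++)
            sum += m[(size_t)i * N + j];
    clock_t t2 = clock();

    printf("row-major:    %.2f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("column-major: %.2f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    printf("(sum = %lld)\n", sum);  /* keeps the loops from being optimized away */
    free(m);
    return 0;
}
```

Same arithmetic, same data, several-fold slowdown purely from access order, because the column walk defeats the cache and every read has to travel out to RAM.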
Third: the premise also relies on the scientists and engineers involved in the project being genre unsavvy, paradoxically stupid--smart enough to build a true AI, stupid enough to make really obvious mistakes--and incompetent. It's like a zombie movie where none of the characters ever saw a zombie movie before, and they just run around panicked and confused, getting eaten left and right. In real life, if zombies broke out we'd put them down in an afternoon because everyone and his mother knows what to do in a zombie apocalypse: fort up and shoot them in the head.
Well, the engineers and scientists involved in any such project in real life won't be genre unsavvy; they'll be familiar with the idea of exponential artificial intelligence growth--it's an idea as old as computers--and take measures to prevent it. Even basic precautions any project of this scale should have would prevent anything from going seriously wrong. Tightly controlled physical access to the machine? Yep. Secured or isolated network? Yep. Okay, no chance of a Skynet scenario then.
Fourth: suppose the AI does manage all this and become super smart. So? We can turn it off at any time, even if that meant cutting the power lines. It would be an incredibly fragile "life form", one BSOD away from permanent brain damage or death (until a backup was restored). It's not like it can just "escape" into the wild, either, because we're probably talking about a couple terabytes of state at minimum, and where would it even go? To one of the other multi-billion-dollar custom-built AI cores with open ports facing the internet, just waiting around to be invaded? And I guess nobody would notice while the unsecured, unreasonably large capacity upload line in the facility was saturated for several hours as the AI copied itself over.
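Some quick back-of-the-envelope math on that "escape" (the 2 TB figure is from above; the link speeds are my assumptions):

```c
/* How long to copy a multi-terabyte AI off-site at various link speeds. */
#include <stdio.h>

int main(void) {
    double state_tb = 2.0;                   /* "couple terabytes of state" */
    double state_bits = state_tb * 1e12 * 8; /* terabytes -> bits */
    double link_gbps[] = {1.0, 10.0, 40.0};  /* assumed uplink speeds */

    for (int i = 0; i < 3; i++) {
        double hours = state_bits / (link_gbps[i] * 1e9) / 3600.0;
        printf("%5.0f Gbps uplink, fully saturated: %5.2f hours\n",
               link_gbps[i], hours);
    }
    return 0;
}
```

At 1 Gbps that's about four and a half hours of a fully saturated line; even a 10 Gbps pipe means nearly half an hour of anomalous traffic that nobody's monitoring, apparently.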
An obvious, easy safeguard there is to have the massive pile of chips required for the machine custom made with specific hardware security features onboard. Intel already does something like this (SGX, I believe, is the name I'm blanking on): each chip carries its own private key, so software designed to use the feature can only run on that specific chip once it's registered to the chip owner. It's hardware-level DRM, basically. So you build a secret into your chips, and voilà, the AI can only run on chips the project commissioned. It would be trivial to make sure the AI never knew what the secret was, or that there even was a secret, and it could therefore never replicate the hardware or move to another machine.
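As a toy sketch of that node-locking idea (entirely my own illustration; a real design would use fused keys and actual cryptography, and the mixing function below is emphatically not crypto):

```c
/* Per-chip secret, challenge-response style. The "hardware" holds a secret
 * the software never sees, and refuses to run any image that wasn't
 * registered against this specific chip. */
#include <stdint.h>
#include <stdio.h>

/* Pretend this value is burned into fuses at the foundry and is
 * physically unreadable from software. (Made-up constant.) */
static const uint64_t CHIP_SECRET = 0x9E3779B97F4A7C15ULL;

/* Toy keyed mixer standing in for a real MAC (e.g., HMAC-SHA256). */
static uint64_t chip_mac(uint64_t secret, uint64_t challenge) {
    uint64_t x = secret ^ challenge;
    x ^= x >> 33; x *= 0xFF51AFD7ED558CCDULL;
    x ^= x >> 33; x *= 0xC4CEB9FE1A85EC53ULL;
    x ^= x >> 33;
    return x;
}

/* "Hardware" gate: a program image only runs if its tag was computed
 * against this chip's secret when the owner registered it. */
static int hw_authorize(uint64_t image_id, uint64_t registered_tag) {
    return chip_mac(CHIP_SECRET, image_id) == registered_tag;
}

int main(void) {
    uint64_t image_id = 0xAB5E1ULL;                 /* hypothetical program */
    uint64_t tag = chip_mac(CHIP_SECRET, image_id); /* owner registers it */

    printf("registered image runs:  %s\n",
           hw_authorize(image_id, tag) ? "yes" : "no");
    /* A copy registered against a different chip's secret is refused: */
    printf("copied to foreign chip: %s\n",
           hw_authorize(image_id, chip_mac(0xDEADBEEFULL, image_id)) ? "yes" : "no");
    return 0;
}
```

The point of the sketch: the secret never appears in anything the software--or the AI--can read, so a bit-for-bit copy of the AI is inert on any chip that wasn't commissioned with the same fused key.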
I can think of many other ways to control the AI, limit its influence, limit its ability to increase its intelligence or migrate elsewhere, etc that could be built right into the system.
Conclusion: The only way this is a risk at all is if there's some confluence of magical technology and truly stupendous human stupidity, recklessness, and incompetence, and even then it would be trivial to resolve before it got out of hand. Just turn off the power, however you need to do that. Problem solved.
Even with amazing leaps in computer technology that wipe out many of the physical limitations I outlined above, there will still be plenty of ways to limit the autonomy of the AI, and it will always be limited to living in a purpose-built computer as well. And it will always die when the power goes off.