We must consider the consequences of the technological singularity before it is too late. These consequences are hard to convey to the world because the pros of A.I. development seem to outweigh the cons. Even if the whole world understood the threats of A.I. and feared it deeply, progress toward A.I. development would continue.
A.I. offers enormous economic and military advantages, so forbidding its development merely ensures that someone else will get there first. Governments cannot control private A.I. development; it is hard even to define what artificial intelligence is, and it can easily be created in secret. We won’t know until it hits the news, and we won’t be able to react until the system is already too smart.
Then, if we tried to control these super-smart systems or devices, we might trigger their “survival instinct.” They would quickly outthink our plan to limit them and immediately generate a countermeasure to neutralize us, whether by taking control of humankind or by wiping us out entirely.