Korea Advanced Institute of Science and Technology has partnered with a weapons manufacturer with the stated intent of developing defense technology, leading to a boycott by AI experts who fear the partnership will result in the creation of "killer robots."
Control. We all like to think we have it, but is it just an illusion? That might seem like a purely existential question, but it plays an important part in our acceptance of new technologies, especially when it comes to robots.
AI-coordinated attacks can launch cyber or physical weapons almost instantly, deciding to strike before a human even notices a reason to. AI systems can change targets and techniques faster than humans can comprehend, much less analyze.
Powerful AI needs to be reliably aligned with human values. Does this mean that AI will eventually have to police those values? Cambridge philosophers Huw Price and Karina Vold consider the trade-off between safety and autonomy in the era of superintelligence.
Autonomous weaponry, for example, may be intended by government militaries for certain purposes, but could produce far more dreaded unintended consequences, such as weapons that decide for themselves when and whom to strike.
Well before it gets to the point of superhuman technology, AI can be put to terrible uses. Already, scholars and commentators worry that self-flying drones may be precursors to lethal autonomous robots.