It’s no secret that the military tends to be far ahead of the rest of the world when it comes to tech development and deployment. In fact, plenty of people sit around wondering exactly what the military is up to in its top-secret labs. For the general population, this speculation mostly amounts to exactly that: pure conjecture. Could they finally be building that time machine? Who knows! But every once in a while, the engaged public gets a peek at what military forces have in the pipeline. And one piece of not-so-futuristic technology is turning heads: autonomous robots.
The Robot Question
When we build robots that can act on their own, will they help us or hurt us? The question has been asked countless times, and it is, of course, a huge preoccupation of science fiction. But there’s nothing sci-fi about the prospect of autonomous weapons, which experts suggest could come into military use within years rather than decades; that’s a lot sooner than many had originally thought. The development of these weapons has elicited a polarized response. If machines go to battle, proponents of the technology argue, human losses will be limited.
Yet opponents counter with a question of their own: If countries have autonomous weapons at their disposal, won’t that inevitably increase the frequency and intensity of armed conflict? Among the leading voices of dissent is a high-powered group of signatories to an open letter presented at the International Joint Conference on Artificial Intelligence in Buenos Aires. Those who’ve signed include Stephen Hawking, Noam Chomsky, Elon Musk and Apple co-founder Steve Wozniak. So yeah, nothing shabby about this group. Oh, and for fans of “Inception,” actress Talulah Riley also signed it, which is an awesome bonus. As for the contents of the letter, it’s written with the kind of urgency that’s bound to raise eyebrows.
“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” the letter states. “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable.”
With this arms race, the signers of the letter argue, will come a dangerous and destructive future: “It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.”
Risk Versus Reward
When it comes to most decisions we make, we evaluate them, consciously or not, on the basis of risk versus reward. Should I take all of my vacation days this year, or will that put me in lower standing when end-of-year promotions roll around? Should I enjoy seven hours of Netflix on a Saturday, or will I then risk falling into a prolonged period of apathy? These are the kinds of benefits and consequences we weigh in everyday life, and that weighing shapes the decisions we ultimately make.
But historically, technological development hasn’t tended to be preceded by that kind of weighing. Sure, tech developers throughout history have confronted moral dilemmas, but those dilemmas have rarely prevented them from inventing whatever they set out to invent. For instance, when physicist J. Robert Oppenheimer was heading up the Manhattan Project during World War II, the research effort that produced the world’s first nuclear weapons, he was initially galvanized by the apparent political import of his task.
“To me [the task at hand] is primarily the development in time of war of a military weapon of some consequence,” Oppenheimer said. He was intrigued by the challenges inherent in the project and encouraged by the glory that would follow if he led the effort to a successful conclusion. But in the midst of his highly concentrated work, what Oppenheimer hadn’t prepared for was the sheer magnitude of his creation. When he and his colleagues conducted the first atomic test, code-named Trinity, in the New Mexico desert, the moral weight of the project became all too apparent. Walking away from the test, Oppenheimer began to grasp what an awesome instrument of death he’d had a hand in making.
In the case of Oppenheimer and the Manhattan Project, there wasn’t a moment to contemplate risk versus reward. Or rather, when that moment arrived, it was already too late: the technology had been invented, and its use was all but inevitable. The question is: Will autonomous weapons follow the same destructive trajectory? And if so, is it possible to stop the development of such technology while we’re still, hypothetically at least, in the pre-development “risk versus reward” phase?
For their part, people like Stephen Hawking, Noam Chomsky and Steve Wozniak, all extraordinary innovators in their own right and not people known to stifle creation, are hoping the push toward autonomous weapons can be stopped in its tracks, while we still have the ability to weigh the consequences and before it’s too late. One key reason these experts are so adamant in their opposition is that, to them, the entire notion of self-guided weapons runs counter to the idea of AI progress.
“Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons,” the letter states. The signatories aren’t taking a stance against AI itself, which they say “has great potential to benefit humanity in many ways”; they are specifically opposed to weaponizing it.
Whether the predictions outlined in the open letter will come true, only time will tell. But with so many credible names behind it, the letter deserves to be taken seriously.