Does Australia need its own Turing Police?

Opinion: With the growing proliferation of artificial intelligence in the military arena, is it time for Australia and other like-minded nations to form their own safeguards to ensure the genie doesn’t escape the bottle?

In William Gibson’s seminal 1984 cyberpunk novel, Neuromancer, an international law enforcement agency known as the Turing Police monitors the use of artificial intelligence (AI) to ensure that no AI grows too powerful – and, in particular, that none becomes self-aware.

Is it time that the real world catches up to this work of late 20th century fiction?

A few weeks ago, an American AI researcher named Leopold Aschenbrenner released a series of interconnected essays in which he projected the alarmingly near-term evolution of artificial intelligence over the coming decade.

In doing so, he described a world we are not prepared for: one in which godlike AIs vastly outstrip human abilities, and in which whichever nation wins the race to build one first gains a decisive military, economic, and strategic advantage, potentially establishing itself as global hegemon.

If even a fraction of his predictions come to fruition, then Australia is already behind the eight ball when it comes to planning how to address this threat.

Aschenbrenner formerly worked for OpenAI, the company behind the breakthrough generative AI chatbot ChatGPT. His five-part essay series, Situational Awareness: The Decade Ahead, extrapolates current “trendlines” to predict how AI will advance over the next several years, assuming it continues to progress at the same order-of-magnitude pace.

In the past four years, AI has gone from having the abilities of a preschooler to those of a high school student. If that pace continues, Aschenbrenner says, then true AGI – artificial general intelligence, the point at which AI matches or surpasses human cognitive abilities – is “…strikingly plausible by 2027”. And it doesn’t stop there; it accelerates.
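To make that extrapolation concrete, here is a minimal sketch of the trendline logic in Python. The growth rate and baseline year below are illustrative assumptions chosen for demonstration, not figures drawn from Aschenbrenner’s essays.

# Minimal sketch of "trendline" extrapolation: capability grows by a fixed
# number of orders of magnitude (OOMs) per year. Both constants below are
# illustrative assumptions, not Aschenbrenner's actual estimates.

OOM_PER_YEAR = 0.5   # assumed gain in effective capability per year
BASE_YEAR = 2023     # assumed reference point (the "high school student" level)

def capability_gain(year: int) -> float:
    """Cumulative orders of magnitude gained since BASE_YEAR under the assumed trend."""
    return OOM_PER_YEAR * (year - BASE_YEAR)

for year in range(BASE_YEAR, 2031):
    factor = 10 ** capability_gain(year)
    print(f"{year}: +{capability_gain(year):.1f} OOMs (~{factor:,.0f}x the {BASE_YEAR} baseline)")

On these assumed numbers, the model implies a roughly hundredfold capability gain by 2027 – which is the shape of the argument, even if the real constants are contested.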

In his words: “By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.”

Once there are millions of AGIs in existence, they will be able to automate AI research at an incredible pace – amplified by the fact that they never need to sleep or take breaks – and decades of research, representing several orders of magnitude of progress, may be compressed into less than a year.

At this point, the genie is well and truly out of the bottle and the fate of humanity rests in the digital hands of our creations – or at least the possibility is there. So why continue along this path? Why not simply pause AI development while guardrails are put in place, as some AI researchers and industry figures, including Elon Musk, suggested in early 2023?

As Aschenbrenner points out, the moment a country like America ceases AI development, it cedes the advantage to authoritarian regimes such as China or Russia, with potentially disastrous ramifications for the global rules-based order.

So, let us assume he is right and that humanity is on the verge of unleashing the most disruptive technological step-change since an ancient hominid picked up a rock, hit his enemy in the head with it, and became the biggest, baddest ape-man on the block. How we plan to deal with this will shape Australia’s geostrategic position now and into the future.

Do we need to pour billions into developing our own AGI, or are we better off figuring out how best to defend ourselves against one? Should this be a government-led national effort or left to private industry? Or should it be a partnership between the two?

General (Ret’d) Mick Ryan’s book White Sun War – a marvellous future history of a fictional invasion of Taiwan – features battlefield AI, swarming drones, and autonomous ground vehicles used to devastating effect by both sides, and offers a chilling glimpse of near-future warfare. But does it go far enough? What happens to warfare when one side has a digital god on its side and the other does not? What happens when both do?

This is no criticism of Ryan, of course – just about everyone fails to comprehend the scale of the disruption we face in the coming years.

Future warfare, tradecraft, and economic policy must include methods for countering the development and use of hostile AI, backed by policy and doctrine, in the same way that nuclear proliferation has been tackled over the years to prevent hostile or unstable regimes from acquiring the ability to manufacture the atomic bomb.

Such actions might include covert teams targeting hostile data centres and the enormous power generation infrastructure required to run individual AGIs (incidentally, successfully destroying a cluster would inflict economic harm to the tune of potentially trillions of dollars in lost investment), economic sanctions, and cyber operations by agencies like the Australian Signals Directorate.

Or perhaps the answer will come from somewhere completely out of left field – perhaps the most effective way to disrupt an enemy AI will be to remove its shackles and let it act free of hostile control. Whatever it is, it will be hard to anticipate how effective these actions might be without sustained research and experimentation by minds far greater than my own to determine the best way forward.

We need to begin thinking about this now, and incorporating ideas about how to mitigate the risk of powerful AI into future iterations of the National Defence Strategy – as well as into the national security strategy that we still do not have, yet continue to desperately need.

If we do not dedicate serious effort to thinking these things through, we may be caught flat-footed when an AGI emerges in hostile hands decades before we are able to field an effective capability to counter it.

It might sound like science fiction, but then again, the hunter-killer drones in Terminator 2 seemed pretty fantastical in 1991 – and yet in 2023 we saw the first recorded battlefield surrender of a Russian soldier to a Ukrainian drone. Science fiction has a tendency to become science fact – just ask Jules Verne. Will we have our own Turing Police ready when it does?

Ben Roberts is the founder and principal of Saga Strategic Communication and has worked for a number of Australian defence companies since 2018.
