Opinion: A fast-developing area of AUKUS’ second pillar technologies is artificial intelligence (AI) and its many applications. At the same time, the defence community is steadily thinking through the implications of lethal effects that depend on AI, such as armed drones, writes former naval officer and defence industry analyst Christopher Skinner.
This has been a feature of US Department of Defense Directive 3000.09, Autonomy in Weapon Systems (updated 25 January 2023), which was recently revised to reflect continuing developments in artificial intelligence and the related field of machine learning.
At the same time, there has been notable publicity for a new generative AI tool, ChatGPT, released late last year by OpenAI in which Microsoft is a major investor. Google and other international technology companies are scrambling to release similar platforms.
ChatGPT acquired some 100 million users within a couple of months of its release.
The extraordinary capability of ChatGPT to generate text essays, proposals, and analyses is taking industry, commerce, and national security by storm. The degree of sophistication of ChatGPT-generated material is beyond anything seen before and far beyond the scope of plagiarism detection tools currently used in online education processes.
There are already some salient lessons apparent for defence and national security. The first and most important is to be very careful what materials you trust and always think about the fallback if the material turns out to be false.
Secondly, there should always be entirely independent means to verify and validate whatever comes from an AI-powered source. No report or analysis should ever be accepted at face value alone; look for holes and flaws. Be the red team or devil’s advocate.
The federal government is already on alert, with Chief Scientist Dr Cathy Foley warning last month that policy changes will be needed in light of ChatGPT; she is leading the development of a rapid research information report.
Dr Jacob Wallis, head of the information operations and disinformation program at the Australian Strategic Policy Institute (ASPI), was quoted as saying there is potential for generative language models such as ChatGPT to be involved in “widespread, at scale, dissemination of propaganda and disinformation”.
Other spokespersons have cautioned on similar hazards in legal, commercial, and educational fields.
At the same time, there continues to be a heavy focus on beneficial applications of AI, nowhere more so than in defence capability development.
All of which leaves us with the urgent need to consider the challenges and opportunities presented by AI, and how we can ensure that what we develop is effective, can be trusted, and is resilient to deliberate attempts to compromise its effects.
At the same time, the potential for AI tools to impersonate humans is another aspect to be addressed. How do you know this article wasn’t written by ChatGPT?
Defence has addressed the cyber challenge through a number of approaches, one of them termed zero-trust interfaces: assume the other party is untrustworthy, operate as far as possible without the need for such trust, and then, by all means, accumulate trust in a variety of ways.
Similarly, be prepared to challenge broadcast information that is false but do this in a manner that can be verified such that you gain the trust of the audience. When such disinformation is likely to occur, there should be defensive materials prepared in advance; a contingency plan if you like.
The bottom line for the defence and national security community is that we must expect AI-generated disinformation to be credible and coherent, but equally that it may be rebutted with well-composed opposing information, prepared with the help of our own AI tools and readily verifiable by authoritative sources.
Such information warfare is an integral part of international competition and conflict as we have seen in Ukraine and continue to see in the Indo-Pacific. This behoves all of us engaged in national security and maintenance of the international rules-based order to be vigilant and well prepared for such activity.
Artificial intelligence is a powerful new technology that we must learn to control as well as to leverage every possible advantage from its use.