Navigating the Artificial Intelligence Frontier

In November 2022, members of the Australian Federal Police (AFP) and Tasmania Police began a joint investigation following reports from the National Center for Missing & Exploited Children identifying an Australian-based user who was downloading child abuse material (CAM).

A search warrant was executed at the user’s home in May 2023, and what investigators found was shocking but not unexpected. The user’s computer contained hundreds of artificial intelligence (AI) generated images of CAM. Another search warrant executed in the same month in Victoria produced a computer that held 793 images of AI-generated CAM.

These cases illustrate the novel threats presented by AI. Criminals will use any means possible to achieve their ends, even if it means undermining legal principles or basic democratic rights or creating a feeling of terror within a community. 

While AI creates avenues for criminals to scale their activities, these technologies are equally essential for policing and law enforcement agencies in their efforts to counteract the threats presented by AI.

Law enforcement agencies must work together to integrate AI into our operational environment to effectively fulfil our mandate of safeguarding public safety and combating criminal activities.

HOW AI ENABLES CRIMINAL BEHAVIOUR

In his recent address to the National Press Club, AFP Commissioner Reece Kershaw laid bare the realities of modern policing in a dynamic and tech-driven world. “We used to plan the future of policing through the lens of the years to come,” he said. “But now, because of constant advances in technology, the years to come are almost every 24 hours.”


Criminal enterprises have been and will continue to be early adopters of technology wherever they see an opportunity to advance their insatiable appetite to benefit at the expense of others.

The key threats posed by AI affecting the criminal environment include:

  • Increased Potency: AI enables more frequent, widespread and scalable cyber-attacks on critical infrastructure, systems of national significance, governments, industry and the community. This is in addition to attacks at the individual and community levels.
  • Enhanced Accessibility: AI lowers the entry bar and cost for non-technical individuals to engage in malicious activities. 
  • Exploitation of Human-Centric Vulnerabilities: AI is more efficient and effective in leveraging vulnerabilities unique to human behaviour. 
  • Deliberate Sabotage of Critical Algorithms: AI introduces the risk of poisoned and sabotaged algorithms leading to ineffectual use of AI and/or harm.

AI-generated CAM exemplifies many of these threats. It is proliferating rapidly, and the underlying technology is advancing. Much of it is driven by deepfake technology: deepfakes are digital photos, videos or sound files of a real person that have been edited to create an extremely realistic, but false, depiction of them doing or saying something.

According to Australia’s eSafety Commissioner, sexually explicit deepfake imagery across all ages has soared by as much as 550 per cent year-on-year since 2019. The advanced nature of deepfakes makes it challenging to separate fact from fiction and real from virtual, presenting a significant challenge for law enforcement and the community at large. Deepfakes also highlight some of the key benefits of AI for criminal actors: the software used to create them is cheap and easily accessible, allowing individuals without technical backgrounds to undertake tech-enabled criminal behaviour.

RESPONSIBLE USE OF AI BY POLICE AND PUBLIC TRUST

Criminals have no boundaries or respect for the law; however, we adhere to a higher standard than our adversaries, exercising caution in our use of AI to ensure it aligns with our principles and public expectations.

A number of surveys indicate that Australia ranks among the nations most concerned about AI, with many Australians believing its risks outweigh its benefits, particularly regarding bias in AI models. This means that the operationalisation of AI must strike a fine balance between security and transparency: sensitive to significant public concerns about law enforcement's use of AI systems while protecting certain police methodologies and capabilities to maintain effectiveness. 

Law enforcement agencies have been proactively defining what responsible AI means within the context of policing. The Australia New Zealand Police Advisory Agency (ANZPAA) has published a one-page list of principles to guide the ethical and responsible use of AI. 

The nine principles are:

  • Transparency
  • Human oversight
  • Proportionality and justifiability
  • Explainability
  • Fairness
  • Reliability
  • Accountability
  • Skills and knowledge
  • Privacy and security

In 2023 these principles were endorsed by all Australian and New Zealand Police Commissioners.

The AFP is working to implement AI governance that embodies the ANZPAA principles and that emphasises a human-in-the-loop approach to the use of AI. The AFP will use a combination of internal guidelines, a human oversight committee and technology/privacy impact assessments to ensure our AI use is responsible, compliant, transparent and in support of a safer community.

HOW IS THE AFP USING AI TECHNOLOGIES?

The AFP’s current application of AI has been focused on exploring opportunities to transform data from one format to another, to enhance analysis and process large amounts of data. This includes experimenting with different AI tools and systems for data enhancement, such as language translation and transcription. 

The AFP utilises automated translation and transcription software that assists in transcribing audio recordings or determining whether foreign language material is relevant to an investigation. The identified material must always be reviewed by a qualified professional and the technology is not used to translate or transcribe materials which are to be used for evidentiary purposes.

The AFP is also working with the technology industry to test organisational use cases with Large Language Models (LLMs). The test environment has built in safeguards and the data inputted is not used to train the LLM, ensuring security and privacy obligations are met.

PATHWAY TO AN AI-READY WORKFORCE

The evolving threat landscape demands continuous investment in technology and innovation, strengthened by strategic partnerships, safeguarded by modernised legislative frameworks and governance, and empowered by a skilled workforce.

Central to AI adoption is the recognition of the need to invest in the organisation’s most valuable capability: its people. Successful AI adoption requires a skilled workforce equipped with an understanding of AI, technical leadership and policy skills. Investing in people is crucial, as the responsible use of AI is becoming increasingly complex due to factors such as rapid technological advancements, legal obligations, evolving AI policies, cybersecurity challenges, the political landscape, public trust, and operational requirements.

Commander Operations of the AFP’s Southern Command, Gail McClure APM, is responsible for leading Aviation and Protection Operations, Australian protective security arrangements in Melbourne, as well as counter-terrorism first response and aviation security at Melbourne Airport. She recently undertook the International Action Learning Group (IALG) Pearls in Policing Program to research trust and legitimacy in the context of emerging technology and AI.


The Commander faced several challenges at the beginning of her journey to understand AI technology, as her exposure to AI was limited to the mixed reactions portrayed in the media. It was through engaging with AFP technical specialists, such as Chief Data Officer Benjamin Lamont, and the AFP’s research partners, including Monash University, that she was able to develop a strong understanding of the risks and opportunities presented by AI.


"AI is coming, and it brings very real benefits for policing," said McClure. "It is really important that you seek out information and learn in this space. It’s not just about technical knowledge; we need to ensure police leaders and decision-makers have strategic AI knowledge to understand the benefits and risks of AI and be able to ask the right questions about how we might deploy the technology responsibly in a policing context and mitigate any risks."

Navigating the AI frontier in policing is not a simple task. Policing, at its heart, remains a problem-solving activity, undertaken in collaboration with the community, as well as through the exercise of unique investigative police powers and the use of sensitive capabilities. By empowering staff, investing in technology and strengthening partnerships, policing can confidently meet the challenges and opportunities presented by AI.

Want to read more posts like this one and stay up to date with the latest in Australian policing news? Subscribe to the Australian Police Journal.