
How Attackers Are Using AI: The New Tactics Targeting Australian Businesses

  • Writer: ValiDATA AI
  • Apr 8
  • 5 min read

In the criminal AI ecosystem, tools like WormGPT and FraudGPT have been available on dark web forums since 2023. These are large language models with their safety guardrails removed, specifically designed for offensive use: writing phishing emails without ethical restrictions, generating malicious code, and providing step-by-step guidance on attack execution. By 2026, these tools have proliferated to the point where technical skill is no longer the primary barrier to executing a sophisticated cyberattack. Capital is. And the cost of running an AI-assisted attack campaign has dropped to a level accessible to mid-tier criminal groups.

For Australian security teams and business leaders trying to build adequate defences, understanding what attackers are actually doing with AI is more useful than general statements about AI-enabled threats. The specifics matter. Different attack techniques require different countermeasures, and prioritising the wrong ones wastes the budget and attention that most Australian organisations have in limited supply.

Automated Reconnaissance: Building Your Attack Profile

Before any attack, attackers gather intelligence. This phase, called reconnaissance, has historically been time-consuming and required meaningful human effort to do well. AI has fundamentally changed the economics. Tools combining web scraping, social media harvesting, and natural language processing can now build a detailed organisational profile in minutes: who the executives are, what their roles and reporting lines look like, what technologies the organisation uses (often visible in job postings), which staff members have recently changed roles, what business challenges the organisation has publicly discussed, and who the key suppliers and customers are.

For a mid-sized Australian professional services firm with an active LinkedIn presence and a website that names its team, an attacker can construct a targeting package in under an hour that would previously have taken a human analyst days. That package includes the names and contact details of the most valuable targets, their approximate authority levels, the organisational relationships that could be exploited through social engineering, and the technology stack that determines which vulnerabilities to look for.
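To make that speed concrete, here is a minimal sketch of the kind of collection involved, framed as a self-audit against a hypothetical team page (the URL and the regular expressions are illustrative assumptions, not a real tool). Anything a dozen lines of script can extract, an attacker's AI tooling can extract and cross-reference at scale.

# A minimal self-audit sketch: what does your own public site hand an
# attacker? The team-page URL and the patterns below are hypothetical.
import re
import requests

TEAM_PAGE = "https://www.example.com.au/our-team"  # hypothetical URL

html = requests.get(TEAM_PAGE, timeout=10).text

# Email addresses published on the page are direct phishing targets.
emails = set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", html))

# Job titles signal authority levels and likely reporting lines.
titles = re.findall(
    r"(Chief [A-Z][a-z]+ Officer|Director|Partner|Finance Manager)", html
)

print(f"Exposed email addresses: {sorted(emails)}")
print(f"Exposed senior titles:   {sorted(set(titles))}")

Run against your own site, the output is a rough preview of an attacker's first hour of reconnaissance.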

The practical implication for Australian businesses is that reducing public-facing information about organisational structure, technology choices, and personnel is a meaningful risk reduction measure. This does not mean abandoning LinkedIn or removing team pages. It means being deliberate about what information is valuable to an attacker and what is not, and making informed decisions about what to publish.

AI-Generated Malware: Lowering the Technical Bar

Malware development has historically required significant programming expertise and knowledge of operating system internals. Large language models are lowering that barrier in a specific and important way: they make it easier to adapt, modify, and customise existing malicious code rather than writing it from scratch. An attacker with limited programming skills can now describe the behaviour they want to an uncensored AI model and receive functional code. They can ask the model to modify existing malware to evade specific antivirus signatures, change its persistence mechanism, or adjust its communication patterns to avoid network detection.

The downstream effect is that signature-based antivirus and endpoint detection products face a harder problem than they did three years ago. When malware variants can be generated faster than signatures can be written, relying on signature-based detection as a primary control is inadequate. This is one of the reasons the ACSC has consistently emphasised application control, which prevents execution of any unauthorised application regardless of whether it matches a known malicious signature, as one of the most effective mitigation strategies in the Essential Eight.
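To illustrate why that distinction matters, here is a minimal sketch of the allowlisting logic behind application control. Real enforcement happens in the operating system (for example via AppLocker or Windows Defender Application Control); the hash value below is a placeholder.

# A minimal sketch of hash-based application allowlisting, the idea
# behind the Essential Eight's application control strategy. This shows
# the decision logic only; real enforcement lives in the OS.
import hashlib
from pathlib import Path

ALLOWED_HASHES = {
    # Placeholder value; a real allowlist holds the SHA-256 of each
    # approved binary, gathered at deployment time.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def may_execute(binary: Path) -> bool:
    """Allow execution only if the file's hash is explicitly approved."""
    digest = hashlib.sha256(binary.read_bytes()).hexdigest()
    # A freshly AI-generated malware variant has a hash no one has seen
    # before, so it is denied by default. No "known bad" signature is
    # needed, which is exactly why this control survives AI-scale
    # variant generation where signature matching does not.
    return digest in ALLOWED_HASHES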

Credential Stuffing and Account Takeover at AI Scale

Credential stuffing (using leaked username and password combinations to attempt access to other services) is not new. What AI has changed is the efficiency and adaptability of these attacks. AI-assisted tools can now process large credential datasets, prioritise the combinations most likely to succeed based on patterns in the data, adapt their behaviour in real time to evade rate limiting and CAPTCHA systems, and rotate through proxy networks to avoid IP-based blocking. The practical outcome is that credential stuffing at AI-assisted scale succeeds against authentication systems that would have blocked traditional automated attacks.
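One classic detection heuristic is flagging source addresses that fail logins across many distinct usernames. The sketch below is illustrative (the event fields and threshold are assumptions), and as noted above, AI-assisted tools that rotate proxies will evade per-IP heuristics on their own, which is why detection cannot be the only control.

# A minimal sketch of one stuffing heuristic: many DISTINCT usernames
# failing from the same source IP. Field names and the threshold are
# illustrative assumptions, not a product API.
from collections import defaultdict

FAILED_LOGIN_THRESHOLD = 20  # distinct usernames per source IP per window

def flag_stuffing(failed_logins: list[dict]) -> set[str]:
    """failed_logins: events like {'ip': '203.0.113.7', 'username': 'a@b.com'}."""
    usernames_by_ip: dict[str, set[str]] = defaultdict(set)
    for event in failed_logins:
        usernames_by_ip[event["ip"]].add(event["username"])
    # A human mistyping a password hits one or two accounts; a stuffing
    # tool cycles through hundreds. Proxy rotation dilutes this signal,
    # so treat it as one input among several, not a complete defence.
    return {
        ip for ip, users in usernames_by_ip.items()
        if len(users) >= FAILED_LOGIN_THRESHOLD
    }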

For Australian businesses, the critical control is multi-factor authentication. An account protected by MFA is largely immune to credential stuffing regardless of whether the password is in a leaked database. The ACSC's position is unambiguous: MFA should be required for all remote access, all administrative accounts, and all access to systems containing sensitive data. The remaining exposure is MFA bypass through social engineering, which leads directly to the phishing and social engineering threats covered in the next article in this series.
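For illustration, here is a minimal sketch of server-side verification of a time-based one-time password using the open-source pyotp library. Note that TOTP codes can still be phished in real time; phishing-resistant options such as FIDO2 security keys or passkeys close that remaining gap.

# A minimal sketch of server-side TOTP verification with pyotp. Even a
# correct leaked password fails here without the time-based code from
# the user's enrolled device.
import pyotp

def verify_login(stored_totp_secret: str, password_ok: bool,
                 submitted_code: str) -> bool:
    if not password_ok:
        return False
    totp = pyotp.TOTP(stored_totp_secret)
    # valid_window=1 tolerates one 30-second step of clock drift.
    # TOTP defeats credential stuffing outright, but a live phishing
    # page can still relay the code; that residual risk is what
    # phishing-resistant MFA (FIDO2, passkeys) removes.
    return totp.verify(submitted_code, valid_window=1)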

Deepfake and Voice Clone Fraud: The Australian Cases

The most high-profile case to date occurred internationally: a finance employee at a Hong Kong multinational transferred the equivalent of $39 million AUD after participating in a video call with what appeared to be the company's CFO and other senior executives. Every person on the call was a deepfake. The employee only became suspicious after the fact, when a check with head office confirmed the request was fraudulent. Australian businesses operating in similar environments, where executives travel frequently, where remote communication is normalised, and where finance team members are authorised to make significant transfers, face an equivalent exposure.

Voice cloning requires remarkably little source material. A 30-second audio sample is sufficient to train a convincing voice clone using commercially available tools. For Australian executives who have appeared on podcasts, given conference presentations, participated in webinars, or provided media commentary, that material is often freely available online. The ACSC has documented voice cloning attacks targeting Australian organisations, most commonly in the context of business email compromise where a follow-up phone call using a cloned executive voice is used to add credibility to a fraudulent payment instruction.
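The countermeasure is procedural rather than technical: a callback rule that routes verification through a channel the attacker does not control. Here is a minimal sketch of such a rule; the threshold and number directory are illustrative assumptions.

# A minimal sketch of an out-of-band verification rule for payment
# instructions, the control that defeats a cloned voice or deepfake
# call. Threshold and directory values are illustrative.

# Verified numbers sourced from HR records in advance, never taken
# from the payment request itself.
KNOWN_NUMBERS = {"cfo": "+61 2 5550 0000"}  # hypothetical entry
CALLBACK_THRESHOLD_AUD = 10_000             # illustrative threshold

def requires_callback(amount_aud: float, new_payee: bool) -> bool:
    # Apply regardless of the channel the request arrived on: email,
    # phone, and video can all now be convincingly faked.
    return new_payee or amount_aud >= CALLBACK_THRESHOLD_AUD

def callback_number(role: str) -> str:
    # The verification call goes TO a number you already hold, not one
    # supplied in the email or spoken on the incoming call.
    return KNOWN_NUMBERS[role]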

Autonomous Agents as Attack Tools

The reported case of an AI agent autonomously developing a FreeBSD kernel exploit represents the leading edge of a category that will become more significant: autonomous AI agents executing multi-step attack chains. What distinguishes this category from AI-assisted attacks is the level of autonomy. An AI-assisted attack still requires a human to direct each major step. An autonomous agent attack can identify targets, choose attack vectors, develop exploits, execute the attack, and adapt its approach when initial attempts fail, all without human intervention after the initial prompt.

For Australian businesses, the immediate practical concern is not defending against fully autonomous attack agents, which are currently at the frontier of capability. The concern is that the techniques used in these attacks (automated vulnerability discovery, adaptive exploitation, and autonomous lateral movement) are being incorporated into tools used by criminal groups who are less sophisticated than the researchers who built them. The democratisation of attack capability that AI is enabling means that the adversary profile facing an average Australian SME has shifted meaningfully upward.

Building Defences Calibrated to the Actual Threat

The most important thing Australian businesses can do with this information is update their threat model. A threat model documents who is likely to attack the organisation, what their motivations are, what capabilities they have, and which attack vectors they are most likely to use. Most Australian SME threat models were built or last updated before AI-enabled attacks became mainstream. They need to be revisited.
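A threat model does not need to be elaborate to be useful. As a starting point, here is a minimal sketch of a threat model entry structured around the four questions above; the field names and example actor are illustrative, not a standard schema.

# A minimal sketch of a machine-readable threat model entry, following
# the four questions in the paragraph above. Structure and field names
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ThreatActor:
    who: str                    # who is likely to attack
    motivation: str             # what they are after
    capabilities: list[str]     # what they can do, AI tooling included
    likely_vectors: list[str]   # which attack paths to defend first

# Example entry reflecting the adversary profile described in this
# article.
example_threat_model = [
    ThreatActor(
        who="Mid-tier criminal group using uncensored LLM tooling",
        motivation="Financial gain via fraudulent payments or extortion",
        capabilities=[
            "AI-generated phishing and malware variants",
            "voice cloning from public audio",
            "credential stuffing at scale",
        ],
        likely_vectors=[
            "business email compromise with voice-clone follow-up",
            "account takeover of services without MFA",
        ],
    ),
]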

The practical defensive priorities that follow from understanding these attack categories are:

  • Phishing-resistant MFA as a non-negotiable baseline.
  • Application control that explicitly covers AI tools as a category.
  • Anomaly-based detection rather than signature-based detection for endpoint and network monitoring.
  • Out-of-band verification protocols for financial transactions and sensitive actions, regardless of how the request arrives.
  • Regular tabletop exercises that specifically test AI-assisted attack scenarios, including voice-clone impersonation of executives and deepfake video calls.

The adversary has updated their playbook. The defence needs to do the same.
