AI and the Essential Eight: Applying Australia's Cybersecurity Framework to AI
- ValiDATA AI


The Essential Eight was first published by the Australian Cyber Security Centre in 2017 as a prioritised set of eight mitigation strategies drawn from the ACSC's broader Strategies to Mitigate Cyber Security Incidents. It was designed around the most common attack vectors observed in real incidents against Australian organisations, and it has been updated regularly as the threat landscape has evolved. It is now mandatory for non-corporate Commonwealth entities under the Protective Security Policy Framework, and is widely referenced as a baseline standard by APRA, ASIC, and state government agencies.
The framework uses a maturity model with four levels:
- Maturity Level Zero: the organisation is not meeting the intent of the control.
- Maturity Level One: controls are implemented to mitigate commodity threats from opportunistic attackers.
- Maturity Level Two: controls are more comprehensive and address targeted attackers willing to invest some effort.
- Maturity Level Three: controls address sophisticated, determined attackers, with continuous monitoring and rapid response capability.
AI introduces complications for each of the eight controls. Most organisations that have achieved a given maturity level for their traditional IT environment have not assessed whether that level still holds once AI tools and systems are brought into scope. In many cases, it does not. What follows is a control-by-control analysis of how AI changes the Essential Eight, focusing on the controls where the impact is most significant.
Control 1: Application Control
Application control prevents unauthorised applications from executing on systems. At Maturity Level Three, this means a comprehensive allow list of approved applications is enforced, and any attempt to execute an application not on the list is blocked and logged. The AI complication is that many AI tools do not look like traditional applications. Browser-based AI tools accessed through an approved browser may not be blocked by application control policies that focus on executable files. AI plugins and extensions installed in approved applications may circumvent controls entirely.
The practical extension of application control to the AI context requires organisations to explicitly define what AI tools are approved for use, through what interfaces they can be accessed, and what data can be processed through them. This is not just a technical control; it requires a policy that employees understand and that is enforced consistently. Organisations that have solid application control for traditional software but no policy on AI tool usage are operating at a lower effective maturity level than their Essential Eight assessment suggests.
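To make that concrete, here is a minimal sketch of what an explicit AI tool register and policy check might look like. The domains, interfaces, and data classification levels are illustrative, not recommendations for any particular vendor or product:

```python
# Illustrative AI tool register: the domains, interfaces, and data
# classification levels are examples only.
APPROVED_AI_TOOLS = {
    "copilot.example.com": {
        "interfaces": {"browser", "ide-plugin"},
        "max_data_classification": "internal",
    },
    "assistant.internal.example": {
        "interfaces": {"browser"},
        "max_data_classification": "confidential",  # self-hosted instance
    },
}

DATA_LEVELS = ["public", "internal", "confidential", "restricted"]

def is_request_allowed(domain: str, interface: str, classification: str) -> bool:
    """Return True only if the tool, access path, and data level all comply."""
    policy = APPROVED_AI_TOOLS.get(domain)
    if policy is None:
        return False  # not on the allow list: block and log
    if interface not in policy["interfaces"]:
        return False  # approved tool, but an unapproved access path
    return DATA_LEVELS.index(classification) <= DATA_LEVELS.index(
        policy["max_data_classification"]
    )

# Example: confidential data sent to a tool approved only for internal data.
print(is_request_allowed("copilot.example.com", "browser", "confidential"))  # False
```

In practice the enforcement point might be a secure web gateway or CASB rather than application code; the value is in having the register written down and enforced consistently.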
Control 2: Patch Applications
Patching requires organisations to remediate security vulnerabilities in applications within defined timeframes based on risk. At Maturity Level Three, critical vulnerabilities in internet-facing applications must be patched within 48 hours. AI tools and the models they run present two distinct patching challenges. First, AI tools are software applications that have their own vulnerabilities and receive security updates. These updates need to be incorporated into the organisation's patch management process with the same urgency applied to other applications.
Second, AI models themselves can have security-relevant vulnerabilities, including susceptibility to specific prompt injection techniques, that are addressed in model updates. The model update process is different from traditional software patching: it may involve changes to model behaviour, not just security fixes, and it may have functional consequences that require testing before deployment. Organisations need a process for assessing AI model updates that addresses both the security implications and the functional change management requirements.
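A minimal sketch of such a gate, assuming the organisation already runs security and functional evaluation suites against candidate models; the suite names, thresholds, and version string are illustrative:

```python
# Hypothetical promotion gate for a model update. Substitute whatever
# evaluation harness the organisation already runs.
from dataclasses import dataclass

@dataclass
class ModelUpdate:
    version: str
    security_pass_rate: float    # e.g. a prompt injection regression suite
    functional_pass_rate: float  # e.g. a business task regression suite

def approve(update: ModelUpdate) -> bool:
    # Any security regression blocks deployment outright; functional drift
    # is tolerated to a threshold, beyond which change management takes over.
    if update.security_pass_rate < 1.0:
        return False
    return update.functional_pass_rate >= 0.95

# A candidate that holds the security line but regresses functionally
# should go back through change management, not straight to production.
print(approve(ModelUpdate("triage-model-v7.2", 1.0, 0.91)))  # False
```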
Control 4: Restrict Administrative Privileges
This control requires limiting administrative privileges to those who genuinely need them, with regular review and strict justification requirements. AI agents introduce a new category of privileged identity. An AI agent that can access systems, query databases, send emails, or modify files is, in functional terms, a privileged account. Yet many organisations that have rigorous controls over human administrative accounts have not applied equivalent controls to AI agent service accounts.
Applying this control to AI at Maturity Level Three means that every AI agent or automated system with access to organisational resources has a dedicated service account, with access scoped strictly to what the agent needs to function. That service account should be documented in the privileged account register, reviewed regularly, and revocable on short notice. Shared credentials between AI systems and human users should not exist. The credentials used by AI systems should rotate regularly and should be managed through a secrets management system, not hardcoded into application configurations.
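As an illustration, here is a minimal sketch of an AI agent sourcing its credentials from a secrets manager at runtime rather than from configuration. It assumes AWS Secrets Manager purely as an example; the secret name, region, and key are placeholders:

```python
# A minimal sketch assuming AWS Secrets Manager. The point is that agent
# credentials live in a managed, rotating secret, never in application config.
import json

import boto3

def get_agent_credentials(secret_id: str = "ai-agents/crm-assistant") -> dict:
    client = boto3.client("secretsmanager", region_name="ap-southeast-2")
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

# The returned key is unique to this agent, scoped to the resources it
# needs, and rotates on the schedule configured in the secrets manager.
api_key = get_agent_credentials()["api_key"]
```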
Control 6: Multi-Factor Authentication
MFA is the single most impactful control for preventing account compromise, and it is the control where AI creates the most nuanced complications. The first complication is on the attacker side: AI-assisted phishing is now capable of conducting real-time adversary-in-the-middle attacks that intercept MFA codes. When a user clicks a phishing link, they are taken to an attacker-controlled proxy that forwards their credentials and MFA code to the real site in real time, establishing an authenticated session for the attacker. SMS and app-based TOTP codes are vulnerable to this technique.
Phishing-resistant MFA, specifically FIDO2 hardware security keys or passkeys, is not vulnerable to this technique because the cryptographic authentication is bound to the legitimate domain. A passkey generated for mybank.com.au cannot be used to authenticate to a phishing site mimicking it. The ACSC updated its MFA guidance in 2024 to explicitly recommend phishing-resistant MFA for high-value targets, and for Maturity Level Three, phishing-resistant MFA should now be treated as the required standard rather than an enhancement.
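To make the domain binding concrete, here is a simplified sketch of the origin check a WebAuthn relying party performs during authentication. Real verification also validates the challenge, RP ID hash, and cryptographic signature, so treat this as illustrative only:

```python
import base64
import json

def origin_matches(client_data_b64url: str, expected_origin: str) -> bool:
    """Simplified WebAuthn origin check. The browser, not the user, writes
    the origin field, so an assertion minted on a lookalike phishing domain
    can never verify against the legitimate site's origin."""
    padded = client_data_b64url + "=" * (-len(client_data_b64url) % 4)
    client_data = json.loads(base64.urlsafe_b64decode(padded))
    return client_data.get("origin") == expected_origin

# An assertion produced on a phishing proxy fails the check.
phished = base64.urlsafe_b64encode(
    json.dumps({"type": "webauthn.get", "origin": "https://mybank-login.example"}).encode()
).decode()
print(origin_matches(phished, "https://mybank.com.au"))  # False
```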
The second MFA complication is on the defender side: AI agents that authenticate to systems need their own authentication mechanism. Service-to-service authentication for AI systems should use client certificates, API keys with appropriate rotation schedules, or the OAuth 2.0 client credentials flow, not username and password combinations. These credentials should be treated as privileged and managed accordingly.
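A minimal sketch of the client credentials flow for an AI agent; the token endpoint, client ID, and scope are hypothetical, and the secret is read from the environment only for brevity (a secrets manager, as sketched under Control 4, is the better home for it):

```python
import os

import requests

def get_service_token() -> str:
    response = requests.post(
        "https://auth.example.internal/oauth2/token",
        data={"grant_type": "client_credentials", "scope": "crm.read"},
        auth=("ai-crm-agent", os.environ["AI_AGENT_CLIENT_SECRET"]),  # HTTP Basic
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]  # short-lived bearer token
```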
Control 8: Regular Backups
Backup requirements specify that important data is backed up, that backups are tested regularly, and that recovery from backup can be achieved within recovery time objectives. AI introduces two important backup considerations. First, AI models themselves may represent significant organisational assets. A fine-tuned model trained on internal data, a custom AI assistant configured for a specific business function, or an AI agent's memory and conversation history may all be data assets that need to be included in backup scope.
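A practical first step is simply enumerating those assets so they can be assigned to backup scope. The inventory below is purely illustrative; the locations are placeholders, not a real layout:

```python
# Illustrative inventory of AI assets to bring into backup scope.
AI_BACKUP_SCOPE = [
    {"asset": "fine-tuned model weights",     "location": "s3://models/claims-triage/v7/"},
    {"asset": "assistant configuration",      "location": "git@internal:ai/assistant-config.git"},
    {"asset": "agent memory and history",     "location": "postgres://ai-agent-memory/prod"},
    {"asset": "prompt and evaluation suites", "location": "s3://ai-governance/prompts/"},
]
```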
Second, AI-assisted ransomware attacks are increasingly sophisticated in their targeting of backup systems. Ransomware operators have learned that encrypting backups before encrypting primary data maximises their leverage. AI tools are being used to identify backup systems within compromised networks and target them specifically before triggering the final ransomware payload. Organisations whose backup systems are accessible from the primary network, or whose backup credentials are stored in systems that an attacker might compromise, face a meaningfully higher ransomware recovery risk than those with air-gapped or immutable backup systems.
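One way to achieve immutability is object storage with a compliance-mode retention lock. The sketch below uses S3 Object Lock as an example and assumes the bucket was created with Object Lock enabled; the bucket name and retention period are illustrative:

```python
# S3 Object Lock in compliance mode blocks deletion and overwrite for the
# retention period, even for privileged accounts.
import boto3

s3 = boto3.client("s3", region_name="ap-southeast-2")
s3.put_object_lock_configuration(
    Bucket="org-backups-immutable",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 35}},
    },
)
```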
Where the Essential Eight Has Gaps for AI
The Essential Eight is a strong foundation, but it was designed for traditional IT environments and does not address several AI-specific risks. It does not address prompt injection or other AI-specific attack vectors. It does not address the security of AI model supply chains. It does not address data poisoning or model integrity. And it does not address the governance of AI decision-making, which is increasingly subject to regulatory expectations under the Privacy Act and sector-specific frameworks.
The right approach for Australian organisations is to treat the Essential Eight as a necessary but not sufficient baseline for the AI era. Achieving Maturity Level Two or Three across all eight controls, with the AI-specific extensions described above, provides a solid defensive foundation. Building on that foundation with AI-specific controls drawn from frameworks like the OWASP LLM Top 10 and the NIST AI Risk Management Framework provides the more comprehensive coverage that AI deployment requires. These frameworks are complementary, not competing.



