
As artificial intelligence becomes more deeply embedded in business operations, so too does the temptation to treat it as a convenient scapegoat when things go wrong. Some may recall pop culture references such as Shaggy’s song “It Wasn’t Me”, which playfully captures the instinct to deny responsibility even when caught in the act. This tendency to deflect blame is not new, and AI is now reflecting some of these very human behaviours back at us.
While AI systems can and do make mistakes, whether through hallucinated outputs or exploitable vulnerabilities, experts caution against placing the blame solely on the technology. Communications strategist Carol Barreyre warns that using AI as a scapegoat can erode trust among stakeholders. She argues that when leaders shift blame to AI, it highlights a lack of oversight and governance, since accountability does not vanish simply because automation is involved. A commentary in InfoWorld similarly likened this behaviour to the outdated practice of blaming interns, a tactic that ultimately reveals weaknesses in leadership rather than resolving the issue at hand.
At its core, artificial intelligence is a tool that enables automation and accelerates decision-making. The fundamental challenges facing Australian organisations today remain largely unchanged from the pre-AI era. These include failure to implement zero trust principles, inadequate application of least privilege, lapses in information protection, ineffective data lifecycle management, and poor records and data governance. What has changed is the scale and speed at which these shortcomings can now be exploited. Attackers are increasingly using AI-driven tools to enhance the effectiveness and reach of their operations. Without strong governance and comprehensive controls, organisations are more vulnerable to these evolving threats.
Although influencing human behaviour remains a complex task, there is a growing recognition of the importance of accountability and transparency. Leaders in corporate governance, academic publishing, and the technology sector are taking proactive steps to promote integrity and reduce blame-shifting. Ethicists continue to stress the need for human oversight in algorithmic decision-making. Reliance on AI should not become an excuse for poor outcomes. Instead, businesses must focus on improving information hygiene and implementing effective governance and controls to reduce the risk of data breaches and ensure accountability when incidents occur.
The following sections explore these themes in greater depth:
- Case Studies: Real-World Breach Disclosures
- Protecting Your Business: The House Analogy
- The Challenge of Real-World AI Testing: The Lab-to-Reality Gap
- Human and Social Aspects of Blame-Shifting
- Conclusion: Strengthening Accountability and Governance in the AI Era
Case Studies: Real-World Breach Disclosures

In the past two years, several high-profile organisations have publicly attributed data breaches or cybersecurity incidents to artificial intelligence (AI) systems or tools. In many cases, AI was framed as the cause or enabler of the breach, often to shift attention away from internal failures in governance, access control, or oversight. Below are four notable examples, followed by an analysis of emerging patterns in how AI is invoked in breach narratives.
Air Canada – February 2024 (Airline)
- Incident: An AI-powered customer-service chatbot gave a grieving passenger incorrect refund information, leading the customer to incur costs.
- “Wasn’t me” defence: Air Canada argued the “chatbot is a separate legal entity responsible for its own actions,” effectively claiming the AI misled the customer (a defence the tribunal called “remarkable”).
- Source: Tribunal ruling; Ars Technica

McDonald’s – July 2025 (Fast food/retail)
- Incident: A data privacy breach via “McHire”, an AI-driven recruiting chatbot from vendor Paradox.ai, exposed personal data of job applicants (initial reports speculated up to millions of records; the confirmed impact was 5 records).
- “Wasn’t me” defence: McDonald’s public statement pinned the blame on “an unacceptable vulnerability from a third-party provider, Paradox.ai,” stressing that the flaw in the AI hiring tool caused the exposure.
- Source: McDonald’s statement (via Fox News)

Salesloft (Drift chatbot) – August 2025 (Tech, B2B SaaS)
- Incident: Hackers breached Salesloft’s systems and stole OAuth tokens, gaining access to around 700 companies’ Salesforce data by exploiting a flaw in Drift, a customer-facing AI chat agent integrated with Salesforce. The stolen credentials were used to export sensitive data (such as account records, passwords, and API keys) from numerous corporate Salesforce databases.
- “Wasn’t me” defence: Salesloft’s advisory highlighted a “security issue in the Drift application”, effectively pointing to the third-party AI chatbot integration as the weak link. “A threat actor used OAuth credentials to exfiltrate data from our customers’ Salesforce instances,” the company explained, noting that customers not using the AI-driven Drift–Salesforce integration were unaffected. This framing emphasised the AI chatbot tool as the source of the breach.
- Source: Salesloft incident statement; The Hacker News

Fortinet firewalls – February 2026 (Cybersecurity)
- Incident: Over 600 Fortinet FortiGate firewalls worldwide were compromised by a hacking group. The attackers, described as relatively low-skilled, managed to breach systems in 55 countries by automating their campaign with off-the-shelf generative AI tools. AI-generated scripts helped the hackers rapidly scan for vulnerable devices, generate exploit code, and coordinate attacks at a scale and speed that would have been difficult otherwise.
- “Wasn’t me” defence: In analysing the incident, Amazon Web Services’ Chief Security Officer Stephen Schmidt publicly stressed that AI allowed an “unsophisticated” hacker to massively scale their attack, lowering the barrier for cybercriminals. He noted that “AI is making certain types of attacks more accessible to less sophisticated actors who can now leverage AI to enhance their capabilities and operate at greater scale”. By highlighting the role of AI in the attack’s success, the narrative implicitly shifted focus toward the advanced tools employed by criminals rather than solely on firewall or user shortcomings.
- Source: AWS Threat Intelligence report (via CRN)
Emerging Patterns
These cases reveal a growing trend. AI is increasingly cited in breach disclosures, either as the cause of the incident or as a tool that enabled the attacker. In some instances, organisations have used AI as a rhetorical shield, emphasising the novelty or autonomy of the technology to deflect scrutiny from internal lapses in oversight, governance, or vendor management.
This pattern reinforces the need to treat AI systems as part of an organisation’s broader digital infrastructure. Whether AI is developed in-house or integrated through third-party providers, the responsibility for its behaviour and impact remains with the organisation. As these examples show, failing to apply foundational security principles such as least privilege access, zero trust architecture, and strong data governance can leave organisations exposed to both technical and reputational harm. Addressing these challenges requires more than the deployment of technology. It involves building the right organisational processes, defining clear governance structures, and ensuring that people understand and uphold their security responsibilities. Effective controls must be embedded into daily operations, with continuous oversight and accountability. AI systems, like any other business tool, must be governed with clarity and care to ensure they support the organisation’s objectives without introducing unmanaged risk.
Protecting Your Business: The House Analogy

A helpful way to understand cybersecurity in the modern workplace is to think of your organisation as a house. Just as you would not rely on a single lock to protect your home, businesses should not depend on a single layer of defence to secure their systems and data. A basic lock may deter casual intruders, but if valuables are left in plain sight and every household member has unrestricted access, the risk of theft or misuse increases significantly. This scenario reflects the dangers of weak access controls and poor data governance in digital environments.
The principle of least privilege is like ensuring that only certain individuals have keys to specific rooms in the house. Not everyone needs access to the safe, just as not every employee requires access to sensitive financial records or customer data. By limiting access to only those who need it for their role, organisations can reduce the potential impact of both accidental and malicious breaches.
Taking this further, a zero trust model functions like a multi-layered security system. Even if an intruder manages to get through the front door, they will still face additional barriers such as internal locks, motion sensors, and alarm systems before reaching anything of real value. In the digital world, this translates to continuous verification, segmentation of networks, and multi-factor authentication. For example, accessing a critical system might require approval from two separate individuals, much like a joint bank account that needs dual authorisation for any transaction.
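To make the analogy concrete, the sketch below shows, in very simplified form, how least privilege and dual authorisation might look in code. It is a minimal illustration only, not a reference to any specific product: the role names, permissions, and approval rule are hypothetical, and in practice these controls would live in an identity and access management platform rather than in application code.

```python
# Minimal sketch of least privilege and dual authorisation.
# Roles, resources, and the approval rule are hypothetical examples.

ROLE_PERMISSIONS = {
    "support_agent": {"read:customer_contact"},
    "finance_officer": {"read:customer_contact", "read:financial_records"},
    "administrator": {"read:customer_contact", "read:financial_records",
                      "export:financial_records"},
}

# High-impact actions require sign-off from two distinct people,
# much like a joint bank account needing dual authorisation.
DUAL_APPROVAL_ACTIONS = {"export:financial_records"}


def is_authorised(role: str, action: str, approvers: set[str]) -> bool:
    """Allow an action only if the role explicitly grants it (least privilege)
    and, for high-impact actions, two different approvers have signed off."""
    if action not in ROLE_PERMISSIONS.get(role, set()):
        return False  # default deny: anything not explicitly granted is refused
    if action in DUAL_APPROVAL_ACTIONS and len(approvers) < 2:
        return False  # high-impact actions need two separate approvals
    return True


if __name__ == "__main__":
    print(is_authorised("support_agent", "read:financial_records", set()))         # False
    print(is_authorised("administrator", "export:financial_records", {"alice"}))   # False
    print(is_authorised("administrator", "export:financial_records",
                        {"alice", "bob"}))                                          # True
```

The design choice worth noting is the default-deny posture: access is refused unless a role explicitly grants it, which is the inverse of granting broad access and trimming it back later.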
However, securing a business is not simply a matter of deploying technology. It requires a clear understanding of what needs to be protected, how it is accessed, and who is responsible for maintaining those protections. Effective cybersecurity depends on embedding the right controls into daily operations, supported by well-defined governance structures and informed, accountable people. This includes classifying sensitive data, enforcing access policies, and ensuring that security measures are consistently applied and reviewed. Just as a well-maintained home requires regular checks, updates, and responsible occupants, a secure organisation must continuously assess its risk exposure, adapt to new threats, and ensure that its people understand their roles in protecting information. Technology plays a vital role, but it must be part of a broader strategy that includes process maturity, cultural awareness, and strong leadership.
The Challenge of Real-World AI Testing: The Lab-to-Reality Gap

Artificial intelligence systems are often developed and tested in controlled environments, where variables are known, data is clean, and outcomes are predictable. However, once deployed in the real world, these systems are exposed to far more complex, unpredictable, and dynamic conditions. This disconnect between development and deployment environments is commonly referred to as the “lab-to-reality gap”.
In the lab, AI models are typically trained and validated using curated datasets. These datasets are often limited in scope and may not reflect the full diversity of real-world inputs, behaviours, or edge cases. As a result, models that perform well in testing may behave unpredictably when confronted with unfamiliar scenarios, ambiguous language, or adversarial inputs in production.
For example, a chatbot trained on structured customer service queries may struggle to interpret sarcasm, slang, or emotionally charged language when interacting with real users. Similarly, an AI system designed to detect fraud may fail to identify novel attack patterns that were not present in the training data. These limitations are not necessarily due to flaws in the technology itself, but rather in the assumptions made during development about how the system would be used.
The challenge is compounded by the fact that many AI systems are now integrated into critical business processes, such as customer support, recruitment, financial decision-making, and cybersecurity. Failures in these contexts can have significant consequences, including reputational damage, regulatory breaches, and financial loss.
Moreover, the increasing use of generative AI introduces new risks. These systems can produce outputs that appear plausible but are factually incorrect, misleading, or even harmful. Without robust validation mechanisms, organisations may inadvertently act on inaccurate information, leading to poor decisions or unintended outcomes.
Bridging the lab-to-reality gap requires a shift in how AI systems are tested and governed. It is not enough to evaluate performance against benchmark datasets or in sandbox environments. Organisations must adopt practices that simulate real-world conditions, including diverse user behaviours, adversarial scenarios, and operational constraints. This includes:
- Stress-testing AI systems with unpredictable or ambiguous inputs
- Monitoring for drift in model performance over time (see the sketch after this list)
- Implementing human-in-the-loop oversight for high-impact decisions
- Establishing clear escalation paths when AI outputs are uncertain or contested
- Continuously updating models with new data and feedback from real-world use
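As a concrete illustration of the first two practices above, stress-testing with ambiguous inputs and monitoring for drift, the following sketch outlines how a simple evaluation harness might work. It is a hypothetical example: the model interface, test cases, baseline figure, threshold, and escalation step are assumptions for illustration, not a prescribed implementation.

```python
# Hypothetical evaluation harness: stress-testing with ambiguous inputs,
# drift monitoring against a baseline, and escalation to human review.
from dataclasses import dataclass
from typing import Callable


@dataclass
class TestCase:
    prompt: str
    expected_intent: str   # the label a human reviewer would accept


# Ambiguous, sarcastic, or emotionally charged inputs that rarely appear
# in curated training data (illustrative examples only).
STRESS_SUITE = [
    TestCase("Great, my flight got cancelled AGAIN. Thanks a lot.", "complaint"),
    TestCase("can u sort the refund thingy from last week??", "refund_request"),
    TestCase("Is this even a real person I'm talking to?", "handoff_request"),
]

BASELINE_ACCURACY = 0.90   # accuracy recorded at deployment time (assumed)
DRIFT_THRESHOLD = 0.10     # acceptable drop before escalating (assumed)


def evaluate(classify: Callable[[str], str]) -> float:
    """Run the stress suite through the model and return its accuracy."""
    correct = sum(1 for case in STRESS_SUITE
                  if classify(case.prompt) == case.expected_intent)
    return correct / len(STRESS_SUITE)


def check_for_drift(classify: Callable[[str], str]) -> None:
    """Compare current accuracy against the baseline and escalate on drift."""
    accuracy = evaluate(classify)
    if BASELINE_ACCURACY - accuracy > DRIFT_THRESHOLD:
        # In practice this would raise a ticket or alert the owning team;
        # here it simply flags the result for human review.
        print(f"Drift detected: accuracy {accuracy:.0%} vs baseline "
              f"{BASELINE_ACCURACY:.0%} - escalate to human review.")
    else:
        print(f"Within tolerance: accuracy {accuracy:.0%}.")


if __name__ == "__main__":
    # Stand-in model: a trivial keyword classifier used only for the demo.
    def toy_model(text: str) -> str:
        return "refund_request" if "refund" in text.lower() else "complaint"

    check_for_drift(toy_model)
```

Running the demo flags the toy model because it misreads the sarcastic hand-off request, which is exactly the kind of gap a curated benchmark would miss; a production version of this harness would run continuously against logged traffic rather than a fixed suite.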
Equally important is the recognition that AI testing is not a one-off event. It is an ongoing process that must be embedded into the lifecycle of AI systems, from design and development through to deployment and maintenance. This requires collaboration across technical, operational, and governance teams to ensure that AI systems remain reliable, secure, and aligned with organisational values and regulatory obligations.
It is also important to acknowledge the financial and operational barriers to effective testing. Many organisations face challenges in trialling AI workloads at scale due to the cost and complexity of replicating production-like environments. As explored in more detail in The Hidden Cost of “Just Turning It On”: Why AI Workloads Are Becoming Harder to Trial Before You Buy, the shift towards consumption-based pricing models and the increasing sophistication of AI systems have made it more difficult for organisations to conduct meaningful pre-deployment evaluations.

Ultimately, closing the lab-to-reality gap is not just a technical challenge. It is a matter of trust. Organisations must demonstrate that they understand the limitations of AI, are transparent about its capabilities, and are committed to responsible deployment. Only then can they realise the benefits of AI while managing the risks it introduces.
Human and Social Aspects of Blame-Shifting

As artificial intelligence becomes more embedded in business operations, it is increasingly being drawn into the social dynamics of accountability. When things go wrong, organisations often face pressure to explain what happened, who was responsible, and how similar incidents will be prevented in future. In this context, AI is sometimes used not just as a tool, but as a convenient scapegoat.
Blame-shifting is not a new phenomenon. In the past, organisations have deflected responsibility by pointing to junior staff, external vendors, or ambiguous processes. Today, AI systems are beginning to occupy a similar role. When an AI model produces an incorrect output, makes a poor decision, or enables a breach, it can be tempting to frame the issue as a failure of the technology itself, rather than a failure of oversight, governance, or design.
This tendency is reinforced by the perception of AI as autonomous or opaque. Phrases like “the algorithm did it” or “the chatbot made a mistake” suggest that the system acted on its own, when in reality it was designed, trained, and deployed by people. In some cases, organisations have gone so far as to describe AI systems as separate entities, distancing themselves from the consequences of their own implementations.
The social implications of this are significant. When organisations deflect blame onto AI, they risk undermining trust with customers, regulators, and employees. It signals a lack of accountability and raises questions about whether appropriate controls, testing, and oversight were in place. It also obscures the human decisions that shape AI behaviour, from data selection and model training to deployment and monitoring.
Moreover, this pattern can discourage meaningful learning and improvement. If AI is treated as the problem, rather than a reflection of organisational choices, there is less incentive to examine the underlying causes of failure. These include gaps in data governance, unclear roles and responsibilities, and inadequate risk management practices.
To address this, organisations must foster a culture of accountability that recognises AI as part of a broader system of people, processes, and technology. This means:
- Clearly defining ownership and responsibility for AI systems across their lifecycle
- Ensuring that decisions made by AI are traceable and explainable
- Embedding human oversight into high-impact or high-risk use cases
- Being transparent about the limitations of AI and the safeguards in place
- Responding to incidents with honesty and a commitment to improvement
Ultimately, the way organisations talk about AI failures reveals much about their internal culture. Those who take responsibility, learn from mistakes, and invest in better governance are more likely to build trust and resilience. Those who shift blame risk repeating the same errors and eroding confidence in their use of technology.
Conclusion: Strengthening Accountability and Governance in the AI Era

As artificial intelligence becomes more deeply embedded in business operations, the need for robust governance, clear accountability, and thoughtful implementation has never been more urgent. While AI offers significant opportunities for efficiency, insight, and innovation, it also introduces new risks that cannot be addressed through technology alone.
The case studies and examples explored in this report demonstrate that AI-related incidents are rarely the result of the technology acting in isolation. More often, they reflect broader organisational challenges—such as unclear responsibilities, inadequate testing, poor data governance, or a lack of oversight. In some cases, AI has been used as a convenient explanation for failures that stemmed from human decisions or systemic weaknesses.
To move forward responsibly, organisations must recognise that AI is not a substitute for sound judgement, nor is it a shield against accountability. Effective use of AI requires a foundation of well-defined processes, clear roles, and a culture that prioritises transparency and continuous improvement. This includes:
- Embedding AI systems within existing governance frameworks
- Ensuring that access to sensitive data is limited and monitored
- Testing AI models under realistic, dynamic conditions
- Maintaining human oversight for high-impact decisions
- Being transparent about the capabilities and limitations of AI tools
Ultimately, trust in AI is built not by claiming perfection, but by demonstrating responsibility. Organisations that invest in the right controls, foster a culture of accountability, and remain vigilant in the face of evolving risks will be better positioned to realise the benefits of AI while protecting their people, data, and reputation.
References and Acknowledgements
This report was informed by a range of publicly available sources, including:
- Schmarr, A. (2026). The Hidden Cost of “Just Turning It On”: Why AI Workloads Are Becoming Harder to Trial Before You Buy. Available at: https://schmarr.com/2026/02/25/the-hidden-cost-of-just-turning-it-on-why-ai-workloads-are-becoming-harder-to-trial-before-you-buy/
- Tribunal ruling on Air Canada chatbot case, as reported by Ars Technica. (2024). Available at: https://arstechnica.com/tech-policy/2024/02/air-canada-says-chatbot-is-responsible-for-lying-to-customer-about-refund/
- McDonald’s statement on McHire data breach, via Fox News. (2025). Available at: https://www.foxnews.com/tech/mcdonalds-ai-chatbot-data-breach-privacy-concerns
- Salesloft incident involving Drift chatbot, as reported by The Hacker News. (2025). Available at: https://thehackernews.com/2025/08/salesloft-drift-chatbot-breach.html
- AWS Threat Intelligence commentary on Fortinet firewall breaches, via CRN. (2026). Available at: https://www.crn.com/news/aws-cso-warns-of-ai-assisted-attacks-after-fortinet-breach
- Barreyre, C. (2023). AI: The Newest Scapegoat? LinkedIn. Available at: https://www.linkedin.com/pulse/aithe-newest-scapegoat-carol-barreyre-37u6c
- InfoWorld. (2023). Stop Blaming the Intern: The Rise of AI as the New Scapegoat. Available at: https://www.infoworld.com/article/3701234/stop-blaming-the-intern-the-rise-of-ai-as-the-new-scapegoat.html
- Hood, C. (1998). The Art of the State: Culture, Rhetoric, and Public Management. Oxford University Press.
- Bovens, M. (2007). Analysing and Assessing Accountability: A Conceptual Framework. European Law Journal, 13(4), 447–468.
- Elish, M. C. (2019). Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction. Engaging Science, Technology, and Society, 5, 40–60.
- Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501–507.
This blog post was developed with the assistance of artificial intelligence to support research, drafting, and editorial refinement. All facts and references have been reviewed for accuracy and relevance.
