“It wasn’t me, it was AI”, the new scapegoat for breaches

As artificial intelligence becomes more deeply embedded in business operations, so too does the temptation to treat it as a convenient scapegoat when things go wrong. Some may recall pop culture references such as Shaggy’s song “It Wasn’t Me”, which playfully captures the instinct to deny responsibility even when caught in the act. This tendency to deflect blame is not new, and AI is now reflecting some of these very human behaviours back at us.

While AI systems can and do make mistakes, whether through hallucinated outputs or exploitable vulnerabilities, experts caution against placing the blame solely on the technology. Communications strategist Carol Barreyre warns that using AI as a scapegoat can erode trust among stakeholders. She argues that when leaders shift blame to AI, it highlights a lack of oversight and governance, since accountability does not vanish simply because automation is involved. A commentary in InfoWorld similarly likened this behaviour to the outdated practice of blaming interns, a tactic that ultimately reveals weaknesses in leadership rather than resolving the issue at hand.

At its core, artificial intelligence is a tool that enables automation and accelerates decision-making. The fundamental challenges facing Australian organisations today remain largely unchanged from the pre-AI era. These include failure to implement zero trust principles, inadequate application of least privilege, lapses in information protection, ineffective data lifecycle management, and poor records and data governance. What has changed is the scale and speed at which these shortcomings can now be exploited. Attackers are increasingly using AI-driven tools to enhance the effectiveness and reach of their operations. Without strong governance and comprehensive controls, organisations are more vulnerable to these evolving threats.

Although influencing human behaviour remains a complex task, there is a growing recognition of the importance of accountability and transparency. Leaders in corporate governance, academic publishing, and the technology sector are taking proactive steps to promote integrity and reduce blame-shifting. Ethicists continue to stress the need for human oversight in algorithmic decision-making. Relying on AI should not be used as an excuse for poor outcomes. Instead, businesses must focus on improving information hygiene and implementing effective governance and controls to reduce the risk of data breaches and ensure accountability when incidents occur.

The following sections explore these themes in greater depth:

  • Case Studies: Real-World Breach Disclosures
  • Protecting Your Business: The House Analogy
  • The Challenge of Real-World AI Testing: The Lab-to-Reality Gap
  • Human and Social Aspects of Blame-Shifting
  • Conclusion: Strengthening Accountability and Governance in the AI Era

Case Studies: Real-World Breach Disclosures

In the past two years, several high-profile organisations have publicly attributed data breaches or cybersecurity incidents to artificial intelligence (AI) systems or tools. In many cases, AI was framed as the cause or enabler of the breach, often to shift attention away from internal failures in governance, access control, or oversight. Below are four notable examples, followed by an analysis of emerging patterns in how AI is invoked in breach narratives.

Air Canada – Feb 2024
Airline Sector

Incident: AI-powered customer-service chatbot gave a grieving passenger incorrect refund info, leading the customer to incur costs.  

“Wasn’t me” defence: Air Canada argued that the chatbot “is a separate legal entity responsible for its own actions,” effectively claiming the AI misled the customer (a defence the tribunal called “remarkable”).  

Source: Tribunal ruling; Ars Technica

McDonald’s – Jul 2025
Fast Food/Retail

Incident: Data privacy breach via “McHire” – an AI-driven recruiting chatbot (by vendor Paradox.ai) – exposed personal data of job applicants (initial reports speculated up to millions; confirmed impact was 5 records).  

“Wasn’t me” defence: McDonald’s public statement pinned the blame on “an unacceptable vulnerability from a third-party provider, Paradox.ai,” stressing that the flaw in the AI hiring tool caused the exposure.  

Source: McDonald’s statement (via Fox News)

Salesloft (Drift Chatbot) – Aug 2025
Tech (B2B SaaS)

Incident: Hackers breached Salesloft’s systems and stole OAuth tokens to access ~700 companies’ Salesforce data by exploiting a flaw in Drift, a customer-facing AI chat agent integrated with Salesforce. Stolen credentials were used to export sensitive data (e.g. account records, passwords, API keys) from numerous corporate Salesforce databases.  

“Wasn’t me” defence: Salesloft’s advisory highlighted a “security issue in the Drift application” – effectively pointing to the third-party AI chatbot integration as the weak link. “A threat actor used OAuth credentials to exfiltrate data from our customers’ Salesforce instances,” the company explained, noting that customers not using the AI-driven Drift–Salesforce integration were unaffected. This framing emphasized the AI chatbot tool as the source of the breach.  

Source: Salesloft incident statement; The Hacker News

Fortinet Firewalls – Feb 2026
Cybersecurity

Incident: Over 600 Fortinet FortiGate firewalls worldwide were compromised by a hacking group. The attackers – described as relatively low-skilled – managed to breach systems in 55 countries by automating their campaign with off-the-shelf generative AI tools. AI-generated scripts helped the hackers rapidly scan for vulnerable devices, generate exploit code, and coordinate attacks at a scale and speed that would have been difficult otherwise.  

“Wasn’t me” defence: In analysing the incident, Amazon Web Services’ Chief Security Officer Stephen Schmidt publicly stressed that AI allowed an “unsophisticated” hacker to massively scale their attack, lowering the barrier for cybercriminals. He noted that “AI is making certain types of attacks more accessible to less sophisticated actors who can now leverage AI to enhance their capabilities and operate at greater scale”. By highlighting the role of AI in the attack’s success, the narrative implicitly shifted focus toward the advanced tools employed by criminals rather than solely on firewall or user shortcomings.  

Source: AWS Threat Intelligence report (via CRN)

Emerging Patterns

These cases reveal a growing trend. AI is increasingly cited in breach disclosures, either as the cause of the incident or as a tool that enabled the attacker. In some instances, organisations have used AI as a rhetorical shield, emphasising the novelty or autonomy of the technology to deflect scrutiny from internal lapses in oversight, governance, or vendor management.

This pattern reinforces the need to treat AI systems as part of an organisation’s broader digital infrastructure. Whether AI is developed in-house or integrated through third-party providers, the responsibility for its behaviour and impact remains with the organisation. As these examples show, failing to apply foundational security principles such as least privilege access, zero trust architecture, and strong data governance can leave organisations exposed to both technical and reputational harm. Addressing these challenges requires more than the deployment of technology. It involves building the right organisational processes, defining clear governance structures, and ensuring that people understand and uphold their security responsibilities. Effective controls must be embedded into daily operations, with continuous oversight and accountability. AI systems, like any other business tool, must be governed with clarity and care to ensure they support the organisation’s objectives without introducing unmanaged risk.

Protecting Your Business: The House Analogy

A helpful way to understand cybersecurity in the modern workplace is to think of your organisation as a house. Just as you would not rely on a single lock to protect your home, businesses should not depend on a single layer of defence to secure their systems and data. A basic lock may deter casual intruders, but if valuables are left in plain sight and every household member has unrestricted access, the risk of theft or misuse increases significantly. This scenario reflects the dangers of weak access controls and poor data governance in digital environments.

The principle of least privilege is like ensuring that only certain individuals have keys to specific rooms in the house. Not everyone needs access to the safe, just as not every employee requires access to sensitive financial records or customer data. By limiting access to only those who need it for their role, organisations can reduce the potential impact of both accidental and malicious breaches.

Taking this further, a zero trust model functions like a multi-layered security system. Even if an intruder manages to get through the front door, they will still face additional barriers such as internal locks, motion sensors, and alarm systems before reaching anything of real value. In the digital world, this translates to continuous verification, segmentation of networks, and multi-factor authentication. For example, accessing a critical system might require approval from two separate individuals, much like a joint bank account that needs dual authorisation for any transaction.
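The dual-authorisation idea above can be sketched in a few lines. This is an illustrative sketch only; the function and the names used are invented for the analogy, not drawn from any particular product:

```python
def authorise_action(approvals, required_approvers=2):
    """Permit a high-impact action only when at least `required_approvers`
    distinct people have approved it (the joint-account idea above).
    Duplicate approvals from the same person do not count twice."""
    return len(set(approvals)) >= required_approvers

print(authorise_action(["alice", "bob"]))    # True
print(authorise_action(["alice", "alice"]))  # False: one person twice
```

The key design point is deduplication: counting distinct approvers, not raw approvals, is what prevents a single compromised account from satisfying the control on its own.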

However, securing a business is not simply a matter of deploying technology. It requires a clear understanding of what needs to be protected, how it is accessed, and who is responsible for maintaining those protections. Effective cybersecurity depends on embedding the right controls into daily operations, supported by well-defined governance structures and informed, accountable people. This includes classifying sensitive data, enforcing access policies, and ensuring that security measures are consistently applied and reviewed. Just as a well-maintained home requires regular checks, updates, and responsible occupants, a secure organisation must continuously assess its risk exposure, adapt to new threats, and ensure that its people understand their roles in protecting information. Technology plays a vital role, but it must be part of a broader strategy that includes process maturity, cultural awareness, and strong leadership.

The Challenge of Real-World AI Testing: The Lab-to-Reality Gap

Artificial intelligence systems are often developed and tested in controlled environments, where variables are known, data is clean, and outcomes are predictable. However, once deployed in the real world, these systems are exposed to far more complex, unpredictable, and dynamic conditions. This disconnect between development and deployment environments is commonly referred to as the “lab-to-reality gap”.

In the lab, AI models are typically trained and validated using curated datasets. These datasets are often limited in scope and may not reflect the full diversity of real-world inputs, behaviours, or edge cases. As a result, models that perform well in testing may behave unpredictably when confronted with unfamiliar scenarios, ambiguous language, or adversarial inputs in production.

For example, a chatbot trained on structured customer service queries may struggle to interpret sarcasm, slang, or emotionally charged language when interacting with real users. Similarly, an AI system designed to detect fraud may fail to identify novel attack patterns that were not present in the training data. These limitations are not necessarily due to flaws in the technology itself, but rather in the assumptions made during development about how the system would be used.

The challenge is compounded by the fact that many AI systems are now integrated into critical business processes, such as customer support, recruitment, financial decision-making, and cybersecurity. Failures in these contexts can have significant consequences, including reputational damage, regulatory breaches, and financial loss.

Moreover, the increasing use of generative AI introduces new risks. These systems can produce outputs that appear plausible but are factually incorrect, misleading, or even harmful. Without robust validation mechanisms, organisations may inadvertently act on inaccurate information, leading to poor decisions or unintended outcomes.

Bridging the lab-to-reality gap requires a shift in how AI systems are tested and governed. It is not enough to evaluate performance against benchmark datasets or in sandbox environments. Organisations must adopt practices that simulate real-world conditions, including diverse user behaviours, adversarial scenarios, and operational constraints. This includes:

  • Stress-testing AI systems with unpredictable or ambiguous inputs
  • Monitoring for drift in model performance over time
  • Implementing human-in-the-loop oversight for high-impact decisions
  • Establishing clear escalation paths when AI outputs are uncertain or contested
  • Continuously updating models with new data and feedback from real-world use
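As a minimal sketch of the drift-monitoring practice above, the following Python snippet flags labels whose share of a model's predictions has shifted between a baseline window and a live window. The label names and the 0.2 threshold are hypothetical, chosen only for illustration:

```python
from collections import Counter

def prediction_drift(baseline, live, threshold=0.2):
    """Return labels whose predicted share shifted by more than
    `threshold` between a baseline window and a live window."""
    def shares(labels):
        total = len(labels)
        return {k: v / total for k, v in Counter(labels).items()}

    base, now = shares(baseline), shares(live)
    return {
        label: round(abs(base.get(label, 0.0) - now.get(label, 0.0)), 3)
        for label in sorted(set(base) | set(now))
        if abs(base.get(label, 0.0) - now.get(label, 0.0)) > threshold
    }

# Hypothetical fraud-detection model: the live window shows a surge in
# "fraud" predictions relative to the baseline, a signal worth
# investigating rather than silently trusting.
baseline = ["benign"] * 90 + ["fraud"] * 10
live = ["benign"] * 55 + ["fraud"] * 45
print(prediction_drift(baseline, live))  # {'benign': 0.35, 'fraud': 0.35}
```

In production this comparison would run continuously against logged predictions, with a non-empty result feeding the escalation paths described above.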

Equally important is the recognition that AI testing is not a one-off event. It is an ongoing process that must be embedded into the lifecycle of AI systems, from design and development through to deployment and maintenance. This requires collaboration across technical, operational, and governance teams to ensure that AI systems remain reliable, secure, and aligned with organisational values and regulatory obligations.

It is also important to acknowledge the financial and operational barriers to effective testing. Many organisations face challenges in trialling AI workloads at scale due to the cost and complexity of replicating production-like environments. As explored in more detail in The Hidden Cost of “Just Turning It On”: Why AI Workloads Are Becoming Harder to Trial Before You Buy, the shift towards consumption-based pricing models and the increasing sophistication of AI systems have made it more difficult for organisations to conduct meaningful pre-deployment evaluations.

Ultimately, closing the lab-to-reality gap is not just a technical challenge. It is a matter of trust. Organisations must demonstrate that they understand the limitations of AI, are transparent about its capabilities, and are committed to responsible deployment. Only then can they realise the benefits of AI while managing the risks it introduces.

Human and Social Aspects of Blame-Shifting

As artificial intelligence becomes more embedded in business operations, it is increasingly being drawn into the social dynamics of accountability. When things go wrong, organisations often face pressure to explain what happened, who was responsible, and how similar incidents will be prevented in future. In this context, AI is sometimes used not just as a tool, but as a convenient scapegoat.

Blame-shifting is not a new phenomenon. In the past, organisations have deflected responsibility by pointing to junior staff, external vendors, or ambiguous processes. Today, AI systems are beginning to occupy a similar role. When an AI model produces an incorrect output, makes a poor decision, or enables a breach, it can be tempting to frame the issue as a failure of the technology itself, rather than a failure of oversight, governance, or design.

This tendency is reinforced by the perception of AI as autonomous or opaque. Phrases like “the algorithm did it” or “the chatbot made a mistake” suggest that the system acted independently when it was designed, trained, and deployed by people. In some cases, organisations have gone so far as to describe AI systems as separate entities, distancing themselves from the consequences of their own implementations.

The social implications of this are significant. When organisations deflect blame onto AI, they risk undermining trust with customers, regulators, and employees. It signals a lack of accountability and raises questions about whether appropriate controls, testing, and oversight were in place. It also obscures the human decisions that shape AI behaviour, from data selection and model training to deployment and monitoring.

Moreover, this pattern can discourage meaningful learning and improvement. If AI is treated as the problem, rather than a reflection of organisational choices, there is less incentive to examine the underlying causes of failure. This includes gaps in data governance, unclear roles and responsibilities, or inadequate risk management practices.

To address this, organisations must foster a culture of accountability that recognises AI as part of a broader system of people, processes, and technology. This means:

  • Clearly defining ownership and responsibility for AI systems across their lifecycle
  • Ensuring that decisions made by AI are traceable and explainable
  • Embedding human oversight into high-impact or high-risk use cases
  • Being transparent about the limitations of AI and the safeguards in place
  • Responding to incidents with honesty and a commitment to improvement

Ultimately, the way organisations talk about AI failures reveals much about their internal culture. Those who take responsibility, learn from mistakes, and invest in better governance are more likely to build trust and resilience. Those who shift blame risk repeating the same errors and eroding confidence in their use of technology.

Conclusion: Strengthening Accountability and Governance in the AI Era

As artificial intelligence becomes more deeply embedded in business operations, the need for robust governance, clear accountability, and thoughtful implementation has never been more urgent. While AI offers significant opportunities for efficiency, insight, and innovation, it also introduces new risks that cannot be addressed through technology alone.

The case studies and examples explored in this report demonstrate that AI-related incidents are rarely the result of the technology acting in isolation. More often, they reflect broader organisational challenges—such as unclear responsibilities, inadequate testing, poor data governance, or a lack of oversight. In some cases, AI has been used as a convenient explanation for failures that stemmed from human decisions or systemic weaknesses.

To move forward responsibly, organisations must recognise that AI is not a substitute for sound judgement, nor is it a shield against accountability. Effective use of AI requires a foundation of well-defined processes, clear roles, and a culture that prioritises transparency and continuous improvement. This includes:

  • Embedding AI systems within existing governance frameworks
  • Ensuring that access to sensitive data is limited and monitored
  • Testing AI models under realistic, dynamic conditions
  • Maintaining human oversight for high-impact decisions
  • Being transparent about the capabilities and limitations of AI tools

Ultimately, trust in AI is built not by claiming perfection, but by demonstrating responsibility. Organisations that invest in the right controls, foster a culture of accountability, and remain vigilant in the face of evolving risks will be better positioned to realise the benefits of AI while protecting their people, data, and reputation.

References and Acknowledgements

This report was informed by a range of publicly available sources, including the tribunal ruling in the Air Canada matter, public statements from McDonald’s and Salesloft, and reporting from Ars Technica, Fox News, The Hacker News, and CRN.

This blog post was developed with the assistance of artificial intelligence to support research, drafting, and editorial refinement. All facts and references have been reviewed for accuracy and relevance.

The Hidden Cost of “Just Turning It On”: Why AI Workloads Are Becoming Harder to Trial Before You Buy

Enterprise AI is rapidly moving toward consumption‑based pricing models. On paper, this makes sense: customers pay for compute, scale with usage, and avoid rigid per‑user licences.

In practice, however, this shift is introducing a growing and often overlooked problem:

It’s becoming harder for customers and experts to safely trial and evaluate AI workloads before committing financially.

Microsoft Security Copilot is a notable real-world example of this trend, though it is not the sole instance.

Executive Summary

Across the industry, many enterprise AI workloads are adopting compute‑metered, consumption‑based pricing. While this approach aligns costs with usage, it increasingly shifts financial risk to the evaluation phase, before value is proven. Microsoft Security Copilot is a visible example of this broader challenge, not an isolated case.

When AI features are included with premium licences such as Microsoft 365 E5, users can try out AI tools without paying extra. If these features aren’t bundled, however, testing them usually means provisioning dedicated compute resources that incur charges continuously, whether they’re actually used or not.

This creates significant friction for customers and for security professionals, architects, and consultants who need to test AI tools using real telemetry, real alerts, and real operational noise. Experts want to triage incidents, investigate edge cases, and stress AI systems using data they generate themselves. Guided walkthroughs, documentation, or tenants preloaded with synthetic “happy path” data are useful for orientation, but they are insufficient to expose limitations or operational shortcomings.

As a result, many AI workloads are effectively evaluated only after financial commitment, or at the customer’s expense, limiting independent validation and informed decision‑making. This is not a critique of AI value, but a growing misalignment between how AI is priced and how it must be learned, tested, and trusted.

The Pricing Model Makes Sense — Until You Try to Learn

From a vendor perspective, consumption‑based AI pricing is rational:

  • AI compute is expensive
  • Usage varies dramatically
  • Static per‑user pricing doesn’t reflect real load

For organisations already invested in premium bundles, this works reasonably well.

Security Copilot as an Example (E5 Tenants)

For a tenant with 1,000 Microsoft 365 E5 licences, Microsoft includes:

  • 400 Security Compute Units (SCUs) per month

In low‑usage scenarios:

  • A limited number of active users
  • Occasional prompts or investigations
  • Light incident summaries

👉 The additional monthly cost can realistically be $0, if usage stays within that included capacity.

This is a good outcome. It encourages experimentation inside production environments and reduces adoption friction.

Where the Model Breaks: Evaluation Outside Premium Bundles

The challenge emerges the moment evaluation happens outside a premium licence bundle — whether for:

  • Demo tenants
  • Lab environments
  • Partner testing
  • Consultant sandboxes
  • Pre‑sales or architecture validation

In these scenarios, Security Copilot (like many AI workloads) requires:

  • Provisioned compute capacity
  • Billed continuously, per hour
  • Regardless of whether the service is used

For Security Copilot specifically:

  • Minimum: 1 SCU
  • Cost: ~$4 per SCU per hour
  • Billing: 24×7 while provisioned

This is not “pay per prompt”. It is pay for availability.

When “$4” Quietly Becomes Thousands

One of the most common misunderstandings with AI pricing is the unit of time.

“It’s only $4.”

Yes — per hour.

That means:

  • 1 SCU × 24 hours × ~30.4 days (≈ 730 hours per month)
  • ≈ $2,920 per month

For a single, idle unit.

Multiply that across workloads or forget to deprovision, and costs scale very quickly.
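The arithmetic above can be sketched in a few lines. The $4 hourly rate and the hour counts are the approximate figures quoted in this article, not official pricing:

```python
def scu_monthly_cost(scus=1, rate_per_hour=4.0, hours=730):
    """Estimate the cost of keeping Security Compute Units provisioned.
    Billing is for availability, so idle hours count; 730 hours
    approximates an average month (365 * 24 / 12)."""
    return scus * rate_per_hour * hours

print(scu_monthly_cost())           # 2920.0 (one idle SCU for a month)
print(scu_monthly_cost(hours=212))  # 848.0  (roughly a week of idle provisioning)
```

Because the meter runs on availability rather than use, the only variables that matter are how many units are provisioned and for how long, which is why forgetting to deprovision is so costly.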

A Real Evaluation Scenario (And an Expensive Lesson)

In a demo tenant:

  • Security Copilot was enabled
  • 1 SCU provisioned
  • No prompts executed
  • No active use

It was enabled for about 7 days.

The resulting charge:

  • $850.04

This wasn’t a billing error. This wasn’t misuse. This was simply:

  • ~212 hours × $4/hour

There was no end‑of‑month credit. No “unused capacity” adjustment.

Once compute is provisioned, the meter runs.

Why This Is a Bigger Problem Than One Product

Security Copilot is just one example of a wider AI evaluation problem.

Experts Need Real Data, Not Happy Paths

Security professionals, architects, and consultants don’t evaluate tools by reading guides alone.

They need to:

  • Generate real alerts
  • Ingest noisy, imperfect telemetry
  • Triage incidents under pressure
  • Observe how AI behaves when data is incomplete or contradictory

That kind of evaluation:

  • Requires live data
  • Requires control over the environment
  • Requires time to experiment and break things

Preloaded demo tenants and guided scenarios are useful introductions, but they do not expose operational limitations.

Evaluation Now Happens After Commitment

Because of cost exposure:

  • Customers hesitate to “just try it”
  • Experts can’t easily test independently
  • Validation often happens after purchase

In many cases:

  • Evaluation is pushed into production
  • Or absorbed as part of a customer engagement

That’s not how trust in AI systems is built.

This Isn’t About Cost — It’s About Friction

The issue isn’t that AI workloads are “too expensive”.

In many real‑world scenarios:

  • Costs are low
  • Or already covered by existing licences

The issue is that learning has a price tag.

When:

  • Experimentation incurs immediate cost
  • Idle time is billable
  • There’s no safe sandbox

People stop experimenting. And AI adoption slows.

What Would Help (Across All AI Workloads)

A few changes would dramatically improve evaluation without undermining commercial models:

  • Time‑boxed compute trials (e.g. limited SCU hours)
  • Capped evaluation allowances
  • Pause/hibernate functionality for AI capacity
  • Expert or partner sandbox environments
  • Clearer cost warnings at enablement

These reduce the cost of learning, not the value of running.

Final Thought

AI systems demand trust. Trust demands hands‑on experience. Hands‑on experience demands safe experimentation.

Right now, for many AI workloads, it’s easier to justify buying than it is to safely try.

Security Copilot illustrates the issue well — but the challenge is broader than any single product.

If enterprise AI is to scale responsibly, vendors need to lower the barrier to learning, not just optimise the cost of consumption.

Okta AD Integration with Azure AD Domain Services

1. Introduction

This is an experimental article, using an existing Azure Active Directory (Azure AD) and Azure AD Domain Services deployment and integrating it with an Okta solution.

2. Preparation tasks

3. Assumptions

The following assumptions are made in this article:

  • A Windows Server 2012 R2 member server joined to the Azure AD Domain Services managed domain
  • The member server has internet access
  • An Okta free trial without any modifications made

4. Installation

4.1 Create Service Account in Azure AD

  1. Log into Azure AD, go to Users and click “Add user”.
  2. In “Type of user”, choose “New user in your organization”.
  3. In “User name”, use your company’s service account naming convention, e.g. okta.
  4. In “First name”, “Last name” and “Display name”, enter Okta.
  5. In “Role”, choose “User”.
  6. Create a temporary password, and document the password for the next step.
  7. Go to http://portal.office.com, log in as the new user and set the password.
  8. Password expiry is enabled on the account by default and should be disabled using Azure AD PowerShell.

4.2 Okta AD Agent Install

Please follow these steps to integrate Azure AD Domain Services with Okta:

  1. Log onto the domain-joined server that will run the Okta agent.
  2. Go to your Okta administrator URL, e.g. https://<Company>-admin.okta.com/admin/dashboard.
  3. On the top navigation bar, go to Security, then Authentication.
  4. Click “Configure Active Directory”.
  5. Click “Set Up Active Directory”.
  6. Click “Download Agent”.
  7. Once the agent has finished downloading, run the installation.
  8. In the Welcome screen, click “Next”.
  9. Choose the path for the installation and click “Install”.
  10. In the Domain field, enter the company domain, e.g. schmarr.com, and click “Next”.
  11. Choose “Use an alternate account that I specify”.
  12. Enter the username and password and click “Next”.
  13. At Okta AD Agent Proxy Configuration, click “Next”.
  14. At Register Okta AD Agent, choose Production and in “Enter Subdomain” add the company name.
  15. Click “Next”.
  16. Sign in with your Okta admin account.
  17. Click “Allow Access”.
  18. Once the agent is installed, click “Next”.
  19. Choose the settings appropriate to your environment in “Basic Settings”.
  20. Click “Next” through the remaining screens.
  21. In “Select the attributes to build your Okta User profile”, click “Next”.
  22. Done.

Conclusion

With this integration in place, SaaS applications can be added in both Azure AD and Okta, with Azure AD serving as the identity store for both.

Integrate SharePoint with Azure AD

1. Introduction

This article will show the quick configuration tasks that are required to make Azure AD a trusted identity provider for a SharePoint 2013 installation.

2. Assumptions

The following assumptions are made during this article:

3. Preparation

Before starting with the article the following needs to be in place:

  • Azure AD PowerShell tools installed, look here for more details.

4. Configuration

The configuration will be broken into the following sections:

  • Azure AD configuration
  • SharePoint configuration
  • Assigning Users

4.1 Azure AD Configuration

Follow these tasks to document the Azure AD WS-Federation metadata URL for later use:

  1. In the Azure Management Portal (Classic), Click Active Directory.
  2. Click on the Azure AD that will be integrated with SharePoint 2013
  3. Click Applications
  4. On the bottom bar, Click View Endpoints
  5. Document the Federation metadata document url for later use

Follow these tasks to create / configure the namespace in Azure AD :

  1. In the Azure Management Portal (Classic), Click Active Directory.
  2. Click Access Control Namespaces, create a new namespace and call it “Company”
  3. Click Manage on the bottom bar. This should open https://company.accesscontrol.windows.net/v2/mgmt/web.
  4. Click Identity Providers, Click Add
  5. Click WS-Federation identity provider, click Next.
  6. In Display name, enter “Company”
  7. In Login link text, enter “Company”
  8. In WS-Federation metadata, choose URL and enter the URL that was documented in the tasks above. Example: https://accounts.accesscontrol.windows.net/company.onmicrosoft.com/FederationMetadata/2007-06/FederationMetadata.xml
  9. Click Save
  10. Click Relying party applications, then click Add
  11. Enter the following in each field:
    1. Name: “Company SharePoint”
    2. Realm: “urn:sharepoint:company”
    3. Token format: SAML 1.1
    4. Token lifetime (secs) default is 600: Recommended value is 2 hours
  12. Click Save
  13. Click Rule Groups, and then Add
  14. Click Generate
  15. Click Add
  16. Fill in all the fields as illustrated below:
  17. The claim rules in Azure Access Control
  18. Click Save
  19. Delete the existing claim rule named upn
Extract the X.509 certificate from Azure Access Control for later use:
  1. In the Access Control Service pane, under Development, click Application integration.
  2. In Endpoint Reference, locate the Federation.xml that is associated with your Azure tenant, and then copy the location in the address bar of a browser.
  3. In the Federation.xml file, locate the RoleDescriptor section, and copy the information from the <X509Certificate> element, as illustrated in the following figure.
  4. X509 Certificate element of Federation.xml file
  5. From the root of drive C:\, create a folder named Certs
  6. Using Notepad, save the X509Certificate information to the folder C:\Certs and name the file ACS.cer
  7. Run the following PowerShell commands:
    1. Connect-MsolService
    2. Import-Module MSOnlineExtended -Force
    3. $replyUrl = New-MsolServicePrincipalAddresses -Address "https://company.accesscontrol.windows.net"
    4. New-MsolServicePrincipal -ServicePrincipalNames @("https://company.accesscontrol.windows.net") -DisplayName "Company ACS Namespace" -Addresses $replyUrl
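For reference, the commands above can be collected into one hedged script; "company" is the ACS namespace created earlier and should be replaced with your own value:

```powershell
# Register the ACS namespace as a service principal in Azure AD.
Connect-MsolService                       # prompts for Azure AD credentials
Import-Module MSOnlineExtended -Force

$acsUrl = "https://company.accesscontrol.windows.net"   # your ACS namespace URL
$replyUrl = New-MsolServicePrincipalAddresses -Address $acsUrl
New-MsolServicePrincipal -ServicePrincipalNames @($acsUrl) `
    -DisplayName "Company ACS Namespace" -Addresses $replyUrl
```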

4.2 SharePoint 2013 Configuration

Follow these steps to configure Azure AD as the identity provider for SharePoint 2013:

  1. From the Start menu, click All Programs.
  2. Click Microsoft SharePoint 2013 Products.
  3. Click SharePoint 2013 Management Shell
  4. Run the following PowerShell commands:
    1. $root = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\Certs\ACS.cer")
    2. New-SPTrustedRootAuthority -Name "Token Signing Cert Parent" -Certificate $root
    3. $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\Certs\ACS.cer")
    4. New-SPTrustedRootAuthority -Name "Token Signing Cert" -Certificate $cert
    5. $map1 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn" -IncomingClaimTypeDisplayName "UPN" -SameAsIncoming
    6. $map2 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" -IncomingClaimTypeDisplayName "GivenName" -SameAsIncoming
    7. $map3 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname" -IncomingClaimTypeDisplayName "SurName" -SameAsIncoming
    8. $map4 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" -IncomingClaimTypeDisplayName "Role" -SameAsIncoming
    9. $realm = "urn:sharepoint:company"
    10. $signInURL = "https://company.accesscontrol.windows.net/v2/wsfederation"
    11. $ap = New-SPTrustedIdentityTokenIssuer -Name "ACS Provider" -Description "SharePoint secured by SAML in ACS" -realm $realm -ImportTrustCertificate $cert -ClaimsMappings $map1,$map2,$map3,$map4 -SignInUrl $signInURL -IdentifierClaim "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"
  5. Verify that the user account that is performing this procedure is a member of the Farm Administrators SharePoint group.

    In Central Administration, on the home page, click Application Management.

  6. On the Application Management page, in the Web Applications section, click Manage web applications.
  7. Click the appropriate web application.
  8. From the ribbon, click Authentication Providers.
  9. Under Zone, click the name of the zone. For example, Default.
  10. On the Edit Authentication page, in the Claims Authentication Types section, select Trusted Identity provider, and then click the name of your provider, which for purposes of this article is ACS Provider. Click OK.
  11. The Trusted Provider setting is now configured for the web application (figure: the Trusted Provider setting in a web app).
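For convenience, the PowerShell from step 4 above can be run as one script in the SharePoint 2013 Management Shell; replace "company" with your ACS namespace:

```powershell
# Import the ACS token-signing certificate extracted earlier.
$root = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\Certs\ACS.cer")
New-SPTrustedRootAuthority -Name "Token Signing Cert Parent" -Certificate $root
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\Certs\ACS.cer")
New-SPTrustedRootAuthority -Name "Token Signing Cert" -Certificate $cert

# Map the incoming SAML claims to SharePoint claim types.
$map1 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn" -IncomingClaimTypeDisplayName "UPN" -SameAsIncoming
$map2 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" -IncomingClaimTypeDisplayName "GivenName" -SameAsIncoming
$map3 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname" -IncomingClaimTypeDisplayName "SurName" -SameAsIncoming
$map4 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" -IncomingClaimTypeDisplayName "Role" -SameAsIncoming

# Register ACS as a trusted identity token issuer.
$realm = "urn:sharepoint:company"
$signInURL = "https://company.accesscontrol.windows.net/v2/wsfederation"
New-SPTrustedIdentityTokenIssuer -Name "ACS Provider" -Description "SharePoint secured by SAML in ACS" `
    -Realm $realm -ImportTrustCertificate $cert -ClaimsMappings $map1,$map2,$map3,$map4 `
    -SignInUrl $signInURL -IdentifierClaim "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"
```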

4.3 Assigning Users

Use the following steps to set the permissions to access the web application.

  1. In Central Administration, on the home page, click Application Management.
  2. On the Application Management page, in the Web Applications section, click Manage web applications.
  3. Click the appropriate web application, and then click User Policy.
  4. In Policy for Web Application, click Add Users.
  5. In the Add Users dialog box, click the appropriate zone in Zones, and then click Next.
  6. In the Add Users dialog box, type user2@company.onmicrosoft.com (ACS Provider).
  7. In Permissions, click Full Control.
  8. Click Finish, and then click OK.
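The same policy can also be granted through the SharePoint object model instead of Central Administration. The following is a sketch only: the web application URL is an assumed placeholder, while "ACS Provider" and the user UPN match the values used in this article:

```powershell
# Assumption: replace with your web application URL.
$wa = Get-SPWebApplication "http://sharepoint.company.local"

# Build a claims principal for the federated user via the trusted token issuer.
$issuer = Get-SPTrustedIdentityTokenIssuer "ACS Provider"
$claim = New-SPClaimsPrincipal -Identity "user2@company.onmicrosoft.com" -TrustedIdentityTokenIssuer $issuer

# Grant Full Control through the web application's user policy.
$policy = $wa.Policies.Add($claim.ToEncodedString(), "User2 (ACS)")
$fullControl = $wa.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullControl)
$policy.PolicyRoleBindings.Add($fullControl)
$wa.Update()
```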

Conclusion

Azure AD is now the trusted identity provider for SharePoint 2013, and Azure AD users are able to authenticate to and use SharePoint 2013 resources.


AD CS Install Guide For Azure AD Domain Services

1. Introduction

Active Directory Certificate Services (AD CS) provides customizable services for issuing and managing public key certificates used in software security systems that employ public key technologies.

Because Azure AD Domain Services allows only limited administrative access to the Active Directory instance, only a standalone Certificate Authority (CA) deployment is possible.

More information about AD CS can be found here.

2. Assumptions

The following assumptions are made during the creation of this article:

  • Azure AD Domain Services is up and running
  • An Active Directory member server running Windows Server 2012 R2
  • Experience with Microsoft Certificate Authority
  • Experience with Active Directory

3. Installation

Please follow the instructions below to install a standalone CA:

Disclaimer:

This is a quick, basic installation; evaluate whether it meets your business and security requirements before using it in production.

  1. Run the following PowerShell Command as Administrator
    1. Install-WindowsFeature AD-Certificate,ADCS-Cert-Authority,ADCS-Web-Enrollment -IncludeManagementTools
  2. Run the following PowerShell command as Administrator
    1. Install-AdcsCertificationAuthority -CAType StandaloneRootCa
  3. Run the following Powershell command as Administrator
    1. Install-AdcsWebEnrollment
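As a sketch, the three steps above can be combined into a single script; the CA common name and the 10-year validity below are illustrative assumptions, not requirements:

```powershell
# Install the AD CS role services plus management tools.
Install-WindowsFeature AD-Certificate, ADCS-Cert-Authority, ADCS-Web-Enrollment -IncludeManagementTools

# Configure a standalone root CA; -CACommonName and validity are example values.
Install-AdcsCertificationAuthority -CAType StandaloneRootCa `
    -CACommonName "Company-Root-CA" `
    -ValidityPeriod Years -ValidityPeriodUnits 10 -Force

# Enable the web enrollment pages (http://<Servername>/certsrv).
Install-AdcsWebEnrollment -Force
```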

4. Configuration

After the completion of section 3, the AD CS service should be up and running with the default configuration. Here are some recommendations for making AD CS a more secure, production-ready service.

Follow these steps to import the Root CA as a trusted authority for all domain-joined servers and machines:

  1. Download Root CA:
    1. Go to http://<Servername>/certsrv
    2. Click “Download a CA certificate, Certificate chain, or CRL”
    3. Click “Download CA certificate”
    4. Save the file for next steps
  2. Open Group Policy Management (Follow below to install Group Policy Management on a Member Server)
    1. Run the following PowerShell Command as Administrator
      1. Install-WindowsFeature GPMC
  3. Edit the "AADDC Computers" GPO
  4. Go to “Computer Configuration\Windows Settings\Security Settings\Public Key Policies\Trusted Root Certification Authorities” section
  5. Import the Root CA into the section above
  6. Close the Group Policy
  7. To allow the group policy to take effect:
    1. reboot the member servers, or
    2. run "gpupdate /force" as administrator
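Before rolling the certificate out by GPO, it can be useful to trust it on a single test machine first. A sketch with the built-in PKI module (the file path is an assumption):

```powershell
# Import the downloaded Root CA certificate into the local machine's
# Trusted Root Certification Authorities store (run as Administrator).
Import-Certificate -FilePath "C:\Temp\RootCA.cer" -CertStoreLocation Cert:\LocalMachine\Root
```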

5. Conclusion

Domain-joined servers in Azure AD Domain Services will now trust certificates issued by the new standalone CA.

 

 

Azure AD Domain Services Quick Install

Introduction

Azure Active Directory Domain Services lets you join Azure virtual machines to a domain without the need to deploy domain controllers; more detail can be found here.

This article shows a quick way to install and configure Azure AD Domain Services; other options might be required for a production deployment and are not covered in this article.

At the time of writing, most of the configuration is done in the Azure Portal (Classic); Microsoft is planning to move everything to the new Azure portal.

Assumptions

The following assumptions are made in this article:

  • Functional Azure AD – a quick guide can be found here
  • Access to Azure Subscription

Installation

To create all the required Azure resources, please follow the steps below:

1. Azure Virtual Network

  1. Go to https://manage.windowsazure.com
  2. Click "+ NEW"
  3. Click "Network Services", "Virtual Network", and then click "Custom Create"
  4. In Name, enter the required network name
  5. Choose the correct Location
  6. On Page 2, leave DNS servers empty for now
  7. On Page 3, enter the required Address space range and Subnets for the network
  8. Click the check mark to create the network

2. Create ‘AAD DC Administrators’ Group

To allow users to manage Azure AD Domain Services, you’ll first need to create a group in Azure AD called ‘AAD DC Administrators’ and add all the users that should have admin rights.

For more detailed tasks, please have a look here.
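If you prefer PowerShell over the portal, the group can also be created with the MSOnline module. A sketch (the admin UPN below is an assumed placeholder):

```powershell
Connect-MsolService

# The group name must be exactly 'AAD DC Administrators'.
$grp = New-MsolGroup -DisplayName "AAD DC Administrators" `
    -Description "Delegated administration for Azure AD Domain Services"

# Add an existing user to the group (replace the UPN with a real account).
$usr = Get-MsolUser -UserPrincipalName "admin@company.onmicrosoft.com"
Add-MsolGroupMember -GroupObjectId $grp.ObjectId -GroupMemberType User -GroupMemberObjectId $usr.ObjectId
```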

3. Azure AD Domain Services

  1. Go to https://manage.windowsazure.com/
  2. On the left menu, find "ACTIVE DIRECTORY"
  3. Click the required Azure AD in the list provided
  4. Click the "CONFIGURE" tab
  5. Scroll down and find the "domain services" section
  6. Change "ENABLE DOMAIN SERVICES FOR THIS DIRECTORY" to "YES"
  7. Change "DNS DOMAIN NAME OF DOMAIN SERVICES" to the required suffix
  8. For "CONNECT DOMAIN SERVICES TO THIS VIRTUAL NETWORK", choose the network that was created in the steps above
  9. Click "Save"
  10. The creation might take some time to complete; once finished, DNS server IP addresses will be provided for use in the created Virtual Network. (Please follow the steps below to finish the Virtual Network configuration.)

4. Configure Azure Virtual Network DNS Servers

  1. Go to https://manage.windowsazure.com/
  2. On the left menu, find "ACTIVE DIRECTORY"
  3. Click the required Azure AD in the list provided
  4. Click the "CONFIGURE" tab
  5. Scroll down and find the "domain services" section
  6. Document the IP addresses in the "IP ADDRESS" section for the next steps
  7. On the left-hand menu, choose "NETWORKS"
  8. Open the network that was created and enabled for Azure AD Domain Services
  9. Click "CONFIGURE"
  10. In the "dns servers" section, enter the two DNS servers documented in the previous step
  11. Click "SAVE"

 

Before using Azure AD Domain Services, please follow this guide to enable password synchronization.

Conclusion

By the end of this guide, Azure AD Domain Services will be functional, with the ability to domain-join Azure virtual machines.

Filtering on Azure AD Connect

Introduction

This article adds a filter to Azure AD Connect so that only user accounts with a valid email address are synced. Additional options may be required by the organization; more detail can be found here.

Preparation Tasks

The following tasks should be completed before starting the process:

  1. Azure AD Connect is installed and configured – see "Getting Started with Azure AD Free Edition"
  2. Administrator Access for Azure AD Connect Server

Adding the Filter

The following tasks should be performed on the Azure AD Connect Server:

Disable scheduled task

To disable the scheduled task which will trigger a synchronization cycle every 3 hours, follow these steps:

  1. Start Task Scheduler from the start menu.
  2. Directly under Task Scheduler Library, find the task named Azure AD Sync Scheduler, right-click it, and select Disable.
  3. You can now make configuration changes and run the sync engine manually from the synchronization service manager console.

After you have completed all your filtering changes, don’t forget to come back and Enable the task again.
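The same task can also be toggled from PowerShell on Windows Server 2012 R2 and later:

```powershell
# Stop the every-3-hours sync while making filtering changes...
Disable-ScheduledTask -TaskName "Azure AD Sync Scheduler"

# ...and re-enable it once the changes are applied and verified.
Enable-ScheduledTask -TaskName "Azure AD Sync Scheduler"
```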

  1. Open the "Synchronization Rules Editor"
  2. Click "Inbound"
  3. Find the "In From AD – User Join" rule and click "Edit"
  4. Click "Yes"
  5. In "Precedence", enter "500"
  6. Click "Next"
  7. Only include users that have an email address:
    1. Click "Add clause"
    2. In the Attribute field, choose "mail"
    3. In the Operator field, choose "ISNOTNULL"
  8. Add the company email domain (optional – checking whether the user has an email address solves most cases):
    1. This rule assumes you have only one email domain; it will not work for multiple domains
    2. Click "Add clause"
    3. In the Attribute field, choose "mail"
    4. In the Operator field, choose "ENDSWITH"
    5. In the Value field, enter "<email>.<domain-name>"
  9. Apply and verify the changes
  10. Enable the scheduled task
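After enabling the scheduled task again, existing objects should be re-evaluated against the new filter with a full (initial) sync. Newer Azure AD Connect builds ship an ADSync PowerShell module for this; on older builds, run the sync from the Synchronization Service Manager instead:

```powershell
# Assumes a build of Azure AD Connect that includes the ADSync module.
Import-Module ADSync
Start-ADSyncSyncCycle -PolicyType Initial   # full sync, so the new filter applies to existing users
```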

Conclusion

After completing this article, only user accounts that have a valid email address will be synced into Azure AD.

 

 

 

Getting Started with Azure Active Directory Free Edition

 Introduction

Azure Active Directory (Azure AD) is Microsoft’s multi-tenant cloud based directory and identity management service.

More in-depth detail about Azure AD can be found here.

This article illustrates the registration process and the essential configuration tasks for the Azure AD free edition, for use by an organization's internal users. (Future posts will look at other scenarios.)

Preparation tasks

The following preparation tasks will be required:

  1. Have a Microsoft account ready to use for sign-up;
    1. Generate a Microsoft account by going here;
    2. Follow the on-screen wizard and complete sign-up;
  2. A credit card – this will only be used for verification and will not be charged unless you explicitly upgrade to a paid offer;
  3. Optional – an external domain name, e.g. schmarr.com, to integrate into Azure AD;
    1. P.S. You'll need to be able to create TXT records in the external domain.

Installation

Registration

Please follow these steps to register the free Azure subscription that will host Azure AD:

  1. Go to the following url: https://azure.microsoft.com/en-us/trial/get-started-active-directory/;
  2. Click on “Create a free Azure Account”;
  3. Click “Start Now”;
  4. Fill in the form and submit;
  5. The subscription will take up to 4 minutes to be created.
  6. Once the process is complete, the new subscription is ready to use.

By now a default Azure AD directory has already been created; skip the "Create Azure AD" section if the default instance will be used.

Create Azure AD (Optional)

Follow these steps to create a new Azure AD:

  1. In the left corner click on the “+ New” icon
  2. Click “Security + Identity”
  3. Click “Active Directory”
  4. It will redirect to the Azure Classic portal (this might change in the future)
  5. The new directory wizard appears
  6. Fill in the form and click the check mark to create the Azure AD

Essential Azure AD Configuration

At this point Azure AD is fully functional, with the following constraints:

  • A manual process is required for creating user accounts (GUI, PowerShell, or CSV import);
  • User passwords will not be in sync with their network passwords;
  • Usernames at this stage will be <username>@<AzureADName>.onmicrosoft.com;
    • Users will be required to remember these usernames. (Most users find it difficult enough to remember their passwords.)

Follow my Essential Azure Configuration guide here if you want to address the constraints mentioned above.

Conclusion

After completion of this guide, the Azure AD free edition will be available and functional.

Essential Azure AD Configuration

Introduction

Azure Active Directory (Azure AD) is Microsoft’s multi-tenant cloud based directory and identity management service.

More in-depth detail about Azure AD can be found here.

A fresh Azure AD installation will have the following constraints:

  1. Usernames at this stage will be <username>@<AzureADName>.onmicrosoft.com;

    Users will be required to remember these usernames. (Most users find it difficult enough to remember their passwords.)

  2. A manual process is required for creating user accounts (GUI, PowerShell, or CSV import);
  3. User passwords will not be in sync with their network passwords;

This article focuses only on addressing the constraints above, with the least possible effort. More complex options are available and will depend on security and business requirements.

Configuration / Installation

The optional tasks below should be followed in chronological order to address the constraints above.

1. Add Email Domain (Optional)

This step should be done first before proceeding with step 2.

Only one Azure AD can own the organization's email domain; Microsoft will not allow registering the same email domain in another Azure AD tenant.

Follow these steps to allow usernames to be the same as the organization email address:

  1. Go to https://manage.windowsazure.com
  2. On the left menu click “Active Directory”
  3. Open your newly created Azure AD
  4. Click the "Domains" tab
  5. Click "ADD A CUSTOM DOMAIN"; the add-domain wizard will appear
  6. Enter the company domain name, for example "schmarr.com", and click "add"
  7. Page 2 will show the TXT record that should be created in the company domain's external DNS service
  8. Once the DNS TXT record is created, click "verify" (allow up to 48 hours for DNS replication to complete)
  9. Manually change the user accounts to reflect the users' email addresses

    For automating user creation and password sync, please proceed to step 2.
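Before clicking "verify", you can check that the TXT record is already visible in public DNS; schmarr.com is the example domain used in this article:

```powershell
# Query public DNS for the domain's TXT records (Windows 8 / Server 2012 and later).
Resolve-DnsName -Name schmarr.com -Type TXT | Select-Object Name, Strings
```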

2. Automate Account creation in Azure AD (Optional)

It is recommended that step 1 be completed first; otherwise, all user accounts will be created with the system-generated domain name, e.g. "<username>@<AzureADName>.onmicrosoft.com".

Assumptions / Requirements

The following assumptions are made in this section:

  • Internal Active Directory up and running
  • A Windows 2012 R2 member server of the Active Directory, ready for the Azure AD Connect installation
  • The member server has internet access
  • The Active Directory userPrincipalName (UPN) reflects the user's mail address
    • This assumption is in most cases a challenge for organizations, and the following options are available:
      • Run Azure AD Connect (the steps below use the express install) in advanced mode and choose the "mail" attribute instead of userPrincipalName (UPN) – easy fix;
      • Fix the UPN to be the same as the email address; here is a Microsoft tool that can assist;
  • Azure AD
  • An Active Directory enterprise administrator account
  • A global admin account created with the default domain context, e.g. admin@<AzureADName>.onmicrosoft.com
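For the second UPN option above, a bulk fix can be sketched with the ActiveDirectory module. Treat this as illustrative only and test it in a lab first; the mail-based UPN suffix must also be registered in Active Directory Domains and Trusts before it can be assigned:

```powershell
Import-Module ActiveDirectory

# Preview which accounts would change (no writes are made here).
Get-ADUser -Filter 'mail -like "*"' -Properties mail |
    Where-Object { $_.UserPrincipalName -ne $_.mail } |
    Select-Object SamAccountName, UserPrincipalName, mail

# Apply: set each mismatched user's UPN to their mail address.
Get-ADUser -Filter 'mail -like "*"' -Properties mail |
    Where-Object { $_.UserPrincipalName -ne $_.mail } |
    ForEach-Object { Set-ADUser $_ -UserPrincipalName $_.mail }
```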

Azure AD Configuration

  1. Go to https://manage.windowsazure.com
  2. On the left menu click “Active Directory”
  3. Open your newly created Azure AD
  4. Click the "DIRECTORY INTEGRATION" tab
  5. Click "ACTIVATED", then click "SAVE"

Azure AD Connect Installation

The following tasks will be completed on the domain member server.

  1. Download Azure AD Connect from Here
  2. Run the downloaded setup file on the Windows 2012 R2 member server
  3. Click “Continue”
  4. Click “Use express settings”
  5. Enter your Azure AD Admin username e.g. admin@<AzureADName>.onmicrosoft.com and password
  6. Click “Next”
  7. Enter the Active Directory Enterprise Admin account and password
  8. Click “Next”
  9. Click “Install”

Once the installation is complete, all user accounts will be created in Azure AD automatically with their current email addresses. Password synchronization will also be enabled automatically.

An advanced installation allows disabling password sync if it is not required.

Conclusion

Users will now be able to log in to Azure AD using their existing email address and network password.

 

 

 

ADFS 2016 Technical Preview 4 Install Guide

Introduction

Microsoft is in the process of releasing a new version of Windows Server 2016; this release includes a new version of ADFS.

In this article I will focus only on the installation process of the ADFS 2016 preview (the easy bit); future guides will focus more on integration.

Here is also some related reading material from my previous posts:

  • Group Managed Service Accounts – this is highly recommended for all ADFS implementations. The article was written for 2012 but is still relevant for 2016.

My Lab

The lab runs in Microsoft Azure; the following services relevant to ADFS 2016 are running in this lab:

  • Active Directory – Single Forest, Single Domain
    • OS – Windows 2012 R2
    • Server Name – DC2012R2
  • PKI Certificate Server running on the domain controller (not recommended for production)
  • ADFS 2016 Backend Server
    • OS – Windows 2016 Technical Preview 4
    • Server Name – S2016PR4ADFS01
  • ADFS 2016 Web Applications Proxy Server
    • OS – Windows 2016 Technical Preview 4
    • Server Name – S2016PR4PRX01

Preparation

All ADFS implementations require the following high-level preparation tasks before starting the installation. (Microsoft has a well-documented checklist that should be followed.)

  • Split-brain DNS – this allows internal users to resolve the ADFS URL to the internal ADFS backend servers. Info can be found here.
  • DNS records
    • External DNS
      • A Record – adfs2016.schmarr.com
        • Point to Web Application Proxy Server external IP
      • A Record – enterpriseregistration.schmarr.com
        • Point to Web Application Proxy Server external IP
      • A Record – certauth.adfs2016.schmarr.com
        • Point to Web Application Proxy Server external IP
    • Internal DNS
      • A record – adfs2016.schmarr.com
        • Point to ADFS 2016 backend server internal IP
      • A Record – enterpriseregistration.schmarr.com
        • Point to ADFS 2016 backend server internal IP
      • A Record – certauth.adfs2016.schmarr.com
        • Point to ADFS 2016 backend Server internal IP
  • ADFS features – ADFS has additional features which need to be considered before acquiring the required certificate for encryption; e.g. Workplace Join has some additional requirements for the certificate. Read more about Workplace Join here.
  • Certificate – All ADFS communication between the client and ADFS is encrypted, so the certificate should be trusted by all parties. An external certificate is advised.
    • In this article an internal certificate was used.
    • Lab SSL Certificate attributes:
      • Subject Name (CN): adfs2016.schmarr.com
      • Subject Alternative Name (DNS): adfs2016.schmarr.com
      • Subject Alternative Name (DNS): enterpriseregistration.schmarr.com
  • Group Managed Service Account  – how-to
    • Enable Managed Service Accounts (On Domain Controller running 2012 R2 or higher)
    • ADFSGmsa
      • PowerShell command – New-ADServiceAccount ADFSGmsa -DNSHostName adfs2016.schmarr.com -ServicePrincipalNames http/adfs2016.schmarr.com
  • Topology – Choose the correct topology to fit business requirements. More information about topologies can be found here.
    • Stand-Alone Federation Server using WID (Windows Internal Database) will be used in this article.
      • Gotcha – Limit memory usage for WID
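The Group Managed Service Account preparation above depends on a KDS root key existing in the forest; without one, New-ADServiceAccount fails. A lab-only sketch:

```powershell
# gMSAs require a KDS root key in the forest before New-ADServiceAccount works.
# Backdating the key makes it usable immediately -- acceptable in a lab, but in
# production use Add-KdsRootKey -EffectiveImmediately and allow ~10 hours for replication.
Add-KdsRootKey -EffectiveTime ((Get-Date).AddHours(-10))

# Then create the ADFS service account as described above.
New-ADServiceAccount ADFSGmsa -DNSHostName adfs2016.schmarr.com -ServicePrincipalNames http/adfs2016.schmarr.com
```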

Installation

ADFS 2016 backend server installation

  1. Install the required SSL certificate.
    1. Here is a guide for requesting a SAN certificate for internal PKI or External certificate provider.
    2. ADFS2016Preview-ScreenShot2
    3. Additional information for the “certauth” url can be found here
  2. Install Active Directory Federation Services role on server
    1. PowerShell command – Install-WindowsFeature ADFS-Federation -IncludeManagementTools
    2. Get the 2012 R2 wizard options from here
  3. Configure Active Directory Federation Services
    1. PowerShell command – Install-AdfsFarm -CertificateThumbprint ff236398ad5b51b9dd427cf819e6586b43d2009b -FederationServiceName adfs2016.schmarr.com -GroupServiceAccountIdentifier AS\ADFSGmsa$
    2. Get the 2012 R2 wizard options from here
  4. Limit WID memory usage
    1. Install SQL Management Studio Express – Download from here
    2. Open an admin command prompt and connect to WID – osql -E -S \\.\pipe\MICROSOFT##WID\tsql\query
    3. Enter the following commands:
      1. exec sp_configure 'show advanced options', 1;
      2. reconfigure;
      3. go
    4. To check the current config:
      1. exec sp_configure;
      2. go
    5. Reconfigure WID to use at most 2 GB:
      1. exec sp_configure 'max server memory', 2048;
      2. reconfigure with override;
      3. go
      4. quit
    6. Restart the Windows Internal Database service
    7. Optional – Uninstall SQL Management Studio Express
  5. Testing Installation
    1. PowerShell command – Set-AdfsProperties -EnableIdPInitiatedSignonPage $true
    2. Go to the following url https://adfs2016.schmarr.com/adfs/ls/IdpInitiatedSignOn
      1. Users should be able to sign in from a domain-joined machine on the internal network
      2. Users should be able to sign in from a non-domain-joined machine on the internal network
    3. Optional – disable the IdP-initiated sign-on page
      1. PowerShell command – Set-AdfsProperties -EnableIdPInitiatedSignonPage $false

 ADFS 2016 Web Application Proxy Server installation

  1. Export the certificate from the ADFS backend server with private key
  2. Import into computer store of Web Application proxy server
  3. Install Web Application Proxy Role
    1. PowerShell command – Install-WindowsFeature Web-Application-Proxy -IncludeManagementTools
    2. Get the 2012 R2 wizard options from here
  4. Configure Web Application Proxy Role
    1. PowerShell command – Install-WebApplicationProxy -CertificateThumbprint ff236398ad5b51b9dd427cf819e6586b43d2009b -FederationServiceName adfs2016.schmarr.com
    2. Get the 2012 R2 wizard options from here
  5. Testing Installation
    1. On the ADFS backend server, run the PowerShell command – Set-AdfsProperties -EnableIdPInitiatedSignonPage $true
    2. Go to the following url https://adfs2016.schmarr.com/adfs/ls/IdpInitiatedSignOn
      1. Users should be able to sign in from a domain-joined machine on the external network
      2. Users should be able to sign in from a non-domain-joined machine on the external network
    3. Optional – disable the IdP-initiated sign-on page
      1. PowerShell command – Set-AdfsProperties -EnableIdPInitiatedSignonPage $false

Conclusion

This article demonstrated installing ADFS 2016 preview 4 in a Stand-Alone Federation Server using WID topology.