The AI Black Hole
Stopping Unsanctioned AI from Consuming Your Organisation’s Information
The arrival of Generative AI (GenAI) has presented IT leaders with a high-stakes, dual challenge: how to govern the organisation’s officially licensed tools while simultaneously protecting sensitive information from the proliferation of uncontrolled, unsanctioned AI applications.
The risk is immediate, moving beyond concerns like plagiarism to focus squarely on data leakage, sensitive data exposure, and compliance failure.
The Shadow AI Threat
Across organisations, staff are using public GenAI tools, such as ChatGPT, Claude, and Gemini, to summarise documents, draft policies, and assist with research tasks. While the productivity gains are obvious, the governance risk is catastrophic.
When confidential, sensitive, or high-value research data is entered into these public models, it risks being retained for model training or stored outside the organisation's security controls, creating a silent, unmonitored leak of organisational IP and data into a global, uncontrolled environment. This is Shadow AI, and it operates entirely outside the organisation's legal and technical boundaries.
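Detection is a realistic first step, even before policy enforcement. The Python sketch below illustrates one approach: scanning an egress proxy log for traffic to well-known public GenAI hosts. The log format, column names, and domain list are illustrative assumptions, not a definitive ruleset, and should be adapted to your own gateway or CASB telemetry.

```python
"""Minimal sketch: flag outbound traffic to public GenAI endpoints in a proxy log.

Assumptions (adjust to your environment): the log is a CSV with
'timestamp', 'user' and 'host' columns, and the host list below is
illustrative, not exhaustive.
"""
import csv
from collections import Counter

# Illustrative public GenAI hosts; extend from your own CASB / threat intel data.
GENAI_HOSTS = {
    "chat.openai.com",
    "chatgpt.com",
    "claude.ai",
    "gemini.google.com",
}

def shadow_ai_report(log_path: str) -> Counter:
    """Count per-user requests to known public GenAI hosts."""
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            # Match the host itself or any subdomain of it.
            if any(host == h or host.endswith("." + h) for h in GENAI_HOSTS):
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    for user, count in shadow_ai_report("proxy.csv").most_common(10):
        print(f"{user}: {count} requests to public GenAI services")
```

A report like this does not stop the leak, but it turns an invisible problem into a measurable one, and tells you where sanctioned alternatives and training are most urgently needed.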
For many organisations, with their vast stores of sensitive personal data, competitive research outcomes, and financial data, this silent consumption represents an existential compliance threat that firewalls and basic security tools cannot solve.
The Licensed AI Trap: Governing Copilot Safely
To combat Shadow AI and satisfy user demand, many organisations are rolling out licensed enterprise tools like Microsoft 365 Copilot. While these tools offer enterprise data protection and do not use customer data to train the underlying models, they introduce a separate, but equally critical, risk: the permission exposure trap.
M365 Copilot is designed to respect the user's existing access rights: it will only surface data that the user can already access through SharePoint, OneDrive, or Teams.
However, in many tenants, years of organisational sprawl, permissive default sharing settings, and abandoned or orphaned M365 Groups have left vast quantities of sensitive data over-shared. When a user prompts Copilot, the AI can efficiently index and summarise every piece of information that user can access, effectively exposing and highlighting every historical permission flaw and governance failure.
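Surfacing that over-sharing is tractable with the Microsoft Graph API. The Python sketch below flags files in a single document library that carry anonymous or organisation-wide sharing links, two of the scopes most likely to feed Copilot content the author never intended to broadcast. The access token and drive ID are placeholders, and pagination, throttling, and nested folders are deliberately omitted; treat it as a starting point, not a complete audit.

```python
"""Minimal sketch: flag broadly shared files in one SharePoint document library
via Microsoft Graph. GRAPH_TOKEN and DRIVE_ID are placeholders you must supply
(e.g. via an app registration with Files.Read.All); pagination, throttling and
nested folders are deliberately omitted.
"""
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
GRAPH_TOKEN = "<access-token>"   # placeholder: acquire via MSAL in practice
DRIVE_ID = "<drive-id>"          # placeholder: the library's drive ID
HEADERS = {"Authorization": f"Bearer {GRAPH_TOKEN}"}

# Sharing-link scopes that typically signal over-exposure to Copilot's index.
RISKY_SCOPES = {"anonymous", "organization"}

def overshared_items(drive_id: str) -> list[tuple[str, str]]:
    """Return (file name, link scope) pairs for risky sharing links."""
    items = requests.get(
        f"{GRAPH}/drives/{drive_id}/root/children", headers=HEADERS, timeout=30
    ).json().get("value", [])
    flagged = []
    for item in items:
        perms = requests.get(
            f"{GRAPH}/drives/{drive_id}/items/{item['id']}/permissions",
            headers=HEADERS, timeout=30,
        ).json().get("value", [])
        for perm in perms:
            scope = perm.get("link", {}).get("scope")
            if scope in RISKY_SCOPES:
                flagged.append((item["name"], scope))
    return flagged

if __name__ == "__main__":
    for name, scope in overshared_items(DRIVE_ID):
        print(f"over-shared: {name} (link scope: {scope})")
```

Run against a handful of high-value libraries, an audit like this typically reveals the scale of the permission exposure trap long before Copilot does it for you.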
The Strategic Defence: Information Architecture and Information Governance First
The solution to both the Shadow AI threat and the licensed AI trap is not a technical block, but a strategic implementation of Information Architecture as the core governance layer.
Before any AI tool can be safely deployed, whether sanctioned or not, an organisation must first establish four foundational controls:
- Data Classification: Mandatory, accurate application of Microsoft Purview Sensitivity Labels (or equivalent) at the point of creation and across existing content (a label audit sketch follows this list).
- Permissions: AI tools deliver the most value when staff can access a broad base of information, but only within a considered permissions model. Reviewing and right-sizing access, as the sharing audit sketch above illustrates, protects commercially and personally sensitive information without blunting the tools' usefulness.
- Provisioning Control: Enforcing the information architecture and classification rules through structured Teams and Group provisioning, so that every new container is born compliant (see the provisioning gate sketch after this list).
- Lifecycle and Retention: Implementation of Microsoft Purview Retention Policies and labels to manage the entire document lifecycle, ensuring that redundant, obsolete, or trivial (ROT) data is systematically disposed of while documents under legal or regulatory retention obligations are retained and preserved (a stale-content report sketch follows this list).
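For the classification control, a useful first audit is simply finding content with no label at all. The Python sketch below uses the Microsoft Graph extractSensitivityLabels action on driveItems to list unlabelled files in one library; the token and drive ID are placeholders, and the action assumes the relevant Purview licensing and Graph permissions are in place in your tenant.

```python
"""Minimal sketch: report files with no Purview sensitivity label in one library,
using the driveItem extractSensitivityLabels action in Microsoft Graph.
The token and drive ID are placeholders; licensing, permissions, pagination
and error handling are assumed/omitted.
"""
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder token
DRIVE_ID = "<drive-id>"                               # placeholder library

def unlabeled_files(drive_id: str) -> list[str]:
    """Return names of files whose extracted label list is empty."""
    items = requests.get(
        f"{GRAPH}/drives/{drive_id}/root/children", headers=HEADERS, timeout=30
    ).json().get("value", [])
    missing = []
    for item in items:
        if "file" not in item:          # skip folders
            continue
        result = requests.post(
            f"{GRAPH}/drives/{drive_id}/items/{item['id']}/extractSensitivityLabels",
            headers=HEADERS, timeout=30,
        ).json()
        if not result.get("labels"):    # no sensitivity label applied at all
            missing.append(item["name"])
    return missing

if __name__ == "__main__":
    for name in unlabeled_files(DRIVE_ID):
        print(f"unlabelled: {name}")
```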
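For provisioning control, the governance checks belong in front of group creation, not after it. The sketch below shows a minimal provisioning gate: it enforces an illustrative naming convention and a two-owner minimum before creating a Unified (M365) Group via Microsoft Graph. The naming pattern and ownership rule are assumptions to adapt to your own architecture.

```python
"""Minimal sketch: a provisioning gate that only creates an M365 Group when it
meets naming and ownership rules, so every container is 'born compliant'.
The token is a placeholder (needs Group.ReadWrite.All); approval workflow,
sensitivity-label assignment and error handling are omitted.
"""
import re
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <access-token>",  # placeholder token
           "Content-Type": "application/json"}

# Illustrative convention: DEPT-Purpose, e.g. FIN-BudgetPlanning.
NAME_PATTERN = re.compile(r"^[A-Z]{2,5}-[A-Za-z0-9]+$")

def provision_group(display_name: str, owner_ids: list[str]) -> str:
    """Create a Unified (M365) Group if it passes the governance checks."""
    if not NAME_PATTERN.match(display_name):
        raise ValueError(f"'{display_name}' breaks the naming convention")
    if len(owner_ids) < 2:
        raise ValueError("at least two owners required to avoid orphaned groups")
    body = {
        "displayName": display_name,
        "mailEnabled": True,
        # mailNickname must be tenant-unique; derived naively here.
        "mailNickname": display_name.lower().replace("-", ""),
        "securityEnabled": False,
        "groupTypes": ["Unified"],
        "owners@odata.bind": [f"{GRAPH}/users/{oid}" for oid in owner_ids],
    }
    resp = requests.post(f"{GRAPH}/groups", headers=HEADERS, json=body, timeout=30)
    resp.raise_for_status()
    return resp.json()["id"]
```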
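And for lifecycle and retention, disposal decisions start with knowing what is stale. The final sketch reports files untouched beyond a configurable age as ROT candidates, reading only lastModifiedDateTime metadata; it deletes nothing, and the seven-year default is an illustrative assumption. Actual retention periods must come from your retention schedule and then be enforced through Purview retention labels and policies.

```python
"""Minimal sketch: a ROT (redundant, obsolete, trivial) candidate report that
flags files untouched for a set number of years, as an input to Purview
retention decisions. Placeholders as in the sketches above; this only reads
metadata and never deletes anything.
"""
from datetime import datetime, timedelta, timezone
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <access-token>"}  # placeholder token

def rot_candidates(drive_id: str, years: int = 7) -> list[tuple[str, str]]:
    """Return (name, last modified) for files older than the cutoff."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=365 * years)
    items = requests.get(
        f"{GRAPH}/drives/{drive_id}/root/children", headers=HEADERS, timeout=30
    ).json().get("value", [])
    stale = []
    for item in items:
        if "file" not in item:          # skip folders
            continue
        modified = datetime.fromisoformat(
            item["lastModifiedDateTime"].replace("Z", "+00:00"))
        if modified < cutoff:
            stale.append((item["name"], item["lastModifiedDateTime"]))
    return stale
```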
The urgency is clear: governance architecture must complement security technology, and the path to secure AI adoption runs through an Information Architecture and Information Governance mindset. Our Governance Accelerator supports exactly this strategic information governance for safe AI adoption, taking IT leadership out of firefighting mode and into a position of proactive strategic control.