AI has removed speed as the primary bottleneck in most knowledge work. A single employee can now produce in an afternoon what previously required a full team and a week. But speed without perspective is not an advantage — it is a fast route to a slow disaster.

The problem is not the speed. It is what you might call the echo chamber of one: the closed loop that forms when a single person uses AI to research, draft, analyse, and synthesise without input from colleagues in other functions. They are not just moving fast — they are moving fast in one direction, with one frame of reference, across a landscape full of hazards they cannot see from where they are sitting. Without a Legal perspective on data usage, without an Operations perspective on whether a new workflow is actually feasible at scale, without a Governance perspective on what the model might be getting wrong — AI does not eliminate the need for cross-functional review. It makes it more urgent. So who in your organisation is providing that second perspective?

This article sits alongside our piece on foundational data skills as part of a broader argument about what responsible AI adoption actually requires in practice. The technical safeguards matter. The documentation matters. But none of it holds together without the organisational structure to ensure that AI outputs are seen, challenged, and approved by people with different lenses before they become decisions.

The Single-User Silo Problem

There is a specific failure mode that emerges when AI is adopted at the individual level without cross-functional oversight. Call it the AI-developed opinion: the output of a closed feedback loop in which a person's prompts shape the AI's synthesis, and the AI's synthesis confirms the person's assumptions. No dissenting perspectives enter. No friction from colleagues in other departments slows the process down. The result is something that looks like a finished deliverable but has never been stress-tested against the realities that other functions would immediately recognise.

A marketing team member who can produce a complete campaign strategy in a single afternoon using AI has a real productivity advantage. That advantage evaporates the moment the strategy reaches Legal and is paused for data compliance review, or reaches Operations and is found to require infrastructure that does not exist, or reaches leadership and is found to rest on market assumptions that the sales team would have corrected in a ten-minute conversation. The work was done quickly. It was not done safely. And the time saved in production is rarely recovered in the subsequent firefighting. Does that sound familiar?

The Friction That Gets Automated Away

Peer review in cross-functional teams is often experienced as friction. It slows things down. It surfaces objections. It requires justification. In an AI-augmented workflow, it is tempting to treat this friction as a bottleneck to be eliminated.

It is not. It is a quality control mechanism. The objection from the compliance team that delays a campaign by a week is considerably less costly than the regulatory finding that follows a campaign that should never have launched. Removing the friction does not remove the risk. It removes the warning.

What Each Function Brings to the Table

Safe AI adoption is a team sport, and each position on the team brings a distinct form of literacy that no other function can fully substitute for. The table below maps each department's primary contribution to the AI review process — not as bureaucratic gatekeeping, but as a genuine filter that catches different categories of failure.

| Function | What they catch | Role |
| --- | --- | --- |
| Legal & Compliance (Risk Radar) | IP infringement, data privacy obligations, regulatory classification under frameworks like the EU AI Act, liability exposure in AI-assisted decisions | Flags what the organisation is legally obligated to do — and what it is not permitted to do — before a workflow reaches production |
| Leadership (ROI Compass) | Whether an AI initiative is generating measurable business value or consuming resources on capabilities the organisation does not yet need | Ensures AI adoption is driven by strategic intent rather than enthusiasm for the technology itself |
| Governance (Ethical Anchor) | Bias in training data, fairness of outputs across customer segments, transparency obligations, accountability gaps in AI-informed decisions | Asks the questions about who might be harmed if the model is wrong — before the model is deployed, not after |
| Operations (Reality Check) | Whether existing infrastructure can support a new AI-driven workflow at the scale being proposed, and what breaks if it cannot | Translates strategic AI ambition into operational feasibility — the function most likely to prevent a promising pilot from collapsing at rollout |

The value of this table is not in the categories themselves — most organisations are aware that Legal, Governance, and Operations all have a stake in AI adoption. The value is in treating their input as structurally required, not optionally solicited. How many AI pilots at your organisation have reached the deployment stage before compliance had a proper look at them? The difference between a cross-functional review that happens reliably and one that happens when someone remembers to ask is, in practice, the difference between catching a problem before it scales and reading about it in a post-mortem.

The Benefits of Getting This Right

Cross-functional AI governance is not primarily a risk mitigation exercise, though it is that too. Organisations that build it properly also see measurable operational benefits — because the same structures that catch problems early also speed up the journey from AI pilot to sustainable deployment.

Higher resource efficiency
Deloitte's State of AI in the Enterprise research found that AI-literate organisations with structured cross-functional adoption processes see significantly higher resource-allocation efficiency than those where AI use is siloed. The mechanism is straightforward: when Legal and Compliance are involved from the start, the "final hour veto" that kills so many AI pilots never arrives.
Higher adoption rates
When employees from multiple levels and functions are involved in selecting and shaping AI tools, resistance to those tools drops measurably. People who have contributed to a decision are considerably more likely to implement it consistently than people who have had it handed down to them. Inclusion in the process is, in practice, a change management strategy.
Speed through safety
Clearing Legal and Compliance hurdles during the build phase — rather than at the end of it — means projects do not stall at the point they are ready to deploy. The compliance review that takes two weeks when done at the start takes two months when done retroactively on a completed system with dependencies already built around it.
Accountability clarity
Cross-functional sign-off creates a documented record of who reviewed what and when — which is precisely the kind of audit trail that responsible AI documentation requires. It also means that when something does go wrong, the investigation starts from a clear record rather than a reconstruction from memory.

When It Goes Wrong: The LAUSD Chatbot

In 2024, the Los Angeles Unified School District launched "Ed," an AI-powered chatbot built to support students, parents, and staff with tasks ranging from tracking attendance to accessing learning resources. The investment was approximately $6 million. The ambition was genuine. The structural problems were significant and, in retrospect, foreseeable.

Ed was built with a deep dependency on a single external vendor, AllHere, and without meaningful integration into the district's core infrastructure. When AllHere ran into financial difficulty and leadership instability, there was no internal capability to maintain the service and no fallback system to replace it. The chatbot went dark. EdSurge reported on the collapse and raised a further question that the district could not immediately answer: where did the student data go?

⚠ The Vendor Dependency Trap

The LAUSD case is a failure of cross-functional oversight as much as it is a vendor management failure. An Operations function engaged from the start would have flagged the absence of a fallback system. A Governance function would have required data residency and portability terms before a single student's information was handed to a third party. A Legal function would have mapped the liability exposure of an unsupported dependency on a single vendor for critical student services. None of these perspectives appear to have been structurally integrated into the adoption process. The result was a $6 million system that lasted less than a year and left open questions about student data that a school district should never have been in a position to face.

The lesson is not that AI in education is a bad idea. It is that treating AI as a plug-and-play solution rather than a capability that must be operationally integrated, contractually governed, and cross-functionally reviewed is a reliable path to exactly this kind of outcome. The technology was not the problem. The organisational structure around it was. And here is the question worth sitting with: if AllHere had collapsed on day one rather than after a year of operation, would your organisation have had the internal capability to keep the lights on?

The PM as AI Coach

Effective cross-functional AI adoption does not happen by accident. It requires someone to hold the structure together — to ensure that Legal, Governance, Operations, and Leadership are not just nominally represented in an AI initiative but are genuinely contributing their distinct perspectives at the right stages of the process.

This is, increasingly, a core function of the project manager in AI-augmented organisations. The PM's role is not to understand every technical dimension of the AI system being deployed. It is to facilitate the conversations that translate clashing departmental perspectives into coherent, defensible decisions — and to maintain the documentation that proves those conversations happened. The PM who can do this reliably is not just managing a project. They are the person who ensures that the organisation's AI adoption survives its first contact with operational reality.

The Closing Argument

Cross-functional literacy is the translation layer between an AI strategy that looks good in a slide deck and one that holds together when it meets reality. If you are not arguing about AI outputs across departments today, you will be apologising for them to your customers tomorrow. The colleague who slows you down with an awkward question in week two is the one who saves you from the regulator in week forty.

Build the literacy that makes cross-functional AI governance possible

Every function in the table above brings a different form of AI literacy to the review process. Our AI literacy courses are designed to develop that foundation across your organisation — so that the people in the room are equipped to contribute the scrutiny their role requires.

Explore AI Literacy Courses →