AI has removed speed as the primary bottleneck in most knowledge work. A single employee can now produce in an afternoon what previously required a full team and a week. But speed without perspective is not an advantage — it is a fast route to a slow disaster.
The problem is not the speed. It is what you might call the echo chamber of one: the closed loop that forms when a single person uses AI to research, draft, analyse, and synthesise without input from colleagues in other functions. They are not simply moving fast; they are moving fast in one direction, with one frame of reference, across a landscape full of hazards they cannot see from where they are sitting. A Legal perspective on data usage, an Operations perspective on whether a new workflow is actually feasible at scale, a Governance perspective on what the model might be getting wrong: AI does not eliminate the need for any of these. It makes that need more urgent. So who in your organisation is providing that second perspective?
This article sits alongside our piece on foundational data skills as part of a broader argument about what responsible AI adoption actually requires in practice. The technical safeguards matter. The documentation matters. But none of it holds together without the organisational structure to ensure that AI outputs are seen, challenged, and approved by people with different lenses before they become decisions.
The Single-User Silo Problem
There is a specific failure mode that emerges when AI is adopted at the individual level without cross-functional oversight. Call it the AI-developed opinion: the output of a closed feedback loop in which a person's prompts shape the AI's synthesis, and the AI's synthesis confirms the person's assumptions. No dissenting perspectives enter. No friction from colleagues in other departments slows the process down. The result is something that looks like a finished deliverable but has never been stress-tested against the realities that other functions would immediately recognise.
A marketing team member who can produce a complete campaign strategy in a single afternoon using AI has a real productivity advantage. That advantage evaporates the moment the strategy reaches Legal and is paused for data compliance review, or reaches Operations and is found to require infrastructure that does not exist, or reaches leadership and is found to rest on market assumptions that the sales team would have corrected in a ten-minute conversation. The work was done quickly. It was not done safely. And the time saved in production is rarely recovered in the subsequent firefighting. Does that sound familiar?
Peer review in cross-functional teams is often experienced as friction. It slows things down. It surfaces objections. It requires justification. In an AI-augmented workflow, it is tempting to treat this friction as a bottleneck to be eliminated.
It is not. It is a quality control mechanism. The objection from the compliance team that delays a campaign by a week is considerably less costly than the regulatory finding that follows a campaign that should never have launched. Removing the friction does not remove the risk. It removes the warning.
What Each Function Brings to the Table
Safe AI adoption is a team sport, and each position on the team brings a distinct form of literacy that no other function can fully substitute for. The table below maps each department's primary contribution to the AI review process — not as bureaucratic gatekeeping, but as a genuine filter that catches different categories of failure.
The value of this table is not in the categories themselves — most organisations are aware that Legal, Governance, and Operations all have a stake in AI adoption. The value is in treating their input as structurally required, not optionally solicited. How many AI pilots at your organisation have reached the deployment stage before compliance had a proper look at them? The difference between a cross-functional review that happens reliably and one that happens when someone remembers to ask is, in practice, the difference between catching a problem before it scales and reading about it in a post-mortem.
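To make "structurally required" a little more concrete, here is a minimal illustrative sketch of what an enforced sign-off gate might look like if it were encoded in a deployment script. The roles, names, and the `ReviewGate` structure are hypothetical, not drawn from any specific organisation's tooling; the point is simply that the process refuses to proceed until every function has recorded its review, rather than relying on someone remembering to ask.

```python
from dataclasses import dataclass, field

# Hypothetical set of functions whose review is structurally required
# before an AI pilot can move towards deployment.
REQUIRED_REVIEWERS = {"Legal", "Governance", "Operations", "Leadership"}


@dataclass
class ReviewGate:
    """Records which functions have signed off on an AI initiative."""
    initiative: str
    signoffs: dict[str, str] = field(default_factory=dict)  # function -> review note

    def record_signoff(self, function: str, note: str) -> None:
        if function not in REQUIRED_REVIEWERS:
            raise ValueError(f"Unknown reviewing function: {function}")
        self.signoffs[function] = note

    def missing(self) -> set[str]:
        return REQUIRED_REVIEWERS - self.signoffs.keys()

    def approve_for_deployment(self) -> None:
        # The gate fails loudly: deployment is blocked until every required
        # perspective has been recorded, not merely invited.
        outstanding = self.missing()
        if outstanding:
            raise RuntimeError(
                f"{self.initiative}: blocked, awaiting review from {sorted(outstanding)}"
            )
        print(f"{self.initiative}: cleared for deployment with all sign-offs recorded.")


if __name__ == "__main__":
    gate = ReviewGate("Campaign chatbot pilot")
    gate.record_signoff("Legal", "Data usage terms reviewed")
    gate.record_signoff("Operations", "Fallback plan confirmed")
    try:
        gate.approve_for_deployment()
    except RuntimeError as blocked:
        print(blocked)  # Governance and Leadership have not yet signed off
```

Whether the gate lives in a script, a project tracker, or a standing agenda item matters less than the property the sketch illustrates: the review is a precondition, not a courtesy.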
The Benefits of Getting This Right
Cross-functional AI governance is not primarily a risk mitigation exercise, though it is that too. Organisations that build it properly also see measurable operational benefits — because the same structures that catch problems early also speed up the journey from AI pilot to sustainable deployment.
When It Goes Wrong: The LAUSD Chatbot
In 2024, the Los Angeles Unified School District launched "Ed," an AI-powered chatbot built to support students, parents, and staff with tasks ranging from tracking attendance to accessing learning resources. The investment was approximately $6 million. The ambition was genuine. The structural problems were significant and, in retrospect, foreseeable.
Ed was built around a deep dependency on a single external vendor, AllHere, with no meaningful integration into the district's core infrastructure. When AllHere experienced financial difficulty and leadership instability, there was no internal capability to maintain the service and no fallback system to replace it. The chatbot went dark. EdSurge reported on the collapse and raised a further question that the district could not immediately answer: where did the student data go?
The LAUSD case is a failure of cross-functional oversight as much as it is a vendor management failure. An Operations function engaged from the start would have flagged the absence of a fallback system. A Governance function would have required data residency and portability terms before a single student's information was handed to a third party. A Legal function would have mapped the liability exposure of an unsupported dependency on a single vendor for critical student services. None of these perspectives appear to have been structurally integrated into the adoption process. The result was a $6 million system that lasted less than a year and left open questions about student data that a school district should never have been in a position to face.
The lesson is not that AI in education is a bad idea. It is that treating AI as a plug-and-play solution rather than a capability that must be operationally integrated, contractually governed, and cross-functionally reviewed is a reliable path to exactly this kind of outcome. The technology was not the problem. The organisational structure around it was. And here is the question worth sitting with: if AllHere had collapsed on day one rather than after months of operation, would your organisation have had the internal capability to keep the lights on?
The PM as AI Coach
Effective cross-functional AI adoption does not happen by accident. It requires someone to hold the structure together — to ensure that Legal, Governance, Operations, and Leadership are not just nominally represented in an AI initiative but are genuinely contributing their distinct perspectives at the right stages of the process.
This is, increasingly, a core function of the project manager in AI-augmented organisations. The PM's role is not to understand every technical dimension of the AI system being deployed. It is to facilitate the conversations that translate clashing departmental perspectives into coherent, defensible decisions — and to maintain the documentation that proves those conversations happened. The PM who can do this reliably is not just managing a project. They are the person who ensures that the organisation's AI adoption survives its first contact with operational reality.
Every function in the table above brings a different form of AI literacy to the review process. Our AI literacy courses are designed to develop that foundation across your organisation — so that the people in the room are equipped to contribute the scrutiny their role requires.
Explore AI Literacy Courses →