Inside a typical enterprise AI rollout, the workflow is already familiar.

A team meets. Everyone aligns on what needs to get done. Then each person goes back to their desk and opens a private ChatGPT or Claude window.

Three people run the same lookup. Nobody sees what anyone else is producing. The next day, the team reconvenes to compare answers and figure out which one is right.

That is not collaboration. It is parallel AI use.

And according to a growing group of enterprise operators, it is starting to show up in the budget.

“That’s a coordination tax,” said Tanmai Gopal, CEO and co-founder of PromptQL, an enterprise AI unicorn whose customers include Instacart, McDonald’s, and Cisco. “The individual-copilot framing assumes the bottleneck is the human typing the prompt. The actual bottleneck is everything that happens before and after that prompt. Once you see it, you can’t unsee it.”

The companies that rushed to give every employee access to frontier models are now confronting a problem hidden inside that decision. Compute is being spent. Outputs are being generated. But the operating leverage companies expected has been harder to find.

The Bottleneck Is Not the Prompt

For the last two years, enterprise AI has largely been sold as an individual productivity story: smarter model, faster worker, better first draft.

Gopal argues that frame misses the real constraint.

The bottleneck is not the prompt. It is the context before the prompt, the handoff after the prompt, the duplicate work happening across teams, and the verification required before anything can be trusted.

The math becomes clear when you watch a team work for a week.

Ten employees running private model sessions means ten versions of the same context being assembled, ten redundant retrievals, and ten separate verification passes against similar data. Compute spend scales with headcount because the architecture is per-seat. Work product scales more slowly because much of what the model produces gets duplicated, discarded, or ignored.

That is where AI usage starts to become expensive without becoming transformative.
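The per-seat arithmetic above can be made concrete with a toy cost model. All of the numbers below are hypothetical assumptions for illustration, not PromptQL figures: it simply contrasts every seat rebuilding the same context against a shared thread that assembles it once.

```python
# Toy cost model (illustrative only; all token counts are hypothetical
# assumptions, not PromptQL data). Per-seat sessions rebuild the same
# context N times; a shared thread assembles it once and fans it out.

def per_seat_tokens(seats, context_tokens, retrieval_tokens, verify_tokens):
    """Every seat pays the full context, retrieval, and verification cost."""
    return seats * (context_tokens + retrieval_tokens + verify_tokens)

def shared_thread_tokens(seats, context_tokens, retrieval_tokens,
                         verify_tokens, per_seat_followup=2_000):
    """Context, retrieval, and verification happen once; each seat adds
    only a smaller follow-up cost for its own questions in the thread."""
    return (context_tokens + retrieval_tokens + verify_tokens
            + seats * per_seat_followup)

seats = 10
context, retrieval, verify = 4_000, 6_000, 2_000  # hypothetical costs

solo = per_seat_tokens(seats, context, retrieval, verify)
shared = shared_thread_tokens(seats, context, retrieval, verify)
print(f"per-seat: {solo:,} tokens, shared: {shared:,} tokens, "
      f"ratio: {solo / shared:.2f}x")
# With these assumed numbers: 120,000 vs 32,000 tokens, a 3.75x gap,
# roughly the range the article's later figures describe.
```

The point of the sketch is only that duplicated context, retrieval, and verification scale linearly with headcount in a per-seat architecture, while a shared thread pays those costs once.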

A Different Shape of Usage

PromptQL’s argument is not that companies need a better model. It is that they need a different operating model.

Gopal calls it “multiplayer AI.”

“Multiplayer AI is when the AI sits in the conversation between people, not behind each person individually,” he said. “Practically, that means a thread where a product manager, an engineer, and a salesperson are all looking at the same query, the same data, the same agent run. The AI has the full context of what’s been discussed, what’s been tried, what’s been ruled out.”

In that environment, the AI is not a private assistant. It becomes part of the shared workspace.

Gopal said the shape of the work changes quickly. In one PromptQL thread, a product manager iterated on the same dashboard 43 times. By iteration 40, the CEO had jumped in and added the columns that made it useful. In another, a marketer watched an engineer run a coding agent in a public thread, recognized the pattern, and began converting video files herself the next day.

“Capabilities flatten in a way that genuinely surprised us,” Gopal said. “Roles don’t disappear, but the boundaries between them get a lot more porous. People don’t talk as much, but there’s a lot more work that’s happening.”

The Internal Numbers

PromptQL’s internal data, which the company shared on the record, points in the same direction.

A single PromptQL user consumes a third to a fifth of the tokens of an equivalent setup running individual sessions, the company said, because of how the architecture handles planning and pre-computation. When usage moves from individual to shared, that efficiency can compound.

Thirty days after PromptQL replaced Slack internally with its own tool:

  • Direct messages were down 66 percent
  • The number of SaaS tools in use was down 20 percent
  • Pull requests submitted were up 50 percent
  • Pull requests merged were up 25 percent

“The ‘people budget’ is becoming a ‘token budget,’” Gopal said. “You laid off employees to cut cost, and now you’re spending what you saved on Claude and ChatGPT seats, and the spend isn’t going down because every individual is still running their own session against the model.”

Why This Is Happening Now

In Gopal’s view, two things had to become true at the same time.

First, models had to become capable enough to participate in real work without constantly derailing the conversation. Second, enough employees had to use AI individually to expose the failure mode of per-seat adoption.

Both happened in the last twelve months.

The broader market appears to be moving in the same direction. GitHub has announced AI features that point toward more collaborative coding workflows. Notion has publicly questioned what a 2026 version of Slack should look like, and whether it should look like Slack at all. Cursor’s team has also hinted at a shift away from individual AI use toward more shared development workflows.

“When that many independent companies converge on the same idea simultaneously, that’s a phase change,” Gopal said. “The split that’s worth watching is between the companies bolting AI into Slack as a chatbot and the companies arguing the tool itself has to change.”

What It Means for the P&L

For finance leaders, the implication is straightforward: companies may be spending on the wrong layer.

“They’re treating the model as the product, when the model is actually the last mile,” Gopal said. “The industry is spending hundreds of billions on smarter models and almost nothing on the 95% of work that has to happen before the model is called. The retrieval, the planning, the pre-computation.”

The second problem is verification.

Gopal argues that companies are underestimating how often current models produce answers that sound right but are wrong. In coding, that failure can be caught by a compiler or a test. In finance, operations, or legal work, there is no equivalent safety net.

“A confidently wrong answer looks right, gets approved, and ends up in a board deck,” he said.

PromptQL co-authored a benchmark with researchers at the University of California, Berkeley earlier this year that found 85 percent of agent failures on enterprise data tasks traced back to bad planning rather than insufficient compute.

That finding reinforces Gopal’s broader point: enterprise AI leverage will not come from model access alone. It will come from the systems around the model, including planning, shared context, retrieval, and verification.

Two or three years from now, Gopal expects internal work to look meaningfully different.

“Slack is no longer where internal work happens,” he said. “It’s a notification surface, the same way email became one. Most internal coordination has moved into AI-native environments where the agent is in the room by default.”

In that world, headcount may be smaller, but the scope of what each person can credibly own expands. The marketer can ship code. The analyst can run the financial model. The salesperson can stand up a dashboard.

The budget consequence follows.

Token spend becomes a real P&L line item. CFOs start scrutinizing AI usage the way they scrutinize payroll. And the companies that win are not necessarily the ones buying the most seats.

They are the ones that figure out how to make AI work shared, verifiable, and cheaper per unit of output.

Or, as Gopal put it: “Confidently wrong at enterprise scale is the failure mode that ends careers.”