OpenAI data shows a widening 6x productivity gap between AI power users and everyone else

Key Takeaways

  • OpenAI’s latest enterprise usage report shows frontier workers sending 6x as many messages to ChatGPT as median employees, with a 17x gap in coding tasks.
  • MIT’s Project NANDA finds that despite $30–40B in GenAI spending, only 5% of companies see transformative returns.
  • Both studies point to behavior and organizational structure—not access or tooling—as the real drivers of the divide.

The tools are rolled out. The trainings happened. The licenses are active across millions of seats. And yet, OpenAI’s new enterprise usage report lands with a jarring conclusion: access is no longer the problem. Behavior is.

Workers at the 95th percentile of AI adoption—what the report calls “frontier users”—send six times as many messages to ChatGPT as the median employee. That alone is striking, but the task-level gaps are even more revealing. Frontier workers send 17 times as many coding-related messages as their median peers, and frontier data analysts hit the data analysis tool 16 times more often. It’s a small detail, but it says a lot about how quickly technical work is being redistributed inside companies.

Everyone has the same software. Only some are changing their workflow.

That’s the part executives don’t always want to admit. OpenAI’s data covers more than one million business customers and over seven million enterprise seats, yet 19 percent of monthly active users have never touched the data analysis feature. Fourteen percent haven’t used reasoning. Twelve percent haven’t used search. These aren’t edge features. They’re the core of what companies point to when justifying AI investments in the first place.

Daily users tell a different story. Only 3 percent of them have skipped data analysis; just 1 percent have avoided search or reasoning. The habit line, not the access line, is where the real split forms.

And once people start experimenting, the compounding kicks in. Workers who apply AI across roughly seven different task categories report saving five times as much time as those who use it across four. Employees who save more than 10 hours per week consume eight times more AI credits than workers who report no savings at all. There’s a flywheel here: more experimentation leads to more use cases, which leads to more time saved, which likely leads to better performance reviews and more opportunities to apply AI again.

Seventy-five percent of surveyed workers said they can now complete tasks they previously couldn’t—programming support, spreadsheet automation, tech troubleshooting. For the ones leaning in, job boundaries are expanding. For the ones holding back, the boundaries may feel like they’re shrinking.
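To make “spreadsheet automation” concrete, here is a minimal sketch of the kind of script a non-engineer might now produce with AI assistance: it merges monthly expense exports and totals spend by department. The folder, file names, and column names (reports/, expenses_*.csv, department, amount) are illustrative assumptions, not details from either report.

```python
# Minimal sketch: merge monthly expense CSVs and total spend per department.
# File, folder, and column names are illustrative assumptions, not from the report.
from pathlib import Path

import pandas as pd

# Collect every monthly export dropped into a shared reports folder.
frames = [pd.read_csv(path) for path in sorted(Path("reports").glob("expenses_*.csv"))]
combined = pd.concat(frames, ignore_index=True)

# Total spend per department across all months, largest first.
summary = (
    combined.groupby("department", as_index=False)["amount"]
    .sum()
    .sort_values("amount", ascending=False)
)
summary.to_csv("department_totals.csv", index=False)
print(summary)
```

Nothing about it is sophisticated; the point is that tasks like this used to require a request to another team.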

Still, individual behavior isn’t the only divide. MIT’s Project NANDA team, in a separate study, found an eerie parallel at the organizational level. Despite $30–40 billion in generative AI investments, only 5 percent of companies are seeing transformative returns. The majority are stuck in pilots—some of them very expensive pilots. Only technology and media show meaningful business transformation so far. The rest are dabbling without crossing the threshold into actual operational change. MIT researchers refer to this as the “GenAI Divide” in their study.

What does that mean for teams already struggling with integration debt? That’s where it gets tricky.

According to MIT, just 40 percent of companies have purchased official LLM subscriptions, yet employees at more than 90 percent of firms use personal AI tools for work. The so-called “shadow AI” economy is thriving. Workers aren’t waiting for IT. They’re signing up for personal plans, testing AI on their own, and figuring out what actually improves their productivity. Counterintuitively, MIT found that these unsanctioned workflows often deliver better ROI than formal initiatives. A micro-tangent here: it mirrors the early days of cloud, when unauthorized Dropbox folders quietly solved real problems while official enterprise systems lagged behind.

Across both studies, the largest gaps show up exactly where generative AI has made the most progress—coding, writing, and analysis. And it’s not limited to engineering departments. Among ChatGPT Enterprise users in marketing, HR, legal, finance, and other non-technical functions, coding-related messages jumped 36 percent in six months. Someone who can automate their own workflows is slowly becoming a different kind of employee than someone who can’t, even when they share a job description.

The academic research OpenAI cites notes an “equalizing effect” among workers who actually use AI. Lower performers catch up faster. But the equalizing effect only applies to the people who participate. A significant share simply aren’t using the tools enough to benefit.

The same pattern applies to companies. Frontier firms generate twice as many AI messages per employee as the median enterprise. And when you look specifically at custom GPT usage—tools tailored for internal workflows—the gap widens to seven-fold. Median companies appear to treat AI as optional productivity tooling. Frontier companies treat it as infrastructure.

Even so, about one in four enterprises still hasn’t enabled data connectors that let AI access company information, according to OpenAI’s report. Without connectors, the usefulness of any generative AI deployment drops dramatically. MIT’s researchers also found that organizations fare far better when buying AI tools from specialized vendors than when trying to build everything in-house: a 67 percent success rate for purchased tools versus roughly one in three for internal builds. It’s a reminder that the AI era may be “live” in theory but not in practice for many firms.

OpenAI, for its part, says it’s shipping new capabilities roughly every three days. The constraint is no longer model performance. It’s whether organizations can absorb the pace of change. MIT’s formulation is blunt: the dividing line isn’t intelligence. The real obstacles are memory, adaptability, and the capacity of tools to learn. Enterprise systems that can’t evolve quickly enough simply get bypassed—often by their own employees.

Leading firms do the unglamorous work. They invest in executive sponsorship, data readiness, workflow standardization, and change management. They share custom tools across teams. They evaluate performance and make AI adoption a strategic priority. Everyone else is hoping adoption will just happen on its own.

The gap—six-fold, seventeen-fold, seven-fold depending on the metric—suggests it won’t.

And with enterprise contracts locking in over the next 18 months, the window for catching up is narrowing. The GenAI Divide won’t last forever, but the companies that figure out how to cross it sooner will shape the next phase of competition.

For now, most workers still prefer humans for mission-critical tasks—90 percent said so in OpenAI’s survey—while AI has “won the war for simple work.” The people pulling ahead aren’t doing so because they have extra access. They’re pulling ahead because they decided to use the tools everyone already has, and kept going long enough to find the leverage points.

The story isn’t about the software. It’s about behavior. And behavior doesn’t update on a quarterly release cycle.