Cambridge launches real-time index exposing the global market for fake account verifications

Key Takeaways

  • The University of Cambridge has unveiled the first global index tracking real-time prices and stock levels for fake account verifications across more than 500 platforms.
  • New analysis of 12 months of data reveals significant country‑level price gaps and election‑driven spikes on apps like Telegram and WhatsApp.
  • Researchers argue that SIM regulation and country‑of‑origin labeling could help curb the online manipulation economy, though vendors are already selling workarounds.

The University of Cambridge is putting hard numbers around a market that’s often described but rarely measured: the global trade in fake account verifications. The institution’s Social Decision-Making Lab has launched the Cambridge Online Trust and Safety Index (COTSI), a real-time tracker of the daily costs and available “stock” of SMS verifications sold by large SIM‑farm operators. It covers more than 500 platforms—everything from TikTok and Instagram to Uber, Airbnb, Amazon, and even McDonald’s.

The project arrives at a moment when businesses and governments are trying to understand just how industrialized manipulation has become. For once, the data isn’t anecdotal. It’s indexed, global, and updated continuously.

And it’s surprisingly cheap to operate a bot army in places many executives wouldn’t expect.

Cambridge’s year-long analysis, published in the journal Science, shows that the cost of verifying fake accounts for US and UK services is almost as low as in Russia, long assumed to be the cheapest market. During the July 2024 to July 2025 study window, the average price was $0.26 in the US, $0.10 in the UK, and $0.08 in Russia. Compare that with Japan at $4.93 and Australia at $3.24—differences driven primarily by SIM card pricing and ID requirements.

It’s a small detail, but it illustrates something critical: SIM costs, not platform security design, often dictate how expensive it is to unleash a fake‑engagement campaign. That’s a tough reality for platforms that spend heavily on verification pipelines.

The COTSI team identified 17 major vendors and used the top 10 by traffic to build the index, rotating four at a time into the active data feed. Their crawl doesn’t just capture pricing; it tracks availability—sometimes millions of verifications ready to go for countries like the US, UK, Brazil, and Canada. Stocks are highest for platforms including X, Uber, Discord, Amazon, Tinder, and Steam.
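
The article doesn’t describe how that rotation works in practice. As a purely illustrative sketch (the `vendor_rotation` helper below is an assumption, not the team’s actual method), a simple round-robin scheme over the top-10 vendor list, four at a time, might look like this:

```python
from itertools import cycle


def vendor_rotation(vendors, group_size=4):
    """Yield successive groups of `group_size` vendors, cycling through
    the full list so every vendor eventually enters the active feed.

    Hypothetical illustration only -- COTSI's real rotation logic
    is not documented in the article.
    """
    pool = cycle(vendors)
    while True:
        yield [next(pool) for _ in range(group_size)]


# With ten vendors and groups of four, the third group wraps around
# to the start of the list: ["I", "J", "A", "B"].
top_ten = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"]
feed = vendor_rotation(top_ten)
```

The point of a rotation like this is coverage: every vendor contributes to the index over time while the crawler only has to poll a small active set at once.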

Some prices are almost absurdly low. Meta, Grindr, and Shopify average $0.08 per verification globally. X and Instagram sit around $0.10. TikTok and LinkedIn hover near $0.11, and Amazon averages $0.12. It raises a fair question: if acquiring a fake identity on a major service costs less than a cup of coffee, how should risk teams recalibrate their assumptions?

Cambridge researchers even tested the providers directly. One major vendor delivered working US Facebook verifications only 21% of the time; another succeeded more than 90% of the time. That divergence came down largely to whether a provider relied on virtual SIMs or physical SIM hardware. Virtual numbers—commonly sold by CPaaS or IoT connectivity companies—are easy to purchase in bulk but often carry metadata that platforms can detect and block. Physical SIMs, or eSIMs from conventional carriers, tend to slip through more reliably. As a result, countries where SIM cards are costlier see higher fake‑account prices, which Cambridge suggests may suppress certain malicious activities simply by raising the financial barrier.

The story becomes more complicated when the research team looks at political timing. Prices for fake accounts on Telegram and WhatsApp rose sharply—12% and 15% on average—during the 30‑day periods preceding 61 national elections between mid‑2024 and mid‑2025. Because both apps display visible phone numbers, influence operators need accounts that appear native to the target country. That demand drives local SMS verification prices up.
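
The paper’s exact methodology isn’t spelled out in the article, but the shape of the measurement is straightforward: compare average prices in the 30 days before an election against the preceding 30-day baseline. A minimal sketch, assuming a simple list of dated price observations (the `pre_election_price_change` helper is hypothetical):

```python
from datetime import date, timedelta


def pre_election_price_change(prices, election_day, window_days=30):
    """Percent change in average price during the pre-election window
    versus the equal-length baseline period immediately before it.

    `prices` is a list of (date, price) tuples; returns None if either
    period has no observations. Illustrative only -- not the study's
    actual estimator.
    """
    window_start = election_day - timedelta(days=window_days)
    baseline_start = window_start - timedelta(days=window_days)
    window = [p for d, p in prices if window_start <= d < election_day]
    baseline = [p for d, p in prices if baseline_start <= d < window_start]
    if not window or not baseline:
        return None
    base_avg = sum(baseline) / len(baseline)
    win_avg = sum(window) / len(window)
    return 100.0 * (win_avg - base_avg) / base_avg
```

Fed with synthetic data where prices jump from $1.00 to $1.12 in the final month, the helper reports the same 12% rise the researchers observed for Telegram, on average, across the 61 elections.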

Platforms like Facebook or Instagram don’t show the same pattern. A fake account registered in Russia can post about the US or EU without revealing its true origin, so demand doesn’t behave in the same country-specific way. Reach, not locality, drives their usage.

Telegram and WhatsApp verifications also rank among the most expensive overall, averaging $1.02 and $0.89 globally. Telegram in particular remains a favored channel for state actors, including Russia, which has invested heavily in information warfare on the platform.

Underlying all of this is a grey market that’s become international, sophisticated, and—oddly—quite transparent. Many of the large vendors have customer support, provide bulk‑order tools for followers or fake accounts, and use Russian and Chinese payment systems. The grammar and phrasing on many sites hint at Russian authorship. One telling wrinkle: Russia’s recent law banning third‑party account registration forced vendors to suspend Russian-origin SMS verifications as of late 2025, but it didn’t prevent Russia-based operators from selling verifications tied to other countries. It says something about how fluid jurisdiction becomes in a business built on obfuscation.

The Cambridge team has a policy angle as well. They argue that stricter SIM card regulation could discourage parts of the manipulation economy. The UK outlawed SIM farms in April 2025—an unusual move within Europe—and researchers expect COTSI to provide a real-time view into whether that law actually shifts vendor behavior. They also note that account-level country‑of‑origin labels, similar to what X recently introduced, might help with transparency. Still, vendors already offer services to circumvent such labels.

What does all of this mean for B2B leaders? For one, it reframes the cost structure behind deceptive traffic. If artificial engagement can be purchased at single‑cent pricing, companies need to reconsider both monitoring strategies and assumptions about what’s real. It also underscores how generative AI is changing the game. As Dr. Jon Roozenbeek points out, sophisticated bots can now tailor interactions, hold conversations, and manage hundreds of accounts. The infrastructure to support them—SIM banks, virtual carriers, traffic‑rerouting systems—has consolidated into a predictable market with real supply and demand dynamics.

And yet, there’s a curious upside here. By turning hidden transactions into measurable data, COTSI gives companies, regulators, and researchers a way to observe the economic heartbeat of online manipulation. It’s far from a full solution, but it’s the first consistent window into how the business model of misinformation actually works—and how it fluctuates under policy pressure, platform updates, or geopolitical events.