Anthropic's $380 Billion Valuation
$30 Billion Raised and the State of the Models
On 12 February 2026, Anthropic announced the close of a US$30 billion Series G funding round, valuing the company at US$380 billion post-money. It was the second-largest private technology financing round in history, trailing only OpenAI’s raise of over US$40 billion the previous year. The round was led by Singapore’s sovereign wealth fund GIC and Coatue, with additional capital from Founders Fund, D. E. Shaw, and many others. The deal underscored a simple idea: frontier large language model companies are no longer start-ups in any conventional sense. They are the fastest-growing enterprises the technology industry has ever produced, built around what some perceive to be the most important technology in human history, and the capital markets are pricing them accordingly.
To appreciate the scale, consider the broader landscape. OpenAI was valued at US$500 billion in an October 2025 secondary share sale and is reportedly seeking a further US$100 billion round that would push its valuation towards US$830 billion. xAI was valued at US$250 billion when it merged with SpaceX on 2 February 2026 (albeit more of an internal valuation), creating a combined entity worth US$1.25 trillion ahead of a planned IPO. Google’s parent Alphabet, the only publicly traded peer operating at the frontier (Meta’s Llama currently lags on most performance benchmarks), carries a market capitalisation approaching US$4 trillion and has committed up to US$185 billion in capital expenditure for 2026 to defend its position. These numbers are staggering, and they are also, in the eyes of many investors, entirely rational bets on what the AI industry promises to deliver.
LLMs as the Measure of Progress
There is a lively, even heated, debate within the AI research community about whether large language models represent the path to artificial general intelligence or merely a useful waypoint; some even consider them a dead end. Some researchers argue that the current architecture, for all its remarkable successes, lacks the grounding, reasoning depth, and embodied understanding needed to reach genuine machine intelligence. Others contend that scale, data, and clever engineering may be sufficient to close these gaps, especially as models are augmented with tool use, memory, and agentic capabilities. Regardless of where one lands on this question, the practical reality in early 2026 is that LLMs are the dominant proxy for AI progress. They are the products generating tens of billions of dollars in revenue. They are what enterprises are deploying across their operations. They are what governments are subsidising through infrastructure programmes. And they are what investors are using to determine the current pecking order of the AI industry. By most measures, OpenAI, Google DeepMind, and Anthropic are widely recognised as the frontier leaders, with ambitious challengers such as xAI, Meta, DeepSeek, and Alibaba, plus a vibrant ecosystem of other companies concentrated mostly in the US and China. Benchmark performance (which many consider flawed to begin with) among the leading flagship models has converged significantly over the past twelve months. On widely tracked evaluations, the gap between Claude Opus 4.6, GPT-5.2, and Gemini 3 Pro has narrowed to the point where leaderboard positioning shifts with each new release. The competitive picture is fluid and increasingly difficult to summarise with a single ranking, especially in such an ambitious and undefined field. What is clear is that no single lab has established a durable lead, and each new model generation resets the competition.
Whether large language models ultimately prove central to the long-term pursuit of artificial general intelligence and beyond, or merely a stepping stone superseded by breakthroughs in spatial AI, neuro-symbolic reasoning, novel architectures, or something not yet imagined, is genuinely unknown, but in the current landscape, they are the yardstick by which the industry measures itself.
Financial Growth at Unprecedented Scale
The financial trajectories of the leading AI companies are, by any historical standard, extraordinary. OpenAI reported annualised recurring revenue of US$20 billion at the end of 2025, up from US$6 billion in 2024 and US$2 billion in 2023, a tenfold increase over two years. In its blog post disclosing the figure, OpenAI’s CFO Sarah Friar drew a direct line between revenue growth and compute expansion, noting that the company’s available compute had grown from 0.2 gigawatts in 2023 to 1.9 gigawatts in 2025. Anthropic has tracked a remarkably similar curve. The company disclosed alongside its Series G that its run-rate revenue had reached US$14 billion, having grown more than tenfold annually over the past three years. At the start of 2025, Anthropic’s run rate was approximately US$1 billion. By August it had surpassed US$5 billion. The number of customers spending more than US$100,000 annually has grown sevenfold in the past year, and more than 500 now spend over US$1 million. Google does not disclose Gemini-specific revenue, but the sheer scale of its investment is telling: with up to US$185 billion earmarked for capital expenditure in 2026, Alphabet is outspending every other AI participant by a wide margin, a reflection of both its existing infrastructure and its determination not to cede the frontier.
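As a quick sanity check of the growth arithmetic cited above, the implied multiples and compound annual growth rate can be computed directly from the disclosed figures (this is just illustrative arithmetic on the numbers in the text, not data from any company filing):

```python
def growth_multiple(start, end):
    """How many times revenue multiplied between two points."""
    return end / start

def cagr(start, end, years):
    """Compound annual growth rate implied by start and end values."""
    return (end / start) ** (1 / years) - 1

# OpenAI ARR figures cited above, in US$ billions: 2 (2023) -> 6 (2024) -> 20 (2025)
print(growth_multiple(2, 20))           # 10.0 -- tenfold over two years
print(round(cagr(2, 20, 2) * 100))      # 216  -- roughly 216% implied annual growth
```

The same arithmetic applied to Anthropic’s US$1 billion start-of-2025 run rate against its US$14 billion Series G figure gives a fourteenfold jump in under a year, which is consistent with the "more than tenfold annually" framing.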
Claude Code, Cowork and the SaaSpocalypse
If any single product has defined Anthropic’s ascent in early 2026, it is Claude Code. Launched in May 2025, the AI coding agent has grown to an annualised revenue run rate of over US$2.5 billion (a figure cited by Anthropic itself, though the exact methodology is unclear: Claude Code access is bundled into broader Claude subscription tiers and API usage, making it difficult to isolate precisely), more than doubling since the start of 2026 alone. Business subscriptions have quadrupled in the same period, and enterprise customers now account for more than half of Claude Code’s revenue. A recent SemiAnalysis report estimated that roughly 4 per cent of all public GitHub commits worldwide were authored by Claude Code, double the share from just one month earlier. The anecdotal evidence is just as striking: Spotify co-CEO Gustav Söderström told investors this week that the company’s best engineers have not written a single line of code since December, instead generating and supervising output through an internal system built on Claude Code.
The product that truly rattled global markets was Claude Cowork, a desktop productivity agent that extends Claude’s capabilities beyond coding into broader knowledge work. When Anthropic released eleven open-source, industry-specific plug-ins for Cowork on 30 January, covering legal, finance, sales, and data analysis workflows, the reaction was severe. Software stocks across the S&P 500 fell sharply. Thomson Reuters and LegalZoom each dropped more than 15 per cent. FactSet fell around 10 per cent, and S&P Global, Moody’s, and dozens of other enterprise software names suffered double-digit declines. The selloff spread globally, hitting Indian IT exporters, Japanese staffing firms, and Chinese software indices. According to J.P. Morgan, software stocks have now undergone their largest non-recessionary twelve-month drawdown in over thirty years, with approximately US$2 trillion in market capitalisation wiped from the sector peak. Nvidia CEO Jensen Huang called the panic “illogical,” arguing that AI will use existing software tools rather than replace them. Analysts at Wedbush cautioned that investors were pricing in a doomsday scenario far from reality. Gartner wrote that Cowork’s plug-ins are potential disrupters for task-level knowledge work but not a replacement for SaaS applications managing critical business operations. J.P. Morgan’s own strategists argued the selloff had gone too far, noting deeply embedded switching costs and strong earnings fundamentals across the sector. But Goldman Sachs strategist Ben Snider offered a more sobering view, warning this may be “the end of the beginning” and drawing parallels to the long decline of the newspaper industry. Jefferies coined the term “SaaSpocalypse.” Whether the market reaction is rational or not, it lays bare the depth of fear and uncertainty gripping the industry as many simply do not know how far or how fast these AI tools will reshape the landscape, and that uncertainty alone is enough to move trillions.
Consumer Versus Enterprise
One of the more notable dynamics in AI right now is the divergence between consumer and enterprise markets. In consumer AI, ChatGPT remains dominant, but its lead is shrinking fast. Similarweb data from January 2026 puts ChatGPT at roughly 64.5 per cent of global generative AI web traffic, down from 86.7 per cent a year earlier. On a mobile app basis, Apptopia shows it falling from 69.1 per cent to 45.3 per cent. Google’s Gemini has been the main beneficiary, rising from 5.7 per cent to 21.5 per cent in web traffic and, by some accounts, to 25.2 per cent on an app basis, powered by integration across Android, Chrome, and Workspace combined with an increasingly capable model. Gemini now has over 750 million monthly active users, closing in on ChatGPT’s estimated 810 million. Claude holds about 2 per cent of consumer traffic according to some sources, though it leads all competitors in time spent per daily active user at roughly 34.7 minutes. The enterprise market looks very different. According to Menlo Ventures, Anthropic overtook OpenAI as the top enterprise LLM provider in 2025 with 32 per cent of usage, versus OpenAI’s 25 per cent (down from 50 per cent in 2023). By late 2025, Anthropic’s share of enterprise LLM spend had risen to an estimated 40 per cent.
One explanation for the split is that consumer AI is largely a distribution game, where pre-installation, brand recognition, and free tiers matter most in the short term. Google’s ecosystem and OpenAI’s first-mover advantage give Gemini and ChatGPT distribution that is hard to match, especially in the near term. Enterprise AI, by contrast, is won on production reliability, safety, and the ability to handle complex workflows, where Anthropic has built a disproportionate presence relative to its consumer footprint. This is obviously a simplification, but I think it is a useful lens for understanding the situation.
Enterprise adoption also appears further ahead in terms of clear value. Enterprise generative AI spending tripled to roughly $37 billion in 2025 according to Menlo Ventures, and the rationale is straightforward: models that can automate coding, draft documents, handle customer queries, and process data at a fraction of the cost of human labour represent an obvious productivity gain. The ROI case largely speaks for itself. On the consumer side, the picture is less clear. Most of what today’s chatbots offer, whether answering questions, summarising information, or helping with writing, are things the average person can already do well enough themselves, or that free tiers handle to a good enough standard. The convenience gain exists, but it is modest relative to what enterprises see, and it is not yet obvious why hundreds of millions of people would pay a monthly subscription for it. This is starting to show in how companies are thinking about business models. OpenAI has begun exploring advertising, signalling that even the market leader sees limits to pure subscription growth at consumer scale. A search-adjacent, ad-supported model (closer to how Google has monetised) may prove a more natural fit for mass consumer AI than asking users to pay $20 a month. Consumer AI may well find its footing as capabilities improve and genuinely new use cases emerge, but for now the gap between what’s on offer and what feels essential in daily life remains wide, and the business model question is still very much open.
Closing Remarks
The fundraising arms race among AI companies reflects a simple reality: training and running frontier AI models is extraordinarily expensive. These companies are raising more money than any private enterprise has ever raised before, and none of them is profitable. OpenAI reportedly lost approximately US$9 billion in 2025 (based on projections) on US$20 billion in revenue. Anthropic’s burn rate has been estimated at several billion dollars annually, though the company has indicated it expects to reduce losses substantially in 2026 and reach profitability by 2028.
These valuations defy every conventional metric. At US$380 billion on US$14 billion in run-rate revenue, Anthropic trades at roughly 27 times revenue. OpenAI, at US$500 billion on US$20 billion, trades at 25 times. At its rumoured US$830 billion target, the multiple would rise to over 40 times. For context, mature high-growth SaaS companies typically trade at 5–20 times revenue. Price-to-sales is admittedly an imperfect indicator, but it does give a general sense of scale. Whether these valuations prove justified is the central question of the current technology cycle. The bull case is straightforward: if AI can automate a meaningful fraction of knowledge work, the addressable market is measured in trillions of dollars, and the companies that control the most capable models will be the platforms upon which that future is built. The bear case is equally clear: the capital intensity of training and inference is enormous, competition is fierce, open-source alternatives continue to improve, and the history of technology is littered with examples of early leaders who failed to capture the value they helped create.
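For readers who want to check the multiples themselves, the price-to-sales arithmetic above reduces to a one-line calculation over the figures cited in this piece (the numbers are the ones quoted above, not independent data):

```python
def ps_multiple(valuation_bn, revenue_bn):
    """Price-to-sales multiple: valuation divided by annualised revenue."""
    return valuation_bn / revenue_bn

# Figures cited above, in US$ billions
print(round(ps_multiple(380, 14)))   # Anthropic: ~27x
print(round(ps_multiple(500, 20)))   # OpenAI, current valuation: 25x
print(round(ps_multiple(830, 20)))   # OpenAI, rumoured target: ~42x
```

Against the 5–20x range typical of mature high-growth SaaS, all three sit well above the top of the band, which is the point the comparison is meant to make.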
The AI industry in early 2026 sits in an exciting but uncertain place. The infrastructure buildout is accelerating, with hyperscalers collectively planning to spend over US$660 billion on AI facilities this year alone. Both Anthropic and OpenAI are reportedly preparing for initial public offerings, which would subject their financials to the scrutiny of public markets for the first time. The software industry is grappling with what may be a once-in-a-generation disruption to its business model. And the question of whether LLMs are a stepping stone to something far more profound or a ceiling that will require fundamentally new approaches to surpass remains unresolved.
What is beyond dispute is the pace. Anthropic went from zero revenue to US$14 billion in annualised run rate in roughly three years. OpenAI reached US$20 billion in the same timeframe. These are growth curves that have no precedent in the history of enterprise software, and they are occurring against a backdrop of geopolitical competition, massive capital deployment, and a potential complete reinvention of the global workforce and jobs. The race to build artificial intelligence is one of the defining economic contests of the twenty-first century, and its outcome will shape industries, labour markets, and societies for decades to come.
*Disclaimer: This information is for general informational purposes only and does not constitute financial, investment, or professional advice. The author may hold positions in the assets or companies discussed.
Bonnefin Research is a free publication focused on better investment thinking. Subscribe and share to support the work, and join the discussion in the comments as we continue building this community.

