SAN FRANCISCO — Inside Anthropic headquarters, President and co-founder Daniela Amodei keeps coming back to a phrase that’s become a sort of governing principle for the artificial intelligence startup’s entire strategy: Do more with less.
It’s a direct challenge to the prevailing mood across Silicon Valley, where the biggest labs and their backers are treating scale as destiny.
Firms are raising record sums, locking up chips years in advance, and pouring concrete across the American heartland for data centers in the belief that the company that builds the largest intelligence factory will win.
OpenAI has become the clearest example of that approach.
The company has made roughly $1.4 trillion in headline compute and infrastructure commitments as it works with partners to stand up massive data center campuses and secure next-generation chips at a pace the industry has never seen.
Anthropic’s pitch is that there’s another way through the race, one where disciplined spending, algorithmic efficiency, and smarter deployment can keep you at the frontier without trying to outbuild everyone else.
“I think what we have always aimed to do at Anthropic is be as judicious with the resources that we have while still operating in this space where it’s just a lot of compute,” Amodei told CNBC. “Anthropic has always had a fraction of what our competitors have had in terms of compute and capital, and yet, pretty consistently, we’ve had the most powerful, most performant models for the majority of the past several years.”

Daniela Amodei and her brother, Dario Amodei, Anthropic’s CEO and an alumnus of Baidu and Google, helped build the very worldview they’re now betting against.
Dario Amodei was among the researchers who helped popularize the scaling paradigm that has guided the modern model race: the observation that increasing compute, data, and model size tends to improve a model’s capabilities in a predictable way.
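In rough form, the scaling laws described in the published research literature express model loss as a power law in compute, so that each additional order of magnitude of compute buys a predictable improvement. The formulation below is illustrative, drawn from that public literature rather than from any specific Anthropic equation; the constants are fitted empirically, not derived:

```latex
% Illustrative scaling law: pre-training loss L as a function of compute C.
% C_0 is a fitted reference scale; the exponent \alpha (small, on the order
% of 0.05 in published studies) sets how quickly loss falls as compute grows.
L(C) \approx \left(\frac{C_0}{C}\right)^{\alpha}
```

The small exponent is what makes the race so capital-intensive: because loss falls slowly in compute, staying on the curve means multiplying spending, which is exactly the dynamic Anthropic argues can also be attacked through data quality and algorithmic efficiency.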
That pattern has effectively become the financial bedrock of the AI arms race.
It underwrites hyperscaler capital spending, justifies towering chip valuations, and keeps private markets willing to assign enormous prices to companies that are still spending heavily to reach profitability.
But even as Anthropic has benefited from that logic, the company is trying to prove that the next phase of competition won’t be decided only by who can afford the largest pre-training runs.
Its strategy leans into higher-quality training data, post-training techniques that improve reasoning, and product choices designed to make models cheaper to run and easier to adopt at scale — the part of the AI business where the compute bill never stops.
To be clear, Anthropic isn’t operating on a shoestring. The company has roughly $100 billion in compute commitments, and expects those requirements to keep rising if it wants to stay at the frontier.
“The compute requirements for the future are very large,” Daniela Amodei said. “So our expectation is, yes, we will need more compute to be able to just stay at the frontier as we get bigger.”
Still, the company argues that the headline numbers flying around the sector are often not directly comparable — and that the industry’s collective certainty about the “right” amount to spend is less solid than it sounds.
“A lot of the numbers that are thrown around are sort of not exactly apples to apples, because of just how the structure of some of these deals are kind of set up,” she said, describing an environment where players feel pressure to commit early to secure hardware years down the line.
The bigger truth, she added, is that even insiders who helped shape the scaling thesis have been surprised by how consistently performance and business growth have compounded.

“We have continued to be surprised, even as the people who pioneered this belief in scaling laws,” Daniela Amodei said. “Something that I hear from my colleagues a lot is, the exponential continues until it doesn’t. And every year we’ve been like, ‘Well, this can’t possibly be the case that things will continue on the exponential’ — and then every year it has.”
That line captures both the optimism and the anxiety of today’s buildout.
If the exponential keeps holding, then the companies that lock up power, chips and sites early may look prescient. If it breaks — or if adoption lags behind the pace of capability — then the players that overcommitted could be left carrying years of fixed costs and long-lead-time infrastructure built for demand that never arrives.
Daniela Amodei drew a distinction between the technology curve and the economic curve, an important nuance that tends to get conflated in the public debate.
From a technological perspective, she said Anthropic doesn’t see progress slowing down, based on what the company has observed so far. The more complicated question is how quickly businesses and consumers can integrate those capabilities into real workflows where procurement, change management, and human friction can slow even the best tool.
“Regardless of how good the technology is, it takes time for that to be used in a business or sort of personal context,” she said. “The real question to me is: How quickly can businesses in particular, but also individuals, leverage the technology?”
That enterprise emphasis is central to why Anthropic has become such a closely watched bellwether for the broader generative AI trade.
The company has positioned itself as an enterprise-first model provider, with much of its revenue tied to other companies paying to plug Claude into workflows, products, and internal systems — usage that can be stickier than a consumer app, where churn can rise once the novelty fades.

Anthropic said revenue has grown tenfold year over year for three straight years. And it has built a distribution footprint that’s unusual in a market defined by fierce rivalry. The Claude model is available across the major cloud platforms, including through partners that are also building and selling competing models.
Daniela Amodei framed that presence less as détente and more as a reflection of customer pull, with large enterprises wanting optionality across clouds, and cloud providers wanting to offer what their biggest customers are asking to buy.
In practice, that multicloud posture is also a way to compete without making a single infrastructure bet.
If OpenAI is attempting to anchor a vast buildout around bespoke campuses and dedicated capacity, Anthropic is trying to remain flexible, shifting where it runs based on cost, availability, and customer demand, while focusing internal energy on improving model efficiency and performance per unit of compute.
As 2026 begins, the divide matters for another reason: Both companies are being pushed toward the discipline of public-market readiness while still operating in a private-market world where compute needs are growing faster than certainty.
Anthropic and OpenAI have not announced IPO timelines, but both are making moves that look like preparation: building out finance, governance, and forecasting functions, and adopting an operating cadence that can withstand public scrutiny.
At the same time, both are still raising fresh capital and striking ever-larger compute arrangements to fund the next leg of model development.
That sets up a real test of strategy rather than rhetoric.
If the market keeps funding scale, OpenAI’s approach may remain the industry standard. If investors start demanding greater efficiency, Anthropic’s “do more with less” posture could become an advantage.
In that sense, Anthropic’s contrarian bet isn’t that scaling doesn’t work. It’s that scaling isn’t the only lever that matters, and that the winner of the next phase may be the lab that can keep improving while spending in a way the real economy can sustain.
“The exponential continues until it doesn’t,” Daniela Amodei said. The question for 2026 is what happens to the AI arms race — and to the companies building it — if the industry’s favorite curve finally stops behaving.