Richard Batt
OpenAI Is Targeting $600 Billion in Compute by 2030
Tags: AI Strategy, Business
In January 2026, OpenAI confirmed that it is targeting $600 billion in compute infrastructure investment by 2030. That is not a typo. Six hundred billion dollars. Over five years. That is more than the entire revenue of Microsoft, invested exclusively in the silicon, data centres, and electricity required to run AI models.
Key Takeaways
- The picks-and-shovels analogy
- What this means for AI pricing
- The vendor concentration risk
- The real winners
- What this means for your business
Google and Amazon are investing on a similar scale, and Microsoft is pouring vast sums into the same arms race. This is a capital expenditure frenzy not seen since the dot-com era, and it dwarfs even that buildout.
This matters to you not because you care about OpenAI's balance sheet, but because the outcome of this capital race will determine the shape of the AI industry for the next decade, and the shape of the AI industry will determine your pricing, your options, and your strategic choices.
I have worked through enough technology cycles to recognise a pattern. This is the picks-and-shovels moment. And if you do not understand what that means for your business, you are about to make expensive strategic mistakes.
The Picks-and-Shovels Analogy
During the gold rush, the people who got rich were not always the ones mining gold. Many of them were the ones selling picks and shovels to miners.
The AI infrastructure buildout is a gold rush. But it is a gold rush for compute, not gold. And the people getting rich are not necessarily the ones building the best AI models. Many of them are going to be the ones providing the infrastructure that enables those models.
OpenAI, Google, Amazon, and Microsoft are all building massive data centres, contracting for chips from Nvidia and others, securing electricity contracts, and building the fundamental compute layer that everything else sits on top of.
I sat down with a data centre operator last month who told me something that crystallised this for me: "We have more demand for rack space than we have had in fifteen years. Every tech company wants to own compute infrastructure now. Everyone is building their own data centres. The constraint is not capital. The constraint is physical space and reliable electricity."
This is the picks-and-shovels moment playing out in real time.
What This Means for AI Pricing
Here is the first practical implication: AI pricing is about to drop significantly, and then it is going to stabilise at a much lower level than it is today.
OpenAI's pricing for GPT-4 Turbo is currently £0.03 per 1,000 input tokens and £0.06 per 1,000 output tokens. That is expensive. That pricing made sense when compute was scarce and heavily competed for. When supply is constrained, prices are high.
By 2030, when OpenAI has deployed $600 billion in compute infrastructure, supply will no longer be constrained. Pricing will drop. How much? I would estimate 60 to 70 per cent reduction in pricing within three years, and another 30 to 40 per cent reduction over the following two years. We will see commodity pricing for API access to frontier models.
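Those two estimated reductions compound rather than add, which is worth making explicit. A quick sketch of the arithmetic (the percentages are this article's estimates, not anyone's published roadmap):

```python
# Compound the two estimated price-reduction phases.
# Phase 1: 60-70% reduction within three years.
# Phase 2: a further 30-40% over the following two years.

def remaining_price(reductions):
    """Multiply through the fraction of the price left after each cut."""
    price = 1.0
    for cut in reductions:
        price *= (1.0 - cut)
    return price

worst_case = remaining_price([0.60, 0.30])  # the smaller cuts
best_case = remaining_price([0.70, 0.40])   # the larger cuts

print(f"Total reduction: {1 - worst_case:.0%} to {1 - best_case:.0%}")
# A 60% cut followed by a 30% cut is a 72% total reduction;
# 70% followed by 40% is 82%.
```

So the two phases together imply roughly a 72 to 82 per cent total reduction from today's prices, not a simple sum of the percentages.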
Practical tip: If you are locked into a long-term contract with an AI vendor at today's pricing, you are about to overpay significantly. Do not do this. Keep your contracts short and revisit pricing every six to twelve months. The vendor is not going to volunteer a price reduction. You need to renegotiate.
I worked with a financial services firm that signed a three-year deal with an AI vendor in 2024 at £50,000 per month. Today, the same workload runs at £18,000 per month on competitive pricing. They are stuck in the contract. Do not make this mistake. Price is not stable. Price is about to move dramatically in one direction.
The Vendor Concentration Risk
Here is the second implication, and it is more troubling: a handful of companies will control the infrastructure layer of AI.
OpenAI will have its own infrastructure. Google will have its own infrastructure through Google Cloud. Amazon will have its own infrastructure through AWS. Microsoft will have access through its partnership with OpenAI, but it will also be building its own capacity.
And that is it. No other company has the capital, the coordination, or the technical expertise to build compute infrastructure at the scale required to be competitive with frontier AI models.
This creates a vendor concentration risk that most organisations are not thinking about carefully enough.
I sat down with a CTO at a mid-market SaaS company in February. They had built their AI product on top of OpenAI's API. When I asked them about their vendor risk, they said, "We could migrate to another model provider if we needed to." I asked them which provider. They paused. They could move to Claude or Llama, but those models would require reworking their prompts and evaluations for their use case. And they were already paying into OpenAI's ecosystem. Switching costs were real.
Understand this: if you build on top of OpenAI's infrastructure in 2026, you are betting that OpenAI will remain competitive, well-managed, and available for the lifetime of your product. That is not a certainty. OpenAI is barely a decade old. No company reaches maturity without strategic mistakes. Management changes. Priorities shift. Pricing decisions alienate customers.
But the alternative is to use a smaller model provider or an open-source model, which means accepting lower performance or higher operational complexity.
The Real Winners
Here is what I think will happen, and it contradicts much of the conventional wisdom.
The most profitable companies in this cycle will not be OpenAI, Google, or Amazon. They will be the infrastructure companies that enable all of them: Nvidia and the other chip makers, but also cooling companies, power companies, cable and networking companies. The companies building the physical layer that compute sits on.
The second tier of winners will be the companies that build the abstraction layer on top of the infrastructure. Not the AI model providers, but the platforms that sit between users and the infrastructure. Think of it like this: in the internet gold rush, the winners were not just the big tech companies. They were also the companies that provided development frameworks, deployment platforms, and monitoring tools.
I expect we will see significant consolidation around AI operations platforms in the next three years. Companies that help you manage costs across multiple AI providers, that help you route requests intelligently, that help you monitor and optimise usage. These companies will be quite valuable.
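The routing idea at the heart of those platforms is simple to sketch. The provider names and per-token prices below are invented placeholders for illustration, not real price lists:

```python
# Toy cost-aware router: pick the cheapest provider whose models
# meet a required capability tier. All values here are illustrative.

PROVIDERS = {
    # name: (price per 1,000 tokens, capability tier)
    "provider_a": (0.030, "frontier"),
    "provider_b": (0.010, "frontier"),
    "provider_c": (0.002, "small"),
}

def route(required_tier):
    """Return the cheapest provider offering the required tier."""
    candidates = [
        (price, name)
        for name, (price, tier) in PROVIDERS.items()
        if tier == required_tier
    ]
    if not candidates:
        raise ValueError(f"no provider offers tier {required_tier!r}")
    _, name = min(candidates)
    return name

print(route("frontier"))  # the cheaper of the two frontier options
```

Real platforms layer latency, rate limits, and quality scoring on top of this, but the core commercial logic, route each request to the cheapest acceptable supplier, is exactly this.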
The least profitable companies will probably be the ones trying to build proprietary AI models to compete with OpenAI and Google. The capital requirements are astronomical. The competitive moat is unclear. The returns are uncertain.
What This Means for Your Business
Let me translate this into concrete decisions you need to make in 2026.
First, if you are planning to make significant AI investments, understand that pricing will move down, not up. Build your financial models assuming 50 per cent lower per-unit costs within two years. If the business case requires current pricing to work, it is not a real business case.
Second, if you are choosing an AI vendor or platform, think carefully about lock-in. Can you switch providers if needed? Are you betting on a vendor's continued existence and competence? What is your exit strategy?
Third, do not overinvest in proprietary AI model development unless you have $500 million and ten years to burn. The returns are not there. Instead, invest in understanding how to use foundation models effectively. Invest in infrastructure and operations. Invest in data and domain expertise.
Fourth, if you have capital available, look at infrastructure plays, not AI application plays. The picks-and-shovels moment rewards the tool makers, not the miners.
I worked with a venture capital firm in January that was evaluating whether to invest in an AI application company or an AI operations company. The application company had great traction. The operations company was early but solving a real problem. I told them: the application company might be worth £50 million in five years. The operations company might be worth £500 million. The capital is flowing to infrastructure, not applications. That does not mean you should not invest in applications, but understand the asymmetry.
The Longer View
By 2030, when the $600 billion in compute is deployed, the shape of the AI industry will be set. The infrastructure layer will be consolidated. Pricing will be commoditised. The differentiation will be in how you use AI, not in the AI itself.
The companies that win will be the ones that understood this early and structured themselves accordingly. They will not be overexposed to any single vendor. They will not be over-invested in proprietary models. They will be flexible and opportunistic.
The companies that lose will be the ones that thought AI was a tactical tool, not a strategic shift. They will be locked into contracts at old pricing. They will be dependent on a single vendor. They will be trying to catch up to the architecture that winners built three years earlier.
Richard Batt has delivered 120+ AI and automation projects across 15+ industries. He helps businesses deploy AI that actually works, with battle-tested tools, templates, and implementation roadmaps. Featured in InfoWorld and WSJ.
Frequently Asked Questions
How long does it take to implement AI automation in a small business?
Most single-process automations take 1-5 days to implement and start delivering ROI within 30-90 days. Complex multi-system integrations take 2-8 weeks. The key is starting with one well-defined process, proving the value, then expanding.
Do I need technical skills to automate business processes?
Not for most automations. Tools like Zapier, Make.com, and N8N use visual builders that require no coding. About 80% of small business automation can be done without a developer. For the remaining 20%, you need someone comfortable with APIs and basic scripting.
Where should a business start with AI implementation?
Start with a process audit. Identify tasks that are high-volume, rule-based, and time-consuming. The best first automation is one that saves measurable time within 30 days. Across 120+ projects, the highest-ROI starting points are usually customer onboarding, invoice processing, and report generation.
How do I calculate ROI on an AI investment?
Measure the hours spent on the process before automation, multiply by fully loaded hourly cost, then subtract the tool cost. Most small business automations cost £50-500/month and save 5-20 hours per week. That typically means 300-1000% ROI in year one.
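That calculation can be written out directly. The hours, hourly rate, and tool cost below are illustrative inputs, not benchmarks:

```python
# Year-one ROI = (annual savings - annual tool cost) / annual tool cost.
# Illustrative inputs: 10 hours/week saved, fully loaded rate of 25/hour,
# tool cost of 150/month.

def first_year_roi(hours_per_week, hourly_cost, tool_cost_per_month):
    annual_savings = hours_per_week * hourly_cost * 52
    annual_tool_cost = tool_cost_per_month * 12
    return (annual_savings - annual_tool_cost) / annual_tool_cost * 100

roi = first_year_roi(hours_per_week=10, hourly_cost=25.0,
                     tool_cost_per_month=150.0)
print(f"{roi:.0f}% ROI in year one")
```

With those inputs the result lands comfortably inside the 300 to 1000 per cent range quoted above; swap in your own hours and rates to get a figure you can defend.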
Which AI tools are best for business use in 2026?
It depends on the use case. For content and communication, Claude and ChatGPT lead. For data analysis, Gemini and GPT work well with spreadsheets. For automation, Zapier, Make.com, and N8N connect AI to your existing tools. The best tool is the one your team will actually use and maintain.
What Should You Do Next?
If you are not sure where AI fits in your business, start with a roadmap. I will assess your operations, identify the highest-ROI automation opportunities, and give you a step-by-step plan you can act on immediately. No jargon. No fluff. Just a clear path forward built from 120+ real implementations.
Book Your AI Roadmap: 60 minutes that will save you months of guessing.
Already know what you need to build? The AI Ops Vault has the templates, prompts, and workflows to get it done this week.