Can Oracle’s AI infrastructure pivot really challenge AWS, Microsoft Azure, and Google Cloud by 2028?
Can Oracle’s $30B AI cloud play rival AWS and Microsoft by 2028? Explore the hyperscale race, GPU strategy, and investor bets on Oracle’s pivot.
Oracle Corporation (NYSE: ORCL) has entered a new phase of its cloud infrastructure ambitions with a 4.5 gigawatt (GW) data center expansion tied to its expanded Stargate partnership with OpenAI. Valued at approximately US$30 billion annually starting fiscal 2028, the agreement cements Oracle as a central player in the U.S. race for generative AI infrastructure dominance. What was once a legacy enterprise software giant is now building capacity at a scale rivaling traditional hyperscalers—Amazon Web Services, Microsoft Azure, and Google Cloud Platform.
To support its AI-centric growth strategy, Oracle has committed US$25 billion in capital expenditures over the next 12 months and holds a US$7 billion equity stake in Stargate LLC. The company is scaling its 1.2 GW data center in Abilene, Texas, to nearly 2 GW and exploring additional hyperscale facilities across Michigan, Wisconsin, Wyoming, New Mexico, Georgia, Ohio, and Pennsylvania. These efforts could lift Oracle Cloud Infrastructure (OCI), which generated US$10.3 billion in revenue in fiscal 2025, into the top tier of hyperscalers by the end of the decade.

What are Oracle’s comparative advantages in AI cloud infrastructure versus AWS, Azure, and Google Cloud?
Oracle Cloud Infrastructure competes on cost, integration, and hardware flexibility. OCI offers lower pricing for compute-intensive workloads compared to Amazon Web Services, particularly for GPU and AMD-based instances. Oracle’s bare-metal offerings are often favored by enterprises running custom AI training models, and the cloud platform supports high-density GPU configurations tailored for inference and training at scale.
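To make the hardware-flexibility point concrete, the minimal sketch below shows how a customer might programmatically provision a bare-metal GPU instance using Oracle's official OCI Python SDK. The compartment, subnet, and image identifiers, along with the eight-GPU shape name, are illustrative placeholders rather than details disclosed by Oracle or OpenAI.

```python
# Illustrative sketch only: launching a bare-metal GPU instance on OCI with the
# official Python SDK (pip install oci). All OCIDs and the shape name below are
# hypothetical placeholders.
import oci

config = oci.config.from_file()          # reads ~/.oci/config by default
compute = oci.core.ComputeClient(config)

launch_details = oci.core.models.LaunchInstanceDetails(
    compartment_id="ocid1.compartment.oc1..example",   # hypothetical compartment
    availability_domain="Uocm:US-ASHBURN-AD-1",         # hypothetical availability domain
    display_name="ai-training-node-01",
    shape="BM.GPU.H100.8",                               # example high-density 8-GPU bare-metal shape
    source_details=oci.core.models.InstanceSourceViaImageDetails(
        image_id="ocid1.image.oc1..example"              # hypothetical GPU-ready image
    ),
    create_vnic_details=oci.core.models.CreateVnicDetails(
        subnet_id="ocid1.subnet.oc1..example"            # hypothetical subnet
    ),
)

response = compute.launch_instance(launch_details)
print("Launch requested, instance OCID:", response.data.id)
```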
Integration remains a defining advantage. Oracle’s cloud services are natively linked to its database and application stack, including Oracle Autonomous Database, Exadata, and Real Application Clusters. For large enterprise clients—particularly in finance, telecom, and manufacturing—this integration allows seamless migration of legacy and AI workloads onto a single cloud.
While AWS leads in global market share with over 30 percent and more than 100 availability zones, OCI’s strategic focus is narrower. Rather than building ubiquitous global coverage, Oracle is concentrating resources on hyperscale AI regions, an approach that enables more efficient utilization of GPUs, energy, and capital.
How does Oracle’s AI infrastructure buildout compare in scale and timing with AWS, Azure, and Google Cloud?
Oracle is rapidly closing the infrastructure gap in AI-specific capacity. Analysts at TD Cowen estimate that Oracle’s total AI-focused data center ambitions could reach 5 GW of deployed capacity by the end of 2026. This would place Oracle’s AI footprint on par with the dedicated AI compute zones of AWS, Microsoft Azure, and Google Cloud, although not yet matching their global general-purpose cloud infrastructure scale.
In fiscal 2025, Oracle’s data center infrastructure revenue stood at US$10.3 billion. If its OpenAI contract delivers the projected US$30 billion annually from FY2028 onward, it could position OCI as a top-three cloud provider for AI workloads. By comparison, AWS has launched its own US$150 billion multiyear data center investment plan, while Google and Microsoft are building out AI-specific zones with co-developed custom silicon.
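As a back-of-the-envelope illustration (author’s arithmetic, not company guidance), growing from US$10.3 billion in FY2025 to the roughly US$30 billion annual run-rate implied by the OpenAI contract in FY2028 would require a compound annual growth rate of about 43 percent:

```python
# Illustrative arithmetic only: implied growth rate if OCI's AI infrastructure
# revenue rises from US$10.3B (FY2025, reported) to US$30B (FY2028, projected).
fy2025 = 10.3   # US$ billions
fy2028 = 30.0   # US$ billions
years = 3
cagr = (fy2028 / fy2025) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 42.8%
```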
Oracle’s pivot to GPU-dense buildouts via Stargate contrasts with Google’s TPU-driven managed services model and AWS’s custom-silicon strategy built around Trainium and Inferentia. Oracle is betting instead on Nvidia’s roadmap and on OpenAI’s sustained compute demand, a concentration that sets it apart as competition intensifies.
Why is investor sentiment shifting in Oracle’s favor despite concerns over its late cloud start?
Institutional investors have responded positively to Oracle’s AI expansion. Shares of Oracle Corporation are up over 36 percent year-to-date, supported by optimism around the OpenAI deal and a broader repositioning of the company’s cloud narrative. Fiscal fourth-quarter results showed Oracle Cloud Infrastructure revenue growth exceeding 70 percent year-on-year, with management raising forward guidance for FY26.
Oracle also disclosed that its remaining performance obligations (RPO), a measure of contracted backlog, rose to US$138 billion, driven by multiyear cloud deals. Analysts interpret this as a sign that the cloud business is gaining contractual traction well beyond OpenAI. Bullish sentiment from institutional investors centers on OCI’s emerging role as a hyperscale compute vendor, not merely a database-as-a-service provider.
Though Oracle’s traditional enterprise software base is mature, the Stargate expansion gives the company new growth optionality. Analysts expect that if execution remains on track, Oracle’s cloud margins could improve steadily over the next three years, especially as capital intensity moderates post-2026.
What are the execution risks and financial trade-offs in Oracle’s hyperscale AI buildout strategy?
The scale of Oracle’s pivot introduces significant execution risk. The company has committed to US$25 billion in capital spending in fiscal 2026, much of it front-loaded to accommodate OpenAI’s timeline. This capex figure dwarfs Oracle’s historical infrastructure investments and could pressure short-term free cash flow and credit ratings.
Geographical dispersion adds complexity. Oracle must secure permits, power contracts, and construction labor across multiple U.S. states. Environmental groups are also scrutinizing AI data centers over water usage, emissions from energy consumption, and land acquisition practices. Any delays could push out revenue timelines or result in cost overruns.
Contractual risk is also material. Oracle may be liable for up to US$1 billion in payments to suppliers and partners, even if OpenAI’s infrastructure demand fluctuates. Given the long-term nature of the Stargate project, execution risks will remain elevated through at least 2027.
How could Oracle extend beyond OpenAI and attract new AI-native cloud customers by 2028?
If Oracle successfully delivers on its Stargate milestones, analysts expect the company to pursue additional anchor tenants in the foundation model space. Potential future customers could include Anthropic, Mistral, xAI, or major enterprise adopters of large language models in finance, insurance, and retail.
Predictable GPU pricing, vertical integration with enterprise data stacks, and alignment with U.S. energy and infrastructure policy make Oracle an appealing alternative to AWS and Microsoft Azure. The challenge will be matching rivals on low-latency performance, developer tooling, and international availability.
In the medium term, Oracle’s appeal may lie in compute predictability. With GPU availability constrained globally, companies seeking dedicated capacity for training or inference may prioritize Oracle’s Stargate infrastructure over more elastic but congested alternatives.
What does Oracle’s AI push mean for the broader cloud infrastructure landscape by 2028?
Oracle’s Stargate expansion signals that the AI cloud race is no longer confined to three players. While AWS, Azure, and Google Cloud continue to dominate general-purpose cloud workloads, AI-specific compute infrastructure is emerging as a parallel battleground—one where Oracle, CoreWeave, and others are gaining relevance.
For CIOs and model developers, the shift means more options for workload placement, pricing strategies, and long-term contracts. For hyperscalers, it introduces a new benchmark for building AI-native infrastructure at scale, with fixed power allocations, GPU access, and energy-efficient design.
If Oracle delivers its 5 GW roadmap and meets OpenAI’s compute needs, it may redefine itself not just as a software pioneer but as a full-scale AI infrastructure leader.