QumulusAI Appoints Former Applied Digital CTO Michael Maniscalco as CEO to Lead Growth in AI Infrastructure Market
CTO and CMO appointments round out the team with the experience to bring enterprise-grade AI supercomputing infrastructure to market
ATLANTA, GA / September 25, 2025 / QumulusAI, a provider of GPU-powered cloud infrastructure for artificial intelligence, today announced the appointment of Michael Maniscalco as Chief Executive Officer to propel the company through a rapid growth phase.
Maniscalco, formerly CTO of Applied Digital, brings deep expertise in scaling high-performance computing platforms; under his leadership there, his team deployed 6,000 state-of-the-art GPUs in 12 months. At QumulusAI, he will drive expansion of the company’s differentiated approach of owning the full stack, from energy and data centers to GPU-accelerated cloud services, delivering cost-efficient, enterprise-grade AI infrastructure with the speed to deploy quickly and the scale to grow with customers.
The company also announced two additional executive appointments: Ryan DiRocco as Chief Technology Officer and Stephen Hunton as Chief Marketing Officer. DiRocco, previously CTO at Performive LLC, a leading VMware-focused managed multicloud provider, will oversee QumulusAI’s technical strategy, ensuring products are secure, high-performing, and aligned with customer needs, while guiding clients’ smooth, cost-effective adoption of AI.
Hunton, who most recently served as Head of Global Social and Content Experience at IBM, adds global marketing expertise from Google, YouTube, and Chevrolet. In this role, he will focus on establishing the brand as the category leader in AI infrastructure, driving market visibility, accelerating enterprise adoption, and building the momentum that will fuel long-term value for customers, partners, and investors.
The strengthened leadership team will focus on expanding market presence, accelerating product innovation, and building strategic partnerships as QumulusAI advances its mission to make enterprise-grade AI supercomputing more accessible.
“These appointments mark a pivotal inflection point for QumulusAI,” said Steve Gertz, Chairman of the Board. “AI adoption is accelerating across every industry, and the ability to deliver scalable, cost-efficient infrastructure has become a critical enabler. Michael, Ryan, and Stephen bring proven expertise in building technology platforms, scaling infrastructure, and creating global brands. This team has the vision and execution experience needed to establish QumulusAI as a premier AI infrastructure provider.”
“The demand for scalable AI infrastructure is one of the fastest-growing markets in tech,” said Steven Dickens, CEO & Principal Analyst at HyperFrame Research. “QumulusAI’s model of controlling the full stack positions it to deliver performance and economics that many enterprises simply can’t get from hyperscalers. Adding Michael Maniscalco as CEO is a strong signal the company is ready to scale.”
Modular Designs Are the Starting Point for the Future of AI Infrastructure
“Data centers are evolving to become AI-optimized, modular, purpose-built ecosystems.” — Pipeline Magazine, June 2025
The recent piece from Pipeline makes a compelling case for modular data center design in the AI era. It highlights the rapid shift toward prefabricated builds, new cabinet geometries, high-density liquid cooling, and pre-integrated power systems, and how all of it is converging to meet the demands of AI.
We agree. That’s why QumulusAI’s latest facilities in Oklahoma and Texas are being built around the very modular design principles Pipeline describes.
But we also believe modularity alone won’t get us where we need to go.
What AI workloads require isn’t just faster construction or tighter thermal envelopes—it’s orchestration. The real barrier to AI isn’t just the time it takes to build. It’s aligning every layer of the stack: energy, power distribution, compute, cooling, and deployment timelines.
That’s where the QumulusAI approach builds on what Pipeline calls out.
We deploy modular designs—but we tie them directly to:
Behind-the-meter natural gas with fixed 10-year pricing to eliminate energy volatility
Real-time GPU inventory access for priority deployment of H200s and B200s
Cluster designs optimized around pulse-load behavior
Factory-tested cooling subsystems that drop in without delay
Immersion cooling built into the spec from day one, not retrofitted later
Modular construction builds the site. Integrated infrastructure gets it to revenue.
And that’s the part most headlines miss.
As the Pipeline article concludes, “deep collaboration across the supply chain” is the only way forward. At QumulusAI, we’ve taken that a step further: we’ve compressed the supply chain into a single delivery model—from molecules to models, from megawatts to machines.
Not Hyperscale. Hyperspeed.
There’s something awe-inspiring about a 500 MW data center. Until you remember how long it takes to build. The tech that goes in often changes faster than the permits clear. And by the time power comes online? The workloads it was designed for may be obsolete.
That’s the hyperscale dilemma: chasing AI growth with industrial-age momentum.
QumulusAI is built to move differently.
Forget Massive. Think Modular.
While the industry celebrates ever-larger campuses, we’re focused on sub-50 MW facilities deployed where they’re actually needed. These aren’t proofs of concept or pop-up sheds—they’re fully redundant, GPU-optimized data centers, designed from day one for AI performance and next-gen cooling.
By staying under the 50 MW threshold, we avoid years-long approval cycles. We co-locate with gas and fiber. And we activate faster than most teams can even negotiate a hyperscale contract.
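For a rough sense of what a sub-50 MW envelope can hold, here is a back-of-envelope sizing sketch in Python. Every figure in it (facility budget, PUE, per-GPU draw, server overhead) is an illustrative assumption for this example, not a QumulusAI specification.

# Back-of-envelope sizing for a sub-50 MW, GPU-optimized facility.
# All figures below are illustrative assumptions, not QumulusAI specs.

FACILITY_MW = 45        # assumed facility power budget, safely under the 50 MW threshold
PUE = 1.2               # assumed power usage effectiveness with modern liquid/immersion cooling
GPU_TDP_KW = 0.7        # assumed ~700 W per H200-class GPU
SERVER_OVERHEAD = 1.5   # assumed multiplier for CPUs, memory, NICs, and fans per GPU

it_load_mw = FACILITY_MW / PUE                    # power left for IT after cooling and distribution
per_gpu_kw = GPU_TDP_KW * SERVER_OVERHEAD         # effective power per deployed GPU, including host overhead
gpu_count = int(it_load_mw * 1000 / per_gpu_kw)   # convert MW to kW, then divide by per-GPU draw

print(f"IT load:        {it_load_mw:.1f} MW")     # 37.5 MW under these assumptions
print(f"Per-GPU budget: {per_gpu_kw:.2f} kW")     # 1.05 kW under these assumptions
print(f"Approx. GPUs:   {gpu_count:,}")           # roughly 35,000 accelerators under these assumptions

Even under these assumed figures, a single sub-50 MW site supports a cluster measured in the tens of thousands of accelerators, which is why right-sizing the facility does not mean shrinking the workload.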
The Cost of Overbuilding
What’s often left out of hyperscale headlines is the cost—not just in dollars, but in friction:
Communities face rising opposition: noise, water consumption, and grid strain have turned public sentiment.
Companies face lock-in: rigid contracts for compute that may no longer serve their evolving models.
And regulators are playing catch-up with energy realities that hyperscalers helped create.
Meanwhile, investors wait. Clients stall. Innovation slows.
AI Moves Fast. So Should Infrastructure.
We’re not anti-scale. We’re anti-lag.
QumulusAI is proving that scale doesn’t have to mean sprawl. By deploying purpose-built facilities faster, closer to where the demand lives, we give our clients access to compute without the drag. No twelve-month waitlist. No fifteen-year amortization gamble.
Just energy-efficient, AI-tuned, revenue-generating infrastructure—in months, not years.
From Molecules to Models
Our approach is vertically integrated: power generation, data center, and compute. That means fewer intermediaries, more predictability, and complete control across the stack. It also means we can pass savings to clients and reinvest faster in the tech that matters.
This isn’t just about facilities. It’s about philosophy. QumulusAI believes infrastructure should evolve at the pace of innovation—not slow it down.
Public Backlash Against Data Centers Is Emerging. Here’s Our Plan.
Public pushback against data centers is rising—and not without reason. When massive mega- and giga-scale facilities threaten to overwhelm local grids, or quietly shift infrastructure costs onto ratepayers, communities are right to demand better.
In New Jersey, electric rates jumped 20%, and lawmakers say hyperscale data centers are overloading infrastructure without covering the costs. (NJ101.5)
In Pennsylvania, grid operators say surging AI and data center demand is tipping the balance—leaving power supplies potentially short under extreme summer conditions. (WESA)
In Illinois, new legislation would require data centers to report energy and water use—aiming to uncover whether residents are unknowingly footing the bill for AI growth. (Capitol News Illinois)
QumulusAI: Built for the Long Term
At QumulusAI, we’re building for the long term: a more strategic, more nimble, and more measured approach to AI infrastructure. Our plans work with local capacity, not against it, and support real, sustainable growth.
Right-sized for the region, not oversized for the headline: We build nimble, sub-50MW facilities designed to match local capacity—not overwhelm it.
Built with diverse, sustainable power—including behind-the-meter natural gas: Our model reduces grid stress, improves resiliency, and aligns with long-term environmental planning.
Live in months, not years: Our modular data centers deploy fast—without dragging down utilities or forcing costly upgrades on ratepayers.
In step with the communities we serve: We work directly with policymakers, utilities, and local leaders to align infrastructure growth with public interest—not just private demand.
Sustainable by design: Our energy-efficient clusters are optimized for AI workloads from day one—minimizing waste, maximizing performance, and staying accountable to the regions that host us.
The data center industry is at a crossroads. We can keep bulldozing through communities with oversized projects that privatize profits and socialize costs—or we can prove that AI infrastructure can actually strengthen the places that host it. The choice we make now will determine whether communities welcome the next wave of technology or fight it at every turn.