Dive into the future of AI infrastructure with Dominic Jainy, an IT professional whose deep expertise in artificial intelligence, machine learning, and blockchain has positioned him as a leading voice in the tech landscape. In this insightful conversation with Maison Edwards, Dominic unpacks the transformative rise of neo cloud providers and GPU as a Service (GPUaaS) in Australia and New Zealand. We explore how these technologies are addressing the soaring demand for AI compute power, tackling cost challenges, advancing sustainability, and navigating the complexities of governance. From specialized infrastructure to environmental impact and regulatory balance, Dominic offers a compelling look at how businesses are scaling AI responsibly and economically in the region.
What can you tell us about neo cloud providers and how they stand apart from traditional or public cloud services?
Neo cloud providers are a new breed of specialized cloud services tailored specifically for high-performance workloads like AI and machine learning. Unlike traditional or public cloud setups, which offer a broad range of general-purpose services, neo clouds focus on optimized infrastructure—think advanced GPUs, high-bandwidth memory, and low-latency networking. This tight integration means they can handle the intense demands of AI model training much more efficiently. For businesses in Australia and New Zealand, this translates to faster results and lower operational overhead compared to the one-size-fits-all approach of larger hyperscalers.
How would you describe GPU as a Service, or GPUaaS, and what’s fueling its growing popularity in this region?
GPUaaS is essentially a model where businesses can access powerful GPU resources on-demand through the cloud, without needing to invest in expensive hardware themselves. It’s like renting a high-performance engine for your AI projects. In Australia and New Zealand, organizations are flocking to GPUaaS because it offers scalability and flexibility. With AI workloads growing, companies need compute power that can ramp up quickly without breaking the bank, and GPUaaS fits the bill by providing access to cutting-edge tech without the upfront capital costs.
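The rent-versus-buy trade-off behind GPUaaS can be made concrete with a simple break-even calculation. The sketch below is purely illustrative; the purchase cost, operating cost, and hourly rate are assumptions for the sake of the arithmetic, not any vendor's actual pricing.

```python
# Hypothetical break-even sketch: renting GPU hours (GPUaaS) vs. buying
# hardware up front. All figures are illustrative assumptions.

GPU_PURCHASE_COST = 40_000.0   # assumed CapEx for one high-end GPU server
OWNED_HOURLY_OPEX = 1.50       # assumed power, cooling, and admin per hour
RENTED_HOURLY_RATE = 4.00      # assumed on-demand GPUaaS rate per GPU-hour

def owned_cost(hours: float) -> float:
    """Total cost of owning: upfront CapEx plus hourly running OpEx."""
    return GPU_PURCHASE_COST + OWNED_HOURLY_OPEX * hours

def rented_cost(hours: float) -> float:
    """Total cost of renting: pay only for the hours actually used."""
    return RENTED_HOURLY_RATE * hours

# Utilization at which owning becomes cheaper than renting.
break_even_hours = GPU_PURCHASE_COST / (RENTED_HOURLY_RATE - OWNED_HOURLY_OPEX)
print(f"Break-even at ~{break_even_hours:,.0f} GPU-hours")

# A bursty AI project using 2,000 GPU-hours stays far below break-even,
# so renting wins; a saturated 24/7 workload may tip the other way.
print(f"2,000 h: own ${owned_cost(2000):,.0f} vs rent ${rented_cost(2000):,.0f}")
```

Under these assumed numbers, owning only pays off past roughly 16,000 GPU-hours of utilization, which is why the on-demand model suits organizations with spiky or experimental AI workloads.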
There’s a projection that 84% of organizations will adopt GPUaaS by 2027. What do you see as the key drivers behind this trend?
The surge toward GPUaaS adoption is largely driven by the explosive growth in AI initiatives across industries. Companies are realizing that training complex models or running real-time inference requires immense computational power, which traditional setups can’t sustain cost-effectively. Add to that the pressure to innovate quickly—whether it’s in healthcare, finance, or agriculture—and GPUaaS becomes a no-brainer. It allows firms to experiment and scale without the burden of managing physical infrastructure, especially in a region like ours where local tech talent is strong but capital for hardware can be a hurdle.
Managing costs for AI workloads is often cited as a major challenge. What specific cost issues do organizations face with traditional cloud setups?
With traditional cloud setups, the costs for AI workloads can spiral out of control due to the sheer resource intensity. Training a single AI model can rack up huge bills for compute time, power consumption, and cooling needs. Public cloud providers often charge premium rates for high-performance resources, and there’s little optimization for AI-specific tasks. Plus, as demand fluctuates, companies can end up over-provisioning or paying for idle resources. It’s a real pain point for organizations trying to balance innovation with budget constraints.
In what ways do neo cloud providers help address these cost challenges compared to other options?
Neo cloud providers tackle cost issues head-on by designing their infrastructure specifically for AI workloads. They use tightly integrated systems that maximize efficiency—less power waste, better cooling, and optimized GPU clusters. This means lower operational costs per workload compared to generic public clouds. They also tend to offer more predictable pricing models tailored to AI needs, so businesses aren’t hit with unexpected bills. For many organizations in our region, this focused approach is a game-changer over the broader, often pricier hyperscaler options.
Can you break down what makes neo cloud infrastructure, like advanced GPUs and efficient cooling, particularly well-suited for AI workloads?
Neo cloud infrastructure is built from the ground up for AI. Advanced GPUs are at the core—they’re designed to handle parallel processing, which is critical for training deep learning models. High-bandwidth memory and low-latency networking ensure data moves fast with minimal bottlenecks. Then there’s the cooling aspect; AI workloads generate a ton of heat, and neo clouds often use innovative systems like closed-loop dielectric cooling to keep temperatures down without guzzling water or energy. Together, these elements create an environment where AI tasks run faster, smoother, and more cost-effectively than on generic setups.
The research highlights three types of neo cloud providers: infrastructure players, platformers, and aggregators. Can you explain what each of these roles entails?
Sure, each type serves a distinct purpose. Infrastructure players focus on the raw hardware and foundational tech—think data centers packed with GPUs and optimized networking. They’re the backbone providers. Platformers build on that by offering tools and environments for developing and deploying AI models, often integrating with existing multi-cloud setups, which makes them super versatile. Aggregators, on the other hand, act as middlemen, pulling together resources from various providers to offer a unified service. They’re great for companies wanting a one-stop shop without dealing directly with multiple vendors.
Why do you think platformers are often viewed as the most flexible choice for companies already using multiple cloud systems?
Platformers stand out for their ability to play nicely with existing multi-cloud environments. Many companies in Australia and New Zealand already rely on a mix of public and private clouds, and platformers provide a layer of tools and services that can bridge those systems seamlessly. They offer APIs and frameworks that let businesses integrate AI capabilities without overhauling their current setups. This adaptability reduces friction and lets firms focus on innovation rather than wrestling with compatibility issues.
How does the emergence of neo cloud providers connect to sustainability goals for businesses in the region?
Neo cloud providers are closely tied to sustainability because they’re designed with efficiency in mind, which directly cuts down on energy and resource use. AI workloads are notoriously power-hungry, but neo clouds use advanced cooling and optimized hardware to minimize waste. Many are also tapping into renewable energy sources, aligning with broader environmental, social, and governance goals. For businesses here, adopting a neo cloud isn’t just about performance—it’s a way to meet sustainability targets without sacrificing growth, especially as stakeholders demand greener practices.

There’s mention of closed-loop cooling systems and a Power Usage Effectiveness rating as low as 1.03 in neo cloud platforms. Can you explain what this means and why it matters for the environment?
Closed-loop cooling systems are a sustainable alternative to traditional methods that rely on massive amounts of water to cool data centers. Instead, they use a contained fluid to absorb and dissipate heat, drastically reducing water consumption. The Power Usage Effectiveness, or PUE, rating of 1.03 is a measure of energy efficiency—a lower number means less energy is wasted on non-compute tasks like cooling. Compared to the industry average of around 1.55, a 1.03 rating is exceptional. It matters for the environment because it cuts down on both energy and water use, reducing the ecological footprint of AI operations at a time when resource conservation is critical.
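The PUE figures above follow from a simple ratio: total facility energy divided by the energy that reaches the IT equipment itself. The sketch below works through the 1.03 and 1.55 ratings mentioned; the 1,000 kW IT load is an assumed figure for illustration, not a measurement from any real facility.

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# A PUE of exactly 1.0 would mean every watt goes to compute; anything above
# 1.0 is overhead (cooling, power conversion, lighting, and so on).
# The IT load below is an illustrative assumption.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Compute Power Usage Effectiveness from facility and IT power draw."""
    return total_facility_kw / it_equipment_kw

it_load_kw = 1000.0  # assumed power drawn by servers and GPUs

# A PUE of 1.03 implies only 30 kW of overhead on a 1,000 kW IT load;
# the ~1.55 industry average implies 550 kW of overhead for the same load.
neo_overhead_kw = it_load_kw * (1.03 - 1.0)
avg_overhead_kw = it_load_kw * (1.55 - 1.0)

print(f"Neo cloud PUE:     {pue(it_load_kw + neo_overhead_kw, it_load_kw):.2f}"
      f" ({neo_overhead_kw:.0f} kW overhead)")
print(f"Industry average:  {pue(it_load_kw + avg_overhead_kw, it_load_kw):.2f}"
      f" ({avg_overhead_kw:.0f} kW overhead)")
```

On these numbers, the 1.03 facility spends roughly eighteen times less energy on non-compute overhead than a 1.55 facility of the same size, which is where the environmental saving comes from.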
What is your forecast for the balance between profitability and environmental responsibility as renewable energy costs continue to drop?
I’m optimistic that we’re moving toward a future where profitability and environmental responsibility go hand in hand. As renewable energy costs keep falling, businesses no longer face a stark trade-off. Adopting sustainable practices, like leveraging neo cloud platforms powered by renewables, can actually lower long-term operational expenses while meeting ESG mandates. In Australia and New Zealand, where we have abundant renewable resources and growing public pressure for green initiatives, I expect more companies will see sustainability as a competitive advantage, not a burden, over the next decade. It’s a win-win if executed thoughtfully.