
Q&A: Nutanix CEO Rajiv Ramaswami on the Cloud Native Enterprise

VMs or containers? Nutanix supports both, and finds its enterprise customers need both as well — both in the cloud and back on premises.
May 12th, 2025 7:00am
Photo by The New Stack. 


More applications will be built in the next three years than in the previous 30, estimated Nutanix CEO Rajiv Ramaswami, in a keynote at his company’s annual .NEXT user conference in Washington, D.C. last week.

And this is why the enterprise will need Kubernetes. While K8s may seem like a complicated piece of software to manage, the average organization will need it to manage its own increasingly complex infrastructure.

Nutanix itself has evolved considerably since 2011, when it launched as a provider of hyper-converged infrastructure (HCI), with its Acropolis Operating System (AOS) platform and virtual machine (VM) hypervisor — the Acropolis Hypervisor (AHV) — providing users with a one-stop shop for scalable, storage-backed infrastructure.

The company is now expanding beyond its HCI roots, however, positioning itself as a platform provider for companies that need to run their apps everywhere — in the cloud, on premises or at the edge — using the same set of management tools.

Key to this approach is cloud native computing. In 2023, the company acquired the cloud native-focused D2iQ (formerly Mesosphere), which maintained its own Kubernetes-based cloud native platform.

At last week's conference, which over 5,000 people attended, we had the chance to interview Ramaswami. We discussed how the company's cloud native offerings could provide value to enterprises, as well as the importance of hybrid environments, noting that customers use both VMs and containers and routinely run applications across different environments. We also discussed the continued fallout of the Broadcom acquisition of VMware, a chief rival to Nutanix.

The interview has been edited for clarity and length.

How does Nutanix define cloud native computing?

D2iQ formed the core of our cloud native efforts. We have a complete Cloud Native Computing Foundation-compliant solution with the Nutanix Kubernetes platform.

So the first is, we've got a multicloud Kubernetes platform, and it's based on the D2iQ solution, where everything we work with is open source and CNCF compliant. So we have a Kubernetes platform for runtime.

We also have a management platform for Kubernetes that works with native, underlying Kubernetes substrates, whether it’s [AWS‘s] Elastic Kubernetes Service, Azure Kubernetes Service or the Google Kubernetes Engine. You can use Nutanix to manage clusters that span multiple clouds, or cloud-on-prem and public clouds. It’s a multicloud solution.

But cloud native efforts have multiple elements to them. It's not just a Kubernetes solution for management. It also includes about 30+ add-ons — load balancing, observability — all of those open source components we pull in, orchestrate and integrate as part of the solution. So that's the base.

Now that’s not by itself enough. You still need data services and storage services, and you need platform services.

From a storage-services perspective, Nutanix has a very good solution for virtual machines: files, objects, blocks, storage, disaster recovery, replication, snapshotting, all of that. That entire storage stack now works on top of native Kubernetes. So you can take our storage stack and deploy it anywhere there are native Kubernetes clusters available. If you'd like to run a cloud native environment on bare metal, we support that too. So now you have the runtime and management, and then you have the data storage services.

Then the last component that we're working on is what we call platform services. That means databases-as-a-service [through Nutanix Database Service]. We support, for example, Postgres and Oracle. So think of it like an Amazon RDS available on prem, and we're making it available in the public cloud.

If you think about those three components together, and on top of that, there’s an AI element to it. So there is Nutanix Enterprise AI, which is all built on top of Kubernetes. It essentially provides a framework for agentic AI applications. So we have a set of capabilities to download models, instantiate models, deploy the full Nvidia AI stack, right?

Historically, we’ve thought of Nutanix as providing a hyper-converged infrastructure (HCI), with its own hypervisor and strong suite of storage solutions. How does this — I don’t want to call it a legacy stack  — but this platform mesh with the cloud native offerings? 

I think most of our customers do both. They have a lot of VMs and virtual machine applications, and they're also building container-based applications. They're doing both, and in some ways, VMs and containers actually serve somewhat different purposes.

We look at VMs as providing a lot of capabilities to effectively manage the infrastructure. You can improve the utilization of the infrastructure with virtual machines, right? That is a classic play. You can improve the manageability, and you can provide security resilience for the infrastructure, right?  So we do all of that.

Containers are developer-focused, right? They provide agility. They provide scale for applications and reduce the time to market for developers to go build and run these things.

Most customers will have a mix of both, and we allow them to run on the same platform. We can embed containers into VMs, and we think actually, that’s a good way for people to run most cloud native applications.

In fact, if you look at the public clouds, for the most part, Kubernetes is actually running on top of VMs in the public cloud. Why? Because you get better efficiency of the underlying infrastructure.

Now, there are some situations where Kubernetes can also be run bare metal. And we support that too.

For edge use cases where you might only have a few new applications running on small numbers of nodes, you don't need a hypervisor there. Or if you have high-performance workloads that can consume the full infrastructure we have available, then fine — you don't need a hypervisor.

So we provide flexibility and choice. You can run VMs on containers, you can run containers on VMs, and you can run containers on bare metal, depending on the use case.

Do you see a growing discontent with the cost and inflexibility of cloud providers?

Not everything is going to be built in the public cloud. So the world is very much a hybrid environment. Some applications will be run in the public cloud. Some are going to be run on prem. Some will be at the edge.

We are partnered with the cloud providers. We've been running on AWS bare metal and Azure bare metal for a while. Today, we're announcing that we're running on Google as well. So we work together with them; customers will choose where they want to run their applications.

Today’s applications, tomorrow’s applications, AI applications: They’re all, in my view, going to be hybrid. So our view is, we provide that flexibility and choice.

One of the customers you'll hear from today [at .NEXT] is Micron Technology, and you'll see how they're deploying containers and VMs. They, for example, operate manufacturing plants. They have modern applications to control those manufacturing plants. They're not running those in the public cloud. Those are cloud native applications running on prem.

Enterprises worry about cost-efficiency. Might this be the reason they are pulling back from cloud providers?

Cost and efficiency are very important, of course. I mean, they realize the public cloud typically costs more money when running data at scale. Data sovereignty is another issue. Many countries want data to be localized. Security is another consideration. People will not necessarily go put everything in the public cloud.

From a data perspective, latency is another consideration. So if you're running AI inferencing on data that is being generated at the edge, you don't really have the time to go send that data to the cloud to do the inferencing and come back. You have to do it locally. So in other words, I think AI compute has to go where the data is, not the other way around.

This year’s conference has a heavy emphasis on providing an application to run anywhere. Could you talk more about that? 

People want to be able to run their applications everywhere. Some will run locally. Some will run on public clouds, and some will run on the edge. And so our philosophy is to provide a platform that allows customers to run these applications wherever they'd like to. That's run anywhere: the public cloud, on prem, on a hypervisor, on bare metal containers.

One of the value propositions we bring to the table is that regardless of where you're running, the platform is the same. You're using the same set of tools to manage the entire environment.

For example, today you’ll see something called Nutanix Central, and that is a single pane of glass where you can manage your clusters wherever they are. They might be on prem, public cloud, edge. You can put them all together, manage them all in one place and do operations on them in one place. That’s the value proposition of our hybrid platform.

Many still think of Nutanix as a provider of hyperconverged infrastructure (HCI). Is that still the case?

That is where we were before today, I would say. You should think of us as a multicloud platform software company. So we made the transition to be a cloud platform that does both virtual machines and containers.

But can I still buy HCI components from Nutanix?

Absolutely. We provide flexibility at every layer in the stack. You can choose what you want to use from us. We’re not trying to lock you in. We provide flexibility. You can pick and choose what you like.

We’ve been writing a lot about AI, mostly from the developer perspective. At the conference, Nutanix unveiled its partnership with Nvidia. How do you see the enterprise using AI? 

It's early days. Of course, traditional AI and machine learning, such as computer vision, have been used for a while, for many use cases in the enterprise. The next foray is generative AI. I would say we're in the early stages of inferencing.

Our focus is on inferencing, not on training. Models are going to be trained on generic data in the public cloud, and then people are going to take those models, fine-tune them, or rank them, and then use them for their specific use cases. And that's the set of applications that we are focused on.

We've seen initial applications being deployed. We worked with a bank in Hong Kong that recorded all the conversations their sellers make, and then they use a custom AI app that they built to parse those conversations and look for patterns, summarize them, and look for potential noncompliance or fraud in those conversations. So that's one example.

Another topic we've been following quite a bit is the dissatisfaction with Broadcom, particularly in terms of pricing for the VMware platform for virtual machines, even if many customers seem to be sticking with the company. What are you seeing these days, and what's Nutanix's value proposition here?

So we've always characterized this as a multiyear journey. For customers, your infrastructure software tends to be sticky. You cannot move off immediately. There are multiple reasons why, and so we've said that people will slowly move off over the next three to five years, maybe even longer, and we are seeing that, right?

So we added 700 new customers last quarter, roughly, and a lot of those are moving from VMware. And so it’s happening, but it’s happening at a measured pace, and it will continue at that pace for several years.

On a personal note, we understand you are a big fan of the card game of bridge. Are there things that you learned from playing bridge that have become useful for managing a large organization?

Bridge is a strategy game, right? I was trained as an engineer and researcher, and I tend to think of things in a very logical way, and analyze all the factors and then come to a decision based on what I know.

So a lot of times you have to actually make a decision based on understanding the facts, but you will not have the full picture. And that’s exactly like bridge.

With bridge, you've got four people playing. And you have to try and figure out what's in each person's hand, and you keep refining that with every step. Every time a card gets played, you have to update that model. It's almost like a logical-thinking model, right?

And so, it helps you think about things in a logical way, to adapt logically. And I apply the same philosophy to decision-making at work: Gather all the data you can, because the more data you have, the better your decision quality is going to be.
