Really grateful to Victor Dey for capturing some of my thoughts in his Forbes article covering IBM's focus on addressing enterprise AI requirements. 🙏🙏 “Every hyperscaler wants to own the on-ramp; enterprises want freedom across clouds, sovereign regions, and edge sites,” said Haseeb Budhani, CEO and co-founder, Rafay. “The winner is the platform that makes that experience feel the same everywhere, whether it’s a public cloud, on-prem, or a neocloud. That’s a high bar. If IBM clears it, great. If not, multi-cloud stays a slide, not a system.” Budhani added that in AI infrastructure, economics, not hardware, will win the race. “GPUs don’t sell themselves; experience does,” he said. When orchestration improves utilization and enables predictable spending, especially across sovereign and regional clouds, procurement decisions shift. Full article is here: https://lnkd.in/ge9sCF3S
More Relevant Posts
-
Anthropic partners with Google to access 1 million AI chips. Claude AI training scales up with TPUs and cloud support. https://lnkd.in/gpS8fXi2 #Anthropic #ClaudeAI #AIchips #TechNews #artificialintelligencetutorial
-
"any model, any hardware, any cloud" Red Hat's AI 3.0 strategy is not just an update; it is a bold declaration of intent. By systematically building the open, foundational plumbing for inference, agentic development, and hardware management, Red Hat is positioning itself to be more than just a participant in the AI race. It is making a highly credible, strategic bid to become the 'Red Hat' of the AI era—the indispensable, trusted, and unifying platform that enterprises will rely on to run their most critical workloads. https://lnkd.in/esqcrh8t
-
This is one of the best articles I've read on Red Hat's AI strategy and capabilities, and definitely worth the six-minute read. If you'd like to understand what Red Hat is doing in the AI space, this article highlights our commitment to providing the foundational infrastructure (i.e., the platform) needed for flexibility and choice across the entire AI/ML lifecycle. That means ensuring our platform can handle any model, run optimally on varied hardware configurations, and operate seamlessly across different cloud environments.
-
“The signals are clear: When budget plans, cloud road maps, and C-suite conversations all point toward inference, it’s time to align your business strategy. In practice, that means treating AI not as magic pixie dust or a moonshot R&D experiment, but as a powerful tool in the enterprise toolbox, one that needs to be deployed, optimized, governed, and scaled like any other mission-critical capability.”
-
What if the real revolution in artificial intelligence were happening locally, on your desktop, on your company's own server ... ? AI is starting to step off the ocean liners. "The era of monolithic, generalist AI being the only game in town is drawing to a close. A more vibrant, decentralized, and practical ecosystem is rising to take its place, fueled by accessible hardware and intelligent software abstractions. This new landscape empowers a broader set of builders to create specialized models tuned for specific, high-value tasks. As this happens, the central debate in the industry is shifting. The question is no longer just about who can build the largest model, but who will win the coming "battle of definitions" and shape our understanding of what AI truly is and what it is for. The future of AI is being built on desktops and in labs, and the debate over what to call it is just getting started. I've already bought the popcorn." https://lnkd.in/d3rc6J3w
-
AI innovation isn't just happening in LLMs; it's accelerating across the entire computing stack, from silicon to algorithms. Our Global CTO & CAIO John Roese sits down with AMD CTO Mark Papermaster to discuss why enterprises need to rethink their AI strategy beyond massive foundation models.
🔑 Key insights:
• Training vs. inference requires fundamentally different architectures
• Finely tuned models and SLMs are changing AI economics
• Only 5% of enterprises report positive ROI on AI investments so far
• Hybrid deployment across cloud, edge, and PC is becoming the norm
Watch the full conversation to learn what separates successful AI deployments from the rest: https://del.ly/60437pQlh
-
Had a great conversation with Mark Papermaster of AMD about where AI innovation is really happening right now, and it's not just at the model layer. We're seeing silicon and algorithms co-evolve in ways we haven't before: lower-precision math baked into hardware, inference optimizations that let 70B-parameter models run on laptops. The entire stack is moving faster than any technology cycle I've seen. But here's what struck me most in our discussion: the enterprises getting this right aren't chasing the biggest foundation models. They're thinking full-stack and matching their architecture to their actual business problems. Only 5% of enterprises are seeing positive ROI on their AI investments. The gap between the leaders and everyone else isn't about having more GPUs or bigger models. It's about understanding where to deploy what: finely tuned models for specific problems, hybrid architectures that put compute where it makes economic sense, and software stacks that work consistently across cloud, edge, and PC. Worth a watch if you're navigating these decisions: https://del.ly/60437pQlh
-
AI is evolving beyond LLMs, transforming every layer from silicon to software. Dell’s John Roese and AMD’s Mark Papermaster discuss how enterprises can drive real ROI with smarter, hybrid AI strategies built for training, inference, and scale. Watch the full conversation: https://del.ly/60437pQlh #DellTechnologies #AMD #AIInnovation #HybridAI #EdgeComputing #AITransformation
-
Becoming Frontier: How human ambition and AI-first differentiation are helping Microsoft customers go further with AI | by Judson Althoff https://lnkd.in/ejbhVAb4 #microsoft #ai #trends #azure #cloud #copilot
-
The AI Compute Wars are officially the new multi-billion-dollar arms race. 🤯 Anthropic securing up to a million Google TPUs signals that the era of general-purpose models running on basic cloud infrastructure is over. What is a Tensor Processing Unit (TPU)⁉️ A TPU is a custom-designed AI accelerator optimized for training AI models and running inference on them. A deal of this scale points to a future built on highly specialized hardware. The implication for the enterprise is clear: your AI strategy must be multi-cloud and multi-model. The critical need now is for System Integrators who can orchestrate these specialized tools (Claude, Gemini, etc.) into secure, cohesive, autonomous workflows. The architecture is getting complex, so make sure your partner is an expert in the entire orchestra, not just one instrument. 🎵 https://lnkd.in/ggMVxFbb #AgenticAI #GoogleCloud #Anthropic #EnterpriseStrategy