Joel Archer, founder of Influence Gap, recently took on a massive challenge: becoming one of the first 50+ people in the world to earn the new NVIDIA Agentic AI certification. For Joel, this isn't just about the credential; it’s about staying up to date to better serve his clients and network. Validate your skills with a certification exam at #NVIDIAGTC. ➡️ https://lnkd.in/dDHZttey
About us
Explore the latest breakthroughs made possible with AI. From deep learning model training and large-scale inference to enhancing operational efficiencies and customer experience, discover how AI is driving innovation and redefining the way organizations operate across industries.
- Website: http://nvda.ws/2nfcPK3
- Industry: Computer Hardware Manufacturing
- Company size: 10,001+ employees
- Headquarters: Santa Clara, CA
Updates
-
Nice drop from Philip Kiely and Baseten. 📗 Inference Engineering maps the stack behind modern AI inference — runtimes, infrastructure, and tooling — and digs into the practical details of serving LLMs on NVIDIA GPUs with TensorRT-LLM and Dynamo. 👇 ICYMI — worth the read.
Inference Engineering launches today. https://lnkd.in/gJ3fJSEV
-
🌍 AI is transforming global trade, from supply chains to e-commerce. On the latest NVIDIA AI Podcast, Kuo Zhang, President of Alibaba.com, shares how AI agents like Accio are reshaping global commerce for 50M+ buyers and 200K+ suppliers worldwide. 🎧 Listen here: https://nvda.ws/4s0fZ25
-
This livestream is designed to give you clear, practical insights into how VLM fine-tuning works for your specific use case on DGX Spark with 128GB of unified memory. In this 30-minute session, we’ll cover when fine-tuning is the right approach, dataset preparation, and different training strategies (including full fine-tuning and parameter-efficient methods like LoRA). We’ll also discuss how this fine-tuning approach can be extended to video-based VLMs and the practical considerations that come with training on video workloads. Join us live, bring your questions, and walk away with a grounded understanding of how to approach VLM fine-tuning effectively.
DGX Spark Live: Scaling On-Device VLM fine-tuning
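The session above contrasts full fine-tuning with parameter-efficient methods like LoRA. As a rough illustration of why LoRA is so much cheaper, here is a minimal NumPy sketch of the core idea (an assumption-laden toy, not the session's actual training code): freeze the pretrained weight W and learn only a low-rank update (alpha / r) * B @ A.

```python
import numpy as np

# Toy LoRA sketch (illustrative only). LoRA freezes the pretrained weight W
# and trains a low-rank update delta_W = (alpha / r) * B @ A, so only
# r * (d_in + d_out) parameters train instead of d_in * d_out.
rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 512, 512, 8, 16

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (zero init)

def lora_forward(x):
    """y = W x + (alpha / r) * B (A x). With B zero-initialized,
    the adapted model starts out identical to the base model."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)  # zero-init B: base behavior intact

full_params = d_in * d_out
lora_params = r * (d_in + d_out)
print(f"trainable params: {lora_params} (LoRA) vs {full_params} (full fine-tune)")
```

With r=8 on a 512x512 layer, the trainable parameter count drops from 262,144 to 8,192 — the kind of reduction that makes on-device fine-tuning within a 128GB memory budget plausible.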
-
✨ Qwen3.5 — new from Alibaba Group — introduces a frontier-class VLM built for native multimodal agents. With a ~400B-parameter architecture combining MoE and Gated Delta Networks, Qwen3.5 can reason across text, code, and vision — and even understand and navigate user interfaces. Learn how to: ✅ Run Qwen3.5 on free NVIDIA GPU endpoints ✅ Deploy with NIM ✅ Fine-tune using NVIDIA NeMo See the details in our technical blog ➡️ https://lnkd.in/g75TPY8E
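Hosted GPU endpoints like the ones mentioned above typically expose an OpenAI-compatible chat API. A hedged sketch of assembling such a request is below; the base URL and model identifier are assumptions for illustration — check the linked blog for the exact values to use.

```python
import json

# Assumed endpoint and model id (hypothetical; verify against the blog post).
BASE_URL = "https://integrate.api.nvidia.com/v1/chat/completions"
MODEL_ID = "qwen/qwen3.5"

def build_chat_request(prompt, max_tokens=256, temperature=0.2):
    """Return (url, JSON body) for an OpenAI-style chat completion call.
    The body would be POSTed with an Authorization: Bearer <API key> header."""
    body = {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return BASE_URL, json.dumps(body)

url, payload = build_chat_request("Describe this screenshot's UI layout.")
print(url)
print(payload)
```

Because the wire format is OpenAI-compatible, existing client libraries can usually be pointed at the endpoint just by swapping the base URL and model id.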
-
NVIDIA AI reposted this
Welcome Jensen Huang, NVIDIA Founder & CEO, to the Adobe Summit keynote stage! Join us for inspiring perspectives on the future of AI from global leaders transforming digital experiences. Las Vegas, April 19–22 https://bit.ly/4aCii59
-
Build the tools. Change the world. 🌍 We’re bringing TWO massive hackathons to #NVIDIAGTC to push the limits of the creator economy and AI for good. The Challenges: 1️⃣ Hack to Create: Build the engines for the creator economy. 2️⃣ Hack for Impact: Use NVIDIA Nemotron & Cosmos for human, environmental, and cultural impact. 🛠️ The Gear: Build on GB10 systems from Dell Technologies, HP, and Lenovo based on NVIDIA DGX Spark. 🏆 Win hardware + a spot on the BIG stage! 🎤 Register now: 🔗 https://lnkd.in/gp63YXxj 🔗 https://lnkd.in/gbFxpfpk A valid #NVIDIAGTC 2026 pass is required to participate on-site.
-
🎉 Announcing the first Interactive Physical AI Workshop at #CVPR2026.🎉 Join us for a half-day workshop exploring AI systems that see, communicate, and act safely in our shared physical world — including robots, environment-aware avatars (e.g., AR telepresence), and on-device multimodal agents. ✅ Cross-disciplinary topics spanning vision, robotics, and multimodal AI ✅ Featuring invited speakers (incl. Yaser Sheikh), poster sessions, and spotlight talks 📅 Paper deadline is Feb 28: https://lnkd.in/dFNfWpUD More info: https://lnkd.in/d2D_CPmA 💡 Organized by #NVIDIAResearch. We look forward to seeing you at CVPR.
-
Deploy Kimi K2.5 on NVIDIA Blackwell with lower latency and lower cost per token—using Baseten Inference Stack. • TensorRT-LLM + Baseten Inference Stack → higher throughput, lower cost per token • NVIDIA Model Optimizer + NVFP4 precision → precision-optimized inference that unlocks Blackwell performance gains • Baseten Speculation Engine → faster generation through speculative decoding 👇
Introducing Kimi K2.5 on Baseten’s Model APIs with the most performant TTFT (0.26 sec) and TPS (340) on Artificial Analysis. Even in a landscape of incredible open source models, Kimi K2.5 stands out with its multimodal capabilities and its ability to accommodate a remarkably large number of tool calls. Get the good stuff here: https://lnkd.in/gEJRs_ZJ
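The speculative decoding mentioned above can be sketched with toy models (a deliberately simplified greedy variant, not Baseten's actual Speculation Engine): a cheap draft model proposes k tokens, the target model verifies them, and the longest verified prefix is committed, so several tokens can land per expensive target-model pass.

```python
# Toy speculative decoding sketch (assumption: greedy accept/reject,
# integer "tokens", hypothetical draft/target models for illustration).

def draft_model(prefix):
    # Hypothetical cheap draft: guesses the next three tokens,
    # deliberately wrong on the third to exercise the fallback path.
    return [prefix[-1] + 1, prefix[-1] + 2, prefix[-1] + 4]

def target_model_next(prefix):
    # Hypothetical target: ground truth is "increase by 1".
    return prefix[-1] + 1

def speculative_step(prefix, k=3):
    """Propose k draft tokens, keep the verified prefix, and on a
    mismatch fall back to the target model's own next token."""
    proposal = draft_model(prefix)[:k]
    accepted = []
    for tok in proposal:
        if target_model_next(prefix + accepted) == tok:
            accepted.append(tok)   # draft token confirmed by target
        else:
            break                  # first mismatch ends acceptance
    if len(accepted) < k:
        # Mismatch: commit the target's token so progress is guaranteed.
        accepted.append(target_model_next(prefix + accepted))
    return prefix + accepted

seq = speculative_step([0])
print(seq)  # → [0, 1, 2, 3]
```

In one verification pass, three tokens are committed (two accepted drafts plus the target's correction), which is where the latency win comes from when the draft model agrees with the target often enough.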
-