ICYMI: This week’s dev updates include a compact AR image-generation model, performance tips for long-context LLMs, a new Ryzen AI tutorial, and a sweepstakes reminder. Swipe through and let us know what you’d like to see next →
Meet Nitro-AR: A Compact AR Transformer for High-Quality Image Generation: https://lnkd.in/gQiRKZsG
AMD × LMcache: AMD GPU Acceleration with LMcache: https://lnkd.in/gB4isi8g
Deploying an End-to-End Object Detection Model on AMD AI PC with NPU: https://lnkd.in/grXKBpV9
Join the AMD AI Developer Program: https://lnkd.in/g3MVtr8v
AMD Developer
Semiconductor Manufacturing
Advancing AI innovation together. Built with devs, for devs. Supported through an open ecosystem. Powered by AMD.
About us
- Website
- https://www.amd.com/aidevprogram
- Industry
- Semiconductor Manufacturing
- Company size
- 10,001+ employees
Updates
-
AMD achieves Day 0 support for Baidu’s latest PaddleOCR-VL-1.5 model, successfully running it on AMD Instinct MI Series GPUs using the AMD ROCm software release 7.0. Get started here: https://bit.ly/4kb6V7k
-
DigitalOcean's Agentic Inference Cloud with AMD Instinct GPUs delivered more than 2× production throughput for Character.AI while meeting strict latency targets. The platform, combined with the open ROCm software stack, unlocked scalable, cost-efficient inference. Learn more: https://lnkd.in/eFXhXwFc
-
AMD Developer reposted this
I’m excited to share the launch of the AMD ROCm Debugger for Windows. This release expands the #ROCm ecosystem by bringing GPU debugging support to Windows, helping developers more easily build, debug, and optimize HIP and ROCm-based applications on AMD hardware. We are absolutely committed to meeting developers on their terms and on their turf, and this represents an important milestone in delivering cross-platform tooling and extending ROCm’s reach. By bringing comprehensive debugging capabilities to Windows, we’re making it easier for developers to harness the full potential of AMD GPUs, accelerate innovation, and explore new possibilities across high-performance computing and AI workflows. You can check out the preview by downloading the latest #HIP SDK v7.1.1 here: https://lnkd.in/gyiWWMPH And learn more about the debugger here: https://lnkd.in/gJipZAsc #togetherweadvance_AI #AIatAMD #ROCm #HIP #DeveloperTools #Windows #GPU #AI #HPC Anush E. Emad Barsoum Hari Halilovic Stefan Tu Hisham Chowdhury
-
AMD Developer reposted this
Running molecular dynamics (MD) simulations efficiently is critical for accelerating scientific discovery in many life-science use cases such as drug discovery. GROMACS is a widely used, GPU-accelerated molecular dynamics engine powering many life-science workflows, and its performance can vary significantly depending on the installation method and hardware configuration. This blog post, link in comments, walks through installing a custom, AMD-modified, bare-metal build of GROMACS on the LUMI supercomputer. Although the post focuses on LUMI, the steps serve as a playbook for replicating the installation on similar HPC systems and AI factories. The guide shows how to fully exploit the system's HPC capabilities while including the latest GROMACS features. #AI #AIforScience
-
Ryzen AI 1.7 is here. This release adds new architecture support, expands context length for LLMs, integrates Stable Diffusion into the unified Ryzen AI installer, and improves BF16 inference latency. For devs, that means less friction in setup, quicker feedback loops when testing changes, and a more capable local stack for shipping LLM/VLM features. Explore what’s new → https://lnkd.in/gnwHwdys
-
Learn how to deploy Qwen3-Coder with vLLM and build your own agentic workflows using the OpenHands SDK, all on your own infrastructure. 🔗 https://lnkd.in/gJq3uQjx
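Once a vLLM server is up (e.g. via `vllm serve`), it exposes an OpenAI-compatible HTTP API. A minimal sketch of a client-side request builder is below; the endpoint URL and model name are illustrative assumptions, not taken from the linked tutorial, so adjust them to match your own deployment.

```python
import json

# Assumed defaults for illustration; match these to your `vllm serve` deployment.
VLLM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "Qwen/Qwen3-Coder-30B-A3B-Instruct"  # hypothetical model ID for this sketch

def build_chat_request(prompt: str, model: str = MODEL, max_tokens: int = 512) -> dict:
    """Build an OpenAI-compatible chat-completions payload for a vLLM server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.2,  # low temperature suits code generation
    }

payload = build_chat_request("Write a Python function that reverses a string.")
print(json.dumps(payload, indent=2))
# POST this JSON to VLLM_URL with any HTTP client (urllib.request, requests,
# or the `openai` SDK pointed at the server's base URL).
```

Because the server speaks the OpenAI wire format, the same payload works with off-the-shelf SDKs and agent frameworks such as the OpenHands SDK mentioned above.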
-
ROCm just became a first-class platform in the vLLM ecosystem. Here’s what that means for devs 👇
- pip install vllm now just works on ROCm: official wheels, a real release pipeline, no hacks required.
- Official Docker images are available for both vLLM and vLLM-omni, so you can get up and running without building from source.
- CI stability improved from 37% to 93% in two months, with upstream tests enabled and regressions actively tracked.
- Performance improvements landed across the board, including KV cache, attention, sampling ops, quantization, and faster model loading.
- Modern model support out of the box, including MoE, multimodal workloads, and sliding-window attention.
- vLLM-omni shipped with Day-0 ROCm support, bringing omni-modality without waiting or workarounds.
- Production deployments are now straightforward, with validated hardware, tuned configurations, and pre-built images ready to go.
All of this reflects months of upstream collaboration and deep engineering work across teams and the open-source community. 🙏
Net: vLLM on ROCm is no longer something you “try.” It’s something you can ship. #AMDevs
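The "just works" path above can be sketched in a few commands on a ROCm-capable machine. This is an environment-setup sketch, not from the post itself; the Docker image tag is illustrative, so check the current vLLM/ROCm docs for exact image names and any required ROCm version.

```shell
# Install from the official wheels; no source build required on supported setups.
pip install vllm

# Or pull a pre-built ROCm container image (tag illustrative; verify in the docs).
docker pull rocm/vllm:latest

# Quick smoke test that the install imports cleanly.
python -c "import vllm; print(vllm.__version__)"
```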
-
Have you joined the AMD AI Developer Program yet? The January sweepstakes winner will be selected this Friday! Join here: https://lnkd.in/gasUg-Gx