Astera Labs

Semiconductor Manufacturing

Santa Clara, CA · 45,112 followers

Purpose-Built Connectivity for Rack-Scale AI

About us

Astera Labs (NASDAQ: ALAB) provides rack-scale AI infrastructure through purpose-built connectivity solutions. By collaborating with hyperscalers and ecosystem partners, Astera Labs enables organizations to unlock the full potential of modern AI. Astera Labs’ Intelligent Connectivity Platform integrates CXL®, Ethernet, NVLink™ Fusion, PCIe®, and UALink™ semiconductor-based technologies with the company’s COSMOS software suite to unify diverse components into cohesive, flexible systems that deliver end-to-end scale-up and scale-out connectivity. The company’s custom connectivity solutions business complements its standards-based portfolio, enabling customers to deploy tailored architectures that meet their unique infrastructure requirements. Discover more at www.asteralabs.com.

Website
http://www.asteralabs.com
Industry
Semiconductor Manufacturing
Company size
501-1,000 employees
Headquarters
Santa Clara, CA
Type
Public company
Founded
2017
Specialties
Connectivity Solutions, Signal Conditioning, PCIe, Heterogeneous Compute, Hyper-scale Data Center, NVMe, Ethernet, CXL, AI, ML, Connectivity, Data Center, UALink, NVLink Fusion, and Rack-Scale AI

Locations

Astera Labs employees

Updates


    As AI clusters scale from single nodes to thousand-GPU fabrics, connectivity is becoming the bottleneck holding compute back from its full potential. NVIDIA’s Blackwell and Vera Rubin architectures have redefined what a rack can do—but that density introduces new challenges across scale-up, scale-out, memory, and storage. When any one of these lags, you’re leaving expensive GPU cycles on the table. The next phase of AI infrastructure will be won—or lost—in the interconnect layer. Click through to learn how Astera Labs is collaborating with NVIDIA to tackle these challenges head-on, and why purpose-built connectivity is no longer optional. It’s the difference between a GPU that simply runs and one that truly performs. Read more here: https://lnkd.in/gNMnxGEs #NVIDIA #NVLink #NVLinkFusion #GPU #AIInfrastructure #AI


    The era of one-size-fits-all AI compute is over. Hyperscalers are deploying heterogeneous racks – NVIDIA GPUs alongside custom XPUs and purpose-built accelerators. This diversity makes connectivity the competitive differentiator. That’s why our collaboration with NVIDIA is gaining traction with customer design wins for NVLink Fusion-based custom solutions. In his latest blog, Thad Omura breaks down how NVLink Fusion changes the game: enabling non-NVIDIA XPUs to connect via NVLink's proven high-bandwidth fabric, unified infrastructure across diverse architectures, and a path to scale-up without performance tradeoffs. If you’re exploring heterogeneous architectures that blend NVIDIA GPUs with custom or third-party accelerators, we’re delighted to support you with NVLink Fusion, giving you a path to scale up multiple compute solutions without sacrificing the scalability and performance that large model training and inference demand. Read the blog to learn more: https://lnkd.in/g2MMiKK2 #NVIDIA #NVLinkFusion #NVLink #AIInfrastructure


    What does it really take to Deliver Results? Nate Unger, one of Astera Labs’ earliest employees, shares a story from our very first product that brings this core value to life. It’s a reminder that delivering results isn’t just about outcomes – it’s about ownership, perseverance, and doing what it takes to get across the finish line. Moments like these helped shape the culture we have today – and continue to guide how we show up for our customers and each other. Take a look below! #TeamAsteraLabs #CompanyCulture #DeliverResults #Leadership #Teamwork


    🚀 Calling all ASIC Design & Digital Engineering Leaders! Are you ready for your next challenge? Astera Labs is hiring Principal-level ASIC and Digital Design Engineers to join Don Sanders' team and help us define the future of connectivity silicon powering AI infrastructure. You’ll partner across verification, physical design, and post-silicon teams in a fast-paced environment where innovation and execution go hand in hand. Whether you’re shaping RTL, driving timing closure, or mentoring the next generation of engineers, your work will have real impact on cutting-edge products deployed at scale.

    🔎 What we’re looking for:
    ✅ Deep expertise in digital design with RTL/SystemVerilog and synthesis flows
    ✅ Experience owning complex blocks – from definition through silicon bring-up
    ✅ A collaborative mindset and desire to elevate the team around you
    ✅ Proven ability to work with advanced protocols and high-performance SoCs

    📍 Open roles in San Jose:
    ✅ Principal Digital Design Engineer – Architect and implement next-generation connectivity logic with ownership from micro-architecture through silicon sign-off: https://lnkd.in/gqDh5Sds
    ✅ Senior Digital Design Engineer (AI Fabric) – Collaborate on RTL implementation, timing closure, and high-speed digital design in a cross-functional setting: https://lnkd.in/gkmqDRix

    💡 If you’re driven by solving hard problems and building world-class silicon, we want to hear from you! #ASICDesign #DigitalDesign #EngineeringCareers #AIInfrastructure


    The accelerator stack is fragmenting by design. And that makes the fabric holding it together more consequential than ever.

View Peter Lo’s profile

    The accelerator stack is fragmenting by design, and that makes the fabric holding it together more consequential than ever. ⬇️ My take on what NVIDIA GTC 2026 meant for the connectivity ecosystem. ⬇️

    A few things that stood out to me:

    💾 KV cache is the new inference bottleneck. As context windows grow and agentic workloads compound, GPU HBM alone can't carry the load. CXL-attached memory is emerging as the right middle tier — and the numbers from our Leo CXL demo at GTC backed that up: 3.6x memory expansion, 75% higher GPU utilization, 2x inference throughput.

    ⚡ Disaggregated inference moves KV cache across the network — fast. NVIDIA Dynamo's 15x DeepSeek R1 throughput gains on GB200 NVL72 are impressive, but that software orchestration only works if the underlying fabric can move cached context with low, predictable latency.

    📊 MoE architecture made scale-up bandwidth a first-order problem. Ian Buck put an 18x gap on the table between high-bandwidth switched fabrics and Ethernet for inter-GPU communication. At the context lengths and decode patterns modern reasoning models demand, that gap is felt directly in inference throughput.

    🧩 The rack is no longer homogeneous. Vera Rubin for prefill. Groq LPU for decode. CPU for agentic orchestration. Distinct silicon, distinct workloads — and one fabric that has to handle all of it without becoming the bottleneck.

    Connectivity used to be the part of the rack nobody talked about at keynotes. That’s changing. Thx Sandeep Dattaprasad, Michael Ocampo, Thad Omura, Jignesh Shah, Adithyaram (Adit) Narasimha for reviewing my thoughts for Astera Labs. #GTC2026 #AIInfrastructure #CXL #KVCache


    Please join us in welcoming Desmond Lynch to Astera Labs as our new Chief Financial Officer! Des brings over 25 years of finance leadership in the semiconductor industry, with a proven track record of scaling global organizations. As we build on the strong foundation set by the incomparable Mike Tate, who has transitioned to a strategic advisor role, we’re excited to have Des join the team and guide our next phase of growth and innovation in rack-scale AI connectivity. Welcome to the team, Des! #Leadership #CFO #Semiconductors #AIInfrastructure #TeamGrowth Nasdaq #NASDAQ


    ⚡ The 400G-per-lane inflection point is here—and it’s redefining AI infrastructure. Last week at NVIDIA GTC, on the ‘Supercharging AI with Multi-Gigawatt AI Factories’ panel, Alexis Bjorlin, SVP at NVIDIA, shared: “It’s really an exciting time in the field of optics right now. Everybody will take copper as long as you can, but when you hit that bandwidth/distance limitation, optics opens up a large degree of possibilities for what you can do.” As AI clusters scale, the question isn’t if optical replaces copper—it’s where, when, and for which workloads each technology delivers the most value.

    🔍 Key takeaways:
    ✅ 400G/lane is emerging as the critical threshold where copper faces real physical limits
    ✅ AI training vs. inference drives very different connectivity requirements
    ✅ Active Electrical Cables (AECs) are bridging the gap between copper and optical
    ✅ The future isn’t binary—it’s hybrid architectures optimized by workload, distance, and power

    At Astera Labs, we’re enabling this full connectivity spectrum – helping our customers design the right solution for their AI infrastructure.

    📖 Read Chris Blackburn’s full perspective: https://lnkd.in/gPbk2aDf #AI #AIInfrastructure


    🚀 Inside the Taurus Interop Lab at Astera Labs

    As AI infrastructure scales beyond the rack, ensuring seamless interoperability across the ecosystem is critical. In this video, Prasanna Swaminathan takes you inside our Cloud-Scale Interop Lab, where we’re expanding validation for Taurus Smart Cable Modules and Active Electrical Cables – enabling high-speed, reliable Ethernet connectivity for next-gen AI networks. From COSMOS Developer Kit compatibility to link reliability and performance stability, we’re rigorously testing across real-world topologies with leading ecosystem partners, including the latest products from NVIDIA and Arista Networks.

    The result?
    ✔️ Robust, plug-and-play interoperability
    ✔️ High-performance links at 800G per port
    ✔️ Reliable, scalable Ethernet connectivity for AI workloads

    ▶️ Take a look inside the lab and see how we’re accelerating deployment while reducing your interop risk: https://lnkd.in/g4pCczFh #AIInfrastructure #Ethernet #Interoperability #Connectivity #DataCenter #Hyperscale #AsteraLabs

Funding

Astera Labs: 4 rounds total

Last round

Series D

US$150,000,000.00

See more funding info on Crunchbase