
Astera Labs Cuts Chip Simulation Time 3.5X Using NVIDIA B200 GPUs on AWS

Astera Labs achieved a 3.5X speedup in chip design simulations by running Synopsys PrimeSim on NVIDIA B200 GPU-accelerated AWS instances. The breakthrough reduces development cycles for AI connectivity chips, creating a competitive advantage as GPU-based design tools become critical infrastructure for semiconductor companies racing to market.

Salvado

March 21, 2026


Astera Labs cut chip simulation runtime by 3.5X using NVIDIA B200 GPUs on AWS EC2 instances for Synopsys PrimeSim workloads. The speedup compresses design iteration cycles for AI-focused connectivity semiconductors, directly impacting time-to-market in a sector where development delays cost market share.

Astera Labs CEO Jitendra Mohan said NVIDIA B200 GPU-accelerated computing on AWS "significantly reduced simulation times and enhanced design capabilities." The collaboration between Astera Labs, Synopsys, NVIDIA, and AWS is "transforming ability to design advanced connectivity solutions," Mohan added.

The performance gain matters because chip design verification is a bottleneck. Traditional CPU-based simulations for complex AI accelerators can take weeks. A 3.5X reduction translates to days instead of weeks per design iteration, allowing engineers to test more configurations and catch design flaws earlier.
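The arithmetic behind that claim can be sketched in a few lines of Python. The two- and three-week baselines below are illustrative assumptions, not figures reported by Astera Labs; only the 3.5X factor comes from the announcement.

```python
# Back-of-envelope sketch of what a 3.5X simulation speedup means for
# iteration time. Baseline runtimes are hypothetical examples.

SPEEDUP = 3.5  # factor reported for B200-accelerated PrimeSim runs


def accelerated_days(baseline_days: float, speedup: float = SPEEDUP) -> float:
    """Runtime after applying a constant speedup factor."""
    return baseline_days / speedup


# Hypothetical CPU-based verification runs of two and three weeks:
for baseline in (14, 21):
    print(f"{baseline} days -> {accelerated_days(baseline):.1f} days")
```

Under those assumed baselines, a two-week run drops to roughly four days and a three-week run to six, which is the "days instead of weeks" framing above.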

This creates a feedback loop: AI hardware accelerates the design of next-generation AI hardware. Companies with early access to GPU-accelerated EDA tools can iterate faster, file patents sooner, and reach production while competitors are still in simulation. The advantage compounds as each generation of GPUs enables more complex chip designs.

GPU-based simulation is becoming infrastructure, not just tooling. Semiconductor firms without access to high-end GPU compute risk falling behind in design cycles. The shift mirrors how cloud infrastructure became table stakes for software companies, except that the capital requirements are higher and the switching costs steeper.

For NVIDIA, the business model extends beyond selling GPUs for AI inference. Chip designers now depend on NVIDIA hardware to build competitive products, including chips that may eventually compete with NVIDIA's own offerings. AWS benefits by becoming the preferred platform for compute-intensive EDA workloads, locking in semiconductor customers with multi-year design cycles.

Synopsys gains as GPU acceleration makes its simulation software essential for cutting-edge designs. Companies unable to afford GPU-accelerated workflows may settle for less ambitious chip architectures or longer development timelines.

The competitive moat forms not from a single technology but from ecosystem lock-in. Astera Labs' workflow integrates four vendors. Replicating the stack requires partnerships, validation, and engineering effort that takes quarters to establish. First movers in GPU-accelerated design may maintain multi-quarter leads in product launches.


