The semiconductor industry is at an inflection point. The convergence of advanced multi-die architectures, AI-driven workloads, and rapidly evolving interface protocols is creating unprecedented design complexity. At the same time, market pressures demand faster time-to-market and higher performance, leaving little room for error. From data center to edge developments, users have to run software and AI workloads well before RTL is implemented into target technologies to avoid late surprises at the HW/SW interface.
Why verification is under pressure
Modern systems are defined by the workloads they run. As software complexity accelerates, verification must ensure that designs meet stringent requirements across functionality, power, performance, latency, security, safety, and scalability. Missing these targets can lead to costly silicon re-spins and delayed product launches.
AI applications are characterized by software programs (called workloads) and large language models (LLMs). Workload complexity can be measured in lines of software. Applications have become more specialized and thus “contained” to specific tasks, such as AI training versus inference, or the generation of text, images, or video. Within that scope, software complexity continues to grow as end users expect more from applications, which must therefore evolve rapidly to stay competitive. In addition, we are facing a wave of AI LLMs serving generative AI and inference needs from the data center to the edge, with models doubling in size every four months.
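To put a four-month doubling period in perspective, here is a minimal back-of-envelope sketch. The only input taken from the text is the doubling period; the 100-billion-parameter starting size is purely an illustrative assumption.

```python
# Illustrative sketch: compound growth of model size under a steady
# four-month doubling period. The starting size is an assumed value.
DOUBLING_PERIOD_MONTHS = 4
START_PARAMS_BILLIONS = 100  # hypothetical starting point

for year in (1, 2, 3):
    growth = 2 ** (12 * year / DOUBLING_PERIOD_MONTHS)  # 8x per year
    print(f"After {year} year(s): ~{START_PARAMS_BILLIONS * growth:,.0f}B "
          f"parameters ({growth:,.0f}x the starting size)")
```

At that rate, model size grows roughly 8x per year, which is a key reason the capacity needed for pre-silicon workload validation must scale so aggressively.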

Fig. 1: A tidal wave of AI training. Source: Visual Capitalist, “Eight Years of Consumer AI Deployment in One Giant Timeline,” http://bit.ly/46B99HW, 2025
This reality is driving a “shift left” approach, making it a critical requirement to move verification and validation earlier in the design cycle. By pulling these tasks into the pre-silicon phase, teams can identify and resolve issues before tape-out, reducing risk and accelerating schedules.
The explosion of use cases creates a scale challenge
Verification is no longer about checking RTL alone. Today’s workflows include:
- RTL verification for functional correctness.
- Software validation to ensure compatibility with real-world workloads.
- Performance and power validation to meet throughput, power, and latency targets.
Each domain introduces unique requirements, and the number of use cases continues to grow. This expansion is fueled by the need to characterize and optimize end products under realistic conditions—long before they reach the market.
This drives a skyrocketing demand for verification cycles. Quadrillions of cycles are now required to validate complex systems across diverse scenarios. Traditional simulation-based approaches cannot keep pace with this scale. Figure 2 illustrates the compounding complexity increases across software, hardware, interfaces, and verification use cases.
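A minimal sketch makes the scale problem concrete. Assuming illustrative, ballpark execution speeds (roughly kilohertz-class for full-SoC RTL software simulation and megahertz-class for hardware-assisted platforms; actual throughput varies widely by design and configuration), one quadrillion cycles translates into the following wall-clock times:

```python
# Back-of-envelope: wall-clock time to execute one quadrillion (1e15)
# verification cycles. All throughput figures below are assumed,
# ballpark values for illustration only.
TOTAL_CYCLES = 1e15

platforms_hz = {
    "RTL software simulation (~1 kHz)": 1e3,
    "Hardware emulation (~1 MHz)": 1e6,
    "FPGA prototyping (~10 MHz)": 1e7,
}

SECONDS_PER_YEAR = 365 * 24 * 3600

for name, hz in platforms_hz.items():
    years = TOTAL_CYCLES / hz / SECONDS_PER_YEAR
    print(f"{name}: ~{years:,.1f} years on a single job")
```

Even at hardware-assisted speeds, closing this gap requires running many workloads in parallel across large, scalable platforms, which is precisely the capacity argument behind hardware-assisted verification.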

Fig. 2: Compounding verification challenges. Image credits: Synopsys, AI and Memory Wall: 2403.14123 (arxiv.org), Baya Systems, https://bit.ly/4hDXCe9, Visual Capitalist http://bit.ly/46B99HW
Let’s take NVIDIA as an example.
After the announcement of Blackwell, NVIDIA announced the Rubin AI Platform and the Rubin Ultra family, to be available in 2026 and 2027, respectively. When the Rubin CPX architecture, purpose-built for massive-context inference applications, was announced in September 2025, NVIDIA also reported significant software-driven improvements for its existing hardware products, including a 2x Blackwell performance gain since launch, a 4x performance improvement for Hopper over its lifetime so far, and 6x better throughput enabled by the Dynamo software.
For end users, this yielded system-level gains of 2-4x speedup on Llama, up to a 6x improvement in first-token latency, and 3x higher token output.
As hardware and software grow increasingly complex and specialized, software-defined systems enable ongoing upgrades throughout the hardware’s lifecycle. Designing silicon for these systems is challenging, especially as Moore’s Law faces physical limits. To scale, the industry has turned to multi-die architectures. While each die is developed and verified like a standalone SoC, integrating multiple chiplets introduces unique challenges. Hardware size continues to double roughly every 18 months thanks to “More than Moore” innovations. Adding to the complexity, chiplets increasingly come from different vendors, requiring ecosystem-level coordination on communication protocols and verification methodologies.
As a result, the rapid evolution of the interface IP implementing these communication protocols has become a major contributor to verification complexity. To feed AI algorithms with ever more data, whether to deliver more intelligent insights or to perform specific tasks autonomously, communication protocols are evolving at an astounding pace, now doubling in bandwidth roughly every two years, as illustrated in Figure 3.

Fig. 3: PCIe and Ethernet evolution.
Furthermore, all this data must be stored and read back very quickly to reduce the lag between a request to an AI bot or agent and the resulting action. Innovation in memory architectures, as illustrated in Figure 4, has been critical to supporting the latest AI computing architectures.

Fig. 4: HBM innovations.
Finally, use cases define the scope of what must be verified, and new use cases are emerging at a record pace, driven by the pressure to characterize and then optimize the end product with its real workloads running.
Software complexity is accelerating, driving new demands for multi-die scaling and interface IP innovation. At the same time, silicon development cycles are speeding up to stay competitive in the fast-moving AI market. To meet aggressive time-to-market goals and avoid costly re-spins, teams are shifting verification and validation earlier in the design process.
As a result, verification scope has expanded dramatically. Designers must validate functionality, power, performance, throughput, latency, security, safety, and scalability, often before tape-out. This shift-left methodology has created new pre-silicon verification use cases that pull most of these tasks forward in the development cycle.
Software-defined hardware-assisted verification (HAV)
Hardware-assisted verification (HAV) has emerged as the solution, offering the capacity and speed needed to handle massive workloads. But HAV itself is evolving. It’s no longer enough to provide raw capacity. Verification platforms must be:
- Specialized to support domain-specific requirements.
- Flexible to adapt to changing workloads and emerging protocols.
- Future-proof to accommodate continuous innovation.
Just as data centers and vehicles have embraced software-defined architectures, verification hardware is following suit. Software-defined HAV introduces a new level of adaptability, enabling continuous updates and enhancements without replacing hardware. This approach ensures that verification platforms can scale with industry demands and support new use cases as they emerge.
Software-defined HAV delivers:
- Ongoing improvements through software updates.
- Dynamic scalability to handle growing workloads.
- Future readiness for evolving standards and protocols.
Looking ahead
The convergence of hardware and software complexity is reshaping verification. Success will depend on platforms that combine massive capacity with software-driven flexibility. Just as software-driven updates continually enhance data centers and vehicles, we are now in the era of software-defined hardware-assisted verification, with the requirement to deliver ongoing improvements and flexibility.
At Synopsys, we’re committed to empowering innovation through continuous re-engineering, and we have engineered our hardware-assisted verification systems with the future in mind. Our software-defined HAV solutions enable engineers to scale verification across industries, meeting ever-expanding pre-silicon demands.
Software-defined HAV transforms verification. Let’s make progress together – one software improvement at a time.