2025 has certainly been a year of unexpected changes. These had a significant impact on the semiconductor industry and everything that supports it. Not all the changes have been bad, but flexibility has been a requirement for continued success or to make the most of an opportunity provided.
Some industries, such as aerospace and defense, are seeing a significant boost around the world. Data centers continue to grow, driven by an almost insatiable demand for AI, and that is driving industry growth — despite challenges in the supply chain.
“Several things have gone faster than I expected, including the announced investments in AI infrastructure, projected semiconductor and memory consumption, demand for power, and demand for advanced packaging,” says Steven Woo, fellow and distinguished inventor at Rambus. “Some things are progressing slower than I had hoped, including fundamental voltage scaling at advanced process nodes, the use of high-NA EUV, and the standardization of chiplet interfaces.”
Speed of change was a common thread. “I used to think the semiconductor world moved at a much slower pace, but the past six months have proven me wrong,” says William Wang, CEO of ChipAgents. “The onset of the AI-driven memory and storage supercycles, rapid shifts in IDM strategies, the surging demand for agentic AI, and EDA have accelerated innovation across the stack.”
AI is impacting the entire industry, either directly or indirectly. “More engineers are being hired by the big boys. Everyone wants to make chips,” says Shiv Sikand, executive vice president at IC Manage. “It had become all about the software, and everyone forgot about hardware. Then you realize, hang on a second, software needs to run on something. And then ‘chips are us’ again. We’re going to have more chips, and we’re going to have better chips, because AI tools have made us more productive.”
It is not just the chips. “Pure-play companies moved up and down the stack, creating new competitive pressures, as well as new opportunities,” says Uzi Baruch, chief strategy officer for proteanTecs. “Chip companies began building full systems, and even data center-scale solutions, while hyperscalers and device makers invested heavily in custom silicon. Entire ecosystems formed around these verticalized approaches, and with them came new business models. Companies increasingly found themselves involved not only at the silicon level, but also at the system level, and many entered the custom-silicon business to differentiate and capture more of the value chain.”
Supply chains
Supply chains around the world have been broken, and companies are racing to deal with the fallout. “What really surprised us was how unstable our supply chains are,” says Andy Heinig, department head for efficient electronics at Fraunhofer IIS’ Engineering of Adaptive Systems Division. “We have recently seen it again in Europe with the Nexperia problems. We expected that our supply chains would have become a little bit more stable after Covid, but they have become problematic again. We can no longer trust that we can get devices from everywhere in the world. We need more local supply chains, and maybe also more local solutions. Devices that cost pennies have caused problems that destroyed whole supply chains.”
The disappearance of even the simplest of devices can have a significant impact. “OEMs are trying to set up supply chains that are less susceptible to disruption, including a company going bankrupt,” says Roland Jancke, head of the design methodology department at Fraunhofer IIS/EAS. “The problems at Nexperia meant that Volkswagen could no longer produce cars. We no longer have the notion of a second source, where another company can quickly step in if the first fails to deliver.”
Semiconductor manufacturing has started to become more distributed, changing the dynamics of assembly and packaging. “The name ‘advanced packaging’ is misleading,” says Marc Swinnen, director of product marketing at Ansys, now part of Synopsys. “It jumps the gun. What is being done is chip assembly, which is something new. It has its own formats, its own constraints.”
That also creates some new opportunities. “It is clear that classical packaging can’t be done in Europe, because it’s impossible to do that with European salaries,” says Fraunhofer’s Heinig. “But if we do advanced packaging and chiplets — where you put more functionality into the package, where the packages themselves are more complex — then it makes sense to do this in Europe, because then you have added value and you build more trust in your supply chain.”
Stalling chiplets
There is an increasing divide in the semiconductor industry between those doing advanced chip assembly and packaging, and those continuing with monolithic integration. “2.5D and 3D design methodologies have matured faster than expected,” says Andy Nightingale, vice president of product management and marketing at Arteris. “CoWoS, Foveros Direct, and I-Cube3 capacity expansions made multi-die practical. EDA flows finally caught up with packaging physics, integrating thermal, stress, and voltage-aware closure. That shift also reached the interconnect layer. NoCs continue to evolve to manage latency balancing, bandwidth partitioning, and IP disaggregation across multiple dies, effectively validating interconnect design as a system-level discipline. It’s interesting how quickly chiplet verification has started to mirror traditional NoC integration methodology.”
That does not mean it’s easy. “Chiplets are here to stay, but I don’t think the yield challenges have been resolved,” says Nilesh Kamdar, general manager, design and verification business unit at Keysight Technologies. “Chiplets continue to be a very complex set of technologies, and the packaging of multiple chips together continues to be expensive and difficult. It works, and has been showcased very well with certain 3D memory stacking. But beyond that, there’s still a lot of work that needs to be done in that space.”
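The yield economics behind that caution can be sketched with the classic Poisson defect model. The Python back-of-envelope below is a minimal sketch with illustrative numbers; the defect density, die areas, and assembly yield are all assumptions, not figures from anyone quoted here. It shows why known-good-die (KGD) testing is the crux: with it, small chiplets can out-yield a reticle-sized die, while without it the multiplicative assembly term erases the advantage.
```python
# Back-of-envelope yield comparison: one large monolithic die vs. an
# assembly of smaller chiplets, using the simple Poisson defect model
# Y = exp(-area * defect_density). All numbers are illustrative.
import math

DEFECT_DENSITY = 0.1    # defects per cm^2 (assumed)
MONO_AREA = 8.0         # cm^2, a die near the reticle limit (assumed)
CHIPLET_AREA = 2.0      # cm^2 per chiplet (assumed)
NUM_CHIPLETS = 4        # same total silicon area as the monolithic die
ASSEMBLY_YIELD = 0.95   # yield of the bonding/packaging step (assumed)

def die_yield(area_cm2: float, d0: float) -> float:
    """Probability a die of the given area is defect-free (Poisson model)."""
    return math.exp(-area_cm2 * d0)

y_mono = die_yield(MONO_AREA, DEFECT_DENSITY)

# With KGD testing, only good chiplets are assembled, so package yield
# is dominated by the assembly step itself.
y_with_kgd = ASSEMBLY_YIELD

# Without KGD testing, every chiplet must be good AND the assembly must
# succeed; the multiplicative term is what makes packaging unforgiving.
y_without_kgd = ASSEMBLY_YIELD * die_yield(CHIPLET_AREA, DEFECT_DENSITY) ** NUM_CHIPLETS

print(f"monolithic die yield:         {y_mono:.1%}")          # ~44.9%
print(f"assembly with KGD testing:    {y_with_kgd:.1%}")      # 95.0%
print(f"assembly without KGD testing: {y_without_kgd:.1%}")   # ~42.7%
```
The model is deliberately crude, but it captures why test and packaging yield, not just die yield, dominate the chiplet cost equation.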
Designs for the data center are adopting 2.5D integration because they have no choice. They have reached the reticle limit, and to grow compute power they have to spread across multiple dies. Most designs have not reached this stage, so the economics do not look as attractive. “I was never a huge fan of chiplets because I believe in VLSI,” says IC Manage’s Sikand. “Chiplets are the diametric opposite of integration. If it’s not happening on the same die, that means you’ve got wires between the chiplets, and it’s always the wires that slow everything down, cause complexity, crosstalk, disturbance, and side-channel attacks. Integrated silicon continues to be the Holy Grail.”
But when the reticle limit has been reached, chiplets may be the easiest way forward. “With traditional scaling walls becoming undeniable, heterogeneous integration moved from an optional optimization to an essential strategy for meeting modern AI and HPC workloads,” says proteanTecs’ Baruch. “What surprised many was how quickly ecosystem alignment matured. Packaging technologies, EDA flows, and test methodologies advanced together, enabling architectures that only recently felt speculative. Yet as integration density grew, so did new system-level variability and failure mechanisms that challenged long-held assumptions about reliability and coverage.”
It looks as if this trend will continue. “Last year, we made a prediction that 50% of HPC designs will be multi-die,” says Shekhar Kapoor, executive director for product management at Synopsys. “Twelve months later, industry reports and surveys confirm what we anticipated: multi-die design is reaching scale. Multi-die designs have become a cornerstone of advanced semiconductor design. This shift was driven by two forces: the physical and economic limits of monolithic scaling, and the explosive growth of AI and HPC workloads demanding higher performance and efficiency.”
There are other reasons to adopt chiplets beyond size. “It’s things like flexibility in your product lineup,” says Ansys’ Swinnen. “You can swap between a number of processes without having to redesign the whole thing. You just swap out for upgradeability. If you have a new USB interface, for example, you don’t have to redesign the entire chip. You just swap out that chiplet and you’re good to go. There are advantages other than simply performance and power.”
The industry is struggling to get there. “At the Chiplet Summit, there was frustration regarding chiplets,” says Heinig. “This is not the case for the data center side. For them, it was totally clear because they need chiplets for performance. But for the rest of the industry, many companies have stopped all chiplet activities because there is no business model. Everything is more expensive if you go with chiplets.”
But there are rays of light. “In the last weeks, especially for defense and automotive, we have received requests to speed up with chiplets,” adds Heinig. “For certain companies, it has become clear that they have to spend a little bit more money, and chiplets can be a solution to secure their supply chains. They see advanced packaging and chiplets as a way to do that. By using building blocks, you can then order processors from two suppliers and gain flexibility.”
There are still many problems that have to be solved. “What does a chiplet or 2.5D or 3D stacking look like when we are talking about chips that operate at tens or hundreds of gigahertz?” asks Keysight’s Kamdar. “What happens if there’s a communication chip and a digital chip next to it in an aerospace application? The problems that are being solved there are much different, and because of the higher-frequency communication challenges, these are tougher to solve. We are seeing a lot of engagement in this space, and there has been some exciting research published. I just think it’s going to roll out a little bit slower.”
The industry may need to have patience. “Chiplets are still an aspiration more than reality,” says Swinnen. “It was overhyped, but eventually we’ll get there. It’s like the IP revolution. People struggled. That took several years of standardization before it was ironed out. Chiplets are even more complicated than that, because there’s more involved. But eventually we will get there.”
Standards will be a cornerstone for more general adoption. “Standards are progressing, and advanced packaging technologies are powering heterogeneous integration,” says Synopsys’ Kapoor. “These developments reflect a fundamental industry pivot toward multi-die designs. The UCIe 3.0 specification was released in August. This update offers high bandwidth, interoperability, and ecosystem upgrades that reduce risk and accelerate adoption, supporting multi-die design as a mainstream design strategy.”
To get to a more open chiplet marketplace, some technical hurdles need to be overcome. “Thermal and mechanical stress analysis are moving from niche into necessity,” says Keysight’s Kamdar. “If you look at the chiplet problem, it is not just, ‘Can I package chips closer together?’ I have to look at what happens to power and what happens to temperature, and what happens if I stack too many chips on top of each other. Is there mechanical stress that changes the aging of the chips? Exploring things, not just from an electronics perspective, but from a multi-physics perspective, is starting to be more of a necessity.”
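To make that multi-physics coupling concrete, here is a minimal one-dimensional thermal sketch in Python. It lumps each layer of a two-die stack into a single thermal resistance; all of the resistance and power values are illustrative assumptions, not data from the article. Even this crude model shows the effect Kamdar describes: the buried die runs hotter than the die under the heat sink, even when it dissipates less power.
```python
# Minimal 1-D thermal sketch for a two-die stack. Each layer is lumped
# into one thermal resistance in degC per watt. All values are assumed
# for illustration only.

AMBIENT_C = 45.0     # coolant/ambient at the heat sink (assumed)
R_HEATSINK = 0.10    # degC/W, heat sink plus interface material (assumed)
R_TOP_DIE = 0.05     # degC/W vertically through the top die (assumed)
R_BOND = 0.08        # degC/W through the die-to-die bond layer (assumed)

P_TOP = 50.0         # W dissipated by the die next to the heat sink (assumed)
P_BOTTOM = 30.0      # W dissipated by the buried die (assumed)

# All heat exits through the heat sink, so both dies see that rise.
t_top = AMBIENT_C + (P_TOP + P_BOTTOM) * R_HEATSINK

# The buried die's heat must also cross the bond layer and the top die.
t_bottom = t_top + P_BOTTOM * (R_BOND + R_TOP_DIE)

print(f"top-die junction:    {t_top:.1f} C")      # 53.0 C
print(f"buried-die junction: {t_bottom:.1f} C")   # 56.9 C
```
That extra temperature rise in the buried die is exactly where thermal, timing, and aging effects start to interact.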
AI adoption
The meteoric rise of AI is impacting everyone. “The explosive rise of GenAI reshaped semiconductor roadmaps faster than even the boldest forecasts suggested,” says proteanTecs’ Baruch. “What began as a compute race quickly became a system-level transformation that exposed bottlenecks in memory bandwidth, interconnect, power integrity, reliability, and lifecycle monitoring. The scale and pace of AI deployment pushed device complexity to unprecedented levels, bringing with it far deeper requirements for observability, predictability, and long-term resilience.”
Those bottlenecks continue to be addressed. “Memory continues to be a key driver of performance as expected, but the scale of the planned buildout will require tremendous investments in semiconductor manufacturing and advanced packaging just to keep up,” says Rambus’ Woo. “I expected that HBM would continue to be in the spotlight, and there continues to be no end in sight to the demand for current and future HBM DRAMs. The announcement of Rubin CPX and the use of GDDR in tandem with Rubin and HBM was a surprise, showing that the industry has confidence in the long-term viability and utility of Large Language Models (LLMs) and the need to optimize hardware for different phases and use cases.”
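The arithmetic behind HBM’s pull is simple enough to show directly. A short sketch, using representative spec-level figures (1,024-bit stack interfaces; the per-pin rates are approximate and generation-dependent, so treat them as assumptions):
```python
# Per-stack HBM bandwidth = interface width (bits) * per-pin rate / 8.
# Pin rates below are representative, not tied to any specific product.

def stack_bandwidth_gbs(width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s."""
    return width_bits * pin_rate_gbps / 8.0

print(f"HBM3  (~6.4 Gb/s/pin): {stack_bandwidth_gbs(1024, 6.4):7.1f} GB/s per stack")
print(f"HBM3E (~9.6 Gb/s/pin): {stack_bandwidth_gbs(1024, 9.6):7.1f} GB/s per stack")

# An accelerator surrounded by 8 stacks of the faster memory:
total_tbs = 8 * stack_bandwidth_gbs(1024, 9.6) / 1000
print(f"8-stack device:        {total_tbs:7.1f} TB/s aggregate")
```
The wide-and-slow interface is the whole design point: terabytes per second of bandwidth come from width and proximity, which is also why HBM is inseparable from advanced packaging.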
But not everyone has been able to move at the same pace. “What has been surprising for me is there’s a big thirst for AI, but when it comes to EDA, the response has been more nuanced,” says Kamdar. “Look at foundation models, at what you see from OpenAI and Google and others, how fast they release a new model, and how fast it gets adopted. The EDA world definitely has a slightly different perspective.”
There are several reasons for this. “In 2025, we started seeding our efforts to step into AI, because prior to that, it appeared like AI had a lot of promise, but everything was changing so fast that making a sustainable engineering investment in it was a little confusing,” says Prakash Narain, CEO of Real Intent. “That cleared out this year. The other aspect that became clearer was the value of AI from the point of view of training. Typically, any time our customers bring on new users of the tools, there is an element of training that is involved. That aspect is well facilitated by AI, reducing the time to expertise, or familiarity.”
Many aspects of EDA flows are being improved. “Significant progress was achieved in the development and deployment of generative AI assistants, or copilots,” says Anand Thiruvengadam, director of product management for AI at Synopsys. “These copilots now provide expert guidance on tools and workflows, automate complex tasks such as RTL and formal testbench creation, and have greatly enhanced efficiency and productivity across design teams. AI agents are able to reason, plan, learn, and execute engineering tasks both individually and as part of coordinated teams. By collaborating, multi-agent systems can address complex, multi-step engineering challenges that previously required extensive manual effort and expertise.”
There are also questions about the models. What tasks are they really suited for? “Agentic AI is the current flavor in vogue, but LLMs are inherently flawed because they can’t reason and they’ve been programmed in a way to please the user,” says IC Manage’s Sikand. “What is needed are world models. LLMs have been trained on white-guy data sets. Our journals, our news, our political polarization, our societal problems are all first-world problems. But where are the majority of people on Earth? They’re not here. How are we helping those people? That’s more important. What AI really needs to do is to help lift people out of poverty so that we can have a better world.”
Adoption of AI has also solidified views about cloud usage. “We see a significant change that has happened in that many companies now demand that all AI should be delivered on-prem,” says Kamdar. “It involves designs and EDA and IP, and most companies are not ready to let their IP go off-site, into the cloud. We found this out because we developed applications for the cloud and we had to pivot and modify them to be AI solutions that are on-prem.”
Data centers
Everyone is aware of the rate of data center buildouts, but exactly how big is the buildout? “The semiconductor industry is talking about becoming a trillion-dollar industry by 2030, just looking at the growth rates,” says Rich Goldman, director, electronics and semiconductors business unit at Ansys. “That’s a great mark for the semiconductor industry. But recently, Jensen Huang said that Nvidia, a single company, has visibility into sales of Blackwell and Rubin through 2026, five quarters, that will total half a trillion dollars. That is triple their revenue from last year. That gives them half of the number that the semiconductor industry says they’re going to achieve in five years. That not only surprised me, it shocked me.”
While growth is good, it requires that other parts of the infrastructure can keep up. “I expected the planning for AI infrastructure buildout to keep sprinting forward, but what surprised me is the scale of investment that is being discussed by companies, including Meta, Google, Oracle, and OpenAI,” says Woo. “The power required to support the proposed scale of investment is mind-boggling. Power has become so important that deployments are being talked about in terms of gigawatts instead of tera operations per second (TOPS) or other traditional compute-related metrics.”
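A back-of-envelope conversion shows why gigawatts have become the unit of account. Every number in this sketch (the PUE, per-accelerator draw, and host overhead) is an illustrative assumption, not a figure from any company’s announcement:
```python
# Rough conversion from announced facility power to accelerator count.
# All inputs are assumptions chosen only to show the shape of the math.

FACILITY_GW = 1.0      # announced campus power (assumed)
PUE = 1.3              # power usage effectiveness: cooling, losses (assumed)
ACCEL_KW = 1.0         # draw per accelerator, including board (assumed)
HOST_KW = 0.4          # CPUs, memory, network per accelerator (assumed)

it_power_mw = FACILITY_GW * 1000 / PUE          # power left for IT gear
accelerators = it_power_mw * 1000 / (ACCEL_KW + HOST_KW)

print(f"usable IT power:      {it_power_mw:,.0f} MW")    # ~769 MW
print(f"accelerators (rough): {accelerators:,.0f}")      # ~549,000
```
At that scale, a few percent of efficiency is tens of megawatts, which is why power is displacing TOPS as the headline metric.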
And that is sending clear signals through the industry. “If we’re going to build the next generation of AI, we don’t have enough power for them,” says Sikand. “We haven’t yet solved nuclear fusion. We’re still reliant on traditional energy sources. Today, the energy density required to build these data centers isn’t viable. We’ve said this before, but now we have a scale problem, and there isn’t enough juice. If you look at how much juice is being generated, just in terms of new power production, it’s all happening in China. There is a huge amount of power production, and so they’re able to scale. We are short on power. Silicon Valley needs to make things more efficient and drive efficiency. You don’t necessarily need the numbers of transistors that we currently have in legacy architectures.”
Design houses need to adapt. “Power, performance, and thermal limits have become first-class design constraints, while co-optimization of architecture, process, and packaging is now happening in near real-time (integrating pre-silicon and post-silicon processes),” says ChipAgents’ Wang. “The industry’s response to AI workloads has shown that hardware evolution can move just as fast as model innovation when the feedback loop between compute demand and silicon capability tightens.”
That will create a divide between those that adapt and those that cannot make the change fast enough. “Power and performance management have emerged as the most significant limiter to future scaling,” says Baruch. “Solving it will be the key enabler for the next era of growth, and it will be the number one strategic focus for every semiconductor company.”
Verification
The Siemens-Wilson Research Group data showed another year of erosion in first-pass silicon success, with the majority of respins caused by shifting or incomplete specifications. “AI helped find bugs faster, but didn’t stop spec drift,” says Arteris’ Nightingale. “Executable specifications are still more vision than reality, where most flows are document-driven, not data-driven. The integration between requirements, RTL, and test remains fragmented. Specification traceability remains the weakest link. Until specs become executable and continuously validated, respins will continue.”
AI is driving the development of new applications. “Verification AI is an obvious application, and we have seen an acceleration in this area,” says Dave Kelf, CEO for Breker Verification Systems. “It is now easy to predict the advent of the executable specification, where manual specs are read by machines that go on to create full test benches. However, my 2026 prediction will be a tailoring of this new technology with a back-to-basics verification foundation that couples more traditional techniques with AI frontends, thereby creating practical flows that work on today’s designs across the verification process.”
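As a toy illustration of where this could head, the sketch below keeps a small register spec as structured data and generates checks directly from it, so the spec and the design fail loudly against each other instead of drifting apart in a document. The register names, fields, and helper function are hypothetical, not any vendor’s format:
```python
# Toy "executable specification": the spec is data, and checks are
# generated from it rather than hand-written against a document. All
# names and fields here are hypothetical.

SPEC = {
    "CTRL":   {"offset": 0x00, "reset": 0x0000_0001, "writable_mask": 0x0000_000F},
    "STATUS": {"offset": 0x04, "reset": 0x0000_0000, "writable_mask": 0x0000_0000},
}

def check_reset_values(read_reg) -> list[str]:
    """Emit one reset-value check per register, straight from the spec.

    `read_reg` is any callable modeling a register read from the design.
    Returns human-readable failures; an empty list means clean.
    """
    failures = []
    for name, reg in SPEC.items():
        got = read_reg(reg["offset"])
        if got != reg["reset"]:
            failures.append(f"{name}: reads {got:#x} at reset, spec says {reg['reset']:#x}")
    return failures

# A stand-in design model whose CTRL reset has drifted from the spec:
design_state = {0x00: 0x0, 0x04: 0x0}
print(check_reset_values(lambda addr: design_state[addr]))
# -> ['CTRL: reads 0x0 at reset, spec says 0x1']
```
Change the spec data and the checks change with it, which is the traceability property Nightingale identifies as the weakest link today.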
There are also possibilities to tailor verification strategies and tools to emerging design trends. “There’s a lot of replication and repetition in AI chips, at least the core AI chips,” says Real Intent’s Narain. “Each of the replicated modules are somewhat simpler, but the total size of the design is very large. There is an opportunity to take advantage of these aspects. What are the special attributes of AI designs that we can take advantage of? Can we create new applications or improve existing applications so they are more efficient for the scale of the design, which is happening for AI?”
Emerging technologies
Some technologies appear to have been on the cusp for several years. “Quantum computing is talked about, and many say it is overhyped,” says Swinnen. “And yet, if you look at the valuations of the quantum computing companies, they continue to rise. Companies like IBM and others are not stupid. They know what they’re doing. They continue to invest in quantum. It leads me to suspect that maybe more is going on than we know about. Why are these valuations so consistently going up, and why do these companies keep investing? What is it that we don’t know?”
Others share those views, but also see the potential and the possible disruptions. “There is a lot of hype around quantum compute and what qubits can do,” says Sikand. “It is exciting because things could change in such a dramatic way. For example, a quantum computer allegedly could transform Bitcoin computation. Even though there aren’t many coins left to mine because of the finite limit, it could mine them in the blink of an eye.”
It is also bringing technologies together. “The convergence of photonics and quantum is happening,” says Kamdar. “We can see quantum computing continue down this path. It is not mainstream by any means, but the research in quantum computing continues, announcements of 1,000-qubit computers and more are happening. There is lots of research happening within the Western world, but you are also seeing it happen in other parts of the world. Japan, India, and other countries are announcing major research initiatives around quantum computing.”
Photonics may be ready for a big move, too. “Not totally unanticipated, but co-packaged optics is finally here,” says Swinnen. “It has been around for years, but was always too expensive, too complicated. The technology wasn’t quite there, so it had limited application. Now there’s a feeling that it has finally arrived. TSMC has thrown its weight behind the COUPE architecture by saying, ‘Here’s a standard architecture that has high enough bandwidth and is reliable enough for broad application.’”
This is being pushed by the data centers. “The silicon photonics impact in short-haul communications, from rack to rack and maybe even on-board, on-chip, is definitely happening,” says Kamdar. “Within the AI industry and the trillions of dollars being invested, money is no longer the only factor. There is enough investment, and money to be made, if we can showcase faster speeds.”