Choosing the right problems for AI to tackle will be crucial.

In the past few years, AI has burst onto the public stage in grand style. This ongoing trend is apparent in the rising number of AI applications in everyday life. But more and more, it can also be seen in a broad range of technical niches, where the main motivation is AI’s promise of continuously increasing efficiency.
One such niche that has seen decades of attempts to achieve greater efficiency is the field of analog and mixed-signal (A/MS) chip design. To date, the focus here has been on improving design efficiency by adopting more automation; specifically, new electronic design automation (EDA) methods. AI is likely to establish itself here as a new tool in the EDA toolbox. However, choosing the right problems for AI to tackle will be crucial. If we look at the V-model, four types of tasks could be particularly suitable: 1) requirements analysis, 2) surrogate models, 3) verification, and 4) search and documentation.
As today's highly popular large language models (LLMs) become ever better at processing text, they will increasingly be used for requirements analysis. For example, an AI assistant can quickly identify and collate contradictions, duplications, or missing aspects in huge documents and thus serve as a "guide" for the developer. The primary aim here is not to completely automate the analysis, but rather to find obvious logical errors in those huge documents much more quickly; in other words, AI is the machine that finds the needle(s) in the haystack.
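As a rough illustration of the kind of assistant described above (not a method proposed in this article), the sketch below batches numbered requirements into a single prompt and asks an LLM to flag contradictions, duplications, and gaps. The OpenAI Python client is used only as a stand-in; the model name, prompts, and sample requirements are invented.

```python
# Hypothetical sketch: scan a batch of requirements for contradictions and gaps.
# Assumes the OpenAI Python client (>= 1.0); model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You are a requirements-review assistant for analog/mixed-signal IC design. "
    "List contradictions, duplications, and missing information you find, "
    "citing the requirement IDs involved."
)

def review_chunk(requirements: list[str]) -> str:
    """Ask the LLM to review one batch of numbered requirements."""
    numbered = "\n".join(f"R{i}: {r}" for i, r in enumerate(requirements, start=1))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": numbered},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    spec = [
        "The output driver shall settle within 1 us.",
        "Total supply current shall not exceed 2 mA.",
        "The output driver shall settle within 5 us under all load conditions.",
    ]
    print(review_chunk(spec))
```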
Once the requirements analysis is done and the specification has been defined, development engineers frequently face the problem of designing an appropriate system architecture and its components. This requires access to detailed knowledge, including in the form of models. Often, however, designers rely heuristically on values from experience and move straight to the concrete design step. With complex systems, though, this can quickly lead to missteps that result in time-consuming iterations. From a technical point of view, early model-based system optimization is therefore preferable. But the initial modeling effort can be economically unattractive (even if it would pay off strategically in the long term). This is precisely where neural networks can be used as surrogate models, since they allow model creation and refinement to be transferred in part from the modeler to a computer. In this scenario, the modeler's focus shifts to identifying suitable components and modeling procedures, as well as to developing a sound methodology for the given problem.
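To make the surrogate idea a little more tangible, here is a minimal sketch in which a small neural network learns the mapping from design parameters to performance figures from a handful of (here, synthetically generated) samples, and then answers what-if queries far faster than a full simulation would. All parameter names and the toy "simulator" are invented for illustration.

```python
# Minimal surrogate-model sketch: a small neural network learns to predict
# circuit performance from design parameters, replacing slow simulations
# during early architecture exploration. Data and parameter names are invented.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Pretend these rows came from a circuit simulator:
# columns = [transistor width (um), bias current (uA)]
X = rng.uniform([1.0, 10.0], [20.0, 200.0], size=(200, 2))
# targets = [DC gain (dB), unity-gain bandwidth (MHz)] from a toy analytic "simulator"
y = np.column_stack([
    20 * np.log10(50 * np.sqrt(X[:, 0] * X[:, 1])),  # toy gain model
    0.05 * X[:, 1] * np.sqrt(X[:, 0]),               # toy bandwidth model
])

surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
)
surrogate.fit(X, y)

# Fast "what-if": evaluate candidate sizings without launching new simulations.
candidates = rng.uniform([1.0, 10.0], [20.0, 200.0], size=(5, 2))
print(surrogate.predict(candidates))
```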
Typically, the critical verification phase begins only after the design phase has been completed. But as product cycles continue to accelerate, there is growing pressure to verify as early as possible during the design phase. Verification is also multilayered: it ranges from simple plausibility checks and compliance with specific design guidelines to component tests and complete system tests. System verification in particular once again calls for component models that not only simulate quickly but also reflect the current implementation with sufficient accuracy. AI-supported automation that can generate and/or improve component models from implementations "at the touch of a button" will presumably do much to improve the coverage and speed of system verification.
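A tiny, deliberately simple building block of such automation might look like the following sketch: the parameters of a fast single-pole behavioral amplifier model are fitted automatically to frequency-response data taken from the implementation, so the cheap model can stand in for the transistor-level design during system verification. The model, the data, and the fitting approach are assumptions chosen for brevity, not something prescribed here.

```python
# Sketch: automatically calibrate a fast behavioral model against data from
# the transistor-level implementation, so system verification can use the
# cheap model. The single-pole amplifier model and the data are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def gain_db(freq_hz, a0_db, pole_hz):
    """Magnitude response (dB) of a single-pole amplifier model."""
    return a0_db - 10 * np.log10(1 + (freq_hz / pole_hz) ** 2)

# Pretend these points were extracted from an AC simulation of the implementation.
freq = np.logspace(2, 7, 30)  # 100 Hz .. 10 MHz
measured = gain_db(freq, 60.0, 1e4) + np.random.default_rng(1).normal(0, 0.2, freq.size)

# Fit the behavioral model's parameters to the implementation's response.
popt, _ = curve_fit(gain_db, freq, measured, p0=[40.0, 1e3])
a0_fit, pole_fit = popt
print(f"calibrated model: A0 = {a0_fit:.1f} dB, pole = {pole_fit / 1e3:.1f} kHz")
```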
Finally, there is the topic of documentation. It is fair to say that this is not exactly many engineers’ favorite job. However, good documentation is the basis for excellent cooperation and reusability of the design results. One approach may therefore be to develop assistance systems that create documentation proposals based on the development work that has been carried out. The assistant could, for example, regularly “look over the engineer’s shoulder” and occasionally ask brief questions about the current design decision, perhaps via a pop-up window. This way, the assistant could document the work piece by piece while it’s being done, resulting in comprehensive, largely automated (and machine-readable) documentation of design decisions for use later on. Such a system could also use a database to identify what other development work has already been performed throughout the company; this would let it swiftly find the components required for the respective design step and suggest them for subsequent use, or quickly arrange discussions with the people who have the relevant knowledge.
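One small, concrete piece of such an assistant is the machine-readable record it keeps. The sketch below shows one possible schema for a design-decision entry and appends it to a JSON Lines log; the field names and the example entry are invented for illustration.

```python
# Sketch: machine-readable log of design decisions, the kind of record a
# documentation assistant could build up piece by piece. The schema is invented.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from pathlib import Path

@dataclass
class DesignDecision:
    component: str   # cell or block the decision applies to
    decision: str    # what was decided
    rationale: str   # the engineer's short answer to the assistant's question
    author: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(entry: DesignDecision, logfile: Path = Path("design_log.jsonl")) -> None:
    """Append one decision as a JSON line, so later tools can search and reuse it."""
    with logfile.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(entry)) + "\n")

if __name__ == "__main__":
    log_decision(DesignDecision(
        component="bandgap_ref_v2",
        decision="Increased startup current to 5 uA",
        rationale="Simulated startup failed at -40 C with the 2 uA setting.",
        author="bp",
    ))
```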
In essence, the fields of activity proposed here for AI are always about achieving greater efficiency and letting people focus on work that adds value. Everyone’s already talking about AI, but it’s still in its infancy. Especially when applying existing AI methods to niche problems in A/MS design, the spotlight should be on those tasks that are the most complex, repetitive, risky, and yes, annoying. Rather than repeating the development work over and over again, the idea should be to focus on the development process itself and have AI solve subproblems wherever appropriate. This would turn AI from a buzzword into a tool for A/MS IC design that can actually be used productively.
Benjamin Prautsch
Benjamin Prautsch is group manager for mixed-signal automation at Fraunhofer EAS. He holds a degree in electrical engineering from Technische Universitaet Dresden.