Preparing For Test Early In The Design Flow - SemiEngineering


Until very recently, semiconductor design, verification, and test were separate domains. Those domains have since begun to merge, driven by rising demand for reliability, shorter market windows, and increasingly complex chip architectures.

In the past, products were designed from a functional perspective, and designers were not concerned about what the physical implementation of the product was really like. That’s no longer the case.

“Five or ten years ago, the understanding about why test needed to be part of the same conversation started evolving,” observed Rob Knoth, product management group director in the Digital & Signoff Group at Cadence. “We began to see we could no longer ignore test for a couple of key reasons. First, for safety-critical, high-reliability products, we have to make sure there are zero defects, that the products are capable of the expected long lifetimes, and that safety is being handled in the right ways. As a result, test increasingly started to creep over to the designer’s desktop. Coupled with advanced nodes, designers must make sure they are testing for all the new defects on these advanced nodes, as well as keeping an eye on them when they’re in the field. Now, all three parties sit at the table with equal voices. So it’s not so much about the designer preparing for test, but it is the product designer considering test as one of the three important elements for the end function of the product, alongside the physical realization of the product and the test aspects. Test has become a concurrent activity, as opposed to something that follows along.”

As a result, there are specific tasks and considerations when starting the design, because they will impact the design later in the flow.

“Some of this is just the physical realities of test, including the fact that it’s going to consume some area along with some routing,” Knoth said. “Designers also must understand the power, performance, area, and congestion impact of test, ensuring this is ameliorated as much as possible, and make that part of the physical implementation flow. Designers also need to consider what is being put in to accommodate test. Whether that’s memory BiST or logic BiST or just test compression, it’s going to take up room. They must ensure that is part of the floor-planning part of the inter-block communication when they’re thinking about pin-out. They need to consider how test is traveling over the fabric between blocks, because it’s going to consume some part of the resources on the design. Planning for that upfront, as opposed to reacting to it, is critical.”
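To make that kind of upfront planning concrete, here is a minimal sketch in Python of budgeting the shared test-delivery fabric between blocks before implementation. Every block name and number is a made-up placeholder, and the model (one scan bit per flop per pattern, a fixed compression ratio) is deliberately crude; it only illustrates planning for test traffic instead of reacting to it.

```python
import math

BLOCKS = {
    # block: (scan flops, test patterns) - all figures invented for illustration
    "cpu_cluster": (2_000_000, 12_000),
    "gpu_tile":    (3_500_000, 18_000),
    "io_subsys":   (400_000,    4_000),
}

SHIFT_CLOCK_HZ = 100e6        # assumed scan-shift clock on the test fabric
TEST_TIME_BUDGET_S = 0.25     # per-block test-time budget
COMPRESSION_RATIO = 100       # assumed on-chip scan compression
PLANNED_FABRIC_LANES = 64     # scan-data lanes floor-planned between blocks

def lanes_needed(flops, patterns):
    """Scan-in volume is roughly one bit per flop per pattern; compression
    reduces it, and each lane delivers SHIFT_CLOCK_HZ bits per second."""
    data_bits = flops * patterns / COMPRESSION_RATIO
    bits_per_lane = SHIFT_CLOCK_HZ * TEST_TIME_BUDGET_S
    return math.ceil(data_bits / bits_per_lane)

total = 0
for name, (flops, patterns) in BLOCKS.items():
    lanes = lanes_needed(flops, patterns)
    total += lanes
    print(f"{name:12s} needs ~{lanes:3d} fabric lanes")

print("fits floorplan" if total <= PLANNED_FABRIC_LANES
      else "re-plan the fabric or the test schedule",
      f"({total} needed vs {PLANNED_FABRIC_LANES} planned)")
```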

Automation of test processes has eased some of the complexity of test planning, but that’s not the whole story, noted Ron Press, a technology enablement director at Siemens EDA. “People are thinking of using AI techniques to solve the challenges with test, and with the rest of design, too. Instead of refining a really difficult problem, you can move to something that’s a little more practical, easier to use, and doesn’t carry all this complexity. Instead of artificial intelligence, we should look at architectural intelligence. Part of that is taking the really complex problem of how all this work is being done in the core, set against these different pieces, and stepping back and looking at it from the architectural level. Our suggestion is to put a plug-and-play platform together, such as an IEEE 1687 iJTAG framework that is an industry standard, to make it easy to have everything plug-and-play.”

Press recommended that design teams conduct architecture reviews for their designs. “We say, ‘Let’s look at your overall plan. Is it a tile-based design where there is top-level logic, where each piece can be made independently by wrapping an on-chip clock control (OCC) inside? If so, your life is so much easier.’ We’ve had companies say, ‘I have to go back and forth hundreds of times, because I refined this tile versus this other tile. And now this one’s gotten bigger, I’ve changed some things, so now this one has to adjust, and I have to take some of my bandwidth and put it here.’ With a plug-and-play approach, and with packetized data delivery, you don’t have to change anything. If something changes over here, that’s fine.”
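A toy model of that plug-and-play, packetized idea might look like the following. This is only an architectural sketch, not IEEE 1687 ICL/PDL; the tile IDs and payloads are invented. The point is that the top-level delivery loop addresses tiles by ID, so when one tile’s test content changes, only its own packets change and nothing else needs to be re-planned.

```python
from dataclasses import dataclass

@dataclass
class TestPacket:
    tile_id: int      # which tile this payload targets
    payload: bytes    # compressed test data for that tile

class TileUnderTest:
    """Standard wrapper every tile exposes; the network only sees this API."""
    def __init__(self, tile_id: int):
        self.tile_id = tile_id

    def apply(self, payload: bytes) -> bool:
        # In silicon this would feed the tile's local DFT logic; here we
        # simply pretend every payload passes.
        return True

def deliver(packets, tiles):
    """Route each packet by its header; the loop knows nothing about tile internals."""
    by_id = {t.tile_id: t for t in tiles}
    return all(by_id[p.tile_id].apply(p.payload) for p in packets)

# If tile 1 grows and its payload doubles, only its packets change;
# tile 0 and the delivery loop stay untouched.
tiles = [TileUnderTest(0), TileUnderTest(1)]
packets = [TestPacket(0, b"\x00" * 128), TestPacket(1, b"\xff" * 256)]
print("all tiles passed:", deliver(packets, tiles))
```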

The knowledge set required for the DFT engineer is just as broad, according to Robert Ruiz, director of product marketing, test products at Synopsys. “For the DFT engineer, there are certain things that designers and DFT engineers need to know. There are things that require more reliance on tools, and as with every area in design where there’s more reliance on tools, the design engineers need to understand the fundamentals of the design engineering process. After that, they rely on the automation that tools provide.”

Verification challenges
From there, verification engineers need to understand how DFT impacts the functionality of a chip.

“How do you verify more with less? Or how do you verify more, faster?” asked Ruiz. “If you use this set of new libraries, DFT engineers have to know the structure and the architecture, because that’s going to impact the design. In some cases, the DFT engineer has to know about physical design, and how test impacts it. But the logic also needs to be verified, so they must know how to run some of the verification tools. Timing must be checked, as well, along with running formal verification tools to make sure the other logic was correct. DFT engineers are somewhat unique in that they’re evolving into their own specialized IP designers. And just like with IP, they have to know all aspects of an EDA flow, of a design flow.”

For smaller designs with a modest amount of digital logic, the process is straightforward because the level of automation has risen so much. “They want scan chains,” he said. “It’s literally one command. They just have to know how many scan chains they want. Then it’s a couple commands. That’s all very straightforward automation that’s built into synthesis tools. There’s a straight connection to ATPG to generate the patterns, and they hand that off.”

On the other hand, for a complex SoC, GPU, or AI chip, there has to be a thought process and skill set that covers everything from how testable the different blocks of the design are, to what the DFT architecture should optimize, to the constraints of the physical tester. And all of that needs to happen within the physical limitations the physical design team provides. For example, specific considerations may apply to aspects such as RTL power estimation.
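As a rough illustration of that trade-off (all figures are hypothetical), the chosen number of scan chains sets the chain length, which in turn sets shift time, while the tester’s channel count bounds how many chains can be driven directly:

```python
import math

FLOPS = 5_000_000          # scan flops in the block (invented)
PATTERNS = 10_000          # ATPG patterns (invented)
SHIFT_HZ = 50e6            # assumed scan-shift frequency
TESTER_CHANNELS = 128      # scan-in pins the tester can drive without compression

def test_time_s(num_chains):
    chain_len = math.ceil(FLOPS / num_chains)
    # One full shift per pattern plus a final unload; capture cycles ignored.
    return chain_len * (PATTERNS + 1) / SHIFT_HZ

for chains in (8, 32, 128, 512):
    note = "" if chains <= TESTER_CHANNELS else "  (needs on-chip compression)"
    print(f"{chains:4d} chains -> {test_time_s(chains):7.2f} s{note}")
```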

“When you’re doing RTL power estimation, you’ve got to have a very easy way to account for the type of test that is going to be included, such as memory BiST, without actually needing the real memory BiST put in there,” said Knoth. “You want to make sure the designers have the ability to mock that up. If they don’t, there will be a pretty nasty surprise later. It’s that sort of mentality, whether it’s for the physical designers or RTL designers, where you want to be able to give them the ability to add in placeholders that represent what test is going to eventually do, and then ensure during implementation that you meet or beat those budgets.”
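One lightweight way to act on that advice is sketched below: carry explicit placeholder budgets for test logic that has not been inserted yet through RTL power and area estimation, then compare implementation results against them. The block names and numbers are invented for illustration, not taken from any tool or flow.

```python
# Placeholder budgets for not-yet-inserted test logic (memory BIST, compression, etc.).
TEST_PLACEHOLDERS = {
    # block: (area_mm2 budget, dynamic_mW budget) - invented figures
    "mem_bist_ctrl":  (0.020, 3.0),
    "edt_compressor": (0.015, 2.0),
}

def check_against_budget(block, measured_area_mm2, measured_dyn_mw):
    """Compare post-insertion implementation numbers to the RTL-stage placeholder."""
    area_budget, dyn_budget = TEST_PLACEHOLDERS[block]
    ok = measured_area_mm2 <= area_budget and measured_dyn_mw <= dyn_budget
    status = "meets budget" if ok else "EXCEEDS budget - re-plan before signoff"
    print(f"{block}: area {measured_area_mm2}/{area_budget} mm^2, "
          f"dynamic {measured_dyn_mw}/{dyn_budget} mW -> {status}")

check_against_budget("mem_bist_ctrl", 0.018, 2.7)   # fits the placeholder
check_against_budget("edt_compressor", 0.019, 2.4)  # the "nasty surprise" case
```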

Planning ahead
The more critical test is to the functioning of the design, the more imperative it is for test IP to be present in the files being used for functional verification.

Knoth has seen a big migration of that content, from the gate level up to RTL, so that it can be seen by the functional verification team. The more physically aware certain aspects of test become, the better they dovetail with the design convergence process, but that also makes moving test content up to the RTL space more difficult, so there always will be room for improvement here.

“When something is inserted during implementation, it’s easy to understand how test can manipulate it in a way that’s different than the functional circuitry,” Knoth said. “But when you’re inserting stuff at the RTL level, that can be a little trickier. So there’s always going to be room to improve the implementation flow, the verification flow, etc.”

For some design teams, earlier test considerations require an overhaul. For others, not so much.

Press said this depends on what they’re doing already, because there are many design teams switching over to a hierarchical methodology. “One of the reasons for this is because the design is too large, and this is an issue across the industry, not just for test. It’s not uncommon to have 500 million gates in a chiplet. If you’re doing that all with one image, the computers have to be huge, your runtime is going to be really big, so everybody’s realizing they need to cut it up into pieces. ‘Finish one piece, then employ a smart architecture that lets you plug in IP, and then you never have to look at the whole thing as one unit.’ Is there some resistance? It’s not as bad as I would have figured, because people have to go this way anyway.”

At the same time, some chip design teams run into costly challenges getting enough huge workstations to handle their design as a single ‘flat’ piece. “Once they start breaking it up, it’s very easy to have the resources to deal with it, and the time to run their experiments,” Press said. “Another aspect of this is if you can finish the work at the core with a partition, then it’s done much earlier and it’s a smaller problem. You can run more experiments and optimize better, too.”
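The resource argument comes down to simple arithmetic. The sketch below uses assumed scaling factors (the per-gate memory footprint and the runtime exponent are illustrative, not tool benchmarks) to show why a 500-million-gate flat run is so much heavier than per-partition runs:

```python
# Rough flat-vs-hierarchical resource estimate. All scaling factors are assumptions.
GATES_TOTAL = 500_000_000     # chiplet size quoted above
PARTITIONS = 20               # assume 20 roughly equal partitions
BYTES_PER_GATE = 200          # assumed tool memory footprint per gate
RUNTIME_EXPONENT = 1.3        # assumed super-linear runtime growth with size

def memory_gb(gates):
    return gates * BYTES_PER_GATE / 1e9

def relative_runtime(gates):
    return (gates / 1e6) ** RUNTIME_EXPONENT   # arbitrary units

flat = GATES_TOTAL
part = GATES_TOTAL // PARTITIONS

print(f"flat:        ~{memory_gb(flat):.0f} GB, runtime ~{relative_runtime(flat):.0f} units")
print(f"partitioned: ~{memory_gb(part):.0f} GB per partition, "
      f"runtime ~{relative_runtime(part):.0f} units each, runnable in parallel")
```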

Back to basics
With so many complicated techniques being discussed, sometimes the straightforward path is the best one. Design, verification, and test are intricately connected, and must be approached as such.

“The most important thing to keep in mind when writing completely clean RTL is following the requirements in the architectural definition,” said Natalija Colic, digital design engineer at Vtool. “Then, follow the standards to avoid common errors or discrepancies between the simulation and further steps in the design flow. When writing the code, what needs to be kept in mind is that although this is a verification task, you need to look ahead to all the possible scenarios that might occur in the design and have a fail-safe mechanism to avoid bugs or a failure further down the design flow. Ask yourself, ‘What could go wrong with these requirements? Are these requirements specific enough? Are we missing some sort of information? Are we writing the most detailed documentation we can and planning ahead so we can cut the time of doing the revision later on, when it might be already too late or the bug is already found?’”

Linting tools help here. “Linting is one of the tools that is run first, where you check the coding styles, which can show bugs or functional discrepancies or other errors in the code like combinational loops or something else that’s not synthesizable,” said Colic. “CDC tools also can show whether there is proper synchronization of clock domain crossing. In addition, running trial synthesis can show if there are timing closure issues. It’s not strictly related to verification tests, but it is related to tests further down the line to see if the frequencies are okay and to prepare the DFT chain, scan chains, etc., that are used for production testing.”
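In simplified terms, the structural part of such a lint check is a graph problem: a combinational loop is a cycle among non-registered signal dependencies. The toy netlist below is invented and nothing like a real lint engine, but it shows the idea:

```python
# Toy structural check: flag combinational loops in a netlist-like dependency graph.
# Register outputs break feedback; purely combinational cycles are lint errors.

COMB_DEPS = {
    # signal: signals it depends on combinationally (hypothetical netlist)
    "a": ["b"],
    "b": ["c"],
    "c": ["a"],      # a -> b -> c -> a : combinational loop
    "q": [],         # register output, no combinational fan-in
}

def has_comb_loop(deps):
    visiting, done = set(), set()
    def dfs(sig):
        if sig in done:
            return False
        if sig in visiting:
            return True          # back-edge found: combinational cycle
        visiting.add(sig)
        looped = any(dfs(d) for d in deps.get(sig, []))
        visiting.discard(sig)
        done.add(sig)
        return looped
    return any(dfs(s) for s in deps)

print("combinational loop found:", has_comb_loop(COMB_DEPS))  # True
```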

Olivera Stojanovic, project manager at Vtool, noted that in some cases designers create directed testbenches with basic traffic just to be sure the design is alive. “When the design is too complicated, they may not do that if it is deemed not worthwhile. They can start with the verification environment, but if the verification environment is not ready, they may create some very simple testbenches, just to see if the design is alive.”

Another technique many design engineers have found useful is assertions. “Some designers want to add assertions, some don’t,” Stojanovic said. “If they do, assertions can decrease time for debug because they point to the exact root cause of an issue. You’re doing black-box testing, of course, but you will catch bugs. The cause can be buried somewhere deep inside, and it takes time to locate where the issue is, so adding assertions in the design speeds up debug.”
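A small Python toy (not SVA; the protocol and trace are invented) shows why this shortens debug: an inline check names the exact cycle and signals where a rule is broken, instead of the failure surfacing only as a wrong output much later:

```python
# Invented protocol rule: 'grant' may only be high while 'req' is high.
trace = [
    {"cycle": 0, "req": 1, "grant": 0},
    {"cycle": 1, "req": 1, "grant": 1},
    {"cycle": 2, "req": 0, "grant": 1},   # violation injected here
    {"cycle": 3, "req": 0, "grant": 0},
]

def check_grant_implies_req(trace):
    """Check the invariant on every cycle and report the first violation."""
    for step in trace:
        if step["grant"] and not step["req"]:
            print(f"ASSERTION FAILED at cycle {step['cycle']}: "
                  "grant asserted without an active request")
            return False
    print("no violations")
    return True

check_grant_implies_req(trace)
```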

Finally, writing the verification plan before writing the first line of a verification environment is always a good place to start, she explained. “This is a very good checkpoint for the designers to be sure that both sides actually understood the design. And for the verification team, it is a very important step to get feedback both from the architect and from the designer if something was misunderstood, or if additional tests need to be added. Sometimes it is not related to the functionality of the design requirements, but more aligned with the designer’s intuition. What is the weak point in their design? What are the types of stress scenarios the design would be susceptible to? These are the kinds of things that they’re afraid can be buggy.”

Conclusion
Whether the most advanced, AI-based test and verification approaches are used in a design, or more fundamental approaches are adhered to, design engineers must go into a design with a breadth and depth of knowledge of not only design techniques, but also DFT, test, and verification techniques in order to make the best design choices. While automation will continue to mature and become more sophisticated over time, there always will be a need for engineering team members to direct those tools.

