1. What’s with all this talk about coverage? David Lacey and Rob Porter, Hewlett Packard Company, June 20, 2006
2. “You have this awesome generation that pseudo-randomly creates all sorts of good scenarios. You also have created equally awesome scoreboard and temporal checker infrastructure that will catch all the bugs. Next, you run it like mad, with all sorts of seeds to hit as much of the verification space as possible.” — Peet James, Verification Plans
14. How do I get started with this coverage stuff?
15. Coverage roadmap – getting started
[Roadmap diagram: 1. Choose specification form (from spec and design) → 2. Identify coverage model → 3. Implement coverage model (PSL / SVA / OVL; code / assertion / FCP / transaction coverage tools) → 4. Collect data → 5. Analyze data → 6. React to data (adjust stimulus). Steps 1–2 form the Planning phase, steps 3–4 Execution, steps 5–6 Consumption.]
16. Coverage planning
[Roadmap diagram repeated, with the Planning phase highlighted (steps 1–2: choose specification form, identify coverage model).]
22. Coverage execution
[Roadmap diagram repeated, with the Execution phase highlighted (steps 3–4: implement coverage model, collect data).]
25. Coverage analysis
[Roadmap diagram repeated, with the Consumption phase highlighted (step 5: analyze data).]
30. Coverage results
[Roadmap diagram repeated, with the Consumption phase highlighted (step 6: react to data).]
Editor's Notes
I liked this quote. It emphasizes that we build these fancy verification environments that do so much for us automatically. We get a slick compute farm set up and run simulations 24x7. And then we “run like mad”. But the question remains…
What have I really done? Did I get the results I wanted? Are my tests really doing what I think they are doing? Are they still doing what they did last week? Am I done? Emphasize that coverage is a tool to HELP answer the question “am I done verifying the design?” It does not provide the full answer.
What coverage does provide is excellent feedback on what you have actually accomplished: test effectiveness and redundancy. A caution, though, against doing too much with random test grading.
Coverage space: different aspects could be unioned into a single coverage space definition, but they are really disjoint. A space could also be defined to use multiple technologies, but that is likely to cause confusion during data generation and analysis.
What is the distinction between a coverage model and coverage tools? The coverage model is the description of WHAT you want to cover. It ranges from high-level detail, such as the architecture definition and project specification documents, to low-level details of the design implementation. Coverage tools provide the HOW of working with a coverage model. We can get ad-hoc coverage data such as bug rates and the number of simulation cycles run. More traditional coverage tools include code coverage tools. More recently, embedded monitors in the form of FCPs and assertions have been providing coverage feedback. Moving forward, the use of transaction-level modeling in the testbench is enabling transaction coverage tools to help use the coverage model.
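The WHAT/HOW split above can be made concrete with a small sketch. This is purely illustrative (the coverage-point names and helper functions are invented, not from any vendor tool): the model is just a named description of what must be covered, while the recording and reporting functions stand in for the tool side.

```python
# Hypothetical sketch: the coverage model describes WHAT to cover;
# the functions below stand in for the tool side (HOW). All names invented.

# The model: named coverage points, from architecture-level detail
# down to implementation-level detail, each with a hit count.
coverage_model = {
    "arch.retry_after_timeout": 0,   # from the architecture spec
    "impl.fifo_full_then_flush": 0,  # from the RTL implementation
}

def record_hit(model, point):
    """Tool side: bump the hit count for a coverage point."""
    if point not in model:
        raise KeyError(f"unknown coverage point: {point}")
    model[point] += 1

def unhit(model):
    """Tool side: report the points the simulations never reached."""
    return sorted(p for p, hits in model.items() if hits == 0)

record_hit(coverage_model, "arch.retry_after_timeout")
print(unhit(coverage_model))  # → ['impl.fifo_full_then_flush']
```

The point of the separation: the same model description can be fed to different tools (simulation, reporting, merging) without rewriting the WHAT.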
Code coverage monitors the design as a whole without any specific knowledge of its operation. We have typically used only line coverage, and we look for 100% line coverage. Code coverage provides an opportunistic view of coverage: a line can be marked as covered even though it contains an error, if the error never propagated to a checker. This increases the risk of misinterpreting coverage data. Issues we have had with other sources of code coverage: Path coverage is analyzed without an understanding of the relationships among the variables controlling the paths. For example, if a module has if(a) at the top and another if(a) at the bottom, with sequential code in the middle, the tool reports four paths when in reality there are only two; the other two paths are unreachable but are not marked as such. This led to many, many false paths, on the order of tens of thousands. State coverage: issues with state machine extraction, and extra work to add pragmas to get the extraction to happen properly. Expression coverage: time consuming to analyze, with tons of data to wade through.
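The false-path example above can be demonstrated in a few lines. This sketch (the structure is illustrative, not how any path-coverage tool is implemented) enumerates branch outcomes the way a structural tool would, then enumerates what can actually execute given that both branches test the same variable `a`:

```python
from itertools import product

# Two branches that test the SAME condition: if (a) ... then later if (a).
branch_conditions = [lambda a: a, lambda a: a]

# Structural view: each branch taken/not-taken independently -> 2 x 2 paths.
structural_paths = list(product([True, False], repeat=len(branch_conditions)))

# Reality: 'a' is the only degree of freedom, so only 2 paths are feasible.
feasible_paths = set()
for a in (True, False):
    feasible_paths.add(tuple(c(a) for c in branch_conditions))

print(len(structural_paths))  # 4 paths reported
print(len(feasible_paths))    # 2 paths that can actually execute
```

With tens of correlated branches in real RTL, the gap between the structural count and the feasible count is what produced the tens of thousands of false paths mentioned above.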
Structural (code) coverage is tied to the structure of the design/RTL and measures the implementation, as opposed to purely functional (architectural) coverage. Implementation is the focus of logic engineers; architecture is the focus of verification engineers. Example: an assertion check will never be evaluated if its precondition is not first seen. Assertion coverage provides feedback on whether you have seen the precondition. In this example, we also need to see both Flush and SMQueStop. If we don’t see both through our verification efforts, we have a coverage hole in which the check has never been evaluated.
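A minimal sketch of the precondition idea, assuming the behavior described above (the class and event-set logic are invented for illustration; only the event names Flush and SMQueStop come from the example): the check body runs only when the precondition events are all observed, and assertion coverage is simply the count of those evaluations.

```python
# Hypothetical sketch: an assertion check fires only when its precondition
# is seen. Assertion coverage counts evaluations; zero evaluations at the
# end of verification is a coverage hole, not a pass.

class PreconditionedCheck:
    def __init__(self, required_events):
        self.required = set(required_events)
        self.evaluations = 0  # assertion coverage: times the check ran

    def observe(self, active_events):
        # Evaluate the check only when all precondition events are present.
        if self.required <= set(active_events):
            self.evaluations += 1

check = PreconditionedCheck({"Flush", "SMQueStop"})
check.observe({"Flush"})               # precondition not met: no evaluation
check.observe({"Flush", "SMQueStop"})  # both seen: check evaluated once
print(check.evaluations)               # → 1
if check.evaluations == 0:
    print("coverage hole: check never evaluated")
```

This is why a suite can report zero assertion failures while still leaving the check completely unexercised.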
With TLM verification approaches, simulation databases can record transactions. Other types of transaction coverage can include having the verification tool watch for a sequence of events in the Verilog domain. When the tool sees the sequence, it can log a “packet” with the significant information. In the simplest case, this is the same as temporal functional coverage; normally, these cases record significant information related to the sequence of events.
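A sketch of the sequence-watching idea, under invented names (the event sequence req/grant/data/done and the field names are hypothetical, not from any real bus protocol): the watcher tracks progress through a fixed event sequence and, on completion, logs one transaction record carrying the information gathered along the way.

```python
# Illustrative sketch: watch for a sequence of low-level events; when the
# full sequence is seen, record one transaction "packet" with the
# significant information observed along the way.

SEQUENCE = ["req", "grant", "data", "done"]  # hypothetical event names

class TxnRecorder:
    def __init__(self):
        self.step = 0
        self.fields = {}
        self.packets = []  # the recorded transaction database

    def on_event(self, name, **info):
        if name != SEQUENCE[self.step]:
            self.step, self.fields = 0, {}   # sequence broken: restart
            return
        self.fields.update(info)             # gather significant info
        self.step += 1
        if self.step == len(SEQUENCE):       # full sequence seen
            self.packets.append(dict(self.fields))
            self.step, self.fields = 0, {}

rec = TxnRecorder()
rec.on_event("req", addr=0x40)
rec.on_event("grant")
rec.on_event("data", size=8)
rec.on_event("done")
print(rec.packets)  # one packet carrying addr and size
```

If the packet carried no fields, this would collapse to plain temporal functional coverage; the fields are what make it transaction coverage.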
EDA companies are beginning to provide tools to help with coverage; previously, we each had to create our own custom solutions. A number of things are still not provided by the EDA industry, but a significant amount of infrastructure is now available. For instance, there is broad support for PSL and SVA. The results of assertions and FCPs are automatically recorded into a database during a simulation. Most vendors provide a GUI that lets you look at the coverage data graphically, including effective use of color coding, and most provide a batch-mode interface from which text reports can be generated. Debug tools exist for analyzing assertions and FCPs written in PSL or SVA, and you can often replay a simulation through a database file after making changes to the PSL or SVA code. Moving forward, some vendors are beginning to build their tool suites around encouraging a coverage-driven methodology: tools will link the test plan, specifications, and simulation results, using coverage data to tie them together. Others are creating unified coverage databases that link multiple sources of coverage (code, assertion, FCP, transaction, etc.) in one database accessed through a common tool interface. What is still lacking is more advanced capabilities around supporting volume simulations, efficiently merging coverage databases from multiple locations and environments, and generating customized views of the data. Tools to use: assertion/FCP/transaction coverage databases, code coverage, and initial reporting from vendors. You will need some custom tools to get all the views you will need.
Planning: choose the form of specification and plan the coverage model. Start with the functional spec and test plan; identify the areas that need to be covered and how you will cover them. Execution: implement the coverage models (create FCPs, record transactions) and collect the data. Consumption: analyze the data and react to it.
The coverage model includes not only where coverage is needed, but also which coverage tool will be used to get that coverage. Understand what coverage types will be used and what information you will obtain from each type; some types may not provide meaningful data on your project. Plan time for validating the correct behavior of vendor/custom tools for your use. The tools you need are not always available from a vendor, so understand the capabilities of the vendor tools and plan for the development cost of supplementing them with custom tools. Coverage infrastructure consists of more than tools (vendor and custom): compute infrastructure with enough horsepower to support the cost of the coverage tools, disk space, and processes and tools to aggregate coverage data from all volume simulations. Coverage execution: there is a runtime cost to coverage tools during a simulation, so plan what percentage of sims will run with coverage enabled. We always run with FCPs and transaction coverage enabled; code coverage is run once a week. Maintenance: the infrastructure will require time to maintain, enhancements will be identified over the course of the project, and new vendor tool releases must be integrated. Coverage goals need to be defined up front; identify the metrics that will be used to help track coverage progress. Finally, focus on the quality and completeness of the coverage model through reviews, just as teams normally do with the design implementation.
What does each engineer create? Logic designers put in coverage of interest at a low level, but must force themselves to include a big-picture view. DV engineers put in coverage from a black-box viewpoint and identify the additional coverage required for the test plans. What to cover? Planning, planning, planning. When to add it? Adding coverage along with the RTL applies to FCPs/assertions and transactions; for code coverage, you might need to add pragmas (unreachable code, state machines, etc.). Where to look for ideas? See the books listed at the end; they have excellent examples. Why? I don’t think it will take long for open-minded engineers to understand the value of coverage methodologies; the bigger need is understanding how to get started.
These are excellent questions to ask when adding FCPs to a design.
Duplication: there is a cost to coverage, and you don’t want to incur the cost of capturing duplicate coverage data. For example, don’t add FCPs that provide the same coverage already obtained from code coverage. Also, don’t read too much into the grading of tests; understand what the focus of each test is. Even if the coverage from a limited number of runs shows duplicate coverage between two tests, one of those tests could be going after a difficult-to-reach corner case that requires many simulations to actually hit.
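The grading caution can be made concrete with a sketch (test names and point names are invented): computing each test's unique contribution over its recorded runs makes a corner-case test look redundant until enough seeds have run for the rare point to show up.

```python
# Hypothetical sketch: per-test unique coverage contribution. With few
# runs, a corner-case test can look identical to a smoke test.

runs = {
    "smoke_test":  [{"p1", "p2"}, {"p1", "p2"}],           # stable coverage
    "corner_test": [{"p1", "p2"}, {"p1", "p2", "rare1"}],  # rarely adds rare1
}

# Union each test's coverage across all of its runs so far.
totals = {name: set().union(*hits) for name, hits in runs.items()}

def unique_contribution(name):
    """Points only this test has hit, given the runs recorded so far."""
    others = set().union(*(t for n, t in totals.items() if n != name))
    return totals[name] - others

print(unique_contribution("corner_test"))  # {'rare1'}
print(unique_contribution("smoke_test"))   # set()
```

Had only the first run of each test been graded, both tests would have shown zero unique contribution, and a grading-driven pruning pass might have dropped the one test that eventually reaches `rare1`.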
Historically, a rule of thumb has been that the significant majority of simulation time should be spent inside the RTL simulator, not the verification tool. Advanced verification techniques carry a cost, and the rule of thumb is shifting toward spending more time in the verification tool. The payback is that the computer does more of the verification work, and the verification engineer is utilized for his or her knowledge.
How do I use the coverage tools to describe the coverage model? For code coverage, the RTL itself is the coverage model description. For assertions and FCPs, an assertion language or library such as PSL, SVA, or OVL is used. For transactions, there is less infrastructure in place: most simulators provide hooks to “record” transactions into a database, and queries must be written to extract coverage data from that database. You can also embed transaction recording into the monitors and checkers used in the verification environment.
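The query-the-database step might look like the following sketch, using an in-memory SQLite database as a stand-in for a vendor transaction database (the schema, table name, and bin definitions are all invented): transactions are recorded as rows, and coverage is a GROUP BY over the fields you care about, compared against the bins the model expects.

```python
import sqlite3

# Illustrative sketch: extract transaction coverage from a recorded
# database by query. Schema and field names are hypothetical.

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE txn (kind TEXT, size INTEGER)")
db.executemany("INSERT INTO txn VALUES (?, ?)",
               [("read", 4), ("read", 8), ("write", 8)])

# Coverage query: which (kind, size) bins were exercised, and how often?
rows = db.execute(
    "SELECT kind, size, COUNT(*) FROM txn GROUP BY kind, size").fetchall()
covered = {(kind, size): n for kind, size, n in rows}

# Compare against the bins the coverage model expects.
expected_bins = {(k, s) for k in ("read", "write") for s in (4, 8)}
holes = expected_bins - set(covered)
print(holes)  # → {('write', 4)}: a transaction-coverage hole
```

The same pattern scales up: the model supplies the expected bins, the recorded database supplies the hit counts, and the difference is the hole report.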
Coverage data needs to be aggregated across all simulation licenses running 24x7. Decide how data will be aggregated: by simulation environment, by block or chip, or globally. Create a central location to store the aggregated coverage data. Large amounts of data will be generated and must be managed from an infrastructure standpoint: how much historical coverage data do you keep, and when do you reset the aggregation? At model release boundaries? Calendar boundaries?
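The core of the aggregation step is just a counted merge. A minimal sketch, assuming each simulation emits a per-point hit-count map (the point names and grouping are invented):

```python
from collections import Counter

# Illustrative sketch: each simulation emits its own hit counts;
# a central aggregator sums them. Names are hypothetical.

def merge(per_sim_counts):
    total = Counter()
    for counts in per_sim_counts:
        total.update(counts)  # Counter.update adds counts, not replaces
    return total

sim_results = [
    {"blkA.fifo_full": 2, "blkA.retry": 0},
    {"blkA.fifo_full": 1, "blkB.ecc_err": 3},
]
aggregate = merge(sim_results)
print(aggregate["blkA.fifo_full"])  # → 3
print(aggregate["blkA.retry"])      # → 0: still an un-hit point
```

The infrastructure questions above (by environment, by block, globally) amount to choosing which subsets of `sim_results` get merged into which stored aggregates.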
It is very easy to generate many gigabytes of coverage data, but information is the goal, not just data. The trick is to have good methods in place for looking at this volume of data; you must find ways to organize it. As part of your coverage planning, determine how you will want to view the data.
How can I most effectively look at my coverage data? Without a good plan here, you will feel like you are standing at the base of a waterfall. The easiest question to answer is a report of only un-hit coverage, yet even that can be challenging. You must break coverage into buckets: by team or block, or by functionality (errors, debug, mode, etc.). As program execution continues, you might want to see only the coverage for features that should be implemented for the current milestone. Specific modules might be instantiated multiple times: common blocks used across multiple chips, or multiple instances of an interface (e.g., two memory controllers). Do you want to see coverage for the module only (merging coverage from multiple instances), or do you need to know how well you are exercising each instance (each memory controller)? With all the coverage data being generated, which combinations of the data are interesting? Merging data from different simulation environments? Merging data over time: what new coverage am I hitting this week that I didn’t hit last week, and what coverage was I hitting last week that I am not hitting this week? Merging data over multiple model releases? Sometimes you need to cross different views into a new combined view: you might be interested in merged module-instance coverage, but some instances will need their coverage filtered due to constrained functionality.
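Two of the views described above, the merged-instance view and the week-over-week diff, reduce to simple set operations. A sketch with invented instance and point names:

```python
# Illustrative sketch: per-instance vs merged module coverage for a
# module instantiated twice, plus a week-over-week diff. Names invented.

instance_hits = {
    "chip.mc0": {"refresh", "page_hit"},
    "chip.mc1": {"refresh"},            # second memory controller lags
}

# Merged module view: union across instances. Hides that mc1 never
# saw page_hit, which the per-instance view above still shows.
module_view = set().union(*instance_hits.values())

# Week-over-week diff on the merged view.
last_week = {"refresh"}
newly_hit = module_view - last_week   # coverage gained this week
lost = last_week - module_view        # coverage we had but no longer hit

print(sorted(module_view))  # ['page_hit', 'refresh']
print(newly_hit)            # {'page_hit'}
print(lost)                 # set()
```

The merged view answers "has the module ever been exercised this way"; the per-instance map answers "is each memory controller being exercised"; both questions need to be asked.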
Views: per verification environment, useful for identifying which tests are more or less effective and how effective each environment is. We also use views to help determine TR readiness, focusing on major blocks (including common blocks) and complete chips, with a combination of instance-specific and merged module coverage at the block level. Filtering: we utilize a custom filtering infrastructure to provide enhanced views; for example, some common functionality may be constrained in certain instances, making those FCPs unreachable. Aggregation: we aggregate across model releases using a sliding window, always over the last N releases; historical data is kept for a limited period of time and then deleted. Automatically generated metrics (% coverage) are provided for each team and for full chips, and are generated and tracked throughout the project. However, we are not driven by the metrics; we use them as a tool, and our management does not make rash decisions based solely on coverage metrics.
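The sliding-window aggregation can be sketched in a few lines (the window size, point names, and per-release data are invented; the mechanism is the point): each model release contributes its hit counts, old releases fall out of the window automatically, and the % coverage metric is computed over whatever the window currently holds.

```python
from collections import Counter, deque

# Illustrative sketch: aggregate coverage over a sliding window of the
# last N model releases and compute a % coverage metric. N is invented.

N = 3
window = deque(maxlen=N)  # releases older than N fall out automatically

def add_release(hits_for_release):
    window.append(Counter(hits_for_release))

def percent_covered(all_points):
    merged = Counter()
    for release_counts in window:
        merged.update(release_counts)
    hit = sum(1 for p in all_points if merged[p] > 0)
    return 100.0 * hit / len(all_points)

points = ["p1", "p2", "p3", "p4"]
add_release({"p1": 5})
add_release({"p2": 1})
add_release({"p3": 2})
add_release({"p4": 1})          # window slides: the p1 release drops out
print(percent_covered(points))  # → 75.0: p1 no longer in the window
```

This is why a point can "go un-hit" without any test regressing: the release that hit it simply aged out of the window, which is exactly the week-over-week signal worth investigating.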
Really understand the un-hit coverage and work to find ways to fill those coverage holes; that could mean tweaking knob controls or writing directed tests to target specific functionality. Look for difficult-to-hit coverage and determine whether more focused tests are needed to hit those areas of the coverage model more often. Metrics: you need to understand the completeness of the design. Early on, track metrics to get some trends, but be cautious about hard-and-fast project-wide coverage goals. Factors affecting coverage include the completeness of the design, the completeness of the coverage space implementation, and the amount of testing/simulation time focused on different functionality. We lean toward having each team set individual goals based on its specific execution path.
For the SX1000 and SX2000 coverage models, we utilized assertion and functional coverage. We are currently expanding our coverage infrastructure into transaction coverage, utilizing a custom backend system to extract coverage from vendor transaction databases.
Here are some books I have read and would recommend; other books on SystemVerilog assertions are beginning to come out. You can also contact me or Rob Porter if you have any specific questions on how to most effectively utilize coverage on your projects.