
      Verification Horizons

      Portable Stimulus Modeling in a High-Level Synthesis User’s Verification Flow

      by Mike Andrews and Mike Fingeroff, Mentor, A Siemens Business

      Portable Stimulus has become quite the buzzword in the verification community over the last year or two, but like most ‘new’ concepts it has evolved from already established tools and methodologies. For example, sharing a common stimulus model between different levels of design abstraction has been possible for many years with graph-based stimulus automation tools like Questa® inFact. High-Level Synthesis (HLS), which synthesizes SystemC/C++ to RTL, has also been available for many years, with most users doing functional verification at the C level using a mixture of home-grown environments and directed C tests. Now that HLS can handle very large hierarchical designs, however, there is a growing need for a verification methodology that provides high-performance, production-worthy constrained-random stimulus for SystemC/C++, so that coverage closure can be achieved at the C level and that exact stimulus can then be reproduced to test the synthesized RTL with confidence. This article describes a methodology in which a stimulus model can be defined (and refined) to help reach 100% code coverage of the C++ HLS DUT, and then reused in a SystemVerilog or UVM testbench with the synthesized RTL. Given a truly common model, it is also possible to maintain random stability between the two environments, allowing some issues to be found in one domain and then debugged in the other.

      INTRODUCTION

      A little over a decade ago, ESL (Electronic System-Level) methodologies were all the rage, and a number of language options promised to raise the abstraction level for both design and verification, with C/C++, SystemC and SystemVerilog being the dominant ones. While C/SystemC are the most prevalent languages for abstract hardware and system modeling, SystemVerilog has standardized the features needed for advanced verification, such as constrained-random stimulus and functional coverage.

      At the same time, many users have been looking for more efficient ways to describe stimulus, specifically looking for ways to expand the number of verification scenarios that can be automated from a compact description, and also improve the efficiency in the generation process.

      Questa® inFact has provided just such a capability, with a rule/graph-based approach borrowed from software testing techniques and enhanced for hardware verification. Since this rule-based model is independent of the target language (it has been applied in at least 7 different HVL environments), it has always been a portable stimulus solution.

      HLS users often state that one of the major benefits of moving to C++/SystemC for design is the verification performance, which enables them to run substantially more tests. However, there are no standards in the C environment that match the power of SystemVerilog for advanced verification capabilities, which include, among other things, random/automated stimulus modeling. A Portable Stimulus solution provides this power and capability, and it preserves the investment in creating a stimulus model for C-based simulation environments so that it can be leveraged downstream when the RTL is wrapped in SystemVerilog, and vice-versa.

      THE COMMON STIMULUS MODEL

      The rule-based stimulus model is, as you might expect, created hierarchically from a main top-level rule file, which typically varies very little from a default template, and one or more modular rule segment files. The top-level rule file declares the main rule_graph, giving the graph model a name, and, depending on the code architecture chosen, need not contain anything other than statements to import the rule segment files that define the details of the stimulus to be applied.

      The example in Figure 1 below shows four separate files, two of which – test_data_C.rules and test_data_SV.rules – both define a graph object called test_data_gen. These two top-level rule files correspond to graph components that are language-specific wrappers for the actual stimulus model. In other words, the inFact automated generators will create a C++ class called test_data_C and an SV class called test_data_SV, respectively, and each of these will define a rule graph model test_data_gen.

      Figure 1. Hierarchical Rule Code Architecture

      Both top-level rule files import a common hierarchy of rule segment files that actually define the behavior. Because the rule hierarchy is defined entirely in common files, the compiled graph models behave identically.

      The test_data_C.rules file has an extra construct, an attribute that specifies language-specific requirements for the generated code. In this case, it specifies the code needed to add an include statement to the generated C++ class definition file. The language supports other attributes that can be used to customize the generated HVL files, but these have no effect on the underlying graph model.

      The test_data_gen.rseg file defines the rule for the scenario(s) that the graph can generate, which in this case is simply to loop through the randomization of the contents of the test_data object, as shown in Figure 2 below.

      Figure 2. The test_data_gen Rule Graph

      Note: the scenario rule could include multiple objects, either instances of the same object type, or instances of multiple different types, as well as other graph topology constructs, and this will be described briefly later.

      The test_data object itself, declared as a struct in the inFact rule language, is defined in a separate rule segment file, to allow for modularity and re-use. This struct has additional hierarchy, defining other structs called packedArray0 and packedArray1, which mirror C++ structs defined and used for the DUT stimulus in the C++ testbench.

      This is another key element of the methodology: the rule graph references objects that have the same name and hierarchy as their testbench counterparts, and that use data types which can be mapped to the corresponding C++ and SV types. Because the inFact language allows bit widths to be defined for all variables, the model can target the Mentor algorithmic bit-accurate data types as well as the SystemC data types.

      In this example, the form of the test data object was derived from the C++ testbench around a C model that implements a configurable vector multiply-accumulate. The first step in implementing this methodology, therefore, is to determine which of the DUT inputs are going to be randomized by the graph model and what their bit-widths are, and to collect these into a test data struct or class. In this case, packedArray0 contains an 8-element fixed-size array of 10-bit values, and packedArray1 contains a similar array of 7-bit values. Added to these is a single 4-bit quantity called num_col. The data types used for these structs are the Mentor Algorithmic bit-accurate data types, allowing designs to be modeled with arbitrary precision.

      Although this begins with the C++ testbench, there will also need to be a similar object in the SystemVerilog domain. The SystemVerilog and C++ versions of this object are shown in Figure 3.

      Figure 3. Test Data Types for SV and C++
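      To make the mapping concrete, below is a minimal C++ sketch of such a container built with the Algorithmic C bit-accurate types (ac_int). The field widths follow the description above; the member names, signedness, and header usage are illustrative assumptions rather than the exact declarations from the original testbench.

          #include <ac_int.h>

          // Sketch of the test data container: widths follow the article,
          // member names and signedness are illustrative assumptions.
          struct packedArray0_t {
              ac_int<10, false> data[8];   // 8-element array of 10-bit values
          };

          struct packedArray1_t {
              ac_int<7, false> coeff[8];   // 8-element array of 7-bit coefficient values
          };

          struct test_data {
              packedArray0_t   packedArray0;
              packedArray1_t   packedArray1;
              ac_int<4, false> num_col;    // 4-bit; constrained to 1..8 by the stimulus model
          };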

      The SystemVerilog model, like the inFact model, can contain algebraic constraints, and probably should if it is ever going to be randomized using a traditional SystemVerilog .randomize() call. If the inFact graph is always going to do the randomization, then this is not necessary.

      Note: the single constraint in this simple example limits the values of num_col to the range 1 up to 8, but the inFact language supports all the common constraint operators that are used in SystemVerilog, with some minor syntax differences. As a bonus, for those familiar with SystemVerilog syntax, a utility is available to create or update the inFact graph model from the SystemVerilog one.

      RUNNING C IN A TESTBENCH

      Once the test data object is defined, the integration of the portable stimulus model into the C testbench is quite simple. As mentioned previously, a C++ class is created automatically from the common rule model, and this class has a built-in method that corresponds to an interface defined in the rules language. The test_data_gen.rseg file declares an interface called fill that operates on any instance of the type test_data. This produces a method, task or function in the generated HVL object called ifc_fill, formed simply by prepending ifc_ to the interface name.

      This method, task or function will take an argument which is a handle to the corresponding HVL object of the same name – i.e. the test_data class or struct shown earlier.

      So, the integration mechanism is simply to construct an instance of the class containing the portable stimulus model, and then call its ifc_fill method with a handle to the testbench test_data container. Figure 4 below shows a code excerpt from the C++ testbench, with the creation of a handle to the test_data struct – td_h – and a handle to the class containing the inFact model – td_gen_h – with the latter’s constructor call defining the instance name for inFact to use internally. This inFact instance name is important, as will be discussed later.

      Figure 4. Code Snippet from C++ Testbench

      Inside a for loop in the C++ test, the call to the ifc_fill method can be seen, followed by the assignment of the contents of the td_h struct instance to the local variables that will be applied to the C function that is the DUT in this bench.
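      As a rough sketch of that integration pattern, the C++ test might look like the following. The generated header name, constructor signature and DUT call (vector_mac) are assumptions for illustration; the ifc_fill call and the instance-name argument follow the description above.

          #include "test_data.h"     // testbench struct mirroring Figure 3 (header name assumed)
          #include "test_data_C.h"   // wrapper class generated by inFact (header name assumed)

          void run_c_tests(int num_tests) {
              test_data*   td_h     = new test_data();
              // The constructor argument is the instance name inFact uses internally
              // (the exact signature is an assumption).
              test_data_C* td_gen_h = new test_data_C("td_gen_i");

              for (int i = 0; i < num_tests; i++) {
                  // Randomize the contents of the test_data container via the
                  // generated 'fill' interface method.
                  td_gen_h->ifc_fill(td_h);

                  // Copy the randomized fields to the locals applied to the C++ DUT,
                  // e.g. (hypothetical call):
                  // vector_mac(td_h->packedArray0, td_h->packedArray1, td_h->num_col);
              }

              delete td_gen_h;
              delete td_h;
          }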

      This architecture is not really any different from using a SystemVerilog random class or sequence item and .randomize(), or a SystemC/SCV class with its ‘next’ method. The only difference is that the model doing the randomizing is an inFact graph model.

      At this stage the value that the inFact portable stimulus model is adding is the ability to randomize several numeric values, while obeying any algebraic constraints that may be defined on these values, or their relationships.

      CONSIDERING COVERAGE

      An additional value of the inFact model is that there is another type of input that can be overlaid on the stimulus model, which is termed a coverage strategy. This strategy can be considered somewhat analogous to a SystemVerilog covergroup, in that it defines the variables of interest, desired bins of these values, as well as crosses of these variables. The difference is that this is an input to the randomization process that alters the random distribution to efficiently cover the goals in the strategy.

      The coverage metrics being measured in this case are not functional coverage coverpoints/crosses but rather code coverage, which is more common in C/C++ environments (although functional coverage could also be implemented). So the goals defined in the coverage strategy should be, as the name implies, the encoding of a strategy (or strategies) expected to achieve high code coverage, or to target specific areas not included in other strategies.

      Since the DUT in this example – the multiplier – is quite simple, a fairly simple strategy may suffice. The inFact tool set includes utilities that can create coverage strategies from a variety of inputs, including automated strategies of pre-defined types, custom strategies defined using a CSV file or spreadsheet, and also a graphical editor. In this example, an automated strategy can be used, which targets each stimulus variable in isolation, i.e. no crosses. For each variable in the test_data hierarchy (including each array element), the utility will ascertain all the legal values, using an analysis of the constraints, and divide them into a defined number of bins. For this example a total of 128 bins was specified, since 128 bins (2^7) means every value of each 7-bit coeff element in that array gets its own bin. Distinct edge-bins (the individual values at the top and bottom of the range) can be added if desired, and in this case the larger quantities – the 10-bit data values – had single-value bins created at the two extremes.

      As hoped, after running the automated strategy to completion, the code coverage results are very good – see Figure 5 below – hitting 100% (the results from the initial pure random test approach were about 20% lower).

      Figure 5. Code Coverage Results

      Note: Being able to achieve 100% code coverage on the C++ source is essential to being able to easily close coverage on the synthesized RTL from HLS using the same stimulus. This is because debugging C++ coverage issues is far easier than debugging the RTL output from HLS.

      Figure 6. SystemVerilog Testbench Code Excerpt

      PORTABLE STIMULUS WITH RANDOM STABILITY

      While getting high code coverage is nice, the point of this article is to describe how a stimulus model, and one or more accompanying coverage strategies, can be developed for one domain and then re-run in another. A seed for the inFact stimulus model can be defined by a user, or simply output to a file from the original run.

      The SystemVerilog wrapped version of the model can be dropped into an SV testbench to drive the RTL DUT in the same way as the C version, i.e. simply instantiate the SV class object that contains it, and then use its built-in task – ifc_fill – to randomize the contents of the SystemVerilog test_data class, as shown in Figure 6, above.

      In this case, the packed arrays used in the test_data class need to be reformatted to fit the wide reg objects that are the DUT inputs, but that is quite simple, with a concatenation operator – {arrEl[0], ... , arrEl[N]} – used to achieve this. In this example, the state of the coverage strategy can also be queried via another built-in function of the strategy – allCoverageGoalsHaveBeenMet() – and used as a qualifier for generating new inputs or to define a loop exit condition for the test.

      When the SystemVerilog testbench is run, the code coverage produced for the RTL DUT is also high – 97.11% in this case, when running until the coverage strategy was completed – as shown below in Figure 7.

      Figure 7. The RTL Code Coverage Results

      While this example is simple, it does illustrate the re-usability of the common portable stimulus model that the Questa® inFact tool suite provides. Of course, additional tests are always likely to be needed for the RTL version of the DUT to handle the additional behavior added to the synthesized RTL. This is because the HLS process adds structures that do not exist in the untimed C++ source description, such as stall-able interface protocols, control FSMs, and clock and reset logic. However, by closing coverage at 100% on the C++ using an inFact portable stimulus model, we are guaranteed to get the same coverage of the design functionality when running RTL verification. Then it is simply a matter of adding tests to cover the remaining structures added by HLS.

      CREATING MORE COMPLEX SCENARIOS

      Any number of rule graph scenario models can be created in this way and applied in either domain. For example, a new scenario can be created by adding a new rule segment that creates two instances of the test data object – td1 and td2 – and uses them with the same fill interface, in series, in the rule. This allows the creation of a coverage strategy that achieves transition coverage of one of the fields in test_data, e.g. the num_col variable. Figure 8 shows this new rule graph and the selection of the num_col variables in td1 and td2 as the fields to target for cross coverage.

      Figure 8. Scenario with Two Test Data Instances

      SUMMARY

      The existence of portable stimulus solutions can help bring advanced verification capabilities to a C-based high-level verification environment, and also allows the investment in the stimulus models and coverage information to be reused at other levels of abstraction. High-level synthesis users stand to benefit in particular, especially if the stimulus created can be mirrored in both environments via seed-based random stability, since they are more familiar with the C source of the design and would find it easier to develop a comprehensive set of stimulus models at that level. For the HLS user, portable stimulus provides a standards-based methodology to predictably and quickly close coverage from C to RTL.
