
Update eval and expect semantics #15

Merged
leonardt merged 1 commit into master from expect-semantics on Jul 25, 2018

Conversation

leonardt
Owner

This now matches verilator's semantics, where `expect` is equivalent to
asserting a peek. So, the user must now call `eval` before `expect` if
the expected value reflects newly poked inputs. Compare this to before,
where `expect` meant "this is the expected value when the next `eval` is
called".
@leonardt requested a review from rsetaluri on July 25, 2018 at 03:36
@coveralls

Pull Request Test Coverage Report for Build 88

  • 10 of 11 (90.91%) changed or added relevant lines in 2 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage increased (+0.7%) to 91.914%

Changes missing coverage:

  File              Covered Lines   Changed/Added Lines   %
  fault/tester.py   9               10                    90.0%

Totals:

  Change from base Build 86: +0.7%
  Covered Lines: 341
  Relevant Lines: 371

💛 - Coveralls

@rsetaluri (Collaborator) left a comment


I'm wondering if we want to change the mechanics of fault.Tester a bit. It seems like the user of Tester is thinking in terms of a standard simulator (verilator, Python simulator) and expects those semantics. We then roll that up into test vectors, and each target unrolls it again into sets, evals, and gets. What if, instead of creating test vectors, we just recorded the sequence of actions? The fundamental representation in the tester would then be List<Action>, where an action is either Set(Wire, Value), Eval(), or Check(Wire, Value). Codegen would then be very simple; the only remaining wrinkle would be some special actions for clocks. Let's talk more in person.

Good to merge for now.
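
A minimal sketch of that action-based representation, assuming plain Python dataclasses; the class and method names (`Set`, `Eval`, `Check`, `ActionTester`) are hypothetical and only illustrate the idea, not the eventual implementation:

```python
from dataclasses import dataclass
from typing import Any, List, Union

# Hypothetical action types mirroring the names suggested above.
@dataclass
class Set:
    wire: Any
    value: Any

@dataclass
class Eval:
    pass

@dataclass
class Check:
    wire: Any
    value: Any

Action = Union[Set, Eval, Check]

class ActionTester:
    """Records the user's calls as a flat list of actions; each backend
    (verilator, Python simulator) would later walk this list and emit its
    own code or simulator calls."""

    def __init__(self):
        self.actions: List[Action] = []

    def poke(self, wire, value):
        self.actions.append(Set(wire, value))

    def eval(self):
        self.actions.append(Eval())

    def expect(self, wire, value):
        self.actions.append(Check(wire, value))
```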

@leonardt
Owner Author

That makes sense, and I agree that would make the semantics more apparent to the user. One issue right now is that when an assertion fails, it doesn't map back to the line where `expect` was called. I think if we explicitly have a notion that users are constructing test sequences, which are then passed to the simulator, understanding error messages will be more natural. However, we should still try to map errors back to the original line that created the test vector.
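
One possible way to do that, sketched under the assumption that actions are recorded in Python as in the earlier sketch; the extra `Check` fields and the `make_check` helper are hypothetical:

```python
import inspect
from dataclasses import dataclass
from typing import Any

@dataclass
class Check:
    wire: Any
    value: Any
    filename: str
    lineno: int

def make_check(wire, value) -> Check:
    # Capture the caller's frame so a failing check can be reported
    # against the user's test file and line, not the generated harness.
    caller = inspect.stack()[1]
    return Check(wire, value, caller.filename, caller.lineno)
```

A failing check could then report something like `"{filename}:{lineno}: expected {value}, got {actual}"` instead of pointing at the generated simulation code.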

@leonardt merged commit ecaf062 into master on Jul 25, 2018
@leonardt deleted the expect-semantics branch on July 25, 2018 at 15:26