
feat: additional validations #10

Open
1 of 9 tasks
mds1 opened this issue Nov 16, 2022 · 3 comments

mds1 (Contributor) commented Nov 16, 2022

Validate that:

  • 1. Non-constant/immutable variables use camelCase
  • 2. Functions in the src and script directories are camelCase
  • 3. Contract names are PascalCase
  • 4. Imports are sorted by forge-std -> test -> script -> src, then alphabetically by path within each group
  • 5. Imports use named import syntax
  • 6. Named import items are sorted alphabetically
  • 7. Only apply script checks to .s.sol files in the script directory (done in Bug fixes, bring back line numbers #9)
  • 8. Check for unused imports
  • 9. In test contracts, all methods that are not test* functions are internal or private
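To make the naming and import-ordering rules above concrete, here is a minimal Python sketch of rules 1, 3, and 4. It is only an illustration of the intended conventions, not scopelint's actual implementation, and all helper names here are hypothetical:

```python
import re

# Hypothetical helpers illustrating the rules above; not scopelint's code.
CAMEL_CASE = re.compile(r"^[a-z][a-zA-Z0-9]*$")
PASCAL_CASE = re.compile(r"^[A-Z][a-zA-Z0-9]*$")

def is_valid_variable_name(name: str, is_constant_or_immutable: bool) -> bool:
    """Rule 1: non-constant/immutable variables use camelCase.

    Constants/immutables are exempt here (conventionally ALL_CAPS)."""
    return is_constant_or_immutable or bool(CAMEL_CASE.match(name))

def is_valid_contract_name(name: str) -> bool:
    """Rule 3: contract names use PascalCase."""
    return bool(PASCAL_CASE.match(name))

# Rule 4: imports are grouped forge-std -> test -> script -> src,
# then sorted alphabetically by path within each group.
GROUP_ORDER = ["forge-std", "test", "script", "src"]

def import_sort_key(path: str) -> tuple:
    for i, group in enumerate(GROUP_ORDER):
        if path.startswith(group + "/"):
            return (i, path)
    return (len(GROUP_ORDER), path)  # unknown prefixes sort last

imports = ["src/Counter.sol", "forge-std/Test.sol", "test/Base.t.sol"]
print(sorted(imports, key=import_sort_key))
# → ['forge-std/Test.sol', 'test/Base.t.sol', 'src/Counter.sol']
```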
radeksvarz (Contributor) commented May 24, 2023

Besides functions, I would add the following to the spec tree:

  • errors in the spec tree view
  • events in the spec tree view

mds1 (Contributor, Author) commented May 26, 2023

Hmm, interesting, so the spec would include when certain errors/events are expected to be thrown/emitted? Do you have a sample UX in mind for this? I've been thinking a bit about how to make the spec-structuring more flexible and expandable, and will create an issue with some ideas next week, so I'm curious to hear more about how you see errors and events fitting into scopelint spec.

radeksvarz (Contributor) commented

Indeed, this is better to be thought through.

A)
One perspective is that errors and events are part of the interface alongside functions, so a spec has requirements on them as well. When writing unit tests here I typically have to answer: "Do I emit the event / error correctly, with the correct arguments?" This is in line with adding them to the tree, and is easier to implement in scopelint now.

B)
The other perspective is that errors and events are a product of functions, so their requirements are covered within the requirements on functions.

Testing for the latter case instead answers the question: "Do I cover all the situations when this error / event shall be emitted?" Or is any situation forgotten?

That could be visualized by a different output: a matrix / table of functions vs errors + events.
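One rough way to picture that matrix idea in Python (all function and error/event names below are hypothetical examples, not from any real contract or from scopelint):

```python
# Hypothetical coverage data: which errors/events each function can produce.
coverage = {
    "transfer": {"Transfer", "InsufficientBalance"},
    "approve": {"Approval"},
}

def render_matrix(coverage: dict) -> str:
    """Render a functions-vs-errors+events table, 'x' where covered."""
    signals = sorted({s for produced in coverage.values() for s in produced})
    width = max(len(s) for s in signals) + 2
    lines = ["function".ljust(12) + "".join(s.ljust(width) for s in signals)]
    for fn, produced in coverage.items():
        cells = "".join(("x" if s in produced else "-").ljust(width) for s in signals)
        lines.append(fn.ljust(12) + cells)
    return "\n".join(lines)

print(render_matrix(coverage))
```

Each row is a function, each column an error or event, and an "x" marks a pairing the tests would be expected to cover.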
