
LimeChain Application pvm_evm_compat_check.md #2564


Open · wants to merge 2 commits into master

Conversation

marius080

Project Abstract

This project attempts to identify inconsistencies in the PVM runtime that would manifest themselves when running arbitrary EVM Solidity contracts.

Our goal is to deploy common use-case contracts onto a testnet (Paseo) and run, on-chain, the Chai tests that would usually run locally on Hardhat.

The results of the automated tests are to be collated and eventually published.

Grant level

  • Level 1: Up to $10,000, 2 approvals
  • Level 2: Up to $30,000, 3 approvals
  • Level 3: Unlimited, 5 approvals (for >$100k: Web3 Foundation Council approval)

Application Checklist

  • The application template has been copied and aptly renamed (project_name.md).
  • I have read the application guidelines.
  • Payment details have been provided (Polkadot AssetHub (USDC & DOT) address in the application and bank details via email, if applicable).
  • I understand that an agreed upon percentage of each milestone will be paid in vested DOT, to the Polkadot address listed in the application.
  • I am aware that, in order to receive a grant, I (and the entity I represent) have to successfully complete a KYC/KYB check.
  • The software delivered for this grant will be released under an open-source license specified in the application.
  • The initial PR contains only one commit (squash and force-push if needed).
  • The grant will only be announced once the first milestone has been accepted (see the announcement guidelines).
  • I prefer the discussion of this application to take place in a private Element/Matrix channel. My username is: @_______:matrix.org (change the homeserver if you use a different one)

@github-actions github-actions bot added the admin-review This application requires a review from an admin. label Jun 4, 2025
Contributor

github-actions bot commented Jun 4, 2025

CLA Assistant Lite bot: All contributors have signed the CLA ✍️ ✅

@Noc2 Noc2 requested a review from alexdimes June 4, 2025 10:34
@marius080
Author

I have read and hereby sign the Contributor License Agreement.

@Noc2 Noc2 requested a review from semuelle June 10, 2025 08:41
@semuelle semuelle self-assigned this Jun 10, 2025
semuelle
semuelle previously approved these changes Jun 11, 2025
Member

@semuelle semuelle left a comment


Thanks for the application, @marius080. Sounds great. One thing that seems to be as much a blocker as the VM compatibility is the surrounding architecture, like RPC endpoints. Is this something we could explore in a possible follow-up?

@semuelle semuelle added ready for review The project is ready to be reviewed by the committee members. and removed admin-review This application requires a review from an admin. labels Jun 11, 2025
Fixed company link

Co-authored-by: Sebastian Müller <sebastian@web3.foundation>
@github-actions github-actions bot added the admin-review This application requires a review from an admin. label Jun 11, 2025
@marius080
Author

marius080 commented Jun 11, 2025

@semuelle

One thing that seems to be as much a blocker as the VM compatibility is the surrounding architecture, like RPC endpoints. Is this something we could explore in a possible follow-up?

Sounds good! We were trying to keep things limited to the VM as part of this application, and could definitely apply for a separate one relating to the RPC endpoint issues.
To pick your brain a bit, are you thinking of having a similar util for RPC compat, or are you interested in looking at developing the rest of the endpoints that would achieve parity?

Member

@semuelle semuelle left a comment


To pick your brain a bit, are you thinking of having a similar util for RPC compat, or are you interested in looking at developing the rest of the endpoints that would achieve parity?

Just testing. Let's see, perhaps by the time we're closer to launch, there will be a test suite. Happy to discuss this then.

@semuelle semuelle removed the admin-review This application requires a review from an admin. label Jun 11, 2025

@alexdimes alexdimes left a comment


General comments:

  • I suggest avoiding the term "Research" => I see this more as a rapid experimentation and technical support grant for checking compatibility with specific smart contract implementations that could be considered standards, dependencies for common use cases, or strict requirements of commonly (re)used open-source codebases (e.g. Uniswap, Aave, Pendle, Compound, as well as other commonly used implementations).

  • I suggest talking to Parity core smart contract team to ensure qualification of a successful scope. I can facilitate that.

Questions for Limechain

  • Are there any strict requirements (e.g. tooling) for you to perform these smart contract deployments/experimentation?

  • Do your scope or deliverables for the rapid experimentation include suggestions for Ethereum-compatible tooling, and/or a specific list of improvements needed to reach the desired level of compatibility?

  • Can your scope and/or deliverables involve "scoring" PVM's Ethereum compatibility, comparing the current standing with/versus the future (recommended) state according to your results/findings?

  • Can your scope involve a reactive approach to testing smart contracts from the Hub's (PVM) ongoing sales pipeline from BD teams (e.g. if we're talking to Uniswap, or Uniswap fork/deployment implementation partners, try to deploy their specific open-source limit order deployments, custom order types, e.g. from Uni-v3, etc., to check for compatibility of those)?

  • Can your scope and/or deliverables involve continuous discussions (back-and-forth) with Parity's (or Velocity Labs / Papermoon teams, as BD and technical support) smart contract team(s), and be improved/adapted accordingly?

@TorstenStueber
Copy link
Contributor

Thanks for this proposal. Generally, any tooling that automatically checks compatibility levels and reports issues to our tracker is very useful.

I have a couple of more technical questions that are not clear to me from the proposal:

  • The project overview describes "we have chosen some of the more common use cases and contracts, focusing on open-source contracts available for the Ethereum Ecosystem". What are these use cases and contracts? Are these some well-known protocols? Is the list already complete, or is compiling it part of the first milestone?
  • Furthermore, the proposal claims "run the Chai Tests that would usually run locally on Hardhat". What are these Chai tests and who will create them? Is this a manual process, or are they derived from an existing Hardhat test suite? Many projects use Foundry instead of Hardhat; what about those?
  • Why is the intention to deploy these contracts on Paseo and execute tests there instead of on a local chain, which can run much faster in CI?
  • Point 4 of the roadmap reads "Automatically creates or updates issues based on test outcomes". Can you be more specific about how that works – I really want to avoid the system accidentally creating hundreds of issues per day and making the issue tracker unusable. Manually created issues might be enough initially.

@marius080
Author

Hi @alexdimes
Thanks for taking the time to review our proposal and give your suggestions. We appreciate that this can seem like an experimental project, and I would assume that in the future we will be more careful to follow convention. The reason this proposal was considered a "Research" proposal on our side is that we wanted to research the maturity level of the PVM vis-à-vis the EVM.

We'd be very happy to reach out and try to ensure that the scope falls within the expectations of the Parity team.

Regarding your questions:

Are there any strict requirements (e.g. tooling) for you to perform these smart contract deployments/experimentation?

We would primarily like to stick to a "traditional" tech stack; in the EVM world this would be a Hardhat/Node.js setup. We intend to use that along with GitHub Actions in order to run everything in the same environment.
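For illustration (not a committed deliverable), the setup could look roughly like the sketch below. The network name `passetHub`, the RPC URL, and the environment variable names are placeholders we chose for the example, not values from the application.

```typescript
// hardhat.config.ts — illustrative sketch of a Hardhat setup that targets a
// remote testnet endpoint instead of the built-in local network.
// The network name, RPC URL and env vars are placeholders (assumptions).
import { HardhatUserConfig } from "hardhat/config";
import "@nomicfoundation/hardhat-toolbox";

const config: HardhatUserConfig = {
  solidity: "0.8.24",
  networks: {
    passetHub: {
      url: process.env.PASSET_HUB_RPC_URL ?? "http://127.0.0.1:8545",
      accounts: process.env.DEPLOYER_KEY ? [process.env.DEPLOYER_KEY] : [],
    },
  },
};

export default config;
```

A GitHub Actions job would then run something like `npx hardhat test --network passetHub`, with the RPC URL and deployer key supplied as repository secrets.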

Do your scope or deliverables for the rapid experimentation include suggestions for Ethereum-compatible tooling, and/or a specific list of improvements needed to reach the desired level of compatibility?

At this stage, our goal is to evaluate the readiness of the PVM to handle most Solidity contracts. We plan to include OpenZeppelin, Uniswap and Diamond contracts, and to use that as a measure of "where we want to be" rather than just where we are. We are willing to work with Parity to also understand mid-to-long-term goals and figure out which contracts can be included. This could work as a "suggestion" for future improvement.

Can your scope and/or deliverables involve "scoring" PVM's Ethereum compatibility, comparing the current standing with/versus the future (recommended) state according to your results/findings?

All tests have a true/false result. The scoring can be done as X of Y, where X is the number of passing tests and Y is the total number of tests. What I would personally like to see is whether that "score" can be tracked over time, so we can get a feel for the progress both in terms of the pass/fail rate and the total number of tests. I'll work with my team to see how we can do this during implementation.
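As a rough sketch of what that score computation could look like (an assumption for illustration: test results are exported as a JSON array of `{ name, passed }` records, e.g. via a custom Mocha reporter; the file path is hypothetical):

```typescript
// score.ts — illustrative sketch of an "X of Y" compatibility score,
// timestamped so it can be tracked over time. Not part of the deliverables.
import { readFileSync } from "node:fs";

interface TestResult {
  name: string;
  passed: boolean;
}

// Assumed export location of the latest test run (placeholder path).
const results: TestResult[] = JSON.parse(
  readFileSync("reports/latest-results.json", "utf8"),
);

const passing = results.filter((r) => r.passed).length;
const total = results.length;

console.log(
  JSON.stringify({
    date: new Date().toISOString(),
    passing,
    total,
    score: total > 0 ? passing / total : 0,
  }),
);
```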

Can your scope involve a reactive approach to testing smart contracts from the Hub's (PVM) ongoing sales pipeline from BD teams (e.g. if we're talking to Uniswap, or Uniswap fork/deployment implementation partners, try to deploy their specific open-source limit order deployments, custom order types, e.g. from Uni-v3, etc., to check for compatibility of those)?

I would assume that the team responsible for deploying those contracts and tests would also be able to add them to the repository that we will build. They would then be able to run the same type of compatibility check as the rest of the contracts.
Our goal has always been to make this an "open source tool" of sorts, and we can figure out how the permissioning would work together with Parity.

Can your scope and/or deliverables involve continuous discussions (back-and-forth) with Parity's (or Velocity Labs / Papermoon teams, as BD and technical support) smart contract team(s), and be improved/adapted accordingly?

Absolutely! In fact, I would suggest that it's necessary. We would need to communicate with Parity's teams in order to also understand future needs and account for them.

@marius080
Author

Hi @TorstenStueber

Thank you for your comments and questions. Let me try to answer them for you below.

The project overview describes "we have chosen some of the more common use cases and contracts, focusing on open-source contracts available for the Ethereum Ecosystem". What are these use cases and contracts? Are these some well-known protocols? Is the list already complete, or is compiling it part of the first milestone?

We have built a compatibility validation tool in the past, in collaboration with Hedera. In that scenario we were able to create a large list of use cases and a large repository of open-source contracts that would qualify as common. These cover use cases for OpenZeppelin base contracts, upgradeability, diamond patterns, swaps, conditional tokens, and more specific opcode tests. We will use that experience, as well as collaboration with Parity, in order to set priorities and clearly define a list at Milestone 1.
So these are well-known contracts, the list is not currently complete, and we will be compiling it in collaboration with the stakeholders.

Furthermore, the proposal claims "run the Chai Tests that would usually run locally on Hardhat". What are these Chai tests and who will create them? Is this a manual process, or are they derived from an existing Hardhat test suite? Many projects use Foundry instead of Hardhat; what about those?

The tests are currently derived from previous test suites. We have not accounted for Foundry tests at this stage, as the mechanism for testing would be completely different. Our approach is JS execution of on-chain functions and validation of the outcomes.
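To give a feel for the shape of these tests (purely illustrative: the contract name `ERC20Mock` and its constructor arguments are placeholders, not from the proposal), each check is a standard Chai/Hardhat test that deploys a contract via ethers and asserts on the on-chain outcome:

```typescript
// Sketch of a compatibility test: deploy a contract and validate on-chain state.
import { expect } from "chai";
import { ethers } from "hardhat";

describe("ERC20 compatibility", function () {
  it("mints the initial supply to the deployer", async function () {
    this.timeout(300_000); // generous timeout when running against a live testnet

    const [deployer] = await ethers.getSigners();
    const Token = await ethers.getContractFactory("ERC20Mock");
    const token = await Token.deploy("Mock", "MCK", 1_000_000n);
    await token.waitForDeployment();

    // Validate the actual on-chain outcome rather than local simulation state.
    expect(await token.balanceOf(deployer.address)).to.equal(1_000_000n);
  });
});
```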

Why is the intention to deploy these contracts on Paseo and execute tests there instead of on a local chain, which can run much faster in CI?

The goal of this project is to take an "as close to real life" approach to the testing. While the speed of a local chain could definitely boost the activity rate, we do not expect these tests to be run with high frequency. Running against a live testnet will also help establish whether there are issues in live environments.

Point 4 of the roadmap reads "Automatically creates or updates issues based on test outcomes". Can you be more specific about how that works – I really want to avoid the system accidentally creating hundreds of issues per day and making the issue tracker unusable. Manually created issues might be enough initially.

We intend to build hooks that would automatically publish issues to your tracker. When tests fail, each contract can be identified and a summary can be sent there.
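To address the flooding concern, one possible shape for such a hook (an assumption for illustration, using the GitHub REST API via Octokit; the owner, repo, and label names are placeholders) is to reuse one open issue per failing contract and append run summaries as comments instead of opening new issues on every run:

```typescript
// report.ts — illustrative sketch of a deduplicating issue hook.
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
const owner = "example-org"; // placeholder
const repo = "example-tracker"; // placeholder

export async function reportFailure(contract: string, summary: string) {
  const title = `[pvm-evm-compat] ${contract}: failing tests`;

  // Look for an existing open issue with the same title before creating one.
  // (Pagination omitted for brevity.)
  const { data: open } = await octokit.rest.issues.listForRepo({
    owner,
    repo,
    state: "open",
    labels: "pvm-evm-compat",
  });
  const existing = open.find((issue) => issue.title === title);

  if (existing) {
    // Append the latest run's summary instead of opening a duplicate issue.
    await octokit.rest.issues.createComment({
      owner,
      repo,
      issue_number: existing.number,
      body: summary,
    });
  } else {
    await octokit.rest.issues.create({
      owner,
      repo,
      title,
      body: summary,
      labels: ["pvm-evm-compat"],
    });
  }
}
```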

@TorstenStueber
Contributor

The goal of this project is to have an "as close to real life" approach to the testing. While the speed of a local chain could definitely boost the activity rate, we do not expect these tests to be run with high frequency. It will also help to establish whether there are issues in live environments.

Note that running the OpenZeppelin test suite alone on a public chain like Paseo Asset Hub (or "PassetHub") will take about six hours. For CI, a local network configured to execute blocks as fast as possible is more suitable.

@marius080
Author

@TorstenStueber,
We are aware that these are extensive tests and do take quite a bit of time, and considering that we will also have additional tests, runs will take even longer.
For something that runs periodically, I think that would be fine. We can also segment the tests in order to make them more modular, so that they can be run to target specific changes.
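One way the segmentation could work (a sketch under our assumptions, not a committed design) is to drive Mocha's grep filter from an environment variable in the Hardhat config, so a single suite, e.g. only the ERC-20 tests, can be run in isolation:

```typescript
// hardhat.config.ts (excerpt) — illustrative: run a targeted subset of suites
// with e.g. `TEST_SUITE="ERC20" npx hardhat test`.
import { HardhatUserConfig } from "hardhat/config";

const config: HardhatUserConfig = {
  solidity: "0.8.24",
  mocha: {
    grep: process.env.TEST_SUITE, // undefined => run everything
    timeout: 600_000, // generous timeout for tests executed against a live testnet
  },
};

export default config;
```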

@semuelle
Member

Put on hold as discussed.

@semuelle semuelle added the on hold There is an external blocker, such as another grant in progress. label Jun 26, 2025
@github-actions github-actions bot added the stale label Jul 11, 2025