Universal specs for JSON-RPC API methods #217
Existing RPC test suites include ethereum/rpc-tests and bas-vk/hive/tree/rpc. The former is based on mocha.js and the latter on go-ethereum's
Here's an example JSON schema specification, for eth_getBlockTransactionCountByHash, annotated for explanation:
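The original example is not reproduced in this extract; the sketch below is illustrative only. The wrapper field names (`name`, `params`, `result`) and the exact patterns are assumptions, not the format actually proposed — annotations are carried in `description` fields since JSON has no comments:

```json
{
  "name": "eth_getBlockTransactionCountByHash",
  "summary": "Returns the number of transactions in the block with the given hash.",
  "params": [
    {
      "name": "blockHash",
      "required": true,
      "schema": {
        "type": "string",
        "pattern": "^0x[0-9a-f]{64}$",
        "description": "32-byte block hash, hex-encoded with 0x prefix"
      }
    }
  ],
  "result": {
    "schema": {
      "oneOf": [
        {
          "type": "string",
          "pattern": "^0x[0-9a-f]+$",
          "description": "transaction count as a hex quantity"
        },
        {
          "type": "null",
          "description": "returned when no block with the given hash is known"
        }
      ]
    }
  }
}
```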
The JSON schema specifications are paired with test cases. Here's an example file with test cases for eth_getBlockTransactionCountByHash:
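Again, the actual file is not reproduced here; the sketch below only illustrates the shape such a file might take, with made-up hash and count values and an assumed `tests` wrapper:

```json
{
  "method": "eth_getBlockTransactionCountByHash",
  "tests": [
    {
      "name": "returns the transaction count for a known block",
      "request": {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_getBlockTransactionCountByHash",
        "params": ["0xb903239f8543d04b5dc1ba6579132b143087c68db1b2168786408fcbce568238"]
      },
      "response": { "jsonrpc": "2.0", "id": 1, "result": "0xa" }
    },
    {
      "name": "returns null for an unknown block hash",
      "request": {
        "jsonrpc": "2.0",
        "id": 2,
        "method": "eth_getBlockTransactionCountByHash",
        "params": ["0x00000000000000000000000000000000000000000000000000000000000000ff"]
      },
      "response": { "jsonrpc": "2.0", "id": 2, "result": null }
    }
  ]
}
```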
One remaining task is to specify a test format for the pub/sub methods. One possibility is a
Test suite maintenance
An automated maintenance process for the RPC test suite will be described in a forthcoming Meta-EIP. The current plan is to maintain updates to the RPC test suite through a voting contract, with one-developer one-vote where developer identities are verified using devcon2 tokens. (There is a separate plan to expand the Devcon2 tokens into a web of trust, in order to enable participation by any and all Ethereum developers/stakeholders). Tentatively named EipSignal, this automated governance process is somewhat in the spirit of the Yellow Paper Committee; however, the outcome of EipSignal votes will only mandate changes to the RPC API specification and test suite as described above, and will NOT govern "core/consensus" protocol changes. The work-in-progress on EipSignal can be seen here: MetaMask/IPFS-Ethereum-Hackathon#14.
Is it possible to include strict verifications, such as a subset of regular expressions on some data fields? Thinking here about validating that an actual address was returned for example by
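JSON Schema already supports this through the `pattern` keyword, so a 20-byte address field could be constrained with something like the following sketch:

```json
{
  "type": "string",
  "pattern": "^0x[0-9a-fA-F]{40}$"
}
```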
Have you seen cpp-ethereum's
This is very exciting to me. Developers need a place they can look up stable methods, and client devs need a smooth way to evolve their interfaces.
Other clients that it would be nice to add to the tested columns:
And I'm sure others, too. Once we have a strong singular test suite like this, building new RPC providers should become a much more stable process.
Starting with the RPCs makes sense, and I hope that the patterns which are built can be extended to broader specifications over time. We've had discussions along very similar lines for Enterprise using Boardroom.
See also http://html5test.com.
See "Test suite maintenance" at the very top of this issue.
The existing tests are incomplete placeholders. The plan is to expand/refine those to become the missing client-neutral executable specification.
An automated maintenance process for the RPC test suite will be described in a forthcoming Meta-EIP. The current plan is to maintain updates to the RPC test suite through a voting contract, with one-developer one-vote where developer identities are verified using devcon2 tokens. (There is a separate plan to expand the Devcon2 tokens into a web of trust, in order to enable participation by any and all Ethereum developers/stakeholders). Tentatively named EipSignal, this automated governance process is somewhat in the spirit of the Yellow Paper Committee; however, the outcome of EipSignal votes will only mandate changes to the RPC API specification and test suite as described above, and will NOT govern "core/consensus" protocol changes. The work-in-progress on EipSignal can seen here: MetaMask/IPFS-Ethereum-Hackathon#14.
@bobsummerwill beat me to it, but I'll still reword:
There is no current spec to measure against, which is why this EIP exists.
The current plan is to manage this test suite with a form of governance, at which point non-geth-compliant specs may emerge, or even become dominant.
Until then, the linked test suite should be treated as a template for specs, not as a proposed spec itself.
A test suite isn't a specification, it just tests conformity to some specification. The schema described here is not sufficient to specify certain primitives which all methods rely upon. This is exactly how you get into the situation of strange corner cases that each client implements differently while still being "to-spec".
The places where I see the current documentation lacking the most:
I'm glad that this EIP is working on point 3. But we need strong definitions of the primitives used to build the RPC methods rather than specific test cases which will just lead to overfitting.
It's misleading to label one specific implementation of ambiguous methods as "correct".
I think we're actually starting to stray beyond the scope of this issue, but there isn't a better place for it. This issue is really about an RPC test format. We're beginning to talk about the governance process of managing those tests.
I understand the initial alarm, it could sound like someone is trying to hijack what "correct" means, but that's not our intention at all. We're not trying to shame Parity over a specific difference from Geth, we're trying to create a tool for all client devs to better coordinate their implementations.
The (current, in flux) plan is that a given suite of tests would accompany an EIP spec, not replace it. For example, for a given EIP, a variety of accompanying test suites could be nominated, and the most popular one(s?) could be displayed on the compatibility table.
Since the most popular test suite for a given EIP would be displayed, the test would not itself be an endorsement of that EIP spec, it would merely be a representation of it, and a way of automatically measuring compliance to it, which better represents the sort of ad-hoc standards process we actually have, where endorsement is almost solely represented through client implementation.
It certainly would not be a spec itself, since the table could include distinct, incompatible implementations of the same method, but it would provide a singular place to view and compare those incompatibilities. This table would not be about labeling any one implementation as correct, but instead, be a way of highlighting differences of implementation, for the benefit of both dapp and client developers.
I don't think a test suite has to encourage over-fitting, either. In the test format that @cdetrio has proposed, specific values of the response can be used to validate results, so it does not need to blindly enforce bit-conformity.
In the case of
I'm very interested in seeing solutions to 1 & 2 as well, and while a test suite may not be a complete solution for them, I think it may still be a useful tool for even those issues, when combined with EIP specifications.
@rphmeier Good points, I agree that pending state and error messages are two important areas to address.
EIP #136 addresses RPC error messages, and I think the schema here is sufficient to incorporate those error messages.
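For reference, JSON-RPC 2.0 error responses have a fixed shape, so a test case could assert on the error object's `code` and `message` directly (the specific values below are illustrative, not from EIP #136):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32602,
    "message": "Invalid params"
  }
}
```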
Pending state is trickier. Do you have any suggestions for how to define the pending state as a primitive?
I'm still unsure about this, but one thought is that perhaps an additional testing-specific RPC method would be useful, such as a
On the third point, "lack of consensus between client devs over changes", that will be addressed by a separate EIP/meta-EIP, which will propose a governance/maintenance process. However, I wouldn't mind bootstrapping it with an initial test suite of "compatible/correct" behavior, determined informally by pre-existing standards and rough consensus. The current examples were generated against geth (two incompatible geth branches, in fact), but they are certainly only meant as examples.
The discussion here should focus on the spec or test schema/format. The term "spec" is used loosely, in the sense of an API spec, where the aim is to enable black-box testing of RPC method behavior. We want to settle on a format that is sufficient to specify behavior (at least implicitly if not explicitly) that is currently ambiguous, and test compliance.
What I see as a major hole in these tests is that they don't appear to have a mechanism for setting up starting state. What these tests really should be testing is along the lines of:
This means tests necessarily need to be more complex than a simple "given request, expect result". One way to achieve this (pseudocode only):
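For instance, setup could be expressed as a sequence of ordinary RPC calls to run before the assertion. This is pseudocode only: `evm_mine` is a testrpc-style convenience method, the raw transaction is a truncated placeholder, and the field names are made up for illustration:

```json
{
  "name": "eth_getBalance reflects a mined transfer",
  "setup": [
    { "method": "eth_sendRawTransaction", "params": ["0xf86c..."] },
    { "method": "evm_mine", "params": [] }
  ],
  "request": {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_getBalance",
    "params": ["0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "latest"]
  },
  "expect": { "result": "0xde0b6b3a7640000" }
}
```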
Alternatively, the test could explicitly setup expected state. For example:
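Something like the genesis `alloc` format could serve here. Again pseudocode: the `genesis` and `expect` wrapper fields are invented for illustration, and the balance is 1 ether in wei:

```json
{
  "name": "eth_getBalance against explicit state",
  "genesis": {
    "alloc": {
      "0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": { "balance": "0xde0b6b3a7640000" }
    }
  },
  "request": {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "eth_getBalance",
    "params": ["0xaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", "latest"]
  },
  "expect": { "result": "0xde0b6b3a7640000" }
}
```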
Then the test would call one or more RPC methods and assert that it gets back a very well-defined response.
The key here is that all tests should setup the current full chain state, then call a particular RPC method and assert on the results. Building a test suite like this is expensive, but is also valuable for multi-implementation compatibility. Note that this test doesn't care how an implementation achieves the result (implementation details), but it is well defined that given a particular state and an RPC call, an expected outcome is achieved.
As to how to codify "setup", I'll leave that up for discussion but I do think it is critical that COMPLETE setup is included in the test. The test should assume nothing about the current state of the world when it executes.
Personally, I'm a fan of the first style because it is generally easier to write tests when the test defines the path to the expected state, rather than requiring that the thing under test has the ability to set the current state. The first mechanism also allows true black box testing, where the test can be run against an endpoint without knowing its implementation. The second mechanism requires that the test have a back-door mechanism to setup current state.
I just now stumbled upon this while searching for something like Swagger to define and generate documentation for our JSON-RPC APIs. Unfortunately, Swagger isn't built for JSON-RPC.
@cdetrio Have you found any good editors to write/generate the specifications above? Or did you do it by hand?
I'd really like to have not only a specification, but also an editor which makes writing/validating the specification intuitive and simple. If it involves writing files by hand, then running some npm script to validate and generate docs based on our custom specification file, then I'm afraid it will just bitrot after a while.
@holiman I think I'd like to pick this up on my team. My thought was that we could add these schemas to the
Do it, @pipermerriam. That would be awesome!
When I spoke at the launch event for the EEA back in February 2017, 17 months ago, this was actually one of the areas of protocol specification which I highlighted as being most needed and highest impact - both for public Ethereum and for enterprise scenarios.
The lack of clear and testable JSON-RPC specifications really hurts everyone trying to build on top of Ethereum.
What are PegaSys doing for RPCs in Pantheon, @shahankhatch? Are you just trying to be close to Geth? Or did you do some modal thing like Parity have?
@rphmeier What do you see as the best path towards standardization of RPC specs across the various Ethereum clients, Robert?
As you said last year, there is a balance here to ensure that we don't "over-fit", and to ensure that any automated tests that we build are encoding unambiguous original specifications.
Any thoughts, @gavofyork?