
Review and backport EOS PR 3392 packing/unpacking array-like types #995

Closed
abitmore opened this issue May 30, 2018 · 12 comments

@abitmore (Member) commented May 30, 2018

@pmconrad (Contributor) commented May 30, 2018

We need to be careful when introducing new limits here, to remain compatible with existing code.
I think the limits used by EOS should be safe because they're greater than the current maximum block size.

@abitmore abitmore changed the title Review and backport EOR PR 3392 packing/unpacking array-like types Review and backport EOS PR 3392 packing/unpacking array-like types May 30, 2018

@jmjatlanta (Contributor) commented May 30, 2018

May be related: EOSIO/eos#2424

also EOSIO/eos@50ceb9c

@jmjatlanta (Contributor) commented Jun 18, 2018

Note: This is not necessarily a hardfork change, but it is an area where we need to use extreme caution. See PR bitshares/bitshares-fc#55 for more information.

@jmjatlanta jmjatlanta removed the 6 Protocol label Jun 18, 2018

@abitmore abitmore added this to New -Awaiting Core Team Evaluation in Project Backlog via automation Sep 13, 2018

@jmjatlanta (Contributor) commented Nov 22, 2018

I have taken a step back to re-evaluate, and I am adding notes here as a memory aid. I believe the major intent in EOS was threefold:

  1. To handle the serialization and de-serialization of large data files (i.e. persisted database files)
  2. To make sure size limits are consistent between pack and unpack. If it fit going in, it should fit coming out.
  3. To set limits that are easier to understand, both in code and in documentation.

The part that requires caution is backward compatibility: the methods must interpret incoming bytes exactly as they did before. Unit tests will need to be complete and exhaustive, with the ability to read (and, for testing, create) byte streams that exactly match the results of the previous pack/unpack versions (we still need to figure out how best to accomplish this). Edge cases exist with extremely small objects, overly large objects, and collections with both small and large numbers of elements.

Also, if the limits imposed differ from the previous ones, this change requires hardfork protection.
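Point 2 above (symmetric limits between pack and unpack) can be sketched in a few lines. This is a minimal illustration, not fc's actual serialization code: `MAX_ARRAY_BYTES`, `pack_array`, and `unpack_array` are hypothetical names, and the shared constant stands in for whatever limit is ultimately chosen. The key property is that both paths test against the same bound, so anything that fit going in also fits coming out.

```cpp
#include <cstdint>
#include <cstddef>
#include <stdexcept>
#include <vector>

// Hypothetical shared limit, enforced identically on both paths.
constexpr std::size_t MAX_ARRAY_BYTES = 1024 * 1024; // assumption, not fc's real value

std::vector<uint8_t> pack_array(const std::vector<uint8_t>& data) {
    if (data.size() > MAX_ARRAY_BYTES)
        throw std::length_error("array too large to pack");
    std::vector<uint8_t> out;
    // little-endian 32-bit length prefix, then the payload
    uint32_t len = static_cast<uint32_t>(data.size());
    for (int i = 0; i < 4; ++i)
        out.push_back(static_cast<uint8_t>(len >> (8 * i)));
    out.insert(out.end(), data.begin(), data.end());
    return out;
}

std::vector<uint8_t> unpack_array(const std::vector<uint8_t>& buf) {
    if (buf.size() < 4)
        throw std::runtime_error("truncated length prefix");
    uint32_t len = 0;
    for (int i = 0; i < 4; ++i)
        len |= static_cast<uint32_t>(buf[i]) << (8 * i);
    if (len > MAX_ARRAY_BYTES)              // same bound as pack_array
        throw std::length_error("array too large to unpack");
    if (buf.size() - 4 < len)
        throw std::runtime_error("truncated payload");
    return std::vector<uint8_t>(buf.begin() + 4, buf.begin() + 4 + len);
}
```

A round-trip unit test of the kind described above would then assert `unpack_array(pack_array(v)) == v` for edge cases at both ends of the size range.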

@pmconrad (Contributor) commented Dec 31, 2018

I just realized that sizeof, as used in the current fc implementation, may be platform-dependent and therefore also endangers consensus. We should fix this in the next hardfork, using limits that are beyond the current block size.
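To illustrate the hazard: `sizeof(long)` is 4 bytes on 64-bit Windows (LLP64) but 8 bytes on 64-bit Linux (LP64), so any limit derived from it disagrees across platforms. The sketch below uses hypothetical constants (`BUFFER_BYTES`, `MAX_ELEMENTS`), not fc's actual ones; the point is that a fixed-width type yields the same limit everywhere.

```cpp
#include <cstdint>
#include <cstddef>

// Consensus hazard (do not do this):
//   max_elements = buffer_bytes / sizeof(long);  // 4 on LLP64, 8 on LP64
//
// Deriving the limit from a fixed-width type is platform-independent:
constexpr std::size_t BUFFER_BYTES  = 1u << 20;                           // hypothetical budget
constexpr std::size_t MAX_ELEMENTS  = BUFFER_BYTES / sizeof(std::int64_t); // always /8
```

Every node computes the same `MAX_ELEMENTS` regardless of its data model, which is the property consensus code needs.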

@abitmore abitmore added this to To Do in Protocol Upgrade Release (3.0.0) via automation Jan 1, 2019

@abitmore abitmore removed this from New -Awaiting Core Team Evaluation in Project Backlog Jan 1, 2019

@jmjatlanta (Contributor) commented Jan 15, 2019

Another option to prevent consensus-breaking size differences (suggested by @pmconrad) is to reject network messages above a certain size. This would "catch the problem at the door" and not affect consensus at the pack/unpack level. This has been done as part of pull bitshares/bitshares-fc#100.

Should it be decided that pull 100 fixes this issue as well, the following questions remain:

  • Should we remove the now-unnecessary FC_ASSERTs related to max pack/unpack sizes?
    • In my opinion, yes.
  • Does pull 100 also fix the sizeof() consensus worry?
    • I believe the answer is "mostly". This code would no longer perform size checking, so it would be clean. The code in the networking layer would still be susceptible to breaking consensus on messages very close to the maximum size (one platform may accept such a message while another does not).

I am looking forward to others' thoughts.
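The "catch it at the door" idea reduces to a single check applied before deserialization ever runs. A minimal sketch, assuming a hypothetical limit and function name (the real constant and enforcement point live in bitshares/bitshares-fc#100):

```cpp
#include <cstddef>

// Hypothetical network-layer limit; not the actual value from PR 100.
constexpr std::size_t MAX_MESSAGE_SIZE = 2 * 1024 * 1024;

// Gate applied once, at the network boundary: oversized messages never
// reach the deserializer, so pack/unpack itself needs no per-type checks.
inline bool accept_message(std::size_t raw_size) {
    return raw_size <= MAX_MESSAGE_SIZE;
}
```

The residual risk noted above is exactly the boundary case: a message whose size sits right at the limit, where any platform-dependent accounting could make one node accept what another rejects.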

@pmconrad (Contributor) commented Jan 16, 2019

PR 100 alone probably does not fix the sizeof problem. But IMO with PR 100 we can get rid of the sizeof checks during deserialization without much risk.

@jmjatlanta (Contributor) commented Jan 22, 2019

@pmconrad PR 103 fixed the problem of serialization checks, but the sizeof problem probably still exists in some areas. Should a new ticket be created, or should we just use this one?

@pmconrad (Contributor) commented Jan 23, 2019

IMO the sizeof issue has nothing to do with the original ticket. Better to close this one and create a new one.

@jmjatlanta (Contributor) commented Jan 25, 2019

Fixed by bitshares/bitshares-fc/pull/100 and (to a lesser degree) bitshares/bitshares-fc/pull/103. This exposed another issue, detailed in #1540.

@abitmore (Member, Author) commented Jan 25, 2019

I guess this can be put into next feature release?

@jmjatlanta (Contributor) commented Jan 25, 2019

Good question. Neither bitshares/bitshares-fc/pull/103 nor bitshares/bitshares-fc/pull/100 have hardfork protection, as the rules should not have changed.

The FC_ASSERTs removed in bitshares/bitshares-fc/pull/103 have been replaced by the single FC_ASSERT added in bitshares/bitshares-fc/pull/100.

Please feel free to put a second set of eyes on those changes. If we all agree, I believe this can go in the feature release.
