
The 130kb limit rule should not be created; it is too arbitrary. #15

Open
tomFlowee opened this issue Apr 11, 2024 · 7 comments

Comments

@tomFlowee

I object to the 130 KB maximum stack usage rule introduced directly in this CHIP.

I am aware that the proposal states the limit is based on the effective limit of the old system, but that is not a very good reason for introducing a new consensus rule. It implies we don't understand the effect of raising this limit to, say, 1 MB. And if there are unknowns like that, this CHIP clearly has not been researched well enough yet to introduce new consensus rules.

What is the reason for having a maximum stack size at all?

One might assume it is to protect the hardware the VM runs on. If that is the case, then the limit should be set in accordance with what actually protects the hardware, which would make more sense at 1 MB. Though frankly, 25 MB would be fine...

So, please remove the 130 KB limit consensus rule, or base it on actually realistic hardware limits if it really must exist.

Let's not have consensus rules added based on "we always did it that way". :-)

@tomFlowee

In short:

The CHIP suggests:

> ✅ The 520-byte stack element length limit is raised to 10,000 bytes, a constant equal to the consensus-maximum VM bytecode length (A.K.A. MAX_SCRIPT_SIZE) prior to this proposal.

Today we have a stack-depth limit of 1,000; this remains unchanged.

For some reason Jason introduced a new 130 KB consensus rule; I expect it was because he was uncertain whether the 520 -> 10,000 change would cause memory issues.

As a C++ dev, I can say that you won't get memory issues even if the worst possible case uses 20 MB.

As a consequence, that new 130K rule is not needed and should not be part of the consensus rules.

@A60AB5450353F40E

A60AB5450353F40E commented Apr 11, 2024

Wouldn't it be easier on implementers too?

If the max stack item size is 10 kB, the max number of items is 1,000, and the max total stack size is 130 kB, that doesn't help implementers much.

Imagine your stack is filled with 1000 x 130 B items and then some script wants to pop them all to create 130 x 1000 B items. If all you had was a 130 kB scratchpad, it would require lots of juggling around.

More likely, implementers will just create a 1000 x 10 kB (== 10 MB) blob of RAM and work with that, e.g.:

```c
char *scratchpad = malloc(10000000);  /* one 10 MB arena, allocated up front */
char *stack[1000];                    /* per-item pointers into the arena    */
char *altstack[1000];
```

then you can just swap pointer addresses on OP_SWAP or OP_ROLL etc., no matter the size of the involved elements, without needing dynamic arrays or whatever else adds overhead.
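
A slightly more fleshed-out C++ sketch of that idea (purely illustrative: `ArenaStack`, the free-slot list and the fixed 1,000 x 10,000-byte sizing are assumptions for this sketch, not anything from the CHIP or an existing node, and all bounds/consensus checks are omitted):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// One up-front allocation sized for the worst case (1,000 items x 10,000 bytes).
// The stack only holds (pointer, length) views into the arena, so stack
// manipulation opcodes shuffle pointers and never copy element bytes.
struct ArenaStack {
    static constexpr std::size_t kMaxItems = 1000;
    static constexpr std::size_t kMaxItemSize = 10000;

    struct View { std::uint8_t* data; std::size_t size; };

    std::vector<std::uint8_t> arena;
    std::vector<View> stack;
    std::vector<std::uint8_t*> freeSlots;  // unused 10 kB slots in the arena

    ArenaStack() : arena(kMaxItems * kMaxItemSize) {
        stack.reserve(kMaxItems);
        freeSlots.reserve(kMaxItems);
        for (std::size_t i = 0; i < kMaxItems; ++i)
            freeSlots.push_back(arena.data() + i * kMaxItemSize);
    }

    void push(const std::uint8_t* src, std::size_t len) {
        std::uint8_t* slot = freeSlots.back();  // grab a free 10 kB slot
        freeSlots.pop_back();
        std::copy(src, src + len, slot);
        stack.push_back({slot, len});
    }
    void pop() {
        freeSlots.push_back(stack.back().data);  // return the slot to the pool
        stack.pop_back();
    }
    void opSwap() {  // OP_SWAP: a pointer-sized swap, regardless of element size
        std::swap(stack[stack.size() - 1], stack[stack.size() - 2]);
    }
};
```

The tradeoff is a fixed ~10 MB footprint per validation context, in exchange for never resizing or reallocating anything mid-script.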

@cculianu

cculianu commented Apr 11, 2024

NACK. 130KB limit is well motivated. Please read this section: https://github.com/bitjson/bch-vm-limits/?tab=readme-ov-file#stack-memory-usage-limit (be sure to expand the "Selection of 130,000..." section). TL;DR: ~130kb was about the largest stack you could generate with the current VM limits.

Keeping 130KB in place is just a conservative approach, while also liberalizing the 201-opcode limit to unlimited and liberalizing the largest possible push from 520 to 10,000 bytes.

Considering that our goal is scaling and ultimately large blocks with many inputs, it's best to err on the side of caution and not be overly aggressive with liberalizing limits. Any liberalization of 130kb (which is implicitly what we have now) -> 10MB or 20MB or whatever needs to be motivated and needs research (read: needs benchmark numbers in real C++ code).

For now, I think 130kb captures things nicely.

Anyway, further characterization and benchmarking are warranted, regardless.

And who knows? We may find in our testing that 10MB (e.g. no explicit limit, just the implicit one) has no noticeable slowdown as compared to 130KB, even for the most pathological block validation imaginable.

@tomFlowee

@cculianu

> This is quite amateur and I disagree that anybody but the most basic of C programmers would do this.

I think BCA is not a C++ developer, but even so your words are overly harsh, and they also miss the point.

Embedded systems have a hard rule of allocating everything up front, so this is not a strange way of doing things at all; indeed, the concept has been well explored in garbage-collected systems. JavaScript itself does this, as it has a garbage collector.
Sure, the implementation will obviously be more complex than one big malloc, and while the simple three-line snippet was maybe "amateur", I don't think it is fair to call the entire approach that.

> You can do this anyway with std::vector moves. BCHN already does this. Please learn C++.

There are lots of approaches, and I again think the harsh reply is off base.
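
For concreteness, a rough sketch of what "std::vector moves" buys here (purely illustrative, not actual BCHN code; the type aliases, helper names and the omission of bounds checks are assumptions of the sketch):

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Each stack element owns its bytes in a std::vector; the stack is a vector of
// those vectors. Swapping or moving an element exchanges the vectors' internal
// heap pointers, so the cost is independent of the element's size.
using StackElement = std::vector<std::uint8_t>;
using Stack = std::vector<StackElement>;

void opSwap(Stack& stack) {
    std::swap(stack[stack.size() - 1], stack[stack.size() - 2]);  // O(1)
}

void opRoll(Stack& stack, std::size_t n) {
    // Move the n-th-from-top element to the top; every shift performed by
    // erase() is itself a move, not a copy of up to 10,000 bytes.
    StackElement rolled = std::move(*(stack.end() - 1 - n));
    stack.erase(stack.end() - 1 - n);
    stack.push_back(std::move(rolled));
}
```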

> 130KB limit is well motivated.

Actually, introducing a limit that effectively says "like before we changed these other variables" is not a motivation, because the original limits were never motivated either. Satoshi most likely picked those numbers out of a hat.

> Considering that our goal is scaling and ultimately large blocks with many inputs, it's best to err on the side of caution and not be overly aggressive with liberalizing limits. Any liberalization of 130kb (which is implicitly what we have now) -> 10MB or 20MB or whatever needs to be motivated and needs research (read: needs benchmark numbers in real C++ code).

I'm OK with this being stated out loud.

But here is the crux of the matter: if people can't state for certain that increasing those limits will not, in actual fact, open us up to abuse and failure, then it has not been researched well.

Frankly, either we KNOW a much higher limit can be abused, and then we have an actual motivation for the limit, or we are unaware of the risks, and then we should not touch those limits at all until we are certain. It really is that simple.

As far as I can see, there is NO possible way to abuse an unlimited stack size from script. And that means the limit should not be here.

To be clear: fear of removing this limit due to uncertainty reflects much worse on the entire spec than certainty would. Either we are certain this limit is needed, or the limit does not belong. There is no need to protect the VM; as Calin said elsewhere, a couple of MB is not relevant to the whole.

@cculianu

> Satoshi most likely picked those numbers out of a hat.

Perhaps. But the network has been running OK for 14+ years with these limits in place, so... if it ain't broke, don't fix it.

The reason for lifting the stack push and opcode limits is that contracts have hit a wall, so it deserves investigation to raise them, while still not introducing any "surprises".


Well, all that we have left to do is characterize it, then. Try a 130 KB explicit limit... or a 10 MB implicit limit, and see how they compare. See how pathological a block we can create in either regime, and what the tradeoffs are (if any). It could be there is no difference, in which case -- hey, great -- a 10 MB implicit limit is great... or it could make a difference, in which case we may want to opt for an explicit limit.
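
As a very first step in that characterization, something like the toy micro-benchmark below could put numbers on just the raw memory-traffic difference between the two regimes. It is nowhere near a full block-validation benchmark, and the sizes and iteration counts are arbitrary choices for illustration:

```cpp
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <vector>

// Time the cost of copying a stack-sized buffer, as a crude stand-in for the
// per-input memory traffic under each limit.
static double msPerCopy(std::size_t totalBytes, int iterations) {
    std::vector<char> src(totalBytes, 0x5a), dst(totalBytes);
    const auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i)
        std::memcpy(dst.data(), src.data(), totalBytes);
    const auto t1 = std::chrono::steady_clock::now();
    volatile char sink = dst[totalBytes / 2];  // keep the copies from being optimized away
    (void)sink;
    return std::chrono::duration<double, std::milli>(t1 - t0).count() / iterations;
}

int main() {
    std::printf("130 KB stack: %.4f ms per copy\n", msPerCopy(130000, 10000));
    std::printf(" 10 MB stack: %.4f ms per copy\n", msPerCopy(10000000, 200));
    return 0;
}
```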

@cculianu

cculianu commented Apr 12, 2024

> What is the reason for having a maximum stack size at all?

Ok, in order to give this discussion some meat:

It's not about protecting memory usage. As everybody pointed out, 10 MB is trivial memory these days. And as @A60AB5450353F40E and @tomFlowee pointed out, you can just prealloc anyway and be happy.

It's about guarding against the CPU cost of processing inputs... or rather, guarding against expansion of CPU costs in a direction we don't want. Let me explain:

If you are pushing 10 MB, that means you copied (or generated) 10 MB... from somewhere. Intuitively, that is going to mean each input can gnosh on 10 MB of data, on top of whatever hash limit it also has. It's just more cost per input to process. Let's say an average, mildly weaksauce VPS can do 1 GB/s memory transfers? Then 10 MB is what -- 10 ms to gnosh on? Multiply by thousands of inputs and now we can imagine a poison block materializing... and taking much longer to process than anybody would like... Maybe. IDK. One can certainly make an argument as to why this requires investigation.
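
Making that back-of-envelope arithmetic explicit (all three inputs are assumptions taken from the paragraph above, not measurements):

```cpp
#include <cstdio>

int main() {
    const double bytesPerInput  = 10e6;  // 10 MB of stack data touched per input
    const double bytesPerSecond = 1e9;   // ~1 GB/s memory throughput on a modest VPS
    const double inputsPerBlock = 5000;  // a hypothetical "poison" block

    const double secPerInput = bytesPerInput / bytesPerSecond;  // 0.01 s
    const double secPerBlock = secPerInput * inputsPerBlock;    // 50 s
    std::printf("%.3f s per input, %.0f s per block\n", secPerInput, secPerBlock);
    return 0;
}
```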

This is why I cautiously say -- leave 130KB in there now -- and also test 10MB (no limit), and see what happens, and have that inform our decision.

All in all, I think @bitjson's instincts are good on this one, and the 130 KB limit is a good starting point for investigating the impact of this VM change. And if it turns out 10 MB (no limit) is fine, we can do that too. But not before characterizing the tradeoffs (if any!).

@tomFlowee

> It's not about protecting memory usage.

Thank you, that was the main question I wanted multiple C++ devs to weigh in on. I concur: there is no reason to have any limit for memory or VM-protection reasons, since the other limits together can't get high enough to do damage.
