EIP-615: Subroutines and Static Jumps for the EVM #615

Closed

gcolvin opened this issue Apr 27, 2017 · 37 comments

gcolvin commented Apr 27, 2017


eip: 615
title: Subroutines and Static Jumps for the EVM
status: Draft
type: Standards Track
category: Core
author: Greg Colvin greg@colvin.org, Brooklyn Zelenka (@expede), Paweł Bylica (@chfast), Christian Reitwiessner (@chriseth)
discussions-to: https://ethereum-magicians.org/t/eip-615-subroutines-and-static-jumps-for-the-evm-last-call/3472
created: 2016-12-10

Simple Summary

In the 21st century, on a blockchain circulating billions of ETH, formal specification and verification are essential tools against loss. Yet the design of the EVM makes verification unnecessarily difficult, and it also makes near-linear-time compilation to machine code difficult. We propose to move forward with proposals to resolve these problems by tightening EVM security guarantees and reducing barriers to performance.

Abstract

EVM code is currently difficult to statically analyze, hobbling critical tools for preventing the many expensive bugs our blockchain has experienced. Further, none of the current implementations of the Ethereum Virtual Machine—including the compilers—are sufficiently performant to reduce the need for precompiles and otherwise meet the network's long-term demands. This proposal identifies dynamic jumps as a major reason for these issues, and proposes changes to the EVM specification to address the problem, making further efforts towards a safer and more performant EVM possible.

We also propose to validate—in near-linear time—that EVM contracts correctly use subroutines, avoid misuse of the stack, and meet other safety conditions before placing them on the blockchain. Validated code precludes most runtime exceptions and the need to test for them. And well-behaved control flow and use of the stack make life easier for interpreters, compilers, formal analysis, and other tools.

Motivation

Currently the EVM supports only dynamic jumps, where the address to jump to is an argument on the stack. Worse, the EVM fails to provide ordinary, alternative control flow facilities like subroutines and switches provided by Wasm and most CPUs. So dynamic jumps cannot be avoided, yet they obscure the structure of the code and thus mostly inhibit control- and data-flow analysis. This puts the quality and speed of optimized compilation fundamentally at odds. Further, since many jumps can potentially be to any jump destination in the code, the number of possible paths through the code can go up as the product of the number of jumps by the number of destinations, as does the time complexity of static analysis. Many of these cases are undecidable at deployment time, further inhibiting static and formal analyses.

However, given Ethereum's security requirements, near-linear n log n time complexity is essential. Otherwise, contracts can be crafted or discovered with quadratic complexity to use as denial-of-service attack vectors against validations and optimizations.

But absent dynamic jumps, code can be statically analyzed in linear time. That allows for linear-time validation. It also allows for code generation, and for such optimizations as can be done in log n time, to comprise an n log n time compiler.

And absent dynamic jumps, and with proper subroutines, the EVM is a better target for code generation from other languages, including

  • Solidity
  • Vyper
  • LLVM IR
    • front ends include C, C++, Common Lisp, D, Fortran, Haskell, Java, Javascript, Kotlin, Lua, Objective-C, Pony, Pure, Python, Ruby, Rust, Scala, Scheme, and Swift

The result is that all of the following validations and optimizations can be done at deployment time with near-linear (n log n) time complexity.

  • The absence of most exceptional halting states can be validated.
  • The maximum use of resources can sometimes be calculated.
  • Bytecode can be compiled to machine code in near-linear time.
  • Compilation can more effectively optimize use of smaller registers.
  • Compilation can more effectively optimize injection of gas metering.

Specification

Dependencies

EIP-1702. Generalized Account Versioning Scheme. This proposal needs a versioning scheme to allow for its bytecode (and eventually eWasm bytecode) to be deployed with existing bytecode on the same blockchain.

Proposal

We propose to deprecate two existing instructions—JUMP and JUMPI—and propose new instructions to support their legitimate uses. In particular, it must remain possible to compile Solidity and Vyper code to EVM bytecode, with no significant loss of performance or increase in gas price.

Especially important is efficient translation to and from eWasm and to machine code. To that end we maintain a close correspondence between Wasm, x86, ARM and proposed EVM instructions.

EIP-615     Wasm            x86    ARM
JUMPTO      br              JMP    B
JUMPIF      br_if           JE     BEQ
JUMPV       br_table        JMP    TBH
JUMPSUB     call            CALL   BL
JUMPSUBV    call_indirect   CALL   BL
RETURNSUB   return          RET    RET
GETLOCAL    local.get       POP    POP
PUTLOCAL    local.set       PUSH   PUSH
BEGINSUB    func
BEGINDATA   tables

Preliminaries

These forms

INSTRUCTION

INSTRUCTION x

INSTRUCTION x, y

name an INSTRUCTION with no, one, and two arguments, respectively. An instruction is represented in the bytecode as a single-byte opcode. Any arguments are laid out as immediate data bytes following the opcode inline, interpreted as fixed-length, MSB-first, two's-complement, two-byte positive integers. (Negative values are reserved for extensions.)
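For illustration only, a minimal Python sketch of decoding such an immediate argument; the function name is hypothetical and not part of the specification:

    def decode_immediate(bytecode, offset):
        # Immediates are fixed-length, MSB-first, two's-complement, two-byte integers.
        value = (bytecode[offset] << 8) | bytecode[offset + 1]
        if value >= 0x8000:            # sign bit set: negative values are reserved
            raise ValueError("reserved negative immediate")
        return value

    # e.g. JUMPTO 0x0102, encoded with the suggested opcode 0xb0, is [0xb0, 0x01, 0x02]
    assert decode_immediate(bytes([0xb0, 0x01, 0x02]), 1) == 0x0102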

Branches and Subroutines

The two most important uses of JUMP and JUMPI are static jumps and return jumps. Conditional and unconditional static jumps are the mainstay of control flow. Return jumps are implemented as a dynamic jump to a return address pushed on the stack. With the combination of a static jump and a dynamic return jump you can—and Solidity does—implement subroutines. The problem is that static analysis cannot tell the one place the return jump is going, so it must analyze every possibility (a heavy analysis).

Static jumps are provided by

JUMPTO jump_target

JUMPIF jump_target

which are the same as JUMP and JUMPI except that they jump to an immediate jump_target rather than an address on the stack.

To support subroutines, BEGINSUB, JUMPSUB, and RETURNSUB are provided. Brief descriptions follow, and full semantics are given below.

BEGINSUB n_args, n_results

marks the single entry to a subroutine. n_args items are taken off of the stack at entry to, and n_results items are placed on the stack at return from the subroutine. The subroutine ends at the next BEGINSUB instruction (or BEGINDATA, below) or at the end of the bytecode.

JUMPSUB jump_target

jumps to an immediate subroutine address.

RETURNSUB

returns from the current subroutine to the instruction following the JUMPSUB that entered it.

Switches, Callbacks, and Virtual Functions

Dynamic jumps are also used for O(1) indirection: an address to jump to is selected, pushed on the stack, and jumped to. So we also propose two more instructions to provide for constrained indirection. We support these with vectors of JUMPDEST or BEGINSUB offsets stored inline, which can be selected with an index on the stack. That constrains validation to a specified subset of all possible destinations. The danger of quadratic blow-up is avoided because it takes as much space to store the jump vectors as it does to code the worst-case exploit.

Dynamic jumps to a JUMPDEST are used to implement O(1) jumptables, which are useful for dense switch statements. Wasm and most CPUs provide similar instructions.

JUMPV n, jump_targets

jumps to one of a vector of n JUMPDEST offsets via a zero-based index on the stack. The vector is stored inline at the jump_targets offset after the BEGINDATA bytecode as MSB-first, two's-complement, two-byte positive integers. If the index is greater than or equal to n - 1 the last (default) offset is used.

Dynamic jumps to a BEGINSUB are used to implement O(1) virtual functions and callbacks, which take at most two pointer dereferences on most CPUs. Wasm provides a similar instruction.

JUMPSUBV n, jump_targets

jumps to one of a vector of n BEGINSUB offsets via a zero-based index on the stack. The vector is stored inline at the jump_targets offset after the BEGINDATA bytecode, as MSB-first, two's-complement, two-byte positive integers. If the index is greater than or equal to n - 1 the last (default) offset is used.
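As a rough sketch of the selection rule shared by JUMPV and JUMPSUBV (the helper name is hypothetical, and the vector is assumed to have already been located in the data section):

    def select_target(bytecode, vector_offset, n, index):
        # The vector holds n two-byte, MSB-first offsets; out-of-range indices
        # fall through to the last (default) entry.
        if index >= n - 1:
            index = n - 1
        entry = vector_offset + 2 * index
        return (bytecode[entry] << 8) | bytecode[entry + 1]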

Variable Access

These operations provide convenient access to subroutine parameters and local variables at fixed stack offsets within a subroutine. Otherwise only sixteen variables can be directly addressed (via DUP and SWAP).

PUTLOCAL n

Pops the stack to the local variable n.

GETLOCAL n

Pushes the local variable n onto the stack.

Local variable n is the nth stack item below the frame pointer, FP[-n], as defined below.
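A toy Python model of these two operations, assuming the data stack is a list whose end is the stack top and fp is the index of the item at the frame pointer, so that stack[fp + n] plays the role of FP[-n]; this is only a sketch of the intended semantics, not a specification:

    def getlocal(stack, fp, n):
        stack.append(stack[fp + n])    # push local variable n, i.e. FP[-n]

    def putlocal(stack, fp, n):
        stack[fp + n] = stack.pop()    # pop the top of the stack into FP[-n]

In the worked example below (arguments 10, 11 and locals 99, 98, 97), getlocal(stack, fp, 0) would push argument 10 and getlocal(stack, fp, 2) would push the first local, 99.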

Data

There needs to be a way to place unreachable data into the bytecode that will be skipped over and not validated. Indirect jump vectors will not be valid code. Initialization code must create runtime code from data that might not be valid code. And unreachable data might prove useful to programs for other purposes.

BEGINDATA

specifies that all of the following bytes to the end of the bytecode are data, and not reachable code.

Structure

Valid EIP-615 EVM bytecode begins with a valid header. This is the magic number ‘\0evm’ followed by the semantic versioning number '\1\5\0'. (For Wasm the header is '\0asm\1').

Following the header is the BEGINSUB opcode for the main routine. It takes no arguments and returns no values. Other subroutines may follow the main routine, and an optional BEGINDATA opcode may mark the start of a data section.
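A minimal Python sketch of checking this structure follows; the constant names are invented here, and the BEGINSUB byte is the opcode suggested under Costs & Codes below:

    EVM15_HEADER = b"\x00evm\x01\x05\x00"   # '\0evm' magic followed by version bytes '\1\5\0'
    BEGINSUB = 0xb5                          # suggested opcode for BEGINSUB

    def has_valid_prologue(bytecode: bytes) -> bool:
        # Valid EIP-615 code starts with the header, then the main routine's BEGINSUB.
        return (bytecode.startswith(EVM15_HEADER)
                and len(bytecode) > len(EVM15_HEADER)
                and bytecode[len(EVM15_HEADER)] == BEGINSUB)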

Semantics

Jumps to and returns from subroutines are described here in terms of

  • The EVM data stack (as defined in the Yellow Paper), usually just called “the stack.”
  • A return stack of JUMPSUB and JUMPSUBV offsets.
  • A frame stack of frame pointers.

We will adopt the following conventions to describe the machine state:

  • The program counter PC is (as usual) the byte offset of the currently executing instruction.
  • The stack pointer SP corresponds to the Yellow Paper's μs element of the machine state.
    • SP[0] is where a new item can be pushed on the stack.
    • SP[1] is the first item on the stack, which can be popped off the stack.
    • The stack grows towards lower addresses.
  • The frame pointer FP is set to SP + n_args at entry to the currently executing subroutine.
    • The stack items between the frame pointer and the current stack pointer are called the frame.
    • The current number of items in the frame, FP - SP, is the frame size.

Note: Defining the frame pointer so as to include the arguments is unconventional, but better fits our stack semantics and simplifies the remainder of the proposal.

The frame pointer and return stacks are internal to the subroutine mechanism, and not directly accessible to the program. This is necessary to prevent the program from modifying its own state in ways that could be invalid.

Execution of EVM bytecode begins with the main routine with no arguments, SP and FP set to 0, and with one value on the return stack—code_size - 1. (Executing the virtual byte of 0 after this offset causes an EVM to stop. Thus executing a RETURNSUB with no prior JUMPSUB or JUMPSUBV—that is, in the main routine—executes a STOP.)

Execution of a subroutine begins with JUMPSUB or JUMPSUBV, which

  • pushes PC on the return stack,
  • pushes FP on the frame stack
    • thus suspending execution of the current subroutine,
  • sets FP to SP + n_args, and
  • sets PC to the specified BEGINSUB address
    • thus beginning execution of the new subroutine.

Execution of a subroutine is suspended during and resumed after execution of nested subroutines, and ends upon encountering a RETURNSUB, which

  • sets FP to the top of the frame stack and pops that stack,
  • sets SP to FP + n_results,
  • sets PC to the top of the return stack and pops that stack, and
  • advances PC to the next instruction

thus resuming execution of the enclosing subroutine or main routine. A STOP or RETURN also ends the execution of a subroutine.
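The following Python sketch models these transitions, with the data stack as a list whose end is the top, frame_base as an index into that list, and the advance_pc helper from Appendix A assumed; n_args and n_results are taken from the relevant BEGINSUB. It is illustrative only:

    from dataclasses import dataclass, field

    @dataclass
    class State:                      # a toy machine state, not the Yellow Paper's
        pc: int = 0
        stack: list = field(default_factory=list)         # data stack; end of list is the top
        return_stack: list = field(default_factory=list)  # JUMPSUB offsets to return to
        frame_stack: list = field(default_factory=list)   # saved frame bases
        frame_base: int = 0                                # frame = stack[frame_base:]

    def jumpsub(state, target, n_args):
        # Suspend the current routine and begin the subroutine at `target` (a BEGINSUB offset).
        state.return_stack.append(state.pc)
        state.frame_stack.append(state.frame_base)
        state.frame_base = len(state.stack) - n_args   # the frame starts at the first argument
        state.pc = target

    def returnsub(state, n_results):
        # Condition 7: the frame must hold exactly n_results items, which are left
        # on the stack for the caller; control resumes just past the JUMPSUB.
        assert len(state.stack) - state.frame_base == n_results
        state.frame_base = state.frame_stack.pop()
        state.pc = advance_pc(state.return_stack.pop())   # advance_pc as in Appendix A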

For example, starting from this stack,

_________________
      | locals      20 <- FP
frame |             21
______|___________  22
                       <- SP

and after pushing two arguments and branching with JUMPSUB to a BEGINSUB 2, 3

PUSH 10
PUSH 11
JUMPSUB beginsub

and initializing three local variables

PUSH 99
PUSH 98
PUSH 97

the stack looks like this

                    20
                    21
__________________  22
      | arguments   10 <- FP
frame |___________  11
      | locals      99
      |             98
______|___________  97
                       <- SP

After some amount of computation the stack could look like this

                    20
                    21
__________________  22
      | returns     44 <- FP
      |             43
frame |___________  42
      | locals      13
______|___________  14
                       <- SP

and after RETURNSUB would look like this

_________________
      | locals      20 <- FP
      |             21
frame |___________  22
      | returns     44
      |             43
______|___________  42
                       <- SP

Validity

We would like to consider EVM code valid iff no execution of the program can lead to an exceptional halting state, but we must validate code in linear time. So our validation does not consider the code's data and computations, only its control flow and stack use. This means we will reject programs with invalid code paths, even if those paths are not reachable. Most conditions can be validated, and will not need to be checked at runtime; the exceptions are sufficient gas and sufficient stack. For those, static analysis may yield false negatives, so they belong to well-understood classes of code requiring runtime checks. Aside from those cases, we can validate large classes of conditions at deployment time with linear complexity.

Execution is as defined in the Yellow Paper—a sequence of changes in the EVM state. The conditions on valid code are preserved by state changes. At runtime, if execution of an instruction would violate a condition the execution is in an exceptional halting state. The Yellow Paper defines five such states.

1 Insufficient gas

2 More than 1024 stack items

3 Insufficient stack items

4 Invalid jump destination

5 Invalid instruction

We propose to expand and extend the Yellow Paper conditions to handle the new instructions we propose.

To handle the return stack we expand the conditions on stack size:

2a The size of the data stack does not exceed 1024.

2b The size of the return stack does not exceed 1024.

Given our more detailed description of the data stack we restate condition 3—stack underflow—as

3 SP must be less than or equal to FP

Since the various DUP and SWAP operations—as well as PUTLOCAL and GETLOCAL—are defined as taking items off the stack and putting them back on, this prevents them from accessing data below the frame pointer, since taking too many items off of the stack would mean that SP is less than FP.

To handle the new jump instructions and subroutine boundaries, we expand the conditions on jumps and jump destinations.

4a JUMPTO, JUMPIF, and JUMPV address only JUMPDEST instructions.

4b JUMPSUB and JUMPSUBV address only BEGINSUB instructions.

4c JUMP instructions do not address instructions outside of the subroutine they occur in.

We have two new conditions on execution to ensure consistent use of the stack by subroutines:

6 For JUMPSUB and JUMPSUBV the frame size is at least the n_args of the BEGINSUB(s) to jump to.

7 For RETURNSUB the frame size is equal to the n_results of the enclosing BEGINSUB.

Finally, we have one condition that prevents pathological uses of the stack:

8 For every instruction in the code the frame size is constant.

In practice, we must test at runtime for conditions 1 and 2—sufficient gas and sufficient stack. We don’t know how much gas there will be, we don’t know how deep a recursion may go, and analysis of stack depth even for non-recursive programs is nontrivial.

All of the remaining conditions we validate statically.

Costs & Codes

All of the instructions are O(1) with a small constant, requiring just a few machine operations each, whereas a JUMP or JUMPI typically does an O(log n) binary search of an array of JUMPDEST offsets before every jump. With the cost of JUMPI being high and the cost of JUMP being mid, we suggest the cost of JUMPV and JUMPSUBV should be mid, JUMPSUB and JUMPIF should be low, and JUMPTO and the rest should be verylow. Measurement will tell.

We suggest the following opcodes:

0xb0 JUMPTO
0xb1 JUMPIF
0xb2 JUMPV
0xb3 JUMPSUB
0xb4 JUMPSUBV
0xb5 BEGINSUB
0xb6 BEGINDATA
0xb7 RETURNSUB
0xb8 PUTLOCAL
0xb9 GETLOCAL

Backwards Compatibility

These changes would need to be implemented in phases at decent intervals:

1. If this EIP is accepted, invalid code should be deprecated. Tools should stop generating invalid code, users should stop writing it, and clients should warn about loading it.

2. A later hard fork would require clients to place only valid code on the blockchain. Note that despite the fork old EVM code will still need to be supported indefinitely; older contracts will continue to run, and to create new contracts.

If desired, the period of deprecation can be extended indefinitely by continuing to accept code not versioned as new—but without validation. That is, by delaying or canceling phase 2.

Regardless, we will need a versioning scheme like EIP-1702 to allow current code and EIP-615 code to coexist on the same blockchain.

Rationale

This design was highly constrained by the existing EVM semantics, the requirement for eWasm compatibility, and the security demands of the Ethereum environment. It was also informed by the lead author's previous work implementing Java and Scheme interpreters. As such there was very little room for alternative designs.

As described above, the approach was simply to deprecate the problematic dynamic jumps, then ask what opcodes were necessary to provide for the features they supported. These needed to include those provided by eWasm, which themselves were modeled after typical hardware. The only real innovation was to move the frame pointer and the return pointer to their own stacks, so as to prevent any possibility of overwriting them. (Although Forth also uses a return stack.) This allowed for treating subroutine arguments as local variables, and facilitated the return of multiple values.

Implementation

Implementation of this proposal need not be difficult. At the least, interpreters can simply be extended with the new opcodes and run unchanged otherwise. The new opcodes require only stacks for the frame pointers and return offsets and the few pushes, pops, and assignments described above. The bulk of the effort is the validator, which in most languages can almost be transcribed from the pseudocode in Appendix A.

A lightly tested C++ reference implementation is available in Greg Colvin's Aleth fork. This version required circa 110 lines of new interpreter code and a well-commented, 178-line validator.

Appendix A

Validation

Validation comprises two tasks:

  • Check that jump destinations are correct and instructions valid.
  • Check that subroutines satisfy the conditions on control flow and stack use.

We sketch out these two validation functions in pseudo-C below. To simplify the presentation, only the five primitives are handled (JUMPV and JUMPSUBV would just add loops over their vectors), helper functions are assumed for extracting instruction arguments from immediate data and for managing the stack pointer and program counter, and some optimizations are forgone.

Validating Jumps

Validating that jumps are to valid addresses takes two sequential passes over the bytecode—one to build sets of jump destinations and subroutine entry points, another to check that addresses jumped to are in the appropriate sets.

    bytecode[code_size]   // contains EVM bytecode to validate
    is_sub[code_size]     // is there a BEGINSUB at PC?
    is_dest[code_size]    // is there a JUMPDEST at PC?
    sub_for_pc[code_size] // which BEGINSUB is PC in?

    bool validate_jumps(PC)
    {
        current_sub = PC

        // build sets of BEGINSUBs and JUMPDESTs
        for (PC = 0; instruction = bytecode[PC]; PC = advance_pc(PC))
        {
            if instruction is invalid
                return false
            if instruction is BEGINDATA
                break;
            if instruction is BEGINSUB
                is_sub[PC] = true
                current_sub = PC
                sub_for_pc[PC] = current_sub
            if instruction is JUMPDEST
                is_dest[PC] = true
            sub_for_pc[PC] = current_sub
        }

        // check that targets are in subroutine
        for (PC = 0; instruction = bytecode[PC]; PC = advance_pc(PC))
        {
            if instruction is BEGINDATA
                break;
            if instruction is BEGINSUB
                current_sub = PC
            if instruction is JUMPSUB
                if is_sub[jump_target(PC)] is false
                    return false
            if instruction is JUMPTO or JUMPIF
                if is_dest[jump_target(PC)] is false
                    return false
                // condition 4c: jumps must stay within their subroutine
                if sub_for_pc[jump_target(PC)] is not current_sub
                    return false
       }
       return true
    }

Note that code like this is already run by EVMs to check dynamic jumps, including building the jump destination set every time a contract is run, and doing a lookup in the jump destination set before every jump.
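For comparison, a sketch of the per-run JUMPDEST scan that current EVMs perform today (opcode values are the existing ones: JUMPDEST is 0x5b, PUSH1 through PUSH32 are 0x60 through 0x7f):

    JUMPDEST, PUSH1, PUSH32 = 0x5b, 0x60, 0x7f

    def jumpdest_set(code: bytes) -> set:
        # The set of valid dynamic-jump targets, skipping over PUSH immediates.
        dests, pc = set(), 0
        while pc < len(code):
            op = code[pc]
            if op == JUMPDEST:
                dests.add(pc)
            if PUSH1 <= op <= PUSH32:
                pc += op - PUSH1 + 1   # skip the pushed immediate bytes
            pc += 1
        return dests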

Subroutine Validation

This function can be seen as a symbolic execution of a subroutine in the EVM code, where only the effect of the instructions on the state being validated is computed. Thus the structure of this function is very similar to an EVM interpreter. This function can also be seen as an acyclic traversal of the directed graph formed by taking instructions as vertices and sequential and branching connections as edges, checking conditions along the way. The traversal is accomplished via recursion, and cycles are broken by returning when a vertex which has already been visited is reached. The time complexity of this traversal is O(|E|+|V|): The sum of the number of edges and number of vertices in the graph.

The basic approach is to call validate_subroutine(i, 0, 0) for i equal to the offset of the first instruction in the EVM code and of each subsequent BEGINSUB, stopping at the BEGINDATA offset (if any). validate_subroutine() traverses instructions sequentially, recursing when JUMPIF instructions are encountered (JUMPTO simply redirects the traversal). When a destination is reached that has been visited before it returns, thus breaking cycles. It returns true if the subroutine is valid, false otherwise.

    bytecode[code_size]     // contains EVM bytecode to validate
    frame_size[code_size]   // is filled with -1

    // we validate each subroutine individually, as if at top level
    // * PC is the offset in the code to start validating at
    // * return_pc is the top PC on return stack that RETURNSUB returns to
    // * at top level FP = SP = 0 is both the frame size and the stack size
    // * as items are pushed SP get more negative, so the stack size is -SP
    validate_subroutine(PC, return_pc, SP)
    {
        // traverse code sequentially, recurse for jumps
        while true
        {
            instruction = bytecode[PC]

            // if frame size set we have been here before
            if frame_size[PC] >= 0
            {
                // check for constant frame size
                if instruction is JUMPDEST
                    if -SP != frame_size[PC]
                        return false

                // return to break cycle
                return true
            }
            frame_size[PC] = -SP

            // effect of instruction on stack
            n_removed = removed_items(instruction)
            n_added = added_items(instruction)

            // check for stack underflow
            if -SP < n_removed
                return false

            // net effect of removing and adding stack items
            SP += n_removed
            SP -= n_added

            // check for stack overflow
            if -SP > 1024
                return false

            if instruction is STOP, RETURN, or SUICIDE
                return true

            // violates single entry
            if instruction is BEGINSUB
                 return false

            // return to top or from recursion to JUMPSUB;
            // check for the right number of results (condition 7)
            if instruction is RETURNSUB
            {
                if -SP != n_results(return_pc)
                    return false
                return true
            }

            if instruction is JUMPSUB
            {
                // check for enough arguments
                sub_pc = jump_target(PC)
                if -SP < n_args(sub_pc)
                    return false
                return true
            }

            // reset PC to destination of jump
            if instruction is JUMPTO
            {
                PC = jump_target(PC)
                continue
            }

            // recurse to jump to code to validate
            if instruction is JUMPIF
            {
                if not validate_subroutine(jump_target(PC), return_pc, SP)
                    return false
            }

            // advance PC according to instruction
            PC = advance_pc(PC)
        }

        return true
    }

Appendix B

EVM Analysis

There is a large and growing ecosystem of researchers, authors, teachers, auditors, and analytic tools providing software and services focused on the correctness and security of EVM code. A small sample is given here.

Some Tools

Some Papers

Copyright

Copyright and related rights waived via CC0.


seed commented Nov 13, 2017

This is a very useful proposal, especially for anything that involves decompilation of bytecode.
It would be great to have support for this proposal in the Solidity compiler.

I have a few questions regarding the variable access section.
Is the purpose of GETLOCAL and PUTLOCAL to allow writing more efficient bytecode?
Is the objective to reduce memory accesses by using the stack instead?


gcolvin commented Nov 13, 2017

Yeah, it's to support local variables on the stack that are too far away to reach with DUP or SWAP. So more efficient bytecode, and also a better target for transpiling from eWASM.


seed commented Nov 13, 2017

Thanks.

What is the semantics of GETLOCAL n, when n is greater than the number of arguments specified in BEGINSUB? I'd suspect an exception is thrown in that case?

It seems that the number of return variables specified in BEGINSUB, isn't used anywhere.
According to my understanding, whenever we execute a RETSUB, the following should be true: FP - narg + nret == SP. If so, should an EVM interpreter check this and throw an exception otherwise?


seed commented Nov 13, 2017

Could you please clarify if GETLOCAL n, where n is positive, is accessing an argument or local variable? It seems that the former is true, but it would be good to be more explicit about this in the proposal.


seed commented Nov 13, 2017

Actually, the following is really confusing to me:

Execution of a subroutine begins with JUMPSUB or JUMPSUBV, which
...
set FP to SP + n_args, and

Shouldn't this be rather "set FP to SP" or "set FP to SP - n_args" because the arguments have already been placed on the stack at this point.


gcolvin commented Nov 13, 2017

Perhaps, Seed. It confuses me too. It's been a long while since I wrote and implemented this, so it will take a little study. Getting near bedtime here, and I've got 100 miles of driving to get home tomorrow, so please be patient. It's on my stack to do a cleanup pass and new PR for this and 616, so I much appreciate your review. (If you read C++ your questions might be answered in the code at https://github.com/ethereum/cpp-ethereum/tree/develop/libevm)


seed commented Nov 19, 2017

We started specifying the semantics of new instructions in Lem.
See https://github.com/seed/eth-isabelle/blob/evm15/lem/evm.lem

One of the things we noticed is that it makes more sense to set up the stack-frame with the BEGINSUB instruction because we have the number of function arguments at hand. It also makes sense when compared to native assembly like x86, where the stack-frame is created in the body of the function, not by the CALL instruction.

I'll be continuing the specification next week and will probably have more comments to make.


gcolvin commented Nov 19, 2017

You ask

What is the semantics of GETLOCAL n, when n is greater than the number of arguments specified in BEGINSUB?

It pushes FP[-n] on the stack.
You also ask

According to my understanding, whenever we execute a RETSUB, the following should be true: FP - narg + nret == SP.

Close.
I think both cases are handled by condition 3, that after the execution of an instruction SP <= FP.


gcolvin commented Nov 19, 2017

As for setting up the stack frame, the semantics is given as:

Execution of a subroutine begins with JUMPSUB or JUMPSUBV, which

  • push PC on the return stack,
  • push FP on the frame stack, thus suspending execution of the current subroutine, and
  • set FP to SP + n_args, and
  • set PC to the specified BEGINSUB address,
    thus beginning execution of the new subroutine.

I'm not sure it matters where or in what order anything in between encountering the JUMPSUB and executing the BEGINSUB is described as happening, so long as the program winds up executing the instruction after the BEGINSUB with the correct stack. So get the semantics right and we can get the English right.


gcolvin commented Nov 26, 2017

@seed I've edited to make clear that SP is mu-sub-s in the yellow paper, and removed the reference in the text to SP being the stack size. The rest of the pointer arithmetic in the text is (I think) now consistent with the stack growing towards lower addresses, so that positive offsets off the stack pointer and negative offsets off the frame pointer both reach into the stack frame. The validation pseudocode I have not fixed.


seed commented Dec 3, 2017

Thanks for updating the proposal. It does make a lot more sense.

Note that μs growing towards lower addresses doesn't make much sense to me in the context of the yellow paper because the EVM doesn't expose stack addresses in any way.

Let me try to confirm my understanding of the proposal with an example:
[other subroutines stack frames] [arg1] [arg2] < FP is here > [local1] [local2] <SP/top is here>.

where μs[0] = local2, μs[1] = local1, μs[2] = arg2, ...
and GETLOCAL 0 = arg2, GETLOCAL 1 = arg1
and GETLOCAL -1 = local1, GETLOCAL -2 = local2

Few questions:
Is the above correct?
Is the validation code expected to reject GETLOCAL 3 and GETLOCAL -3?
Are EVM interpreters meant to throw an invalid stack access exception when they encounter these?


gcolvin commented Dec 4, 2017

@seed The EVM doesn't expose stack addresses, but in 9.5 the Yellow Paper says, "Stack items are added or removed from the left-most, lower-indexed portion of the series." And throughout Appendix H we have notation like µ's[0] ≡ µs[0] + µs[1]. So in this proposal after, e.g.

PUSH 10
PUSH 11
JUMPSUB _x_
PUSH 12
PUSH 13

the stack looks like

10 <- FP
11
12
13
   <- SP

and GETLOCAL 0 is 10 and GETLOCAL 3 is 13. So GETLOCAL can address the subroutine arguments, and might better be named GETVAR. Negative arguments are not possible (args are unsigned ints) and GETLOCAL n where n > 3 would be rejected by the validator.


seed commented Dec 5, 2017

@gcolvin for some reason I had assumed FP = SP at entry of subroutine. Never mind, with the example it all makes sense now, thanks. I would actually suggest adding the example to the proposal.


seed commented Dec 5, 2017

The proposal uses the minus sign (-) instead of the em-dash (—) symbols in a few places which can lead to confusion.
Examples:

Local variable n is the n'th stack item below the frame pointer - FP[-n] as defined below.

I believe the first - should be a —. Also the syntax introduced, FP[-n], doesn't seem to be used elsewhere.

The first instruction of an array of EVM bytecode begins execution of a main routine with no arguments, SP and FP set to 0, and with one value on the return stack - code size - 1. (Executing the virtual byte of 0 after this offset causes an EVM to stop. Thus executing a RETURNSUB with no prior JUMPSUB or JUMBSUBV - that is, in the main routine - executes a STOP.)

Here again em-dashes should be used.


gcolvin commented Dec 5, 2017

You are right on the punctuation. I was too lazy to crawl through layers of menus to find the em-dash. I took the array syntax from the yellow paper (e.g. µs[0]). It would be nice if markdown wasn't too primitive (that is, more primitive than what we used 40 years ago) to support subscripts, but might we still do better to switch from SP and FP to µs and µf?

You are also right about the example diagram.


seed commented Dec 5, 2017

We finished the Lem implementation of JUMPIF, JUMPTO, JUMPSUB, RETURNSUB, BEGINSUB, GETLOCAL, PUTLOCAL.
See https://github.com/seed/eth-isabelle/blob/evm15/lem/evm.lem

Few questions/comments resulting from this:

  • Should the address popped from the return stack by RETURNSUB point to a JUMPDEST? Or is JUMPSUB implicitly implying that the next instruction is the target of a RETURNSUB.
  • It seems more natural for JUMPSUB to push the return address of the next instruction, instead of jumping back to the JUMPSUB and advancing the PC in RETURNSUB.
  • The size of a JUMPSUB instruction is 3 bytes, so initialising the return stack to "code_size - 1" is misleading, though harmless. To reach the first virtual STOP after the bytecode when executing a RETURNSUB we need to initialise the return stack with "code_size - 3".

To properly test both the Lem code and the cpp-Ethereum client at this point we need the Solidity compiler to output subroutine instructions.
Do you know the status of subroutine support in the Solidity compiler?


axic commented Dec 5, 2017

It can compile "Solidity inline assembly" programs to EVM 1.5:

{
  function f(a) -> b
  {
    b := add(a, 5)
  }

  let x := f(2)
}

using solc --assemble --machine=evm15 test.asm

results in the following bytecode:

b000000011b560006005820190509050b75b6002b30000000550


seed commented Dec 5, 2017

@axic thanks, I'll look into it.
Has anyone written tests to stimulate that part of the compiler, which we could potentially use to compare the behaviour of new instructions?


axic commented Dec 6, 2017

@seed it is not tested since the testing client (cpp-ethereum) build we have didn't support it. It could be added if someone submits a PR.


gcolvin commented Dec 6, 2017

@seed Are you talking about initializing the return stack in the cpp-ethereum code or in the validation pseudocode?

No JUMPDEST is involved. Just a JUMPSUB to a BEGINSUB, and a RETURNSUB back to the JUMPSUB. The x86 CALL instruction pushes the PC on the stack and RET pops it and jumps to it. Then the PC gets incremented in the usual way. That was the model I had in mind, it's how the C++ code is written for JUMP, and I think it comports with (136) in the Yellow Paper. (The C++ code is not written to match this model for RETURNSUB, although the behavior is the same.)


gcolvin commented Dec 8, 2017

So far as a formal spec (or implementation) goes I think you can specify it however you like, so long as the PC winds up in the right place. It could be changed here too if it helps, but I find it more clear the way it is.


nootropicat commented Jan 26, 2018

For formal analysis purposes, all of this is implementable right now as an intermediate language that's easily compilable to EVM. PUTLOCAL and GETLOCAL are indeed very useful but there's no need for separate opcodes - the stack could be mapped to negative memory and accessed with MLOAD/MSTORE (ie. the stack starts at (-32) and every push goes downwards). The space of one-byte opcodes is limited.

Why not add it as a precompiled contract? Ie. the very first instruction (in EVM) calls it with code as an argument. The gas cost of that particular call could be set to zero, in effect working only as a costless (except storage) marking for a different VM version. This would totally avoid problems due to deprecating opcodes and backwards compatibility, allowing for much more fundamental changes.

In principle I very much dislike the idea of breaking backwards compatibility. What if someone has a compiled contract on some private blockchain? It would be impossible to deploy on the public one without recompilation. What if there's some private or just unknown language that compiles to evm? Its compiler would have to be rewritten. Even if one doesn't exist at the moment (most likely), potential ones that may exist may not be written at all, because the risk of compatibility-breaking changes would be judged as too high. For all these reasons modern processors still support old 16 bit code - even including undocumented accidental instructions (ie. ffreep)!


gcolvin commented Jan 26, 2018

The difference with PUTLOCAL and GETLOCAL is that they have to respect frame boundaries. This proposal was meant to break compatibility less than eWasm, which breaks it completely.


nootropicat commented Jan 26, 2018

Respect how? You replied previously that they don't throw "It pushes FP[-n] on the stack.". So they act as memory access just with fp as the offset.


gcolvin commented Jan 26, 2018

Validity condition 3. Programs that try to get or put a stack slot outside of the local frame are invalid. Memory access isn't constrained that way.

fulldecent commented

Here is a very brief review.

Who said static analysis is valuable?

This proposal recommends a super-large change, deprecating op-codes and introducing an entire subroutine mechanism into the machine for the benefit of "tightening the security".

The justification is static analysis of (prospective) contracts will be easier and that formal methods are "essential [tools] against loss".

The proposal will be much stronger if examples (a single example?) are included where static analysis has previously been successful in detecting any problem. I don't know of any. My understanding is just the opposite -- every bug with $1M+ loss on Ethereum has been due to ****-poor coding and failure to pay bug bounties higher than the cost of a nice dinner.

This is already possible WITHOUT implementing the EIP

Certain statements are made such as:

Further, since every jump can potentially be to any jump destination in the code, the number of possible paths through the code goes up as the product of the number of jumps by the number of destinations, as does the time complexity of static analysis.

Here is a 5-minute analysis of an actual contract.

BAM! The practical cost of static analysis is already reduced by an order of magnitude from the theoretical statement above. (Because they are already using literal jumps.)

Stronger support and more analysis of existing contracts should be presented in this EIP before recommending such a large change.

PS

  • The edited-date above is invalid.
  • The specification "These forms ... name instructions with one, two, and two or more arguments, respectively" is not parseable English. The number of examples does not line up with the number of "respectively" definitions.


gcolvin commented Feb 26, 2019

I don't know if this is a super-large proposal or not. A fair comparison might be to eWasm, which provides essentially the same functionality.

There is no current eWasm EIP and implementation for direct comparison, but the Wasm spec itself is 150 pages, and only floating point and a few other non-deterministic operations aren't relevant. eWasm also requires an Environment Interface between the VM and the client, and a Transcompiler of EVM to eWasm bytecode for backwards compatibility. The ewasm/design repo, which is currently mostly empty, gives a feel for the scope of the effort remaining.

This spec is about 15 pages, and it took about 300 lines of C++ to add it to the aleth interpreter. It is backwards-compatible by default.

@bmann @expede Boris, Brooke and I have spoken with VM and program specification and proof experts, program auditors, providers of tools like disassemblers, decompilers, lints, and others. They use static analysis, find dynamic jumps a serious obstacle, and support this proposal. Perhaps an appendix of links to their companies and repos would help.

This spec also supports fast, near-linear-time compilation to machine code, LLVM IR, or Wasm, with no quadratic "compiler bombs." The current EVM makes that difficult-to-impossible.

Thanks for noticing typos. I've removed the "edited-date" field, as Github now maintains a history of edits.


expede commented Feb 26, 2019

Who said static analysis is valuable?

I mean... lots of people. Entire companies and divisions have sprung up around smart contract analysis (generally assisted by a checker). For example, here's a bunch of automated smart contract security and analysis. And you're right: it's hard to do right now, largely because of the difficult-to-trace control flow.

This proposal makes this much more tractable in a way that is in-line with modern VMs such as Wasm and the JVM. We've spoken with a bunch of folks from SecurETH who feel that adding structured control flow will substantially improve the depth and reliability of their analyses.

every bug with $1M+ loss on Ethereum has been due to ****-poor coding and failure to pay bug bounties higher than the cost of a nice dinner.

Yes, and you still need these bounties for previously undiscovered cases. However, humans are notoriously bad at doing thorough analyses, and automated tools that check that properties hold are faster, cheaper, and work every time. Why not eliminate entire classes of bug before deployment, regardless of coding skill or size of cheque book?

BAM! The practical cost of static analysis is already reduced by an order of magnitude from the theoretical statement above.

** deep sigh ** Oh sweet summer child. Were it only that simple.

EDIT expanded first section for clarity


GNSPS commented Feb 26, 2019

I can get behind @expede 's comment and say that this EIP would be beneficial to researchers working on formal verification at ConsenSys and at Diligence, specifically.
I have talked offline with many of the people involved in getting this work to fruition and, in the case it is helpful, I am now backing its usefulness publicly.


MrChico commented Feb 26, 2019

Happy to see this EIP getting traction again! I can assist in getting this formalized in KEVM a.k.a. the Jello paper.
There's a lot of talk about performance in the EIP, but no numbers. Are there any rough ideas on how much can be gained?


fulldecent commented Feb 26, 2019

Which class of bug is solved by the proposed kind of static analysis? Has one bug ever been found?

Evidence here would be much more persuasive in making arguments than citing other people that agree static analysis is valuable.


Yes, it literally is that simple.

Just replace PUSH(\d+).*\nJUMP$ to JUMPTO ($1). These codes are isomorphic. Now a vast majority of jumps in all contracts can be analyzed using the proposed techniques.
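As a rough illustration of that textual rewrite, here is a Python sketch applied to a disassembly listing; the listing format is hypothetical, and this variant captures the pushed target value rather than the push width:

    import re

    listing = "PUSH2 0x0102\nJUMP"
    # Rewrite a literal push followed immediately by JUMP into a static JUMPTO.
    print(re.sub(r"PUSH(\d+) (\S+)\nJUMP$", r"JUMPTO \2", listing))   # JUMPTO 0x0102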


expede commented Feb 26, 2019

@MrChico

Formalization

First off, thank you for your work on the KEVM!

Formalization of EIP-615 in the JP would be amazing 🙌

As an aside, you may have already seen this EM thread, but just in case: Jello Paper as Canonical EVM Spec

Performance

For performance we don't have any hard numbers as of yet. We'll need to look at actual real-world cases to give you a better idea (some funding in this direction would really help such a study, if anyone has a line to grants or contracts). As part of a larger strategy, we have lots of ideas for improving the speed of the EVM (for mainnet, this is most likely via incremental JIT optimization), most of which are not touched on here. AOT compilation is more viable for private EVM-based networks, which isn't the primary concern here, but also good to be aware of.

For EIP-615, below are a couple back-of-the-napkin calculations for optimizations off the top of my head (to at least give you a flavour). They are made easier when you can trace control flow more easily. Today we can already infer some cases (ex. direct PUSH; JUMP), but it is not at all tractable in general with dynamic jumps (simplest toy example: CALLDATALOAD; JUMP). Obviously by definition, these are all branch penalty reduction strategies, which are a fairly well understood area of compiler optimization.

Client Gas Aggregation

ie: (structural) cost dynamics

We could do a lot of this already without EIP-615 (but don't for some reason?). However, this proposal can help with anything involving loops (especially tight loops), which can be improved anywhere from 3-50% (based on ops/instruction, and for cases that can be statically found to converge, but that is in theory most/all of what we do on the EVM).

Loop Unrolling

Static unrolling (removing jumps and inlining code) can see a large improvement for common cases (40-80% from lower bookkeeping overhead). This can also be done dynamically in more complex cases, but obviously this improvement is generally smaller (roughly 20-50%).

The biggest improvement comes if the client compiles to native code, where hardware optimization kicks in (predictive branch execution, &c). More study is needed to know if this is practical at the relatively small size of most Ethereum transactions (we're not doing heavy numeric analysis or machine learning).

Parallel Execution

via loop dependency analysis

Not viable in all implementations (only parallel and concurrent languages), but offers some multiple of improvement based on how many isolated branches or CALLs there are (linear with the number of parallel branches).


BTW, I would love feedback on the above (both additions and subtractions!). The more of these we have up our sleeve, the better we can improve both the EVM (and the Ethereum-specific portions of eWasm.)


expede commented Feb 26, 2019

Yes, it literally is that simple.

Just replace PUSH(\d+).*\nJUMP$ to JUMPTO ($1). These codes are isomorphic. Now a vast majority of jumps in all contracts can be analyzed using the proposed techniques.

Yes, these are isomorphic. However, the cases where we don't do exactly PUSH ; JUMP are not generally tractable. Part of this proposal is adding static jumps, which limit jumps to exactly the case that you cite rather than to arbitrary other positions.

As a total aside for fun, it's worth noting that we're having essentially the same discussion here as folks had about structured programming in the 1970s. IIRC, this is the classic Dijkstra paper about GOTOs (AKA dynamic jumps) that set off the whole thing.

Which class of bug is solved by the proposed kind of static analysis? Has one bug ever been found?

Evidence here would be much more persuasive in making arguments than citing other people that agree static analysis is valuable.

Since we're talking about all possible paths through a program, there are many types of bug that this can catch. Tracing whether code can be jumped to from a specific point can surface subtle bugs. Off the top of my head, we can statically check for the following:

  • whether the contract completes under the gas limit (for typical convergent cases)
  • stack depth overruns
  • jump location sanity (ie: do all jumps go to actual program addresses?)
  • semantic errors
    • ie: if bytecode matches a high-level spec (even if there was a compiler error)


gcolvin commented Feb 27, 2019

@MrChico @expede The paper is too vague on performance, and Brooke's examples would help. The main point is that these optimizations can be done in at worst N log N time, whereas dynamic jumps can force the optimizer into N**2 time. The relative performance of native C, Wasm, and EVM compilers vs. EVM interpreters can be seen in this graph for a few tests. We can't get to native C, but we can do OK.


gcolvin commented Feb 27, 2019

https://github.com/gcolvin/evm-drag-race/blob/master/time-vs-gas.pdf


sorpaas commented Mar 31, 2019

  1. A later hard fork would require clients to place only valid code on the block chain. Note that despite the fork old EVM code will still need to be supported indefinitely.

Want to link EIP-1712 to here: #1712

@axic axic changed the title Subroutines and Static Jumps for the EVM EIP-615: Subroutines and Static Jumps for the EVM Jul 4, 2019
@gcolvin gcolvin closed this as completed Jul 16, 2019

axic commented Jul 16, 2019

I think Greg closed this because a lot of the discussion moved to https://ethereum-magicians.org/t/eip-615-subroutines-and-static-jumps-for-the-evm/2728.
