Persistent bytecode v2 #1577

Closed
wants to merge 12 commits

Conversation

@dpgeorge (Member) commented Nov 2, 2015

This is an updated version of #1527 and provides support for portable/persistent bytecode. It has the following features:

  • Config variables MICROPY_PORTABLE_CODE and MICROPY_PORTABLE_CODE_SAVE.
  • Without any of the configs enabled (the default) the code/ROM size and RAM usage of bytecode is almost the same as before. There has been some code refactoring and shuffling, but conceptually everything is the same.
  • With MICROPY_PORTABLE_CODE enabled the bytecode changes: qstrs are stored within the bytecode but with a fixed 2-byte encoding (should be plenty for all use cases) and pointers are stored in a constant table outside the bytecode. There is only a small increase in RAM usage with this option enabled (maybe 1-3% increase).
  • MICROPY_PORTABLE_CODE costs about 1060 bytes ROM for stmhal.
  • With MICROPY_PORTABLE_CODE_SAVE enabled, bytecode can be saved to a .mpc file. This is portable code that can be loaded and linked (at compile or run time) into any uPy instance.
  • With MICROPY_PORTABLE_CODE enabled, one can load and import .mpc files and execute them. No lexer/parser/compiler is needed. But the bytecode must be loaded into RAM because the qstrs need to be linked (ie point to the global qstr table of the uPy instance).
  • There is a script called tools/mpcdump.py which can read a .mpc file and convert it into C source, which can then be compiled with your program to create a properly frozen Python script. These frozen scripts can be executed directly from ROM and include all constant objects in ROM as well (eg bignum, float, tuple). RAM is only needed when executing (eg to populate the globals dict of the module).

So we can now do the following:

  • Have proper frozen module support that requires no compiling and no RAM to store the compiled bytecode. This means we could potentially start to rewrite parts of uPy in Python itself, if we wanted.
  • Provide an option to disable the parser/lexer/compiler to save a lot (roughly 20k) of code size and just embed frozen scripts and execute those (no REPL though!). This would give complete Python applications in under 64k (Thumb2 arch).
  • Compile scripts offline, upload the .mpc files to the filesystem of pyboard/wipy/etc and execute them (import them) directly. This would save time during bootup of the board (no compiling needed) and reduce RAM pressure of compilation stage. Would be useful for drivers (eg onewire.mpc).

Things that are missing:

  • The ability to add persistent bytecode to a uPy executable after it has been linked, and execute that bytecode from ROM. Use case would be microbit: we have a given firmware and want to append the user's compiled code to the firmware so it can be executed without using much RAM.
  • Ability to make native/viper code persistent. This could be done, and would be nice to do, and would share a lot of code with dynamic loadable native C modules.

The one issue with .mpc files is that they are not 100% portable: bytecode differs if MICROPY_OPT_CACHE_MAP_LOOKUP_IN_BYTECODE is enabled or not. This is annoying because that feature is enabled on unix port to get a big speed boost, but not enabled on any other port (because it increases RAM usage of bytecode, and makes the VM non-deterministic in terms of execution speed). It means you need a different build of unix binary to compile code for pyboard/wipy/etc.

Contains just argument names at the moment but makes it easy to add
arbitrary constants.

Main changes when MICROPY_PORTABLE_CODE is enabled are:

- qstrs are encoded as 2-byte fixed width in the bytecode
- all pointers are removed from bytecode and put in const_table (this
  includes const objects and raw code pointers)
With MICROPY_PORTABLE_CODE, bytecode can be read from a .mpc file and
executed.  With MICROPY_PORTABLE_CODE_SAVE enabled as well, bytecode can
be saved to a .mpc file.
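The fixed 2-byte qstr encoding mentioned in the commit messages above can be sketched in Python (the function names here are illustrative only, not from the actual tooling):

```python
import struct

def encode_qstr(qstr_index):
    # Fixed-width 2-byte little-endian encoding, matching the
    # "MP_QSTR_x & 0xff, MP_QSTR_x >> 8" pattern in the generated C.
    if not 0 <= qstr_index <= 0xffff:
        raise ValueError('qstr index does not fit in 2 bytes')
    return struct.pack('<H', qstr_index)

def decode_qstr(buf, offset):
    # Returns (qstr_index, new_offset).
    return struct.unpack_from('<H', buf, offset)[0], offset + 2
```

A 16-bit index allows 65536 distinct qstrs, which is the "plenty for all use cases" headroom mentioned in the PR description.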
@dpgeorge (Member, Author) commented Nov 2, 2015

As an example of using tools/mpcdump.py to create a frozen script, here is the input:

x = 1
def y(z):
    print('abc', x + z)
y(2)

and here is the output .c file:

#include "py/emitglue.h"

// Q(<module>)
// Q(ab.py)
// Q(x)
// Q(y)
// Q(y)
// Q(y)
// Q(ab.py)
// Q(print)
// Q(abc)
// Q(x)
// Q(z)

// frozen bytecode for file ab.py, scope y
STATIC const byte bytecode_data_ab_y[31] = {
    0x05, 0x00, 0x00, 0x01, 0x00, 0x00, 0x08,
    MP_QSTR_y & 0xff, MP_QSTR_y >> 8,
    MP_QSTR_ab_dot_py & 0xff, MP_QSTR_ab_dot_py >> 8,
    0x41, 0x00, 0x00, 0xff,
    0x1d, MP_QSTR_print & 0xff, MP_QSTR_print >> 8,
    0x16, MP_QSTR_abc & 0xff, MP_QSTR_abc >> 8,
    0x1d, MP_QSTR_x & 0xff, MP_QSTR_x >> 8,
    0xb0, 
    0xdb, 
    0x64, 0x02, 
    0x32, 
    0x11, 
    0x5b, 
};
STATIC const mp_uint_t const_table_data_ab_y[1] = {
    (mp_uint_t)MP_OBJ_NEW_QSTR(MP_QSTR_z),
};
STATIC const mp_raw_code_t raw_code_ab_y = {
    .kind = MP_CODE_BYTECODE,
    .scope_flags = 0x00,
    .n_pos_args = 1,
    .data.u_byte = {
        .bytecode = bytecode_data_ab_y,
        .const_table = const_table_data_ab_y,
        #if MICROPY_PORTABLE_CODE_SAVE
        .bc_len = 31,
        .n_obj = 0,
        .n_raw_code = 0,
        #endif
    },
};

// frozen bytecode for file ab.py, scope <module>
STATIC const byte bytecode_data_ab__lt_module_gt_[34] = {
    0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x09,
    MP_QSTR__lt_module_gt_ & 0xff, MP_QSTR__lt_module_gt_ >> 8,
    MP_QSTR_ab_dot_py & 0xff, MP_QSTR_ab_dot_py >> 8,
    0x25, 0x45, 0x00, 0x00, 0xff,
    0x81, 
    0x24, MP_QSTR_x & 0xff, MP_QSTR_x >> 8,
    0x60, 0x00, 
    0x24, MP_QSTR_y & 0xff, MP_QSTR_y >> 8,
    0x1c, MP_QSTR_y & 0xff, MP_QSTR_y >> 8,
    0x82, 
    0x64, 0x01, 
    0x32, 
    0x11, 
    0x5b, 
};
STATIC const mp_uint_t const_table_data_ab__lt_module_gt_[1] = {
    (mp_uint_t)&raw_code_ab_y,
};
const mp_raw_code_t raw_code_ab__lt_module_gt_ = {
    .kind = MP_CODE_BYTECODE,
    .scope_flags = 0x00,
    .n_pos_args = 0,
    .data.u_byte = {
        .bytecode = bytecode_data_ab__lt_module_gt_,
        .const_table = const_table_data_ab__lt_module_gt_,
        #if MICROPY_PORTABLE_CODE_SAVE
        .bc_len = 34,
        .n_obj = 0,
        .n_raw_code = 1,
        #endif
    },
};

You would use this by copying the qstrs to qstrdefsport.h and including the .c file in your build. Then there is a little bit of code needed in builtinimport.c to find and execute this (that code is not in any of the commits; it works but is messy).

@dhylands (Contributor) commented Nov 2, 2015

The one issue with .mpc files is that they are not 100% portable: bytecode differs if MICROPY_OPT_CACHE_MAP_LOOKUP_IN_BYTECODE is enabled or not.

Does it make sense to have a flag or something stored in the bytecode so that this can be detected? At least then we could raise an error rather than have a program mysteriously fail due to incompatible bytecode.
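The kind of check being proposed here might look like the following (the flag name and header layout are hypothetical, not part of this PR):

```python
# Hypothetical bytecode-feature flag: set when the file was produced by a
# build with MICROPY_OPT_CACHE_MAP_LOOKUP_IN_BYTECODE enabled.
MPC_FLAG_CACHE_MAP_LOOKUP = 0x01

def check_compat(header_flags, vm_flags):
    # Refuse to load bytecode whose feature flags differ from the
    # flags the running VM was compiled with.
    if header_flags != vm_flags:
        raise ValueError('incompatible bytecode: file flags 0x%02x, VM flags 0x%02x'
                         % (header_flags, vm_flags))
```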

@dpgeorge (Member, Author) commented Nov 2, 2015

Does it make sense to have a flag or something stored in the bytecode so that this can be detected?

Yes, definitely. The hard part is having separate binaries to generate the different formats. We could make unix support both (in the compile stage) without much hacking.

@dhylands (Contributor) commented Nov 2, 2015

And to clarify, if you use the generated C code version then the bytecode has the qstr's fully resolved and the bytecode will run directly from flash?

@dpgeorge (Member, Author) commented Nov 2, 2015

And to clarify, if you use the generated C code version then the bytecode has the qstr's fully resolved and the bytecode will run directly from flash?

Yes! It's completely frozen, including the constant table and all the constants.

You'll need to add some qstrs to qstrdefsport (mpcdump.py will generate them for you). I think ideally we would want a separate qstr file for this (eg qstrsdefsfrozen.h) which is appended to the end of existing qstr list so that changing qstrsdefsfrozen.h does not require a complete recompile of all source. This is possible to do, and would mean you could change your frozen scripts and have a very fast compile time.

@dhylands (Contributor) commented Nov 2, 2015

You'll need to add some qstrs to qstrdefsport (mpcdump.py will generate them for you). I think ideally we would want a separate qstr file for this (eg qstrsdefsfrozen.h) which is appended to the end of existing qstr list so that changing qstrsdefsfrozen.h does not require a complete recompile of all source. This is possible to do, and would mean you could change your frozen scripts and have a very fast compile time.

I think that all we'd need to make that happen is to have a generated header file which contains the number of the highest qstr that's currently in the firmware.
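The scheme suggested here could be sketched as: the build generates a header recording how many qstrs are already in the firmware, and frozen-code qstrs are numbered from there (the function and names below are hypothetical):

```python
def assign_frozen_qstrs(firmware_qstr_count, frozen_qstrs):
    # Frozen qstrs are appended after the firmware's existing pool, so
    # changing frozen scripts never renumbers the built-in qstrs and
    # only the frozen portion needs recompiling.
    return {q: firmware_qstr_count + i for i, q in enumerate(frozen_qstrs)}
```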

@danicampora (Member):

Wow!! Simply amazing Damien :-)

@dbc (Contributor) commented Nov 4, 2015

This is really great functionality. I have plans for this...

MICROPY_OPT_CACHE_MAP_LOOKUP_IN_BYTECODE is enabled or not. This is annoying because that feature is enabled on unix port to get a big speed boost, but not enabled on any other port (because it increases RAM usage of bytecode, and makes the VM non-deterministic in terms of execution speed). It means you need a different build of unix binary to compile code for pyboard/wipy/etc.

So I don't see needing to have a different Unix binary as any kind of inconvenience. Am I missing something? As to the byte code compatibility, in the short term, why not belly-flop on this:

but not enabled on any other port

If MICROPY_OPT_CACHE_MAP_LOOKUP_IN_BYTECODE is enabled, don't enable persistent byte code generation. That way a user can't generate something that won't work. The right long term answer is a version byte or a byte of compatibility flags or such, of course.

@dpgeorge (Member, Author) commented Nov 4, 2015

So I don't see needing to have a different Unix binary as any kind of inconvenience. Am I missing something?

It's just that you need to have multiple executables lying around and know which to use for what. But when support for persistent native/viper code comes, then you'll definitely need separate unix executables to "cross compile" for different archs.

The right long term answer is a version byte or a byte of compatibility flags or such, of course.

Yes, it already has a version number, and it should also have some flags indicating architecture and/or bytecode type.

@dpgeorge (Member, Author) commented Nov 4, 2015

One thing I'd like feedback on is what extension to use for the persistent bytecode files. I chose ".mpc" for "MicroPython compiled". The other choice would be to use ".pyc" with a different header (the first few bytes of the file) from CPython's to make sure they don't get confused. I much prefer ".mpc".

And then the plan would be to use ".mpc" files to contain not only persistent bytecode, but also persistent native/viper code, inline assembler, as well as dynamic loadable C modules #583. This might seem like trying to stuff a lot into one file, but it's actually quite natural, and means the user doesn't need to worry/know about all the different kinds of dynamically loadable content.

The reason it's quite natural to put everything together is: currently persistent bytecode is just bytecode, so the .py file that you compile must not contain any @micropython.native decorators (or viper or asm_thumb). But there's no reason we couldn't relax this constraint in the future and allow such decorators. Then the .mpc file would contain a mix of bytecode and native functions. Making persistent native functions requires exactly the same kind of linking support as dynamic loadable C modules. In fact, when loading a .mpc file that has native code in it, the runtime doesn't care how that native code was generated. It may have come from a .py file with @micropython.native, or may have been compiled from a .c file, or .cc, etc. The way the content is loaded and linked is the same.
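The loading model described above could be sketched as a dispatch on a per-item kind tag; the kind values and loader callbacks below are purely illustrative:

```python
# Hypothetical kind tags for items inside a mixed persistent-code file.
KIND_BYTECODE = 0
KIND_NATIVE = 1

def load_raw_code(kind, payload, load_bytecode, load_native):
    # The loader doesn't care how native code was produced (viper,
    # @micropython.native, or compiled C): it is linked the same way.
    if kind == KIND_BYTECODE:
        return load_bytecode(payload)
    if kind == KIND_NATIVE:
        return load_native(payload)
    raise ValueError('unknown raw code kind: %d' % kind)
```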

So it makes sense to me to have provision for .mpc files to contain any loadable content.

Finally, at the moment I have added config variables called MICROPY_PORTABLE_CODE and MICROPY_PORTABLE_CODE_SAVE. Probably they should be MICROPY_PERSIST_BYTECODE and MICROPY_PERSIST_BYTECODE_SAVE, or something like that.

@danicampora (Member):

Wow, everything sounds super awesome to me :-)

@danicampora (Member):

I think that .mpc and putting everything together is a good idea.

@dpgeorge (Member, Author) commented Nov 4, 2015

One thing I realised: for a board like WiPy which doesn't have the room for the native/viper/inline-asm compilers, it can still support loading of .mpc files with native code without adding too much to the firmware. So this would allow writing and compiling WiPy code offline using the native emitter.

@danicampora (Member):

So this would allow to write and compile WiPy code offline which uses the native emitter.

That's really nice!

@dbc (Contributor) commented Nov 4, 2015

One thing I'd like feedback on is what extension to use for the persistent bytecode files. I chose ".mpc" for "MicroPython compiled". Other choice would be to use ".pyc" and have a different header (the first few bytes of the file) to CPython to make sure they don't get confused. I much prefer ".mpc".

.mpc seems fine. Using .pyc seems like it could lead to confusion on the part of both humans and tools.

Persistent bytecode enables a number of interesting opportunities. One is the ability to squeeze into RAM-constrained parts. I'm very interested in the TM4C123G as used in: https://github.com/micropython/micropython/wiki/Board-Tiva-TM4C123G-Launchpad but 32K of RAM is marginal right now. This could make it practical to develop/test interactively on a TM4C129x and deploy persistent code on a TM4C123G. (Not that anyone is working on TM4C ports at the moment.)

@dbc (Contributor) commented Nov 4, 2015

A thought on building persistent byte code...

The one issue with .mpc files is that they are not 100% portable: bytecode differs if MICROPY_OPT_CACHE_MAP_LOOKUP_IN_BYTECODE is enabled or not. This is annoying because that feature is enabled on unix port to get a big speed boost, but not enabled on any other port (because it increases RAM usage of bytecode, and makes the VM non-deterministic in terms of execution speed). It means you need a different build of unix binary to compile code for pyboard/wipy/etc.

Seems to me an expedient solution is to leave the Unix port alone, built optimally for Unix. Then for cross-compile mode, create an executable named 'persistor' or something more clever. The persistor is a driver script with a signature like:

persistor --arch=<arch> foo.py bar.py

It calls underlying binaries which are built for cross-compilation and named persistor-<arch>. The Unix makefile can have (a) target(s) to build persistor versions of Unix micropython as needed.

Benefits of this approach:

  • No confusion about how Unix micropython was built.
  • No confusion about which Unix micropython to use for cross-compilation
  • Portable makefiles: the recipe always says 'persistor --arch=$(ARCH) foo.py' and so long as the source is portable it gets built correctly for the target.
  • Building the persistor tools is a straightforward makefile modification for the Unix build.

Downside:

  • Wow, we sure have a lot of persistor-foo binaries to stuff into some rabbit hole.

Edit:
persistor is an awful name. mpcross and mpcross-<arch> have a lot more going for them.
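The driver script being proposed could be a thin dispatcher along these lines (entirely illustrative; none of these binaries exist in the tree):

```python
import subprocess
import sys

def parse_args(argv):
    # Split "mpcross --arch=<arch> foo.py bar.py" into (arch, sources).
    if not argv or not argv[0].startswith('--arch='):
        raise ValueError('usage: mpcross --arch=<arch> file.py ...')
    return argv[0][len('--arch='):], argv[1:]

def main(argv):
    arch, sources = parse_args(argv)
    # Delegate to the arch-specific cross build, e.g. mpcross-stm32f4.
    return subprocess.call(['mpcross-%s' % arch] + sources)

if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))
```

The makefile recipe then stays portable: it always invokes the wrapper with `--arch=$(ARCH)`, and only the wrapper knows which underlying binary to run.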

@pfalcon (Contributor) commented Nov 5, 2015

The one issue with .mpc files is that they are not 100% portable: bytecode differs if MICROPY_OPT_CACHE_MAP_LOOKUP_IN_BYTECODE is enabled or not.

Yes, so there are two polar choices for bytecode:

  • Make it absolutely portable
  • Make it absolutely efficient

What's being implemented is somewhere in between (with a bias towards portable bytecode, of course). But it seems that both of these ultimate choices are hard to achieve, and "absolutely portable" is the harder one, in the sense that it requires making the hard choice of giving up hard-earned optimizations. Besides cached lookups, another "issue" is constant folding/replacement. The latter case requires caring about "ABI", at least of the standard modules, which is why I'd like to find the right solution for #1550 .

But of course, that largely depends on usecases for portable bytecode. Do we really have one for portability across such large port groups as unix vs baremetal? A usecase I had in mind is upip. But well, we're not going to have one executable which can run on bare metal and unix, and when building separate executables, we can compile upip for it as well.

@dbc (Contributor) commented Nov 5, 2015

But of course, that largely depends on usecases for portable bytecode. Do we really have one for portability across such large port groups as unix vs baremetal?

Exactly. If there is a use case for byte-code portability across bare metal, when the byte code is being frozen at link time, I fail to see it.

What I think should be is:

  • Add an "arch" byte to the signature, so that linking in the wrong frozen byte code can at least raise at run time, or maybe even be detected at link time.
  • Build "cross" versions of the Unix Micro Python per architecture and give them unique names, something like mpcross-stm32f4 or upycross-stm32f4 or the like. The standard build for Unix should be optimized for Unix. The cross-compile versions only have two purposes: 1) building for linkable byte code, 2) running regression tests of the cross-built module while still on the Unix side.

Source code portability from Unix to cross-compiled modules is important. Byte code portability is not interesting.

@danicampora (Member):

I agree. I don't think that byte code portability is really important.


@pfalcon (Contributor) commented Nov 5, 2015

One thing I'd like feedback on is what extension to use for the persistent bytecode files. I chose ".mpc" for "MicroPython compiled". Other choice would be to use ".pyc" and have a different header (the first few bytes of the file) to CPython to make sure they don't get confused. I much prefer ".mpc".

I find ".mpc" to be non-intuitive. Using ".pyc" would be plain confusing. If staying within 3 chars, ".mpy" would still be better.

@drohm commented Nov 5, 2015

+1 for .mpy.

@pfalcon (Contributor) commented Nov 5, 2015

And then the plan would be to use ".mpc" files to contain not only persistent bytecode, but also persistent native/viper code, inline assembler, as well as dynamic loadable C modules #583.

That sounds like a pretty ambitious plan (thinking how to support all that together at the current stage). I'd think it may take a lot of time to get it right (and require reworking previously done stuff). Understanding that it will be in beta for a long time and there will be breaking changes may help to set expectations right.

def read_mpc(filename):
    with open(filename, 'rb') as f:
        header = f.read(6)
        if header[:3] != b'MPC':
Contributor commented on the diff:

I already commented that this seems like a long signature, and it would be nice to get the "signature" itself down to 4-6 bits and use the rest of the space for various flags and version numbers. Can this get a response?

Member Author replied:

The reason for "MPC" was so that you can inspect the file by eye (eg editor, xxd) and guess what it is. If we don't want that feature then I don't think we need any signature. No signature means no checking, which simplifies code :) There would anyway be some kind of checking of the version and flags, so that provides a small amount of "safety".

Contributor replied:

Well, my thinking is along the usual lines of "there can be hundreds of modules, and 3x100 = 300 bytes which can be used for something else". Some basic signature is still nice to have, and one nibble is just enough for hex editor. Or if you really want to support search too, then 1 byte.

Member Author replied:

Bytes are important but I don't think as important to save for files as they are for RAM. Anyway, I'd be happy with one byte, which would have to be "M" :)

Member Author replied:

We could use one of the bits in the first byte as a flag and still have ascii: there could be 'M' and 'm' for the signature :) 'M' could mean that the mpy file contains pure bytecode, and 'm' could mean that it has at least one native function. If 'm' then you'd need a few extra bytes in the header to tell the architecture and possibly target board (eg pyboard, wipy). If 'M' then the header doesn't need these extra bytes.
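That header scheme might parse out as follows (a hypothetical sketch of the 'M'/'m' idea, not a final format):

```python
def parse_header(data):
    # 'M': pure bytecode, fully portable, no arch byte needed.
    # 'm': contains at least one native function; assume one extra
    #      byte identifying the target architecture.
    sig = data[0:1]
    if sig == b'M':
        return {'native': False, 'arch': None, 'body_offset': 1}
    if sig == b'm':
        return {'native': True, 'arch': data[1], 'body_offset': 2}
    raise ValueError('not a recognised mpy file')
```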

@dpgeorge (Member, Author) commented Nov 6, 2015

Then for cross-compile mode, create an executable named 'persistor' or something more clever. The persistor is a driver script with a signature like:

Yes, not a bad idea (micropython-cross or something like that). An alternative to a script is to build a single binary that can target all possible archs. That's almost possible, since the backend emitter is already configurable (eg @micropython.bytecode, @micropython.native); it would just need to be configurable with a command-line option.

@dpgeorge (Member, Author) commented Nov 7, 2015

Yes, so there're 2 polar choices for bytecode:

Make it absolutely portable
Make it absolutely efficient

What's being implemented is somewhere inbetween (with a bias towards portable bytecode of course).

Yes, it tries to retain the efficiency of the VM and the RAM usage for when persistent bytecode is not used (but the runtime still supports it), while keeping the bytecode semi-portable and semi-efficient to load and link.

But it seems, that both of these ultimate choices are hard to achieve, and "absolutely portable" is the harder one, in the sense that it requires to make hard choices of giving up hard-earned optimizations.

The only real way to get absolute portability is to re-encode the bytecode for the given target VM. But that's slow and requires lots of code to do it.

Besides cached lookups, another "issue" is constant folding/replacement. The latter case requires caring about "ABI", at least of the standard modules, that's why I'd like to find right solution for #1550 .

Very good point. ABI here means the Python ABI. Well, that's a strong case for using the same constants (errno etc) across all arch/ports.

But of course, that largely depends on usecases for portable bytecode. Do we really have one for portability across such large port groups as unix vs baremetal?

Probably not. Probably there are other things to optimise for with persistent bytecode. Remember that ultimate portability already exists: that's what the source (.py) files are for!

A usecase I had in mind is upip. But well, we're not going to have one executable which can run on bare metal and unix, and when building separate executables, we can compile upip for it as well.

Yes, upip is a good use case. I don't think we give up much by not being able to use the same upip.mpy across unix and baremetal. We can anyway achieve absolute efficiency here by using the mpcdump.py tool to freeze the upip bytecode for the given executable/firmware and link it into the executable.

@dpgeorge (Member, Author) commented Nov 7, 2015

Add an "arch" byte to the signature, so that linking in the wrong frozen byte code can at least raise at run time, or maybe even be detected at link time.

Yes, there should be "arch" as well as "bytecode features" flags. For .mpy files with pure bytecode, no arch is needed. Only if they contain native code is an "arch" flag needed.

@dpgeorge (Member, Author) commented Nov 7, 2015

I'm happy with using ".mpy" as the generic extension for loadable content.

That sounds like a pretty ambitious plan (thinking how to support all that together at the current stage). I'd think it may take a lot of time to get it right (and require reworking previously done stuff). Understanding that it will be in beta for a long time and there will be breaking changes may help to set expectations right.

Agree it's ambitious, but I think we should try in this case! It will be a good outcome if we get it right. The whole project is anyway in a state of flux :) We need to have the liberty to make changes, else we can't improve.

@dpgeorge (Member, Author):

Ok, the majority of this PR is merged in 6 commits, ending in 432e827 .

Config variables are MICROPY_PERSISTENT_CODE_{LOAD,SAVE} and file extension is .mpy. Code can now import .mpy files, but there is no way to create them, just yet. Also the mpydump.py tool is not yet merged. Will open another PR for these parts.

@dpgeorge dpgeorge closed this Nov 13, 2015
@pfalcon (Contributor) commented Nov 13, 2015

Thanks!

@dbc (Contributor) commented Nov 13, 2015

Excellent! Can't wait to start putting some mileage on this.

@dpgeorge (Member, Author):

@dbc you'll need PR #1619 to actually use persistent bytecode.

@dpgeorge dpgeorge deleted the persistent-bytecode-v2 branch July 5, 2022 05:39