
Moving forward towards execution with v8 modules. #38

Open
2 of 7 tasks
afinch7 opened this issue Jan 16, 2019 · 5 comments

Comments

@afinch7
Contributor

afinch7 commented Jan 16, 2019

I'm just finishing up my PR for a modular module resolver system (#20). I've gotten familiar with the code base and thought about what features I need in fly and what features might be useful to others. I've decided that my first big goal is to move code execution over to the module system in v8. This should allow for more dynamic dependency resolution and analysis in a way that isn't as heavy (and insecure) as running the typescript compiler/language services in each runtime. Moving import control out of the runtime will make import metadata like the referrer a more trustworthy source of information, so some form of security can be established for things like secrets from imports. In general I want to make dynamic module resolution and loading more production friendly.

I've come up with an approach designed around building flexible and extensible systems out of simpler goals, so that I'm adding value to the project with each step along the way. Here is my list of smaller goals:

  • Implement a modular and flexible resolver system.
  • Implement C++ bindings for running, compiling, and receiving import callbacks for v8 Modules.
  • Design a framework for service runtimes that run javascript/native code as services, so things like the typescript compiler can run as a shared service.
  • Module resolvers/loaders will need to talk to these service runtimes, so design and implement a system for asynchronous communication between runtimes (not from untrusted code/javascript directly, for security).
  • Create a service runtime for the typescript compiler, and the respective tools/js libs for creating others.
  • Implement a SourceLoader for typescript code that uses these new systems.
  • Move execution from v8 Script to Module.

Note that the first two are already done. I started working on this while working on #20, but as I got a better feel for the scope of this goal I saw that it was far too much for one PR.

That about sums up my goals for the next month or so. I welcome any feedback/ideas since most of this is in no way final.

@michaeldwan
Contributor

Awesome work again on #20, @afinch7, let’s keep going :)

Right now we're focused on gradually migrating production to the distributed-fly binary and replacing our node cli with the new dev tools. Until the new runtime is everywhere we need to maintain compatibility with the node runtime, node cli, and the existing app bundles it creates and deploys.

Eventually we'll run the same binary in dev and prod, so during the migration dev tools are a great place to put energy that'll directly inform production.

Along with your list, here are some things our dev tools need to replace our node cli:

  • Finishing off the work on native ES modules so we can remove AMD and streamline how code is loaded.
  • recompile and reload the dev server when source changes
  • support typescript and module loading in the test binary + bundle mocha/chai in the dev-tools
  • generate optimized code for production
  • load modules from node_modules (simplified cjs logic)
  • generate type definitions for the v8env api
  • load modules over http{s}

If that sounds interesting to you we'd love your help!

@afinch7
Contributor Author

afinch7 commented Jan 23, 2019

I spent the past week working on some proof of concept code for runtime to runtime communication. I have something that "works", but I had to implement some less than optimal locks to hold it all together. I'm pretty sure a better solution exists, so I would like to get some other input and ideas on that one.

Right now we're focused on gradually migrating production to the distributed-fly binary and replacing our node cli with the new dev tools. Until the new runtime is everywhere we need to maintain compatibility with the node runtime, node cli, and the existing app bundles it creates and deploys.

Eventually we'll run the same binary in dev and prod, so during the migration dev tools are a great place to put energy that'll directly inform production.

Unless the optimized/bundled code includes require or import statements, it should run exactly the same in either environment. I don't think you expect to support running code built with the new tools in the old environment. Here is what I think would be reasonable expectations for this switch, in as concrete terms as possible:

  1. Builds of apps made using the old dev tools/environment should work in the new environment via js lib shims or other means if needed.
  2. New dev tools should be considered experimental right now. Features may come and go, things might not work in production/development or at all, and if you want to design and deploy today you should use the node tools.
  3. Code designed in this new environment with its new dev tools should not be expected to be compatible with the old tools/environment.
  4. Some of the new features of the new dev tools may not work with code designed with the old tools (it may work, but it will not be officially supported).
  5. Transitioning from the old tools to the new ones should be as user-friendly and simple as possible once they are in a more finished state. This process will also likely need some specific documentation.

If we can agree that those are reasonable expectations of compatibility I'm all in. Let me know what you think and if there is anything I missed or need to be more specific on.

Looking at the features/requirements you have listed, things like node_modules and http/https loading should be as simple as implementing resolvers/loaders. I would be willing to write some documentation on this and walk someone through the process as a test of both the system and the documentation. If nobody takes that up I might just do it myself at some point, but I would like others to have the information they need to make these additions. The system was designed to make the process pretty easy and straightforward.
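To give a rough idea, here is a minimal TypeScript sketch of what an http(s) loader could look like under a resolver/loader system like the one from #20; the SourceLoader and LoadedSource names are my own placeholders, not existing fly interfaces:

```typescript
// Hypothetical sketch only: SourceLoader/LoadedSource are placeholder names,
// not part of the existing fly resolver code.
interface LoadedSource {
  url: string;       // final URL after redirects; becomes the module's referrer
  source: string;    // raw module source text
  mediaType: string; // e.g. "application/javascript" or "text/typescript"
}

interface SourceLoader {
  canLoad(specifier: string): boolean;
  load(specifier: string): Promise<LoadedSource>;
}

class HttpSourceLoader implements SourceLoader {
  canLoad(specifier: string): boolean {
    return specifier.startsWith("http://") || specifier.startsWith("https://");
  }

  async load(specifier: string): Promise<LoadedSource> {
    const res = await fetch(specifier);
    if (!res.ok) {
      throw new Error(`failed to load ${specifier}: ${res.status}`);
    }
    return {
      url: res.url,
      source: await res.text(),
      mediaType: res.headers.get("content-type") ?? "application/javascript",
    };
  }
}
```

A node_modules loader would look much the same, with the simplified cjs logic living inside load().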

  • Finishing off the work on native ES modules so we can remove AMD and streamline how code is loaded.

I'm not sure how you interpreted v8 Modules, but I kinda figured this would be a given. I should be specific about what standard I'm talking about: I want to target the native module system in v8, which I think is up to date with the ECMAScript 2018 module standard. There is currently a more dynamic import standard proposal already supported by Chrome, but it already looks too much like eval, and we all know how that turned out.

  • generate optimized code for production

I have some ideas on this one. I'm thinking something like the typescript compiler runtime service, but with rollup or webpack instead. Just load it up as a service runtime and send a request to bundle with params, much like a dns or http request gets passed and processed. I would also like to be able to define some boundaries for bundle inclusions, sort of like the "include" and "exclude" options for a tsconfig. Later in the future I would like to have a "project loader" to load an entire folder or git repository in as a single module via some just-in-time bundling process, with caching of course. It would make developing groups of services that are dependent on others much easier. Think docker-compose for fly apps.
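To make that concrete, here is a rough sketch of what a bundle request to such a service runtime might look like; all of these names and fields are made up for illustration, not an existing fly API:

```typescript
// Hypothetical request/response shapes for a bundler service runtime.
// The include/exclude boundaries mirror tsconfig-style globs.
interface BundleRequest {
  entrypoint: string;   // root module specifier to bundle from
  include?: string[];   // patterns that may be inlined into the bundle
  exclude?: string[];   // patterns that must stay external
  minify?: boolean;     // produce production-optimized output
}

interface BundleResponse {
  code: string;         // bundled source
  sourceMap?: string;   // optional source map for dev tooling
  externals: string[];  // specifiers left unbundled, resolved at runtime
}

// The requester would send this much like a dns or http request gets passed
// and processed, then cache the response keyed by entrypoint + options.
declare function requestBundle(req: BundleRequest): Promise<BundleResponse>;
```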

  • support typescript and module loading in the test binary + bundle mocha/chai in the dev-tools
  • generate type definitions for the v8env api

I haven't looked at this much, but I know I want to support typescript as it provides machine readable code documentation.
At some point it might make sense to come up with a standard to bundle/link ES modules with their tests. I expect this to be a very useful feature in larger projects, and important to ensuring compatibility of old apps with new code. This would also play a big role in some of my future goals described below.

For the others I don't see anything that looks too difficult, though I would like a little more detail on what you expect for some of them.

More on long term goals:
It may not be clear, but from a design and workflow perspective I love typescript. I don't want to just support typescript, I want the systems to use and abuse typescript. Typescript can describe detailed type requirements, expectations, and their relations in a way that can be used to discover clear issues with code before running any tests or trying to run the code. I've seen many other tools designed to do the same thing, but none that are really as universally compatible. Tools already exist to convert network protocol definitions like flatbuffers, graphql, json schema, or grpc into typescript code, and tools to do the opposite with typescript code exist too. That's great, but I still think we can do better.

That's where dynamic module resolution and just-in-time typescript compilation come into play. Why should I as a developer have to worry about causing problems with other code? We have the tools to automatically and quickly find problems so developers can iterate faster, and typescript already does this with the information it has. The problem is that most typescript environments don't have the context to know what dependents would be affected by my changes, nor a good way to present this to me. Fly is in a position where it could easily bridge that gap. The key part is creating the link between code and its dependents so that the impact can be determined and accounted for in as little time as possible. I see plenty of other little chores that can be automated, but this one stands out as the one that would really sell people.

I basically see fly as an opportunity to modernize web development with that same great typescript syntax I like so much. Knowing my longer term goals isn't really important for working together on something, but I think my ideas might go a long way in setting fly apart from the crowd. There are things about fly currently that make it appealing, but I would be lying if I said it was entirely unique. Projects like serverless already offer a lot of the same functionality with much greater industry support, but there is clearly a reason I'm working on this and not serverless.

TL;DR: I have some ideas for how to accomplish some of the goals/requirements you set forth, and I want to get a little more specific about what compatibility expectations you have for moving from the old tools/env to the new ones. I also give some more detail on what my longer term goals are and why typescript support is important.

That's about all for now. I'm looking forward to getting these features in place, and I'm hoping those expectations help get everyone on the same page. I might have another pull request at WIP stage later this week.

@michaeldwan
Contributor

Hey @afinch7, sorry for the slow reply. I think we're on the same page with backwards compatibility, but don't let that slow you down. We're making progress on our end, so hopefully compatibility solves itself before too long.

I'm interested in your runtime services idea. We've been thinking about how we can throttle resources for things like image processing, I wonder if there's overlap with your idea. It looks like you have a solid grasp on your next steps. Let us know if you have questions or want to talk through anything. I'm excited to see what you're working on!

@afinch7
Contributor Author

afinch7 commented Feb 6, 2019

Good to know we are on the same page @michaeldwan. I'm going to see if I can get what I have right now ready for a PR soon. I got a little lost in some other projects. I now have a good idea of the scope of integrating custom resolver systems into the editor. Sadly it's not looking too great, but creating an abstracted and configurable system for ts compilation/loading will be a big help. Just more reason to keep heading down the path of ts compilation in a service runtime.

More refinements to the service runtime design

I want the design for the service runtime system to fit as many uses as possible, and given the way it supports the other systems it will be a pretty key component. This means I need to put some time and thought into it before I decide on an exact design.

I decided that it might be helpful to draw up a diagram to illustrate how I want it to work, and how it all fits into the existing systems.
[diagram of the proposed service runtime system and how it fits into the existing systems]

The only part of this system that I'm still trying to figure out is the communication protocol.
My idea right now is a sort of 20-questions-style exchange (a rough sketch of the messages follows the list):

  1. Initial request is sent to the service runtime.
  2. If the initial request is valid/accepted, the service runtime gets a channel associated with this request and can request additional information from the client if needed to complete the request.
  3. Any failure at this point will cause a skip to step 6.
  4. The service then makes requests for any additional needed information, and the requester can respond with a Rust Result or equivalent, allowing the entire request to fail if the information cannot be supplied.
  5. Once the service has everything figured out, it completes the request with a Rust Result or equivalent, allowing for failure to also occur here.
  6. Both sides can then remove the channel and references from memory, and if the request was a success the requester can cache the results. If the request failed the requester can then attempt a retry, and if the service has cached the question responses from the last request it can be faster than a complete fresh start.
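Here is a minimal sketch of the messages such a channel might carry; the names and shapes are assumptions for illustration, not an existing protocol:

```typescript
// Hypothetical message types for the request/question/answer exchange above.
type Result<T, E = string> =
  | { ok: true; value: T }
  | { ok: false; error: E };

type ServiceMessage =
  // requester -> service: opens the exchange and allocates a channel (steps 1-2)
  | { kind: "initial_request"; requestId: number; payload: unknown }
  // service -> requester: ask for additional information (step 4)
  | { kind: "question"; requestId: number; question: unknown }
  // requester -> service: answer, or fail the whole request (step 4)
  | { kind: "answer"; requestId: number; answer: Result<unknown> }
  // service -> requester: final result, success or failure (step 5)
  | { kind: "complete"; requestId: number; result: Result<unknown> };
```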

Editor integration is important, but also hard sometimes.

I did some research on what it would take to integrate custom resolution into the editor, and it turns out it might require adding some new features to the plugin system for typescript.

Some nomenclature for better clarity

module service - The service that resolves, loads, tracks dependency information, etc., in an environment. Specifiers and referrer information in; module definitions and compiled code out.

resolver - A resolver should be able to take a specifier-referrer combination and spit out a strategy for loading an ES module.
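In TypeScript terms, those two roles might look roughly like this; the names are purely illustrative, not existing fly APIs:

```typescript
// A resolver turns a specifier + referrer pair into a strategy for loading
// an ES module.
type LoadStrategy =
  | { kind: "url"; url: string }
  | { kind: "file"; path: string }
  | { kind: "cached"; key: string };

interface Resolver {
  resolve(specifier: string, referrer: string): LoadStrategy | undefined;
}

// The module service sits on top of one or more resolvers: specifiers and
// referrer information in, module definitions and compiled code out.
interface ModuleInfo {
  moduleId: string;        // canonical identity of the module
  compiledCode: string;    // JS emitted after (e.g.) typescript compilation
  dependencies: string[];  // specifiers this module imports
}

interface ModuleService {
  getModule(specifier: string, referrer: string): Promise<ModuleInfo>;
}
```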

The parity problem

While doing my research on editor integration with custom resolution, I realized we need a way to standardize module resolution configuration to allow for parity between editor language services and fly runtimes. This should include JSON-configurable module services. I have two ideas on how to tackle this.

Option 1: create a network service.

This option should be pretty straightforward. Just load up and configure a module service, then listen and respond to requests on the local network, so editor integration tools can talk to the service and get the information they need. This option takes advantage of the existing module service code in rust. It also sounds a lot more clunky from the end user perspective, and more of a pain to set up. I really don't like the idea of another thing that a developer has to set up and configure/troubleshoot. This one kind of goes against my original goals by adding another step for developers instead of removing steps.
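As a rough illustration of option 1, an editor plugin would just query a locally running module service; the port and JSON shape here are invented for the sketch:

```typescript
// Hypothetical: editor tooling asking a local module service to resolve a
// specifier on its behalf.
async function resolveViaLocalService(
  specifier: string,
  referrer: string,
): Promise<{ resolvedPath: string; compiledCode?: string }> {
  const res = await fetch("http://127.0.0.1:8899/resolve", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ specifier, referrer }),
  });
  if (!res.ok) {
    throw new Error(`module service could not resolve ${specifier}`);
  }
  return res.json();
}
```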

Option 2: move another system into a service runtime.

This option would likely require a lot more work and thought, but I like the flexibility, and it should make it easy for others in the industry to integrate with and collaborate on this system. This solution doesn't necessarily exclude the first option either; the standard here could easily include the option to talk to an external service for resolution/loading.

The goal is to make the module service system its own package and implement it in a runtime service and editor plugins. This package could easily allow end users to set up custom resolvers in their configurations. I've played with this one a bit, and I really like the idea of importing custom resolver factories as modules via dynamic imports. To maintain flexibility and security in fly, I think that custom resolver resolution should be part of the implementation of the package. Implementors need control over what resolvers can be used and where their respective code comes from.
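Here is a sketch of what importing a custom resolver factory via dynamic import could look like; the factory signature and allow-list are assumptions, not a defined standard:

```typescript
// Hypothetical: the implementor controls which factory sources are allowed
// and where their code comes from before any of it runs.
type ResolverFactory = (options: Record<string, unknown>) => {
  resolve(specifier: string, referrer: string): Promise<string | undefined>;
};

async function loadResolverFactory(
  allowedSources: string[],
  source: string,
  options: Record<string, unknown>,
) {
  if (!allowedSources.includes(source)) {
    throw new Error(`resolver source not allowed: ${source}`);
  }
  // Dynamic import of the factory module supplied in user configuration.
  const mod = await import(source);
  return (mod.default as ResolverFactory)(options);
}
```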

In general I like this idea a lot. I've noticed several others who are interested in the idea of dynamic module resolution, so I might try to round up those interested and get their ideas. This makes for a good opportunity for community collaboration, and would go a long way towards a dynamic module resolution standard. Once a standard is established, those who use it in their projects could easily move from one platform to another, eliminating another barrier to entry.

Conclusions

I look forward to where all this goes, and I want to know what others think.

@afinch7
Contributor Author

afinch7 commented Feb 7, 2019
