This repository has been archived by the owner on Jun 21, 2020. It is now read-only.

MVP #1

Closed
4 tasks done
dgp1130 opened this issue Apr 30, 2020 · 9 comments

dgp1130 commented Apr 30, 2020

This issue tracks the mainline work to be done to launch minimum viable product (MVP). It gives me somewhere to write down general thoughts and notes that don't necessarily relate to commit messages or markdown docs.

In my view, MVP requires:

  • Users can install the compiler as a dev dependency and use it to generate client/service stubs.
    • An ugly/unwieldy command is fine for now.
  • Users can implement the service as a simple `(req: RequestMessage) => Promise<ResponseMessage>` function.
  • Users can call the service with a simple `const res = await myService.myMethod(req);` (see the sketch after this list).
  • Works on latest Chrome with a single message passing API.
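
For concreteness, here is a rough sketch of the intended developer experience. All of the `Greeter` symbols, import paths, and message accessors below are hypothetical illustrations of what generated code might look like, not final APIs.

```typescript
// Hypothetical generated symbols and paths; the real generated API may differ.
import { GreetRequest, GreetResponse } from './protos/greeter_pb';
import { serveGreeter } from './protos/greeter_bxservices';
import { GreeterClient } from './protos/greeter_bxclients';

// Service side (e.g. a background script): each RPC is a plain async function
// from request message to response message.
serveGreeter(chrome.runtime.onMessage, {
    async Greet(req: GreetRequest): Promise<GreetResponse> {
        const res = new GreetResponse();
        res.setMessage(`Hello, ${req.getName()}!`); // Accessors assume a typical jspb message.
        return res;
    },
});

// Client side (e.g. a popup): construct the generated client and await calls.
async function greetFromPopup(): Promise<void> {
    const client = new GreeterClient(chrome.runtime.sendMessage);
    const res = await client.Greet(new GreetRequest());
    console.log(res.getMessage());
}
```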

Edit: All the major requirements are currently complete, but there are a few cleanup tasks still to do before the 1.0.0 release.

Requirements explicitly not covered by MVP:

  • Error handling/propagation.
  • Streaming requests or responses.
  • Side channel support.
  • Builder integration (Webpack, Rollup, Bazel, etc.).
  • Proto service co-existing with non-proto service on the same message passing API.
  • Multiple proto services on the same message passing API.
  • Other browsers/WebExtensions support.
  • Development on non-Linux platforms.
@dgp1130 dgp1130 self-assigned this Apr 30, 2020
dgp1130 added a commit that referenced this issue Apr 30, 2020
Refs #1.

Lerna provides two key features for this project.
1. Allows packages to be linked together and dependent on each other directly in the repo.
2. Executes scripts in all packages in dependency order, allowing a build script to run over all packages in the right order.

These two features are necessary for organizing the repo and making packages dependent on each other in a usable and testable manner.

Release functionality might come in handy too, but that's not why I'm adding it for now.

I generated this commit by running:

```shell
npx lerna init
```

The only additional change was adding `lerna` to the NPM scripts so developers don't need to install it globally or use `npx` each time.
dgp1130 added a commit that referenced this issue Apr 30, 2020
Refs #1.

Generated the package with:

```
$ npm run lerna create bxpb-runtime

> root@ lerna /home/dparker/Source/bxpb
> lerna "create" "bxpb-runtime"

lerna notice cli v3.20.2
package name: (bxpb-runtime)
version: (0.0.0)
description: Runtime library for browser extension protocol buffers.
keywords: browser-extension browser-extensions chrome-extension chrome-extensions protocol-buffers library service services
homepage: https://github.com/dgp1130/bxpb/packages/bxpb-runtime/
license: (ISC) MIT
entry point: (lib/bxpb-runtime.js)
git repository: (https://github.com/dgp1130/bxpb)
```

Additional edits were deleting the generated JavaScript code and dropping the `main`, `files`, and `directories` keys in the `package.json` file. I'll re-add those as they become relevant. I also filled in the README with expected usage details.
dgp1130 added a commit that referenced this issue May 1, 2020
Refs #1.

Generated the `tsconfig.json` file with:

```shell
(cd packages/bxpb-runtime && npx typescript --init)
```

Updated some settings as well:
* Generated declarations so downstream packages can take advantage of TypeScript.
* Added sourcemaps to improve debuggability.
* Targeted ES2019 to use real Promises and iterators.
* Used ES2015 modules, as there is no need for CommonJS just yet (might need to switch later for protobufs).
* Accepted source files from `src/` and output to `dist/`. I'll still need to figure out exactly how to package with this format.

Added `tsc` to the `build` script and added documentation for how to build the package itself.

I included a dummy `hello.ts` with "Hello world" just to validate the build.
dgp1130 added a commit that referenced this issue May 1, 2020
Refs #1.

Created the Jasmine config with:

```shell
(cd packages/bxpb-runtime/ && npx jasmine init)
```

Since we're using TypeScript and I wanted to leverage the existing build configuration (or else risk divergence between the build and test commands), I use a test-only `tsconfig.spec.json` which extends the base `tsconfig.json`. This writes to a different output directory so tests can be included without polluting the release `dist/` directory. Jasmine then runs on these files to execute the tests.

I don't see an easy way of debugging tests just yet. I tried setting up VSCode's debugger, but I couldn't find a configuration I was happy with, as it needs to pass the debugging port through `npm run`, which executes two subcommands. This could be avoided by running the tests directly and not implicitly building with `npm test`, but I'm not comfortable with that as it is far too easy to forget to build. I could split it into multiple tasks in VSCode's configuration, but then I need an `npm run build-for-test` and `npm run exec-tests` which are both just called by `npm test`, which is confusing, awkward, and has non-obvious reasons for existing. At the end of all that, it binds debugging support to VSCode directly, which I'm not a fan of.

In the end, I could probably get debugging to work as is, but I'm not able to find a developer experience I'm satisfied with. I'll probably set up Karma soon to get a quick edit/refresh cycle with DevTools.

dgp1130 commented May 1, 2020

Quick rant about Lerna: I've been trying really hard to use existing Node tooling for this project, but I'm still not satisfied. I was hoping I could use Lerna to link local dependencies together and then use `lerna run build` to build them in the correct order. This seems to work OK; with testing, however, it kind of falls apart.

The possible ways I can test a package include:

  • `lerna run test` - This will execute tests for the entire repo, but can't pick a specific package and doesn't build the repo first.
  • `lerna run build && lerna run test` - This will build the entire repo and then run tests, but can't pick a specific package.
  • `lerna run test --scope packages/foo/` - This will execute tests for a single package, but won't build any dependencies.
  • `lerna run build --scope packages/foo/ --include-dependencies && lerna run test --scope packages/foo/` - This will build dependencies and test the specific package, but double-builds `foo` and is a pretty long and unintuitive command. This is probably the best option.

Debugging is weird too, because I need to set a node port on a deeply nested command. Add to this that `npm test` is a combination of `tsc` (to build the test code) and `jasmine` (to execute it). This means that to debug I would need something like:

```shell
npm run lerna run build --scope packages/foo/ --include-dependencies && npm run lerna run test:build --scope packages/foo/ && npm run lerna exec node --scope packages/foo/ -- node --inspect=${DEBUG_PORT} node_modules/.bin/jasmine
```

Then the `package.json` would need to be:

```json
{
  "scripts": {
    "test": "npm run -s test:build && jasmine",
    "test:build": "tsc -p tsconfig.spec.json"
  }
}
```

The final exec command is needed to put the Node port exactly where it's needed and breaks abstraction in a really ugly way. I also need to separate the test build step just so it can be manually run for the package before executing the test. I'd also need to set this up separately for VSCode using multiple tasks, and preLaunchTasks which also break abstractions and repeat the build/test infrastructure.

TL;DR: I don't see an elegant way of managing/running tests with Lerna which I'm happy with. Maybe there's something I'm missing. I suspect Karma would solve a few of these problems, but mainly just by sidestepping them rather than Lerna really doing what I want? I'll see if I can get a decent debug flow with Karma, and if not, I might swap out Lerna for Bazel instead.

dgp1130 added a commit that referenced this issue May 2, 2020
Refs #1.

Forked from https://github.com/actions/starter-workflows/blob/6adb515f324bc8ef4c0091053438298bbc34a85a/ci/node.js.yml.

Modified this slightly to use Lerna for the dependency management piece. I wasn't able to find any explicit documentation for `npm ci`-like behavior for Lerna, but this appears to be implemented. Looking through lerna/lerna#1360, it appears to support `--ci`.
dgp1130 added a commit that referenced this issue May 3, 2020
Refs #1.

This includes a `test:debug` script to execute Karma. Karma's config uses the existing `tsconfig.spec.json` so test builds should be consistent between test and debug, though I suspect Karma adds its own options for sourcemapping.

I had a lot of fun trying to get sourcemaps to work. The compiled JavaScript would be wrapped in coverage instrumentation, so the sourcemap comment was not at the end of the file, and thus would not be picked up by the browser. Eventually, I found the option to disable coverage which removed the instrumentation and Chrome was able to pick up the sourcemap. This does mean that coverage is very likely broken, but I'm not that concerned about it at the moment.

I also discovered that Lerna's `--scope` command was not actually working. I would run `npm run lerna run foo --scope packages/bar/`, however `--scope` is actually an NPM argument, so NPM accepts it and does not pass it to Lerna. Since the repo is currently just one package, the lack of a scope would run on all packages, which is just that one so I had not noticed. Running `npm run -- foo-script` properly escapes arguments after the `--`. At this point I also discovered that the `--scope` flag expects a package name, not a path, so the *correct* way to run a command through Lerna is actually:

```shell
npm run -- lerna run foo --scope bar
```

Updated the documentation to reflect this. If `foo` expects arguments, those should also follow a second `--`, for example:

```shell
npm run -- lerna run foo --scope bar -- --baz hello/world.txt
# Runs `npm run foo --baz hello/world.txt` in package `bar`
```
dgp1130 added a commit that referenced this issue May 3, 2020
Refs #1.

This defines descriptor types for services and methods as well as a `serve()` function which is able to infer the implementation methods required and type check them accordingly.
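
As a rough sketch (type and property names here are approximations, not the exact source), the idea is that the descriptor's method map drives a mapped type, which lets `serve()` infer and type check the implementation object:

```typescript
// Approximate sketch of the descriptor types and serve() signature; the real
// definitions in bxpb-runtime differ in detail.
import { Message } from 'google-protobuf';

/** Stand-in for a message-passing event such as chrome.runtime.onMessage. */
interface Transport {
    addListener(listener: (message: unknown) => void): void;
}

interface MethodDescriptor<Req extends Message, Res extends Message> {
    name: string;
    requestType: new () => Req;
    responseType: new () => Res;
}

interface ServiceDescriptor {
    serviceName: string;
    methods: Record<string, MethodDescriptor<Message, Message>>;
}

/** Maps each method descriptor to the implementation function it requires. */
type ServiceImplementation<S extends ServiceDescriptor> = {
    [K in keyof S['methods']]: S['methods'][K] extends MethodDescriptor<infer Req, infer Res>
        ? (req: Req) => Promise<Res>
        : never;
};

declare function serve<S extends ServiceDescriptor>(
    transport: Transport,
    service: S,
    impl: ServiceImplementation<S>,
): void;
```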

The test required a mock service; however, there is no easy way of having a `Message` implementation without compiling a real *.proto file, which would complicate Karma significantly or yield weird rebuilds. I'm avoiding that for now, but it may be necessary to add in the future. This might be possible by having a private `bxpb-testdata` package which builds protobufs in its `npm run build`. Then Lerna would be able to build it as a dependency without requiring changes to the Karma config; however, Karma's live reload would not automatically rebuild it.

Also updated the test commands in the README to include `--stream`, which streams test output to the terminal.
dgp1130 added a commit that referenced this issue May 3, 2020
Refs #1.

Somehow this got missed previously. I think I had it as a dev dependency, and then removed it but forgot to add it back as a real dependency.
dgp1130 added a commit that referenced this issue May 3, 2020
Refs #1.

Previously we used Jasmine, which runs within Node. Using Karma for debug tests and Jasmine for normal tests meant that one set ran in a browser and the other in Node, which created some discrepancies. By using Karma everywhere, all tests consistently run in a browser. Running in a browser is preferable to Node because the browser is closer to the real runtime where bxpb will be used.
dgp1130 added a commit that referenced this issue May 4, 2020
Refs #1.

I realized there's no need to mock `chrome.runtime.onMessage()` because it is provided in the input. Instead I made a `FakeEvent` class which maintains the listeners and exposes a new `trigger()` function for use in testing.
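
Roughly, the helper looks like this (a simplified sketch, not the exact implementation):

```typescript
// Simplified sketch of a FakeEvent test double; the real class may differ.
type Listener<Args extends unknown[]> = (...args: Args) => void;

class FakeEvent<Args extends unknown[]> {
    private listeners: Array<Listener<Args>> = [];

    addListener(listener: Listener<Args>): void {
        this.listeners.push(listener);
    }

    removeListener(listener: Listener<Args>): void {
        this.listeners = this.listeners.filter((l) => l !== listener);
    }

    hasListener(listener: Listener<Args>): boolean {
        return this.listeners.includes(listener);
    }

    /** Test-only hook: invokes every registered listener with the given arguments. */
    trigger(...args: Args): void {
        for (const listener of this.listeners) listener(...args);
    }
}
```

A test can then pass a `FakeEvent` wherever a real event like `chrome.runtime.onMessage` is expected and call `trigger()` to simulate an incoming message.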

I also included tests for `FakeEvent`, though I have not found a good way to test that `fail()` is used correctly. I found a Stack Overflow question which does this, but I was not able to get it to work, as `jasmine.Env.prototype.execute()` does not appear to execute tests synchronously. I wasn't able to find a good way to wait for all tests to be completed. Instead, I left those cases untested for now; hopefully I can find a better way to cover them in the future.

dgp1130 commented May 4, 2020

I just spent a couple of hours trying to compile a .proto file, only to eventually determine this to be infeasible. AFAICT, Google has no supported mechanism for generating TypeScript-compatible code from protobufs. There are a number of community-built alternatives, but I wanted to stick with Google-supported tools as much as possible.

I tried to generate JavaScript protos instead, and then run them through Clutz to generate TypeScript definitions. I believe this is how it works in google3, so I figured that was the closest I could get to a directly supported option. Unfortunately, this doesn't really seem to work either. The closest I got was this command:

```
grpc_tools_node_protoc --js_out=import_style=commonjs,binary:test-out/test_data/ --proto_path src/test_data/ src/test_data/*.proto
clutz test-out/test_data/*.js --closure_env BROWSER --externs ../../closure/contrib/nodejs/globals.js
test-out/test_data/greeter_pb.js:27: ERROR - [JSC_UNDEFINED_VARIABLE] variable proto is undeclared
  25|  * @constructor
  26|  */
  27| proto.foo.bar.GreetRequest = function(opt_data) {
  28|   jspb.Message.initialize(this, opt_data, 0, -1, null, null);
  29| };

test-out/test_data/greeter_pb.js:31: ERROR - [JSC_UNDEFINED_VARIABLE] variable COMPILED is undeclared
  29| };
  30| goog.inherits(proto.foo.bar.GreetRequest, jspb.Message);
  31| if (goog.DEBUG && !COMPILED) {
  32|   proto.foo.bar.GreetRequest.displayName = 'proto.foo.bar.GreetRequest';
  33| }

test-out/test_data/greeter_pb.js:300: ERROR - [JSC_UNDEFINED_VARIABLE] variable exports is undeclared
  298| 
  299| 
  300| goog.object.extend(exports, proto.foo.bar);

3 error(s), 0 warning(s)
```

The first error is about the `proto` namespace. The generated proto JavaScript has a `require('google-protobuf')` which should define that symbol. The `proto` namespace is generated by the individual proto files, so there's no library/runtime to link to here. The fact that `jspb.Message` resolves makes me think the `require('google-protobuf')` worked; I'm not sure why `proto.*` doesn't. This might be because proto files are generated with `goog.exportSymbol()` rather than `goog.provide()` or `goog.module()`. The latter two actually create the object at that namespace, while `goog.exportSymbol()` does not (per docs).

The second error is `COMPILED`. This comes from Closure's `base.js`, but adding that causes a name conflict for `goog`, because `goog` is also defined in the protobuf JavaScript for some reason. `COMPILED` is just a boolean, so I could manually declare it if absolutely necessary.

The third error is about exporting the proto. I added Node externs to handle CommonJS symbols (i.e. `require()`), but this didn't work for `exports`. I'm not entirely sure why, but it might be necessary to set `--process_common_js_modules` per this doc. Unfortunately, Clutz wraps Closure and does not appear to expose a hook for that flag.

See angular/clutz#334 for more info.

I could keep debugging and just try to get Closure itself to compile the generated protobuf, then worry about Clutz afterwards. Maybe I'll take another crack at this later, but it seems like Clutz and Closure are not effectively working together here. If I can't get anything usable from this, I might just have to fall back to the community alternatives.


dgp1130 commented May 7, 2020

I took another quick pass at Clutz, but didn't have much luck. I can get Closure to compile if I use `compilation_level=SIMPLE`, but I think that just turns off a lot of the type checking, which is necessary for Clutz.

```shell
(cd packages/bxpb-runtime/ && npx google-closure-compiler --js test-out/test_data/greeter_pb.js --compilation_level SIMPLE)
```

I tried the `--process_common_js_modules` flag on Closure, but that isn't able to resolve the `require()` call either.

```
$ (cd packages/bxpb-runtime/ && npx google-closure-compiler --js test-out/test_data/greeter_pb.js --js node_modules/google-protobuf/**/*.js --js node_modules/google-protobuf/package.json --compilation_level ADVANCED --process_common_js_modules --module_resolution NODE)

test-out/test_data/greeter_pb.js:10: ERROR - [JSC_UNDEFINED_VARIABLE] variable module$node_modules$google_protobuf$google_protobuf is undeclared
var jspb = require('google-protobuf');
                   ^^^^^^^^^^^^^^^^^

test-out/test_data/greeter_pb.js:27: ERROR - [JSC_UNDEFINED_VARIABLE] variable proto is undeclared
proto.foo.bar.GreetRequest = function(opt_data) {
^^^^^
```

There are also unrelated errors directly in google-protobuf code.

I also tried changing the output format of protoc to `--js_out=import_style=closure,binary:out/`. This generates Closure-compatible code as expected, but fails because `jspb.*` is not provided. `google-protobuf` doesn't export this to `node_modules/`, so it looks like I'd need to link directly into the source code, via a git submodule or some other trick.

This is just becoming a big waste of time. Clutz, Closure, and protobufs are just so unergonomic as to be effectively unusable in the OSS ecosystem. For now, I think I'll use some community resources to generate TypeScript code and worry about potential google3 compatibility if/when that becomes relevant.

dgp1130 added a commit that referenced this issue May 7, 2020
Refs #1.

Without this, the compilation will sometimes include files from other generated directories and conflict with itself.
dgp1130 added a commit that referenced this issue May 8, 2020
Refs #1.

This introduces `greeter.proto` which includes a simple service for testing purposes. It generates TypeScript declarations and puts them in `generated/`, using TypeScript's `paths` feature to map imports. Karma also needs to map the JavaScript in a similar fashion to resolve itself.

I also refactored the way the `tsconfig.json` files worked to better support this. Now there are three config files which are all a bit different:
* `tsconfig.json` is meant for the editor and contains all TypeScript files (including generated). This allows the IDE to resolve all imports in all files so there are no awkward red squigglies that don't go away.
* `tsconfig.lib.json` is for actually running the compilation. It simply excludes test files, so as not to accidentally use them in a production build.
* `tsconfig.spec.json` is for running tests. It forces a different output directory and format to be compatible with Karma.

One downside of this strategy is that the editor believes all imports are available from all locations, which isn't strictly true. For example, `service.ts` should **not** reference Jasmine's `describe()` function, but the editor will not complain about this. In TypeScript 3.9, solution-style configs allow for more flexibility in the editor and should be able to better support this format. Filed #3 to look into this when TS 3.9 launches.
dgp1130 added a commit that referenced this issue May 11, 2020
Refs #1.

This defines the wire format of an RPC request/response pair. It listens to the provided transport to `serve()` and awaits a message. When a message is received, it is validated against the expected wire format (`ProtoRequest`) and the method implementation is identified and called. Most of this is attempting to handle all the possible errors which can be encountered, as all the data provided cannot be trusted to be consistent with the specification.

Proto messages are encoded in base64 because that is the simplest option for now. It probably does not matter for the time being, and proper performance analysis can identify the ideal message format in the future.
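
For reference, the wire format is shaped roughly like this (field names are approximate, not the exact definitions):

```typescript
// Approximate sketch of the wire format; actual field names may differ.
interface ProtoRequest {
    methodName: string; // Which method implementation to invoke.
    message: string;    // Request proto, serialized and base64-encoded.
}

interface ProtoResponse {
    message: string;    // Response proto, serialized and base64-encoded.
}
```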

I also changed the method implementation to use the exact method name provided in the `*.proto` file. Most of the time, this will be in pascal case per protobuf conventions, which might be awkward for some JavaScript linters/programmers. For example:

```proto
service Greeter {
  // Name "Greet" is pascal case, normal for *.proto files.
  rpc Greet(GreetRequest) returns (GreetResponse) { }
}
```

```typescript
serve(chrome.runtime.onMessage, GREETER_SERVICE, {
  // Name "Greet" is also pascal case, weird for JavaScript/TypeScript files.
  async Greet(req: GreetRequest): Promise<GreetResponse> {
    return new GreetResponse();
  },
});
```

I think this is simpler as it does not include an awkward conversion of the RPC name from the `.proto` file, which can be tricky to do correctly. Names like `JSObject` would naively be turned into `jSObject`, which doesn't make much sense; there are lots of edge cases which cannot be handled algorithmically. This is also simpler for developers as there is a direct one-to-one correspondence. The downside is that this looks less JavaScript-y and some linters may complain about it. The linter warnings may be annoying, so we'll have to see exactly how well they play with this format.
dgp1130 added a commit that referenced this issue May 15, 2020
Refs #1.

This introduces a `ProtoClient` class with an `rpc` function.

Generated clients will extend `ProtoClient` with their implementation. This gives a consistent constructor interface for usage of the clients and allows protected members to be added in the future if necessary.

`rpc()` does the real work of sending a message to the backend service and awaiting the response. Again, most of the work here is validating data, but this turned out to be simpler than the service side.
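
For illustration, a generated client might look roughly like this (all names and import paths here are hypothetical):

```typescript
// Hypothetical sketch of generated client code; the real generated output and
// the actual rpc() signature may differ.
import { ProtoClient, rpc } from 'bxpb-runtime'; // Internal-only APIs; hypothetical import path.
import { GREETER_SERVICE } from './protos/greeter_bxdescriptors';
import { GreetRequest, GreetResponse } from './protos/greeter_pb';

export class GreeterClient extends ProtoClient {
    Greet(req: GreetRequest): Promise<GreetResponse> {
        // rpc() serializes the request, sends it over the client's transport,
        // awaits the response, and deserializes it into the expected message type.
        return rpc(this.transport, GREETER_SERVICE.methods.Greet, req);
    }
}
```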

Both of these APIs are marked via comment as being private. They need to be exported so generated code can call them, but should **not** be called directly by user code. This allows the generated code structure to be entirely an implementation detail which I can change at will. That means the runtime can have a new function added and the compiler can generate code which uses it without any backwards-compatibility concerns. The downside of this decision is that it does not allow client/service code to be rolled out independently. I think this is OK, as browser extensions are released monolithically through a store, so decoupled releases are a pretty specific use case. If this decision becomes a real problem for a significant number of users, then maybe we can revisit this in the future.

`rpc()` could have been a protected method of `ProtoClient`, but I opted for a separate function so it would be harder to misuse, as developers would need to import it before calling it. As long as I expose this code at some kind of `import { rpc } from 'internal/DO_NOT_DEPEND_OR_ELSE/client';`, then I think we should be ok.
dgp1130 added a commit that referenced this issue May 16, 2020
Refs #1.

This will contain a simple "Hello world" example using BXPB. It will also serve as a test case to use for the rest of the repo.
dgp1130 added a commit that referenced this issue May 16, 2020
Refs #1.

This creates a minimal Chrome extension (just a manifest that does nothing). A build script copies the file to an output directory with instructions on how to install it.
dgp1130 added a commit that referenced this issue May 16, 2020
Refs #1.

This adds a simple popup window to the browser extension to act as the client. Currently this just displays simple HTML.

Also updated the README with additional details around reloading local changes.
dgp1130 added a commit that referenced this issue May 16, 2020
Refs #1.

This adds a background script to the extension manifest so it is loaded on startup.

The build strives to be as simple as possible, so it just uses `tsc` and no bundler. This is accomplished by leveraging native ESM imports from the browser. https://medium.com/front-end-weekly/es6-modules-in-chrome-extensions-an-introduction-313b3fce955b

Typically, background scripts are referenced as JavaScript files from the manifest. In this case, we reference an HTML file, with a `<script type="module">` in order to use ESM imports. Because there is no bundler, one weird aspect is that imports need to include `.js` at the end. Otherwise module resolution in the browser will fail to find the file. See microsoft/TypeScript#16577 and https://github.com/alshdavid/tsc-website/pull/1
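
For example (hypothetical file names), an import in the TypeScript source has to spell out the compiled `.js` path so the browser can resolve it at runtime:

```typescript
// Hypothetical example from a background script entry point.
// import { serveGreeter } from './greeter_bxservices';   // Fails: no such file at runtime.
import { serveGreeter } from './greeter_bxservices.js';   // Resolves to the compiled output.
```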

I used a subdirectory for TypeScript/JavaScript code rather than including them directly in `src/` and `dist/` because the build process does not output a single file. This meant I had to pick a name for the subdirectory and the best I came up with was `js/`. I wanted the name under `src/` and `dist/` to be the same, so it is a little strange that TypeScript source files live under the `js/` directory, but I've definitely seen weirder naming conventions.

The `tsconfig.json` file came from `(cd examples/greeter/ && npx typescript --init)` with some settings pulled from `packages/bxpb-runtime/` for consistency. This uses inline sourcemaps for simplicity since they do not need to be exported to downstream projects like the runtime.
dgp1130 added a commit that referenced this issue May 18, 2020
Refs #1.

Switched the build pipeline to use Rollup to bundle TypeScript. This is necessary because protobuf generated code can only use Closure or CommonJS; there is currently no ESM option. As a result I opted for CommonJS, and the compilation must work with that format.

This example was intended to be the simplest, using only NPM scripts with no Webpack/Rollup/Grunt/Gulp/etc. However, `tsc` doesn't provide any means of actually using CommonJS without a bundler. As a result, I added Rollup to this example as I felt it was the "simplest", based on my totally subjective personal experience. I've had trouble with Webpack in the past for multiple output JavaScript files with disjoint dependencies, which is exactly the use case I'm trying to support with this library. I'm still using NPM scripts as the main driver for the build, only using Rollup for the TypeScript compilation and bundling.

After all this, I'm able to generate protobuf code and import it into the build. This uses `grpc-tools` which ships a version of `protoc` which is installable via NPM. Otherwise devs have to install `protoc` themselves, which I would much rather avoid. This generates the JavaScript files, but no TypeScript definitions unfortunately. I used `protoc-gen-ts` to generate typings, though it is really sad that I need a community tool just to use protobufs in TypeScript.

One side effect of using the gRPC tool is that it generates a `myprotofile_grpc_pb.d.ts` file, which contains definitions for gRPC and links to that runtime. This is useless for the purposes of this project but I could not find a way to prevent the file from being generated. Instead, I simply delete that file after generation, which is the best I can do.

`protoc` is more awkward to use than I had hoped, as it requires all input files to be explicitly named. I wanted to just say "Compile all protos under the proto/ directory", however there is no way to do this. Instead, I had to use a `find` command to list out all such files, though this does create more of a build dependency on Linux. None of this would work for Windows, though that isn't too much of a concern right now.

I also wanted generated protos to be available under `import * as foo_pb from 'protos/...'`. This requires a `paths` argument in the `tsconfig.json` file, which works fine with `tsc` but not with Rollup. There is probably a way to configure this to be compatible with Rollup, but I couldn't find any obvious documented solution in `@rollup/plugin-typescript`. Instead, attempting to be as simple as possible, I decided to just import from `../`. I think that is a bit of a code smell, but it's the most straightforward and obvious solution for any users who look at this example.

I also converted `background.js` to a simple script as it no longer needs to be a module.
dgp1130 added a commit that referenced this issue May 18, 2020
Refs #1.

We're only testing on one version of Node anyway, and it was already hard-coded in the config. There's no need for the matrix as well; it didn't do anything.
dgp1130 added a commit that referenced this issue May 18, 2020
Refs #1.

This copies the background Rollup config to apply to the popup as well with a separate top-level file for it.

The Rollup config could pretty easily be shared between the two binaries, but I decided simplicity of the configuration is more important than maintainability in this particular case.
dgp1130 added a commit that referenced this issue Jun 1, 2020
…vice.

Refs #1.

Expands test coverage to include multiple methods in a single service and ensure they are generated and aligned correctly.
dgp1130 added a commit that referenced this issue Jun 1, 2020
Refs #1.

This more closely matches how service and client code is generated and provides inline documentation to users of the generated code.

Ideally this would `{@link ...}` to the proto source, but I have no symbols to link to, as this code itself is the service and method descriptors. Instead I just did inline code blocks with the service and method names.
dgp1130 added a commit that referenced this issue Jun 1, 2020
Refs #1.

This generates the JavaScript and TypeScript definition code for BXPB clients. Relatively straightforward implementation after going through services and descriptors. There is a bit more code to generate which adds some complexity, in particular because this deals with both services **and** methods, unlike generated service code.
dgp1130 added a commit that referenced this issue Jun 1, 2020
Refs #1.

This calls the newly implemented client generation code from the plugin. Tests simply assert that the function is called correctly, as its implementation is already tested pretty thoroughly.
dgp1130 added a commit that referenced this issue Jun 1, 2020
Refs #1.

Greeter was previously using hand-written client code as a stand-in. Now that the compiler implements code generation, the existing hard-coded client is no longer necessary. The generated client is a drop-in replacement with no additional changes needed to use it effectively; simply altering the import is sufficient.
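
In other words, the change amounts to something like this (hypothetical paths):

```typescript
// Before: hand-written stand-in client (hypothetical path).
// import { GreeterClient } from './greeter_client';

// After: compiler-generated client, used exactly the same way.
import { GreeterClient } from './protos/greeter_bxclients';
```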

dgp1130 commented Jun 1, 2020

At this point all the major requirements are supported. The library is technically in a releasable state.

Before I actually release version 1.0.0, there are a few remaining tasks I want to take care of. This is mostly to make sure I don't immediately follow up with a 2.0.0 release in order to fix something I should have considered before the initial release.

Edit: Crossed-out items have been ignored/skipped due to #1 (comment).

  • Re-evaluate Transport types.
    • As it stands, I don't think I can easily expand the Transport types to cover additional APIs without breaking existing usage. Adding a new Transport should not break existing code, so I need to rethink the type to make it easily expandable.
  • Audit API surface.
    • Need to make sure the API does not leak internal details which should not be leaked.
    • Need to make sure all "internal-only" APIs are explicitly marked as such. In particular:
      • *_bxdescriptors.{js,d.ts} should be internal-only.
      • Existing bxpb-runtime APIs should be internal-only.
  • Change exported bxpb-runtime file structure.
    • Currently all files need to be imported from `bxpb-runtime/dist/...` because that's the way the file structure worked out. We definitely shouldn't include `dist/...`, so I need to figure out exactly how to control the exported file structure.
  • Consider publishing as @bxpb/....
    • Ideally, all officially supported packages should go under a protected namespace to avoid typo-squatting and clearly indicate their official nature (bxpb-* packages could be owned by anyone).
    • Not sure exactly how this works in NPM, or if it is worth the potential cost.
  • Audit documentation.
    • A lot of the initial READMEs of example usage were written prospectively. Things may have since changed and they should be updated to match the real API.
  • Document "Hello world" setup.
    • Greeter is a checked-in example, which is nice. However, there are no "getting started" docs telling people how to actually integrate BXPB into their extension.
    • Should consider adding this to the Wiki section of the repo.
  • Add unit tests to Greeter.
    • It would be good to have/document examples of unit testing code written using BXPB.
    • This is mostly showing how to structure code so it is easily testable.
  • Add end-to-end tests of Greeter.
    • Not critical for launch, but it would be great if there were automated tests for the Greeter example, that way CI can verify in an end-to-end fashion that BXPB is actually working in a real extension.
  • File issues for known features/bugs not covered by MVP.
    • This is just to head-off users complaining about obvious missing features or bugs and avoid needless duplicates.
    • In particular, we should test using proto messages from a different file than the service. I strongly suspect this won't work.
  • Test releases.
    • Should do a 0.0.x prerelease.
    • Can iterate on release tooling as much or little as necessary.

dgp1130 added a commit that referenced this issue Jun 2, 2020
Refs #1.

Sometimes the generated output JavaScript binary is not executable. This makes it executable at the end of the build process.
dgp1130 added a commit that referenced this issue Jun 2, 2020
Refs #1.

This updates the package name to use the new NPM scope. This improves protection against typo-squatting and clearly indicates that it is an "officially" supported package.

I opted to have the directory structure as `packages/runtime/` rather than `packages/@bxpb/runtime/` as all packages should be under `@bxpb`, so it would just be extra typing and an extra directory which serves no benefit.
dgp1130 added a commit that referenced this issue Jun 2, 2020
Refs #1.

This moves the package under the `@bxpb/` scope to better protect it from typosquatting and clearly designate it as an "officially" supported package.

The binary exported by `@bxpb/protoc-plugin` can't be renamed similarly however, as binaries don't fit that model. I left that name as `bxpb-protoc-plugin` as I didn't want to just use `protoc-plugin` and be too generic. Hopefully this one exception of using `bxpb-*` over `@bxpb/*` won't cause too much confusion.

dgp1130 commented Jun 2, 2020

I looked into making an @bxpb/ NPM org, and apparently it is free as long as the packages are public, which works for me! I created the organization and renamed the packages to use the new NPM scope.

https://www.npmjs.com/org/bxpb

dgp1130 added a commit that referenced this issue Jun 2, 2020
Refs #1.

This is generally good for documentation, but the main purpose here is to identify `bxdescriptors{.js,.d.ts}` as private, internal-only code not suitable for direct use by consumers.

This explicitly declares that the file is subject to change and is not covered by semantic versioning. This should relieve the package from having to support a particular file format and allow it to change as needed to support APIs which are actually exported.
dgp1130 added a commit that referenced this issue Jun 4, 2020
Refs #1.

This took a few hours of debugging and was quite complicated in its root cause. I was trying to do some random refactorings and I kept coming across very strange errors. Mainly I kept getting an error along the lines of "Cannot assign (req: GreetRequest) => Promise<GreetResponse> to MethodImplementation<any>" in `background.ts` when calling `serveGreeter()`. The `MethodImplementation` type should never have an `any` in it, as it is always generated with the most specific type parameter, so this seemed quite strange. After a **lot** of trial and error, I found that I could reproduce the problem by simply adding `export` to `type ServiceImplementation = ...` in `packages/runtime/src/service.ts`. See https://github.com/dgp1130/bxpb/commits/tsc-dts-export-error.

Eventually I discovered that even without this change, the generated `greeter_bxservices.d.ts` file imports were not working correctly (though it would build successfully). In that file, we import `Transport` and `ServiceImplementation`. However, these both seemed to get resolved to `any` (VSCode tooltips unhelpfully label these as `import Transport`, rather than resolving their types as `any`, so it took a while to figure this out).

I didn't see any typos and there was no compiler error on the import which made this especially strange. Eventually I discovered that `Transport` and `ServiceImplementation` simply were not exported by `packages/runtime/src/service.ts`. It turns out that `.d.ts` files do not display import errors and instead resolve them as `any`! See: microsoft/TypeScript#36971.

Adding the `export` to `Transport` and `ServiceImplementation` didn't quite fix the problem though. After more debugging I found that the generated `GreeterService` type was *also* being resolved to `any` in the generated `greeter_bxservices.d.ts`. I came to realize that I was using `ServiceImplementation<descriptors.GreeterService>` when `descriptors.GreeterService` was a **value**, not a **type**. I should have used either `typeof descriptors.GreeterService` or `descriptors.IGreeterService`, which is the actual interface type it implements. Changing the generator to use the interface as the type **finally** fixed the problem and passed the build. `serveGreeter()` now correctly resolves the types and will give meaningful and accurate error messages if used incorrectly.
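
Concretely, the distinction looks something like this (a hedged reconstruction with hypothetical import paths, not the exact generated code):

```typescript
// Hedged reconstruction; import paths and names are illustrative.
import * as descriptors from './greeter_bxdescriptors';
import { ServiceImplementation } from 'bxpb-runtime';

// Broken: `descriptors.GreeterService` names a *value* (the descriptor object),
// not a type, so this does not mean what it appears to mean.
// type GreeterImpl = ServiceImplementation<descriptors.GreeterService>;

// Fixed: refer to the value's type, or to the interface it implements.
type GreeterImpl = ServiceImplementation<typeof descriptors.GreeterService>;
type GreeterImplAlt = ServiceImplementation<descriptors.IGreeterService>;
```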

Another interesting aspect is that we had a test for specifically this kind of error. When I first implemented `serveGreeter()`, I knew that having strong type inference of generated code was an important requirement which should be tested, so I specifically included a test to verify this exact use case. I did it by calling `serve()` (the backend of generated code like `serveGreeter()`) with an invalid input using `@ts-ignore` (with the intent of upgrading it to `@ts-expect-error` when TS 3.9 came out; I just haven't gotten around to it yet): https://github.com/dgp1130/bxpb/blob/c9dea344d5f8bcbfc7760fa1e2cc1b66b3f74da0/packages/runtime/src/service_spec.ts#L147

However, even `@ts-expect-error` would **not** have caught this bug. This is because the test imports from `./src/service.ts` which defines `ServiceImplementation` in the same file. Production usage however imports through the built file, `./dist/service.d.ts`, thus requiring `ServiceImplementation` to be exported in a way that the test did not. Basically, the test imports the source `*.ts` file, while production usage imports the generated `*.d.ts` file. The difference in the import graph means that the test failed to catch this particular error.

That also means I had to write very unintuitive regression tests to assert that `ServiceImplementation` is actually exported. This fundamental import difference also means that a unit test like this simply cannot fully test the behavior that `serve()` is properly typed to its callers. I would need a full integration test with bad user code that checks the error message of the build to be truly certain of this.

My takeaways from the last few days of debugging are:
1. `serveGreeter()` has had broken types for some time now. It was being resolved to `any` and I didn't notice until now.
2. I didn't notice the problem because I expected `.d.ts` files to give errors on cases like this. TypeScript developers simply cannot rely on the compiler to sanity check `.d.ts` files. Just because it compiles, does **not** mean it is correct. I'm also not aware of any strictness flag which will address this problem (we already use `--strict` for instance).
3. Tests may not catch typing problems. We already had a test to cover this exact use case, but the differences in the build and test pipeline make it effectively impossible to test this at a unit level.
4. Debugging types is still pretty difficult. All I could really do was use VSCode hover tooltip to find the type of specific symbols, but the way it displays imported types makes this harder than it should be. Maybe there are better ways I am not aware of.

dgp1130 commented Jun 4, 2020

I was looking into refactoring @bxpb/runtime to include a "main" file that re-exports all public symbols. While doing that I encountered some strange type errors and spent a long time debugging them before discovering that generated types for service implementation had a couple mistakes which caused serveGreeter() to accept any. I was able to fix them in 7ee936c and have full details in that commit message. I did want to add one other takeaway upon reflection which I think is the most important aspect but did not hit me until after I pushed the commit.

The biggest lesson from the relevant TypeScript issue (microsoft/TypeScript#36971) is that you should not hand-write .d.ts files. When I was first designing BXPB, I considered generating .ts files rather than .d.ts and .js files. I decided against this because I wanted the library to be usable for JavaScript projects as well, without requiring them to go through a TypeScript compile step. Even though we are generating such .d.ts files, they are still effectively "hand-written" (read: not emitted by tsc). Manually writing .d.ts files has many unintuitive side effects which are not immediately obvious, as described in 7ee936c. I think the biggest lesson is that humans should not author .d.ts files (which is weird because DefinitelyTyped does that pretty deliberately, but that's a bit of a special case). Instead, it is tsc's responsibility to emit .d.ts files from .ts source files.

Thinking back, if we had emitted real .ts files, they would have thrown relevant errors for all the mistakes made in them and would have generated the correct output .d.ts. I think the real fix for this kind of error is to generate actual .ts files and compile them, instead of generating .d.ts files.

In our compiler, we could generate .ts files and then run tsc ourselves on them to generate .js and .d.ts files. However, this is trickier than it sounds, as the BXPB compiler is simply a plugin to protoc, and it is not responsible for actually writing files. It is only responsible for returning a map of file path -> file content, which protoc is then responsible for actually generating. The level of abstraction doesn't lend itself to a plugin calling an external tool which interacts with the real file system.

I think the proper solution here is to use typescript as a dependency in the compiler to emit an actual AST of source code and generate it. I'm not sure how well the APIs support that particular use case of a compilation stored entirely in memory. If not, we could alternatively generate .ts source files in a temp directory, then invoke tsc on them, then read back the output files and return them to protoc. That's a bit hacky and roundabout (it also might cause problems for integration with Bazel, as it typically doesn't like tools writing to undeclared outputs).
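
As a sketch of what the in-memory approach might look like (this is speculative, the compiler does not currently do this, and names like `generatedSources` are assumptions):

```typescript
// Speculative sketch: compile generated .ts sources entirely in memory so a
// protoc plugin can return the emitted .js/.d.ts content without touching disk.
import * as ts from 'typescript';

function compileInMemory(generatedSources: Map<string, string>): Map<string, string> {
    const options: ts.CompilerOptions = {
        target: ts.ScriptTarget.ES2019,
        module: ts.ModuleKind.CommonJS,
        declaration: true, // Emit .d.ts files alongside .js.
        strict: true,
    };

    // Start from the default host, but serve the generated sources from memory,
    // falling back to disk for lib.d.ts and real dependencies.
    const host = ts.createCompilerHost(options);
    const defaultGetSourceFile = host.getSourceFile.bind(host);
    host.getSourceFile = (fileName, languageVersion) => {
        const source = generatedSources.get(fileName);
        return source !== undefined
            ? ts.createSourceFile(fileName, source, languageVersion)
            : defaultGetSourceFile(fileName, languageVersion);
    };
    host.fileExists = (fileName) =>
        generatedSources.has(fileName) || ts.sys.fileExists(fileName);
    host.readFile = (fileName) =>
        generatedSources.get(fileName) ?? ts.sys.readFile(fileName);

    // Capture emitted output in memory instead of writing it to disk.
    const outputs = new Map<string, string>();
    const program = ts.createProgram([...generatedSources.keys()], options, host);
    program.emit(undefined, (fileName, data) => void outputs.set(fileName, data));
    return outputs;
}
```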

I filed #4 to make this infrastructure change. I'm not convinced that this is necessary for MVP, but I would really like to avoid such complicated and in-depth debugging like this again.

dgp1130 added a commit that referenced this issue Jun 5, 2020
Refs #1.

This allows users to import directly from `@bxpb/runtime` and clearly defines what the supported API surface is (everything exported at that location).

Currently all the exported symbols are only used by generated code and are considered "private". I put these under `internalOnlyDoNotDependOrElse` to be clear that these APIs are not explicitly supported and are not fit for direct use by end-users.

I also chose to clear out the README for `@bxpb/runtime` because I don't want to advertise the existence of these APIs.

I couldn't find a good way of re-exporting types under a namespace. It seems like `import export Foo` syntax is supposed to be used here, but I simply could not get that to work. Instead, the generics need to be re-declared a bit awkwardly, but it shouldn't be a real problem.
dgp1130 added a commit that referenced this issue Jun 5, 2020
Refs #1.

This type has a generic `Implementation` parameter which itself is a subtype of `ServiceDescriptor`, making this awkwardly recursive in a very unnecessary fashion.

I believe this was originally done while following similar types in gRPC generated code; however, it simply isn't necessary. An object literal is already a subtype of `Record<K, V>`, so the `methods` property of `ServiceDescriptor` was fine without the generic. This makes the types overall simpler and easier to work with.
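
Roughly, the simplified shape ends up looking like this (approximate, not the exact definition):

```typescript
// Approximate "after" shape; the real definitions differ in detail.
interface MethodDescriptor { name: string; } // Placeholder for the real method descriptor.

interface ServiceDescriptor {
    serviceName: string;
    // A plain record: any object literal of methods is already a subtype of
    // Record<string, MethodDescriptor>, so no generic parameter is needed.
    methods: Record<string, MethodDescriptor>;
}
```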
dgp1130 added a commit that referenced this issue Jun 5, 2020
Refs #1.

These tests only exist to cause compile errors if their respective types are not exported. Since they are compile-time checks, there is nothing to do at runtime, and they were emitting warnings that no expectations were present. Adding `expect().nothing()` suppresses the warning.

Also took the liberty to use `any` as a generic type argument to make the test less brittle to changes, since all it cares about is that the type is exported.

dgp1130 commented Jun 6, 2020

I've been thinking a lot about this project over the past couple days and have slowly convinced myself that it cannot be successful in the current ecosystem. I'm going to try to put my thoughts down coherently and explain my change of heart:

**The problem**

I was always a bit frustrated and disappointed with the end developer experience I was able to provide. I've talked before about how protocol buffers are a pain to use with current web tooling, but something more has been bothering me that I was never quite able to identify until now. The core problem with protocol buffers in the context of BXPB is that they require buy-in across the entire data model of a given application.

**Conveniently calling a function with BXPB**

What I mean by this is that in order to conveniently call a given function using BXPB, an application needs to already be able to express its core data model in protocol buffers. If I have a method like driveCar(), I first need to define Car via protocol buffers. That requires me to define a driver as a Person, a Company that owns the car, and any other data needed for my application. This will quickly grow to encompass the entire data set of the application.

In this scenario, my logic roughly follows:

  1. BXPB is about making it trivial to "call a function" in another context of a browser extension.
  2. "Trivially calling a function" means that it must be trivial to provide arguments to the function.
  3. "Trivially providing arguments to a function" means that any object must be trivially serialized/deserialized.
    • "Any object" here is limited to fundamentally serializable objects (there's the "law of physics" limitation about what data is and is not serializable in the first place).
  4. "Trivially serializing/deserializing an object" means that all its components must also be trivially serializable/deserializable.
  5. Any sufficiently complex application will compose all its data models (objects) together in a hierarchical fashion.
    • i.e. A Car object is composed of a Person driver, a Company owner, a Manufacturer, etc. This eventually grows to encompass all objects in the system because: why would you use them together if they didn't relate to each other?
  6. Per the last two points, any sufficiently complex application must trivially serialize/deserialize all its data models.
  7. To be "trivially serializable/deserializable", an object must be expressed and converted to/from protocol buffers.
  8. Per the last two points, any sufficiently complex application must make all its data models expressible and convertible to/from protocol buffers.

QED

If there is a flaw in this proof, I think it is that using BXPB does not inherently require an application to define its entire set of data models in protocol buffers. Whatever operation is being performed via RPC may be simple and limited enough that a relatively small subset of the data model needs to be expressed. Such an application could still benefit from BXPB, but would not be likely to accept the onboarding cost of setting up protocol buffers, learning and using them correctly, and getting the required organization buy-in.

How users should use BXPB

The ideal BXPB user already uses protocol buffers for their entire existing data model and wants to leverage them to easily call functions across multiple contexts in a browser extension.

How users will actually use BXPB

Practically speaking, few if any browser extensions actually have protocol buffer definitions for their entire data set and use them uniformly throughout the application.

The gap between the starting point of "no protobufs" and the end state of "all protobufs" forces an application to choose among some very undesirable options:

  1. Define your data models entirely in protocol buffers and only use those representations everywhere in your app.
  2. Duplicate data models in the protocol buffer IDL, but keep existing JS models around, and include serialize()/deserialize() functions for converting between them.
  3. Use protobufs only to wrap other, more convenient representations (i.e. JSON).

The first option requires total buy-in of the application into protocol buffers. Unless you are already using protobufs heavily in your application and calling gRPC backends, this is almost certainly not worth the benefit. If you already have non-protobuf data models, you'd need to migrate all of them, which would be a significant amount of effort.

Don't forget that the primary selling point of protocol buffers is that they provide strong forwards and backwards compatibility guarantees between client and service. However, a browser extension is typically released monolithically, so the entire codebase is built at a single point in time, with no version skew between components. As a result, protobuf compatibility is effectively worthless to browser extensions. The only real use case I see is calling gRPC backends, where version skew does become a meaningful problem. Hypothetically, an extension could become large and complex enough that a team might choose to more strongly version different components owned by different teams, but at that point the system is so large and complex that the team would likely develop and maintain their own RPC system rather than rely on one supported by only a single open-source developer.

The second option requires duplicating a significant amount of code with manual gluing between the two. It's a hack at best and unmaintainable at worst.
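
As a rough sketch of what that duplication looks like in practice (every name here is hypothetical; BXPB does not generate any of this), each existing model gets a parallel generated proto plus hand-written conversion glue:

```ts
// Hypothetical generated proto classes (google-protobuf style accessors).
import { CarProto, PersonProto } from './models_pb';

class Person {
  constructor(readonly name: string) { }
}

class Car {
  constructor(readonly driver: Person, readonly model: string) { }

  // Every model needs a pair like this, kept in sync with the .proto by hand.
  toProto(): CarProto {
    const proto = new CarProto();
    proto.setModel(this.model);
    const driver = new PersonProto();
    driver.setName(this.driver.name);
    proto.setDriver(driver);
    return proto;
  }

  static fromProto(proto: CarProto): Car {
    return new Car(new Person(proto.getDriver()!.getName()), proto.getModel());
  }
}
```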

The third option would effectively define all your methods as:

```proto
message GreetRequest {
    string json = 1;
}

message GreetResponse {
    string json = 1;
}

service Greeter {
    rpc Greet(GreetRequest) returns (GreetResponse) { }
}
```

The actual meaningful data would be stored as a simple JSON string, only using protocol buffers as a wire format. This is effectively JSON-over-RPC. Both the client and service would be responsible for serializing/deserializing the real data models to/from JSON. At this point, why even bother using protocol buffers?
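
To show how awkward that wrapping is, here is a rough client-side sketch, assuming google-protobuf-style generated accessors (the import path and the setJson()/getJson() names are illustrative, not something BXPB produces):

```ts
// Hypothetical generated messages for the Greeter service above.
import { GreetRequest, GreetResponse } from './greeter_pb';

interface Greeting { name: string; }

function buildRequest(data: Greeting): GreetRequest {
  const req = new GreetRequest();
  // All the real structure lives in the JSON string; the proto is only a wrapper.
  req.setJson(JSON.stringify(data));
  return req;
}

function readResponse(res: GreetResponse): { message: string } {
  // The service did the same thing in reverse, so both sides hand-roll JSON anyway.
  return JSON.parse(res.getJson());
}
```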

The bottom line

None of these are great options and are definitely against the general spirit of protocol buffers. The only way I can see a team using and actually benefiting from BXPB is if:

  1. They are developing a non-trivial extension which runs code in multiple contexts.
  2. They already use protocol buffers for most of their data models, and have existing organizational buy-in to do so.
  3. They are willing to put up with the current state of protobuf tooling.

I have a hard time seeing any team choose this state of affairs; teams only find themselves in it due to other circumstances. The only viable use case I can think of is Google itself, where:

  1. The prevalence of protocol buffers is significant enough that there is existing organizational buy-in to use protobufs for any kind of serialization or RPC problem.
  2. Most data models are already expressed as protocol buffers, and investing more in that area is a reasonable ask.
  3. Using Bazel, Closure, and owning the protobuf ecosystem (while sidestepping NPM and other bundlers) means the tooling is much more compatible and works out of the box in a reasonable way.

So now what?

So having now recognized this problem, what next? I could try another IDL which is more compatible with existing web tooling, but I'm not aware of any particularly good candidate. Even then, any other IDL will encounter the same problem of requiring a team to define their entire data model with it, which is just not how modern web development is done. REST APIs are typically done with JSON, not any special IDL format.

If we accept that any IDL format leads to the above problems (in some form or another), then to be effective we need to simplify the problem and avoid a dependency on the data format entirely.

The most immediate answer I see: let user-code serialize the data. If the API contracts simply treat request and response data as strings or simple (serializable) JavaScript objects, then defining a function looks something like:

```ts
// common.ts
export interface GreeterService {
  greet(request: string): Promise<string>;
}

export class GreetRequest {
  constructor(readonly name: string) { }

  serialize(): string {
    return JSON.stringify({ name: this.name });
  }

  static deserialize(serialized: string): GreetRequest {
    const json = JSON.parse(serialized);
    return new GreetRequest(json.name);
  }
}

export class GreetResponse {
  constructor(readonly message: string) { }

  serialize(): string {
    return JSON.stringify({ message: this.message });
  }

  static deserialize(serialized: string): GreetResponse {
    const json = JSON.parse(serialized);
    return new GreetResponse(json.message);
  }
}
```
```ts
// background.ts
serve<GreeterService>(chrome.runtime.onMessage, {
  async greet(request: string): Promise<string> {
    const req = GreetRequest.deserialize(request);
    const res = new GreetResponse(`Hello, ${req.name}!`);
    return res.serialize();
  }
});
```
```ts
// popup.ts
const client = new Client<GreeterService>(chrome.runtime.sendMessage);
const req = new GreetRequest('Dave');
const serializedRes = await client.greet(req.serialize());
const res = GreetResponse.deserialize(serializedRes);
console.log(res.message); // Hello, Dave!
```

I'm taking some creative liberties with the implementations of serve<GreeterService> and Client<GreeterService>; these would likely need a value object for GreeterService (not just a type), or would require a compiler to generate implementations from a given TypeScript interface.
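
For example, one way (not a committed design, just a sketch) to avoid codegen entirely would be a Proxy-based client that forwards every method call over a message-passing transport; the { method, request } envelope and the Transport type below are assumptions of this sketch:

```ts
type Transport = (message: unknown) => Promise<unknown>;

// Builds a client whose method calls are forwarded over the given transport.
function createClient<T extends object>(transport: Transport): T {
  return new Proxy({} as T, {
    get(_target, method: string | symbol) {
      // Every property access becomes an async function that sends a
      // { method, request } message and resolves with the serialized response.
      return async (request: string): Promise<string> => {
        const response = await transport({ method: String(method), request });
        return response as string;
      };
    },
  });
}

// Usage with a Chrome message-passing transport might look like:
//   const client = createClient<GreeterService>(
//       (msg) => new Promise((resolve) => chrome.runtime.sendMessage(msg, resolve)));
//   const serializedRes = await client.greet(req.serialize());
```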

However, the main takeaway is that a simple TypeScript interface is being used as the IDL format. It has a few critical advantages:

  1. It plugs into existing data models. No matter what serialization method you use (JSON, protobufs, plain objects, etc.) it is all easily expressible with the interface.
  2. Web developers are already familiar with this, there is very little cognitive overhead to learn and use such a system.
  3. This does not require organizational buy-in, because all these tools and systems are already used by modern web developers!
  4. Adding serialize()/deserialize() functions is already a common pattern used when calling many REST APIs or storing data long term.
  5. Using generators/iterables/observables for streaming data fits nicely with the conceptual model (see the sketch just below this list).
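
As a quick illustration of that last point (the service and method names here are made up), a streaming method is just an AsyncIterable on the interface:

```ts
// Hypothetical streaming service expressed directly as a TypeScript interface.
export interface LogService {
  // The service yields serialized log entries as they arrive.
  tailLogs(filter: string): AsyncIterable<string>;
}

// A client consumes the stream with `for await`.
async function printLogs(client: LogService): Promise<void> {
  for await (const serialized of client.tailLogs('errors-only')) {
    console.log(JSON.parse(serialized));
  }
}
```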

There are also a few cons however:

  1. Users have to serialize/deserialize the data correctly themselves; there's nothing this library can do for them in that regard, largely by design.
  2. The types in the service interface still need to be serializable, which is not really intuitive with this developer experience. We'd need some compile-time or run-time check that the given object is serializable, and the interface would likely be defined with SerializedCar, SerializedPerson, and SerializedCompany types rather than their "real" equivalents (a sketch of such a check follows this list).
  3. Users must define their own serialize()/deserialize() functions.
  4. No existing "error" semantics. Will need to define our own error types, unlike protocol buffers which already have canonical errors.
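
For the second con, a compile-time check could plausibly be built from a recursive union type; this is only a sketch and would need more cases (Dates, Maps, typed arrays, etc.) to be practical:

```ts
// Only JSON-like shapes are accepted; anything else fails to type-check.
type Serializable =
    | string
    | number
    | boolean
    | null
    | Serializable[]
    | { [key: string]: Serializable };

// Interfaces would be written against serialized shapes, not the "real" classes.
interface SerializedPerson { name: string; }
interface SerializedCar { model: string; driver: SerializedPerson; }

// A trivial compile-time guard: only type-checks if the value is Serializable.
function assertSerializable<T extends Serializable>(value: T): T {
  return value;
}

// assertSerializable({ model: 'Roadster', driver: { name: 'Dave' } }); // OK
// assertSerializable({ when: new Date() }); // Compile error: Date is not Serializable.
```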

This definitely requires some more thought and exploration to evaluate further and see if it has any merit.

So what happens to BXPB?

To be totally honest, as much fun as I had making this project, I don't see a valid path where it can become successful. I can see a couple of teams at Google potentially using it (after Bazel and third-party integration, which is a non-trivial amount of setup work before it becomes viable for individual projects), possibly one or two teams at other companies with similar tech stacks, and possibly a couple more teams using BXPB where they really shouldn't have and regretting it later. As a result, I can't justify additional work and effort going into the project at this point.

Things may change in the future. Maybe protobufs take off. Maybe they embrace the web ecosystem and become more usable and ergonomic. Maybe I'm overstating the perceived requirement of using protobufs throughout an application's entire data model.

As it stands, we're so close to a 1.0.0 release that I think I want to push it over the finish line just to close this down. I'm thinking I'll wrap up the remaining tasks (largely by ignoring anything related to future work), release 1.0.0, and then deprecate it. That way passers-by can try it out, play with it, and learn from it. That will also leave the repo in a reasonable state if it ever gets revived or forked in the future. I think the historical and educational value of the project exceeds its practical application at this point, so simply preserving the current state is the best thing that can be done right now.

After that, I think I'll explore the viability of not using an IDL by letting user-code serialize the data. I think this has a lower potential than BXPB (it is inherently less ergonomic by design, even if all the tooling were hypothetically perfected), but it is much more achievable given the current state and culture of modern web development. I can only hope there is real potential in that direction.

dgp1130 added a commit that referenced this issue Jun 9, 2020
Refs #1.

Both `@bxpb/runtime` and `@bxpb/protoc-plugin` will be published to NPM under a scope. According to NPM and Lerna, this value must be set for them to be publicly visible.

https://docs.npmjs.com/misc/config#access
dgp1130 added a commit that referenced this issue Jun 9, 2020
Refs #1.

Test suites were already excluded, but test helpers and data should also be excluded from production builds.
dgp1130 added a commit that referenced this issue Jun 9, 2020
Refs #1.

This adds an allowlist of files to include in the published package. This should only contain the built files, no source or other config files.

https://docs.npmjs.com/files/package.json#files
dgp1130 added a commit that referenced this issue Jun 9, 2020
Refs #1.

Per [Lerna's lifecycle docs](https://github.com/lerna/lerna/tree/master/commands/publish#lifecycle-scripts), many scripts are executed during the publish process, including `prepack`. This will clean and build each package from scratch to ensure no dirty data gets pushed to NPM.

I'm honestly not sure which of the 4 pre-publish scripts should be used here. I went with `prepack` as the others seem special-cased for local `npm install`, which I don't really need because Lerna already handles all that for local development.
dgp1130 added a commit that referenced this issue Jun 9, 2020
Refs #1.

This reinstalls `node_modules/` and bootstraps the entire codebase to reset `node_modules/` to a consistent state, no matter what state it may have been in to begin with. It then cleans all packages to remove any previously generated files.

Each package's `prepack` script can now assume it is starting from a clean state. Each package runs `build` and then `test` to verify that tests are passing before publish. That also means that greeter needs to build and test itself to make sure everything passes, even though it won't be published.
dgp1130 added a commit that referenced this issue Jun 9, 2020
Refs #1.

I just realized the `prepack` hook is run from Lerna, so the only way this is executed is if the user has already run `npm install` (or `npm ci`). As a result, deleting and resetting `node_modules/` mid-execution of Lerna is likely going to cause more problems than it will solve.
dgp1130 added a commit that referenced this issue Jun 9, 2020
Refs #1.

This discusses how to do a canary release and a follow up production release.

I am somewhat guessing at the production release, as I haven't done it yet. I'll update this doc when I discover additional steps necessary.
dgp1130 added a commit that referenced this issue Jun 9, 2020
Refs #1.

Updates the `package.json` `bin` reference to point to the JavaScript file in the `dist/` directory. It seems that NPM publish does not include symlinks, so `bin/protoc-plugin` was not being published and was not available. The easiest solution is to just point to the actual executable file.
@dgp1130
Copy link
Owner Author

dgp1130 commented Jun 20, 2020

I published 1.0.0 of @bxpb/protoc-plugin and @bxpb/runtime to NPM. I then followed up with 1.0.1, which included a simple change to the README displayed on NPM (which doesn't seem to have worked, but 🤷). I've also added a comprehensive Getting Started doc which goes through setting up a simple service. It also links to some Git tags in the getting-started branch for users to reference.

Lastly I've updated the README and Wiki to indicate that the project is shut down, with a brief explanation of the decision and a link to the above comment which provides more thoughts on the matter.

I'm closing all the other open issues as obsolete (technically this one should be considered "Done", as I've accomplished all the requirements initially specified). I'll also archive the repository as it only exists for historical purposes now.

RIP BXPB.

@dgp1130 dgp1130 closed this as completed Jun 20, 2020