
Implementation feedback #74

Closed
kriswest opened this issue Oct 4, 2022 · 3 comments
Labels
enhancement New feature or request

Comments


kriswest commented Oct 4, 2022

While investigating test failures, we collected some initial feedback on the implementation of the test suite:

1) Getting started

App Directory configuration is in an inconsistent location for different containers.

Ideally, the following should be provided:

  • generic app directory records in a JSON file
  • a description of the components of the tests and layout of the repository (currently split between readme and mock readme)
  • generic instructions for setup
    • specific instructions for particular containers are also useful, but a secondary concern
  • a description of how the tests are initiated and what to expect from them.

2) Multiple app servers

It's not clear why multiple servers are started for the different apps; different routes on a single server (or at most two) should suffice. Minor issue.

3) Some tests are hard to grok

Tests involving the mocks can be hard to follow, as the implementation is split across the test and the mock (necessary, but it could be made much easier to follow), and some of the function names don't tell you what's going to happen. E.g. AppChannelService.addContextListener does indeed add a listener... but it will always immediately leave the channel after receiving its first message. It could be given a more specific name that links better to where the listener is added and where the broadcast to it occurs.

Further, it would be better if the channelsAppContext (used to drive the mock app's behavior) defined the name of the app channel that's being joined and what the app will do once it's joined, rather than that being baked into the object's defaults and the mock implementation:

```ts
it("Should receive context when app B broadcasts context to an app channel before A joins and listens on the same channel", async () => {
  const errorMessage = `\r\nSteps to reproduce:\r\n- App B joins an app channel\r\n- App B broadcasts context of type fdc3.instrument\r\n- App A joins the same app channel as B\r\n- App A adds a context listener of type null${documentation}`;
  return new Promise(async (resolve, reject) => {
    channelsAppContext.useAppChannel = true;
    // App B creates/joins an app channel then broadcasts context
    await window.fdc3.open("ChannelsApp", channelsAppContext);
    // give app B time to fully execute
    await wait();
```

To confirm what this test is doing, you currently have to seek out the ChannelsApp implementation (which involves reading the appD config to get the port, then package.json to figure out which command ran on that port and how it was built, then finding the right folder, digging through the files to locate a script tag in an HTML file, and reading it).

To make this more penetrable:

  • The JavaScript code for the mock should be in a JS or TS file,
  • Comments should describe the behaviour created by the settings in the context object,
  • Channel names and behaviors should be explicitly defined in each test - rather than split between the test and the mock app implementation
    • e.g. channel joined on one end is defined in the test, the other end is implicit in the implementation of the mock app and its handling of an argument on the channel type.
  • A function, imported into the tests from the mock app implementation, could be used to create the context object based on parameters passed to it (rather than resetting the same object for each test and mostly relying on its default settings with a few tweaks).
  • The code implementing the app's behaviour doesn't need to be split across multiple files (i.e. broadcastContextItems in index.html and AppChannelService.broadcast in channelService.js); it could make use of TypeScript to provide interfaces (governing the polymorphism of the ChannelServices) and should include comments.
    • A better approach might be to just implement all the behaviour in (for example) broadcastContextItems based on explicit arguments to the function (pulled from the context object passed in).
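As a sketch of the factory-function suggestion above (the interface shape, field names, and channel name here are hypothetical, not the repo's actual API), each test could build an explicit context object describing exactly what the mock app will do:

```typescript
// Hypothetical shape of the object that drives the mock app's behaviour.
// None of these names are taken from the repo; they illustrate the idea of
// making the channel name and actions explicit in each test.
interface ChannelsAppContext {
  type: string;
  appChannelName: string;
  broadcastOnJoin: boolean;
  leaveAfterFirstContext: boolean;
}

// Factory imported by the tests, so each test states its intent explicitly
// instead of mutating a shared object and relying on its defaults.
function buildChannelsAppContext(
  appChannelName: string,
  options: Partial<Omit<ChannelsAppContext, "type" | "appChannelName">> = {}
): ChannelsAppContext {
  return {
    type: "channelsAppContext",
    appChannelName,
    broadcastOnJoin: options.broadcastOnJoin ?? false,
    leaveAfterFirstContext: options.leaveAfterFirstContext ?? false,
  };
}
```

A test would then read self-documentingly, e.g. `window.fdc3.open("ChannelsApp", buildChannelsAppContext("test-channel", { broadcastOnJoin: true }))`, with both ends of the channel behaviour visible in the test itself.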

4) Wait times and total runtime.

The use of wait times also makes me nervous. Where possible, these could/should be replaced with an exchange of ready messages - although I grant it's not ideal to coordinate via the same comms mechanism being tested.
However, timeouts like this:

```ts
// Add context listener to app A
listener = await testChannel.addContextListener(null, (context) => {
  expect(context.type).to.be.equals("fdc3.instrument", errorMessage);
  resolve();
});
validateListenerObject(listener);
// if no context received throw error
await wait();
```

should really be cancelled when the promise above it resolves, and the rejection after it should only run if the timeout completes (rather than being cancelled). Not doing so is unlikely to cause an incorrect failure (a promise can't reject after it's been resolved), but it probably keeps each test running until the timeout is hit, and is probably contributing a lot to the overall runtime of the test suite (which is relatively slow right now).
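One way to implement the cancellation described above is a small utility (a sketch; the function name is my own, not from the test suite) that races the expected event against a timer and clears the timer as soon as either side settles, so a passing test ends immediately instead of waiting out the full timeout:

```typescript
// Race `promise` against a cancellable timeout. If the promise settles
// first, the timer is cleared so it cannot keep the test (or the process)
// alive; if the timer fires first, the race rejects with `errorMessage`.
function withTimeout<T>(
  promise: Promise<T>,
  ms: number,
  errorMessage: string
): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(errorMessage)), ms);
  });
  // finally() runs on both resolve and reject, guaranteeing cleanup
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

In the quoted test, the listener's resolve path would be wrapped in a promise passed to `withTimeout`, replacing the trailing `await wait()` entirely.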

5) Window closing

This is again dependent on a wait time (the mock could reply first then close immediately, and the wait could be shortened right down)...
Also, the close method is currently Finsemble-specific - probably because FDC3 doesn't contemplate a standard way to close a window (which it perhaps could...). You could use [window.close()](https://developer.mozilla.org/en-US/docs/Web/API/Window/close) to do this, as it's part of the HTML standard (although I note it doesn't work in Finsemble browserview windows - which we'll address - because the content pane doesn't own the window). It does work if you set finsemble.appd[].manifest.foreign.components.Window Manager.titlebarType = "injected" in the individual component config, or finsemble.Window Manager.titlebarType = "injected" in the whole desktop config.


kriswest commented Oct 7, 2022

Re:

2) Multiple app servers

This will need resolving in order to host the test applications on a website, e.g. under fdc3.finos.org/toolbox/fdc3-conformance framework (as should be the end goal). Essentially, the build needs to assemble a single directory structure with all of the built applications, which can then be copied to a location for hosting.

By using node/express with static file hosting, you can then switch over to a single server for development use. Your application configs can then use a common host/port, but a different path, for each app URL. This will also simplify use as part of a CI environment for a container vendor.
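The common-host, path-per-app idea can be sketched as follows (BASE_URL and the app names are illustrative, not the suite's actual config; with express, each app's build output would be mounted under its path via `app.use("/" + name, express.static("dist/" + name))`):

```typescript
// Illustrative single-server base URL: all test apps share one host/port.
const BASE_URL = "http://localhost:3001";

// appD records then differ only by path rather than by port, which also
// works unchanged when the built apps are copied to a hosted location.
function appUrl(appName: string): string {
  return `${BASE_URL}/${appName}/index.html`;
}
```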

@ColinEberhardt

@Joe-Dunleavy

Here's a pull request that addresses points 3, 4 and 5 for the Channel tests specifically.

Tests now run a lot faster (44 seconds until completion).
I will carry over some of these changes to the other tests and address the other concerns that were raised shortly.

@robmoffat

All of these initial issues are now addressed.

4 participants