
Workshopper testing #13

Open
llkats opened this issue Oct 4, 2016 · 4 comments

llkats (Contributor) commented Oct 4, 2016

Let's discuss setting up a CI-type solution to test workshoppers comprehensively before release.

jk sorta, please see this comment below.

martinheidegger (Contributor) commented

I am not sure CI is possible. The issue that usually occurs is that on Windows the tty doesn't work correctly. Testing it would require an actual tty tool (one with different timings) that is executed (by hand?) to make sure that the tty works.

ghost commented Oct 5, 2016

Workshops can have other kinds of tests. This test ensures all the recommended solutions pass: https://github.com/workshopper/browserify-adventure/blob/master/test/solutions.js

llkats (Contributor, Author) commented Oct 5, 2016

I should try this again, since I misunderstood the initial case that @martinheidegger meant. Is there a way to test workshoppers against new and upcoming versions of Node/npm? For example, to catch TTY no longer working on Windows.

martinheidegger (Contributor) commented

@substack there is workshopper-adventure-test, which should take that even further. But those are automated tests.

The real problem, which we can't tackle easily, is integration tests. Depending on the Node version, various things can fail, e.g.:

  • TTY bugs (the interaction with the terminal is not working, sometimes working, or conditionally working; the last is by far the most annoying and difficult issue to deal with. Also, control characters can get mingled, resulting in wrong output)
  • Colors displayed in the terminal with too low contrast ("black on black", etc.)
  • Interaction might be "slow", i.e. the cursor works but there is a long delay
  • The installation of npm packages doesn't work properly
  • The environment paths are not set properly after installing Node.js (people sometimes can't simply run `npm i` on Windows)
  • File-system operation bugs (problems loading files with delay, etc.)
  • Home environment variable lookup might change or break (i.e. where the storage is kept)
  • If the encoding handshake goes wrong, it might result in scrambled international characters

So, in short: I think the tests that need interaction with other parties are there to prevent bugs like those above. I am not sure how many of them can be well automated, considering that this is my test matrix:

| Platform | Win 7 | Win 8 | Win 10 | Mac | Linux |
| --- | --- | --- | --- | --- | --- |
| Terminal | | | | ☑️ | ☑️ |
| iTerm2 | | | | ☑️ | |
| Hyperterm | ☑️ | ☑️ | ☑️ | ☑️ | ☑️ |
| Command Prompt | ☑️ | ☑️ | ☑️ | | |
| Powershell | ☑️ | ☑️ | ☑️ | | |
| Cygwin | ☑️ | ☑️ | ☑️ | | |
| Github Shell | ☑️ | ☑️ | ☑️ | ☑️ | |
| Node Command Prompt | ☑️ | ☑️ | ☑️ | | |
| Cmder | ☑️ | ☑️ | ☑️ | | |
