
Improve Windows support #919

Closed
1 task
m-mohr opened this issue Jan 15, 2020 · 13 comments

@m-mohr
Contributor

m-mohr commented Jan 15, 2020

Chore summary

The tests don't run on Windows: they fail locally although they succeed in CI. Thus I always have to force-push my changes to PRs so that CI runs the tests for me. That discourages Windows users from contributing PRs to this project and pushes them towards other projects.

Tasks

  • Make tests run on Windows
@P0lip
Contributor

P0lip commented Jan 15, 2020

@nulltoken since you are on Windows, could you take a stab at it?
I know Spectral tests used to run perfectly fine back in the day, but we might have regressed over time.

@nulltoken
Contributor

@m-mohr Hey! I've also noticed a few things that indeed lag behind when working on a Windows box and that would require some tweaking.

The tests don't run on Windows, but fail although they succeed in the CI.

Three different test suites are running:

  • the browser tests
  • the node tests
  • the harness tests

Could you please share which one(s) is/are currently causing you trouble and what kind of errors you face in your environment? That would be very helpful.

@m-mohr
Contributor Author

m-mohr commented Jan 16, 2020

Indeed, the issue lacks details. I'm on Windows 10 and not 100% sure, but I thought there were more issues yesterday. Today, after git pull and npm install, I got the following test summary for npm run test.prod (does it cover all tests?):

Summary of all failing tests
 FAIL  src/__tests__/linter.jest.test.ts (13.953s)
  ● Linter › should report resolving errors for correct files

    expect(received).toEqual(expected) // deep equality

    Expected: ArrayContaining [ObjectContaining {"code": "invalid-ref", "message": "ENOENT: no such file or directory, open 'C:\\Dev\\spectral\\src\\__tests__\\__fixtures__\\schemas\\broken-age.yaml'", "path": ["age", "$ref"], "source": "c:/Dev/spectral/src/__tests__/__fixtures__/schemas/user.json"}, ObjectContaining {"code": "invalid-ref", "message": "ENOENT: no such file or directory, open 'C:\\Dev\\spectral\\src\\__tests__\\__fixtures__\\schemas\\broken-length.json'", "path": ["maxLength", "$ref"], "source": "c:/Dev/spectral/src/__tests__/__fixtures__/schemas/name.json"}]
    Received: [{"code": "invalid-ref", "message": "ENOENT: no such file or directory, open 'c:\\Dev\\spectral\\src\\__tests__\\__fixtures__\\schemas\\broken-length.json'", "path": ["maxLength", "$ref"], "range": {"end": {"character": 23, "line": 1}, "start": {"character": 0, "line": 0}}, "severity": 0, "source": "c:/Dev/spectral/src/__tests__/__fixtures__/schemas/name.json"}, {"code": "invalid-ref", "message": "ENOENT: no such file or directory, open 'c:\\Dev\\spectral\\src\\__tests__\\__fixtures__\\schemas\\broken-age.yaml'", "path": ["age", "$ref"], "range": {"end": {"character": 23, "line": 1}, "start": {"character": 0, "line": 0}}, "severity": 0, "source": "c:/Dev/spectral/src/__tests__/__fixtures__/schemas/user.json"}]

      92 |     );
      93 |
    > 94 |     expect(result).toEqual(
         |                    ^
      95 |       expect.arrayContaining([
      96 |         expect.objectContaining({
      97 |           code: 'invalid-ref',

      at Object.it (src/__tests__/linter.jest.test.ts:94:20)

 FAIL  src/fs/__tests__/reader.jest.test.ts (14.261s)
  ● readFile util › when a file descriptor is supplied › throws when fd cannot be accessed

    : Timeout - Async callback was not invoked within the 10000ms timeout specified by jest.setTimeout.Timeout - Async callback was not invoked within the 10000ms timeout specified by jest.setTimeout.Error:

      30 |     });
      31 |
    > 32 |     it('throws when fd cannot be accessed', () => {
         |     ^
      33 |       return expect(readFile(2, { encoding: 'utf8' })).rejects.toThrow();
      34 |     });
      35 |   });

      at new Spec (node_modules/jest-jasmine2/build/jasmine/Spec.js:116:22)
      at Suite.describe (src/fs/__tests__/reader.jest.test.ts:32:5)


Test Suites: 2 failed, 101 passed, 103 total
Tests:       2 failed, 953 passed, 955 total
Snapshots:   0 total

The other issue I had was that the project uses yarn and I only had npm installed — usually easy to fix, but still annoying if you only want to make a small contribution. Part of the problem was that I only saw CONTRIBUTING.md after a few days.

@nulltoken
Contributor

@m-mohr Thanks!

  • The first one is very surprising... and I can't repro it. It looks like it fails on a different file than expected. Could you please share the output of yarn test linter.jest.test.ts?

  • The second failure is something that I see from time to time. It's related to the test taking too long to execute, which might be due to I/O (notoriously slower on Windows than on Linux). I bumped the jest timeout a few weeks ago to help a bit on that front. Maybe we should bump it a bit more. Could you see if tweaking the setupJest.ts file helps?
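
For reference, the bump is a one-liner in setupJest.ts (the 30-second value here is only an example):

```typescript
// setupJest.ts — raise the per-test timeout (in milliseconds) to
// accommodate slower filesystem I/O on Windows.
jest.setTimeout(30 * 1000);
```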

One thing that I've noted is that I'm unable to run the harness tests (yarn test.harness). Do you also suffer from this?

FWIW, nowadays I rarely run the whole test suite. Most of the time I only run the test files I'm interested in through the yarn test xxxxxx command. And when those pass, I push and let the CI tell me if I broke something elsewhere.

Another thing I rely on is the Jest runner VSCode extension, which is pretty handy for starting a debug session straight from a test.

@m-mohr
Contributor Author

m-mohr commented Jan 17, 2020

  • The first one is very surprising... and I can't repro it. It looks like it fails on a different file than expected. Could you please share the output of yarn test linter.jest.test.ts?

Sure:

c:\Dev\spectral>npm  test linter.jest.test.ts

> @stoplight/spectral@0.0.0 pretest c:\Dev\spectral
> node ./scripts/generate-assets.js


> @stoplight/spectral@0.0.0 test c:\Dev\spectral
> jest --silent "linter.jest.test.ts"

 FAIL  src/__tests__/linter.jest.test.ts (11.477s)
  ● Linter › should report resolving errors for correct files

    expect(received).toEqual(expected) // deep equality

    Expected: ArrayContaining [ObjectContaining {"code": "invalid-ref", "message": "ENOENT: no such file or directory, open 'C:\\Dev\\spectral\\src\\__tests__\\__fixtures__\\schemas\\broken-age.yaml'", "path": ["age", "$ref"], "source": "c:/Dev/spectral/src/__tests__/__fixtures__/schemas/user.json"}, ObjectContaining {"code": "invalid-ref", "message": "ENOENT: no such file or directory, open 'C:\\Dev\\spectral\\src\\__tests__\\__fixtures__\\schemas\\broken-length.json'", "path": ["maxLength", "$ref"], "source": "c:/Dev/spectral/src/__tests__/__fixtures__/schemas/name.json"}]
    Received: [{"code": "invalid-ref", "message": "ENOENT: no such file or directory, open 'c:\\Dev\\spectral\\src\\__tests__\\__fixtures__\\schemas\\broken-length.json'", "path": ["maxLength", "$ref"], "range": {"end": {"character": 23, "line": 1}, "start": {"character": 0, "line": 0}}, "severity": 0, "source": "c:/Dev/spectral/src/__tests__/__fixtures__/schemas/name.json"}, {"code": "invalid-ref", "message": "ENOENT: no such file or directory, open 'c:\\Dev\\spectral\\src\\__tests__\\__fixtures__\\schemas\\broken-age.yaml'", "path": ["age", "$ref"], "range": {"end": {"character": 23, "line": 1}, "start": {"character": 0, "line": 0}}, "severity": 0, "source": "c:/Dev/spectral/src/__tests__/__fixtures__/schemas/user.json"}]

      92 |     );
      93 |
    > 94 |     expect(result).toEqual(
         |                    ^
      95 |       expect.arrayContaining([
      96 |         expect.objectContaining({
      97 |           code: 'invalid-ref',

      at Object.it (src/__tests__/linter.jest.test.ts:94:20)

Test Suites: 1 failed, 1 total
Tests:       1 failed, 5 passed, 6 total
Snapshots:   0 total
Time:        11.591s, estimated 18s
npm ERR! Test failed.  See above for more details.
  • The second failure is something that I see from time to time. It's related to the test taking too long to execute, which might be due to I/O (notoriously slower on Windows than on Linux). I bumped the jest timeout a few weeks ago to help a bit on that front. Maybe we should bump it a bit more. Could you see if tweaking the setupJest.ts file helps?

To make sure it's not just a too-narrow timeout, I raised it from 10 * 1000 to 100 * 1000. That doesn't change anything, though.

One thing that I've noted is that I'm unable to run the harness tests (yarn test.harness). Do you also suffer from this?

I thought that was part of test.prod, but it seems it's not. It's not working; here's the log (the same happens with yarn):

c:\Dev\spectral>npm run test.harness

> @stoplight/spectral@0.0.0 test.harness c:\Dev\spectral
> jest -c ./jest.harness.config.js

 FAIL   HARNESS  test-harness/index.ts (6.547s)
  cli acceptance tests
    alphabetical-responses-order.oas3.scenario file
      × Responses can be sorted alphabetically (37ms)
    custom-ref-resolver.scenario file
      × Custom json-ref-resolver instance is used for $ref resolving (34ms)
    enabled-rules-amount.oas3.scenario file
      × The amount of enabled rules is printed in a parenthesis. (32ms)
    external-schemas-ruleset.scenario file
      × Schemas referenced via $refs are resolved and used (33ms)
    help-no-document.scenario file
      × Errors when no document is provided (32ms)
    ignored-unrecognized-format.scenario file
      × Does not report unrecognized formats given --ignore-unknown-format (32ms)
    invalid-custom-ref-resolver.scenario file
      × Prints meaningful error message when custom json-ref-resolver instance cannot be imported (44ms)
    operation-security-defined.oas3.scenario file
      × Operation security defined, allow optional / no auth security (33ms)
    parameter-description-links.oas3.scenario file
      × Parameters in links are not validated to have description. (32ms)
    parameter-description-parameters.oas2.scenario file
      × Parameters - Parameter Objects - are validated to have description. (33ms)
    parameter-description-parameters.oas3.scenario file
      × Parameters - Parameter Objects - are validated to have description. (32ms)
    proxy-agent.scenario file
      × Requests for $refs are proxied when PROXY env variable is set (32ms)
    results-default-format-json-quiet.oas3.scenario file
      × Invalid OAS3 document outputs results --format=json and hides text with --quiet (31ms)
    results-default-format-json.oas3.scenario file
      × Invalid OAS3 document outputs results --format=json (30ms)
    results-default-output.oas3.scenario file
      × Invalid OAS3 document --output to a file, will show all the contents
when the file is read (34ms)
    results-default.oas3.scenario file
      × Invalid OAS3 document returns results in default (stylish) format (31ms)
    results-format-html.oas3.scenario file
      × Invalid OAS3 document outputs results when --format=html (31ms)
    results-format-junit.oas3.scenario file
      × Invalid OAS3 document outputs results when --format=junit (31ms)
    results-format-stylish.oas3.scenario file
      × Invalid OAS3 document outputs results when --format=stylish (32ms)
    results-skip-rule.oas3.scenario file
      × Can skip a rule with --skip-rule=info-contact (30ms)
    results-skip-rules-multiple.oas3.scenario file
      × Can skip multiple rules (30ms)
    rules-matching-multiple-places.scenario file
      × Rules matching multiple properties in the document (32ms)
    stdin-document-with-errors.scenario file
      × Lints stdin input (37ms)
    stdin.scenario file
      × Lints stdin input (36ms)
    todo-full-loose-schema.scenario file
      × Loose JSON Schema can be validated (29ms)
    unrecognized-format.scenario file
      × Reports unrecognized formats (29ms)
    valid-no-errors.oas2.scenario file
      × Valid OAS2 document returns no results (30ms)
    severity/display-errors.oas3.scenario file
      × Request only errors be shown, but no errors exist (29ms)
    severity/display-warnings.oas3.scenario file
      × Fail severity is set to error but only warnings exist,
so status should be success and output should show warnings (30ms)
    severity/fail-on-error-no-error.scenario file
      × Will only fail if there is an error, and there is not. Can still see all warnings. (30ms)
    severity/fail-on-error.oas3.scenario file
      × Will fail and return 1 as exit code because errors exist (30ms)
    severity/stylish-display-proper-names.scenario file
      × The name of severity levels are display correctly by stylish formatter (29ms)

  ● cli acceptance tests › alphabetical-responses-order.oas3.scenario file › Responses can be sorted alphabetically

    Das System kann den angegebenen Pfad nicht finden. [German: "The system cannot find the path specified."]

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › custom-ref-resolver.scenario file › Custom json-ref-resolver instance is used for $ref resolving

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › enabled-rules-amount.oas3.scenario file › The amount of enabled rules is printed in a parenthesis.

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › external-schemas-ruleset.scenario file › Schemas referenced via $refs are resolved and used

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › help-no-document.scenario file › Errors when no document is provided

    Der Befehl "." ist entweder falsch geschrieben oder
    konnte nicht gefunden werden. [German: 'The command "." is either misspelled or could not be found.']

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › ignored-unrecognized-format.scenario file › Does not report unrecognized formats given --ignore-unknown-format

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › invalid-custom-ref-resolver.scenario file › Prints meaningful error message when custom json-ref-resolver instance cannot be imported

    expect(received).toEqual(expected) // deep equality

    Expected: "Cannot find module 'c:/Dev/spectral/test-harness/scenarios/resolvers/missing-resolver.js'"
    Received: "Das System kann den angegebenen Pfad nicht finden."

      59 |
      60 |       if (expectedStderr !== void 0) {
    > 61 |         expect(stderr).toEqual(expectedStderr);
         |                        ^
      62 |       } else if (stderr) {
      63 |         throw new Error(stderr);
      64 |       }

      at Object.test (test-harness/index.ts:61:24)

  ● cli acceptance tests › operation-security-defined.oas3.scenario file › Operation security defined, allow optional / no auth security

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › parameter-description-links.oas3.scenario file › Parameters in links are not validated to have description.

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › parameter-description-parameters.oas2.scenario file › Parameters - Parameter Objects - are validated to have description.

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › parameter-description-parameters.oas3.scenario file › Parameters - Parameter Objects - are validated to have description.

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › proxy-agent.scenario file › Requests for $refs are proxied when PROXY env variable is set

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › results-default-format-json-quiet.oas3.scenario file › Invalid OAS3 document outputs results --format=json and hides text with --quiet

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › results-default-format-json.oas3.scenario file › Invalid OAS3 document outputs results --format=json

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › results-default-output.oas3.scenario file › Invalid OAS3 document --output to a file, will show all the contents
when the file is read

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › results-default.oas3.scenario file › Invalid OAS3 document returns results in default (stylish) format

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › results-format-html.oas3.scenario file › Invalid OAS3 document outputs results when --format=html

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › results-format-junit.oas3.scenario file › Invalid OAS3 document outputs results when --format=junit

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › results-format-stylish.oas3.scenario file › Invalid OAS3 document outputs results when --format=stylish

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › results-skip-rule.oas3.scenario file › Can skip a rule with --skip-rule=info-contact

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › results-skip-rules-multiple.oas3.scenario file › Can skip multiple rules

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › rules-matching-multiple-places.scenario file › Rules matching multiple properties in the document

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › stdin-document-with-errors.scenario file › Lints stdin input

    Der Befehl "c:" ist entweder falsch geschrieben oder
    konnte nicht gefunden werden. [German: 'The command "c:" is either misspelled or could not be found.']

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › stdin.scenario file › Lints stdin input

    Der Befehl "c:" ist entweder falsch geschrieben oder
    konnte nicht gefunden werden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › todo-full-loose-schema.scenario file › Loose JSON Schema can be validated

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › unrecognized-format.scenario file › Reports unrecognized formats

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › valid-no-errors.oas2.scenario file › Valid OAS2 document returns no results

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › severity/display-errors.oas3.scenario file › Request only errors be shown, but no errors exist

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › severity/display-warnings.oas3.scenario file › Fail severity is set to error but only warnings exist,
so status should be success and output should show warnings

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › severity/fail-on-error-no-error.scenario file › Will only fail if there is an error, and there is not. Can still see all warnings.

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › severity/fail-on-error.oas3.scenario file › Will fail and return 1 as exit code because errors exist

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

  ● cli acceptance tests › severity/stylish-display-proper-names.scenario file › The name of severity levels are display correctly by stylish formatter

    Das System kann den angegebenen Pfad nicht finden.

      61 |         expect(stderr).toEqual(expectedStderr);
      62 |       } else if (stderr) {
    > 63 |         throw new Error(stderr);
         |               ^
      64 |       }
      65 |
      66 |       if (expectedStdout !== void 0) {

      at Object.test (test-harness/index.ts:63:15)

Test Suites: 1 failed, 1 total
Tests:       32 failed, 32 total
Snapshots:   0 total
Time:        10.299s
Ran all test suites.
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! @stoplight/spectral@0.0.0 test.harness: `jest -c ./jest.harness.config.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the @stoplight/spectral@0.0.0 test.harness script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:
npm ERR!     C:\Users\m_mohr08\AppData\Roaming\npm-cache\_logs\2020-01-17T08_54_15_632Z-debug.log
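
(The 'Der Befehl "." ...' failures above suggest that POSIX-style command strings such as ./something are being handed to cmd.exe, which cannot run them. A hedged illustration of the underlying difference, using Node's child_process — the paths involved are stand-ins, not the harness's actual commands:)

```typescript
import { spawnSync } from 'child_process';

// A shell command string like './binaries/spectral lint doc.yaml' is interpreted
// by cmd.exe on Windows, which rejects the leading './' ('The command "." is
// either misspelled or could not be found'). Spawning the executable with an
// explicit argv and no shell sidesteps shell-syntax differences entirely.
// process.execPath (the current Node binary) stands in for the real CLI here.
const result = spawnSync(process.execPath, ['-e', 'console.log("ok")'], { encoding: 'utf8' });
console.log(result.stdout.trim()); // ok
```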

FWIW, nowadays I rarely run the whole test suite. Most of the time I only run the test files I'm interested in through the yarn test xxxxxx command. And when those pass, I push and let the CI tell me if I broke something elsewhere.

Yes, makes sense. Maybe clarify in CONTRIBUTING.md, after "Running a specific test:", that partial paths also work?

Jest runner VSCode extension

I'll try that, thanks.

@m-mohr
Contributor Author

m-mohr commented Jan 17, 2020

What I just realized is that the lint error ONLY occurs in the old cmd shell (which I still use out of bad habit), but it WORKS in PowerShell. Maybe I should really switch now... Running test.harness in PowerShell doesn't help, though, and it seems to produce even more errors if you don't take care with the upper/lower case of file names. A summary from test.prod in PowerShell:

My folder with spectral is called "c:/Dev/spectral" (blame me for using upper case).

If I run "cd c:/dev/spectral" (lower case) to enter the folder, I get the following for test.prod:
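
(The finder failures below come from process.cwd() echoing whatever casing was typed in cd, while the expected paths use the on-disk casing. Resolving the cwd through the filesystem would canonicalize it; a sketch, assuming Node's fs.realpathSync.native, which restores on-disk casing on Windows:)

```typescript
import * as fs from 'fs';
import * as path from 'path';

// On Windows, process.cwd() reflects the casing used in `cd` (e.g. c:\dev\spectral),
// while realpathSync.native resolves the path as it is actually stored on disk
// (e.g. C:\Dev\spectral). On POSIX systems it only resolves symlinks.
const canonicalCwd = fs.realpathSync.native(process.cwd());
console.log(path.join(canonicalCwd, 'src', 'rulesets', 'oas', 'index.json'));
```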

Summary of all failing tests
 FAIL  src/rulesets/__tests__/finder.jest.test.ts
  ● Rulesets finder › should support spectral built-in rules

    expect(received).resolves.toEqual(expected) // deep equality

    Expected: "c:/dev/spectral/src/rulesets/oas/index.json"
    Received: "c:/Dev/spectral/src/rulesets/oas/index.json"

      24 |
      25 |   it('should support spectral built-in rules', () => {
    > 26 |     return expect(findFile('/b/c/d', '@stoplight/spectral/rulesets/oas/index.json')).resolves.toEqual(
         |                                                                                               ^
      27 |       path.join(process.cwd(), 'src/rulesets/oas/index.json'),
      28 |     );
      29 |   });

      at Object.args [as toEqual] (node_modules/expect/build/index.js:202:20)
      at Object.it (src/rulesets/__tests__/finder.jest.test.ts:26:95)

  ● Rulesets finder › should support spectral built-in ruleset shorthand

    expect(received).resolves.toEqual(expected) // deep equality

    Expected: "c:/dev/spectral/src/rulesets/oas/index.json"
    Received: "c:/Dev/spectral/src/rulesets/oas/index.json"

      30 |
      31 |   it('should support spectral built-in ruleset shorthand', () => {
    > 32 |     return expect(findFile('', `spectral:oas`)).resolves.toEqual(
         |                                                          ^
      33 |       path.join(process.cwd(), `src/rulesets/oas/index.json`),
      34 |     );
      35 |   });

      at Object.args [as toEqual] (node_modules/expect/build/index.js:202:20)
      at Object.it (src/rulesets/__tests__/finder.jest.test.ts:32:58)

  ● Rulesets finder › should resolve spectral built-in ruleset shorthand even if a base uri is provided

    expect(received).resolves.toEqual(expected) // deep equality

    Expected: "c:/dev/spectral/src/rulesets/oas/index.json"
    Received: "c:/Dev/spectral/src/rulesets/oas/index.json"

      36 |
      37 |   it('should resolve spectral built-in ruleset shorthand even if a base uri is provided', () => {
    > 38 |     return expect(findFile('https://localhost:4000', `spectral:oas`)).resolves.toEqual(
         |                                                                                ^
      39 |       path.join(process.cwd(), `src/rulesets/oas/index.json`),
      40 |     );
      41 |   });

      at Object.args [as toEqual] (node_modules/expect/build/index.js:202:20)
      at Object.it (src/rulesets/__tests__/finder.jest.test.ts:38:80)

 FAIL  src/fs/__tests__/reader.jest.test.ts (103.094s)
  ● readFile util › when a file descriptor is supplied › throws when fd cannot be accessed

    : Timeout - Async callback was not invoked within the 100000ms timeout specified by jest.setTimeout.Timeout - Async callback was not invoked within the 100000ms timeout specified by jest.setTimeout.Error:

      30 |     });
      31 |
    > 32 |     it('throws when fd cannot be accessed', () => {
         |     ^
      33 |       return expect(readFile(2, { encoding: 'utf8' })).rejects.toThrow();
      34 |     });
      35 |   });

      at new Spec (node_modules/jest-jasmine2/build/jasmine/Spec.js:116:22)
      at Suite.describe (src/fs/__tests__/reader.jest.test.ts:32:5)


Test Suites: 2 failed, 101 passed, 103 total
Tests:       4 failed, 951 passed, 955 total
Snapshots:   0 total
Time:        111.509s
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

If I instead run "cd c:/Dev/spectral" (properly cased), test.prod only reports:

Summary of all failing tests
 FAIL  src/fs/__tests__/reader.jest.test.ts (102.218s)
  ● readFile util › when a file descriptor is supplied › throws when fd cannot be accessed

    : Timeout - Async callback was not invoked within the 100000ms timeout specified by jest.setTimeout.Timeout - Async callback was not invoked within the 100000ms timeout specified by jest.setTimeout.Error:

      30 |     });
      31 |
    > 32 |     it('throws when fd cannot be accessed', () => {
         |     ^
      33 |       return expect(readFile(2, { encoding: 'utf8' })).rejects.toThrow();
      34 |     });
      35 |   });

      at new Spec (node_modules/jest-jasmine2/build/jasmine/Spec.js:116:22)
      at Suite.describe (src/fs/__tests__/reader.jest.test.ts:32:5)


Test Suites: 1 failed, 102 passed, 103 total
Tests:       1 failed, 954 passed, 955 total
Snapshots:   0 total
Time:        108.434s
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

That's strange. It seems that on Windows, process.cwd() returns the path with whatever casing was typed into the shell rather than the on-disk casing. Maybe __dirname or fs.realpathSync(path) can help to avoid this.

@m-mohr
Contributor Author

m-mohr commented Jan 17, 2020

@nulltoken @P0lip I did some debugging:

  • harness is not running because build.binary fails. build.binary fails because I have node 11 installed, but pkg only supports node versions up to 10. See Error! No available node version satisfies 'node11' vercel/pkg#584
  • The issue in reader.jest.test.ts can be "fixed" by specifying a file descriptor other than 2. If I specify 1000, it doesn't fail. Maybe fd 2 is occupied by something else (it is normally stderr)...

So to run the tests on Windows you need to:

  • run a node version <= 10
  • use Windows PowerShell
  • use the properly cased path as the working directory in PowerShell

@philsturgeon philsturgeon changed the title Don't exclude developers on Windows Improve Windows support Jan 22, 2020
@philsturgeon
Contributor

CircleCI has Windows CI; is this going to do the trick?

@m-mohr
Contributor Author

m-mohr commented Feb 6, 2020

@philsturgeon The builds would fail on Windows due to the issues I mentioned above.

@nulltoken
Contributor

@m-mohr #966 slightly improves the harness situation on my side.

I'm still suffering from the following error report on Windows that I haven't started to debug. Yet.

1:1    error  oas3-schema       can't resolve reference ./schemas/schema.oas3.json from id #

@m-mohr
Contributor Author

m-mohr commented Mar 20, 2020

I'm still getting more errors than the one you got. See #919 (comment) for some details.

@nulltoken
Contributor

@m-mohr Although it doesn't solve the native Windows experience, I've pushed a little work that allows me to run the harness on Windows through Docker.

Could you take a look at #999 and see if it helps you a bit as well?

@philsturgeon
Contributor

Please send a PR if there are still problems with Windows support.
