
ChakraCore failed to run tests with 2 failures on windows with MSVC #6722

Open
spacelg opened this issue Jun 25, 2021 · 6 comments

spacelg commented Jun 25, 2021

Hi All,

ChakraCore fails to run its tests on Windows with MSVC, with 2 failures. This can be reproduced on the latest version (385409e); could you please take a look?

Repro steps:

  1. git clone http://github.com/microsoft/ChakraCore F:\gitP\microsoft\ChakraCore
  2. Open a vs2019 x64 command prompt.
  3. cd F:\gitP\microsoft\ChakraCore
  4. msbuild /m /p:Platform=x64 /p:Configuration=Test /p:WindowsTargetPlatformVersion=10.0.18362.0 Build\Chakra.Core.sln /t:Rebuild
  5. cd F:\gitP\microsoft\ChakraCore\test
  6. set NUM_RL_THREADS=10
  7. set WindowsSDKVersion=10.0.18362.0
  8. runtests -x64test

Build log:
run_test.log

Error info:
10>ERROR: Test failed to run correctly: diffs from baseline (F:\gitP\microsoft\ChakraCore\test\Error\errorCtor_v4.baseline):
10> ch.exe -WERExceptionSupport -ExtendedErrorStackForTestHost -BaselineMode -forceNative -off:simpleJit -bgJitDelay:0 -dynamicprofileinput:profile.dpl.UnnamedTest568 -ExtendedErrorStackForTestHost F:\gitP\microsoft\ChakraCore\test\Error\errorCtor.js >F:\gitP\microsoft\ChakraCore\test\Error\testout440 2>&1
10>ERROR: name of output file: F:\gitP\microsoft\ChakraCore\test\Error\testout440; size: 21662; creation: Thu Jun 24 23:56:45 2021, last access: Thu Jun 24 23:56:45 2021, now: Thu Jun 24 23:56:46 2021
10>ERROR: bad output file follows ============
10>-----------------------------------------
10>Error()
10>message = (string)
10>name = Error (string)
10>number = undefined (undefined)
10>stack = Error
10> at eval code (eval code:1:1)
10> at Test(string, string) (errorCtor.js:68:5)
10> at TestCtor(string) (errorCtor.js:75:5)
10> at Global code (errorCtor.js:111:1)(string)
10>-----------------------------------------
10>Error(NaN, NaN)
10>message = NaN (string)
10>name = Error (string)
10>number = undefined (undefined)
10>stack = Error: NaN
10> at eval code (eval code:1:1)
10> at Test(string, string) (errorCtor.js:68:5)
10> at TestCtor(string) (errorCtor.js:78:5)
10> at Global code (errorCtor.js:111:1)(string)
10>-----------------------------------------
...
Summary: F:\gitP\microsoft\ChakraCore\test had 2217 tests; 2 failures
-- runtests.cmd >> Tests failed. See logs for details.
-- runtests.cmd >> exiting with exit code 1


ppenzin commented Jul 1, 2021

@rhuanjl line endings strike back again :) There was an obvious unix file checked in, which I think I fixed in the linked PR above, but there are more issues with the errorCtor.js test; parts of its output look like this (vim shows the non-printing characters):

message      = function Test(typename, s)^M
{^M
    WScript.Echo("-----------------------------------------");^M
    WScript.Echo(typename + "(" + s + ")");^M
    var e = eval("new " + typename + "(" + s + ")");^M
    DumpObject(e);^M
}(string)
name         = Error       (string)
number       = undefined   (undefined)
stack        = Error: function Test(typename, s)^M
{^M
    WScript.Echo("-----------------------------------------");^M
    WScript.Echo(typename + "(" + s + ")");^M
    var e = eval("new " + typename + "(" + s + ")");^M
    DumpObject(e);^M
}

I can see that those were edited in 3d0d0cb and I can see the line breaks in git diff, but for some reason I can't strip them out with the usual tools yet. I'll try again later.
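For reference, the stripping being attempted here amounts to a dos2unix pass over the baseline; a minimal Python sketch (a hypothetical one-off helper, not part of ChakraCore's tooling):

```python
# strip_cr.py - hypothetical helper, not part of ChakraCore's tooling.
# Removes CRLF sequences (the ^M characters above) by rewriting the
# file in binary mode, so no platform newline translation interferes.
import sys

for path in sys.argv[1:]:
    with open(path, "rb") as f:
        data = f.read()
    with open(path, "wb") as f:
        f.write(data.replace(b"\r\n", b"\n"))
```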

Also, I think our plan B was to discontinue using the Windows-only test runner in favor of the one written in Python.


rhuanjl commented Jul 1, 2021

We've switched the CI over to the Python test runner, which does line-ending normalisation; that's why this only shows up offline.
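For context, that normalisation amounts to something like the sketch below; this illustrates the approach only and is not the actual runtests.py code:

```python
# Illustration of baseline comparison with line-ending normalisation;
# not the actual runtests.py implementation.
def normalise(text: str) -> str:
    # Map CRLF and lone CR to LF so endings never cause a diff.
    return text.replace("\r\n", "\n").replace("\r", "\n")

def matches_baseline(output: str, baseline: str) -> bool:
    return normalise(output) == normalise(baseline)
```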

I held off on deleting the old Windows test runner because it has a few features the Python runner doesn't replicate yet, and I wanted to review them again before removal.

In the past I've fixed baseline line-ending issues with a utility called 'unix2dos'.
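For anyone without the tool to hand, the conversion it performs is easy to reproduce; a rough Python equivalent (a sketch, not project tooling):

```python
# Rough Python equivalent of unix2dos - a sketch, not project tooling.
import sys

with open(sys.argv[1], "rb") as f:
    data = f.read()
# Collapse to LF first so existing CRLF lines aren't doubled,
# then expand every LF to CRLF.
data = data.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")
with open(sys.argv[1], "wb") as f:
    f.write(data)
```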


ppenzin commented Jul 2, 2021

> In the past I've fixed baseline line-ending issues with a utility called 'unix2dos'.

I did that yesterday in a few different ways, though that was in WSL; I ran it on BSD today as well. Maybe the issue with the errorCtor output is not with the test files, then. A somewhat frustrating part of this situation is that both GitHub (in diffs) and git (when checking out or committing) can normalize newlines.

The lines in the errorCtor output come from a printout of the source that throws the error. What do we do about newlines internally (and does it matter these days anyway)?


rhuanjl commented Jul 2, 2021

The issue in the test cases comes from ch: the printf equivalent it uses for output emits LF on POSIX and CRLF on Windows.

As ch is just for testing/demoing the engine, this is only a test issue; long term it could be good to make it emit LF everywhere.
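The same split is easy to reproduce outside of ch; Python's text mode does the equivalent per-platform translation, and pinning the newline shows what "LF everywhere" would look like (an analogy, not ch's actual code):

```python
# Analogy for ch's output behaviour, not its actual code: Python's
# text mode translates "\n" to os.linesep on write (CRLF on Windows,
# LF on POSIX), the same split described above.
with open("default_endings.txt", "w") as f:
    f.write("line\n")            # CRLF on Windows, LF elsewhere

# Pinning newline="\n" disables the translation - the "LF everywhere"
# behaviour suggested above.
with open("lf_endings.txt", "w", newline="\n") as f:
    f.write("line\n")            # LF on every platform

print(open("default_endings.txt", "rb").read())
print(open("lf_endings.txt", "rb").read())
```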


ppenzin commented Nov 1, 2021

For some reason I did not understand what you said right away. Let me confirm: it makes sense to print the line endings each platform expects (some people might be using Notepad) and to relax the strictness of the check.


rhuanjl commented Nov 1, 2021

runtests.py explicitly corrects for line-ending issues, so this is not a problem when running the test suite through runtests.py.

This only comes up if someone runs the tests using runtests.cmd, the Windows-only runner that I intend to deprecate (and remove in the long run). Our CI no longer uses it, but I haven't removed it yet because it has a few extra testing options I want to look at and consider adding to runtests.py at some point.
