
fix mutable sync/async logging and support async tests #60

Merged
merged 3 commits into from Jan 9, 2017

Conversation

AnthonyLloyd
Contributor

Hopefully the style is ok.

I think for LF vs CRLF you need a .gitattributes file.

I think ultimately there must be a better way to handle sync/async logging. It needs to support switching from async flush to manual flush.

@AnthonyLloyd
Contributor Author

AnthonyLloyd commented Jan 6, 2017

I think I know how this should work. I've just been trying to add support for async tests.

The execution of a test with logging should be made into an Async<TestRunResult>, which is easy enough.
The parallel ones can then be run together, and the sequenced ones one by one, completing their logging.
This should simplify the code, and the TestPrinters are now correct with their Async<unit> return values.
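The plan described above could be sketched roughly as follows. This is an illustrative sketch only, not Expecto's actual internals: the function name and shapes are stand-ins, and it assumes a modern FSharp.Core that provides Async.Sequential.

```fsharp
// Illustrative sketch: each test execution (with its logging) becomes an
// Async<_>. Parallel tests run concurrently; sequenced tests run one by
// one so their logging completes in order before the next starts.
let runTests (parallelTests: Async<'r> list) (sequencedTests: Async<'r> list) =
    async {
        // run the parallelisable executions together (result order is preserved)
        let! parResults = Async.Parallel parallelTests
        // then run the sequenced ones one at a time
        let! seqResults = Async.Sequential sequencedTests
        return List.ofArray parResults @ List.ofArray seqResults
    }
```

Because both Async.Parallel and Async.Sequential preserve input order, the combined result list keeps the original test order within each group.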

```diff
@@ -544,8 +527,12 @@ module Impl =
     fun (printers: TestPrinters) map ->
       let log test =
         if test.sequenced then Async.StartImmediate else Async.Start
```
Owner

It's really just the join at the end of the run that should decide the method with which logs are awaited... Just because we sequence tests, we don't have to start their logs immediately. Also, logger.infoWithBP will only back-pressure the logger's buffer, not the printing. So the returned Async<unit> from such a function call will yield as soon as the buffer has accepted the message – a final flush of the logging infrastructure at the end should then be enough.
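The suggestion above, as a rough sketch with hypothetical names (flush and the collected log-write handles are stand-ins for the real logging infrastructure, not Expecto's code):

```fsharp
// Illustrative sketch: rather than choosing Async.StartImmediate per
// sequenced test, keep the Async<unit> handles from the log writes and
// join them once at the end of the run, then flush the logging
// infrastructure.
let joinLogsAtEnd (logWrites: Async<unit> list) (flush: unit -> Async<unit>) =
    async {
        // each write yields as soon as the logger's buffer accepts it
        do! Async.Parallel logWrites |> Async.Ignore
        // a single final flush then guarantees everything reached the sinks
        do! flush ()
    }
```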

@haf
Owner

haf commented Jan 6, 2017

I really like these changes. However, we may have to think about it a bit; see my in-code comment.

@AnthonyLloyd
Contributor Author

Sure. I've been working on the async tests. I'll add that to the PR when it's ready and we can discuss.

Currently having some issues with the run taking 20s instead of 2s (logging ACK? not sure).

@AnthonyLloyd AnthonyLloyd changed the title fix mutable sync/async logging [WIP] fix mutable sync/async logging and support async tests Jan 6, 2017
@haf
Owner

haf commented Jan 6, 2017

@AnthonyLloyd
Contributor Author

@haf
I've had to limit the number of threads to get this to not take 20s.
I'm not sure why this is necessary.

Could you take a look at some point and let me know if you think I'm on the right track.

@AnthonyLloyd AnthonyLloyd changed the title [WIP] fix mutable sync/async logging and support async tests fix mutable sync/async logging and support async tests Jan 6, 2017
@AnthonyLloyd AnthonyLloyd changed the title fix mutable sync/async logging and support async tests [WIP] fix mutable sync/async logging and support async tests Jan 7, 2017
@AnthonyLloyd AnthonyLloyd changed the title [WIP] fix mutable sync/async logging and support async tests fix mutable sync/async logging and support async tests Jan 7, 2017
@AnthonyLloyd
Contributor Author

@haf,

Just to update you on the status of this.

I've added the number of parallel workers to the command line, plus tests for async tests. It's basically done as far as I can see, apart from some docs.

There are a few issues outstanding:

  • getMethodName - I can't write the code to locate the async tests. --summary-location just gives [:0] for me in Debug and Release for sync tests

  • The BenchmarkDotNet tests are still not working. They pass, but they are throwing exceptions and producing NaN in Travis.

  • There are some tests that are failing for me but not travis (even when I checkout haf/master):

Failed: 7
all/string comparison/different length, actual is shorter [:0]
all/string comparison/different length, actual is longer [:0]
all/string comparison/fail - different content [:0]
expecto/expectations/#containsAll/sequence contains everything expected [:0]
expecto/expectations/#distribution/sequence doesn't contain repeats in expected [:0]
expecto/expectations/#distribution/sequence does contain repeats in expected but should not [:0]
expecto/expectations/#distribution/sequence does not contains everything expected [:0]

@haf
Owner

haf commented Jan 7, 2017

Your failures may be fixed w/ string changes in #64

@haf
Owner

haf commented Jan 8, 2017

#64 is merged now.

@AnthonyLloyd
Contributor Author

Rebased, but I still see the errors. May be down to LF vs CRLF:

```
[20:03:12 ERR] expecto/expectations/#distribution/sequence does contain repeats in expected but should not failed in 00:00:00.0855580.
Test failure strings should equal.
          Expected string to equal:
          "Sequence should contain two, two and four.
    Sequence `actual` does not contain every `expected` elements.
        All elements in `actual`:
        {2, 2}
        All elements in `expected` ['item', 'number of expected occurrences']:
        {2: 2, 4: 1}
        Missing elements from `actual`:
        '4' (0/1)"

                                                                                                          ↑
          The string differs at index 306.
          "Sequence should contain two, two and four.
    Sequence `actual` does not contain every `expected` elements.
        All elements in `actual`:
        {2, 2}
        All elements in `expected` ['item', 'number of expected occurrences']:
        {2: 2, 4: 1}
        Missing elements from `actual`:
        '4' (0/1)
" C:\Users\Ant\src\expecto\Expecto\Expect.fs(301,1): Expecto.Expect.distribution[a](IEnumerable`1 actual, FSharpMap`2 expected, String format)

                                                                                                          ↑
          String `actual` was longer than expected, at pos 306 found item '\010'.
  C:\Users\Ant\src\expecto\Expecto.Tests\Tests.fs(518,1): Expecto.Tests.expecto@518-173.Invoke(String msg)
```

```diff
@@ -434,13 +434,13 @@ let isFasterThanSub (f1:Performance.Measurer<_,_>->'a) (f2:Performance.Measurer<
       Tests.failtestf "%s. Expected f1 (%s) to be faster than f2 (%s) but is ~%.0f%% slower."
         format (toString s1) (toString s2) ((s1.mean/s2.mean-1.0)*100.0)
     | Performance.MetricLessThan (s1,s2) ->
-      Impl.logger.info (
+      Impl.logger.log Info (
```
Owner

If we expect this to be sent all the way to the terminal before the process exits, logWithAck should be used.
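The distinction being drawn here can be illustrated with a simulated logger. This is a sketch only, not the Logary Facade types that Expecto actually uses: Log is fire-and-forget, so the write may happen after the process has exited, while LogWithAck returns an Async<unit> that completes only once the message has been handled.

```fsharp
open System.Threading

// Simulated logger, purely to illustrate the log vs logWithAck semantics.
type SketchLogger() =
    let written = ResizeArray<string>()
    member _.Written = lock written (fun () -> List.ofSeq written)
    // fire-and-forget: the actual write happens later on another thread,
    // so the process can exit before the message is printed
    member _.Log (msg: string) =
        ThreadPool.QueueUserWorkItem (fun _ ->
            Thread.Sleep 50
            lock written (fun () -> written.Add msg)) |> ignore
    // acknowledged: the returned Async completes only once the message
    // has actually been written
    member _.LogWithAck (msg: string) : Async<unit> =
        async { lock written (fun () -> written.Add msg) }

let logger = SketchLogger()
// awaiting the ack before exit guarantees the summary reaches the sink
logger.LogWithAck "summary" |> Async.RunSynchronously
```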

```diff
      createSummaryMessage summary
-     |> log.write Info
+     |> logger.log Info
```
Owner

LogWithAck

@haf
Owner

haf commented Jan 9, 2017

Do you still have issues with speed?

@haf haf merged commit e7bffbc into haf:master Jan 9, 2017
@AnthonyLloyd
Contributor Author

AnthonyLloyd commented Jan 9, 2017

No, it's fine with parallel-workers defaulting to the number of logical processors.
I always thought you never needed to do this with Async because the TPL only creates a fixed number of threads.
All other unit testing libraries have a config for workers and default to the number of logical processors.
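The parallel-workers idea can be sketched with a semaphore-based throttle. Illustrative names only; this is not the implementation in this PR:

```fsharp
open System.Threading

// Illustrative throttle: a SemaphoreSlim bounds how many async
// computations run at once, while Async.Parallel preserves result order.
let withWorkerLimit (workers: int) (jobs: Async<'a> list) =
    let gate = new SemaphoreSlim(workers)
    jobs
    |> List.map (fun job -> async {
        // wait for a free worker slot before starting the job
        do! gate.WaitAsync () |> Async.AwaitTask
        try
            return! job
        finally
            // release the slot whether the job succeeded or failed
            gate.Release () |> ignore })
    |> Async.Parallel
```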

I'll add docs for this command-line option later, and test whether a .gitattributes file with git config core.autocrlf false fixes my local test failure.
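A hypothetical .gitattributes along the lines discussed; the patterns are guesses at the repository layout, not taken from the repo:

```
# normalise all text files to LF in the repository
* text=auto
# force LF for F# sources and project files
*.fs     text eol=lf
*.fsx    text eol=lf
*.fsproj text eol=lf
```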

Will close #2 when docs are done.

@haf
Owner

haf commented Jan 9, 2017

This is a good PR, I've merged it and tested it to some extent and it's looking good. I've also added the testCaseAsync test function to the README.

@AnthonyLloyd
Contributor Author

Cool

@AnthonyLloyd
Contributor Author

Still have the 7 failing tests. Changing to LF doesn't seem to fix them.

@haf
Owner

haf commented Jan 10, 2017

Can you set up an appveyor.yml file to showcase the test errors?

@AnthonyLloyd
Contributor Author

@haf

Sure, will look into it.

I also did a contention profile; I think the slowdown when using many workers is down to assertTestFails and multiple full evals running. Still experimenting, but this could probably be transformed into a normal test without using the full runner inside the test.

I may also pick up #68 later in the week if no one else has done it. I have an interest in what could be done later with FsCheck (scaling for stress testing, easy stdgen, etc.).
