
Is it possible to get the code coverage report for E2E testing by using Testcafe? #2778

Closed
raja1313 opened this issue Aug 28, 2018 · 19 comments
Labels: FREQUENCY: level 2, TYPE: enhancement
Milestone: Planned

Comments

@raja1313

Are you requesting a feature or reporting a bug?

Feature

What is the current behavior?

I don't see any code coverage for my 32 scripts

What is the expected behavior?

I need a code coverage report for my 32 scripts, e.g. 70% covered and 30% yet to go.

Provide the test code and the tested page URL (if applicable)

Tested page URL:

Test code

Specify your

  • operating system: macOS
  • testcafe version: 0.20.4
  • node.js version: >= 4.0.0 (engines field from package.json)
@AndreyBelym AndreyBelym added the TYPE: question label Aug 28, 2018
churkin (Contributor) commented Sep 6, 2018

Hi @raja1313,
Unfortunately, we do not have built-in functionality to check code coverage. However, I think it is a good idea; we should think about it. Thank you.

@churkin churkin added this to the Planned milestone Sep 6, 2018
raja1313 (Author) commented Sep 6, 2018 via email

raja1313 (Author) commented Sep 6, 2018 via email

@miherlosev miherlosev added the TYPE: proposal label and removed the TYPE: question label Oct 2, 2018
@AndreyBelym AndreyBelym added the TYPE: enhancement label and removed the TYPE: proposal label Feb 6, 2019
@AndreyBelym AndreyBelym added this to Integrations in Enhancements processing Mar 1, 2019
@AndreyBelym AndreyBelym moved this from Integrations to New Features & APIs in Enhancements processing Mar 1, 2019
armand1m commented Apr 4, 2019

Maybe we can use Chrome's CSS and JS coverage to build this feature? https://developers.google.com/web/updates/2017/04/devtools-release-notes#coverage
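For reference, a rough sketch (not an existing TestCafe API) of what reading Chrome's built-in JS coverage over the Chrome DevTools Protocol could look like, assuming the browser is started with a known remote-debugging port and using the chrome-remote-interface package:

const CDP = require('chrome-remote-interface');

async function collectJsCoverage () {
    // Assumes Chrome was launched with --remote-debugging-port=9222 (port is arbitrary).
    const client = await CDP({ port: 9222 });

    try {
        await client.Profiler.enable();
        await client.Profiler.startPreciseCoverage({ callCount: true, detailed: true });

        // ... the page would be exercised here (e.g. by the e2e tests) ...

        const { result } = await client.Profiler.takePreciseCoverage();

        await client.Profiler.stopPreciseCoverage();

        // Each entry describes a script URL and the functions/ranges that actually ran.
        for (const script of result)
            console.log(script.url, script.functions.length, 'functions recorded');

        return result;
    }
    finally {
        await client.close();
    }
}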

@AlexKamaev (Contributor)

@armand1m, thanks for the information; it's interesting.
TestCafe uses the Chrome DevTools Protocol internally, so in theory we can implement this feature. However, it's not completely clear how a set of e2e tests could cover 100% of all JS scripts or CSS rules; to achieve 100% coverage, you would need to write a really huge number of tests. To me, this feature looks more suitable for unit tests.
Anyway, thank you for sharing this information. We'll bear it in mind.

JoshuaKGoldberg commented Apr 10, 2019

@AlexKamaev if it helps, the goal here might not be to achieve 100% JS test coverage through TestCafe. It could just be unused CSS rule detection, which would be very useful!

Additionally: a common setup nowadays is to have Jest running unit tests and TestCafe running end-to-end tests. Changes that technically decrease unit test coverage but increase overall test coverage still get flagged as reducing test coverage because there's no way for systems to know what coverage end-to-end tests provide. Tools like istanbul-combine would be great to use here.
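For illustration, merging the two Istanbul-format reports is already possible with istanbul-lib-coverage (or istanbul-combine / nyc merge); a minimal sketch, with hypothetical file paths:

// merge-coverage.js: combine Jest (unit) and e2e coverage into a single map.
const fs = require('fs');
const { createCoverageMap } = require('istanbul-lib-coverage');

const map = createCoverageMap({});

map.merge(JSON.parse(fs.readFileSync('coverage/unit/coverage-final.json', 'utf8')));
map.merge(JSON.parse(fs.readFileSync('coverage/e2e/coverage-final.json', 'utf8')));

fs.writeFileSync('coverage/combined.json', JSON.stringify(map.toJSON()));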

Edit: not expecting a response here, no need to appease the need-response-app 😆

@AndreyBelym (Contributor)

@JoshuaKGoldberg thank you for sharing your opinion. However, this feature requires a fair amount of research, so we need some time to think about it.

jakerobb commented Jul 8, 2020

I'd also like coverage support in TestCafe.

Regarding this:

However, it's not completely clear how a set of e2e tests could cover 100% of all JS scripts or CSS rules; to achieve 100% coverage, you would need to write a really huge number of tests. To me, this feature looks more suitable for unit tests.

I don't think 100% coverage is relevant here. Having a map of what is and isn't covered (note: the map is more important than the percentage) is useful regardless of the level of coverage, and IMO 100% coverage is only reasonable through a combination of unit, functional, integration, and e2e tests, applying each where it makes the most sense for the code under test. (Note that I'm not advocating for a 100% coverage target in general -- just saying that such a target is definitely a bad idea through the lens of a single type of testing.)

No response needed. 🙂

RaviH commented Aug 22, 2020

Any update on this?

AlexSkorkin (Collaborator) commented Aug 24, 2020

No updates. Once we get any results, we will share them here.

kkrull commented Oct 8, 2020

I have a couple of thoughts on this:

  1. There can be more reasons to measure code coverage than just checking whether you have reached some target percentage (which may or may not be considered valuable, depending on your situation). In my case, I'd like to measure how many times each line is executed by my suite of e2e tests and use that to identify redundancies in the test suite.
  2. Istanbul.js already provides the instrumentation and reporting needed to measure coverage, so it need not be re-implemented here. If there's a way for testcafe to call Istanbul's nyc command instead of whatever it calls now (just node?), then a small bit of configuration may be all that's needed to implement this feature.

What do the maintainers think of the Istanbul.js idea? I'd be happy to do some of the work of trying out this idea, if someone can point me to where to look for how the testcafe process starts up.

kkrull commented Oct 8, 2020

I did a little bit of investigation, and I think I found the entry point that launches the testcafe process, using node.

I don't know yet of a good way to make that change in this codebase, but it might be enough to try the idea locally to see if it even works:

  1. install istanbul
  2. copy the testcafe-with-v8-flag-filter.js script to testcafe-with-coverage.js, and change the first line to call nyc instead of node
  3. add a bin entry to package.json for this new script
  4. try running testcafe using testcafe-with-coverage

If everything is working, nyc should print some console output as the process exits, saying that it's generating a coverage report.

@arubtsov (Contributor)

Hello @kkrull,

You can launch testcafe under nyc without any changes to the codebase. For instance, you can run an example test from the root of the testcafe repository as follows: npx nyc testcafe chrome ./examples/basic/test.js. TestCafe runs test files in the Node.js process, so Istanbul won't have access to the scripts used in the browser. Thus, the coverage report you get this way will only contain the files used in your test files (the page model, in this particular case).
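If you go this route anyway, an nyc config can at least keep that Node-side report focused on your own test code rather than TestCafe internals; a sketch, with hypothetical include globs:

// nyc.config.js (sketch): report only on the project's own test-side files.
module.exports = {
    include: ['tests/**/*.js', 'page-model/**/*.js'],
    reporter: ['text', 'lcov']
};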

kkrull commented Oct 13, 2020

Istanbul won't have access to the scripts used in the browser

So true. Thanks for reminding me of this. It's easy to lose track of which lines of code run in the node.js process vs. in the browser. On the bright side, forgetting that detail is only made possible by the fact that testcafe works quite well.

Now I see why the issue is more complex than meets the eye. Maybe Istanbul or the browser itself could be configured to instrument the code that runs in the browser and track its execution, but there would need to be some integration points:

  1. for testcafe to tell that instrumentation when to stop/reset measuring
  2. for the instrumentation code in the browser to send results somewhere (an API endpoint at the testcafe base URL, store and retrieve via local storage, etc.)

Just verbalizing what you probably already knew ^^, since writing helps me think through the issue.
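For what it's worth, the browser-side instrumentation part is usually handled by babel-plugin-istanbul, which records execution counts into a window.__coverage__ object on the page; a minimal sketch, where the E2E_COVERAGE switch is just an illustrative convention:

// babel.config.js (sketch): instrument the app bundle only for coverage builds.
module.exports = {
    plugins: process.env.E2E_COVERAGE ? ['istanbul'] : []
};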

@AndreyBelym AndreyBelym added the FREQUENCY: level 2 label Oct 19, 2020
@miherlosev (Collaborator)

Hi @raga-varicent,

Could you please provide step-by-step instructions describing what we should do to reproduce the described behavior on our side?

@raga-varicent

Hi @miherlosev,
I have resolved my initial issue, but now I have the following one: I am trying to retrieve the coverage object using a client function.
It works fine for smaller objects, up to 4 MB (window.__coverage__). But if the object is too big (65 MB), it seems to hang after let c = await cov();

Is this because of an HTTP transmission size limitation? How can I get it to work for larger objects?

async getCoverage(filename: any) {
    // Serialize the coverage object that the instrumented app code stores on the page.
    // @ts-ignore -- window.__coverage__ is not declared on the Window type
    const cov = ClientFunction(() => JSON.stringify(window.__coverage__));
    const c = await cov();

    if (c !== undefined) {
        const covInDir = 'coverage_in';

        await this.createCovFolders(covInDir);

        // Write the raw coverage JSON to coverage_in/<filename>_coverage.json.
        const f = fs.openSync(covInDir + `/${filename}_coverage.json`, 'w');

        fs.writeSync(f, c);
        fs.closeSync(f);
    }

    return c;
}

@miherlosev (Collaborator)

But if the object is too big (65 MB), it seems to hang after let c = await cov();
Is this because of an HTTP transmission size limitation? How can I get it to work for larger objects?

ClientFunction is not intended for transferring such big objects. You can try debugging it and describe the cause of the hang in this thread. Maybe then we can offer a solution.
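Purely as an illustration (not a confirmed solution), one conceivable workaround is to transfer the serialized object in fixed-size pieces so that no single ClientFunction call has to move the whole 65 MB at once; the helper names and chunk size below are arbitrary, and re-serializing the object on every call is slow:

import { ClientFunction } from 'testcafe';

// Length of the serialized coverage object currently on the page.
const getCoverageLength = ClientFunction(() =>
    // @ts-ignore -- window.__coverage__ is injected by the instrumentation
    window.__coverage__ ? JSON.stringify(window.__coverage__).length : 0);

// One slice of the serialized coverage object.
const getCoverageChunk = ClientFunction((start, size) =>
    // @ts-ignore
    JSON.stringify(window.__coverage__).slice(start, start + size));

export async function readCoverageInChunks (chunkSize = 1024 * 1024) {
    const total = await getCoverageLength();
    let json = '';

    for (let offset = 0; offset < total; offset += chunkSize)
        json += await getCoverageChunk(offset, chunkSize);

    return json ? JSON.parse(json) : null;
}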

@miherlosev (Collaborator)

Hi folks,

TestCafe processes the code (HTML and JS) of the tested website so that the Smart Assertion Query Mechanism can work, so a code coverage tool would also measure TestCafe's service code, which changes constantly. TestCafe also processes the code of test files with Babel to support modern JavaScript features on older Node.js versions; this processing algorithm depends on the Node.js and Babel versions and can change over time.
As a result, you cannot get stable code coverage results due to these architectural specifics.

Enhancements processing automation moved this from New Features & APIs to Closed Jun 9, 2023
cenfun commented Mar 17, 2024

I created a TestCafe custom reporter for generating native V8 coverage reports:

npm i testcafe-reporter-coverage
testcafe chrome:headless:cdpPort=9223 tests/*.test.js -r spec,coverage
