
Automated system testing framework #708

Closed
nvaccessAuto opened this issue Jun 16, 2010 · 11 comments

@nvaccessAuto

Reported by jteh on 2010-06-16 00:38
All testing of NVDA is currently done manually by its developers and users. This can be extremely time consuming and tedious. Also, there is a high potential for code changes to cause regressions in other (sometimes unforeseeable) circumstances and functionality. To alleviate these problems and to allow for faster discovery of new problems, an automated testing system needs to be developed for NVDA. This system would automatically run tests defined by the developers on a regular basis and report on the success or failure of the tests.

Due to the interdependence of much of the code and the inherent interaction between NVDA and other applications, unit testing the code is extremely difficult and much less useful than system testing. Therefore, at least initially, the focus will be system testing. System tests will examine NVDA's interaction with the operating system and other applications.

@nvaccessAuto
Author

Comment 1 by orcauser on 2012-01-23 18:05
Came across [1] while browsing, and thought it might be worth investigating when this ticket is worked on.
1: http://www.tizmoi.net/watsup/intro.html

@nvaccessAuto
Author

Comment 3 by blindbhavya on 2014-10-02 10:06
Hmm.
Sounds like a very efficient method to me. I'll CC myself to receive updates on this ticket.

@LeonarddeR
Collaborator

This still seems to be a relevant ticket; it might be good to set a priority for it.

@derekriemer
Collaborator

#7026 is a start for this. @jcsteh used to have an NVDA test harness for doing integration and usability tests; what is the status of that?

@jcsteh
Contributor

jcsteh commented Jul 3, 2017

I did start on this, but the approach needs some refining. It watches log messages to gather output, but that means you have to get exact log strings to put in your tests, which is cumbersome. Processing log output is also ugly and error prone. Instead, once we get some sort of signalling/hooking framework (which will be done as a prerequisite of speech refactor), we can properly watch for output and serialise it to the test harness process.
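
For illustration only, the difference between scraping the log and hooking output directly might look roughly like the sketch below. The Action class and pre_speech name are made-up stand-ins, not NVDA's actual signalling API.

```python
# A rough sketch of the hooking idea. The Action class and pre_speech name
# are illustrative stand-ins; they are not NVDA's actual API.

class Action:
    """Minimal notification point that handlers can register against."""
    def __init__(self):
        self._handlers = []

    def register(self, handler):
        self._handlers.append(handler)

    def notify(self, **kwargs):
        for handler in self._handlers:
            handler(**kwargs)

# Hypothetical notification fired whenever NVDA is about to speak something.
pre_speech = Action()

# The test plugin captures speech directly instead of grepping the log
# for exact strings.
captured = []
pre_speech.register(lambda sequence: captured.append(sequence))

# Somewhere in the speech path, output is announced to registered handlers:
pre_speech.notify(sequence=["Notepad", "edit text"])

assert captured == [["Notepad", "edit text"]]
```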

@jcsteh jcsteh added feature and removed task labels Jul 3, 2017
@jcsteh
Contributor

jcsteh commented Aug 30, 2017

I'm not going to be able to work on this before I leave NV Access :(, so here is a bit of a brain dump.

My original work on this is in the t708 branch.

  1. We use the remote Python console to get stuff out of NVDA. I think this is probably still okay. It's a bit inelegant, but I'm not sure there's necessarily any real advantage to creating some custom test server.
  2. In the existing code, we watch log messages to gather output, but that means you have to get exact log strings to put in your tests (and in the right order if you want to assert more than one), which is cumbersome. Processing log output is also ugly and error prone.
  3. Instead, we should use extensionPoints (Generic framework for code extensibility via actions and filters #7484) to allow the test plugin to directly capture various kinds of output; e.g. speech sequences passed to speech.speak, braille dots/text prior to output to the display, etc.
  4. Each type of captured output would be a separate channel of data sent to the test harness. That way, they can be examined separately. For example, a test might only care about speech or might only care about braille; it might or might not want to assert on both. In terms of implementation, we can probably just do that by prefixing each channel with a different string prefix, just as the current code prefixes presentation logs with "PRES ".
  5. Speech sequences would be serialised/de-serialised so that the test harness would have access to the actual speech sequence, not just a log string. The advantage of this is that the harness can provide methods to assert on just the text, part of the text, or the actual speech sequence. For example, some tests don't care about language changes, etc.; the text is all that's important. This can probably just be done using repr/eval, as long as all speech commands have a suitable __repr__ (most of the existing ones do). See the sketch after this list.

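A minimal sketch of points 4 and 5, assuming speech commands whose repr() round-trips via eval(). LangChangeCommand and the "SPEECH" channel prefix are illustrative stand-ins, not NVDA's real classes or channel names.

```python
# A minimal sketch of points 4 and 5. LangChangeCommand and the "SPEECH"
# channel prefix are illustrative stand-ins, not NVDA's real classes/names.

class LangChangeCommand:
    """Stand-in for a speech command whose repr() round-trips via eval()."""
    def __init__(self, lang):
        self.lang = lang

    def __repr__(self):
        return f"LangChangeCommand({self.lang!r})"

    def __eq__(self, other):
        return isinstance(other, LangChangeCommand) and other.lang == self.lang


def serializeSpeech(sequence):
    """NVDA side: one prefixed line per captured speech sequence."""
    return "SPEECH " + repr(sequence)


def deserializeLine(line):
    """Harness side: split the channel prefix and rebuild the payload."""
    channel, _, payload = line.partition(" ")
    if channel == "SPEECH":
        # Only safe because the payload was produced by our own repr().
        return channel, eval(payload, {"LangChangeCommand": LangChangeCommand})
    return channel, payload


def speechText(sequence):
    """Join just the text parts, ignoring commands such as language changes."""
    return " ".join(item for item in sequence if isinstance(item, str))


line = serializeSpeech(["Hello", LangChangeCommand("de_DE"), "Welt"])
channel, sequence = deserializeLine(line)
assert channel == "SPEECH"
# A test that only cares about the spoken text:
assert speechText(sequence) == "Hello Welt"
# A test that asserts on the full sequence, commands included:
assert sequence == ["Hello", LangChangeCommand("de_DE"), "Welt"]
```
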
@jcsteh jcsteh removed their assignment Aug 30, 2017
@Brian1Gaff

Nice to see this woken up again. I am of course a pretty lay person when it comes to programming at your level, but one question I had was this:
How does this get information on things like slow-downs or other undesirable effects that may happen to applications while it is being run?

Brian

@derekriemer
Collaborator

@Brian1Gaff commented on Jul 5, 2018, 2:52 AM MDT:

Nice to see this woken up again. I am of course a pretty lay person when it comes to programming at your level, but one question I had was this:
How does this get information on things like slow-downs or other undesirable effects that may happen to applications while it is being run?

It doesn't currently do this. It only verifies whether system behavior (IO) is as expected.

@bdorer
Sponsor

bdorer commented Mar 19, 2019

As you are already working with automated tests, what still has to be done to get this resolved?

@Adriani90
Collaborator

@feerrenrut I guess this is solved given the PR referenced above, which is already merged. cc: @jcsteh. Should it be closed, or not yet?

@feerrenrut
Contributor

Yes, I think we can close this issue. There are several ways that we could expand the system tests, and we would love to see some contributions here!
