Acceptance Tests
Acceptance tests are end-to-end tests that exercise the complete functionality of the application. They help catch bugs and regressions before they are released, ensuring that the code does what it is supposed to do.
This guide will help you get started with writing e2e acceptance tests for a particular user type.
```
oppia/core/tests/
└── puppeteer-acceptance-tests
    ├── spec
    │   ├── blog-admin-tests   (named after the user type)
    │   │   └── assign-role-to-users-and-change-tag-properties.spec.ts
    │   ├── blog-editor-tests
    │   │   └── try-to-publish-a-duplicate-blog-post-and-get-blocked.spec.ts
    │   ├── logged-in-user-tests
    │   │   ├── click-all-buttons-in-about-foundation-page.spec.ts
    │   │   ├── click-all-buttons-in-about-page.spec.ts
    │   │   ├── click-all-buttons-in-thanks-for-donating-page.spec.ts
    │   │   └── click-all-buttons-on-navbar.spec.ts
    │   ├── practice-question-admin-tests
    │   │   └── add-and-remove-contribution-rights.spec.ts
    │   └── translation-admin-tests
    │       ├── add-translation-rights.spec.ts
    │       └── remove-translation-rights.spec.ts
    ├── images
    │   └── blog-post-thumbnail.svg
    ├── puppeteer-testing-utilities
    │   ├── puppeteer-utils.ts
    │   ├── show-message-utils.ts
    │   ├── test-constants.ts
    │   ├── user-factory.ts
    │   └── console-reporter.ts
    └── user-utilities
        ├── blog-admin-utils.ts
        ├── blog-post-editor-utils.ts
        ├── logged-in-users-utils.ts
        ├── question-admin-utils.ts
        ├── super-admin-utils.ts
        └── translation-admin-utils.ts
```
The directory structure is as follows:

- The `spec` directory contains all the top-level test files. Each test file is named `*.spec.ts` and contains the tests for a particular user type. For example, the `blog-admin-tests` directory contains all the tests for the `Blog Admin` user.
- The `puppeteer-testing-utilities` directory contains all the utility files and helper functions that you need in order to write new acceptance tests. More utility functions can be added to this directory as required. The files inside this directory are:
  - `puppeteer-utils.ts` -> Contains the `BaseUser` class, which provides the most common and useful methods such as `openBrowser`, `goto`, `clickOn`, `openExternalPdfLink`, etc. This class also serves as the foundation for the user-oriented subclasses, facilitating various testing scenarios.
  - `user-factory.ts` -> Contains methods for creating users, with a different method for each type of user.
  - `test-constants.ts` -> Defines constants such as URLs, class names, IDs, etc. that are used in the tests.
  - `show-message-utils.ts` -> Contains a method for logging progress and errors during a test.
  - `console-reporter.ts` -> Contains methods for capturing, filtering, and reporting specific console messages during Puppeteer tests.
- The `user-utilities` directory holds the utility files for the different user types. Each user utility class is built upon the `BaseUser` class, inheriting its methods and adding ones related to that user type. For example, `BlogPostEditor` contains the base methods as well as additional methods related only to the `Blog Post Editor` user.
- The `images` directory contains all the images used in the tests.
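To make the relationship between the base class and the user-specific utility classes concrete, here is a minimal, self-contained sketch. The method bodies are illustrative stubs (here they just record the actions taken instead of driving a real Puppeteer page), and the specific selector and method names in `BlogPostEditor` are hypothetical; only the names `BaseUser`, `openBrowser`, `goto`, and `clickOn` come from the description above.

```typescript
// Sketch only: the real BaseUser (puppeteer-testing-utilities/puppeteer-utils.ts)
// drives an actual Puppeteer page; here we just record the actions taken.
class BaseUser {
  actions: string[] = [];

  async openBrowser(): Promise<void> {
    this.actions.push('openBrowser');
  }

  async goto(url: string): Promise<void> {
    this.actions.push(`goto:${url}`);
  }

  async clickOn(selector: string): Promise<void> {
    this.actions.push(`click:${selector}`);
  }
}

// A user-specific utility class adds methods that only make sense for that
// user type. The method and selector names here are hypothetical.
class BlogPostEditor extends BaseUser {
  async createDraftBlogPostWithTitle(title: string): Promise<void> {
    await this.clickOn('create-blog-post-button');
    this.actions.push(`typeTitle:${title}`);
  }
}

async function demo(): Promise<string[]> {
  const editor = new BlogPostEditor();
  await editor.openBrowser();
  await editor.createDraftBlogPostWithTitle('My first post');
  return editor.actions;
}
```

A test then only calls high-level methods like `createDraftBlogPostWithTitle`, keeping the low-level browser interactions encapsulated in the utility classes.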
From the root directory of oppia, run the following command:

Python:

```
python -m scripts.run_acceptance_tests --suite={{suiteName}}
```

Docker:

```
make run_tests.acceptance suite=SUITE_NAME
```

For example, to run the `check-blog-editor-unable-to-publish-duplicate-blog-post.spec.ts` test, run the following command:

Python:

```
python -m scripts.run_acceptance_tests --suite="blog-editor-tests/check-blog-editor-unable-to-publish-duplicate-blog-post"
```

Docker:

```
make run_tests.acceptance suite="blog-editor-tests/check-blog-editor-unable-to-publish-duplicate-blog-post"
```
Note: Typically, any suite takes between 0.5 and 3 minutes to run (excluding the time taken to set up the server), in both headless and non-headless modes, assuming the machine has sufficient resources. If the total time taken is significantly longer than this, it may indicate an issue with the testing environment. If you observe such an issue, please raise it on our issue tracker.
- Create a new directory for the specific user inside the `spec` directory, if it doesn't already exist. For example, the `Topic Manager` user can have a directory named `topic-manager-tests`, and within the user directory, each test file is named `*.spec.ts`.

  Note: The naming convention for directories and files is kebab-case, where each word is separated by a hyphen (-).
- Within the user directory, create a new file for each test. For example, create `create-new-topic.spec.ts` and `delete-topic.spec.ts` for the `Topic Manager` user. Each of these top-level tests covers a single user story, checking the test steps and expectations mentioned in the testing spreadsheet.
- The functionality of the top-level tests for each user type is defined in the `user-utilities` directory. For example, the blog editor tests are written within the `spec/blog-editor-tests` directory, and the functionality of those tests is defined in the `user-utilities/blog-post-editor-utils.ts` file.
  Note: A utility file is maintained for each user type. Its purpose is to add methods specific to that user on top of the basic methods that are already provided. The file contains a user class that extends the base class from `puppeteer-utils.ts`. For example, `blog-post-editor-utils.ts` has a class `BlogPostEditor` with methods like `createDraftBlogPostWithTitle` and `deleteDraftBlogPostWithTitle` that are specific to the Blog Post Editor only.
- The utility files are imported into the top-level test files, and their methods are called to perform the required actions. For example, in the `try-to-publish-a-duplicate-blog-post-and-get-blocked.spec.ts` file, the `createNewBlogPostWithTitle` method is called to create a new blog post with the given title, and the `expectUserUnableToPublishBlogPost` method is called to check that the user is unable to publish a duplicate blog post. To facilitate instantiation of these classes, each utils file should also include a `UserFactory` function whose purpose is to instantiate a new class of the corresponding type. For instance, `export let QuestionAdminFactory = (): QuestionAdmin => new QuestionAdmin();` would create a `QuestionAdmin` instance.
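The factory pattern from the example above can be sketched as a small, self-contained piece of code. The `QuestionAdmin` class body here is a stand-in (the real class lives in `user-utilities/question-admin-utils.ts` and extends `BaseUser`); only the factory's shape mirrors the example.

```typescript
// Stand-in for the real QuestionAdmin class, which extends BaseUser.
class QuestionAdmin {
  async addContributionRights(username: string): Promise<void> {
    // The real method would drive the contributor-rights admin page.
  }
}

// The factory instantiates the corresponding class, giving callers such as
// user-factory.ts a uniform way to construct any user type.
// (Written as `export let QuestionAdminFactory = ...` in the real file.)
const QuestionAdminFactory = (): QuestionAdmin => new QuestionAdmin();

const admin = QuestionAdminFactory();
```

Because every utils file exposes a factory with the same zero-argument shape, `user-factory.ts` can treat them interchangeably when composing users with roles.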
- After adding a new user utility file, you should make the following changes to the user factory:

  If the role requires a super admin to assign it, first add the role to the `Roles` enum in `test-constants.ts`. Then reference the corresponding factory in the `USER_ROLE_MAPPING` inside the `user-factory.ts` file. For example, to add a Translation Admin, whose role is assigned by the super admin:

  • Define the role in the `Roles` enum:

  ```typescript
  Roles: {
    // other roles...
    TRANSLATION_ADMIN: 'translation admin',
  }
  ```

  • Add the role to `USER_ROLE_MAPPING`:

  ```typescript
  const USER_ROLE_MAPPING = {
    // other roles...
    [ROLES.TRANSLATION_ADMIN]: TranslationAdminFactory,
  } as const;
  ```

  For roles that don't require super admin privileges, such as `LoggedInUser`, add the factory to the array inside `createNewUser` under `composeUserWithRoles(BaseUserFactory(), [...])`. This ensures that the new user role is included when creating a new user instance. Please follow the appropriate conventions and guidelines when adding new user-utilities files to the user factory, to maintain consistency and clarity in the testing process.
- For each test, the user is created using the `UserFactory` class. For example, in the `try-to-publish-a-duplicate-blog-post-and-get-blocked.spec.ts` file, the `createNewUser` method is called to create a new user, with `[ROLES.BLOG_POST_EDITOR]` passed so that the blog post editor role is assigned. The `createNewUser` method, defined in the `user-factory.ts` file, creates a new user with the provided username, email, and roles, and then returns the user object. That user object is then used to perform the required actions (which are defined in `user-utilities/*-utils.ts`).
- After the successful completion of any test step or expectation, the `showMessage` method is called to log the progress. For example, in the `blog-post-editor-utils.ts` file, the `showMessage` method is called to log the progress after publishing a new blog post. The `showMessage` method is defined in the `show-message-utils.ts` file.
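A minimal version of such a logging helper might look like the following sketch; the `"[Acceptance test]"` prefix is an assumption for illustration, and the exact formatting in the real `show-message-utils.ts` may differ.

```typescript
// Sketch of a progress-logging helper in the spirit of show-message-utils.ts.
// The "[Acceptance test]" prefix is an assumption, not Oppia's actual format.
function showMessage(message: string): string {
  const line = `[Acceptance test] ${message}`;
  console.log(line);
  return line;
}

showMessage('Published the blog post successfully.');
```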
- If there is any error during a test, we throw an error in the expectation step; otherwise, a timeout error occurs if some component does not behave as intended.
- The test must be thoroughly tested before submitting a PR. It can be run locally using the command mentioned above, or on the CI server by pushing your code to a remote branch in your fork; the CI server will run the test and show the result.
Acceptance tests can detect console errors during CUJs, which may cause test failures. However, there are scenarios where certain console errors are acceptable and should not cause the test to fail. To ignore errors like these, you can use `ConsoleReporter.setConsoleErrorsToIgnore`, which takes an array of error regexes matching the errors that are acceptable. For instance, the error "Blog Post with the given title exists already. Please use a different title.", which occurs during the 'blog-editor-tests/try-to-publish-a-duplicate-blog-post-and-get-blocked' test, is ignored using the ConsoleReporter since it is an acceptable error in the context of the test. When passing acceptable errors like these to the ConsoleReporter, be specific; do not use vague errors like "Failed to load resource...".

Below is an example of this usage:

```typescript
ConsoleReporter.setConsoleErrorsToIgnore([
  'Blog Post with the given title exists already. Please use a different title.'
]);
```
To handle errors that need to be ignored and are not specific to any acceptance test, you should include them directly within the `console-reporter.ts` utility. In this file, add the error regex to the `CONSOLE_ERRORS_TO_IGNORE` array and explain in a comment why the error should be ignored.

```typescript
const CONSOLE_ERRORS_TO_IGNORE = [
  // These "localhost:9099" errors relate to communicating with the
  // Firebase emulator, which would never occur in production, so we just
  // ignore them.
  escapeRegExp(
    'http://localhost:9099/www.googleapis.com/identitytoolkit/v3/' +
      'relyingparty/getAccountInfo?key=fake-api-key'
  ),
  // This error covers the case when the PencilCode site uses an
  // invalid SSL certificate (which can happen when it expires).
  // In such cases, we ignore the error since it is out of our control.
  escapeRegExp(
    'https://pencilcode.net/lib/pencilcodeembed.js - Failed to ' +
      'load resource: net::ERR_CERT_DATE_INVALID'
  ),
];
```
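The `escapeRegExp` helper used above is the standard "escape all regex metacharacters" utility: it lets a literal string containing characters like `?` or `.` be matched exactly rather than as a pattern. A typical implementation looks like this (an assumption; the exact helper in `console-reporter.ts` may differ, e.g. it might return a string instead of a `RegExp`):

```typescript
// Escapes regex metacharacters so that a literal string (such as a URL
// containing '?' and '.') matches exactly rather than as a pattern.
function escapeRegExp(text: string): RegExp {
  return new RegExp(text.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'));
}

const re = escapeRegExp('getAccountInfo?key=fake-api-key');
// re now matches the literal '?' instead of treating it as a quantifier.
```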
To handle errors that need to be fixed, you should include them directly within the `console-reporter.ts` utility. In this file, add the error regex to the `CONSOLE_ERRORS_TO_FIX` array together with a TODO comment that points to the existing issue number (this comment should be removed when the bug is resolved). If the error doesn't have a corresponding issue, file a new issue on our issue tracker.

For example:

```typescript
const CONSOLE_ERRORS_TO_FIX = [
  // TODO(#19746): Development console error "Uncaught in Promise" on signup.
  new RegExp(
    'Uncaught \\(in promise\\).*learner_groups_feature_status_handler'
  ),
  // TODO(#19733): 404 (Not Found) for resources used in midi-js.
  escapeRegExp(
    'http://localhost:8181/dist/oppia-angular/midi/examples/soundfont/acoustic' +
      '_grand_piano-ogg.js Failed to load resource: the server responded with a ' +
      'status of 404 (Not Found)'
  )
];
```
Similar to desktop, we also have acceptance tests for mobile to ensure responsiveness and uninterrupted user journeys on small screen devices. While the tests themselves remain largely the same for both desktop and mobile, there are some differences. For instance, large full menus on desktop may be converted to dropdowns, hamburger menus, or other shortcuts on mobile, requiring additional actions to complete the tests.
There will be no change in the `spec` file of the tests; however, there may be some changes in the `utils` file, depending on the specific test cases. In most cases, the tests will run correctly for both mobile and desktop. However, where certain actions are affected by the smaller screen size, additional steps may be required.
For example, consider a scenario where a menu is collapsed into a hamburger menu due to the small screen size. Here, if we want to click on the "Home" button (or any other button), we first need to click on the hamburger menu. Additionally, there may be differences in selectors for the same buttons between desktop and mobile. For instance, the publish button on desktop might be `e2e-test-publish-exploration`, while on mobile it could be `e2e-test-mobile-publish-button`.

We can handle these differences by including conditional statements in the `utils` file, using the `isViewportAtMobileWidth()` function to execute commands specific to mobile devices.
For example:

```typescript
async discardCurrentChanges(): Promise<void> {
  // Check if the viewport corresponds to a mobile device.
  if (this.isViewportAtMobileWidth()) {
    // If on mobile, click on the mobile-specific discard button.
    await this.clickOn(mobileDiscardButton);
  } else {
    // If on desktop, click on the desktop-specific discard button.
    await this.clickOn(discardDraftButton);
  }
  // Confirm the discard action, regardless of the viewport size
  // (common to both).
  await this.clickOn(discardConfirmButton);
}
```
In this example, the `discardCurrentChanges()` function checks whether the viewport width corresponds to a mobile device; if so, it clicks on the mobile-specific discard button, and otherwise it clicks on the desktop-specific discard button. Finally, it confirms the discard action. This approach allows us to maintain a single set of tests while accommodating the differences between desktop and mobile environments.
From the root directory of oppia, run the following command:

Python:

```
python -m scripts.run_acceptance_tests --mobile --suite={{suiteName}}
```

Docker:

```
make run_tests.acceptance --mobile suite=SUITE_NAME
```

For example, to run the `check-blog-editor-unable-to-publish-duplicate-blog-post.spec.ts` test, run the following command:

Python:

```
python -m scripts.run_acceptance_tests --mobile --suite="blog-editor-tests/check-blog-editor-unable-to-publish-duplicate-blog-post"
```

Docker:

```
make run_tests.acceptance --mobile suite="blog-editor-tests/check-blog-editor-unable-to-publish-duplicate-blog-post"
```
For reference, see the Blog Admin and Blog Editor tests:
- Blog Admin top-level tests
- Blog Editor top-level tests
- user utility files
- puppeteer utility files - base class
- puppeteer utility files - user factory