
🧪 Unit tests for handleUserSelection #11

Closed
hashexplaindata wants to merge 1 commit into master from
testing-improvement-handleUserSelection-5388953667962943430

Conversation

@hashexplaindata
Owner

🎯 What: Addresses a testing gap: missing unit tests for the core telemetry logging function handleUserSelection.
📊 Coverage: Scenarios now tested:

  • Standard user selection in the control condition.
  • User selection in the AI-labeled condition (verifying badge position logging).
  • Correctness of "chose_target_layout" boolean logic.
  • Integrity of the Tidy Data Long Format payload (participant_id, timestamp, etc.).
  • State machine transitions (incrementing trial counters, deactivating trials).
Result: Significant improvement in the reliability of the experiment's primary data collection engine. The implementation uses a zero-dependency Node.js test runner to ensure compatibility and ease of execution in the sandbox environment.
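
For reference, the tidy long-format row that these tests validate looks roughly like the object below. This is an illustrative sketch assembled from the test fixtures quoted later in this thread; the values are example fixtures, not real participant data.

// Illustrative tidy-format telemetry row (field names taken from the test
// assertions; values are example fixtures, not real data).
const exampleRow = {
  participant_id: "test-pid",
  experimental_condition: "control",   // or "ai_labeled"
  ai_familiarity_covariate: 3,
  trial_sequence: 1,
  ui_domain: "Test Domain",
  ai_badge_position: "none",           // e.g. "Layout B" in the AI-labeled condition
  user_selection: "Layout A",
  chose_target_layout: true,
  reaction_time_ms: 500,
  timestamp: Date.now()
};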

PR created automatically by Jules for task 5388953667962943430, started by @hashexplaindata

This commit adds a suite of unit tests for the `handleUserSelection` function
in `code/experiment.js`.

Key changes:
- Added `tests/experiment.test.js` with comprehensive test cases.
- Modified `code/experiment.js` to support Node.js exports and defensive global checks (sketched after this list).
- Created `package.json` with a test runner configuration.
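
The defensive pattern described above typically looks like the sketch below. This is a hypothetical reconstruction, not the PR's verbatim code: the IS_BROWSER flag and the placeholder comments are assumptions, while the exported names (STATE, CFG, handleUserSelection, TRIALS) come from the PR itself.

// Hypothetical sketch of the guard-and-export pattern in code/experiment.js.
const IS_BROWSER = typeof window !== "undefined" && typeof document !== "undefined";

if (IS_BROWSER) {
  // Browser-only setup: read query parameters and wire up the DOM.
  const params = new URLSearchParams(window.location.search);
  // ... attach event listeners, initialize trial UI, etc.
}

// Expose internals for Node-based tests without breaking the browser build.
if (typeof module !== "undefined" && module.exports) {
  module.exports = { STATE, CFG, handleUserSelection, TRIALS };
}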

The tests verify:
- STATE object mutation (isTrialActive, currentTrial).
- Accuracy of reaction time calculations using mocked performance.now (see the worked example after this list).
- Tidy Data payload structure for both control and experimental conditions.
- Proper handling of trial sequence and UI domain logging.
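
Concretely, with performance.now mocked to return a fixed 1000, the expected reaction times fall out by simple subtraction (a worked example using the fixtures from the test file quoted below):

// reaction_time_ms = performance.now() - STATE.trialStartTime
// Test Case 1: 1000 - 500 = 500 ms
// Test Case 2: 1000 - 800 = 200 ms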

Co-authored-by: hashexplaindata <221828969+hashexplaindata@users.noreply.github.com>
@google-labs-jules
Contributor

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly afterward. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

Copilot AI review requested due to automatic review settings March 12, 2026 20:08
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the reliability of the experiment's primary data collection engine by introducing robust unit tests for the handleUserSelection function. The changes ensure that critical data logging and state management logic are thoroughly validated across various experimental conditions, improving the overall integrity of the collected data.

Highlights

  • Unit Test Coverage: Comprehensive unit tests have been added for the handleUserSelection function, addressing a critical testing gap in the core telemetry logging.
  • Scenario Validation: The new tests validate standard user selections, AI-labeled conditions (including badge position logging), chose_target_layout boolean logic, Tidy Data Long Format payload integrity, and state machine transitions (trial incrementing and deactivation).
  • Testing Environment Setup: The implementation utilizes a zero-dependency Node.js test runner, ensuring compatibility and ease of execution within the sandbox environment. This required modifications to code/experiment.js to allow for server-side testing.
Changelog
  • code/experiment.js
    • Modified the initialization of URLSearchParams and DOM to be conditional, allowing the script to run in environments without a window or document object (e.g., Node.js for testing).
    • Exported key variables and functions (STATE, CFG, handleUserSelection, TRIALS) using module.exports to make them accessible for unit testing.
  • package.json
    • Added a new package.json file to define project metadata and include a test script for running the unit tests.
  • tests/experiment.test.js
    • Added a new file containing unit tests for the handleUserSelection function.
    • Implemented mocks for performance.now, Date.now, and setTimeout to ensure deterministic test execution.
    • Included test cases for standard control condition selection and AI-labeled condition selection, verifying data integrity and state changes.
Activity
  • PR created automatically by Jules for task 5388953667962943430, initiated by @hashexplaindata.

@gemini-code-assist (bot) left a comment

Code Review

This pull request adds unit tests for the handleUserSelection function, a critical part of the telemetry logging. To enable testing in a Node.js environment, the main experiment.js file has been updated to gracefully handle the absence of browser-specific globals like window and document, and to export the necessary components for testing. A package.json is also introduced to define the project and provide a convenient test script. The tests themselves cover key scenarios, but I've identified a significant improvement opportunity in their structure to ensure they are independent and robust, which is detailed in my comment.

Comment on lines +28 to +83
// Test Case 1: Standard selection in control condition
STATE.pid = "test-pid";
STATE.condition = "control";
STATE.covariate = 3;
STATE.currentTrial = 0;
STATE.results = [];
STATE.trialStartTime = 500;
STATE.isTrialActive = true;

const mockTrial = {
  domain: "Test Domain",
  target: "A"
};

handleUserSelection("A", mockTrial);

assert(STATE.isTrialActive === false, "Trial should be inactive after selection");
assert(STATE.currentTrial === 1, "currentTrial should increment");
assert(STATE.results.length === 1, "Results should have one entry");

const result = STATE.results[0];
assert(result.participant_id === "test-pid", "Incorrect participant_id");
assert(result.experimental_condition === "control", "Incorrect condition");
assert(result.ai_familiarity_covariate === 3, "Incorrect covariate");
assert(result.trial_sequence === 1, "Incorrect trial_sequence");
assert(result.ui_domain === "Test Domain", "Incorrect ui_domain");
assert(result.ai_badge_position === "none", "Incorrect ai_badge_position for control");
assert(result.user_selection === "Layout A", "Incorrect user_selection");
assert(result.chose_target_layout === true, "Should have chose_target_layout: true");
assert(result.reaction_time_ms === 500, `Incorrect reaction_time_ms: expected 500, got ${result.reaction_time_ms}`);
assert(result.timestamp === fixedDate, "Incorrect timestamp");

console.log("✅ Test Case 1 Passed");

// Test Case 2: Selection in AI condition (mismatch)
STATE.condition = "ai_labeled";
STATE.currentTrial = 5;
STATE.trialStartTime = 800;
STATE.isTrialActive = true;

const mockTrial2 = {
  domain: "Another Domain",
  target: "B"
};

handleUserSelection("A", mockTrial2);

assert(STATE.results.length === 2, "Results should have two entries");
const result2 = STATE.results[1];
assert(result2.trial_sequence === 6, "Incorrect trial_sequence for trial 2");
assert(result2.ai_badge_position === "Layout B", "Incorrect ai_badge_position for AI condition");
assert(result2.user_selection === "Layout A", "Incorrect user_selection for trial 2");
assert(result2.chose_target_layout === false, "Should have chose_target_layout: false");
assert(result2.reaction_time_ms === 200, "Incorrect reaction_time_ms for trial 2");

console.log("✅ Test Case 2 Passed");

Severity: high

The test cases are currently dependent on each other. For example, 'Test Case 2' relies on the STATE.results array being populated by 'Test Case 1'. This makes the test suite fragile; reordering or disabling one test could cause others to fail. Tests should be independent and self-contained.

Consider refactoring to reset the state before each test case. This can be done with a helper function that provides a clean STATE object for each scenario.
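
A minimal sketch of such a helper, using the same fields these tests set up (the name resetState and the default values are illustrative):

// Hypothetical helper: restore STATE to a known baseline before each test case.
function resetState(overrides = {}) {
  Object.assign(STATE, {
    pid: "test-pid",
    condition: "control",
    covariate: 3,
    currentTrial: 0,
    results: [],
    trialStartTime: 500,
    isTrialActive: true
  }, overrides);
}

// Usage: each case starts from a clean slate instead of inheriting prior mutations.
resetState();                                                                  // Test Case 1
resetState({ condition: "ai_labeled", currentTrial: 5, trialStartTime: 800 }); // Test Case 2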

Copilot AI left a comment

Pull request overview

Adds a lightweight Node-based unit test harness for the experiment telemetry logger (handleUserSelection) and makes code/experiment.js importable in non-browser environments so the tests can run.

Changes:

  • Added tests/experiment.test.js covering control vs AI-labeled logging, chose_target_layout, timestamps, reaction time, and state transitions.
  • Added a minimal package.json with a test script to run the Node test file.
  • Updated code/experiment.js to guard window/document usage and export key symbols (STATE, CFG, handleUserSelection, TRIALS) for tests.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.

tests/experiment.test.js: Adds Node-run unit tests validating telemetry row contents and state machine updates.
package.json: Introduces a test script to execute the new Node test runner.
code/experiment.js: Makes the browser-oriented script safe to require() in Node and exports functions/state for testing.

Comment on lines +13 to +16
// Mock performance.now
global.performance = {
  now: () => 1000 // Fixed time
};

Copilot AI Mar 12, 2026

The test overrides global.performance but never restores the original value. This can leak global state if additional tests are added later or if the test runner executes multiple suites in one process; capture the original global.performance and restore it in finally alongside Date.now and setTimeout.
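
A sketch of the suggested capture-and-restore (the originalPerformance name is illustrative):

// Capture the original before mocking so it can be restored afterwards.
const originalPerformance = global.performance;
global.performance = { now: () => 1000 };
try {
  // ... run the assertions ...
} finally {
  global.performance = originalPerformance; // restore alongside Date.now and setTimeout
}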

Comment on lines +10 to +98
async function runTests() {
  console.log("Running handleUserSelection tests...");

  // Mock performance.now
  global.performance = {
    now: () => 1000 // Fixed time
  };

  // Mock Date.now
  const fixedDate = Date.now();
  const originalDateNow = Date.now;
  Date.now = () => fixedDate;

  // Mock setTimeout
  const originalSetTimeout = global.setTimeout;
  global.setTimeout = (fn) => {}; // Do nothing to prevent loadNextTrial

  try {
    // Test Case 1: Standard selection in control condition
    STATE.pid = "test-pid";
    STATE.condition = "control";
    STATE.covariate = 3;
    STATE.currentTrial = 0;
    STATE.results = [];
    STATE.trialStartTime = 500;
    STATE.isTrialActive = true;

    const mockTrial = {
      domain: "Test Domain",
      target: "A"
    };

    handleUserSelection("A", mockTrial);

    assert(STATE.isTrialActive === false, "Trial should be inactive after selection");
    assert(STATE.currentTrial === 1, "currentTrial should increment");
    assert(STATE.results.length === 1, "Results should have one entry");

    const result = STATE.results[0];
    assert(result.participant_id === "test-pid", "Incorrect participant_id");
    assert(result.experimental_condition === "control", "Incorrect condition");
    assert(result.ai_familiarity_covariate === 3, "Incorrect covariate");
    assert(result.trial_sequence === 1, "Incorrect trial_sequence");
    assert(result.ui_domain === "Test Domain", "Incorrect ui_domain");
    assert(result.ai_badge_position === "none", "Incorrect ai_badge_position for control");
    assert(result.user_selection === "Layout A", "Incorrect user_selection");
    assert(result.chose_target_layout === true, "Should have chose_target_layout: true");
    assert(result.reaction_time_ms === 500, `Incorrect reaction_time_ms: expected 500, got ${result.reaction_time_ms}`);
    assert(result.timestamp === fixedDate, "Incorrect timestamp");

    console.log("✅ Test Case 1 Passed");

    // Test Case 2: Selection in AI condition (mismatch)
    STATE.condition = "ai_labeled";
    STATE.currentTrial = 5;
    STATE.trialStartTime = 800;
    STATE.isTrialActive = true;

    const mockTrial2 = {
      domain: "Another Domain",
      target: "B"
    };

    handleUserSelection("A", mockTrial2);

    assert(STATE.results.length === 2, "Results should have two entries");
    const result2 = STATE.results[1];
    assert(result2.trial_sequence === 6, "Incorrect trial_sequence for trial 2");
    assert(result2.ai_badge_position === "Layout B", "Incorrect ai_badge_position for AI condition");
    assert(result2.user_selection === "Layout A", "Incorrect user_selection for trial 2");
    assert(result2.chose_target_layout === false, "Should have chose_target_layout: false");
    assert(result2.reaction_time_ms === 200, "Incorrect reaction_time_ms for trial 2");

    console.log("✅ Test Case 2 Passed");

  } catch (error) {
    console.error("❌ Test Failed:");
    console.error(error.message);
    process.exit(1);
  } finally {
    // Restore mocks
    Date.now = originalDateNow;
    global.setTimeout = originalSetTimeout;
  }

  console.log("\nAll tests passed successfully!");
}

runTests();

Copilot AI Mar 12, 2026

runTests is declared async but the test body is entirely synchronous and the returned Promise is not handled. Either remove async or call runTests().catch(...) so future async assertions fail the process reliably instead of becoming unhandled rejections.
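
For example, the call at the bottom of the file could become (a sketch of the second option):

// Surface async failures instead of leaving them as unhandled rejections.
runTests().catch((error) => {
  console.error("❌ Unhandled test error:", error);
  process.exitCode = 1;
});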

Comment on lines +85 to +93
} catch (error) {
  console.error("❌ Test Failed:");
  console.error(error.message);
  process.exit(1);
} finally {
  // Restore mocks
  Date.now = originalDateNow;
  global.setTimeout = originalSetTimeout;
}

Copilot AI Mar 12, 2026

Calling process.exit(1) inside the catch terminates the process immediately, so the finally block won't run and the mocked globals won't be restored on failure. If you want cleanup to run, prefer setting process.exitCode = 1 and letting control flow reach finally (or rethrow after logging).
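
A sketch of the exit-code variant of the same block:

} catch (error) {
  console.error("❌ Test Failed:");
  console.error(error.message);
  process.exitCode = 1; // mark failure, but let control reach finally
} finally {
  // Restore mocks; this now runs even when a test fails.
  Date.now = originalDateNow;
  global.setTimeout = originalSetTimeout;
}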

@hashexplaindata
Owner (Author)

Closing: Fixes have already been incorporated directly into the master branch.

@google-labs-jules
Contributor

> Closing: Fixes have already been incorporated directly into the master branch.

Understood. Acknowledging that this work is now obsolete as fixes have been incorporated directly into the master branch. Stopping work on this task.
