Conversation
This commit adds a suite of unit tests for the `handleUserSelection` function in `code/experiment.js`.

Key changes:
- Added `tests/experiment.test.js` with comprehensive test cases.
- Modified `code/experiment.js` to support Node.js exports and defensive global checks.
- Created `package.json` with a test runner configuration.

The tests verify:
- STATE object mutation (`isTrialActive`, `currentTrial`).
- Accuracy of reaction time calculations using a mocked `performance.now`.
- Tidy Data payload structure for both control and experimental conditions.
- Proper handling of trial sequence and UI domain logging.

Co-authored-by: hashexplaindata <221828969+hashexplaindata@users.noreply.github.com>
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode; when this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the reliability of the experiment's primary data collection engine by introducing robust unit tests for `handleUserSelection`.
Code Review
This pull request adds unit tests for the handleUserSelection function, a critical part of the telemetry logging. To enable testing in a Node.js environment, the main experiment.js file has been updated to gracefully handle the absence of browser-specific globals like window and document, and to export the necessary components for testing. A package.json is also introduced to define the project and provide a convenient test script. The tests themselves cover key scenarios, but I've identified a significant improvement opportunity in their structure to ensure they are independent and robust, which is detailed in my comment.
```js
// Test Case 1: Standard selection in control condition
STATE.pid = "test-pid";
STATE.condition = "control";
STATE.covariate = 3;
STATE.currentTrial = 0;
STATE.results = [];
STATE.trialStartTime = 500;
STATE.isTrialActive = true;

const mockTrial = {
  domain: "Test Domain",
  target: "A"
};

handleUserSelection("A", mockTrial);

assert(STATE.isTrialActive === false, "Trial should be inactive after selection");
assert(STATE.currentTrial === 1, "currentTrial should increment");
assert(STATE.results.length === 1, "Results should have one entry");

const result = STATE.results[0];
assert(result.participant_id === "test-pid", "Incorrect participant_id");
assert(result.experimental_condition === "control", "Incorrect condition");
assert(result.ai_familiarity_covariate === 3, "Incorrect covariate");
assert(result.trial_sequence === 1, "Incorrect trial_sequence");
assert(result.ui_domain === "Test Domain", "Incorrect ui_domain");
assert(result.ai_badge_position === "none", "Incorrect ai_badge_position for control");
assert(result.user_selection === "Layout A", "Incorrect user_selection");
assert(result.chose_target_layout === true, "Should have chose_target_layout: true");
assert(result.reaction_time_ms === 500, `Incorrect reaction_time_ms: expected 500, got ${result.reaction_time_ms}`);
assert(result.timestamp === fixedDate, "Incorrect timestamp");

console.log("✅ Test Case 1 Passed");

// Test Case 2: Selection in AI condition (mismatch)
STATE.condition = "ai_labeled";
STATE.currentTrial = 5;
STATE.trialStartTime = 800;
STATE.isTrialActive = true;

const mockTrial2 = {
  domain: "Another Domain",
  target: "B"
};

handleUserSelection("A", mockTrial2);

assert(STATE.results.length === 2, "Results should have two entries");
const result2 = STATE.results[1];
assert(result2.trial_sequence === 6, "Incorrect trial_sequence for trial 2");
assert(result2.ai_badge_position === "Layout B", "Incorrect ai_badge_position for AI condition");
assert(result2.user_selection === "Layout A", "Incorrect user_selection for trial 2");
assert(result2.chose_target_layout === false, "Should have chose_target_layout: false");
assert(result2.reaction_time_ms === 200, "Incorrect reaction_time_ms for trial 2");

console.log("✅ Test Case 2 Passed");
```
The test cases are currently dependent on each other. For example, 'Test Case 2' relies on the STATE.results array being populated by 'Test Case 1'. This makes the test suite fragile; reordering or disabling one test could cause others to fail. Tests should be independent and self-contained.
Consider refactoring to reset the state before each test case. This can be done with a helper function that provides a clean STATE object for each scenario.
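One way to structure this is a small helper that reassigns known defaults before each scenario. The sketch below is hypothetical: `resetState` is not part of the codebase, and the field names simply mirror the assignments in the test excerpt above; a stand-in `STATE` object is used so the snippet is self-contained.

```js
// Stand-in for the STATE object exported by code/experiment.js.
const STATE = {};

// Hypothetical helper: resets STATE to known defaults before each test
// case, with per-scenario overrides. Field names mirror the test setup.
function resetState(overrides = {}) {
  Object.assign(STATE, {
    pid: "test-pid",
    condition: "control",
    covariate: 3,
    currentTrial: 0,
    results: [],           // fresh array per reset, so counts start at 0
    trialStartTime: 500,
    isTrialActive: true
  }, overrides);
}

// Test Case 2 would then start from a clean slate instead of inheriting
// the results array populated by Test Case 1:
resetState({ condition: "ai_labeled", currentTrial: 5, trialStartTime: 800 });
```

Each test case then documents its own preconditions in the `resetState` call, and reordering or disabling a case no longer affects the others.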
Pull request overview
Adds a lightweight Node-based unit test harness for the experiment telemetry logger (handleUserSelection) and makes code/experiment.js importable in non-browser environments so the tests can run.
Changes:
- Added `tests/experiment.test.js` covering control vs AI-labeled logging, `chose_target_layout`, timestamps, reaction time, and state transitions.
- Added a minimal `package.json` with a `test` script to run the Node test file.
- Updated `code/experiment.js` to guard `window`/`document` usage and export key symbols (`STATE`, `CFG`, `handleUserSelection`, `TRIALS`) for tests.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| tests/experiment.test.js | Adds Node-run unit tests validating telemetry row contents and state machine updates. |
| package.json | Introduces a test script to execute the new Node test runner. |
| code/experiment.js | Makes the browser-oriented script safe to require() in Node and exports functions/state for testing. |
```js
// Mock performance.now
global.performance = {
  now: () => 1000 // Fixed time
};
```
The test overrides global.performance but never restores the original value. This can leak global state if additional tests are added later or if the test runner executes multiple suites in one process; capture the original global.performance and restore it in finally alongside Date.now and setTimeout.
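A minimal sketch of the suggested capture-and-restore, assuming the same `try`/`finally` structure the test file already uses:

```js
// Capture the original before mocking, mirroring the existing
// originalDateNow / originalSetTimeout pattern in the test file.
const originalPerformance = global.performance;
global.performance = { now: () => 1000 }; // fixed time for deterministic RT

try {
  // ... run test cases ...
  const rt = performance.now() - 500; // mocked clock makes this 500
} finally {
  global.performance = originalPerformance; // restore, even on failure
}
```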
```js
async function runTests() {
  console.log("Running handleUserSelection tests...");

  // Mock performance.now
  global.performance = {
    now: () => 1000 // Fixed time
  };

  // Mock Date.now
  const fixedDate = Date.now();
  const originalDateNow = Date.now;
  Date.now = () => fixedDate;

  // Mock setTimeout
  const originalSetTimeout = global.setTimeout;
  global.setTimeout = (fn) => {}; // Do nothing to prevent loadNextTrial

  try {
    // Test Case 1: Standard selection in control condition
    STATE.pid = "test-pid";
    STATE.condition = "control";
    STATE.covariate = 3;
    STATE.currentTrial = 0;
    STATE.results = [];
    STATE.trialStartTime = 500;
    STATE.isTrialActive = true;

    const mockTrial = {
      domain: "Test Domain",
      target: "A"
    };

    handleUserSelection("A", mockTrial);

    assert(STATE.isTrialActive === false, "Trial should be inactive after selection");
    assert(STATE.currentTrial === 1, "currentTrial should increment");
    assert(STATE.results.length === 1, "Results should have one entry");

    const result = STATE.results[0];
    assert(result.participant_id === "test-pid", "Incorrect participant_id");
    assert(result.experimental_condition === "control", "Incorrect condition");
    assert(result.ai_familiarity_covariate === 3, "Incorrect covariate");
    assert(result.trial_sequence === 1, "Incorrect trial_sequence");
    assert(result.ui_domain === "Test Domain", "Incorrect ui_domain");
    assert(result.ai_badge_position === "none", "Incorrect ai_badge_position for control");
    assert(result.user_selection === "Layout A", "Incorrect user_selection");
    assert(result.chose_target_layout === true, "Should have chose_target_layout: true");
    assert(result.reaction_time_ms === 500, `Incorrect reaction_time_ms: expected 500, got ${result.reaction_time_ms}`);
    assert(result.timestamp === fixedDate, "Incorrect timestamp");

    console.log("✅ Test Case 1 Passed");

    // Test Case 2: Selection in AI condition (mismatch)
    STATE.condition = "ai_labeled";
    STATE.currentTrial = 5;
    STATE.trialStartTime = 800;
    STATE.isTrialActive = true;

    const mockTrial2 = {
      domain: "Another Domain",
      target: "B"
    };

    handleUserSelection("A", mockTrial2);

    assert(STATE.results.length === 2, "Results should have two entries");
    const result2 = STATE.results[1];
    assert(result2.trial_sequence === 6, "Incorrect trial_sequence for trial 2");
    assert(result2.ai_badge_position === "Layout B", "Incorrect ai_badge_position for AI condition");
    assert(result2.user_selection === "Layout A", "Incorrect user_selection for trial 2");
    assert(result2.chose_target_layout === false, "Should have chose_target_layout: false");
    assert(result2.reaction_time_ms === 200, "Incorrect reaction_time_ms for trial 2");

    console.log("✅ Test Case 2 Passed");

  } catch (error) {
    console.error("❌ Test Failed:");
    console.error(error.message);
    process.exit(1);
  } finally {
    // Restore mocks
    Date.now = originalDateNow;
    global.setTimeout = originalSetTimeout;
  }

  console.log("\nAll tests passed successfully!");
}

runTests();
```
`runTests` is declared `async` but the test body is entirely synchronous and the returned Promise is not handled. Either remove `async` or call `runTests().catch(...)` so future async assertions fail the process reliably instead of becoming unhandled rejections.
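A sketch of the second option, handling the returned Promise at the call site (the body shown is a placeholder for the existing test code):

```js
async function runTests() {
  // ... existing synchronous test body; future await-ed assertions go here ...
}

// Handle the returned Promise so a rejected assertion fails the run
// instead of surfacing as an unhandled rejection.
runTests().catch((error) => {
  console.error("❌ Test Failed:");
  console.error(error.message);
  process.exitCode = 1;
});
```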
```js
} catch (error) {
  console.error("❌ Test Failed:");
  console.error(error.message);
  process.exit(1);
} finally {
  // Restore mocks
  Date.now = originalDateNow;
  global.setTimeout = originalSetTimeout;
}
```
Calling `process.exit(1)` inside the `catch` terminates the process immediately, so the `finally` block won't run and the mocked globals won't be restored on failure. If you want cleanup to run, prefer setting `process.exitCode = 1` and letting control flow reach `finally` (or rethrow after logging).
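A minimal demonstration of the `process.exitCode` approach, with a deliberately thrown error standing in for a failing assertion:

```js
let mocksRestored = false;

try {
  // ... assertions ...
  throw new Error("example failure"); // simulate a failing assertion
} catch (error) {
  console.error("❌ Test Failed:");
  console.error(error.message);
  process.exitCode = 1; // mark failure; process exits non-zero after cleanup
} finally {
  // Restore mocks here: this now runs on both success and failure.
  mocksRestored = true;
}
```

Unlike `process.exit(1)`, setting `process.exitCode` lets the current synchronous code (including `finally`) finish before the process exits with the recorded code.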
Closing: Fixes have already been incorporated directly into the master branch.

Understood. Acknowledging that this work is now obsolete, as the fixes have been incorporated directly into the master branch. Stopping work on this task.
🎯 What: The testing gap addressed: missing unit tests for the core telemetry logging function `handleUserSelection`.

📊 Coverage: What scenarios are now tested:

✨ Result: Significant improvement in the reliability of the experiment's primary data collection engine. The implementation uses a zero-dependency Node.js test runner to ensure compatibility and ease of execution in the sandbox environment.
PR created automatically by Jules for task 5388953667962943430 started by @hashexplaindata