
utteranceQueue isn't emitting to the data stream #17

Closed
zepumph opened this issue Mar 19, 2021 · 18 comments

@zepumph
Member

zepumph commented Mar 19, 2021

Since the work done in #14, I think this deserves its own side issue. I'll update the TODO to point to this issue.

@zepumph
Member Author

zepumph commented Mar 20, 2021

I wonder if instead of instrumenting the UtteranceQueue, we should instrument announcers, so that we can emit the text to the data stream?

@zepumph
Member Author

zepumph commented Jul 30, 2021

I wonder if instead of instrumenting the UtteranceQueue, we should instrument announcers, so that we can emit the text to the data stream?

I felt this way again while working on #18, and then again separately while working on #13. I think I'll experiment with that.

The announcers seem to be the important piece for knowing what was spoken; that output is what the data stream needs.

The queue may be important for state, if we want to be able to restore the queue to its previous length and order. Right now I feel like this doesn't actually need state support and is more transitory (like a button highlight). Maybe we won't feel that way for long, though.

@jessegreenberg, presumably we need to figure this out for Greenhouse, right? I'll add some priority to it and see if I can work on it.

@zepumph
Member Author

zepumph commented Aug 12, 2021

I'm a little bit ambivalent on this again, because I feel like UtteranceQueue should be the central location for text to go through, even when it is being processed through announceImmediately. Why do we need to instrument each individual announcer, when we could fulfill our goals by having the same lines use Utterance.getAlertText()? I would like to discuss this further.

Index: js/UtteranceQueue.js
===================================================================
diff --git a/js/UtteranceQueue.js b/js/UtteranceQueue.js
--- a/js/UtteranceQueue.js	(revision b64b4f424d37ca34b7c059414bd24a02751330f2)
+++ b/js/UtteranceQueue.js	(date 1628783252716)
@@ -384,11 +384,11 @@
       // phet-io event to the data stream
       // TODO: What to do about this? See https://github.com/phetsims/utterance-queue/issues/17
       // We cannot get the text yet, that needs to be done in announce
-      // this.phetioStartEvent( 'announced', { data: { utterance: text } } );
+      this.phetioStartEvent( 'announced', { data: { utterance: nextUtterance.getAlertText() } } );
 
       this.announcer.announce( nextUtterance, nextUtterance.announcerOptions );
 
-      //this.phetioEndEvent();
+      this.phetioEndEvent();
     }
   }
 
Index: js/Utterance.js
===================================================================
diff --git a/js/Utterance.js b/js/Utterance.js
--- a/js/Utterance.js	(revision b64b4f424d37ca34b7c059414bd24a02751330f2)
+++ b/js/Utterance.js	(date 1628783215659)
@@ -104,7 +104,7 @@
 
   /**
    * Get the string to alert, with no side effects
-   * @private
+   * @public (UtteranceQueue)
    * @returns {string}
    */
   getAlertText() {

@zepumph
Member Author

zepumph commented Aug 12, 2021

We are going to flip-flop again! After talking with @jessegreenberg, we think that Announcer should extend PhetioObject, and we can instrument it that way.

@zepumph
Member Author

zepumph commented Aug 13, 2021

For context, the announcer is the best place for this because there is further logic in the announcer that may keep an announcement from being announced.

Next for this issue, I'm going to instrument voicingManager and ariaHerald (by having Announcer extend PhetioObject but be Tandem.OPTIONAL).

Then we will discuss if we still want UtteranceQueue to be instrumented from there. I don't really see any value being added by the UtteranceQueue. @jessegreenberg and I spoke yesterday about how the queue is really just a tool for the announcer to use, and not the other way around. So it would make sense that the announcers will get instrumented.
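The reasoning above can be illustrated with a minimal, dependency-free sketch (plain JavaScript; the names `MiniQueue`, `MiniAnnouncer`, `addToBack`, and `flush` are illustrative stand-ins, not PhET's actual API). The key point: the queue may filter or collapse utterances, so "what was enqueued" is not the same as "what was announced", and only the announcer sees the true output.

```javascript
// Hypothetical sketch: the queue filters input; the announcer sees actual output.
class MiniAnnouncer {
  constructor() {
    this.announced = []; // stand-in for a PhET-iO data-stream record
  }
  announce( text ) {
    this.announced.push( text );
  }
}

class MiniQueue {
  constructor( announcer ) {
    this.announcer = announcer;
    this.queue = [];
  }
  addToBack( text ) {
    // collapse consecutive duplicates, standing in for utterance-queue's
    // real filtering/priority logic
    if ( this.queue[ this.queue.length - 1 ] !== text ) {
      this.queue.push( text );
    }
  }
  flush() {
    this.queue.forEach( text => this.announcer.announce( text ) );
    this.queue = [];
  }
}

const announcer = new MiniAnnouncer();
const queue = new MiniQueue( announcer );
queue.addToBack( 'slider moved' );
queue.addToBack( 'slider moved' ); // dropped by the queue, never announced
queue.addToBack( 'value reset' );
queue.flush();

console.log( announcer.announced ); // [ 'slider moved', 'value reset' ]
```

Instrumenting at the queue level would have recorded three utterances here; only two were actually presented to the user.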

@zepumph
Member Author

zepumph commented Aug 13, 2021

I could see this working, but I feel like instead we should perhaps just create a single Emitter on Announcer that is instrumented for PhET-iO. I'll try that next and see how I like it.

Announcer extends PhetioObject
Index: js/Announcer.js
===================================================================
diff --git a/js/Announcer.js b/js/Announcer.js
--- a/js/Announcer.js	(revision a6176e8ffd4a4ba4486d9af6b24a5ea8c50a48ed)
+++ b/js/Announcer.js	(date 1628874752507)
@@ -6,13 +6,26 @@
  * @author Michael Kauzmann (PhET Interactive Simulations)
  */
 
+import merge from '../../phet-core/js/merge.js';
+import PhetioObject from '../../tandem/js/PhetioObject.js';
+import Tandem from '../../tandem/js/Tandem.js';
+import IOType from '../../tandem/js/types/IOType.js';
 import utteranceQueueNamespace from './utteranceQueueNamespace.js';
 
-class Announcer {
+class Announcer extends PhetioObject {
+
+  constructor( options ) {
+    options = merge( {
+      phetioType: Announcer.AnnouncerIO,
+      tandem: Tandem.OPTIONAL
+    }, options );
+    super( options );
+  }
 
   /**
    * Announce an alert, setting textContent to an aria-live element.
-   * @public
+   *
+   * @private - do not call directly
    *
    * @param {Utterance} utterance - Utterance with content to announce
    * @param {Object} [options] - specify support for options particular to this announcer's features.
@@ -21,7 +34,27 @@
   announce( utterance, options ) {
     throw new Error( 'announce() must be overridden by subtype' );
   }
+
+  /**
+   * @public (UtteranceQueue)
+   * @param {Utterance} utterance
+   * @param {Object} [options]
+   */
+  sendToAnnounce( utterance, options ) {
+
+    // PhET-iO event to the data stream
+    this.phetioStartEvent( 'announced', { data: { utterance: utterance.getAlertText() } } );
+
+    this.announce( utterance, options );
+
+    this.phetioEndEvent();
+  }
 }
+
+Announcer.AnnouncerIO = new IOType( 'AnnouncerIO', {
+  valueType: Announcer,
+  events: [ 'announced' ]
+} );
 
 utteranceQueueNamespace.register( 'Announcer', Announcer );
 export default Announcer;
\ No newline at end of file
Index: js/UtteranceQueue.js
===================================================================
diff --git a/js/UtteranceQueue.js b/js/UtteranceQueue.js
--- a/js/UtteranceQueue.js	(revision a6176e8ffd4a4ba4486d9af6b24a5ea8c50a48ed)
+++ b/js/UtteranceQueue.js	(date 1628874985343)
@@ -393,7 +393,7 @@
 
     // only speak the utterance if not muted and the Utterance predicate returns true
     if ( !this._muted && utterance.predicate() ) {
-      this.announcer.announce( utterance, utterance.announcerOptions );
+      this.announcer.sendToAnnounce( utterance, utterance.announcerOptions );
     }
   }
 
Index: js/Utterance.js
===================================================================
diff --git a/js/Utterance.js b/js/Utterance.js
--- a/js/Utterance.js	(revision a6176e8ffd4a4ba4486d9af6b24a5ea8c50a48ed)
+++ b/js/Utterance.js	(date 1628874752484)
@@ -104,7 +104,7 @@
 
   /**
    * Get the string to alert, with no side effects
-   * @private
+   * @public
    * @returns {string}
    */
   getAlertText() {
Index: js/AriaHerald.js
===================================================================
diff --git a/js/AriaHerald.js b/js/AriaHerald.js
--- a/js/AriaHerald.js	(revision a6176e8ffd4a4ba4486d9af6b24a5ea8c50a48ed)
+++ b/js/AriaHerald.js	(date 1628874841397)
@@ -65,8 +65,12 @@
 
 class AriaHerald extends Announcer {
 
-  constructor() {
-    super();
+  /**
+   *
+   * @param {Object} [options]
+   */
+  constructor( options ) {
+    super( options );
 
     // @private - index of current aria-live element to use, updated every time an event triggers
     this.politeElementIndex = 0;

@zepumph
Member Author

zepumph commented Aug 13, 2021

I spoke with @samreid, we will proceed with this patch over the above:

Index: utterance-queue/js/Announcer.js
===================================================================
diff --git a/utterance-queue/js/Announcer.js b/utterance-queue/js/Announcer.js
--- a/utterance-queue/js/Announcer.js	(revision a6176e8ffd4a4ba4486d9af6b24a5ea8c50a48ed)
+++ b/utterance-queue/js/Announcer.js	(date 1628875687459)
@@ -6,12 +6,31 @@
  * @author Michael Kauzmann (PhET Interactive Simulations)
  */
 
+import Emitter from '../../axon/js/Emitter.js';
+import merge from '../../phet-core/js/merge.js';
+import Tandem from '../../tandem/js/Tandem.js';
+import StringIO from '../../tandem/js/types/StringIO.js';
 import utteranceQueueNamespace from './utteranceQueueNamespace.js';
 
 class Announcer {
 
+  constructor( options ) {
+
+    options = merge( {
+      tandem: Tandem.OPTIONAL
+    }, options );
+
+    // @public - added for PhET-iO support, for a record of every announcement that comes through this Announcer.
+    this.announcedEmitter = new Emitter( {
+      tandem: options.tandem.createTandem( 'announcedEmitter' ),
+      parameters: [ { name: 'utterance', phetioType: StringIO } ],
+      phetioDocumentation: 'Emits the announced string when announcing it to this Announcer\'s implemented output'
+    } );
+  }
+
   /**
-   * Announce an alert, setting textContent to an aria-live element.
+   * Announce an alert, setting textContent to an aria-live element. Should call onAnnounce inside this implementation for
+   * PhET-iO support
    * @public
    *
    * @param {Utterance} utterance - Utterance with content to announce
@@ -21,6 +40,15 @@
   announce( utterance, options ) {
     throw new Error( 'announce() must be overridden by subtype' );
   }
+
+  /**
+   * @public
+   * @param {string} utteranceText
+   */
+  onAnnounce( utteranceText ) {
+    assert && assert( typeof utteranceText === 'string' );
+    this.announcedEmitter.emit( utteranceText );
+  }
 }
 
 utteranceQueueNamespace.register( 'Announcer', Announcer );
Index: scenery/js/accessibility/voicing/voicingManager.js
===================================================================
diff --git a/scenery/js/accessibility/voicing/voicingManager.js b/scenery/js/accessibility/voicing/voicingManager.js
--- a/scenery/js/accessibility/voicing/voicingManager.js	(revision 58e9c9834947cc2c59e6b82c926aa3faa04e97a5)
+++ b/scenery/js/accessibility/voicing/voicingManager.js	(date 1628877264833)
@@ -19,6 +19,7 @@
 import Range from '../../../../dot/js/Range.js';
 import merge from '../../../../phet-core/js/merge.js';
 import stripEmbeddingMarks from '../../../../phet-core/js/stripEmbeddingMarks.js';
+import Tandem from '../../../../tandem/js/Tandem.js';
 import Announcer from '../../../../utterance-queue/js/Announcer.js';
 import Utterance from '../../../../utterance-queue/js/Utterance.js';
 import scenery from '../../scenery.js';
@@ -26,8 +27,8 @@
 import KeyboardUtils from '../KeyboardUtils.js';
 
 class VoicingManager extends Announcer {
-  constructor() {
-    super();
+  constructor( options ) {
+    super( options );
 
     // @public {null|SpeechSynthesisVoice}
     this.voiceProperty = new Property( null );
@@ -289,6 +290,8 @@
         // remove from list after speaking
         const index = this.stepTimerListeners.indexOf( stepTimerListener );
         this.stepTimerListeners.splice( index, 1 );
+
+        this.announcedEmitter.emit( stringToSpeak );
       }, 250 );
       this.stepTimerListeners.push( stepTimerListener );
     }
@@ -370,7 +373,10 @@
   }
 }
 
-const voicingManager = new VoicingManager();
+// TODO: this will add a voicingManager to all sims, even if they don't support voicing. Is that ok? https://github.com/phetsims/utterance-queue/issues/17
+const voicingManager = new VoicingManager( {
+  tandem: Tandem.GENERAL_VIEW.createTandem( 'voicingManager' )
+} );
 
 scenery.register( 'voicingManager', voicingManager );
 export default voicingManager;
\ No newline at end of file
Index: utterance-queue/js/AriaHerald.js
===================================================================
diff --git a/utterance-queue/js/AriaHerald.js b/utterance-queue/js/AriaHerald.js
--- a/utterance-queue/js/AriaHerald.js	(revision a6176e8ffd4a4ba4486d9af6b24a5ea8c50a48ed)
+++ b/utterance-queue/js/AriaHerald.js	(date 1628876789373)
@@ -65,8 +65,8 @@
 
 class AriaHerald extends Announcer {
 
-  constructor() {
-    super();
+  constructor( options ) {
+    super( options );
 
     // @private - index of current aria-live element to use, updated every time an event triggers
     this.politeElementIndex = 0;
@@ -112,8 +112,10 @@
         this.assertiveElementIndex = ( this.assertiveElementIndex + 1 ) % this.assertiveElements.length;
       }
       else {
-        assert && assert( false, 'unsupported aria live prioirity' );
+        assert && assert( false, 'unsupported aria live priority' );
       }
+
+      this.announcedEmitter.emit( textContent );
     } );
 
     // increment index so the next AriaHerald instance has different ids for its elements.
Index: scenery/js/display/Display.js
===================================================================
diff --git a/scenery/js/display/Display.js b/scenery/js/display/Display.js
--- a/scenery/js/display/Display.js	(revision 58e9c9834947cc2c59e6b82c926aa3faa04e97a5)
+++ b/scenery/js/display/Display.js	(date 1628875766250)
@@ -333,7 +333,9 @@
     this.setBackgroundColor( options.backgroundColor );
 
     // @public {UtteranceQueue} - data structure for managing aria-live alerts the this Display instance
-    const ariaHerald = new AriaHerald();
+    const ariaHerald = new AriaHerald( {
+      tandem: options.tandem.createTandem( 'ariaHerald' )
+    } );
     this.utteranceQueue = new UtteranceQueue( ariaHerald, {
       implementAsSkeleton: !this._accessible,
       tandem: options.tandem.createTandem( 'utteranceQueue' )

@zepumph
Member Author

zepumph commented Aug 13, 2021

There was one question in the above patch that we didn't get to. @samreid is it alright to instrument the voicingUtteranceQueue as a singleton? This will then be added to every sim, even if it doesn't support voicing. Does that seem alright to you? I think it is alright, but I understand if we should discuss this further.

@zepumph zepumph assigned samreid and unassigned zepumph Aug 13, 2021
@zepumph
Member Author

zepumph commented Aug 16, 2021

Not sure if this is any different from the most recent one:

Index: utterance-queue/js/Announcer.js
===================================================================
diff --git a/utterance-queue/js/Announcer.js b/utterance-queue/js/Announcer.js
--- a/utterance-queue/js/Announcer.js	(revision a6176e8ffd4a4ba4486d9af6b24a5ea8c50a48ed)
+++ b/utterance-queue/js/Announcer.js	(date 1628879617957)
@@ -6,12 +6,31 @@
  * @author Michael Kauzmann (PhET Interactive Simulations)
  */
 
+import Emitter from '../../axon/js/Emitter.js';
+import merge from '../../phet-core/js/merge.js';
+import Tandem from '../../tandem/js/Tandem.js';
+import StringIO from '../../tandem/js/types/StringIO.js';
 import utteranceQueueNamespace from './utteranceQueueNamespace.js';
 
 class Announcer {
 
+  constructor( options ) {
+
+    options = merge( {
+      tandem: Tandem.OPTIONAL
+    }, options );
+
+    // @public - added for PhET-iO support, for a record of every announcement that comes through this Announcer.
+    this.announcedEmitter = new Emitter( {
+      tandem: options.tandem.createTandem( 'announcedEmitter' ),
+      parameters: [ { name: 'utterance', phetioType: StringIO } ],
+      phetioDocumentation: 'Emits the announced string when announcing it to this Announcer\'s implemented output'
+    } );
+  }
+
   /**
-   * Announce an alert, setting textContent to an aria-live element.
+   * Announce an alert, setting textContent to an aria-live element. Should call onAnnounce inside this implementation for
+   * PhET-iO support
    * @public
    *
    * @param {Utterance} utterance - Utterance with content to announce
@@ -21,6 +40,15 @@
   announce( utterance, options ) {
     throw new Error( 'announce() must be overridden by subtype' );
   }
+
+  /**
+   * @public
+   * @param {string} utteranceText
+   */
+  onAnnounce( utteranceText ) {
+    assert && assert( typeof utteranceText === 'string' );
+    this.announcedEmitter.emit( utteranceText );
+  }
 }
 
 utteranceQueueNamespace.register( 'Announcer', Announcer );
Index: scenery/js/accessibility/voicing/voicingManager.js
===================================================================
diff --git a/scenery/js/accessibility/voicing/voicingManager.js b/scenery/js/accessibility/voicing/voicingManager.js
--- a/scenery/js/accessibility/voicing/voicingManager.js	(revision 58e9c9834947cc2c59e6b82c926aa3faa04e97a5)
+++ b/scenery/js/accessibility/voicing/voicingManager.js	(date 1628879644776)
@@ -19,6 +19,7 @@
 import Range from '../../../../dot/js/Range.js';
 import merge from '../../../../phet-core/js/merge.js';
 import stripEmbeddingMarks from '../../../../phet-core/js/stripEmbeddingMarks.js';
+import Tandem from '../../../../tandem/js/Tandem.js';
 import Announcer from '../../../../utterance-queue/js/Announcer.js';
 import Utterance from '../../../../utterance-queue/js/Utterance.js';
 import scenery from '../../scenery.js';
@@ -26,8 +27,8 @@
 import KeyboardUtils from '../KeyboardUtils.js';
 
 class VoicingManager extends Announcer {
-  constructor() {
-    super();
+  constructor( options ) {
+    super( options );
 
     // @public {null|SpeechSynthesisVoice}
     this.voiceProperty = new Property( null );
@@ -289,6 +290,8 @@
         // remove from list after speaking
         const index = this.stepTimerListeners.indexOf( stepTimerListener );
         this.stepTimerListeners.splice( index, 1 );
+
+        this.announcedEmitter.emit( stringToSpeak );
       }, 250 );
       this.stepTimerListeners.push( stepTimerListener );
     }
@@ -370,7 +373,9 @@
   }
 }
 
-const voicingManager = new VoicingManager();
+const voicingManager = new VoicingManager( {
+  tandem: Tandem.GENERAL_VIEW.createTandem( 'voicingManager' )
+} );
 
 scenery.register( 'voicingManager', voicingManager );
 export default voicingManager;
\ No newline at end of file
Index: utterance-queue/js/AriaHerald.js
===================================================================
diff --git a/utterance-queue/js/AriaHerald.js b/utterance-queue/js/AriaHerald.js
--- a/utterance-queue/js/AriaHerald.js	(revision a6176e8ffd4a4ba4486d9af6b24a5ea8c50a48ed)
+++ b/utterance-queue/js/AriaHerald.js	(date 1628879617972)
@@ -65,8 +65,8 @@
 
 class AriaHerald extends Announcer {
 
-  constructor() {
-    super();
+  constructor( options ) {
+    super( options );
 
     // @private - index of current aria-live element to use, updated every time an event triggers
     this.politeElementIndex = 0;
@@ -112,8 +112,10 @@
         this.assertiveElementIndex = ( this.assertiveElementIndex + 1 ) % this.assertiveElements.length;
       }
       else {
-        assert && assert( false, 'unsupported aria live prioirity' );
+        assert && assert( false, 'unsupported aria live priority' );
       }
+
+      this.announcedEmitter.emit( textContent );
     } );
 
     // increment index so the next AriaHerald instance has different ids for its elements.
Index: scenery/js/display/Display.js
===================================================================
diff --git a/scenery/js/display/Display.js b/scenery/js/display/Display.js
--- a/scenery/js/display/Display.js	(revision 58e9c9834947cc2c59e6b82c926aa3faa04e97a5)
+++ b/scenery/js/display/Display.js	(date 1628879617957)
@@ -333,7 +333,9 @@
     this.setBackgroundColor( options.backgroundColor );
 
     // @public {UtteranceQueue} - data structure for managing aria-live alerts the this Display instance
-    const ariaHerald = new AriaHerald();
+    const ariaHerald = new AriaHerald( {
+      tandem: options.tandem.createTandem( 'ariaHerald' )
+    } );
     this.utteranceQueue = new UtteranceQueue( ariaHerald, {
       implementAsSkeleton: !this._accessible,
       tandem: options.tandem.createTandem( 'utteranceQueue' )

@samreid
Member

samreid commented Aug 16, 2021

There was one question in the above patch that we didn't get to. @samreid is it alright to instrument the voicingUtteranceQueue as a singleton? This will then be added to every sim, even if it doesn't support voicing. Does that seem alright to you?

Is there a way to instrument it as a singleton, but leave it uninstrumented for sims that don't support voicing? I'm concerned that, while adding a voicingUtteranceQueue to every sim is among our long-term goals, going through an intermediate phase where some sims have a queue that does nothing may seem broken.

@zepumph
Member Author

zepumph commented Aug 30, 2021

@samreid, I think this warrants a sync discussion together. There is nuance and history that I would most prefer to share together. Would you send me a calendar invite?

@samreid
Member

samreid commented Sep 1, 2021

@zepumph can you please summarize our meeting?

@zepumph
Member Author

zepumph commented Feb 16, 2022

@zepumph can you please summarize our meeting?

That is a great question! I don't think I can at this point. That said, I just ran into this over in #61. I think it is worth recognizing that the purpose of utterance-queue instrumentation was to know what was coming out of the sim from this module, and at this point Announcer is much better set up to support that. I recommend removing the UtteranceQueue instrumentation and instrumenting Announcer instead. Then we can instrument the announcementCompleteEmitter, and we will get data-stream output and the ability to add listeners.

I don't think this will be challenging, and it will unblock this issue.
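A minimal sketch of the listener pattern this enables (plain JavaScript, with a stand-in Emitter; PhET's axon Emitter and the real announcementCompleteEmitter parameter list may differ, so treat the names and signatures here as assumptions):

```javascript
// Hypothetical sketch: tapping a completed-announcement emitter for a
// data-stream-style record. Not PhET's actual implementation.
class Emitter {
  constructor() {
    this.listeners = [];
  }
  addListener( listener ) {
    this.listeners.push( listener );
  }
  emit( ...args ) {
    // copy so listeners added during emit don't affect this notification
    this.listeners.slice().forEach( listener => listener( ...args ) );
  }
}

class Announcer {
  constructor() {
    // fires once an announcement has actually completed
    this.announcementCompleteEmitter = new Emitter();
  }
  announce( text ) {
    // a real subclass would speak or update an aria-live element here
    this.announcementCompleteEmitter.emit( text );
  }
}

const dataStream = [];
const announcer = new Announcer();
announcer.announcementCompleteEmitter.addListener( text => dataStream.push( text ) );

announcer.announce( 'Greenhouse effect increased' );
console.log( dataStream ); // [ 'Greenhouse effect increased' ]
```

Because the emitter fires from the announcer side, a listener added this way records only what was actually presented, which is the property the instrumentation is after.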

@zepumph
Member Author

zepumph commented Feb 16, 2022

Ok. I was able to get Announcer.announcementCompleteEmitter instrumented, which to me feels much better than instrumenting UtteranceQueue.

This has been annoying me for some time, but I'm glad I waited because all of @jessegreenberg's awesome work with refactoring Announcer has made this tremendously nice and easy.

For review:

  • Expanding from @samreid's idea over in https://github.com/phetsims/phet-io/issues/1584#issuecomment-614068438, I am always instrumenting these, even when these features aren't supported by the simulation. For example: naturalSelection.global.view.voicingManager.announcementCompleteEmitter.
  • Note that the phetioIDs for aria-live and web speech are not parallel. There is just a single global voicingManager, but there is an ariaLiveAnnouncer for each display. That is because of implementation needs and concerns. I hope it is alright!

I asked on slack who may be best to review this, but didn't get a response, so I'll start with @arouinfar for general studio tree structure and data stream.

To test and explore:

  • Note the phetioID ending in voicingManager.announcementCompleteEmitter, and the same with ariaLiveAnnouncer.announcementCompleteEmitter.
  • See them in studio, whether or not the sim supports description or voicing.
  • Run the sim with ?phetioConsoleLog=colorized&phetioEmitHighFrequencyEvents=false&voicingInitiallyEnabled (either standalone or studio is fine) and search in the console for the output of announcementCompleteEmitter.
  • Testing Ratio and Proportion makes sense for voicing and description, since it has both.

@zepumph
Member Author

zepumph commented Feb 16, 2022

Blocking until it gets reviewed.

@arouinfar

@zepumph I reviewed the things you listed in RaP and they seem to be as described. I really don't know what a client would want or expect here. One question I had was whether or not the announcementCompleteEmitter should be read-only. It seems like these things should only emit based on the sim state, not be triggered forcibly by the client.

@arouinfar arouinfar removed their assignment Feb 16, 2022
@arouinfar

I don't have the appropriate permissions in this repo so I can't reassign @zepumph.

@zepumph
Member Author

zepumph commented Feb 16, 2022

I marked these as phetioReadOnly: true. Very good idea!

I really don't know what a client would want or expect here.

I think the primary goal for me is to have a way to tap into the inputs and outputs of the simulation. We have architecture set up to track all inputs (mouse/touch/keyboard) and many outputs (model changes, view changes, visual changes such as via a screenshot). This seemed like a vital output that was not being conveyed. I can think of any number of research questions revolving around "what did the sim present to the user?" in which this would be an important piece.

@zepumph zepumph closed this as completed Feb 16, 2022