Fix typo #1006
Conversation
Some of the proposed changes are incorrect. I'll accept the valid ones. Thanks.
@@ -21,7 +21,7 @@ For an intro to casting, see the *Basic Media Casting Sample*.
 **Scenario 1: Media Element Casting 101:**
 Press the *Cast* button next to the progress bar in the video element. Select the device you'd like to cast to.

-This is an example of the built in casting that comes with the media element transport controls. This will enable casting to Miracast, DLNA, and Bluetooth devices.
+This is an example of the built-in casting that comes with the media element transport controls. This will enable casting to Miracast, DLNA, and Bluetooth devices.
This change is okay.
-- Create an ad control to show display ads programatically
-- Create an ad control to show interstitial video ads programatically
+- Create an ad control to show display ads programmatically
+- Create an ad control to show interstitial video ads programmatically
This change is okay.
@@ -45,7 +45,7 @@ In the code-behind for this scenario, an **AudioGraph** is created with three nodes
 This scenario shows how to generate and route audio data into an audio graph from custom code. Press the *Generate Audio* button to start generating audio from custom code.
 Press *Stop* to stop the audio.

-In the code-behind for this scenario an **AudioGraph** is created with two nodes: an **AudioFrameInputNode** that represents the custom audio generation code and which is connected to an **AudioDeviceOutputNode** representing the default output device. The **AudioFrameInputNode** is created with the same encoding as the audio graph so that the generated audio data has the same format as the graph. Once the audio graph is started, the **QuantumStarted** event is raised by the audio graph whenever the custom code needs to provide more audio data. The custom code creates a new **AudioFrame** object in which the audio data is stored. The example accesses the underlying buffer of the **AudioFrame**, which requires an **unsafe** code block, and inserts values from a sine wav into the buffer. The **AudioFrame** containing the audio data is then added to the **AudioFrameInputNode** list of frames ready to be processed, which is then consumed by the audio graph and passed to the audio device output node.
+In the code-behind for this scenario an **AudioGraph** is created with two nodes: an **AudioFrameInputNode** that represents the custom audio generation code and which is connected to an **AudioDeviceOutputNode** representing the default output device. The **AudioFrameInputNode** is created with the same encoding as the audio graph so that the generated audio data has the same format as the graph. Once the audio graph is started, the **QuantumStarted** event is raised by the audio graph whenever the custom code needs to provide more audio data. The custom code creates a new **AudioFrame** object in which the audio data is stored. The example accesses the underlying buffer of the **AudioFrame**, which requires an **unsafe** code block, and inserts values from a sine wave into the buffer. The **AudioFrame** containing the audio data is then added to the **AudioFrameInputNode** list of frames ready to be processed, which is then consumed by the audio graph and passed to the audio device output node.
This change is okay (wav->wave).
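The quantum-fill logic the hunk above describes (the actual sample is C# using the UWP **AudioGraph** API and an **unsafe** buffer) can be sketched in a language-agnostic way. The snippet below is illustrative only; the function and parameter names are not the UWP API. It shows the core of what a **QuantumStarted**-style handler does: fill each requested frame with sine-wave samples, carrying the phase forward so consecutive frames are continuous:

```python
import math

def fill_sine_frame(sample_count, sample_rate=48000, frequency=440.0,
                    amplitude=0.3, start_sample=0):
    """Generate one 'quantum' of mono sine-wave samples.

    Illustrative sketch of the sample's QuantumStarted handler, which
    fills an AudioFrame buffer with sine-wave values (names here are
    hypothetical, not the actual UWP API).
    """
    frame = []
    for n in range(sample_count):
        # Absolute sample index keeps the phase continuous across frames.
        t = (start_sample + n) / sample_rate
        frame.append(amplitude * math.sin(2 * math.pi * frequency * t))
    return frame

# Each callback would request some number of samples per quantum;
# pass the running sample index as start_sample for the next frame.
quantum = fill_sine_frame(480)  # 10 ms at 48 kHz
```

In the real sample this buffer is written into an **AudioFrame**, which is then queued on the **AudioFrameInputNode** for the graph to consume.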
 The code-behind for this scenario uses just two nodes. An **AudioFileInputNode** and a **AudioDeviceOutputNode**. The effects are initialized and then added to the **EffectDefinitions** list of the file input node.

 **Scenario 6: Custom Effects:**
 This scenario demonstrates how to create an custom audio effect and then using it in an audio graph. Press the *Load File* button to select an audio file to play. Press *Start Graph* to begin playback of the file with the custom effect.

-The custom effect for this scenario is defined in a single file, CustomEffect.cs, that is included in its own project. The class implemented in this file, AudioEchoEffect, implements the **IBasicAudioEffect** interface which allows it to be used in an audio graph. The actual audio processing is implemented in the **ProcessFrame** method. The audio graph calls this method and passes in a **ProcessAudioFrameContext** object which provides access to **AudioFrame** objects representing the input to the effect and the output from the effect. The effect implements a simple echo by storing samples from the input frame in a buffer and then adding the samples previously stored in the buffer to the current input samples and then inserting thos values into the output frame buffer.
+The custom effect for this scenario is defined in a single file, CustomEffect.cs, that is included in its own project. The class implemented in this file, AudioEchoEffect, implements the **IBasicAudioEffect** interface which allows it to be used in an audio graph. The actual audio processing is implemented in the **ProcessFrame** method. The audio graph calls this method and passes in a **ProcessAudioFrameContext** object which provides access to **AudioFrame** objects representing the input to the effect and the output from the effect. The effect implements a simple echo by storing samples from the input frame in a buffer and then adding the samples previously stored in the buffer to the current input samples and then inserting those values into the output frame buffer.
This change is okay.
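The echo algorithm described in the hunk above (the real AudioEchoEffect is C# implementing **IBasicAudioEffect.ProcessFrame**) reduces to: keep a buffer of past input samples, and add each delayed sample back into the current one. As a hedged, language-agnostic sketch (illustrative names, not the UWP API):

```python
from collections import deque

def make_echo(delay_samples, decay=0.5):
    """Return a frame processor implementing a simple echo:
        out[n] = in[n] + decay * in[n - delay_samples]

    Illustrative sketch of the ProcessFrame logic in AudioEchoEffect:
    past input samples are held in a buffer and mixed into the
    current frame's samples; names here are hypothetical.
    """
    # Pre-fill with silence so the first delay_samples have no echo yet.
    history = deque([0.0] * delay_samples, maxlen=delay_samples)

    def process_frame(frame):
        out = []
        for s in frame:
            out.append(s + decay * history[0])  # history[0] is in[n - delay]
            history.append(s)                   # oldest sample drops off
        return out

    return process_frame

echo = make_echo(delay_samples=2, decay=0.5)
echo([1.0, 0.0, 0.0, 0.0])  # the impulse reappears, scaled, 2 samples later
```

Because the history buffer persists between calls, the echo correctly carries across frame boundaries, which is the same reason the C# effect keeps its buffer as instance state rather than recreating it per frame.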
The only ones that survived are the ones in AudioCreation. (Advertising is being archived.)
* 360VideoPlayback: Align configurations, reset shader between stages #791 #942
* Clipboard: Add history and roaming scenarios, improve delay-rendering, clipboard change reporting, error-handling.
* CustomSerialDeviceAccess: Fix RequestToSendXOnXOff #1085
* Holographic samples support HoloLens 2 device and emulator
* HolographicFaceTracking: Fix comment about NV12 #834
* HolographicVoiceInput: Fix release build
* PlayReady: Don't create container twice #1159
* RadioManager: Clarify required capabilities #1165
* SpeechRecognitionAndSynthesis: Clarify which scenarios require internet access #725, Update permission links #1002
* UserActivity: Fix protocol navigation #1176
* XamlMasterDetail: Fix navigation problems #344 #345
* SharedContent: Default.rd.xml fixes #950
* Miscellaneous typos: AdvancedCasting #1006, AudioCreation #1006, BluetoothAdvertisement #898, BluetoothRfcommChat #1139, Json #404, Nfc #1039, XamlBind #1169 #1182
* Add missing Line Display sample to TOC #704
* All C++ samples now build in both VS2017 and VS2019.
* All C++/WinRT samples upgraded to C++/WinRT 2.0 #1179
* New: Clipboard (C++/WinRT), JapanesePhoneticAnalysis (C++/WinRT)
* Archived: Advertising, CallerId, Clipboard (C++/CX and VB), CommunicationBlockAndFilter, PhoneCall
No description provided.